A Lidar system includes a photodetector, or an array of photodetectors. Light is emitted into a field of view of the photodetector. The photodetector detects light that is reflected by an object in the field of view. For example, a flash Lidar system emits pulses of light, e.g., laser light, into essentially the entire field of view. The detection of reflected light is used to generate a 3D environmental map of the surrounding environment. The time of flight of the reflected photon detected by the photodetector is used to determine the distance of the object that reflected the light.
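As a simple illustration of the time-of-flight relationship (a minimal sketch, not the specific implementation of any particular Lidar system described herein), the round-trip travel time of a detected photon can be converted to a range as follows; the function name and example value are illustrative only.

```python
# Sketch: converting a measured photon time of flight to a range estimate.
# The division by 2 accounts for the round trip (emitter -> object -> detector).
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def time_of_flight_to_range(tof_seconds: float) -> float:
    """Return the one-way distance (in meters) to the reflecting object."""
    return SPEED_OF_LIGHT_M_PER_S * tof_seconds / 2.0

# Example: a photon returning about 66.7 ns after emission corresponds to roughly 10 m.
print(time_of_flight_to_range(66.7e-9))  # ~10.0
```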
The Lidar system may be mounted on a vehicle to detect objects in the environment surrounding the vehicle and to detect distances of those objects for environmental mapping. The output of the Lidar system may be used, for example, to autonomously or semi-autonomously control operation of the vehicle, e.g., propulsion, braking, steering, etc. Specifically, the system may be a component of or in communication with an advanced driver-assistance system (ADAS) of the vehicle.
The photodetector may be a single-photon avalanche diode (SPAD). The SPAD is a semiconductor device having a p-n junction that is reverse biased (herein referred to as “bias”) at a voltage that exceeds the breakdown voltage of the p-n junction. The bias voltage is at a magnitude such that a single photon injected into the depletion layer triggers a self-sustaining avalanche, which produces a readily-detectable avalanche current. The leading edge of the avalanche current indicates the arrival time of the detected photon.
After the initiation of the avalanche, the SPAD does not detect additional photons and thus must be reset by quenching the avalanche current. Specifically, the SPAD is connected to a quenching circuit that senses the leading edge of the avalanche current, quenches the avalanche current by lowering the bias voltage to the breakdown voltage, and resets the SPAD to the operative level.
The quenching can be a passive quench or an active quench. During quenching, the SPAD does not detect a returned photon and thus experiences dead time. This dead time can be many nanoseconds for passive quenching and can be 5-10 ns for active quenching. A dead time of 5 ns equates to approximately 1.5 m of detection range (based on the speed of light traveling to and from an object).
With reference to the Figures, wherein like numerals indicate like parts throughout the several views, a Lidar system 10 includes a focal-plane array (FPA) 12 having a plurality of pixels 14. Each pixel 14 includes a first single-photon avalanche diode (SPAD) 16 and a second SPAD 18. The Lidar system 10 operates by, for each of a plurality of the pixels 14, applying a bias voltage to the first SPAD 16 of the pixel 14 while no bias voltage or low bias voltage is applied to the second SPAD 18 of the pixel 14, then removing the bias voltage from the first SPAD 16 of the pixel 14 and applying a bias voltage to the second SPAD 18 of the pixel 14 while no bias voltage or low bias voltage is applied to the first SPAD 16 of the pixel 14, and then recording detection of a photon by the first SPAD 16 of the pixel 14 and detection of a photon by the second SPAD 18 of the pixel 14 in a common memory bank 34 of the pixel 14.
Since each pixel 14 includes at least two SPADs, i.e., the first SPAD 16 and the second SPAD 18, and the bias voltage is applied at different times to the first SPAD 16 and the second SPAD 18 of the pixel 14, one of the SPADs may be operable (i.e., biased) to detect a returned photon while the other SPAD is inoperable (i.e., not biased or biased at a low voltage, i.e., below Geiger mode) to detect a returned photon. Accordingly, in an example in which the first SPAD 16 is operable and detects a returned photon, the second SPAD 18 is available to be operated while the first SPAD 16 is in dead time due to the avalanche and subsequent quench of the first SPAD 16 (hereinafter referred to as “quench”). Specifically, the second SPAD 18 is biased so that the second SPAD 18 is operable to detect a returned photon during the dead time of the first SPAD 16 while the first SPAD 16 is quenched. Likewise, after the first SPAD 16 is quenched, the first SPAD 16 is available to be operated when the second SPAD 18 is quenched after detection of a returned photon by the second SPAD 18. This reduces or eliminates the dead time of the pixel 14. The separate operation of the SPADs of the pixel 14 at different times also reduces the chance of a neighboring SPAD causing a false signal since the neighboring SPAD is off or essentially off. As described further below, the pixel 14 may include more than two SPADs, with each of the SPADs being alternately biased and/or groups of the SPADs of the pixel 14 being alternately biased.
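The handoff between the two SPADs of a pixel can be illustrated with the following minimal conceptual sketch; it is not a circuit-level implementation, and the Spad and Pixel names, the timestamp interface, and the list-based memory bank are illustrative assumptions only.

```python
# Conceptual sketch: a two-SPAD pixel in which the armed (Geiger-mode) SPAD hands
# off to the other SPAD as soon as it fires, so the pixel has little or no dead time.
from dataclasses import dataclass, field

@dataclass
class Spad:
    armed: bool = False              # True when biased above breakdown (Geiger mode)

@dataclass
class Pixel:
    spads: list = field(default_factory=lambda: [Spad(armed=True), Spad()])
    memory_bank: list = field(default_factory=list)  # common to all SPADs of the pixel
    active: int = 0                                  # index of the currently armed SPAD

    def on_photon(self, timestamp: float) -> None:
        """Record the detection and immediately arm the other SPAD while this one quenches."""
        self.memory_bank.append((self.active, timestamp))  # shared memory bank of the pixel
        self.spads[self.active].armed = False              # bias removed; this SPAD quenches
        self.active = (self.active + 1) % len(self.spads)
        self.spads[self.active].armed = True               # next SPAD biased into Geiger mode

# Example: two detections from alternating SPADs land in the same memory bank.
p = Pixel()
p.on_photon(12.5e-9)
p.on_photon(40.0e-9)
print(p.memory_bank)  # [(0, 1.25e-08), (1, 4e-08)]
```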
The Lidar system 10 may be a solid-state Lidar system. In such an example, the Lidar system 10 is stationary relative to the vehicle 20. For example, the Lidar system 10 may include a casing 22 (shown in
As a solid-state Lidar system, the Lidar system 10 may be a flash Lidar system. In such an example, the Lidar system 10 emits pulses of light into the field of illumination FOI. More specifically, the Lidar system 10 may be a 3D flash Lidar system that generates a 3D environmental map of the surrounding environment, as shown in part in
The Lidar system 10 includes a light-emitting system 24, a light-receiving system 26, and a computer 28 that controls the light-emitting system 24 and the light-receiving system 26. The FPA 12 is a component of the light-receiving system 26 of the Lidar system 10 as discussed below.
In such an example, the Lidar system 10 is a unit. Specifically, the Lidar system 10 may include the casing 22 that supports the light-emitting system 24 and the light-receiving system 26. The casing 22, for example, may be plastic or metal and may protect the other components of the Lidar system 10 from environmental precipitation, dust, etc. In the alternative to the Lidar system 10 being a unit, components of the Lidar system 10, e.g., the light-emitting system 24 and the light-receiving system 26, may be separate and disposed at different locations of the vehicle 20. The Lidar system 10 may include mechanical attachment features to attach the casing 22 to the vehicle 20 and may include electronic connections to connect to and communicate with electronic systems of the vehicle 20, e.g., components of the ADAS.
The Lidar system 10 includes an outer optical fascia (not numbered), which may be referred to as a “window,” that allows light to pass through, e.g., light generated by the light-emitting system 24 exits the Lidar system 10 and/or light from the environment enters the Lidar system 10. The outer optical fascia protects an interior of the Lidar system 10 from environmental conditions such as dust, dirt, water, etc. The outer optical fascia is typically formed of a material, e.g., glass or plastic, that is transparent or semi-transparent with respect to the wavelength of the emitted light. The outer optical fascia may be opaque at other wavelengths, which may assist in the overall reduction of ambient light. The outer optical fascia may extend from the casing and/or may be attached to the casing.
The Lidar system 10 includes a light emitter 30 that emits shots, i.e., pulses, of light into the field of illumination FOI for detection by the light-receiving system 26 when the light is reflected by an object in the field of view FOV to return photons to the light-receiving system 26. The light-receiving system 26 has a field of view (hereinafter “FOV”) that overlaps the field of illumination FOI and receives light reflected by surfaces of objects, buildings, the road, etc., in the FOV. The light emitter 30 may be in electrical communication with the computer 28, e.g., to provide the shots in response to commands from the computer 28.
The light-emitting system 24 may include one or more light emitters 30 and optical components such as a lens package, lens crystal, pump delivery optics, etc. The optical components, e.g., the lens package, lens crystal, etc., may be between the light emitter 30 on a back end of the casing and the outer optical fascia on a front end of the casing. Thus, light emitted from the light emitter 30 passes through the optical components before exiting the casing through the outer optical fascia. Examples of the light-emitting system 24 are shown in
The light emitter 30 may be a semiconductor light emitter, e.g., a laser diode. In one example, as shown in
With reference to
With continued reference to
With reference to
Each pixel 14 may include any suitable number of SPADs, i.e., two or more. As one example, each pixel 14 may include two SPADs, i.e., the first SPAD 16 and the second SPAD 18. In the examples shown in
The pixels 14 may be arranged as an array, e.g., a 2-dimensional (2D) or a 1-dimensional (1D) arrangement of components. A 2D array of pixels 14 includes a plurality of pixels 14 arranged in columns and rows.
The FPA 12 detects photons by photo-excitation of electric carriers, e.g., with the SPADs. An output from the FPA 12 indicates a detection of light. The outputs of the FPA 12 are collected to generate a 3D environmental map, e.g., 3D location coordinates of objects and surfaces within the FOV of the Lidar system 10. The FPA 12 may include the SPADs, which include semiconductor components for detecting light reflections from the FOV of the Lidar system 10. Optical elements such as a lens package of the light-receiving system 26 may be positioned between the FPA 12 on the back end of the casing and the outer optical fascia on the front end of the casing.
The read-out integrated circuit (ROIC) 32 converts electrical signals received from the SPADs of the pixel 14 to digital signals. The ROIC 32 may include electrical components which can convert electrical voltage to digital data. The ROIC 32 may be connected to the computer 28, which receives the data from the ROIC 32 and may generate the 3D environmental map based on the data received from the ROIC 32.
The pixel 14 functions to output a single signal or stream of signals corresponding to a count of photons incident on the pixel 14 within one or more sampling periods. Each sampling period may be picoseconds, nanoseconds, microseconds, or milliseconds in duration. The pixel 14 can output a count of incident photons, a time between incident photons, a time of incident photons (e.g., relative to an illumination output time), or other relevant data, and the Lidar system 10 can transform these data into distances from the system to external surfaces in the fields of view of these pixels 14. By merging these distances with the position of the pixels 14 at which these data originated and the relative positions of these pixels 14 at the time that these data were collected, the computer 28 (or other device accessing these data) can reconstruct a three-dimensional (3D) virtual or mathematical model of a space within the FOV, such as in the form of a 3D image represented by a rectangular matrix of range values, wherein each range value in the matrix corresponds to a polar coordinate in 3D space.
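As a hedged illustration of how such a matrix of range values might be assembled from per-pixel time-of-flight data, consider the following sketch; the array dimensions, the dict-based input, and the function name are assumptions for illustration rather than the actual data path of the ROIC 32 or computer 28.

```python
# Sketch: assembling per-pixel range measurements into a rectangular matrix of
# range values (a "range image"), one value per pixel of the focal-plane array.
import math

ROWS, COLS = 64, 128                       # illustrative array dimensions
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def build_range_image(tof_by_pixel):
    """tof_by_pixel: dict mapping (row, col) -> measured time of flight in seconds."""
    image = [[math.nan] * COLS for _ in range(ROWS)]   # NaN where no return was detected
    for (row, col), tof in tof_by_pixel.items():
        image[row][col] = SPEED_OF_LIGHT_M_PER_S * tof / 2.0   # round trip -> one-way range
    return image

# Example: a single return at pixel (10, 20) about 100 ns after emission (~15 m away).
print(build_range_image({(10, 20): 100e-9})[10][20])  # ~15.0
```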
As set forth above, the FPA 12 includes a power-supply circuit 40 that powers the pixels 14, i.e., supplies power to the SPADs. The FPA 12 may include a single power-supply circuit 40 in communication with all pixels 14 or may include a plurality of power-supply circuits 40, each in communication with a group of the pixels 14.
The power-supply circuit 40 may include active electrical components such as MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor), BiCMOS (Bipolar CMOS), IGBT (Insulated-gate bipolar transistor), VMOS (vertical MOSFET), HexFET, DMOS (double-diffused MOSFET), LDMOS (lateral DMOS), BJT (Bipolar junction transistor), etc., and passive components such as resistors, capacitors, etc. The power-supply circuit 40 may include a power-supply control circuit. The power-supply control circuit may include electrical components such as a transistor, logical components, etc. The power-supply control circuit may control the power-supply circuit 40, e.g., in response to a command from the computer 28, to apply bias voltage and to quench and reset the SPAD.
As set forth above, each pixel 14 includes a memory bank 34. Specifically, the computer 28 of the Lidar system 10 includes a processor and memory storing instructions executable by the processor. The memory of the Lidar system 10 includes a plurality of memory banks 34. Each memory bank 34 is dedicated to one of the pixels 14. Data output from the ROIC 32 may be stored in the memory bank 34 of the pixel 14, e.g., for processing by the computer 28. The memory of the Lidar system 10 may be DRAM (Dynamic Random Access Memory), SRAM (Static Random Access Memory), and/or MRAM (Magneto-resistive Random Access Memory) electrically connected to the ROIC 32.
Each SPAD (e.g., SPADs 16, 18, 36, 38) is a semiconductor device having a p-n junction that is reverse biased (herein referred to as “bias”) at a voltage that exceeds the breakdown voltage of the p-n junction, i.e., in Geiger mode. The bias voltage is at a magnitude such that a single photon injected into the depletion layer triggers a self-sustaining avalanche, which produces a readily-detectable avalanche current. The leading edge of the avalanche current indicates the arrival time of the detected photon. In other words, the SPAD is a triggering device for which the leading edge of the avalanche current typically determines the trigger.
“Geiger mode” means that the SPAD is operated above the breakdown voltage of the semiconductor and a single electron-hole pair (generated by absorption of one photon) can trigger a strong avalanche. The SPAD may be biased above its zero-frequency breakdown voltage to produce an average internal gain on the order of one million. Under such conditions, a readily-detectable avalanche current can be produced in response to a single input photon, thereby allowing the SPAD to be utilized to detect individual photons. “Avalanche breakdown” is a phenomenon that can occur in both insulating and semiconducting materials. It is a form of electric current multiplication that can allow very large currents within materials which are otherwise good insulators. It is a type of electron avalanche. In the present context, “gain” is a measure of an ability of a two-port circuit, e.g., the SPAD, to increase power or amplitude of a signal from the input to the output port. As used herein, application of bias voltage to the SPAD results in Geiger mode operation. As also used herein, application of “low bias voltage” to the SPAD is the application of bias voltage that does not result in Geiger mode operation. Specifically, during application of low bias voltage, avalanche breakdown does not occur in response to a single input photon. The SPAD may be supplied bias voltage so that a single photon triggers an avalanche and the SPAD may be supplied no bias voltage or low bias voltage so that avalanche is not triggered by a single input photon.
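A minimal sketch of this distinction between bias, low bias, and no bias is shown below; the function name, threshold logic, and voltages are illustrative assumptions, not part of the power-supply circuit 40.

```python
# Sketch: classifying a SPAD's operating regime from its bias relative to breakdown.
def operating_mode(bias_v: float, breakdown_v: float) -> str:
    """Return the regime implied by the applied bias voltage."""
    if bias_v > breakdown_v:
        return "geiger"      # a single photon can trigger a self-sustaining avalanche
    elif bias_v > 0.0:
        return "low-bias"    # below breakdown: a single photon does not trigger avalanche breakdown
    else:
        return "off"         # no bias applied

print(operating_mode(bias_v=27.0, breakdown_v=25.0))  # "geiger" (illustrative voltages)
```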
When the SPAD is triggered in a Geiger-mode in response to a single input photon, the avalanche current continues as long as the bias voltage remains above the breakdown voltage of the SPAD. Thus, in order to detect the next photon, the avalanche current must be “quenched” and the SPAD must be reset. Quenching the avalanche current and resetting the SPAD involves a two-step process: (i) the bias voltage is reduced below the SPAD breakdown voltage to quench the avalanche current as rapidly as possible, and (ii) the SPAD bias is then raised by the power-supply circuit 40 to a voltage above the SPAD breakdown voltage so that the next photon can be detected.
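The two-step sequence can be sketched as follows, assuming a hypothetical set_bias interface to the power-supply circuit 40 and illustrative voltage margins; this is not the actual quenching circuit.

```python
# Sketch of the two-step quench-and-reset sequence: (i) drop the bias below breakdown
# to stop the avalanche current, then (ii) raise it back above breakdown to re-arm the SPAD.
from typing import Callable

def quench_and_reset(set_bias: Callable[[float], None],
                     breakdown_v: float,
                     excess_v: float,
                     margin_v: float = 0.5) -> None:
    set_bias(breakdown_v - margin_v)   # step (i): quench the avalanche as rapidly as possible
    set_bias(breakdown_v + excess_v)   # step (ii): re-bias above breakdown; next photon detectable

# Example with a stand-in power supply that simply records the commanded voltages.
commands = []
quench_and_reset(commands.append, breakdown_v=25.0, excess_v=2.0)
print(commands)  # [24.5, 27.0]
```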
Quenching is performed using a quenching circuit or the like, e.g., using known techniques. Each SPAD includes a quenching circuit. Quenching is performed by sensing a leading edge of the avalanche current, generating a standard output pulse synchronous with the avalanche build-up, quenching the avalanche by lowering the bias down to the breakdown voltage, and resetting the SPAD to the operative level.
Quenching may be passive or active. A passive quenching circuit typically includes a single resistor in series with the SPAD. The avalanche current self-quenches because it develops a voltage drop across the resistor, e.g., 100 kΩ (kilo-ohm) or more. After the quenching of the avalanche current, the SPAD bias voltage recovers, and the SPAD is therefore ready to detect the next photon. An active circuit element can be used for resetting while the quench remains passive, i.e., a passive quench active reset (PQAR) configuration.
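The many nanoseconds of passive-quench dead time follow largely from the recovery of the bias through the quenching resistor; a rough sketch of that recovery time constant is given below, with the resistance and capacitance values being illustrative assumptions rather than values taken from this disclosure.

```python
# Sketch: approximate recovery (reset) time constant of a passively quenched SPAD,
# set by the series quenching resistor and the diode (plus stray) capacitance.
def passive_recovery_time_constant(r_quench_ohm: float, c_diode_farad: float) -> float:
    return r_quench_ohm * c_diode_farad

# Example: 100 kOhm with ~1 pF of capacitance gives a ~100 ns time constant,
# consistent with a dead time of many nanoseconds.
print(passive_recovery_time_constant(100e3, 1e-12))  # ~1e-07 seconds
```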
In active quenching, upon measuring an onset of the avalanche current across a resistor, e.g., 50 Ω, a digital output pulse synchronous with the photon arrival time is generated. The quenching circuit then quickly reduces the bias voltage to below the breakdown voltage, and then returns the bias voltage to above the breakdown voltage, ready to sense the next photon. This mode is called active quench active reset (AQAR); however, depending on circuit requirements, active quench passive reset (AQPR) may be provided. An AQAR circuit typically allows lower dead times (times in which a photon cannot be detected) compared to circuits having passive quenching and/or passive resetting.
The computer 28 of the Lidar system 10 is a microprocessor-based controller implemented via circuits, chips, or other electronic components. The computer 28 is in electronic communication with the pixels 14 (e.g., with the ROIC 32 and power-supply circuits 40) and the vehicle 20 (e.g., with the ADAS) to receive data and transmit commands. The computer 28 includes a processor and a memory. The computer 28 may be programmed to execute operations disclosed herein, including the method 900 described below. Specifically, the memory stores instructions executable by the processor to execute the operations disclosed herein and electronically stores data and/or databases. The memory includes one or more forms of computer-readable media, and stores instructions executable by the computer 28 for performing various operations, including as disclosed herein. For example, the computer 28 may include a dedicated electronic circuit including an ASIC (Application Specific Integrated Circuit) that is manufactured for a particular operation, e.g., calculating a histogram of data received from the Lidar system 10 and/or generating a 3D environmental map for a Field of View (FOV) of the vehicle 20. In another example, the computer 28 may include an FPGA (Field Programmable Gate Array), which is an integrated circuit manufactured to be configurable by a customer. As an example, a hardware description language such as VHDL (Very High Speed Integrated Circuit Hardware Description Language) is used in electronic design automation to describe digital and mixed-signal systems such as FPGAs and ASICs. For example, an ASIC is manufactured based on VHDL programming provided pre-manufacturing, and logical components inside an FPGA may be configured based on VHDL programming, e.g., stored in a memory electrically connected to the FPGA circuit. In some examples, a combination of processor(s), ASIC(s), and/or FPGA circuits may be included inside a chip packaging. The computer 28 may be a set of computers communicating with one another via the communication network of the vehicle 20, e.g., a computer 28 in the Lidar system 10 and a second computer 28 in another location in the vehicle 20.
The computer 28 may operate the vehicle 20 in an autonomous mode, a semi-autonomous mode, or a non-autonomous (or manual) mode. For purposes of this disclosure, an autonomous mode is defined as one in which each of vehicle propulsion, braking, and steering is controlled by the computer 28; in a semi-autonomous mode the computer 28 controls one or two of vehicle propulsion, braking, and steering; and in a non-autonomous mode a human operator controls each of vehicle 20 propulsion, braking, and steering.
The computer 28 may include programming to operate one or more of vehicle brakes, propulsion (e.g., control of acceleration in the vehicle 20 by controlling one or more of an internal combustion engine, electric motor, hybrid engine, etc.), steering, climate control, interior and/or exterior lights, etc., as well as to determine whether and when the computer 28, as opposed to a human operator, is to control such operations. Additionally, the computer 28 may be programmed to determine whether and when a human operator is to control such operations.
The computer 28 may include or be communicatively coupled to, e.g., via a vehicle 20 communication bus, more than one processor, e.g., controllers or the like included in the vehicle 20 for monitoring and/or controlling various vehicle 20 controllers, e.g., a powertrain controller, a brake controller, a steering controller, etc. The computer 28 is generally arranged for communications on a vehicle 20 communication network that can include a bus in the vehicle 20 such as a controller area network (CAN) or the like, and/or other wired and/or wireless mechanisms.
As set forth above, each pixel 14 in the examples shown in
As described further below, in the example in
The method 900 is shown in
With reference to block 905, the method includes emitting light from the light emitter 30. Specifically, the light emitter 30 emits one pulse of light at block 905. The computer 28 instructs the light emitter 30 to emit the light. For example, the computer 28 may pulse the light routinely based on a clock. The application of the bias voltage to the first SPAD 16 is clocked based on the emission of light from the light emitter 30. The computer 28 operates the SPADs as described below during an acquisition time. The acquisition time may be the time between pulses of light from the light emitter 30.
With reference to block 910, the method includes applying bias voltage to the first SPAD 16. In some examples, bias voltage is simultaneously applied to multiple SPADs of the pixel 14. For example, in the example shown in
In decision block 915, the method includes determining whether a photon is detected by the first SPAD 16 during the application of the bias voltage to the first SPAD 16. If a photon is not detected, the method includes determining whether the acquisition time has ended in decision block 920. The bias voltage is applied to the first SPAD 16 until the acquisition time ends or a photon is detected by the first SPAD 16.
If the first SPAD 16 detects a photon at decision block 915, the method includes quenching the first SPAD 16 (block 925), recording the detection of the photon in the memory bank 34 of the pixel 14 (block 930), and applying bias voltage to the second SPAD 18 (block 935). The quenching of the first SPAD 16 in block 925 and the application of the bias voltage to the second SPAD 18 in block 935 may be simultaneous to reduce or eliminate dead time between the operation of the first SPAD 16 and the second SPAD 18. The quenching in block 925 and the recording of the detection of the photon in the memory bank 34 of the pixel 14 in block 930 may be performed during operation of the second SPAD 18, i.e., while bias voltage is applied to the second SPAD 18. The method includes removing the bias voltage from the first SPAD 16 after detecting the photon with the first SPAD 16. The bias voltage is removed from the first SPAD 16 when the first SPAD 16 detects the photon and, accordingly, the bias voltage is applied to the second SPAD 18 while no bias voltage or low bias voltage is applied to the first SPAD 16. The computer 28 may read from the memory bank 34 of the pixel 14 to perform operations of the Lidar system 10 described above. The memory bank 34 is common to all SPADs of the pixel 14, and the memory bank 34 may store data from more than one SPAD of the pixel 14 before being read out, or the data from one SPAD may be read out of the memory bank 34 before the data from another SPAD is recorded in the memory bank 34.
In examples in which other SPADs of the pixel 14 are grouped with the first SPAD 16, e.g., the third SPAD 36 in
With reference to block 935, the method includes applying bias voltage to the second SPAD 18 after detection of the photon by the first SPAD 16 (and/or detection by any other SPADs grouped with the first SPAD 16). In some examples, bias voltage is simultaneously applied to multiple SPADs of the pixel 14 at block 935. For example, in the example shown in
In decision block 940, the method includes determining whether a photon is detected by the second SPAD 18 during the application of the bias voltage to the second SPAD 18. If a photon is not detected, the method includes determining whether the acquisition time has ended in decision block 945. Once the process at block 935 is initiated, the bias voltage is applied to the second SPAD 18 until the acquisition time ends or a photon is detected by the second SPAD 18.
If the second SPAD 18 detects a photon at decision block 940, the method includes quenching the second SPAD 18 (block 945), recording the detection of the photon in the memory bank 34 of the pixel 14 (block 950), and applying bias voltage to another SPAD of the pixel 14 (block 955). The quenching of the second SPAD 18 in block 945 and the application of the bias voltage to another SPAD in block 955 may be simultaneous to reduce or eliminate dead time between the operation of the second SPAD 18 and the next SPAD. The quenching in block 945 and the recording of the detection of the photon in the memory bank 34 of the pixel 14 in block 950 may be performed during operation of another SPAD (e.g., Nth SPAD in
In examples in which other SPADs of the pixel 14 are grouped with the second SPAD 18, e.g., the fourth SPAD 38 in
As shown in blocks 955-975, the process of biasing a SPAD, detecting a photon with the SPAD, quenching the SPAD, recording the detection, and then biasing another SPAD is repeated for all of the SPADs of the pixel 14 if photons are detected by each SPAD of the pixel 14 before the acquisition time ends. In the example shown in
The process of biasing a SPAD, detecting a photon with the SPAD, quenching the SPAD, recording the detection, and then biasing another SPAD is repeated until each of the SPADs of the pixel 14 has been biased or the acquisition time has ended. As shown in block 980, if each SPAD of the pixel 14 has detected a photon and the acquisition time has not ended, then the method repeats starting at block 910.
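For illustration only, the per-pixel flow of the method 900 can be sketched as the following loop; the bias_on/bias_off, detect, and now interfaces are hypothetical stand-ins for the power-supply circuit 40, the quenching circuitry, and the ROIC 32, and the mapping to block numbers is approximate.

```python
# Sketch of one pixel's acquisition loop: bias a SPAD, wait for a detection or the
# end of the acquisition time, then quench, record to the pixel's common memory bank,
# and bias the next SPAD, wrapping around to the first SPAD if time remains.
def acquire(pixel_spads, memory_bank, detect, now, acquisition_end):
    """pixel_spads: ordered SPADs of the pixel; detect(spad, deadline) -> timestamp or None."""
    index = 0
    while now() < acquisition_end:
        spad = pixel_spads[index % len(pixel_spads)]
        spad.bias_on()                                 # bias into Geiger mode (blocks 910/935/955)
        timestamp = detect(spad, acquisition_end)      # wait for a photon or timeout (blocks 915/940)
        spad.bias_off()                                # remove bias; the SPAD quenches (blocks 925/945)
        if timestamp is None:                          # acquisition time ended with no detection
            break
        memory_bank.append((index % len(pixel_spads), timestamp))  # record detection (blocks 930/950)
        index += 1                                     # next SPAD is biased while this one quenches
```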
The disclosure has been described in an illustrative manner, and it is to be understood that the terminology which has been used is intended to be in the nature of words of description rather than of limitation. Many modifications and variations of the present disclosure are possible in light of the above teachings, and the disclosure may be practiced otherwise than as specifically described.