The disclosure herein relates to image sensors for Lidar (Light Detection and Ranging) systems.
An image sensor or imaging sensor is a sensor that can detect a spatial intensity distribution of radiation. An image sensor usually represents the detected image by electrical signals. Image sensors based on semiconductor devices may be classified into several types, including semiconductor charge-coupled devices (CCD), complementary metal-oxide-semiconductor (CMOS) sensors, and N-type metal-oxide-semiconductor (NMOS) sensors.
In addition to being used for capturing a two-dimensional (2D) image of objects (i.e., for detecting a spatial intensity distribution of an incoming radiation) as mentioned above, an image sensor can also be used in a Lidar (Light Detection and Ranging) system for capturing a range image of objects (i.e., for detecting a spatial distance distribution of incoming radiation).
Disclosed herein is a method of operating an apparatus which comprises (a) an image sensor comprising an array of avalanche photodiodes (APDs) (i), i=1, . . . ,N, N being a positive integer, for i=1, . . . ,N, the APD (i) comprising an absorption region (i) and an amplification region (i), wherein the absorption region (i) is configured to generate charge carriers from a photon absorbed by the absorption region (i), wherein the amplification region (i) comprises a junction (i) with a junction electric field (i) in the junction (i), wherein the junction electric field (i) is at a value sufficient to cause an avalanche of charge carriers entering the amplification region (i), but not sufficient to make the avalanche self-sustaining, and wherein the junctions (i), i=1, . . . ,N are discrete, (b) a radiation source, and (c) an optical system, the method comprising: using the radiation source to emit a pulse of illumination photons at a time point Ta; for i=1, . . . ,N, measuring a time of flight (i) from Ta to a time point Tb(i) at which a photon of the illumination photons returns to the APD (i) through the optical system after bouncing off a surface spot (i) of a targeted object corresponding to the APD (i); and determining a three-dimensional (3D) contour of the targeted objects based on the times of flight (i), i=1, . . . ,N.
According to an embodiment, N is greater than 1.
According to an embodiment, the illumination photons comprise infrared photons, and for i=1, . . . ,N, the APD (i) comprises silicon.
According to an embodiment, for i=1, . . . , N, the absorption region (i) has a thickness of 10 microns or above.
According to an embodiment, for i=1, . . . ,N, an absorption region electric field (i) in the absorption region (i) is not high enough to cause avalanche effect in the absorption region (i).
According to an embodiment, for i=1, . . . ,N, the absorption region (i) is an intrinsic semiconductor or a semiconductor with a doping level less than 10¹² dopants/cm³.
According to an embodiment, N>1, and at least some absorption regions of the absorption regions (i), i=1, . . . ,N are joined together.
According to an embodiment, for i=1, . . . ,N, the APD (i) further comprises an amplification region (i′) such that the amplification region (i) and the amplification region (i′) are on opposite sides of the absorption region (i).
According to an embodiment, the amplification regions (i), i=1, . . . ,N are discrete.
According to an embodiment, for i=1, . . . ,N, the junction (i) is a p-n junction or a heterojunction.
According to an embodiment, for i=1, . . . ,N, the junction (i) comprises a first layer (i) and a second layer (i), and for i=1, . . . ,N, the first layer (i) is a doped semiconductor and the second layer (i) is a heavily doped semiconductor.
According to an embodiment, for i=1, . . . ,N, the junction (i) further comprises a third layer (i) sandwiched between the first layer (i) and the second layer (i), and for i=1, . . . ,N, the third layer (i) comprises an intrinsic semiconductor.
According to an embodiment, N>1, and at least some third layers of the third layers (i), i=1, . . . ,N, are joined together.
According to an embodiment, for i=1, . . . ,N, the first layer (i) has a doping level of 10¹³ to 10¹⁷ dopants/cm³.
Docket No. 1810-0132
According to an embodiment, N>1, and at least some first layers of the first layers (i), i=1, . . . ,N are joined together.
According to an embodiment, the image sensor further comprises electrodes (i), i=1, . . . ,N in electrical contact with the second layers (i), i=1, . . . ,N, respectively.
According to an embodiment, the image sensor further comprises a passivation material configured to passivate a surface of the absorption regions (i), i=1, . . . ,N.
According to an embodiment, the image sensor further comprises a common electrode electrically connected to the absorption regions (i), i=1, . . . ,N.
According to an embodiment, for i=1, . . . ,N, the junction (i) is separated from a neighboring junction by (a) a material of the absorption region (i), (b) a material of the first layer (i) or of the second layer (i), (c) an insulator material, or (d) a guard ring (i) of a doped semiconductor.
According to an embodiment, for i=1, . . . ,N, the guard ring (i) is a doped semiconductor of a same doping type as the second layer (i), and for i=1, . . . ,N, the guard ring (i) is not heavily doped.
According to an embodiment, the method further comprises matching the determined 3D contour against a previously known 3D contour.
According to an embodiment, the optical system is configured to converge photons incident on the optical system.
According to an embodiment, the optical system comprises a first cylindrical lens and a second cylindrical lens, and the first cylindrical lens is positioned between the targeted objects and the second cylindrical lens.
According to an embodiment, the first cylindrical lens is configured to converge photons incident thereon in a first dimension, the second cylindrical lens is configured to further converge the incident photons after passing through the first cylindrical lens in a second dimension, and the first dimension is perpendicular to the second dimension.
According to an embodiment, each focal length of the first and second cylindrical lenses is positive, and the focal length of the first cylindrical lens is shorter than the focal length of the second cylindrical lens.
An avalanche photodiode (APD) is a photodiode that uses the avalanche effect to generate an electric current upon exposure to light. The avalanche effect is a process where free charge carriers in a material are subjected to strong acceleration by an electric field and subsequently collide with other atoms of the material, thereby ionizing them (impact ionization) and releasing additional charge carriers which accelerate and collide with further atoms, releasing more charge carriers—a chain reaction.
Impact ionization is a process in a material by which one energetic charge carrier can lose energy by the creation of other charge carriers. For example, in semiconductors, an electron (or hole) with enough kinetic energy can knock a bound electron out of its bound state (in the valence band) and promote it to a state in the conduction band, creating an electron-hole pair.
An APD may work in the Geiger mode or the linear mode. When the APD works in the Geiger mode, it may be called a single-photon avalanche diode (SPAD) (also known as a Geiger-mode APD or G-APD). A SPAD is an APD working under a reverse bias above the breakdown voltage. Here the word “above” means that the absolute value of the reverse bias is greater than the absolute value of the breakdown voltage.
A SPAD may be used to detect low-intensity light (e.g., down to a single photon) and to signal the arrival times of the photons with a jitter of a few tens of picoseconds. A SPAD may be in the form of a p-n junction under a reverse bias (i.e., the p-type region of the p-n junction is biased at a lower electric potential than the n-type region) above the breakdown voltage of the p-n junction. The breakdown voltage of a p-n junction is the reverse bias above which an exponential increase in the electric current in the p-n junction occurs.
An APD may instead work in the linear mode. An APD under a reverse bias below the breakdown voltage operates in the linear mode because the electric current in the APD is proportional to the intensity of the light incident on the APD.
The electric field in the amplification region 220 may be a result of a doping profile in the amplification region 220. For example, the amplification region 220 may include a p-n junction or a heterojunction that has an electric field in its depletion zone. The threshold electric field for the avalanche effect (i.e., the electric field above which the avalanche effect occurs and below which the avalanche effect does not occur) is a property of the material of the amplification region 220. The amplification region 220 may be on one or two opposite sides of the absorption region 210.
The amplification regions 312+313 of the APDs 350 are discrete regions; namely, the amplification regions 312+313 of the APDs 350 are not joined together. In an embodiment, the absorption layer 311 may be in the form of a semiconductor wafer such as a silicon wafer. The absorption regions 310 may be an intrinsic semiconductor or a very lightly doped semiconductor (e.g., <10¹² dopants/cm³, <10¹¹ dopants/cm³, <10¹⁰ dopants/cm³, <10⁹ dopants/cm³), with a sufficient thickness and thus a sufficient absorptance (e.g., >80% or >90%) for incident photons of interest (e.g., infrared photons).
The amplification regions 312+313 may have a junction 315 formed by at least two layers 312 and 313. The junction 315 may be a heterojunction or a p-n junction. In an embodiment, the layer 312 is a p-type semiconductor (e.g., silicon) and the layer 313 is a heavily doped n-type layer (e.g., silicon). The phrase “heavily doped” is not a term of degree. A heavily doped semiconductor has an electrical conductivity comparable to that of metals and exhibits an essentially linear, positive temperature coefficient of resistance. In a heavily doped semiconductor, the dopant energy levels are merged into an energy band. A heavily doped semiconductor is also called a degenerate semiconductor.
The layer 312 may have a doping level of 10¹³ to 10¹⁷ dopants/cm³. The layer 313 may have a doping level of 10¹⁸ dopants/cm³ or above. The layers 312 and 313 may be formed by epitaxial growth, dopant implantation, or dopant diffusion. The band structures and doping levels of the layers 312 and 313 can be selected such that the depletion zone electric field of the junction 315 is greater than the threshold electric field for the avalanche effect for electrons (or for holes) in the materials of the layers 312 and 313, but not so high as to cause a self-sustaining avalanche. Namely, the depletion zone electric field of the junction 315 should cause an avalanche when charge carriers generated by incident photons in the absorption region 310 enter the junction, but the avalanche should cease without further incident photons in the absorption region 310.
The image sensor 300 may further include electrodes 304 respectively in electrical contact with the layer 313 of the APDs 350. The electrodes 304 are configured to collect electric currents flowing through the APDs 350. The image sensor 300 may further include a passivation material 303 configured to passivate surfaces of the absorption regions 310 and the layer 313 of the APDs 350 to reduce recombination at these surfaces.
The image sensor 300 may further include an electronics layer 120 which may include an electronic system electrically connected to the electrodes 304. The electronic system is suitable for processing or interpreting electrical signals (i.e., the charge carriers) generated in the APDs 350 by the radiation incident on the absorption regions 310. The electronic system may include analog circuitry such as a filter network, amplifiers, integrators, and comparators, or digital circuitry such as a microprocessor and memory. The electronic system may include one or more analog-to-digital converters.
The image sensor 300 may further include a heavily doped layer 302 disposed on the absorption regions 310 opposite to the amplification regions 312+313, and a common electrode 301 on the heavily doped layer 302. The common electrodes 301 of at least some or all of the APDs 350 may be joined together. The heavily doped layers 302 of at least some or all of the APDs 350 may be joined together.
When a photon is incident on the image sensor 300, it may be absorbed by the absorption region 310 of one of the APDs 350, and charge carriers may be generated in the absorption region 310 as a result. One type (electrons or holes) of the charge carriers drifts toward the amplification region 312+313 of that one APD. When the charge carriers enter the amplification region 312+313, the avalanche effect occurs and causes amplification of the charge carriers. The amplified charge carriers may be collected by the electronics layer 120 through the electrode 304 of that one APD, as an electric current.
When that one APD is in the linear mode, the electric current is proportional to the number of incident photons in the absorption region 310 per unit time (i.e., proportional to the light intensity at that one APD). The electric currents at the APDs may be compiled to represent a spatial intensity distribution of light, i.e., a 2D image. The amplified charge carriers may alternatively be collected through the electrode 304 of that one APD, and the number of photons may be determined from the charge carriers (e.g., by using the temporal characteristics of the electric current).
The junctions 315 of the APDs 350 should be discrete, i.e., the junction 315 of one of the APDs should not be joined with the junction 315 of another one of the APDs. Charge carriers amplified at one of the junctions 315 of the APDs 350 should not be shared with another of the junctions 315.
The junction 315 of one of the APDs may be separated from the junctions 315 of the neighboring APDs (a) by the material of the absorption region wrapping around the junction, (b) by the material of the layer 312 or 313 wrapping around the junction, (c) by an insulator material wrapping around the junction, or (d) by a guard ring of a doped semiconductor.
As shown in
A heavily doped layer 402 (
A doped layer 412 (
An optional layer 417 (
A layer 413 (
The layer 413 may be formed by diffusing or implanting a suitable dopant into the substrate 411 or by epitaxial growth. The layer 413, the layer 412, and the layer 417 if present, form discrete junctions 415 (e.g., p-n junctions, p-i-n junctions, heterojunctions).
Optional guard rings 416 (
A passivation material 403 (
An electronics layer 120 (
A top view of the image sensor 300 of
In an embodiment, the operation of the Lidar system 500 in capturing range images of objects may be as follows. Firstly, the Lidar system 500 may be arranged or configured (or both) so that the objects whose range image is to be captured (referred to as targeted objects) are in a field of view (FOV) 510f of the Lidar system 500. The targeted objects may also be arranged (or moved) if possible so as to be in the FOV 510f of the Lidar system 500. For example, if the Lidar system 500 is used for capturing a range image of a person's face, then the Lidar system 500 may be arranged or configured (or both) and/or the person may move so that the person's face is in the FOV 510f and facing the Lidar system 500. All photons propagating in the FOV 510f and then into the optical system 510 are guided by the optical system 510 to the 12 APDs 350 of the image sensor 300.
In an embodiment, the FOV 510f may be 40° horizontal and 30° vertical. In other words, the FOV 510f has a shape of a right pyramid with its apex being the Lidar system 500 (or the optical system 510, to be more specific) and its base 510b being a rectangle at a very large distance from the apex (or at infinity for simplicity). Because the optical system 510 is considered the apex of the FOV 510f, the apex can be referred to as the apex 510.
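The pyramid geometry above implies a simple footprint calculation: at a distance d from the apex, the base rectangle of a 40° horizontal by 30° vertical FOV has width 2d·tan(20°) and height 2d·tan(15°). A minimal sketch (the function name and parameterization are illustrative, not part of the disclosure):

```python
import math

# Footprint of a 40 deg x 30 deg FOV at distance d_m from the apex.
# The function name and default angles are illustrative assumptions.
def fov_footprint(d_m, h_deg=40.0, v_deg=30.0):
    """Return (width, height) in meters of the FOV base rectangle at distance d_m."""
    width = 2.0 * d_m * math.tan(math.radians(h_deg / 2.0))
    height = 2.0 * d_m * math.tan(math.radians(v_deg / 2.0))
    return width, height

# At 10 m from the apex, the FOV covers roughly a 7.3 m x 5.4 m rectangle.
```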
In an embodiment, the FOV 510f may be deemed to include 12 sub-fields of view (sub-FOV) corresponding to the 12 APDs 350 of the image sensor 300 such that all photons propagating in a sub-FOV and then into the optical system 510 are guided by the optical system 510 to the corresponding APD 350. Specifically, the base 510b of the FOV 510f may be deemed to comprise 12 base rectangles arranged in an array of 3 rows and 4 columns. Each base rectangle and the apex 510 form a subpyramid that represents a sub-FOV of the 12 sub-FOVs. For example, the base rectangle 510b.1 and the apex 510 form a subpyramid that represents the sub-FOV corresponding to the APD 350.1 (hereafter, this subpyramid, this sub-FOV, and this base rectangle use the same reference numeral 510b.1 for simplicity). As a result, all photons propagating in this sub-FOV 510b.1 and then into the optical system 510 are guided by the optical system 510 to the corresponding APD 350.1 of the image sensor 300.
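The mapping from a propagation direction to its sub-FOV can be sketched as a grid lookup. This is an illustrative sketch only; the row/column ordering and the angle convention (angles measured from the FOV center) are assumptions, not part of the disclosure:

```python
# Which of the 12 sub-FOVs (3 rows x 4 columns) contains a given direction?
# Each sub-FOV spans 10 deg x 10 deg (40/4 horizontally, 30/3 vertically).
FOV_H_DEG, FOV_V_DEG = 40.0, 30.0
ROWS, COLS = 3, 4

def sub_fov_index(az_deg, el_deg):
    """Return (row, col) of the sub-FOV containing the direction, or None if outside the FOV."""
    if abs(az_deg) >= FOV_H_DEG / 2 or abs(el_deg) >= FOV_V_DEG / 2:
        return None  # direction is outside the 40 x 30 deg FOV
    col = int((az_deg + FOV_H_DEG / 2) // (FOV_H_DEG / COLS))  # each column spans 10 deg
    row = int((el_deg + FOV_V_DEG / 2) // (FOV_V_DEG / ROWS))  # each row spans 10 deg
    return (row, col)
```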
In an embodiment, while the targeted objects are in the FOV 510f of the Lidar system 500, the radiation source 520 may emit a pulse (or flash or burst) 520′ of illumination photons toward the targeted objects so as to illuminate these targeted objects.
Regarding the operation of the Lidar system 500 with respect to the APD 350.1, assume that the corresponding sub-FOV 510b.1 intersects a surface of a targeted object facing the Lidar system 500 via a surface spot 540 (also referred to as a spot of the scene). Assume further that a photon of the pulse 520′ bounces off the surface spot 540, returns to the Lidar system 500 (or the optical system 510, to be more specific), and is guided by the optical system 510 to the corresponding APD 350.1. As a result, this photon contributes to causing a spike (i.e., a sharp increase) in the number of charge carriers in the APD 350.1. The more photons of the pulse 520′ that bounce off the surface spot 540 in the sub-FOV 510b.1, return to the Lidar system 500, and enter the APD 350.1, the larger the spike, and the more easily the spike may be detected by the electronics layer 120 of the image sensor 300.
In an embodiment, the electronics layer 120 may be configured to (a) measure the time period (called the time-of-flight or TOF for short) from the time at which the pulse 520′ is emitted by the radiation source 520 to the time at which the spike in the number of charge carriers in the APD 350.1 occurs, and then (b) based on the measured TOF, determine the spot distance from the Lidar system 500 to the surface spot 540. In an embodiment, the formula used to determine this spot distance is: D=½ (c×TOF), where D is the spot distance and c is the speed of light in vacuum (around 3×10⁸ m/s). For example, if the measured TOF is 60 ns, then D=½ (3×10⁸ m/s×60 ns)=9 m.
In an alternative embodiment, the spot distance may be expressed in terms of the time it would take light to propagate from the Lidar system 500 to the surface spot 540. In this alternative embodiment, the formula used to determine this spot distance is: D=½ TOF. For example, if the measured TOF is 60 ns, then D=½ (60 ns)=30 ns.
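The two spot-distance formulas above can be sketched as follows (the function names are illustrative, not part of the disclosure):

```python
# Spot distance from the measured round-trip time-of-flight (TOF).
C = 3.0e8  # speed of light in vacuum, m/s

def spot_distance_m(tof_s):
    """Spot distance in meters: D = 1/2 * c * TOF (light travels out and back)."""
    return 0.5 * C * tof_s

def spot_distance_s(tof_s):
    """Spot distance expressed as one-way light travel time: D = 1/2 * TOF."""
    return 0.5 * tof_s

# A measured TOF of 60 ns gives about 9 m, or 30 ns one-way.
```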
In an embodiment, the operation of the Lidar system 500 with respect to the other 11 APDs 350 are similar to the operation of the Lidar system 500 with respect to the APD 350.1 as described above. As a result, in total, the Lidar system 500 determines 12 spot distances from the Lidar system 500 to 12 surface spots in the 12 sub-FOVs. These 12 spot distances include the one spot distance from the Lidar system 500 to the surface spot 540 in the sub-FOV 510b.1 described above. These 12 spot distances constitute a range image of the targeted objects in the FOV 510f. In other words, by determining the 12 spot distances as described above, the Lidar system 500 has captured a range image of the targeted objects in the FOV 510f. This range image of the targeted objects may be deemed to have 12 image pixels arranged in a rectangular array of 3 rows and 4 columns, wherein the 12 image pixels contain the 12 spot distances mentioned above.
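The compilation of the 12 spot distances into a range image of 3 rows and 4 columns can be sketched as follows (row-major ordering is an illustrative assumption):

```python
# Arrange the 12 per-APD spot distances into a 3-row x 4-column range image.
ROWS, COLS = 3, 4

def to_range_image(spot_distances):
    """Reshape a flat list of ROWS*COLS spot distances into a list of rows."""
    assert len(spot_distances) == ROWS * COLS
    return [spot_distances[r * COLS:(r + 1) * COLS] for r in range(ROWS)]
```

For example, `to_range_image(list(range(12)))` yields three rows of four pixels each.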
In an embodiment, the determined 3D contour of the targeted objects may be matched against (i.e., compared with) a previously known 3D contour. For example, the determined 3D contour may be that of the face of a person trying to pass a security checkpoint so as to enter a government building, and the determined 3D contour may be compared with a previously known 3D contour from a ban list. If there is a match, then the person may be denied entry.
In an embodiment, the pulse 520′ of photons may include infrared photons. Because infrared photons are safe for human eyes, the Lidar system 500 may be safely used in applications that usually have people near the Lidar system 500 (e.g., self-driving cars, facial image capturing, etc.). Silicon does not absorb incident infrared photons well (i.e., Si allows infrared photons to pass essentially without absorption). As a result, the electrical signals (or charge carriers) created in the silicon absorption regions of a typical prior-art image sensor are rather weak and therefore may be obscured by electrical noise within that image sensor. In contrast, the APDs 350 of the present disclosure, even if made of silicon, significantly amplify, through the avalanche effect, the electrical signals which incident infrared photons create in the silicon absorption regions 310. As a result, these amplified electrical signals (i.e., the spikes mentioned above) may be easily detected by the electronics layer 120. This means that a Lidar system 500 comprising mostly silicon is functional. Because Si is a reasonably cheap semiconductor material, the Lidar system 500 which comprises mostly silicon (in an embodiment) is reasonably cheap to make.
In the embodiments described above, the image sensor 300 includes 12 APDs 350. In general, the image sensor 300 may include N APDs 350 (N being a positive integer) arranged in any way (i.e., not necessarily in a rectangular array as described above). The more APDs 350 the image sensor 300 has, the higher the spatial resolution of the captured range image. With N>1 as described above, the Lidar system 500 is usually referred to as a Flash Lidar system.
For the case N=1, the image sensor 300 has only 1 APD 350. In this case, in an embodiment, the FOV 510f may be narrowed down such that the FOV 510f is, for example, 1° horizontal and 1° vertical. Accordingly, the pulse 520′ of illumination photons may be focused on the narrow FOV 510f and would look like a narrow beam that illuminates essentially only the targeted objects in the narrow FOV 510f. An advantage of this case (N=1) is that because the power of the pulse 520′ of illumination photons is focused on the narrow FOV 510f, the Lidar system 500 may capture a range image of targeted objects farther away from the Lidar system 500. For example, the Lidar system 500 of this case (N=1) may be mounted on a flying airplane to capture range images of the ground below in sequence while the FOV 510f scans the ground (i.e., the FOV 510f is directed at a new spot of the scene before the Lidar system 500 captures a new range image).
In the embodiments described above, the electronic system of the electronics layer 120 of the image sensor 300 includes all the electronics components needed for TOF measurements and spot distance determinations. In an alternative embodiment, the Lidar system 500 may further include a separate signal processor (or even a computer) electrically connected to the image sensor 300 and the radiation source 520 such that both the electronic system of the electronics layer 120 and the signal processor may work together to handle the TOF measurements and spot distance determinations. As a result, in this alternative embodiment, the electronics layer 120 of the image sensor 300 does not have to include all the electronics needed for the TOF measurements and spot distance calculations, and therefore may be fabricated more easily.
In an embodiment, after capturing the range image of the targeted objects as described above, the Lidar system 500 may be used for capturing more range images in a similar manner. Specifically, if the Lidar system 500 is mounted on a self-driving car to monitor surrounding objects, then before each range image is captured, the Lidar system 500 may be arranged or configured (or both) so that the FOV 510f is directed at a new scene. For example, the Lidar system 500 (or the FOV 510f, to be more specific) may be rotated 40° around a vertical axis going through the Lidar system 500 before each new range image is captured. As a result, 9 range images are captured for each revolution of the 360° scene surrounding the self-driving car.
Alternatively, if the Lidar system 500 is used to monitor a room for intruders, then in an embodiment, the FOV 510f of the Lidar system 500 may remain stationary with respect to the room while the Lidar system 500 captures range images of the room objects in the FOV 510f in sequence (i.e., captures one range image after another).
Next, in an embodiment, the Lidar system 500 may be configured to compare a first range image captured by the Lidar system 500 at a first time point and a second range image captured by the Lidar system 500 at a second time point, wherein the second time point is Td seconds after the first time point. For example, Td may be chosen to be 10 seconds to make it unlikely that the intruder's image in the first range image overlaps the intruder's image in the second range image when the first and second range images are superimposed on each other.
In an embodiment, the comparison of the first and second range images may include determining the difference between the first and second range images as follows. A range change image of size 3×4 representing the difference between the first and second range images may be obtained by subtracting the second range image from the first range image. Specifically, assume the first range image includes 12 spot distances D1(i), i=1, . . . ,12, and the second range image includes 12 spot distances D2(i), i=1, . . . ,12, then the range change image includes 12 range changes RC(i), i=1, . . . ,12 wherein for i=1, . . . ,12, the range change RC(i)=D1(i)−D2(i). In an embodiment, an alarm may be triggered if the absolute value (i.e., modulus) of at least one of the 12 range changes RC(i), i=1, . . . ,12 exceeds a pre-specified positive threshold.
Next, in an embodiment, based on the range change image obtained as described above, the Lidar system 500 may be configured to identify the suspicious pixel positions of the 3×4 array of 12 pixel positions that experience changes when the first and second range images are compared. Specifically, based on the range change image, the Lidar system 500 may be configured to obtain a Boolean image of size 3×4 including 12 Boolean image pixels (i), i=1, . . . ,12 as follows. For i=1, . . . ,12, if the absolute value of RC(i) exceeds a positive threshold value pre-specified by the user of the Lidar system 500, then the Boolean image pixel (i) of the Boolean image is set to TRUE. Otherwise, the Boolean image pixel (i) of the Boolean image is set to FALSE. The TRUE Boolean image pixels identify the suspicious pixel positions.
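The range-change computation, the alarm condition, and the Boolean image described above can be sketched together as follows (function names and the example threshold used in comments are illustrative):

```python
# Range-change image RC(i) = D1(i) - D2(i), the Boolean image, and the alarm.
def range_change(first, second):
    """Per-pixel difference of two range images given as flat lists of spot distances."""
    return [d1 - d2 for d1, d2 in zip(first, second)]

def boolean_image(rc, threshold):
    """TRUE where the absolute range change exceeds the pre-specified positive threshold."""
    return [abs(c) > threshold for c in rc]

def alarm(rc, threshold):
    """Trigger when at least one pixel's range change exceeds the threshold."""
    return any(boolean_image(rc, threshold))
```

For example, if only one of the 12 spot distances changes from 9.0 m to 7.5 m, the range change at that pixel is 1.5 m; with a 1.0 m threshold, that pixel alone is TRUE in the Boolean image and the alarm triggers.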
Next, in an embodiment, the Lidar system 500 may be configured to apply an algorithm on the suspicious pixel positions identified as described above to determine whether these suspicious pixel positions collectively have a size and shape of a human body in the 3×4 array of the 12 pixel positions. If the answer is yes, then the Lidar system 500 may be configured to trigger a security alarm system to indicate that an intruder is likely in the room.
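The disclosure does not specify the shape-matching algorithm. The following is a deliberately simple stand-in on the 3×4 Boolean image — it merely checks that the TRUE pixels form a cluster spanning at least two rows, as an upright figure would — and is not the algorithm of the disclosure:

```python
# Simple stand-in for the size-and-shape test: on a 3x4 Boolean image, require
# at least two TRUE pixels spanning at least two rows. A real system would use
# connected-component analysis and a learned shape model instead.
def looks_like_person(boolean_image):
    true_cells = [(r, c) for r, row in enumerate(boolean_image)
                  for c, v in enumerate(row) if v]
    if len(true_cells) < 2:
        return False
    rows = [r for r, _ in true_cells]
    return max(rows) - min(rows) >= 1  # spans at least two of the three rows
```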
In an embodiment, with reference to
In an embodiment, the first cylindrical lens 802 and the second cylindrical lens 804 may be arranged orthogonal to each other, that is, the axial axis of the first cylindrical lens 802 (e.g., dashed line 806 in Z direction in
In an embodiment, each focal length of the first and second cylindrical lenses 802 and 804 may be positive. In example of
In example of
When the targeted objects 810 are illuminated by a pulse of illumination photons generated by the radiation source 520 (
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Related application data:
Parent: PCT/CN2019/098265, Jul 2019, US
Child: 17571942, US