The present invention relates generally to systems and methods for depth mapping, and particularly to beam sources and sensor arrays used in time-of-flight sensing.
Existing and emerging consumer applications have created an increasing need for real-time three-dimensional (3D) imagers. These imaging devices, also known as depth sensors, depth mappers, or light detection and ranging (LiDAR) sensors, enable the remote measurement of distance (and often intensity) to each point in a target scene—referred to as target scene depth—by illuminating the target scene with an optical beam and analyzing the reflected optical signal. A commonly used technique for determining the distance to each point on the target scene involves transmitting one or more pulsed optical beams towards the target scene and then measuring the round-trip time, i.e., the time of flight (ToF), taken by the optical beams as they travel from the source to the target scene and back to a detector array adjacent to the source.
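By way of illustration (and not as part of any embodiment), the basic ToF relation underlying such measurements can be expressed in a few lines of code; the timing value below is an arbitrary example.

```python
# Minimal sketch of the time-of-flight relation described above; the pulse
# timing value is an arbitrary example, not a parameter of any embodiment.
C = 299_792_458.0  # speed of light, m/s

def depth_from_tof(round_trip_time_s: float) -> float:
    """Distance to a point on the target scene from the measured round-trip time."""
    return C * round_trip_time_s / 2.0  # halved: the light travels out and back

print(depth_from_tof(20e-9))  # a 20 ns round trip corresponds to ~3 m
```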
Some ToF systems use single-photon avalanche diodes (SPADs), also known as Geiger-mode avalanche photodiodes (GAPDs), in measuring photon arrival time. For example, U.S. Pat. No. 9,997,551, whose disclosure is incorporated herein by reference, describes a sensing device that includes an array of SPAD sensing elements. Each sensing element includes a photodiode, including a p-n junction, and a local biasing circuit, which is coupled to reverse-bias the p-n junction at a bias voltage greater than the breakdown voltage of the p-n junction by a margin sufficient so that a single photon incident on the p-n junction triggers an avalanche pulse output from the sensing element. A bias control circuit is coupled to set the bias voltage in different ones of the sensing elements to different, respective values.
U.S. Patent Application Publication 2017/0176579, whose disclosure is incorporated herein by reference, describes the use of this sort of variable biasing capability in selectively actuating individual sensing elements or groups of sensing elements in a SPAD array. For this purpose, an electro-optical device includes a laser light source, which emits at least one beam of light pulses, a beam steering device, which transmits and scans the at least one beam across a target scene, and an array of sensing elements. Each sensing element outputs a signal indicative of a time of incidence of a single photon on the sensing element. (Each sensing element in such an array is also referred to as a “pixel.”) Light collection optics image the target scene scanned by the transmitted beam onto the array. Circuitry is coupled to actuate the sensing elements only in a selected region of the array and to sweep the selected region over the array in synchronization with scanning of the at least one beam.
Embodiments of the present invention that are described hereinbelow provide improved depth mapping systems and methods for operating such systems.
There is therefore provided, in accordance with an embodiment of the invention, depth sensing apparatus, including a radiation source, which includes a first array of emitters arranged in multiple banks, which are configured to emit a first plurality of pulsed beams of optical radiation toward a target scene. A second plurality of sensing elements are arranged in a second array and are configured to output signals indicative of respective times of incidence of photons on the sensing elements, wherein the second plurality exceeds the first plurality. Objective optics are configured to form an image of the target scene on the array of sensing elements. Processing and control circuitry is coupled to actuate the multiple banks in alternation to emit the pulsed beams and to receive the signals from the sensing elements, and is configured to identify, responsively to the signals, areas of the second array on which the pulses of optical radiation reflected from corresponding regions of the target scene are incident, and to process the signals from the sensing elements in the identified areas in order to measure depth coordinates of the corresponding regions of the target scene based on the times of incidence.
In a disclosed embodiment, the emitters in the array include vertical-cavity surface-emitting lasers (VCSELs).
In some embodiments, the multiple banks include at least four banks, each bank containing at least four of the emitters. In one embodiment, the radiation source includes a diffractive optical element (DOE) which is configured to split the optical radiation emitted by each of the emitters into multiple ones of the pulsed beams. Additionally or alternatively, the at least four banks include at least eight banks. Each bank may contain at least twenty of the emitters. In a disclosed embodiment, the banks of emitters are interleaved on a substrate.
Additionally or alternatively, the sensing elements include single-photon avalanche diodes (SPADs).
In some embodiments, the processing and control circuitry is configured to group the sensing elements in each of the identified areas together to define super-pixels, and to process together the signals from the sensing elements in each of the super-pixels in order to measure the depth coordinates. In a disclosed embodiment, the processing and control circuitry includes multiple processing units, wherein each of the processing units is coupled to process the signals from a respective one of the super-pixels. Additionally or alternatively, each of the processing units is configured to construct a histogram of the times of incidence of the photons on the sensing elements in each of the super-pixels.
In some embodiments, the pulses of the optical radiation emitted from the multiple banks of the emitters are incident, after reflection from the target scene, on different, respective sets of the identified areas of the second array, and the processing and control circuitry is configured to receive and process the signals from the sensing elements in the respective sets in synchronization with actuation of the multiple banks. In the disclosed embodiments, when any given bank of the emitters is actuated, the processing and control circuitry is configured to read and process the signals only from the sensing elements in a corresponding set of the identified areas of the second array, while the remaining sensing elements in the array are inactive.
There is also provided, in accordance with an embodiment of the invention, a method for depth sensing, which includes driving a first array of emitters arranged in multiple banks to emit a first plurality of pulsed beams of optical radiation toward a target scene, while actuating the multiple banks in alternation to emit the pulsed beams. An image of the target scene is formed on a second plurality of sensing elements, which are arranged in a second array and are configured to output signals indicative of respective times of incidence of photons on the sensing elements, wherein the second plurality exceeds the first plurality. Responsively to the signals from the sensing elements, areas of the second array are identified on which the pulses of optical radiation reflected from corresponding regions of the target scene are incident. The signals from the sensing elements in the identified areas are processed in order to measure depth coordinates of the corresponding regions of the target scene based on the times of incidence.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings.
In some of the embodiments described in the above-mentioned U.S. Patent Application Publication 2017/0176579, SPADs are grouped together into “super-pixels,” wherein the term “super-pixel” refers to a group of mutually-adjacent pixels along with data processing elements that are coupled directly to these pixels. At any instant during operation of the system, only the sensing elements in the area or areas of the array that are to receive reflected illumination from a beam are actuated, for example by appropriate biasing of the SPADs in selected super-pixels, while the remaining sensing elements are inactive. The sensing elements are thus actuated only when their signals provide useful information. This approach reduces the background signal, thus enhancing the signal-to-background ratio, and lowers both the electrical power needs of the detector array and the number of data processing units that must be attached to the SPAD array.
One issue to be resolved in a depth mapping system of this sort is the choice of the sizes and locations of the super-pixels to be used. For accurate depth mapping, with a high signal/background ratio, it is important that the super-pixels contain the detector elements onto which most of the energy of the reflected beams is imaged, while the sensing elements that do not receive reflected beams remain inactive. Even when a static array of emitters is used, however, the locations of the reflected beams on the detector array can change, for example due to thermal and mechanical changes over time, as well as optical effects, such as parallax.
In response to this problem, some embodiments of the present invention provide methods for calibrating the locations of the laser spots on the SPAD array. For this purpose, processing and control circuitry receives timing signals from the array and searches over the sensing elements in order to identify the respective regions of the array on which the light pulses reflected from the target scene are incident. Detailed knowledge of the depth mapping system may be used in order to pre-compute likely regions of the reflected laser spots to be imaged onto the SPAD array. A random search in these regions will converge rapidly to the correct locations of the laser spots on the array. Alternatively or additionally, a small subset of the locations of laser spots can be identified in an initialization stage. These locations can be used in subsequent iterative stages to predict and verify the positions of further laser spots until a sufficient number of laser spots have been located.
Even following meticulous calibration, it can occur in operation of the depth mapping system that some of the pixels or super-pixels on which laser spots are expected to be imaged fail to output usable timing signals. In some cases, ancillary image data can be used to identify areas of the scene that are problematic in terms of depth mapping, and to recalibrate the super-pixel locations when necessary. This ancillary image data can be provided, for example, by a color image sensor, which captures two-dimensional (2D) images in registration with the SPAD array.
The emitter arrays used in the embodiments described below are “sparse,” in the sense that the number of pulsed beams of optical radiation that are emitted toward a target scene is substantially less than the number of pixels (i.e., SPADs or other sensing elements) in the array that receives the radiation reflected from the scene. The illumination power available from the emitter array is projected onto a correspondingly sparse grid of spots in the scene. The processing and control circuitry in the apparatus then receives and processes signals only from the pixels onto which these spots are imaged in order to measure depth coordinates.
The pixels onto which the spots are imaged are referred to herein as the “active pixels,” and the “super-pixels” are made up of groups of adjacent active pixels, for example 2×2 groups. The pixels in the array that fall between the active pixels are ignored, and need not be actuated or read out at all, as they do not contribute to the depth measurement and only increase the background level and noise. Alternatively, a different number of pixels, such as one, two, three, or more, may be included in a super-pixel. Furthermore, although the embodiments described herein relate specifically to rectangular super-pixels, the group of SPAD pixels in a super-pixel may have a different shape, such as, for example, a diamond, triangular, circular, or irregular shape. The exact location of the spot within the SPAD pixels varies slightly with the distance to the scene, due to a small amount of parallax. At any given time, the signals from the SPAD pixels of a super-pixel are processed together to measure, for a given laser spot, both its strength (intensity) and its time of flight. Additionally, the signals from the SPAD pixels may be processed as individual signals for determining the location of the laser spot within the super-pixel.
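By way of illustration only, the following sketch shows one plausible way of carrying out such per-pixel processing for a 2×2 super-pixel: the four photon counts are summed for the combined measurement and used individually in a center-of-mass estimate of the spot location. The computation, coordinates, and counts here are illustrative assumptions; the embodiments do not specify this particular calculation.

```python
import numpy as np

# Illustrative sketch (not the patented computation): the four photon
# counts of a 2x2 super-pixel are summed for the combined measurement and
# used individually to estimate the spot position by center of mass.
# Coordinates and counts are hypothetical.

def spot_centroid(origin_xy, counts_2x2):
    """Estimate the spot location within a 2x2 super-pixel by center of mass."""
    counts = np.asarray(counts_2x2, dtype=float)
    ys, xs = np.mgrid[0:2, 0:2]              # pixel offsets within the group
    total = counts.sum()
    cx = origin_xy[0] + (xs * counts).sum() / total
    cy = origin_xy[1] + (ys * counts).sum() / total
    return cx, cy, total                     # spot position and total signal

cx, cy, strength = spot_centroid((120, 84), [[30, 110], [25, 95]])
print(f"spot near ({cx:.2f}, {cy:.2f}), combined strength {strength:.0f}")
```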
An advantage of using a sparse emitter array is that the available power budget can be concentrated in the small number of projected spots, rather than being spread over the entire field of view of the sensing array. As a result of this concentration of optical power in a small number of spots, the signal levels from the corresponding active pixels—and thus the accuracy of ToF measurement by these pixels—are enhanced. This signal enhancement is particularly beneficial for long-range depth measurements and for depth mapping in conditions of strong ambient light, such as outdoors.
The concentration of optical power in a sparse array of spots can be further enhanced by arranging the emitters in multiple banks, and actuating these banks in alternation. The laser beams generated by the emitters are typically collimated by a collimating lens and may be replicated by a diffractive optical element (DOE) in order to increase the number of projected spots. The pulses of optical radiation emitted from the different banks of the emitters are incident, after reflection from the target scene, on different, respective sets of the active pixels. The processing and control circuitry can then receive and process the signals from the active pixels in these respective sets in synchronization with actuation of the corresponding banks of emitters. Thus, during any given period in the operation of the apparatus, the processing and control circuitry need receive and process the signals only from one active set of sensing elements, while all other sets remain inactive. This sort of multi-bank, synchronized operation makes it possible to time-multiplex processing resources among the different sets of sensing elements, and thus reduce circuit complexity and power consumption.
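Schematically, the synchronization between emitter banks and active pixel sets can be pictured as the following control loop. The bank and pixel-set structures are hypothetical placeholders; in the actual apparatus, actuation and readout are performed by hardware select lines rather than software.

```python
# Schematic sketch of multi-bank time multiplexing; the bank-to-pixel-set
# mapping below is a hypothetical placeholder for hardware select lines.
banks = {                      # emitter bank -> its set of active super-pixels
    "bank0": ["sp0", "sp1"],
    "bank1": ["sp2", "sp3"],
}

def acquire_frame(banks, fire, read):
    """Actuate the banks in alternation; read only the matching pixel set."""
    for bank, pixel_set in banks.items():
        fire(bank)                 # emit a pulse train from this bank only
        for sp in pixel_set:       # all other super-pixels remain inactive
            read(sp)

acquire_frame(banks,
              fire=lambda b: print(f"fire {b}"),
              read=lambda sp: print(f"  read {sp}"))
```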
Because the spots reflected from the target scene are imaged sparsely onto the SPAD array, the number of possible super-pixels is much larger than the number of laser spots, and only a small fraction of the total number of pixels in the SPAD array should be active at any given time and coupled to a processing unit for the purpose of measuring time-of-flight. Therefore, information is required as to which of the SPAD super-pixels to activate at any given time.
A mapping of SPAD pixels to processing units, i.e., the assignment of SPAD pixels to super-pixels, may be determined initially during a factory calibration.
However, temperature changes during operation, as well as mechanical shocks, may alter the mechanical parameters of the mapping, thus modifying the positions of the laser spots on the SPAD array and necessitating recalibration during operation in the field. An exhaustive search could be used to determine which of the SPAD pixels to connect to the processing units, wherein all pixels are searched to detect laser spots; but this approach suffers from at least two basic problems.
The embodiments of the present invention that are described herein address these problems by providing improved methods for calibrating the locations of the laser spots on the SPAD array. These methods can be applied not only in the sorts of arrays that are shown in the figures and described hereinbelow, but also in other SPAD-based systems, such as systems comprising multiple banks of SPADs, as well as SPADs of various sizes, and systems using various sorts of emitters and emitter arrays, including emitters whose beams are replicated by a DOE. The present methods can then be extended, mutatis mutandis, to multi-bank systems, by activating the SPAD pixels and performing the calibration bank by bank.
In a disclosed embodiment, detailed knowledge of the depth mapping system is utilized to pre-compute likely regions of the reflected laser spots to be imaged onto the SPAD array. A search in these regions, for example a random search, will converge rapidly to the correct locations of the laser spots on the array.
Another disclosed embodiment uses a two-stage solution: in an initialization stage, a small subset of the locations of laser spots is identified, and in a subsequent iterative stage, the positions of further laser spots are predicted by a model and verified. Iterative steps of spot detection are utilized to refine the model and add locations, until a sufficient number of laser spots have been located.
A receiver 23 in system 20 comprises a two-dimensional SPAD array 24, together with J processing units 28 and select lines 31 for coupling the processing units to the SPADs, along with a combining unit 35 and a controller 26. SPAD array 24 comprises a number of detector elements N that is much larger than M, for example, 100×100 pixels or 200×200 pixels. The number J of processing units 28 depends on the number of pixels of SPAD array 24 to which each processing unit is coupled, as will be further described hereinbelow.
Array 22, together with beam optics 37, emits M pulsed beams of light 30 towards a target scene 32.
A Cartesian coordinate system 33 defines the orientation of depth mapping system 20 and scene 32. The x-axis and the y-axis are oriented in the plane of SPAD array 24. The z-axis is perpendicular to the array and points to scene 32 that is imaged onto SPAD array 24.
For clarity, processing units 28 are shown as if separate from SPAD array 24, but they are commonly integrated with the SPAD array. Similarly, combining unit 35 is commonly integrated with SPAD array 24. Processing units 28, together with combining unit 35, comprise hardware amplification and logic circuits, which sense and record pulses output by the SPADs in respective super-pixels, and thus measure the times of arrival of the photons that gave rise to the pulses, as well as the strengths of the optical pulses impinging on SPAD array 24.
Controller 26 is coupled to both radiation source 21 and receiver 23. Controller 26 actuates the banks of emitters in array 22 in alternation to emit the pulsed beams. The controller also provides to the processing and combining units in receiver 23 an external control signal 29, and receives output signals from the processing and combining units. The output signals may comprise histogram data, and may be used by controller 26 to derive both times of incidence and signal strengths, as well as a precise location of each laser spot that is imaged onto SPAD array 24.
To make optimal use of the available sensing and processing resources, controller 26 identifies the respective areas of SPAD array 24 on which the pulses of optical radiation reflected from corresponding regions of target scene 32 are imaged by lens 34, and chooses the super-pixels to correspond to these areas. The signals output by sensing elements outside these areas are not used, and these sensing elements may thus be deactivated, for example by reducing or turning off the bias voltage to these sensing elements. Methods for choosing the super-pixels initially and for verifying and updating the selection of super-pixels are described, for example, in the above-mentioned provisional patent applications.
External control signal 29 controls select lines 31 so that each processing unit 28 is coupled to a respective super-pixel, comprising four SPAD pixels, for example. The control signal selects the super-pixels from which the output signals are to be received in synchronization with the actuation of the corresponding banks of emitters. Thus, at any given time, processing units 28 and combining unit 35 read and process the signals only from the sensing elements in the areas of SPAD array 24 that receive the reflected pulses from scene 32, while the remaining sensing elements in the array are inactive. The processing of the signals from SPAD array 24 is further described hereinbelow.
For clarity, the dimensions of emitter array 22 and SPAD array 24 have been exaggerated in the figures.
Of those M super-pixels that are activated and coupled to the J processing units 28, either all of them or a subset of m super-pixels, wherein m≤M, will receive a reflected laser beam 30. The magnitude of m depends on two factors: whether each of beams 30 is actually reflected from scene 32 back to SPAD array 24, and whether the locations of the reflected spots on the array have been calibrated accurately.
Even if all M laser beams 30 were to be reflected from scene 32, m will be less than M if SPAD array 24 is not properly calibrated. (Calibration procedures described in the above-mentioned provisional patent applications can be used to maximize m.) Consequently, controller 26 will receive signals indicating times of arrival and signal strengths from only m processing units 28. From the timing of the emission of beams 30 by VCSEL array 22 and the times of arrival measured by the m processing units 28, controller 26 calculates the times of flight of the m beams, and thus maps the distance to the corresponding m points on scene 32.
Controller 26 typically comprises a programmable processor, which is programmed in software and/or firmware to carry out the functions that are described herein. Alternatively or additionally, controller 26 comprises hard-wired and/or programmable hardware logic circuits, which carry out at least some of the functions of the controller. Although controller 26 is shown in the figure, for the sake of simplicity, as a single, monolithic functional block, in practice the controller may comprise a single chip or a set of two or more chips, with suitable interfaces for receiving and outputting the signals that are illustrated in the figure and are described in the text.
One of the functional units of controller 26 is a depth processing unit (DPU) 27, which processes signals from both processing units 28 and combining unit 35, as will be further described below. DPU 27 calculates the times of flight of the photons in each of beams 30, and thus maps the distance to the corresponding points in target scene 32. This mapping is based on the timing of the emission of beams 30 by emitter array 22 and on the times of arrival (i.e., times of incidence of reflected photons) measured by processing units 28. Controller 26 typically stores the depth coordinates in a memory, and may output the corresponding depth map for display and/or further processing.
To enable selection and switching among the different banks, array 22 is mounted on a driver chip 50, for example, a silicon chip with CMOS circuits for selecting and driving the individual VCSELs or banks of VCSELs. The banks of VCSELs in this case may be physically separated, for ease of fabrication and control, or they may be interleaved on the VCSEL chip, with suitable connections to driver chip 50 to enable actuating the banks in alternation. Thus, beams 30 likewise irradiate the target scene in a time-multiplexed pattern, with different sets of the beams impinging on the respective regions of the scene at different times.
As further alternatives to the pictured embodiments, array 22 may comprise a larger or smaller number of banks and emitters. Typically, for sufficient coverage of the target scene with static (non-scanned) beams, array 22 comprises at least four banks 52 or 62, with at least four emitters 54 in each bank, and possibly with a DOE for splitting the radiation emitted by each of the emitters.
For denser coverage, array 22 comprises at least eight banks 52 or 62, with twenty emitters 54 or more in each bank. These options enhance the flexibility of system 20 in terms of time-multiplexing of the optical and electrical power budgets, as well as processing resources.
At some later stage, however, spots 72 shifted to new locations 72b on array 24. This shift may have occurred, for example, due to mechanical shock or thermal effects or due to other causes. Spots 72 at locations 72b no longer overlap with super-pixels 80 in area 76, or overlap only minimally with the super-pixels. Sensing elements 78 on which the spots are now imaged, however, are inactive and are not connected to any of processing units 28. To rectify this situation, controller 26 may recalibrate the locations of super-pixels 80, as described in the above-mentioned provisional patent applications.
The time-of-arrival information from the four processing units 28 is aggregated by combining unit 35, using weights 144, to yield a single histogram 146 for super-pixel 80. This combined histogram 146 is sent to DPU 27, which in turn determines, based on histogram 146, whether any object or structure in scene 32 was detected by super-pixel 80 and, if so, reports its depth information based on time-of-flight data.
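By way of illustration, such a weighted combination of per-pixel histograms might look as follows, under the simplifying assumption that each processing unit delivers its time-bin histogram as an integer array; the weight values here are arbitrary, as the values of weights 144 are not specified in this description.

```python
import numpy as np

# Sketch of the weighted aggregation performed by combining unit 35, under
# the simplifying assumption that each processing unit delivers a time-bin
# histogram as an integer array; the weight values here are arbitrary.

def combine_histograms(histograms, weights):
    """Weighted sum of per-pixel ToF histograms into one super-pixel histogram."""
    combined = np.zeros_like(np.asarray(histograms[0]), dtype=float)
    for hist, weight in zip(histograms, weights):
        combined += weight * np.asarray(hist, dtype=float)
    return combined

rng = np.random.default_rng(1)
pixel_hists = [rng.poisson(2, 64) for _ in range(4)]   # four SPAD pixels
combined = combine_histograms(pixel_hists, weights=[1.0, 1.0, 0.8, 0.8])
print("candidate ToF bin:", int(combined.argmax()))
```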
Additionally, the respective numbers of events reported by the four processing units 28 may be separately summed in combining unit 35 over a predefined interval of arrival times to yield an indication of the received signal strength for that interval for each sensing element 78. Typically, the interval is configured to start after the end of a so-called “stray pulse” and to continue to the end of the histogram. A stray pulse is a pulse that is generated within system 20 as a result of, for example, an imperfect coating of an optical surface, which reflects the pulses emitted by VCSEL array 22 directly back into the optical path to SPAD array 24. It is typically an undesired pulse, but one that is very difficult to eliminate altogether. The stray pulse may be utilized for calibrating the timing signals as follows: the time of arrival of the stray pulse is recorded and subtracted from a subsequent timing signal due to a laser pulse that has been reflected by scene 32. This subtraction yields a relative time-of-flight for the received laser pulse, and compensates for any random firing delays of VCSEL array 22, as well as for most of the VCSEL and SPAD drifts related to temperature changes.
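The stray-pulse subtraction just described reduces, in effect, to a reference-time correction; a minimal numerical sketch, with hypothetical timing values in nanoseconds, is shown below.

```python
# Minimal numerical sketch of the stray-pulse correction; the times below
# are hypothetical values in nanoseconds.
t_stray = 1.7     # arrival time of the internally reflected (stray) pulse
t_scene = 21.9    # arrival time of the pulse reflected from the scene

relative_tof = t_scene - t_stray     # cancels common emitter/detector delays
depth_m = 0.5 * 0.299792458 * relative_tof  # c ~ 0.2998 m/ns; round trip halved
print(f"relative ToF {relative_tof:.1f} ns -> depth {depth_m:.2f} m")
```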
These four indicators of signal strength are also transferred to DPU 27 (in conjunction with combined histogram 146). The indicators may be used by DPU 27 to determine a precise location of the spot on sensing elements 78.
In some embodiments, the four units of TDC 143, as well as combining unit 35, reside in the same chip as SPAD array 24, while the rest of the signal processing, including DPU 27, resides in separate controller 26. A major reason for generating a single combined histogram 146 for super-pixel 80 is to reduce the amount of information that is transferred from SPAD array 24 to DPU 27 and to controller 26. The partitioning into two separate units reflects the fact that SPAD array 24 and the associated units perform primarily optical and analog functions, while controller 26 performs mostly digital and software-driven operations.
Super-Pixel Calibration by Search in Precomputed Regions
Alternatively, however, the principles of this method may be applied, mutatis mutandis, in other depth mapping systems of similar configuration. For example, VCSEL array 22 could be replaced by a single laser (or a small number of lasers), with a beamsplitting element, such as a diffractive optical element (DOE), to split the laser output into multiple beams. As another example, other types of sensing arrays, comprising other sorts of detector elements, could be used in place of SPADs.
In this method, controller 26 receives as inputs the design parameters of depth mapping system 20, together with their assembly and operational tolerances.
The above inputs include multiple parameters. For example, a typical focal length of collection lens 34 has a nominal value of 2 mm, an assembly tolerance of 0.1 mm, and an operational tolerance of 0.05 mm. Each tolerance is normally distributed around zero, with a standard deviation equal to the above tolerance. The probability distribution of the focal length is thus a normal distribution centered at the nominal value of 2 mm, whose variance is the sum of the variances of the two tolerance distributions. An additional example of a parameter is the baseline between VCSEL array 22 and SPAD array 24. The multiple parameters, such as the two examples described above, allow controller 26 to model accurately the optical path taken by the laser pulses and thus to calculate the locations where the spots impinge on SPAD array 24.
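For illustration, the combination of such tolerances can be sketched as follows: independent, zero-mean normal tolerances add in variance, and the resulting focal-length distribution can be propagated to an uncertainty in spot location. The beam angle and pixel pitch used in the propagation are hypothetical values, not parameters of any embodiment.

```python
import numpy as np

# Sketch of combining the tolerances described above: independent,
# zero-mean normal tolerances add in variance. The beam angle and pixel
# pitch used to propagate the result to a spot location are hypothetical.
nominal_f_mm = 2.0
assembly_tol_mm = 0.10       # one standard deviation
operational_tol_mm = 0.05    # one standard deviation

sigma_f_mm = np.hypot(assembly_tol_mm, operational_tol_mm)  # sqrt of variance sum
print(f"focal length ~ N({nominal_f_mm} mm, sigma = {sigma_f_mm:.3f} mm)")

# Monte Carlo propagation to the spot coordinate on the SPAD array:
rng = np.random.default_rng(0)
f_samples = rng.normal(nominal_f_mm, sigma_f_mm, 10_000)
beam_angle_rad = 0.1         # hypothetical beam angle
pitch_mm = 0.01              # hypothetical 10 um pixel pitch
spot_x_px = f_samples * np.tan(beam_angle_rad) / pitch_mm
print(f"spot-x spread: {spot_x_px.std():.2f} pixels (1 sigma)")
```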
Based on these inputs, controller 26 calculates a search region for each of the M laser spots expected on SPAD array 24, in a pre-computation step 156.
Once the search regions have been chosen in pre-computation step 156, controller 26, in a random iterative search step 158, fires a succession of pulses of beams 30 from VCSEL array 22 and searches randomly over the pixels within each search region for the corresponding laser spot.
Alternatively, controller 26 may apply other search strategies, not necessarily random, within the search regions. During step 158, each processing unit 28 is coupled to receive signals from a different pixel following each laser pulse or sequence of multiple pulses, and controller 26 checks, using DPU 27, which pixels have output signals due to an incident photon, and which have not. Based on the results, controller 26 selects the pixels to include in each super-pixel as those on which the photons were found to be incident. In simulations, the search was found to converge within a succession of 8-10 repeated sequences of pulsed beams 30, and thus to identify the M super-pixels of SPAD array 24 that receive the M beams.
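A schematic rendering of random iterative search step 158 is shown below; detect_hit() stands in for firing a pulse sequence and querying the corresponding processing unit, and the region coordinates are arbitrary examples.

```python
import random

# Schematic sketch of random iterative search step 158; detect_hit() is a
# placeholder for firing a pulse sequence and querying the processing
# units, and the region coordinates are arbitrary examples.

def find_spot(region_pixels, detect_hit, max_rounds=10):
    """Randomly probe the pixels of one search region until a hit is found."""
    candidates = list(region_pixels)
    for _ in range(max_rounds):
        random.shuffle(candidates)
        for pixel in candidates:
            if detect_hit(pixel):      # photon event reported for this pixel?
                return pixel
    return None                        # no spot found within this region

region = [(x, y) for x in range(118, 124) for y in range(80, 86)]
print("spot located at", find_spot(region, detect_hit=lambda p: p == (121, 83)))
```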
Once controller 26 has found the M super-pixels, it finishes the search and assigns, in an assignment step 160, these super-pixels for use in 3D mapping of scene 32 by depth mapping system 20.
These potential candidates are coupled to respective processing units 28, in a candidate processing step 202. In a first detection step 204, controller 26 fires a sequence of pulses of beams 30 from VCSEL array 22 and queries processing units 28 and combining unit 35 to find out how many of the m0 process candidates on SPAD array 24 reported “hits,” i.e., output signals indicating that they had received photons. In a first comparison step 206, controller 26 checks whether the number of reported hits in first detection step 204 exceeds a first preset threshold, for example 8% of M (if initially 10% of M were selected as process candidates).
If the number of hits was below the threshold, controller 26 searches, in a search step 208, for hits in the areas around the process candidates by firing successive pulses of beams 30 from VCSEL array 22 and performing a single-pixel search around the candidates. After new hits have been identified, the previous process candidates from candidate processing step 202 are replaced by the new hits, and steps 204 and 206 are repeated until the number of detected hits in first comparison step 206 exceeds the first preset threshold.
The detected hits in first comparison step 206 are used by controller 26 to build a model in a modeling step 210. The model expresses the deviation of the locations of the hits in SPAD array 24 relative to their nominal locations, i.e., the locations where the reflected laser beams were expected to be incident on SPAD array 24 according to the design geometry of system 20, for example. The model may be a quadratic model, a simplified pinhole camera model, or a homographic model, for example, and it may take into account system tolerances as previously described.
A homographic model h is an eight-parameter transformation (h_1, …, h_8), mapping a point p=(x,y) to another point p′=(x′,y′) through the relation:

x′ = (h_1x + h_2y + h_3)/(h_7x + h_8y + 1)

y′ = (h_4x + h_5y + h_6)/(h_7x + h_8y + 1)
The coordinates x and y refer to Cartesian coordinate system 33, defined above.
A quadratic model is given by:

x′ = a_1 + b_1x + c_1y + d_1x² + e_1y² + f_1xy

y′ = a_2 + b_2x + c_2y + d_2x² + e_2y² + f_2xy
The equations of a simplified pinhole camera model are as follows: Given a point (x,y,z) in Cartesian coordinate system 33, we first compute the undistorted image coordinates:

x_u = c_x + f·(x/z), y_u = c_y + f·(y/z)
We then apply a distortion operation to obtain the final image coordinates:

x_d = c_x + (x_u − c_x)·p(r), y_d = c_y + (y_u − c_y)·p(r),

where

r = √((x_u − c_x)² + (y_u − c_y)²)

and p(r) = 1 + k_1r² + k_2r⁴ + k_3r⁶ is a distortion polynomial. The parameters of the model are, therefore, the following constants: f, c_x, c_y, k_1, k_2, k_3 (see G. Bradski and A. Kaehler, Learning OpenCV, 1st edition, O'Reilly Media, Inc., Sebastopol, California, 2008).
Additionally or alternatively, other models, such as splines or more elaborate models that describe the optics of system 20 to a higher degree of complexity, may be employed.
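By way of illustration only, the three models defined above can be realized in a few lines of code. The function names and the parameter values in the demonstration calls are placeholders, not fitted values from any embodiment.

```python
import numpy as np

# Illustrative sketches of the three deviation models defined above,
# mapping a nominal location to an observed location. The parameter
# values used in the demonstration calls are arbitrary placeholders.

def homographic(p, h):
    """Eight-parameter homography h = (h_1, ..., h_8) applied to p = (x, y)."""
    x, y = p
    d = h[6] * x + h[7] * y + 1.0
    return ((h[0] * x + h[1] * y + h[2]) / d,
            (h[3] * x + h[4] * y + h[5]) / d)

def quadratic(p, a, b):
    """Quadratic model; a and b hold the six coefficients for x' and y'."""
    x, y = p
    basis = np.array([1.0, x, y, x * x, y * y, x * y])
    return float(a @ basis), float(b @ basis)

def pinhole(point_xyz, f, cx, cy, k1, k2, k3):
    """Simplified pinhole camera with radial distortion polynomial p(r)."""
    x, y, z = point_xyz
    xu, yu = cx + f * x / z, cy + f * y / z       # undistorted coordinates
    r = np.hypot(xu - cx, yu - cy)
    pr = 1.0 + k1 * r**2 + k2 * r**4 + k3 * r**6  # distortion polynomial
    return cx + (xu - cx) * pr, cy + (yu - cy) * pr

print(homographic((10.0, 5.0), [1, 0, 0.5, 0, 1, -0.2, 0, 0]))
print(quadratic((10.0, 5.0),
                a=np.array([0.3, 1.0, 0, 0, 0, 0]),
                b=np.array([-0.2, 0, 1.0, 0, 0, 0])))
print(pinhole((0.5, 0.2, 3.0), f=2.0, cx=100.0, cy=100.0,
              k1=1e-4, k2=0.0, k3=0.0))
```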
Based on one of the above models, controller 26 predicts the locations of a number of new process candidates, by applying the model to the nominal locations of additional pixels where other reflected laser beams were expected to be incident, in a candidate addition step 212, now making up a total of m1 process candidates. Typically, m1 increases in each iteration of candidate addition step 212. In a second detection step 214, controller 26 fires an additional sequence of pulses of beams 30 from VCSEL array 22 and queries how many of the m1 process candidates on SPAD array 24 have reported hits.
In a second comparison step 216, controller 26 compares the relative number of hits (the ratio between the hits and the total number M of pulsed beams 30) to a second preset threshold. This latter threshold is typically set to a high value, corresponding to a situation in which the large majority of beams 30 are successfully received by corresponding super-pixels. If the relative number of hits is less than the second preset threshold, controller 26 adjusts the model in modeling step 210 based on the detected hits. The adjustment of the model includes recalculating the model coefficients, as well as, where required, increasing the complexity of the model. In an iterative process of, for example, 5-8 loops, controller 26 adds new process candidates based on the model in candidate addition step 212, queries hits in second detection step 214, and compares their relative number to the second preset threshold in second comparison step 216. As long as the relative number of hits does not exceed the second preset threshold, controller 26 keeps looping back to modeling step 210, improving the model based on the new hits.
Once the relative number of detected hits exceeds the second preset threshold, controller 26 finishes the search and assigns, in an assignment step 218, the detected hits for use in 3D mapping of scene 32 by depth mapping system 20.
If the number of detected hits at second detection step 214 does not increase at a given stage of the iteration and is still too low, controller 26 may initiate a single-pixel offset search in an offset search step 222. In offset search step 222, a search for the as-yet undetected laser spots is performed with a single-pixel offset around their expected locations.
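The overall flow of steps 210 through 218 can be summarized schematically as follows; fit_model and fire_and_query stand in for the modeling and detection steps described above, and the threshold, iteration count, and demonstration data are arbitrary placeholders.

```python
# Schematic summary of steps 210-218; fit_model and fire_and_query stand
# in for the modeling and detection steps, and all values are placeholders.

def calibrate(initial_hits, nominal_locations, fire_and_query, fit_model,
              hit_ratio_target=0.95, max_iterations=8):
    """Iteratively refine a deviation model until enough spots are located."""
    hits = dict(initial_hits)          # nominal location -> detected location
    for _ in range(max_iterations):
        model = fit_model(hits)        # e.g., quadratic or homographic fit
        candidates = {n: model(n) for n in nominal_locations if n not in hits}
        hits.update(fire_and_query(candidates))   # fire pulses, query hits
        if len(hits) / len(nominal_locations) >= hit_ratio_target:
            break                      # enough laser spots have been located
    return hits

nominal = [(i, j) for i in range(4) for j in range(4)]
truth = {n: (n[0] + 0.3, n[1] - 0.2) for n in nominal}   # hypothetical shift

located = calibrate(
    initial_hits={n: truth[n] for n in nominal[:3]},
    nominal_locations=nominal,
    fire_and_query=lambda cands: {n: truth[n] for n, p in cands.items()
                                  if abs(p[0] - truth[n][0]) < 1.0},
    fit_model=lambda hits: (lambda n: (n[0] + 0.3, n[1] - 0.2)),  # placeholder
)
print(f"{len(located)} of {len(nominal)} spots located")
```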
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application claims the benefit of U.S. Provisional Patent Application 62/803,612, filed Feb. 11, 2019, and U.S. Provisional Patent Application 62/809,647, filed Feb. 24, 2019. This application is related to U.S. patent application Ser. No. 16/532,527, filed Aug. 6, 2019 (now U.S. Pat. No. 10,955,234), entitled “Calibration of depth sensing using a sparse array of pulsed beams”. All of these related applications are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4623237 | Kaneda et al. | Nov 1986 | A |
4757200 | Shepherd | Jul 1988 | A |
5164823 | Keeler | Nov 1992 | A |
5270780 | Moran et al. | Dec 1993 | A |
5373148 | Dvorkis et al. | Dec 1994 | A |
5699149 | Kuroda et al. | Dec 1997 | A |
6301003 | Shirai et al. | Oct 2001 | B1 |
6384903 | Fuller | May 2002 | B1 |
6710859 | Shirai et al. | Mar 2004 | B2 |
7126218 | Darveaux et al. | Oct 2006 | B1 |
7193690 | Ossig et al. | Mar 2007 | B2 |
7303005 | Reis et al. | Dec 2007 | B2 |
7405812 | Bamji | Jul 2008 | B1 |
7508496 | Mettenleiter et al. | Mar 2009 | B2 |
7800067 | Rajavel et al. | Sep 2010 | B1 |
7800739 | Rohner et al. | Sep 2010 | B2 |
7812301 | Oike et al. | Oct 2010 | B2 |
7969558 | Hall | Jun 2011 | B2 |
8193482 | Itzler | Jun 2012 | B2 |
8259293 | Andreou | Sep 2012 | B2 |
8275270 | Shushakov et al. | Sep 2012 | B2 |
8279418 | Yee et al. | Oct 2012 | B2 |
8355117 | Niclass | Jan 2013 | B2 |
8405020 | Menge | Mar 2013 | B2 |
8675181 | Hall | Mar 2014 | B2 |
8736818 | Weimer et al. | May 2014 | B2 |
8766164 | Sanfilippo et al. | Jul 2014 | B2 |
8766808 | Hogasten | Jul 2014 | B2 |
8925814 | Schneider et al. | Jan 2015 | B1 |
8963069 | Drader et al. | Feb 2015 | B2 |
9002511 | Dickerson et al. | Apr 2015 | B1 |
9024246 | Jiang et al. | May 2015 | B2 |
9052356 | Chu et al. | Jun 2015 | B2 |
9076707 | Harmon | Jul 2015 | B2 |
9016849 | Duggal et al. | Aug 2015 | B2 |
9335220 | Shpunt et al. | May 2016 | B2 |
9354332 | Zwaans et al. | May 2016 | B2 |
9465111 | Wilks et al. | Oct 2016 | B2 |
9516248 | Cohen et al. | Dec 2016 | B2 |
9709678 | Matsuura | Jul 2017 | B2 |
9736459 | Mor et al. | Aug 2017 | B2 |
9739881 | Pavek et al. | Aug 2017 | B1 |
9761049 | Naegle et al. | Sep 2017 | B2 |
9786701 | Mellot et al. | Oct 2017 | B2 |
9810777 | Williams et al. | Nov 2017 | B2 |
9874635 | Eichenholz et al. | Jan 2018 | B1 |
9997551 | Mandai | Jun 2018 | B2 |
10063844 | Adam et al. | Aug 2018 | B2 |
10067224 | Moore et al. | Sep 2018 | B2 |
10132616 | Wang | Nov 2018 | B2 |
10215857 | Oggier et al. | Feb 2019 | B2 |
10269104 | Hannuksela et al. | Apr 2019 | B2 |
10386487 | Wilton et al. | Aug 2019 | B1 |
10424683 | Do Valle et al. | Sep 2019 | B1 |
10613203 | Rekow et al. | Apr 2020 | B1 |
10782393 | Dussan et al. | Sep 2020 | B2 |
11693102 | Kudla et al. | Jul 2023 | B2 |
20010020673 | Zappa et al. | Sep 2001 | A1 |
20020071126 | Shirai et al. | Jun 2002 | A1 |
20020131035 | Watanabe et al. | Sep 2002 | A1 |
20020154054 | Small | Oct 2002 | A1 |
20020186362 | Shirai et al. | Dec 2002 | A1 |
20040051859 | Flockencier | Mar 2004 | A1 |
20040135992 | Munro | Jul 2004 | A1 |
20040212863 | Schanz et al. | Oct 2004 | A1 |
20050018200 | Guillermo et al. | Jan 2005 | A1 |
20060044546 | Lewin et al. | Mar 2006 | A1 |
20060106317 | McConnell et al. | May 2006 | A1 |
20060176469 | O'Connor et al. | Aug 2006 | A1 |
20070145136 | Wiklof et al. | Jun 2007 | A1 |
20070164004 | Matsuda et al. | Jul 2007 | A1 |
20080231498 | Menzer et al. | Sep 2008 | A1 |
20090009747 | Wolf et al. | Jan 2009 | A1 |
20090262760 | Krupkin et al. | Oct 2009 | A1 |
20090273770 | Bauhahn et al. | Nov 2009 | A1 |
20090275841 | Melendez et al. | Nov 2009 | A1 |
20100019128 | Itzler | Jan 2010 | A1 |
20100045965 | Meneely | Feb 2010 | A1 |
20100096459 | Gurevich | Apr 2010 | A1 |
20100121577 | Zhang et al. | May 2010 | A1 |
20100250189 | Brown | Sep 2010 | A1 |
20100286516 | Fan et al. | Nov 2010 | A1 |
20100309288 | Stettner et al. | Dec 2010 | A1 |
20110006190 | Alameh et al. | Jan 2011 | A1 |
20110128524 | Vert et al. | Jun 2011 | A1 |
20110181864 | Schmitt et al. | Jul 2011 | A1 |
20110279366 | Lohbihler | Nov 2011 | A1 |
20120038904 | Fossum et al. | Feb 2012 | A1 |
20120075615 | Niclass et al. | Mar 2012 | A1 |
20120132636 | Moore | May 2012 | A1 |
20120153120 | Baxter | Jun 2012 | A1 |
20120154542 | Katz et al. | Jun 2012 | A1 |
20120176476 | Schmidt et al. | Jul 2012 | A1 |
20120249998 | Eisele et al. | Oct 2012 | A1 |
20120287242 | Gilboa et al. | Nov 2012 | A1 |
20120294422 | Cheung et al. | Nov 2012 | A1 |
20130015331 | Birk et al. | Jan 2013 | A1 |
20130079639 | Hoctor et al. | Mar 2013 | A1 |
20130092846 | Henning et al. | Apr 2013 | A1 |
20130107016 | Federspiel | May 2013 | A1 |
20130208258 | Eisele et al. | Aug 2013 | A1 |
20130236171 | Saunders | Sep 2013 | A1 |
20130258099 | Ovsiannikov et al. | Oct 2013 | A1 |
20130278917 | Korekado et al. | Oct 2013 | A1 |
20130300838 | Borowski | Nov 2013 | A1 |
20130342835 | Blacksberg | Dec 2013 | A1 |
20140027606 | Raynor et al. | Jan 2014 | A1 |
20140071433 | Eisele et al. | Mar 2014 | A1 |
20140077086 | Batkilin et al. | Mar 2014 | A1 |
20140078491 | Eisele et al. | Mar 2014 | A1 |
20140162714 | Kim et al. | Jun 2014 | A1 |
20140191115 | Webster et al. | Jul 2014 | A1 |
20140198198 | Geissbuehler et al. | Jul 2014 | A1 |
20140231630 | Rae et al. | Aug 2014 | A1 |
20140240317 | Go et al. | Aug 2014 | A1 |
20140240691 | Mheen et al. | Aug 2014 | A1 |
20140268127 | Day | Sep 2014 | A1 |
20140300907 | Kimmel | Oct 2014 | A1 |
20140321862 | Frohlich et al. | Oct 2014 | A1 |
20140353471 | Raynor et al. | Dec 2014 | A1 |
20150041625 | Dutton et al. | Feb 2015 | A1 |
20150062558 | Koppal et al. | Mar 2015 | A1 |
20150131080 | Retterath et al. | May 2015 | A1 |
20150163429 | Dai et al. | Jun 2015 | A1 |
20150192676 | Kotelnikov et al. | Jul 2015 | A1 |
20150200222 | Webster | Jul 2015 | A1 |
20150200314 | Webster | Jul 2015 | A1 |
20150204978 | Hammes et al. | Jul 2015 | A1 |
20150229912 | Masalkar | Aug 2015 | A1 |
20150260830 | Ghosh et al. | Sep 2015 | A1 |
20150285625 | Deane et al. | Oct 2015 | A1 |
20150362585 | Ghosh et al. | Dec 2015 | A1 |
20150373322 | Goma et al. | Dec 2015 | A1 |
20160003944 | Schmidtke et al. | Jan 2016 | A1 |
20160041266 | Smits | Feb 2016 | A1 |
20160072258 | Seurin et al. | Mar 2016 | A1 |
20160080709 | Viswanathan et al. | Mar 2016 | A1 |
20160182101 | Marcovic et al. | Jun 2016 | A1 |
20160259038 | Retterath et al. | Sep 2016 | A1 |
20160259057 | Ito | Sep 2016 | A1 |
20160274222 | Yeun | Sep 2016 | A1 |
20160334508 | Hall et al. | Nov 2016 | A1 |
20160344965 | Grauer | Nov 2016 | A1 |
20170006278 | Vandame et al. | Jan 2017 | A1 |
20170038459 | Kubacki et al. | Feb 2017 | A1 |
20170052065 | Sharma et al. | Feb 2017 | A1 |
20170067734 | Heidemann et al. | Mar 2017 | A1 |
20170068393 | Viswanathan et al. | Mar 2017 | A1 |
20170131388 | Campbell et al. | May 2017 | A1 |
20170131718 | Matsumura et al. | May 2017 | A1 |
20170139041 | Drader et al. | May 2017 | A1 |
20170176577 | Halliday | Jun 2017 | A1 |
20170176579 | Niclass | Jun 2017 | A1 |
20170179173 | Mandai et al. | Jun 2017 | A1 |
20170184450 | Doylend et al. | Jun 2017 | A1 |
20170184704 | Yang et al. | Jun 2017 | A1 |
20170184709 | Kenzler et al. | Jun 2017 | A1 |
20170188016 | Hudman | Jun 2017 | A1 |
20170219695 | Hall et al. | Aug 2017 | A1 |
20170242102 | Dussan et al. | Aug 2017 | A1 |
20170242108 | Dussan et al. | Aug 2017 | A1 |
20170257617 | Retterath | Sep 2017 | A1 |
20170269209 | Hall et al. | Sep 2017 | A1 |
20170303789 | Tichauer et al. | Oct 2017 | A1 |
20170329010 | Warke et al. | Nov 2017 | A1 |
20170343675 | Oggier et al. | Nov 2017 | A1 |
20170356796 | Nishio | Dec 2017 | A1 |
20170356981 | Yang et al. | Dec 2017 | A1 |
20180045816 | Jarosinski et al. | Feb 2018 | A1 |
20180059220 | Irish et al. | Mar 2018 | A1 |
20180062345 | Bills et al. | Mar 2018 | A1 |
20180081030 | McMahon et al. | Mar 2018 | A1 |
20180081032 | Torruellas et al. | Mar 2018 | A1 |
20180081041 | Niclass et al. | Mar 2018 | A1 |
20180115762 | Bulteel et al. | Apr 2018 | A1 |
20180131449 | Kare et al. | May 2018 | A1 |
20180167602 | Pacala et al. | Jun 2018 | A1 |
20180203247 | Chen et al. | Jul 2018 | A1 |
20180205943 | Trail | Jul 2018 | A1 |
20180209846 | Mandai et al. | Jul 2018 | A1 |
20180259645 | Shu et al. | Sep 2018 | A1 |
20180299554 | Van Dyck et al. | Oct 2018 | A1 |
20180341009 | Niclass et al. | Nov 2018 | A1 |
20190004156 | Niclass et al. | Jan 2019 | A1 |
20190011556 | Pacala et al. | Jan 2019 | A1 |
20190011567 | Pacala et al. | Jan 2019 | A1 |
20190018117 | Perenzoni et al. | Jan 2019 | A1 |
20190018118 | Perenzoni et al. | Jan 2019 | A1 |
20190018119 | Laifenfeld et al. | Jan 2019 | A1 |
20190018143 | Thayer et al. | Jan 2019 | A1 |
20190037120 | Ohki | Jan 2019 | A1 |
20190056497 | Pacala et al. | Feb 2019 | A1 |
20190094364 | Fine et al. | Mar 2019 | A1 |
20190170855 | Keller et al. | Jun 2019 | A1 |
20190178995 | Tsai et al. | Jun 2019 | A1 |
20190257950 | Patanwala et al. | Aug 2019 | A1 |
20190277952 | Beuschel et al. | Sep 2019 | A1 |
20190361404 | Mautner et al. | Nov 2019 | A1 |
20200142033 | Shand | May 2020 | A1 |
20200233068 | Henderson et al. | Jul 2020 | A1 |
20200249324 | Steinberg et al. | Aug 2020 | A1 |
20200314294 | Schoenlieb et al. | Oct 2020 | A1 |
20220146647 | Sakazume | May 2022 | A1 |
Number | Date | Country |
---|---|---|
2605339 | Oct 1994 | CA |
201054040 | Apr 2008 | CN |
101401107 | Apr 2009 | CN |
103763485 | Apr 2014 | CN |
103983979 | Aug 2014 | CN |
104730535 | Jun 2015 | CN |
104914446 | Sep 2015 | CN |
105992960 | Oct 2016 | CN |
106405572 | Feb 2017 | CN |
110609293 | Dec 2019 | CN |
202013101039 | Mar 2014 | DE |
102015013710 | Apr 2017 | DE |
2157445 | Feb 2010 | EP |
2322953 | May 2011 | EP |
2469297 | Jun 2012 | EP |
2477043 | Jul 2012 | EP |
2827175 | Jan 2015 | EP |
3285087 | Feb 2018 | EP |
3318895 | May 2018 | EP |
3370080 | Sep 2018 | EP |
3521856 | Aug 2019 | EP |
H02287113 | Nov 1990 | JP |
H0567195 | Mar 1993 | JP |
09197045 | Jul 1997 | JP |
H10170637 | Jun 1998 | JP |
H11063920 | Mar 1999 | JP |
2011089874 | May 2011 | JP |
2011237215 | Nov 2011 | JP |
2013113669 | Jun 2013 | JP |
2014059301 | Apr 2014 | JP |
2020197457 | Dec 2020 | JP |
7383558 | Nov 2023 | JP |
101318951 | Oct 2013 | KR |
202343020 | Nov 2023 | TW |
9008946 | Aug 1990 | WO |
2007144565 | Dec 2007 | WO |
2010149593 | Dec 2010 | WO |
2012154356 | Nov 2012 | WO |
2013028691 | Feb 2013 | WO |
2015199615 | Dec 2015 | WO |
2016034408 | Mar 2016 | WO |
2017106875 | Jun 2017 | WO |
2018122560 | Jul 2018 | WO |
2020101576 | May 2020 | WO |
2020109378 | Jun 2020 | WO |
2020201452 | Oct 2020 | WO |
2022244322 | Nov 2022 | WO |
Entry |
---|
U.S. Appl. No. 15/844,665 office action dated Jun. 1, 2020. |
U.S. Appl. No. 15/950,186 office action dated Jun. 23, 2020. |
International Application # PCT/US2020/058760 Search Report dated Feb. 9, 2021. |
TW Application # 109119267 Office Action dated Mar. 10, 2021. |
U.S. Appl. No. 16/752,653 Office Action dated Apr. 5, 2021. |
Charbon et al., “SPAD-Based Sensors”, TOF Range-Imaging Cameras, Springer-Verlag, pp. 11-38, year 2013. |
Niclass et al., “A 0.18 um CMOS SoC for a 100m range, 10 fps 200×96 pixel Time of Flight depth sensor”, IEEE International Solid-State Circuits Conference (ISSCC), Session 27, Image Sensors, 27.6, pp. 488-490, Feb. 20, 2013. |
Walker et al., “A 128×96 pixel event-driven phase-domain ΔΣ-based fully digital 3D camera in 0.13μm CMOS imaging technology”, IEEE International Solid-State Circuits Conference (ISSCC), Session 23, Image Sensors, 23.6, pp. 410-412, Feb. 23, 2011. |
Niclass et al., “Design and characterization of a 256×64-pixel single-photon imager in CMOS for a MEMS-based laser scanning time-of-flight sensor”, Optics Express, vol. 20, issue 11, pp. 11863-11881, May 21, 2012. |
Kota et al., “System Design and Performance Characterization of a MEMS-Based Laser Scanning Time-of-Flight Sensor Based on a 256 × 64-pixel Single-Photon Imager”, IEEE Photonics Journal, vol. 5, issue 2, pp. 1-15, Apr. 2013. |
Webster et al., “A silicon photomultiplier with >30% detection efficiency from 450-750nm and 11.6μm pitch NMOS-only pixel with 21.6% fill factor in 130nm CMOS”, Proceedings of the European Solid-State Device Research Conference (ESSDERC), pp. 238-241, Sep. 7-21, 2012. |
Roth et al., U.S. Appl. No. 16/532,517, filed Aug. 6, 2019. |
Buttgen et al., “Pseudonoise Optical Modulation for Real-Time 3-D Imaging With Minimum Interference”, IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 54, Issue 10, pp. 2109-2119, Oct. 1, 2007. |
Bradski et al., “Learning OpenCV”, first edition, pp. 1-50, O'Reilly Media, Inc, California, USA, year 2008. |
CN Application # 201680074428.8 Office Action dated Jun. 23, 2021. |
Zhu Jian, “Research of Simulation of Super-Resolution Reconstruction of Infrared Image”, abstract page, Master's Thesis, p. 1, Nov. 15, 2005. |
U.S. Appl. No. 15/586,286 office action dated Dec. 2, 2019. |
International Application # PCT/US2019/45187 search report dated Nov. 15, 2019. |
U.S. Appl. No. 16/752,653 Office Action dated Oct. 1, 2021. |
EP Application # 17737420.4 Office Action dated Oct. 28, 2021. |
KR Application # 1020200068248 Office Action dated Nov. 12, 2021. |
KR Application # 1020207015906 Office Action dated Oct. 13, 2021. |
JP Application # 2020001203 Office Action dated Feb. 4, 2021. |
U.S. Appl. No. 16/752,653 Office Action dated Feb. 4, 2021. |
Morbi et al., “Short range spectral lidar using mid-infrared semiconductor laser with code-division multiplexing technique”, Technical Digest, CLEO 2001, pp. 491-492, May 2001. |
Ai et al., “High-resolution random-modulation cw lidar”, Applied Optics, vol. 50, issue 22, pp. 4478-4488, Jul. 28, 2011. |
Chung et al., “Optical orthogonal codes: design, analysis and applications”, IEEE Transactions on Information Theory, vol. 35, issue 3, pp. 595-604, May 1989. |
Lin et al., “Chaotic lidar”, IEEE Journal of Selected Topics in Quantum Electronics, vol. 10, issue 5, pp. 991-997, Sep.-Oct. 2004. |
CN Utility Model Patent # 201520865151.7 UMPER report dated Oct. 9, 2019. |
International application PCT/US2019/45188 Search report dated Oct. 21, 2019. |
EP Application No. 20177707 Search Report dated Sep. 29, 2020. |
U.S. Appl. No. 15/586,286 office action dated Feb. 24, 2020. |
U.S. Appl. No. 16/532,517 Office Action dated Oct. 14, 2020. |
EP Application # 20177707.5 Search Report dated Nov. 12, 2020. |
U.S. Appl. No. 16/679,360 Office Action dated Jun. 29, 2022. |
EP Application # 22167103.5 Search Report dated Jul. 11, 2022. |
CN Application # 201780058088.4 Office Action dated Aug. 23, 2022. |
U.S. Appl. No. 16/885,316 Office Action dated Jun. 30, 2022. |
IN Application # 202117029897 Office Action dated Mar. 10, 2022. |
IN Application # 202117028974 Office Action dated Mar. 2, 2022. |
CN Application # 201810571820.4 Office Action dated Sep. 9, 2022. |
KR Application # 1020220101419 Office Action dated Sep. 28, 2022. |
U.S. Appl. No. 17/026,365 Office Action dated Nov. 7, 2022. |
U.S. Appl. No. 17/079,548 Office Action dated Mar. 3, 2023. |
CN Application # 201780097602.5 Office Action dated Mar. 15, 2023. |
CN Application # 202010063812.6 Office Action dated Mar. 18, 2023. |
KR Application # 1020217025136 Office Action dated Apr. 4, 2023. |
U.S. Appl. No. 17/026,365 Office Action dated Jan. 26, 2023. |
Zhu et al., “Measurement Method for Real-Time Transmission of Optical Signal Based on Single Photon Detection,” Chinese Journal of Lasers, vol. 43, No. 2, pp. 1-6, year 2016. |
Yang, “The Study of Phase-Demodulation Range-Finding Techniques Based on SPAD,” Chinese Master's Thesis Full-text Database, Engineering Science and Technology, Xiangtan University, pp. 1-63, May 2016. |
Zhang, “Structured Light Based Fast and High Accuracy Depth Sensing,” China Doctoral Dissertations Full-text Database, Information Science and Technology, University of Science and Technology of China, pp. 1-110, Apr. 2015. |
Ionescu et al., “A 3D NIR Camera for Gesture Control of Video Game Consoles,” Conference Paper, 2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), pp. 1-5, year 2014. |
U.S. Appl. No. 16/769,346 Office Action dated Aug. 3, 2023. |
CN Application # 202010063812.6 Office Action dated Aug. 1, 2023. |
CN Application # 201980090098.5 Office Action dated Dec. 4, 2023. |
IN Application # 202117028974 Summons to Hearing dated Dec. 8, 2023. |
CN Application # 202010521767.4 Office Action dated Dec. 8, 2023. |
CN Application # 201980090030.7 Office Action dated Jan. 5, 2024. |
U.S. Appl. No. 17/189,300 Office Action dated Jun. 4, 2024. |
Number | Date | Country | |
---|---|---|---|
20200256993 A1 | Aug 2020 | US |
Number | Date | Country | |
---|---|---|---|
62809647 | Feb 2019 | US | |
62803612 | Feb 2019 | US |