The present invention relates generally to systems and methods for depth mapping, and particularly to beam sources used in time-of-flight (ToF) sensing.
Existing and emerging consumer applications have created an increasing need for real-time three-dimensional (3D) imagers. These imaging devices, also known as depth sensors or depth mappers, enable the remote measurement of distance (and often intensity) to each point in a target scene—referred to as target scene depth—by illuminating the target scene with an optical beam and analyzing the reflected optical signal. A commonly used technique to determine the distance to each point on the target scene involves transmitting one or more pulsed optical beams towards the target scene, followed by the measurement of the round-trip time, i.e., time-of-flight (ToF), taken by the optical beams as they travel from the source to the target scene and back to a detector array adjacent to the source.
Some ToF systems use single-photon avalanche diodes (SPADs), also known as Geiger-mode avalanche photodiodes (GAPDs), in measuring photon arrival time.
Embodiments of the present invention that are described hereinbelow provide improved depth mapping systems and methods for operating such systems.
There is therefore provided, in accordance with an embodiment of the invention, sensing apparatus, including a radiation source, which is configured to emit pulses of optical radiation toward multiple points in a target scene. A receiver is configured to receive the optical radiation that is reflected from the target scene and to output signals, responsively to the received optical radiation, that are indicative of respective times of flight of the pulses to and from the points in the target scene. Processing and control circuitry is configured to select a first pulse repetition interval (PRI) and a second PRI, greater than the first PRI, from a permitted range of PRIs, to drive the radiation source to emit a first sequence of the pulses at the first PRI and a second sequence of the pulses at the second PRI, and to process the signals output by the receiver in response to both the first and second sequences of the pulses in order to compute respective depth coordinates of the points in the target scene.
In a disclosed embodiment, the radiation source includes an array of vertical-cavity surface-emitting lasers (VCSELs). Additionally or alternatively, the radiation source includes an array of emitters, which are arranged in multiple banks, and the processing and control circuitry is configured to drive the multiple banks sequentially so that each bank emits respective first and second sequences of the pulses at the first and second PRIs. Further additionally or alternatively, the receiver includes an array of sensing elements, which include single-photon avalanche diodes (SPADs).
In some embodiments, the first PRI defines a range limit, at which a time of flight of the pulses is equal to the first PRI, and the processing and control circuitry is configured to compare the signals output by the receiver in response to the first and second sequences of pulses in order to distinguish the points in the scene for which the respective depth coordinates are less than the range limit from the points in the scene for which the respective depth coordinates are greater than the range limit, thereby resolving range folding of the depth coordinates. In one such embodiment, the processing and control circuitry is configured to compute, for each of the points in the scene, respective first and second histograms of the times of flight of the pulses in the first and second sequences, and to detect that range folding has occurred at a given point responsively to a difference between the first and second histograms.
In some embodiments, the apparatus includes one or more radio transceivers, which communicate over the air by receiving signals in at least one assigned frequency band, wherein the processing and control circuitry is configured to identify the permitted range of the PRIs responsively to the assigned frequency band. Typically, the permitted range is defined so that the PRIs in the permitted range have no harmonics within the assigned frequency band. Additionally or alternatively, the processing and control circuitry is configured to modify the permitted range in response to a change in the assigned frequency band of the radio transceiver, and select new values of one or both of the first PRI and the second PRI so that the new values fall within the modified range.
In a disclosed embodiment, the processing and control circuitry is configured to store a record of multiple groups of the PRIs, to identify an operating environment of the apparatus, and to select one of the groups to apply in driving the radiation source responsively to the identified operating environment. The processing and control circuitry can be configured to select the one of the groups responsively to a geographical region in which the apparatus is operating. Additionally or alternatively, the groups of the PRIs have respective priorities that are assigned responsively to a likelihood of interference with frequencies used by the radio transceiver, and the processing and control circuitry is configured to select the one of the groups responsively to the respective priorities. In one embodiment, the PRIs in each group are co-prime with respect to the other PRIs in the group.
In another embodiment, the processing and control circuitry is configured to select a third PRI, greater than the second PRI, from the permitted range of the PRIs, and to drive the radiation source to emit a third sequence of the pulses at the third PRI, and to process the signals output by the receiver in response to the first, second and third sequences of the pulses in order to compute the respective depth coordinates of the points in the target scene.
Additionally or alternatively, the processing and control circuitry is configured to select the first and second PRIs so as to maximize a range of the depth coordinates while maintaining a resolution of the depth coordinates to be no greater than a predefined resolution limit.
There is also provided, in accordance with an embodiment of the invention, a method for sensing, which includes selecting a first pulse repetition interval (PRI) and a second PRI, greater than the first PRI, from a permitted range of PRIs. A radiation source is driven to emit a first sequence of pulses of optical radiation at the first PRI and a second sequence of the pulses of the optical radiation at the second PRI toward each of multiple points in a target scene. The optical radiation that is reflected from the target scene is received, and signals are output, responsively to the received optical radiation, that are indicative of respective times of flight of the pulses to and from the points in the target scene. The signals are processed in response to both the first and second sequences of the pulses in order to compute respective depth coordinates of the points in the target scene.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
Embodiments of the present invention provide ToF-based depth sensing apparatus, in which a radiation source emits pulses of optical radiation toward multiple points in a target scene. (The term “optical radiation” is used interchangeably with the term “light” in the context of the present description and the claims, to mean electromagnetic radiation in any of the visible, infrared and ultraviolet spectral ranges.) A receiver receives the optical radiation reflected from the target scene and outputs signals that are indicative of the respective times of flight of the pulses to and from the points in the target scene. Processing and control circuitry drives the radiation source and processes the signals output by the receiver in order to compute respective depth coordinates of the points in the target scene.
Apparatus of this sort often suffers from problems of low signal/noise ratio (SNR). To increase the SNR, the processing and control circuitry collects and analyzes signals from the receiver over sequences of many pulses that are emitted by the radiation source. In some cases, the processing and control circuitry computes histograms of the times of flight of the sequences of pulses that are reflected from each point in the target scene, and uses analysis of the histogram (e.g., the mode of the histogram at each point) as an indicator of the corresponding depth coordinate. To generate and output the depth coordinates at a reasonable frame rate (for example, 30 frames/sec), while collecting signals over sequences of many pulses, it is desirable that the radiation source emit the sequences of pulses at a high pulse repetition frequency (PRF), or equivalently, with a low pulse repetition interval (PRI). For example, the radiation source may output pulses of about 1 ns duration, with a PRI of 40-50 ns.
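By way of illustration, the histogram-mode depth estimate described above might be sketched as follows (a minimal sketch in Python; the function name, bin width, and speed-of-light constant are our own choices, not part of the apparatus):

```python
from collections import Counter

def depth_from_histogram(tof_samples_ns, bin_ns=1.0, m_per_ns=0.3):
    """Estimate the depth at one scene point from the mode of a ToF histogram.

    tof_samples_ns: round-trip times of flight collected over many pulses.
    """
    # Accumulate the arrival times into fixed-width bins.
    histogram = Counter(int(t / bin_ns) for t in tof_samples_ns)
    # Take the most populated bin as the true round-trip time;
    # uncorrelated noise counts spread over many bins and are outvoted.
    mode_bin = histogram.most_common(1)[0][0]
    tof_ns = (mode_bin + 0.5) * bin_ns  # bin center
    # Halve the round-trip time and convert at ~0.3 m/ns.
    return m_per_ns * tof_ns / 2.0
```

For example, fifty samples clustered at 16 ns with a few stray noise counts yield a depth of about 2.5 m; averaging over many pulses in this way is what recovers the SNR that a single pulse lacks.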
The use of a short PRI, however, gives rise to problems of range folding: Because optical radiation propagates at approximately 30 cm/ns, when a pulse emitted in a sequence with PRI of 40 ns reflects from an object that is more than about 6 m away from the apparatus, the reflected radiation will reach the receiver only after the next pulse has already been emitted by the radiation source. The processing and control circuitry will then be unable to determine whether the received radiation originated from the most recent emitted pulse, due to reflection from a nearby object, or from a pulse emitted earlier in the sequence, due to a distant object. The PRI thus effectively defines a range limit, which is proportional to the PRI and sets an upper bound on the distance of objects that can be sensed by the apparatus.
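The proportionality between the PRI and the range limit works out as follows (a sketch; the constant and function names are ours):

```python
SPEED_OF_LIGHT_M_PER_NS = 0.3  # approximate propagation speed of light

def range_limit_m(pri_ns: float) -> float:
    """Maximum unambiguous distance for a given PRI, in meters.

    A reflected pulse makes a round trip, so the limit is half the
    distance light travels in one PRI.
    """
    return SPEED_OF_LIGHT_M_PER_NS * pri_ns / 2.0

# A 40 ns PRI folds ranges beyond about 6 m, matching the example above.
print(range_limit_m(40.0))
```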
Embodiments of the present invention address this problem by using two or more different PRIs in succession. The processing and control circuitry selects (at least) first and second PRIs from a permitted range of PRIs. It then drives the radiation source to emit a first sequence of the pulses at the first PRI and a second sequence of the pulses at the second PRI, and processes the signals output by the receiver in response to both the first and second sequences of pulses in order to compute depth coordinates of the points in the target scene. The sequences may be transmitted one after the other, or they may be interleaved, with pulses transmitted in alternation at the first PRI and the second PRI. Although the embodiments described below mainly refer, for the sake of simplicity, to only first and second PRIs, the principles of these embodiments may be readily extended to three or more different PRIs.
More specifically, in order to resolve and disambiguate possible range folding, the processing and control circuitry compares the signals output by the receiver in response to the first sequence of pulses to those output in response to the second sequence, in order to distinguish the points in the scene for which the respective depth coordinates are less than the range limit defined by the first PRI from the points in the scene for which the respective depth coordinates are greater than the range limit. For example, the processing and control circuitry may compute, for each of the points in the scene, respective first and second histograms of the times of flight of the pulses in the first and second sequences. For objects closer than the range limit, the two histograms will be roughly identical. Objects beyond the range limit, however, will give rise to different histograms in response to the different PRIs of the first and second pulse sequences. The processing and control circuitry is thus able to detect that range folding has occurred at each point in the target scene based on the similarity or difference between the first and second histograms at each point.
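This histogram comparison might be sketched as follows (an illustrative Python fragment; the folding model, tolerance, and names are our own assumptions):

```python
from collections import Counter

def folded_tof(true_tof_ns: float, pri_ns: float) -> float:
    """ToF as recorded by a counter that resets every PRI (range folding)."""
    return true_tof_ns % pri_ns

def folding_detected(tof_samples_ns, pri1_ns, pri2_ns, tolerance_ns=1.0):
    """Compare the histogram modes obtained at two PRIs.

    Within the range limit the modes agree; beyond it, the folded
    ToF differs between the two PRIs, shifting one mode.
    """
    hist1 = Counter(round(folded_tof(t, pri1_ns)) for t in tof_samples_ns)
    hist2 = Counter(round(folded_tof(t, pri2_ns)) for t in tof_samples_ns)
    mode1 = hist1.most_common(1)[0][0]
    mode2 = hist2.most_common(1)[0][0]
    return abs(mode1 - mode2) > tolerance_ns

# An object whose true round-trip ToF is 56 ns folds differently under
# 40 ns and 44 ns PRIs; one at 16 ns does not.
print(folding_detected([56.0] * 100, 40.0, 44.0))  # True
print(folding_detected([16.0] * 100, 40.0, 44.0))  # False
```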
Another problem arises when the depth sensing apparatus is incorporated in a mobile communication device, such as a smartphone: The mobile communication device comprises at least one radio transceiver (and often multiple radio transceivers), which communicates over the air by receiving signals in an assigned frequency band, for example one of the bands defined by the ubiquitous LTE standards for cellular communications. Furthermore, the assigned frequency band will often change as the device roams from one cell to another. Meanwhile, the sequences of short, intense current pulses that are used to drive the radiation source at high PRF give rise to harmonics, some of which may fall within the assigned frequency band of the transceiver. The noise due to these harmonics can severely degrade the SNR of the radio transceiver.
To overcome this problem, in some embodiments of the present invention, the processing and control circuitry identifies a permitted range of the PRIs in a manner that avoids interference with the assigned frequency band of the radio transceiver, and selects the first and second PRI values to be within this permitted range. In other words, the permitted range is preferably defined so that the PRIs in the permitted range will have no harmonics within the assigned frequency band. The permitted range may be defined, for example, as a list of permitted PRI values, or as a set of intervals within which the PRI values may be chosen. Additionally or alternatively, multiple groups of two or more PRIs may be defined in advance and stored in a record held by the apparatus. The appropriate group can then be identified and used depending on the radio operating environment of the apparatus.
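A harmonic-avoidance filter of this sort might look as follows (a sketch under our own assumptions about how bands and candidate PRIs are represented):

```python
import math

def has_harmonic_in_band(pri_ns: float, band_lo_hz: float, band_hi_hz: float) -> bool:
    """True if any integer harmonic of the PRF (1/PRI) falls inside the band."""
    prf_hz = 1e9 / pri_ns
    n = max(1, math.ceil(band_lo_hz / prf_hz))  # first harmonic at or above band_lo
    return n * prf_hz <= band_hi_hz

def permitted_pris(candidate_pris_ns, bands_hz):
    """Keep only the PRIs with no harmonics inside any assigned band."""
    return [pri for pri in candidate_pris_ns
            if not any(has_harmonic_in_band(pri, lo, hi) for lo, hi in bands_hz)]

# A 40 ns PRI (PRF 25 MHz) has its 32nd harmonic at 800 MHz, inside a
# hypothetical 800-810 MHz band; a 41.5 ns PRI has no harmonic there.
print(permitted_pris([40.0, 41.5], [(800e6, 810e6)]))  # [41.5]
```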
When the assigned frequency band of the radio transceiver changes, the processing and control circuitry will modify the permitted range or group of PRI values accordingly. When necessary, the processing and control circuitry will select new values of one or all of the PRIs so that the new values fall within the modified range. The PRIs may be selected, subject to these range constraints, by applying predefined optimization criteria, for example to maximize the range of the depth coordinates while maintaining the resolution of the depth coordinates at a value no greater than a predefined resolution limit.
As noted earlier, although some of the embodiments described herein relate, for the sake of simplicity, to scenarios using two PRI values, the principles of the present invention may similarly be applied to selection and use of three or more PRI values. The use of a larger number of PRI values, within different parts of the permitted range, can be useful in enhancing the range and resolution of depth mapping.
Device 10 comprises multiple radio transceivers 12, which transmit radio signals to and/or receive radio signals from respective networks. For LTE cellular networks, for example, the radio signals can be in any of a number of different frequency bands, typically in the range between 800 MHz and 3000 MHz, depending on territory and type of service. As device 10 roams, the frequency bands on which it transmits and receives signals will typically change. A frequency controller 14 in device 10 selects the frequencies to be used in radio communication by transceivers 12 at any given time.
Camera 20 senses depth by outputting trains of optical pulses toward a target scene and measuring the times of flight of the pulses that are reflected back from the scene to the camera. Details of the structure and operation of camera 20 are described with reference to the figures that follow.
Generation of the optical pulses emitted from camera 20 gives rise to substantial electrical noise within device 10 both at the pulse repetition frequency of camera 20 (PRF, which is the inverse of the pulse repetition interval, or PRI) and at harmonics of the PRF. To avoid interfering with the operation of transceivers 12, frequency controller 14 provides camera 20 with a current range of permitted PRIs, whose harmonics fall entirely outside the frequency band or bands on which transceivers 12 are currently transmitting and receiving. (Alternatively, the frequency controller may notify the camera of the frequency band or bands on which the transceiver is currently transmitting and receiving, and the camera may itself derive the current range of permitted PRIs on this basis.) The range may have the form, for example, of a list of permitted PRIs (or equivalently, PRFs) or a set of intervals within which the PRI (or PRF) may be chosen. Camera 20 selects a pair of PRIs from the permitted range that will give optimal depth mapping performance, or possibly three or more PRIs, while thus avoiding interference with communications by device 10. Details of the criteria and process for selection are explained below.
Beam optics 37 typically comprise a collimating lens and may comprise a diffractive optical element (DOE), which replicates the actual beams emitted by array 22 to create the M beams that are projected onto the scene 32. (For example, an array of four banks with 16 VCSELs in a 4×4 arrangement in each bank may be used to create 8×8 beams, and a DOE may split each beam into 3×3 replicas to give a total of 24×24 beams.) For the sake of simplicity, these internal elements of beam optics 37 are not shown.
A receiver 23 in camera 20 comprises a two-dimensional detector array, such as SPAD array 24, together with J processing units 28 and select lines 31 for coupling the processing units to the SPADs. A combining unit 35 passes the digital outputs of processing units 28 to controller 26. SPAD array 24 comprises a number of detector elements N, which may be equal to M or possibly much larger than M, for example, 100×100 pixels or 200×200 pixels. The number J of processing units 28 depends on the number of pixels of SPAD array 24 to which each processing unit is coupled.
Array 22 emits M pulsed beams 30 of light, which are directed by beam optics 37 toward a target scene 32. Although beams 30 are depicted in
A Cartesian coordinate system 33 defines the orientation of depth camera 20 and scene 32. The x-axis and the y-axis are oriented in the plane of SPAD array 24. The z-axis is perpendicular to the array and points to scene 32 that is imaged onto SPAD array 24.
For clarity, processing units 28 are shown as if separate from SPAD array 24, but they are commonly integrated with the SPAD array. Similarly, combining unit 35 is commonly integrated with SPAD array 24. Processing units 28, together with combining unit 35, comprise hardware amplification and logic circuits, which sense and record pulses output by the SPADs in respective pixels or groups of pixels (referred to as “super-pixels”). These circuits thus measure the times of arrival of the photons that gave rise to the pulses, as well as the strengths of the optical pulses impinging on SPAD array 24.
Processing units 28 together with combining unit 35 may assemble one or more histograms of the times of arrival of multiple pulses emitted by array 22, and thus output signals that are indicative of the distance to respective points in scene 32, as well as of signal strength. Circuitry that can be used for this purpose is described, for example, in U.S. Patent Application Publication 2017/0176579, whose disclosure is incorporated herein by reference. Alternatively or additionally, some or all of the components of processing units 28 and combining unit 35 may be separate from SPAD array 24 and may, for example, be integrated with controller 26. For the sake of generality, controller 26, processing units 28 and combining unit 35 are collectively referred to herein as “processing and control circuitry.”
Controller 26 is coupled to both radiation source 21 and receiver 23. Controller 26 drives the banks of emitters in array 22 in alternation, at the appropriate PRIs, to emit the pulsed beams. The controller also provides an external control signal 29 to the processing and combining units in receiver 23, and receives output signals from the processing and combining units. The output signals may comprise histogram data, and may be used by controller 26 to derive both times of incidence and signal strengths. From the timing of the emission of beams 30 by VCSEL array 22 and the times of arrival measured by processing units 28, controller 26 calculates the times of flight of the M beams, and thus maps the distance to the corresponding M points in scene 32.
In some embodiments, in order to make optimal use of the available sensing and processing resources, controller 26 identifies the respective areas of SPAD array 24 on which the pulses of optical radiation reflected from corresponding regions of target scene 32 are imaged by lens 34, and chooses the super-pixels to correspond to these areas. The signals output by sensing elements outside these areas are not used, and these sensing elements may thus be deactivated, for example by reducing or turning off the bias voltage to these sensing elements.
For clarity, the dimensions of emitter array 22 and SPAD array 24 have been exaggerated in
Controller 26 typically comprises a programmable processor, which is programmed in software and/or firmware to carry out the functions that are described herein. Alternatively or additionally, controller 26 comprises hard-wired and/or programmable hardware logic circuits, which carry out at least some of the functions of the controller. Although controller 26 is shown in the figure, for the sake of simplicity, as a single, monolithic functional block, in practice the controller may comprise a single chip or a set of two or more chips, with suitable interfaces for receiving and outputting the signals that are illustrated in the figure and are described in the text.
One of the functional units of controller 26 is a depth processing unit (DPU) 27, which receives and processes signals from processing units 28. DPU 27 calculates the times of flight of the photons in each of beams 30, and thus maps the distance to the corresponding points in target scene 32. This mapping is based on the timing of the emission of beams 30 by emitter array 22 and on the times of arrival (i.e., times of incidence of reflected photons) measured by processing units 28. DPU 27 makes use of the histograms accumulated at the two different PRIs of emitter array 22 in disambiguating any “range folding” that may occur, as explained below with reference to
To enable selection and switching among the different banks, array 22 may be mounted on a driver chip (not shown), for example, a silicon chip with CMOS circuits for selecting and driving the individual VCSELs or banks of VCSELs. The banks of VCSELs in this case may be physically separated, for ease of fabrication and control, or they may be interleaved on the VCSEL chip, with suitable connections to the driver chip to enable actuating the banks in alternation. Thus, beams 30 likewise irradiate the target scene in a time-multiplexed pattern, with different sets of the beams impinging on the respective regions of the scene at different times.
As further alternatives to the pictured embodiments, array 22 may comprise a larger or smaller number of banks and emitters. Typically, for sufficient coverage of the target scenes with static (non-scanned) beams, array 22 comprises at least four banks 62, with at least four emitters 54 in each bank, and possibly with a DOE for splitting the radiation emitted by each of the emitters. For denser coverage, array 22 comprises at least eight banks, with twenty emitters or more in each bank. These options enhance the flexibility of camera 20 in terms of time-multiplexing of the optical and electrical power budgets, as well as processing resources.
In the pictured scenario, radiation source 21 transmits a pulse 70. An object at a small distance (for example, 2.4 m from camera 20) returns a reflected pulse 72, which reaches receiver 23 after a ToF of 16 ns. To measure the ToF, receiver 23 counts the time elapsed between transmitted pulse 70 and the receipt of reflected pulse 72. The count value for the pulse sequence at PRI1 is represented in
On the other hand, an object at a distance larger than the range limit (for example, 8.4 m from camera 20) will return a reflected pulse 74, which reaches receiver 23 after a ToF of 56 ns. Pulse 74 thus reaches the receiver after radiation source 21 has already transmitted the next pulse in the sequence, and after the counters represented by sawtooth 76 and sawtooth 78 have been zeroed. Therefore, receiver 23 will record a ToF of 16 ns for pulse 74 during the sequence at PRI1. Because of the larger PRI during the sequence at PRI2, however, the receiver will record a ToF of 12 ns during this sequence.
Upon processing the ToF results in any of the above schemes, controller 26 will detect that a certain point in scene 32 had two different ToF values during the pulse sequences at the two different PRI values. These two different ToF values are separated by the difference between the PRI values (4 ns). In this manner, controller 26 is able to detect that range folding has occurred at this point. Thus, the controller distinguishes the points in the scene whose respective depth coordinates are less than range limit 75 from the points in the scene whose respective depth coordinates are greater than the range limit, thereby resolving range folding of the depth coordinates. Depending on the difference between the PRI values (or equivalently, the beat frequency of the PRF values), controller 26 may be able to distinguish between different multiples of range limit 75 and thus extend the range of detection even farther.
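The disambiguation step can be sketched as a brute-force search for the smallest true ToF consistent with both folded measurements (our own illustration, reusing the 40 ns/44 ns example values from the text):

```python
def unfold_tof(tof1_ns, tof2_ns, pri1_ns, pri2_ns, max_tof_ns=400.0, tol_ns=0.5):
    """Smallest true ToF consistent with both folded measurements.

    tof1_ns/tof2_ns: ToF values recorded at PRI1 and PRI2 (each folded
    modulo its own PRI).  Candidates are tof1 + k*PRI1 for k = 0, 1, ...
    """
    candidate = tof1_ns
    while candidate <= max_tof_ns:
        if abs((candidate % pri2_ns) - tof2_ns) <= tol_ns:
            return candidate
        candidate += pri1_ns  # try the next fold of the PRI1 sequence
    return None  # no consistent ToF within the search window

print(unfold_tof(16.0, 12.0, 40.0, 44.0))  # 56.0: object beyond the range limit
print(unfold_tof(16.0, 16.0, 40.0, 44.0))  # 16.0: no folding occurred
```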
Receiver 23 may experience a “dead zone” immediately after each transmitted pulse 70, in which the receiver is unable to detect pulses reflected from target objects due to stray reflections within camera 20. This dead zone is exemplified by the last reflected pulse k in
It is advantageous to choose the PRI values that are used together in the sort of scheme that is shown in
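One useful property of such a PRI pair is that the folded measurements repeat only after the least common multiple of the two intervals, which sets the combined unambiguous range (a standard multi-PRF result, not stated in the text; the sketch below, with names of our own choosing, uses integer PRIs in nanoseconds):

```python
import math

def combined_range_limit_m(pri1_ns: int, pri2_ns: int, m_per_ns: float = 0.3) -> float:
    """Unambiguous range of a two-PRI scheme.

    The pair of folded ToF measurements repeats only after the least
    common multiple of the two PRIs, so the combined range limit
    scales with that LCM rather than with either PRI alone.
    """
    lcm_ns = math.lcm(pri1_ns, pri2_ns)
    return m_per_ns * lcm_ns / 2.0

# The 40 ns / 44 ns pair from the example repeats only every 440 ns,
# extending the unambiguous range from 6 m to 66 m.
print(combined_range_limit_m(40, 44))
```

Choosing co-prime PRIs maximizes this LCM, which is one way to motivate the co-prime selection described elsewhere in this description.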
This process of transmitting pulse sequences at PRI1 and PRI2 is repeated for each of the other banks 62b, 62c and 62d, and controller 26 thus receives the dual histograms and extracts depth coordinates for the corresponding sets of pixels in receiver 23. Controller 26 combines the depth coordinates generated over all four banks of emitters in order to create and output a complete depth map of scene 32.
Although
Frequency controller 14 identifies the radio frequency band over which transceiver 12 is to communicate, at a frequency assignment step 100. Based on this assignment, the frequency controller computes a list of PRIs with no harmonics in the assigned radio frequency band, at a PRI list generation step 102. Alternatively, frequency controller 14 may compute and output ranges of available PRI values. Further alternatively, frequency controller 14 may convey the assignment of the radio frequency band to camera 20, and controller 26 may then compute the list of available PRIs.
Controller 26 of camera 20 selects a first PRI value (PRI1) from the list, at a first PRI selection step 104, and selects a second PRI value (PRI2) at a second PRI selection step 106. Optionally, controller 26 may choose one or more additional PRI values, up to PRIk, at a further PRI selection step 107. The controller may apply any suitable optimization criteria in choosing the PRI values. For example, controller 26 may select PRI1 and PRI2 so as to optimize SNR and maximize the range of the depth coordinates that can be measured by camera 20, while maintaining a resolution of the depth coordinates to be no greater (i.e., no worse) than a predefined resolution limit. Criteria that may be applied in this regard include:
Following the choice of PRI values, camera 20 captures depth data by emitting sequences of pulses at PRI1 and PRI2 in succession, as explained above, at a depth mapping step 108.
Camera 20 typically continues operating with the selected pair of PRI values, until frequency controller 14 assigns a new frequency band for communication by transceiver 12, at a new frequency assignment step 110. In this case, the method returns to step 100, where frequency controller 14 modifies the permitted PRI range of camera 20. Controller 26 will then select new values of one or both of PRI1 and PRI2, so that the new values fall within the modified range. Operation of camera 20 continues using these new values.
Specifically, the method of
In typical use, mobile communication device 10 comprises multiple transceivers, which operate concurrently in different frequency bands, such as the Global Positioning System (GPS) operating in the range of 1.5-1.7 GHz; wireless local area networks (Wi-Fi) operating on channels around 2.4 GHz and 5 GHz; and various cellular bands between 600 MHz and 5 GHz. In choosing the groups of PRI values, controller 26 can give priority to certain frequencies, so that PRIs with harmonics in high-priority radio bands are avoided. For example, because GPS signals are weak and require sensitive receivers, the GPS band will have high priority. Cellular channels that are used in critical signaling, as well as the lower ranges of cellular frequencies, which are more susceptible to interference, may be prioritized as well. Wi-Fi channels may have lower priority, as long as for any given group of PRI values, there is at least one Wi-Fi channel that is free of interference.
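The band-priority logic described above might be expressed as a scoring function (a sketch; the band edges and priority weights below are made-up examples, not values from the text):

```python
import math

def hits_band(pri_ns: float, lo_hz: float, hi_hz: float) -> bool:
    """True if any harmonic of the PRF (1/PRI) lands inside the band."""
    prf_hz = 1e9 / pri_ns
    n = max(1, math.ceil(lo_hz / prf_hz))  # first harmonic at or above lo
    return n * prf_hz <= hi_hz

def pick_group(pri_groups_ns, prioritized_bands):
    """Pick the PRI group with the lowest total interference penalty.

    prioritized_bands: (lo_hz, hi_hz, priority) tuples, where a larger
    priority marks a band that is more important to protect (e.g. GPS).
    """
    def penalty(group):
        return sum(priority for lo, hi, priority in prioritized_bands
                   if any(hits_band(pri, lo, hi) for pri in group))
    return min(pri_groups_ns, key=penalty)

# A 10 ns PRI (PRF 100 MHz) hits a hypothetical high-priority 95-105 MHz
# band; a 9 ns PRI (PRF ~111 MHz) clears it.
print(pick_group([[10.0], [9.0]], [(95e6, 105e6, 10)]))  # [9.0]
```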
In the method of
Various approaches may be adopted in building the groups of PRI values at step 124. In the present embodiment, for example, the computer begins by selecting the largest PRI value remaining in the list, at a starting PRI selection step 130. The computer then searches for another, smaller PRI value that is co-prime with the other values already selected for inclusion in this group, at a further PRI selection step 132. The computer starts by searching for PRI values that are close to the values already in the group, while ensuring that there is at least one Wi-Fi band with which none of the harmonics of any of the PRIs in the group will interfere. PRI values that do not satisfy this latter requirement are not selected in step 132. This process of adding and evaluating PRI values for incorporation in the present group continues iteratively until the group has k member PRI values, at a group completion step 134.
After a given group of k PRI values has been assembled, the computer returns to step 130 in order to construct the next group of PRI values. The process of building PRI groups continues until a sufficient number of groups has been constructed and stored, at a record completion step 136. For example, the computer may check the harmonics of the PRIs in each group to ensure that for each radio frequency band that may be used by transceiver 12, including cellular and Wi-Fi bands, there is at least one group of PRI values that will not interfere with the band. Controller 26 will then be able to choose the appropriate PRI group, at step 126, in order to accommodate the actual radio frequencies that are in use at any given time.
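The iterative group-building loop might be sketched as follows (a simplified illustration with hypothetical names; PRIs are taken as integers, e.g. in nanoseconds or clock ticks, and the Wi-Fi-clearance check described above is omitted for brevity):

```python
import math

def build_pri_groups(candidate_pris, group_size):
    """Greedily assemble groups of pairwise co-prime PRI values.

    Each group starts from the largest remaining PRI, then adds smaller
    PRIs that are co-prime with every member already in the group.
    """
    remaining = sorted(candidate_pris, reverse=True)
    groups = []
    while remaining:
        group = [remaining.pop(0)]  # largest remaining PRI seeds the group
        for pri in list(remaining):
            if all(math.gcd(pri, member) == 1 for member in group):
                group.append(pri)
                remaining.remove(pri)
                if len(group) == group_size:
                    break
        if len(group) == group_size:  # discard incomplete groups
            groups.append(group)
    return groups

print(build_pri_groups([40, 39, 41, 42, 43, 44], 2))  # [[44, 43], [42, 41], [40, 39]]
```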
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application claims the benefit of U.S. Provisional Patent Application 62/859,211, filed Jun. 10, 2019, which is incorporated herein by reference.
Number | Date | Country
---|---|---
20200386890 A1 | Dec 2020 | US