This invention relates generally to the field of optical sensors and more specifically to a new and useful optical system for collecting distance information in the field of optical sensors.
The following description of embodiments of the invention is not intended to limit the invention to these embodiments but rather to enable a person skilled in the art to make and use this invention. Variations, configurations, implementations, example implementations, and examples described herein are optional and are not exclusive to the variations, configurations, implementations, example implementations, and examples they describe. The invention described herein can include any and all permutations of these variations, configurations, implementations, example implementations, and examples.
1. One-Dimensional Optical System: Aperture Array
As shown in
1.1 Applications
Generally, the one-dimensional optical system 100 (the “system”) functions as an image sensor that, when rotated about an axis parallel to a column of apertures, collects three-dimensional distance data of a volume occupied by the system. Specifically, the one-dimensional optical system 100 can scan a volume to collect three-dimensional distance data that can then be reconstructed into a virtual three-dimensional representation of the volume, such as based on recorded times between transmission of illuminating beams from the illumination sources and detection of photons—likely originating from the illumination sources—incident on the set of pixels 170, based on phase-based measurement techniques, or based on any other suitable distance measurement technique. The system 100 includes: a column of offset apertures arranged behind a bulk imaging optic 130 and defining discrete fields of view in a field ahead of the bulk imaging optic 130 (that is, fields of view that do not overlap beyond a threshold distance from the system); a set of illumination sources 110 that project discrete illuminating beams at an operating wavelength into (and substantially only into) the fields of view defined by the apertures; a column of lenses that collimate light rays passed by corresponding apertures; an optical filter 160 that selectively passes a narrow band of wavelengths of light (i.e., electromagnetic radiation) including the operating wavelength; and a set of pixels 170 that detect incident photons (e.g., count incident photons, track times between consecutive incident photons). The system can therefore selectively project illuminating beams into a field ahead of the system according to an illumination pattern that substantially matches—in size and geometry across a range of distances from the system—the fields of view of the apertures.
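As a concrete illustration of the time-of-flight technique mentioned above, the distance to a surface follows from halving the round-trip time between transmission of an illuminating beam and detection of the reflected photons at the speed of light. A minimal sketch (the helper name is hypothetical, and the phase-based techniques also noted in the text would differ):

```python
# Speed of light in vacuum, in meters per second.
C = 299_792_458.0

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to a reflecting surface from the time between transmission
    of an illuminating beam and detection of its reflection: the photon
    traverses the path twice, so the one-way distance is half of c * t."""
    return C * round_trip_time_s / 2.0
```

A reflection detected roughly 66.7 ns after emission corresponds to a surface about 10 m away.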
In particular, the illumination sources are configured to illuminate substantially only surfaces in the field ahead of the system that can be detected by pixels in the system such that minimal power output by the system (via the illumination sources) is wasted by illuminating surfaces in the field to which the pixels are blind. The system can therefore achieve a relatively high ratio of output signal (i.e., illuminating beam power) to input signal (i.e., photons passed to and incident on the pixel array). Furthermore, the set of lenses 150 can collimate light rays passed by adjacent apertures such that light rays incident on the optical filter 160 meet the optical filter 160 at an angle of incidence of approximately 0°, thereby maintaining a relatively narrow band of wavelengths of light passed by the optical filter 160 and achieving a relatively high signal-to-noise ratio (“SNR”) for light rays reaching the set of pixels 170.
The system includes pixels arranged in a column and aligned with the apertures, and each pixel can be non-square in geometry (e.g., short and wide) to extend the sensing area of the system for a fixed aperture pitch and pixel column height. The system also includes a diffuser 180 that spreads light rays passed from an aperture through the optical filter 160 across the area of a corresponding pixel such that the pixel can detect incident photons across its full width and height thereby increasing the dynamic range of the system.
The system is described herein as projecting electromagnetic radiation into a field and detecting electromagnetic radiation reflected from a surface in the field back to the bulk receiver optic. The terms “illuminating beam,” “light,” “light rays,” and “photons” recited herein refer to such electromagnetic radiation. The term “channel” recited herein refers to one aperture in the aperture layer 140, a corresponding lens in the set of lenses 150, and a corresponding pixel in the set of pixels 170.
1.2 Bulk Imaging Optic
The system includes a bulk imaging optic 130 characterized by a focal plane opposite the field. Generally, the bulk imaging optic 130 functions to project incident light rays from outside the system toward the focal plane where light rays incident on a stop region 146 of the aperture layer 140 are rejected (e.g., mirrored or absorbed) and where light rays incident on apertures in the aperture layer 140 are passed into a lens characterized by a focal length and offset from the focal plane by the focal length.
In one implementation, the bulk imaging optic 130 includes a converging lens, such as a bi-convex lens (shown in
1.3 Aperture Layer
As shown in
The aperture layer 140 includes a relatively thin opaque structure coinciding with (e.g., arranged along) the focal plane of the bulk imaging optic 130, as shown in
In the one-dimensional optical system 100, the aperture layer 140 can define a single column of multiple discrete circular apertures of substantially uniform diameter, wherein each aperture defines an axis substantially parallel to and aligned with one lens in the lens array, as shown in
In one implementation, a first aperture 141 in the aperture layer 140 passes light rays—reflected from a discrete region of a surface in the field (the field of view of the sense channel) ahead of the bulk imaging optic 130—into its corresponding lens; a stop region 146 interposed between the first aperture 141 and adjacent apertures in the aperture layer 140 blocks light rays—reflected from a region of the surface outside of the field of view of the first aperture 141—from passing into the lens corresponding to the first aperture 141. In the one-dimensional optical system 100, the aperture layer 140 therefore defines a column of apertures that define multiple discrete, non-overlapping fields of view of substantially infinite depth of field, as shown in
In this implementation, a first aperture 141 in the aperture layer 140 defines a field of view that is distinct and that does not intersect a field of view defined by another aperture in the aperture layer 140, as shown in
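The discrete fields of view described above follow from thin-lens geometry: an aperture of diameter d at the focal plane of a bulk optic of focal length F views a cone of full angle approximately d/F. A sketch of this relationship, assuming for illustration a 200 μm aperture diameter (a value used in a later example) and a hypothetical 20 mm bulk focal length:

```python
import math

def channel_fov_rad(aperture_diameter_m: float, bulk_focal_length_m: float) -> float:
    """Full angular field of view of one sense channel: an aperture of
    diameter d at the focal plane of a bulk optic of focal length F
    subtends a cone of full angle 2 * atan(d / 2F) (thin-lens model)."""
    return 2.0 * math.atan(aperture_diameter_m / (2.0 * bulk_focal_length_m))

def footprint_diameter_m(fov_rad: float, range_m: float) -> float:
    """Diameter of the region in the field viewed by a channel at a given
    range; because the angular field of view is fixed, the footprint grows
    linearly with range (an effectively infinite depth of field)."""
    return 2.0 * range_m * math.tan(fov_rad / 2.0)
```

With these assumed values, each channel views a cone of roughly 10 milliradians, a footprint of about 10 cm at 10 m range.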
Generally, photons projected into the field by the first illumination source 111 illuminate a particular region of a surface (or multiple surfaces) in the field within the field of view of the first sense channel and are reflected (e.g., scattered) by the surface(s); at least some of these photons reflected by the particular region of a surface may reach the bulk imaging optic 130, which directs these photons toward the focal plane. Because these photons were reflected by a region of a surface within the field of view of the first aperture 141, the bulk imaging optic 130 may project these photons into the first aperture 141, and the first aperture 141 may pass these photons into the first lens 151 (or a subset of these photons incident at an angle relative to the axis of the first aperture 141 below a threshold angle). However, because a second aperture 142 in the aperture layer 140 is offset from the first aperture 141 and because the particular region of the surface in the field illuminated via the first illumination source 111 does not (substantially) coincide with the field of view of the second aperture 142, photons reflected by the particular region of the surface and reaching the bulk imaging optic 130 are not projected into the second aperture 142 and are not passed to a second lens 152 behind the second aperture 142, and vice versa, as shown in
For a first aperture 141 in the aperture layer 140 paired with a first illumination source 111 in the set of illumination sources 110, the first aperture 141 in the aperture layer 140 defines a first field of view and passes—into the first lens 151—incident light rays originating at or reflected from a surface in the field coinciding with the first field of view. Because the first illumination source 111 projects an illuminating beam that is substantially coincident with (and substantially the same size as or minimally larger than) the field of view defined by the first aperture 141 (as shown in
In one variation, the system includes a second aperture layer interposed between the lens array and the optical filter 160, wherein the second aperture layer defines a second set of apertures, each aligned with a corresponding lens in the set of lenses 150, as described above. In this variation, an aperture in the second aperture layer can absorb or reflect errant light rays passed by a corresponding lens, as described above, to further reduce crosstalk between channels, thereby improving SNR within the system. Similarly, the system can additionally or alternatively include a third aperture layer interposed between the optical filter 160 and the diffuser(s) 180, wherein the third aperture layer defines a third set of apertures, each aligned with a corresponding lens in the set of lenses 150, as described above. In this variation, an aperture in the third aperture layer can absorb or reflect errant light rays passed by the optical filter 160, as described above, to again reduce crosstalk between channels, thereby improving SNR within the system.
1.4 Lens Array
The system includes a set of lenses 150, wherein each lens in the set of lenses 150 is characterized by a second focal length, is offset from the focal plane opposite the bulk imaging optic 130 by the second focal length, is aligned with a corresponding aperture in the set of apertures 144, and is configured to collimate light rays passed by the corresponding aperture. Generally, a lens in the set of lenses 150 functions to collimate light rays passed by its corresponding aperture and to pass these collimated light rays into the optical filter 160.
In the one-dimensional optical system 100, the lenses are arranged in a single column, and adjacent lenses are offset by a uniform lens pitch distance (i.e., a center-to-center distance between adjacent lenses), as shown in
Lenses in the set of lenses 150 can be substantially similar. A lens in the set of lenses 150 is configured to collimate light rays focused into its corresponding aperture by the bulk imaging optic 130. For example, a lens in the set of lenses 150 can include a bi-convex or plano-convex lens characterized by a focal length selected based on the size (e.g., diameter) of its corresponding aperture and the operating wavelength of the system. In this example, the focal length (f) of a lens in the set of lenses 150 can be calculated according to the formula:
where d is the diameter of the corresponding aperture in the aperture layer and λ is the operating wavelength of light output by the illumination source (e.g., 900 nm). The geometry of a lens in the set of lenses 150 can therefore be matched to the geometry of a corresponding aperture in the aperture layer such that the lens passes a substantially sharp image of light rays—at or near the operating wavelength—into the optical filter 160 and thus on to the pixel array.
However, the set of lenses 150 can include lenses of any other geometry and arranged in any other way adjacent the aperture layer.
1.5 Optical Filter
As shown in
In one implementation, the optical filter 160 includes an optical bandpass filter that passes a narrow band of electromagnetic radiation substantially centered at the operating wavelength of the system. In one example, the illumination sources output light (predominantly) at an operating wavelength of 900 nm, and the optical filter 160 is configured to pass light between 899.95 nm and 900.05 nm and to block light outside of this band.
The optical filter 160 may selectively pass and reject wavelengths of light as a function of angle of incidence on the optical filter 160. Generally, the passband of an optical bandpass filter shifts toward shorter wavelengths as the angle of incidence of light on the optical bandpass filter increases. For example, for an optical filter 160 including a 0.5 nm-wide optical bandpass filter, the optical filter 160 may pass over 95% of electromagnetic radiation across a sharp band from 899.75 nm to 900.25 nm and reject approximately 100% of electromagnetic radiation below 899.70 nm and above 900.30 nm for light rays incident on the optical filter 160 at an angle of incidence of approximately 0°. However, in this example, the optical filter 160 may pass over 95% of electromagnetic radiation across a band shifted downward to span 899.50 nm to 900.00 nm and reject approximately 100% of electromagnetic radiation outside of this shifted band for light rays incident on the optical filter 160 at an angle of incidence of approximately 15°. Therefore, the incidence plane of the optical filter 160 can be substantially normal to the axes of the lenses, and the set of lenses 150 can collimate light rays received through a corresponding aperture and output these light rays substantially normal to the incidence plane of the optical filter 160 (i.e., at an angle of incidence of approximately 0° on the optical filter). Specifically, the set of lenses 150 can output light rays toward the optical filter 160 at angles of incidence approximating 0° such that substantially all electromagnetic radiation passed by the optical filter 160 is at or very near the operating wavelength of the system.
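The angle dependence described above matches the standard model for thin-film interference filters, in which the passband center blue-shifts with angle of incidence. A sketch under an assumed effective refractive index for the filter stack (the text does not specify one, so the numerical shift below is illustrative only):

```python
import math

def shifted_center_nm(center_nm: float, aoi_deg: float, n_eff: float = 2.0) -> float:
    """Standard interference-filter model of the passband blue shift with
    angle of incidence theta:
        lambda(theta) = lambda_0 * sqrt(1 - (sin(theta) / n_eff)^2)
    where n_eff is the effective refractive index of the filter stack
    (assumed here, not given in the text)."""
    s = math.sin(math.radians(aoi_deg)) / n_eff
    return center_nm * math.sqrt(1.0 - s * s)
```

At 0° incidence the passband is centered at the design wavelength; at nonzero incidence it shifts to shorter wavelengths, which is why the lenses collimate light to approximately 0° incidence before the filter.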
In the one-dimensional optical system 100, the system can include a single optical filter 160 that spans the column of lenses in the set of lenses 150. Alternatively, the system can include multiple optical filters 160, each adjacent a single lens or a subset of lenses in the set of lenses 150. However, the optical filter 160 can define any other geometry and can function in any other way to pass only a limited band of wavelengths of light.
1.6 Pixel Array and Diffuser
The system includes a set of pixels 170 adjacent the optical filter 160 opposite the set of lenses 150, each pixel in the set of pixels 170 corresponding to a lens in the set of lenses 150 and including a set of subpixels arranged along a second axis non-parallel to the first axis. Generally, the set of pixels 170 are offset from the optical filter 160 opposite the set of lenses 150, and each pixel in the set of pixels 170 functions to output a single signal or stream of signals corresponding to the count of photons incident on the pixel within one or more sampling periods, wherein each sampling period may be picoseconds, nanoseconds, microseconds, or milliseconds in duration.
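The per-sampling-period output described above can be sketched as a simple binning of photon arrival timestamps into consecutive windows (the function name and period value are illustrative):

```python
from collections import Counter

def counts_per_period(photon_times_s, period_s):
    """Bin photon arrival timestamps (seconds) into consecutive sampling
    periods of fixed duration and return the photon count per period,
    mirroring a pixel that outputs a stream of per-period counts."""
    return dict(Counter(int(t // period_s) for t in photon_times_s))
```

For example, three photons arriving at 0.1 μs, 0.4 μs, and 1.2 μs, binned with a 1 μs sampling period, yield two counts in period 0 and one count in period 1.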
The system also includes a diffuser 180 interposed between the optical filter 160 and the set of pixels 170 and configured to spread collimated light output from each lens in the set of lenses 150 across a set of subpixels of a single corresponding pixel in the set of pixels 170. Generally, for each lens in the set of lenses 150, the diffuser 180 functions to spread light rays—previously collimated by the lens and passed by the optical filter 160—across the width and height of a sensing area within a corresponding pixel. The diffuser 180 can define a single optic element spanning the set of lenses 150, or the diffuser 180 can include multiple discrete optical elements, such as including one optical diffuser element aligned with each channel in the system.
In one implementation, a first pixel 171 in the set of pixels 170 includes an array of single-photon avalanche diode detectors (hereinafter “SPADs”), and the diffuser 180 spreads light rays—previously passed by a corresponding first aperture 141, collimated by a corresponding first lens 151, and passed by the optical filter 160—across the area of the first pixel 171, as shown in
In one example, each pixel in the set of pixels 170 is arranged on an image sensor, and a first pixel 171 in the set of pixels 170 includes a single row of 16 SPADs spaced along a lateral axis perpendicular to a vertical axis bisecting the column of apertures and lenses. In this example, the height of a single SPAD in the first pixel 171 can be less than the height (e.g., diameter) of the first lens 151, but the total length of the 16 SPADs can be greater than the width (e.g., diameter) of the first lens 151; the diffuser 180 can therefore converge light rays output from the first lens 151 to a height corresponding to the height of a SPAD at the plane of the first pixel 171 and can diverge light rays output from the first lens 151 to a width corresponding to the width of the 16 SPADs at the plane of the first pixel 171. In this example, the remaining pixels in the set of pixels 170 can include similar rows of SPADs, and the diffuser 180 can similarly converge and diverge light rays passed by corresponding apertures onto corresponding pixels.
In the foregoing example, the aperture layer can include a column of 16 like apertures, the set of lenses 150 can include a column of 16 like lenses arranged behind the aperture layer, and the set of pixels 170 can include a set of 16 like pixels—each including a similar array of SPADs—arranged behind the set of lenses 150. For a 6.4 mm-wide, 6.4 mm-tall image sensor, each pixel can include a single row of 16 SPADs, wherein each SPAD is electrically coupled to a remote analog front-end processing electronics/digital processing electronics circuit 240. Each SPAD can be arranged in a 400 μm-wide, 400 μm-tall SPAD area and can define an active sensing area approaching 400 μm in diameter. Adjacent SPADs can be offset by a SPAD pitch distance of 400 μm. In this example, the aperture pitch distance along the vertical column of apertures, the lens pitch distance along the vertical column of lenses, and the pixel pitch distance along the vertical column of pixels can each be approximately 400 μm accordingly. For the first sense channel in the system (i.e., the first aperture 141, the first lens 151, and the first pixel 171, etc.), a first diffuser 180 can diverge a cylindrical column of light rays passed from the first lens 151 through the optical filter 160—such as a column of light approximately 100 μm in diameter for an aperture layer aspect ratio of 1:4—to a height of approximately 400 μm aligned vertically with the row of SPADs in the first pixel 171. The first diffuser can similarly diverge the cylindrical column of light rays passed from the first lens 151 through the optical filter 160 to a width of approximately 6.4 mm centered horizontally across the row of SPADs in the first pixel 171. Other diffusers 180 in the system can similarly diverge (or converge) collimated light passed by corresponding lenses across corresponding pixels in the set of pixels 170.
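The dimensions in this example can be cross-checked with a few lines of arithmetic: sixteen SPADs at a 400 μm pitch span 6.4 mm, which is both the width to which the diffuser must spread each channel's light and, with sixteen 400 μm-pitch channels, the height of the sensor:

```python
# Dimensions taken from the example above: 16 channels, each pixel a
# single row of 16 SPADs at a 400 um pitch.
SPAD_PITCH_UM = 400
SPADS_PER_PIXEL = 16
CHANNEL_COUNT = 16

pixel_row_width_um = SPAD_PITCH_UM * SPADS_PER_PIXEL  # width of one pixel's SPAD row
sensor_height_um = SPAD_PITCH_UM * CHANNEL_COUNT      # height of the 16-pixel column
```

Both quantities come to 6,400 μm, consistent with the 6.4 mm-wide, 6.4 mm-tall image sensor.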
Therefore, in this example, by connecting each SPAD (or each pixel) to a remote analog front-end processing electronics/digital processing electronics circuit 240 and by incorporating diffusers 180 that spread light passed by the optical filter 160 across the breadths and heights of corresponding pixels, the system can achieve a relatively high sensing area fill factor across the imaging sensor.
Therefore, in the one-dimensional optical system 100, pixels in the set of pixels 170 can include an array of multiple SPADs arranged in an aspect ratio exceeding 1:1, and the diffuser 180 can spread light rays across corresponding non-square pixels, which enables a relatively large number of SPADs to be tiled across a single pixel and achieves a greater dynamic range across the image sensor than an image sensor with a single SPAD per pixel, as shown in
However, pixels in the set of pixels 170 can include any other number of SPADs arranged in any other array, such as in a 64-by-1 grid array, in a 32-by-2 grid array, or in a 16-by-4 grid array, and the diffuser 180 can converge and/or diverge collimated light rays onto corresponding pixels accordingly in any other suitable way. Furthermore, rather than (or in addition to) SPADs, each pixel in the set of pixels 170 can include one or more linear avalanche photodiodes, Geiger-mode avalanche photodiodes, photomultipliers, resonant cavity photodiodes, quantum dot detectors, or other types of photodetectors arranged as described above, and the diffuser(s) 180 can similarly converge and diverge light rays passed by the optical filter(s) 160 across corresponding pixels, as described herein.
1.7 Illumination Sources
The system includes a set of illumination sources 110 arranged along a first axis, each illumination source in the set of illumination sources 110 configured to output an illuminating beam of an operating wavelength toward a discrete spot in a field ahead of the illumination source. Generally, each illumination source functions to output an illuminating beam coincident a field of view defined by a corresponding aperture in the set of apertures 144, as shown in
In one implementation, the set of illumination sources 110 includes a bulk transmitter optic and one discrete emitter per sense channel. For example, the set of illumination sources 110 can include a monolithic VCSEL array including a set of discrete emitters. In this implementation, the bulk transmitter optic can be substantially identical to the bulk imaging optic 130 in material, geometry (e.g., focal length), thermal isolation, etc., and the bulk transmitter optic is adjacent and offset laterally and/or vertically from the bulk imaging optic 130. In a first example, the set of illumination sources 110 includes a laser array including discrete emitters arranged in a column with adjacent emitters offset by an emitter pitch distance substantially identical to the aperture pitch distance. In this first example, each emitter outputs an illuminating beam of diameter substantially identical to or slightly greater than the diameter of a corresponding aperture in the aperture layer, and the column of emitters is arranged along the focal plane of the bulk transmitter optic such that each illuminating beam projected from the bulk transmitter optic into the field intersects and is of substantially the same size and geometry as the field of view of the corresponding sense channel, as shown in
In a second example, the discrete emitters are similarly arranged in a column with adjacent emitters offset by an emitter pitch distance twice the aperture pitch distance, as shown in
The system can also include multiple discrete sets of illumination sources, each set of illumination sources 110 paired with a discrete bulk transmitter optic adjacent the bulk imaging optic 130. For example, the system can include a first bulk transmitter optic, a second bulk transmitter optic, and a third bulk transmitter optic patterned radially about the bulk imaging optic 130 at a uniform radial distance from the center of the bulk imaging optic 130 and spaced apart by an angular distance of 120°. In this example, the system can include a laser array with one emitter per sense channel—as described above—behind each of the first, second, and third bulk transmitter optics. Each discrete laser array and its corresponding bulk transmitter optic can thus project a set of illuminating beams into the fields of view defined by corresponding apertures in the aperture layer. Therefore, in this example, the three discrete laser arrays and the three corresponding bulk transmitter optics can cooperate to project three times the power onto the fields of view of the sense channels in the system, as compared to a single laser array and one bulk transmitter optic. Additionally or alternatively, the system can include multiple discrete laser arrays and bulk transmitter optics to both: 1) achieve a target illumination power output into the field of view of each sensing channel in the receiver subsystem with multiple lower-power emitters per sensing channel; and 2) distribute optical energy over a larger area in the near field to achieve an optical energy density less than a threshold allowable optical energy density for the human eye.
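The eye-safety motivation in point 2) above amounts to a sizing calculation: given a near-field power-density limit, the number of spatially separated emitter arrays needed so that the beams together deliver a target power while no single beam exceeds the limit. All parameter values below are illustrative assumptions, not figures from the text:

```python
import math

def arrays_needed(target_power_w: float, near_field_beam_area_m2: float,
                  max_power_density_w_m2: float) -> int:
    """Minimum number of spatially separated emitter arrays such that no
    single near-field beam exceeds the allowed power density while the
    summed beams still deliver the target power to each channel's field
    of view (assumes the beams fully overlap in the far field)."""
    per_array_power_limit = max_power_density_w_m2 * near_field_beam_area_m2
    return max(1, math.ceil(target_power_w / per_array_power_limit))
```

For instance, with a hypothetical 1 cm² near-field beam cross-section and a 10 kW/m² density limit, delivering 3 W per channel requires splitting the output across three arrays, which matches the three-transmitter arrangement in the example above.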
However, the system can include any other number and configuration of illumination source sets and bulk transmitter optics configured to illuminate fields of view defined by the sense channels. The set of illumination sources 110 can also include any other suitable type of optical transmitter, such as a 1×16 optical splitter powered by a single laser diode, a side-emitting laser diode array, an LED array, or a quantum dot LED array, etc.
1.8 Fabrication
In one implementation, the bulk receiver lens, the aperture layer, the set of lenses 150, the optical filter 160, and the diffuser 180 are fabricated and then aligned with and mounted onto an image sensor. For example, the optical filter 160 can be fabricated by coating a fused silica substrate. Photoactive optical polymer can then be deposited over the optical filter 160, and a lens mold can be placed over the photoactive optical polymer and a UV light source activated to cure the photoactive optical polymer in the form of lenses patterned across the optical filter 160. Standoffs can be similarly molded or formed across the optical filter 160 via photolithography techniques, and an aperture layer defined by a selectively-cured, metallized glass wafer can then be bonded or otherwise mounted to the standoffs to form the aperture layer. The assembly can then be inverted, and a set of discrete diffusers and standoffs can be similarly fabricated across the opposite side of the optical filter 160. A discrete image sensor can then be aligned with and bonded to the standoffs, and a bulk imaging optic 130 can be similarly mounted over the aperture layer.
Alternatively, photolithography and wafer-level bonding techniques can be implemented to fabricate the bulk imaging optics, the aperture layer, the set of lenses 150, the optical filter 160, and the diffuser 180 directly onto the un-diced semiconductor wafer containing the detector chips in order to simplify manufacturing, reduce cost, and reduce optical stack height for decreased pixel crosstalk.
2. One-Dimensional Optical System: Lens Tube
One variation of the system includes: a set of illumination sources 110 arranged along a first axis, each illumination source in the set of illumination sources 110 configured to output an illuminating beam of an operating wavelength toward a discrete spot in a field ahead of the illumination source; a bulk imaging optic 130 characterized by a focal plane opposite the field; a set of lens tubes 210 arranged in a line array parallel to the first axis, each lens tube in the set of lens tubes 210 including: a lens characterized by a focal length, offset from the focal plane by the focal length, and configured to collimate light rays reflected into the bulk imaging optic 130 from a discrete spot in the field illuminated by a corresponding illumination source in the set of illumination sources 110; and a cylindrical wall 218 extending from the lens opposite the focal plane, defining a long axis substantially perpendicular to the first axis, and configured to absorb incident light rays reflected into the bulk imaging optic 130 from a region in the field outside the discrete spot illuminated by the corresponding illumination source. In this variation, the system also includes: an optical filter 160 adjacent the set of lens tubes 210 opposite the focal plane and configured to pass light rays at the operating wavelength; a set of pixels 170 adjacent the optical filter 160 opposite the set of lenses 150, each pixel in the set of pixels 170 corresponding to a lens in the set of lenses 150 and including a set of subpixels aligned along a third axis perpendicular to the first axis; and a diffuser 180 interposed between the optical filter 160 and the set of pixels 170 and configured to spread collimated light output from each lens in the set of lenses 150 across a set of subpixels of a corresponding pixel in the set of pixels 170.
Generally, in this variation, the system includes a lens tube in place of (or in addition to) each aperture and lens pair described above. In this variation, each lens tube can be characterized by a second (short) focal length and can be offset from the focal plane of the bulk imaging optic 130 by the second focal length to preserve the aperture of the bulk imaging optic 130 and to collimate incident light received from the bulk imaging optic 130, as described above and as shown in
Each lens tube also defines an opaque cylindrical wall 218 defining an axis normal to the incidence plane of the adjacent optical filter 160 and configured to absorb incident light rays, as shown in
The cylindrical wall 218 of a lens tube can define a coarse or patterned opaque interface about a transparent (or translucent) lens material, as shown in
As shown in
3. Two-Dimensional Optical System
Another variation of the system includes: a set of illumination sources 110 arranged in a first rectilinear grid array, each illumination source in the set of illumination sources 110 configured to output an illuminating beam of an operating wavelength toward a discrete spot in a field ahead of the illumination source; a bulk imaging optic 130 characterized by a focal plane opposite the field; an aperture layer coincident the focal plane, defining a set of apertures 144 in a second rectilinear grid array proportional to the first rectilinear grid array, and defining a stop region 146 around the set of apertures 144, each aperture in the set of apertures 144 defining a field of view in the field coincident a discrete spot output by a corresponding illumination source in the set of illumination sources 110, the stop region 146 absorbing light rays reflected from surfaces in the field outside of fields of view defined by the set of apertures 144 and passing through the bulk imaging optic 130; a set of lenses 150, each lens in the set of lenses 150 characterized by a second focal length, offset from the focal plane opposite the bulk imaging optic 130 by the second focal length, aligned with an aperture in the set of apertures 144, and configured to collimate light rays passed by the aperture; an optical filter 160 adjacent the set of lenses 150 opposite the aperture layer and configured to pass light rays at the operating wavelength; a set of pixels 170 adjacent the optical filter 160 opposite the set of lenses 150, each pixel in the set of pixels 170 aligned with a subset of lenses in the set of lenses 150; and a diffuser 180 interposed between the optical filter 160 and the set of pixels 170 and configured to spread collimated light output from each lens in the set of lenses 150 across a corresponding pixel in the set of pixels 170.
Generally, in this variation, the system includes a two-dimensional grid array of channels (i.e., aperture, lens, and pixel sets or lens tube and pixel sets) and is configured to image a volume occupied by the system in two dimensions. The system can collect one-dimensional distance data—such as counts of incident photons within a sampling period and/or times between consecutive photons incident on pixels of known position corresponding to known fields of view in the field—across a two-dimensional field. The one-dimensional distance data can then be merged with known positions of the fields of view for each channel in the system to reconstruct a virtual three-dimensional representation of the field ahead of the system.
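The merging step described above, which combines one-dimensional distance data with the known field-of-view position of each channel, amounts to a spherical-to-Cartesian conversion per sample. A minimal sketch with hypothetical parameter names (azimuth from the scan rotation, elevation from the channel's position in the grid or column):

```python
import math

def point_from_sample(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert a per-channel distance sample plus the channel's known
    viewing direction into a Cartesian (x, y, z) point for assembling the
    virtual three-dimensional representation of the field."""
    cos_el = math.cos(elevation_rad)
    return (range_m * cos_el * math.cos(azimuth_rad),
            range_m * cos_el * math.sin(azimuth_rad),
            range_m * math.sin(elevation_rad))
```

A sample at 10 m range, viewed straight along the scan's zero direction, maps to the point (10, 0, 0).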
In this variation, the aperture layer can define a grid array of apertures, the set of lenses 150 can be arranged in a similar grid array with one lens aligned with one aperture in the aperture layer, and the set of pixels 170 can include one pixel per aperture and lens pair, as described above. For example, the aperture layer can define a 24-by-24 grid array of 200-μm-diameter apertures offset vertically and laterally by an aperture pitch distance of 300 μm, and the set of lenses 150 can similarly define a 24-by-24 grid array of lenses offset vertically and laterally by a lens pitch distance of 300 μm. In this example, the set of pixels 170 can include a 24-by-24 grid array of 300-μm-square pixels, wherein each pixel includes a 3×3 square array of nine 100-μm-square SPADs.
Alternatively, in this variation, the set of pixels 170 can include one pixel per group of multiple aperture and lens pairs. In the foregoing example, the set of pixels 170 can alternatively include a 12-by-12 grid array of 600-μm-square pixels, wherein each pixel includes a 6×6 square array of 36 100-μm-square SPADs and wherein each pixel is aligned with a group of four adjacent lenses in a square grid. In this example, for each group of four adjacent lenses, the diffuser 180: can bias collimated light rays output from a lens in the (1,1) position in the square grid upward and to the right to spread light rays passing through the (1,1) lens across the full breadth and width of the corresponding pixel; can bias collimated light rays output from a lens in the (2,1) position in the square grid upward and to the left to spread light rays passing through the (2,1) lens across the full breadth and width of the corresponding pixel; can bias collimated light rays output from a lens in the (1,2) position in the square grid downward and to the right to spread light rays passing through the (1,2) lens across the full breadth and width of the corresponding pixel; and can bias collimated light rays output from a lens in the (2,2) position in the square grid downward and to the left to spread light rays passing through the (2,2) lens across the full breadth and width of the corresponding pixel, as shown in
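The lens-to-pixel sharing and the per-lens diffuser biases in the foregoing example can be summarized in a short sketch. The 0-indexed grid convention for `lens_to_pixel` and the (i, j) labels for `diffuser_bias` are assumptions made for illustration; only the pitch values and bias directions come from the text.

```python
# Sketch of a 24x24 lens grid sharing a 12x12 grid of 600-um pixels:
# each 2x2 group of lenses maps to one pixel, and the diffuser biases each
# lens's output toward the opposite corner of its group so that the light
# spreads across the full breadth and width of the shared pixel.

LENS_PITCH_UM = 300   # lens (and aperture) pitch from the example
PIXEL_PITCH_UM = 600  # pixel pitch: one pixel per 2x2 lens group

def lens_to_pixel(row, col):
    """Shared pixel (0-indexed) for the lens at 0-indexed (row, col)."""
    return (row // 2, col // 2)

def diffuser_bias(i, j):
    """Diffuser bias for the lens at (i, j) in {1, 2} x {1, 2} within its
    group, matching the text: (1,1) up-right, (2,1) up-left,
    (1,2) down-right, (2,2) down-left."""
    return ("up" if j == 1 else "down", "right" if i == 1 else "left")
```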
In the foregoing example, for each group of four illumination sources in a square grid and corresponding to one group of four lenses in a square grid, the system can actuate one illumination source in the group of four illumination sources at any given instant. In particular, for each group of four illumination sources in a square grid corresponding to one pixel in the set of pixels 170, the system can actuate a first illumination source 111 in a (1,1) position during a first sampling period to illuminate a field of view defined by a first aperture 141 corresponding to a lens in the (1,1) position in the corresponding group of four lenses, and the system can sample all 36 SPADs in the corresponding pixel during the first sampling period. The system can then shut down the first illumination source 111 and actuate a second illumination source 112 in a (1,2) position during a subsequent second sampling period to illuminate a field of view defined by a second aperture 142 corresponding to a lens in the (1,2) position in the corresponding group of four lenses, and the system can sample all 36 SPADs in the corresponding pixel during the second sampling period. Subsequently, the system can shut down the second illumination source 112 and actuate a third illumination source in a (2,1) position during a subsequent third sampling period to illuminate a field of view defined by a third aperture corresponding to a lens in the (2,1) position in the corresponding group of four lenses, and the system can sample all 36 SPADs in the corresponding pixel during the third sampling period.
Finally, the system can shut down the first, second, and third illumination sources and actuate a fourth illumination source in a (2,2) position during a fourth sampling period to illuminate a field of view defined by a fourth aperture corresponding to a lens in the (2,2) position in the corresponding group of four lenses, and the system can sample all 36 SPADs in the corresponding pixel during the fourth sampling period. The system can repeat this process throughout its operation.
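The four-period firing sequence described above can be condensed into a short sketch: exactly one illumination source in the group is active per sampling period, and all 36 SPADs of the shared pixel are sampled during that period. The `illuminate` and `sample_pixel` callables are hypothetical stand-ins for the source driver and SPAD readout electronics, which the text does not name.

```python
# Firing order for one 2x2 group, per the description: (1,1), (1,2),
# (2,1), (2,2), each source shut down before the next is actuated.
GROUP_POSITIONS = [(1, 1), (1, 2), (2, 1), (2, 2)]

def scan_group(illuminate, sample_pixel, n_spads=36):
    """One full serial scan of a four-source group.

    Returns a mapping from source position to the SPAD samples collected
    while that source alone was illuminating its field of view."""
    frames = {}
    for pos in GROUP_POSITIONS:
        illuminate(pos, on=True)             # actuate only this source
        frames[pos] = sample_pixel(n_spads)  # read all SPADs in shared pixel
        illuminate(pos, on=False)            # shut it down for the next period
    return frames
```

For example, driving `scan_group` with stubbed hardware callables yields four frames, one per source position, in the firing order above.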
Therefore, in the foregoing example, the system can include a set of pixels 170 arranged across an image sensor 7.2 mm in width and 7.2 mm in length and can implement a scanning schema such that each channel in the system can access (i.e., can project light rays onto) a number of SPADs otherwise necessitating a substantially larger image sensor (e.g., a 14.4 mm by 14.4 mm image sensor). In particular, the system can implement a serial scanning schema per group of illumination sources to achieve a fourfold increase in the number of SPADs available to each channel, and thus in the dynamic range of each channel in the system. Furthermore, in this variation, the system can implement the foregoing imaging techniques to increase the imaging resolution of the system.
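The sensor-area claim follows directly from the numbers in the example and can be checked with simple arithmetic (all values below are taken from the text; only the variable names are illustrative).

```python
# 24x24 channels share a 12x12 grid of 600-um, 36-SPAD pixels, so the
# image sensor spans 12 * 600 um = 7.2 mm per side. Dedicating one such
# pixel to every channel would instead require 24 * 600 um = 14.4 mm.
channels_per_side = 24
pixel_pitch_um = 600

shared_side_mm = (channels_per_side // 2) * pixel_pitch_um / 1000
dedicated_side_mm = channels_per_side * pixel_pitch_um / 1000
spads_per_channel = 6 * 6  # each channel reads the full shared pixel
```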
In the foregoing implementation, the system can also include a shutter 182 between each channel and the image sensor, and the system can selectively open and close each shutter 182 when the illumination source for the corresponding channel is actuated and deactivated, respectively. For example, the system can include one independently-operable electrochromic shutter 182 interposed between each lens and the image sensor, and the system can open the electrochromic shutter 182 over the (1,1) lens in the square-gridded group of four lenses and close the electrochromic shutters 182 over the (1,2), (2,1), and (2,2) lenses when the (1,1) illumination source is activated, thereby blocking noise passing through the (1,2), (2,1), and (2,2) lenses from reaching the corresponding pixel on the image sensor. The system can therefore selectively open and close shutters 182 between each channel and the image sensor to increase SNR per channel during operation. Alternatively, the system can include one independently-operable electrochromic shutter 182 arranged over select regions of each pixel, as shown in
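The shutter schema described above, in which only the shutter over the active channel's lens opens while the other three close, can be sketched as follows; `set_shutter` is a hypothetical stand-in for the electrochromic shutter driver.

```python
# Sketch of shutter control synchronized with the illumination sources:
# when one source in a 2x2 group fires, the shutter over the matching lens
# opens and the remaining three close, keeping stray light off the shared
# pixel and raising the channel's SNR.

def set_group_shutters(active_pos, set_shutter,
                       positions=((1, 1), (1, 2), (2, 1), (2, 2))):
    """Open the shutter for the active channel, close the rest.

    Returns the commanded open/closed state per lens position."""
    states = {}
    for pos in positions:
        is_open = (pos == active_pos)
        set_shutter(pos, is_open)  # drive the electrochromic shutter
        states[pos] = is_open
    return states
```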
In this variation, the system can define two-dimensional grid arrays of apertures, lenses, diffusers, and/or pixels characterized by a first pitch distance along a first (e.g., X) axis and a second pitch distance—different from the first pitch distance—along a second (e.g., Y) axis. For example, the image sensor can include pixels offset by a 25 μm horizontal pitch and a 300 μm vertical pitch, wherein each pixel includes a single row of twelve subpixels.
However, in this variation, the two-dimensional optical system can include an array of any other number and pattern of channels (e.g., apertures, lenses (or lens tubes), and diffusers) and pixels and can execute any other suitable scanning schema to achieve higher spatial resolutions per channel than the raw pixel resolution of the image sensor. The system can additionally or alternatively include a converging optic, a diverging optic, and/or any other suitable type of optical element to spread light rays passed from a channel across the breadth of a corresponding pixel.
As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the embodiments of the invention without departing from the scope of this invention as defined in the following claims.
This application claims the benefit of U.S. Provisional Patent Application No. 62/232,222 filed Sep. 24, 2015.
Published as US 2017/0289524 A1, Oct. 2017, United States.