There are needs in the art for improved imaging systems and methods. For example, there are needs in the art for improved lidar imaging techniques, such as flash lidar systems and methods. As used herein, “lidar”, which can also be referred to as “ladar”, refers to and encompasses any of light detection and ranging, laser radar, and laser detection and ranging.
Flash lidar provides a tool for three-dimensional imaging that can be capable of imaging over large fields of view (FOVs), such as 160 degrees (horizontal) by 120 degrees (vertical). Conventional flash lidar systems typically suffer from limitations that require large detector arrays (e.g., focal plane arrays (FPAs)), large lenses, and/or large spectral filters. Furthermore, conventional flash lidar systems also suffer from the need for large peak power. For example, conventional flash lidar systems typically need to employ detector arrays on the order of 1200×1600 pixels to image a 120 degree by 160 degree FOV with a 0.1×0.1 degree resolution. Not only is such a large detector array expensive, but the use of a large detector array also translates into a need for a large spectral filter and lens, which further contributes to cost.
The principle of conservation of etendue typically operates to constrain the design flexibility with respect to flash lidar systems. Lidar systems typically require a large lens in order to collect more light given that lidar systems typically employ a laser source with the lowest feasible power. It is because of this requirement for a large collection aperture and a wide FOV with a conventional wide FOV lidar system that the etendue of the wide FOV lidar system becomes large. Consequently, in order to preserve etendue, the filter aperture area (especially for narrowband filters which have a narrow angular acceptance) may become very large. Alternatively, the etendue at the detector plane may be the limiting one for the system. If the numerical aperture of the imaging system is high (which means a low f-number) and the area of the focal plane is large (because there are many pixels in the array and their pitch is not small, e.g., they are 10 μm or 20 μm or 30 μm in pitch), then the detector's etendue becomes the critical one that drives the filter area.
AlΩ1 = AfΩ2 = AFPAΩ3

where Al, Af, and AFPA are the areas of the collection lens aperture, the spectral filter, and the focal plane array, respectively, and Ω1, Ω2, and Ω3 are the corresponding solid angles. The first term of this expression (AlΩ1) is typically fixed by system power budget and FOV. The second term of this expression (AfΩ2) is typically fixed by filter technology and the passband. The third term of this expression (AFPAΩ3) is typically fixed by lens cost and manufacturability. With these constraints, conservation of etendue typically means that designers are forced into deploying expensive system components to achieve desired imaging capabilities.
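For illustration only, the following Python sketch shows how conservation of etendue couples the collection aperture, the per-shot FOV, and the minimum spectral filter area. The 25 mm lens diameter and 10 degree filter acceptance cone are hypothetical values chosen for the example rather than parameters of any embodiment, and a rectangular FOV is approximated as a circular cone.

```python
import math

def etendue(area_mm2, full_cone_deg):
    """Etendue (mm^2 * sr) of an aperture of the given area accepting a circular
    cone with the given full angle (rectangular FOVs are treated as cones here
    purely for illustration)."""
    half = math.radians(full_cone_deg / 2.0)
    return area_mm2 * 2.0 * math.pi * (1.0 - math.cos(half))

lens_area_mm2 = math.pi * (25.0 / 2.0) ** 2   # hypothetical 25 mm diameter collection lens
filter_cone_deg = 10.0                        # hypothetical narrowband filter acceptance cone

for fov_deg in (120.0, 45.0):  # whole FOV per shot vs. a single 45-degree zone per shot
    g = etendue(lens_area_mm2, fov_deg)
    # Conservation of etendue: the filter must support at least the same etendue.
    filter_area_mm2 = g / (2.0 * math.pi * (1.0 - math.cos(math.radians(filter_cone_deg / 2.0))))
    print(f"per-shot FOV {fov_deg:5.1f} deg -> minimum filter area ~{filter_area_mm2:,.0f} mm^2")
```

Running the sketch shows the minimum filter area dropping several-fold when the per-shot FOV shrinks from 120 degrees to a single 45 degree zone, which is the effect that the spatially-stepped zonal approach described below exploits.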
As a solution to this problem in the art, the inventor discloses a flash lidar technique where the lidar system spatially steps flash emissions and acquisitions across a FOV to achieve zonal flash illuminations and acquisitions within the FOV, and where these zonal acquisitions constitute subframes that can be post-processed to assemble a wide FOV lidar frame. In doing so, the need for large lenses, large spectral filters, and large detector arrays is reduced, providing significant cost savings for the flash lidar system while still retaining effective operational capabilities. In other words, the spatially-stepped zonal emissions and acquisitions operate to reduce the FOV per shot relative to conventional flash lidar systems, and reducing the FOV per shot reduces the light throughput of the system, which in turn enables, for example embodiments, a reduction in filter area and a reduction in FPA area without significantly reducing collection efficiency or optics complexity.
With this approach, a practitioner can design an imaging system which can provide a wide field of view with reasonable resolution and frame rate (e.g., 30 frames per second (fps)), while maintaining a low cost, low power consumption, and reasonable size (especially in depth, for side integration). Furthermore, this approach can also provide reduced susceptibility to motion artifacts which may arise due to fast angular velocity of objects at close range. Further still, this approach can have reduced susceptibility to shocks and vibrations. Thus, example embodiments described herein can serve as imaging systems that deliver high quality data at low cost. As an example, lidar systems using the techniques described herein can serve as a short-range imaging system that provides cocoon 3D imaging around a vehicle such as a car.
Accordingly, as an example embodiment, disclosed herein is a lidar system comprising (1) an optical emitter that emits optical signals into a field of view, wherein the field of view comprises a plurality of zones, (2) an optical sensor that senses optical returns of a plurality of the emitted optical signals from the field of view, and (3) a plurality of light steering optical elements that are movable to align different light steering optical elements with (1) an optical path of the emitted optical signals at different times and/or (2) an optical path of the optical returns to the optical sensor at different times. Each light steering optical element corresponds to a zone within the field of view and provides (1) steering of the emitted optical signals incident thereon into its corresponding zone and/or (2) steering of the optical returns from its corresponding zone to the optical sensor so that movement of the light steering optical elements causes the lidar system to step through the zones on a zone-by-zone basis according to which of the light steering optical elements becomes aligned with the optical path of the emitted optical signals and/or the optical path of the optical returns over time. The inventors also disclose a corresponding method for operating a lidar system.
As another example embodiment, disclosed herein is a flash lidar system for illuminating a field of view over time, the field of view comprising a plurality of zones, the system comprising (1) a light source, (2) a movable carrier, and (3) a circuit. The light source can be an optical emitter that emits optical signals. The movable carrier can comprise a plurality of different light steering optical elements that align with an optical path of the emitted optical signals at different times in response to movement of the carrier, wherein each light steering optical element corresponds to one of the zones and provides steering of the emitted optical signals incident thereon into its corresponding zone. The circuit can drive movement of the carrier to align the different light steering optical elements with the optical path of the emitted optical signals over time to flash illuminate the field of view with the emitted optical signals on a zone-by-zone basis.
Furthermore, the system may also include an optical sensor that senses optical returns of the emitted optical signals, and the different light steering optical elements can also align with an optical path of the returns to the optical sensor at different times in response to the movement of the carrier and provide steering of the returns incident thereon from their corresponding zones to the optical sensor so that the optical sensor senses the returns on the zone-by-zone basis. The zone-specific sensed returns can be used to form lidar sub-frames, and these lidar sub-frames can be aggregated to form a full FOV lidar frame. With such a system, each zone's corresponding light steering optical element may include (1) an emitter light steering optical element that steers emitted optical signals incident thereon into its corresponding zone when in alignment with the optical path of the optical signals during movement of the carrier and (2) a paired receiver light steering optical element that steers returns incident thereon from its corresponding zone to the optical sensor when in alignment with the optical path of the returns to the optical sensor during movement of the carrier. The zone-specific paired emitter and receiver light steering optical elements can provide the same steering to/from the field of view. In an example embodiment for spatially-stepped flash (SSF) imaging, the system can spatially step across the zones and acquire time correlated single photon counting (TCSPC) histograms for each zone.
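A minimal sketch (in Python) of the zone-by-zone acquisition flow summarized above is given below; the zone layout, the lidar object, and its wait_for_zone, acquire_histogram, and extract_points helpers are hypothetical placeholders used only to illustrate how zone sub-frames can be aggregated into a full-FOV frame.

```python
# Hypothetical zone layout for a 135x135-degree FOV split into a 3x3 grid of
# 45x45-degree zones, each keyed by its (azimuth, elevation) offset in degrees.
ZONES = [(az, el) for el in (-45, 0, 45) for az in (-45, 0, 45)]

def acquire_full_frame(lidar, pulses_per_zone=1000):
    """Step through the zones, acquire a TCSPC histogram per zone, and
    aggregate the zone sub-frames into one full-FOV frame (point list)."""
    full_frame = []
    for zone in ZONES:
        lidar.wait_for_zone(zone)              # carrier brings the matching element into alignment
        hist = lidar.acquire_histogram(zone, pulses_per_zone)  # per-pixel TCSPC histograms
        sub_frame = lidar.extract_points(zone, hist)           # zone-local (az, el, range) points
        # Offset each point by the zone's steering angles so all sub-frames
        # share the same full-FOV angular coordinate system.
        full_frame.extend((az + zone[0], el + zone[1], rng) for az, el, rng in sub_frame)
    return full_frame
```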
Also disclosed herein is a lidar method for flash illuminating a field of view over time, the field of view comprising a plurality of zones, the method comprising (1) emitting optical signals for transmission into the field of view and (2) moving a plurality of different light steering optical elements into alignment with an optical path of the emitted optical signals at different times, wherein each light steering optical element corresponds to one of the zones and provides steering of the emitted optical signals incident thereon into its corresponding zone to flash illuminate the field of view with the emitted optical signals on a zone-by-zone basis.
This method may also include steps of (1) steering optical returns of the emitted optical signals onto a sensor via the moving light steering optical elements, wherein each moving light steering optical element is synchronously aligned with the sensor when in alignment with the optical path of the emitted optical signals during the moving and (2) sensing the optical returns on the zone-by-zone basis based on the steered optical returns that are incident on the sensor.
As examples, the movement discussed above for the lidar system and method can take the form of rotation, and the carrier can take the form of a rotator, in which case the circuit drives rotation of the rotator to (1) align the different light steering optical elements with the optical path of the emitted optical signals over time to flash illuminate the field of view with the emitted optical signals on the zone-by-zone basis and (2) align with the optical path of the returns to the optical sensor at different times in response to the rotation of the rotator and provide steering of the returns incident thereon from their corresponding zones to the optical sensor so that the optical sensor senses the returns on the zone-by-zone basis. The rotation can be continuous rotation, but the zonal changes would still take the form of discrete steps across the FOV because the zone changes would occur in a step-wise fashion as new light steering optical elements become aligned with the optical paths of the emitted optical signals and returns. For example, each zone can correspond to multiple angular positions of a rotator or carrier on which the light steering optical elements are mounted. In this way, the rotating light steering optical elements can serve as an optical translator that translates continuous motion of the light steering optical elements into discrete changes in the zones of illumination and acquisition over time.
This ability to change zones of illumination/acquisition in discrete steps even if the carrier is continuously moving (e.g., rotating) enables the use of relatively longer dwell times per zone for a given amount of movement than would be possible with prior art approaches to beam steering in the art. For example, Risley prisms are continuously rotated to produce a beam that is continuously steered in space in synchronicity with a continuous rotation of the Risley prisms (in which case any rotation of the Risley prism would produce a corresponding change in light steering). By contrast, with example embodiments that employ a continuous movement (such as rotation) of the carrier, the same zone will remain illuminated by the system even while the carrier continues to move for the time duration that a given light steering optical element is aligned with the optical path of the emitted optical signals. The zone of illumination will not change (or will remain static) until the next light steering optical element becomes aligned with the optical path of the emitted optical signals. Similarly, the sensor will acquire returns from the same zone even while the carrier continues to move for the time duration that a given light steering optical element is aligned with the optical path of the returns to the sensor. The zone of acquisition will not change until the next light steering optical element becomes aligned with the optical path of the returns to the sensor. By supporting such discrete changes in zonal illumination/acquisition even while the carrier is continuously moving, the system has an ability to support longer dwell times per zone and thus deliver sufficient optical energy (e.g., a sufficiently large number of pulses) into each zone and/or provide sufficiently long acquisition of return signals from targets in each zone, without needing to stop and settle at each imaging position.
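To make the dwell-time point concrete, the short calculation below (Python, with hypothetical frame rate, zone count, transition overhead, and pulse repetition rate) estimates the per-zone dwell time and pulse budget available when the carrier rotates continuously.

```python
zones = 9                   # zones per full-FOV frame
frame_rate_hz = 30.0        # full-FOV frames per second
transition_fraction = 0.10  # fraction of time assumed lost while elements transition
pulse_rate_hz = 500e3       # hypothetical laser pulse repetition rate

dwell_per_zone_s = (1.0 / frame_rate_hz) / zones * (1.0 - transition_fraction)
pulses_per_zone = dwell_per_zone_s * pulse_rate_hz

print(f"dwell per zone ~{dwell_per_zone_s * 1e3:.2f} ms, ~{pulses_per_zone:.0f} pulses per zone")
```

With these assumed numbers, each zone sees roughly 3.3 ms of dwell and on the order of a thousand pulses, without the carrier ever stopping to settle.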
However, it should be understood that with other example embodiments, the movement need not be rotation; for example, the movement can be linear movement (such as back and forth movement of the light steering optical elements).
Further still, in example embodiments, the light steering optical elements can take the form of transmissive light steering optical elements.
In other example embodiments, the light steering optical elements can take the form of diffractive optical elements (DOEs). In example embodiments, the DOEs may comprise metasurfaces. Due to their thin and lightweight nature, it is expected that using metasurfaces as the light steering optical elements will be advantageous in terms of system dimensions and cost as well as their ability in example embodiments to steer light to larger angles without incurring total internal reflection.
Further still, in other example embodiments, the light steering optical elements can take the form of reflective light steering optical elements.
Further still, the use of light steering optical elements as described herein to provide spatial stepping through zones of a field of view can also be used with lidar systems that operate using point illumination and/or with non-lidar imaging systems such as active illumination imaging systems (e.g., active illumination cameras).
These and other features and advantages of the invention will be described in greater detail below.
Operation of the system 100 (whereby the light source 102 emits optical signals 112 while the carrier 104 rotates) produces flash illuminations that step across different portions of the FOI 114 over time in response to the rotation of the carrier 104, whereby rotation of the carrier 104 causes discrete changes in the steering of the optical signals 112 over time. These discrete changes in the zones of illumination can be referenced as illumination on a zone-by-zone basis in response to the movement of the carrier 104.
The overall FOI 114 for system 100 can be a wide FOI, for example with coverage such as 135 degrees (horizontal) by 135 degrees (vertical). However, it should be understood that wider or narrower sizes for the FOI 114 could be employed if desired by a practitioner. With an example 135 degree by 135 degree FOI 114, each zone 120 could exhibit a sub-portion of the FOI such as 45 degrees (horizontal) by 45 degrees (vertical). However, it should also be understood that wider (e.g., 50×50 degrees) or narrower (e.g., 15×15 degrees) sizes for the zones 120 could be employed by a practitioner if desired. Moreover, as noted above, the sizes of the different zones could be non-uniform and/or non-square if desired by a practitioner.
The carrier 104 holds a plurality of light steering optical elements 130 (see
As noted above, in an example embodiment, the movement exhibited by the carrier 104 can be rotation 110 (e.g., clockwise or counter-clockwise rotation). With such an arrangement, each zone 120 would correspond to a number of different angular positions for rotation of carrier 104 that define an angular extent for alignment of that zone's corresponding light steering optical element 130 with the emitted optical signals 112. For example, with respect to an example embodiment where the carrier is placed vertically, Zone 1 could be illuminated while the carrier 104 is rotating through angles from 1 degree to 40 degrees with respect to the top, Zone 2 could be illuminated while the carrier 104 is rotating through angles from 41 degrees to 80 degrees, Zone 3 could be illuminated while the carrier 104 is rotating through angles from 81 degrees to 120 degrees, and so on. However, it should be understood that the various zones could have different and/or non-uniform corresponding angular extents with respect to angular positions of the carrier 104. Moreover, as noted above, it should be understood that forms of movement other than rotation could be employed if desired by a practitioner, such as a linear back and forth movement. With linear back and forth movement, each zone 120 would correspond to a number of different movement positions of the carrier 104 that define a movement extent for alignment of that zone's corresponding light steering optical element 130 with the emitted optical signals. However, it should be noted that the rotational movement can be advantageous relative to linear movement in that rotation can benefit from not experiencing a settling time as would be experienced by a linear back and forth movement of the carrier 104 (where the system may not produce stable images during the transient time periods where the direction of back and forth movement is reversed until a settling time has passed).
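Using the example 40 degree sectors described above, the mapping from continuous carrier angle to discrete zone can be sketched as follows (Python); the sector width and zone count are the hypothetical values from the example.

```python
def zone_for_carrier_angle(angle_deg, sector_deg=40.0, num_zones=9):
    """Return the discrete zone index (1-based) that is illuminated while the
    carrier sits anywhere within a 40-degree sector of its rotation."""
    angle = angle_deg % (sector_deg * num_zones)
    return int(angle // sector_deg) + 1  # e.g., 0-40 deg -> Zone 1, 40-80 deg -> Zone 2, ...

assert zone_for_carrier_angle(25.0) == 1
assert zone_for_carrier_angle(95.0) == 3
```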
In the example of
In the example of
The zone-specific sensed signals 212 will be indicative of returns 210 from objects in the FOV 214, and zone-specific lidar sub-frames can be generated from signals 212. Lidar frames that reflect the full FOV 214 can then be formed from aggregations of the zone-specific lidar sub-frames. In the example of
While
Light Source 102:
The optical signals 112 can take the form of modulated light such as laser pulses produced by an array of laser emitters. For example, the light source 102 can comprise an array of Vertical Cavity Surface-Emitting Lasers (VCSELs) on one or more dies. The VCSEL array can be configured to provide diffuse illumination or collimated illumination. Moreover, as discussed in greater detail below, a virtual dome technique for illumination can be employed. Any of a number of different laser wavelengths can be employed by the light source 102 (e.g., a 532 nm wavelength, a 650 nm wavelength, a 940 nm wavelength, etc. can be employed (where 940 nm can provide CMOS compatibility)). Additional details about example emitters that can be used with example embodiments are described in greater detail in connection with
Integrated or hybrid lenses may be used to collimate or otherwise shape the output beam from the light source 102. Moreover, driver circuitry may either be wire-bonded or vertically interconnected to the light source (e.g., VCSEL array).
The light source 102 can be deployed in a transmitter module (e.g., a barrel or the like) having a transmitter aperture that outputs optical signals 112 toward the carrier 104 as discussed above. The module may include a microlens array aligned to the emitter array, and it may also include a macrolens such as a collimating lens that collimates the emitted optical signals 112 (e.g., see
Carrier 104:
The carrier 104 can take any of a number of forms, such as a rotator, a frame, a wheel, a doughnut, a ring, a plate, a disk, or other suitable structure for connecting the light steering optical elements 130 to a mechanism for creating the movement (e.g., a spindle 118 for embodiments where the movement is rotation 110). For example, the carrier 104 could be a rotator in the form of a rotatable structural mesh that the light steering optical elements 130 fit into. As another example, the carrier 104 could be a rotator in the form of a disk structure that the light steering optical elements 130 fit into. The light steering optical elements 130 can be attached to the carrier 104 using any suitable technique for connection (e.g., adhesives (such as glues or epoxies), tabbed connectors, bolts, friction fits, etc.). Moreover, in example embodiments, one or more of the light steering optical elements 130 can be detachably connectable to the carrier 104 and/or the light steering optical elements 130 and carrier 104 can be detachably connectable to the system (or different carrier/light steering optical elements combinations can be fitted to different otherwise-similar systems) to provide different zonal acquisitions. In this manner, users or manufacturers can swap out one or more of the light steering elements (or change the order of zones for flash illumination and collection and/or change the number and/or nature of the zones 120 as desired).
While carrier 104 is movable (e.g., rotatable about an axis), it should be understood that with an example embodiment the light source 102 and sensor 202 are stationary/static with respect to an object that carries the lidar system 100 (e.g., an automobile, airplane, building, tower, etc.). However, for other example embodiments, it should be understood that the light source 102 and/or sensor 202 can be moved while the light steering optical elements 130 remain stationary. For example, the light source 102 and/or sensor 202 can be rotated about an axis so that different light steering optical elements 130 will become aligned with the light source 102 and/or sensor 202 as the light source 102 and/or sensor 202 rotates. As another example, both the light source 102/sensor 202 and the light steering optical elements 130 can be movable, and their relative rates of movement can define when and which light steering optical elements become aligned with the light source 102/sensor 202 over time.
For example,
Further still,
As another example,
Further still, while
Light Steering Optical Elements 130:
The light steering optical elements 130 can take any of a number of forms. For example, one or more of the light steering optical elements 130 can comprise optically transmissive material that exhibit a geometry that produces the desired steering for light propagating through the transmissive light steering optical element 130 (e.g., a prism).
With reference to
The examples of
It should be understood that the arc shapes corresponding to
For example,
As another example,
As yet another example, a transmissive light steering optical element that provides “up left” steering can be produced by rotating a 2D cross-sectional trapezoid like that shown by
The 2D cross-sectional geometries of the light steering optical elements 130 can be defined by a practitioner to achieve a desired degree and direction of steering; and the geometries need not match those shown by
It should also be understood that facets with non-linear radial slopes could also be employed to achieve more complex beam shapes, as shown by
Further still, it should be understood that a given light steering optical element 130 can take the form of a series of multiple transmissive steering elements to achieve a higher degree of angular steering, as indicated by the example shown in cross-section in
The transmissive material can be any material that provides suitable transmissivity for the purposes of light steering. For example, the transmissive material can be glass. As another example, the transmissive material can be synthetic material such as optically transmissive plastic or composite materials (e.g., Plexiglas, acrylics, polycarbonates, etc.). For example, Plexiglas is quite transparent to 940 nm infrared (IR) light (for reasonable thicknesses of Plexiglas). Further still, if there is a desire to filter out visible light, there are also types of Plexiglas available that absorb visible light but transmit near-IR light (e.g., G 3142 or 1146 Plexiglas). Plexiglas with desired transmissive characteristics is expected to be available from plastic distributors in various thicknesses, and such Plexiglas is readily machinable to achieve desired or custom shapes. As another example, if a practitioner desires the light steering optical elements 130 to act as a lens or prism rather than just a window, acrylic can be used as a suitable transmissive material. Acrylics can also be optically quite transparent at visible wavelengths if desired and fairly hard (albeit brittle). As yet another example, polycarbonate is also fully transparent to near-IR light (e.g., Lexan polycarbonate).
Furthermore, the transmissive material may be coated with antireflective coating on either its lower facet or upper facet or both if desired by a practitioner.
As another example, one or more of the light steering optical elements 130 can comprise diffractive optical elements (DOE) rather than transmissive optical elements (see
As an example embodiment, each DOE that serves as a light steering optical element 130 can be a metasurface that is adapted to steer light with respect to its corresponding zone 120. For example, a DOE used for transmission/emission can be a metasurface that is adapted to steer incoming light from the light source 102 into the corresponding static zone 120 for that DOE; and a DOE used for reception can be a metasurface that is adapted to steer incoming light from the corresponding zone 120 for that DOE to the sensor 202. A metasurface is a material with features spanning less than the wavelength of light (sub-wavelength features; such as sub-wavelength thickness) and which exhibits optical properties that introduce a programmable phase delay on light passing through it. In this regard, the metasurfaces can be considered to act as phase modulation elements in the optical system. Each metasurface's phase delay can be designed to provide a steering effect for the light as discussed herein; and this effect can be designed to be rotationally-invariant as discussed below and in connection with
In example embodiments, the metasurfaces can be arranged on a flat planar disk (or pair of flat planar disks) or other suitable carrier 104 or the like that rotates around the axis of rotation to bring different metasurfaces into alignment with the emitter and/or receiver apertures over time as discussed above.
A phase delay function can be used to define the phase delay properties of the metasurface and thus control the light steering properties of the metasurface. In this fashion, phase delay functions can be defined to cause different metasurfaces to steer light to or from its corresponding zone 120. In example embodiments where movement of the light steering elements 130 is rotation 110, the phase delay functions that define the metasurfaces are rotationally invariant phase delay functions so the light is steered to or from each metasurface's corresponding zone during the time period where each metasurface is aligned with the emitter or receiver. These phase delay functions can then be used as parameters by which nanostructures are imprinted or deposited on the substrate to create the desired metasurface. Examples of vendors which can create metasurfaces according to defined phase delay functions include Metalenz, Inc. of Boston, Mass. and NIL Technology ApS of Kongens Lyngby, Denmark. As examples, a practitioner can also define additional features for the metasurfaces, such as a transmission efficiency, a required rejection ratio of higher order patterns, an amount of scattering from the surface, the materials to be used to form the features (e.g., which can be dielectric or metallic), and whether anti-reflection coating is to be applied.
The discussion below in connection with
Regarding light steering, we can consider the steering in terms of radial and tangential coordinates with respect to the axis of rotation for the metasurface.
In terms of radial steering, we can steer the light away from the center of rotation or toward the center of rotation. If the metasurface's plane is vertical, the steering of light away and toward the center of rotation would correspond to the steering of light in the up and down directions respectively. To achieve such radial steering via a prism, the prism would need to maintain a constant radial slope on a facet as the prism rotates around the axis of rotation, which can be achieved by taking a section of a cone (which can be either the internal surface or the external surface of the cone depending on the desired radial steering direction). Furthermore, we can maintain a constant radial slope of more than one facet—for example, the prism may be compound (such as two prisms separated by air)—to enable wide angle radial steering without causing total internal reflection.
In terms of tangential steering, we can steer the light in a tangential direction in the direction of rotation or in a tangential direction opposite the direction of rotation. If the metasurface's plane is vertical, the steering of light tangentially in the direction of rotation and opposite the direction of rotation would correspond to the steering of light in the right and left directions respectively. To achieve such tangential steering via a prism, we want to maintain a constant tangential slope as the prism rotates around the axis of rotation, which can be achieved by taking a section of a screw-shaped surface.
Further still, one can combine radial and tangential steering to achieve diagonal steering. This can be achieved by combining prism pairs that provide radial and tangential steering to produce steering in a desired diagonal direction.
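For illustration, and not as the exact expressions used in the figures, rotationally invariant phase profiles of the kind described above can take forms such as the following, where θ is the deflection angle, λ is the wavelength, and R_m is an assumed mean radius of the annular element:

```latex
% Radial steering (cone-like profile, deflecting away from the rotation axis):
\phi_{\mathrm{rad}}(X,Y) = \frac{2\pi}{\lambda}\,\sin\theta\,\sqrt{X^{2}+Y^{2}} \pmod{2\pi}

% Tangential steering (helicoid-like profile), with R_m \approx (R_i + R_e)/2:
\phi_{\mathrm{tan}}(X,Y) = \frac{2\pi}{\lambda}\,\sin\theta\,R_m\,\arctan\!\left(\frac{X}{Y}\right) \pmod{2\pi}

% Diagonal steering can sum a radial term and a tangential term.
```

These are offered only as a sketch of the general form; the disclosed embodiments derive their phase delay plots from the specific 3D prism shapes discussed below.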
A practitioner can define a flat (2D) prism that would exhibit the light steering effect that is desired for the metasurface. This flat prism can then be rotated around an axis of rotation to add rotational symmetry (and, if needed, translational symmetry) to create a 3D prism that would produce the desired light steering effect. This 3D prism can then be translated into a phase delay equation that describes the desired light steering effect. This phase delay equation can be expressed as a phase delay plot (Z=ϕ(X,Y)). This process can then be repeated to create the phase delay plots for each of the 9 zones 120 (e.g., an upper left zone, upper zone, upper right zone, a left zone, a central zone (for which no metasurface need be deployed as the central zone can be a straight ahead pass-through in which case the light steering optical element 130 can be the optically transparent substrate that the metasurface would be imprinted on), a right zone, a lower left zone, a lower zone, and a lower right zone).
where ϕ(X,Y) represents the phase delay ϕ at coordinates X and Y of the metasurface, where λ is the laser wavelength, where θ is the deflection angle (e.g., see
As an example, one can use approximate sizes such as Re=50 mm, Ri=45 mm, and α=40° (which is approximately 0.70 rad) (see
X=R sin(t)
Y=R cos(t)
Z=C*R; (C=const>0)
In this case:
45 mm < R < 50 mm; −0.35 rad < t < 0.35 rad
One can then compare with:
As shown by
For metasurface phase delay as a function of X,Y, one can subtract n*2π, where n is an integer number (see
As an example, one can use approximate sizes such as Re=50 mm, Ri=45 mm, and α=40° (which is approximately 0.70 rad) (see
X=R sin(t)
Y=R cos(t)
Z=C*(−R); (C=const>0)
One can then compare with:
As shown by
The helicoid shape 2900 can be represented by the phase delay function expression:
For metasurface phase delay as a function of X,Y, one can subtract n*2π, where n is an integer number to yield:
As an example, one can use approximate sizes such as Re=50 mm, Ri=45 mm, and α=40° (which is approximately 0.70 rad). Furthermore, consider the helicoid surface equations:
X=R sin(t)
Y=R cos(t)
Z=C*t; (C=const)
One can then compare with:
As shown by
The helicoid shape 3100 can be represented by the phase delay function expression:
For metasurface phase delay as a function of X,Y, one can subtract n*2π, where n is an integer number to yield:
As an example, one can use approximate sizes such as Re=50 mm, Ri=45 mm, and α=40° (which is approximately 0.70 rad). Furthermore, consider the helicoid surface equations:
X=R sin(t)
Y=R cos(t)
Z=C*t; (C=const)
One can then obtain:
As shown by
For metasurface phase delay as a function of X,Y, one can subtract n*2π, where n is an integer number to yield:
where the choice of whether to use addition or subtraction at the two locations where the plus/minus operator is shown will govern whether the steering goes to the upper right, upper left, lower right, or lower left zones.
As an example, one can use approximate sizes such as Re=50 mm, Ri=45 mm, and α=40° (which is approximately 0.70 rad). Furthermore, consider the sloped helicoid surface equations:
X=R sin(t)
Y=R cos(t)
Z=R+C*t; (C=const)
One can then obtain:
As shown by
Accordingly, with this example, the expressions below show (1) a phase delay function for steering light to/from the upper right zone, (2) a phase delay function for steering light to/from the lower right zone, (3) a phase delay function for steering light to/from the lower left zone, and (4) a phase delay function for steering light to/from the upper left zone.
For upper right steering, the configuration defined by the following phase delay function is shown by
For lower right steering, the configuration defined by the following phase delay function is shown by
For lower left steering, the configuration defined by the following phase delay function is shown by
For upper left steering as discussed above, the configuration defined by the following phase delay function is shown by
While
Furthermore, for sufficiently large angles, a single prism would not suffice due to total internal reflection. However, techniques can be employed to increase the maximum deflection angle. For example, one can use two angled surfaces (with respect to the optical axis). As another example, one can use more than one prism such that the prisms are placed at a fixed separation (distance and angle) from each other. This could be applicable for both side and diagonal steerage. For example, a double prism can be made rotationally symmetric about the axis of rotation to yield a shape which provides a greater maximum deflection angle than could be achieved by a single prism that was made rotationally symmetric about the axis of rotation. Phase delay functions can then be defined for the rotationally symmetric double prism shape.
Furthermore, it should be understood that additional metasurfaces can be used in addition to the metasurfaces used for light steering. For example, a second metasurface can be positioned at a controlled spacing or distance from a first metasurface, where the first metasurface is used as a light steering optical element 130 while the second metasurface can be used as a diffuser, beam homogenizer, and/or beam shaper. For example, in instances where the rotating receiver prism or metasurface may cause excessive distortion of the image on the sensor 202, a secondary rotating (or counter-rotating) prism or metasurface ring (or a secondary static lens or metasurface) may be used to compensate for the distortion. Mechanical structures may be used to reduce stray light effects resulting from the receiver metasurface arrangement.
As yet another example, one or more of the light steering optical elements 130 can comprise a transmissive material that serves as a beam steering slab in combination with a DOE that provides diffraction of the light steered by the beam steering slab (see
As yet another example, the light steering optical elements 130 can comprise reflective materials that provide steering of the optical signals 112 via reflections. Examples of such arrangements are shown by
Sensor 202:
Sensor 202 can take the form of a photodetector array of pixels that generates signals indicative of the photons that are incident on the pixels. The sensor 202 can be enclosed in a barrel which receives incident light through an aperture and passes the incident light through receiver optics such as a collection lens, spectral filter, and focusing lens prior to reception by the photodetector array. An example of such a barrel architecture is shown by
The barrel funnels the signal light (as well as any ambient light) passed through the window toward the sensor 202. The light propagating through the barrel passes through the collection lens, spectral filter, and focusing lens on its way to the sensor 202. The barrel may be of a constant diameter (cylindrical) or may change its diameter so as to enclose each optical element within it. The barrel can be made of a dark, non-reflective and/or absorptive material at the signal wavelength.
The collection lens is designed to collect light from the zone that corresponds to the aligned light steering optical element 130 after the light has been refracted toward it.
The collection lens can be, for example, either an h = f·tan(θ), an h = f·sin(θ), or an h = f·θ lens. It may contain one or more elements, where the elements may be spherical or aspherical. The collection lens can be made of glass or plastic. The aperture area of the collection lens may be determined by its field of view, to conserve etendue, or it may be determined by the spectral filter diameter, so as to keep all elements inside the barrel the same diameter. The collection lens may be coated on its external edge or internal edge or both edges with anti-reflective coating.
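For reference, the three mappings differ only modestly at moderate field angles; the quick Python comparison below uses an arbitrary example focal length and field angle.

```python
import math

f_mm = 8.0                   # example focal length
theta = math.radians(20.0)   # example field angle

h_ftan = f_mm * math.tan(theta)    # rectilinear (f*tan(theta)) mapping
h_fsin = f_mm * math.sin(theta)    # orthographic (f*sin(theta)) mapping
h_ftheta = f_mm * theta            # equidistant ("f-theta") mapping

print(h_ftan, h_fsin, h_ftheta)    # ~2.91 mm, ~2.74 mm, ~2.79 mm image heights
```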
The spectral filter may be, for example, an absorptive filter or a dielectric-stack filter. The spectral filter may be placed in the most collimated plane of the barrel in order to reduce the input angles. Also, the spectral filter may be placed behind a spatial filter in order to ensure the cone angle entering the spectral filter. The spectral filter may have a wavelength thermal-coefficient that is approximately matched to that of the light source 102 and may be thermally-coupled to the light source 102. The spectral filter may also have a cooler or heater thermally-coupled to it in order to limit its temperature-induced wavelength drift.
The focusing lens can then focus the light exiting the spectral filter onto the photodetector array (sensor 202).
The photodetector array can comprise an array of single photon avalanche diodes (SPADs) that serve as the detection elements of the array. As another example, the photodetector array may comprise photon mixing devices that serve as the detection elements. Generally speaking, the photodetector array may comprise any sensing devices which can measure time-of-flight. Further still, the detector array may be front-side illuminated (FSI) or back-side illuminated (BSI), and it may employ microlenses to increase collection efficiency. Processing circuitry that reads out and processes the signals generated by the detector array may be in-pixel, on die, hybrid-bonded, on-board, or off-board, or any suitable combination thereof. An example architecture for sensor 202 is shown by
Returns can be detected within the signals 212 produced by the sensor 202 using techniques such as correlated photon counting. For example, time correlated single photon counting (TCSPC) can be employed. With this approach, a histogram is generated by accumulating photon arrivals within timing bins. This can be done on a per-pixel basis; however, it should be understood that a practitioner may also group pixels of the detector array together, in which case the counts from these pixels would be added up per bin. As shown by
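A minimal sketch of the TCSPC histogramming described above is shown below (Python/NumPy); the bin width, range window, and detection threshold are hypothetical choices, and photon timestamps are assumed to already be referenced to the laser emission time.

```python
import numpy as np

def tcspc_histogram(photon_times_s, bin_width_s=1e-9, max_range_m=150.0):
    """Accumulate photon arrival times (seconds after pulse emission) into
    time-of-flight bins; over many pulses, signal bins accumulate counts faster
    than ambient/noise bins."""
    c = 3.0e8
    max_tof = 2.0 * max_range_m / c
    edges = np.arange(0.0, max_tof + bin_width_s, bin_width_s)
    hist, _ = np.histogram(photon_times_s, bins=edges)
    return hist, edges

def detect_return(hist, edges, threshold):
    """Pick the peak bin if it clears a (hypothetical) detection threshold and
    convert its time of flight to range in meters."""
    peak = int(np.argmax(hist))
    if hist[peak] < threshold:
        return None
    tof = 0.5 * (edges[peak] + edges[peak + 1])
    return 3.0e8 * tof / 2.0
```

Pixel grouping, as mentioned above, would simply sum the per-pixel histograms bin by bin before detection.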
As noted above, the zones 120 may have some overlap. For example, each zone 120 may comprise 60×60 degrees and have 5×60 degrees overlap with its neighbor. Post-processing can be employed that identifies common features in return data for the two neighboring zones for use in aligning the respective point clouds.
Control Circuitry:
For ease of illustration,
It should be understood that the lidar system 100 can employ additional control circuitry, such as the components shown by
The receiver board, laser driver, and/or system controller may also include one or more processors that provide data processing capabilities for carrying out their operations. Examples of processors that can be included among the control circuitry include one or more general purpose processors (e.g., microprocessors) that execute software, one or more field programmable gate arrays (FPGAs), one or more application-specific integrated circuits (ASICs), or other compute resources capable of carrying out tasks described herein.
In an example embodiment, the light source 102 can be driven to produce relatively low power optical signals 112 at the beginning of each subframe (zone). If a return 210 is detected at sufficiently close range during this beginning time period, the system controller can conclude that an object is nearby, in which case the relatively low power is retained for the remainder of the subframe (zone) in order to reduce the risk of putting too much energy into the object. This can allow the system to operate at an eye-safe low power for short range objects. As another example, if the light source 102 is using collimated laser outputs, then the emitters that are illuminating the nearby object can be operated at the relatively low power during the remainder of the subframe (zone), while the other emitters have their power levels increased. If a return 210 is not detected at sufficiently close range or sufficiently high intensity during this beginning time period, then the system controller can instruct the laser driver to increase the output power for the optical signals 112 for the remainder of the subframe. Such modes of operation can be referred to as providing a virtual dome for eye safety. Furthermore, it should be understood that such modes of operation provide for adaptive illumination capabilities where the system can adaptively control the optical power delivered to regions within a given zone such that some regions within a given zone can be illuminated with more light than other regions within that given zone.
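The virtual dome behavior described above can be summarized by control logic along the following lines (Python); the power levels, the near-range threshold, and the lidar interface (set_power, probe_returns, emitters_hitting) are hypothetical placeholders rather than parameters of any embodiment.

```python
LOW_POWER = 0.1       # hypothetical relative power for the probe period
HIGH_POWER = 1.0
NEAR_RANGE_M = 5.0    # hypothetical "object is nearby" threshold

def set_zone_power(lidar, zone):
    """Probe the zone at low power; only raise power if nothing is detected nearby."""
    lidar.set_power(zone, LOW_POWER)
    probe = lidar.probe_returns(zone)        # returns detected during the probe period
    near_hits = [r for r in probe if r.range_m < NEAR_RANGE_M]
    if not near_hits:
        # No nearby object: full power for the remainder of the subframe.
        lidar.set_power(zone, HIGH_POWER)
    elif lidar.emitters_are_collimated():
        # Keep only the emitters illuminating the nearby object at low power,
        # while the remaining emitters are raised to full power.
        lidar.set_power(zone, HIGH_POWER,
                        except_emitters=lidar.emitters_hitting(near_hits))
```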
The control circuitry can also employ range disambiguation to reduce the risk of conflating or otherwise mis-identifying returns 210. An example of this is shown by
In another example, the control circuitry can employ interference mitigation to reduce the risk of mis-detecting interference as returns 210. For example, as noted, the returns 210 can be correlated with the optical signals 112 to facilitate discrimination of returns 210 from non-correlated light that may be incident on sensor 202. As an example, the system can use correlated photon counting to generate histograms for return detection.
The system controller can also command the rotator actuator to rotate the carrier 104 to a specific position (and then stop the rotation) if it is desired to perform single zone imaging for an extended time period. Further still, the system controller can reduce the rotation speed created by the rotation actuator if low power operation is desired at a lower frame rate (e.g., more laser cycles per zone). As another example, the rotation speed can be slowed by a factor of n by repeating the zone cycle n times and increasing the radius n times. For example, for 9 zones at 30 frames per second (fps), the system can use 27 light steering optical elements 130 around the carrier 104, and the carrier 104 can be rotated at 10 Hz.
As examples of sizes for example embodiments of a lidar system as described herein that employs rotating light steering optical elements 130 and 9 zones in the field of view, the size of the system will be significantly affected by the values for X and Y in the ring diameter for a doughnut or other similar form for carrying the light steering optical elements 130. We can assume that a 5 mm×5 mm emitter array can be focused to 3 mm×3 mm by increasing beam divergence by 5/3. We can also assume for purposes of this example that 10% of time can be sacrificed in transitions between light steering optical elements 130. Each arc for a light steering optical element 130 can be 3 mm×10, or 30 mm of perimeter, which yields a total perimeter of 9×30 mm (270 mm). The diameter for the carrier of the light steering optical elements can thus be approximately 270/3.14 (86 mm). Moreover, depth can be constrained by cabling and lens focal length, which we can assume to be around 5 cm.
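The sizing arithmetic above, together with the 9-zones-at-30-fps example from the preceding paragraph, can be reproduced with a few lines of Python using the same assumptions stated in the text.

```python
import math

# Base case: 9 light steering elements, a 3 mm optical footprint, and roughly
# 10% of the dwell sacrificed to transitions -> 30 mm of arc per element.
arc_per_element_mm = 3.0 / 0.10          # 30 mm
perimeter_mm = 9 * arc_per_element_mm    # 270 mm
diameter_mm = perimeter_mm / math.pi     # ~86 mm

# Slower-rotation variant from the preceding paragraph: repeat the 9-zone
# cycle 3 times (27 elements) so 30 fps needs only 10 Hz of rotation,
# at the cost of roughly 3x the carrier diameter.
rotation_hz = 9 * 30 / 27                # 10 Hz

print(f"~{diameter_mm:.0f} mm diameter with 9 elements; {rotation_hz:.0f} Hz with 27 elements")
```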
Spatial-Stepping Through Zones for Scanning Lidar Systems:
The spatial stepping techniques discussed above can be used with lidar systems other than flash lidar if desired by a practitioner. For example, the spatial stepping techniques can be combined with scanning lidar systems that employ point illumination rather than flash illumination. With this approach, the aligned light steering optical elements 130 will define the zone 120 within which a scanning lidar transmitter directs its laser pulse shots over a scan pattern (and the zone 120 from which the lidar receiver will detect returns from these shots).
The example scanning lidar transmitter 3800 shown by
The light source 102 fires laser pulses 3822 in response to firing commands 3820 received from the control circuit 3806. In the example of
The mirror subsystem 3804 includes a mirror that is scannable to control where the lidar transmitter 3800 is aimed. In the example embodiment of
In the example of
A practitioner may choose to control the scanning of mirrors 3810 and 3812 using any of a number of scanning techniques to achieve any of a number of shot patterns.
For example, mirrors 3810 and 3812 can be controlled to scan line by line through the field of view in a grid pattern, where the control circuit 3806 provides firing commands 3820 to the light source 102 to achieve a grid pattern of shots 3822 as shown by the example of
As another example, in a particularly powerful embodiment, mirror 3810 can be driven in a resonant mode according to a sinusoidal signal while mirror 3812 is driven in a point-to-point mode according to a step signal that varies as a function of the range points to be targeted with laser pulses 3822 by the lidar transmitter 3800. This agile scan approach can yield a shot pattern for intelligently selected laser pulse shots 3822 as shown by
For example, the control circuit 3806 can intelligently select which range points in the relevant zone 120 should be targeted with laser pulse shots (e.g., based on an analysis of a scene that includes the relevant zone 120 so that salient points are selected for targeting—such as points in high contrast areas, points near edges of objects in the field, etc.; based on an analysis of the scene so that particular software-defined shot patterns are selected (e.g., foveation shot patterns, etc.)). The control circuit 3806 can then generate a shot list of these intelligently selected range points that defines how the mirror subsystem will scan and the shot pattern that will be achieved. The shot list can thus serve as an ordered listing of range points (e.g., scan angles for mirrors 3810 and 3812) to be targeted with laser pulse shots 3822. Mirror 3810 can be operated as a fast-axis mirror while mirror 3812 is operated as a slow-axis mirror. When operating in such a resonant mode, mirror 3810 scans through scan angles in a sinusoidal pattern. In an example embodiment, mirror 3810 can be scanned at a frequency in a range between around 100 Hz and around 20 kHz. In a preferred embodiment, mirror 3810 can be scanned at a frequency in a range between around 10 kHz and around 15 kHz (e.g., around 12 kHz). As noted above, mirror 3812 can be driven in a point-to-point mode according to a step signal that varies as a function of the range points on the shot list. Thus, if the lidar transmitter 3800 is to fire a laser pulse 3822 at a particular range point having an elevation of X, then the step signal can drive mirror 3812 to scan to the elevation of X. When the lidar transmitter 3800 is later to fire a laser pulse 3822 at a particular range point having an elevation of Y, then the step signal can drive mirror 3812 to scan to the elevation of Y. In this fashion, the mirror subsystem 3804 can selectively target range points that are identified for targeting with laser pulses 3822. It is expected that mirror 3812 will scan to new elevations at a much slower rate than mirror 3810 will scan to new azimuths. As such, mirror 3810 may scan back and forth at a particular elevation (e.g., left-to-right, right-to-left, and so on) several times before mirror 3812 scans to a new elevation. Thus, while the mirror 3812 is targeting a particular elevation angle, the lidar transmitter 3800 may fire a number of laser pulses 3822 that target different azimuths at that elevation while mirror 3810 is scanning through different azimuth angles. Because of the intelligent selection of range points for targeting with the shots 3822, it should be understood that the scan pattern exhibited by the mirror subsystem 3804 may include a number of line repeats, line skips, interline skips, and/or interline detours as a function of the ordered scan angles for the shots on the shot list.
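The fast-axis/slow-axis coordination described above can be sketched as follows (Python); the shot list format and the mirror and firing interfaces are hypothetical stand-ins, not the interfaces of the incorporated patents.

```python
def drive_mirrors(shot_list, slow_axis, fast_axis, fire):
    """shot_list: ordered (azimuth_deg, elevation_deg) range points.
    The slow-axis mirror steps to each new elevation on the list, while the
    fast-axis mirror resonates sinusoidally; a shot fires when the fast axis
    sweeps through the commanded azimuth at the current elevation."""
    current_elevation = None
    for azimuth, elevation in shot_list:
        if elevation != current_elevation:
            slow_axis.step_to(elevation)        # point-to-point step signal
            current_elevation = elevation
        fast_axis.wait_until_azimuth(azimuth)   # resonant sinusoidal scan
        fire(azimuth, elevation)
```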
Control circuit 3806 is arranged to coordinate the operation of the light source 3802 and mirror subsystem 3804 so that laser pulses 3822 are transmitted in a desired fashion. In this regard, the control circuit 3806 coordinates the firing commands 3820 provided to light source 3802 with the mirror control signal(s) 3830 provided to the mirror subsystem 3804. In the example of
As discussed in the above-referenced and incorporated U.S. Pat. No. 11,442,152 and U.S. Patent App. Pub. No. 2022/0308171, control circuit 3806 can use a laser energy model to schedule the laser pulse shots 3822 to be fired toward targeted range points. This laser energy model can model the available energy within the laser source 102 for producing laser pulses 3822 over time in different shot schedule scenarios. For example, the laser energy model can model the energy retained in the light source 102 after shots 3822 and quantitatively predict the available energy amounts for future shots 3822 based on prior history of laser pulse shots 3822. These predictions can be made over short time intervals—such as time intervals in a range from 10-100 nanoseconds. By modeling laser energy in this fashion, the laser energy model helps the control circuit 3806 make decisions on when the light source 102 should be triggered to fire laser pulses 3822.
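As a rough illustration only (the actual laser energy model is detailed in the incorporated patents), a bookkeeping model of available energy might look like the following Python sketch, with hypothetical recharge and depletion parameters.

```python
def available_energy(shot_times_s, recharge_per_s=1e6, max_energy=1.0,
                     depletion_fraction=0.8):
    """Predict the energy available for each shot in a scheduled sequence.
    Energy recharges linearly between shots (capped at max_energy); each shot
    draws a fixed fraction of whatever is stored when it fires."""
    energy, last_t, out = max_energy, None, []
    for t in shot_times_s:
        if last_t is not None:
            energy = min(max_energy, energy + recharge_per_s * (t - last_t))
        out.append(energy * depletion_fraction)   # energy delivered by this shot
        energy -= energy * depletion_fraction     # energy retained after the shot
        last_t = t
    return out

# Closely spaced shots draw on less stored energy than widely spaced ones:
print(available_energy([0.0, 50e-9, 100e-9, 10e-6]))
```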
Control circuit 3806 can include a processor that provides the decision-making functionality described herein. Such a processor can take the form of a field programmable gate array (FPGA) or application-specific integrated circuit (ASIC) which provides parallelized hardware logic for implementing such decision-making. The FPGA and/or ASIC (or other compute resource(s)) can be included as part of a system on a chip (SoC). However, it should be understood that other architectures for control circuit 3806 could be used, including software-based decision-making and/or hybrid architectures which employ both software-based and hardware-based decision-making. The processing logic implemented by the control circuit 3806 can be defined by machine-readable code that is resident on a non-transitory machine-readable storage medium such as memory within or available to the control circuit 3806. The code can take the form of software or firmware that define the processing operations discussed herein for the control circuit 3806.
As the lidar system 100 of
Furthermore, the lidar systems 100 of
The lidar receiver 4000 comprises photodetector circuitry 4002 which includes the sensor 202, where sensor 202 can take the form of a photodetector array. The photodetector array comprises a plurality of detector pixels 4004 that sense incident light and produce a signal representative of the sensed incident light. The detector pixels 4004 can be organized in the photodetector array in any of a number of patterns. In some example embodiments, the photodetector array can be a two-dimensional (2D) array of detector pixels 4004. However, it should be understood that other example embodiments may employ a one-dimensional (1D) array of detector pixels 4004 (or 2 differently oriented 1D arrays of pixels 4004) if desired by a practitioner.
The photodetector circuitry 4002 generates a return signal 4006 in response to a pulse return 4022 that is incident on the photodetector array. The choice of which detector pixels 4004 to use for collecting a return signal 4006 corresponding to a given return 4022 can be made based on where the laser pulse shot 3822 corresponding to the return 4022 was targeted. Thus, if a laser pulse shot 3822 is targeting a range point located at a particular azimuth angle, elevation angle pair; then the lidar receiver 4000 can map that azimuth, elevation angle pair to a set of pixels 4004 within the sensor 202 that will be used to detect the return 4022 from that laser pulse shot 3822. The azimuth, elevation angle pair can be provided as part of scheduled shot information 4012 that is communicated to the lidar receiver 4000. The mapped pixel set can include one or more of the detector pixels 4004. This pixel set can then be activated and read out from to support detection of the subject return 4022 (while the pixels 4004 outside the pixel set are deactivated so as to minimize potential obscuration of the return 4022 within the return signal 4006 by ambient or interfering light that is not part of the return 4022 but would be part of the return signal 4006 if unnecessary pixels 4004 were activated when return 4022 was incident on sensor 202). In this fashion, the lidar receiver 4000 will select different pixel sets of the sensor 202 for readout in a sequenced pattern that follows the sequenced spatial pattern of the laser pulse shots 3822. Return signals 4006 can be read out from the selected pixel sets, and these return signals 4006 can be processed to detect returns 4022 therewithin.
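A simplified sketch (Python) of mapping a targeted azimuth/elevation pair to a readout pixel set is shown below; the array dimensions, the zone angular extents, and the neighborhood radius are hypothetical.

```python
def pixel_set_for_shot(az_deg, el_deg, rows=128, cols=128,
                       fov_az=(-22.5, 22.5), fov_el=(-22.5, 22.5), radius=1):
    """Map a targeted (azimuth, elevation) within the current zone to the
    detector pixels to activate: the pixel at the corresponding image location
    plus its immediate neighbors."""
    col = int((az_deg - fov_az[0]) / (fov_az[1] - fov_az[0]) * (cols - 1))
    row = int((el_deg - fov_el[0]) / (fov_el[1] - fov_el[0]) * (rows - 1))
    return {(r, c)
            for r in range(max(0, row - radius), min(rows, row + radius + 1))
            for c in range(max(0, col - radius), min(cols, col + radius + 1))}
```

All other pixels would remain deactivated for that shot, consistent with the selective readout described above.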
Examples of circuitry and control logic that can be used for this selective pixel set readout are described in U.S. Pat. Nos. 9,933,513, 10,386,467, 10,663,596, and 10,743,015, U.S. Patent App. Pub. No. 2022/0308215, and U.S. patent application Ser. No. 17/490,265, filed Sep. 30, 2021, entitled “Hyper Temporal Lidar with Multi-Processor Return Detection” and U.S. patent application Ser. No. 17/554,212, filed Dec. 17, 2021, entitled “Hyper Temporal Lidar with Controllable Tilt Amplitude for a Variable Amplitude Scan Mirror”, the entire disclosures of each of which are incorporated herein by reference. These incorporated patents and patent applications also describe example embodiments for the photodetector circuitry 4002, including the use of a multiplexer to selectively read out signals from desired pixel sets as well as an amplifier stage positioned between the sensor 202 and multiplexer.
Signal processing circuit 4020 operates on the return signal 4006 to compute return information 4024 for the targeted range points, where the return information 4024 is added to the lidar point cloud 4044. The return information 4024 may include, for example, data that represents a range to the targeted range point, an intensity corresponding to the targeted range point, an angle to the targeted range point, etc. As described in the above-referenced and incorporated U.S. Pat. Nos. 9,933,513, 10,386,467, 10,663,596, and 10,743,015, U.S. Patent App. Pub. No. 2022/0308215, and U.S. patent application Ser. Nos. 17/490,265 and 17/554,212, the signal processing circuit 4020 can include an analog-to-digital converter (ADC) that converts the return signal 4006 into a plurality of digital samples. The signal processing circuit 4020 can process these digital samples to detect the returns 4022 and compute the return information 4024 corresponding to the returns 4022. In an example embodiment, the signal processing circuit 4020 can perform time of flight (TOF) measurement to compute range information for the returns 4022. However, if desired by a practitioner, the signal processing circuit 4020 could employ time-to-digital conversion (TDC) to compute the range information.
The lidar receiver 4000 can also include circuitry that can serve as part of a control circuit for the lidar system 100. This control circuitry is shown as a receiver controller 4010 in
The receiver controller 4010 and/or signal processing circuit 4020 may include one or more processors. These one or more processors may take any of a number of forms. For example, the processor(s) may comprise one or more microprocessors. The processor(s) may also comprise one or more multi-core processors. As another example, the one or more processors can take the form of a field programmable gate array (FPGA) or application-specific integrated circuit (ASIC) which provide parallelized hardware logic for implementing their respective operations. The FPGA and/or ASIC (or other compute resource(s)) can be included as part of a system on a chip (SoC). However, it should be understood that other architectures for such processor(s) could be used, including software-based decision-making and/or hybrid architectures which employ both software-based and hardware-based decision-making. The processing logic implemented by the receiver controller 4010 and/or signal processing circuit 4020 can be defined by machine-readable code that is resident on a non-transitory machine-readable storage medium such as memory within or available to the receiver controller 4010 and/or signal processing circuit 4020. The code can take the form of software or firmware that define the processing operations discussed herein.
In operation, the lidar system 100 of
Spatial-Stepping Through Zones for Non-Lidar Imaging Systems:
The spatial stepping techniques discussed above can be used with imaging systems that need not use lidar if desired by a practitioner. For example, there are many applications where a FOV needs to be imaged under a variety of ambient lighting conditions where signal acquisition would benefit from better illumination of the FOV. Examples of such imaging applications include but are not limited to imaging systems that employ active illumination, such as security imaging (e.g., where a perimeter, boundary, and/or border needs to be imaged under diverse lighting conditions such as day and night), microscopy (e.g., fluorescence microscopy), and hyperspectral imaging.
With the spatial stepping techniques described herein, the discrete changes in zonal illumination/acquisition, even while the carrier is continuously moving, allow a receiver to minimize the number of readouts, particularly for embodiments that employ a CMOS sensor such as a CMOS active pixel sensor (APS) or CMOS image sensor (CIS). Since the zone of illumination will change on a discrete basis with relatively long dwell times per zone (as compared to a continuously scanned illumination approach), the photodetector pixels will be imaging the same solid angle of illumination for the duration of an integration for a given zone. This stands in contrast to non-CMOS scanning imaging modalities such as time delay integration (TDI) imagers, which are based on charge-coupled devices (CCDs). With TDI imagers, the field of view is scanned with illuminating light continuously (as opposed to discrete zonal illumination), and this requires precise synchronization of the charge transfer rate of the CCD with the mechanical scanning of the imaged objects. Furthermore, TDI imagers require a linear scan of the object along the same axis as the TDI imager. With the zonal illumination/acquisition approach for example embodiments described herein, imaging systems are able to use less expensive CMOS pixels with significantly reduced read noise penalties and without requiring fine mechanical alignments with respect to scanning.
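As a rough illustration of this point, the following sketch (in Python, with a hypothetical 9-zone layout, frame period, and readout time chosen purely for illustration) shows how one frame can be divided into discrete zone dwells, with the pixels integrating for the full dwell and being read out once per zone:

```python
# Minimal sketch of the discrete zonal dwell/readout schedule described above.
# The zone count, frame period, and readout time are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ZoneDwell:
    zone_index: int
    start_s: float
    integration_s: float   # pixels image the same solid angle for this whole window
    readout_s: float       # a single readout per zone dwell

def build_schedule(num_zones=9, frame_period_s=0.1, readout_s=0.002):
    """Split one frame into equal zone dwells, each ending in one readout."""
    dwell_s = frame_period_s / num_zones
    integration_s = dwell_s - readout_s
    return [ZoneDwell(zone, zone * dwell_s, integration_s, readout_s)
            for zone in range(num_zones)]

if __name__ == "__main__":
    for dwell in build_schedule():
        print(f"zone {dwell.zone_index}: integrate {dwell.integration_s*1e3:.1f} ms, "
              f"then one readout ({dwell.readout_s*1e3:.1f} ms)")
```

The point of the sketch is simply that each zone dwell ends in a single readout, so the number of readouts per frame equals the number of zones rather than the number of scan steps.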
Thus, if desired by a practitioner, a system 100 as discussed above in connection with, for example,
With example embodiments for active illumination imaging systems 100 that employ spatial stepping, it should be understood that the light source 102 need not be a laser. For example, the light source 102 can be a light emitting diode (LED) or other type of light source so long as the light it produces can be sufficiently collimated or otherwise conditioned by appropriate optics (e.g., a collimating lens or a microlens array) before entering a light steering optical element 130. It should also be understood that the design parameters for the receiver should be selected so that photodetection exhibits sufficient sensitivity in the emitter's emission/illumination band and so that the spectral filter (if used) has sufficient transmissivity in that band.
With example embodiments for active illumination imaging systems 100 that employ spatial stepping, it should also be understood that the sensor 202 may be a photodetector array that comprises an array of CMOS image sensor pixels (e.g., APS or CIS pixels), CCD pixels, or other photoelectric devices which convert optical energy into an electrical signal, directly or indirectly. Furthermore, the signals generated by the sensor 202 may be indicative of the number and/or wavelength of the incident photons. In an example embodiment, the pixels may have a spectral or color filter deposited on them in a pattern such as a mosaic pattern, e.g., RGGB (red, green, green, blue), so that the pixels provide some spectral information regarding the detected photons.
Furthermore, in an example embodiment, the spectral filter used in the receiver architecture for the active illumination imaging system 100 may be placed or deposited directly on the photodetector array; or the spectral filter may comprise an array of filters (such as RGGB filters).
In another example embodiment for the active illumination imaging system 100, the light steering optical elements 130 may incorporate a spectral filter. For example, in an example embodiment with fluorescence microscopy, the spectral filter of a light steering optical element 130 may be centered on a fluorescence emission peak of one or more fluorophores for the system. Moreover, with an example embodiment, more than one light steering optical element 130 may be used to illuminate and image a specific zone (or a first light steering optical element 130 may be used for the emitter while a second light steering optical element 130 may be used for the receiver). Each of the light steering optical elements 130 that correspond to the same zone may be coated with a different spectral filter corresponding to a different spectral band. As an example, continuing with the fluorescence microscopy use case, the system may illuminate the bottom right of the field with a single light steering optical element 130 for a time period (e.g., 100 msec) at 532 nm, while the system acquires images from that zone using a first light steering optical element 130 containing a first spectral filter (e.g., a 20 nm-wide, 560 nm-centered spectral filter) for a first portion of the relevant time period (e.g., the first 60 msec) and then with a second light steering optical element 130 containing a second spectral filter (e.g., a 30 nm-wide, 600 nm-centered spectral filter) for the remaining portion of the relevant time period (e.g., the next 40 msec), where these two spectral filters correspond to the emissions of two fluorophore species in the subject zone.
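For concreteness, the following sketch (in Python) lays out this hypothetical dual-filter acquisition plan for a single illuminated zone; the 100 msec dwell, filter bands, and fluorophore labels are illustrative values carried over from the example above rather than requirements of the system:

```python
# Minimal sketch of the dual-filter acquisition plan for one illuminated zone.
# The dwell time, filter bands, and fluorophore labels are hypothetical,
# taken from the illustrative figures in the text.

ZONE_DWELL_MS = 100          # zone illuminated at 532 nm for this long

acquisitions = [
    # (duration_ms, filter_center_nm, filter_width_nm, hypothetical fluorophore)
    (60, 560, 20, "fluorophore A"),
    (40, 600, 30, "fluorophore B"),
]

elapsed = 0
for duration, center, width, label in acquisitions:
    print(f"t={elapsed:3d}-{elapsed + duration:3d} ms: acquire through a "
          f"{width} nm-wide filter centered at {center} nm ({label})")
    elapsed += duration

assert elapsed == ZONE_DWELL_MS   # the two acquisitions tile the zone dwell
```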
As noted above, the imaging techniques described herein can be employed with security cameras. For example, security cameras may be used for perimeter or border security, where a large FOV may need to be imaged day and night at high resolution. In such a scenario, it can be expected that the information content will be very sparse (objects of interest will rarely appear, and will occupy only a small portion of the field of view when present). An active illumination camera that employs imaging techniques described herein with spatial stepping could be mounted at a location from which it can image the desired FOV.
For an example embodiment, consider a large FOV that is to be imaged day and night with fine resolution. For example, a field of view of 160 degrees horizontal by 80 degrees vertical may need to be imaged such that a person 1.50 m tall is imaged by 6 pixels at a distance of 500 m. At 500 m, 1.50 m subtends arctan(1.5/500)=0.17 degrees. This means that each pixel in the sensor needs to image 0.028×0.028 degrees and that sufficient illumination power must be emitted to generate a sufficiently high SNR in the receiver to overcome electrical noise in the receiver. With a traditional non-scanning camera, we would need an image sensor with (160×80)/(0.028×0.028)=5,700×2,900 pixels, i.e., 16 Mpixels, in which case a very expensive camera would be needed to support this field of view and resolution. Mechanically scanning cameras that try to scan this FOV at this resolution would be slow, and the time between revisits of the same angular position would be too long, in which case critical images may be lost. A mechanically scanning camera would also only be able to image one zone at a given time before slowly moving to the next location. Moreover, the illumination power required to illuminate a small, low-reflectivity object (for example, at night) when illuminating the whole FOV would be very high, resulting in high power consumption, high cost, and high heat dissipation. However, the architecture described herein can image with the desired parameters at much lower cost. For example, using the architecture described herein, we may use 9 light steering optical elements, each corresponding to a zone of illumination and acquisition of 55 degrees horizontal by 30 degrees vertical. This provides 1.7×3.5 degrees of overlap between zones. The image sensor for this example needs only (55×30)/(0.028×0.028)=2,000×1,000 pixels=2 Mpixels, and the required optics would be small and introduce less distortion. In cases where the dominant noise source is proportional to the integration time (e.g., sensor dark noise), the required emitter power would be reduced by a factor of sqrt(9)=3, because each integration is 9 times shorter than that of a full-field system. Each point in the field of view will be imaged at the same frame rate as with the original single-FOV camera.
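To make the arithmetic in this example easy to check, the following sketch (in Python) reproduces the sizing figures above; the per-pixel angle, the 3×3 zone layout, and the rounding are taken from the text, and no new system parameters are introduced:

```python
# Minimal sketch reproducing the sizing arithmetic in the example above.
# The 160x80 degree FOV, 6-pixel person at 500 m, and 55x30 degree zones are
# the figures from the text; values are rounded the same way the text rounds.

import math

fov_h_deg, fov_v_deg = 160.0, 80.0
pixel_deg = math.degrees(math.atan(1.5 / 500.0)) / 6.0    # ~0.17 deg person over 6 pixels
print(f"per-pixel angle: {pixel_deg:.3f} deg")             # ~0.028 deg

# Single full-field camera.
full_field_pixels = (fov_h_deg / pixel_deg) * (fov_v_deg / pixel_deg)
print(f"full-field sensor: ~{full_field_pixels / 1e6:.0f} Mpixels")   # ~16 Mpixels

# Nine zones of 55 x 30 degrees each (3 x 3 layout with overlap).
zone_h_deg, zone_v_deg = 55.0, 30.0
zone_pixels = (zone_h_deg / pixel_deg) * (zone_v_deg / pixel_deg)
print(f"per-zone sensor: ~{zone_pixels / 1e6:.0f} Mpixels")           # ~2 Mpixels

# If the dominant noise grows with the square root of integration time,
# a 9x shorter integration per zone relaxes the emitter power by sqrt(9).
print(f"emitter power reduction factor: {math.sqrt(9):.0f}x")
```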
Furthermore, as noted above, the imaging techniques described herein can be employed with microscopy, such as active illumination microscopy (e.g., fluorescence microscopy). In some microscopy applications there is a desire to reduce the total excitation power, and there is also a desire to achieve maximal imaging resolution without using very large lenses or focal plane arrays. Furthermore, there is sometimes a need to complete an acquisition of a large field of view in a short period of time, e.g., to achieve screening throughput or to prevent degradation of a sample. Imaging techniques like those described herein can be employed to improve performance. For example, a collimated light source can be transmitted through a rotating slab ring which steers the light to discrete FOIs via the light steering optical elements 130. A synchronized ring then diverts the light back to the sensor 202 through a lens, thus reducing the area of the sensor's FPA. The assumption is that regions which are not illuminated contribute negligible signal (e.g., there is negligible autofluorescence) and that the system operates with a sufficiently high numerical aperture such that the collimation assumption for the returned light still holds. In microscopy, some of the FPAs are very expensive (e.g., cooled scientific CCD cameras with single-photon sensitivity, or high-sensitivity single-photon sensors for fluorescence lifetime imaging (FLIM) or fluorescence correlation spectroscopy (FCS)), and it is desirable to reduce the number of pixels in the FPA in order to reduce the cost of these systems.
As yet another example, the imaging techniques described herein can also be employed with hyperspectral imaging. For example, these imaging techniques can be applied to hyperspectral imaging using etalons or Fabry-Perot interferometers (e.g., see U.S. Pat. No. 10,012,542). In these systems, a cavity (which may be a tunable cavity) is formed between two mirrors, and the cavity only transmits light whose wavelength obeys certain conditions (e.g., an integer number of wavelengths fits within a round trip of the cavity). It is often desirable to construct high-Q systems, i.e., with very sharp transmission peaks and often with high finesse. These types of structures may also be deposited on top of image sensor pixels to achieve spectral selectivity. The main limitation of such systems is light throughput, or etendue. In order to achieve high-finesse Fabry-Perot imaging, the incoming light must be collimated, and in order to conserve etendue, the aperture of the conventional FPI (Fabry-Perot interferometer) must increase. A compromise is typically made whereby the FOV of these systems is made small (for example, by placing them very far, such as meters, from the imaged objects), which results in less collected light and lower resolution. This can be addressed by flooding the scene with very high-power light, but this results in higher-power, more expensive systems. Accordingly, the imaging techniques described herein which employ spatial stepping can be used to maintain a larger FOV for hyperspectral imaging applications such as FPIs.
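By way of a rough, hedged illustration of that resonance condition, the following sketch (in Python) lists the wavelengths an idealized lossless cavity would transmit within a band, using the normal-incidence condition m*lam = 2*n*L; the 10 μm mirror gap and the 500-700 nm band are hypothetical values chosen only for the example:

```python
# Minimal sketch of the Fabry-Perot resonance condition mentioned above:
# at normal incidence, a cavity of optical length n*L transmits wavelengths
# for which an integer number of wavelengths fits the round trip, m * lam = 2 * n * L.
# The mirror gap and wavelength band below are illustrative assumptions.

def transmission_peaks(gap_m, refractive_index=1.0, band=(500e-9, 700e-9)):
    """Return the resonant wavelengths (meters) of the cavity within a band."""
    round_trip = 2.0 * refractive_index * gap_m
    low, high = band
    peaks = []
    m = int(round_trip // low)           # largest order whose wavelength is still >= low
    while m >= 1:
        lam = round_trip / m
        if lam > high:                    # orders below this fall outside the band
            break
        peaks.append(lam)
        m -= 1
    return peaks

if __name__ == "__main__":
    for lam in transmission_peaks(gap_m=10e-6):
        print(f"{lam * 1e9:.1f} nm")
```

A higher-finesse cavity makes these transmission peaks narrower but does not change their positions.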
With the rotating light steering optical elements 130 as described herein, the directional (partially collimated) illumination light can be passed through the rotating light steering optical elements 130, thereby illuminating one zone 120 at a time, and for a sufficient amount of time for the hyperspectral camera to collect sufficient light through its cavity. A second ring with a sufficiently large aperture steers the reflected light to the FPI. Thus, the field of view into the FPI is reduced (e.g., by 9×), which results in a 9× decrease in its aperture area and therefore in its cost (or, alternatively, an increase in manufacturing yield). If it is a tunable FPI, then the actuators which scan the separation between its mirrors would need to actuate a smaller mass, making them less expensive and less susceptible to vibration at low frequencies. Note that while the size of the FPI is reduced, the illumination power is not reduced: for a 9× smaller field, there is 9× less time to deliver the energy, so the required power is the same. However, in cases where the dominant noise source is proportional to the acquisition time (e.g., in SWIR or mid-infrared (MIR) hyperspectral imaging, such as for gas detection), we do get a reduction in illumination power because the noise would scale down with the square root of the integration time.
While the invention has been described above in relation to its example embodiments, various modifications may be made thereto that still fall within the invention's scope. These and other modifications to the invention will be recognizable upon review of the teachings herein.
This patent application is a continuation of PCT patent application PCT/US22/47262 (designating the US), filed Oct. 20, 2022, and entitled “Systems and Methods for Spatially-Stepped Imaging”, which claims priority to (1) U.S. provisional patent application Ser. No. 63/271,141, filed Oct. 23, 2021, and entitled “Spatially-Stepped Flash Lidar System”, (2) U.S. provisional patent application Ser. No. 63/281,582, filed Nov. 19, 2021, and entitled “System and Method for Spatially-Stepped Flash Lidar”, and (3) U.S. provisional patent application Ser. No. 63/325,231, filed Mar. 30, 2022, and entitled “Systems and Methods for Spatially-Stepped Flash Lidar Using Diffractive Optical Elements for Light Steering”, the entire disclosures of each of which are incorporated herein by reference. This patent application also claims priority to (1) U.S. provisional patent application Ser. No. 63/271,141, filed Oct. 23, 2021, and entitled “Spatially-Stepped Flash Lidar System”, (2) U.S. provisional patent application Ser. No. 63/281,582, filed Nov. 19, 2021, and entitled “System and Method for Spatially-Stepped Flash Lidar”, and (3) U.S. provisional patent application Ser. No. 63/325,231, filed Mar. 30, 2022, and entitled “Systems and Methods for Spatially-Stepped Flash Lidar Using Diffractive Optical Elements for Light Steering”, the entire disclosures of each of which are incorporated herein by reference.
Number | Date | Country
---|---|---
63271141 | Oct 2021 | US
63281582 | Nov 2021 | US
63325231 | Mar 2022 | US
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/US22/47262 | Oct 2022 | US
Child | 17970761 | | US