The term “LIDAR” refers to a technique for measuring distances to visible surfaces by emitting light and measuring properties of the reflections of the emitted light.
A LIDAR system has at least one light emitter and a corresponding light sensor. The light emitter may comprise a laser that directs light in the direction of an object or surface. The light sensor may comprise a photodetector, such as a photomultiplier or avalanche photodiode (APD), that converts light intensity to a corresponding electrical signal. Optical elements such as lenses may be used in the light transmission and reception paths to focus light, depending on the particular nature of the LIDAR system.
A LIDAR system has signal processing components that analyze reflected light signals to determine the distances to surfaces from which the emitted laser light has been reflected. For example, the system may measure the “flight time” of a light signal as it travels from the laser, to the surface, and back to the light sensor. A distance is then calculated based on the flight time and the known speed of light.
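As a hedged illustration of the calculation just described (a sketch, not code from the described system; the function name is invented), the distance follows from the round-trip flight time and the known speed of light:

```python
# Illustrative sketch: computing one-way distance from a measured
# round-trip ("flight") time of a light signal.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_flight_time(flight_time_s: float) -> float:
    """Return the one-way distance to a reflecting surface.

    The light travels to the surface and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_LIGHT_M_PER_S * flight_time_s / 2.0

# A reflection received 500 nanoseconds after emission corresponds to
# a surface roughly 75 meters away.
d = distance_from_flight_time(500e-9)
```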
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.
As discussed above, LIDAR systems may measure the “flight time” of a light signal as it travels from a laser, to a surface, and back to a light sensor. A distance is then calculated based on the flight time and the known speed of light. Generally, performance limitations of components of a LIDAR system's optical system may affect the accuracy or dependability of the distance calculations. For example, some existing LIDAR systems may have a limited detection distance and/or may be relatively large in order to accommodate less efficient optical components. Improvements to a LIDAR system may be realized by, for example, incorporating an optical system that introduces astigmatism to shape a beam emitted by the laser of the LIDAR system (e.g., a more-focused beam and better distance/range may be realized for such a LIDAR system). In some examples, a freeform lens surface may be used to introduce beam-shaping astigmatism. In such examples, additional optical elements need not be added to the optical system, and thus a sensor assembly of a LIDAR system may remain relatively small.
Examples herein describe an optical system that includes, among other things, a Cooke triplet lens or other three-element group that has a freeform surface. Such an optical system may be used to shape the beam of a laser diode, for example, which typically produces a beam that diverges at different rates in different axes. Such an optical system may be well-suited for use in any of a variety of applications, particularly those that incorporate laser diodes. An example of such an application is a LIDAR system, which may be used in various types of machine vision systems to produce point clouds indicating three-dimensional coordinates of surfaces that are visible from the perspective of the LIDAR system. As an example, a LIDAR system may be used by guidance, navigation, and control systems of autonomous vehicles such as automobiles, aircraft, boats, etc.
Generally, a lens having a freeform surface may include digitally generated curves that render biconic, rotationally asymmetric, or atoric surfaces, just to name a few examples. This is in contrast to spherical lens surfaces, for instance. As used herein, a biconic surface is a surface having two differing curvatures along a first axis and a second axis, and a biconic lens is a lens which includes a biconic surface or which has similar properties (e.g., providing differing focal lengths in different axes).
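As an illustrative aside, one common parameterization of a biconic surface (the sag equation conventionally used in optical design tools; not design data from these examples) computes the surface height from separate curvatures and conic constants along the two axes:

```python
import math

def biconic_sag(x: float, y: float, cx: float, cy: float,
                kx: float = 0.0, ky: float = 0.0) -> float:
    """Surface height (sag) of a biconic surface at point (x, y).

    cx, cy: curvatures (1/radius) along the x- and y-axes.
    kx, ky: conic constants for each axis (0 for no conic term).
    """
    numerator = cx * x ** 2 + cy * y ** 2
    root = 1.0 - (1.0 + kx) * cx ** 2 * x ** 2 - (1.0 + ky) * cy ** 2 * y ** 2
    return numerator / (1.0 + math.sqrt(root))

# With equal curvatures and zero conic constants the surface reduces to
# a sphere, so the sag is identical along the two axes; with differing
# curvatures (cx != cy) the sag differs, giving differing focal lengths
# in different axes, as described above.
```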
In some examples, a LIDAR system includes a rotatable chassis that houses components that implement a LIDAR measurement system. The chassis is configured to rotate about a vertical rotational axis to allow the system to scan horizontally across a field of view. During rotation, laser light may be emitted at various vertical and horizontal directions. The vertical angle of light emission may be varied by using lasers that are at different positions within the chassis. The horizontal angle of light emission varies with the rotation of the chassis.
The LIDAR system may have one or more lenses that define a field of view of a scene surrounding the system. As the chassis rotates, the field of view moves or scans horizontally. The lasers may be positioned within the chassis to project laser light outward through the one or more lenses and into the field of view. Multiple photodiodes may be positioned so that reflected light originating from any particular laser and reflected off an object transmits through the one or more lenses to a corresponding photodiode in the LIDAR system.
In some particular examples, the lasers and photodiodes, or any other type of photo-sensitive detector, may have arrangements that are similar or identical to one another. The photodiodes may be arranged within a sensor image frame having an x-axis that corresponds to the horizontal axis of the scene and an orthogonal y-axis that corresponds to the vertical axis of the scene. Because the rotation of the chassis causes the field of view to scan horizontally, the horizontal axis and the x-axis of the sensor image frame may be referred to as the scan axis. During rotation, an image of the scene translates along the scan axis over the sensor image frame. In some embodiments, the chassis may have a rotational axis that is not vertical, and the scan or x direction may therefore not always correspond to horizontal, with respect to gravity.
The chassis 102 has an outer contour that is generally symmetrical about the rotational axis 104. An upper portion 106 of chassis 102 includes a cutout forming a vertically oriented flat surface 108 that faces in a forward direction 110, also referred to as the z-direction, relative to the chassis 102. In some implementations, flat surface 108 has one or more openings to accommodate first lens 112 and second lens 114. Forward direction 110 may be parallel with a direction that first lens 112 and second lens 114 face. In other implementations, flat surface 108 is configured to accommodate mounting of a lens holder (not illustrated).
Lenses 112 and 114 may be mounted so that their principal axes are generally perpendicular to rotational axis 104, and generally parallel to forward direction 110. In practice, each of lenses 112 and 114 may comprise a Cooke triplet lens or other type of lens group, and may therefore have multiple individual lens elements, as described in detail below.
Lenses 112 and 114 may have a common field of view of a scene. Rotation of chassis 102 causes the field of view to move or scan in a scan direction 116. In the illustrated embodiment, in which rotational axis 104 is vertical, scan direction 116 is horizontal.
Chassis 102 may include a partition wall 118 that forms a sensor compartment on one side of chassis 102 and a laser compartment on the other side of chassis 102. Partition wall 118 may prevent or reduce stray light inside chassis 102. Such stray light may undesirably lead to false electronic signals. The sensor compartment houses an array of light sensors 120. The laser compartment houses one or more rows of laser light sources 122.
In some examples, light sensors 120 may be arranged to have a uniform spacing or pitch. For instance, light sensors 120 may be arranged as a series of staggered rows that are tilted slightly in a first direction to produce a uniform pitch in an orthogonal direction.
Laser light sources 122, generally laser diodes, may be arranged within an emitter image frame. Lenses 112 and 114 may direct light produced by the laser light sources 122 from the emitter image frame outwardly into the lenses' field of view.
Light sensors 120 may be mounted on a single, planar printed circuit board. Laser light sources 122, however, may be mounted on multiple printed circuit boards. Each printed circuit board supports a corresponding row of laser light sources 122, which may be mounted on edges of the boards and emit toward lenses 112 and 114. The edges may be curved and the printed circuit boards may be inclined inwardly with respect to one another so that laser light sources 122 are all equidistant from a lens focal point and are also all directed to converge at the lens focal point.
First lens 112 is generally above the laser compartment and forward of laser light sources 122. Second lens 114 is generally above the sensor compartment and forward of the light sensors 120.
One or more mirrors 124 are positioned within the chassis 102 behind lenses 112 and 114 to redirect or fold emitted and received light between nominally horizontal and vertical directions. Received light 126 enters the chassis generally horizontally through lens 114 and is redirected as downward light 128 by the one or more mirrors 124 toward the light sensor 120. Laser light sources 122 emit laser light 130 in an upward direction. The emitted light impinges on the one or more mirrors 124 and is redirected horizontally, in forward direction 110 through lens 112, producing an outward beam 132.
The LIDAR system may be used to sense any of a number of parameters for an object 134 in a field of view. Such parameters may include distances to various points of the object to determine 3D coordinates of its surface, for example. Sensing an object involves reflecting at least a portion of outward beam 132 from the object and receiving the reflected light 126 at light sensors 120.
In some particular examples, each of the laser light sources 122 is fired individually and in sequence to obtain individual distance measurements. For each measurement, a single laser is fired in a burst of two closely spaced pulses, and a return reflection is detected by a corresponding light sensor 120 (e.g., a photodiode). The light sensor creates a return signal representing the intensity of the reflected light over time. Assuming the emitted burst has been reflected, the return signal comprises a pair of pulses, similar in shape to the emitted pulses, that are delayed with respect to the emitted pulses. Among a number of other techniques, a cross-correlation may be performed between the return signal and a reference signal to determine a time delay: the peak of the cross-correlation is identified, and the timing of the peak is used to determine the round-trip travel time of the emitted burst. In some examples, another technique for determining the time delay may involve Gaussian or polynomial regression of the pulse shape of the return signal. In other examples, any number of one or more pulses may be used.
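The cross-correlation technique described above can be sketched as follows; the sampling rate, pulse shape, amplitudes, and noise level here are invented purely for illustration:

```python
import numpy as np

SAMPLE_RATE_HZ = 1e9  # hypothetical 1 GS/s digitizer

def estimate_delay_samples(return_signal: np.ndarray,
                           reference: np.ndarray) -> int:
    """Estimate the delay (in samples) of `reference` within
    `return_signal` by locating the cross-correlation peak."""
    corr = np.correlate(return_signal, reference, mode="full")
    # Shift the peak index so that 0 means "no delay".
    return int(np.argmax(corr)) - (len(reference) - 1)

# Toy reference signal: a burst of two closely spaced pulses.
reference = np.zeros(64)
reference[10] = 1.0
reference[20] = 1.0

# Simulated return: the burst delayed by 100 samples, attenuated,
# plus a small amount of noise.
rng = np.random.default_rng(0)
return_signal = np.zeros(512)
return_signal[100:100 + len(reference)] = 0.3 * reference
return_signal += rng.normal(0.0, 0.01, size=return_signal.shape)

delay = estimate_delay_samples(return_signal, reference)
round_trip_s = delay / SAMPLE_RATE_HZ  # round-trip travel time of the burst
```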
In examples using multiple pulses, the amount by which the pulses of a burst are spaced from each other may be varied over time and between lasers to reduce the impact of cross-talk. Cross-talk may occur, for example, when a photodiode receives a reflection of light that was emitted by a non-corresponding laser, or when a photodiode receives light that was emitted by another LIDAR apparatus. Varying the pulse spacing reduces ambiguity between different light emissions, because the cross-correlation inherently tends to mask out reflected bursts whose spacings differ from the spacing of the originally emitted burst. The spacing may be varied across the different lasers and also over time for an individual laser. For example, the pulse spacing for a particular laser may be changed randomly for every rotation of chassis 102.
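A minimal sketch of the per-rotation spacing randomization described above, with an invented spacing range:

```python
import random

def new_pulse_spacings(num_lasers: int, rng: random.Random,
                       min_ns: int = 20, max_ns: int = 60) -> list:
    """Pick a fresh intra-burst pulse spacing (nanoseconds; range is
    invented for illustration) for each laser, e.g. once per rotation."""
    return [rng.randrange(min_ns, max_ns + 1) for _ in range(num_lasers)]

rng = random.Random(42)  # seeded only to make the sketch reproducible
spacings_rotation_1 = new_pulse_spacings(8, rng)
spacings_rotation_2 = new_pulse_spacings(8, rng)  # re-randomized next rotation
```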
The lasers may be sequentially fired in a defined sequence at a rate such that each laser is fired during the maximum expected flight time of a previously fired laser. Thus, two laser emissions (where each emission is a pulse pair) may be “in flight” at any given time.
Two analog-to-digital converters (ADCs) may be used to digitize signals produced by light sensors 120. The ADCs operate in an alternating fashion, so that a particular ADC digitizes every other laser emission. For example, the reflection from a first laser burst is digitized by a first ADC, the reflection corresponding to a second laser burst is digitized by a second ADC, the reflection corresponding to a third laser burst is digitized by the first ADC, the reflection corresponding to a fourth laser burst is digitized by the second ADC, and so on. Two ADCs may be adequate because, in this example, only two laser emissions are in flight at any given time.
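The alternating-ADC assignment reduces to a round-robin rule, sketched here:

```python
# Round-robin ADC assignment: emission i is digitized by ADC (i mod 2),
# so each ADC handles every other laser emission. With at most two
# emissions in flight at once, two ADCs suffice.

def adc_for_emission(emission_index: int, num_adcs: int = 2) -> int:
    """Index of the ADC that digitizes the given emission."""
    return emission_index % num_adcs

assignments = [adc_for_emission(i) for i in range(6)]  # [0, 1, 0, 1, 0, 1]
```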
Each laser (among laser light sources 122) may be associated with a pair of capacitors that are used to generate two energy pulses for a corresponding individual laser emission. The capacitors of each pair may be charged in common by a regular boost circuit, and discharged into the corresponding laser using a pair of gallium nitride field-effect transistors (GaN FETs). Laser light sources 122 may be divided into two charge banks. The capacitors corresponding to the lasers of one charge bank may be charged while the lasers of the other charge bank are being fired.
A firing order of the lasers may be selected to maximize the physical distance between adjacently-fired lasers, subject to constraints that (a) adjacently-fired lasers should correspond to photodiodes of different ADC groups and (b) the sequence should repeatedly fire all the lasers of the first charge bank and then all the lasers of the second charge bank. Each charge bank may include lasers corresponding to photodiodes of both ADC groups.
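One possible (greedy) way to construct such a firing order is sketched below; the laser positions, charge-bank assignments, and ADC groups are invented for illustration, and a real implementation might search more exhaustively to maximize spacing rather than choosing greedily:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Laser:
    name: str
    position: float   # 1-D position in millimeters (illustrative)
    bank: int         # charge bank: 0 or 1
    adc_group: int    # ADC group of the corresponding photodiode: 0 or 1

def firing_order(lasers):
    """Greedy ordering: fire all of bank 0, then all of bank 1; at each
    step pick the laser farthest from the previously fired one whose
    photodiode is in the other ADC group."""
    order = []
    prev = None
    for bank in (0, 1):
        remaining = [l for l in lasers if l.bank == bank]
        while remaining:
            candidates = [l for l in remaining
                          if prev is None or l.adc_group != prev.adc_group]
            if not candidates:  # fall back if the ADC constraint can't be met
                candidates = remaining
            nxt = max(candidates,
                      key=lambda l: abs(l.position - prev.position) if prev else 0.0)
            order.append(nxt)
            remaining.remove(nxt)
            prev = nxt
    return order

lasers = [Laser("A", 0.0, 0, 0), Laser("B", 10.0, 0, 1),
          Laser("C", 5.0, 1, 0), Laser("D", 15.0, 1, 1)]
order = firing_order(lasers)
```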
In some examples, first lens group 308 may include a freeform lens, as described below. Such a lens group may have an intentionally-designed astigmatism that desirably shapes the beam of laser 304, which typically emits a beam having an undesirably non-circular cross section.
In some examples, to accommodate differences of beam divergence in different directions, a lens group, such as a Cooke triplet lens, may include at least one lens element (e.g., the Cooke triplet lens comprising three such lens elements) having a freeform surface. In some particular examples, a Cooke triplet lens may include at least one lens element that is a rotationally asymmetric lens, as described below.
Optical parameters of lens group 400 include entrance pupil diameter, field of view, a focal length, and operable spectral range, just to name a few examples. In particular examples, considered individually, values or ranges of optical parameters are as follows. Entrance pupil diameter may be up to about 35 millimeters. Field of view may be up to about 25 degrees. Focal length may be in a range between about 20 millimeters and 200 millimeters, and f-number may be greater than about 2. Lens group 400 may be configured to operate in a spectral range from about 800 nanometers to about 1000 nanometers (e.g., and optimized for about 905 nanometers). Of course, such values are merely examples, and claimed subject matter is not limited in this respect.
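For reference, the f-number relates focal length to entrance pupil diameter (N = f/D); a quick sketch using values chosen, for illustration only, from the ranges above:

```python
# f-number N = focal length / entrance pupil diameter.

def f_number(focal_length_mm: float, pupil_diameter_mm: float) -> float:
    return focal_length_mm / pupil_diameter_mm

# e.g., a 100 mm focal length with a 35 mm entrance pupil:
n = f_number(100.0, 35.0)  # about 2.86, consistent with f-number > about 2
```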
In the horizontal direction (X-Z plane) illustrated in
Generally, non-freeform lens surfaces may be spherical and have any degree of curvature, which may be selected based, at least in part, on an optical design that considers parameters of the other lenses and their surfaces, including any freeform surfaces.
Lens group 800 may be similar to lens groups 700 or 600 except that the combination of lens surface curvatures and spacings separating the lens elements of the lens groups may differ.
In block 1002, the first Cooke triplet lens may be placed into a first portion of a lens holder (e.g., 206) so that the first Cooke triplet lens is optically aligned with a laser diode (e.g., 304). As explained above, a beam of the laser diode diverges in a non-constant or asymmetrical pattern.
In block 1004, the second Cooke triplet lens may be placed into a second portion of the lens holder so that the second Cooke triplet lens is optically aligned with a photodetector. In some examples, placing the first Cooke triplet lens into the first portion of the lens holder and placing the second Cooke triplet lens into the second portion of the lens holder may involve aligning the second Cooke triplet lens such that an optical axis of the second Cooke triplet lens is substantially parallel to an optical axis of the first Cooke triplet lens.
In some examples, a freeform lens may be designed using digital-point technology, so that each portion of a lens may be unique. Such a lens may comprise plastic or glass. In fabrication of glass lenses (and some plastic lenses), point technology generators may use a router bit-type tool guided by a computer program. This is in contrast to a grinding process typically used for spherical lens surfaces. Fabrication of plastic lenses may involve injection molding, for example. In other examples, plastic lenses may be shaped by a single-point diamond-turn process.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and steps are disclosed as example forms of implementing the claims.
Conditional language such as, among others, “can,” “could,” “may,” or “might,” unless specifically stated otherwise, is understood within the context to present that certain examples include, while other examples do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that certain features, elements and/or steps are in any way required for one or more examples or that one or more examples necessarily include logic for deciding, with or without user input or prompting, whether certain features, elements and/or steps are included or are to be performed in any particular example.
Conjunctive language such as the phrase “at least one of X, Y or Z,” unless specifically stated otherwise, is to be understood to present that an item, term, etc. may be either X, Y, or Z, or a combination thereof.
It should be emphasized that many variations and modifications may be made to the above-described examples, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
This application claims priority to U.S. Provisional Application No. 62/440,951, filed on Dec. 30, 2016, entitled “Lens Assembly for a LIDAR System,” which is hereby incorporated by reference in its entirety for all purposes.
Number | Date | Country
---|---|---
20180188545 A1 | Jul 2018 | US
Number | Date | Country
---|---|---
62440951 | Dec 2016 | US