This document pertains generally, but not by way of limitation, to estimation of distance between a detection system and a target, using an optical transmitter and an optical receiver.
In an optical detection system, such as a system for providing light detection and ranging (LIDAR), various automated techniques can be used for performing depth or distance estimation, such as to provide an estimate of a range to a target from an optical assembly, such as an optical transceiver assembly. Such detection techniques can include one or more “time-of-flight” determination techniques. For example, a distance to one or more objects in a field of view can be estimated or tracked, such as by determining a time difference between a transmitted light pulse and a received light pulse.
LIDAR systems, such as automotive LIDAR systems, may operate by transmitting one or more pulses of light towards a target region. The one or more transmitted light pulses can illuminate a portion of the target region. A portion of the one or more transmitted light pulses can be reflected and/or scattered by the illuminated portion of the target region and received by the LIDAR system. The LIDAR system can then measure a time difference between the transmitted and received light pulses, such as to determine a distance between the LIDAR system and the illuminated portion of the target region. The distance can be determined according to the expression
d = (c × t) / 2

where d can represent a distance from the LIDAR system to the illuminated portion of the target, t can represent a round trip travel time, and c can represent a speed of light. However, more than one pulse may be received from the illuminated portion of the target for a single transmitted pulse, such as due to surfaces of one or more objects in the illuminated portion of the target region.
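As an illustrative, non-limiting sketch, the time-of-flight relation above can be evaluated directly; the function name and the example round-trip time below are merely assumptions for illustration:

```python
# Sketch of the time-of-flight relation d = c * t / 2, where t is the
# measured round-trip travel time of a pulse and c is the speed of light.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the illuminated portion of the target; the factor of
    1/2 accounts for the pulse traveling to the target and back."""
    return C * round_trip_time_s / 2.0

# A pulse returning after 200 ns corresponds to roughly 30 m.
print(tof_distance(200e-9))  # ~29.98 m
```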
Over time, a shape of the transmitted pulse may vary, such as due to varying environmental parameters such as temperature, pressure, or humidity. The shape of the pulse can also vary over time, such as due to aging of the LIDAR system. The inventors have recognized, among other things, that it may be advantageous to measure a shape of the transmitted pulse, such as contemporaneously with generation or transmission of the pulse, such as to account for variations in the shape of the transmitted pulse. The measured shape of the transmitted pulse can then be used to provide improved accuracy in the determination of an arrival time of the received pulse reflected or scattered from the illuminated portion of the target region.
In an example, a technique (such as implemented using an apparatus, a method, a means for performing acts, or a device readable medium including instructions that, when performed by the device, can cause the device to perform acts) can include improving range resolution in an optical detection system, the technique including transmitting a first light pulse towards a target region using a transmitter, receiving a first portion of the first transmitted light pulse from the transmitter and determining a temporal profile of the first transmitted light pulse from the received first portion, and receiving a second portion of the first transmitted light pulse from the target region and determining an arrival time of the second received portion from the target region based at least in part on the determined temporal profile of the first transmitted light pulse.
In an example, an optical detection system can provide improved range resolution, the system comprising a transmitter configured to transmit a light pulse towards a target region, a receiver configured to receive a first portion of the transmitted light pulse from the transmitter, and control circuitry configured to determine a temporal profile of the transmitted light pulse from the received first portion, wherein the receiver is configured to receive a second portion of the transmitted light pulse from the target region and the control circuitry is configured to determine an arrival time of the second received portion from the target region based at least in part on the determined temporal profile of the transmitted light pulse.
This summary is intended to provide an overview of subject matter of the present patent application. It is not intended to provide an exclusive or exhaustive explanation of the invention. The detailed description is included to provide further information about the present patent application.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
LIDAR systems, such as automotive LIDAR systems, may operate by transmitting one or more pulses of light towards a target region. The one or more transmitted light pulses can illuminate a portion of the target region. A portion of the one or more transmitted light pulses can be reflected and/or scattered by the illuminated portion of the target region and received by the LIDAR system. The LIDAR system can then measure a time difference between the transmitted and received light pulses, such as to determine a distance between the LIDAR system and the illuminated portion of the target region. The distance can be determined according to the expression
d = (c × t) / 2

where d can represent a distance from the LIDAR system to the illuminated portion of the target, t can represent a round trip travel time, and c can represent a speed of light.
More than one pulse may be received in response to a single transmitted pulse, for example due to multiple objects in the illuminated portion of the target region. The shape of the received pulse may also be distorted, for example if the surface of the reflecting object is not oriented orthogonally to the LIDAR system. Additionally, the shape of the transmitted pulse may vary, such as due to varying environmental parameters such as temperature, pressure, or humidity. The shape of the pulse can also vary over time, such as due to aging of the LIDAR system. The inventors have recognized, among other things, that it may be advantageous to measure a shape of the transmitted pulse, such as contemporaneously with generation or transmission of the pulse, such as to account for variations in the shape of the transmitted pulse. The measured shape of the transmitted pulse can then be used to provide improved accuracy in the determination of an arrival time of the received pulse reflected or scattered from the illuminated portion of the target region.
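The use of a measured transmit-pulse shape to refine an arrival-time estimate can be sketched as a cross-correlation of the digitized return against the measured pulse template. This is an illustrative example only, not the claimed implementation; the signal values, noise level, and sample positions are assumptions:

```python
import numpy as np

def estimate_arrival_index(received: np.ndarray, template: np.ndarray) -> int:
    """Estimate the arrival sample of an echo by cross-correlating the
    return with the measured transmit-pulse shape; the correlation peak
    marks the best-fit alignment."""
    corr = np.correlate(received, template, mode="valid")
    return int(np.argmax(corr))

# Synthetic example: a Gaussian transmit pulse echoed starting at sample 40.
t = np.arange(16)
template = np.exp(-0.5 * ((t - 8) / 2.0) ** 2)
received = np.zeros(128)
received[40:56] += 0.3 * template              # attenuated echo
rng = np.random.default_rng(0)
received += 0.01 * rng.standard_normal(128)    # detector noise
print(estimate_arrival_index(received, template))  # peak near sample 40
```

Because the template reflects the contemporaneously measured pulse shape, drift in pulse width or amplitude over temperature or device aging does not bias the alignment the way a fixed, factory-calibrated template would.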
The optical system 116 can receive at least a portion of the light beam from the target region 112 and can image the scanned segments 114 onto the photosensitive detector 120 (e.g., a CCD). The detection circuitry 124 can receive and process the image of the scanned points from the photosensitive detector 120, such as to form a frame. A distance from the LIDAR system 100 to the target region 112 can be determined for each scanned point, such as by determining a time difference between the light transmitted towards the target region 112 and the corresponding light received by the photosensitive detector 120. In an example, the LIDAR system 100 can be installed in an automobile, such as to facilitate an autonomous self-driving automobile. In an example, the LIDAR system 100 can be operated in a flash mode, where the illuminator 105 can illuminate the entire field of view without the scanning element 106.
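Assembling per-point distances into a frame, as described above, can be sketched as follows. The function and grid shape are illustrative assumptions, not elements of the system:

```python
# Hypothetical sketch: converting per-point round-trip times from a scan
# into a rows x cols grid of distances (one entry per scanned point).
C = 299_792_458.0  # speed of light, m/s

def depth_frame(round_trip_times_s, rows, cols):
    """Map each scanned point's time difference to a distance and
    arrange the results as a rows x cols frame."""
    distances = [C * t / 2.0 for t in round_trip_times_s]
    return [distances[r * cols:(r + 1) * cols] for r in range(rows)]

# A 2 x 3 scan: each time difference maps to a distance in the frame.
frame = depth_frame([100e-9, 120e-9, 140e-9,
                     160e-9, 180e-9, 200e-9], rows=2, cols=3)
print(frame[0][0])  # ~14.99 m for a 100 ns round trip
```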
The distance can be determined according to the expression

d = (c × t) / 2

where d can represent a distance from the LIDAR system to the feature of the target region 112, t can represent a round trip travel time, and c can represent a speed of light.
The photodetector 110 can detect a portion of each of the outgoing pulses, such as to determine a temporal shape of each of the outgoing pulses. The outgoing pulses can be scattered by the features 304(a) and 304(b) in the target region 112. The control circuitry 104 can then use the determined temporal shapes to determine an arrival time of each of the detected pulses, where the detected pulses can correspond to a received portion of the outgoing pulse scattered or reflected from features 304(a) and 304(b). Markers 308(a) and 308(b) can represent the distance from the LIDAR system 100 to the features 304(a) and 304(b), respectively. In an example, the control circuitry can use a matched filter to determine the arrival time of each of the detected pulses. One or more parameters of the matched filter can be updated based on the determined temporal shapes. The first feature of the target region 304(a) can correspond to a first distance from the LIDAR system, and the second feature of the target region 304(b) can correspond to a second distance from the LIDAR system. The control circuitry can determine a first distance 312(a) corresponding to the first received pulse and a second distance 312(b) corresponding to the second received pulse. In the example illustrated in
The outgoing pulses can be reflected or scattered by the feature 404 in the target region 112. The control circuitry 104 can then use the determined temporal shapes to determine an arrival time of each of the detected pulses, where the detected pulses can correspond to a received portion of the outgoing pulse scattered or reflected from feature 404. Markers 408 can represent the distances from the LIDAR system 100 to various portions of the feature 404. Each of the emitted light pulses can correspond to a different distance from the LIDAR system 100 to the feature 404. The optical system 116 and photosensitive detector 120 can receive a portion of scattered light corresponding to the emitted light pulses, such as to form a temporal profile 411 of the received light, such as that shown in
A time difference between light received from different faces of the features 504(a) and 504(b) can be less than a width of each of the emitted light pulses. Markers 508(a) and 508(b) can represent the distances from the LIDAR system 100 to the features 504(a) and 504(b), respectively. The control circuitry 104 can then apply a matched filter to the temporal profile of the received light. One or more parameters of the matched filter can be updated based on the temporal shapes of the emitted light pulses as determined by the photodetector 110. In the example illustrated in
The optical system 116 and photosensitive detector 120 can receive a portion of scattered light corresponding to the emitted light pulses, such as to form a temporal profile of the received light 511, such as that shown in
Light scattered or reflected by a target in response to a light pulse from the illuminator 105 can be received through a second window 820B, such as through a signal chain similar to the reference waveform signal chain. For example, the received light can be detected by a photodiode 110B, and a signal representative of the received light can be amplified by a TIA 822B and digitized by an ADC 830B. In an example, the signal chains defined by the TIAs 822A and 822B, along with photodiodes 110A and 110B, and ADCs 830A and 830B can be matched. For example, one or more of gain factor, bandwidth, filtering, and ADC timing can be matched between the two signal chains to facilitate use of the pulse detector 824 to detect scattered or reflected light pulses from the target using the locally-generated representation of the reference waveform. The pulse detector 824 can implement one or more of a variety of detection techniques, such as tuned in response to the output of the ADC 830A. One example includes a matched filter with coefficients that can be adjusted, such as adaptively. In another example, a threshold detection scheme can be used, such as having an adjustable threshold.
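The two detection techniques mentioned above (a matched filter derived from the digitized reference waveform, combined with an adjustable threshold) can be sketched together as follows. This is an assumption-laden illustration, not the claimed pulse detector 824; the robust noise estimate, threshold multiplier, and synthetic echo positions are all invented for the example:

```python
import numpy as np

def detect_pulses(received: np.ndarray, reference: np.ndarray,
                  k_sigma: float = 5.0) -> np.ndarray:
    """Correlate the return with the digitized reference waveform and
    keep samples exceeding an adjustable, noise-scaled threshold."""
    corr = np.correlate(received, reference, mode="valid")
    med = np.median(corr)
    # Robust noise estimate (median absolute deviation), so strong
    # echoes do not inflate their own detection threshold.
    sigma = 1.4826 * np.median(np.abs(corr - med))
    return np.flatnonzero(corr > med + k_sigma * sigma)

# Synthetic check: two echoes of differing strength plus detector noise.
ref = np.exp(-0.5 * ((np.arange(16) - 8) / 2.0) ** 2)
rx = np.zeros(256)
rx[60:76] += 0.5 * ref     # strong echo starting at sample 60
rx[130:146] += 0.2 * ref   # weaker echo starting at sample 130
rx += 0.01 * np.random.default_rng(1).standard_normal(256)
hits = detect_pulses(rx, ref)  # correlation samples above threshold
```

Updating `reference` from each digitized transmit pulse is one way the detector could be "tuned in response to the output of ADC 830A", since the matched-filter coefficients then track the pulse shape as it drifts.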
The architecture 800 can include other elements. For example, the digital representation of the reference waveform can be constructed at least in part using a reference waveform generator 826, such as by aggregating representations of several transmit pulses or performing other processing to reduce noise or improve accuracy. Noise removal can be performed such as using noise removal elements 828A and 828B, with each implementing a digital filter. Detected receive pulses can be processed such as to provide a representation of a field of regard being scanned using the
Each of the non-limiting aspects above can stand on its own, or can be combined in various permutations or combinations with one or more of the other aspects or other subject matter described in this document.
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to generally as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein. In the event of inconsistent usages between this document and any documents so incorporated by reference, the usage in this document controls.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. § 1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.