Light Detection and Ranging (lidar) technology provides a way to directly measure the distance of objects from a lidar sensor. A lidar apparatus generally includes an emitter and a receiver, or sensor, co-located in the same housing. The lidar emitter emits light, e.g., a pulsed laser beam, which reflects from objects in its path. Reflected light is then detected by the lidar sensor, and the detected signal is analyzed to determine a range of the object, or target, that is, the distance between the target and the lidar sensor. Such lidar range measurements are inherently limited by a transmission delay: the time required for a light pulse to travel a round-trip distance between the detector and the target, known as the time-of-flight (TOF). Given the speed of light in air, the round-trip signal TOF is 0.67 microseconds for every 100 m of distance between the sensor and the target.
The lidar emitter may emit repeated laser beam pulses at a fixed pulse emission rate. When a pulse is emitted, the detector may be activated, or “armed,” for a time interval t, to detect TOF reflections of that pulse. After an activation time t, the detector is disarmed. Each time interval during which the detector is armed is referred to as a “range gate.” The time interval dedicated to each complete TOF measurement is referred to as a “lidar frame.”
The frame duration limits the TOF, and therefore the range, of detectable objects to less than a maximum range, Rmax, or equivalently, to within a measurable range window, 0≤R≤Rmax. The lidar detector is generally armed for a finite period of time corresponding to Rmax. For example, if a lidar emits a single light pulse and the detector is armed for 2 μs, the light sensor will detect only return signals having a time-of-flight of 2 μs or less, corresponding to a maximum range of 300 m. Light reflecting from objects farther away than 300 m will not have time to make a round trip back to the detector before the range gate ends and the detector is disarmed.
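By way of a non-limiting illustration, the relationship between round-trip TOF, armed time, and range described above can be expressed as a short calculation. The following sketch is written in Python; the constant value and function names are illustrative assumptions and do not correspond to any element of the figures.

    C_AIR = 2.998e8  # approximate speed of light in air, in meters per second

    def round_trip_tof(range_m):
        """Return the round-trip time of flight, in seconds, for a target at range_m meters."""
        return 2.0 * range_m / C_AIR

    def max_range(arm_time_s):
        """Return the maximum measurable range, Rmax, for a detector armed for arm_time_s seconds."""
        return C_AIR * arm_time_s / 2.0

    print(round_trip_tof(100.0))  # ~6.7e-7 s, i.e., about 0.67 microseconds per 100 m of range
    print(max_range(2.0e-6))      # ~300 m for a detector armed for 2 microseconds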
An aliasing effect can arise when a series of light pulses is emitted and the detector is repeatedly armed in accordance with the pulse frequency. When an emitted pulse is reflected from a distant target beyond Rmax, the reflected signal may be detected in a subsequent lidar frame. The detected signal from the distant target may be misinterpreted as a reflection of a later pulse from a closer target. Aliasing thus arises because the detector cannot distinguish which pulse generated the reflected signal.
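To illustrate the aliasing effect numerically, the following non-limiting Python sketch (the function name is hypothetical) shows that, absent pulse coding, a detector that attributes every return to the most recent pulse reports the true range folded back into the window 0≤R<Rmax.

    def apparent_range(true_range_m, r_max_m):
        """Range reported for a target at true_range_m when each reflection is
        attributed to the most recently emitted pulse (no pulse coding)."""
        return true_range_m % r_max_m

    # With Rmax = 300 m, a reflection from a target at 450 m arrives during the next
    # lidar frame and is misinterpreted as a closer target at 150 m.
    print(apparent_range(450.0, 300.0))  # 150.0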
Disclosed herein, in accordance with aspects, are systems, methods, and computer program products for receiving, by a lidar detector during a lidar frame, a reflected laser signal corresponding to a laser pulse emitted by a lidar emitter, wherein the received reflected laser signal is associated with a time bin of the lidar frame and with a pulse code offset applied to a laser signal emitted during that lidar frame. The received reflected laser signal is aggregated into an avalanche histogram at a time bin of the avalanche histogram corresponding with the time bin of the lidar frame, wherein one or more additional received reflected laser signals are further aggregated into the avalanche histogram at corresponding time bins of the avalanche histogram as a set of received reflected laser signals, each of the one or more additional received reflected laser signals having a corresponding pulse code offset. The set of received reflected laser signals is decoded by shifting each received reflected laser signal of the set of received reflected laser signals to a time bin of a decoded avalanche histogram based on the corresponding pulse code offset.
The accompanying drawings are incorporated herein and form a part of the specification. It is noted that, in accordance with common practice in the industry, various features are not drawn to scale. Dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.
In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
Provided herein are system, apparatus, device, method and/or computer program product aspects, and/or combinations and sub-combinations thereof, for range enhancement using pulse coding.
Range ambiguity due to aliasing can be eliminated by detecting only pulses that were emitted in the current frame. One way to distinguish pulses is by identifying them electronically using pulse modulation. A temporal pulse coding scheme is disclosed herein for application to pulsed lidar systems, and in particular, pulsed lidar systems used as sensors on autonomous vehicles. The temporal pulse coding scheme can be used to eliminate lidar aliasing effects through the use of avalanche histograms and appropriate pulse decoding techniques. Alternatively, pulse coding can be used to leverage aliased measurements to extend the dynamic range of the lidar system. That is, instead of discarding reflected signals from distant objects that are identified as being associated with laser pulses emitted during a previous lidar frame, this information is retained and used to calculate the range of the distant objects, thus effectively increasing Rmax to arbitrarily large values. In addition, pulse coding can be combined with arm coding to eliminate dead zones during times when the detector is disarmed.
The term “vehicle” refers to any moving form of conveyance that is capable of carrying either one or more human occupants and/or cargo and is powered by any form of energy. The term “vehicle” includes, but is not limited to, cars, trucks, vans, trains, autonomous vehicles, aircraft, aerial drones and the like. An “autonomous vehicle” (or “AV”) is a vehicle having a processor, programming instructions and drivetrain components that are controllable by the processor without requiring a human operator. An autonomous vehicle may be fully autonomous in that it does not require a human operator for most or all driving conditions and functions, or it may be semi-autonomous in that a human operator may be required in certain conditions or for certain operations, or that a human operator may override the vehicle's autonomous system and may take control of the vehicle.
Notably, the present solution is being described herein in the context of an autonomous vehicle. However, the present solution is not limited to autonomous vehicle applications. The present solution may be used in other applications such as robotic applications, radar system applications, metric applications, and/or system performance applications.
AV 102a is generally configured to detect objects 102b, 114, 116 in proximity thereto. The objects can include, but are not limited to, a vehicle 102b, cyclist 114 (such as a rider of a bicycle, electric scooter, motorcycle, or the like) and/or a pedestrian 116.
As illustrated in
The sensor system 111 may include one or more sensors that are coupled to and/or are included within the AV 102a, as illustrated in
As will be described in greater detail, AV 102a may be configured with a lidar system, e.g., lidar system 264 of
It should be noted that the lidar systems for collecting data pertaining to the surface may be included in systems other than the AV 102a such as, without limitation, other vehicles (autonomous or driven), robots, satellites, etc.
Network 108 may include one or more wired or wireless networks. For example, the network 108 may include a cellular network (e.g., a long-term evolution (LTE) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a 5G network, another type of next generation network, etc.). The network may also include a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these or other types of networks.
AV 102a may retrieve, receive, display, and edit information generated from a local application or delivered via network 108 from database 112. Database 112 may be configured to store and supply raw data, indexed data, structured data, map data, program instructions or other configurations as is known.
The communications interface 117 may be configured to allow communication between AV 102a and external systems, such as, for example, external devices, sensors, other vehicles, servers, data stores, databases etc. The communications interface 117 may utilize any now or hereafter known protocols, protection schemes, encodings, formats, packaging, etc. such as, without limitation, Wi-Fi, an infrared link, Bluetooth, etc. The user interface system 115 may be part of peripheral devices implemented within the AV 102a including, for example, a keyboard, a touch screen display device, a microphone, and a speaker, etc.
As shown in
Operational parameter sensors that are common to both types of vehicles include, for example: a position sensor 236 such as an accelerometer, gyroscope and/or inertial measurement unit; a speed sensor 238; and an odometer sensor 240. The vehicle also may have a clock 242 that the system uses to determine vehicle time during operation. The clock 242 may be encoded into the vehicle on-board computing device, it may be a separate device, or multiple clocks may be available.
The vehicle also includes various sensors that operate to gather information about the environment in which the vehicle is traveling. These sensors may include, for example: a location sensor 260 (e.g., a Global Positioning System (“GPS”) device); object detection sensors such as one or more cameras 262; a lidar system 264; and/or a radar and/or a sonar system 266. The sensors also may include environmental sensors 268 such as a precipitation sensor and/or ambient temperature sensor. The object detection sensors may enable the vehicle to detect objects that are within a given distance range of the vehicle 200 in any direction, while the environmental sensors collect data about environmental conditions within the vehicle's area of travel.
During operations, information is communicated from the sensors to a vehicle on-board computing device 220. The on-board computing device 220 may be implemented using the computer system of
Geographic location information may be communicated from the location sensor 260 to the on-board computing device 220, which may then access a map of the environment that corresponds to the location information to determine known fixed features of the environment such as streets, buildings, stop signs and/or stop/go signals. Captured images from the cameras 262 and/or object detection information captured from sensors such as lidar system 264 are communicated from those sensors to the on-board computing device 220. The object detection information and/or captured images are processed by the on-board computing device 220 to detect objects in proximity to the vehicle 200. Any known or to be known technique for making an object detection based on sensor data and/or captured images can be used in the aspects disclosed in this document.
Lidar information is communicated from lidar system 264 to the on-board computing device 220. Additionally, captured images are communicated from the camera(s) 262 to the vehicle on-board computing device 220. The lidar information and/or captured images are processed by the vehicle on-board computing device 220 to detect objects in proximity to the vehicle 200. The manner in which the vehicle on-board computing device 220 makes these object detections includes the capabilities detailed in this disclosure.
The on-board computing device 220 may include and/or may be in communication with a routing controller 231 that generates a navigation route from a start position to a destination position for an autonomous vehicle. The routing controller 231 may access a map data store to identify possible routes and road segments that a vehicle can travel on to get from the start position to the destination position. The routing controller 231 may score the possible routes and identify a preferred route to reach the destination. For example, the routing controller 231 may generate a navigation route that minimizes Euclidean distance traveled or other cost function during the route, and may further access the traffic information and/or estimates that can affect an amount of time it will take to travel on a particular route. Depending on implementation, the routing controller 231 may generate one or more routes using various routing methods, such as Dijkstra's algorithm, Bellman-Ford algorithm, or other algorithms. The routing controller 231 may also use the traffic information to generate a navigation route that reflects expected conditions of the route (e.g., current day of the week or current time of day, etc.), such that a route generated for travel during rush-hour may differ from a route generated for travel late at night. The routing controller 231 may also generate more than one navigation route to a destination and send more than one of these navigation routes to a user for selection by the user from among various possible routes.
In various aspects, the on-board computing device 220 may determine perception information of the surrounding environment of the AV 102a based on the sensor data provided by one or more sensors and location information that is obtained. The perception information may represent what an ordinary driver would perceive in the surrounding environment of a vehicle. The perception data may include information relating to one or more objects in the environment of the AV 102a. For example, the on-board computing device 220 may process sensor data (e.g., lidar or RADAR data, camera images, etc.) in order to identify objects and/or features in the environment of AV 102a. The objects may include traffic signals, roadway boundaries, other vehicles, pedestrians, and/or obstacles, etc. The on-board computing device 220 may use any now or hereafter known object recognition algorithms, video tracking algorithms, and computer vision algorithms (e.g., track objects frame-to-frame iteratively over a number of time periods) to determine the perception.
In some aspects, the on-board computing device 220 may also determine, for one or more identified objects in the environment, the current state of the object. The state information may include, without limitation, for each object: current location; current speed and/or acceleration; current heading; current pose; current shape, size, or footprint; type (e.g., vehicle vs. pedestrian vs. bicycle vs. static object or obstacle); and/or other state information.
The on-board computing device 220 may perform one or more prediction and/or forecasting operations. For example, the on-board computing device 220 may predict future locations, trajectories, and/or actions of one or more objects. For example, the on-board computing device 220 may predict the future locations, trajectories, and/or actions of the objects based at least in part on perception information (e.g., the state data for each object comprising an estimated shape and pose determined as discussed below), location information, sensor data, and/or any other data that describes the past and/or current state of the objects, the AV 102a, the surrounding environment, and/or their relationship(s). For example, if an object is a vehicle and the current driving environment includes an intersection, the on-board computing device 220 may predict whether the object will likely move straight forward or make a turn. If the perception data indicates that the intersection has no traffic light, the on-board computing device 220 may also predict whether the vehicle may have to fully stop prior to entering the intersection.
In various aspects, the on-board computing device 220 may determine a motion plan for the autonomous vehicle. For example, the on-board computing device 220 may determine a motion plan for the autonomous vehicle based on the perception data and/or the prediction data. Specifically, given predictions about the future locations of proximate objects and other perception data, the on-board computing device 220 can determine a motion plan for the AV 102a that best navigates the autonomous vehicle relative to the objects at their future locations.
In some aspects, the on-board computing device 220 may receive predictions and make a decision regarding how to handle objects and/or actors in the environment of the AV 102a. For example, for a particular actor (e.g., a vehicle with a given speed, direction, turning angle, etc.), the on-board computing device 220 decides whether to overtake, yield, stop, and/or pass based on, for example, traffic conditions, map data, state of the autonomous vehicle, etc. Furthermore, the on-board computing device 220 also plans a path for the AV 102a to travel on a given route, as well as driving parameters (e.g., distance, speed, and/or turning angle). That is, for a given object, the on-board computing device 220 decides what to do with the object and determines how to do it. For example, for a given object, the on-board computing device 220 may decide to pass the object and may determine whether to pass on the left side or right side of the object (including motion parameters such as speed). The on-board computing device 220 may also assess the risk of a collision between a detected object and the AV 102a. If the risk exceeds an acceptable threshold, it may determine whether the collision can be avoided if the autonomous vehicle follows a defined vehicle trajectory and/or implements one or more dynamically generated emergency maneuvers in a pre-defined time period (e.g., N milliseconds). If the collision can be avoided, then the on-board computing device 220 may execute one or more control instructions to perform a cautious maneuver (e.g., mildly slow down, accelerate, change lane, or swerve). In contrast, if the collision cannot be avoided, then the on-board computing device 220 may execute one or more control instructions for execution of an emergency maneuver (e.g., brake and/or change direction of travel).
As discussed above, planning and control data regarding the movement of the autonomous vehicle is generated for execution. The on-board computing device 220 may, for example, control braking via a brake controller; direction via a steering controller; speed and acceleration via a throttle controller (in a gas-powered vehicle) or a motor speed controller (such as a current level controller in an electric vehicle); a differential gear controller (in vehicles with transmissions); and/or other controllers.
As shown in
Inside the rotating shell or stationary dome is a light emitter system 304 that is configured and positioned to generate and emit pulses of light through the aperture 312 or through the transparent dome of the housing 306 via one or more laser emitter chips or other light emitting devices. The light emitter system 304 may include any number of individual emitters (e.g., 8 emitters, 64 emitters, or 128 emitters). The emitters may emit light of substantially the same intensity or of varying intensities. The lidar system also includes a light detector 308 containing a photodetector or array of photodetectors positioned and configured to receive light reflected back into the system. The light emitter system 304 and light detector 308 would rotate with the rotating shell, or they would rotate inside the stationary dome of the housing 306. One or more optical element structures 310 may be positioned in front of the light emitter system 304 and/or the light detector 308 to serve as one or more lenses or waveplates that focus and direct light that is passed through the optical element structure 310.
One or more optical element structures 310 may be positioned in front of a mirror (not shown) to focus and direct light that is passed through the optical element structure 310. As shown below, the system includes an optical element structure 310 positioned in front of the mirror and connected to the rotating elements of the system so that the optical element structure 310 rotates with the mirror. Alternatively or in addition, the optical element structure 310 may include multiple such structures (for example lenses and/or waveplates). Optionally, multiple optical element structures 310 may be arranged in an array on or integral with the shell portion of the housing 306.
Lidar system 300 includes a power unit 318 to power the light emitting unit 304, a motor 316, and electronic components. Lidar system 300 also includes an analyzer 314 with elements such as a processor 322 and non-transitory computer-readable memory 320 containing programming instructions that are configured to enable the system to receive data collected by the light detector unit, analyze it to measure characteristics of the light received, and generate information that a connected system can use to make decisions about operating in an environment from which the data was collected. Optionally, the analyzer 314 may be integral with the lidar system 300 as shown, or some or all of it may be external to the lidar system and communicatively connected to the lidar system via a wired or wireless communication network or link.
Referring to
The use of techniques disclosed herein within the example lidar apparatus 400 may serve to enhance the ability of lidar apparatus 400 to perform range determinations. It is noted that, although the lidar apparatus 400 is depicted in
In some aspects, detector 508 is configured to detect laser pulse reflections from a target using a single-photon type of detector that indicates whether or not one or more photons has been received. Single-photon detectors are not sensitive to the number of photons in the reflected pulse; they simply act as digital optical switches that indicate whether one or more photons have been received. To build up analog contrast information from such a receiver, it is possible to use multiple pulses, or temporal averaging over multiple pulses. The dynamic range of the measurement will then scale with the number of pulses that are used. If only a small subset of the multiple pulses triggers a detection event, then the signal intensity returned from a target is low. If a large subset of the pulses triggers detection events, then the intensity is high.
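As a non-limiting sketch (in Python, with hypothetical names), the analog contrast, or relative intensity, described above can be estimated as the fraction of emitted pulses that trigger a detection event on the single-photon detector.

    def estimate_intensity(detections):
        """Estimate relative return intensity from a single-photon detector as the
        fraction of emitted pulses that triggered a detection event."""
        if not detections:
            return 0.0
        return sum(detections) / len(detections)

    # Few triggers over many pulses indicate a weak return; many triggers indicate a strong one.
    print(estimate_intensity([True, False, False, False]))  # 0.25 (low intensity)
    print(estimate_intensity([True, True, True, False]))    # 0.75 (high intensity)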
In some aspects, controller 504 includes a pulse coder 510 and a pulse decoder 512 for providing signal processing functions. Controller 504 may be programmed to apply temporal pulse coding to emitted laser pulses via pulse coder 510, and to decode detected signals via pulse decoder 512, to distinguish between reflections from a close target and reflections from a distant target. Pulse coder 510 and pulse decoder 512 can be implemented in hardware (e.g., using application specific integrated circuits (ASICs)), in software, or in combinations thereof.
Lidar apparatus 400 and controller 504 are depicted as being within the contours of vehicle 102a and as separate entities, in accordance with aspects of the disclosure. However, one skilled in the relevant arts will appreciate that the particular placements of lidar apparatus 400 and controller 504 may include a variety of arrangements, including combination into a single unit, and the depiction is not limiting.
If an out-of-range target B reflects a pulse (instead of an in-range target such as target A reflecting the pulse) such as with emitted light pulse 608, reflected light pulse 612 is detected in a subsequent lidar frame, in this case frame N+1 at time bin 2. Similarly, in this case, a reflected light pulse 614 from out-of-range target B, associated with a transmission at time bin 0 of a previous lidar frame, N−1, is detected within lidar frame N at time bin 2. Targets even further out-of-range may be detected several lidar frames later.
Since avalanche histogram 620 cannot disambiguate between the out-of-range target B and an in-range target that would give rise to equivalent histogram detections at the same time bin, i.e., time bin 2, the detected reflections create ambiguity as to where the actual in-range target is located, because both sets of reflections aggregate to the same measure within the histogram.
In order to resolve the range ambiguity effect, it is possible to use a pulse coding technique to identify which reflections are coming from in-range targets and which are coming from out-of-range targets (and even to determine a specific range for the out-of-range targets).
As shown in
Referring to
In short, in the scenario presented in
The pulse code implemented through the use of different offsets can then be unwound (decoded) before building an avalanche histogram. The numbers shown in the bins corresponding to each received reflected pulse (e.g., 810 and 816) indicate the pulse offset that was applied to the single pulse that was emitted during that particular lidar frame. The bins are tagged with these offset values. Since the number shown in each bin matches the offset of the signal emitted within that same frame, reversing the offset on each detected reflection within that lidar frame, across all lidar frames within a data frame, has the cumulative effect of aggregating the received reflected pulses that are from in-range objects. The received reflected pulses from out-of-range objects would be inconsistently bin-shifted across lidar frames, such that their cumulative effect within an avalanche histogram across a data frame will be negligible (identifiable as noise).
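The decoding operation described above can be sketched as follows. This Python listing is a non-limiting illustration under assumed data structures (per-frame lists of avalanche time bins and a per-frame offset list); the names are hypothetical and not taken from any figure.

    from collections import Counter

    def build_decoded_histogram(detections, offsets, num_bins):
        """Aggregate detections into a decoded avalanche histogram.

        detections -- detections[n] lists the time bins of avalanche events in lidar frame n
        offsets    -- offsets[n] is the pulse code offset (in time bins) applied in frame n
        num_bins   -- number of time bins in the range gate

        Reflections from in-range targets re-align in a single decoded bin, while
        reflections from out-of-range targets shift inconsistently and spread out as noise.
        """
        histogram = Counter()
        for frame_detections, offset in zip(detections, offsets):
            for time_bin in frame_detections:
                histogram[(time_bin - offset) % num_bins] += 1  # unwind this frame's offset
        return histogram

    # Offsets [1, 3, 0] over three lidar frames: an in-range target detected at bins 6, 8,
    # and 5 decodes to bin 5 in every frame, so the events coincide in the decoded histogram.
    print(build_decoded_histogram([[6], [8], [5]], [1, 3, 0], num_bins=13))  # Counter({5: 3})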
In
Finally, at step 712, with a decoded histogram built to aggregate the in-range received pulses, the position of the coinciding events within that decoded histogram can be used to determine a distance to the in-range target.
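By way of a further non-limiting sketch (Python; the bin width is an assumed value used only for illustration), the peak of the decoded histogram can be converted into a distance.

    def range_from_histogram(histogram, bin_width_m):
        """Return the estimated target range implied by the peak bin of a decoded
        avalanche histogram, where bin_width_m is the range spanned by one time bin."""
        if not histogram:
            return None
        peak_bin = max(histogram, key=histogram.get)
        return peak_bin * bin_width_m

    # Continuing the example above: assuming, say, 6 m per time bin, a peak at bin 5
    # corresponds to an in-range target at roughly 30 m.
    print(range_from_histogram({5: 3, 2: 1}, bin_width_m=6.0))  # 30.0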
Effectively, use of alternate coding scheme 900 doubles the range over which target distances can be measured, from Rmax to 2Rmax, without an increase in total measurement time, because targets within this extended range can be computed from the same data frame. By emitting multiple pulses, one per lidar frame, with an offset particular to that lidar frame, it is possible to distinguish reflections within multiple ranges. One decoding sequence, [1,3,0], provides measurements at close range, in a first zone, up to Rmax, and a second cyclic decoding sequence, [0,1,3], which is cyclically related to the first decoding sequence, provides measurements at a longer range, in a second zone, between Rmax and 2Rmax.
One skilled in the relevant arts will appreciate that this approach can be extended to provide detection of targets in additional zones. For example, a third cyclic decoding sequence [3,0,1] can be used to decode reflections from targets in a third zone, at a range between 2Rmax and 3Rmax. In general, for N pulses within a data frame and with appropriate selection of the corresponding offsets within a lidar frame, it is possible to disambiguate ranges as far as N*Rmax.
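The zone-by-zone decoding described above may be sketched as follows. This non-limiting Python illustration (hypothetical names) decodes a chosen zone by unwinding each frame's detections with the offset of the frame in which the corresponding pulse was actually emitted, i.e., with a cyclic shift of the base decoding sequence.

    from collections import Counter

    def decode_zone(detections, offsets, num_bins, zone):
        """Decode detections for range zone 1 (0..Rmax), zone 2 (Rmax..2Rmax), and so on.

        A target in zone z reflects the pulse emitted (z - 1) lidar frames earlier, so each
        frame's detections are unwound using the offset of that earlier frame (cyclically).
        """
        n_frames = len(offsets)
        histogram = Counter()
        for n, frame_detections in enumerate(detections):
            offset = offsets[(n - (zone - 1)) % n_frames]
            for time_bin in frame_detections:
                histogram[(time_bin - offset) % num_bins] += 1
        return histogram

    # With emission offsets [1, 3, 0], zone 1 is decoded with the sequence [1, 3, 0],
    # zone 2 with [0, 1, 3], and zone 3 with [3, 0, 1]. The range of a target found in
    # zone z is then (peak_bin * bin_width) + (z - 1) * Rmax.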
In some aspects, a decoding sequence is chosen so as to keep the same maximum range and to double, or increase, the pulse rate to define multiple detection zones. Depending on the temporal pulse code that is used, one zone will become a detection zone while other zones will be treated as being out of range, or interfering zones. An advantage of this approach is that it offers a way to double the number of pulse statistics used in resolving targets because more pulses emitted per unit time results in a greater number of detections.
After each range gate having 13 time bins, as described above, there is a hold-off time, during which the detector is unarmed and not available for detecting reflected pulses. In this context, a lidar frame encompasses the 13-time bin range gate plus the hold-off time. In some aspects, the hold-off time may include a combination of a frame deadtime, any dither, and/or any arm offset, as shown in
To detect objects over a continuous range from 0 to 2Rmax, pulse coding can be combined with arm coding as discussed above. While pulse coding varies the laser emission time between frames, arm coding varies the timing of the detector arming and disarming from one frame to the next using appropriate “arm offsets.” In particular, the hold-off time can be shifted relative to the duration of the lidar frame so that the detector is not disarmed for the same range interval in every lidar frame. This can be done by varying the period within each frame during which the detector is armed.
The use of a fixed hold-off time constrains the detector to measuring only pulses reflected from distances between 0 and f·Rmax, where f is a fraction in the unit interval, 0<f<1. By arming the detector at the beginning of the range gate and disarming it at a time corresponding to a reflection from a distance f·Rmax, as shown in the first series of frames 1000, any reflections from distances in the range of f·Rmax to Rmax will reach the detector during the hold-off time 1004, resulting in a “dead zone” in which objects are not seen. However, through the use of arm coding, the range intervals for which the detector is armed can be varied from [0, f·Rmax] to [(1−f)·Rmax, Rmax], thereby enabling detections from all ranges between 0 and Rmax.
With arm coding, hold-off times can vary, but as long as the minimum hold-off time is sufficient, the detector will maintain good performance. A minimum hold-off time 1024 is maintained between the disarming of the detector during lidar frame N−1 at 1026 and the arming of the detector at lidar frame N. In some aspects, a length of the range gate may be a fixed value with the hold-off time being a variable value that is equal to or longer than the minimum hold-off time 1024. For example, for a lidar frame length of 80 m, the length of the range gate may be fixed at 72 m, with the hold-off time being between 4 m and 12 m, where 4 m is the minimum hold-off time 1024. In some aspects, the length of the range gate may be a variable value with the hold-off time being a fixed value set at the minimum hold-off time 1024. For example, for a lidar frame length of 80 m, the length of the range gate may vary between 74 m and 76 m, with the hold-off time being constant at 4 m. In some aspects, the length of the range gate and the hold-off time may both be variable. For example, for a lidar frame length of 80 m, the length of the range gate may vary between 74 m and 76 m, with the hold-off time being between 4 m and 6 m, where 4 m is the minimum hold-off time 1024. It should be understood by those of ordinary skill in the art that these are examples of the length of the range gate and minimum hold-off time and that other values are further contemplated in accordance with aspects of the present disclosure. By applying the arm coding technique described herein and introducing arm offsets 1028a and 1028b, the start and end of the range gate, such as range gate 1022 and subsequent range gates in series of frames 1020, can be shifted to eliminate any blind zones. Thus, by combining arm coding with pulse coding of the laser emissions, lidar apparatus 400 can measure objects over a continuous range from 0 to 2Rmax.
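As a non-limiting sketch of the arm coding principle (Python; a simple two-phase scheme is assumed for illustration, and the handling of the minimum hold-off time 1024 is not modeled), the armed range interval can alternate between lidar frames so that the union of the intervals covers the full range.

    def armed_window(frame_index, r_max_m, gate_fraction):
        """Return the (start, end) range interval, in meters, over which the detector is
        armed in a given lidar frame under a simple two-phase arm code.

        Even frames cover the near interval [0, f*Rmax]; odd frames apply an arm offset so
        the detector covers the far interval [(1 - f)*Rmax, Rmax]. Across successive frames
        the two windows together span 0..Rmax, removing the dead zone."""
        f = gate_fraction
        if frame_index % 2 == 0:
            return (0.0, f * r_max_m)
        return ((1.0 - f) * r_max_m, r_max_m)

    # With an 80 m frame and a gate covering 90% of it, frames alternate between
    # observing 0-72 m and 8-80 m, so no range interval is permanently unobserved.
    print(armed_window(0, 80.0, 0.9))  # (0.0, 72.0)
    print(armed_window(1, 80.0, 0.9))  # (8.0, 80.0)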
Each pulse code sequence shown in
In some aspects, pulse codes can be chosen such that the difference between successive offsets d in a sequence, d(n) − d(n−1), is a different value for each n. Pulse codes can be selected in this manner to ensure that the pulse code results in a diffuse, or spread-out, decoded histogram for out-of-range targets, with a commensurate improvement in noise reduction associated with faraway objects. In the example shown in the first row of
However, as would be understood by those skilled in the relevant arts from the coding and decoding histogram processes described above, any coding approach that results in the ability to distinguish in-range and out-of-range targets can be used, and these approaches are provided by way of non-limiting example.
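For completeness, the following non-limiting Python sketch (hypothetical helper) checks the selection criterion described above, namely that the differences between successive offsets in a pulse code are all distinct.

    def has_distinct_successive_differences(offsets):
        """Return True if every difference d(n) - d(n-1) between successive pulse code
        offsets is unique, which tends to spread out-of-range returns diffusely across
        the decoded histogram instead of letting them pile up in one bin."""
        diffs = [offsets[n] - offsets[n - 1] for n in range(1, len(offsets))]
        return len(diffs) == len(set(diffs))

    print(has_distinct_successive_differences([1, 3, 0]))  # True: differences 2 and -3 are distinct
    print(has_distinct_successive_differences([0, 2, 4]))  # False: both differences equal 2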
Each lidar frame begins with a recurring master trigger that is timed to account for the maximal length of the lidar frame 1202, in accordance with aspects of the disclosure. When the master trigger occurs, a pulse coding interval (dither) 1204 is applied in the event that pulse coding is used to aid in differentiating in-range targets from out-of-range targets, as discussed with respect to
Subsequent to any dither, a light pulse is emitted. The total time over which this light pulse is emitted, reflected from a target, and detected must fall within the range gate 1208 for an in-range target. If the pulse is instead detected in a subsequent range gate, based on a reflection from an out-of-range target, then it is possible to use the pulse coding interval of the emitted light pulse to disambiguate the reflected signal, as discussed above with respect to
However, after the light pulse is emitted, an additional delay is present before the detector is armed (and the range gate begins). This delay is termed arm offset 1206. In certain aspects of the disclosure, arm offset 1206 can be controlled, with respect to deadtime 1210 that occurs after the detector is disarmed, in the manner discussed above with respect to
For some time thereafter, here a two-bin time span, the detector is in a disarmed state corresponding to the arm offset. At the end of this period, the detector is armed. The interval from then until the detector is disarmed is the gate width, corresponding to the time span during which a reflected pulse can be detected (within the range gate 1304). As shown in the example of structural view 1300, a time bin avalanche event occurs at time bin 9 in the first shown lidar frame, again at time bin 9 in the second shown lidar frame, and at time bin 8 in the third shown lidar frame. Lidar frame 1302 also includes a period between when the detector is disarmed and the next pulse is emitted, corresponding to dead time 1306.
In the aggregate, these three lidar frames (by way of non-limiting example—more lidar frames can be used) are considered part of a data frame 1308. In the case where there are N lidar frames within a given data frame 1308, a data frame such as data frame 1308 can be constituted out of non-overlapping sets of N lidar frames (a first set of N lidar frames, the next set of N lidar frames, and so on). Alternatively, data frame 1308 can be a rolling window of lidar frames, such that a first data frame such as data frame 1308 encompasses lidar frames 0 to N, and the subsequent data frame encompasses lidar frames 1 to N+1, and so on. This can be done because, while a lidar emitter goes through a 360 degree sweep periodically, each lidar frame within a data frame can be treated as essentially being taken at a fixed position of the emitter and the sweep motion can be disregarded.
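The two ways of grouping lidar frames into data frames described above may be sketched as follows. This Python listing is a non-limiting illustration with hypothetical names.

    def non_overlapping_data_frames(lidar_frames, n):
        """Group lidar frames into consecutive, non-overlapping data frames of n lidar frames each."""
        return [lidar_frames[i:i + n] for i in range(0, len(lidar_frames) - n + 1, n)]

    def rolling_data_frames(lidar_frames, n):
        """Form a rolling window of data frames: frames 0..n-1, then 1..n, and so on."""
        return [lidar_frames[i:i + n] for i in range(len(lidar_frames) - n + 1)]

    frames = list(range(6))  # stand-in for six lidar frames
    print(non_overlapping_data_frames(frames, 3))  # [[0, 1, 2], [3, 4, 5]]
    print(rolling_data_frames(frames, 3))          # [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]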
Avalanche histogram 1310 can then be constructed from all of the avalanche events across lidar frames within data frame 1308, in accordance with an aspect. And, as detailed in
Various aspects can be implemented, for example, using one or more computer systems, such as computer system 1400 shown in
Computer system 1400 can be any well-known computer capable of performing the functions described herein.
Computer system 1400 includes one or more processors (also called central processing units, or CPUs), such as a processor 1404. Processor 1404 is connected to a communication infrastructure or bus 1406.
One or more processors 1404 may each be a graphics processing unit (GPU). In an aspect, a GPU is a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU may have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc.
Computer system 1400 also includes user input/output device(s) 1403, such as monitors, keyboards, pointing devices, etc., that communicate with communication infrastructure 1406 through user input/output interface(s) 1402.
Computer system 1400 also includes a main or primary memory 1408, such as random access memory (RAM). Main memory 1408 may include one or more levels of cache. Main memory 1408 has stored therein control logic (i.e., computer software) and/or data.
Computer system 1400 may also include one or more secondary storage devices or memory 1410. Secondary memory 1410 may include, for example, a hard disk drive 1412 and/or a removable storage device or drive 1414. Removable storage drive 1414 may be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup device, and/or any other storage device/drive.
Removable storage drive 1414 may interact with a removable storage unit 1418. Removable storage unit 1418 includes a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit 1418 may be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive 1414 reads from and/or writes to removable storage unit 1418 in a well-known manner.
According to an exemplary aspect, secondary memory 1410 may include other means, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system 1400. Such means, instrumentalities or other approaches may include, for example, a removable storage unit 1422 and an interface 1420. Examples of the removable storage unit 1422 and the interface 1420 may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface.
Computer system 1400 may further include a communication or network interface 1424. Communication interface 1424 enables computer system 1400 to communicate and interact with any combination of remote devices, remote networks, remote entities, etc. (individually and collectively referenced by reference number 1428). For example, communication interface 1424 may allow computer system 1400 to communicate with remote devices 1428 over communications path 1426, which may be wired and/or wireless, and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system 1400 via communication path 1426.
In an aspect, a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer useable or readable medium having control logic (software) stored thereon is also referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system 1400, main memory 1408, secondary memory 1410, and removable storage units 1418 and 1422, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system 1400), causes such data processing devices to operate as described herein.
Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use aspects of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown in
It is to be appreciated that the Detailed Description section, and not any other section, is intended to be used to interpret the claims. Other sections can set forth one or more but not all exemplary aspects as contemplated by the inventor(s), and thus, are not intended to limit this disclosure or the appended claims in any way.
While this disclosure describes exemplary aspects for exemplary fields and applications, it should be understood that the disclosure is not limited thereto. Other aspects and modifications thereto are possible, and are within the scope and spirit of this disclosure. For example, and without limiting the generality of this paragraph, aspects are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, aspects (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein.
Aspects have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative aspects can perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein.
References herein to “one aspect,” “an aspect,” “an example aspect,” or similar phrases, indicate that the aspect described can include a particular feature, structure, or characteristic, but every aspect may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same aspect. Further, when a particular feature, structure, or characteristic is described in connection with an aspect, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other aspects whether or not explicitly mentioned or described herein. Additionally, some aspects can be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some aspects can be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, can also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
The breadth and scope of this disclosure should not be limited by any of the above-described exemplary aspects, but should be defined only in accordance with the following claims and their equivalents.