The present disclosure generally relates to object detection capabilities of autonomous vehicle systems and, more particularly, to redundant beam scanning technologies that reduce data loss resulting from obscurants expected to contact an optical window of a lidar system for an autonomous vehicle.
Generally speaking, autonomous vehicle systems need to control vehicle operations such that the vehicle effectively and safely drives on active roadways. Accordingly, the autonomous system must recognize upcoming environments in order to determine and execute appropriate actions in response. Lidar systems are typically included as part of these environmental recognition systems and, at a high level, obtain information by emitting and receiving collimated laser light. However, these emissions are susceptible to environmental obscurants that interfere with and/or block the optical path of the light.
Particularly, obscurants that adhere to the optical window of the lidar system can block a significant portion of the optical path of the light, resulting in data loss corresponding to substantial portions of the external vehicle environment. In extreme cases, the data loss may cause the autonomous systems to overlook or otherwise not identify obstacles or other objects in the vehicle’s path. As a result, the autonomous vehicle may unintentionally perform hazardous driving actions that put, at a minimum, the vehicle occupants at risk.
Accordingly, a need exists for systems that are resilient to these environmental obscurants, and particularly for systems that can effectively recognize entire upcoming vehicle environments despite the presence of an optical window obscurant.
The scanning lidar systems of the present disclosure may eliminate/minimize data loss from optical window and environmental obscurants by providing multiple offset lasers that perform a redundant beam scan. Namely, the scanning lidar systems of the present disclosure include a first light source and a second light source that is spatially displaced relative to the first light source. This spatial displacement of the second light source relative to the first light source is greater than an average diameter of environmental obscurants that the scanning lidar system generally encounters when scanning the external vehicle environment. More specifically, the spatial displacement is greater than the average diameter of obscurants that may physically contact (i.e., attach to) the optical window, through which the light pulses are transmitted/received to/from the external vehicle environment. In this manner, the scanning lidar systems of the present disclosure may effectively scan an external vehicle environment without the data loss conventional systems encounter due to environmental obscurants, and particularly obscurants contacting the optical window.
In one embodiment, a scanning lidar system for performing a redundant beam scan to reduce data loss resulting from obscurants comprises: a first light source configured to emit a first light beam comprising a first light pulse; a second light source configured to emit a second light beam comprising a second light pulse and having a spatial displacement relative to the first light source; a mirror assembly configured to adjust an azimuth emission angle and an elevation emission angle of the first light pulse and the second light pulse; an optical window configured to transmit the first light pulse and the second light pulse, wherein the spatial displacement of the second light source relative to the first light source is such that the first light pulse and the second light pulse produce two pixels corresponding to a same portion of an image, wherein the two pixels are used to render the same portion of the image; and a receiver configured to receive the first light pulse and the second light pulse that are scattered by one or more targets, the receiver including two or more detectors configured to detect the first light pulse or the second light pulse and output an electric signal for generating the two pixels.
Techniques of this disclosure are used to perform a redundant beam scan, such that data loss resulting from obscurants expected to contact an optical window of a lidar system for an autonomous vehicle may be reduced/eliminated. The vehicle may be a fully self-driving or “autonomous” vehicle, a vehicle controlled by a human driver, or some hybrid of the two. For example, the disclosed techniques may be used to capture more complete vehicle environment information than was conventionally possible to improve the safety/performance of an autonomous vehicle, to generate alerts for a human driver, or simply to collect data relating to a particular driving trip. The sensors described herein are part of a lidar system, but it should be understood that the techniques of the present disclosure may be applicable to any type or types of sensors capable of sensing an environment through which the vehicle is moving, such as radar, cameras, and/or other types of sensors that may experience data loss resulting from obscurants. Moreover, the vehicle may also include other sensors, such as inertial measurement units (IMUs), and/or include other types of devices that provide information on the current position of the vehicle (e.g., a GPS unit).
As mentioned, the systems and methods of the present disclosure may provide redundant beam scanning for autonomous vehicles in a manner that reduces/eliminates data loss resulting from obscurants. More specifically, systems of the present disclosure may include two light sources spatially displaced relative to one another at greater than an average diameter of obscurants expected to contact an optical window through which light pulses from the two light sources are emitted. Light pulses emitted from the two light sources may pass through the optical window maintaining the spatial displacement of the two light sources, and as a result, may generally avoid simultaneous signal disruption/blockage by the obscurant. A mirror assembly may adjust the azimuthal and elevation emission angles of light pulses emitted by the two light sources in a scanning pattern that defines the field of regard for the lidar system. In this manner, the systems of the present disclosure may effectively and reliably receive lidar data for the entire field of regard because at least one of the two emitted light pulses corresponding to a point in the field of regard may return to the lidar system for pixel generation regardless of whether or not an obscurant is contacting the optical window. These techniques are described in greater detail below.
As an example of the scanning lidar systems of the present disclosure, assume that an environmental obscurant (e.g., a rain droplet, a dirt particle, etc.) attaches to the optical window during operation of an autonomous vehicle, and more specifically, during scanning of the scanning lidar systems of the present disclosure. Further, assume that the environmental obscurant has a diameter of approximately 1 millimeter (mm), light pulses emitted from each light source (first and second light sources) have a beam diameter of approximately 2 mm, and the spatial separation of the two light sources is approximately 7 mm. In this example, as the optical paths of the light pulses from the two light sources are adjusted by the azimuth and elevation mirrors, one or more light pulses from at most one light source may be partially blocked (e.g., 1 mm obscurant may block up to half of the 2 mm diameter light pulse) by the obscurant at any particular combination of azimuth and elevation emission angles. However, at these particular combinations of azimuth and elevation emission angles, the light pulses from the unblocked light source are transmitted through the optical window without interference from the obscurant because the unblocked light source light pulses are 7 mm away from the obscurant. As a result, the unblocked light source obtains data corresponding to the external vehicle environment that the partially blocked light source is unable to obtain due to the presence of the obscurant.
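For illustration only, the following sketch (an assumption-laden model, not part of the disclosed embodiments) checks the geometry of this example: a circular obscurant can intersect a beam only when its center lies within half the sum of the beam and obscurant diameters of that beam's center, so a 1 mm obscurant cannot simultaneously intersect two 2 mm beams separated by 7 mm.

```python
# Minimal sketch (illustrative names, not from the disclosure) of the blocking
# geometry in the example above: an obscurant on the optical window can
# intersect a beam only if its center lies within
# (beam_diameter + obscurant_diameter) / 2 of that beam's center.

def beams_blockable_simultaneously(separation_mm: float,
                                   beam_diameter_mm: float,
                                   obscurant_diameter_mm: float) -> bool:
    """Return True if a single circular obscurant could overlap both beams."""
    # Worst case: the obscurant is centered midway between the two beam centers.
    reach_mm = (beam_diameter_mm + obscurant_diameter_mm) / 2
    return separation_mm / 2 <= reach_mm

# Values from the example: 7 mm separation, 2 mm beams, 1 mm obscurant.
print(beams_blockable_simultaneously(7.0, 2.0, 1.0))  # False: at most one beam is affected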
As referenced herein, the unblocked light source may obtain data (e.g., pixel data) corresponding to a same portion of an image that the partially/completely blocked light source is unable to obtain due to the presence of the obscurant. It should therefore be understood that references to “same pixel data”, “same data”, and pixels generated from two different light sources being the “same” may represent pixel data corresponding to a same portion of an image, and not strictly identical pixels within the image. For example, references to “same pixel data”, “same data”, and pixels generated from two different light sources being the “same” may represent pixel data associated with two pixels that are adjacent to one another, within several pixels of one another, and/or identical, such that the pixel data of the two pixels corresponds to a same portion of the resulting image.
In the discussion below, example systems and methods for configuring a redundant beam scan to reduce data loss resulting from obscurants will first be described, with reference to the accompanying figures.
As the term is used herein, an “autonomous” or “self-driving” vehicle is a vehicle configured to sense its environment and navigate or drive with no human input, with little human input, with optional human input, and/or with circumstance-specific human input. For example, an autonomous vehicle may be configured to drive to any suitable location and control or perform all safety-critical functions (e.g., driving, steering, braking, parking) for the entire trip, with the driver not being expected (or even able) to control the vehicle at any time. As another example, an autonomous vehicle may allow a driver to safely turn his or her attention away from driving tasks in particular environments (e.g., on freeways) and/or in particular driving modes.
An autonomous vehicle may be configured to drive with a human driver present in the vehicle, or configured to drive with no human driver present. As an example, an autonomous vehicle may include a driver’s seat with associated controls (e.g., steering wheel, accelerator pedal, and brake pedal), and the vehicle may be configured to drive with no one seated in the driver’s seat or with limited, conditional, or no input from a person seated in the driver’s seat. As another example, an autonomous vehicle may not include any driver’s seat or associated driver’s controls, with the vehicle performing substantially all driving functions (e.g., driving, steering, braking, parking, and navigating) at all times without human input (e.g., the vehicle may be configured to transport human passengers or cargo without a driver present in the vehicle). As another example, an autonomous vehicle may be configured to operate without any human passengers (e.g., the vehicle may be configured for transportation of cargo without having any human passengers onboard the vehicle).
As the term is used herein, a “vehicle” may refer to a mobile machine configured to transport people or cargo. For example, a vehicle may include, may take the form of, or may be referred to as a car, automobile, motor vehicle, truck, bus, van, trailer, off-road vehicle, farm vehicle, lawn mower, construction equipment, golf cart, motorhome, taxi, motorcycle, scooter, bicycle, skateboard, train, snowmobile, watercraft (e.g., a ship or boat), aircraft (e.g., a fixed-wing aircraft, helicopter, or dirigible), or spacecraft. In particular embodiments, a vehicle may include an internal combustion engine or an electric motor that provides propulsion for the vehicle.
Generally, the example lidar system 100 may be used to determine the distance to one or more downrange objects. By scanning the example lidar system 100 across a field of regard, the system 100 can be used to map the distance to a number of points within the field of regard. Each of these depth-mapped points may be referred to as a pixel or a voxel. A collection of pixels captured in succession (which may be referred to as a depth map, a point cloud, or a point cloud frame) may be rendered as an image or may be analyzed to identify or detect objects or to determine a shape or distance of objects within the field of regard. For example, a depth map may cover a field of regard that extends 60° horizontally and 15° vertically, and the depth map may include a frame of 100-2000 pixels in the horizontal direction by 4-400 pixels in the vertical direction.
The example lidar system 100 may be configured to repeatedly capture or generate point clouds of a field of regard at any suitable frame rate between approximately 0.1 frames per second (FPS) and approximately 1,000 FPS, for example. The point cloud frame rate may be substantially fixed or dynamically adjustable, depending on the implementation. In general, the example lidar system 100 can use a slower frame rate (e.g., 1 Hz) to capture one or more high-resolution point clouds, and use a faster frame rate (e.g., 10 Hz) to rapidly capture multiple lower-resolution point clouds.
The field of regard of the example lidar system 100 can overlap, encompass, or enclose at least a portion of an object, which may include all or part of an object that is moving or stationary relative to example lidar system 100. For example, an object may include all or a portion of a person, vehicle, motorcycle, truck, train, bicycle, wheelchair, pedestrian, animal, road sign, traffic light, lane marking, road-surface marking, parking space, pylon, guard rail, traffic barrier, pothole, railroad crossing, obstacle in or near a road, curb, stopped vehicle on or beside a road, utility pole, house, building, trash can, mailbox, tree, any other suitable object, or any suitable combination of all or part of two or more distinct objects.
As illustrated in the accompanying figures, the example lidar system 100 includes a first light source 110A configured to emit a first output beam 125A and a second light source 110B configured to emit a second output beam 125B.
Moreover, as illustrated in the accompanying figures, the second light source 110B is spatially displaced relative to the first light source 110A, such that the output beams 125A, 125B travel along laterally offset optical paths toward the mirror assembly 120.
The output beams 125A, 125B may be directed downrange by a mirror assembly 120 across a field of regard for the example lidar system 100 based on the angular orientation of a first mirror 120A and a second mirror 120B. A “field of regard” of the example lidar system 100 may refer to an area, region, or angular range over which the example lidar system 100 may be configured to scan or capture distance information. When the example lidar system 100 scans the output beams 125A, 125B within a 30-degree scanning range, for example, the example lidar system 100 may be referred to as having a 30-degree angular field of regard. The mirror assembly 120 may be configured to scan the output beams 125A, 125B horizontally and vertically, and the field of regard of the example lidar system 100 may have a particular angular width along the horizontal direction and another particular angular width along the vertical direction. For example, the example lidar system 100 may have a horizontal field of regard of 10° to 120° and a vertical field of regard of 2° to 30°.
In particular, the mirror assembly 120 includes at least the first mirror 120A and the second mirror 120B configured to adjust the azimuth emission angle and elevation emission angle of the light pulses emitted from the two light sources 110A, 110B. Generally speaking, the mirror assembly 120 steers the output beams 125A, 125B in one or more directions downrange using one or more actuators driving the first mirror 120A and the second mirror 120B to rotate, tilt, pivot, or move in an angular manner about one or more axes, for example. While the mirror assembly 120 is described herein as including two mirrors, the mirror assembly 120 may include additional mirrors (e.g., an intermediate mirror configured to reflect the light pulses from the first mirror 120A to the second mirror 120B) in other implementations.
The first mirror 120A and the second mirror 120B may be communicatively coupled to a controller (not shown), which may control the mirrors 120A, 120B so as to guide the output beams 125A, 125B in a desired direction downrange or along a desired scan pattern. In general, a scan (or scan line) pattern may refer to a pattern or path along which the output beams 125A, 125B are directed. The example lidar system 100 can use the scan pattern to generate a point cloud with points or “pixels” that substantially cover the field of regard. The pixels may be approximately evenly distributed across the field of regard, or distributed according to a particular non-uniform distribution.
The first mirror 120A is configured to adjust the azimuth emission angle of the emitted light pulses 125A, 125B, and the second mirror 120B is configured to adjust the elevation emission angle of the emitted light pulses 125A, 125B. In certain aspects, the first mirror 120A configured to adjust the azimuth emission angle is a polygonal mirror configured to rotate (e.g., by an angle θx) about an axis orthogonal to the propagation axis of the light pulses 125A, 125B. For example, the first mirror 120A may rotate by approximately 35° about an axis orthogonal to the propagation axis of the light pulses 125A, 125B. In certain aspects, the rotation axis of the first mirror 120A may not be orthogonal to the propagation axis of the light pulses 125A, 125B. For example, the first mirror 120A may be a folding mirror with a rotation axis that is approximately parallel to the propagation axis of the light pulses 125A, 125B. In this example, when the beams are unfolded for analysis, the rotation axis of the first mirror 120A may be oriented in a direction that corresponds to an orthogonal direction relative to the propagation axis of the light pulses 125A, 125B.
Further, in some aspects, the second mirror 120B configured to adjust the elevation emission angle is a plane mirror configured to rotate (e.g., by an angle θy) about an axis that is orthogonal to the propagation axis of the light pulses 125A, 125B. Generally, the angular range of the vertical field of regard is approximately 12-30° (and is usually dynamically adjustable), which corresponds to an angular range of motion for the second mirror 120B of 6-15°, because a reflected beam rotates by twice the rotation of the mirror. Thus, as an example, the second mirror 120B may rotate by up to 15° about an axis orthogonal to the propagation axis of the light pulses 125A, 125B. However, it will be appreciated that the mirrors may be of any suitable geometry, may be arranged in any suitable order, and may rotate by any suitable amount to obtain lidar data corresponding to a suitable field of regard.
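As a rough illustration of the relationship stated above, the following sketch assumes the standard mirror-doubling rule (the reflected beam rotates by twice the mirror's rotation); the function name is illustrative, not from the disclosure.

```python
# Hedged sketch: a mirror scanning over a given angular range produces a
# field of regard twice that size, so 6-15 degrees of elevation-mirror motion
# yields the 12-30 degree vertical field of regard described above.

def field_of_regard_deg(mirror_range_deg: float) -> float:
    """Angular field of regard produced by scanning a mirror over mirror_range_deg."""
    return 2.0 * mirror_range_deg  # reflection doubles the mirror's angular motion

for mirror_range in (6.0, 15.0):
    print(f"{mirror_range:>4.1f} deg of mirror motion -> "
          f"{field_of_regard_deg(mirror_range):.1f} deg vertical FOR")
```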
As an example of the mirror assembly 120 rotation axes, assume that the propagation axis of the light pulses 125A, 125B is in a z-axis direction. The first mirror 120A may have a rotation axis corresponding to a y-axis direction (for scanning in the θx direction), and the second mirror 120B may have a rotation axis corresponding to an x-axis direction (for scanning in the θy direction). Thus, in this example both the first mirror 120A and the second mirror 120B have scan axes that correspond to orthogonal directions relative to the propagation axis of the light pulses 125A, 125B.
In any event, as the vehicle including the example lidar system 100 travels along a roadway, various obscurants (e.g., water droplets, dirt) may contact the optical window 130, causing the light pulses 125A, 125B emitted by one or more of the light sources 110A, 110B to be obscured during transmission through the optical window 130. Because the emitted light pulses 125A, 125B are scattered, blocked, and/or otherwise obscured by the obscurants, the amount of data received by the receiver 140 is reduced, and information corresponding to the blocked portions of the field of regard is eliminated. However, unlike conventional systems, the spatial displacement of the two light sources 110A, 110B is greater than an average diameter of obscurants (e.g., obscurant 132) that are expected to contact the optical window 130, such that at least one of the two emitted light pulses 125A, 125B will transmit through the optical window 130 without being obscured by the obscurant 132 for each data point within the field of regard. In some aspects, the average diameter of the obscurant 132 contacting the optical window 130 is approximately 1 millimeter. In some aspects, the spatial displacement corresponds to a lateral (or transverse) displacement along an axis orthogonal to the propagation axis of the light pulses 125A, 125B, and the light sources 110A, 110B may also be displaced axially.
Once the light pulses 125A, 125B pass the mirror assembly 120, the light pulses 125A, 125B exit through the optical window 130, reflect/scatter off of an object located in the external environment of the vehicle, and return through the optical window 130 to generate data corresponding to the environment of the vehicle. Depending on the azimuthal/elevation emission angles of the light pulses 125A, 125B, one of the light pulses 125A, 125B may be blocked, scattered, and/or otherwise obscured by the obscurant 132 when exiting through the optical window 130. However, the spatial displacement of the two light pulses 125A, 125B relative to one another is greater than the diameter of the obscurant 132, ensuring that at least one of the light pulses 125A, 125B always returns through the optical window 130 to provide data corresponding to the environment of the vehicle. As a result, the example lidar system 100 is configured to reliably collect environmental data corresponding to the entire field of regard of the lidar system 100 regardless of whether or not an obscurant 132 contacts the optical window 130.
As an example, assume that the first light pulse 125A is obscured by the obscurant 132 at a first azimuthal emission angle and a first elevation emission angle, but the second light pulse 125B is unobscured at these emission angles. The second light pulse 125B may reach a first object located in the external environment of the vehicle and return through the optical window 130, where the second light pulse 125B is again unobscured by the obscurant 132. Continuing this example, assume that the second light pulse 125B is obscured by the obscurant 132 at a second azimuthal emission angle and a second elevation emission angle, but the first light pulse 125A is unobscured at these emission angles. The first light pulse 125A may reach a second object located in the external environment of the vehicle and return through the optical window 130, where the first light pulse 125A is again unobscured by the obscurant 132. Thus, in this example, the example lidar system 100 successfully collects lidar data corresponding to the first object and the second object despite light pulses from both light sources 110A, 110B being obscured by the obscurant at various emission angles. In this manner, and as previously stated, the lidar systems of the present disclosure improve over conventional systems by eliminating/reducing data loss resulting from optical window obscurants (e.g., obscurant 132).
In certain aspects, the first light pulse 125A and the second light pulse 125B have a beam diameter at the optical window 130 of approximately 2 millimeters. The light pulses 125A, 125B are generally collimated light beams with a minor amount of beam divergence (e.g., approximately 0.06-0.12°). Thus, the beam diameter of the light pulses 125A, 125B may increase as the light pulses 125A, 125B propagate towards objects in the environment of the vehicle. For example, the beam diameter of the light pulses 125A, 125B may be approximately 10-20 centimeters at 100 meters from the lidar system 100.
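The following is a minimal sketch, assuming a simple small-angle linear model of beam growth (diameter ≈ initial diameter + range × full divergence angle), that reproduces the figures above; it is illustrative only and not a model prescribed by the disclosure.

```python
import math

# Illustrative check of the beam-growth figures above using a linear
# small-angle model: d(R) ~= d0 + R * theta, with theta the full divergence angle.

def beam_diameter_m(d0_m: float, divergence_deg: float, range_m: float) -> float:
    """Approximate beam diameter at the given range for a full divergence angle."""
    return d0_m + range_m * math.radians(divergence_deg)

for div in (0.06, 0.12):
    d = beam_diameter_m(0.002, div, 100.0)  # 2 mm at the window, 100 m downrange
    print(f"divergence {div} deg -> ~{d * 100:.0f} cm at 100 m")
# Prints roughly 11 cm and 21 cm, consistent with the approximately
# 10-20 cm figure stated above.
```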
As the light pulses 125A, 125B return through the optical window 130 (as input beams 135), each pulse is directed back through the mirror assembly 120. The input beams 135 may include light from the output beams 125A, 125B that is scattered by the object, light from the output beams 125A, 125B that is reflected by the object, or a combination of scattered and reflected light from the object. According to some implementations, the example lidar system 100 can include an “eye-safe” laser that presents little or no possibility of causing damage to a person’s eyes. The input beams 135 may contain only a relatively small fraction of the light from the output beams 125A, 125B.
Further, the output beams 125A, 125B and input beams 135 may be substantially coaxial. In other words, the output beams 125A, 125B and input beams 135 may at least partially overlap or share a common propagation axis, so that the input beams 135 and the output beams 125A, 125B travel along substantially the same optical path (albeit in opposite directions). As the example lidar system 100 scans the output beams 125A, 125B across a field of regard, the input beams 135 may follow along with the output beams 125A, 125B, so that the coaxial relationship between the two beams is maintained.
The light pulses 125A, 125B, returning as input beams 135, eventually reach the receiver 140, which is configured to detect a light pulse and output an electric signal corresponding to the detected light pulse. Generally, the light pulses emitted from the first light source 110A and the second light source 110B are emitted with an angular displacement relative to one another in order to increase the point density of the scanned external vehicle environment. This angular displacement translates to a physical displacement at the focal plane of the receiver 140, thereby rendering a single detector insufficient to accurately detect the location of the light pulses emitted from both the first light source 110A and the second light source 110B. As a result, the receiver 140 may comprise a first detector 140A configured to receive a first portion of the light pulses emitted from the first light source 110A and the second light source 110B, and a second detector 140B configured to receive a second portion of the light pulses emitted from the first light source 110A and the second light source 110B.
The receiver 140 may receive or detect photons from the input beams 135 and generate one or more representative signals. For example, the receiver 140 may generate an output electrical signal that is representative of the input beams 135. The receiver 140 may send the electrical signal to a controller (not shown). Depending on the implementation, the controller may include one or more instruction-executing processors, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and/or other suitable circuitry configured to analyze one or more characteristics of the electrical signal in order to determine one or more characteristics of the object, such as its distance downrange from the example lidar system 100. More particularly, the controller may analyze the time of flight or phase modulation for the output beams 125A, 125B transmitted by the light sources 110A, 110B. If the example lidar system 100 measures a time of flight of T (e.g., T representing a round-trip time of flight for an emitted pulse of light to travel from the example lidar system 100 to the object and back to the example lidar system 100), then the distance (D) from the object to the example lidar system 100 may be expressed as D = c·T/2, where c is the speed of light (approximately 3.0×10⁸ m/s).
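For illustration, a minimal sketch of this round-trip time-of-flight relation follows; the names are illustrative, not from the disclosure.

```python
# Sketch of the distance computation D = c * T / 2 described above.
C_M_PER_S = 3.0e8  # approximate speed of light

def distance_m(round_trip_time_s: float) -> float:
    """Distance to the scattering object for a measured round-trip time of flight."""
    return C_M_PER_S * round_trip_time_s / 2

print(distance_m(1e-6))  # a 1 microsecond round trip corresponds to ~150 m
```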
Moreover, in some implementations, the light sources 110A, 110B, the mirror assembly 120, and the receiver 140 may be packaged together within a single housing, which may be a box, case, or enclosure that holds or contains all or part of the example lidar system 100. In some implementations, the housing includes multiple lidar sensors, each including a respective mirror assembly and a receiver. Depending on the particular implementation, each of the multiple sensors can include a separate light source or a common light source. The multiple sensors can be configured to cover non-overlapping adjacent fields of regard or partially overlapping fields of regard, for example, depending on the implementation.
As described above for the example lidar system 100, the two light sources 110A, 110B emit light pulses 125A, 125B that pass through an optical window 130, which may have an obscurant 132 attached or otherwise contacting the window 130 as a result of the vehicle traveling along a roadway. To provide a clearer understanding of how the obscurant 132 causes data loss for conventional systems, and how the techniques of the present disclosure solve such issues, the optical window 130 is described below in terms of a first zone of interest 152 corresponding to a full field of regard (FOR) and a second zone of interest 154 corresponding to a high density region of the FOR.
Accordingly, the first zone of interest 152 may have a first zone height 152A and a first zone width 152B sufficient to define a full FOR for any suitable system (e.g., example lidar system 100), and the second zone of interest 154 may have a second zone height 154A and a second zone width 154B sufficient to define such a high density region for the suitable system. As an example, the second zone height 154A for the optical window 130 within the example lidar system 100 may be approximately 25 millimeters and the second zone width 154B for the optical window 130 within the example lidar system 100 may be approximately 34 millimeters to define a high density region encompassing approximately 35° of azimuth and 10° of elevation.
The obscurant 132 may be any suitable blocking obscurant, such as moisture (e.g., water droplets, ice, snow, etc.), dirt, and/or any other object contacting the optical window 130. As previously mentioned, obscurants contacting an optical window of a lidar system (or any sensor including such a window) included within a vehicle may generally have an average diameter of approximately 1 millimeter. In conventional systems using single output beams, obscurants of such size result in significant data loss because the single output beams are blocked and/or otherwise obscured from returning data to a receiver corresponding to a significant portion of the FOR. For example, a single output beam blocked by such an obscurant produces a data shadow, that is, a region of the FOR for which no data is returned to the receiver.
As illustrated in the accompanying figures, a conventional single-beam system scanning through an optical window contacted by an obscurant therefore produces a data output containing a complete data shadow, within which no target objects can be detected.
By contrast, the systems of the present disclosure emit two spatially displaced output beams 174A, 174B through an optical window 173 toward a target object 175, such that at most one of the output beams 174A, 174B is obscured by the obscurant 132 at any particular combination of azimuth and elevation angles.
Thus, the lidar systems of the present disclosure may receive data outputs similar to data output 176 that includes two partial data shadows 177A, 177B. Each of the partial data shadows 177A, 177B includes data representative of target objects located in those regions of the FOR because the output beam that is not blocked/obscured by the obscurant 132 at those azimuth/elevation angles transmits through the optical window 173, scatters off of the target object 175, and returns through the optical window 173 to the receiver (not shown). For example, at the azimuth/elevation angles represented by the partial data shadow 177A, the second output beam 174B is blocked or otherwise obscured by the obscurant 132, such that the partial data shadow 177A represents a region of the FOR for which no data was received from the second output beam 174B. In this example, the partial data shadow 177A includes data from the first output beam 174A because that beam 174A is not blocked or otherwise obscured by the obscurant 132.
In this manner, the techniques of the present disclosure improve over conventional systems by reliably collecting data representative of an entire FOR of a lidar system, despite the presence of an obscurant on the optical window. Accordingly, the techniques of the present disclosure reduce data loss that plagues conventional techniques, and thereby increase the accuracy and consistency of decision-making and vehicle control operations for autonomous vehicles and autonomous vehicle functionalities.
As described for the example lidar system 100 provided above, the mirror assembly 120 directs the output beams 125A, 125B across the field of regard along a scan pattern. An example scan pattern 200 is described below.
The example scan pattern 200 may include multiple points or pixels 210, and each pixel 210 may be associated with one or more laser pulses and one or more corresponding distance measurements. A cycle of the example scan pattern 200 may include a total of Px×Py pixels 210 (e.g., a two-dimensional distribution of Px by Py pixels). The number of pixels 210 along a horizontal direction may be referred to as a horizontal resolution of the example scan pattern 200, and the number of pixels 210 along a vertical direction may be referred to as a vertical resolution of the example scan pattern 200.
Each pixel 210 may be associated with a distance/depth (e.g., a distance to a portion of an object from which the corresponding laser pulse was scattered) and one or more angular values. As an example, the pixel 210 may be associated with a distance value and two angular values (e.g., an azimuth and altitude) that represent the angular location of the pixel 210 with respect to the example lidar system 100. A distance to a portion of an object may be determined based at least in part on a time-of-flight measurement for a corresponding pulse. More generally, each point or pixel 210 may be associated with one or more parameter values in addition to its two angular values. For example, each point or pixel 210 may be associated with a depth (distance) value, an intensity value as measured from the received light pulse, and/or one or more other parameter values, in addition to the angular values of that point or pixel.
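As an illustration of the per-pixel values described above, the following sketch (with an illustrative structure and names, not from the disclosure) stores a distance, two angular values, and an optional intensity, and converts them to Cartesian coordinates.

```python
import math
from dataclasses import dataclass

# Hedged sketch of a pixel carrying a time-of-flight distance plus azimuth
# and altitude angles, as described above; the conversion assumes angles are
# measured relative to the neutral look direction.

@dataclass
class Pixel:
    distance_m: float       # from the time-of-flight measurement
    azimuth_deg: float      # horizontal angle relative to the neutral look direction
    altitude_deg: float     # vertical angle relative to the neutral look direction
    intensity: float = 0.0  # optional measured return intensity

    def to_xyz(self) -> tuple[float, float, float]:
        """Convert the range/angle representation to Cartesian coordinates."""
        az, alt = math.radians(self.azimuth_deg), math.radians(self.altitude_deg)
        x = self.distance_m * math.cos(alt) * math.cos(az)
        y = self.distance_m * math.cos(alt) * math.sin(az)
        z = self.distance_m * math.sin(alt)
        return (x, y, z)

print(Pixel(distance_m=42.0, azimuth_deg=-10.0, altitude_deg=2.5).to_xyz())
```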
An angular value (e.g., an azimuth or altitude) may correspond to an angle (e.g., relative to the center of the FOR) of the output beams 125A, 125B (e.g., when corresponding pulses are emitted from the example lidar system 100) or an angle of the input beams 135 (e.g., when an input signal is received by the example lidar system 100). In some implementations, the example lidar system 100 determines an angular value based at least in part on a position of a component of the mirror assembly 120. For example, an azimuth or altitude value associated with the pixel 210 may be determined from an angular position of the first mirror 120A or the second mirror 120B of the mirror assembly 120. The zero elevation, zero azimuth direction corresponding to the center of the FOR may be referred to as a neutral look direction (or neutral direction of regard) of the example lidar system 100. Thus, each of the scan lines 230A-D, 230Aʹ-Dʹ represents a plurality of pixels 210 with different combinations of azimuth and altitude values. For example, half of the pixels 210 included as part of the scan line 230A may include positive azimuth values and positive altitude values, and the remaining half may include negative azimuth values and positive altitude values. By contrast, each of the pixels 210 included as part of the scan line 230Dʹ may include negative altitude values.
The sensor heads 312A-D in the example vehicle 300 may each include a respective mirror assembly and receiver, and may be positioned at various locations on the vehicle 300 to observe the environment surrounding the vehicle 300.
In the example of the vehicle 300, each of the sensor heads 312A-D may be coupled to a laser via a corresponding link, such that light pulses generated by the laser(s) are emitted from, and detected at, the sensor heads 312A-D.
Data from each of the sensor heads 312A-D may be combined or stitched together to generate a point cloud that covers a horizontal view of 30 degrees or more around the vehicle. For example, the laser corresponding to each sensor head 312A-D may include a controller or processor that receives data from each of the sensor heads 312A-D (e.g., via a corresponding electrical link 320) and processes the received data to construct a point cloud covering a 360-degree horizontal view around the vehicle or to determine distances to one or more targets. The point cloud or information from the point cloud may be provided to a vehicle controller 322 via a corresponding electrical, optical, or radio link 320. The vehicle controller 322 may include one or more CPUs, GPUs, and a non-transitory memory with persistent components (e.g., flash memory, an optical disk) and/or non-persistent components (e.g., RAM).
In some implementations, the point cloud is generated by combining data from each of the multiple sensor heads 312A-D at a controller included within the laser(s), and is provided to the vehicle controller 322. In other implementations, each of the sensor heads 312A-D includes a controller or processor that constructs a point cloud for a portion of the 360-degree horizontal view around the vehicle and provides the respective point cloud to the vehicle controller 322. The vehicle controller 322 then combines or stitches together the point clouds from the respective sensor heads 312A-D to construct a combined point cloud covering a 360-degree horizontal view. Still further, the vehicle controller 322 in some implementations communicates with a remote server to process point cloud data.
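As a simplified illustration of this stitching step, the following sketch rotates each sensor head's points into a common vehicle frame before concatenating them; the mounting yaw values, names, and two-dimensional points are assumptions for illustration only.

```python
import math

# Hedged sketch of stitching per-sensor-head point clouds into one
# vehicle-frame cloud: each head's (x, y) points are rotated by that head's
# assumed mounting yaw and then merged.

def stitch(clouds_by_head: dict[str, list[tuple[float, float]]],
           yaw_by_head_deg: dict[str, float]) -> list[tuple[float, float]]:
    """Rotate each head's points into the vehicle frame and concatenate them."""
    combined = []
    for head, points in clouds_by_head.items():
        yaw = math.radians(yaw_by_head_deg[head])
        c, s = math.cos(yaw), math.sin(yaw)
        combined.extend((c * x - s * y, s * x + c * y) for x, y in points)
    return combined

# Hypothetical yaws for four corner-mounted heads covering 360 degrees.
yaws = {"312A": 45.0, "312B": 135.0, "312C": 225.0, "312D": 315.0}
clouds = {head: [(10.0, 0.0)] for head in yaws}  # one point straight ahead of each head
print(stitch(clouds, yaws))
```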
In any event, the vehicle 300 may be an autonomous vehicle where the vehicle controller 322 provides control signals to various components 330 within the vehicle 300 to maneuver and otherwise control operation of the vehicle 300. The components 330 are depicted in an expanded view for ease of illustration.
The vehicle controller 322 can include a perception module 352 that receives input from the components 330 and uses a perception machine learning (ML) model 354 to provide indications of detected objects, road markings, etc. to a motion planner 356, which generates commands for the components 330 to maneuver the vehicle 300.
In some implementations, the vehicle controller 322 receives point cloud data from the sensor heads 312A-D via the links 320 and analyzes the received point cloud data, using any one or more of the aggregate or individual SDCAs disclosed herein, to sense or identify targets/objects and their respective locations, distances, speeds, shapes, sizes, type of object (e.g., vehicle, human, tree, animal), etc. The vehicle controller 322 then provides control signals via another link 320 to the components 330 to control operation of the vehicle based on the analyzed information.
In addition to the lidar system 302, the vehicle 300 may also be equipped with other sensors 345 such as a camera, a thermal imager, a conventional radar (none illustrated to avoid clutter), etc. The additional sensors 345 can provide additional data to the vehicle controller 322 via wired or wireless communication links. Further, the vehicle 300 in an example implementation includes a microphone array operating as a part of an acoustic source localization system configured to determine sources of sounds.
As another example, a conventional single-laser scan performed while an obscurant contacts the optical window may produce pixel data containing a complete shadow region, for which no pixel data is received and within which no target objects can be detected.
By contrast, a redundant beam scan according to the present disclosure produces pixel data that includes first pixels 512 and second pixels 514, even in the presence of such an obscurant.
The first pixels 512 correspond to pixel data generated based on input signals received from a first light source (e.g., first light source 110A), and the second pixels 514 correspond to pixel data generated based on input signals received from a second light source (e.g., second light source 110B). As illustrated in the accompanying figures, the first pixels 512 and the second pixels 514 are arranged in rows of scan lines that together cover the FOR.
Moreover, as illustrated in the accompanying figures, an obscurant contacting the optical window produces a first shadow region 516, in which rows of the second pixels 514 are missing while the first pixels 512 remain present.
For example, in the first shadow region 516, the pixel data received from the first light source includes multiple pixels 512 that are substantially similar to the data the second light source would have obtained within the first shadow region 516 without the presence of the obscurant, as represented by the gaps between the rows of pixels 514 within the first shadow region 516. Without this pixel data from the first light source within the first shadow region 516, the perception components of the vehicle may miss an object within the first shadow region 516 that ought to be considered when determining vehicle control operations. However, because the pixels 512 generated by the light from the first light source are substantially similar to the pixels 514 generated by the light from the second light source, the first light source generates pixel data within the first shadow region 516 that provides sufficient data to determine whether or not such an object exists, features/characteristics of the object, and how best to maneuver the vehicle as a result of the object’s presence. Thus, utilizing two spatially displaced lasers to perform a redundant beam scan in the manners described herein enables a lidar system to analyze an entire FOR regardless of the presence of an obscurant contacting the optical window.
As described above, the size of obscurants contacting the optical window is the primary consideration when determining how to spatially displace the light sources of the example lidar system 100. Accordingly, it is important to understand what size of obscurants vehicles typically encounter, and more particularly, what size of obscurants typically contact and remain affixed to vehicle surfaces for appreciable periods of time. Thus, sizes and contact periods of typical optical window obscurants will now be described with reference to the accompanying figures.
As illustrated in the accompanying figures, obscurants that contact and remain affixed to vehicle surfaces, such as the optical window, typically have diameters of approximately 1 mm or less.
As previously mentioned, the spatial displacement between the light sources (e.g., light sources 110A, 110B) of the example lidar system 100 is approximately 7 mm, and the beam diameter of the output beams (e.g., output beams 125A, 125B) is approximately 2 mm. Thus, any obscurant contacting the optical window with a diameter equal to or less than 1 mm will not completely block a single output beam, much less obscure/block both output beams simultaneously. To better illustrate this point, the range 610 represents the diameters of obscurants that may contact the optical window without simultaneously obscuring both output beams.
Nevertheless, it is known that droplet diameters for typical rainfall may range from a minimum of 0.1 mm to approximately 3 mm, and that natural soil particles on road surfaces may have diameters ranging from approximately 2.5-10 micrometers (µm). Still, it is highly unlikely that any obscurant with a diameter in excess of 1 mm (a “larger” obscurant) will contact and/or remain in contact with the optical window for an extended duration under any driving condition. These larger obscurants are naturally unstable, and as a result, will flow away from the contact point on the vehicle (e.g., an optical window) after a short period (e.g., a few seconds or less). Namely, a stationary vehicle will allow these larger obscurants to coalesce and flow away quickly due to gravity, and a moving vehicle will cause these larger obscurants to coalesce and flow away due to the airflow over the optical window.
In order to perform the redundant beam scanning functionality described above, a lidar system (e.g., example lidar system 100) may be configured according to a method 700, as represented by the flow diagram described below. The method 700 includes configuring a first light source to emit a first light beam comprising a first light pulse (block 702), and configuring a second light source to emit a second light beam comprising a second light pulse (block 704).
Moreover, in some aspects, the first light source has an angular displacement relative to the second light source, and the angular displacement may be in an orthogonal direction relative to the spatial displacement of the first light source from the second light source. For example, if the spatial displacement of the first light source from the second light source is in a perpendicular direction relative to the direction of travel of the vehicle, then the angular displacement of the light sources may be in a parallel direction relative to the direction of travel of the vehicle. The angular displacement enables the lidar system to obtain higher pixel density during the scanning process, because the angular displacement results in receiving pixel data for objects/portions of objects that are slightly offset from one another.
The method 700 also includes configuring a mirror assembly to adjust an azimuth emission angle and an elevation emission angle of the first light pulse and the second light pulse (block 706). Generally, the mirror assembly includes two mirrors that are individually configured to adjust either the azimuth emission angle or the elevation emission angle of the light pulses. However, in certain aspects, the mirror assembly additionally comprises an intermediate mirror configured to reflect the first light pulse and the second light pulse from the azimuth mirror to the elevation mirror.
In some aspects, the mirror assembly may comprise an azimuth mirror configured to adjust the azimuth emission angle of the first light pulse and the second light pulse. In these aspects, the azimuth mirror may be a polygonal mirror and may be configured to adjust the azimuth emission angle of the first light pulse and the second light pulse by rotating at least 35 degrees about an axis that is orthogonal to a propagation axis of the first light pulse and the second light pulse.
Further, in certain aspects, the mirror assembly may comprise an elevation mirror configured to adjust the elevation emission angle of the first light pulse and the second light pulse. The elevation mirror may be configured to adjust the elevation emission angle of the first light pulse and the second light pulse by rotating up to 15 degrees about an axis that is orthogonal to a propagation axis of the first light pulse and the second light pulse.
The method 700 may also include configuring an optical window to transmit the first light pulse and the second light pulse (block 708), and determining an average diameter of an obscurant expected to contact the optical window (block 710). In certain aspects, the average diameter of the obscurant expected to contact the optical window is approximately 1 mm.
The method 700 may also include spatially displacing the second light source relative to the first light source so that the spatial displacement is greater than the average diameter of the obscurant (block 712). Further, in certain aspects, the spatial displacement of the second light source relative to the first light source is such that the first light pulse and the second light pulse produce two pixels corresponding to a same portion of an image, wherein the two pixels are used to render the same portion of the image. Upon transmission through the optical window, the light beams may diverge such that once they reach a target object and return to the receiver, the pixels generated as a result may be adjacent to one another and/or within several pixels of one another. Thus, the spatial displacement (and, in certain aspects, the angular displacement) of the second light source relative to the first light source may generate similar and/or identical pixel data despite the light sources being spatially displaced at a distance greater than the average diameter of an obscurant expected to contact the optical window. Moreover, in these aspects, the two or more detectors may be configured to output the electric signal(s) for generating the two pixels.
The method 700 may also include configuring a receiver to receive the first light pulse and the second light pulse that are scattered by one or more targets (block 714). The receiver may include two or more detectors, and each detector may be configured to detect the first light pulse or the second light pulse and output an electric signal. In other words, each detector may be paired with a respective light source, such that each detector will only receive scattered light from the corresponding respective light source. For example, a first detector (e.g., first detector 140A) may be paired with a first light source (e.g., first light source 110A) and a second detector (e.g., second detector 140B) may be paired with a second light source (e.g., second light source 110B). In this example, the first detector may only detect light emitted by the first light source, and the second detector may only detect light emitted by the second light source, such that light emitted from the first light source being detected by the second detector (e.g., crosstalk) is minimized/eliminated to reduce false/spurious detections.
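The following minimal sketch illustrates this detector-to-source pairing; the mapping structure and function are illustrative assumptions, with only the reference numerals taken from the description above.

```python
# Hedged sketch: each detector accepts returns only from its paired light
# source, so a pulse arriving at the other detector is rejected rather than
# recorded as a spurious (crosstalk) detection.

DETECTOR_FOR_SOURCE = {"110A": "140A", "110B": "140B"}

def accept_return(source_id: str, detector_id: str) -> bool:
    """Accept a detected pulse only if it arrived at the paired detector."""
    return DETECTOR_FOR_SOURCE.get(source_id) == detector_id

print(accept_return("110A", "140A"))  # True: paired source and detector
print(accept_return("110A", "140B"))  # False: potential crosstalk, rejected
```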
Thus, in certain aspects, the two or more detectors may comprise a first detector configured to receive a first portion of the first light pulse, and a second detector configured to receive a second portion of the second light pulse. Of course, it should be understood that the receiver may include four or more detectors, such that two (or more) detectors are configured to receive the first portion of the first light pulse and two (or more) detectors are configured to receive the second portion of the second light pulse.
Additionally, in certain aspects, the detectors may be configured to detect the first light beam or the second light beam and output an electric signal for generating a first set of pixel data corresponding to the first light beam and a second set of pixel data corresponding to the second light beam. In these aspects, the first set of pixel data may include a first gap and the second set of pixel data may include a second gap that does not completely overlap the first gap. As a result of the spatial displacement of the light sources, a single obscurant may block a portion of the pixel data obtained by the first light source and a different portion of the pixel data obtained by the second light source (e.g., as described above for the first shadow region 516). Because the two gaps do not completely overlap, the first set of pixel data provides data within the second gap and the second set of pixel data provides data within the first gap, such that the combined pixel data represents the entire FOR.
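As a final illustration, the following sketch models scan positions as simple integer indices (an assumption for brevity; real pixels carry range and angle values) and shows that when the two sources' gaps do not overlap, the union of the two pixel sets covers the entire set of scan positions.

```python
# Hedged sketch of the gap-filling property described above: the union of the
# two sources' pixel sets covers every scan position whenever their gaps do
# not overlap.

def missing_positions(first_set: set[int], second_set: set[int],
                      all_positions: set[int]) -> set[int]:
    """Scan positions still uncovered after combining both sources' pixel data."""
    return all_positions - (first_set | second_set)

positions = set(range(10))
first = positions - {4, 5}    # first source's gap (beam blocked at positions 4-5)
second = positions - {7, 8}   # second source's gap falls elsewhere in the FOR
print(missing_positions(first, second, positions))  # set(): nothing is lost
```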
This application claims the benefit of U.S. Provisional Application No. 63/250,726, filed Sep. 30, 2021, and entitled “LIDAR SENSOR WITH A REDUNDANT BEAM SCAN”, which is incorporated herein by reference in its entirety.