The subject disclosure relates to the identification of edge points and planar points in a point cloud obtained by a vehicle lidar system.
Vehicles (e.g., automobiles, trucks, construction equipment, farm equipment) increasingly include sensors that obtain information about the vehicle and its environment. The information facilitates semi-autonomous or autonomous operation of the vehicle. For example, sensors (e.g., camera, radar system, lidar system, inertial measurement unit (IMU), steering angle sensor) may facilitate semi-autonomous maneuvers such as automatic braking, collision avoidance, or adaptive cruise control. A lidar system obtains a point cloud that must be processed to obtain information that would facilitate control of vehicle operation. Accordingly, it is desirable to provide the identification of edge points and planar points in the point cloud obtained by a vehicle lidar system.
In one exemplary embodiment, a system in a vehicle includes a lidar system to transmit incident light and receive reflections from one or more objects as a point cloud of points. The system also includes processing circuitry to identify feature points among the points of the point cloud using principal component analysis, the feature points being planar points, which form one or more surfaces, or edge points. A set of edge points forms a linear surface.
In addition to one or more of the features described herein, the lidar system is a beam-based lidar system that transmits each beam of incident light across a horizontal scan line.
In addition to one or more of the features described herein, the lidar system is a non-beam-based lidar system that transmits each beam of incident light over an area.
In addition to one or more of the features described herein, the processing circuitry identifies neighbor points of each point in the point cloud. The neighbor points are all points of the point cloud that are within a threshold distance of the point.
In addition to one or more of the features described herein, the processing circuitry uses principal component analysis by calculating eigenvalues [λi0, λi1, λi2] of a covariance matrix of the neighbor points of each point, where λi0>λi1>λi2.
In addition to one or more of the features described herein, eigenvectors corresponding to the eigenvalues λi0, λi1, and λi2 are not limited to a particular orientation.
In addition to one or more of the features described herein, the processing circuitry identifies the point as an edge point, a planar point, or neither the edge point nor the planar point alone based on comparing the eigenvalues to threshold values.
In addition to one or more of the features described herein, the processing circuitry identifies the point as the edge point based on the eigenvalue λi0 exceeding a first threshold while the eigenvalues λi1 and λi2 are less than a second threshold.
In addition to one or more of the features described herein, the processing circuitry identifies the point as the planar point based on the eigenvalues λi0 and λi1 exceeding a third threshold while the eigenvalue λi2 is less than a fourth threshold.
In addition to one or more of the features described herein, the processing circuitry identifies an object based on the feature points.
In another exemplary embodiment, a method in a vehicle includes obtaining, at processing circuitry from a lidar system configured to transmit incident light and receive reflections from one or more objects, a point cloud of points. The method also includes identifying, by the processing circuitry, feature points among the points of the point cloud using principal component analysis, the feature points being planar points, which form one or more surfaces, or edge points. A set of edge points forms a linear surface.
In addition to one or more of the features described herein, the obtaining the point cloud is from a beam-based lidar system that transmits each beam of incident light across a horizontal scan line.
In addition to one or more of the features described herein, the obtaining the point cloud is from a non-beam-based lidar system that transmits each beam of incident light over an area.
In addition to one or more of the features described herein, the method also includes the processing circuitry identifying neighbor points of each point in the point cloud, wherein the neighbor points are all points of the point cloud that are within a threshold distance of the point.
In addition to one or more of the features described herein, the method also includes the processing circuitry using principal component analysis by calculating eigenvalues [λi0, λi1, λi2] of a covariance matrix of the neighbor points of each point, where λi0>λi1>λi2.
In addition to one or more of the features described herein, eigenvectors corresponding to the eigenvalues λi0, λi1, and λi2 are not limited to a particular orientation.
In addition to one or more of the features described herein, the method also includes the processing circuitry comparing the eigenvalues to threshold values to identify the point as an edge point, a planar point, or neither the edge point nor the planar point alone.
In addition to one or more of the features described herein, the method also includes the processing circuitry identifying the point as the edge point based on the eigenvalue λi0 exceeding a first threshold while the eigenvalues λi1 and λi2 are less than a second threshold.
In addition to one or more of the features described herein, the method also includes the processing circuitry identifying the point as the planar point based on the eigenvalues λi0 and λi1 exceeding a third threshold while the eigenvalue λi2 is less than a fourth threshold.
In addition to one or more of the features described herein, the method also includes the processing circuitry identifying an object based on the feature points.
The above features and advantages, and other features and advantages of the disclosure are readily apparent from the following detailed description when taken in connection with the accompanying drawings.
Other features, advantages and details appear, by way of example only, in the following detailed description, the detailed description referring to the drawings in which:
The following description is merely exemplary in nature and is not intended to limit the present disclosure, its application or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
As previously noted, a point cloud obtained with a lidar system must be processed in order to obtain information about detected objects. The process is referred to as feature extraction. More specifically, feature extraction refers to the identification of features such as edges and planes within the point cloud. The identification of these edges and planes facilitates the identification of objects in the scene. A beam-based point cloud refers to one that is made up of multiple horizontal scan lines corresponding to multiple beams of the light source (e.g., laser) that are transmitted to obtain the point cloud as reflections. That is, each scan line corresponds to a transmitted beam. The vertical resolution of a beam-based point cloud is limited by how close the transmitted beams are and, consequently, how close the scan lines are to each other. Thus, another type of point cloud that may be obtained is a non-beam-based point cloud. A non-beam-based point cloud may refer, for example, to a point cloud formed as a patch (e.g., cube) per beam. Such a point cloud does not include the horizontal scan lines that define a beam-based point cloud.
Prior feature extraction techniques (e.g., laser odometry and mapping (LOAM)) are well-suited to beam-based point clouds but rely on the horizontal scan lines and, thus, are unsuited for non-beam-based point clouds. Embodiments of the systems and methods detailed herein relate to the identification of edge points and planar points in a point cloud obtained with a vehicle lidar system. Generally, a set of edge points may indicate a linear surface (e.g., long and narrow) (e.g., of a tree, lamp post) while a set of planar points may indicate a surface area (e.g., of a road, broadside of a car). A grouping or set of neighbor points is identified. Principal component analysis (PCA) (i.e., an eigen-decomposition of the covariance matrix of the neighbor points) is used to identify edge points and planar points, as detailed herein.
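The grouping of neighbor points noted above may be sketched as follows. This is a minimal illustration only, not the disclosed implementation: the function name, the brute-force distance search, and the (N, 3) array layout are assumptions made for the sake of the example.

```python
import numpy as np

def neighbor_points(points, i, threshold):
    """Return the neighbor points Ni of the point at index i.

    The neighbor points are all points of the point cloud (an (N, 3)
    array of x, y, z coordinates) that are within `threshold` distance
    of the point, excluding the point itself.
    """
    dists = np.linalg.norm(points - points[i], axis=1)
    mask = (dists <= threshold) & (np.arange(len(points)) != i)
    return points[mask]
```

In practice, a spatial index (e.g., a k-d tree) would typically replace the brute-force search for large point clouds; the result is the same set of neighbor points.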
In accordance with an exemplary embodiment,
The feature extraction processes (i.e., processes to identify edge points E and planar points P among the point cloud 205) discussed for the lidar system 110 may be performed by the lidar controller 115, controller 130, or a combination of the two. The lidar controller 115 and controller 130 may include processing circuitry that may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
The lidar system 110 transmits incident light and receives reflected light. The reflected light is a result of reflection of the incident light by different parts of the objects 140 in the field of view of the lidar system 110. The reflected light is in the form of points pi that form a point cloud 205 (
As noted, the feature extraction results in the identification of points pi that are edge points E or planar points P. The points pi that are planar points P may be part of a horizontal plane h or a vertical plane v. An exemplary horizontal plane h from which points pi in the point cloud 205 may be obtained is illustrated by the road surface (object 140a). Similarly, an exemplary vertical plane v from which points pi in the point cloud 205 may be obtained is illustrated by the hedge row 140b. As shown in
At block 220, the processes include performing principal component analysis (PCA). Specifically, the processes include calculating eigenvalues [λi0, λi1, λi2] of a covariance matrix of neighbor points Ni. In PCA, the eigenvalues [λi0, λi1, λi2] represent the variance of the neighbor points Ni along the principal directions, with the largest eigenvalue corresponding to the direction of largest spread. The eigenvalues [λi0, λi1, λi2] are in order of decreasing magnitude (i.e., λi0>λi1>λi2) and may pertain to any orientation. That is, for example, based on the specific set of neighbor points Ni, the eigenvalue λi0 (i.e., the one with the highest magnitude) may pertain to the x, y, or z axis, whichever represents the direction of largest spread. This is further discussed with reference to
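The eigen-decomposition at block 220 may be sketched as follows; the numpy routines and the function name are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def pca_eigenvalues(neighbors):
    """Eigenvalues [λi0, λi1, λi2] of the covariance matrix of the
    neighbor points Ni, sorted in decreasing magnitude (λi0 > λi1 > λi2).

    `neighbors` is an (M, 3) array of the neighbor points' coordinates.
    """
    cov = np.cov(neighbors, rowvar=False)   # 3x3 covariance matrix
    eigvals = np.linalg.eigvalsh(cov)       # symmetric solver, ascending order
    return eigvals[::-1]                    # reverse to descending order
```

For a set of neighbor points strung out along a line, only the first eigenvalue is significant; for neighbor points spread over a plane, the first two are significant, which is the property the threshold checks below exploit.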
At block 240, a check is done of whether all of the following are true: λi0 and λi1 (the highest and second highest eigenvalues)>threshold3 and λi2 (the lowest eigenvalue)<threshold4. If the check at block 240 indicates that all the conditions are met, then the point pi is identified as a planar point P at block 245. An exemplary set of planar points P is shown. If neither the conditions checked at block 230 nor the conditions checked at block 240 are met, then the point pi is designated, at block 250, as neither an edge point E nor a planar point P alone. The point pi may be both an edge point E and a planar point P or neither. As another example of a point pi that may be designated at block 250, the point pi may be an edge between two planes formed in different directions. As previously noted, identifying the planar points P and the edge points E may facilitate identifying the object 140 that reflected light from the lidar system 110 to provide the point cloud 205.
As shown in
While the above disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from its scope. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the present disclosure not be limited to the particular embodiments disclosed, but will include all embodiments falling within the scope thereof.