SINGLE-STAGE 3-DIMENSION MULTI-OBJECT DETECTING APPARATUS AND METHOD FOR AUTONOMOUS DRIVING

Information

  • Patent Application
    20230071437
  • Publication Number
    20230071437
  • Date Filed
    December 08, 2021
  • Date Published
    March 09, 2023
Abstract
According to at least one embodiment, the present disclosure provides an apparatus for single-stage three-dimensional (3D) multi-object detection by using a LiDAR sensor to detect 3D multiple objects, comprising: a data input module configured to receive raw point cloud data from the LiDAR sensor; a BEV image generating module configured to generate bird's eye view (BEV) images from the raw point cloud data; a learning module configured to perform a deep learning algorithm-based learning task to extract a fine-grained feature image from the BEV images; and a localization module configured to perform a regression operation and a localization operation to find 3D candidate boxes and classes corresponding to the 3D candidate boxes for detecting 3D objects from the fine-grained feature image.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based on, and claims priority from, Korean Patent Application Number 10-2021-0108154, filed Aug. 17, 2021, the disclosure of which is incorporated by reference herein in its entirety.


TECHNICAL FIELD

The present disclosure in at least one embodiment relates to a multi-object detection apparatus. More particularly, the present disclosure relates to an efficient and compact single-stage three-dimensional multi-object detection apparatus for autonomous driving.


BACKGROUND

For autonomous driving of an unmanned vehicle, a driving route needs to be generated by detecting a moving object ahead and estimating its dynamic motion. Currently, studies are being conducted on such dynamic object detection and tracking methods using radar and cameras. With the recent decrease in the cost of laser scanners, a growing majority of automakers are adopting driver-assistance systems in their cars.


For the detection of a moving object using a laser scanner, laser returns are converted to depth values to generate a point cloud around the vehicle equipped with the scanner. Since each individual point in the point cloud carries no specific meaning, detecting and tracking a moving object requires multiple points to be grouped into a single object through a clustering technique.


In this process, perception of the driving environment plays an essential role in autonomous driving tasks and demands robustness in cluttered dynamic environments such as complex urban scenarios.


Automated driving systems capable of performing all driving tasks under any and all roadway and environmental conditions are classified at the highest level of automation defined by the Society of Automotive Engineers (SAE) International. While Advanced Driving Assistants (ADAs) are currently commercially available, they either require human intervention or operate only under limited environmental conditions. The implementation of this autonomy has placed huge requirements on the associated research domains, including Multiple Object Detection and Tracking (MODT). Understanding the dynamic properties of coexisting entities in the environment is crucial for enhancing overall automation, since this knowledge directly impacts the quality of localization, mapping, and motion planning.


Over the past decade, numerous MODT approaches using cameras for recognition have been studied. Objects are detected in the camera reference frame either in a 2D coordinate system or, under a stereo setup, in a 3D coordinate system, producing 2D or 3D trajectories, respectively. However, spatial information obtained by utilizing camera geometry suffers from inconsistent accuracy, and the field of view (FOV) remains limited; panoramic camera-based tracking is yet to be developed. Camera-based approaches also face various challenges, including object truncation, poor lighting conditions, high-speed targets, sensor motion, and interactions between targets.


In autonomous driving tasks, 3D object coordinates require a high level of accuracy and robustness in positioning. Most object detectors run on vehicle-installed systems. To overcome these constraints, efficient and compact 3D detection frameworks are required within the context of a full self-driving embedded system. Therefore, a compact 3D object detector using point cloud techniques needs to be embedded system-friendly so that it can help achieve more feasible autonomous driving.


Recently, detection technology using Light Detection and Ranging (LiDAR) has become increasingly popular, providing sparse panoramic information about the environment. Capable of providing panoramic sparse measurements, ranging up to 100 m, at a reasonable rate of 10-15 Hz, LiDARs are regarded as an ideal sensor for MODT tasks.


Among the various sensors, LiDAR stands as an ideal candidate for the 3D object detection task since it can provide robot vision with 3D point clouds, which are ubiquitous in many mobile robotics applications, and in autonomous driving in particular. In addition, unlike visual sensors, LiDAR operates in any weather; its point density distribution is highly sparse and non-uniform due to factors such as non-uniform sampling of the 3D real world, effective operating range, occlusion, noise, and relative pose, conditions under which visual sensors provide only limited performance.


SUMMARY

According to at least one embodiment, the present disclosure provides an apparatus for single-stage three-dimensional (3D) multi-object detection by using a LiDAR sensor to detect 3D multiple objects, comprising: a data input module configured to receive raw point cloud data from the LiDAR sensor; a BEV image generating module configured to generate bird's eye view (BEV) images from the raw point cloud data; a learning module configured to perform a deep learning algorithm-based learning task to extract a fine-grained feature image from the BEV images; and a localization module configured to perform a regression operation and a localization operation to find 3D candidate boxes and classes corresponding to the 3D candidate boxes for detecting 3D objects from the fine-grained feature image.


The BEV image generating module is configured to generate the BEV images by projecting the raw 3D point cloud data into 2D pseudo-images and discretizing a result of the projecting.


The BEV image generating module is configured to generate four feature map images based on a height, a density, an intensity, and a distance of the raw 3D point cloud data, by encoding the raw 3D point cloud data.


The learning module may be configured to perform a convolutional neural network-based (CNN-based) learning task.


According to another embodiment, the present disclosure provides a method performed by an apparatus for single-stage three-dimensional (3D) multi-object detection by using a LiDAR sensor to detect 3D multiple objects, the method comprising: performing a data input operation by receiving raw point cloud data from the LiDAR sensor; generating bird's eye view (BEV) images from the raw point cloud data; performing a deep learning algorithm-based learning task to extract a fine-grained feature image from the BEV images; and performing a regression operation and a localization operation to find 3D candidate boxes and classes corresponding to the 3D candidate boxes for detecting 3D objects from the fine-grained feature image.


The generating of the BEV images may include generating the BEV images by projecting the raw 3D point cloud data into 2D pseudo-images and discretizing a result of the projecting.


The generating of the BEV images may include generating four feature map images based on a height, a density, an intensity, and a distance of the raw 3D point cloud data, by encoding the raw 3D point cloud data.


The performing of the deep learning algorithm-based learning task may include performing a convolutional neural network-based (CNN-based) learning task.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of the configuration of a 3D multi-object detection apparatus according to at least one embodiment of the present disclosure.



FIG. 2 is a flowchart of a 3D multi-object detection method according to at least one embodiment of the present disclosure.



FIG. 3 is a diagram illustrating an overall framework of a 3D multi-object detection apparatus according to at least one embodiment of the present disclosure.



FIG. 4 shows a detailed structure of a Bird's Eye View (BEV) feature map generation.



FIG. 5 is a graph showing the average point cloud distribution in terms of distance on a region of interest in training dataset samples.



FIG. 6 shows a detailed CNN architecture of a 3D multi-object detection apparatus according to at least one embodiment of the present disclosure.





DETAILED DESCRIPTION

The present disclosure in at least one embodiment aims to provide a compact and efficient 3D object detector framework by utilizing point cloud projection methods and anchor-free methods, which outperforms existing state-of-the-art point cloud projection methods.


The objects of the present disclosure are not limited to those particularly described hereinabove and the above and other objects that the present disclosure could achieve will be clearly understood by those skilled in the art from the following detailed description.


Advantages and features of the embodiments disclosed herein, and methods of achieving them, will become apparent by referring to the descriptions below in conjunction with the accompanying drawings. However, the embodiments presented by the present disclosure are not limited to those disclosed below but may be implemented in various forms, and the present embodiments are merely provided so as to fully convey the scope of the embodiments to those of ordinary skill in the art.


The terms used in this specification will be briefly described before proceeding to the detailed description.


The terms used in this specification have been selected, wherever possible, from common terms currently in wide use while considering the functions of the disclosed embodiments, but they are subject to change according to the intention of a person skilled in the art, precedents, the emergence of new technology, and the like. Some of the terms may have been arbitrarily selected by the applicant, in which case their meanings accompany the relevant detailed description in the specification. Therefore, the terms used herein should be defined based on their meanings together with the description throughout the specification rather than as mere designations.


The terms used in this specification with singular articles “a,” “an,” and “the” are intended to include their plural equivalents as well unless the context specifies them as being singular.


Throughout this specification, when parts “include” or “comprise” a component, they are meant to further include other components, not excluding thereof unless there is a particular description contrary thereto. At least one of the components, elements, modules or units (collectively “modules” in this paragraph) termed as such in this specification or represented by a block in the drawings may be embodied as various numbers of hardware, software and/or firmware structures that execute respective functions described above, according to an example embodiment. According to example embodiments, at least one of these modules may use a direct circuit structure, such as a memory, a processor, a logic circuit, a look-up table, etc. that may execute the respective functions through controls of one or more microprocessors or other control apparatuses. Also, at least one of these modules may be specifically embodied by a program, or a part of code, which contains one or more executable instructions for performing specified logic functions, and executed by one or more microprocessors or other control apparatuses. Further, at least one of these modules may include or may be implemented by a processor such as a central processing unit (CPU) that performs the respective functions, a microprocessor, or the like. Two or more of these modules may be combined into one single module which performs all operations or functions of the combined two or more components. Also, at least part of functions of at least one of these modules may be performed by another of these modules. Functional aspects of the embodiments described herein may be implemented in algorithms that execute on one or more processors. Furthermore, the modules represented by a block or processing steps may employ any number of related art techniques for electronics configuration, signal processing and/or control, data processing and the like.


Hereinafter, some embodiments of the present disclosure will be described in detail by referring to the accompanying drawings. In the following description, like reference numerals preferably designate like elements, although the elements are shown in different drawings. Further, in the following description of some embodiments, a detailed description of related known components and functions when considered to obscure the subject of the present disclosure will be omitted for the purpose of clarity and for brevity.


The present disclosure relates to a single-stage 3D multi-object detection apparatus and method for detection of 3D multi-objects by using a LiDAR sensor.



FIG. 1 is a block diagram of the configuration of a 3D multi-object detection apparatus 100 according to at least one embodiment of the present disclosure.


As shown in FIG. 1, the 3D multi-object detection apparatus 100 according to at least one embodiment includes a data input module 110, a Bird's Eye View (BEV) image generation module 120, a learning module 130, and a localization module 140.


The data input module 110 receives raw point cloud data from a LiDAR sensor.


The BEV image generation module 120 generates a BEV image from the raw point cloud data.


The learning module 130 performs deep learning algorithm-based learning to extract a fine-grained or subdivided feature image from the BEV image.


In at least one embodiment of the present disclosure, the learning module 130 performs Convolutional Neural Network (CNN)-based learning.


The localization module 140 performs a regression operation and a localization operation to find, in the subdivided feature image, 3D candidate boxes and their corresponding classes for detecting a 3D object.


The BEV image generation module 120 may generate a BEV image by projecting and discretizing the raw 3D point cloud data into 2D pseudo-images.


The BEV image generation module 120 may encode the raw 3D point cloud data to generate four feature maps: a height feature map, a density feature map, an intensity feature map, and a distance feature map.



FIG. 2 is a flowchart of a 3D multi-object detection method according to at least one embodiment of the present disclosure.


As shown in FIG. 2, the 3D multi-object detection method includes performing a data input operation by receiving raw point cloud data from the LiDAR sensor (S110), generating BEV images from the raw point cloud data (S120), performing a deep learning algorithm-based learning task to extract a fine-grained feature image from the BEV images (S130), and performing a regression operation and a localization operation to find 3D candidate boxes and classes corresponding to the 3D candidate boxes for detecting 3D objects in the fine-grained feature image (S140).


The step of generating BEV images (S120) may generate the BEV images by projecting and discretizing raw 3D point cloud data into 2D pseudo-images.


The step of generating BEV images (S120) may encode the raw 3D point cloud data to generate four feature maps: a height feature map, a density feature map, an intensity feature map, and a distance feature map.


The step of performing a deep learning algorithm-based learning task (S130) may perform a convolutional neural network-based (CNN-based) learning task.


The present disclosure provides an efficient and compact single-stage 3D multi-object detection apparatus for a real-time and secure system. First, a compact 2D representation of the LiDAR sensor data is utilized, followed by an introduction of a CNN architecture suitable for extracting fine-grained features for the learning task. The present disclosure estimates the heading angle as well as the 3D bounding box position.


The following describes the learning and inference strategy of the present disclosure in terms of compact input generation, a suitable CNN architecture, and localization of the final 3D object candidates.



FIG. 3 is a diagram illustrating an overall framework of a 3D multi-object detection apparatus according to at least one embodiment of the present disclosure.


As shown in FIG. 3, the overall framework of the single-stage 3D multi-object detection apparatus of the present disclosure is composed of four parts: (a) receiving raw point cloud data from a LiDAR sensor, (b) generating BEV pseudo-images, which include four feature images, from the raw point cloud in a compact way, (c) performing CNN-based learning with multiple head blocks to extract the fine-grained feature image for the learning task, and (d) performing regression and localization to find 3D candidate boxes and their corresponding classes.


The Bird's Eye View (BEV) generation is performed as follows.



FIG. 4 shows a detailed structure of a Bird's Eye View feature map generation.


As shown in FIG. 4, the 3D multi-object detection apparatus of at least one embodiment extracts four compact feature maps, the features including a height feature, an intensity feature, a density feature, and a distance feature.


The existing approaches usually encode the raw 3D LiDAR point cloud data into standard 3D grid cells and a standard voxel representation, where a sparse 3D CNN is used to extract the features. However, most of the 3D space is sparse or empty, so these methods are not considered optimized approaches, which leads to both time and hardware inefficiency. Alternatively, the raw 3D LiDAR point cloud data is encoded into a Front View (FV) representation. Although these representations are compact, they cannot avoid object overlap problems.


The LiDAR sensor provides the 3D point location (x, y, z) and reflectance value r of every point and obtains thousands to millions of points per second.


The present disclosure provides a novel compact BEV generation, which projects and discretizes the raw 3D point cloud data into 2D pseudo-images. This is considered a time-efficient preprocessing scheme in which the objects' physical shapes remain explicit.


The total investigated space of the 3D environment, L×W×H, is encoded into the single height, density, intensity, and distance feature maps.


The height feature has cell values each calculated as the maximum height among the point heights within the cell. Then, a normalization step is applied to get the normalized height feature map.


The density feature indicates the density of points within a cell, reflecting the varying point cloud distributions in the 3D real world. The density feature is normalized by the formula







$$\min\!\left(1,\ \frac{\log(Q+1)}{\log(64)}\right),$$




where Q is the quantity of the points within the cell.
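

For illustration only, the density normalization above may be sketched in Python/NumPy as follows; the 4×4 grid of example counts Q is hypothetical and not part of the disclosure.

```python
import numpy as np

def density_feature(cell_point_counts: np.ndarray) -> np.ndarray:
    """Normalized density map: min(1, log(Q + 1) / log(64)) per BEV cell."""
    q = cell_point_counts.astype(np.float32)
    return np.minimum(1.0, np.log(q + 1.0) / np.log(64.0))

# Hypothetical example: a 4x4 grid of per-cell point counts Q
counts = np.array([[0, 3, 70, 10],
                   [1, 64, 5, 0],
                   [200, 2, 0, 8],
                   [0, 0, 1, 30]])
print(density_feature(counts))  # all values fall in [0, 1]
```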


The intensity feature records the LiDAR intensity, i.e., the return strength of the laser beam reflected from the object surface, as a value in [0, 1]. In the present disclosure, the intensity feature of a cell is the raw reflectance value of the point having the maximum height in that cell.


Most of the cells are sparse or empty, especially at far range, and an examination of the training dataset confirmed that roughly 97% of the point cloud is located within the first [0, 30 m] range along the X direction. The point cloud distribution with respect to distance in the training dataset is shown in FIG. 5.



FIG. 5 is a graph showing the average point cloud distribution on the region of interest in the front view of [0,80 m]×[−40 m,40 m]×[−2.8 m,1.2 m] of 7481 samples of the training dataset.


Physically, the point cloud distribution shown in FIG. 5 results from the difference in LiDAR scanning angles and the scene scenarios. At short range, the beam angle is so small that the LiDAR sensor acquires many points, while at long range it obtains only a small number of points because of the larger beam angles. The present disclosure presents this feature map to complement the distance information and reinforce the BEV representation. This feature map is useful for the learning task and further helps the model learn the point cloud distribution by range. The normalized distance feature D_{i_norm} in each cell is calculated by Equation 1.










$$D_{i\_norm} = \frac{D_{O \to P_i}}{D_{\max}} = \frac{\sqrt{x_{P_i}^2 + y_{P_i}^2 + z_{P_i}^2}}{\sqrt{x_{\max}^2 + y_{\max}^2 + z_{\max}^2}} \tag{Equation 1}$$







Here, D_{O→Pi} is the distance between the LiDAR origin (0, 0, 1.73 m) and the current point Pi. D_max is the maximum possible distance from the LiDAR origin to the furthest point P_max within the investigated area Ψ. (x_Pi, y_Pi, z_Pi) and (x_max, y_max, z_max) are the locations of the points Pi and P_max, respectively.
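

For illustration only, the four-channel BEV encoding described above may be sketched in Python/NumPy as follows. The region of interest and LiDAR origin follow the values given in this description, while the 608×608 grid size, the min-max height normalization, and the choice of recording the distance of the max-height point per cell are illustrative assumptions.

```python
import numpy as np

def bev_pseudo_image(points: np.ndarray,
                     x_range=(0.0, 80.0), y_range=(-40.0, 40.0),
                     z_range=(-2.8, 1.2), grid=(608, 608),
                     lidar_origin=(0.0, 0.0, 1.73)) -> np.ndarray:
    """Encode raw LiDAR points (N, 4) = (x, y, z, reflectance) into a
    4-channel BEV pseudo-image: height, density, intensity, distance."""
    H, W = grid
    x, y, z, r = points[:, 0], points[:, 1], points[:, 2], points[:, 3]

    # Keep only points inside the investigated area.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z, r = x[mask], y[mask], z[mask], r[mask]

    # Discretize (x, y) into grid cells.
    col = ((x - x_range[0]) / (x_range[1] - x_range[0]) * W).astype(int).clip(0, W - 1)
    row = ((y - y_range[0]) / (y_range[1] - y_range[0]) * H).astype(int).clip(0, H - 1)

    height = np.zeros((H, W), np.float32)
    intensity = np.zeros((H, W), np.float32)
    distance = np.zeros((H, W), np.float32)
    counts = np.zeros((H, W), np.float32)
    best_z = np.full((H, W), -np.inf, np.float32)

    # Per-point distance from the LiDAR origin, normalized as in Equation 1.
    ox, oy, oz = lidar_origin
    d = np.sqrt((x - ox) ** 2 + (y - oy) ** 2 + (z - oz) ** 2)
    d_max = np.sqrt(max(abs(x_range[0] - ox), abs(x_range[1] - ox)) ** 2 +
                    max(abs(y_range[0] - oy), abs(y_range[1] - oy)) ** 2 +
                    max(abs(z_range[0] - oz), abs(z_range[1] - oz)) ** 2)

    for i in range(len(x)):                       # simple loop for clarity
        rr, cc = row[i], col[i]
        counts[rr, cc] += 1
        if z[i] > best_z[rr, cc]:                 # max-height point defines the cell
            best_z[rr, cc] = z[i]
            height[rr, cc] = (z[i] - z_range[0]) / (z_range[1] - z_range[0])
            intensity[rr, cc] = r[i]              # raw reflectance in [0, 1]
            distance[rr, cc] = d[i] / d_max
    density = np.minimum(1.0, np.log(counts + 1.0) / np.log(64.0))
    return np.stack([height, density, intensity, distance])

# Hypothetical usage with random synthetic points: (N, 4) = (x, y, z, reflectance)
pts = np.random.rand(1000, 4) * [80, 80, 4, 1] + [0, -40, -2.8, 0]
print(bev_pseudo_image(pts).shape)   # (4, 608, 608)
```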


The quick and efficient 3D multi-object detection apparatus of the present disclosure adopts a network architecture that robustly exploits and learns the 2D representation of LiDAR point clouds to detect and classify objects in dense 2D BEV pseudo-images. In terms of encoding the 3D objects and their labels in the training dataset, the network of the present embodiment is capable of directly extracting and encoding them without resorting to predefined object anchors or tuning region proposals passed from a first stage to a second stage. The overall network architecture is illustrated in FIG. 6.


The network architecture provided in the present disclosure may be divided into two sub-networks.


First, the backbone network is used to reclaim general information from the raw BEV representation in the form of convolutional feature maps, and it is compact and has a high representation capability to learn and exploit robust feature representation.


Second, the header network, whose inputs are the final blocks of the backbone network, is designed to learn task-specific predictions. The header network contains five subtasks: the object center point (x, y), the offset information (Δx, Δy), the extending Z coordinate (z), the object size (l, w, h), and the object rotation angle (yaw).


The following details the backbone network and the header network by referring to the drawings.



FIG. 6 shows a detailed CNN architecture of a 3D multi-object detection apparatus according to at least one embodiment of the present disclosure.


As shown in FIG. 6, the overall network of the CNN architecture of the 3D multi-object detection apparatus of the present disclosure is divided into two main parts.


The first is the backbone network and is composed of the following three sub-modules (a, b, c).


a) Res_Block is a modified ResNet block module that represents a set of continuous kernels, a down-sampling ratio, and a quantity of repetitions.


b) US_Block is a module that represents an up-sampling block for each scale.


c) DS_Block is a down-sampling module.


The second is the header network, which includes d) the Head module. The main role of this module is to exploit the features for five targets: the object center, the offset, the Z dimension, the 3D object size, and the rotation angle for the learning tasks.


In deep learning-based object detection tasks, CNNs need to extract the input information in the form of convolutional feature maps. For the learning tasks, the present disclosure provides a compact and robust backbone architecture designed with the criteria of having fewer layers at high resolution and more layers at low resolution.


In the embodiment of FIG. 6, the present disclosure provides the network with a total of ten blocks.


The first block is a convolutional layer with 64 channels, a kernel size of 7, padding of 3, and a stride of 2.


The second to fifth blocks are composed of modified residual layers with a down-sampling factor of 2 per block and 3, 8, 8, and 3 skip connections, respectively.


In total, the down-sampling factor is 32 from the first to fifth blocks.


The sixth to eighth blocks are top-down up-sampling blocks, while the last two blocks are bottom-up down-sampling blocks. The last three blocks are then selected to feed the input of the header network.
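

For illustration only, the ten-block backbone described above may be sketched in PyTorch as follows. The initial 64-channel 7×7 stride-2 convolution, the 3/8/8/3 skip-connection counts, the overall down-sampling factor of 32, and the three up-sampling plus two down-sampling blocks follow this description; the channel widths, the internal structure of the modified residual block, and the way the last three outputs are fused for the header are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Modified residual stage: repeated 3x3 units with skip connections."""
    def __init__(self, c_in, c_out, n_repeat, stride=2):
        super().__init__()
        layers, c = [], c_in
        for i in range(n_repeat):
            s = stride if i == 0 else 1            # down-sample once per stage
            layers.append(nn.Sequential(
                nn.Conv2d(c, c_out, 3, s, 1, bias=False),
                nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, 1, 1, bias=False),
                nn.BatchNorm2d(c_out)))
            c = c_out
        self.blocks = nn.ModuleList(layers)
        self.short = nn.Conv2d(c_in, c_out, 1, stride, bias=False)

    def forward(self, x):
        out = torch.relu(self.blocks[0](x) + self.short(x))   # skip connection
        for blk in self.blocks[1:]:
            out = torch.relu(blk(out) + out)
        return out

class USBlock(nn.Module):
    """Up-sampling block for one scale."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(c_in, c_out, 2, 2, bias=False),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.up(x)

class Backbone(nn.Module):
    """Ten-block backbone: stem conv + 4 residual stages + 3 up + 2 down blocks."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.stem = nn.Sequential(                 # block 1: 64 ch, k7, s2, p3
            nn.Conv2d(in_ch, 64, 7, 2, 3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        self.res2 = ResBlock(64, 64, 3)            # blocks 2-5, /2 each
        self.res3 = ResBlock(64, 128, 8)
        self.res4 = ResBlock(128, 256, 8)
        self.res5 = ResBlock(256, 512, 3)          # total down-sampling: 32
        self.us6 = USBlock(512, 256)               # blocks 6-8: top-down
        self.us7 = USBlock(256, 128)
        self.us8 = USBlock(128, 128)
        self.ds9 = nn.Conv2d(128, 128, 3, 2, 1)    # blocks 9-10: bottom-up
        self.ds10 = nn.Conv2d(128, 128, 3, 2, 1)

    def forward(self, x):
        x = self.res5(self.res4(self.res3(self.res2(self.stem(x)))))
        f8 = self.us8(self.us7(self.us6(x)))
        f9 = self.ds9(f8)
        f10 = self.ds10(f9)
        return f8, f9, f10                         # last three blocks -> header
```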


The header network is designed to be small and efficient so as to learn the multiple specific tasks that handle both classification and 3D object localization. There are five sub-tasks in this network: the object center point corresponding to its class (x̂, ŷ), the offset information (Δx̂, Δŷ), the ẑ coordinate extending ℝ² to ℝ³, the object size (l̂, ŵ, ĥ), and the object rotation angle factors (sin ϕ̂, cos ϕ̂). At the inference stage, the present disclosure can easily decode the object's rotation angle as tan⁻¹(sin ϕ̂, cos ϕ̂) within the range of [−π, π].


The final prediction result is formed as [C, (x̂i+Δx̂i, ŷi+Δŷi, ẑi), tan⁻¹(sin ϕ̂i, cos ϕ̂i), (l̂i, ŵi, ĥi)] for every chosen center point P(x̂, ŷ) whose value is higher than the pre-defined threshold.
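

For illustration only, the five-head header network may be sketched in PyTorch as follows. The output channel counts follow the five sub-tasks listed above; the 3×3-then-1×1 head structure, the 64 intermediate channels, and the sigmoid on the heatmap are illustrative assumptions.

```python
import torch
import torch.nn as nn

def head(c_in: int, c_out: int) -> nn.Sequential:
    """One prediction head: 3x3 conv + ReLU + 1x1 conv (illustrative design)."""
    return nn.Sequential(
        nn.Conv2d(c_in, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, c_out, 1))

class Header(nn.Module):
    """Five task-specific heads over the fused backbone feature map."""
    def __init__(self, c_in=128, num_classes=3):
        super().__init__()
        self.heatmap = head(c_in, num_classes)   # center point per class
        self.offset = head(c_in, 2)              # (dx, dy) sub-cell offset
        self.z = head(c_in, 1)                   # Z-axis location
        self.size = head(c_in, 3)                # (l, w, h)
        self.yaw = head(c_in, 2)                 # (cos phi, sin phi)

    def forward(self, feat):
        return {
            "hm": torch.sigmoid(self.heatmap(feat)),  # values in [0, 1]
            "off": self.offset(feat),
            "z": self.z(feat),
            "size": self.size(feat),
            "yaw": self.yaw(feat),
        }
```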


In the present disclosure, the learning and inference process is implemented in a compact, efficient, and safe manner so as to be embedded system-friendly, as further described below.


The anchor-free single-stage 3D multi-object detection apparatus according to at least one embodiment of the present disclosure predicts five heads in total for each candidate: a keypoint heatmap head, a local offset head, an object orientation head, a Z-axis position head, and a 3D object dimension head. These heads are needed to produce the final candidates at the inference stage.


The learning process may employ a center regression, an offset regression, an orientation regression, a Z-axis location regression, and a size regression. The center regression outputs center points after passing through the CNN architecture, each point corresponding to one object category. The shape of the center heatmap may be defined as








$$\hat{H} \in [0, 1]^{\frac{L}{S} \times \frac{W}{S} \times C},$$




where S is the downsampling ratio and C stands for the number of predicted classes.


The keypoint heatmap H is divided by the factor R and used to find where the object center is in the BEV. Ĥ_{x,y,c} = 1 stands for a detected center point, while Ĥ_{x,y,c} = 0 stands for background.
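

For illustration only, a binary center heatmap target of shape C×(L/S)×(W/S) may be built as follows; the grid size, class count, and object positions in the example are hypothetical.

```python
import numpy as np

def center_heatmap_target(centers_bev, classes, grid, num_classes, S=4):
    """Binary center heatmap: 1 at object centers (after down-sampling by S),
    0 for background, with one channel per predicted class."""
    H, W = grid[0] // S, grid[1] // S
    hm = np.zeros((num_classes, H, W), np.float32)
    for (row, col), c in zip(centers_bev, classes):
        hm[c, row // S, col // S] = 1.0
    return hm

# Hypothetical example: two objects of classes 0 and 1 on a 608x608 BEV grid
hm = center_heatmap_target([(120, 304), (400, 52)], [0, 1], (608, 608), num_classes=3)
print(hm.shape, hm.sum())   # (3, 152, 152) 2.0
```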


The main roles of the offset regression are to reinforce the object center points' accuracy and mitigate the quantization error in the BEV generation process. To this end, the offset regression is applied to predict the offset feature map







$$\tilde{O} \in \mathbb{R}^{\frac{L}{S} \times \frac{W}{S} \times 2}$$






for every center point. The present disclosure chooses the L1 loss as the learning target for the offset.
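

For illustration only, one common way to realize the offset target and its L1 loss, assumed here since the description names only the loss and its purpose, is to regress the fractional remainder lost when a continuous BEV center is quantized to an integer cell:

```python
import torch
import torch.nn.functional as F

def offset_target(center_x: float, center_y: float, S: int = 4) -> torch.Tensor:
    """Fractional part lost when a continuous BEV center is quantized
    to an integer cell of the S-times down-sampled feature map."""
    cx, cy = center_x / S, center_y / S
    return torch.tensor([cx - int(cx), cy - int(cy)])

def offset_loss(pred_offsets: torch.Tensor, gt_offsets: torch.Tensor) -> torch.Tensor:
    """L1 loss over the offsets gathered at the ground-truth center cells."""
    return F.l1_loss(pred_offsets, gt_offsets, reduction="mean")

# Hypothetical example: one center at BEV pixel (123.6, 240.2), S = 4
gt = offset_target(123.6, 240.2).unsqueeze(0)      # ~[0.90, 0.05]
pred = torch.tensor([[0.7, 0.1]])
print(offset_loss(pred, gt))
```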


For safety, the orientation regression is used to ensure accurate prediction of not only the 3D object location but also the heading angle. The heading angle around the Z-axis is considered the yaw angle; for the learning target, the present disclosure encodes the yaw angle ϕ as (cos(ϕ), sin(ϕ)), while at inference the present disclosure decodes the yaw angle ϕ as tan⁻¹(sin(ϕ), cos(ϕ)).
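

For illustration only, this encoding and decoding may be written directly in PyTorch; torch.atan2(sin, cos) realizes tan⁻¹(sin(ϕ), cos(ϕ)) and returns an angle in [−π, π].

```python
import torch

def encode_yaw(phi: torch.Tensor) -> torch.Tensor:
    """Learning target: yaw angle phi encoded as (cos(phi), sin(phi))."""
    return torch.stack([torch.cos(phi), torch.sin(phi)], dim=-1)

def decode_yaw(cos_sin: torch.Tensor) -> torch.Tensor:
    """Inference: recover phi = atan2(sin, cos) within [-pi, pi]."""
    return torch.atan2(cos_sin[..., 1], cos_sin[..., 0])

phi = torch.tensor([0.3, -2.5, 3.0])
print(decode_yaw(encode_yaw(phi)))   # ~[0.3, -2.5, 3.0]
```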


The orientation regression outputs the feature map:







$$\hat{Y} \in \mathbb{R}^{\frac{L}{S} \times \frac{W}{S} \times 2}$$






for every single center point, and the L1 loss function is applied for training as in Equation 2.










$$L_{yaw} = \frac{1}{N} \sum_{p} \sum_{i \in \{\sin(\phi),\, \cos(\phi)\}} \left| \sigma\!\left(\hat{Y}_{\tilde{p},\, i}\right) - Y_{p,\, i} \right| \tag{Equation 2}$$







With the Z-axis location regression, the object center points are predicted in ℝ², and an extension along the Z-axis is needed for localizing the center points in ℝ³. The Z-axis location regression predicts the Z-axis feature map







$$\hat{Z} \in \mathbb{R}^{\frac{L}{S} \times \frac{W}{S} \times 1}$$






for each predicted center point. The Z-axis regression result has a tremendous influence on the 3D bounding box localization precision on account of the unbounded regression targets along the Z-axis across samples with various object attributes. Thus, the prediction is easily sensitive to outliers, especially with an imbalanced training set. To overcome this issue, the balanced L1 loss is introduced to mitigate the effect of the imbalanced training set and improve the stability of the model. The balanced L1 loss may be adopted for learning the Z-axis regression, as in Equation 3.










$$L_{z} = \frac{1}{N} \sum_{p} L_{b}\!\left( \left| \hat{Z}_{\tilde{p}} - Z_{p} \right| \right) \tag{Equation 3}$$







where L_b is the balanced L1 loss, following the balanced L1 loss definition in Equation 4.











$$L_{b}(\mu) = \begin{cases} \dfrac{a}{b}\left(b\,\lvert \mu \rvert + 1\right)\ln\!\left(b\,\lvert \mu \rvert + 1\right) - a\,\lvert \mu \rvert, & \text{if } \lvert \mu \rvert < 1 \\[4pt] \gamma\,\lvert \mu \rvert + C, & \text{otherwise} \end{cases} \tag{Equation 4}$$







The balanced L1 loss hyper-parameters a, b, and γ are related by the constraint a ln(b + 1) = γ.
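

For illustration only, Equation 4 may be implemented as follows. The values a = 0.5 and γ = 1.5 are commonly used defaults and are assumptions here, since the description fixes only the constraint a ln(b + 1) = γ; C is chosen so that the two branches meet at |μ| = 1.

```python
import math
import torch

def balanced_l1_loss(diff: torch.Tensor, a: float = 0.5, gamma: float = 1.5) -> torch.Tensor:
    """Balanced L1 loss L_b(|mu|) of Equation 4, applied element-wise.
    b follows from the stated constraint a * ln(b + 1) = gamma, and C makes
    the two branches continuous at |mu| = 1."""
    b = math.exp(gamma / a) - 1.0
    C = gamma / b - a
    mu = diff.abs()
    small = (a / b) * (b * mu + 1.0) * torch.log(b * mu + 1.0) - a * mu
    large = gamma * mu + C
    return torch.where(mu < 1.0, small, large).mean()

# Hypothetical regression residuals for the Z-axis (or size) head
residuals = torch.tensor([0.05, 0.4, 1.8, -2.3])
print(balanced_l1_loss(residuals))
```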


The size regression process produces the 3D object spatial dimensions, namely the length l, width w, and height h, about the 3D object center coordinate (x, y, z). This task has three values to predict, so for each center point it returns a size regression feature map:







$$\tilde{S} \in \mathbb{R}^{\frac{L}{S} \times \frac{W}{S} \times 3}.$$




The size regression has the same characteristic as the Z-axis regression, and it is also sensitive to outliers owing to the unbounded regression targets. Hence, the balanced L1 loss is chosen as the learning target for size regression:










$$L_{size} = \frac{1}{N} \sum_{p} \sum_{i \in \{l,\, w,\, h\}} L_{b}\!\left( \left| \hat{S}_{\tilde{p},\, i} - S_{p,\, i} \right| \right) \tag{Equation 5}$$







The total loss function of the single-stage 3D multi-object detection apparatus of the present disclosure is the weighted sum of all above head regression losses:






$$L_{total} = \chi_{hm} L_{hm} + \chi_{off} L_{off} + \chi_{yaw} L_{yaw} + \chi_{Z} L_{Z} + \chi_{size} L_{size} \tag{Equation 6}$$


where χ_hm, χ_off, χ_yaw, χ_Z, and χ_size represent the balancing coefficients for the heatmap center regression, offset regression, orientation regression, Z-axis location regression, and size regression, respectively.
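

For illustration only, Equation 6 reduces to a weighted sum; the coefficient values below are placeholders, since the balancing coefficients are not fixed by this description.

```python
# Weighted sum of the head losses (Equation 6); the chi_* defaults are
# illustrative placeholders, not values fixed by the disclosure.
def total_loss(l_hm, l_off, l_yaw, l_z, l_size,
               chi_hm=1.0, chi_off=1.0, chi_yaw=1.0, chi_z=1.0, chi_size=1.0):
    return (chi_hm * l_hm + chi_off * l_off + chi_yaw * l_yaw
            + chi_z * l_z + chi_size * l_size)
```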


The following describes an inference process in the single-stage 3D multi-object detection apparatus of the present disclosure.


For accurately localizing 3D bounding boxes, after extracting the fine-grained feature maps, the present disclosure first checks for the presence of center keypoints by comparing whether each candidate's value is greater than those of its eight connected neighbors. Comparing against the eight neighbors is a popular method of finding keypoints that is both fast and accurate.


Then, the present disclosure keeps only the center points that satisfy two criteria: the center point value is higher than the predefined threshold, and the confidence score filters the detected center points, in priority order, down to a predefined object number within the detection range.


An object in the ℝ³ environment can be described as (cx, cy, cz, r, p, y, l, w, h), where (cx, cy, cz) is the 3D object center, (r, p, y) represents the roll, pitch, and yaw rotation angles, and (l, w, h) is the object length, width, and height, respectively.


Assuming that the object is on a flat road plane, r = p = 0, so an object in ℝ³ has 7 degrees of freedom (cx, cy, cz, y, l, w, h). During inference, let P̂_C = {(x̂i, ŷi)}, i = 1, ..., n, be the set of predictions, where n is the quantity of detected center points of class C.


After prediction, (x̂i, ŷi), (Δx̂i, Δŷi), (sin ϕ̂i, cos ϕ̂i), ẑi, and (l̂i, ŵi, ĥi) are obtained, corresponding to the heatmap center point, offset, orientation angle, Z-axis location, and size dimensions.


All the candidate targets are then fused to produce the accurate 3D bounding box for class C as





(C, x̂i+Δx̂i, ŷi+Δŷi, ẑi, tan⁻¹(sin ϕ̂i, cos ϕ̂i), l̂i, ŵi, ĥi).


The present disclosure handles these tasks in an embedded system-friendly manner. Therefore, the present disclosure can find the object centers by using a lightweight max-pooling operation, which is far faster than the conventional NMS process.
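

For illustration only, this NMS-free inference may be sketched in PyTorch as follows: a 3×3 max-pooling keeps only cells whose value equals the local maximum (i.e., is not exceeded by any of the eight connected neighbors), candidates are thresholded and capped at a top-K count, and each kept center is decoded into (C, x̂+Δx̂, ŷ+Δŷ, ẑ, tan⁻¹(sin ϕ̂, cos ϕ̂), l̂, ŵ, ĥ). The threshold and K values are assumptions, and the decoded (x, y) remain in feature-map coordinates.

```python
import torch
import torch.nn.functional as F

def decode_detections(out: dict, score_thresh: float = 0.3, top_k: int = 50):
    """NMS-free decoding from the header outputs: hm (B,C,H,W), off (B,2,H,W),
    yaw (B,2,H,W) as (cos, sin), z (B,1,H,W), size (B,3,H,W)."""
    hm = out["hm"]
    # Keep only local peaks: a cell survives if 3x3 max-pooling leaves it unchanged,
    # i.e., none of its eight connected neighbors is larger.
    peaks = (hm == F.max_pool2d(hm, 3, stride=1, padding=1)).float() * hm
    B, C, H, W = peaks.shape
    scores, idx = peaks.view(B, -1).topk(top_k)            # cap the candidate count
    cls = torch.div(idx, H * W, rounding_mode="floor")
    rem = idx % (H * W)
    ys = torch.div(rem, W, rounding_mode="floor")
    xs = rem % W

    boxes = []
    for b in range(B):
        for k in range(top_k):
            if scores[b, k] < score_thresh:                # predefined threshold
                continue
            y, x, c = ys[b, k], xs[b, k], cls[b, k]
            dx, dy = out["off"][b, :, y, x]
            cos_p, sin_p = out["yaw"][b, :, y, x]
            l, w, h = out["size"][b, :, y, x]
            boxes.append((int(c),
                          float(x + dx), float(y + dy),     # refined BEV center
                          float(out["z"][b, 0, y, x]),      # Z-axis location
                          float(torch.atan2(sin_p, cos_p)), # yaw in [-pi, pi]
                          float(l), float(w), float(h)))
    return boxes
```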


According to some embodiments, by providing a powerful real-time 3D multi-object detection apparatus for autonomous driving, the present disclosure can improve the accuracy of a 3D object detection task while maintaining a very fast inference speed.


Although exemplary embodiments of the present disclosure have been described for illustrative purposes, those skilled in the art will appreciate that various modifications, additions, and substitutions are possible, without departing from the idea and scope of the claimed invention. Therefore, exemplary embodiments of the present disclosure have been described for the sake of brevity and clarity. The scope of the technical idea of the embodiments of the present disclosure is not limited by the illustrations. Accordingly, one of ordinary skill would understand the scope of the claimed invention is not to be limited by the above explicitly described embodiments but by the claims and equivalents thereof.

Claims
  • 1. An apparatus for single-stage three-dimensional (3D) multi-object detection by using a LiDAR sensor to detect 3D multiple objects, comprising: a data input module configured to receive raw point cloud data from the LiDAR sensor; a BEV image generating module configured to generate bird's eye view (BEV) images from the raw point cloud data; a learning module configured to perform a deep learning algorithm-based learning task to extract a fine-grained feature image from the BEV images; and a localization module configured to perform a regression operation and a localization operation to find 3D candidate boxes and classes corresponding to the 3D candidate boxes for detecting 3D objects from the fine-grained feature image.
  • 2. The apparatus of claim 1, wherein the BEV image generating module is configured to generate the BEV images by projecting the raw 3D point cloud data into 2D pseudo-images and discretizing a result of the projecting.
  • 3. The apparatus of claim 2, wherein the BEV image generating module is configured to generate four feature map images based on a height, a density, an intensity, and a distance of the raw 3D point cloud data, by encoding the raw 3D point cloud data.
  • 4. The apparatus of claim 3, wherein the learning module is configured to perform a convolutional neural network-based (CNN-based) learning task.
  • 5. A method performed by an apparatus for single-stage three-dimensional (3D) multi-object detection by using a LiDAR sensor to detect 3D multiple objects, the method comprising: performing a data input operation by receiving raw point cloud data from the LiDAR sensor; generating bird's eye view (BEV) images from the raw point cloud data; performing a deep learning algorithm-based learning task to extract a fine-grained feature image from the BEV images; and performing a regression operation and a localization operation to find 3D candidate boxes and classes corresponding to the 3D candidate boxes for detecting 3D objects from the fine-grained feature image.
  • 6. The method of claim 5, wherein the generating of the BEV images comprises: generating the BEV images by projecting the raw 3D point cloud data into 2D pseudo-images and discretizing a result of the projecting.
  • 7. The method of claim 6, wherein the generating of the BEV images comprises: generating four feature map images based on a height, a density, an intensity, and a distance of the raw 3D point cloud data, by encoding the raw 3D point cloud data.
  • 8. The method of claim 7, wherein the performing of the deep learning algorithm-based learning task comprises: performing a convolutional neural network-based (CNN-based) learning task.
Priority Claims (1)
Number           Date      Country  Kind
10-2021-0108154  Aug 2021  KR       national