This application is a National Stage Entry of PCT/JP2019/009169 filed on Mar. 7, 2019, the contents of which are incorporated herein by reference in their entirety.
The present invention relates to an image processing device, an image processing method, and a recording medium.
Currently, statistical machine learning methods are used for recognition technologies using images and the like. Such statistical machine learning indispensably requires preparing a large amount of label data assigned correct labels related to recognition targets, and using the label data for learning and evaluation. Technologies related to labeling are described in Non Patent Literatures 1 and 2.
As for the technologies, Patent Literature 1 discloses a shape extracting device capable of extracting the shape of an object from three-dimensional point cloud data at high speed even if the amount of three-dimensional point cloud data is enormous. The technology pertaining to Patent Literature 1 sets an inclusive voxel that covers all the points in the three-dimensional point cloud data. The technology pertaining to Patent Literature 1 divides the inclusive voxel into a plurality of virtual voxels having a larger size than the pitch of the points. The technology pertaining to Patent Literature 1 assigns unique identification numbers to the respective virtual voxels. Furthermore, the technology pertaining to Patent Literature 1 performs a labeling process in units of virtual voxels according to whether any point is present in the corresponding virtual voxel assigned the identification number, and assigns the same label value to linked virtual voxels among the virtual voxels containing the points.
According to the method of assigning the same label value to the linked virtual voxels as in Patent Literature 1 described above, it is unknown which object the virtual voxels correspond to. Accordingly, in order to perform an object recognition process using the method pertaining to Patent Literature 1, it may be required to manually identify which object the label value assigned to the virtual voxel corresponds to. Consequently, according to the method pertaining to Patent Literature 1, it is difficult to easily generate label data used for the object recognition technology using three-dimensional point cloud data.
The present disclosure has been made in order to solve such a problem, and has an object to provide an image processing device, an image processing method and a recording medium that can easily generate label data used for an object recognition technology using three-dimensional point cloud data.
An image processing device according to the present disclosure includes: three-dimensional point cloud obtaining means for obtaining three-dimensional point cloud data that represents a three-dimensional point cloud of an object; two-dimensional label obtaining means for obtaining two-dimensional label data that is label data corresponding to a two-dimensional image of the object; and label conversion means for associating the two-dimensional label data with the three-dimensional point cloud data, and converting the two-dimensional label data into three-dimensional point cloud label data that indicates a label of the three-dimensional point cloud.
An image processing method according to the present disclosure includes: obtaining three-dimensional point cloud data that represents a three-dimensional point cloud of an object; obtaining two-dimensional label data that is label data corresponding to a two-dimensional image of the object; and associating the two-dimensional label data with the three-dimensional point cloud data, and converting the two-dimensional label data into three-dimensional point cloud label data that indicates a label of the three-dimensional point cloud.
A program according to the present disclosure causes a computer to execute: a step of obtaining three-dimensional point cloud data that represents a three-dimensional point cloud of an object; a step of obtaining two-dimensional label data that is label data corresponding to a two-dimensional image of the object; and a step of associating the two-dimensional label data with the three-dimensional point cloud data, and converting the two-dimensional label data into three-dimensional point cloud label data that indicates a label of the three-dimensional point cloud.
According to the present disclosure, an image processing device, an image processing method and a recording medium that can easily generate label data used for an object recognition technology using three-dimensional point cloud data can be provided.
Prior to description of example embodiments of the present disclosure, an overview of the example embodiments according to the present disclosure is described.
The three-dimensional point cloud obtaining unit 2 obtains three-dimensional point cloud data that represents a three-dimensional point cloud of an object. The two-dimensional label obtaining unit 4 obtains two-dimensional label data corresponding to a two-dimensional image of the object. The label conversion unit 6 associates the two-dimensional label data with the three-dimensional point cloud data, and converts the two-dimensional label data into three-dimensional point cloud label data that indicates a label of the three-dimensional point cloud.
<Problem of High Cost of Data for General Machine Learning>
Hereinafter, a problem of a related art is described. Currently, a statistical machine learning method is used for recognition technologies using images and the like. In order to achieve this, it is indispensable to prepare a large amount of data assigned correct labels related to a recognition target domain, and to use the data for learning and evaluation. Currently, preparing such labelled data requires a significantly high cost in terms of time, money, and effort.
<Image Label Generation Method>
In a case of intending to detect recognition target classes on an image (e.g., a person, a car, furniture, etc.), typically, the label data is saved as text data that associates the positions of recognition target objects (target objects) on the image with target class names. Here, for example, the position of each target object can be represented by a vector that represents a rectangle encircling the target object. For example, this vector is represented as [upper left x coordinate, upper left y coordinate, rectangle width, rectangle height] in pixels of the image. The target class name is represented as “Car” or “Pedestrian”, for example.
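As an illustration of this label format, the following is a minimal sketch in Python; the field names (bbox, class_name) and the JSON save format are illustrative choices and not tied to any particular labeling tool.

```python
import json

# One two-dimensional label: a rectangle in pixel coordinates plus a class name.
# The rectangle follows the [upper-left x, upper-left y, width, height] convention
# described above; the field names here are illustrative only.
labels = [
    {"bbox": [120, 45, 64, 128], "class_name": "Pedestrian"},
    {"bbox": [300, 80, 210, 95], "class_name": "Car"},
]

# Label data is typically saved as text associated with the image file name.
with open("frame_000123_labels.json", "w") as f:
    json.dump({"image": "frame_000123.png", "labels": labels}, f, indent=2)
```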
To perform a label assigning operation, a tool that includes a GUI (Graphical User Interface) operated by a user is used. The GUI presents a visual image on a screen. The user operates a mouse cursor and the like to select the upper left coordinates and the lower right coordinates of a rectangle that encircles each target object on the visual image, thus designating a rectangular region. The user inputs label information, such as a class name, through text input, a preliminarily designated selection button or the like.
In a case where the intended output is not position detection through rectangles but image segmentation that distinctly paints regions at the pixel level, the regions of objects designated by the label assigning operation are painted distinctly in units of pixels instead of being enclosed in rectangles. Here, designation of rectangles is relatively simple, whereas the cost of designating image segmentation regions is relatively high.
<3D LiDAR and Point Cloud Data>
In recent years, technologies of measuring three-dimensional structures widely used for geomorphic investigation, automated driving, etc. (or measurement devices using the technologies) include, for example, 3D LiDAR (Light Detection And Ranging, Laser Imaging Detection and Ranging). The 3D LiDAR refers to technologies, and measurement devices thereof, capable of accumulating, as three-dimensional point cloud data, distance data from a measurement point to occluding objects in a three-dimensional space, more densely and over a wider area than a conventional single-line scanner and the like. The three-dimensional point cloud data is data in which the structure of an actual environment obtained by the 3D LiDAR is reflected, for example. The three-dimensional point cloud data is data indicating a three-dimensional point cloud, that is, a set of points (three-dimensional points) having three-dimensional coordinate information obtained through reflection from target objects, such as people and automobiles, and structural objects, such as the ground, buildings, and signage.
<Label Assigning to Three-Dimensional Point Cloud Data>
To visualize a calculation result of three-dimensional point cloud data obtained through 3D LiDAR or the like, typically, the three-dimensional coordinates of individual points are depicted through a three-dimensional viewer or the like. The points thus obtained are scattered in a three-dimensional space. It is significantly difficult to manually label such a large number of three-dimensionally distributed points while viewing them through the three-dimensional viewer or the like, and a high cost may be incurred.
Here, one example of technologies pertaining to labelling is a method of assigning pixel labels to data, as described in Non Patent Literature 1. This method assigns common label data to a visual camera image and a depth image. The method obtains a combination of a visual camera image and a depth image that stores distance data associated with the visual camera image in units of pixels, using an RGB-D camera, such as KINECT®. The target object region in the visual camera image is distinctly painted at the pixel level with respect to each classification, thereby assigning common label data to the visual camera image and the depth image. The RGB-D camera can reconstruct the three-dimensional point cloud, that is, an aggregation of three-dimensional coordinate data items on the object captured by the sensor, from the depth image and the geometrical relational expression between the optical system and pixels on the image. Accordingly, it is conceivable that use of this method can generate three-dimensional point cloud labels from image labels in units of pixels. However, this method requires manually performing a distinct painting operation of the labels in units of pixels on the basis of the object region on the image, and has a problem in that the operation cost is significantly high.
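For reference, the reconstruction mentioned above can be sketched as follows: a depth image is back-projected into a three-dimensional point cloud through a pinhole camera model. This is a minimal sketch; the intrinsic parameters (fx, fy, cx, cy) and the example depth values are assumptions for illustration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud.

    Uses the pinhole relation X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Pixels with zero depth (no measurement) are discarded.
    """
    v, u = np.indices(depth.shape)            # pixel row (v) and column (u) grids
    z = depth.astype(np.float64)
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.stack([x, y, z[valid]], axis=1)  # shape (N, 3)

# Example with assumed intrinsics for a 640 x 480 depth image.
depth = np.random.uniform(0.5, 5.0, size=(480, 640))
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```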
Another example of technologies pertaining to labelling is a method of assigning a three-dimensional box label to data, as described in Non Patent Literature 2. This method proposes a GUI capable of assigning label information, as an “annotation tool”. This method uses the output of the RGB-D camera to present a visual image, and pseudo viewpoint images of a color point cloud from multiple viewpoints (front, top-down, side, etc.) obtained by assigning color information obtained from the visual image to a three-dimensional point cloud. The user then uses the annotation tool to designate a three-dimensional box region encircling a label-assigned target object while operating the mouse cursor on the four types of viewpoint images, and assigns the label information corresponding to the region. This method has the advantage of being able to correctly designate the three-dimensional region encircling the target object, but has a problem in that the position designation on the multi-viewpoint images must be finely adjusted through the mouse cursor, and the input requires effort.
With respect to such a problem, the image processing device 1 according to the present disclosure is configured as described above. Accordingly, this device can reduce the cost of labeling the three-dimensional point cloud, and more simply generate the three-dimensional point cloud label. Specifically, since the image processing device 1 according to the present disclosure is configured as described above, this device can label the corresponding three-dimensional point cloud only on the basis of the two-dimensional object label defined on the visual image. Accordingly, label assigning to the three-dimensional point cloud can be achieved by executing simpler label assigning on the two-dimensional plane, thereby allowing the cost of manual operation to be reduced. Furthermore, the two-dimensional label assigned to the two-dimensional camera image is used as an input. Accordingly, a database having label data common between the camera image and the corresponding three-dimensional point cloud can be generated.
As described above, according to the present disclosure, in an imaging system where an RGB-D camera or a 3D LiDAR and a visual camera are combined, the two-dimensional rectangular label defined on the visual image plane is assigned, which can automatically achieve label assigning to the three-dimensional point cloud. Accordingly, learning data required to develop recognition technologies, such as object identification and detection using three-dimensional point cloud data, can be easily generated. Note that the present example embodiment assumes that there is calibration data allowing mapping from the three-dimensional point cloud onto the visual image.
The two-dimensional label that is defined on the visual image and is used as an input may thus be provided through manual operation; however, the process is not limited thereto, and may be semi-automated by using the output of a detector preliminarily trained on existing learning data and manually correcting only its errors. Specifically, accompanying the significant development of image recognition technologies in recent years, abundant labelled datasets for object detection on two-dimensional images have already been prepared in readily available states. Even an existing detector based on deep learning or the like can be expected to achieve a certain level of performance, except for special objects, and can accordingly generate the two-dimensional label automatically.
Furthermore, as described above, the present disclosure has the advantage of making the label data common to the camera image and the three-dimensional point cloud in a state of allowing association with each other. In a specific example, labelled three-dimensional point clouds are projected onto a two-dimensional image plane, and a process of filling up the gaps between projected points assigned the same label, or the like, is applied, which can generate segmentation label data on image data. As described above, the label data is shared by the two-dimensional image and the three-dimensional point cloud, thereby allowing mutual conversion and interpolation of label data.
An example embodiment is described with reference to the drawings. In order to clarify the illustration, items of the following description and drawings are appropriately omitted and simplified. In each drawing, the same elements are assigned the same symbols, and redundant description is omitted as required.
The description of the example embodiments hereafter mainly assumes a recognition task that detects and identifies objects, such as automobiles and pedestrians, in an actual environment from observation data, that is, machine learning for the sake of object detection, and label data generation for evaluation thereof. However, the application range of the present disclosure is not specifically limited thereto. The method of obtaining the three-dimensional point cloud is not limited to LiDAR. The three-dimensional point cloud may be obtained using another three-dimensional range sensor (e.g., an RGB-D camera).
The image output device 20 may be not only the camera device, but also an image capture device that reads image information stored in a storage medium, converts the information into information having an image format, such as the NTSC format or PAL format, readable by the electronic control device 10, and outputs the information. The image output device 20 in this case can be achieved as a software program operable in a CPU of the electronic control device 10.
The point cloud output device 30 is a device that at least outputs three-dimensional point cloud data to the electronic control device 10. The three-dimensional point cloud data is data that at least includes a set (three-dimensional point cloud) of points having coordinate information (x, y, z) in a three-dimensional space. Typically, the three-dimensional point cloud data is obtained as data by sampling positions on the surface of objects (objects and structural objects) in the three-dimensional space. The point cloud output device 30 is, for example, a 3D LiDAR. The 3D LiDAR is a measurement device that emits laser beams in a plurality of directions at certain horizontal angles and perpendicular angles, measures the distances of objects residing in the corresponding emitting directions on the basis of the reflection time period of the emitted light beams, and obtains the three-dimensional structure therearound from the measurement result as three-dimensional point cloud data.
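The measurement principle described above can be illustrated by converting each beam's horizontal angle, vertical angle, and measured range into a Cartesian point. The following is a minimal sketch under a common axis convention; actual devices differ in conventions and per-beam calibration.

```python
import numpy as np

def ranges_to_points(ranges, azimuth_deg, elevation_deg):
    """Convert per-beam range measurements into an N x 3 point cloud.

    ranges, azimuth_deg, elevation_deg are 1-D arrays of equal length.
    Assumed convention: x forward, y left, z up (device-dependent).
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = ranges * np.cos(el) * np.cos(az)
    y = ranges * np.cos(el) * np.sin(az)
    z = ranges * np.sin(el)
    return np.stack([x, y, z], axis=1)
```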
Note that the point cloud output device 30 may be a reproduction device that outputs, from a storage medium, three-dimensional point cloud data obtained by any type of means. The point cloud output device 30 in this case can be achieved as a software program operable in the CPU of the electronic control device 10.
Note that the example embodiments described below assume that observation targets in a visual image (image information) obtained by the image output device 20 correspond to observation targets in a three-dimensional point cloud obtained by the point cloud output device 30. That is, data output by the image output device 20 and data output by the point cloud output device 30 are assumed to be obtained in a condition where their positional relationship at the same time in the same environment is already known, if not otherwise specified. The visual image and the three-dimensional point cloud can be at least partially associated with each other. That is, the visual field region of the image information and the three-dimensional point cloud at least partially overlap each other in the corresponding actual three-dimensional space.
The electronic control device 10 is a computer that performs information processing (image processing) according to the present disclosure. The electronic control device 10 corresponds to the image processing device 1 shown in
The terminal device 40 operates as a user interface that allows operation of the electronic control device 10, and is used for monitoring the internal state and output of the electronic control device 10. The terminal device 40 is, for example, an output device or an input device. The output device is, for example, a display. For example, the display presents an input image from the image output device 20, presents a converted image obtained by converting the input image, presents various image processing results using the converted image, or presents an image obtained by overlaying label data on the input image. The input device inputs instructions to the electronic control device 10 that include start/stop of the device, parameter designation, and selection of information to be presented on the display. The input device is, for example, a switchboard, a keyboard, a mouse, a touch panel, etc. Note that the image processing system 100 can be achieved also by a configuration that does not include the terminal device 40.
A device that uses, as an input, information obtained by the image processing system 100 can be connected to the electronic control device 10. The function of a device connected to the electronic control device 10 can be achieved as a software program operable by the CPU in the electronic control device 10. It is conceivable that examples of devices to be connected include an image processing device or a machine learning device that receives, as an input, the three-dimensional point cloud label generated according to the present example embodiment.
The electronic control device 10 includes a central processing device 11 (CPU), a storage device 12 (Mem), a storage device 13 (DB), and interfaces 14, 15 and 16. Note that in
The interface 14 is a device that mediates information exchange between the central processing device 11, the storage device 12 and the storage device 13, and the image output device 20. Note that in
The interface 15 is a device that mediates information exchange between the central processing device 11, the storage device 12 and the storage device 13, and the point cloud output device 30. The interface 16 is a device that mediates information exchange between the central processing device 11, the storage device 12 and the storage device 13, and the terminal device 40. The interface 16 performs mediation in a case of outputting a result of information processing performed in the electronic control device 10 to the terminal device 40. Furthermore, the interface 16 performs mediation in a case of receiving an instruction input that is to be output from the terminal device 40 to the electronic control device 10.
The storage device 12 is a device that stores temporary data, and is electrically connected to the central processing device 11. The storage device 13 is a device that mainly stores a database (DB), and is electrically connected to the central processing device 11. Note that in
The central processing device 11 is a device that performs information processing, and is electrically connected to the interfaces 14, 15 and 16, the storage device 12, and the storage device 13. The central processing device 11 executes a program stored in the storage device 13. Accordingly, the central processing device 11 performs information processing according to predetermined procedures while appropriately referring to the information stored in the storage device 12 and the storage device 13, on the basis of image information input through the interface 14 and of various pieces of information input through the interfaces 15 and 16.
As shown in
The three-dimensional point cloud input unit 101 has a function of obtaining three-dimensional point cloud data output from the point cloud output device 30 to the electronic control device 10. The obtained three-dimensional point cloud data is data in which the structure of an actual environment obtained by the 3D LiDAR is reflected, for example. For example, the three-dimensional point cloud data is a set of points (three-dimensional points) having three-dimensional coordinate information obtained through reflection from objects, such as people or automobiles, or structural objects, such as the ground, buildings, or signage.
The two-dimensional label input unit 102 has a function of assigning label information to image data output from the image output device 20 to the electronic control device 10. The label data assigned here is assigned to a region defined by the two-dimensional coordinates of an image. Accordingly, the label data is hereinafter called a two-dimensional label. The two-dimensional label is made of information that includes at least the position of a recognition target object on an image plane, and a class label assigned to the object (or an ID label unique to an individual object), with respect to an image of a certain frame. Here, the “class label” is a label that indicates the type of an object, such as “Car”, “Van”, “Truck”, or “Pedestrian”. The “(unique) ID label” is, for example, a label for individual identification, such as 1, 2, 3, . . . , assigned to the respective objects in the entire frame. Note that the two-dimensional label may include both the class label and the ID label.
Typically, the two-dimensional label is manually generated for the stored image data, using an image labeling tool. Note that the two-dimensional label generation method can be achieved by automatically generating the two-dimensional label using the detection result of an existing image detection technology. Alternatively, the method can be achieved by a method of manually correcting an error of a label generated by the image detection technology. As described above, according to the present example embodiment, the two-dimensional label obtaining method is not specifically limited. The main point of the present example embodiment is to generate the label associated with the three-dimensional point cloud, from the given two-dimensional label.
The label conversion unit 103 has a function of assigning label information corresponding to the three-dimensional point cloud, using the two-dimensional label and the three-dimensional point cloud provided by the three-dimensional point cloud input unit 101 and the two-dimensional label input unit 102. The label conversion unit 103 according to the first example embodiment is characterized by converting the two-dimensional label into the three-dimensional point cloud label on the basis of the following basic method.
Here, referring to
Here, a problem of associating the two-dimensional label with the point cloud in the three-dimensional space is that the number of dimensions on the labeling side is smaller than the number of dimensions on the side to be associated, that is, there is dimension degeneracy, and in general, association is not uniquely determined accordingly. The method according to the present example embodiment can solve such a problem by separating the three-dimensional point cloud on the side to be associated into small clusters on the basis of the three-dimensional information on the point cloud itself. That is, based on expectation that the point cloud constituting the generated small clusters is derived from a single object, the problem is simplified as a matching problem in units of small clusters instead of units of points, thereby allowing unique association to be achieved with the two-dimensional label.
Hereinafter, the detailed function of the label conversion unit 103 is described.
where N is the number of points in the three-dimensional point cloud, and M is the number of clusters. N and M are defined depending on the dividing (clustering) method and data. In the example in
Specifically, the three-dimensional point cloud dividing unit 1031 can process the point cloud in units of small clusters by assigning cluster IDs different among the small clusters to the points that are elements of small clusters. That is, the clustering described here has the same meaning as assigning of corresponding cluster labels to the individual three-dimensional points pk. In other words, after the clustering is executed, for the points classified as the elements of a small cluster Ci, the same cluster ID (i) can be referred to. That is, the following Equation 2 holds.
[Expression 2]
ClusterID(∀p∈Ci)=i (2)
The method of dividing into small clusters is described. An existing clustering method can be used for division into small clusters. For example, as shown in the following Equation 3, distance-based clustering can be used that regards points whose Euclidean distance from each other in a three-dimensional space is smaller than a threshold as belonging to the same small cluster.
[Expression 3]
Ci={pk|∀k∈{l,m},norm(pl,pm)<thresdist} (3)
where C* and p* respectively indicate the small cluster, and the point that is an element thereof. Furthermore, norm( ) indicates the distance between two points, and thresdist is a distance threshold for classification into the same cluster. Moreover, i indicates an index assigned to each small cluster generated by dividing the three-dimensional point cloud data.
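A minimal sketch of the distance-based clustering of Equation 3 is shown below: points whose pairwise Euclidean distance is below thresdist are linked into the same small cluster by region growing. The brute-force search is for clarity only; a practical implementation would use a spatial index such as a k-d tree.

```python
import numpy as np
from collections import deque

def euclidean_clustering(points, thres_dist):
    """Assign a cluster ID to every point: points closer than thres_dist are linked.

    points: (N, 3) array. Returns an (N,) integer array of cluster IDs.
    Brute-force O(N^2) region growing for clarity; use a k-d tree in practice.
    """
    n = len(points)
    cluster_id = -np.ones(n, dtype=int)
    current = 0
    for seed in range(n):
        if cluster_id[seed] != -1:
            continue
        # Grow a cluster outward from an unassigned seed point.
        queue = deque([seed])
        cluster_id[seed] = current
        while queue:
            i = queue.popleft()
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dists < thres_dist) & (cluster_id == -1))[0]:
                cluster_id[j] = current
                queue.append(j)
        current += 1
    return cluster_id
```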
An optimization method using a two-dimensional rectangle can also be used for division into small clusters. For example, as a reference for optimizing the belongingness of points to each of the clusters, a reference can be used that combines a reference for reducing the distance between the cluster and a point with a reference for increasing the expected value that the elements of the cluster, when projected onto the image plane, are included in an identical two-dimensional label rectangle. Accordingly, a constraint condition on the spatial distribution of an object on a two-dimensional plane, provided by the two-dimensional label rectangle, can be imposed on the clustering of the three-dimensional point cloud. Consequently, even in a case where separate objects reside close to each other with respect to a predefined distance threshold, there is a possibility that separation into small clusters can be achieved correctly (on an object-by-object basis).
For example, clustering that integrates points within the distance threshold into the same cluster is performed as described above, and then the following method is performed. That is, for the points p∈Ci included in each cluster Ci, a penalty term is assumed that involves the number of points p′ that are mapped therefrom onto the image plane and are not included in the same two-dimensional label rectangle Bj. A method of redividing the cluster Ci so as to reduce the penalty term can be assumed. Here, for example, the penalty term can be defined as follows.
where all the regions where no two-dimensional label rectangle is defined belong to the same background region B0, and Num is a function of counting the number of elements. Equation 4 represents a concept equivalent to the entropy of the inclusion relation of the two-dimensional label rectangles over all the elements in the cluster Ci. Accordingly, the value of Equation 4 increases with an increase in the number of cases where the elements of the cluster Ci are included in different two-dimensional label rectangles on the image plane. Consequently, redivision may be performed so as to minimize the value or make the value equal to or less than a threshold.
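Although the exact form of Equation 4 is not reproduced here, one plausible instantiation consistent with the description above is the entropy of how a cluster's projected points are distributed over the two-dimensional label rectangles, with the background treated as B0. The following sketch is an assumption for illustration; the function names and the rectangle-assignment rule are not taken from the original.

```python
import numpy as np

def rectangle_index(pt2d, rects):
    """Return the 1-based index of the first rectangle containing pt2d, or 0 for background B0.

    rects: list of (x, y, w, h) two-dimensional label rectangles.
    """
    for j, (x, y, w, h) in enumerate(rects, start=1):
        if x <= pt2d[0] <= x + w and y <= pt2d[1] <= y + h:
            return j
    return 0

def cluster_label_entropy(projected_points, rects):
    """Entropy of the distribution of a cluster's projected points over label rectangles.

    Low when nearly all points fall in one rectangle; high when they are split
    across rectangles, which suggests the cluster should be redivided.
    """
    ids = np.array([rectangle_index(p, rects) for p in projected_points])
    _, counts = np.unique(ids, return_counts=True)
    prob = counts / counts.sum()
    return float(-(prob * np.log(prob)).sum())
```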
The cluster mapping projection unit 1032 projects each small cluster of the three-dimensional point cloud onto the plane coordinate system where the two-dimensional labels are defined. Here, in order to achieve the projection described above, a mapping from the three-dimensional point cloud coordinates onto the plane is required to be defined. It is herein assumed that the plane coordinates where the two-dimensional labels are defined are in the coordinate system of the original image to which the two-dimensional label is assigned, and that the mapping from the three-dimensional point cloud coordinates onto the image plane is given. Specifically, it is assumed that calibration (position association) between the camera that is the image output device 20 and the 3D LiDAR that is the point cloud output device 30 has already been executed.
[Expression 5]
Xcam=TvcXvel (5)
In a case where the camera model is nonlinear, mapping from the three-dimensional point Xvel to the point Ximg on the image plane in the camera image Im is represented as the following Equation 6 using a function Pci:R4→R3.
[Expression 6]
Ximg=Pci(Xcam)=Pci(Tvc·Xvel) (6)
In a case where the camera model is linear, mapping from the three-dimensional point Xvel to the point Ximg on the image plane in the camera image Im is represented as the following Equation 7 using a matrix Pci∈R(3×4).
[Expression 7]
Ximg=PciXcam=Pci·Tvc·Xvel (7)
where all the vectors indicated by Equations 5 to 7 are homogeneous vectors. That is, Xvel, Xcam ∈R4, Ximg ∈R3. As described above, it should be noted that the mapping between the three-dimensional point and the image plane is not a one-to-one correspondence, and only the mapping from the three-dimensional point onto the image plane is uniquely determined.
In general, the elements of the matrix Tvc are called extrinsic parameters, the coefficients or the matrix elements of the function Pci are called (camera) intrinsic parameters, and acquisition of these parameters is called calibration. The present example embodiment assumes that the calibration has already been executed by a certain type of means. At this time, the coordinates of the points of the three-dimensional point cloud are uniquely projected onto respective points on the image plane by a geometrical operation. Likewise, a small cluster, which is a set of points of the three-dimensional point cloud, is projected onto the image plane as the set of mapped points of its elements.
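For the linear camera case of Equations 5 to 7, the projection can be sketched as follows: a 4×4 extrinsic transform Tvc and a 3×4 projection matrix Pci, both assumed to have been obtained by calibration, map homogeneous LiDAR-frame points Xvel onto image pixels. The concrete parameter values below are placeholders.

```python
import numpy as np

def project_points(points_vel, T_vc, P_ci):
    """Project (N, 3) LiDAR-frame points onto the image plane (linear camera model).

    T_vc: 4x4 extrinsic transform (LiDAR -> camera), P_ci: 3x4 intrinsic projection.
    Returns (N, 2) pixel coordinates and the camera-frame depth, which the caller
    can use to mask out points behind the camera.
    """
    n = len(points_vel)
    X_vel = np.hstack([points_vel, np.ones((n, 1))])   # homogeneous coordinates, (N, 4)
    X_cam = T_vc @ X_vel.T                             # camera frame, (4, N)
    X_img = P_ci @ X_cam                               # homogeneous image coords, (3, N)
    uv = (X_img[:2] / X_img[2]).T                      # perspective division -> (N, 2)
    depth = X_cam[2]                                   # camera-frame depth
    return uv, depth

# Placeholder calibration (identity extrinsics, assumed intrinsics).
T_vc = np.eye(4)
P_ci = np.array([[525.0,   0.0, 319.5, 0.0],
                 [  0.0, 525.0, 239.5, 0.0],
                 [  0.0,   0.0,   1.0, 0.0]])
```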
Here,
The label matching unit 1033 has a function of matching the small clusters projected onto the image plane 90 by the cluster mapping projection unit 1032 with the two-dimensional labels assigned by the two-dimensional label input unit 102, and selecting the small clusters that appropriately match the respective two-dimensional labels. Specifically, the label matching unit 1033 searches for the two-dimensional label rectangles (bounding boxes) Bj corresponding to the points p∈Ci in units of small clusters Ci, as shown in the following Equation 8, and associates class labels (and ID labels).
Specifically, the label matching unit 1033 calculates the in-box ratio IBR according to the following Equation 9.
where Ci indicates a small cluster, Bj indicates a two-dimensional label rectangle, and pk indicates a point that is an element of the small cluster. Furthermore, i, j and k are indices of the corresponding set members. Num( ) is a function of counting the number of elements.
Specifically, the label matching unit 1033 calculates the intersection over union (IoU) by the following Equation 10.
where Di is the convex hull of the small cluster Ci on the image plane, and Area( ) is a function of obtaining the area.
The label matching unit 1033 may combine the point in-box ratio indicated by the above Equation 9 with the intersection over union indicated by the above Equation 10 to determine the two-dimensional label rectangle corresponding to the small cluster. For example, as indicated by the following Equation 11, with respect to the small cluster and the point cloud constituting the small cluster, the label matching unit 1033 may regard the two-dimensional label rectangle Bj that has an IBR exceeding the threshold thibr and has the maximum IoU as providing the label of the small cluster and of the point cloud constituting it. Here, Label(*) represents the label data assigned to data *, excluding position information. In a specific example, the label data is [class label and ID label].
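A minimal sketch of the in-box ratio of Equation 9, the intersection over union of Equation 10, and the combined selection rule of Equation 11 is given below. For brevity, the convex hull Di is approximated here by the axis-aligned bounding box of the projected points, and the threshold value is an assumed parameter.

```python
import numpy as np

def in_box_ratio(cluster_uv, rect):
    """IBR: fraction of a cluster's projected points inside rectangle rect = (x, y, w, h)."""
    x, y, w, h = rect
    inside = ((cluster_uv[:, 0] >= x) & (cluster_uv[:, 0] <= x + w) &
              (cluster_uv[:, 1] >= y) & (cluster_uv[:, 1] <= y + h))
    return inside.mean() if len(cluster_uv) else 0.0

def iou_with_rect(cluster_uv, rect):
    """IoU between the projected cluster region and rect.

    The cluster region Di is approximated by the axis-aligned bounding box of the
    projected points instead of the exact convex hull, for brevity.
    """
    cx0, cy0 = cluster_uv.min(axis=0)
    cx1, cy1 = cluster_uv.max(axis=0)
    x, y, w, h = rect
    ix0, iy0 = max(cx0, x), max(cy0, y)
    ix1, iy1 = min(cx1, x + w), min(cy1, y + h)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = (cx1 - cx0) * (cy1 - cy0) + w * h - inter
    return inter / union if union > 0 else 0.0

def match_label(cluster_uv, rects, labels, th_ibr=0.5):
    """Pick the label whose rectangle exceeds the IBR threshold and maximizes IoU."""
    best, best_iou = None, 0.0
    for rect, label in zip(rects, labels):
        if in_box_ratio(cluster_uv, rect) > th_ibr:
            iou = iou_with_rect(cluster_uv, rect)
            if iou > best_iou:
                best, best_iou = label, iou
    return best  # None means the cluster is left unlabeled (background)
```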
The image processing system 100 according to the first example embodiment can automatically generate, from the two-dimensional label assigned to the image data, the three-dimensional label for the corresponding three-dimensional point cloud data. Accordingly, an imaging system (e.g., an imaging system including a camera and 3D LiDAR) capable of simultaneously obtaining a two-dimensional image and a three-dimensional point cloud can generate label data common between the two-dimensional image and the three-dimensional point cloud, with only the cost of assigning the label to the image. Consequently, the label data can be easily used for a machine learning task using the two data items, that is, the two-dimensional image and the three-dimensional point cloud.
The thus generated three-dimensional point cloud label can be used as follows. For example, the three-dimensional point cloud label can be used as an evaluation dataset for performance evaluation for a three-dimensional object detection task for estimating the point cloud corresponding to the object only from the three-dimensional point cloud and outputting the three-dimensional region position of the specific object. For example, the three-dimensional point cloud label can be used as a dataset for learning in a case of achieving a three-dimensional object detection task through statistical machine learning.
(Operation)
Next, a second example embodiment is described with reference to the drawings. In order to clarify the illustration, items of the following description and drawings are appropriately omitted and simplified. In each drawing, the same elements are assigned the same symbols, and redundant description is omitted as required. Note that the system configuration according to the second example embodiment is substantially similar to that shown in
The image processing system 100 according to the second example embodiment generates an image segmentation label using the three-dimensional point cloud label generated by the image processing method according to the first example embodiment. Here, the image segmentation label is the class label (or the ID label) assigned along the shape of the object in the image in units of pixels of the region corresponding to the object.
Here, the first example embodiment described above relates to the two-dimensional label data (rectangle) used for the problem of object detection that estimates an object in an image on a two-dimensional rectangle level.
Meanwhile, the image segmentation label is label data used for the problem of image (semantic) segmentation that performs distinct painting (classification) of the object at the pixel level. Because the image segmentation label requires distinct painting (classification) at the pixel level, the cost of manual label assigning is high. On the other hand, in the second example embodiment, the three-dimensional point cloud label can be automatically generated from the two-dimensional label (rectangle), and the image segmentation label can be automatically generated from the three-dimensional point cloud label.
The image input unit 104 functions as image input means. The image input unit 104 obtains (receives) three-dimensional point cloud data, a corresponding two-dimensional label, and a corresponding camera image. That is, the image input unit 104 obtains labelled three-dimensional point cloud data. The segmentation label generation unit 105 functions as segmentation label generation means. The segmentation label generation unit 105 generates image segmentation label data using the labelled three-dimensional point cloud data output from the label conversion unit 103. There are cases where an image obtained by the image input unit 104 is used, and cases where it is not. According to the second example embodiment, the three-dimensional points output from the label conversion unit 103 are projected onto the image plane, and the image pixels between the projected points are interpolated with respect to each label (class label or ID label), thereby generating the segmentation label.
Here, an existing image interpolation process can be used for the pixel interpolation process. For example, a range, such as 3×3 pixels, is designated, morphological closing is performed, and a process of filling up adjacent pixels is performed. That is, the pixels around each labelled three-dimensional point are assigned the label. By performing such a process, image segmentation label data in which the pixels overlapping the object in the image are filled can be generated from the three-dimensional point cloud label.
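A minimal sketch of this interpolation is given below, building a sparse label image from the projected labelled points and applying morphological closing with a 3×3 kernel per label via OpenCV; the kernel size and iteration count are illustrative parameters.

```python
import numpy as np
import cv2

def segmentation_from_projected_labels(points_uv, point_labels, image_shape,
                                        kernel_size=3, iterations=2):
    """Build a per-pixel segmentation label image from labelled projected points.

    points_uv: (N, 2) pixel coordinates of labelled three-dimensional points.
    point_labels: (N,) integer labels (0 is reserved for background).
    Morphological closing fills the gaps between projected points of the same label.
    """
    seg = np.zeros(image_shape[:2], dtype=np.uint8)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    for label in np.unique(point_labels):
        if label == 0:
            continue
        mask = np.zeros(image_shape[:2], dtype=np.uint8)
        uv = np.round(points_uv[point_labels == label]).astype(int)
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < image_shape[1]) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < image_shape[0]))
        mask[uv[inside, 1], uv[inside, 0]] = 1
        closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=iterations)
        seg[closed > 0] = label   # later labels overwrite earlier ones on overlap
    return seg
```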
The interpolation process may be optimized with reference to input image information (the camera image). After the interpolation process is executed for each label, a method may be used that determines a boundary with reference to image edges, color information, and the like when interference of interpolation pixels occurs between different labels.
The image segmentation label generated by the method according to the second example embodiment may be used as described below. For example, the image segmentation label can be used as an evaluation dataset for performance evaluation for the image segmentation task that estimates region pixels of an object only from an input image, and outputs a result where a mask region is filled. For example, the image segmentation label can be used as a dataset for learning in a case of achieving an image segmentation task through statistical machine learning.
(Operation)
Note that the present invention is not limited to the aforementioned example embodiments, and can be appropriately modified in a range without departing from the spirit. For example, one or more processes of steps in the flowcharts described above can be omitted.
In the examples described above, the program can be stored using any of various types of non-transitory computer readable media, and provided to the computer. The non-transitory computer-readable media encompass various types of tangible storage media. Examples of the non-transitory computer-readable media include magnetic recording media (e.g., a flexible disk, a magnetic tape, and a hard disk drive), magneto-optical recording media (e.g., a magneto-optical disk), a CD-ROM, a CD-R, a CD-R/W, and semiconductor memories (e.g., a mask ROM, a PROM (Programmable ROM), an EPROM (Erasable PROM), a flash ROM, and a RAM). The program may also be supplied to the computer through any of various types of transitory computer readable media. Examples of the transitory computer-readable media include an electric signal, an optical signal, and electromagnetic waves. The transitory computer-readable media can supply the program to the computer through a wired communication path, such as an electric wire or an optical fiber, or through a wireless communication path.
The invention of the present application has been described above with reference to the example embodiments. However, the invention of the present application is not limited by the above description. The configuration and details of the invention of the present application can be variously modified within the scope of the invention in a manner allowing those skilled in the art to understand.
A part or all of the aforementioned example embodiments can be described also as the following Supplementary notes, but are not limited to the followings.
(Supplementary Note 1)
An image processing device, comprising:
three-dimensional point cloud obtaining means for obtaining three-dimensional point cloud data that represents a three-dimensional point cloud of an object;
two-dimensional label obtaining means for obtaining two-dimensional label data that is label data corresponding to a two-dimensional image of the object; and
label conversion means for associating the two-dimensional label data with the three-dimensional point cloud data, and converting the two-dimensional label data into three-dimensional point cloud label data that indicates a label of the three-dimensional point cloud.
(Supplementary Note 2)
The image processing device according to Supplementary Note 1, wherein the label conversion means associates the two-dimensional label data with the three-dimensional point cloud data, by associating a mapped point cloud obtained by projecting the three-dimensional point cloud onto an image plane corresponding to the two-dimensional image, with the two-dimensional label data.
(Supplementary Note 3)
The image processing device according to Supplementary Note 2, wherein the label conversion means associates the two-dimensional label data with the three-dimensional point cloud data, by matching the two-dimensional label data with the mapped three-dimensional point cloud.
(Supplementary Note 4)
The image processing device according to any one of Supplementary Notes 1 to 3, wherein the label conversion means divides the three-dimensional point cloud into a plurality of small clusters by applying clustering to the three-dimensional point cloud in conformity with a predetermined reference, and associates the two-dimensional label data with each of the divided small clusters.
(Supplementary Note 5)
The image processing device according to Supplementary Note 4, wherein the label conversion means associates the two-dimensional label data with the three-dimensional point cloud data, according to a parameter that represents a correlation between the mapped small clusters obtained by projecting the small clusters onto an image plane corresponding to the two-dimensional image, and the two-dimensional label data.
(Supplementary Note 6)
The image processing device according to Supplementary Note 5, wherein the parameter corresponds to at least one of an in-box ratio that is a ratio of elements, of each of the mapped small clusters, included in the two-dimensional label data, and an intersection over union between a convex hull formed of each of the mapped small clusters, and the two-dimensional label data.
(Supplementary Note 7)
The image processing device according to any one of Supplementary Notes 4 to 6, wherein for the clustering, the label conversion means uses, as a reference for optimizing belongingness to each of the clusters of the three-dimensional point cloud, a reference that is a combination of a reference of reducing a distance between the cluster and a point, and a reference of increasing an expected value of inclusion of elements of the cluster projected on an image plane, in an identical two-dimensional label rectangle.
(Supplementary Note 8)
The image processing device according to any one of Supplementary Notes 1 to 7, further comprising segmentation label generation means for generating image segmentation label data, by applying a pixel interpolation process to mapped data obtained by projecting the three-dimensional point cloud label data onto an image plane corresponding to the two-dimensional image.
(Supplementary Note 9)
An image processing method, comprising:
obtaining three-dimensional point cloud data that represents a three-dimensional point cloud of an object;
obtaining two-dimensional label data that is label data corresponding to a two-dimensional image of the object; and
associating the two-dimensional label data with the three-dimensional point cloud data, and converting the two-dimensional label data into three-dimensional point cloud label data that indicates a label of the three-dimensional point cloud.
(Supplementary Note 10)
The image processing method according to Supplementary Note 9, wherein the two-dimensional label data is associated with the three-dimensional point cloud data, by associating a mapped point cloud obtained by projecting the three-dimensional point cloud onto an image plane corresponding to the two-dimensional image, with the two-dimensional label data.
(Supplementary Note 11)
The image processing method according to Supplementary Note 10, wherein the two-dimensional label data is associated with the three-dimensional point cloud data, by matching the two-dimensional label data with the mapped three-dimensional point cloud.
(Supplementary Note 12)
The image processing method according to any one of Supplementary Notes 9 to 11, wherein the three-dimensional point cloud is divided into a plurality of small clusters by applying clustering to the three-dimensional point cloud in conformity with a predetermined reference, and the two-dimensional label data is associated with each of the divided small clusters.
(Supplementary Note 13)
The image processing method according to Supplementary Note 12, wherein the two-dimensional label data is associated with the three-dimensional point cloud data, according to a parameter that represents a correlation between the mapped small clusters obtained by projecting the small clusters onto an image plane corresponding to the two-dimensional image, and the two-dimensional label data.
(Supplementary Note 14)
The image processing method according to Supplementary Note 13, wherein the parameter corresponds to at least one of an in-box ratio that is a ratio of elements, of each of the mapped small clusters, included in the two-dimensional label data, and an intersection over union between a convex hull formed of each of the mapped small clusters, and the two-dimensional label data.
(Supplementary Note 15)
The image processing method according to any one of Supplementary Notes 12 to 14, wherein for the clustering, the method uses, as a reference for optimizing belongingness to each of the clusters of the three-dimensional point cloud, a reference that is a combination of a reference of reducing a distance between the cluster and a point, and a reference of increasing an expected value of inclusion of elements of the cluster projected on an image plane, in an identical two-dimensional label rectangle.
(Supplementary Note 16)
The image processing method according to any one of Supplementary Notes 9 to 15, wherein the method generates image segmentation label data, by applying a pixel interpolation process to mapped data obtained by projecting the three-dimensional point cloud label data onto an image plane corresponding to the two-dimensional image.
(Supplementary Note 17)
A non-transitory computer-readable medium storing a program causing a computer to execute:
a step of obtaining three-dimensional point cloud data that represents a three-dimensional point cloud of an object;
a step of obtaining two-dimensional label data that is label data corresponding to a two-dimensional image of the object; and
a step of associating the two-dimensional label data with the three-dimensional point cloud data, and converting the two-dimensional label data into three-dimensional point cloud label data that indicates a label of the three-dimensional point cloud.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2019/009169 | 3/7/2019 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/179065 | 9/10/2020 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
9251399 | Hwang | Feb 2016 | B2 |
20030076991 | Nishide | Apr 2003 | A1 |
20130051658 | Hwang | Feb 2013 | A1 |
20180157920 | Hu | Jun 2018 | A1 |
20210287058 | Yoshimi | Sep 2021 | A1 |
20210326608 | Yoshimi | Oct 2021 | A1 |
20220027654 | Iwami | Jan 2022 | A1 |
20220092291 | Lai | Mar 2022 | A1 |
20220405894 | Yoshimi | Dec 2022 | A1 |
Number | Date | Country |
---|---|---|
2003-141548 | May 2003 | JP |
2009-076096 | Apr 2009 | JP |
2016-003886 | Jan 2016 | JP |
Entry |
---|
Z. Zhongyang, C. Yinglei, S. Xiaosong, Q. Xianxiang and S. Li, “Classification of LiDAR Point Cloud based on Multiscale Features and PointNet, ” 2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA), Xi'an, China, 2018, pp. 1-7, (Year: 2018). |
W. Dongyu, H. Fuwen, T. Mikolajczyk and H. Yunhua, “Object Detection for Soft Robotic Manipulation Based on RGB-D Sensors,” 2018 WRC Symposium on Advanced Robotics and Automation (WRC SARA), Beijing, China, 2018, pp. 52-58, doi: 10.1109/WRC-SARA.2018.8584237. (Year: 2018). |
Engelmann, Francis, et al. “Exploring spatial context for 3D semantic segmentation of point clouds.” Proceedings of the IEEE international conference on computer vision workshops. 2017. (Year: 2017). |
L. Tchapmi, C. Choy, I. Armeni, J. Gwak and S. Savarese, “SEGCloud: Semantic Segmentation of 3D Point Clouds, ” 2017 International Conference on 3D Vision (3DV), Qingdao, China, 2017, pp. 537-547, doi: 10.1109/3DV.2017.00067. (Year: 2017). |
J. Wang, Q. Xie, Y. Xu, L. Zhou, and N. Ye, “Cluttered indoor scene modeling via functional part-guided graph matching,” Comput. Aided Geometric Des., vol. 43, pp. 82-94, 2016 (Year: 2016). |
Hu et al, Semantic Labeling and Instance Segmentationof 3D Point Clouds Using Patch Context Analysis and Multiscale Processing) IEEE Transactions on Visualization and Computer Graphics, vol. 26, No. 7, Jul. 2020 2485 (Year: 2020). |
Boulch et al, Unstructured point cloud semantic labeling using deep segmentation networks, Eurographics Workshop on 3D Object Retrieval (2017) (Year: 2017). |
Zhang, Jie, et al. “Point cloud normal estimation via low-rank subspace clustering.” Computers & Graphics 37.6 (2013): 697-706. (Year: 2013). |
International Search Report for PCT Application No. PCT/JP2019/009169, mailed on May 7, 2019. |
N. Silberman et al., “Indoor Segmentation and Support Inference from RGBD Images”, 12th European Conference on Computer Vision, Florence, Italy, Oct. 7-13, 2012, pp. 1-14. |
S. Song et al., “SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite”, Proceedings of 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR2015), Boston, MA, USA, Jun. 7-12, 2015. |
Midori KURAHASHI et al., “An approach to detection of car position using LiDAR data for automatic parking”, Proceedings of the 79th National Convention of IPSJ (Information Processing Society of Japan) (2) Artificial Intelligence and Cognitive Science, Mar. 16, 2017. |
Number | Date | Country
---|---|---|
20220139094 A1 | May 2022 | US |