The disclosed embodiments generally relate to forming 3D models of objects based on point cloud data associated with the objects.
3D surface reconstruction is important for many applications, including augmented reality, virtual production, and autonomous driving, among others. There are numerous approaches to 3D surface reconstruction, including image based 3D reconstruction, multi-view based 3D reconstruction, and point cloud based 3D reconstruction. Of these, point cloud based 3D reconstruction methods may be the most ubiquitous due to the availability of a wide range of affordable hardware (such as 3D scanners and light detection and ranging (LIDAR) scanners, among others) that performs surface measurements and outputs point cloud data. Such point cloud data generally comprises a series of coordinates associated with points located on or near the surface of an object measured using 3D scanning hardware. The goal of point cloud based 3D reconstruction methods is to output a surface model defining the scanned object using the point cloud data.
Recent work on point cloud based 3D reconstruction has largely focused on 3D deep learning procedures. Deep learning based methods are the go-to choice for point cloud data that is sparse, noisy, or incomplete, all of which are common in practice. Unfortunately, previous 3D deep learning procedures suffer from a variety of drawbacks. For instance, some are only capable of outputting surface models with limited spatial resolution. Others suffer from discretization errors that reduce the accuracy of the surface model. Still others can only output surface models for a narrow and limited class of objects (e.g., objects similar to the training set used to train the procedure). As such, previous 3D deep learning procedures are often incapable of outputting accurate surface models for a wide range of objects.
Methods and systems for generating 3D models of surfaces that accurately reconstruct both the global structure of an object and its local features are described. The methods and systems generally operate by fusing point features from the point cloud data with voxel features extracted from voxelization procedures. Furthermore, the methods and systems utilize voxelization at multiple spatial resolutions. The use of point-voxel fusion and multiple spatial resolutions may permit the extraction of both global and local geometric features, increasing the accuracy of 3D modeling of objects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosed embodiments, as claimed.
The accompanying drawings, which comprise a part of this specification, illustrate several embodiments and, together with the description, serve to explain the principles and features of the disclosed embodiments. In the drawings:
Reference will now be made in detail to exemplary embodiments, which are discussed with regard to the accompanying drawings. In some instances, the same reference numbers will be used throughout the drawings and the following description to refer to the same or like parts. Unless otherwise defined, technical and/or scientific terms have the meaning commonly understood by one of ordinary skill in the art. The disclosed embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosed embodiments. It is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the disclosed embodiments. Thus, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
3D surface reconstruction is important for many applications, including augmented reality, virtual production, and autonomous driving, among others. There are numerous approaches to 3D surface reconstruction, including image based 3D reconstruction, multi-view based 3D reconstruction, and point cloud based 3D reconstruction. Of these, point cloud based 3D reconstruction methods may be the most ubiquitous due to the availability of a wide range of affordable hardware (such as 3D scanners and light detection and ranging (LIDAR) scanners, among others) that performs surface measurements and outputs point cloud data. Such point cloud data generally comprises a series of coordinates associated with points located on or near the surface of an object measured using 3D scanning hardware. The goal of point cloud based 3D reconstruction methods is to output a surface model defining the scanned object using the point cloud data.
Recent work on point cloud based 3D reconstruction has largely focused on 3D deep learning procedures. Deep learning based methods are the go-to choice for point cloud data that is sparse, noisy, or incomplete, all of which are common in practice. Among the most promising deep learning approaches for point cloud based 3D reconstruction are those that use neural implicit 3D modeling (also referred to herein as “implicit representation”) with latent codes (also referred to herein as “feature vectors”).
Implicit representation generally operates by obtaining a latent code through encoding networks, concatenating the latent code with the coordinates of a point in the point cloud data (referred to as a “query point”), and passing the concatenated data to a machine learning procedure such as a neural network (NN), convolutional NN (CNN), 3D CNN, or multi-layer perceptron (MLP). The network then outputs a value indicating whether the query point is inside (negative distance) or outside (positive distance) of a model surface representing an object. AtlasNet (described in T. Groueix et al., “AtlasNet: a papier-mâché approach to learning 3D surface generation,” arXiv:1802.05384, 2018), DeepSDF (described in J. J. Park et al., “DeepSDF: learning continuous signed distance functions for shape representation,” arXiv:1901.05103, 2019), occupancy networks (described in L. Mescheder et al., “Occupancy networks: learning 3D reconstruction in function space,” arXiv:1812.03828, 2018), and IM-Net (described in Z. Chen and H. Zhang, “Learning implicit fields for generative shape modeling,” arXiv:1812.02822, 2018) are among the earliest methods that use deep learning to learn neural implicit representations for 3D modeling. Unfortunately, the outputs of such models are generally constrained to narrow classes of surfaces that fall within the class of surfaces in the training data used to train the models. Thus, such approaches are typically unable to output 3D surface models for most classes of objects.
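As a minimal sketch of this pipeline (not the specific networks of the cited works, and with illustrative layer sizes and names), the following toy decoder concatenates a latent code with each query-point coordinate and regresses a signed distance with a small MLP:

```python
import torch
import torch.nn as nn


class ImplicitDecoder(nn.Module):
    """Toy implicit-representation decoder: a latent code concatenated with a query
    point coordinate is mapped to a signed distance (negative inside, positive outside).
    Layer sizes are illustrative assumptions, not the cited architectures."""

    def __init__(self, latent_dim=256, hidden_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, latent_code, query_points):
        # latent_code: (B, latent_dim); query_points: (B, N, 3)
        B, N, _ = query_points.shape
        codes = latent_code.unsqueeze(1).expand(B, N, -1)
        x = torch.cat([codes, query_points], dim=-1)   # concatenate code with coordinates
        return self.mlp(x).squeeze(-1)                 # (B, N) predicted signed distances
```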
One possible reason why pioneering works like DeepSDF, occupancy networks, IM-Net, and the like are unable to produce satisfying 3D models for large classes of objects is their reliance on encoding the whole input into a single latent code. A single latent code, while good at encoding large-scale geometric features, is quite poor at encoding local geometric details. Points2Surf (described in P. Erler et al., “Points2Surf: learning implicit surfaces from point cloud patches,” arXiv:2007.10453, 2020) attempted to generalize the modeling capabilities of such signed distance networks by combining local features (which provide unsigned distances of the points in the point cloud data) with long-range neighbor features (which provide the sign, i.e., whether a given point is inside or outside of the surface). IF-Net (described in J. Chibane et al., “Implicit functions in feature space for 3D shape reconstruction and completion,” arXiv:2003.01456, 2020) and NDF (described in J. Chibane et al., “Neural unsigned distance fields for implicit function learning,” arXiv:2010.13938, 2020) further extend these ideas. Instead of encoding surfaces with a single vector, they encode multiple levels of rich features by applying 3D convolutions on a grid of discrete volumes.
A majority of current deep learning based 3D reconstruction methods classify points as inside or outside a surface using an occupancy indicator, or associate with each point a signed distance to the surface. Unfortunately, such representations can only be applied to closed surfaces. Many 3D objects, such as clothing, cars, thin-sheet surfaces, partial scans of a shape, and the like, should be represented as open surfaces. Thus, methods such as NDF learn an implicit unsigned distance field, allowing the reconstruction of sharp edges of open surfaces and the like.
Previous approaches for generating 3D models from point cloud data either utilize the point cloud directly or voxelize the point cloud data into discrete volumes (also referred to as “voxels”). Using the point cloud data directly is lightweight but often lacks information about local neighbor features. Volume based approaches require voxelization of points into discrete 3D volumes (a process referred to as “voxelization”). Such approaches capture more local neighbor information but require substantial memory resources, as the memory requirements increase cubically with spatial resolution. For instance, a spatial resolution of 128³ (i.e., dividing the volume into 128³ voxels) requires 8 times as much memory as a spatial resolution of 64³. Voxelization also introduces discretization errors that decrease accuracy. Moreover, such techniques often require tradeoffs in accuracy between local structures and global features. That is, techniques that utilize high spatial resolution are often capable of accurately reconstructing local features but struggle to accurately reproduce the global structure of the model surface, while techniques that utilize low spatial resolution are often capable of accurately reconstructing the global structure of the model surface but struggle to capture local features. Thus, there is a need for methods and systems that generate 3D models of surfaces that accurately reconstruct both the global structure of an object and its local features.
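To make the cubic scaling concrete (a short worked relation added for clarity, not from the original text): a grid with N voxels per axis stores N³ cells, so doubling N multiplies memory by eight.

```latex
\text{memory}(N) \propto N^{3},
\qquad
\frac{\text{memory}(128)}{\text{memory}(64)} = \left(\frac{128}{64}\right)^{3} = 2^{3} = 8
```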
The methods and systems disclosed herein generate 3D models of surfaces that accurately reconstruct both the global structure of an object and its local features. The methods and systems generally operate by fusing point features from the point cloud data with voxel features extracted from voxelization procedures. Furthermore, the methods and systems utilize voxelization at multiple spatial resolutions. The use of point-voxel fusion and multiple spatial resolutions may permit the extraction of both global and local geometric features, increasing the accuracy of 3D modeling of objects.
As used herein, unless specifically stated otherwise, the terms “a” and “an” mean “one or more,” except where infeasible. Similarly, the use of a plural term does not necessarily denote a plurality unless it is unambiguous in the given context. Further, since numerous modifications and variations will readily occur from studying the present disclosure, it is not desired to limit the disclosure to the exact construction and operation illustrated and described, and accordingly, all suitable modifications and equivalents may be resorted to, falling within the scope of the disclosure.
As used herein, unless specifically stated otherwise, the term “or” encompasses all possible combinations, both conjunctive and disjunctive, except where infeasible. For example, if it is stated that a component may include A or B, then, unless specifically stated otherwise or infeasible, the component may include A alone, or B alone, or A and B. As a second example, if it is stated that a component may include A, B, or C, then, unless specifically stated otherwise or infeasible, the component may include A alone, or B alone, or C alone, or A and B, or A and C, or B and C, or A and B and C.
At step 110, point cloud data is received. In some embodiments, the point cloud data comprises a series of coordinates associated with a plurality of points located on or near a surface of the object. That is, in some embodiments, the point cloud data comprises a series of coordinates {(x1,y1,z1), (x2,y2,z2), . . . }, where (x1,y1,z1) are the Cartesian coordinates of the first point in the point cloud data, (x2,y2,z2) are the Cartesian coordinates of the second point in the point cloud data, and so forth. In some embodiments, the point cloud data is obtained using a 3D scanner, tomographic imaging scanner, LIDAR scanner, or any other scanner that outputs point cloud data associated with the surface.
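As a small, hedged illustration of what such point cloud data might look like in code (the file name and whitespace-separated XYZ text format are assumptions; real scanners may instead emit PLY, LAS, or vendor-specific formats):

```python
import numpy as np


def load_point_cloud(path):
    """Read whitespace-separated x, y, z coordinates into an (N, 3) array.

    The XYZ-style text format is an illustrative assumption only.
    """
    points = np.loadtxt(path, usecols=(0, 1, 2), dtype=np.float64)
    return points.reshape(-1, 3)


# Hypothetical usage: pts[i] holds (x_i, y_i, z_i) for the i-th measured point.
# pts = load_point_cloud("scan.xyz")
```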
At step 120, the point cloud data is voxelized. In some embodiments, the point cloud data is voxelized based on a predetermined spatial resolution. For instance, in some embodiments, the point cloud data is voxelized based on a spatial resolution of 16³, 32³, 64³, 128³, 256³, 512³, 1024³, or the like. That is, in some embodiments, the point cloud data is voxelized into 16³, 32³, 64³, 128³, 256³, 512³, 1024³, or the like voxels. In some embodiments, each of the voxels has a volume of V/16³, V/32³, V/64³, V/128³, V/256³, V/512³, V/1024³, or so forth, where V is the volume occupied by the point cloud data. In some embodiments, the point cloud data is voxelized based on an occupancy procedure. That is, in some embodiments, each voxel is given a value of 1 if one or more points from the point cloud data are located in that voxel and a value of 0 if no points from the point cloud data are located in that voxel. In other embodiments, the point cloud data is voxelized based on a pointgrid procedure (described in T. Le and Y. Duan, “PointGrid: a deep network for 3D shape understanding,” IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, doi:10.1109/CVPR.2018.00959, which is incorporated herein by reference in its entirety for all purposes). In some embodiments, voxelizing the point cloud data forms a voxelized representation of the point cloud data.
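A minimal sketch of the occupancy-based voxelization described above, assuming the points have already been normalized into the unit cube; the resolution argument and normalization convention are illustrative assumptions:

```python
import numpy as np


def voxelize_occupancy(points, resolution=64):
    """Occupancy voxelization: a voxel is 1 if any point falls inside it, else 0.

    Assumes `points` is an (N, 3) array normalized to the unit cube [0, 1)^3;
    the default resolution (64, giving a 64^3 grid) is illustrative.
    """
    grid = np.zeros((resolution, resolution, resolution), dtype=np.float32)
    idx = np.clip((points * resolution).astype(int), 0, resolution - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
```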
At step 130, a point-voxel machine learning procedure is applied to the voxelized representation of the point cloud data and/or the point cloud data. In some embodiments, the point-voxel machine learning procedure comprises an NN, CNN, 3D CNN, or MLP. In some embodiments, the point-voxel machine learning procedure comprises a neural implicit representation procedure (described in J. Chibane et al., “Neural unsigned distance fields for implicit function learning,” arXiv:2010.13938, 2020, which is incorporated herein by reference in its entirety for all purposes). In some embodiments, the point-voxel machine learning procedure determines or outputs a feature vector Fk associated with the predetermined spatial resolution from step 120.
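One plausible, deliberately simplified way to realize such a point-voxel procedure (an assumption for illustration, not the exact network of the cited work) is to run a small 3D CNN over the voxel grid and sample the resulting feature volume at each query point, fusing the trilinearly interpolated voxel feature with the raw point coordinates:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointVoxelEncoder(nn.Module):
    """Simplified point-voxel feature extractor. Channel sizes are illustrative."""

    def __init__(self, feat_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, voxels, query_points):
        # voxels: (B, 1, R, R, R) occupancy grid; query_points: (B, N, 3) in [-1, 1]^3
        feat_volume = self.conv(voxels)                            # (B, C, R, R, R)
        grid = query_points.view(query_points.shape[0], 1, 1, -1, 3)
        sampled = F.grid_sample(feat_volume, grid, align_corners=True)
        voxel_feat = sampled.view(feat_volume.shape[0], feat_volume.shape[1], -1)
        voxel_feat = voxel_feat.permute(0, 2, 1)                   # (B, N, C)
        # Fuse the interpolated voxel features with the point coordinates.
        return torch.cat([voxel_feat, query_points], dim=-1)       # (B, N, C + 3)
```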
At step 140, steps 120 and 130 are repeated for a plurality of spatial resolutions. In some embodiments, each repetition of steps 120 and 130 outputs a feature vector Fk, where k ∈ {1, 2, . . . , n} and n is the number of times steps 120 and 130 are repeated. Thus, in some embodiments, a plurality of feature vectors are determined and each feature vector is associated with a spatial resolution. In some embodiments, n is at least 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, or more, at most 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, or 2, or within a range defined by any two of the preceding values. As a first example, in some embodiments, n=3 and feature vectors F1(p), F2(p), and F3(p) are associated with spatial resolutions of 128³, 64³, and 32³, respectively. As a second example, in some embodiments, n=4 and feature vectors F1(p), F2(p), F3(p), and F4(p) are associated with spatial resolutions of 128³, 64³, 32³, and 16³, respectively. Here, p denotes the points from the point cloud data.
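Continuing the same hedged sketch (the helpers voxelize_occupancy and PointVoxelEncoder are the illustrative snippets above, and the stand-in point and query arrays are assumptions, not data from the original text), steps 120 and 130 might be repeated over several resolutions as follows:

```python
import numpy as np
import torch

# Hedged sketch of step 140: repeat voxelization (step 120) and feature extraction
# (step 130) at several resolutions, collecting one feature vector F_k per resolution.
points_unit_cube = np.random.rand(2048, 3)            # stand-in point cloud in [0, 1)^3
query_points = torch.rand(1, 512, 3) * 2.0 - 1.0      # stand-in query points in [-1, 1]^3
encoder = PointVoxelEncoder(feat_dim=32)

resolutions = [128, 64, 32]                            # n = 3, as in the first example above
feature_vectors = []
for res in resolutions:
    occupancy = voxelize_occupancy(points_unit_cube, resolution=res)   # step 120
    voxels = torch.from_numpy(occupancy).unsqueeze(0).unsqueeze(0)     # (1, 1, R, R, R)
    F_k = encoder(voxels, query_points)                                # step 130: (1, N, C + 3)
    feature_vectors.append(F_k)
```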
At step 150, a 3D modeling machine learning procedure is applied to the plurality of feature vectors to determine a 3D model of the object. In some embodiments, the 3D modeling machine learning procedure comprises an NN, a CNN, a 3D CNN, or an MLP. In some embodiments, the 3D modeling machine learning procedure comprises a neural implicit representation procedure. In some embodiments, the plurality of feature vectors are concatenated to form a concatenated feature vector and the 3D modeling machine learning procedure is applied to the concatenated feature vector. As a first example, in some embodiments, n=3 and feature vectors F1(p), F2(p), and F3(p) are associated with spatial resolutions of 128³, 64³, and 32³, respectively. The feature vectors are concatenated to form a concatenated feature vector F(p)=F1(p)⊕F2(p)⊕F3(p), where ⊕ denotes concatenation. The concatenated feature vector is then passed to the 3D modeling machine learning procedure, which outputs a 3D model s=f(F(p)) of the object. As a second example, in some embodiments, n=4 and feature vectors F1(p), F2(p), F3(p), and F4(p) are associated with spatial resolutions of 128³, 64³, 32³, and 16³, respectively. The feature vectors are concatenated to form a concatenated feature vector F(p)=F1(p)⊕F2(p)⊕F3(p)⊕F4(p). The concatenated feature vector is then passed to the 3D modeling machine learning procedure, which outputs a 3D model s=f(F(p)) of the object. In some embodiments, the 3D model of the object comprises a continuous shape representation of the object. In some embodiments, the continuous shape representation of the object comprises a set of occupancies associated with the object, a signed distance function associated with the object, or an unsigned distance function associated with the object.
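Continuing the sketch, the per-resolution feature vectors may then be concatenated along the feature dimension and decoded into one distance value per query point; the decoder below mirrors s=f(F(p)) but its layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn


class DistanceDecoder(nn.Module):
    """Illustrative 3D modeling decoder f mapping the concatenated features F(p)
    to a distance value s per query point. Layer sizes are assumptions."""

    def __init__(self, in_dim, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, feature_vectors):
        # feature_vectors: list of (B, N, C_k) tensors, one per spatial resolution
        F_p = torch.cat(feature_vectors, dim=-1)   # F(p) = F1(p) ⊕ F2(p) ⊕ ... ⊕ Fn(p)
        return self.mlp(F_p).squeeze(-1)           # s = f(F(p)), one value per query point


# Hypothetical usage with the feature_vectors list from the previous sketch:
# decoder = DistanceDecoder(in_dim=sum(fv.shape[-1] for fv in feature_vectors))
# s = decoder(feature_vectors)   # (1, N) predicted distances
```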
The point-voxel machine learning procedure may be trained using a training data set comprising input point cloud data x and corresponding surface mesh data s. Points p are sampled from the point cloud data and their ground truth distance DF(p, s) to the surface is computed. The point-voxel machine learning procedure is then trained using any of the following regression loss functions:
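A hedged reconstruction of these losses, consistent with the definitions that follow (truncated unsigned-distance regression, truncated signed-distance regression, and cross-entropy occupancy classification) but not a verbatim reproduction of the original equations, is:

```latex
% Hedged reconstruction; the exact expressions may differ from the original Equations (1)-(3).
\mathcal{L}_{1} = \sum_{i=1}^{B} \left| \min\!\left(f_{w}(p_{i}), \delta\right)
                 - \min\!\left(\mathrm{DF}(p_{i}, s), \delta\right) \right| \tag{1}

\mathcal{L}_{2} = \sum_{i=1}^{B} \left| \operatorname{clamp}\!\left(f_{w}(p_{i}), -\delta, \delta\right)
                 - \operatorname{clamp}\!\left(\mathrm{DF}(p_{i}, s), -\delta, \delta\right) \right| \tag{2}

\mathcal{L}_{3} = \sum_{i=1}^{B} \operatorname{CE}\!\left(f_{w}(p_{i}),\, o(p_{i}, s)\right) \tag{3}

% In (2), DF(p_i, s) is taken as the signed ground-truth distance, and in (3),
% o(p_i, s) in {0, 1} denotes the ground-truth occupancy label of p_i (assumed notation).
```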
Here, Equation (1) is used for unsigned distance field learning, Equation (2) is used for signed distance field learning, and Equation (3) is used for occupancy learning. CE is the cross-entropy loss, B is the mini-batch size, and fw = f(S(g(p)), A(g(p), p − Pn)) is the point-voxel machine learning procedure, consisting of encoder g, predictor S, and adaptive weighting A. The truncation threshold is given by δ.
Additionally, systems are disclosed that can be used to perform the method 100 of FIG. 1.
The microprocessor subsystem 401 is coupled bi-directionally with memory 404, which can include a first primary storage, typically a random access memory (RAM), and a second primary storage area, typically a read-only memory (ROM). As is well known in the art, primary storage can be used as a general storage area and as scratch-pad memory, and can also be used to store input data and processed data. It can also store programming instructions and data, in the form of data objects and text objects, in addition to other data and instructions for processes operating on the microprocessor subsystem 401. Also as is well known in the art, primary storage typically includes basic operating instructions, program code, data, and objects used by the microprocessor subsystem to perform its functions. Primary storage devices 404 may include any suitable computer-readable storage media, described below, depending on whether, for example, data access needs to be bi-directional or uni-directional. The microprocessor subsystem 401 can also directly and very rapidly retrieve and store frequently needed data in a cache memory (not shown).
A removable mass storage device 405 provides additional data storage capacity for the computer system 400, and is coupled either bi-directionally (read/write) or uni-directionally (read only) to microprocessor subsystem 401. Storage 405 may also include computer-readable media such as magnetic tape, flash memory, signals embodied on a carrier wave, PC-CARDS, portable mass storage devices, holographic storage devices, and other storage devices. A fixed mass storage 409 can also provide additional data storage capacity. The most common example of mass storage 409 is a hard disk drive. Mass storage 405 and 409 generally store additional programming instructions, data, and the like that typically are not in active use by the processing subsystem. It will be appreciated that the information retained within mass storage 405 and 409 may be incorporated, if needed, in standard fashion as part of primary storage 404 (e.g., RAM) as virtual memory.
In addition to providing processing subsystem 401 access to storage subsystems, bus 406 can be used to provide access to other subsystems and devices as well. In the described embodiment, these can include a display monitor 408, a network interface 407, a keyboard 402, and a pointing device 403, as well as an auxiliary input/output device interface, a sound card, speakers, and other subsystems as needed. The pointing device 403 may be a mouse, stylus, track ball, or tablet, and is useful for interacting with a graphical user interface.
The network interface 407 allows the processing subsystem 401 to be coupled to another computer, computer network, or telecommunications network using a network connection as shown. Through the network interface 407, it is contemplated that the processing subsystem 401 might receive information, e.g., data objects or program instructions, from another network, or might output information to another network in the course of performing the above-described method steps. Information, often represented as a sequence of instructions to be executed on a processing subsystem, may be received from and outputted to another network, for example, in the form of a computer data signal embodied in a carrier wave. An interface card or similar device and appropriate software implemented by processing subsystem 401 can be used to connect the computer system 400 to an external network and transfer data according to standard protocols. That is, method embodiments of the present invention may execute solely upon processing subsystem 401, or may be performed across a network such as the Internet, intranet networks, or local area networks, in conjunction with a remote processing subsystem that shares a portion of the processing. Additional mass storage devices (not shown) may also be connected to processing subsystem 401 through network interface 407.
An auxiliary I/O device interface (not shown) can be used in conjunction with computer system 400. The auxiliary I/O device interface can include general and customized interfaces that allow the processing subsystem 401 to send and, more typically, receive data from other devices such as microphones, touch-sensitive displays, transducer card readers, tape readers, voice or handwriting recognizers, biometrics readers, cameras, portable mass storage devices, and other computers.
In addition, embodiments of the present invention further relate to computer storage products with a computer readable medium that contains program code for performing various computer-implemented operations. The computer-readable medium is any data storage device that can store data which can thereafter be read by a computer system. The media and program code may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well known to those of ordinary skill in the computer software arts. Examples of computer-readable media include, but are not limited to, all the media mentioned above: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media such as floptical disks; and specially configured hardware devices such as application-specific integrated circuits (ASICs), programmable logic devices (PLDs), and ROM and RAM devices. The computer-readable medium can also be distributed as a data signal embodied in a carrier wave over a network of coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. Examples of program code include both machine code, as produced, for example, by a compiler, and files containing higher-level code that may be executed using an interpreter. The computer system shown in FIG. 4 is but an example of a computer system suitable for use with the disclosed embodiments; other computer systems having different configurations of subsystems may also be utilized.
The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to precise forms or embodiments disclosed. Modifications and adaptations of the embodiments will be apparent from consideration of the specification and practice of the disclosed embodiments. For example, the described implementations include hardware, but systems and methods consistent with the present disclosure can be implemented with hardware and software. In addition, while certain components have been described as being coupled to one another, such components may be integrated with one another or distributed in any suitable fashion.
Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present disclosure. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as nonexclusive. Further, the steps of the disclosed methods can be modified in any manner, including reordering steps or inserting or deleting steps.
The features and advantages of the disclosure are apparent from the detailed specification, and thus, it is intended that the appended claims cover all systems and methods falling within the true spirit and scope of the disclosure.
The methods described herein were utilized to extract 3D models of a variety of objects contained in the ShapeNet database. The reconstructed 3D models were compared with those generated using a variety of other 3D model reconstruction methods described in previous works.
Embodiment 1. A method for forming a three-dimensional (3D) model of an object, comprising: (a) receiving point cloud data associated with the object; (b) voxelizing the point cloud data based on a predetermined spatial resolution to form a voxelized representation of the point cloud data; (c) applying a point-voxel machine learning procedure to the voxelized representation to determine a feature vector associated with the predetermined spatial resolution; (d) repeating (b) and (c) for a plurality of spatial resolutions to thereby determine a plurality of feature vectors; and (e) applying a 3D modeling machine learning procedure to the plurality of feature vectors to determine the 3D model of the object.
Embodiment 2. The method of Embodiment 1, wherein the point cloud data is obtained using a 3D scanner, tomographic imaging scanner, or light detection and ranging (LIDAR) scanner.
Embodiment 3. The method of Embodiment 1 or 2, wherein (b) comprises voxelizing the point cloud data based on an occupancy procedure or a pointgrid procedure.
Embodiment 4. The method of any one of Embodiments 1-3, wherein the point-voxel machine learning procedure comprises a neural network (NN), a convolutional NN (CNN), a 3D CNN, or a multi-layer perceptron (MLP).
Embodiment 5. The method of any one of Embodiments 1-4, wherein the point-voxel machine learning procedure comprises a neural implicit representation procedure.
Embodiment 6. The method of any one of Embodiments 1-5, wherein the 3D modeling machine learning procedure comprises an NN, a CNN, a 3D CNN, or an MLP.
Embodiment 7. The method of any one of Embodiments 1-6, wherein the 3D modeling machine learning procedure comprises a neural implicit representation procedure.
Embodiment 8. The method of any one of Embodiments 1-7, wherein (e) comprises concatenating the plurality of feature vectors to thereby form a concatenated feature vector and applying the 3D modeling machine learning procedure to the concatenated feature vector.
Embodiment 9. The method of any one of Embodiments 1-8, wherein the 3D model of the object comprises a continuous shape representation of the object.
Embodiment 10. The method of Embodiment 9, wherein the continuous shape representation of the object comprises an unsigned distance function or signed distance function associated with the object.
Embodiment 11. A system for forming a three-dimensional (3D) model of an object, comprising a computing system configured to implement a method comprising: (a) receiving point cloud data associated with the object; (b) voxelizing the point cloud data based on a predetermined spatial resolution to form a voxelized representation of the point cloud data; (c) applying a point-voxel machine learning procedure to the voxelized representation to determine a feature vector associated with the predetermined spatial resolution; (d) repeating (b) and (c) for a plurality of spatial resolutions to thereby determine a plurality of feature vectors; and (e) applying a 3D modeling machine learning procedure to the plurality of feature vectors to determine the 3D model of the object.
Embodiment 12. The system of Embodiment 11, wherein the point cloud data is obtained using a 3D scanner, tomographic imaging scanner, or light detection and ranging (LIDAR) scanner.
Embodiment 13. The system of Embodiment 11 or 12, wherein (b) comprises voxelizing the point cloud data based on an occupancy procedure or a pointgrid procedure.
Embodiment 14. The system of any one of Embodiments 11-13, wherein the point-voxel machine learning procedure comprises a neural network (NN), a convolutional NN (CNN), a 3D CNN, or a multi-layer perceptron (MLP).
Embodiment 15. The system of any one of Embodiments 11-14, wherein the point-voxel machine learning procedure comprises a neural implicit representation procedure.
Embodiment 16. The system of any one of Embodiments 11-15, wherein the 3D modeling machine learning procedure comprises an NN, a CNN, a 3D CNN, or an MLP.
Embodiment 17. The system of any one of Embodiments 11-16, wherein the 3D modeling machine learning procedure comprises a neural implicit representation procedure.
Embodiment 18. The system of any one of Embodiments 11-17, wherein (e) comprises concatenating the plurality of feature vectors to thereby form a concatenated feature vector and applying the 3D modeling machine learning procedure to the concatenated feature vector.
Embodiment 19. The system of any one of Embodiments 11-18, wherein the 3D model of the object comprises a continuous shape representation of the object.
Embodiment 20. The system of Embodiment 19, wherein the continuous shape representation of the object comprises an unsigned distance function or signed distance function associated with the object.
Embodiment 21. A non-transitory machine-readable storage medium configured to implement a method for forming a three-dimensional (3D) model of an object, the method comprising: (a) receiving point cloud data associated with the object; (b) voxelizing the point cloud data based on a predetermined spatial resolution to form a voxelized representation of the point cloud data; (c) applying a point-voxel machine learning procedure to the voxelized representation to determine a feature vector associated with the predetermined spatial resolution; (d) repeating (b) and (c) for a plurality of spatial resolutions to thereby determine a plurality of feature vectors; and (e) applying a 3D modeling machine learning procedure to the plurality of feature vectors to determine the 3D model of the object.
Embodiment 22. The non-transitory machine-readable storage medium of Embodiment 21, wherein the point cloud data is obtained using a 3D scanner, tomographic imaging scanner, or light detection and ranging (LIDAR) scanner.
Embodiment 23. The non-transitory machine-readable storage medium of Embodiment 21 or 22, wherein (b) comprises voxelizing the point cloud data based on an occupancy procedure or a pointgrid procedure.
Embodiment 24. The non-transitory machine-readable storage medium of any one of Embodiments 21-23, wherein the point-voxel machine learning procedure comprises a neural network (NN), a convolutional NN (CNN), a 3D CNN, or a multi-layer perceptron (MLP).
Embodiment 25. The non-transitory machine-readable storage medium of any one of Embodiments 21-24, wherein the point-voxel machine learning procedure comprises a neural implicit representation procedure.
Embodiment 26. The non-transitory machine-readable storage medium of any one of Embodiments 21-25, wherein the 3D modeling machine learning procedure comprises an NN, a CNN, a 3D CNN, or an MLP.
Embodiment 27. The non-transitory machine-readable storage medium of any one of Embodiments 21-26, wherein the 3D modeling machine learning procedure comprises a neural implicit representation procedure.
Embodiment 28. The non-transitory machine-readable storage medium of any one of Embodiments 21-27, wherein (e) comprises concatenating the plurality of feature vectors to thereby form a concatenated feature vector and applying the 3D modeling machine learning procedure to the concatenated feature vector.
Embodiment 29. The non-transitory machine-readable storage medium of any one of Embodiments 21-28, wherein the 3D model of the object comprises a continuous shape representation of the object.
Embodiment 30. The non-transitory machine-readable storage medium of Embodiment 29, wherein the continuous shape representation of the object comprises an unsigned distance function or signed distance function associated with the object.
This application claims priority to U.S. Provisional Application No. 63/424,391, filed on Nov. 10, 2022, entitled “POINT VOXEL FEATURE FUSION FOR IMPLICIT FIELD OF 3D RECONSTRUCTION,” which is incorporated herein by reference in its entirety for all purposes.