The present disclosure generally relates to three-dimensional (3D) computer graphics. More specifically, but not by way of limitation, the present disclosure relates to techniques for efficiently rendering an updated graphical representation of an object based on the surface of another object.
Graphic design software applications are used for graphic illustration, multimedia development, specialized image development, and graphics editing. Such applications utilize either raster or vector graphic editing methods to create, edit, and view digital media (e.g., animations, graphics, images, designs, media objects, etc.). Maps are arguably one of the most fundamental ways to define and operate on surfaces depicted as part of digital media objects with such applications. Although certain existing solutions allow graphic designers to use maps for many core operations carried out with graphic design software, most computational representations of surface maps do not lend themselves to computationally efficient and accurate manipulation and optimization.
For example, consider a function of such software such as surface-to-surface mapping, which enables defining correspondences between surfaces. Such correspondences can in turn be used to perform, as examples, shape analysis, deformations, and the transfer of properties from one surface to another. The target surface for surface-to-surface mapping is typically a three-dimensional (3D) mesh. Such meshes are combinatorial representations, meaning that combinatorial representations of the maps must be used, resulting in a surface-to-surface mapping process that is computationally expensive, or produces only approximate depictions.
Certain aspects and features of the present disclosure relate to neural network based 3D object surface mapping. For example, a computer-implemented method involves generating a surface mapping function for mapping a first surface of a first three-dimensional (3D) object in a 3D space to a second surface of a second 3D object in the 3D space. The surface mapping function is defined by a first representation of the first surface, a second representation of the second surface, and a neural network model configured to map a first two-dimensional (2D) representation to a second 2D representation. The first representation corresponds to a mapping from the first 2D representation of the first surface to the first surface of the first 3D object. The second representation corresponds to a mapping from the second 2D representation of the second surface to the second surface of the second 3D object. Generating the surface mapping function includes adjusting parameters of the neural network model to optimize an objective function. The objective function includes a distortion term defining distortion between the first surface and the second surface mapped through the surface mapping function. The method also involves applying a feature of a first 3D mesh on the first surface to a second 3D mesh on the second surface to produce a modified second surface. The mapping from the first 3D mesh on the first surface to the second 3D mesh on the second surface is determined by the surface mapping function. The method can include rendering the modified second surface.
Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this disclosure, any or all drawings, and each claim.
Features, embodiments, and advantages of the present disclosure are better understood when the following Detailed Description is read with reference to the accompanying drawings, where:
As described above, surface-to-surface mapping features in existing applications, when used with 3D mesh target surfaces, can result in surface-to-surface mappings that are computationally expensive or exhibit reduced accuracy. As an example, consider the problem of a mesh-to-mesh mapping in which a continuous mapping from one surface to another is computed. The software needs to account for the image of each source vertex, which may land on a triangle of the target mesh, and the image of a source edge, which may span several triangles of the target mesh. If distortion is to be minimized, extensive bookkeeping must be carried out, using significant memory. Further, reducing the resulting distortion may require extensive combinatorial optimization of the choice of target triangle for each source vertex, which is computationally expensive.
Embodiments herein produce at least two representations of surfaces, where each surface is from a 3D object depicted using a 3D mesh. One or both of these representations may be a neural network representation. A surface mapping function is generated for mapping one of the surfaces, a source surface of a source 3D object, to a target surface of a target 3D object. The surface mapping function is defined by the representations, as well as by a neural network model configured to map the first representation to the second representation. Parameters of the neural network model are adjusted to optimize an objective function that includes a distortion term. The surface mapping function with the adjusted parameters can then be used to map features of the source 3D mesh to the target 3D mesh to produce a modified target surface.
Certain embodiments provide improvements over existing techniques for generating surface-to-surface mapping for 3D objects. In particular, neural networks are used to approximate the surface map. The use of neural networks eliminates the need of mappings between two 3D meshes, thereby reducing the computational complexity of the mapping algorithm. In addition, the neural networks are differentiable and composable with one another leading to a model for efficiently defining a differentiable composition of surface maps. As such, multiple maps (maps from 2D representation to 3D surface, one 2D representation to another 2D representation) can be composed and optimized for objectives defined over the composition. A surface mapping function can be generated and used in the surface-to-surface mapping, where the surface mapping function is defined by representations of the surfaces, as well as by a neural network model. One or both surface representations can be neural network representations. Alternatively, as an example, a target surface can be represented by a differentiable function. A neural network representation of a surface can optionally be trained using a neural network model, for example, the same neural network model that is used in generating the surface mapping function. A new surface resulting from a surface-to-surface mapping can be efficiently rendered and displayed on a display device.
The following non-limiting example is provided to introduce certain embodiments. In this example, a graphics editing application transfers surface features from a source 3D graphical object to a target 3D graphical object. The graphics editing application identifies the surface of the source 3D graphical object and highlights a selected surface of the target 3D graphical object. The graphics editing application further performs a surface-to-surface mapping of the identified surface of the source 3D graphical object to the selected surface of the target 3D graphical object. Either or both surfaces can be small or extensive, up to and including the entire surface of the relevant 3D object.
Continuing with this example, the graphics editing application produces and stores surface representations in memory and generates the mapping function and the objective function with the distortion term. One or both of the surface representations can be neural network representations. A neural network representation can be produced using points in a 2D representation based on a 3D mesh of an object. The mapping function is defined based on the surface representations and a neural network model. The neural network model can include an input layer including a first pair of nodes representing coordinates in one 2D representation and an output layer including a second pair of nodes representing coordinates in another 2D representation. The resulting mapping function is stored and referenced by the graphics editing application, which uses the mapping function to efficiently map the source surface to the target surface. The mapping function parameters are stored in memory and adjusted, with new values being successively stored until an objective function is optimized to provide the mapping. The resulting surface is stored and the graphics editing application efficiently renders and displays the resulting surface with minimal distortion.
By using neural networks as stored, parametric representations of surfaces, a graphical computing process can access stored models that rely on the differentiable algebra of a surface map. Distortion can be minimized without extensive combinatorial representations to handle triangular meshes. Thus, accurate renderings can be produced quickly with relatively modest computing resources. Mapping one surface onto another can provide a mechanism for efficiently and quickly editing the target surface, for example, by transferring textures or surface features onto the target surface.
Aspects and features herein can treat neural networks as parametric representations of surface maps and surfaces. Neural networks that receive 2D points as input and output points either in 2D or 3D can be used as representations. In some examples, a differentiable function can alternatively be used to represent a target surface.
The graphics editing application 102 also generates a graphics editing interface 130. In some embodiments, the graphics editing application 102 uses inputs related to editing tools 134 received via the graphics editing interface 130 to control one or more operations of the graphics editing application 102. The graphics editing application 102 provides the editing interface 130 for display at a presentation device 108, which can be a local presentation device or a computing device that is remotely accessible over a data network. The graphics editing application includes one or more software modules, for example, a rendering module (not shown) that renders modified surfaces 136 for display in the editing interface 130.
Neural network model h is configured to generate the 2D representation 212 of the surface 220 based on the 2D representation 210 of the surface 204. Given the first surface representation (φ), the second surface representation (ψ), and the neural network model h, the mapping function ƒ can be derived to map one surface (surface 204) to another (surface 220), resulting in a modified second surface 220. Mapping function ƒ can be derived through distortion minimization and may be subject to constraints, for example, constraints to match corresponding portions of the objects 202 and 206 such as feet 224 of the hippopotamus to feet 226 of the cow. Additional or alternative constraints may be provided, for example, for facial features of the animals. These objects, surfaces, and surface representations based on ϕ and ψ will be referenced as an example below in discussing the flowcharts of
ƒ: ℝ2→ℝn
The second representation corresponds to a mapping from the second 2D representation of the second surface to the second surface of the second 3D object 206. In terms of the example of
The surface mapping function is defined by the first representation φ, the second representation ψ, and a neural network model h configured to map the first 2D representation 210 to the second 2D representation 212, for example:
ƒ=ψ∘h∘φ−1.
As referenced herein, a neural network representation of a surface, such as φ or ψ, and the neural network model h are each neural networks. The neural network model is a neural network that serves as an intermediary between a neural network representation of a surface and a representation of another surface, which may be another neural network representation. A neural network representation represents a surface of a 3D object through overfitting. Generating the surface mapping function includes adjusting parameters of the neural network model to optimize an objective function. The objective function includes a distortion term defining distortion between the first surface and the second surface mapped through the surface mapping function.
At block 304 of process 300, the computing device applies one or more features of a first 3D mesh on the first surface to a second 3D mesh on the second surface to produce a modified second surface as determined by the surface mapping function. The 3D meshes may be stored in computing device 101, for example, as 3D meshes 123. In terms of the example of
ϕ: ℝ2→ℝn,
where the output dimension n is two or three. Assuming the map is non-singular, the map's image is a 2-manifold: a surface embedded in 2D or 3D. Neural surface maps can be seen as an alternative method to represent a surface map that provides the advantages of differentiability and the ability to be composed with other neural maps. Neural surface maps enable compositions, for example, φ∘ψ, and the definition of an objective O(φ∘ψ) over the composition, which can be differentiated and optimized via standard deep-learning libraries and optimizers. The operator ∘ represents the composition of one map with the other, for example, taking the output of ψ and plugging it in as the input of φ: φ∘ψ is equivalent to φ(ψ(·)). Any size of architecture and activation function can be used, and the universal approximation theorem ensures that a given network is capable of approximating a given surface function.
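As an illustrative, non-limiting sketch, the following Python code treats small fully connected networks as maps from a 2D domain to 2D or 3D points and composes them. The layer sizes, random weights, and tanh activation are placeholders for this sketch, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    """Random weights for a fully connected network; layer sizes e.g. [2, 64, 64, 3]."""
    return [(rng.normal(0, 1 / np.sqrt(m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass: maps a batch of 2D points to output points (2D or 3D)."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:      # smooth activation keeps the map differentiable
            x = np.tanh(x)
    return x

# phi: a neural surface map Omega -> R^3; h: a neural map Omega -> Omega (2D to 2D)
phi = init_mlp([2, 64, 64, 3], rng)
h   = init_mlp([2, 64, 64, 2], rng)

uv = rng.uniform(-1.0, 1.0, size=(5, 2))   # sample points in the canonical domain
composed = mlp(phi, mlp(h, uv))            # the composition phi ∘ h, itself a surface map
print(composed.shape)                      # (5, 3): 2D samples mapped onto a 3D surface
```

Because each map is an ordinary differentiable function, any objective defined over the composition could be differentiated and minimized with a standard deep-learning library.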
At block 402 of process 400, the computing device generates a first 2D representation based on a first 3D mesh of a first 3D object. The 2D representation can be obtained by computing a parameterization of the 3D mesh into 2D using scalable, locally injective mappings, providing a one-to-one map from 2D to 3D. Its inverse is treated as a ground truth representation of the 3D surface (a map of 2D points to 3D). A neural network is then overfit to that (inverse) map and treated as the neural representation of the surface, which can then be composed with other neural maps. At block 404, the computing device selects a set of points in the first 2D representation. The set of points includes points corresponding to the vertex points in the first 3D mesh. In some examples, the set of points can also include points corresponding to internal points (points falling inside the triangles or polygons) of the first 3D mesh. At block 405, the computing device generates a first neural network representation based on the selected set of points with an input layer including two nodes representing two coordinates of a point in the first 2D representation and an output layer representing three coordinates of a point in 3D space. The neural network is trained. As an example, two such neural network representations into ℝ3 can respectively represent two surfaces, so that the maps can be used to define and optimize a mapping ƒ between the two 3D surfaces. Representing the 3D surfaces using neural network models allows the representations to be associated more closely.
At block 408 of process 400, the computing device defines a distortion term, based on pairwise comparison of the surfaces. The distortion term is based on a Jacobian of ƒ for mappings to provide multiple distortion terms. The Jacobian of ƒ can be derived from the Jacobian of ψ and the inverted Jacobian of φ. At block 410, the neural network model is defined. The neural network model includes an input layer with a first pair of nodes representing coordinates in the first 2D representation and an output layer with at least a second pair of nodes representing coordinates in the second 2D representation.
Still referring to ψh, with φ and ψ shown in
The representation of 3D geometries is, in this example, via an overfitted neural surface map ϕ: Ω→ℝ3 that approximates a given map ƒ. ϕ can be treated as a de-facto representation of the geometry. Optimization of the map with traditional techniques may be non-trivial since it will immediately change the 3D geometry. To overcome this, an intermediate neural network, neural network model h: Ω→Ω, is produced to define a new map, ϕh=ϕ∘h. As long as the process 400 solely optimizes h and ensures h maps onto Ω, the image of ϕh will correspond to the image of ϕ, i.e., will respect the original surface. The distortion of ϕh can be optimized by optimizing h and keeping ϕ fixed, thereby finding a map from Ω to the target surface that is at least a local minimizer of the distortion. The distortion in this example can be measured by:
The distortion is a differentiable property of the map, and hence is readily available, e.g., via automatic differentiation.
Returning to
For overfitting the neural surface maps, Ω=[−1,1]2⊂ℝ2 is the unit square. In this example, the neural maps make use of Ω as a canonical domain. Given any map ƒ: Ω→ℝn, the map can be approximated via a neural surface map ϕ by using black-box methods to train the neural network and overfit it to replicate ƒ. The least-square deviation of ϕ from ƒ, together with the surface normal deviation, can be reduced by minimizing the integrated error:
where nϕp is the estimated normal at p, and nƒp is the ground truth normal. In case ƒ describes a continuous map, for example a piecewise-linear map for mapping triangles to triangles, the objective function can be optimized by approximating the integral in Monte-Carlo fashion, for example, by summing the integrand at random sample points. To use neural surface maps to represent surfaces, the ground truth map ƒ is computed from a UV parameterization of the mesh into 2D, computed via any bijective parameterization algorithm. Examples include Tutte's embedding and scalable, locally injective mappings, by which an injective map of the mesh onto Ω⊂ℝ2 can be achieved. Treating the inverse of this map, which maps back onto the 3D mesh, as the input ƒ, and overfitting ϕ to it by minimizing Equation 1, a neural network representation of the surface can be obtained. More specifically, a mapping into the surface is obtained, where the mapping is endowed with specific UV coordinates, with point ϕ(x,y) having UV coordinates (x,y).
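As a minimal, dependency-free sketch of the overfitting step, the following Python code approximates the integrated least-squares error by Monte-Carlo sampling of Ω and fits a map to replicate a ground-truth ƒ. A linear model on fixed polynomial features stands in for the neural network (an assumption made so the minimizer has a closed form), and the normal-deviation term is omitted; the paraboloid ƒ is an invented example surface:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth map f: Omega -> R^3, standing in for the inverse of a mesh
# parameterization (an illustrative paraboloid patch, not a real mesh).
def f_true(uv):
    u, v = uv[:, 0], uv[:, 1]
    return np.stack([u, v, u**2 + v**2], axis=1)

def features(uv):
    u, v = uv[:, 0], uv[:, 1]
    return np.stack([np.ones_like(u), u, v, u * v, u**2, v**2], axis=1)

# Monte-Carlo approximation of the integrated least-squares error:
# sample random points in Omega = [-1, 1]^2 and minimize the mean squared deviation.
uv = rng.uniform(-1.0, 1.0, size=(2000, 2))
X, Y = features(uv), f_true(uv)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # closed-form minimizer of the sampled objective

# The overfit map now replicates f on held-out samples.
uv_test = rng.uniform(-1.0, 1.0, size=(100, 2))
err = np.max(np.abs(features(uv_test) @ W - f_true(uv_test)))
print(err < 1e-8)   # True: the fitted map reproduces f
```

A real implementation would instead train a neural network on the sampled objective with automatic differentiation, but the Monte-Carlo structure of the loss is the same.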
In order to optimize several energies related to the neural surface maps, for a neural network map ϕ: Ω→ℝn, the Jacobian of ϕ can be denoted by Jpϕ∈ℝn×2, the matrix of partial derivatives at point p∈Ω. The Jacobian quantifies the local deformation at a point. For isometric distortion, letting Mp=JpTJp, the symmetric Dirichlet energy can be quantified as,
Diso=∫Ωtrace(Mp)+trace((Mp+εI)−1) (2)
where I is the identity matrix and ε is a small constant (1/100) that regularizes the inverse. Likewise, a measure of conformal distortion can be defined via,
The integrals can be approximated by randomly sampling the integrand over the domain.
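For example, the symmetric Dirichlet energy of Equation 2 can be estimated by random sampling as follows. The two maps (the identity and an anisotropic stretch) are illustrative stand-ins with analytically known, constant Jacobians; for a neural map the Jacobian at each sample would come from automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(2)
EPS = 1e-2   # the small constant regularizing the inverse (1/100 in the text)

def symmetric_dirichlet(jacobian_at, samples):
    """Monte-Carlo estimate of Equation 2: mean of trace(M) + trace((M + eps*I)^-1)."""
    total = 0.0
    for p in samples:
        J = jacobian_at(p)
        M = J.T @ J
        total += np.trace(M) + np.trace(np.linalg.inv(M + EPS * np.eye(2)))
    return total / len(samples)

samples = rng.uniform(-1.0, 1.0, size=(500, 2))

# Constant Jacobians of two illustrative maps.
identity = lambda p: np.eye(2)
stretch  = lambda p: np.diag([3.0, 1.0])

e_id = symmetric_dirichlet(identity, samples)
e_st = symmetric_dirichlet(stretch, samples)
print(e_id < e_st)   # True: the isometric (identity) map has lower distortion energy
```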
Even though ƒ itself is not tangible for optimization, as it is implicitly defined by the neural network model h, the differential quantity from ƒ used in this example to compute the distortion is the Jacobian of ƒ, denoted Jqƒ at point q=ϕ(p). Using differential calculus, Jqƒ can be derived to be:
Jqƒ=Jp(ψ∘h)(Jpϕ)−1,  (4)
which is composed of the Jacobian of ψ∘h and the inverted Jacobian of ϕ at point p, both readily determinable. Hence, to optimize the distortion of ƒ, Equation 4 can be used as the Jacobian to define M, which can be denoted as D(ƒ).
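The chain-rule construction of the Jacobian of ƒ can be checked numerically. The sketch below uses closed-form stand-ins for the neural maps (hypothetical choices for illustration); a planar φ keeps its Jacobian square and invertible, whereas for a 3×2 surface Jacobian a pseudoinverse in the tangent plane would take the place of the inverse:

```python
import numpy as np

def num_jacobian(func, x, eps=1e-6):
    """Central finite-difference Jacobian of func at x (rows: outputs, cols: inputs)."""
    y0 = func(x)
    J = np.zeros((len(y0), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (func(x + dx) - func(x - dx)) / (2 * eps)
    return J

# Illustrative closed-form stand-ins for the neural maps (chosen invertible on purpose).
A = np.array([[2.0, 1.0], [0.0, 1.0]])
phi     = lambda p: A @ p                                   # phi: Omega -> (planar) surface 1
phi_inv = lambda x: np.linalg.solve(A, x)
h       = lambda p: np.array([p[0] + 0.1 * p[1]**2, p[1]])  # h: Omega -> Omega
psi     = lambda q: np.array([q[0], q[1], q[0] * q[1]])     # psi: Omega -> surface 2 in R^3

f = lambda x: psi(h(phi_inv(x)))                            # f = psi ∘ h ∘ phi^-1

p = np.array([0.3, -0.2])
J_f_direct = num_jacobian(f, phi(p))                        # Jacobian of f at q = phi(p)
J_f_chain  = num_jacobian(lambda t: psi(h(t)), p) @ np.linalg.inv(A)
print(np.allclose(J_f_direct, J_f_chain, atol=1e-5))        # True: the two agree
```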
In order for h to well-define a surface map, it needs to map bijectively (1-to-1, and onto) the source domain of ϕ, which is Ω. This can be assured by ensuring that h has a positive-determinant Jacobian everywhere, and maps to the target surface boundary injectively. h can be optimized to map the boundary onto itself, via the energy,
B(h)=λB∫∂Ωσ(h(p)). (5)
where σ is the squared signed distance function to the boundary of Ω. Note that the boundary map is free to slide along the boundary of Ω during optimization, enabling the boundary map to change. This is true for all points on the boundary except those mapped to the four corners, which are fixed in place and serve as keypoint constraints between the map models. h is also optimized to encourage its Jacobian's determinant to be positive, via,
G=λinv∫max(−sign(|Jh|)exp(−|Jh|),0). (6)
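The integrands of these two terms can be sketched as follows. Here |Jh| is read as the determinant of the Jacobian of h, and the squared distance to the boundary of Ω=[−1,1]2 is computed in the Chebyshev norm, which is an illustrative choice for the sketch rather than a prescribed one:

```python
import numpy as np

def det_penalty(det):
    """Integrand of Equation 6: zero when det(J_h) > 0, growing as it turns negative."""
    return max(-np.sign(det) * np.exp(-det), 0.0)

def sigma(p):
    """Squared (signed) distance to the boundary of Omega = [-1,1]^2, Chebyshev-norm sketch."""
    return (max(abs(p[0]), abs(p[1])) - 1.0) ** 2

# Orientation-preserving Jacobians incur no penalty; orientation-reversing ones do.
print(det_penalty(0.5))                               # 0.0
print(det_penalty(-0.5) > det_penalty(-0.1) > 0.0)    # True: worse flips cost more

# A boundary point of Omega contributes nothing to the boundary energy of Equation 5.
print(sigma([1.0, 0.3]))                              # 0.0
```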
Optimization can be subject to keypoint constraints. When corresponding keypoints on the two surfaces are known, it may be desirable to require that the mapping function ƒ maps those points to one another, for example, the feet 224 of the hippopotamus and the feet 226 of the cow. In a preprocess before optimization, the system can access or determine the keypoints' pre-images in Ω to obtain a set of points P such that ϕ(Pi) maps to the i-th keypoint. Likewise, the keypoints on the second surface and their pre-images Q under ψ can be obtained. If ƒ is required to map these keypoints to one another between the two surfaces, requiring h(Pi)=Qi can guarantee that the induced function ƒ associates the points correctly. This equality can be optimized by reducing its least-squares error:
To compute the surface-to-surface map, distortion of ƒ can be optimized with regard to h, while ensuring that h respects the mapping constraints, as given by:
The above yields a model h that maps onto the unit square, and represents a distortion-minimizing surface mapping function ƒ that maps the given sets of corresponding keypoints correctly.
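As a sketch of the keypoint term alone, the following Python code evaluates the least-squares constraint energy for hypothetical pre-image pairs P and Q (the point values are invented for illustration). In the full objective this term would be summed with the distortion, boundary, and determinant terms, all as functions of h:

```python
import numpy as np

def keypoint_energy(h, P, Q):
    """Least-squares keypoint term: sum_i ||h(P_i) - Q_i||^2."""
    return sum(np.sum((h(p) - q) ** 2) for p, q in zip(P, Q))

# Hypothetical pre-images of corresponding keypoints under phi and psi.
P = [np.array([0.25, 0.5]), np.array([-0.5, 0.125])]
Q = [np.array([0.5, 0.25]), np.array([-0.25, -0.125])]

h_id = lambda p: p                              # identity map: misses the targets
h_ok = lambda p: p + np.array([0.25, -0.25])    # shifts each P_i exactly onto Q_i

print(keypoint_energy(h_ok, P, Q))              # 0.0: constraints satisfied
print(keypoint_energy(h_id, P, Q) > 0.0)        # True: unsatisfied constraints are penalized
```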
At block 502 of process 500, the computing device generates a first 2D representation based on a first 3D mesh of a first 3D object. The representation can be obtained as previously described. At block 504, the computing device selects a set of points in the first 2D representation. The set of points includes points corresponding to the vertex points in the first 3D mesh. In some examples, the set of points can also include points corresponding to internal points (points falling inside the triangles or polygons) of the first 3D mesh. At block 505, the computing device generates a first neural network representation based on the selected set of points with an input layer including two nodes representing two coordinates of a point in the first 2D representation and an output layer representing three coordinates of a point in 3D space. The neural network is trained. At block 506, the computing device defines a differentiable function to represent a second surface of a second 3D object in 3D space, wherein the differentiable function maps a second 2D representation to the second 3D surface.
At block 508 of process 500, the computing device defines a distortion term. The distortion term is based on a Jacobian for mappings to provide multiple distortion terms as previously described. At block 510, the neural network model is defined. The neural network model includes an input layer with a first pair of nodes representing coordinates in the first 2D representation and an output layer with at least a second pair of nodes representing coordinates in the second 2D representation.
Still referring to
At block 516 of process 500, the computing device applies one or more features of a first 3D mesh on the first surface to a second 3D mesh on the second surface to produce a modified second surface as previously described. This modified second surface can be rendered at block 518 for display in editing interface 130 on presentation device 108.
Either process 400 or process 500 can be extended from a pair of surfaces to a collection of surfaces represented respectively via neural network maps ϕ1, ϕ2, . . . , ϕk. For example, in order to extend process 400, a map from surface i to surface j can be defined via Fi→j=ϕjh∘(ϕih)−1, where each ϕih=ϕi∘hi maps the canonical domain onto the i-th surface. This definition facilitates extraction of a set of mutually consistent maps while additionally optimizing for all pairs of surface-to-surface maps. Achieving similar qualities via traditional processes is significantly challenging, and makes it difficult to optimize for distortion minimization over the entire collection of surfaces without using significant computing resources.
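The mutual consistency of such collection maps can be illustrated numerically. In this sketch, invertible linear maps stand in for the composed networks (an assumption chosen so the inverse is exact); because every pairwise map routes through the shared canonical domain, chaining maps through an intermediate surface agrees with the direct map:

```python
import numpy as np

# Invertible linear stand-ins for the composed maps phi_i ∘ h_i (one per surface).
maps = [
    np.array([[2.0, 0.5], [0.3, 1.5]]),
    np.array([[1.0, -0.2], [0.4, 2.0]]),
    np.array([[3.0, 1.0], [0.0, 1.0]]),
]

def F(i, j, x):
    """Map a point from surface i to surface j through the shared canonical domain."""
    return maps[j] @ np.linalg.solve(maps[i], x)   # phi_j h ∘ (phi_i h)^-1

x = np.array([0.4, -0.7])
direct  = F(0, 2, x)
chained = F(1, 2, F(0, 1, x))
print(np.allclose(direct, chained))   # True: cycle consistency holds by construction
```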
Still referring to
The system 700 of
Staying with
In addition to the surface-to-surface mapping described herein, neural network representations of surfaces and mappings can be used in various other mapping scenarios. Neural network representations as described herein can capture even very detailed features of the original shape with high fidelity. Neural network mapping as described herein can also be used for, as examples, surface parameterization, composition with analytical maps, cycle-consistent mapping for collections of surfaces, and baseline comparisons.
In an example implementation, neural network representations and/or neural network models as described herein include ten-layer, residual, fully connected networks, with 256 units per layer. Initial meshes can be uniformly sampled with 500,000 points. Since the networks are fully optimized, they can be trained until the gradient's norm drops below a threshold of 0.1. Optimization can be initialized with a learning rate of 10−4, a momentum of 0.9, and a step size modulated with stochastic gradient descent with warm restarts. Surface maps can include four-layer, fully connected networks of 128 hidden units.
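A forward pass matching the stated shape (ten residual, fully connected layers of 256 units each, mapping 2D points to 3D points) can be sketched as follows. The weight initialization and tanh activation are assumptions made for the sketch, not choices stated above:

```python
import numpy as np

rng = np.random.default_rng(4)
WIDTH, DEPTH = 256, 10

def init(rng, d_in=2, d_out=3):
    """Weights for a ten-layer residual fully connected network, 256 units per layer."""
    scale = lambda m: 1 / np.sqrt(m)
    return {
        "in":  (rng.normal(0, scale(d_in), (d_in, WIDTH)), np.zeros(WIDTH)),
        "hid": [(rng.normal(0, scale(WIDTH), (WIDTH, WIDTH)), np.zeros(WIDTH))
                for _ in range(DEPTH)],
        "out": (rng.normal(0, scale(WIDTH), (WIDTH, d_out)), np.zeros(d_out)),
    }

def forward(params, x, act=np.tanh):   # activation choice is illustrative
    W, b = params["in"]
    x = act(x @ W + b)
    for W, b in params["hid"]:
        x = x + act(x @ W + b)         # residual connection around each hidden layer
    W, b = params["out"]
    return x @ W + b

net = init(rng)
uv = rng.uniform(-1.0, 1.0, size=(8, 2))   # sampled 2D points in the canonical domain
print(forward(net, uv).shape)              # (8, 3)
```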
Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.
Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provide a result conditioned on one or more inputs. Suitable computing devices include multi-purpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more implementations of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied—for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
The use of “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it should be understood that the present disclosure has been presented for purposes of example rather than limitation, and does not preclude inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art.
Number | Name | Date | Kind |
---|---|---|---|
20080106547 | Kataoka | May 2008 | A1 |
20090213131 | DeRose | Aug 2009 | A1 |
20100053172 | DeRose | Mar 2010 | A1 |
20190087957 | Burris | Mar 2019 | A1 |
20190336220 | Hladio | Nov 2019 | A1 |
20200320386 | Filippov | Oct 2020 | A1 |
20210350620 | Bronstein | Nov 2021 | A1 |
20220020195 | Kuta | Jan 2022 | A1 |
20220027720 | Lysenkov | Jan 2022 | A1 |
20220122250 | Besson | Apr 2022 | A1 |
20230044644 | Elbaz | Feb 2023 | A1 |
20230132479 | Karanam | May 2023 | A1 |
Entry |
---|
Groueix, Thibault, et al. “3d-coded: 3d correspondences by deep deformation.” Proceedings of the european conference on computer vision (ECCV). 2018. |
Achlioptas et al., Learning Representations and Generative Models for 3D Point Clouds, Cornell University, Computer Science, Jun. 12, 2018, 18 pages. |
Aigerman et al., Hyperbolic Orbifold Tutte Embeddings, ACM Transactions on Graphics, vol. 35, No. 6, Nov. 2016, pp. 1-14. |
Aigerman et al., Lifted Bijections for Low Distortion Surface Mappings, ACM Transactions on Graphics, vol. 33, No. 4, Jul. 2014, pp. 1-12. |
Aigerman et al., Orbifold Tutte Embeddings, ACM Transactions on Graphics, vol. 34, No. 6, Nov. 2015, pp. 1-12. |
Aigerman et al., Seamless Surface Mappings, ACM Transactions on Graphics, vol. 34, No. 4, Aug. 2015, pp. 1-13. |
Atzmon et al., SAL: Sign Agnostic Learning of Shapes from Raw Data, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 2565-2574. |
Atzmon et al., SALD: Sign Agnostic Learning with Derivatives, arXiv preprint arXiv:2006.05400, Available Online at: https://arxiv.org/pdf/2006.05400.pdf, 2020, 14 pages.
Bednarik et al., Shape Reconstruction by Learning Differentiable Surface Representations, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 4716-4725.
Ben-Hamu et al., Multi-chart Generative Surface Modeling, ACM Transactions on Graphics (TOG), vol. 37, No. 6, Nov. 2018, pp. 215:1-215:15.
Bommes et al., Quad-mesh Generation and Processing: A Survey, Computer Graphics Forum, vol. 32, No. 6, Sep. 2013, pp. 51-76.
Bradley et al., Markerless Garment Capture, ACM Transactions on Graphics, vol. 27, No. 3, Aug. 2008, pp. 1-9.
Brock et al., Generative and Discriminative Voxel Modeling with Convolutional Neural Networks, CoRR, abs/1608.04236, 2016, 9 pages.
Cybenko, Approximation by Superpositions of a Sigmoidal Function, Mathematics of Control, Signals, and Systems, vol. 2, No. 4, 1989, pp. 303-314.
Dai et al., Scan2Mesh: From Unstructured Range Scans to 3D Meshes, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, 10 pages.
Dai et al., Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis, Proceedings of Computer Vision and Pattern Recognition (CVPR), 2017, 14 pages.
Davies et al., Overfit Neural Networks as a Compact Shape Representation, arXiv:2009.09808v2, 2020, pp. 1-9.
Deprelle et al., Learning Elementary Structures for 3D Shape Generation and Matching, arXiv preprint arXiv:1908.04725, 2019, 11 pages.
Donati et al., Deep Geometric Functional Maps: Robust Feature Learning for Shape Correspondence, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 8592-8601.
Floater, One-to-One Piecewise Linear Mappings Over Triangulations, Mathematics of Computation, vol. 72, No. 242, Oct. 17, 2002, pp. 685-696.
Floater et al., Surface Parameterization: A Tutorial and Survey, Advances in Multiresolution for Geometric Modelling, 2005, pp. 157-186.
Girdhar et al., Learning a Predictable and Generative Vector Representation for Objects, Computer Vision and Pattern Recognition (cs.CV), ECCV, Available Online at: https://arxiv.org/abs/1603.08637, Aug. 31, 2016, 26 pages.
Gortler, Discrete One-Forms on Meshes and Applications to 3D Mesh Parameterization, Computer Aided Geometric Design, vol. 23, No. 2, Feb. 2006, pp. 83-112.
Gropp et al., Implicit Geometric Regularization for Learning Shapes, arXiv preprint arXiv:2002.10099, 2020, 14 pages.
Groueix et al., A Papier-Mache Approach to Learning 3D Surface Generation, Cornell University, Computer Science; Computer Vision and Pattern Recognition, Accessed from Internet on Mar. 18, 2020, 16 pages.
Ha et al., HyperNetworks, Available Online at: https://arxiv.org/abs/1609.09106, Dec. 1, 2016, 29 pages.
Haim et al., Surface Networks via General Covers, Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 632-641.
Hanocka et al., MeshCNN: A Network with an Edge, ACM Transactions on Graphics (TOG), vol. 38, No. 4, Jul. 2019, pp. 1-12.
Huang et al., Consistent Shape Maps via Semidefinite Programming, Computer Graphics Forum, vol. 32, Oct. 15, 2013, pp. 177-186.
Kazhdan et al., Can Mean-Curvature Flow Be Modified to Be Non-Singular?, arXiv:1203.6819, Available Online at: https://arxiv.org/pdf/1203.6819.pdf, 2012, 9 pages.
Kraevoy et al., Cross-Parameterization and Compatible Remeshing of 3D Models, ACM Transactions on Graphics, vol. 23, No. 3, Aug. 2004, pp. 861-869.
Lee et al., Multiresolution Mesh Morphing, Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, Aug. 1999, pp. 343-350.
Levy et al., Least Squares Conformal Maps for Automatic Texture Atlas Generation, ACM Transactions on Graphics, vol. 21, No. 3, Jul. 2002, pp. 362-371.
Lipman, Bounded Distortion Mapping Spaces for Triangular Meshes, ACM Transactions on Graphics, vol. 31, No. 4, Jul. 2012, 13 pages.
Litany et al., Deep Functional Maps: Structured Prediction for Dense Shape Correspondence, Proceedings of the IEEE International Conference on Computer Vision, 2017, 9 pages.
Littwin et al., Deep Meta Functionals for Shape Representation, Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1824-1833.
Liu et al., Interactive 3D Modeling with a Generative Adversarial Network, International Conference on 3D Vision (3DV), 2017, 9 pages.
Liu et al., Neural Subdivision, arXiv preprint arXiv:2005.01819, 2020, 16 pages.
Loshchilov et al., SGDR: Stochastic Gradient Descent with Warm Restarts, arXiv preprint arXiv:1608.03983, 2017, 16 pages.
Maron et al., Convolutional Neural Networks on Surfaces via Seamless Toric Covers, ACM Transactions on Graphics, vol. 36, No. 4, Jul. 2017, pp. 71:1-71:10.
Mildenhall et al., NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, European Conference on Computer Vision, Aug. 3, 2020, pp. 1-25.
Myles et al., Global Parametrization by Incremental Flattening, ACM Transactions on Graphics (TOG), vol. 31, No. 4, Jul. 2012, pp. 1-11.
Nguyen et al., An Optimization Approach to Improving Collections of Shape Maps, Computer Graphics Forum, vol. 30, No. 5, Aug. 2011, pp. 1481-1491.
Ovsjanikov et al., Functional Maps: A Flexible Representation of Maps Between Shapes, ACM Transactions on Graphics (TOG), vol. 31, No. 4, Jul. 2012, pp. 1-11.
Park et al., DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, 19 pages.
Pinkall et al., Computing Discrete Minimal Surfaces and Their Conjugates, Experimental Mathematics, vol. 2, No. 1, 1993, pp. 15-36.
Poursaeed et al., Coupling Explicit and Implicit Surface Representations for Generative 3D Modeling, European Conference on Computer Vision (ECCV), Jul. 20, 2020, 16 pages.
Rabinovich et al., Scalable Locally Injective Mappings, ACM Transactions on Graphics (TOG), vol. 36, No. 2, Apr. 2017, 16 pages.
Schreiner et al., Inter-Surface Mapping, ACM Transactions on Graphics, vol. 23, No. 3, Aug. 2004, pp. 870-877.
Sheffer et al., Mesh Parameterization Methods and Their Applications, Foundations and Trends® in Computer Graphics and Vision, vol. 2, No. 2, 2006, pp. 105-171.
Sinha et al., Deep Learning 3D Shape Surfaces Using Geometry Images, European Conference on Computer Vision, 2016, pp. 223-240.
Sitzmann et al., Implicit Neural Representations with Periodic Activation Functions, arXiv:2006.09661, Available Online at: https://arxiv.org/pdf/2006.09661.pdf, 2020, 35 pages.
Sorkine et al., As-Rigid-As-Possible Surface Modeling, SGP '07: Proceedings of the Fifth Eurographics Symposium on Geometry Processing, vol. 4, Jul. 2007, pp. 109-116.
Fan et al., A Point Set Generation Network for 3D Object Reconstruction from a Single Image, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, 12 pages.
Tutte, How to Draw a Graph, Proceedings of the London Mathematical Society, vol. 3, No. 1, May 22, 1962, pp. 743-767.
Vaswani et al., Attention Is All You Need, 31st Conference on Neural Information Processing Systems, Available Online at: https://arxiv.org/pdf/1706.03762.pdf, Dec. 6, 2017, pp. 1-15.
Weber et al., Locally Injective Parametrization with Arbitrary Fixed Boundaries, ACM Transactions on Graphics (TOG), vol. 33, No. 4, Jul. 2014, pp. 1-12.
Williams et al., Deep Geometric Prior for Surface Reconstruction, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 10130-10139.
Yang et al., FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 206-215.
Prior Publication Data

Number | Date | Country
---|---|---
20230169714 A1 | Jun. 2023 | US