METHOD FOR MODIFYING A 3D MODEL BY USING A PARTIAL SKETCH IN AN AR/VR ENVIRONMENT

Information

  • Patent Application
  • 20250078435
  • Publication Number
    20250078435
  • Date Filed
    September 05, 2024
  • Date Published
    March 06, 2025
Abstract
A computer-implemented method for designing a 3D model in an AR/VR environment including obtaining a 3D model in a 3D scene, the 3D model including at least one extruded section which results from the extrusion of a planar section, said extruded section being defined by a set of parameters, receiving a 3D user sketch in the 3D scene, at each iteration of a plurality of iterations: modifying at least one of said parameters, thereby obtaining a modified 3D model, performing a discretization of the modified 3D model, thereby obtaining a 3D point cloud, computing an energy which comprises a first term which penalizes an inconsistency between the modified 3D model and the initial 3D model, and a second term which penalizes a mismatch between the 3D point cloud and the 3D user sketch, said parameters being modified so as to minimize said energy, and outputting the modified 3D model.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority under 35 U.S.C. § 119 or 365 to European Application No. 23306472.4, filed on Sep. 5, 2023. The entire contents of the above application are incorporated herein by reference.


FIELD

The disclosure relates to the field of computer programs and systems, and more specifically to a computer-implemented method for modifying a three-dimensional (3D) modeled object in a 3D scene of an AR/VR (Augmented Reality/Virtual Reality) environment by using a 3D user sketch. In particular, the disclosure belongs to the sketching field. The present embodiments could be used in any three-dimensional-based CAD software.


BACKGROUND

2D/3D Sketching and 3D modeling are two major steps of industrial design. Sketching is typically done first, as it allows designers to express their vision quickly and approximately. Design sketches are then converted into 3D models for downstream engineering and manufacturing, using CAD tools that offer high precision and editability.


Existing software proposes the creation of 3D models from hand-drawn sketches. The user draws in 2D, which is natural for the user, instead of having to sculpt 3D shapes. New immersive (AR/VR) systems have demonstrated that directly sketching in 3D space is another very expressive and natural interaction paradigm for 3D model creation.


However, the design intention of the user is not entirely fulfilled, and it may be useful to edit the 3D model, so as to refine it.


In “Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches” (Guillard et al., ICCV 2021), an encoder/decoder architecture, based on a deep learning model, is trained to regress surface meshes from synthetic sketches. The user draws an initial sketch to obtain its 3D reconstruction. Then, he can manipulate the object in 3D and draw one or more desired modifications, with a complete or a partial sketch. 3D surfaces are then optimized to match each constraint, by solving an optimization problem of minimizing a 2D Chamfer Distance, which is based on an energy minimization strategy.


For example, in figure 8 of the article, the user draws a car as an input sketch, from a three-quarter view. A 3D model of a car is reconstructed from that view, using the same camera parameters, and thus rendered in the same image plane. Then, the user changes the viewpoint so as to have the car in a side view, from which he can shorten the length of the trunk and move the rear wheels forward.


Sketch2Mesh is only intended for 2D sketches; thus, it is not transposable to an AR/VR environment. Besides, in Sketch2Mesh, the user can only edit the external silhouette of the mesh and not sharp visible inner edges. Indeed, in Sketch2Mesh, the minimization of the 2D Chamfer Distance comprises finding the 3D mesh points that project to the contour of the foreground image, then minimizing the Chamfer distance between the contour and the sketch. Thus, the user cannot modify internal details of the 3D model, such as the windows or the external rear glass (cf. example in figure 8 of the article).


Besides, editing a mesh has too many degrees of freedom (usually thousands of vertices to optimize). In addition to being computationally expensive, it may result in a non-plausible mesh, or in one with many defects in the edited area. Thus, plausibility constraints are added to the optimization algorithm in order to avoid those issues, which adds more processing to the optimization algorithm.


Therefore, there is a need for providing a computer-implemented method for designing a 3D model in an AR/VR environment, which enables editing of the inner edges of a 3D model, and which does not directly edit the mesh.


SUMMARY

An object of the present disclosure is a computer-implemented method for designing a 3D model in an AR/VR environment, which comprises the steps of:

    • a) Providing a 3D model in a 3D scene, the 3D model comprising at least one extruded section which results from the extrusion of a planar section, said extruded section being defined by a set of parameters;
    • b) Receiving a 3D user sketch in the 3D scene;
    • c) At each iteration of a plurality of iterations:
      • c1) Modifying at least one of said parameters, thereby obtaining a modified 3D model;
      • c2) Performing a discretization of the modified 3D model, thereby obtaining a 3D point cloud;
      • c3) Computing an energy which comprises a first term which penalizes an inconsistency between the modified 3D model and the initial 3D model, and a second term which penalizes a mismatch between the 3D point cloud and the 3D user sketch;
    • said parameters being modified so as to minimize said energy;
    • d) outputting the modified 3D model.





BRIEF DESCRIPTION OF THE DRAWINGS

Additional features and advantages of the present embodiments will become apparent from the subsequent description, taken in conjunction with the accompanying drawings:



FIG. 1 illustrates a flowchart of the invented method.



FIGS. 2 and 3 illustrate two examples of a 3D primitive inference from a user sketch.



FIGS. 4, 5 and 6 illustrate several examples of editing a 3D model by using the invented method.



FIG. 7 illustrates the vertices and the extrusion vector of a 3D primitive.



FIGS. 8, 9 and 10 illustrate different steps of discretization of the modified 3D model.



FIGS. 11, 12 and 13 illustrate the computing of the custom Chamfer energy.



FIG. 14 illustrates the computing of the symmetry energy.



FIG. 15 illustrates the invented method from different viewpoints.



FIG. 16 illustrates an example of providing an initial 3D model in an AR/VR environment.



FIG. 17 illustrates an embodiment of the invention for editing surfaces of revolution.



FIG. 18 illustrates a computer for carrying out the invented method.



FIG. 19 illustrates a block diagram of a computer system suitable for carrying out a method.





DETAILED DESCRIPTION

Referring to the flowchart of FIG. 1, the invented method comprises a first step a) of providing an initial 3D model 1 comprising at least one extruded section 2, in a 3D scene. Examples of the initial 3D model 1 are provided in FIGS. 2 and 3.


Such an initial 3D model 1 may be inferred by using the Convolutional Neural Network (CNN) encoder which is disclosed in the patent application EP3958162A1. The disclosed encoder uses partial user sketches 18 and the learned patterns to encode the 2D sketch into data defining a 3D model.


The Convolutional Neural Network (CNN) encoder which is disclosed in the patent application EP3958162A1 infers a 3D primitive in real time; thus, the user receives real-time feedback on the 3D primitive. Thus, the design process is not interrupted, and the user can continuously sketch while being guided by the proposed inference.


For example, in FIG. 2, the user sketches a rectangle, topped by a half-circle (left part of the figure). The encoder infers a half-cylinder as a 3D primitive. In FIG. 3, the user sketches a rectangle, and a stroke from an upper vertex of the rectangle. The encoder infers a parallelepiped as a 3D primitive.


To describe a 3D primitive, the following representation is used in patent application EP3958162A1:

    • A list of 3D points pi of the planar section (typically the vertices of the planar section);
    • A list of line types li in {0, 1} describing whether two consecutive points of the section are connected with a segment or an arc;
    • A 3D vector h representing the direction and length of extrusion.


It can be noted that it is not essential for the 3D model to be inferred by using the aforementioned encoder. The 3D model may simply be available in a CAD software, or it can be obtained from any other method such as a download from the Internet. The 3D model parameterization of such a 3D model may be easily and automatically converted to the aforementioned parameterization format (list of 3D points, list of line type, 3D vector) to represent a CAD model.
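For illustration only, the parameterization recalled above (list of 3D points, list of line types, extrusion vector) can be held in a small data structure such as the following Python sketch; the class name ExtrudedPrimitive and its fields are hypothetical and are not taken from the patent or from EP3958162A1:

    from dataclasses import dataclass
    from typing import List
    import numpy as np

    # Hypothetical container for the extruded-primitive parameterization described above:
    # the 3D points of the planar section, the line type between consecutive points
    # (0 = straight segment, 1 = arc), and the 3D extrusion vector h.
    @dataclass
    class ExtrudedPrimitive:
        points: np.ndarray      # shape (nbSides, 3): vertices p_i of the planar section
        line_types: List[int]   # length nbSides: 0 (segment) or 1 (arc) between p_i and p_{i+1}
        h: np.ndarray           # shape (3,): extrusion direction and length

    # Example: a unit square section extruded by one unit along +z.
    square = ExtrudedPrimitive(
        points=np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float),
        line_types=[0, 0, 0, 0],
        h=np.array([0.0, 0.0, 1.0]),
    )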


With the invented method, the parameter corresponding to the line type does not change (the connection between two vertices will always be of the same type, the invented method only impacts the position of the vertices and the extrusion vector).


The method comprises a second step b) which consists in receiving a 3D user sketch 3 in the 3D scene. Exemplary user sketches 3 are illustrated on the left parts of FIGS. 4-6.


The method can be implemented in any environment that allows 3D drawing in space, for example by means of a virtual reality headset and a wireless handheld controller of type “HTC Vive” ™. Alternatively, the 3D user sketch may be provided by any 3D input device, such as a 3D AR pen, for example the one disclosed in “Solpen: An Accurate 6-DOF Positioning Tool for Vision-Guided Robotics” (Trung-Son Le et al.).


The initial 3D model 1 is rendered in 3D, by means of a 3D display device such as a virtual reality headset.


By using a wireless handheld controller, the user may press on one of the buttons while moving the wireless handheld controller, in order to input a user sketch in the 3D scene.


In the example of FIG. 4, the user sketches a stroke 3.


As a result, after several iterations (step c) of the method), the modified 3D model 4 is displayed (step d) of the method), as illustrated on the right part of FIG. 4. On FIG. 4, the modified parallelepiped is larger than the initial parallelepiped. The modified 3D model may be displayed continuously after each iteration, or at the end of all the iterations.


Another example is illustrated on FIG. 5. The initial 3D model 1 is a revolution cylinder, which is characterized by a basis and a height. The user sketch 3 is an elongated circle next to the basis of the cylinder. In reaction to the user sketch, after several iterations (step c) of the method), the modified 3D model 4 is displayed (step d) of the method), as illustrated on the right part of FIG. 5.


Another example is illustrated on FIG. 6. The initial 3D model 1 is a revolution cylinder. In reaction to a user sketch 3, the height of the cylinder is shortened, and fits on the user sketch.


It can be seen, on the examples of FIGS. 4-6, that the modified 3D model 4 matches with the user sketch 3, and the consistency between the modified 3D model 4 and the initial 3D model 1 is maintained.


An idea on which an embodiment relies is computing and optimizing parameters which define the 3D model. In one embodiment, the initial 3D model is defined by a linear extruded section, which can be expressed in the 3D scene by means of parameters, namely the position of 3D points pi of the section in the 3D scene and the extrusion vector h, expressed in the world coordinate space Rw.


Let p̂i denote the points of the edited primitive (modified 3D model) and ĥ the direction of extrusion of the edited primitive, expressed in the world coordinate space Rw.



FIG. 7 illustrates an exemplary section of a parallelepiped. The section is made of 3D points p1, p2, p3, p4, which are the vertices of the section.


In the present application, and for the sake of clarity and conciseness, line types (i.e. segment or arc) are not optimized, because they are discrete parameters: they can either be a straight segment or an arc. During the optimization process, a straight segment remains a straight segment, and an arc remains an arc. However, the optimization of the line types could be implemented by using a curve parameterization, or continuous parameters for the line type (a parameter in [0, 1] with a linear transformation from segment to half-circle arc).


A straightforward possibility would consist in directly optimizing the position of 3D points pi and the extrusion vector h, thus, having three degrees of freedom for each point and for the extrusion vector. However, it may lead to non-plausible results (the edited 3D points are not lying on a common plane), and it would make the optimization process harder to converge to the optimal solution due to a complex optimization space.


The disclosure advantageously defines all the points p̂i of the edited primitive in a common orthonormal local reference frame RL=(u,v,n), wherein u and v correspond to arbitrary vectors which define a plane of the section, and n is a normal to said plane (cf. FIG. 7).


The coplanarity of the points p̂i of the edited primitive can be expressed as follows:

$$\hat{p}_i = p_i + o_{i,u}\,u + o_{i,v}\,v + o_n\,n$$





oi,u corresponds to a first offset of the point pi along vector u.


oi,v corresponds to a second offset of the point pi along vector v.


on corresponds to a third offset of the point pi along vector n, said third offset on being identical for all the points pi of the section.


All the points of the edited primitive share the same offset on in the direction of the normal of the plane, in order to keep the points custom-character in a plane. Therefore, the coplanarity between the points is maintained, and the invented method contributes to the achievement of plausible results.


The extrusion vector ĥ of the edited primitive (modified 3D model) is expressed as follows:

$$\hat{h} = h + (o_h - o_n)\,n$$





oh corresponds to a fourth offset of the scale of the extrusion vector. It can be noted that the extrusion vector ĥ of the edited primitive is not computed with the formula ĥ=h+oh·n, because when on is different from zero the section plane is translated. Thus, it is necessary to counterbalance the extrusion length in order not to also translate the extruded section. When the offset oh=0, the extruded section is translated by −on, so that, in the 3D world space, the extruded section remains at the same location. Therefore, with the above definition of the extrusion vector ĥ of the edited primitive, only the length of the extrusion vector is optimized, taking the third offset into account.


As a consequence, the modification of at least one of the parameters (sub-step c1, FIG. 1) comprises modifying at least one among said first offset, second offset, third offset or fourth offset. Thus, a modified 3D model 4, which comprises the modified parameter(s), is generated.


The number of parameters to optimize is equal to 1+nbSides*2+1, wherein nbSides corresponds to the number of vertices of the section of the initial 3D model. Each offset is initialized to zero for the first iteration step.
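As a minimal sketch, assuming a NumPy representation of the section and of the local frame (u, v, n), the offset parameterization described above could be applied as follows; the function name apply_offsets and the argument names are illustrative only:

    import numpy as np

    def apply_offsets(points, h, u, v, n, o_u, o_v, o_n, o_h):
        """Return the edited section points p_hat and extrusion vector h_hat.

        points: (nbSides, 3) vertices p_i of the initial planar section
        u, v, n: orthonormal local frame of the section plane (n is its normal)
        o_u, o_v: (nbSides,) per-vertex offsets along u and v
        o_n: single offset along n, shared by all vertices (keeps coplanarity)
        o_h: offset of the extrusion length
        """
        p_hat = points + np.outer(o_u, u) + np.outer(o_v, v) + o_n * n
        # The extrusion endpoint is counter-balanced by o_n so that only the
        # extrusion length changes: h_hat = h + (o_h - o_n) * n.
        h_hat = h + (o_h - o_n) * n
        return p_hat, h_hat

    # All offsets start at zero for the first iteration, which reproduces the initial model.
    nb_sides = 4
    o_u = np.zeros(nb_sides); o_v = np.zeros(nb_sides); o_n = 0.0; o_h = 0.0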


The second sub-step c2) comprises performing a discretization of the modified 3D model, thereby obtaining a 3D point cloud.


Thanks to the generation of the 3D point cloud 5, the 3D model, which is only defined by parameters, can be displayed in the 3D scene.


The generation of the 3D point cloud 5 is illustrated by FIGS. 8-10. The circumference of the planar section 2 comprises rectilinear parts 26 and/or non-rectilinear parts 27. For example, the planar section of a cube only comprises rectilinear parts, whereas the planar section of a half-cylinder comprises a rectilinear part 26 and a non-rectilinear part 27 (cf. FIG. 8). A non-rectilinear part 27 comprises at least one arc.


For the rectilinear parts 26, sub-step c2) comprises:

    • an extrusion of each endpoint 30 of each rectilinear part 26 according to the extrusion vector, thereby forming a first set of extruded endpoints 31;
    • a regular discretization of each segment 32 defined by an endpoint 30 and a corresponding extruded endpoint 31 of the first set of extruded endpoints;
    • a regular discretization of each rectilinear part 26;
    • a regular discretization of each extruded segment 33, an extruded segment being defined by two adjacent extruded endpoints.


For the non-rectilinear parts 27 (cf. FIGS. 8-10), sub-step c2) comprises:

    • a discretization of each arc into rectilinear parts 35 (i.e. polygonization of the arc), thereby obtaining a second set of endpoints 34. A few dozen endpoints 34 are usually sufficient to represent an arc with segments (second set of segments 35).
    • an extrusion of each endpoint 34 according to the extrusion vector, thereby forming a second set of extruded endpoints 36;
    • a regular discretization of each segment 29 defined by an endpoint 34 of the second set of endpoints 34 and a corresponding extruded endpoint 36 of the second set of extruded endpoints 36;
    • a regular discretization of each extruded segment 37, an extruded segment 37 being defined by two adjacent extruded endpoints 36.


On FIG. 9, not all the extruded segments are represented, for the sake of clarity of the figure.


Then, the 3D point cloud 5 is obtained, as illustrated by FIG. 10.
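The discretization of sub-step c2) could look like the following sketch for a purely rectilinear section (a non-rectilinear part would first be polygonized into short segments 35 as described above); the sampling count n and the function names are arbitrary assumptions:

    import numpy as np

    def sample_segment(a, b, n=10):
        """Regularly discretize the segment [a, b] into n points (endpoints included)."""
        t = np.linspace(0.0, 1.0, n)[:, None]
        return a[None, :] * (1.0 - t) + b[None, :] * t

    def discretize_extrusion(points, h, n=10):
        """Build a 3D point cloud from a rectilinear section and its extrusion vector."""
        cloud = []
        nb = len(points)
        for i in range(nb):
            p0 = points[i]
            p1 = points[(i + 1) % nb]                        # next vertex (modulo numbering)
            cloud.append(sample_segment(p0, p0 + h, n))      # lateral edge p_i -> p_i + h
            cloud.append(sample_segment(p0, p1, n))          # edge of the base section
            cloud.append(sample_segment(p0 + h, p1 + h, n))  # corresponding extruded edge
        return np.concatenate(cloud, axis=0)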


As illustrated by FIG. 1, the invented method comprises, at each iteration, a step c3) which comprises computing an energy made of a first term which penalizes an inconsistency between the modified 3D model 4 and the initial 3D model 1, and a second term which penalizes a mismatch between the 3D point cloud 5 and the user sketch 3. After each iteration, the parameters are modified so as to minimize the energy.


The first term encourages the aforementioned offsets to be close to zero. Indeed, to keep consistency between the original primitive and the edited one, most of the offsets may be equal to zero (which means that there is no change from the original primitive, the 3D models are consistent one to the other). “Consistency” refers to the fact that the geometrical shape remains the same (for example a cube remains a cube, a cylinder remains a cylinder, a pyramid remains a pyramid), but the dimensions may change.


According to an embodiment, the first term, which is referred to as the offset regularization energy, is computed as follows:

$$e_{offsets} = \frac{e_1}{l_h} + \frac{e_2}{l_{side}} + \frac{e_3}{l_h} + K\,\sqrt{e_1 e_2 + e_1 e_3 + e_2 e_3 + \varepsilon}$$

wherein

$$e_1 = o_n^2$$

$$e_2 = \sum_i \left( o_{i,u}^2 + o_{i,v}^2 \right)$$

$$e_3 = \left( o_h - o_n \right)^2$$

$$l_h = \lVert h \rVert^2$$

$$l_{side} = \frac{1}{nbSides}\,\sum_{i=0}^{nbSides-1} \lVert p_{(i+1)\,\%\,nbSides} - p_i \rVert^2$$

nbSides corresponds to the number of edges of the section of the initial 3D model, % nbSides corresponds to a “modulo” numbering, ε is determined to prevent a non-differentiability of the square root for the first iterations, pi corresponds to a position of the point.


oi,u corresponds to a first offset of the point pi along vector u.


oi,v corresponds to a second offset of the point pi along vector v.


on corresponds to a third offset of the point pi along vector n, said third offset on being identical for all the points pi of the section.


oh corresponds to a fourth offset of the scale of the extrusion vector.


Minimizing the first term means looking for a solution that minimizes the offsets, keeping in mind that the optimal solution is usually the simplest one in terms of offset changes.







e1/lh corresponds to the relative section plane translation in the direction of its normal. e1 is a squared value so as to get a positive value at each iteration, since the energy is to be minimized. lh corresponds to the squared length of the extrusion vector.







e2/lside corresponds to a relative modification of the points of the section within the plane. lside corresponds to the mean squared length of a segment between two points pi of the section. For example, for a four-point section (p1, p2, p3, p4), lside is computed as follows:







$$l_{side} = \frac{1}{4}\left( \lVert p_2 - p_1 \rVert^2 + \lVert p_3 - p_2 \rVert^2 + \lVert p_4 - p_3 \rVert^2 + \lVert p_1 - p_4 \rVert^2 \right)$$









e3/lh corresponds to a relative modification of the scale of the extrusion vector.


The first term eoffsets is a sum of quadratic energies, and minimizing a sum of quadratic elements tends to spread the error across the different elements. This is not the desired behavior, because it usually leads to non-plausible results in terms of user intentionality.


Indeed, it is unlikely that the user edits the extrusion direction length and the section plane/section points using a single stroke. It is also unlikely that he edits the points of the section and the plane of the section at the same time. Because of the interactivity of the invented method, the 3D model is edited iteratively, after each user stroke, and one single stroke usually means a minor modification.


That is why the term $K\sqrt{e_1 e_2 + e_1 e_3 + e_2 e_3 + \varepsilon}$ has been added; it highly penalizes simultaneous modifications. Indeed, the term is equal to zero when at least two of the energies e1, e2 and e3 are equal to zero.


In a preferred embodiment, 1≤K≤10^6 and ε≤10^−6.


A too high K coefficient (10^9 for example, or even 10^6) would make the optimization unstable (the term becomes too overwhelming compared to the other energies, especially since floating-point values are handled: there are no values which are strictly equal to zero). Thus, it is acceptable to lower K in order to accept minor floating-point errors (two terms very slightly non-zero).


Consequently, K=10^3 is a good trade-off.


Advantageously, ε≤10^−6, for example ε≤10^−8, in order to prevent non-differentiability (a square root is not differentiable at zero for the first iterations).
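As a hedged illustration of the offset regularization energy defined above, assuming NumPy arrays for the offsets and the section points, one possible evaluation is the following sketch (the function name is illustrative):

    import numpy as np

    def offset_regularization(o_u, o_v, o_n, o_h, points, h, K=1e3, eps=1e-8):
        """e_offsets = e1/l_h + e2/l_side + e3/l_h + K*sqrt(e1*e2 + e1*e3 + e2*e3 + eps)."""
        e1 = o_n ** 2                                   # section-plane translation along n
        e2 = np.sum(o_u ** 2 + o_v ** 2)                # in-plane modification of the vertices
        e3 = (o_h - o_n) ** 2                           # modification of the extrusion length
        l_h = np.dot(h, h)                              # squared extrusion length
        rolled = np.roll(points, -1, axis=0)            # p_{(i+1) % nbSides}
        l_side = np.mean(np.sum((rolled - points) ** 2, axis=1))
        cross = K * np.sqrt(e1 * e2 + e1 * e3 + e2 * e3 + eps)  # penalizes simultaneous edits
        return e1 / l_h + e2 / l_side + e3 / l_h + cross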


The energy to be minimized comprises a second term, referred to as custom Chamfer energy, which penalizes a mismatch between the 3D point cloud 5 and the user sketch 3.


In what follows, the 3D points of the user sketch 3 are called the “target 3D points”.


The idea is to iterate over each target 3D point 3 and minimize the distance between that point 3 and the closest 3D point from the rendered 3D points list 5. All target points should "attract" (i.e. reduce the distance to) the closest point from the other list, as illustrated by FIG. 11.


The notion of "close" points depends on a distance metric, which is usually the L2 norm in the prior art. However, in the present case, it has been pointed out that an L2 norm (or L2 distance) is not well suited in all situations to find the correct rendered point to attract, as illustrated by FIG. 12.


On the left part of FIG. 12, the 3D point cloud 5 is displayed along with the target 3D points 3 which should “attract” the 3D point cloud 5.


On the middle part of FIG. 12, the points of the 3D point cloud 5 which are closest to each target 3D point, using the L2 norm, are concentrated in the areas 14 and 15, which does not correspond to the user's intention.


Indeed, on the right part of FIG. 12, the segment 16 corresponds to the correct rendered points which should be "attracted" and which should match the user target stroke 3. Here there is no ambiguity about the user intention, because it would be nonsense (and would not respect the idea of the simplest solution stated before) to attract rendered points belonging to an edge orthogonal to the target stroke line.


Since the well-known L2 distance is not satisfying for computing the distance between the 3D point cloud and the target 3D points, the invented method advantageously comprises computing a distance which is particularly well adapted to the present case.


Indeed, the user strokes are seen as a set of lines which must attract close parallel contours. The computed distance incorporates more meaning into the input, rather than treating it as a set of target 3D points which are all isolated and independent of each other.



FIG. 13 illustrates this embodiment. For each target 3D point pi, its 3D local stroke direction ui is approximated, using its neighboring target 3D points (the two points of the user sketch 3 adjacent to the point pi). ni and vi are two arbitrary vectors such that (pi; ui, ni, vi) form a local 3D orthonormal system.


For each target point pi, a local 3D coordinate system (pi; ui, vi, ni) is defined, with ni being orthogonal to ui and vi.


Then, for each point pi of the user sketch, the distance between the point pi and each point of the 3D point cloud is computed. The distance used in the invented method is defined in such a way that the points of the 3D point cloud lying on contours parallel to the user stroke are considered closer than points which are close in the L2 sense but belong to other edges (cf. points 14 and 15 on the middle part of FIG. 12).








$$d(p_i, p_k)^2 = \lambda\,(p_k \cdot u_i)^2 + (p_k \cdot n_i)^2 + (p_k \cdot v_i)^2$$

wherein pk is expressed in the local coordinate system (pi; ui, vi, ni).
Therefore, the distance between points over the local direction of the stroke ui is increased when λ>1. A target stroke line will attract a parallel primitive line.
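A possible sketch of the local frame and of the anisotropic squared distance defined above is given below; the neighbor-based estimate of ui and the particular choice of vi and ni are assumptions, since the description only requires (pi; ui, ni, vi) to form a local orthonormal system:

    import numpy as np

    def local_frame(sketch_points, i):
        """Approximate the local stroke direction u_i at target point i and complete it
        into an orthonormal frame (u_i, v_i, n_i) with two arbitrary orthogonal vectors."""
        prev_p = sketch_points[max(i - 1, 0)]
        next_p = sketch_points[min(i + 1, len(sketch_points) - 1)]
        u = next_p - prev_p
        u = u / (np.linalg.norm(u) + 1e-12)
        helper = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        v = np.cross(u, helper); v /= np.linalg.norm(v)
        n = np.cross(u, v)
        return u, v, n

    def custom_sq_distance(p_i, p_k, u, v, n, lam=4.0):
        """d(p_i, p_k)^2 = lam*(d.u)^2 + (d.n)^2 + (d.v)^2, with d the offset of p_k in p_i's frame."""
        d = p_k - p_i
        return lam * np.dot(d, u) ** 2 + np.dot(d, n) ** 2 + np.dot(d, v) ** 2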


Then the custom Chamfer energy echamfer is computed as follows:







$$e_{chamfer} = \frac{1}{n}\,\sum_{p_i \in P_{target}} \; \min_{p_k \in P_{model}} \; d(p_i, p_k)^2$$



n corresponds to the number of points pi of the user sketch 3.


pi corresponds to a point of the user sketch 3, and Ptarget corresponds to the whole set of n points pi of the user sketch 3.


pk corresponds to a point of the 3D point cloud 5 and Pmodel corresponds to the whole set of points pk of the 3D point cloud 5.


λ is a scalar such that λ>1. Indeed, with λ>1, the points which lie on a contour parallel to the user sketch are privileged for computing the custom Chamfer energy echamfer.


Advantageously, 2≤λ≤20, and preferably λ=4.


If λ is too low (for example 1≤λ≤2), the 3D local stroke direction ui will not be sufficiently penalized. If λ is too high, there may be instabilities due to noise in the local normal ni, which may slow down the real-time computations.


Thanks to the normalization factor 1/n, the custom Chamfer energy echamfer does not depend on the size of the sketch. Besides, the energies are easily compared one to the others, since they all have the same order of magnitude, which is beneficial for the calculation of the gradient during the optimization.
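Building on such a per-point local frame, the custom Chamfer energy could be evaluated as in the following sketch (brute-force nearest-point search; frames[i] is assumed to hold (ui, vi, ni) for the i-th sketch point, for instance computed with the local_frame function of the previous sketch, and the function name is illustrative):

    import numpy as np

    def custom_chamfer(sketch_points, frames, cloud_points, lam=4.0):
        """Mean over the target points of the minimal anisotropic squared distance
        to the rendered 3D point cloud (normalized by the number of sketch points)."""
        total = 0.0
        for p_i, (u, v, n) in zip(sketch_points, frames):
            d = cloud_points - p_i                      # offsets of all cloud points in p_i's frame
            sq = lam * (d @ u) ** 2 + (d @ n) ** 2 + (d @ v) ** 2
            total += sq.min()                           # closest cloud point under d(.,.)^2
        return total / len(sketch_points)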


It has been seen that, at each iteration of the invented method, an energy is computed. The energy comprises a first term (the offset regularization energy) which penalizes an inconsistency between the modified 3D model (4) and the initial 3D model (1), and a second term (the custom Chamfer energy) which penalizes a mismatch between the 3D point cloud 5 and the user sketch 3.


Optionally, the energy may also comprise a third term, referred to as the symmetry energy, which penalizes a lack of symmetry of the modified 3D model and which increases the plausibility of the modified 3D model.


Man-made objects are usually composed of symmetric and regular primitives. Indeed, it is more common to find cylindrical man-made objects than elliptic objects; rectangular shapes rather than arbitrary quadrilateral shapes.


The symmetry energy may be computed as follows:







$$e_{symmetry} = \min_{q \in Q} \left( \sum_{\hat{p}_i \in \hat{P}} \; \min_{\hat{p}_{j,q} \in \hat{P}_q} \lVert \hat{p}_i - \hat{p}_{j,q} \rVert^2 \;+\; \sum_{\hat{p}_{j,q} \in \hat{P}_q} \; \min_{\hat{p}_i \in \hat{P}} \lVert \hat{p}_i - \hat{p}_{j,q} \rVert^2 \right)$$




P is a set of points including the vertices of the section and the middle point between two consecutive vertices.


Q is a set of symmetry plane candidates which are all the planes containing two different points of P, and with normal orthogonal to the normal of the section.


P̂ is the set of points containing all the points p̂i of the section of the modified 3D model and all the extruded points (p̂i + ĥ).


Given the set of points P̂ and a plane q defining a symmetry, the set of symmetric points P̂q is computed, where p̂j,q is the symmetric point of p̂j, using the symmetry plane q.


In the example of FIG. 14 (left part), the contours 22 of the section of the 3D model (here a quadrilateral) are to be optimized. The set of points P comprises the points 23. The line 24 represents the axis of the best plane of symmetry q. The points 25 are the set of points P̂. The right part of FIG. 14 illustrates the 3D model optimized by the symmetry energy. The sets of points P and P̂ all merge, which means that the symmetry energy has been minimized, meaning a good symmetry with respect to the axis 24.
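As an illustrative sketch of the symmetry energy, assuming each candidate symmetry plane q is given by a point and a unit normal, the two-sided Chamfer term could be evaluated as follows; the helper names reflect and symmetry_energy are not from the patent:

    import numpy as np

    def reflect(points, q_point, q_normal):
        """Mirror a set of points across the plane through q_point with unit normal q_normal."""
        d = (points - q_point) @ q_normal
        return points - 2.0 * d[:, None] * q_normal[None, :]

    def symmetry_energy(p_hat, plane_candidates):
        """Minimum over candidate symmetry planes of the two-sided Chamfer distance
        between the point set and its mirror image (squared L2)."""
        best = np.inf
        for q_point, q_normal in plane_candidates:
            mirrored = reflect(p_hat, q_point, q_normal)
            d2 = np.sum((p_hat[:, None, :] - mirrored[None, :, :]) ** 2, axis=-1)
            energy = d2.min(axis=1).sum() + d2.min(axis=0).sum()
            best = min(best, energy)
        return best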


Finally, the energy to minimize is equal to the sum of at least the first and second terms, and potentially also the third term:






$$e = k_{chamfer} \cdot e_{chamfer} + k_{offsets} \cdot e_{offsets} + k_{symmetry} \cdot e_{symmetry}$$


kchamfer, koffsets and ksymmetry are weights which are determined so as to balance between regularization energies (eoffsets and esymmetry) and target energy (echamfer).


According to a particular embodiment, the following weights can be used:







$$k_{chamfer} = 10^{3}, \quad k_{offsets} = 10^{-1}, \quad k_{symmetry} = 10^{-1}$$



As mentioned above, each energy can be expressed using the inputs (original CAD parameters of the extrusion and the user sketch) and the offsets. Thus, at each iteration, the energy and the offsets are determined, and it is possible to minimize the energy e by performing a gradient descent optimization, computing

$$\frac{\partial e}{\partial o}$$

where o is any offset.


The learning rate parameter of the descent (or descent rate) is adapted to match the scale of the primitive. For that, the following descent rate DR can be used







$$DR = \frac{1}{\alpha} \cdot \frac{l_{side} + l_h}{2}$$

lside corresponds to the mean length of the section sides of the initial 3D model and lh corresponds to the length of the extrusion vector of the initial 3D model. α is a scalar.


In a preferred embodiment, 1≤α≤10^6, preferably 5≤α≤10^4, 10≤α≤10^3, 20≤α≤100, for example α=50.


It is preferred to have a low learning rate parameter during the optimization, so as not to vary the offsets too suddenly.


The invented method comprises a plurality of iterations of steps c1), c2) and c3). The parameters are modified after each iteration, so as to minimize said energy. Thus, the offsets are updated using the gradient descent step results. The number of iterations may be predetermined (for example 200, which has empirically proven to be very satisfying), or the iterations may stop once one or several predetermined criteria have been met.
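The overall optimization loop of steps c1) to c3) could be outlined as below; a numerical gradient is used here only to keep the sketch self-contained, whereas any differentiation scheme may be used in practice, and the function and parameter names (energy, optimize_offsets, h_step) are assumptions:

    import numpy as np

    def optimize_offsets(energy, offsets, l_side, l_h, alpha=50.0, n_iter=200, h_step=1e-6):
        """Plain gradient descent on the offset vector, with the scale-adapted rate
        DR = (1/alpha) * (l_side + l_h) / 2 and a fixed number of iterations."""
        dr = (l_side + l_h) / (2.0 * alpha)
        o = offsets.copy()
        for _ in range(n_iter):
            grad = np.zeros_like(o)
            e0 = energy(o)
            for j in range(len(o)):                 # numerical partial derivative de/do_j
                o_j = o.copy(); o_j[j] += h_step
                grad[j] = (energy(o_j) - e0) / h_step
            o -= dr * grad                          # gradient descent step
        return o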


Finally, the edited CAD parameters are stored, so that the modified 3D model can be further used in a CAD software.


Because of the interactivity of the invented method, the 3D model is edited iteratively, after each user stroke, and one single stroke usually means a minor modification.


The method comprises a last step d) which comprises displaying the modified 3D model (cf. FIG. 1). The modified 3D model may be displayed continuously after each iteration, or at the end of all the iterations.


It can be noted that the shape modification is not limited to a single view. As the user draws, the user can move from one point of view of the 3D model to another.


For example, in FIG. 15, the user draws a circle 19 according to a first view; thus a cylinder 20 is inferred. However, according to that first view, it would be difficult for the user to precisely modify the height of the cylinder. Then, the user changes the view; the user sketch can be provided according to that second view in order to refine the shape. The inferred shape is shrunk (cf. user sketch 21) to better fit the strokes.



FIG. 16 illustrates the difficulties of providing an initial 3D model in an AR/VR environment. The left part of FIG. 16 illustrates a "naive" drawing of a cube by a beginner AR user, who tends to draw in screen space/view space rather than in world space/world coordinates. When drawing a cube in AR, users draw the projection of the cube in mid-air (in 2.5D), meaning mainly in 2D but with some depth information (intentional or not). Indeed, drawing the cube in 3D space, by moving to select every corner, is neither intuitive nor accurate (cf. right part of FIG. 16).


The predicted 3D model (for example by using the Convolutional Neural Network encoder which is disclosed in the patent application EP3958162A1) could be optimized by taking into account all the strokes from all the views at once, or by optimizing a 3D model from every view, before performing a more global optimization.


The original 2.5D strokes might not provide intentional depth information which can be used. In prior-art algorithms, a threshold might be necessary if the signal needs to be distinguished from noise, in order to provide depth hints. Contrary to that, the invented method does not need this depth information to work, but in some cases it could provide hints regarding the depth of the object being created.


The method has been disclosed in particular for a linear extrusion of a plane section, with an extrusion vector which is always orthogonal to the section plane.


However, the method can be extended to a section composed of multiple curves, with the line type list containing the different degrees of the curves. It is also possible to consider non-planar sections and extrusion vectors that are not orthogonal to the section plane.


In particular, the invented method is also particularly well adapted for a surface of revolution, which is considered as a curved extrusion (extrusion by revolution). The left part of FIG. 17 illustrates a planar section comprising a list of 3D points (p0-p6), and the axis of revolution is defined by a vector h=ph1−ph0, wherein ph0 and ph1 belong to the plane of the section on the axis of revolution.


It can be seen that connections p0-p1, p2-p3, p3-p4, p4-p5 and p6-p0 are segments and connections p1-p2 and p5-p6 are arcs. It is supposed that the connection types (segment or arc) do not change during the design process.


The right part of FIG. 17 illustrates the solid of revolution (initial 3D model) which is obtained by rotating the planar section p0-p6 around the axis of revolution defined by vector h.


The modified 3D model can be defined with regards to the initial 3D model by a modified set of points p̂i which corresponds to the modified planar section of the modified solid of revolution (modified 3D model):









$$\hat{p}_i = p_i + o_{i,u}\,u + o_{i,v}\,v$$


u,v correspond to vectors which define a plane of the planar section, oi,u and oi,v correspond to offsets to optimize.


Besides, the modified positions of the points p̂h0 and p̂h1, which define the modified axis of revolution, can be defined as follows:








$$\hat{p}_{h0} = p_{h0} + o_{h0,u}\,u + o_{h0,v}\,v$$

$$\hat{p}_{h1} = p_{h1} + o_{h1,u}\,u + o_{h1,v}\,v$$






    • oh0,u, oh0,v, oh1,u, oh1,v correspond to offsets to optimize.





At each iteration of the method, the modification of at least one of the parameters of the 3D model comprises modifying at least one of the offsets oi,u, oi,v, oh0,u, oh0,v, oh1,u, oh1,v.
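For the revolution embodiment, the offsets could be applied to the section points and to the two axis points as in the following sketch (illustrative names, not the patent's code):

    import numpy as np

    def apply_revolution_offsets(points, p_h0, p_h1, u, v,
                                 o_u, o_v, o_h0_u, o_h0_v, o_h1_u, o_h1_v):
        """Edited section points and axis points for the extrusion-by-revolution case.
        u, v span the plane of the planar section; all offsets stay within that plane."""
        p_hat = points + np.outer(o_u, u) + np.outer(o_v, v)
        p_h0_hat = p_h0 + o_h0_u * u + o_h0_v * v
        p_h1_hat = p_h1 + o_h1_u * u + o_h1_v * v
        h_hat = p_h1_hat - p_h0_hat                 # modified axis of revolution
        return p_hat, p_h0_hat, p_h1_hat, h_hat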


According to that embodiment, steps a), b), c1) and c2) are the same as for a linear extrusion, and will not be detailed for the sake of brevity.


In step c3), the first term (offset regularization energy) is computed as follows:








$$e = \frac{e_1}{l_h} + \frac{e_2}{l_{side}} + K\,\sqrt{e_1 e_2 + \varepsilon}$$

wherein

$$e_1 = o_{h0,u}^2 + o_{h0,v}^2 + o_{h1,u}^2 + o_{h1,v}^2$$

$$e_2 = \sum_i \left( o_{i,u}^2 + o_{i,v}^2 \right)$$

$$l_{side} = \frac{1}{nbPoints}\,\sum_{i=0}^{nbPoints-1} \lVert p_{(i+1)\,\%\,nbPoints} - p_i \rVert^2$$

$$l_h = \lVert p_{h1} - p_{h0} \rVert^2$$




nbPoints corresponds to the number of points of the section of the initial 3D model, K is a penalization weight which is determined to penalize simultaneous modifications in the user sketch 3, ε is determined to prevent a non-differentiability of the square root for the first iterations.


The values of K and ε are determined similarly to those for the linear extrusion.
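Accordingly, the offset regularization term of the revolution case could be evaluated as in this sketch (same caveats as for the linear extrusion sketch above; the function name is illustrative):

    import numpy as np

    def revolution_offset_regularization(o_u, o_v, o_h0_u, o_h0_v, o_h1_u, o_h1_v,
                                         points, p_h0, p_h1, K=1e3, eps=1e-8):
        """e = e1/l_h + e2/l_side + K*sqrt(e1*e2 + eps) for the revolution embodiment."""
        e1 = o_h0_u**2 + o_h0_v**2 + o_h1_u**2 + o_h1_v**2    # axis-point offsets
        e2 = np.sum(o_u**2 + o_v**2)                          # section-point offsets
        rolled = np.roll(points, -1, axis=0)
        l_side = np.mean(np.sum((rolled - points) ** 2, axis=1))
        l_h = np.sum((p_h1 - p_h0) ** 2)
        return e1 / l_h + e2 / l_side + K * np.sqrt(e1 * e2 + eps)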


Besides, in step c3), the second term (custom Chamfer energy) is identical to the second term for a linear extrusion. The third term (symmetry energy) is not relevant in case of a surface of revolution.


The inventive method can be performed by a suitably-programmed general-purpose computer or computer system, possibly including a computer network, storing a suitable program in non-volatile form on a computer-readable medium such as a hard disk, a solid state disk or a CD-ROM and executing said program using its microprocessor(s) and memory.


A computer—more precisely a computer aided design station—suitable for carrying out a method according to an exemplary embodiment is described with reference to FIG. 18. In FIG. 18, the computer includes a Central Processing Unit CP which performs the processes described above. The process can be stored as an executable program, i.e. a set of computer-readable instructions in memory, such as RAM MEM1 or ROM MEM2, or on hard disk drive (HDD) MEM3 or DVD/CD drive MEM4, or can be stored remotely. A 3D models database—in a form suitable to be processed by the executable program according to the inventive method—may also be stored on one or more of memory devices MEM1 to MEM4, or remotely.


The disclosure is not limited by the form of the computer-readable media on which the computer-readable instructions of the inventive process are stored. For example, the instructions and databases can be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other information processing device with which the computer aided design station communicates, such as a server or computer. The program and the database can be stored on a same memory device or on different memory devices.


Further, a computer program suitable for carrying out the inventive method can be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU PR and an operating system such as Microsoft VISTA, Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.


The Central Processing Unit CP can be a Xeon processor from Intel of America or an Opteron processor from AMD of America, or can be other processor types, such as a Freescale ColdFire, IMX, or ARM processor from Freescale Corporation of America. Alternatively, the Central Processing Unit CP can be a processor such as a Core2 Duo from Intel Corporation of America, or can be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, the CPU can be implemented as multiple processors cooperatively working to perform the computer-readable instructions of the inventive processes described above.


The computer aided design station in FIG. 18 also includes a network interface NI, such as an Intel Ethernet PRO network interface card from Intel Corporation of America, for interfacing with a network, such as a local area network (LAN), wide area network (WAN), the Internet and the like. The computer aided design station further includes a display controller DC, such as a NVIDIA GeForce GTX graphics adaptor from NVIDIA Corporation of America for interfacing with display DY, such as a Hewlett Packard HPL2445w LCD monitor or a virtual reality headset. A general purpose I/O interface IF interfaces with a keyboard KB and pointing device PD, such as a roller ball, mouse, touchpad, control devices of a virtual reality system and the like. The display, the keyboard and the pointing device, together with the display controller and the I/O interfaces, form a graphical user interface.


The computer system further includes a head mounted display device HMD having a head tracking device. The general purpose I/O interface IF also interfaces with at least one hand held controller HHC which is equipped with a hand tracking device. The display, the keyboard and the pointing device, together with the display controller and the I/O interfaces, form a graphical user interface, used by the user to provide user sketches and by the computer for displaying the 3D objects.


Disk controller DKC connects HDD MEM3 and DVD/CD MEM4 with communication bus CB, which can be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the computer aided design station.


The computer also comprises a memory having recorded thereon the data structure which comprises the Convolutional Neural Network (CNN) encoder which infers the 3D primitive based on the 3D sketch and based on the learned patterns. The skilled person may refer to patent application EP3958162A1 for an exemplary description of the encoder.


The modified 3D model may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other server or computer, for a further use in a computer aided design. A physical (mechanical) object, or a part of said object, may be manufactured based on a file containing the modified 3D model.


The file may be converted in a readable format for the manufacturing process. Thus, the disclosure also relates to a method of manufacturing a mechanical part, which comprises the steps of:

    • Designing a mechanical part by means of the aforementioned design method;
    • Physically manufacturing the mechanical part.


A description of the general features and functionality of the display, keyboard, pointing device, as well as the display controller, disk controller, network interface and I/O interface is omitted herein for brevity as these features are known.



FIG. 19 is a block diagram of a computer system suitable for carrying out a method according to a different exemplary embodiment.


In FIG. 19, the server SC is connected to an administrator system ADS and end user computer EUC via a network NW.


As can be appreciated, the network NW can be a public network, such as the Internet, or a private network such as a LAN or WAN network, or any combination thereof, and can also include PSTN or ISDN sub-networks. The network NW can also be wired, such as an Ethernet network, or can be wireless, such as a cellular network including EDGE, 3G and 4G wireless cellular systems. The wireless network can also be Wi-Fi, Bluetooth, or any other wireless form of communication that is known. Thus, the network NW is merely exemplary and in no way limits the scope of the present advancements.


The client program stored in a memory device of the end user computer and executed by a CPU of the latter accesses the 3D model databases on the server via the network NW.


Although only one administrator system ADS and one end user system EUC are shown, the system can support any number of administrator systems and/or end user systems without limitation.


Any processes, descriptions or blocks in flowcharts described herein should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the exemplary embodiment.

Claims
  • 1. A computer-implemented method for designing a 3D model in an AR/VR environment, comprising: a) obtaining a 3D model in a 3D scene, the 3D model including at least one extruded section which results from the extrusion of a planar section, said extruded section being defined by a set of parameters;b) receiving a 3D user sketch in the 3D scene;c) at each iteration of a plurality of iterations: c1) modifying at least one of said parameters, thereby obtaining a modified 3D model;c2) performing a discretization of the modified 3D model, thereby obtaining a 3D point cloud;c3) computing an energy which comprises a first term which penalizes an inconsistency between the modified 3D model and an initial 3D model, and a second term which penalizes a mismatch between the 3D point cloud and the 3D user sketch,said parameters being modified to minimize said energy; andd) outputting the modified 3D model.
  • 2. The method according to claim 1, wherein the planar section is extruded linearly according to an extrusion vector.
  • 3. The method according to claim 2, wherein the planar section includes a first set of rectilinear parts, and sub-step c2) further comprises: extruding of each endpoint of each rectilinear part according to the extrusion vector, thereby forming a first set of extruded endpoints;regular discretizing of each segment defined by an endpoint of a rectilinear part and a corresponding extruded endpoint of the first set of extruded endpoints;regular discretizing of each rectilinear part; andregular discretizing of each extruded segment, an extruded segment being defined by two adjacent extruded endpoints of the first set of extruded endpoints.
  • 4. The method according to claim 2, wherein the planar section includes non-rectilinear parts, and sub-step c2) further comprises: discretizing each non-rectilinear part into a second set of rectilinear parts, thereby obtaining a second set of endpoints;extruding each endpoint according to the extrusion vector, thereby forming a second set of extruded endpoints; regular discretizing each segment defined by an endpoint of the second set of endpoints and a corresponding extruded endpoint of the second set of extruded endpoints; andregular discretizing each extruded segment, an extruded segment being defined by two adjacent extruded endpoints of the second set of endpoints.
  • 5. The method according to claim 2, wherein the set of parameters includes a position of 3D points (pi) of a section in the 3D scene and the extrusion vector (h), the modified 3D model is defined with regards to the initial 3D model by a modified set of points and by a modified extrusion vector ĥ expressed as follows:
  • 6. The method according to claim 5, wherein the first term, being an offset regularization energy, is computed as follows: wherein
  • 7. The method according to claim 1, wherein the initial 3D model describes a 3D surface of revolution, which defines an extrusion by revolution, said 3D surface of revolution being defined by the set of parameters having a list of 3D points (pi) of the planar section to revolve, and by an axis of revolution, expressed as follows: h=ph1−ph0, wherein ph0 and ph1 belong to the plane of a section on the axis of revolution, wherein the modified 3D model is defined with regards to the initial 3D model by a modified set of points of the planar section and by a modified vector of axis of revolution ĥ, wherein p̂i=pi+oi,u·u+oi,v·v and ĥ=p̂h1−p̂h0, u, v correspond to vectors which define a plane of the planar section, oi,u and oi,v correspond respectively to a fifth and a sixth offset to optimize, wherein p̂h0=ph0+oh0,u·u+oh0,v·v and p̂h1=ph1+oh1,u·u+oh1,v·v, oh0,u, oh0,v, oh1,u, oh1,v correspond to offsets to optimize, and wherein modifying at least one of said parameters includes modifying at least one among said offsets.
  • 8. The method according to claim 7, wherein the first term, being an offset regularization energy, is computed as follows: where:
  • 9. The method according to claim 1, wherein the second term, being a custom Chamfer energy, is computed as follows:
  • 10. The method according to claim 5, wherein the energy includes a third term, being a symmetry energy, which is computed as follows:
  • 11. The method according to claim 2, wherein the energy is minimized by performing a gradient descent optimization, with the following descent rate DR:
  • 12. A non-transitory computer-readable data-storage medium containing computer-executable instructions that cause a computer system to carry out a method for designing a 3D model in an AR/VR environment, the method comprising: a) obtaining a 3D model in a 3D scene, the 3D model including at least one extruded section which results from the extrusion of a planar section, said extruded section being defined by a set of parameters;b) receiving a 3D user sketch in the 3D scene;c) at each iteration of a plurality of iterations: c1) modifying at least one of said parameters, thereby obtaining a modified 3D model;c2) performing a discretization of the modified 3D model, thereby obtaining a 3D point cloud;c3) computing an energy which comprises a first term which penalizes an inconsistency between the modified 3D model and an initial 3D model, and a second term which penalizes a mismatch between the 3D point cloud and the 3D user sketch,said parameters being modified to minimize said energy; andd) outputting the modified 3D model.
  • 13. A computer system comprising: a processor coupled to a memory, the memory storing computer-executable instructions for designing a 3D model in an AR/VR environment that when executed by the processor cause the processor to be configured to: a) obtain a 3D model in a 3D scene, the 3D model including at least one extruded section which results from the extrusion of a planar section, said extruded section being defined by a set of parameters;b) receive a 3D user sketch in the 3D scene;c) at each iteration of a plurality of iterations: c1) modify at least one of said parameters, thereby obtaining a modified 3D model;c2) perform a discretization of the modified 3D model, thereby obtaining a 3D point cloud;c3) compute an energy which comprises a first term which penalizes an inconsistency between the modified 3D model and an initial 3D model, and a second term which penalizes a mismatch between the 3D point cloud and the 3D user sketch,said parameters being modified to minimize said energy; andd) output the modified 3D model.
  • 14. The method according to claim 3, wherein the planar section includes non-rectilinear parts, and sub-step c2) further comprises: discretizing of each non-rectilinear part into a second set of rectilinear parts, thereby obtaining a second set of endpoints;extruding of each endpoint according to the extrusion vector, thereby forming a second set of extruded endpoints;regular discretizing of each segment defined by an endpoint of the second set of endpoints and a corresponding extruded endpoint of the second set of extruded endpoints; andregular discretizing of each extruded segment, an extruded segment being defined by two adjacent extruded endpoints of the second set of endpoints.
  • 15. The method according to claim 3, wherein the set of parameters includes a position of 3D points (pi) of a section in the 3D scene and the extrusion vector (h), the modified 3D model is defined with regards to the initial 3D model by a modified set of points and by a modified extrusion vector ĥ expressed as follows:
  • 16. The method according to claim 4, wherein the set of parameters includes a position of 3D points (pi) of a section in the 3D scene and the extrusion vector (h), the modified 3D model is defined with regards to the initial 3D model by a modified set of points and by a modified extrusion vector ĥ expressed as follows:
  • 17. The method according to claim 3, wherein the energy is minimized by performing a gradient descent optimization, with the following descent rate DR:
  • 18. The method according to claim 4, wherein the energy is minimized by performing a gradient descent optimization, with the following descent rate DR:
  • 19. The method according to claim 9, wherein the energy is minimized by performing a gradient descent optimization, with the following descent rate DR:
  • 20. The method according to claim 10, wherein the energy is minimized by performing a gradient descent optimization, with the following descent rate DR:
Priority Claims (1)
Number Date Country Kind
23306472.4 Sep 2023 EP regional