This specification relates to computer aided design of physical structures, which can be manufactured using additive manufacturing, subtractive manufacturing and/or other manufacturing systems and techniques.
Computer Aided Design (CAD) software has been developed and used to generate three-dimensional (3D) representations of objects, and Computer Aided Manufacturing (CAM) software has been developed and used to evaluate, plan and control the manufacture of the physical structures of those objects, e.g., using Computer Numerical Control (CNC) manufacturing techniques. Typically, CAD software stores the 3D representations of the geometry of the objects being modeled using a boundary representation (B-Rep) format. A B-Rep model is a set of connected surface elements specifying boundaries between a solid portion and a non-solid portion of the modelled 3D object. In a B-Rep model (often referred to as a B-Rep), geometry is stored in the computer using smooth and precise mathematical surfaces, in contrast to the discrete and approximate surfaces of a mesh model, which can be difficult to work with in a CAD program.
CAD programs have been used in conjunction with subtractive manufacturing systems and techniques. Subtractive manufacturing refers to any manufacturing process where 3D objects are created from stock material (generally a “blank” or “workpiece” that is larger than the 3D object) by cutting away portions of the stock material. Such manufacturing processes typically involve the use of multiple CNC machine cutting tools in a series of operations, starting with a roughing operation, an optional semi-finishing operation, and a finishing operation. In addition to CNC machining, other subtractive manufacturing techniques include electrode discharge machining, chemical machining, waterjet machining, etc. In contrast, additive manufacturing, also known as solid free form fabrication or 3D printing, refers to any manufacturing process where 3D objects are built up from raw material (generally powders, liquids, suspensions, or molten solids) in a series of layers or cross-sections. Examples of additive manufacturing include Fused Filament Fabrication (FFF) and Selective Laser Sintering (SLS). Other manufacturing techniques for building 3D objects from raw materials include casting and forging (both hot and cold).
In addition, CAD software has been designed so as to perform automatic generation of 3D geometry (generative design) for a part or one or more parts in a larger system of parts to be manufactured. This automated generation of 3D geometry is often limited to a design space specified by a user of the CAD software, and the 3D geometry generation is typically governed by design objectives and constraints, which can be defined by the user of the CAD software or by another party and imported into the CAD software. The design objectives (such as minimizing the waste material or weight of the designed part) can be used to drive the geometry generation process toward better designs. The design constraints can include both structural integrity constraints for individual parts (i.e., a requirement that a part should not fail under the expected structural loading during use of the part) and physical constraints imposed by a larger system (i.e., a requirement that a part not interfere with another part in a system during use). Further, examples of design constraints include maximum mass, maximum deflection under load, maximum stress, etc.
Some CAD software has included tools that facilitate 3D geometry enhancements using lattices and skins of various sizes, thicknesses and densities, where lattices are composed of beams or struts that are connected to each other or directly to solid parts at junctions, and skins are shell structures that overlay or encapsulate the lattices. Such tools allow redesign of a 3D part to be lighter in weight, while still maintaining desired performance characteristics (e.g., stiffness and flexibility). Such software tools have used lattice topologies of various types that can be used to generate lattice structures that can be manufactured.
Moreover, the inputs to a generative design process can include a set of input solids (B-Rep input) that specify boundary conditions for the generative design process, but many modern generative design solvers do not operate directly on the exact surface boundary representation of their input solids. Instead, B-Reps are sampled and replaced with volumetric representations such as level sets or tetrahedral or hexahedral meshes, which are significantly more convenient and efficient for the physical simulations and material synthesis computed by the solver. The set of input solids can include “preserve bodies”, which should always be present in the design and which represent interfaces to other parts of the systems or locations on which boundary conditions should be applied (for example mechanical loads and constraints). Other regions in which geometry should or should not be generated can also be provided in a similar manner, such as input solids that define “obstacle bodies”, which represent regions where new geometry should not be generated.
This specification describes technologies relating to computer aided design of physical structures using generative design processes, where the three dimensional (3D) models of the physical structures can be produced with singularities and disconnection prevention, and where the physical structures thus produced can then be manufactured using additive manufacturing, subtractive manufacturing and/or other manufacturing systems and techniques.
In general, one or more aspects of the subject matter described in this specification can be embodied in one or more methods, including: obtaining, by a computer aided design program, a design space for a modeled object, for which a corresponding physical structure will be manufactured, one or more design criteria for the modeled object, and one or more in-use load cases for the physical structure; iteratively modifying, by the computer aided design program, a generatively designed three dimensional shape of the modeled object in the design space in accordance with the one or more design criteria and the one or more in-use load cases for the physical structure, wherein the iteratively modifying includes performing numerical simulation of the modeled object in accordance with a current version of the three dimensional shape and the one or more in-use load cases to produce a current numerical assessment of a physical response of the modeled object, computing shape change velocities for an implicit surface in a level-set representation of the three dimensional shape, changing the shape change velocities in accordance with a polynomial function that has been fit to at least a portion of the shape change velocities above a reference velocity, updating the level-set representation using the shape change velocities to produce an updated version of the three dimensional shape of the modeled object, and repeating at least the performing, the computing, the changing and the updating until a predefined number of shape modification iterations have been performed or until the generatively designed three dimensional shape of the modeled object in the design space converges to a stable solution for the one or more design criteria and the one or more in-use load cases; and providing, by the computer aided design program, the generatively designed three dimensional shape of the modeled object for use in manufacturing the physical structure corresponding to the modeled object using one or more computer-controlled manufacturing systems.
The one or more methods can include setting the reference velocity based on a mean and a standard deviation of a shape derivative on the implicit surface.
The one or more methods can include fitting the polynomial function, which can be a cubic polynomial, to velocity values above the reference velocity.
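For illustration only, the following Python sketch shows one way the reference velocity and the cubic polynomial fit described above could be combined to damp outlier shape change velocities before advection; the threshold factor k, the use of numpy.polyfit over a normalized rank abscissa, and the function name are assumptions of this sketch rather than details taken from this specification.

import numpy as np

def smooth_outlier_velocities(v, k=2.0):
    v = np.asarray(v, dtype=float)
    # Reference velocity set from the mean and standard deviation of the
    # shape change velocities (shape derivative values) on the implicit surface.
    v_ref = np.mean(v) + k * np.std(v)
    above = np.flatnonzero(v > v_ref)        # portion of velocities above the reference
    if above.size < 4:                       # a cubic fit needs at least four samples
        return v
    order = above[np.argsort(v[above])]      # outlier samples, sorted by magnitude
    x = np.linspace(0.0, 1.0, order.size)    # normalized rank used as the fit abscissa
    coeffs = np.polyfit(x, v[order], deg=3)  # cubic polynomial fit to the outliers
    out = v.copy()
    out[order] = np.polyval(coeffs, x)       # change the outliers per the fitted cubic
    return out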
The iteratively modifying can include, after the updating and before the repeating: checking for an excessive change having been made during the updating; making the current version of the three dimensional shape be the updated version of the three dimensional shape for a next iteration to undo the excessive change; and slowing shape changes for the next iteration of the iteratively modifying.
Checking for the excessive change can include comparing a change in the one or more design criteria resulting from the updating with a predefined limit on an amount of change allowed for the one or more design criteria in a single iteration of the iteratively modifying.
Slowing the shape changes for the next iteration can include reducing a target volume change for the generatively designed three dimensional shape of the modeled object for the next iteration of the iteratively modifying.
The iteratively modifying can include, before performing the numerical simulation: identifying any elements, generated from the current version of the three dimensional shape for the numerical simulation, that are partially but not entirely within the implicit surface; and removing any of the identified elements that are not connected to any nodes of the current version of the three dimensional shape with Dirichlet boundary conditions, before performing the numerical simulation.
The iteratively modifying can include, before performing the numerical simulation: identifying any elements, generated from the current version of the three dimensional shape for the numerical simulation, that are partially but not entirely within the implicit surface; and setting densities of the identified elements equal to respective volume fractions of the identified elements, wherein a volume fraction of an identified element is a fractional amount of the identified element that falls within the implicit surface; wherein performing the numerical simulation includes penalizing stiffness of the identified elements according to the densities.
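As an illustrative sketch of the density and stiffness penalization just described, the partially covered boundary elements could be handled as follows in Python; the penalization exponent p, the void stiffness floor, and the function name are assumptions of the sketch, not values from this specification.

import numpy as np

def penalized_moduli(volume_fractions, E_solid, E_void=1e-6, p=3.0):
    # Densities of the identified boundary elements are set equal to their
    # volume fractions (the fractional amount of each element that falls
    # within the implicit surface), and stiffness is penalized accordingly.
    rho = np.clip(np.asarray(volume_fractions, dtype=float), 0.0, 1.0)
    return E_void + (rho ** p) * (E_solid - E_void)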
These and other methods described herein can be implemented using a non-transitory computer-readable medium encoding a computer aided design program operable to cause one or more data processing apparatus to perform the method(s). In some implementations, a system includes: a non-transitory storage medium having instructions of a computer aided design program stored thereon; and one or more data processing apparatus configured to run the instructions of the computer aided design program to perform the method(s). Further, such systems can include an additive manufacturing machine, or other manufacturing machines, and the one or more data processing apparatus can be configured to run the instructions of the computer aided design program to generate instructions for such machines (e.g., toolpath specifications for the additive manufacturing machine) from the three dimensional model, and manufacture the physical structure corresponding to the object with the machines using the instructions (e.g., the additive manufacturing machine using the toolpath specifications).
Particular embodiments of the subject matter described in this specification can be implemented to realize one or more of the following advantages.
Singularities, e.g., caused by sharp re-entrant corners or bad meshing, and geometric disconnections, e.g., caused by the use of Ersatz materials, can be mitigated or outright avoided. The techniques described can be implemented for automatic detection and correction, allowing shape and topology optimization to continue where it might otherwise halt as a result of detecting a disconnection or singularity. Objects can be optimized even in limited design spaces. Identified regions of improvement, i.e., regions containing singularities or disconnections, can be corrected to prevent them from affecting the stress constraint defined for the object during design optimization.
The process can be performed automatically and in conjunction with other techniques, such as the level-set method, conventional body-fitted solvers, backup-and-recovery, load case-specific advection to prevent disconnected geometry, Proportional-Integral-Derivative (PID) controllers, and other adaptive controllers and related technologies, including adaptive PID tuning and PID auto-tuning, the latter of which can be used to meet arbitrary design constraints.
The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the invention will become apparent from the description, the drawings, and the claims.
Like reference numbers and designations in the various drawings indicate like elements.
The numerical simulation performed by the CAD program(s) 116 can simulate one or more physical properties and can use one or more types of simulation to produce a numerical assessment of physical response (e.g., structural response) of the modelled object. For example, finite element analysis (FEA), including linear static FEA, finite difference method(s), and material point method(s) can be used. Further, the simulation of physical properties performed by the CAD program(s) 116 can include Computational Fluid Dynamics (CFD), Acoustics/Noise Control, thermal conduction, computational injection molding, electric or electro-magnetic flux, and/or material solidification (which is useful for phase changes in molding processes) simulations. Moreover, the CAD program(s) 116 can potentially implement hole and/or fixture generation techniques to support clamping during manufacturing and/or manufacturing control functions.
As used herein, CAD refers to any suitable program used to design physical structures that meet design requirements, regardless of whether or not the CAD program is capable of interfacing with and/or controlling manufacturing equipment. Thus, CAD program(s) 116 can include Computer Aided Engineering (CAE) program(s), Computer Aided Manufacturing (CAM) program(s), etc. The CAD program(s) 116 can run locally on computer 110, remotely on a computer of one or more remote computer systems 150 (e.g., one or more third party providers' one or more server systems accessible by the computer 110 via the network 140) or both locally and remotely. Thus, a CAD program 116 can be two or more programs that operate cooperatively on two or more separate computer processors in that one or more programs 116 operating locally at computer 110 can offload processing operations (e.g., generative design and/or numerical simulation operations) “to the cloud” by having one or more programs 116 on one or more computers 150 perform the offloaded processing operations.
The CAD program(s) 116 present a user interface (UI) 122 on a display device 120 of the computer 110, which can be operated using one or more input devices 118 of the computer 110 (e.g., keyboard and mouse). Note that while shown as separate devices in
In the example shown, an initial 3D model 132 is a seed model for input to a generative design process. In this example, the user 160 has defined a mechanical problem, for a generative design process to operate on, to produce a new 3D model from a starting 3D model 132. In this case, the defined problem is the Michell type arch problem, where the user 160 has specified a domain 134 and loading cases 136. However, this is but one of many possible examples.
In some implementations, the user 160 (or other person or program) can specify a design space for an object to be manufactured, a numerical simulation setup (e.g., load(s) and material(s)) for numerical simulation (e.g., FEA, CFD, Acoustics/Noise Control, thermal conduction, computational injection molding simulations, electric or electro-magnetic flux, material solidification, etc.) of the object, at least one design objective (e.g., minimize material usage) for the object, and at least one design constraint (e.g., a volume constraint) for the object. In some implementations, the inputs for use in numerical simulation and generative design processes can include one or more regions of a current 3D model in which to generate new 3D geometry, loading case(s) defining one or more loads in one or more different directions to be borne by a physical structure being designed, one or more materials (e.g., one or more isotropic solid materials identified as a baseline material model for the design space), one or more seed model types to use as input to a generative design process, one or more generative design processes to use, and/or one or more lattice topologies to use in one or more regions of the design space. Inputs to the generative design and numerical simulation processes can include non-design spaces, different types of components (e.g., rods, bearings, shells), one or more target manufacturing processes and associated parameters, obstacle geometries that should be avoided, preserve geometries that should be included in the final design, and parameters related to various aspects, such as resolution of the design, type of synthesis, etc.
Moreover, the CAD program(s) 116 provide user interface elements in the UI 122 to enable the user 160 to specify the various types of inputs noted above, and all (or various subsets) of these inputs can be used in the generative design and numerical simulation processes described in this document. Further, the user 160 can be enabled by the UI 122 of the CAD program(s) 116 to design a part using traditional 3D modelling functions (to build precise geometric descriptions of the 3D design model) and then use generative design and simulation processes in a design space specified within one or more portions of the 3D design model. Thus, as will be appreciated, many possible types of physical structures can be designed using the systems and techniques described in this document, the UI 122 can be used to create a full mechanical problem definition for a part to be manufactured, and the generative design and numerical simulation processes can accelerate new product development by enabling increased performance without time consuming physical testing.
Further, as described herein, the CAD program(s) 116 implement at least one generative design process, which enables the CAD program(s) 116 to generate one or more portions of the 3D model(s) automatically (or the entirety of a 3D model) based on design objective(s) and constraint(s), i.e., design criteria, where the geometric design is iteratively optimized based on simulation feedback. Note that, as used herein, “optimization” (or “optimum”) does not mean that the best of all possible designs is achieved in all cases, but rather, that a best (or near to best) design is selected from a finite set of possible designs that can be generated within an allotted time (e.g., as specified by a predefined number of shape modification iterations) given the available processing resources. The design criteria can be defined by the user 160, or by another party and imported into the CAD program(s) 116. The design criteria can include structural integrity constraints for individual parts (e.g., a requirement that a part should not fail under the expected structural loading during use of the part) and physical constraints imposed by a larger system (e.g., a requirement that a part be contained within a specified volume so as not to interfere with other part(s) in a system during use).
Various generative design processes can be used, which can optimize the shape and topology of at least a portion of the 3D model. The iterative optimization of the geometric design of the 3D model(s) by the CAD program(s) 116 involves topology optimization, which is a method of light-weighting where the optimum distribution of material is determined by minimizing an objective function subject to design constraints (e.g., structural compliance with volume as a constraint). Topology optimization can be addressed using a variety of numerical methods, which can be broadly classified into two groups: (1) material or microstructure techniques, and (2) geometrical or macrostructure techniques. Microstructure techniques are based on determining the optimum distribution of material density and include the Solid Isotropic Material with Penalization (SIMP) method and the homogenization method. In the SIMP method, intermediate material densities are penalized to favor either having ρ=0 or ρ=1, denoting a void or a solid, respectively. Intermediate material densities are treated as composites in the homogenization method.
In contrast, macrostructure techniques treat the material as being homogeneous, and the three dimensional topology of the modeled object being produced is represented as one or more boundaries between one or more solid regions (having the homogenous material therein) and one or more void regions (having no material therein) within the design space (also referred to as the domain or a sub-space of the domain for topology optimization). The shape(s) of the one or more boundaries are optimized during the generative design process, while the topology is changed in the domain as a result of the shape optimization in combination with adding/removing and shrinking/growing/merging the void region(s). Thus, the types of final optimized topologies that can result from a generative design process using a macrostructure technique can depend significantly on the number and sizes of voids within the seed geometry along with the addition and removal of voids during the optimization process.
Note that, while only one seed model 132 is shown in
In various implementations, the CAD program(s) 116 provide generative design shape optimization processes (1) with controlled convergence, (2) with damage prevention over loading cycles, (3) with size limited fatigue damage, (4) using a build material strength model, and/or (5) with singularities and disconnection prevention, as described herein. In some implementations the CAD program(s) 116 implement all of the above-listed processes, while in other implementations the CAD program(s) 116 implement a subset of the above-listed processes.
Once the user 160 is satisfied with a generatively designed 3D model, the 3D model can be stored as a 3D model document 130 and/or used to generate another representation of the model (e.g., an .STL file for additive manufacturing). This can be done upon request by the user 160, or in light of the user's request for another action, such as sending the 3D model 132 to an additive manufacturing (AM) machine 170, or other manufacturing machinery, which can be directly connected to the computer 110, or connected via a network 140, as shown. This can involve a post-process carried out on the local computer 110 or a cloud service to export the 3D model 132 to an electronic document from which to manufacture. Note that an electronic document (which for brevity will simply be referred to as a document) can be a file, but does not necessarily correspond to a file. A document may be stored in a portion of a file that holds other documents, in a single file dedicated to the document in question, or in multiple coordinated files.
In any case, the CAD program(s) 116 can provide a document 135 (having toolpath specifications of an appropriate format) to the AM machine 170 to produce a complete structure 138, which includes the optimized topology and shape (in this example, an arch design generated for the Michell type arch problem). The AM machine 170 can employ one or more additive manufacturing techniques, such as granular techniques (e.g., Powder Bed Fusion (PBF), Selective Laser Sintering (SLS) and Direct Metal Laser Sintering (DMLS)), extrusion techniques (e.g., Fused Deposition Modelling (FDM), which can include metals deposition AM). In addition, the user 160 can save or transmit the 3D model for later use. For example, the CAD program(s) 116 can store the document 130 that includes the generated 3D model.
In some implementations, subtractive manufacturing (SM) machine(s) 174 (e.g., a Computer Numerical Control (CNC) milling machine, such as a multi-axis, multi-tool milling machine) can also be used in the manufacturing process. Such SM machine(s) 174 can be used to prepare initial work-pieces on which AM machine(s) 170 will operate. In some implementations, a partially complete structure 138 is generated by the AM machine(s) 170 and/or using casting methods (e.g., investment casting (IC) using ceramic shell or sand casting (SC) using sand cores), and this partially complete structure 138 then has one or more portions removed (e.g., finishing) by the CNC machine 174 in order to form the completed structure. In some implementations, the CAD program(s) 116 can provide a corresponding document 135 (having toolpath specifications of an appropriate format, e.g., a CNC numerical control (NC) program) to the SM machine 174 for use in manufacturing the part using various cutting tools, etc. Moreover, in some implementations, the complete structure 138 is produced in its entirety using SM machine(s) 174.
In various implementations, the CAD program(s) 116 of the system 100 can implement one or more generative design processes as described in this document.
Generative design processes seek an optimal geometric shape, topology, or both. For example, generative design processes seek an optimal geometric shape among alternative designs by minimizing a performance-related objective function subject to constraints:
where s is a vector of design variables related to a geometric shape of the domain, and u is a vector of state variables (e.g., displacement) that depend on s. Additional constraints (e.g., equilibrium) are denoted by a set g. For simplicity, equality constraints are assumed here. Mathematical programming methods used to minimize (1) can be gradient-based or non-gradient-based. Gradient-based methods (versus non-gradient-based methods) generally use more information associated with design sensitivity, for example:
which is a derivative of the performance-related objective function with respect to the design variables. In lattice-based methods, s represents a lattice thickness. In level-set based topology optimization methods, s represents a boundary of a solid region.
Additional design variables are possible, such as (1) a design space for generative design geometry production, e.g., a boundary representation (B-Rep) 3D model designed or loaded into CAD program(s) 116 that serves as a sub-space of an optimization domain of a described generative design process, and/or (2) a set of input solids that specify boundary conditions for generative design geometry production, e.g., B-Reps selected using UI 122 to specify sub-space(s) that are preserved for use as connection point(s) with other component(s) in a larger 3D model or separate 3D model(s). Different combinations of design variables can be used, e.g., by CAD program(s) 116 in response to input from the user 160. For example, a user 160 may select different generative design synthesis methods to use within respective different design spaces within a single 3D model.
Other design variables can include a setup for numerical simulation, e.g., densities of elements in an FEA model or a homogenized lattice material representation for a selected lattice topology to be used with a topology optimized 3D shape of the part being generatively designed. The design variables can include various design objectives and constraints, such as described in this document. Furthermore, functions can be provided, e.g., by CAD program(s) 116, that assist the user in specifying design variables. For example, a lattice recommender can provide predictions for suitable lattice settings for a given problem using a single solid simulation. In some implementations, the lattice recommender described in PCT Publication No. WO 2017/186786 A1, filed 26 Apr. 2017, titled “METHOD AND SYSTEM FOR GENERATING LATTICE RECOMMENDATIONS IN COMPUTER AIDED DESIGN APPLICATIONS,” which is hereby incorporated by reference, is used.
With the generative design variables specified, one or more 3D model(s) are produced 185, e.g., by CAD program(s) 116, using one or more generative design processes (e.g., using one or more selected generative design synthesis methods). In some implementations, the one or more generative design processes can use described level-set methods, where s, from Equations 1, 2 & 3, represents a boundary of a solid region that is implicitly represented using one or more level-sets, which are signed distance values computed on a Cartesian background grid. In a level-set-based topology optimization method, the outer shape of a structure is represented by a one-dimensional high-level level set function, and a change in shape and configuration is replaced by a change in the level set function value, so as to obtain an optimum structure. The level set function refers to a function that indicates whether each part of the design domain where the initial structure is set corresponds to a material domain (material phase) that forms the structure and is occupied by a material, a void domain (void phase) where a void is formed, or a boundary between these two domains, wherein a predetermined value between a value representing the material domain and a value representing the void domain represents the boundary between the material domain and the void domain.
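For illustration, a minimal Python sketch of a level-set representation on a Cartesian background grid follows; the grid size, the spherical starting shape, and the convention that negative values lie inside the material are assumptions of the sketch.

import numpy as np

n = 64
axis = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
center, radius = (0.5, 0.5, 0.5), 0.3
# Signed distance to a sphere: negative inside the material, positive in the void.
psi = np.sqrt((X - center[0])**2 + (Y - center[1])**2 + (Z - center[2])**2) - radius

material = psi < 0.0                     # material domain (material phase)
void = psi > 0.0                         # void domain (void phase)
spacing = axis[1] - axis[0]
boundary_band = np.abs(psi) < spacing    # grid points near the zero level set (boundary)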
In some implementations that use level-set based topology optimization methods, one or more octree data structures are used for resolving geometry accurately. Level-set based topology optimization generally involves optimizing the shape of a design domain using a shape derivative, which is the derivative of a constrained minimization problem with respect to the shape. The shape changes are applied on a level-set, which allows topology changes during shape modifications. The outcome of this type of generative design process is the partitioning of the design space into solid and void regions, resulting in an optimized shape, often with topology changes. For this type of level-set-based topology optimization, as well as the variations on this type of level-set-based topology optimization described in this document, one or more of the following approaches can be used.
Linear Elastic Topology Optimization
Consider the linear elastic boundary value problem for a solid body with the domain Ω:
−∇·(Dε(u)) = f in Ω (4)
u = 0 on ΓD (5)
(Dε(u))·n = t̄ on ΓN (6)
where ε(u) is the linear strain tensor, D is the fourth order constitutive tensor, u is the displacement vector, f is the external load vector, and t̄ is the prescribed traction on the Neumann boundary ΓN.
where compliance minimization can be used as the objective function
xt = x + tδv, t ≥ 0 (12)
where δv is a prescribed constant vector field, and t is a scalar parameter (see
More than one approach can be used to obtain the directional derivative of the objective function for use in gradient based optimization methods. Approaches that are suitable for use with gradient based optimization methods include direct differentiation, semi-analytical derivatives, adjoint method, and finite difference. Moreover, further approaches to obtaining values for a directional derivative of an objective function, including approximation techniques, are described in detail below in connection with
Adjoint Method
Evaluating the shape derivative (Equation 13) can require the directional derivative of the state variable u in the direction of the velocity vector δv. This can be seen by using the chain rule
But in some implementations, an adjoint method can be used, which involves the formation of a Lagrangian L(Ω, u, λ) that depends on the domain shape Ω, the displacement field u, and Lagrange parameters λ
L(Ω, u, λ) = J(Ω, u) + λ[∫Ω (f + ∇·Dε(u)) dΩ − ∫ΓN (Dε(u)·n − t̄) dΓ] (15)
The stationary condition for the Lagrangian, i.e., δL(Ω, u, λ)=0, can yield a complete set of shape optimization equations. For example, the adjoint problem for compliance minimization (Equation 11) can be given by considering the variation of the Lagrangian with respect to the displacements u. After introducing the cost function (Equation 11) and reformulating the domain term with the divergence theorem
∫Ω f·δu dΩ + ∫ΓN t̄·δu dΓ + ∫Ω σ(λ):ε(δu) dΩ = 0 (16)
the corresponding boundary value problem, referred to as the adjoint problem, can become
−∇·σ(λ) = −f in Ω (17)
λ = 0 on ΓD (18)
σ(λ)·n = −t̄ on ΓN (19)
This can lead to determining that λ=−u is the solution of the adjoint problem. This means that the adjoint problem (Equations 17-19) does not need to be solved explicitly for the compliance minimization problem (Equation 11). Such problems are called self-adjoint, where the solution of the direct problem also yields the adjoint solution. However, this is not often the case and different adjoint problems may have to be solved depending on the nature of the direct problem and the objective function. An advantage of the use of the Lagrangian as is includes the identity:
This equation can enable the shape derivative (Equation 13) to be expressed as a boundary integral of the following form:
Without loss of generality, it can be assumed that some boundary variations are not relevant in practical shape optimization. In solid mechanics, the boundary variations can usually be of the form:
This means that only parts of the boundary ΓN with no traction are free to move during the shape optimization. In this context, the variation of the Lagrangian (Equation 15) in the direction δv with structural compliance (Equation 11) as the cost function can become:
Without restricting δv as stated in Equation 22, the variation of the Lagrangian can contain several more terms. During iterative optimization of the shape, the shape derivative (Equation 23) can be used as gradient information. In order to achieve maximum decrease in the objective function, the boundary perturbation can be chosen as follows
δv = −(2u·f − Dε(u):ε(u)). (24)
This boundary perturbation can be applied along the direction of the normal, δv̂ = vn, where v is the shape change velocity and is given by
v = −(2u·f − Dε(u):ε(u)) (25)
Volume Control
Topology optimization using only a compliance minimization objective (Equation 11) can result in the optimum topology covering the full design space. Thus, some form of volume constraint is often required. Moreover, in some implementations, control over volume changes during topology optimization can be important for several reasons: 1) to enforce volume constraints; 2) to provide user control of topology optimization progress, e.g., more volume changes during initial iterations and fewer volume changes during later iterations; and 3) to ensure that arbitrary constraints not having shape derivatives are satisfied.
Note that the presence of shape derivatives for constraints can require modifying the shape change velocity in Equation 25. A modified objective function can be considered where the volume is penalized by a penalty parameter μ in:
The corresponding shape derivative (Equation 25) can then be given by:
wherein μ is constant along the boundary. The velocity term in the shape derivative (Equation 25) can now have an additional term as follows:
v = −(2u·f − Dε(u):ε(u) + μ) (28)
Augmented Lagrangian Method
In some implementations, an Augmented Lagrangian method is used. Some approaches (see Volume Control above) can have limitations such as creating difficulty in meeting prescribed volume targets. Essentially, the final volume of the design can depend on the value of μ prescribed in Equation 26. In such cases, meeting prescribed design constraint targets can be achieved by using the Augmented Lagrangian method. Consider the following Lagrangian for compliance minimization with a final volume target of Vƒ(Ω):
The shape derivative can then be given by:
where the penalty parameters λ, μ can be updated in an increasing sequence such that they converge to the optimal Lagrange multipliers. In some implementations, one or more heuristic methods are used for updating penalty parameters.
Body-Fitted Solvers
In some implementations, one or more body-fitted mesh based solvers are used. Using such body-fitted mesh based solvers with the level-set method involves mapping data from the solid mesh to a Cartesian grid (note that the inverse mapping is trivial due to the structured nature of the Cartesian grid). This involves two mappings as shown in
Data in solid mesh elements (e.g., strain energy, Von Mises stress) can first be mapped to solid mesh nodes. This mapping can be achieved by data averaging. For example, by averaging solid mesh element ei data in the solid mesh 262 at solid node nj. Further, data in the solid mesh nodes can be mapped to voxel grid points using linear shape functions. Data at solid mesh nodes nj can be linearly interpolated to level-set grid point gi in the level-set grid 264. Having such a mapping can allow the level-set method to be used with complex FEA models solved with body-fitted solvers.
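The two mappings can be sketched in Python as follows; the array layouts and function names are assumptions of this sketch.

import numpy as np

def element_to_node_average(elem_data, elem_nodes, num_nodes):
    # Map element data (e.g., strain energy, Von Mises stress) to solid mesh
    # nodes by averaging over the elements adjacent to each node.
    nodal = np.zeros(num_nodes)
    counts = np.zeros(num_nodes)
    for value, nodes in zip(elem_data, elem_nodes):
        nodal[nodes] += value
        counts[nodes] += 1.0
    return nodal / np.maximum(counts, 1.0)

def node_to_grid_point(nodal_values, node_ids, shape_functions):
    # Map nodal data to one level-set grid point by linear interpolation with
    # the shape functions of the solid element containing that grid point.
    return float(np.dot(shape_functions, nodal_values[node_ids]))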
A detailed example of a macrostructure topology optimization process is now described. For simplicity of presentation, the compliance minimization problem is used with a penalized volume (Equation 26). Moreover, for all the detailed examples below, it is presumed that FEA is used for numerical simulation for ease of presentation, but the other numerical simulation types noted above can also be used.
The starting shape can simply be the design space or the intersection of the design space with a suitable seed geometry. The initial level-set ψ0 can be created by converting the starting shape to a signed distance field (SDF). The conversion can be done using an implementation of an OpenVDB toolkit. Other methods are also possible. Next, the shape can be iteratively optimized until the objective function has converged. The FEA model used for simulation includes solid elements throughout the solid region(s) of the generative design space being optimized. In each iteration, the constitutive model D of each element e in the FEA model can be updated by changing the modulus of elasticity according to the relative location of the element with respect to the current level-set. For example, elements outside the level-set (called void elements) are given very low stiffness Dv, while the constitutive model of those inside is set to that of the original solid material Ds. This can be achieved by checking the average level-set value of the element nodes:
De = Ds if (1/Ne)Σj ψ(nj) < 0, and De = Dv otherwise
where nj denotes the coordinates of the element nodes. Once the FEA model is up-to-date with the current geometry represented by the level-set, the boundary value problem (Equations 4-6) can be solved in order to compute an advection velocity. Shape changes can be applied by advecting the level-set ψ using, for example, the Hamilton-Jacobi equation:
∂ψ/∂t + v|∇ψ| = 0
where v is the shape derivative (Equation 28). Note that one or more heuristic shape update methods can be used, and one or more methods to move the 0th iso-contour of a level-set can be used given a movement direction specified by the shape derivative. A linear mapping between the FEA nodes and the level-set grid points (see Body-Fitted Solvers above) can allow the FEA results, e.g., Dε(u):ε(u), to be transferred to the level-set where the advection velocity v is computed.
Once the objective function has converged, the surface of the final level-set can be extracted using a contouring method which extracts the 0th iso-contour of ψ. An example of the algorithm is as follows:
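A minimal Python sketch of such an optimization loop follows; the caller-supplied callables stand in for the numbered steps of the algorithm referenced below (FEA solve with any adjoint problems, advection velocity computation, level-set advection, objective evaluation) and are placeholders assumed for this sketch rather than functions defined in this specification.

def topology_optimize(psi0, update_materials, solve_fea, advection_velocity,
                      advect, objective, max_iters=200, tol=1e-4):
    psi, previous = psi0, None
    for _ in range(max_iters):
        model = update_materials(psi)            # solid Ds inside, void Dv outside
        u = solve_fea(model)                     # Equations 4-6, plus any adjoint solves
        v = advection_velocity(psi, model, u)    # shape derivative mapped to the grid
        psi = advect(psi, v)                     # e.g., Hamilton-Jacobi update of psi
        j = objective(model, u)                  # e.g., compliance with volume penalty
        if previous is not None and abs(j - previous) < tol:
            break                                # objective function has converged
        previous = j
    return psi   # final level-set; the 0th iso-contour is extracted by contouring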
Note that generalizing this algorithm for arbitrary objectives and constraints can require modifications, such as the following. In line 6, any adjoint problems (see Adjoint Method above) needed for computing shape derivatives of each objective and constraint should be addressed. In line 7, the augmented Lagrangian method (see Augmented Lagrangian Method above) should be used for combining the different shape derivatives to yield a single advection velocity. For further details regarding implementing these modifications to generative design processes generally, see Augmented Lagrangian Algorithm for Constrained Shape Optimization below.
Geometry
Let Σ be a smooth, watertight, oriented surface in Euclidean space, with normal vector field NΣ pointing in the direction of “free space” outside of Σ. A solid object can be created out of Σ by “thickening” it in the negative normal direction. That is, a small h > 0 can be chosen to define
Ωh := {x − sNΣ(x): x ∈ Σ, s ∈ [0, h]} (33)
In plain English, Ωh consists of all points sandwiched between Σ and an offset of Σ by a distance h in the negative normal direction, which is denoted
Σh:={x−hNΣ(x): for all x∈Σ} (34)
The boundary ∂Ωh of Ωh consists of two disjoint surfaces: Σ itself and the offset surface Σh.
Note that what is commonly referred to as the “outward” unit normal vector field of ∂Ωh (which points from inside the solid material defined by Ωh to outside the solid material) is equal to NΣ on Σ but points in the opposite direction to NΣ on Σh. Its exact formula at y∈Σh is −NΣ(projΣ(y)) where projΣ is the mapping that takes a point y to its closest point on Σ—so for example, if it is known that y=x−hNΣ(x) and h is sufficiently small, then projΣ(y)=x.
Deformations of the Geometry
Σ can be allowed to deform. The deformation can be generated by a surface normal speed function which can be denoted as Θ⊥: Σ → ℝ. Such a deformation can be achieved in a number of ways: for example, representing Σ as the zero level set of a function on the background Euclidean space and using advection with respect to (an extension of) Θ⊥. The methods can be equivalent to first order in the deformation. What matters is the infinitesimal variation of Σ itself, which is precisely the surface normal speed function. Any deformation of Σ will have a notion of the “magnitude” of the deformation, which is a positive scalar ε. For example, if the deformation is generated by advection, then the magnitude of the deformation corresponds to the advection time. The deformed surface can be denoted by Σε.
Once Σ deforms, then Σh deforms as well. It is simply “dragged along” in such a way that the offset distance h is maintained between Σε and the deformed version of Σh. Consequently the deformation of the thickened object Ωh is completely determined by the deformation of Σ. Note that it can be shown that the infinitesimal variation of Σh at a point y∈Σh is just −Θ⊥ (projΣ(y)) where projΣ is the mapping that takes a point y to its closest point on Σ.
Optimization by Steepest Descent
An optimal Σ can be found for some objective function Φ: Surfaces → ℝ using a Steepest Descent method. This is based on the shape Taylor formula which says that the variation of Σ by a magnitude ε with respect to variations generated by some Θ⊥: Σ → ℝ as above satisfies
Φ(Σε)≈Φ(Σ)+εDΦΣ(Θ⊥) (35)
for sufficiently small ε, where DΦΣ(Θ⊥) symbolizes the shape derivative of Φ at Σ.
Using a formula for the shape derivative in terms of Θ⊥, a particular Θ⊥ can be chosen such that the shape derivative term in the shape Taylor formula is guaranteed to be negative. As a result, the objective function will decrease if Σ is varied with respect to this chosen Θ⊥, at least for a sufficiently small ε. After Σ is updated by performing the variation (e.g., by advection of the level set function representing Σ), the updated shape represents an improvement with respect to the objective function. For further improvement, the procedure can be iteratively repeated until convergence occurs.
Shape Derivative for a Class of Objective Functions
Consider objective functions of the following form. Let Φ0: Domains → ℝ be some “volumetric” shape function (e.g., one which can be evaluated on a volumetric domain) such as a convex linear combination of the averaged structural compliance with respect to a set of load cases and the total mass. This allows definition of a “surface” shape function by
Φ(Σ)=Φ0(Ωh) (36)
It can be assumed that it is known how to calculate the shape derivative of Φ0 at an arbitrary domain Ω and with respect to an arbitrary variation Ωε of Ω generated by a boundary normal speed function V⊥: ∂Ω → ℝ with respect to the outward unit normal vector field of Ω. That is, Φ0 can be standard, meaning that the shape derivative satisfies the Hadamard-Zolésio structure theorem, providing the formula
Φ0(Ωε) ≈ Φ0(Ω) + εDΦ0,Ω(V⊥) where DΦ0,Ω(V⊥) := ∫∂Ω GΩ(x)V⊥(x)dσ(x) (37)
for sufficiently small ε. The function GΩ of Ω is known as the shape gradient of Φ0 at the shape Ω. The precise form of GΩ depends on Φ0 and can be worked out by calculating the derivative
e.g., for averaged structural compliance and volume.
The shape derivative of Φ can be expressed in terms of the shape derivative of Φ0. This can be done by leveraging the above formula for the shape derivative of Φ0, applied to Ωh, almost exactly, and taking into account the nature of the boundary normal speed on the two disjoint surfaces making up ∂Ωh. That is: Σ can have the boundary normal speed V⊥:=Θ⊥, whereas Σh can have the boundary normal speed V⊥:=−Θ⊥∘projΣ. Then, the first-order change in Φ under a variation Σε of Σ generated by the normal speed function Θ⊥ is given by
Φ(Σε) ≈ Φ(Σ) + ε(∫Σ GΩh(x)Θ⊥(x)dσ(x) − ∫Σh GΩh(y)Θ⊥(projΣ(y))dσ(y))
Thus, the desired shape derivative formula is
DΦΣ(Θ⊥) := ∫Σ GΩh(x)Θ⊥(x)dσ(x) − ∫Σh GΩh(y)Θ⊥(projΣ(y))dσ(y)
Note that y can be used as the dummy integration variable on Σh simply to emphasize that the two integrals are over different spaces and cannot a priori be combined in any simple way.
Extracting a Descent Direction
Recall that the utility of the exact formula for the shape derivative is that it should allow Θ⊥ to be chosen in such a way that the shape derivative becomes negative, resulting in a decrease of the objective function to first order according to the shape Taylor formula. It is, however, not immediately clear how to do this for the shape derivative of Φ calculated above. This is due to the fact that there are two competing terms (e.g., the integral over Σ and the integral over Σh) and it is not clear how these balance out given Θ⊥.
There are two ways to proceed. The first way is to apply the Hilbert space method of extracting a descent direction. The second way is to apply a change of variables to the integral over Σh, allowing it to be expressed as an integral over Σ. This is simple: recall that any point y∈Σh can be written in the form y=x−hNΣ(x); for our purpose now, this should be viewed as a mapping from Σ to Σh. Call this mapping n:Σ→Σh, with n(x):=x−hNΣ(x). So by the change of variables formula for surface integrals
∫Σh GΩh(y)Θ⊥(projΣ(y))dσ(y) = ∫Σ GΩh(n(x))Θ⊥(x)Jac(x)dσ(x)
where Jac is the Jacobian of n, and the substitution is based on the fact that projΣ(x−hNΣ(x))=x for all x∈Σ.
It turns out that the Jacobian of n can be determined. With some help from differential geometry, it can be shown that Jac(x) = 1 + hHΣ(x) + h²KΣ(x) where HΣ is the mean curvature of Σ and KΣ is the Gauss curvature of Σ, providing
DΦΣ(Θ⊥) = ∫Σ (GΩh(x) − GΩh(n(x))(1 + hHΣ(x) + h²KΣ(x)))Θ⊥(x)dσ(x)
This manipulation of DΦΣ(Θ⊥) provides a nice formula for the shape gradient of Φ at Σ, denoted as
GΣ(x) := GΩh(x) − GΩh(n(x))(1 + hHΣ(x) + h²KΣ(x))
Consequently, the choice of Θ⊥=−GΣ makes DΦΣ(Θ⊥) negative, as required for our steepest descent procedure updating Σ towards optimality of Φ.
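For illustration, the shape gradient and descent direction above can be evaluated pointwise as in the following Python sketch; the arrays are assumed to be sampled at corresponding points x on Σ and n(x) on Σh, and the function names are assumptions of the sketch.

import numpy as np

def surface_shape_gradient(G_on_sigma, G_on_offset, H, K, h):
    # G_Sigma(x) = G_Omega_h(x) - G_Omega_h(n(x)) * (1 + h*H(x) + h^2*K(x)),
    # where H and K are the mean and Gauss curvatures of Sigma.
    jac = 1.0 + h * np.asarray(H) + h * h * np.asarray(K)
    return np.asarray(G_on_sigma) - np.asarray(G_on_offset) * jac

def descent_direction(G_on_sigma, G_on_offset, H, K, h):
    # Choosing Theta = -G_Sigma makes the shape derivative negative to first order.
    return -surface_shape_gradient(G_on_sigma, G_on_offset, H, K, h)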
Augmented Lagrangian Algorithm for Constrained Shape Optimization
Introduction
An algorithm can be developed that solves constrained shape optimization problems where a shape function is to be minimized (e.g., volume or compliance) subject to a mix of equality and inequality constraints. These constraints can use shape functions (e.g., a target volume, an aggregated stress measure, or an aggregated displacement measure). A different algorithm can be used for dealing with non-aggregated (a.k.a. pointwise) constraints, such as the norm of the stress at each point of the shape. Generically, the optimization problems can be of the form
minΩ∈Adm F(Ω, uΩ) (45)
subject to GE(Ω, uΩ) = 0 (46)
GI(Ω, uΩ) ≤ 0 (47)
for uΩ satisfying a linear elastic PDE in Ω
where Adm is a set of admissible shapes, F is a scalar-valued shape-differentiable shape function, and GE and GI are vector-valued shape-differentiable shape functions that can depend explicitly on the shape, or implicitly on the shape through its elastic response under loads.
The Classical Augmented Lagrangian Algorithm
Equality Constraints
The augmented Lagrangian method in the classical setting of optimizing a scalar function of a vector variable can apply most simply to equality-constrained optimization problems of the form
minx f(x) subject to gi(x) = 0 for i = 1, . . . , k (48)
The augmented Lagrangian method is an enhancement of a so-called penalty method. The penalty method considers a penalized objective function
Lc(x) := f(x) + (c/2)Σi gi(x)²
and a sequence c := ck tending to infinity. Then, one can define xk := argminx Lck(x) and, under suitable conditions, the iterates xk approach the solution of Equation 48 as k grows.
A problem with the penalty method is that the increasing sequence of c-values can make it harder and harder to minimize Lc(x) numerically due to poor conditioning. The augmented Lagrangian method can remedy this problem. This algorithm minimizes an “augmented Lagrangian”
Lc(x, μ) := f(x) + Σi μi gi(x) + (c/2)Σi gi(x)²
and also maintains a sequence of Lagrange multipliers μk along with increasing penalty parameters ck. Again, one can define xk := argminx Lck(x, μk), with the multipliers μk held fixed during this inner minimization.
The general framework for the augmented Lagrangian algorithm can require a method to increase the penalty parameters and a method for tightening tolerances
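A minimal Python sketch of this framework follows, using SciPy's general-purpose minimizer for the inner unconstrained problem; the specific schedules for increasing the penalty parameter and tightening the tolerance, and the function names, are assumptions of the sketch.

import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, grad_f, gs, grad_gs, x0, c0=10.0, growth=2.0,
                         tol0=1e-2, shrink=0.5, outer_iters=20, feas_tol=1e-6):
    x = np.asarray(x0, dtype=float)
    mu = np.zeros(len(gs))
    c, tol = c0, tol0
    for _ in range(outer_iters):
        def L(x, mu=mu, c=c):
            g = np.array([gi(x) for gi in gs])
            return f(x) + mu @ g + 0.5 * c * (g @ g)
        def dL(x, mu=mu, c=c):
            g = np.array([gi(x) for gi in gs])
            grad = np.asarray(grad_f(x), dtype=float).copy()
            for gi_val, mu_i, dgi in zip(g, mu, grad_gs):
                grad += (mu_i + c * gi_val) * np.asarray(dgi(x), dtype=float)
            return grad
        x = minimize(L, x, jac=dL, tol=tol).x   # minimize the augmented Lagrangian
        g = np.array([gi(x) for gi in gs])
        mu = mu + c * g          # Lagrange multiplier update (fixed-point iteration)
        c *= growth              # increase the penalty parameter
        tol *= shrink            # tighten the inner tolerance
        if np.linalg.norm(g) < feas_tol:
            break
    return x, mu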
Note the key feature of the above algorithm: the update of the Lagrange multiplier. It can be understood as follows. The unconstrained minimum of the augmented Lagrangian at iteration k satisfies ∇Lck(xk, μk) = 0, that is
∇ƒ(xk)+Σi(μki+ckgi(xk))∇gi(xk)=0 (50)
Of course, the solution x* and the Lagrange multipliers μi* for the equality-constrained optimization Equation 48 satisfy
∇f(x*) + Σi μi*∇gi(x*) = 0 (51)
gi(x*) = 0 for i = 1, . . . , k (52)
In the augmented Lagrangian algorithm, since it is desirable to ensure that the iterates xk converge to x*, having μki → μi* and gi(xk) → 0 is consistent with this aim. Therefore, the Lagrange multiplier update can be construed as a fixed-point iteration scheme of the form μk+1i := μki + ck gi(xk+1) designed to achieve this.
Inequality Constraints
An elegant extension of the classical equality-constrained augmented Lagrangian algorithm to the classical inequality-constrained case can hinge on two facts. For fact 1, the problem
minx f(x) subject to gi(x) ≤ 0 for i = 1, . . . , k (53)
is equivalent to the problem
minx,z f(x) subject to gi(x) + zi² = 0 for i = 1, . . . , k (54)
The new z-variables in Equation 54 are known as slack variables and satisfy zi = √(−gi(x*)) at the feasible optimum, where x* is the feasible optimum for Equation 53.
For fact 2, the augmented Lagrangian for Equation 54 is
Lc(x, z, μ) := f(x) + Σi μi(gi(x) + zi²) + (c/2)Σi (gi(x) + zi²)² (55)
and the unconstrained optimization arising in the augmented Lagrangian algorithm can be decomposed and partially solved as follows:
which has introduced the function gi+(x, μi, c) := max{gi(x), −μi/c}.
This is because the minimization can be performed over z ∈ ℝ explicitly using elementary calculus techniques, and the result can be phrased in terms of the gi+ function.
The upshot is that one can solve the inequality constrained Equation 53 by applying the augmented Lagrangian algorithm (see Equality Constraints above) to the modified Lagrangian function
whose gradient (after some manipulation) is
∇L̃c(x, μ) = ∇f(x) + Σi max{0, μi + cgi(x)}∇gi(x). (59)
Note that for completeness, the solution of the minimization over the z-variables appears in Equation 56. Substituting for this solution yields Equation 57. For simplicity, the minimization over the z-variables is rephrased in the form
minz a(s + z²) + b(s + z²)²
where a, b, s are fixed parameters, which can be solved as follows.
The first step is to substitute y := z² and replace the above minimization problem by the minimization of Φ(y) := a(s + y) + b(s + y)² over y ≥ 0. This new problem is now quite simple, since Φ(y) is a simple parabola, and the problem finds the minimum of this parabola constrained to the region y ≥ 0. Therefore the global constrained minimum is either at the global unconstrained minimum of Φ or else at the y = 0 boundary of the constrained region, whichever is smaller. This results in either y* = −a/(2b) − s or y* = 0, from which is selected the result for which Φ* := Φ(y*) is least. Consequently
after algebraic re-arrangement. With s = gi(x), a = μi, and b = c/2, Equation 57 is obtained.
Application to Constrained Shape Optimization Problems
The augmented Lagrangian algorithm can be applied to solve the Equations 45-47 with one scalar equality constraint and one scalar inequality constraint for simplicity. Then, Ω∈Adm can be interpreted to mean that Ω satisfies admissibility constraints of the following form. The surface of every Ω∈Adm contains pre-defined ports; every Ω∈Adm contains pre-defined keep-in regions; and every Ω∈Adm avoids pre-defined keep-out regions.
The classical augmented Lagrangian algorithm (see Equality Constraints above) can be mimicked for both equality and inequality constraints but can be adapted for shapes satisfying admissibility constraints. The augmented Lagrangian is defined as
Lc(Ω, μE, μI) = F(Ω) + μE GE(Ω) + (c/2)GE(Ω)² + μI GI+(Ω, μI, c) + (c/2)GI+(Ω, μI, c)² (62)
with GI+(Ω, μI, c) := max{GI(Ω), −μI/c}. The dependence of F, GE, GI on the solution uΩ of the linear elastic PDE has not been indicated for simplicity. Considering only shape functions that are surface or volume integrals of shape-dependent quantities, it is known from general principles that the shape gradient of a function L of this type, evaluated at a given shape Ω, is a scalar-valued function on ∂Ω that is denoted as dL(Ω): ∂Ω → ℝ. The shape gradient of the augmented Lagrangian at the shape Ω is
dLc(Ω, μE, μI) = dF(Ω) + (μE + cGE(Ω))dGE(Ω) + max{0, μI + cGI(Ω)}dGI(Ω) (63)
where dF(Ω), dGE(Ω), dGI(Ω) are the shape gradients of the objective and constraint functions, respectively, at the shape Ω.
It is also known from general principles that updating the shape Ω to reduce the value of Lc while maintaining the admissibility constraint can be achieved by advecting the level set function representing Ω for a certain time (determined via a line-search or related procedure) with respect to a speed function constructed from an extension of a projection of the shape gradient of Lc (e.g., the shape gradient of Lc can be zeroed out appropriately when a violation of admissibility is detected, and the Adalsteinsson-Sethian velocity extension algorithm can be used to extend the values of dLc(Ω) from ∂Ω to a narrow band around ∂Ω). This is the basis for an unconstrained, gradient-based shape optimization algorithm for solving the problem minΩ∈Adm Lc(Ω, μE, μI) at any fixed values of c, μE, μI.
The augmented Lagrangian algorithm for solving Equations 45-47 can now be given as follows. It requires a method to increase the penalty parameters and a method for tightening tolerances.
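A Python sketch of this shape-level loop follows (one scalar equality and one scalar inequality constraint, as above); the inner routine advect_to_minimize is a caller-supplied placeholder for the gradient-based shape update described earlier, and the multiplier, penalty, and tolerance schedules are assumptions of the sketch consistent with the shape gradient in Equation 63.

def shape_augmented_lagrangian(omega0, advect_to_minimize, F, G_E, G_I,
                               c0=1.0, growth=2.0, outer_iters=20, feas_tol=1e-4):
    omega, mu_E, mu_I, c = omega0, 0.0, 0.0, c0
    for _ in range(outer_iters):
        def Lc(shape, mu_E=mu_E, mu_I=mu_I, c=c):
            gE, gI_plus = G_E(shape), max(G_I(shape), -mu_I / c)   # G_I^+ term
            return (F(shape) + mu_E * gE + 0.5 * c * gE * gE
                    + mu_I * gI_plus + 0.5 * c * gI_plus * gI_plus)
        # Advect the level set to (approximately) minimize Lc at fixed multipliers,
        # while respecting the admissibility constraints (ports, keep-in, keep-out).
        omega = advect_to_minimize(omega, Lc)
        gE, gI = G_E(omega), G_I(omega)
        mu_E = mu_E + c * gE                 # equality multiplier update
        mu_I = max(0.0, mu_I + c * gI)       # inequality multiplier update
        c *= growth                          # increase the penalty parameter
        if abs(gE) < feas_tol and gI < feas_tol:
            break
    return omega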
Pre-Advection Operations
In some implementations, one or more operations are performed on the velocity field before the shape is advected, including (1) narrow-band velocity restriction, (2) advection prevention using an advection mask, and/or (3) velocity extension. For the first of these, narrow-band velocity restriction involves restricting the velocity to a narrow-band around the 0th iso-contour of the level-set. For the second, the advection mask has values of 0 inside the ports (geometric interfaces containing Neumann and Dirichlet boundary conditions) and values of 1 elsewhere in the domain. The shape derivatives and the advection velocity are multiplied by the advection mask to prevent any advection inside the ports. With regard to the third of these, the velocity field should be continuous on both the negative and positive sides of the 0th iso-contour of the level-set. However, the objective function (typically strain energy) is often only available inside the negative narrow-band. Velocity extension projects all velocities inside the domain to the positive narrow-band. This can be done by sampling the velocity at a point inside the domain, found by moving along the negative normal direction a distance equal to the level-set value:
v(x) = v(x − ψ(x)n(x)), x ∉ Ω.
An example of the first and third operations is shown in
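The three operations can be sketched in Python as follows; the grid-based data layout and the caller-supplied interpolation routine are assumptions of this sketch.

import numpy as np

def narrow_band_restriction(v, psi, band_width):
    # (1) Restrict the velocity to a narrow band around the 0th iso-contour.
    return np.where(np.abs(psi) < band_width, v, 0.0)

def apply_advection_mask(v, advection_mask):
    # (2) The advection mask is 0 inside the ports and 1 elsewhere, so
    #     multiplying by it prevents any advection inside the ports.
    return v * advection_mask

def extend_velocity(points, psi, normals, sample_velocity):
    # (3) Velocity extension to the positive narrow band: sample the velocity at
    #     the interior point x - psi(x) * n(x) for each point x outside the domain.
    return np.array([sample_velocity(x - d * n)
                     for x, d, n in zip(points, psi, normals)])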
The above approaches can be used with various types of level-set-based topology optimizations, which are described further below in connection with
If the design is rejected, the process of
Once a design is not rejected 190, the process of
Note that the 3D model that is provided 195 can be the 3D model produced 185 by a generative design synthesis method or a post-processed version of the generative design output. For example, in some implementations, a 3D mesh model produced by a generative design synthesis method can be converted into a watertight B-Rep 3D model before being provided 195. In some implementations, a generative design output can be post-processed using the systems and techniques described in U.S. Patent Application No. 62/758,053, filed Nov. 9, 2018, and titled “CONVERSION OF GENERATIVE DESIGN GEOMETRY TO EDITABLE AND WATERTIGHT BOUNDARY REPRESENTATION IN COMPUTER AIDED DESIGN,” which systems and techniques are included in U.S. Pat. No. 11,016,470, issued on May 25, 2021, which claims priority to U.S. Patent Application No. 62/758,053 and is titled “CONVERSION OF MESH GEOMETRY TO WATERTIGHT BOUNDARY REPRESENTATION.” Further, in some implementations, the post-processed generative design output can be edited using the systems and techniques described in U.S. application Ser. No. 16/186,136, filed on Nov. 9, 2018, issued as U.S. Pat. No. 10,467,807 on Nov. 5, 2019, and titled, “FACILITATED EDITING OF GENERATIVE DESIGN GEOMETRY IN COMPUTER AIDED DESIGN USER INTERFACE.” Moreover, the generative design processes described below in connection with
When the generative process to be used employs a basic level-set method for topology optimization, the level-set is initiated 300 for the design space. Note that a level-set method is an example of a macrostructure generative modeling technique, where the generative model represents the object being designed as one or more boundaries between one or more solid regions (having the homogenous material therein) and one or more void regions (having no material therein) within the design space. Further, identifying the inputs can involve receiving user input, e.g., via UI 122 on display device 120, importing information from another program or a third party source, and/or one or more of these inputs can be predefined in a given implementation.
The setup for numerical simulation can include one or more physical properties to be simulated and one or more types of simulation to be performed, as discussed above, as well as potentially surrogate modelling or other methods of approximation. In some implementations, the type of numerical simulation is predefined, either for all uses of the program or given a particular context in the program from which the generative design process has been launched. Further, the setup for numerical simulation includes at least one set of loading conditions and/or other physical environment information associated with the type of numerical simulation to be performed.
The design space can be an initial 3D model or one or more portions of an initial 3D model to be used as a starting geometry. In some cases, the design space can be determined as the bounding volume of all initial geometry used to specify loading or other physical environment information associated with the type of numerical simulation to be performed. In some cases, the design space can be unbounded. In some implementations, the portion of the design space to be used as starting geometry can be automatically set by a genetic algorithm or other process. For example, bubble-like holes (e.g., holes 132B) can be put in the domain and a genetic algorithm can be used to vary the bubble size and spacing.
Seeding and the Bubble Method
Initial Seeding
The design space can be initialized using a seeding process, in which the initialized design space is defined by a Boolean intersection between the design space Ω and a seed geometry Ωs, shown as follows:
Ω0=Ω∩Ωs (64)
where Ω0 is the resulting initialized domain after the seeding geometry Ωs is applied to the initial design space Ω. The seed geometry can be of a variety of different shapes, e.g., an array of bubbles or a mesh, with parametrized characteristics, e.g., bubble diameter and spacing. The parameters can be user-defined or defined automatically by some seeding process, e.g., bubble seeding, described below.
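For illustration only, a minimal sketch of Equation 64 on a voxel grid follows. It assumes level-set values are negative inside solid material, so that the Boolean intersection of two implicit domains is the pointwise maximum of their level-set functions; the example seed geometry (the design space minus an array of spherical "bubble" holes), the spacing, and the radius are illustrative assumptions.

```python
import numpy as np

def seed_with_bubbles(psi_design, spacing=16, radius=5.0):
    """Equation 64 on level-sets: intersect the design space with a seed geometry."""
    shape = psi_design.shape
    grid = np.indices(shape).astype(float)

    # Level-set of the union of bubble holes: negative inside any bubble.
    psi_holes = np.full(shape, np.inf)
    axes = [np.arange(spacing // 2, s, spacing) for s in shape]
    centers = np.stack(np.meshgrid(*axes, indexing='ij'), axis=-1).reshape(-1, len(shape))
    for c in centers:
        dist = np.sqrt(sum((g - ci) ** 2 for g, ci in zip(grid, c)))
        psi_holes = np.minimum(psi_holes, dist - radius)

    psi_seed = -psi_holes                      # seed geometry: everywhere except the holes
    return np.maximum(psi_design, psi_seed)    # Boolean intersection of implicit domains
```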
Initial seeding can result in more efficient optimization and flexibility in design variations. For example, initial seeding can be defined so as to avoid local minima and the need to restart optimizations. Further, initial seeding can aid in creating design variations. In some implementations, a seed geometry is user-defined, e.g., according to a final geometry generated from a previous execution of the design process, according to a random process that randomly initializes the seeding geometry, according to other factors of interest to the user, or some combination of the preceding.
Moreover, the other inputs can depend on the type of numerical simulation to be performed and/or the type of generative design process to be used. For example, when a lattice will be used in the generative design process, the other inputs can include lattice topology, volume fraction, unit size, and thickness. Various types of generative design processes can be used, either alone or in combination, but
Numerical simulation of the physical response of the current model (e.g., a level-set representation of the implicit surface of the 3D shape) is performed 302 under the one or more defined loading conditions. In general, the numerical simulation 302 treats each solid region in the current model as a homogenous solid material, and each void region in the current model as having no material therein. However, in some implementations, this treatment can be altered. For example, in hybrid topology optimization, the numerical simulation of the modelled object is performed 302 with at least a portion of the solid region being treated as having at least one void in it (e.g., what the macrostructure generative modeling technique treats as a solid, the numerical simulation treats as partially containing a void in the form of a lattice) or at least a portion of the void region being treated as having at least one solid in it (e.g., what the macrostructure generative modeling technique treats as a void, the numerical simulation treats as partially containing a solid material in the form of a lattice). As another example, in the case of hollow topology optimization, the numerical simulation of the modelled object is performed 302 with at least a portion of the solid region being treated as having a void in it (i.e., what the macrostructure generative modeling technique treats as a solid, the numerical simulation treats as containing a hollow region). Finally, in a combination of these two, in hybrid hollow topology optimization, the numerical simulation of the modelled object is performed 302 with at least a portion of the solid region being treated as having both a partial void region and a complete void in it (i.e., what the macrostructure generative modeling technique treats as a solid, the numerical simulation treats as partially containing a lattice structure around a hollow region). For further details regarding hybrid, hollow and hybrid-hollow topology optimization, see PCT/US2019/060089, filed Nov. 6, 2019, and published as WO 2020/097216 A1 on May 14, 2020.
Results from the simulation are used to update 304 the current model (e.g., the level-set representation) in accordance with a current numerical assessment of the simulated physical response of the current model. For example, shape change velocities can be calculated for the implicit surface in the level-set representation of the 3D shape of the object being modeled, and this level-set representation can be updated 304 using the calculated shape change velocities to produce an updated version of the 3D shape of the modeled object. Note that in various implementations, other operations can be performed before, during or after the updating 304, as part of the iterative modification of the 3D shape of the modeled object, such as described below in connection with
In addition, to produce topology changes in the current model of the object, one or more voids can be inserted 308 into the current model at each or selected ones of the iterative modifications of the current model. For example, one or more bubbles having positions, sizes and shapes determined from the current model can be inserted 308. Moreover, in some implementations, the insertion 308 of void(s) is done only during an earlier portion of the iterative modification and/or only periodically (e.g., at a regular void insertion interval) during the iterative modification. In the example of
Bubble Method
In general, an optional determining 306 and the inserting 308 can be performed as part of a bubble method. A bubble method allows for the topology of the design space to be changed from the inside, as default level-set methods only allow for changes from the boundary. A location for the bubble is identified using the topology derivative, for example computed using a topology-shape sensitivity method, which relates the shape derivative to the topology derivative.
In some implementations, a bubble method is applied with the following features:
1. Position: The shape derivative of the Lagrangian is used as a proxy for the topology derivative. The shape derivative is described in more detail below, with respect to Equation 105.
2. Frequency: Bubbles are not inserted at every iteration. Instead, bubbles are inserted according to user-defined or automatically determined intervals. Bubbles are not inserted after the volume reduction iterations have been completed.
3. Size: The size of an inserted bubble at a given iteration is computed according to a user-defined or automatically determined ratio βb of the current model volume:
V(Bt)=βbV(Ωt). (65)
where V(Bt) is the volume of the bubble inserted at an iteration t, and V(Ωt) is the volume of the model at the iteration t.
4. Shape: The shape of the inserted bubble is determined based on the distribution of the topology derivative for elements of the current model, as shown below:
where ek is the k-th element of the current model at a given iteration t. In some implementations, the shape of the inserted bubble is optimized while the overall size of the inserted bubbles increases over multiple iterations.
Then, the volumes of the elements with the lowest shape derivative are added up until the necessary bubble volume, i.e., according to Equation 65, is reached. The resultant shape, position, and volume of the bubble Bt generated at iteration t therefore satisfy the following:
Bt=e1∪e2∪ . . . ∪ek s.t. V(Bt)=V(e1)+V(e2)+ . . . +V(ek) (67)
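A minimal sketch of this element selection follows, assuming per-element arrays of shape derivative values (used as the proxy topology derivative) and element volumes that together cover the current model Ωt; the function name and the boolean-mask output are illustrative assumptions.

```python
import numpy as np

def select_bubble_elements(shape_derivative, element_volume, beta_b):
    """Pick the elements forming the bubble B_t per Equations 65 and 67."""
    target_volume = beta_b * element_volume.sum()        # V(B_t) = beta_b * V(Omega_t)
    order = np.argsort(shape_derivative)                 # lowest shape derivative first
    cumulative = np.cumsum(element_volume[order])
    k = int(np.searchsorted(cumulative, target_volume)) + 1
    bubble = np.zeros(shape_derivative.shape, dtype=bool)
    bubble[order[:k]] = True                             # B_t = e_1 U e_2 U ... U e_k
    return bubble
```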
Returning to
Although the bubble insertion history 321 shows bubbles inserted after a first iteration, bubbles can also be inserted during the first iteration of a generative design process. In some implementations, multiple instances of a bubble method can be executed, with different initial conditions and parameters that result in different design variations. A system implementing the described techniques can provide the results of the design variations for the user to select a preferred variation. In some implementations, the system can automatically select a design variation according to predetermined, e.g., user-provided, criteria.
Returning to
Controlled Convergence
In a controlled convergence technique, a pre-determined time period or number of iterations is defined as the shape and topology optimization begins. For each design constraint, a per-iteration target value is specified that is much closer to the current value of the constraint than the final target is.
This can result in at least two improvements. First, a user may specify the time period or number of iterations. The convergence rates are then controlled to solve a given problem within the user-specified time period or number of iterations. Therefore, a design process using controlled convergence can arrive at a suitable solution given resource restrictions imposed by the user in how much time is allotted to generating a solution. Second, in specifying a target value for each constraint closer to the current value for the constraint, the likelihood of oscillations and radical changes can be reduced.
An iteration amount is identified 418. The iteration amount can be a time period or a count of iterations, received from a user or otherwise calculated. A series of target values for the at least one design constraint is calculated 420, from an initial target value to a final target value, in accordance with the iteration amount and a function.
Let gj(s,u(s))−gT,nj=0 j=1, . . . , ng denote a sequence of constraints where gT,nj is the final target value at iteration n. Define a target value for each constraint to meet in iteration t as
where
and Nd(ξ) is a B-spline of degree d computed using the recurrence relation
where N0(ξ)=0.
Although this description assumes the use of B-splines as the function for the calculating 420, any suitable smooth function can be used to achieve the same result. Different smooth functions include different orders of functions of the same class, e.g., B-Splines of different orders. The choice of smooth function and the order of that function change how quickly the ending design constraint target value is achieved over the identified 418 iteration amount. Other examples of smooth functions that can be used include polynomials of any order, Lagrange polynomials, and subdivision curves. In some implementations, a system implementing the described techniques can prompt a user for a plurality of reference points, and in response to receiving user-selected reference points, the system can generate a curve, e.g., using interpolation or any appropriate technique, that passes through the points.
Note particularly that iterations are divided into two parts according to Equation 70. A first part, while the iterations are less than or equal to nv, includes different target values for different iterations in that first part. In the second part, i.e., after nv, the target value of the constraint at each iteration is the final target value gT,nj.
Graph 401A shows a constraint target gT,nj=0.35 from a starting constraint value of 0.9, nv=30, n=50, and B-Splines of order d=4. Graph 401B shows a constraint target gT,nj=0.85 from a starting constraint value of 0.25, nv=80, n=80, and B-Splines of order d=3. Therefore, optimization methods can be implemented with a variety of different starting and ending constraint values, as well as with different smooth functions and iteration amounts.
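The sketch below illustrates a controlled-convergence target schedule in the spirit of Equations 68-70, which are not reproduced here: the target moves smoothly from the starting constraint value to the final target over the first nv iterations and is held at the final target afterwards. The cubic smoothstep blend is a stand-in assumption for the B-spline weighting described above.

```python
def constraint_targets(g_start, g_final, n_v, n_total):
    """Smoothly ramp a constraint target over n_v iterations, then hold the final value."""
    targets = []
    for t in range(1, n_total + 1):
        if t <= n_v:
            x = t / float(n_v)
            w = x * x * (3.0 - 2.0 * x)          # smooth 0 -> 1 blend (cubic smoothstep)
            targets.append((1.0 - w) * g_start + w * g_final)
        else:
            targets.append(g_final)              # second part: hold the final target
    return targets

# For example, constraint_targets(0.9, 0.35, n_v=30, n_total=50) produces a schedule with
# the same overall behavior as graph 401A, though with a different smooth function.
```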
Approximate Volume Control
In some implementations, calculating 420 the series of target values for a design constraint includes calculating target changes in a volume fraction of the generatively designed three dimensional shape. Some constraints may have a shape gradient or shape derivative that is not well defined, approximate, or not defined at all. In these cases, a proxy shape derivative can be calculated to improve accuracy and exert more control.
Using a constant value μ for all optimization iterations would result in converging to a final volume which depends on the relative magnitude of μ to the velocity v, the latter of which depends on the boundary value problem, given by:
−∇·Dε(u)=ƒ in Ω (71)
u=0 on ΓD (72)
Dε(u)=
where Ω is the domain for a solid body, D is the fourth order constitutive tensor of the solid body, u is the displacement vector, and ƒ is the external load vector. The target volume change for iteration t is
ΔVt=VT,t−Vt−1. (74)
This volume change can be approximated by
ΔVt≈T∫Γa(vse+μ)dΓ (75)
where a∈{0,1} is the advection mask and T is the time step used in solving the Hamilton-Jacobi equation. Note that the approximation error reduces to zero when smaller time steps are used:
The maximum time step is bounded by the Courant-Friedrichs-Lewy (CFL) condition
where C is a constant, Δs is the voxel size, and |v|max is the maximum magnitude of the advection velocity, given by
v=−(2u·ƒ−Dε(u):ε(u))+μ (78)
where variables are defined as they are in Equations 71-73.
Let vu, vl denote the bounds of vse such that vl≤vse≤vu. The maximum velocity magnitude is now given by
|v|max=vu+μ≥|vl+μ| (79)
|v|max=−(vl+μ)≥|vu+μ| (80)
|v|max=−(vu+μ)≥|vl+μ| (81)
|v|max=vl+μ≥|vu+μ| (82)
Substituting the values for |v|max and T in Equation 76 yields:
Note that the number of different cases for μ simplifies to Equations 83 and 84 when the body force term is zero in Equation 78 (ƒ=0). This results in the strain energy component of the velocity being positive, i.e., vu, vl∈R+, resulting in the maximum velocity magnitude being confined to either Equation 83 or Equation 84. The upper bound on the time step in Equation 77 implies that a μ value cannot be found for an arbitrarily high volume change ΔVt.
In this way, the volume derivative can be used as a proxy shape derivative for many optimization constraints, such as stress, fatigue safety factor, buckling safety factor, and displacement. As described below, accurate volume control can be achieved using adaptive controllers, including PID controllers.
Returning to
The computing 424 includes computing shape change velocities for an implicit surface in a level-set representation of the three dimensional shape in accordance with the respective next (e.g., normalized) target value in a respective series of target values calculated 420 for each design constraint. In some implementations, the techniques described can be applied to density-based methods, like SIMP. The target values can be normalized, the techniques for which are described below. The series of target values begins with the initial target value and ends with a final target value for the design constraint. As described above with reference to
Volume Control Using Adaptive Controllers, Including Proportional-Integral-Derivative (PID) Controllers
Next, accurate volume control using adaptive controllers is discussed.
In general, an adaptive controller is a feedback technique that continuously computes an error value between a desired value and a measured value and then applies a correction to some control parameter. A Proportional-Integral-Derivative (PID) controller is a type of adaptive controller that continuously computes an error value between a desired value and a measured value and applies a correction to some control parameter based on proportional, integral, and derivative terms of that error. Different parameters that affect shape changes (e.g., the relative contribution of a particular constraint gradient) can be controlled using PID controllers. During the iteratively modifying, the proportional, integral and derivative components of the PID controller are adjusted to slow down or speed up shape changes implicitly, by applying varying amounts of control on the controlled parameters in response to an oscillation in the generatively designed three dimensional shape.
The PID components of the PID controller are also adjusted to effect a first level of increase in the measured value in response to repetition of success or failure to satisfy the normalized next target value of the measured value. Increases and reductions to the components can be made using a multiplier value that is based on the average deviation of the measured value of the component from the target value. For example, the multiplier can be 1+abs(deviation). As described below, a PID controller can be used to accurately correct for the target volume of the model for each iteration of an optimization process, including a process implementing controlled convergence.
For convenience, the description below will reference volume fractions denoted by VT,tƒ, Vtƒ to indicate target volume fraction and actual volume fraction at the end of iteration t, respectively.
where V0 denotes the volume of the design space.
Given a maximum number of iterations n and a target final volume fraction VT,nƒ, the target volume fraction for each iteration can be computed using a controlled convergence process described above with reference to Equation 68:
Next, the value for μ is computed (Equations 83-86) to achieve this volume target for every iteration. Notice that an error between the desired and actual volume fractions VT,tƒ−Vtƒ remains at the end of each iteration. This is caused by the approximation in Equation 75, which leads to an error of magnitude VT,t−Vt after iteration t. Note that this error is already accounted for in Equation 74 when computing the volume change for iteration t+1, which is the sum of the volume change for the next iteration and the error from the previous iteration.
However, in some cases, this is not sufficient to achieve a prescribed volume target. Therefore, PID controllers are used.
To address this issue, an adaptive, e.g., PID, controller can be used to adjust the volume target for each iteration, to maintain better control over volume changes. Define the error in volume fraction (between actual and target volume) at iteration t as follows:
etV=VT,tƒ−Vt−1ƒ. (91)
The target volume for iteration t is now computed by
where Kp,Ki,Kd∈R+ are the PID parameters. Note that the PID controller is applied on target volume fraction change instead of target volume, to ensure the PID parameters are not dependent on initial domain volume. Unless otherwise stated, Kp=1, Ki=0.1, Kd=0.1 are used hereafter. For simplicity, such PID controllers will be denoted as follows hereafter where the subscript t is dropped for clarity:
ΔVt=ƒPID(etV) (93)
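A minimal sketch of the ƒPID operator follows, using the default gains stated below (Kp=1, Ki=0.1, Kd=0.1); the class and method names are illustrative assumptions, not part of the described system.

```python
class PIDController:
    """Discrete PID controller used to turn a per-iteration error into a correction."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def __call__(self, error):
        self.integral += error
        derivative = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Per Equations 91-93 (volume control): the error is the target volume fraction for
# iteration t minus the volume fraction achieved at the end of iteration t-1, and the
# controller output is used as the volume change Delta V_t applied in iteration t.
volume_pid = PIDController(kp=1.0, ki=0.1, kd=0.1)
# delta_v_t = volume_pid(v_target_fraction - v_actual_fraction)
```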
The end result of this process is applying control over a response parameter etV, based on variations of a control parameter ΔVt. In addition, volume control using PID controllers, and adaptive controller techniques in general, can be applied in combination with all of the other systems and techniques described in this document, including for handling arbitrary equality and inequality constraints, described below, and in combination with the various other shape and topology optimization techniques described in connection with
Adaptive PID Tuning
In some implementations, to provide good control for a broad range of design problems, the Kp, Ki, Kd parameters in an adaptive controller, e.g., a PID controller, are modified in response to controller behavior, by applying a multiplier equal to the average deviation of the iteration result from the target. Three key controller statuses monitored for can be:
Oscillations: Oscillations are defined as occurring when, during the observed time period, there exist more than two pairs of consecutive successes and failures.
Repeat failure or success: A success or failure is considered repeated if it occurs over a number of iterations equal to some predetermined threshold, e.g., 10%, of the specified maximum iterations.
Repeat excessive failure or success: A success or failure is deemed excessive if the relative error between it and the target exceeds the predetermined threshold, e.g., 10%.
PID control theory indicates that the integral term should be used to reduce systematic error, while the derivative term should be used to dampen oscillations. The proportional term guides the speed of convergence. In practice, it is often found that the derivative term can be hard to tune and can in fact contribute to oscillatory behavior (a phenomenon known as "derivative kick").
For the newly adjusted parameters to take effect, a normal status can be applied for a set number of iterations after the adjustment. Similarly, periods accounted for in the determination of controller statuses can be modified to avoid overcompensation. In some implementations, the design problem is allowed to converge rapidly in the initial optimization stages. Hence, during that interval, only the oscillatory status is adapted for and all others are effectively ignored by the adaptive process.
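One possible realization of this adaptive tuning is sketched below; the status bookkeeping and the choice of which gain to scale in each status are assumptions, while the 10% thresholds and the multiplier built from the average deviation follow the description above. The pid argument is assumed to expose kp, ki, kd attributes, as in the earlier PIDController sketch.

```python
def tune_pid(pid, successes, deviations, max_iters, window=10):
    """Adjust PID gains based on the controller statuses described above (a sketch)."""
    recent = successes[-window:]
    flips = sum(1 for a, b in zip(recent, recent[1:]) if a != b)
    oscillating = flips > 2                              # oscillation status

    repeat_len = max(1, int(0.1 * max_iters))            # 10% of the specified max iterations
    repeated = len(successes) >= repeat_len and len(set(successes[-repeat_len:])) == 1

    window_devs = deviations[-window:] or [0.0]
    multiplier = 1.0 + sum(abs(d) for d in window_devs) / len(window_devs)

    if oscillating:
        pid.kd *= multiplier      # derivative term dampens oscillations
        pid.kp /= multiplier      # back off the proportional drive
    elif repeated:
        pid.ki *= multiplier      # integral term reduces systematic error
    return pid
```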
Therefore, returning to
An adaptive controller can also be implemented using any appropriate machine learning technique for predicting changes to a control parameter in response to one or more inputs. For example, a controller can be implemented as a neural network having a plurality of neural network layers. The neural network can include an input layer that receives one or more inputs, and an output layer that outputs a change to a control parameter in response to the inputs. The neural network can include one or more hidden layers between the input and output layer, each hidden layer applying one or more non-linear functions to a received input at the layer, where the functions are weighted according to learned parameter values.
An adaptive controller implemented as a neural network can be trained according to any appropriate supervised learning technique. In the case of supervised learning, a controller can be trained on a training data set that includes inputs to a PID controller paired with the corresponding Proportional, Integral, and Derivative output changes of the constraint value in response to those inputs. The training data can include a subset of all input-output pairs generated from a PID controller, e.g., the previous N iterations, where N is some pre-defined value. The training data can be further modified, e.g., with random oscillations or other variations added.
As described below with reference to Arbitrary Equality and Inequality Constraints, a separate adaptive controller can be implemented for each constraint. In some implementations, the adaptive controller can be implemented as a neural network or other machine learning model that receives, as input, the generated constraint change from each PID controller, and generates, as output, a final volume change considering all constraints. In implementations in which a single machine learning model is used, the training data for training the model can include a subset of input-output pairs, as described above, over a number of iterations, with or without random perturbations.
The adaptive controller can be trained to generate constraint values or a final volume change according to additional inputs besides the proportional, integral, and derivative terms. For example, the additional inputs can be from the topology optimization process, such as features extracted from the current shape of the part being designed, or features of the current stress, strain, and displacement results for the part. An adaptive controller can receive these additional inputs and learn parameter values that generate more accurate controlled values or final shape changes and further stabilize the optimization process.
The adaptive controller can be further trained in response to new test cases and design requirements. Additional training can be done offline, in real-time, or a combination of the two. For example, if a specific optimization task does not converge or is not stable, multiple instances of the optimization process can be run for all test cases but for different controller settings, e.g., different model parameter values weighting the machine learning model, different hyperparameter values, e.g., learning rate, batch size, etc., or both. The settings of the instance with the best performance, e.g., according to some performance metric, can be used in an adaptive controller and corresponding optimization process for producing the final design. The multiple instances can be run offline, e.g., when the number of test cases is small, or adapted online during the optimization process.
Although neural networks are given as an example machine learning technique for implementing adaptive controllers, any suitable technique can be used, e.g., fuzzy logic or any appropriate type of regression model. Moreover, adaptive controllers, as described in this document, can also have one or more extra inputs (other than just a measure of error) that provide more information about the trend and current state of the optimization process.
Constraint Normalization During Topology Optimization
Controlled convergence can be combined with normalization of the constraint values, as described above in reference to the computing 424 of
Normalization Algorithm
Some features of the Normalization Algorithm are noted, as follows.
The reference value gref,t should not be updated in every iteration. Sufficient time should be allowed for the constraint to stabilize after setting a new reference. This is achieved using the notion of an inner loop iteration (lines 3-22 of the Normalization Algorithm). The maximum length of the inner loop iterations is given by Ti, which can be dynamically adjusted for every design problem, for example, adjusted such that there are at least 6 inner loop iterations with a lower bound of 5 on the inner loop length.
The threshold Δgth used to determine if the existing reference is too far from the current value is computed using a multiplier Kth, e.g., 20, of the moving average of the constraint value. The length of the moving average computation is given by Tk, e.g., 10. The threshold is increased if oscillations are detected during the last inner loop; otherwise, it is reduced when the current reference value is above the threshold.
Any change to the reference value is bounded by a maximum change Δgth=Kggref,t−1 computed using a ratio 0<Kg≤1 of the existing value, e.g., 0.6. Any reduced reference value should also be larger than a multiplier of the current violation |gi−gT,n|. An example of the multiplier is given by Kcur=1.2.
In some implementations, a value computed from the Normalization Algorithm can be assigned to a target reference value g̃ref,t and an adaptive controller (e.g., a PID controller as described above with reference to
Δg̃ref,t=g̃ref,t−gref,t−1.
gref,t←gref,t+ƒPID(Δg̃ref,t). (94)
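A minimal sketch of this reference handling follows. Dividing the constraint violation by the reference value is an assumption of how the normalization into [0,1] is realized, and the Equation 94 update is interpreted here as stepping from the previous reference toward the proposed one; ref_pid is assumed to behave like the earlier PIDController sketch.

```python
def normalized_constraint(g_value, g_final_target, g_ref):
    """Normalize a constraint violation by the current reference value (an assumption)."""
    return abs(g_value - g_final_target) / max(g_ref, 1e-12)

def update_reference(g_ref_prev, g_ref_target, ref_pid):
    """PID-stabilized reference update in the spirit of Equation 94."""
    delta = g_ref_target - g_ref_prev          # Delta g~_ref,t
    return g_ref_prev + ref_pid(delta)         # step toward the proposed reference value
```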
Arbitrary Constraint Handling
Arbitrary Constraint Handling is a general method that can be applied to any type of optimization constraint, even when exact shape derivatives are not available. Whether proxy or actual shape derivatives are implemented, more accuracy and control can be achieved over the generative design process.
Arbitrary Equality Constraints
Consider an equality constraint of type g(Ω, u(Ω))=0 in an optimization problem where compliance minimization is used as the objective function. In the absence of a shape derivative
the conventional practice in industry is to optimize while monitoring g and to terminate optimization when g≈0. This implies that the end result is an un-converged solution, with a lack of control over the solution as g→0.
However, adaptive controllers can be used to enforce equality constraints without shape derivatives. Let gj(Ω,u(Ω)), j=1, . . . , ng, denote a sequence of equality constraints that have been normalized as described above (e.g., using the Normalization Algorithm, above) such that gj(t)∈[0,1]∀t. Consider the normalized error of each constraint, which can be used to define the error in the constraint for each iteration in a manner similar to Equation 91:
This can be substituted in Equation 93 to approximate the volume change needed to satisfy constraint gj in iteration t:
ΔVtj≈IjƒPID(etj). (96)
where Ij=−1 when the constraint is inversely related to the volume change and Ij=1 otherwise. The volume changes suggested by the individual constraints can be collected into positive and negative sets:
Vt+:={x|x=IjƒPID(etj), x>0, j=1, . . . , ng}.
Vt−:={x|x=IjƒPID(etj), x<0, j=1, . . . , ng}. (97)
Then, the single volume change applied in this iteration can be computed with
The μ value in the shape derivative for volume control for any non-zero ΔVt can be applied following the method described above for approximate volume control, with reference to Equations 71-87.
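Because Equation 98 is not reproduced above, the sketch below combines the per-constraint volume changes according to the selection rule described later in this document (maximum when no constraint is inversely related to volume change, minimum when none is directly related, and an average otherwise); the function name and the per-constraint controller objects are illustrative assumptions.

```python
def combine_volume_changes(errors, inverse_flags, controllers):
    """Combine per-constraint volume changes (Equations 96-98) into one Delta V_t."""
    changes = [(-1.0 if inv else 1.0) * pid(e)       # Delta V_t^j = I_j * f_PID(e_t^j)
               for e, inv, pid in zip(errors, inverse_flags, controllers)]
    positive = [c for c in changes if c > 0]          # set V_t^+
    negative = [c for c in changes if c < 0]          # set V_t^-
    if positive and not negative:
        return max(positive)                          # grow as much as any constraint asks
    if negative and not positive:
        return min(negative)                          # shrink as much as any constraint asks
    if changes:
        return sum(changes) / len(changes)            # mixed demands: use an average
    return 0.0
```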
Arbitrary Inequality Constraints
The techniques described above with reference to arbitrary equality constraints and Equations 95-98 result in the volume change converging to zero as the constraint becomes satisfied, i.e., ΔVtj→0 as etj→0. When inequality constraints are used, this causes a potential problem as inequality constraints can be satisfied with ΔVtj≠0. Therefore, when inequality constraints are used, a slack variable called an importance factor can be used to manage relative contribution of each violated constraint. An importance factor can mitigate uncontrolled changes in convergence and constraints interfering with the minimization objective when all constraints are given equal importance. The importance factor regulates the relative importance of different constraints during different iterations of the generative design process.
Let pt1, . . . , ptng denote the applied importance factors for the ng constraints at iteration t. The volume change for each constraint then becomes
ΔVtj≈ptjIjƒPID(etj) (99)
where the applied importance factor is computed using predetermined, e.g., user-provided, importance factors p̃j as follows:
where stk=1 if the inequality constraint gk is violated at iteration t, otherwise stk=0. At each iteration, the applied importance factor is updated according to the values of stk for all constraints. This allows constraint violations to impact the applied importance factor and hence, the volume change ΔVt.
To prevent sudden changes to ptj due to a change of status of a constraint from violated to non-violated, and vice versa, a PID controller can be used, e.g., as described above with reference to Equations 96 and 97 and
Δptj=ptj−pt−1j
ptj←ptj+ƒPID(Δptj). (101)
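Because Equation 100 is not reproduced above, the sketch below simply restricts the user-provided importance factors to the currently violated constraints and normalizes over them, which is an assumption; the PID smoothing of the factor change follows Equation 101, with one controller per constraint as in the earlier PIDController sketch.

```python
def applied_importance(user_factors, violated):
    """Restrict user-provided importance factors to violated constraints and normalize."""
    total = sum(p for p, s in zip(user_factors, violated) if s) or 1.0
    return [p / total if s else 0.0 for p, s in zip(user_factors, violated)]

def smooth_importance(p_prev, p_new, factor_pids):
    """Equation 101: p_t^j <- p_t^j + f_PID(p_t^j - p_{t-1}^j), one controller per constraint."""
    return [p + pid(p - q) for p, q, pid in zip(p_new, p_prev, factor_pids)]

# The smoothed factor then scales each constraint's volume change per Equation 99:
# delta_v_j = p_j * I_j * f_PID(e_j)
```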
Complex generative design problems can be solved according to a combination of the techniques described above, e.g., with controlled convergence, a combination of inequality and equality constraints, and using PID controllers.
Modified Augmented Lagrangian Method for Constraint Handling
The augmented Lagrangian algorithm introduced with reference to Equations 28-30 can be modified as follows to accommodate constraint handling for arbitrary equality and inequality constraints, as described above.
Controlled convergence: The classical augmented Lagrangian method computes the constraint violation term ej as the difference between the current value of the constraint and the final value, e.g., as follows:
Controlled constraint convergence is applied as described above, e.g., with reference to
Constraints without shape derivatives: Although the shape derivatives for many objectives and constraints can be mathematically computed using the adjoint method (Equations 14-25), implementation of the adjoint method in a commercial finite element solver can be a difficult task. Additionally, sometimes optimization constraints can be assessed by black-box evaluators provided by users. Therefore, proxy shape derivatives can be used following the techniques described above, e.g., with reference to Arbitrary Constraint Handling and
Precise Volume Control: Precise control of volume is often quite important for obtaining a good design output from complex engineering examples. The μ calculation methods introduced above with reference to Approximate Volume Control, Volume Control using Adaptive Controllers, and Equations 71-93 can be integrated into the augmented Lagrangian method, while further enhancing the accuracy using a line search algorithm.
First, constraints can be placed into two groups v, nv where the former contains all constraints that are affected by volume changes, and the latter contains constraints that are not, e.g., min/max thickness or centroid constraints. The shape derivative from the augmented Lagrangian method is then modified as follows:
where the constraint error ej is computed using a PID stabilized version of Equation 95 with the importance factor term from Arbitrary Inequality Constraints, described above with reference to Equations 99-101:
When the shape derivative dgj/dΩ is unavailable, it may be approximated using a suitable proxy shape derivative. For example, any such constraint from the set v may be approximated using the volume shape derivative as follows:
where Ij=−1 when the constraint is inversely related to the volume change and Ij=1 otherwise.
Next, μ* is computed using the concepts described above with reference to Approximate Volume Control, Volume Control Using Adaptive Controllers, and Equations 71-93. A target volume change Δvt is computed for each iteration t using Arbitrary Constraint Handling, using all constraints in v, and μ* is then computed using the method described above with reference to Approximate Volume Control and Equations 71-87. Note that Equation 87 needs to be modified as follows to account for arbitrary constraints and objectives:
In some cases, a pitfall should be accounted for: μ*→∞ when
When this is detected, μ* should be set to 1. This usually happens as constraints converge towards the target value, i.e., ej→0.
Updating the augmented Lagrangian parameters μj, λj for iteration t can be stabilized using adaptive controllers, e.g., PID controllers, as described above with respect to
Δμtj=μtj−μt−1j
μtj←μtj+ƒPID(Δμtj). (107)
As mentioned earlier, a line search algorithm, e.g., gradient descent or Newton's Method, can be applied to ensure the volume change achieved after advecting the level-set is within some acceptable tolerance with respect to the target volume change Δvt. Line search is done to find the optimal multiplier lt such that the advection velocity is given by lt times the shape derivative of the augmented Lagrangian. Note that disabling line search is equivalent to setting lt=1.
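The text allows several line search methods; the sketch below shows a simple bisection variant for concreteness. The advect_and_measure argument is an assumed callback that advects a trial copy of the level-set with multiplier l applied to the shape derivative and returns the resulting volume change; the tolerance and bracketing values are illustrative.

```python
def line_search_multiplier(advect_and_measure, target_dv, tol=0.05,
                           lo=0.0, hi=2.0, max_steps=20):
    """Bisection on l_t so the achieved volume change matches the target within tolerance."""
    l = 1.0                                   # l_t = 1 corresponds to disabling line search
    for _ in range(max_steps):
        dv = advect_and_measure(l)            # trial advection with velocity l * dL/dOmega
        if abs(dv - target_dv) <= tol * max(abs(target_dv), 1e-12):
            break
        sign = 1.0 if target_dv >= 0 else -1.0
        if (dv - target_dv) * sign > 0:
            hi = l                            # overshot the target change: shrink the step
        else:
            lo = l                            # undershot: grow the step
        l = 0.5 * (lo + hi)
    return l
```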
Returning to
In some implementations, the at least one design constraint without a defined shape gradient includes a first inequality constraint and a second inequality constraint. The first inequality constraint has a first input control parameter to a first proportional-integral-derivative controller and has a first importance factor that is multiplied with a shape change amount provided by the first proportional-integral-derivative controller, e.g., as described above with reference to Arbitrary Inequality Constraints and Equations 99-101. The second inequality constraint has a second input control parameter to a second proportional-integral-derivative controller and has a second importance factor that is multiplied with a shape change amount provided by the second proportional-integral-derivative controller.
Computing 424 the shape change velocities for the implicit surface in accordance with the proxy shape gradient can therefore include adjusting both the first and second importance factors based on whether or not one or more other constraints were violated in a prior iteration of the iteratively modifying. For example, the importance factor can be modified by multiplying the factor with a violation multiplier with respect to the other constraints in prior iterations of the iteratively modifying. In some implementations and as described above, e.g., generally with reference to
Also as described above with reference to
As described above with reference to Adaptive Controllers and
In some implementations, the at least one design constraint includes a first equality constraint and a second equality constraint, the first equality constraint having a first input control parameter to a first adaptive controller, and the second equality constraint having a second input control parameter to a second adaptive controller, e.g., as described above with reference to Arbitrary Equality Constraints and Equations 95-98. Then, the computing 424 the shape change velocities for the implicit surface in accordance with the proxy shape gradient includes using a maximum shape change amount provided by the first and second adaptive controllers when none of the first and second equality constraints are inversely proportional to shape change and using a minimum shape change amount provided by the first and second adaptive controllers when none of the first and second equality constraints are proportional to shape change. When one constraint is inversely proportional to shape change and at least one constraint is proportional to shape change, an average shape change amount is used.
After the computing 424, the level-set representation(s) are updated 426 using the shape change velocities to produce an updated version of the three dimensional shape of the modeled object. The performing 422, the computing 424, and the updating 426 are repeated until a check 428 determines that a predefined number of shape modification iterations have been performed or that the generatively designed three dimensional shape of the modeled object in the design space converges to a stable solution for the one or more design criteria and the one or more in-use load cases. The updating 426 and the check 428 can be performed, e.g., as described above with reference to
Fatigue Constraint
Next, a description of optimizing a design based on provided fatigue constraints on the designed body is provided. In addition to the techniques described above, e.g.,
Two main methods of evaluating this constraint are described: safe life calculation and the damage tolerant approach. The former is about preventing fatigue damage by keeping stresses below an allowable threshold. The latter accepts that there is fatigue damage and aims to ensure a fatigue crack does not cause catastrophic failure before a specific inspection point in time.
Safe Life Fatigue
Safe life optimization is based on a material's fatigue properties. Samples are tested to obtain SN-curves available in material databases. The designer can specify the expected amount of loading cycles over the lifetime of the part for each relevant load case. Their cumulative effect guides the allowable design stress. Usually, stress-based safe life fatigue is used. However, in low cycle loading problems, strain life fatigue may be preferred.
Treatment of SN-Curves.
Both the damage tolerant and safe life fatigue approaches described herein can make use of material SN-curves. Due to their potentially flat and infinite regions, these curves pose computational challenges and should be interpreted with caution during the optimization process.
SN-curves normally place stress on the y-axis and cycles on the x-axis, as the typical fatigue workflow involves going from a known number of cycles limit to finding the appropriate maximum stress constraint. This makes it natural for SN-curve user-input to be [cycles, stress] points. Such point definition fits the damage tolerant optimization method well and thus only a sorting of points from lowest to highest cycles need be used. However, in the case of safe-life fatigue, the optimization approach goes from stress to cycle data. Simulation of the object being designed provides stress data, for which the predicted number of cycles should be found from the SN-curve. Thus, an important aspect of the method is to invert the user-provided data points into [stress, cycles] pairs and sort those from lowest to highest stresses. Additionally, when an endurance section is detected, i.e., there exists a flat region of the curve at an endurance stress (below which the number of cycles is effectively infinite), the set of points corresponding to that section of the curve is re-sorted from highest stress to lowest stress. The curve reading algorithm then readily returns the highest reported cycle value instead of interpolating to infinity.
When the input data includes data for different materials, a separate number of expected loading cycles is determined for each of the different materials. This includes returning a number of loading cycles from one or more curves fit to the set of data points in a plastic region and an elastic region of the data relating fatigue strength to loading cycles, and then returning a highest load cycles number from the set of data points in an endurance region of the data relating fatigue strength to loading cycles.
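A minimal sketch of this curve-reading behavior follows. It assumes the user points are given as (cycles, stress) pairs, inverts and sorts them for the safe-life lookup, and returns the highest reported cycle count at or below the lowest (endurance) stress rather than interpolating toward infinity; simple linear interpolation stands in for the fitted plastic/elastic curves mentioned above.

```python
import numpy as np

def cycles_from_stress(sn_points, stress):
    """Look up expected cycles for a stress from user-provided [cycles, stress] SN points."""
    pts = sorted((s, c) for c, s in sn_points)        # invert to [stress, cycles], sort by stress
    stresses = np.array([p[0] for p in pts], dtype=float)
    cycles = np.array([p[1] for p in pts], dtype=float)
    if stress <= stresses[0]:
        return float(cycles.max())                    # endurance region: highest reported cycles
    return float(np.interp(stress, stresses, cycles))
```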
Stress-Based Safe Life Fatigue
Returning to
The program iteratively modifies a generatively designed three dimensional shape of the modeled object in the design space in accordance with the one or more design criteria, the one or more in-use load cases for the physical structure, and the one or more specifications. The iteratively modifying can include modifying both a geometry and a topology for the three dimensional shape of the object.
The iteratively modifying includes performing 506 numerical simulation of the modeled object in accordance with a current version of the three dimensional shape and the one or more in-use load cases to produce a current numerical assessment of a physical response of the modeled object, e.g., as described above with reference to
A maximized stress or strain element is found 508 for each of the one or more in-use load cases for the physical structure, from the current numerical assessment of the physical response of the modeled object. The element can be a value at a point, location or region of the physical structure. In some implementations and as described below with reference to
Specifically, during topology optimization, each element's stress can be computed at every iteration. After optionally adjusting for singularities (described below in reference to
where Sm
where the fatigue safety factor Sƒ is given by the minimum fatigue safety factor of all materials used, Cσ is the number of cycles obtained from the SN-curve for the max principal stress (maxΩσp), and Clcreg is the required number of cycles for the given load case lc.
During optimization, the fatigue constraint is defined as Sƒ−STƒ≥0, where STƒ is the target fatigue safety factor. This can be enforced using methods described above with reference to Controlled Convergence, Arbitrary Constraint Handling, and
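Because Equations 108-109 are not reproduced above, the sketch below uses the standard cumulative-damage (Miner) convention for the per-load-case damage fraction, which may differ in ratio convention from Equation 109; the dictionary keys and the reuse of the cycles_from_stress sketch are illustrative assumptions.

```python
def fatigue_safety_factor(load_cases, materials, cycles_from_stress):
    """Cumulative-damage sketch of a fatigue safety factor (minimum over materials)."""
    factors = []
    for material in materials:
        damage = 0.0
        for lc in load_cases:
            allowed = cycles_from_stress(material['sn_points'], lc['max_principal_stress'])
            damage += lc['required_cycles'] / max(allowed, 1.0)   # per-load-case damage fraction
        factors.append(1.0 / max(damage, 1e-12))                  # invert the summed damage
    return min(factors)                                           # S_f over all materials
```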
Shape Derivative of Aggregated Fatigue Metric
Let Ω be a domain in ℝ3. In this specification, a stress-based fatigue metric for Ω with respect to a number of load cases that are applied repetitively over time is assessed at each point x∈Ω independently. For purposes of topology optimization, these pointwise measurements are aggregated into a global measurement called an aggregated fatigue metric that approximates the maximum value of the pointwise measurements.
The aggregated fatigue metric takes the following form. Let C:ℝ→ℝ be the inverse of the SN curve of the material comprising Ω and let Cl∈ℝ be a reference value for the lth load case. This aggregated fatigue metric enables the summation of damage caused by cyclic loading. A target value of 1/(safety factor) is optimized for, where the damage the part can withstand and the actual damage from the load (stress or strain and number of cycles) are summed.
The aggregated fatigue measurement of Ω is defined as:
where σl(Ω) is an approximation of the maximum of the magnitude of the stress tensor in Ω under the lth load case. In other words,
where ul:Ω→ℝ3 is the displacement function that satisfies the equations of linear elasticity in Ω with respect to the lth load case, and σ(ul) is the associated stress tensor.
The shape derivative of the aggregated fatigue metric S is described in more detail, below.
Proposition 1. The shape derivative of the shape function S with respect to the variation of a shape Ω generated by the normal speed Θ:∂Ω→ℝ is given by:
where λl:Ω→ℝ3 is the adjoint displacement function of the lth load case. This is the solution of the linear elastic equations with the adjoint force term given in weak form as the linear integral form:
adj(Ω,δu):=∫Ω∥σ(ul)∥2p−1σ(ul):σ(δu) (113)
Since S can be decomposed into a sequence of operations applied to the stress integral ∫Ω∥σ(ul)∥2p, this computation breaks down as follows. First, the derivative of this sequence of operations is computed using the chain rule from ordinary calculus. Then, the shape derivative of the stress integral is computed, for which Céa's method is used, as described below.
Let Ωε denote the variation of Ω generated by the normal speed function Θ. The computation of the desired shape derivative, denoted as DSΩ·Θ, begins as:
First, the chain rule from ordinary calculus is used to bring the derivative d/dε onto the stress integral, where then the techniques of shape differentiation take over, as shown below:
where ul,ε is the displacement function for the lth load case in the domain Ωε.
To continue, shape differentiation is used. Céa's method is used to compute the shape derivative of the stress integral remaining in Equation 115. The subscript l is suppressed in the description below because it is the same for all load cases. The Lagrangian is as follows:
(Ω,u,λ):=∫Ω∥σ(u)∥2p+A(Ω,u,λ)+(Ω,λ). (116)
where
A(Ω,u,λ):=∫Ωσ(u):e(λ) (117)
is the integral form appearing in the weak form of the elasticity equations in Ω.
Here, σ(u) is the stress tensor of the displacement u and e(λ) is the “virtual strain” of the “virtual displacement” λ. Thus, (Ω, u, λ) is the “virtual work” associated to the pair of displacements. Also, (Ω, λ) is the integral linear form appearing in the weak form of the elasticity equations in Ω which encodes the “virtual work” done by the applied body and boundary tractions. Now Céa's method proceeds in three steps.
Step 1. Setting the variation of the Lagrangian with respect to λ equal to zero yields the linear elasticity equations satisfied by u. This is by design, since (Ω, u, λ)+(Ω, λ) is precisely the left-hand-side of the weak elasticity equations.
Step 2. Setting the variation of the Lagrangian with respect to u yields a related system of equations for λ called the adjoint state equations. The solution of these equations is called the adjoint state. The weak version of these equations is:
0=∫Ω2p∥σ(u)∥2p−1σ(u):σ′(δu)+A(Ω,u,λ) (118)
for all variations δu of u. Here, σ′(δu) is the derivative of the stress tensor with respect to the displacements. This has a simple form, because stress is linearly related to strain, which is in turn linearly related to u (the gradient is a linear operator), or quite simply, σ′=σ. The symmetry of is also used. The above shows that λ satisfies the linear elastic equations but with a new "adjoint force" term encoded in the integral linear form:
adj(Ω,δu):=∫Ω2p∥σ(u)∥2p−1σ(u):σ′(δu) (119)
Step 3. Céa's method hinges on the following fact, which can be proven: the shape derivative of the Lagrangian, computed by ignoring the shape dependence of u and λ and then plugging in the state for u and the adjoint state for λ, is equal to the shape derivative of ∫Ω∥σ(u)∥2p itself. Since all expressions in the Lagrangian are volumetric integrals, only the formula for the shape derivative of such an integral of a shape-independent function over Ω is needed.
Next, assume that Θ vanishes on the Dirichlet boundary and on the homogeneous Neumann boundary of Ω (otherwise the shape derivative would include other terms). Finally, assume that body forces are absent, for instance by ignoring the effects of gravity in the given load case. Together, these last two assumptions have the effect of dropping the term from the formula for the shape derivative (Equation 116). Therefore, the following results after all assumptions:
where u is the state and λ is the adjoint state.
Strain-Based Safe Life Fatigue
The strain-based safe life calculation also uses Equations 108 and 109; however, the Cσ parameter becomes Cε. A strain versus cycles form of the SN-curve is used as input. The strain value is computed from the max principal stress (maxΩσp) according to Neuber's rule, below:
where K is the concentration factor specified by a user (defaults to 1) and E is the material's Young's Modulus.
Returning to
In some implementations, the one or more specifications include two or more specifications of respective different materials from which the physical structure will be manufactured. Accordingly, the data for the specifications include data relating fatigue strength to loading cycles for each of the different materials. As such, the determining 510 includes determining a separate number of expected loading cycles for each of the different materials. Further, the redefining 512 includes calculating a separate fatigue safety factor for each of the different materials based on respective damage fractions calculated from respective ones of the numbers of expected loading cycles for the different materials. From the separate safety factors, a minimum value of the fatigue safety factors for the different materials is used to redefine the fatigue safety factor inequality constraint for the modeled object.
In some implementations, the one or more in-use load cases for the physical structure includes two or more in-use load cases for the physical structure and the one or more design criteria include a required number of loading cycles for the modeled object for each of the two or more in-use load cases for the physical structure. Calculating the separate safety factor for each respective one of the different materials includes summing load-specific damage fractions corresponding to the two or more in-use load cases, wherein each load-specific damage fraction includes a number of expected loading cycles, for one of the different materials and one of the in-use load cases, divided by the required number of loading cycles for the one of the in-use load cases. The separate safety factor is obtained by inverting the sum of the load-specific damage fractions to obtain the separate safety factor. See e.g., Equation 109.
The shape change velocities for an implicit surface in a level-set representation of the three dimensional shape are computed 514 in accordance with at least the fatigue safety factor inequality constraint. The shape change velocities can be computed with respect to other constraints, and proxy derivatives can be computed where a shape derivative is unavailable or not well defined, e.g., as described above with reference to Arbitrary Constraint Handling and
In some implementations, the computing 514 includes computing at least one shape change velocity using an amount determined from a shape derivative formula that approximates a shape derivative of the fatigue safety factor, e.g., the volume shape derivative described above with reference to Equation 105. The formulation is modified to include the stress shape derivative instead of the volume shape derivative.
In some implementations, an importance factor is used for handling the at least one design constraint, including the fatigue safety factor, which is adjusted based on whether or not one or more other constraints were violated in a prior iteration of the iteratively modifying, e.g., as described above with respect to
In some implementations, a target value of the volume fraction is adjusted using adaptive controllers, e.g., as described above with reference to
In some implementations, adaptive control is used to adjust a total contribution of the amount determined from the shape derivative formula to the shape change velocities used in the updating. Adaptive controllers can be used to adjust the contribution of the proxy shape derivative to the total advection velocity, i.e., according to a modified version of the Augmented Lagrangian method for constraint handling, as described above with reference to Equations 102-107.
The performing 506, the finding 508, the determining 510, the redefining 512, the computing 514 and the updating 516 repeat until a check 518 determines that a predefined number of shape modification iterations have been performed or that the generatively designed three dimensional shape of the modeled object in the design space has converged to a stable solution for the one or more design criteria and the one or more in-use load cases.
The check 518 can be a check as described above with reference to
Damage Tolerant Fatigue
Safe Life Fatigue constraints as described above help to ensure that an object does not fail under user-specified load cases by preventing fatigue damage during a specified number of loading cycles. In contrast, damage tolerant fatigue techniques focus on limiting fatigue damage to be within an allowable limit to help ensure the fatigue damage will be detected in the field before the part fails. The key objective is to compute the critical fatigue crack length and ensure that it does not exceed the thickness of the design component during the service inspection interval. Hence, a key requirement of this method is the ability to enforce thickness constraints on topology optimized designs. The thickness constraint can be enforced implicitly, for example by increasing the volume of the design which results in all parts of the design increasing in thickness.
It is noted that, like the Safe Life Fatigue approach, the Damage Tolerant Fatigue techniques described below can be used in any combination with the above-described techniques, e.g., Controlled Convergence, Seeding, and Arbitrary Constraint Handling. Further, in some implementations, the Safe Life Fatigue and Damage Tolerant Fatigue approaches are both available where fatigue design constraints for a given problem are solved with the Safe Life approach in some cases, while in other cases, fatigue design constraints are solved using the following Damage Tolerant Fatigue techniques. An appropriate approach can be selected based on the intended use of a part being designed. For example, in aerospace applications, a part is commonly designed according to a damage tolerant approach. The availability of the two modes together allow for addressing a greater range of design problems.
Numerical simulation of the modeled object is performed 608 in accordance with a current version of the three dimensional shape and the one or more in-use load cases to produce a current numerical assessment of a physical response, e.g., a structural response, of the modeled object, as described above with reference to
An expected number of loading cycles for each of the one or more in-use load cases for the physical structure is determined 610 using the current numerical assessment and the measure of thickness to enforce the design criterion that limits the minimum thickness. The point with the maximum strain or stress from the current numerical assessment can be used to determine the expected number of loading cycles. Computing a current thickness and enforcing the minimum thickness is described below.
Thickness Constraint
First, a thickness constraint is computed and enforced. One or more of multiple methods can be used to measure and enforce thickness, e.g., a combination of at least two distinct thickness measures, as described below. Also, it is to be appreciated that the described Damage Tolerant Fatigue techniques can be implemented with density based topology optimization, e.g., using the SIMP method, rather than with a boundary based topology optimization, e.g., the level-set method, as described in connection with
By contrast, the sphere-fitting method is shown in geometry 600B and involves finding the diameter of the largest sphere that can fit inside the domain while touching the measuring point x in the geometry 600B. This is done by running a bisection algorithm for locating the sphere centroid xs between points x and xr. Discrete sampling locations {x1p, x2p, . . . xip} are defined on the sphere perimeter/surface using the polar angle α. The sphere with origin xs is deemed to fit inside the domain when the maximum level-set value with respect to the domain is below a particular threshold Δh, i.e., the following condition is satisfied:
maxi ψ(xip)≤Δh (122)
The sphere-fitting thickness is then defined as hs(x)=2|x−xs|. In some implementations, thickness is defined as h(x)=max{hs(x), hr(x)}. If the thickness at a point not on the domain surface is required, the point is first projected onto the surface using the normal of the level-set before the thickness on the surface is measured. The sphere-fitting method can be repeated multiple times for multiple directions (resulting in multiple xr points, depending on the selected direction). The largest sphere computed can be used to define the thickness.
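For illustration only, the following Python sketch evaluates a sphere-fitting thickness of this kind against a level-set field. The function name, the callable psi, and the quasi-uniform surface sampling used in place of the polar-angle sampling described above are assumptions, not the implementation of this specification.

```python
import numpy as np

def sphere_fit_thickness(psi, x, x_r, n_samples=64, delta_h=0.0, tol=1e-4, max_iter=50):
    """Approximate sphere-fitting thickness at a surface point x.

    psi : callable level-set function with psi(p) <= 0 inside the domain
    x   : surface measuring point (3-vector)
    x_r : interior end of the search segment (e.g., the hit point of a ray cast
          in the negative normal direction from x)
    Returns 2 * |x - x_s| for the largest sphere centre x_s found on [x, x_r]
    whose sampled surface satisfies max psi <= delta_h.
    """
    x, x_r = np.asarray(x, float), np.asarray(x_r, float)
    direction = x_r - x
    # Quasi-uniform sample directions on the unit sphere (Fibonacci spiral).
    k = np.arange(n_samples)
    polar = np.arccos(1.0 - 2.0 * (k + 0.5) / n_samples)
    azimuth = np.pi * (1.0 + 5 ** 0.5) * k
    dirs = np.stack([np.sin(polar) * np.cos(azimuth),
                     np.sin(polar) * np.sin(azimuth),
                     np.cos(polar)], axis=1)

    def fits(t):
        # Candidate centre at parameter t along the segment; radius touches x.
        x_s = x + t * direction
        r = np.linalg.norm(x_s - x)
        return max(psi(p) for p in x_s + r * dirs) <= delta_h

    lo, hi = 0.0, 1.0          # bisection on the segment parameter
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if fits(mid):
            lo = mid           # sphere fits; try a larger one
        else:
            hi = mid           # sphere pokes outside the domain; shrink
        if hi - lo < tol:
            break
    return 2.0 * np.linalg.norm(x - (x + lo * direction))
```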
A minimum thickness constraint gt can be defined as
where Γ is the domain surface and hmin is the target minimum thickness. An approximate shape derivative is given by:
The thickness constraint can be applied using methods described above with respect to Arbitrary Constraint Handling and
ptj←Sr(ξ)ptj (125)
The sigmoid function is defined as:
with ξ computed as follows
where {circumflex over (t)} identifies the start of thickness constraint application which is taken as the maximum of volume reduction iterations nv and the first violation of thickness constraint.
Although two techniques for measuring thickness are described here, it is understood that any suitable technique for measuring thickness of a body can be used, without loss of generality. Further, the techniques described can be combined, i.e., a combination of two distinct thickness measurements, to improve the results. For example, the two distinct thickness measures can include (i) a first distance measure being a length within the modeled object of a ray cast in a negative normal direction from a surface point of the modeled object, and (ii) a second distance measure being a diameter of a largest sphere that touches the surface point of the modeled object and fits inside the modeled object as determined by checking discrete sampling locations defined on the sphere's surface, such as described above with reference to
Damage Tolerant Fatigue Constraint
Next, the damage tolerant fatigue constraint is described. The material's critical fatigue crack length hd is defined as
where C is the number of fatigue cycles, ρg is the modulus of the material's fatigue crack growth curve and can be user-specified, e.g., 2.8, a is the initial crack length, which can be set as equal to the minimum defect detection size of the chosen part inspection method, with a default value of a=1 mm, I is a stress intensity factor, e.g., with a default value of I=1.22×10−12, and Y describes the crack type, e.g., with a default value Y=1. The stress range Δσ is derived from the stress ratio R, e.g., R=0.1, and the maximum stress σmax as follows:
Δσ=(1−R)σmax. (129)
The minimum thickness of the material is then given by
where RTd is the desired critical thickness to width ratio and Sƒ is the prescribed design safety factor. The RTd multiplier and Y crack type variables are related and can be found in engineering tables describing fatigue crack properties for a given material.
The crack length a is set equal to a critical crack length ac, which tends towards infinity, i.e., catastrophic failure of the part. During optimization, this gives a minimum thickness to optimize for, i.e., if the user desires an a/W ratio of 0.1 (i.e., the minimum thickness should be 10 times ac) then the ac value is calculated and a minimum thickness is applied as a multiple of this value. Generally, the value of Y is small and therefore represents a small crack thickness ratio, i.e., a large thickness of the part versus the critical crack length in the feature of the part. This has the effect of increasing the critical crack length in the feature so that it is easily detectable by the chosen inspection techniques.
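Equations 127, 128, and 130 are not reproduced in the text excerpted here, so the following Python sketch only illustrates the general idea with a standard Paris-law crack-growth integration and an assumed relation between the critical crack length, the a/W ratio RTd, and the safety factor Sf. The function names, the closed-form integration, and the MPa/millimetre units implied by the default coefficients are all assumptions.

```python
import math

def critical_crack_length(c_cycles, sigma_max, stress_ratio=0.1, a0=1.0,
                          paris_coeff=1.22e-12, paris_exp=2.8, y_crack=1.0):
    """Crack length reached after c_cycles, integrating an assumed Paris-type law
    da/dN = paris_coeff * (y_crack * d_sigma * sqrt(pi * a))**paris_exp
    in closed form (valid for paris_exp != 2).  The arguments stand in for the
    quantities C, sigma_max, R, a, I, rho_g and Y described in the text; a0 is
    in millimetres and sigma_max in MPa under the assumed units."""
    d_sigma = (1.0 - stress_ratio) * sigma_max                 # Equation 129
    geo = (y_crack * d_sigma * math.sqrt(math.pi)) ** paris_exp
    exp0 = 1.0 - 0.5 * paris_exp
    base = a0 ** exp0 + exp0 * c_cycles * paris_coeff * geo
    if base <= 0.0:
        return math.inf                # crack grows without bound before c_cycles
    return base ** (1.0 / exp0)

def minimum_thickness(a_critical, rt_ratio=0.1, safety_factor=1.5):
    """Assumed mapping from critical crack length to a minimum thickness: keep the
    critical crack a small fraction (rt_ratio = a/W) of the local thickness and
    scale by a design safety factor."""
    return safety_factor * a_critical / rt_ratio

# Example: 1e5 required cycles at 200 MPa maximum stress, R = 0.1
a_c = critical_crack_length(1e5, 200.0)
print(minimum_thickness(a_c))
```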
One way to add an optimization constraint to satisfy damage tolerant fatigue requirements is to use a constant global thickness target. Assume the design is at the fatigue limit where C is equal to the number of fatigue cycles required and σmax is the corresponding stress from an SN-curve. Equation 128 is used to compute critical crack length hm
Alternatively, the damage tolerant fatigue safety factor can be computed for each point x on the surface of the design at each iteration. This uses the stress σlc(x) and thickness h(x) at each point to compute the supported number of cycles for each load case Clc(x):
The damage tolerant fatigue safety factor of a material Sd can now be worked out similarly to the safe life fatigue method, using Miner's rule:
where Clc(x) is the number of cycles obtained from Equation 132 and clcreq is the required number of cycles for the given load case lc.
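As a hedged illustration, the sketch below applies Miner's rule in its conventional form to per-point, per-load-case cycle counts; the exact definition used in Equation 133 may differ, and the array layout and function name are assumptions.

```python
import numpy as np

def damage_tolerant_safety_factor(cycles_supported, cycles_required):
    """Fatigue safety factor via Miner's rule.

    cycles_supported : array of shape (n_points, n_load_cases), i.e., C_lc(x)
    cycles_required  : array of shape (n_load_cases,), i.e., c_lc^req
    Returns the smallest per-point safety factor 1 / sum_lc(c_lc^req / C_lc(x)).
    """
    supported = np.asarray(cycles_supported, float)
    required = np.asarray(cycles_required, float)
    # Damage fraction accumulated at each point, summed over the load cases.
    damage = (required[None, :] / supported).sum(axis=1)
    # The material safety factor is the worst (smallest) inverted damage sum.
    return float(1.0 / damage.max())
```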
During optimization, the fatigue constraint is defined as Sd−ST≥0, where ST is the target fatigue safety factor, i.e., the lowest of all material safety factors. This can be enforced using methods described above with respect to Arbitrary Constraint Handling and
A computer aided design program obtains 602 a design space for a modeled object for which a corresponding physical structure will be manufactured, one or more design criteria for the modeled object, one or more in-use load cases for the physical structure, and a critical fatigue crack length for a material from which the physical structure will be manufactured. The obtaining 602 can be performed, e.g., as described above with reference to
The program iteratively modifies a generatively designed three dimensional shape of the modeled object in the design space in accordance with the one or more design criteria, the one or more in-use load cases for the physical structure, and the one or more specifications. The iteratively modifying can include modifying both a geometry and a topology of the three dimensional shape of the object, e.g., as described above with reference to
The generatively designed three dimensional shape of the modeled object is provided 606 for use in manufacturing the physical structure corresponding to the modeled object using one or more computer-controlled manufacturing systems.
As described above with reference to Handling Arbitrary Inequality Constraints and
Also as described above with reference to
Returning to
In some implementations, the one or more in-use load cases for the physical structure includes two or more in-use load cases for the physical structure, and the one or more design criteria includes a required number of loading cycles for the modeled object for each of the two or more in-use load cases for the physical structure. In those implementations, the determining 610 the expected number of loading cycles includes determining a separate number of expected loading cycles for each of multiple points e.g., points identified as critical using one or more techniques. Alternatively, the determining 610 is done for each point x on the surface of the design at each iteration as detailed in Equations 132 and 133 on the implicit surface for each of the two or more in-use load cases.
In some implementations, the redefining 612 the fatigue safety factor inequality constraint includes summing, for each of the multiple points, load-specific damage fractions corresponding to the two or more in-use load cases. Each load-specific damage fraction includes an expected number of loading cycles, for one of the multiple points and one of the in-use load cases, divided by the required number of loading cycles for one of the in-use load cases, to produce a sum of the load-specific damage fractions for each of the multiple points. The sums are inverted, and a minimum value of the inverted sums is used to redefine the fatigue safety factor inequality constraint for the modeled object. See e.g., Equation 109.
Shape change velocities for the implicit surface are computed 614 in accordance with at least the fatigue safety factor inequality constraint. Then, the level-set representation is updated 616 using the shape change velocities to produce an updated version of the three dimensional shape of the modeled object. In some implementations, the computing 614 includes computing at least one shape change velocity using an amount determined from a shape derivative formula that approximates a shape derivative of the fatigue safety factor.
The performing 608, the determining 610, the redefining 612, the computing 614, and the updating 616 is repeated until a check 618 determines that a predefined number of shape modification iterations have been performed or that the generatively designed three dimensional shape of the modeled object in the design space has converged to a stable solution for the one or more design criteria and the one or more in-use load cases, e.g., as described above with reference to
Design Digital Twin
The relationship between a manufactured part and a designed part can be made into a digital twin so that the difference between the intended design and the physical part is reduced. This is particularly prevalent in additively manufactured parts, where effects of thickness and build angle can have a significant effect on the performance of the part. It is difficult to account for this in the design process, particularly when applying other optimization techniques, e.g., as described above with reference to
A maximized stress or strain element is found 710 for each of the one or more in-use load cases for the physical structure from the current numerical assessment of the physical response of the modeled object. The maximum stress or strain element can be the stress, strain, or both, of a point, location, or region of a modeled object. A description of Fatigue Constraints and calculating stress constraints is described above, in reference to
Dependency on Feature Size and Build Angle
Additively manufactured components (e.g., metal components) have varying strength for different build angle and component thickness values. The Von Mises stress target at a point x is modified as a function of the thickness h(x) and build angle β(x) which is measured with respect to some constant build direction. Thickness can be measured by any suitable technique or combination of techniques, including the techniques described above with reference to
where n(x) is the normal to the surface at x. An internal location for an object is a location that is not on the surface of the object. For those locations, a build angle can be computed by first projecting the internal location along a predetermined build direction, to a surface location of the three dimensional shape. Then, a build angle is determined for the internal location from an angle of the normal of the surface location with respect to the predetermined build direction, using the above-mentioned formula β(x).
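A minimal sketch of such a build angle computation follows, assuming the build angle is the angle between the outward surface normal and the constant build direction; the exact convention of the β(x) formula above may differ, and the projection and normal callables are hypothetical placeholders.

```python
import numpy as np

def build_angle(normal, build_dir=(0.0, 0.0, 1.0)):
    """Angle, in degrees, between the outward surface normal and the build direction."""
    n = np.asarray(normal, float)
    b = np.asarray(build_dir, float)
    n, b = n / np.linalg.norm(n), b / np.linalg.norm(b)
    return float(np.degrees(np.arccos(np.clip(np.dot(n, b), -1.0, 1.0))))

def build_angle_internal(point, surface_project, surface_normal,
                         build_dir=(0.0, 0.0, 1.0)):
    """Build angle for an internal location: project the point along the build
    direction onto the surface (surface_project), then evaluate the build angle
    from the surface normal there (surface_normal).  Both callables are
    caller-supplied stand-ins for the level-set queries described in the text."""
    surface_point = surface_project(point, build_dir)
    return build_angle(surface_normal(surface_point), build_dir)
```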
The Von Mises stress constraint is used and takes the following form:
where σT is a fixed stress target for each material. This now varies as a function of thickness and build angle resulting in the stress constraint being modified as
[σ(x)−{tilde over (σ)}T(h(x),β(x))]≤0 ∀x∈Ω. (135)
The variability on build angle and thickness is transferred from the stress target to the stress itself. For convenience, and while maintaining the same safety factor:
The constraint can be enforced using methods described above with reference to Arbitrary Constraint Handling and
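Because Equation 136 is not reproduced here, the sketch below only shows one plausible way to transfer the thickness and build-angle dependence from the stress target to the stress itself while keeping the fixed target σT and the same safety factor; the strength_lookup callable is a hypothetical stand-in for the material strength data.

```python
def scaled_stress(sigma, sigma_target_fixed, strength_lookup, thickness, angle):
    """Scale the local stress so that the usual fixed-target constraint
    scaled_stress <= sigma_target_fixed reproduces the varying-target constraint
    sigma <= strength_lookup(thickness, angle)."""
    sigma_target_local = strength_lookup(thickness, angle)
    return sigma * sigma_target_fixed / sigma_target_local

# Example with a hypothetical strength table lookup:
# sigma_tilde = scaled_stress(180.0, 250.0, my_lookup, thickness=3.0, angle=45.0)
```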
Next, a derivative of a shape derivative for a distance-normal-curvature penalty is presented. First, derivations for the shape derivatives of signed distance functions, functions of unit normal vectors, and surface curvature functions are presented, respectively. Then, a derivation of the combined penalty from these three shape derivatives is presented.
Shape Derivative of the Signed Distance Function
Let Ω⊂ be a domain with boundary ∂Ω. In this specification, the distance function of ∂Ω is the mapping d∂Ω:→ defined by:
For each x∈ define the set Π∂Ω(x):={y∈∂Ω:d∂Ω(x)=∥x−y∥} as the set of y∈∂Ω for which the minimum above is achieved. In the case where Π∂Ω(x) consists of exactly one point, that point is known as the nearest-point projection of x onto ∂Ω and is denoted by proj∂Ω(x). Also assign a sign to the distance function as follows: for x∈Ω, sign∂Ω(x)=−1 and for x∈\Ω sign∂Ω(x)=+1. The signed distance function of ∂Ω can now be defined as
sdf∂Ω(x):=sign∂Ω(x)*d∂Ω(x). (138)
It is easy to see that d∂Ω is 1-Lipschitz, i.e., its difference quotients are uniformly bounded by one. Thus by Rademacher's theorem, it is differentiable almost everywhere, and this is true regardless of the nature of ∂Ω. The medial axis of ∂Ω is defined as the set S⊆ where the square of the distance function is not differentiable. d∂Ω is never differentiable on ∂Ω whereas d∂Ω2 is differentiable on ∂Ω at least when ∂Ω is a differentiable surface. So the medial axis is defined using d∂Ω2 to exclude all but the exceptional points of ∂Ω. By definition, S has a Lebesgue measure equal to zero. It can be shown that the set of points x∈ for which Π∂Ω(x) consists of at least two distinct points is a subset of S.
For support in the following description, the following facts in differential geometry are presented. For ease of description, assume that the boundary of Ω is “reasonable” in the sense that its tangent planes exist and vary differentiably so that the curvature of ∂Ω can be defined using the tangential derivatives of the unit normal vector field. Also assume ∂Ω is at least C2, meaning that ∂Ω can be expressed as the union of a finite collection of graphs of C2 functions.
Let n∂Ω:∂Ω→ be the unit outward normal vector field of ∂Ω. Let k1(y), k2(y) be the principal curvatures at y∈∂Ω with respect to n∂Ω and let E1(y), E2(y) be an orthogonal basis of the tangent plane of ∂Ω at y aligned with the principal curvature directions at y.
Proposition 1. Let Ω be a domain with C2 boundary. Suppose x∉S. Then sdf∂Ω is twice-differentiable at x. If in addition, x∉∂Ω, then d∂Ω is also twice-differentiable at x. A point x∈ has a unique projection onto ∂Ω if and only if x∉S. In such a case, d∂Ω=∥x−proj∂Ω(x)∥ and the gradient of sdf∂Ω at x is
For every x∈ and every y∈Π∂Ω(x), the following holds:
−ki(proj∂Ω(x))×sdf∂Ω(x)≤1 ∀i=1,2 (140)
Moreover, the closure of S consists of all points in S together with all points for which at least one of the inequalities above holds with equality.
Suppose x is not in the closure of S. Then
Let Tε:→ be a one-parameter family of transformations of arising as the flow of a vector field Θ:→. This means: Tε satisfies the equations:
Recall that the Lagrangian derivative of a shape-dependent function fΩ:→ with respect to this variation is defined as:
and the Eulerian derivative of fΩ with respect to this variation is defined as:
fΩ′(x):={dot over (f)}Ω(x)−∇fΩ(x),Θ(x) (146)
The next result gives the Eulerian derivative of the signed distance function.
Proposition 2. If x does not belong to the closure of S, then the Eulerian derivative of the signed distance function with respect to the variation generated by a vector field Θ at x satisfies:
sdf∂Ω′(x)=−Θ(proj∂Ω(x)),n∂Ω(proj∂Ω(x)) (147)
A proof follows. First, compute the Lagrangian derivative of the signed distance function with respect to the variation generated by a vector field Θ at x. For this sdfTε(∂Ω) (Tε(x))=∥Tε(x)−yε∥ is used where yε minimizes ∥Tε(x)−y∥ over all y∈Tε(∂Ω). The result is
[sdf∂Ω(x)]=n∂Ω(proj∂Ω(x)),Θ(x)−Θ(proj∂Ω(x)) (148)
which is obtained with the help of differentiating the condition sdf∂Ω(Tε−1())=0. Similarly, it can be shown that the gradient term in the definition of the Eulerian derivative is n∂Ω(proj∂Ω(x)), Θ(x). The desired formula follows.
Consider a shape function of the form:
Φ(Ω):=∫Ωϕ(x,sdf∂Ω(x))dx (149)
where ϕ is a smooth function of its arguments. The shape derivative of Φ is computed using the standard tools of shape calculus, as well as the formula for the Eulerian derivative of the signed distance function.
Proposition 3. Let Tε:→ be a one-parameter family of transformations of arising as the flow of a vector field Θ: →. Then
where Θ⊥ is the normal component of Θ on ∂Ω and ∂2ϕ is the partial derivative of ϕ with respect to its second argument. If the Eulerian derivative of a quantity is denoted by a prime, then
Observe that the first integral appearing in the formula above is not in the standard form of the Hadamard-Zolésio structure theorem, namely a surface integral over ∂Ω. The formula does still depend on Θ⊥ on ∂Ω, so the theorem still holds. Using a change of variables based on the co-area formula, this integral can be expressed in the standard form. The following result is due to Dapogny et al., “Geometric Constraints for Shape and Topology Optimization in Architectural Design,” Computational Mechanics, Springer Verlag, 59(6), pp. 933-965 (2017).
Proposition 4. Let Ω be a domain with C2 boundary. Then
where τ(y) is the distance to the medial axis of ∂Ω along the ray emanating inwards from y∈∂Ω and J(y, τ) is the Jacobian factor of the change of variables, which is
[J(y,τ)]−1=Πi=12(1+τki(y)) (153)
Note the Jacobian factor above has a particularly nice alternate form. Expanding the product yields:
where H∂Ω and K∂Ω are the mean curvature and the Gauss curvature of ∂Ω respectively. This is because these are the invariants of the second fundamental form of ∂Ω, of which the principal curvatures are the eigenvalues.
Shape Derivative of a Function of the Unit Normal Vector
Let Σ be any oriented, smooth surface embedded in Euclidean space and let Y:→ be a smooth vector field defined on the background Euclidean space that is completely independent of Σ. Consider shape functions of the form:
Φ(Σ):=∫Σϕ(x,Y(x),NΣ(x))dσ(x) (155)
where ϕ: ×→ is a smooth function of its arguments. The objects appearing in the integrand above are: the unit normal vector NΣ of Σ and the surface area element dσ of Σ. The shape derivative of Φ at a given shape Σ is calculated with respect to any given shape variation. This means: let Tε(Σ) be a variation of Σ generated by a one-parameter family of transformations Tε of Euclidean space; then the shape derivative with respect to this variation is
To calculate the shape derivative of Φ, the transformation Tε is first described in more detail. There are a number of ways to proceed, and all are equivalent to first order in ε. The velocity method of Hadamard is chosen for the following description because it is well-suited to level set-based shape optimization, but it is understood that other methods can be used, such as those described in M. C. Delfour and J. P. Zolésio, Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization, (2nd Ed. 2011).
Suppose Tε:→ is the transformation of arising as the flow of a vector field Θ:→. In other words, Tε satisfies the equations
∇∥ is the tangential gradient operator, div∥ is the tangential divergence operator, and HΣ is the mean curvature of Σ. Also, ϕ(x,Y(x),NΣ(x)) is written as ϕ(x,q(x)) with q(x):=Y(x),NΣ(x) to emphasize the structure of the integrand, and the partial derivative of ϕ with respect to its second argument is denoted as
Proposition 1. Let Σ be an oriented, smooth surface embedded in Euclidean space and let Tε be a variation of Σ generated by a vector field Θ:→. The shape derivative of Φ at Σ is given by
where Θ⊥ is the normal component of Θ on Σ, and
means the total directional derivative in the normal direction. A proof follows. Apply the calculus of Eulerian derivatives to obtain:
where the prime denotes the Eulerian derivative.
q′=Y,NΣ′=Y′,NΣ+Y,NΣ′=−Y∥,∇∥Θ⊥, because Y is shape-independent so its Eulerian derivative vanishes, and NΣ′=−∇∥Θ⊥. Thus
after integrating by parts.
Consider a specific application of the formula derived in Equation 160. That is, the formula is applied to the function ϕ from Dapogny et al., namely
ϕ(x,NΣ):=∥Y−NΣ∥2=2−2Y,NΣ (161)
where Y is a smooth, shape-independent unit vector field defined on Euclidean space. Since as a function of its argument, ϕ(x,q)=2−2q, then
is the constant function
Therefore, in this case, Proposition 1 yields:
At this point the identity
is used, which is valid for any vector field on the background Euclidean space restricted to Σ. Therefore,
This formula is almost identical to the formula in the paper by Dapogny et al. There are two differences: the first is that only the tangential divergence operator appears in the paper by Dapogny et al.; the second is that the mean curvature appears with the opposite sign in the paper by Dapogny et al. The first difference is very likely due to an error, whereas the second difference is due to the fact that the paper by Dapogny et al. uses the opposite sign convention for the mean curvature. In the paper, HΣ is defined with respect to the inward unit normal, and here it is defined with respect to the outward unit normal. (The former convention has the advantage that the mean curvature of the unit sphere is positive and equal to 2, while the latter convention has the advantage that the theoretical formulas contain fewer minus signs in total despite the fact that the mean curvature of the unit sphere is −2.)
One can show that the mean curvature of a level set of the form {x∈:F(x)=0} for some level set function F:→ is equal to −div(∇F/∥∇F∥) where ∇F/∥∇F∥ is the unit normal vector field of the level set. Thus if the Y in the formula of Proposition 1 is of this form, the shape derivative can be interpreted as the difference of the mean curvature of Σ and the mean curvature of the level set.
A second example concerns the 2.5D manufacturability penalty. In this case, the penalty function Φ is used as before, but the integrand is the function
ϕ(x,q(x)):=[q(x)]2E(x) (164)
where q(x):=Y(x),NΣ(x) as before, but with a crucial difference: Y is in fact a constant vector field that represents the milling direction; thus
The function E(x) is a product of offset exponential bump functions. Therefore it follows that
since Y is constant.
Shape Derivative of Surface Curvature Functions
Let Σ be a smooth oriented surface in . Consider shape functions which depend on the surface curvature invariants of Σ (i.e. the mean curvature and the total curvature) of the form:
Φ(Σ):=∫Σϕ(HΣ,∥AΣ∥2)dσ (166)
where HΣ is the mean curvature of Σ, ∥AΣ∥2 is the total curvature of Σ, and ϕ is a smooth function of its arguments.
A description follows for computing a shape derivative for shape functions of this form. In other words, it will be shown how to compute the derivative
for any variation Tε(Σ) where Tε:→ is a one-parameter family of transformations of arising as the flow of a vector field Θ:→. This means: Tε satisfies the equations
The principal geometric objects associated to the surface Σ are the induced surface metric tensor hΣ, i.e., the first fundamental form, the unit normal vector field NΣ, and the second fundamental form AΣ. A calculation of the derivatives of these quantities (or rather of their components with respect to a local basis of the tangent bundle of Σ, because these quantities are tensorial) with respect to the variation introduced above now follows.
To carry out the calculations, some preliminary notions are introduced in the context of embedded surfaces in Riemannian geometry. First, a local parametrization of Σ is used near an arbitrary point x∈Σ to parametrize a sufficiently small tubular neighborhood of Σ near x by means of the mapping (ε,x)↦Tε(x). Under this parametrization, the ε-direction maps to the vector field Θ. A pair of vector fields {tilde over (E)}1, {tilde over (E)}2 is introduced that are linearly independent sufficiently near Σ and tangent to Tε(Σ) for every ε. This is achieved by pushing the local coordinate basis for the tangent bundle of Σ with respect to the parametrization forward to a sufficiently small tubular neighborhood of Σ. This means: if E∘1, E∘2 denotes this basis, then the following definition is used: {tilde over (E)}i(y):=DTε(E∘i(x)) for any y sufficiently near Σ of the form y=Tε(x) for some x∈Σ. Here DTε is the matrix of partial derivatives of Tε. Second, a vector field N is defined on this tubular neighborhood with the property that if y sufficiently near Σ is of the form y=Tε(x) for some x∈Σ, then N(y) is the unit normal vector field of Tε(Σ) at y. The notation X=X∥+X⊥N is used for the orthogonal decomposition of a vector field X defined on this tubular neighborhood with respect to N. Note that if y=Tε(x) for some x∈Σ, then X∥(y) is tangent to Tε(Σ).
Let hε and Aε be the pull-backs of the induced surface metric tensor and second fundamental form of Tε(Σ), respectively, to Σ. Then their components with respect to the local basis {tilde over (E)}1, {tilde over (E)}2 satisfy
[hε]ij={tilde over (E)}i,{tilde over (E)}j∘Tε and [Aε]ij=N,∇{tilde over (E)}i{tilde over (E)}j∘Tε.
Let [hε]ij denote the components of the inverse of the matrix hε. (Note that the subscripted ε is dropped when referring to these quantities on Σ.) Next, the vector field N satisfies
N,N≡1 and N,{tilde over (E)}i≡0. (170)
Finally, by differentiating the defining equations for N, the following is deduced:
∇{tilde over (E)}
These two equations uniquely determine the vector field ∇{tilde over (E)}
Lemma 1. The following formula holds:
∇ΘN=−Σklhkl(AΣ(Ek,Θ∥)+Ek(Θ⊥))El. (172)
Proof. The equation N,N≡1 is differentiated in the Θ-direction to deduce that ∇ΘN must be tangential. Then the equations N,{tilde over (E)}i≡0 for i=1, 2 are differentiated in the Θ-direction to deduce
for i=1, 2. This implies the result.
Lemma 2. Suppose Tε is the variation of Σ generated by the vector field Θ and let [hε]ij be the components of the pull-back of the induced surface metric tensor of Tε(Σ) to Σ with respect to any local basis E1, E2 for the tangent bundle of Σ. Then
where Aij are the components of the second fundamental form of Σ with respect to the local basis and ∇∥ denotes the surface covariant derivative operator.
Proof. The calculations can be performed in any local basis since it can be shown that the results are independent of the basis (this is a foundational principle of differential geometry). Therefore the local basis introduced above is used at an arbitrary x∈Σ. Hence, by the definition and properties of covariant derivatives, the following results:
since the difference between the covariant derivatives above is the Lie bracket [Θ, {tilde over (E)}i], which vanishes because both Θ and {tilde over (E)}i are push-forwards of coordinate basis vector fields. (In this calculation the notation
is used, where ƒ:→ is any differentiable function; this notation reflects the standard differential geometric practice of conflating the concept of a vector field X and the directional derivative operator in the X-direction.) Next, using the definition of the second fundamental form of Σ, the following is obtained:
Lemma 3. Suppose Tε is the variation of Σ generated by the vector field Θ and let [Aε]ij be the components of the pull-back of the second fundamental form of Tε(Σ) to Σ with respect to the local basis introduced above. Then
Proof. Again, the calculations can be performed in any local basis since it can be shown that the results are independent of the basis. Therefore the local basis introduced above is used at an arbitrary x∈Σ. By the definition of the pull-back of the second fundamental form of Tε(Σ), its components with respect to this local basis satisfy [Aε]ij=N,∇{tilde over (E)}i{tilde over (E)}j∘Tε.
Using Lemma 1 in the first term of Equation 178 gives
∇ΘN,∇{tilde over (E)}
The following deductions are made for the second term of Equation 178:
since N,N=1 and the equations uniquely characterizing ∇{tilde over (E)}
N,∇
{tilde over (E)}
∇{tilde over (E)}
Substituting the two derivations into Equation 178 now yields
using the definition ∇E
The surface curvature invariants of Σ are: the total curvature, i.e., the squared norm of the second fundamental form with respect to the induced surface metric tensor, denoted ∥AΣ∥2; and the mean curvature, i.e., the trace of the second fundamental form with respect to the induced surface metric tensor, denoted HΣ. Note that the Gauss curvature or intrinsic curvature of Σ is given in terms of these invariants by the formula GΣ:=(HΣ2−∥AΣ∥2)/2. The derivatives of these quantities are now given with respect to the variation introduced above.
Proposition 4. Suppose Tε is the variation of Σ generated by the vector field Θ and let Hε be the mean curvature of Tε(Σ). Then
where Δ∥ is the surface Laplace-Beltrami operator.
Proof. Lemmas 2 and 3 are applied to the definition Hε∘Tε:=[hε]ij[Aε]ij. That is,
As required, Δ∥Θ⊥ is simply identified where it appears, and the Codazzi equation is used to replace the ∇∥AΣ term with the ∇∥HΣ term.
Proposition 5. Suppose Tε is the variation of Σ generated by the vector field Θ and let ∥Aε∥2 be the total curvature of Tε(Σ). Then
Proof. Apply Lemmas 2 and 3 to the definition ∥Aε∥2∘Tε:=[hε]ik[hε]jl[Aε]ij[Aε]kl. The required calculations are similar to those in the proof of the previous proposition.
Variation of Integrated Surface Curvatures
In this section, surface integrals of the surface curvature invariants of Σ are considered. First, the following preliminary result is described.
Lemma 6. Suppose Tε is the variation of Σ generated by the vector field Θ and let [hε]ij be the components of the pull-back of the induced surface metric tensor of Tε(Σ) to Σ with respect to any local basis E1, E2 for the tangent bundle of Σ. Let ϕΣ:Σ→ be a surface-dependent scalar function that is shape-differentiable. Then
where div∥ is the surface divergence operator and HΣ is the mean curvature of Σ.
Proof. By the change of variables formula for surface integrals,
To conclude, Lemma 2 and the following calculation are used:
Proposition 7. Let Φ be a surface integral operator of the form introduced above. Suppose Tε is the variation of Σ generated by the vector field Θ. Then
Substituting the results of Propositions 4 and 5:
since all terms containing Θ∥ can be shown to equal ∫Σdiv∥(ϕ(HΣ,∥AΣ∥2)Θ∥)dσ which vanishes by the generalized Stokes' Theorem.
Shape Derivative of a Combined Distance-Normal Curvature Penalty
Let Ω be a domain in . The general form of a class of shape functions for penalizing geometric features of Ω is now presented, based on the signed distance function, as well as the normal vector field and the curvature invariants of the boundary surface ∂Ω. Specifically, consider the shape function:
(Ω):=∫Ωϕ(x,d∂Ω(x))dx+∫∂Ωψ(x,d∂Ω(x),n∂Ω(x),H∂Ω(x))dσ(x) (191)
where ϕ and ψ are smooth functions of their arguments, and d∂Ω, n∂Ω, H∂Ω are the signed distance function, the normal vector field, and the mean curvature of ∂Ω, respectively. Most of the geometric performance functions, e.g., surface area, Willmore energy for shape smoothing, thickness penalty, overhang angle penalty can be cast in this form.
The calculation of the shape derivative of Equation 191 breaks up into pieces thanks to the Chain Rule for shape differentiation. Let Ωε be a variation of Ω generated by the boundary velocity function Θ:∂Ω→, with tangential and normal parts denoted by Θ∥ and Θ⊥ respectively. Then,
where ● denotes the so-called Lagrangian derivative and ∂k denotes taking the partial derivative with respect to the kth slot of the operand. The Lagrangian derivative of a function ƒ is defined as the derivative of ƒ along the variation induced by Θ. The Lagrangian derivative of a vector field is defined similarly.
Each of the Lagrangian-differentiated expressions above has a formula in terms of Θ∥ and Θ⊥ which have been derived above. Moreover, it can be shown that the dependence on Θ∥ drops out of the final expression entirely (this result is known as the Hadamard-Zolésio Structure Theorem). This is shown by introducing the Eulerian derivative defined by ƒ′:=ƒ●−∇ƒ·Θ for volumetric integrands, and ƒ′:=ƒ●−∇∥ƒ·Θ∥ for surface integrands. Now thanks to Stokes' Theorem, all the explicit dependence on Θ∥ disappears:
The remaining Eulerian-differentiated terms can be handled using the following equations derived above. These are:
[d∂Ω]′=−Θ⊥∘proj∂Ω (194)
[n∂Ω]′=−∇∥Θ⊥ (195)
[H∂Ω]′=Δ∥Θ⊥+∥A∂Ω∥2Θ⊥ (196)
where proj∂Ω is the nearest-point projection onto ∂Ω, Δ∥ is the Laplace-Beltrami operator of ∂Ω, and ∥A∂Ω∥2 is the squared Frobenius norm of the second fundamental form of ∂Ω (a.k.a. the total curvature or sum of the squared principal curvatures). Substituting these into Equation 193 yields the shape derivative formula.
The specification includes information, e.g., a dataset or a function of thickness, build angle, or both, to generate a corresponding strength for the material. The specification indicates multiple strength values for the one or more materials that are each dependent upon one or both of a thickness of the physical structure to be built using the one or more materials and a build angle. The object to be manufactured is additively manufactured using the one or more materials.
A generatively designed three dimensional shape of the modeled object is produced 704, including modifying both a geometry of the three dimensional shape and a topology of the three dimensional shape, in accordance with the design criteria, the at least one in-use load case, and the specification of one or more materials. The producing 704 includes varying, during object shape and topology modification of the modeled object, evaluation of the stress constraint at different locations on or in the modeled object in accordance with respective values from the multiple strength values. In some implementations, other constraints are used, e.g., a strain constraint or a displacement constraint. Each strength value corresponds to one or both of the thickness and the build angle at each one of the different locations. Note that the described Design Digital Twin techniques can be implemented with density based topology optimization, e.g., using the SIMP method, rather than a boundary based topology optimization, e.g., the level-set method as described in connection with
In some implementations, each strength value corresponds to the thickness or both the thickness and the build angle at each one of the different locations. Further, the producing 704 includes: measuring the thickness at each one of the different locations; and computing a respective strength value based on at least the measured thickness at each one of the different locations. As described above with reference to
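As a hedged illustration of evaluating such a specification, the sketch below bilinearly interpolates a tabulated strength dataset indexed by thickness and build angle; the grid layout and function name are assumptions rather than the specification format described above.

```python
import numpy as np

def bilinear_strength(thickness_grid, angle_grid, strength_table, h, beta):
    """Bilinear interpolation of a strength table with shape
    (len(thickness_grid), len(angle_grid)); queries outside the grids are
    clamped to the nearest edge.  Both grids must be ascending with >= 2 entries."""
    hs = np.asarray(thickness_grid, float)
    bs = np.asarray(angle_grid, float)
    tab = np.asarray(strength_table, float)
    i = int(np.clip(np.searchsorted(hs, h) - 1, 0, len(hs) - 2))
    j = int(np.clip(np.searchsorted(bs, beta) - 1, 0, len(bs) - 2))
    th = np.clip((h - hs[i]) / (hs[i + 1] - hs[i]), 0.0, 1.0)
    tb = np.clip((beta - bs[j]) / (bs[j + 1] - bs[j]), 0.0, 1.0)
    return float((1 - th) * (1 - tb) * tab[i, j] + th * (1 - tb) * tab[i + 1, j]
                 + (1 - th) * tb * tab[i, j + 1] + th * tb * tab[i + 1, j + 1])
```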
In some implementations, modifying both the geometry and the topology of the three dimensional shape includes enforcing a design criterion that limits a minimum thickness of the generatively designed three dimensional shape of the modeled object, e.g., a minimum thickness being based on the critical fatigue crack length for the material, as described with reference to
Returning to
A fatigue safety factor inequality constraint is redefined for the modeled object based on a damage fraction calculated from the required number of loading cycles for the modeled object and the expected number of loading cycles for each of the one or more in-use load cases for the physical structure. Techniques for Handling Arbitrary Constraints, including inequality constraints can be applied, as described above with reference to
Shape change velocities for an implicit surface in a level-set representation of the three dimensional shape are computed 716 in accordance with at least the fatigue safety factor inequality constraint. The level-set representation is updated 718 using the shape change velocities to produce an updated version of the three dimensional shape of the modeled object. In some implementations, computing the shape change velocities includes computing at least one shape change velocity using a gradient determined from a shape derivative of one or both of the thickness and the build angle at each of the different locations, e.g., the shape derivative shown in Equation 193 and derived above.
Alternatively or in combination, i.e., where there are multiple constraints, shape change velocities can be computed using at least one shape change velocity with an amount determined from a shape derivative formula that approximates a shape derivative of one or both of the thickness and the build angle at each of the different locations. One example shape derivative formula is the volume shape derivative formula, described above with reference to Equation 105.
As described above with reference to Arbitrary Inequality Constraints and Equations 91-102, importance factors can be applied to inequality constraints to modify a volume fraction based inequality constraint using a shape derivative formula that includes the volume fraction based inequality constraint. This constraint can also be a minimum thickness inequality constraint described above with reference to Equations 123-127, or a stress based inequality constraints, as described above with reference to
The performing 708, the finding 710, the determining 712, the redefining 714, the computing 716 and the updating 718 is repeated until a check 720 determines that a predefined number of shape modification iterations have been performed or that the generatively designed three dimensional shape of the modeled object in the design space has converged to a stable solution for the one or more design criteria and the one or more in-use load cases, as described above with reference to
Singularities and Disconnections
Stress constraints, including Von Mises stress constraints, can be essential for running generative design, but they are hard to implement due to a few elements in a Finite Element Analysis model showing very high stresses. A probabilistic method is provided, below, that can mitigate or outright eliminate such high stresses. Although the examples below refer to stress constraints, these techniques can be applied to avoiding singularities and preventing disconnections during generative design for any constraint. Singularities can occur, for example, due to sharp re-entrant corners or bad meshing. The process described below can be implemented to be performed automatically as part of a generative design process, e.g., including the techniques described above with reference to
One solution for avoiding singularities is to replace the maximum stress by a simple percentile value σx in Equation 134, modified below.
where σx denotes the stress of the element in the xth percentile when elements are sorted in increasing order of σ. However, this results in the maximum stress in the domain, maxΩσ, being too sensitive to the value of x. Instead, the percentile x is converted into a standard normal deviate z using the inverse of the error function
Next, the maximum stress is computed from
where μ(σ) and χ(σ) denote the mean and standard deviation of the stress distribution, both of which can be computed according to any conventional technique.
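A minimal sketch of this percentile-based smoothing follows; because Equation 199 is not reproduced here, the sketch mirrors the reference-value formula of Equation 201 (mean plus erf⁻¹(x) standard deviations), which may differ from the exact form.

```python
import numpy as np
from scipy.special import erfinv

def percentile_max_stress(stresses, percentile=0.99):
    """Smoothed 'maximum' stress: convert the user percentile to a normal deviate
    with the inverse error function and add that many standard deviations of the
    stress distribution to its mean."""
    s = np.asarray(stresses, float)
    return float(s.mean() + erfinv(percentile) * s.std())
```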
TABLE 1, below, shows stress singularities for an object that are avoided based on the techniques described in this section:
The percentile based method brings stability to the oscillatory nature of global stress, by smoothing the maximum stress at a point based on the standard deviation of stress values for points on the object in the same percentile.
High Velocity Smoothing
The previous section described regulating the maximum value of constraint values in the domain, i.e., maxΩgi. The same is true for the constraint derivative dgt/dΩ, denoted by dgt for clarity.
Normalization: all shape derivatives can be normalized such that the magnitude of the maximum value on the surface of the current design Γ approximates the voxel size Δs
This implies an advection time of T=1/Δs is sufficient to prevent the geometry from advecting beyond one voxel. An easy way to achieve this is through velocity clamping.
Velocity clamping: the first step in velocity clamping is computing a reference value for the shape derivative following the percentile approach described above with reference to Equations 197-199.
(dgt)ref=μ(dgt)+erf−1(x)χ(dgt) (201)
where μ and χ denote the mean and standard deviation of the shape derivative on the surface and x∈{0.9,0.95,0.99,0.999 etc.} is a user-given percentile. Note that the percentile value x could be much higher for well behaved shape derivatives (without excessive highs) such as strain energy and lower for widely fluctuating shape derivatives, such as stress.
Next, for all grid points xk in the inner narrow band, i.e., a narrow band width equal to wnbΔs, velocity values higher than the reference value (dgt)ref are clamped while preserving the sign, as follows:
where ψ(xk) denotes the level-set value of the grid point. However, this results in a sudden change in the velocity profile. Velocity clamping should be done in such a manner that there is a smooth transition in the velocity profile.
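For illustration, the sketch below clamps shape-derivative values against a percentile-based reference magnitude while preserving sign; the level-set weighting of Equation 202 for narrow-band grid points is omitted, so this is only an assumed simplification.

```python
import numpy as np
from scipy.special import erfinv

def clamp_shape_derivative(dg, percentile=0.95):
    """Clamp velocity (shape derivative) values whose magnitude exceeds a
    percentile-based reference, preserving sign.  Returns the clamped values and
    the reference magnitude computed as in Equation 201."""
    dg = np.asarray(dg, float)
    ref = abs(dg.mean() + erfinv(percentile) * dg.std())
    return np.clip(dg, -ref, ref), ref
```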
High velocity smoothing: The objective in high velocity smoothing is to smooth all high velocities (|dgt|>|(dgt)ref|) such that there is a gradual smoothing of all values above the reference velocity computed from Equation 201. A user-given parameter ρ, e.g., ρ=0.85, is used to scale all velocity values less than the reference velocity
where kref denotes the position index of the grid point once the grid points are sorted in order of the shape derivative.
The smoothed velocity of all grid points with velocity greater than the reference velocity are found by fitting a cubic polynomial, as follows:
(dgt(xk))smooth=aξ3+bξ2+cξ+d kref≤k≤n (204)
where ξ denotes the shifted index ξ=k−kref. The unknown coefficients {a, b, c, d} can be found by solving the linear system of equations obtained by setting the appropriate boundary conditions at ξ=0 and ξ=ξmax=n−kref
The gradient of the smooth velocity at ξ=0 can be obtained as
Finally, the smoothed normalized velocity is given by
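Because Equations 203 through 207 are not reproduced in the text excerpted here, the sketch below shows one assumed realization of the smoothing: values below the reference are scaled by ρ, and values above it are replaced by a cubic in the shifted index ξ with placeholder boundary conditions (value and slope continuity at ξ=0, and a flat cap equal to the reference at ξ=ξmax).

```python
import numpy as np

def smooth_high_velocities(dg, ref, rho=0.85):
    """Smooth the sorted velocity magnitudes above a reference value with a cubic
    in the shifted index xi = k - k_ref, after scaling below-reference values by
    rho.  The boundary conditions here are assumptions, not Equations 205-207."""
    dg = np.asarray(dg, float)
    order = np.argsort(np.abs(dg))           # sort grid points by |shape derivative|
    mags = np.abs(dg[order])
    k_ref = int(np.searchsorted(mags, ref))  # first index at or above the reference
    n = len(mags)
    smoothed = mags.copy()
    smoothed[:k_ref] *= rho                  # assumed role of Equation 203
    if 0 < k_ref < n - 1:
        xi_max = float(n - 1 - k_ref)
        v0 = smoothed[k_ref - 1]                                  # value continuity
        s0 = smoothed[k_ref - 1] - smoothed[max(k_ref - 2, 0)]    # slope continuity
        A = np.array([[0, 0, 0, 1],                       # value at xi = 0
                      [0, 0, 1, 0],                       # slope at xi = 0
                      [xi_max**3, xi_max**2, xi_max, 1],  # value at xi = xi_max
                      [3 * xi_max**2, 2 * xi_max, 1, 0]], # slope at xi = xi_max
                     float)
        a, b, c, d = np.linalg.solve(A, np.array([v0, s0, ref, 0.0]))
        xi = np.arange(n - k_ref, dtype=float)
        smoothed[k_ref:] = a * xi**3 + b * xi**2 + c * xi + d
    out = np.empty_like(dg)
    out[order] = np.sign(dg[order]) * smoothed
    return out
```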
Returning to
Then, the shape change velocities are changed 820 in accordance with a polynomial function that has been fit to at least a portion of the shape change velocities above a reference velocity. For example, the polynomial function can be a cubic polynomial, as described above with reference to Equation 204. Although the example given is a cubic polynomial, other different-order polynomials can be used.
The level-set representation is updated 822 using the shape change velocities to produce an updated version of the three dimensional shape of the modeled object, e.g., as described above with reference to
The performing 816, the computing 818, the changing 820, and the updating 822 is repeated in the iteratively modifying until check 830 determines that a predefined number of shape modification iterations have been performed or that the generatively designed three dimensional shape of the modeled object in the design space has converged to a stable solution for the one or more design criteria and the one or more in-use load cases, e.g., as described above with reference to
Backup and Restore
Another technique for preventing geometric disconnections is backing up all critical data, including the geometric level-set at each iteration, and restoring the data when the result of advection is not desirable. Thus, in some implementations, after the updating 822 but before the repeating, excessive changes that were made during the updating can be checked 824 for. In those cases, the current version of the three dimensional shape can be set 826 as the updated version of the three dimensional shape for a next iteration to undo the excessive change, i.e., the excessive change is undone. Then, shape changes for the next iteration of the iteratively modifying can be slowed 828, as described below. Slowing the changes for the next iteration can include reducing a target volume change for the generatively designed three dimensional shape of the modeled object for the next iteration of the iteratively modifying.
To prevent a repeat of the same undesirable outcome, the volume change Δvt computed for each iteration t, as described above with reference to
Δvt←βΔvt. (208)
This multiplier can be initiated as β=1 at the start and updated in every iteration depending on the outcome of each advection using a fixed increment Δβ(>0):
β←max(0,β−Δβ) if advection undesirable (209)
β←min(1,β+Δβ) otherwise (210)
The next task is to classify the outcome of each advection as desirable/undesirable. One way to achieve this is to monitor the change in the Lagrangian, i.e., as described above with reference to Equations 15-25. This can be implemented in a more granular manner by imposing limits on allowable relative changes for objectives. The relative change in an objective/constraint for a given iteration t is defined as
where the constraint error etj is computed using a PID stabilized version of Equation 95 with the importance factor term ptj computed using techniques described above with reference to Arbitrary Inequality Constraints and
The maximum allowable change in objectives is limited by prescribed bounds for objective decrease and increase, respectively. The maximum allowable change in constraints is limited by prescribed bounds for positive (good) and negative (bad) changes in constraints, respectively. The sign of the constraint inequality needs to be checked to determine the appropriate limit.
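The sketch below illustrates one assumed way to classify an advection step and update the volume-change multiplier β per Equations 208 through 210; the per-quantity limits, the increment Δβ, and the dictionary layout are placeholders.

```python
def classify_and_update(beta, relative_changes, limits, d_beta=0.1):
    """Classify the outcome of one advection step from the relative changes of the
    monitored objectives/constraints and update the multiplier beta used to scale
    the next target volume change (delta_v_next = beta * delta_v, Equation 208).

    relative_changes : dict mapping a quantity name to its relative change
    limits           : dict mapping the same names to (allowed_decrease, allowed_increase)
    """
    desirable = all(lo <= relative_changes[name] <= hi
                    for name, (lo, hi) in limits.items())
    if desirable:
        beta = min(1.0, beta + d_beta)   # gradually restore the full step size
    else:
        beta = max(0.0, beta - d_beta)   # restore the backup and slow shape changes
    return desirable, beta
```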
Thus, in some implementations, before the performing 816 is done, elements that are generated from the current version of the three dimensional shape for the numerical simulation, but that are partially but not entirely within the implicit surface with Dirichlet boundary conditions, are identified and removed 815 before performing the numerical simulation. The elements and nodes refer to how the design geometry is represented during the performing 816. In finite element simulation, the domain is replaced by a collection of elements. Each element is defined by a set of nodes along the boundary of the element and, in some cases, inside the element. Generally, the more elements and nodes in the model, the higher the accuracy of the simulation. The set of techniques for creating these nodes and elements is referred to as meshing.
During optimization, the shape of the current domain repeatedly changes. The model can be re-meshed at every iteration for the finite element model to accurately represent the current domain. Since this is computationally expensive, some generative design processes deactivate the elements that lie outside the current domain at each iteration. The representation of the current design can be improved in the numerical simulation model by removing elements not entirely within the implicit surface. The overall accuracy can be improved by considering such cut elements as having partial stiffness.
Further, in some implementations, the checking 824 for the excessive change can include comparing a change in the one or more design criteria resulting from the updating with a predefined limit on an amount of change allowed for the one or more design criteria in a single iteration of the iteratively modifying.
Ersatz Materials
A common practice in the SIMP method of topology optimization is to use variable density materials or ersatz materials. Elements with density ρ=1 are deemed to be inside the domain whereas those with ∈<ρ<1 are on the boundary, with ∈ (typically set to a small value such as 0.001) denoting the density of materials outside the design. During finite element analysis, the stiffness of each element is multiplied by the density along with a penalty factor p to penalize intermediate densities
K←ρpK. (214)
The level-set method often uses only two states (ρ∈{∈,1}), as the boundary is more clearly defined. The penalty factor is not required as there are no intermediate densities, i.e., K←ρpK. However, this sometimes leads to disconnections as elements outside the domain with ρ=∈ can support the load path.
Additionally, it has been observed that having material with ρ=∈ severely affects the buckling safety factor predicted from the finite element model. Ersatz material can be removed completely from the finite element model where there is a significant impact on the predicted buckling factor. This is achieved by grouping all elements with ρ=∈ at each iteration and removing all such groups that are not connected to any nodes with Dirichlet boundary conditions.
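As a hedged sketch of this grouping step, the following flood fill groups the ersatz elements by shared nodes and drops every group that touches no Dirichlet-constrained node; the mesh data layout and function name are assumptions.

```python
from collections import defaultdict, deque

def floating_ersatz_elements(element_nodes, densities, dirichlet_nodes, eps=1e-3):
    """Return the indices of ersatz (density == eps) elements whose connected
    group touches no node with a Dirichlet boundary condition.

    element_nodes   : list of node-index tuples, one per element
    densities       : list of element densities
    dirichlet_nodes : iterable of node indices carrying Dirichlet conditions
    """
    ersatz = [e for e, rho in enumerate(densities) if abs(rho - eps) < 1e-12]
    node_to_elems = defaultdict(list)
    for e in ersatz:
        for n in element_nodes[e]:
            node_to_elems[n].append(e)
    dirichlet = set(dirichlet_nodes)
    keep, seen = set(), set()
    for start in ersatz:
        if start in seen:
            continue
        group, queue, anchored = [], deque([start]), False
        seen.add(start)
        while queue:                        # flood fill one connected group
            e = queue.popleft()
            group.append(e)
            for n in element_nodes[e]:
                anchored |= n in dirichlet
                for nb in node_to_elems[n]:
                    if nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        if anchored:
            keep.update(group)
    return [e for e in ersatz if e not in keep]
```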
As noted above, in some implementations of the process shown in
As another solution to severe disconnection problems, a tetrahedral-cutting algorithm can be applied, which reinstates the use of ersatz materials for elements on the boundary of the current domain. Essentially, the stiffness of such elements is better approximated by setting the density equal to the volume fraction of the element inside the domain. The stiffness is then penalized according to the density, as in the SIMP method, with a penalty factor of p=1:
Computing the volume of each element inside the domain is not trivial, as some elements with valid overlap with the domain may have all nodes outside the domain.
where xi denotes the nodal coordinates of element e. Note that certain elements (e.g., element 810 e80) may have to be subdivided multiple times until they satisfy the condition of Equation 216. The required depth of subdivision l can be determined by recursively subdividing the element and comparing the total volume inside the domain
where C(eil) denotes the indices of the children of element eil. The recursive subdivision can be stopped when the above condition is true. In some implementations, the voxel size Δs is set to half the average edge length of solid elements. Given the smallest feature size≈Δs, only 1 level of subdivision is typically sufficient.
A tetrahedron element can be cut by the domain. When computing the intersection point of an element edge with the 0th iso-contour, linear interpolation can be used
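The sketch below illustrates the linear interpolation of an edge's zero crossing and a basic tetrahedron volume helper on which such a cutting algorithm could build; both are generic geometry utilities, not the specific algorithm of this specification.

```python
import numpy as np

def edge_zero_crossing(x_a, x_b, psi_a, psi_b):
    """Linearly interpolated point on edge (x_a, x_b) where the level-set crosses
    zero, given nodal level-set values psi_a and psi_b of opposite sign."""
    x_a, x_b = np.asarray(x_a, float), np.asarray(x_b, float)
    t = psi_a / (psi_a - psi_b)
    return x_a + t * (x_b - x_a)

def tet_volume(p0, p1, p2, p3):
    """Unsigned volume of a tetrahedron from its four vertices."""
    p0, p1, p2, p3 = (np.asarray(p, float) for p in (p0, p1, p2, p3))
    return abs(np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0]))) / 6.0
```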
The data processing apparatus 900 also includes hardware or firmware devices including one or more processors 912, one or more additional devices 914, a computer readable medium 916, a communication interface 918, and one or more user interface devices 920. Each processor 912 is capable of processing instructions for execution within the data processing apparatus 900. In some implementations, the processor 912 is a single or multi-threaded processor. Each processor 912 is capable of processing instructions stored on the computer readable medium 916 or on a storage device such as one of the additional devices 914. The data processing apparatus 900 uses the communication interface 918 to communicate with one or more computers 990, for example, over the network 980. Examples of user interface devices 920 include a display, a camera, a speaker, a microphone, a tactile feedback device, a keyboard, a mouse, and VR and/or AR equipment. The data processing apparatus 900 can store instructions that implement operations associated with the program(s) described above, for example, on the computer readable medium 916 or one or more additional devices 914, for example, one or more of a hard disk device, an optical disk device, a tape device, and a solid state memory device.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented using one or more modules of computer program instructions encoded on a non-transitory computer-readable medium for execution by, or to control the operation of, data processing apparatus. The computer-readable medium can be a manufactured product, such as hard drive in a computer system or an optical disc sold through retail channels, or an embedded system. The computer-readable medium can be acquired separately and later encoded with the one or more modules of computer program instructions, e.g., after delivery of the one or more modules of computer program instructions over a wired or wireless network. The computer-readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, or a combination of one or more of them.
The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a runtime environment, or a combination of one or more of them. In addition, the apparatus can employ various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
A computer program (also known as a program, software, software application, script, or code) can be written in any suitable form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any suitable form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., an LCD (liquid crystal display) display device, an OLED (organic light emitting diode) display device, or another monitor, for displaying information to the user, and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any suitable form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any suitable form, including acoustic, speech, or tactile input.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a browser user interface through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any suitable form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
While this specification contains many implementation details, these should not be construed as limitations on the scope of what is being or may be claimed, but rather as descriptions of features specific to particular embodiments of the disclosed subject matter. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the invention have been described. Other embodiments are within the scope of the following claims. In addition, actions recited in the claims can be performed in a different order and still achieve desirable results.
This is a continuation of U.S. application Ser. No. 17/346,154, filed on Jun. 11, 2021, and titled “GENERATIVE DESIGN SHAPE OPTIMIZATION WITH SINGULARITIES AND DISCONNECTION PREVENTION FOR COMPUTER AIDED DESIGN AND MANUFACTURING,” which claims priority to U.S. Provisional Application No. 63/044,651, filed on Jun. 26, 2020, and titled “GENERATIVE DESIGN SHAPE OPTIMIZATION WITH SINGULARITIES AND DISCONNECTION PREVENTION FOR COMPUTER AIDED DESIGN AND MANUFACTURING.” The disclosures of the prior applications are considered part of and are incorporated by reference in the disclosure of this application.
Number | Date | Country
---|---|---
63044651 | Jun 2020 | US

 | Number | Date | Country
---|---|---|---
Parent | 17346154 | Jun 2021 | US
Child | 18141218 | | US