This application claims priority to Indian application No. 202331043402, having a filing date of Jun. 28, 2023, the entire contents of which are hereby incorporated by reference.
The following relates to optimization techniques and more particularly relates to a system and method for performing structural topology optimization.
Optimization algorithms play a pivotal role in various fields, from engineering to machine learning. However, their practical implementation often faces significant challenges. The complexity of the optimization problem arises from intricate relationships between design variables, constraints, and objectives. Additionally, the search space—the range of possible solutions—can be vast. For instance, in structural optimization, the shape, size, and topology of a product must be optimized. This involves exploring a high-dimensional space of potential designs. As a result, optimization algorithms may require extensive computational resources and time to explore this large search space effectively.
Convergence criteria determine when an optimization algorithm stops searching for better solutions. Striking the right balance between accuracy and computational cost is crucial. In practical scenarios, delays in finding optimal solutions can impact product deployment. For instance, in material optimization, selecting the most efficient material affects product performance and manufacturing costs. Therefore, devising optimization methods that are computationally efficient becomes paramount.
Industrial product design involves multiple objectives (e.g., minimizing weight while maximizing strength) and nonlinear relationships between variables. Traditional optimization methods, such as Sequential Quadratic Programming (SQP), SIMP, and BESO, provide accurate solutions but can be computationally expensive. Data-driven approaches, like deep learning models, offer computational efficiency once trained. However, directly using their outputs can be challenging due to missing links and compliance deviations.
In other words, achieving a balance between accuracy and computational efficiency remains a challenge. Classical methods, while accurate, are computationally expensive. Data-driven approaches, such as deep learning models, offer cost reduction but face challenges related to missing links and compliance deviations.
In light of the above, there exists a need for an improved system and method for performing structural topology optimization for a structure.
An aspect relates to a system and method for performing structural topology optimization for a structure. In embodiments, the method includes receiving from a client device, by a processing unit, an input indicative of one or more load vectors at one or more nodes in a design space corresponding to the structure. In an embodiment, the input comprises a two-channel input tensor corresponding to the load vectors. In embodiments, the method further includes applying a data-driven model to the input for predicting a suboptimal topology for the structure. In an embodiment, the data-driven model is a deep learning model. In a further embodiment, the deep learning model is a U-Net variational autoencoder. In an embodiment, the U-Net variational autoencoder is trained based on topology optimization code. In embodiments, the method further includes initializing an optimization solver based on the suboptimal topology predicted. In embodiments, the method further includes computing one or more design variables corresponding to an optimal topology for the structure using the initialized optimization solver. In an embodiment, the optimization solver computes the optimal topology based on one of Solid Isotropic Material with Penalization (SIMP) topology optimization and Bi-directional Evolutionary Structural Optimization (BESO). In an embodiment, the one or more design variables comprise a density value. In embodiments, the method further includes providing an output indicative of the optimal topology on a user interface of the client device.
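The method summarized above can be sketched end to end as follows; `predict_suboptimal_topology` and `run_simp` are illustrative placeholders for the trained U-Net variational autoencoder and the classical solver, not the claimed implementations:

```python
import numpy as np

def predict_suboptimal_topology(load_tensor):
    """Placeholder for the trained U-Net VAE: maps a two-channel load
    tensor of shape (nodes_x, nodes_y, 2) to an elementwise density field."""
    nx, ny = load_tensor.shape[0] - 1, load_tensor.shape[1] - 1
    return np.full((nx, ny), 0.5)  # dummy uniform densities for the sketch

def run_simp(initial_densities, max_iters=50):
    """Placeholder for a SIMP/BESO solver that is initialized with the
    predicted suboptimal topology rather than a uniform density field."""
    d = np.clip(initial_densities, 0.0, 1.0)
    # ... iterative FE analysis and density updates would go here ...
    return d

# Two-channel input: x- and y-direction loads at each of 61x61 nodes.
loads = np.zeros((61, 61, 2))
loads[60, 30, 1] = -1.0          # unit downward load at a boundary node

suboptimal = predict_suboptimal_topology(loads)
optimal = run_simp(suboptimal)
print(optimal.shape)  # (60, 60)
```

The key point of the flow is that the solver starts from the prediction instead of a uniform field, which is what reduces the iteration count.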
Disclosed herein is also an apparatus comprising one or more processing units and a memory unit communicatively coupled to the one or more processing units. The memory unit comprises an optimization management module stored in the form of machine-readable instructions executable by the one or more processing units, wherein the optimization management module is configured to perform method steps described above.
Disclosed herein is also a system comprising one or more clients and an apparatus as described above, communicatively coupled to the one or more clients. The apparatus is configured for performing structural topology optimization based on inputs received from the one or more client devices, according to the method steps described above.
Disclosed herein is also a computer-readable medium, on which program code sections of a computer program are saved, the program code sections being loadable into and/or executable by a processor which performs the method as described above when the program code sections are executed.
Embodiments of the invention may be realized by a computer program product (a non-transitory computer-readable storage medium having instructions which, when executed by a processor, perform actions). The non-transitory computer-readable storage medium has the advantage that computer systems can be easily adapted by installing a computer program in order to work as proposed by embodiments of the present invention.
The computer program product can be, for example, a computer program or comprise another element apart from the computer program. This other element can be hardware, for example a memory device on which the computer program is stored, a hardware key for using the computer program and the like, and/or software, for example documentation or a software key for using the computer program.
The above-mentioned attributes and features of this invention, and the manner of achieving them, will become more apparent and understandable with the following description of embodiments of the invention in conjunction with the corresponding drawings. The illustrated embodiments are intended to illustrate, but not limit, the invention.
Some of the embodiments will be described in detail, with references to the following Figures, wherein like designations denote like members, wherein:
Hereinafter, embodiments for carrying out the present invention are described in detail. The various embodiments are described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purpose of explanation, numerous specific details are set forth to provide a thorough understanding of one or more embodiments. It may be evident that such embodiments may be practiced without these specific details.
The system 100 includes an apparatus 105 and a plurality of client devices 110A-N (collectively referred hereinafter as client device 110). Each of the client devices 110A-N is connected to the apparatus 105 via a network 112, for example, local area network (LAN), wide area network (WAN), Wi-Fi, etc. In an embodiment, the apparatus 105 is a computing system implemented in a cloud computing environment. As used herein, “cloud computing environment” refers to a processing environment comprising configurable computing physical and logical resources, for example, networks, servers, storage, applications, services, etc., and data distributed over the network, for example, the internet. The cloud computing environment provides on-demand network access to a shared pool of the configurable computing physical and logical resources.
The apparatus 105 comprises a processing unit 115, a memory 120, a storage unit 125, a communication module 130, a network interface 135, an input unit 140, an output unit 145, a standard interface or bus 150 as shown in
The memory 120 may include one or more of a volatile memory and a non-volatile memory. The memory 120 may be coupled for communication with the processing unit 115. The processing unit 115 may execute instructions and/or code stored in the memory 120. A variety of computer-readable storage media may be stored in and accessed from the memory 120. The memory 120 may include any suitable elements for storing data and machine-readable instructions, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling compact disks, digital video disks, diskettes, magnetic tape cartridges, memory cards, and the like. The memory 120 comprises an optimization management module 155. The optimization management module 155 may be stored in the memory 120 in the form of machine-readable instructions executable by the processing unit 115. These machine-readable instructions, when executed by the processing unit 115, cause the processing unit 115 to enable improved structural topology optimization as disclosed in the present disclosure. The optimization management module 155 comprises a preprocessing module 160, a suboptimal topology module 165, a solver initialization module 170, an optimal topology module 175 and a notification module 180 as shown in
The apparatus 105 may include a network interface 135 for communicating with the client devices 110 via the network 112. Each of the client devices 110A-N include a user interface (not shown) that enables a user to upload information relevant for performing structural topology optimization. The user interface may be further configured, by the apparatus 105, to generate an output indicative of a solution to the optimization problem. Further, the user interface may also be configured, by the apparatus 105 to generate an output indicative of, for example, similarity indices, complexity scores, number of iterations or results of other intermediate processes related to a given optimization technique. In an embodiment, the user interface of the client device 110 may be provided by a web-based or client-based application on the client device 110.
The storage unit 125 comprises a non-volatile memory which comprises a database 198. The database 198 stores one or more source optimization tasks. The input unit 140 may include input means such as keypad, touch-sensitive display, camera, etc., capable of receiving input signals. The bus 150 acts as interconnect between the processing unit 115, the memory 120, the storage unit 125, and the network interface 135. The communication module 130 enables the apparatus 105 to communicate with the client devices 110A-N. The communication module 130 may support different standard communication protocols such as Transport Control Protocol/Internet Protocol (TCP/IP), Profinet, Profibus, and Internet Protocol Version (IPv).
An apparatus 105 in accordance with an embodiment of the present disclosure includes an operating system employing a graphical user interface. The operating system permits multiple display windows to be presented in the graphical user interface simultaneously, with each display window providing an interface to a different application or to a different instance of the same application. A cursor in the graphical user interface may be manipulated by a user through the pointing device. The position of the cursor may be changed and/or an event, such as clicking a mouse button, generated to actuate a desired response.
One of various commercial operating systems, such as a version of Microsoft Windows™, a product of Microsoft Corporation located in Redmond, Washington may be employed if suitably modified. The operating system is modified or created in accordance with the present disclosure as described.
Those of ordinary skill in the art will appreciate that the hardware depicted in
The present disclosure is not limited to a particular computer system platform, processing unit, operating system, or network. One or more aspects of the present disclosure may be distributed among one or more computer systems, for example, servers configured to provide one or more services to one or more client computers, or to perform a complete task in a distributed system. For example, one or more aspects of the present disclosure may be performed on a client-server system that comprises components distributed among one or more server systems that perform multiple functions according to various embodiments. These components comprise, for example, executable, intermediate, or interpreted code, which communicate over a network using a communication protocol. The present disclosure is not limited to being executable on any particular system or group of systems, and is not limited to any particular distributed architecture, network, or communication protocol.
At step 205, an input for determining an optimal topology for a structure is received by the processing unit 115 from the client device 110. The input is indicative of one or more load vectors corresponding to one or more nodes present in a design space corresponding to the structure. In an implementation, the load vectors are generated, at the processing unit, based on the input received from the client device 110. The input may include, for example, a choice of optimization algorithm, an objective function, and variable definitions. The variable definitions may indicate one or more parameters associated with the objective function, one or more constraints, and one or more bounds corresponding to the variables in the objective function. Furthermore, the input may also include optimization parameters such as allowable tolerance, maximum iterations, penalization factor, etc. In an example, the optimization problem is associated with structural topology optimization. The input may indicate variable definitions corresponding to volume fraction, material properties, loading and boundary conditions, etc. For example, the optimization algorithm may be one of SIMP or BESO. Further, the objective function may correspond to compliance minimization, while constraints may be associated with volumetric relations corresponding to the structure. The one or more optimization parameters include tolerance, penalization factor and maximum iterations.
In an embodiment, the input is received in the form of a configuration file. The configuration file is parsed to extract optimization data corresponding to the optimization problem. In an embodiment, the configuration file is generated based on interaction of a user with a user interface of the client device. The user interacts with the user interface through one or more input means. For example, the user interface may provide options for choosing the optimization algorithm from a plurality of optimization algorithms indicated in a drop-down menu. Based on the optimization algorithm chosen, the user interface may provide further options, to the user, for entering the objective function, the parameter definitions, etc. Further, the configuration file is generated based on inputs entered by the user. Alternatively, the request may include a location within the database 198 wherein the configuration file is stored. In another embodiment, the optimization data is identified based on interactions of the user with the user interface, and directly stored in the database 198.
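An illustrative configuration file of the kind described above might look as follows; the field names and schema are assumptions made for this sketch, since the disclosure does not fix a particular format:

```python
import json

# Illustrative configuration; every field name here is an assumption,
# not a format prescribed by the disclosure.
config_text = """
{
  "algorithm": "SIMP",
  "objective": "compliance_minimization",
  "volume_fraction": 0.4,
  "penalization_factor": 3.0,
  "tolerance": 0.01,
  "max_iterations": 200,
  "loads": [{"node": [60, 30], "vector": [0.0, -1.0]}]
}
"""

# Parsing the configuration file to extract the optimization data.
config = json.loads(config_text)
assert config["algorithm"] in ("SIMP", "BESO")
print(config["volume_fraction"])  # 0.4
```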
At step 210, a data-driven model is applied to the input for predicting a suboptimal topology for the structure. In an embodiment, the data-driven model is a deep learning model. In a further embodiment, the deep learning model is a variational autoencoder. In general, an autoencoder comprises an encoding layer and a decoding layer. The encoding layer reduces a higher-dimensionality input to a lower-dimensionality space called the latent space. The latent space is deterministic in nature and is further up-sampled to generate a high-dimensionality output. A loss function is evaluated based on the difference between the generated high-dimensionality output and a predetermined expected output, and the model parameters associated with the autoencoder are optimized to minimize this loss.
In case of a variational autoencoder, the latent space is probabilistic in nature, as opposed to the deterministic latent space in a regular autoencoder. The input to the variational autoencoder is a two-channel image which passes through the encoder layer to form a latent space. From the latent space, a latent vector ‘z’ is sampled which, upon training, conforms to a lower-dimensional representation of the input image. The latent vector is further projected onto a higher-dimensional space with the help of convolutional transpose layers to generate a single-channel output image. In general, a distribution function corresponding to the probabilistic latent space of the variational autoencoder is close to an unknown distribution of a design space from which the higher-dimensionality input data was generated. An example of an output generated by a VAE is shown later, in
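The sampling of the latent vector ‘z’ described above is commonly realized with the reparameterization trick; a minimal sketch, assuming a latent dimension of 16 (the text does not specify the latent size):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    Writing the sample this way keeps it differentiable with respect to
    the encoder outputs mu and log_var during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Encoder outputs for a batch of 1; latent dimension 16 is an assumption.
mu = np.zeros((1, 16))
log_var = np.zeros((1, 16))   # log_var = 0 means sigma = 1
z = sample_latent(mu, log_var)
print(z.shape)  # (1, 16)
```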
For a given random input which is unrelated to the training dataset used for training the autoencoder, a variational autoencoder has a better chance of generating a reasonable output than a normal autoencoder. Furthermore, the ability of the variational autoencoder to approximate or learn the probability distribution of the design space from which the input data is generated allows it to generate new data points by sampling from the learned probability distribution.
It must be understood that the shape and dimensions mentioned herein are only for illustration purposes. The shape and dimensions may vary depending upon nature of use of the variational autoencoder.
The U-Net variational autoencoder is a modified version of the variational autoencoder that comprises a U-shaped architecture wherein information from shallow layers and deep layers is merged. The U-Net variational autoencoder is broadly similar to the variational autoencoder described earlier. Additionally, the encoder section of the U-Net variational autoencoder comprises convolution and max-pooling layers, followed by the latent section comprising a flattening layer, dense layers and a reshape layer. Further, the decoder section of the U-Net variational autoencoder comprises transposed convolution layers that stack together outputs from the shallow convolution layers of the encoder section. Finally, at the output layer, a single-filter convolution layer based on the sigmoid function is used to generate the required output, i.e., the suboptimal topology. An exemplary embodiment of a U-Net VAE is described later with reference to
The predicted suboptimal topology may be close to the optimal topology or to an intermediate structure that is likely to be generated in a classical topology optimization method such as SIMP or BESO. As the classical topology optimization method now starts from a topology that has a better chance of representing an early stage of an optimal topology, the computational cost, or the total number of iterations needed to reach an optimal structure, is lower than with purely classical topology optimization methods.
At step 215, an optimization solver is initialized based on the suboptimal topology predicted. In an embodiment, initializing the optimization solver comprises initializing starting values of one or more design variables to the values of the corresponding design variables in the suboptimal topology. In the present embodiment, the design variable corresponds to the density value in each element within the design space. Herein, the term ‘starting value’ refers to the initial values to be assigned to one or more variables associated with the optimization problem. The term ‘element’, or finite element, as used herein refers to the finite entities obtained by dividing/meshing the design space. Each of the elements may have a different density value.
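A minimal sketch of this initialization step, assuming the predicted suboptimal topology arrives as an elementwise density array; the optional rescaling toward a target volume fraction is a heuristic added here for illustration and is not prescribed by the text:

```python
import numpy as np

def initialize_design_variables(predicted_topology, vol_frac=None):
    """Use the predicted suboptimal topology as the starting density
    field, in place of the customary uniform initialization."""
    d = np.clip(np.asarray(predicted_topology, dtype=float), 0.0, 1.0)
    if vol_frac is not None:
        # Heuristic (an assumption of this sketch): rescale so the
        # starting material usage roughly matches the target fraction.
        d = np.clip(d * (vol_frac / max(d.mean(), 1e-9)), 0.0, 1.0)
    return d

# Stand-in for a U-Net VAE prediction on a 60x60 element grid.
pred = np.random.default_rng(1).random((60, 60))
d0 = initialize_design_variables(pred, vol_frac=0.4)
print(d0.shape)  # (60, 60)
```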
At step 220, an optimal topology is computed for the structure using the initialized optimization solver. The optimization solver may be based on a classical topology optimization method. In an embodiment, the classical topology optimization method is SIMP. In this algorithm, each element amongst the discretized finite elements of a design domain is allocated a density variable. The density variables determine the distribution of material inside the design domain and, consequently, the final topology. The topology optimization problem, or the objective function, may be stated as follows:
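The equation referred to above as (3.3) does not survive in the text; reconstructed from the symbol definitions that follow, the standard SIMP compliance-minimization statement reads:

```latex
\begin{aligned}
\min_{d}\quad & c(d) = F^{T}u = u^{T}Ku \\
\text{subject to}\quad & Ku = F, \\
& \frac{V(d)}{V_{0}} = v_{f}, \\
& 0 \le d_{i} \le 1, \qquad i = 1,\dots,n
\end{aligned}
```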
where compliance c is the objective function, K is a global stiffness matrix, u is the vector of nodal displacements and F is a global load vector. Here, d is a vector containing the density values corresponding to each of the elements, V(d) is the volume of material used by the design, V0 is the total volume of the design domain, and vf is the target volume fraction. In practice, the optimization problem in equation (3.3) leads to impractical structures due to the presence of intermediate density values. The SIMP approach uses a penalization method to mitigate this issue by discouraging the emergence of intermediate densities in the final optimal structure. The elemental stiffness matrices in this approach are computed as:
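The interpolation itself is omitted from the text; the standard modified-SIMP form, reconstructed to be consistent with the definitions of Kmax and Kmin given next, is:

```latex
K_{i}(d_{i}) = K_{\min} + d_{i}^{\,P}\left(K_{\max} - K_{\min}\right)
```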
Kmax and Kmin represent elemental stiffness matrices for a solid element and a void element, respectively.
Standard procedure calls for the void elements to have a small non-zero stiffness to avoid numerical instability. P is the penalization parameter. Normally, the value of P is taken as three or more. The optimization problem may be solved using different optimization techniques such as, but not limited to, optimality criteria (OC), sequential linear programming (SLP), and the method of moving asymptotes (MMA).
In the present example, the OC-based method with a heuristic updating scheme for the design variables is used for the purpose of generating the training dataset for the variational autoencoder. The scheme is formulated as below:
where m is a positive move limit, η is a numerical damping coefficient, and Bi is obtained from the optimality condition.
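The update scheme itself is omitted from the text; the standard heuristic OC update, which matches the move limit m, the damping coefficient, and the term Bi mentioned, is:

```latex
d_{i}^{\mathrm{new}} =
\begin{cases}
\max\!\left(0,\, d_{i} - m\right) & \text{if } d_{i}B_{i}^{\eta} \le \max\!\left(0,\, d_{i} - m\right), \\
\min\!\left(1,\, d_{i} + m\right) & \text{if } d_{i}B_{i}^{\eta} \ge \min\!\left(1,\, d_{i} + m\right), \\
d_{i}B_{i}^{\eta} & \text{otherwise},
\end{cases}
\qquad
B_{i} = \frac{-\,\partial c / \partial d_{i}}{\Lambda\,\partial V / \partial d_{i}}
```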
A bisection search method is employed to determine the Lagrange multiplier Λ. The sensitivity of the objective function c is then calculated as:
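The sensitivity expression is not reproduced in the text; under the standard SIMP stiffness interpolation, the adjoint sensitivity of compliance with respect to each elemental density takes the well-known form:

```latex
\frac{\partial c}{\partial d_{i}} = -P\, d_{i}^{\,P-1}\, u_{i}^{T}\left(K_{\max} - K_{\min}\right) u_{i}
```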
wherein ui represents nodal displacements. To handle issues of mesh dependency and checkerboard patterns, a mesh sensitivity filter is used.
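The sensitivity filter mentioned above is commonly implemented as a density-weighted average of sensitivities over a neighbourhood of radius rmin; a minimal sketch of the classic form (the filter radius of 1.5 elements is an assumption of this sketch):

```python
import numpy as np

def sensitivity_filter(d, dc, rmin=1.5):
    """Classic mesh-independency (sensitivity) filter: replace each
    element's sensitivity with a distance-weighted average over its
    neighbourhood of radius rmin, weighted by neighbour densities."""
    nely, nelx = d.shape
    dc_new = np.zeros_like(dc)
    r = int(np.ceil(rmin)) - 1
    for i in range(nelx):
        for j in range(nely):
            total, acc = 0.0, 0.0
            for k in range(max(i - r, 0), min(i + r + 1, nelx)):
                for l in range(max(j - r, 0), min(j + r + 1, nely)):
                    w = rmin - np.hypot(i - k, j - l)  # linear decay weight
                    if w > 0:
                        total += w
                        acc += w * d[l, k] * dc[l, k]
            dc_new[j, i] = acc / (d[j, i] * total)
    return dc_new

# Constant density and sensitivity fields are preserved by the filter.
d = np.full((10, 10), 0.5)
dc = np.full((10, 10), -2.0)
print(np.allclose(sensitivity_filter(d, dc), dc))  # True
```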
However, it must be understood that embodiments of the present invention may also be extended to other classical topology optimization methods such as, but not limited to, hard kill BESO and soft kill BESO.
At step 225, an output indicative of the optimal topology is provided on a user interface of the client device 110.
The variational autoencoder 300 comprises an encoder section that comprises 3 convolution layers, with the outermost layer 304 comprising 32 filters, the middle layer 306 comprising 64 filters and the innermost layer 308 comprising 128 filters. Each filter is of size 3×3. Each of the 3 convolution layers 304, 306 and 308 is based on the ReLU activation function. The latent section, following the encoder section, comprises a flattening layer 310, dense layers 312, 314, 316, 318, 320 and a reshape layer 322. Each of the dense layers is based on ReLU. The decoder section following the latent section comprises transposed convolution layers 324, 326 and 328 corresponding to the convolution layers 304, 306 and 308 in the encoder section. The transposed convolution layers 324, 326 and 328 up-sample a latent vector, generated in the latent section, to create an up-sampled image. The up-sampled image is then passed through a flattening layer 330, two dense layers 332 and 334 and a reshape layer 336. The dense layer 332 is based on the ReLU activation function and the dense layer 334 is based on the sigmoid activation function. The purpose of using the sigmoid activation function in the dense layer 334 is to have output values between 0 and 1, which is the range assumed for the density vector in the topology optimization problem. Lastly, after reshaping, the final output is generated, which is the suboptimal topology in the present case.
One of the most important choices governing the effectiveness of a neural network is the loss function. During training of the neural network, a loss computed using the loss function, based on an actual output and a target output, is minimized over multiple iterations. In the case of variational autoencoders, the objective of training is to minimize the divergence between the probability distribution approximated via the variational autoencoder and the probability distribution corresponding to a target topology. In embodiments, the Kullback-Leibler (KL) divergence can be defined as:
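The definition itself does not survive in the text; for discrete distributions it is the standard form:

```latex
D_{\mathrm{KL}}\!\left(p \,\|\, q\right) = \sum_{x} p(x)\,\log\frac{p(x)}{q(x)}
```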
where p(x) is the target distribution (optimal topology) and q(x) is the distribution approximated via the neural network. Along with the KL divergence, the mean square error (MSE) has been chosen as a further regularization term, which may be represented mathematically as:
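The expression is omitted from the text; in the standard form consistent with the symbol definitions that follow, it reads:

```latex
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_{i} - \hat{y}_{i}\right)^{2}
```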
where y is the target topology, ŷ is the topology generated via the neural network, and n is the total number of elements in the domain. Thus, the total loss function consists of the KL divergence and the MSE, which can be represented as:
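The combined expression is omitted from the text; taken literally as the sum of the two terms (the text does not specify a relative weighting factor), it reads:

```latex
\mathcal{L}_{\mathrm{total}} = \mathrm{MSE} + D_{\mathrm{KL}}
```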
The attributes such as weights, learning rate etc. are updated during the training, using an optimizer. In the present example, NADAM (Nesterov-accelerated Adaptive Moment Estimation) optimizer is used for updating the attributes.
In an example, the dataset used for training the deep learning models is generated from a 110-line topology optimization code. The input data comprises load vectors. The number of loading nodes is kept random, with the maximum number of nodes going up to 4. The input to the neural network is a tensor of dimensions (61, 61, 2), wherein 61 is the total number of nodes taken in the x and y directions, and 2 channels are considered to represent the x- and y-direction forces separately. As the topology is dependent on the orientation of the force rather than its magnitude, the magnitude is taken as unity. The described input may be more clearly understood from the schematic shown in
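The described two-channel input can be assembled as follows; a minimal sketch assuming up to 4 randomly placed unit loads with random orientation (the placement and sampling scheme are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_load_tensor(n_nodes=61, max_load_nodes=4):
    """Build a (61, 61, 2) input tensor: channel 0 holds x-direction
    forces, channel 1 holds y-direction forces. Load magnitudes are
    unit, since the topology depends on orientation, not magnitude."""
    t = np.zeros((n_nodes, n_nodes, 2))
    for _ in range(rng.integers(1, max_load_nodes + 1)):
        i, j = rng.integers(0, n_nodes, size=2)   # random loading node
        theta = rng.uniform(0.0, 2.0 * np.pi)     # random orientation
        t[i, j, 0] = np.cos(theta)                # unit x-component
        t[i, j, 1] = np.sin(theta)                # unit y-component
    return t

x = make_load_tensor()
print(x.shape)  # (61, 61, 2)
```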
As the number of elements in the x and y directions was kept at 60, the size of the output topology becomes (60, 60), i.e., the total number of elements into which the domain is discretized in each of the x and y directions is 60.
The test topologies generated with classical algorithms SIMP and BESO are presented in
As a result, the refined topologies obtained using SIMP based on the suboptimal topologies generated via the U-Net VAE and the VAE, respectively, are shown in
The embodiments of the present invention also reduce the number of iterations required for convergence to the optimal topology, thereby reducing the computational resources required for performing structural topology optimization compared to classical optimization techniques.
Although the present invention has been disclosed in the form of embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention.
For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other steps or elements.
Number | Date | Country | Kind |
---|---|---|---|
202331043402 | Jun 2023 | IN | national |