METHOD OF GENERATING DEEP LEARNING MODEL AND COMPUTING DEVICE PERFORMING THE SAME

Information

  • Patent Application
  • Publication Number
    20230056869
  • Date Filed
    March 08, 2022
  • Date Published
    February 23, 2023
Abstract
To generate a deep learning model, basic training data corresponding to a combination of device data and simulation result data is generated using a compact model that generates the simulation result data indicating characteristics of a semiconductor device corresponding to the device data by performing simulation based on the device data. A deep learning model is trained based on the basic training data such that the deep learning model outputs prediction data indicating the characteristics of the semiconductor device and uncertainty data indicating uncertainty of the prediction data. The deep learning model is retrained based on the uncertainty data. The deep learning model may precisely predict the characteristics of the semiconductor device by training the deep learning model to output the prediction data and the uncertainty data and retraining the deep learning model based on the uncertainty data.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This U.S. non-provisional application claims priority under 35 USC § 119 to Korean Patent Application No. 10-2021-0109563, filed on Aug. 19, 2021, in the Korean Intellectual Property Office (KIPO), the disclosure of which is incorporated by reference herein in its entirety.


BACKGROUND
1. Technical Field

Example embodiments relate generally to semiconductor integrated circuits, and more particularly to a method of generating a deep learning model and a computing device performing the method to predict characteristics of semiconductor devices.


2. Discussion of the Related Art

With development of the electronics industry, the semiconductor foundry industry, in which a manufacturer produces semiconductor products designed, e.g., by other companies, is increasingly important. In the foundry industry, a manufacturer may check the performance of a semiconductor product, according to its design, through a simulation before actually manufacturing the semiconductor product. In these cases, when the design of the semiconductor product needs to be changed (e.g., by a request from a client of the manufacturer) during the course of simulation, it may take an impractical and/or enormous amount of time to newly perform a simulation based on the changed design, and accordingly, extra cost and/or time loss may be incurred. Moreover, if the accuracy of a model performing the simulation is low, the performance of the designed and manufactured product may be degraded due to inaccurate prediction of the characteristics of the semiconductor device.


SUMMARY

Some example embodiments may provide a method of generating a deep learning model and a computing device performing the method, capable of precisely predicting characteristics of semiconductor devices.


According to example embodiments, a method of generating a deep learning model, the method being performed by executing program codes by at least one processor, the program codes being stored in computer readable media, includes, generating basic training data corresponding to a combination of device data and simulation result data using a compact model, the compact model configured to generate the simulation result data by performing a simulation based on the device data, the simulation result data indicating characteristics of a semiconductor device corresponding to the device data; training the deep learning model based on the basic training data such that the deep learning model is configured to output prediction data and uncertainty data, the prediction data indicating the characteristics of the semiconductor device, and the uncertainty data indicating uncertainty of the prediction data; and retraining the deep learning model based on the uncertainty data.


According to example embodiments, a method of generating a deep learning model, the method being performed by executing program codes by at least one processor, the program codes being stored in computer readable media, includes, generating basic training data corresponding to a combination of device data and simulation result data using a compact model, the compact model configured to generate the simulation result data by performing a simulation based on the device data, the simulation result data indicating characteristics of a semiconductor device corresponding to the device data; training the deep learning model based on the basic training data such that the deep learning model is configured to output prediction data, a model uncertainty value, and a data uncertainty value, the prediction data indicating the characteristics of the semiconductor device, the model uncertainty value indicating the uncertainty of the prediction data caused by insufficiency of the basic training data, and the data uncertainty value indicating the uncertainty of the prediction data caused by noises of the basic training data; performing a first retraining such that the deep learning model that has been trained based on the basic training data is further trained based on the model uncertainty value; and performing a second retraining such that the deep learning model that has been trained based on the basic training data is initialized, and the initialized deep learning model is trained based on the data uncertainty value.


According to example embodiments, a computing device includes at least one processor and a computer readable medium storing program codes and a compact model. The program codes are executed by the at least one processor to generate a deep learning model. The compact model generates simulation result data indicating characteristics of a semiconductor device corresponding to device data by performing simulation based on the device data. The at least one processor executes the program codes to generate basic training data corresponding to a combination of the device data and the simulation result data using the compact model; train the deep learning model based on the basic training data such that the deep learning model is configured to output prediction data and uncertainty data, the prediction data indicating the characteristics of the semiconductor device and the uncertainty data indicating uncertainty of the prediction data; and retrain the deep learning model based on the uncertainty data.


The method and the computing device according to example embodiments may provide the deep learning model capable of precisely predicting the characteristics of the semiconductor device by training the deep learning model to output the prediction data indicating the characteristics of the semiconductor device and the uncertainty data indicating the uncertainty of the prediction data and retraining the deep learning model based on the uncertainty data.


In addition, the method and computing device according to example embodiments may efficiently provide the deep learning model capable of precisely predicting the characteristics of the semiconductor device by determining the method of retraining the deep learning model based on the type of the uncertainty of the prediction data.


Through the enhanced prediction performance of the deep learning model, the time for designing and manufacturing the semiconductor product including the semiconductor device may be reduced, and the performance of the semiconductor product may be enhanced.





BRIEF DESCRIPTION OF THE DRAWINGS

Example embodiments of the present disclosure will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings.



FIG. 1 is a flowchart illustrating a method of generating a deep learning model according to example embodiments.



FIG. 2 is a block diagram illustrating a computing device according to example embodiments.



FIG. 3 is a block diagram illustrating an electronic device according to example embodiments.



FIGS. 4 and 5 are diagrams for describing examples of a deep learning neural network structure that is driven by a machine learning device according to example embodiments.



FIG. 6 is a diagram illustrating an example of a node included in a neural network.



FIG. 7 is a diagram for describing types of uncertainty in a method of generating a deep learning model according to example embodiments.



FIG. 8 is a diagram illustrating data in a method of generating a deep learning model according to example embodiments.



FIGS. 9 and 10 are diagrams illustrating example embodiments of a deep learning model in a method of generating a deep learning model according to example embodiments.



FIG. 11 is a flow chart illustrating an example of retraining based on model uncertainty value in a method of generating a deep learning model according to example embodiments.



FIG. 12 is a diagram illustrating an additional data range in a method of generating a deep learning model according to example embodiments.



FIG. 13 is a flow chart illustrating an example embodiment of retraining based on data uncertainty value in a method of generating a deep learning model according to example embodiments.



FIG. 14 is a diagram illustrating a measurement data range in a method of generating a deep learning model according to example embodiments.



FIGS. 15 and 16 are flow charts illustrating example embodiments of retraining based on model uncertainty value and data uncertainty value in a method of generating a deep learning model according to example embodiments.





DETAILED DESCRIPTION OF THE EMBODIMENTS

Various example embodiments will be described more fully hereinafter with reference to the accompanying drawings, in which some example embodiments are shown. In the drawings, like numerals refer to like elements throughout. The repeated descriptions may be omitted.



FIG. 1 is a flowchart illustrating a method of generating a deep learning model according to example embodiments. At least a portion of the method may be performed, e.g., by executing program codes by at least one processor. The program codes may be stored in, e.g., a non-transitory computer readable media. The term “non-transitory,” as used herein, is a description of the medium itself (e.g., as tangible, and not a signal) as opposed to a limitation on data storage persistency (e.g., RAM vs. ROM).


Referring to FIG. 1, basic training data corresponding to a combination of device data and simulation result data may be generated using a compact model (S100).


As will be described below with reference to FIGS. 9 and 10, the compact model may generate the simulation result data indicating characteristics of a semiconductor device corresponding to the device data by performing simulation based on the device data. As will be described below with reference to FIG. 8, the device data may indicate, e.g., the structure and operation conditions of the semiconductor device.
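
As an illustration of step S100, the basic training data may be produced by sweeping the device data over ranges of interest and recording the corresponding simulation result data. The following minimal Python sketch uses a hypothetical, toy compact_model; the square-law form, value ranges, and function names are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def compact_model(dv):
    # Hypothetical stand-in for a SPICE-like compact model: maps device data DV
    # to simulation result data SR (here, a toy square-law drain current).
    w, l, vg = dv
    return 0.5 * (w / l) * max(vg - 0.4, 0.0) ** 2

def generate_basic_training_data(n_samples=10_000):
    # Sample device data DV over the design ranges of interest (step S100).
    dv = np.column_stack([
        rng.uniform(0.1, 1.0, n_samples),   # width W (um)
        rng.uniform(0.05, 0.5, n_samples),  # length L (um)
        rng.uniform(0.0, 1.2, n_samples),   # gate voltage Vg (V)
    ])
    sr = np.array([compact_model(x) for x in dv])
    return dv, sr  # basic training data: combinations of DV and SR
```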


A deep learning model may be trained based on the basic training data such that the deep learning model outputs prediction data indicating the characteristics of the semiconductor device and uncertainty data indicating uncertainty of the prediction data (S200).


In some example embodiments, the uncertainty data may include a model uncertainty value indicating the uncertainty of the prediction data caused by, e.g., insufficiency of the basic training data (and/or the device data).


In some example embodiments, the uncertainty data may include a data uncertainty value indicating the uncertainty of the prediction data caused by noises of the basic training data.


In some example embodiments, the uncertainty data may include both of the model uncertainty value and the data uncertainty value. The model uncertainty and the data uncertainty will be further described with reference to FIG. 7.


The deep learning model may be retrained based on the uncertainty data (S300).


In some example embodiments, as will be described below with reference to FIG. 11, a first retraining may be performed such that the deep learning model that has been trained based on the basic training data is further trained based on the model uncertainty value.


In some example embodiments, as will be described below with reference to FIG. 13, a second retraining may be performed such that the deep learning model that has been trained based on the basic training data is initialized, and the initialized deep learning model is trained based on the data uncertainty value. For example, weight values defining a starting point in the deep learning model may be set to small random values during a weight initialization process, thereby increasing the probability that the training of the deep learning model will converge.
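
A weight re-initialization of this kind may be sketched as follows; the layer shape, scale value, and function name are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

def init_weights(shape, scale=0.01):
    # Reset weights to small random values so the second retraining starts
    # from a fresh, convergence-friendly point rather than the old optimum.
    return rng.normal(loc=0.0, scale=scale, size=shape)

w_hidden = init_weights((64, 8))  # e.g., a 64-node hidden layer over 8 inputs
```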


In some example embodiments, as will be described below with reference to FIG. 15, whether to perform the first retraining may be determined based on the model uncertainty value, and when it is determined that the first retraining is not performed, whether to perform the second retraining may be determined based on the data uncertainty value.


In some example embodiments, as will be described below with reference to FIG. 16, whether to perform the first retraining and whether to perform the second retraining may be determined independently of each other.


As such, the method and the computing device according to example embodiments may provide the deep learning model capable of precisely predicting the characteristics of the semiconductor device by training the deep learning model to output the prediction data indicating the characteristics of the semiconductor device and the uncertainty data indicating the uncertainty of the prediction data and retraining the deep learning model based on the uncertainty data. Through the enhanced prediction performance of the deep learning model, the time for designing and manufacturing the semiconductor product including the semiconductor device may be reduced, and the performance of the semiconductor product may be enhanced.



FIG. 2 is a block diagram illustrating a computing device according to example embodiments.


Referring to FIG. 2, a computing device 100 may include processors 110, a random access memory 120, a device driver 130, a storage device 140, a modem 150 and a user interface 160.


At least one processor of the processors 110 may be configured to operate a deep learning model (DLM) 220 and a training control module (TCM) 240. The training control module 240 may perform the method of FIG. 1 to train the deep learning model 220.


In some example embodiments, the deep learning model 220 and the training control module 240 may be implemented as instructions (and/or program codes) that may be executed by the at least one of the processors 110. The instructions (and/or program codes) of the deep learning model 220 and the training control module 240 may be stored in computer readable media. For example, the at least one processor may load (and/or read) the instructions to (and/or from) the random access memory 120.


In some example embodiments, the at least one processor may be manufactured to efficiently execute instructions included in the deep learning model 220 and the training control module 240. For example, the at least one processor may be a dedicated processor that is implemented (e.g., in hardware) based on the deep learning model 220 and the training control module 240. The at least one processor may efficiently execute instructions from various machine learning modules. In some embodiments, at least one processor may receive information corresponding to the deep learning model 220 and the training control module 240 to operate the deep learning model 220 and the training control module 240.


The processors 110 may include, for example, at least one general-purpose processor such as a central processing unit (“CPU”) 111, an application processor (“AP”) 112, and/or other processing units. In addition, the processors 110 may include at least one special-purpose processor such as a neural processing unit (“NPU”) 113, a neuromorphic processor (“NP”) 114, a graphic processing unit (“GPU”) 115, etc. For example, the processors 110 may include two or more heterogeneous processors. Though illustrated as including the CPU 111, AP 112, NPU 113, NP 114, and GPU 115, the example embodiments are not so limited. For example, the processors 110 may include more or fewer processors than illustrated.


The random access memory 120 may be used as an operation memory of the processors 110, a main memory, and/or a system memory of the computing device 100. The random access memory 120 may include a volatile memory such as a dynamic random access memory (DRAM), a static random access memory (SRAM), and/or the like. Additionally (and/or alternatively), the random access memory 120 may include a nonvolatile memory such as a phase-change random access memory (PRAM), a ferroelectric random access memory (FRAM), a magnetic random access memory (MRAM), a resistive random access memory (RRAM), and/or the like.


The device driver 130 may control peripheral circuits such as the storage device 140, the modem 150, the user interface 160, etc., according to requests of the processors 110. The storage device 140 may include a fixed storage device such as a hard disk drive, a solid state drive (SSD), etc., and/or include (and/or be connected to) an attachable storage device such as an external hard disk drive, an external SSD, a memory card, and/or other external storage.


The modem 150 may perform wired or wireless communication with external devices through various communication methods and/or communication interface protocols such as Ethernet and Wi-Fi, a third generation communication system such as code division multiple access (CDMA), global system for mobile communications (GSM), north American digital cellular (NADC), extended-time division multiple access (E-TDMA), and/or wide band code division multiple access (WCDMA), a fourth generation communication system such as long term evolution (LTE), a fifth generation communication system such as 5G mobile communication, and/or other communication methods.


The user interface 160 may receive information from a user and/or provide information to the user. The user interface 160 may include at least one output interface such as a display 161, a speaker 162, etc., and may further include at least one input interface such as a mouse 163, a keyboard 164, a touch input device 165, etc. Though illustrated as including the display 161, the speaker 162, the mouse 163, the keyboard 164, and the touch input device 165, the example embodiments are not so limited, and may, e.g., include more or fewer elements. In some example embodiments, for example, some of the user interfaces 160 may be combined (e.g., to include a touch screen).


In some example embodiments, the instructions (and/or codes) of the deep learning model 220 and the training control module 240 may be received through the modem 150 and stored in the storage device 140. In some example embodiments, the instructions of the deep learning model 220 and the training control module 240 may be stored in an attachable storage device and the attachable storage device may be connected to the computing device 100 by a user. The instructions of the deep learning model 220 and the training control module 240 may be loaded in the random access memory 120 for rapid execution of the instructions.


In some example embodiments, the computer program codes, the compact model, the deep learning model, and/or the training control module may be stored in a computer-readable medium. In some example embodiments, values resulting from a simulation performed by the processor or values obtained from arithmetic processing performed by the processor may be stored in a transitory and/or non-transitory computer-readable medium. In some example embodiments, intermediate values generated during deep learning may be stored in a transitory and/or non-transitory computer-readable medium. In some example embodiments, the training data, the process data, the device data, the simulation result data, the prediction data, and/or the uncertainty data may be stored in a transitory or non-transitory computer-readable medium. However, example embodiments are not limited thereto.



FIG. 3 is a block diagram illustrating an electronic device according to example embodiments.


Referring to FIG. 3, an electronic device 1000 may include an input unit 11, a storage 12, and a processor 13. The storage 12 may include a compact model CM. The electronic device 1000, a semiconductor manufacturing equipment 31, and a semiconductor measuring equipment 32 may form a semiconductor system. In some example embodiments, the electronic device 1000 may be implemented as a semiconductor system separated from the semiconductor manufacturing equipment 31 and the semiconductor measuring equipment 32.


The input unit 11 may receive the device data and transmit the device data to the processor 13, and the processor 13 may generate the basic training data using the compact model CM. For example, in some example embodiments, the input unit 11 may be (and/or include) at least one of the user interface 160, the storage device 140, and/or the modem 150, and the processor 13 may be (and/or include) at least one of the processors 110 in the computing device 100 (as illustrated in FIG. 2). The compact model CM may provide the simulation result data indicating the characteristics of the semiconductor device corresponding to the device data by performing simulation based on the device data.


The processor 13 may generate the training data corresponding to a combination of the device data and the simulation result data. The processor 13 may obtain values of the simulation result data corresponding to various values of the device data and establish a database DB including various combinations of the values of the device data and the simulation result data. The established database DB may be stored in the storage 12, and the processor 13 may perform training or learning of the deep learning model MLM using the training data in the database DB.


The processor 13 may generate (and/or update) the compact model CM based on measurement data MD and store (and/or update) the compact model CM in the storage 12.


The compact model CM may be generated (and/or updated) based on the measurement data MD. The measurement data MD may include an electrical and/or structural characteristic of a semiconductor product actually measured by the semiconductor measuring equipment 32. The semiconductor product measured by the semiconductor measuring equipment 32 may have been manufactured by the semiconductor manufacturing equipment 31 based on semiconductor manufacturing data. The semiconductor manufacturing data may be related to a manufacture of a target semiconductor device and/or a manufacture of a semiconductor device similar to the target semiconductor device.


The compact model CM may be updated in response to the measurement of an electrical and/or structural characteristic of a semiconductor product by the semiconductor measuring equipment 32. For example, in response to the reception of the measurement data MD from the semiconductor measuring equipment 32, the processor 13 may update the compact model CM to reflect the latest measurement data MD. The processor 13 may receive the measurement data MD from the semiconductor measuring equipment 32 through the input unit 11 or a communication unit.


The storage 12 may include equipment information of at least one of the semiconductor manufacturing equipment 31 and/or the semiconductor measuring equipment 32. For example, a semiconductor product may have a different electrical and/or structural characteristic according to the type of the semiconductor manufacturing equipment 31. In addition, the electrical and/or structural characteristic of a semiconductor product may be differently measured according to the type of the semiconductor measuring equipment 32. To reduce the potential for errors involved in the types of the semiconductor manufacturing equipment 31 and the semiconductor measuring equipment 32, the storage 12 may include various kinds of equipment information such as information about a manufacturer of the semiconductor manufacturing equipment 31 and/or a manufacturer of the semiconductor measuring equipment 32, model information of the semiconductor manufacturing equipment 31 and/or the semiconductor measuring equipment 32, and/or performance information thereof. The processor 13 may update the compact model CM with reference to the equipment information stored in the storage 12.


The processor 13 may use the deep learning model MLM, the compact model CM, and/or the database DB to simulate and/or predict the performance of a semiconductor device manufactured by the semiconductor manufacturing equipment 31, e.g., before the semiconductor device is manufactured. The processor 13 may, for example, determine whether a change to the design of the semiconductor device may improve or deteriorate the performance of the semiconductor device based on, e.g., operational conditions for the semiconductor device. In some example embodiments, for example, the processor 13 may confirm a design based on these predictions, thereby indicating that the design is okay to proceed to manufacture and/or forwarding the design to a processor controlling the semiconductor manufacturing equipment 31, and/or may pause (and/or stop) the production of semiconductor devices based on the design if, e.g., a change in the design would result in a characteristic of the semiconductor devices deteriorating below a threshold value.


In some example embodiments, the processor 13 may (e.g., periodically) confirm the prediction of the deep learning model MLM by comparing the prediction of a design with a semiconductor device manufactured based on the design, e.g., by using the measurement data MD received from the semiconductor measuring equipment 32. For example, in some example embodiments, the processor 13 may store the prediction in the storage 12, and then may compare the prediction to a manufactured semiconductor device based on the design. If the prediction and the manufactured semiconductor device differ, e.g., beyond a maximum threshold, the compact model CM stored in the storage 12 may be, e.g., updated based on the measurement data MD actually measured by the semiconductor measuring equipment 32, as discussed above.



FIGS. 4 and 5 are diagrams for describing examples of a deep learning neural network structure that is driven by a machine learning device according to example embodiments.


Referring to FIG. 4, a general neural network may include an input layer IL, a plurality of hidden layers HL1, HL2, . . . , HLn and an output layer OL.


The input layer IL may include i input nodes x1, x2, . . . , xi, where i is a natural number. Input data (e.g., vector input data) X whose length is i may be input to the input nodes x1, x2, . . . , xi such that each element of the input data X is input to a respective one of the input nodes x1, x2, . . . , xi.


The plurality of hidden layers HL1, HL2, . . . , HLn may include n hidden layers, where n is a natural number, and may include a plurality of hidden nodes h11, h12, h13, . . . , h1m, h21, h22, h23, . . . , h2m, hn1, hn2, hn3, . . . , hnm. For example, the hidden layer HL1 may include m hidden nodes h11, h12, h13, . . . , h1m, the hidden layer HL2 may include m hidden nodes h21, h22, h23, . . . , h2m, and the hidden layer HLn may include m hidden nodes hn1, hn2, hn3, . . . , hnm, where m is a natural number.


The output layer OL may include j output nodes y1, y2, . . . , yj, providing output data Y where j is a natural number. The output layer OL may output the output data Y associated with the input data X.


A structure of the neural network illustrated in FIG. 4 may be represented by information on branches (or connections) between nodes illustrated as lines, and a weighted value assigned to each branch. Nodes within one layer may not be connected to one another, but nodes of different layers may be fully (and/or partially) connected to one another.


Each node (e.g., the node h11) may receive an output of a previous node (e.g., the node x1), may perform a computing operation, computation and/or calculation on the received output, and may output a result of the computing operation, computation, or calculation as an output to a next node (e.g., the node h21). Each node may calculate a value to be output by applying the input to a specific function, e.g., a nonlinear function.


Generally, the structure of the neural network may be set in advance, and the weighted values for the connections between the nodes are set appropriately using data having an already known answer of which class the data belongs to. The data with the already known answer may be referred to as “training data,” and a process of determining the weighted value is referred to as “training.” The neural network “learns” during the training process. A group of an independently trainable structure and the weighted value is referred to as a “model,” and a process of predicting, by the model with the determined weighted value, which class the input data belongs to, and then outputting the predicted value, is referred to as a “testing” process.


The general neural network illustrated in FIG. 4 may not be suitable for handling input image data (or input sound data) because each node (e.g., the node h11) is connected to all nodes of a previous layer (e.g., the nodes x1, x2, . . . , xi included in the layer IL), and thus the number of weighted values drastically increases as the size of the input image data increases. Thus, a convolutional neural network (CNN), which is implemented by combining the filtering technique with the general neural network, has been researched such that a two-dimensional image (e.g., the input image data) is efficiently trained by the convolutional neural network.


Referring to FIG. 5, a convolutional neural network may include a plurality of layers CONV1, RELU1, CONV2, RELU2, POOL1, CONV3, RELU3, CONV4, RELU4, POOL2, CONV5, RELU5, CONV6, RELU6, POOL3 and FC.


Unlike the general neural network, each layer of the convolutional neural network may have three dimensions of width, height, and depth, and thus data that is input to each layer may be volume data having three dimensions of width, height, and depth.


Each of convolutional layers CONV1, CONV2, CONV3, CONV4, CONV5 and CONV6 may perform a convolutional operation on input volume data. In image processing, the convolutional operation represents an operation in which image data is processed based on a mask with weighted values and an output value is obtained by multiplying input values by the weighted values and adding up the total multiplied values. The mask may be referred to as a filter, window, and/or kernel.


In further detail, parameters of each convolutional layer may consist of (and/or include) a set of learnable filters. Every filter may be spatially small (e.g., along width and height), but may extend through the full depth of an input volume. For example, during the forward pass, each filter may be slid (e.g., convolved) across the width and height of the input volume, and dot products may be computed between the entries of the filter and the input at any position. As the filter is slid over the width and height of the input volume, a two-dimensional activation map that gives the responses of that filter at every spatial position may be generated. As a result, an output volume may be generated by stacking these activation maps along the depth dimension. For example, if input volume data having a size of 32×32×3 passes through the convolutional layer CONV1 having twelve filters with zero-padding, output volume data of the convolutional layer CONV1 may have a size of 32×32×12 (e.g., a depth of volume data increases).


Each of rectifying linear unit (“RELU”) layers RELU1, RELU2, RELU3, RELU4, RELU5 and RELU6 may perform a rectified linear unit operation that corresponds to an activation function defined by, e.g., a function f(x)=max(0, x) (e.g., an output is zero for all negative input x). For example, if input volume data having a size of 32×32×12 passes through the RELU layer RELU1 to perform the rectified linear unit operation, output volume data of the RELU layer RELU1 may have a size of 32×32×12 (e.g., a size of volume data is maintained).


Each of pooling layers POOL1, POOL2 and POOL3 may perform a down-sampling operation on input volume data along spatial dimensions of width and height. For example, four input values arranged in a 2×2 matrix formation may be converted into one output value based on a 2×2 filter. For example, a maximum value of four input values arranged in a 2×2 matrix formation may be selected based on 2×2 maximum pooling, or an average value of four input values arranged in a 2×2 matrix formation may be obtained based on 2×2 average pooling. For example, if input volume data having a size of 32×32×12 passes through the pooling layer POOL1 having a 2×2 filter, output volume data of the pooling layer POOL1 may have a size of 16×16×12 (e.g., width and height of volume data decreases, and a depth of volume data is maintained).
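
The volume sizes quoted above can be checked with a short calculation, assuming a 3×3 kernel with stride 1 and one-pixel zero-padding (these kernel parameters are assumptions; the text specifies only the input and output sizes):

```python
def conv2d_out(w_in, h_in, num_filters, kernel=3, stride=1, pad=1):
    # Spatial size of a convolutional layer output: (W - K + 2P) / S + 1;
    # the output depth equals the number of filters.
    w_out = (w_in - kernel + 2 * pad) // stride + 1
    h_out = (h_in - kernel + 2 * pad) // stride + 1
    return w_out, h_out, num_filters

def pool2d_out(w_in, h_in, d_in, stride=2):
    # A 2x2 pooling with stride 2 halves width and height, keeping depth.
    return w_in // stride, h_in // stride, d_in

print(conv2d_out(32, 32, num_filters=12))  # (32, 32, 12), as in CONV1
print(pool2d_out(32, 32, 12))              # (16, 16, 12), as in POOL1
```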


Typically, one convolutional layer (e.g., CONV1) and one RELU layer (e.g., RELU1) may form a pair of CONV/RELU layers in the convolutional neural network, pairs of the CONV/RELU layers may be repeatedly arranged in the convolutional neural network, and the pooling layer may be periodically inserted in the convolutional neural network, thereby extracting characteristics of the input data X while reducing its spatial size. The types and number of layers included in the convolutional neural network may be variously changed.


Example embodiments of the deep learning model are not limited to a specific neural network. The deep learning model may include, for example, at least one of CNN (Convolution Neural Network), R-CNN (Region with Convolution Neural Network), RPN (Region Proposal Network), RNN (Recurrent Neural Network), S-DNN (Stacking-based Deep Neural Network), S-SDNN (State-Space Dynamic Neural Network), Deconvolution Network, DBN (Deep Belief Network), RBM (Restricted Boltzmann Machine), Fully Convolutional Network, LSTM (Long Short-Term Memory) Network, Classification Network, and BNN (Bayesian Neural Network). Additionally (and/or alternatively), the deep learning model(s) may be trained based on at least one of various algorithms such as regression, linear and/or logistic regression, random forest, a support vector machine (SVM), and/or other types of models, such as statistical clustering, Bayesian classification, decision trees, dimensionality reduction such as principal component analysis, expert systems, and/or combinations thereof including ensembles such as random forests.



FIG. 6 is a diagram illustrating an example of a node included in a neural network.



FIG. 6 illustrates an example node operation performed by a node ND in a neural network. When n inputs a1˜an are provided to the node ND, the node ND may multiply the n inputs a1˜an and corresponding n weights w1˜wn, respectively, may sum the n values obtained by the multiplication, may add an offset “b” to the summed value, and may generate one output value by applying the value to which the offset “b” is added to a specific function “σ”. The learning operation may be performed based on the training data to update all nodes in the neural network.
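
In code, this node operation reduces to a weighted sum of the inputs plus the offset “b,” passed through the function “σ”; the sketch below picks tanh for “σ” purely as an example:

```python
import numpy as np

def node_output(a, w, b, sigma=np.tanh):
    # y = sigma(w1*a1 + ... + wn*an + b): weighted sum, offset, activation
    return sigma(np.dot(w, a) + b)

y = node_output(a=np.array([0.5, -1.0, 2.0]),
                w=np.array([0.1, 0.4, -0.2]),
                b=0.3)
```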


In the case of predicting characteristics of a semiconductor device based on deep learning, a sufficient amount of training data and/or learning data may be required to train a deep learning model or deep learning module. For example, tens to millions of training data samples of various kinds may be required to prevent over-fitting during training and to enhance performance of the deep learning model. According to some example embodiments, the database for training the deep learning model may be established efficiently by generating the training data using the compact model.



FIG. 7 is a diagram for describing types of uncertainty in a method of generating a deep learning model according to example embodiments.


In FIG. 7, the horizontal axis indicates input data of a deep learning model and the vertical axis indicates prediction data output from the deep learning model. FIG. 7 illustrates an example distribution of the prediction data provided using the deep learning model. The input data may be divided into first through fifth ranges RG1˜RG5 according to the distribution of the prediction data.


For example, the prediction data indicating the characteristics of the semiconductor device may increase linearly according to the input data. In this case, the prediction data from an ideal deep learning model may coincide with a dotted line of a uniform slope in FIG. 7.


The prediction data have a relatively large distribution in the first range RG1 and have a relatively small distribution in the fifth range RG5.


The large distribution of the prediction data may be caused by noises of the input data and/or the training data. The uncertainty of the prediction data caused by noises of the training data may be referred to as data uncertainty (and/or aleatoric uncertainty).


In contrast, the uncertainty of the prediction data caused by insufficiency of the training data in the second and fourth ranges RG2 and RG4 may be referred to as model uncertainty (and/or epistemic uncertainty).


In some example embodiments, the uncertainty data output from the deep learning model may include a model uncertainty value indicating the uncertainty of the prediction data caused, e.g., by insufficiency of the basic training data. In these cases, as will be described below with reference to FIG. 11, a first retraining may be performed such that the deep learning model that has been trained based on the basic training data is further trained based on the model uncertainty value.


In some example embodiments, the uncertainty data output from the deep learning model may include a data uncertainty value indicating the uncertainty of the prediction data caused by noises of the basic training data. In these cases, as will be described below with reference to FIG. 13, a second retraining may be performed such that the deep learning model that has been trained based on the basic training data is initialized, and the initialized deep learning model is trained based on the data uncertainty value.


The deep learning model may be configured to output a model uncertainty value and a data uncertainty value through quantification of the model uncertainty and the data uncertainty.


In some example embodiments, the deep learning model may include a Bayesian Neural Network (BNN). The deep learning model may quantify the uncertainty using the Monte-Carlo Dropout scheme, the Deep Ensemble scheme, a quantile regression scheme, a Gaussian Process Inference scheme, and/or a combination thereof.
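
For example, under the Monte-Carlo Dropout scheme, dropout may be kept active at inference time and the spread of repeated stochastic predictions treated as the model uncertainty value. The sketch below uses a toy, untrained numpy network; the layer sizes, dropout rate, and number of passes are assumptions, and a data uncertainty value would typically require an additional predicted-variance output that is omitted here:

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy two-layer network; the weights would normally come from training.
W1, b1 = rng.normal(size=(64, 8)), np.zeros(64)
W2, b2 = rng.normal(size=(1, 64)), np.zeros(1)

def stochastic_forward(x, p_drop=0.1):
    # Monte-Carlo Dropout: the dropout mask stays active at inference time.
    h = np.maximum(0.0, W1 @ x + b1)       # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop   # random dropout mask
    h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
    return (W2 @ h + b2)[0]

def predict_with_uncertainty(x, T=100):
    samples = np.array([stochastic_forward(x) for _ in range(T)])
    prediction = samples.mean()        # prediction data PD
    model_uncertainty = samples.var()  # spread across stochastic passes (Um)
    return prediction, model_uncertainty

pred, um = predict_with_uncertainty(rng.normal(size=8))
```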


As such, the method and computing device according to some example embodiments may efficiently provide the deep learning model capable of precisely predicting the characteristics of the semiconductor device by determining the method of retraining the deep learning model based on the type of the uncertainty of the prediction data.



FIG. 8 is a diagram illustrating data in a method of generating a deep learning model according to example embodiments. FIG. 8 illustrates example data when a semiconductor device corresponds to a transistor. Example embodiments are not limited to a transistor and may be applied to semiconductor devices (e.g., two-terminal devices like diodes, three-terminal devices like rectifiers, four-terminal devices like optocouplers, etc., and/or electronic components of other types such as microelectromechanical systems (MEMS), resistors, capacitors, integrated cells, etc.).


Referring to FIG. 8, input data X of a deep learning model may include device data DV and/or process data PR.


In some example embodiments, as will be described below with reference to FIG. 9, the input data X may not include the process data PR and may include, e.g., only the device data DV.


In some example embodiments, as will be described below with reference to FIG. 10, the input data X may include both of the device data DV and the process data PR.


The device data DV may indicate the structure and operation conditions of the semiconductor device. For example, the device data DV may include information on the structure of the semiconductor device such as a width W of a transistor, a length L of the transistor, etc. In addition, the device data DV may include information on the operation conditions of the semiconductor device such as an operation temperature Top of the transistor, a drain voltage Vd, a gate voltage Vg, a body voltage Vb, a source voltage Vs of the transistor, etc.


The process data PR may indicate conditions of the manufacturing process of the semiconductor device. For example, the process data PR may include a kind Dk of a dopant in an ion-implanting process, a density Dd of the dopant, an activation temperature Tact, a thickness tOG of a gate oxide layer, a thickness tSP of a spacer in a gate structure of the transistor, etc.
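
For concreteness, the input data fields described above might be grouped as follows; the class and field names are hypothetical and merely mirror the labels W, L, Top, Vd, Vg, Vb, Vs, Dk, Dd, Tact, tOG, and tSP:

```python
from dataclasses import dataclass

@dataclass
class DeviceData:          # DV: structure and operation conditions
    width: float           # W, transistor width
    length: float          # L, transistor length
    t_op: float            # Top, operation temperature
    v_d: float             # Vd, drain voltage
    v_g: float             # Vg, gate voltage
    v_b: float             # Vb, body voltage
    v_s: float             # Vs, source voltage

@dataclass
class ProcessData:         # PR: manufacturing process conditions
    dopant_kind: str       # Dk, kind of dopant
    dopant_density: float  # Dd, density of the dopant
    t_act: float           # Tact, activation temperature
    t_og: float            # tOG, gate oxide thickness
    t_sp: float            # tSP, spacer thickness
```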


Output data Y of the deep learning model may include prediction data PD and uncertainty data UC.


The prediction data PD may indicate electrical characteristics of the semiconductor device. For example, the prediction data PD may include a threshold voltage Vt, a gain G, a breakdown voltage Vbk, a drain current Id of the transistor, etc.


The uncertainty data UC may indicate uncertainty of the prediction data PD. The uncertainty data UC may include a model uncertainty value Um and/or a data uncertainty value Ud. As described above, the model uncertainty value Um may indicate the uncertainty of the prediction data PD caused by insufficiency of the basic training data, and the data uncertainty value Ud may indicate the uncertainty of the prediction data PD caused by noises of the basic training data.



FIGS. 9 and 10 are diagrams illustrating example embodiments of a deep learning model in a method of generating a deep learning model according to example embodiments. For convenience of illustration, detailed configuration of a deep learning model is omitted and an input layer IL receiving input data X and an output layer OL providing output data Y are illustrated in FIGS. 9 and 10.


The compact model may provide simulation result data SR indicating characteristics of a semiconductor device corresponding to device data DV by performing simulation based on the device data DV. In general, each compact model may be designed to output the simulation result data SR corresponding to process data associated with a specific semiconductor product.


Referring to FIG. 9, by performing a method of generating a deep learning model according to example embodiments, a deep learning model DLM1 may be generated, which may replace one compact model CM1 corresponding to one process data PR1.


The basic training data TRD may be generated using the one compact model CM1 such that the basic training data TRD correspond to a combination of device data DV and simulation result data SR. A plurality of basic training data TRD corresponding to different combinations may be generated based on different values of the device data DV. In general, the performance of the deep learning model DLM1 may be enhanced as the amount of the basic training data TRD used in training is increased.


In the example embodiment of FIG. 9, the input data X of the deep learning model DLM1 may not include the process data PR1 and may include only the device data DV. The output data Y may include the prediction data PD and the uncertainty data UC. The supervised learning of the deep learning model DLM1 may be performed using the simulation result data SR as the ground-truth data with respect to the device data DV.
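
A minimal sketch of this supervised learning, reusing the hypothetical generate_basic_training_data from the earlier sketch and omitting the uncertainty outputs for brevity (the network size and iteration count are arbitrary assumptions):

```python
from sklearn.neural_network import MLPRegressor

dv, sr = generate_basic_training_data()  # from the earlier sketch

# Supervised learning with the simulation result data SR as ground truth
# for the device data DV; uncertainty outputs are omitted in this sketch.
dlm1 = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
dlm1.fit(dv, sr)

pd = dlm1.predict(dv[:5])  # prediction data approximating the compact model
```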


Referring to FIG. 10, by performing a method of generating a deep learning model according to example embodiments, a deep learning model DLM2 may be generated, which may replace a plurality of compact models CM1 and CM2 corresponding to a plurality of process data PR1 and PR2.


The basic training data TRD may be generated using the plurality of compact models CM1 and CM2 such that the basic training data TRD correspond to a combination of process data PR, device data DV, and simulation result data SR. A plurality of basic training data TRD corresponding to different combinations may be generated based on different values of the process data PR and the device data DV.


In the example embodiment of FIG. 10, the input data X of the deep learning model DLM2 may include both the process data PR and the device data DV. The output data Y may include the prediction data PD and the uncertainty data UC. The supervised learning of the deep learning model DLM2 may be performed using the simulation result data SR as the ground-truth data with respect to the process data PR and the device data DV.



FIG. 11 is a flow chart illustrating an example embodiment of retraining based on a model uncertainty value in a method of generating a deep learning model according to example embodiments, and FIG. 12 is a diagram illustrating an additional data range in a method of generating a deep learning model according to example embodiments.



FIG. 11 illustrates a first retraining such that the deep learning model that has been trained based on the basic training data is further trained based on the model uncertainty value.


Referring to FIGS. 2 and 11, the training control module 240 operated by at least one processor of the processors 110 may compare the model uncertainty value with a model reference value (S311). As described above, the model uncertainty value may indicate the uncertainty of the prediction data caused by insufficiency of the basic training data. The model reference value may be determined as an appropriate value depending on the target performance of the trained deep learning model.


When the model uncertainty value is larger than the model reference value, additional training data different from the basic training data may be generated using the compact model (S312). The training control module 240 may retrain the deep learning model based on the additional training data (S313). The training control module 240 may, for example, not initialize the deep learning model that has been trained based on the basic training data, and may further train the deep learning model based on the additional training data.



FIG. 12 illustrates an example distribution of the model uncertainty value provided from the trained deep learning model. In FIG. 12, the horizontal axes indicate the length L and the width W of a semiconductor device (e.g., a transistor) and the vertical axis indicates the model uncertainty value Um.


The training control module 240 may determine an additional data range RGm corresponding to a range of the data in which the model uncertainty value Um is larger than the model reference value. The training control module 240 may generate the additional training data such that the additional training data correspond to a combination of the device data included in the additional data range and the corresponding simulation result data.
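
A sketch of this selection step, reusing the hypothetical compact_model from the earlier sketch: the device-data points whose model uncertainty values exceed the model reference value define the additional data range, and the compact model labels them:

```python
import numpy as np

def generate_additional_training_data(dv_grid, um, model_reference):
    # Additional data range RGm: device-data points whose model uncertainty
    # value Um exceeds the model reference value Rm (step S312).
    mask = um > model_reference
    dv_add = dv_grid[mask]
    sr_add = np.array([compact_model(x) for x in dv_add])  # reuse compact model
    return dv_add, sr_add
```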



FIG. 13 is a flow chart illustrating an example embodiment of retraining based on data uncertainty value in a method of generating a deep learning model according to example embodiments, and FIG. 14 is a diagram illustrating a measurement data range in a method of generating a deep learning model according to example embodiments.



FIG. 13 illustrates a second retraining such that the deep learning model that has been trained based on the basic training data is initialized, and the initialized deep learning model is trained based on the data uncertainty value.


Referring to FIGS. 2 and 13, the training control module 240 operated by at least one processor of the processors 110 may compare the data uncertainty value with a data reference value (S321). As described above, the data uncertainty value may indicate the uncertainty of the prediction data caused by noises of the basic training data. The data reference value may be determined as an appropriate value depending on the target performance of the trained deep learning model.


When the data uncertainty value is larger than the data reference value, measurement data may be provided by measuring characteristics of the semiconductor device (S322), as described with reference to FIG. 3. The measurement data may be provided to the training control module 240.


The training control module 240 may correct the compact model based on the measurement data (S323), and generate updated training data using the corrected compact model (S324). The training control module 240 may retrain the deep learning model based on the updated training data (S325). The training control module 240 may initialize the deep learning model that has been trained based on the basic training data, and then retrain the initialized deep learning model based on the updated training data.
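
One possible, simplified form of the correction step is a single least-squares gain fitted against the measurement data MD; an actual compact-model correction would refit many physical parameters, so the sketch below illustrates only the data flow of steps S323 through S325:

```python
import numpy as np

def correct_compact_model(cm, dv_meas, md):
    # Fit one least-squares gain so the compact model tracks the measurement
    # data MD over the measured device-data points (step S323).
    sr = np.array([cm(x) for x in dv_meas])
    gain = float(np.dot(sr, md) / np.dot(sr, sr))
    return lambda dv: gain * cm(dv)  # corrected compact model

# The corrected model then regenerates updated training data (S324), the
# deep learning model is re-initialized, and training restarts (S325).
```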



FIG. 14 illustrates an example distribution of the data uncertainty value provided from the trained deep learning model. In FIG. 14, the horizontal axes indicate the length L and the width W of a semiconductor device (e.g., a transistor) and the vertical axis indicates the data uncertainty value Ud.


The training control module 240 may determine one or more measurement data ranges RGd1 and RGd2 corresponding to ranges of the data in which the data uncertainty value Ud is larger than the data reference value. The measurement data ranges RGd1 and RGd2 may be provided to the semiconductor manufacturing equipment 31 and the semiconductor measuring equipment 32 in FIG. 3, and the semiconductor manufacturing equipment 31 and the semiconductor measuring equipment 32 may provide the measurement data MD by measuring the characteristics of the semiconductor device corresponding to the device data included in the measurement data ranges RGd1 and RGd2.



FIGS. 15 and 16 are flow charts illustrating example embodiments of retraining based on model uncertainty value and data uncertainty value in a method of generating a deep learning model according to example embodiments.


Referring to FIGS. 2 and 15, the training control module 240 may generate, using the deep learning model DLM, the model uncertainty value Um and the data uncertainty value Ud corresponding to the input data of the deep learning model DLM (S11). As described above, the model uncertainty value Um may indicate the uncertainty of the prediction data caused by insufficiency of the basic training data and the data uncertainty value Ud may indicate the uncertainty of the prediction data caused by noises of the basic training data.


The training control module 240 may compare the model uncertainty value Um with the model reference value Rm (S12).


When the model uncertainty value Um is larger than the model reference value Rm (S12: YES), the training control module 240 may perform the first retraining (S13) as described above with reference to FIG. 11. As described above, the training control module 240 may generate the additional training data different from the basic training data using the compact model and retrain the deep learning model based on the additional training data. In this case, the training control module 240 may not initialize the deep learning model that has been trained based on the basic training data, and may further train the deep learning model based on the additional training data.


When the model uncertainty value Um is not larger than the model reference value Rm (S12: NO), the training control module 240 may compare the data uncertainty value Ud with the data reference value Rd (S14).


When the data uncertainty value Ud is larger than the data reference value Rd (S14: YES), the training control module 240 may perform the second retraining (S15) as described above with reference to FIG. 13. As described above, the training control module 240 may receive the measurement data, correct the compact model based on the measurement data, generate the updated training data using the corrected compact model, and then retrain the deep learning model based on the updated training data. In this case, the training control module 240 may initialize the deep learning model that has been trained based on the basic training data, and then retrain the initialized deep learning model based on the updated training data.


As described with reference to FIG. 15, the training control module 240 may determine first whether to perform the first retraining based on the model uncertainty value. The training control module 240 may determine whether to perform the second retraining based on the data uncertainty value only when it is determined that the first retraining is not performed.


Referring to FIGS. 2 and 16, the training control module 240 may generate, using the deep learning model DLM, the model uncertainty value Um and the data uncertainty value Ud corresponding to the input data of the deep learning model DLM (S21). As described above, the model uncertainty value Um may indicate the uncertainty of the prediction data caused by insufficiency of the basic training data and the data uncertainty value Ud may indicate the uncertainty of the prediction data caused by noises of the basic training data.


The training control module 240 may compare the model uncertainty value Um with the model reference value Rm (S22).


When the model uncertainty value Um is larger than the model reference value Rm (S22: YES), the training control module 240 may perform the first retraining (S23) as described above with reference to FIG. 11. As described above, the training control module 240 may generate the additional training data different from the basic training data using the compact model and retrain the deep learning model based on the additional training data. In these cases, the training control module 240 may, e.g., not initialize the deep learning model that has been trained based on the basic training data, and may further train the deep learning model based on the additional training data.


In addition, regardless of the result of the comparison (S22), the training control module 240 may compare the data uncertainty value Ud with the data reference value Rd (S24).


When the data uncertainty value Ud is larger than the data reference value Rd (S24: YES), the training control module 240 may perform the second retraining (S25) as described above with reference to FIG. 13. As described above, the training control module 240 may receive the measurement data, correct the compact model based on the measurement data, generate the updated training data using the corrected compact model, and then retrain the deep learning model based on the updated training data. In this case, the training control module 240 may initialize the deep learning model that has been trained based on the basic training data, and then retrain the initialized deep learning model based on the updated training data.


As described with reference to FIG. 16, the training control module 240 may determine whether to perform the first retraining and whether to perform the second retraining independently of each other.
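
The control flows of FIGS. 15 and 16 differ only in whether the second check depends on the outcome of the first; a compact sketch covering both schemes (with hypothetical names, where an independent flag selects between the two) is:

```python
def retraining_policy(um, ud, rm, rd, independent=False):
    # independent=False: sequential scheme of FIG. 15 (the second retraining
    # is considered only when the first retraining is not performed).
    # independent=True: scheme of FIG. 16 (both checks evaluated separately).
    actions = []
    if um > rm:
        actions.append("first_retraining")   # S13/S23: add data, keep weights
    if (independent or not actions) and ud > rd:
        actions.append("second_retraining")  # S15/S25: correct CM, re-initialize
    return actions or ["no_retraining"]
```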


The first retraining may be performed rapidly using the established compact model, whereas the second retraining may require a significant amount of time for providing the measurement data. One of the methods of FIGS. 15 and 16 may be selected appropriately in consideration of various factors in designing and manufacturing semiconductor products.


As described above, the method and the computing device according to some example embodiments may provide the deep learning model capable of precisely predicting the characteristics of the semiconductor device by training the deep learning model to output the prediction data indicating the characteristics of the semiconductor device and the uncertainty data indicating the uncertainty of the prediction data and retraining the deep learning model based on the uncertainty data. In addition, the method and computing device according to some example embodiments may efficiently provide the deep learning model capable of precisely predicting the characteristics of the semiconductor device by determining the method of retraining the deep learning model based on the type of the uncertainty of the prediction data. Through the enhanced prediction performance of the deep learning model, the time for designing and manufacturing the semiconductor product including the semiconductor device may be reduced, and the performance of the semiconductor product may be enhanced.


As will be appreciated by one skilled in the art, example embodiments in this disclosure may be embodied as a system, a method, and/or a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. The computer readable program code may be provided to a processor of a general purpose computer, a special purpose computer, and/or other programmable data processing apparatus. The computer readable medium may be a computer readable signal medium and/or a computer readable storage medium. The computer readable storage medium may be any tangible medium that can contain and/or store a program for use by or in connection with an instruction execution system, apparatus, or device.


In this disclosure, the terms “driver,” “unit,” and/or “module” may denote elements that process (and/or perform) at least one function or operation and may be included in and/or implemented as processing circuitry such as hardware, software, or a combination of hardware and software. For example, the processing circuitry more specifically may include (and/or be included in), but is not limited to, a processor, a Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, an application-specific integrated circuit (ASIC), etc. For example, the term “module” may refer to a software component, a hardware component such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC), and/or a combination of a hardware component and a software component. However, a “module” is not limited to software or hardware. A “module” may be configured to be stored in an addressable storage medium and/or to be executed by one or more processors. Accordingly, for example, a “module” may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables. A function provided by components and “modules” may be combined into a smaller number of components and “modules,” or divided among additional components and “modules.”


The example embodiments may be applied to any electronic device or system. For example, the inventive concept may be applied to systems such as a memory card, a solid state drive (SSD), an embedded multimedia card (eMMC), a universal flash storage (UFS), a mobile phone, a smart phone, a personal digital assistant (PDA), a portable multimedia player (PMP), a digital camera, a camcorder, a personal computer (PC), a server computer, a workstation, a laptop computer, a digital TV, a set-top box, a portable game console, a navigation system, a wearable device, an internet of things (IoT) device, an internet of everything (IoE) device, an e-book, a virtual reality (VR) device, an augmented reality (AR) device, a server system, an automotive driving system, etc.


The foregoing is illustrative of some example embodiments and is not to be construed as limiting thereof. Although a few example embodiments have been described, those skilled in the art will readily appreciate that many modifications are possible in the example embodiments without materially departing from the present inventive concepts.

Claims
  • 1. A method of generating a deep learning model, the method being performed by executing program codes by at least one processor, the program codes being stored in computer readable media, the method comprising:
    generating basic training data corresponding to a combination of device data and simulation result data using a compact model, the compact model configured to generate the simulation result data by performing a simulation based on the device data, the simulation result data indicating characteristics of a semiconductor device corresponding to the device data;
    training the deep learning model based on the basic training data such that the deep learning model is configured to output prediction data and uncertainty data, the prediction data indicating the characteristics of the semiconductor device, and the uncertainty data indicating uncertainty of the prediction data; and
    retraining the deep learning model based on the uncertainty data.
  • 2. The method of claim 1, wherein the uncertainty data include a model uncertainty value indicating the uncertainty of the prediction data caused by insufficiency of the basic training data.
  • 3. The method of claim 2, wherein retraining the deep learning model includes:
    comparing the model uncertainty value with a model reference value;
    generating addition training data using the compact model when the model uncertainty value is larger than the model reference value; and
    retraining the deep learning model based on the addition training data,
    wherein the addition training data is different from the basic training data.
  • 4. The method of claim 3, wherein retraining the deep learning model further includes: determining an addition data range corresponding to a range of data such that the model uncertainty value is larger than the model reference value.
  • 5. The method of claim 4, wherein the addition training data correspond to a combination of the device data included in the addition data range and the simulation result data.
  • 6. The method of claim 3, wherein the deep learning model that has been trained based on the basic training data is further trained based on the addition training data.
  • 7. The method of claim 1, wherein the uncertainty data include a data uncertainty value, the data uncertainty value indicating the uncertainty of the prediction data caused by noise in the basic training data.
  • 8. The method of claim 7, wherein retraining the deep learning model includes:
    comparing the data uncertainty value with a data reference value;
    providing measurement data by measuring the characteristics of the semiconductor device, when the data uncertainty value is larger than the data reference value;
    correcting the compact model based on the measurement data;
    generating updated training data using the corrected compact model; and
    retraining the deep learning model based on the updated training data.
  • 9. The method of claim 8, wherein retraining the deep learning model further includes: determining a measurement data range corresponding to a range of data such that the data uncertainty value is larger than the data reference value.
  • 10. The method of claim 9, wherein the characteristics of the semiconductor device correspond to the device data included in the measurement data range.
  • 11. The method of claim 8, wherein the deep learning model that has been trained based on the basic training data is initialized, and the initialized deep learning model is trained based on the measurement data.
  • 12. The method of claim 1, wherein the uncertainty data include a model uncertainty value and a data uncertainty value,
    the model uncertainty value indicating the uncertainty of the prediction data caused by insufficiency of the basic training data, and
    the data uncertainty value indicating the uncertainty of the prediction data caused by noise in the basic training data.
  • 13. The method of claim 12, wherein retraining the deep learning model includes at least one of:
    performing a first retraining such that the deep learning model that has been trained based on the basic training data is further trained based on the model uncertainty value; or
    performing a second retraining such that the deep learning model that has been trained based on the basic training data is initialized, and the initialized deep learning model is trained based on the data uncertainty value.
  • 14. The method of claim 13, wherein whether to perform the first retraining is determined based on the model uncertainty value, and when it is determined that the first retraining is not performed, whether to perform the second retraining is determined based on the data uncertainty value.
  • 15. The method of claim 13, wherein whether to perform the first retraining and whether to perform the second retraining are determined independently of each other.
  • 16. The method of claim 1, wherein the deep learning model includes a Bayesian Neural Network (BNN).
  • 17. The method of claim 1, wherein the device data indicate structure and operation condition of the semiconductor device,
    the simulation result data and the prediction data indicate electrical characteristics of the semiconductor device, and
    the device data is included in input data of the deep learning model.
  • 18. The method of claim 17, wherein the input data of the deep learning model further includes process data indicating a condition of manufacturing process of the semiconductor device.
  • 19. A method of generating a deep learning model, the method being performed by executing program codes by at least one processor, the program codes being stored in computer readable media, the method comprising:
    generating basic training data corresponding to a combination of device data and simulation result data using a compact model, the compact model configured to generate the simulation result data by performing a simulation based on the device data, the simulation result data indicating characteristics of a semiconductor device corresponding to the device data;
    training the deep learning model based on the basic training data such that the deep learning model is configured to output prediction data, a model uncertainty value, and a data uncertainty value, the prediction data indicating the characteristics of the semiconductor device, the model uncertainty value indicating the uncertainty of the prediction data caused by insufficiency of the basic training data, and the data uncertainty value indicating the uncertainty of the prediction data caused by noise in the basic training data;
    performing a first retraining such that the deep learning model that has been trained based on the basic training data is further trained based on the model uncertainty value; and
    performing a second retraining such that the deep learning model that has been trained based on the basic training data is initialized, and the initialized deep learning model is trained based on the data uncertainty value.
  • 20. A computing device comprising:
    at least one processor; and
    a computer readable medium storing program codes and a compact model, the program codes being executed by the at least one processor to generate a deep learning model, the compact model generating simulation result data indicating characteristics of a semiconductor device corresponding to device data by performing simulation based on the device data,
    the at least one processor executing the program codes to:
    generate basic training data corresponding to a combination of the device data and the simulation result data using the compact model;
    train the deep learning model based on the basic training data such that the deep learning model is configured to output prediction data and uncertainty data, the prediction data indicating the characteristics of the semiconductor device and the uncertainty data indicating uncertainty of the prediction data; and
    retrain the deep learning model based on the uncertainty data.
Priority Claims (1)
Number Date Country Kind
10-2021-0109563 Aug 2021 KR national