The present disclosure relates to the field of electronic circuit modeling, and in particular, to a method of establishing a transistor statistical model based on an artificial neural network system.
With the continuous development of More-than-Moore technology, and limited by a series of second-order effects brought about by scaling down in actual production, new materials and new device structures have been introduced, and more and more emerging devices have been developed. The physical mechanisms of emerging devices are more complicated, which correspondingly brings great challenges to the modeling of these devices. At the same time, physics-based models have long development cycles and long simulation times, which is not beneficial for design technology co-optimization.
For general-purpose devices, any of the process parameters may affect the device characteristics. During fabrication of the devices, process parameters such as channel length and oxide layer thickness have a relatively large impact on the performance of emerging devices, e.g., off-state current, on-state current, sub-threshold slope, and threshold voltage. The device model requires consideration of the fluctuation of device process parameters and their impact on device performance, so as to ensure the reliability of mass production of devices, especially emerging devices, while ensuring technology advancement.
For solving the technical problems existing in the prior art, the present disclosure provides a method of establishing a transistor statistical model based on an artificial neural network system, comprising receiving a first data set, and generating a nominal model of a baseline transistor by the artificial neural network system based on the first data set, the first data set including multiple sets of gate-source voltage data, drain-source voltage data, and drain-source current data of multiple transistors of a same type, wherein the multiple transistors of a same type include the baseline transistor and a plurality of variational transistors, wherein the baseline transistor is determined according to a median or average value of the drain-source current data of the multiple transistors of the same type under the same bias condition, and the rest of the multiple transistors are variational transistors; screening neurons in the artificial neural network system based on the first data set and the nominal model to obtain final variational neurons; obtaining distribution of weights of the final variational neurons and distribution of threshold voltages based on variation of the nominal model with respect to weights of the final variational neurons, variation of the nominal model with respect to threshold voltage, distribution of the drain-source current and distribution of the gate-source voltage of the multiple transistors of the same type in the first data set; and establishing the transistor statistical model based on the nominal model, the distribution of weights of the final variational neurons and the distribution of the threshold voltages.
Specifically, said screening neurons in the artificial neural network system based on the first data set and the nominal model to obtain final variational neurons comprises, changing weights of at least part of the neurons in the artificial neural network system in the nominal model, until difference between an intermediate output curve after the change of the weights and current-voltage characteristic curve of a variational transistor is less than a first threshold, and taking the changed weights as adjusted weights of the at least part of the neurons with regards to the variational transistor, wherein the intermediate output curve refers to the output curve obtained after change of the weights; for each of the variational transistors, calculating an absolute value of relative change between the adjusted weights and initial weights for each of the at least part of the neurons in the nominal model; calculating an average value of the absolute values for each of the at least part of the neurons in the artificial neural network system, and screening out preliminary variational neurons based on the average value; and screening out the final variational neurons based on a range of output variation of the preliminary variational neurons.
Specifically, the first threshold is less than or equal to 5% of the drain-source current of the variational transistor involved in comparison.
Specifically, preliminary variational neurons with output variation range greater than or equal to 0.1 are taken as the final variational neurons.
Specifically, the variation of the nominal model with respect to weights of the final variational neurons includes, a partial derivative of the drain-source current in the nominal model with respect to the weights of the final variational neurons and a partial derivative of the gate-source voltage in the nominal model with respect to the weights of the final variational neurons.
Specifically, the variation of the nominal model with respect to the threshold voltage includes, a partial derivative of the drain-source current in the nominal model with respect to the threshold voltage and a partial derivative of the gate-source voltage in the nominal model with respect to the threshold voltage.
Specifically, the distribution of the drain-source current and the distribution of the gate-source voltage of the multiple transistors of the same type in the first data set include a standard deviation of the drain-source current and a standard deviation of the gate-source voltage.
Specifically, the distribution of weights of the final variational neurons in the statistical model includes a standard deviation of weights of the final variational neurons, and the distribution of the threshold voltages in the statistical model includes a standard deviation of the threshold voltages.
The present application also provides a method of applying a transistor statistical model based on an artificial neural network system, comprising receiving a second data set including multiple sets of gate-source voltage data and drain-source voltage data of multiple transistors of a same type, the multiple transistors including a baseline transistor and a plurality of variational transistors, wherein the baseline transistor is determined according to a median or average value of drain-source current data of the multiple transistors of the same type under the same bias condition, and the rest are variational transistors, and the second data set also includes multiple sets of drain-source current data of the baseline transistor and part of the variational transistors; establishing the transistor statistical model by obtaining distribution of threshold voltages in the statistical model and distribution of weights of final variational neurons based on the multiple sets of gate-source voltage data, drain-source voltage data and drain-source current data of the baseline transistor and part of the variational transistors in the second data set, wherein the final variational neurons are from the artificial neural network system; selecting a plurality of threshold voltages from the distribution of the threshold voltages, and calculating corresponding adjusted gate-source voltages according to the selected threshold voltages; selecting a plurality of weights for the final variational neurons from the distribution of weights of the final variational neurons; and generating drain-source current data of the multiple transistors of the same type that are not included in the second data set, based on the plurality of adjusted gate-source voltages and the plurality of selected weights for the final variational neurons.
Specifically, the threshold voltages are randomly selected based on Gaussian distribution from the distribution of the threshold voltages, and the corresponding weights are randomly selected based on Gaussian distribution from the distribution of weights of the final variational neurons.
Specifically, establishing the statistical model by obtaining distribution of threshold voltages in the statistical model and distribution of weights of final variational neurons based on the multiple sets of gate-source voltage data, drain-source voltage data and drain-source current data of the baseline transistor and part of the variational transistors in the second data set comprises, generating a nominal model of the baseline transistor by the artificial neural network system based on the multiple sets of gate-source voltage data, drain-source voltage data, and drain-source current data of the baseline transistor and part of the variational transistors in the second data set; screening neurons in the artificial neural network system to select final variational neurons based on the nominal model and the multiple sets of gate-source voltage data, drain-source voltage data and drain-source current data of the baseline transistor and part of the variational transistors in the second data set; obtaining distribution of weights of the final variational neurons and distribution of threshold voltages in the statistical model based on variation of the nominal model with respect to weights of the final variational neurons, variation of the nominal model with respect to the threshold voltages, distribution of the drain-source current and distribution of the gate-source voltage of the baseline transistor and part of the variational transistors in the second data set; and establishing the statistical model based on the nominal model, the distribution of weights of the final variational neurons and the distribution of the threshold voltages.
Specifically, said screening neurons in the artificial neural network system to select final variational neurons based on the multiple sets of gate-source voltage data, drain-source voltage data and drain-source current data of the baseline transistor and part of the variational transistors in the second data set comprises, changing weights of at least part of the neurons in the artificial neural network system in the nominal model, until the difference between an intermediate output curve after the change of weights and a current-voltage characteristic curve of a variational transistor is less than a first threshold, taking the changed weights as adjusted weights of the at least part of the neurons with regard to the variational transistor, wherein the intermediate output curve refers to the output curve after change of the weights; for each of the variational transistors, calculating absolute values of relative change between the adjusted weights and initial weights for each of the at least part of the neurons in the nominal model; calculating an average value of the absolute values for each of the at least part of the neurons in the artificial neural network system, and screening out preliminary variational neurons according to the average values; and screening out the final variational neurons based on neuron output variation range of the preliminary variational neurons.
Specifically, the first threshold is less than or equal to 5% of the drain-source current of the variational transistor involved in comparison.
Specifically, the preliminary variational neuron with the neuron output variation range of greater than or equal to 0.1 is used as the final variational neuron.
Specifically, the variation of the nominal model with respect to weights of the final variational neurons includes, a partial derivative of the drain-source current in the nominal model with respect to the weights of the final variational neurons and a partial derivative of the gate-source voltage in the nominal model with respect to the weights of the final variational neurons.
Specifically, the variation of the nominal model with respect to the threshold voltage includes, a partial derivative of the drain-source current in the nominal model with respect to the threshold voltage and a partial derivative of the gate-source voltage in the nominal model with respect to the threshold voltage.
Specifically, the distribution of the drain-source current and the distribution of the gate-source voltage of the baseline transistor and part of variational transistors in the second data set include, a standard deviation of the drain-source current and a standard deviation of the gate-source voltage.
Specifically, the distribution of weights of the final variational neurons in the statistical model includes, standard deviation of weights of the final variational neurons, and the distribution of the threshold voltages in the statistical model includes, standard deviation of the threshold voltages.
The present application also provides a computer-readable storage medium, comprising a memory storing a computer program, the computer program being executed to complete the method of establishing a transistor statistical model based on an artificial neural network system according to any one of the above.
The present application also provides a computer-readable storage medium, comprising a memory storing a computer program, the computer program being executed to complete the method of applying a transistor statistical model based on an artificial neural network system according to any one of the above.
The solution of the present disclosure can, while accelerating compact model construction of transistors, ensure low complexity and high precision of the model, and meanwhile improve reliability of the transistor model by capturing performance changes caused by process fluctuations, significantly increasing the simulation speed of generating transistor models.
Preferred embodiments of the present disclosure will be described in further detail below with reference to the accompanying drawings, wherein:
In order to make the purposes, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are part of embodiments of the present disclosure, not all of them. Based on the embodiments in the present disclosure, all other embodiments obtained by those skilled in the art without making creative efforts fall into the scope of protection of the present disclosure.
In the following detailed description, please refer to the accompanying drawings in the specification, which, as a part of the present disclosure, illustrate specific embodiments of the present disclosure. In the drawings, similar reference signs describe substantially similar components in different views. Specific embodiments of the present disclosure are described in sufficient detail below, so that those skilled in the art can carry out the technical solutions of the present disclosure. It should be understood that other embodiments may also be utilized, and structural, logical or electrical modification can also be made to the embodiments of the present disclosure.
Techniques, methods and devices known to those skilled in the art may not be discussed in detail, but such techniques, methods and devices should be considered as part of the specification where appropriate. The connections between units in the drawings are only used for convenience of description, meaning that at least the units at both ends of a connection communicate with each other; they are not intended to imply that units that are not connected cannot communicate with each other. In addition, the number of lines between two units is intended to indicate at least the number of signals involved in the communication between the two units, or at least the number of output terminals provided, and is not intended to imply that communication between the two units can be achieved only with the signals as shown in the drawings.
Transistors may refer to transistors of any structure, such as field effect transistors (FETs) or bipolar junction transistors (BJTs). When the transistors are field effect transistors, they may be, according to different channel materials, hydrogenated amorphous silicon transistors, metal oxide transistors, low-temperature polysilicon transistors, organic transistors, etc. According to whether the carriers are electrons or holes, transistors can be divided into N-type transistors and P-type transistors; the control pole refers to the gate of a field effect transistor, the first pole can be a drain or source of the field effect transistor, and the corresponding second pole can be a source or drain of the field effect transistor. When the transistors are bipolar transistors, the control pole refers to the base of the bipolar transistor, the first pole could be the collector or emitter of the bipolar transistor, and the corresponding second pole could be the emitter or collector of the bipolar transistor. Transistors can be fabricated using amorphous silicon, polysilicon, oxide semiconductors, organic semiconductors, NMOS/PMOS processes, or CMOS processes. Of course, other types of transistors can also be used.
The existing modeling methods based on artificial neural network mainly relate to constructing compact models of electronic devices. The compact model refers to a model obtained by reasonably simplifying a physical model of electronic devices with reference to engineering experience or mathematical methods.
In recent years, artificial neural networks and related technologies have been continuously developed and widely used in various fields. In order to solve the above-mentioned problems of long modeling and simulation time and low model reliability of electronic devices, modeling methods based on artificial neural networks have emerged accordingly. However, these methods are often specific to a single device, and how to efficiently build models of a large number of devices is still a problem to be solved.
The present disclosure provides a method of establishing a transistor statistical model based on an artificial neural network system. The method, with consideration of the fluctuation of process parameters of multiple transistors and the output offset problem caused thereby, effectively improves reliability of the transistor model, contributes to greatly increasing the simulation speed of the transistor model, and thus demonstrates the advantage of establishing transistor models with artificial neural network in the large-scale Monte Carlo simulation process.
As shown in
According to an embodiment, at 101, receiving a first data set, and generating a nominal model of a baseline transistor by an artificial neural network system based on at least part of the data in the first data set. The first data set may include multiple sets of gate-source voltage, drain-source voltage, and drain-source current data of multiple transistors of a same type, wherein the multiple transistors of the same type include the baseline transistor and multiple variational transistors.
According to an embodiment of the present disclosure, the baseline transistor may be determined according to a median value or an average value of drain-source current data of the plurality of transistors of the same type under the same bias condition in the first data set.
The same type of transistors described herein and in previous and following portions of this disclosure refers to transistors of the same category, having basically the same physical structure, and manufactured with the same process. Due to reasons such as variations in the production process, slight differences such as size differences may exist between the variational transistors and the baseline transistor, resulting in certain differences in performance therebetween, such as threshold voltage and output current. These differences are also known as fluctuations of the process parameters.
According to an embodiment, the nominal model refers to a model of the baseline transistor generated by training the artificial neural network based on at least partial data in the first data set.
According to different embodiments, the artificial neural network system suitable for the nominal model may include different structures, for example, which may include one or more artificial neural networks.
At 102: Based on the first data set and the nominal model, screening the neurons in the artificial neural network system to obtain final variational neurons.
According to one embodiment, part or all neurons in the artificial neural network system may be screened in the above screening operation.
Certain neurons affect the performance of the transistor models during the simulation process and can be used to reflect the fluctuation of process parameters of the corresponding transistors in actual production. Neurons may be screened to locate such influential neurons, which are called final variational neurons hereinafter. According to one embodiment, for a system including only a single network, final variational neurons may be selected from the last layer of neurons before the output layer in the network. For a system including multiple networks, the final variational neurons can be screened among all neurons in all networks.
At 103: Based on the variation of the nominal model with respect to the weights of the final variational neurons, the variation of the nominal model with respect to the threshold voltage, the distribution of the drain-source current and the distribution of the gate-source voltage in the first data set, obtaining distribution of weights of the final variational neurons and distribution of the threshold voltage in the transistor statistical model.
According to different embodiments, the so-called variations here may refer to partial derivatives or variations reflected by other mathematical calculation methods. According to different embodiments, the so-called distribution here can be embodied as standard deviation, variance, or other forms.
At 104: Establishing the transistor statistical model based on the nominal model, the distribution of weights of the final variational neurons, and the distribution of the threshold voltages.
At 121: Changing weights of at least part of the neurons in the artificial neural network system until the difference between an intermediate output curve after such change and the current-voltage characteristic curve of the variational transistor is less than a first threshold value, and then taking these weights as adjusted weights of the corresponding neurons.
According to one embodiment, the intermediate output curve refers to an output curve obtained by changing the weights of neurons in the system based on the data in the first data set and the nominal model. The current-voltage characteristic curve of the variational transistor refers to a curve formed based on current-voltage data corresponding to the variational transistors in the first data set.
The data of the plurality of variational transistors included in the first data set may include drain-source current values that differ from the nominal model's output under the same gate-source voltage and drain-source voltage. By changing the weights of the neurons so that the output of the nominal model approaches the current data of the variational transistors, the output offset caused by the fluctuations of the process parameters can be reflected in the weights of the neurons in the model and the resulting output changes. The adjusted weights of the neurons can correspondingly be regarded as influencing factors for establishing the statistical transistor model within the fluctuation range of the process parameters required for manufacture. The statistical transistor models obtained by analyzing the variation range and values of the neuron weights can thus be considered models that take the fluctuations of the process parameters into account.
According to one embodiment, the weight of each neuron may be adjusted by using gradient descent method.
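As a hedged illustration of this weight-adjustment step, the sketch below fits a toy stand-in for the nominal model to a synthetic variational I-V curve by gradient descent on the selected weights only. The function `model`, all numeric values, and the convergence settings are assumptions for illustration, not taken from the disclosure.

```python
import numpy as np

# Hypothetical stand-in for the nominal model's drain-source current output;
# `w` holds only the weights of the neurons selected for adjustment, while all
# other weights of the trained network would be left untouched.
def model(vgs, w):
    return w[0] * np.tanh(w[1] * vgs)

def fit_variational(vgs, ids_target, w_init, lr=0.05, max_iter=20_000, tol=0.05):
    """Adjust the selected weights by gradient descent until the intermediate
    output curve deviates from the variational transistor's I-V curve by less
    than `tol` (relative), mirroring the first-threshold criterion."""
    w = np.array(w_init, dtype=float)
    eps = 1e-6
    for _ in range(max_iter):
        # numerical gradient of the mean-squared-error loss w.r.t. each weight
        base = np.mean((model(vgs, w) - ids_target) ** 2)
        grad = np.zeros_like(w)
        for i in range(len(w)):
            wp = w.copy()
            wp[i] += eps
            grad[i] = (np.mean((model(vgs, wp) - ids_target) ** 2) - base) / eps
        w -= lr * grad
        # stop once the worst-case relative deviation is below the threshold
        rel_err = np.max(np.abs(model(vgs, w) - ids_target) / np.abs(ids_target))
        if rel_err < tol:
            break
    return w

vgs = np.linspace(0.1, 1.0, 20)
ids_target = 1.1 * np.tanh(2.2 * vgs)   # synthetic variational-transistor data
w_adj = fit_variational(vgs, ids_target, w_init=[1.0, 2.0])
```

In a real system the gradient would come from backpropagation through the trained network rather than finite differences; the stopping rule here plays the role of the first threshold.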
According to an embodiment of the present disclosure, after the nominal model is established, each neuron in the artificial neural network system has its own weight obtained through prior training, also called the initial weight. When generating statistical models of multiple transistors, only the weights of selected neurons are changed while the weights of the other neurons keep the initial weights unchanged.
According to an embodiment of the present disclosure, the difference between the intermediate output curve of the nominal model after each change of the weights of the selected neurons and the current-voltage characteristic curve of the variational transistor, in the form of, e.g., variance, can be used as the loss function, and it may be determined whether to stop updating the weights of the selected neurons according to the relationship between the loss function and a first threshold.
According to an embodiment of the present disclosure, multiple iterations may be performed based on the loss function until the difference between the intermediate output curve and the output curve of the variational transistor is less than or equal to the first threshold. According to one embodiment, the first threshold may be, for example, a value of 5% of the output of the variational transistor. According to an embodiment of the present disclosure, the maximum number of iterations may be, for example, 10⁵. According to one embodiment, upon completion of every 10⁴ iterations, the intermediate output curve is compared with the output curve of the variational transistor. If the current-voltage characteristic curve of a variational transistor differs from the intermediate output curve by less than the first threshold, the calculation may be stopped. If the difference does not fall below the first threshold after the maximum number of iterations is reached, the weights that render the difference the smallest among all iterations will be taken as the weights of the selected neurons.
According to different embodiments, weights of one or more neurons may be changed at the same time. According to different embodiments, there may be more than one intermediate output curve satisfying the first-threshold criterion, and the weight that renders the smallest difference can be selected as the weight of the specific neuron.
At 122: With regards to each of the variational transistors, calculating the absolute value of relative change of the weight for each neuron.
According to an embodiment of the present disclosure, since the data in the first data set corresponds to n variational transistors (n being an integer greater than 1), each neuron may have n adjusted weights accordingly, wherein for the kth transistor the adjusted weight of the ith neuron is W_i(k), wherein k is an integer greater than or equal to 1 and less than or equal to n, and i is an integer greater than or equal to 1. Calculation may be performed to obtain the absolute value of the relative change of the weight for each neuron. For example, the absolute value of the relative change ΔA_i(k) of the weight of the ith neuron before and after the change may be calculated. Specifically, with W_i denoting the initial weight of the ith neuron, the absolute value of the relative change of the weight ΔA_i(k) satisfies the following relationship: ΔA_i(k) = |(W_i(k) − W_i) / W_i|.
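This relative-change computation can be sketched as follows, under assumed array shapes (the shapes and numbers are illustrative, not from the disclosure):

```python
import numpy as np

# Sketch of step 122's follow-up: w_adj[k][i] is the adjusted weight of neuron
# i for variational transistor k, and w0[i] is that neuron's initial weight
# from the nominal model.
def relative_weight_change(w_adj, w0):
    """Return dA[k][i] = |(w_adj[k][i] - w0[i]) / w0[i]|."""
    w_adj = np.asarray(w_adj, dtype=float)
    w0 = np.asarray(w0, dtype=float)
    return np.abs((w_adj - w0) / w0)

# two variational transistors (k = 1, 2) and three neurons (i = 1, 2, 3)
dA = relative_weight_change([[1.1, 2.0, 0.9],
                             [0.9, 2.2, 1.2]],
                            [1.0, 2.0, 1.0])
```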
At 123: Screening out preliminary variational neurons based on the average values of the absolute value of the relative change of the weight of each neuron.
According to an embodiment of the present disclosure, each neuron has n absolute values of the relative change with regard to its n adjusted weights, and the average value of these n absolute values may be calculated for each neuron.
Table 1 shows the average value of the absolute values of the relative weight changes for each neuron.
According to an embodiment of the present disclosure, since different fluctuations of process parameters have different impacts on the current characteristics of transistors in different regions (subthreshold region, linear region, and saturation region), the influence of different neurons on the output is also different. The smaller the average value of a neuron, the smaller the influence of that neuron on the output; accordingly, neurons with larger average values may be screened out as the preliminary variational neurons.
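Step 123 can then be sketched as averaging each neuron's relative changes over the n variational transistors and thresholding; the screening threshold value below is an illustrative assumption, not taken from the disclosure:

```python
import numpy as np

# Sketch of step 123: average each neuron's relative weight changes over the n
# variational transistors and keep neurons whose average reaches a screening
# threshold.
def screen_preliminary(dA, threshold=0.05):
    dA = np.asarray(dA, dtype=float)   # shape: (n_transistors, n_neurons)
    avg = dA.mean(axis=0)              # average absolute relative change per neuron
    return avg, np.flatnonzero(avg >= threshold)

avg, prelim = screen_preliminary([[0.10, 0.01, 0.10],
                                  [0.10, 0.03, 0.20]])
# the middle neuron averages 0.02 and is screened out; neurons 0 and 2 remain
```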
At 124: Screening out the final variational neurons based on the output variation range of the preliminary variational neurons.
According to an embodiment of the present disclosure, the variation range of the output curve of the final variational neuron should be sufficiently large upon actual production requirements. For example, preliminary variational neurons whose output curve variation range is greater than or equal to a third threshold may be selected as the final variational neurons. According to an embodiment, the neuron output curve may be a normalized curve, and thus the third threshold may be 0.1 for example.
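Step 124 can be sketched as follows; the array shapes, the choice of max-minus-min as the "output variation range", and the example curves are assumptions for illustration:

```python
import numpy as np

# Sketch of step 124: outputs[k][j] is the normalized output curve of
# preliminary variational neuron j for transistor k. A neuron's output
# variation range is taken here as the largest max-minus-min spread across
# transistors, compared against the third threshold (0.1 for normalized curves).
def screen_final(outputs, third_threshold=0.1):
    outputs = np.asarray(outputs, dtype=float)  # (n_transistors, n_neurons, n_points)
    spread = outputs.max(axis=0) - outputs.min(axis=0)  # per neuron, per bias point
    variation_range = spread.max(axis=-1)               # worst-case range per neuron
    return np.flatnonzero(variation_range >= third_threshold)

outputs = [
    [[0.20, 0.50], [0.40, 0.41]],   # transistor 1: curves of neurons 0 and 1
    [[0.35, 0.60], [0.42, 0.44]],   # transistor 2
]
final = screen_final(outputs)       # neuron 0 varies by 0.15 >= 0.1; neuron 1 does not
```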
A method of calculating and obtaining the distribution of the weights of the final variational neurons and the distribution of the threshold voltages of the multiple transistors according to an embodiment of the present disclosure is introduced below.
According to different embodiments, the so-called variation here may refer to partial derivatives or changes reflected by other mathematical calculation methods. According to different embodiments, the so-called distribution here can be embodied as standard deviation, variance, or other forms.
Specifically, after the artificial neural network system completes the selection of the final variational neurons, according to the distribution of the transistor data in the first data set, such as the standard deviations of IDS and VGS, the distribution of the weights of the final variational neurons in the transistor statistical model can be calculated, such as the standard deviation σ̂_W of the weights.
Since the transistors include P-type transistors and N-type transistors, the structures thereof are different, and the corresponding transistor parameters and calculation principles are also different. Therefore, when using the BPV (Backward Propagation of Variance) method for the calculation, separate calculations are required in terms of the P-type transistors and the N-type transistors.
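The BPV method propagates the variances of measured quantities back to the variances of model parameters. A standard BPV formulation consistent with the terms discussed in this section would take the following form for equation (3); this is a reconstruction for the reader's reference, not verbatim from the disclosure:

```latex
\hat{\sigma}_{I_{DS\_data}}^{2}
  = \sum_{m}\left(\frac{\partial I_{DS}}{\partial W_{m}}\right)^{2}\hat{\sigma}_{W_{m}}^{2}
  + \left(\frac{\partial I_{DS}}{\partial V_{th}}\right)^{2}\hat{\sigma}_{V_{th}}^{2}

\hat{\sigma}_{V_{GS\_data}}^{2}
  = \sum_{m}\left(\frac{\partial V_{GS}}{\partial W_{m}}\right)^{2}\hat{\sigma}_{W_{m}}^{2}
  + \left(\frac{\partial V_{GS}}{\partial V_{th}}\right)^{2}\hat{\sigma}_{V_{th}}^{2}
```

One such relation is written per measured quantity (e.g., ISAT, ILIN, ISUB, VGS1, VGS2), and the resulting system is solved jointly for σ̂_Wm and σ̂_Vth.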
According to an embodiment of the present disclosure, in equation (3), σ̂_IDS_data and σ̂_VGS_data represent the distribution of the drain-source current data IDS_data and of the gate-source voltage data VGS_data of the multiple transistors; ∂IDS/∂Wm, ∂VGS/∂Wm, ∂IDS/∂Vth and ∂VGS/∂Vth represent the variation of parameters in the nominal model with respect to the weights of the final variational neurons and with respect to the threshold voltage (for example, the variation of the drain-source current IDS in the nominal model with respect to the weight of each final variational neuron, the variation of the gate-source voltage VGS in the nominal model with respect to the weight of each final variational neuron, the variation of the drain-source current IDS in the nominal model with respect to the threshold voltage, and the variation of the gate-source voltage VGS in the nominal model with respect to the threshold voltage); and σ̂_Wm and σ̂_Vth represent the distribution of the weight of each final variational neuron and the distribution of the threshold voltage.
According to an embodiment of the present disclosure, the gate-source voltage VGS and the gate-source voltage data VGS_data in the first data set may be the gate-source voltage of the transistors in the subthreshold region.
According to an embodiment of the present disclosure, the variation of the gate-source voltage VGS may be obtained by converting the nominal model from a model of outputting drain-source current to a model of outputting gate-source voltage, and calculating the variation of the model with respect to weights of the final variational neurons and with respect to the threshold voltage.
According to an embodiment, when the so-called variation here refers to a partial derivative, it can be calculated by the chain rule, by numerical differentiation, or by other methods, as determined upon actual needs.
According to an embodiment of the present disclosure, the data in the first data set may include one or more IDS values under different VDS and VGS biases, and one or more VGS values under different VDS and IDS biases, part of which is shown in Table 3.
In equation (3), {circumflex over (σ)}IDS_data represents the distribution of the drain-source current data IDS_data of the multiple transistors of the same type in the first data set.
In equation (3), {circumflex over (σ)}VGS_data represents the distribution of the gate-source voltage data VGS_data of the multiple transistors of the same type in the first data set.
In equation (3), ∂IDS/∂Wm represents the partial derivative of the IDS in the nominal model with respect to the weight Wm of the final variational neuron m, where the IDS can be one or more of the IDS shown in Table 3, such as ISAT. According to one embodiment, the final variational neuron m refers to the mth neuron of all neurons in the system. When there are multiple final variational neurons, the partial derivative in equation (3) should be calculated with respect to the weight of each final variational neuron.
In equation (3), ∂IDS/∂Vth represents the partial derivative of IDS in the nominal model to the threshold voltage Vth, where IDS can be one or more of ISAT or ILIN or ISUB as shown in Table 3. In equation (3), ∂VGS/∂Wm represents the partial derivative of VGS in the nominal model with respect to the weight Wm of the final variational neuron m, where VGS can be VGS1 and/or VGS2 in VGS as shown in Table 3.
In equation (3), ∂VGS/∂Vth represents the partial derivative of VGS in the nominal model with respect to the threshold voltage Vth, where VGS can be VGS1 and/or VGS2 as shown in Table 3.
In equation (3), {circumflex over (σ)}Wm represents the distribution of the weight Wm of each final variational neuron, and {circumflex over (σ)}Vth represents the distribution of the threshold voltage Vth.
The nominal model combined with the above calculation results can be used as the statistical model of multiple transistors of the same type based on the first data set.
According to an embodiment, the artificial neural network system 400 may include a threshold voltage adjustment module 401, configured to receive the gate-source voltage VGS in the data set and the distribution σVth of the threshold voltage when the statistical model is applied. The threshold voltage adjustment module 401 is configured to generate adjusted gate-source voltages VGS_Adjust.
As shown in
According to an embodiment of the present disclosure, the artificial neural network system 400 may include one artificial neural network. Of course, the system can also have other structures, for example, including multiple artificial neural networks. Correspondingly, the artificial neural network system 400 may also include multiple hidden layers. The following description will be based on an example where the artificial neural network system 400 includes only one artificial neural network.
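The single-network case described above can be sketched as a one-hidden-layer forward pass mapping the bias point (VGS, VDS) to a drain-source current. The layer sizes, weight values, and tanh activation are illustrative assumptions; the actual trained nominal model of the disclosure may differ in all of these choices.

```python
import math

# Minimal sketch of one artificial neural network with a single hidden
# layer, mapping (VGS, VDS) to an output current. All parameters below
# are toy values, not trained weights.
def forward(vgs, vds, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: each neuron applies tanh to a weighted bias point.
    hidden = [math.tanh(wi[0] * vgs + wi[1] * vds + bi)
              for wi, bi in zip(w_hidden, b_hidden)]
    # Output layer: linear combination of hidden activations.
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# Toy parameters for a 3-neuron hidden layer.
w_hidden = [(1.0, 0.5), (-0.5, 1.0), (0.3, -0.2)]
b_hidden = [0.0, 0.1, -0.1]
w_out = [0.2, -0.1, 0.4]
b_out = 0.01

ids = forward(0.8, 0.05, w_hidden, b_hidden, w_out, b_out)
```

Extending this sketch to multiple hidden layers, as the system 400 permits, only adds further weighted tanh stages between the input and output layers.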
As shown in
According to an embodiment, the second data set may include one or more sets of gate-source voltage and drain-source voltage data of multiple transistors of the same type.
According to an embodiment of the present disclosure, optionally, the artificial neural network system 400 may further include a normalization module (not shown), which may be coupled with the threshold voltage adjustment module 401 and the input layer 402. The normalization module normalizes the input data, and the normalized data are converted into electrical signals and transmitted to the neurons in the input layer 402 for calculation. According to an embodiment of the present disclosure, the normalization module can also be configured to process the input data to satisfy the following relationship:
Optionally, the artificial neural network system 400 may further include a de-normalization module (not shown), which may be coupled with the output layer 404 of the artificial neural network. The denormalization module may de-normalize the calculation results output by the output layer 404 to obtain the drain-source current data obtained through the statistical model of the present disclosure.
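The normalization/de-normalization pair can be sketched as below. The exact relationship required by the disclosure is not reproduced here; min-max scaling to [0, 1] is shown purely as an illustrative assumption of one common choice.

```python
# Assumed min-max normalization applied before the input layer; the
# disclosure's actual normalization relationship may differ.
def normalize(x, x_min, x_max):
    return (x - x_min) / (x_max - x_min)

# De-normalization applied to the output layer's result, inverting the
# scaling so currents are reported in physical units.
def denormalize(y, x_min, x_max):
    return y * (x_max - x_min) + x_min

# Example: scale a gate-source voltage into [0, 1] and recover it.
v = 0.6
v_norm = normalize(v, 0.0, 1.2)
v_back = denormalize(v_norm, 0.0, 1.2)
```

Whatever scaling is chosen, the de-normalization module must apply its exact inverse so that the round trip through the network preserves physical units.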
As shown in
At 511: Receiving a second data set.
Taking the three-terminal transistor GAA-FET as an example, the second data set may include gate-source voltage data and drain-source voltage data of multiple transistors of the same type. According to an embodiment of the present disclosure, the plurality of transistors of the same type include a baseline transistor and a plurality of variational transistors, wherein the baseline transistor is determined according to a median or average value of the drain-source current data of the multiple transistors of the same type under the same bias condition, and the rest are called variational transistors. According to an embodiment of the present disclosure, the second data set also includes multiple sets of drain-source current data of the baseline transistor and part of the variational transistors.
At 512: Establishing a statistical model based on the data in the second data set, and obtaining the distribution of weights of the final variational neurons and the distribution of threshold voltages.
According to an embodiment of the present disclosure, the distribution of weights of the final variational neurons and the distribution of threshold voltages can be obtained with the above-described method of establishing a statistical model of transistors.
At 513: Selecting a plurality of threshold voltages from the distribution of threshold voltages and calculating adjusted gate-source voltages.
According to an embodiment of the present disclosure, when the second data set includes data of s transistors, the statistical model may generate transistor current models corresponding to data of q transistors (q may be an integer greater than 1 and less than or equal to s). Based on the threshold voltage Vth(0) of the baseline transistor, q threshold voltages Vth(r) (r may be an integer greater than 1 and less than or equal to q) are randomly selected, according to the Gaussian distribution, from the distribution of threshold voltages in the model, and the adjusted gate-source voltages VGS_Adjust are calculated. Specifically, the adjusted gate-source voltages VGS_Adjust satisfy the following relationship:
According to an embodiment of the present disclosure, Vth(0) may be the corresponding threshold voltage when the drain-source current of a baseline transistor is 1 μA.
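Step 513 can be sketched as follows. The numeric values (Vth(0), the spread, q) and the specific adjustment rule shifting VGS by the sampled threshold-voltage deviation are illustrative assumptions; the disclosure's own VGS_Adjust relationship is not reproduced here.

```python
import random

random.seed(0)  # fixed seed so the sampling is reproducible

vth0 = 0.35        # baseline threshold voltage Vth(0); illustrative value
sigma_vth = 0.02   # assumed spread of the threshold-voltage distribution
q = 4              # number of transistor instances to generate

# Draw q threshold voltages Vth(r) from a Gaussian centred on Vth(0).
vth_samples = [random.gauss(vth0, sigma_vth) for _ in range(q)]

# Assumed adjustment rule: shift the applied VGS by each sampled
# threshold voltage's deviation from the baseline threshold voltage.
def vgs_adjust(vgs, vth_r):
    return vgs - (vth_r - vth0)

adjusted = [vgs_adjust(0.8, v) for v in vth_samples]
```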
At 514: Selecting a plurality of weights in the distribution of weights of the final variational neurons.
According to an embodiment of the present disclosure, in the distribution {circumflex over (σ)}Wm of weights of the final variational neurons, a plurality of weights (for example, q sets of weights corresponding to the q transistors) may be randomly selected according to the Gaussian distribution.
At 515: Generating drain-source currents for the plurality of transistors based on the adjusted gate-source voltages, selected weights and corresponding drain-source voltage data.
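Step 515 combines the quantities sampled in steps 513 and 514. The sketch below uses a toy surrogate in place of the trained network, and the Gaussian spreads and bias values are illustrative assumptions; it shows only the structure of generating one drain-source current per transistor instance.

```python
import random

random.seed(1)  # reproducible sampling

# Illustrative surrogate for the model output; the real mapping is the
# trained artificial neural network of the disclosure.
def ids_model(vgs_adjusted, vds, weight):
    return weight * max(vgs_adjusted - 0.3, 0.0) ** 2 * vds

q = 3
vds_data = [0.05, 0.6, 1.0]               # drain-source voltage data
# Per-instance sampled quantities (assumed Gaussian spreads):
weights = [random.gauss(1.0, 0.05) for _ in range(q)]   # step 514
vgs_adj = [0.8 - random.gauss(0.0, 0.02) for _ in range(q)]  # step 513

# Step 515: one drain-source current per generated transistor instance.
currents = [ids_model(vg, vd, w)
            for vg, vd, w in zip(vgs_adj, vds_data, weights)]
```

Repeating this loop with fresh samples yields a population of I-V characteristics whose spread reflects the modeled process fluctuations.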
The embodiment of the present disclosure also provides a simulation tool, which may include a transistor statistical model established based on artificial neural network system. The simulation tool can be embedded in simulation software such as SPICE to simulate a single transistor, multiple transistors of the same type, or an entire circuit module. During the simulation process, users may input gate-source voltage and the drain-source voltage data into the simulation tool to obtain simulation results such as drain-source current data of transistors of the same type.
The embodiment of the present disclosure also provides a computer-readable storage medium, for example, including a memory storing a computer program, which computer program can be executed to complete the steps of the method for establishing the transistor statistical model based on the artificial neural network system provided in any embodiment of the present disclosure. The computer storage medium can be such memory as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD ROM, etc.; it can also be various devices including one or any combination of the above-mentioned memories.
The embodiment of the present disclosure also provides a computer-readable storage medium, for example, including a memory storing a computer program, and the computer program can be executed to complete the steps of the method for applying the transistor statistical model of the artificial neural network system provided in any embodiment of the present disclosure. The computer storage medium can be such memory as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD ROM, etc.; it can also be various devices including one or any combination of the above-mentioned memories.
The solution of the present disclosure accelerates construction of transistor compact models while ensuring low complexity and high precision of the model, improves reliability of the model by capturing performance changes caused by process fluctuations, and significantly increases simulation speed.
The foregoing embodiments are only for the purpose of illustrating the present disclosure, rather than limiting the same. Those skilled in the art can also make various changes and modifications without departing from the scope of the present disclosure. Therefore, all equivalent technical solutions should also fall into the scope disclosed in the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
202310157186.0 | Feb 2023 | CN | national |