This application claims the benefit of priority of Singapore Patent Application No. 10201810572P, filed on 26 Nov. 2018, the content of which being hereby incorporated by reference in its entirety for all purposes.
The present invention generally relates to a method of generating training data for a machine learning model for predicting performance in electronic design, and a system thereof.
Machine learning methodology has made headways into analog and mixed signal circuit design in order to augment the design process and shorten time to market for producing good quality circuits. Generation of training data is typically done by perturbing circuit parameters (input design parameters or input vectors) in order to capture and generalize the design boundaries (a priori) and observing the corresponding response through electronic design automation (EDA) simulators (EDA tools) on various performance targets (output vectors). The goal of machine learning algorithms is to learn the non-linear relationships between inputs and outputs, as there are no linear or quadratic relationships (this may be referred to as non-parametric learning), and to accurately predict (infer) the response with respect to an unseen (untrained) input vector during execution. The inference, which is conditioned on the training data, is built on a statistical probability of estimating an output as regression or classification. The probability distribution P learned from the training data thus forms the underlying mechanism of any machine learning model, where the learning problem is to construct a function f based on input-output pairs (x, y) such that the sum of observed squared errors over the N training samples (x1, y1), . . . , (xN, yN) is minimized (e.g., see Equation (1) below):

Σi=1N (yi−f(xi))² Equation (1)
or, as a mean squared error representation (e.g., see Equation (2) below), the expectation with respect to the probability P, measured as the regression (mean value of the output y) over all functions of x:
E[(y−f(x))²|x] Equation (2)
Equation (2) thus also highlights the variance of y given x (e.g., see Equation (3) below), where f(x) may be represented as f(x; D) to denote the dependence of f(x) on the training data D.
(f(x; D)−E[y|x])² Equation (3)
Modeling through machine learning may generate many scenarios where f(x; D) is an accurate approximation with an optimal predictor of y. It may, however, also be the case that f(x; D) has quite a different dependency when using other training data sets and the results are far away from the regression estimator E[y|x]; the machine learning model may then be considered as biased on the training dataset. Further, as the dimensionality of the inputs increases (as is typical in circuit design), the problem of bias and variance becomes paramount and complex. Often, generating approximate machine learning models in these cases requires complex state-of-the-art neural networks, deep neural networks, and statistical estimation theories and methodologies, such as Bayesian models.
Furthermore, machine learning modelling may start with a static assumption that the training data completely generalizes the ground truth, and that a predictive statistical model can thus be derived from learning input-output pairs. It has been reported that usage of statistical uncertainty is centered around a known probability distribution curve during transistor characterization of process, voltage, temperature, parasitics, and power settings, which is a Gaussian distribution, and also that the characterization tool uses statistical uncertainty to determine whether further sampling is required and the areas (input perturbations) in which it might be required. The use of semi-automatically generated labels through statistical understanding of the ground truth has also been reported.
Accordingly, various conventional methods of generating training data for machine learning in electronic design have been found to introduce high dimensionality, bias and/or variance within the training data, which complicates development of machine learning models (e.g., issues of over- or under-fitting) and results in inefficiencies in the electronic design process, such as higher EDA simulation time and cost.
A need therefore exists to provide a method of generating training data for a machine learning model for predicting performance in electronic design, and a system thereof, that seek to overcome, or at least ameliorate, one or more of the deficiencies in existing methods/systems for generating training data for machine learning in electronic design, such as but not limited to, reducing dimensionality, bias and/or variance within the training data, resulting in improvements in the development of the machine learning model(s) trained based on the training data, such as improved efficiencies and/or effectiveness in the electronic design process. It is against this background that the present invention has been developed.
According to a first aspect of the present invention, there is provided a method of generating training data for a machine learning model for predicting performance in electronic design using at least one processor, the method comprising:
generating a first set of training data based on a first set of input design parameters and an electronic design automation tool;
generating a first covariance information associated with the first set of input design parameters based on the first set of training data;
determining a second set of input design parameters based on the first covariance information; and
generating a second set of training data based on the second set of input design parameters and the electronic design automation tool.
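By way of illustration only and without limitation, the four steps of the first aspect may be sketched in Python as follows. The eda_simulate function and the hidden sensitivity values are hypothetical stand-ins for an actual EDA tool and circuit; the specific threshold of 0.5 is one example of a predetermined threshold condition and not a requirement of the claimed method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params = 8
sensitivity = rng.standard_normal(n_params)  # hidden "circuit" sensitivities (hypothetical)

def eda_simulate(params, active):
    # Hypothetical stand-in for the EDA tool: maps the active input
    # design parameters to one output performance parameter.
    return params @ sensitivity[active]

# Step 1: generate a first set of training data from a first set of
# input design parameters.
active1 = np.arange(n_params)
x1 = rng.uniform(-1.0, 1.0, size=(200, n_params))
y1 = eda_simulate(x1, active1)

# Step 2: generate first covariance information (here, the Pearson
# correlation of each input design parameter with the output).
r1 = np.array([np.corrcoef(x1[:, j], y1)[0, 1] for j in range(n_params)])

# Step 3: determine a second set of input design parameters (those
# satisfying a predetermined threshold condition, e.g. |r| >= 0.5).
active2 = np.flatnonzero(np.abs(r1) >= 0.5)

# Step 4: generate a second set of training data from the reduced
# second set of input design parameters.
x2 = rng.uniform(-1.0, 1.0, size=(200, active2.size))
y2 = eda_simulate(x2, active2)
```

Because step 3 retains only a subset of the original parameters, the second batch of simulations perturbs a smaller design space than the first.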
According to a second aspect of the present invention, there is provided a system for generating training data for a machine learning model for predicting performance in electronic design, the system comprising:
a memory; and
at least one processor communicatively coupled to the memory and configured to:
generate a first set of training data based on a first set of input design parameters and an electronic design automation tool;
generate a first covariance information associated with the first set of input design parameters based on the first set of training data;
determine a second set of input design parameters based on the first covariance information; and
generate a second set of training data based on the second set of input design parameters and the electronic design automation tool.
According to a third aspect of the present invention, there is provided a computer program product, embodied in one or more non-transitory computer-readable storage mediums, comprising instructions executable by at least one processor to perform a method of generating training data for a machine learning model for predicting performance in electronic design, the method comprising:
generating a first set of training data based on a first set of input design parameters and an electronic design automation tool;
generating a first covariance information associated with the first set of input design parameters based on the first set of training data;
determining a second set of input design parameters based on the first covariance information; and
generating a second set of training data based on the second set of input design parameters and the electronic design automation tool.
Embodiments of the present invention will be better understood and readily apparent to one of ordinary skill in the art from the following written description, by way of example only, and in conjunction with the drawings, in which:
Various embodiments of the present invention provide a method of generating training data for a machine learning model for predicting performance in electronic design, and a system thereof.
As described in the background of the present application, various conventional methods of generating training data for machine learning in electronic design have been found to introduce high dimensionality, bias and/or variance within the training data, which complicates development of machine learning models (e.g., issues of over- or under-fitting) and results in inefficiencies in the electronic design process, such as higher electronic design automation (EDA) simulation time and cost. Accordingly, various embodiments of the present invention provide a method of generating training data for a machine learning model for predicting performance in electronic design, and a system thereof, that seek to overcome, or at least ameliorate, one or more of the deficiencies in existing methods/systems for generating training data for machine learning in electronic design, such as but not limited to, reducing dimensionality, bias and/or variance within the training data, resulting in improvements in the development of the machine learning model(s) trained based on the training data, such as improved efficiencies and/or effectiveness in the electronic design process.
EDA (which may also be referred to as electronic computer-aided design (ECAD)) is a category of software tools for designing and verifying/analyzing electronic systems, such as integrated circuits and printed circuit boards, and is known in the art. For example, an integrated circuit may have an extremely large number of components (e.g., millions of components or more), therefore, EDA tools are necessary for their design. Over time, EDA tools evolved into interactive programs that perform, for example, integrated circuit layout. For example, various companies created equivalent layout programs for printed circuit boards. These integrated circuit and circuit board layout programs may be front-end tools for schematic capture and simulation, which may be known as Computer-Aided Design (CAD) tools and may be classified as Computer-Aided Engineering (CAE). The term “automation” may refer to the ability for end-users to augment, customize, and drive the capabilities of electronic design and verification tools using a computer program (e.g., a scripting language) and associated support utilities. There are a wide variety of programming languages available, and the most commonly used by far are traditional C and its object-oriented offspring, C++. A gate-level netlist may refer to a circuit representation at the level of individual logic gates, registers, and other simple functions. The gate-level netlist may also specify the connections (wires) between the various gates and functions. A component-level netlist may refer to a circuit representation at the level of individual components. As EDA, as well as EDA tools, are well known in the art, they need not be described in detail herein for clarity and conciseness.
In various embodiments, performance in electronic design may refer to a performance of an electronic system configured based on a set of input design parameters determined in an electronic design of the electronic system. In various embodiments, an electronic system may include an integrated circuit (IC) and/or a printed circuit board (PCB). In various embodiments, the performance of the electronic system may be any measurable electrical property or output of the electronic system, which may be obtained or captured as performance data, such as in the form of a set of performance parameters (e.g., performance metrics). In this regard, it will be appreciated that the performance of the electronic system to be measured or considered may be determined or set as desired or as appropriate, and the present invention is not limited to any particular performance parameters or any particular set of performance parameters.
In various embodiments, the machine learning model may be based on any machine learning model known in the art that is capable of being trained based on training data to output a prediction (or predicted performance data) based on a set of input design parameters (e.g., each input design parameter having a particular or specific parameter value), such as but not limited to, logistic regression, support vector network (SVN), deep neural network (DNN), convolution neural network (CNN), recurrent neural network (RNN), Bayesian neural network or an ensemble of machine learning networks.
In various embodiments, the above-mentioned generating (at 102) a first set of training data comprises: perturbing the first set of input design parameters using the electronic design automation tool to obtain a first set of output performance parameters associated with the first set of input design parameters; and forming first labeled data based on the first set of input design parameters and the first set of output performance parameters.
In various embodiments, the first covariance information comprises a plurality of covariance parameters, each covariance parameter being associated with a respective data pair (e.g., unique data pair) of an input design parameter of the first set of input design parameters and an output performance parameter of the first set of output performance parameters.
In various embodiments, the above-mentioned each covariance parameter is based on a Pearson correlation coefficient associated with the respective data pair.
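By way of illustration only, a covariance parameter based on the Pearson correlation coefficient for one such data pair may be computed as follows (the sample values are hypothetical):

```python
import numpy as np

# Hypothetical samples of one input design parameter (x) and one output
# performance parameter (y) across a batch of EDA simulations.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# Pearson correlation coefficient for the (input, output) data pair.
r = np.corrcoef(x, y)[0, 1]
print(round(r, 3))  # ≈ 0.999: strongly positively correlated
```

A value close to +1 or −1 indicates that the output performance parameter is highly sensitive to that input design parameter.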
In various embodiments, the first covariance information is a first covariance matrix comprising the plurality of covariance parameters as elements therein.
In various embodiments, the above-mentioned determining (at 106) a second set of input design parameters comprises selecting (or identifying) each input design parameter of the first set of input design parameters having a parameter value that satisfies a first predetermined threshold condition. In various embodiments, the set of selected (or identified) input design parameters may form the second set of input design parameters. In various other embodiments, a random subset of input design parameters may be obtained from the selected (or identified) set of input design parameters to form the second set of input design parameters. In various embodiments, the first predetermined threshold condition may be an absolute parameter value equal to or greater than a predetermined or predefined value. Accordingly, in various embodiments, the second set of input design parameters may be a subset of the first set of input design parameters.
In various embodiments, the parameter value of the above-mentioned each input design parameter ranges from −1 to 1, and the first predetermined threshold condition is an absolute parameter value of about 0.5 or greater. In various embodiments, the first predetermined threshold condition may be an absolute parameter value of about 0.6 or greater, 0.7 or greater, 0.8 or greater, or 0.9 or greater.
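A minimal sketch of such a threshold-based selection (the coefficient values are hypothetical):

```python
import numpy as np

# Hypothetical covariance parameters (Pearson coefficients in [-1, 1])
# for five input design parameters against one output performance target.
r = np.array([0.82, -0.65, 0.12, -0.04, 0.51])

# First predetermined threshold condition: absolute parameter value >= 0.5.
selected = np.flatnonzero(np.abs(r) >= 0.5)
print(selected.tolist())  # [0, 1, 4]
```

The selected indices identify the input design parameters that form (or from which a random subset is drawn to form) the second set of input design parameters.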
In various embodiments, the above-mentioned generating (at 108) a second set of training data comprises: perturbing the second set of input design parameters using the electronic design automation tool to obtain a second set of output performance parameters associated with the second set of input design parameters; and forming second labeled data based on the second set of input design parameters and the second set of output performance parameters.
In various embodiments, the method 100 is configured to generate the training data iteratively in a plurality of iterations, comprising a first iteration and one or more subsequent iterations. The first iteration comprises: the above-mentioned generating (at 104) a first covariance information associated with the first set of input design parameters based on the first set of training data; the above-mentioned determining (at 106) a second set of input design parameters based on the first covariance information; and the above-mentioned generating (at 108) a second set of training data based on the second set of input design parameters using the electronic design automation tool. In each of the one or more subsequent iterations, the subsequent iteration comprises: generating a further covariance information associated with the set of input design parameters obtained in the immediately previous iteration based on at least the set of training data generated at the immediately previous iteration; determining a further set of input design parameters based on the further covariance information; and generating a further set of training data based on the further set of input design parameters and the electronic design automation tool.
In various embodiments, the method 100 continues from a current iteration to a subsequent iteration of the plurality of iterations until the further covariance information is determined to satisfy a predetermined consistency condition. In various embodiments, the predetermined consistency condition may be that the covariance information generated at a predetermined number of consecutive iterations is determined to be within a predetermined variation or deviation. By way of example only and without limitation, the predetermined number of consecutive iterations may be three, four, five, or more. Also by way of example only and without limitation, the predetermined variation may be within about 5%, within about 3%, within about 2%, within about 1% or less.
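By way of illustration only, such a consistency check may be sketched as follows. The function name, the scalar summary of the covariance information, and the relative-variation comparison rule are assumptions for this sketch, not limitations of the embodiments.

```python
def consistent(history, n_consecutive=3, tolerance=0.05):
    # Predetermined consistency condition (illustrative sketch): the
    # scalar values summarizing the covariance information generated at
    # the last n_consecutive iterations vary by no more than `tolerance`
    # (e.g., 5%) relative to the largest of those values.
    if len(history) < n_consecutive:
        return False
    recent = history[-n_consecutive:]
    lo, hi = min(recent), max(recent)
    return (hi - lo) <= tolerance * max(abs(hi), abs(lo), 1e-12)

print(consistent([0.80, 0.81, 0.805]))  # True: within 5% variation
print(consistent([0.50, 0.80, 0.81]))   # False: variation too large
```

When the check returns True, the iterative training data generation may stop; otherwise a further iteration is performed.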
It will be appreciated by a person skilled in the art that the at least one processor 204 may be configured to perform the required functions or operations through set(s) of instructions (e.g., software modules) executable by the at least one processor 204 to perform the required functions or operations. Accordingly, as shown in
It will be appreciated by a person skilled in the art that the above-mentioned modules are not necessarily separate modules, and one or more modules may be realized by or implemented as one functional module (e.g., a circuit or a software program) as desired or as appropriate without deviating from the scope of the present invention. For example, two or more of the training data generator 206, the covariance information generator 208 and the input design parameter determinator 210 may be realized (e.g., compiled together) as one executable software program (e.g., software application or simply referred to as an “app”), which for example may be stored in the memory 202 and executable by the at least one processor 204 to perform the functions/operations as described herein according to various embodiments.
In various embodiments, the system 200 corresponds to the method 100 as described hereinbefore with reference to
For example, in various embodiments, the memory 202 may have stored therein the training data generator 206, the covariance information generator 208 and/or the input design parameter determinator 210, which respectively correspond to various steps of the method 100 as described hereinbefore according to various embodiments, which are executable by the at least one processor 204 to perform the corresponding functions/operations as described herein.
A computing system, a controller, a microcontroller or any other system providing a processing capability may be provided according to various embodiments in the present disclosure. Such a system may be taken to include one or more processors and one or more computer-readable storage mediums. For example, the system 200 described hereinbefore may include a processor (or controller) 204 and a computer-readable storage medium (or memory) 202 which are for example used in various processing carried out therein as described herein. A memory or computer-readable storage medium used in various embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory) or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM), or a flash memory, e.g., a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory).
In various embodiments, a “circuit” may be understood as any kind of a logic implementing entity, which may be special purpose circuitry or a processor executing software stored in a memory, firmware, or any combination thereof. Thus, in an embodiment, a “circuit” may be a hard-wired logic circuit or a programmable logic circuit such as a programmable processor, e.g., a microprocessor (e.g., a Complex Instruction Set Computer (CISC) processor or a Reduced Instruction Set Computer (RISC) processor). A “circuit” may also be a processor executing software, e.g., any kind of computer program, e.g., a computer program using a virtual machine code, e.g., Java. Any other kind of implementation of the respective functions which will be described in more detail below may also be understood as a “circuit” in accordance with various alternative embodiments. Similarly, a “module” may be a portion of a system according to various embodiments in the present invention and may encompass a “circuit” as above, or may be understood to be any kind of a logic-implementing entity therefrom.
Some portions of the present disclosure are explicitly or implicitly presented in terms of algorithms and functional or symbolic representations of operations on data within a computer memory. These algorithmic descriptions and functional or symbolic representations are the means used by those skilled in the data processing arts to convey most effectively the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities, such as electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated.
Unless specifically stated otherwise, and as apparent from the following, it will be appreciated that throughout the present specification, discussions utilizing terms such as “generating”, “determining”, “perturbing”, “forming” or the like, refer to the actions and processes of a computer system, or similar electronic device, that manipulates and transforms data represented as physical quantities within the computer system into other data similarly represented as physical quantities within the computer system or other information storage, transmission or display devices.
The present specification also discloses a system (e.g., which may also be embodied as a device or an apparatus), such as the system 200, for performing the operations/functions of the methods described herein. Such a system may be specially constructed for the required purposes, or may comprise a general purpose computer or other device selectively activated or reconfigured by a computer program stored in the computer. The algorithms presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose machines may be used with computer programs in accordance with the teachings herein. Alternatively, the construction of more specialized apparatus to perform the required method steps may be appropriate.
In addition, the present specification also at least implicitly discloses a computer program or software/functional module, in that it would be apparent to the person skilled in the art that the individual steps of the methods described herein may be put into effect by computer code. The computer program is not intended to be limited to any particular programming language and implementation thereof. It will be appreciated that a variety of programming languages and coding thereof may be used to implement the teachings of the disclosure contained herein. Moreover, the computer program is not intended to be limited to any particular control flow. There are many other variants of the computer program, which can use different control flows without departing from the spirit or scope of the invention. It will be appreciated by a person skilled in the art that various modules described herein (e.g., the training data generator 206, the covariance information generator 208 and/or the input design parameter determinator 210) may be software module(s) realized by computer program(s) or set(s) of instructions executable by a computer processor to perform the required functions, or may be hardware module(s) being functional hardware unit(s) designed to perform the required functions. It will also be appreciated that a combination of hardware and software modules may be implemented.
Furthermore, one or more of the steps of a computer program/module or method described herein may be performed in parallel rather than sequentially. Such a computer program may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a general purpose computer. The computer program when loaded and executed on such a general-purpose computer effectively results in an apparatus that implements the steps of the methods described herein.
In various embodiments, there is provided a computer program product, embodied in one or more computer-readable storage mediums (non-transitory computer-readable storage medium), comprising instructions (e.g., the training data generator 206, the covariance information generator 208 and/or the input design parameter determinator 210) executable by one or more computer processors to perform a method 100 of generating training data for a machine learning model for predicting performance in electronic design, as described hereinbefore with reference to
The software or functional modules described herein may also be implemented as hardware modules. More particularly, in the hardware sense, a module is a functional hardware unit designed for use with other components or modules. For example, a module may be implemented using discrete electronic components, or it can form a portion of an entire electronic circuit such as an Application Specific Integrated Circuit (ASIC). Numerous other possibilities exist. Those skilled in the art will appreciate that the software or functional module(s) described herein can also be implemented as a combination of hardware and software modules.
In various embodiments, the system 200 may be realized by any computer system (e.g., desktop or portable computer system) including at least one processor and a memory, such as a computer system 300 as schematically shown in
It will be appreciated by a person skilled in the art that the terminology used herein is for the purpose of describing various embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In order that the present invention may be readily understood and put into practical effect, various example embodiments of the present invention will be described hereinafter by way of examples only and not limitations. It will be appreciated by a person skilled in the art that the present invention may, however, be embodied in various different forms or configurations and should not be construed as limited to the example embodiments set forth hereinafter. Rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present invention to those skilled in the art.
Various example embodiments provide sampling techniques by using covariance towards machine learning modelling applications of electronic circuits and systems.
Machine learning modelling of analog and mixed signal circuits relies on a large pool of training data in order to approximate the circuit behaviour accurately. Generation of such training data requires extensive use of EDA tools (simulators) to simulate perturbations of many circuit design parameters (input design parameters). However, this may introduce high dimensionality, bias and variance within the training data, which may further complicate development of machine learning models. In various example embodiments, a method of generating training data for a machine learning model for predicting performance in electronic design is provided which uses a batch sampling technique to first identify the statistical variance of the input design parameters with respect to the output performance targets, and then uses this statistical variance information (e.g., covariance information) automatically to generate meaningful perturbations with minimal intervention from the circuit designer. The method has been found to drastically reduce the dimension space to be modeled and substantially reduce the complexity of the machine learning model to fit, mitigating issues of, for example, over- and under-fitting. In various example embodiments, using covariance information automatically to generate meaningful perturbations may refer to the use of covariance information to identify the input design parameters to which the electronic design is sufficiently or most sensitive, and then drawing (or obtaining) a random sample (a random subset of identified input design parameters) from the set of identified input design parameters and executing the random sample on the EDA simulator in an iterative progressive training loop. As will be described later below, through example experiments conducted, the method has been found to reduce training data size by 40% to 60% with respect to a brute force method.
For better understanding, covariance information will now be described in further detail below, by way of an example only and without limitation, according to various example embodiments of the present invention.
Suppose the input and output of the machine learning model are identified as X and Y, respectively. The covariance r (e.g., Pearson correlation coefficient (PCC)) for a sampled input-output pair (X, Y) with n data sample pairs {(x1, y1) . . . (xn, yn)} may be represented as:

rxy = Σi=1n (xi−x̄)(yi−ȳ) / √(Σi=1n (xi−x̄)² · Σi=1n (yi−ȳ)²)

where x̄ = (1/n) Σi=1n xi is the mean of the individual sample set (analogously for ȳ), defined for each input feature and output target. For example, it can be observed that (xi−x̄) measures the deviation of each input sample from its mean.
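By way of illustration only, the coefficient may be computed directly from the definition above (the function name is an assumption for this sketch):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient r computed directly from the
    # definition: sum of (x_i - x_bar)(y_i - y_bar), divided by the
    # square root of the product of the sums of squared deviations.
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - x_bar) ** 2 for x in xs)
                    * sum((y - y_bar) ** 2 for y in ys))
    return num / den

# Perfectly linearly related samples give r = 1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```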
According to various example embodiments, when generating training data for a machine learning model, the value of the covariance rxy (value of the covariance parameter) guides the sampling of input-output pairs. In various example embodiments, if the value of the covariance parameter associated with a sampled input-output pair is large, it may be determined that more data points are needed around that sampled input-output pair to capture the behaviour, such as illustrated in
Accordingly, the method 600 may use an initial set (e.g., batch) of training data obtained by randomly perturbing tuning knobs (input design parameters) to get insight into possible bias and variance behavior of the underlying mixed signal circuit. For example, an initial set (e.g., batch) of training data may be obtained by perturbing an initial set of input design parameters determined or selected by circuit designers based on circuit design knowledge. For example, the input parameters may be perturbed in strict ranges to first understand the circuit design topology and the underlying CMOS technology. This initial generalization highlights which tuning knobs (input design parameters) may be suitable or necessary to be selected and by how much the tuning knobs need to be perturbed (bounded by technology parameters). For example, electronic circuit designs are based on theoretical design knowledge and may be realized using active and passive devices, such as transistors, resistors, capacitors, and so on, together with biasing voltage and current. For example, in the case of a two-stage operational amplifier, the first stage CMOS transistor widths are theoretically known to contribute to one of the output performance targets, namely bandwidth. As a result, such an input design parameter may be tuned and perturbed to understand (obtain information on) the circuit bandwidth response. Furthermore, theoretically, there are ratios of CMOS transistor width/length which need to be followed while implementing different stages of an analog circuit; thus, to meet output specifications, a step size may be determined to run design of experiments to understand the circuit's response. The method 600 uses covariance information, such as in the form of covariance matrix 610 as shown in
In various example embodiments, a covariance matrix may be generated for each output performance parameter in a set of output performance parameters (e.g., a covariance matrix generated with respect to multiple inputs and one output, such as shown in
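A minimal sketch of the multiple-inputs, one-output covariance view described above is given below; the data here are synthetic and purely illustrative.

```python
import numpy as np

def per_output_covariance(X, Y):
    """For each output column in Y, compute the correlation of every input
    column in X against that one output (a multiple-inputs x one-output view)."""
    matrices = {}
    for j in range(Y.shape[1]):
        matrices[j] = np.array(
            [np.corrcoef(X[:, i], Y[:, j])[0, 1] for i in range(X.shape[1])]
        )
    return matrices

rng = np.random.default_rng(1)
X = rng.uniform(size=(64, 3))                                    # 3 input design parameters
Y = np.column_stack([
    2.0 * X[:, 0] + 0.01 * rng.normal(size=64),                  # output 0 tracks input 0
    rng.uniform(size=64),                                        # output 1 is pure noise
])
cov = per_output_covariance(X, Y)
print(int(np.argmax(np.abs(cov[0]))))   # input 0 dominates output 0
```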
In various example embodiments, initial training data (e.g., obtained through random perturbations by a circuit designer using theoretical design knowledge) may represent all circuit devices (e.g., active devices, such as transistors, and passive devices, such as resistors and capacitors) and all bias conditions, such as voltage and supply current. For each device, there may be multiple parameters (knobs) to tune (e.g., transistor and passive widths and lengths). A subset of input design parameters may be formulated based on the covariance matrix, where higher positive or negative scores (values) may identify suitable or essential devices and their parameter correlation to output targets. In this regard, highly positively or negatively correlated parameters (e.g., a score of about 0.5 or above, or about −0.5 or below, that is, an absolute value of about 0.5 or greater) correspond to the case of high variance. Furthermore, near-zero positively or negatively correlated parameters imply the case of high bias. As every iteration attempts to reduce the number of input design parameters in the set of input design parameters, this sampling technique according to various example embodiments reduces the sheer number of permutations and combinations that need to be exercised to generate high quality training data. In various example embodiments, high quality training data may refer to a sampled data set which captures the circuit response towards high bias and high variance.
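The threshold-based subset selection described above may be sketched as follows; the parameter names and scores are hypothetical examples of covariance-matrix entries.

```python
def select_knobs(scores, names, threshold=0.5):
    """Keep parameters whose correlation magnitude to the output target is
    about 0.5 or greater (high-variance candidates); near-zero scores flag
    the high-bias case and are dropped from the next iteration."""
    return [n for n, s in zip(names, scores) if abs(s) >= threshold]

names = ["M1_width", "M2_width", "R_load", "C_comp"]
scores = [0.82, -0.61, 0.07, -0.12]       # illustrative covariance scores
print(select_knobs(scores, names))        # ['M1_width', 'M2_width']
```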
In various example embodiments, training data generation may be guided by iterative and progressive, bias- and variance-guided feedback to control input perturbations. In various example embodiments, the training data generation may be performed in an iterative loop whereby the stopping criterion is less than 5% variance observed between three successive iterations. In various example embodiments, the feedback may be a software routine (algorithm) which uses technology constraints and/or constraints set by the circuit designer to limit design space combinations. By way of an example only and without limitation, an example pseudo code for a method of generating training data for a machine learning model for predicting performance in electronic design, shown in
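The stopping criterion described above (less than 5% variance observed between three successive iterations) may be sketched as follows; the decaying-variance loop is a synthetic stand-in for the actual iterative training data generation.

```python
def converged(var_history, tol=0.05, window=3):
    """Stopping-criterion sketch (hypothetical): stop when the observed
    variance changes by less than 5% across three successive iterations."""
    if len(var_history) < window:
        return False
    recent = var_history[-window:]
    return (max(recent) - min(recent)) / max(recent) < tol

history = []
for it in range(50):
    variance = 1.0 + 0.5 ** it     # stand-in for the variance observed each iteration
    history.append(variance)
    if converged(history):
        break
print(it)                          # loop stops well before the iteration cap
```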
As the covariance matrix database builds progressively, it can be observed whether the training data is becoming biased by the training set or achieving good variance (e.g., see
Conventional methods of training data generation for modeling electronic designs use the designer's knowledge to intelligently produce sampled and labeled data as ground truth, to be modeled by state-of-the-art machine/deep learning algorithms. In contrast, the method of training data generation according to various example embodiments drastically changes this conventional methodology and relies completely on automatic statistical methods to first identify sensitive input devices, and further augments the training phase with permutations (such as shown in
For illustration purposes, experiments with operational amplifiers and DC-DC converters were conducted according to various example embodiments of the present invention. The experiments demonstrated reductions of 46% and 68%, respectively, in the number of input devices to perturb to produce a good quality training dataset. This simplifies the machine learning models and produces high-accuracy, probability-based models, as shown in
Accordingly, the method of generating training data for a machine learning model for predicting performance in electronic design according to various example embodiments of the present invention represents a significant step in the adoption of machine learning in the electronic design space, where EDA tool costs and simulation run times are huge. Reducing the training dataset and automatically identifying device perturbations reduce human intervention and the use of prior knowledge in crafting ground truth before modeling can be performed. For example, the corresponding system can run in a batch mode, in cloud server farms, on a library of mixed signal circuits to generate various forms of training data based on specific EDA simulators or optimizers. Further, the method can be cross-adopted to any system which needs to be modeled by machine learning and otherwise relies on human knowledge to make meaningful training data. State-of-the-art machine learning techniques, such as reinforcement learning or active learning, can also be included to guide the proposed algorithm, orchestrating input sample generation based on reward systems, as each EDA simulation is time and cost intensive.
Accordingly, the method of generating training data for a machine learning model for predicting performance in electronic design according to various example embodiments of the present invention has the following advantages:
The method of generating training data for a machine learning model for predicting performance in electronic design according to various example embodiments of the present invention has also been found to have increased performance, including:
The method of generating training data for a machine learning model for predicting performance in electronic design according to various example embodiments of the present invention is also applicable to multiple domains, and is not limited to electronic circuit design. In particular, the covariance information can be used in modeling any system which relies on time series solvers to generate training and testing datasets.
Accordingly, the method of generating training data for a machine learning model for predicting performance in electronic design according to various example embodiments of the present invention advantageously utilizes statistical data, e.g., the covariance, from a batch of initial training samples to orchestrate sampling for generating training data for electronic circuits and systems. Progressive, automatic, high quality training data, obtained by generating new features or by generating more test data for a fixed set of features, is formulated by utilizing the covariance, and can then be modelled by machine learning easily and accurately.
While embodiments of the invention have been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.
Number | Date | Country | Kind |
---|---|---|---|
10201810572P | Nov 2018 | SG | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/SG2019/050579 | 11/26/2019 | WO | 00 |