This U.S. patent application claims priority under 35 U.S.C. § 119 to: Indian Patent Application No. 202221058588, filed on 13 Oct. 2022. The entire contents of the aforementioned application are incorporated herein by reference.
The embodiments herein generally relate to the field of numerical simulation and, more particularly, to a method and system for prediction of fastest solver combination for solution of matrix equations for numerical simulations.
Computational modelling and simulation of many industrial problems results in a set of simultaneous linear equations or matrix equations. For practical problems, these linear equations have a large number of terms. In order to solve these linear equations, direct and iterative solvers are used.
Each of these solvers can solve a specific class of matrix system efficiently. The class of a matrix system can be derived from the structure and properties of its coefficient matrices. Thus, the solver selection process has a direct dependency on the matrix system. Calculation of matrix properties is a computationally expensive and time-consuming task.
An efficient solution of a matrix system requires selection of a suitable pre-conditioner, smoother, and solver combination, along with tuning parameters, since this results in a fast solution for the system. Selection of solvers other than the suitable combination leads to increased solution time for the same accuracy and is hence inefficient. Manual choice of an optimal combination is difficult because the optimal combination for a given matrix system may not be optimal for the same problem with a slight difference in setup, as the properties of the matrix formed and their implication on the choice of the solver combination are not readily available. As a result, the optimal combination needs to be found for each simulation problem to complete the simulation in the least amount of time. This in turn results in completion of the simulation with the least resources and cost.
Machine Learning (ML) approaches have enabled automating the process of selection of solver combination. However, the ML based approaches in the literature require calculations of the properties of the coefficient matrix as an intermediate step for the solver selection. Thus, existing approaches are inefficient as calculation of matrix properties is a computationally expensive and time-consuming task and is a technical limitation of the works in the art.
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
For example, in one embodiment, a method for prediction of fastest solver combination for solution of matrix equations for numerical simulations is provided.
The method comprises a training phase to train a plurality of Machine Learning (ML) models and identify a best model for prediction during an inferencing mode. In addition to the inferencing mode, the method also provides a self-learning mode that runs in the background and continuously identifies a revised best model based on new inputs received during the inferencing mode. Thus, whenever the revised model is generated, the best ML model is updated with the revised best model and used further for inferencing.
The training phase includes receiving a plurality of input parameters comprising a plurality of parameters associated with numerically discretized geometry that defines a fluid dynamics domain and a plurality of Computational Fluid Dynamics (CFD) model parameters of a CFD model. The plurality of CFD model parameters comprise a plurality of governing equations, one or more initial and boundary conditions, a plurality of numerical schemes designed for discretizing each of a plurality of terms of the plurality of governing equations, and a plurality of solution algorithms and simulation control parameters. The training is performed using a supervised learning technique for predicting a fastest solver combination having a minimum simulation time for the CFD model, a solver combination comprising a solver and at least one of a preconditioner and a smoother. The plurality of ML models are trained using ML model input data generated by:
From the plurality of trained ML models, a best Machine Learning (ML) model is selected by applying a performance specific threshold to the plurality of trained ML models. Further, a plurality of ML model parameters for prediction accuracy of the best ML model are optimized to generate an optimized best ML model to be used during an inferencing mode.
During the inferencing mode, the one or more hardware processors are configured to predict, via the optimized best ML model, the fastest solver combination for each of a plurality of new input parameters associated with each of a plurality of CFD models of interest. The predicted fastest solver combination is used to solve the plurality of matrix equations generated from the plurality of governing equations of the CFD model of interest for performing CFD simulation.
Further, in a self-learning mode, each new set of the plurality of input parameters received during the inferencing mode is continuously shared with the self-learning mode to determine a revised best ML model with an optimized plurality of ML model parameters, wherein the best ML model is updated with the revised best ML model whenever the revised ML model is determined.
In another aspect, a system for prediction of fastest solver combination for solution of matrix equations for numerical simulations is provided. The system comprises a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to:
The system comprises a training phase to train a plurality of Machine Learning (ML) models and identify a best model for prediction during an inferencing mode. In addition to the inferencing mode, the system also provides a self-learning mode that runs in the background and continuously identifies a revised best model based on new inputs received during the inferencing mode. Thus, whenever the revised model is generated, the best ML model is updated with the revised best model and used further for inferencing.
The training phase includes receiving a plurality of input parameters comprising a plurality of parameters associated with numerically discretized geometry that defines a fluid dynamics domain and a plurality of Computational Fluid Dynamics (CFD) model parameters of a CFD model. The plurality of CFD model parameters comprise a plurality of governing equations, one or more initial and boundary conditions, a plurality of numerical schemes designed for discretizing each of a plurality of terms of the plurality of governing equations, and a plurality of solution algorithms and simulation control parameters. The training is performed using a supervised learning technique for predicting a fastest solver combination having a minimum simulation time for the CFD model, a solver combination comprising a solver and at least one of a preconditioner and a smoother. The plurality of ML models are trained using ML model input data generated by:
From the plurality of trained ML models, a best Machine Learning (ML) model is selected by applying a performance specific threshold to the plurality of trained ML models. Further, a plurality of ML model parameters for prediction accuracy of the best ML model are optimized to generate an optimized best ML model to be used during an inferencing mode.
During the inferencing mode, the one or more hardware processors are configured to predict, via the optimized best ML model, the fastest solver combination for each of a plurality of new input parameters associated with each of a plurality of CFD models of interest. The predicted fastest solver combination is used to solve the plurality of matrix equations generated from the plurality of governing equations of the CFD model of interest for performing CFD simulation.
Further, in a self-learning mode, each new set of the plurality of input parameters received during the inferencing mode is continuously shared with the self-learning mode to determine a revised best ML model with an optimized plurality of ML model parameters, wherein the best ML model is updated with the revised best ML model whenever the revised ML model is determined.
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions, which when executed by one or more hardware processors cause a method for prediction of fastest solver combination for solution of matrix equations for numerical simulations to be performed.
The method comprises a training phase to train a plurality of Machine Learning (ML) models and identify a best model for prediction during an inferencing mode. In addition to the inferencing mode, the method also provides a self-learning mode that runs in the background and continuously identifies a revised best model based on new inputs received during the inferencing mode. Thus, whenever the revised model is generated, the best ML model is updated with the revised best model and used further for inferencing.
The training phase includes receiving a plurality of input parameters comprising a plurality of parameters associated with numerically discretized geometry that defines a fluid dynamics domain and a plurality of Computational Fluid Dynamics (CFD) model parameters of a CFD model. The plurality of CFD model parameters comprise a plurality of governing equations, one or more initial and boundary conditions, a plurality of numerical schemes designed for discretizing each of a plurality of terms of the plurality of governing equations, and a plurality of solution algorithms and simulation control parameters. The training is performed using a supervised learning technique for predicting a fastest solver combination having a minimum simulation time for the CFD model, a solver combination comprising a solver and at least one of a preconditioner and a smoother. The plurality of ML models are trained using ML model input data generated by:
From the plurality of trained ML models, a best Machine Learning (ML) model is selected by applying a performance specific threshold to the plurality of trained ML models. Further, a plurality of ML model parameters for prediction accuracy of the best ML model are optimized to generate an optimized best ML model to be used during an inferencing mode.
During the inferencing mode, the one or more hardware processors are configured to predict, via the optimized best ML model, the fastest solver combination for each of a plurality of new input parameters associated with each of a plurality of CFD models of interest. The predicted fastest solver combination is used to solve the plurality of matrix equations generated from the plurality of governing equations of the CFD model of interest for performing CFD simulation.
Further, in a self-learning mode, each new set of the plurality of input parameters received during the inferencing mode is continuously shared with the self-learning mode to determine a revised best ML model with an optimized plurality of ML model parameters, wherein the best ML model is updated with the revised best ML model whenever the revised ML model is determined.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems and devices embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
Machine Learning (ML) approaches in the literature for determining an optimal solver-preconditioner-smoother combination for solving matrix equations in computer modelling or simulation of any system are directly dependent on matrix property calculation as an intermediate step. As understood, existing approaches are inefficient, as calculation of matrix properties is a computationally expensive and time-consuming task, which is a technical limitation of the works in the art.
Further, the solver combination is highly dependent on the type of problem at hand, and thus even for problems within the same domain the selection of the solver combination varies from problem to problem. For example, in the Computational Fluid Dynamics (CFD) domain, a solver combination for lid driven cavity flow may not be the right choice for a flow over an Ahmed body problem. Thus, correct selection of a solver combination is critical to generate simulation results in the least possible time, while an incorrect choice of pre-conditioner and solver degrades both accuracy and speed. These matrix systems, or matrix equations, can be solved using two types of solvers, namely direct solvers and iterative solvers. Direct solvers have cubic computational complexity (of order three) in terms of mesh size, but they provide an exact solution, while iterative solvers have low computational complexity (of order one) in terms of mesh size, but they provide an approximate solution and need initial values for the field variables. In practice, iterative solvers are used predominantly in CFD analysis due to their computational efficiency.
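The contrast between the two solver types can be sketched with an illustrative, non-limiting example (not part of the disclosed embodiments; SciPy is assumed as a stand-in linear-algebra library): a small 1-D Poisson-type matrix system is solved once with a direct sparse solver and once with the iterative conjugate gradient method.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve, cg

# Illustrative 1-D Poisson-type system: tridiagonal, symmetric positive definite.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spsolve(A, b)   # direct solver: exact up to round-off
x_iter, info = cg(A, b)    # iterative solver: approximate, tolerance-driven,
                           # starts from a default zero initial guess

# Relative residual of the iterative solution; small when cg has converged.
residual = np.linalg.norm(A @ x_iter - b) / np.linalg.norm(b)
```

The direct solve reproduces the right-hand side essentially exactly, whereas the iterative solve stops once the residual falls below the solver tolerance, which is why iterative solvers need convergence thresholds and benefit from good initial values.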
Further, in the case of the Computational Fluid Dynamics (CFD) domain, the system of matrix equations is generated from CFD model input parameters provided by the modeler. Further, the relation of a part of the input parameters with the matrix equations can be derived from theory. Hence, theoretical or domain knowledge can be used directly to predict the solver combination without extracting the matrix system.
Existing commercial CFD simulation tools such as Ansys Fluent™, Ansys CFX™, and Star-CCM+™ carry out the selection of solvers automatically, but the strategy of solver selection is proprietary. Also, this automatic selection of a solver may not be accurate, yet the CFD user has no choice in selecting the combination since it is hidden from the user. With open-source CFD tools such as SU2™ and OpenFOAM™, the selection of iterative solvers needs to be done manually, by the user, as part of model creation. Thus, if these open-source tools are assisted with an accurate automated solver selection approach, they can provide a time-efficient and accurate CFD simulation system.
However, a practical challenge in using the ML based solver selection approaches available in the literature in conjunction with existing CFD simulation tools is that a) the approaches are dependent on matrix properties, b) many CFD tools do not provide the CFD user access to the matrix system, so the literature methods cannot be applied for solver selection, and c) ML techniques that calculate coefficient matrix properties are computationally expensive and time-consuming, which is a technical limitation of the works in the art.
Embodiments of the present disclosure provide a method and system for CFD model parameter-based ML prediction of the fastest solver combination for solution of matrix equations during CFD simulations. The system trains a Machine Learning (ML) model using CFD model parameters such as physics, numerical schemes, etc., wherein a set of relevant input parameters is selected based on domain knowledge of a CFD problem of interest. The ML model is a multi-class classification model for the prediction of the solver combination, taking the CFD model parameters as an input, and eliminates the dependency of training the ML model for solver combination prediction on matrix properties, without compromising accuracy of prediction. Once the fastest solver combination is obtained, the solver can be utilized to simulate a CFD model of interest with minimal simulation time using, for example, the open-source CFD toolbox OpenFOAM™, which provides a high-speed simulation advantage.
Thus, the method and system disclosed provide an efficient approach for solver combination selection using CFD model parameters, which is computationally inexpensive and time-efficient and addresses the technical limitation of the works in the art. Further, selecting the relevant features from among the CFD model parameters using domain knowledge enables selecting the solver combination specific to the CFD problem; hence, accuracy of CFD simulation is improved. Furthermore, the method provides a general solution that learns from new problems and updates itself, and thus is a self-learning system.
Referring now to the drawings, and more particularly to
In an embodiment, the system 100 includes a processor(s) 104, communication interface device(s), alternatively referred as input/output (I/O) interface(s) 106, and one or more data storage devices or a memory 102 operatively coupled to the processor(s) 104. The system 100 with one or more hardware processors is configured to execute functions of one or more functional blocks of the system 100.
Referring to the components of system 100, in an embodiment, the processor(s) 104, can be one or more hardware processors 104. In an embodiment, the one or more hardware processors 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In an embodiment, the system 100 can be implemented in a variety of computing systems including laptop computers, notebooks, hand-held devices such as mobile phones, workstations, mainframe computers, servers, and the like.
The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular and the like. In an embodiment, the I/O interface (s) 106 can include one or more ports for connecting to a number of external devices or to another server or devices.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
In an embodiment, the memory 102 includes a plurality of modules 110 for performing steps of pre-processing 204, solver selection at step 206, mapping at step 208, and self-learning 212 for training and prediction of the fastest solver combination for a CFD model for simulation. Further, the plurality of modules 110 include programs or coded instructions that supplement applications or functions performed by the system 100 for executing different steps involved in the process of predicting the fastest solver combination for a CFD model, being performed by the system 100. The plurality of modules 110, amongst other things, can include routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The plurality of modules 110 may also be used as signal processor(s), node machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 110 can be implemented in hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof. The plurality of modules 110 can include various sub-modules (not shown).
Further, the memory 102 may comprise information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure. Further, the memory 102 includes a database 108. The database (or repository) 108 may include a plurality of abstracted piece of code for refinement and data that is processed, received, or generated as a result of the execution of the plurality of modules in the module(s) 110. As depicted in
Although the database 108 is shown internal to the system 100, it will be noted that, in alternate embodiments, the database 108 can also be implemented external to the system 100, and communicatively coupled to the system 100. The data contained within such an external database may be periodically updated. For example, new data may be added into the database (not shown in
The system 100 is trained and then used for inferencing during the inferencing mode. Each of the training phase and the inferencing phase is explained below in detail.
TRAINING: As depicted in
Computational aspect: Here, parameters which control the computational aspects are specified. These can include the type of parallelization technique, such as domain decomposition, the number of domains, etc.
Thus, once the training phase is completed, the updated ML model (best ML model) is obtained, which is then used during inferencing (inferencing mode).
INFERENCE MODE for predicting the fastest solver combination in accordance with received input parameters: In the inference or prediction mode, the input parameters (202) described in
In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the processor(s) 104 and is configured to store instructions for execution of steps of the method 800 by the processor(s) or one or more hardware processors 104. The steps of the method 800 of the present disclosure will now be explained with reference to the components or blocks of the system 100 as depicted in
Referring to the steps of the method 800, at step 802 of the method 800, the one or more hardware processors 104 receive the plurality of input parameters (as depicted in step 202) comprising the plurality of parameters associated with numerically discretized geometry that defines the fluid dynamics domain and a plurality of Computational Fluid Dynamics (CFD) model parameters of a CFD model. As depicted in
At step 804 of the method 800, the one or more hardware processors 104 train a plurality of Machine Learning (ML) models using a supervised learning technique for predicting a fastest solver combination having a minimum simulation time for the CFD model. A solver combination comprises a solver and at least one of a preconditioner and a smoother. Each of the plurality of ML models is trained using a plurality of ML model input data generated by:
The steps below indicate the training process of the plurality of ML models:
ML models used for the model building can include Logistic Regression, Naive Bayes, K Neighbors Classifier, SVM—Linear Kernel, Decision Tree Classifier, Random Forest Classifier, Gradient Boosting Classifier, Light Gradient Boosting Machine, Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis.
At step 806, the one or more hardware processors select the best ML model from among the plurality of trained ML models by applying a performance specific threshold to the plurality of trained ML models. The performance specific threshold refers to comparing the performance of all the trained models based on average accuracy, recall, and F1 score to select the best model, as explained in conjunction with the model selection block of
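The selection step above can be sketched as an illustrative, non-limiting example using scikit-learn (assumed here as a stand-in ML library; the synthetic dataset and the three candidate models are hypothetical placeholders for the solver-combination training data and the full model list): each candidate is trained, scored on held-out data, and the best scorer is retained.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import f1_score

# Hypothetical stand-in for the labelled solver-combination data (4 classes).
X, y = make_classification(n_samples=500, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "lda": LinearDiscriminantAnalysis(),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    scores[name] = f1_score(y_te, model.predict(X_te), average="macro")

# "Performance specific threshold": retain the candidate with the best score.
best_name = max(scores, key=scores.get)
```

In a deployment, the score dictionary would also hold average accuracy and recall, with the threshold applied across all three metrics as the description states.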
Experiment input data creation: simulations are performed for a fixed set of input parameters such as physics, mesh, numerical schemes, and simulation control parameters. For each input set, varied solver-preconditioner combinations are present. The convergence of the simulation is ensured by allowing the residuals to fall below fixed threshold values for all the physical variables. A simulation time is recorded for each simulation. In one of the example implementations of the method, the following scope is covered with the following assumptions.
During the training, the plurality of solver combinations is arranged in ascending order of relevance into a plurality of bins as depicted in
Labelling the Data: For a single input parameter set, all the observations are arranged in ascending order of simulation time. Each simulation time is then normalized by the lowest simulation time. These normalized simulation time values are then binned based on thresholds, and class labels are assigned to them as shown in Table 1 below.
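The labelling step can be sketched as an illustrative, non-limiting example; the simulation times and the bin thresholds below are hypothetical (the actual thresholds are those of Table 1):

```python
import numpy as np

# Hypothetical simulation times (seconds) for one input-parameter set.
sim_times = np.array([12.0, 12.5, 18.0, 30.0, 55.0])

# Arrange in ascending order and normalize by the lowest simulation time,
# so the fastest run maps to exactly 1.0.
normalized = np.sort(sim_times) / sim_times.min()

# Assumed bin edges; class 1 is the fastest label.
thresholds = [1.1, 1.5, 2.5]
labels = np.digitize(normalized, thresholds) + 1   # classes 1..4
```

With these example values, `normalized` is `[1.0, 1.042, 1.5, 2.5, 4.583]` and the assigned labels are `[1, 1, 3, 4, 4]`, so the fastest solver combination always receives class label 1.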
The best Machine Learning (ML) model is selected by applying the performance specific threshold to the plurality of trained ML models. ML model performance is compared based on classification evaluation metrics, namely average accuracy, recall, precision, and F1 score, for the training data. For the benchmark CFD problem of the lid driven cavity flow, it is observed that the Linear Discriminant Analysis (LDA) model shows the highest F1 score (a metric for multi-class classification evaluation) of 56% for the training data.
At step 808 of the method 800, the one or more hardware processors 104 optimize a plurality of ML model parameters for prediction accuracy of the best ML model to generate an optimized best ML model to be used during an inferencing mode. The model analysis block explained in conjunction with
Once the trained best ML model is identified, then at step 810 of the method 800, during the inferencing mode, the one or more hardware processors 104, via the best ML model, predict the fastest solver combination for each of a plurality of new input parameters associated with each of a plurality of CFD models of interest. The predicted fastest solver combination is used to solve the plurality of matrix equations generated from the plurality of governing equations of the CFD model of interest for performing CFD simulation, for example using the open-source OpenFOAM™ CFD simulation tool.
The self-learning mode actively runs in the background during the inferencing mode for self-learning based on new inputs received during the inferencing mode. For self-learning, each new set of the plurality of input parameters received during the inferencing mode is continuously shared with the self-learning mode to determine a revised best ML model with an optimized plurality of ML model parameters. Once the revised best ML model is determined, the best ML model is updated with the revised best ML model. Thus, any later inferencing utilizes the revised ML model that has learnt over the new input data received during inferencing. The self-learning steps remain similar to the self-learning step 212 elaborated in conjunction with training of the system 100 with the help of
CASE STUDY: The training process for fastest solver combination selection using the best ML model for a lid driven cavity benchmark problem is explained below. As depicted in
A. Model input parameters: Model inputs are divided into four categories as below.
B. Model training: By performing simulations using the above input data, labelled data with 1200 datapoints is generated. The dataset was balanced by randomly selecting 250 datapoints for each class out of the total. These 1000 datapoints were further split into training and validation sets in a 90:10 ratio. A model is trained for each of the ML techniques, and training with 5-fold cross validation is performed.
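The balancing, splitting, and cross-validation steps can be sketched as an illustrative, non-limiting example (scikit-learn assumed; the feature matrix is a random placeholder for the actual labelled simulation data):

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 1200 labelled datapoints over 4 classes.
X = rng.normal(size=(1200, 8))
y = rng.permutation(np.repeat(np.arange(4), 300))

# Balance the dataset: randomly select 250 datapoints per class (4 x 250 = 1000).
idx = np.concatenate([rng.choice(np.flatnonzero(y == c), 250, replace=False)
                      for c in range(4)])
X_bal, y_bal = X[idx], y[idx]

# 90:10 train/validation split, then 5-fold cross validation on the train set.
X_tr, X_val, y_tr, y_val = train_test_split(
    X_bal, y_bal, test_size=0.1, random_state=0, stratify=y_bal)
cv_scores = cross_val_score(LinearDiscriminantAnalysis(), X_tr, y_tr, cv=5)
```

Stratifying the split keeps the four classes balanced in both partitions, which matches the balanced-dataset construction described above.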
RESULTS: Table 6 shows the ML model performance comparison based on classification evaluation metrics, namely average accuracy (A), recall (R), precision (P), and F1 score (F1), for the training data. It is observed that the Linear Discriminant Analysis (LDA) model shows the highest F1 score (a metric for multi-class classification evaluation) of 56% for the training data.
The LDA technique is selected for further tuning or optimization of ML model parameters and for prediction. The LDA model is tuned or optimized for hyper-parameters such as the solver, tolerance, and shrinkage, and the eigen solver with 0.001 shrinkage and 0.001 tolerance is selected for the final model. An Area Under Curve (AUC) curve is plotted to understand the model performance for each class. It is evident from the curve that the above LDA model can predict the fastest label with fairly good accuracy. Class 1 (the fastest label) is the most important since it represents the optimal solver-preconditioner combination for the given simulation. Table 7 depicts the ML model (LDA) performance on the test data.
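The hyper-parameter tuning step can be sketched as an illustrative, non-limiting example (scikit-learn assumed; the grid values and the synthetic data are hypothetical, chosen only to include the solver, shrinkage, and tolerance values named above):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV

# Hypothetical stand-in for the labelled training data (4 classes).
X, y = make_classification(n_samples=400, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

# Grid over the hyper-parameters named in the description; values assumed.
grid = {
    "solver": ["lsqr", "eigen"],      # shrinkage is supported by these solvers
    "shrinkage": [None, 0.001, 0.01],
    "tol": [0.0001, 0.001],
}
search = GridSearchCV(LinearDiscriminantAnalysis(), grid,
                      scoring="f1_macro", cv=3)
search.fit(X, y)
best = search.best_params_   # e.g. the eigen solver with a small shrinkage
```

Scoring the grid with macro F1 mirrors the use of the F1 score as the selection metric in the results above.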
Model prediction: The remaining 200 datapoints, which were excluded during training and validation, are selected as the test set. Since these datapoints are for the same CFD simulation case and are randomly selected, the Independent and Identically Distributed (IID) assumption for ML modelling is satisfied. The table below shows fair performance of the model on the test data.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
Thus, by eliminating the dependency on the matrix properties of the CFD model for solver combination selection, the method and system disclosed provide an efficient approach, which is computationally inexpensive and time-efficient and addresses the technical limitation of the works in the art. Further, selecting the relevant features from among the CFD model parameters using domain knowledge enables selecting the solver combination specific to the CFD problem; hence, accuracy of CFD simulation is improved.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
Number | Date | Country | Kind |
---|---|---|---|
202221058588 | Oct 2022 | IN | national |