This application claims priority to and the benefit under 35 U.S.C. § 119(a) of Korean Patent Application No. 10-2023-0119976 filed in the Korean Intellectual Property Office on Sep. 8, 2023, the entire contents of which are incorporated herein by reference.
The present disclosure relates to a method of performing a genetic algorithm to solve an optimization problem and a method of designing an electronic device using the genetic algorithm.
Genetic algorithms are computational models analogous to the evolutionary processes of the natural world and are among the various global optimization techniques used to solve optimization problems. A genetic algorithm is generally an in-parallel global search algorithm mimicking the selectivity of biological genetics in the natural world and may generate superior solutions to an optimization problem. Generally, the problem is solved by encoding possible/proposed solutions to the optimization problem in a fixed structure and modifying the possible solutions little by little, testing whether the modifications move the proposed solutions closer to a solution. In other words, a genetic algorithm may simulate evolution to find a solution x that optimizes some unknown function Y=f(x).
A method may perform a genetic algorithm to solve problems.
A method may design an electronic device through a genetic algorithm.
A method may generate a subset of a population to be used in a genetic algorithm.
In one general aspect, a method for performing a genetic algorithm is performed by one or more processors, and the method includes: generating a subset of elements by using a neural network from a population including elements that each correspond to a respective solution to a problem; generating child elements by performing a genetic algorithm on the subset; and determining that one of the child elements is an answer to the problem.
The subset may be generated based on scores of the elements output from the neural network, the scores including, or corresponding to, probabilities of the elements.
The generating of the subset based on the scores of the elements output from the neural network may include sampling an element with a score greater than a predetermined magnitude.
The generating of the subset based on the scores of the elements output from the neural network may include sampling a predetermined number of the elements in order from largest score to smallest score.
The generating of the child elements by performing the genetic algorithm on the subset may include: selecting two elements from the subset; and applying a genetic operator to the selected elements to generate one of the child elements.
The genetic operator may include a crossover operator, a mutation operator, or a local optimization operator.
The determining of the one of the child elements that is the answer may include: determining fitness values by performing a simulation on the problem for the respective child elements; and providing a reward to the neural network based on the fitness values of the child elements.
The providing of the reward to the neural network based on the fitness values of the child elements may include: updating the neural network based on a difference between a fitness value of the elements in the population and a fitness value of the child elements.
Determining the one of the child elements is the answer may further include updating the elements in the population based on the fitness values of the child elements.
The updating of the elements in the population based on the fitness values of the child elements may include replacing an element in the population with a child element determined to have a fitness value greater than fitness values of the elements in the population.
The determining of the answer among the child elements may include determining, as the answer, a child element having a fitness value that satisfies a predetermined criterion among the fitness values of the child elements.
Each of the elements may be generated by encoding an order of blocks to be placed in a peripheral area of a memory.
The method may further include: in response to determining that there is no answer among the child elements, generating a new subset of elements; generating new child elements by performing the genetic algorithm on the newly generated subset; and searching for the answer among the newly generated child elements.
In another general aspect, a method for designing an electronic device includes: generating a subset including elements by performing a sampling using a neural network on elements each corresponding to a different ordering of blocks that are to be arranged in a predetermined area of the electronic device; performing a genetic algorithm by using the subset to generate child elements of the elements; and determining an ordering of the blocks to be arranged in the predetermined area by selecting one of the child elements obtained through the genetic algorithm, the selected child element indicating the ordering of the blocks.
The generating of the subset by performing the sampling using the neural network on the elements may include: determining a probability of an element in a population generated from a random selection of the elements, the probability determined using the neural network; and generating the subset based on the probability.
The probability may correspond to a possibility that a child element generated from the element in the subset is determined to be an optimal solution of a problem being solved by the genetic algorithm.
The generating of the subset based on the probability may include sampling an element with a probability greater than a predetermined magnitude or sampling a predetermined number of elements in descending order of probability.
The method may further include: after the performing of the genetic algorithm using the subset, updating the neural network based on a fitness value of a child element obtained from the subset through the genetic algorithm.
In yet another general aspect, a method for generating a subset of a population for a genetic algorithm is performed by one or more processors and includes: determining scores of respective elements included in the population by using a neural network to infer the scores; generating the subset to include some of the elements in the population based on the scores; and updating the neural network based on a fitness value of a child element generated from the elements in the subset.
The updating of the neural network based on the fitness value of the child element generated may include: determining the fitness value of the child element for a problem to which the genetic algorithm is applied by performing a simulation on the child element; and providing a reward to the neural network based on the fitness value of the child element.
Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.
Throughout the drawings and the detailed description, unless otherwise described or provided, the same or like drawing reference numerals will be understood to refer to the same or like elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known after an understanding of the disclosure of this application may be omitted for increased clarity and conciseness.
The features described herein may be embodied in different forms and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application.
The terminology used herein is for describing various examples only and is not to be used to limit the disclosure. The articles “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. As non-limiting examples, terms “comprise” or “comprises,” “include” or “includes,” and “have” or “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof.
Throughout the specification, when a component or element is described as being “connected to,” “coupled to,” or “joined to” another component or element, it may be directly “connected to,” “coupled to,” or “joined to” the other component or element, or there may reasonably be one or more other components or elements intervening therebetween. When a component or element is described as being “directly connected to,” “directly coupled to,” or “directly joined to” another component or element, there can be no other elements intervening therebetween. Likewise, expressions, for example, “between” and “immediately between” and “adjacent to” and “immediately adjacent to” may also be construed as described in the foregoing.
Although terms such as “first,” “second,” and “third”, or A, B, (a), (b), and the like may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Each of these terminologies is not used to define an essence, order, or sequence of corresponding members, components, regions, layers, or sections, for example, but used merely to distinguish the corresponding members, components, regions, layers, or sections from other members, components, regions, layers, or sections. Thus, a first member, component, region, layer, or section referred to in the examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples.
Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains and based on an understanding of the disclosure of the present application. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure of the present application and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein. The use of the term “may” herein with respect to an example or embodiment, e.g., as to what an example or embodiment may include or implement, means that at least one example or embodiment exists where such a feature is included or implemented, while all examples are not limited thereto.
An artificial intelligence model (AI model) running a neural network according to the present disclosure is a machine learning model that learns at least one task, and may be implemented as a computer program (instructions) executed by a processor. The task learned may be a task to be solved through machine learning or a task to be performed through the machine learning. The AI models may be implemented as computer programs (instructions) running on computing devices, downloaded over a network, or rendered in a product form. Alternatively, the artificial intelligence model may be linked to various devices through a network.
A problem-solving system may include a neural network 100, a subset generator 200, and a genetic algorithm (GA) processor 300. The problem-solving system may perform a genetic algorithm by using the neural network 100, the subset generator 200, and the GA processor 300 to derive a solution to a specific problem. The GA processor 300 may output a predetermined number (K) of solutions for each step (iteration/generation) of the genetic algorithm.
The problem-solving system may verify through a simulation whether any of the outputted K solutions may be the optimal solution to the specific problem (the problem-solving system may verify whether each of the K solutions is the optimal solution to the specific problem, so K verifications may be performed by the problem-solving system). If any one solution among the K output solutions meets a predetermined criterion, then the problem-solving system may determine that solution (the solution that satisfies the predetermined criterion) to be the solution to the given problem.
In some embodiments, the genetic algorithm may be a computational model mimicking an evolutionary phenomenon of the natural world and may determine better solutions by expressing possible solutions (to the problem to be solved) in a predetermined data structure and by iteratively modifying the solutions. A given solution may be encoded and generated as one element, and multiple elements may be included in a population, the population being a set of the solutions that changes from generation to generation. That is to say, the solutions included in a population may be updated when each step (iteration, new generation, etc.) of the genetic algorithm is completed.
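By way of illustration only, the iterative process described above (encoding candidate solutions, modifying them generation by generation, and keeping the better ones) may be sketched as follows; the function names and the elitist update rule are hypothetical simplifications, not a definitive implementation of the disclosed embodiments.

```python
import random

def run_ga(population, fitness_fn, select_subset, crossover, mutate,
           target_fitness, max_generations=100):
    """Generation loop sketch: each generation samples a subset of the
    population, breeds child elements from pairs, and keeps the fittest."""
    for _ in range(max_generations):
        subset = select_subset(population)
        # Breed child elements from pairs sampled out of the subset.
        children = []
        for _ in range(len(subset)):
            p1, p2 = random.sample(subset, 2)
            children.append(mutate(crossover(p1, p2)))
        # A child meeting the criterion is taken as the answer.
        for child in children:
            if fitness_fn(child) >= target_fitness:
                return child
        # Otherwise the population is updated: better children replace
        # weaker elements, forming the next generation.
        population = sorted(population + children, key=fitness_fn,
                            reverse=True)[:len(population)]
    return max(population, key=fitness_fn)
```

Here `select_subset`, `crossover`, and `mutate` stand in for the subset generator 200 and the genetic operators described later.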
In some embodiments, the neural network 100 may perform an inference on an element in a population, and may output the inference as a score or a probability of the element included in the population (hereafter, "probability" refers to "score or probability," as a score will generally be derived from, or correspond to, a probability). The probability of an element may indicate the likelihood that the element will generate (e.g., parent, mutate to, etc.) a child element that is ultimately determined to be the solution to the problem; in other words, the probability of an element is the likelihood that a child (next-generation) element generated from that element will be determined to be the answer to the problem.
Whether the child element will be the answer to the problem may be determined according to a fitness value calculated from a simulation performed on the child element. When the fitness value of the child element meets a predetermined criterion (e.g., exceeds a predetermined reference value), the child element may be determined to be the answer to the problem (as used herein, "the answer" and "the solution" do not imply that such answer/solution is the best or only answer/solution; rather, they refer to an answer/solution having been determined to be an optimum/sufficient answer/solution according to the performed process). To reiterate, as shown in
In some embodiments, the neural network 100 may perform reinforcement learning. An agent (employing the neural network 100) may be configured to determine the probabilities of the respective elements in the population as an action. The agent may output those probabilities as the action based on a policy, with the population functioning as the environment corresponding to the agent. Since the probability of each element is the basis for generating the subset (or sub-population) of the population, the population may be updated (changing the state of the environment) according to the fitness value of the child element generated by the genetic algorithm from the element selected in the subset, and a reward may be given to the agent (which is operated by the neural network 100). The subset may be a portion of the population having a reduced number of elements, and may be generated by the subset generator 200 before the genetic algorithm generates the child element from the subset. For example, the reward may correspond to the difference between the maximum fitness value of the elements in the population and the maximum fitness value of the child elements generated from the selected elements in the subset. Regarding the updating of the population, the updated population is the next generation of the population: (1) the subset may be generated from the population by the subset generator 200; (2) the genetic algorithm may be performed by the GA processor 300 based on the subset; (3) the population may be updated according to the fitness value of the child element generated by the genetic algorithm from the selected parent elements in the subset; and (4) the subset may be regenerated from the updated population for the next genetic algorithm step.
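A minimal sketch of one such agent-environment step follows, under two assumptions made purely for illustration: the reward is the gain in maximum fitness, and a fixed score threshold of 0.5 forms the subset. The function names are hypothetical.

```python
def rl_ga_step(population, fitnesses, score_fn, make_children, fitness_fn):
    """One step of the loop: (1) score elements and form the subset,
    (2) run the GA on the subset, (3) update the population, and
    compute the reward for the agent as the gain in best fitness."""
    # (1) The agent (neural network) scores each element; the subset
    # keeps the elements whose score exceeds 0.5 (an assumed threshold).
    scores = [score_fn(e) for e in population]
    subset = [e for e, s in zip(population, scores) if s > 0.5]
    # (2) The GA produces child elements from the subset.
    children = make_children(subset)
    child_fit = [fitness_fn(c) for c in children]
    # Reward: improvement of the best child over the best known element.
    reward = max(child_fit) - max(fitnesses)
    # (3) Better children replace weaker elements (next generation).
    merged = sorted(zip(population + children, fitnesses + child_fit),
                    key=lambda t: t[1], reverse=True)[:len(population)]
    new_pop = [e for e, _ in merged]
    new_fit = [f for _, f in merged]
    return new_pop, new_fit, reward
```

Step (4) of the loop, regenerating the subset, then simply repeats step (1) on the returned population.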
In some embodiments, the neural network 100 may be updated by changing the policy according to the fitness value of the child element (a policy gradient). In the policy gradient, the policy to be applied to the neural network 100 (or to the agent) may be changed to maximize the expected reward and the gradient of the policy may be inferred.
In some embodiments, the neural network 100 may include at least one of a transformer (e.g., a Bilinear transformer), a recurrent neural network (RNN), a long short-term memory (LSTM), or a permutation-equivariant fully connected network that processes a large-scale input such as a population. In some implementations, the mentioned neural networks may be capable of processing a large-scale input, such as the entire population, in parallel; the transformer is one example, and multiple RNNs or LSTMs may be configured to process the large-scale input in parallel.
In some embodiments, the subset generator 200 may generate a subset of the population based on the probabilities output from the neural network 100. The subset generator 200 may generate the subset from the current population by selecting, from the current population, the elements having respective probabilities greater than a predetermined magnitude. Alternatively, the subset generator 200 may generate the subset from the population by sampling/selecting a predetermined number of the elements in order of their probabilities, from high to low (i.e., selecting the top-N elements by probability).
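The two sampling strategies described above may be sketched as follows; the helper names are hypothetical, for illustration only.

```python
def sample_by_threshold(elements, probs, threshold):
    """Keep elements whose probability exceeds a predetermined magnitude."""
    return [e for e, p in zip(elements, probs) if p > threshold]

def sample_top_n(elements, probs, n):
    """Keep the n elements with the highest probabilities (top-N)."""
    ranked = sorted(zip(elements, probs), key=lambda t: t[1], reverse=True)
    return [e for e, _ in ranked[:n]]
```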
In some embodiments, the GA processor 300 may select a pair of elements (parent elements) from the subset generated by the subset generator 200 to (i) perform the genetic algorithm (form a child element) and (ii) perform the simulation (related to the problem) for the child element generated through the genetic algorithm, thereby determining the answer to the given problem. For example, the GA processor 300 may determine, as the answer to the problem to be solved, at least one child element with a fitness value that satisfies the predetermined criterion among the K child elements obtained by performing the genetic algorithm.
As described above, the problem-solving system according to some embodiments may calculate the probabilities of elements in the population by using the neural network 100 and may generate the subset and perform the GA process by using the subset generator 200 and the GA processor 300, so that the workload of the genetic algorithm may be distributed between the processor (e.g., a GPU) corresponding to (executing) the neural network 100 and the processor (e.g., a CPU) corresponding to the subset generator 200 and the GA processor 300. Alternatively, the portions of the GA processor's operations that admit in-parallel operation may be performed by the processor corresponding to the neural network 100. Additionally, by using the subset in the GA process, the search space may be reduced compared to the full population, so the operation results of the GA process itself may be output quickly.
Referring to
Through the population initialization, a predetermined number of elements among the n! elements may be included in the initial population. The number of the elements included in the population may be set according to the type of problem.
In some embodiments, the problem that the problem-solving system seeks to solve using the genetic algorithm may be the arrangement of blocks, each responsible for a predetermined function, in predetermined areas of an electronic device when the electronic device is designed. For example, the problem may be to arrange a set of blocks in a line in a peripheral area of the electronic device (for example, but not limited to, a memory circuit/device). In other words, the problem is modeled as an order of blocks that together control an operation of the memory device (e.g., electronic components constituting the controller), where the blocks are placed in the peripheral area of the memory. In this example, the genetic algorithm may be used to minimize the length of the wires respectively connecting components. Regarding "connecting components," since each of the blocks corresponds to electronic components, "component" refers to the block or electronic component. The term "component" is in contrast to the term "element"; an "element" is a numeric string or vector, not a block.
In some embodiments, when a set of blocks needs to be placed in one linear dimension within the peripheral area of the memory, the order of the blocks for that arrangement may be encoded as a numeric string or a vector. Each encoded numeric string or vector may correspond to one element. For example, in a problem of arranging six blocks with identifiers of 0 to 5, respectively, an order of the blocks may be an element encoded as a numeric string or vector such as [0,1,2,3,4,5].
In some embodiments, all elements in the initial population may be assigned an arbitrary (e.g., random) fitness value; the same or different fitness values may be given to two different elements. In the block example, the fitness value may be a real number of the same kind as the fitness value computed from the simulation performed on a child element. Random fitness values may be assigned to elements in the population for comparison with a child element whose fitness value is calculated from the simulation: the fitness values of the elements in the initial population are randomly assigned, whereas the fitness value of a child element is determined by the simulation performed on the child element generated by the genetic algorithm. An element in the population having a random fitness value smaller than that of the child element may be deleted from the population by the update. Here, "two different elements" refers to two members of the population; in the block arrangement embodiment, two elements correspond to two different orders of the blocks, and because fitness values are randomly assigned to each element, the two elements may receive either the same or different fitness values.
In some embodiments, the elements included in the population may be generated using a breadth-first search (BFS) method and/or a depth-first search (DFS) method with each node (one node corresponding to one block) of a graph as a root node. Both the BFS method and the DFS method are graph traversal algorithms used to generate elements, and the graph represents a space for determining the order of the nodes (that is, the arrangement order of the blocks). The number of elements generated by the DFS method and/or BFS method may be the same as the number of blocks to be placed in the peripheral area. For example, when there are 6 blocks, the number of all possible elements is 6!, and the population may contain a part of all elements discovered by using the BFS method or the DFS method; the BFS method or DFS method may be used to discover as many elements as there are nodes (the number of blocks).
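One possible reading of the DFS-based element generation, assuming for illustration that the block graph is given as a hypothetical adjacency mapping, is sketched below; a BFS variant would differ only in using a queue rather than a stack.

```python
def dfs_order(graph, root):
    """Return one block ordering (element) by DFS from the given root."""
    order, seen, stack = [], set(), [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        # Push neighbors in reverse so lower-numbered blocks come off first.
        stack.extend(sorted(graph[node], reverse=True))
    return order

def initial_population(graph):
    """One element per root node: as many elements as there are blocks."""
    return [dfs_order(graph, root) for root in sorted(graph)]
```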
The generated population may be input to the neural network 100, and the neural network 100 may output the probabilities of the respective elements included in the population. The probability of an element may indicate the possibility that a child element generated from that element, once the element is included in the subset, will be determined to be the answer to the problem.
Referring to
Referring to
With the Bernoulli sampling technique, a reference value may be determined in advance depending on the number of the elements to be included in the subset. For example, when the subset needs to include about 100 elements, the subset generator 200 may determine the reference value so that about 100 elements may be sampled. The number of the elements included in the subset may be a hyperparameter. The hyperparameter may be a value set by a user for the machine learning. The hyperparameter is distinct from parameters of the various operative AI models.
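One way to calibrate the reference value so that roughly the desired number of elements is sampled is to take the target-th largest probability; this is an illustrative simplification with hypothetical function names, not the disclosed calibration procedure.

```python
def calibrate_reference(probs, target_count):
    """Pick a reference value so that about `target_count` elements have a
    probability at or above it: use the target-th largest probability."""
    ranked = sorted(probs, reverse=True)
    k = min(target_count, len(ranked))
    return ranked[k - 1] if k else float("inf")

def reference_subset(elements, probs, target_count):
    """Sample the elements whose probability meets the reference value."""
    ref = calibrate_reference(probs, target_count)
    return [e for e, p in zip(elements, probs) if p >= ref]
```

The target count (here `target_count`, e.g., about 100 elements) corresponds to the hyperparameter mentioned above.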
In some embodiments, the subset generator 200 may generate the subset based on the probabilities of the respective elements inferred by, and output from, the neural network 100 (e.g., an RNN, LSTM, GRU, etc.), which may be trained through reinforcement learning. The reward of the reinforcement learning may be determined based on the fitness values of the child elements generated from the elements selected into the subset.
Referring to
Referring to
Referring to
The crossover operator may perform an ordered crossover on the genes within the elements of a pair, such as a partially matched crossover (PMX), which is applied to permutation problems.
The mutation operator may randomly assign a value between 0 and 1 to each gene of an element being mutated and perform a swap between the genes whose assigned value is less than a threshold.
The local optimization operator may shift each gene of an element. For example, the local optimization operator may shift the gene of [0,1,2,3,4,5] and transform the element into [1,2,3,4,5,0].
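Illustrative sketches of the three operators follow; the cut points and threshold are hypothetical, and the PMX below is a standard textbook formulation that may differ in detail from the disclosed embodiments.

```python
import random

def pmx(p1, p2, a, b):
    """Partially matched crossover: copy p1's segment [a, b), then fill the
    remaining positions from p2, chasing the segment mapping on conflicts."""
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    seg = set(child[a:b])
    for i in list(range(a)) + list(range(b, len(p1))):
        g = p2[i]
        while g in seg:                # resolve duplicates via PMX mapping
            g = p2[p1.index(g)]
        child[i] = g
    return child

def swap_mutation(element, threshold=0.2):
    """Assign each gene a random value in [0, 1); permute (swap) the genes
    whose assigned value falls below the threshold."""
    out = list(element)
    tagged = [i for i in range(len(out)) if random.random() < threshold]
    shuffled = tagged[:]
    random.shuffle(shuffled)
    vals = [out[i] for i in tagged]
    for pos, val in zip(shuffled, vals):
        out[pos] = val
    return out

def shift(element):
    """Local optimization operator: rotate every gene one position left."""
    return list(element[1:]) + [element[0]]
```

For example, `shift([0,1,2,3,4,5])` yields the transformed element `[1,2,3,4,5,0]` described above.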
Referring to
In the circuit design example described above, the fitness value of a child element (which has been generated through the genetic algorithm) may correspond to the length of the wires formed by the blocks to be placed in the peripheral area. For example, a child element with a relatively large fitness value may indicate that the arrangement order of the blocks (for that child element) makes the total length of the wires relatively short, while a child element with a relatively small fitness value may indicate that the arrangement order of the blocks makes the total length of the wires relatively long.
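Assuming, purely for illustration, a hypothetical netlist of two-pin nets between blocks placed in unit-spaced slots along the line, the wire-length fitness described above may be sketched as follows (larger fitness for shorter total wiring, here via negation).

```python
def wire_length(order, nets):
    """Total wire length for a 1-D block ordering: for each net (u, v),
    the distance between the slots holding blocks u and v."""
    pos = {block: slot for slot, block in enumerate(order)}
    return sum(abs(pos[u] - pos[v]) for u, v in nets)

def fitness(order, nets):
    """Larger fitness value corresponds to shorter total wiring."""
    return -wire_length(order, nets)
```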
Afterwards, the GA processor 300 may determine whether the fitness value of any of the child elements satisfies the predetermined criterion (S150).
In some embodiments, when no child element satisfies the predetermined criterion, the neural network 100 and the subset generator 200 may generate a new subset from the population. The GA processor 300 may generate child elements by performing the genetic algorithm on the elements and/or element pairs selected from the new subset. Here, one step of the genetic algorithm spans from the generation of the subset to the generation and evaluation of the predetermined number (K) of child elements. In other words, when the answer is not determined from any of the child elements generated by one step of the genetic algorithm, the GA processor 300 may generate a new subset and perform the next step of the genetic algorithm (e.g., generating new child elements and searching for the answer among the new child elements).
Referring to
Meanwhile, when the GA processor 300 has performed the genetic algorithm a predetermined number of times, the cumulative reward determined based on the fitness values of the child elements may be provided to the neural network. Referring to
In Equation 1, a_t represents an action of the neural network (the agent) at a time step t, s_t represents a state of the population (an environment) at the time step t, and Q^π(s_t, a_t) represents a value function regarding the action of the agent taken on the state of the environment at the time step t.
As shown in Equation 1, the sum of the rewards after the time step t may be provided to the agent.
Although mathematical notation is used above, it will be appreciated that the mathematical notation is shorthand describing operations that could be described with English words, but such textual description would be difficult to understand. The mathematical notation and equations above may be readily translated into source code that can be compiled to produce processor-executable instructions that when executed cause the processor(s) to operate as described by the mathematical notation.
In some embodiments, the reward provided to the agent may be determined based on a difference between the fitness values of the child elements at the previous step and the current step of the genetic algorithm. In some embodiments, the reward may be determined based on a difference between a fitness value of the elements in the population and a fitness value of the child elements, to update the neural network 100. For example, the reward may be determined based on a difference between the maximum fitness value of the child element generated in the current step and the maximum fitness value of the element in the population of the current step of the genetic algorithm. Alternatively, the reward may be determined based on a difference between the maximum fitness value of the element in the population of the current step and the maximum fitness value of the element in the population of the previous step. The reward may be determined at every step of the genetic algorithm, and then the accumulated reward may be provided to the neural network 100 while the genetic algorithm is performed a predetermined number of times (or until a solution element is found).
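As noted above, the mathematical notation translates readily into source code. The reward variants just described may be sketched as follows; the undiscounted cumulative sum corresponds to the sum of rewards after time step t described with Equation 1, and the function names are hypothetical.

```python
def step_reward(pop_fits, child_fits):
    """Reward for one GA step: gain of the best child over the best
    element currently in the population."""
    return max(child_fits) - max(pop_fits)

def cumulative_reward(rewards, t=0):
    """Sum of the rewards from time step t onward (an undiscounted
    return), as accumulated and provided to the agent."""
    return sum(rewards[t:])
```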
In some embodiments, the policy of the neural network 100 may be updated to minimize the length of the wires formed by the blocks to be placed in the peripheral area. Here, changing the policy to maximize the fitness value of the child element may correspond to changing the policy to minimize the length of the wires formed by the blocks.
Referring to
As explained above, the problem-solving system according to some embodiments may perform adaptive sampling, using the neural network and the subset generator to sample an element (or element pair) that generates a child element close to the solution to the problem, thus improving the performance of the genetic algorithm in solving the problem. In other words, by adaptively sampling the elements likely to lead to the solution by using the neural network and the subset generator, the speed of the genetic algorithm may be improved and the answer to the problem may be accurately derived.
Referring to
The input layer 610 may include input nodes (x1 to xi), and the number of the input nodes (x1 to xi) may correspond to the number of independent variables. For training of the neural network 600, a training dataset may be input to the input layer 610. When the test data set is input to the input layer 610 of the trained neural network 600, an inference result may be output from the output layer 630 of the trained neural network 600. In some embodiments, the input layer 610 may have a structure suitable for processing a large-scale input.
The hidden layer 620 may be positioned between the input layer 610 and the output layer 630 and may include hidden layers 620-1 to 620-n. The output layer 630 may include at least one output node (y1 to yj). An activation function may be used in the hidden layers 620 and the output layer 630. In some embodiments, the neural network 600 may be trained by adjusting the weights of the nodes included in the hidden layers 620.
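The layer structure described above can be sketched as a plain fully connected forward pass. The ReLU activation, the softmax over the output nodes, and the layer sizes are assumptions chosen for illustration; the disclosure does not fix a particular activation function:

```python
import numpy as np

def forward(x, weights, biases):
    """Forward pass through a fully connected network: hidden layers use a
    ReLU activation, and a softmax over the output layer turns the output
    nodes into per-element scores (probabilities) that sum to 1."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)        # hidden layers with ReLU
    logits = h @ weights[-1] + biases[-1]     # output layer
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
# Illustrative sizes: 4 input nodes, one hidden layer of 8, 3 output nodes.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
biases = [np.zeros(8), np.zeros(3)]
scores = forward(rng.normal(size=4), weights, biases)
```

The resulting `scores` vector is non-negative and sums to 1, so it can serve directly as the per-element probabilities used by the subset generator.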
A problem-solving system according to some embodiments may be implemented as a computer system, for example, a computer-readable medium. Referring to
The processor 710 may implement the functions, processes, or methods proposed in some embodiments. The operation of the computer system 700 according to some embodiments may be implemented by the processor 710. The at least one processor 710 may include both a GPU and a CPU.
In some embodiments of this description, the memory 720 may be positioned inside or outside the processor, and the memory may be connected to the processor through various known means. The memory may be a volatile or non-volatile storage medium of various forms; for example, the memory may include a read-only memory (ROM) or a random-access memory (RAM).
In some embodiments, the problem-solving system described herein may be used to optimize a one-dimensional arrangement of blocks (e.g., a buffer, an inverter, an AND gate, an OR gate, etc.) in a peripheral area of a memory semiconductor (e.g., DRAM, flash, SSD, CIS, DDI, SoC, smart card, MCU, etc.).
Referring to
When the numeric string or the vector related to the order of the arrangement of the blocks is input to the problem-solving system, the problem-solving system may generate a population related to the order of the arrangement of the blocks and output the solution to the problem by using the genetic algorithm within the generated population.
In this description, the neural network 100 may operate as an agent of a reinforcement learning network to output the score or probability of each element included in the population.
The subset generator 200 may generate a subset of the population based on the score or probability of the element within the population.
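A probability-weighted subset of the population, as described above, can be sketched as sampling without replacement using the per-element probabilities from the neural network. The function name and subset size are illustrative assumptions:

```python
import numpy as np

def generate_subset(population, probabilities, subset_size, rng=None):
    """Sample a subset of the population without replacement, weighted by
    the per-element scores (probabilities) output by the neural network."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(population), size=subset_size,
                     replace=False, p=probabilities)
    return [population[i] for i in idx]

population = ["e0", "e1", "e2", "e3"]
probs = [0.1, 0.4, 0.4, 0.1]   # must sum to 1
subset = generate_subset(population, probs, subset_size=2,
                         rng=np.random.default_rng(0))
```

Because higher-scoring elements are sampled more often, the subset passed to the GA processor is biased toward elements likely to produce child elements close to the solution.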
The GA processor 300 may generate child elements from the element selected from the subset by performing the GA process.
Afterwards, based on fitness values calculated through the simulation of the child elements, at least one child element among the child elements generated by the GA processor 300 may be output as the answer to the problem. The child element, which is the answer to the problem, may indicate the order of the arrangement of the blocks that may minimize the total length of the wires connecting the blocks to be arranged in the peripheral area of the memory. The solution to the problem illustrated in
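One GA step on a block ordering, with total wire length as the fitness signal, can be sketched as follows. The net list, the block names, and the use of a simplified order-crossover variant are assumptions for illustration; the disclosure does not prescribe a specific crossover operator:

```python
import random

def total_wire_length(order, nets):
    """Total wire length for a 1-D block arrangement: for each net
    (pair of connected blocks), the wire spans the distance between the
    blocks' positions in the ordering."""
    pos = {block: i for i, block in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in nets)

def order_crossover(p1, p2, rng):
    """Simplified order crossover: copy a slice from one parent, then fill
    the rest in the order the remaining blocks appear in the other parent,
    so the child is always a valid permutation of the blocks."""
    i, j = sorted(rng.sample(range(len(p1)), 2))
    child = p1[i:j]
    child += [b for b in p2 if b not in child]
    return child

rng = random.Random(0)
nets = [("buf", "inv"), ("inv", "and"), ("and", "or")]  # hypothetical net list
parents = (["buf", "inv", "and", "or"], ["or", "and", "inv", "buf"])
child = order_crossover(*parents, rng)
fitness = -total_wire_length(child, nets)  # higher fitness = shorter wires
```

The child element whose ordering yields the highest fitness (i.e., the shortest total wire length in simulation) would then be output as the answer to the arrangement problem.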
In this way, the problem-solving system according to some embodiments may quickly output the answer regarding the order of the arrangement of the blocks by generating the subset for performing the genetic algorithm based on the score or probability of each element calculated using the neural network. In addition, by adaptively sampling the elements corresponding to the solution of the problem using the neural network and the subset generator, the speed of the genetic algorithm in finding the optimal order of the arrangement of the blocks may be improved and the best answer can be derived.
Meanwhile, the embodiments may be implemented not only through the devices and/or methods described so far, but also through a program (instructions) that realizes functions corresponding to the configuration of the embodiments, or through a recording medium on which the program is recorded; such an implementation may be achieved by anyone skilled in the art from the description of the embodiments above. Specifically, the methods according to some embodiments (e.g., a method for performing a genetic algorithm, a method for designing an electronic device, etc.) may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, etc., singly or in combination. The program instructions recorded on the computer-readable medium may be specially designed and configured for the examples, or may be known and available to those skilled in the art of computer software. The computer-readable recording medium may include a hardware device configured to store and execute the program instructions. For example, the computer-readable recording medium includes magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices such as ROM, RAM, and flash memory. A program instruction may include not only machine language code such as that generated by a compiler, but also high-level language code that can be executed by a computer through an interpreter or the like.
The computing apparatuses, the electronic devices, the processors, the memories, the displays, the information output system and hardware, the storage devices, and other apparatuses, devices, units, modules, and components described herein with respect to
The methods illustrated in
Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions, or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the one or more processors or computers using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions herein, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above.
The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, Blu-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), flash memory, a card type memory such as multimedia card or a micro card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents.
Therefore, in addition to the above disclosure, the scope of the disclosure may also be defined by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
Number | Date | Country | Kind |
---|---|---|---|
10-2023-0119976 | Sep 2023 | KR | national |