The present disclosure generally relates to an electronic design automation (EDA) system. In particular, the present disclosure relates to determining the predicted performance of an integrated circuit design based on a component statistical information model.
Electronic design automation (EDA) systems verify the functionality of a circuit design. For example, EDA systems are used to verify that the circuit design meets the design requirements and/or manufacturing process requirements. An EDA system may analyze a circuit design to perform a timing analysis of the circuit design, routing analysis of the circuit design, fault detection within the circuit design, and/or fault debugging within the circuit design, among others. To perform the analysis, an EDA system simulates the functionality of a circuit design to predict the behavior of the circuit design and to determine whether or not the design requirements and/or manufacturing process requirements are met.
In one example, a method includes receiving circuit design training data and circuit design test data. The circuit design training data includes training nodes and training paths. The training paths connect the training nodes and include circuit components. The circuit design test data includes a first test node and a second test node. Further, the method includes determining testing information for the circuit components of each training path from the circuit design training data. The method further includes determining, by a processing device, a statistical representation of the circuit design test data based on the testing information and the circuit design test data, and determining first test information for a test path connecting the first test node with the second test node based on the statistical representation.
In one example, a non-transitory computer readable medium includes stored instructions that, when executed by a processor, cause the processor to receive circuit design training data and circuit design test data. The circuit design training data includes training nodes and training paths. The training paths connect the training nodes and include circuit components. The circuit design test data includes a first test node and a second test node. Further, the processor is caused to determine testing information for the circuit components of each training path from the circuit design training data. The processor is further caused to determine a statistical representation of the circuit design test data based on the testing information and the circuit design test data, and determine first test information for a test path connecting the first test node with the second test node based on the statistical representation.
In one example, a system includes a memory storing instructions, and a processing device. The processing device is coupled with the memory and executes the instructions. The instructions, when executed, cause the processing device to receive design training data and design test data. The design training data includes training nodes and training paths. The training paths include components and connect the training nodes. The design test data includes a first test node and a second test node. Further, the processing device determines testing information for the components of each training path from the design training data. The testing information includes entries including values associated with the components. The processing device further determines test information for a test path connecting the first test node with the second test node based on the entries of the components associated with the test path.
The disclosure will be understood more fully from the detailed description given below and from the accompanying figures of embodiments of the disclosure. The figures are used to provide knowledge and understanding of embodiments of the disclosure and do not limit the scope of the disclosure to these specific embodiments. Furthermore, the figures are not necessarily drawn to scale.
Aspects of the present disclosure relate to a statistical graph component probability model for an integrated circuit design. Electronic design automation (EDA) systems analyze circuit designs to test the functionality of a circuit design and to verify that the circuit design meets the corresponding design requirements and/or corresponding manufacturing process requirements. An EDA system performs one or more of a timing analysis of a circuit design, routing analysis of a circuit design, fault detection within a circuit design, and fault debugging within a circuit design, among others, to analyze the circuit design. An EDA system uses the netlist of a circuit design when analyzing the circuit design. In a netlist, cells and/or signals and corresponding components are interconnected via wires. An EDA system may employ machine learning (ML) techniques (or ML processes) to predict various metrics of a circuit design during analysis of the circuit design.
The ML techniques may include a random forest technique, which uses tabular feature data to predict the metrics of the circuit design. However, random forest techniques have not been applied to the netlist of a circuit design (e.g., graph data), as random forest techniques operate on tabular feature data rather than on netlists.
In other examples, the ML techniques include a graph neural network (GNN) technique that is applied to the netlist of a circuit design. In one or more examples, the circuit design is a very large-scale integration (VLSI) circuit design. A VLSI circuit design includes a large number of logic gates, e.g., millions or more. Accordingly, due to the heavy computation and memory requirements, applying a GNN technique to a netlist of a VLSI circuit design includes partitioning the larger graph topology (e.g., graph) of the circuit design into smaller subgraphs (e.g., smaller partitions or portions). The subgraphs are used during analysis (e.g., fault prediction, timing delay prediction, and/or congestion prediction, among others) of a circuit design. As a netlist includes a graph topology of the integrated cells and/or signals of a circuit design, the circuit analysis process is further complicated. Further, a netlist is a special type of graph. In general, netlists are hierarchical, having nodes with multiple abstraction layers that can be flattened. Also, netlists use directional edges with one source and one or more readers (fan-out). However, common graphs do not include hierarchical nodes, and directional edges usually have only one source and one sink. As a result, common ML techniques are not able to efficiently analyze a circuit design by analyzing the netlist. For example, some ML techniques are not able to apply subgraph pattern matching and are not able to utilize strong subgraph characteristics learned during training to analyze a circuit design.
In one or more examples, circuit design performance prediction processes using ML techniques as described herein are applied to VLSI netlists. In such examples, such processes include measurements of subsets (e.g., paths and nodes, among others) of netlists. The measurements correspond to path congestion, path timing, and faults, among others, taken on a subset of the netlist, such as paths. The ML techniques described herein may be used to generate predictions for the corresponding circuit design. In one example, local circuit graph data is converted into tabular data features, and an ML technique(s) is applied to the tabular data features. The ML technique includes regression, classification, and/or neural networks, among others. An example neural network is a convolutional neural network (CNN). However, such a process requires time-consuming feature engineering work to convert cell, wire, and topology information into proper tabular features. Further, the iterations between feature engineering and model testing rely heavily on domain knowledge to explore new features. In another example, relevant node/edge features are assigned to the circuit graph, and the corresponding problem is converted into tasks suitable for a GNN. Example tasks include node classification, graph classification, and/or node/graph embedding, among others. However, in such a process, the VLSI circuit graph is too large and complex for a GNN to process, or to process quickly. Further, the labelled data is not sufficient for proper training and testing. Additionally, there are some unique circuit properties that differentiate netlists from common graphs and that are difficult to handle. For example, netlists include ordered edges imposed by a module's inputs/outputs (IOs) (while graph edges are unordered), hierarchical structures associated with cells, many different cell types, long range interactions among macros, pins, standard cells, and place and route features, among others.
The present ML system and method as described herein track statistics of subgraphs, such as paths or modules of a corresponding netlist, and build statistical model(s) from the statistics. As described herein, the ML techniques allow for a region of interest to be identified for defined metrics, provide predictions for unseen (unmeasured) netlists, and offer insights for model performance issues. The present ML system and method may be applied to areas such as congestion and timing prediction, and fault detection and prediction, among others. As described herein, the present ML system and method may be applied to VLSI circuit device graphs. In one or more examples, circuit device graph ML problems can be cast into statistical problems on the corresponding graph. For example, node classification problems can be converted to class probability problems. Further, methods that capture the statistical relationship between a labeled subgraph and its corresponding components are described herein. Specifically, subgraph labels are treated as statistical events contributed to (and from) associated graph components. In the training process, training data from graph components, paths, and/or subgraphs is used for updating the statistics on the original graph and to generate a bookkeeping graph and/or a hash table. During testing, for an unseen test cell, wire, path, or module, among others, the relevant statistics are recombined and used for prediction. The present ML system and method as described herein capture the graph topology information via a neighborhood hash process and/or an embedding process to evaluate a circuit design.
The technical advantages of the present disclosure include, but are not limited to, circuit analysis methods using ML techniques that generate results for a circuit design that are less complex and more efficient to debug than those generated by other ML techniques. Further, the ML techniques described herein do not include the complicated feature engineering that is used to convert the circuit topology information into features, and do not use neural network solutions. Further, supervised ML methods may include a computation-heavy training stage. For example, a GNN ingests feature data from graph nodes and neighborhoods and trains associated network parameters in minibatches to meet embedding and training objectives. In the circuit design prediction process described herein, training is treated as bookkeeping of statistical information on the circuit graph, and testing is treated as inference from statistical data recorded on the bookkeeping graph. Other ML techniques perform the tasks of collecting data, training the model (e.g., via back propagation on mini-batches until the model converges), and testing the model in inference. However, for the ML techniques described herein, training includes parsing the data to generate hash tables, where statistical information is stored (e.g., during training) and retrieved (e.g., during testing). The ML techniques described herein are less computationally complex to perform than previous ML techniques and scale linearly with input data size, reducing the corresponding processor time and processor resources. Accordingly, the ML techniques as described herein use less processor resources and processing time than other ML techniques, reducing the cost to analyze a circuit design and manufacture a corresponding semiconductor device based on the circuit design.
At 110, circuit design training data and test data associated with a circuit design (e.g., circuit design test data) is received. For example, an EDA system (e.g., the computer system 900 of
In one example, 110 of
In one example, the training node data 212 and/or the training path data 214 and the test node data 216 and/or the test path data 218 are part of the same circuit design. In other examples, the training node data 212 and/or the training path data 214 corresponds to a circuit design or circuit designs different from that of the test node data 216 and/or the test path data 218.
At 120 of
In one example, the EDA system (e.g., the computer system 900 of
With reference to
The bookkeeping graph 222 is a set of graphs that can be constructed from the training path data 214 and test path data 218 (or subgraphs). The training path data 214 and test path data 218 are samples generated from a circuit design graph or graphs. The bookkeeping graph 222 is reconstructed from the training path data 214 and the test path data 218. The bookkeeping graph 222 includes paths between interconnected nodes. For example, nodes are connected via edges, forming the paths of the bookkeeping graph. The bookkeeping graph 222 is a subset of the original circuit design graph with some missing topology information due to the limited coverage of the sampling. In one or more examples, two or more paths overlap with each other, i.e., share common nodes/edges. In such examples, different paths can be pieced together to form one connected subgraph of the original graph. For paths that have no overlaps, separate subgraphs are formed, even if those paths are connected on the original graph.
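To make the piecing-together concrete, the following is a minimal Python sketch, assuming a hypothetical list-of-node-names path format, of merging sampled paths into a bookkeeping adjacency map and grouping the result into connected subgraphs; the node names and the dictionary representation are illustrative assumptions, not structures defined by the disclosure.

```python
from collections import defaultdict

def build_bookkeeping_graph(paths):
    """Piece sampled paths (lists of node names) together into a directed
    adjacency map. Overlapping paths share nodes/edges and therefore merge
    into one connected subgraph; non-overlapping paths stay separate."""
    adjacency = defaultdict(set)
    for path in paths:
        for src, dst in zip(path, path[1:]):
            adjacency[src].add(dst)  # directed edge along the sampled path
    return adjacency

def connected_subgraphs(adjacency):
    """Group nodes into weakly connected components, i.e., the separate
    subgraphs of the bookkeeping graph."""
    undirected = defaultdict(set)
    for src, dsts in adjacency.items():
        for dst in dsts:
            undirected[src].add(dst)
            undirected[dst].add(src)
    seen, components = set(), []
    for start in list(undirected):
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(undirected[node] - component)
        seen |= component
        components.append(component)
    return components

# Hypothetical sampled paths: the first two overlap, so they merge into one
# subgraph; the third has no overlap and forms a separate subgraph.
paths = [["N1", "N2", "N4"], ["N2", "N4", "N6"], ["N7", "N8"]]
print(connected_subgraphs(build_bookkeeping_graph(paths)))
```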
In one example, the bookkeeping graph is a node instance bookkeeping graph. In such an example, the testing information is associated with the nodes of the bookkeeping graph. In another example, the bookkeeping graph is an edge instance bookkeeping graph. In such an example, the testing information is associated with the edges of the bookkeeping graph. In one or more examples, based on a determination that the training path data 214 and the test path data 218 are associated with nodes, a node based bookkeeping graph is generated. In one example, the training path data 214 and the test path data 218 include an identifier or another type of indicator that associates the training path data 214 and the test path data 218 as being associated with nodes. Based on a determination that the training path data 214 and the test path data 218 are associated with edges, an edge based bookkeeping graph is generated. In one or more examples, based on a determination that the training path data 214 and the test path data 218 are associated with nodes and edges, a node and edge based bookkeeping graph is generated.
The hashing table 224 uses the cell/net instances (or types, or hash values from neighboring nodes) as key entries, so that the occurrences of the cells/instances in the training data set can be recorded and retrieved. In one example, the training data is analyzed to determine each cell (net) instance, cell (net) type, and/or hash values from neighboring nodes. The cell instances, cell types, and hash values are used to build the hashing table 224. A hashing function used to build the hashing table 224 may be selected based on a collision rate and/or other performance parameters. Different hashing functions provide different hashing tables with different performance parameters.
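As an illustration of the key choices, the sketch below records occurrence counts under an instance key, a type key, and a neighborhood-derived hash key; the cell dictionary layout, key functions, and digest length are assumptions made for illustration, and in practice the hashing function would be chosen against collision-rate and other performance parameters as noted above.

```python
import hashlib
from collections import defaultdict

def instance_key(cell):
    return ("instance", cell["name"])

def type_key(cell):
    return ("type", cell["type"])

def neighborhood_key(cell):
    # Stable digest of the cell type together with its input/output
    # neighbor types (a simple stand-in for a neighborhood hash).
    text = "|".join([cell["type"],
                     ",".join(sorted(cell["inputs"])),
                     ",".join(sorted(cell["outputs"]))])
    return ("nbr", hashlib.sha1(text.encode()).hexdigest()[:12])

hashing_table = defaultdict(int)  # key entry -> occurrences in training data

def record(cell, key_fn):
    hashing_table[key_fn(cell)] += 1

cell = {"name": "u_add_17", "type": "NAND2",
        "inputs": ["INV"], "outputs": ["DFF"]}
for key_fn in (instance_key, type_key, neighborhood_key):
    record(cell, key_fn)
print(dict(hashing_table))
```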
In one or more examples, in the bookkeeping graph 222, the keys are nodes, the edges are hash vectors (for local neighborhood hashing), and/or the values are the testing information for bookkeeping. In one example, two or more paths have a shared starting point and a shared ending point. Path data collected along the paths can be cell type, cell name, cell instance, cell hierarchy, labels, wire instances (signal names), and certain operation characteristics (e.g., number of toggles, timing delays), among others. Labels (or target values) that are attributed to the path or subgraph are then attributed to the path or subgraph components and recorded by the bookkeeping graph 222. During testing, nodes or components (from a test node, path, or subgraph) are looked up in the bookkeeping graph 222 and statistics of the associated components are inferred from the bookkeeping graph 222. If the testing nodes or paths are not found in the bookkeeping graph 222, or if a bookkeeping graph is not included, a testing subgraph or path can be reconstructed out of the components from the hashing table 224.
The bookkeeping graph 222 can be reconstructed from the training data, or a hashing table 224 (e.g., a hash table storing the element statistics). In one example, one or more graphlets (e.g., connected subgraphs) of the bookkeeping graph 222 are generated from the training data and testing data 210. Each graphlet is generated with a unique instance identifier (ID) for each element that is obtained from the training data and testing data 210. The instances of each element are extracted from the training data and testing data 210 as a graphlet (or graphlets). The graphlets that are determined to overlap are combined (e.g., pieced together) to form the bookkeeping graph 222. In one or more examples, the netlists of the training or test data 210 are not available, but detailed path data is provided instead. In such examples, the bookkeeping graph 222 can be constructed out of the path data when node data and/or edge data is provided.
In one or more examples, the training path data 214 is unlinked to provide the list of nodes (and edges) along the path, to update the associated statistics in the hashing table 224, and to reconstruct the bookkeeping graph 222. The test path data 218 can be unlinked to provide the list of nodes (and edges) along the path, to retrieve nodes' (and edges') statistics from the hashing table 224, and to reconstruct and update the bookkeeping graph 222.
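A minimal sketch of the unlinking step follows, assuming a path is given as an ordered list of node names together with a pass/fail label; the [pass|fail] counter layout mirrors the worked examples later in this description, and the data format is an illustrative assumption.

```python
from collections import defaultdict

node_stats = defaultdict(lambda: [0, 0])   # node -> [passes, failures]
edge_stats = defaultdict(lambda: [0, 0])   # (src, dst) -> [passes, failures]
bookkeeping_graph = defaultdict(set)       # src -> {dst, ...}

def unlink_and_update(path, passed):
    """Unlink a training path into its node and edge lists, update the
    per-element statistics, and add the edges to the bookkeeping graph."""
    nodes = list(path)
    edges = list(zip(path, path[1:]))
    slot = 0 if passed else 1
    for node in nodes:
        node_stats[node][slot] += 1
    for src, dst in edges:
        edge_stats[(src, dst)][slot] += 1
        bookkeeping_graph[src].add(dst)
    return nodes, edges

unlink_and_update(["N1", "N2", "N4", "N6", "N9"], passed=True)
unlink_and_update(["N1", "N3", "N5", "N9"], passed=False)
print(node_stats["N1"])  # N1 appears in both paths: [1, 1]
```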
Further, in one or more examples, the bookkeeping graph 222 is built out of the training path data 214. For example, the bookkeeping graph 222 is built from the hashing table 224. Separately building the bookkeeping graph 222 can be used in instances where training data and testing data 210 cover a relatively small subset of the original netlist of the circuit design. From the smaller subset, query matching subgraphs for the test subgraph can be completed to generate the bookkeeping graph 222.
At 130 of
In one example, 130 of
The statistical path model 240 can be used to perform predictions by an EDA system in ways similar to that of the statistical graph model 230, using statistics from test data path elements, while the statistical graph model 230 can offer statistics from the neighborhood not captured by the test path.
At 140, test information for a test path connecting test nodes is determined based on the statistical representation. For example, the EDA system (e.g., the computer system 900 of
In one example, determining the test information includes determining statistical information and/or parameter information. In one example, a hashing table (e.g., the hashing table 224) is used to store and retrieve statistical values for an entry (e.g., a node). A hashing table provides fast query and storing operations. In one or more examples, the statistical graph model 230 and/or the statistical path model 240 are used to determine one or more properties of a circuit design. In one or more examples, the properties determined using the statistical graph model 230 and/or statistical path model 240 are computationally expensive to determine. For example, the statistical graph model 230 and/or statistical path model 240 are used to determine a probability of finding a fault within a cell of a circuit design.
In one example, as is described in greater detail in the following, the test path is between nodes N1 and N9 of the bookkeeping graph 300. The nodes N1 and N9 are connected via a first path including nodes N1, N2, N4, N6, N9 and a second path including nodes N1, N3, N5, and N9. Adjacent nodes are connected via an edge of the bookkeeping graph. Each node is associated with statistical information. In other examples, the nodes are associated with circuit parameter information. As illustrated in
In other examples, the combined circuit parameter information for the path between two nodes is determined instead of the combined statistical information. For example, the combined delay or congestion for the path between the two nodes is determined. The combined delay or congestion for the path may be determined via summation, numeric average, weighted average, or multiplication, among others. In one example, the delay associated with each node N1, N2, N4, N6, and N9 is summed to determine a first path delay and the delay associated with each node N1, N3, N5, and N9 is summed to determine a second path delay. A total delay is determined by combining the first and second path delays.
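As a small illustration of the parameter-information case, the sketch below sums hypothetical per-node delays along the two paths and then combines the path delays; the delay values are placeholders, and the final combination is shown as a summation, one of the options listed above.

```python
# Hypothetical per-node delays (in picoseconds); placeholders for illustration.
node_delay_ps = {"N1": 12.0, "N2": 8.5, "N3": 9.0, "N4": 7.2,
                 "N5": 6.4, "N6": 10.1, "N9": 5.0}

first_path = ["N1", "N2", "N4", "N6", "N9"]
second_path = ["N1", "N3", "N5", "N9"]

# Per-path delay by summation of the node delays along each path.
first_path_delay = sum(node_delay_ps[n] for n in first_path)
second_path_delay = sum(node_delay_ps[n] for n in second_path)

# Combine the two path delays (summation shown; average, weighted average,
# or multiplication could be substituted).
total_delay = first_path_delay + second_path_delay
print(first_path_delay, second_path_delay, total_delay)
```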
In one example, the test information for two or more test paths within the circuit design is determined as described above. The different test paths may include one or more common nodes, or include different nodes.
In one or more examples, the test information for a test path is output. The test information may be output as part of a report (e.g., a debug report or another type of test report) for the corresponding circuit design. The report may include the test information for one or more test paths. In one example, the report is output to a display (e.g., the video display unit 910 of
In one example, the bookkeeping graph 300 is a node based bookkeeping graph with node instances provided. In the bookkeeping graph 300, the testing information (e.g., statistical information) is associated with each of the nodes. In one example, a node based bookkeeping graph is determined based on node type being provided for the path data.
In the example of
To predict the statistical information of a path from node N1 to node N9, the statistical information of the nodes along the path from node N1 to node N9 are used. For example, a first path from node N1 to node N9 includes nodes N1, N2, N4, N6, and N9. A second path from node N1 to node N9 includes nodes N1, N3, N5, and N9. The cumulative statistical information of the first path is determined by adding the statistical information of each of the nodes N1, N2, N4, N6, and N9. Further, the cumulative statistical information of the second path is determined by adding the statistical information of each of the nodes N1, N3, N5, and N9. The statistical information of the first path is [9|4] and the statistical information of the second path is [5|3]. The combined statistical information of the first path and the second path is [14|7]. As the statistical information of a pass occurring is twice that of a failure occurring, the statistical information along the path from node N1 to node N9 can be simplified to [1|0].
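The arithmetic above can be expressed in a few lines of Python; the per-node [pass|fail] counts below are hypothetical and are chosen only so that the path totals reproduce the [9|4], [5|3], and [14|7] figures from this example.

```python
# Hypothetical per-node (pass, fail) counts; chosen so the path sums match
# the worked example ([9|4] for the first path and [5|3] for the second).
node_stats = {"N1": (2, 1), "N2": (3, 1), "N4": (1, 1), "N6": (2, 0),
              "N3": (1, 0), "N5": (1, 1), "N9": (1, 1)}

def path_stats(path):
    passes = sum(node_stats[n][0] for n in path)
    failures = sum(node_stats[n][1] for n in path)
    return passes, failures

first = path_stats(["N1", "N2", "N4", "N6", "N9"])       # (9, 4)
second = path_stats(["N1", "N3", "N5", "N9"])            # (5, 3)
combined = (first[0] + second[0], first[1] + second[1])  # (14, 7)
print(first, second, combined)
```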
In one example, node N7 may be determined to be a triggering node, as the pass/failure statistical information associated with node N7 is inconsistent with that of the nodes N2 and N4 that are connected to node N7.
In one or more examples, when the test subgraph is not completely found on the bookkeeping graph (e.g., the bookkeeping graph 300), the test subgraph can be rebuilt out of the graph elements (node/edge) from the test graph, using statistics from the hashing table (e.g., the hashing table 224 of
While
In the example of
In one example, the testing path to be predicted includes edges E2, E5, and E8. The statistical information of the testing path includes a combination of the statistical information of the edges E2, E5, and E8. For example, the statistical information associated with the edge E2 is [3|2], the statistical information associated with the edge E5 is [3|0], and the statistical information associated with edge E8 is [1|0]. In one example, the combined statistical information for the associated test path is [7|2].
In
In one or more examples, the amount of training data is insufficient and covers a small subset of a corresponding circuit graph. In such an example, instance-based bookkeeping has less coverage from training data to test data. Further, as about 10 to about 20 different logic gate types are used in a modern integrated circuit (IC) device, gate type information may be used in the ML process to improve the corresponding coverage. However, using a limited number of basic types places a strong bias on the basic types by assuming one gate type would have one statistical behavior regardless of the corresponding neighborhood. In practice, a gate's statistical behavior can be assumed to be primarily influenced by neighbors (e.g., gates connected to the inputs and outputs of the gate), and the same gates might behave differently within different contexts or neighborhoods. Accordingly, ML techniques that use type information along with neighborhood type information provide a more accurate prediction for a circuit design as compared to ML techniques that do not use neighborhood type information. One way of achieving that is to use the hierarchical type information of the leaf module that houses the gates.
V_c = hash(c, Input(c) | Output(c))   (Equation 1)
Equation 1 is for a 1-hop neighborhood. Equation 1 can be applied recursively for K hops (K is one or more). For example, equation 2 illustrates an example where K=2 (2-hop).
V_c = hash(c, Input(c) | Output(c)), where
Input(c) = hash(Input(c), Input(Input(c)) | Output(Input(c)))
Output(c) = hash(Output(c), Input(Output(c)) | Output(Output(c)))   (Equation 2)
Hash functions or embeddings like equations 1 and 2 can be applied to large netlists and offer an adjustable range for neighborhood discovery.
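A minimal Python sketch of the recursive neighborhood hash of equations 1 and 2 is shown below; the fan-in/fan-out dictionaries, the digest function, and the cell names are illustrative assumptions rather than a representation defined by the disclosure.

```python
import hashlib

def digest(*parts):
    return hashlib.sha1("|".join(parts).encode()).hexdigest()[:12]

def neighborhood_hash(cell, inputs, outputs, hops=1):
    """V_c = hash(c, Input(c) | Output(c)); for hops > 1 the neighbors are
    first replaced by their own (hops - 1) neighborhood hashes, matching the
    recursive form of equation 2 when hops = 2."""
    if hops <= 1:
        in_part = ",".join(inputs.get(cell, []))
        out_part = ",".join(outputs.get(cell, []))
    else:
        in_part = ",".join(neighborhood_hash(n, inputs, outputs, hops - 1)
                           for n in inputs.get(cell, []))
        out_part = ",".join(neighborhood_hash(n, inputs, outputs, hops - 1)
                            for n in outputs.get(cell, []))
    return digest(cell, in_part, out_part)

# Hypothetical fan-in/fan-out maps keyed by cell instance name.
inputs = {"u2": ["u1"], "u3": ["u2"]}
outputs = {"u1": ["u2"], "u2": ["u3"]}
print(neighborhood_hash("u2", inputs, outputs, hops=1))
print(neighborhood_hash("u2", inputs, outputs, hops=2))
```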
The graph can be represented by G = G(V, E), where V and E are the node and edge sets, respectively. The node v ∈ V and edge e ∈ E are of different types, T_v and T_e. Further, there is an associated neighborhood, for example 1-hop: N(v_j) = {v_i} for all (v_i, v_j) ∈ E. The hash function can be defined by equations 3 and 4.
f(v) = h_v ∈ ℝ^d   (Equation 3)
f(T_v) = h_{T_v} ∈ ℝ^d   (Equation 4)
Similarly, a 2-hop neighborhood or other types of neighborhoods, such as up-stream paths, can be described. Then, for a neighborhood of a set of graph elements, N(v) = {v_N, e_N}, the hash can be defined based on equation 5.
Equation 5 assumes the commutative property of the nodes and edges. To make the order of nodes and edges matter, i.e., noncommutative, as for nodes in a path, equation 6 can be used.
In equation 6, i_v and i_e are the indices of the node and edge on the path, respectively. To preserve IO port order, for example for register cell types, the corresponding edge vectors can undergo different numbers of hash operations. For example, IO ports can be assigned a priority p_e, and the hash function is applied to the edge vector p_e times, as illustrated in equation 7.

h_e = f^(p_e)(h_e)   (Equation 7)
Equation 7 can be simplified as illustrated in equation 8, where the repeated hashing is replaced by a scaling of the edge hash vector by the priority.

h_e = (p_e)·h_e   (Equation 8)
In other examples, other hash schemes can be designed, such as rotating the vector by different angles and translating the vector by different displacements, among others. The other schemes ensure that the IO port order differences can be reflected by the final hash vector. Ensuring that the IO port order differences are reflected can be guaranteed if all possible combinations can be verified against collisions in hashing.
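The sketch below illustrates two such schemes, assuming a hypothetical per-port priority: re-hashing the edge vector a priority-dependent number of times (in the spirit of equation 7) and rotating the byte vector by a priority-dependent shift. The priority assignment and the vector representation are assumptions made for illustration.

```python
import hashlib

def base_hash(edge_name):
    return hashlib.sha1(edge_name.encode()).digest()  # 20-byte vector

def rehash_by_priority(edge_name, priority):
    """Apply the hash function `priority` times so that the same net reached
    through different IO ports yields different vectors."""
    vector = base_hash(edge_name)
    for _ in range(priority):
        vector = hashlib.sha1(vector).digest()
    return vector

def rotate_by_priority(edge_name, priority):
    """Alternative scheme: rotate the byte vector by a port-dependent shift."""
    vector = base_hash(edge_name)
    shift = priority % len(vector)
    return vector[shift:] + vector[:shift]

# Hypothetical register-cell ports with different priorities (e.g., D vs. CLK).
print(rehash_by_priority("net_42", 1) != rehash_by_priority("net_42", 2))
print(rotate_by_priority("net_42", 1) != rotate_by_priority("net_42", 2))
```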
During training, for labels associated with a neighborhood, such as a path or paths, or just a node or edge, the statistics are updated for the associated hash vector(s). Causes (e.g., triggering nodes/edges) for certain labels are determined based on abnormal statistics. In one example, the class probability (statistical information) for a test neighborhood can be defined as illustrated in equation 10.
P(v_N = y) = lookup(h_{v_N}, y)   (Equation 10)
Equation 10 assumes the probability on the graph is accumulative, i.e., related to the associative and commutative properties of the contributions of a component to a given target. Behaviors that are not associative or commutative can be captured by the neighborhood hash vectors. Also, multiplication can be used if the class probability is the joint probability of events along the paths.
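The following sketch shows the accumulative form of this lookup, assuming hypothetical hash keys and labels: label counts are recorded per component key during training, and the class probability of a test neighborhood is inferred by accumulating the recorded counts over its component keys.

```python
from collections import defaultdict

label_counts = defaultdict(lambda: defaultdict(int))  # hash key -> {label: count}

def train(component_keys, label):
    """Bookkeeping step: attribute the path/subgraph label to each component."""
    for key in component_keys:
        label_counts[key][label] += 1

def class_probability(component_keys, label):
    """Accumulative lookup: combine the recorded counts of the components."""
    hits = sum(label_counts.get(key, {}).get(label, 0) for key in component_keys)
    total = sum(count
                for key in component_keys
                for count in label_counts.get(key, {}).values())
    return hits / total if total else 0.0

train(["h_a", "h_b", "h_c"], "pass")
train(["h_b", "h_c"], "fail")
print(class_probability(["h_b", "h_c"], "pass"))  # 0.5 with these counts
```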
In one or more examples, the results or statistics reported by the GCP model can be used as features for other ML methods. Further, special paths, such as certain long paths of logic gates, can be identified and selected for hashing to compensate for the limited local topologies from 1-hop, 2-hop, or n-hop neighborhoods. In one or more examples, user engineered paths or subgraph types can be used for bookkeeping. In one example, a bookkeeping graph, a training graph, and a testing subgraph can be the same graph. In other examples, a test netlist can be converted directly into a bookkeeping graph and then updated by statistics from the training netlist. In this way, a more complete neighborhood can be considered for the testing subgraph (path). In one example, when building the type or instance bookkeeping graph, strict rules can be applied to ensure that signals flow in the same direction in the bookkeeping graph as they do on the original netlist. In some applications, such as molecule classification or module classification from netlists, where the training labels are assigned to the whole graph instead of parts of the training graph, the same statistical approach can still be applied. Further, when establishing a node's neighborhood, up-stream and/or down-stream elements on signal paths can be considered. The choice of up-stream or down-stream depends on which elements logically contribute to the labels at the measurement graph element.
Specifications for a circuit or electronic structure may range from low-level transistor material layouts to high-level description languages. A high-level of representation may be used to design circuits and systems, using a hardware description language (‘HDL’) such as VHDL, Verilog, SystemVerilog, SystemC, MyHDL or OpenVera. The HDL description can be transformed to a logic-level register transfer level (‘RTL’) description, a gate-level description, a layout-level description, or a mask-level description. Each lower representation level that is a more detailed description adds more useful detail into the design description, for example, more details for the modules that include the description. The lower levels of representation that are more detailed descriptions can be generated by a computer, derived from a design library, or created by another design automation process. An example of a specification language at a lower level of representation language for specifying more detailed descriptions is SPICE, which is used for detailed descriptions of circuits with many analog components. Descriptions at each level of representation are enabled for use by the corresponding systems of that layer (e.g., a formal verification system). A design process may use a sequence depicted in
During system design 814, functionality of an integrated circuit to be manufactured is specified. The design may be optimized for characteristics such as power consumption, performance, area (physical and/or lines of code), and reduction of costs, etc. Partitioning of the design into different types of modules or components can occur at this stage.
During logic design and functional verification 816, modules or components in the circuit are specified in one or more description languages and the specification is checked for functional accuracy. For example, the components of the circuit may be verified to generate outputs that match the requirements of the specification of the circuit or system being designed. Functional verification may use simulators and other programs such as testbench generators, static HDL checkers, and formal verifiers. In some embodiments, special systems of components referred to as ‘emulators’ or ‘prototyping systems’ are used to speed up the functional verification.
During synthesis and design for test 818, HDL code is transformed to a netlist. In some embodiments, a netlist may be a graph structure where edges of the graph structure represent components of a circuit and where the nodes of the graph structure represent how the components are interconnected. Both the HDL code and the netlist are hierarchical articles of manufacture that can be used by an EDA product to verify that the integrated circuit, when manufactured, performs according to the specified design. The netlist can be optimized for a target semiconductor manufacturing technology. Additionally, the finished integrated circuit may be tested to verify that the integrated circuit satisfies the requirements of the specification.
During netlist verification 820, the netlist is checked for compliance with timing constraints and for correspondence with the HDL code. During design planning 822, an overall floor plan for the integrated circuit is constructed and analyzed for timing and top-level routing.
During layout or physical implementation 824, physical placement (positioning of circuit components such as transistors or capacitors) and routing (connection of the circuit components by multiple conductors) occurs, and the selection of cells from a library to enable specific logic functions can be performed. As used herein, the term ‘cell’ may specify a set of transistors, other components, and interconnections that provides a Boolean logic function (e.g., AND, OR, NOT, XOR) or a storage function (such as a flipflop or latch). As used herein, a circuit ‘block’ may refer to two or more cells. Both a cell and a circuit block can be referred to as a module or component and are enabled as both physical structures and in simulations. Parameters are specified for selected cells (based on ‘standard cells’) such as size and made accessible in a database for use by EDA products.
During analysis and extraction 826, the circuit function is verified at the layout level, which permits refinement of the layout design. During physical verification 828, the layout design is checked to ensure that manufacturing constraints are correct, such as DRC constraints, electrical constraints, lithographic constraints, and that circuitry function matches the HDL design specification. During resolution enhancement 830, the geometry of the layout is transformed to improve how the circuit design is manufactured.
During tape-out, data is created to be used (after lithographic enhancements are applied if appropriate) for production of lithography masks. During mask data preparation 832, the ‘tape-out’ data is used to produce lithography masks that are used to produce finished integrated circuits.
A storage subsystem of a computer system (such as computer system 900 of
The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 900 includes a processing device 902, a main memory 904 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 906 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 918, which communicate with each other via a bus 930.
Processing device 902 represents one or more processors such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 902 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 902 may be configured to execute instructions 926 for performing the operations and steps described herein.
The computer system 900 may further include a network interface device 908 to communicate over the network 920. The computer system 900 also may include a video display unit 910 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 912 (e.g., a keyboard), a cursor control device 914 (e.g., a mouse), a graphics processing unit 922, a signal generation device 916 (e.g., a speaker), a video processing unit 928, and an audio processing unit 932.
The data storage device 918 may include a machine-readable storage medium 924 (also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions 926 or software embodying any one or more of the methodologies or functions described herein. The instructions 926 may also reside, completely or at least partially, within the main memory 904 and/or within the processing device 902 during execution thereof by the computer system 900, the main memory 904 and the processing device 902 also constituting machine-readable storage media.
In some implementations, the instructions 926 include instructions to implement functionality corresponding to the present disclosure. While the machine-readable storage medium 924 is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine and the processing device 902 to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm may be a sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Such quantities may take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present disclosure, it is appreciated that throughout the description, certain terms refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various other systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.
In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. Where the disclosure refers to some elements in the singular tense, more than one element can be depicted in the figures and like elements are labeled with like numerals. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
This application claims the benefit of U.S. provisional patent application Ser. No. 63/392,436, filed Jul. 26, 2022, which is hereby incorporated herein by reference.