This disclosure relates generally to neural network searching and, more particularly, to methods, systems, articles of manufacture and apparatus to improve neural architecture searches.
In recent years, neural networks have emerged with a vast number of different configurations and types. Some neural network configurations may exhibit particular capabilities that are better or worse than other neural network configurations. Typically, neural networks have a particular number of layers, take particular data as input, apply particular weights, and apply particular bias values to their outputs.
In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. The figures are not to scale. As used herein, connection references (e.g., attached, coupled, connected, and joined) may include intermediate members between the elements referenced by the connection reference and/or relative movement between those elements unless otherwise indicated. As such, connection references do not necessarily imply that two elements are directly connected and/or in fixed relation to each other. As used herein, stating that any part is in “contact” with another part is defined to mean that there is no intermediate part between the two parts.
As used herein, “approximately” and “about” modify their subjects/values to recognize the potential presence of variations that occur in real world applications. For example, “approximately” and “about” may modify dimensions that may not be exact due to manufacturing tolerances and/or other real world imperfections as will be understood by persons of ordinary skill in the art. For example, “approximately” and “about” may indicate such dimensions may be within a tolerance range of +/−10% unless otherwise specified in the below description. As used herein, “substantially real time” refers to occurrence in a near instantaneous manner recognizing there may be real world delays for computing time, transmission, etc.
As used herein, the phrase “in communication,” including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
As used herein, “processor circuitry” is defined to include (i) one or more special purpose electrical circuits structured to perform specific operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors), and/or (ii) one or more general purpose semiconductor-based electrical circuits programmable with instructions to perform specific operations and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors). Examples of processor circuitry include programmable microprocessors, Field Programmable Gate Arrays (FPGAs) that may instantiate instructions, Central Processor Units (CPUs), Graphics Processor Units (GPUs), Digital Signal Processors (DSPs), XPUs, or microcontrollers and integrated circuits such as Application Specific Integrated Circuits (ASICs). For example, an XPU may be implemented by a heterogeneous computing system including multiple types of processor circuitry (e.g., one or more FPGAs, one or more CPUs, one or more GPUs, one or more DSPs, etc., and/or a combination thereof) and application programming interface(s) (API(s)) that may assign computing task(s) to whichever one(s) of the multiple types of processor circuitry is/are best suited to execute the computing task(s).
An architecture of a neural network (NN) (sometimes referred to as an “architecture,” a “network architecture” or a “NN architecture”) includes many different parameters. As such, a particular combination of network characteristics and/or parameters is referred to as a type of architecture, which may include a particular combination of layer and/or activation types to perform particular tasks, or may include a particular combination of operations represented in the form of a computational graph. Parameters that make up a NN include, but are not limited to, a number of layers of the NN, a number of nodes within each layer of the NN, a type of operation(s) performed with the NN (e.g., convolutions), and a particular kernel size/dimension (e.g., 3×3). In the event a NN architecture is built and/or otherwise generated, the NN is expected to accomplish some sort of objective, such as image recognition, character recognition, etc. Still further, different NNs exhibit different performance characteristics (e.g., accuracy, latency, power consumption, memory bandwidth) that can be influenced by the type of input data to be processed and/or the type of computational resource that executes the NN.
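The architecture parameters enumerated above can be bundled into a single record for comparison during a search. The following is a minimal sketch; the class and field names are illustrative and not part of this disclosure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ArchitectureConfig:
    """Illustrative bundle of NN architecture parameters."""
    num_layers: int
    nodes_per_layer: tuple       # e.g., (64, 128, 64) nodes in each layer
    operation: str               # e.g., "convolution"
    kernel_size: tuple = (3, 3)  # e.g., a 3x3 kernel dimension

    def key(self):
        # Compact tuple useful for deduplicating candidate architectures.
        return (self.num_layers, self.nodes_per_layer,
                self.operation, self.kernel_size)
```

Because the record is frozen (hashable), candidate configurations can be stored in sets or used as dictionary keys, which is convenient when withholding previously evaluated combinations from a search.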
Artificial intelligence (AI), including machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic, enables machines (e.g., computers, logic circuits, etc.) to use a model to process input data to generate an output based on patterns and/or associations previously learned by the model via a training process. For instance, the model may be trained with data to recognize patterns and/or associations and follow such patterns and/or associations when processing input data such that other input(s) result in output(s) consistent with the recognized patterns and/or associations.
Many different types of machine learning models and/or machine learning architectures exist. In general, implementing a ML/AI system typically involves two phases: a learning/training phase and an inference phase. In the learning/training phase, a training algorithm is used to train a model to operate in accordance with patterns and/or associations based on, for example, training data. In general, the model includes internal parameters, such as a series of nodes and connections within the model, that guide how input data is transformed into output data. Additionally, hyperparameters are used as part of the training process to control how the learning is performed (e.g., a learning rate, a number of layers to be used in the machine learning model, etc.). Hyperparameters are defined to be training parameters that are determined prior to initiating the training process.
Different types of training may be performed based on the type of ML/AI model and/or the expected output. For example, supervised training uses inputs and corresponding expected (e.g., labeled) outputs to select parameters (e.g., by iterating over combinations of select parameters) for the ML/AI model that reduce model error. As used herein, labelling refers to an expected output of the machine learning model (e.g., a classification, an expected output value, etc.). Alternatively, unsupervised training (e.g., used in deep learning, a subset of machine learning, etc.) involves inferring patterns from inputs to select parameters for the ML/AI model (e.g., without the benefit of expected (e.g., labeled) outputs). In examples disclosed herein, once training is complete, one or more models are deployed for use as an executable construct that processes an input and provides an output based on the network of nodes and connections defined in the model. The model(s) may be stored at any storage location and then executed by a computing device and/or platform.
Once trained, the deployed model may be operated in an inference phase to process data. In the inference phase, data to be analyzed (e.g., live data) is input to the model, and the model executes to create an output. This inference phase can be thought of as the AI “thinking” to generate the output based on what it learned from the training (e.g., by executing the model to apply the learned patterns and/or associations to the live data). In some examples, input data undergoes pre-processing before being used as an input to the machine learning model. Moreover, in some examples, the output data may undergo post-processing after it is generated by the AI model to transform the output into a useful result (e.g., a display of data, an instruction to be executed by a machine, etc.).
In some examples, output of the deployed model may be captured and provided as feedback. By analyzing the feedback, an accuracy of the deployed model can be determined. If the feedback indicates that the accuracy of the deployed model is less than a threshold or other criterion, training of an updated model can be triggered using the feedback and an updated training data set, hyperparameters, etc., to generate an updated, deployed model.
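The retraining trigger described above can be sketched as a threshold comparison over observed feedback; the function name and the 0.90 default are assumed placeholders, not values from this disclosure:

```python
def needs_retraining(feedback_accuracies, threshold=0.90):
    """Decide whether deployed-model feedback indicates accuracy below a
    threshold, which would trigger training of an updated model."""
    if not feedback_accuracies:
        return False  # no feedback yet, nothing to evaluate
    observed = sum(feedback_accuracies) / len(feedback_accuracies)
    return observed < threshold
```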
In view of the relatively large number of different influences, determining an optimal NN may consume a similarly large amount of time. As used herein, “optimal” refers to a particular performance improvement (e.g., improved accuracy, improved speed, improved (e.g., lower) power consumption, etc.) that satisfies a threshold change from a prior NN architecture configuration. In some examples, an optimum NN is based on a performance improvement that exhibits diminishing returns from one iteration to the next, such as an accuracy metric that does not improve by more than one percentage point, becomes asymptotically stagnant, or does not change from one iteration to the next. Further still, the number of potential permutations of different NN architecture design choices renders the task of determining an optimal NN (sometimes referred to as a Neural Architecture Search (NAS)) impossible for human effort without computational assistance.
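The diminishing-returns criterion above (e.g., an accuracy metric that does not improve by more than one percentage point from one iteration to the next) might be checked as follows; the helper name is hypothetical, and accuracy values are expressed in percentage points:

```python
def has_converged(accuracy_history, min_gain_pct=1.0):
    """Return True when the latest iteration improved accuracy by no more
    than min_gain_pct percentage points over the previous iteration,
    i.e., the search exhibits diminishing returns."""
    if len(accuracy_history) < 2:
        return False  # need at least two iterations to compare
    gain = accuracy_history[-1] - accuracy_history[-2]
    return gain <= min_gain_pct
```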
Efforts to perform a NAS to identify optimized and/or otherwise candidate NN architectures (or particular combinations of NN architecture characteristics) are typically focused on a specified hardware platform (e.g., a target platform type) for a given task. However, such efforts do not consider particular NN architecture parameters and/or characteristics that could be relevant prior to beginning the search effort. Stated differently, such search efforts begin the search task without parameter/characteristic granularity and/or the beneficial foresight of previously executed search efforts that may represent helpful starting search conditions that, if applied, ultimately reduce an amount of time to identify an optimized and/or otherwise candidate NN architecture. Additionally, such search efforts typically fail to consider particular starting search conditions that are known to work very poorly in a particular circumstance. For instance, in the event prior search efforts have observed that a particular task with particular adjacency relationships that executes on a particular hardware platform (e.g., a particular hardware platform type, such as a PC, a server with a particular processor type, a rack with GPUs, etc.) performs rather poorly (e.g., in view of performance metric thresholds), then search efforts can be improved (e.g., occur faster and with greater accuracy) in the event those particular architectural parameter combinations are not suggested or tested, but are instead labeled as poor and/or otherwise withheld from evaluation in the NAS. In some examples, particular features (e.g., NN characteristics and/or parameters) that are labeled as relatively poor performers for a given task and/or platform combination are beneficial for network analyzers as inputs for one or more search algorithms because the labeled inputs facilitate efficient determinations of what features/combinations do not work particularly well.
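Withholding known-poor combinations from a search, as described above, amounts to filtering the candidate pool against previously labeled failures. A minimal sketch, assuming the knowledge is keyed by (task, platform, feature) triples with illustrative labels:

```python
def prune_candidates(candidates, knowledge):
    """Drop candidates whose (task, platform, feature) combination was
    previously labeled a poor performer, so the NAS never re-tests them."""
    poor = {key for key, label in knowledge.items() if label == "poor"}
    return [c for c in candidates
            if (c["task"], c["platform"], c["feature"]) not in poor]

# Illustrative prior observations (labels and keys are hypothetical).
knowledge = {
    ("vehicle_id", "gpu", "sep_conv_3x3"): "poor",
    ("vehicle_id", "gpu", "conv_5x5"): "good",
}
```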
This is in contrast to existing techniques in which a network analyzer or NAS search effort takes candidate input data as an unlabeled opportunity for a solution, when in fact some of the inputs are poor choices. As such, if those particular unlabeled starting search conditions are used as inputs during a traditional NAS attempt, using ineffective or poorly performing NN architecture parameters causes the NAS to take longer to execute (e.g., more iterations will be required to reach a convergence point than would otherwise be needed if the particular iterations were withheld from consideration).
Examples disclosed herein improve neural architecture searches by, in part, initiating and/or otherwise establishing search efforts using starting conditions that have a relatively higher probability of being relevant, thereby reducing a time needed for the search. Additionally, because examples disclosed herein explicitly identify particular NN architecture parameters that are known to be ineffective and/or otherwise cause poor NN performance, such parameters are labeled to aid in machine learning analysis by a network analyzer.
The example analysis platform 102 includes example reference network selection circuitry 114, example dataset analyzer circuitry 116, an example network knowledge database 118, example network comparison circuitry 120, example features extraction circuitry 122, example benchmark evaluation circuitry 124, example network analysis circuitry 126, example similarity verification circuitry 128, example likelihood verification circuitry 130 and example architecture modification circuitry 132.
In operation, the example reference network selection circuitry 114 retrieves, receives, accesses and/or otherwise obtains candidate task information, associated task dataset information, target platform characteristics information (e.g., characteristics indicative of a particular target platform type), and in some examples, constraint information. Constraint information includes, but is not limited to, particular types of hardware platforms and/or particular metrics (e.g., latency, accuracy), such as metrics to be associated with contractual performance expectations when an architecture is operating on a platform. For instance, if a user is not interested in NNs that have an accuracy value lower than a particular threshold, then that particular value may be used as a constraint. In some examples, the aforementioned information is obtained from a user of the example NAS system 100, which may render one or more user interfaces (UIs) (e.g., a graphical user interface) to accept different types of input from users. In some examples, the NAS system 100 enables users to enter target destination storage locations in which the example dataset information 106 is stored, where the example task information 108 is stored, where the example target hardware information 110 is stored and/or where the example constraint information 112 is stored. In some examples, the NAS system 100 enables user entry of some information, such as information related to the target hardware and/or constraint information. In some examples, the network analysis platform 102 is communicatively connected to computing resources that execute NNs such that the example network analysis platform 102 scans such resources to determine their capabilities (e.g., processor operating frequency) and/or components (e.g., type of processor, amount and/or type of memory, number of processor cores, etc.).
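The accuracy-constraint example above can be sketched as a filter applied to candidate architectures before a search begins; the field names and thresholds are hypothetical:

```python
def apply_constraints(candidates, min_accuracy=None, max_latency_ms=None):
    """Discard candidate architectures that violate user-supplied
    constraints, e.g., a minimum acceptable accuracy value."""
    kept = []
    for cand in candidates:
        if min_accuracy is not None and cand["accuracy"] < min_accuracy:
            continue  # below the user's accuracy floor
        if max_latency_ms is not None and cand["latency_ms"] > max_latency_ms:
            continue  # exceeds the user's latency ceiling
        kept.append(cand)
    return kept
```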
The example dataset analyzer circuitry 116 extracts dataset information from the example dataset information database 106. In some examples, the dataset analyzer circuitry 116 determines any number of characteristics (features) associated with the dataset information to be processed by a network, such as a dataset size and/or a dataset type (e.g., CIFAR-10, ImageNet, a custom dataset, etc.). The example reference network selection circuitry 114 proceeds to select an initial starting point for a neural architecture search in a manner that utilizes available historical information and operational information that the NN is expected to experience. In particular, the example reference network selection circuitry 114 invokes the example similarity verification circuitry 128 to determine whether the network knowledge database 118 includes information corresponding to a prior occurrence of a combination of (a) a same dataset type, (b) a same task type, and (c) a same platform type. To accomplish this determination of a possible prior occurrence of these particular environmental conditions, the example similarity verification circuitry 128 queries the example network knowledge database 118.
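The prior-occurrence determination described above is, at its core, a lookup keyed on the (dataset type, task type, platform type) combination. A minimal sketch, using a dictionary as an in-memory stand-in for the network knowledge database 118:

```python
def find_prior_occurrence(knowledge_db, dataset_type, task_type, platform_type):
    """Return stored architecture information if this exact combination of
    dataset, task, and platform has been observed before, else None."""
    return knowledge_db.get((dataset_type, task_type, platform_type))

# Illustrative entry; the stored fields are hypothetical.
knowledge_db = {
    ("ImageNet", "image_classification", "CPU"): {"layers": 18, "top1": 0.71},
}
```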
The illustrated example of
In some examples, the hardware representation table 210 includes a hardware characteristics column 222 and a corresponding hardware representations column 224. The example hardware representation table 210 includes observed hardware devices that have corresponding performance data, and the example representations column 224 may include details of the observed hardware device. For example, Intel® Cascade Lake (CLX) processors may have different configurations with different numbers of on-board processing cores and cache memory configurations that may be stored in the example representations column 224. In some examples, the information from the hardware representation table 210 is linked to one or more other tables of the example network knowledge database 118 to allow one or more insights to be learned and/or otherwise appreciated. For instance, while the example hardware representation table 210 includes two columns, one or more additional columns may be generated to reveal information corresponding to performance metrics of particular hardware in view of different operation types, different kernel sizes, different input sizes, different network types, etc.
In some examples, the network information table 212 includes a network type column 226, a dataset column 228 and a weights column 230. The example network type column 226 may include any type of observed network types that have been previously executed by particular hardware configurations, particular datasets (e.g., ImageNet), and/or having particular weights. In some examples, the network information table 212 includes one or more additional columns to identify performance metrics for particular configurations.
In some examples, the network knowledge database 118 includes statistical information and/or probability distribution information 232 corresponding to different network configurations and their corresponding performance metrics. In the illustrated example of
The example probability distribution information table 232A also includes any number of rows corresponding to particular task types (example column 234), corresponding configuration settings (example columns 236, 238), and corresponding performance metrics (example columns 240, 242, 244). In the event a same task type observation occurs two or more times (e.g., vehicle identification “A” and vehicle identification “B”), then examples disclosed herein provide insight (e.g., information) regarding which particular architecture configuration exhibits particular (e.g., improved) performance metrics. As such, selections for a particular architecture configuration may occur in a more objective manner (e.g., avoiding human discretionary choices), and/or this information may be used in subsequent search algorithms as a preferred starting point, thereby reducing NAS search times.
Returning to the illustrated example of
In some examples, similar tasks are selected for analysis when exact matches are not available. For instance, if the task of interest includes sport car identification, a closest match determined by the example similarity verification circuitry 128 could be task data corresponding to general vehicle identification. In other words, examples disclosed herein enable a search effort to consider relevant architecture characteristics to be used as seeds so that the search effort is more efficient. In some examples, the likelihood verification circuitry 130 selects one or more layer types (e.g., convolution, depth-wise convolution, separable convolution, feed forward linear, etc.) that exhibit a relatively highest probability metric value that is related to the task of interest or a task of interest that may be deemed similar. In some examples, the likelihood verification circuitry 130 selects one or more activation types (e.g., RELU, GeLU, Softmax, etc.) that exhibit a relatively highest probability metric value that is related to the task of interest or a task of interest that may be deemed similar.
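Selecting the layer or activation type with the relatively highest probability metric value, as described above, reduces to an argmax over the stored distribution. A minimal sketch with illustrative probability values:

```python
def most_likely_choice(probability_table):
    """Pick the option (e.g., a layer type or activation type) with the
    highest stored probability metric for the task of interest."""
    return max(probability_table, key=probability_table.get)

# Illustrative distributions; the probability values are hypothetical.
layer_probs = {"convolution": 0.55, "depthwise_convolution": 0.30,
               "feed_forward_linear": 0.15}
activation_probs = {"RELU": 0.6, "GeLU": 0.3, "Softmax": 0.1}
```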
Examples disclosed herein also consider whether layer modification (e.g., layer pruning) has occurred in the past and a corresponding effect it may have had on one or more performance metrics. In some examples, the architecture modification circuitry 132 queries the network knowledge database 118 to determine if architecture modification information is available. If so, the architecture modification circuitry 132 applies changes to one or more layers by way of, for example, pruning and/or layer substitution. Again, such modifications may be used to establish a seed or starting point for subsequent search efforts to attempt to converge on a search in a more efficient manner. The example reference network selection circuitry 114 may then forward any number of candidate reference architectures to the example network comparison circuitry 120, as described in further detail below.
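The layer-modification step above (e.g., pruning, substitution) might be sketched as simple operations on an ordered list of layer descriptions; the layer names are illustrative:

```python
def prune_layer(layers, index):
    """Remove the layer at the given position, mimicking layer pruning."""
    return layers[:index] + layers[index + 1:]

def substitute_layer(layers, index, replacement):
    """Swap one layer type for another, mimicking layer substitution."""
    return layers[:index] + [replacement] + layers[index + 1:]
```

Either result may then serve as a seed architecture forwarded to subsequent search efforts.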
In view of the one or more candidate reference architectures to be considered for a search effort, the example network comparison circuitry 120 selects one of them. In some examples, the network comparison circuitry 120 generates a Pareto metric and/or Pareto graph to determine relative values of two or more co-existing architecture characteristics. Generally speaking, deciding which architectural configuration may be deemed optimal may also require a compromise between individual performance metrics, and the Pareto metrics can help determine how well the candidate configurations satisfy performance metrics of interest. For example, performance metrics related to accuracy and latency are both typically considered important and/or otherwise valuable when network architecture characteristics are chosen. However, in some circumstances a particular network architecture characteristic combination may perform particularly well with one of those two performance metrics (e.g., accuracy), while performing particularly poorly with the other of those two performance metrics (e.g., latency). As such, the example network comparison circuitry 120 generates Pareto metrics and/or graphs in view of any number of candidate architecture combinations. While the above-identified example considers generating a Pareto metric in view of two architecture characteristics, examples disclosed herein are not limited thereto.
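The accuracy/latency compromise described above is a two-objective Pareto comparison. A minimal sketch that keeps only non-dominated candidates, assuming higher accuracy and lower latency are better; the field names are hypothetical:

```python
def pareto_front(candidates):
    """Return candidates not dominated by any rival: a rival dominates
    when it has accuracy >= and latency <= with at least one strict
    improvement."""
    front = []
    for a in candidates:
        dominated = any(
            b["accuracy"] >= a["accuracy"] and b["latency"] <= a["latency"]
            and (b["accuracy"] > a["accuracy"] or b["latency"] < a["latency"])
            for b in candidates
        )
        if not dominated:
            front.append(a)
    return front
```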
The example network comparison circuitry 120 labels all candidate architecture characteristic combinations based on relative performance when performing a task, such as relative accuracy, latency, power consumption, memory bandwidth, etc. In some examples, the network comparison circuitry 120 applies labels indicative of an overall performance value derived as an aggregate of individual performance metrics (e.g., an aggregate of relative accuracy, relative latency and/or relative power consumption). In some examples, performance metrics are categorized and/or otherwise ranked by the example likelihood verification circuitry 130 on an aggregate level, such as a first tier of performance values that perform in a top percentage (e.g., metrics corresponding to an upper threshold) and a second tier of performance values that perform in a bottom percentage (e.g., metrics corresponding to a lower threshold). When such first tier and second tier values are categorized, their corresponding features are extracted to reveal key guidance on which features, parameters and/or characteristics may cause such first or second tier performance effects. While typical NAS techniques do not consider the granularity of particular features and/or combinations of features as inputs to a network analyzer, examples disclosed herein extract such granularity from historical information related to both past successes and past failures with regard to performance metrics.
In some examples, the network comparison circuitry 120 labels candidate architecture characteristic combinations based on Pareto metrics, such as particular Pareto metrics that do not have values within a lower percentage range. Stated differently, some architecture characteristic combinations exhibit a first performance characteristic (e.g., accuracy) that is within a top threshold percentage (e.g., top 10%) and a second performance characteristic (e.g., latency) that is within a bottom threshold percentage (e.g., bottom 10%). Particular threshold limits may be set to avoid any candidate network architecture combinations when they include a performance metric that resides within a low range (e.g., bottom 20%). In any event, the example network comparison circuitry 120 generates labels for all candidate network architecture combinations, regardless of whether they are considered “best performing” or “worst performing” because both label types are helpful when exploring candidate architectures to ultimately select in a search effort. In particular, knowledge regarding which architecture combinations typically perform poorly when attempting to execute a particular task (e.g., face recognition) is helpful to NAS operating efficiency by avoiding future search attempts for those same architecture combinations.
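Labeling combinations by top and bottom performance tiers, as described above, can be sketched with rank-percentile cutoffs; the 10% defaults echo the example thresholds in the text, and the label strings are illustrative:

```python
def label_by_tier(scored, top_pct=0.10, bottom_pct=0.10):
    """Label each (name, score) pair as 'best performing', 'worst
    performing', or 'mid tier' based on rank percentiles (higher score
    is better)."""
    ordered = sorted(scored, key=lambda item: item[1], reverse=True)
    n = len(ordered)
    top_n = max(1, int(n * top_pct))        # size of the first tier
    bottom_n = max(1, int(n * bottom_pct))  # size of the second tier
    labels = {}
    for rank, (name, _score) in enumerate(ordered):
        if rank < top_n:
            labels[name] = "best performing"
        elif rank >= n - bottom_n:
            labels[name] = "worst performing"
        else:
            labels[name] = "mid tier"
    return labels
```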
In the illustrated example of
In the event an input regarding (a) a dataset type, (b) a task type and (c) a target platform type includes corresponding vetted architecture combinations in the example network knowledge database 118, then those particular combinations may be selected when moving forward with deciding which network architecture to use for a target task type of interest. In some examples, the example features extraction circuitry 122 determines that the network knowledge database 118 includes such matching information (e.g., the network knowledge database 118 has a degree of parity with the input). If so, then future NAS efforts may be reduced or eliminated in favor of using the historical network architecture combinations stored in the network knowledge database 118.
However, in the event the features extraction circuitry 122 determines that the network knowledge database 118 does not have parity with the input, or that there is not substantial overlapping parity, then the example benchmark evaluation circuitry 124 is invoked to execute benchmark tests with characteristics that match the desired input. For example, the benchmark evaluation circuitry 124 initiates and/or otherwise instantiates benchmark tests using the same or similar (a) dataset type(s), (b) task type(s) and (c) target platform type(s). In some examples, even when there is parity between the input and the network knowledge database 118, the benchmark evaluation circuitry 124 may instantiate benchmark tests in the event previously stored historical information in the network knowledge database 118 exceeds a threshold age (e.g., the data/information may be considered “stale”). The example benchmark evaluation circuitry 124 calculates updated performance metrics based on the tests and updates the network knowledge database 118 with that new performance metric information (e.g., latency metrics, accuracy metrics, power consumption metrics, multiply-accumulate (MAC) metrics, floating point operations per second (FLOPS), etc.).
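The staleness check described above might compare a stored record's age against a threshold before re-running benchmark tests; the 30-day default is an assumption, not a value from this disclosure:

```python
from datetime import datetime, timedelta

def needs_rebenchmark(record_timestamp, now=None, max_age_days=30):
    """Return True when stored benchmark data exceeds a threshold age
    and should be refreshed with new benchmark tests."""
    now = now or datetime.now()
    return (now - record_timestamp) > timedelta(days=max_age_days)
```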
Based on benchmark performance results and the architecture characteristics associated therewith and/or based on previously stored performance results and the architecture characteristics associated therewith, the example network comparison circuitry 120 transmits and/or otherwise feeds such seed architecture characteristics and network connectivity information to the example network analysis circuitry 126. In some examples, the network analysis circuitry 126 accepts, receives and/or otherwise retrieves the performance metric information (e.g., latency information, accuracy information, etc.), adjacency information, hardware aware features and/or constraints. The example network analysis circuitry 126, which is sometimes referred to as a network analyzer, extracts any number of patterns using one or more of classical machine learning algorithms, deep neural network trained algorithms (e.g., in a semi-supervised or un-supervised manner), rule-based algorithms, etc. Outputs from the example network analysis circuitry 126 identify candidate architecture characteristics that perform with particular abilities, such as certain architecture characteristics that result in the relatively best performance goals (e.g., least amount of latency with the highest accuracy), or vice-versa. In some examples, the network analysis circuitry 126 generates human readable outputs, such as a sentence stating “For <task 1>, depth-wise separable convolution with a kernel dimension of 3×3 does not perform well on an A100 GPU device.” In some examples, the network analysis circuitry 126 generates one or more visual representations of the output, such as scatterplots, bar-graphs, feature-map plots, etc.
As described above,
In some examples, the reference network selection circuitry 114, the dataset analyzer circuitry 116, the network comparison circuitry 120, the features extraction circuitry 122, the benchmark evaluation circuitry 124, the network analysis circuitry 126, the similarity verification circuitry 128, the likelihood verification circuitry 130, the architecture modification circuitry 132 and/or the network analysis platform 102 is instantiated by processor circuitry executing instructions and/or configured to perform operations such as those represented by the flowcharts of
In some examples, the apparatus includes means for reference network selection, means for dataset analysis, means for network comparison, means for feature extraction, means for benchmark evaluation, means for network analysis, means for similarity verification, means for likelihood verification, and means for architecture modification. For example, the means for reference network selection, the means for dataset analysis, the means for network comparison, the means for feature extraction, the means for benchmark evaluation, the means for network analysis, the means for similarity verification, the means for likelihood verification, and the means for architecture modification may be implemented by respective ones of the reference network selection circuitry 114, the dataset analyzer circuitry 116, the network comparison circuitry 120, the features extraction circuitry 122, the benchmark evaluation circuitry 124, the network analysis circuitry 126, the similarity verification circuitry 128, the likelihood verification circuitry 130, and the architecture modification circuitry 132. In some examples, the aforementioned may be instantiated by processor circuitry such as the example processor circuitry 512 of
While an example manner of implementing the example network analysis platform 102 of
Flowcharts representative of example machine readable instructions, which may be executed to configure processor circuitry to implement the network analysis platform 102 of
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data or a data structure (e.g., as portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers) located at the same or different locations of a network or collection of networks (e.g., in the cloud, in edge devices, etc.). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and/or stored on separate computing devices, wherein the parts when decrypted, decompressed, and/or combined form a set of machine executable instructions that implement one or more operations that may together form a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by processor circuitry, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc., in order to execute the machine readable instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, machine readable media, as used herein, may include machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example operations of
“Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc., may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase “at least” is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms “comprising” and “including” are open ended. The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, or (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B.
Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, or (3) at least one A and at least one B.
As used herein, singular references (e.g., “a,” “an,” “first,” “second,” etc.) do not exclude a plurality. The term “a” or “an” object, as used herein, refers to one or more of that object. The terms “a” (or “an”), “one or more,” and “at least one” are used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., the same entity or object. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
In the event the example network knowledge database 118 includes probability distribution information (block 408), the example likelihood verification circuitry 130 selects a threshold number of architectures that exhibit a relatively highest probability value (e.g., a probability value corresponding to a highest expected performance metric of interest) (block 410). The example likelihood verification circuitry 130 also selects architectures that exhibit a relatively lowest probability value (e.g., a probability value corresponding to a lowest expected performance metric of interest) so that machine learning techniques, provided with both high-performing and low-performing examples, can more quickly converge on desired network architectures that ultimately perform the best. Additionally, the likelihood verification circuitry 130 selects one or more layer types (e.g., convolution, depth-wise convolution, separable convolution, feed forward linear, etc.) that exhibit a relatively highest probability metric value that is related to the task of interest or a task of interest that may be deemed similar (block 412) (e.g., similar to a target task of interest that prompted the NAS effort). Further, the example likelihood verification circuitry 130 selects one or more activation types (e.g., RELU, GeLU, Softmax, etc.) that exhibit a relatively highest probability metric value that is related to the task of interest or a task of interest that may be deemed similar (block 414).
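By way of illustration only, the tier selection of blocks 410 through 414 may be sketched in Python as follows. The function, architecture names, and probability values are hypothetical placeholders and do not limit the disclosed circuitry:

```python
def select_tiers(candidates, k):
    """Partition candidate architectures into a first tier exhibiting
    relatively highest probability values and a second tier exhibiting
    relatively lowest probability values, so that a downstream learner
    receives both positive and negative examples.

    candidates: dict mapping architecture identifier -> probability value
    k: threshold number of architectures to retain per tier
    """
    ranked = sorted(candidates.items(), key=lambda item: item[1], reverse=True)
    first_tier = ranked[:k]    # relatively highest probability values
    second_tier = ranked[-k:]  # relatively lowest probability values
    return first_tier, second_tier

# Hypothetical probability values for illustration only.
candidates = {"net_a": 0.91, "net_b": 0.40, "net_c": 0.77, "net_d": 0.12}
top, bottom = select_tiers(candidates, k=1)
```

A similar ranking may be applied per layer type or per activation type rather than per whole architecture, consistent with blocks 412 and 414.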
The example architecture modification circuitry 132 determines whether the network knowledge database 118 includes architectural modification information (block 416), such as pruning modifications. If so, the architecture modification circuitry 132 applies changes to one or more layers by way of, for example, pruning and/or layer substitution (block 418). The example reference network selection circuitry 114 forwards any number of candidate reference architectures to the example network comparison circuitry 120 (block 420), and control advances to block 308 of
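The architectural modification of block 418 may be illustrated by the following sketch, in which a candidate architecture is represented as an ordered list of layer names. The layer names and the modification record are hypothetical; an actual implementation may operate on richer architecture representations:

```python
def apply_modifications(layers, modifications):
    """Apply prior modification records (e.g., pruning, layer substitution)
    retrieved from a knowledge database to a candidate architecture."""
    result = []
    for layer in layers:
        action = modifications.get(layer, "keep")
        if action == "prune":
            continue  # pruning drops the layer entirely
        # any other recorded action names a substitute layer type
        result.append(layer if action == "keep" else action)
    return result

layers = ["conv3x3", "relu", "conv3x3", "softmax"]
# Hypothetical modification record: substitute one layer type, prune another.
mods = {"relu": "gelu", "softmax": "prune"}
modified = apply_modifications(layers, mods)  # -> ['conv3x3', 'gelu', 'conv3x3']
```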
Returning to the illustrated example of
The example features extraction circuitry 122 extracts connectivity information from the candidate architecture combinations (block 316), and determines whether there is a need to perform benchmark testing (block 318). For example, if the network knowledge database 118 does not have performance metrics associated with candidate architectures of interest (in which the performance metrics will be helpful in machine learning analysis of the candidate architectures), then the example benchmark evaluation circuitry 124 is invoked to execute benchmark tests (block 320) and calculate performance indicators (block 322). The benchmark evaluation circuitry 124 updates the network knowledge database 118 with the calculated indicators (block 324).
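The benchmark flow of blocks 318 through 324 may be sketched as a cached lookup, so that benchmark tests execute only when the knowledge database lacks metrics for a candidate. The function name and the benchmark callable are hypothetical stand-ins for the circuitry described above:

```python
import time

def get_performance_metrics(knowledge_db, architecture, benchmark_fn):
    """Return performance indicators for an architecture, executing a
    benchmark test and updating the knowledge database only when the
    metrics are missing (blocks 318-324)."""
    if architecture in knowledge_db:
        return knowledge_db[architecture]          # cached metrics available
    start = time.perf_counter()
    accuracy = benchmark_fn(architecture)          # execute benchmark test
    latency = time.perf_counter() - start          # calculate an indicator
    indicators = {"accuracy": accuracy, "latency": latency}
    knowledge_db[architecture] = indicators        # update knowledge database
    return indicators

knowledge_db = {}
# Hypothetical benchmark callable standing in for real benchmark execution.
metrics = get_performance_metrics(knowledge_db, "candidate_net", lambda arch: 0.87)
```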
The example network comparison circuitry 120 transmits and/or otherwise feeds such seed architecture characteristics and network connectivity information to the example network analysis circuitry 126 (block 326), and the example network analysis circuitry 126 extracts any number of patterns using one or more of classical machine learning algorithms, deep neural network trained algorithms (e.g., in a semi-supervised or un-supervised manner), rule-based algorithms, etc. (block 328). The network comparison circuitry 120 (or in some examples the network analysis circuitry 126) generates outputs corresponding to candidate architecture characteristics that perform with particular abilities (block 330), such as certain architecture characteristics that result in the relatively best performance goals (e.g., least amount of latency with the highest accuracy), or vice-versa.
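By way of illustration, a simple rule-based contrast of the labeled feature sets of block 330 might be sketched as follows. The layer names and feature lists are hypothetical, and an actual implementation may instead use classical machine learning or deep neural network trained algorithms as noted above:

```python
from collections import Counter

def extract_patterns(first_tier_features, second_tier_features):
    """Contrast feature frequencies between high-performing and
    low-performing candidate sets; a positive score indicates a feature
    appearing more often among the better performers."""
    good = Counter(f for features in first_tier_features for f in features)
    bad = Counter(f for features in second_tier_features for f in features)
    return {feature: good[feature] - bad[feature] for feature in good | bad}

# Hypothetical per-candidate feature lists for illustration only.
good_nets = [["conv", "relu", "conv"], ["conv", "gelu"]]
bad_nets = [["dense", "relu"], ["dense", "softmax"]]
scores = extract_patterns(good_nets, bad_nets)  # e.g., "conv" scores high
```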
The processor platform 500 of the illustrated example includes processor circuitry 512. The processor circuitry 512 of the illustrated example is hardware. For example, the processor circuitry 512 can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, CPUs, GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The processor circuitry 512 may be implemented by one or more semiconductor based (e.g., silicon based) devices. In this example, the processor circuitry 512 implements the example reference network selection circuitry 114, the example dataset analyzer circuitry 116, the example network comparison circuitry 120, the example features extraction circuitry 122, the example benchmark evaluation circuitry 124, the example network analysis circuitry 126, the example similarity verification circuitry 128, the example likelihood verification circuitry 130, the example architecture modification circuitry 132 and/or, more generally, the example network analysis platform 102 of
The processor circuitry 512 of the illustrated example includes a local memory 513 (e.g., a cache, registers, etc.). The processor circuitry 512 of the illustrated example is in communication with a main memory including a volatile memory 514 and a non-volatile memory 516 by a bus 518. The volatile memory 514 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of RAM device. The non-volatile memory 516 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 514, 516 of the illustrated example is controlled by a memory controller 517.
The processor platform 500 of the illustrated example also includes interface circuitry 520. The interface circuitry 520 may be implemented by hardware in accordance with any type of interface standard, such as an Ethernet interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, a Peripheral Component Interconnect (PCI) interface, and/or a Peripheral Component Interconnect Express (PCIe) interface.
In the illustrated example, one or more input devices 522 are connected to the interface circuitry 520. The input device(s) 522 permit(s) a user to enter data and/or commands into the processor circuitry 512. The input device(s) 522 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system.
One or more output devices 524 are also connected to the interface circuitry 520 of the illustrated example. The output device(s) 524 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuitry 520 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or graphics processor circuitry such as a GPU.
The interface circuitry 520 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) by a network 526. The communication can be by, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, an optical connection, etc.
The processor platform 500 of the illustrated example also includes one or more mass storage devices 528 to store software and/or data. Examples of such mass storage devices 528 include magnetic storage devices, optical storage devices, floppy disk drives, HDDs, CDs, Blu-ray disk drives, redundant array of independent disks (RAID) systems, solid state storage devices such as flash memory devices and/or SSDs, and DVD drives.
The machine readable instructions 532, which may be implemented by the machine readable instructions of
The cores 602 may communicate by a first example bus 604. In some examples, the first bus 604 may be implemented by a communication bus to effectuate communication associated with one(s) of the cores 602. For example, the first bus 604 may be implemented by at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the first bus 604 may be implemented by any other type of computing or electrical bus. The cores 602 may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry 606. The cores 602 may output data, instructions, and/or signals to the one or more external devices by the interface circuitry 606. Although the cores 602 of this example include example local memory 620 (e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor 600 also includes example shared memory 610 that may be shared by the cores (e.g., Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory 610. The local memory 620 of each of the cores 602 and the shared memory 610 may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the main memory 514, 516 of
Each core 602 may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core 602 includes control unit circuitry 614, arithmetic and logic (AL) circuitry (sometimes referred to as an ALU) 616, a plurality of registers 618, the local memory 620, and a second example bus 622. Other structures may be present. For example, each core 602 may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating-point unit (FPU) circuitry, etc. The control unit circuitry 614 includes semiconductor-based circuits structured to control (e.g., coordinate) data movement within the corresponding core 602. The AL circuitry 616 includes semiconductor-based circuits structured to perform one or more mathematic and/or logic operations on the data within the corresponding core 602. The AL circuitry 616 of some examples performs integer based operations. In other examples, the AL circuitry 616 also performs floating point operations. In yet other examples, the AL circuitry 616 may include first AL circuitry that performs integer based operations and second AL circuitry that performs floating point operations. In some examples, the AL circuitry 616 may be referred to as an Arithmetic Logic Unit (ALU). The registers 618 are semiconductor-based structures to store data and/or instructions such as results of one or more of the operations performed by the AL circuitry 616 of the corresponding core 602. For example, the registers 618 may include vector register(s), SIMD register(s), general purpose register(s), flag register(s), segment register(s), machine specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers 618 may be arranged in a bank as shown in
Each core 602 and/or, more generally, the microprocessor 600 may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)) and/or other circuitry may be present. The microprocessor 600 is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages. The processor circuitry may include and/or cooperate with one or more accelerators. In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU or other programmable device can also be an accelerator. Accelerators may be on-board the processor circuitry, in the same chip package as the processor circuitry and/or in one or more separate packages from the processor circuitry.
More specifically, in contrast to the microprocessor 600 of
In the example of
The configurable interconnections 710 of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using an HDL instruction language) to activate or deactivate one or more connections between one or more of the logic gate circuitry 708 to program desired logic circuits.
The storage circuitry 712 of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry 712 may be implemented by registers or the like. In the illustrated example, the storage circuitry 712 is distributed amongst the logic gate circuitry 708 to facilitate access and increase execution speed.
The example FPGA circuitry 700 of
Although
In some examples, the processor circuitry 512 of
A block diagram illustrating an example software distribution platform 805 to distribute software such as the example machine readable instructions 532 of
From the foregoing, it will be appreciated that example systems, methods, apparatus, and articles of manufacture have been disclosed that improve the efficiency of performing neural architecture searches. Examples disclosed herein consider a finer degree of granularity for the inputs of a network analyzer, such that particular features of a candidate architecture are evaluated in view of their corresponding performance effects. Additionally, the granular features are labeled as such so that machine learning and/or artificial intelligence systems can converge on optimum architectures faster.
Example methods, apparatus, systems, and articles of manufacture to improve neural architecture searches are disclosed herein. Further examples and combinations thereof include the following:
Example 1 includes an apparatus including interface circuitry to obtain target task information, and processor circuitry including one or more of at least one of a central processor unit, a graphics processor unit, or a digital signal processor, the at least one of the central processor unit, the graphics processor unit, or the digital signal processor having control circuitry to control data movement within the processor circuitry, arithmetic and logic circuitry to perform one or more first operations corresponding to instructions, and one or more registers to store a result of the one or more first operations, the instructions in the apparatus, a Field Programmable Gate Array (FPGA), the FPGA including logic gate circuitry, a plurality of configurable interconnections, and storage circuitry, the logic gate circuitry and the plurality of the configurable interconnections to perform one or more second operations, the storage circuitry to store a result of the one or more second operations, or Application Specific Integrated Circuitry (ASIC) including logic gate circuitry to perform one or more third operations, the processor circuitry to perform at least one of the first operations, the second operations, or the third operations to instantiate similarity verification circuitry to identify candidate networks based on a combination of a target platform type, a target workload type to be executed by the target platform type, and historical benchmark metrics corresponding to the candidate networks, wherein the candidate networks are associated with performance metrics, likelihood verification circuitry to categorize (a) a first set of the candidate networks based on a first one of the performance metrics corresponding to first tier values, and (b) a second set of the candidate networks based on a second one of the performance metrics corresponding to second tier values, and extract first features corresponding to the first set of the candidate networks and extract 
second features corresponding to the second set of the candidate networks, and network analysis circuitry to perform network analysis by providing the first features and the second features to a network analyzer to identify particular ones of the candidate networks.
Example 2 includes the apparatus as defined in example 1, wherein the likelihood verification circuitry is to identify (a) the first tier values as performance metrics corresponding to an upper threshold and (b) the second tier values as performance metrics corresponding to a lower threshold.
Example 3 includes the apparatus as defined in example 1, further including benchmark evaluation circuitry to initiate benchmarking tests corresponding to operation information extracted from the candidate networks.
Example 4 includes the apparatus as defined in example 3, wherein the benchmark evaluation circuitry is to initiate the benchmarking tests corresponding to the target platform type to determine third performance metrics.
Example 5 includes the apparatus as defined in example 4, wherein the third performance metrics include at least one of latency, accuracy, power consumption or memory bandwidth.
Example 6 includes the apparatus as defined in example 3, wherein the operation information corresponds to at least one of an operation type, a kernel size or an input size.
Example 7 includes the apparatus as defined in example 1, wherein the first and second features include at least one of network adjacency features, layer connection information, or network graph information.
Example 8 includes the apparatus as defined in example 1, further including architecture modification circuitry to query a network knowledge database for prior modification information corresponding to the candidate networks.
Example 9 includes the apparatus as defined in example 8, wherein the architecture modification circuitry is to establish a starting search point by applying changes to the candidate networks.
Example 10 includes an apparatus to identify candidate networks, including at least one memory, machine readable instructions, and processor circuitry to at least one of instantiate or execute the machine readable instructions to determine candidate networks corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate networks, wherein the candidate networks are associated with performance metrics, categorize (a) a first set of the candidate networks based on first values, and (b) a second set of the candidate networks based on second values, identify (a) first features associated with the first set of the candidate networks and (b) second features associated with the second set of the candidate networks, and feed a network analyzer with the first and second features to determine one or more of the candidate networks to be executed with the target platform.
Example 11 includes the apparatus as defined in example 10, wherein the processor circuitry is to cause identification of (a) performance metrics corresponding to an upper threshold as the first values, and (b) performance metrics corresponding to a lower threshold as the second values.
Example 12 includes the apparatus as defined in example 10, wherein the processor circuitry is to initiate benchmarking tests corresponding to operation information extracted from the candidate networks.
Example 13 includes the apparatus as defined in example 12, wherein the processor circuitry is to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics.
Example 14 includes the apparatus as defined in example 13, wherein the processor circuitry is to identify the third performance metrics as at least one of latency, accuracy, power consumption or memory bandwidth.
Example 15 includes the apparatus as defined in example 12, wherein the processor circuitry is to identify operation information as at least one of an operation type, a kernel size or an input size.
Example 16 includes the apparatus as defined in example 10, wherein the first and second features include at least one of network adjacency features, layer connection information, or network graph information.
Example 17 includes the apparatus as defined in example 10, wherein the processor circuitry is to query a network knowledge database for prior modification information corresponding to the candidate networks.
Example 18 includes the apparatus as defined in example 17, wherein the processor circuitry is to initiate a starting search point by applying changes to the candidate networks.
Example 19 includes a non-transitory machine readable storage medium including instructions that, when executed, cause processor circuitry to at least determine candidate networks corresponding to a combination of target platform characteristics, workload characteristics to be executed by a target platform, and historical benchmark metrics characteristics associated with the candidate networks, wherein the candidate networks are associated with performance metrics, categorize (a) a first set of the candidate networks based on first tier values, and (b) a second set of the candidate networks based on second tier values, identify (a) first features associated with the first set of the candidate networks and (b) second features associated with the second set of the candidate networks, and feed a network analyzer with the first and second features to determine one or more of the candidate networks to be executed with the target platform.
Example 20 includes the non-transitory machine readable storage medium as defined in example 19, wherein the instructions, when executed, cause the processor circuitry to identify (a) performance metrics corresponding to an upper threshold as the first tier values, and (b) performance metrics corresponding to a lower threshold as the second tier values.
Example 21 includes the non-transitory machine readable storage medium as defined in example 19, wherein the instructions, when executed, cause the processor circuitry to initiate benchmarking tests corresponding to operation information extracted from the candidate networks.
Example 22 includes the non-transitory machine readable storage medium as defined in example 21, wherein the instructions, when executed, cause the processor circuitry to initiate the benchmarking tests corresponding to the target platform to determine third performance metrics.
Example 23 includes the non-transitory machine readable storage medium as defined in example 22, wherein the instructions, when executed, cause the processor circuitry to identify the third performance metrics as at least one of latency, accuracy, power consumption or memory bandwidth.
Example 24 includes the non-transitory machine readable storage medium as defined in example 21, wherein the instructions, when executed, cause the processor circuitry to identify operation information as at least one of an operation type, a kernel size or an input size.
Example 25 includes the non-transitory machine readable storage medium as defined in example 19, wherein the instructions, when executed, cause the processor circuitry to identify the first and the second features as at least one of network adjacency features, layer connection information, or network graph information.
Example 26 includes the non-transitory machine readable storage medium as defined in example 19, wherein the instructions, when executed, cause the processor circuitry to query a network knowledge database for prior modification information corresponding to the candidate networks.
Example 27 includes the non-transitory machine readable storage medium as defined in example 26, wherein the instructions, when executed, cause the processor circuitry to initiate a starting search point by applying changes to the candidate networks.
The following claims are hereby incorporated into this Detailed Description by this reference. Although certain example systems, methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all systems, methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.