Embodiments described herein generally relate to reducing the resources required to execute a machine learning algorithm and, in a particular embodiment, but not by way of limitation, to consolidating synaptic inputs in an artificial neural network or similar machine learning algorithm.
One of the major roadblocks to executing neural networks on certain systems, such as low SWaP (size, weight, and power) embedded hardware, is the amount of resources required, such as memory, computational power, and timing.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings.
An embodiment of the present disclosure reduces the calculations required to perform a classification in an artificial neural network or similar machine learning algorithm. Specifically, execution of a neural network requires many multiply-and-accumulate operations (i.e., weights×inputs). Because multiplication is more expensive than addition in terms of cycles per instruction, any reduction in the number of multiplications significantly reduces the neural network execution time. Embodiments of the present disclosure replace expensive multiplications with additions, which require fewer cycles to execute. A byproduct of these embodiments is a significant reduction in the amount of memory and bandwidth needed to execute the neural network.
In an embodiment, synaptic input consolidation reduces the number of these multiplications by finding inputs with similar weight values, adding those input values together, and multiplying the sum by a single representative weight value. For each perceptron in a trained dense layer, the weights can be clustered using a one-dimensional (1D) k-means algorithm. Each weight associated with the perceptron is checked to determine to which cluster it belongs. When the correct cluster is found for a given weight, the neural network architecture is modified as described below.
Referring now to FIG. 1, the output Y of a perceptron having five weighted inputs can be expressed as:
Y=I0W0+I1W1+I2W2+I3W3+I4W4
If W0≈W2≈W3, then the following change can be made:
Change Y=I0W0+I1W1+I2W2+I3W3+I4W4 to Y=(I0+I2+I3)Wavg+I1W1+I4W4
wherein Wavg is the average of W0, W2, and W3.
The resulting synaptic structure is illustrated in FIG. 2.
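For illustration only, the following Python snippet checks the change above with made-up input and weight values (the numbers are not taken from the disclosure); the consolidated form uses three multiplications instead of five and produces approximately the same output.

import numpy as np

# Illustrative numbers only: five inputs and five weights,
# where W0, W2, and W3 are approximately equal.
I = np.array([0.5, 1.0, 2.0, -1.5, 3.0])
W = np.array([0.31, 0.90, 0.30, 0.29, -0.40])

# Original form: five multiplications.
y_full = np.dot(I, W)

# Consolidated form: W0, W2, and W3 are replaced by their average,
# so only three multiplications remain.
w_avg = W[[0, 2, 3]].mean()
y_consolidated = (I[0] + I[2] + I[3]) * w_avg + I[1] * W[1] + I[4] * W[4]

# The two outputs are approximately equal; the small difference comes from
# replacing W0, W2, and W3 with their average.
print(y_full, y_consolidated)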
To find weight values that are approximately the same, a Jenks optimization algorithm can be used. The Jenks algorithm reduces the variance within each group of weights and maximizes the variance between groups of weights. The process is illustrated in FIG. 3.
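As an illustration only, the following Python sketch groups a perceptron's weights with a minimal one-dimensional k-means, which the disclosure also names as a clustering option and which pursues the same objective as the Jenks approach (low variance within each group, high variance between groups). The function name and the example weight values are assumptions made for the sketch.

import numpy as np

def cluster_weights_1d(weights, k, iters=50):
    # Minimal 1D k-means used as a stand-in for the Jenks grouping: it reduces
    # the variance within each group of weights while separating the groups.
    weights = np.asarray(weights, dtype=float)
    # Initialize the k centers evenly across the range of weight values.
    centers = np.linspace(weights.min(), weights.max(), k)
    for _ in range(iters):
        # Assign each weight to the nearest center.
        labels = np.argmin(np.abs(weights[:, None] - centers[None, :]), axis=1)
        # Move each center to the average of the weights assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = weights[labels == j].mean()
    return labels, centers

# Example: cluster one perceptron's weights into two groups and print the
# representative (average) weight value of each group.
labels, centers = cluster_weights_1d([0.31, 0.90, 0.30, 0.29, -0.40], k=2)
print(labels, centers)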
Specifically, referring to FIG. 3, at 330, the weights of the synapses in the layer at issue in the neural network are clustered. In an embodiment, a range of the weights of the perceptrons in the layer at issue is determined (that is, a range from the minimum weight value to the maximum weight value). That range is then used to generate two distinct clusters.
At 340, the first cluster is obtained, and at 345, the first synapse in that first cluster is retrieved. At 350, it is determined if the weight of that first synapse is greater than or equal to the minimum value for this cluster and less than or equal to the maximum value in this cluster (355). If the weight value of the synapse falls within the range of the cluster, then at 360 the weight of this synapse is replaced with the average weight value that was calculated for this particular cluster. If the synapse weight is not in the weight range for this cluster, i.e., the “No” branch of either operation 350 or 355 was executed, then it is determined if it is the last synapse for this particular cluster at 370. If it is not, the next synapse for this cluster is retrieved at 375, and operations 350 and 355 are repeated. If it is the last synapse for this cluster, then it is determined at 380 whether this is the last cluster. If it is not, the next cluster is retrieved (385) and operations 345, 350, and 355 are executed for the new cluster and the synapses in the new cluster. If it is the last cluster, the process terminates.
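The loop just described can be sketched in Python as follows; the helper name, the two cluster ranges, and the example weights are assumptions made for illustration and are not elements of the disclosure.

import numpy as np

def consolidate_layer_weights(weights, cluster_ranges):
    # Hypothetical helper sketching the loop above: each cluster is described
    # by a (min, max) range over the weight values.
    original = np.asarray(weights, dtype=float)
    consolidated = original.copy()
    for lo, hi in cluster_ranges:
        # Synapses whose weights fall within this cluster's range.
        in_cluster = (original >= lo) & (original <= hi)
        if np.any(in_cluster):
            # Replace each such weight with the cluster's average weight value.
            consolidated[in_cluster] = original[in_cluster].mean()
    return consolidated

# Example with two clusters spanning the range of the weights.
print(consolidate_layer_weights([0.31, 0.90, 0.30, 0.29, -0.40],
                                [(-0.40, 0.25), (0.25, 0.90)]))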
After the execution of the operations in FIG. 3, the weights of the layer at issue have been consolidated into their cluster averages, reducing the number of multiplications required to execute the layer.
Referring now to FIG. 4, an example process identifies functions in a machine learning algorithm that have similar weighted values and consolidates their inputs.
At a high level, the identification of functions in the machine learning algorithm that have similar weighted values is determined as follows. At 415, the functions are clustered based on the weighted values of the functions, and at 415A, the representative weighted value for a cluster is determined. The representative weighted value can be determined by averaging the weighted values of the functions for the cluster (415B). The clustering can include a one-dimensional k-means algorithm (415C).
At a more detailed level, and as also illustrated in FIG. 4, at 421, a threshold number of clusters is established (in an embodiment, beginning with two clusters).
At 422, the clusters are assigned cluster ranges based on a range of the weighted values and the threshold number. For example, if the weighted values range from −1.0 to 1.0, and the system is on its fourth iteration, there would be five clusters and the ranges for the five clusters could be −1.0 to −0.7 for the first cluster, −0.6 to −0.3 for the second cluster, −0.2 to 0.1 for the third cluster, 0.2 to 0.5 for the fourth cluster, and 0.6 to 1.0 for the fifth cluster. At 423, it is determined into which particular cluster a particular function is placed based on the weighted value of the particular function and the cluster range of the particular cluster. Using the example, if the particular function has a weight of 0.7, it would be placed into the fifth cluster. At 424, an average weighted value is determined for the particular cluster. In an embodiment, this average weighted value is a simple arithmetic average. At 425, the weighted value of the particular function is replaced with the average weighted value for the particular cluster with which the particular function is associated.
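As a rough sketch of this range assignment, the snippet below splits the overall weight range into equal-width cluster ranges (a simplification of the example ranges above) and places a weight of 0.7 into the fifth cluster. The helper name is an assumption made for illustration.

import numpy as np

def assign_cluster(weight, w_min, w_max, num_clusters):
    # Split the overall weight range into num_clusters equal-width ranges and
    # return the index of the range that contains the given weight.
    edges = np.linspace(w_min, w_max, num_clusters + 1)
    # searchsorted finds the first edge greater than the weight; the maximum
    # weight is clamped into the last cluster.
    idx = np.searchsorted(edges, weight, side="right") - 1
    return min(idx, num_clusters - 1)

# With weighted values ranging from -1.0 to 1.0 and five clusters, a weight of
# 0.7 lands in the fifth cluster (index 4), matching the example above.
print(assign_cluster(0.7, -1.0, 1.0, 5))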
After the execution of operations 421-425, the accuracy of the machine learning algorithm is determined (426). Then, at 427, the accuracy of the machine learning algorithm is compared to a previous accuracy of the machine learning algorithm. At 428, the threshold number of clusters is incremented when the accuracy of the machine learning algorithm is not acceptable compared with the previous accuracy. Then, at 429, the system repeats operations 421-425, that is, assigning cluster ranges to the clusters based on the range of the weighted values and the threshold number, determining a particular cluster with which a particular function is associated based on the weighted value of the particular function and the cluster range, determining an average weighted value for the particular cluster, and replacing the weighted value of the particular function with the average weighted value for the particular cluster.
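One way to sketch this iterative portion of the process is shown below. The evaluate and consolidate helpers are hypothetical functions assumed for the sketch (they are not defined in the disclosure), and the tolerance used to judge whether an accuracy drop is acceptable is likewise an assumption.

def consolidate_until_accuracy_holds(model, evaluate, consolidate, tolerance=0.01,
                                     initial_clusters=2, max_clusters=64):
    # Hypothetical helpers assumed by this sketch: evaluate(model) returns an
    # accuracy, and consolidate(model, k) returns a copy of the model whose
    # weights have been replaced by k cluster averages per perceptron.
    baseline = evaluate(model)
    k = initial_clusters
    while k <= max_clusters:
        candidate = consolidate(model, k)
        accuracy = evaluate(candidate)
        # Accept the consolidation when the accuracy loss relative to the
        # previous accuracy is acceptable; otherwise increment the threshold
        # number of clusters and repeat.
        if baseline - accuracy <= tolerance:
            return candidate, k
        k += 1
    return model, None  # no acceptable consolidation found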
At 430, a summation of a plurality of input values for the functions is calculated, and at 440, the summation is multiplied by the representative weighted value. This multiplication generates an output for the functions, which at 441 can be provided to a next level in the machine learning algorithm.
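For illustration, the consolidated computation of operations 430 and 440 can be sketched as follows; the helper name and the example inputs, cluster labels, and representative weighted values are assumptions made for the sketch.

from collections import defaultdict

def consolidated_output(inputs, labels, representatives):
    # Hypothetical helper sketching the consolidated computation for a single
    # perceptron: inputs that share a cluster label are summed first, and each
    # sum is multiplied once by that cluster's representative weighted value.
    sums = defaultdict(float)
    for value, label in zip(inputs, labels):
        sums[label] += value
    return sum(total * representatives[label] for label, total in sums.items())

# Example matching the change shown earlier: inputs 0, 2, and 3 share a cluster,
# so the five multiplications of the original form become three.
inputs = [0.5, 1.0, 2.0, -1.5, 3.0]
labels = [0, 1, 0, 0, 2]                       # cluster assignment per input
representatives = {0: 0.30, 1: 0.90, 2: -0.40}  # representative weighted values
print(consolidated_output(inputs, labels, representatives))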
Example computing platform 500 includes at least one processor 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both, processor cores, compute nodes, etc.), a main memory 504, and a static memory 506, which communicate with each other via a link 508 (e.g., a bus). The computing platform 500 may further include a video display unit 510, input devices 512 (e.g., a keyboard, camera, or microphone), and a user interface (UI) navigation device 514 (e.g., a mouse or touchscreen). The computing platform 500 may additionally include a storage device 516 (e.g., a drive unit), a signal generation device 518 (e.g., a speaker), and an RF-environment interface device (RFEID) 520.
The storage device 516 includes a non-transitory machine-readable medium 522 on which is stored one or more sets of data structures and instructions 524 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 524 may also reside, completely or at least partially, within the main memory 504, static memory 506, and/or within the processor 502 during execution thereof by the computing platform 500, with the main memory 504, static memory 506, and the processor 502 also constituting machine-readable media.
While the machine-readable medium 522 is illustrated in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 524. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
RFEID 520 includes radio receiver circuitry, analog-to-digital conversion circuitry, and interface circuitry to communicate via link 508 according to various embodiments. Various form factors are contemplated for RFEID 520. For instance, RFEID 520 may be in the form of a wideband radio receiver, or a scanning radio receiver, that interfaces with processor 502 via link 508. In one example, link 508 includes a PCI Express (PCIe) bus, including a slot into which the network interface card (NIC) form factor may removably engage. In another embodiment, RFEID 520 includes circuitry laid out on a motherboard together with local link circuitry, processor interface circuitry, other input/output circuitry, memory circuitry, storage device circuitry, peripheral controller circuitry, and the like. In another embodiment, RFEID 520 is a peripheral that interfaces with link 508 via a peripheral input/output port such as a universal serial bus (USB) port. RFEID 520 receives RF emissions over wireless transmission medium 526. RFEID 520 may be constructed to receive RADAR signaling, radio communications signaling, unintentional emissions, or some combination of such emissions.
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, also contemplated are examples that include the elements shown or described. Moreover, also contemplated are examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
Publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) is supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to suggest a numerical order for their objects.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with others. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with a claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.