The present disclosure relates to deep neural networks (DNN), and in particular to protection of DNNs from adversarial attacks.
Deep neural networks (DNNs) have shown substantial success in many practical applications, e.g., image/speech recognition and autonomous driving, achieving high accuracy aided by deep and complex network structures. Although many works have investigated DNN model size reduction, DNNs may still require significant computation and memory resources. As a means to address these computation/memory challenges, in-memory computing (IMC) has been proposed and has shown promising energy-efficiency numbers. While IMC substantially improves the energy efficiency of multiply-and-accumulate (MAC) operations in DNNs, the noise margin may be lower due to the analog nature of the computing and its noise/variability, which may lead to accuracy degradation.
On the other hand, the vulnerability of DNNs against adversarial attacks has been an important issue, where adversaries can manipulate the inputs/weights of DNNs by small amounts and reduce the inference accuracy. Some prior work has shown that the performance of DNNs can be degraded by modifying the inputs of DNNs by a small amount using adversarial algorithms such as projected gradient descent (PGD) and fast gradient sign method (FGSM). These algorithms can iteratively analyze the gradients at different locations in the network topology and use DNN optimization functions to identify the suitable magnitude of change in the input pixels, so that the DNN classifies the input incorrectly.
Some work has claimed to provide a robust defense against such attacks, such as PGD, but that robustness may be obtained mainly due to the presence of obfuscated gradients, e.g., in quantized DNNs. Obfuscated gradients, however, can be circumvented using the backward-pass differentiable approximation (BPDA) technique. Hence, DNNs may be vulnerable to adversarial input attacks, even if they are quantized to low precision.
In addition, adversarial weight attacks are known, where the attacker iteratively identifies the most vulnerable bits of the weights across all DNN layers that lead to large accuracy loss. In some cases, the accuracy of 8-bit DNNs may be reduced to below a random guess by flipping only tens of bits in the entire model. These attacks may make the DNN hardware that stores DNN weights and biases vulnerable.
Embodiments described herein leverage noise and aggressive quantization of in-memory computing (IMC) to provide robust deep neural network (DNN) hardware against adversarial input and weight attacks. IMC substantially improves the energy efficiency of DNN hardware by activating many rows together and performing analog computing. The noisy analog IMC induces some amount of accuracy drop in hardware acceleration, which is generally considered as a negative effect. However, this disclosure demonstrates that such hardware intrinsic noise can, on the contrary, play a positive role in enhancing adversarial robustness.
To achieve this, a new DNN training scheme is proposed that integrates measured IMC hardware noise and aggressive partial sum quantization at the IMC crossbar. It is shown that this effectively improves the robustness of IMC DNN hardware against both adversarial input and weight attacks. Against black-box adversarial input attacks and bit-flip weight attacks, DNN robustness is improved by up to 10.5% (CIFAR-10 accuracy) and 33.6% (number of bit-flips), respectively, compared to conventional DNNs.
An exemplary embodiment provides a method for strengthening a DNN against adversarial attacks. The method includes providing the DNN on IMC hardware; and training the DNN using measured noise of the IMC hardware.
Another exemplary embodiment provides a robust DNN device. The robust DNN device includes IMC hardware; and a memory storing instructions. The instructions are configured to cause the IMC hardware to: implement the DNN; and train the DNN using measured noise of the IMC hardware.
Those skilled in the art will appreciate the scope of the present disclosure and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.
The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure, and together with the description serve to explain the principles of the disclosure.
The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.
It will be understood that the terms “noise” and “noisy” are used herein to refer to the variations in analog signals generated by in-memory computing crossbar circuits due to, for example, variations in the resistances of components included in the in-memory computing crossbar circuit, such as the resistances of bitlines and source lines, which may vary from device to device. Furthermore, the phrase “variation from idealities” may also be used herein to refer to the variations in the analog signals described above.
Embodiments described herein can leverage noise and aggressive quantization of in-memory computing (IMC) to provide deep neural network (DNN) hardware having improved protection against adversarial input and weight attacks. As described herein, IMC can improve the energy efficiency of DNN hardware by activating many rows of data together, which can be combined internally (in the IMC crossbar) to provide an analog signal that represents the result of an operation performed within the DNN. The noisy analog signals generated by the IMC crossbar can cause some accuracy drop in hardware acceleration, which has generally been considered a negative effect. As appreciated by the present inventors, however, such hardware-intrinsic noise can improve the performance of the DNN against an adversarial attack.
In some embodiments according to the inventive concept, a new DNN training scheme is disclosed herein that can integrate measured IMC hardware noise and aggressive partial sum quantization at the IMC crossbar. As described herein, this can effectively improve the robustness of IMC DNN hardware against both adversarial input and weight attacks. For example, in some embodiments according to the inventive concept, against black-box adversarial input attacks and bit-flip weight attacks, DNN robustness can be improved by up to about 10.5% (CIFAR-10 accuracy) and about 33.6% (number of bit-flips), respectively, compared to conventional DNNs.
As disclosed herein, the actual measured hardware noise from IMC prototype chips was used towards enhancing the robustness of DNNs against both adversarial input attacks and weight attacks. Using the input-splitting technique, the effect of aggressively quantizing the partial sums obtained from IMC crossbars on the adversarial robustness was also evaluated. For adversarial input attacks, adversarial training was performed with a continually differentiable exponential linear unit (CELU) activation function for DNNs with 1-bit, 2-bit, and 4-bit activation/weight precision values.
All multiply-and-accumulate (MAC) operations in the convolution and fully-connected layers of such pre-trained DNN models are mapped onto IMC hardware designs for inference. Injecting IMC hardware noise during the DNN training process was also investigated and the adversarial robustness evaluated. For adversarial weight attacks, the effect of IMC hardware noise and of aggressive partial sum quantization via input-splitting on the robustness against bit-flip attacks (BFAs) was evaluated.
In some embodiments according to the inventive concept, up to 10% improvement in the classification accuracy was achieved under black-box adversarial attack when IMC hardware noise and adversarial examples were used to train and test DNNs against adversarial inputs. As also disclosed herein, in some embodiments according to the inventive concept, introducing IMC noise into a conventionally trained DNN during inference led to no degradation or even about 2% improvement in adversarial accuracy. Furthermore, the input-split DNNs with aggressive partial sum quantization improved the robustness against BFA by up to about 30% compared to the conventionally trained DNNs.
Accordingly, embodiments according to the inventive concept can provide improved robustness in the context of black-box adversarial attacks with IMC noise injection during training and testing of DNNs, improvement by injecting noise from actual IMC prototype chips during DNN training, improvement by using the CELU activation function and IMC noise for DNN inference, and improvement by using input-splitting and aggressive partial sum quantization.
A. SRAM-Based In-Memory Computing Hardware Designs
In IMC systems, DNN weights are stored in a crossbar structure, and analog computation is performed typically by applying activations as the voltage from the row side and accumulating the bitwise multiplication result via analog voltage/current on the column side. The analog voltage/current values are quantized into digital values by analog-to-digital converters (ADCs) at the crossbar periphery. This way, vector-matrix multiplication of activation vectors and the stored weight matrices can be computed in a highly parallel manner without memory operation to read out the weights.
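By way of illustration, the following is a minimal NumPy sketch of how a single crossbar column's MAC operation and ADC quantization could be modeled in software; the crossbar size, ADC level count, and noise magnitude are illustrative assumptions rather than measured hardware parameters.

```python
import numpy as np

def imc_column_mac(activations, weights, adc_levels=11, noise_sigma=0.5, rng=None):
    """Model one IMC crossbar column: analog accumulation of bitwise products,
    additive hardware noise, and quantization to a fixed set of ADC levels."""
    rng = rng or np.random.default_rng()
    analog_sum = float(np.dot(activations, weights))   # analog accumulation on the column
    analog_sum += rng.normal(0.0, noise_sigma)         # stand-in for hardware noise/variability
    # Uniform ADC levels over the possible partial-sum range (assumes +/-1 inputs/weights)
    levels = np.linspace(-len(weights), len(weights), adc_levels)
    return levels[np.abs(levels - analog_sum).argmin()]  # nearest ADC level
```

For example, a 256-input column with binary activations and weights would produce one of 11 digital values per dot product under a 3.5-bit (11-level) ADC.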
Embodiments according to the invention may be realized with different types of memory architectures including static random-access memory (SRAM)-based IMC and non-volatile memory (NVM)-based IMC, as disclosed herein. In particular, SRAM has a very high on/off ratio and the SRAM IMC scheme can be implemented in CMOS technology. SRAM IMC schemes can be broadly categorized into resistive and capacitive IMC. Resistive IMC uses the resistive pull-down/pull-up of transistors in the SRAM bit-cell, while capacitive IMC employs additional capacitors in the bit-cell to compute MAC operations via capacitive coupling or charge sharing.
B. Adversarial Input and Weight Attacks
The security analysis of DNNs is dominated by the adversarial input noise attack, popularly known as the adversarial examples attack. Adversarial input attacks can be classified into two major categories: white-box and black-box attacks. In a white-box attack (e.g., PGD, FGSM), the adversary has complete knowledge of the DNN inputs, architecture, and gradients. In contrast, a black-box attack (e.g., a substitute model attack) gives the adversary no access to the DNN internals, leveraging only the input images and output scores of the DNN. As described herein, the adversarial example generation techniques used to evaluate embodiments of the present disclosure are briefly introduced.
1. PGD Attack
Projected gradient descent (PGD) is a popular white-box adversarial input attack. It is one of the strongest $L_\infty$ norm-based attacks, iteratively generating malicious samples $\hat{x}$ from clean (i.e., no noise) samples $x$ with label $y$. At each iteration $t$, PGD follows the update rule:

$$\hat{x}_{t+1} = \hat{x}_t + \alpha \cdot \operatorname{sign}\!\left(\nabla_{x}\,\mathcal{L}\left(f(\hat{x}_t;\theta),\,y\right)\right) \qquad \text{(Equation 1)}$$

where $f(\cdot\,;\cdot)$ is the DNN inference function parameterized by $\theta$, $\mathcal{L}$ is the loss function, $\alpha$ is the step size, and $\hat{x} \in [0,1]$ for normalized input.
A PGD attack generates a universal and strong adversary among first-order approaches (i.e., attacks relying on only first-order gradient information) by adding the gradient sign of the loss function with regard to the input $x$.
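For illustration, a minimal PyTorch sketch of the PGD update in Equation 1 is shown below; the $\epsilon$-ball projection and clamping steps follow standard PGD practice, and the default hyperparameters mirror the values used later in the evaluation ($\epsilon$ = 0.03, $\alpha$ = 2/255, 10 iterations). The model and loss here are placeholders, not the specific networks described in this disclosure.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=2/255, iters=10):
    """Craft adversarial samples by iterating the gradient-sign update of
    Equation 1, projecting back into the L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()     # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)    # project to the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)             # keep the normalized input range
    return x_adv.detach()
```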
2. Substitute Model Attack
Some approaches have demonstrated that non-linear functions of DNNs cause gradient obfuscation (i.e., attacker fails to approximate the true gradient), which causes the white-box attacks to perform poorly. One possible solution to bypass this obfuscation issue is to evaluate defenses against black-box attacks (e.g., substitute model) that do not require any gradient information.
The vulnerability of DNNs against adversarial weight attacks has also been investigated. Among them, the bit-flip attack (BFA) has proven to be the most effective, demonstrating an accuracy collapse of ResNet-18 on ImageNet from 69% to 0.1% by modifying only 13 bits out of 88 million.
3. Bit-Flip Attack (BFA).
BFA integrates progressive search and gradient ranking to identify the vulnerable bits in quantized DNNs. Each attack iteration of BFA follows two steps: i) In-layer search: the attacker picks each layer of the DNN, flips the top $n_b$ bits ranked by gradient (typically $n_b = 1$), and records the inference loss. After evaluating the loss, the attacker restores the original bit state. ii) Cross-layer search: in this step, the attacker picks the layer with the maximum inference loss evaluated in the previous step and performs the bit-flip at that layer. In addition, the DeepHammer attack has demonstrated that the vulnerable bits identified by BFA can be flipped in real hardware through popular fault-injection techniques such as row-hammer. The key advantage of BFA is that quantized networks have been attacked successfully (i.e., lowering accuracy to a random guess), whereas other works show unsuccessful weight attacks on quantized DNNs.
C. Adversarial Defense with Noise Injection and Quantization
One approach to address the challenge of adversarial examples is to train DNNs using adversarial samples, which is known as adversarial training. This optimizes the network with both clean and malicious samples:

$$\min_{\theta}\; \max_{\|\hat{x} - x\|_\infty \le \epsilon} \mathcal{L}\left(f(\hat{x};\theta),\,y\right)$$
Here, the inner maximization generates adversarial samples $\hat{x}$ by maximizing the loss with regard to label $y$, and the outer minimization trains the DNN parameters $\theta$ using the adversarial samples, forming a min-max optimization problem.
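As a hedged illustration of this min-max formulation, the following PyTorch sketch shows one adversarial training step; the attack argument would be a PGD generator such as the one sketched above, and training on an equal mix of clean and adversarial samples is an assumption made for this example rather than a requirement of the scheme.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, attack):
    """One min-max step: the inner maximization crafts adversarial samples,
    and the outer minimization updates the parameters theta on both the
    clean and the adversarial batch."""
    model.eval()
    x_adv = attack(model, x, y)          # inner maximization (adversarial samples)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()                      # outer minimization over theta
    optimizer.step()
    return loss.item()
```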
Other approaches perform adversarial training by injecting noise at both training and inference phases. Injecting noise during training works as a regularizer to prevent DNNs from over-fitting and also aids optimization between clean accuracy (i.e., no attack) and perturbed accuracy (i.e., under attack). However, injecting noise during adversarial training causes gradient obfuscation. Other approaches have instead quantized the DNN weights during training to leverage gradient obfuscation only as a defense tool. On the other hand, aggressive model quantization (i.e., binary weights) has been effective in resisting adversarial weight attack (e.g., BFA), but still may not completely defend against this attack.
In some embodiments according to the inventive concept, the inherent noise/variability of IMC hardware and partial sum quantization at the IMC crossbar granularity are exploited to enhance the robustness of DNNs against adversarial attacks. For example, embodiments according to the inventive concept can utilize the following aspects: PGD-based adversarial training with the smooth CELU activation function (described below), in-training activation and weight quantization for low-precision DNNs (e.g., 1-bit, 2-bit, 4-bit), employing IMC noise for DNN inference and training based on actual IMC prototype chip measurements, and using partial sum quantization (e.g., 1-bit, 2-bit, 3-bit) considering IMC crossbar size, ADC, and input-splitting.
A. Adversarial Training with CELU
Several adversarial attacks utilize the gradients of DNNs to generate adversarial images. Accordingly, the various functions used in DNNs should be continuously differentiable. While the rectified linear unit (ReLU) is one of the most commonly used activation functions, the gradient of ReLU changes abruptly at an input of zero. Such a discontinuity lowers the quality of the gradients, so weaker adversarial examples would be used for adversarial training of DNNs. To make the gradient continuously differentiable, the CELU activation function is employed, which is defined as:

$$\operatorname{CELU}(x) = \max(0, x) + \min\!\left(0,\; \alpha\left(e^{x/\alpha} - 1\right)\right)$$

where $\alpha$ is a scaling parameter.
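As one possible way to apply this in practice, the sketch below swaps ReLU modules for PyTorch's built-in `torch.nn.CELU`; the $\alpha$ value of 1.0 is an assumption for illustration and is not prescribed by this disclosure.

```python
import torch.nn as nn

def replace_relu_with_celu(module, alpha=1.0):
    """Recursively replace ReLU activations with CELU so that the gradients
    used to craft adversarial training examples are continuously differentiable."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.CELU(alpha=alpha))
        else:
            replace_relu_with_celu(child, alpha)
    return module
```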
B. Training DNNs with IMC Quantization and Noise
To train DNNs for inference with very low precision, such as 1-bit, 2-bit, 3-bit, and 4-bit, in-training quantization is used. In IMC hardware targeting low-precision DNN inference, each IMC crossbar performs MAC operations to obtain the partial sum for a fixed number of inputs (e.g., 256-input partial sum), and the partial sums are quantized to a limited number of ADC levels. Due to the hardware noise and variability (e.g., supply noise, mismatch of transistors, wires, and capacitors), the partial sums that have the same MAC value could result in different ADC outputs.
Such hardware noise, obtained from IMC prototype chip measurements, is employed in two ways. First, noise is injected only during DNN inference for pre-trained 1-bit, 2-bit, and 4-bit DNNs. Second, IMC hardware noise is injected during DNN training at the partial sum level (as measured from the IMC prototype chip), so that the DNNs become aware of the noisy quantization of partial sums and adapt their weights accordingly.
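A minimal sketch of the second approach is given below, assuming a straight-through estimator so that training can back-propagate through the noisy quantization; the Gaussian noise term is only a stand-in for the measured chip noise distributions described above, and the ADC level set is illustrative.

```python
import torch

class NoisyPartialSumQuant(torch.autograd.Function):
    """Quantize a partial sum to discrete ADC levels with additive noise.
    The backward pass is a straight-through estimator, so the DNN can adapt
    its weights to the noisy quantization during training."""

    @staticmethod
    def forward(ctx, psum, levels, noise_std):
        noisy = psum + torch.randn_like(psum) * noise_std           # stand-in for measured noise
        idx = torch.argmin((noisy.unsqueeze(-1) - levels).abs(), dim=-1)
        return levels[idx]                                          # nearest ADC level

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None, None                              # straight-through estimator
```

During training, each crossbar-sized partial sum would then be passed through `NoisyPartialSumQuant.apply(psum, levels, noise_std)` before being accumulated into the full sum.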
C. Aggressive Quantization of Partial Sums in IMC Crossbars
An IMC crossbar supports a fixed number of inputs and weights per dot-product computation and generates intermediate analog partial sums. These partial sums are digitized and accumulated outside the IMC crossbar to represent the final output of the layer, also known as a full sum. IMC hardware typically uses multi-bit ADCs to digitize the partial sums computed along a column of the IMC SRAM array, and additional area and energy costs must be spent to accommodate such ADCs.
The input-splitting scheme is used to address this issue of large ADCs in IMC hardware. The input-splitting algorithm divides the convolution and fully-connected layers into groups, where each group has the same number of inputs as the IMC crossbar (e.g., 256) and computes partial sums. In some embodiments according to the inventive concept, during the DNN training process, the partial sums are aggressively quantized (e.g., to 1-bit, 2-bit, 3-bit, or 4-bit values), and the DNNs are trained to adapt to such computations. This helps reduce the high-resolution ADCs to single comparators, 2-bit ADCs, or similar low-bit ADCs; in addition, small adversarial perturbations on the inputs or weights of DNNs can be masked by such aggressive partial sum quantization, improving the adversarial robustness.
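The following sketch illustrates, under stated assumptions, how a fully-connected layer's inputs could be split into crossbar-sized groups whose partial sums are binarized (a single comparator per column) before accumulation into the full sum; the group size of 256 and the zero comparator threshold are illustrative rather than the tuned values referred to in Table I.

```python
import torch
import torch.nn.functional as F

def split_linear_1bit_psum(x, weight, group=256, threshold=0.0):
    """Split the input dimension into crossbar-sized groups, binarize each
    group's partial sum (1-bit ADC / single comparator), and accumulate the
    quantized partial sums into the layer's full sum."""
    full_sum = torch.zeros(x.shape[0], weight.shape[0], device=x.device, dtype=x.dtype)
    for start in range(0, x.shape[1], group):
        psum = F.linear(x[:, start:start + group], weight[:, start:start + group])
        full_sum += torch.sign(psum - threshold)     # 1-bit partial sum quantization
    return full_sum
```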
Table I summarizes the thresholds used in the aggressive partial sum quantization scheme of embodiments described herein.
D. Adversarial Input Attack: Black-Box Attack and Evaluation
To circumvent the issue of potential gradient obfuscation present in the low-precision DNNs with IMC noise and partial sum quantization, a black-box adversarial attack is employed, as illustrated in the accompanying figures.
First, the target DNNs are pre-trained with low-precision and IMC noise (e.g., with gradient obfuscation), and the predicted labels for the clean images are obtained using the pretrained target model. Then, a full-precision black-box DNN (e.g., without gradient obfuscation) is trained using the same input images and corresponding white-box adversarial images obtained from the target model. This black-box model is trained to 100% accuracy with respect to the predicted labels of the target model, and the PGD adversarial attack is applied. Then, the adversarial images generated by the black-box model attack are used to evaluate the adversarial accuracy of the target DNNs with low-precision and IMC noise.
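A simplified sketch of this evaluation pipeline is shown below; it omits the white-box adversarial images from the target model that are also used to train the substitute, the attack argument refers to a PGD generator such as the one sketched earlier, and the epoch count and optimizer are placeholders.

```python
import torch
import torch.nn.functional as F

def blackbox_adversarial_accuracy(target, substitute, loader, sub_optimizer,
                                  attack, epochs=20):
    """Substitute-model (black-box) evaluation: fit a full-precision substitute
    to the target's predicted labels, craft PGD samples on the substitute, and
    measure the low-precision/noisy target's accuracy on those samples."""
    # 1) Train the substitute against the target model's predictions.
    for _ in range(epochs):
        for x, _ in loader:
            with torch.no_grad():
                y_target = target(x).argmax(dim=1)     # labels predicted by the target
            sub_optimizer.zero_grad()
            F.cross_entropy(substitute(x), y_target).backward()
            sub_optimizer.step()
    # 2) Transfer adversarial samples from the substitute to the target.
    correct = total = 0
    for x, y in loader:
        x_adv = attack(substitute, x, y)               # white-box PGD on the substitute
        with torch.no_grad():
            correct += (target(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```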
E. Adversarial Weight Attack: Bit Flip Attack and Evaluation
A BFA is performed on DNNs implemented with IMC hardware. The un-targeted BFA uses progressive search and gradient ranking to identify vulnerable bits that degrade the test accuracy. The objective of the attacker is to lower the overall test accuracy by maximizing the loss function:

$$\max_{\hat{W}}\; \mathcal{L}\left(f(x;\hat{W}),\,t\right)$$

where $\hat{W}$ is the weight matrix after flipping the target bits, and $f(\cdot)$ is the DNN inference function with loss $\mathcal{L}$. To conduct the attack, the attacker is assumed to have access to a sample batch of data $x$ and the corresponding true label $t$.
To progressively search for vulnerable bits, at each attack iteration the top $n_b$ ranked bits (typically $n_b = 1$) are flipped based on the gradient of every bit in each of the $P$ layers of the DNN. The bits are only flipped in the direction of the gradient sign. After flipping the bits at a given layer, the loss is evaluated, and the flipped bits are restored to their original state. In this way, a loss profile set $\{\mathcal{L}_1, \mathcal{L}_2, \ldots, \mathcal{L}_P\}$ is generated, and the layer with the maximum loss is identified:

$$j = \arg\max_{i \in \{1,\ldots,P\}} \mathcal{L}_i$$
Finally, the attacker enters layer j to perform the bit-flip of the current iteration. The attack iterates until DNN accuracy degrades to a random guess (i.e., 10% for CIFAR-10).
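For illustration, a simplified PyTorch sketch of one BFA iteration is shown below. It flips the sign of the highest-gradient weights as a stand-in for flipping the most significant bit of the quantized weight representation, which is a simplification of the actual bit-level search; the layer enumeration and the default $n_b = 1$ follow the description above.

```python
import torch
import torch.nn.functional as F

def bfa_iteration(model, x, y, n_b=1):
    """One simplified BFA iteration: the in-layer search flips the top-gradient
    weight(s) in each layer and records the loss, restoring them afterwards;
    the cross-layer search commits the flip in the layer with the maximum loss."""
    layers = [m for m in model.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    losses, candidates = [], []
    for layer in layers:
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, layer.weight)[0]
        idx = grad.abs().view(-1).topk(n_b).indices          # most vulnerable weight(s)
        w = layer.weight.data.view(-1)
        original = w[idx].clone()
        w[idx] = -w[idx]                                      # proxy for an MSB bit-flip
        with torch.no_grad():
            losses.append(F.cross_entropy(model(x), y).item())
        w[idx] = original                                     # restore original bit state
        candidates.append(idx)
    j = int(torch.tensor(losses).argmax())                    # cross-layer search
    w = layers[j].weight.data.view(-1)
    w[candidates[j]] = -w[candidates[j]]                      # commit the flip in layer j
    return j, losses[j]
```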
A. Evaluation Setup
Against adversarial attacks, both conventional DNN training and adversarial training are analyzed. The ResNet-18 DNN is primarily used as the target model with 1-bit, 2-bit, and 4-bit precision in activations and weights. Adversarial input and weight attacks are performed, where the PGD algorithm is used as the main adversarial input attack with ε = 0.03, α = 2/255, and 10 iterations, and BFA is used as the main adversarial weight attack. All DNNs were trained using either the Adam or the SGD optimization algorithm in the PyTorch framework.
Starting from the in-training quantization scheme, further modifications are made in the DNN training and inference process to integrate IMC hardware noise injection and input-splitting (1-bit and 2-bit) quantization of partial sums. Adversarial training of DNNs is performed by using both the clean images and corresponding adversarial images obtained using the white-box PGD attack.
DNNs with 1-bit and 2-bit partial sum quantization are also trained by expanding on input-splitting. The ADC comparator thresholds and levels used for 3.5-bit (11-level), 2-bit, and 1-bit partial sum quantization are shown in Table I. Different fixed threshold values were evaluated for DNNs with partial sum quantization, and then the IMC prototype chip was tuned using the best threshold values to extract the IMC hardware noise data.
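A minimal sketch of such threshold-based partial sum quantization is shown below; `torch.bucketize` maps each analog partial sum to the number of comparator thresholds it exceeds. The thresholds and output levels passed in would come from Table I or from chip tuning; the values in the commented example call are made up for illustration.

```python
import torch

def adc_quantize(psum, thresholds, levels):
    """Threshold-based ADC model: map each analog partial sum to one of a
    fixed set of digital levels using comparator thresholds (cf. Table I)."""
    thresholds = torch.as_tensor(thresholds, dtype=psum.dtype)
    levels = torch.as_tensor(levels, dtype=psum.dtype)
    idx = torch.bucketize(psum, thresholds)   # number of thresholds exceeded
    return levels[idx]

# Illustrative 2-bit example (placeholder thresholds/levels, not Table I values):
# adc_quantize(psum, thresholds=[-8.0, 0.0, 8.0], levels=[-2.0, -1.0, 1.0, 2.0])
```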
B. Adversarial Input Attack and Defense Results
Table II shows the clean and black-box adversarial accuracies for binary ResNet-18 trained with IMC noise characteristics measured at different supply voltages. Note that the noise of the resistive SRAM IMC chip increases with higher supply voltages due to larger IR drop on the bit-lines. With a higher amount of IMC noise, the clean accuracy (no attack) slightly degrades, but the adversarial accuracy (black-box attack) notably improves, since injecting a higher amount of noise during DNN training leads to stronger generalization.
In comparison to the baseline noiseless model, the accuracy is improved by up to about 10% by adding measured IMC noise to the DNN training and inference process. Compared to the conventionally trained DNNs in
C. Adversarial Weight Attack and Defense Results
Compared to the baseline BFA (no noise), when the resistive SRAM IMC noise results from 3.5-bit ADC were applied, the DNNs became more vulnerable to BFA (requiring fewer bits to reach about 10% CIFAR-10 accuracy). However, the input-splitting scheme with partial sum binarization required BFA to flip about 33.57% more bits to reach random guess, showing enhanced robustness against BFA. When IMC chip measurements with partial sum binarization with 1-bit ADC (single comparator) were used, a similar level of robustness was maintained against BFA, overall requiring >30% more bit-flips compared to the baseline BFA. It will be understood that binary DNNs can require about 6× to about 50× more bit-flips, compared to 2-bit and 4-bit DNNs, respectively.
D. Comparison to Other Approaches
In Table III, the comparison to two relevant prior works is shown. Compared to PNI (as described in Z. He et al., “Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack,” in IEEE CVPR, 2019), this work can incorporate arbitrary IMC hardware noise and achieves better black-box adversarial accuracy improvement. Roy (as described in D. Roy et al., “Robustness Hidden in Plain Sight: Can Analog Computing Defend Against Adversarial Attacks?” arXiv:2008.1201, 2020) evaluated NVM IMC for different array sizes, but only used ideal simulation models and is not based on actual IMC silicon results. By integrating actual IMC prototype chip results in the DNN training/inference process, the scheme according to embodiments described herein shows better adversarial robustness. In addition, this is the only work that has investigated both adversarial input attacks and weight attacks.
The process may optionally continue at operation 806, with performing analog computations in the IMC hardware by accumulating bitwise multiplication results via analog circuitry. The process may optionally continue at operation 808, with storing DNN weights in a crossbar. The process may optionally continue at operation 810, with aggressively (e.g., between 1 and 4 bits) quantizing partial sums of the bitwise multiplication results at the crossbar using ADCs (e.g., 1-bit, 2-bit, 3-bit, or 4-bit ADCs).
Although the operations of
The exemplary computer system 900 in this embodiment includes a processing device 902 or processor, a system memory 904, and a system bus 906. The system memory 904 may include non-volatile memory 908 and volatile memory 910. The non-volatile memory 908 may include read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like. The volatile memory 910 generally includes random-access memory (RAM) (e.g., dynamic random-access memory (DRAM), such as synchronous DRAM (SDRAM)). A basic input/output system (BIOS) 912 may be stored in the non-volatile memory 908 and can include the basic routines that help to transfer information between elements within the computer system 900.
The system bus 906 provides an interface for system components including, but not limited to, the system memory 904 and the processing device 902. The system bus 906 may be any of several types of bus structures that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and/or a local bus using any of a variety of commercially available bus architectures.
The processing device 902 represents one or more commercially available or proprietary general-purpose processing devices, such as a microprocessor, central processing unit (CPU), or the like. More particularly, the processing device 902 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or other processors implementing a combination of instruction sets. The processing device 902 is configured to execute processing logic instructions for performing the operations and steps discussed herein.
In this regard, the various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with the processing device 902, which may be a microprocessor, field programmable gate array (FPGA), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Furthermore, the processing device 902 may be a microprocessor, or may be any conventional processor, controller, microcontroller, or state machine. The processing device 902 may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
The computer system 900 may further include or be coupled to a non-transitory computer-readable storage medium, such as a storage device 914, which may represent an internal or external hard disk drive (HDD), flash memory, or the like. The storage device 914 and other drives associated with computer-readable media and computer-usable media may provide non-volatile storage of data, data structures, computer-executable instructions, and the like. Although the description of computer-readable media above refers to an HDD, it should be appreciated that other types of media that are readable by a computer, such as optical disks, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the operating environment, and, further, that any such media may contain computer-executable instructions for performing novel methods of the disclosed embodiments.
An operating system 916 and any number of program modules 918 or other applications can be stored in the volatile memory 910, wherein the program modules 918 represent a wide array of computer-executable instructions corresponding to programs, applications, functions, and the like that may implement the functionality described herein in whole or in part, such as through instructions 920 on the processing device 902. The program modules 918 may also reside on the storage mechanism provided by the storage device 914. As such, all or a portion of the functionality described herein may be implemented as a computer program product stored on a transitory or non-transitory computer-usable or computer-readable storage medium, such as the storage device 914, volatile memory 910, non-volatile memory 908, instructions 920, and the like. The computer program product includes complex programming instructions, such as complex computer-readable program code, to cause the processing device 902 to carry out the steps necessary to implement the functions described herein.
An operator, such as the user, may also be able to enter one or more configuration commands to the computer system 900 through a keyboard, a pointing device such as a mouse, or a touch-sensitive surface, such as the display device, via an input device interface 922 or remotely through a web interface, terminal program, or the like via a communication interface 924. The communication interface 924 may be wired or wireless and facilitate communications with any number of devices via a communications network in a direct or indirect fashion. An output device, such as a display device, can be coupled to the system bus 906 and driven by a video port 926. Additional inputs and outputs to the computer system 900 may be provided through the system bus 906 as appropriate to implement embodiments described herein.
As further shown in
The DNN can be adversarially trained by applying adversarial inputs wherein the measured variations from idealities in signals generated by an in-memory computing crossbar array circuit can be added to the analog partial sum current signals generated at the hidden layers at operation 1010. The resulting varied analog partial sum current signals can then be quantized to provide the digital partial sum values at operation 1015. It will be understood that the varied analog partial sum current signals can be quantized as described herein.
The operational steps described in any of the exemplary embodiments herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary embodiments may be combined.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element such as a layer, region, or substrate is referred to as being “on” or extending “onto” another element, it can be directly on or extend directly onto the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly on” or extending “directly onto” another element, there are no intervening elements present. Likewise, it will be understood that when an element such as a layer, region, or substrate is referred to as being “over” or extending “over” another element, it can be directly over or extend directly over the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly over” or extending “directly over” another element, there are no intervening elements present. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
Relative terms such as “below” or “above” or “upper” or “lower” or “horizontal” or “vertical” may be used herein to describe a relationship of one element, layer, or region to another element, layer, or region as illustrated in the Figures. It will be understood that these terms and those discussed above are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including” when used herein specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The term “about” generally refers to a range of numeric values that one of skill in the art would consider equivalent to the recited numeric value or having the same function or result. For example, “about” may refer to a range that is within ±1%, ±2%, ±5%, ±7%, ±10%, ±15%, or even ±20% of the indicated value, depending upon the numeric values that one of skill in the art would consider equivalent to the recited numeric value or having the same function or result. Furthermore, in some embodiments, a numeric value modified by the term “about” may also include a numeric value that is “exactly” the recited numeric value. In addition, any numeric value presented without modification will be appreciated to include numeric values “about” the recited numeric value, as well as include “exactly” the recited numeric value. Similarly, the term “substantially” means largely, but not wholly, the same form, manner or degree and the particular element will have a range of configurations as a person of ordinary skill in the art would consider as having the same function or result. When a particular element is expressed as an approximation by use of the term “substantially,” it will be understood that the particular element forms another embodiment.
Many different embodiments have been disclosed herein, in connection with the above description and the drawings. It will be understood that it would be unduly repetitious and obfuscating to literally describe and illustrate every combination and subcombination of these embodiments. Accordingly, all embodiments can be combined in any way and/or combination, and the present specification, including the drawings, shall support claims to any such combination or subcombination.
Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.
The present application claims priority to U.S. Provisional Patent Application No. 63/243,452 titled LEVERAGING NOISE AND AGGRESSIVE QUANTIZATION OF IN-MEMORY COMPUTING FOR ROBUST DNN HARDWARE AGAINST ADVERSARIAL INPUT AND WEIGHT ATTACKS, filed on Sep. 13, 2021, in the U.S.P.T.O., the entirety of which is hereby incorporated herein by reference.
This invention was made with government support under 1652866, 1715443, 2005209 and 2019548 awarded by the National Science Foundation and under HR0011-18-3-0004 awarded by the Defense Advanced Research Projects Agency (DARPA). The government has certain rights in the invention.