The present disclosure is related to in-memory computing (IMC) for deep neural networks (DNNs).
Deep neural networks (DNNs) have been very successful across many practical applications, including computer vision, natural language processing, and autonomous driving. However, to achieve high inference accuracy for complex tasks, DNNs require a very large amount of computation and storage. To infer a single image from the ImageNet dataset, state-of-the-art DNNs require billions of multiply-and-accumulate (MAC) operations and storage of millions of weight parameters.
On the algorithm side, the arithmetic complexity of such DNNs has been aggressively reduced by low-precision quantization techniques, which also substantially reduce the storage requirements. Recently proposed low-precision DNNs have demonstrated that 2-bit/4-bit DNNs can achieve minimal accuracy degradation compared to full-precision models. Also, recent binary DNNs have shown noticeable improvements in ImageNet accuracy compared to the initial binary DNNs.
On the hardware side, to efficiently implement DNNs onto custom application-specific integrated circuit (ASIC) chips, many digital DNN accelerators have been designed to support specialized dataflows for DNN computation. In these digital ASIC chips, DNN weights stored in static random-access memory (SRAM) arrays need to be accessed one row at a time and communicated to a separate computing unit such as a two-dimensional (2-D) systolic array of processing engines (PEs). Although data reuse is enhanced through an on-chip memory hierarchy, the energy/power breakdown results show that memory access and data communication account for a dominant portion (e.g., two-thirds or higher) of the total on-chip energy/power consumption.
As a means to address such memory bottlenecks, the in-memory computing (IMC) scheme has emerged as a promising technique. IMC performs MAC computation inside the on-chip memory (e.g., SRAM) by activating multiple/all rows of the memory array. The MAC result is represented by analog bitline voltage/current and subsequently digitized by an analog-to-digital converter (ADC) in the peripheral of the array. This substantially reduces data transfer (compared to digital accelerators with separate MAC arrays) and increases parallelism (compared to conventional row-by-row access), which significantly improves the energy-efficiency of MAC operations. Recently, several IMC SRAM designs have been demonstrated in ASIC chips, which reported high energy-efficiency values of up to hundreds of TOPS/W by efficiently combining storage and computation.
Due to intra-/inter-chip variations and ADC quantization noise, IMC designs often report accuracy degradation compared to the digital baseline, which is a critical concern. For example, DNN accuracy degradation higher than 7% for the CIFAR-10 dataset has been reported when software-trained DNNs are evaluated on noisy IMC ASIC hardware.
Hardware noise-aware training for improving accuracy of in-memory computing (IMC)-based deep neural network (DNN) hardware is provided. DNNs have been very successful in large-scale recognition tasks, but they exhibit large computation and memory requirements. To address the memory bottleneck of digital DNN hardware accelerators, IMC designs have been presented to perform analog DNN computations inside the memory. Recent IMC designs have demonstrated high energy-efficiency, but this is achieved by trading off the noise margin, which can degrade the DNN inference accuracy.
The present disclosure proposes hardware noise-aware DNN training to largely improve the DNN inference accuracy of IMC hardware. During DNN training, embodiments perform noise injection at the partial sum level, which matches with the crossbar structure of IMC hardware, and the injected noise data is directly based on measurements of actual IMC prototype chips. Embodiments are evaluated on several DNNs including ResNet-18, AlexNet, and VGG with binary, 2-bit, and 4-bit activation/weight precision for the CIFAR-10 dataset. These DNNs are evaluated with measured noise data obtained from two different SRAM-based IMC prototype designs and five different chips, across different supply voltages that result in different amounts of noise. Furthermore, the effectiveness of the proposed DNN training is evaluated using individual chip noise data versus the ensemble noise from multiple chips. Across these various DNNs and IMC chip measurements, the proposed hardware noise-aware DNN training consistently improves DNN inference accuracy for actual IMC hardware, up to 17% accuracy improvement for the CIFAR-10 dataset.
An exemplary embodiment provides a method for performing hardware noise-aware training for a DNN. The method includes training the DNN for deployment on IMC hardware; and during the training, injecting pre-determined hardware noise into a forward pass of the DNN.
Another exemplary embodiment provides a computing system. The computing system includes an IMC engine configured to train a deep neural network (DNN) and, during the training, inject pre-determined hardware noise into a forward pass of the DNN.
Those skilled in the art will appreciate the scope of the present disclosure and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.
The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure, and together with the description serve to explain the principles of the disclosure.
The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element such as a layer, region, or substrate is referred to as being “on” or extending “onto” another element, it can be directly on or extend directly onto the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly on” or extending “directly onto” another element, there are no intervening elements present. Likewise, it will be understood that when an element such as a layer, region, or substrate is referred to as being “over” or extending “over” another element, it can be directly over or extend directly over the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly over” or extending “directly over” another element, there are no intervening elements present. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
Relative terms such as “below” or “above” or “upper” or “lower” or “horizontal” or “vertical” may be used herein to describe a relationship of one element, layer, or region to another element, layer, or region as illustrated in the Figures. It will be understood that these terms and those discussed above are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including” when used herein specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hardware noise-aware training for improving accuracy of in-memory computing (IMC)-based deep neural network (DNN) hardware is provided. DNNs have been very successful in large-scale recognition tasks, but they exhibit large computation and memory requirements. To address the memory bottleneck of digital DNN hardware accelerators, IMC designs have been presented to perform analog DNN computations inside the memory. Recent IMC designs have demonstrated high energy-efficiency, but this is achieved by trading off the noise margin, which can degrade the DNN inference accuracy.
The present disclosure proposes hardware noise-aware DNN training to largely improve the DNN inference accuracy of IMC hardware. During DNN training, embodiments perform noise injection at the partial sum level, which matches with the crossbar structure of IMC hardware, and the injected noise data is directly based on measurements of actual IMC prototype chips. Embodiments are evaluated on several DNNs including ResNet-18, AlexNet, and VGG with binary, 2-bit, and 4-bit activation/weight precision for the CIFAR-10 dataset. These DNNs are evaluated with measured noise data obtained from two different SRAM-based IMC prototype designs and five different chips, across different supply voltages that result in different amounts of noise. Furthermore, the effectiveness of the proposed DNN training is evaluated using individual chip noise data versus the ensemble noise from multiple chips. Across these various DNNs and IMC chip measurements, the proposed hardware noise-aware DNN training consistently improves DNN inference accuracy for actual IMC hardware, up to 17% accuracy improvement for the CIFAR-10 dataset.
At the inference stage, the trained IMC engine 10 is used in a forward pass to evaluate an input image. Thus, the input image is passed through each of the convolution layer 14 and fully connected layer 16 using the corresponding weights w0, w1, w2, w3 obtained from the training phase. A convolution layer forward pass 20 is further illustrated in the accompanying drawing figures.
The present disclosure presents a novel hardware noise-aware DNN training scheme to largely recover the accuracy loss of highly-parallel (e.g., 256 rows activated together) IMC hardware. Different from a few prior works that performed noise injection for DNN accuracy improvement of IMC hardware, in embodiments described herein (1) noise injection is performed at the partial sum level that matches with the IMC crossbar, and (2) the injected noise is based on actual hardware noise measured from two recent IMC prototype designs.
Evaluation results are obtained by performing noise-aware training and inference with several DNNs including ResNet-18, AlexNet, and VGG with binary, 2-bit, and 4-bit activation/weight precision for the CIFAR-10 dataset. Furthermore, by using noise data obtained from five different chips, the effectiveness of the proposed DNN training is evaluated using individual chip noise data versus the ensemble noise from multiple chips.
The key contributions and observations of this work are described in the following sections.
In IMC systems, DNN weights are stored in a crossbar structure, and analog computation is performed typically by applying activations as the voltage from the row side and accumulating the bitwise multiplication result via analog voltage/current on the column side. ADCs at the periphery quantize the analog voltage/current into digital values. This way, vector-matrix multiplication (VMM) of activation vectors and the stored weight matrices can be computed in a highly parallel manner without reading out the weights.
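For illustration, the following is a minimal sketch (in Python/NumPy, with illustrative array sizes rather than any specific chip's parameters) of the crossbar-style VMM described above: each column accumulates one partial sum in parallel, which is then digitized by a per-column ADC.

```python
import numpy as np

def crossbar_vmm(activations, weight_matrix, adc):
    """Crossbar-style VMM: every column accumulates activation*weight products
    into one analog partial sum, which is then digitized by the column ADC."""
    partial_sums = activations @ weight_matrix          # one MAC result per column
    return np.array([adc(ps) for ps in partial_sums])   # digitized column outputs

# Signed binary example: 256 rows activated in parallel, 64 columns.
rng = np.random.default_rng(0)
acts = rng.choice([-1, 1], size=256)
weights = rng.choice([-1, 1], size=(256, 64))
digitized = crossbar_vmm(acts, weights, adc=lambda ps: ps)  # identity "ADC" placeholder
```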
Both SRAM based IMC and non-volatile memory (NVM) based IMC have been proposed. While NVM devices have density advantages compared to SRAMs, availability of embedded NVMs in scaled CMOS technologies is limited, and peripheral circuits such as ADCs often dominate the area. Accordingly, a recent study reported that 7 nanometer (nm) SRAM IMC designs exhibit smaller area and energy-delay-product than 32 nm NVM IMC designs. In addition, several device non-idealities such as low on/off ratio, endurance, relaxation, etc., pose challenges for robust NVM IMC and large-scale integration. On the other hand, SRAM has a very high on/off ratio and the SRAM IMC scheme can be implemented in any latest CMOS technology. To that end, this disclosure focuses on SRAM IMC designs.
SRAM IMC schemes can be categorized into resistive and capacitive IMC. Resistive IMC uses the resistive pulldown/pull-up of transistors in the SRAM bitcell, while capacitive IMC employs additional capacitors in the bitcell to compute MAC operations via capacitive coupling or charge sharing.
Accuracy degradation has been reported when software-trained DNNs are deployed on IMC hardware due to quantization noise, process variations, and transistor nonlinearity. To address this, several prior works have employed the information on non-ideal hardware characteristics during DNN training to improve the DNN inference accuracy with IMC hardware. For example, on-chip training circuits have been proposed, but incur a large overhead in both area and energy. A quantization-aware DNN training scheme has been proposed, but it only allows for up to 36 rows to be activated simultaneously and incurs a >2% accuracy loss.
Several recent works have employed noise injection during DNN training to improve the DNN inference accuracy of IMC hardware. The noise-aware DNN training schemes in these approaches inject weight-level noise drawn from Gaussian distributions, and do not consider the crossbar structure of IMC or the ADC quantization noise at the crossbar periphery. In contrast, the hardware noise-aware DNN training scheme proposed herein performs noise injection on the partial sum level that matches with the IMC crossbar structure, and the injected noise is directly from IMC chip measurement results on the quantized ADC outputs for different partial sum (MAC) values.
In IMC hardware, depending on the ADC precision, the partial sums are quantized to a limited number of ADC levels. Due to the variability of devices (transistors, wires, and capacitors), partial sums from the DNN computation that have the same MAC value could result in different ADC outputs. To characterize this noisy quantization behavior, a large number of IMC chip measurements can be performed with random input activation vectors and weight vectors for different MAC values, and two-dimensional (2-D) histograms between MAC value and ADC output can be obtained (e.g., as illustrated in the accompanying drawing figures).
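As one possible realization of this characterization step, the sketch below (an assumption about the data layout, not the authors' exact tooling) builds a per-MAC-value probability table from measured (MAC value, ADC output) pairs by normalizing the rows of the 2-D histogram.

```python
import numpy as np

ADC_LEVELS = np.array([-60, -48, -36, -24, -12, 0, 12, 24, 36, 48, 60])  # 11 ADC levels (see below)

def build_probability_table(mac_values, adc_outputs, mac_range=(-256, 256)):
    """Return (mac_axis, table) where table[m, k] is the measured probability of
    observing ADC level k when the true MAC value is mac_axis[m]."""
    mac_axis = np.arange(mac_range[0], mac_range[1] + 1)
    table = np.zeros((mac_axis.size, ADC_LEVELS.size))
    level_index = {int(lvl): i for i, lvl in enumerate(ADC_LEVELS)}
    for mac, adc in zip(mac_values, adc_outputs):        # one measurement per pair
        table[int(mac) - mac_range[0], level_index[int(adc)]] += 1
    row_sums = table.sum(axis=1, keepdims=True)
    return mac_axis, np.divide(table, row_sums, out=np.zeros_like(table), where=row_sums > 0)
```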
Both XNOR-SRAM and C3SRAM IMC macros contain 256 rows of memory cells and 11-level ADCs, which digitize the bitline voltage after performing the analog MAC computation. Both macros are capable of performing the 256-input dot-product with signed binary input activations and weights (−1 and +1). The dot-product or MAC results are in the range from −256 to +256, and this range is represented by the analog bitline voltage between 0V and the supply voltage. The 11-level ADC at the periphery quantizes the analog bitline voltage to one of the 11 possible output levels, e.g., [−60, −48, −36, −24, −12, 0, 12, 24, 36, 48, 60]. As mentioned earlier, each possible MAC value could be quantized to any of the 11 different ADC levels, and hence there exists a probability corresponding to every MAC value (bit-count) and every ADC level.
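Continuing the sketch above, stochastic ADC behavior can then be emulated by drawing one of the 11 levels according to the table row of the computed MAC value; the function below is hypothetical and assumes every MAC value that occurs was characterized.

```python
import numpy as np

def sample_adc_output(mac_value, prob_table, mac_axis, adc_levels, rng):
    """Draw one ADC level with the measured probabilities for this MAC value."""
    row = prob_table[int(mac_value) - int(mac_axis[0])]
    return int(rng.choice(adc_levels, p=row))

# Example: a MAC value of +14 might usually be read out as 12,
# but occasionally as 0 or 24 due to device variability.
```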
For multi-bit DNN evaluation, multi-bit weights are split across multiple columns of the IMC SRAM array and multi-bit activations are fed to the IMC SRAM array over multiple cycles to perform bit-serial processing. Bit-by-bit MAC computation between split sub-activations and sub-weights is performed to obtain the digitized partial sums from the ADC outputs. The partial sums are then accumulated with proper binary-weighted coefficients depending on the bit positions of the sub-activations/weights, and the full sum for a given neuron in the DNN layer is obtained.
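A simplified sketch of this bit-serial scheme follows; it assumes unsigned sub-activations/sub-weights and a caller-supplied per-bit ADC model, whereas the actual hardware additionally handles signed encodings and the physical column mapping.

```python
import numpy as np

def bit_serial_mac(activations, weights, n_bits, adc):
    """Multi-bit MAC built from binary partial sums.
    activations, weights: integer vectors with values in [0, 2**n_bits - 1]."""
    total = 0
    for i in range(n_bits):                        # activation bit fed per cycle
        a_bit = (activations >> i) & 1
        for j in range(n_bits):                    # weight bit stored per column group
            w_bit = (weights >> j) & 1
            partial_sum = int(np.dot(a_bit, w_bit))       # binary dot-product on the array
            total += (2 ** (i + j)) * adc(partial_sum)    # binary-weighted accumulation
    return total

acts = np.array([3, 1, 2, 0])                      # 2-bit activations
wts = np.array([1, 2, 3, 1])                       # 2-bit weights
assert bit_serial_mac(acts, wts, n_bits=2, adc=lambda s: s) == int(np.dot(acts, wts))
```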
If the supply voltage changes, the noise/variability is affected, and the IMC prototype chip measurement results change as well. Also, intra-chip (e.g., different SRAM columns) and inter-chip (e.g., different chips) variations exist, which affect the amount of noise introduced to the analog MAC computation as well as the resultant DNN accuracy.
With reference to the accompanying drawing figures, in conventional IMC works the training algorithm is not made aware of the hardware variability and quantization noise; however, when software-trained DNNs are deployed on the IMC hardware, the inference is affected by the above-discussed hardware noise. To address this issue, the noise-aware DNN training is performed by injecting the measured hardware noise into the forward pass of the DNNs during training.
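One way to realize this injection inside a standard training framework is to wrap the partial-sum digitization in a custom autograd function that samples from the measured probability table in the forward pass and passes gradients straight through in the backward pass. The PyTorch sketch below is an assumed implementation style, not the authors' code; `prob_table` and `mac_offset` refer to the measured table described above.

```python
import numpy as np
import torch

class NoisyIMCPartialSum(torch.autograd.Function):
    """Forward: replace each partial sum with a sampled ADC output.
    Backward: straight-through estimator (gradient passes unchanged)."""

    @staticmethod
    def forward(ctx, partial_sums, prob_table, adc_levels, mac_offset):
        # mac_offset: first value on the MAC axis (e.g., -256).
        # Assumes every occurring MAC value has a measured probability row.
        ps = partial_sums.detach().cpu().numpy().astype(int)
        rng = np.random.default_rng()
        flat = ps.reshape(-1)
        noisy = np.array([rng.choice(adc_levels, p=prob_table[v - mac_offset]) for v in flat])
        return torch.as_tensor(noisy.reshape(ps.shape), dtype=partial_sums.dtype,
                               device=partial_sums.device)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None, None, None
```

A convolution or fully connected layer would then, for example, be evaluated by splitting its weights into 256-row chunks, passing each chunk's partial sums through `NoisyIMCPartialSum.apply`, and summing the digitized outputs to form the full sum.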
Hardware noise-aware DNN training was performed using the CIFAR-10 dataset. ResNet-18 (as described in He, K., Zhang, X., Ren, S., and Sun, J.,
“Deep Residual Learning for Image Recognition,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, which is incorporated herein by reference in its entirety), AlexNet (as described in Krizhevsky, A., Sutskever, I., and Hinton, G. E., “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems, pp. 1097-1105, 2012, which is incorporated herein by reference in its entirety), VGG (as described in Simonyan, K. and Zisserman, A., “Very Deep Convolutional Networks for Large-Scale Image Recognition,” in International Conference on Learning Representations, 2015, which is incorporated herein by reference in its entirety), and MobileNet (as described in Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H., “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” CoRR, abs/1704.04861, 2017, URL http://arxiv.org/abs/1704.04861, which is incorporated herein by reference in its entirety) DNN models were trained and evaluated with 1-bit, 2-bit, and 4-bit activation/weight precision. Noise data was measured from two different IMC chips of XNOR-SRAM and C3SRAM at different supply voltages to perform the proposed IMC hardware noise-aware training. Furthermore, ensemble noise-aware training was also performed by combining the probability tables of five different XNOR-SRAM chips and obtaining a single unified probability table that represents the noise from the five different chips.
Quantization-aware training was employed for low-precision DNN inference. For the proposed hardware noise-aware training, all DNNs were trained by using a batch-size of 50 and default hyperparameters. Furthermore, the reported DNN inference accuracy values are the average values obtained from five inference evaluations of the same DNN under the same conditions of noise used during the proposed training process.
For all results, software baseline DNNs were trained without injecting any noise during training, and the same DNNs were also trained by injecting the IMC hardware noise. To clarify, the following set of accuracies are obtained: (1) Baseline Accuracy represents the software baseline inference accuracy without any noise injection, (2) Conventional IMC Inference Accuracy represents the DNN inference accuracy with IMC dot-product evaluation on the baseline DNN models without noise-aware training, and (3) Noise-Aware IMC Inference Accuracy represents IMC dot-product evaluation on the new DNN model that is trained with the proposed hardware noise injection. When software-trained DNNs are directly deployed onto IMC hardware, DNN accuracy degradation occurs. By using the proposed IMC hardware noise-aware training, embodiments aim to largely recover the DNN accuracy loss of the IMC hardware.
By using the XNOR-SRAM chip measurement results and noise probability tables, the proposed noise-aware DNN training was performed for the CIFAR-10 dataset for different types of DNNs, with different activation/weight precision, with different types of noise models, with noise from different chip voltages, and also across different physical chips.
In this evaluation, DNN training and inference was performed on four different DNNs of ResNet-18, VGG, AlexNet, and MobileNet for CIFAR-10, using the XNOR-SRAM noise measurements from a single chip at the supply voltage of 0.6V. First, the hardware noise-aware training is performed on the binarized versions of these DNNs, where only convolution layers are binarized for MobileNet. Subsequently, the ResNet-18 DNNs with 2-bit and 4-bit activation/weight precision are evaluated for the proposed noise-aware training.
The supply voltage of the chip affects analog IMC operation. Higher supply voltages worsen the IMC noise, due to a higher IR drop on bitline voltage. Hardware noise-aware training was performed using the XNOR-SRAM noise data at three different supply voltages of 0.6V, 0.8V, and 1.0V.
In this evaluation, the same noise-aware DNN training was performed for binary AlexNet and ResNet-18 by using five different noise probability tables, obtained from five different XNOR-SRAM chips at the same supply voltage of 0.6V. As shown in Table 1, Table 2, and Table 3, the noise-aware IMC inference accuracy is higher than the conventional IMC accuracy of the software baseline model across all five chips for different precisions of the ResNet-18 DNN trained on the CIFAR-10 dataset.
An ensemble probability table was also obtained by combining the probability data from five different XNOR-SRAM chips. To achieve this, 100,000 random samplings of ADC quantization outputs were performed from each probability table for random inputs, and the new ensemble probabilities were obtained from the pool of 500,000 quantization samplings. This noise probability table represents a more generalized version of the hardware noise and allows for testing the IMC hardware noise robustness of the DNNs when trained with non-chip-specific noise. For Table 1, Table 2, and Table 3, five inference evaluations were performed for each configuration, and the mean of the five inference accuracies and the average deviation from the mean are reported.
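A straightforward (unoptimized) sketch of this ensemble construction is shown below; the use of uniformly random MAC rows to generate the samplings is an assumption about the "random inputs" mentioned above.

```python
import numpy as np

def ensemble_table(chip_tables, adc_levels, samples_per_chip=100_000, seed=0):
    """Pool random ADC samplings from each chip's probability table into one table."""
    rng = np.random.default_rng(seed)
    n_mac, n_levels = chip_tables[0].shape
    counts = np.zeros((n_mac, n_levels))
    level_index = {int(lvl): i for i, lvl in enumerate(adc_levels)}
    for table in chip_tables:                              # e.g., five XNOR-SRAM chips
        for m in rng.integers(0, n_mac, size=samples_per_chip):
            if table[m].sum() == 0:
                continue                                   # skip uncharacterized MAC values
            lvl = int(rng.choice(adc_levels, p=table[m]))
            counts[m, level_index[lvl]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```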
Table 1 shows the results obtained by performing ensemble noise-aware training on ResNet-18 DNN with different parameter precisions on the CIFAR-10 dataset. Besides showing the inference accuracy by using the ensemble probability table, the table also shows the inference accuracies obtained by using the individual chip probability tables during inference only. It can be noted that the same DNN with non-noise-aware training had an IMC inference accuracy of 86.54%. This was later improved to 88.75% on average by using the ensemble XNOR-SRAM noise data to perform noise-aware training.
Furthermore, similar results on 2-bit and 4-bit ResNet-18 DNNs are shown in Table 2 and Table 3, respectively. It can be seen that although some accuracy is traded off compared to the chip-specific noise-aware training, the generalized noise model still results in better performance than a non-noise-aware trained model. It is expected that such a generalized model will not be able to outperform a highly chip-specific noise model in noise-aware training.
The noise-aware training demonstrates the capability of recovering the inference accuracy; however, it tends to require a long training time. This is because look-up operations need to be performed in order to implement the non-ideal noisy IMC quantization function. Training deeper and larger neural networks, and training with a larger dataset such as ImageNet, can become a challenge when limited hardware resources are available. Therefore, to accelerate the training process, both ideal quantization noise-based training and single noise model-based training were evaluated. In the first evaluation, the bit-wise probability table-based noise injection was replaced with the ideal quantization function of the IMC hardware during DNN training. This noise model corresponds only to the ADC quantization under ideal circumstances and was expected to help improve the DNN accuracy compared to the non-noise-aware trained DNNs.
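A minimal sketch of such an ideal quantization function is given below, using the 11 ADC levels listed earlier; the nearest-level mapping with saturation at the outermost levels is an assumption about the idealized behavior.

```python
import numpy as np

ADC_LEVELS = np.array([-60, -48, -36, -24, -12, 0, 12, 24, 36, 48, 60])

def ideal_adc_quantize(partial_sums):
    """Deterministically snap each partial sum to the nearest ADC level
    (values beyond the outermost levels saturate at -60 or +60)."""
    ps = np.asarray(partial_sums)[..., None]
    nearest = np.argmin(np.abs(ps - ADC_LEVELS), axis=-1)
    return ADC_LEVELS[nearest]

print(ideal_adc_quantize([-200, -7, 5, 31, 255]))   # nearest levels: -60, -12, 0, 36, 60
```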
Another fast training model was also devised, where the bit-wise probability table-based noise injection was replaced with a single noise model obtained from the measured quantization error distribution.
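A sketch of this single-noise-model variant is shown below; it assumes the quantization error has been fit offline to a Gaussian with mean `mu` and standard deviation `sigma`, and it models the noisy ADC output simply as the true partial sum plus one sampled error term (one of several plausible formulations, not necessarily the exact one used).

```python
import numpy as np

def single_noise_model_adc(partial_sums, mu, sigma, rng):
    """Fast noise model: true partial sum plus one Gaussian error term whose
    (mu, sigma) are fit to the measured quantization-error histogram."""
    ps = np.asarray(partial_sums, dtype=float)
    return ps + rng.normal(mu, sigma, size=ps.shape)

rng = np.random.default_rng(0)
noisy = single_noise_model_adc([14, -30, 200], mu=0.0, sigma=6.0, rng=rng)  # illustrative fit values
```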
Table 4 shows the results obtained when using ideal quantization-aware training and single noise model-based training on the ResNet-18 binary DNN with the CIFAR-10 dataset. The ResNet-18 DNN trained without any noise injection is used as the baseline for this comparison. It can be observed that there is about 3.96% degradation in accuracy when this model is deployed on the IMC hardware. The reported results correspond to the mean of the accuracy and its 3σ variation across 10 evaluations. The results show that the IMC inference accuracy after quantization-aware training is improved by 1.05% when compared to the IMC inference accuracy on the software baseline model. On the other hand, the single noise model-based training also improves the IMC inference accuracy, by 1.08% compared to that of the baseline model.
Noise-aware training was also performed using the noise data obtained from another IMC hardware design, the C3SRAM chip. The noise data measured from the C3SRAM chip at 1.0V and 0.6V supply voltages was used. Three different binary DNNs were first trained in software without noise injection, and then the baseline and C3SRAM IMC hardware inference accuracies were obtained for each of them. After that, noise-aware training was performed by substituting the XNOR-SRAM noise data with the C3SRAM noise data during training, and the C3SRAM IMC hardware inference accuracy was evaluated.
The performance of the proposed noise-aware training algorithm was also compared with two other similar works (Joshi, V., et al., “Accurate Deep Neural Network Inference Using Computational Phase-Change Memory,” in Nature Communications, 11(1):1-13, 2020) and (Zhou, C., Kadambi, P., Mattina, M., and Whatmough, P. N., “Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation,” arXiv preprint arXiv:2001.04974, 2020), both of which are incorporated herein by reference in their entirety. In both of these works, noise-aware training was performed by injecting noise at the weight level drawn from Gaussian distributions, in addition to knowledge distillation in Zhou et al. Moreover, the parameters of the Gaussian distribution used by Joshi et al. to inject noise into weights were determined based on their 11-level PCM hardware, which supported 3.5 bits of precision for weights. Both of these works injected noise at the finer granularity of individual weights.
However, this is not a highly accurate emulation of IMC hardware noise, which contains both quantization noise and device noise lumped at the partial sum level of the IMC crossbar. Furthermore, the variations of transistors/wires/capacitors are not accounted for when using standard Gaussian distributions. In comparison, in the proposed scheme, noise injection is performed at the partial sum level, which is more specific to IMC computations and therefore more relevant. In an attempt to make an apples-to-apples comparison of the proposed scheme and prior works, noise-aware training was performed using the approaches proposed by the prior works and evaluated with the same XNOR-SRAM IMC chip hardware measurement results.
Noise-aware training was performed using the η_tr = η_inf combination, with a value of 0.11 for the work of Joshi et al. and a value of 0.058 for the work of Zhou et al. These values were chosen so that the noise remains the same during training and inference, and also so that the noise remains quantitatively similar to the single noise model obtained from the best Gaussian fit for the measured quantization error distribution. By using the noise formula provided by Joshi et al. and the noise formula of σ_noise^l = η × (W_max^l − W_min^l) provided by Zhou et al., the aforementioned values of η are obtained.
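For reference, the weight-level noise formula of Zhou et al. quoted above can be sketched as follows; applying it independently to every weight of a layer reflects the weight-level granularity of those prior works, while the NumPy form itself is only an illustration.

```python
import numpy as np

def inject_weight_noise(weights, eta, rng):
    """Zhou et al.-style weight noise: sigma_l = eta * (W_max_l - W_min_l),
    added independently to every weight of the layer."""
    sigma = eta * (weights.max() - weights.min())
    return weights + rng.normal(0.0, sigma, size=weights.shape)

rng = np.random.default_rng(0)
layer_w = rng.uniform(-1, 1, size=(256, 64))
noisy_w = inject_weight_noise(layer_w, eta=0.058, rng=rng)   # eta value quoted above
```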
Table 5 shows the IMC inference accuracies on ResNet-18 with different activation/weight precisions and trained using the CIFAR-10 dataset. The IMC-Joshi and IMC-Zhou columns show the XNOR-SRAM IMC inference accuracies on the DNN models trained using the noise-aware training method proposed by Joshi et al. and by Zhou et al., respectively. The IMC-Proposed column shows the XNOR-SRAM IMC inference accuracies on the DNN models trained using the proposed chip-specific noise-aware training. It can be seen that the proposed chip-specific IMC hardware noise-aware training achieves better DNN inference accuracy across different parameter precisions of the ResNet-18 DNN, compared to the results achieved by Joshi et al. and Zhou et al.
The process continues at operation 1108, with, during the training, injecting pre-determined hardware noise into a forward pass of the DNN. Operation 1108 may optionally include operation 1110, with performing stochastic quantization of each partial sum. The process optionally continues at operation 1112, with performing an inference evaluation using a forward pass through the DNN.
Although the operations of the process are illustrated in a particular sequence, the operations are not limited to this order and may be performed in different sequences, as discussed further below.
The exemplary computer system 1200 in this embodiment includes a processing device 1202 or processor, a system memory 1204, and a system bus 1206. The system memory 1204 may include non-volatile memory 1208 and volatile memory 1210. The non-volatile memory 1208 may include read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like. The volatile memory 1210 generally includes random-access memory (RAM) (e.g., dynamic random-access memory (DRAM), such as synchronous DRAM (SDRAM)). A basic input/output system (BIOS) 1212 may be stored in the non-volatile memory 1208 and can include the basic routines that help to transfer information between elements within the computer system 1200.
The system bus 1206 provides an interface for system components including, but not limited to, the system memory 1204 and the processing device 1202. The system bus 1206 may be any of several types of bus structures that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and/or a local bus using any of a variety of commercially available bus architectures.
The processing device 1202 represents one or more commercially available or proprietary general-purpose processing devices, such as a microprocessor, central processing unit (CPU), or the like. More particularly, the processing device 1202 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or other processors implementing a combination of instruction sets. The processing device 1202 is configured to execute processing logic instructions for performing the operations and steps discussed herein.
In this regard, the various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with the processing device 1202, which may be a microprocessor, field programmable gate array (FPGA), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Furthermore, the processing device 1202 may be a microprocessor, or may be any conventional processor, controller, microcontroller, or state machine. The processing device 1202 may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
The computer system 1200 may further include or be coupled to a non-transitory computer-readable storage medium, such as a storage device 1214, which may represent an internal or external hard disk drive (HDD), flash memory, or the like. The storage device 1214 and other drives associated with computer-readable media and computer-usable media may provide non-volatile storage of data, data structures, computer-executable instructions, and the like. Although the description of computer-readable media above refers to an HDD, it should be appreciated that other types of media that are readable by a computer, such as optical disks, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the operating environment, and, further, that any such media may contain computer-executable instructions for performing novel methods of the disclosed embodiments.
An operating system 1216 and any number of program modules 1218 or other applications can be stored in the volatile memory 1210, wherein the program modules 1218 represent a wide array of computer-executable instructions corresponding to programs, applications, functions, and the like that may implement the functionality described herein in whole or in part, such as through instructions 1220 on the processing device 1202. The program modules 1218 may also reside on the storage mechanism provided by the storage device 1214. As such, all or a portion of the functionality described herein may be implemented as a computer program product stored on a transitory or non-transitory computer-usable or computer-readable storage medium, such as the storage device 1214, volatile memory 1210, non-volatile memory 1208, instructions 1220, and the like. The computer program product includes complex programming instructions, such as complex computer-readable program code, to cause the processing device 1202 to carry out the steps necessary to implement the functions described herein.
An operator, such as the user, may also be able to enter one or more configuration commands to the computer system 1200 through a keyboard, a pointing device such as a mouse, or a touch-sensitive surface, such as the display device, via an input device interface 1222 or remotely through a web interface, terminal program, or the like via a communication interface 1224. The communication interface 1224 may be wired or wireless and facilitate communications with any number of devices via a communications network in a direct or indirect fashion. An output device, such as a display device, can be coupled to the system bus 1206 and driven by a video port 1226. Additional inputs and outputs to the computer system 1200 may be provided through the system bus 1206 as appropriate to implement embodiments described herein.
The operational steps described in any of the exemplary embodiments herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary embodiments may be combined.
Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.
This application claims the benefit of provisional patent application Ser. No. 63/171,448, filed Apr. 6, 2021, the disclosure of which is hereby incorporated herein by reference in its entirety.