The present disclosure relates to processing-in-memory (PIM) architectures and hyperdimensional computing (HDC).
There is an increasing need for efficient processing of diverse cognitive tasks over the vast volumes of data being generated. However, running machine learning algorithms on traditional systems often results in extremely slow processing and high energy consumption, or requires a large cluster of application-specific integrated circuits (ASICs), such as deep learning on the Google tensor processing unit (TPU). Two key technical challenges make learning difficult on today's computing devices: computation efficiency and robustness to noise.
Hyperdimensional computing (HDC) has been introduced as a computational model towards high-efficiency and noise-tolerant computation. HDC is motivated by the observation that the human brain operates on high-dimensional data representations. In HDC, objects are thereby encoded with high-dimensional vectors, called hypervectors, which have thousands of elements. This mimics several important functionalities of the human memory model with vector operations, which are computationally tractable and mathematically rigorous in describing human cognition.
Although HDC is quite powerful at reasoning over and associating abstract information, it is weak at feature extraction from complex data, such as image and video data. This forces HDC algorithms to rely on a pre-processing step to extract useful information from raw data.
Stochastic hyperdimensional arithmetic computing is provided. Hyperdimensional computing (HDC) is a neurally-inspired computation model based on the observation that the human brain operates on high-dimensional representations of data, called hypervectors. Although HDC is powerful at reasoning over and associating abstract information, it is weak at feature extraction from complex data such as image and video data. As a result, most existing HDC solutions rely on expensive pre-processing algorithms for feature extraction.
This disclosure proposes StocHD, a novel end-to-end hyperdimensional system that supports accurate, efficient, and robust learning over raw data. Unlike prior work that used HDC only for learning tasks, StocHD expands HDC functionality to general computation by mathematically defining stochastic arithmetic over HDC hypervectors. StocHD enables an entire learning application (including the feature extractor) to be processed using the HDC data representation, enabling uniform, efficient, robust, and highly parallel computation.
This disclosure further provides a novel fully digital and scalable processing-in-memory (PIM) architecture that exploits the HDC memory-centric nature to support extensively parallel computation. An evaluation over a wide range of classification tasks shows that StocHD provides, on average, 3.3× faster computation and 6.4× higher energy efficiency than a state-of-the-art HDC algorithm running on PIM (52.3× and 143.5×, respectively, compared with an NVIDIA GPU), while providing 16× higher computational robustness.
An exemplary embodiment provides a method for efficient and robust computation. The method includes converting stored data into hyperdimensional data and performing hyperdimensional computation over the hyperdimensional data using hyperdimensional arithmetic operations.
Another exemplary embodiment provides a processing-in-memory (PIM) architecture. The PIM architecture includes a memory array configured to store a set of hypervectors, a compute block coupled to the memory array and configured to perform hyperdimensional arithmetic operations on the set of hypervectors, and a search block coupled to the memory array and configured to perform search operations on the set of hypervectors.
Another exemplary embodiment provides a stochastic hyperdimensional system. The stochastic hyperdimensional system includes a PIM accelerator and a system memory storing instructions. When executed, the instructions cause the PIM accelerator to convert data stored in memory into a set of hypervectors stored in the PIM accelerator and perform hyperdimensional computation over the set of hypervectors using hyperdimensional arithmetic operations.
Those skilled in the art will appreciate the scope of the present disclosure and realize additional aspects thereof after reading the following detailed description of the preferred embodiments in association with the accompanying drawing figures.
The accompanying drawing figures incorporated in and forming a part of this specification illustrate several aspects of the disclosure, and together with the description serve to explain the principles of the disclosure.
The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element such as a layer, region, or substrate is referred to as being “on” or extending “onto” another element, it can be directly on or extend directly onto the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly on” or extending “directly onto” another element, there are no intervening elements present. Likewise, it will be understood that when an element such as a layer, region, or substrate is referred to as being “over” or extending “over” another element, it can be directly over or extend directly over the other element or intervening elements may also be present. In contrast, when an element is referred to as being “directly over” or extending “directly over” another element, there are no intervening elements present. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
Relative terms such as “below” or “above” or “upper” or “lower” or “horizontal” or “vertical” may be used herein to describe a relationship of one element, layer, or region to another element, layer, or region as illustrated in the Figures. It will be understood that these terms and those discussed above are intended to encompass different orientations of the device in addition to the orientation depicted in the Figures.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including” when used herein specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Stochastic hyperdimensional arithmetic computing is provided. Hyperdimensional computing (HDC) is a neurally-inspired computation model based on the observation that the human brain operates on high-dimensional representations of data, called hypervectors. Although HDC is powerful at reasoning over and associating abstract information, it is weak at feature extraction from complex data such as image and video data. As a result, most existing HDC solutions rely on expensive pre-processing algorithms for feature extraction.
This disclosure proposes StocHD, a novel end-to-end hyperdimensional system that supports accurate, efficient, and robust learning over raw data. Unlike prior work that used HDC only for learning tasks, StocHD expands HDC functionality to general computation by mathematically defining stochastic arithmetic over HDC hypervectors. StocHD enables an entire learning application (including the feature extractor) to be processed using the HDC data representation, enabling uniform, efficient, robust, and highly parallel computation. This disclosure further provides a novel fully digital and scalable processing-in-memory (PIM) architecture that exploits the HDC memory-centric nature to support extensively parallel computation.
With reference to
To address these issues, this disclosure proposes StocHD, a novel end-to-end hyperdimensional learning system that performs accurate, efficient, and robust learning directly over raw generated data. The main contributions are listed below:
The efficiency of StocHD is evaluated over a wide range of learning algorithms. This evaluation shows that StocHD provides, on average, 3.3× faster computation and 6.4× higher energy efficiency than a state-of-the-art HDC algorithm running on PIM (52.3× and 143.5×, respectively, compared with an NVIDIA GTX 1080 GPU). In addition, as compared to state-of-the-art HDC solutions, StocHD provides 16× higher robustness to possible noise.
The StocHD framework 10 receives raw data 12, such as via signals or data stored in system memory. The raw data 12 is represented as high-dimensional data 14 in hyperdimensional space 16, as further explained in Section III. Thereafter, computations are performed on the high-dimensional data 14 in hyperdimensional space 16, such as with a feature extractor 18, an HDC encoder 20, and in HDC learning 22.
HDC has a memory-centric architecture with primitive, hardware-friendly operations and extensive parallelism. These features make HDC ideal for in-memory acceleration. Embodiments described herein further provide a novel processing-in-memory platform that supports all StocHD operations directly over digital data stored in memory. This eliminates the data movement between memory and processing units, which would otherwise dominate StocHD energy consumption.
HDC encoding works based on a set of defined primitives. The goal is to exploit the same primitives to define stochastic computing-based arithmetic operations over HDC vectors. HDC is an algebraic structure; it uses search along with several key operations (and their inverses): Bundling (+), which acts as memorization during hypervector addition; Binding (*), which associates multiple hypervectors; and Permutation (ρ), which preserves positional information by performing a single rotational shift. In HDC, the hypervectors are compositional: they enable computation in superposition, unlike standard neural representations. These HDC operations facilitate reasoning about and searching through input data that satisfy prespecified constraints. To support arithmetic operations, StocHD requires the following HDC operations.
A random hypervector is generated with elements ±1 such that +1 appears with probability p. This facilitates constructing HDC representations of arbitrary numbers via a D-dimensional vector. In the stochastic hyperdimensional system 24, information is stored with components ±1. A random HDC vector {right arrow over (V)}1 is fixed to be a basis vector. A random HDC vector {right arrow over (V)}h (h ∈ [−1,1]) is said to represent the number h if δ({right arrow over (V)}h, {right arrow over (V)}1)=h. This is consistent with the notation for {right arrow over (V)}1, which means that {right arrow over (V)}1 represents the number 1. Note that based on this representation, {right arrow over (V)}−a=−{right arrow over (V)}a.
Given n numbers {a1, a2, . . . , an} and corresponding probability values {p1, p2, . . . , pn−1} ∈ [0,1], where pn is defined by Σi=1n pi=1, probabilistic merging chooses the number ai with probability pi. This operation can be extended to operate over n hypervectors, where each dimension of the merged hypervector is selected by probabilistic merging of the n elements located in the same dimension of the given input hypervectors.
Between two HD vectors {right arrow over (V)}1 and {right arrow over (V)}2, the similarity is defined as δ({right arrow over (V)}1, {right arrow over (V)}2)=({right arrow over (V)}1·{right arrow over (V)}2)/D,
where D is the number of dimensions and (·) is the vector dot product operator. HDC supports other similarity metrics, such as Hamming similarity, which is based on the number of dimensions at which two HDC vectors differ.
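For illustration only, these primitives can be expressed in a minimal Python/NumPy sketch; the dimensionality, seed, and helper names are assumptions made for the example and are not part of the disclosure:

    import numpy as np

    rng = np.random.default_rng(0)
    D = 10_000  # hypervector dimensionality (illustrative choice)

    def random_hv(p=0.5, dim=D):
        # Random bipolar hypervector whose elements are +1 with probability p, -1 otherwise.
        return np.where(rng.random(dim) < p, 1, -1)

    def similarity(u, v):
        # Normalized dot-product similarity: delta(u, v) = (u . v) / D.
        return float(u @ v) / len(u)

    def hamming_similarity(u, v):
        # Fraction of dimensions at which two bipolar hypervectors agree.
        return float(np.count_nonzero(u == v)) / len(u)

    V_1 = random_hv()              # basis hypervector representing the number 1
    print(similarity(V_1, V_1))    # 1.0
    print(similarity(V_1, -V_1))   # -1.0, i.e. -V_1 represents the number -1
    print(hamming_similarity(V_1, V_1))  # 1.0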
StocHD defines weighted accumulation over hypervectors. Given two random HDC vectors {right arrow over (V)}a and {right arrow over (V)}b and two probability numbers {p, q} ∈ [0,1] (p+q=1), define {right arrow over (V)}c=p{right arrow over (V)}a⊕q{right arrow over (V)}b to be the random HDC vector whose i-th component is {right arrow over (V)}ai or {right arrow over (V)}bi with probability p and q, respectively. This can be extended to probabilistic merging of n HDC vectors {right arrow over (V)}1, {right arrow over (V)}2, . . . , {right arrow over (V)}n and n−1 probabilities p1, p2, . . . , pn−1 (pn is defined by Σi=1n pi=1). Similarly, the weighted sum can be defined as p1{right arrow over (V)}1 ⊕ p2{right arrow over (V)}2 ⊕ . . . ⊕ pn{right arrow over (V)}n.
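A minimal sketch of this dimension-wise probabilistic merging (again illustrative NumPy code with assumed helper names) is shown below; the arithmetic meaning of the merged vector is verified analytically in the following paragraphs:

    import numpy as np

    rng = np.random.default_rng(1)
    D = 10_000  # illustrative dimensionality

    def random_hv(p=0.5, dim=D):
        # Random bipolar hypervector: +1 with probability p, -1 otherwise.
        return np.where(rng.random(dim) < p, 1, -1)

    def similarity(u, v):
        # Normalized dot-product similarity delta(u, v).
        return float(u @ v) / len(u)

    def merge(hvs, probs):
        # Probabilistic merging: each output dimension is copied from one of the
        # input hypervectors, chosen independently with the given probabilities.
        hvs = np.stack(hvs)                               # shape (n, D)
        pick = rng.choice(len(hvs), size=hvs.shape[1], p=probs)
        return hvs[pick, np.arange(hvs.shape[1])]

    V_a, V_b = random_hv(), random_hv()
    V_c = merge([V_a, V_b], [0.5, 0.5])   # V_c = 0.5*V_a (+) 0.5*V_b
    print(similarity(V_c, V_a))           # ~0.5: about half the dimensions come from V_a
    print(similarity(V_c, V_b))           # ~0.5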
Consider {right arrow over (V)}c=0.5{right arrow over (V)}a⊕0.5{right arrow over (V)}b. To verify the correct functionality of StocHD, it must be shown that δ({right arrow over (V)}c, {right arrow over (V)}1)=(a+b)/2.
Based on this definition, {right arrow over (V)}a has similarity a with {right arrow over (V)}1. As a result, it has exactly ((1+a)/2)D components in common with {right arrow over (V)}1. So if a dimension i is randomly chosen, the probability that the i-th components of {right arrow over (V)}a and {right arrow over (V)}1 match is given by (1+a)/2.
Considering the i-th component of {right arrow over (V)}c, the probability that this component is taken from {right arrow over (V)}a or {right arrow over (V)}b is 0.5 and 0.5, respectively. Thus, the probability that the i-th component of {right arrow over (V)}c matches that of {right arrow over (V)}1 is given by 0.5((1+a)/2)+0.5((1+b)/2)=(2+a+b)/4. As a result, δ({right arrow over (V)}c, {right arrow over (V)}1)=2((2+a+b)/4)−1=(a+b)/2, which is what was claimed. Similarly, StocHD supports weighted subtraction, which can be expressed as p{right arrow over (V)}a⊖q{right arrow over (V)}b=p{right arrow over (V)}a⊕q(−{right arrow over (V)}b), using the fact that −{right arrow over (V)}b={right arrow over (V)}−b.
Define {right arrow over (V)}a=((1+a)/2){right arrow over (V)}1⊕((1−a)/2)(−{right arrow over (V)}1). Note that {right arrow over (V)}a will have ((1+a)/2)D components in common with {right arrow over (V)}1 and ((1−a)/2)D components in common with −{right arrow over (V)}1 (which has components complementary to {right arrow over (V)}1). As a result, δ({right arrow over (V)}a, {right arrow over (V)}1)=a, and thus a representation of the number a has been constructed. Note that if a ∈ [−1,1], then (1+a)/2 and (1−a)/2 both lie in [0,1], and so the probabilities for merging are well defined. This operation is the building block of all other arithmetic operations.
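As a concrete sketch of this building block (illustrative NumPy code), a number a ∈ [−1,1] can be encoded by copying each basis component with probability (1+a)/2 and its complement otherwise, which realizes the probabilistic merging of {right arrow over (V)}1 and −{right arrow over (V)}1 defined above:

    import numpy as np

    rng = np.random.default_rng(2)
    D = 10_000

    def similarity(u, v):
        return float(u @ v) / len(u)

    V_1 = np.where(rng.random(D) < 0.5, 1, -1)   # basis hypervector representing 1

    def encode_number(a, basis=V_1):
        # Each dimension is copied from the basis with probability (1 + a) / 2
        # and from its complement -basis otherwise, so delta(V_a, V_1) ~= a.
        keep = rng.random(len(basis)) < (1 + a) / 2
        return np.where(keep, basis, -basis)

    V_a = encode_number(0.3)
    print(similarity(V_a, V_1))    # ~0.3
    print(similarity(-V_a, V_1))   # ~-0.3: -V_a represents -a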
Given two HDC representations {right arrow over (V)}a and {right arrow over (V)}b, an HDC vector {right arrow over (V)}ab can be constructed as follows. Consider the i-th dimension of {right arrow over (V)}c and set it to the i-th dimension of {right arrow over (V)}1 if the i-th dimensions of {right arrow over (V)}a and {right arrow over (V)}b are both +1 or both −1. Otherwise, set the i-th dimension of {right arrow over (V)}c to be the i-th dimension of {right arrow over (V)}−1. From this construction, the probability that the i-th dimension of {right arrow over (V)}c is the same as that of {right arrow over (V)}1 is given by ((1+a)/2)((1+b)/2)+((1−a)/2)((1−b)/2)=(1+ab)/2, and so {right arrow over (V)}c ≡ {right arrow over (V)}ab. In a simpler form, {right arrow over (V)}ab can be computed as the element-wise product of the {right arrow over (V)}a, {right arrow over (V)}b, and {right arrow over (V)}1 hypervectors.
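A short sketch of the multiplication in its element-wise form (illustrative code; the encode_number helper mirrors the generation step above) may look as follows:

    import numpy as np

    rng = np.random.default_rng(3)
    D = 10_000

    def similarity(u, v):
        return float(u @ v) / len(u)

    V_1 = np.where(rng.random(D) < 0.5, 1, -1)

    def encode_number(a, basis=V_1):
        keep = rng.random(len(basis)) < (1 + a) / 2
        return np.where(keep, basis, -basis)

    def hv_multiply(V_a, V_b, basis=V_1):
        # Element-wise product of V_a, V_b and the basis; the result represents a*b.
        return V_a * V_b * basis

    V_a, V_b = encode_number(0.6), encode_number(-0.5)
    print(similarity(hv_multiply(V_a, V_b), V_1))   # ~-0.3 = 0.6 * (-0.5)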
Comparison is another key operation for data processing, as well as a required operation to implement division. Suppose hypervectors {right arrow over (V)}a and {right arrow over (V)}b are given, corresponding to the values a and b in the original space. One can perform the comparison by first calculating the weighted subtraction {right arrow over (V)}c=0.5{right arrow over (V)}a⊕0.5(−{right arrow over (V)}b). Then the value is evaluated using δ({right arrow over (V)}c, {right arrow over (V)}1)=(a−b)/2. Finally, one can check whether this similarity is positive, negative, or 0.
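Under this reading of the comparison (weighted subtraction followed by a sign check, which is an interpretation, since the original equations are not reproduced here), a minimal sketch is:

    import numpy as np

    rng = np.random.default_rng(4)
    D = 10_000

    def similarity(u, v):
        return float(u @ v) / len(u)

    V_1 = np.where(rng.random(D) < 0.5, 1, -1)

    def encode_number(a, basis=V_1):
        keep = rng.random(len(basis)) < (1 + a) / 2
        return np.where(keep, basis, -basis)

    def merge(hvs, probs):
        hvs = np.stack(hvs)
        pick = rng.choice(len(hvs), size=hvs.shape[1], p=probs)
        return hvs[pick, np.arange(hvs.shape[1])]

    def hv_compare(V_a, V_b, basis=V_1):
        # Weighted subtraction 0.5*V_a (+) 0.5*(-V_b) represents (a - b) / 2;
        # the sign of its similarity with the basis decides the comparison.
        V_c = merge([V_a, -V_b], [0.5, 0.5])
        return int(np.sign(similarity(V_c, basis)))

    print(hv_compare(encode_number(0.7), encode_number(0.2)))    # 1  (a > b)
    print(hv_compare(encode_number(-0.4), encode_number(0.1)))   # -1 (a < b)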
Consider two vectors {right arrow over (V)}a and {right arrow over (V)}b. The aim is to construct an HDC vector {right arrow over (V)}a/b, which would be saturated if a/b lies outside the range of the HDC arithmetic system. Without loss of generality, assume a and b are both positive. Then, the following steps are performed: (i) initialize {right arrow over (V)}low and {right arrow over (V)}high as representations of the lower and upper bounds of the representable range; (ii) compute {right arrow over (V)}mid as the equally weighted addition 0.5{right arrow over (V)}low⊕0.5{right arrow over (V)}high; (iii) form {right arrow over (V)}midb by multiplying {right arrow over (V)}mid with {right arrow over (V)}b and compare it with {right arrow over (V)}a; and (iv) depending on the outcome of the comparison, replace {right arrow over (V)}low or {right arrow over (V)}high with {right arrow over (V)}mid and repeat from step (ii). This process eventually has to end because the difference between the evaluations of {right arrow over (V)}low and {right arrow over (V)}high keeps decreasing at each iteration. This yields a representation of a/b, which is {right arrow over (V)}mid.
Similar to how division is defined, a doubling formula can also be defined to construct {right arrow over (V)}2a from {right arrow over (V)}a. The way to proceed would be to compare {right arrow over (V)}mid/2 with {right arrow over (V)}a rather than {right arrow over (V)}midb in the algorithm for division.
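Because the enumerated steps are only summarized above, the following sketch should be read as one plausible realization of the division: a binary search over candidate quotients in which each candidate mid is multiplied by b in HDC space and compared against a. All helper names, the iteration count, and the search range are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(5)
    D = 10_000

    def similarity(u, v):
        return float(u @ v) / len(u)

    V_1 = np.where(rng.random(D) < 0.5, 1, -1)

    def encode_number(a, basis=V_1):
        keep = rng.random(len(basis)) < (1 + a) / 2
        return np.where(keep, basis, -basis)

    def merge(hvs, probs):
        hvs = np.stack(hvs)
        pick = rng.choice(len(hvs), size=hvs.shape[1], p=probs)
        return hvs[pick, np.arange(hvs.shape[1])]

    def hv_divide(V_a, V_b, iters=10, basis=V_1):
        # Binary-search sketch for V_{a/b}; assumes a, b > 0 and a/b within [0, 1].
        V_low, V_high = encode_number(0.0), encode_number(1.0)
        V_mid = merge([V_low, V_high], [0.5, 0.5])
        for _ in range(iters):
            V_mid_b = V_mid * V_b * basis                                  # HDC multiplication: mid * b
            diff = similarity(merge([V_mid_b, -V_a], [0.5, 0.5]), basis)  # ~ (mid*b - a) / 2
            if diff > 0:
                V_high = V_mid        # mid * b > a: shrink the bracket from above
            else:
                V_low = V_mid         # mid * b <= a: shrink the bracket from below
            V_mid = merge([V_low, V_high], [0.5, 0.5])
        return V_mid

    V_a, V_b = encode_number(0.3), encode_number(0.6)
    # Close to 0.5 = 0.3 / 0.6, up to stochastic noise that shrinks with D.
    print(similarity(hv_divide(V_a, V_b), V_1))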
First, the error introduced by the generation operation is discussed. A stochastic hyperdimensional representation of a number a ∈ [−1,1] is generated using the probabilistic merging {right arrow over (V)}a=((1+a)/2){right arrow over (V)}1⊕((1−a)/2)(−{right arrow over (V)}1) described above.
Let Xi be a random variable with value 1 if the i-th dimensions of {right arrow over (V)}a and {right arrow over (V)}1 are the same, and 0 otherwise. Moreover, let S=(1/D)Σi=1D Xi. Note that δ({right arrow over (V)}a, {right arrow over (V)}1)=2S−1. Now, the Xi are independently and identically distributed (i.i.d.) Bernoulli random variables with success probability (1+a)/2.
Using the central limit theorem, S is approximately normally distributed. As a result, the deviation of δ({right arrow over (V)}a, {right arrow over (V)}1) from a concentrates around zero as the dimensionality D grows.
The similarity of two vectors {right arrow over (V)}a and {right arrow over (V)}b is calculated using δ({right arrow over (V)}a, {right arrow over (V)}b)=(1/D)Σi=1D {right arrow over (V)}ai{right arrow over (V)}bi, where {right arrow over (V)}ai is the i-th component of the vector {right arrow over (V)}a. The mean absolute error of the representation {right arrow over (V)}a of the number a ∈ [−1,1] is calculated using the formula E[|δ({right arrow over (V)}a, {right arrow over (V)}1)−a|]/2.
Here, this is divided by 2 to normalize the length of the interval to 1. This metric is used to compare errors with other methods.
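A small Monte Carlo sketch (illustrative, not part of the disclosure) of this error metric shows how the generation error shrinks with dimensionality:

    import numpy as np

    rng = np.random.default_rng(6)

    def mae_of_generation(D, trials=200):
        # Monte Carlo estimate of the mean absolute error of the stochastic
        # representation, normalized by the interval length of 2.
        errs = []
        for _ in range(trials):
            a = rng.uniform(-1, 1)
            V_1 = np.where(rng.random(D) < 0.5, 1, -1)
            V_a = np.where(rng.random(D) < (1 + a) / 2, V_1, -V_1)
            errs.append(abs(float(V_a @ V_1) / D - a) / 2)
        return float(np.mean(errs))

    for D in (1_000, 2_000, 4_000, 10_000):
        print(D, mae_of_generation(D))   # the error shrinks roughly as 1 / sqrt(D)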
The error in weighted addition follows the same theoretical analysis as the generation error. This is because the analysis only depends on the relation between the probability with which {right arrow over (V)}a and {right arrow over (V)}1 share a common dimension and the value of a itself. An additional advantage is that repeated weighted addition does not result in an accumulation of error, which is essential for multiplication, where weighted addition is used multiple times in sequence. This arises theoretically from the fact that the distribution of the components of the added vectors follows the correct Bernoulli distribution; thus, the same error analysis still holds.
The goal is to find the probability that the comparison returns the correct result. Recall that the estimated similarity δ({right arrow over (V)}c, {right arrow over (V)}1) is approximately normally distributed with mean (a−b)/2 and a standard deviation that shrinks as the dimensionality D grows, which yields the upper bound for the error. The first case is when (a−b)/2 is positive; the probability that the comparison returns the incorrect value is the probability that the estimated similarity falls below zero. The second case is when (a−b)/2 is negative; the probability that the comparison returns the incorrect value can be computed in a similar way.
This section presents a digital-based processing in-memory architecture implementing StocHD, which accelerates a wide range of HDC-based algorithms on conventional crossbar memory. StocHD supports all essential HDC operations in memory in a parallel and scalable way.
The digital-based PIM architecture enables parallel computing and learning over the hypervectors stored in memory. Unlike prior PIM designs that use large analog-to-digital converter (ADC)/digital-to-analog converter (DAC) blocks for analog computing, StocHD performs all HDC computations on the digital data stored in memory. This eliminates ADC/DAC blocks, resulting in high throughput/area and scalability. StocHD supports several fundamental operations required to accelerate HDC. StocHD uses two blocks for performing the computation: a compute block and a search block.
StocHD selects two or more columns of the memory as input NOR operands by connecting them to ground voltage. During NOR computation, the output memristor is switched from RON to ROFF when one or more inputs store a ‘1’ value (RON). In effect, the low-resistance input passes a current through the output memristor, writing the ROFF value to it. This NOR computation is performed in row-parallel fashion on all memory rows activated by the row driver. Since NOR is a universal logic gate, it can be used to implement other logic operations, such as the AND and XOR operations required for HDC arithmetic. For example, embodiments of the PIM accelerator 30 can perform comparison and addition, and may additionally perform at least one of scalar multiplication, vector multiplication, scalar division, vector division, and weighted average. Note that all of these arithmetic operations can be supported in parallel over all dimensions of the hypervectors, enabling significant computational speedup.
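To illustrate why single-gate NOR support suffices, the following sketch uses plain NumPy bit vectors as stand-ins for memory rows (it models only the logic, not the memristor circuit) and composes NOT, OR, AND, and XOR from NOR alone:

    import numpy as np

    def nor(*rows):
        # Row-parallel NOR across any number of operand rows (0/1 bit vectors).
        return 1 - np.bitwise_or.reduce(np.stack(rows), axis=0)

    def not_(x):       # NOT x   = NOR(x, x)
        return nor(x, x)

    def or_(x, y):     # x OR y  = NOT(NOR(x, y))
        return not_(nor(x, y))

    def and_(x, y):    # x AND y = NOR(NOT x, NOT y)
        return nor(not_(x), not_(y))

    def xor(x, y):     # x XOR y = NOR(AND(x, y), NOR(x, y))
        return nor(and_(x, y), nor(x, y))

    rng = np.random.default_rng(7)
    x = rng.integers(0, 2, size=16)
    y = rng.integers(0, 2, size=16)
    assert np.array_equal(and_(x, y), x & y)
    assert np.array_equal(xor(x, y), x ^ y)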
To provide a more reliable search, a voltage stabilizer is used in each CAM row to ensure a fixed match line (ML) voltage during the search. In addition, ganged circuits are used as the CAM sense amplifiers to enable the nearest search in a row-parallel way.
StocHD exploits the row-parallel PIM-based NOR operation to accelerate feature extractors, which are mainly based on arithmetic operations, i.e., bitwise operations in HDC space. The feature extraction can be performed by simple bitwise operations between hypervectors representing the values. Next, StocHD supports permutation and row-parallel XOR operations over the high-dimensional features. For example, in the case of n extracted features {{right arrow over (f)}1, {right arrow over (f)}2, . . . , {right arrow over (f)}n}, StocHD encodes the information as {right arrow over (H)}={right arrow over (f)}1⊕ρ1{right arrow over (f)}2⊕ . . . ⊕ρn−1{right arrow over (f)}n, where ρn denotes an n-bit rotational shift. All encoding steps can be performed using the row-parallel NOR operation and the shift operation implemented by the PIM. StocHD performs classification by checking the similarity of an encoded query with the binary HDC class hypervectors. A query is assigned to the class with the highest Hamming similarity. The inference can be supported using the nearest search provided by the PIM.
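A software-level sketch of the encoding and nearest-search steps is given below (binary hypervectors, with numpy.roll standing in for the permutation ρ and Hamming similarity for the associative search; the class hypervectors are assumed to be given, since training is not detailed in this section):

    import numpy as np

    rng = np.random.default_rng(8)
    D = 10_000  # illustrative dimensionality

    def encode(features):
        # H = f1 XOR rho^1(f2) XOR ... XOR rho^(n-1)(fn); rho^i is an i-bit rotation.
        H = np.zeros(D, dtype=np.int64)
        for i, f in enumerate(features):
            H ^= np.roll(f, i)
        return H

    def classify(query_features, class_hvs):
        # Assign the query to the class hypervector with the highest Hamming similarity.
        q = encode(query_features)
        sims = [int(np.count_nonzero(q == c)) for c in class_hvs]
        return int(np.argmax(sims))

    # Illustrative binary feature and class hypervectors.
    features = [rng.integers(0, 2, size=D) for _ in range(4)]
    class_hvs = [rng.integers(0, 2, size=D) for _ in range(3)]
    print(classify(features, class_hvs))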
StocHD is implemented using both software and hardware support. In software, a PyTorch-based library of hyperdimensional computing is developed, supporting all required computing and learning operations. To model the hardware, a cycle-accurate simulator based on PyTorch emulates StocHD functionality during classification. For hardware, HSPICE is used for circuit-level simulations to measure the energy consumption and performance of all the StocHD operations in 28 nanometer (nm) technology. System Verilog and Synopsys Design Compiler are used to implement and synthesize the StocHD controller.
At the circuit level, the cost of inter-tile communication is simulated, while at the architecture level, the intra-tile communications are modeled and evaluated. StocHD works with any bipolar resistive technology, which is the most commonly used in existing NVMs. In order to have the highest similarity to commercially available 3D XPoint, the memristor device is modeled using the VTEAM model. StocHD accuracy and efficiency are evaluated on five popular datasets, including a large dataset with hundreds of thousands of facial data samples. Table I lists the workloads, their corresponding feature extractors, and dataset sizes.
HDC classification accuracy is compared in different configurations: (i) without a feature extractor, where learning happens directly over raw data; (ii) with a feature extractor running on original data; and (iii) using StocHD arithmetic computation to perform feature extraction. The evaluation shows that HDC with no feature extraction provides, on average, 59% lower accuracy than HDC operating over extracted features. Revisiting the feature extractor with StocHD stochastic arithmetic provides almost the same result as running feature extraction over the original data. The quality of StocHD computation depends on the HDC dimensionality. Using D=4,000 dimensions, StocHD provides the same accuracy as the baseline algorithm. Reducing the dimension to D=3,000 and D=2,000 reduces StocHD accuracy, on average, by 0.9% and 2.1%, respectively. This lower accuracy comes from StocHD accumulative noise during the pre-processing step.
The evaluation on
Reducing the dimensionality improves StocHD computation efficiency. As
Many advanced technologies typically pose issues for hardware robustness. One of the main advantages of StocHD is its high robustness to noise and failures in hardware. In StocHD, hypervectors are random and holographic with i.i.d. components. Each hypervector stores all the information across all its components, so that no component is more responsible for storing any piece of information than another. This makes a hypervector robust against errors in its components. StocHD efficiency and robustness highly depend on the dimensionality and the precision of each hypervector element. Table II compares StocHD robustness to noise in the memory devices. StocHD provides significantly higher robustness to memory noise than the baseline HDC algorithm. In binary representation, an error only flips a single dimension, which results in minor changes to the entire hypervector pattern. In contrast, an error in the original space (the feature extractor in baseline HDC) can occur in the most significant bits, which significantly affects the absolute value and thus the robustness. The results indicate that a 10% failure rate in memory cells results in 0.9% and 14.4% accuracy loss for StocHD and the baseline HDC, respectively.
Table II also explores the impact of limited NVM endurance on StocHD quality of learning. Assume an endurance model with μ=10⁷. The evaluation shows that after a few years of using the PIM-based platform, similar to the human brain, StocHD starts forgetting information stored in the reference hypervectors. To address this issue, wear-leveling is performed to distribute writes uniformly over memory blocks. The overhead of wear-leveling is minor because (i) StocHD has a predictable write pattern, and (ii) wear-leveling can happen over long time periods. The evaluation shows that the baseline HDC has higher sensitivity to the endurance issue. This is because its feature extractor requires PIM arithmetic operations that involve several device switchings. In contrast, StocHD computes feature extraction with minimal write operations.
Although the operations of
The exemplary computer system 1300 in this embodiment includes a processing device 1302 or processor, a system memory 1304, and a system bus 1306. The processing device 1302 represents one or more commercially available or proprietary general-purpose processing devices, such as a microprocessor, central processing unit (CPU), or the like. More particularly, the processing device 1302 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or other processors implementing a combination of instruction sets. The processing device 1302 is configured to execute processing logic instructions for performing the operations and steps discussed herein.
In this regard, the various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with the processing device 1302, which may be a microprocessor, field programmable gate array (FPGA), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Furthermore, the processing device 1302 may be a microprocessor, or may be any conventional processor, controller, microcontroller, or state machine. The processing device 1302 may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
The system memory 1304 may include non-volatile memory 1308 and volatile memory 1310. The non-volatile memory 1308 may include read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like. The volatile memory 1310 generally includes random-access memory (RAM) (e.g., dynamic random-access memory (DRAM), such as synchronous DRAM (SDRAM)). A basic input/output system (BIOS) 1312 may be stored in the non-volatile memory 1308 and can include the basic routines that help to transfer information between elements within the computer system 1300.
The system bus 1306 provides an interface for system components including, but not limited to, the system memory 1304 and the processing device 1302. The system bus 1306 may be any of several types of bus structures that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and/or a local bus using any of a variety of commercially available bus architectures.
The computer system 1300 may further include or be coupled to a non-transitory computer-readable storage medium, such as a storage device 1314, which may represent an internal or external hard disk drive (HDD), flash memory, or the like. The storage device 1314 and other drives associated with computer-readable media and computer-usable media may provide non-volatile storage of data, data structures, computer-executable instructions, and the like. Although the description of computer-readable media above refers to an HDD, it should be appreciated that other types of media that are readable by a computer, such as optical disks, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the operating environment, and, further, that any such media may contain computer-executable instructions for performing novel methods of the disclosed embodiments.
An operating system 1316 and any number of program modules 1318 or other applications can be stored in the volatile memory 1310, wherein the program modules 1318 represent a wide array of computer-executable instructions corresponding to programs, applications, functions, and the like that may implement the functionality described herein in whole or in part, such as through instructions 1320 on the processing device 1302. The program modules 1318 may also reside on the storage mechanism provided by the storage device 1314. As such, all or a portion of the functionality described herein may be implemented as a computer program product stored on a transitory or non-transitory computer-usable or computer-readable storage medium, such as the storage device 1314, volatile memory 1310, non-volatile memory 1308, instructions 1320, and the like. The computer program product includes complex programming instructions, such as complex computer-readable program code, to cause the processing device 1302 to carry out the steps necessary to implement the functions described herein.
An operator, such as the user, may also be able to enter one or more configuration commands to the computer system 1300 through a keyboard, a pointing device such as a mouse, or a touch-sensitive surface, such as the display device, via an input device interface 1322 or remotely through a web interface, terminal program, or the like via a communication interface 1324. The communication interface 1324 may be wired or wireless and facilitate communications with any number of devices via a communications network in a direct or indirect fashion. An output device, such as a display device, can be coupled to the system bus 1306 and driven by a video port 1326. Additional inputs and outputs to the computer system 1300 may be provided through the system bus 1306 as appropriate to implement embodiments described herein.
The operational steps described in any of the exemplary embodiments herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary embodiments may be combined.
Those skilled in the art will recognize improvements and modifications to the preferred embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein and the claims that follow.
This application claims the benefit of provisional patent application serial number 63/188,199, filed May 13, 2021, the disclosure of which is hereby incorporated herein by reference in its entirety.
This invention was made with government funds under grant number N00014-21-1-2225 awarded by the Department of the Navy, Office of Naval Research. The U.S. Government may have rights in this invention.