Generally, this disclosure relates to sensors. More particularly, this disclosure relates to remote sensors.
Wireless sensor networks (WSNs) have been an active area of research for over a decade. These efforts have culminated in various application areas for sensor networks, including military and civilian surveillance, habitat observation, monitoring the health of physical structures, body area networks, and building energy efficiency, among others. Wireless sensor nodes almost always rely on a highly limited source of power—either a battery or environmental harvesting. In the case of battery-powered nodes, manual maintenance becomes necessary whenever the battery is depleted. In the case of energy harvesting, the energy source imposes a hard constraint on the amount of energy that may be used in a given period. Therefore, energy efficiency is a critical design concern in sensor networks.
Due to the sensitive nature of many WSN applications, secure communication mechanisms between the sensor nodes and base station are often required. The security objectives of a WSN include confidentiality, integrity, authentication, and availability. These objectives are often addressed using cryptographic primitives, such as encryption and hashing. However, state-of-the-art cryptographic algorithms are highly compute-intensive and impose a significant burden on the limited energy sources of sensor nodes (a reduction in lifetime by a factor of two or more is common when cryptography is used). Despite significant advances in lightweight implementations of cryptography for sensor networks, the stringent energy constraints inherent to wireless sensor nodes often imply that the use of state-of-the-art cryptographic algorithms is infeasible.
Current sensors do not use encryption/hashing to address the confidentiality/integrity requirements, since doing so increases sensor energy consumption by around 1.5× (i.e., to roughly 2.5× the baseline), draining the battery about 2.5× faster.
Current sensors do not perform on-chip inference. Again, this is due to the fact that on-chip inference can consume significant energy.
Currently, sensors just sense the targeted signal and transmit the data to the base station, where inference may be performed on data collected from a large number of sensors.
At the base station, raw signals can be used directly for analysis. The limitation of an approach in which signals are used directly, or reconstructed, for analysis at a base station is that it does not address the need for local signal analysis, which is generally computationally intensive and thus impractical to perform on either the sensors or the gateway device. Further, the need for local analysis is gaining importance in advanced sensing systems, particularly in medical applications, where local detection can enable closed-loop monitoring and therapeutic devices while also identifying the critical signal segments to transmit to centralized human experts for reconstruction and further analysis.
The present disclosure combines secure/energy-efficient sensor design and smart/energy-efficient sensor design. It can be retrofitted to any existing sensor by feeding the output of the existing sensor to an ASIC or FPGA on which the system and technology are implemented. This provides a way to deal with legacy sensor hardware. In the future, however, it can also be implemented on the same IC as the sensor, which would lead to even greater energy efficiency. The signal can be compressively sensed by the sensor and transmitted to the base station. If the signal is compressively sensed, it can be reconstructed before analysis. The approach of performing analysis directly on compressed representations can have broad and valuable implications beyond systems where the aim is simply to move such functions from a base station to the local nodes.
The system and method according to the present disclosure can also be used to simply compressively sense the data, analyze the data, and then encrypt/hash it before sending it to a base station. This enables both security and inference. In this case, inference can be performed at the sensor by analyzing the compressed data, which is then encrypted and sent to the base station so that the intelligence provided by the sensors can be utilized. If on-chip inference detects a rare event, the compressively-sensed signals around the rare event can also be transmitted to the base station for reconstruction and further analysis. Use of compression techniques, such as, for example, compressive sensing, before encryption/hashing eliminates the energy overhead. The present disclosure makes it possible to perform inference on the sensor node at one to two orders of magnitude lower energy. The present disclosure also makes the distilling of intelligence from the sensor data much more efficient by using a two-stage process in which local intelligence is distilled at each sensor node and higher-level intelligence is distilled at the base station.
The system and method according to the present disclosure can be used to augment any Internet-of-Things (IoT) sensor so that its output can be sent to a base station in a secure fashion while the output of the local inference performed on the augmented sensor node can be fed to a second-stage inference system. This would also make the second-stage inference much more efficient and accurate. Analysis on compressed representations can enable a generalizable approach to substantially reduce computational energy for signal-processing operations.
A further use of the invention would be to significantly reduce the storage requirements at the sensor node and at higher levels of IoT.
A third use would be to plug the current security gap when sensors communicate data to the base station. Current systems do not use encryption, thus making the whole IoT system vulnerable to malicious attacks based on fake sensor data. This challenge is addressed by the invention.
It can be used to augment any IoT sensor to make it secure and smart, while maintaining its energy efficiency. The invention enables existing IoT sensors to be augmented for security and inference. Sensor-to-base-station communication is currently the weakest link in IoT security, since a malicious attacker can easily send fake sensor data to the base station, causing it to make an incorrect inference and thus resulting in significant damage to the IoT system. The present disclosure prevents this by not only making energy-efficient security possible, but also by alleviating the inference burden on the base station through energy-efficient on-sensor inference.
The following is a summary of measurement results from the IC.
The energy measurements from a compressed-domain feature extractor (CD-FE) block (logic and SRAM) were identified at different values of the logic supply voltage. Since the total CD-FE energy exhibits a non-linear relationship with respect to compression factor ξ and a parameter called projection factor v, the optimal CD-FE logic voltage, Vdd,opt, was empirically determined such that it minimizes the total CD-FE energy at a given value of ξ and v.
Energy measurements from the CD-FE block vs. ξ and v were also identified. The CD-FE SRAM energy comprises the active- and idle-mode energies. At smaller values of ξ and v, active-mode SRAM leakage energy, Eact,lkgSRAM, tends to be the dominant component while at higher values of ξ and v, the idle-mode SRAM leakage energy, Eidl,lkgSRAM, is dominant. Further, the CD-FE logic and SRAM energy measurements showed that for values of ξ>4×, the total feature-extraction energy in the compressed domain is lower than that in the Nyquist domain.
The classification energy can dominate the feature-extraction energy when compressed-domain processing is used with non-linear SVM kernels. However, for linear kernels, feature-extraction energy dominates and compressed-domain processing can provide substantial energy scalability with respect to ξ and v. Further, energy measurements from the processor (feature extraction+classification), also show a similar trend as the classifier for the linear and non-linear SVM kernels.
Sparsity of signals provides an opportunity to efficiently represent sensor data. Compressive sensing is one technique that exploits signal sparsity in a secondary basis to achieve very low-energy compression on the sensing node. The random projections in compressive sensing, however, alter the sensed signals, preventing the use of Nyquist-domain algorithms for signal analysis. Moreover, signal reconstruction is energy-intensive and is not desirable on low-power sensor nodes. An approach to overcome these limitations is to operate directly in the compressed domain: computations from the Nyquist domain are transformed to the compressed domain, enabling them to be performed directly on compressively-sensed data. In particular, the design of a processor that enables on-node signal analysis to detect epileptic seizures directly from compressively-sensed electroencephalogram (EEG) data is presented. By using an exact solution for the compressed-domain filtering matrices, the performance of the compressed-domain detector is retained up to high compression factors. Additionally, by using an approximate solution, smaller compressed-domain filtering matrices were derived, saving more energy in the compressed domain. These methods provide two strong knobs to control the energy of the compressed-domain seizure-detection processor.
Thus, in addition to communication energy savings through end-to-end data reduction, the methodologies described herein enable a mode of power management in which the computational energy scales due both to a reduction in the number of input samples that need to be processed and to approximations introduced at the algorithmic level.
The set of accompanying illustrative drawings shows various example embodiments of this disclosure. Such drawings are not to be construed as necessarily limiting this disclosure. Like numbers and/or similar numbering scheme can refer to like and/or similar elements throughout.
This disclosure is now described more fully with reference to the set of accompanying illustrative drawings, in which example embodiments of this disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as necessarily being limited to the example embodiments disclosed herein. Rather, the example embodiments are provided so that this disclosure is thorough and complete, and fully conveys various concepts of this disclosure to those skilled in a relevant art.
Features described with respect to certain example embodiments may be combined and sub-combined in and/or with various other example embodiments. Also, different aspects and/or elements of example embodiments, as disclosed herein, may be combined and sub-combined in a similar manner as well. Further, some example embodiments, whether individually and/or collectively, may be components of a larger system, wherein other procedures may take precedence over and/or otherwise modify their application. Additionally, a number of steps may be required before, after, and/or concurrently with example embodiments, as disclosed herein. Note that any and/or all methods and/or processes, at least as disclosed herein, can be at least partially performed via at least one entity in any manner.
Various terminology used herein can imply direct or indirect, full or partial, temporary or permanent, action or inaction. For example, when an element is referred to as being “on,” “connected” or “coupled” to another element, then the element can be directly on, connected or coupled to the other element and/or intervening elements can be present, including indirect and/or direct variants. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
Although terms first, second, etc. can be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not necessarily be limited by such terms. These terms are used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from various teachings of this disclosure.
Furthermore, relative terms such as “below,” “lower,” “above,” and “upper” can be used herein to describe one element's relationship to another element as illustrated in the accompanying drawings. Such relative terms are intended to encompass different orientations of illustrated technologies in addition to the orientation depicted in the accompanying drawings. For example, if a device in the accompanying drawings were turned over, then the elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. Similarly, if the device in one of the figures were turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. Therefore, the example terms “below” and “lower” can encompass both an orientation of above and below.
The terminology used herein is for describing particular example embodiments and is not intended to be necessarily limiting of this disclosure. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms “comprises,” “includes” and/or “comprising,” “including” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence and/or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and should not be interpreted in an idealized and/or overly formal sense unless expressly so defined herein.
As used herein, the term “about” and/or “substantially” refers to a +/−10% variation from the nominal value/term. Such variation is always included in any given value.
All references specifically cited herein are hereby incorporated herein by reference in their entireties for the purposes for which they are cited and for all other purposes. If any disclosures are incorporated herein by reference and such disclosures conflict in part and/or in whole with this disclosure, then to the extent of conflict, and/or broader disclosure, and/or broader definition of terms, this disclosure controls. If such disclosures conflict in part and/or in whole with one another, then to the extent of conflict, the later-dated disclosure controls.
The present disclosure describes certain embodiments relating to compressive sensing for simultaneously enabling energy-efficient encryption/hashing and inference on a sensor node. It can be used to augment any existing sensor, which typically just senses and transmits data to a base station, into a sensor that performs inference in a secure way, while maintaining energy efficiency. It can also be used to implement a secure, smart, and energy-efficient sensor in a single integrated circuit (IC).
Widely used network security protocols, such as SSL, have a provision for compressing packets using conventional data compression algorithms, before they are encrypted. This approach of performing encryption after data compression, which is called encompression, can potentially reduce the volume of data that needs to be processed by cryptographic algorithms. Unfortunately, this approach is not directly applicable to sensor networks, since traditional compression algorithms are themselves quite compute- and memory-intensive. However, as described herein, compression methods, such as, for example, compressive sensing (CS), can be effectively used for data compression in sensor networks, since they each offer a low-complexity compression scheme that still achieves high compression ratios for sparse data (often, over an order-of-magnitude). Further, the addition of inference, for example, employing a feature extraction and/or a linear or nonlinear classification component allows for the incorporation of computational complexity to the sensor while maintaining energy efficiency.
By applying encompression, inference, and encryption, and targeting a reasonable compression ratio, secure sensor data transmission can be achieved, while at the same time, the amount of data to be encrypted or hashed is significantly reduced. Moreover, encompression based on compressive sensing (CS) is especially suitable for sensor nodes and greatly reduces the energy cost of security. In some cases, encompression and inference even reduce energy compared to the case when no compression or encryption is performed or where the analytics are performed at the base station.
Information sensing and processing have traditionally relied on the Nyquist-Shannon sampling theorem that is one of the central tenets of digital signal processing. However, if the signal to be measured is sparse, Nyquist sampling produces a large number of redundant digital samples, which are costly to wirelessly transmit and severely limit the sensor node lifetime.
The Compressive Sensing (CS) method removes redundancies in sparse signals while sampling, thus unifying sampling and compression.
In addition to compression and reconstruction, CS may intrinsically provide a level of confidentiality, provided that the adversary has no knowledge of the matrix Φ. However, to ensure a more robust level of security, CS may be combined with well-established cryptographic algorithms, while greatly reducing their energy overhead.
Compressive sensing is a technique that can be used to compress an N-sample signal x that is sparse in a secondary basis Ψ; e.g., EEG is sparse in the Gabor basis and spike data are sparse in the wavelet basis. The sparse dictionary Ψ can be learned by training on the data, and such data-driven bases often outperform pre-defined fixed dictionaries. Thus, if x can be represented as Ψs, where s is a vector of C-sparse coefficients, a projection matrix Φ can be used to transform x to a set of M compressed samples (denoted by x̂), where O(C log(N/C)) ≤ M << N, as follows:
x̂M×1 = ΦM×N xN×1. (1)
The compression factor ξ=N/M quantifies the amount of compression achieved by the projection. For accurate recovery of x from {circumflex over (x)}, Φ needs to be incoherent with Ψ; an M×N dimensional matrix Φ, whose entries are i.i.d. samples from the uniform distribution U(+1, −1) or from the normal distribution N(0, 1), is often maximally incoherent with Ψ. Deriving Φ from U(+1, −1) also leads to low-energy compression since the projection is reduced to simple additions and subtractions.
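The projection of Eq. (1) with a Φ drawn from U(+1, −1) can be sketched in a few lines of Python. This is a toy illustration only; the signal, sizes, and random seed are illustrative choices, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 256, 32                         # Nyquist samples vs. compressed samples
x = np.zeros(N)
x[[10, 77, 150]] = [1.0, -0.5, 0.8]    # toy signal, sparse in the canonical basis

# Phi drawn from U(+1, -1): the projection reduces to additions/subtractions
Phi = rng.choice([1.0, -1.0], size=(M, N))

x_hat = Phi @ x                        # Eq. (1): M compressed samples
xi = N / M                             # compression factor, here 8x
```

Because every entry of Φ is +1 or −1, each of the M outputs is just a signed sum of the N input samples, which is the source of the low-energy compression noted above.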
Although sensing can thus incur very little energy, the reconstruction of x from x̂ can be costly. As seen from Eq. (1), the system is underdetermined (i.e., knowing only x̂ and Φ, there are an infinite number of possible solutions for x and, hence, for s). However, since x is sparse in Ψ, the sparsest solution for s is, with high probability, the correct solution. The sparse solution can be determined by solving the following convex optimization problem:
minimize ∥s∥1 subject to x̂ = ΦΨs. (2)
The reconstructed signal is then given by xR = Ψs*, where s* is the optimal solution to Eq. (2). Although Eq. (2) requires only a small number of measurements (M<<N) to enable accurate recovery, even with the most efficient approaches, the complexity of solving this optimization problem can be prohibitive on typical power-constrained platforms, such as sensor nodes.
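For illustration, the ℓ1 recovery of Eq. (2) can be posed as a linear program and solved with an off-the-shelf solver. The sketch below assumes Ψ = I (i.e., the signal is sparse in the canonical basis) and uses SciPy's linprog; this is one standard reformulation of basis pursuit, not necessarily the reconstruction method of the disclosure, and the sizes and seed are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

N, M = 32, 16
s_true = np.zeros(N)
s_true[[3, 20]] = [1.5, -2.0]          # C = 2 sparse coefficients (Psi = I assumed)
Phi = rng.standard_normal((M, N))      # i.i.d. N(0, 1) projection matrix
x_hat = Phi @ s_true                   # compressed measurements, Eq. (1)

# Basis pursuit as a linear program over the stacked variable [s; t]:
#   minimize sum(t)  subject to  -t <= s <= t  and  Phi s = x_hat
I = np.eye(N)
c = np.concatenate([np.zeros(N), np.ones(N)])
A_ub = np.block([[I, -I], [-I, -I]])   # encodes s - t <= 0 and -s - t <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([Phi, np.zeros((M, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=x_hat,
              bounds=[(None, None)] * N + [(0, None)] * N)
s_rec = res.x[:N]                      # recovered C-sparse coefficients
```

Even at this toy scale, the solver must handle 2N variables and N×N-sized constraint blocks per reconstruction, which illustrates why solving Eq. (2) on a power-constrained sensor node is unattractive.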
While the combination of CS with cryptographic algorithms, as described herein, provides a greater level of security for sensor data transmitted from the sensor to another location (such as, for example, another sensor or a base station), it also provides an overall reduction in energy consumption. Another sensor feature that can be enhanced, or additional functionality provided, is the ability of the sensor to compress data at the sensor level, in contrast to current sensors that send uncompressed data to a base station. As described below, such data compression can be accomplished at the sensor level with energy savings. The prevalent thinking, however, is that even where compression can be accomplished at the sensor level using energy-saving techniques, encrypting the data would forfeit all of the energy-savings benefit and require additional energy, resulting in an ineffective and inefficient sensor for low-power applications. Using the techniques described herein, a low-power sensor and a method for low-power sensing are provided that allow for compression, encryption, and security at the sensor level, thereby providing a sensor that can deliver compressed, encrypted, and secure data to, for example, another sensor, a base station, or some other remote location.
The following is a description of one or more embodiments of a hardware encompressor module that performs compressive sensing (CS), encryption, and integrity checking at lower energy consumption rates than traditional systems. For example, CS can reduce the energy needed for securing output data using hardware block ciphers and hash functions, under one or more appropriate compression ratio and hardware compressor implementations.
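As a rough software illustration of the encompression idea (not the hardware implementation), the sketch below compresses before computing an integrity tag. SHA-256 stands in for the hardware hash block, the ±1 projection and sizes are arbitrary assumptions, and float32 serialization is used only to make the byte counts concrete:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(2)

N, M = 512, 64                         # xi = 8 compression before the crypto stage
x = rng.standard_normal(N)             # toy Nyquist-rate samples
Phi = rng.choice([1.0, -1.0], size=(M, N))
x_hat = Phi @ x                        # compressed samples, Eq. (1)

# Integrity tag computed over the *compressed* samples: the hash block now
# processes 8x fewer words than it would on the raw signal.
raw_bytes = x.astype(np.float32).tobytes()
enc_bytes = x_hat.astype(np.float32).tobytes()
tag = hashlib.sha256(enc_bytes).hexdigest()

print(len(raw_bytes), len(enc_bytes))  # 2048 256
```

The same data reduction applies to an encryption stage: a block cipher operating after compression processes ξ-times fewer blocks, which is the source of the security energy savings claimed above.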
One embodiment of an encompressor process is shown in
In one or more embodiments, the system architecture incorporates compression (such as, for example, encompression) and, after the compression component (such as, for example, compressive sensing), inference (such as, for example, feature extraction (FE) and classification (CL)), as shown in
In an embodiment, CS is used to enable both sensing and compression. An efficient implementation of CS can be achieved by merging the sampling and compression steps. In one embodiment, compression occurs in the analog sensor read-out electronics prior to the use of an analog-to-digital converter (ADC). In another embodiment, a digital CS is used, where the compression algorithm is applied linearly after the ADC, and the ADC is not included as part of the encompressor. As Eq. (1) implies, in one embodiment, a compression process simply consists of multiplying the input vector with a random matrix. Therefore, the number of multiplications (and thus the corresponding energy consumption) depends on the matrix size, i.e., the product of projection matrix (Φ) and input (x) size.
Next, feature extraction is implemented.
In order to derive the signal-processing operations required in the feature-extraction stage of compressed analysis (CA), the approach used in the present embodiment, in which the end-to-end embedded signal representations are based on compressive sensing, certain calculations must be performed. A compressed-domain equivalent Ĥ can be derived for any signal-processing function that can be represented as a matrix operation H. The error in the inner product between feature vectors is minimized since, as described below, this is a key computation in the kernel functions of inference stages such as classifiers. Using Ĥ thus permits low distortion error with respect to the inner products between feature vectors.
Many powerful inference frameworks from the domain of machine learning transform data into the Euclidean space by employing signal-processing functions for feature extraction. These frameworks then use linear or nonlinear classification to perform inference over the data. The classification step commonly utilizes a distance metric, such as, for example, 2-norm or inner product in the Euclidean space, between feature vectors, i.e., classification can be achieved with only inner-product information, rather than complete feature data. For example, in prior systems, as shown in
One possibility is for each vector x̂ in CA to be processed by a matrix operator Ĥ to derive the compressed-domain feature vector ŷ. A naive approach might be to find Ĥ such that the output vector ŷ equals y from NA. This gives the following formulation:
ŷ = Ĥx̂ = ĤΦx = y = Hx, for all x ⇒ ĤΦ = H. (3)
However, with M<<N, matrix Ĥ above corresponds to N×M variables constrained by N×N equations. Such a system, with fewer variables than equations, is overdetermined and has no exact solution. An auxiliary matrix Θ can be used instead of Φ to introduce additional degrees of freedom in order to solve for Ĥ exactly. Instead of solving for y=ŷ, as shown above in Eq. (3), the system solves for some K-dimensional projection Θy of y. The elements of the K×N auxiliary matrix Θ are now design variables along with Ĥ. Thus, the system needs to solve for Θ and Ĥ simultaneously in the following equation:
ΘH = ĤΦ. (4)
With M<<N, Θ and Ĥ together correspond to K×(N+M) variables constrained by K×N equations. Thus, with more variables than constraints, Eq. (4) will have an infinite number of solutions. The system then sets constraints for finding unique solutions that make several useful design options available:
The system is able to solve exactly for the compressed-domain processing matrix Ĥ, avoiding additional error sources in the processing.
By using a smaller value of K, it also permits solving for an approximate Ĥ of smaller size. This solution provides the system with a knob to scale the number of computations performed in CA based on the required accuracy for solving Eq. (4).
Additionally, by introducing Θ, Eq. (4) allows the system to extend the methodology from signal-processing operations where H is a square matrix to those where H is a non-square matrix (e.g., multi-rate system).
For any signal-processing function, which can be represented as a matrix H, the system derives an equivalent operator Ĥ in CA. Since the system is not interested in the exact value of y but in its distance from other processed signals, the system solves for a random projection of y, which preserves the inner product of vectors.
The intuition behind solving for a projection of y instead of y itself in Eq. (4) is that many machine-learning stages that act after feature extraction, such as, for example, support-vector machines, do not use the exact value of y but only its distance from other vectors. Thus, the Euclidean distance between feature vectors is the metric sought to be preserved. The distance between any two feature vectors, y1 and y2, is given by the inner product y1ᵀy2. The corresponding distance in the compressed domain is given by:
ŷ1ᵀŷ2 = (Θy1)ᵀ(Θy2) = y1ᵀ(ΘᵀΘ)y2 (5)
The right-hand side will be equal to the inner product y1ᵀy2 of NA if ΘᵀΘ is equal to the N×N identity matrix I. Thus, solving for Θ and Ĥ exactly in Eq. (4) requires solving the following constrained optimization problem:
arg minΘ ∥(ΘᵀΘ) − I∥₂² such that ΘH = ĤΦ (6)
Assuming H is a square matrix, the SVD of ΦH⁻¹ can be written as VSUᵀ, where V and U are orthogonal matrices (i.e., UᵀU=VᵀV=I) and S is an M×M diagonal matrix formed by the singular values of ΦH⁻¹. The following relationship then holds for ΘᵀΘ:
ΘᵀΘ = (ĤΦH⁻¹)ᵀ(ĤΦH⁻¹) = U(SVᵀĤᵀĤVS)Uᵀ (7)
The distance from the above matrix to the identity will be at least the rank deficiency of U. The lower bound in Eq. (6) will thus be achieved by setting K=M (or v=ξ),
Ĥ = S⁻¹Vᵀ and Θ = ĤΦH⁻¹ (8)
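The exact solution of Eq. (8) is easy to check numerically. The sketch below uses toy sizes and random H and Φ (illustrative assumptions, not values from the disclosure), mapping NumPy's SVD outputs onto the V, S, Uᵀ naming used here:

```python
import numpy as np

rng = np.random.default_rng(3)

N, M = 64, 16                                  # xi = 4; exact solution sets K = M (v = xi)
H = rng.standard_normal((N, N))                # any invertible N x N processing matrix
Phi = rng.choice([1.0, -1.0], size=(M, N))     # random +/-1 projection

# SVD of Phi H^-1, written in this section's naming as V S U^T
PhiHinv = Phi @ np.linalg.inv(H)
V, sv, Ut = np.linalg.svd(PhiHinv, full_matrices=False)  # V: MxM, Ut = U^T: MxN

H_hat = np.diag(1.0 / sv) @ V.T                # Eq. (8): H_hat = S^-1 V^T
Theta = H_hat @ PhiHinv                        # Eq. (8): Theta = H_hat Phi H^-1

# Theta H equals H_hat Phi (the constraint of Eq. (4)), and the rows of
# Theta are orthonormal, so Theta^T Theta is as close to I as rank M allows.
print(np.allclose(Theta @ H, H_hat @ Phi))     # True
```

Note that Θ here collapses to Uᵀ, whose M orthonormal rows make ΘᵀΘ a rank-M projection; its distance to the N×N identity is exactly the rank deficiency N−M discussed above.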
According to the Johnson-Lindenstrauss (JL) lemma (S. Dasgupta and A. Gupta, “An elementary proof of the Johnson-Lindenstrauss lemma,” Random Structures and Algorithms, vol. 22, no. 1, pp. 60-65, 2002, incorporated herein by reference in its entirety for all purposes), ŷ1ᵀŷ2 in Eq. (5) will be approximately equal to y1ᵀy2 if the entries of the auxiliary matrix Θ are drawn from the normal distribution N(0, 1). Thus, the following modified problem can be solved: find Θ and Ĥ such that ΘH = ĤΦ and Θ ~ N(0, 1).
Suppose Θ and Ĥ comprise row vectors θiᵀ and ĥiᵀ, i∈[1, K], where θi ∈ ℝᴺ and ĥi ∈ ℝᴹ. The following representation is used:
Θ = [θ1, . . . , θK]ᵀ and Ĥ = [ĥ1, . . . , ĥK]ᵀ.
Given the above formulation, the ith row of Eq. (4) can be simplified and represented as follows:
θiᵀH = ĥiᵀΦ ⇒ θi = Dĥi (9)
where Dᵀ = ΦH⁻¹. Note that D in the above equation is of dimensionality N×M. Suppose the SVD of D is USVᵀ, where the orthogonal matrices U and V are of dimensionality N×M and M×M, respectively, and the diagonal matrix S, comprising the singular values of D, is of dimensionality M×M. Then Eq. (9) can be simplified as follows:
θi = Dĥi = USVᵀĥi (10)
Since θi ~ N(0, IN) is sought in order to preserve the inner products according to the JL lemma, ĥi is drawn from N(0, Σ), where Σ = VS⁻²Vᵀ. Each row of Θ is then derived based on Eq. (10). This choice of ĥi, in fact, gives the exact JL solution for Ĥ according to the following corollary:
Given orthogonal matrices U and V of dimensionality N×M and M×M, respectively, and an M×M diagonal matrix of singular values S, then ĥi ~ N(0, Σ), where Σ = VS⁻²Vᵀ and ĥi ∈ ℝᴹ, gives the solution for θi = USVᵀĥi such that the entries of the row vector θi are drawn i.i.d. from the multivariate normal N(0, IN).
The proof is completed by deriving the mean and variance of ĥi under the assumption that θi ~ N(0, IN). Consider the following equation:
θi = USVᵀĥi = Uzi (11)
where zi = SVᵀĥi is an M-dimensional vector of random variables. Since θi ~ N(0, IN) and U is a constant orthogonal matrix, zi ~ N(0, IM). Further, since ĥi = VS⁻¹zi, the mean of ĥi can be computed as E[ĥi] = VS⁻¹E[zi] = 0, and the variance of ĥi as follows:
E[ĥiĥiᵀ] = VS⁻¹E[ziziᵀ]S⁻¹Vᵀ = VS⁻²Vᵀ = Σ.
Thus, the approximate solution for matrix Ĥ is of dimension K×M, where K<M (or v>ξ).
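A sketch of this approximate construction, under the same toy assumptions as before (random invertible H, Gaussian Φ, arbitrary sizes and seed): each row ĥi is generated as ĥi = VS⁻¹zi with zi ~ N(0, IM), which by the corollary above is equivalent to drawing ĥi ~ N(0, Σ):

```python
import numpy as np

rng = np.random.default_rng(4)

N, M, K = 64, 16, 8                         # K < M, i.e. v > xi
H = rng.standard_normal((N, N))             # invertible processing matrix (toy)
Phi = rng.standard_normal((M, N))           # Gaussian projection matrix

D = (Phi @ np.linalg.inv(H)).T              # D^T = Phi H^-1, so D is N x M
U, sv, Vt = np.linalg.svd(D, full_matrices=False)   # D = U S V^T (U: NxM, V^T: MxM)

# Rows h_i ~ N(0, Sigma) with Sigma = V S^-2 V^T, generated as h_i = V S^-1 z_i
Z = rng.standard_normal((K, M))             # z_i ~ N(0, I_M)
H_hat = Z @ np.diag(1.0 / sv) @ Vt          # K x M approximate operator
Theta = H_hat @ D.T                         # rows theta_i = D h_i, so Theta is K x N

# The constraint of Eq. (9), Theta H = H_hat Phi, still holds exactly by
# construction; only the preservation of inner products is approximate (JL).
```

The resulting Ĥ is K×M rather than M×M, so the per-vector multiply cost in CA shrinks by a further factor of M/K, which is the energy knob described above.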
To solve Eq. (6) for Θ and Ĥ, each side of Eq. (4) is pre-multiplied by its own transpose, obtaining the following relationship:
(ΘH)ᵀ(ΘH) = (ĤΦ)ᵀ(ĤΦ)
HᵀΘᵀΘH = ΦᵀĤᵀĤΦ
RQPᵀΘᵀΘPQRᵀ = USVᵀĤᵀĤVSUᵀ (12)
where H = PQRᵀ and Φ = VSUᵀ are the SVDs of H and Φ, respectively.
Since H is of dimensionality L×N (L<N), P, Q, and R are of dimensionality L×L, L×L, and N×L, respectively. Similarly, since Φ is of dimensionality M×N (M<N), U, S, and V are of dimensionality N×M, M×M, and M×M, respectively. Substituting Θ = BQ⁻¹Pᵀ and Ĥ = AS⁻¹Vᵀ into Eq. (12) yields the following relationship:
RBᵀBRᵀ = UAᵀAUᵀ ⇒ UᵀRBᵀBRᵀU = AᵀA
where A and B are unknown matrices that need to be determined. The JL lemma can be used such that the K×L entries of Θ are drawn from N(0, 1). A solution for the K×L matrix B = ΘPQ can then be obtained, and the above equation can be used to derive the K×M matrix A = BRᵀU. Finally, the K×M matrix Ĥ = AS⁻¹Vᵀ is obtained.
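The non-square case can be sketched as follows. The dimensions, seed, and random H and Φ are illustrative assumptions; the scaling constants and the optional orthogonalization step of Algorithm 1 are omitted here:

```python
import numpy as np

rng = np.random.default_rng(5)

L, N, M, K = 32, 64, 16, 8                 # non-square H: L x N with L < N
H = rng.standard_normal((L, N))
Phi = rng.standard_normal((M, N))

P, q, Rt = np.linalg.svd(H, full_matrices=False)     # H = P Q R^T, Q = diag(q)
V, s, Ut = np.linalg.svd(Phi, full_matrices=False)   # Phi = V S U^T, S = diag(s)
R, U = Rt.T, Ut.T                                    # R: N x L, U: N x M

B = rng.standard_normal((K, L))            # JL: entries of B drawn from N(0, 1)
A = B @ R.T @ U                            # K x M, satisfies A^T A = U^T R B^T B R^T U
H_hat = A @ np.diag(1.0 / s) @ V.T         # H_hat = A S^-1 V^T
Theta = B @ np.diag(1.0 / q) @ P.T         # Theta = B Q^-1 P^T

# Here Theta H = B R^T while H_hat Phi = B R^T U U^T: the two sides agree on
# the column space of U, reflecting the approximate nature of this solution.
```

Substituting back shows that the Eq. (12) relationship AᵀA = UᵀRBᵀBRᵀU holds exactly by construction, while the original constraint ΘH = ĤΦ holds only on the column space of U, consistent with the JL-based approximation.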
The preceding solution is summarized below.
Before proceeding, the dimensionality of Θ is parameterized and related to the dimensionality of Ĥ; this will ease consideration of the scaling trade-offs related to accuracy and energy. The size of the compressed-domain processing matrix Ĥ is governed by the size of Θ and Φ (see Eq. (4)). Thus, in addition to the compression factor ξ=N/M, a parameter called projection factor v for Θ is defined as follows:
v=N/K. (13)
Note that v>1 (<1) denotes a compressive (expansive) projection Θ. Similarly, ξ>1 (<1) denotes a compressive (expansive) projection Φ. These, in turn, imply fewer (more) computations associated with Ĥ.
Assuming H is a square matrix, such as, for example, a discrete wavelet transform (DWT) matrix in NA, a solution for Eq. (6), above, is as follows. Setting K=M (or v=ξ) leads to a minimum-error solution and results in the following relationships:
Ĥ=S−1VT and Θ=ĤΦH−1 (14)
The solutions for Θ and Ĥ have dimensionality M×N and M×M (M<<N due to compression), respectively. Processing vectors in CA (with an Ĥ that is smaller than H) would thus reduce the number of computations as compared to NA.
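The exact solution of Eq. (14) can be checked numerically. The following sketch (using NumPy, with hypothetical sizes N=16 and M=4, and random stand-ins for H and Φ) builds Ĥ=S−1VT and Θ=ĤΦH−1 from the SVD (ΦH−1)T=USVT and verifies that ΘH=ĤΦ:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 16, 4  # hypothetical sizes: signal length N, compressed length M (xi = N/M = 4)

H = rng.standard_normal((N, N))    # random stand-in for a square NA processing matrix (e.g., a DWT)
Phi = rng.standard_normal((M, N))  # random projection used for compressive sensing

# SVD in the text's convention: (Phi H^-1)^T = U S V^T
U, s, Vt = np.linalg.svd((Phi @ np.linalg.inv(H)).T, full_matrices=False)

# Eq. (14): exact solution with K = M (v = xi)
H_hat = np.diag(1.0 / s) @ Vt            # M x M compressed-domain matrix
Theta = H_hat @ Phi @ np.linalg.inv(H)   # M x N projection

# By construction, Theta H x = H_hat Phi x for every x
print(np.allclose(Theta @ H, H_hat @ Phi))
```

Note that Θ reduces to UT here, which is consistent with the near-orthogonality used later in the error analysis.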
A solution for Θ and an approximate Ĥ that saves additional computational energy in CA is described above.
To derive the approximate solution, the JL lemma, which states that the inner product of vectors is preserved under random projections, is invoked. The results show that Θ=Ĥ(ΦH−1) and that each row of Ĥ needs to be drawn from the normal distribution N(0, Σ), where Σ=VS−2VT; S is a diagonal matrix and V is a unitary matrix obtained from the following singular value decomposition (SVD): (ΦH−1)T=USVT.
In this case, the solutions for Θ and Ĥ have dimensionality K×N and K×M (where K can be chosen to be smaller than M or v>ξ), respectively. Such an approach (with a much smaller Ĥ matrix) would reduce the number of computations in CA below those required for the exact solution and save additional computational energy. This energy saving comes at the cost of accuracy in solving Eq. (6). However, as described below, this cost can be small and, in fact, K<<M (v>>ξ) can be reliably used.
This approach is also applicable to multi-rate signal-processing systems, and Eq. (6) is solved when H is a non-square matrix.
For the case when H is of dimensionality L×N (L≠N), the JL lemma is used to derive a near-orthogonal matrix Θ and solve for Ĥ using the SVDs of H and Φ. The derivation is presented above, where Θ is shown to be of dimensionality K×L, with its elements drawn from N(0, 1). It is also shown that Ĥ=ΘHUS−1VT, where U, S, and V are derived from the SVD: Φ=VSUT.
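The non-square construction above can be sketched numerically as follows (hypothetical sizes L=8, N=16, M=12, K=4; H and Φ are random stand-ins). Since ĤΦ=ΘHUUT and UUT is only a near-identity projection, the identity ΘH=ĤΦ holds approximately rather than exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, M, K = 16, 8, 12, 4  # hypothetical sizes: H is L x N, Phi is M x N, Theta is K x L

H = rng.standard_normal((L, N))    # stand-in for a non-square NA processing matrix
Phi = rng.standard_normal((M, N))  # random projection

# SVD in the text's convention: Phi = V S U^T (V: M x M, S: M x M, U: N x M)
V, s, Ut = np.linalg.svd(Phi, full_matrices=False)
U = Ut.T

Theta = rng.standard_normal((K, L))             # JL lemma: elements drawn from N(0, 1)
H_hat = Theta @ H @ U @ np.diag(1.0 / s) @ V.T  # K x M compressed-domain matrix

# H_hat Phi = Theta H U U^T, which approximates Theta H because U U^T is a
# near-identity projection for a random Phi
err = np.linalg.norm(H_hat @ Phi - Theta @ H) / np.linalg.norm(Theta @ H)
print(H_hat.shape, err < 1.0)
```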
Algorithm 1 shows the pseudocode (with the correct scaling constants) that summarizes an approach for simultaneously solving for Θ and Ĥ under the three conditions described in this section. For the case of a non-square L×N (L>N) processing matrix H, Algorithm 1 also shows (on line 15) an optional step of orthogonalization, such as, for example, by the Gram-Schmidt process, before deriving B, A, and Ĥ. This ensures a perfectly orthonormal Θ when its row rank is greater than the column rank. Next, system-level metrics are described that will be used to evaluate the approach in CA.
The approach above opens up many system design options. To understand the associated accuracy trade-offs, the precise metrics relevant to inference applications are discussed below. In addition to comparing the proposed CA with NA as a baseline approach, CA is also compared with RA, in which the sensor node transmits compressed data to an external platform to reduce the amount of data transmitted (hence, saving communication energy and/or alleviating bandwidth constraints); the data are reconstructed on the external platform before performing signal processing.
Since CA solves for a projection of the processed signal (Θy) in NA, the accuracy of processing in CA is expected to be correlated with the ability to recover the y features from Θy. If the reconstructed features are denoted as y*CA, the SNR in CA can be defined as follows:
SNRCA=10·log[∥y∥22/(∥y*CA−y∥22)] dB. (15)
Similarly, the performance in RA is governed by the ability to recover the y*RA features. However, since reconstruction occurs before processing in RA, the reconstructed features y*RA are related to the reconstructed signal x*RA as y*RA=Hx*RA. Thus, the SNR in RA can be defined as follows:
SNRRA=10·log[∥y∥22/(∥Hx*RA−y∥22)] dB. (16)
For feature extraction and classification, a primary concern is how the inner-product error (IPE) of feature vectors scales with ξ. For any two feature vectors yi and yj, the IPE between the inner product in CA (i.e., ŷiTŷj) and the inner product in NA (i.e., yiTyj) is given by the following equation:
IPE=|ŷiTŷj−yiTyj|/(yiTyj) (17)
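The SNR and IPE metrics of Eqs. (15)-(17) can be expressed compactly in code. The following sketch assumes the logarithm in the SNR definitions is base-10 (as is conventional for dB) and uses toy feature vectors for illustration:

```python
import numpy as np

def snr_db(y, y_rec):
    """SNR of recovered features y_rec w.r.t. the NA features y, per Eqs. (15)-(16)."""
    return 10.0 * np.log10(np.sum(y ** 2) / np.sum((y_rec - y) ** 2))

def ipe(yhat_i, yhat_j, y_i, y_j):
    """Inner-product error between CA and NA feature vectors, per Eq. (17)."""
    return abs(yhat_i @ yhat_j - y_i @ y_j) / (y_i @ y_j)

y = np.array([1.0, 2.0, 3.0])       # toy NA feature vector
y_rec = np.array([1.1, 1.9, 3.05])  # toy reconstruction
print(round(snr_db(y, y_rec), 1), ipe(y, y, y, y))
```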
The scaling characteristics of IPE with respect to the dimensionality of Θ are analyzed below. Trade-offs arise, for example, in a spike-sorting application and in a seizure-detection application.
As discussed below, scaling of the first dimension K (or v) of Ĥ and Θ degrades IPE. If it degrades at a slow rate, it enables a smaller Ĥ and hence, reduces the amount of computation significantly. The rate of degradation can be quantified by invoking the distance-preservation guarantees as described in M. Rudelson and R. Vershynin, “Non-asymptotic theory of random matrices: Extreme singular values,” arXiv preprint arXiv: 1003.2990, April 2010 (“Rudelson”), incorporated herein by reference in its entirety for all purposes. For an input vector x, the following relationship exists (from the near-orthogonality of Θ):
∥Θx∥≈∥UUTx∥ (18)
However, since Φ is a random projection, as described in Rudelson, ∥UUTx∥≈∥x∥.
As shown below, the measured IPE degrades at a slow rate when K is decreased (v is increased).
Because ξ=N/M quantifies the amount of compression achieved by compressive sensing, as ξ becomes larger, the performance of RA and CA is expected to deteriorate with respect to NA. The present innovation provides for computations to be viably performed on the sensor node, with the additional benefit of computational energy reduction (due to the fewer operations required in CA). As described below, the present innovation applies to energy-constrained sensor nodes, where devices can be made more computationally powerful thanks to the energy savings enabled by the explicit use of efficient representations for the embedded signals, exploited alongside algorithmic and architectural optimizations.
While v=N/K provides a knob to obtain additional computational energy savings in the CA approach since the approximate solution permits a smaller Ĥ matrix, these energy savings come at the cost of accuracy. The impact on performance and computational energy if v and ξ knobs are turned simultaneously is shown below. Also shown below is a comparison of the accuracy and energy savings to a case where an exact solution is used for Ĥ.
In sensing systems, communication bandwidth, not just communication energy, may be of concern. A passive implant functioning as a spike-acquisition transponder is a typical example of such a case: communication poses a bandwidth limitation rather than an energy limitation, since the implant transmits passively. The implant is nonetheless severely energy-constrained, and thus unable to accommodate extensive local processing. For communication, however, it can take advantage of a passive transmitter based on (inductive) backscattering. The data rate of such a transmitter is limited (due to the practical inductors that can be formed). The objective within the implant is thus to reduce the data rate to a level that can be supported by the passive transmitter while consuming no more energy than that required to achieve this level. An embodiment as described herein substantially improves the accuracy of the signal-processing system and enables two knobs for trading algorithmic performance in exchange for reduced computational complexity. The energy savings are linear with respect to each of these knobs.
In an exemplary control system for neural prosthesis, a passive transponder is used to transmit spike data serially, thus requiring buffering over all channels, at data rates up to 1 Mbps. Thus, spikes on the implant can be detected and aligned before transmission. This can significantly reduce the data rates. Spikes are sorted on an external head-stage before analysis, which comprises feature extraction and clustering. For example, DWT and K-means are two algorithms that can be used for feature extraction and clustering, respectively. After sorting, the data rates can become significantly lower. Spike trains from each sorted cluster can then be analyzed to extract statistical parameters, such as, for example, the spike count (SC), neuron firing rate (FR), inter-spike interval (ISI), and coefficient of variation (CV). These parameters eventually steer an algorithm for prosthesis control.
In CA, spikes on the implant are detected and aligned. Each detected spike is compressively sensed through random projections. This process can potentially help alleviate the bandwidth requirements of a passive transponder. Spike sorting is then performed directly on compressively-sensed data. This can be done either on the external head-stage or on the implant itself. If done on the implant, it permits real-time operation by avoiding reconstruction, while potentially reducing the computational energy of spike sorting. The results below suggest that the computational energy can be reduced substantially. If done on the head-stage, CA can reduce the communication constraints of the implant drastically (due to compressive sensing). This implies that low-energy or zero-energy communication links, such as, for example, based on passive impedance modulation, may be viable. The cost is only a small increase in computational energy (for the random projection of data) on the implant.
In the filter bank implementation, the DWT of a signal is derived by passing it through a series of filters. First, vector x is passed through a low-pass filter (LPF) via convolution. The signal is simultaneously decomposed using a high-pass filter (HPF). With half the frequency band removed, the outputs of each branch can be down-sampled by 2× without risk of aliasing. This constitutes one level of wavelet decomposition. The process is repeated on the LPF outputs to achieve higher levels of decomposition. To formulate the entire process as a matrix operation in NA, the processing between a vector of filter coefficients g and the N-sample spike vector x can be represented as a convolution operation:
z[n]=(g*x)[n]=Σk=−∞∞ g[n−k]x[k], i.e., z=GNx (19)
where z is the filtered signal of N samples and GN is the N×N convolution matrix whose rows are shifted versions of the coefficient vector g. For the DWT algorithm, GNL and GNH can be used to represent the LPF and HPF operations, respectively. After the filtering process, down-sampling can be implemented by 2× at each level of decomposition through an N/2×N matrix D2,N:
Using a cascade of D-G operators, the full DWT operation can be represented in NA as the following linear transformation:
where y is the N-sample DWT of spike samples x. For L levels of decomposition, sub-matrices Hn (1≤n≤L+1) are given by:
Each pair of matrices, GN/2
Given the DWT formulation in NA, as shown in
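One decomposition level of the cascade of D-G operators described above can be sketched as a matrix operation. The Haar filter coefficients below are assumed purely for illustration; the construction builds the circular-convolution matrix GN of Eq. (19) and the N/2×N down-sampling matrix D2,N, and checks that the orthonormal decomposition preserves signal energy:

```python
import numpy as np

N = 8
g = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar LPF coefficients (assumed for illustration)
h = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Haar HPF coefficients

def conv_matrix(c, n):
    """n x n circular-convolution matrix G_n of Eq. (19): row r computes sum_k c[k] x[(r-k) mod n]."""
    G = np.zeros((n, n))
    for r in range(n):
        for k, ck in enumerate(c):
            G[r, (r - k) % n] = ck
    return G

# N/2 x N down-sampling matrix D_{2,N}: keeps every other filtered sample
D = np.zeros((N // 2, N))
D[np.arange(N // 2), 2 * np.arange(N // 2)] = 1.0

x = np.arange(float(N))
approx = D @ conv_matrix(g, N) @ x  # one decomposition level: LPF branch
detail = D @ conv_matrix(h, N) @ x  # HPF branch

# Orthonormal Haar with circular boundaries preserves signal energy across the two branches
print(np.isclose(approx @ approx + detail @ detail, x @ x))
```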
Experimental Results:
The spike sorting and analysis systems of
SC is determined by counting the number of spikes in each cluster after K-means. The first step in computing CV is to determine the ISI histogram. The envelope of the histogram is then modeled as a Poisson distribution. This model is directly used to determine CV, which is defined as the ratio of the standard deviation to the mean of the distribution function of the ISI histogram. To compute FR for each class, the number of spikes that occur in non-overlapping windows, each of width 300 ms, is determined. Then a Gaussian filter with a length (L) of 30 and variance (σ) of 3 is used to smooth the binned FR estimates. The bin width and smoothing-filter parameters are chosen empirically to avoid discontinuities in the FR curve. The mean FR is then computed from the smoothed curve.
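The SC, CV, and FR computations above can be sketched as follows. This is a simplified illustration: the Poisson-envelope modeling of the ISI histogram is reduced here to the empirical CV of the ISIs, and the regular spike train used as input is synthetic:

```python
import numpy as np

def spike_stats(spike_times_ms, bin_ms=300.0, flen=30, sigma=3.0):
    """SC, CV, and mean FR from one sorted cluster's spike times (in ms).

    Simplification: CV is computed directly from the empirical ISIs rather than
    from a fitted Poisson envelope of the ISI histogram."""
    t = np.asarray(spike_times_ms, dtype=float)
    sc = t.size                            # spike count (SC)
    isi = np.diff(t)                       # inter-spike intervals (ISI)
    cv = isi.std() / isi.mean()            # coefficient of variation (CV)
    # FR: spike counts in non-overlapping 300 ms windows
    n_bins = int(np.ceil(t.max() / bin_ms))
    counts, _ = np.histogram(t, bins=n_bins, range=(0.0, n_bins * bin_ms))
    fr = counts / (bin_ms / 1000.0)        # spikes/s in each bin
    # Gaussian smoothing filter, length 30 and sigma 3 as in the text
    k = np.arange(flen) - (flen - 1) / 2.0
    w = np.exp(-k ** 2 / (2.0 * sigma ** 2))
    w /= w.sum()
    fr_smooth = np.convolve(fr, w, mode="same")
    return sc, cv, fr_smooth.mean()

# Perfectly regular spiking: one spike every 100 ms for 30 s
sc, cv, mean_fr = spike_stats(np.arange(0.0, 30000.0, 100.0))
print(sc, round(cv, 6))
```

For a perfectly regular train the CV is zero, as expected; irregular (Poisson-like) trains push the CV toward one.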
Since the performance in CA and RA are related to the ability to reconstruct the feature vectors, the error introduced in each approach is analyzed.
With fewer compressively-sensed samples (i.e., larger ξ), accuracy of SC, CV, and FR estimates are expected to deteriorate in RA and CA. Since H is a square processing matrix in the neural prosthesis application, the exact solution for Ĥ is used.
Since the approximate solution permits a smaller Ĥ matrix, it enables additional savings in computational energy. However, as described above, due to the approximation required in ŷ, this can impose a performance cost.
As another example, the Nyquist-domain processing matrix H that is considered is non-square. The compressed-domain equivalent matrix Ĥ is derived using the solution set out above. The Nyquist-domain algorithm for seizure detection is described, which employs patient-specific classifier training.
The baseline detector in NA was validated on 558 hours of EEG data from 21 patients (corresponding to 148 seizures) in the CHB-MIT database. For every patient, up to 18 channels of continuous EEG were processed using eight BPFs, leading to an FV dimensionality of 144. The Nyquist-domain detector has been demonstrated to achieve an average latency, sensitivity, and specificity of 4.59 sec., 96.03%, and 0.1471 false alarms per hour, respectively.
To enable a transformation to the compressed domain, the focus is on computations in the feature-extraction stage of
fij=H*iD4,512xj (21)
where fij is the filtered EEG data derived from the ith filter acting upon the jth EEG channel. The Nyquist-domain processing matrix for each BPF can thus be defined as Hi=H*i D4,512. This matrix is rectangular and has a dimensionality of 128×512. As shown in
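The dimensionality of the Nyquist-domain BPF matrix Hi=H*i D4,512 of Eq. (21) can be illustrated directly. The 128×128 band-pass convolution matrix H*i is replaced here by a random stand-in, since only the shapes are being demonstrated:

```python
import numpy as np

N, d = 512, 4        # EEG samples per epoch per channel; down-sampling factor
M_out = N // d       # 128 filtered samples per channel per BPF

# D_{4,512}: 128 x 512 matrix that keeps every 4th sample
D = np.zeros((M_out, N))
D[np.arange(M_out), d * np.arange(M_out)] = 1.0

# Stand-in 128 x 128 band-pass convolution matrix H*_i (random; shape illustration only)
Hstar = np.random.default_rng(2).standard_normal((M_out, M_out))

Hi = Hstar @ D  # Nyquist-domain processing matrix H_i = H*_i D_{4,512} of Eq. (21)
print(Hi.shape)
```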
Experimental Results:
The error in the FVs (represented by IPE) and the performance of the end-to-end system are discussed below. As can be seen, the performance does not correlate directly with the IPE because the information content of the features is what controls the performance of the system. This behavior, which is unlike the previous case study, is due to the presence of the spectral-energy operation in the feature-extraction stage. Thus, the variation in mutual information with respect to ξ in CA is reviewed and compared with that in RA.
In CA, the compressed-domain processing matrices Ĥi are derived from the corresponding rectangular NA matrices Hi using the solution above. Note that Ĥi has K×M [i.e., (N/v)×(N/ξ)] entries. As in NA, the processed signal from each filter is then obtained as f̂ij=ĤiΦxj, where the processing matrix Ĥi acts directly on the compressively-sensed signal Φxj. A CA estimate of the spectral energy is then derived as ŷij=f̂ijTf̂ij.
The error in the FVs (IPE) is defined as IPE=∥ŷij−yij∥/yij. The error is expected to increase with increasing compression (ξ>1×). For these experiments, v was kept at 1× and ξ was scaled. The computational savings in CA thus increase with ξ [Ĥ has (N/v)×(N/ξ) entries].
To evaluate the performance of the compressed-domain detector, FVs were derived from the CHB-MIT database. These FVs were used to train and test the SVM classifier in a patient-specific manner. A leave-one-out cross-validation scheme was employed for measuring the performance of the detector.
The information content in the FVs, which has been shown to be a metric that directly indicates the end-to-end performance of the detector, is described below.
Mutual information between the FVs and the class labels acts as an indicator for the performance of a classifier. High mutual information results in better performance.
Hardware Analysis:
The hardware complexity of CA is compared below with that of NA. The number of computations required in CA can be substantially lower. However, there is an increased cost in storage that is required to accommodate the extra coefficients in Ĥ.
As can be seen from the foregoing, although CA provides substantial savings in computation and communication energies, it potentially requires more data storage than NA. Consequently, architectures and technologies that address the memory energy and footprint can play an important role in the use of CA.
Sparsity of signals provides an opportunity to efficiently represent sensor data. Compressive sensing is one technique that exploits signal sparsity in a secondary basis to achieve very low-energy compression at the cost of high complexity in signal reconstruction. The energy for reconstruction can present a significant barrier to signal analysis, which is becoming increasingly important in emerging sensor applications. The approach described above not only circumvents the energy imposed by signal reconstruction, but also enables computational energy savings by processing fewer signal samples. Through analytical validations, this approach was shown to achieve error bounds in feature estimates that are very close to the expected lower limit. This approach was validated with the two case studies described above, namely spike sorting for neural prosthesis and EEG classification for seizure detection. For the neural-prosthesis application, the experimental results suggest that up to 54× fewer samples can be processed while restricting detection errors to under 3.5%. Using this approach, the reduction in the communication energy can also be significant. For instance, in the seizure-detection application, the detection error was under 2.41% when ˜21× fewer transmitted EEG samples were used. The proposed approach thus provides a framework for signal-processing systems that address system-resource constraints, such as energy and communication bandwidth, through efficient signal representation.
In another exemplary embodiment, a prototype IC is used to enable the two resulting power-management knobs within an energy-scalable EEG-based seizure-detector. The resulting algorithm for compressed-domain analysis increases the number of signal-transformation coefficients that need to be stored compared with a traditional Nyquist-domain algorithm. A key attribute of the IC is thus a scalable SRAM. The algorithm and detailed analysis and measurements from the IC implementation are described below. This implementation can take advantage of encryption and security to ensure that the sensitive patient information is maintained in a safe and secure manner. The additional energy savings provided by such encryption- and security-based systems and methods are unexpected because of the few bits output by the classifier, and allow encryption and security features and functionality to be provided at the sensor node.
Signal-classification algorithms typically base their decision rules on key features extracted from the signals via signal-processing functions; this is particularly true for medical detectors, where the features often correspond to physiological biomarkers. These algorithms then use a classifier to perform modeling and inference over the extracted features. Powerful classification frameworks exist in the domain of machine learning that can construct high-order and flexible models through data-driven training. In many such frameworks, the classification step utilizes a distance metric, such as, for example, 2-norm or inner product, between feature vectors (FVs). In certain cases, the distance metric may also be invoked within the feature extraction step, for instance, to extract spectral energies, which form generic biomarkers for neural field potentials in applications such as, for example, brain-machine interfaces and sleep disorders, among other things. The following description relates to a seizure-detection application, where clinical studies have shown that EEG spectral energy, derived using the inner product between FVs after linear finite impulse response (FIR) filtering, can serve as a biomarker that indicates the onset of a seizure.
The scalability of ξ and v can be exploited as knobs for system power management. An important consequence of the algorithmic construction proposed is that the CD-BPF matrices Ĥi (which are of dimensionality (N/ξ)×(N/ξ) for the exact solution and (N/v)×(N/ξ) for the approximate solution) do not retain the regularity of Hi. CD-BPF matrices Ĥi, derived using Hi and Φ, disrupt the regularity and zeros in Hi. The complexity of the CD-BPFs thus scales (a) quadratically with ξ for the exact solution and (b) linearly with ξ and v for the approximate solution. Even though Hi are of dimensionality N×N, as shown in
To exploit this attribute, an energy-scalable processor architecture for a compressed-domain seizure detector can be used, whose block diagram is shown in
In one embodiment, filter coefficients are represented using 8 bits of precision. Thus, to support CD-FE computations, the processor requires a maximum of 32 kB accesses per second from the memory bank.
As shown in
The CD-FE energy comprises the logic and SRAM energy subcomponents. The SRAM 3224 consumes a substantial portion of the total CD-FE energy. Its optimization to exploit scalability with respect to ξ and v is thus a key factor. The detector processes an EEG epoch every TEPOCH=2 sec. However, the optimal operating frequency (and supply voltage Vdd,opt) for the CD-FE logic is determined by minimizing the overall CD-FE energy, while ensuring a minimum throughput that allows the active CD-FE computations to be completed in TCD-FE (<2) seconds for each value of ξ and v. For the remainder of the epoch (i.e., TEPOCH−TCD-FE), the logic and SRAMs 3224 can be placed in low-energy idle modes.
In the active mode, while set to a supply voltage of Vsram,min, EactSRAM comprises active-switching (Eact,swiSRAM) and leakage (Eact,lkgSRAM) energies for a period of TCD-FE. In the idle mode, while set to a supply voltage of Vsram,drv, EidlSRAM comprises only the leakage energy (Eidl,lkgSRAM) for the duration (TEPOCH−TCD-FE). Thus, the SRAM energy components can be represented as follows:
The duration of the active mode (TCD-FE) in Eq. (22) depends on ξ, v, and the optimum logic voltage Vdd,opt. For smaller values of ξ and v, there are more coefficients in Ĥi and TCD-FE (the active CD-FE time) is higher, and for larger values of ξ and v, there are fewer coefficients in Ĥi and TCD-FE (the active CD-FE time) is lower. For instance, TCD-FE is 0.26 sec. for ξ=4× and v=8×, as shown in
Further, the number of active subarrays (Nsub) is also a function of ξ and v;
The IC was prototyped in a 0.13 μm CMOS process from IBM. The die photograph of the integrated circuit forming the circuits of the compressed-domain processor of
The total processor energy is in the range 0.3-2.2 μJ (for linear SVMs), 12.6-38.5 μJ (for non-linear SVMs using a fourth-order polynomial kernel (poly4)), and 18.1-53.3 μJ (for SVMs with an RBF kernel). Since classification results are produced every two seconds (i.e., FVs are processed at a rate of 0.5 Hz), the total processor power lies in the range 0.15-27 μW for all SVM kernels.
As described above, the SRAM leakage energy changes with both ξ and v. Thus, the optimal voltage (Vdd,opt) for the CD-FE logic changes with both ξ and v. In order to determine Vdd,opt, the total CD-FE energy comprising the logic and SRAM energies is minimized.
The CD-FE energy comprises the logic and SRAM energies. Below are provided measurement results for these energy subcomponents using both the exact and approximate solutions for Ĥi.
For the exact solution, where ξ=v, the CD-FE complexity scales quadratically with ξ; for the approximate solution, it scales linearly with both ξ and v.
From the above results, it can be seen that the SRAM energy can significantly dominate the CD-FE logic energy at all values of ξ and v. This behavior validates the focus on optimizing the SRAM energy as described above. For example, at ξ=4× and v=2×, the total SRAM energy is 2.1 μJ and the CD-FE logic energy is 70.8 nJ. The contribution of the energy subcomponents is also apparent in the total CD-FE energy plots shown for the exact and the approximate solutions in
Since Hi are Toeplitz matrices implementing convolution, the filter order determines the number of non-zero coefficients in Hi (see
One downside of directly processing compressively-sensed EEG is that the SVM model for classification can become somewhat more complex at higher values of ξ and v. Intuitively, this happens due to the additional error introduced in the FVs when the compressed-domain equations (Eq. (6)) are solved, which necessitates complex decision boundaries in the classifier.
Confidentiality of the data generated and/or processed by the system and sensors can be accomplished through strong encryption at the sensor level. For example, AES can be used for encryption. AES is the current symmetric-key cryptography standard and is widely used for providing confidentiality to sensitive information. It employs keys of length 128, 192, or 256 bits to process data blocks of fixed length, such as, for example, 128 bits.
In one embodiment AES-128 (key-size of 128 bits) is used. Initially, a key expansion operation is performed where eleven 128-bit round keys are derived from the cipher key, with the first-round key being equal to the original secret key. Each byte of the 128-bit input is added modulo-2 to each byte of the first-round key, using bitwise XOR, in the AddRoundKey operation. Then, four operations, SubBytes (nonlinear substitution using S-BOX), ShiftRows, MixColumns, and AddRoundKey are repeated for ten transformation rounds (except for the last round that does not include MixColumns) until the input plaintext is converted into the final ciphertext. The decryption algorithm is the inverse of the encryption algorithm.
Different versions of SHA can be used for integrity checking to detect malicious modifications. SHA-2, the currently deployed standard, consists of a set of four hash functions, with 224-, 256-, 384-, or 512-bit outputs (called digests). For example, SHA-512 generates a 512-bit digest after performing 80 rounds of arithmetic operations, right shifts, and rotations. Another example of a hash function that can be used is Keccak, which was recently selected as the new hash standard, SHA-3. Keccak was reported to have the lowest energy consumption among the five finalist SHA-3 candidates.
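Both hash families described above are available in the Python standard library, which makes the digest sizes easy to illustrate (the payload bytes below are an arbitrary example):

```python
import hashlib

data = b"compressively sensed sensor payload"  # example payload

digest_sha2 = hashlib.sha512(data).digest()    # SHA-2 family member with a 512-bit digest
digest_sha3 = hashlib.sha3_512(data).digest()  # SHA-3 (standardized Keccak), 512-bit digest

print(len(digest_sha2) * 8, len(digest_sha3) * 8)  # both 512 bits
```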
It is worth noting that SHA alone cannot guarantee integrity. An imposter may hash and send spurious data if he/she has information about the hash function used. However, the combination of AES and SHA eliminates that possibility, as the attacker cannot generate encrypted data whose plaintext matches the hash digest, without knowing the AES secret key.
Hardware Implementations:
To evaluate the energy consumption of each component in the above embodiment of an encompression architecture, hardware implementations of CS, AES-128, SHA-2, and SHA-3 winner Keccak were developed and synthesized using Synopsys Design Compiler based on the 65-nm TSMC standard cell library. Synopsys Power Compiler was used to estimate the power consumption of these designs, based on gate-level simulation with 10 MHz clock frequency.
Tables IV and V report the energy consumption for each implemented algorithm. In both tables, B refers to bytes.
Table IV is an abbreviated version of a full table that contains 45 entries. It reports the average energy consumption for compressor implementations based on different input and output block sizes (various widths and heights of matrix). The execution time is the time required to process one input block. As expected, the compressor's energy consumption can be seen to be proportional to Φ's size.
For AES and hash functions, the amount of computation is determined by the input size. As a result, the measured energy consumption is linear in input size. Table V reports the energy consumption of AES and hash function implementations as a linear function of input size. These algorithms have fixed block sizes, which are shown in the second column. For each algorithm, the inputs used in the simulation are multiples of the block size. Their range is 16-1024 bytes for AES, 64-1024 bytes for SHA-2, and 128-4096 bytes for Keccak. The root mean square fitting error percentages of the linear models are reported in the last column.
In order to characterize the energy consumption with and without compression, Ecs (n, m), Eenc (m) and Ehash (m) respectively, denote the energy consumption for compressing an n-byte input into m bytes, encrypting m bytes, and hashing m bytes. Let r denote the compression ratio, r=n/m.
Let E0(n) be the total energy required to encrypt and hash n bytes without compression, and E1(n, r) the total energy required to encompress n bytes with compression ratio r. The energy reduction, ρ(n, r), is defined as:
ρ(n, r)=[E0(n)−E1(n, r)]/E0(n) (24)
A software encompression module may be implemented on a commercial sensor node. The energy consumption of an encompressive sensor at the system level may be determined, taking into account all sensor node components, such as the ADC and radio transceiver.
One embodiment of a sensor platform is the eZ430-RF2500.
An ED's current consumption was measured while it ran a temperature monitoring application. In this application, the ED collects temperature data and sends it to the AP approximately once per second. The oscilloscope shot in
The CPU's contributions to power consumption can be evaluated in a straightforward manner, as the events can be traced through an analysis of the application software. The radio events, however, are abstracted from the user by design and often occur by default, executed by hardware, and are invisible to the programmer. Five radio events occur every time the radio is woken up from sleep and performs a successful reception or transmission of information. They are (i) oscillator startup (CC2500 oscillator used to source the chip's system clock), (ii) ripple counter timeout (how many times a ripple counter must time out after a successful oscillator startup routine before signaling the CC2500 chip's ready symbol), (iii) PLL calibration (calibration of the on-chip frequency synthesizer for reception (RX) and transmission (TX)), (iv) RX mode (necessary step before entering TX so that a Clear Channel Assessment can check whether another radio is already transmitting on the channel of interest), and (v) TX mode (data transmission).
After separating the radio and MSP430's current consumption components, each component's current consumption can be calculated based on the measured data and the user manual. Table VI shows the results for the most significant events that contributed to power in both the CPU and radio. Measured data in the middle two columns are italicized to differentiate them from data obtained from the user manual. Note that since only 25 bytes were transmitted in one transmission in this application, the four radio events prior to TX dominate the total current consumption. However, these initialization steps occur only once per transmission, and thus should be considered a constant overhead.
CS, AES-128, SHA-2, and SHA-3 winner Keccak were implemented on an eZ430 ED. CS accepts 16×r-byte blocks and outputs 16-byte blocks. Thus, the input size varies for implementations with different compression ratios. Each algorithm was implemented individually because of the sensor node's code size constraint. A lightweight block cipher called XTEA was also implemented, which is a 64-bit cipher suitable for ultra-low power applications. The execution time of each of these software implementations per input byte, and the product of the current drawn and execution time, are shown in Table VII. These results were derived by fitting a linear model to measurements for input sizes of 64, 128, 256, and 512 bytes. The average fitting errors for the five algorithms reported in the table are 1%, 1%, 2%, 2%, and 3%, respectively.
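The linear-model fitting described above can be sketched as an ordinary least-squares fit. The (input size, energy) measurements below are hypothetical stand-ins for the per-algorithm measurements at 64-512 byte inputs; the point is only the fitting and the RMS fitting-error computation:

```python
import numpy as np

# Hypothetical (input size, energy) measurements in (bytes, uJ), standing in for
# the per-algorithm measurements at 64-512 byte inputs described in the text
sizes = np.array([64.0, 128.0, 256.0, 512.0])
energy = np.array([143.0, 284.0, 570.0, 1135.0])

slope, intercept = np.polyfit(sizes, energy, 1)  # linear model E(n) ~ slope*n + intercept
pred = slope * sizes + intercept
rms_fit_err_pct = 100.0 * np.sqrt(np.mean(((pred - energy) / energy) ** 2))
print(round(slope, 2), rms_fit_err_pct < 3.0)
```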
To estimate the energy impact of the software encompressor implemented on a sensor node, an energy model that consists of the following components is assumed. Econst represents a constant overhead that accounts for CPU activation, radio initialization, and other factors. Esense denotes the front-end energy consumption for sensing, and Exmit denotes the transmission energy excluding the initialization overhead. Ecs, Eenc, and Ehash were defined above. All energy components, except Econst are functions of the amount of data processed.
Without any security incorporated, the energy consumption for sensing and transmitting n bytes is:
E′0(n)=Econst+Esense(n)+Exmit(n) (25)
Without CS, to sense, encrypt, and hash n bytes of input, and then transmit n bytes of encrypted message along with k bytes of fixed-length hash value, the energy consumption would be:
E0(n)=Econst+Esense(n)+Eenc(n)+Ehash(n)+Exmit(n+k) (26)
With encompression, the n-byte input is encompressed before transmission, so that for compression ratio r only n/r bytes (plus the k-byte hash) are encrypted, hashed, and transmitted. Thus, the energy consumption is given by:
E1(n)=Econst+Esense(n)+Ecs(n)+Eenc(n/r)+Ehash(n/r)+Exmit(n/r+k) (27)
The energy reduction is defined by
(E0(n)−E1(n))/E0(n) (28)
In addition, the energy bonus is defined as
(E′0(n)−E1(n))/E′0(n) (29)
From the measurements, it was determined that all the energy components could be modeled accurately as linear functions, i.e., Ecs(n)=αn, Eenc(n)=βn, Ehash(n)=γn, Esense(n)=λn, and Exmit(n)=θn. Since the compressor has a fixed output size of 16 bytes, Ecs here only depends on the input size, which is different from the case considered in Eq. (24). Using Tables VI and VII, and the fact that Vcc=3.226 V, the constants required in Eqs. (25), (26), and (27) can be determined. Since both SHA-2 and Keccak have 32-byte outputs, k is a constant, i.e., k=32. The rest of the constants are computed to be:
β(AES)=2.20 μJ/B, β(XTEA)=1.65 μJ/B, γ(SHA-2)=1.06 μJ/B, γ(Keccak)=0.98 μJ/B, λ=0.29 μJ/B, and θ=0.71 μJ/B. Econst is derived by adding up the CPU and radio start-up overheads, λ is obtained from the user manual, and the other constants are translated directly from Tables VI and VII.
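The linear energy model can be sketched numerically. In the fragment below, the per-byte coefficients are the measured values just given (AES-128/SHA-2 variant), while ALPHA_CS and E_CONST are hypothetical placeholders, since their numerical values are not stated above; the encompressed payload is assumed to be n/r bytes for compression ratio r.

```python
# Numerical sketch of the linear energy model of Eqs. (25)-(27).
# Per-byte coefficients (uJ/B) are the measured values given above for
# AES-128/SHA-2; ALPHA_CS and E_CONST are hypothetical placeholders.
LAMBDA_SENSE = 0.29    # sensing (from text)
THETA_XMIT   = 0.71    # transmission (from text)
BETA_AES     = 2.20    # AES-128 encryption (from text)
GAMMA_SHA2   = 1.06    # SHA-2 hashing (from text)
ALPHA_CS     = 0.10    # CS encompression (assumed placeholder)
E_CONST      = 1000.0  # constant start-up overhead in uJ (assumed)
K_HASH       = 32      # fixed hash output size in bytes (SHA-2)

def e_nosec(n):
    # Eq. (25): sense and transmit n bytes, no security
    return E_CONST + LAMBDA_SENSE * n + THETA_XMIT * n

def e_sec(n):
    # Eq. (26): sense, encrypt, and hash n bytes, transmit n + k bytes
    return (E_CONST + (LAMBDA_SENSE + BETA_AES + GAMMA_SHA2) * n
            + THETA_XMIT * (n + K_HASH))

def e_encomp(n, r):
    # Encompress n bytes to n/r, then encrypt, hash, and transmit
    m = n / r
    return (E_CONST + (LAMBDA_SENSE + ALPHA_CS) * n
            + (BETA_AES + GAMMA_SHA2) * m + THETA_XMIT * (m + K_HASH))

n, r = 1280, 10
reduction = 1 - e_encomp(n, r) / e_sec(n)
print(f"energy reduction at n={n}, r={r}: {reduction:.0%}")
```

With these placeholder values the model reproduces the qualitative trend (encompression beats encrypt-and-hash for large inputs); the exact percentages depend on the measured values of ALPHA_CS and E_CONST.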
As shown in
Note that CS can be utilized to reduce energy consumption, even if no cryptography is used. Compared to such a case, it is natural to expect the addition of encryption and hashing to add energy overheads.
Overall, these results demonstrate the benefits of encompression in enabling secure, yet energy-efficient, communication in sensor networks.
As can be seen from the analysis of the energy consumption of each sensor component described above, the linear energy model shows that, by increasing the compression ratio and the data size per transmission, an energy reduction of up to 78% can be achieved. In some cases, the total energy consumption with encompression can even be smaller than that of the original system without security. With a compression ratio of 10, an energy bonus of above 14% can be achieved if the input size is greater than 1280 bytes. Further benefits can be achieved by adding machine learning to the system to provide on-node analysis.
Due to the challenges associated with signal reconstruction, compressive sensing has previously been used primarily in applications where the sensor nodes acquire and relay data to a remote base station for analysis. The present system provides on-node analysis in addition to encompression and encryption. Random projections, which are used in compressive sensing, preserve inner products of signals, which are at the core of several machine-learning algorithms. An important aspect of exploiting inference frameworks for signal analysis, however, is to develop an ability to process signals before classification in order to create data representations based on critical signal features. One embodiment of the present disclosure provides a new auxiliary matrix in the regularized equations to achieve an exact solution for any linear signal-processing function in the compressed domain. This approach retains performance up to much higher values of ξ than were previously possible. An embodiment of a new auxiliary matrix also introduces previously unexplored knobs, which are under the control of a designer, to significantly reduce computational energy based on the accuracy needs of the system. This approach is referred to herein as compressed analysis (CA), wherein the end-to-end embedded signals are representations based on compressive sensing.
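The property that random projections approximately preserve inner products can be illustrated with a small numerical sketch. This is not the patent's exact construction; the dimensions, seed, and signals below are arbitrary examples.

```python
# Sketch: a Gaussian random projection of the kind used in compressive
# sensing approximately preserves inner products, which is why
# classification can operate directly on compressed representations.
import math
import random

random.seed(7)
d, m = 512, 128          # ambient and compressed dimensions (examples)

# Projection with entries ~ N(0, 1/m), so E[<Phi x, Phi y>] = <x, y>.
phi = [[random.gauss(0.0, 1.0) / math.sqrt(m) for _ in range(d)]
       for _ in range(m)]

def project(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in phi]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

x = [random.gauss(0.0, 1.0) for _ in range(d)]
y = [xi + 0.2 * random.gauss(0.0, 1.0) for xi in x]  # correlated signal

true_ip = dot(x, y)
comp_ip = dot(project(x), project(y))
rel_err = abs(comp_ip - true_ip) / abs(true_ip)
print(f"inner product: true {true_ip:.1f}, compressed {comp_ip:.1f}")
```

The distortion shrinks roughly as 1/sqrt(m), so a classifier that depends only on inner products (e.g., a linear kernel) behaves nearly identically in the compressed domain.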
The embodiments described above may be implemented as full sensors, or they can be incorporated as an add-on component, such as, for example, a chip, integrated circuit, or field-programmable gate array, into existing sensors to make them more energy-efficient and allow them to become smart sensors. As shown in
As shown in
The present application claims the benefit of U.S. Provisional Application Ser. No. 62/450,014 filed 24 Jan. 2017, which is incorporated herein by reference in its entirety.
This invention was made with government support under Grants No. CNS-0914787 and No. CCF-1253670 awarded by the National Science Foundation. The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2018/014995 | 1/24/2018 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2018/140460 | 8/2/2018 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
8712157 | Marchesotti | Apr 2014 | B2 |
20040218760 | Chaudhuri | Nov 2004 | A1 |
20090141932 | Jones et al. | Jun 2009 | A1 |
20130332743 | Gueron et al. | Dec 2013 | A1 |
20160022141 | Mittal et al. | Jan 2016 | A1 |
20160034809 | Trenholm et al. | Feb 2016 | A1 |
20160366346 | Shin | Dec 2016 | A1 |
20170331837 | Moon | Nov 2017 | A1 |
Entry |
---|
S. Dasgupta and A. Gupta, “An elementary proof of the Johnson-Lindenstrauss lemma,” Random Structures and Algorithms, vol. 22, No. 1, pp. 60-65, 2002. |
M. Rudelson and R. Vershynin, “Non-asymptotic theory of random matrices: Extreme singular values,” arXiv preprint arXiv:1003.2990, Apr. 2010. |
R. Q. Quiroga, Z. Nadasdy, and Y. Ben-Shaul, “Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering,” Neural Comp., vol. 16, No. 8, pp. 1661-1687, 2004. |
International Search Report for PCT Application No. PCT/US2018/014995, dated Apr. 17, 2018. |
Number | Date | Country | |
---|---|---|---|
20210357741 A1 | Nov 2021 | US |
Number | Date | Country | |
---|---|---|---|
62450014 | Jan 2017 | US |