The ability of modern networked devices and sensors to acquire data and initiate transactions of that data across widespread networks using Web-based services has led to a proliferation in the amount of digital data that must be managed. In addition, the prevalence of “big data” is growing beyond large, scientific data sets to include high-quality audio media, visual media, and databases that combine numerous instances of multiple sets of data in an organized structure. For example, large databases that require expedient access from Web-based services might consist of one or more combinations of personal data, social data, inventory data, financial data, and transaction records, among many others.
Various examples of the principles of the present disclosure are illustrated in the following drawings. The drawings are not necessarily to scale. The drawings and detailed description thereto are not intended to limit the disclosure to the particular forms disclosed. To the contrary, the drawings are intended to illustrate the principles applicable to all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
Disclosed are various embodiments of a system for bit level representation for data processing and analytics. The system can include a computing device comprising a processor and a memory; and an application stored in the memory that, when executed by the processor, causes the computing device to at least: compute likeness measures between discrete samples of data; order data according to a priority value based at least in part on a portion of the likeness measures; construct one or more models based at least in part on a portion of the likeness measures and at least a portion of the ordered data; and transform, according to at least a portion of at least one of the models, samples of data into a progressive, binary representation comprising sets of single-bit coefficients. In some instances of the system, a portion of the samples of data are transformed into the progressive, binary representation using a compression system. In some instances of the system, the compression system uses a prediction about at least one partition of the samples of data to cause the computing device to transform the samples of data into the progressive, binary representation. In some instances of the system, at least one of the sets of single-bit coefficients comprises a set of block transform coefficients. In some instances of the system, at least one of the sets of single-bit coefficients comprises a multiresolution transform coefficient.
Disclosed are various embodiments of a method for bit level representation for data processing and analytics. The method can include computing, via a computing device, likeness measures between discrete samples of data; ordering, via the computing device, data according to a priority value based at least in part on a portion of the likeness measures; constructing, via the computing device, one or more models based at least in part on a portion of the likeness measures and at least a portion of the ordered data; and transforming, via a computing device, samples of data into a progressive, binary representation comprising sets of single-bit coefficients, wherein the transforming occurs according to at least a portion of at least one of the models. In some instances of the method, a portion of the samples of data are transformed into the progressive, binary representation using a compression system. In some instances of the method, the compression system uses a prediction about at least one partition of the samples of data to cause the computing device to transform the samples of data into the progressive, binary representation. In some instances of the method, at least one of the sets of single-bit coefficients comprises a set of block transform coefficients. In some instances of the method, at least one of the sets of single-bit coefficients comprises a multiresolution transform coefficient.
Disclosed are various embodiments of a non-transitory computer readable medium comprising a program for bit level representation for data processing and analytics. The program can, when executed by a processor of a computing device, cause the computing device to at least: compute likeness measures between discrete samples of data; order data according to a priority value based at least in part on a portion of the likeness measures; construct one or more models based at least in part on a portion of the likeness measures and at least a portion of the ordered data; and transform, according to at least a portion of at least one of the models, samples of data into a progressive, binary representation comprising sets of single-bit coefficients. In some instances, a portion of the samples of data are transformed into the progressive, binary representation using a compression system. In some instances, the compression system uses a prediction about at least one partition of the samples of data to cause the computing device to transform the samples of data into the progressive, binary representation. In some instances, at least one of the sets of single-bit coefficients comprises a set of block transform coefficients. In some instances, at least one of the sets of single-bit coefficients comprises a multiresolution transform coefficient.
It is to be understood that the present disclosure is not limited to particular devices, systems, or methods, which may, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only. As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include singular and plural referents unless the content clearly dictates otherwise. The term “include,” and derivations thereof, mean “including, but not limited to.” The term “coupled” means directly or indirectly connected. The terms “block” or “set” mean a collection of data regardless of size or shape. The term “block” can also refer to one of a sequence of partitions of data. The term “coefficient” can include a singular element from a block of data.
Embodiments herein relate to automated quantitative analysis and implementation of a data representation and compression scheme as it applies to digital media. In addition, embodiments herein relate to modeling, arranging, predicting, and encoding digital data such that the final representation of the data requires fewer bits than a previous representation. Accordingly, the present disclosure relates to computational systems, methods, and apparatuses for projecting image data into sparser domains, specifically the here-defined informationally sparse domains. However, the methods are applicable to digital signals and data in general and not to digital images alone. One can define informational sparsity as the relative compaction of the informational bits (or other symbols) in any perfectly decodable representation of a signal. Further, a “compressive transform” can refer to any invertible transform that maps an input signal into an informationally sparse representation, as further discussed within. To measure informational sparsity between distributions of transform coefficients, one can utilize the Gini Coefficient (also known as the Gini index or Gini ratio), from which a computational system can grade the compaction properties of a transform.
The Gini Coefficient measures the disproportionality between an observed distribution of a set of positive numbers and a uniform distribution over the same set. A uniform distribution is the least sparse distribution, with a Gini Coefficient equal to 0, and a distribution that has a single non-zero value is the sparsest, with a Gini Coefficient equal to 1. Mathematically, for a distribution X of k discrete values x, one can define the Gini Coefficient G as follows:

G = [Σi (2i − k + 1) xi] / [k Σi xi], (1)

where i indexes the values in the set in ascending order from 0 to k−1. To determine the informational compaction performance of a compressive transform over a set of samples indexed by a time step t, one can measure the empirical entropy of each coefficient type over the samples and calculate the sparsity as the Gini Coefficient of the set of coefficient entropies. For example, provided a set of samples C(t) of transformed coefficients ci(t), one can measure the informational sparsity of the transformation as the Gini Coefficient of the set:
HC = {Hci : i = 0, 1, . . . , k−1}, (2)

where Hci = −Σv pci(v) log2 pci(v) is the empirical entropy of coefficient type ci over the samples, and where pci(v) is the empirical probability that coefficient ci takes the value v over the samples.
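By way of illustration, the following sketch computes the per-coefficient empirical entropies and the Gini Coefficient of that set for a collection of transformed samples. It is a minimal example of the sparsity measure described above, not a normative implementation, and the function names are illustrative.

```python
import math
from collections import Counter
from typing import Sequence

def empirical_entropy(values: Sequence[int]) -> float:
    """Empirical entropy (bits) of one coefficient type over a set of samples."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def gini(values: Sequence[float]) -> float:
    """Gini Coefficient of non-negative values (0 = uniform, ~1 = maximally sparse)."""
    x = sorted(values)                      # ascending order, i = 0 .. k-1
    k = len(x)
    s = sum(x)
    if s == 0:
        return 0.0
    return sum((2 * i - k + 1) * xi for i, xi in enumerate(x)) / (k * s)

def informational_sparsity(samples: Sequence[Sequence[int]]) -> float:
    """Gini Coefficient of the per-coefficient entropies over transformed samples.

    samples[t][i] is coefficient c_i of the sample at time step t.
    """
    k = len(samples[0])
    entropies = [empirical_entropy([s[i] for s in samples]) for i in range(k)]
    return gini(entropies)

# Example: a transform that concentrates variability into one coefficient is sparser.
flat = [[t % 2, (t // 2) % 2, (t // 4) % 2] for t in range(64)]
peaked = [[t % 8, 0, 0] for t in range(64)]
print(informational_sparsity(flat), informational_sparsity(peaked))
```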
Presented are transformation methods that learn from possibly nonhomogeneous samples of discrete, random variables and derive a sequential arrangement of those variables from which a sequential predictor and entropy encoder can compress samples of such data into a sparser representation. In general, a process that involves organizing the contents of sample data (e.g., by permutation) in some prioritized fashion, learning correlations between the contents of sample data, learning one or more models of the contents' values based upon their correlations and ordering, and entropy encoding a sample's contents based on predictions from the model or models is called a compressive transformation.
Hierarchical Markov Forest (or HMF) models can be used for compressive transformation, although these models are not exclusively applicable to compressive transformation and compressive transformation does not require use of HMFs. An HMF can be constructed with a method in which samples of categorical data are organized (or reorganized) in such a way that the appropriate variable order Markov models (VMMs) constituting the HMF and entropy encoders can compress them very well.
Several illustrative systems are provided for digital image processing using HMFs to model wavelet transform coefficients and facilitate multiresolution analyses that find various applications in denoising, enhancement, and superresolution. Using these features, one can construct an image compression system which utilizes both a wavelet transform and HMF compressive transforms to generate a small, progressive bitstream from which the HMF prediction can actually estimate missing or unencoded data. Media utilizing such estimation can be referred to as Generative Media in light of the fact that these estimations can generate superresolution and super quality data that is unavailable to a decoder through the denoising and enhancement features of the wavelet and HMF compressive transformations.
Described next is an HMF, an HMF construction method, and an associated compressive transform system that learns from a training set of discrete, random variables of possibly different types and derives a sequential arrangement of those variables from which one of a set of VMM predictors predicts a subsequent variable type's value from previously processed variable values in the set.
An HMF is a collection of tree network graphs that each predict the distribution of values a specific variable in a sample is likely to take, conditioned on the observation of the values of variables in the same sample that are higher in some hierarchical ordering. In some instances, each tree network graph can constitute a VMM. For example, consider a list of samples where each of k random variables Xi={xi=0, xi=1, xi=2, . . . xi=k−1} takes on different values Xi(t)={xi=0(t), xi=1(t), xi=2(t), . . . xi=k−1(t)} at some time step t that defines a sequence of samples. The goal of an HMF construction system is to create a forest structure F consisting of k context trees Tj that each model one of the variables in an order specified by the index j that designates the hierarchical ordering. To simplify the forthcoming notation, one can re-index X based on this hierarchical ordering into Xj(t)={xj=0(t), xj=1(t), xj=2(t), . . . xj=k−1(t)}, which is a permutation of Xi. Mathematically, the forest structure can be represented as:
F = {Tj=0(t), Tj=1(t), Tj=2(t), . . . , Tj=k−1(t)}. (3)
Each tree Tj is a VMM for variable xj with a context defined from variables x0 to xj−1. Rj={x0, . . . xj−1} can be defined as a subset of Xj that contains the relevant variables of causal index less than j comprising the tree model Tj. A constituent variable of Rj can be referred to as x<j to emphasize that its index must be less than j. The permutation of Xi into the hierarchical ordering Xj enforces a causal ordering of the variables, such that each tree Tj is a suitable model for xj given all previously traversable variables Rj.
Consider tree networks akin to the traditional downwardly drawn example and inverted as depicted in the accompanying drawings.
The VMM trees constituting HMFs, however, contain nodes that correspond to a particular value of a particular variable type. All of these nodes predict the distribution of the possible values a variable type of interest (xj) might take in a particular sample. In the language describing Context Tree Weighting (CTW) algorithms, context trees containing such nodes are often called tree sources, and these are the actual tree structures with which true CTW algorithms model observed data. One can refer to this type of tree as a Type 2 Tree.
A summary of the construction steps for an HMF follows: (1) compute informational correlations (e.g., mutual and conditional entropies) between the variable types observed in training samples; (2) order the variable types hierarchically according to a priority value derived from those correlations; (3) for each variable type, list the higher-priority variable types most correlated to it (the list Rj); and (4) construct a Type 2 tree Tj for each variable type by updating it from the variables Rj(t) of each training sample.
In one embodiment, a priority value can be assigned to a variable type that is equal to the total information it provides about the other variable types in question (possibly measured by the total, pairwise, mutual entropy between the former variable type and the latter variable types) minus the total cost of compressing the initial variable type in question using the other variable types (possibly measured by the total, pairwise, conditional entropy of the former variable type given the latter variable types). For example, the pairwise mutual entropy relations between variables xk and xl are:

M(xk; xl) = H(xk) − C(xk|xl) = H(xl) − C(xl|xk), (4)
where M(a; b) is the mutual entropy (e.g., the shared information) between two random variables, H(a) is the IID entropy of a random variable, and C(a|b) is the conditional entropy (e.g., the average amount of remaining information) of variable a after observing a value of variable b. The conditional entropy C(a|b) is therefore equivalent to the optimal compression rate of variable a conditioned on variable b, and the mutual entropy M(a; b) is a measure of informational correlation. If the goal is to maximize compression of a series of samples of random variables through elucidation of conditional dependencies, then one should sort a variable type xk to compress in each sample by the total amount of compression it offers the remaining variable types, Mtotal(xk; xl), minus the cost Ctotal(xk|xl) of using the other remaining variable types to compress xk. Then, the priority Πk for each variable type, provided all other variable types, is

Πk = Mtotal(xk; xl) − Ctotal(xk|xl) = Σl≠k M(xk; xl) − Σl≠k C(xk|xl). (5)
To achieve better ordering, one can remove variable types from consideration in future computations involving Equation (5) once they have already been placed properly in the priority list. Other measures of correlation and priority are possible, including those that take more than only pairwise relationships into account. The hierarchical ordering then corresponds to the ordering of variable types by decreasing priority value. A person or device might also repeat the above process recursively or after one or more variable types is placed in hierarchical order, effectively re-sorting each remaining variable type once a variable type has been assigned a hierarchical index j.
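As an illustrative sketch of the subtraction-based priority ordering described above, the following code estimates pairwise mutual information and conditional entropies from samples, assigns each variable type a priority, and greedily re-sorts the remaining variable types as each one is placed. The function names and the greedy loop are illustrative assumptions, not the only possible realization.

```python
import math
from collections import Counter
from typing import List, Sequence

def entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def mutual_information(samples: Sequence[Sequence[int]], a: int, b: int) -> float:
    """M(x_a; x_b) = H(x_a) + H(x_b) - H(x_a, x_b), estimated from samples."""
    ha = entropy(Counter(s[a] for s in samples))
    hb = entropy(Counter(s[b] for s in samples))
    hab = entropy(Counter((s[a], s[b]) for s in samples))
    return ha + hb - hab

def conditional_entropy(samples: Sequence[Sequence[int]], a: int, b: int) -> float:
    """C(x_a | x_b) = H(x_a, x_b) - H(x_b)."""
    hab = entropy(Counter((s[a], s[b]) for s in samples))
    hb = entropy(Counter(s[b] for s in samples))
    return hab - hb

def hierarchical_order(samples: Sequence[Sequence[int]]) -> List[int]:
    """Order variable indices by decreasing priority, in the spirit of Equation (5)."""
    k = len(samples[0])

    def priority(i: int, remaining: set) -> float:
        m_total = sum(mutual_information(samples, i, j) for j in remaining if j != i)
        c_total = sum(conditional_entropy(samples, i, j) for j in remaining if j != i)
        return m_total - c_total

    remaining = set(range(k))
    order: List[int] = []
    while remaining:
        # Greedy re-sorting variant: re-score the remaining variables after each placement.
        best = max(remaining, key=lambda i: priority(i, remaining))
        order.append(best)
        remaining.remove(best)
    return order
```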
In another embodiment, a priority value is assigned to a variable type that is equal to the total information it provides about the other variable types in question (possibly measured by the total, pairwise, mutual entropy between the former variable type and the latter variable types) divided by the total cost of compressing the initial variable type in question using the other variable types (possibly measured by the total, pairwise, conditional entropy of the former variable type given the latter variable types). The hierarchical ordering then corresponds to the ordering of variable types by decreasing priority value. A person or device might also repeat the above process recursively or after one or more variable types is placed in hierarchical order, effectively re-sorting each remaining variable type once a variable type has been assigned a hierarchical index j. Other priority measures exist that can be used to order variable types hierarchically using the appropriate information and (or) entropy correlations and mathematical relationships.
Various embodiments of the listing stage (3) select a small number of other variable types that are most correlated (possibly measured by mutual or conditional entropy) to each variable type. Small lists are recommended to mitigate tree construction complexities in stage (4) and complexities associated with compressive transformation. However, any number of correlated variable types can be selected per variable type, with the only restriction being that the listed variable types must come from higher in the hierarchy (e.g., have greater hierarchical priority) than the variable in question. One method of finding a suitable list Rj for variable Xj is to assign each possible member of Rj a correlation rating that is the amount of information it provides about Xj (possibly measured by mutual or conditional entropy) minus the total amount of information shared between it and the other members of Rj. Another possibility is to divide these informational quantities rather than subtract them. Other measures of correlation rating exist. Similarly to the hierarchical ordering stage (2), a person or device might repeat the above process recursively or after one or more variable types is placed into Rj, until the correlation ratings of the remaining candidate variable types fall below some threshold.
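A compact sketch of one possible listing-stage heuristic follows: each higher-priority candidate is scored by its mutual information with the target variable minus a redundancy penalty for information it shares with members already placed in Rj, and candidates are added greedily until the score falls below a threshold. The function signature, the precomputed mutual-information table, and the default limits are illustrative assumptions.

```python
from typing import Dict, List, Sequence, Tuple

def select_correlated_list(
    target: int,
    higher_priority: Sequence[int],
    mi: Dict[Tuple[int, int], float],   # precomputed pairwise mutual information
    max_size: int = 3,
    threshold: float = 0.0,
) -> List[int]:
    """Greedily build R_j for `target` from higher-priority variable types.

    A candidate's rating is its mutual information with the target minus the
    mutual information it shares with members already selected.
    """
    selected: List[int] = []
    candidates = list(higher_priority)
    while candidates and len(selected) < max_size:
        def rating(c: int) -> float:
            gain = mi[(c, target)]
            redundancy = sum(mi[(c, s)] for s in selected)
            return gain - redundancy
        best = max(candidates, key=rating)
        if rating(best) <= threshold:
            break                        # remaining candidates fall below the threshold
        selected.append(best)
        candidates.remove(best)
    return selected
```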
Embodiments of the tree construction system (4) build a Type 2 tree for each Xj by linking sequences of nodes from the root of the tree upward.
For each variable from training sample Rj(t), do the following to update tree Tj:
One of many advantages of the system is the use of the VMM tree to serve as an adaptively linking Markov Blanket to predict the value of variable xj(t), as illustrated in the accompanying drawings.
To predict a value for variable xj(t) from a newly observed sample Xj(t) at time t using its corresponding Type 2 tree Tj and previously processed variables Rj(t) in the sample:
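As an illustration only, the following sketch shows one possible way such a Type 2 tree could be updated from training samples and queried for a prediction: counts of the target variable xj are stored at every node along the path defined by the observed values of Rj, and prediction falls back to the deepest node that has accumulated observations. The class structure, count tables, smoothing, and fallback rule are assumptions for this sketch rather than the steps of any particular embodiment.

```python
from collections import Counter, defaultdict
from typing import Dict, Sequence, Tuple

class Type2Tree:
    """Illustrative context tree for one target variable x_j.

    Each node corresponds to a prefix of (variable, value) pairs drawn from the
    correlated list R_j and holds a count table over the observed values of x_j.
    """

    def __init__(self, context_vars: Sequence[int]):
        self.context_vars = list(context_vars)          # ordered members of R_j
        self.counts: Dict[Tuple, Counter] = defaultdict(Counter)

    def update(self, sample: Sequence[int], target: int) -> None:
        """Add one training sample: increment counts of x_j at every prefix node."""
        x_j = sample[target]
        context: Tuple = tuple()
        self.counts[context][x_j] += 1                  # root node (order 0)
        for var in self.context_vars:
            context = context + ((var, sample[var]),)
            self.counts[context][x_j] += 1

    def predict(self, sample: Sequence[int]) -> Dict[int, float]:
        """Return a smoothed distribution for x_j from the deepest populated node."""
        context: Tuple = tuple()
        best = self.counts[context]
        for var in self.context_vars:
            context = context + ((var, sample[var]),)
            if context in self.counts:
                best = self.counts[context]             # deeper node with observations
            else:
                break
        total = sum(best.values())
        values = set(best) | {0, 1}                     # assume at least binary support
        return {v: (best[v] + 1) / (total + len(values)) for v in values}
```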
One embodiment of a compressive transform system is an entropy encoder that uses HMF predictions of variable values within a sample to compress the sample. Coefficients in the compressed representation consist of partitions of the compressed bitstream. Other embodiments of a compressive transform system consist of entropy encoders that utilize direct or indirect probabilistic modeling of sample variable values for compression. Embodiments of compressive transform systems can aggregate bits from a compressed representation into two or more partitions such that when each partition's bits are concatenated and interpreted as a numeric value, this value can be interpreted as the value of an aggregate coefficient.
One embodiment of an inverse compressive transform system is an entropy encoder that uses HMF predictions of variable values within a sample to decode/decompress a compressed representation of a sample, returning the representation to the original sample domain. Other embodiments of an inverse compressive transform system consist of entropy decoders that utilize direct or indirect probabilistic modeling of sample variable values for decoding/decompression. Embodiments of inverse compressive transform systems can divide bits from aggregate coefficient representations before decompression.
Various embodiments of VMM predictors can utilize tests of Markov relatedness between models m defined by the active nodes (or “contexts” in non-graphical representations of VMMs) in the process of generating a prediction. Such methods can be called “Embedded Context Fusion” or ECF. In addition, such methods generalize to network models other than VMMs, such as Markov chains and hidden Markov models.
One embodiment of ECF employs a Bayesian test of “embeddedness” to test the likelihood that one model's count distribution is drawn from the same probability distribution L (also called a “likelihood distribution”) as another model's count distribution. Such a test is also a test of Markov relatedness (e.g., statistical dependence on memory), in that a low probability of embeddedness implies that one model has a different, possibly relevant dependency on information contained within the memory of one model but not the other. Therefore, a smaller probability of embeddedness of a higher-order model within a lower-order model implies that the higher-order model models dependency on memory information that is not available to the lower-order model, and is therefore more Markov related to (e.g., has a statistical dependency on) that information. For example, for any set of active contexts simultaneously traversable within a VMM, the higher-order context count distributions Cm+1={cm+1,i, i∈Z} are partitions of the lower-order count distributions Cm={cm,i, i∈Z}, in that cm+1,i≤cm,i, where the “order” is the number of nodes traversed from the root and where i indexes the possible values of a variable as a positive integer within the set Z.
The proper likelihood function for calculation of embeddedness depends on the nature of the process generating the counts, and the Dirichlet distribution is not the only option. Furthermore, approximation methods can be used in computation of the likelihood function intersection to mitigate potential computational complexities.
Another embodiment of ECF employs an exact test of embeddedness to measure Markov relatedness. For example, an exact test such as Fisher's or Barnard's Exact Test can be used to directly measure the likelihood that one set of counts is a random partition of another set of counts, which is the same as testing whether or not the two sets of counts are drawn from the same probability distribution. Similarly to the Bayesian methods above, one should choose the appropriate test for a given situation and may need to employ approximate methods to control computational complexities. Other embodiments of ECF might employ other tests of embeddedness and/or Markov relatedness. One embodiment of ECF uses the probabilities of embeddedness as parameters for computing weights for fusing count distributions from a set of active node models. After combining count distributions, a smoothed and normalized distribution serves as the predicted probability distribution.
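As a hedged sketch of the count-fusion step described above, the following code combines count distributions from a set of active nodes using externally supplied embeddedness-derived weights, then smooths and normalizes the result into a predicted distribution. The choice of weights, and of the embeddedness test that produces them, is deliberately left abstract; all names are illustrative.

```python
from collections import Counter
from typing import Dict, Sequence

def fuse_counts(
    count_dists: Sequence[Counter],   # active node count distributions, low to high order
    weights: Sequence[float],         # e.g., derived from probabilities of embeddedness
    alpha: float = 1.0,               # additive smoothing constant
) -> Dict[int, float]:
    """Weighted fusion of count distributions into a smoothed, normalized prediction."""
    support = set()
    for dist in count_dists:
        support |= set(dist)
    fused = {v: 0.0 for v in support}
    for dist, w in zip(count_dists, weights):
        total = sum(dist.values())
        if total == 0:
            continue
        for v in support:
            fused[v] += w * dist[v] / total           # weight each model's normalized counts
    z = sum(fused.values()) + alpha * len(support)
    return {v: (fused[v] + alpha) / z for v in support}

# Example: a low-order count distribution and a higher-order partition of it.
low = Counter({0: 30, 1: 10})
high = Counter({0: 2, 1: 8})
print(fuse_counts([low, high], weights=[0.3, 0.7]))
```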
Another embodiment of ECF uses the probabilities of embeddedness as proportions between the weights of available models. For example, the relative likelihood that a higher-order Markov model is better for prediction than an immediately lower-order Markov model is proportional to the likelihood that the higher-order distribution derives from a different probability distribution than the lower-order distribution. Consider the 1-Markov model case: if the transition distributions are similar to the stationary distribution, then the process is more likely an IID process than a 1-Markov process. If the transition distributions are different from the stationary distribution, then a sequence likely obeys the 1-Markov process and a significant probability exists that the stationary and 1-Markov contexts are Markov-related. More clearly, the relative weight of a higher-order model m compared to the immediately lower one m−1 is proportional to the probability that the count distribution from m is not a likely partition (e.g., is not an embedment) of the count distribution from m−1:
wm ∝ p(Lm≠Lm−1) = (1−p(Lm=Lm−1)). (7)
Then, beginning from the highest order available model m and ending with model m−n, the relative weights form a recursive structure for computing all the weights:
Other variations of approximations to Equations (7) and (8) are possible. Other embodiments of ECF select the model with the largest weight for prediction.
Other embodiments of ECF select a single model for prediction that a set of heuristics estimates to have the largest weight. For example, a computational system might select a higher-order model with at least one or more positive counts of one or more values from training data, then continue to search for a lower-order model with more total counts from training data that maintains the counts of zero-count values at zero, until it finds the lowest-order model where the previous conditions are met. Embodiments of HMFs can use ECF for prediction. Embodiments of compressive transforms or inverse transforms can use ECF for prediction.
An embodiment of a compressive transform for decorrelating color channel information in single-pixel samples of digital, color imagery learns an HMF description of every bit from every color channel per pixel sample. For example, a common bitmap image representation in the spatial domain includes a two-dimensional array of pixels, each pixel comprising 8 bits of information for each of three color channels (red, green, and blue, or equivalently RGB). The HMF description relevant to the present disclosure considers each color channel bit as a variable within single-pixel samples. Therefore, this embodiment of an HMF includes 24 (e.g., 8 bits×3 color channels) VMMs arranged in a hierarchical fashion. Application of the HMF compressive transform to each sample yields a new representation in the compressed domain. Using an embodiment of a compressive transformation system that partitions bit-valued coefficients of the transform domain into two or more aggregate coefficients corresponding to a numerical interpretation of concatenated bits within each partition, a computational system can decorrelate RGB pixel data into three aggregate, compressed-domain coefficients.
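To make the bit-level view of a pixel concrete, the following sketch decomposes an 8-bit-per-channel RGB pixel into its 24 bit variables and shows how partitions of bit-valued coefficients can be concatenated and read back as numeric aggregate coefficients. The partitioning shown is arbitrary and the functions are illustrative, not a specification of the disclosed transform.

```python
from typing import List, Sequence, Tuple

def pixel_to_bits(r: int, g: int, b: int) -> List[int]:
    """Flatten an RGB pixel into 24 single-bit variables (MSB first per channel)."""
    bits: List[int] = []
    for channel in (r, g, b):
        bits.extend((channel >> (7 - i)) & 1 for i in range(8))
    return bits

def aggregate_coefficients(bits: Sequence[int], partitions: Sequence[Tuple[int, int]]) -> List[int]:
    """Interpret each partition [start, end) of bit-valued coefficients as one numeric value."""
    values: List[int] = []
    for start, end in partitions:
        value = 0
        for bit in bits[start:end]:
            value = (value << 1) | bit
        values.append(value)
    return values

# Example: three arbitrary partitions of a 24-bit representation (here, the raw pixel bits).
bits = pixel_to_bits(200, 120, 30)
print(aggregate_coefficients(bits, [(0, 8), (8, 16), (16, 24)]))   # -> [200, 120, 30]
```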
An embodiment of a compressive transform for decorrelating spatial information in regional samples of pixels in digital, grayscale imagery learns an HMF description for every bit of every pixel in a regional sample. Compressed transformation of such regions is analogous to the 8×8 regional decorrelation using the DCT, as illustrated in the accompanying drawings.
An embodiment of a system for signal denoising utilizes a truncation of a compressed transform representation followed by inverse compressive transformation.
An embodiment of a system for image denoising utilizes a wavelet transform, but with filtering performed in a compressed transform domain on localized wavelet coefficients. Such a system follows the signal denoising approach described above: an HMF is trained from simple quadtrees of wavelet coefficients at a small scale.
This small scale also allows relatively fast training of the HMF, because it does not involve a large amount of data. HMF compressive transformation is applied to the highpass coefficients of the largest scale, using observed lowpass coefficient bits at that scale and previously parsed highpass coefficient bits in the quadtree. The lowpass coefficients at this scale represent a slightly lower resolution, but denoised, version of the full image. The initial coefficient bits of the compressively transformed highpass coefficients contain the relevant structural data. The latter bits of the compressively transformed highpass coefficients likely contain the noisy elements of the image. Of particular importance to the denoising method is the way in which the transform encodes the coefficient bits using an arithmetic encoder. Specifically, the probabilistic representation in the encoder is arranged at every step such that the most likely symbol to encode is always nearest the bottom of the range. In this fashion, the encoder favors coding more likely sequences to zero-valued coefficients, although similar systems could equally set the most likely bits to the top of the range, thus favoring one-valued coefficients. By setting noisy, highpass compressed transform coefficient bits to zero, this embodiment of the system ensures that inverse compressive transformation will result in a more likely sequence of coefficient data, and thus can be considered a greedy maximum likelihood system for image denoising. This method is greedy in that it only decodes the most likely coefficient bit at every step individually and does not select the complete group of coefficient bits that would be the most likely collectively. One might instead construct a system that tracks the probability of all possible combinations of coefficient bits and selects the most likely combination using the Viterbi, MAP, or another path-optimizing algorithm.
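The following sketch shows only the truncation step of this greedy denoising idea: trailing bits of a compressed-domain representation are forced to zero before inverse transformation, so the decoder falls back on the model's most likely completions. The compressive_transform and inverse_compressive_transform callables stand in for an HMF-based codec and are assumed, not defined, here.

```python
from typing import Callable, List, Sequence

def denoise_by_truncation(
    signal: Sequence[int],
    compressive_transform: Callable[[Sequence[int]], List[int]],          # assumed: signal -> bit coefficients
    inverse_compressive_transform: Callable[[Sequence[int]], List[int]],  # assumed: bit coefficients -> signal
    keep_bits: int,
) -> List[int]:
    """Keep the first `keep_bits` compressed-domain bits and zero the rest.

    If the (assumed) encoder always maps the most likely symbol toward zero,
    zero-filled trailing bits decode to the model's most likely continuation,
    acting as a greedy maximum-likelihood denoiser.
    """
    coeff_bits = list(compressive_transform(signal))
    truncated = coeff_bits[:keep_bits] + [0] * max(0, len(coeff_bits) - keep_bits)
    return inverse_compressive_transform(truncated)
```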
An embodiment of a system for signal enhancement utilizes a randomization of at least a portion of a compressed transform representation followed by inverse compressive transformation.
An embodiment of the disclosure for image enhancement utilizes a wavelet transform, with filtering performed in a compressed transform domain on localized wavelet coefficients, to add detail into an image. Such a system follows the image denoising system described above, but randomizes at least a portion of the compressed transform representation before inverse compressive transformation.
An embodiment of the system produces a superresolution (e.g., larger size) version of a digital image without the use of outside information. The embodiment presented here is depicted in the accompanying drawings.
An embodiment of the system performs digital image compression and decompression in both lossless and lossy modes. The encoding portion of the compression system is constructed similarly to the denoising and enhancement systems above, in that it learns an HMF from simple coefficient quadtrees at lower scales and then uses the HMF to compressively transform the highpass coefficients at a larger scale. In general, these quadtrees might include more than one level of scale information and bits from multiple color channels as variables. Successive training of the HMF from the lowest scale to the highest scale results in increasingly effective compressive transformation from an information compaction standpoint and ultimately leads to better compression. Lossy compression is obtained by encoding or sending only a portion of the quadtree information (e.g., only the lower scale information), and lossless compression is obtained by encoding all quadtree information. Decompression can be performed directly or generatively, using either or both simulated coefficients not present within the lossy representation and simulated pixel data from randomized control of the compressive transform model inputs and outputs.
An embodiment of the image compression system forms a progressive bitstream that is scalable in quality by further encoding like-coefficients from multiple quadtree samples in the compressed domain, from most significant coefficient to least significant. An embodiment of the image compression system forms a progressive bitstream that is scalable in resolution by further encoding highpass, quadtree samples from the lowest wavelet resolution to the highest. Lossy embodiments of the image compression system encode a progressive bitstream until a target file size is met.
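A minimal sketch of the quality-scalable ordering follows: like-indexed coefficient bits from multiple quadtree samples are interleaved from most significant to least significant, and a lossy stream is simply a prefix of that ordering cut at a target size. The layout is an illustrative assumption rather than the disclosed bitstream format.

```python
from typing import List, Optional, Sequence

def progressive_stream(samples: Sequence[Sequence[int]], target_bits: Optional[int] = None) -> List[int]:
    """Interleave like-coefficients across samples, most significant first.

    samples[s][i] is the i-th (bit-valued) compressed-domain coefficient of
    quadtree sample s, with i = 0 being the most significant.
    """
    stream: List[int] = []
    num_coeffs = max(len(s) for s in samples)
    for i in range(num_coeffs):                      # significance order
        for s in samples:                            # all samples at this significance level
            if i < len(s):
                stream.append(s[i])
    if target_bits is not None:
        stream = stream[:target_bits]                # lossy: stop at a target size
    return stream

# Example: two 4-bit samples; a 6-bit budget keeps the most significant information first.
print(progressive_stream([[1, 0, 1, 1], [0, 1, 0, 0]], target_bits=6))
```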
Embodiments of the decoding stage of the image compression system decode available portions of a lossy encoding, and simulate missing compressed transform data generatively, as in the enhancement or superresolution systems described above, by inserting random, semi-random, or non-random compressed transform coefficients that have yet to be decoded or are unavailable at the time of decoding. By controlling the statistics and possibly location of the bits representing the transform coefficients, one can control the amount or location of the generative decoding.
Embodiments of a subset or all (and portions or all) of the above can be implemented by program instructions stored in a non-transitory computer readable medium or a transitory carrier medium and executed by a processor. The non-transitory computer readable medium can include any of various types of memory devices or storage devices. For example, the non-transitory computer readable medium can include optical storage media, such as a Compact Disc Read Only Memory (CD-ROM), a digital video disc read only memory (DVD-ROM), a BLU-RAY® Disc Read Only Memory (BD-ROM), and writeable or rewriteable variants such as Compact Disc Recordable (CD-R), Compact Disc Rewritable (CD-RW), Digital Video Disc Dash Recordable (DVD-R), Digital Video Disc Plus Recordable (DVD+R), Digital Video Disc Dash Rewritable (DVD-RW), Digital Video Disc Plus Rewritable (DVD+RW), Digital Video Disc Random Access Memory (DVD-RAM), BLU-RAY Disc Recordable (BD-R), and BLU-RAY Disc Recordable Erasable (BD-RE). As another example, the non-transitory computer readable medium can include computer memory, such as static random access memory (SRAM), dynamic random access memory (DRAM), high-bandwidth memory (HBM), non-volatile random access memory (NVRAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read only memory (EEPROM), NOR based flash memory, and NAND based flash memory. The non-transitory computer readable medium can also include various magnetic media, such as floppy discs, magnetic tapes, and hard discs.
In addition, the non-transitory computer readable medium can be located in a first computer in which programs are executed, or can be located in a second different computer that connects to the first computer over a network, such as the Internet. In the latter instance, the second computer can provide program instructions to the first computer for execution. The term “non-transitory computer readable medium” can also include two or more memory mediums that can reside in different locations, such as in different computers that are connected over a network. In some embodiments, a computer system at a respective participant location can include a non-transitory computer readable medium on which one or more computer programs or software components according to one embodiment can be stored. For example, the non-transitory computer readable medium can store one or more programs that are executable to perform the methods described herein. The non-transitory computer readable medium can also store operating system software, as well as other software for operation of the computer system.
The non-transitory computer readable medium can store a software program or programs operable to implement the various embodiments. The software program or programs can be implemented in various ways, including procedure-based techniques, component-based techniques, object-oriented techniques, functional programming techniques, or other approaches. For example, the software programs can be implemented using ActiveX controls, C++ objects, JavaBeans, MICROSOFT® Foundation Classes (MFC), browser-based applications (e.g., Java applets or embedded scripts in web pages), or other technologies or methodologies. A processor executing code and data from the memory medium can include a means for creating and executing the software program or programs according to the embodiments described herein.
Further modifications and alternative embodiments of various aspects of the disclosure will be apparent to those skilled in the art in view of this description. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the general manner of carrying out the various embodiments of the present disclosure. It is to be understood that the forms of the embodiments of the disclosure shown and described herein are to be taken as illustrative embodiments. Elements and materials can be substituted for those illustrated and described herein, parts and processes can be reversed, and certain features of the various embodiments of the present disclosure can be utilized independently. Changes can be made in the elements described herein without departing from the spirit and scope of the disclosure as described in the following clauses or claims.
This application is a continuation application claiming priority to, and the benefit of, U.S. Non-Provisional application Ser. No. 15/552,843, filed Aug. 23, 2017, which is the 35 U.S.C. § 371 national stage application of PCT Application No. PCT/US2016/018887, filed Feb. 22, 2016, where the PCT Application claims priority to, and the benefit of, United States Provisional Application No. 62/119,444, entitled “SYSTEMS, APPARATUS, AND METHODS FOR BIT LEVEL REPRESENTATION FOR DATA PROCESSING AND ANALYTICS,” and filed on Feb. 23, 2015, all of which are hereby incorporated by reference as if set forth herein in their entireties.
Related application data: Provisional Application No. 62/119,444, filed February 2015 (US). Parent Application No. 15/552,843, filed August 2017 (US); Child Application No. 18/379,746 (US).