Priority is claimed in the application data sheet to the following patents or patent applications, each of which is expressly incorporated herein by reference in its entirety:
The present invention is in the field of computer data storage and transmission, and in particular to the manipulation of compacted data.
As computers become an ever-greater part of our lives, and especially in the past few years, data storage has become a limiting factor worldwide. Prior to about 2010, the growth of data storage capacity far exceeded the growth in storage demand. In fact, it was commonly considered at that time that storage was not an issue, and perhaps never would be again. In 2010, however, with the growth of social media, cloud data centers, high tech and biotech industries, global digital data storage accelerated exponentially, and demand hit the zettabyte (1 trillion gigabytes) level. Current estimates are that data storage demand will reach 50 zettabytes by 2020. By contrast, digital storage device manufacturers produced roughly 1 zettabyte of physical storage capacity globally in 2016. We are producing data at a much faster rate than we are producing the capacity to store it. In short, we are running out of room to store data, and need a breakthrough in data storage technology to keep up with demand.
The primary solutions available at the moment are the addition of physical storage capacity and data compression. As noted above, the addition of physical storage will not solve the problem, as storage demand has already outstripped global manufacturing capacity. Data compression is also not a solution. A rough average compression ratio for mixed data types is 2:1, representing a doubling of storage capacity. However, as the mix of global data storage trends toward multi-media data (audio, video, and images), the space savings yielded by compression either decreases substantially, as is the case with lossless compression which allows for retention of all original data in the set, or results in degradation of data, as is the case with lossy compression which selectively discards data in order to increase compression. Even assuming a doubling of storage capacity, data compression cannot solve the global data storage problem. The method disclosed herein, on the other hand, works the same way with any type of data.
Transmission bandwidth is also increasingly becoming a bottleneck. Large data sets require tremendous bandwidth, and we are transmitting more and more data every year between large data centers. On the small end of the scale, we are adding billions of low bandwidth devices to the global network, and data transmission limitations impose constraints on the development of networked computing applications, such as the “Internet of Things”.
Furthermore, as quantum computing becomes more and more imminent, the security of data, both stored data and data streaming from one point to another via networks, becomes a critical concern as existing encryption technologies are placed at risk.
A problem with compacted data, however, is that it cannot be accessed randomly. Random access to compacted data results in invalid data, so compacted data must be uncompacted before it becomes usable.
What is needed is a system and method for providing random-access manipulation of compacted data, which facilitates searching of, reading from, and writing to compacted data files.
A system and method for random-access manipulation of compacted data files with adaptive method selection. The system may receive a data query pertaining to a data read or data write request, wherein the data file to be read from or written to is a compacted data file. A random-access engine may facilitate data manipulation processes by transforming the codebook into a hierarchical representation and then traversing the representation to scan for specific codewords associated with a data query request. In an embodiment, an estimator module is present and configured to utilize cardinality estimation to determine a starting codeword at which to begin searching the compacted data file for the data associated with the data query. The random-access engine may encode the data to be written, insert the encoded data into a compacted data file, and update the codebook as needed.
According to a preferred embodiment, a system for random-access manipulation of compacted data files with adaptive method selection, comprising: a computing device comprising a memory, a processor, and a non-volatile data storage device; a random access engine comprising a first plurality of programming instructions that, when operating on the processor, cause the computing device to: receive a data search query directed to the compacted data file, wherein the compacted file is compacted using a reference codebook; organize the reference codebook into a hierarchical representation; traverse the hierarchical representation to identify a start codeword corresponding to the beginning of the data search query; and send the start codeword and a plurality of immediately following codewords from the compacted data file to a decoder; and an adaptive estimator module comprising a second plurality of programming instructions that, when operating on the processor, cause the computing device to: train a machine learning model on a plurality of compacted data files, wherein the machine learning model selects an optimized refinement technique; estimate a first starting bit location in the compacted data file; and refine the first starting bit location by using the optimized refinement technique, is disclosed.
According to another preferred embodiment, a method for random-access manipulation of compacted data files with adaptive method selection, comprising the steps of: receiving a data search query directed to the compacted data file, wherein the compacted file is compacted using a reference codebook; organizing the reference codebook into a hierarchical representation; traversing the hierarchical representation to identify a start codeword corresponding to the beginning of the data search query; sending the start codeword and a plurality of immediately following codewords from the compacted data file to a decoder; training a machine learning model on a plurality of compacted data files, wherein the machine learning model selects an optimized refinement technique; estimating, using an adaptive estimator module, a first starting bit location in the compacted data file; and refining the first starting bit location by using the optimized refinement technique.
According to an aspect, the system further comprises a dyadic distribution compression and encryption subsystem comprising a third plurality of programming instructions stored in the memory and operating on the processor, wherein the third plurality of programming instructions, when operating on the processor, cause the computing device to: analyze input data to determine its properties; create a transformation matrix based on the properties of the input data; transform the input data into a dyadic distribution; generate a main data stream of transformed data and a secondary data stream of transformation information; and compress the main data stream.
According to an aspect, the system further comprises a large codeword model with a latent transformer core comprising a fourth plurality of programming instructions stored in the memory and operating on the processor, wherein the fourth plurality of programming instructions, when operating on the processor, cause the computing device to: receive the compressed data as input vectors; generate latent space vectors by processing the input vectors through a variational autoencoder's encoder; process the latent space vectors through a transformer to learn relationships between the vectors, wherein the transformer does not include an embedding layer and a positional encoding layer; and decode output latent space vectors through a variational autoencoder's decoder to produce final output data.
According to an aspect, the system further comprises an estimator module comprising a second plurality of programming instructions stored in the memory and operable on the processor, wherein the second plurality of programming instructions, when operating on the processor, cause the computing device to: estimate a first starting bit location in the compacted data file; refine the first starting bit location by: determining a plurality of codeword boundaries by performing distinct value estimation on the compacted data file, wherein the distinct value estimates correspond to a codeword boundary; and determining whether a bit sequence starting at the first starting bit location corresponds to a codeword boundary of the plurality of codeword boundaries and, if not, traversing the hierarchical representation until a codeword boundary is located at a new starting bit.
According to an aspect, the system further comprises an end to end training subsystem configured to jointly train the random access engine, the dyadic distribution compression and encryption module, the large codeword model, and the estimator module.
The accompanying drawings illustrate several aspects and, together with the description, serve to explain the principles of the invention according to the aspects. It will be appreciated by one skilled in the art that the particular arrangements illustrated in the drawings are merely exemplary, and are not to be considered as limiting of the scope of the invention or the claims herein in any way.
A system and method for random-access manipulation of compacted data files with adaptive method selection. The system may receive a data query pertaining to a data read or data write request, wherein the data file to be read from or written to is a compacted data file. A trainable random-access engine may facilitate data manipulation processes by organizing the codebook into a learnable hierarchical representation and then traversing the representation using trainable parameters to scan for specific codewords associated with a data query request. In an embodiment, a trainable estimator module is present and configured to utilize cardinality estimation with learnable parameters to determine a starting codeword to begin searching the compacted data file for the data associated with the data query. The estimator module refines the starting bit location by determining codeword boundaries through distinct value estimation and verifying if the initial bit sequence corresponds to a codeword boundary. The trainable random-access engine may encode the data to be written, insert the encoded data into a compacted data file, and update the codebook as needed, all while adapting its behavior based on end-to-end training.
A data search query may be generated by a system user. The data search query may include a search term, an identified compacted data file to read from, and a location hint. For instance, a user may search for a string in a text file and specify the location in the original file where the user thinks the string may be located. For example, a user data read query may be of the form: “search for the word ‘cosmology’ starting at the 50% mark of compacted version of an astrophysics textbook”. The system may use the location hint “50% mark” as a starting point for conducting a search of the encoded version of “cosmology” within the compacted version. The location hint may reference any point in the original data file, and the system may access the compacted data file at a point at or near the reference point contained within the location hint. In this way, any bit contained within a compacted data file may be randomly-accessed directly without the need to scan through or decode the entire compacted file. The random access engine uses the refined starting bit location from the estimator module as an initial point for traversing the hierarchical representation. When the correct encodings are found, the reference codes are retrieved and a trainable reference codebook may be used to transform the encoded version back to the original data, and the data may be sent to the user for verification. The efficiency of this process is continually improved through end-to-end training of all components.
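By way of non-limiting illustration only, the mapping of a percentage-style location hint to an approximate starting bit offset in the compacted file might be sketched as follows (Python; the function name and the assumption that the compacted file's total bit length is known are illustrative, not prescribed by the system):

```python
def hint_to_start_bit(hint_fraction: float, compacted_bit_length: int) -> int:
    """Map a location hint (e.g., 0.5 for 'the 50% mark') to an approximate
    starting bit offset in the compacted file. The estimator module and the
    hierarchical lookup then refine this offset to an actual codeword boundary."""
    hint_fraction = min(max(hint_fraction, 0.0), 1.0)  # clamp to [0, 1]
    return int(hint_fraction * compacted_bit_length)

# Example: "search for 'cosmology' starting at the 50% mark"
start_bit = hint_to_start_bit(0.5, compacted_bit_length=8_000_000)
```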
Additionally, the system may support data write functions. A data write process begins when the system receives a data write query which may contain data to be inserted (write term) and a compacted data file to be written to. The system may re-encode the entire original data file with the inclusion of the inserted data using a trainable dyadic distribution compression and encryption module. This module uses trainable parameters for transforming the input data and compressing the main data stream. In other embodiments, an opcode representing an offset may be generated to facilitate a data write function that does not require re-encoding the entire data file, or unused bits located within the codebook can be used to create secondary encodings, which also does not require re-encoding the entire data file. These processes are optimized through joint training with other system components.
The system incorporates a large codeword model with a latent transformer core, which processes the compressed and encrypted data. The large codeword model uses a variational autoencoder (VAE) encoder to further compress the data into a latent space representation. The latent space vectors are then processed by a modified transformer that learns relationships between the vectors. Notably, this transformer does not include an embedding layer or a positional encoding layer, as these are not necessary for the latent space representation. The large codeword model is configured to adapt its processing based on the output of the dyadic distribution compression and encryption module. This allows the system to handle various types of data beyond just text, including images, audio, and time-series data. One skilled in the art would recognize that while the system is described using latent space representations, the principles and techniques could be applied to other forms of data representation and processing as well.
An end to end training subsystem coordinates the joint training of all system components, including the random-access engine, the dyadic distribution compression and encryption module, the large codeword model, and the estimator module. This joint training ensures that all components work in synergy, adapting to each other's behaviors for optimal overall performance in both compression efficiency and retrieval speed. The end to end training subsystem is configured to optimize performance across all components from initial compression and encryption to final random access capabilities.
One or more different aspects may be described in the present application. Further, for one or more of the aspects described herein, numerous alternative arrangements may be described; it should be appreciated that these are presented for illustrative purposes only and are not limiting of the aspects contained herein or the claims presented herein in any way. One or more of the arrangements may be widely applicable to numerous aspects, as may be readily apparent from the disclosure. In general, arrangements are described in sufficient detail to enable those skilled in the art to practice one or more of the aspects, and it should be appreciated that other arrangements may be utilized and that structural, logical, software, electrical and other changes may be made without departing from the scope of the particular aspects. Particular features of one or more of the aspects described herein may be described with reference to one or more particular aspects or figures that form a part of the present disclosure, and in which are shown, by way of illustration, specific arrangements of one or more of the aspects. It should be appreciated, however, that such features are not limited to usage in the one or more particular aspects or figures with reference to which they are described. The present disclosure is neither a literal description of all arrangements of one or more of the aspects nor a listing of features of one or more of the aspects that must be present in all arrangements.
Headings of sections provided in this patent application and the title of this patent application are for convenience only, and are not to be taken as limiting the disclosure in any way.
Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more communication means or intermediaries, logical or physical.
A description of an aspect with several components in communication with each other does not imply that all such components are required. To the contrary, a variety of optional components may be described to illustrate a wide variety of possible aspects and in order to more fully illustrate one or more aspects. Similarly, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may generally be configured to work in alternate orders, unless specifically stated to the contrary. In other words, any sequence or order of steps that may be described in this patent application does not, in and of itself, indicate a requirement that the steps be performed in that order. The steps of described processes may be performed in any order practical. Further, some steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary to one or more of the aspects, and does not imply that the illustrated process is preferred. Also, steps are generally described once per aspect, but this does not mean they must occur once, or that they may only occur once each time a process, method, or algorithm is carried out or executed. Some steps may be omitted in some aspects or some occurrences, or some steps may be executed more than once in a given aspect or occurrence.
When a single device or article is described herein, it will be readily apparent that more than one device or article may be used in place of a single device or article. Similarly, where more than one device or article is described herein, it will be readily apparent that a single device or article may be used in place of the more than one device or article.
The functionality or the features of a device may be alternatively embodied by one or more other devices that are not explicitly described as having such functionality or features. Thus, other aspects need not include the device itself.
Techniques and mechanisms described or referenced herein will sometimes be described in singular form for clarity. However, it should be appreciated that particular aspects may include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. Process descriptions or blocks in figures should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of various aspects in which, for example, functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those having ordinary skill in the art.
The term “bit” refers to the smallest unit of information that can be stored or transmitted. It is in the form of a binary digit (either 0 or 1). In terms of hardware, the bit is represented as an electrical signal that is either off (representing 0) or on (representing 1).
The term “byte” refers to a series of exactly eight bits.
The term “codebook” refers to a database containing sourceblocks, each with a pattern of bits and a reference code that is unique within that codebook. The terms “library” and “encoding/decoding library” are synonymous with the term codebook.
The terms “compression” and “deflation” as used herein mean the representation of data in a more compact form than the original dataset. Compression and/or deflation may be either “lossless”, in which the data can be reconstructed in its original form without any loss of the original data, or “lossy”, in which the data can be reconstructed only approximately, with some loss of the original data.
The terms “compression factor” and “deflation factor” as used herein mean the net reduction in size of the compressed data relative to the original data (e.g., if the new data is 70% of the size of the original, then the deflation/compression factor is 30% or 0.3.)
The terms “compression ratio” and “deflation ratio” as used herein mean the size of the compressed data relative to the size of the original data (e.g., if the new data is 70% of the size of the original, then the deflation/compression ratio is 70% or 0.7).
The term “data” means information in any computer-readable form.
The term “sourceblock” refers to a series of bits of a specified length. The number of bits in a sourceblock may be dynamically optimized by the system during operation. In one aspect, a sourceblock may be of the same length as the block size used by a particular file system, typically 512 bytes or 4,096 bytes.
The term “effective compression” or “effective compression ratio” refers to the additional amount of data that can be stored using the method herein described versus conventional data storage methods. Although the method herein described is not data compression, per se, expressing the additional capacity in terms of compression is a useful comparison.
The term “data set” refers to a grouping of data for a particular purpose. One example of a data set might be a word processing file containing text and formatting information.
The term “library” refers to a database containing sourceblocks each with a pattern of bits and reference code unique within that library. The term “codebook” is synonymous with the term library.
The term “sourcepacket” as used herein means a packet of data received for encoding or decoding. A sourcepacket may be a portion of a data set.
The term “sourceblock” as used herein means a defined number of bits or bytes used as the block size for encoding or decoding. A sourcepacket may be divisible into a number of sourceblocks. As one non-limiting example, a 1 megabyte sourcepacket of data may be encoded using 512 byte sourceblocks. The number of bits in a sourceblock may be dynamically optimized by the system during operation. In one aspect, a sourceblock may be of the same length as the block size used by a particular file system, typically 512 bytes or 4,096 bytes.
The term “codeword” refers to the reference code form in which data is stored or transmitted in an aspect of the system. A codeword consists of a reference code to a sourceblock in the library plus an indication of that sourceblock's location in a particular data set.
The term “dyadic distribution” refers to a probability distribution in which each probability is an integer power of one-half (e.g., ½, ¼, ⅛), such as the distribution that arises from repeatedly dividing an interval in half.
The term “latent space” refers to a compressed representation of data in a lower-dimensional space, often used in machine learning models.
The term “variational autoencoder (VAE)” refers to a type of neural network that learns to encode data into a latent space and decode it back, while also learning the probability distribution of the latent space.
The term “transformer” refers to a type of neural network architecture that uses self-attention mechanisms to process sequential data.
The term “sourceblock” refers to a fixed-size segment of data used as the basic unit for compression and encoding.
The term “random access” refers to the ability to directly access and manipulate specific parts of a compressed file without decompressing the entire file.
The term “end-to-end training” refers to a machine learning approach where all components of a system are trained simultaneously to optimize overall performance.
The term “cardinality estimation” refers to a technique for estimating the number of distinct elements in a dataset.
An adaptive estimator module 4900 represents an advancement over previous estimator implementations, incorporating machine learning capabilities to dynamically select optimal refinement techniques based on input characteristics. Adaptive estimator 4900 may work in concert with random access engine 3900 to improve search efficiency by analyzing patterns in the compressed data 3704 and selecting the most appropriate refinement method for each specific query context.
The system maintains bidirectional communication between components, with compressed data 3704 flowing between the various subsystems as needed for processing. An end to end training subsystem 4200 coordinates the joint training of all system components, with particular emphasis on optimizing the selection capabilities of the adaptive estimator module 4900 through continuous feedback and performance monitoring.
The output 3707 represents the final processed data, which has benefited from the adaptive estimation process, leading to more efficient and accurate random access operations on the compressed data. This architecture enables the system to learn and improve its estimation strategies over time, adapting to different types of data and access patterns while maintaining efficient compression and retrieval capabilities.
A trained method selection subsystem 5010 analyzes incoming queries and compressed data characteristics to select the most appropriate estimation method for the specific context. The selection process considers factors such as data patterns, historical performance metrics, and resource optimization criteria to determine the optimal estimation strategy.
Once a method is selected, the initial position estimation component 5020 applies the chosen technique to generate a first approximation of the target location within the compressed data. This initial estimate is then refined into a refined position estimation 5030, which uses the selected method's specific refinement approach to optimize the position determination.
In various exemplary aspects, the adaptive estimator module 4900 may employ multiple estimation methods, selecting the most appropriate based on input characteristics. These methods represent a diverse array of approaches to boundary detection and position estimation within compressed data streams.
A statistical pattern analysis method examines the statistical properties of bit distributions within the compressed data to identify likely boundary regions. This method calculates running averages, variance, and other statistical measures across bit sequences to identify patterns that typically indicate codeword boundaries. When certain bit patterns consistently appear at the start or end of valid codewords, they become reliable indicators of boundaries, allowing the method to make accurate predictions about boundary locations in new data.
In another embodiment, the approach may be an entropy-based detection method, which leverages information theory principles to identify boundaries by analyzing local entropy variations in the bit stream. This method operates on the principle that entropy often changes significantly at codeword boundaries due to the compression algorithm's properties. By calculating rolling entropy windows across the compressed data, the method can identify points where characteristic changes in information density indicate boundary regions. These entropy transitions often provide highly reliable boundary indicators in many compression schemes.
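As one non-limiting sketch, a simple rolling-entropy scan of the kind described above might resemble the following (Python; the window and chunk sizes are arbitrary illustrative choices):

```python
import math
from collections import Counter

def rolling_entropy_deltas(bits: str, window: int = 64, chunk: int = 8):
    """Shannon entropy of chunk-sized pieces inside a sliding window of bits.
    Large jumps between consecutive window entropies are treated as
    candidate codeword-boundary indicators."""
    scores = []
    for start in range(0, len(bits) - window + 1):
        win = bits[start:start + window]
        pieces = [win[i:i + chunk] for i in range(0, window, chunk)]
        counts = Counter(pieces)
        total = len(pieces)
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        scores.append(h)
    return [abs(scores[i + 1] - scores[i]) for i in range(len(scores) - 1)]

bitstream = "0110100111010001" * 16
deltas = rolling_entropy_deltas(bitstream)
# Positions with the largest entropy transitions become boundary candidates.
candidates = sorted(range(len(deltas)), key=lambda i: deltas[i], reverse=True)[:5]
```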
In another embodiment, a frequency analysis method takes a different approach by maintaining histograms of bit patterns that historically correspond to valid codeword boundaries. This method is particularly effective when certain bit sequences consistently appear at known offsets from valid boundaries. The method continuously builds and updates frequency tables of boundary-adjacent patterns and uses these statistics to estimate boundary locations in new data, becoming more accurate as it processes more data.
For handling complex patterns, a neural prediction method may employ a trained neural network to predict boundary locations based on surrounding bit patterns. This method processes windows of bit sequences through a convolutional neural network that has been trained to recognize boundary characteristics. The network outputs probability scores for each position being a valid boundary, allowing for high-precision boundary detection in complex compression schemes where traditional analytical methods might struggle.
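As a non-limiting example, a small convolutional boundary predictor of the kind described above might be sketched as follows (Python with PyTorch; the layer sizes and window length are illustrative choices, not the system's trained network):

```python
import torch
import torch.nn as nn

class BoundaryPredictor(nn.Module):
    """Scores each input window of bits with the probability that the
    window corresponds to a valid codeword boundary."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, bit_windows: torch.Tensor) -> torch.Tensor:
        # bit_windows: (batch, window) tensor of 0.0/1.0 values
        x = self.features(bit_windows.unsqueeze(1))        # -> (batch, 16)
        return torch.sigmoid(self.head(x)).squeeze(-1)     # -> (batch,) probabilities

model = BoundaryPredictor()
probs = model(torch.randint(0, 2, (8, 64)).float())  # score 8 candidate windows
```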
A more structured approach is a probabilistic state machine method, which models the compression process as a state machine and uses probabilistic inference to estimate likely boundary locations. This method maintains a model of valid state transitions in the compression scheme and uses this model to identify sequences that likely represent boundaries. The method can adapt its state transition probabilities based on observed patterns in the data, making it particularly effective for compression schemes with well-defined state transitions.
The adaptive estimator module 4900 maintains comprehensive performance metrics for each method across different contexts, allowing it to learn which methods are most effective for particular combinations of data type characteristics, compression scheme properties, query patterns, resource availability constraints, performance requirements, and error tolerance thresholds. The selection subsystem evaluates these factors in real time to choose the most appropriate method or combination of methods for each estimation task.
The system maintains a feedback loop where the success rate and efficiency of selected methods influence future selection decisions, allowing the module to continuously improve its estimation strategy selection over time. This adaptive approach enables the system to optimize its performance across different types of data and query patterns while maintaining efficient random access capabilities.
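For illustration only, the feedback-driven selection described above might be sketched as follows (Python; the method names, context keys, and reward signal are illustrative placeholders rather than the system's actual metrics):

```python
from collections import defaultdict

class MethodSelector:
    """Chooses an estimation method per context from running performance
    averages, and updates those averages as feedback arrives."""

    def __init__(self, methods):
        self.methods = list(methods)
        # context -> {method: (cumulative reward, number of uses)}
        self.stats = defaultdict(lambda: {m: (0.0, 0) for m in self.methods})

    def select(self, context: str) -> str:
        scores = self.stats[context]
        untried = [m for m, (_, n) in scores.items() if n == 0]
        if untried:                       # try each method at least once
            return untried[0]
        return max(scores, key=lambda m: scores[m][0] / scores[m][1])

    def feedback(self, context: str, method: str, reward: float) -> None:
        total, n = self.stats[context][method]
        self.stats[context][method] = (total + reward, n + 1)

selector = MethodSelector(["entropy", "frequency", "neural", "state_machine"])
method = selector.select(context="text/huffman")
selector.feedback("text/huffman", method, reward=0.92)  # e.g., hit rate or speed score
```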
At the model training stage, a plurality of training data 5101 may be received by the generative AI training system 5150. Data preprocessor 5102 may receive the input data (e.g., read queries, write requests, compressed codeword data, initial position estimates) and perform various data preprocessing tasks on the input data to format the data for further processing. For example, data preprocessing can include, but is not limited to, tasks related to data cleansing, data deduplication, data normalization, data transformation, handling missing values, feature extraction and selection, mismatch handling, and/or the like. Data preprocessor 5102 may also be configured to create a training dataset, a validation dataset, and a test dataset from the plurality of input data 5101. For example, the training dataset may comprise 80% of the preprocessed input data, the validation dataset 10%, and the test dataset may comprise the remaining 10% of the data. The preprocessed training dataset may be fed as input into one or more machine and/or deep learning algorithms 5103 to train a predictive model for position estimation and refinement method selection.
During model training, training output 5104 is produced and used to measure the accuracy and usefulness of the predictive outputs. During this process a parametric optimizer 5105 may be used to perform algorithmic tuning between model training iterations. Model parameters and hyperparameters can include, but are not limited to, bias, train-test split ratio, learning rate in optimization algorithms (e.g., gradient descent), choice of optimization algorithm (e.g., gradient descent, stochastic gradient descent, or Adam optimizer, etc.), choice of activation function in a neural network layer (e.g., Sigmoid, ReLU, Tanh, etc.), the choice of cost or loss function the model will use, number of hidden layers in a neural network, number of activation units in each layer, the drop-out rate in a neural network, number of iterations (epochs) in training the model, number of clusters in a clustering task, kernel or filter size in convolutional layers, pooling size, batch size, the coefficients (or weights) of linear or logistic regression models, cluster centroids, and/or the like. Parameters and hyperparameters may be tuned and then applied to the next round of model training. In this way, the training stage provides a machine learning training loop.
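As one non-limiting sketch, the dataset split and tuning loop described above might resemble the following (Python with NumPy; a toy linear model and learning-rate sweep stand in for the actual algorithms 5103 and parametric optimizer 5105):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                               # preprocessed input features
y = X @ np.array([0.5, -1.0, 2.0, 0.1]) + 0.05 * rng.normal(size=1000)

# 80/10/10 split into training, validation, and test datasets.
n = len(X)
X_tr, y_tr = X[: int(0.8 * n)], y[: int(0.8 * n)]
X_va, y_va = X[int(0.8 * n): int(0.9 * n)], y[int(0.8 * n): int(0.9 * n)]
X_te, y_te = X[int(0.9 * n):], y[int(0.9 * n):]

def train(lr: float, epochs: int = 200) -> np.ndarray:
    w = np.zeros(X.shape[1])
    for _ in range(epochs):                                  # gradient descent on MSE loss
        grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(X_tr)
        w -= lr * grad
    return w

# Parametric tuning between training rounds: keep the learning rate whose
# trained model achieves the lowest validation loss.
best_w, best_loss = None, float("inf")
for lr in (0.001, 0.01, 0.1):
    w = train(lr)
    val_loss = float(np.mean((X_va @ w - y_va) ** 2))
    if val_loss < best_loss:
        best_w, best_loss = w, val_loss

test_loss = float(np.mean((X_te @ best_w - y_te) ** 2))      # final accuracy check
```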
In some implementations, various accuracy metrics may be used by the machine learning training system 5000 to evaluate a model's performance. Metrics can include, but are not limited to, word error rate (WER), word information loss, speaker identification accuracy (e.g., single stream with multiple speakers), inverse text normalization and normalization error rate, punctuation accuracy, timestamp accuracy, latency, resource consumption, custom vocabulary, sentence-level sentiment analysis, multiple languages supported, cost-to-performance tradeoff, and personal identifying information/payment card industry redaction, to name a few. In one embodiment, the system may utilize a loss function 5160 to measure the system's performance. The loss function 5160 compares the training outputs with an expected output and determines how the algorithm needs to be changed in order to improve the quality of the model output. During the training stage, all outputs may be passed through the loss function 5160 on a continuous loop until the algorithms 5103 are in a position where they can effectively be incorporated into a deployed model 5115.
The test dataset can be used to test the accuracy of the model outputs. If the model in training is establishing correlations that satisfy certain criteria, such as, but not limited to, the quality of the correlations and the amount of restored lost data, then it can be moved to the model deployment stage as a fully trained and deployed model 5110 in a production environment, making predictions based on live input data 5111 (e.g., read queries, write requests, compressed codeword data, initial position estimates). Further, model correlations and restorations made by the deployed model can be used as feedback and applied to model training in the training stage, wherein the model is continuously learning over time using both training data and live data and predictions. A model and training database 5106 is present and configured to store training/test datasets and developed models. Database 5106 may also store previous versions of models. According to some embodiments, the one or more machine and/or deep learning models
may comprise any suitable algorithm known to those with skill in the art including, but not limited to: LLMs, generative transformers, transformers, supervised learning algorithms such as: regression (e.g., linear, polynomial, logistic, etc.), decision tree, random forest, k-nearest neighbor, support vector machines, Naïve-Bayes algorithm; unsupervised learning algorithms such as clustering algorithms, hidden Markov models, singular value decomposition, and/or the like. Alternatively, or additionally, algorithms 5103 may comprise a deep learning algorithm such as neural networks (e.g., recurrent, convolutional, long short-term memory networks, etc.).
In some implementations, the machine learning training system 5000 automatically generates standardized model scorecards for each model produced to provide rapid insights into the model and training data, maintain model provenance, and track performance over time. These model scorecards provide insights into model framework(s) used, training data, training data specifications such as chip size, stride, data splits, baseline hyperparameters, and other factors. Model scorecards may be stored in database(s) 5106.
According to the embodiment, data search engine 3410 can be configured to receive a data read request 3401. Responsive to the data read request, data search engine 3410 may receive, retrieve, or otherwise obtain the codebook 3402 from a suitable data storage system, library manager, data deconstruction engine, or some other component of system. The obtained codebook may be processed by a codebook transformer 3411 configured to transform the structure of the codebook to a hierarchical index representation. Hierarchical indexing can be used to efficiently organize and access compacted sourceblocks, each associated with a unique codeword. This approach can help manage and retrieve data blocks while taking advantage of the hierarchical structure of the codewords. According to an embodiment, codebook transformer 3411 may determine the hierarchical structure based on the specific requirements of a given implementation. For example, the hierarchy may be organized based on Huffman codewords. The transformed codebook may be sent to or otherwise obtained by a hierarchical lookup module 3412 configured to scan the transformed codebook to retrieve the data associated with the data read request 3401.
According to various embodiments, deconstruction engine and library manager may assign the shortest codewords to the most frequently occurring sourceblocks. This property can be leveraged to create a hierarchical structure. In an exemplary process, the system may start with a root node of the hierarchy, which represents an empty codeword. For each codeword, the system follows the path from the root to the corresponding leaf node, creating intermediate nodes as needed. Each leaf node represents a unique codeword. At each leaf node, the data block is associated with the codeword as a key-value pair in the codebook. This hierarchical indexing approach leverages the hierarchical nature of the assigned codewords, making it efficient for managing and accessing data blocks associated with unique codewords. It optimizes both storage and retrieval operations, taking advantage of the properties of encoding in the process.
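By way of non-limiting illustration, the hierarchy-building process described above might be sketched as follows (Python; the codewords and sourceblocks shown are toy values):

```python
class Node:
    __slots__ = ("children", "sourceblock")

    def __init__(self):
        self.children = {}        # "0" -> left child, "1" -> right child
        self.sourceblock = None   # set only at leaf nodes

def build_hierarchy(codebook: dict) -> Node:
    """codebook maps codeword bit-strings (e.g., '0', '10', '110') to their
    sourceblocks; returns the root node of the hierarchical representation."""
    root = Node()
    for codeword, block in codebook.items():
        node = root
        for bit in codeword:                      # create intermediate nodes as needed
            node = node.children.setdefault(bit, Node())
        node.sourceblock = block                  # leaf: codeword -> sourceblock pair
    return root

root = build_hierarchy({"0": b"the ", "10": b"and ", "110": b"data", "111": b"file"})
```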
According to an embodiment, when a new data block is to be added with its associated codeword, the system can follow the path in the hierarchy to the appropriate leaf node and insert the new key-value pair into the codebook.
According to an embodiment, to search for data blocks using partial codewords, system can traverse the hierarchy as far as the available codeword allows, which can be useful for prefix-based searches.
A hierarchical index, as described above, lends itself to efficient random access of data blocks, particularly when dealing with numerous data blocks bearing unique codewords. Hierarchical lookup module 3412 can facilitate the access process on a hierarchically indexed codebook. This access process initiates at the root node of the hierarchy, from which a hierarchical path traversal commences. The hierarchical traversal proceeds by interpreting each bit within the codeword, where ‘0’ and ‘1’ signify movement to the left and right, respectively. This traversal continues until the leaf node corresponding to the complete codeword is reached. The data block associated with this leaf node can then be accessed. The hierarchical structure's efficiency in random access results from reducing the search space as one traverses the hierarchy, restricting navigation to depths in correspondence with the bits in the codeword. The access process is advantageous, especially when dealing with diverse codeword lengths. User provided locational hints may be used by hierarchical lookup module 3412 to determine which leaf or node to commence the search from. Hierarchical lookup module 3412 may receive a data search request which may be associated with a start codeword. In an implementation, logically a start codeword is the initial point at which data may be retrieved to satisfy the query request. Random access engine may use lookup module 3412 to traverse the hierarchical representation of the codebook until a start codeword is located. The start codeword and any subsequent codewords relevant to the query request may be retrieved and returned to a decoder (e.g., data reconstruction engine) for decoding.
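Continuing the sketch above as a non-limiting illustration, bit-by-bit traversal from a known codeword boundary might be expressed as follows (Python; `root` refers to the toy hierarchy built in the previous sketch):

```python
def read_codeword(root, bits: str, start: int):
    """Walk the hierarchy from a codeword boundary at `start`:
    '0' descends left, '1' descends right, until a leaf is reached.
    Returns (sourceblock, position of the next codeword boundary)."""
    node, pos = root, start
    while node.sourceblock is None:
        node = node.children[bits[pos]]
        pos += 1
    return node.sourceblock, pos

def read_from(root, bits: str, start: int, count: int) -> bytes:
    """Random-access read of `count` consecutive codewords beginning at a
    known boundary, without decoding the rest of the compacted stream."""
    out, pos = [], start
    for _ in range(count):
        block, pos = read_codeword(root, bits, pos)
        out.append(block)
    return b"".join(out)

# Using the toy hierarchy above, the bit stream '110 0 10 111' decodes to:
print(read_from(root, "110010111", start=0, count=4))  # b'datathe and file'
```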
Also present in the embodiment is estimator 3420 which can provide initial estimates for a starting location to begin searching for the data associated with data read request 3401 as well as refine user provided hints or estimates. According to an implementation, estimator 3420 may estimate the one or more possible starting points using a distinct value estimate mechanism via a starting point module 3421. Distinct value estimation, also known as cardinality estimation or unique value estimation, is a process used to determine the number of distinct or unique values in a dataset. Some techniques that may be used by starting point module 3421 to perform distinct value estimation can include, but are not limited to, counting approaches (e.g., exact counting, HyperLogLog, etc.), sampling methods (reservoir sampling, min hashing, etc.), probabilistic data structures (e.g., bloom filters, count-min sketch, etc.), and statistical estimation, to name a few. These estimates 3403 may be sent to data search engine 3410 and used by hierarchical lookup module 3412 as an initial node/leaf at which to begin the search.
The distinct values may be associated with a codeword and/or a codeword boundary. In some implementations, estimator 3420 may perform a distinct value estimate to determine one or more codeword boundaries, wherein the one or more codeword boundaries may be used as starting points for facilitating a random access read of compacted data. In some implementations, an initial estimate or user provided hint may be refined by performing a distinct value estimate.
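As one illustrative, non-limiting example of the class of techniques listed above (alongside HyperLogLog, reservoir sampling, min-hashing, and the others), a k-minimum-values estimator is small enough to sketch here (Python); it is offered only as an example, not as the estimator's prescribed algorithm:

```python
import hashlib
import heapq

def kmv_distinct_estimate(values, k: int = 64) -> float:
    """K-minimum-values cardinality estimate: hash each value to [0, 1),
    keep the k smallest distinct hashes, and estimate the number of
    distinct values as (k - 1) divided by the k-th smallest hash."""
    max_hash = float(2 ** 64)
    smallest = []          # max-heap (stored negated) of the k smallest hashes
    in_heap = set()
    for v in values:
        digest = hashlib.blake2b(str(v).encode(), digest_size=8).digest()
        h = int.from_bytes(digest, "big") / max_hash
        if h in in_heap:
            continue
        if len(smallest) < k:
            heapq.heappush(smallest, -h)
            in_heap.add(h)
        elif h < -smallest[0]:                     # smaller than current k-th smallest
            in_heap.discard(-heapq.heapreplace(smallest, -h))
            in_heap.add(h)
    if len(smallest) < k:
        return float(len(smallest))                # fewer than k distinct values: exact
    return (k - 1) / (-smallest[0])

# e.g., estimate how many distinct codewords occur in a sampled portion of a stream
estimate = kmv_distinct_estimate(f"codeword-{i % 500}" for i in range(10_000))
```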
At a next step 3503, hierarchical lookup module 3412 can traverse the hierarchical representation until a start codeword corresponding to the query request is located. At step 3504, the start codeword is retrieved and all other subsequent, relevant codewords are also retrieved from the compacted data file. The codewords are relevant with respect to the query request. A plurality of subsequent codewords may be retrieved to satisfy the query request. As a last step 3505, the retrieved codewords may be sent to a decoder and decoded. In an embodiment, data reconstruction engine 108 may be implemented as a decoder. The decoded data may then be returned to the appropriate endpoint associated with the request process. For example, if a user submitted a query, then the decoded data may be sent to user interface 2810 where it may be viewed by the user.
Location 2. In the case where the reference codes contained in a particular codeword have been newly generated by library manager 503 at Location 1, the codeword is transmitted along with a copy of the associated sourceblock. As data reconstruction engine 507 at Location 2 receives the codewords, it passes them to library manager module 508 at Location 2, which looks up the sourceblock in sourceblock library lookup table 509 at Location 2, and retrieves the associated sourceblock from sourceblock library storage 510. Where a sourceblock has been transmitted along with a codeword, the sourceblock is stored in sourceblock library storage 510 and sourceblock library lookup table 504 is updated. Library manager 503 returns the appropriate sourceblocks to data reconstruction engine 507, which assembles them into the proper order and sends the data in its original form 511.
preferred embodiment of the invention. Incoming training data sets may be received at a customized library generator 1300 that processes training data to produce a customized word library 1201 comprising key-value pairs of data words (each comprising a string of bits) and their corresponding calculated binary Huffman codewords. The resultant word library 1201 may then be processed by a library optimizer 1400 to reduce size and improve efficiency, for example by pruning low-occurrence data entries or calculating approximate codewords that may be used to match more than one data word. A transmission encoder/decoder 1500 may be used to receive incoming data intended for storage or transmission, process the data using a word library 1201 to retrieve codewords for the words in the incoming data, and then append the codewords (rather than the original data) to an outbound data stream. Each of these components is described in greater detail below, illustrating the particulars of their respective processing and other functions, referring to
System 1200 provides near-instantaneous source coding that is dictionary-based and learned in advance from sample training data, so that encoding and decoding may happen concurrently with data transmission. This results in computational latency that is near zero but the data size reduction is comparable to classical compression. For example, if N bits are to be transmitted from sender to receiver, the compression ratio of classical compression is C, the ratio between the deflation factor of system 1200 and that of multi-pass source coding is p, the classical compression encoding rate is RC bit/s and the decoding rate is RD bit/s, and the transmission speed is S bit/s, the compress-send-decompress time will be
while the transmit-while-coding time for system 1200 will be (assuming that encoding and decoding happen at least as quickly as network latency):
so that the total data transit time improvement factor is
which presents a savings whenever
This is a reasonable scenario given that typical values in real-world practice are C=0.32, RC=1.1·1012, RD=4.2·1012, S=1011, giving
such that system 1200 will outperform the total transit time of the best compression technology available as long as its deflation factor is no more than 5% worse than compression. Such customized dictionary-based encoding will also sometimes exceed the deflation ratio of classical compression, particularly when network speeds increase beyond 100 Gb/s.
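As a hedged illustration of the comparison, if the compress-send-decompress time is taken to be the sum of the sequential encoding, transmission, and decoding stages, it may be written as

\[
t_{\text{compress-send-decompress}} = \frac{N}{R_C} + \frac{N\,C}{S} + \frac{N}{R_D},
\]

while the transmit-while-coding time is essentially the transmission time of the encoded stream alone, since encoding and decoding overlap with transmission; this rendering is an assumption offered for orientation only, not a restatement of the referenced expressions.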
The delay between data creation and its readiness for use at a receiving end will be equal to only the source word length t (typically 5-15 bytes), divided by the deflation factor C/p and the network speed S, i.e.
since encoding and decoding occur concurrently with data transmission. On the other hand, the latency associated with classical compression is
where N is the packet/file size. Even with the generous values chosen above as well as N=512K, t=10, and p=1.05, this results in delayinvention≈3.3·10−10 while delaypriorart≈1.3·10−7, a more than 400-fold reduction in latency.
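As a numerical illustration, reading the delay described above literally as the source word length divided by the deflation factor C/p and the network speed S gives

\[
\text{delay}_{\text{invention}} \approx \frac{t\,p}{C\,S} = \frac{10 \times 1.05}{0.32 \times 10^{11}} \approx 3.3 \times 10^{-10},
\]

which reproduces the invention-side figure quoted above from the stated values of t, p, C, and S.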
A key factor in the efficiency of Huffman coding used by system 1200 is that key-value pairs be chosen carefully to minimize expected coding length, so that the average deflation/compression ratio is minimized. It is possible to achieve the best possible expected code length among all instantaneous codes using Huffman codes if one has access to the exact probability distribution of source words of a given desired length from the random variable generating them. In practice this is impossible, as data is received in a wide variety of formats and the random processes underlying the source data are a mixture of human input, unpredictable (though in principle, deterministic) physical events, and noise. System 1200 addresses this by restriction of data types and density estimation; training data is provided that is representative of the type of data anticipated in “real-world” use of system 1200, which is then used to model the distribution of binary strings in the data in order to build a Huffman code word library 1201.
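By way of non-limiting illustration, deriving a Huffman code word library from representative training data might be sketched as follows (Python; a standard heap-based Huffman construction is shown as a stand-in for the library generation performed by customized library generator 1300):

```python
import heapq
from collections import Counter
from itertools import count

def build_huffman_library(training_words) -> dict:
    """Estimate the source-word distribution from training data and derive
    a Huffman codeword for each observed word (word -> bit-string)."""
    freq = Counter(training_words)
    tiebreak = count()  # keeps heap comparisons well-defined for equal weights
    heap = [(w, next(tiebreak), {word: ""}) for word, w in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                              # degenerate single-word case
        (_, _, codes), = heap
        return {word: "0" for word in codes}
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)             # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {word: "0" + code for word, code in c1.items()}
        merged.update({word: "1" + code for word, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tiebreak), merged))
    return heap[0][2]

# Train on representative sample data, then use the library for encoding.
library = build_huffman_library(["the ", "the ", "and ", "data", "file", "the "])
```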
A codebook retriever 2930 receives a signal from the data query receiver 2910 that prompts the codebook retriever 2930 to request the codebook and frequency table associated with the compacted data file from a word library 1201. The frequency table 2950 shows the most frequently occurring words or substrings within a data set, and may be used by the data search engine 2940 to refine the location estimate.
The data search engine 2940 receives a data read request in the form of a search term such as a byte range, string, or substring, and may receive an initial location estimate from the estimator 2920 if a location hint was included in the data read query. The data search engine 2940 may use a frequency table 2950 to refine location estimates and to automatically identify codeword boundaries. The estimated location may fall in the middle of a codeword, in which case the search results will return output that does not match the search query. For example, if the search results return a sequence of bytes, the frequency table 2950 may be used to determine whether that sequence is unlikely to occur in the original data; if the sequence is reasonably likely, then a codeword boundary has probably been found. When a codeword boundary is found, it allows the whole compacted data file to be accessed in any order by jumping from codeword to codeword, facilitating useful search results. If the data request is in a string format and a location hint was provided, then the data search engine 2940 may automatically locate the search string via a binary search from the estimated starting point or a found codeword boundary. The data search engine 2940 may also parse a search term string into sourceblocks and create at least one or more encodings for sub-search strings derived from the original search string. An exemplary parsing process is discussed in more detail in
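As a non-limiting sketch, the frequency-table plausibility check described above might resemble the following (Python; the threshold and the toy frequency table are illustrative only):

```python
def boundary_is_plausible(decoded_bytes: bytes, frequency_table: dict,
                          threshold: float = 1e-6) -> bool:
    """Score a trial decode against the frequency table of the original data.

    frequency_table maps a byte value to its relative frequency in the
    original data; an implausibly low per-byte likelihood suggests the trial
    decode started in the middle of a codeword rather than on a boundary."""
    likelihood = 1.0
    for b in decoded_bytes:
        likelihood *= frequency_table.get(b, 1e-9)
    per_byte = likelihood ** (1 / max(len(decoded_bytes), 1))  # geometric mean
    return per_byte > threshold

# Toy table: printable ASCII is common in the original data, control bytes are not.
table = {b: 0.01 for b in range(0x20, 0x7F)}
print(boundary_is_plausible(b"cosmology", table))     # True: plausible text
print(boundary_is_plausible(b"\x00\x01\x02", table))  # False: unlikely byte sequence
```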
A search cache 2960 may optionally be used to store previous search terms and their locations within the compacted data file. The data query receiver 2910 may look for the requested data in the cache 2960 and if it is found in the cache then its location is sent to the data reconstruction engine 108 where the compacted data may be reconstructed and then sent to the user for review.
If the data query is a data write query, then the data query receiver 2910 may send a signal to the codebook retriever 2930 to retrieve the codebook corresponding to the identified compacted version of the data file in which the write term is to be written and send the write term to a data write engine 2970. The codebook retriever 2930 sends the codebook to the data write engine 2970. If the size of the data to be written (the write term) is exactly the length of a sourceblock, then the data write engine 2970 can simply encode the data and insert it into the received codebook. More likely, the size of the data to be written does not exactly match the sourceblock length, and simply encoding and adding the codeword to the codebook would modify the output of the codewords globally, essentially changing everything from that point on. In an embodiment, when some data is to be inserted into the original data file, the original file may be entirely re-encoded. In another embodiment, instead of re-encoding the entire file, an opcode is created that tells the decoder there is an offset that has to be accounted for when reconstructing the compacted data. In yet another embodiment, instead of using an opcode, there are extra unused bits available in the codebook that can be used to encode information about how many secondary bytes are coming up. A secondary byte (or bytes) represents the newly written data that may be encoded and inserted in the codebook. In this way, when the encoded bit is found, the data encoder can switch to secondary encoding, encode one fewer byte, then resume normal encoding. This allows for inserting data into the original data file without having to re-encode the entire file.
The system 3700 is designed to handle various types of input and produce corresponding outputs. Input data 3701 represents the original data to be compressed and encrypted. This data is fed into the dyadic distribution compression and encryption subsystem 3800. Search queries 3702 and write requests 3703 are directed to the random access engine 3900, allowing for efficient searching and modification of the compacted data.
The dyadic distribution compression and encryption subsystem 3800 is responsible for the initial processing of input data 3701. It analyzes the input data, creates a transformation matrix, and transforms the data into a dyadic distribution. The subsystem 3800 then generates a main data stream of transformed data and a secondary data stream of transformation information. Finally, it compresses the main data stream, producing compacted data 3704 as output.
The random access engine 3900 facilitates efficient searching and manipulation of the compacted data files. It receives search queries 3702 and write requests 3703, and interacts with the compacted data 3704. The engine 3900 organizes the reference codebook into a learnable hierarchical representation and traverses this representation to identify specific codewords. For search queries, it produces search results 3705, and for write requests, it generates write results 3706.
The large codeword model with latent transformer core 4000 processes the compressed data from subsystem 3800. It uses a variational autoencoder (VAE) encoder to further compress the data into latent space vectors. These vectors are then processed by a modified transformer that learns relationships between the vectors. The processed data is then decoded through a VAE decoder to produce the final output 3707.
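For illustration only, the latent transformer core described above might be sketched as follows (Python with PyTorch; the dimensions are arbitrary, and the sketch simply shows latent vectors passed to a transformer encoder with no embedding layer and no positional encoding layer):

```python
import torch
import torch.nn as nn

latent_dim, seq_len, batch = 32, 16, 4

class VAE(nn.Module):
    """Minimal VAE encoder/decoder over fixed-size input vectors."""

    def __init__(self, in_dim: int = 256, latent_dim: int = latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, latent_dim), nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def encode(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization

    def decode(self, z):
        return self.dec(z)

vae = VAE()
# Latent transformer core: applied directly to latent vectors, with no
# embedding layer and no positional encoding layer.
core = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=latent_dim, nhead=4, batch_first=True),
    num_layers=2,
)

x = torch.rand(batch, seq_len, 256)   # sequence of codeword-derived input vectors
z = vae.encode(x)                     # (batch, seq_len, latent_dim) latent space vectors
z_out = core(z)                       # relationships learned in latent space
y = vae.decode(z_out)                 # final output vectors
```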
The trainable estimator subsystem 4100 works in conjunction with the random access engine 3900 to improve search efficiency. It estimates starting bit locations in the compacted data file and refines these estimates using distinct value estimation techniques. The subsystem 4100 helps determine codeword boundaries and guides the traversal of the hierarchical representation.
The end to end training subsystem 4200 coordinates the joint training of all system components. It ensures that subsystems 3800, 3900, 4000, and 4100 are trained together, allowing them to adapt to each other's behaviors and outputs. This subsystem 4200 optimizes performance across all components, from initial compression and encryption to final random access capabilities.
In operation, input data 3701 is first processed by the dyadic distribution compression and encryption subsystem 3800, producing compacted data 3704. This compacted data can then be efficiently searched or modified using the random access engine 3900 in response to search queries 3702 or write requests 3703. The large codeword model 4000 and trainable estimator subsystem 4100 support these operations by providing advanced processing capabilities and improved search efficiency. Throughout this process, the end to end training subsystem 4200 continuously optimizes the performance of all components, ensuring the system adapts and improves over time.
The subsystem 3800 receives input data 3801 for compression and encryption. This input data can be of various types, including but not limited to text documents, images, audio files, video files, or time series data. The processed output of this subsystem is a compressed and encrypted data stream 3851, which is then passed to other components of the system.
The input data analyzer 3810 is the first component to process the incoming data 3801. It examines the input data to determine its properties and characteristics, such as data type, structure, and statistical properties. This analysis is crucial for optimizing the subsequent compression and encryption processes.
Based on the analysis from 3810, the transformation matrix generator 3820 creates a transformation matrix. This matrix is designed to efficiently transform the input data into a dyadic distribution, which is more amenable to compression and encryption. The generator 3820 uses trainable parameters, allowing it to adapt and improve its matrix generation over time through the end to end training process.
The dyadic distribution transformer 3830 takes the input data and the generated transformation matrix to convert the data into a dyadic distribution. This transformation is a key step in preparing the data for efficient compression and encryption, as dyadic distributions have properties that are particularly well-suited for these processes.
After transformation, the stream generator 3840 creates two separate data streams: a main data stream of transformed data 3841 and a secondary data stream containing transformation information 3842. This separation allows for more efficient compression and provides necessary context for later decompression and decryption.
Finally, the compression module 3850 compresses the main data stream using algorithms optimized for dyadic distributions. This module also uses trainable parameters to adapt its compression techniques based on the specific characteristics of the input data and the results of the dyadic transformation.
In operation, input data 3801 flows through each component of the subsystem sequentially. The input data analyzer 3810 first processes the data, passing its analysis to the transformation matrix generator 3820. The generated matrix and the original input data are then used by the dyadic distribution transformer 3830. The transformed data is passed to the stream generator 3840, which creates the two separate streams. These streams are then compressed by the compression module 3850, resulting in the final compressed and encrypted data stream 3851.
Throughout this process, the dyadic distribution compression and encryption subsystem 3800 interacts with other system components, particularly the end to end training subsystem 4200, which provides continuous optimization of the trainable parameters used in various stages of the compression and encryption process. This allows subsystem 3800 to adapt to different types of input data and improve its performance over time.
The random access engine 3900 receives input in the form of data search queries 3702 and write requests 3703. It interacts with the compacted data file 3704 and produces search results 3705 or write confirmations 3706 as output. The engine also interfaces with the trainable estimator subsystem 4100 to optimize its search and write operations.
The query processor 3910 is the first component to handle incoming requests. It parses and interprets data search queries 3702 and write requests 3703, extracting key information such as search terms, write locations, and any provided hints or metadata. This processed information is then passed to other components of the engine for further action.
The hierarchical representation generator 3920 is responsible for organizing the reference codebook into a learnable hierarchical structure. This component receives the reference codebook 4010 and transforms it into a tree-like structure where each node represents a partial codeword, and leaf nodes correspond to complete codewords. The hierarchical structure is designed to optimize search operations and is continually refined through the end to end training process.
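As a non-limiting sketch of such a hierarchical representation, the Python example below builds a binary prefix trie from a prefix-free codebook, where interior nodes correspond to partial codewords and leaf nodes correspond to complete codewords; the codebook contents are hypothetical.

```python
class TrieNode:
    __slots__ = ("children", "symbol")
    def __init__(self):
        self.children = {}   # bit ('0' or '1') -> TrieNode
        self.symbol = None   # set only on leaf nodes (complete codewords)

def build_codeword_trie(codebook):
    """codebook: dict mapping symbol -> bit-string codeword, e.g. {'A': '0', 'B': '10'}.
    Returns the root of a binary trie whose leaves are complete codewords."""
    root = TrieNode()
    for symbol, bits in codebook.items():
        node = root
        for bit in bits:
            node = node.children.setdefault(bit, TrieNode())
        node.symbol = symbol
    return root

# Example with a hypothetical prefix-free codebook
root = build_codeword_trie({"A": "0", "B": "10", "C": "110", "D": "111"})
```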
The traversal subsystem 3930 uses the hierarchical representation created by 3920 to efficiently navigate through the compacted data. For search queries, it starts from an initial point, which may be provided by the trainable estimator subsystem 4100, and moves through the hierarchy to locate the desired codeword. For write requests, it navigates to the appropriate insertion point. The traversal subsystem uses trainable parameters to optimize its path-finding strategies.
The codeword identifier 3940 works closely with the traversal subsystem 3930. Once the traversal subsystem has located the relevant section of the hierarchical representation, the codeword identifier pinpoints the exact codeword or sequence of codewords that correspond to the search query or write request. For search operations, it retrieves the identified codewords; for write operations, it determines where new codewords should be inserted or existing ones modified.
The search results 3705 and write confirmations 3706 are sent back to the user or system component that initiated the request 3941. Any modifications to the compacted data file 3704 are stored back in the system's data storage. The engine also provides performance data to the end to end training subsystem 4200 for continuous optimization.
In operation, a data search query 3702 or write request 3703 is first processed by the query processor 3910. The processed query information is passed to the traversal subsystem 3930, which uses the hierarchical representation generated by 3920 to navigate through the compacted data file 3704. The codeword identifier 3940 then pinpoints the exact codewords relevant to the query. For search queries, these codewords are retrieved and decoded to produce search results 3705. For write requests, the identified location is used to modify the compacted data, and a write confirmation 3706 is generated.
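Continuing the earlier trie sketch, the following example walks that trie from a codeword-aligned bit offset to recover symbols, loosely illustrating the read path described in this paragraph; it reuses the build_codeword_trie helper and hypothetical codebook from that sketch and is not the engine's actual implementation.

```python
def decode_from(bits, start, root, count):
    """Decode `count` symbols from bit-string `bits`, beginning at bit offset
    `start`, which is assumed to fall on a codeword boundary. Walks the trie
    from its root and emits a symbol each time a leaf (complete codeword) is reached."""
    out, node, i = [], root, start
    while i < len(bits) and len(out) < count:
        node = node.children[bits[i]]
        i += 1
        if node.symbol is not None:
            out.append(node.symbol)
            node = root
    return out, i   # decoded symbols and the next codeword-aligned bit offset

# Reusing the trie built in the earlier sketch:
# root = build_codeword_trie({"A": "0", "B": "10", "C": "110", "D": "111"})
symbols, next_bit = decode_from("100111", 0, root, 3)
print(symbols, next_bit)   # ['B', 'A', 'D'] 6
```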
Throughout this process, the random access engine 3900 interacts with other system components, particularly the trainable estimator subsystem 4100, which provides initial estimates for search starting points. The engine also interfaces with the end to end training subsystem 4200, which continuously optimizes the trainable parameters used in the hierarchical representation generation and traversal processes. This allows the random access engine to adapt to different types of queries and data structures, improving its performance over time.
The large codeword model 4000 receives input in the form of compressed data 3704, which is the output from the dyadic distribution compression and encryption subsystem 3800. It processes this data and produces final output data 4001, which can be used for various downstream tasks or sent back to the random access engine 3900 for storage or further processing.
The codebook 4010 maintains the mapping between codewords and their corresponding data. It is used by the VAE encoder 4020 and VAE decoder 4050 for interpreting the compressed data and generating output, respectively. The codebook 4010 is shared with the random access engine 3900 to ensure consistent interpretation of codewords across the system.
The VAE encoder 4020 processes the incoming compressed data 3704, using the codebook 4010 to interpret the data. It further compresses the input into a lower-dimensional latent space representation. This encoder uses a variational approach, capturing a distribution over latent space points for each input.
The latent space representation subsystem 4030 manages the compressed representations produced by the VAE encoder. It maintains the structure of the latent space and provides the interface between the encoder and the modified transformer. This subsystem may implement additional processing on the latent representations to facilitate performance in the transformer stage.
The modified transformer 4040 is the core processing unit of this subsystem. It operates directly on the latent space vectors, without embedding or positional encoding layers. It uses self-attention mechanisms to learn relationships between different parts of the latent representation. The transformer can be configured with multiple layers and attention heads to capture complex dependencies in the data.
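The following PyTorch sketch illustrates, under assumed dimensions, a transformer encoder applied directly to latent vectors with no token-embedding or positional-encoding layers; it is a minimal stand-in for the modified transformer 4040, not its actual architecture.

```python
import torch
import torch.nn as nn

class LatentTransformer(nn.Module):
    """Minimal sketch: a transformer encoder applied directly to latent
    vectors, with no embedding or positional-encoding layers.
    Dimensions are illustrative assumptions, not values from the disclosure."""
    def __init__(self, latent_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, latents):            # latents: (batch, seq_len, latent_dim)
        return self.encoder(latents)

# Example: a batch of 8 sequences of 16 latent vectors of dimension 64
z = torch.randn(8, 16, 64)
out = LatentTransformer()(z)
print(out.shape)  # torch.Size([8, 16, 64])
```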
The VAE decoder 4050 is the final component in the processing pipeline. It takes the output of the modified transformer and decodes it back into the original data space, using the codebook 4010 to ensure proper interpretation. Like the encoder, the decoder is variational, allowing it to generate outputs based on the learned latent space distribution.
The final output data 4051 produced by the VAE decoder 4050 may be returned to the random access engine 3900, which can efficiently store and index the processed data for future retrieval, or it may be forwarded to other components of the larger system or to external systems for further use 4052, depending on the specific application requirements. This output pathway ensures that the processed data is both stored efficiently and immediately available for any necessary downstream tasks.
In operation, compressed data 3704 is processed by the VAE encoder 4020, which produces a latent space representation. This representation is managed by the latent space representation subsystem 4030, which passes it to the modified transformer 4040. The transformer processes the latent representation, learning and applying relationships between different parts of the data. The processed latent representation is then passed to the VAE decoder 4050, which generates the final output data 4051.
The large codeword model 4000 is designed to handle various types of data including text, images, audio, and time-series data. This flexibility is achieved through the use of the latent space representation, which provides a common format for different data types.
Throughout its operation, the large codeword model 4000 interacts with the end to end training subsystem 4200, which optimizes the trainable parameters used in its components. This allows the model to adapt to different types of input data and improve its performance over time. The model also interfaces with the random access engine 3900, providing processed data that can be efficiently stored and retrieved.
The trainable estimator subsystem 4100 receives input in the form of search queries 3702 and write requests 3703. It processes these requests to provide optimized starting points for the random access engine 3900, outputting refined starting bit locations 4101.
The initial bit location estimator 4110 is the first component to process incoming requests. For search queries 3702, it uses trainable parameters to estimate an initial starting bit location in the compacted data file. For write requests 3703, it estimates the location where new data should be inserted. This component leverages historical data and learned patterns to make its initial estimates.
The distinct value estimator 4120 performs cardinality estimation on the compressed data 3704, which is the output from the dyadic distribution compression and encryption subsystem 3800 and is managed by the random access engine 3900. It uses techniques such as HyperLogLog, min hashing, or statistical estimation to determine the number of distinct values in the dataset. This information is crucial for refining the initial bit location estimate and identifying potential codeword boundaries.
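As one non-limiting example from the min-hashing family of estimators mentioned above, the Python sketch below implements a simple k-minimum-values (KMV) cardinality estimate; the parameter choices and sample data are illustrative assumptions, not the subsystem's actual implementation.

```python
import hashlib

def kmv_distinct_estimate(values, k=256):
    """k-minimum-values (KMV) cardinality estimate: hash every value to [0, 1),
    keep the k smallest distinct hash values, and estimate D ~ (k - 1) / h_k,
    where h_k is the k-th smallest hash. A min-hash-style alternative to HyperLogLog."""
    smallest = set()
    for v in values:
        h = int(hashlib.sha1(repr(v).encode()).hexdigest(), 16) / 2**160
        smallest.add(h)
        if len(smallest) > k:
            smallest.remove(max(smallest))   # keep only the k smallest hashes
    if len(smallest) < k:
        return len(smallest)                 # fewer than k distinct values seen: exact
    return int((k - 1) / max(smallest))

print(kmv_distinct_estimate(x % 50_000 for x in range(200_000)))  # roughly 50000
```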
The codeword boundary detector 4130 works in conjunction with the distinct value estimator 4120 to identify likely codeword boundaries in the compacted data. It analyzes the distribution of distinct values to determine where codewords are likely to begin and end. This component uses trainable parameters to improve its boundary detection accuracy over time.
The hierarchical representation traversal subsystem 4140 is responsible for navigating the hierarchical representation of the codebook when the initial estimate needs refinement. If the initial bit location does not correspond to a codeword boundary, this subsystem efficiently traverses the hierarchy to locate the nearest valid codeword boundary. It uses optimized traversal strategies that adapt based on the structure of the data and historical access patterns.
In operation, the process begins when the trainable estimator subsystem 4100 receives either a search query 3702 or a write request 3703 from the random access engine 3900. Simultaneously, it accesses the compressed data 3704 managed by the random access engine 3900. The initial bit location estimator 4110 processes the search query or write request to generate an initial estimate. This estimate is then refined using information from the distinct value estimator 4120, which analyzes the compressed data 3704. The codeword boundary detector 4130 further refines this estimate based on the analysis of the compressed data and the specifics of the query or request. If necessary, the hierarchical representation traversal subsystem 4140 adjusts the location to ensure it corresponds to a valid codeword boundary within the compressed data 3704, taking into account the requirements of the search query or write request. The final refined starting bit location 4141 is then output for use by the random access engine 3900 to fulfill the original search query or write request.
Throughout its operation, the trainable estimator subsystem 4100 interacts with the end to end training subsystem 4200, which continuously optimizes the trainable parameters used in each component. This allows the estimator to adapt to different data structures and access patterns, improving its accuracy and efficiency over time.
The trainable estimator subsystem 4100 plays a crucial role in enhancing the performance of the random access engine 3900 by providing accurate starting points for data access operations. By leveraging statistical techniques and machine learning, it significantly reduces the search space and improves the efficiency of both read and write operations on the compacted data.
The end to end training subsystem 4200 interacts with all other major components of the system, including the dyadic distribution compression and encryption subsystem 3800, the random access engine 3900, the unified neural intelligent transformer engine 4000, and the trainable estimator subsystem 4100. It receives performance metrics and gradient information from these components 4201 and outputs optimized parameters 4241.
The joint training coordinator 4210 orchestrates the end to end training process. It implements a multi-task learning framework that balances the objectives of different system components. This coordinator uses a dynamic weighting scheme to adjust the importance of each component's loss function during training. For example, it might increase the weight of the random access engine's loss if retrieval speed is lagging, or prioritize the transformer's loss if prediction accuracy needs improvement. The coordinator also manages a shared parameter space, identifying and updating parameters that affect multiple components to ensure cohesive optimization.
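A minimal sketch of such a dynamic weighting scheme is shown below; the update rule, component names, metrics, and targets are assumptions chosen only to illustrate how a lagging component might receive a larger share of the joint loss.

```python
def weighted_joint_loss(losses, weights):
    """Combine per-component losses with dynamic weights.
    `losses` and `weights` are dicts keyed by component name."""
    return sum(weights[name] * loss for name, loss in losses.items())

def update_weights(weights, metrics, targets, step=0.1):
    """Nudge a component's weight up when its metric lags its target,
    then renormalize so the weights sum to 1. Purely illustrative."""
    for name, value in metrics.items():
        if value < targets[name]:
            weights[name] += step
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

weights = {"compression": 0.25, "random_access": 0.25, "transformer": 0.25, "estimator": 0.25}
weights = update_weights(
    weights,
    metrics={"compression": 0.9, "random_access": 0.6, "transformer": 0.8, "estimator": 0.7},
    targets={"compression": 0.8, "random_access": 0.8, "transformer": 0.8, "estimator": 0.8})
loss = weighted_joint_loss(
    {"compression": 0.31, "random_access": 0.72, "transformer": 0.44, "estimator": 0.58}, weights)
print(weights)  # random_access and estimator weights increase
print(loss)
```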
The performance optimizer 4220 employs a suite of analytical tools to evaluate system performance. It calculates metrics such as BLEU scores for text generation tasks, mean squared error for numerical predictions, and retrieval latency for database operations. This optimizer maintains a historical record of performance across different data types and operational scenarios. It uses statistical analysis to identify performance trends and anomalies, and employs a decision tree algorithm to determine which aspects of the system to prioritize for improvement. The optimizer also implements A/B testing to evaluate the impact of different optimization strategies.
The gradient flow manager 4230 utilizes advanced gradient manipulation techniques to ensure effective backpropagation across the system. It implements gradient clipping to prevent exploding gradients, and uses layer-wise adaptive rate scaling (LARS) to adjust learning rates for different layers based on their gradient statistics. This manager also employs gradient accumulation for large batch training, allowing it to simulate larger batch sizes than would fit in memory. For very deep networks, it uses gradient checkpointing to trade computation for memory, enabling training of more complex models.
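Gradient clipping and gradient accumulation are standard techniques; the PyTorch sketch below shows both in a single training loop (LARS and gradient checkpointing are omitted). The model, data, and loss function here are hypothetical placeholders, not components of the disclosed system.

```python
import torch
from torch.nn.utils import clip_grad_norm_

def train_epoch(model, loader, optimizer, accum_steps=4, max_norm=1.0):
    """Illustrative loop with gradient accumulation (to simulate a larger
    batch) and gradient clipping (to prevent exploding gradients)."""
    model.train()
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        loss = torch.nn.functional.mse_loss(model(x), y) / accum_steps
        loss.backward()                       # gradients accumulate across steps
        if (step + 1) % accum_steps == 0:
            clip_grad_norm_(model.parameters(), max_norm)
            optimizer.step()
            optimizer.zero_grad()

# Hypothetical usage with a toy model and synthetic batches
model = torch.nn.Linear(10, 1)
data = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(8)]
train_epoch(model, data, torch.optim.SGD(model.parameters(), lr=0.01))
```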
The component-specific training adapters 4240 are a collection of specialized optimization modules. For the transformer component, the adapter implements adaptive attention span techniques to dynamically adjust the attention window. For the VAE, it uses a cyclical annealing schedule for the KL divergence term to balance reconstruction quality and latent space regularity. The adapter for the random access engine employs a custom optimization algorithm that combines elements of simulated annealing and genetic algorithms to optimize the codeword allocation strategy. Each adapter also implements its own learning rate scheduler, using techniques like cosine annealing with warm restarts to prevent convergence to suboptimal local minima.
In operation, the end to end training subsystem 4200 continuously monitors the performance of all system components. The joint training coordinator 4210 initiates training cycles, during which the performance optimizer 4220 analyzes system-wide metrics. Based on this analysis, the gradient flow manager 4230 computes and distributes gradients across the system. The component-specific training adapters 4240 then apply these gradients in a manner optimized for each component.
For example, when processing a batch of data, the system might identify that the random access engine 3900 is underperforming in retrieval speed for certain types of queries. The performance optimizer 4220 would flag this issue, and the joint training coordinator 4210 would initiate a focused training cycle. The gradient flow manager 4230 would ensure that relevant gradients are prominently backpropagated to the random access engine 3900, while the corresponding component-specific training adapter would apply these gradients using optimization techniques tailored for the engine's architecture.
Throughout this process, the end to end training subsystem 4200 maintains a holistic view of the system, ensuring that improvements in one area do not come at the cost of degradation in another. It continuously adjusts its training strategies based on the evolving performance characteristics of the system and the nature of the data being processed.
The optimized parameters 4241 output by the end to end training subsystem 4200 are distributed to all other components of the system, allowing them to update their internal models and decision-making processes. This continuous optimization loop enables the unified neural intelligent transformer engine to adapt to changing data patterns and operational requirements over time, maintaining high performance across a wide range of tasks and data types.
The training subsystem 4200 is characterized as “end to end” due to its holistic approach to optimizing the entire unified neural intelligent transformer engine as a single, integrated system. Unlike traditional approaches that might optimize each component separately, this subsystem considers the interdependencies between all parts of the system, from the initial data input to the final output. It simultaneously trains all components of the system, including the dyadic distribution compression and encryption subsystem 3800, the random access engine 3900, the unified neural intelligent transformer engine 4000, and the trainable estimator subsystem 4100. The joint training coordinator 4210 ensures that improvements in one component contribute to the overall system performance, rather than optimizing locally at the expense of global efficiency. The gradient flow manager 4230 propagates gradients across component boundaries, allowing changes in the output to influence even the earliest stages of data processing. Performance metrics are calculated based on the system's end-to-end performance on tasks, rather than on the individual performance of each component. The component-specific training adapters 4240 are designed to work in concert, ensuring that optimizations in one part of the system are compatible with and beneficial to other parts.
This end-to-end approach allows the system to discover and exploit synergies between components that might not be apparent when training each part in isolation, leading to superior overall performance and adaptability.
In a step 5210, the system trains a machine learning model using a diverse collection of compacted data files. This training process involves analyzing patterns in various types of compressed data, understanding boundary characteristics, and learning which estimation techniques perform best under different circumstances. The training data helps the model recognize relationships between data characteristics and optimal refinement strategies.
In a step 5220, the system performs a detailed analysis of the incoming search query's characteristics. This analysis examines factors such as data type patterns, statistical properties, structural features, temporal patterns, and content distribution. The analysis provides information that will guide the selection of the most appropriate refinement technique for the specific query context.
In a step 5230, the system generates an initial starting bit location estimation. This preliminary estimate serves as a starting point for more refined searches and is based on the parameters provided in the search query. The initial estimation may utilize basic statistical methods or historical access patterns to determine a reasonable starting position within the compacted data file.
In a step 5240, the system leverages the trained machine learning model to select an optimized refinement technique. The selection process considers the analyzed characteristics from step 5220 and chooses from various methods such as but not limited to statistical pattern analysis, entropy-based detection, frequency analysis, neural prediction, probabilistic state machines, or hybrid approaches. The selection is based on historical performance metrics and the specific requirements of the current query.
In a step 5250, the system applies the selected refinement technique to improve upon the initial bit location estimation. This step may involve multiple iterations of refinement, using the chosen method's specific approach to boundary detection and position optimization. The refinement process continues until a satisfactory level of precision is achieved or predetermined accuracy criteria are met.
In a step 5260, the system outputs the refined bit location, which represents the optimized position within the compacted data file where the requested data can be accessed. This final position enables efficient retrieval of the desired data without requiring decompression of the entire file. The refined location is then used by other system components to access and decode the requested data. Throughout this process, the system maintains performance metrics and feedback loops that enable continuous improvement of the machine learning model's selection capabilities and refinement technique effectiveness. This adaptive approach ensures that the system becomes increasingly efficient at handling various types of queries and data patterns over time.
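The following Python sketch loosely mirrors steps 5210 through 5250, using a scikit-learn decision tree as a stand-in for the trained model; the feature set, technique labels, training data, and the byte-boundary "refinement" are illustrative assumptions only.

```python
from sklearn.tree import DecisionTreeClassifier

def train_selector(feature_rows, best_technique_labels):
    """Step 5210 stand-in: fit a classifier mapping query/data features
    (e.g., entropy, mean codeword length, query length) to the refinement
    technique that historically performed best."""
    clf = DecisionTreeClassifier(max_depth=4)
    clf.fit(feature_rows, best_technique_labels)
    return clf

def refine_location(clf, features, initial_bit):
    """Steps 5240-5250 stand-in: pick a technique for this query, then apply
    a placeholder refinement (here, simply snapping to a byte boundary)."""
    technique = clf.predict([features])[0]
    refined_bit = (initial_bit // 8) * 8     # placeholder refinement rule
    return technique, refined_bit

# Hypothetical training rows: [entropy, mean codeword length, query length]
clf = train_selector([[7.9, 9.1, 12], [3.2, 4.0, 5], [6.5, 7.7, 40]],
                     ["entropy", "statistical", "frequency"])
print(refine_location(clf, [7.5, 8.8, 20], initial_bit=1_234_567))
```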
Since the library consists of re-usable building sourceblocks, and the actual data is represented by reference codes to the library, the total storage space of a single set of data would be much smaller than conventional methods, wherein the data is stored in its entirety. The more data sets that are stored, the larger the library becomes, and the more data can be stored in reference code form.
As an analogy, imagine each data set as a collection of printed books that are only occasionally accessed. The amount of physical shelf space required to store many collections would be quite large, and is analogous to conventional methods of storing every single bit of data in every data set. Consider, however, storing all common elements within and across books in a single library, and storing the books as reference codes to those common elements in that library. As a single book is added to the library, it will contain many repetitions of words and phrases. Instead of storing the whole words and phrases, they are added to the library, given a reference code, and stored as reference codes. At this scale, some space savings may be achieved, but the reference codes will be on the order of the same size as the words themselves.
As more books are added to the library, larger phrases, quotations, and other word patterns will become common among the books. The larger the word patterns, the smaller the reference codes will be in relation to them, as not all possible word patterns will be used. As entire collections of books are added to the library, sentences, paragraphs, pages, or even whole books will become repetitive. There may be many duplicates of books within a collection and across multiple collections, many references and quotations from one book to another, and much common phraseology within books on particular subjects. If each unique page of a book is stored only once in a common library and given a reference code, then a book of 1,000 pages or more could be stored on a few printed pages as a string of codes referencing the proper full-sized pages in the common library. The physical space taken up by the books would be dramatically reduced.
The more collections that are added, the greater the likelihood that phrases, paragraphs, pages, or entire books will already be in the library, and the more information in each collection of books can be stored in reference form. Accessing entire collections of books is then limited not by physical shelf space, but by the ability to reprint and recycle the books as needed for use.
The projected increase in storage capacity using the method herein described is primarily dependent on two factors: 1) the ratio of the number of bits in a block to the number of bits in the reference code, and 2) the amount of repetition in data being stored by the system.
With respect to the first factor, the number of bits used in the reference codes to the sourceblocks must be smaller than the number of bits in the sourceblocks themselves in order for any additional data storage capacity to be obtained. As a simple example, 16-bit sourceblocks would require 2^16, or 65,536, unique reference codes to represent all possible patterns of bits. If all possible 65,536 block patterns are utilized, then the reference code itself would also need to contain sixteen bits in order to refer to all possible 65,536 block patterns. In such a case, there would be no storage savings. However, if only 16 of those block patterns are utilized, the reference code can be reduced to 4 bits in size, representing an effective compression of 4 times (16 bits/4 bits=4) versus conventional storage. Using a typical block size of 512 bytes, or 4,096 bits, the number of possible block patterns is 2^4,096, which for all practical purposes is unlimited. A typical hard drive contains one terabyte (TB) of physical storage capacity, which represents 1,953,125,000, or roughly 2^31, 512-byte blocks. Assuming that 1 TB of unique 512-byte sourceblocks were contained in the library, and that the reference code would thus need to be 31 bits long, the effective compression ratio for stored data would be on the order of 132 times (4,096/31≈132) that of conventional storage.
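The arithmetic above can be reproduced with a few lines of Python; the figures match the 16-bit and 512-byte examples in this paragraph, and the helper name is illustrative.

```python
import math

def effective_ratio(block_bits, distinct_blocks):
    """Bits per sourceblock divided by the reference-code length needed to
    address every distinct sourceblock in the library."""
    ref_bits = math.ceil(math.log2(distinct_blocks))
    return block_bits / ref_bits

print(effective_ratio(16, 16))                       # 16-bit blocks, 16 patterns -> 4.0
print(round(effective_ratio(4096, 1_953_125_000)))   # 512-byte blocks, ~2^31 blocks -> ~132
```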
With respect to the second factor, in most cases it could be assumed that there would be sufficient repetition within a data set such that, when the data set is broken down into sourceblocks, its size within the library would be smaller than the original data. However, it is conceivable that the initial copy of a data set could require somewhat more storage space than the data stored in a conventional manner, if all or nearly all sourceblocks in that set were unique. For example, assuming that the reference codes are 1/10th the size of a full-sized copy, the first copy stored as sourceblocks in the library would need to be 1.1 megabytes (MB): 1 MB for the complete set of full-sized sourceblocks in the library and 0.1 MB for the reference codes. However, since the sourceblocks stored in the library are universal, the more duplicate copies of something you save, the greater the efficiency versus conventional storage methods. Conventionally, storing 10 copies of the same data requires 10 times the storage space of a single copy. For example, ten copies of a 1 MB file would take up 10 MB of storage space. However, using the method described herein, only a single full-sized copy is stored, and subsequent copies are stored as reference codes. Each additional copy takes up only a fraction of the space of the full-sized copy. For example, again assuming that the reference codes are 1/10th the size of the full-size copy, ten copies of a 1 MB file would take up only 2 MB of space (1 MB for the full-sized copy, and 0.1 MB each for ten sets of reference codes). The larger the library, the more likely it is that part or all of incoming data will duplicate sourceblocks already existing in the library.
The size of the library could be reduced in a manner similar to storage of data. Where sourceblocks differ from each other only by a certain number of bits, instead of storing a new sourceblock that is very similar to one already existing in the library, the new sourceblock could be represented as a reference code to the existing sourceblock, plus information about which bits in the new block differ from the existing block. For example, in the case where 512 byte sourceblocks are being used, if the system receives a new sourceblock that differs by only one bit from a sourceblock already existing in the library, instead of storing a new 512 byte sourceblock, the new sourceblock could be stored as a reference code to the existing sourceblock, plus a reference to the bit that differs. Storing the new sourceblock as a reference code plus changes would require only a few bytes of physical storage space versus the 512 bytes that a full sourceblock would require. The algorithm could be optimized to store new sourceblocks in this reference code plus changes form unless the changes portion is large enough that it is more efficient to store a new, full sourceblock.
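A minimal sketch of this reference-code-plus-changes idea follows; the threshold, the tuple layout, and the XOR-based bit comparison are illustrative assumptions rather than the optimized algorithm described above.

```python
def diff_bits(existing: bytes, new: bytes):
    """Return the positions of the bits in which two equal-length sourceblocks differ."""
    positions = []
    for i, (a, b) in enumerate(zip(existing, new)):
        x = a ^ b
        for bit in range(8):
            if x >> bit & 1:
                positions.append(i * 8 + bit)
    return positions

def store_as_delta(existing, new, ref_code, max_diffs=16):
    """Store `new` as (reference code, differing-bit positions) when it is close
    enough to `existing`; otherwise signal that a full sourceblock is cheaper."""
    diffs = diff_bits(existing, new)
    if len(diffs) <= max_diffs:
        return ("delta", ref_code, diffs)
    return ("full", new)

block = bytes(512)                                   # an existing 512-byte sourceblock
near_copy = bytearray(block); near_copy[100] ^= 0x01 # flip a single bit
print(store_as_delta(block, bytes(near_copy), ref_code=0x1A2B))
```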
It will be understood by one skilled in the art that the efficiency of transfer and synchronization of data would be increased to the same extent as storage efficiency. By transferring or synchronizing reference codes instead of full-sized data, the bandwidth requirements for both types of operations are dramatically reduced.
In addition, the method described herein is inherently a form of encryption. When the data is converted from its full form to reference codes, none of the original data is contained in the reference codes. Without access to the library of sourceblocks, it would be impossible to re-construct any portion of the data from the reference codes. This inherent property of the method described herein could obviate the need for traditional encryption algorithms, thereby offsetting most or all of the computational cost of conversion of data back and forth to reference codes. In theory, the method described herein should not utilize any additional computing power beyond traditional storage using encryption algorithms. Alternatively, the method described herein could be used in addition to other encryption algorithms to increase data security even further.
In other embodiments, additional security features could be added, such as: creating a proprietary library of sourceblocks for proprietary networks, physical separation of the reference codes from the library of sourceblocks, storage of the library of sourceblocks on a removable device to enable easy physical separation of the library and reference codes from any network, and incorporation of proprietary sequences of how sourceblocks are read and the data reassembled.
It will be recognized by a person skilled in the art that the methods described herein can be applied to data in any form. For example, the method described herein could be used to store genetic data, which has four data units: C, G, A, and T. Those four data units can be represented as 2 bit sequences: 00, 01, 10, and 11, which can be processed and stored using the method described herein.
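For instance, a two-bit mapping of the four bases could be implemented as follows; the particular assignment of bit pairs mirrors the order listed above and is otherwise an arbitrary, illustrative choice.

```python
BASE_TO_BITS = {"C": "00", "G": "01", "A": "10", "T": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def encode_bases(seq: str) -> str:
    """Pack a genetic sequence into a 2-bit-per-base bit string."""
    return "".join(BASE_TO_BITS[b] for b in seq)

def decode_bases(bits: str) -> str:
    """Recover the genetic sequence from its 2-bit encoding."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

bits = encode_bases("GATTACA")
print(bits)                  # 01101111100010
print(decode_bases(bits))    # GATTACA
```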
It will be recognized by a person skilled in the art that certain embodiments of the methods described herein may have uses other than data storage. For example, because the data is stored in reference code form, it cannot be reconstructed without the availability of the library of sourceblocks. This is effectively a form of encryption, which could be used for cyber security purposes. As another example, an embodiment of the method described herein could be used to store backup copies of data, provide for redundancy in the event of server failure, or provide additional security against cyberattacks by distributing multiple partial copies of the library among computers at various locations, ensuring that at least two copies of each sourceblock exist in different locations within the network.
A data search query specifies a search term to read from the original data set. In this example, the selected search term corresponds to the first four lines of the data as received 3205. The system estimates a bit location N′ in the converted data set that corresponds to byte N in the original data set. The estimated location, bit N′, may not be aligned with a codeword boundary 3220. In this example, the first codeword that should be accessed and returned is supposed to be 01, but the estimated location N′ puts the pointer at the last bit in the codeword 3220. When N′ is not aligned with a codeword boundary, the system will start decoding in the middle of a codeword, resulting in returned data 3225 that, when decoded, leads to incorrect output 3230. Due to the boundary misalignment, the random access data returned is 10 01 11 01 3225, when the correct random access data returned should have been 01 00 11 10. The user that submits the data search query will receive the incorrect output and recognize it as garbage output. The user can manually bit scroll 3235 forward and backward from N′ until a codeword boundary is found and the expected output 3240 corresponding to the search term is returned.
In another embodiment, mile markers are stored in a file accompanying the compacted data set, with a list of exact locations N′ in the compacted data set that correspond to N=100, 200, 1000, etc. The mile marker file enables more refined estimates of N′ with less seeking necessary, as the user may now seek forward and backward in the compacted data set in codeword increments and boundary alignment is automatic. These mile markers (i.e., locations) might denote which bit corresponds to the 1000th byte from the unencoded data, which bit corresponds to the 2000th byte, etc. The use of mile markers prevents the possibility of starting the data read process in the middle of a codeword, as any search may begin at the nearest mile marker bit associated with byte N.
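A minimal sketch of a mile-marker lookup is shown below; the marker spacing and bit offsets are hypothetical values used only to illustrate snapping a requested byte position to the nearest preceding, codeword-aligned marker.

```python
import bisect

def nearest_marker(mile_markers, target_byte):
    """mile_markers: sorted list of (original_byte_offset, compacted_bit_offset)
    pairs. Returns the marker at or just before `target_byte`, giving a
    codeword-aligned bit position from which to start seeking."""
    byte_offsets = [m[0] for m in mile_markers]
    i = bisect.bisect_right(byte_offsets, target_byte) - 1
    return mile_markers[max(i, 0)]

markers = [(0, 0), (1000, 7312), (2000, 14650), (3000, 21988)]   # hypothetical
print(nearest_marker(markers, 2450))   # (2000, 14650): start seeking at bit 14650
```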
The compacted data file may then be searched for occurrences of the assigned codeword(s). For example, the “Ato” 3302 and “mBe” 3303 sourceblocks may each be encoded with codewords C1 3305 and C2 3306, respectively. These sourceblocks 3302, 3303 were selected because they both contain only data that is part of the search string 3301 and do not contain non-relevant data (e.g., “x”, “xy”, “xyz” from the preceding paragraph). The assigned codewords may be concatenated to form a codeword double (pair) C1C2 3307, and then the search engine 2940 may perform a search for codeword pair C1C2 3307 in the compacted data. This process is done for each of the possible encodings 3300, 3310, 3320 of the search string 3301.
From encoding two 3310, the sourceblocks containing “tom” 3311 and “Bea” 3312 are assigned codewords such as C3 3314 and C4 3315. These codewords may be concatenated to form a codeword pair C3C4 3316, and then the search engine 2940 may perform a search for the codeword pair C3C4 3316 in the compacted data file. Likewise, from encoding three 3320, the sourceblocks containing “omB” 3321 and “eam” 3322 are assigned codewords such as C5 3324 and C6 3325. These codewords may be concatenated to form a codeword pair C5C6 3326, and then the search engine 2940 may perform a search for the codeword pair C5C6 3326 in the compacted data file. The codeword pairs C1C2 3307, C3C4 3316, and C5C6 3326 form three new search strings, and the data search engine 2940 may scan through the compacted data file looking for all three of them. If any of them is found, then the codewords in the compacted data file to the left and right of the found codeword pair may be decoded to identify whether the correct letter (byte) precedes or follows the codeword pair. In this example, two sourceblocks were used to create a codeword pair; however, it should be appreciated that the number of sourceblocks concatenated is dependent upon the length of the search term and the sourceblock length. There may be codeword triples, codeword quadruples, etc., as any codeword n-tuple may be possible due to the above-mentioned dependencies.
For example, if the search results return “tomBea”, that means an occurrence of codeword pair C3C4 3316 was found. The search engine 2940 may decode one letter to the left side and check whether it is “A”, and one letter to the right to check whether it is “m”. If those are the letters found, the search string has been located; if not, it is not the correct string and the scan continues through the compacted data file until another occurrence of any one of the codeword pairs 3307, 3316, or 3326 is found. The data search engine 2940 performs this process automatically until the search string has been located or the entire compacted data file has been scanned and searched.
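The Python sketch below illustrates this alignment-and-concatenation search under an assumed three-character sourceblock length; the codebook, bit patterns, and compacted data are invented for the example, and the neighbour-decoding check is left as described above.

```python
def pair_search_strings(search_term, codebook, block_len=3):
    """For each of the `block_len` possible alignments, keep only the
    sourceblocks that lie entirely inside the search term, look up their
    codewords, and concatenate them into one candidate bit pattern."""
    candidates = []
    for offset in range(block_len):
        blocks = [search_term[i:i + block_len]
                  for i in range(offset, len(search_term) - block_len + 1, block_len)]
        blocks = [b for b in blocks if len(b) == block_len]
        if blocks and all(b in codebook for b in blocks):
            candidates.append("".join(codebook[b] for b in blocks))
        # partial blocks at the term's edges are handled by decoding neighbours later
    return candidates

# Hypothetical codebook over 3-character sourceblocks
codebook = {"Ato": "0101", "mBe": "0110", "tom": "0111", "Bea": "1000",
            "omB": "1001", "eam": "1010"}
compacted = "1110" + "0111" + "1000" + "0011"   # ...tom Bea... somewhere in the file
for pattern in pair_search_strings("AtomBeam", codebook):
    pos = compacted.find(pattern)
    if pos != -1:
        print("candidate hit for", pattern, "at bit", pos)  # still needs neighbour check
```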
The exemplary computing environment described herein comprises a computing device 10 (further comprising a system bus 11, one or more processors 20, a system memory 30, one or more interfaces 40, one or more non-volatile data storage devices 50), external peripherals and accessories 60, external communication devices 70, remote computing devices 80, and cloud-based services 90.
System bus 11 couples the various system components, coordinating operation of and data transmission between those various system components. System bus 11 represents one or more of any type or combination of types of wired or wireless bus structures including, but not limited to, memory busses or memory controllers, point-to-point connections, switching fabrics, peripheral busses, accelerated graphics ports, and local busses using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) busses, Micro Channel Architecture (MCA) busses, Enhanced ISA (EISA) busses, Video Electronics Standards Association (VESA) local busses, Peripheral Component Interconnect (PCI) busses, also known as Mezzanine busses, or any selection of, or combination of, such busses. Depending on the specific physical implementation, one or more of the processors 20, system memory 30 and other components of the computing device 10 can be physically co-located or integrated into a single physical component, such as on a single chip. In such a case, some or all of system bus 11 can be electrical pathways within a single chip structure.
Computing device may further comprise externally-accessible data input and storage devices 12 such as compact disc read-only memory (CD-ROM) drives, digital versatile discs (DVD), or other optical disc storage for reading and/or writing optical discs 62; magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices; or any other medium which can be used to store the desired content and which can be accessed by the computing device 10. Computing device may further comprise externally-accessible data ports or connections 12 such as serial ports, parallel ports, universal serial bus (USB) ports, and infrared ports and/or transmitter/receivers. Computing device may further comprise hardware for wireless communication with external devices such as IEEE 1394 (“Firewire”) interfaces, IEEE 802.11 wireless interfaces, BLUETOOTH® wireless interfaces, and so forth. Such ports and interfaces may be used to connect any number of external peripherals and accessories 60 such as visual displays, monitors, and touch-sensitive screens 61, USB solid state memory data storage drives (commonly known as “flash drives” or “thumb drives”) 63, printers 64, pointers and manipulators such as mice 65, keyboards 66, and other devices 67 such as joysticks and gaming pads, touchpads, additional displays and monitors, and external hard drives (whether solid state or disc-based), microphones, speakers, cameras, and optical scanners.
Processors 20 are logic circuitry capable of receiving programming instructions and processing (or executing) those instructions to perform computer operations such as retrieving data, storing data, and performing mathematical calculations. Processors 20 are not limited by the materials from which they are formed or the processing mechanisms employed therein, but are typically comprised of semiconductor materials into which many transistors are formed together into logic gates on a chip (i.e., an integrated circuit or IC). The term processor includes any device capable of receiving and processing instructions including, but not limited to, processors operating on the basis of quantum computing, optical computing, mechanical computing (e.g., using nanotechnology entities to transfer data), and so forth. Depending on configuration, computing device 10 may comprise more than one processor. For example, computing device 10 may comprise one or more central processing units (CPUs) 21, each of which itself has multiple processors or multiple processing cores, each capable of independently or semi-independently processing programming instructions based on technologies like complex instruction set computer (CISC) or reduced instruction set computer (RISC). Further, computing device 10 may comprise one or more specialized processors such as a graphics processing unit (GPU) 22 configured to accelerate processing of computer graphics and images via a large array of specialized processing cores arranged in parallel. Further, computing device 10 may comprise one or more specialized processors such as Intelligent Processing Units, field-programmable gate arrays, or application-specific integrated circuits for specific tasks or types of tasks. The term processor may further include: neural processing units (NPUs) or neural computing units optimized for machine learning and artificial intelligence workloads using specialized architectures and data paths; tensor processing units (TPUs) designed to efficiently perform matrix multiplication and convolution operations used heavily in neural networks and deep learning applications; application-specific integrated circuits (ASICs) implementing custom logic for domain-specific tasks; application-specific instruction set processors (ASIPs) with instruction sets tailored for particular applications; field-programmable gate arrays (FPGAs) providing reconfigurable logic fabric that can be customized for specific processing tasks; processors operating on emerging computing paradigms such as quantum computing, optical computing, mechanical computing (e.g., using nanotechnology entities to transfer data), and so forth. Depending on configuration, computing device 10 may comprise one or more of any of the above types of processors in order to efficiently handle a variety of general purpose and specialized computing tasks. The specific processor configuration may be selected based on performance, power, cost, or other design constraints relevant to the intended application of computing device 10.
System memory 30 is processor-accessible data storage in the form of volatile and/or nonvolatile memory. System memory 30 may be either or both of two types: non-volatile memory and volatile memory. Non-volatile memory 30a is not erased when power to the memory is removed, and includes memory types such as read only memory (ROM), electronically-erasable programmable memory (EEPROM), and rewritable solid state memory (commonly known as “flash memory”). Non-volatile memory 30a is typically used for long-term storage of a basic input/output system (BIOS) 31, containing the basic instructions, typically loaded during computer startup, for transfer of information between components within computing device, or a unified extensible firmware interface (UEFI), which is a modern replacement for BIOS that supports larger hard drives, faster boot times, more security features, and provides native support for graphics and mouse cursors. Non-volatile memory 30a may also be used to store firmware comprising a complete operating system 35 and applications 36 for operating computer-controlled devices. The firmware approach is often used for purpose-specific computer-controlled devices such as appliances and Internet-of-Things (IoT) devices where processing power and data storage space is limited. Volatile memory 30b is erased when power to the memory is removed and is typically used for short-term storage of data for processing. Volatile memory 30b includes memory types such as random-access memory (RAM), and is normally the primary operating memory into which the operating system 35, applications 36, program modules 37, and application data 38 are loaded for execution by processors 20. Volatile memory 30b is generally faster than non-volatile memory 30a due to its electrical characteristics and is directly accessible to processors 20 for processing of instructions and data storage and retrieval. Volatile memory 30b may comprise one or more smaller cache memories which operate at a higher clock speed and are typically placed on the same IC as the processors to improve performance.
There are several types of computer memory, each with its own characteristics and use cases. System memory 30 may be configured in one or more of the several types described herein, including high bandwidth memory (HBM) and advanced packaging technologies like chip-on-wafer-on-substrate (CoWoS). Static random access memory (SRAM) provides fast, low-latency memory used for cache memory in processors, but is more expensive and consumes more power compared to dynamic random access memory (DRAM). SRAM retains data as long as power is supplied. DRAM is the main memory in most computer systems and is slower than SRAM but cheaper and more dense. DRAM requires periodic refresh to retain data. NAND flash is a type of non-volatile memory used for storage in solid state drives (SSDs) and mobile devices and provides high density and lower cost per bit compared to DRAM with the trade-off of slower write speeds and limited write endurance. HBM is an emerging memory technology that provides high bandwidth and low power consumption which stacks multiple DRAM dies vertically, connected by through-silicon vias (TSVs). HBM offers much higher bandwidth (up to 1 TB/s) compared to traditional DRAM and may be used in high-performance graphics cards, AI accelerators, and edge computing devices. Advanced packaging and CoWoS are technologies that enable the integration of multiple chips or dies into a single package. CoWoS is a 2.5D packaging technology that interconnects multiple dies side-by-side on a silicon interposer and allows for higher bandwidth, lower latency, and reduced power consumption compared to traditional PCB-based packaging. This technology enables the integration of heterogeneous dies (e.g., CPU, GPU, HBM) in a single package and may be used in high-performance computing, AI accelerators, and edge computing devices.
Interfaces 40 may include, but are not limited to, storage media interfaces 41, network interfaces 42, display interfaces 43, and input/output interfaces 44. Storage media interface 41 provides the necessary hardware interface for loading data from non-volatile data storage devices 50 into system memory 30 and storing data from system memory 30 to non-volatile data storage device 50. Network interface 42 provides the necessary hardware interface for computing device 10 to communicate with remote computing devices 80 and cloud-based services 90 via one or more external communication devices 70. Display interface 43 allows for connection of displays 61, monitors, touchscreens, and other visual input/output devices. Display interface 43 may include a graphics card for processing graphics-intensive calculations and for handling demanding display requirements. Typically, a graphics card includes a graphics processing unit (GPU) and video RAM (VRAM) to accelerate display of graphics. In some high-performance computing systems, multiple GPUs may be connected using NVLink bridges, which provide high-bandwidth, low-latency interconnects between GPUs. NVLink bridges enable faster data transfer between GPUs, allowing for more efficient parallel processing and improved performance in applications such as machine learning, scientific simulations, and graphics rendering. One or more input/output (I/O) interfaces 44 provide the necessary support for communications between computing device 10 and any external peripherals and accessories 60. For wireless communications, the necessary radio-frequency hardware and firmware may be connected to I/O interface 44 or may be integrated into I/O interface 44. Network interface 42 may support various communication standards and protocols, such as Ethernet and Small Form-Factor Pluggable (SFP). Ethernet is a widely used wired networking technology that enables local area network (LAN) communication. Ethernet interfaces typically use RJ45 connectors and support data rates ranging from 10 Mbps to 100 Gbps, with common speeds being 100 Mbps, 1 Gbps, 10 Gbps, 25 Gbps, 40 Gbps, and 100 Gbps. Ethernet is known for its reliability, low latency, and cost-effectiveness, making it a popular choice for home, office, and data center networks. SFP is a compact, hot-pluggable transceiver used for both telecommunication and data communications applications. SFP interfaces provide a modular and flexible solution for connecting network devices, such as switches and routers, to fiber optic or copper networking cables. SFP transceivers support various data rates, ranging from 100 Mbps to 100 Gbps, and can be easily replaced or upgraded without the need to replace the entire network interface card. This modularity allows for network scalability and adaptability to different network requirements and fiber types, such as single-mode or multi-mode fiber.
Non-volatile data storage devices 50 are typically used for long-term storage of data. Data on non-volatile data storage devices 50 is not erased when power to the non-volatile data storage devices 50 is removed. Non-volatile data storage devices 50 may be implemented using any technology for non-volatile storage of content including, but not limited to, CD-ROM drives, digital versatile discs (DVD), or other optical disc storage; magnetic cassettes, magnetic tape, magnetic disc storage, or other magnetic storage devices; solid state memory technologies such as EEPROM or flash memory; or other memory technology or any other medium which can be used to store data without requiring power to retain the data after it is written. Non-volatile data storage devices 50 may be non-removable from computing device 10 as in the case of internal hard drives, removable from computing device 10 as in the case of external USB hard drives, or a combination thereof, but computing device will typically comprise one or more internal, non-removable hard drives using either magnetic disc or solid state memory technology. Non-volatile data storage devices 50 may be implemented using various technologies, including hard disk drives (HDDs) and solid-state drives (SSDs). HDDs use spinning magnetic platters and read/write heads to store and retrieve data, while SSDs use NAND flash memory. SSDs offer faster read/write speeds, lower latency, and better durability due to the lack of moving parts, while HDDs typically provide higher storage capacities and lower cost per gigabyte. NAND flash memory comes in different types, such as Single-Level Cell (SLC), Multi-Level Cell (MLC), Triple-Level Cell (TLC), and Quad-Level Cell (QLC), each with trade-offs between performance, endurance, and cost. Storage devices connect to the computing device 10 through various interfaces, such as SATA, NVMe, and PCIe. SATA is the traditional interface for HDDs and SATA SSDs, while NVMe (Non-Volatile Memory Express) is a newer, high-performance protocol designed for SSDs connected via PCIe. PCIe SSDs offer the highest performance due to the direct connection to the PCIe bus, bypassing the limitations of the SATA interface. Other storage form factors include M.2 SSDs, which are compact storage devices that connect directly to the motherboard using the M.2 slot, supporting both SATA and NVMe interfaces. Additionally, technologies like Intel Optane memory combine 3D XPoint technology with NAND flash to provide high-performance storage and caching solutions.
Non-volatile data storage devices 50 may store any type of data including, but not limited to, an operating system 51 for providing low-level and mid-level functionality of computing device 10, applications 52 for providing high-level functionality of computing device 10, program modules 53 such as containerized programs or applications, or other modular content or modular programming, application data 54, and databases 55 such as relational databases, non-relational databases, object oriented databases, NoSQL databases, vector databases, knowledge graph databases, key-value databases, document oriented data stores, and graph databases.
Applications (also known as computer software or software applications) are sets of programming instructions designed to perform specific tasks or provide specific functionality on a computer or other computing devices. Applications are typically written in high-level programming languages such as C, C++, Scala, Erlang, GoLang, Java, Rust, and Python, which are then either interpreted at runtime or compiled into low-level, binary, processor-executable instructions operable on processors 20. Applications may be containerized so that they can be run on any computer hardware running any known operating system. Containerization of computer software is a method of packaging and deploying applications along with their operating system dependencies into self-contained, isolated units known as containers. Containers provide a lightweight and consistent runtime environment that allows applications to run reliably across different computing environments, such as development, testing, and production systems, facilitated by container runtimes such as containerd.
The memories and non-volatile data storage devices described herein do not include communication media. Communication media are means of transmission of information such as modulated electromagnetic waves or modulated data signals configured to transmit, not store, information. By way of example, and not limitation, communication media includes wired communications such as sound signals transmitted to a speaker via a speaker wire, and wireless communications such as acoustic waves, radio frequency (RF) transmissions, infrared emissions, and other wireless media.
External communication devices 70 are devices that facilitate communications between computing device and either remote computing devices 80, or cloud-based services 90, or both. External communication devices 70 include, but are not limited to, data modems 71 which facilitate data transmission between computing device and the Internet 75 via a common carrier such as a telephone company or internet service provider (ISP), routers 72 which facilitate data transmission between computing device and other devices, and switches 73 which provide direct data communications between devices on a network or optical transmitters (e.g., lasers). Here, modem 71 is shown connecting computing device 10 to both remote computing devices 80 and cloud-based services 90 via the Internet 75. While modem 71, router 72, and switch 73 are shown here as being connected to network interface 42, many different network configurations using external communication devices 70 are possible. Using external communication devices 70, networks may be configured as local area networks (LANs) for a single location, building, or campus, wide area networks (WANs) comprising data networks that extend over a larger geographical area, and virtual private networks (VPNs) which can be of any size but connect computers via encrypted communications over public networks such as the Internet 75. As just one exemplary network configuration, network interface 42 may be connected to switch 73 which is connected to router 72 which is connected to modem 71 which provides access for computing device 10 to the Internet 75. Further, any combination of wired 77 or wireless 76 communications between and among computing device 10, external communication devices 70, remote computing devices 80, and cloud-based services 90 may be used. Remote computing devices 80, for example, may communicate with computing device through a variety of communication channels 74 such as through switch 73 via a wired 77 connection, through router 72 via a wireless connection 76, or through modem 71 via the Internet 75. Furthermore, while not shown here, other hardware that is specifically designed for servers or networking functions may be employed. For example, secure socket layer (SSL) acceleration cards can be used to offload SSL encryption computations, and transmission control protocol/internet protocol (TCP/IP) offload hardware and/or packet classifiers on network interfaces 42 may be installed and used at server devices or intermediate networking equipment (e.g., for deep packet inspection).
In a networked environment, certain components of computing device 10 may be fully or partially implemented on remote computing devices 80 or cloud-based services 90. Data stored in non-volatile data storage device 50 may be received from, shared with, duplicated on, or offloaded to a non-volatile data storage device on one or more remote computing devices 80 or in a cloud computing service 92. Processing by processors 20 may be received from, shared with, duplicated on, or offloaded to processors of one or more remote computing devices 80 or to a distributed computing service 93. By way of example, data may reside on a cloud computing service 92, but may be usable or otherwise accessible for use by computing device 10. Also, certain processing subtasks may be sent to a microservice 91 for processing, with the result being transmitted to computing device 10 for incorporation into a larger processing task. Also, while components and processes of the exemplary computing environment are illustrated herein as discrete units (e.g., OS 51 being stored on non-volatile data storage device 50 and loaded into system memory 35 for use), such processes and components may reside or be processed at various times in different components of computing device 10, remote computing devices 80, and/or cloud-based services 90. Infrastructure as Code (IaC) tools like Terraform can be used to manage and provision computing resources across multiple cloud providers or hyperscalers. This allows for workload balancing based on factors such as cost, performance, and availability. For example, Terraform can be used to automatically provision and scale resources on AWS spot instances during periods of high demand, such as for surge rendering tasks, to take advantage of lower costs while maintaining the required performance levels. In the context of rendering, tools like Blender can be used for object rendering of specific elements, such as a car, bike, or house. These elements can be approximated and roughed in using techniques like bounding box approximation or low-poly modeling to reduce the computational resources required for initial rendering passes. The rendered elements can then be integrated into the larger scene or environment as needed, with the option to replace the approximated elements with higher-fidelity models as the rendering process progresses.
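By way of illustration only, the following is a minimal sketch, in Python, of how a processing subtask might be offloaded to a microservice 91 over HTTP and the result incorporated by computing device 10. The service URL, endpoint, and payload fields are hypothetical assumptions for illustration and do not describe a required interface of any embodiment.

    # Illustrative sketch only: offload a subtask to a hypothetical microservice
    # endpoint and incorporate the returned result. The URL and JSON fields are
    # assumptions for illustration, not a defined interface of the system.
    import json
    import urllib.request

    def offload_subtask(payload: dict,
                        url: str = "http://microservice.example.internal:8080/process") -> dict:
        body = json.dumps(payload).encode("utf-8")
        request = urllib.request.Request(
            url,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=30) as response:
            return json.loads(response.read().decode("utf-8"))

    if __name__ == "__main__":
        # The subtask here is a placeholder; a real caller would send whatever
        # unit of work it wishes to delegate and merge the result locally.
        result = offload_subtask({"task": "example-subtask", "data": [1, 2, 3]})
        print("result from microservice:", result)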
In an implementation, the disclosed systems and methods may utilize, at least in part, containerization techniques to execute one or more processes and/or steps disclosed herein. Containerization is a lightweight and efficient virtualization technique that allows applications and their dependencies to be packaged and run in isolated environments called containers. One of the most popular containerization platforms is containerd, which is widely used in software development and deployment. Containerization, particularly with open-source technologies like containerd and container orchestration systems like Kubernetes, is a common approach for deploying and managing applications. Containers are created from images, which are lightweight, standalone, and executable packages that include application code, libraries, dependencies, and runtime. Images are often built from a containerfile or similar, which contains instructions for assembling the image. Containerfiles are configuration files that specify how to build a container image; they include commands for installing dependencies, copying files, setting environment variables, and defining runtime configurations. Systems like Kubernetes natively support containerd as a container runtime. Container images can be stored in repositories, which can be public or private. Organizations often set up private registries for security and version control using tools such as Harbor, JFrog Artifactory and Bintray, GitLab Container Registry, or other container registries. Containers can communicate with each other and the external world through networking. Containerd provides a default network namespace, but can be used with custom network plugins. Containers within the same network can communicate using container names or IP addresses.
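The following is a minimal sketch, in Python, of the build, push, and run lifecycle described above, using the Docker SDK for Python as one illustrative client library; the registry address and image tag are hypothetical, and an environment built directly on containerd or Kubernetes would use different tooling for the same steps.

    # Illustrative sketch of building an image from a containerfile, pushing it
    # to a (hypothetical) private registry, and running a container from it,
    # using the Docker SDK for Python as one possible client library.
    import docker

    def build_push_run(context_dir: str = ".",
                       repo: str = "registry.example.internal/app",
                       version: str = "1.0") -> str:
        tag = f"{repo}:{version}"
        client = docker.from_env()

        # Build an image from the containerfile in the given build context.
        image, build_logs = client.images.build(path=context_dir, tag=tag)

        # Push the image to the registry encoded in its repository name.
        client.images.push(repo, tag=version)

        # Run a container from the image in the background and return its id.
        container = client.containers.run(tag, detach=True)
        return container.id

    if __name__ == "__main__":
        print("started container:", build_push_run())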
Remote computing devices 80 are any computing devices not part of computing device 10. Remote computing devices 80 include, but are not limited to, personal computers, server computers, thin clients, thick clients, personal digital assistants (PDAs), mobile telephones, watches, tablet computers, laptop computers, multiprocessor systems, microprocessor based systems, set-top boxes, programmable consumer electronics, video game machines, game consoles, portable or handheld gaming units, network terminals, desktop personal computers (PCs), minicomputers, mainframe computers, network nodes, virtual reality or augmented reality devices and wearables, and distributed or multi-processing computing environments. While remote computing devices 80 are shown for clarity as being separate from cloud-based services 90, cloud-based services 90 are implemented on collections of networked remote computing devices 80.
Cloud-based services 90 are Internet-accessible services implemented on collections of networked remote computing devices 80. Cloud-based services are typically accessed via application programming interfaces (APIs), which are software interfaces that provide access to computing services within the cloud-based service via API calls, which are pre-defined protocols for requesting a computing service and receiving the results of that computing service. While cloud-based services may comprise any type of computer processing or storage, common categories of cloud-based services 90 include serverless logic apps, microservices 91, cloud computing services 92, and distributed computing services 93.
Microservices 91 are collections of small, loosely coupled, and independently deployable computing services. Each microservice represents a specific computing functionality and runs as a separate process or container. Microservices promote the decomposition of complex applications into smaller, manageable services that can be developed, deployed, and scaled independently. These services communicate with each other through well-defined application programming interfaces (APIs), typically using lightweight protocols like HTTP or gRPC with protocol buffers, or message queues such as Kafka. Microservices 91 can be combined to perform more complex or distributed processing tasks. In an embodiment, Kubernetes clusters with containerized resources are used for operational packaging of the system.
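As a purely illustrative complement to the offloading sketch above, the following Python sketch shows a single-function microservice exposing an HTTP API using only the standard library. The handler name, port, and response fields are assumptions and do not describe a required interface of microservices 91.

    # Illustrative sketch of a small, single-responsibility microservice that
    # accepts a JSON request over HTTP and returns a JSON result. All names,
    # the port, and the response shape are assumptions for illustration.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SubtaskHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            request_body = json.loads(self.rfile.read(length) or b"{}")

            # A real microservice would perform its one well-defined function
            # here; this placeholder simply echoes the request back.
            response_body = json.dumps({"status": "ok", "echo": request_body}).encode("utf-8")

            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(response_body)))
            self.end_headers()
            self.wfile.write(response_body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), SubtaskHandler).serve_forever()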
Cloud computing services 92 are the delivery of computing resources and services over the Internet 75 from a remote location. Cloud computing services 92 provide additional computer hardware and storage on an as-needed or subscription basis. Cloud computing services 92 can provide large amounts of scalable data storage, access to sophisticated software and powerful server-based processing, or entire computing infrastructures and platforms. For example, cloud computing services can provide virtualized computing resources such as virtual machines, storage, and networks; platforms for developing, running, and managing applications without the complexity of infrastructure management; and complete software applications offered over public or private networks or the Internet on a subscription basis, an alternative licensing basis, a consumption or ad-hoc marketplace basis, or a combination thereof.
Distributed computing services 93 provide large-scale processing using multiple interconnected computers or nodes to solve computational problems or perform tasks collectively. In distributed computing, the processing and storage capabilities of multiple machines are leveraged to work together as a unified system. Distributed computing services are designed to address problems that cannot be efficiently solved by a single computer, that require large-scale computational power, or that must accommodate highly dynamic variance or uncertainty in compute, transport, or storage resources over time, requiring the scaling up and down of constituent system resources. These services enable parallel processing, fault tolerance, and scalability by distributing tasks across multiple nodes.
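The following is a minimal single-machine analogue, in Python, of the fan-out/fan-in pattern used when distributing independent tasks; a real distributed computing service 93 would schedule such tasks across multiple networked nodes with fault tolerance, but the structure is similar in spirit. The task function and inputs are illustrative assumptions.

    # Illustrative single-machine analogue of distributing independent tasks
    # across workers and collecting their results. A true distributed service
    # would dispatch these tasks to multiple networked nodes instead of local
    # processes, but the fan-out/fan-in structure is similar.
    from concurrent.futures import ProcessPoolExecutor

    def process_chunk(chunk: list) -> int:
        # Placeholder unit of work; a real task might, for example, scan a
        # portion of a compacted file for codewords matching a query.
        return sum(chunk)

    if __name__ == "__main__":
        chunks = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
        with ProcessPoolExecutor(max_workers=3) as pool:
            partial_results = list(pool.map(process_chunk, chunks))
        print("combined result:", sum(partial_results))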
Although described above as a physical device, computing device 10 can be a virtual computing device, in which case the functionality of the physical components herein described, such as processors 20, system memory 30, network interfaces 40, NVLink or other GPU-to-GPU high bandwidth communications links and other like components can be provided by computer-executable instructions. Such computer-executable instructions can execute on a single physical computing device, or can be distributed across multiple physical computing devices, including being distributed across multiple physical computing devices in a dynamic manner such that the specific, physical computing devices hosting such computer-executable instructions can dynamically change over time depending upon need and availability. In the situation where computing device 10 is a virtualized device, the underlying physical computing devices hosting such a virtualized computing device can, themselves, comprise physical components analogous to those described above, and operating in a like manner. Furthermore, virtual computing devices can be utilized in multiple layers with one virtual computing device executing within the construct of another virtual computing device. Thus, computing device 10 may be either a physical computing device or a virtualized computing device within which computer-executable instructions can be executed in a manner consistent with their execution by a physical computing device. Similarly, terms referring to physical components of the computing device, as utilized herein, mean either those physical components or virtualizations thereof performing the same or equivalent functions.
The skilled person will be aware of a range of possible modifications of the various aspects described above. Accordingly, the present invention is defined by the claims and their equivalents.
Number | Date | Country
---|---|---
63140111 | Jan 2021 | US
63027166 | May 2020 | US
62578824 | Oct 2017 | US
62926723 | Oct 2019 | US
Relationship | Number | Date | Country
---|---|---|---
Parent | 17734052 | Apr 2022 | US
Child | 18078909 | | US
Parent | 17180439 | Feb 2021 | US
Child | 17734052 | | US
Parent | 16455655 | Jun 2019 | US
Child | 16716098 | | US
Relationship | Number | Date | Country
---|---|---|---
Parent | 18822208 | Sep 2024 | US
Child | 19023275 | | US
Parent | 18412439 | Jan 2024 | US
Child | 18822208 | | US
Parent | 18078909 | Dec 2022 | US
Child | 18412439 | | US
Parent | 16923039 | Jul 2020 | US
Child | 17180439 | | US
Parent | 16716098 | Dec 2019 | US
Child | 16923039 | | US
Parent | 16200466 | Nov 2018 | US
Child | 16455655 | | US
Parent | 15975741 | May 2018 | US
Child | 16200466 | | US