Graph neural networks (GNNs) are deep learning models designed specifically for graph data. GNNs typically rely on node features as the input node representation to the first layer. When applying GNNs to graphs without node features, it is possible to extract simple graph-based node features (e.g., node degrees) or to learn the input node representation (e.g., the embeddings) while training the network. Training input node embeddings jointly with a GNN for downstream models often leads to better performance, but the number of parameters associated with the embeddings grows linearly with the number of nodes. It is therefore impractical to train the input node embeddings for large-scale graph data with a GNN in the memory of a graphics processing unit (GPU), as the memory cost for the embeddings alone can reach 238 gigabytes. An efficient node embedding compression method that allows the GNN to be trained on a GPU is desired.
Embodiments of the disclosure address this problem and other problems individually and collectively.
One embodiment of the invention includes a method. The method comprises: generating, by a server computer, a binary compositional code matrix from an input matrix derived from input data used to make a prediction; converting, by the server computer, the binary compositional code matrix into an integer code matrix; inputting, by the server computer, each row of the integer code matrix into a decoder comprising a plurality of codebooks to output a summed vector for each row; and inputting, by the server computer, derivatives of the summed vectors into a downstream machine learning model to output the prediction.
In some embodiments, the derivatives of the summed vectors can be embeddings corresponding to the summed vectors of the rows, where the embeddings are produced by a multilayer perceptron. In some embodiments, the summed vectors for the rows can be aggregated to form an intermediate matrix. The rows of the intermediate matrix can be processed by the multilayer perceptron to produce a processed intermediate matrix, which may be an embedding matrix. The rows of the processed intermediate matrix can be input into the downstream machine learning model to output the prediction.
Another embodiment of the invention includes a computer comprising a processor and a non-transitory computer readable medium comprising instructions, executable by the processor, to perform operations including: generating, by a server computer, a binary compositional code matrix from an input matrix derived from input data used to make a prediction; converting, by the server computer, the binary compositional code matrix into an integer code matrix; inputting, by the server computer, each row of the integer code matrix into a decoder comprising a plurality of codebooks to output a summed vector for each row; and inputting, by the server computer, derivatives of the summed vectors into a downstream machine learning model to output the prediction.
A better understanding of the nature and advantages of embodiments of the invention may be gained with reference to the following detailed description and accompanying drawings.
The data computer 110 may be operated by a data aggregator, such as a processing network, web host, bank, traffic monitor, etc. The data computer 110 can aggregate data, such as traffic data (e.g., network traffic, car traffic), interaction data (e.g., transaction data, access request data), word or speech data, or some other data that can be represented by a graph. For example, traffic data can be represented on a two-dimensional graph by location (e.g., for car traffic, each car can be placed on a map), or by a combination of locations (e.g., for website traffic, the website requestor's IP address and the website server's IP address). In some embodiments, the data computer 110 may compile data and transmit the data to the server computer 100. The data computer 110 may request the server computer 100 to analyze the data and generate a prediction using the compiled data. The data computer 110 may then receive the prediction and instruct the machine 120 accordingly. For example, the data computer 110 can monitor car traffic and transmit traffic data to the server computer 100. The data computer 110 can request that the server computer 100 analyze traffic patterns in the traffic data using some downstream model (e.g., a neural network). The server computer 100 may analyze the traffic data and generate a prediction, such as a predicted level of traffic during a specific time period. The server computer 100 can then transmit the prediction to the data computer 110, which may then instruct the machine 120 to actuate (e.g., instructing a traffic light controller to change lights) to improve traffic flow.
The components in the system of FIG. 1 can communicate with one another via any suitable communication channel or network.
The memory 104 may be coupled to the processor 102 internally or externally (e.g., via cloud-based data storage), and may comprise any combination of volatile and/or non-volatile memory such as RAM, DRAM, ROM, flash, or any other suitable memory device.
The network interface 106 may include an interface that can allow the server computer 100 to communicate with external computers and/or devices. The network interface 106 may enable the server computer 100 to communicate data to and from another device such as the data computer 110. Some examples of the network interface 106 may include a modem, a physical network interface (such as an Ethernet card or other Network Interface Card (NIC)), a virtual network interface, a communications port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, or the like. The wireless protocols enabled by the network interface 106 may include Wi-Fi. Data transferred via the network interface 106 may be in the form of signals which may be electrical, electromagnetic, optical, or any other signal capable of being received by the external communications interface (collectively referred to as “electronic signals” or “electronic messages”). These electronic messages that may comprise data or instructions may be provided between the network interface 106 and other devices via a communications path or channel. As noted above, any suitable communication path or channel may be used such as, for instance, a wire or cable, fiber optics, a telephone line, a cellular link, a radio frequency (RF) link, a WAN or LAN network, the Internet, or any other suitable medium.
The computer readable medium 108 may comprise code, executable by the processor 102, for a method comprising: generating, by a server computer, a binary compositional code matrix from an input matrix derived from input data used to make a prediction; converting, by the server computer, the binary compositional code matrix into an integer code matrix; inputting, by the server computer, each row of the integer code matrix into a decoder comprising a plurality of codebooks to output a summed vector for each row; and inputting, by the server computer, derivatives of the summed vectors into a downstream machine learning model to output the prediction.
The computer readable medium 108 may comprise a number of software modules including, but not limited to, a computation module 108A, an encoding/decoding module 108B, a codebook management module 108C, and a communication module 108D.
The computation module 108A may comprise code that causes the processor 102 to perform computations. For example, the computation module 108A can allow the processor 102 to perform addition, subtraction, multiplication, matrix multiplication, comparisons, etc. The computation module 108A may be accessed by other modules to assist in executing algorithms.
The encoding/decoding module 108B may comprise code that causes the processor 102 to encode and decode data. For example, the encoding/decoding module 108B can store encoding and decoding algorithms, such as the encoding algorithm 200 shown in FIG. 2.
The codebook management module 108C may comprise code that causes the processor 102 to manage codebooks. For example, the codebook management module 108C can store and modify codebooks generated by the encoding/decoding module 108B. A "codebook" can be a set of vectors that can be used to transform an integer code vector into a real number vector. Codebooks are further described in Zhang et al., "Learning Non-Redundant Codebooks for Classifying Complex Objects," ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1241-1248, June 2009; https://doi.org/10.1145/1553374.1553533.
The communication module 108D may comprise code that causes the processor 102 to generate messages, forward messages, reformat messages, and/or otherwise communicate with other entities.
Graph neural networks (GNNs) are representation learning methods for graph data. When a GNN is applied to a node classification problem, the GNN typically learns the node representation from input node features X and its graph G, where the input node features X are used as the input node representation to the first layer of the model and the graph G dictates the propagation of information. Examples of a GNN can be found in Zhou et al., "Graph Neural Networks: A Review of Methods and Applications," AI Open, 1:57-81, 2020. However, the input node features X may not be available for every dataset. In order to apply a GNN to a graph without input node features X, it is possible to either 1) extract simple graph-based node features (e.g., node degrees) from the graph, or 2) use embedding learning methods to learn the node embeddings as input node features X, such as in Duong et al., "On Node Features for Graph Neural Networks," arXiv preprint arXiv:1911.08795, 2019. The second approach often outperforms the first, and many methods, such as that in Wang et al., "Neural Graph Collaborative Filtering," Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 165-174, 2019, learn the node embeddings jointly with the parameters of the GNN.
Learning the input node features X (or equivalently the embedding matrix X) for a graph with a small number of nodes can be performed without difficulty by a common computer system. However, as the size of the embedding matrix X grows linearly with the number of nodes, scalability becomes a prominent issue. For example, if a given graph has 1 billion nodes, the dimension of the learned embedding is set to 64, and the embedding matrix X is stored using a single-precision floating-point format, the memory cost for the embedding layer alone is 238 gigabytes, which is beyond the capability of many common graphics processing units (GPUs).
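The arithmetic behind the 238 gigabyte figure is: 10^9 nodes × 64 dimensions × 4 bytes per single-precision value = 2.56×10^11 bytes, and 2.56×10^11 bytes / 2^30 bytes per gigabyte ≈ 238 gigabytes.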
To reduce the memory requirement, embodiments represent nodes using a generated binary compositional code vector, such as described for natural language processing in Takase, Sho and Kobayashi, Sosuke, "All Word Embeddings from One Embedding," arXiv preprint arXiv:2004.12073, 2020. Then, a decoder model that can be trained end-to-end with a downstream model uncompresses the binary compositional code vector into a floating-point vector. The bit size of the binary compositional code vector is parameterized by a code cardinality value c and a code length value m. The code cardinality value c determines which values the elements of the code vector can take, and the code length value m determines how many elements the code vector has. For example, if the code cardinality c=4 and the code length m=6, one valid code vector is [2, 0, 3, 1, 0, 1], where each element of the code vector is within the set {0, 1, 2, 3} and the length of the code vector is 6. The code vector can be converted to a bit vector of length m log2 c by representing each element in the code vector as a binary number, and concatenating the resulting binary numbers. Continuing the above example, the code vector [2, 0, 3, 1, 0, 1] can be compactly stored as [10, 00, 11, 01, 00, 01]. The choice of c=64 and m=8 can uniquely represent 2^48 nodes (e.g., the exponent is determined by 8 log2 64=48).
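A minimal sketch of this bit packing (the function name is hypothetical; c is assumed to be a power of two, as in the examples above):

```python
def pack_code_vector(code, c):
    """Pack an integer code vector into a bit string of length m * log2(c)."""
    bits_per_element = c.bit_length() - 1  # log2(c) when c is a power of two
    return "".join(format(element, f"0{bits_per_element}b") for element in code)

# The example from the text: c = 4, m = 6.
print(pack_code_vector([2, 0, 3, 1, 0, 1], c=4))  # -> "100011010001"
```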
Embodiments can use a random projection method to generate a code vector for each entity in a graph using auxiliary information of the graph, such as the adjacency matrix associated with the graph G or a pre-trained embedding. Random projection can map entities (nodes) with similar auxiliary information to similar code vectors. Such random projection methods are known as locality-sensitive methods, further details of which are described in Charikar, Moses, "Similarity Estimation Techniques from Rounding Algorithms," Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pp. 380-388, 2002.
The encoding algorithm 200 can take as input an input matrix A of size n×d, where n is a number of nodes of an associated graph and d is the length of a first vector associated with each node. The encoding algorithm 200 can additionally take as input a code cardinality value c and a code length value m. The code cardinality value c and the code length value m determine the format and associated memory cost of the output binary compositional code matrix X̂. The output binary compositional code matrix X̂ can be in a binary format, where each row of the binary compositional code matrix X̂ comprises a node's associated binary code vector.
In some embodiments, the input matrix A can comprise auxiliary information of a graph, such as the adjacency matrix of the graph. In other examples, the input matrix A can be generated by sampling a batch of nodes of a graph. For each node of the batch, a set of nearest neighbor nodes can be sampled, and the corresponding portion of the adjacency matrix retrieved. In yet another example, the input matrix A can further include a set of second nearest neighbors of the batch of nodes. For example, for each node of the batch, the set of nearest neighbor nodes of the node can be sampled, and for each node in the set of nearest neighbor nodes, a further set of nearest neighbor nodes can be sampled (e.g., the second nearest neighbors of the original node of the batch of nodes). For a transaction graph, the input matrix A may comprise the relationships between bank accounts. For example, the graph may show bank accounts as nodes, and the input matrix A can be the adjacency matrix of the graph, which contains information about the connections between the bank accounts (e.g., where each bank account moves funds to and receives funds from). If the input matrix A comprises an adjacency matrix, the number of nodes may be equal to the length of the first vector associated with each node (e.g., n=d). A sketch of forming such an input matrix appears below.
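A minimal sketch of forming an adjacency input matrix from an edge list (illustrative only; the function name is hypothetical, and a graph of realistic size would use a sparse format and batched neighbor sampling rather than a dense matrix):

```python
import numpy as np

def adjacency_input_matrix(edges, n):
    """Build a dense n x n adjacency matrix A from an undirected edge list."""
    A = np.zeros((n, n), dtype=np.float32)
    for i, j in edges:
        A[i, j] = 1.0  # node i is connected to node j
        A[j, i] = 1.0  # undirected graph: the connection is symmetric
    return A
```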
In line 1 of the encoding algorithm 200, an input matrix A, a code cardinality value c, and a code length value m can be input into the algorithm. A user of the algorithm can obtain a code cardinality value c and a code length value m based on the desired encoding to be performed. For example, if the user wishes to encode a total of M nodes, the user can determine values for the code cardinality value c and the code length value m appropriately (e.g., such that 2^(m log2 c) = c^m ≥ M).
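As a sketch of this sizing check (the helper name is hypothetical):

```python
import math

def min_code_length(M, c):
    """Smallest code length m such that c**m >= M (M unique code vectors)."""
    return math.ceil(math.log(M, c))

# e.g., encoding one billion nodes with code cardinality c = 64:
print(min_code_length(10**9, 64))  # -> 5, since 64**5 = 2**30 >= 10**9
```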
In line 2 of the encoding algorithm 200, the number of bits required to store each code vector is computed and stored in the variable nbit. The variable nbit can be computed as nbit = m log2 c.
In line 3 of the encoding algorithm 200, the binary compositional code matrix X̂ can be initialized as a false Boolean matrix of size n×nbit. For example, the initial binary compositional code matrix X̂ can be an n×nbit sized matrix with each element of the initial binary compositional code matrix X̂ set to logical false. The binary compositional code matrix X̂ may later store the resulting binary compositional code vectors.
From lines 4 through 11 of the encoding algorithm 200, the binary compositional code vectors are generated bit-by-bit in the outer loop, and node-by-node in the inner loops. The outer loop iterates through each column of the binary compositional code matrix X̂. Generating the compositional codes in this order is a memory-efficient way to perform random projections, as only a size d random vector is stored in each iteration. If the inner loop (e.g., lines 7-8) were switched with the outer loop (e.g., lines 4-11), a d×nbit matrix would be required to store all the random vectors for the random projections. Line 4 begins the outer loop, which is repeated a number of iterations equal to the required number of bits nbit.
In line 5 of the encoding algorithm 200, a first vector V ∈ ℝ^d (e.g., a random vector in a real number space of size d) is generated randomly. For example, a random number generator can be used to randomly assign a value to each of the elements of the first vector V. The first vector V can be used for performing the random projection.
In line 6 of the encoding algorithm 200, a second vector U ∈ ℝ^n (e.g., a vector in a real number space of size n) is initialized as an empty vector. For example, the second vector U can be a vector with a total of n empty elements. The second vector U can store the result of the random projection.
In line 7 of the encoding algorithm 200, a first inner loop can be repeated for a total of n iterations. The first inner loop can populate each element of the second vector U.
In line 8 of the encoding algorithm 200, each node's associated vector (e.g., the node's row of the input matrix A) is projected using the first vector V and stored in the second vector U. The jth element of the second vector U can store the result of a dot product between the first vector V and the jth row of the input matrix A. For example, U[j] = V·A[j, :].
In line 9 of the encoding algorithm 200, a threshold value t is computed using the second vector U. The threshold value t can be used to binarize the real values of the second vector U. In some embodiments, the threshold value t can be computed as the median of the second vector U. The median can be used as the threshold value t to reduce the number of collisions (e.g., duplicate rows in the binary compositional code matrix X̂), as shown in Dong et al., "Scalable Representation Learning for Heterogeneous Networks," Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 135-144, 2017a. A unique code vector for each input node is desired, and the choice of the threshold value t has a significant impact on the rate of collisions. The impact of the choice of the threshold value t is summarized by the histograms described below.
In line 10 of the encoding algorithm 200, a second inner loop can be repeated for a total of n iterations. The second inner loop can binarize each element of the second vector U and store the binary result in the corresponding element of the binary compositional code matrix X̂.
In line 11 of the encoding algorithm 200, the jth element of the second vector U is compared to the threshold value t. If the jth element of the second vector U is larger than the threshold value t, then the corresponding element of the binary compositional code matrix X̂ is set to logical true (e.g., X̂[j, i]=True). The corresponding element of the binary compositional code matrix X̂ is determined by the value of i of the outer loop and the value of j of the second inner loop. Line 10 iterates through each element of the second vector U, such that all elements are compared to the threshold value t.
An illustration of the process of lines 10-11 is provided by the histograms described below.
In line 12 of the encoding algorithm 200, the binary compositional code matrix X̂ can be returned. The binary compositional code matrix X̂ may then be used for any downstream tasks.
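A minimal NumPy sketch of the encoding algorithm 200, with the inner loops of lines 7-8 and 10-11 vectorized (the function name and the use of Gaussian random vectors are assumptions; the algorithm only requires some randomly generated first vector V):

```python
import numpy as np

def encode(A, c, m, rng=None):
    """Random-projection encoding: A is (n, d), c is the code cardinality,
    m is the code length. Returns an (n, m*log2(c)) Boolean matrix X_hat."""
    rng = rng or np.random.default_rng()
    n, d = A.shape
    nbit = m * int(np.log2(c))                # line 2: bits per code vector
    X_hat = np.zeros((n, nbit), dtype=bool)   # line 3: initialize to False
    for i in range(nbit):                     # line 4: one bit at a time
        V = rng.standard_normal(d)            # line 5: random first vector V
        U = A @ V                             # lines 6-8: project every node
        t = np.median(U)                      # line 9: median threshold
        X_hat[:, i] = U > t                   # lines 10-11: binarize
    return X_hat                              # line 12
```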
The memory complexity of the encoding algorithm 200 is O(max(n·m·log2 c, d·f, n·f)), where f is the number of bits used to store a floating-point number, n·m·log2 c is the memory cost of storing the binary compositional code matrix X̂, d·f is the memory cost of storing the first vector V, and n·f is the memory cost of storing the second vector U. Because f is usually less than m log2 c and d≤n, the memory complexity of the encoding algorithm 200 is typically O(n·m·log2 c), the same as the memory cost of the output binary compositional code matrix X̂.
The histograms (not reproduced here) compare the rates of collisions resulting from different choices of the threshold value t.
The decoder 510 can include a plurality of codebooks 504 including a first codebook 504A, a second codebook 504B, a third codebook 504C, and a fourth codebook 504D. Although a specific number of codebooks is shown for purposes of illustration, there can be more codebooks in other embodiments. Each codebook is a c×dc matrix, where c is the number of real number vectors in the codebook (e.g., the code cardinality value, such as the one used as input to the encoding algorithm 200) and dc is the size of each real number vector in the codebook. The plurality of codebooks 504 comprises a number of codebooks equal to the code length value m, where the code length value m is the total code length (e.g., the length of the code after being converted from binary to integer). In the example shown, there are four codebooks (e.g., m=4), each comprising 4 real number vectors (e.g., c=4).
The decoder 510 can additionally include logic to perform the overlaid steps S500-S510. The input to the decoder 510 may be a binary compositional code matrix X̂, such as one generated by the encoding algorithm 200 of FIG. 2. The decoding of a single row of the binary compositional code matrix X̂ (e.g., a binary code vector 500) is described below.
In step S500, the binary code vector 500 can be converted into an integer code vector 502. The conversion can be done directly, such as by a look-up table. In the example shown, the binary code vector 500 (e.g., [10, 00, 11, 01]) is converted to the integer code vector 502 (e.g., [2, 0, 3, 1]).
In step S502, the integer code vector 502 can be used to retrieve a set of real number vectors 506A-506D from the plurality of codebooks 504 based on corresponding indices. For example, the integer code vector 502 [2, 0, 3, 1] retrieves a first real number vector 506A corresponding to the index 2 from the first codebook 504A, a second real number vector 506B corresponding to the index 0 from the second codebook 504B, a third real number vector 506C corresponding to the index 3 from the third codebook 504C, and a fourth real number vector 506D corresponding to the index 1 from the fourth codebook 504D. The real number vectors of the codebooks 504 can be non-trainable random vectors, or trainable vectors. Both the trainable and non-trainable vectors can have elements that are initially randomly generated. Each codebook of the plurality of codebooks 504 can be said to be trainable if it comprises trainable vectors, or non-trainable if it comprises non-trainable random vectors. A trainable vector can be a vector that includes trainable parameters as elements (e.g., the elements can be modified as a part of training). A non-trainable vector can be a vector with randomly generated fixed elements. The use of trainable vectors increases the number of trainable parameters by m·c·dc (e.g., the number of codebooks times the number of possible codes times the length of the real number vectors), and has improved performance if the memory cost can be paid. Additionally, the memory cost of the trainable parameters of the trainable codebooks is independent of the number of nodes of an input matrix A (e.g., the input matrix A used to generate the binary compositional code matrix X̂).
In step S504, the retrieved set of real number vectors 506A-506D can be summed to form an intermediate vector 506. The intermediate vector 506 can be the element-wise sum of the first real number vector 506A, the second real number vector 506B, the third real number vector 506C, and the fourth real number vector 506D.
In step S506A, if the plurality of codebooks 504 are trainable, the intermediate vector 506 can be used directly as the summed vector 508. For example, the summed vector 508 can be the element-wise sum of the first real number vector 506A, the second real number vector 506B, the third real number vector 506C, and the fourth real number vector 506D.
In step S506B, if the plurality of codebooks 504 are non-trainable, the element-wise product between the intermediate vector 506 and a trainable vector 506E can be computed to output a summed vector 508. The element-wise product between the two vectors results in a rescaling of each dimension of the intermediate vector 506 such that the resultant summed vector 508 is unique for each input integer code vector 502. The rescaling method using the trainable vector 506E is described in the aforementioned Takase and Kobayashi, "All Word Embeddings from One Embedding." The rescaling is not needed to form the summed vector 508 in step S506A because the trainable parameters of the trainable codebooks can instead be modified (as opposed to the trainable vector 506E) to ensure uniqueness.
In step S508, after the summed vector 508 is output by the trainable codebooks 504, or by the non-trainable codebooks 504 and the trainable vector 506E, the summed vector 508 can be fed into a multilayer perceptron 512 to generate a derivative of the summed vector 508 corresponding to the integer code vector 502. In some embodiments, the multilayer perceptron 512 can comprise a ReLU function between linear layers, and can output an embedding corresponding to the input binary code vector 500. In some embodiments, the multilayer perceptron 512 can receive an intermediate matrix as input. The multilayer perceptron 512 can process the intermediate matrix to form a processed intermediate matrix, which can be an embedding matrix (e.g., each row of the embedding matrix can be an embedding corresponding to an integer code vector such as the integer code vector 502). The rows of the processed intermediate matrix can then be input into the downstream model 514.
In step S510, the derivative of the summed vector corresponding to the input binary code vector 500 can be fed into the downstream model 514. In some embodiments, the derivative of the summed vector can be an embedding.
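A minimal NumPy sketch of the decoder 510 applied to one binary code vector (the function and argument names are hypothetical; the multilayer perceptron 512 is passed in as an arbitrary callable, and c is assumed to be a power of two):

```python
import numpy as np

def decode_row(bit_row, codebooks, scale=None, mlp=None):
    """bit_row:   Boolean vector of length m * log2(c) (a row of X_hat)
    codebooks: array of shape (m, c, d_c) -- the plurality of codebooks 504
    scale:     optional trainable rescaling vector 506E (step S506B)
    mlp:       optional callable playing the role of multilayer perceptron 512
    """
    m, c, d_c = codebooks.shape
    nbits = int(np.log2(c))
    # Step S500: binary code vector -> integer code vector.
    code = [int("".join("1" if b else "0" for b in bit_row[k*nbits:(k+1)*nbits]), 2)
            for k in range(m)]
    # Steps S502-S504: index each codebook and sum the retrieved vectors.
    summed = sum(codebooks[k][code[k]] for k in range(m))
    # Step S506B: rescale when the codebooks are non-trainable.
    if scale is not None:
        summed = summed * scale
    # Steps S508-S510: the MLP output is fed to the downstream model.
    return mlp(summed) if mlp is not None else summed
```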
The number of trainable parameters can be independent of the number of nodes both when using non-trainable codebooks and when using trainable codebooks. Given that the number of neurons of the multilayer perceptron is set to dm, the number of layers is set to l≥2, and the dimension of the output embedding is set to de, when using non-trainable codebooks there is a total of m·c·dc non-trainable parameters (e.g., parameters that can be stored outside of GPU memory) and dc + dc·dm + (l−2)·dm² + dm·de trainable parameters. When using trainable codebooks, there is a total of m·c·dc + dc·dm + (l−2)·dm² + dm·de trainable parameters. In both cases, the number of trainable parameters is independent of the number of nodes of the input.
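As an illustrative sketch of this parameter accounting (the helper and the example sizes are hypothetical):

```python
def trainable_params(m, c, d_c, d_m, d_e, l, trainable_codebooks):
    """Parameter counts from the text; independent of the number of nodes n."""
    mlp = d_c * d_m + (l - 2) * d_m**2 + d_m * d_e  # multilayer perceptron 512
    if trainable_codebooks:
        return m * c * d_c + mlp   # the codebook entries are trained
    return d_c + mlp               # only the rescaling vector 506E and the MLP

# e.g., m=8, c=64, d_c=64, d_m=128, d_e=64, l=2:
print(trainable_params(8, 64, 64, 128, 64, 2, True))   # 49152 = 32768 + 16384
print(trainable_params(8, 64, 64, 128, 64, 2, False))  # 16448 = 64 + 16384
```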
At block 600, a batch of nodes of a graph is sampled. For example, a graph dataset of transaction data can plot each transaction in N-dimensional space. A transaction can have various features with high cardinality, such as transaction amount, timestamp, transaction identifier, user identifiers, etc. Several transactions of the transaction data can be sampled as nodes. Note that embodiments are not limited to transaction data, but can be applied to any other suitable type of data. For example, the data forming the graph can relate to recommendations (e.g., of content such as movies, recommendations of friends or pages of interest in social media networks, or images similar to an input image), data (e.g., traffic data, road data, etc.) related to transportation for autonomous vehicles, and the like.
At block 602, a neighbor sampler can, for each sampled node, determine a set of first nearest neighbor nodes (e.g., most similar transactions). Additionally, because the shown GraphSAGE model has two layers, the neighbor sampler can, for each sampled node, determine a set of second nearest neighbor nodes (e.g., highly similar transactions).
At block 604, the binary compositional code matrix X̂ associated with each sampled node's first nearest neighbors and second nearest neighbors can be retrieved. For example, for each sampled node, the nodes of the set of first nearest neighbors and the nodes of the set of second nearest neighbors can be fed into the encoding algorithm 200 of FIG. 2.
At block 606, the binary compositional code matrix X̂ (or, individually, the binary code vectors) can be decoded. For example, the binary compositional code matrix X̂, or the rows of the binary compositional code matrix X̂, can be fed into the decoder 510 of FIG. 5.
Blocks 608-616 illustrate a GraphSAGE model as shown in Hamilton et al., “Inductive Representation Learning on Large Graphs,” arXiv preprint arXiv:1706.02216, 2017.
At block 608, the second nearest neighbor embeddings can be aggregated for each of the first nearest neighbors of the sampled nodes. For example, the aggregation can be performed using a mean or max function in the first aggregate layer. Given that a matrix Hi contains the embeddings of the nodes neighboring node i, the first aggregate layer computes h̄i = Aggregate(Hi).
At block 610, for each first nearest neighbor node of node i, the first layer can concatenate the aggregate for the first nearest neighbor with xi (e.g., the embedding for node i) and process the result. The processing of the first layer can be represented as σ(W·Concatenate(h̄i, xi)), where W is a weight matrix associated with the first layer and σ(·) is some non-linearity like a ReLU.
At block 612, the first nearest neighbor embeddings for the sampled nodes can be aggregated, similarly to block 608.
At block 614, the second layer can process the first nearest neighbor embeddings using some non-linearity, similarly to block 610. However, the second layer does not concatenate the first nearest neighbor embeddings and the sampled node embeddings, as they are not used in the GraphSAGE model.
At block 616, the learned representation is fed into an output (i.e., linear) layer. The output layer may generate a prediction 618 using the embeddings. The parameters of the model can be learned end-to-end using labeled training data.
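A minimal sketch of one such layer (blocks 608-610) with a mean aggregator, assuming NumPy; the names are illustrative, and a full model would stack a second layer (blocks 612-614) and the output layer (block 616):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sage_layer(x_i, H_i, W):
    """One GraphSAGE layer: mean-aggregate the neighbor embeddings H_i,
    concatenate with the node's own embedding x_i, then apply a learned
    weight W and a non-linearity."""
    h_bar = H_i.mean(axis=0)           # block 608: h_bar = Aggregate(H_i)
    z = np.concatenate([h_bar, x_i])   # block 610: Concatenate(h_bar, x_i)
    return relu(W @ z)                 # sigma(W . Concatenate(h_bar, x_i))
```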
When the number of compressed entities is low, the reconstructed embeddings from all of the tested compression methods perform similarly to the raw embeddings. However, as the number of compressed entities increases, the performance of the reconstructed embeddings decreases, as the decoder model size does not grow with the number of compressed entities. In other words, the compression ratio increases as the number of compressed entities increases. The reconstructed embeddings from the random coding method perform significantly worse, and their quality drops sharply compared to the other methods. The method of embodiments performs similarly to the learning-based coding method while using fewer parameters to learn the encoding functions.
Embodiments provide several advantages. Embodiments reduce the memory cost of storing embeddings, such that conventional GPUs can train the embeddings in memory. Embodiments use a random projection based algorithm to generate a binary compositional code matrix that encodes an input matrix formed from nodes of a graph, where each row of the binary compositional code matrix corresponds to a node of the graph. Embodiments then use a decoder to decode each row of the binary compositional code matrix into a summed vector. The summed vector can then be fed into a multilayer perceptron to generate an embedding for the associated node of the graph. The embedding may then be fed into any downstream model. The binary compositional code matrix provides for a significant reduction in the memory needed to store the embeddings of input matrices.
Any of the software components or functions described in this application may be implemented as software code to be executed by a processor using any suitable computer language such as, for example, Java, C, C++, C#, Objective-C, Swift, or a scripting language such as Perl or Python using, for example, conventional or object-oriented techniques. The software code may be stored as a series of instructions or commands on a computer readable medium for storage and/or transmission. Suitable media include random access memory (RAM), read only memory (ROM), a magnetic medium such as a hard drive or a floppy disk, an optical medium such as a compact disk (CD) or a digital versatile disk (DVD), flash memory, and the like. The computer readable medium may be any combination of such storage or transmission devices.
Such programs may also be encoded and transmitted using carrier signals adapted for transmission via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet. As such, a computer readable medium according to an embodiment of the present invention may be created using a data signal encoded with such programs. Computer readable media encoded with the program code may be packaged with a compatible device or provided separately from other devices (e.g., via Internet download). Any such computer readable medium may reside on or within a single computer product (e.g., a hard drive, a CD, or an entire computer system), and may be present on or within different computer products within a system or network. A computer system may include a monitor, printer, or other suitable display for providing any of the results mentioned herein to a user.
The above description is illustrative and is not restrictive. Many variations of the invention will become apparent to those skilled in the art upon review of the disclosure. The scope of the invention should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the pending claims along with their full scope or equivalents.
One or more features from any embodiment may be combined with one or more features of any other embodiment without departing from the scope of the invention.
As used herein, the use of “a,” “an,” or “the” is intended to mean “at least one,” unless specifically indicated to the contrary.
This application is a PCT application, which claims priority to and the benefit of U.S. Provisional Patent Application No. 63/249,852, filed on Sep. 29, 2021, which is herein incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/044144 | Sep. 20, 2022 | WO |
Number | Date | Country
---|---|---
63/249,852 | Sep. 29, 2021 | US