This is the first application for this matter.
This disclosure relates to a system, method, and computer readable medium for processing graph data structures.
A graph is a data structure that models a set of entities as a set of respective nodes (V) and the relationships between those entities as a set of edges (E), such that graph G = (V, E). A simplified example of a graph G = (V, E) is shown in the accompanying figure, which contains nodes A, B, C, D, E, and F, having respective feature vectors XA, XB, XC, XD, XE, and XF.
Graphs have great expressive power to model a set of entities and the relationships between entities in various fields such as social networks, recommendation systems, knowledge graphs, physical systems, protein-protein interaction networks, and many other complex systems, and therefore, are gaining a lot of attention in the field of machine learning.
and sends a request (for example, through a network that may include a cloud network such as the Internet) to a server 104. At the server 104, a graph neural network (GNN) model (GNN encoder 106) serves as a core building block of a prediction model. Although there are numerous designs for GNN models, the basic function of the GNN model is to represent each node based on its neighbourhood by aggregating features from neighbour nodes.
Upon receiving the request, the server 104 will compute node representations (also referred to as embeddings) using the GNN encoder 106 for specific nodes related to the request. With the node representations, some downstream procedure is further performed (e.g., by a decoder 108) to generate a prediction and return the final results to the client.
Node embedding or node representation is defined as mapping nodes to a De-dimensional embedding space (where De << D, the original dimensionality of the node features) so that similar nodes in a graph are mapped in close proximity in the embedding space. Similarity of nodes is quantified by different metrics, which may or may not be domain/application specific.
A GNN is a type of neural network that operates directly on the graph structure. The goal of using a GNN is to provide an encoder function that can learn a low-dimensional vector, also known as an embedding or representation g(z), of each node z in the graph G. A core building component of a GNN is the message passing procedure, in which the representation of each node is recursively updated by aggregating node features from its neighbors. Specifically, let h_z^l be the embedding vector of node z at layer l, and denote N(z) as the set of neighbor nodes of node z. h_z^l is iteratively evaluated as:

h_z^l = UPDATE^l(h_z^{l-1}, AGGREGATE^l({h_u^{l-1} : u ∈ N(z)}))    (1)

where AGGREGATE^l(·) is any permutation-invariant operation (a function over a sequence of inputs whose output is not affected by the order of the inputs) that aggregates a set of vectors, and UPDATE^l(·) is any transformation function with possible learnable parameters and non-linear transformations. h_z^0 is usually initialized as the row vector X_z* of node features. Finally, the representation g(z) of a node is defined as:

g(z) := h_z^L    (2)

where L is a predefined hyperparameter indicating the number of layers.
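The recursion of equations (1) and (2) can be sketched as follows. This is an illustrative sketch, not the disclosure's implementation: a mean over neighbors stands in for any permutation-invariant AGGREGATE, and caller-supplied functions stand in for the learnable UPDATE at each layer.

```python
import numpy as np

def message_passing(features, neighbors, update_fns):
    """Sketch of GNN message passing per equations (1) and (2).

    features:   dict node -> initial feature vector (h_z^0 = X_z*)
    neighbors:  dict node -> list of neighbor nodes N(z)
    update_fns: one UPDATE function per layer (learnable in practice);
                the mean over neighbors plays the role of AGGREGATE.
    Returns g(z) = h_z^L for every node z.
    """
    h = {z: np.asarray(x, dtype=float) for z, x in features.items()}
    for update in update_fns:                            # one pass per layer
        new_h = {}
        for z, nbrs in neighbors.items():
            agg = np.mean([h[u] for u in nbrs], axis=0)  # AGGREGATE
            new_h[z] = update(h[z], agg)                 # UPDATE
        h = new_h
    return h
```

For instance, with a single layer and UPDATE(h, a) = (h + a)/2, each node's output is the average of its own features and its neighborhood mean.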
Deploying GNN models to process large graphs is challenging since collecting information from the neighbor nodes and computing the aggregation is extremely time-consuming. The processing time within a conventional GNN encoder 106, both for training and for inference, is inevitably long due to the number of computations required by a GNN model. The complexity of direct inference of each node representation in equation (2) is proportional to the number of receptive nodes. Specifically, for message-passing based GNN models, the number of receptive nodes potentially grows exponentially as the number of GNN layers increases. For the graph transformers, the number of receptive nodes is simply the full graph. Therefore, conventional GNN models are infeasible for applications with stringent latency requirements due to the intractable number of receptive nodes.
Accordingly, there is a need for an improved system, method, and computer readable medium for processing graph data structures that can reduce the number of computations required for such processing while maintaining a suitable level of accuracy.
According to a first example aspect, a method is disclosed for operating a computer system to process a graph, the graph comprising a data structure that defines nodes that represent entities and edges that represent relationships between the nodes, each node having an associated feature vector that specifies a set of feature values for the node. The method includes performing an inference process. The inference process includes receiving a prediction request for a subject node of the graph; obtaining a sparse node approximation for the subject node, the sparse node approximation defining a weighted combination of a subset of nodes of the graph as receptive nodes for the subject node; applying a neural network based transformation function based on the sparse node approximation to generate a node representation for the subject node; performing a prediction task based on the generated node representation to generate a prediction for the subject node; and outputting the prediction.
In some examples of the first example aspect, obtaining the sparse node approximation for the subject node comprises accessing a stored sparse data structure that identifies, for each of a plurality of the nodes of the graph including the subject node, a respective sparse vector that defines a respective subset of other nodes of the graph as the receptive nodes for that node and respective weight indices for the receptive nodes; and applying the neural network based transformation function to generate the node representation for the subject node comprises: (i) for each receptive node of the subject node, applying the neural network transformation function to the feature vector associated with the receptive node to generate a respective receptive node representation, and (ii) combining the respective receptive node representations according to the respective weight indices for the receptive nodes.
In some examples of one or more of the preceding aspects, the method includes, prior to performing the inference process, performing a learning process to learn an approximation of a target graph neural network (GNN) encoder, comprising: collectively learning the respective sparse vectors for the nodes of the graph and a set of parameters for the neural network based transformation function with an objective of jointly optimizing (i) an accuracy of node representations generated by the neural network based transformation function relative to node representations generated by the target GNN encoder and (ii) a sparsity of the respective sparse vectors.
In some examples of one or more of the preceding aspects, the learning process is performed to enable a collective time required during the inference process for obtaining the sparse node approximation and generating the node representation for the subject node to be less than a time required for a corresponding inference by the target GNN encoder.
In some examples of one or more of the preceding aspects, collectively learning the sparse vectors and the set of parameters comprises: identifying, for each of the nodes, respective candidate nodes that are a subset of the nodes of the graph, wherein the respective sparse vector for each node is learned based on identifying a subset of the respective candidate nodes for the node as the receptive nodes for the node.
In some examples of one or more of the preceding aspects, identifying the respective candidate nodes for each node comprises a random walk selection process.
In some examples of one or more of the preceding aspects, collectively learning the sparse vectors and the set of parameters comprises performing a plurality of training iterations that each comprise a first phase during which the respective sparse vectors are updated and a second phase during which the set of parameters for the neural network based transformation function are updated.
In some examples of one or more of the preceding aspects, a mini-batch of the nodes are processed during each of the training iterations.
In some examples of one or more of the preceding aspects, performing the learning process comprises training a neural network based decoder to perform the prediction task based on node representations generated by the neural network based transformation function subsequent to completion of the collective learning of the respective sparse vectors and the set of parameters.
In some examples of one or more of the preceding aspects, the method is performed at a server, the prediction request is received through a network from a user device, and outputting the prediction comprises sending the prediction through the network to the user device.
According to a further example aspect, a computer system is disclosed that is configured to perform the method of one or more of the preceding aspects.
According to a further example aspect, a computer readable medium is disclosed that stores non-transient instructions that configure a computer system to perform the method of one or more of the preceding aspects.
Reference will now be made, by way of example, to the accompanying drawings which show example embodiments of the present application, and in which:
Similar reference numerals may have been used in different figures to denote similar components.
Example embodiments are directed towards a processing system, method, and computer readable medium for processing graph data structures that can reduce the number of computations (and thus inference time) required for such processing while maintaining a suitable level of accuracy. In example embodiments, the disclosed solution offers a sparse decomposition GNN (SDGNN) model that has linear complexity in terms of inference time (e.g., inference time is linear w.r.t. the number of layers of a GNN model and the average node degree (i.e., number of hops that define a neighbourhood)), while still enabling the most relevant neighbourhood features to be processed during inference. Furthermore, the disclosed SDGNN model is highly versatile in that it can be implemented in combination with a variety of known GNN architectures.
Context for an example implementation of the disclosed solution is provided in the following paragraphs. Consider a graph G = {V, E} with |V| = n nodes and |E| = m edges. Denote by X ∈ R^{n×D0} the feature matrix, with a dimension of D0, in which the ith row X_i* denotes the feature vector of node i. In known GNN solutions, graph representation learning involves learning an embedding g(z, X|G) for each node z by training the GNN encoder 106. The representations can then be applied in a downstream task using a decoder 108, e.g., node classification, link prediction, or graph classification. In a node classification task, the decoder 108 outputs a label matrix Ŷ ∈ R^{n×K}, where the ith row Ŷ_i* is a one-hot encoded vector of K classes. Given the labels for a subset of nodes in graph G, the node classification task is to predict the labels for the unlabelled nodes of graph G. The graph G is associated with an adjacency matrix A ∈ R^{n×n}, where A_ij := 1 if and only if the edge (i, j) ∈ E, and otherwise A_ij := 0.
As noted above in respect of equations (1) and (2), in a conventional GNN encoder g, the complexity of direct inference of each node representation g(z, X|G) is proportional to the number of receptive nodes. Specifically, for message-passing based GNN models, the number of receptive nodes potentially grows exponentially as the number of GNN layers increases. Accordingly, example implementations are directed to a GNN model solution that enables the number of nodes that are considered as receptive nodes to be substantially reduced. This is accomplished by applying an alternative node embedding approximation, ĝ(z, X|G), so that the inference computation complexity O(d̄L) is linear with respect to the number of layers L and the average node degree d̄, while the approximate node representation ĝ(z, X|G) remains sufficiently close to the node representation g(z, X|G) to enable accurate performance of downstream computations (e.g., prediction output by decoder 108).
Different metrics can be adopted to compare (e.g., measure the distance between) the approximate node representation ĝ(z, X|G) and the original GNN node representation g(z, X|G). In an example implementation, normalized Euclidean distance is selected as the comparative metric, with normalized regret (NR) defined as:

NR := ||ĝ(z, X|G) − g(z, X|G)||_2 / ||g(z, X|G)||_2

The normalization makes the metric independent of the scale of the target embeddings. As long as the NR is small, the node representation from the approximate embedding ĝ(z, X|G) can be adopted for any downstream task instead of the original GNN embedding.
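The NR metric reduces to a one-line computation. The sketch below assumes the plain reading of "normalized Euclidean distance": the distance between the approximate and target embeddings, divided by the norm of the target.

```python
import numpy as np

def normalized_regret(approx_embedding, target_embedding):
    """Normalized Euclidean distance between an approximate node
    embedding g_hat(z) and the target GNN embedding g(z). A small NR
    means the approximation can stand in for the target embedding."""
    approx = np.asarray(approx_embedding, dtype=float)
    target = np.asarray(target_embedding, dtype=float)
    return np.linalg.norm(approx - target) / np.linalg.norm(target)
```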
In this regard, a respective sparse node vector θz is defined for each graph node z. The transformation function h(·;W): R^D → R^{D′} maps each row of an input feature matrix X ∈ R^{n×D} to a predefined dimension D′, where W represents a set of learnable parameters. For conciseness, the notation is extended to the matrix setting, with h(X;W): R^{n×D} → R^{n×D′} being the result of a row-wise mapping from X under the function h(·;W). Each node z is modelled as a linear combination over the transformed node features via its respective sparse node vector θz, such that the sparse decomposition performed by the SDGNN encoder 202 for node z can be denoted as ĝ(z, X|G) := θz^T h(X;W).
The sparsity of the sparse node vector θz and the node-wise dependency on h(·;W) ensure that the computation of the representation ĝ(z, X|G) depends on a limited set of node features. Hence, the inference complexity can be controlled to be O(d̄L). The sparse vector function 204 and the transformation function (h) 206 collectively provide flexibility that enables the SDGNN encoder 202 to approximate a large variety of target GNN models.
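The per-node computation ĝ(z, X|G) := θz^T h(X;W) can be sketched as follows. Note that only the rows of X corresponding to nonzero entries of θz are ever transformed, which is the source of the reduced inference cost; the transformation h is passed as a plain function here, standing in for the trained MLP.

```python
import numpy as np

def sdgnn_embed(theta_z, X, h):
    """Sparse-decomposition embedding g_hat(z) = theta_z^T h(X; W).

    theta_z: weight vector over all n nodes (mostly zeros)
    X:       n x D feature matrix
    h:       row-wise transformation (stand-in for the learned MLP)
    """
    receptive = np.flatnonzero(theta_z)                   # receptive nodes of z
    transformed = np.stack([h(X[i]) for i in receptive])  # h applied row-wise
    return theta_z[receptive] @ transformed               # weighted combination
```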
In this disclosure, at the matrix level the SDGNN encoder 202 can be denoted as follows. Let (θ1, θ2, . . . , θ|V|) be the columns of the sparse matrix Θ. The target node representation matrix is denoted as Ω ∈ R^{n×D′}, where the zth row is Ωz* := g(z, X|G). Correspondingly, the predicted node representation matrix (e.g., the output of the SDGNN encoder 202) is denoted as Ω̂ := Θ^T h(X;W).
In example implementations, the transformation function h(·;W) is implemented using a neural network such as a multilayer perceptron (MLP). As known in the art, an MLP is a feedforward artificial neural network consisting of fully connected neurons with nonlinear activation functions, organized in at least three layers. Any of a number of suitable MLP structures known for graph processing can be used for the transformation function h(·;W), with specific examples disclosed below in the context of experimentation and evaluation.
The optimal sparse matrix Θ and optimal transformation function h(·;W) are jointly learned through a training process in respect of an input graph G that is provided with a target node representation matrix Ω. The target node representation matrix Ω can, for example, be the set of embeddings generated for graph G by the target GNN encoder 106 that the SDGNN encoder 202 is being trained to approximate.
In example implementations, the optimal sparse matrix Θ and transformation function h(·;W) are learned by solving the following optimization problem:

min_{Θ,W} ||Θ^T h(X;W) − Ω||_F^2 + λ1 ||Θ||_1 + λ2 ||W||_2^2

where λ1 ≥ 0 is a hyperparameter that controls the sparsity of the sparse matrix Θ via an L1 regularization term, and λ2 ≥ 0 is the hyperparameter for L2 regularization of the transformation function parameters W. In the case where the transformation function h(X;W) is a multi-layer perceptron (MLP) model, the L2 regularization implicitly upper bounds the row-wise norm of h(X;W) for a given X. This prevents the degenerate case of an extremely small and sparse matrix Θ.
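The joint objective can be evaluated directly, as the sketch below does for a caller-supplied h. The elementwise L1 norm of Θ and the squared Frobenius norm of W are assumptions about the exact regularizer forms, which the disclosure does not pin down.

```python
import numpy as np

def sdgnn_objective(Theta, W, X, Omega, h, lam1, lam2):
    """Joint training objective: Frobenius reconstruction error of
    Theta^T h(X; W) against the target embeddings Omega, plus L1
    sparsity on Theta and L2 regularization on W."""
    residual = Theta.T @ h(X, W) - Omega
    return (np.sum(residual ** 2)
            + lam1 * np.sum(np.abs(Theta))
            + lam2 * np.sum(W ** 2))
```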
Jointly learning the respective sets of values for sparse matrix Θ and function parameters W can be challenging due to the sparsity constraint on sparse matrix Θ. In order to address this challenge, in example implementations, a two stage iterative training procedure is applied to optimize one set of values while fixing the other set of values. These two stages are referred to herein as Phase Θ and Phase h.
Phase Θ can be represented as follows. During each training iteration, for each node z ∈ V, the respective sparse node vector θz is updated with the solution of the least absolute shrinkage and selection operator (Lasso) optimization problem:

θz := argmin_θ ||θ^T h(X;W) − Ωz*||_2^2 + λ1 ||θ||_1    (5)
In at least some example implementations the constraint s.t. θ≥0 can be applied to Equation (5) to make the optimization procedure more robust.
In example implementations, the Least Angle Regression (LARS) algorithm [Reference: Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert (2004). "Least Angle Regression". Annals of Statistics. 32 (2): pp. 407-499. arXiv: math/0406456] is applied to solve the optimization problem of Equation (5), due to its efficiency and its capability of controlling the maximum number of nonzero entries.
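For illustration, the Phase Θ subproblem of Equation (5) for one node, with the non-negativity constraint, can be solved with a plain coordinate-descent loop. This is a hedged stand-in for the LARS algorithm the disclosure actually uses: `H` holds the transformed rows h(X;W) for the node's candidates, and `omega` holds the target embedding Ωz*.

```python
import numpy as np

def phase_theta_nonneg(H, omega, lam1, iters=200):
    """Coordinate descent for min_theta ||H^T theta - omega||^2
    + lam1 * ||theta||_1 subject to theta >= 0 (Equation (5) plus the
    robustness constraint). H is (candidates x D'), omega is (D',).
    Requires no all-zero rows in H, so each diagonal of A is positive."""
    A = H @ H.T                      # Gram matrix over candidate nodes
    b = H @ omega
    theta = np.zeros(H.shape[0])
    for _ in range(iters):
        for j in range(theta.size):
            # correlation of omega with row j, excluding coordinate j
            r = b[j] - A[j] @ theta + A[j, j] * theta[j]
            theta[j] = max(0.0, (r - lam1 / 2) / A[j, j])
    return theta
```

Unlike LARS, this solver does not directly cap the number of nonzero entries; the L1 weight `lam1` controls sparsity indirectly.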
Phase h can be represented as follows. In each training iteration, the parameter matrix W is updated based on solving the optimization problem:

W := argmin_W ||Θ^T h(X;W) − Ω||_F^2 + λ2 ||W||_2^2    (6)
In an example implementation, a common gradient descent (GD) algorithm can be used to solve for W. In some examples, rather than reaching a converged W, a few steps are taken as part of each training iteration.
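A few plain gradient steps for Phase h can be sketched as below, for the simplest possible transformation h(X;W) = XW. The linear h is a stand-in for the MLP, and the closed-form gradient is specific to that choice.

```python
import numpy as np

def phase_h_step(W, Theta, X, Omega, lam2, lr, steps=3):
    """A few GD steps on W with Theta fixed, for h(X; W) = X @ W.
    Gradient of the Phase h loss: 2 X^T Theta (Theta^T X W - Omega)
    + 2 lam2 W. As in the disclosure, only a few steps are taken
    rather than iterating W to convergence."""
    for _ in range(steps):
        residual = Theta.T @ (X @ W) - Omega
        grad = 2.0 * (X.T @ Theta @ residual) + 2.0 * lam2 * W
        W = W - lr * grad
    return W
```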
In some examples, training is performed on a mini-batch basis as part of each training iteration. Specifically, a subset B ⊂ V of nodes is randomly selected at each iteration, and only the respective sparse node vectors θz for nodes z ∈ B are updated. In the case of the parameters W, the same subset B of nodes is selected, and only the corresponding rows of Θ^T h(X;W) − Ω are included in the loss term for each mini-batch, to solve the optimization problem:

W := argmin_W ||[Θ^T h(X;W) − Ω]_B||_F^2 + λ2 ||W||_2^2    (7)

where [·]_B denotes the rows corresponding to the nodes in B.
In some examples, a preliminary step of culling some nodes can be performed at the server 104 prior to the sparse vector function 204. In particular, the number of computations required can grow rapidly as the size n of the node set V increases. Specifically, the computational overhead comes from the inference time of h(X;W) and the computation time to solve the optimization problem of Equation (5). To reduce the required computation, in example implementations a preliminary step is performed to define a much smaller candidate node set C_z for each node z, and only this smaller set of candidate nodes is processed to determine the respective sparse node vector θz for each node z.
In such implementations, the optimization problem represented by Equation (5) is replaced with the following problem:

θz := argmin_θ ||θ^T h(X;W) − Ωz*||_2^2 + λ1 ||θ||_1    (8)

s.t. θ_u = 0 for all u ∉ C_z    (9)

Equation (9) defines the reduced candidate set for solving the sparse node vector θz. In such a case, the inference h(X;W) only has to be performed for nodes that appear in the candidate node set C_z, and the complexity of solving Equation (8) only depends on the size of the candidate node set C_z instead of the full set of graph nodes.
In example embodiments, the candidate node set C_z can be selected by using knowledge of the structure of graph G (as represented in the adjacency matrix A) to heuristically determine the candidate node set C_z for each node z, with an objective of including the most relevant nodes within a moderate-sized candidate set. In a particular example, the candidate node set C_z for a subject node z is the union of all K1-hop neighbours of the subject node z together with the nodes visited during a K2-hop random walk, for K2 > K1. In some examples, K2 = K1 + 2, and the smallest K1 is selected that yields an acceptable combination of speed and accuracy. In some examples K1 = 1. In some examples K1 = 2. In some examples K1 = 3 or greater.
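The candidate-set heuristic above can be sketched as follows: a breadth-first expansion collects the K1-hop neighbourhood, and short random walks sample the wider K2-hop region. The walk count and seed are illustrative assumptions, as the disclosure does not fix them.

```python
import random

def candidate_set(adj, z, k1, k2, num_walks=10, seed=0):
    """Candidate node set C_z: all nodes within k1 hops of node z,
    plus nodes visited during k2-step random walks (k2 > k1).
    adj maps each node to a list of its neighbors."""
    frontier, cand = {z}, {z}
    for _ in range(k1):                      # k1-hop BFS expansion
        frontier = {u for v in frontier for u in adj[v]} - cand
        cand |= frontier
    rng = random.Random(seed)                # seeded for reproducibility
    for _ in range(num_walks):               # k2-step random walks from z
        v = z
        for _ in range(k2):
            v = rng.choice(adj[v])
            cand.add(v)
    return cand
```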
An example of a method for training the SDGNN encoder (ĝ) 202 at the server 104 that applies the two-stage process described above is illustrated in pseudocode form by Algorithm 1.
Step 1 (302): The following inputs are obtained: (1) a representation of graph G, in the form of the node feature matrix X and the adjacency matrix A; and (2) the target node representation matrix Ω. (As noted above, the target node representation matrix Ω can be the set of embeddings generated for graph G by a target GNN encoder 106 that the SDGNN encoder 202 is being trained to approximate.)
Step 2 (304): In at least some examples, a preliminary node selection procedure is performed to determine, for each node z, a respective set of candidate nodes C_z (from the total set of nodes V) that are to be considered when determining the sparse node vector θz for that node. For example, as noted above, the candidate node set C_z for a subject node z can be the union of all K1-hop neighbours of the subject node z together with the nodes visited during a K2-hop random walk, for K2 > K1.
Step 3 (306): A main training loop is performed for T iterations. In the illustrated example, the training is done in mini-batches as described above, such that for each iteration a batch B of nodes is randomly selected from V for processing. Each iteration of the main training loop comprises two phases, namely: Phase Θ (308), during which, for each node z ∈ B, the respective sparse node vector θz is updated to solve the optimization problem represented by Equations (8) and (9) using a LARS algorithm; and Phase h (310), during which a GD algorithm is performed for a few iterations as per Equation (7) to update the parameter matrix W of the transformation function (h).
Step 4 (312): Finally, for each node z in node set V, fix the parameters of the transformation function (h) and apply a LARS algorithm to solve the optimization problem represented by Equations (8) and (9) and update the sparse node vector θz.
To provide further illustration of the training of the SDGNN encoder (ĝ) 202, reference is made to the accompanying figure. Thus, nodes B and C are considered receptive nodes for node A, with respective weighting indices of 0.9 and 0.5. In matrix form, the sparse matrix Θ, which includes the sparse vector representations θz for all nodes z in V, can be represented as follows:
The sparse matrix Θ includes non-zero indices that identify, for each subject node z, a sparse node vector θz indicating the subject node's respective receptive nodes and a respective index value (weight) for each of the receptive nodes. Non-receptive nodes are assigned a null or 0 value in the sparse matrix Θ.
The sparse matrix Θ and the feature matrix X can be provided as inputs to the transformation function (h) to generate the predicted node representation matrix Ω̂ := Θ^T h(X;W) in the form illustrated in the accompanying figure.
In example implementations, once the SDGNN encoder 202 is trained to approximate the target GNN encoder 106, a corresponding decoder (f) 208 can then be trained. In particular, the SDGNN encoder 202 is used to generate the node representation matrix Ω̂ := Θ^T h(X;W), which includes a respective SDGNN node representation Ω̂z* := ĝ(z, X|G) for each node z. In the case of a classification task, the decoder (f) 208 is then trained to map the SDGNN node representation Ω̂z* for each node having a pre-known label to its respective label Yz. In at least some examples, at least some of the known labels may be soft labels obtained by training the target encoder 106 and its respective decoder 108.
Accordingly, using the known labels and the node representation ĝ(z, X|G) for each node z from the SDGNN encoder 202, the decoder 208 is trained to map the node representations to the predicted labels. In some examples, an MLP NN model may be used for the decoder, and the decoder can be trained using, for example, a cross-entropy loss for classification tasks and a mean squared error loss for regression tasks.
As indicated above in the example of Table 1, the sparse matrix Θ includes data indicating the receptive nodes for each subject node in graph G and index values indicating a weight value for those receptive nodes. Accordingly, the server 104 consults the sparse matrix Θ to look up what weighted combination of other nodes can be used to approximate node B, which in the case of the example of Table 1 is B ≈ 0.5X_C + 0.1X_D. The server 104 obtains the most recent feature vectors X_C and X_D that are known for graph G. In some examples, this information may be requested by the server 104 from the user device 102 or another source of current graph data. In other examples, a current version of graph G may be maintained and stored at the server 104.
The server 104 applies the transformation function (h) to each of the respective feature vectors X_C and X_D to obtain representations C′ and D′. These individual transformed representations for the receptive nodes are then combined as per the weightings specified in the sparse matrix Θ, namely 0.5C′ + 0.1D′, to provide an embedding (e.g., node embedding) ĝ(z, X|G) for subject node B (shown in the accompanying figure).
The resulting node embedding ĝ(z, X|G) for subject node B is then provided to the decoder (f) to generate a prediction. The prediction is then returned to the user device 102.
It will be noted that the prediction for node B can be generated using only the feature vectors for nodes C and D. This can substantially reduce the amount of node feature data that the server 104 needs to obtain to perform a prediction, as well as the number of computations required to arrive at the prediction. Thus, performance and operation of the computer system used to implement the server 104 can be improved when compared to using the original target GNN encoder. The latency between receiving a request for a node prediction and providing that prediction can be substantially reduced, while still maintaining prediction accuracy.
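The inference path for node B described above reduces to a tiny computation, sketched below. The weights 0.5 and 0.1 come from the Table 1 example; the feature values and the identity transformation are hypothetical stand-ins for the current graph data and the trained function h.

```python
import numpy as np

def infer_node(theta_row, features, h):
    """Combine transformed receptive-node features per the sparse row.

    theta_row: dict receptive node -> weight (e.g., node B's row of Theta)
    features:  dict node -> current feature vector
    h:         trained transformation function
    """
    return sum(w * h(np.asarray(features[u], dtype=float))
               for u, w in theta_row.items())

# B ~= 0.5 * h(X_C) + 0.1 * h(X_D): only two feature vectors are fetched.
theta_B = {"C": 0.5, "D": 0.1}
feats = {"C": [1.0, 2.0], "D": [3.0, 0.0]}               # hypothetical features
embedding_B = infer_node(theta_B, feats, h=lambda x: x)  # identity h for illustration
```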
Among other things, once trained, SDGNN encoder (ĝ) 202 can be applied to node classification tasks in a dynamic node environment where the features of nodes in the graph can change over time, requiring updated predictions. In some cases, SDGNN encoder (ĝ) 202 can also be used in link prediction tasks.
The effectiveness of the disclosed SDGNN encoder (ĝ) 202 methods and architecture in improving the functioning of a computer system such as the server 104 is illustrated with reference to the following experimental and evaluation data.
Datasets: Evaluations were conducted using the following datasets: Cora, Citeseer, Pubmed, Computer, Photo, Arxiv and Products.
Baseline Model Architectures: (1) GLNN (Zhang, S., Liu, Y., Sun, Y., and Shah, N. (2021). Graph-less neural networks: Teaching old MLPs new tricks via distillation. In Proc. Int. Conf. on Learn. Representations) and (2) NOSMOG (Tian, Y., Zhang, C., Guo, Z., Zhang, X., and Chawla, N. (2023). Learning MLPs on graphs: A unified view of effectiveness, robustness, and efficiency. In Proc. Int. Conf. on Learn. Representations), as baselines representing techniques that aim to approximate any targeted GNN model with linear complexity; and (3) the Ã³ model: a baseline constructed by selecting receptive nodes and deriving approximate weights using Ã³, the third power of the symmetric normalized adjacency matrix Ã = D^{−1/2} A D^{−1/2}, where D is the diagonal degree matrix. In particular, the estimate Ã³X is formed.
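The Ã³ baseline is straightforward to reproduce. The sketch below assumes a dense adjacency matrix with no isolated nodes, so the degree normalization is well defined.

```python
import numpy as np

def a3_estimate(A, X):
    """A-tilde^3 baseline: the symmetric normalized adjacency matrix
    A_tilde = D^{-1/2} A D^{-1/2}, cubed and applied to features X."""
    degrees = A.sum(axis=1)                  # diagonal of the degree matrix D
    D_inv_sqrt = np.diag(1.0 / np.sqrt(degrees))
    A_tilde = D_inv_sqrt @ A @ D_inv_sqrt
    return A_tilde @ A_tilde @ A_tilde @ X
```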
Target GNN models: (1) GraphSAGE (Hamilton, W., Ying, Z., and Leskovec, J. (2017). Inductive representation learning on large graphs. In Proc. Advances in Neural Information Processing Systems) with a mean aggregator for all the datasets; (2) Geom-GCN (Pei, H., Wei, B., Chang, K. C.-C., Lei, Y., and Yang, B. (2020). Geom-GCN: Geometric graph convolutional networks. In Proc. Int. Conf. on Learn. Representations) for the Cora, Citeseer and Pubmed datasets; (3) Exphormer (Shirzad, H., Velingker, A., Venkatachalam, B., Sutherland, D. J., and Sinop, A. K. (2023). Exphormer: Sparse transformers for graphs. In Proc. Int. Conf. Machine Learning) for the Computer and Photo datasets; (4) DRGAT (Zhang, L., Yan, X., He, J., Li, R., and Chu, W. (2023). DRGCN: Dynamic evolving initial residual for deep graph convolutional networks. In Proc. of the AAAI Conf. on Artificial Intelligence) for the Arxiv dataset; and (5) RevGNN-112 (RevGNN) (Li, G., Müller, M., Ghanem, B., and Koltun, V. (2021). Training graph neural networks with 1000 layers. In Proc. Int. Conf. Machine Learning) for the Products dataset.
For these evaluations, the representation of the second-last layer of each of the target models was adopted as the target node embeddings. Evaluation was focused on the transductive setting. For the five small datasets (Cora, Citeseer, Pubmed, Computer and Photo), nodes were randomly split with a 6:2:2 ratio into training, validation and testing sets, and experiments were conducted using 10 random seeds, as in (Pei, H., Wei, B., Chang, K. C.-C., Lei, Y., and Yang, B. (2020). Geom-GCN: Geometric graph convolutional networks. In Proc. Int. Conf. on Learn. Representations). For Arxiv and Products, the fixed predefined data splits are those specified in (Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. (2020). Open graph benchmark: Datasets for machine learning on graphs. In Proc. Advances in Neural Information Processing Systems), and the experiments were run 10 times, with the mean and standard deviation reported.
Node Representation: Table 2 below shows the effectiveness of the SDGNN encoder in approximating the node representations of the target GNN models according to the normalized regret metric defined above, with the worst performance on Arxiv with the SAGE model (0.11). In most cases, the representations generated by the SDGNN encoder are close to those of the target GNN models.
Performance on Downstream Tasks: Table 2 also shows comparisons of accuracy among GLNN, NOSMOG and SDGNN for all seven datasets with different target GNN models. The SDGNN encoder approximates the target model well in all scenarios and achieves close or better accuracy; the largest negative discrepancy is 0.77 percent for Products/RevGNN. GLNN and NOSMOG often outperform the SAGE target model. However, for the better-performing models on the larger datasets, Arxiv and Products, their performance gap with respect to the target model is more pronounced. For example, on Products/RevGNN the performance gaps are 18.39 and 1.88 percent, respectively. One of the most important differences between GLNN and NOSMOG is that NOSMOG calculates a positional encoding (derived using DeepWalk) for each node and stores this as an additional feature. For Products and Arxiv, this positional encoding clearly plays a critical role, which suggests that NOSMOG is partially learning a static mapping from node position to label. This would prevent NOSMOG from being used in online inference, whereas the SDGNN encoder can be used in environments where node features and labels can change.
Inference Time: Inference was performed for 10,000 randomly sampled nodes for each dataset to assess the trade-off between inference time and accuracy.
Receptive Nodes: Table 3 below shows the average number of receptive nodes for the SDGNN encoder, for the candidate set (as described above), and for the original graph within the 3-hop neighbourhood of each node. In summary, the SDGNN encoder can effectively select a small set of receptive nodes to approximate the node embedding from a target GNN model.
The processing unit 170 may include one or more processing devices 172, such as a processor, a microprocessor, a graphics processing unit (GPU), a hardware accelerator, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), dedicated logic circuitry, or combinations thereof. The processing unit 170 may also include one or more input/output (I/O) interfaces 174, which may enable interfacing with one or more appropriate input devices 184 and/or output devices 186. The processing unit 170 may include one or more network interfaces 176 for wired or wireless communication with a network (for example, a network linking the user device 102 and the server 104).
The processing unit 170 may also include one or more storage units 178, which may include a mass storage unit such as a solid state drive, a hard disk drive, a magnetic disk drive and/or an optical disk drive. The processing unit 170 may include one or more memories 180, which may include a volatile or non-volatile memory (e.g., a flash memory, a random access memory (RAM), and/or a read-only memory (ROM)). The memory (ies) 180 may store instructions for execution by the processing device(s) 172, such as to carry out examples described in the present disclosure. The memory (ies) 180 may include other software instructions, such as for implementing an operating system and other applications/functions.
There may be a bus 182 providing communication among components of the processing unit 170, including the processing device(s) 172, I/O interface(s) 174, network interface(s) 176, storage unit(s) 178 and/or memory (ies) 180. The bus 182 may be any suitable bus architecture including, for example, a memory bus, a peripheral bus or a video bus.
Although the present disclosure describes methods and processes with steps in a certain order, one or more steps of the methods and processes may be omitted or altered as appropriate. One or more steps may take place in an order other than that in which they are described, as appropriate.
Although the present disclosure is described, at least in part, in terms of methods, a person of ordinary skill in the art will understand that the present disclosure is also directed to the various components for performing at least some of the aspects and features of the described methods, be it by way of hardware components, software or any combination of the two. Accordingly, the technical solution of the present disclosure may be embodied in the form of a software product. A suitable software product may be stored in a pre-recorded storage device or other similar non-volatile or non-transitory computer readable medium, including DVDs, CD-ROMs, USB flash disk, a removable hard disk, or other storage media, for example. The software product includes instructions tangibly stored thereon that enable a processing device (e.g., a personal computer, a server, or a network device) to execute examples of the methods disclosed herein.
The present disclosure may be embodied in other specific forms without departing from the subject matter of the claims. The described example embodiments are to be considered in all respects as being only illustrative and not restrictive. Selected features from one or more of the above-described embodiments may be combined to create alternative embodiments not explicitly described, features suitable for such combinations being understood within the scope of this disclosure.
All values and sub-ranges within disclosed ranges are also disclosed. Also, although the systems, devices and processes disclosed and shown herein may comprise a specific number of elements/components, the systems, devices and assemblies could be modified to include additional or fewer of such elements/components. For example, although any of the elements/components disclosed may be referenced as being singular, the embodiments disclosed herein could be modified to include a plurality of such elements/components. The subject matter described herein intends to cover and embrace all suitable changes in technology.
The content of any publications identified in this disclosure are incorporated herein by reference in their entirety.