The present invention relates to machine learning models and, more particularly, to graph neural networks.
Graph neural networks (GNNs) are a type of machine learning model that handles data in the form of graphs. GNNs can be used to extract information from data having graph structures. For example, message passing may be used to update node representations by aggregating messages from their neighbors, which makes it possible for the GNN to capture both node features and topology information. GNNs can be used for a variety of tasks, such as node classification, graph classification, and link prediction. Exemplary applications include drug discovery, recommender mechanisms, fraud detection, and social networking.
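By way of non-limiting illustration, a single round of mean-aggregation message passing over a dense adjacency matrix may be sketched as follows; the feature, adjacency, and weight matrices are arbitrary placeholders rather than values from any particular GNN implementation.

```python
import numpy as np

def message_passing_layer(X, A, W):
    """One illustrative round of mean-aggregation message passing.

    X: (n, d) node feature matrix
    A: (n, n) adjacency matrix (self-loops included)
    W: (d, d_out) weight matrix
    """
    deg = A.sum(axis=1, keepdims=True)        # per-node degree
    messages = (A @ X) / np.maximum(deg, 1)   # average of neighbor features
    return np.tanh(messages @ W)              # transform and apply a nonlinearity

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                   # 5 nodes with 8-dimensional features
A = (rng.random((5, 5)) > 0.7).astype(float)
A = np.minimum(A + A.T + np.eye(5), 1.0)      # symmetrize and add self-loops
W = rng.normal(size=(8, 4))
H = message_passing_layer(X, A, W)            # updated node representations
```

Stacking several such rounds allows a node's representation to incorporate both its own features and the topology of its multi-hop neighborhood.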
A method includes processing an input graph using a graph neural network (GNN) to generate an output. An explanation sub-graph is generated using an explainer that identifies parts of the input graph that most influence the output. A fidelity measure of the explanation sub-graph is determined that is robust against distribution shifts. An action is performed responsive to the output, the explanation sub-graph, and the fidelity measure.
A system includes a hardware processor and a memory that stores a computer program. When executed by the hardware processor, the computer program causes the hardware processor to process an input graph using a graph neural network (GNN) to generate an output, to generate an explanation sub-graph using an explainer that identifies parts of the input graph that most influence the output, to determine a fidelity measure of the explanation sub-graph that is robust against distribution shifts, and to perform an action responsive to the output, the explanation sub-graph, and the fidelity measure.
These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein:
While graph neural networks (GNNs) are powerful tools, it can be challenging to interpret their outputs. Understanding how GNNs arrive at their outputs fosters greater confidence in applying GNNs in areas where errors have serious consequences. Furthermore, explainability heightens the transparency of GNNs, making them more appropriate for use in delicate sectors, such as healthcare and pharmaceutical development, where fairness, privacy, and safety are high priorities.
Unbiased fidelity measurements can be used to evaluate the faithfulness of an explanation for a GNN's output. The model f may be treated as a black box, which cannot be fine-tuned or retrained to improve its generalization. Additionally, the evaluation should be stable and, ideally, deterministic. As a result, complex parametric evaluation metrics may not be well suited to explainability, as their results may be affected by randomly initialized parameters.
The explanation task is therefore formulated to generate a sub-graph which closely matches the statistics of the original graph with respect to the output of the GNN. That is, the GNN is likely to generate a similar output when given the original graph or the sub-graph. This sub-graph is then used as a tool for explaining the parts of the original graph which were significant to arriving at the output.
A generalized class of surrogate fidelity measures is described herein that is robust to distributional shifts in a wide range of scenarios. These metrics can be used to implement the explainer to ensure that a given sub-graph is close to the original graph in its treatment by the GNN.
Referring now to FIG. 1, an input 102 is processed by a GNN classifier 104. The GNN classifier 104 may, for example, extract some property of the input 102, such as a molecule's affinity for binding to a particular biological receptor. In another example, the GNN classifier 104 may identify whether an attacker has intruded into a computer network or may indicate a network failure. Although a classifier is specifically contemplated, it should be understood that the GNN classifier 104 may be replaced by any appropriate GNN. The output 106 of the GNN classifier 104 may thus be a vector that includes a classification result or any other output appropriate to the task for which the GNN has been trained. The GNN classifier 104 may be regarded herein as a black box, without any ability to train or tune its parameters during operation.
The output 106 is used as the basis for some action 114. For example, the action 114 may include correction to a network failure. However, the action 114 may need information beyond the output 106 to effect a solution. The bare output 106 may not include an explanation for why the GNN classifier 104 indicated, for example, a network failure.
An explainer 108 is thus used to identify a sub-graph 110 of the input 102 that is most relevant to the output 106. For example, this sub-graph may identify systems within the computer network and connections between them that are most indicative of a network failure indicated by the output 106. The action 114 can use this sub-graph to direct its corrective intervention.
However, some assurance of the correctness of the explanation would also be helpful. To that end, the sub-graph 110 is fed back into the GNN classifier 104 and its output is compared to the output resulting from the input 102. A fidelity measure 112 is used to determine how similar the output 106 is to the output generated by the sub-graph 110. The closer these two outputs are, according to the fidelity measure 112, the more likely the sub-graph 110 is to be an accurate explanation of the action of the GNN classifier 104. A good fidelity score indicates that the portions of the input 102 that are excluded from the sub-graph 110 have little impact on the output 106.
The explainer 108 may be implemented by any appropriate mechanism. Regardless of the mechanism selected, the trustworthiness of the sub-graph 110 that it generates depends on the reliability of the fidelity measure. While quantitative evaluation of a sub-graph 110 can be performed by comparing it to a ground truth explanation, such known-true ground truth examples are rare in real-world applications. A first fidelity measure, Fid+, is defined as the difference in accuracy (or predicted probability) between the output 106 and an output based on the remainder of the input 102 after the sub-graph 110 is removed. A second fidelity measure, Fid−, measures the difference between the output 106 and an output based on the sub-graph 110 alone. However, these fidelity measures have drawbacks due to an assumption that the to-be-explained model can make accurate predictions based on the sub-graph 110 or the remainder sub-graph. This assumption does not hold in a wide range of real-world scenarios because, when edges are removed, the resulting sub-graphs may be out of distribution.
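As a purely illustrative, non-limiting sketch, the Fid+ and Fid− quantities described above may be computed for a single input as follows, using a simple edge-set representation of graphs and a toy stand-in for the black-box model; the helper names and toy model are illustrative assumptions only.

```python
from typing import Callable, Dict, FrozenSet, Tuple

Edge = Tuple[int, int]
Graph = FrozenSet[Edge]

def fid_plus(model: Callable[[Graph], Dict[int, float]], g: Graph, exp: Graph, label: int) -> float:
    """Fid+: change in the label's predicted probability when the
    explanation edges are removed from the input graph."""
    return model(g)[label] - model(g - exp)[label]

def fid_minus(model: Callable[[Graph], Dict[int, float]], g: Graph, exp: Graph, label: int) -> float:
    """Fid-: change in the label's predicted probability when only the
    explanation edges are kept."""
    return model(g)[label] - model(g & exp)[label]

# Toy black-box "model": predicts class 1 if the motif edge (0, 1) is present.
def toy_model(g: Graph) -> Dict[int, float]:
    p1 = 0.9 if (0, 1) in g else 0.1
    return {0: 1.0 - p1, 1: p1}

g = frozenset({(0, 1), (1, 2), (2, 3)})
exp = frozenset({(0, 1)})
print(fid_plus(toy_model, g, exp, label=1))   # large: removing the motif hurts the prediction
print(fid_minus(toy_model, g, exp, label=1))  # near zero: the motif alone suffices
```

A high Fid+ together with a low Fid− is the pattern expected of a faithful explanation, subject to the out-of-distribution caveat discussed above.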
For example, in an input 102 that includes a molecule with nodes representing atoms and edges describing the chemical bonds, the functional group NO2 may be considered a dominant sub-graph that causes the molecule to be mutagenic. This explanation sub-graph includes only two edges, which may be much smaller than the whole molecular graph. Such disparities in properties introduce distribution shifts, reducing the reliability of the fidelity measures described above. Machine learning relies on training and test data coming from the same distribution, so a sub-graph that is out of the distribution will produce unreliable results.
A labeled graph G may be represented as a tuple (V, ε; Y, X, A), where i) V={v1, v2, . . . , vn} is the vertex set, ii) ε⊆V×V is the edge set, iii) Y is the graph class label taking values from a finite set of classes 𝒴, iv) X∈ℝ^(n×d) is the feature matrix, where the ith row of X, denoted by Xi∈ℝ^(1×d), is the d-dimensional feature vector associated with node vi, i∈[n], and v) A∈{0,1}^(n×n) is the adjacency matrix. The graph parameters (Y, A, X) are generated according to the joint probability measure PY,A,X. Note that the adjacency matrix determines the edge set ε, where Aij=1 if (vi, vj)∈ε, and Aij=0 otherwise. The terms |G| and |ε| are used interchangeably to denote the number of edges of G. Lower-case letters, such as g, y, x, and a, are used to represent realizations of the random objects G, Y, X, and A, respectively.

Given a labeled graph G=(V, ε; Y, X, A), the corresponding graph without its label is written as G̅=(V, ε; X, A), and its support is denoted by 𝒢.
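For illustration only, the tuple representation above may be sketched as follows, with arbitrary placeholder values.

```python
import numpy as np

n, d = 4, 3                                  # number of nodes, feature dimension
V = list(range(n))                           # vertex set {v1, ..., vn}
A = np.array([[0, 1, 0, 0],                  # adjacency matrix: A[i, j] = 1 iff (vi, vj) is an edge
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
edges = [(i, j) for i in range(n) for j in range(n) if A[i, j] == 1]
X = np.random.default_rng(0).normal(size=(n, d))  # feature matrix: row i is the feature vector of vi
Y = 1                                             # graph class label from a finite label set
num_edges = len(edges)                            # |G| = |edge set|
```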
In a classification task, there may be a set of labeled training data 𝒯={(G̅i, Yi), i∈[|𝒯|]}, where the pairs (G̅i, Yi), i∈[|𝒯|], are generated independently according to an identical joint distribution induced by PY,X,A. A classification function (GNN classifier 104) ƒ(·) is trained to classify an unlabeled input graph G̅. The reconstructed label Ŷ is produced randomly based on the output distribution ƒ(G̅).

In node classification tasks, each graph Gi denotes a K-hop sub-graph centered around node vi, with a GNN model ƒ trained to predict the label for node vi based on the node representation of vi learned from Gi, whereas in graph classification tasks, Gi is a random graph whose distribution is determined by the (general) joint distribution PY,A,X, with the GNN model ƒ(·) trained to predict the label for graph Gi based on the learned representation of Gi.

For a classification task with underlying distribution PY,X,A, a classifier is a function ƒ: 𝒢→Δ𝒴, mapping an unlabeled graph to a probability distribution over the label set. For a given ϵ>0, the classifier is called ϵ-accurate if P(Ŷ≠Y)≤ϵ, where Ŷ is produced according to the probability distribution ƒ(G̅).
There are two types of explainability to consider: explainability of a classification task and explainability of a classifier for a given task. Given a classification task with underlying distribution PY,X,A, an explanation function for the task is a mapping Ψ: G̅→(Vexp, εexp), which takes an unlabeled graph G̅ as input and outputs a sub-graph G̅exp=(Vexp, εexp) such that I(Y; G̅|G̅exp)≈0, where I(·;·|·) denotes conditional mutual information.
In practice, an explanation function may be selected to have an output size that is significantly smaller than the original input size, i.e., EG(|Ψ(G̅)|) is much smaller than EG(|G̅|).
A classification task may have an underlying distribution PY,X,A. An explanation function for this task is a mapping Ψ: 𝒢→2^V×2^ε. For a given pair of parameters κ∈[0,1] and s∈ℕ, the task is called (s, κ)-explainable if there exists an explanation function Ψ: G̅→(Vexp, εexp) such that: i) I(Y; G̅|G̅exp)≤κ and ii) EG(|εexp|)≤s.
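By way of non-limiting illustration, the two conditions of this definition may be written compactly as follows, using the conditional mutual information form of the reconstruction above.

```latex
\exists\,\Psi:\ \bar{G}\mapsto \bar{G}_{\mathrm{exp}}=(V_{\mathrm{exp}},\varepsilon_{\mathrm{exp}})
\quad\text{such that}\quad
\text{i)}\ I\!\left(Y;\bar{G}\,\middle|\,\bar{G}_{\mathrm{exp}}\right)\le\kappa
\qquad\text{and}\qquad
\text{ii)}\ \mathbb{E}_{G}\!\left[\,\lvert \varepsilon_{\mathrm{exp}}\rvert\,\right]\le s .
```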
A similar notion of explainability can be provided for a given classifier as follows. A classification task may have an underlying distribution PY,X,A and a classifier ƒ: 𝒢→Δ𝒴. For a given pair of parameters ζ∈[0,1] and s∈ℕ, the classifier ƒ(·) is called (s, ζ)-explainable if there exists an explanation function Ψ(·) such that: i) I(Ŷ; G̅|G̅exp)≤ζ and ii) EG(|εexp|)≤s, where the output Ψ(G̅) can alternatively be written as G̅exp=(Vexp, εexp).
The explainability of the classification task does not imply, nor is it implied by, the explainability of a classifier for that task. For example, the trivial classifier whose output is independent of the input is explainable for any task, even if the task is not explainable itself. To keep the analysis tractable, a condition may be imposed on Ψ(·):
This condition holds for the ground-truth explanation in many of the widely studied datasets in the explainability literature, such as the BA-2motifs, Tree-Cycles, Tree-Grid, and MUTAG datasets. A consequence of the condition is that, if Ψ(·) satisfies it, then the mutual information terms defining explainability of the task, involving I(Y; ·), and explainability of the classifier, involving I(Ŷ; ·), can be related to one another, as reflected in the following results.
For a classification task with underlying distribution PY,X,A, parameters ζ, κ, ϵ∈[0,1], and an integer s∈ℕ, if there exists a classifier ƒ(·) for this task which is ϵ-accurate and (s, ζ)-explainable, with the explanation function satisfying the above condition, then the task is (s, κ′)-explainable, where κ′ is determined by the parameters above.
For a classification task with underlying distribution PY,X,A, parameters κ, ϵ∈[0,1], and an integer s∈ℕ, assuming that the task is (s, κ)-explainable with an explanation function satisfying the above condition, and further assuming that the classification task has a Bayes error rate equal to ϵ*, then there exists an ϵ-accurate and (s, 0)-explainable classifier ƒ(·), where ϵ is bounded in terms of ϵ* and κ. In particular, ϵ→0 as ϵ*, κ→0.
The above provides intuitive notions of explainability, along with fidelity measures expressed as mutual information terms. However, in most practical applications it is not possible to quantitatively compute and analyze them. Estimating the mutual information term I(Ŷ; G̅exp) is not practically feasible in most applications, since the underlying distributions are unknown and the space of possible graphs is very large. A surrogate fidelity measure Fid*(ƒ, Ψ) may therefore be used instead. Such a surrogate should satisfy two properties: i) its ordering of explanation functions should agree with the ordering induced by the mutual information, so that a 'good' explanation function with respect to the surrogate fidelity measure is 'good' under the mutual information and vice versa, and ii) there must exist an empirical estimate of Fid*(ƒ, Ψ) with sufficiently fast convergence guarantees so that the surrogate measure can be estimated accurately using a reasonably large set of observations.
For a classification task with underlying distribution PY,X,A, a (surrogate) fidelity measure is a mapping Fid: (ƒ, Ψ)→ℝ≥0, which takes a pair consisting of a classification function ƒ(·) and an explanation function Ψ(·) as input and outputs a non-negative number. The fidelity measure is said to be well-behaved for a set of classifiers and a set of explanation functions if, for all pairs of explanation functions Ψ1, Ψ2 and all classifiers ƒ(·) drawn from those sets, the ordering of Fid(ƒ, Ψ1) and Fid(ƒ, Ψ2) agrees with the ordering of the corresponding mutual information terms.
This fidelity condition requires that better explanation functions must have higher fidelity when evaluated using the surrogate measure.
Let 𝒯n={(G̅i, Yi), i∈[n]}, n∈ℕ, be a sequence of sets of independent and identically distributed observations for a given classification problem. A fidelity measure Fid(·,·) is said to be empirically estimable with rate of convergence β if there exists a sequence of estimator functions Hn, n∈ℕ, each operating on 𝒯n, such that, for all ϵ>0, the probability that Hn(𝒯n) deviates from Fid(ƒ, Ψ) by more than ϵ vanishes at rate β as n grows.
As discussed above, some fidelity measures, such as Fid+, Fid−, and their difference FidΔ, may be expressed as expectations, over the data distribution, of the change in the model's prediction when the explanation sub-graph is removed from, or used in place of, the original graph. The rate of convergence of the corresponding empirical estimate follows from standard concentration arguments.
These measures are well-behaved for a class of deterministic classification tasks and completely explainable classifiers.
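As a non-limiting sketch, an empirical estimate of FidΔ, taken here as the difference Fid+ − Fid− averaged over a set of observations, may be written as follows; it reuses the illustrative fid_plus, fid_minus, and toy_model helpers from the earlier sketch.

```python
def empirical_fid_delta(model, dataset, fid_plus, fid_minus):
    """Average FidΔ = Fid+ - Fid- over a set of observations.

    dataset: iterable of (graph, explanation, label) triples.
    The sample average converges to its statistical counterpart as the
    number of observations grows.
    """
    values = [fid_plus(model, g, exp, y) - fid_minus(model, g, exp, y)
              for g, exp, y in dataset]
    return sum(values) / len(values)

# Toy usage with the illustrative helpers defined earlier
dataset = [(frozenset({(0, 1), (1, 2)}), frozenset({(0, 1)}), 1),
           (frozenset({(2, 3), (3, 4)}), frozenset({(2, 3)}), 0)]
print(empirical_fid_delta(toy_model, dataset, fid_plus, fid_minus))
```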
For a deterministic classification task, for which the induced distribution PG has support consisting of all graphs with n∈ℕ vertices, the graph edges may be jointly independent, and X∈𝒳^(n×d), where 𝒳 is a finite set. The graph label may be assumed to be Y=1(gcexp⊆G̅), i.e., the label is equal to one if and only if the graph contains the ground-truth explanation sub-graph gcexp. The explanation functions under consideration may be taken from a parametric family {Ψp(·)}, indexed by a parameter p that controls the quality of the produced explanation.
FidΔ is well-behaved in a specific set of scenarios, where the task is deterministic and the classifier is completely explainable. However, it is not well-behaved in a wide range of scenarios of interest which do not have these properties. This is due to the OOD issue described above. To elaborate, for a good classifier, which has a low probability of error, the output distribution ƒ(·|G̅) is close, on average, to the true conditional distribution of the label given the graph, i.e., the expected total variation distance EG(dTV(ƒ(·|G̅), PY|G̅(·|G̅))) is small. As a result, the predicted label distribution is close to PY|G̅. However, there is no corresponding guarantee that ƒ(·|Ψ(G̅)) is close to the conditional distribution of the label given the explanation sub-graph, because the sub-graph Ψ(G̅) and the remainder graph may fall outside the distribution on which the classifier was trained. Generally, in scenarios where Ψ(G̅) is out of distribution, fidelity measures that feed Ψ(G̅) or its complement directly into the classifier are unreliable. To address this, generalized fidelity measures, parameterized by values α1 and α2, evaluate the explanation on randomly perturbed versions of the input graph, in which only a portion of the explanation edges or non-explanation edges is modified, so that the graphs presented to the classifier remain closer to the original distribution.
The generalized fidelity measures can be empirically estimated from a set of observations 𝒯={(G̅i, Yi), i∈[|𝒯|]}, where εi is the edge set of G̅i, i∈[|𝒯|], and where the probability assigned to label yi is computed under the classifier's output distribution for the corresponding sampled graphs. Using the Chernoff bound and standard information-theoretic arguments, it can be shown that, for fixed α1 and α2, these empirical estimates converge to their statistical counterparts as |𝒯|→∞ for large input graph and explanation sizes.
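The precise form of the generalized measures is not reproduced here. By way of non-limiting illustration only, one plausible Monte Carlo sketch consistent with the description above, in which each explanation edge is deleted with probability α1 rather than deleting the entire explanation, is given below; the sampling scheme, helper names, and parameters are illustrative assumptions rather than the definitive formulation, and the toy_model helper from the earlier sketch is reused.

```python
import random

def sampled_fid_plus(model, g, exp, label, alpha1, num_samples=200, seed=0):
    """Monte Carlo sketch of a perturbation-based Fid+ style measure.

    Rather than deleting the whole explanation sub-graph, each explanation
    edge is deleted only with probability alpha1, so the graphs given to the
    classifier stay closer to the original data distribution.
    """
    rng = random.Random(seed)
    p_full = model(g)[label]
    total = 0.0
    for _ in range(num_samples):
        dropped = {e for e in exp if rng.random() < alpha1}   # partial removal
        total += model(frozenset(g) - dropped)[label]
    return p_full - total / num_samples

# Toy usage with the illustrative toy_model defined earlier
g = frozenset({(0, 1), (1, 2), (2, 3)})
exp = frozenset({(0, 1)})
print(sampled_fid_plus(toy_model, g, exp, label=1, alpha1=0.5))
```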
The generalized measures Fidα can be constructed so as to satisfy the well-behavedness and estimability properties described above. Furthermore, given n∈ℕ and δ, ϵ∈[0,1], it can be assumed that the graph distribution PG and the trained classifier ƒδ(·) satisfy the following conditions. The graph has n vertices. There exists a set of input graphs, called an ϵ-typical set, whose probability under PG is greater than 1−ϵ, and on which the output of ƒδ(·) is close, as parameterized by δ, to the underlying conditional label distribution.
In the classification scenario described above, the class of explanation functions may include stochastic mappings Ψ: G̅→G̅exp, indexed by a parameter p, whose explanation quality becomes monotonically increasing in p. Consequently, there exists a non-zero error threshold ϵth>0 such that Fidα remains well-behaved whenever the relevant error parameters are below ϵth.
Referring now to FIG. 2, a method for selecting among candidate explainers is shown. Multiple explainers are applied to a set of examples to produce respective sets of explanations.
Block 206 evaluates explainer fidelity as described above. A measure of fidelity is generated for each of the sets of explanations, corresponding to each of the respective explainers. A score may be generated for each explainer, for example by summing the fidelity scores for each example from each respective set or by averaging the fidelity scores for each example from each respective set. Block 208 then selects a best explainer, for example by taking the explainer with a highest corresponding score.
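As a purely illustrative sketch, the selection of block 208 may be implemented as follows, where the per-example fidelity values are assumed to have already been computed.

```python
def select_best_explainer(fidelity_scores):
    """Select the explainer with the highest average fidelity.

    fidelity_scores: dict mapping an explainer name to the list of fidelity
    values obtained for its set of explanations.
    """
    averages = {name: sum(vals) / len(vals) for name, vals in fidelity_scores.items()}
    best = max(averages, key=averages.get)
    return best, averages

# Toy usage with placeholder scores
scores = {"explainer_a": [0.81, 0.74, 0.90], "explainer_b": [0.62, 0.68, 0.71]}
best, table = select_best_explainer(scores)   # selects "explainer_a"
```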
Referring now to FIG. 3, a method of training and using an explainer model is shown. Training 300 trains or fine-tunes the explainer model using an appropriate training dataset.
Block 310 deploys the explainer model. In some cases, where the training 300 and the GNN task 320 are performed at the same location, the deployment 310 may be skipped. In other cases, parameters of the trained or fine-tuned explainer model may be transmitted to an inference site, where they will be used to aid in explaining the output of the GNN task 320.
Block 320 performs the GNN task, using any appropriate GNN model to process input graphs at block 322. Block 324 uses the trained explainer model to generate a sub-graph that explains the output of the GNN. Block 326 measures the fidelity of the sub-graph as described above, with explanation sub-graphs that produce outputs from the GNN which are similar to outputs from the corresponding original inputs producing higher fidelity scores.
Block 330 then performs an action based on the output of the GNN, the explanation sub-graph, and the fidelity measure. For example, the explanation sub-graph may be used to help guide the action to an appropriate location or to select the action that is to be performed. When the fidelity measure is high (e.g., above a fidelity threshold value), the action can be focused on a specific area indicated by the sub-graph. For example, in the event that the GNN indicates an intrusion in a computer network, the responsive action can target systems indicated by a sub-graph with a high fidelity measure. In contrast, a sub-graph with a low fidelity measure may not be trustworthy enough to rely on, and so the responsive action may need to have a broader target. In such an application, the action may include changing an operational state of one or more devices on the network (e.g., turning on or off routers or computers), changing a networking topology (e.g., changing a routing path between devices), and changing one or more security settings in the network devices (e.g., changing passwords, changing authentication types or stringency).
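By way of non-limiting illustration, the gating logic described above for the network-intrusion example may be sketched as follows; the threshold value, system names, and remediation actions are arbitrary placeholders.

```python
FIDELITY_THRESHOLD = 0.8  # hypothetical threshold above which the explanation is trusted

def respond_to_intrusion(output, implicated_systems, fidelity):
    """Select a targeted or broad remediation based on explanation fidelity."""
    if output != "intrusion":
        return []
    if fidelity >= FIDELITY_THRESHOLD:
        # High-fidelity explanation: focus the action on the implicated systems.
        return [("isolate_host", system) for system in implicated_systems]
    # Low-fidelity explanation: apply broader, network-wide protections.
    return [("rotate_credentials", "all_hosts"), ("restrict_routing", "all_segments")]

actions = respond_to_intrusion("intrusion", ["server-12", "router-3"], fidelity=0.92)
```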
In some cases, the action of block 330 may relate to the selection of a pharmaceutical that is to have an intended effect in the human body. The GNN classifier 104 may indicate, for example, whether a given molecule will bind with a particular protein related to a disease. The sub-graph 110 can help to explain how and why the molecule accomplishes that binding. In such an application, the action of block 330 may include manufacturing the molecule so that it can be used in testing or therapies.
As shown in FIG. 4, the computing device 400 may include a processor 410, an input/output (I/O) subsystem 420, a memory 430, a data storage device 440, a communication subsystem 450, and one or more peripheral devices 460, communicatively coupled to one another.
The processor 410 may be embodied as any type of processor capable of performing the functions described herein. The processor 410 may be embodied as a single processor, multiple processors, a Central Processing Unit(s) (CPU(s)), a Graphics Processing Unit(s) (GPU(s)), a single or multi-core processor(s), a digital signal processor(s), a microcontroller(s), or other processor(s) or processing/controlling circuit(s).
The memory 430 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. In operation, the memory 430 may store various data and software used during operation of the computing device 400, such as operating systems, applications, programs, libraries, and drivers. The memory 430 is communicatively coupled to the processor 410 via the I/O subsystem 420, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 410, the memory 430, and other components of the computing device 400. For example, the I/O subsystem 420 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, platform controller hubs, integrated control circuitry, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 420 may form a portion of a system-on-a-chip (SOC) and be incorporated, along with the processor 410, the memory 430, and other components of the computing device 400, on a single integrated circuit chip.
The data storage device 440 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid state drives, or other data storage devices. The data storage device 440 can store program code 440A for explaining a GNN's output, 440B for determining a fidelity measurement of an explanation sub-graph, and/or 440C for performing a responsive action. Any or all of these program code blocks may be included in a given computing system. The communication subsystem 450 of the computing device 400 may be embodied as any network interface controller or other communication circuit, device, or collection thereof, capable of enabling communications between the computing device 400 and other remote devices over a network. The communication subsystem 450 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, InfiniBand®, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
As shown, the computing device 400 may also include one or more peripheral devices 460. The peripheral devices 460 may include any number of additional input/output devices, interface devices, and/or other peripheral devices. For example, in some embodiments, the peripheral devices 460 may include a display, touch screen, graphics circuitry, keyboard, mouse, speaker system, microphone, network interface, and/or other input/output devices, interface devices, and/or peripheral devices.
Of course, the computing device 400 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other sensors, input devices, and/or output devices can be included in computing device 400, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations can also be utilized. These and other variations of the processing system 400 are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein.
Referring now to FIG. 5, a generalized diagram of a neural network is shown. A neural network is an information processing system that learns from empirical data by adjusting weighted connections between nodes.
The empirical data, also known as training data, from a set of examples can be formatted as a string of values and fed into the input of the neural network. Each example may be associated with a known result or output. Each example can be represented as a pair, (x, y), where x represents the input data and y represents the known output. The input data may include a variety of different data types, and may include multiple distinct values. The network can have one input node for each value making up the example's input data, and a separate weight can be applied to each input value. The input data can, for example, be formatted as a vector, an array, or a string depending on the architecture of the neural network being constructed and trained.
The neural network “learns” by comparing the neural network output generated from the input data to the known values of the examples, and adjusting the stored weights to minimize the differences between the output values and the known values. The adjustments may be made to the stored weights through back propagation, where the effect of the weights on the output values may be determined by calculating the mathematical gradient and adjusting the weights in a manner that shifts the output towards a minimum difference. This optimization, referred to as a gradient descent approach, is a non-limiting example of how training may be performed. A subset of examples with known values that were not used for training can be used to test and validate the accuracy of the neural network.
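For illustration only, a gradient-descent training loop of the kind described above may be sketched for a single linear neuron on synthetic placeholder data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # input data x for 100 examples
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)      # known output y for each example

w = np.zeros(3)                                  # stored weights, initially zero
learning_rate = 0.1
for _ in range(200):
    predictions = X @ w
    gradient = 2 * X.T @ (predictions - y) / len(y)  # gradient of the mean squared error
    w -= learning_rate * gradient                    # shift the output toward a minimum difference
# w now approximates true_w, and held-out examples can be used for validation
```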
During operation, the trained neural network can be used on new data that was not previously used in training or validation through generalization. The adjusted weights of the neural network can be applied to the new data, where the weights estimate a function developed from the training examples. The parameters of the estimated function which are captured by the weights are based on statistical inference.
In layered neural networks, nodes are arranged in the form of layers. An exemplary simple neural network has an input layer 520 of source nodes 522, and a single computation layer 530 having one or more computation nodes 532 that also act as output nodes, where there is a single computation node 532 for each possible category into which the input example could be classified. An input layer 520 can have a number of source nodes 522 equal to the number of data values 512 in the input data 510. The data values 512 in the input data 510 can be represented as a column vector. Each computation node 532 in the computation layer 530 generates a linear combination of weighted values from the input data 510 fed into input nodes 520, and applies a non-linear activation function that is differentiable to the sum. The exemplary simple neural network can perform classification on linearly separable examples (e.g., patterns).
A deep neural network, such as a multilayer perceptron, can have an input layer 520 of source nodes 522, one or more computation layer(s) 530 having one or more computation nodes 532, and an output layer 540, where there is a single output node 542 for each possible category into which the input example could be classified. An input layer 520 can have a number of source nodes 522 equal to the number of data values 512 in the input data 510. The computation nodes 532 in the computation layer(s) 530 can also be referred to as hidden layers, because they are between the source nodes 522 and output node(s) 542 and are not directly observed. Each node 532, 542 in a computation layer generates a linear combination of weighted values from the values output from the nodes in a previous layer, and applies a non-linear activation function that is differentiable over the range of the linear combination. The weights applied to the value from each previous node can be denoted, for example, by w1, w2, . . . wn-1, wn. The output layer provides the overall response of the network to the input data. A deep neural network can be fully connected, where each node in a computational layer is connected to all other nodes in the previous layer, or may have other configurations of connections between layers. If links between nodes are missing, the network is referred to as partially connected.
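As a non-limiting sketch of a single computation layer as described above, which forms a weighted linear combination of its inputs and applies a differentiable non-linear activation, the following uses arbitrary weights and a logistic sigmoid activation.

```python
import numpy as np

def compute_layer(values, weights, bias):
    """Weighted linear combination of the previous layer's outputs followed
    by a differentiable non-linear activation (logistic sigmoid here)."""
    z = weights @ values + bias
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
x = np.array([0.2, -1.0, 0.5])                # column vector of input data values
W_hidden = rng.normal(size=(4, 3))            # weights into 4 hidden computation nodes
W_output = rng.normal(size=(2, 4))            # weights into 2 output nodes (categories)
hidden = compute_layer(x, W_hidden, np.zeros(4))
output = compute_layer(hidden, W_output, np.zeros(2))
```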
Training a deep neural network can involve two phases, a forward phase where the weights of each node are fixed and the input propagates through the network, and a backwards phase where an error value is propagated backwards through the network and weight values are updated.
The computation nodes 532 in the one or more computation (hidden) layer(s) 530 perform a nonlinear transformation on the input data 512 that generates a feature space. The classes or categories may be more easily separated in the feature space than in the original data space.
Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements. In a preferred embodiment, the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers.
Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
As employed herein, the term “hardware processor subsystem” or “hardware processor” can refer to a processor, memory, software or combinations thereof that cooperate to perform one or more specific tasks. In useful embodiments, the hardware processor subsystem can include one or more data processing elements (e.g., logic circuits, processing circuits, instruction execution devices, etc.). The one or more data processing elements can be included in a central processing unit, a graphics processing unit, and/or a separate processor- or computing element-based controller (e.g., logic gates, etc.). The hardware processor subsystem can include one or more on-board memories (e.g., caches, dedicated memory arrays, read only memory, etc.). In some embodiments, the hardware processor subsystem can include one or more memories that can be on or off board or that can be dedicated for use by the hardware processor subsystem (e.g., ROM, RAM, basic input/output system (BIOS), etc.).
In some embodiments, the hardware processor subsystem can include and execute one or more software elements. The one or more software elements can include an operating system and/or one or more applications and/or specific code to achieve a specified result.
In other embodiments, the hardware processor subsystem can include dedicated, specialized circuitry that performs one or more electronic processing functions to achieve a specified result. Such circuitry can include one or more application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or programmable logic arrays (PLAs).
These and other variations of a hardware processor subsystem are also contemplated in accordance with embodiments of the present invention.
Reference in the specification to “one embodiment” or “an embodiment” of the present invention, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment. However, it is to be appreciated that features of one or more embodiments can be combined given the teachings of the present invention provided herein.
It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended for as many items listed.
The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims priority to U.S. Patent Application No. 63/539,626, filed on Sep. 21, 2023, incorporated herein by reference in its entirety.