The present invention relates to personalized federated learning, and, more particularly, to personalized federated learning via heterogeneous modular networks.
Personalized Federated Learning (PFL), which collaboratively trains a federated model while considering local clients under privacy constraints, has attracted much attention. Despite its popularity, it has been observed that existing PFL approaches result in sub-optimal solutions when the joint distribution among local clients diverges.
A method for personalizing heterogeneous clients is presented. The method includes initializing a federated modular network including a plurality of clients communicating with a server, maintaining, within the server, a heterogeneous module pool having sub-blocks and a routing hypernetwork, partitioning the plurality of clients by modeling a joint distribution of each client into clusters, enabling each client to make a decision in each update to assemble a personalized model by selecting a combination of sub-blocks from the heterogeneous module pool, and generating, by the routing hypernetwork, the decision for each client.
A non-transitory computer-readable storage medium comprising a computer-readable program for personalizing heterogeneous clients is presented. The computer-readable program when executed on a computer causes the computer to perform the steps of initializing a federated modular network including a plurality of clients communicating with a server, maintaining, within the server, a heterogeneous module pool having sub-blocks and a routing hypernetwork, partitioning the plurality of clients by modeling a joint distribution of each client into clusters, enabling each client to make a decision in each update to assemble a personalized model by selecting a combination of sub-blocks from the heterogeneous module pool, and generating, by the routing hypernetwork, the decision for each client.
A system for personalizing heterogeneous clients is presented. The system includes a processor and a memory that stores a computer program, which, when executed by the processor, causes the processor to initialize a federated modular network including a plurality of clients communicating with a server, maintain, within the server, a heterogeneous module pool having sub-blocks and a routing hypernetwork, partition the plurality of clients by modeling a joint distribution of each client into clusters, enable each client to make a decision in each update to assemble a personalized model by selecting a combination of sub-blocks from the heterogeneous module pool, and generate, by the routing hypernetwork, the decision for each client.
These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein:
The huge quantity of data available nowadays is usually stored in the form of isolated islands. The barriers between data sources are usually difficult to break. In this context, Federated Learning (FL) emerges as a prospective solution that facilitates distributed collaborative learning without disclosing original training data whilst naturally complying with government regulations. FL works by collaboratively training a model under the orchestration of a central server (e.g., a service provider) while keeping the training data decentralized. Instead of aggregating the raw data to a centralized data center for training, FL leaves the raw data distributed on the client devices and trains a shared model on the server by aggregating locally computed updates, thus mitigating systemic privacy risks and costs resulting from conventional centralized machine learning approaches. Consequently, different clients share the same model structure and global model parameters.
In real applications, local data stored across devices are usually heterogeneous. The data may be distributed in a non-independently and identically distributed (non-IID) manner across multiple devices. In addition, some users may produce significantly more or less data than others. Moreover, the number of edge device owners may be significantly larger than the average number of training samples on each device. The problem of data heterogeneity deteriorates the performance of the global FL model on individual clients due to the lack of solution personalization. The global model shared across clients will not generalize well on a local distribution that is very different from the global distribution. To tackle this issue, researchers focus on Personalized Federated Learning (PFL), which aims to make the global model fit the distributions on most of the devices.
The conventional PFL approaches first learn a global model and then locally adapt it to each client by fine-tuning the global parameters. In this case, the trained global model can be regarded as a meta-model ready for further personalization on each local client. In order to build a better meta-model, many efforts have been made to bridge FL and Model-Agnostic Meta-Learning (MAML). However, the global generalization error usually does not decrease much for these approaches. Thus, the performance cannot be significantly improved. Another line of research focuses on jointly training a global model and a local model for each client to achieve personalization. This strategy does not perform well on the clients whose local distributions are far from the average. Cluster-based PFL approaches address this issue by grouping the clients into several clusters. The clients in a cluster share the same model while those belonging to different clusters have different models. Unfortunately, the model trained in one cluster will not benefit from the knowledge of the clients in other clusters, which limits the capability to share knowledge, and, therefore, results in a sub-optimal solution.
An alternative strategy is to adopt the Multi-Task Learning (MTL) framework to train a PFL model. However, some efforts are restricted to solving a convex objective due to the multi-task penalty. They are usually transformed into a dual problem to obtain a closed-form solution during updating. Other MTL-based approaches are flexible enough for modern deep models and can be personalized to each client.
However, most existing efforts do not consider the difference in conditional distribution between clients, which is an important problem when building a federated model. For example, labels sometimes reflect sentiment. Some users may label a laptop as cheap while others may label the same laptop as expensive. This conditional distribution heterogeneity will cause model inaccuracies on clients where $p(y|x)$ is far from the average. To address the problem, recent works have assumed that the data distribution of each client is a mixture of $M$ underlying distributions, and a flexible framework was proposed in which each client learns a combination of $M$ shared components with different weights. It optimizes the varying conditional distribution $p_i(y|x)$ under the assumption that the marginal distribution $p_i(x) = p(x)$ is the same for all clients. This assumption, however, is problematic. For instance, in handwriting recognition, users who write the same words might still have different stroke widths, slants, etc. In such cases, $p_i(x) \neq p_j(x)$ for clients $i$ and $j$.
Other recent works assume either the marginal distribution $p_i(x)$ or the conditional distribution $p_i(y|x)$ to be the same across clients. In reality, the data on each client may deviate from being identically distributed, that is, $P_i \neq P_j$ for clients $i$ and $j$. In other words, the joint distribution $P_i(x, y)$ (which can be rewritten as $P_i(y|x)P_i(x)$ or $P_i(x|y)P_i(y)$) may differ across clients. This is referred to as the "joint distribution heterogeneity" problem. Existing approaches fail to completely model the difference in joint distribution between clients because they assume one term to be the same while varying the other. Moreover, to accommodate different data distributions, a homogeneous model would have to be very large to reach the required prediction power. Thus, the communication costs between the server and the clients would be huge, and communication becomes a key bottleneck when developing FL methods. To this end, it is desirable to design an effective PFL model that accommodates heterogeneous clients in an efficient manner.
To solve the aforementioned issues, a Federated Modular Networks (FedMN) approach is presented, which personalizes heterogeneous clients efficiently. The main idea is that the exemplary methods implicitly partition the clients into clusters by modeling their joint distributions, and the clients in the same cluster share the same architecture.
To sum up, the contributions are as follows: the problem of joint distribution heterogeneity in personalized FL is addressed, and a FedMN approach is presented to alleviate this issue. An efficient mechanism is developed to selectively upload model parameters, which decreases the communication cost between the clients 130 and the server 120.
As shown in the figures, the modular networks 310, 320 first encode the data features into low-dimensional embeddings with a group of encoders 305, 315. Then, personalized feature embeddings are obtained by discovering and assembling a set of modular blocks 307, 317 in different ways for different clients. The modular networks 310, 320 have $L$ layers, and the $l$-th layer has $n_l$ blocks of sub-networks. The encoders 305, 315 in the first layer are $n_1$ independent blocks that learn feature embeddings for each client.
Formally, let $x_i$ be the $i$-th sample; the feature embedding $z_i^{(j)}$ is obtained after the $j$-th encoder is applied:

$$z_i^{(j)} = \mathrm{Encoder}^{(j)}(x_i), \quad j = 1, \ldots, n_1. \qquad (1)$$
The choices of encoder networks are flexible. For example, convolutional neural networks (CNNs) can be adopted as encoders 305, 315 for image data and transformers for text data. The set of feature embeddings $\{z_i^{(1)}, \ldots, z_i^{(n_1)}\}$ is then passed to the modular blocks in the following layers.
MLPs are used as the modular blocks, and each pair of blocks in successive layers may or may not be connected. In total, there are at most $E$ possible connection paths between modular blocks, calculated as:

$$E = \sum_{j=1}^{L-1} n_j n_{j+1} + n_L. \qquad (2)$$
To determine which paths are connected, the exemplary methods need to learn a decision $V_m \in \mathbb{Z}_2^E$ for client $m$. Each element $v_i^{(m)} \in V_m$ is a binary variable with values chosen from $\{0, 1\}$: $v_i^{(m)} = 1$ indicates that the path between two blocks is connected, and $0$ otherwise. Since some blocks may have no connected paths, $V_m$ also determines which subset of blocks will be selected from the modular pool 115 for each client 130.
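As a concrete illustration of Eq. (2) and the decision vector $V_m$, the following minimal Python sketch computes the number of candidate paths for an assumed module-pool configuration and represents a client's decision as a binary vector; the layer sizes and the random decision are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch: count candidate connection paths E (Eq. (2)) and represent a
# client's decision V_m as a binary vector. Layer sizes are illustrative only.
import numpy as np

def num_connection_paths(layer_sizes):
    """E = sum_{j=1}^{L-1} n_j * n_{j+1} + n_L  (Eq. (2))."""
    E = sum(layer_sizes[j] * layer_sizes[j + 1] for j in range(len(layer_sizes) - 1))
    return E + layer_sizes[-1]

layer_sizes = [3, 4, 4, 2]          # n_1 encoders, then modular-block counts n_2..n_L
E = num_connection_paths(layer_sizes)
print(f"E = {E} candidate paths")   # 3*4 + 4*4 + 4*2 + 2 = 38

# A client's decision V_m is a binary vector of length E; v_i = 1 keeps path i.
rng = np.random.default_rng(0)
V_m = rng.integers(0, 2, size=E)    # stand-in for the routing hypernetwork's decision
print("selected paths:", int(V_m.sum()), "of", E)
```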
With the defined modular networks, the exemplary methods can formally define the learning objective. Specifically, in a generic FL setting with $M$ clients, where each client has a local dataset $D_m = \{(x_i, y_i)\}_{i=1}^{|D_m|}$, the goal is to minimize the aggregated empirical risk:

$$\min_{W} \; \sum_{m=1}^{M} \frac{|D_m|}{|D|} L_m(W). \qquad (3)$$
Here, $W$ is the model parameter, $D = \cup_m D_m$ is the aggregated dataset from all clients, and $L_m(W)$ is the empirical risk computed from client $m$'s data. The objective in (3) is optimized by iterating between local training and global aggregation for multiple communication rounds. For generic FL, the exemplary methods perform $\hat{y}_i = f_W(x_i)$ to make a prediction in the local updating.
In the FedMN framework, after getting $V_m$, the architecture of the modular network for client $m$ is fixed at an epoch during local updating. The model $f$ can be parameterized by $\theta$, which includes the parameters of the modular networks 310, 320 and the routing hypernetwork 330.
When making a prediction, the exemplary methods have $\hat{y}_i = f_\theta(x_i; V_m)$. Then, it is straightforward to extend the generic FL objective to obtain the empirical risk of FedMN 100 as:

$$\min_{\theta, \{V_m\}_{m=1}^{M}} \; \sum_{m=1}^{M} \frac{|D_m|}{|D|} L_m(\theta; V_m). \qquad (4)$$
However, the direct optimization of the objective in (4) is intractable, as there are $2^E$ candidates for each $V_m$. Thus, a relaxation is considered by assuming that the decisions for the connection paths $v_i^{(m)} \in V_m$ are conditionally independent of each other. Formally:

$$P(V_m) = \prod_{i=1}^{E} P\big(v_i^{(m)}\big). \qquad (5)$$
A straightforward instantiation of $P(v_i^{(m)})$ is the Bernoulli distribution $v_i^{(m)} \sim \mathrm{Bern}(\pi_i^{(m)})$, where $P(v_i^{(m)} = 1) = \pi_i^{(m)}$ is the probability that the $i$-th path exists in $V_m$. With this relaxation, the objective in (4) can be rewritten as:

$$\min_{\theta, \{\Pi_m\}_{m=1}^{M}} \; \sum_{m=1}^{M} \frac{|D_m|}{|D|}\, \mathbb{E}_{V_m \sim q(\Pi_m)}\big[L_m(\theta; V_m)\big], \qquad (6)$$

where $q(\Pi_m)$ is the distribution of the decision variable parameterized by the $\pi_i^{(m)}$'s.
Due to the binary nature of $V_m$, it is impractical to optimize (6) with gradient-based backpropagation. To enable efficient computation, the exemplary methods further approximate the binary vector $V_m \in \mathbb{Z}_2^E$ with a continuous real-valued vector in $[0,1]^E$. In practice, the exemplary methods approximate each Bernoulli distribution $v_i^{(m)} \sim \mathrm{Bern}(\pi_i^{(m)})$ with a binary concrete distribution.
Formally, letting $\sigma(\cdot)$ be the sigmoid function, the relaxation is given as:

$$v_i^{(m)} = \sigma\!\left(\frac{\log \epsilon - \log(1 - \epsilon) + \log \pi_i^{(m)} - \log\big(1 - \pi_i^{(m)}\big)}{\tau}\right), \quad \epsilon \sim \mathrm{Uniform}(0, 1). \qquad (7)$$
The hyper-parameter $\tau$ is a temperature variable that trades off between approximation quality and binary output.
For justification, when the temperature $\tau$ approaches 0, the binary concrete distribution of $v_i^{(m)}$ in (7) converges to the Bernoulli distribution $v_i^{(m)} \sim \mathrm{Bern}(\pi_i^{(m)})$. Specifically, since $\epsilon$ and $\pi_i^{(m)}$ both lie in $(0, 1)$, and the function $\log\frac{x}{1-x}$ is monotonically increasing in this region, it follows that:

$$\lim_{\tau \to 0} P\big(v_i^{(m)} = 1\big) = P\Big(\log\epsilon - \log(1-\epsilon) + \log\pi_i^{(m)} - \log\big(1 - \pi_i^{(m)}\big) > 0\Big) = P\big(\epsilon > 1 - \pi_i^{(m)}\big) = \pi_i^{(m)}.$$
Therefore, with this reparameterization, combining (6) and (7) yields the learning objective:

$$\min_{\theta, \{\Pi_m\}_{m=1}^{M}} \; \sum_{m=1}^{M} \frac{|D_m|}{|D|}\, \mathbb{E}_{\epsilon \sim \mathrm{Uniform}(0,1)}\big[L_m\big(\theta; V_m(\Pi_m, \epsilon)\big)\big]. \qquad (8)$$
When the temperature $\tau > 0$, the objective function in (8) has a well-defined gradient that enables efficient optimization with backpropagation.
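To make the relaxation in (7)-(8) concrete, the following PyTorch sketch samples reparameterized, differentiable path decisions from a binary concrete distribution; the tensor shapes, temperature value, and variable names are assumptions for illustration only, not the patented implementation.

```python
# Minimal sketch of the binary-concrete relaxation in Eq. (7): differentiable
# "soft" path decisions that approach Bernoulli(pi) as the temperature tau -> 0.
import torch

def binary_concrete(pi, tau=0.5):
    """Reparameterized sample v = sigmoid((log eps - log(1-eps) + logit(pi)) / tau)."""
    eps = torch.rand_like(pi).clamp(1e-6, 1 - 1e-6)   # eps ~ Uniform(0,1)
    logits = torch.log(pi) - torch.log(1 - pi)        # logit(pi)
    noise = torch.log(eps) - torch.log(1 - eps)       # logistic noise
    return torch.sigmoid((noise + logits) / tau)

pi = torch.full((38,), 0.7, requires_grad=True)       # connection probabilities Pi_m
v_soft = binary_concrete(pi, tau=0.5)                 # differentiable decisions
v_hard = (v_soft > 0.5).float()                       # discretized at test time
v_soft.sum().backward()                               # gradients flow back to pi
print(v_soft[:5], pi.grad[:5])
```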
A routing hypernetwork 330 that automatically learns $\Pi_m$ from the joint distribution is now presented.
Suppose $M$ clients are provided, each owning one of the local datasets $D_1, \ldots, D_M$, where $D_m = \{(x_1, y_1), \ldots, (x_{n_m}, y_{n_m})\}$. The joint distribution of a set of variables can be embedded into a tensor-product reproducing kernel Hilbert space (RKHS) as:

$$C_{X_{1:p}} = \mathbb{E}_{X_{1:p}}\Big[\otimes_{\eta=1}^{p} \phi_\eta(X_\eta)\Big], \qquad (9)$$

where $X_{1:p}$ is a set of $p$ variables $\{X_1, \ldots, X_p\}$ defined on $\times_{\eta=1}^{p}\Omega_\eta = \Omega_1 \times \cdots \times \Omega_p$, $\phi_\eta$ is the feature map of variable $X_\eta$ endowed with kernel $k_\eta$ in RKHS $\mathcal{H}_\eta$, and $\otimes_{\eta=1}^{p}\phi_\eta(x_\eta) = \phi_1(x_1) \otimes \cdots \otimes \phi_p(x_p)$ is the feature map in the tensor product Hilbert space, whose inner product satisfies $\langle \otimes_{\eta=1}^{p}\phi_\eta(x_\eta), \otimes_{\eta=1}^{p}\phi_\eta(x'_\eta) \rangle = \prod_{\eta=1}^{p} k_\eta(x_\eta, x'_\eta)$. The joint embedding $C_{X_{1:p}}$ is an uncentered cross-covariance operator.
To estimate the embedding of the distribution $P(X_1, \ldots, X_p)$, finite samples can be used. For a sample set $D_{X_{1:p}} = \{x^{(1)}, \ldots, x^{(n)}\}$ drawn from $P(X_1, \ldots, X_p)$, the empirical estimate of the joint embedding is:

$$\hat{C}_{X_{1:p}} = \frac{1}{n} \sum_{i=1}^{n} \otimes_{\eta=1}^{p} \phi_\eta\big(x_\eta^{(i)}\big), \qquad (10)$$

which converges to its population counterpart in the RKHS norm. For instantiation, since the joint distribution over the feature domain $\mathcal{X}$ and the label domain $\mathcal{Y}$ is considered for client $m$, $m \in [M]$, the joint embedding is given as:

$$\hat{C}_{XY}^{(m)} = \frac{1}{n_m} \sum_{i=1}^{n_m} \phi_x(x_i) \otimes \phi_y(y_i). \qquad (11)$$
The mappings $\phi_x(X)$ and $\phi_y(y)$ are flexible. The tensor product $\phi_x(X) \otimes \phi_x(X)$ or higher-order ones can be used, such as $\phi_x(X) \otimes \phi_x(X) \otimes \phi_x(X)$. Let $\theta_h$ denote the parameters used in the routing hypernetwork, which are a part of the model parameters $\theta$. The exemplary methods parameterize the feature mappings by neural networks, and thus the joint embedding estimator in (11) becomes:

$$\hat{C}_{XY}^{(m)} = \frac{1}{n_m} \sum_{i=1}^{n_m} \phi_{\theta_h}^{x}(x_i) \otimes \phi_{\theta_h}^{y}(y_i). \qquad (12)$$
Then, two fixed-size vector representations of a dataset are provided by the averaged outputs of the two neural networks, $\bar{\phi}_x^{(m)} = \frac{1}{n_m}\sum_{i=1}^{n_m} \phi_{\theta_h}^{x}(x_i)$ and $\bar{\phi}_y^{(m)} = \frac{1}{n_m}\sum_{i=1}^{n_m} \phi_{\theta_h}^{y}(y_i)$. Combining these two representations results in a vector of joint embedding of the local dataset at client $m$.
Then, $\Pi_m$ can be obtained by passing the joint embedding vector of client $m$ through the remaining layers of the routing hypernetwork 330, whose sigmoid outputs give the connection probabilities $\pi_1^{(m)}, \ldots, \pi_E^{(m)}$.
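The following sketch shows one possible instantiation of such a routing hypernetwork: two small networks embed features and one-hot labels, the per-client embeddings are averaged over the local dataset, and a final layer with a sigmoid outputs the connection probabilities $\Pi_m$. The layer widths and the use of concatenation (rather than a tensor product) to combine the two averaged embeddings are simplifying assumptions, not the patented implementation.

```python
# Minimal sketch of a routing hypernetwork in the spirit of Eqs. (11)-(12):
# embed features and labels, average over the client's dataset, and map the
# dataset-level representation to connection probabilities Pi_m in (0,1)^E.
import torch
import torch.nn as nn

class RoutingHypernetwork(nn.Module):
    def __init__(self, x_dim, num_classes, embed_dim=16, num_paths=38):
        super().__init__()
        self.phi_x = nn.Sequential(nn.Linear(x_dim, embed_dim), nn.ReLU())
        self.phi_y = nn.Sequential(nn.Linear(num_classes, embed_dim), nn.ReLU())
        self.head = nn.Linear(2 * embed_dim, num_paths)

    def forward(self, X, Y_onehot):
        zx = self.phi_x(X).mean(dim=0)           # averaged feature embedding of D_m
        zy = self.phi_y(Y_onehot).mean(dim=0)    # averaged label embedding of D_m
        joint = torch.cat([zx, zy], dim=-1)      # dataset-level joint representation
        return torch.sigmoid(self.head(joint))   # Pi_m: one probability per path

X = torch.randn(128, 20)                          # toy local dataset of client m
Y = nn.functional.one_hot(torch.randint(0, 5, (128,)), num_classes=5).float()
pi_m = RoutingHypernetwork(x_dim=20, num_classes=5)(X, Y)
print(pi_m.shape)                                 # torch.Size([38])
```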
Since $V_m$ determines the connection paths between blocks, some blocks may not have connections with other ones. To clarify the message passing between blocks, the connection paths between the blocks in layer $(l-1)$ and layer $l$ are denoted as $C^{(m)} \in \{0, 1\}^{n_{l-1} \times n_l}$, with the element $C_{jk}^{(m)} \in \{0, 1\}$ in its $j$-th row and $k$-th column.
Letting $u_j^{(l)}$ be the input tensor for the $j$-th block in layer $l$, and $\tilde{u}_j^{(l)}$ be its output, the message passing is:

$$u_j^{(l)} = \sum_{k=1}^{n_{l-1}} C_{kj}^{(m)}\, \tilde{u}_k^{(l-1)}, \qquad j = 1, \ldots, n_l. \qquad (15)$$
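A minimal sketch of the message passing in (15) follows: the input to each block in layer $l$ is accumulated from the outputs of the blocks in layer $l-1$ that are connected to it by $C^{(m)}$. The shapes and the toy connection matrix are illustrative assumptions.

```python
# Minimal sketch of Eq. (15): gather each block's input from the connected
# outputs of the previous layer using the binary connection matrix C^{(m)}.
import torch

n_prev, n_cur, d = 4, 3, 16
outputs_prev = torch.randn(n_prev, 8, d)        # \tilde{u}_k^{(l-1)} for a batch of 8
C_m = torch.tensor([[1, 0, 1],                  # C^{(m)} in {0,1}^{n_{l-1} x n_l}
                    [0, 1, 0],
                    [1, 1, 0],
                    [0, 0, 0]], dtype=torch.float32)

# u_j^{(l)} = sum_k C_{kj}^{(m)} * \tilde{u}_k^{(l-1)}
inputs_cur = torch.einsum('kj,kbd->jbd', C_m, outputs_prev)
print(inputs_cur.shape)                          # torch.Size([3, 8, 16])
print(torch.allclose(inputs_cur[0], outputs_prev[0] + outputs_prev[2]))  # True
```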
To decrease the number of model parameters transmitted between the clients and the server, a block-wise strategy is developed for the clients to upload their local models to the server and copy models from the server. In detail, once the decision $V_m$ is obtained, it is known from (15) that the inputs for some blocks are all 0's. Therefore, only some blocks are active, namely those whose input is not all 0's. In total, there are $B$ blocks in the modular network, where $B = n_2 + \cdots + n_L$. Let $a_m \in \mathbb{Z}_2^B$ denote which blocks are active in the local model at client $m$, with the element $a_i^{(m)} = 1$ if the input for the $i$-th block is not 0 and $a_i^{(m)} = 0$ otherwise. When uploading the model to the server, the client only uploads the active blocks for which $a_i^{(m)} = 1$. When copying the model from the server, the client only copies the parameters of the active blocks from the global model. This strategy significantly reduces unnecessary communication costs between the clients and the server.
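The following sketch illustrates this block-wise upload strategy under assumed data structures: a block is marked active if it has at least one incoming connection, and only the parameters of active blocks are kept for transmission to the server. The dict-of-tensors model layout and the `(layer, block)` keys are assumptions for illustration.

```python
# Minimal sketch of the selective (block-wise) upload: derive the active-block
# mask a^{(m)} from the connection matrices and keep only active blocks.
import torch

# One connection matrix per modular layer l = 2..L (toy values).
C = {2: torch.tensor([[1, 0], [0, 0], [1, 0]]),   # layer-2 blocks: only block 0 active
     3: torch.tensor([[0, 1], [0, 1]])}           # layer-3 blocks: only block 1 active

def active_blocks(connections):
    """a^{(m)}: 1 if a block has any incoming connection, 0 otherwise."""
    return {l: (C_l.sum(dim=0) > 0).int() for l, C_l in connections.items()}

def select_upload(local_params, a_m):
    """Keep only parameters of active blocks for transmission to the server."""
    return {key: p for key, p in local_params.items()
            if a_m[key[0]][key[1]].item() == 1}    # key = (layer, block_index)

a_m = active_blocks(C)                             # {2: tensor([1, 0]), 3: tensor([0, 1])}
local_params = {(2, 0): torch.randn(4, 4), (2, 1): torch.randn(4, 4),
                (3, 0): torch.randn(4, 4), (3, 1): torch.randn(4, 4)}
print(sorted(select_upload(local_params, a_m).keys()))   # [(2, 0), (3, 1)]
```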
When all the clients upload their local models to the server, the server averages the models to obtain the global modules. In the proposed FedMN 100, the aggregation for the routing hypernetwork 110 is similar to FedAvg. For the modular networks 310, 320, the aggregation is performed block-wise: each block is averaged only over the clients that selected it, and blocks not chosen by any client are not aggregated.
The federated learning process of FedMN 100 is provided in Algorithm 1 below. The computational complexity per round at each client in FedMN 100 is the same as that in FedAvg. The FedMN algorithm is a personalized FL method whose convergence is guaranteed.
Regarding the Federated Modular Networks Algorithm:
Input: Number of clients $M$; local datasets $\{D_m\}_{m=1}^{M}$, where $D_m = \{(x_i, y_i)\}_{i=1}^{|D_m|}$; number of communication rounds $T$.
Output: Local models $\theta_m^T$ for $m \in [M]$; global model $\theta^T$; local decisions $V_m^T$ for $m \in [M]$. The server initializes the global modular pool $\{n_l\}_{l=1}^{L}$.
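As a rough illustration of the overall federated process, the following highly simplified and runnable Python skeleton walks through FedMN-style communication rounds in which each client downloads, updates, and uploads only its selected blocks and the server aggregates block-wise. The random block selections and the no-op "training" step are placeholders under stated assumptions, not the patented Algorithm 1.

```python
# Simplified skeleton of FedMN-style rounds: the server keeps a global pool of
# block parameters, each client updates only its selected blocks, and the
# server averages block-wise. Models are plain vectors; training is a stub.
import numpy as np

rng = np.random.default_rng(0)
NUM_BLOCKS, DIM, CLIENTS, ROUNDS = 6, 4, 3, 2
global_pool = {b: np.zeros(DIM) for b in range(NUM_BLOCKS)}

def client_update(blocks):
    """Download selected blocks, 'train' locally, return updated block params."""
    local = {b: global_pool[b].copy() for b in blocks}          # copy active blocks
    for b in blocks:                                            # stand-in for SGD steps
        local[b] += rng.normal(scale=0.1, size=DIM)
    return local

for _ in range(ROUNDS):
    selections = {m: sorted(rng.choice(NUM_BLOCKS, size=3, replace=False))
                  for m in range(CLIENTS)}                      # stand-in for routing decisions
    uploads = {m: client_update(selections[m]) for m in range(CLIENTS)}
    for b in range(NUM_BLOCKS):                                 # block-wise aggregation
        contribs = [uploads[m][b] for m in range(CLIENTS) if b in uploads[m]]
        if contribs:                                            # skip unchosen blocks
            global_pool[b] = np.mean(contribs, axis=0)
print({b: np.round(v, 3).tolist() for b, v in global_pool.items()})
```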
The exemplary methods address the problem of joint distribution heterogeneity in personalized FL. To tackle this issue, the exemplary methods propose a novel FedMN approach that adaptively assembles architectures for each client by selecting a subset of module blocks from a module pool in the global model. The proposed FedMN 100 adopts a light-weighted routing hypernetwork to model the joint distribution for each client and produce the module selection decisions. Advised by the decision, each client selects its personalized architecture. During federated updating, each client uploads and downloads only part of the module parameters, which reduces the communication burden between the server and the clients.
The routing hypernetwork 110 produces decisions for each of the clients 130. The clients 130 with similar decisions are grouped into the same cluster, which copies the same subset of blocks as the local model from the module pool 115 in the server 120. After the local updating on each client 130, the clients 130 send their model parameters back to the server 120. The server 120 aggregates the model parameters block-wise, which results in a global module pool 115.
There are various applications 230 for the proposed architecture 100 for personalized federated learning. For all general supervised or unsupervised learning tasks that involve edge devices 210, such as smartphones, sensors, radars, and so forth, the proposed architecture 100 can provide personalized predictions for the edge devices while the prediction model benefits from the knowledge shared by other edge devices. The whole framework is privacy-protected, and the communication costs between edges are low. The diverse artificial intelligence (AI) services 220 provided can include services 222 such as anomaly detection, label prediction, sales prediction, finance prediction, medical prediction, natural language processing (NLP), etc.
The modular networks 310, 320 include a group of encoders 305, 315 in the first layer and modular blocks 307, 317 in the following layers. The connection paths between blocks are determined by a decision from the routing hypernetwork 330. The input to the modular networks 310, 320 is sample-wise, while the input to the routing hypernetwork 330 is the full dataset of each client.
At block 410, input data is received from edge services.
At block 420, edge devices are locally trained by using local data.
At block 430, the locally selected modules' parameters and local hyper network parameters are sent to the server.
At block 440, the server aggregates block-wise module parameters and the hyper network parameters.
At block 450, the aggregated global parameters are sent back to local clients.
At block 460, a prediction is made.
At block 420, edge devices are locally trained by using local data.
At block 522, adaptively select a subset of modules from a module pool to assemble heterogeneous architectures for different clients.
At block 524, use a light-weighted routing hyper network to model the joint data distribution of the local client.
At block 526, edge devices use the local routing hyper network.
At block 430, the locally selected modules' parameters and local hyper network parameters are sent to the server.
At block 532, only selected modules' parameters and hyper networks are sent to the server.
At block 534, it is noted that this will significantly decrease communication costs.
At block 440, the server aggregates block-wise module parameters and the hyper network parameters.
At block 642, the aggregation is a weighted average of the block parameters, that is, a block is averaged only if it is chosen by k clients (k>0); hyper network parameters are averaged over all clients.
At block 644, blocks not chosen by any clients are not aggregated.
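The following sketch illustrates the weighted block-wise aggregation described at blocks 642 and 644 under an assumed dict-of-arrays layout: a block is averaged only over the clients that chose it, weighted by local dataset size, and unchosen blocks are left untouched.

```python
# Minimal sketch of weighted block-wise server aggregation: average a block only
# over the clients that selected it, weighted by local dataset size.
import numpy as np

def aggregate_blocks(global_blocks, client_uploads, client_sizes):
    """client_uploads[m] maps block_id -> parameter array for client m's chosen blocks."""
    new_blocks = dict(global_blocks)
    for b in global_blocks:
        chosen = [m for m in client_uploads if b in client_uploads[m]]
        if not chosen:                      # block 644: unchosen blocks are not aggregated
            continue
        total = sum(client_sizes[m] for m in chosen)
        new_blocks[b] = sum(client_sizes[m] / total * client_uploads[m][b] for m in chosen)
    return new_blocks

global_blocks = {0: np.zeros(2), 1: np.zeros(2), 2: np.zeros(2)}
uploads = {"A": {0: np.array([1.0, 1.0])},
           "B": {0: np.array([3.0, 3.0]), 2: np.array([2.0, 2.0])}}
sizes = {"A": 10, "B": 30}
print(aggregate_blocks(global_blocks, uploads, sizes))
# block 0 -> 0.25*[1,1] + 0.75*[3,3] = [2.5, 2.5]; block 1 unchanged; block 2 -> [2, 2]
```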
At block 450, the aggregated global parameters are sent back to local clients.
At block 652, only those blocks that are updated need to be sent back to the clients.
The processing system includes at least one processor (CPU) 904 operatively coupled to other components via a system bus 902. A GPU 905, a cache 906, a Read Only Memory (ROM) 908, a Random Access Memory (RAM) 910, an input/output (I/O) adapter 920, a network adapter 930, a user interface adapter 940, and a display adapter 950, are operatively coupled to the system bus 902. Additionally, the Federated Modular Network (FedMN) 100 is presented, a novel PFL approach that adaptively selects sub-modules from a module pool to assemble heterogeneous neural architectures for different clients. FedMN 100 adopts a light-weighted routing hypernetwork 110 to model the joint distribution on each client 130 and produce the personalized selection of the module blocks for each client 130. To reduce the communication burden in existing FL, an efficient way to interact between the clients 130 and the server 120 is developed.
A storage device 922 is operatively coupled to system bus 902 by the I/O adapter 920. The storage device 922 can be any of a disk storage device (e.g., a magnetic or optical disk storage device), a solid-state magnetic device, and so forth.
A transceiver 932 is operatively coupled to system bus 902 by network adapter 930.
User input devices 942 are operatively coupled to system bus 902 by user interface adapter 940. The user input devices 942 can be any of a keyboard, a mouse, a keypad, an image capture device, a motion sensing device, a microphone, a device incorporating the functionality of at least two of the preceding devices, and so forth. Of course, other types of input devices can also be used, while maintaining the spirit of the present invention. The user input devices 942 can be the same type of user input device or different types of user input devices. The user input devices 942 are used to input and output information to and from the processing system.
A display device 952 is operatively coupled to system bus 902 by display adapter 950.
Of course, the processing system may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in the system, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art. These and other variations of the processing system are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein.
At block 1001, initializing a federated modular network including a plurality of clients communicating with a server.
At block 1003, maintaining, within the server, a heterogeneous module pool having sub-blocks and a routing hypernetwork.
At block 1005, partitioning the plurality of clients by modeling a joint distribution of each client into clusters.
At block 1007, enabling each client to make a decision in each update to assemble a personalized model by selecting a combination of sub-blocks from the heterogeneous module pool.
At block 1009, generating, by the routing hypernetwork, the decision for each client.
As used herein, the terms “data,” “content,” “information” and similar terms can be used interchangeably to refer to data capable of being captured, transmitted, received, displayed and/or stored in accordance with various example embodiments. Thus, use of any such terms should not be taken to limit the spirit and scope of the disclosure. Further, where a computing device is described herein to receive data from another computing device, the data can be received directly from the another computing device or can be received indirectly via one or more intermediary computing devices, such as, for example, one or more servers, relays, routers, network access points, base stations, and/or the like. Similarly, where a computing device is described herein to send data to another computing device, the data can be sent directly to the another computing device or can be sent indirectly via one or more intermediary computing devices, such as, for example, one or more servers, relays, routers, network access points, base stations, and/or the like.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” “calculator,” “device,” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical data storage device, a magnetic data storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can include, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks or modules.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks or modules.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks or modules.
It is to be appreciated that the term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a CPU (central processing unit) and/or other processing circuitry. It is also to be understood that the term “processor” may refer to more than one processing device and that various elements associated with a processing device may be shared by other processing devices.
The term “memory” as used herein is intended to include memory associated with a processor or CPU, such as, for example, RAM, ROM, a fixed memory device (e.g., hard drive), a removable memory device (e.g., diskette), flash memory, etc. Such memory may be considered a computer readable storage medium.
In addition, the phrase “input/output devices” or “I/O devices” as used herein is intended to include, for example, one or more input devices (e.g., keyboard, mouse, scanner, etc.) for entering data to the processing unit, and/or one or more output devices (e.g., speaker, display, printer, etc.) for presenting results associated with the processing unit.
The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims.
This application claims priority to Provisional Application No. 63/349,988 filed on Jun. 7, 2022, the contents of which are incorporated herein by reference in their entirety.