The present disclosure relates generally to methods and apparatus for cascaded federated learning for performance in a telecommunications network.
Decisions related, for example, to a secondary carrier handover or selection process in a telecommunications network are currently taken at the network side, where a communication device (e.g., a user equipment (UE)) reports different measurements based on network requests or periodic allocation. The periodicity of such measurement requests from the UE might vary from tens of milliseconds to more than hundreds of milliseconds.
From a machine learning (ML) perspective, federated learning (FL) is a machine learning tool that competes with approaches in which ML models train on large aggregations of data collected over multiple data sources. In this disclosure, such ML models are referred to as “centralized machine learning models”.
Generally, FL follows operations illustrated in
According to some embodiments, a method performed by a network computing device in a telecommunications network is provided for adaptively deploying an aggregated machine learning model and an output parameter in the telecommunications network to control an operation in the telecommunications network. The network computing device can perform operations aggregating a plurality of client machine learning models received from a plurality of client computing devices in the telecommunications network to obtain an aggregated machine learning model. The network computing device can perform further operations aggregating an output performance metric of the plurality of the client machine learning models received from the plurality of client computing devices to obtain an aggregated output performance metric. The network computing device can perform further operations training a network machine learning model with inputs including 1) the aggregated output performance metric and 2) at least one measurement of a network parameter to obtain an output parameter of the network machine learning model. The network computing device can perform further operations sending to the plurality of client computing devices the aggregated machine learning model and the output parameter of the network machine learning model.
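The server-side round described above (aggregate the client models, aggregate their output performance metrics, train the network model, return the results) can be sketched as follows. This is a minimal illustration only: the simple-averaging aggregation, the toy one-weight linear network model, and all function names are assumptions, not the disclosed implementation.

```python
def aggregate_models(client_models):
    """Average the clients' model weights position-wise (FedAvg-style)."""
    n = len(client_models)
    return [sum(ws) / n for ws in zip(*client_models)]

def aggregate_metrics(client_metrics):
    """Average the clients' reported output performance metrics (e.g., MSEs)."""
    return sum(client_metrics) / len(client_metrics)

def train_network_model(aggregated_metric, network_measurements, lr=0.01, steps=100):
    """Toy gradient-descent fit of a one-weight 'network model' whose inputs
    combine the aggregated client metric with a network measurement."""
    w = 0.0
    for _ in range(steps):
        for x, target in network_measurements:
            pred = w * x + aggregated_metric
            w -= lr * (pred - target) * x
    return w  # the "output parameter" sent back down to the clients

# One example round with three clients, each holding a two-weight model
client_models = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
client_metrics = [0.10, 0.20, 0.30]  # e.g., per-client MSEs

agg_model = aggregate_models(client_models)      # [3.0, 4.0]
agg_metric = aggregate_metrics(client_metrics)   # 0.2
output_param = train_network_model(agg_metric, [(1.0, 1.2), (2.0, 2.2)])
```

In a deployment the aggregation would operate on neural network weight tensors, and the network model would train on measurements such as throughput, load, and interference; the structure of the round is the point here.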
Corresponding embodiments of inventive concepts for network computing devices, computer products, and computer programs are also provided.
According to some embodiments, a method performed by a client computing device in a telecommunications network is provided to control an operation in the telecommunications network. The client computing device can perform operations receiving an aggregated machine learning model from a network computing device. The client computing device can perform further operations receiving an output parameter of a network machine learning model from the network computing device. The client computing device can perform further operations training the aggregated machine learning model in iterations with inputs. The inputs include 1) the output parameter and 2) at least a location or at least one measurement of the client computing device to obtain an output performance metric of the aggregated machine learning model. The client computing device can perform further operations sending the output performance metric of the aggregated machine learning model to the network computing device at each iteration of the training or at the last iteration of the training.
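The client-side iteration described above can be sketched similarly. The one-weight linear model, the fusion of the server's output parameter with a local measurement, and the MSE as the output performance metric are illustrative assumptions, not the disclosed implementation.

```python
def client_train(model_w, output_param, samples, lr=0.05, iters=50):
    """Train the received aggregated model locally; each input fuses the
    server's output parameter with a local measurement (e.g., RSRP)."""
    mse = 0.0
    for _ in range(iters):
        mse = 0.0
        for local_meas, target in samples:
            x = local_meas + output_param  # server parameter + local feature
            pred = model_w * x
            err = pred - target
            model_w -= lr * err * x
            mse += err * err
        mse /= len(samples)  # output performance metric for this iteration
    return model_w, mse

# Local data: (local measurement, observed secondary-carrier strength)
samples = [(1.0, 2.0), (2.0, 4.0)]
w, metric = client_train(model_w=0.0, output_param=0.0, samples=samples)
```

The returned `metric` is what the client would send to the network computing device at each iteration (or only at the last one), alongside or instead of the model itself.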
Corresponding embodiments of inventive concepts for client computing devices, computer products, and computer programs are also provided.
Other systems, computer program products, and methods according to embodiments will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional systems, computer program products, and methods be included within this description and protected by the accompanying embodiments.
The following explanation of potential problems is a present realization as part of the present disclosure and is not to be construed as previously known by others. Some approaches for improving telecommunications (mobile) network performance, e.g., secondary carrier prediction, may not use machine learning. Thus, without a deployed machine learning agent, the network and a UE may not be able to predict parameters for controlling an operation in the network.
Another possible approach may use centralized machine learning at the network side. Centralized machine learning, however, may use significant signaling and measurement reporting in a training phase; and may not have UE features that help in predictions due to privacy or other issues. Thus, centralized machine learning may ignore UE input to predict parameters for controlling an operation in the network.
Another possible approach may use federated learning. Federated learning, however, may be limited to features of the client devices, and incorporation of features of client devices and a gNB may not be possible.
Thus, improved processes for predicting parameters for controlling an operation in a telecommunications network are needed.
One or more embodiments of the present disclosure may include methods for deploying an aggregated machine learning model and an output parameter in a telecommunications network to control an operation in the telecommunications network (also referred to herein as a network). The methods may include a network computing device that uses a cascaded and hybrid federated model to adaptively enable client computing devices (e.g., UEs) to participate in heterogeneously taking a decision on an operation in the network. Operational advantages that may be provided by one or more embodiments include preserving privacy of the UE's information (e.g., a UE's private information, such as location, may not be shared), and measurements and features at both UEs and a network computing device (e.g., a gNB) may be used. Thus, one or more embodiments may improve a parameter in the network and an associated decision for controlling that parameter.
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this application, illustrate certain non-limiting embodiments of inventive concepts. In the drawings:
Various embodiments will be described more fully hereinafter with reference to the accompanying drawings. Other embodiments may take many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art. Like numbers refer to like elements throughout the detailed description.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
As used herein, a client computing device refers to any device intended for accessing services via an access network and configured to communicate over the access network. For instance, the client computing device may be, but is not limited to: a user equipment (UE), a communication device, mobile phone, smart phone, sensor device, meter, vehicle, household appliance, medical appliance, media player, camera, or any type of consumer electronic, for instance, but not limited to, television, radio, lighting arrangement, tablet computer, laptop, or PC. The client computing device may be a portable, pocket-storable, hand-held, computer-comprised, or vehicle-mounted mobile device, enabled to communicate voice and/or data, via a wireless or wireline connection.
As used herein, network computing device refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a client computing device and/or with other network nodes or equipment in the radio communication network to enable and/or provide wireless access to the user device and/or to perform other functions (e.g., administration) in the radio communication network. Examples of network nodes include, but are not limited to, base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs), gNode Bs (including, e.g., network computing node 201, etc.)), access points (APs) (e.g., radio access points), servers, etc. Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and may then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS). Yet further examples of network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs. As another example, a network node may be a virtual network node.
More generally, however, network nodes may represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a user device with access to the telecommunications network or to provide some service to a user device that has accessed the telecommunications network.
Some approaches for federated learning may provide advantages in a wireless network. Possible advantages may include that federated learning may provide improvements to a mobile network (e.g., a 5G network) in terms of preserving UE information privacy. For example, a UE may not send the UE's position to a gNB, and may use a learning model instead. Additional potential advantages may include an exchange of learning among UEs, enabling more efficient signaling for a gNB and UEs (e.g., reduced signaling), and decreased data transfer, since information that is exchanged between UEs and a gNB may be compressed by way of a neural network.
Potential problems with some approaches may be categorized depending on the type of approach as described below.
Potential problems related to deployed systems in a network without federated learning may include the following.
In some systems, no machine learning agent is deployed in a system. Accordingly, network equipment (e.g., a gNB) or a UE cannot predict a parameter (e.g., the reference signal received power (RSRP)/reference signal received quality (RSRQ)) without machine learning or a statistical prediction algorithm; and only UE measurement and reporting of RSRP/RSRQ may be relied on. Thus, decisions may be delayed (e.g., secondary carrier handover, carrier aggregation selection, dual connectivity selection decisions).
In other systems, a centralized machine learning approach may be deployed at the network side. In such an approach, a network may try to predict a parameter (e.g., signal strengths) at the UE side. This approach, however, may cause potential problems including: 1) large signaling and measurement reporting at a training phase. Large signaling may increase if the model is in an online mode, where the training phase is carried out frequently, because supervised learning at the network side may require reporting a measurement (e.g., RSRP) from the UE side. 2) Missing UE features that may help in prediction (e.g., UE location that is missing due to privacy or other issues). Thus, this approach may ignore UE input to control an operation in the network (e.g., a decision like secondary carrier handover).
Potential problems related to applying some approaches for federated learning to a wireless network may include the following.
Some approaches to federated learning may be limited to the features of the clients (e.g., UEs), whereas a server (e.g., a gNB) may have many more features that may help improve network performance that depends on decisions (e.g., secondary carrier decisions, such as decisions on handover, dual connectivity, carrier aggregation, RLC legs, duplication, and millimeter-wave communication).
Additional potential problems with some approaches to federated learning may include that incorporation of features of both clients and servers (e.g., a gNB) may not be possible. Thus, utilizing heterogeneous information at both a gNB and UEs may not be possible (e.g., utilizing clients' features (e.g., location information of UEs) together with a server's features (e.g., throughput, load, and interference information from a gNB) may not be possible).
In various embodiments of the present disclosure, a parameter may be predicted and related decisions on the parameter may be made to control an operation in the telecommunications network. A cascaded and hybrid federated model may be used to enable the telecommunications network to adaptively enable UEs to participate in taking (heterogeneously) a decision on an operation in the telecommunications network, while preserving the privacy of the UE's information (e.g., not sharing the UE's private information such as location).
In various embodiments of the present disclosure, a method may be provided for secondary carrier prediction and related decisions on secondary carrier operations (such as selection, handover, dual connectivity, etc.). A cascaded and hybrid federated model may be included that enables a network to adaptively enable UEs to participate in taking (heterogeneously) a decision on secondary carrier operations, while preserving the privacy of the UEs' information (e.g., UEs' private information such as location may not be shared). The methods may take advantage of measurements and features at both the UE (e.g., location, etc.) and gNB (e.g., throughput, load, interference, etc.) sides. Thus, the methods may improve, e.g., secondary carrier (SC) strength and an associated decision. The methods may further provide server messaging and methods for exchanging training and/or operation related information.
In various embodiments of the present disclosure, a method is provided in a telecommunications network for adaptively deploying an aggregated machine learning model and an output parameter in the telecommunications network to control an operation in the telecommunications network. One exemplary application is for secondary carrier prediction and a related decision(s) on secondary carrier operations (such as selection, handover, dual connectivity, etc.).
Presently disclosed embodiments may provide potential advantages. One potential advantage is a greater degree of freedom when a model is learning (e.g., learning not only from UEs but also from a network node). Another potential advantage is that new input to local training may be obtained from a network node model output, along with flexibility in taking decisions related to controlling an operation in the telecommunications network.
Further potential advantages of various presently disclosed embodiments may include improving learning performance, parameter prediction (e.g., secondary carrier prediction), and a decision on the predicted parameter (e.g. improving carrier selection).
Further potential advantages of various presently disclosed embodiments may include improving federated learning performance (loss or accuracy) and improving parameter prediction (e.g., secondary carrier prediction). These potential improvements may be provided because, for example, interference (and other cell-based measurements in the network) may be directly or indirectly related to secondary carrier strength (e.g., RSRQ or RSRP). Thus, knowing such a parameter may result in more accurate training of the ML model.
Further potential advantages of various presently disclosed embodiments may include improving carrier selection (e.g., at dual connectivity, carrier aggregation, moving to mm-Wave, etc.) or a handover process. These potential improvements may be provided because, for example, interference (and other cell-based measurements in the network) may be directly or indirectly related to secondary carrier strength (e.g., RSRQ or RSRP). Thus, knowing such a parameter may result in more accurate training of the ML model. Additionally, a cell-based parameter may help the decision-making process of selecting a new carrier (e.g., the decision may not be related only to carrier prediction, but also to the prediction of future selected carriers based on parameters other than strength).
Referring to
The network computing device 201 and client computing devices 205 of
UEs 205 may upload to gNB 201 (a) their ML models 207 (also referred to herein as client machine learning models 207), and (b) a quantized version of their output or a function of that output 209 (e.g., P1-P5) (also referred to herein as output performance metric 209). gNB 201 may aggregate a) UEs' 205 ML models 207, and b) UEs' 205 quantized output 209 (e.g., secondary carrier signal strength (RSRP, RSRQ, etc.)).
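The quantization step above might look like the following sketch, in which a UE maps its predicted secondary carrier strength (e.g., RSRP in dBm) onto a coarse grid before uploading, reducing signaling. The 1 dB step and the clamping range are assumptions for illustration.

```python
def quantize(value, step=1.0, lo=-140.0, hi=-44.0):
    """Clamp a predicted strength to a plausible RSRP range (dBm)
    and round it to the nearest quantization step."""
    clamped = max(lo, min(hi, value))
    return round(clamped / step) * step

predicted_rsrp = -96.37            # hypothetical model output in dBm
uploaded = quantize(predicted_rsrp)  # the value actually sent to the gNB
```

The same helper could quantize an MSE or R2 value instead of a raw prediction, with a range and step appropriate to that metric.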
gNB 201 may take (a) the aggregated quantized output, mean squared error (MSE) or coefficient of determination (R2) 211 (also referred to herein as aggregated output performance metric 211), and (b) other gNB 201 available measurement(s) such as network throughput, load, and interference (also referred to herein as measurement of a network parameter 303), and use the aggregated output 211 and measurement(s) 303 to train a centralized, or other type of model, at gNB 201 (also referred to herein as a hybrid server model 301 or a network machine learning model 301), as described below with reference to
gNB 201 may download to UEs 205 (a) the aggregated UEs' model 203, and (b) a quantized output, MSEs, or R2s 307 (also referred to herein as output parameter 307) (not shown in
After UEs 205 predict, e.g., secondary carrier (SC) strength (e.g., RSRP/RSRQ), a decision on SC operations (e.g., handover or selection) may be taken and may include:
A UE 205 may take a final decision on SC handover or selection based on trained model 401 (as described further below with reference to
Alternatively, a UE 205 may send a confidence value (e.g., a probability) of its decision, e.g., on SC handover or selection, to the network (e.g., gNB 201). The network (e.g., gNB 201) may generate a discrete report and take a final decision.
Alternatively, gNB 201 may take a final decision on SC handover or selection based on the quantized report of the predicted SC.
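The three alternatives above can be sketched as a small report/decide protocol. The threshold value, the linear confidence mapping, and all names here are hypothetical illustrations, not the disclosed signaling.

```python
def ue_report(mode, predicted_sc, threshold=-100.0):
    """Build the UE's report for one of the three decision alternatives."""
    if mode == "ue_decides":
        # Alternative 1: the UE takes the final decision itself.
        return {"decision": predicted_sc > threshold}
    if mode == "confidence":
        # Alternative 2: send a confidence value; crude linear mapping
        # of distance from the threshold into [0, 1].
        conf = min(1.0, max(0.0, (predicted_sc - threshold) / 20.0 + 0.5))
        return {"confidence": conf}
    # Alternative 3: send only the quantized prediction; gNB decides.
    return {"quantized_sc": round(predicted_sc)}

def gnb_decide(report, threshold=-100.0):
    """gNB side: honor the UE's decision, or decide from its report."""
    if "decision" in report:
        return report["decision"]
    if "confidence" in report:
        return report["confidence"] >= 0.5
    return report["quantized_sc"] > threshold
```

Whichever alternative is configured, the gNB ends up with a single boolean decision on the SC operation.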
Still referring to
Still referring to
In the first approach, a UE 205 may have to take its decision on SC handover or selection, and then send the decision to gNB 201 via a radio resource control (RRC), medium access control (MAC), or physical layer (PHY) message.
In the second approach, a UE 205 may have to take its decision on SC handover or selection, and then convert the decision to a confidence value (e.g., probability based) and send the value to gNB 201 via RRC, MAC, or PHY messages.
In the third approach, a UE 205 may not take a decision on SC handover or selection. UE 205 may send its predicted SC value to gNB 201 via RRC, MAC, or PHY messages.
In some embodiments, the exchange of quantized outputs, quantized MSE or R2 307 and 407 of both client computing devices 205 or network computing device 201 (e.g., UE or gNB, respectively) models might differ depending on the dynamicity of the wireless environment, network measurement (e.g., throughput, load, and interference), and client computing device 205 location. For example, a UE 205 might send to gNB 201 (during the iteration phase) only the model 207 or both the quantized output (or MSE or R2) 209 and the model 207. Further, gNB 201 might send to UEs 205 (during the iteration phase) only the aggregated model 203 or both the gNB output (quantized output or MSE or R2) 307 and the aggregated model 203.
In some embodiments, input to the network machine learning model 301 that is obtained from a UE 205 may be adapted to the number of reporting or active UEs 205. For example, gNB 201 takes a weighted average of all UEs' 205 reported output as input; gNB 201 statistically combines all UEs' 205 output to be considered as input; or gNB 201 takes a minimum or a maximum of all UEs' 205 output to be considered as input, etc.
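The adaptation strategies above amount to different reducers over the set of reported UE outputs. A sketch, with illustrative names and values:

```python
def combine_ue_outputs(outputs, weights=None, mode="weighted_average"):
    """Collapse per-UE reported outputs into one input for the network model."""
    if mode == "weighted_average":
        weights = weights or [1.0] * len(outputs)  # unweighted by default
        return sum(w * o for w, o in zip(weights, outputs)) / sum(weights)
    if mode == "min":
        return min(outputs)
    if mode == "max":
        return max(outputs)
    raise ValueError(f"unknown mode: {mode}")

reports = [0.1, 0.4, 0.3]  # e.g., quantized MSEs from three active UEs
avg = combine_ue_outputs(reports)
worst = combine_ue_outputs(reports, mode="max")
```

Because the reducer collapses any number of reports into a fixed-size input, the network model's input dimension stays constant as UEs join or leave.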
In some embodiments, gNB 201 and a UE 205 exchange the local model 207 of UE 205 and the aggregated model 203 via RRC configuration signals; physical downlink control channel (PDCCH) and physical uplink control channel (PUCCH) signals; and/or medium access control (MAC) control element (CE) signals.
In some embodiments, gNB 201 and UE 205 exchange the quantized output 209 of UE 205 and the centralized quantized output 307 via RRC configuration signals; PDCCH and PUCCH signals; and/or MAC CE signals.
In some embodiments, the network may change and mix the signaling methodology (of both models and quantized MSE or R2/outputs) depending on convergence speed, dynamicity of the wireless channel, required accuracy, mobility (change of UE location), etc. For example, when a fast and small size model and input update is needed, the network may enable PHY layer model transfer with mini-slot. This may ensure that the information that needs to be transferred arrives within the required time limit.
In some embodiments, the network dynamically decides whether (1) gNB 201 alone learns and predicts secondary carrier strength, (2) conventional federated learning is used, or (3) cascaded federated learning is used to enhance the secondary carrier prediction and selection. The dynamic decision may be based on changes of the wireless fading channel, network load, interference from neighbor cells or networks, etc. It may also be based on whether (a) UE 205 local information is enough to make the prediction, (b) gNB 201 measurement is enough to make the prediction, or (c) both are needed. Once the above decision is made, gNB 201 can communicate a specific signal to UE 205, upon reception of which UEs 205 will understand gNB 201's intention.
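The dynamic mode decision above can be sketched as a simple selector; the predicate names are assumptions (in practice each would be derived from the channel, load, and interference conditions the paragraph lists):

```python
def select_learning_mode(ue_info_sufficient, gnb_info_sufficient):
    """Pick which of the three learning modes the network signals to UEs."""
    if gnb_info_sufficient and not ue_info_sufficient:
        return "gnb_only"                # (1) gNB alone learns and predicts
    if ue_info_sufficient and not gnb_info_sufficient:
        return "conventional_federated"  # (2) UE-side federated learning
    return "cascaded_federated"          # (3) both sides are needed
```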
In some embodiments, the network may utilize the UE 205 shared model 207 and quantized MSE or R2 209 to make a proactive decision on the secondary carrier application, such as selecting the suitable secondary carrier for dual connectivity or carrier aggregation, etc.
Various embodiments of the present disclosure may provide several technical enhancements compared to some approaches of federated learning. For example, RSRQ/RSRP may depend on gNB-based information (interference, load, throughput (TP), etc.). Thus, including extra information in accordance with various embodiments may enhance the accuracy and convergence rate of the prediction. Additional potential technical enhancements may include, for example, that load and TP of neighbor cells may be used in the process of secondary carrier selection, not only the accuracy of the predicted secondary carrier strength.
Various operational phases will now be described.
In some embodiments, in a training phase, network computing device 201 decides on an operation mode among the following modes: (1) gNB 201 takes the full decision on SC operations (handover or selection, etc.); (2) gNB 201 and UE 205 participate in decision making for SC operations; and (3) UE 205 takes the full decision on SC operations. Both UEs 205 and gNB 201 iterate on their respective models, as described above, until UEs 205 and gNB 201 reach the desired accuracy of the predicted secondary carrier RSRP/RSRQ.
In some embodiments, in an execution phase, both UE 205 and gNB 201 follow the decided operation mode. UEs 205 predict SC RSRP/RSRQ every decided period of time T. The period of time, T, may depend on the dynamicity of changes in the wireless environment, UE 205 location, and the needed speed of convergence. Based on the selected operation mode and the predicted values of the SC, UE 205 may exchange the associated information (for the operation mode) with gNB 201. Additionally, based on the selected operation mode, gNB 201 may process the information uploaded by UEs 205 to gNB 201 as described above.
Exemplary inputs to models 301 and 401 of various embodiments will now be described.
Inputs to the client machine learning model 401 may include, but are not limited to: UE 205 location (latitude and longitude); the gNB 201 model's quantized output, MSE, or R2; time; surrounding event(s); etc.
Inputs to the network machine learning model 301 may include, but are not limited to: network throughput and load; cell throughput and load; neighbor interference; UE 205 quantized output, MSE, or R2; etc.
Exemplary outputs of models 301 and 401 of various embodiments will now be described.
Outputs of the network machine learning model 301 may include, but are not limited to: aggregated clients' local model 203 weights; a gradient with respect to common features between client 205 and server 201; a loss value; etc.
Outputs of the client machine learning model 401 may include, but are not limited to: RSRP; RSRQ; a selection decision; local gradients with respect to common features between client 205 and server 201; a local loss value; etc.
Online updating of models 301 and 401 will now be described.
In various embodiments, network computing device 201 chooses to continue updating the UE model 401 even while running the execution phase depending on, for example: environmental changes, neighbor events, or a surrounding event(s); channel fluctuation; fluctuation of loads on target and neighbor cells; etc.
In a training phase, there may not be stringent constraints on updating the models, due to flexible time and bandwidth. However, when operating in execution mode, cells may be fully loaded, and a decision on, e.g., a secondary carrier (or handover to another serving cell) should be made very fast, with stringent latency on model convergence. Thus, in some embodiments, a model is updated depending on the situation. For example, if a quick and large size model update is needed, network computing device 201 may enable an all-layers model transfer mode, e.g., PHY, MAC, radio link control (RLC), packet data convergence protocol (PDCP), and application layers. In another example, if a quick and small size model update is needed, network computing device 201 may enable a PHY layer model transfer with mini-slot. In yet another example, if a slow and small size model update is enough, network computing device 201 may enable an application layer model transfer.
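The update policy above can be sketched as a lookup from (urgency, model size) to a transfer mode; the 64 kB size threshold is an assumption chosen only to make the example concrete:

```python
def model_transfer_mode(quick, model_size_kb, small_kb=64):
    """Map an update's urgency and size to one of the three transfer modes."""
    small = model_size_kb <= small_kb
    if quick and not small:
        return "all_layers"        # PHY/MAC/RLC/PDCP/application transfer
    if quick and small:
        return "phy_mini_slot"     # PHY-layer transfer with mini-slot
    return "application_layer"     # slow and small: application layer suffices
```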
In some embodiments, in the different phases, exchanging of the model and the outputs of the models can be via transferring the weights and biases of the model, or gradients of the model matrix.
Symbiotic federations will now be described.
In some embodiments, two symbiotic federations take place in parallel. For example, one between gNBs 201 and the other between UEs 205, as described further below.
UEs 205 may upload to a gNB 201 their learned model 207 and quantized version of their output or a function of that output 209.
gNB 201 may aggregate the models 207 of UEs 205 and the quantized output (e.g., secondary carrier signal strength (RSRP, RSRQ)) 209 of UEs 205.
gNB 201 may take (a) the aggregated quantized output, MSEs, or R2s 211 of the UEs 205, and (b) other gNB 201 available measurements 303, such as network throughput, load, interference, and cell utilization, to train a local model 301 at gNB 201.
The local model 301 trained at gNB 201 may be aggregated together with additional models 217 trained by other gNBs 213 in proximity by an additional controller network computing device 215 (e.g., a gNB controller). During the aggregation of that model, weighted federated averaging may be performed, where the weights are balanced according to the distribution of labels. In this case, a decision is aimed at deciding whether the UE 205 takes the final decision for, e.g., an SC handover or selection. The process repeats periodically.
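The weighted federated averaging above, with weights balanced according to the distribution of labels, can be sketched as follows. Using each gNB's label count as a proxy for its label distribution is an assumption made for illustration.

```python
def weighted_fed_avg(models, label_counts):
    """Average per-gNB model parameters, weighting each gNB by the share
    of labels it contributed to training."""
    total = sum(label_counts)
    weights = [c / total for c in label_counts]
    return [sum(w * p for w, p in zip(weights, params))
            for params in zip(*models)]

# Two gNB models with two parameters each; the second gNB saw 3x the labels
gnb_models = [[1.0, 0.0], [3.0, 2.0]]
aggregated = weighted_fed_avg(gnb_models, label_counts=[100, 300])
```

A gNB controller 215 could run this reducer over the models 217 it collects, then redistribute the result for the next period.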
In some embodiments, a trained model is moved to gNB 201 and may be used as described above after UEs 205 predict a parameter, for a decision on an operation in the network.
In the non-limiting illustrative embodiment of
Although the embodiment of
Referring to
The neural network circuit 500 of
The neural network circuit 500 can be operated to process different inputs 303, 305, during a training mode by a processing circuit of network computing device 201 and/or during the execution mode of the trained neural network circuit 500, through different inputs (e.g., input nodes I1 to IN) of the neural network circuit 500. Inputs 303, 305 can be processed simultaneously through different input nodes I1 to IN.
In the non-limiting illustrative embodiment of
Although the embodiment of
Referring to
The neural network circuit 1000 of
The neural network circuit 1000 can be operated to process different inputs 403, 405, during a training mode by a processing circuit of client computing device 700 and/or during the execution mode of the trained neural network circuit 1000, through different inputs (e.g., input nodes I1 to IN) of the neural network circuit 1000. Inputs 403, 405 can be processed simultaneously through different input nodes I1 to IN.
These and other related operations will now be described in the context of the operational flowcharts of
Referring initially to
In some embodiments, the output performance metric (e.g., 209) of the plurality of the client machine learning models includes at least one of: a predicted quantized output; a predicted function of a quantized output; a decision on the operation in the telecommunications network; a gradient of a variation between a common type of the output of a client computing device and the network computing device; and a loss value indicating an accuracy of at least one of the plurality of client machine learning models.
In some embodiments, the network machine learning model (e.g., 301) includes a neural network (e.g., 500).
In some embodiments, the at least one measurement of a network parameter (e.g., 303) includes at least one measurement of a parameter of a cell of the telecommunications network.
Referring to
Referring to
Referring to
Referring to
Referring to
Referring again to
In some embodiments, obtaining the aggregated output performance metric (211) further includes adapting the aggregated output performance metric to a number of client computing devices (e.g., 205, 700) that report the output performance metric (e.g., 209) to the network computing device (e.g., 201) based on one of: a weighted average of the output performance metric of the plurality of the client machine learning models; a statistical combination of the output performance metric of the plurality of the client machine learning models; and a minimum and a maximum of the output performance metric of the plurality of the client machine learning models.
Referring to
In some embodiments, the dynamically deciding (1800) on a machine learning model is a decision based on at least one change in a network parameter of the telecommunications network and one of: 1) local information of at least one of the plurality of client computing devices (e.g., 205, 700) used to predict the parameter, 2) a measurement by the network computing device (e.g., 201) of at least one change in the network parameter used to predict the parameter; and 3) both the local information of at least one of the plurality of client computing devices and the measurement by the network computing device of at least one change in the network parameter used to predict the parameter.
Referring to
Referring to
Referring to
In some embodiments, the output parameter (e.g., 307) of the network machine learning model includes at least one of: an aggregated weight of the aggregated machine learning model; a gradient of a variation between the output performance metric and the output parameter over a defined time period; and a loss metric indicating an accuracy of the network machine learning model.
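Two of the components above can be sketched numerically: the gradient of the variation between the output performance metric and the output parameter over a defined time period, and a loss metric for the network model. All series below are hypothetical, and mean squared error is used as one possible loss metric.

```python
import numpy as np

# Hedged sketch (hypothetical series) of output-parameter components.
t = np.arange(5)                       # defined time period: 5 report instants
perf_metric = np.array([0.70, 0.74, 0.78, 0.80, 0.83])
out_param   = np.array([0.68, 0.71, 0.77, 0.81, 0.82])

# Gradient (slope) of the variation between metric and parameter over time.
variation = perf_metric - out_param
gradient, _ = np.polyfit(t, variation, 1)

# A loss metric indicating accuracy of the network model (MSE, assumed form).
targets = np.array([0.71, 0.73, 0.79, 0.80, 0.84])
loss = float(np.mean((out_param - targets) ** 2))
```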
Referring to
In some embodiments, the updated aggregated machine learning model (e.g., 203) is sent, after the training, to at least one of the plurality of client computing devices based on one of:
enabling a physical layer, PHY layer, a medium access control layer, MAC layer, a radio resource control layer, RRC layer, a packet data convergence protocol layer, PDCP layer, and an application layer for sending the aggregated machine learning model to the plurality of client computing devices;
enabling a PHY layer with a mini slot for sending the aggregated machine learning model to the plurality of client computing devices; and
enabling an application layer for sending the aggregated machine learning model to the plurality of client computing devices.
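The choice among the layer options listed above can be sketched as a simple selection rule. The size threshold and the mapping below are hypothetical assumptions for illustration only; the disclosure does not prescribe a specific selection policy.

```python
# Hedged sketch (hypothetical policy) of selecting which protocol layer
# carries the aggregated machine learning model update.
from enum import Enum

class Layer(Enum):
    PHY = "PHY"        # physical layer (e.g., via a mini slot)
    MAC = "MAC"        # medium access control layer
    RRC = "RRC"        # radio resource control layer
    PDCP = "PDCP"      # packet data convergence protocol layer
    APP = "APP"        # application layer

def choose_layer(model_size_bytes: int, latency_critical: bool) -> Layer:
    # Small, latency-critical updates could ride a PHY mini slot;
    # large model transfers default to the application layer.
    if latency_critical and model_size_bytes < 1_000:
        return Layer.PHY
    return Layer.APP
```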
Referring to
Referring to
Still referring to
In some embodiments, the network computing device 800 determines the signal type for each of the receiving and/or sending, and a frequency of the exchanging, based on at least one of: a target rate that the at least one of the plurality of client computing devices sets for reaching a convergence for the aggregated machine learning model; and a rate of change of the at least one change in the network parameter (e.g., a speed) of the telecommunications network.
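The frequency decision above can be sketched as follows. The baseline period, thresholds, and halving rule are hypothetical assumptions chosen for illustration: a faster-changing network parameter or a tighter convergence target shortens the exchange period.

```python
# Illustrative sketch (hypothetical thresholds) of choosing the exchange
# frequency from (a) the target convergence rate set by the clients and
# (b) the rate of change of the network parameter (e.g., UE speed).
def exchange_period_ms(target_rounds_to_converge: int,
                       param_change_rate: float) -> int:
    base_period = 100  # ms between model exchanges, an assumed baseline
    # Faster-changing network conditions -> exchange more often.
    if param_change_rate > 1.0:
        base_period //= 2
    # A tight convergence target also shortens the period.
    if target_rounds_to_converge < 10:
        base_period //= 2
    return max(base_period, 10)   # assumed lower bound on the period
```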
Still referring to
Referring to
In some embodiments, the network computing device (e.g., 201, 800) is a network node and the plurality of client computing devices includes a communication device.
In some embodiments, the output performance metric (e.g., 209) is a predicted secondary carrier signal strength.
In some embodiments, the operation in the telecommunications network includes a secondary carrier operation.
According to some embodiments, a computer program can be provided that includes instructions which, when executed on at least one processor, cause the at least one processor to carry out methods performed by the network computing device.
According to some embodiments, a computer program product can be provided that includes a non-transitory computer readable medium storing instructions that, when executed on at least one processor, cause the at least one processor to carry out methods performed by the network computing device.
Operations of a client computing device 700 (implemented using the structure of the block diagram of
Referring initially to
Still referring to
In some embodiments, the output performance metric (e.g., 209) of the client machine learning model (e.g., 207) includes at least one of: a predicted quantized output; a predicted function of a quantized output; a decision on the operation in the telecommunications network; a gradient of a variation between a common type of the output of a client computing device and the network computing device; and a loss value indicating an accuracy of a client machine learning model.
In some embodiments, the aggregated machine learning model (e.g., 203) comprises a neural network (e.g., 1000).
Referring to
Referring to
Referring to
Referring to
Referring to
In some embodiments, the output parameter (e.g., 307) of the network machine learning model includes at least one of: an aggregated weight of the aggregated machine learning model; a gradient of a variation between the output performance metric and the output parameter over a defined time period; and a loss metric indicating an accuracy of the network machine learning model.
Referring to
the receiving an aggregated machine learning model (e.g., 203) from a network computing device (e.g., 201, 800); the receiving the output parameter (e.g., 307) of a network machine learning model (301) from the network computing device (e.g., 201, 800); the sending an output performance metric (e.g., 407) of the aggregated machine learning model (e.g., 203) to the network computing device; and the sending the aggregated machine learning model (e.g., 203) to the network computing device (e.g., 201, 800). The exchange is performed via a message received and/or sent using one of a signal type as follows: a radio resource control, RRC, configuration signal; a physical downlink control channel, PDCCH, signal from the network computing device; a physical uplink control channel, PUCCH, signal from at least one client computing device; and a medium access control, MAC, control element signal.
In some embodiments, the exchanging (3000) further includes one or more of: sending weights and biases of the aggregated machine learning model (e.g., 203) to the network computing device; receiving a transfer of weights and biases of the aggregated machine learning model (e.g., 203) from the network computing device; sending gradients of a matrix of the aggregated machine learning model (e.g., 203) to the network computing device; and receiving gradients of a matrix of the aggregated machine learning model (e.g., 203) from the network computing device.
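Sending and receiving weights, biases, and gradients as in the exchanges above implies packaging named tensors into a transferable payload. The model layout, gradient values, and pack/unpack helpers below are hypothetical illustrations, not a format defined by the disclosure.

```python
import numpy as np

# Minimal sketch (hypothetical model layout) of packaging weights, biases,
# and gradients of the aggregated model for exchange with the network node.
model = {
    "W1": np.ones((8, 4)),     # layer weights
    "b1": np.zeros(8),         # layer biases
}
grads = {k: np.full_like(v, 0.01) for k, v in model.items()}

def pack(params):
    """Flatten named tensors into one byte payload plus a shape manifest."""
    manifest = [(name, v.shape) for name, v in params.items()]
    payload = np.concatenate([v.ravel() for v in params.values()])
    return manifest, payload.tobytes()

def unpack(manifest, raw):
    """Rebuild the named tensors from the manifest and byte payload."""
    flat = np.frombuffer(raw, dtype=np.float64)
    out, i = {}, 0
    for name, shape in manifest:
        n = int(np.prod(shape))
        out[name] = flat[i:i + n].reshape(shape)
        i += n
    return out

manifest, raw = pack(model)        # e.g., before sending to the network node
restored = unpack(manifest, raw)   # e.g., after receiving a transfer
```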
Referring to
In some embodiments, the at least one location or at least one measurement of the client computing device (e.g., 403) used to obtain an output performance metric (e.g., 407) of the aggregated machine learning model includes one or more of: a location of the client computing device; a time at the location of the client computing device; and an event in the telecommunications network.
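Forming a feature vector from location, time at the location, and a network event can be sketched as below. The encoding, the weight vector standing in for the aggregated model, and the sigmoid scoring are all hypothetical assumptions used purely to illustrate how such inputs could yield a bounded output performance metric.

```python
import numpy as np

# Hedged sketch (hypothetical encoding) of the client-side inputs -- device
# location, time at that location, and a network event -- consumed to
# produce an output performance metric.
def feature_vector(lat, lon, time_at_loc_s, event_code):
    # Normalize each input into roughly comparable ranges (assumed scheme).
    return np.array([lat / 90.0, lon / 180.0,
                     time_at_loc_s / 3600.0, float(event_code)])

x = feature_vector(57.7, 11.97, 120.0, 1)   # event_code 1: assumed event id

# A linear model with a sigmoid stands in for the aggregated model here.
w = np.array([0.2, 0.1, 0.5, 0.3])
metric = float(1.0 / (1.0 + np.exp(-(w @ x))))   # metric bounded in (0, 1)
```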
Referring to
In some embodiments, the client computing device (e.g., 205, 700) is a communication device and the network computing device (e.g., 201, 800) is a network node.
In some embodiments, the output performance metric (e.g., 407) includes at least one of: a predicted secondary carrier signal strength; and a decision on a secondary carrier operation.
In some embodiments, the operation in the telecommunications network includes a secondary carrier operation.
According to some embodiments, a computer program can be provided that includes instructions which, when executed on at least one processor, cause the at least one processor to carry out methods performed by the client computing device.
According to some embodiments, a computer program product can be provided that includes a non-transitory computer readable medium storing instructions that, when executed on at least one processor, cause the at least one processor to carry out methods performed by the client computing device.
Aspects of the present disclosure have been described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that when executed can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions when stored in the computer readable medium produce an article of manufacture including instructions which when executed, cause a computer to implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable instruction execution apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatuses or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Like reference numbers signify like elements throughout the description of the figures.
The corresponding structures, materials, acts, and equivalents of any means or step plus function elements in the embodiments below are intended to include any disclosed structure, material, or act for performing the function in combination with other embodiments. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
Exemplary embodiments are provided below. Reference numbers/letters are provided in parenthesis by way of example/illustration without limiting example embodiments to particular elements indicated by reference numbers/letters.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2019/086065 | 12/18/2019 | WO |