The present disclosure relates to a method of performing federated learning, and more particularly to a method for a plurality of user equipments (UEs) to perform federated learning in a wireless communication system and a device therefor.
Wireless communication systems have been widely deployed to provide various types of communication services such as voice or data. In general, the wireless communication system is a multiple access system capable of supporting communication with multiple users by sharing available system resources (bandwidth, transmission power, etc.). Examples of multiple access systems include a Code Division Multiple Access (CDMA) system, a Frequency Division Multiple Access (FDMA) system, a Time Division Multiple Access (TDMA) system, a Space Division Multiple Access (SDMA) system, an Orthogonal Frequency Division Multiple Access (OFDMA) system, a Single Carrier Frequency Division Multiple Access (SC-FDMA) system, and an Interleave Division Multiple Access (IDMA) system.
An object of the present disclosure is to provide a method of performing federated learning in a wireless communication system and a device therefor.
Another object of the present disclosure is to provide a method of scheduling a UE participating in federated learning when performing federated learning in a wireless communication system and a device therefor.
Another object of the present disclosure is to provide a method of transmitting a parity part when performing federated learning in a wireless communication system and a device therefor.
Another object of the present disclosure is to provide a method of processing a reception signal at a server when performing federated learning in a wireless communication system and a device therefor.
The technical objects to be achieved by the present disclosure are not limited to those that have been described hereinabove merely by way of example, and other technical objects that are not mentioned can be clearly understood by those skilled in the art, to which the present disclosure pertains, from the following descriptions.
The present disclosure provides a method of performing federated learning in a wireless communication system and a device therefor.
More specifically, in one aspect of the present disclosure, there is provided a method for a plurality of user equipments (UEs) to perform a federated learning in a wireless communication system, the method performed by one UE of the plurality of UEs comprising receiving, from a server, a channel state information reference signal (CSI-RS); transmitting, to the server, channel state information (CSI) calculated based on the CSI-RS; receiving, from the server, scheduling information that allows the one UE to participate in the federated learning, wherein the scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and the plurality of UEs; encoding a local parameter for performing the federated learning, the encoded local parameter including a systematic part and a parity part; modulating the encoded local parameter, wherein the parity part is modulated based on a number of retransmissions determined based on (i) a modulation order of the systematic part and the parity part and (ii) a maximum number of UEs participating in the federated learning; and transmitting, to the server, the modulated local parameter based on the scheduling information and the number of retransmissions, wherein a transmission power for the local parameter is controlled based on a difference between a channel state of a channel between the one UE and the server and the reference channel state.
The scheduling information may be used to determine whether the one UE participates in the federated learning.
The reference channel state may be the channel state between the server and the UE having the highest channel gain with the server among the plurality of UEs.
Whether the one UE participates in the federated learning may be determined based on whether a ratio of a channel gain of a channel between the one UE and the server to a channel gain of the reference channel state is equal to or greater than a specific threshold.
Based on the ratio of the channel gain of the channel between the one UE and the server to the channel gain of the reference channel state being less than the specific threshold, the one UE may not participate in the federated learning.
Based on the ratio of the channel gain of the channel between the one UE and the server to the channel gain of the reference channel state being equal to or greater than the specific threshold, the one UE may participate in the federated learning.
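The threshold-based participation rule described above can be sketched as follows. This is an illustrative example only; the function and variable names (schedule_ues, threshold, the example gain values) are assumptions for illustration and do not appear in the disclosure.

```python
# Hypothetical sketch of the threshold-based scheduling rule: a UE
# participates only if the ratio of its channel gain to the reference
# (highest) channel gain is equal to or greater than a threshold.
def schedule_ues(channel_gains, threshold):
    h_ref = max(channel_gains.values())  # reference: highest channel gain
    return {ue for ue, h in channel_gains.items() if h / h_ref >= threshold}

gains = {"UE1": 0.9, "UE2": 0.5, "UE3": 0.2}
participants = schedule_ues(gains, threshold=0.5)  # UE3 is excluded
```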
The number of retransmissions (i) may be greater than or equal to 1 and (ii) may be determined to be equal to or less than a value obtained by dividing the maximum number of UEs participating in the federated learning by 2 and rounding up.
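The bound on the number of retransmissions stated above amounts to a simple ceiling computation; a minimal sketch follows, assuming k_max denotes the maximum number of participating UEs (the name is illustrative).

```python
import math

# The number of retransmissions is at least 1 and at most ceil(k_max / 2),
# where k_max is the maximum number of UEs participating in the learning.
def retransmission_bound(k_max):
    return 1, math.ceil(k_max / 2)

low, high = retransmission_bound(5)  # bound for five participating UEs
```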
Based on a channel gain of the one UE being lowest among respective channel gains of the plurality of UEs participating in the federated learning, an allocation power of the one UE for transmitting the systematic part may be set to a maximum power.
Based on the channel gain of the one UE being greater than a lowest channel gain among the respective channel gains of the plurality of UEs participating in the federated learning, the allocation power of the one UE for transmitting the systematic part may be set to a value obtained by multiplying the maximum power by a factor determined based on a ratio of the channel gain of the one UE to the channel gain of the reference channel state.
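A sketch of this power-allocation rule is given below: the weakest UE transmits at maximum power, and a stronger UE scales the maximum power down by a gain-ratio-based factor. The specific scaling factor used here (channel inversion relative to the weakest UE) is an assumption for illustration, not the disclosure's exact formula.

```python
# Hedged sketch of the power-allocation rule: the UE with the lowest channel
# gain transmits the systematic part at maximum power; a stronger UE scales
# the maximum power by a gain-ratio factor (illustrative choice: h_min / h).
def allocate_power(gains, p_max):
    h_min = min(gains.values())
    return {ue: p_max if h == h_min else (h_min / h) * p_max
            for ue, h in gains.items()}

powers = allocate_power({"UE1": 0.9, "UE2": 0.3}, p_max=1.0)
```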
The parity part may be divided and transmitted over as many time/frequency resources as the number of retransmissions.
In another aspect of the present disclosure, there is provided a user equipment (UE) performing a federated learning with a plurality of UEs in a wireless communication system, the UE comprising a transmitter configured to transmit a radio signal; a receiver configured to receive the radio signal; at least one processor; and at least one computer memory operably connectable to the at least one processor, wherein the at least one computer memory is configured to store instructions performing operations based on being executed by the at least one processor, wherein the operations comprise receiving, from a server, a channel state information reference signal (CSI-RS); transmitting, to the server, channel state information (CSI) calculated based on the CSI-RS; receiving, from the server, scheduling information that allows the UE to participate in the federated learning, wherein the scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and the plurality of UEs; encoding a local parameter for performing the federated learning, the encoded local parameter including a systematic part and a parity part; modulating the encoded local parameter, wherein the parity part is modulated based on a number of retransmissions determined based on (i) a modulation order of the systematic part and the parity part and (ii) a number of the plurality of UEs participating in the federated learning; and transmitting, to the server, the modulated local parameter based on the scheduling information and the number of retransmissions, wherein a transmission power for the local parameter is controlled based on a difference between a channel state of a channel between the UE and the server and the reference channel state.
In another aspect of the present disclosure, there is provided a method for a base station to perform a federated learning with a plurality of user equipments (UEs) in a wireless communication system, the method comprising transmitting, to each of the plurality of UEs, a channel state information reference signal (CSI-RS); receiving, from each of the plurality of UEs, channel state information (CSI) calculated based on the CSI-RS; transmitting, to each of the plurality of UEs, scheduling information that allows the plurality of UEs to participate in the federated learning, wherein the scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and the plurality of UEs; and receiving, from each of the plurality of UEs, a local parameter for performing the federated learning of each of the plurality of UEs, the local parameter being encoded and modulated by each of the plurality of UEs, wherein the encoded local parameter includes a systematic part and a parity part, wherein the parity part is modulated based on a number of retransmissions determined based on (i) a modulation order of the systematic part and the parity part and (ii) a number of the plurality of UEs participating in the federated learning, wherein the local parameter of each of the plurality of UEs is transmitted based on the scheduling information and the number of retransmissions, wherein a transmission power for the local parameter is controlled based on a difference between a channel state of the channels between the plurality of UEs and the server and the reference channel state.
In another aspect of the present disclosure, there is provided a base station performing a federated learning with a plurality of user equipments (UEs) in a wireless communication system, the base station comprising a transmitter configured to transmit a radio signal; a receiver configured to receive the radio signal; at least one processor; and at least one computer memory operably connectable to the at least one processor, wherein the at least one computer memory is configured to store instructions performing operations based on being executed by the at least one processor, wherein the operations comprise transmitting, to each of the plurality of UEs, a channel state information reference signal (CSI-RS); receiving, from each of the plurality of UEs, channel state information (CSI) calculated based on the CSI-RS; transmitting, to each of the plurality of UEs, scheduling information that allows the plurality of UEs to participate in the federated learning, wherein the scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and the plurality of UEs; and receiving, from each of the plurality of UEs, a local parameter for performing the federated learning of each of the plurality of UEs, the local parameter being encoded and modulated by each of the plurality of UEs, wherein the encoded local parameter includes a systematic part and a parity part, wherein the parity part is modulated based on a number of retransmissions determined based on (i) a modulation order of the systematic part and the parity part and (ii) a number of the plurality of UEs participating in the federated learning, wherein the local parameter of each of the plurality of UEs is transmitted based on the scheduling information and the number of retransmissions, wherein a transmission power for the local parameter is controlled based on a difference between a channel state of the channels between the plurality of UEs and the server and the reference channel state.
In another aspect of the present disclosure, there is provided a non-transitory computer readable medium (CRM) storing one or more instructions, wherein the one or more instructions executable by one or more processors are configured to allow one of a plurality of user equipments (UEs) to receive, from a server, a channel state information reference signal (CSI-RS); transmit, to the server, channel state information (CSI) calculated based on the CSI-RS; receive, from the server, scheduling information that allows the one UE to participate in the federated learning, wherein the scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and the plurality of UEs; encode a local parameter for performing the federated learning, the encoded local parameter including a systematic part and a parity part; modulate the encoded local parameter, wherein the parity part is modulated based on a number of retransmissions determined based on (i) a modulation order of the systematic part and the parity part and (ii) a number of the plurality of UEs participating in the federated learning; and transmit, to the server, the modulated local parameter based on the scheduling information and the number of retransmissions, wherein a transmission power for the local parameter is controlled based on a difference between a channel state of a channel between the one UE and the server and the reference channel state.
In another aspect of the present disclosure, there is provided a device controlling a user equipment (UE) to perform federated learning in a wireless communication system, the device comprising one or more processors; and one or more memories operably connected to the one or more processors, wherein the one or more memories are configured to store instructions performing operations based on being executed by the one or more processors, wherein the operations comprise receiving, from a server, a channel state information reference signal (CSI-RS); transmitting, to the server, channel state information (CSI) calculated based on the CSI-RS; receiving, from the server, scheduling information that allows the UE to participate in the federated learning, wherein the scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and a plurality of UEs; encoding a local parameter for performing the federated learning, the encoded local parameter including a systematic part and a parity part; modulating the encoded local parameter, wherein the parity part is modulated based on a number of retransmissions determined based on (i) a modulation order of the systematic part and the parity part and (ii) a number of the plurality of UEs participating in the federated learning; and transmitting, to the server, the modulated local parameter based on the scheduling information and the number of retransmissions, wherein a transmission power for the local parameter is controlled based on a difference between a channel state of a channel between the UE and the server and the reference channel state.
The present disclosure can perform federated learning in a wireless communication system.
The present disclosure can also schedule a UE participating in federated learning when performing federated learning in a wireless communication system.
The present disclosure can also efficiently transmit a parity part when performing federated learning in a wireless communication system.
The present disclosure can also process a reception signal at a server when performing federated learning in a wireless communication system.
Effects that could be achieved with the present disclosure are not limited to those that have been described hereinabove merely by way of example, and other effects and advantages of the present disclosure will be more clearly understood from the following description by a person skilled in the art to which the present disclosure pertains.
The accompanying drawings, which are included to provide a further understanding of the present disclosure and constitute a part of the detailed description, illustrate embodiments of the present disclosure and serve to explain technical features of the present disclosure together with the description.
The following technology may be used in various radio access systems including CDMA, FDMA, TDMA, OFDMA, SC-FDMA, and the like. CDMA may be implemented as a radio technology such as Universal Terrestrial Radio Access (UTRA) or CDMA2000. TDMA may be implemented as a radio technology such as Global System for Mobile communications (GSM)/General Packet Radio Service (GPRS)/Enhanced Data rates for GSM Evolution (EDGE). OFDMA may be implemented as a radio technology such as Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Evolved UTRA (E-UTRA), or the like. UTRA is a part of Universal Mobile Telecommunications System (UMTS). 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) is a part of Evolved UMTS (E-UMTS) using E-UTRA, and LTE-Advanced (LTE-A)/LTE-A Pro is an evolved version of 3GPP LTE. 3GPP NR (New Radio or New Radio Access Technology) is an evolved version of 3GPP LTE/LTE-A/LTE-A Pro. 3GPP 6G may be an evolved version of 3GPP NR.
For clarity of description, the following description mostly focuses on 3GPP communication systems (e.g., LTE-A or 5G NR). However, technical features according to an embodiment of the present disclosure are not limited thereto. LTE means technology after 3GPP TS 36.xxx Release 8. In detail, LTE technology after 3GPP TS 36.xxx Release 10 is referred to as LTE-A, and LTE technology after 3GPP TS 36.xxx Release 13 is referred to as LTE-A Pro. 3GPP NR means technology after TS 38.xxx Release 15. “xxx” means a detailed standard document number. LTE/NR/6G may be collectively referred to as the 3GPP system. For terms and techniques not specifically described among those used in the present disclosure, reference may be made to a wireless communication standard document published before the present disclosure is filed. For example, the following document may be referred to.
When the UE is powered on or newly enters a cell, the UE performs an initial cell search operation such as synchronizing with the eNB (S11). To this end, the UE may receive a Primary Synchronization Signal (PSS) and a Secondary Synchronization Signal (SSS) from the eNB, synchronize with the eNB, and acquire information such as a cell ID. Thereafter, the UE may receive a Physical Broadcast Channel (PBCH) from the eNB and acquire in-cell broadcast information. Meanwhile, the UE may receive a Downlink Reference Signal (DL RS) in the initial cell search step to check the downlink channel status.
A UE that completes the initial cell search receives a Physical Downlink Control Channel (PDCCH) and a Physical Downlink Shared Channel (PDSCH) according to information carried on the PDCCH to acquire more specific system information (S12).
When the UE initially accesses the eNB or has no radio resource for signal transmission, the UE may perform a Random Access Procedure (RACH) toward the eNB (S13 to S16). To this end, the UE may transmit a specific sequence as a preamble through a Physical Random Access Channel (PRACH) (S13 and S15) and receive a response message (Random Access Response (RAR) message) for the preamble through the PDCCH and a corresponding PDSCH. In the case of a contention-based RACH, a Contention Resolution Procedure may be additionally performed (S16).
The UE that performs the above procedure may then perform PDCCH/PDSCH reception (S17) and Physical Uplink Shared Channel (PUSCH)/Physical Uplink Control Channel (PUCCH) transmission (S18) as a general uplink/downlink signal transmission procedure. In particular, the UE may receive Downlink Control Information (DCI) through the PDCCH. Here, the DCI may include control information such as resource allocation information for the UE, and different DCI formats may be applied according to the purpose of use.
The control information which the UE transmits to the eNB through the uplink or the UE receives from the eNB may include a downlink/uplink ACK/NACK signal, a Channel Quality Indicator (CQI), a Precoding Matrix Index (PMI), a Rank Indicator (RI), and the like. The UE may transmit the control information such as the CQI/PMI/RI, etc., via the PUSCH and/or PUCCH.
A base station transmits a related signal to a UE via a downlink channel to be described later, and the UE receives the related signal from the base station via the downlink channel to be described later.
A PDSCH carries downlink data (e.g., DL-shared channel transport block, DL-SCH TB) and is applied with a modulation method such as quadrature phase shift keying (QPSK), 16 quadrature amplitude modulation (QAM), 64 QAM, and 256 QAM. A codeword is generated by encoding TB. The PDSCH may carry multiple codewords. Scrambling and modulation mapping are performed for each codeword, and modulation symbols generated from each codeword are mapped to one or more layers (layer mapping). Each layer is mapped to a resource together with a demodulation reference signal (DMRS) to generate an OFDM symbol signal, and is transmitted through a corresponding antenna port.
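The per-codeword modulation-mapping step described above can be illustrated with the simplest case, QPSK. The sketch below follows the Gray-mapped QPSK constellation of 3GPP TS 38.211 (each bit pair maps to one unit-average-power complex symbol); the function name is illustrative.

```python
import math

# Minimal QPSK modulation-mapping sketch (per TS 38.211):
# d = (1/sqrt(2)) * [(1 - 2*b(2i)) + j*(1 - 2*b(2i+1))]
def qpsk_modulate(bits):
    assert len(bits) % 2 == 0
    s = 1 / math.sqrt(2)
    return [complex(s * (1 - 2 * bits[i]), s * (1 - 2 * bits[i + 1]))
            for i in range(0, len(bits), 2)]

symbols = qpsk_modulate([0, 0, 1, 1])  # two QPSK symbols
```

Higher orders (16/64/256 QAM) follow the same pattern with more bits per symbol.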
A PDCCH carries downlink control information (DCI) and is applied with a QPSK modulation method, etc. One PDCCH consists of 1, 2, 4, 8, or 16 control channel elements (CCEs) based on an aggregation level (AL). One CCE consists of 6 resource element groups (REGs). One REG is defined by one OFDM symbol and one (P) RB.
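The PDCCH resource arithmetic above (one CCE = 6 REGs, one REG = one OFDM symbol x one RB = 12 resource elements) can be computed directly; the function name is illustrative.

```python
# PDCCH resource counting from the text: an aggregation level (AL) of
# L uses L CCEs, 6*L REGs, and 12*6*L resource elements.
def pdcch_resources(aggregation_level):
    assert aggregation_level in (1, 2, 4, 8, 16)
    cces = aggregation_level
    regs = 6 * cces      # 6 REGs per CCE
    res = 12 * regs      # 12 REs per REG (one RB in one OFDM symbol)
    return cces, regs, res

cces, regs, res = pdcch_resources(4)  # AL 4
```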
The UE performs decoding (aka, blind decoding) on a set of PDCCH candidates to acquire DCI transmitted via the PDCCH. The set of PDCCH candidates decoded by the UE is defined as a PDCCH search space set. The search space set may be a common search space or a UE-specific search space. The UE may acquire DCI by monitoring PDCCH candidates in one or more search space sets configured by MIB or higher layer signaling.
A UE transmits a related signal to a base station via an uplink channel to be described later, and the base station receives the related signal from the UE via the uplink channel to be described later.
A PUSCH carries uplink data (e.g., UL-shared channel transport block, UL-SCH TB) and/or uplink control information (UCI) and is transmitted based on a CP-OFDM (Cyclic Prefix-Orthogonal Frequency Division Multiplexing) waveform, a DFT-s-OFDM (Discrete Fourier Transform-spread-Orthogonal Frequency Division Multiplexing) waveform, or the like. When the PUSCH is transmitted based on the DFT-s-OFDM waveform, the UE transmits the PUSCH by applying transform precoding. For example, if transform precoding is disabled, the UE may transmit the PUSCH based on the CP-OFDM waveform, and if transform precoding is enabled, the UE may transmit the PUSCH based on the DFT-s-OFDM waveform. The PUSCH transmission may be dynamically scheduled by a UL grant within DCI, or may be semi-statically scheduled based on higher layer (e.g., RRC) signaling (and/or layer 1 (L1) signaling (e.g., PDCCH)) (configured grant). The PUSCH transmission may be performed based on a codebook or a non-codebook.
A PUCCH carries uplink control information, HARQ-ACK, and/or scheduling request (SR), and may be divided into multiple PUCCHs based on a PUCCH transmission length.
A 6G (wireless communication) system has purposes such as (i) a very high data rate per device, (ii) a very large number of connected devices, (iii) global connectivity, (iv) a very low latency, (v) a reduction in energy consumption of battery-free IoT devices, (vi) ultra-reliable connectivity, and (vii) connected intelligence with machine learning capability. The vision of the 6G system may include four aspects such as intelligent connectivity, deep connectivity, holographic connectivity, and ubiquitous connectivity, and the 6G system may satisfy the requirements shown in Table 1 below. That is, Table 1 shows an example of the requirements of the 6G system.
The 6G system may have key factors such as enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), massive machine type communications (mMTC), AI integrated communication, tactile Internet, high throughput, high network capacity, high energy efficiency, low backhaul and access network congestion, and enhanced data security.
The 6G system is expected to have 50 times greater simultaneous wireless communication connectivity than a 5G wireless communication system. URLLC, a key feature of 5G, will become an even more important technology in 6G by providing end-to-end latency of less than 1 ms. The 6G system will have much better volumetric spectral efficiency, as opposed to the frequently used areal spectral efficiency. The 6G system can provide advanced battery technology for energy harvesting and very long battery life, so mobile devices may not need to be charged separately in the 6G system. In 6G, new network characteristics may be as follows.
In the new network characteristics of 6G described above, several general requirements may be as follows.
The most important technology in the 6G system, and one that will be newly introduced, is AI. AI was not involved in the 4G system. The 5G system will support partial or very limited AI. However, the 6G system will support AI for full automation. Advances in machine learning will create a more intelligent network for real-time communication in 6G. When AI is introduced to communication, real-time data transmission can be simplified and improved. AI may determine a method of performing complicated target tasks using numerous analyses. That is, AI can increase efficiency and reduce processing delay.
Recently, attempts have been made to integrate AI with wireless communication systems in the application layer or the network layer, and in particular, deep learning has been focused on the wireless resource management and allocation field. However, such studies have gradually extended to the MAC layer and the physical layer, and in particular, attempts to combine deep learning with wireless transmission in the physical layer are emerging.
AI-based physical layer transmission means applying an AI-driven signal processing and communication mechanism, rather than a traditional communication framework, to fundamental signal processing and communication. For example, it may include channel coding and decoding based on deep learning, signal estimation and detection based on deep learning, multiple input multiple output (MIMO) mechanisms based on deep learning, and resource scheduling and allocation based on AI.
Machine learning may be used for channel estimation and channel tracking, and may be used for power allocation, interference cancellation, etc. in the downlink (DL) physical layer. Machine learning may also be used for antenna selection, power control, symbol detection, etc. in the MIMO system.
Machine learning refers to a series of operations to train a machine in order to create a machine capable of performing tasks that are difficult or impossible for people to do. Machine learning requires data and a learning model. In machine learning, data learning methods may be roughly divided into three categories: supervised learning, unsupervised learning, and reinforcement learning.
The goal of neural network learning is to minimize the output error. Neural network learning refers to a process of repeatedly inputting training data to a neural network, calculating the error between the output of the neural network and the target for the training data, backpropagating the error from the output layer to the input layer of the neural network in order to reduce it, and updating the weight of each node of the neural network.
The supervised learning may use training data labeled with a correct answer, and the unsupervised learning may use training data which is not labeled with a correct answer. That is, for example, in supervised learning for data classification, training data may be data in which each training example is labeled with a category. The labeled training data may be input to the neural network, and the error may be calculated by comparing the output (category) of the neural network with the label of the training data. The calculated error is backpropagated in the neural network in the reverse direction (i.e., from the output layer to the input layer), and a connection weight of respective nodes of each layer of the neural network may be updated based on the backpropagation. Change in the updated connection weight of each node may be determined depending on a learning rate. The calculation of the neural network for input data and the backpropagation of the error may constitute a learning cycle (epoch). The learning rate may be differently applied based on the number of repetitions of the learning cycle of the neural network. For example, in the early stage of learning of the neural network, efficiency can be increased by allowing the neural network to rapidly ensure a certain level of performance using a high learning rate, and in the late stage of learning, accuracy can be increased using a low learning rate.
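The epoch/backpropagation cycle and the high-then-low learning-rate schedule described above can be sketched with a toy supervised example. All values (the target function y = 2x, the learning rates, the epoch count) are illustrative assumptions.

```python
# Toy supervised-learning loop: a single weight w is fit to labeled data
# (here generated from y = 2x) by gradient descent, with a high learning
# rate in the early epochs and a low learning rate in the late epochs.
def train(data, epochs=100):
    w = 0.0
    for epoch in range(epochs):
        lr = 0.1 if epoch < epochs // 2 else 0.01  # high early, low late
        for x, y in data:
            error = w * x - y      # forward pass and error vs. the label
            w -= lr * error * x    # backpropagated gradient update
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])  # converges near w = 2.0
```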
The learning method may vary depending on the features of the data. For example, in order for a reception end to accurately predict data transmitted from a transmission end in a communication system, it is preferable that learning is performed using supervised learning rather than unsupervised learning or reinforcement learning.
The learning model may range from the most basic linear model to a structure modeled on the human brain; a paradigm of machine learning that uses, as the learning model, a neural network structure with high complexity, such as an artificial neural network, is referred to as deep learning.
Neural network cores used as the learning method roughly include a deep neural network (DNN) method, a convolutional neural network (CNN) method, and a recurrent neural network (RNN) method.
The artificial neural network is an example of connecting several perceptrons.
Referring to
The perceptron structure illustrated in
A layer where the input vector is located is called an input layer, a layer where a final output value is located is called an output layer, and all layers located between the input layer and the output layer are called a hidden layer.
The above-described input layer, hidden layer, and output layer can be jointly applied in various artificial neural network structures, such as CNN and RNN to be described later, as well as the multilayer perceptron. The greater the number of hidden layers, the deeper the artificial neural network is, and a machine learning paradigm that uses the sufficiently deep artificial neural network as a learning model is called deep learning. In addition, the artificial neural network used for deep learning is called a deep neural network (DNN).
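The input/hidden/output layer structure described above can be sketched as a minimal multilayer-perceptron forward pass: each layer computes a weighted sum of its inputs followed by an activation function. The weight and bias values below are arbitrary illustration values.

```python
import math

# Sigmoid activation applied after each weighted sum.
def sigmoid(v):
    return 1 / (1 + math.exp(-v))

# One layer: for each node, a weighted sum of the inputs plus a bias,
# passed through the activation function.
def layer(inputs, weights, biases):
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [1.0, 0.5]                                        # input layer
h = layer(x, [[0.4, -0.2], [0.3, 0.8]], [0.0, 0.1])   # hidden layer
y = layer(h, [[1.0, -1.0]], [0.0])                    # output layer
```

Stacking more `layer` calls gives the deeper networks (DNNs) the text describes.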
The deep neural network illustrated in
Based on how the plurality of perceptrons are connected to each other, various artificial neural network structures different from the above-described DNN can be formed.
In the DNN, nodes located inside one layer are arranged in a one-dimensional longitudinal direction. However, in
The convolutional neural network of
One filter has as many weights as its size, and the weights may be learned so that a certain feature of an image can be extracted and output as a factor. In
The filter performs the weighted sum and the activation function calculation while moving horizontally and vertically by a predetermined interval when scanning the input layer, and places the output value at a location of a current filter. This calculation method is similar to the convolution operation on images in the field of computer vision. Thus, a deep neural network with this structure is referred to as a convolutional neural network (CNN), and a hidden layer generated as a result of the convolution operation is referred to as a convolutional layer. In addition, a neural network in which a plurality of convolutional layers exists is referred to as a deep convolutional neural network (DCNN).
At the node where a current filter is located at the convolutional layer, the number of weights may be reduced by calculating a weighted sum including only nodes located in an area covered by the filter. Hence, one filter can be used to focus on features for a local area. Accordingly, the CNN can be effectively applied to image data processing in which a physical distance on the 2D area is an important criterion. In the CNN, a plurality of filters may be applied immediately before the convolution layer, and a plurality of output results may be generated through a convolution operation of each filter.
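The filter scan described above (a weighted sum over only the nodes the filter covers, repeated as the filter slides over the input) can be sketched as follows, with stride 1, no padding, and no activation for clarity; the function name and example values are illustrative.

```python
# Minimal 2D convolution sketch: a small kernel slides over the input,
# computing a weighted sum at each position it covers.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

out = conv2d([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]],
             [[1, 0],
              [0, 1]])  # 2x2 map of local weighted sums
```

Note how the 2x2 kernel has only 4 weights regardless of the input size, which is the weight-reduction property the text highlights.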
Depending on data attributes, there may be data whose sequence characteristics are important. A structure in which one element of the data sequence is input at each time step, considering the length variability and ordering relationships of sequence data, and the output vector (hidden vector) of the hidden layer at a specific time step is input together with the next element of the data sequence, is referred to as a recurrent neural network structure.
Referring to
Referring to
Hidden vectors (z1(1), z2(1), . . . , zH(1)), obtained when input vectors (x1(1), x2(1), . . . , xd(1)) at time step 1 are input to the recurrent neural network, are input together with input vectors (x1(2), x2(2), . . . , xd(2)) at time step 2 to determine the vectors (z1(2), z2(2), . . . , zH(2)) of the hidden layer through a weighted sum and an activation function. This process is performed repeatedly at time steps 2, 3, . . . , T.
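The recurrence described above (the hidden vector from the previous time step fed back together with the next input element) can be sketched as follows; the tanh activation, dimensions, and random weights are illustrative assumptions:

```python
import numpy as np

def rnn_forward(X, Wx, Wh, b):
    """Run a simple recurrent cell over a sequence: z_t = tanh(Wx x_t + Wh z_{t-1} + b)."""
    H = Wh.shape[0]
    z = np.zeros(H)                  # hidden vector before time step 1
    hidden = []
    for x_t in X:                    # one element of the data sequence per time step
        z = np.tanh(Wx @ x_t + Wh @ z + b)   # weighted sum + activation
        hidden.append(z)
    return np.stack(hidden)          # shape (T, H): hidden vectors z(1) ... z(T)

rng = np.random.default_rng(0)
d, H, T = 3, 4, 5                    # input size, hidden size, sequence length
X = rng.standard_normal((T, d))
Z = rnn_forward(X, rng.standard_normal((H, d)), rng.standard_normal((H, H)), np.zeros(H))
```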
When a plurality of hidden layers are disposed in the recurrent neural network, this is referred to as a deep recurrent neural network (DRNN). The recurrent neural network is designed to be usefully applied to sequence data (e.g., natural language processing).
A neural network core used as a learning method includes various deep learning methods such as a restricted Boltzmann machine (RBM), a deep belief network (DBN), and a deep Q-network, in addition to the DNN, the CNN, and the RNN, and may be applied to fields such as computer vision, speech recognition, natural language processing, and voice/signal processing.
In federated learning, which is a scheme of distributed machine learning, each of a plurality of devices that are the subjects of learning shares local model parameters with a server, and the server collects the local model parameters of each device and updates a global parameter. The local model parameters may include parameters such as the weight and gradient of a local model, and the local model parameters may be expressed in various other ways as long as they can be interpreted identically or similarly to local parameters. If federated learning is applied to 5G communication or 6G communication, the device may be a user equipment (UE), and the server may be a base station (BS). Hereinafter, the UE/device/transmitter and the server/base station/receiver may be used interchangeably for convenience of explanation.
In the above process, each device does not share raw data with the server, thereby reducing communication overhead during a data transmission process and protecting personal information of the device (user).
More specifically,
Devices 1011, 1012 and 1013 transmit their local parameters to a server 1020 on resources allocated to each of the devices 1011, 1012 and 1013 (1010). In this instance, before the devices 1011, 1012 and 1013 transmit the local parameters, the devices 1011, 1012 and 1013 may receive configuration information on learning parameters for federated learning from the server 1020. The configuration information on the learning parameters for federated learning may include parameters such as weight and gradient of local models, and the learning parameters included in the local parameters transmitted by the devices 1011, 1012 and 1013 may be determined based on the configuration information. After the reception of the configuration information, the devices 1011, 1012 and 1013 may receive control information for resource allocation for transmission of the local parameters. Each of the devices 1011, 1012 and 1013 may transmit the local parameters on resources allocated based on the control information.
Afterwards, the server 1020 performs offline aggregation (1021 and 1022) on the local parameters received from each of the devices 1011, 1012 and 1013.
In general, the server 1020 derives a global parameter through averaging of all the local parameters received from the devices 1011, 1012 and 1013 participating in federated learning, and transmits the derived global parameter to each of the devices 1011, 1012 and 1013.
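The server-side aggregation described above can be sketched as a simple average over the received local parameters (the parameter vectors below are hypothetical stand-ins for the devices' actual local models):

```python
import numpy as np

def aggregate_global(local_params):
    """Server-side update: average the local parameters of all participating devices."""
    return np.mean(np.stack(local_params), axis=0)

# Hypothetical local parameter vectors received from devices 1011, 1012 and 1013.
locals_ = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_param = aggregate_global(locals_)   # broadcast back to every device
```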
However, in the working process of the orthogonal division access based federated learning, the overhead generated in terms of radio resource usage is very large (i.e., the required radio resources grow linearly with the number of devices participating in learning). Further, when orthogonal division access based federated learning operates on limited resources, as the number of devices participating in federated learning increases, the time required to update the global parameter may be delayed (increased).
More specifically,
The AirComp based federated learning is a method in which all devices participating in federated learning each transmit their local parameters on the same resources. Hence, the AirComp based federated learning can solve the problem, described above with reference to
In
The local parameters transmitted by the devices 1111, 1112 and 1113 are transmitted based on an analog method or a digital method. The analog method means that pulse amplitude modulation (PAM) is simply applied to a gradient value, and the digital method means that quadrature amplitude modulation (QAM) or phase shift keying (PSK), which is a typical digital modulation method, is applied to a gradient value. The server 1120 may obtain a sum of the local parameters transmitted based on the analog or digital method received by superposition on air (1121). Afterwards, the server 1120 derives a global parameter through averaging of all the local parameters and transmits the derived global parameter to each of the devices 1111, 1112 and 1113.
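As a rough illustration of the superposition described above (assuming ideal channels, perfect synchronization, and hypothetical PAM-modulated gradient values), the on-air sum and the server-side averaging can be sketched as:

```python
import numpy as np

# Hypothetical PAM-modulated gradient values transmitted simultaneously by three devices.
gradients = [np.array([0.2, -0.1]), np.array([0.4, 0.3]), np.array([-0.3, 0.1])]

# On-air superposition: the channel itself adds the simultaneously transmitted symbols,
# so the server receives the sum directly rather than each device's signal separately.
superposed = np.sum(gradients, axis=0)

# The server divides by the number of devices to derive the global parameter.
global_param = superposed / len(gradients)
```

Note that, unlike the orthogonal access case, no per-device resource is consumed here; the same sketch holds for any number of devices.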
In the AirComp based federated learning, because devices participating in the federated learning each transmit local parameters on the same resources, the number of devices participating in learning does not significantly affect latency. That is, even if the number of devices participating in the federated learning increases, the time it takes to update the global parameter does not change significantly compared to when a small number of devices participate in the federated learning. Therefore, the AirComp based federated learning can be efficient in terms of radio resource management.
In general, in the AirComp method, it is difficult to obtain an error correction performance gain by using the existing transmission/reception chain and binary channel coding. As a solution to this, a restriction-based scalable Q-ary code method for the AirComp method has been proposed. Different information restrictions and modulations are applied depending on whether Q is a prime number (or q^2) or a power of 2 (2^q). The biggest difference between the two cases is that the degree of freedom (dof) of the available channel in the modulation is 2, whereas the degree of freedom of a transmitted symbol (the number of components of a polynomial) can be greater or less than 2. In the former case, modulation is possible without much consideration of accumulations as many as the number of users/UEs. On the other hand, in the latter case (Q=2^q), when multiple components are projected onto one channel, the amplitude modulation of each component shall be performed considering accumulations as many as the number of users. For example, if Q=2^4, a set of degree-3 polynomials becomes the symbol set ({p(z)=Σ_{i=1}^{4} a_i z^{i−1} | a_i∈{0,1}}). Here, if (a1, a3) is mapped to a real channel by amplitude modulation, the modulation shall be performed taking into account that each component can be accumulated as many times as the number of users participating in AirComp. Assuming that four users participate in learning, if modulation of amplitude 1 has been performed on a1, ambiguity can be removed only when modulation of at least amplitude 5 is performed on a3. That is, as the number of users increases, assuming the same power in symbol mapping of the parity part, the distance between contiguous symbols decreases exponentially. Therefore, repetition transmission of the parity part, which is a retransmission method for AirComp, may not be efficient in the case of Q=2^q. The present disclosure proposes a retransmission method suitable for the case of Q=2^q.
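The amplitude example above (four users, amplitude 1 on a1 requiring at least amplitude 5 on a3) can be verified with a small exhaustive check; the helper below is purely illustrative, not part of the proposed method:

```python
from itertools import product

def is_ambiguous(A1, A3, U):
    """Check whether two binary components, amplitude-modulated on the same real
    channel, can collide once contributions from U users accumulate on air."""
    seen = {}
    for s1, s3 in product(range(U + 1), repeat=2):  # accumulated sums range 0..U
        r = A1 * s1 + A3 * s3                       # superposed received amplitude
        if r in seen and seen[r] != (s1, s3):
            return True                             # same received value, different data
        seen[r] = (s1, s3)
    return False

# Four users: amplitude 4 on a3 still collides (e.g., 4*1+0 vs 0+1*4), amplitude 5 does not.
collides_at_4 = is_ambiguous(1, 4, 4)
safe_at_5 = not is_ambiguous(1, 5, 4)
```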
Before describing a method proposed in the present disclosure, notations used in the present disclosure are defined as below.
Notations: regular characters represent a scalar, bold lowercase and uppercase characters represent a vector and a matrix, and calligraphic characters represent a set. For example, x, 𝐱, 𝐗 and 𝒳 denote a scalar, a vector, a matrix and a set, respectively. x[i] denotes the i-th entry of vector 𝐱, and [x[i]]_{i=m}^{n}=[x[m], x[m+1], . . . , x[n]]. ⌈⋅⌉, ⌊⋅⌋ and (⋅)_q denote the ceiling, flooring and modulo-q operations. |x| and |𝒳| denote the absolute value of x and the cardinality of 𝒳, and ‖⋅‖_2 denotes the l2-norm.
The following describes a retransmission method in the case of Q=2q that can be properly applied in restriction-based scalable Q-ary code based AirComp and restriction-based scalable Q-ary linear code based AirComp. A method proposed in the present disclosure considers the following three characteristics generated based on an information restriction.
The codeword and the modulation method in restriction-based Q-ary linear code are described below.
A set of all devices participating in federated learning is defined as 𝒰={1, . . . , U}, and the finite field in which non-binary coding is performed is defined as 𝔽_q. In this instance, q is assumed to be at least equal to U. The information sequence with length qK of each user-u is defined as I_u^b=[i_u^b[k]]_{k=1}^{qK}, and an information freedom is defined as μ∈{1, . . . , q/U}. This means the degree to which information can be carried in each partial sequence with length q of user-u, and zero padding is performed on the remaining q−μ positions. I_u^b may be expressed as in Equation 1 below.
Here, d2b(l, μ) is a function that converts a non-negative decimal integer l into a binary vector of length μ, and σ(⋅, a) denotes the set obtained by right cyclic-shifting the respective elementary vectors of a set by a. Further, the reason for shifting the information sequence set in units of partial sequences with length q for each user (i.e., adjusting the location where information is carried) is to set the average power of each user to be the same.
The above binary sequence I_u^b represented as a Q-ary symbol sequence is defined as I_u. A parity check matrix defined in the finite field is defined as H∈𝔽_q^{M×N} (M=N−K). The binary representation of H is defined as H^b∈{0,1}^{qM×qN}, and the generator matrix for this is defined as G^b=[I_{qK}, P] (∈{0,1}^{qK×qN}). The codeword obtained through encoding of each user-u is c_u^b=(G^b)^T I_u^b, and the systematic codeword satisfies [c_u^b[k]]_{k=1}^{qK}=I_u^b. The codeword represented as a Q-ary symbol sequence is defined as c_u.
The modulation order is determined by the number U of users participating in federated learning and the information freedom μ. In units of partial sequences with length q, a portion of length q−μU at the rear end of the systematic sequence is always zero-padded and is not used. Therefore, the systematic part of the codeword on which modulation is performed, considering this, is defined as c_u^{sys,b}. c_u^{sys,b} is defined as in Equation 2 below.
When a receiver observes the aggregated modulated symbols, the effective modulation order is 2^{μU}, but the modulation order at each transmitter is U(2^μ−1)+1. For example, if q=6, μ=2, and U=3, 2^{μU}=64 symbols corresponding to 000000 to 111111 are observed at the receiver. Therefore, the modulation order at the receiver is 2^{μU}.
Because each transmitter modulates and transmits symbols for 000000, 100000, 010000, 110000, 001000, 000100, 001100, 000010, 000001, and 000011, the modulation order at the transmitter is U(2^μ−1)+1. Here, the important point is that there must be no ambiguity between the respective symbols when observed by the receiver. A modulation method considering this is shown in Equation 3 below.
Here, b denotes an offset-term and can be set appropriately depending on the purpose. For example, the offset-term may be set in terms of optimization of power consumption at the transmitter or optimization of the received signal range at the receiver. More specifically, when the purpose is that the constellation is observed symmetrically on each of the I-channel and Q-channel for optimization of the received signal range at the receiver, the offset-term may be defined as in Equation 4 below.
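The modulation-order counts above, 2^{μU} at the receiver and U(2^μ−1)+1 at each transmitter, can be checked numerically; the sketch below uses the example values μ=2 and U=3 from the q=6 example:

```python
def modulation_orders(mu, U):
    """Effective modulation order at the receiver vs. the order each transmitter uses."""
    receiver_order = 2 ** (mu * U)              # aggregated symbols observed on air
    transmitter_order = U * (2 ** mu - 1) + 1   # distinct symbols any single UE sends
    return receiver_order, transmitter_order

rx_order, tx_order = modulation_orders(mu=2, U=3)  # q=6 example from the text
```

With these values the receiver observes 64 aggregated symbols while each transmitter distinguishes only the 10 symbols listed above.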
The modulation of the parity part is described below.
A method of scheduling devices to participate in federated learning is described below. A set of all users and a set of channels are defined as 𝒰={1, . . . , Ū} and ℋ={h_u^{ch}}_{u=1}^{Ū}. It is assumed that the channels h_u^{ch} between device-u and a server are sorted in descending order (|h_i^{ch}|^2 ≥ |h_j^{ch}|^2 for i<j). If Q is given, the maximum number of users is set to U=q, and U devices are selected to construct a collection set {1, . . . , U}. Scheduling of the devices to participate in federated learning among all the U devices is performed using the method shown in Equation 5 below.
Here, U*=|𝒰*| denotes the number of scheduled devices, η_th denotes a minimum required reception sensitivity, and η_ratio denotes a channel gain difference ratio, meaning that only devices whose channel gain is within a certain ratio of the maximum channel gain will be allowed to participate in learning. In this instance, there is a trade-off between the power loss of devices with good channels and the participation rate of devices in learning. That is, as the participation rate of devices participating in learning increases, the power loss of devices with good channels may increase.
When this method is applied to a federated learning process, a plurality of UEs participating in federated learning receive a channel state information reference signal (CSI-RS) from a server, transmit channel state information (CSI) calculated based on the CSI-RS to the server, and receive scheduling information, that allows one UE of the plurality of UEs to participate in the federated learning, from the server. In this instance, the scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and the plurality of UEs.
Further, the scheduling information may be used to determine whether a specific UE participates in the federated learning. The reference channel state may refer to a channel state between the server and a UE that allows a channel gain between the server and the UE to be highest among the plurality of UEs. Whether the specific UE participates in the federated learning may be determined based on whether a ratio of a channel gain of a channel between the specific UE and the server to a channel gain of the reference channel state is equal to or greater than a specific threshold. More specifically, based on the ratio of the channel gain of the channel between the specific UE and the server to the channel gain of the reference channel state being less than the specific threshold, the specific UE may not participate in the federated learning. In addition, based on the ratio of the channel gain of the channel between the specific UE and the server to the channel gain of the reference channel state being equal to or greater than the specific threshold, the specific UE may participate in the federated learning.
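A minimal sketch of the ratio-threshold scheduling rule described above (the channel gains, η_th, and η_ratio values are hypothetical; Equation 5 itself is not reproduced here):

```python
import numpy as np

def schedule_devices(channel_gains, eta_th, eta_ratio):
    """Keep UEs that meet the minimum reception sensitivity and whose gain ratio
    to the best (reference) channel is at least eta_ratio."""
    gains = np.asarray(channel_gains, dtype=float)
    reference = gains.max()                          # reference channel state
    mask = (gains >= eta_th) & (gains / reference >= eta_ratio)
    return np.flatnonzero(mask)                      # indices of scheduled UEs

gains = [0.9, 0.6, 0.45, 0.1]   # hypothetical |h|^2 values, sorted in descending order
selected = schedule_devices(gains, eta_th=0.2, eta_ratio=0.5)
```

Here the last UE fails the sensitivity threshold and would not participate, while the first three pass both checks.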
This method relates to selecting the number of retransmissions dedicated to parity bit transmission. In this method, the number of retransmissions T is determined considering the available resource situation. In this instance, T is equal to or less than ⌈q/2⌉. The reason for bounding the number of retransmissions in this way is that if T=⌈q/2⌉, only one polynomial component is modulated per channel (real/imaginary) during modulation. Therefore, even with the same power P, the parity part is no longer less reliable than the systematic part. T is selected from [1, ⌈q/2⌉] considering the available resource situation and a target reliability.
When this method is applied to the federated learning process, the UE, that is determined to participate in the federated learning through the scheduling of the device to participate in the federated learning described above, encodes a local parameter for performing the federated learning. In this instance, the encoded local parameter includes a systematic part and a parity part. Afterwards, the UE modulates the encoded local parameter, and the parity part is modulated based on the number of retransmissions determined based on (i) modulation order of the systematic part and the parity part and (ii) the maximum number of UEs participating in the federated learning. The UE transmits the modulated local parameter to the server based on the scheduling information and the number of retransmissions.
This method relates to a power allocation method for devices participating in federated learning. The power allocation for systematic part transmission is determined as in Equation 6 below.
Among the devices participating in federated learning, the user with the worst channel gain transmits using the maximum power P, and the remaining users perform power control so that their signals arrive with the same reception sensitivity as the signal transmitted by the user with the worst channel gain.
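The power control described above can be sketched as follows (hypothetical channel gains; the worst-channel user transmits at the maximum power P and the others are scaled down so that all received levels match):

```python
import numpy as np

def allocate_power(channel_gains, P_max):
    """Worst-channel UE transmits at P_max; the rest scale down so that every
    UE's received level equals that of the worst-channel UE."""
    gains = np.asarray(channel_gains, dtype=float)
    worst = gains.min()
    return P_max * worst / gains    # P_u * |h_u|^2 is then identical for all UEs

gains = np.array([1.0, 0.5, 0.25])  # hypothetical channel gains |h|^2
powers = allocate_power(gains, P_max=1.0)
```

Equalizing the received levels is what allows the superposed AirComp sum to weight every device's local parameter equally.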
In this instance, modulation for a parity part may be performed as below.
Here, A_l and b1, b2 denote an amplitude modulation term and offset-terms, and can be expressed as in Equations 9 and 10 below, respectively.
The power allocation for parity part transmission is determined as in Equation 11 below.
Here, P_ave^{sys} and P_ave^{par,I} denote the constellation average power transmitted in the systematic part and the constellation average power transmitted in the parity part, respectively.
When two types of modulation are performed as shown in Equation 8 above, power control is performed by calculating the average power for each modulation type as shown in the third term of Equation 11 above. The power of the user with the worst channel is determined considering the maximum transmission power P and the reception sensitivity of the systematic part. Afterward, the power of the remaining users is controlled using the power of the user with the worst channel as a reference.
This method relates to a resource allocation method for devices participating in federated learning. The respective devices participating in federated learning share the same resources and transmit the modulated sequences of the systematic part and the parity part to a server. The sequence of the parity part is transmitted as partial modulated sequences using time/frequency resources T times. The resource overhead caused by transmitting the partial modulated sequences T times is expressed as in Equation 12 below.
It can be seen from
This method relates to a pre-processing performed before a receiver obtains a soft-value for channel decoding. An entry of a received signal
for the pre-processing at the receiver can be expressed as in Equation 13 below.
Here, w[n] is additive white Gaussian noise (AWGN) following w[n]~CN(0,1) or N(0,1).
The systematic part and the parity part can be divided as in Equations 14 and 15 below.
[systematic part: n=1, . . . ,K]
[parity part: n=1, . . . ,N−K and t=1, . . . ,└q/rm1┘+1]
The receiver obtains a soft-value for each n-th component (r_t^{par}[n]) and obtains the soft value of the n-th polynomial through summation in the log-domain (of the soft-values of the components corresponding to partition-t of the n-th polynomial). Afterwards, the receiver decodes the received signal through a decoding operation.
Next, the one UE transmits, to the server, channel state information (CSI) calculated based on the CSI-RS, in S1520.
Next, the one UE receives, from the server, scheduling information that allows one UE to participate in the federated learning, in S1530.
The scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and the plurality of UEs.
Next, the one UE encodes a local parameter for performing the federated learning, in S1540. In this instance, the encoded local parameter includes a systematic part and a parity part.
Next, the one UE modulates the encoded local parameter, in S1550. The parity part is modulated based on the number of retransmissions determined based on (i) modulation order of the systematic part and the parity part and (ii) the maximum number of UEs participating in the federated learning.
Finally, the one UE transmits, to the server, the modulated local parameter based on the scheduling information and the number of retransmissions, in S1560. A transmission power for the local parameter is controlled based on a difference between a channel state of a channel between the one UE and the server and the reference channel state.
Next, the base station receives, from each of the plurality of UEs, channel state information (CSI) calculated based on the CSI-RS, in S1620.
Next, the base station transmits, to each of the plurality of UEs, scheduling information that allows the plurality of UEs to participate in the federated learning, in S1630. The scheduling information is constructed based on a reference channel state configured based on channel state information of each of channels between the server and the plurality of UEs.
Next, the base station receives, from each of the plurality of UEs, a local parameter for performing the federated learning of each of the plurality of UEs, the local parameter being encoded and modulated by each of the plurality of UEs, in S1640. The encoded local parameter includes a systematic part and a parity part. The parity part is modulated based on the number of retransmissions determined based on (i) modulation order of the systematic part and the parity part and (ii) the number of the plurality of UEs participating in the federated learning. The local parameter of each of the plurality of UEs is transmitted based on the scheduling information and the number of retransmissions. A transmission power for the local parameter is controlled based on a difference between a channel state of the channels between the plurality of UEs and the server and the reference channel state.
Although not limited thereto, various proposals of the present disclosure described above can be applied to various fields requiring wireless communication/connection (e.g., 5G) between devices.
Hereinafter, a description will be given in more detail with reference to the drawings. In the following drawings/description, the same reference numerals may denote the same or corresponding hardware blocks, software blocks, or functional blocks, unless otherwise stated.
Referring to
Referring to
The first wireless device 100 may include one or more processors 102 and one or more memories 104 storing various information related to an operation of the one or more processors 102 and may further include one or more transceivers 106 and/or one or more antennas 108. The processor 102 may control the memory 104 and/or the transceiver 106 and may be configured to implement functions, procedures and/or methods described/proposed above.
Referring to
Codewords may be converted into radio signals via the signal processing circuit 1000 of
Specifically, the codewords may be converted into scrambled bit sequences by the scramblers 1010. Modulation symbols of each transport layer may be mapped (precoded) to corresponding antenna port(s) by the precoder 1040. Outputs z of the precoder 1040 may be obtained by multiplying outputs y of the layer mapper 1030 by an N*M precoding matrix W, where N is the number of antenna ports, and M is the number of transport layers. The precoder 1040 may perform precoding after performing transform precoding (e.g., DFT transform) for complex modulation symbols. Alternatively, the precoder 1040 may perform precoding without performing transform precoding. The resource mappers 1050 may map modulation symbols of each antenna port to time-frequency resources.
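The precoder step above (outputs z obtained by multiplying the layer-mapped outputs y by an N*M precoding matrix W) can be sketched with illustrative dimensions and random complex symbols:

```python
import numpy as np

N, M = 4, 2   # N antenna ports, M transport layers (illustrative values)
rng = np.random.default_rng(1)

# Hypothetical precoding matrix W (N x M) and layer-mapped modulation symbols y
# (one row per transport layer, one column per symbol position).
W = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
y = rng.standard_normal((M, 6)) + 1j * rng.standard_normal((M, 6))

z = W @ y     # one precoded symbol stream per antenna port
```

The resource mappers would then place each of the N output streams onto time-frequency resources of its antenna port.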
Signal processing procedures for a received signal in the wireless device may be configured in a reverse manner of the signal processing procedures 1010 to 1060 of
Referring to
The additional components 140 may be variously configured based on types of wireless devices. For example, the additional components 140 may include at least one of a power unit/battery, input/output (I/O) unit, a driving unit, and a computing unit. The wireless device may be implemented in the form of the robot (100a of
Examples of implementation of
Referring to
The communication unit 110 may transmit and receive signals (e.g., data and control signals) to and from other wireless devices or BSs. The control unit 120 may perform various operations by controlling components of the hand-held device 100. The control unit 120 may include an application processor (AP). The memory unit 130 may store data/parameters/programs/codes/instructions needed to drive the hand-held device 100. The memory unit 130 may store input/output data/information. The power supply unit 140a may supply power to the hand-held device 100 and include a wired/wireless charging circuit, a battery, etc. The interface unit 140b may support connection of the hand-held device 100 to other external devices. The interface unit 140b may include various ports (e.g., an audio I/O port and a video I/O port) for connection with external devices. The I/O unit 140c may input or output video information/signals, audio information/signals, data, and/or information input by a user. The I/O unit 140c may include a camera, a microphone, a user input unit, a display unit 140d, a speaker, and/or a haptic module.
Referring to
The communication unit 110 may transmit and receive signals (e.g., data and control signals) to and from external devices such as other vehicles, BSs (e.g., gNBs and road side units), and servers. The control unit 120 may perform various operations by controlling elements of the vehicle or the autonomous vehicle 100. The control unit 120 may include an electronic control unit (ECU). The driving unit 140a may allow the vehicle or the autonomous vehicle 100 to drive on a road. The driving unit 140a may include an engine, a motor, a powertrain, a wheel, a brake, a steering device, etc. The power supply unit 140b may supply power to the vehicle or the autonomous vehicle 100 and include a wired/wireless charging circuit, a battery, etc. The sensor unit 140c, which may include various types of sensors, may obtain a vehicle state, ambient environment information, user information, etc. The autonomous driving unit 140d may implement technology for maintaining a lane on which a vehicle is driving, technology for automatically adjusting speed, such as adaptive cruise control, technology for autonomously driving along a determined path, technology for driving by automatically setting a path if a destination is set, and the like.
Referring to
The communication unit 110 may transmit and receive signals (e.g., data and control signals) to and from external devices such as other vehicles or base stations. The control unit 120 may perform various operations by controlling components of the vehicle 100. The memory unit 130 may store data/parameters/programs/codes/instructions for supporting various functions of the vehicle 100. The I/O unit 140a may output an AR/VR object based on information within the memory unit 130. The I/O unit 140a may include an HUD. The positioning unit 140b may acquire location information of the vehicle 100. The location information may include absolute location information of the vehicle 100, location information of the vehicle 100 within a traveling lane, acceleration information, and location information of the vehicle 100 from a neighboring vehicle. The positioning unit 140b may include a GPS and various sensors.
Referring to
The communication unit 110 may transmit and receive signals (e.g., media data, control signal, etc.) to and from external devices such as other wireless devices, handheld devices, or media servers. The media data may include video, images, sound, etc. The control unit 120 may control components of the XR device 100a to perform various operations. For example, the control unit 120 may be configured to control and/or perform procedures such as video/image acquisition, (video/image) encoding, and metadata generation and processing. The memory unit 130 may store data/parameters/programs/codes/instructions required to drive the XR device 100a/generate an XR object. The I/O unit 140a may obtain control information, data, etc. from the outside and output the generated XR object. The I/O unit 140a may include a camera, a microphone, a user input unit, a display, a speaker, and/or a haptic module. The sensor unit 140b may obtain a state, surrounding environment information, user information, etc. of the XR device 100a. The sensor unit 140b may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint scan sensor, an ultrasonic sensor, a light sensor, a microphone, and/or a radar. The power supply unit 140c may supply power to the XR device 100a and include a wired/wireless charging circuit, a battery, etc.
The XR device 100a may be wirelessly connected to the handheld device 100b through the communication unit 110, and the operation of the XR device 100a may be controlled by the handheld device 100b. For example, the handheld device 100b may operate as a controller of the XR device 100a. To this end, the XR device 100a may obtain 3D location information of the handheld device 100b and generate and output an XR object corresponding to the handheld device 100b.
Referring to
The communication unit 110 may transmit and receive signals (e.g., driving information and control signals) to and from external devices such as other wireless devices, other robots, or control servers. The control unit 120 may perform various operations by controlling components of the robot 100. The memory unit 130 may store data/parameters/programs/codes/instructions for supporting various functions of the robot 100. The I/O unit 140a may obtain information from the outside of the robot 100 and output information to the outside of the robot 100. The I/O unit 140a may include a camera, a microphone, a user input unit, a display unit, a speaker, and/or a haptic module. The sensor unit 140b may obtain internal information of the robot 100, surrounding environment information, user information, etc. The sensor unit 140b may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, a radar, etc. The driving unit 140c may perform various physical operations such as movement of robot joints. In addition, the driving unit 140c may allow the robot 100 to travel on the road or to fly. The driving unit 140c may include an actuator, a motor, a wheel, a brake, a propeller, etc.
Referring to
The communication unit 110 may transmit and receive wired/radio signals (e.g., sensor information, user input, learning models, or control signals) to and from external devices such as other AI devices (e.g., 100x, 200, or 400 of
The control unit 120 may determine at least one feasible operation of the AI device 100, based on information which is determined or generated using a data analysis algorithm or a machine learning algorithm. The control unit 120 may perform an operation determined by controlling components of the AI device 100.
The memory unit 130 may store data for supporting various functions of the AI device 100.
The input unit 140a may acquire various types of data from the exterior of the AI device 100. The output unit 140b may generate output related to a visual, auditory, or tactile sense. The output unit 140b may include a display unit, a speaker, and/or a haptic module. The sensing unit 140 may obtain at least one of internal information of the AI device 100, surrounding environment information of the AI device 100, and user information, using various sensors. The sensor unit 140 may include a proximity sensor, an illumination sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, a light sensor, a microphone, and/or a radar.
The learning processor unit 140c may learn a model consisting of artificial neural networks, using learning data. The learning processor unit 140c may perform AI processing together with the learning processor unit of the AI server (400 of
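The division of work described above — local training at each device's learning processor unit and aggregation of the learned results at the AI server — can be sketched as follows. This is a minimal illustrative sketch of federated averaging in Python; the function names, the toy one-parameter linear model, and the hyperparameters are assumptions chosen for illustration and are not part of the disclosed implementation.

```python
# Minimal federated-averaging sketch: each device's learning processor
# trains a local model on its own learning data, and a server-side
# processor aggregates the locally learned parameters.

def local_train(weight, data, lr=0.05, epochs=20):
    """Gradient descent for a 1-D linear model y = w*x on local data."""
    w = weight
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def server_aggregate(local_weights):
    """Average the locally learned weights (federated averaging)."""
    return sum(local_weights) / len(local_weights)

# Two devices hold different local datasets, all drawn from y = 3*x.
device_data = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
]

global_w = 0.0
for _ in range(5):  # communication rounds between devices and server
    updates = [local_train(global_w, d) for d in device_data]
    global_w = server_aggregate(updates)

print(round(global_w, 2))  # converges toward the true slope 3.0
```

In each communication round, only the learned weight (not the raw local data) is sent to the aggregating server, which mirrors the point of the split: the local data never leaves the device.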
The embodiments described above are implemented by combining components and features of the present disclosure in predetermined forms. Each component or feature should be considered optional unless explicitly stated otherwise. Each component or feature may be practiced without being combined with another component or feature, and some components and/or features may be combined with each other to implement an embodiment of the present disclosure. The order of operations described in embodiments of the present disclosure may be changed. Some components or features of one embodiment may be included in another embodiment, or may be replaced by corresponding components or features of another embodiment. It is apparent that a claim referring to specific claims may be combined with another claim referring to claims other than those specific claims to constitute an embodiment, or that new claims may be added by amendment after the application is filed.
Embodiments of the present disclosure can be implemented by various means, for example, hardware, firmware, software, or combinations thereof. When embodiments are implemented by hardware, one embodiment of the present disclosure can be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and the like.
When embodiments are implemented by firmware or software, one embodiment of the present disclosure can be implemented by modules, procedures, functions, etc. that perform the functions or operations described above. Software code can be stored in a memory and executed by a processor. The memory may be located inside or outside the processor and may exchange data with the processor by various well-known means.
It is apparent to those skilled in the art that the present disclosure can be embodied in other specific forms without departing from its essential features. Accordingly, the above detailed description should not be construed as limiting in all aspects and should be considered illustrative. The scope of the present disclosure should be determined by a reasonable interpretation of the appended claims, and all modifications within the equivalent scope of the present disclosure are included in the scope of the present disclosure.
Although the present disclosure has been described focusing on examples applied to the 3GPP LTE/LTE-A and 5G systems, it is also applicable to various wireless communication systems other than the 3GPP LTE/LTE-A and 5G systems.
Number | Date | Country | Kind |
---|---|---|---|
10-2021-0164721 | Nov 2021 | KR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/KR2022/017362 | 11/7/2022 | WO |