The present disclosure relates to the field of communication technology and, in particular, to an information transmission method and apparatus, and a storage medium.
There are many challenges in a wireless communication system, such as nonlinear problems, the time complexity of computing an optimal solution, the difficulty of accurately characterizing some problems with formulas or models, the impossibility of overall optimization due to an accumulation of errors among different modules in a wireless link, the increased difficulty of finding an optimal solution due to changes in non-ideal factors in practical applications, and the like. At present, both academic research and 3GPP are exploring the use of artificial intelligence (AI)/machine learning (ML) to solve various problems in the wireless communication system. In NR Rel-17, research on AI/ML began with the FS_NR_ENDC_data_collect project. In NR Rel-18, further research will be conducted on the potential of AI/ML in the physical layer of the air interface, such as improving performance indicators (e.g., throughput, accuracy, reliability, and robustness) and reducing resource overhead.
At present, in any practical application scenario, if AI is to be used to solve a problem in the wireless communication system, multiple sets of AI models which are in a one-to-one correspondence need to be deployed on both the UE and gNB sides to meet different information transmission requirements between the UE and the gNB, resulting in high storage overhead and maintenance complexity of the AI models.
The present disclosure provides an information transmission method and apparatus, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an information transmission method applied to a terminal, where the terminal includes X1 first AI models, and the method includes:
In one implementation, the determining the feedback indication information includes:
In one implementation, the first parameter includes at least one of a reference signal received quality (RSRQ), a reference signal received power (RSRP), a signal-to-noise ratio (SNR) or a signal-to-interference-plus-noise ratio (SINR), a received signal strength indicator (RSSI), a bit error rate (BER), a block error rate (BLER), or a modulation and coding scheme (MCS).
In one implementation, the determining the feedback indication information includes:
In one implementation, the determining the feedback indication information includes:
In one implementation, the determining feedback indication information includes:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, or the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the output data of the target first AI model included in the first information corresponds to the feedback indication information.
In one implementation, X1 is greater than 1, and bits for output data of the X1 first AI models are the same in number.
In one implementation, the method further includes:
In one implementation, X1 is greater than 1, and after the determining the feedback indication information, the method further includes:
In one implementation, X2 is greater than 1, and the target second AI model corresponds to the feedback indication information.
In a second aspect, an embodiment of the present disclosure provides an information transmission method applied to a base station, where the base station includes X2 second AI models, and the method includes:
In one implementation, the method further includes:
In one implementation, the determining the feedback indication information includes:
In one implementation, the determining feedback indication information includes:
In one implementation, the method further includes:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, or the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the first information corresponds to the feedback indication information.
In one implementation, X2 is greater than 1, and bits for input data of respective second AI models in the X2 second AI models are different in number.
In one implementation, the determining the input data of the target second AI model according to the feedback indication information and the first information sent by the terminal includes:
In one implementation, the determining the input data of the target second AI model according to the number of bits for the input data of the target second AI model and the number of bits for the first information includes:
In one implementation, the second information is a bit string consisting of M zeroes, where M is a difference between the number of bits for the input data of the target second AI model and the number of bits for the first information.
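The zero-padding described above can be sketched as follows. The helper name and the convention of appending (rather than prepending) the M zeroes are assumptions for illustration, not the disclosure's exact procedure:

```python
def pad_first_information(first_info_bits: str, decoder_input_len: int) -> str:
    """Pad the first information with M zeroes so that it matches the number
    of bits expected as input by the target second AI model."""
    # M = (bits for the decoder input) - (bits for the first information)
    m = decoder_input_len - len(first_info_bits)
    if m < 0:
        raise ValueError("first information is longer than the decoder input")
    # Second information: a bit string consisting of M zeroes (appended here)
    return first_info_bits + "0" * m

# e.g. 6 feedback bits padded up to a 10-bit decoder input
padded = pad_first_information("101100", 10)  # -> "1011000000"
```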
In a third aspect, the present disclosure provides an information transmission apparatus applied to a terminal, where the terminal includes X1 first AI models, the apparatus includes a memory, a transceiver, and a processor:
In one implementation, the processor is configured to perform the following operations:
In one implementation, the first parameter includes at least one of a reference signal received quality (RSRQ), a reference signal received power (RSRP), a signal-to-noise ratio (SNR) or a signal-to-interference-plus-noise ratio (SINR), a received signal strength indicator (RSSI), a bit error rate (BER), a block error rate (BLER), or a modulation and coding scheme (MCS).
In one implementation, the processor is configured to perform the following operations:
In one implementation, the processor is configured to perform the following operations:
In one implementation, the processor is configured to perform the following operations:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, or the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the output data of the target first AI model included in the first information corresponds to the feedback indication information.
In one implementation, X1 is greater than 1, and bits for output data of the X1 first AI models are the same in number.
In one implementation, the processor is configured to perform the following operations:
In one implementation, X1 is greater than 1, and the processor is configured to perform the following operations:
In one implementation, X2 is greater than 1, and the target second AI model corresponds to the feedback indication information.
In a fourth aspect, the present disclosure provides an information transmission apparatus applied to a base station, where the base station includes X2 second AI models, and the apparatus includes a memory, a transceiver, and a processor:
In one implementation, the processor is configured to perform the following operations:
In one implementation, the processor is configured to perform the following operations:
In one implementation, the processor is configured to perform the following operations:
In one implementation, the processor is configured to perform the following operations:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, or the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the first information corresponds to the feedback indication information.
In one implementation, X2 is greater than 1, and bits for input data of respective second AI models in the X2 second AI models are different in number.
In one implementation, the processor is configured to perform the following operations:
In one implementation, the processor is configured to perform the following operations:
In one implementation, the second information is a bit string consisting of M zeroes, where M is a difference between the number of bits for the input data of the target second AI model and the number of bits for the first information.
In a fifth aspect, the present disclosure provides an information transmission apparatus applied to a terminal, where the terminal includes X1 first AI models, and the apparatus includes:
In one implementation, the first determining unit is configured to:
In one implementation, the first parameter includes at least one of a reference signal received quality (RSRQ), a reference signal received power (RSRP), a signal-to-noise ratio (SNR) or a signal-to-interference-plus-noise ratio (SINR), a received signal strength indicator (RSSI), a bit error rate (BER), a block error rate (BLER), or a modulation and coding scheme (MCS).
In one implementation, the first determining unit is configured to:
In one implementation, the first determining unit is configured to:
In one implementation, the first determining unit is configured to:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, or the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the output data of the target first AI model included in the first information corresponds to the feedback indication information.
In one implementation, X1 is greater than 1, and bits for output data of the X1 first AI models are the same in number.
In one implementation, the sending unit is further configured to:
In one implementation, X1 is greater than 1, and the apparatus further includes:
In one implementation, X2 is greater than 1, and the target second AI model corresponds to the feedback indication information.
In a sixth aspect, the present disclosure provides an information transmission apparatus applied to a base station, where the base station includes X2 second AI models, and the apparatus includes:
In one implementation, the apparatus further includes a third determining unit, configured to:
In one implementation, the first determining unit is configured to:
In one implementation, the first determining unit is configured to:
In one implementation, the apparatus further includes:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, or the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the first information corresponds to the feedback indication information.
In one implementation, X2 is greater than 1, and bits for input data of respective second AI models in the X2 second AI models are different in number.
In one implementation, the second determining unit is configured to:
In one implementation, the second determining unit is configured to:
In one implementation, the second information is a bit string consisting of M zeroes, where M is a difference between the number of bits for the input data of the target second AI model and the number of bits for the first information.
In a seventh aspect, the present disclosure provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program is configured to cause a computer to execute the method described in the first aspect or the second aspect.
The present disclosure provides the information transmission method and apparatus, and the storage medium. In the method, a terminal determines information to be transmitted to a base station according to feedback indication information and an AI model on the terminal side; the base station determines input data of an AI model on the base station side according to the feedback indication information and the information sent by the terminal. The AI models on the terminal side and the base station side do not need to maintain a one-to-one correspondence, thus reducing the number of AI models deployed on the terminal side or the base station side, and reducing the storage overhead and maintenance complexity of the AI models.
It should be understood that, content described in the above summary section is not intended to limit key or important features of embodiments of the present disclosure, nor is it intended to limit a scope of the present disclosure. Other features of the present disclosure will become easily understood through the following description.
In order to provide a clearer explanation of the solutions in the present disclosure or the prior art, a brief introduction will be given to the accompanying drawings required for describing the embodiments or the prior art. The accompanying drawings described below illustrate some embodiments of the present disclosure.
The term “and/or” in the present disclosure describes an association relationship between associated objects, indicating that three types of relationships may exist. For example, A and/or B can represent: A exists alone, A and B exist simultaneously, or B exists alone. The character “/” generally indicates that the associated objects before and after it are in an “or” relationship. The term “multiple” in the embodiments of the present disclosure refers to two or more, and other quantifiers are to be understood similarly.
The following will provide a clear and complete description of the embodiments of the present disclosure in conjunction with accompanying drawings in the embodiments of the present disclosure.
The embodiments of the present disclosure provide an information transmission method and apparatus that reduce the storage overhead and maintenance complexity of AI models. The method and apparatus are based on the same application concept. Since the principles by which the method and the apparatus solve the problem are similar, the method embodiments and the apparatus embodiments may refer to each other, and repeated details will not be described again.
The embodiments of the present disclosure can be applied to various systems, especially 5G systems. For example, an applicable system may include a global system for mobile communications (GSM) system, a code division multiple access (CDMA) system, a wideband code division multiple access (WCDMA) system, a general packet radio service (GPRS) system, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a long term evolution advanced (LTE-A) system, a universal mobile telecommunications system (UMTS), a worldwide interoperability for microwave access (WiMAX) system, a 5G new radio (NR) system, and the like. All of these systems include terminal devices and network devices. The systems can also include core network parts, such as an evolved packet system (EPS), a 5G system (5GS), and the like.
The terminal device involved in the embodiments of the present disclosure may refer to a device that provides voice and/or data connectivity to users, a handheld device with a wireless connection capability, or another processing device connected to a wireless modem. In different systems, the name of the terminal device may vary. For example, in the 5G systems, the terminal device can be referred to as a user equipment (UE). A wireless terminal device can communicate with one or more core networks (CN) via a radio access network (RAN), and can be a mobile terminal device such as a mobile phone (also known as a “cellular” phone) or a computer with a mobile terminal device, for example, a portable, pocket-sized, handheld, computer built-in, or vehicle-mounted mobile device that exchanges voice and/or data with the radio access network, such as a personal communication service (PCS) phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), and the like. A wireless terminal device can also be referred to as a system, a subscriber unit, a subscriber station, a mobile station, a mobile, a remote station, an access point, a remote terminal, an access terminal, a user terminal, a user agent, or a user device, which is not limited in the embodiments of the present disclosure.
The base station involved in the embodiments of the present disclosure may include multiple cells that provide services to terminals. Depending on a specific application scenario, the base station can also be referred to as an access point, or it can be a device in an access network that communicates with a wireless terminal device through one or more sectors over an air interface, or it can have other names. For example, the base station involved in the embodiments of the present disclosure may be an evolutional node B (eNB or e-NodeB) in a long term evolution (LTE) system, a 5G base station (gNB) in a 5G network architecture (next generation system), a home evolved node B (HeNB), a relay node, a femto base station, a pico base station, and the like, which are not limited in the embodiments of the present disclosure. In some network architectures, the base station may include a centralized unit (CU) node and a distributed unit (DU) node, which can also be geographically separated.
The base station and the terminal device can each use one or more antennas for a multiple input multiple output (MIMO) transmission, which can be a single user MIMO (SU-MIMO) or a multiple user MIMO (MU-MIMO) transmission. Depending on the form and number of antenna combinations, the MIMO transmission can be a 2D-MIMO, a 3D-MIMO, an FD-MIMO, or a massive MIMO transmission, as well as a diversity transmission, a precoding transmission, a beamforming transmission, and the like.
In the following, applications of AI models in the wireless communication system will be introduced first. AI research on the physical layer in NR Rel-18 mainly identified several use cases, such as channel state information (CSI) feedback, beam management, positioning, channel estimation, and the like. Taking the CSI feedback as an example, the UE obtains CSI by measuring a channel state information reference signal (CSI-RS) and feeds the CSI back to the gNB. Using AI models for the CSI feedback can reduce the overhead of the CSI feedback or improve the accuracy of channel recovery. For different feedback granularities of the CSI feedback, multiple sets of AI models which are in a one-to-one correspondence should be deployed on both the UE and gNB sides. In this case, the accuracy of channel recovery is high, but from the perspectives of both the UE and the gNB, the storage overhead and maintenance complexity of the AI models are relatively high.
For example, as shown in
Similar to the CSI feedback introduced above, in other application scenarios, output data of the AI model on a terminal side will be used as input data of the AI model on a base station side. However, there are also multiple information transmission requirements for information transmission between the UE and the gNB, which requires the deployment of multiple sets of AI models which are in a one-to-one correspondence on the UE and gNB sides, with each set of corresponding AI models meeting one information transmission requirement, resulting in high storage overhead and maintenance complexity of the AI models.
To this end, an embodiment of the present disclosure proposes an information transmission method, in which UE determines information to be transmitted to a gNB based on feedback indication information and an AI model on the UE side; the gNB determines input data of one AI model on the gNB side based on the feedback indication information and the information sent by the UE. The AI models on the UE side and the gNB side do not need to maintain a one-to-one correspondence, thus reducing the number of AI models deployed on the UE side or the gNB side, and reducing the storage overhead and maintenance complexity of the AI models.
In the following, the information transmission method provided in the present disclosure will be explained in detail through specific embodiments. It can be understood that the following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
S201, a terminal determines feedback indication information.
The feedback indication information can be used to indicate an information transmission requirement when the terminal feeds back information to a base station, and the terminal can determine the information that needs to be fed back to the base station based on the feedback indication information. The feedback indication information can be determined by the terminal itself, or it can be determined by the base station. In the latter case, this step includes the terminal receiving the feedback indication information sent by the base station.
S202, the terminal determines first information according to the feedback indication information and a target first AI model, where the target first AI model is one of X1 first AI models in the terminal.
The terminal determines the first information which needs to be sent to the base station according to the feedback indication information and the target first AI model in the terminal. The first information may include part or all of output data of the target first AI model, or the first information may include data obtained by processing part or all of the output data of the target first AI model.
S203, the terminal sends the first information to the base station, where the first information is configured to determine input data of a target second AI model, and the target second AI model is one of X2 second AI models included in the base station.
X1 is greater than or equal to 1, X2 is greater than or equal to 1, and X1 is not equal to X2.
It can be understood that in the case where the terminal determines the feedback indication information on its own, the terminal also sends the feedback indication information to the base station. In one embodiment, the terminal can send the first information and the feedback indication information separately through different resources.
S204, the base station determines the feedback indication information.
In the case where the feedback indication information is determined by the terminal on its own, this step refers to the base station receiving the feedback indication information sent by the terminal; or, in the case where the feedback indication information is determined by the base station, this step is executed before S201, that is, the base station first determines the feedback indication information and then sends the feedback indication information to the terminal.
S205, the base station determines the input data of the target second AI model according to the feedback indication information and the first information sent by the terminal.
After determining the first information, the terminal sends the first information to the base station; and the base station determines the input data of the target second AI model on a base station side according to the feedback indication information and the first information. The target first AI model is one of the X1 first AI models in the terminal, and the target second AI model is one of the X2 second AI models included in the base station. However, the target first AI model and the target second AI model do not need to be deployed one-to-one, and there is no necessary correspondence between the two. The first information transmitted between the terminal and the base station is determined by the terminal according to the feedback indication information and the target first AI model in the terminal, while the base station can determine the input data of the target second AI model in the base station based on the feedback indication information and the first information. Thus, the terminal side and the base station side can flexibly deploy their AI models, reducing the number of AI models and lowering the storage overhead and maintenance complexity of the AI models.
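The S201–S205 flow can be sketched as follows. The encoder/decoder stand-ins, the bit widths, and the convention that the terminal truncates the encoder output to the indicated number of bits while the base station zero-pads it back up to the decoder input size are illustrative assumptions, not the disclosure's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the target first AI model (an encoder on the
# terminal side); a real deployment would use a trained network.
def target_first_ai_model(csi: np.ndarray) -> np.ndarray:
    # Random projection followed by sign quantization -> 32 output bits
    return (csi @ rng.standard_normal((csi.size, 32)) > 0).astype(np.uint8)

DECODER_INPUT_BITS = 32  # bits for the input data of the target second AI model

def terminal_side(csi: np.ndarray, feedback_bits: int) -> np.ndarray:
    """S202: keep only the number of bits indicated by the feedback
    indication information; the result is the first information."""
    out = target_first_ai_model(csi)
    return out[:feedback_bits]

def base_station_side(first_info: np.ndarray, feedback_bits: int) -> np.ndarray:
    """S205: pad the first information with M zeroes up to the number of
    bits expected by the target second AI model."""
    m = DECODER_INPUT_BITS - feedback_bits
    return np.concatenate([first_info, np.zeros(m, dtype=np.uint8)])

feedback_indication = 24  # e.g. the number of bits for the first information
csi = rng.standard_normal(64)
first_info = terminal_side(csi, feedback_indication)          # S202, S203
decoder_input = base_station_side(first_info, feedback_indication)  # S205
```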
The terminal involved in the embodiments of the present disclosure includes X1 first AI models, and the base station includes X2 second AI models. In one embodiment, the first AI model can be used to generate the information that needs to be transmitted to the base station from the terminal. For example, the first AI model can be used to encode the input data to generate the information that needs to be transmitted to the base station, while the second AI model can be used to process the information transmitted from the terminal to the base station, such as decoding the information transmitted from the terminal to the base station.
For example, in the CSI feedback scenario based on the AI models shown in
On the basis of the above embodiments, further explanation will be given on how to determine the feedback indication information.
In one embodiment, the terminal determines the feedback indication information according to a first parameter. In one embodiment, the first parameter includes at least one of a reference signal received quality (RSRQ), a reference signal received power (RSRP), a signal-to-noise ratio (SNR) or a signal-to-interference-plus-noise ratio (SINR), a received signal strength indication (RSSI), a bit error rate (BER), a block error rate (BLER), or a modulation and coding scheme (MCS). The terminal determines the feedback indication information based on the above parameter(s), and then determines, based on the feedback indication information, the first information that needs to be fed back to the base station, so that the first information to be fed back better matches the current channel state, signal quality, or the like, improving the accuracy of data recovery on the base station side.
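A minimal sketch of deriving the feedback indication information from a first parameter. The SNR thresholds and levels are invented for illustration; the intuition is simply that better channel quality can carry more feedback bits reliably:

```python
# Illustrative mapping (thresholds are assumptions, not from the disclosure):
# each level corresponds to a number of bits for the first information.
SNR_THRESHOLDS_DB = [(20.0, 3), (10.0, 2), (0.0, 1)]  # (min SNR in dB, level)

def feedback_indication_from_snr(snr_db: float) -> int:
    """Determine a level corresponding to the number of bits for the first
    information from a measured first parameter (here, SNR)."""
    for min_snr, level in SNR_THRESHOLDS_DB:
        if snr_db >= min_snr:
            return level
    return 0  # lowest level for poor channels

level = feedback_indication_from_snr(15.0)  # -> 2
```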
In one embodiment, the terminal acquires the second AI model and determines the feedback indication information according to the first AI model and the second AI model. Here, the first AI model can be part or all of the X1 first AI models included in the terminal, and the second AI model can be part or all of the X2 second AI models included in the base station. The terminal can perform joint inference multiple times on the first AI model and the second AI model to determine the model with the highest accuracy, and determine the corresponding feedback indication information according to the model with the highest accuracy. In other words, the terminal performs the joint inference based on the first AI model and the second AI model, finally determines the feedback indication information according to the model with the highest accuracy, and then determines, based on the feedback indication information, the first information that needs to be fed back to the base station, to achieve the highest accuracy in data recovery by the base station.
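The joint-inference selection can be sketched as follows; the `accuracy` metric, the model callables, the validation samples, and the exhaustive pairwise search are all assumptions made for illustration:

```python
def select_by_joint_inference(first_models, second_models, samples, accuracy):
    """Run joint inference on candidate (first AI model, second AI model)
    pairs and return the pair with the highest average accuracy; the
    feedback indication information is then derived from this pair."""
    best, best_acc = None, -1.0
    for enc in first_models:        # part or all of the X1 first AI models
        for dec in second_models:   # part or all of the X2 second AI models
            acc = sum(accuracy(dec(enc(x)), x) for x in samples) / len(samples)
            if acc > best_acc:
                best, best_acc = (enc, dec), acc
    return best
```

A toy use: with an identity encoder/decoder pair and a lossy encoder, an exact-match accuracy metric selects the identity pair.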
In one embodiment, the terminal determines the target first AI model from the X1 first AI models, and determines the feedback indication information according to the target first AI model. That is, there is a correspondence between the first AI models and feedback indication information. The terminal first determines the target first AI model, and then determines the feedback indication information based on the correspondence between the first AI models and the feedback indication information. It can be understood that the terminal can also first use the aforementioned method to determine the feedback indication information or receive the feedback indication information sent by the base station, and then determine the target first AI model from the X1 first AI models according to the feedback indication information.
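Such a correspondence can be sketched as a lookup table usable in either direction, model to feedback indication or feedback indication to model. The model identifications, bit numbers, and levels below are hypothetical:

```python
# Assumed correspondence between first AI models and feedback indication
# information (identifications and values are illustrative only).
MODEL_TO_FEEDBACK = {
    "model_a": {"bits": 16, "level": 1},
    "model_b": {"bits": 32, "level": 2},
}

def feedback_from_target_model(model_id: str) -> dict:
    """Determine the feedback indication information corresponding to the
    already-determined target first AI model."""
    return MODEL_TO_FEEDBACK[model_id]

def target_model_from_feedback(feedback: dict) -> str:
    """The reverse direction: determine the target first AI model from
    known feedback indication information."""
    for model_id, fb in MODEL_TO_FEEDBACK.items():
        if fb == feedback:
            return model_id
    raise KeyError("no first AI model corresponds to this feedback indication")
```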
In one embodiment, the feedback indication information can be determined by the base station, that is, the base station determines the target second AI model from the X2 second AI models, and determines the feedback indication information according to the target second AI model. That is, there is a correspondence between the second AI models and the feedback indication information. The base station first determines the target second AI model, and then determines the feedback indication information based on the correspondence between the second AI models and the feedback indication information.
In one embodiment, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, the target first AI model or the target second AI model.
The feedback indication information represents the number of bits, the application scenario, or the encoding method of the first information fed back by the terminal to the base station. The first AI model or the second AI model can have correspondences with the aforementioned number of bits, application scenario, or encoding method. When determining the feedback indication information, the terminal and the base station can directly determine information related to the first information, the first AI model, or the second AI model through the feedback indication information, so that direct indication of the number of bits, application scenario, or encoding method of the first information is unnecessary. When the feedback indication information is determined by the terminal, the feedback indication information can be a model level of the target first AI model, a model identification of the target first AI model, a parameter of the target first AI model, or the target first AI model; when the feedback indication information is determined by the base station, the feedback indication information can be the model level of the target second AI model, the model identification of the target second AI model, the parameter of the target second AI model, or the target second AI model. When there are different pieces of feedback indication information, target first AI models can be different models in the X1 first AI models included in the terminal; and when there are different pieces of feedback indication information, target second AI models can be different models in the X2 second AI models included in the base station.
The first information includes the output data of the target first AI model. In one embodiment, the number of bits for the output data of the target first AI model included in the first information corresponds to the feedback indication information. That is, in the case of different feedback indication information, the numbers of bits for the first information determined by the terminal are different, and the number of bits for the first information corresponds to the feedback indication information.
In one embodiment, in the case where the number X1 of the first AI models included in the terminal is greater than 1, the numbers of bits for the output data of the X1 first AI models are different. Therefore, when the target first AI model corresponding to the feedback indication information is determined by the terminal, the number of bits for the first information is also determined.
In one embodiment, the target second AI model corresponds to the feedback indication information, that is, the base station determines the target second AI model from the X2 second AI models according to the feedback indication information, and then determines the input data of the target second AI model based on the first information.
In one embodiment, in the case where X2 is greater than 1, the numbers of bits for the input data of respective second AI models in the X2 second AI models are different. When the base station determines the input data of the target second AI model according to the feedback indication information and the first information, the base station determines the number of bits for the first information according to the feedback indication information; and determines the input data of the target second AI model according to the number of bits for the input data of the target second AI model and the number of bits for the first information.
In one embodiment, the base station determines the number of bits M for second information according to the number of bits for the input data of the target second AI model and the number of bits for the first information, in order to determine the second information; and determines the input data of the target second AI model according to the first information and second information, where the second information is a bit string consisting of M zeroes, and M is a difference between the number of bits for the input data of the target second AI model and the number of bits for the first information. In one embodiment, the second information can also be a bit string consisting of M ones, or can be a bit string of length M obtained by quantizing several other real numbers, where each bit in the bit string of length M is 0 or 1.
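The padding step above can be sketched as follows. This is a simplified illustration rather than part of the disclosed embodiments, assuming the second information is the all-zeros bit string; the function and variable names are hypothetical:

```python
# Simplified sketch of the base station's padding step: M is the difference
# between the decoder's expected input width and the number of bits actually
# fed back, and the second information (here all zeroes) is appended.

def build_decoder_input(first_info_bits: list, input_width: int) -> list:
    """Form the input data of the target second AI model from the first
    information plus a second-information bit string of M zeroes."""
    m = input_width - len(first_info_bits)
    if m < 0:
        raise ValueError("first information is longer than the model input")
    second_info = [0] * m  # could also be all ones, or quantized real numbers
    return first_info_bits + second_info


print(build_decoder_input([1, 0, 1], 6))  # -> [1, 0, 1, 0, 0, 0]
```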
On the basis of the above embodiments, the information transmission method disclosed in the present disclosure will be explained with specific examples.
An example is taken for explanation where X1 is 1 and X2 is greater than 1 (that is, the terminal includes one first AI model, and the first AI model is also the target first AI model, and the base station includes multiple second AI models).
Step 1, the terminal determines feedback indication information.
The determining the feedback indication information by the terminal can be determining the feedback indication information by the terminal on its own, or receiving, by the terminal, the feedback indication information sent by the base station.
In one embodiment, the determining the feedback indication information by the terminal on its own can be determining the feedback indication information according to the first parameter by the terminal as mentioned above, and the differentiation among pieces of feedback indication information is related to thresholds of the first parameter. For example, as shown in Table 1, the feedback indication information includes X2 types, and the threshold of the first parameter corresponding to each type of the feedback indication information can be specified by the protocol or configured by the base station through a system message or a radio resource control (RRC) message. Of course, the number of types of the feedback indication information may also be unequal to X2.
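The threshold-based determination described above can be illustrated with the following simplified sketch, which maps a measured first parameter (SINR is used here as an example) to one of X2 pieces of feedback indication information. The threshold values and function names are hypothetical; in practice the thresholds would be specified by the protocol or configured by the base station:

```python
# Hypothetical sketch: map a measured first parameter (SINR in dB) to one of
# X2 pieces of feedback indication information via an ascending threshold list.

def select_feedback_indication(sinr_db: float, thresholds: list) -> int:
    """Return the 1-based index of the feedback indication information.

    `thresholds` holds X2 - 1 ascending boundaries; indication k is chosen
    when the parameter falls into the k-th interval.
    """
    for k, boundary in enumerate(thresholds):
        if sinr_db < boundary:
            return k + 1
    return len(thresholds) + 1  # parameter above all boundaries


# Example with X2 = 3 indications and two illustrative boundaries (in dB).
thresholds = [0.0, 15.0]
print(select_feedback_indication(-3.2, thresholds))  # low SINR -> indication 1
print(select_feedback_indication(20.0, thresholds))  # high SINR -> indication 3
```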
In one embodiment, the determining the feedback indication information by the terminal on its own can be determining the feedback indication information according to the first AI model and the second AI model by the terminal. For example, in addition to having acquired the first AI model, the terminal can acquire some or all of the second AI models from the base station, such as acquiring the second AI model(s) on the base station side through downloading. It is assumed that the terminal has obtained all X2 second AI models, represented by Second AI model_1, Second AI model_2, . . . , and Second AI model_X2, respectively. The terminal utilizes the first AI model and each second AI model of the multiple second AI models for joint inference. In different joint inferences, the input data of the first AI model is the same; for example, the input data of the first AI model is CSI-RS channel estimation. In different joint inferences, different pieces of first information are determined by the first AI model, as shown in Table 2 below. There is a one-to-one correspondence between the second AI models and the feedback indication information; the terminal determines a model with the highest accuracy according to the multiple joint inferences, and determines the feedback indication information corresponding to the model with the highest accuracy.
For example, if the model with the highest accuracy is First AI model+ Second AI model_2, then the terminal determines the feedback indication information as Feedback indication information 2. The terminal can determine the feedback indication information through the second AI model in the model with the highest accuracy, and the base station can also determine the target second AI model through the feedback indication information.
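The selection by joint inference described above can be sketched as follows. The model callables and the accuracy metric are placeholders standing in for the first AI model, the downloaded second AI models, and an actual accuracy evaluation over inference data:

```python
# Illustrative sketch (not the disclosure's exact procedure): the terminal runs
# joint inference of its single first AI model with each candidate second AI
# model, scores the recovered data against the original input, and returns the
# index of the best pairing, which corresponds one-to-one to a piece of
# feedback indication information.

def pick_feedback_indication(first_model, second_models, samples, accuracy):
    best_idx, best_score = None, float("-inf")
    for idx, second_model in enumerate(second_models, start=1):
        # Joint inference: encoder on the terminal side, candidate decoder
        # on the base station side; higher accuracy is better.
        score = sum(accuracy(x, second_model(first_model(x))) for x in samples)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx


# Toy demonstration with scalar "models": decoder 2 exactly inverts the encoder.
encoder = lambda x: x * 0.5
decoders = [lambda y: y, lambda y: y * 2.0, lambda y: y * 4.0]
acc = lambda x, x_hat: -abs(x - x_hat)  # negative error, higher is better
print(pick_feedback_indication(encoder, decoders, [1.0, 2.0, 3.0], acc))  # -> 2
```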
In one embodiment, the terminal receives the feedback indication information indicated by the base station through higher-layer signaling, or the terminal receives the feedback indication information indicated by the base station through downlink control information (DCI) or a media access control control element (MAC CE).
Step 2, the terminal determines the first information according to the feedback indication information and the first AI model, and sends the first information to the base station.
As shown in
In the case where the feedback indication information is determined by the terminal, the terminal also sends the feedback indication information to the base station, as shown in
In addition, it should be noted that when the feedback indication information is the model level of the first AI model, the model identification of the first AI model, the parameter of the first AI model or the first AI model, this step is that the terminal determines the first information according to the feedback indication information and sends the first information to the base station, or this step is that the terminal determines the first information according to the first AI model and sends the first information to the base station. In one embodiment, when the feedback indication information is the model level of the target first AI model, the model identification of the target first AI model, the parameter of the target first AI model or the target first AI model, this step is that the terminal determines the first information according to the feedback indication information and sends the first information to the base station. In one embodiment, this step is that the terminal determines the first information according to the target first AI model and sends the first information to the base station.
Step 3: the base station determines the input data of the target second AI model according to the feedback indication information and the first information.
In one embodiment, in the case where the feedback indication information is determined by the terminal, the base station receives the feedback indication information sent by the terminal and determines the target second AI model in the X2 second AI models according to the feedback indication information. For example, the feedback indication information is Feedback indication information 2, and the target second AI model is Second AI model_2, then the base station takes the first information as the input data of Second AI model_2.
In one embodiment, in the case where the feedback indication information is determined by the base station, the base station determines the feedback indication information and the corresponding target second AI model before Step 1, and sends the feedback indication information to the terminal. After receiving the first information, the base station determines the input data of the target second AI model according to the first information.
In one embodiment, the numbers of bits for the input data of different second AI models can be different, and there may be a one-to-one correspondence between the feedback indication information and the second AI models. In one embodiment, the numbers of bits for the input data of different second AI models can be the same. In this case, for different feedback indication information, the first AI model on the terminal side outputs the same number of bits for the first information, while the second AI models corresponding to different feedback indication information are applied to different application scenarios or encoding methods. For example, multiple second AI models can be respectively applied to scenarios such as Ultra-Reliable and Low-Latency Communications (URLLC), Reduced Capability (RedCap), Enhanced Mobile Broadband (eMBB), and the like.
The AI model training and applying process in this embodiment will be described in conjunction with
During the process of model training, for each piece of training data in the training dataset, the forward propagation process of the neural network is as follows:
The backward propagation process is as follows:
In step 2 above, the output data {x1, x2, . . . , xP} of length P can include N information blocks, and the first information {x1, x2, . . . , xQ} of length Q is composed of one or more of the N information blocks. Each of the N information blocks can be composed of data output from continuous neurons in the output layer of the first AI model, such as {x1, x2, x3, x4, . . . }; each of the N information blocks can also be composed of data output from non-continuous neurons in the output layer of the first AI model, such as {x1, x3, x5, x7, . . . }.
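The composition of the first information from information blocks can be illustrated as follows. The index lists are hypothetical examples of the continuous-neuron and non-continuous-neuron cases:

```python
# Illustrative sketch: the first information of length Q is composed of one or
# more information blocks selected from the length-P output of the first AI
# model. Each block is just a list of output-neuron indices, so continuous
# and non-continuous blocks differ only in the chosen index lists.

def compose_first_information(output, block_indices):
    """Concatenate the chosen information blocks of the encoder output."""
    return [output[i] for block in block_indices for i in block]


output = [f"x{i}" for i in range(1, 9)]   # P = 8 output values x1..x8
continuous = [[0, 1, 2, 3]]               # block from continuous neurons {x1, x2, x3, x4}
interleaved = [[0, 2], [4, 6]]            # blocks from non-continuous neurons {x1, x3}, {x5, x7}
print(compose_first_information(output, continuous))   # ['x1', 'x2', 'x3', 'x4']
print(compose_first_information(output, interleaved))  # ['x1', 'x3', 'x5', 'x7']
```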
From the training process, it can be seen that each piece of training data in the training dataset will be input into the first AI model, but different pieces of training data will be input into different second AI models according to the feedback indication information.
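The per-sample routing just described can be sketched as follows, with stand-in callables rather than real neural networks:

```python
# Conceptual sketch of training-time routing: every sample passes through the
# shared first AI model (encoder), while the second AI model (decoder) is
# selected per sample according to its feedback indication. The "models" here
# are placeholder callables, not actual networks.

def forward_pass(sample, encoder, decoders):
    data, feedback_indication = sample
    code = encoder(data)                      # every sample uses the encoder
    decoder = decoders[feedback_indication]   # routing differs per sample
    return decoder(code)


encoder = lambda x: x + 1
decoders = {1: lambda y: y * 10, 2: lambda y: y * 100}
batch = [(3, 1), (3, 2)]  # same data, different feedback indications
print([forward_pass(s, encoder, decoders) for s in batch])  # -> [40, 400]
```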
In the model applying process, for each piece of data, the forward propagation process is as follows:
According to the information transmission method in this embodiment, it can be seen that only one AI model needs to be deployed on the terminal side to meet different information transmission requirements between the terminal and the base station, thereby reducing the storage overhead and maintenance complexity of the AI model.
An example is taken for explanation where X1 is greater than 1 and X2 is 1 (that is, the terminal includes multiple first AI models, and the base station includes one second AI model, and the second AI model is also the target second AI model).
Step 1: the terminal determines feedback indication information.
The determining the feedback indication information by the terminal can be determining the feedback indication information by the terminal on its own or receiving the feedback indication information sent by the base station.
Similar to the previous example, in one embodiment, the determining the feedback indication information by the terminal on its own can be determining the feedback indication information according to the first parameter by the terminal, and the differentiation among pieces of feedback indication information is related to the thresholds of the first parameter.
In one embodiment, the feedback indication information can be determined by the terminal on its own according to the first AI model and the second AI model. For example, in addition to having the first AI models, the terminal can acquire the second AI model from the base station, such as acquiring the second AI model on the base station side by downloading. It is assumed that there are X1 first AI models on the terminal side, represented by First AI model_1, First AI model_2, . . . , and First AI model_X1, respectively. The terminal utilizes each first AI model of the multiple first AI models and the second AI model for joint inference. In different joint inferences, the input data of the first AI models is the same; for example, the input data of the first AI model is CSI-RS channel estimation. In different joint inferences, different pieces of first information are determined by the first AI models, as shown in Table 3 below. There is a one-to-one correspondence between the first AI models and the feedback indication information. The terminal determines a model with the highest accuracy according to the multiple joint inferences, and determines the feedback indication information corresponding to the model with the highest accuracy.
For example, if the model with the highest accuracy is First AI model_2 + second AI model, then the terminal determines the feedback indication information as Feedback indication information 2. The terminal can determine the feedback indication information through the first AI model in the model with the highest accuracy, and the base station can also determine, through the feedback indication information, the target first AI model used by the terminal and thus the number of bits for the first information.
In one embodiment, the terminal receives the feedback indication information indicated by the base station through higher-layer signaling, or the terminal receives the feedback indication information indicated by the base station through DCI or MAC CE.
Step 2: the terminal determines the first information according to the feedback indication information and the target first AI model, and sends the first information to the base station.
As shown in
In one embodiment, the terminal can first determine the target first AI model among the X1 first AI models, and then determine the feedback indication information according to the target first AI model. For example, the terminal determines that the target first AI model is First AI model_1 among the multiple first AI models, and determines Feedback indication information 1 according to the target first AI model.
Step 3: the base station determines input data of the second AI model according to the feedback indication information and the first information.
Assuming that the number of bits included in the input data of the second AI model is A, and the number of bits included in the first information is B, A is greater than or equal to B. When A is greater than B, the input data of the second AI model consists of the first information and second information, and the second information consists of M zeroes, where M=A−B. That is, the base station determines, according to the feedback indication information, that the number of bits included in the first information is B, and adds M zeroes to the first information of length B to form the input data of the second AI model.
In one embodiment, the numbers of bits for the first information output by different first AI models are different. In this case, there may be a one-to-one correspondence between the pieces of feedback indication information and the first AI models. In one embodiment, the numbers of bits for the first information output by different first AI models can be the same. In this case, different first AI models corresponding to different feedback indication information are models applied to different application scenarios. For example, multiple first AI models can be applied to URLLC, RedCap, and eMBB scenarios respectively.
The AI model training and applying process in this embodiment will be described in combination with
During the model training process, for each piece of training data in the training dataset, the forward propagation process of the neural network is as follows:
The backward propagation process is as follows:
As in Step 3 above, the input data {x1, x2, . . . , xQ1} of length Q1 of the second AI model is obtained based on the feedback indication information and the first information {x1, x2, . . . , xP1} of length P1. Positions of the first information in the input data of the second AI model can be continuous or discontinuous.
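The placement described above can be illustrated as follows. The position lists are hypothetical examples of the continuous and discontinuous cases, with the remaining slots filled with zeroes as in the earlier zero-padding embodiment:

```python
# Illustrative sketch: place the length-P1 first information into the
# length-Q1 decoder input at given positions (continuous or discontinuous),
# filling the remaining slots with zeroes.

def place_first_information(first_info, positions, q1):
    assert len(first_info) == len(positions) <= q1
    x = [0] * q1
    for bit, pos in zip(first_info, positions):
        x[pos] = bit
    return x


print(place_first_information([1, 1, 1], [0, 1, 2], 6))  # continuous positions
print(place_first_information([1, 1, 1], [0, 2, 4], 6))  # discontinuous positions
```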
From the training process, it can be seen that each piece of training data in the training dataset will be input into the second AI model, but different pieces of training data will be input into different first AI models according to the feedback indication information.
During the model applying process, for each piece of data, the forward propagation process is as follows:
According to the information transmission method in this embodiment, it can be seen that only one AI model needs to be deployed on the base station side to meet different information transmission requirements between the terminal and the base station, thereby reducing the storage overhead and maintenance complexity of the AI model.
In one implementation, the processor 803 is configured to perform the following operations:
In one implementation, the first parameter includes at least one of a reference signal receiving quality (RSRQ), a reference signal receiving power (RSRP), a signal to noise ratio (SNR) or a signal to interference plus noise ratio (SINR), a received signal strength indication (RSSI), a bit error rate (BER), a block error rate (BLER), or a modulation and coding scheme (MCS).
In one implementation, the processor 803 is configured to perform the following operations:
In one implementation, the processor 803 is configured to perform the following operations:
In one implementation, the processor 803 is configured to perform the following operations:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the output data of the target first AI model included in the first information corresponds to the feedback indication information.
In one implementation, X1 is greater than 1, and the numbers of bits for output data of the X1 first AI models are the same.
In one implementation, the processor 803 is configured to perform the following operations:
In one implementation, X1 is greater than 1, the processor 803 is configured to perform the following operations:
In one implementation, X2 is greater than 1, the target second AI model corresponds to the feedback indication information.
It should be noted that the above-mentioned apparatus provided in the present disclosure can implement all method steps implemented by the terminal in the above-mentioned method embodiments, and can achieve the same effects. Therefore, specific descriptions of the same parts and beneficial effects as the method embodiments in this embodiment will not be repeated here.
The memory 901 is configured to store a computer program;
In one implementation, the processor 903 is configured to perform the following operations:
In one implementation, the processor 903 is configured to perform the following operations:
In one implementation, the processor 903 is configured to perform the following operations:
In one implementation, the processor 903 is configured to perform the following operations:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the first information corresponds to the feedback indication information.
In one implementation, X2 is greater than 1, and bits for input data of respective second AI models in the X2 second AI models are different in number.
In one implementation, the processor 903 is configured to perform the following operations:
In one implementation, the processor 903 is configured to perform the following operations:
In one implementation, the second information is a bit string consisting of M zeroes, where M is a difference between the number of bits for the input data of the target second AI model and the number of bits for the first information.
It should be noted that the above-mentioned apparatus provided by the present disclosure can implement all the method steps implemented by the base station in the above method embodiments, and can achieve the same effects. Therefore, specific descriptions of the same parts and beneficial effects as the method embodiments in this embodiment will not be repeated here.
In one implementation, the first determining unit 1001 is configured to:
In one implementation, the first parameter includes at least one of a reference signal receiving quality (RSRQ), a reference signal receiving power (RSRP), a signal to noise ratio (SNR) or a signal to interference plus noise ratio (SINR), a received signal strength indication (RSSI), a bit error rate (BER), a block error rate (BLER), or a modulation and coding scheme (MCS).
In one implementation, the first determining unit 1001 is configured to:
In one implementation, the first determining unit 1001 is configured to:
In one implementation, the first determining unit 1001 is configured to: receive the feedback indication information sent by the base station.
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, the target first AI model or the target second AI model.
In one implementation, the first information includes some or all of the output data of the target first AI model.
In one implementation, the number of bits for the output data of the target first AI model included in the first information corresponds to the feedback indication information.
In one implementation, X1 is greater than 1, and the numbers of bits for output data of the X1 first AI models are the same.
In one implementation, the sending unit 1003 is further configured to: send the feedback indication information to the base station.
In one implementation, X1 is greater than 1, and the apparatus further includes:
In one implementation, X2 is greater than 1, the target second AI model corresponds to the feedback indication information.
It should be noted that the above-mentioned apparatus provided in the present disclosure can implement all method steps implemented by the terminal in the above-mentioned method embodiments, and can achieve the same effects. Therefore, specific descriptions of the same parts and beneficial effects as the method embodiments in this embodiment will not be repeated here.
In one implementation, the apparatus further includes a third determining unit, configured to:
In one implementation, the first determining unit 1101 is configured to:
In one implementation, the first determining unit 1101 is configured to: receive the feedback indication information sent by the terminal.
In one implementation, the apparatus further includes:
In one implementation, the feedback indication information includes at least one of the following: the number of bits for the first information, a level corresponding to the number of bits for the first information, an application scenario corresponding to the first information or the target first AI model, an identification of an application scenario corresponding to the first information or the target first AI model, an encoding method corresponding to the first information or the target first AI model, an identification of an encoding method corresponding to the first information or the target first AI model, a model level of the target first AI model or the target second AI model, a model identification of the target first AI model or the target second AI model, a parameter of the target first AI model or the target second AI model, the target first AI model or the target second AI model.
In one implementation, the first information includes part or all of output data of the target first AI model.
In one implementation, the number of bits for the first information corresponds to the feedback indication information.
In one implementation, X2 is greater than 1, and bits for input data of respective second AI models in the X2 second AI models are different in number.
In one implementation, the second determining unit 1102 is configured to:
In one implementation, the second determining unit 1102 is configured to:
In one implementation, the second information is a bit string consisting of M zeroes, where M is a difference between the number of bits for the input data of the target second AI model and the number of bits for the first information.
It should be noted that the above-mentioned apparatus provided in the present disclosure can implement all method steps implemented by the base station in the above method embodiments, and can achieve the same effects. Therefore, specific descriptions of the same parts and beneficial effects as the method embodiments in this embodiment will not be repeated here.
It should be noted that a division of units in embodiments of the present disclosure is illustrative and only a logical functional division. In actual implementation, there may be other division methods. In addition, various functional units in various embodiments in the present disclosure can be integrated into one processing unit, which can exist physically separately, or two or more units may be integrated into one unit. The integrated unit mentioned above can be implemented in a form of hardware or in the form of software functional unit.
If the integrated unit mentioned above is implemented in the form of a software functional unit and sold or used as a stand-alone product, it can be stored in a processor-readable storage medium. Based on this understanding, the essence of the embodiments of the present disclosure, the portion thereof that contributes to the existing technology, or all or part of the embodiments can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions used for enabling a computer device (which can be a personal computer, a server, or a network device, and the like) or a processor to execute all or part of the steps of the methods in the various embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program codes, such as a USB flash drive, a portable hard drive, a read only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present disclosure also provides a computer-readable storage medium, which stores a computer program, and the computer program is configured to enable a computer to execute the method executed by the terminal or base station in the above method embodiments.
The computer-readable storage medium can be any available medium or data storage device that a computer can access, including but not limited to magnetic storage (such as a floppy disk, a hard disk, a magnetic tape, a magneto-optical disk (MO), and the like), optical storage (such as a CD, DVD, BD, HVD, and the like), and semiconductor storage (such as a ROM, EPROM, EEPROM, non-volatile memory (NAND FLASH), a solid state drive (SSD), and the like).
An embodiment of the present disclosure also provides a computer program product including a computer program, where, when the computer program is executed by a processor, the method executed by the terminal or base station in the above method embodiments is implemented.
The embodiments of the present disclosure can be provided as methods, systems, or computer program products. Therefore, the present disclosure may take the form of an entirely hardware implementation, an entirely software implementation, or an implementation combining software and hardware. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage mediums (including but not limited to disk storage and optical storage) that contain computer-usable program codes.
The present disclosure is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present disclosure. It should be understood that each process and/or block in the flowchart and/or block diagram, as well as a combination of processes and/or blocks in the flowchart and/or block diagram, can be implemented by computer executable instructions. These computer executable instructions can be provided to the processor of a general-purpose computer, a specialized computer, an embedded processor, or other programmable data processing devices to generate a machine, enabling the instructions executed by the processor of the computer or other programmable data processing device to produce an apparatus for implementing functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
These processor-executable instructions can also be stored in a processor-readable memory that can guide a computer or other programmable data processing device to operate in a specific manner, such that the instructions stored in the processor-readable memory generate an article of manufacture including an instruction apparatus, where the instruction apparatus implements the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
These processor-executable instructions can also be loaded onto a computer or other programmable data processing device, enabling a series of operation steps to be performed on the computer or other programmable device to generate computer implemented processing. The instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
Various modifications and variations can be made to the present disclosure without departing from the spirit and scope of the present disclosure. If these modifications and variations of the present disclosure fall within the scope of the claims and their technical equivalents, the present disclosure is also intended to include them.
| Number | Date | Country | Kind |
|---|---|---|---|
| 202210429669.7 | Apr 2022 | CN | national |
The present disclosure is a national stage of International application No. PCT/CN2023/086584, filed on Apr. 6, 2023, which claims the priority of the Chinese patent application No. 202210429669.7, entitled “INFORMATION TRANSMISSION METHOD AND APPARATUS, AND STORAGE MEDIUM” filed with the China National Intellectual Property Administration on Apr. 22, 2022. Both of the aforementioned applications are hereby incorporated by reference in their entireties.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/CN2023/086584 | 4/6/2023 | WO |  |