The present application claims priority to Chinese patent application No. 202011261728.1, filed on Nov. 12, 2020, which is incorporated herein by reference in its entirety.
The present disclosure relates to the field of communication technology, in particular to a data transmission method, a data transmission apparatus, and a data transmission device.
In sixth generation (6th Generation, 6G) ultra-large-scale machine communication, the machine density that the network is required to support reaches 10 machines per square meter or 100 machines per cubic meter, and these machines usually transmit only small data packets sporadically. Adopting the scheduling-based communication method in the related art therefore leads to a large access signaling overhead, whereas contention-based, scheduling-free communication can reduce this overhead. However, in such ultra-large-scale contention-based machine communication, if the physical uplink shared channel (PUSCH) uses a bitmap acknowledgment/negative acknowledgment (Acknowledgment/Negative Acknowledgment, ACK/NACK) mechanism, or ACK/NACK plus a user equipment (user equipment, UE) identifier (identifier, ID), a large downlink feedback overhead results. In addition, if the 5th Generation (5G) ACK/NACK feedback mechanism continues to be used for the 6G physical downlink shared channel (PDSCH), a large uplink feedback overhead results.
The ACK/NACK feedback method for PUSCH in the related art includes:
In Long Term Evolution (LTE), the base station uses the Physical Hybrid ARQ Indicator Channel (PHICH) to transmit ACK/NACK bits, and the corresponding relationship is related to the uplink resource allocated to the user;
In 5G New Radio (NR) and Narrow Band Internet of Things (NB-IoT), the base station notifies the terminal of a retransmission or a new transmission by whether a new data indicator (New Data Indicator, NDI) in a physical downlink control channel (physical downlink control channel, PDCCH) is toggled.
The disadvantages of the ACK/NACK feedback method for PUSCH in the related art are as follows.
The ACK/NACK feedback method for PDSCH in 5G NR in the related art is:
In addition, in order to reduce the size of the hybrid automatic repeat request acknowledgment (HARQ-ACK) codebook, NR supports spatial combination, that is, the HARQ-ACK feedback bits of different transport blocks (transport block, TB) of the same PDSCH are combined through the "AND" operation, thereby reducing the number of feedback bits.
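A tiny illustrative sketch (written in Python; the per-TB decode results below are made-up values, not taken from any specification) of the "AND"-based spatial combination described above: the HARQ-ACK bits of the two transport blocks of one PDSCH are bundled into a single feedback bit.

```python
# Spatial combination of HARQ-ACK bits: each PDSCH carries two transport
# blocks, and their ACK bits (1 = ACK, 0 = NACK) are combined with a logical
# AND so that each PDSCH contributes only one feedback bit.
tb_acks = [(1, 1), (1, 0), (0, 1)]      # (TB1, TB2) decode results per PDSCH
bundled = [a & b for a, b in tb_acks]   # "AND" operation across the TBs
print(bundled)                          # [1, 0, 0] -> 3 feedback bits instead of 6
```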
The disadvantages of ACK/NACK codebook sequences for different PDSCHs in 5G NR in the related art are as follows.
In 6G, the quantity of PDSCHs per unit time increases, and the length of the above-mentioned ACK/NACK codebook sequences becomes very large, that is to say, the overhead becomes very large; however, the methods of using dynamic codebooks and spatial combination in NR in the related art cannot effectively reduce this signaling overhead.
Embodiments of the present disclosure provide a data transmission method, a data transmission apparatus, and a data transmission device, so as to reduce the overhead of ACK/NACK feedback.
In order to solve the above-mentioned technical problems, embodiments of the present disclosure provide the following technical solutions.
Optionally, the selecting, by the terminal, the machine learning model applied to the ACK/NACK sequence includes:
Optionally, the selecting, by the terminal, the machine learning model applied to the ACK/NACK sequence includes:
Optionally, the data transmission method further includes: transmitting, by the terminal, indication information of the selected machine learning model applied to the ACK/NACK sequence to the network device.
Optionally, the transmitting, by the terminal, the indication sequence to the network device includes:
Optionally, the data transmission method further includes:
Optionally, the data transmission method further includes:
Optionally, the data transmission method further includes: receiving, by the terminal, machine learning model indication information transmitted by the network device through radio resource control (radio resource control, RRC) signaling or medium access control control element (medium access control control element, MAC CE) signaling or physical layer signaling.
Optionally, the machine learning model indication information is transmitted by one or all of the network devices.
Optionally, the machine learning model is trained and distributed by the following process:
Optionally, the terminal reports the trained model to the network device through the following process:
An embodiment of the present disclosure also provides a training and distributing method of machine learning model, which is applied to a network device side, and the method includes:
Optionally, the training process and distributing process for the machine learning model further include:
Optionally, the training and distributing method of the machine learning model further includes:
An embodiment of the present disclosure also provides a data transmission method, which is applied to a network device, the method includes:
Optionally, the data transmission method further includes:
Optionally, the machine learning model is trained and distributed by the following process:
Optionally, the machine learning model training process and distributing process further include: being repeated once every fixed cycle.
Optionally, the stored ACK/NACK sequence is grouped according to a sequence length and an ACK ratio.
Optionally, the data transmission method further includes:
An embodiment of the present disclosure also provides a data transmission device, including: a transceiver, a processor, and a memory, where a program executable by the processor is stored in the memory; and the program, when executed by the processor, implements: obtaining an ACK/NACK sequence to be fed back; selecting a machine learning model applied to the ACK/NACK sequence; compressing, through the machine learning model, the ACK/NACK sequence to obtain an indication sequence; and transmitting the indication sequence to the network device.
Optionally, the processor obtains the ACK/NACK sequence to be fed back; the memory stores the ACK/NACK sequence; the processor groups the stored ACK/NACK sequence, performs machine learning model training, and generates a plurality of groups of machine learning models, and the transceiver reports the plurality of groups of machine learning models to the network device.
An embodiment of the present disclosure also provides a data transmission apparatus, which includes:
Optionally, the processing module is further configured to obtain the ACK/NACK sequence to be trained;
An embodiment of the present disclosure also provides a data transmission device, including: a transceiver, a processor, and a memory, where a program executable by the processor is stored in the memory; the program, when executed by the processor, implements: obtaining an ACK/NACK sequence to be fed back; selecting a machine learning model applied to the ACK/NACK sequence, and transmitting indication information of the selected machine learning model to the terminal; compressing, through the machine learning model, the ACK/NACK sequence to obtain an indication sequence; and transmitting the indication sequence to the terminal.
Optionally, the processor obtains the ACK/NACK sequence to be trained; the memory stores the ACK/NACK sequence; the processor groups the stored ACK/NACK sequence, performs machine learning model training, and generates a plurality of groups of machine learning models; the memory stores the machine learning models; the transceiver distributes the stored training machine learning models to the terminals through RRC signaling or MAC CE or physical layer signaling.
An embodiment of the present disclosure also provides a data transmission apparatus, which includes:
Optionally, the processing module is further configured to obtain the ACK/NACK sequence to be fed back;
An embodiment of the present disclosure also provides a network device, including: a transceiver, a processor, and a memory, where a program executable by the processor is stored in the memory; the program, when executed by the processor, implements: obtaining an ACK/NACK sequence to be trained, and storing the ACK/NACK sequence to be trained; grouping the stored ACK/NACK sequence, performing machine learning model training, and generating a plurality of groups of machine learning models; storing the machine learning models; and distributing the stored machine learning models to the terminals through RRC signaling or MAC CE or physical layer signaling.
An embodiment of the present disclosure also provides a training and distributing apparatus of machine learning model, which is applied to a network device side, the apparatus includes:
An embodiment of the present disclosure also provides a data transmission system, which includes the above-mentioned device.
An embodiment of the present disclosure also provides a processor-readable storage medium, the processor-readable storage medium stores processor-executable instructions, and the processor-executable instructions are used to enable the processor to perform the above-mentioned steps of the method.
The beneficial effects of the embodiments in the present disclosure are as follows.
In the above-mentioned embodiments of the present disclosure, the terminal obtains the ACK/NACK sequence to be fed back; specifically, the terminal receives the downlink data transmitted by at least one network device in the same and/or different time slots; the terminal receives and processes the downlink data, determines whether the downlink data is received correctly, and uses a 0/1 bit sequence for representation; the terminal selects, autonomously or based on the machine learning model indication information of the network device, the machine learning model applied to the bit sequence; the terminal compresses the 0/1 bit sequence through the machine learning model to obtain an indication sequence; and the terminal transmits the indication sequence to the network device. The network device receives uplink data transmitted by at least one terminal; receives and processes the uplink data, determines whether the uplink data is received correctly, and uses a 0/1 bit sequence for representation; autonomously selects the trained machine learning model corresponding to the bit sequence, and transmits the indication information of the selected machine learning model to the terminal; compresses the 0/1 bit sequence through the machine learning model to obtain an indication sequence; and transmits the indication sequence to the terminal. Through machine learning, the feedback overhead of a 0/1 bit sequence with sparsity is reduced (the sparsity here means that there are only a few 0s or a few 1s in the 0/1 sequence). In addition, the method for compressing, transmitting, and decompressing the ACK/NACK sequence with sparsity in the present disclosure is also applicable to other 0/1 bit sequences with sparsity generated during communication.
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided for a more thorough understanding of the present disclosure and to fully convey the scope of the present disclosure to those skilled in the art.
As shown in
Step 11, obtaining, by the terminal, the ACK/NACK sequence to be fed back; in an optional embodiment, the terminal receives the downlink data transmitted by at least one network device in the same and/or different time slots; the terminal receives and processes the downlink data, determines whether the downlink data is received correctly, and uses a 0/1 bit sequence for representation;
Step 12, selecting, by the terminal, the machine learning model applied to the ACK/NACK sequence; here, the terminal may select the machine learning model applied to the ACK/NACK sequence according to the machine learning model indication information of the network device; the terminal may also autonomously select the machine learning model applied to the ACK/NACK sequence;
Step 13, compressing, by the terminal, the ACK/NACK sequence through the machine learning model to obtain an indication sequence;
Step 14, transmitting, by the terminal, the indication sequence to the network device.
In this embodiment of the present disclosure, respective bits in the ACK/NACK sequence correspond to data blocks transmitted by a base station, which may be data blocks divided in a predetermined manner over different time slots, over different channels, or within the same channel, and combinations thereof; in this embodiment, the terminal receives the downlink data transmitted by at least one network device in the same and/or different time slots; receives and processes the downlink data, determines whether the downlink data is received correctly, and uses a 0/1 bit sequence for representation; the terminal selects, autonomously or based on the machine learning model indication information of the network device, the machine learning model applied to the bit sequence, and compresses the 0/1 bit sequence through the machine learning model to obtain the indication sequence; the terminal transmits the indication sequence to the network device. The above-mentioned bit sequence with sparsity is compressed, transmitted, and decompressed based on deep learning technology (such as autoencoder technology), thereby reducing the feedback overhead for the bit sequence with sparsity.
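As a purely illustrative sketch (the slot/channel bookkeeping and the decode outcomes below are hypothetical and not part of the claimed signaling), steps 11 to 14 can be pictured as mapping each received data block to one bit and assembling the bits in a predetermined order before compression:

```python
# Build the 0/1 ACK/NACK sequence from per-data-block decode outcomes:
# 1 = the data block was received correctly, 0 = it was not.
decode_results = {                 # (time slot, channel) -> decode outcome
    (0, "ch0"): True,  (0, "ch1"): False,
    (1, "ch0"): True,  (1, "ch1"): True,
}
order = sorted(decode_results)     # predetermined ordering of the data blocks
ack_nack = [1 if decode_results[key] else 0 for key in order]
print(ack_nack)                    # e.g. [1, 0, 1, 1] -> fed to the selected model
```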
In an optional embodiment of the present disclosure, the selecting, by the terminal, a machine learning model applied to the ACK/NACK sequence includes:
In an optional embodiment of the present disclosure, the selecting, by the terminal, the machine learning model applied to the ACK/NACK sequence includes:
In an optional embodiment of the present disclosure, the above-mentioned data transmission method may also include:
In an optional embodiment of the present disclosure, the above-mentioned data transmission method may also include:
Specifically, the machine learning model is trained and distributed by the following process:
The machine learning model training process and distributing process further include: being performed by one or all of the network devices; and being repeated once every fixed cycle.
Here, the network device stores the received 0/1 bit sequences and grades the sparsity of the above-mentioned 0/1 bit sequences with sparsity; for example, in the ACK/NACK sequence [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0], 1 represents ACK and 0 represents NACK; alternatively, 1 may represent NACK and 0 represent ACK, so as to reflect the sparsity of the ACK/NACK sequence. The sparsity of the bit sequence may be the ratio of the number of 1s in the bit sequence to the total number of bits in the bit sequence; the sparsity value of the above-mentioned bit sequence is 2/20 = 0.1, and if the sparsity level corresponding to 0.1 is 1, then it is determined that the sparsity level of the bit sequence is 1.
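The grading step above can be sketched as follows (a non-normative Python illustration; the level thresholds are assumptions chosen only so that a sparsity value of 0.1 maps to level 1):

```python
from typing import List

def sparsity_level(bits: List[int], thresholds=(0.1, 0.2, 0.3)) -> int:
    """Grade a 0/1 bit sequence: the sparsity value is (number of 1s) / (total
    bits), and the level is the first threshold bucket the value falls into."""
    value = sum(bits) / len(bits)
    for level, upper in enumerate(thresholds, start=1):
        if value <= upper:
            return level
    return len(thresholds) + 1

# Worked example from the description: two 1s in a 20-bit sequence.
seq = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
print(sum(seq) / len(seq))    # 0.1
print(sparsity_level(seq))    # 1
```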
The Nth machine learning model is trained with the Nth-level bit sequences among the bit sequences of multiple different sparse levels, so as to obtain the Nth machine learning model corresponding to the Nth-level bit sequences;
the (N+1)th machine learning model is trained based on the Nth machine learning model and the (N+1)th-level bit sequences, so as to obtain the (N+1)th machine learning model corresponding to the (N+1)th-level bit sequences.
During specific implementation, the 0/1 bit sequences of different sparsity levels recently generated according to the demodulation results are stored; the sparsity of the above-mentioned 0/1 bit sequences with sparsity is graded; the training unit selects the 0/1 bit sequences of one sparse level to train the autoencoder to be used; based on the autoencoder model trained for that sparse level, transfer learning is performed to obtain the autoencoder parameters for the other sparse levels, that is, when the autoencoder parameters for the other sparse levels are trained, the machine learning model trained for that sparse level is used as the starting point.
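A minimal PyTorch sketch of this transfer-learning step (the network shape, the synthetic training data, and the optimizer settings are illustrative assumptions, not the filing's model): the autoencoder trained for one sparse level is copied and fine-tuned for the next level.

```python
import torch
import torch.nn as nn

SEQ_LEN, CODE_LEN = 20, 7

def make_autoencoder() -> nn.Module:
    # Encoder followed by decoder; both kept deliberately small.
    return nn.Sequential(
        nn.Linear(SEQ_LEN, CODE_LEN), nn.Sigmoid(),   # encoder
        nn.Linear(CODE_LEN, SEQ_LEN), nn.Sigmoid(),   # decoder
    )

def train(model: nn.Module, data: torch.Tensor, epochs: int = 50) -> nn.Module:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), data)   # reconstruct the 0/1 sequence
        loss.backward()
        opt.step()
    return model

# Model for sparse level N, trained from scratch on level-N sequences.
level_n_data = (torch.rand(256, SEQ_LEN) < 0.10).float()
model_n = train(make_autoencoder(), level_n_data)

# Model for sparse level N+1 starts from the level-N parameters (transfer
# learning) and is then fine-tuned on level-(N+1) sequences.
model_n1 = make_autoencoder()
model_n1.load_state_dict(model_n.state_dict())
level_n1_data = (torch.rand(256, SEQ_LEN) < 0.20).float()
model_n1 = train(model_n1, level_n1_data, epochs=20)
```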
In addition, the Nth machine learning model may also be trained with the Nth-level bit sequences among the bit sequences of multiple different sparse levels, so as to obtain the Nth machine learning model corresponding to the Nth-level bit sequences;
the (N+1)th machine learning model is trained with the (N+1)th-level bit sequences among the bit sequences of multiple different sparse levels, so as to obtain the (N+1)th machine learning model corresponding to the (N+1)th-level bit sequences.
During specific implementation, the recently received decompressed 0/1 bit sequences are stored; the sparsity of the above-mentioned stored 0/1 bit sequences is graded; the training unit uses the above-mentioned 0/1 bit sequences of different sparse levels to train the machine learning apparatus to be used, and obtains machine learning models with different parameters corresponding to the different sparse levels respectively.
In an optional embodiment of the present disclosure, the machine learning model may also be trained and distributed by the following process:
Further, the terminal reports the trained models to the network device through the following process:
Here, the stored ACK/NACK sequence may be grouped, and may be grouped according to sequence length and ACK ratio.
In an optional embodiment of the present disclosure, the compressing, through a trained machine learning model corresponding to the sparse level, the ACK/NACK sequence to obtain an indication sequence includes:
Here, the above ACK/NACK sequence is compressed based on the encoder in the selected autoencoder, and the coefficients and the output of the autoencoder are also limited to binary 0/1 bits, so as to obtain a compressed M×1 binary vector, for example [0,1,1,0,0,1,0];
The specific operation of the encoder in the autoencoder is as follows: a convolutional neural network is applied to the above-mentioned sparse 0/1 bit sequence, followed by a batch normalization, a LeakyReLU activation function, and a fully connected layer, and the output value is then limited to 0 or 1, thereby obtaining the above-mentioned compressed binary sequence.
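The layer ordering described above can be sketched in PyTorch as follows (the layer sizes, kernel size, and the hard 0/1 rounding at the output are assumptions made only for illustration; M = 7 matches the example vector above):

```python
import torch
import torch.nn as nn

class BitEncoder(nn.Module):
    """Encoder: convolution -> batch normalization -> LeakyReLU ->
    fully connected layer -> output limited to 0/1 bits."""
    def __init__(self, seq_len: int = 20, code_len: int = 7, channels: int = 8):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm1d(channels)
        self.act = nn.LeakyReLU()
        self.fc = nn.Linear(channels * seq_len, code_len)

    def forward(self, bits: torch.Tensor) -> torch.Tensor:
        x = bits.unsqueeze(1)                    # (batch, 1, seq_len)
        x = self.act(self.bn(self.conv(x)))      # conv -> batch norm -> LeakyReLU
        x = self.fc(x.flatten(1))                # fully connected layer
        return (torch.sigmoid(x) > 0.5).float()  # limit the output to 0 or 1

# Compress a 20-bit sparse ACK/NACK sequence into an M x 1 binary vector (M = 7).
seq = torch.tensor([[0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0]], dtype=torch.float)
print(BitEncoder()(seq))   # a 7-bit compressed indication sequence
```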
In an optional embodiment of the present disclosure, the transmitting, by the terminal, the indication sequence to the network device includes:
In the above-mentioned embodiments of the present disclosure, based on the deep learning technology (such as the autoencoder technology), ACK/NACK sequence compression, transmission and decompression are performed on 6G PDSCH.
Based on the embodiment described in
The embodiment of the present disclosure also provides a training and distributing method of machine learning model, which is applied to a network device side, and the method includes:
In an optional embodiment of the present disclosure, the machine learning model training process and distributing process further include:
In an optional embodiment of the present disclosure, the training and distributing method of the machine learning model further includes:
Here, the terminal reports the models trained by itself to the network device; the network device aggregates the models of all the terminals, and then transmits the aggregated result to the terminal.
In an optional embodiment of the present disclosure, the grouping the stored ACK/NACK sequence may be performed according to sequence length and ACK ratio.
In this embodiment of the present disclosure, through the above-mentioned training process, the network device can determine the machine learning model, thereby reducing the feedback overhead of the bit sequence with sparsity.
As shown in
step 21, obtaining, by the network device, the ACK/NACK sequence to be fed back; in an optional implementation, the network device receives uplink data transmitted by at least one terminal; receives and processes the uplink data, determines whether the uplink data is received correctly, and uses a 0/1 bit sequence for representation;
step 22, selecting, by the network device, a machine learning model applied to the corresponding ACK/NACK sequence, and transmitting indication information of the selected machine learning model to the terminal;
step 23, compressing, by the network device, the ACK/NACK sequence through the machine learning model to obtain an indication sequence;
step 24, transmitting, by the network device, the indication sequence to the terminal.
In the embodiment of the present disclosure, respective bits in the ACK/NACK sequence correspond to different terminals, the terminals transmit on a predetermined channel, and the base station feeds back collectively; the NACK here is used not only to indicate that the data has not been successfully demodulated, but also to indicate that a terminal is in an inactive (Inactive) state.
In this embodiment, the network device receives the uplink data transmitted by at least one terminal; the network device receives and processes the uplink data, determines whether the uplink data is received correctly, and uses a 0/1 bit sequence for representation, for example, the 0/1 bit sequence corresponding to ACK/NACK-Inactive; the network device selects the trained machine learning model corresponding to the bit sequence according to the load level (or sparse level) corresponding to the bit sequence; the network device compresses the 0/1 bit sequence through the machine learning model to obtain the indication sequence; and the network device transmits the indication sequence to the terminal. Based on deep learning technology (such as autoencoder technology), the above-mentioned bit sequence with sparsity is compressed, transmitted, and decompressed, thereby reducing the feedback overhead for the bit sequence with sparsity.
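A hedged sketch of this base-station-side step (the terminal bookkeeping, the level thresholds, and the model names are hypothetical placeholders): one bit is produced per terminal on the predetermined channel, 0 covering both NACK and the inactive state, and the trained model is then selected according to the resulting load (sparse) level.

```python
def build_ack_nack_inactive(terminal_ids, decoded_ok, active):
    """One bit per terminal: 1 = ACK; 0 = NACK or terminal inactive."""
    return [1 if (active.get(t, False) and decoded_ok.get(t, False)) else 0
            for t in terminal_ids]

def select_model(models_by_level, bits, thresholds=(0.1, 0.2, 0.3)):
    """Pick the trained model whose load (sparse) level matches the sequence."""
    load = sum(bits) / len(bits)
    for level, upper in enumerate(thresholds, start=1):
        if load <= upper:
            return models_by_level[level]
    return models_by_level[len(thresholds) + 1]

terminals = list(range(10))
decoded_ok = {3: True, 7: True}        # terminals whose uplink data decoded correctly
active = {3: True, 7: True}            # terminals that actually transmitted
bits = build_ack_nack_inactive(terminals, decoded_ok, active)
models = {1: "autoencoder_level_1", 2: "autoencoder_level_2",
          3: "autoencoder_level_3", 4: "autoencoder_level_4"}
print(bits, select_model(models, bits))   # load 0.2 -> level-2 model
```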
In an optional embodiment of the present disclosure, the data transmission method may also include:
In an optional embodiment of the present disclosure, the machine learning model is trained and distributed by the following process:
In an optional embodiment of the present disclosure, the machine learning model training process and the distributing process further include: being repeated every fixed cycle.
In an optional embodiment of the present disclosure, the terminal reports the trained models to the network device through the following process:
reporting, by the terminal, the trained machine learning models to the network device through RRC signaling or MAC CE or physical layer signaling.
In an optional embodiment of the present disclosure, the grouping stored ACK/NACK sequence is performed according to sequence length and ACK ratio.
In an optional embodiment of the present disclosure, the transmitting the indication sequence to the terminal includes:
In a case that the above-mentioned embodiment is specifically implemented, as shown in
The above machine learning training process and the process of storing and transferring model parameters are not always bound together with the use process. The implementation process includes the following two methods.
First, once the training is completed and the model parameters are transferred, the above-mentioned model parameters may be applied repeatedly until the communication environment changes greatly and the model parameters are no longer applicable, instead of training the model before each use.
Second, the base station extracts the above-mentioned data in stages, conducts independent training offline, and then compares the trained model with the model currently in use, and selects a more suitable model for online use (when using a newly trained model, the transfer process of the model is also included).
As shown in
In addition, the training and the storing and transferring of parameters may also include the following steps, and the usage method is the same as the above-mentioned method and will not be described again:
Steps 1-2 of the training and steps 1-2 of storing and transferring the model and parameters are repeated, and the training results are updated at regular intervals.
The above-mentioned machine learning training process and the process of storing and transferring model parameters are not always bound together with the use process. The implementation process includes the following two methods.
Method 1: once the training is completed and the model parameters are transferred, the above-mentioned model parameters may be applied repeatedly until the communication environment changes greatly and the model parameters are no longer applicable, instead of training the model before each use.
Method 2: the base station extracts the above-mentioned data in stages, conducts independent training offline, and then compares the trained model with the model currently in use, and selects a more suitable model for online use (when using a newly trained model, the transfer process of the model is also included).
The specific implementation of the above-mentioned method is illustrated in conjunction with the specific embodiments below.
Embodiment 1: based on the autoencoder technology in machine learning, the ACK/NACK-Inactive sequence of the PUSCH in 6G ultra-large-scale machine communication is compressed, where the autoencoder technology first uses the encoder to compress the data at the transmitting end, then uses the decoder to decompress the data at the receiving end, and the parameters of the encoder and the decoder are obtained by training on data through a machine learning method.
3. usage
Embodiment 2: the ACK/NACK sequence in the PDSCH is compressed based on the autoencoder technology in machine learning.
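A compact end-to-end sketch of this embodiment under stated assumptions (the layer sizes, the straight-through rounding that keeps the code bits at 0/1 during training, and the synthetic training data are all illustrative, not the claimed design): the encoder compresses the sparse ACK/NACK sequence at the transmitting end, the decoder reconstructs it at the receiving end, and both are trained jointly on stored sequences of one sparsity level.

```python
import torch
import torch.nn as nn

class AckNackAutoencoder(nn.Module):
    def __init__(self, seq_len: int = 20, code_len: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(seq_len, 32), nn.LeakyReLU(),
                                     nn.Linear(32, code_len))
        self.decoder = nn.Sequential(nn.Linear(code_len, 32), nn.LeakyReLU(),
                                     nn.Linear(32, seq_len), nn.Sigmoid())

    def compress(self, bits: torch.Tensor) -> torch.Tensor:
        soft = torch.sigmoid(self.encoder(bits))
        hard = (soft > 0.5).float()
        # Straight-through trick: the forward value is the 0/1 indication
        # sequence, while gradients flow through the soft values.
        return hard + soft - soft.detach()

    def forward(self, bits: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.compress(bits))

# Train on synthetic sparse sequences (about 10% ACKs) standing in for the
# stored sequences of one sparsity level.
model = AckNackAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = (torch.rand(512, 20) < 0.1).float()
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(model(data), data)
    loss.backward()
    opt.step()

# Use: compress at the transmitting end, decompress and round at the receiver.
seq = (torch.rand(1, 20) < 0.1).float()
indication = model.compress(seq)                      # 7-bit indication sequence
recovered = (model.decoder(indication) > 0.5).float() # reconstructed ACK/NACK bits
```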
In the above-mentioned embodiments of the present disclosure, machine learning technology is used to reduce the feedback overhead of a 0/1 bit sequence with sparsity; the specific embodiments include, but are not limited to: 1) reducing, with low complexity, the overhead of the ACK/NACK feedback of the PUSCH in 6G ultra-large-scale machine communication; and 2) reducing, with low complexity, the overhead of the ACK/NACK sequence of the PDSCH at 6G ultra-high data rates.
Based on the embodiment shown in
As shown in
Optionally, the selecting, by the terminal, the machine learning model applied to the ACK/NACK sequence includes:
Optionally, the selecting, by the terminal, the machine learning model applied to the ACK/NACK sequence includes:
Optionally, the terminal transmits indication information of the selected machine learning model applied to the ACK/NACK sequence to the network device.
Optionally, machine learning model indication information transmitted by the network device through radio resource control RRC signaling or medium access control control element MAC CE signaling or physical layer signaling is received.
Optionally, the transmitting, by the terminal, the indication sequence to the network device includes:
Optionally, the data transmission method further includes:
Optionally, the data transmission method further includes:
Optionally, the machine learning model indication information is transmitted by one or all of the network devices.
Optionally, the training process and distributing process for the machine learning model further include:
Optionally, the machine learning model is trained and distributed by the following process:
Optionally, the terminal reports the trained model to the network device through the following process:
It should be noted that the device in this embodiment is a device corresponding to the method shown in
As shown in
Optionally, the selecting, by the terminal, the machine learning model applied to the ACK/NACK sequence includes:
Optionally, the selecting, by the terminal, the machine learning model applied to the ACK/NACK sequence includes:
Optionally, the terminal transmits indication information of the selected machine learning model applied to the ACK/NACK sequence to the network device.
Optionally, machine learning model indication information transmitted by the network device through radio resource control RRC signaling or medium access control control element MAC CE signaling or physical layer signaling is received.
Optionally, the transmitting, by the terminal, the indication sequence to the network device includes:
Optionally, the data transmission method further includes:
Optionally, the data transmission method further includes:
Optionally, the machine learning model indication information is transmitted by one or all of the network devices.
Optionally, training process and distributing process for the machine learning model further include:
Optionally, the machine learning model is trained and distributed by the following process:
Optionally, the terminal reports the trained model to the network device through the following process:
It should be noted that the apparatus in this embodiment is an apparatus corresponding to the method shown in
The embodiment of the present disclosure also provides a network device, which includes: a transceiver, a processor, and a memory, where a program executable by the processor is stored in the memory; the program, when executed by the processor, implements: obtaining an ACK/NACK sequence to be trained, and storing the ACK/NACK sequence to be trained; grouping the stored ACK/NACK sequence, performing machine learning model training, and generating a plurality of groups of machine learning models; storing the machine learning models; and distributing the stored machine learning models to the terminals through RRC signaling or MAC CE or physical layer signaling.
Optionally, training process and distributing process for the machine learning model further include: being performed by one or all of the network devices; and being repeated once every fixed cycle.
Optionally, the storage module stores the machine learning model reported by the terminal;
Optionally, the stored ACK/NACK sequence is grouped according to a sequence length and an ACK ratio.
All the implementation manners in the foregoing method embodiments are applicable to this embodiment, and can also achieve the same technical effect.
The embodiment of the present disclosure also provides a training and distributing apparatus of machine learning model, which is applied to the network device side, and the apparatus includes:
Optionally, training process and distributing process for the machine learning model further include:
Optionally, the storage module stores the machine learning model reported by the terminal;
Optionally, the stored ACK/NACK sequence is grouped according to a sequence length and an ACK ratio.
All the implementation manners in the foregoing method embodiments are applicable to this embodiment, and can also achieve the same technical effect.
The embodiment of the present disclosure also provides a network device, which includes: a transceiver, a processor, and a memory, where a program executable by the processor is stored in the memory; the program, when executed by the processor, implements: obtaining an ACK/NACK sequence to be fed back; selecting a machine learning model applied to the ACK/NACK sequence, and transmitting indication information of the selected machine learning model to the terminal; compressing, through the machine learning model, the ACK/NACK sequence to obtain an indication sequence; and transmitting the indication sequence to the terminal.
Optionally, the network device transmits the machine learning model indication information to the terminal through radio resource control RRC signaling or medium access control control element MAC CE signaling or physical layer signaling.
Optionally, one or all network devices transmit the machine learning model indication information to the terminal.
Optionally, the machine learning model is trained and distributed by the following process:
Optionally, the machine learning model training process and distributing process further include:
Optionally, the terminal reports the trained model to the network device through the following process:
Optionally, the stored ACK/NACK sequence is grouped according to a sequence length and an ACK ratio.
Optionally, the transmitting the indication sequence to the terminal includes:
It should be noted that the device in this embodiment is a device corresponding to the method shown in
The embodiment of the present disclosure also provides a data transmission apparatus, which includes:
Optionally, the transceiver module transmits the machine learning model indication information to the terminal through radio resource control RRC signaling or medium access control control element MAC CE signaling or physical layer signaling.
Optionally, one or all network devices transmit the machine learning model indication information to the terminal.
Optionally, the machine learning model is trained and distributed by the following process:
Optionally, the machine learning model training process and distributing process further include:
Optionally, the terminal reports the trained model to the network device through the following process:
Optionally, the stored ACK/NACK sequence is grouped according to a sequence length and an ACK ratio.
Optionally, the transmitting the indication sequence to the terminal includes:
It should be noted that the apparatus in this embodiment is an apparatus corresponding to the method shown in
The embodiment of the present disclosure further provides a data transmission system, which includes: the device on the network side and the device on the terminal side as described in the above-mentioned embodiments.
It should be noted that all the implementation manners in the foregoing embodiments are applicable to the embodiments of the system, and can also achieve the same technical effect.
The embodiment of the present disclosure also provides a processor-readable storage medium, where the processor-readable storage medium stores processor-executable instructions, and the processor-executable instructions are used to enable the processor to perform the above-mentioned methods. All the implementation manners in the foregoing method embodiments are applicable to this embodiment, and can also achieve the same technical effect.
Those of ordinary skill in the art can appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality through different methods for each particular application, but such implementation should not be considered beyond the scope of the present disclosure.
Those skilled in the art can clearly understand that for the convenience and brevity of the description, the specific working process of the above-described system, apparatus and unit can refer to the corresponding process in the foregoing method embodiment, which will not be repeated here.
In the embodiments provided in the present disclosure, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative. For example, the division of the units is only a logical function division. In actual implementation, there may be other division methods. For example, multiple units or components can be combined or may be integrated into another system, or some features may be ignored, or not implemented. In another point, the mutual coupling or direct coupling or communication connection shown or discussed may be achieved through some interfaces, and the indirect coupling or communication connection of apparatuses or units may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
If the functions described above are realized in the form of software function units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present disclosure or the part that contributes to the related technology or the part of the technical solution may be embodied in the form of a software product. The computer software product is stored in a storage medium, and includes several instructions to enable a computer device (which may be a personal computer, server, or network device, etc.) to execute all or part of the steps of the methods described in various embodiments of the present disclosure. The aforementioned storage medium includes: U disk, mobile hard disk, read-only memory (read-only memory, ROM), random access memory (random access memory, RAM), magnetic disk or optical disk, and various media that can store program codes.
In addition, it should be pointed out that, in the apparatus and method of the present disclosure, obviously, each component or each step can be decomposed and/or reassembled. These decompositions and/or recombinations should be considered as equivalents of the present disclosure. Also, the steps for executing the above series of processes can naturally be executed in chronological order according to the illustrated order, but they are not necessarily executed in chronological order, and some steps can be executed in parallel or independently of each other. Those of ordinary skill in the art can understand that all or any of the steps or components of the methods and apparatuses of the present disclosure may be implemented in any computing apparatus (including processors, storage media, etc.) or network of computing apparatuses, in hardware, firmware, software, or a combination thereof, which can be realized by those skilled in the art by using their basic programming skills after reading the description of the present disclosure.
It should be understood that the embodiments described in the embodiments of the present disclosure may be implemented by hardware, software, firmware, middleware, microcode or a combination thereof. For hardware implementation, modules, units, and subunits can be implemented in one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processor (Digital Signal Processor, DSP), digital signal processing device (DSP Device, DSPD), Programmable Logic Device (Programmable Logic Device, PLD), Field-Programmable Gate Array (Field-Programmable Gate Array, FPGA), general-purpose processor, controller, microcontroller, microprocessor, and other electronic units or combinations thereof used for performing the functions described in the present disclosure.
For software implementation, the technologies described in the embodiments of the present disclosure may be implemented through modules (such as procedures, functions, etc.) that execute the functions described in the embodiments of the present disclosure. Software codes can be stored in memory and executed by a processor. Memory can be implemented within the processor or external to the processor.
Therefore, the object of the present disclosure can also be achieved by running a program or a group of programs on any computing apparatus. The computing apparatus may be a known general-purpose apparatus. Therefore, the object of the present disclosure can also be achieved only by providing a program product including program codes for realizing the method or apparatus. That is, such a program product also constitutes the present disclosure, and a storage medium storing such a program product also constitutes the present disclosure. Obviously, the storage medium may be any known storage medium or any storage medium developed in the future. It should also be pointed out that, in the apparatus and method of the present disclosure, obviously, each component or each step can be decomposed and/or reassembled. These decompositions and/or recombinations should be considered equivalents of the present disclosure. Also, the steps for performing the above-mentioned series of processes may naturally be performed in the chronological order described, but need not necessarily be performed in chronological order; certain steps may be performed in parallel or independently of each other.
What have been described above are optional embodiments of the present disclosure. It should be pointed out that those ordinarily skilled in the art can make some improvements and modifications without departing from the principles described in the present disclosure. These improvements and modifications are also within the protection scope of the present disclosure.
Number | Date | Country | Kind
202011261728.1 | Nov. 12, 2020 | CN | national

Filing Document | Filing Date | Country | Kind
PCT/CN2021/128386 | 11/3/2021 | WO |