This application relates to the field of channel coding, and to a method and an apparatus for encoding a polar code, and a method and an apparatus for decoding a polar code.
A polar code has been strictly proved to be a channel encoding scheme that reaches a channel capacity, and has features such as high performance, low complexity, and a flexible matching manner. Currently, the polar code is determined by the 3rd Generation Partnership Project (3GPP) as a control channel encoding scheme in some communication scenarios of the 5th generation (5G).
An original polar code is based on a kernel matrix
and an m-time Kronecker product F2⊗m of the kernel matrix corresponds to a linear code whose code length is N=2^m. As for a high-dimensional kernel matrix, researchers have proved that the polarization speed is higher when some well-designed high-dimensional kernel matrices are used to encode the polar code; in other words, decoding performance is better. However, the high-dimensional kernel matrix causes high decoding complexity, and how to reduce decoding complexity is an urgent problem to be resolved.
Embodiments described herein provide a method and an apparatus for encoding a polar code, and a method and an apparatus for decoding a polar code, to reduce decoding complexity.
According to a first aspect, an encoding method is provided. The method is performed by an encoder device, or is performed by a chip, a chip system, or a circuit set in the encoder device. This is not limited in at least one embodiment.
The method includes: obtaining information bits; determining a target kernel matrix, where the target kernel matrix is obtained by adjusting an original kernel matrix; and performing polar encoding on the information bits based on the target kernel matrix.
For example, the obtaining information bits includes internally generating information bits, or includes receiving information bits from outside.
Based on the foregoing technical solution, in response to the polar encoding being performed, a plurality of kernel matrices is constructed based on the original kernel matrix, and an appropriate target kernel matrix is selected for encoding based on the constructed kernel matrices and an actual communication status. In this manner, in response to the kernel matrix being constructed or selected, performance and decoding complexity are comprehensively considered, so that decoding complexity is reduced by selecting the appropriate kernel matrix used during encoding.
With reference to the first aspect, in some implementations of the first aspect, a dimension of the target kernel matrix is less than a dimension of the original kernel matrix.
Based on the foregoing technical solution, a plurality of kernel matrices whose dimensions are small are constructed based on a kernel matrix whose dimension is known. Encoding is performed based on the kernel matrix whose dimension is small, so that lower decoding complexity is provided while the same performance is ensured.
With reference to the first aspect, in some implementations of the first aspect, the target kernel matrix is obtained by adjusting a partial distance of the original kernel matrix.
For example, the target kernel matrix is obtained by adjusting a partial distance profile (PDP) of the original kernel matrix.
For example, in response to the partial distance of the original kernel matrix being adjusted, the adjustment is performed so that the partial distance is close to that of an Arikan matrix.
With reference to the first aspect, in some implementations of the first aspect, the target kernel matrix is obtained by adjusting the partial distance of the original kernel matrix according to a depth-first search algorithm.
With reference to the first aspect, in some implementations of the first aspect, in response to processing being performed according to the depth-first search algorithm, one or more of the following constraints are used for processing:
a candidate line set M meets: M[i] = {v_0^{l−1} ∈ F_2^l | wt(v_0^{l−1}) = D_i}; or
in response to the ith line being selected from the bottom in the candidate line set M, a Hamming weight of a selected line v is greater than or equal to D_i; or
selecting the line v from the candidate line set M and performing partial distance calculation on the line v and a constructed line, where in response to c∈C
a line that belongs to a same coset is not selected again, where
i indicates a line number, i∈[0,l−1], and wt(v) and wt(c) both indicate the Hamming weight,
For example, the currently constructed kernel matrix
Based on the foregoing technical solution, a constraint condition about a candidate line and/or a selected line is designed, so as to accelerate convergence speed, thereby further reducing calculation amount.
With reference to the first aspect, in some implementations of the first aspect, the original kernel matrix is a kernel matrix K24 whose dimension is (24×24), and the K24 is:
With reference to the first aspect, in some implementations of the first aspect, the target kernel matrix is any one of the following:
According to a second aspect, a decoding method is provided. The method is performed by a decoder device, or is performed by a chip, a chip system, or a circuit set in the decoder device. This is not limited in at least one embodiment.
The method includes: obtaining a to-be-decoded sequence; and decoding the to-be-decoded sequence based on a plurality of trellises, and reusing an intermediate result obtained from decoding performed based on a first trellis in the plurality of trellises in response to decoding being performed based on a second trellis in the plurality of trellises, where the first trellis is used for a tth stage of decoding the to-be-decoded sequence, the second trellis is used for a (t+i)th stage of decoding the to-be-decoded sequence, t is an integer greater than or equal to 0, and i is an integer greater than or equal to 1.
For example, the obtaining the to-be-decoded sequence represents receiving the to-be-decoded sequence from outside.
For example, the first trellis corresponds to the tth stage of decoding. In other words, in the tth stage of decoding, decoding is performed based on the first trellis; or in response to decoding being performed on an information bit of the to-be-decoded sequence, decoding is performed based on the first trellis. For example, the second trellis corresponds to the (t+i)th stage of decoding. In other words, in the (t+i)th stage of decoding, decoding is performed based on the second trellis; or in response to decoding being performed on another information bit of the to-be-decoded sequence, decoding is performed based on the second trellis.
For example, the intermediate result obtained through decoding performed based on the first trellis indicates a result obtained through processing based on the first trellis or a result obtained at the tth stage of decoding. The intermediate result includes, for example, a decoding path obtained through decoding performed based on the first trellis and a corresponding path measure value.
For example, a plurality of combinations are obtained after paths in a trellis of a smaller level are spliced, and an optimal path and a path value of a trellis of a larger level are obtained by selecting a path with a largest score in these combinations.
Based on the foregoing technical solution, in response to the to-be-decoded sequence being decoded according to a decoding algorithm based on the trellis, intermediate results obtained in different decoding stages are reused. In other words, in response to calculation being performed on different trellises, some calculation steps are the same. Therefore, reusing is performed a plurality of times, so that decoding complexity is greatly reduced, and decoding speed is improved.
With reference to the second aspect, in some implementations of the second aspect, each of the plurality of trellises is split into a plurality of sub-trellises.
Based on the foregoing technical solution, in response to the to-be-decoded sequence being decoded according to a decoding algorithm based on the trellis, the trellis is first split, and decoding based on a complete trellis is split into decoding based on a sub-trellis, thereby reducing decoding complexity.
With reference to the second aspect, in some implementations of the second aspect, locations of split points of the plurality of sub-trellises are related to decoding complexity.
In other words, the location of the split point at which the trellis is split is related to decoding complexity.
Based on the foregoing technical solution, an appropriate location of the split point is selected based on an actual situation. The location of the split point affects decoding complexity. Therefore, appropriate split points are selected to reduce decoding complexity.
With reference to the second aspect, in some implementations of the second aspect, the method further includes: determining, based on a correlation between the first trellis and the second trellis, whether to reuse the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis.
For example, the correlation between the first trellis and the second trellis indicates whether the first trellis is the same as or similar to the second trellis; or indicates whether a plurality of sub-trellises corresponding to the first trellis are the same as or similar to a plurality of sub-trellises corresponding to the second trellis; or indicates whether a same calculation step exists in response to the processing being performed based on the first trellis and the second trellis.
Based on the foregoing technical solution, whether the intermediate results obtained in different decoding stages are reused is determined based on the correlation between the trellises, for example, whether the first trellis is the same as or similar to the second trellis.
With reference to the second aspect, in some implementations of the second aspect, the decoding method further includes: determining, based on the following information, whether to reuse the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis: a code obtained by performing a shortening operation on a linear code C in the (t+i)th stage of decoding and a code obtained by performing the shortening operation on the linear code C in the tth stage of decoding; and/or a code obtained by performing a puncturing operation on the linear code C in the (t+i)th stage of decoding and a code obtained by performing the puncturing operation on the linear code C in the tth stage of decoding.
Based on the foregoing technical solution, whether the intermediate results obtained in different decoding stages are reused is determined by comparing the code obtained by performing the shortening operation on the linear code C in different decoding stages and/or the code obtained by performing the puncturing operation on the linear code C in different decoding stages.
With reference to the second aspect, in some implementations of the second aspect, in response to sx,y(C(t+i))=sx,y(C(t)), and px,y(C(t+i))⊂px,y(C(t)), reusing the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis; or in response to sx,y(C(t+i))⊂sx,y(C(t)), reusing the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis, where sx,y(C(t+i)) indicates a code obtained by performing the shortening operation on the linear code C at a location except x≤z<y in the (t+i)th stage of decoding, sx,y(C(t)) indicates a code obtained by performing the shortening operation on the linear code C at the location except x≤z<y in the tth stage of decoding, px,y(C(t+i)) indicates a code obtained by performing the puncturing operation on the linear code C at the location except x≤z<y in the (t+i)th stage of decoding, and px,y(C(t)) indicates a code obtained by performing the puncturing operation on the linear code C at the location except x≤z<y in the tth stage of decoding.
Based on the foregoing technical solution, whether the intermediate results obtained in different decoding stages are reused is determined based on the foregoing conditions.
With reference to the second aspect, in some implementations of the second aspect, the reusing the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis includes: in response to processing being performed based on the second trellis in the (t+i)th stage of decoding, directly using the intermediate result obtained through decoding performed based on the first trellis as an intermediate result obtained through decoding performed based on the second trellis; or in response to processing being performed based on the second trellis in the (t+i)th stage of decoding, performing processing based on a result of a maximization operation in a process of the first trellis.
With reference to the second aspect, in some implementations of the second aspect, decoding is polar decoding.
In other words, the to-be-decoded sequence is a sequence obtained after the polar encoding.
Based on the foregoing technical solution, in response to the encoding being performed by using a high-dimensional kernel matrix and the decoding being performed according to a conventional trellis algorithm, decoding complexity of a polar code decoder is high. In at least one embodiment, decoding complexity is reduced by performing the decoding on the sub-trellis. In addition, compared with a conventional solution in which encoding is performed by using an Arikan kernel and decoding is performed by using a successive cancellation list (SCL) algorithm, lower decoding complexity is provided while the same performance is ensured in this solution.
According to a third aspect, at least one embodiment provides an encoding apparatus. The encoding apparatus has a function of implementing the method in any one of the first aspect and the implementations of the first aspect. The function is implemented by hardware, or is implemented by hardware executing corresponding software. The hardware or the software includes one or more units corresponding to the foregoing function.
In at least one embodiment, in response to some or all of the functions being implemented by hardware, the encoding apparatus includes: an input interface circuit, configured to obtain information bits; a logic circuit, configured to perform the encoding method in the first aspect to perform polar encoding on the information bits; and an output interface circuit, configured to output an encoded sequence.
Optionally, the encoding apparatus is a chip or an integrated circuit.
In at least one embodiment, in response to some or all of the functions being implemented by software, the encoding apparatus includes: a memory, configured to store a computer program; and a processor, configured to execute the computer program stored in the memory, where in response to the computer program being executed, the encoding apparatus implements the encoding method according to the first aspect.
Optionally, the memory is a physically independent unit, or is integrated with the processor.
In at least one embodiment, in response to some or all of the functions being implemented by using software, the encoding apparatus includes only a processor. The memory configured to store the program is located outside the encoding apparatus. The processor is connected to the memory by using a circuit/wire, and is configured to read and run the program stored in the memory, to perform the encoding method in the first aspect.
During specific implementation, the encoding apparatus is the chip or the integrated circuit.
According to a fourth aspect, at least one embodiment provides a decoding apparatus. The decoding apparatus has a function of implementing the method in any one of the second aspect and the implementations of the second aspect. The function is implemented by hardware, or is implemented by hardware executing corresponding software. The hardware or the software includes one or more units corresponding to the foregoing function.
In at least one embodiment, in response to some or all of the functions being implemented by hardware, the decoding apparatus includes: an input interface circuit, configured to obtain a to-be-decoded sequence; a logic circuit, configured to perform the decoding method in the second aspect to decode the to-be-decoded sequence to obtain a decoding result; and an output interface circuit, configured to output the decoding result.
Optionally, the decoding apparatus is a chip or an integrated circuit.
In at least one embodiment, in response to some or all of the functions being implemented by software, the decoding apparatus includes: a memory, configured to store a computer program; and a processor, configured to execute the computer program stored in the memory, where in response to the computer program being executed, the decoding apparatus implements the decoding method according to the second aspect.
Optionally, the memory is a physically independent unit, or is integrated with the processor.
In at least one embodiment, in response to some or all of the functions being implemented by using software, the decoding apparatus includes only a processor. The memory configured to store the program is located outside the decoding apparatus. The processor is connected to the memory by using a circuit/wire, and is configured to read and run the program stored in the memory, to perform the decoding method in the second aspect.
During specific implementation, the decoding apparatus is the chip or the integrated circuit.
According to a fifth aspect, at least one embodiment provides a network device, including a transceiver, a processor, and a memory. The processor is configured to control the transceiver to receive and send a signal, the memory is configured to store a computer program, and the processor is configured to invoke and run the computer program stored in the memory, so that the network device performs the method in any implementation of the first aspect.
In response to the network device serving as a transmit end of information and/or data, the network device performs the encoding method in the first aspect to encode sent information bits.
According to a sixth aspect, at least one embodiment provides a network device, including a transceiver, a processor, and a memory. The processor is configured to control the transceiver to receive and send a signal, the memory is configured to store a computer program, and the processor is configured to invoke and run the computer program stored in the memory, so that the network device performs the method in any implementation of the second aspect.
In response to the network device serving as a receive end of information and/or data, the network device performs the decoding method in the second aspect, to decode a to-be-decoded sequence received from a transmit end.
According to a seventh aspect, at least one embodiment provides a terminal device, including a transceiver, a processor, and a memory. The processor is configured to control the transceiver to receive and send a signal, the memory is configured to store a computer program, and the processor is configured to invoke and run the computer program stored in the memory, so that the terminal device performs the method in any implementation of the first aspect.
In response to the terminal device serving as a transmit end of information and/or data, the terminal device performs the encoding method in the first aspect to encode sent information bits.
According to an eighth aspect, at least one embodiment provides a terminal device, including a transceiver, a processor, and a memory. The processor is configured to control the transceiver to receive and send a signal, the memory is configured to store a computer program, and the processor is configured to invoke and run the computer program stored in the memory, so that the terminal device performs the method in any implementation of the second aspect.
In response to the terminal device serving as a receive end of information and/or data, the terminal device performs the decoding method in the second aspect, to decode a to-be-decoded sequence received from a transmit end.
According to a ninth aspect, at least one embodiment provides a computer-readable storage medium. The computer-readable storage medium stores instructions, and in response to the instructions being run on a computer, the computer is enabled to perform the method in any one of the implementations of the first aspect or the second aspect.
According to a tenth aspect, at least one embodiment provides a computer program product, where the computer program product includes a computer program code, and in response to the computer program code running on a computer, the computer is enabled to perform the method in any one of the implementations of the first aspect or the second aspect.
According to an eleventh aspect, at least one embodiment provides a chip, including a logic circuit and a communication interface, where the communication interface is configured to receive to-be-processed data and/or information, and transmit the to-be-processed data and/or information to the logic circuit, the logic circuit is configured to perform encoding, and the communication interface is further configured to output the processed data/information.
The chip is a chip set at a transmit end, and the to-be-processed data and/or information is information bits. The chip obtains information bits by using the communication interface, and transmits the information bits to the logic circuit. The logic circuit encodes the information bits by using the encoding method described in the first aspect. The chip outputs an encoded polar code by using the communication interface.
Optionally, the communication interface includes an input interface and an output interface. The input interface is configured to obtain the information bits, and the output interface is configured to output the encoded polar code.
Optionally, the chip is a chip set at a transmit end. The logic circuit is configured to generate the information bits, and encode the information bits by using the encoding method described in the first aspect. The chip outputs the encoded polar code by using the communication interface.
Optionally, the communication interface includes an input output interface. The input output interface is configured to output the encoded polar code.
According to a twelfth aspect, at least one embodiment provides a chip, including a logic circuit and a communication interface, where the communication interface is configured to receive to-be-processed data and/or information, and transmit the to-be-processed data and/or information to the logic circuit, the logic circuit is configured to perform decoding, and the communication interface is further configured to output the processed data/information.
The chip is a chip configured in a receive end, and the to-be-processed data and/or information is a to-be-decoded sequence. The chip receives the to-be-decoded sequence by using the communication interface, and transmits the to-be-decoded sequence to the logic circuit. The logic circuit decodes the to-be-decoded sequence by using the decoding method described in the second aspect. The chip outputs a decoding result by using the communication interface.
Optionally, the communication interface includes an input interface and an output interface. The input interface is configured to receive the to-be-decoded sequence, and the output interface is configured to output the decoding result.
Optionally, the chip is a chip configured at a receive end. The logic circuit is configured to obtain the to-be-decoded sequence, and decode the to-be-decoded sequence by using the decoding method described in the second aspect. The chip outputs the decoding result by using the communication interface.
Optionally, the communication interface includes an input output interface. The input output interface is configured to output the decoding result.
According to a thirteenth aspect, at least one embodiment provides a communication system, including an encoder device and a decoder device.
The following describes technical solutions of at least one embodiment with reference to the accompanying drawings.
The technical solutions in embodiments described herein are applied to various wireless communication systems. For example, the wireless communication system includes but is not limited to a wireless local area network (WLAN) system, a narrow band-internet of things (NB-IoT) system, a 5G system, a new radio (NR) system, a long term evolution (LTE) system, or a communication system after 5G. The technical solutions in embodiments described herein are further applied to device to device (D2D) communication, machine to machine (M2M) communication, machine type communication (MTC), satellite communication, and communication in an Internet of Vehicles system. Communication modes in the Internet of Vehicles system are collectively referred to as V2X (X indicates everything). For example, the V2X communication includes vehicle to vehicle (V2V) communication, vehicle to infrastructure (V2I) communication, vehicle to pedestrian (V2P) communication, or vehicle to network (V2N) communication.
For example, the technical solutions in embodiments described herein are applied to an application scenario of a 5G mobile communication system, such as enhanced mobile broadband (eMBB), ultra reliable low latency communication (URLLC), and enhanced massive machine type communication (eMTC).
Based on a network device mentioned in at least one embodiment, the network device is any device having a wireless transceiver function. The device includes but is not limited to: an evolved NodeB (eNB), a Radio Network Controller (RNC), a NodeB (NB), a Base Station Controller (BSC), a base transceiver station (BTS), a home base station (for example, a Home evolved NodeB or a Home NodeB, HNB), a Baseband unit (BBU), an Access Point (AP) in a Wireless Fidelity (Wi-Fi) system, a wireless relay node, a wireless backhaul node, a transmission point (TP), or a transmission and reception point (TRP). Alternatively, the device is a gNB in a 5G (for example, NR) system, a transmission point (TRP or TP), one antenna panel or a group of antenna panels (including a plurality of antenna panels) of a base station in the 5G system, or a network node that forms the gNB or the transmission point, for example, the Baseband unit (BBU) or a distributed unit (DU).
In some deployments, the gNB includes a central unit (CU) and the DU. The gNB further includes an active antenna unit (AAU). The CU implements some functions of the gNB, and the DU implements some functions of the gNB. For example, the CU is responsible for processing a non-real-time protocol and a service, to implement functions of a radio resource control (RRC) layer and a packet data convergence protocol (PDCP) layer. The DU is responsible for processing a physical layer protocol and a real-time service, and implements functions of a radio link control (RLC) layer, a media access control (MAC) layer, and a physical (PHY) layer. The AAU implements some physical layer processing functions, radio frequency processing, and a function related to the active antenna. Information at the RRC layer is eventually converted into information at the PHY layer, or is converted from information at the PHY layer. Therefore, in this architecture, higher layer signaling such as RRC layer signaling is also considered as being sent by the DU or sent by the DU and the AAU. The network device is a device including one or more of a CU node, a DU node, and an AAU node. In addition, the CU is a network device in a radio access network (RAN), or is a network device in a core network (CN). This is not limited in embodiments described herein.
The terminal device in at least one embodiment is also referred to as user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, a user apparatus, or the like. The terminal device in at least one embodiment is a mobile phone, a Pad, a computer with a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation security, a wireless terminal in a smart city, a wearable wireless terminal, a wireless terminal in a smart home, or the like. Application scenarios are not limited in embodiments described herein.
For ease of understanding embodiments described herein, a communication system to which an at least one embodiment is applicable is first described in detail with reference to
In the wireless communication system shown in
The encoder is also referred to as a transmit end (or a sending device), and the decoder is also referred to as a receive end (or a receiving device) of information or data. The sending device is a network device, and the receiving device is a terminal device. Alternatively, the sending device is a terminal device, and the receiving device is a network device. Alternatively, the sending device is a terminal device, and the receiving device is a terminal device. Alternatively, the sending device is a network device, and the receiving device is a network device.
Channel coding is one of core technologies in the wireless communication field, and performance improvement of channel coding directly improves network coverage and a user transmission rate. Currently, a polar code is strictly proved to be a channel encoding scheme that reaches a channel capacity, and has features such as high performance, low complexity, and a flexible matching manner. Currently, the polar code is determined by the 3GPP as a control channel coding scheme in a 5G control channel eMBB scenario (uplink/downlink).
For ease of understanding of embodiments described herein, the following first briefly describes several terms used herein.
1. Encoding Scheme of a Polar Code
To-be-encoded bits are sorted based on their respective reliability. Generally, a bit with high reliability is set to an information bit (data), a bit with low reliability is set to a frozen bit, and a value of the frozen bit is usually set to 0, which is known to both a transmit end and a receive end in actual transmission. As shown in
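As an informal illustration of the foregoing encoding scheme, the following Python sketch places information bits at assumed reliable positions, keeps the remaining positions as frozen zeros, and multiplies by the m-time Kronecker product of the 2×2 Arikan kernel over GF(2). The reliability order used in the example is an assumption for illustration only, not a construction defined herein.

```python
import numpy as np

F2 = np.array([[1, 0],
               [1, 1]], dtype=np.uint8)   # 2x2 Arikan kernel

def kron_power(kernel, m):
    """m-time Kronecker product of the kernel with itself."""
    g = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        g = np.kron(g, kernel)
    return g

def polar_encode(info_bits, info_positions, n):
    """Place information bits at the (assumed) reliable positions, keep frozen
    bits at 0, and multiply by the generator matrix over GF(2)."""
    u = np.zeros(n, dtype=np.uint8)
    u[info_positions] = info_bits
    g = kron_power(F2, int(np.log2(n)))
    return (u @ g) % 2

# Example: n = 8, four assumed information positions.
print(polar_encode(np.array([1, 0, 1, 1], dtype=np.uint8), [3, 5, 6, 7], 8))
```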
2. Linear Block Code Trellis Representation
Decoding based on a trellis is a conventional decoding manner of a convolutional code. For a linear block code, decoding is also performed based on the trellis. Decoding based on the trellis is a maximum likelihood (ML)/maximum a posteriori (MAP) decoding manner. In this manner of decoding, decoding performance is good, but correspondingly, decoding complexity is very high. Given a linear block code whose check matrix is H, all codewords comply with: 0 = cH^T = Σ_{i=0}^{n−1} c_i H^{(i)}.
H^{(i)} indicates the ith column of the check matrix H. A node at the jth moment in a trellis diagram is marked as: S^{(j)} = Σ_{i=0}^{j−1} c_i H^{(i)}.
is shown in
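For illustration only, the following sketch enumerates the trellis nodes S^{(j)} defined above as partial syndromes of a check matrix H; a valid codeword corresponds to a path that returns to the all-zero state at the last moment. The small check matrix used in the example is hypothetical.

```python
import numpy as np

def trellis_states(H):
    """Enumerate the reachable states S(j) = sum_{i<j} c_i * H^{(i)} (mod 2)
    for j = 0, ..., n; paths returning to the all-zero state are codewords."""
    r, n = H.shape
    states = [{tuple(np.zeros(r, dtype=np.uint8))}]
    for j in range(n):
        nxt = set()
        for s in states[-1]:
            for cj in (0, 1):
                nxt.add(tuple((np.array(s, dtype=np.uint8) + cj * H[:, j]) % 2))
        states.append(nxt)
    return states

# Hypothetical check matrix of a small (4, 2) linear block code.
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]], dtype=np.uint8)
for j, layer in enumerate(trellis_states(H)):
    print(j, sorted(layer))
```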
3. Channel Parameter
A polar code is a constructable channel encoding scheme that reaches a binary input discrete memoryless channel capacity.
W: {0,1}→Y is a symmetric binary discrete memoryless channel (B-DMC), an input letter set of W is X={0, 1}, and an output letter set of W is Y. In response to an input x and a channel output y being given, a channel conditional transition probability is expressed as W(y|x).
A capacity of the B-DMC W is defined as follows:
A Bhattacharyya parameter of the B-DMC W is defined as:
4. Kernel Matrix
In the literature, Arikan infers that polarization is a common phenomenon and is not limited to the given G=F2⊗m, where
and “⊗m” indicates an m-time Kronecker product. Korada and so on prove the conclusion and extend the conclusion to a high-dimensional kernel matrix, and prove that a condition for polarization of any l×l matrix is a permutation matrix of any column in the l×l matrix but not an upper triangular matrix. In addition, Korada and so on also prove that in response to l≤15, a polarization exponent β of any kernel matrix cannot exceed ½. In a case of a same code length, a larger polarization exponent β indicates a lower decoding error probability of a corresponding polar code, and corresponding decoding performance is better. However, a main problem of the high-dimensional kernel matrix polar code is that its decoding algorithm is complex, which is one of bottlenecks in high-dimensional kernel matrix research.
5. Polarization Exponent
Several theorems are briefly described for the polarization exponent. For details, refer to descriptions in the conventional technology. This is not limited herein.
Theorem 1: Given a B-DMC W, for F2 and any β<½, the following is obtained:
Correspondingly, in response to I(W)<1, for any β>½, the following is obtained:
β is referred to as a polarization exponent, and a code length is n=2^m.
Korada et al. extend the above theorem to any high-dimensional kernel matrix. In this case, the code length is n=l^m. This is described in the following.
Theorem 2: For any B-DMC W, in response to 0<I(W)<1 and the following conditions being met, an l×l-dimensional kernel matrix K has a polarization exponent E(K). The conditions are:
Here, E(K) indicates the polarization exponent. For the Arikan kernel, the polarization exponent is ½.
Given the matrix K and the corresponding E(K), theorem 2 shows that:
(1) In response to m tending to infinity, in response to 0<R<I(W) and β<E(K), there is a set A whose size is NR that meets Σ_{i∈A} Z^{(i)} ≤ 2^{−n^β}, where A is an information bit set, and a bound of a frame error ratio under SC decoding is P_e(n) ≤ 2^{−n^β}.
(2) In response to m tending to infinity, in response to R>0 and β>E(K), for any set A whose size is NR, where A is an information bit set, the frame error ratio under SC decoding is bounded as P_e(n) ≥ 2^{−n^β}.
The polarization exponent measures a convergence speed of decoding performance, and a larger polarization exponent indicates a faster convergence speed, so that better decoding performance is obtained in response to the code length being short.
6. Scaling Exponent
Another key feature of a polar kernel is a scaling exponent. The scaling exponent indicates kernel matrix performance. At least one embodiment focuses on a function of the scaling exponent.
For example, given a channel W whose channel capacity is I(W) and an expected error probability Pe, a kernel matrix of a polar code corresponding to (N, k) is K. In response to transmission being expected to be performed at a rate of R=I(W)−Δ, a code length N meets:
μ(K) is the scaling exponent.
The scaling exponent is a parameter related to the channel. For a random code, μ=2. For the polar code, in response to l→∞, let K=Fl, and the following is obtained:
On a BEC channel, a scaling exponent of the polar code with an Arikan kernel is 3.627.
7. Partial Distance
In an l×l matrix K, K[i] indicates the ith line of the matrix, and a partial distance D_i (i=0, ..., l−1) of the matrix is: D_i ≜ d_H(K[i], ⟨K[i+1], ..., K[l−1]⟩), i=0, ..., l−2, and D_{l−1} ≜ d_H(K[l−1], 0), where ⟨·⟩ indicates the code generated by the corresponding lines.
d_H(a,b) indicates a Hamming distance between a vector a and a vector b, and a vector D is defined as a partial distance profile (PDP). For any B-DMC W, any l×l polar matrix K, and a partial distance {D_i}_{i=0}^{l−1} thereof, a corresponding polarization exponent E(K) is expressed as:
Actually, with l→∞, an l×l kernel that makes the polarization exponent infinitely close to 1 is obtained.
The foregoing theorem is merely a brief description for understanding, and does not limit the protection scope of embodiments described herein. For details, refer to existing descriptions. This is not limited herein.
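As an illustration of the partial distance profile defined in this section, the following brute-force sketch computes D_i as the minimum Hamming distance from line K[i] to the code generated by the lines below it, with D_{l−1} equal to the weight of the last line. This exhaustive version is only workable for small kernels and is not the construction method described herein.

```python
import numpy as np
from itertools import product

def partial_distance_profile(K):
    """Brute-force PDP: D_i is the minimum Hamming distance from line K[i]
    to the code generated by lines K[i+1], ..., K[l-1] (D_{l-1} = wt(K[l-1]))."""
    l, n = K.shape
    pdp = []
    for i in range(l):
        sub = K[i + 1:]
        best = None
        # enumerate every codeword of the code generated by the lines below line i
        for coeffs in product((0, 1), repeat=sub.shape[0]):
            c = np.zeros(n, dtype=np.uint8)
            for a, row in zip(coeffs, sub):
                if a:
                    c = (c + row) % 2
            d = int(np.sum((K[i] + c) % 2))
            best = d if best is None else min(best, d)
        pdp.append(best)
    return pdp

# Example with the 2x2 Arikan kernel: the PDP is [1, 2].
F2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
print(partial_distance_profile(F2))   # -> [1, 2]
```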
8. Kernel Matrix Encoding
For one (n, k) polar code (a code length n is n=l^m), given an l×l kernel matrix K, G=M(m)K^{⊗m}, where M(m) is a digit-reversal permutation matrix whose corresponding mapping relationship is Σ_{i=0}^{m−1} t_i l^i → Σ_{i=0}^{m−1} t_{m−1−i} l^i, t_i∈[l], and [l] indicates a set {0, 1, ..., l−1}. A corresponding code is represented as C_0^{n−1}=u_0^{n−1}G, where u_i, i∈F, is a frozen bit, |F|=n−k, and a remaining bit is an information bit.
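The following sketch illustrates the relation G=M(m)K^{⊗m} described above, under the assumption that M(m) acts as the digit-reversal permutation given by the mapping Σ t_i l^i → Σ t_{m−1−i} l^i and is applied to the input vector; this modeling choice and the function names are illustrative only.

```python
import numpy as np

def digit_reversal_permutation(l, m):
    """Index permutation mapping sum(t_i * l^i) to sum(t_{m-1-i} * l^i)."""
    perm = []
    for idx in range(l ** m):
        digits, x = [], idx
        for _ in range(m):
            digits.append(x % l)
            x //= l
        rev = 0
        for d in digits:              # least-significant digit becomes most significant
            rev = rev * l + d
        perm.append(rev)
    return np.array(perm)

def encode_with_kernel(u, K, m):
    """Sketch of c = u * M(m) * K^{(x)m} over GF(2); M(m) is modeled here as a
    digit-reversal permutation applied to the input vector (an assumption)."""
    g = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        g = np.kron(g, K)
    perm = digit_reversal_permutation(K.shape[0], m)
    return (u[perm] @ g) % 2

# Example: with the 2x2 Arikan kernel and m = 2, the code length is l^m = 4.
F2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
print(encode_with_kernel(np.array([1, 0, 1, 1], dtype=np.uint8), F2, 2))
```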
9. Kernel Processing
During polar code decoding, a posterior probability Wm(j)(u0j|y0l) is to be calculated. Wm(j)(u0j|y0t) is obtained through recursion. For simple description, let m=1, and a corresponding task is referred to as kernel processing. The kernel processing is the following:
The foregoing formula is also approximately expressed as:
The probability in the foregoing formula corresponds to a path with a maximum possibility in a decoding number u0j. A log-likelihood ratio (LLR) is approximately expressed as:
u^{(a)} = (u_0^{i−1}, a, u_{i+1}^{l−1}). All u_j, i<j<l, are of equal probability, and a matrix formed by the last (l−i+1) lines of the kernel K generates corresponding codewords. In this case, S1j is obtained by performing ML decoding on a coset of these codewords, and a coset leader is obtained by multiplying u_0^{i−1} and a matrix formed by the first i lines of the kernel K.
A person skilled in the art should understand the meaning of coset mentioned a plurality of times herein. For example, in response to G being a group, H being a subgroup of G, and g being an element of G, gH={gh: all h∈H} indicates a left coset of H, and Hg={hg: all h∈H} indicates a right coset of H. Details are not described in the following.
10. Extended Kernel Codes
An (l+1, l−i) code C(i) generated by K(i) is considered, and K(i) is obtained from K according to the following rules:
(1) The first l columns of K(i) are obtained by taking the last (l−i) lines of K.
(2) The first line in the last column of K(i) is set to 1, and other lines are set to 0.
For example, an Arikan kernel is
is obtained. A trellis corresponding to the Arikan kernel is shown in
11. Correlation Discrepancy
In response to decoding being performed based on a trellis, different paths (which indicates different decoding results) are to be selected and compared. A definition of a path value (a score of c) is provided in a plurality of manners, for example:
(1) Logarithm of probability, for example, is expressed as: log W(c|y).
(2) Correlation, for example, is expressed as: Σi(−1)c
(3) Correlation discrepancy, for example, is expressed as: Σiτ(S(yi),ci).
The correlation discrepancy of c0l-1 and y0l-1 is defined as: ε(c0l-1,y0l-1)=Σi=0l-1τ(ci,yi).
In addition,
is obtained. c(u_t)=(u_0^t, U(u_0^{t−1}, u_t))K, and U(u_0^{t−1}, u_t)=u_{t+1}^{l−1}. The solution of the foregoing formula is equivalent to searching for the most probable codeword in response to the last bit in the coset of the code C(i) being 0 or 1. The searching is implemented by executing a Viterbi algorithm in a trellis of the code C(i). Assuming that the last bit of the codeword is erased, the corresponding trellis is defined as the ith extended trellis. Generally, complexity of the Viterbi algorithm is O(2^{min(i+1,l−i)}).
In at least one embodiment, in response to a polar kernel being selected, kernel processing complexity during decoding is also considered. Sometimes, a kernel with better performance brings higher decoding complexity. Therefore, performance and complexity are to be comprehensively considered during kernel matrix construction or selection.
For example,
E(K16)=E(K′16)=0.51828; μ(K16)=3.346 and μ(K′16)=3.45; and ϕ(K16)=472 and ϕ(K′16)=183. ϕ(K) indicates processing complexity of the kernel
Using K16 brings better performance than using K′16, but decoding complexity is higher. Therefore, in this case, both performance and complexity are to be considered for selecting a kernel matrix.
In still another example,
E(K32)=0.5219 and E(K′32)=0.529; μ(K32)=3.421 and μ(K′32)=3.207; and ϕ(K32)=532 and ϕ(K′32)=6770. Using K′32 brings better performance than using K32, but decoding complexity of using K′32 is higher, and decoding complexity of using K′32 is more than 10 times the decoding complexity of using K32.
From the foregoing, during encoding, selection of the kernel matrix affects decoding complexity.
In addition, as mentioned above, the linear block code is decoded by using a trellis-based method, and the polar code is also a linear block code and is also decoded by using such a method. For example, specific decoding steps are shown as follows:
(1) Let l(j, s) be a set of all states connected, at a moment j−1, to a state s at a moment j.
(2) Set c_{j−1,s′,s} to a value on a corresponding connection edge.
(3) Define a node value as
ε_{0,0}=0.
(4) The LLR is obtained through calculation according to S_1^{(t)}(u_0^{t−1}, y_0^{t−1}) = ε_{t,0} − ε_{t,1}.
For example, in response to the Viterbi algorithm being used for the Arikan kernel, the Arikan kernel is
is obtained. A corresponding trellis diagram of the Arikan kernel is shown in
The foregoing method is an ML/MAP algorithm. In response to the code length being long, decoding complexity is very high and is difficult to accept.
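For illustration only, the following sketch applies steps (1) to (4) above as a Viterbi search over the syndrome trellis of a check matrix H, where the node value ε accumulates a simple LLR-based branch metric; the metric, its sign convention, and the example inputs are assumptions. The number of states grows with the number of check rows, which reflects the high complexity noted above.

```python
import numpy as np

def viterbi_trellis(H, llr):
    """Sketch of ML decoding on the syndrome trellis of H. States are partial
    syndromes; eps[j][s] is the best accumulated metric of any path reaching
    state s at moment j, with eps_{0,0} = 0 (steps (1)-(3) above)."""
    r, n = H.shape
    zero = tuple(np.zeros(r, dtype=np.uint8))
    eps = [{zero: 0.0}]
    back = []
    for j in range(n):
        layer, bk = {}, {}
        for s, val in eps[-1].items():
            for cj in (0, 1):
                metric = val + (llr[j] if cj == 1 else 0.0)   # illustrative branch metric
                s2 = tuple((np.array(s, dtype=np.uint8) + cj * H[:, j]) % 2)
                if s2 not in layer or metric < layer[s2]:
                    layer[s2], bk[s2] = metric, (s, cj)
        eps.append(layer)
        back.append(bk)
    # trace back from the all-zero final state: valid codewords end at S(n) = 0
    state, bits = zero, []
    for j in range(n - 1, -1, -1):
        state, cj = back[j][state]
        bits.append(cj)
    return bits[::-1], eps[-1][zero]

# Hypothetical example: decode a length-4 word with a small check matrix.
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]], dtype=np.uint8)
print(viterbi_trellis(H, llr=[-1.2, 0.8, -0.3, 2.1]))
```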
In view of this, embodiments described herein propose that improvement is made on an encoder and/or a decoder, so as to reduce decoding complexity. For example, a kernel matrix is selected from a plurality of constructed kernel matrices after comprehensive consideration based on an actual situation, so that both the performance and decoding complexity are considered. For another example, trellis-based decoding is used, and the trellis diagram is split. A final optimal path of the trellis is obtained by processing each split result, so that decoding complexity is reduced.
The following describes in detail embodiments provided in at least one embodiment with reference to the accompanying drawings.
1310. Obtain information bits.
In other words, an encoder device obtains to-be-encoded information bits. The information bits are generated internally or are received from outside. This is not limited herein.
1320. Determine a target kernel matrix, where the target kernel matrix is obtained by adjusting an original kernel matrix.
In at least one embodiment, the adjusting an original kernel matrix includes at least the following two solutions.
Solution 1. One or more kernel matrices are obtained by adjusting a PD of the original kernel matrix. One or more kernel matrices are obtained based on Solution 1, and an appropriate target kernel matrix is selected from the one or more kernel matrices.
Solution 2. A plurality of kernel matrices whose dimensions are different from a dimension of the original kernel matrix are constructed by adjusting the dimension of the original kernel matrix. One or more kernel matrices are obtained based on Solution 2, and an appropriate target kernel matrix is selected from the one or more kernel matrices.
In Solution 2, the adjusting of the dimension of the original kernel matrix indicates that the dimension of the kernel matrix is changed by modifying the original kernel matrix. For example, corresponding processing is performed on a line and/or a column of the original kernel matrix, so that a quantity of lines of the constructed kernel matrix is less than that of the original kernel matrix, and a quantity of columns of the constructed kernel matrix is less than that of the original kernel matrix.
The following describes the preceding two solutions in detail.
1330. Perform polar encoding on the information bits based on the target kernel matrix.
In at least one embodiment, in response to the polar encoding being performed, a plurality of kernel matrices are first constructed, and an appropriate kernel matrix is selected for encoding based on the constructed kernel matrices and an actual communication status. In this manner, in response to the kernel matrix being constructed or selected, performance and decoding complexity are comprehensively considered, so that decoding complexity is reduced by selecting the appropriate kernel matrix used during encoding.
For example, in actual communication, one or more kernel matrices are constructed by adjusting the original kernel matrix, and an appropriate target kernel matrix is selected from the one or more kernel matrices. Then, a data vector is multiplied by the Kronecker product of the corresponding constructed target kernel matrix to implement encoding. Then, data is transmitted through a channel after passing through a rate matching module and a modulation module.
A specific solution on the encoder is described in detail in the following.
1410. Obtain a to-be-decoded sequence.
1420. Decode the to-be-decoded sequence based on a plurality of trellises, and reuse an intermediate result obtained from decoding performed based on a first trellis in the plurality of trellises in response to decoding being performed based on a second trellis in the plurality of trellises, where the first trellis is used for a tth stage of decoding the to-be-decoded sequence, the second trellis is used for a (t+i)th stage of decoding the to-be-decoded sequence, t is an integer greater than or equal to 0, and i is an integer greater than or equal to 1.
For example, the first trellis corresponds to the tth decoding stage, and the second trellis corresponds to the (t+i)th decoding stage. In other words, in the tth decoding stage, decoding is performed based on the first trellis, and in the (t+i)th decoding stage, decoding is performed based on the second trellis. For example, in the tth decoding stage, decoding is performed based on the first trellis, and in the (t+1)th decoding stage, decoding is performed based on the second trellis.
In addition, a result obtained through processing in each decoding stage is denoted as an intermediate result. For example, in the tth decoding stage, in response to decoding being performed based on the first trellis, an obtained result is denoted as the intermediate result obtained through decoding performed based on the first trellis. The intermediate result includes, for example, a decoding path obtained through decoding performed in the tth decoding stage and a corresponding path measure value. In the following embodiments, for brevity, a composite branch table (CBT) is used for representation. In other words, content stored in the CBT includes the corresponding path and the corresponding path value during the decoding. Each decoding stage corresponds to one CBT.
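As a toy illustration of the reuse described above, the following sketch caches the CBT (decoding paths and path measure values) produced in one decoding stage under a key that identifies the sub-trellis, so that a later stage that uses the same sub-trellis returns the stored result instead of recomputing it; the cache key and the CBT contents shown are assumptions for illustration.

```python
# A toy sketch of reusing intermediate results (CBTs) across decoding stages.
# The key that identifies a sub-trellis is an assumption for illustration; any
# canonical description of the trellis (for example, the shortened/punctured
# code it represents) could serve as the cache key.
class CBTCache:
    def __init__(self):
        self._store = {}          # trellis key -> (paths, path measure values)

    def get_or_compute(self, trellis_key, compute_fn):
        """Return the cached CBT in response to the same sub-trellis having been
        processed in an earlier stage; otherwise compute and remember it."""
        if trellis_key in self._store:
            return self._store[trellis_key]          # reuse: no recomputation
        result = compute_fn()
        self._store[trellis_key] = result
        return result

# Usage sketch: stage t fills the cache, stage t+i reuses it.
cache = CBTCache()
cbt_t = cache.get_or_compute(("sub-trellis", 0, 4), lambda: (["path0", "path1"], [0.7, 1.3]))
cbt_t_plus_i = cache.get_or_compute(("sub-trellis", 0, 4), lambda: (["recomputed"], [9.9]))
assert cbt_t_plus_i is cbt_t      # the stage-(t+i) call reuses the stage-t result
```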
In at least one embodiment, in response to the to-be-decoded sequence being decoded according to a decoding algorithm based on the trellis, intermediate results obtained in different decoding stages are reused. In other words, in response to calculation being performed on different trellises, some calculation steps are the same. Therefore, reusing is performed a plurality of times, so that decoding complexity is greatly reduced, and decoding speed is improved.
In at least one embodiment, the first trellis and the second trellis are mainly used as examples for description. For each trellis in the plurality of sub-trellises, or in each decoding stage, a determination is made whether an intermediate result of decoding in another decoding stage is reused in the decoding stage.
Optionally, each of the plurality of trellises is split into a plurality of sub-trellises.
In at least one embodiment, in response to the to-be-decoded sequence being decoded according to a decoding algorithm based on the trellis, the trellis is first split, and the decoding based on a complete trellis is split into decoding based on a sub-trellis, thereby reducing decoding complexity.
For example, the decoding method provided in at least one embodiment is also denoted as recursive trellis decoding or recursive trellis splitting decoding.
For example, in actual communication, after rate de-matching and demodulation are performed, decoding is performed by using the recursive trellis splitting algorithm proposed in at least one embodiment, so that decoding complexity is low.
A specific solution of the decoder (that is, a recursive trellis splitting algorithm) is described in detail in the following.
The following separately describes the solutions in embodiments described herein from the encoder and the decoder.
Encoding Side
Optionally, in at least one embodiment, a plurality of kernel matrices is constructed by using Solution 1 and/or Solution 2. During the kernel matrix construction or selection, the appropriate kernel matrix is selected for encoding based on an actual situation.
Solution 1. A plurality of kernel matrices are constructed by adjusting a PD of an original kernel matrix.
Solution 2. A plurality of kernel matrices whose dimensions are different from a dimension of the original kernel matrix are constructed by adjusting the dimension of the original kernel matrix.
The two solutions are described in the following.
Solution 1. A plurality of kernel matrices are constructed by adjusting a PD of an original kernel matrix.
Optionally, a plurality of kernel matrices are constructed based on an existing PDP by adjusting the PDP.
Optionally, a given PDP is adjusted, and depth-first search is performed based on the adjusted PDP.
Another algorithm that is similar to the depth-first search algorithm or that implements the same function is also applicable to at least one embodiment. For ease of understanding, the depth-first search algorithm is mainly used as an example for description in the following.
In at least one embodiment, in response to the depth-first search being performed, some constraint conditions are designed to reduce search space and a quantity of search times. The constraint conditions, for example, are considered from one or more of the following aspects.
Aspect 1. A constraint condition about a candidate line set M is designed. In other words, the candidate line set meeting a specific condition is limited.
In at least one embodiment, the candidate line set M is to meet: M[i] = {v_0^{l−1} ∈ F_2^l | wt(v_0^{l−1}) = D_i}.
Aspect 2. A constraint condition on a selected line is designed. In other words, the selected line meeting a specific condition is limited.
A condition is that in response to the ith line being selected from the bottom in the candidate line set M, a Hamming weight of a selected line v is greater than or equal to Di, that is, wt(v)≥Di.
In another condition, the line v is selected from the candidate line set M. In response to c∈C
Another condition is that a line that belongs to a same coset cannot be selected, or a line that belongs to a same coset is not selected again.
The foregoing constraint conditions listed for the candidate line set M and the selected line are merely examples for description, and are not limited thereto. For example, in response to another constraint condition for the candidate line set M or the selected line being a variant form of the foregoing constraint conditions, the constraint condition is also applicable to at least one embodiment.
For understanding, an algorithm procedure applicable to Solution 1 is described in the following. An algorithm procedure includes the following steps.
Step 1. Adjust a PDP to obtain one or more new PDPs.
In step 1, a given PDP is manually adjusted. For example, during adjustment, the PDP is made to be close to a PDP of an Arikan matrix.
For example, for a 16×16 kernel K16, the corresponding PDP is [1,2,2,2,2,4,4,4,4,6,6,8,8,8,8,16], E(K16)=0.51828; μ(K16)=3.346, and ϕ(K16)=472.
As described above, in at least one embodiment, E(K) indicates a polarization exponent. The polarization exponent is used to measure convergence speed of the decoding performance, and a larger polarization exponent indicates a faster convergence speed, so that better decoding performance is obtained even in response to the code length being short. μ(K) indicates a scaling exponent, and the scaling exponent indicates kernel matrix performance. ϕ(K) indicates complexity, that is, processing complexity of the kernel
The PDP is manually adjusted to obtain a PDP of a kernel K′16: [1,2,2,4,2,2,4,4,6,6,8,8,4,8,8,16], E(K′16)=0.51828, μ(K′16)=3.45, and ϕ(K′16)=183.
For another example, a PDP corresponding to a 32×32 kernel K′32 is [1,2,2,2,2,4,4,4,6,2,4,4,6,6,8,4,8,4,8,8,8,12,12,12,12,12,16,16,16,16,16,32], E(K′32)=0.529, μ(K′32)=3.207, and ϕ(K′32)=6770.
The PDP is manually adjusted to obtain a PDP of a kernel K32: [1,2,2,4,2,4,2,4,6,8,2,4,6,8,4,6,8,12,4,8,12,16,4,8,12,16,8,16,8,16,16,32], E(K32)=0.5219, μ(K32)=3.421, and ϕ(K32)=532.
The foregoing enumerated constructed new PDPs are merely examples for description, and a larger quantity of PDPs is further constructed.
Step 2. For each new PDP, perform the kernel construction method based on depth-first search.
One or more new PDPs are obtained by manually adjusting the given PDP. On the basis of these new PDPs, the following kernel matrix construction method based on the depth-first search is performed for each new PDP.
(1) Select a given PDP: D and a line vector candidate set M (or candidate line set M).
(2) Construct the kernel matrix from the last but one line
(3) Construct the next line to the last.
(4) Construct the third line from the last, which is to meet D_{l−3}=d_H(K[l−3], ⟨K[l−2], K[l−1]⟩).
(5) Continue the preceding steps. Similarly, the ith line from the last is to meet:
In a process of constructing the kernel matrix, in response to a line that meets the foregoing conditions not being found in the candidate line set M, the process is rolled back to a previous step, and a new line is selected from the candidate line set M.
In the process of constructing the kernel matrix, in response to any one of the following conditions being met, the algorithm is stopped or the kernel construction is stopped: an appropriate kernel matrix is found; or all lines in the candidate sets are searched and no line that meets the conditions is found, so that the algorithm cannot be continued.
A plurality of constructed kernel matrices are obtained by performing the foregoing steps. Next, a best and most appropriate kernel matrix is selected based on comparing some parameters of these kernel matrices, such as a polarization exponent (for example, E(K)), a scaling exponent (for example, μ(K)), and decoding complexity (for example, ϕ(K)) of decoding a codeword encoded based on these kernel matrices. In addition, the algorithm of this method is simple, and the calculation is simple and easy to implement.
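For illustration only, the following brute-force sketch follows the foregoing steps: lines are chosen from the bottom up, each candidate line is required to have the target Hamming weight and the target partial distance to the lines already constructed below it, and the search rolls back when no candidate fits. This exhaustive version is only workable for tiny l; the constraint conditions discussed above are what make larger searches practical.

```python
import numpy as np
from itertools import product, combinations

def dh_to_code(v, rows):
    """Hamming distance from v to the linear code generated by the given rows."""
    best = int(v.sum())
    for r in range(1, len(rows) + 1):
        for combo in combinations(rows, r):
            c = np.bitwise_xor.reduce(np.array(combo))
            best = min(best, int(np.sum((v + c) % 2)))
    return best

def build_kernel_dfs(pdp, l):
    """Depth-first construction of an l x l kernel from a target PDP (a sketch).
    Lines are chosen from the bottom up; line i must have partial distance pdp[i]
    to the code generated by the lines already constructed below it."""
    rows = []                                   # rows[0] is the last line, etc.
    def dfs(i):
        if i < 0:
            return True
        # candidate set: lines whose Hamming weight equals the target D_i
        for bits in product((0, 1), repeat=l):
            v = np.array(bits, dtype=np.uint8)
            if int(v.sum()) != pdp[i]:
                continue
            if rows and dh_to_code(v, rows) != pdp[i]:
                continue                        # partial-distance constraint violated
            rows.append(v)
            if dfs(i - 1):
                return True
            rows.pop()                          # roll back and try another candidate
        return False
    if dfs(l - 1):
        return np.array(rows[::-1])
    return None

# Tiny example: the target PDP [1, 2] recovers a 2 x 2 kernel with that profile.
print(build_kernel_dfs([1, 2], 2))
```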
Further, one or more of the following constraint conditions, for example, the constraint conditions in the foregoing Aspect 1 and Aspect 2, are further added to reduce a quantity of attempts and duration.
Condition 1. For any kernel, the candidate line set M meets: M[i] = {v_0^{l−1} ∈ F_2^l | wt(v_0^{l−1}) = D_i}.
Condition 2. In response to the ith line being selected from the bottom in the candidate line set M, a Hamming weight of a selected line v is greater than or equal to D_i, that is, wt(v)≥D_i.
Condition 3. In response to the line v being selected from the candidate line set M and partial distance calculation being performed on the line v and the constructed line, in response to c∈C
For example, assuming that a 16×16 kernel
A line whose partial distance is 6 is expected to be found. In this case, v=[1111110000000000] is first selected from the candidate line set M, and c=v+
the calculation does not need to be continued, and the current v is directly discarded.
Condition 4. A line that belongs to a same coset is not selected again.
In response to a line v being selected from the candidate line set M to construct the kernel, d_H(v, C) is calculated. Next, d_H(v′, C) is to be calculated. In response to v′ and v belonging to the same coset of C, d_H(v′, C)=d_H(v, C). In other words, the line that belongs to the same coset is not selected again.
The foregoing Condition 1 to Condition 4 are merely examples for description, and are not limited thereto. For example, for the candidate line set M or for the selected line, another constraint condition is further designed, so that a quantity of attempts and duration are reduced.
In addition, optionally, in at least one embodiment, in response to the PDP being adjusted, the PDP is adjusted in a direction in which the PDP is close to the PDP of the Arikan matrix. Adjusting the PDP in a direction close to the PDP of the Arikan matrix is merely one adjustment manner. This is not limited herein. For example, a PDP of the constructed best kernel matrix is not necessarily the PDP that is closest to that of the Arikan kernel matrix. For another example, in response to the best and most appropriate kernel matrix being selected, the kernel matrix whose PDP is closest to the PDP of the Arikan kernel matrix is not necessarily selected.
The foregoing describes Solution 1 in detail, and a plurality of kernel matrices is constructed by using Solution 1. Solution 2 is described in the following.
Solution 2. A plurality of kernel matrices whose dimensions are different from a dimension of the original kernel matrix are constructed by adjusting the dimension of the original kernel matrix.
Optionally, a plurality of kernel matrices with small dimensions are constructed based on an existing kernel matrix with a large dimension. Encoding is performed based on the kernel matrix whose dimension is small, so that lower decoding complexity is provided while the same performance is ensured.
In at least one embodiment, the plurality of kernel matrices with small dimensions is constructed based on the existing kernel matrix with a large dimension by using a shortening method. Another algorithm that constructs a kernel matrix with a small dimension based on an existing kernel matrix with a large dimension, or an algorithm that implements a same function, is also applicable to at least one embodiment. For ease of understanding, the shortening algorithm is mainly used as an example for description in the following.
For understanding, an algorithm procedure applicable to Solution 2 is described in the following. The algorithm procedure includes the following steps.
Step 1. Consider an l×l kernel, that is, determine an original kernel matrix.
Step 2. Select a column j∈[l].
Step 3. Add the ith line to all other lines that have a 1 in the jth column, where i is the index of the line corresponding to the last 1 in the jth column.
Step 4. Delete the ith line and the jth column to obtain an (l−1)×(l−1) kernel matrix.
The kernel matrix constructed in the foregoing manner has lower decoding complexity, and the (l−t)×(l−t) kernel is obtained by repeating the foregoing manner t times.
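As a non-limiting illustration of Step 1 to Step 4, the following Python sketch performs one shortening step on an l×l kernel represented as a list of 0/1 rows over F2; the function name shorten_kernel and the column argument are assumptions of this sketch.

```python
def shorten_kernel(K, j):
    """One shortening step (Step 2 to Step 4): returns an (l-1) x (l-1) kernel
    derived from the l x l kernel K, given as a list of 0/1 rows."""
    l = len(K)
    ones = [r for r in range(l) if K[r][j] == 1]
    if not ones:
        raise ValueError("the selected column has no 1")
    i = ones[-1]                                  # line holding the last 1 in column j
    K2 = [row[:] for row in K]
    for r in ones[:-1]:                           # Step 3: add line i to the other lines with a 1
        K2[r] = [a ^ b for a, b in zip(K2[r], K2[i])]
    # Step 4: delete line i and column j
    return [[K2[r][c] for c in range(l) if c != j]
            for r in range(l) if r != i]

# Repeating the step t times yields an (l - t) x (l - t) kernel, for example K24 -> K20.
```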
For ease of understanding, the following uses a kernel K24 as an example to describe a specific example with reference to
In at least one embodiment, an original kernel matrix is the kernel K24 shown in
(1) Select the 15th column. The last line (that is, line 23) in this column displays a value 1. As shown in
(2) Select the 14th column. The 13th line and the 14th line in this column display the value 1. As shown in
(3) Select the 13th column. The 16th line and the 18th line in this column display the value 1. As shown in
The 13th column and the 18th line are deleted to obtain K21. In
(4) Select the 12th column. The 5th line in this column displays the value 1. As shown in
Through the foregoing processing, a kernel matrix K20 shown in
Alternatively, a kernel matrix constructed based on the kernel K24 includes one or more of the following: the kernel K23 shown in
The foregoing is merely an example for ease of understanding, and an intermediate processing process also has other variations. This is not limited herein.
In response to a same decoding algorithm being used, decoding complexity of the kernel matrix obtained by using the foregoing method is lower, for example, ϕ(K20)<ϕ(K24). Based on this, after comprehensive consideration of, for example, performance and complexity, the kernel matrix is selected based on an actual situation.
The foregoing describes the kernel matrix construction method provided in at least one embodiment with reference to Solution 1 and Solution 2. Solution 1 and Solution 2 is used separately, or is used in combination. This is not limited herein.
As an example, the following lists forms of several kernel matrices constructed by using at least one embodiment.
For example, a kernel matrix whose dimension is 20 is shown in
In still another example, a kernel matrix whose dimension is 21 is shown in
In still another example, a kernel matrix whose dimension is 24 is shown in
In still another example, a kernel matrix whose dimension is 25 is shown in
In still another example, a kernel matrix whose dimension is 28 is shown in
In still another example, a kernel matrix whose dimension is 32 is shown in
The kernel matrices listed in
Based on the encoding method provided in at least one embodiment, in response to polar encoding being used, a plurality of kernel matrices is first constructed, for example, the plurality of kernel matrices are constructed by using Solution 1 and/or Solution 2. Based on the constructed kernel matrices, an appropriate kernel matrix is selected for encoding based on an actual communication situation. In this manner, in response to the kernel matrix being constructed or selected, performance and decoding complexity are comprehensively considered, so that decoding complexity is reduced by selecting the appropriate kernel matrix used during encoding.
Decoding Side
As described above, in response to decoding being performed based on trellis, a trellis diagram is first split, and the decoding based on a complete trellis is split into decoding based on a sub-trellis. Then, an optimal path is selected by combining the intermediate results of decoding based on each small trellis diagram.
For example, a plurality of combinations are obtained after all paths in a trellis of a smaller level are spliced, and an optimal path and a path value of a trellis of a larger level are obtained by selecting a path with a largest score in these combinations.
For ease of understanding, the following shows an algorithm procedure. The algorithm procedure includes content in the following several aspects.
Aspect 1. Trellis splitting of a linear block code.
In at least one embodiment, multi-level splitting is performed on a trellis diagram of each stage t during polar code decoding.
A linear code C (8, 4) is given, and a generator matrix of the linear code C is
Let Ch,h′ be a subcode of the linear code C in which non-zero bits appear only at locations h≤i<h′; let ph,h′(C) be a linear code obtained by performing a puncturing operation on the linear code C at a location except h≤i<h′; and let sh,h′(C)=ph,h′(Ch,h′) be a code obtained by performing a shortening operation on the linear code C at a location except h≤i<h′. For example, in response to 0≤i<4, that is, h=0 and h′=4, the following is obtained:
On the trellis diagram, the path from a moment h to a moment h′ corresponds to the coset ph,h′(C)/sh,h′(C).
As shown in
① {(0000), (1111)}
② {(0011), (1100)}
③ {(1010), (0101)}
④ {(0110), (1001)}
Based on a correspondence between the left half part of the left figure and the left half part of the right figure in
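To make the definitions of ph,h′(C) and sh,h′(C) concrete, the following Python sketch derives both from a generator matrix by brute-force codeword enumeration; the function names and the enumeration are assumptions of this sketch and are only practical for very small codes. Applied to the (8, 4) code above with h = 0 and h′ = 4, it would reproduce the sets whose quotient gives the cosets listed above.

```python
from itertools import product

def codewords(G):
    """All codewords generated by the rows of G over F2 (brute force, small codes only)."""
    n = len(G[0])
    words = set()
    for coeffs in product([0, 1], repeat=len(G)):
        c = [0] * n
        for coef, row in zip(coeffs, G):
            if coef:
                c = [a ^ b for a, b in zip(c, row)]
        words.add(tuple(c))
    return words

def punctured(C, h, h2):
    """p_{h,h'}(C): keep only the positions h <= i < h' of every codeword."""
    return {c[h:h2] for c in C}

def shortened(C, h, h2):
    """s_{h,h'}(C) = p_{h,h'}(C_{h,h'}): restrict to codewords that are zero outside
    h <= i < h', then keep the positions h <= i < h'."""
    n = len(next(iter(C)))
    sub = {c for c in C if all(c[i] == 0 for i in range(n) if not (h <= i < h2))}
    return {c[h:h2] for c in sub}
```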
Aspect 2. Trellis decoding of a linear block code.
For each coset D∈ph,h′(C)/sh,h′(C), the most likely entry l(D) and its corresponding score are to be found. For this, a composite branch table (CBT) is defined to store the two values. A score corresponding to an entry is represented by the correlation discrepancy in the foregoing term explanation, and is, for example, denoted as E(D). As an example, CBT(x, y) = (l(D), E(D)). In response to ML decoding being performed on an (n, k) code, only one entry is included in p0,n(C)/s0,n(C). Therefore, CBT(0, n) stores a corresponding path and a corresponding path value in response to the ML decoding being used.
The CBT is merely a name for ease of description, and the name does not limit the protection scope of embodiments described herein. In addition, that the CBT includes only two entries is not limited in at least one embodiment. In actual communication, the CBT further includes another entry. This is not limited herein.
The correlation discrepancy herein indicates a score corresponding to an entry (that is, the path l(D)), and another solution that indicates a score (or a path value) corresponding to the entry is also applicable to at least one embodiment.
Aspect 3. CBT construction.
CBTx,y is constructed, where y−x≥2. Then, for a specific z (x<z<y), CBTx,z and CBTz,y are constructed. For D∈px,y(C)/sx,y(C), all cosets D′∈px,z(C)/sx,z(C) and D″∈pz,y(C)/sz,y(C) are enumerated.
E(D) = max(E(D′) + E(D″)) is obtained, where the maximization is over the pairs (D′, D″) whose combination lies in the coset D, and l(D) is obtained as the concatenation of the corresponding l(D′) and l(D″).
A path of the [x, y] segment is truncated into two parts. In this case, for the construction of CBTx,y, each D∈px,y(C)/sx,y(C) does not need to be processed by direct enumeration; instead, the CBTx,y is obtained by using the foregoing truncation processing manner. This reduces complexity.
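A minimal sketch of the combination step described above, assuming each CBT is stored as a mapping from a coset index to its (path, score) entry and that the admissible pairings of left and right cosets are supplied by the caller; the names combine_cbt and splitting are assumptions of this sketch.

```python
def combine_cbt(cbt_left, cbt_right, splitting):
    """Build the CBT of a segment [x, y) from the CBTs of [x, z) and [z, y).

    cbt_left, cbt_right: dict mapping a coset index to (path, score)
    splitting: dict mapping each coset D of [x, y) to the list of
               (D_left, D_right) index pairs whose concatenation lies in D
    Returns, for every coset D, the most likely entry l(D) and its score E(D):
    the pair maximizing E(D_left) + E(D_right).
    """
    cbt = {}
    for D, pairs in splitting.items():
        best_path, best_score = None, float("-inf")
        for a, b in pairs:
            path_l, score_l = cbt_left[a]
            path_r, score_r = cbt_right[b]
            if score_l + score_r > best_score:
                best_score = score_l + score_r
                best_path = path_l + path_r        # concatenate the two half-paths
        cbt[D] = (best_path, best_score)
    return cbt
```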
For ease of understanding, as an example, a process of decoding an (8, 4) Hamming code by using a trellis diagram splitting method is described in
For ε20, ε20=ε10+τ(S2,0)=−1.
For ε21, ε21=ε10+τ(S2,1)=−3.
For ε22, ε22=ε11+τ(S2,0)=0.
For ε23, ε23=ε11+τ(S2,1)=−2.
Similarly, [2, 3], [4, 5], [6, 7], and their corresponding sub-trellis diagrams are represented, and the various paths and path values are calculated by using a similar method.
Next, a [0, 3] segment is obtained by combining the corresponding paths of the [0, 1] segment and the [2, 3] segment. For example, a path with the largest path score is selected as a path of the [0, 3] segment, for example, a part enclosed by dashed lines in
In at least one embodiment, an appropriate location of the split point is selected based on an actual situation. The location of the split point affects decoding complexity. A 24×24 kernel is used as an example. For a selection of z (x<z<y):
For example, splitting is evenly performed, to be specific, z=(x+y)/2. In this case, 715 summations and 449 comparisons are used.
In still another example, splitting is performed according to an optimization algorithm. For example, [0, 24) is divided into [0, 16) and [16, 24), and each part is further evenly split. In this case, 250 summations and 124 comparisons are used.
Therefore, in actual communication, a location of the split point is selected based on the complexity, for example, appropriate split points are selected, so as to reduce decoding complexity.
Aspect 4. A nested code and a nested generator matrix.
According to the foregoing definition, sx,y(C)⊆px,y(C), and a generator matrix of px,y(C) is represented as:
A generator matrix of sx,y(C) is represented as:
Aspect 5. Recursive trellis decoding.
There is a one-to-one correspondence between a coset D∈px,y(C)/sx,y(C) and a vector vG′x,y, where G′x,y=(Gx,y(10) Gx,y(11)) is a k′x,y×(y−x) matrix. Let CBTx,y[v].l:=l(D), and CBTx,y[v].e:=E(D), and then:
and v∈F2k″x,y.
a and b respectively indicate sequence numbers (which are represented in binary) of corresponding entries in the coset D′∈px,z(C)/sx,z(C) and the coset D″∈pz,y(C)/sz,y(C). In response to w and v indicating a sequence number corresponding to a path of the [x, y] segment, then:
In this case, a and b are obtained by using the following mapping relationship:
a′ and b′ are unimportant parameters. According to the foregoing formula, the following is obtained:
and v∈F2k″x,y.
Complexity of the foregoing calculation is approximately O(2k′x,y+k′x,y). Complexity of this method is related to a splitting policy, that is, how to select a split point z from x, y affects decoding complexity.
Aspect 6. A kernel processing method based on a recursive trellis.
As an example, let u0t−1=0. From the foregoing, ε(c(0),y0l-1) and ε(c(1),y0l-1) are entries of CBT0,l.
a=(w v)Ĝx,y, and b=(w v)G̃x,y. Sequence numbers of CBT entries are represented by v, a, and b. k′=dim px,y(C), and k″=dim sx,y(C).
Optionally, decoding complexity is further simplified through processing in the decoding.
Manner 1. Reuse an intermediate decoding result.
In at least one embodiment, whether an intermediate result obtained from decoding performed based on a trellis is reused by decoding performed based on another trellis is determined based on a correlation between trellises.
In an example, in response to the correlation between the trellises being large, for example, in response to trellises being similar or the same, or in response to a specific condition being met, a decoding result obtained through decoding performed based on one trellis is directly used during the decoding performed based on another trellis. For example, a combination CBT of all parts in the polar code decoding stage t is the same as a CBT in stage t+1. In this case, no calculation is to be performed again in the stage t+1.
Optionally, whether a result of each decoding stage is reused is determined based on a code obtained by performing a shortening operation on the linear code C in different decoding stages and/or a code obtained by performing a puncturing operation on the linear code C in different decoding stages. In other words, a correlation between trellises is determined based on a code obtained by performing a shortening operation on the linear code C in different decoding stages and/or a code obtained by performing a puncturing operation on the linear code C in different decoding stages, so as to determine whether a result of each decoding stage is reused.
In at least one embodiment, whether the intermediate decoding result is reused is determined by checking a relationship between px,y(C(t+1)) and px,y(C(t)) and/or a relationship between sx,y(C(t+1)) and sx,y(C(t)).
Case 1. In response to sx,y(C(t+1))=sx,y(C(t)) and px,y(C(t+1))⊂px,y(C(t)), the intermediate decoding result is reused. In other words, in response to sx,y(C(t+1))=sx,y(C(t)) and px,y(C(t+1))⊂px,y(C(t)), the CBT in the stage t+1 is a subset of the CBT in the stage t, and is directly obtained without recalculation.
Case 2. In response to sx,y(C(t+1))⊂sx,y(C(t)), the intermediate decoding result is reused. For example, a maximization operation result in the intermediate process is saved to reduce the calculation amount in the stage t+1.
Manner 1 is used in each stage of polar code decoding. For example, in each stage of polar code decoding, whether the foregoing two cases are met is determined.
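A minimal sketch of Manner 1, assuming px,y(C(t)) and sx,y(C(t)) are available as frozensets of codewords so that the two cases can be checked by set comparison, and assuming each CBT is keyed by its coset (also a frozenset); the cache layout, the function name, and the omission of the Case 2 bookkeeping are assumptions of this sketch.

```python
def cbt_for_stage(cache, seg, p_code, s_code, compute_cbt):
    """Manner 1: decide whether the CBT of segment 'seg' is reused at the current stage.

    cache[seg] holds (p_code, s_code, cbt) from the previous stage, where 'cbt' maps
    each coset (a frozenset of codewords of p/s) to its (path, score) entry.
    p_code, s_code: frozensets of codewords of p_{x,y}(C) and s_{x,y}(C) at this stage.
    compute_cbt: zero-argument fallback that computes the CBT from scratch.
    """
    prev = cache.get(seg)
    if prev is not None:
        prev_p, prev_s, prev_cbt = prev
        if s_code == prev_s and p_code < prev_p:
            # Case 1: the new CBT is a subset of the previous one; read it out directly.
            cbt = {D: entry for D, entry in prev_cbt.items() if D <= p_code}
            cache[seg] = (p_code, s_code, cbt)
            return cbt
        # Case 2 (s_{x,y} shrinks) would instead reuse saved maximization results; omitted here.
    cbt = compute_cbt()
    cache[seg] = (p_code, s_code, cbt)
    return cbt
```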
Manner 2. Simplify calculation workload.
In at least one embodiment, a same value is subtracted from all entries of a CBT, and the processing does not affect a final result. For understanding, specific examples are listed in the following.
In an example, for k′=2, k″=1, consider calculating:
CBTx,y[0]·e = max(CBTx,z[0]·e + CBTz,y[0]·e, CBTx,z[1]·e + CBTz,y[1]·e)
CBTx,y[1]·e = max(CBTx,z[0]·e + CBTz,y[1]·e, CBTx,z[1]·e + CBTz,y[0]·e)
Thus:
Δ = CBTx,y[0]·e − CBTx,y[1]·e = Lx,z ⊞ Lz,y,
where Lp,q = CBTp,q[0]·e − CBTp,q[1]·e, and a ⊞ b = sgn(a)sgn(b)min(|a|, |b|).
For example, CBTx,y[0]·e = Δ and CBTx,y[1]·e = 0 are set. In this case, only three operations are used. Compared with the direct method (which uses six operations), the calculation amount is considerably reduced, and decoding duration is reduced.
For another example, in response to CBTp,q[1]·e = 0, the calculation amount is further reduced, for example, two subtractions are eliminated.
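As an illustration of the k′ = 2, k″ = 1 simplification, the following Python sketch computes the normalized pair [Δ, 0] directly; the name boxplus for the a ⊞ b operation defined above is an assumption of this sketch.

```python
def sign(x):
    return -1.0 if x < 0 else 1.0

def boxplus(a, b):
    """The operation written above as a ⊞ b = sgn(a)·sgn(b)·min(|a|, |b|)."""
    return sign(a) * sign(b) * min(abs(a), abs(b))

def combine_k2_k1(cbt_xz, cbt_zy):
    """Simplified combination for k' = 2, k'' = 1.

    cbt_xz, cbt_zy: two-entry score lists for the segments [x, z) and [z, y).
    Instead of four additions and two max operations, form the differences
    Lx,z and Lz,y, combine them with the boxplus operation, and store the
    normalized pair [Delta, 0]; subtracting the same value from both entries
    does not change the decision.
    """
    L_xz = cbt_xz[0] - cbt_xz[1]
    L_zy = cbt_zy[0] - cbt_zy[1]
    delta = boxplus(L_xz, L_zy)
    return [delta, 0.0]              # three operations instead of six
```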
In yet another example, for k′=1, k″=0, consider calculating:
CBTx,y[v]·e = CBTx,z[v]·e + CBTz,y[v]·e, v ∈ {0, 1}.
In response to it being guaranteed that, for any channel vector, CBTx,z[1]·e = CBTz,y[1]·e = 0, CBTx,y[1]·e = 0 is directly set. In this case, only one summation is used.
In kernel processing, the manner of reducing the calculation amount through some calculation processing is applicable to at least one embodiment.
In response to many different kernels being processed, in the foregoing Manner 2, the calculation amount is greatly reduced, and the decoding duration is reduced.
In the foregoing Aspect 6, u0t−1=0 is mainly used as an example for description. In response to u0t−1≠0, the formula CBTx,y[v]·e is to be replaced with the following formula:
δ′ and δ″ indicate offsets that depend on u0t−1.
The foregoing describes the solutions in the decoder in detail with reference to Aspect 1 to the Aspect 6. For understanding, the following uses specific examples for description.
Example 1. Kernel processing on an Arikan matrix with two iterations.
Let
p0,4(C(0)) is a (4, 4) code and s0,4(C(0)) is a (4, 3) code. Through calculation, leaders of a coset p0,4(C(0))/s0,4(C(0)) are (0,0,0,0) and (1,0,0,0).
p0,2(C(0)) and p2,4(C(0)) are (2, 2) codes, and s0,2(C(0)) and s2,4(C(0)) are (2, 1) codes. A state at a moment 2 is: S2(0,0,0,0) = {(0,0)} and S2(1,0,0,0) = {(1,0)}. Therefore, the following formula is obtained:
CBT0,2(0) = [max(e0,0 + e0,1, e1,0 + e1,1), max(e1,0 + e0,1, e0,0 + e1,1)]
CBT2,4(0) = [max(e0,2 + e0,3, e1,2 + e1,3), max(e1,2 + e0,3, e0,2 + e1,3)]
CBT0,4(0) = [max(CBT0,2(0)[0] + CBT2,4(0)[0], CBT0,2(0)[1] + CBT2,4(0)[1]), max(CBT0,2(0)[0] + CBT2,4(0)[1], CBT0,2(0)[1] + CBT2,4(0)[0])],
where ei,j = τ(S(yj)i).
px,x+2(C(1)) = px,x+2(C(0)) and sx,x+2(C(1)) = sx,x+2(C(0)) for x ∈ {0, 2}; p0,4(C(1)) is a (4, 3) code and s0,4(C(1)) is a (4, 2) code. Vectors (0,0,0,0) and (1,0,1,0) in the cosets of the (4, 3) code and the (4, 2) code are considered. Assuming that u0 = 0, the following formula is obtained:
CBT0,4(1) = [max(CBT0,2(0)[0] + CBT2,4(0)[0], CBT0,2(0)[1] + CBT2,4(0)[1])].
p0,2(C(2)) and p2,4(C(2)) are (2, 1) codes; s0,2(C(2)) and s2,4(C(2)) are (2, 0) codes; p0,4(C(2)) is a (4, 2) code, and s0,4(C(2)) is a (4, 1) code. Vectors (0,0,0,0) and (1,1,0,0) in the cosets are considered. Therefore, S2(0,0,0,0) = {(0,0)} and S2(1,1,0,0) = {(1,1)}. Therefore, only the cosets (0, 0) and (1, 1) of the (2, 0) code are to be considered. Assuming u0 = u1 = 0, the following is obtained:
CBT0,2(2) = [e0,0 + e0,1, e1,0 + e1,1], and CBT2,4(2) = [e0,2 + e0,3, e1,2 + e1,3]
CBT0,4(2) = [max(CBT0,2(2)[0] + CBT2,4(2)[0], CBT0,2(2)[1] + CBT2,4(2)[1]), max(CBT0,2(2)[0] + CBT2,4(2)[1], CBT0,2(2)[1] + CBT2,4(2)[0])].
For C(3), K(3) = (1 1 1 1).
p0,2(C(3)) = p0,2(C(2)) = p2,4(C(2)) = p2,4(C(3)) is a (2, 1) code, s0,2(C(2)) and s2,4(C(2)) are (2, 0) codes, p0,4(C(3)) is a (4, 1) code, and s0,4(C(3)) is a (4, 0) code. The following is obtained:
CBT0,4(3) = [CBT0,2(2)[0] + CBT2,4(2)[0], CBT0,2(2)[1] + CBT2,4(2)[1]].
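For concreteness, the CBT recursions of Example 1 can be evaluated numerically as in the following Python sketch; the metric array e[i][j] follows the ei,j notation above, and the function name and return layout are assumptions of this sketch.

```python
def example1_cbts(e):
    """Evaluate the CBT formulas of Example 1 for metrics e[i][j] (bit value i, position j)."""
    # Stage 0
    cbt02_0 = [max(e[0][0] + e[0][1], e[1][0] + e[1][1]),
               max(e[1][0] + e[0][1], e[0][0] + e[1][1])]
    cbt24_0 = [max(e[0][2] + e[0][3], e[1][2] + e[1][3]),
               max(e[1][2] + e[0][3], e[0][2] + e[1][3])]
    cbt04_0 = [max(cbt02_0[0] + cbt24_0[0], cbt02_0[1] + cbt24_0[1]),
               max(cbt02_0[0] + cbt24_0[1], cbt02_0[1] + cbt24_0[0])]
    # Stage 2 (u0 = u1 = 0): the (2, 0) shortenings leave plain sums, no max is needed
    cbt02_2 = [e[0][0] + e[0][1], e[1][0] + e[1][1]]
    cbt24_2 = [e[0][2] + e[0][3], e[1][2] + e[1][3]]
    cbt04_2 = [max(cbt02_2[0] + cbt24_2[0], cbt02_2[1] + cbt24_2[1]),
               max(cbt02_2[0] + cbt24_2[1], cbt02_2[1] + cbt24_2[0])]
    # Stage 3: the stage-2 entries are reused and combined without a max
    cbt04_3 = [cbt02_2[0] + cbt24_2[0], cbt02_2[1] + cbt24_2[1]]
    return cbt04_0, cbt04_2, cbt04_3
```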
The kernel processing on an Arikan matrix with two iterations is described above as an example. The foregoing specific kernel processing is merely an example for description, and is not strictly limited thereto.
Example 2. Kernel processing on a high-dimensional kernel matrix.
SC decoding is used as an example. The decoding method based on a recursive trellis is applied to each stage of the SC decoding of the polar code. A kernel matrix Ki is shown in
In some cases, for example, sx,y(C(t+1))=sx,y(C(t)) and px,y(C(t+1))⊂px,y(C(t)), the CBT in the stage t+1 is a subset of the CBT in the stage t, and is directly obtained without recalculation. From
In some cases, for example, in response to sx,y(C(t+1))⊂sx,y(C(t)), the intermediate result of decoding is reused. For example, a maximization operation result in the intermediate process is saved to reduce the calculation amount in the stage t+1. From
According to the decoding method provided in at least one embodiment, in different stages of polar code decoding, in response to a step that is calculated in a previous stage reappearing in a subsequent stage, a previous calculation result is directly reused. This reduces polar decoding complexity.
Table 1 shows complexity comparison between the decoding algorithm provided in at least one embodiment and a Viterbi decoding algorithm in response to different kernel matrices being used on an encoding side.
From Table 1, it can be learned that complexity of the decoding algorithm provided in at least one embodiment is greatly reduced compared with the existing algorithm.
Optionally, at least one embodiment is used not only in a binary domain, but also in a non-binary domain. Table 2 shows complexity comparison between the decoding algorithm provided in at least one embodiment and the Viterbi decoding algorithm in response to different kernel matrices being used in the non-binary domain. Reed-Solomon indicates a Reed-Solomon code, and Hermitian indicates a Hermitian code.
From Table 2, the complexity is still greatly reduced in the non-binary domain by using the decoding algorithm provided in at least one embodiment.
In addition,
The formulas in embodiments described herein are merely examples for description, and do not limit the protection scope of embodiments described herein. The formulas in the foregoing embodiments are mainly examples provided with reference to designs in the prior art, and definitions of the parameters are definitions in a general sense. In a process of calculating the foregoing related parameters, calculation is performed according to the foregoing formulas, or calculation is performed based on variations of the foregoing formulas, or calculation is performed in another manner to meet a calculation result of the formulas.
The solutions on the encoder and the decoder is used separately, or is used in combination. This is not limited herein. For example, encoding is performed by using the encoding method provided in at least one embodiment, and decoding is performed by using an existing decoding method. For another example, encoding is performed by using an existing encoding method, and decoding is performed by using the decoding method provided in at least one embodiment. For another example, encoding is performed by using the encoding method provided in at least one embodiment, and decoding is performed by using the decoding method provided in at least one embodiment. In an example, encoding (polar code encoding) is implemented by multiplying a data vector and Kronecker products of some matrices (kernel), and decoding is performed at a receive end by using a decoding method based on recursive trellis.
In some of the foregoing examples, SC decoding is used as an example for description. This is not limited herein. For example, embodiments described herein are further used in decoding methods such as SCL, sequence decoding, and SC-Flip.
Based on the foregoing technical solution, in some embodiments, in response to the to-be-decoded sequence being decoded by using a decoding algorithm based on the trellis, the trellis is first split, and the decoding based on a complete trellis is split into decoding based on a sub-trellis, thereby reducing decoding complexity. In other words, in response to calculation being performed on different sub-trellises, some calculation steps is the same. Therefore, reusing is performed for a plurality of times, so that decoding complexity is greatly reduced, and decoding speed is improved.
In addition, based on the foregoing technical solutions, in some embodiments, in response to the polar encoding being performed, a plurality of kernel matrices are first constructed, and an appropriate kernel matrix is selected for encoding based on the constructed kernel matrices and an actual communication status. In this manner, in response to the kernel matrix being constructed or selected, performance and decoding complexity are comprehensively considered, so that decoding complexity is reduced by selecting the appropriate kernel matrix used during encoding.
Embodiments described in this specification is independent solutions, or is combined based on internal logic. All these solutions fall within the protection scope of embodiments described herein. For example, the solutions on the encoder and the decoder is used separately, or is used in combination.
In the foregoing method embodiments, methods and operations implemented by a decoder device (for example, a network device or a terminal device) is also implemented by a component (for example, a chip or a circuit) that is used in the decoder device, and methods and operations implemented by an encoder device (for example, a network device or a terminal device) is also implemented by a component (for example, a chip or a circuit) that is used in the encoder device.
The foregoing describes in detail the method provided in embodiments described herein. The following describes in detail apparatuses provided in embodiments described herein with reference to
In at least one embodiment, function module division is performed on an encoder device or a decoder device based on the foregoing method example. For example, each function module is obtained through division corresponding to each function, or two or more functions are integrated in one processing module. The integrated module is implemented in a form of hardware, or is implemented in a form of a software functional module. In at least one embodiment, division into the modules is an example and is merely logical function division, and another division manner is used in an actual implementation. An example in which each functional module is obtained through division based on each corresponding function is used below for description.
Optionally, the apparatus 4400 further includes a storage unit. The storage unit is configured to store instructions and/or data. The processing unit 4420 reads the instructions and/or data in the storage unit, to enable the apparatus to implement the foregoing method embodiments.
The apparatus 4400 is configured to perform an action performed by the encoder device in the foregoing method embodiment. In this case, the apparatus 4400 is the encoder device or the component that is disposed in the encoder device. The transceiver unit 4410 is configured to perform operations related to receiving and sending on the encoder device in the foregoing method embodiment. The processing unit 4420 is configured to perform processing-related operations on the encoder device in the foregoing method embodiment.
Alternatively, the apparatus 4400 is configured to perform an action performed by the decoder device in the foregoing method embodiment. In this case, the apparatus 4400 is the decoder device or the component that is disposed in the decoder device. The transceiver unit 4410 is configured to perform operations related to receiving and sending on the decoder device in the foregoing method embodiment. The processing unit 4420 is configured to perform processing-related operations on the decoder device in the foregoing method embodiment.
In another design, the apparatus 4400 is configured to perform an action performed by the encoder device in the embodiment shown in
In an example, a dimension of the target kernel matrix is less than a dimension of the original kernel matrix.
In still another example, the target kernel matrix is obtained by adjusting a partial distance of the original kernel matrix.
As yet another example, the target kernel matrix is obtained by adjusting the partial distance of the original kernel matrix according to a depth-first search algorithm.
In still another example, in response to processing being performed according to the depth-first search algorithm, one or more of the following constraints are used for processing: a candidate line set M meets: M[i] = {v0l−1 ∈ F2l | wt(v0l−1) = Di}; or in response to the ith line from the bottom being selected from the candidate line set M, a Hamming weight of a selected line v is greater than or equal to Di; or the line v is selected from the candidate line set M and partial distance calculation is performed on the line v and a constructed line, where in response to some codeword c ∈ C yielding a weight smaller than the target partial distance, the calculation is not continued and the line v is discarded.
In still another example, the original kernel matrix is a kernel matrix K24 whose dimension is (24×24), and the K24 is:
In still another example, the target kernel matrix is any one of the following:
The apparatus 4400 implements steps or procedures corresponding to the encoder device in the method embodiments described herein. The apparatus 4400 includes units configured to perform the methods performed by the encoder device in the method embodiments in
In response to the apparatus 4400 being configured to perform method 1300 in
A specific process in which the units perform the foregoing corresponding steps is described in detail in the foregoing method embodiments. For brevity, details are not described herein again.
In another design, the apparatus 4400 is configured to perform an action performed by the decoder device in the embodiment shown in
As an example, each of the plurality of trellises is split into a plurality of sub-trellises.
In still another example, locations of split points of the plurality of sub-trellises are related to decoding complexity.
In still another example, the processing unit 4420 is further configured to determine, based on a correlation between the first trellis and the second trellis, whether to reuse the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis.
In still another example, the processing unit 4420 is configured to determine, based on the following information, whether to reuse the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis: a code obtained by performing a shortening operation on a linear code C in the (t+i)th stage of decoding and a code obtained by performing the shortening operation on the linear code C in the tth stage of decoding; and/or a code obtained by performing a puncturing operation on the linear code C in the (t+i)th stage of decoding and a code obtained by performing a puncturing operation on the linear code C in the tth stage of decoding.
In still another example, in response to sx,y(C(t+i))=sx,y(C(t)), and px,y(C(t+i))⊂px,y(C(t)), reusing the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis; or in response to sx,y(C(t+i))⊂sx,y(C(t)), reusing the intermediate result obtained through decoding performed based on the first trellis in response to decoding being performed based on the second trellis, where sx,y(C(t+i)) indicates a code obtained by performing the shortening operation on the linear code C at a location except x≤z<y in the (t+i)th stage of decoding, sx,y(C(t)) indicates a code obtained by performing the shortening operation on the linear code C at the location except x≤z<y in the tth stage of decoding, px,y(C(t+i)) indicates a code obtained by performing the puncturing operation on the linear code C at the location except x≤z<y in the (t+i)th stage of decoding, and px,y(C(t)) indicates a code obtained by performing the puncturing operation on the linear code C at the location except x≤z<y in the tth stage of decoding.
In still another example, the processing unit 4420 is configured to: directly use the intermediate result obtained through decoding performed based on the first trellis as an intermediate result obtained through decoding performed based on the second trellis in response to processing being performed based on the second trellis in the (t+i)th stage of decoding; or process based on a result of a maximization operation in a process of the first trellis in response to processing being performed based on the second trellis in the (t+i)th stage of decoding.
The apparatus 4400 implements steps or procedures corresponding to the decoder device in the method embodiments described herein. The apparatus 4400 includes units configured to perform the methods performed by the decoder device in the method embodiments in
In response to the apparatus 4400 being configured to perform method 1400 in
The processing unit 4420 in the foregoing embodiment is implemented by at least one processor or a processor-related circuit. The transceiver unit 4410 is implemented by using a transceiver or a transceiver-related circuit. The transceiver unit 4410 is also referred to as a communication unit or a communication interface. The storage unit is implemented by at least one memory.
In at least one embodiment, the foregoing functions of the apparatus 4400 is implemented by hardware, or is implemented by hardware executing corresponding software.
In an embodiment, the apparatus 4400 includes one or more processors. The one or more processors are configured to execute a computer program stored in the memory, so that the apparatus 4400 performs any method embodiment described herein.
Optionally, the memory configured to store the computer program is located outside the apparatus 4400, and the one or more processors are connected to the memory by using a circuit and/or a wire. There is one or more memories.
Optionally, the apparatus 4400 further includes one or more memories.
Further, optionally, the apparatus 4400 further includes one or more communication interfaces.
In some examples, the one or more communication interfaces is an input output interface or an input output circuit. This is not limited in embodiments described herein.
In another embodiment, the apparatus 4400 is further implemented by using hardware.
As shown in
Optionally, the apparatus 4500 includes one or more processors 4510.
Optionally, as shown in
Optionally, the apparatus 4500 includes one or more memories 4520.
Optionally, the memory 4520 and the processor 4510 is integrated together, or is disposed separately.
Optionally, as shown in
In a solution, the apparatus 4500 is configured to implement the operations performed by the encoder device in the foregoing method embodiments.
For example, the processor 4510 is configured to perform a processing-related operation performed by the encoder device in the foregoing method embodiment, and the transceiver 4530 is configured to perform operations related to receiving and sending performed by the encoder device in the foregoing method embodiment.
In another solution, the apparatus 4500 is configured to implement the operations performed by the decoder device in the foregoing method embodiments.
For example, the processor 4510 is configured to perform a processing-related operation performed by the decoder device in the foregoing method embodiment, and the transceiver 4530 is configured to perform operations related to receiving and sending performed by the decoder device in the foregoing method embodiment.
At least one embodiment further provides an apparatus 4600. The apparatus 4600 is configured to perform the operation performed by the encoder device in the foregoing method embodiment, or the apparatus 4600 is configured to perform the operation performed by the decoder device in the foregoing method embodiment.
The apparatus 4600 includes an input interface circuit 4610, a logic circuit 4620, and an output interface circuit 4630.
In a solution, the apparatus 4600 is configured to perform the operations performed by the decoder device in the foregoing method embodiments.
Optionally, the apparatus 4600 is also a device or module that is in the decoder and that is configured to implement channel decoding. For example, the apparatus is a channel decoder or a channel decoding circuit. Alternatively, the apparatus 4600 is a chip disposed in a receive end.
In at least one embodiment, the input interface circuit 4610 is configured to obtain a to-be-decoded sequence; the logic circuit 4620 is configured to decode the to-be-decoded sequence according to a decoding algorithm provided in embodiments described herein; and the output interface circuit 4630 is configured to output a decoding result.
In at least one embodiment, the logic circuit 4620 is configured to obtain a to-be-decoded sequence, and decode the to-be-decoded sequence according to a decoding algorithm provided in at least one embodiment. The output interface circuit 4630 outputs a decoding result.
In another solution, the apparatus 4600 is configured to perform the operations performed by the encoder device in the foregoing method embodiments.
Optionally, the apparatus 4600 is also a device or module configured to implement encoding in the encoder. For example, the apparatus is an encoder or an encoding circuit. Alternatively, the apparatus 4600 is a chip disposed at a transmit end.
In at least one embodiment, the input interface circuit 4610 is configured to obtain information bits; the logic circuit 4620 is configured to encode the information bits according to an encoding algorithm provided in at least one embodiment; and the output interface circuit 4630 is configured to output an encoded polar code.
In at least one embodiment, the logic circuit 4620 is configured to generate information bits, and encode the information bits according to the encoding algorithm provided in at least one embodiment; and the output interface circuit 4630 outputs an encoded polar code.
Optionally, the apparatus 4600 is a chip or an integrated circuit. For example, the chip is a system on chip (SOC), or is a baseband chip.
Optionally, the foregoing mainly uses an example in which the input interface circuit 4610 and the output interface circuit 4630 are separate interface circuits for description. This is not limited. For example, the input interface circuit 4610 and the output interface circuit 4630 is integrated into one interface.
At least one embodiment further provides a computer-readable storage medium. The computer-readable storage medium stores computer instructions used to implement the method performed by the encoder device or the method performed by the decoder device in the foregoing method embodiments.
For example, in response to a computer program being executed by a computer, the computer implements the method performed by the encoder device or the method performed by the decoder device in the foregoing method embodiments.
At least one embodiment further provides a computer program product including instructions. In response to the instructions being executed by a computer, the computer is enabled to implement the method performed by the encoder device or the method performed by the decoder device in the foregoing method embodiments.
At least one embodiment further provides a chip, including one or more memories and one or more processors. The one or more memories are configured to store a computer program, and the one or more processors are configured to invoke the computer program from the one or more memories and run the computer program, so that a device in which the chip is installed performs the method in at least one embodiment.
At least one embodiment further provides a communication device, including the apparatus 4500 or the apparatus 4600 described above.
At least one embodiment further provides a communication system. The communication system includes the encoder device and the decoder device in the foregoing embodiments.
The decoder in this specification is a receive end of a signal and/or data. Correspondingly, a party that sends the signal and/or the data is a transmit end. Optionally, the decoder is a network device (for example, a gNB in 5G) in the communication system, or is a terminal device. This is not limited in the solutions of embodiments described herein.
The encoder in this specification is a transmit end of a signal and/or data. Correspondingly, a party that receives the signal and/or the data is the receive end. Optionally, the encoder is a network device (for example, a gNB in 5G) in the communication system, or is a terminal device. This is not limited in the solutions of embodiments described herein.
A person skilled in the art understands that, for convenience and brief description, for explanations and beneficial effects of related content in any communication apparatus provided above, refer to the corresponding method embodiment provided above. Details are not described herein again.
In at least one embodiment, the encoder device or the decoder device includes a hardware layer, an operating system layer running above the hardware layer, and an application layer running above the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and a memory (also referred to as a primary memory). An operating system at the operating system layer is any one or more computer operating systems that implement service processing through a process, for example, a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software.
A specific structure of an execution body of a method is not limited in embodiments described herein, provided that a program that records code for the method provided in at least one embodiment is run to perform communication according to the method provided in at least one embodiment. For example, the execution body of the method provided in embodiments described herein is performed by an encoder device or a decoder device, or is a function module that invokes and execute a program in the encoder device or the decoder device.
At least one embodiment is implemented as a method, an apparatus, or a product that uses standard programming and/or engineering technologies. The term “product” used in this specification covers a computer program that is accessed from any computer-readable component, carrier or medium.
The computer-readable storage medium is any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium (or the computer-readable medium) includes, for example, but is not limited to, various media that stores program code such as a magnetic medium or a magnetic storage device (for example, a floppy disk, a hard disk (for example, a removable hard disk), or a magnetic tape), an optical medium (for example, an optical disc, a compact disc (compact disc, CD), or a digital versatile disc (DVD)), a smart card, and a flash storage device (for example, an erasable programmable read-only memory (EPROM), a card, a stick, or a key drive), or a semiconductor medium (for example, a solid-state disk (SSD), a USB flash drive, a read-only memory (ROM), or a random access memory (RAM)).
Various storage media described in this specification indicate one or more devices and/or other machine-readable media that are configured to store information. The term “machine-readable media” includes but is not limited to a radio channel and various other media that store, include, and/or carry instructions and/or data.
The processor in at least one embodiment described herein is a central processing unit (CPU), or is another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor is a microprocessor, or the processor is any conventional processor or the like.
The memory mentioned in embodiments described herein is a volatile memory or a nonvolatile memory, or includes a volatile memory and a nonvolatile memory. The nonvolatile memory is a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable read-only memory (EPROM), an electrically EPROM (EEPROM), or a flash memory. The volatile memory is a random access memory (RAM). For example, the RAM is used as an external cache. As an example instead of a limitation, the RAM includes the following plurality of forms: a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM).
In response to the processor being a general-purpose processor, a DSP, an ASIC, an FPGA, another programmable logic device, a discrete gate or a transistor logic device, or a discrete hardware component, the memory (storage module) is integrated into the processor.
The memory described in at least one embodiment aims to include but is not limited to these memories and any memory of another proper type.
In at least one embodiment, the disclosed apparatuses and methods is implemented in other manners. For example, the foregoing apparatus embodiments are only examples. For example, division into the foregoing units is only logical function division, and is another division manner during actual implementation. For example, a plurality of units or components is combined or integrated into another system, or some features is ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections is implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units is implemented in an electronic form, a mechanical form, or another form.
The foregoing units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; the units are located in one position, or are distributed on a plurality of network units. Some or all of the units are selected based on actual implementations of the solutions provided in embodiments described herein.
In addition, function units in embodiments described herein are integrated into one unit, or each of the units exist alone physically, or two or more units are integrated into one unit.
All or some of the foregoing embodiments is implemented by using software, hardware, firmware, or any combination thereof.
In response to software being used to implement the embodiments, all or a part of the embodiments is implemented in a form of a computer program product. The computer program product includes one or more computer instructions. In response to the computer program instructions being loaded and executed on a computer, the procedure or functions according to at least one embodiment are completely or partially generated. The computer is a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. For example, the computer is a personal computer, a server, or a network device. The computer instructions is stored in a computer-readable storage medium or is transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions is transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. For the computer-readable storage medium, refer to the foregoing descriptions.
The foregoing descriptions are merely specific implementations of embodiments described herein, but are not intended to limit the protection scope of embodiments described herein. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in at least one embodiment shall fall within the protection scope of embodiments described herein. Therefore, the protection scope of embodiments described herein shall be subject to the protection scope of the claims and this specification.
Number | Date | Country | Kind |
---|---|---|---|
2020130551 | Sep 2020 | RU | national |
This application is a continuation of International Patent Application No. PCT/CN2021/115206, filed on Aug. 30, 2021, which claims priority to Russian Federation(RU) Patent Application No.RU2020130551, filed on Sep. 17, 2020. The disclosures of the aforementioned applications are hereby incorporated by reference in their entireties.
 | Number | Date | Country
---|---|---|---
Parent | PCT/CN2021/115206 | Aug 2021 | US
Child | 18183272 | | US