This application claims the benefit of Chinese Patent Application No. 202110351850.6, filed Mar. 31, 2021, the contents of which are incorporated by reference herein in their entirety.
This application relates to the technical field of channel coding, in particular, the field of block code encoding and decoding.
With the development and advancement of modern communications, Ultra-Reliable Low Latency Communication (URLLC), which is one of the key pillars of the 5G network, has attracted increasing attention. To meet the stringent requirements of low latency, channel codes for URLLC are designed for short block lengths, for example, lengths n<1000.
Several bounds on the achievable rate in the finite-length regime have been investigated and presented in theory. For example, given a block length n and an error probability ε, a coding rate R=(log2 M)/n is achievable if there exists a codebook with M codewords and a decoder whose frame error rate (FER) is smaller than ε. It was proven that, given (n, M), there exists a code whose FER is upper bounded by the random coding union (RCU) bound, while the FER of any code is lower bounded by the meta-converse (MC) bound. To approach the performance bounds, coding schemes tailored to the transmission of short blocks were investigated, where BCH codes, tail-biting convolutional codes (TBCCs), polar codes, and low-density parity-check (LDPC) codes are compared and shown to be potentially applicable for low latency communications. List decoding algorithms such as order statistic decoding (OSD) may be used to attain near-ML performance for codes such as BCH codes and Reed-Muller codes with length n≤128.
Another example of a known efficient list decoding algorithm is the list Viterbi algorithm (LVA) for trellis codes such as TBCCs, which generates a list of candidate codewords associated with the highest likelihoods. It was shown that TBCCs with a large encoding memory perform as well as the best-known block codes of short and medium length. However, their decoding suffers from high complexity due to the enormous number of states. To reduce the decoding complexity, the wrap-around Viterbi algorithm (WAVA) for TBCCs was proposed. Alternatively, TBCCs are designed by concatenation with cyclic redundancy checks (CRCs), where the outer CRC can be viewed as an error detector for identifying the correct candidate codeword in the decoding list.
Polar codes were proposed in 2009 and proved to achieve the channel capacity at infinite length under successive cancellation (SC) decoding. Based on the tensor network representation of polar codes, the branching multi-scale entanglement renormalization ansatz (BMERA) code was proposed to improve the SC decoding performance. Similar to TBCCs, polar codes are designed with CRC precoding and decoded by the successive cancellation list (SCL) decoding algorithm. In a newer polar code construction, the polarization-adjusted convolutional (PAC) codes, CRC precoding may be replaced by rate-one convolutional precoding. The performance is further improved to approach the RCU bound by sequential decoding of the PAC codes. From the perspective of polar coding with dynamically frozen bits, the list decoding algorithm of PAC codes was investigated, which is shown to closely match the performance of the sequential decoder.
Among the above coding schemes, BCH codes decoded by the OSD and TBCCs with a large memory length have better performance, but they require extremely high algorithmic complexity. In addition, for the design of short codes, TBCCs and polar codes need to use a CRC as the outer code to assist their list decoding. On the one hand, the use of a CRC increases the redundancy of the coding, thereby reducing the coding rate. On the other hand, the design must be jointly optimized with the CRC, which makes the coding structure inflexible.
This disclosure proposes alternatives to known methods.
The present disclosure aims to provide block code encoding and decoding methods, and apparatus therefor.
A coding scheme called twisted-pair superposition transmission (TPST) is proposed, which is constructed by superimposing a pair of basic codes on each other in a twisted manner. An SCL decoding algorithm is proposed for the TPST codes, which may be terminated early by a preset threshold on the empirical divergence function (EDF) to trade off performance against decoding complexity. The SCL decoding of TPST is based on the efficient list decoding of the basic codes, where the correct candidate codeword in the decoding list is distinguished by employing a statistical learning aided decoding algorithm. Lower bounds for the two layers of TPST are derived, which may be used to predict the decoding performance and to show the near-ML performance of the proposed SCL decoding algorithm. The construction of TPST codes may be generalized by allowing different basic codes for the two layers. In addition to the partial superposition method presented, another design approach, referred to as the rate allocation method, is presented along with a procedure to search for TPST codes with good performance within an SNR region of interest. Numerical simulation results show that the constructed TPST codes have performance close to the RCU bound in the short length regime.
Compared to known methods, the method proposed herein couples short component codes to construct the code. By bi-directionally superimposing the two component codewords, it replaces the CRC (cyclic redundancy check) that would otherwise cause a code rate loss, making the code rate and structure flexible without sacrificing information bits to CRC overhead. Combined with the list decoding of the component codes, better performance may be obtained at lower complexity.
In a first aspect of the present disclosure, there is provided an encoding method to encode a block code with a code length of n and an information bit length of k, the method comprising: using component codes C0[n0,k0] and C1[n1,k1] to indicate two block codes with code lengths n0 and n1 respectively, and information bit lengths k0 and k1 respectively, such that n0+n1=n and k0+k1=k; and encoding an information sequence u=(u(0), u(1)) of length k into a codeword sequence c of length n, where u(0) and u(1) are sub-sequences of u of lengths k0 and k1 respectively, and correspond to an upper layer and a lower layer of encoding respectively; the encoding comprising the following steps: using the component code C0[n0,k0] to encode the upper layer u(0) into an upper layer component codeword v(0) of length n0, and using the component code C1[n1,k1] to encode the lower layer u(1) into a lower layer component codeword v(1) of length n1; coupling the upper layer u(0) into the lower layer component codeword v(1) by superposition to obtain a sequence c(1) of length n1: c(1)=S(u(0))+v(1), wherein S is a coupling function; obtaining a sequence w with a length of n0 by passing the sequence c(1) through a partial sampler Π: w=c(1)Π, wherein Π is an n1×n0 binary matrix, each column of which has a weight of no more than 1; superimposing by XOR to obtain a sequence c(0) of length n0: c(0)=v(0)+w; and outputting the codeword sequence c=(c(0), c(1)) of length n.
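Purely for illustration (and not as part of the claimed method), the following is a minimal Python/NumPy sketch of the above encoding steps; the generator matrices, the coupling matrix Ru (one of the matrix forms described in the next paragraph), and all parameters are hypothetical placeholders.

```python
import numpy as np

def tpst_encode(u0, u1, G0, G1, S, Pi):
    """Sketch of the claimed encoding (all arithmetic over GF(2)).

    u0, u1 : information sub-sequences of lengths k0 and k1
    G0, G1 : k0 x n0 and k1 x n1 generator matrices of the component codes
    S      : coupling function mapping u0 to a length-n1 binary sequence
    Pi     : n1 x n0 binary partial sampler, each column of weight <= 1
    """
    v0 = u0 @ G0 % 2                  # upper-layer component codeword, length n0
    v1 = u1 @ G1 % 2                  # lower-layer component codeword, length n1
    c1 = (S(u0) + v1) % 2             # forward coupling of u0 into the lower layer
    w = c1 @ Pi % 2                   # partial sampling of c(1), length n0
    c0 = (v0 + w) % 2                 # backward superposition by XOR
    return np.concatenate([c0, c1])   # codeword of length n = n0 + n1

# Toy usage with made-up parameters (not a code construction from the disclosure):
rng = np.random.default_rng(0)
k0 = k1 = 4; n0 = n1 = 8
G0 = np.hstack([np.eye(k0, dtype=int), rng.integers(0, 2, (k0, n0 - k0))])
G1 = np.hstack([np.eye(k1, dtype=int), rng.integers(0, 2, (k1, n1 - k1))])
Ru = rng.integers(0, 2, (k0, n1))    # coupling as a random k0 x n1 matrix (hypothetical)
Pi = np.eye(n1, n0, dtype=int)       # identity partial sampler (full superposition)
c = tpst_encode(rng.integers(0, 2, k0), rng.integers(0, 2, k1), G0, G1,
                lambda u: u @ Ru % 2, Pi)
```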
In one form, the coupling function S is selected such that a small change in u(0) causes a significant change in c(1). In one form, the coupling function S is a random binary matrix Ru of size k0×n1, such that c(1)=u(0)Ru+v(1). In one form, the coupling function S is a random binary matrix Rv of size n0×n1, such that c(1)=v(0)Rv+v(1). In one form, the coupling function S is provided as follows: a polar code is used as the component code for the lower layer, and a binary matrix P with a size of k0×(n1−k1), each column of which has a weight of no more than 1, is used to obtain a sequence of length n1−k1: f=u(0)P, so that c(1) is a polar codeword with f as the frozen bits of the lower layer and u(1) as the information bits.
In a second aspect of the present disclosure, there is provided a decoding method for decoding a received sequence encoded using the method of the first aspect, with an input being a probability sequence y=(y(0), y(1)) giving the probability that each bit of the received sequence of length n equals 0, and an output being an estimated transmitted information sequence û, the method comprising the following steps: calculating a probability sequence {tilde over (y)}(0) of an upper layer component code according to a superposition relationship of a partial sampler Π; using {tilde over (y)}(0) as an input, calling a list decoding algorithm of a component code C0[n0,k0] with a list size of L, and obtaining L candidate information sequences ûl(0) and their corresponding component codeword sequences {circumflex over (v)}l(0) of length n0, l=0, 1, . . . , L−1; for l=0, 1, . . . , L−1: calculating a probability sequence {tilde over (y)}l(1) of a lower layer component code according to Π; using {tilde over (y)}l(1) as an input, eliminating an influence of the upper layer component code, and decoding the lower layer component code C1[n1,k1] according to the coupling function S to obtain ûl(1); obtaining a candidate information sequence ûl=(ûl(0), ûl(1)); and calculating a likelihood function of each candidate information sequence, and outputting the candidate information sequence with the maximum likelihood.
In one form, for the probability sequence {tilde over (y)}(0) of the upper layer component code, the i-th component {tilde over (y)}i(0), i=0, 1, . . . , n0−1, is calculated as follows: if the weight of the i-th column of Π is 0, then {tilde over (y)}i(0)=yi(0); if the weight of the i-th column of Π is 1 and the position of the 1 is in row j, that is, Πji=1, then {tilde over (y)}i(0)=yi(0)yj(1)+(1−yi(0))(1−yj(1)), where yi(0) represents the i-th component of y(0), and yj(1) represents the j-th component of y(1).
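A minimal sketch of this computation (Python/NumPy, for illustration only; as above, y(0) and y(1) denote the probabilities that the corresponding received bits equal 0):

```python
import numpy as np

def upper_layer_probs(y0, y1, Pi):
    """Probability sequence of the upper-layer component code.

    y0 : length-n0 array of probabilities that each bit of c(0) is 0
    y1 : length-n1 array of probabilities that each bit of c(1) is 0
    Pi : n1 x n0 binary partial sampler (each column of weight <= 1)
    """
    y0_tilde = y0.copy()
    for i in range(Pi.shape[1]):
        rows = np.flatnonzero(Pi[:, i])
        if rows.size == 1:              # column weight 1: v(0)_i = c(0)_i XOR c(1)_j
            j = rows[0]
            y0_tilde[i] = y0[i] * y1[j] + (1 - y0[i]) * (1 - y1[j])
        # column weight 0: y0_tilde[i] stays equal to y0[i]
    return y0_tilde
```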
In one form, for the probability sequence {tilde over (y)}l(1) of the lower layer component code, the i-th component {tilde over (y)}l,i(1), i=0, 1, . . . , n1−1, is calculated as follows: if the weight of the i-th row of Π is 0, then {tilde over (y)}l,i(1)=yi(1); if the weight of the i-th row of Π is ρi>0, that is, it has ρi positions equal to 1 and these positions are j0, j1, . . . , jρi−1, then {tilde over (y)}l,i(1) is calculated by combining yi(1) with the probabilities derived from yj0(0), . . . , yjρi−1(0) after eliminating the contribution of the upper layer candidate, wherein ûl,j(0) represents the j-th component of ûl(0), and the combining operation merges independent observations of the same bit.
In one form, based on the coupling function S, the influence of the upper layer component code is eliminated, and the lower layer component code C1[n1,k1] is decoded as follows: substituting ûl(0) into the coupling function S to obtain a sequence sl=S(ûl(0)), and correcting {tilde over (y)}l(1) according to sl to obtain ŷl(1): ŷl,j(1)={tilde over (y)}l,j(1)+sl,j(1−2{tilde over (y)}l,j(1)), wherein ŷl,j(1), {tilde over (y)}l,j(1), sl,j represent the j-th components of ŷl(1), {tilde over (y)}l(1), sl respectively; the calculated ŷl(1) is then used as the input for decoding the lower layer component code C1[n1,k1] to obtain ûl(1).
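The following sketch (Python/NumPy, illustration only) combines the lower-layer probability computation with the correction by sl. It assumes that the upper-layer interference on y(0) is removed using the re-encoded candidate codeword {circumflex over (v)}l(0), and that repeated observations of the same bit are merged by the standard product rule for independent observations; both are assumptions of this sketch rather than statements of the claims.

```python
import numpy as np

def lower_layer_probs(y0, y1, Pi, v0_hat, s_l):
    """Illustrative lower-layer probabilities for one candidate.

    y0, y1 : probabilities that the bits of c(0) and c(1) equal 0
    Pi     : n1 x n0 binary partial sampler
    v0_hat : candidate upper-layer component codeword (assumption: the upper-layer
             interference on c(0) is removed with these codeword bits)
    s_l    : s_l = S(u_hat_l(0)), the coupled sequence for this candidate
    """
    y1_tilde = y1.astype(float)
    for i in range(Pi.shape[0]):
        for j in np.flatnonzero(Pi[i, :]):
            # observation of c(1)_i through c(0)_j = v(0)_j XOR c(1)_i
            p = y0[j] if v0_hat[j] == 0 else 1 - y0[j]
            # assumed combining rule for two independent observations of one bit
            q = y1_tilde[i]
            y1_tilde[i] = q * p / (q * p + (1 - q) * (1 - p))
    # correction for the coupled sequence: v(1) = c(1) XOR s_l
    return y1_tilde + s_l * (1 - 2 * y1_tilde)
```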
In one form, the calculation method for calculating the likelihood function of each candidate information sequence is: re-encoding each candidate information sequence ûl to obtain an estimated codeword ĉl, and calculating its likelihood function λ(ĉl)=∏_{i=0}^{n−1}[yi+ĉl,i(1−2yi)], where yi and ĉl,i represent the i-th components of y and ĉl respectively.
In one form, the candidate information sequence with the maximum likelihood is used as the output, and the decoding complexity is reduced by setting a threshold T: when the calculated likelihood function exceeds the threshold, û=ûl is output and the decoding process is terminated.
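A sketch of the likelihood computation and the threshold-based early termination (Python/NumPy, illustration only; the reencode helper is hypothetical):

```python
import numpy as np

def likelihood(y, c_hat):
    """lambda(c_hat) = prod_i [ y_i + c_hat_i * (1 - 2*y_i) ], where y_i is the
    probability that received bit i equals 0."""
    return np.prod(y + c_hat * (1 - 2 * y))

def select_candidate(y, candidates, reencode, threshold=None):
    """Pick the most likely candidate; optionally stop early once the likelihood
    exceeds a preset threshold T."""
    best, best_lam = None, -1.0
    for u_hat in candidates:
        c_hat = reencode(u_hat)          # re-encode the candidate information sequence
        lam = likelihood(y, c_hat)
        if threshold is not None and lam > threshold:
            return u_hat                 # early termination
        if lam > best_lam:
            best, best_lam = u_hat, lam
    return best
```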
In one form, when the coupling function S is provided by using a polar code as the component code for the lower layer and a binary matrix P with a size of k0×(n1−k1) and a column weight of no more than 1 to obtain a sequence of length n1−k1: f=u(0)P, so that c(1) is a polar codeword with f as the frozen bits of the lower layer and u(1) as the information bits, the step of decoding the lower layer component code C1[n1,k1] simplifies to: calculating the frozen bits fl=ûl(0)P, and then calling the decoder of the polar code C1[n1,k1] with fl as the frozen bits to obtain ûl(1).
In another aspect of the present disclosure, there is provided an apparatus comprising a memory, a processor, and a computer program stored on the memory capable of running on the processor, characterized in that, the processor executes the computer program to implement the steps of the method of any one of the first and the second aspects, and their various forms.
In another aspect of the present disclosure, there is provided a computer-readable medium with non-volatile program code executable by a processor, characterized in that, the program code causes the processor to execute the method according to any one of the first and the second aspects, and their various forms.
The present disclosure offers one or both of the following advantages: (1) the bi-directional superposition of the component codewords removes the need for an outer CRC, avoiding the associated code rate loss and making the code rate flexible; and (2) combined with the list decoding of the component codes, better performance may be obtained at a lower decoding complexity.
Other features and advantages of the present disclosure will be described in the following description, and partly become clear from the disclosure, or understood by implementing the present disclosure. The purpose and other advantages of the disclosure are realized and obtained by the structures specifically pointed out in the description, claims and figures.
In order to make the above-mentioned objectives, features and advantages of the present disclosure clearer and understandable, the preferred embodiments and accompanying figures are described in detail as follows.
In order to more clearly illustrate the specific embodiments of the disclosure or the technical solutions in the prior art, the following will briefly introduce the figures that need to be used in the description of the specific embodiments or the prior art. Obviously, the figures in the following description are some embodiments of the present disclosure. For those of ordinary skill in the art, other figures may be obtained based on these figures without any inventive work.
In order to make the purpose, technical solutions and advantages of the embodiments of this application clearer, the technical solutions of this application will be described clearly and completely in conjunction with the accompanying figures. Obviously, the described embodiments are part of the embodiments of this application, not all of the embodiments. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without inventive work shall fall within the protection scope of this application.
By coupling component codes in encoding and decoding, a good code with flexible and adjustable parameters is designed, whose performance is close to the finite-length capacity bounds, and a low-complexity decoding algorithm for it is provided.
In this embodiment of
When the process begins 1, the step of inputting an information sequence u is performed. This is followed by a basic encoding step 3. During this step 3, two operations are performed: (a) encoding an upper layer u(0) into an upper layer component codeword v(0); and (b) encoding a lower layer u(1) into a lower layer component codeword v(1). More details of step 3 will be discussed in later sections.
A forward superposition step 4 follows the step 3 above. In this step 4, using S as a coupling function, the upper layer u(0) is coupled into the lower layer component codeword v(1) by superposition to obtain a sequence c(1). More details of step 4 will be discussed in later sections.
A backward superposition step 5 follows the step 4 above. In this step 5, the sequence c(1) is passed through a partial sampler Π to obtain a sequence w; then, by superimposing w onto v(0) by XOR, a sequence c(0) is obtained. The next step 6 is to output a codeword sequence c=(c(0), c(1)). The codeword sequence is then sent through a channel, indicated by channel transmission 8.
On the side of the decoder 15, the decoder receives the transmission from the encoder 7. The decoder 15 performs the following steps: first, inputting a probability sequence y 9; then calculating a probability sequence of an upper layer component code according to a superposition relationship of a partial sampler 10; then, with {tilde over (y)}(0) as input, using list decoding of the upper layer component code to obtain candidate information sequences ûl(0) 11; then, for each candidate information sequence ûl(0), eliminating an influence of the upper layer component code and decoding the lower layer component code 12; and finally obtaining a candidate information sequence ûl=(ûl(0), ûl(1)) 13, and outputting the candidate information sequence with the maximum likelihood 14. The entire encoding and decoding process then ends 16.
Referring to
Referring to
Referring to
Encoding and decoding will be further elaborated below with reference to certain examples. However, unless specifically stated, none of the features of the examples are limiting.
For encoding, let F2 be the binary field. Let C0[n0, k0] and C1[n1, k1] be two binary linear codes of length n0=n1=n/2, with dimensions k0 and k1, respectively. It is assumed that C0[n0, k0] has an efficient list decoding algorithm. Let Rv be an n0×n1 binary matrix (for example, randomly generated) and let Π be an n1×n0 binary partial sampler, each column of which has a weight of no more than 1. Then, a TPST code with length n and dimension k=k0+k1 is constructed as follows. Let u=(u(0), u(1))ϵF2^k be a pair of information sequences to be transmitted, where u(0)ϵF2^k0 and u(1)ϵF2^k1 correspond to the upper layer and the lower layer, respectively.
Algorithm 1: Twisted-Pair Superposition Transmission
Basic Encoding: Encode u(i) into v(i)ϵF2^ni by the encoding algorithm of the basic code Ci[ni, ki], for i=0, 1.
Forward Superposition: Compute w(0)=v(0)Rv and c(1)=v(1)+w(0).
Backward Superposition: Compute w(1)=c(1)Π and c(0)=v(0)+w(1).
The encoding result is a sequence c=(c(0), c(1))ϵF2^n. Evidently, the TPST code is a block code constructed from the basic codes, which has the coding rate R=k/n=(k0+k1)/(n0+n1).
Let Gi and Hi be a generator matrix and a parity-check matrix of the basic code Ci[ni,ki], for i=0, 1. From the encoding procedure, a generator matrix and a parity-check matrix of the TPST code are given, respectively, by

G_TPST = [ G0(I+RvΠ)   G0Rv
           G1Π         G1 ]

and

H_TPST = [ H0          H0Π^T
           H1Rv^T      H1(I+Rv^TΠ^T) ].
From the construction, it can be seen that the generator matrix of a TPST code is decomposed into three parts: a block diagonal one corresponding to the basic codes, a block upper triangle matrix corresponding to the forward superposition, and a block lower triangle matrix corresponding to the backward superposition. Therefore, similar to the PAC code construction, which is regarded as a form of upper-lower decomposition of the generator matrix, the TPST code construction may be regarded as a block upper-lower decomposition of the generator matrix in an alternative way.
The generator matrix of a TPST code consists of three types of submatrices, the basic code generator matrices G0 and G1, the random transformation Rv and the selection matrix Π.
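For illustration only, the following Python/NumPy sketch assembles such a generator matrix as the product of the block diagonal, block upper triangular (forward superposition) and block lower triangular (backward superposition) factors described above; the matrices supplied by the caller are placeholders.

```python
import numpy as np

def tpst_generator(G0, G1, Rv, Pi):
    """TPST generator matrix over GF(2) as a product of three block factors.

    G0, G1 : k0 x n0 and k1 x n1 basic-code generator matrices
    Rv     : n0 x n1 random transformation
    Pi     : n1 x n0 selection (partial sampler) matrix
    """
    k0, n0 = G0.shape
    k1, n1 = G1.shape
    D = np.block([[G0, np.zeros((k0, n1), int)],
                  [np.zeros((k1, n0), int), G1]])                       # basic codes
    F = np.block([[np.eye(n0, dtype=int), Rv],
                  [np.zeros((n1, n0), int), np.eye(n1, dtype=int)]])    # forward superposition
    B = np.block([[np.eye(n0, dtype=int), np.zeros((n0, n1), int)],
                  [Pi, np.eye(n1, dtype=int)]])                          # backward superposition
    return D @ F @ B % 2
```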
For decoding, for simplicity, it is assumed that c is modulated using binary phase-shift keying (BPSK) and transmitted over an additive white Gaussian noise (AWGN) channel. The resulting received sequence is denoted by y and the channel transition probability density is denoted by W(y|x) for y∈R, x∈{0, 1}. A notation with a hat sign ({circumflex over ( )}) is used to denote the corresponding estimated messages according to the decoder output.
Given a received sequence y=(y(0), y(1)), the optimal decoding in terms of minimizing the FER is the ML decoding, which finds the TPST codeword ĉ such that ĉ=arg max_{cϵC} P(y|c).
Let lmax be a positive integer. A sub-optimal solution is to apply the successive cancellation list decoding (which is indeed optimal when the list size lmax=2^k0), which proceeds in the following two steps.
(1) From the backward superposition as shown in
v(0)=c(0)+c(1)Π. (4)
Hence, similar to polar codes, the log-likelihood ratios (LLRs) of v(0) can be computed from (y(0), y(1)), the noisy version of (c(0), c(1)). To be precise, by assuming that c(1) is identically uniformly distributed (i.u.d.), and assuming for simplicity that Π is a diagonal matrix and n0=n1=n/2, the LLR of v(0) at position j, for j=0, 1, . . . , n/2−1, is
Λj(v(0)) = log { [W(yj(0)|0)W(yj(1)|0)+W(yj(0)|1)W(yj(1)|1)] / [W(yj(0)|0)W(yj(1)|1)+W(yj(0)|1)W(yj(1)|0)] }
if Πjj=1, and Λj(v(0)) = log [W(yj(0)|0)/W(yj(0)|1)] if Πjj=0, where the subscript j denotes the j-th component of the sequence. Taking Λ(v(0)) as input to the basic list decoder of C0, a list of candidate codewords {circumflex over (v)}l(0), l=1, 2, . . . , lmax, is obtained.
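For illustration, a minimal sketch of this LLR computation under the stated assumptions (BPSK mapping 0→+1, 1→−1 over AWGN with noise variance σ², positions with Πjj=1); the combination is the familiar check-node ("box-plus") rule.

```python
import numpy as np

def channel_llrs(y, sigma2):
    """Bit LLRs log W(y|0)/W(y|1) for BPSK (0 -> +1, 1 -> -1) over AWGN."""
    return 2.0 * y / sigma2

def llr_v0(llr_c0, llr_c1):
    """LLRs of v(0)_j = c(0)_j XOR c(1)_j: box-plus combination of the two channel LLRs."""
    prod = np.clip(np.tanh(llr_c0 / 2.0) * np.tanh(llr_c1 / 2.0), -0.999999, 0.999999)
    return 2.0 * np.arctanh(prod)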
(2) From the encoding process of TPST, c(0)=v(0)(I+RvΠ)+v(1)Π and c(1)=v(0)Rv+v(1), indicating that v(1) is transmitted twice (with partial masking if partial superposition is employed). Hence, if v(0) is available at the decoder, the LLRs of v(1) can be computed, for j=0, 1, . . . , n/2−1, by removing the known offsets v(0)Rv and v(0)(I+RvΠ) from the observations y(1) and y(0), respectively, and summing the resulting per-bit LLRs. However, since v(0) is unknown at the receiver, Λl(v(1)) is calculated instead by treating each candidate codeword {circumflex over (v)}l(0) as correct. Then, {circumflex over (v)}l(1) is estimated by taking Λl(v(1)) as input to the basic decoder of C1.
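Continuing the illustrative sketch for step (2), under the same assumptions (Π equal to the identity and BPSK/AWGN channel LLRs as above):

```python
import numpy as np

def llr_v1_given_candidate(llr_c0, llr_c1, v0_hat, Rv):
    """LLRs of v(1) for one candidate v0_hat, assuming Pi = I (full superposition).

    With c(1) = v(0)Rv + v(1) and c(0) = v(0) + c(1), removing v0_hat turns both
    received halves into observations of v(1), which are summed (repetition)."""
    a = v0_hat @ Rv % 2              # offset of v(1) as seen through c(1)
    b = (v0_hat + a) % 2             # offset of v(1) as seen through c(0)
    return (1 - 2 * a) * llr_c1 + (1 - 2 * b) * llr_c0
```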
The LLRs Λl(v(1)), calculated as above, match the channel if and only if the candidate {circumflex over (v)}l(0) is correct. In the case when {circumflex over (v)}l(0) is erroneous, the mismatch can be roughly measured by the weight of the binary interference Δv(I+RvΠ, Rv), where Δv={circumflex over (v)}l(0)+v(0)≠0. Since Rv is randomly generated, a nonzero Δv may cause a significant change in the joint typicality between (ĉl(0), ĉl(1)) and (y(0), y(1)).
For each candidate pair {circumflex over (v)}l=({circumflex over (v)}l(0), {circumflex over (v)}l(1)), the decoder computes the likelihood P(y|ĉl) of the corresponding TPST codeword and selects the most likely one as the decoding output.
An exemplary decoding procedure is summarized in Algorithm 2.
Algorithm 2: Successive Cancellation List Decoding of TPST
The performance of the proposed embodiments is now analysed, beginning with performance bounds. For analysing the performance of the proposed SCL decoding algorithm for TPST codes, the following events are considered: E0, the event that the transmitted upper-layer codeword v(0) is not included in the decoding list of Layer 0; E1, the event that the decoding of Layer 1 is erroneous given the correct v(0); and E2, the event that the decoder outputs a valid TPST codeword with a higher likelihood than the transmitted one.
Hence, the FER performance of the TPST code with the proposed SCL decoding may be expressed as
FERSCL=P(E0∪E1∪E2). (8)
The following are lower bounds:
(1) Genie-Aided Bound P(E0) for Layer 0
The received vector y(0) can be viewed as a noisy version of v(0), where c(1) is considered as binary interference with side information y(1). The link v(0)→y(0) is referred to as the binary interference AWGN channel. From (8), P(E0) is a lower bound of FERSCL. To estimate P(E0), a genie-aided list decoder is considered, in which the decoder outputs the transmitted codeword whenever it is in the list (as told by a genie). Hence, the genie-aided bound P(E0) may be obtained by simulations on the list decoding performance of the basic code for Layer 0 over the binary interference AWGN channel.
(2) Genie-Aided Bound P(E1) for Layer 1
By removing the effect of v(0), both the received vector y(0) (or only part of y(0) when partial superposition is employed) and y(1) may be equivalently viewed as noisy versions of v(1) transmitted over the AWGN channel. The link v(1)→{(y(0), y(1))|v(0)} is referred to as the repetition AWGN channel. Obviously, P(E1) is also a lower bound of FERSCL. To estimate P(E1), a genie-aided decoder for Layer 1 is considered, in which the decoder is told the correct v(0) by a genie. Hence, P(E1) may be obtained by simulations on the performance of the basic code for Layer 1 over the repetition AWGN channel.
It may be noticed that both P(E0) and P(E1) are irrelevant to the random transformation Rv. This makes it more convenient to analyze the performance by the lower bound given by
FERSCL ≥ max{P(E0), P(E1)}, (9)
which is denoted as the genie-aided (GA) bound for the TPST construction in the sequel.
As a first example (Example 1), TBCCs are taken as basic codes, resulting in TPST-TBCCs. The basic code for Layer 0 is decoded by the serial list Viterbi algorithm (SLVA), and the basic code for Layer 1 is decoded by the Viterbi algorithm (VA). First, the (2, 1, 4) TBCC defined by the polynomial generator matrix G(D)=(1+D^2+D^3+D^4, 1+D+D^4) (also denoted in octal notation as (56, 62)8) with information length 32 is taken as the basic code for both layers. The random matrix Rv is sampled from the permutation matrices, and Π is taken to be the identity matrix for simplicity. As a result, the constructed TPST-TBCC is a rate-1/2 block code with length 128. The genie-aided bounds are shown in
(3) Lower Bound P(E2) for ML Decoding
Let FERML be the FER performance of the TPST code with ML decoding. Since the ML decoding outputs the most likely codeword, an error occurs if and only if the transmitted codeword c is not the one with the highest likelihood among the codebook C. Therefore, once the given decoder outputs a valid codeword that is more likely than the transmitted one, the ML decoding will surely make an error in this instance as well. Hence, P(E2) is not only a lower bound of FERSCL, but also a lower bound of FERML. P(E2) may be estimated by simulating the frequency of the event that the decoding output has a higher likelihood than the transmitted one. Then the performance of the ML decoding is bounded by
P(E2) ≤ FERML ≤ FERSCL. (10)
Now, it is shown that the proposed SCL decoding attains near-ML performance. By applying the union bound technique, an upper bound on FERSCL is obtained from (8) as
FERSCL ≤ P(E0)+P(E1)+P(E2). (11)
Therefore, the gap between the SCL performance and the ML performance may be upper bounded by
FERSCL−FERML ≤ FERSCL−P(E2) ≤ P(E0)+P(E1). (12)
Hence, by lowering the genie-aided bounds for the two layers, one may construct a TPST code whose SCL decoding algorithm attains near-ML performance.
More specifically, if the basic decoder for Layer 1 is an ML decoder (e.g., taking a TBCC with the Viterbi decoding algorithm as the basic code), the upper bound on the gap may be further tightened. In this case, the instance in which v(0) of Layer 0 is in the list and the basic decoder of Layer 1 outputs a basic codeword {circumflex over (v)}(1) other than v(1) implies that the SCL decoder finds a valid TPST codeword more likely than the transmitted one. Hence, one has E1⊆E2 and
FERSCL=P(E0∪E2) ≤ P(E0)+P(E2). (13)
Then the gap between the SCL performance and the ML performance is upper bounded by
FERSCL−FERML ≤ FERSCL−P(E2) ≤ P(E0), (14)
which is the genie-aided bound for Layer 0.
Besides the performance bounds, complexity analysis and early termination are now considered. From Algorithm 2, the complexity of the proposed SCL decoding mainly consists of two parts: the list decoding for Layer 0 and the decoding for Layer 1 for each candidate sub-codeword.
s-state TBCCs are taken as basic codes, which are decoded by the SLVA with list size L for Layer 0 and by the VA for Layer 1. For simplicity, the add-compare-select operation is taken as an atomic operation to measure the complexity. For Layer 0, by going through all the s states as initial states, s²k0 operations are needed to find the best candidate, while only k0 operations are needed for each of the other L−1 candidates, i.e., the SLVA requires s²k0+(L−1)k0 operations. For Layer 1, the VA is employed for each candidate, which needs Ls²k1 operations. Hence, the total number of operations for the proposed SCL decoding is given by
#Opt = (s²+L−1)k0 + Ls²k1. (15)
It can be seen that the decoding complexity is dominated by the term Ls²k1. One way to lower the complexity is to employ the WAVA for decoding the basic code of Layer 1, where with I=4 wrap-around iterations the decoder performs nearly optimally. In the test discussed in a later section, the constructed TPST-TBCC with 16 states performs similarly to a TBCC with 256 states.
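As a purely illustrative evaluation of (15) for the 16-state, rate-1/2 example above (s=16, k0=k1=32; the chosen list sizes are arbitrary):

```python
# Rough count of add-compare-select operations per (15); illustrative only.
s, k0, k1 = 16, 32, 32
for L in (1, 32, 2048):
    ops = (s**2 + L - 1) * k0 + L * s**2 * k1
    print(L, ops)
```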
To attain a near-ML performance, the decoding requires a large list size lmax, which incurs a high decoding complexity. However, the average list size for v(0) to be included in the decoding list is typically small, especially in the high SNR region. Hence, to reduce the complexity, the serial list decoding of Layer 0 is considered, and a criterion for identifying the correct codeword is designed, so that the list decoding can be early terminated.
The metric of the empirical divergence function (EDF) for the early termination is given as D(y, v)=(1/n) log [P(y|v)/P(y)], where P(y) is obtained by assuming a uniformly distributed input. The correct candidate v will typically result in D(y, v)≈I(V; Y)>0. Conversely, an erroneous candidate may cause a significant change in the joint typicality between (ĉl(0), ĉl(1)) and (y(0), y(1)). Hence, the EDF associated with an erroneous candidate is typically less than that of the correct one. An off-line learned threshold T is set for early termination, where the decoding candidate {circumflex over (v)}l=({circumflex over (v)}l(0), {circumflex over (v)}l(1)) is treated as correct if D(y, {circumflex over (v)}l)>T. The SCL decoding with early termination is described in Algorithm 3 below.
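A minimal sketch of the EDF computation for BPSK over AWGN (illustration only; the base-2 logarithm and the channel model are assumptions of this sketch, not specified above):

```python
import numpy as np

def edf(y, c_hat, sigma2):
    """D(y, c_hat) = (1/n) log2 [ P(y|c_hat) / P(y) ] for BPSK (0 -> +1, 1 -> -1)
    over AWGN; P(y) assumes a uniformly distributed input."""
    x = 1.0 - 2.0 * c_hat                          # BPSK mapping of the candidate
    logp_cond = -(y - x) ** 2 / (2 * sigma2)       # log W(y_i|c_hat_i), up to a common constant
    logp_0 = -(y - 1.0) ** 2 / (2 * sigma2)
    logp_1 = -(y + 1.0) ** 2 / (2 * sigma2)
    logp_marg = np.logaddexp(logp_0, logp_1) - np.log(2.0)   # common constant cancels
    return np.sum(logp_cond - logp_marg) / (len(y) * np.log(2.0))
```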
Algorithm 3: Successive Cancellation List Decoding of TPST with Early Termination
The design for the finite-length regime is now explained. As discussed above, the performance of a TPST code is lower bounded by max{P(E0), P(E1)}, and the gap to the ML decoding is upper bounded by P(E0)+P(E1). However, Example 1 shows that there may exist a gap between the genie-aided bounds of the two layers, P(E0)>>P(E1), resulting in poor performance. To improve the performance, two approaches are considered to narrow the gap. One is to reduce the coding rate of Layer 0 by choosing basic codes with different rates, and the other is to improve the channel for Layer 0 by introducing partial superposition during the backward superposition to reduce the binary interference from Layer 1.
The first approach is rate allocation. From the derivation of the genie-aided lower bounds, when full superposition is employed, the genie-aided bounds may be obtained by simulations on the basic codes transmitted over binary interference AWGN channels and the repetition AWGN channels. With the help of genie-aided bounds, the basic codes with different rates may be chosen to construct TPST codes with good performance within an SNR region of interest.
Suppose that there is a family of basic codes of length n with different rates with their genie-aided bounds available. For example, the basic codes with different rates can be constructed by puncturing from a mother code and the genie-aided bounds can be obtained by off-line simulations over the binary interference AWGN channel and the repetition AWGN channel. Then the procedure of designing TPST code with length 2n and dimension k by the rate allocation approach is described as follows.
As another example, random codes with ordered-statistic list decoding are considered as basic codes for Layer 0 and (punctured) TBCCs with Viterbi decoding as basic codes for Layer 1. To construct good TPST codes with length 128 and dimension 64 in the SNR region near 3.5 dB, the (punctured) (2, 1, 4) TBCC defined by the polynomial generator matrix G(D)=(56, 62)8 is used to construct the basic codes with different rates. The genie-aided bounds of the basic codes are shown in
Alternatively, TBCC with list Viterbi algorithm is considered as the basic code for Layer 0. A (3, 1, 4) TBCC defined by the polynomial generator matrix G(D)=(52, 66, 76)8 is punctured to fit the length 64 and dimension 29, where the performance is shown in
It can be seen from the above examples that the list Viterbi decoding of TBCC is more efficient than the OSD decoding in the sense of list size. Hence, TBCC is considered as basic codes for the TPST code construction in the sequel.
The second approach is by partial superposition. Different from the rate allocation approach that adjusts the coding rate of the two layers of TPST codes, the following partial superposition approach adjusts the channels for the two layers. The so-called superposition fraction α is introduced to characterize the partial superposition. Given α, one may construct the binary diagonal matrix Π by assigning └n1α┘ ones and n1−└n1α┘ zeros homogeneously (as uniformly as possible) to the n1 positions. For simplicity of designing, the same basic code may be employed on both layers.
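A minimal sketch of constructing such a diagonal partial sampler from a given superposition fraction α (illustrative only; assumes n0=n1):

```python
import numpy as np

def partial_sampler(n1, alpha):
    """Binary diagonal partial sampler with floor(n1*alpha) ones spread as
    uniformly as possible over the n1 diagonal positions."""
    ones = int(np.floor(n1 * alpha))
    diag = np.zeros(n1, dtype=int)
    if ones > 0:
        idx = np.floor(np.linspace(0, n1 - 1e-9, ones)).astype(int)  # homogeneous spacing
        diag[idx] = 1
    return np.diag(diag)
```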
Recall that the link v(0)→y(0) can be viewed as the binary interference AWGN channel, where c(1) is considered as binary interference with side information y(1). Therefore, P(E0) can be lowered by reducing the binary interference, which can be achieved by decreasing the superposition fraction α.
Recall that with v(0) available, the link v(1)→{(y(0), y(1))|v(0)} can be viewed as the repetition AWGN channel, where v(1) is transmitted twice over AWGN channels (one transmission with partial masking). Therefore, decreasing the superposition fraction α leads to performance degradation of the genie-aided decoder for v(1) and hence increases P(E1).
In summary, compared with full superposition, decreasing the superposition fraction α can narrow the gap between the two genie-aided bounds, as illustrated in the following example.
In this example, the same basic code as in Example 1 is taken, and TPST codes are constructed with different superposition fractions α=0.5, 0.75 and 1.0. The genie-aided bounds for lmax=2048 are shown in
The following presents numerical simulations based on embodiments of the present disclosure. The flexibility of the construction lies in the following facts. 1) Any block codes with a fast encoding and an efficient list decoding can be adopted as basic codes. 2) Given a family of basic codes with different rates, it is easy to construct good TPST codes within an SNR region of interest by the rate allocation approach. 3) For basic codes with a given coding rate, the construction can be easily improved by optimizing the superposition fraction. In this subsection, TPST-TBCCs are taken as examples, and it is shown by numerical simulations that the constructed TPST-TBCCs, with a wide range of coding rates, have good performance in the finite length regime.
Examples are provided below to show the construction with different basic codes. TBCCs with different coding lengths and rates are used as the mother codes for the basic codes, where their generators (in octal notation) are listed in Table I. For example, to construct a basic codeword with coding length 64 and rate 3/4, every third bit of the TBCC C[96, 48] encoder output is punctured. The TPST codes are encoded with superposition fraction α=0.75 and decoded with list size lmax=2048. The simulation results are shown in
To investigate the trade-off between complexity and performance, the same basic code as in Example 1 is taken. The TPST code is encoded with superposition fraction α=0.75 and decoded with the maximum list size lmax=2048, where different thresholds are used for early termination. The simulation results are shown in
A novel coding scheme referred to as twisted-pair superposition transmission (TPST) is disclosed herewith, which is constructed by mixing up basic codes by superposition. Based on the list decoding of the basic codes, an SCL decoding algorithm is presented for the TPST codes. To reduce the complexity, thresholds may be set for early termination of the decoding, which can significantly reduce the decoding complexity at the expense of a slight decoding performance degradation. Bounds on the FER performance are derived, which show that the performance of the proposed SCL decoding is near-ML and can be easily predicted by the genie-aided bounds obtained from the basic codes. Based on the genie-aided bounds, two design approaches are presented for the construction of TPST codes. Taking TBCCs as basic codes, it is shown by numerical simulations that the constructed TPST-TBCCs have performance close to the RCU bound over a wide range of coding rates in the short length regime.
The novel coding scheme may be implemented in an apparatus or a device. The implementation principles and technical effects of the device provided in the embodiments of the application are the same as those of the foregoing method embodiments.
An embodiment of the present application provides an electronic apparatus or device. The electronic device comprises: a processor, a memory, and a communication interface. The processor, the communication interface, and the memory are connected; the processor is configured to execute an executable module stored in the memory, such as a computer program. The processor implements the steps of the method described in the method embodiments when the processor executes the computer program.
The memory may include a high-speed random access memory (RAM), and may also include a non-volatile memory, such as at least one disk memory. The channel communication connection between the encoder and the decoder is realized through a communication means (which may be wired or wireless), such as the Internet, a wide area network, a local area network, a metropolitan area network, etc.
The memory is configured to store a program, and the processor executes the program after receiving an execution instruction. The method defined by the flow processes disclosed in any of the foregoing embodiments of the present application can be applied to, or realized by, the processor.
The processor may be an integrated circuit chip with signal processing capabilities. In the implementation process, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the processor or instructions in the form of software. The aforementioned processor may be a general-purpose processor, including a central processing unit (CPU for short), a network processor (NP), etc.; it may also be a digital signal processor (DSP for short), Application Specific Integrated Circuit (ASIC for short), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, and discrete hardware components. The methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed. The general-purpose processor may be a microprocessor or the processor may also be any conventional processor, etc. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor or by a combination of hardware and software modules in the decoding processor. The software module may be located in random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers and other mature storage media in the field. The storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
In another embodiment, there is also provided a computer-readable medium having non-volatile program code executable by a processor, and the program code causes the processor to execute the steps of the method described above.
In the several embodiments provided in this application, it should be understood that the disclosed device and methods may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of units is only a logical function division, and there may be other divisions in actual implementation. For further example, multiple units or components can be combined or integrated into another system, or some features can be ignored, or not implemented. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some communication interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present application essentially or the part that contributes to the existing technology or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including several instructions to make a computer device (which can be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage media include: U disk, mobile hard disk, read-only memory (ROM), random access memory (RAM), magnetic disks or optical disks and other media that can store program codes.
Finally, it should be noted that the above-mentioned embodiments are only specific implementations of this application, which are used to illustrate the technical solutions of this application rather than to limit them, and the scope of protection of the application is not limited thereto. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be changed or equivalently replaced. However, such modifications, changes or replacements do not cause the essence of the corresponding technical solutions to deviate from the spirit and scope of the technical solutions of the embodiments of the present application, and should be covered within the protection scope of the present application. Therefore, the protection scope of this application should be subject to the protection scope of the claims.