The present invention generally relates to defining an order in which kernels of a polar code structure are processed during beliefs propagation, and consequently to performing the beliefs propagation by processing the kernels in the order thus defined.
Polar codes are linear block error correcting codes built from information theory considerations instead of relying on algebraic constructions. Polar codes are based on a structure built of multi-branched recursions of a kernel, which transforms a physical channel into virtual outer channels. When the quantity of recursions becomes large, the virtual channels tend to either have high reliability or low reliability. In other words, the virtual channels polarize. Useful data bits, also referred to as information bits, are then allocated to the most reliable virtual channels and frozen bits are allocated to the remaining virtual channels.
Encoding and decoding complexity of polar codes is in the order of N·log(N), where N is the size of the considered polar code. However, performance of polar codes is rather poor, compared to other coding techniques such as Turbo codes or LDPC (Low Density Parity Check) codes, when N is small, such as N=512. Moreover, polar codes shall be optimized for a given BDMC (Binary Discrete-input Memoryless Channel) over which said polar codes are intended to be used.
In the following, we consider a system, consisting of an encoder 110 and a decoder 120, as schematically represented in
Let's denote xj′ the j′-th entry of the vector x, and xj′:k the vector of size (k−j′+1) consisting of the entries xj′, xj′+1, . . . , xk extracted from the vector x. Furthermore, let's denote xj′:m:k the vector consisting of the entries xj′, xj′+m, . . . , xj′+m·└(k−j′)/m┘ extracted from the vector x, wherein └u┘ represents the floor of u.
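As an illustration, the extraction notation above maps directly onto array slicing; the short Python sketch below is purely illustrative and the variable names are not part of the specification.

```python
# Illustrative only: the 1-based notation x_{j':k} and x_{j':m:k} mapped onto Python's
# 0-based slicing. The variable names are hypothetical.
x = [10, 11, 12, 13, 14, 15, 16, 17]   # a vector x of size N = 8

j_prime, k, m = 2, 7, 2                # 1-based indices as in the text

x_jk = x[j_prime - 1:k]                # x_{j':k} = (x_{j'}, x_{j'+1}, ..., x_k), size k - j' + 1
x_jmk = x[j_prime - 1:k:m]             # x_{j':m:k} = (x_{j'}, x_{j'+m}, ..., x_{j'+m*floor((k-j')/m)})

print(x_jk)    # [11, 12, 13, 14, 15, 16]
print(x_jmk)   # [11, 13, 15]
```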
It is thus considered herein a polar code-based encoder of rate R<1, which converts a vector b of size N consisting of information bits and frozen bits into a codeword x of size N. It has to be noted that N·R information bits exist amongst the N entries of the vector b. It has then to be further noted that N·(1−R) frozen bits exist amongst the N entries of the vector b, and the positions and values of said frozen bits inside the vector b are known by design. Most of the time, the frozen bits are set to the value “0” and their positions inside the vector b are represented by a vector F such that F(j′)=1 if the bit at the j′-th position in the vector b carries a frozen bit and such that F(j′)=0 if the bit at the j′-th position in the vector b carries an information bit.
Thus as shown in
A major aspect of polar codes lies in the fact that the conversion from the vector b to the codeword x is static whatever the effective code rate R and effective characteristics of the considered BDMC, which means that the structure of the polar code remains constant. Rate adaptation and performance optimization are then performed by adequately choosing the positions of the frozen bits in the vector b.
As shown in the modular architecture schematically represented in
The input of the parallel Kernels Ki,1, . . . , Ki,N/2 is a vector xi,1:N(in). The output of the parallel Kernels Ki,1, . . . , Ki,N/2 is a vector xi,1:N(out). The relationship between the input of the parallel Kernels Ki+1,1, . . . , Ki+1,N/2 at depth position i+1 and the output of the parallel Kernels Ki,1, . . . , Ki,N/2 at depth position i is as follows:
xi+1,1:N(in)=φi(xi,1:N(out))
and
xi,1:N(out)=φi−1(xi+1,1:N(in))
where φi is the shuffling operation performed by the shuffler Φi at the i-th depth position and φi−1 is the inverse shuffling operation related to said shuffler Φi at the i-th depth position. For example, the shuffling operation φi is defined as follows
Alternatively, the shuffling operation φi is defined as follows
Other alternative mixes between the shuffler inputs may be performed by the shuffling operations; for example, the shuffling operation performed by the shuffler Φi at the i-th depth position may differ from the shuffling operation performed by the shuffler Φi−1 at the (i−1)-th depth position and/or from the shuffling operation performed by the shuffler Φi+1 at the (i+1)-th depth position.
It can thus be noted that the vector b is input as the concatenation of x1,j′(in) (∀0<j′≤N) and that the codeword x is output as the concatenation of xL,j′(out) (∀0<j′≤N).
As a note, kernels that are interconnected (via one shuffler) in the structure of the polar code are designated as neighbours with respect to each other. Thus, each binary kernel has at most four neighbours since it has two inputs and two outputs. In general, it is considered that all inputs and outputs of a binary kernel are connected to different neighbours, which leads to exactly four neighbours per kernel Ki,1:N/2 for 1<i<L.
wherein ⊕ represents the XOR function.
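As a hedged illustration of one sub-polarization stage, the sketch below assumes the standard 2×2 Arıkan kernel (x1, x2)=(u1⊕u2, u2) and, purely as an example, an even/odd interleaving for the shuffler; the actual kernel and shuffling operations are those defined by the figures referred to above.

```python
from typing import List

def kernel_encode(u1: int, u2: int) -> tuple:
    """Standard 2x2 polarization kernel: (u1, u2) -> (u1 XOR u2, u2)."""
    return (u1 ^ u2, u2)

def stage_encode(x_in: List[int]) -> List[int]:
    """Apply the N/2 parallel kernels of one sub-polarization stage; each kernel acts on
    one pair of consecutive entries of the stage input vector."""
    x_out = list(x_in)
    for j in range(0, len(x_in), 2):
        x_out[j], x_out[j + 1] = kernel_encode(x_in[j], x_in[j + 1])
    return x_out

def example_shuffle(x: List[int]) -> List[int]:
    """Hypothetical shuffler phi_i: even-indexed entries first, then odd-indexed ones."""
    return x[0::2] + x[1::2]

# One recursion step: x_{i+1}^(in) = phi_i(x_i^(out))
x_in = [1, 0, 1, 1, 0, 0, 1, 0]
x_next = example_shuffle(stage_encode(x_in))
print(x_next)
```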
Consequently, each entry xj′(∀0<j′≤N) of the output codeword x is transmitted on a BDMC defined by a transition probability P(yj′|xj′). For example, when considering an AWGN (Additive White Gaussian Noise) channel with BPSK (Binary Phase Shift Keying) input, the transition probability P(yj′|xj′) is defined by
where ρj′ is the signal to noise ratio of said AWGN channel.
Furthermore, Log-Likelihood Ratio (LLR) Lj′(x) can be defined as
Lj′(x)=log(P(yj′|xj′=0)/P(yj′|xj′=1))
which characterizes the probability that xj′=1 or xj′=0 when observing yj′. Positive values of Lj′(x) are associated with a higher probability of getting xj′=0. Equivalently, the a priori Log-Likelihood Ratio (LLR) Lj′(b) associated with the vector b can be defined as
∀0<j′≤N, F(j′)=1 ⇒ Lj′(b)=+∞, F(j′)=0 ⇒ Lj′(b)=0,
which expresses that, if the bit bj′ of the vector b is a frozen bit, the LLR is initialized to an infinite (or high enough) value since it is known for sure that the associated bit is null, while for an information bit coming from the vector b′, no a priori information is known (i.e. it is not known whether the information bit equals “1” or “0”), which leads to a zero initialization of the LLR.
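A minimal sketch of this a priori LLR initialization, assuming frozen bits set to “0” and a large finite constant standing in for +∞ (the function name and the constant are implementation choices, not part of the specification):

```python
import numpy as np

def init_a_priori_llr(F: np.ndarray, llr_infinity: float = 1e6) -> np.ndarray:
    """Build L(b): a large positive value at frozen positions, 0 at information positions."""
    L_b = np.zeros_like(F, dtype=float)
    L_b[F == 1] = llr_infinity      # frozen bit known to be 0 -> very high positive LLR
    return L_b

# Example with N = 8 and rate R = 1/2: four frozen positions (F(j') = 1), four information positions.
F = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(init_a_priori_llr(F))
```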
As shown in the modular architecture schematically represented in
The behaviour of the decoder 120 is schematically shown by the algorithm in
In a step S451, the decoder 120 initializes a vector L(b) first as a null vector, and then the decoder 120 modifies the vector L(b) according to the knowledge of the positions of the frozen bits used by the encoder 110, as follows:
∀0<j′≤N,F(j′)=1⇒Lj′(b)=+∞
where +∞ indicates that the LLR Lj′(b) of the j′-th bit gives ‘0’ as a value for this bit with probability “1”. It has to be noted that, if the frozen bits are initialized with a “1” value instead of “0”, the value of the LLR should be “−∞”, and that “+∞” is numerically represented by a value that is sufficiently high to be beyond the range of the LLR values that can exist at the input of any polarizer. It means that, for any index value that corresponds to a position of a frozen bit in the vector b, the value of the vector L(b) at said index value is set to infinity or a default value representative thereof, and set to “0” otherwise.
In a following step S452, the decoder 120 initializes internal variables Li,1:N(in) and Li,1:N(out), ∀0<i≤L, with null values. Said internal variables Li,1:N(in) and Li,1:N(out) are intended to be propagated along the recursions of the beliefs propagation.
In a following step S453, the decoder 120 performs the beliefs propagation using the internal variables Li,1:N(in) and Li,1:N(out), according to the known structure of the polar code, to the observations Lj′(x) in LLR form and to the frozen bit information Lj′(b) in LLR form. In other words, the decoder 120 uses the algorithm schematically shown in
In a following step S454, the decoder 120 makes a decision on each bit of the estimation b̂ of the vector b according to the estimates L̂(b) output by the beliefs propagation decoder 422. The decision is made as follows:
Normally, L̂j′(b), ∀0<j′≤N, should not be equal to “0”. In case such a situation however occurs, the decoder 120 arbitrarily sets the corresponding bit b̂j′ either to “0” or to “1”.
Then, since the decoder 120 knows the positions of the frozen bits in the vector b, the decoder is able to extract therefrom the information bits so as to form the estimation b̂′ of the vector b′, which ends the transfer of said information bits from the encoder 110 to the decoder 120 using the polar code approach.
The behaviour of the belief propagation decoder 422, as apparent in the step S453, is schematically shown by the algorithm in
Belief propagation propagates probabilities (beliefs) from left to right and right to left within each kernel of the polarizer POL-N 220. Referring to a kernel Ki,j schematically represented in
In a step S471, the belief propagation decoder 422 computes all the values L̂i,2j−1:2j(out) and L̂i,2j−1:2j(in), ∀0<i≤L, ∀0<j≤N/2 (or in other terms, the belief propagation decoder 422 computes all the values L̂i,j′(out) and L̂i,j′(in), ∀0<i≤L, ∀0<j′≤N), by performing the following operations:
L̂i,2j−1(in)=f1(in)(Li,2j−1:2j(in), Li,2j−1:2j(out))
L̂i,2j(in)=f2(in)(Li,2j−1:2j(in), Li,2j−1:2j(out))
L̂i,2j−1(out)=f1(out)(Li,2j−1:2j(in), Li,2j−1:2j(out))
L̂i,2j(out)=f2(out)(Li,2j−1:2j(in), Li,2j−1:2j(out))
In a following step S472, the belief propagation decoder 422 applies the inverse shuffling operations φi−1−1, ∀1<i≤L, onto the data L̂i,1:N(in) so as to enable updating the data Li−1,1:N(out). It can be noted that the data Li−1,1:N(out) are here pre-computed and effectively updated later on in a step S474.
In a following step S473, the belief propagation decoder 422 applies the shuffling operations φi, ∀0<i<L, onto the data L̂i,1:N(out) so as to enable updating the data Li+1,1:N(in). It can be noted that the data Li+1,1:N(in) are here pre-computed and effectively updated later on in the step S474.
It can further be noted that the steps S472 and S473 may be inverted or executed in parallel.
In the following step S474, the belief propagation decoder 422 effectively updates the data Li−1,1:N(out) and Li+1,1:N(in), and returns the beliefs L̂i,1:N(in) and L̂i,1:N(out) following the beliefs propagation performed in the steps S472 and S473.
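The update functions themselves are defined by the equations of the figures. Purely as a hedged illustration, the sketch below uses the box-plus update rules commonly associated with belief propagation over the standard 2×2 Arıkan kernel; an actual implementation must use the kernel update functions of the considered polar code.

```python
import math


def boxplus(a: float, b: float) -> float:
    """Box-plus operation 2*atanh(tanh(a/2)*tanh(b/2)); the min-sum approximation
    sign(a)*sign(b)*min(|a|, |b|) is often used instead."""
    p = math.tanh(a / 2.0) * math.tanh(b / 2.0)
    p = max(min(p, 0.999999999), -0.999999999)   # numerical guard for atanh
    return 2.0 * math.atanh(p)


def kernel_bp_update(L_in1, L_in2, L_out1, L_out2):
    """Compute the four output beliefs of one kernel from its four input beliefs.
    L_in1/L_in2 come from stage i-1 and L_out1/L_out2 from stage i+1 (illustrative rules)."""
    Lhat_in1 = boxplus(L_out1, L_in2 + L_out2)    # returned toward stage i-1, first port
    Lhat_in2 = boxplus(L_in1, L_out1) + L_out2    # returned toward stage i-1, second port
    Lhat_out1 = boxplus(L_in1, L_in2 + L_out2)    # sent toward stage i+1, first port
    Lhat_out2 = boxplus(L_in1, L_out1) + L_in2    # sent toward stage i+1, second port
    return Lhat_in1, Lhat_in2, Lhat_out1, Lhat_out2


print(kernel_bp_update(1.2, -0.4, 0.8, 2.0))
```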
The main strength of the polar code approach is its asymptotic optimality. The asymptotic conditions result in perfect channels (mutual information equal to “1”, or error rate equal to “0”) for transferring information bits and in null channels (mutual information equal to “0”, or error rate equal to “1”) for transferring the frozen bits. It can be seen from the above introductory description that, at finite length, channel polarization has not fully converged to perfect and null equivalent channels, leading to a non-perfect equivalent global channel with non-null error rate and non-unitary mutual information for the information bits. Furthermore, the frozen bits are thus transferred on channels with a non-null mutual information, which involves a loss in information rate since capacity is conserved by the polarizer POL-N 220.
Moreover, the polar code approach requires numerous computation operations for beliefs propagation in the decoder 120. CPU (Central Processing Unit) or GPU (Graphics Processing Unit) based architectures are well adapted to perform these numerous computation operations, but might encounter difficulties in performing them within a constrained timeframe (or within a constrained quantity of processing steps) in view of the quantity of available processing resources. Under such circumstances it is usual to stop the beliefs propagation when the end of the constrained timeframe is reached (or when the constrained quantity of processing steps is reached), which generally refers to meeting a stop condition, and to make a decoding decision as is.
It is thus desirable to provide a solution that improves performance of the beliefs propagation in the decoder when a stop condition leading to making a decoding decision is met. It is further desirable to provide a solution that keeps the low-complexity benefit of the polar codes. It is further desirable to provide a solution that is simple and cost-effective.
To that end, the present invention concerns a method for performing beliefs propagation in a scope of polar code decoding by a decoder, the polar code having a size of N bits and being based on a structure of L sub-polarization stages of N/2 parallel kernels Ki,j, wherein N and L are positive integers such that N=2L, i.e. L=log2(N), wherein i is an index that identifies the sub-polarization stage of the kernel Ki,j and j is an index that represents the position of said kernel Ki,j among the N/2 parallel kernels at the sub-polarization stage identified by the index i, and the N/2 parallel kernels at each sub-polarization stage are separated from their neighbour N/2 parallel kernels at each adjacent sub-polarization stage by a shuffler. The method is such that the decoder performs: computing a value M(i,j) of a performance-improvement metric for each kernel Ki,j of the structure, wherein the performance-improvement metric is representative of a magnitude in which the input beliefs of the considered kernel Ki,j are not in agreement and/or the performance-improvement metric is representative of a magnitude in which the output beliefs of the considered kernel Ki,j bring instantaneous information rate to the neighbour kernels of said considered kernel Ki,j; and sorting the kernels in a list in decreasing order of the values M(i,j) of the performance-improvement metric. The method is further such that the decoder performs a beliefs propagation iterative process as follows: updating output beliefs for the W top kernels of the list, wherein W is a positive integer, and propagating said output beliefs as input beliefs of the neighbour kernels of said W top kernels; updating output beliefs for each neighbour kernel of said W top kernels following update of their input beliefs, and re-computing the performance-improvement metric value M(i,j) for each said neighbour kernel; setting the performance-improvement metric for said W top kernels to a null value; and re-ordering the kernels in the list in decreasing order of the values M(i,j) of the performance-improvement metric. Furthermore, the method is such that the decoder repeats the beliefs propagation iterative process until a stop condition is met.
Thus, by identifying the kernels that bring the most relevant information to the decoding process thanks to the performance-improvement metric as defined above, performance of the beliefs propagation in the decoding process is improved when the stop condition is met.
According to a particular embodiment, the value M(i,j) of the performance-improvement metric for each kernel Ki,j depends on a difference between the sum of the information rate associated to the input beliefs of said kernel Ki,j, and the sum of the information rate associated to each input belief of said kernel Ki,j before the previous update of said kernel Ki,j.
Thus, the magnitude in which the input beliefs of each considered kernel Ki,j are not in agreement is used to sort the kernels and thus to improve the decoding process performance, as a first approach.
According to a particular embodiment, the value M(i,j)=M(1)(i,j) of the performance-improvement metric for each kernel Ki,j is defined as follows
wherein Li,2j−1:2j(in) are the input beliefs coming from the sub-polarization stage i−1 and Li,2j−1:2j(out) are the input beliefs coming from the sub-polarization stage i+1,
and wherein mi,j(1) becomes mi,j(1)old after each effective update of the kernel Ki,j, during which the output beliefs L̂i,2j−1:2j(in) and L̂i,2j−1:2j(out) are computed from the input beliefs Li,2j−1:2j(in) and Li,2j−1:2j(out) so as to be used in a later update of the performance-improvement metric value M(i,j)=M(1)(i,j), and wherein mi,j(1)old=0 at the very first computation of the performance-improvement metric value M(1)(i,j) for the kernel Ki,j.
Thus, in the first approach, the magnitude in which the input beliefs of each considered kernel Ki,j are not in agreement is easily obtained.
According to a particular embodiment, the value M(i,j) of the performance-improvement metric for each kernel Ki,j depends on a sum rate difference between the sum of the information rate associated to input ex-post beliefs of said kernel Ki,j, and the sum of the information rate associated to input ex-post beliefs of said kernel Ki,j before the previous update of said kernel Ki,j, wherein an ex-post belief is a sum of a-priori beliefs and extrinsic beliefs.
Thus, the magnitude in which the input beliefs of each considered kernel Ki,j are not in agreement is used to sort the kernels and thus to improve the decoding process performance, as a second approach.
According to a particular embodiment, the value M(i,j)=M(2)(i,j) of the performance-improvement metric for each kernel Ki,j is defined as follows
wherein Li,2j−1:2j(in) are the input beliefs coming from the sub-polarization stage i−1 and Li,2j−1:2j(out) are the input beliefs coming from the sub-polarization stage i+1,
and wherein L̂i,2j−1:2j(in)old represents the immediately preceding respective values of L̂i,2j−1:2j(in), which are the output beliefs toward the sub-polarization stage i−1, and L̂i,2j−1:2j(out)old represents the immediately preceding respective values of L̂i,2j−1:2j(out), which are the output beliefs toward the sub-polarization stage i+1.
Thus, in the second approach, the magnitude in which the input beliefs of each considered kernel Ki,j are not in agreement is easily obtained.
According to a particular embodiment, the value M(i,j) of the performance-improvement metric for each kernel Ki,j depends on an increase of information output by the kernel Ki,j during the last update of said kernel Ki,j.
Thus, the magnitude in which the output beliefs of each considered kernel Ki,j bring instantaneous information rate to the neighbour kernels of said considered kernel Ki,j is used to sort the kernels and thus to improve the decoding process performance.
According to a particular embodiment, the value M(i,j)=M(3)(i,j) of the performance-improvement metric for each kernel Ki,j is defined as follows
and wherein L̂i,2j−1:2j(in) are the output beliefs toward the sub-polarization stage i−1, and L̂i,2j−1:2j(out) are the output beliefs toward the sub-polarization stage i+1.
Thus, the magnitude in which the output beliefs of each considered kernel Ki,j bring instantaneous information rate to the neighbour kernels of said considered kernel Ki,j is easily obtained.
According to a particular embodiment, the value M(i,j)=M(1)(i,j)+M(3)(i,j) of the performance-improvement metric for each kernel Ki,j is defined as follows
wherein Li,2j−1:2j(in) are the input beliefs coming from the sub-polarization stage i−1 and Li,2j−1:2j(out) are the input beliefs coming from the sub-polarization stage i+1, wherein L̂i,2j−1:2j(in) are the output beliefs toward the sub-polarization stage i−1, and L̂i,2j−1:2j(out) are the output beliefs toward the sub-polarization stage i+1, and wherein mi,j(1) becomes mi,j(1)old after each effective update of the kernel Ki,j, during which the output beliefs L̂i,2j−1:2j(in) and L̂i,2j−1:2j(out) are computed from the input beliefs Li,2j−1:2j(in) and Li,2j−1:2j(out), so as to be used in a later update of the performance-improvement metric value M(i,j)=M(1)(i,j), and wherein mi,j(1)old=0 at the very first computation of the performance-improvement metric value M(1)(i,j) for the kernel Ki,j.
According to a particular embodiment, the value M(i,j)=M(1)(i,j)·M(3)(i,j) of the performance-improvement metric for each kernel Ki,j is defined as follows
wherein Li,2j−1:2j(in) are the input beliefs coming from the sub-polarization stage i−1 and Li,2j−1:2j(out) are the input beliefs coming from the sub-polarization stage i+1, wherein L̂i,2j−1:2j(in) are the output beliefs toward the sub-polarization stage i−1, and L̂i,2j−1:2j(out) are the output beliefs toward the sub-polarization stage i+1, and wherein mi,j(1) becomes mi,j(1)old after each effective update of the kernel Ki,j, during which the output beliefs L̂i,2j−1:2j(in) and L̂i,2j−1:2j(out) are computed from the input beliefs Li,2j−1:2j(in) and Li,2j−1:2j(out) so as to be used in a later update of the performance-improvement metric value M(i,j)=M(1)(i,j), and wherein mi,j(1)old=0 at the very first computation of the performance-improvement metric value M(1)(i,j) for the kernel Ki,j.
According to a particular embodiment, the stop condition is met when one of the following conditions is fulfilled: when a time period of predefined duration has ended since the beliefs propagation iterative process has started; when a predefined quantity of iterations has been performed in the beliefs propagation iterative process; and when the value M(i,j) of the performance-improvement metric of the kernel at the top position in the list is below a predefined threshold.
Thus, processing resources are easily managed for performing the decoding process in an efficient manner.
The present invention also concerns a polar code decoder configured for performing beliefs propagation in a scope of polar code decoding, the polar code having a size of N bits and being based on a structure of L sub-polarization stages of N/2 parallel kernels Ki,j, wherein N and L are positive integers such that N=2L, i.e. L=log2(N), wherein i is an index that identifies the sub-polarization stage of the kernel Ki,j and j is an index that represents the position of said kernel Ki,j among the N/2 parallel kernels at the sub-polarization stage identified by the index i, and the N/2 parallel kernels at each sub-polarization stage are separated from their neighbour N/2 parallel kernels at each adjacent sub-polarization stage by a shuffler. The decoder comprises: means for computing a value M(i,j) of a performance-improvement metric for each kernel Ki,j of the structure, wherein the performance-improvement metric is representative of a magnitude in which the input beliefs of the considered kernel Ki,j are not in agreement and/or the performance-improvement metric is representative of a magnitude in which the output beliefs of the considered kernel Ki,j bring instantaneous information rate to the neighbour kernels of said considered kernel Ki,j; and means for sorting the kernels in a list in decreasing order of the values M(i,j) of the performance-improvement metric. The decoder further comprises means for performing a beliefs propagation iterative process including: means for updating output beliefs for the W top kernels of the list, wherein W is a positive integer, and propagating said output beliefs as input beliefs of the neighbour kernels of said W top kernels; means for updating output beliefs for each neighbour kernel of said W top kernels following update of their input beliefs, and re-computing the performance-improvement metric value M(i,j) for each said neighbour kernel; means for setting the performance-improvement metric for said W top kernels to a null value; and means for re-ordering the kernels in the list in decreasing order of the values M(i,j) of the performance-improvement metric. Furthermore, the decoder is configured for repeating the beliefs propagation iterative process until a stop condition is met.
The present invention also concerns, in at least one embodiment, a computer program that can be downloaded from a communication network and/or stored on a non-transitory information storage medium that can be read by a computer and run by a processor or processing electronics circuitry. This computer program comprises instructions for implementing the aforementioned method in any one of its various embodiments, when said program is run by the processor or processing electronics circuitry.
The present invention also concerns a non-transitory information storage medium, storing a computer program comprising a set of instructions that can be run by a processor for implementing the aforementioned method in any one of its various embodiments, when the stored information is read from the non-transitory information storage medium by a computer and run by a processor or processing electronics circuitry.
In the scope of the present invention, polar code encoding and transmission over BDMC are performed in accordance with the description provided hereinbefore with respect to
In general, the mutual information I(x′;y′) of a BDMC with input x′, output y′ (i.e. observations), and probability function P(y′|x′) characterizing channel transition probabilities, is defined as follows:
I(x′;y′)=1−EL′[log2(1+e−L′)]
where EL′[ ] represents the mathematical expectation over the random variable L′ called Log Likelihood Ratio (LLR) and defined as follows:
L′=log(P(y′|x′)/P(y′|1−x′))
wherein P(y′|x′) and P(y′|1−x′) both depend on the input bits and output channel observations.
Let us define the instantaneous information rate as 1−log2(1+e−L′), i.e. the mathematical expectation of the instantaneous information rate provides the mutual information I(x′;y′).
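A minimal sketch of this instantaneous information rate, evaluated with a numerically stable form of log2(1+e−L′) (the helper name is illustrative):

```python
import numpy as np

def instantaneous_information_rate(llr: np.ndarray) -> np.ndarray:
    """r(L') = 1 - log2(1 + exp(-L')), evaluated in a numerically stable way."""
    return 1.0 - np.logaddexp(0.0, -np.asarray(llr, dtype=float)) / np.log(2.0)

llrs = np.array([0.0, 1.0, 4.0, 10.0, -2.0])
print(instantaneous_information_rate(llrs))
# L' = 0  -> rate 0 (no information); large positive L' -> rate close to 1;
# negative L' (belief contradicting the transmitted bit) -> negative rate.
```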
Let us consider the kernel Ki,j. Each output on the right side of the kernel (as represented in
Let I(xi,2j−1(out); yi,2j−1:2j) be the mutual information between the bit xi,2j−1(out) and a channel observation vector yi,2j−1:2j. Let also I(xi,2j(out); yi,2j−1:2j) be the mutual information between the bit xi,2j(out) and the channel observation vector yi,2j−1:2j. Let also I(xi,2j−1(in); yi,2j−1:2j) be the mutual information between the bit xi,2j−1(in) and the channel observation vector yi,2j−1:2j. Let also I(xi,2j(in); yi,2j−1:2j|xi,2j−1(in)) be the mutual information between the bit xi,2j(in) and the channel observation vector yi,2j−1:2j knowing without error the bit xi,2j−1(in). As a result of the polarization operation and due to the BDMC nature of the channel, the following relationship exists:
which is the capacity conservation property of the considered kernel, under the assumption that xi,2j−1(in) is correctly decoded before trying to decode xi,2j(in).
It is worth noting that, if Li,2j(in)=0, Li,2j−1(in)=(−1)x
which involves that the conservation property is also achieved instantaneously (i.e. without the mathematical expectation being present in the information rates). This property teaches that if a kernel reaches this property after updating L̂i,2j−1(in) and L̂i,2j(in) from Li,2j−1(out) and Li,2j(out), then the beliefs Li,2j−1(out) and Li,2j(out) are in perfect agreement, which involves that the kernel is stable from an iterative processing point of view. Since the kernel operations are symmetric (left to right or right to left), the same observation applies with the comparison of L̂i,2j−1(out) and L̂i,2j(out) with Li,2j−1(in) and Li,2j(in).
As a result, in a particular embodiment, metrics of stability of the kernel can consequently be derived and priority can be given to kernels for which the input beliefs are not in agreement, and which will provide a higher update to their neighbour kernels than other kernels that are already stable.
Moreover, in a particular embodiment, priority can be given to kernels manipulating beliefs with high instantaneous information rate, that will feed their neighbour kernels with more relevant and robust information.
By defining corresponding metrics as described hereafter, priority is given to processing the kernels providing the most beneficial increase to the decoding performance, which thus reduces the complexity of the decoding process for a given performance target.
In a step S501, the decoder 120 computes a value M(i,j) of a performance-improvement metric for each kernel Ki,j (identified by the couple (i,j), ∀i, j), wherein i is an index such that 0<i≤L and j is an index such that 0<j≤N/2. In other words, the index i represents the depth position (sub-polarization stage) of the considered kernel Ki,j identified by the couple (i,j) in the structure of the polar code and j represents the position of said considered kernel among the N/2 parallel kernels at the structure depth position (sub-polarization stage) identified by the index i.
As expressed by the embodiments detailed hereafter, the performance-improvement metric is representative of a magnitude in which the input beliefs of the considered kernel Ki,j are not in agreement (i.e. the higher the value M(i,j) of the performance-improvement metric, the higher the difference between the input beliefs) and/or the performance-improvement metric is representative of a magnitude in which the output beliefs of the considered kernel Ki,j bring instantaneous information rate to the neighbour kernels of said considered kernel Ki,j (i.e. the higher the value M(i,j) of the performance-improvement metric, the higher the instantaneous information rate).
According to a first embodiment, the performance-improvement metric value M(i,j) for each kernel Ki,j (thus identified by the couple (i,j)) depends on a difference between the sum of the information rate associated to the input beliefs of said kernel Ki,j, and the sum of the information rate associated to each input belief of said kernel Ki,j before the previous update of said kernel Ki,j.
In a particular example of the first embodiment, the performance-improvement metric value M(i,j)=M(1)(i,j) is defined as follows in the first embodiment:
and wherein mi,j(1) becomes mi,j(1)old after each effective update of the kernel Ki,j, during which the output beliefs L̂i,2j−1:2j(in) and L̂i,2j−1:2j(out) are computed from the input beliefs Li,2j−1:2j(in) and Li,2j−1:2j(out), so as to be used in a later update of the performance-improvement metric value M(i,j)=M(1)(i,j), and wherein mi,j(1)old=0 at the very first computation of the performance-improvement metric value M(1)(i,j) for the kernel Ki,j.
When at least one input of the kernel Ki,j has been updated after update of a neighbour kernel connected thereto, the kernel Ki,j should also be updated. Sometimes, the update of the kernel Ki,j has no significant impact on the overall polar code performance, which involves that the priority of update of the kernel Ki,j is lower than the priority of update of another kernel that experiences a drastic change in one of its inputs. The performance-improvement metric values M(i,j)=M(1)(i,j) thus quantitatively allow managing priorities between updates of kernels.
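As a hedged sketch only, the following code mirrors the textual definition of this first metric; the exact formula of M(1)(i,j) is the one given above, and the choices made here (evaluating the information rate on the belief magnitude, taking an absolute difference) are assumptions of the sketch, not part of the specification.

```python
import numpy as np


def rate(llr: float) -> float:
    """Instantaneous information rate of a belief, evaluated here on its magnitude |L|."""
    return 1.0 - float(np.logaddexp(0.0, -abs(llr))) / np.log(2.0)


def metric_m1(L_in, L_out, m1_old: float):
    """Sketch of M(1)(i,j): change of the summed information rates of the four input
    beliefs of kernel K_{i,j} since its previous effective update.
    Returns the metric value and the quantity m_{i,j}^(1) to store at update time."""
    m1_new = sum(rate(l) for l in (*L_in, *L_out))
    return abs(m1_new - m1_old), m1_new


# Very first computation: m_{i,j}^(1)old = 0 by convention.
value, m1_store = metric_m1(L_in=(0.5, -0.2), L_out=(1.5, 0.1), m1_old=0.0)
print(value)
# m1_store becomes m_{i,j}^(1)old only when the kernel is effectively updated.
```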
According to a second embodiment, the performance-improvement metric value M(i,j) for each kernel Ki,j (thus identified by the couple (i,j)) depends on a sum rate difference between the sum of the information rate associated to input ex-post beliefs of said kernel Ki,j, and the sum of the information rate associated to input ex-post beliefs of said kernel Ki,j before the previous update of said kernel Ki,j. An ex-post belief is a sum of a-priori beliefs and extrinsic beliefs, which are characterized for example for the input identified by 2j−1 (which is the first input of the kernel Ki,j, the input identified by 2j being the second input of the kernel Ki,j) by Li,2j−1(in)+L̂i,2j−1(in)old, wherein Li,2j−1(in) represents here the corresponding extrinsic belief and wherein L̂i,2j−1(in)old represents here the corresponding a-priori belief.
In a particular example of the second embodiment, the performance-improvement metric value M(i,j)=M(2)(i,j) is defined as follows in the second embodiment:
and wherein L̂i,2j−1(in)old represents the immediately preceding value of L̂i,2j−1(in), L̂i,2j(in)old represents the immediately preceding value of L̂i,2j(in), L̂i,2j−1(out)old represents the immediately preceding value of L̂i,2j−1(out), and L̂i,2j(out)old represents the immediately preceding value of L̂i,2j(out).
This second embodiment is close to the first embodiment, but keeps memory of the last LLR values provided to or resulting from computation of each input/output of each kernel Ki,j, ∀i, j. For example, by summing Li,2j−1(in) and L̂i,2j−1(in)old, information related to the ex-post belief is obtained, instead of only the extrinsic belief Li,2j−1(in) considered in the first embodiment. In order to implement the second embodiment, the decoder 120 needs to store in memory L̂i,2j−1(in), L̂i,2j(in) on one hand and L̂i,2j−1(out), L̂i,2j(out) on the other hand, at each computation of each input/output of each kernel Ki,j, ∀i, j. The stored beliefs respectively become L̂i,2j−1(in)old, L̂i,2j(in)old on one hand and L̂i,2j−1(out)old, L̂i,2j(out)old on the other hand, for a subsequent processing of said kernel Ki,j.
According to a third embodiment, the performance-improvement metric value M(i,j) for each kernel Ki,j (thus identified by the couple (i,j)) depends on an increase of information output by the kernel Ki,j during the last update of said kernel Ki,j.
In a particular example of the third embodiment, the performance-improvement metric value M(i,j)=M(3)(i,j) is defined as follows in the third embodiment:
According to a fourth embodiment, the performance-improvement metric value M(i,j) for each kernel identified by the couple (i,j) is such that M(i,j)=M(1)(i,j)+M(3)(i,j).
According to a fifth embodiment, the performance-improvement metric value M(i,j) for each kernel identified by the couple (i,j) is such that M(i,j)=M(1)(i,j)·M(3)(i,j).
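Keeping the same hedged conventions as in the previous sketch, the third, fourth and fifth embodiments can be mirrored as follows; the exact formula of M(3)(i,j) is the one given above, and taking the increase of the summed output information rates is an assumption of the sketch.

```python
import numpy as np


def rate(llr: float) -> float:
    """Instantaneous information rate of a belief, evaluated here on its magnitude |L|."""
    return 1.0 - float(np.logaddexp(0.0, -abs(llr))) / np.log(2.0)


def metric_m3(Lhat_in, Lhat_out, Lhat_in_old, Lhat_out_old) -> float:
    """Sketch of M(3)(i,j): increase of the information carried by the output beliefs of
    kernel K_{i,j} during its last update (current outputs vs. previously stored outputs)."""
    current = sum(rate(l) for l in (*Lhat_in, *Lhat_out))
    previous = sum(rate(l) for l in (*Lhat_in_old, *Lhat_out_old))
    return current - previous


def metric_m4(m1: float, m3: float) -> float:
    """Fourth embodiment: M(i,j) = M(1)(i,j) + M(3)(i,j)."""
    return m1 + m3


def metric_m5(m1: float, m3: float) -> float:
    """Fifth embodiment: M(i,j) = M(1)(i,j) * M(3)(i,j)."""
    return m1 * m3


m3 = metric_m3((2.0, 1.0), (0.5, 0.3), (1.0, 0.5), (0.4, 0.2))
print(m3, metric_m4(0.2, m3), metric_m5(0.2, m3))
```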
In a step S502, the decoder 120 sorts the kernels in a list in decreasing order of the performance-improvement metric values M(i,j).
Then starts a beliefs propagation iterative process that is repeated until a stop condition is met, as detailed hereafter.
In a step S503, the decoder 120 updates the LLRs for the W top kernels in the list, wherein W is a positive integer such that 0<W<L·N/2, i.e. the W kernels having the highest performance-improvement metric values. In a preferred embodiment, W=1.
In a step S504, the decoder 120 updates the LLRs for each neighbour kernel of said W top kernels in the list.
In a step S505, the decoder 120 sets the performance-improvement metric values M(i,j) for the W top kernels in the list to a null value, since said kernels have been processed, and re-computes the performance-improvement metric values of their neighbour kernels.
In a step S506, the decoder 120 reorders the kernels in the list following the changes made on the performance-improvement metric values in the step S505, still in decreasing order of the performance-improvement metric values M(i,j).
In a step S507, the decoder 120 checks whether the stop condition is met.
According to a first embodiment, the stop condition is met when a time period of predefined duration has ended since the algorithm of
According to a second embodiment, the stop condition is met when a predefined quantity of iterations has been performed since the algorithm of
According to a third embodiment, the stop condition is met when the performance-improvement metric value M(i,j) of the kernel at the top position in the list is below a predefined threshold α. Defining the threshold α allows tuning a trade-off between decoding performance and decoding complexity (i.e. time and/or processing resources used).
When the stop condition is met, the beliefs propagation process ends and a step S508 is performed; otherwise, the step S503 is repeated (new iteration of the beliefs propagation iterative process) with the list that has been reordered in the step S506.
In the step S508, the decoder 120 makes a decoding decision in view of the beliefs propagation achieved by execution of the preceding steps of the algorithm of
CPU 600 is capable of executing instructions loaded into RAM 601 from ROM 602 or from an external memory, such as an HDD or an SD card. After the decoder 120 has been powered on, CPU 600 is capable of reading instructions from RAM 601 and executing these instructions that form one computer program.
The steps performed by the decoder 120 may be implemented in software by execution of a set of instructions or program by a programmable computing machine, such as a PC (Personal Computer), a DSP (Digital Signal Processor) or a GPU (Graphics Processing Unit). Indeed, nowadays, GPUs are increasingly used for non-graphical calculations, since they are well-suited to massively parallel computing problems other than traditional image-related processing. Advantageously, CPU 600 or GPU can perform polar code decoding, including notably the beliefs propagation, during inactivity time of processing resources, which usually occurs when inter-memory transfers are scheduled.
In a step S701, the decoder 120, by way of its belief propagation decoder 422, initiates a call for achieving beliefs propagation on the input beliefs Li,j′(in) and Li,j′(out), so as to get the output beliefs L̂i,j′(in) and L̂i,j′(out), ∀i,j′ such that 0<i≤L and 0<j′≤N (the index j′ is used here because the index j cannot be used here since its maximum value is N/2).
In a step S702, the decoder 120 sets L̂i,j′(in) and Li,j′(out) to a null value, ∀i,j′.
In a step S703, the decoder 120 computes the performance-improvement metric value M(i,j) for each kernel Ki,j (thus identified by the couple (i,j)), ∀i,j such that 0<i≤L and 0<j≤N/2, as already described with respect to
In a step S704, the decoder 120 sorts the kernels in the list in decreasing order of the performance-improvement metric values M(i,j).
In a step S705, the decoder 120 extracts the W top kernels from the list.
In a step S706, the decoder 120 computes the output beliefs as follows, for each kernel Ki,j among the W extracted kernels:
L̂i,2j−1(in)=f1(in)(Li,2j−1:2j(in), Li,2j−1:2j(out))
L̂i,2j(in)=f2(in)(Li,2j−1:2j(in), Li,2j−1:2j(out))
L̂i,2j−1(out)=f1(out)(Li,2j−1:2j(in), Li,2j−1:2j(out))
L̂i,2j(out)=f2(out)(Li,2j−1:2j(in), Li,2j−1:2j(out))
In a step S707, the decoder 120 resets the performance-improvement metric value M(i,j) for each kernel that has been extracted from the list in the step S705. The performance-improvement metric then becomes null for these kernels.
In a step S708, the decoder 120 updates the LLRs for the neighbour kernels of the W kernels that have been extracted from the list in the step S705, as follows:
In a step S709, the decoder 120 updates the performance-improvement metric value M(i,j) for each neighbour kernel of the W kernels that have been extracted from the list in the step S705, in accordance with the LLRs update performed in the step S708.
In a step S710, the decoder 120 reorders the kernels in the list following the changes made on the performance-improvement metric values in the steps S707 and S709, still in decreasing order of the performance-improvement metric values M(i,j).
In a step S711, the decoder 120 checks whether the stop condition is met, as already described with respect to
In the step S712, the beliefs propagation ends by returning the output beliefs L̂i,j′(in) and L̂i,j′(out), ∀i, j′, in order to enable making a decoding decision as is.
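As a compact, hedged sketch of the scheduling loop of the steps S701 to S712, the code below assumes W top kernels selected by a full sort, abstract callbacks for the kernel update, the neighbourhood and the metric, and the stop conditions described above; a real implementation would use the shuffler-defined neighbourhood, the patented metric formulas and typically a heap instead of a full sort.

```python
import random
from typing import Callable, Dict, List, Tuple

Kernel = Tuple[int, int]   # (i, j): sub-polarization stage and kernel position


def priority_belief_propagation(
    kernels: List[Kernel],
    neighbours: Callable[[Kernel], List[Kernel]],
    update_kernel: Callable[[Kernel], None],
    compute_metric: Callable[[Kernel], float],
    W: int = 1,
    max_iterations: int = 1000,
    threshold: float = 1e-3,
) -> None:
    """Schedule kernel updates in decreasing order of the performance-improvement metric
    until a stop condition (iteration budget or top metric below the threshold) is met."""
    # S703/S704: initial metric computation and sorting.
    metric: Dict[Kernel, float] = {k: compute_metric(k) for k in kernels}
    for _ in range(max_iterations):
        ordered = sorted(kernels, key=lambda k: metric[k], reverse=True)  # S704/S710
        top = ordered[:W]                                                 # S705
        if metric[top[0]] < threshold:   # S711, third stop condition (threshold alpha)
            break
        for k in top:
            update_kernel(k)             # S706: update the W top kernels
            metric[k] = 0.0              # S707: processed kernels get a null metric
        for k in top:
            for nb in neighbours(k):
                update_kernel(nb)        # S708: propagate to the neighbour kernels
                metric[nb] = compute_metric(nb)   # S709: re-evaluate their metric


# Toy usage with dummy callbacks, only to exercise the scheduling logic.
random.seed(0)
ks = [(i, j) for i in range(1, 4) for j in range(1, 5)]
toy_metric_table = {k: random.random() for k in ks}


def toy_metric(k: Kernel) -> float:
    toy_metric_table[k] *= 0.5   # decaying toy value so that the threshold stop triggers
    return toy_metric_table[k]


priority_belief_propagation(
    ks,
    neighbours=lambda k: [n for n in ks if n != k and abs(n[0] - k[0]) == 1],
    update_kernel=lambda k: None,
    compute_metric=toy_metric,
)
print("scheduling finished")
```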