The present application claims benefit from U.S. Provisional Patent Application No. 62/589,596 filed on Nov. 22, 2017, the application being hereby incorporated by reference in its entirety.
The presently disclosed subject matter relates to error correction codes (ECCs) and, more particularly, to decoding systems for such codes.
Problems of the decoding of error correction codes have been recognized in the conventional art and various techniques have been developed to provide solutions, for example:
Generalized Concatenated Codes (GCC) are error correcting codes that are constructed by a technique, which was introduced by Blokh and Zyablov (Blokh, E. & Zyablov, V. “Coding of Generalized Concatenated Codes”, Probl. Peredachi Inform., 1974, 10, 45-50) and Zinoviev (Zinoviev, V., “Generalized Concatenated Codes”, Probl. Peredachi Inform., 1976, 12, 5-15). The construction of the GCCs is a generalization of Forney's code concatenation method (Forney G. D. J., “Concatenated Codes”, Cambridge, Mass.: M.I.T. Press, 1966). A good survey on GCCs was authored by I. Dumer (I. Dumer, “Concatenated Codes and Their Multilevel Generalizations”, Handbook of Coding Theory, V. S. Pless & W. C. Huffman (Eds.), Elsevier, The Netherlands, 1998).
Polar codes were introduced by Arikan (E. Arikan, “Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels”, IEEE Trans. Information Theory 55(7): 3051-3073 (2009)). Generalizations of polar codes and their decoding algorithms followed (see e.g. Presman and Litsyn, “Recursive descriptions of polar codes”, Adv. in Math. of Comm. 11(1): 1-65 (2017)). A sequential list decoding algorithm for polar codes called successive cancellation list (SCL) was proposed by Tal and Vardy (Ido Tal and Alexander Vardy, “List Decoding of Polar Codes”, IEEE Trans. Information Theory 61(5): 2213-2226 (2015)). Systems and hardware architectures for such decoders were also proposed (see e.g. Seyyed Ali Hashemi, Carlo Condo and Warren J. Gross, “Fast Simplified Successive-Cancellation List Decoding of Polar Codes”, CoRR abs/1701.08126 (2017); also: Gabi Sarkis, Pascal Giard, Alexander Vardy, Claude Thibeault and Warren Gross, “Fast List Decoders for Polar Codes”, IEEE Journal on Selected Areas in Communications 34(2): 318-328 (2016); and Pascal Giard, Gabi Sarkis, Alexios Balatsoukas-Stimming, YouZhe Fan, Chi-Ying Tsui, Andreas Peter Burg, Claude Thibeault, Warren J. Gross, “Hardware decoders for polar codes: An overview”, ISCAS 2016: 149-152).
The problem of re-encoding within the successive cancellation algorithm has received attention in the following papers. Note that in these papers, re-encoding is referred to as the “partial-sums problem”.
The references cited above teach background information that may be applicable to the presently disclosed subject matter. Therefore the full contents of these publications are incorporated by reference herein where appropriate, for appropriate teachings of additional or alternative details, features and/or technical background.
According to one aspect of the presently disclosed subject matter there is provided a computer implemented method of recursive sequential list decoding of a codeword of a polar code, the method provided by a decoder comprising a plurality of processors, the method comprising:
The method according to this aspect of the presently disclosed subject matter can further comprise one or more of features (i) to (vi) listed below, in any desired combination or permutation which is technically possible:
According to another aspect of the presently disclosed subject matter there is provided a decoder configured to perform recursive sequential list decoding of a codeword of a polar code, the decoder comprising a memory and a plurality of processors, wherein the decoder is configured to:
This aspect of the disclosed subject matter can optionally comprise one or more of features (i) to (vi) listed above with respect to the system, mutatis mutandis, in any desired combination or permutation which is technically possible.
According to another aspect of the presently disclosed subject matter there is provided a non-transitory program storage device readable by a processing circuitry, tangibly embodying computer readable instructions executable by the processing circuitry to perform a method of recursive sequential list decoding of a codeword of a polar code, the method comprising:
Among the advantages of certain embodiments of the presently disclosed subject matter are low latency decoding, low power consumption, and lower memory utilization compared to prior art solutions.
In order to understand the invention and to see how it can be carried out in practice, embodiments will be described, by way of non-limiting examples, with reference to the accompanying drawings, in which:
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the presently disclosed subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the presently disclosed subject matter.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing”, “computing”, “representing”, “comparing”, “generating”, “assessing”, “matching”, “updating” or the like, refer to the action(s) and/or process(es) of a computer that manipulate and/or transform data into other data, said data represented as physical, such as electronic, quantities and/or said data representing the physical objects. The term “computer” should be expansively construed to cover any kind of hardware-based electronic device with data processing capabilities including, by way of non-limiting example, the “processor”, “processing element”, and “decoder” disclosed in the present application.
The terms “non-transitory memory” and “non-transitory storage medium” used herein should be expansively construed to cover any volatile or non-volatile computer memory suitable to the presently disclosed subject matter.
The operations in accordance with the teachings herein may be performed by a computer specially constructed for the desired purposes or by a general-purpose computer specially configured for the desired purpose by a computer program stored in a non-transitory computer-readable storage medium.
Embodiments of the presently disclosed subject matter are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the presently disclosed subject matter as described herein.
Bearing this in mind, attention is drawn to
The system includes a transmitting unit (110) configured to communicate wirelessly with a receiving unit (150). Wireless communication between transmitting unit (110) and receiving unit (150) can utilize, for example, a cellular technology capable of carrying, for example, data packets, and the wireless signal can be transmitted via antenna (130) and received over antenna (170). The wireless signal can carry, for example, packets such as the ECC encoded data (190) packet.
The wireless signal can be affected by signal dissipation and various kinds of electromagnetic interference which can result in errors occurring in the data received at the receiving unit (150). By encoding using an Error Correction Code (such as Arikan's polar code) at the transmitter and then decoding at the receiver, such errors can be corrected. The communication system of
The transmitting unit (110) can contain an ECC encoder (120). The ECC encoder (120) processes the data that arrives for transmission to the receiving unit (150) (known as the information word), and can process it according to an Error Correction Code such as Arikan's polar code (resulting in a codeword) before transmission. Similarly, the receiving unit (150) can contain an ECC decoder (160). The ECC decoder (160) can process the codeword that arrives at the receiving unit (150) from the transmitting unit (110) (such as the ECC encoded data (190)), and can process it according to the Error Correction Code used at the ECC encoder (120) to restore the original information word as further detailed below.
It is noted that the teachings of the presently disclosed subject matter are not bound by the wireless communications system described with reference to
The illustrated ECC Decoder system can include a processing and memory circuitry (205) including a processor (not shown) operatively coupled to a memory (220). Processing and memory circuitry (205) can further include zero or more specialized processing elements (235) operably connected to memory (220).
A processing element (235) can be a hardware-based electronic device with data processing capabilities, such as, for example, a general purpose processor, a specialized Application Specific Integrated Circuit (ASIC), a single core in a multicore processor etc. A processing element (235) can also consist, for example, of multiple processors, multiple ASICs, virtual processors, combinations thereof etc. The processing elements (235) can be identical to one another, or they can differ from each other in terms of architecture, processing capacity, clock rate etc. The abbreviation NPE denotes the number of processing elements (235) contained in the processing and memory circuitry (205).
A processing element (235) can be configured to perform, for example, tasks for decoding of constituent codes as part of list sequential decoding of a polar code codeword—as will be described in detail below with reference to
The memory (220) can be, for example, any kind of volatile or non-volatile storage, and can include, for example, a single physical memory component or a plurality of physical memory components.
The memory (220) can be configured to, for example, store various data used in the computation of a decoded codeword. Such data can include, for example, constituent code symbol likelihood data, decoded constituent code codewords (i.e. estimations of original information words of constituent codes), and reencoded constituent code codewords (created, for example, by applying an inner mapping function to decoded information words, and utilized, for example, to decode subsequent constituent codes).
As will be further detailed with reference to
The processing and memory circuitry (205) can comprise a controller unit (210). The controller unit (210) can be configured to receive a codeword of a particular Error Correction Code over an external interface (not shown), and store it into the Memory (220). The controller unit (210) can subsequently initiate and orchestrate a process to decode the codeword so that, for example, an estimation of the original codeword (i.e. an estimation of the word as initially produced by the encoder unit (120)) is available in the memory (220).
In some embodiments of the presently disclosed subject matter, an estimation of the original information word (i.e. an estimation of the word as passed initially into the encoder unit (120)) can be available in the memory (220) upon completion of the decoding. This process will be described in detail below with reference to
It is noted that in some cases it can be sufficient for a decoding operation to generate, for example, an estimation of the original codeword prepared for transmission—without generating or maintaining an estimation of the original information word. It is further noted that in the case of, for example, systematic encoding, the symbols of the original information word appear among the symbols of the codeword, so that an estimation of information word symbols can be determined simply by selecting the appropriate symbols from an estimation of the original codeword transmitted.
The processing and memory circuitry (205) can include a candidates generator unit (240), configured, for example, to construct information word candidates according to, for example, symbol likelihood data written to memory by a processing element (235)—as will be described in detail below with reference to
The processing and memory circuitry (205) can include a selector unit (245), configured to, for example, select the “best-fit” information word candidates for continued processing—as will be described in detail below with reference to
The processing and memory circuitry (205) can include a reencoder unit (270), configured to, for example, apply an iterative mapping so as to convert the selected candidate information words back into codeword format, in preparation for the decoding of the next constituent code—as will be described in detail below with reference to
It is noted that the teachings of the presently disclosed subject matter are not bound by the ECC decoder system described with reference to
In a generalized concatenated code (GCC), a complex code is constructed, for example, from a group of Nouter outer-codes (also termed “constituent codes”)—the codeword of each of the constituent codes being of length Louter symbols (for example: bits). An inner-code (with associated inner mapping function Finner and codeword length Linner) is also, for example, utilized. A codeword of the GCC can be generated by creating a matrix of Nouter rows and Louter columns wherein each row is a codeword of the outer-code—and then applying Finner to each of the Louter columns of the matrix.
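By way of non-limiting illustration only, the construction just described can be sketched as follows in Python; the function names, the use of caller-supplied outer encoders, and the representation of the inner mapping Finner as a callable are illustrative assumptions rather than a definitive implementation:

```python
from typing import Callable, List, Sequence

def gcc_encode(info_words: Sequence[Sequence[int]],
               outer_encoders: Sequence[Callable[[Sequence[int]], List[int]]],
               f_inner: Callable[[List[int]], List[int]]) -> List[List[int]]:
    """Encode one GCC codeword: encode row i with outer code i, then apply the
    inner mapping F_inner to each of the L_outer columns of the resulting matrix."""
    rows = [encode(u) for encode, u in zip(outer_encoders, info_words)]
    l_outer = len(rows[0])
    assert all(len(row) == l_outer for row in rows), "outer codewords share length Louter"
    # Each column holds Nouter symbols; F_inner maps it to Linner symbols.
    return [f_inner([row[t] for row in rows]) for t in range(l_outer)]
```

In this sketch, reading the transformed columns out in order yields a codeword of Louter·Linner symbols.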
In a recursive GCC, a GCC encoding operation can be performed multiple times—with the output of one encoding operation serving as input to a subsequent encoding. In the recursive structure, the constituent codes are themselves generalized concatenated codes. Each one of the Nouter constituent codes itself comprises N′outer codes each of length L′outer, where L′outer<Louter. If the inner mapping length is L′inner, then Louter=L′inner·L′outer.
Arikan's polar code can be regarded as a non-limiting example of a recursive GCC, as a polar code of a particular length can be formalized as a concatenation of several smaller polar codes in conjunction with a kernel mapping (inner code).
More specifically: an Arikan polar code (over a binary alphabet) of length N=2^m bits (with m>1) can be represented as 2 outer polar codes of length 2^(m−1) that are concatenated using the inner code g(u), wherein:
the two outer codes are defined as:
g^(m)(·) is recursively defined as:
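The recursive definition itself is not reproduced above. Purely by way of illustration, and assuming the standard Arikan kernel g(u0, u1)=(u0⊕u1, u1) with the kernel output for column t placed at positions t and t+N/2 (no bit-reversal permutation), the recursion can be sketched as follows:

```python
def polar_transform(u):
    """Apply the length-2^m polar transform to the bit vector u."""
    n = len(u)
    if n == 1:
        return list(u)
    # The two outer codewords of length 2^(m-1).
    c0 = polar_transform(u[: n // 2])
    c1 = polar_transform(u[n // 2:])
    # Inner (kernel) mapping applied column by column: (c0[t] XOR c1[t], c1[t]).
    return [a ^ b for a, b in zip(c0, c1)] + c1
```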
Thus: Layer 0 includes a single node (302) corresponding to the 16-bit codeword. Layer 1 includes 2 nodes (304) corresponding to the two 8-bit outer codewords which are constituent codes of the 16-bit codeword. Layer 2 includes 4 nodes (306) corresponding to the 4-bit outer codewords, each of which is a constituent code of one of the 8-bit codewords. The nodes (308) in layer 3 correspond to 2-bit codewords, each of which is a constituent code of one of the 4-bit codewords. The 2-bit codewords in layer 3 do not themselves include constituent codes.
In
“Outer Code (2,0)” can be referred to as, for example, the parent code of “Outer Code (3,0)” and “Outer Code (3,1)”. Similarly “Outer Code (3,0)” and “Outer Code (3,1)” can be referred to as, for example, child codes of “Outer Code (2,0)”. “Outer Code (3,0)” and “Outer Code (3,1)” can be referred to as, for example, sibling codes of each other. “Outer Code (1,0)” (for example) can be referred to as an ancestor code of—for example—“Outer Code (2,0)”, “Outer Code (3,0)”, “Outer Code (3,1)” etc.
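By way of non-limiting illustration, these parent/child/sibling relationships can be captured with the following sketch, in which a node is identified by the (layer, index) pair used above; the indexing convention is assumed for illustration:

```python
def children(layer, index, num_layers=4):
    """Children of node (layer, index); leaves (layer 3 for a 16-bit code) have none."""
    if layer >= num_layers - 1:
        return []
    return [(layer + 1, 2 * index), (layer + 1, 2 * index + 1)]

def parent(layer, index):
    return None if layer == 0 else (layer - 1, index // 2)

# Example: Outer Code (2,0) has children (3,0) and (3,1); (1,0) is its parent.
assert children(2, 0) == [(3, 0), (3, 1)]
assert parent(2, 0) == (1, 0)
```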
The hierarchy of outer codes and layers illustrated in
Attention is now directed to
The term “sequential list decoding” can refer, by way of non-limiting example, to a decoding method in which segments of a codeword are decoded in a predefined sequence, and in which a certain number of decoding candidates (known as the “list size”) is maintained as the decoding progresses. After the entire codeword is decoded, a single candidate from the list can be selected.
The prior art process illustrated in
For convenience, the process is herein described according to an embodiment utilizing a general purpose computing system with a processor and memory, but it will be clear to one skilled in the art that the process is equally applicable for other platforms. For convenience, the process is herein described for a binary alphabet, but it will be clear to one skilled in the art that the process is equally applicable for a non-binary alphabet.
For each sequentially decoded constituent code, the decoding task utilizes a list of “input models” which provide the task with estimations of the data received from the communication medium—as modified according to the results of decoding of previous constituent codes (as will be described in detail below).
For a constituent code of length N, an input model can, for example, include:
A likelihood value can be represented in the vectors as, for example, a floating point number between 0 and 1, a natural logarithm, or some other representation or an approximation thereof. Likelihood values can also be represented as, for example, the ratio between the likelihood of 0 and the likelihood of 1 (“likelihood ratio”), or as a logarithm of the ratio between the likelihood of 0 and the likelihood of 1 (“log-likelihood ratio”) etc. The system (for example: the processor) can perform, for example, computations of likelihoods using one representation and, for example, store the likelihoods using a different representation.
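By way of non-limiting illustration, the following sketch shows conversions between two such representations (a probability pair and a log-likelihood ratio); the function names are illustrative only:

```python
import math

def to_llr(p0: float, p1: float) -> float:
    """Log-likelihood ratio: logarithm of the likelihood of 0 over the likelihood of 1."""
    return math.log(p0) - math.log(p1)

def from_llr(llr: float):
    """Recover a normalized (likelihood of 0, likelihood of 1) pair from an LLR."""
    p0 = 1.0 / (1.0 + math.exp(-llr))
    return p0, 1.0 - p0

# Example: a bit received as '0' with probability 0.9.
llr = to_llr(0.9, 0.1)    # ~2.197
print(from_llr(llr))      # approximately (0.9, 0.1)
```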
There can be one input model available, or there can be more than one input model available to be used by the task.
An input model can be created from, for example, data received from a physical receiver attached, for example, to a communication medium. For example, in the case of decoding a codeword of a GCC that is represented at the highest layer of the layered factor graph representation (as described above with reference to
In the course of the sequential list decoding process, a task decoding a higher layer constituent code (as described above with reference to
Early in the sequential list decoding process, there can be, for example, a small number of input models which correspond to the small number of constituent codes which have been estimated at that stage. Later in the sequential list decoding process, there can be, for example, L input models (where L is a maximum list length value used by the sequential decoding process) wherein each input model has an association with a candidate decoded information prefix in a list of L candidates being maintained by the task.
As a prerequisite for the sequential list decoding of a particular constituent code using the recursive process of
To begin recursive sequential list decoding (e.g. on the first iteration of the iterative loop), the system (for example: the processor) can select (410) the next constituent code from which the current code is derived—for example: the first constituent code from the layer below as indicated in a layered graph of the code. Considering as a non-limiting example the decoding of the code (1,0) in layer 1 of
To prepare for the recursive step, the system (for example: the processor) can next prepare (420) likelihood estimations for the bits of the selected constituent code (i.e. of the next lower layer) according to the input models and any previously decoded constituent codes. The system (for example: the processor) can prepare separate likelihoods according to each of the received input models (i.e. the received input models of the current code), and can create an association (for example: in memory) between the prepared likelihoods structure and the received input model from which the likelihoods structure was generated.
In some embodiments of the presently disclosed subject matter, the vector of likelihood values for a selected constituent code for a given input model j can be calculated according to the following formula:
Λ̃r,t(j)=log(Pr(Y=y, {γ0,t=γ̂0,t(j), γ1,t=γ̂1,t(j), . . . , γs−1,t=γ̂s−1,t(j)}, Modelσ(j) | γs,t=r))
where:
In this example, the likelihood matrix value is represented as a logarithm, however it will be evident to one skilled in the art that the likelihood matrix value can also be represented as a floating point value between 0 and 1 or some other representation.
At the recursive step, a decoding process can be invoked (430) on the selected constituent code (resulting in, for example, a recursive invocation of the
Upon completion of the recursive invocation of the decoding process, the system (e.g. the processor) can receive (440), for example:
Having received the results of the recursive invocation, the system (for example: the processor) can store (450) CIWs and CCWs to, for example, memory (220).
The system (for example: the processor) can next determine (460) if there is an additional constituent code from the layer below which has not yet been decoded. If so, then this code can be selected (410) and the process can repeat itself.
When the constituent codes (of the current code) from the layer below have all been decoded, the process can, for example, reencode (470) (according to the inner code mapping) each of the final set of reencoded selected candidate information words, so as to provide codeword input for the subsequent decoding task in a higher layer.
Attention is now directed to
To begin the decoding of the constituent code, the system (for example: a processor) can obtain (500) a list of input models to be utilized. The system (for example, a processor) can obtain these from, for example, the memory where they were written by an invoking task.
Next, the system (for example, the processor) can compute (510) candidate information words for the constituent code and can determine rankings for candidates according to the input models.
In some embodiments of the presently disclosed subject matter, the system (for example, a processor) can compute the ranking of each possible leaf code candidate information word under each supplied input model. By way of non-limiting example, in the case of a 2 bit polar code (with no frozen bits), the rankings for information words 00, 01, 10, and 11 under each input model can be computed.
A ranking can be a number that indicates the quality (e.g. the likelihood) of a candidate decoded information word. By way of non-limiting example, the ranking can be identical with, an estimation of, or otherwise based on the “path metric”. The path metric of a particular candidate information word u corresponding to a codeword c can be defined as the likelihood that the observed data (as received from the communication medium) would result from the transmission of codeword c corresponding to information word u introduced into the encoder before transmission on the communication medium.
In some embodiments of the presently disclosed subject matter, the path metric is represented using a logarithm—in which case it can be computed according to the following formula:
PM(c)=log(Pr(Y=y|X=c))
where Y is a received channel vector, X is a transmitted vector and Pr( ) indicates probability. Throughout the decoding algorithm the PM may be computed for each candidate information prefix, and as such may indicate the likelihood of having transmitted a plurality of codewords with a certain information prefix. As the algorithm progresses and the decided information prefix grows, the set of possible codewords is correspondingly reduced, until the final decision is reached in which the entire information word, and therefore its (single) codeword representation, is determined.
By way of non-limiting example, in the case of a polar code, the path metric can be computed according to the formulae given in section 3 of Balatsoukas-Stimming et al., “LLR-Based successive cancellation list decoding of Polar Codes”, IEEE Trans. Signal Proc., 63 (2015), 5165-5179.
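Purely as a non-limiting illustration, the LLR-based update of that reference can be sketched as follows; note that in that formulation a smaller metric corresponds to a more likely path (a penalty form of the log-probability defined above), and the function names are illustrative:

```python
import math

def pm_update_exact(pm: float, llr: float, u_hat: int) -> float:
    """PM' = PM + ln(1 + exp(-(1 - 2*u_hat) * LLR)); smaller is better in this form."""
    x = -(1 - 2 * u_hat) * llr
    return pm + max(x, 0.0) + math.log1p(math.exp(-abs(x)))   # numerically stable ln(1 + e^x)

def pm_update_approx(pm: float, llr: float, u_hat: int) -> float:
    """Hardware-friendly approximation: penalize only decisions against the LLR sign."""
    agrees = (u_hat == 0) == (llr >= 0)
    return pm if agrees else pm + abs(llr)
```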
After the candidate information words with their rankings have been generated, the system (for example, the processor) can select (520) some number (for example, L) of the candidate information words (together with their associated input models) for further processing according to some method. For example, the L information word candidates with the top rankings can be selected.
It is noted that even if the process generally selects L candidates, early in the operation of the recursive process the total number of possible candidates might be a number smaller than L.
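By way of non-limiting illustration, the selection step can be sketched as follows, assuming that a higher ranking indicates a better candidate; the tuple layout is illustrative only:

```python
def select_best(candidates, L):
    """candidates: iterable of (ranking, info_word, input_model_id) tuples, where a
    higher ranking indicates a better candidate. Returns up to L entries (fewer if
    fewer candidates exist, as noted above)."""
    return sorted(candidates, key=lambda c: c[0], reverse=True)[:L]
```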
Next, the system (e.g. the processor) can compute a reencoding (530) of each of the selected candidate information words (according to the outer code mapping), resulting in a list of reencoded selected candidate code words corresponding to the selected candidate information words. For each reencoded selected candidate information word, the system (e.g. the processor) can also record which input model was used to generate the candidate and make this data available in memory for use by an invoking task.
In some embodiments of prior art methods, the decoding of a codeword in the recursion base case can directly generate candidate codewords (CWs) with associated rankings—without first computing candidate information words. In such cases, the system (e.g. the processor) selects CWs rather than CIWs, and does not perform reencoding. Such methods can be useful, for example, where systematic encoding is employed—as in such cases it can be unnecessary to maintain the original information word in memory.
Attention is now drawn to
For convenience, three types of tasks are illustrated: (i) likelihood preparation, (ii) generation of codewords and selection of candidate decoding paths, and (iii) re-encoding operations. These tasks can be performed, by way of non-limiting example, as described above with reference to
Moreover, it is assumed that Lin=1 and L=8 (maximum list size).
As is evident from the sequence of tasks listed in
The sequence of tasks listed in
Additionally, there are cases in which re-encoding operations are performed consecutively (e.g. 632 and 634). These cases correspond to the decoding of a constituent code which is the final constituent code of a particular parent code, with the parent code also being the final constituent code of its parent code. Those consecutive re-encoding operations introduce even more latency into the decoding and may result in redundant use of memory storage.
In some embodiments of the current subject matter, likelihood computation can be performed using an alternative implementation based on particular properties of polar codes. As a consequence, reencoding can be executed in parallel with likelihood computation, which can result in reduced decoding latency (in particular in the case of consecutive reencode operations) and more efficient memory usage as will be detailed below with reference to
Attention is now directed to
Specifically,
Leaf code C̃f (702) is outer-code #1 of its parent code C̃f-1 (706). Similarly, at the layer above the leaf layer, C̃f-1 (706) is outer-code #1 of its parent code C̃f-2 (710). The pattern in which each parent code in each next-higher layer is outer-code #1 of its parent code continues at each layer (f-2), (f-3) etc. until layer (f-p).
It is noted that—as described above with reference to
At layer (f-p), code C̃f-p (716) is outer-code #0 of C̃f-p-1 (718), and its sibling outer-code #1 (denoted by 720) is as of yet undecoded. Thus C̃f-p (716) is the first “ancestor” code of code C̃f (702) that has an undecoded sibling.
The following observations are made regarding the execution of the decoder described in
Attention is now drawn to
For convenience, the process is herein described for a binary alphabet, but it will be clear to one skilled in the art that the process is equally applicable for a non-binary alphabet.
The process utilizes, for example, characteristics inherent in operations used for polar encoding in conjunction with, for example, a multiple processing element architecture such as, for example, the system described above with reference to
Among the features of the
Among the advantages of some embodiments of the method are low latency decoding, low power consumption, and low memory usage requirements.
The process described in
The process can begin its processing of a particular code by obtaining (800) a list of input models. As described above with reference to
Next, for each of the input models, the process can compute likelihood vectors (810) for the first constituent code of the current code. Referring, by way of non-limiting example, to the layered factor graph of
In some embodiments of the present subject matter, the likelihoods can be computed and maintained, for example, as base-2 logarithms—in this case they are referred to as log-likelihoods (LLs). In such embodiments, LL vectors for a particular first constituent code can be computed, for example, according to the following formulae:
where:
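The formulae referenced above are not reproduced in this text. Purely as a non-limiting illustration—and not necessarily the formulae referred to above—a standard log-likelihood (LL) combination for the first constituent code can be sketched as follows, assuming the pairing of parent positions t and t+N/2 and natural (rather than base-2) logarithms for brevity:

```python
import math

def max_star(x: float, y: float) -> float:
    """Jacobian logarithm log(exp(x) + exp(y)); max(x, y) is a common approximation."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def first_branch_lls(ll_t, ll_u):
    """ll_t[k], ll_u[k]: (LL of bit=0, LL of bit=1) for parent positions k and k + N/2
    under one input model. Returns LL pairs for the first constituent code."""
    out = []
    for (a0, a1), (b0, b1) in zip(ll_t, ll_u):
        out.append((max_star(a0 + b0, a1 + b1),   # first outer-code bit = 0
                    max_star(a0 + b1, a1 + b0)))  # first outer-code bit = 1
    return out
```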
Returning to the example of constituent code 714 in
Next, the process can recursively invoke (820) a decoding process (for example: the current decoding process of
Following the completion of the recursively invoked decoding, the process can receive (825) the results of the recursive decoding. These results can, for example, include:
Thus, in the example of
The process can next obtain (830)—for each candidate decoded information word in the received list—associated likelihood vectors usable for decoding the second constituent code of the current code.
In some embodiments of the presently disclosed subject matter, the process can receive these likelihood vectors usable for decoding the second constituent code of the current code from the recursively invoked decoding process which just completed. For example, when the recursively invoked decoding is according to
In some embodiments of the presently disclosed subject matter, the recursively invoked decoding process might not provide the likelihood vectors for use in decoding the second constituent code of the current code (for example, if the process of
In an embodiment where likelihoods are represented and stored as a base-2 logarithm, the likelihood vectors can be computed, for example, according to the following formulae:
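Again, the formulae referenced above are not reproduced here. Purely as a non-limiting illustration, a standard update for the second constituent code, given the reencoded bits of the already-decoded first constituent code, can be sketched as follows (under the same assumptions and caveats as in the earlier sketch):

```python
def second_branch_lls(ll_t, ll_u, c0_hat):
    """ll_t, ll_u: parent LL pairs for positions k and k + N/2; c0_hat[k]: reencoded
    bit k of the decoded first constituent code. Returns LL pairs for the second
    constituent code."""
    out = []
    for (a0, a1), (b0, b1), c in zip(ll_t, ll_u, c0_hat):
        a = (a0, a1)
        out.append((a[c] + b0,        # second outer-code bit = 0 -> parent bits (c, 0)
                    a[c ^ 1] + b1))   # second outer-code bit = 1 -> parent bits (c^1, 1)
    return out
```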
Next, the process can again recursively invoke (840) a decoding process—this time on the second constituent code. The newly computed likelihood vectors (e.g. the LLH0 and LLH1 pairs that have just been computed) can be supplied as the input models to the recursively invoked decoding process. As before (820), the current decoding process of
As before, following the completion of the recursively invoked decoding, the process can receive the results of the recursive decoding. Specifically, the process can receive (850), for example:
Thus, in the
Next, the process can compute (860)—for each reencoded candidate decoded information word of the second constituent code—revised codewords for the current code and likelihoods for the next undecoded constituent code, as described in more detail below with reference to
Subsequently, in a higher layer, the decoder (for example: the candidates generator unit 240) can use the computed likelihoods to decode the next undecoded constituent code.
Attention is now drawn to
In some embodiments of the presently disclosed subject matter, the process illustrated in
The decoder 200 (for example: the controller unit 210) can identify (910) a constituent code as the next as-of-yet undecoded constituent code. The decoder can determine this, for example, according to a layered factor graph of the code such as the one illustrated above with reference to
Referring, by way of non-limiting example, to the layered factor graph and decoding sequence illustrated in
Next, the decoder 200 (for example: reencoder unit 270) can, for example, compute (920)—for each CCW—a sequence of bits which constitutes a partial or complete result of the application of the polar code's inner code mapping on the CCW together with the sibling constituent code CCW from which it was derived (as described above with reference to
In some embodiments of the presently disclosed subject matter, the decoder 200 (for example: the reencoder unit 270) can compute the bits of the reencoded codeword according to the following formula:
In some embodiments of the presently disclosed subject matter, the decoder 200 (for example: the reencoder unit 270) can compute a bit sequence consisting of fewer bits than the total number of bits in the reencoded codeword (i.e. in these embodiments the sequence of bits constitutes a partial result of reencoding). The decoder 200 can be limited in the number of bits that it can output, and thus output less than a full codeword in circumstances where, for example, the codeword is comparatively long. The term “block” hereforward denotes such a computed group of ordered bits (or more generally: symbols) that is smaller than the length of the target codeword. The term lBlock hereforward denotes the number of bits (or symbols) in a block. In such embodiments, the decoder 200 (for example: reencoder unit 270) can require several iterations of block computation to compute the complete sequence of blocks that constitutes the reencoded candidate codeword (for example: (2^p·N′)/lBlock iterations, where p denotes the number of layers from the leaf constituent code to the parent code being reencoded—as illustrated in
In some embodiments of the presently disclosed subject matter, lBlock can be, for example, equal to NPE (i.e. to the number of processing elements 235 contained within the decoder 200 as described above with reference to
In some embodiments of the presently disclosed subject matter, the decoder 200 (for example: reencoder unit 270) first computes the block that corresponds to the sequentially last bits of the reencoded codeword. In the subsequent iteration, the decoder 200 (for example: reencoder unit 270) can then—in such an embodiment—compute the block corresponding to the bits sequentially preceding the block of the first iteration. For example, in an embodiment where codeword length is 16 bits and lBlock is 4 bits, the decoder 200 (for example: reencoder unit 270) can first compute bits 12:15, i.e. the final 4 bits of the codeword. In the next iteration, the decoder 200 (for example: reencoder unit 270) can then compute bits 8:11 etc. The term “trailing block” is hereforward used to denote a block generated in this manner. The term “next trailing block” is hereforward used to denote the block of the codeword that is the next to be generated in an embodiment where blocks are being generated in this manner. The term “upper-to-lower order” is hereforward used to denote the generation of blocks of reencoded candidate codewords in this manner.
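By way of non-limiting illustration, upper-to-lower order block generation for a single reencoding level can be sketched as follows. The sketch materializes the full parent codeword for clarity, whereas a hardware reencoder would typically compute each trailing block on the fly; the kernel and bit placement are the same assumptions as in the earlier sketches:

```python
def trailing_blocks(c0, c1, l_block):
    """c0, c1: sibling constituent codewords (each of length N/2).
    Yields (start_index, block) pairs of the reencoded parent codeword,
    trailing block first."""
    parent = [a ^ b for a, b in zip(c0, c1)] + list(c1)   # full parent codeword
    n = len(parent)
    for start in range(n - l_block, -1, -l_block):
        yield start, parent[start:start + l_block]

# Example: a 16-bit parent produced in four 4-bit blocks, bits 12:15 first.
c0 = [0, 1, 1, 0, 1, 0, 0, 1]
c1 = [1, 1, 0, 0, 1, 1, 0, 0]
for start, block in trailing_blocks(c0, c1, 4):
    print(start, block)
```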
In some embodiments of the presently disclosed subject matter, the decoder 200 (for example: reencoder unit 270) can store (930) the computed reencoded codeword data (for example: the trailing block) to, for example, a memory location associated with the reencoded codeword. In some embodiments of the presently disclosed subject matter, the memory location can be, for example, a regular memory, a high-speed low-latency memory, or even a temporary memory that is used only until the codewords of the parent code have been calculated. The decoder 200 (for example: reencoder unit 270) can in this manner simply store the entire reencoded codeword by storing the blocks to memory—one block at a time.
In some embodiments of the presently disclosed subject matter, the decoder 200 (for example: reencoder unit 270) can—in addition—store the computed reencoded codeword data (for example: the trailing block) to, for example, a memory location associated with a reencoded codeword of the constituent code that is the sibling of the identified next undecoded constituent code.
Next, the decoder 200 (for example: the processing elements 235) can utilize (930) the computed reencoded codeword data (for example: the trailing block), together with the likelihood vectors that were previously used in decoding the parent of the identified next undecoded constituent code, in order to generate likelihood estimates for (at least) a portion of the bits of the identified next undecoded constituent code.
In some embodiments of the presently disclosed subject matter, the partial log likelihood vector data can be computed according to the following formula:
where:
It is noted that the computation of the likelihood data (930) performed by, for example, the processing elements 235 may start as soon as codeword data generated by, for example, the reencoder unit 270 (as part of reencoding (920)) is available. It is noted that if a block of codeword data contains NPE bits, then the NPE processing elements 235 can execute in parallel so that each processing element 235 of the NPE processing elements 235 simultaneously calculates the likelihood for a single bit of the identified next undecoded codeword.
In some embodiments of the presently disclosed subject matter, the likelihood data computation can start at Δ clock-cycles after the re-encoding sequence started—resulting in the effective latency due to re-encoding being equivalent to Δ.
In some embodiments of the presently disclosed subject matter, Δ=0. In such embodiments, likelihood data computation starts in the same clock-cycle that re-encoding starts.
In some embodiments of the presently disclosed subject matter, Δ=1. In such embodiments, likelihood data computation (930) starts in the clock-cycle following the start of the re-encoding (920). In such embodiments, the reencoding unit 270 can compute the next trailing block concurrent to the computation by the processing elements 235 of the likelihoods for the preceding trailing block.
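Purely as a software analogy of the Δ=1 schedule (actual concurrency would be provided by the hardware rather than by this sequential sketch), the overlap can be illustrated as follows:

```python
def pipelined_schedule(trailing_block_source, compute_likelihoods):
    """trailing_block_source: iterator of (start, block) pairs, last block first.
    compute_likelihoods: callable consuming one block (stands in for the NPE
    processing elements working in parallel)."""
    previous = None
    for current in trailing_block_source:
        if previous is not None:
            # The PEs process the block emitted in the previous "cycle" ...
            compute_likelihoods(*previous)
        # ... while the re-encoder emits the current block.
        previous = current
    if previous is not None:
        compute_likelihoods(*previous)   # drain the final block
```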
Whether Δ=0 or a higher value, the computation time of the decoding can be reduced compared to the method illustrated above with reference to
Next, the decoder 200 (for example: controller unit 210) determines (940) whether the complete codeword has been generated. If not, then the decoder 200 (for example: reencoder unit 270) can, for example, compute (920) the next trailing block. If the complete codeword has been generated, then the process is complete (950), and the decoder 200 can, for example, process a subsequent candidate codeword from the list.
In some embodiments of the presently disclosed subject matter, pipelining can similarly be implemented between reencoding (530) of CIWs of the first child constituent code resulting in CCWs and obtaining (830) of likelihood vectors for the second child constituent code based on the CCWs. As described above, in this case the decoder 200 (for example: reencoder unit 270) can iteratively reencode the next trailing block, and this data can be pipelined to, for example, NPE processing elements 235 operating in parallel to calculate likelihood vector values as described above with reference to
Attention is now directed to
The memory structures illustrated in
The structure of the exemplary memory table storing likelihood data is hereforward termed LLMem.
The maximum number of LLs stored in a row of LLMem can be, for example, 2·NPE. LLMem can be divided into regions, and each region can be allocated to computations pertaining to a different layer of the layered factor graph of the code being decoded. In some embodiments of the presently disclosed subject matter, a single outer-code belonging to a given layer can be in the state of active decoding at a given time.
Thus, in some embodiments of the presently disclosed subject matter, only the likelihoods of a single outer-code in a given layer are required to be stored at any given time in LLMem.
r0(start), r1(start), . . . , denote the beginning addresses of the LLMem regions for layers #0, #1, . . . , #−1, respectively.
LLMem[ri(start): (ri+1(start)−1)] is the memory storing the likelihoods for an outer-code of layer i that is currently being decoded by the decoder 200 (for a code of length N, this outer-code has length N/2^i bits).
Attention is now directed to
A series of memory rows in the region of an outer-code in a particular layer can store likelihoods for a single input model of that outer-code. For each input model, the number of rows maintained is hereforward denoted as θi (and, since each row holds 2·NPE likelihood values, is equivalent to N/(2^(i+1)·NPE)). Consequently, for a maximum list size of L, there are L·θi rows in LLMem for layer i, located in the memory range LLMem[ri(start): (ri+1(start)−1)]. The first θi rows contain LL data of the first input model, and the next ones are for LLs of the second input model etc.
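By way of non-limiting illustration only, and assuming θi=N/(2^(i+1)·NPE) rows per input model as discussed above, the region bookkeeping can be sketched as follows; the names are illustrative, and leaf layers shorter than 2·NPE bits are not handled:

```python
def llmem_layout(N, NPE, L, num_layers):
    """Returns the region start rows r_i(start) and the per-model row counts theta_i."""
    theta = [N // (2 ** (i + 1) * NPE) for i in range(num_layers)]
    starts, addr = [], 0
    for t in theta:
        starts.append(addr)   # r_i(start) for this layer's region
        addr += L * t         # L input models, theta_i rows each
    return starts, theta

# Example: N = 16, NPE = 2, L = 8 over layers #0..#2.
print(llmem_layout(16, 2, 8, 3))   # ([0, 32, 48], [4, 2, 1])
```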
Attention is now directed to
Each LLMem memory row can consist, for example, of (2·NPE·nLLBits) bits, where nLLBits is the number of bits assigned to represent, for example, likelihood, log-likelihood, or log-likelihood ratio values.
For example LLMem[i][(nLLBits·(j+1)−1):j·nLLBits] corresponds to the jth likelihood value in row i.
In some embodiments of the presently disclosed subject matter, the memory within a row is organized so that the row contains all the necessary input data to compute likelihoods for NPE bits of the codeword.
Within memory rows for a particular input model, row m can contain the likelihoods corresponding to the symbol indices from (m·NPE) to ((m+1)·NPE−1) and also to bits
up to
The likelihood data with index 1210 pertains to the first range (whereas the likelihood data with index 1200 pertains to the second range).
Attention is now directed to
CWMem is divided into regions—each region is utilized for computations of a different outer-code (at any given time, at most one outer-code per layer in the layered factor graph of the code is actively used for computations). j0(start), j1(start), . . . , denote the beginning addresses of layers #0, #1, . . . , #−1, respectively. CWMem[ji(start): (ji+1(start)−1)] are the memory rows containing data for the outer-code of layer i (of length N/2^i bits).
Attention is now directed to
Each row in CWMem contains NPE bits. Each bit in the row corresponds to a codeword bit. Consecutive 2θi rows of CWMem contain an outer-code decision for a single model of that outer-code. Consequently if the maximum list size is L, there is a total of L·2θi rows for layer i, located in CWMem [ji(start): (ji+1(start)−1)]. The first 2θi rows are for the first model, and the next ones are for the second model etc.
The organization of the memory within the row can be such that each row provides complete input to NPE LL preparation operations.
Within the memory of a particular model, row m contains bit decisions for codeword symbols with indices of m·NPE up to (m+1)·NPE−1.
CWMem is used together with LLMem to generate the LL values for outer-codes subsequent to the first in each layer. To accomplish this, the LLMem of layer #i is used with outer-codes of layer #(i+1). As a consequence, a single row of LLMem and a single row of CWMem are used for generating a new single row of LLMem as preparation for decoding outer-code #1.
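Purely as a non-limiting illustration, and assuming a single log-likelihood-ratio (LLR) value per bit together with the paired index ranges assumed in the earlier sketches, the row-level combination can be sketched as follows:

```python
def combine_row(llmem_row, cwmem_row, npe):
    """llmem_row: 2*NPE LLR values from one row of layer #i (the first NPE belong to
    one index range of the parent code, the last NPE to its paired range).
    cwmem_row: NPE decided bits of the already-decoded sibling outer-code.
    Returns NPE new LLR values for one row of layer #(i+1)."""
    assert len(llmem_row) == 2 * npe and len(cwmem_row) == npe
    out = []
    for j in range(npe):
        la, lb, u = llmem_row[j], llmem_row[npe + j], cwmem_row[j]
        # Standard "g" update in the LLR domain: flip the first LLR's sign when the
        # decided sibling bit is 1, then add the paired LLR.
        out.append((1 - 2 * u) * la + lb)
    return out
```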
It is to be understood that the invention is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Hence, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for designing other structures, methods, and systems for carrying out the several purposes of the presently disclosed subject matter.
It will also be understood that the system according to the invention may be, at least partly, implemented on a suitably programmed computer. Likewise, the invention contemplates a computer program being readable by a computer for executing the method of the invention. The invention further contemplates a non-transitory computer-readable memory tangibly embodying a program of instructions executable by the computer for executing the method of the invention.
Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention as hereinbefore described without departing from its scope, defined in and by the appended claims.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/IL2018/050774 | 7/15/2018 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2019/102450 | 5/31/2019 | WO | A |
Entry
Tal, I., & Vardy, A. (2015). List decoding of polar codes. IEEE Transactions on Information Theory, 61(5), 2213-2226.
Egilmez, Z. B. K., Xiang, L., Maunder, R. G., & Hanzo, L. (2019). The development, operation and performance of the 5G polar codes. IEEE Communications Surveys & Tutorials, 22(1), 96-122.
Berhault, G., Leroux, C., Jego, C., & Dallet, D. (Jan. 2015). Partial sums computation in polar codes decoding. In 2015 IEEE International Symposium on Circuits and Systems (ISCAS) (pp. 826-829). IEEE.
Chaki, P., & Kamiya, N. (May 2019). A novel design of CRC-concatenated polar codes. In ICC 2019-2019 IEEE International Conference on Communications (ICC) (pp. 1-6). IEEE.
Balatsoukas-Stimming, A., Parizi, M. B., & Burg, A. (Mar. 2015). LLR-based successive cancellation list decoding of polar codes. IEEE Transactions on Signal Processing, 63(19), 5165-5179.
Presman, N., & Litsyn, S. (2017). Recursive descriptions of polar codes. Advances in Mathematics of Communications, 11(1), 1.
Fan, Y., & Tsui, C. Y. (Jun. 2014). An efficient partial-sum network architecture for semi-parallel polar codes decoder implementation. IEEE Transactions on Signal Processing, 62(12), 3165-3179.
Sarkis, G., Giard, P., Vardy, A., Thibeault, C., & Gross, W. J. (Feb. 2016). Fast list decoders for polar codes. IEEE Journal on Selected Areas in Communications, 34(2), 318-328.
Presman, N., & Litsyn, S. (Jun. 2015). Recursive Descriptions of Polar Codes. Version 3.
Presman, N., & Litsyn, S. (Feb. 2015). Recursive Descriptions of Decoding Algorithms and Hardware Architectures for Polar Codes, version 1, pp. 1-60.
MacWilliams, F. J., & Sloane, N. J. A. (1977). The theory of error correcting codes (vol. 16). Elsevier.
T. K. Moon, "Error Correction Coding: Mathematical Methods and Algorithms", Wiley & Sons, 2005.
en.wikipedia.org/wiki/Cyclic_redundancy_check, Feb. 23, 2021.
Blokh, È. L., & Zyablov, V. V. (1974). Coding of generalized concatenated codes. Problemy Peredachi Informatsii, 10(3), 45-50.
Forney, G. D. (1965). Concatenated codes.
Zinov'ev, V. A. (1976). Generalized cascade codes. Problemy Peredachi Informatsii, 12(1), 5-15.
Dumer, I. I. (Jan. 1998). Concatenated codes and their multilevel generalizations. Handbook of coding theory.
Arikan, E. (Jul. 2009). Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels. IEEE Transactions on Information Theory, 55(7), 3051-3073.
Hashemi, S. A., Condo, C., & Gross, W. J. (Jan. 2017). Fast simplified successive-cancellation list decoding of polar codes. In 2017 IEEE Wireless Communications and Networking Conference Workshops (WCNCW) (pp. 1-6). IEEE.
Giard, P., Sarkis, G., Balatsoukas-Stimming, A., Fan, Y., Tsui, C. Y., Burg, A., . . . & Gross, W. J. (Jun. 2016). Hardware decoders for polar codes: An overview. In 2016 IEEE International Symposium on Circuits and Systems (ISCAS) (pp. 149-152). IEEE.
Number | Date | Country | |
---|---|---|---|
20200295787 A1 | Sep 2020 | US |
Number | Date | Country | |
---|---|---|---|
62589596 | Nov 2017 | US |