SLICED POLAR CODES

Information

  • Patent Application
  • 20170244429
  • Publication Number
    20170244429
  • Date Filed
    February 18, 2016
  • Date Published
    August 24, 2017
Abstract
An apparatus for polar coding includes an encoder circuit that implements a transformation C=u1N-sBN-s{tilde over (M)}n, wherein u1N-s, BN-s, {tilde over (M)}n, and C are defined over a Galois field GF(2k), k>1, wherein N=2n, s
Description
BACKGROUND

(a) Technical Field


Embodiments of the present disclosure are directed to an error correcting code (ECC) and its accompanying class of decoding algorithms, and more particularly to a variation of a channel polarization code.


(b) Discussion of the Related Art


Block codes are an important family of ECCs. These codes operate on fixed blocks of information bits. In each encoding operation a vector of k information bits is encoded to a (redundant) vector of N symbols, which in the case of binary codes is N bits. The number of coded bits N is called the block length of the code.


A polar code is a linear block error correcting code developed by Erdal Arikan that is the first code with an explicit construction that achieves the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) under practical complexity. Polar codes are defined as codes of length N=2n, n a positive integer, and there exist algorithms for which the decoding complexity is O(N log N). For example, for n=10 there is a length 1024 polar code.


Polar codes may be appropriate codes for NAND flash devices, due to their capacity achieving properties. One of the first issues that must be solved to design polar ECC schemes for NAND flash devices is length compatibility. In various applications of ECC, the length of the code needs to be adjusted to fit a given target length. However, in many families of codes, as in the case of channel polarization codes, the block length cannot be chosen arbitrarily, and there are many off-the-shelf methods known in the literature to tackle this situation. In many communication and storage systems, such as solid state drives and other NAND flash memory devices, the number of information bits k is an integral power of two, which makes the block length not an integral power of two. Known methods of shortening and puncturing may also be applied to this code, as suggested by various authors in past years. The decoding complexity of an ECC scheme is another important performance characteristic of an ECC system. Off-the-shelf methods for adjusting the block length of a code end up with a decoding complexity that relates to the original block length of the code, before the adjustment. In many cases, as with channel polarization codes, the original block length is larger (sometimes much larger, e.g., approximately double) than the target block length. This can cause the decoding complexity to be substantially larger than that expected for the actual transmitted/stored block.


For example, one technique called puncturing involves not transmitting certain bits and operating the FULL decoder for a full length polar code while supplying no usable information (LLR=0) for the punctured bits. The operations are carried out on the complete code graph, and therefore the complexity matches that of a full length polar code and not a punctured length polar code. An alternative conventional method is called shortening, which involves operating a full decoder with infinite (maximal level) LLR values. This method encodes the information bits in such a way that some of the coded bits are known a-priori (e.g., 0 bits), which is why these bits need not be stored or transmitted in a communication scenario. At the decoder, however, the FULL decoder is operated while having perfect information for the shortened bits.
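

As a rough illustration of how these conventional schemes feed a full-length decoder, the following Python sketch builds the length-N input LLR vector such a decoder would consume. The block length N, the puncture set, and the shorten set are hypothetical values chosen only for illustration.

import numpy as np

# Hypothetical sizes: a length-N mother polar code from which some coded
# bits are punctured (not transmitted) and some are shortened (known zeros).
N = 16                      # full mother-code length, N = 2**n
punctured = {13, 14, 15}    # bit positions never transmitted
shortened = {10, 11, 12}    # bit positions forced to 0 and not transmitted

rng = np.random.default_rng(0)
# LLRs actually observed from the channel for the transmitted positions only.
observed = {i: rng.normal(loc=2.0, scale=1.0)
            for i in range(N) if i not in punctured | shortened}

# The conventional decoder still expects N input LLRs, one per channel node:
#   punctured bits -> no information     -> LLR = 0
#   shortened bits -> perfectly known 0s -> LLR = +infinity
llr = np.zeros(N)
for i in range(N):
    if i in punctured:
        llr[i] = 0.0
    elif i in shortened:
        llr[i] = np.inf
    else:
        llr[i] = observed[i]

print(llr)   # N values feed the FULL factor graph, so complexity tracks N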


SUMMARY

Exemplary embodiments of the disclosure as described herein generally include systems and methods that provide channel polarization codes of an arbitrary block length while operating at a decoding complexity that relates to the target length.


According to embodiments of the disclosure, there is provided an apparatus for polar coding that includes an encoder circuit that implements a transformation C=u1N-sBN-s{tilde over (M)}n, where u1N-s, BN-s, {tilde over (M)}n, and C are defined over a Galois field GF(2k), k>1, where N=2n, s<N, u1N-s=(u1, . . . , uN-s) is an input vector of N−s symbols over GF(2k), BN-s is a permutation matrix, {tilde over (M)}n=((N−s) rows of Mn=M1⊗n), the matrix M1 is a pre-defined matrix of size q×q, 2<q and N=qn, and C is a codeword vector of N−s symbols, and where a decoding complexity of C is proportional to a number of symbols in C; and a transmitter circuit that transmits codeword C over a transmission channel.
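

A minimal numerical sketch of this transformation for the binary case (GF(2), i.e., k=1 and q=2) is given below. It assumes one particular reading of the sliced matrix, namely the first N−s rows and columns of Mn, and uses the identity as the permutation BN-s for simplicity; it is an illustration of the algebra only, not the claimed encoder circuit.

import numpy as np

def kron_power(m, n):
    """n-th Kronecker power of a small binary matrix m (arithmetic mod 2)."""
    out = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        out = np.kron(out, m) % 2
    return out

# Assumed parameters for illustration only.
n, s = 3, 2                      # N = 2**n = 8, slice away s = 2 positions
N = 2 ** n
M1 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
Mn = kron_power(M1, n)

# One reading of the sliced transform: keep the first N-s rows *and* columns,
# and use the identity as the (N-s) x (N-s) permutation B for simplicity.
M_tilde = Mn[:N - s, :N - s]
B = np.eye(N - s, dtype=np.uint8)

u = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)   # N-s input symbols
C = (u @ B @ M_tilde) % 2                          # sliced codeword, N-s symbols
print(C)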


According to a further embodiment of the disclosure, for a set F⊂{1, . . . , N−s}, ui=0 ∀iεF.


According to a further embodiment of the disclosure, {tilde over (M)}n=(first (N−s) rows of Mn=M1⊗n).


According to a further embodiment of the disclosure, u1N-s, BN-s, M1 and corresponding arithmetic operations are defined over the Galois field GF(2).


According to a further embodiment of the disclosure, the permutation matrix BN-s is a bit reversal matrix.


According to a further embodiment of the disclosure, the apparatus includes a receiver circuit that receives an output word y1N-s from a transmission channel, where y1N-s is a vector of size N−s defined over a predetermined finite alphabet; and a decoder circuit that generates an estimate û1N-s of the information bits u1N-s from the output word y1N-s.


According to a further embodiment of the disclosure, $M_1=\begin{pmatrix}1&0\\1&1\end{pmatrix}$.





According to embodiments of the disclosure, there is provided an apparatus for polar coding that includes a receiver circuit that receives an output word y1N-s from a transmission channel, where y1N-s is a vector of size N−s defined over a predetermined finite alphabet, where output word y1N-s is related to an input word u1N-s by y1N-s=f(u1N-sBN-s{tilde over (M)}n), where u1N-s, BN-s, {tilde over (M)}n, and y1N-s are defined over a Galois field GF(2k), k>1, where N=2n, s<N, u1N-s=(u1, . . . , uN-s) is an input vector of N−s bits, BN-s is a permutation matrix, {tilde over (M)}n=((N−s) rows of Mn=M1⊗n), the matrix M1 is a pre-defined matrix of size q×q, 2<q and N=qn, and f is a stochastic functional; and a decoder circuit that generates an estimate û1N-s of the information bits u1N-s from the output word y1N-s, where a decoding complexity of y1N-s is proportional to a number of symbols in y1N-s.


According to a further embodiment of the disclosure, for a set F⊂{1, . . . , N−s}, ui=0 ∀iεF.


According to a further embodiment of the disclosure, {tilde over (M)}n=(first (N−s) rows of Mn=M1⊗n).


According to a further embodiment of the disclosure, u1N-s, BN-s, M1 and corresponding arithmetic operations are defined over the Galois field GF(2).


According to a further embodiment of the disclosure, the permutation matrix BN-s is a bit reversal matrix.


According to a further embodiment of the disclosure, the apparatus includes an encoder circuit that implements the transformation c1N-s=u1N-sBN-s{tilde over (M)}n; and a transmitter/storage circuit that transmits/stores the codeword c1N-s over the transmission channel/storage media.


According to a further embodiment of the disclosure, $M_1=\begin{pmatrix}1&0\\1&1\end{pmatrix}$.








BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1(a)-(b) illustrate examples of an encoding graph and a decoding graph for polar codes, according to an embodiment of the disclosure.



FIG. 2 illustrates slicing of an encoder graph, according to an embodiment of the disclosure.



FIG. 3 illustrates slicing of a decoder graph, according to an embodiment of the disclosure.



FIG. 4 illustrates another method for slicing in which there are equivalent and almost equivalent slicing locations, and FIG. 5 illustrates the resulting graph, according to an embodiment of the disclosure.



FIG. 6 shows the decoder graph of another slicing, according to an embodiment of the disclosure.



FIG. 7 is a block diagram of a machine for performing sliced encoding/decoding, according to an embodiment of the disclosure.





DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Exemplary embodiments of the invention as described herein generally provide systems and methods that provide channel polarization codes of an arbitrary block length while operating at a decoding complexity that relates to the target length. While embodiments are susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the invention to the particular forms disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.


Channel polarization is an operation by which one constructs, from a first set of N independent copies of a B-DMC W, a second set of N binary-input channels {WN(i): 1≦i≦N} that show a polarization effect in that, as N becomes large, the fraction of indices i for which the symmetric channel capacity I(WN(i)) is near 1 approaches the channel capacity, denoted by I(W), and the fraction for which I(WN(i)) is near 0 approaches 1−I(W). The channel polarization operation includes a channel combining phase and a channel splitting phase. A code constructed on the basis of these principles is known as a polar code. Note that in the following disclosure, all scalars, vectors, matrices and operations over them are carried out in vector spaces over the binary field GF(2). However, embodiments are not limited to vector spaces over the binary field GF(2), and embodiments of the present disclosure are applicable to vector spaces over any finite Galois field GF(q), q a prime power. There are generalizations of Arikan's original code in which the kernel is no longer a 2×2 matrix, and other generalizations in which codes are constructed over non-binary alphabets. A slicing according to embodiments of the disclosure can be carried out for all these methods, as it relates to the graph. The notation a1N refers to a row vector (a1, . . . , aN). Polar coding creates a coding system in which one can access each coordinate channel WN(i) individually and send data only through those for which the probability of maximum-likelihood decision error is near 0. In polar coding, u1N is encoded into a codeword x1N which is sent over the channel WN, and the channel output is y1N.


Channel combining recursively combines copies of a given B-DMC W to produce a vector channel WN: XN→YN, where N=2n and n is a positive integer. The encoding of the inputs u1N to the synthesized channel WN into the inputs x1N to the underlying raw channels can be expressed as x1N=u1NGN, where GN is a generator matrix defined by






$$G_N = B_N F_2^{\otimes n}, \qquad (1)$$


for any N=2n, n≧2, where BN is a bit-reversal matrix and $F_2^{\otimes n}$ is the nth Kronecker power of








$$F_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix},$$




where ⊗ represents the Kronecker product of matrices. The transition probabilities of the synthesized channel and the underlying raw channels are related by






$$W_N(y_1^N \mid u_1^N) = W^N(y_1^N \mid u_1^N G_N). \qquad (2)$$


For an arbitrary subset A of {1, . . . , N} of size K, EQ. (1) may be rewritten as






$$x_1^N = u_A G_N(A) \oplus u_{A^c} G_N(A^c),$$


where GN(A) represents the submatrix of GN formed by the rows with indices in A. If A and uAc are fixed, leaving uA free, a mapping from source blocks uA to codewords x1N is obtained. A is referred to as the information set, and uAc∈XN-K is referred to as the frozen bits or frozen vector.
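

For concreteness, the following Python sketch carries out this encoding for N=8 over GF(2): it forms GN=BNF2⊗n as in EQ. (1), freezes the complement of an illustrative (not optimized) information set A, and computes x1N=u1NGN.

import numpy as np

def kron_power(m, n):
    """n-th Kronecker power of a binary matrix, with arithmetic mod 2."""
    out = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        out = np.kron(out, m) % 2
    return out

def bit_reversal_matrix(n):
    """Permutation matrix B_N sending index i to its n-bit reversal."""
    N = 2 ** n
    B = np.zeros((N, N), dtype=np.uint8)
    for i in range(N):
        rev = int(format(i, f'0{n}b')[::-1], 2)
        B[i, rev] = 1
    return B

n = 3
N = 2 ** n
F2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
GN = (bit_reversal_matrix(n) @ kron_power(F2, n)) % 2   # G_N = B_N F_2^{(x)n}

A = [3, 5, 6, 7]                      # illustrative information set (0-indexed)
u = np.zeros(N, dtype=np.uint8)       # frozen positions stay 0
u[A] = [1, 0, 1, 1]                   # free bits carry the data
x = (u @ GN) % 2                      # codeword x_1^N = u_1^N G_N
print(x)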


Channel splitting splits WN back into a set of N binary-input coordinate channels WN(i): X→YN×Xi-1, 1≦i≦N, defined by the transition probabilities












$$W_N^{(i)}(y_1^N, u_1^{i-1} \mid u_i) = \sum_{u_{i+1}^N \in X^{N-i}} \frac{1}{2^{N-1}} W_N(y_1^N \mid u_1^N), \qquad (3)$$







where $(y_1^N, u_1^{i-1})$ represents the output of $W_N^{(i)}$ and $u_i$ its input.



FIG. 1(a) illustrates an encoder for N=8. The input to the circuit is the bit-reversed version of u18, i.e., ũ18=u18B8. Signals flow from left to right. Each edge carries a signal 0 or 1. Each node adds (mod 2) the signals on all incoming edges from the left and sends the result out on all edges to the right. The output is given by $x_1^8 = \tilde{u}_1^8 F_2^{\otimes 3} = u_1^8 G_8$. In general, the complexity of this implementation is O(N log N), with O(N) for BN and O(N log N) for $F_2^{\otimes n}$.
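

A compact sketch of this butterfly structure is given below: the bit-reversal permutation is applied to u, and an in-place XOR network then realizes F2⊗n with O(N log N) operations. The particular input bits are arbitrary and chosen only for illustration.

def bit_reverse_indices(n):
    """Index map i -> n-bit reversal of i, used to form u~ = u B_N."""
    return [int(format(i, f'0{n}b')[::-1], 2) for i in range(2 ** n)]

def polar_transform(bits):
    """In-place x = bits * F2^{(x)n} over GF(2): O(N log N) XOR operations."""
    x = list(bits)
    N = len(x)
    step = 1
    while step < N:
        for start in range(0, N, 2 * step):
            for j in range(start, start + step):
                x[j] ^= x[j + step]     # each node XORs its two left inputs
        step *= 2
    return x

n = 3
u = [1, 0, 1, 1, 0, 0, 1, 0]
u_tilde = [u[i] for i in bit_reverse_indices(n)]   # bit-reversed input of FIG. 1(a)
x = polar_transform(u_tilde)                       # x_1^8 = u~_1^8 F2^{(x)3}
print(x)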



FIG. 1(b) illustrates a decoder for N=8. If the channel output is y1N, the decoder's job is to generate an estimate û1N of u1N given knowledge of A, uAc, and y1N. The decoder fixes the frozen part by setting ûAc=uAc, which reduces the decoding task to generating an estimate ûA of uA. The decoder includes decision elements (DEs), one for each element ui of the source vector u1N, and in general, there is a schedule of node processing over the graph. Embodiments are not limited to a specific schedule; every decoding algorithm has its own set of possible working schedules, which may depend on the hardware/software resources at hand. Based on the schedule, data processing is flooded through the graph, where processing is carried out in the nodes and data is transferred along the edges. For a node ui, the input is a set of tentative messages that can be used to generate a ratio of interest.


In an exemplary, non-limiting embodiment of the disclosure as illustrated in FIG. 1(b), the DEs can be activated in the order of 1 to N. If iεAc, the element is known; so the ith DE, when its turn comes, simply sets ûi=ui and sends this result to all succeeding DEs. Every intermediate node can be activated as soon as its two left neighbor nodes conclude their processing and as soon as certain previous decisions ui are available. If iεA, the ith DE waits until it has received the previous decisions û1i-1 and until its two left neighbor nodes conclude their processing, and then computes, based on the previous decisions and the tentative intermediate node messages, the likelihood ratio (LR)









$$L_N^{(i)}(y_1^N, \hat{u}_1^{i-1}) \triangleq \frac{W_N^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid 0)}{W_N^{(i)}(y_1^N, \hat{u}_1^{i-1} \mid 1)},$$




and generates its decision as








$$\hat{u}_i = \begin{cases} 0, & \text{if } L_N^{(i)}(y_1^N, \hat{u}_1^{i-1}) \ge 1 \\ 1, & \text{otherwise}, \end{cases}$$









which is then sent to all succeeding DEs. The complexity of this algorithm is determined essentially by the complexity of computing the LRs.


Each node in the graph computes an LR request that arises during the course of the algorithm. Starting from the left side, the first column of nodes corresponds to LR requests at length 8 (decision level), the second column of nodes to requests at length 4, the third at length 2, and the fourth at length 1 (channel level). Each node in the graph carries two labels: the first label indicates the LR value to be calculated, and the second label indicates when the node will be activated. It is to be understood that the schedule of node processing as illustrated with regard to FIG. 1(b) is exemplary and non-limiting, and there are other schedules in other embodiments of the disclosure in which the direction of computation is completely different.
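

The following Python sketch shows a compact, non-optimized successive cancellation decoder that follows this recursive computation, here written with log-likelihood ratios and min-sum style node updates rather than raw likelihood ratios; the frozen set used in the sanity check is illustrative only.

import numpy as np

def f(a, b):
    """Check-node style combination of two LLR vectors (min-sum approximation)."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, u_hat):
    """Variable-node style combination given the re-encoded left-half bits u_hat."""
    return b + (1 - 2 * u_hat) * a

def sc_decode(llr, frozen):
    """Successive-cancellation decoding of one block of length N = 2**n.
    `llr` are channel LLRs (log P(0)/P(1)); `frozen` marks known-zero u positions.
    Returns (decided bits u_hat, re-encoded partial sums x_hat)."""
    N = len(llr)
    if N == 1:
        bit = 0 if frozen[0] else int(llr[0] < 0)
        return [bit], [bit]
    a, b = np.asarray(llr[:N // 2]), np.asarray(llr[N // 2:])
    # Left half sees f(a, b) and is decided first (upper branch of the graph).
    u_left, x_left = sc_decode(f(a, b), frozen[:N // 2])
    # Right half sees g(a, b, .) once the left partial sums are available.
    u_right, x_right = sc_decode(g(a, b, np.array(x_left)), frozen[N // 2:])
    x_hat = [xl ^ xr for xl, xr in zip(x_left, x_right)] + list(x_right)
    return u_left + u_right, x_hat

# Tiny sanity check: a noiseless all-zero transmission decodes to all zeros.
N = 8
frozen = [True, True, True, False, True, False, False, False]
u_hat, _ = sc_decode(np.full(N, 4.0), frozen)
print(u_hat)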


Embodiments of the present disclosure are directed to the construction of polar codes of flexible length. The original construction of polar codes produces codes of length N=2n, for n=1, 2, . . . , while practical considerations may require other lengths. For example, in NAND flash memories, the code length is determined by the information length, which is typically a power of two, plus a small allowed overhead, e.g., 10%, and is therefore typically slightly above a power of two.


For linear codes in general, common methods for building codes of length N′<N from codes of length N are shortening and puncturing. Shortening and puncturing have also been considered for polar codes, but practical issues have been missed in the literature.


In detail, current state of the art papers use the full factor graph, as if there is no puncturing/shortening, while puncturing/shortening is accounted for by setting input log-likelihood ratios (LLRs) for punctured/shortened bits to 0/∞, respectively. The resulting complexity matches the long block length rather than the block length of what is stored, which is a poor performance characteristic.


However, it can be seen that when puncturing/shortening is performed on certain specific coordinates, the factor graph may be “sliced” such that the number of variable nodes is reduced by a factor that is equal to the ratio between the punctured/shortened length and the original length N=2n. As mentioned above, for NAND flash memories this factor may be close to 2, so that embodiments of the current disclosure can result in about a 2× reduction in the number of variable nodes. This implies an even larger reduction in the edge count and in the overall complexity in comparison to the state of the art in the literature. To conclude, embodiments of the present disclosure will typically decrease complexity by a factor of about 2 in comparison to the existing state of the art. There is also a large gain to be made in terms of the area of a hardware implementation, which would have a considerable effect on hardware costs and power consumption, and in memory size for software implementations, although memory size for software is no longer a significant issue.


More formally, according to an embodiment of the disclosure, for n≧1, define $M_n := M_1^{\otimes n}$, where







$$M_1 := \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \in F_2^{2\times 2}.$$






A polar code C⊆F2N, with N=2n, has the form






$$C = \{\, u B_n M_n \mid u = (u_1, \ldots, u_N) \in F_2^N,\ \forall i \in F: u_i = 0 \,\},$$


where F⊂{1, . . . , N} is some set of frozen coordinates.


In detail, according to an embodiment of the disclosure, write {tilde over (M)}n:=(first (N−s) rows of Mn), and note that for z∈F2N-s,








$$M_n^{-1\,\prime}\begin{pmatrix} z \\ 0 \end{pmatrix} = M_n^{\prime}\begin{pmatrix} z \\ 0 \end{pmatrix} = \tilde{M}_n^{\prime} z,$$






where A′ denotes the transpose of A. Since Mn′ is upper triangular, the last s rows of {tilde over (M)}n′ are zero. This means that when z runs over all possible vectors in F2N-s, the first N−s coordinates of







$$M_n^{-1\,\prime}\begin{pmatrix} z \\ 0 \end{pmatrix}$$





run over all possible vectors in F2N-s, while the last s coordinates must be zero. Therefore,






$$M_n^{-1\,\prime} C_s = \{\, (u'_1, \ldots, u'_N) \in F_2^N,\ \forall i \in B_n F: u'_i = 0 \ \text{and}\ u'_{N-s+1} = u'_{N-s+2} = \cdots = u'_N = 0 \,\}.$$


In other words, shortening the last s coordinates of the code is the same as shortening the (not-already-frozen bits among the) last s coordinates of u′. However, it can be observed that since all local codes in the factor graph are either repetition codes of length 2 or single-parity codes, zeroing a variable is exactly the same as deleting its node and all edges connected to that node. Hence, the factor graph may be “sliced” at both the output and the input. In addition, it can also be observed that the slicing need not occur at the last s coordinates of u′: an arbitrary set of s rows, not necessarily adjacent to each other, may be sliced from anywhere within the polar code. Note that the foregoing description is a particular example of slicing and not a general slicing, which, according to other embodiments of the disclosure, can be implemented in terms of any row of the factor graph.
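

The derivation above can be checked numerically for a small case. The sketch below (GF(2), N=8, s=3, bit reversal omitted for brevity) exhaustively verifies that forcing the last s coordinates of x=u′Mn to zero forces the last s coordinates of u′ to zero, and that the surviving codeword coordinates then equal the first N−s coordinates of u′ multiplied by the top-left (N−s)×(N−s) block of Mn, i.e., what a sliced encoder would compute.

import itertools
import numpy as np

def kron_power(m, n):
    out = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        out = np.kron(out, m) % 2
    return out

n, s = 3, 3
N = 2 ** n
Mn = kron_power(np.array([[1, 0], [1, 1]], dtype=np.uint8), n)
M_tilde = Mn[:N - s, :N - s]                      # sliced (N-s) x (N-s) block

for u in itertools.product([0, 1], repeat=N):
    u = np.array(u, dtype=np.uint8)
    x = (u @ Mn) % 2
    if not x[N - s:].any():                       # shortened: last s code bits are 0
        assert not u[N - s:].any()                # ...which forces the last s inputs to 0
        sliced = (u[:N - s] @ M_tilde) % 2        # sliced encoder on the survivors
        assert np.array_equal(sliced, x[:N - s])  # matches the transmitted part

print("shortening the last", s, "coordinates matches the sliced encoder")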


It can be shown that, for a proper choice of sliced rows, the decoding performance of sliced polar codes according to embodiments of the disclosure is mathematically equivalent to that of conventional shortening and puncturing patterns. If the last rows are sliced, the performance can be proven to be identical to shortening the last rows, but with a sliced decoder. If a bit-reversed order of rows is sliced, the performance can be proven to be identical to a bit-reversed puncturing pattern, but with a sliced decoder. Thus, by a proper choice of the sliced edges and nodes, the resulting coding scheme can operate with the same performance as certain shortening and puncturing techniques. However, the complexity of a sliced code according to an embodiment of the disclosure may match that of the sliced length k(1+OH) and not the original block length 2n, which, in the case of storage applications, can be a substantial benefit, as these systems usually operate with high rate codes.


A polar code is sliced from 2n to k(1+OH). A numeric example will better explain the issue. In standard polar coding, all code block lengths are integral powers of 2, e.g., for n=11 there is a code whose block length is 2048. This includes both information and overhead together. Suppose that the set of good indices is of size 1536; then there is communication at a rate of ¾. This is useful, but in some applications, mainly storage applications, and mainly in NAND flash storage devices, it is the number of information bits that should be an integral power of two, not the entire block length, which includes both information and parity bits together. Suppose that a storage application wishes to communicate at a rate of ¾, in which there is ~33% overhead on top of the information; then the example at hand is challenging, as memory devices, and other applications as well, will not work with an information length of 1536: it is either 1024 bits of information or 2048 bits of information. Suppose there are k=1024 bits of information; then the entire block length k(1+OH) is about 1365 bits. However, there are no polar codes of length 1365; it is either 1024 or 2048 for the entire block, information and parity. In another numerical example, suppose a channel operates with 10% overhead and that k=8192 bits. Then k(1+OH) is 9011 bits. Again, there is a polar code of length 2n=8192 or 16384, so it needs to be sliced.
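

The arithmetic of these examples can be reproduced with a few lines of Python. The rounding convention (rounding k(1+OH) down, as the quoted figures of 1365 and 9011 suggest) is an assumption made only to match the numbers above.

import math

def sliced_length(k, overhead):
    """Target block length k*(1+OH), the mother polar length 2^n just above it,
    and the number of rows s that slicing would remove."""
    target = math.floor(k * (1 + overhead))   # rounded down to match the text
    n = math.ceil(math.log2(target))
    N = 2 ** n
    return target, N, N - target

print(sliced_length(1024, 1 / 3))    # -> (1365, 2048, 683)
print(sliced_length(8192, 0.10))     # -> (9011, 16384, 7373)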


Now, there is prior art that provides a lengthening solution to this situation. However, the decoder complexity of these prior art solutions, which have no ‘slicing’, matches that of the 16384-length code and not that of a ‘sliced’ code of length 9011. This has a tremendous impact on performance and implementation issues. Both hardware and software decoders will benefit from the complexity reduction; however, in hardware there is also a substantial impact on the area and power of a decoder implementation if it is a 9011-length and not a 16384-length code.


According to an embodiment of the disclosure, the graph is modified by being sliced. That is, given a polar code graph, nodes and corresponding edges are removed from the graph to provide sliced polar codes for the target block length of interest. There are many options for removing nodes and edges. According to an embodiment of the disclosure, at least two of these options, not necessarily the most optimal ones, match the error performance of known shortening and puncturing techniques. However, a decoder according to an embodiment of the disclosure operates on the sliced graph, and its decoding complexity therefore matches that of the target length and not the original block length. Note that both the encoder and decoder graphs are sliced, not the input or output of the graph itself. Both encoder and decoder operate on, and are defined in terms of, the code graph, and should normally refer to the same graph.


Shortening and puncturing do not refer to a graph at all: these operations involve either inserting information-like bits that impose zero transmission/storage, and therefore need not be sent/stored since they are known a-priori, or, in the case of puncturing, simply not transmitting certain bits. On the other hand, slicing alters the code itself and therefore the encoding and decoding, while shortening/puncturing simply insert or delete certain bits from the transmission. Consequently, in schemes based on shortening and puncturing, the decoder and encoder must operate on the entire longer ‘mother’ code and are therefore more costly in terms of performance and implementation costs. In slicing, the code itself is altered and therefore can operate with simpler machines in terms of time complexity, chip area and power costs.


A polar code graph according to an embodiment of the disclosure is sliced at both the encoder and the decoder. Both decoder and encoder should operate on the same graph, so if a code is sliced, this should be done for both encoder and decoder. Both the encoder and the decoder operate on the shorter sliced code, given an offline slicing of the graph of the original length N=2n polar code. The resulting encoder and decoder therefore process a block of size equal to the actual stored/transmitted data and not an almost-double-length block of length 2N=2×2n.


This property is useful in most storage systems, in particular solid state storage and especially NAND flash storage systems, where typical overheads are 8-11% and information block lengths are integral powers of 2. For a 1 KB code with 10.94% overhead, a conventional decoder would have to operate on a block of size 16384 bits. A decoder for sliced polar codes according to an embodiment of the disclosure will process a block of 9088 bits. This difference will contribute to improvements in the latency, power and area of channel polarization coded schemes.


Slicing of polar codes is defined in terms of the polar code graphs, which are best defined in terms of their encoding (or decoding) graphs. FIGS. 1(a)-(b) illustrate an encoding and a decoding graph, respectively, for polar codes.



FIG. 2 illustrates the slicing of an encoder graph. In this graph, the graph rows below the dashed horizontal border are removed, along with all the edges connecting nodes on these rows to the remaining nodes in the graph. On the left-hand side, an encoder graph of a length 8 polar code is shown. The two bottom information nodes, below the dashed line, are to be sliced. The nodes, as well as all their corresponding edges, are simply cut out of the graph. The resulting graph of length 6 is shown on the right-hand side of the figure. This is the encoder graph of a sliced polar code. Slicing information nodes 8 and 4 is a non-limiting example; one may slice out nodes in an arbitrary manner. Of course, every slicing pattern has its own effect on the error performance of the resulting code, and the actual pattern should be chosen according to the actual application of the code. Note how the encoding operation relates only to the sliced graph, and therefore the complexity matches the sliced code and not the original code.
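

In matrix terms, one way to realize the sliced length-6 encoder of FIG. 2 is sketched below: the rows of the length-8 graph matrix corresponding to the sliced input nodes, and the matching output positions, are deleted together with their edges. This is an illustrative reading of the figure, not a prescribed implementation.

import numpy as np

def kron_power(m, n):
    out = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        out = np.kron(out, m) % 2
    return out

N = 8
full = kron_power(np.array([[1, 0], [1, 1]], dtype=np.uint8), 3)  # 8x8 graph matrix

# Slice graph rows 4 and 8 (0-indexed 3 and 7): drop those input nodes, their
# output nodes, and every edge touching them, as in FIG. 2.
keep = [i for i in range(N) if i not in (3, 7)]
sliced = full[np.ix_(keep, keep)]                   # 6x6 matrix of the sliced encoder

u6 = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)   # 6 remaining input nodes
print((u6 @ sliced) % 2)                            # length-6 sliced codeword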


According to an embodiment of the disclosure, one application of sliced polar codes is in ECC schemes. Here, the codes are sliced polar codes, which are then used for the ECC scheme.



FIG. 3 illustrates the mirror image of the slicing, namely on a decoder graph instead of the encoder graph. The rows below the dashed line on the left side are sliced out, along with all the edges connecting nodes within the sliced rows to the remaining rows. Node processing at non-altered nodes is kept the same. For altered nodes with a single input, no processing of the LLR is performed. Note that decoding of a channel polarization code may use successive cancellation (SC) or one of many other cancellation or decoding techniques, such as explicit belief propagation (BP) decoding and related simplifications such as min-sum (MS) decoding, and SC list decoding. All of these decoding techniques rely on the graph definition of polar codes and can be applied when the graph is sliced according to an embodiment of the disclosure.


Slicing itself is a simple but novel operation that can solve the lengthening issue in standard polar coding and can provide encoding and decoding with a complexity governed by the target length k(1+OH), where k is the information length and OH is the overhead, rather than the full polar length N=2n. The actual node processing is the same for the remaining non-altered nodes. According to embodiments of the disclosure, there is no loss in performance with respect to state-of-the-art puncturing and shortening schemes.


According to embodiments of the disclosure, it can be proven that a sliced polar code can achieve the same error performance as state-of-the-art shortening and puncturing techniques when the polar code graphs are properly sliced.


A proof of the slicing proposition may follow by induction: for an N=4 length polar code, the resulting decoding equations can be computed for all possible slicing operations. Assuming the claim holds for N′=2n-1, it can be verified to hold for an N=2n length code.


It can be shown that, for a proper choice of sliced rows, the decoding is equivalent to conventional shortening and puncturing patterns. If the last rows are sliced, the performance can be proven to be identical to shortening the last rows, but with a sliced decoder. On the other hand, if rows in bit-reversed order are sliced, i.e., the bit reverse of indices 1, . . . , #punctured bits, the performance can be proven to be identical to a bit-reversed puncturing pattern, but with a sliced decoder.



FIG. 4 illustrates another method for slicing, in which there are equivalent and almost equivalent slicing locations. This may be of some interest for certain simplified decoding techniques in which some operations on the graph are less complex than others. Instead of slicing the bottom two rows below the dashed line of an N=8 polar code, as indicated on the left side, one can slice two rows of one of the component N′=4 polar codes, indicated by the dashed circle on the right side of FIG. 4 and the left side of FIG. 5, and advance a limited polarization to the next level. Note that within the N′=4 polar code this slicing is like the former slicing, while a proper polar step should be carried out, leaving only edges that have nodes to connect to. The resulting 6 node graph is shown on the right side of FIG. 5. Note that input indices 3 and 7 act almost as 4 and 8, since within the N′=4 component code it is the same, while only the last polarization step is different. FIG. 6 shows the decoder graph of another slicing, in which the N′=4 and N′=8 codes of an 8 node graph are sliced, indicated by the dashed circles on the left side, to produce a 6 node graph on the right side.



FIG. 7 is a block diagram of an exemplary apparatus according to an embodiment of the disclosure for a sliced polar decoder. An exemplary apparatus according to an embodiment of the disclosure can be implemented in hardware or software, or any combination of the two. Referring now to the figure, an exemplary apparatus includes an input observations queue 71, a main control unit 73 connected to receive a message from the input observations queue 71, a graph definition memory unit 72 and a tentative message memory unit 74 connected to the main control unit 73, a processor unit 70 of node processors 75, 76, 77, and 78 connected to receive and transmit signals from and to the main control unit 73, and a decoded message queue 79 connected to receive a decoded message from the main control unit 73. Note that the number of node processors in the processor unit 70 is exemplary, and the processor unit 70 may contain an arbitrary number of node processors in other embodiments of the disclosure. If this exemplary apparatus is implemented in hardware, then the controller and processors can be application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or a general purpose central processing unit (CPU). The graph definition memory unit 72 can store the definition of the sliced graph being processed, and the tentative message memory unit 74 can store tentative results, such as LLRs, while messages on the graph are being processed. Both of the units 72, 74 can be, for example, a bank of registers, or a memory unit that is synthesized within the circuit. Based on the graph definition, the main control unit 73 can choose an appropriate input message from the input observations queue 71 for the processor unit 70, which can be chosen according to the specific node type that is being used, and the main control unit 73 then receives the processor output to be stored in the tentative message memory unit 74. A node processor can implement, for example, a summation or a min operation, depending on the actual polar code decoding algorithm that is chosen. When a node corresponding to a ui value is used, the main control unit 73 can decide the actual bit message of this ui node for the decoded output. It is to be understood that a slicing according to an embodiment of the disclosure can be applied to any polar decoder, with minimal overhead in implementation complexity.
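

The data flow of FIG. 7 can be sketched in a highly simplified, purely hypothetical form as follows: a schedule drives the main control unit, which dispatches each graph node to a min-type or sum-type node processor and stores the result as a tentative message. The graph, schedule, node ids, and numbers below are invented for illustration and do not correspond to an actual polar code graph.

from collections import deque

def min_sum_node(msgs):
    """Check-type node processor: sign/magnitude (min-sum) combination."""
    sign, mag = 1, float("inf")
    for m in msgs:
        sign *= 1 if m >= 0 else -1
        mag = min(mag, abs(m))
    return sign * mag

def sum_node(msgs):
    """Variable-type node processor: plain LLR summation."""
    return sum(msgs)

# Graph definition memory (unit 72): node id -> (processor type, input node ids).
graph = {
    "n0": ("input", []), "n1": ("input", []), "n2": ("input", []),
    "n3": ("min", ["n0", "n1"]),
    "n4": ("sum", ["n2", "n3"]),
}
schedule = deque(["n3", "n4"])                   # processing order chosen offline
tentative = {"n0": 1.5, "n1": -0.7, "n2": 2.0}   # input observations queue (71)
processors = {"min": min_sum_node, "sum": sum_node}

while schedule:                                  # main control unit (73)
    node = schedule.popleft()
    kind, inputs = graph[node]
    tentative[node] = processors[kind]([tentative[i] for i in inputs])

print(tentative)   # tentative message memory (unit 74) after processing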

Claims
  • 1. An apparatus for polar coding, comprising: an encoder circuit that implements a transformation C=u1N-sBN-s{tilde over (M)}n, wherein u1N-s, BN-s, {tilde over (M)}n, and C are defined over a Galois field GF(2k), k>1, wherein N=2n, s<N, u1N-s=(u1, . . . , uN-s) is an input vector of N−s symbols over GF(2k), BN-s is a permutation matrix, {tilde over (M)}n=((N−s) rows of Mn=M1⊗n), the matrix M1 is a pre-defined matrix of size q×q, 2<q and N=qn, and C is a codeword vector of N−s symbols, and wherein a decoding complexity of C is proportional to a number of symbols in C; and a transmitter circuit that transmits codeword C over a transmission channel.
  • 2. The apparatus of claim 1, wherein for a set F⊂{1, . . . , N−s}, ui=0 ∀iεF.
  • 3. The apparatus of claim 1, wherein {tilde over (M)}n=(first (N−s) rows of Mn=M1⊗n).
  • 4. The apparatus of claim 1, wherein u1N-s, BN-s, M1 and corresponding arithmetic operations are defined over the Galois field GF(2).
  • 5. The apparatus of claim 1, wherein the permutation matrix BN-s is a bit reversal matrix.
  • 6. The apparatus of claim 1, further comprising: a receiver circuit that receives an output word y1N-s from a transmission channel, wherein y1N-s is a vector of size N−s defined over a predetermined finite alphabet; and a decoder circuit that generates an estimate û1N-s of the information bits u1N-s from the output word y1N-s.
  • 7. The apparatus of claim 1, wherein $M_1=\begin{pmatrix}1&0\\1&1\end{pmatrix}$.
  • 8. An apparatus for polar coding, comprising: a receiver circuit that receives an output word y1N-s from a transmission channel, wherein y1N-s is a vector of size N−s defined over a predetermined finite alphabet, wherein output word y1N-s is related to an input word u1N-s by y1N-s=f(u1N-sBN-s{tilde over (M)}n), wherein u1N-s, BN-s, {tilde over (M)}n, and y1N-s are defined over a Galois field GF(2k), k>1, wherein N=2n, s<N, u1N-s=(u1, . . . , uN-s) is an input vector of N−s bits, BN-s is a permutation matrix, {tilde over (M)}n=((N−s) rows of Mn=M1⊗n), the matrix M1 is a pre-defined matrix of size q×q, 2<q and N=qn, and f is a stochastic functional; and a decoder circuit that generates an estimate û1N-s of the information bits u1N-s from the output word y1N-s, wherein a decoding complexity of y1N-s is proportional to a number of symbols in y1N-s.
  • 9. The apparatus of claim 8, wherein for a set F⊂{1, . . . , N−s}, ui=0 ∀iεF.
  • 10. The apparatus of claim 8, wherein {tilde over (M)}n=(first (N−s) rows of Mn=M1⊗n).
  • 11. The apparatus of claim 8, wherein u1N-s, BN-s, M1 and corresponding arithmetic operations are defined over the Galois field GF(2).
  • 12. The apparatus of claim 8, wherein the permutation matrix BN-s is a bit reversal matrix.
  • 13. The apparatus of claim 8, further comprising: an encoder circuit that implements the transformation c1N-s=u1N-sBN-s{tilde over (M)}n; and a transmitter/storage circuit that transmits/stores the codeword c1N-s over the transmission channel/storage media.
  • 14. The apparatus of claim 8, wherein $M_1=\begin{pmatrix}1&0\\1&1\end{pmatrix}$.