Physical layer communication underpins the information age (WiFi, cellular, cable and satellite modems). Codes, composed of encoder and decoder pairs, enable reliable communication: the encoder maps the original data bits into a longer sequence, and the decoder maps the received sequence back to the original bits. Reliability is precisely measured: bit error rate (BER) measures the fraction of input bits that were incorrectly decoded; block error rate (BLER) measures the fraction of times at least one of the original data bits was incorrectly decoded.
Landmark codes include Reed-Muller (RM), BCH, Turbo, LDPC and Polar codes (Richardson & Urbanke, 2008): each is a linear code and represents a mathematical breakthrough, discovered over a span of six decades. Their impact has been enormous: each of these codes has been used in global communication standards. These codes essentially operate at the information-theoretic limits of reliability over the additive white Gaussian noise (AWGN) channel when the number of information bits is large, the so-called “large block length” regime.
In the small and medium block length regimes, the state-of-the-art codes are algebraic: encoders and decoders are invented based on specific linear algebraic constructions over the binary and higher order fields and rings. Especially prominent binary algebraic codes are RM codes and closely related polar codes, whose encoders are recursively defined as Kronecker products of a simple linear operator and constitute the state of the art in small-to-medium block length regimes.
Determining new codes for emerging practical applications, e.g., the low block length regime in Internet of Things applications (Ma et al., 2019), is an area of active research. One difficult challenge in determining new codes is that the space of codes is astronomically large. For instance, a rate-1/2 code over even 100 information bits involves designing 2^100 codewords in a 200-dimensional space. Computationally efficient encoding and decoding procedures are highly desired, in addition to high reliability. Thus, although a random code is information-theoretically optimal, neither its encoding nor its decoding is computationally efficient.
The mathematical landscape of computationally efficient codes has been plumbed over the decades by some of the finest mathematical minds, resulting in two distinct families of codes: algebraic codes (RM, Polar, BCH-focused on properties of polynomials) and graph codes (Turbo, LDPC-based on sparse graphs and statistical physics). The former is deterministic and involves discrete mathematics, while the latter harnesses randomness, graphs, and statistical physics to behave like a pseudorandom code. A major open question is the invention of new codes, and especially fascinating would be a family of codes outside of these two classes.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In some embodiments, a method of encoding a set of information bits to produce a codeword that encodes the set of information bits for reliable communication is provided. The set of information bits is received. The set of information bits are provided to a plurality of permutation layers separated by neural network processing layers. Each permutation layer accepts an input vector and generates a reordered output vector that is a reordering of the input vector. Each neural network processing layer accepts a vector of input values and generates a vector of output values based on a non-linear function of the vector of input values. The reordered output vector of a final permutation layer of the plurality of permutation layers is provided as the codeword.
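For illustration purposes only, a minimal sketch of such an encoder is shown below. The layer widths, number of layers, permutations, activation function, and module names are hypothetical choices made solely to illustrate the alternating permutation-layer / neural-network-layer structure; they are not limiting.

```python
import torch
import torch.nn as nn

class PermutationLayer(nn.Module):
    """Reorders the coordinates of the input vector according to a fixed permutation."""
    def __init__(self, perm):
        super().__init__()
        self.register_buffer("perm", torch.as_tensor(perm, dtype=torch.long))

    def forward(self, x):
        return x[..., self.perm]

class NeuralEncoder(nn.Module):
    """Alternates neural network processing layers with permutation layers."""
    def __init__(self, n, perms):
        super().__init__()
        self.blocks = nn.ModuleList()
        for perm in perms:
            self.blocks.append(nn.Sequential(nn.Linear(n, n), nn.SELU(), nn.Linear(n, n)))
            self.blocks.append(PermutationLayer(perm))

    def forward(self, bits):
        x = bits
        for block in self.blocks:
            x = block(x)
        return x  # output of the final permutation layer is the (real-valued) codeword

# Example: encode a batch of length-8 (padded) information bit vectors.
n = 8
perms = [torch.randperm(n) for _ in range(3)]   # illustrative permutations
encoder = NeuralEncoder(n, perms)
codeword = encoder(torch.randint(0, 2, (4, n)).float())
```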
In some embodiments, a method of reliable wireless transmission of a set of information bits is provided. A set of information bits to be transmitted is determined and encoded in a codeword using the encoding method described above. The codeword is wirelessly transmitted.
In some embodiments, a method of decoding a codeword to retrieve a set of information bits is provided. The codeword is received, and is provided to a plurality of permutation layers separated by neural network processing layers. Each permutation layer accepts an input vector and generates a reordered output vector that is a reordering of the input vector. Each neural network processing layer accepts a vector of input values and generates a vector of output values based on a non-linear function of the vector of input values. A plurality of forward calculations and backward calculations are performed using the plurality of permutation layers separated by the neural network processing layers to retrieve the set of information bits.
In some embodiments, a method of reliable wireless reception of a set of information bits is provided. The codeword is wirelessly received. A set of information bits is decoded from the codeword using the decoding method described above.
In some embodiments, a method of training an encoder and a decoder is provided. An encoder is initialized, the encoder comprising a first plurality of first permutation layers separated by first neural network processing layers. Each first permutation layer accepts an input vector and generates a reordered output vector that is a reordering of the input vector. Each first neural network processing layer accepts a vector of input values and generates a vector of output values based on a non-linear function of the vector of input values. A decoder is initialized, the decoder comprising a second plurality of second permutation layers separated by second neural network processing layers. Each second permutation layer accepts an input vector and generates a reordered output vector that is a reordering of the input vector. Each second neural network processing layer accepts a vector of input values and generates a vector of output values based on a non-linear function of the vector of input values. A first number of optimization steps for the decoder are performed using a set of training data. A second number of optimization steps for the encoder are performed using the set of training data. The first number of optimization steps for the decoder and the second number of optimization steps for the encoder are repeated until training of the encoder and the decoder is completed.
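For illustration purposes only, the alternating training schedule described above may be sketched as follows. The channel model (AWGN), optimizer (Adam), learning rates, step counts, and batch size are illustrative assumptions, and the encoder and decoder modules are assumed to map message bits to codewords and received vectors to per-bit logits, respectively.

```python
import torch

def train(encoder, decoder, steps_dec=5, steps_enc=1, num_iters=1000,
          batch_size=256, k=8, noise_std=1.0):
    """Alternately optimize the decoder and the encoder on simulated AWGN data."""
    opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-4)
    opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    bce = torch.nn.BCEWithLogitsLoss()

    def one_step(optimizer):
        optimizer.zero_grad()
        m = torch.randint(0, 2, (batch_size, k)).float()   # random message bits
        x = encoder(m)                                      # codeword
        y = x + noise_std * torch.randn_like(x)             # simulated AWGN channel
        logits = decoder(y)                                 # per-bit logits
        loss = bce(logits, m)
        loss.backward()
        optimizer.step()

    for _ in range(num_iters):
        for _ in range(steps_dec):      # first number of optimization steps: decoder
            one_step(opt_dec)
        for _ in range(steps_enc):      # second number of optimization steps: encoder
            one_step(opt_enc)
```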
In some embodiments, a computer-readable medium is provided. The computer-readable medium has computer-executable instructions stored thereon that, in response to execution by one or more processors of a computing device, cause the computing device to perform actions of a method as described above.
In some embodiments, a computing device is provided. The computing device comprises at least one processor; and a non-transitory computer-readable medium having computer-executable instructions stored thereon. The instructions, in response to execution by the at least one processor, cause the computing device to perform actions of a method as described above.
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
The present disclosure provides a new family of codes, called KO codes, that have features of both code families: they are nonlinear generalizations of the Kronecker operation underlying the algebraic codes (RM, Polar) parameterized by neural networks. The parameters are learned in an end-to-end training paradigm in a data driven manner.
A linear encoder is defined by a generator matrix, which maps information bits to a codeword. The RM and the Polar families construct their generator matrices by recursively applying the Kronecker product operation to a simple two-by-two matrix and then selecting rows from the resulting matrix. The careful choice in selecting these rows is driven by the desired algebraic structure of the code, which is central to achieving the large minimum pairwise distance between two codewords, a hallmark of the algebraic family. This encoder can be alternatively represented by a computation graph. The recursive Kronecker product corresponds to a complete binary tree, and row-selection corresponds to freezing a set of leaves in the tree, which we refer to as a “Plotkin tree,” inspired by the pioneering construction in (Plotkin, 1960).
The Plotkin tree skeleton allows us to tailor a new neural network architecture: we expand the algebraic family of codes by replacing the (linear) Plotkin construction with a non-linear operation parametrized by neural networks. The parameters are discovered by training the encoder with a matching decoder, that has the matching Plotkin tree as a skeleton, to minimize the error rate over (the unlimited) samples generated on AWGN channels.
Algebraic codes, including the original RM/Polar codes, promise a large worst-case pairwise distance (Alon et al., 2005). This ensures that RM/Polar codes achieve capacity in the large block length limit (Arikan, 2009; Kudekar et al., 2017). However, for short block lengths they are too conservative, since we are interested in the average-case reliability. This is the gap that the KO codes described in the present disclosure exploit: we seek better average-case reliability rather than a large minimum pairwise distance.
We discover a novel non-linear code and a corresponding efficient decoder that improve significantly over the RM(9, 2) code baseline, where both codes are decoded using successive cancellation decoding with similar decoding complexity.
Analyzing the pairwise distances between two codewords reveals a surprising fact. The histogram for KO code nearly matches that of a random Gaussian codebook. This is in contrast to the RM (9, 2) code, which has discrete peaks at several values. The skeleton of the architecture from an algebraic family of codes, the training process with a variation of the stochastic gradient descent, and the simulated AWGN channel have worked together to discover a novel family of codes that harness the benefits of both algebraic and pseudorandom constructions.
In summary, the present disclosure includes at least the following: We introduce novel neural network architectures for the (encoder, decoder) pair that generalize the Kronecker operation central to RM/Polar codes. We propose training methods that discover novel non-linear codes when trained over AWGN and provide empirical results showing that this family of non-linear codes improves significantly upon the baseline codes it was built on (both RM and Polar codes) whilst having the same encoding and decoding complexity.
Interpreting the pairwise distances of the discovered codewords reveals that a KO code mimics the distribution of codewords from the random Gaussian codebook, which is known to be reliable but computationally challenging to decode. The decoding complexities of KO codes are O(n log n) where n is the block length, matching that of efficient decoders for RM and Polar codes.
We highlight that the design principle of KO codes serves as a general recipe to discover new families of non-linear codes improving upon their linear counterparts. In particular, the construction is not restricted to a specific decoding algorithm, such as successive cancellation (SC). In the present disclosure, we focus on the SC decoding algorithm since it is one of the most efficient decoders for the RM and Polar families. At this decoding complexity, i.e., O(n log n), our results demonstrate that we achieve significant gains over these codes. Our preliminary results show that KO codes achieve similar gains over the RM codes when both are decoded with list-decoding.
We formally define the channel coding problem and provide background on Reed-Muller codes, the inspiration for our approach. Our notation is the following. We denote Euclidean vectors by bold face letters like m, L, etc.
For L ∈ ℝ^n, L_{k:m} denotes the sub-vector (L_k, . . . , L_m). If v ∈ {0, 1}, we define the operator ⊕_v as x ⊕_v y = x + (−1)^v y.
With regard to channel coding, let m = (m1, . . . , mk) ∈ {0, 1}^k denote a block of information message bits that we want to transmit. An encoder gθ(·) is a function parametrized by θ that maps these information bits into a binary vector x of length n, i.e., x = gθ(m) ∈ {0, 1}^n. The rate ρ = k/n of such a code measures how many bits of information we are sending per channel use. These codewords are transformed into real (or complex) valued signals, called modulation, before being transmitted over a channel. For example, Binary Phase Shift Keying (BPSK) modulation maps each xi ∈ {0, 1} to 1 − 2xi ∈ {±1}, up to a universal scaling constant, for all i ∈ [n]. Here, we do not strictly separate encoding from modulation and refer to both binary encoded symbols and real-valued transmitted symbols as codewords.
Upon transmission of this codeword x across a noisy channel PY|X(·|·), we receive its corrupted version y ∈ ℝ^n. The decoder fϕ(·) is a function parametrized by ϕ that subsequently processes the received vector y to estimate the information bits m̂ = fϕ(y). The closer m̂ is to m, the more reliable the transmission.
An error metric, such as Bit-Error-Rate (BER) or Block-Error-Rate (BLER), gauges the performance of the encoder-decoder pair (gθ, fϕ). BER is defined as BER = (1/k) Σ_i P[m̂_i ≠ m_i], whereas BLER = P[m̂ ≠ m].
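For concreteness, BER and BLER may be estimated empirically from a batch of decoded blocks with a straightforward Monte-Carlo computation, as in the following non-limiting sketch:

```python
import numpy as np

def ber_bler(m_hat, m):
    """Empirical BER and BLER over a batch of decoded blocks (shape: [num_blocks, k])."""
    errors = (m_hat != m)
    ber = errors.mean()                      # fraction of bits decoded incorrectly
    bler = errors.any(axis=1).mean()         # fraction of blocks with >= 1 bit error
    return ber, bler

m = np.random.randint(0, 2, (1000, 8))
m_hat = m.copy()
m_hat[::50, 0] ^= 1                          # inject some bit errors for illustration
print(ber_bler(m_hat, m))
```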
The design of good codes given a channel and a fixed set of code parameters (k, n) can thus be formulated as a joint optimization over the encoder-decoder pair: find the parameters (θ, ϕ) that minimize the error rate, e.g., min_{(θ, ϕ)} BER(gθ, fϕ), while keeping encoding and decoding computationally efficient.
A fundamental question in machine learning for channel coding is thus: how do we design architectures for our neural encoders and decoders that give the appropriate inductive bias? To gain intuition towards addressing this, we focus on Reed-Muller (RM) codes. Below, we present a novel family of non-linear codes (KO codes) that strictly generalize and improve upon RM codes by capitalizing on their inherent recursive structure. Our approach seamlessly generalizes to Polar codes, as explained in detail below.
RM codes are a family of codes parametrized by a variable size m ∈ ℤ+ and an order r ∈ ℤ+ with r ≤ m, denoted as RM(m, r). An RM(m, r) code is defined by an encoder, which maps binary information bits m ∈ {0, 1}^k to codewords x ∈ {0, 1}^n. The RM(m, r) code sends k = Σ_{i=0}^{r} C(m, i) information bits, where C(m, i) denotes the binomial coefficient, with n = 2^m transmissions. The code distance d measures the minimum distance between all (pairs of) codewords; for RM(m, r), d = 2^{m−r}. These parameters are thus summarized as (k, n, d) = (Σ_{i=0}^{r} C(m, i), 2^m, 2^{m−r}).
One way to define RM(m, r) is via the recursive application of a Plotkin construction. The basic building block is a mapping Plotkin: {0, 1}^ℓ × {0, 1}^ℓ → {0, 1}^{2ℓ}, where:
Plotkin(u, v) = (u, u ⊕ v)
Plotkin (1960) proposed this scheme in order to combine two codes of smaller code lengths and construct a larger code with the following properties. It is relatively easy to construct a code with either a high rate but a small distance (such as sending the raw information bits directly) or a large distance but a low rate (such as repeating each bit multiple times). The Plotkin construction combines two such codes of rates ρu > ρv and distances du < dv to design a larger block length code with rate ρ = (ρu + ρv)/2 and distance min{2du, dv}. This significantly improves upon a simple time-sharing of those codes, which achieves the same rate but distance only min{du, dv}.
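For illustration, the Plotkin mapping on binary vectors may be sketched as follows; the example combines a rate-1 code with a repetition code, as described above.

```python
import numpy as np

def plotkin(u, v):
    """Plotkin map: (u, v) -> (u, u XOR v), doubling the block length."""
    u, v = np.asarray(u), np.asarray(v)
    return np.concatenate([u, u ^ v])

# Combine a rate-1 code (raw bits, distance 1) with a repetition code (rate 1/2, distance 2):
u = np.array([1, 0])          # two raw information bits
v = np.array([1, 1])          # one bit repeated twice
x = plotkin(u, v)             # length-4 codeword; rate (1 + 1/2)/2, distance min{2*1, 2} = 2
print(x)                      # [1 0 0 1]
```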
Note: Following the standard convention, we fix the leaves in the Plotkin tree of a first order RM(m, 1) code to be zeroth order RM codes and the full-rate RM(1, 1) code. On the other hand, a second order RM(m, 2) code contains the first order RM codes and the full-rate RM(2, 2) as its leaves.
In view of the Plotkin construction, RM codes are recursively defined as a set of codewords of the form: RM(m, r) = {(u, u ⊕ v) : u ∈ RM(m−1, r), v ∈ RM(m−1, r−1)}.
Consider, for example, the RM(3, 1) code: its Plotkin tree has the leaves RM(1, 1) and RM(1, 0), which encode the message bits (m1, m2) and m3 into the codewords (m1, m1 ⊕ m2) and (m3, m3), respectively. The next operation is the parent of these two leaves, which performs Plotkin(RM(1, 1), RM(1, 0)) = Plotkin((m1, m1 ⊕ m2), (m3, m3)) and outputs the vector (m1, m1 ⊕ m2, m1 ⊕ m3, m1 ⊕ m2 ⊕ m3), which is known as the RM(2, 1) code. This coordinate-wise Plotkin construction is applied recursively one more time to combine RM(2, 0) and RM(2, 1) at the root of the tree. The resulting codewords are RM(3, 1) = Plotkin(RM(2, 1), RM(2, 0)) = Plotkin((m1, m1 ⊕ m2, m1 ⊕ m3, m1 ⊕ m2 ⊕ m3), (m4, m4, m4, m4)).
This recursive structure of RM codes both inherits the good minimum distance property of the Plotkin construction and enables efficient decoding.
With regard to decoding, there have been several decoders developed for RM codes. The most efficient RM code decoder is called Dumer's recursive decoding (Dumer, 2004; 2006; Dumer & Shabunov, 2006b) that fully capitalizes on the recursive Plotkin construction explained above. The basic principle is: to decode an RM codeword x=(u, u ⊕v) ∈RM(m, r), we first recursively decode the left sub-codeword v ∈RM(m−1, r−1) and then the right sub-codeword u ∈RM(m−1, r), and we use them together to stitch back the original codeword. This recursion is continued until we reach the leaf nodes, where we perform maximum a posteriori (MAP) decoding. Dumer's recursive decoding is also referred to as successive cancellation decoding in the context of polar codes (Arikan, 2009).
To illustrate the decoding procedure, consider the RM(3, 1) code from the example above, and let y ∈ ℝ^8 be the corresponding noisy codeword received at the decoder. To decode the bits m, we first obtain the soft-information of the codeword x, i.e., we compute its Log-Likelihood-Ratio (LLR) L ∈ ℝ^8, where L_i = log(P(y_i | x_i = 0) / P(y_i | x_i = 1)) for i ∈ [8].
We next use L to compute soft-information for its left and right children: the RM(2, 0) codeword v and the RM(2, 1) codeword u. We start with the left child v.
Since the codeword x = (u, u ⊕ v), we can also represent its left child as v = u ⊕ (u ⊕ v) = x_{1:4} ⊕ x_{5:8}. Hence its LLR vector Lv ∈ ℝ^4 can be readily obtained from that of x. In particular, it is given by the log-sum-exponential transformation: Lv = LSE(L_{1:4}, L_{5:8}), where LSE(a, b) = log((1 + e^{a+b})/(e^a + e^b)) for a, b ∈ ℝ. Since this feature Lv corresponds to a repetition code, v = (m4, m4, m4, m4), majority decoding (same as the MAP) on the sign of Lv yields the decoded message bit m̂4. Finally, the left codeword is decoded as v̂ = (m̂4, m̂4, m̂4, m̂4).
Having decoded the left RM(2, 0) codeword v, our goal is now to obtain soft-information Lu ∈ ℝ^4 for the right RM(2, 1) codeword u. Fixing v = v̂, notice that the codeword x = (u, u ⊕ v̂) can be viewed as a 2-repetition of u depending on the parity of v̂. Thus the LLR Lu is given by LLR addition accounting for the parity of v̂: Lu = L_{1:4} ⊕_{v̂} L_{5:8} = L_{1:4} + (−1)^{v̂} L_{5:8}. Since RM(2, 1) is an internal node in the tree, we again recursively decode its left child RM(1, 0) and its right child RM(1, 1), which are both leaves. For RM(1, 0), decoding is similar to that of RM(2, 0) above, and we obtain its information bit m̂3 by first applying the log-sum-exponential function on the feature Lu and then majority decoding. Likewise, we obtain the LLR feature Luu ∈ ℝ^2 for the right RM(1, 1) child using parity-adjusted LLR addition on Lu. Finally, we decode its corresponding bits (m̂1, m̂2) using efficient MAP-decoding of first order RM codes (Abbe et al., 2020). Thus we obtain the full block of decoded message bits as m̂ = (m̂1, m̂2, m̂3, m̂4).
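For illustration purposes only, the following sketch implements the RM(3, 1) walkthrough above end to end: the recursive Plotkin encoder, BPSK transmission over a simulated AWGN channel, and Dumer's recursive decoding using the LSE feature, parity-adjusted LLR addition, and MAP decoding at the RM(1, 1) leaf. The noise level and helper names are illustrative assumptions.

```python
import numpy as np

def lse(a, b):
    """LSE(a, b) = log((1 + e^(a+b)) / (e^a + e^b)), applied coordinate-wise."""
    return np.logaddexp(0.0, a + b) - np.logaddexp(a, b)

def rm31_encode(m):
    """RM(3, 1) encoder via the recursive Plotkin construction (message bits m1..m4)."""
    m1, m2, m3, m4 = m
    u = np.array([m1, m1 ^ m2, m1 ^ m3, m1 ^ m2 ^ m3])   # RM(2, 1) codeword
    v = np.array([m4, m4, m4, m4])                        # RM(2, 0) repetition codeword
    return np.concatenate([u, u ^ v])                     # Plotkin(u, v) = (u, u XOR v)

def rm31_dumer_decode(L):
    """Dumer's recursive (successive cancellation) decoding from channel LLRs L (length 8)."""
    # Left child RM(2, 0): repetition code, so majority (sign) decoding on the LSE feature.
    Lv = lse(L[:4], L[4:])
    m4 = int(Lv.sum() < 0)
    v_hat = np.full(4, m4)
    # Right child RM(2, 1): parity-adjusted LLR addition, then recurse one more level.
    Lu = L[:4] + ((-1.0) ** v_hat) * L[4:]
    m3 = int(lse(Lu[:2], Lu[2:]).sum() < 0)               # its left leaf RM(1, 0)
    Luu = Lu[:2] + ((-1.0) ** np.full(2, m3)) * Lu[2:]    # feature for the RM(1, 1) leaf
    # Leaf RM(1, 1): MAP decoding over its four codewords c = (m1, m1 XOR m2).
    candidates = [(m1, m2) for m1 in (0, 1) for m2 in (0, 1)]
    scores = [np.dot(1 - 2 * np.array([m1, m1 ^ m2]), Luu) for m1, m2 in candidates]
    m1, m2 = candidates[int(np.argmax(scores))]
    return np.array([m1, m2, m3, m4])

m = np.array([1, 0, 1, 1])
x = rm31_encode(m)
y = (1 - 2 * x) + 0.3 * np.random.randn(8)    # BPSK transmission over AWGN (sigma = 0.3)
L = 2 * y / 0.3 ** 2                           # channel LLRs for BPSK over AWGN
print(rm31_dumer_decode(L))                    # equals m with high probability at this noise level
```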
An important observation from Dumer's algorithm is that the sequence of bit decoding in the tree is RM(2, 0) → RM(1, 0) → RM(1, 1). A similar decoding order holds for all RM(m, 2) codes, where all the left leaves (order-1 codes) are decoded first from top to bottom, and the right-most leaf (the full-rate RM(2, 2)) is decoded at the end.
We design KO codes using the Plotkin tree as the skeleton of a new neural network architecture; the resulting codes strictly improve upon their classical counterparts.
Earlier we saw the design of RM codes via recursive Plotkin mapping. Inspired by this elegant construction, we present a new family of codes, called KO codes, denoted as KO(m, r, gθ, fϕ). These codes are parametrized by a set of four parameters: a non-negative integer pair (m, r), a finite set of encoder neural networks gθ, and a finite set of decoder neural networks fϕ. In particular, for any fixed pair (m, r), our KO encoder inherits the same code parameters (k, n, ρ) and the same Plotkin tree skeleton of the RM encoder. However, a critical distinguishing component of the KO(m, r) encoder disclosed herein is a set of encoding neural networks gθ = {gi} that strictly generalize the Plotkin mapping: to each internal node i of the Plotkin tree, we associate a neural network gi that applies a coordinate-wise real-valued non-linear mapping (u, v) ↦ gi(u, v) ∈ ℝ^2, as opposed to the classical binary-valued Plotkin mapping (u, v) ↦ (u, u ⊕ v) ∈ {0, 1}^2.
The significance of our KO encoder gθ is that, by allowing general nonlinearities gi to be learnt at each node, we enable a much richer and broader class of nonlinear encoders and codes to be discovered, which contributes to non-trivial gains over standard RM codes. Further, we have the same encoding complexity as that of an RM encoder since each gi: ℝ^2 → ℝ is applied coordinate-wise on its vector inputs. The parameters of these neural networks gi are trained via stochastic gradient descent on the cross entropy loss. Further details of a non-limiting example embodiment of a training technique are provided below.
The present disclosure also provides an efficient family of decoders to match the KO encoder described above. Inspired by Dumer's decoder, we present a new family of KO decoders that fully capitalize on the recursive structure of KO encoders via the Plotkin tree. Our KO decoder has at least three distinct features, described below:
The KO codes disclosed herein improve significantly on RM codes on a variety of benchmarks.
The KO(8, 2) encoder associates an encoding neural network gi(u, v) with each internal node of its Plotkin tree.
We carefully parametrize each encoding neural network gi so that they generalize the classical Plotkin map Plotkin(u, v) = (u, u ⊕ v). In particular, we represent them as gi(u, v) = (u, g̃i(u, v) + u ⊕ v), where g̃i: ℝ^2 → ℝ is a neural network of input dimension 2 and output size 1. Here g̃i is applied coordinate-wise on its inputs u and v. This parametrization can also be viewed as a skip connection on top of the Plotkin map. Using these skip-like ideas for both encoders and decoders further contributes to the significant gains over RM codes.
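For illustration purposes only, one such encoding neural network node may be sketched as follows. The hidden-layer width is an illustrative assumption, and the binary XOR skip term is realized by the differentiable surrogate u + v − 2uv, which coincides with u ⊕ v on {0, 1}-valued inputs; how the skip term is realized on real-valued internal features is an implementation choice not prescribed here.

```python
import torch
import torch.nn as nn

class KOEncoderNode(nn.Module):
    """One KO encoding node: g_i(u, v) = (u, g~_i(u, v) + u XOR v), coordinate-wise.

    The XOR skip term is computed as u + v - 2*u*v, which equals u XOR v on {0, 1}
    inputs and stays differentiable; this surrogate is an assumption of the sketch.
    """
    def __init__(self, hidden=32):
        super().__init__()
        self.g_tilde = nn.Sequential(          # g~_i: R^2 -> R, applied per coordinate
            nn.Linear(2, hidden), nn.SELU(), nn.Linear(hidden, 1))

    def forward(self, u, v):
        pair = torch.stack([u, v], dim=-1)                     # (..., L, 2)
        residual = self.g_tilde(pair).squeeze(-1)              # (..., L)
        skip = u + v - 2 * u * v                               # coordinate-wise XOR surrogate
        return torch.cat([u, residual + skip], dim=-1)         # (u, g~(u,v) + u XOR v)

# Example: combine two length-4 codewords coordinate-wise into a length-8 output.
node = KOEncoderNode()
u = torch.randint(0, 2, (3, 4)).float()
v = torch.randint(0, 2, (3, 4)).float()
out = node(u, v)     # shape (3, 8)
```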
From an encoding perspective, recall that the KO(8, 2) code has code dimension k = 37 and block length n = 256. Suppose we wish to transmit a set of 37 message bits denoted as m = (m(2,2), m(2,1), . . . , m(7,1)) through our KO(8, 2) encoder. We first encode the block of four message bits m(2,2) into an RM(2, 2) codeword c(2,2) using its corresponding encoder at the bottom-most leaf of the Plotkin tree. Similarly, we encode the next three message bits m(2,1) into an RM(2, 1) codeword c(2,1). We combine these codewords using the neural network g6 at their parent node, which yields the codeword c(3,2) = g6(c(2,2), c(2,1)) ∈ ℝ^8. The codeword c(3,2) is similarly combined with its corresponding left codeword, and this procedure is recursively carried out till we reach the top-most node of the tree, which outputs the codeword c(8,2) ∈ ℝ^256. Finally, we obtain the unit-norm KO(8, 2) codeword x by normalizing c(8,2), i.e., x = c(8,2)/∥c(8,2)∥2.
Note that the map of encoding the message bits m into the codeword x, i.e., x=gθ(m), is differentiable with respect to θ since all the underlying operations at each node of the Plotkin tree are differentiable.
Capitalizing on the recursive structure of the encoder, the KO(8, 2) decoder decodes the message bits from top to bottom, similar in style to Dumer's decoding discussed above. More specifically, at any internal node of the tree we first decode the message bits along its left branch, which we utilize to decode that of the right branch and this procedure is carried out recursively till all the bits are recovered. At the leaves, we use the Soft-MAP decoder to decode the bits.
Similar to the encoder gθ, an important aspect of the KO(8, 2) decoder is a set of decoding neural networks fϕ = {f1, f2, . . . , f11, f12}. For each node i in the tree, f_{2i−1}: ℝ^2 → ℝ corresponds to its left branch, whereas f_{2i}: ℝ^4 → ℝ corresponds to the right branch. The pair of decoding neural networks (f_{2i−1}, f_{2i}) can be viewed as matching decoders for the corresponding encoding network gi: while gi encodes the left and right codewords arriving at this node, the outputs of f_{2i−1} and f_{2i} represent appropriate Euclidean feature vectors for decoding them. Further, f_{2i−1} and f_{2i} can also be viewed as a generalization of Dumer's decoding to nonlinear real codewords: f_{2i−1} generalizes the LSE function, while f_{2i} extends the operation ⊕_{v̂}. More precisely, we represent f_{2i−1}(y1, y2) = f̃_{2i−1}(y1, y2) + LSE(y1, y2), whereas f_{2i}(y1, y2, yv, v̂) = f̃_{2i}(y1, y2, yv, v̂) + y1 + (−1)^{v̂} y2, where (y1, y2) are appropriate feature vectors from the parent node, yv is the feature corresponding to the left child v, and v̂ is the decoded left-child codeword. We explain these feature vectors in more detail below. Note that both the functions f̃_{2i−1} and f̃_{2i} are also applied coordinate-wise.
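A corresponding non-limiting sketch of the pair of decoding neural networks at one node, with the LSE and parity-adjusted-addition residual terms described above, is shown below; the hidden-layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

def lse(a, b):
    """log((1 + e^(a+b)) / (e^a + e^b)), the soft XOR of two LLRs, coordinate-wise."""
    return torch.logaddexp(torch.zeros_like(a), a + b) - torch.logaddexp(a, b)

class KODecoderNode(nn.Module):
    """Left/right decoding networks (f_{2i-1}, f_{2i}) for one node of the Plotkin tree."""
    def __init__(self, hidden=32):
        super().__init__()
        self.f_left = nn.Sequential(nn.Linear(2, hidden), nn.SELU(), nn.Linear(hidden, 1))
        self.f_right = nn.Sequential(nn.Linear(4, hidden), nn.SELU(), nn.Linear(hidden, 1))

    def left(self, y1, y2):
        # f_{2i-1}(y1, y2) = f~_{2i-1}(y1, y2) + LSE(y1, y2)
        feat = self.f_left(torch.stack([y1, y2], dim=-1)).squeeze(-1)
        return feat + lse(y1, y2)

    def right(self, y1, y2, yv, v_hat):
        # f_{2i}(y1, y2, yv, v_hat) = f~_{2i}(...) + y1 + (-1)^v_hat * y2
        feat = self.f_right(torch.stack([y1, y2, yv, v_hat], dim=-1)).squeeze(-1)
        return feat + y1 + ((-1.0) ** v_hat) * y2

# Example usage on length-4 feature vectors.
node = KODecoderNode()
y1, y2 = torch.randn(3, 4), torch.randn(3, 4)
yv = node.left(y1, y2)
yu = node.right(y1, y2, yv, torch.zeros_like(yv))
```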
At the decoder, suppose we receive a noisy codeword y ∈ ℝ^256 at the root upon transmission of the actual codeword x ∈ ℝ^256 along the channel. The first step is to obtain the LLR feature for the left RM(7, 1) codeword: we obtain this via the left neural network f1, i.e., yv = f1(y_{1:128}, y_{129:256}) ∈ ℝ^128. Subsequently, the Soft-MAP decoder transforms this feature into an LLR vector for the message bits, i.e., L(7,1) = Soft-MAP(f1(y_{1:128}, y_{129:256})). Note that the message bits m(7,1) can be hard decoded directly from the sign of L(7,1). Instead, here we use their soft version via the sigmoid function σ(·), i.e., m̂(7,1) = σ(L(7,1)). Thus we obtain the corresponding RM(7, 1) codeword v̂ by encoding the message m̂(7,1) via an RM(7, 1) encoder.
The next step is to obtain the feature vector for the right child. This is done using the right decoder f2, i.e., yu = f2(y_{1:128}, y_{129:256}, yv, v̂). Utilizing this right feature yu, the decoding procedure is recursively carried out until we compute the LLRs for all the remaining message bits m(6,1), . . . , m(2,2) at the leaves. Finally, we obtain the full LLR vector L = (L(7,1), . . . , L(2,2)) corresponding to the message bits m. A simple sigmoid transformation, σ(L), further yields the probability of each of these message bits being zero, i.e., σ(L) = P[m = 0].
Note that the decoding map fϕ: y ↦ L is fully differentiable with respect to ϕ, which further ensures a differentiable loss for training the parameters (θ, ϕ).
Dumer's decoder for second-order RM codes RM(m, 2) performs MAP decoding at the leaves. In contrast, the KO decoder disclosed herein applies Soft-MAP decoding at the leaves. The leaves of both RM(m, 2) and KO(m, 2) codes are comprised of order-one RM codes and the RM(2, 2) code.
For MAP decoding, given a length-n channel LLR vector l ∈ ℝ^n corresponding to the transmission of a given (n, k) node, i.e., code dimension k and block length n, with codebook C over a general binary-input memoryless channel, the MAP decoder picks a codeword c* according to the following rule, from (Abbe et al., 2020): c* = arg max_{c ∈ C} ⟨1 − 2c, l⟩, where ⟨·, ·⟩ denotes the inner product.
Note that the MAP decoder and its FHT version involve an argmax(·) operation, which is not differentiable. In order to overcome this issue, we obtain the soft-decision version of the MAP decoder, referred to as the Soft-MAP decoder, to come up with differentiable decoding at the leaves. The Soft-MAP decoder obtains soft LLRs instead of hard decoding of the codes at the leaves. Particularly, consider an AWGN channel model y = s + n, where y is the length-n vector of the channel output, s := 1 − 2c, c ∈ C, and n is the vector of Gaussian noise with mean zero and variance σ^2 per element. The LLR of the i-th information bit ui is then defined as: LLR(ui) = log(Pr(y | ui = 0) / Pr(y | ui = 1)).
By applying Bayes' rule, the assumption Pr(ui = 0) = Pr(ui = 1), the law of total probability, and the distribution of the Gaussian noise, we can rewrite this equation as: LLR(ui) = log Σ_{c ∈ C: ui = 0} exp(⟨y, 1 − 2c⟩/σ^2) − log Σ_{c ∈ C: ui = 1} exp(⟨y, 1 − 2c⟩/σ^2).
We can also apply the max-log approximation to approximate this equation as follows: LLR(ui) ≈ (1/σ^2) [max_{c ∈ C: ui = 0} ⟨y, 1 − 2c⟩ − max_{c ∈ C: ui = 1} ⟨y, 1 − 2c⟩].
Finally, it is worth mentioning that, similar to the MAP rule, one can compute all the 2^k inner products in O(n 2^k) time complexity, and then obtain the soft LLRs by looking at appropriate indices. As a result, the complexity of the Soft-MAP decoding for decoding RM(m, 1) and RM(2, 2) codes is O(n^2) and O(1), respectively. However, one can apply an approach similar to the calculation of c* above to obtain a more efficient version of the Soft-MAP decoder, with complexity O(n log n), for decoding RM(m, 1) codes.
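For illustration purposes only, the following sketch computes max-log Soft-MAP LLRs for a small leaf code by brute-force enumeration of its codebook (the O(n·2^k) approach noted above); the generator-matrix interface and helper names are assumptions of the sketch.

```python
import numpy as np

def soft_map_llr(y, G, sigma):
    """Max-log Soft-MAP LLRs for the k information bits of a small linear code.

    y: length-n channel output, G: k x n binary generator matrix, sigma: noise std.
    Brute-force enumeration over all 2^k codewords (only sensible for small leaves).
    """
    k, n = G.shape
    msgs = (np.arange(2 ** k)[:, None] >> np.arange(k)) & 1       # all 2^k messages
    codewords = msgs @ G % 2                                       # (2^k, n)
    scores = (1 - 2 * codewords) @ y                               # <y, 1 - 2c> for each codeword
    llrs = np.empty(k)
    for i in range(k):
        llrs[i] = (scores[msgs[:, i] == 0].max() - scores[msgs[:, i] == 1].max()) / sigma ** 2
    return llrs

# Example on the length-4 repetition code RM(2, 0) (k = 1):
G = np.array([[1, 1, 1, 1]])
y = np.array([0.9, 1.1, 0.8, -0.2])          # noisy BPSK observations of the all-zero codeword
print(soft_map_llr(y, G, sigma=1.0))          # positive LLR -> bit decoded as 0
```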
The (encoder, decoder) neural networks may be designed to generalize and build upon the classical pairing of (Plotkin map, Dumer's decoder). In particular, as discussed above, the KO encoder gθ is parameterized as gi(u, v) = (u, g̃i(u, v) + u ⊕ v), where g̃i: ℝ^2 → ℝ is a fully connected neural network. A non-limiting example embodiment of the neural network for g̃i is illustrated in the accompanying figures. Similarly, the KO decoder may be parametrized as f_{2i−1}(y1, y2) = f̃_{2i−1}(y1, y2) + LSE(y1, y2) and f_{2i}(y1, y2, yv, v̂) = f̃_{2i}(y1, y2, yv, v̂) + y1 + (−1)^{v̂} y2, where f̃_{2i−1}: ℝ^2 → ℝ and f̃_{2i}: ℝ^4 → ℝ are also fully connected neural networks. The architecture of f̃_{2i} is illustrated in the accompanying figures.
If f̃ ≈ 0 and g̃ ≈ 0, we thus recover the standard RM(8, 2) encoder and its corresponding Dumer decoder. By initializing all the weight parameters (θ, ϕ) by sampling from 𝒩(0, 0.02^2), we are able to approximately recover the performance of RM(8, 2) at the beginning of training, which acts as an appropriate initialization for the training technique described below.
As illustrated, each decoder neural block in the decoder architecture has a total of 69×32 parameters. In some embodiments, a simpler decoder may be used by replacing all neural blocks with a smaller neural network with 1 hidden layer of 4 nodes. This decoder neural block has 20 parameters, obtaining a factor of 110 compression in the number of parameters. The computational complexity of this compressed decoder, which we refer to as TinyKO, is within a factor of 4 of Dumer's successive cancellation decoder. Each neural network component has two matrix multiplication steps and one activation function on a vector, which can be fully parallelized on a GPU. With GPU parallelization, TinyKO has the same time complexity/latency as Dumer's SC decoding. There is almost no loss in reliability for the KO(8, 2) encoder and decoder compressed in this manner. Training a smaller neural network takes about twice as many iterations as the larger one, although each iteration is faster for the smaller network.
Recall that we have the following flow diagram from the encoder to the decoder when we transmit the message bits m: m → x = gθ(m) → channel → y → fϕ(y) → m̂.
In view of this, we define an end-to-end differentiable cross entropy loss function to train the parameters (θ, ϕ), i.e.: L(θ, ϕ) = −Σ_{i=1}^{k} [mi log(1 − σ(Li)) + (1 − mi) log σ(Li)], where σ(Li) denotes the predicted probability that mi = 0.
The training technique is illustrated in the accompanying figures and described in further detail below.
Performance of a binarized version, KO-b(8, 2), is also shown in the accompanying figures.
To interpret the learned KO code, we examine the pairwise distances between codewords. In classical linear coding, pairwise distances are expressed in terms of the weight distribution of the code, which counts how many codewords of each specific Hamming weight 1, 2, . . . , n exist in the code. The weight distribution of linear codes is used to derive analytical bounds, which can be explicitly computed, on the BER and BLER over AWGN channels (Sason & Shamai, 2006). For nonlinear codes, however, the weight distribution does not capture pairwise distances. Therefore, we explore the distribution of all the pairwise distances of non-linear KO codes, which can play the same role as the weight distribution does for linear codes.
The pairwise distance distribution of RM codes remains an active area of research, as it is used to prove that RM codes achieve capacity (Kaufman et al., 2012; Abbe et al., 2015; Sberlo & Shpilka, 2020). However, these results are asymptotic in the block length and do not guarantee good performance, especially in the small-to-medium block lengths that we are interested in. On the other hand, Gaussian codebooks, i.e., codebooks whose codewords are randomly drawn from a Gaussian ensemble, are known to be asymptotically optimal, i.e., capacity-achieving (Shannon, 1948), and also demonstrate optimal finite-length scaling laws closely related to the pairwise distance distribution (Polyanskiy et al., 2010).
Remarkably, the pairwise distance distribution of the KO code shows a staggering resemblance to that of the Gaussian codebook of the same rate ρ and blocklength n. This is an unexpected phenomenon since we minimize only BER. We posit that the NN training has learned to construct a Gaussian-like codebook in order to minimize BER. Most importantly, unlike the Gaussian codebook, KO codes constructed via NN training are fully compatible with efficient decoding. This phenomenon is observed for all order-2 codes we trained.
We have also analyzed how the KO decoder contributes to the gains in BLER over the RM encoder. Let m = (m(7,1), . . . , m(2,2)) denote the block of transmitted message bits, where the ordered set of indices {(7, 1), . . . , (2, 2)} corresponds to the leaf branches (RM codes) of the Plotkin tree. Let m̂ be the decoded estimate by the KO(8, 2) decoder.
Plotkin trees of RM(8, 2) and KO(8, 2) are provided in the accompanying figures. The BLER, P[m̂ ≠ m], can be decomposed as: P[m̂ ≠ m] = P[m̂(7,1) ≠ m(7,1)] + P[m̂(6,1) ≠ m(6,1), m̂(7,1) = m(7,1)] + · · · + P[m̂(2,2) ≠ m(2,2), m̂(7,1) = m(7,1), . . . , m̂(2,1) = m(2,1)].
In other words, the BLER can also be represented as the sum of the fraction of errors the decoder makes in each of the leaf branches when no errors were made in the previously decoded branches. Thus, each term in the equation above can be viewed as the contribution of each sub-code to the total BLER. The KO(8, 2) decoder achieves better BLER than the RM(8, 2) decoder by making major gains in the leftmost (7, 1) branch (which is decoded first) at the expense of the other branches; in effect, the decoder (together with the encoder) has learnt to balance these contributions more evenly across all branches, resulting in a lower BLER overall. The unequal errors in the branches of the RM code have been observed before, and some efforts have been made to balance them (Dumer & Shabunov, 2001); that KO codes learn such a balancing scheme purely from data is, perhaps, remarkable.
As the environment changes dynamically in real world channels, robustness is crucial in practice. We therefore test the KO code under canonical channel models and demonstrate robustness, i.e., the ability of a code trained on AWGN to perform well under a different channel without retraining. It is well known that Gaussian noise is the worst case noise among all noise with the same variance (Lapidoth, 1996; Shannon, 1948) when an optimal decoder is used, which might take an exponential time. When decoded with efficient decoders, as we do with both RM and KO codes, catastrophic failures have been reported in the case of Turbo decoders (Kim et al., 2018). We show that both RM codes and KO codes are robust and that KO codes maintain their gains over RM codes as the channels vary.
We first consider a Rayleigh fast fading channel, where the i-th received symbol is yi = ai xi + ni: here ni ~ 𝒩(0, σ^2) is the additive Gaussian noise, and ai is drawn from a Rayleigh distribution with the variance of ai chosen such that E[ai^2] = 1. As shown in the accompanying figures, KO(8, 2) maintained a significant gain over RM(8, 2).
We next consider a bursty channel, where the i-th received symbol is yi = xi + ni + wi: here ni ~ 𝒩(0, σ^2) is the additive Gaussian noise, and wi ~ 𝒩(0, σb^2) with probability ρ and wi = 0 with probability 1 − ρ. In the experiment, we chose ρ = 0.1 and σb = √2 σ. As shown in the accompanying figures, KO(8, 2) was robust when tested on the bursty channel and maintained a significant gain over RM(8, 2).
Here we focus on first order KO(m, 1) codes, and in particular the KO(6, 1) code, which has code dimension k = 7 and blocklength n = 64. The training of the (encoder, decoder) pair (gθ, fϕ) for KO(6, 1) is almost identical to that of the second order KO(8, 2) described above. The only difference is that we now use the Plotkin tree structure of the corresponding RM(6, 1) code. In addition, we also train our neural encoder gθ together with the differentiable MAP decoder, i.e., the Soft-MAP, to compare its performance to that of the RM codes.
Polar and RM codes are closely related, especially from an encoding point of view. The generator matrices of both codes are chosen from the same parent square matrix by following different row selection rules. More precisely, consider an RM(m, r) code that has code dimension k = Σ_{i=0}^{r} C(m, i) and blocklength n = 2^m. Its encoding generator matrix is obtained by picking the k rows of the square matrix Gn×n = [[1, 0], [1, 1]]^{⊗m} that have the largest Hamming weights (i.e., Hamming weight of at least 2^{m−r}), where [·]^{⊗m} denotes the m-th Kronecker power. The Polar encoder, on the other hand, picks the rows of Gn×n that correspond to the most reliable bit-channels (Arikan, 2009).
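For illustration, the following sketch constructs the parent matrix Gn×n as a Kronecker power of the 2×2 kernel and selects the RM(m, r) rows by Hamming weight according to the rule above; the helper names are illustrative.

```python
import numpy as np

def parent_matrix(m):
    """m-th Kronecker power of the 2x2 kernel [[1, 0], [1, 1]]."""
    G = np.array([[1, 0], [1, 1]], dtype=int)
    out = np.array([[1]], dtype=int)
    for _ in range(m):
        out = np.kron(out, G)
    return out

def rm_generator(m, r):
    """Generator matrix of RM(m, r): rows of the parent matrix with Hamming weight >= 2**(m-r)."""
    G = parent_matrix(m)
    weights = G.sum(axis=1)
    rows = [i for i in range(2 ** m) if weights[i] >= 2 ** (m - r)]
    return G[rows, :]

# RM(3, 1): k = 1 + 3 = 4 rows of the 8x8 parent matrix.
G31 = rm_generator(3, 1)
assert G31.shape == (4, 8)
```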
The recursive Kronecker structure inherent to the parent matrix Gn×n can also be represented by a computation graph: a complete binary tree. Thus the corresponding computation tree for a Polar code is obtained by freezing a set of leaves (row-selection). We refer to this encoding computation graph of a Polar code as its Plotkin tree. This Plotkin tree structure of Polar codes enables a matching efficient decoder: successive cancellation (SC). The SC decoding algorithm is similar to Dumer's decoding for RM codes. Hence, Polar codes can be completely characterized by their corresponding Plotkin trees. Successive cancellation decoding can be significantly improved further by list decoding. List decoding allows one to gracefully trade off computational complexity and reliability by maintaining a list (of a fixed size) of candidate codewords during the decoding process. The KO(8, 2) code with list decoding enjoys a significant gain over its non-list counterpart.
Inspired by the Kronecker structure of Polar Plotkin trees, we design a new family of KO codes to strictly improve upon them. We build a novel NN architecture that capitalizes on the Plotkin tree skeleton and generalizes it to nonlinear codes. This enables us to discover new nonlinear algebraic structures.
As described above, the Plotkin tree for a Polar code is obtained by freezing a set of leaves in a complete binary tree. These frozen leaves are chosen according to the reliabilities, or equivalently, error probabilities, of their corresponding bit channels. In other words, we first approximate the error probabilities of all the n bit-channels and pick the k smallest of them using the procedure described in (Tal & Vardy, 2013). This active set of k leaves corresponds to the transmitted message bits, whereas the remaining n−k frozen leaves always transmit zero.
In the present description, we focus on a specific Polar code: Polar(64, 7), with code dimension k = 7 and blocklength n = 64. For Polar(64, 7), we obtain the active set of leaves to be A = {48, 56, 60, 61, 62, 63, 64}, and the frozen set to be A^c = {1, 2, . . . , 64} \ A. Using these sets of indices and simplifying the redundant branches, we obtain the Plotkin tree for Polar(64, 7) as illustrated in the accompanying figures.
Capitalizing on the encoding tree structure of Polar(64, 7), we build a corresponding KO encoder gθ which inherits this tree skeleton. In other words, we generalize the Plotkin mapping blocks at the internal nodes of the tree, except for the root node, and replace them with a corresponding neural network gi.
The KO encoder and decoder can be trained in an end-to-end manner using variants of stochastic gradient descent, as illustrated in the accompanying figures and described below.
The zero-append block includes an input for a sequence of bits and an output for outputting a bit sequence. In an embodiment, the code may include a codeword blocklength n, a code rate r, and an information blocklength k. The information bit sequence is a binary sequence of length k. The zero-append block appends n−k zeros to the information bit sequence to output a bit sequence of length n. For example, the codeword blocklength may be 8 and the information bit length may be 5, in which case the zero-append block appends 3 zero-valued bits to the length-5 information bits.
The permutation block includes an input for a sequence of bits and an output for outputting a bit sequence. In an embodiment, a permutation π for an n-dimensional binary vector is chosen and applied to the input bit sequence of length n to output a permuted bit sequence of length n. A permutation block of length 8 may take as input b = [b1, b2, b3, b4, b5, 0, 0, 0], with length-5 information bits and three appended zeros, and output b̃ = [b3, 0, b1, b5, 0, 0, b2, b4]. In some embodiments, different numbers of bits, blocklengths, or code rates may be used, as may different permutations.
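For illustration purposes only, the zero-append and permutation blocks of this length-8 example may be sketched as follows; the permutation indices below realize the example reordering given above.

```python
import numpy as np

def zero_append(info_bits, n):
    """Append n - k zeros to a length-k information bit sequence."""
    return np.concatenate([info_bits, np.zeros(n - len(info_bits), dtype=info_bits.dtype)])

def permute(bits, perm):
    """Reorder a length-n bit sequence according to the permutation 'perm'."""
    return bits[perm]

b_info = np.array([1, 0, 1, 1, 0])            # k = 5 information bits b1..b5
b = zero_append(b_info, n=8)                   # [b1, b2, b3, b4, b5, 0, 0, 0]
perm = np.array([2, 5, 0, 4, 6, 7, 1, 3])      # sends b to [b3, 0, b1, b5, 0, 0, b2, b4]
b_tilde = permute(b, perm)
print(b_tilde)                                 # [1 0 1 0 0 0 0 1]
```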
A rearrangement layer 2 may map h2 to h̃2 = [h̃21, h̃22, h̃23, h̃24, h̃25, h̃26, h̃27, h̃28] = [h21, h25, h23, h27, h22, h26, h24, h28]. A neural network layer 3 may be 4 parallel neural networks sharing the same weights, denoted by f3, taking h̃2 as input and outputting h3 = [h31, h32, h33, h34, h35, h36, h37, h38]. The mapping may be (h31, h32) = f3(h̃21, h̃22), (h33, h34) = f3(h̃23, h̃24), (h35, h36) = f3(h̃25, h̃26), and (h37, h38) = f3(h̃27, h̃28). A rearrangement layer 3 may be a mapping from h3 to the codeword x, where x = [x1, x2, x3, x4, x5, x6, x7, x8] = [h31, h33, h35, h37, h32, h34, h36, h38].
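For illustration purposes only, such a layer may be sketched as follows: four parallel applications of one shared-weight two-input, two-output network (weight sharing implemented by applying a single module to reshaped coordinate pairs), followed by the rearrangement of the example above. The hidden-layer width and activation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedPairLayer(nn.Module):
    """Applies one 2-in/2-out network to 4 coordinate pairs in parallel (shared weights)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2, hidden), nn.SELU(), nn.Linear(hidden, 2))

    def forward(self, h):                       # h: (..., 8)
        pairs = h.reshape(*h.shape[:-1], 4, 2)  # group coordinates into 4 pairs
        out = self.f(pairs)                     # same weights applied to every pair
        return out.reshape(*h.shape[:-1], 8)

# Rearrangement layer 3 from the example: x = [h31, h33, h35, h37, h32, h34, h36, h38].
rearrange3 = torch.tensor([0, 2, 4, 6, 1, 3, 5, 7])

layer3 = SharedPairLayer()
h2_tilde = torch.randn(5, 8)
h3 = layer3(h2_tilde)          # (h31, h32) = f3(h~21, h~22), ..., (h37, h38) = f3(h~27, h~28)
x = h3[..., rearrange3]        # the codeword, before any normalization
```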
Rearrange f-layer 1 takes an n-dimensional vector c and outputs an n-dimensional vector h̃f(t, 1) after applying a permutation πf(1) of the coordinates. Neural network step t f-layer 1 takes in h̃f(t, 1) from the rearrange f-layer 1 and h̃b(t, m) from the preceding step t backward block, and outputs an n-dimensional vector hf(t, 1).
In the subsequent layers, rearrange f-layer ℓ performs a permutation πf(ℓ) on its input and outputs h̃f(t, ℓ), and neural network step t f-layer ℓ takes as input h̃f(t, ℓ) from the rearrange f-layer ℓ and the corresponding feature from the preceding step t backward block, and outputs an n-dimensional vector hf(t, ℓ).
The final f-layer outputs an n-dimensional vector hf(t, m), which represents the likelihood of the k information bits. The sign of the likelihood of the t-th information bit gives the output, i.e., m̂t = sign(hf(t, m)σ(t)), where σ(t) is the coordinate of the t-th information bit.
The step t+1 backward block applies m alternating layers of neural network operations and rearrange operations. The rearrange operation applies a pre-determined permutation of the coordinates to n-dimensional vectors; the permutation is kept the same across different steps, but differs across the layers. A neural network layer applies a neural network to a 2n-dimensional input and outputs an n-dimensional vector. The first neural network step t+1 b-layer 1 takes as input the likelihood of the information bits hf(t, m) from the step t forward block and outputs an n-dimensional vector h̃b(t+1, 1). Rearrange b-layer 1 applies a permutation πb(1) to the coordinates of h̃b(t+1, 1) and outputs a permuted version hb(t+1, 1). Subsequent rearrange and neural network layers are alternately applied until the neural network step t+1 b-layer m outputs h̃b(t+1, m). This is permuted by the rearrange b-layer m to output the likelihood of the codeword hb(t+1, m).
In an embodiment, x = [x1, . . . , x8] is the codeword, m = 3, and all the neural network layers are 4 parallel fully connected neural networks, each taking a 4-dimensional input and outputting a 2-dimensional output, except for the step 1 forward block, where all neural networks take 2-dimensional inputs. In other embodiments, other values for m, other widths of codewords, other dimensions of inputs and/or outputs for the neural networks, and so on, may be used.
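For illustration purposes only, a structural sketch of the iterative decoder with alternating forward and backward blocks is shown below. The widths, the initial backward feature, the way forward and backward features are paired at each layer, and the number of decoding steps T are illustrative assumptions rather than a prescribed design.

```python
import torch
import torch.nn as nn

class SharedQuadLayer(nn.Module):
    """One shared 4-in/2-out network applied to 4 coordinate groups in parallel (n = 8)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(4, hidden), nn.SELU(), nn.Linear(hidden, 2))

    def forward(self, a, b):                    # a, b: (..., 8)
        quads = torch.cat([a.reshape(*a.shape[:-1], 4, 2),
                           b.reshape(*b.shape[:-1], 4, 2)], dim=-1)   # (..., 4, 4)
        return self.f(quads).reshape(*a.shape[:-1], 8)

class IterativeDecoder(nn.Module):
    """Structural sketch: T steps, each a forward block then a backward block (m layers each)."""
    def __init__(self, n=8, m=3, T=5):
        super().__init__()
        self.T = T
        self.perms_f = [torch.randperm(n) for _ in range(m)]   # fixed per layer, shared across steps
        self.perms_b = [torch.randperm(n) for _ in range(m)]
        self.f_layers = nn.ModuleList([SharedQuadLayer() for _ in range(m)])
        self.b_layers = nn.ModuleList([SharedQuadLayer() for _ in range(m)])

    def forward(self, y):                       # y: (batch, 8) received codeword
        hb = y                                  # initial codeword-domain feature (assumption)
        for _ in range(self.T):
            h = y
            for perm, layer in zip(self.perms_f, self.f_layers):   # forward block
                h = layer(h[..., perm], hb)
            hf = h                              # hf(t, m): bit-likelihood feature
            h = hf
            for perm, layer in zip(self.perms_b, self.b_layers):   # backward block
                h = layer(h, y)[..., perm]
            hb = h                              # hb(t+1, m): codeword-likelihood feature
        return hf

decoder = IterativeDecoder()
bit_likelihoods = decoder(torch.randn(4, 8))    # signs give the decoded bits
```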
In its most basic configuration, the computing device 1600 includes at least one processor 1602 and a system memory 1610 connected by a communication bus 1608. Depending on the exact configuration and type of device, the system memory 1610 may be volatile or nonvolatile memory, such as read only memory (“ROM”), random access memory (“RAM”), EEPROM, flash memory, or similar memory technology. Those of ordinary skill in the art and others will recognize that system memory 1610 typically stores data and/or program modules that are immediately accessible to and/or currently being operated on by the processor 1602. In this regard, the processor 1602 may serve as a computational center of the computing device 1600 by supporting the execution of instructions.
As further illustrated in the accompanying figures, the computing device 1600 may also include a storage medium 1604 and a network interface 1606.
Suitable implementations of computing devices that include a processor 1602, system memory 1610, communication bus 1608, storage medium 1604, and network interface 1606 are known and commercially available. For ease of illustration, and because it is not important for an understanding of the claimed subject matter, the accompanying figures do not show some of the typical components of many computing devices.
While illustrative embodiments have been illustrated and described, it will be appreciated that various changes can be made therein without departing from the spirit and scope of the invention.
Example 1: A method of encoding a set of information bits to produce a codeword that encodes the set of information bits for reliable communication, the method comprising: receiving the set of information bits; providing the set of information bits to a plurality of permutation layers separated by neural network processing layers, wherein each permutation layer accepts an input vector and generates a reordered output vector that is a reordering of the input vector, and wherein each neural network processing layer accepts a vector of input values and generates a vector of output values based on a non-linear function of the vector of input values; and providing the reordered output vector of a final permutation layer of the plurality of permutation layers as the codeword.
Example 2: The method of Example 1, wherein each neural network processing layer includes a single neural network that accepts an entirety of the vector of input values.
Example 3: The method of Example 1, wherein each neural network processing layer includes a plurality of neural networks that each accepts a subset of the vector of input values.
Example 4: The method of Example 3, wherein for a given neural network processing layer, weights of each neural network in the plurality of neural networks are the same.
Example 5: The method of any one of Examples 3-4, wherein each neural network of the plurality of neural networks accepts a pair of input values from the vector of input values as input.
Example 6: The method of any one of Examples 1-5, wherein at least one neural network processing layer of the neural network processing layers uses SeLU activation.
Example 7: The method of any one of Examples 1-6, further comprising adding one or more zero values to pad the set of information bits to a size expected by the plurality of permutation layers and the plurality of neural network layers.
Example 8: The method of any one of Examples 1-7, wherein the codeword is a real-valued vector.
Example 9: A method of decoding a codeword to retrieve a set of information bits, the method comprising: receiving the codeword; providing the codeword to a plurality of permutation layers separated by neural network processing layers, wherein each permutation layer accepts an input vector and generates a reordered output vector that is a reordering of the input vector, and wherein each neural network processing layer accepts a vector of input values and generates a vector of output values based on a non-linear function of the vector of input values; and performing a plurality of forward calculations and backward calculations using the plurality of permutation layers separated by the neural network processing layers to retrieve the set of information bits.
Example 10: The method of Example 9, wherein performing the plurality of forward calculations and backward calculations using the plurality of permutation layers separated by the neural network processing layers to retrieve the set of information bits includes: performing a forward calculation; extracting an information bit of the set of information bits from a result of the forward calculation; performing a backward calculation; and repeating at least the forward calculation to extract each information bit of the set of information bits.
Example 11: The method of Example 9, wherein performing the plurality of forward calculations and backward calculations using the plurality of permutation layers separated by the neural network processing layers to retrieve the set of information bits includes: performing the plurality of forward calculations and backward calculations; and extracting the set of information bits from a result of the plurality of forward calculations and backward calculations.
Example 12: The method of any one of Examples 9-11, wherein each neural network processing layer includes a single neural network that accepts an entirety of the vector of input values.
Example 13: The method of any one of Examples 9-11, wherein each neural network processing layer includes a plurality of neural networks that each accepts a subset of the vector of input values.
Example 14: The method of Example 13, wherein for a given neural network processing layer, weights of each neural network in the plurality of neural networks are the same.
Example 15: The method of any one of Examples 13-14, wherein each neural network of the plurality of neural networks accepts a pair of input values from the vector of input values as input.
Example 16: The method of any one of Examples 9-15, wherein at least one neural network processing layer of the neural network processing layers uses SeLU activation.
Example 17: The method of any one of Examples 9-16, further comprising removing one or more zero values from an output of the plurality of permutation layers separated by the neural network processing layers to retrieve the set of information bits.
Example 18: The method of any one of Examples 9-17, wherein the codeword is a real-valued vector.
Example 19: A method of reliable wireless transmission of a set of information bits, the method comprising: determining a set of information bits to be transmitted; encoding the set of information bits in a codeword; and wirelessly transmitting the codeword; wherein encoding the set of information bits in a codeword comprises a method as recited in any one of Example 1 to Example 8.
Example 20: A method of reliable wireless reception of a set of information bits, the method comprising: wirelessly receiving a codeword; and decoding a set of information bits from the codeword; wherein decoding the set of information bits from the codeword comprises a method as recited in any one of Example 9 to Example 18.
Example 21: A non-transitory computer-readable medium having computer-executable instructions stored thereon that, in response to execution by one or more processors of a computing device, cause the computing device to perform actions as recited in any one of Examples 1 to 20.
Example 22: A computing device, comprising: at least one processor; and a non-transitory computer-readable medium having computer-executable instructions stored thereon that, in response to execution by the at least one processor, cause the computing device to perform actions as recited in any one of Examples 1 to 20.
Example 23: A method of training an encoder and a decoder, the method comprising: initializing an encoder comprising a first plurality of first permutation layers separated by first neural network processing layers, wherein each first permutation layer accepts an input vector and generates a reordered output vector that is a reordering of the input vector, and wherein each first neural network processing layer accepts a vector of input values and generates a vector of output values based on a non-linear function of the vector of input values; initializing a decoder comprising a second plurality of second permutation layers separated by second neural network processing layers, wherein each second permutation layer accepts an input vector and generates a reordered output vector that is a reordering of the input vector, and wherein each second neural network processing layer accepts a vector of input values and generates a vector of output values based on a non-linear function of the vector of input values; performing a first number of optimization steps for the decoder using a set of training data; performing a second number of optimization steps for the encoder using the set of training data; and repeating the first number of optimization steps for the decoder and the second number of optimization steps for the encoder until training of the encoder and the decoder is completed.
Example 24: The method of Example 23, wherein the first number of optimization steps is greater than the second number of optimization steps.
Example 25: The method of any one of Examples 23-24, wherein each second permutation layer performs an opposite reordering to a corresponding first permutation layer.
Example 26: The method of any one of Examples 23-25, wherein initializing the encoder and the decoder includes randomly assigning weights for each first neural network processing layer and each second neural network processing layer.
Example 27: The method of any one of Examples 23-26, further comprising generating the set of training data by: generating a plurality of sets of source information bits; and applying a noise generator to each set of source information bits to generate a corresponding plurality of sets of noisy information bits; wherein the plurality of sets of source information bits are used as training input for the encoder and ground truth results for the decoder; and wherein the corresponding plurality of sets of noisy information bits are used as training input for the decoder.
Example 28: The method of any one of Examples 23-27, wherein performing at least one of the first number of optimization steps for the decoder and the second number of optimization steps for the encoder includes using stochastic gradient descent.
Example 29: The method of Example 28, wherein performing stochastic gradient descent includes using an Adam optimization technique.
Example 30: A non-transitory computer-readable medium having computer-executable instructions stored thereon that, in response to execution by one or more processors of a computing device, cause the computing device to perform actions as recited in any one of Examples 23 to 29.
Example 31: A computing device, comprising: at least one processor; and a non-transitory computer-readable medium having computer-executable instructions stored thereon that, in response to execution by the at least one processor, cause the computing device to perform actions as recited in any one of Examples 23 to 29.
This application claims the benefit of Provisional Application No. 63/219,264, filed Jul. 7, 2021, the entire disclosure of which is hereby incorporated by reference herein for all purposes.
This invention was made with Government support under Grant No. 1909771, awarded by the National Science Foundation. The Government has certain rights in the invention.
International Filing Data: PCT/US2022/036251, filed Jul. 6, 2022 (WO).
Related U.S. Provisional Application: No. 63/219,264, filed Jul. 2021 (US).