As Maximum Likelihood (ML) error correcting decoding of linear codes is NP-complete, the engineering paradigm has been to co-design restricted classes of linear code-books with code-specific decoding methods that exploit the code structure to enable computationally efficient approximate-ML decoding. Examples include Bose-Chaudhuri-Hocquenghem (BCH) codes with hard-detection Berlekamp-Massey decoding, Low-Density Parity-Check (LDPC) codes with belief propagation decoding, and CRC-Assisted Polar (CA-Polar) codes, which have been selected for all control channel communications in 5G New Radio (NR), with CRC-Assisted Successive Cancellation List (CA-SCL) decoding.
Modern applications, including augmented and virtual reality, vehicle-to-vehicle communications, the Internet of Things, and machine-type communications, have driven demand for Ultra-Reliable Low-Latency Communication (URLLC). Enabling these technologies requires the use of short, high-rate codes, reviving the possibility of creating high-accuracy universal decoders that are suitable for hardware implementation. Accurate, practically realizable universal decoders offer the possibility of reduced hardware footprint, the provision of hard- or soft-detection decoding for codes that currently only have one class of decoder, and future-proofing devices against the introduction of new codes.
Guessing Random Additive Noise Decoding (GRAND) is a recently introduced universal decoder that was originally established for hard decision demodulation systems; see U.S. Pat. Nos. 10,608,673 and 11,095,314. GRAND algorithms operate by sequentially removing putative noise-effects from the demodulated received sequence and querying if what remains is in the code-book. The first instance where a code-book member is found is the decoding. Pseudo-code for GRAND can be found in Table 1 below, as reproduced in the flowchart of
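The query loop summarized in Table 1 can be sketched as follows. This is an illustrative sketch only, not the pseudo-code of Table 1 itself: the function and variable names (grand_decode, H, max_queries) are hypothetical, code-book membership is assumed to be checked with a parity-check matrix H (a word c is in the code-book if and only if Hc = 0 mod 2), and putative noise effects are assumed to be queried in increasing Hamming weight, as in the hard-detection case without soft information.

```python
# Illustrative sketch of a hard-detection GRAND query loop.
# Assumptions (not from the source): parity-check membership test,
# Hamming-weight query order, and an abandonment query budget.
from itertools import combinations

def grand_decode(y, H, max_queries=10**6):
    """Return (codeword, noise_effect) or (None, None) on abandonment."""
    n = len(y)
    queries = 0
    # Enumerate putative noise effects from most likely (no flips) upward.
    for weight in range(n + 1):
        for flips in combinations(range(n), weight):
            z = [0] * n
            for i in flips:
                z[i] = 1
            # Remove the putative noise effect from the demodulated sequence.
            c = [yi ^ zi for yi, zi in zip(y, z)]
            queries += 1
            # First code-book member found is taken as the decoding.
            if all(sum(h * ci for h, ci in zip(row, c)) % 2 == 0 for row in H):
                return c, z
            if queries >= max_queries:
                return None, None
    return None, None
```

For instance, with the [3,1] repetition code (parity checks H = [[1,1,0],[1,0,1]]) and received word [1,0,0], the all-zero query fails and the first single-bit flip recovers the codeword [0,0,0] with noise effect [1,0,0].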
For an [n,k] code, where k information bits are transformed into n coded bits for communication, GRAND algorithms (i.e. algorithms implemented in accordance with the GRAND concepts) in a binary symmetric channel will identify an erroneous decoding after an approximately geometrically distributed number of code-book queries with mean 2^(n−k), and will correctly decode if they identify a code-word beforehand. As a result, an upper bound on the complexity of GRAND algorithms is determined by the number of redundant bits rather than the code length or rate directly, rendering them suitable to decode any moderate redundancy code of any length. The performance difference among GRAND variants stems generally from their utilization of statistical noise models or soft information to improve the targeting of their queries prior to the identification of an erroneous decoding.
The evident parallelizability of hard detection GRAND's code-book queries has already resulted in the proposal and realization of efficient circuit implementations for binary symmetric channels. An algorithm has also been introduced for channels subject to bursty noise whose statistics are known to the receiver, which has also resulted in proposed circuit implementations.
In accordance with the concepts described herein, it has been recognized that one question is how to make use of soft detection information, when it is available, in order to change, adjust, modify or otherwise alter (and ideally improve) GRAND's query order. Several techniques have been proposed to make use of soft detection information. One technique, referred to as Symbol Reliability GRAND (SRGRAND), avails of the most limited quantized soft information where one additional bit tags each demodulated symbol as being reliably or unreliably received. SRGRAND is mathematically analyzable, implementable in hardware, and provides a 0.5-0.75 dB gain over hard-detection GRAND. One technique, referred to as Soft GRAND (SGRAND), uses real-valued soft information per demodulated bit to build a bespoke noise-effect query order for each received signal. Using dynamic max-heap data structures, it is possible to create a semi-parallelizable implementation in software and it provides a benchmark for optimal decoding accuracy performance, but its execution is less amenable to hardware implementation.
A technique referred to as Ordered Reliability Bits GRAND (ORBGRAND) aims to bridge the gap between SRGRAND and SGRAND by obtaining the decoding accuracy of the latter in an algorithm that is suitable for implementation in circuits. See U.S. Patent Publication 2002/0302931. For a block code of length n, the ORBGRAND technique uses up to ⌈log2(n)⌉ bits of code-book-independent quantized soft detection information per received bit, the rank-order of each bit's reliability, to determine an accurate decoding. It retains the hard-detection algorithm's suitability for a highly parallelized implementation in hardware, and high throughput VLSI designs have already been proposed. ORBGRAND provides near-ML decodings for block error rates greater than 10^−4, but is less precise at higher SNR as ORBGRAND's noise effect query order diverges from the optimal rank order. An engineering approach to improve ORBGRAND's performance at higher SNR has been proposed.
As new applications drive demand for shorter, higher rate error correcting codes, computationally efficient universal decoding becomes a possibility. Universal decoders have many practical benefits, including the ability to support a practical infinity of distinct codes with the one efficient piece of software or hardware, enabling the best choice of code for each application and future proofing devices to the introduction of new codes.
Even though it was only recently introduced, GRAND is one promising approach to realizing this possibility. Hard detection GRAND algorithms enable accurate decoding of codes, such as CA-Polar codes, for which there is otherwise only a dedicated soft detection decoder. Moreover, soft detection GRAND algorithms upgrade codes for which there are only hard detection decoders, such as BCH codes, or no error correcting decoder at all, such as CRCs, to soft detection decoding. Much of the existing literature reporting results from GRAND algorithms, both hard and soft detection, has shown that decoding performance is largely driven by the quality of the decoder rather than the code, and that good CRCs and codes selected at random offer performance as good as that of highly structured codes.
As noted above, one soft detection version of GRAND that has been demonstrated to be suitable for hardware implementation is ORBGRAND. Described herein is an alternative variant of GRAND referred to as discretized soft-information for GRAND (or “DSGRAND”). The DSGRAND technique utilizes distinctly quantized soft detection information. It can tailor its levels of quantization to application need, providing improved block error rate performance as quantization becomes finer, which corresponds to more bits of soft information. DSGRAND inherits all the desirable features of GRAND algorithms, including universality, parallelizability and reduced algorithmic effort as SNR increases. DSGRAND has increasingly accurate performance as the number of soft information bits per received bit increases. With five or more soft bits per bit, it provides comparable block error rate performance to ORBGRAND in an algorithmically distinct package that is also suitable for hardware implementation.
Thus, a first embodiment of the concepts, techniques, and structures disclosed herein is a method of decoding a plurality of received symbols. The method includes further receiving, for one or more of the received symbols, up to log2(Q) bits of associated soft information, where log2(Q) is an integer. The method also includes assigning, to at least one of the one or more of the received symbols, one or more noise effect symbols having a respective weight that is determined by the up to log2(Q) bits of associated soft information. The method next includes forming noise effect sequences from the noise effect symbols. The method then includes determining a noise effect sequence guessing order according to the respective weights. The method requires forming one or more words by inverting a set of sequences of noise effect symbols, on the plurality of received symbols, according to the noise effect sequence guessing order. The method then requires determining whether each of the formed one or more words is a codeword. The method concludes by terminating according to a termination condition.
In some embodiments, determining the noise effect sequence guessing order comprises determining a total weight of each noise effect sequence and allocating the noise effect sequences into bins according to their respective total weights.
In some embodiments, allocating the noise effect sequences into bins comprises allocating noise effect sequences having larger reliability values to bins having smaller total weights.
In some embodiments, allocating a noise effect sequence into a given bin comprises solving an integer partition problem associated with the total weight of the given bin.
Some embodiments further include determining, for one or more of the noise effect symbols, one of up to Q reliability levels by discretizing a reliability value for the one or more of the noise effect symbols.
In some embodiments, the received symbols are binary symbols and each of the one or more of the received symbols has one noise effect symbol.
In some embodiments, the selection of noise effect symbols in a particular noise effect sequence is determined by a measure of proximity of the noise effect sequence to the received signal.
In some embodiments, the measure of proximity comprises a Hamming weight.
Another embodiment is a system for decoding a plurality of received symbols. The system includes a receiver for receiving from a data channel the plurality of received symbols and further receiving, for one or more of the received symbols, up to log2(Q) bits of associated soft information, where log2(Q) is an integer. The system also includes a discretization system for assigning, to at least one of the one or more of the received symbols, one or more noise effect symbols having a respective weight that is determined by the up to log2(Q) bits of associated soft information, and for forming noise effect sequences from the noise effect symbols. The system also has a noise guesser for iteratively guessing noise effect sequences according to a noise effect sequence guessing order determined according to the respective weights. And the system has a putative codeword buffer for transiently storing putative codewords formed by inverting a set of sequences of noise effect symbols, on the plurality of received symbols, according to the noise effect sequence guessing order. The system further includes a codeword validator for determining whether each of the formed one or more words is a codeword.
In some embodiments, the receiver comprises a network interface card.
In some embodiments, the putative codeword buffer comprises a primary storage or a volatile memory.
Some embodiments further have a codebook for use by the codeword validator to determine whether the word stored in the putative codeword buffer is a valid codeword.
Some embodiments have a noise outputter for outputting channel noise effect sequences, as determined by the codeword validator.
In some embodiments, the noise guesser is configured to determine the noise effect sequence guessing order by determining a total weight of each noise effect sequence assigned by the discretization system and allocating the noise effect sequences into bins according to their respective total weights.
In some embodiments, allocating the noise effect sequences into bins comprises allocating noise effect sequences having larger reliability values to bins having smaller total weights.
In some embodiments, allocating a noise effect sequence into a given bin comprises solving an integer partition problem associated with the total weight of the given bin.
In some embodiments, the discretization system is configured to determine, for one or more of the noise effect symbols, one of up to Q reliability levels by discretizing a reliability value for the one or more of the noise effect symbols.
In some embodiments, the receiver is configured to receive the received symbols as binary symbols.
In some embodiments, the discretization system forms noise effect symbols into a particular noise effect sequence using a measure of proximity of the noise effect sequence to the received signal.
In some embodiments, the measure of proximity comprises a Hamming weight.
The manner and process of making and using the disclosed embodiments may be appreciated by reference to the figures of the accompanying drawings. It should be appreciated that the components and structures illustrated in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the concepts described herein. Like reference numerals designate corresponding parts throughout the different views. Furthermore, embodiments are illustrated by way of example and not limitation in the figures, in which:
Before describing the broad concepts, devices, systems and techniques sought to be protected herein, some introductory concepts are explained.
In many systems, particularly in communication systems where a channel comprises one or more communication channels, there may be information available regarding the characteristics of the one or more channels. Such characteristics sometimes affect information (e.g., analog and/or digital signals) propagating in the channel (with such effects referred to herein as “channel effects”). Information about the channel effects (i.e., information describing the channel effects) may include: (1) information related to the general nature of the noise; and/or (2) information related to the specific realization of the noise on the channel.
With respect to information related to a specific realization of the noise affecting the channel, this may be referred to as “soft information.” Soft information can comprise instantaneous information. For example, soft information can comprise instantaneous detection confidence information, log-likelihood ratio information (discussed later), signal-to-noise ratio information, signal-to-interference ratio information, information regarding inter-symbol interference (e.g. from a Rake receiver), information about possible noise behaviors (e.g. noise being more likely at the end of a transmission, when timing information may be stale or approaching obsolescence), or the interplay between modulation (e.g., particular types of modulation) and the physical channel.
Such soft information is difficult to account for in decoding schemes. While so-called “Turbo” decoding maintains certain soft information implicitly in its decoding process, other schemes generally require bespoke modifications that add significant complexity. Thus, as for the general behavior of the noise, notwithstanding the development of theoretical improvements, current decoding schemes in general ignore the effect of soft information.
In one embodiment of the concepts, devices, systems and techniques disclosed herein, consider a system using a binary [n,k] block code. Binary data are modulated as symbols, transmitted, and subject to additive continuous noise. The modulated channel output Y^n is then demodulated to provide the hard-detection output y^n = demod(Y^n) ∈ 𝔽_2^n.
In contrast to the continuous noise impacting the channel, the “noise effect” is the binary difference between the code-word and demodulated output in
Processing techniques (or generally “algorithms”) based upon the GRAND concepts seek to identify the noise effect, Z^n, by rank ordering noise effects, z^n, from most likely to least likely based on the information available to them, and querying if what remains when a putative noise effect is removed from a demodulated signal is in the code-book. Thus, for the case of binary symbols, let
L(Y) = |LLR(Y)| denote the reliability of the signal Y, where LLR denotes the log-likelihood ratio. Given L(Y), elementary manipulation reveals that the likelihood that the corresponding hard demodulated bit, y, is in error is p = 1/(1 + e^L(Y)), and so the likelihood of a putative noise effect sequence z^n is decreasing in Σ_{i=1}^n L(Y_i)z_i. Consequently, to create binary noise effect sequences, z^n, rank-ordered by likelihood, it suffices to create them in increasing order of Σ_{i=1}^n L(Y_i)z_i. Distinct GRAND algorithms use different approximations, R(Y_i), to L(Y_i) that depend on the information that is available to them.
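These relations can be illustrated numerically. Assuming the standard identity p = 1/(1 + e^L) for the probability that a hard-demodulated bit is in error given reliability L = |LLR|, a minimal sketch (function names are our own, not from the source) is:

```python
# Illustrative helpers for the reliability relations described above.
import math

def bit_error_prob(L):
    """P(hard-demodulated bit is in error) given reliability L = |LLR| >= 0."""
    return 1.0 / (1.0 + math.exp(L))

def sequence_score(L, z):
    """Sum of reliabilities of flipped positions, sum_i L(Y_i) * z_i.
    Noise-effect sequences queried in increasing order of this score
    are rank-ordered from most likely to least likely."""
    return sum(Li * zi for Li, zi in zip(L, z))
```

As expected, a reliability of 0 gives error probability 1/2, and flipping a low-reliability bit scores lower (is queried earlier) than flipping a high-reliability bit.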
In the absence of soft detection information, the approximation is that R(Y_i) is a constant for all i, and so noise effect sequences are rank-ordered by their Hamming weight, w_H(z^n) = Σ_{i=1}^n z_i. SRGRAND's binary quantization sets R(Y_i) = ∞ for bits above a threshold, tagging them as being perfectly reliable, and R(Y_i) to a constant for those below the threshold, resulting in noise effect sequences following increasing Hamming weight within the region of unreliable bits. ORBGRAND considers the received bits rank-ordered in increasing reliability. If rank-ordered reliabilities increase linearly, i.e., R(Y_i) = αi with any slope α > 0, then noise effect sequences follow increasing logistic weight w_L(z^n) = Σ_{i=1}^n i·z_i, with positions labelled in that rank order. SGRAND makes no approximation, assumes that the L(Y_i) are real-valued, and dynamically creates putative noise effect sequences with increasing Σ_{i=1}^n L(Y_i)z_i.
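The two combinatorial weight metrics above can be sketched as follows; hamming_weight and logistic_weight are our own illustrative names for w_H and w_L, with positions assumed to be labelled 1 through n in increasing rank-ordered reliability for the logistic case:

```python
def hamming_weight(z):
    """w_H(z^n) = sum_i z_i: the query metric with no soft information."""
    return sum(z)

def logistic_weight(z):
    """w_L(z^n) = sum_i i * z_i, positions labelled 1..n in increasing
    rank-ordered reliability, as in the ORBGRAND ordering."""
    return sum(i * zi for i, zi in enumerate(z, start=1))
```

For example, the noise effect [1,0,1,1] has Hamming weight 3 but logistic weight 1 + 3 + 4 = 8, so ORBGRAND may query some weight-2 patterns involving reliable positions after heavier patterns confined to unreliable ones.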
In contrast to the above techniques, DSGRAND envisages a quantization of the real-valued reliabilities, L(Y), into a restricted number of categories determined by a quantization level as shown in
For a given block of n bits and corresponding discretized reliabilities, let s(i,q) = 1 if R(Y_i) = β_q and zero otherwise, let m_q = Σ_{i=1}^n s(i,q) be the number of bits in quantization level q, and let z^{n,q} denote the subset of the string z^n such that s(i,q) = 1. With this quantized approximation, one embodiment of DSGRAND's query order follows W(z^n) = Σ_{q=1}^Q q·w_H(z^{n,q}) in increasing order. That is, the query order follows a weighted sum of the Hamming weights of the bit flips corresponding to the indices in each discretized level.
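A minimal sketch of this discretized weight follows, assuming (consistent with the worked binning example herein, where one level-2 flip is binned with two level-1 flips) that a flip at quantization level q contributes weight q; the function name is illustrative:

```python
def dsgrand_weight(z, R):
    """Discretized weight of a putative noise-effect sequence z, assuming
    each position i carries an integer quantized reliability level R[i]
    in 1..Q. Equals the weighted sum over levels q of q times the number
    of flips at level q."""
    return sum(Ri * zi for Ri, zi in zip(R, z))
```

With levels R = [1, 1, 2, 3], flipping the two level-1 bits and flipping the single level-2 bit both yield weight 2, so those sequences land in the same bin.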
Given log2(Q) bits of soft information per bit, DSGRAND creates a quantization of the real-valued reliabilities, L(Y_i), into the first Q−1 categories, with the final category covering [β(Q−1), ∞) and being assigned R(Y) = Q. Thus for a received block of symbols with m^Q = (m_1, . . . , m_Q) bits in each quantization level and a total weight W, we wish to identify the set of noise effect sequences (sometimes called herein a “bin”)
Defining w^Q = (w_1, w_2, . . . , w_Q), with each w_q representing the number of bits in the corresponding quantization level q to be flipped for the given sequence weight W, and setting ω(W, m^Q) = {w^Q : Σ_{q=1}^Q q·w_q = W, 0 ≤ w_q ≤ m_q} to be all viable combinations of weights w_q that give the desired overall weight W, the above set of noise effect sequences in the bin of weight W can be identified with the union over w^Q ∈ ω(W, m^Q) of the products over q of the sets {z^{n,q} : w_H(z^{n,q}) = w_q}, where the product is Cartesian.
With a total weight W = 0, one has ω(0, m^Q) = {(0, . . . , 0)} and the most likely noise effect is no bit flips. With W = 1, one computes ω(1, m^Q) = {(1, 0, . . . , 0)}, so all noise effect sequences that have any one bit with s(i,1) = 1 flipped are binned together. With an overall weight of W = 2, if m_1 ≥ 2, ω(2, m^Q) consists of two vectors, (2, 0, . . . , 0) and (0, 1, 0, . . . , 0), so that any sequence with two bits, say i and j, with s(i,1) = 1 and s(j,1) = 1 flipped, or any sequence with one s(i,2) = 1 bit flipped, would be binned together, and so on.
In practice, identifying the set ω(W, m^Q) amounts to an integer partition problem that can be solved efficiently. For a given w^Q ∈ ω(W, m^Q), generating all noise effect sequences
can be achieved using minor modifications to the circuits developed for hard-detection GRAND-BSC algorithms. As a result, DSGRAND is well suited to implementation in hardware.
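The bounded integer-partition enumeration of ω(W, m^Q) can be sketched as follows, assuming a flip at level q contributes weight q and that at most m_q bits are available at level q; the function name and the enumeration order are illustrative:

```python
def omega(W, m):
    """All vectors (w_1,...,w_Q) with sum_q q*w_q == W and 0 <= w_q <= m[q-1],
    where m[q-1] is the number of received bits in quantization level q.
    A bounded integer-partition enumeration by simple recursion."""
    Q = len(m)
    out = []

    def rec(q, remaining, prefix):
        if q == Q:
            # Last level: remaining weight must be Q * w_Q for a valid w_Q.
            if remaining % Q == 0 and remaining // Q <= m[Q - 1]:
                out.append(tuple(prefix + [remaining // Q]))
            return
        # Level q flips each cost q; respect both the bit count and the budget.
        for w in range(min(m[q - 1], remaining // q) + 1):
            rec(q + 1, remaining - q * w, prefix + [w])

    rec(1, W, [])
    return out
```

For example, with Q = 2 levels holding m = (2, 1) bits, omega(2, [2, 1]) yields (2, 0) and (0, 1), matching the worked example: two level-1 flips or one level-2 flip.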
For simulated performance evaluation, we assume binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) with variance σ². From the discussion above, we have that L(Y) = 2|Y|/σ², and we elect to quantize into bins with parameter
The first term, 2/σ², normalizes for the increase in reliability with SNR, while the second term, (1−σ/2)/Q, ensures that approximately 30% of the least reliable bits are accurately quantized, while the 70% most reliable bits are grouped together. For comparison with DSGRAND at distinct levels of quantization, we use GRAND as an ML hard detection decoder and ORBGRAND as the state-of-the-art universal soft detection decoder.
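The quantization step itself can be sketched as follows, assuming BPSK over AWGN so that L(Y) = 2|Y|/σ², uniform bins of width β with level q covering [β(q−1), βq), and an open-ended final level assigned R(Y) = Q; the function name and signature are illustrative and the choice of β is left as a parameter:

```python
def quantize_reliabilities(Y, sigma2, Q, beta):
    """Map each BPSK channel output Y_i to an integer reliability level in 1..Q.
    Assumes L(Y) = 2|Y|/sigma^2; level q covers [beta*(q-1), beta*q) for
    q < Q, and the final level covers [beta*(Q-1), infinity)."""
    levels = []
    for y in Y:
        L = 2.0 * abs(y) / sigma2
        q = int(L // beta) + 1  # which uniform bin L falls into
        levels.append(min(q, Q))  # clamp to the open-ended final level
    return levels
```

With σ² = 1, Q = 4 and β = 1, outputs 0.1, 1.0 and 5.0 have reliabilities 0.2, 2.0 and 10.0, landing in levels 1, 3 and 4 respectively.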
Referring now to
The encoded digital bit stream (or more simply “encoded bit stream”) is provided to a transmitter which receives and modulates the encoded bit stream for transmission over a channel as is generally known. It should be appreciated that any encoding and/or modulation schemes (including any combination of encoding and modulation) may be used (e.g. any linear or nonlinear modulation scheme may be used).
The channel may be implemented as a wired channel (e.g. comprising one or more of twisted-pair wire, coaxial cable, and fiber-optic cable) and/or a wireless channel (e.g., utilizing transmission of radio frequency signals, microwave signals, satellite signals, acoustic signals, or infrared signals). Multiple encoded bit streams may be multiplexed over the channel as is generally known.
The encoded modulated bit stream propagates through the channel and is received in whole or in part by a receiving system. The receiving system includes a receiver configured to receive and demodulate the encoded modulated bit stream signal provided thereto. The receiver may include a demodulator/de-mapper that converts received, raw channel data to encoded data symbols (e.g. encoded via QAM or other technique) that may be bits, and provides demodulated data symbols to a decoder operating in accordance with noise effect guessing techniques. As such, the decoder may decode any strings of symbols provided thereto regardless of the technique used to encode the symbols. Thus, the decoder acts as a universal decoder (i.e. a decoder capable of decoding any encoded data symbols).
The receiving system further includes a binning system. The binning system receives and processes soft information on noise effect provided thereto. The soft information on noise effect may be per-symbol soft information (i.e., soft information related to a particular symbol). The soft information on noise effect may be provided to the binning system from one or more sources. For example, the binning system may receive quantized soft information from the channel (i.e., directly from the channel) and/or from the demodulator/de-mapper and/or from any other sources. The binning system processes the soft information on noise effect provided thereto for a sequence of symbols and provides binned putative sequences of noise effects to the decoder. Examples of the manner in which the binning system processes the soft information will be described in detail hereinbelow. In general, however, the soft information on noise effect associated with a symbol can be used to create a set of quantized weights for the noise effects that may affect that symbol. Sequences of noise effects, where each noise effect in the sequence is selected from the set of possible noise effects for the corresponding symbol, are binned in such a way that all strings of noise effects in a bin have an equal or similar total weight, where the total weight is a function of the quantized soft information of the noise effects for the symbols in the sequence.
The design of the bins is related to the soft information quantization as described herein. In general overview, however, once a quantization is selected it is fixed, and it is possible to develop different binnings from the same quantized soft information.
The decoder receives the quantized soft information and creates the corresponding binning of putative noise effect sequences. The decoder decodes the encoded bits provided thereto using a noise guessing technique based on the binning of sequences of putative noise effect sequences.
It should be appreciated that the various embodiments of binning systems and decoders described herein may be implemented in one or more processors and/or the functionality performed by the binning system and decoder may be in the same processor or distributed across multiple processors. After reading the disclosure provided herein, one of ordinary skill in the art will appreciate how to implement the systems and techniques described herein in practical circuits and devices.
Referring now to
The decoder operates in accordance with noise guessing techniques and also utilizes soft information provided thereto. The decoder may decode any symbols provided thereto regardless of the technique used to encode the bits.
In this example embodiment, the decoder includes a binning system which receives and processes the demodulated symbols and quantized soft information provided thereto. The binning system may be the same as or similar to the binning system described above in conjunction with
The decoder provides quantized noise effects and associated symbols to a guessing processor which operates in accordance with a noise guessing technique which uses the quantized noise effects to generate a binning of putative noise effect sequences on associated symbol sequences. Using the bins to order the guessing, the guessing processor provides symbols to an output processor.
The receiver may also provide channel information (e.g. signal-to-noise ratio) and optionally symbol-specific soft information on noise realization to a termination processor. In embodiments, the termination processor may determine a termination condition based upon received information. In general, the termination processor provides information to the output processor which indicates to the output processor that a termination (or stopping) condition exists.
As indicated in the example embodiment of
In embodiments, the termination processor may determine a termination condition based upon one or more of: a receiver SNR exceeding a threshold SNR; a repetition count reaching a threshold value; a determination that a codebook rate is greater than capacity due to a temporary decrease in channel capacity (e.g. as caused by transient noise); a codeword validator declaring an erasure for a code; the guessing processor providing a first codeword; the guessing processor providing a predetermined number of codewords; a predetermined number of guesses being reached; a number of guesses determined according to SNR being reached; some total number of guesses being reached; a time limit being reached; a likelihood that a correct symbol has been found (i.e. a likelihood of correctness or a probability of correct decoding) being reached; a likelihood that a correct symbol has been found exceeding a threshold likelihood value; or the existence of a set of codewords all having a probability of correctness greater than some threshold probability value PLIST. In embodiments, the termination processor may determine a termination condition based upon soft information. In embodiments, the termination processor may determine a termination condition based upon a combination of receiver information and soft information. In embodiments, the use of soft information may be optional (as indicated by the dashed soft information line leading to the termination processor in
The output processor may operate in a variety of different manners according to the needs of a particular application. In one example embodiment, the output processor receives de-coded symbols from the guessing processor and a termination condition from the termination processor and in response thereto the output processor provides an output. The output may correspond, for example, to: a fixed size list of de-coded symbols; a list of lists of symbols ordered according to probabilities; or a decision to abandon the decoding process for one or more symbols. The output processor may process the information provided thereto in parallel or in series.
It should be appreciated that the guessing processor, termination processor and output processor (and any of the processors described herein) may comprise more than one processor (or processing device), and the functions of the guessing, termination and output processors may be distributed among one or more processors or processing devices.
Referring now to
The device 30 includes a receiver 31 for receiving channel output blocks from a data channel. The receiver 31 may be, for example, a network interface card (NIC) described below, or similar means. The receiver 31 may, in some embodiments, be configured to receive data from the data channel as data blocks.
The device 30 includes a noise guesser 32 for iteratively guessing noise effect sequences. The noise guesser 32 may be implemented, for example, using a CPU and primary storage, a custom integrated circuit, or similar means.
The device 30 includes a putative codeword buffer 33 for transiently storing putative codewords (i.e. sequences of symbols input to the channel). The putative codeword buffer 33 may be, for example, primary storage, a volatile memory, or similar means. Thus, the putative codeword buffer 33 stores a channel output block containing a sequence of symbols demodulated by the receiver 31. The putative codeword buffer 33 is also used to store the channel output block after using a guessed noise effect sequence (i.e. a sequence of noise effect symbols) received from the noise guesser 32 to reverse a putative noise effect, as indicated in
The device 30 includes a codeword validator 34 for validating words. The codeword validator 34 may be implemented, for example, using a CPU and primary storage, a custom integrated circuit, or similar means. Thus, if the codeword validator 34 determines that the word stored in the putative codeword buffer 33 is a valid codeword, it may transmit a “success” signal to the noise guesser 32, and a sent codeword outputter 36 and noise outputter 37 described below. If not, the codeword validator 34 may transmit a “continue” signal to the noise guesser 32 that it should guess a next most likely sequence of noise symbols.
In some embodiments, the codeword validator 34 may determine that the device 30 should cease further attempting to guess noise effect sequences. If so, it may transmit a “failure” signal to the noise guesser 32, the sent codeword outputter 36, and the noise outputter 37.
The device 30 may include an optional codebook 35 for use by the codeword validator 34. The codebook 35 may be implemented, for example, using primary storage, a volatile or non-volatile or programmable memory, or similar means. In some embodiments, the codeword validator 34 uses the codebook 35 to determine whether the word stored in the putative codeword buffer 33 is a valid codeword. If so, the codeword validator 34 transmits the “success” signal described above. In other embodiments, the optional codebook 35 is absent, and the codeword validator 34 performs a computational validation to determine whether the word stored in the putative codeword buffer 33 is valid. If so, the codeword validator 34 generates the “success” signal described above.
The device 30 includes a sent codeword outputter 36 for outputting channel input codewords sent by the data sender, as determined by the codeword validator 34. The sent codeword outputter 36 may be any coupling to another circuit or device (not shown) that performs further processing on the channel input codeword, as determined by the device 30. Thus, the sent codeword outputter 36 outputs the word stored in the putative codeword buffer 33 upon receiving a “success” signal from the codeword validator 34.
In some embodiments, the sent codeword outputter 36 performs the error handling process 27. Thus, upon receiving a “failure” signal from the codeword validator 34, the sent codeword outputter 36 indicates the failure to the coupled circuit. Failure may be indicated, for example, by producing a high- or low-voltage error signal on the coupled circuit. Alternately or in addition, the sent codeword outputter 36 may transmit an erasure (e.g. a block of all zeroes or all ones), or “soft” information about the error to the coupled circuit to permit the coupled circuit to diagnose the error. Such soft information may include, for example, a count of how many tries to decode had been performed, or data indicating an ordering of the noise effect sequences.
The device 30 includes a noise outputter 37 for outputting channel noise effect sequences, as determined by the codeword validator 34. The noise outputter 37 may be any coupling to another circuit or device (not shown) that performs further processing on the guessed sequence of noise effects. The noise outputter 37 outputs the guessed sequence of noise effects upon receiving a “success” signal from the codeword validator 34.
The noise guesser 32 optionally includes a function for analyzing noise effect sequences, as indicated by the dashed line 38 from the noise outputter 37 to the noise guesser 32. Thus, in addition to outputting the guessed sequence of noise symbols to any coupled circuit, the noise outputter 37 may output the guessed noise effect sequence to the noise guesser 32. The noise guesser 32, in turn, may analyze the guessed noise effect sequence, for example using machine learning, to learn patterns of the channel noise. The noise guesser 32 may then use these noise patterns to update its noise model and reorder the noise effect sequences. Such reordering may be accomplished, for example, in accordance with a likelihood order (e.g. a maximum likelihood order), and may be made using an estimation technique or a direct calculation technique. In some embodiments, the analysis includes training, where known input blocks are used to train the receiver on channel noise. Alternately or in addition, the analysis may incorporate extrinsic information, such as feedback from the sender or from other decoders, to enable on-the-fly machine learning. The analysis, or a portion thereof, may be performed by a circuit coupled to the noise outputter 37, and the reordered noise effect guesses may be fed back into the noise guesser 32. Such an arrangement advantageously simplifies the design of the noise guesser 32.
The device 30 further includes a binning (or discretization or quantization) system 39. The binning system 39 may be implemented, for example, using an application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), or similar device. In one embodiment, the binning system 39 is programmed, based on the number of bits Q of soft information that are associated with each data symbol received from the channel, to divide a continuum of reliability values into up to 2Q reliability levels, and to assign each noise effect associated with the received data symbol to one of the reliability levels according to soft information that is associated with the symbol, as shown in connection with
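As a concrete, non-limiting sketch of this binning, one may assume reliabilities arrive as magnitudes in a known range [0, r_max] and use uniformly spaced thresholds (both assumptions are ours for illustration; non-uniform thresholds are equally contemplated):

```python
def quantize_reliability(r, r_max, q_bits):
    """Map a reliability magnitude r in [0, r_max] to one of 2**q_bits levels,
    numbered 1 (least reliable) through 2**q_bits (most reliable)."""
    levels = 2 ** q_bits
    level = int(r / r_max * levels) + 1
    return min(level, levels)  # clamp r == r_max into the top level
```

With Q = 2 bits of soft information, this yields four levels; for example, a reliability of 0.1 on a [0, 1] scale falls into level 1.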
Moreover, the noise guesser 32 is configured to determine a noise effect guessing order according to the reliability levels provided by the binning system 39. In particular, in some embodiments, determining the noise effect guessing order includes ordering sequences of noise effects by increasing reliability values. Each reliability value corresponds to a weight associated with a bin, where the weights of all noise effect sequences in a bin are equal or similar to the weight of the bin, and where the weight of a noise effect sequence is derived from the quantized soft information associated with the noise effects of the symbols in the corresponding sequence. In some embodiments, assigning noise effect sequences to a bin is determined by solving the integer partition problem for the total weight associated with that bin.
In the embodiment disclosed above, quantization is performed by a quantization system outside of the decoder, to reduce the amount of data transported internally in the decoder. However, it is also contemplated that, in some embodiments, quantization may be performed within the decoder, rather than as a separate function within the device.
It should be appreciated that in various embodiments, quantization is performed on a symbol-by-symbol basis, while determining noise effect sequences is performed on a block-by-block basis.
The method 40 includes, prior to receiving the symbols, two processes. The first process 41 divides a continuum of reliability values into at most 2Q reliability levels, according to the techniques described above. It is appreciated that possession of Q bits of soft information permits dividing the continuum of reliability values into at most 2Q levels, but fewer levels may be used.
The second process 42 determines a noise effect guessing order according to the reliability levels. One such noise effect guessing order is illustratively shown in
The method 40 next includes a process 43 for receiving a data symbol (e.g. a bit, or a point in a constellation such as QAM) in a block of data symbols. In some embodiments, the process 43 receives the data symbol into the receiver 31.
Either concurrently or at a later time, the method 40 includes a process 44 for receiving a Q-bit reliability value for the data symbol, i.e. as soft information relating to the data symbol. In some embodiments, the process 44 receives the reliability value into the quantization processor 39. For example, if the data symbol is meant to represent a point in a constellation, the reliability value may indicate a distance from the symbol to the nearest point in the constellation. Alternately, the reliability value may indicate a signal-to-noise ratio (SNR) of the channel at the time that the symbol was transmitted. It is appreciated that other soft information may be included in the Q-bit reliability value, as known in the art. If the soft information is received at a later time, the method 40 applies a mechanism for associating the received soft information with its corresponding data symbol, e.g. through the use of timestamps or other techniques known in the art.
The method 40 further includes a process 45 for assigning the reliability value to one of the 2Q levels. The assignment may be performed by the quantization processor 39, or by the noise guesser 32, or by some electronic circuitry for that purpose. Assignment of reliability values to levels is shown in
The method 40 also includes a decision process 46 for determining whether there are more data symbols in the block. The process 46 may be performed by the receiver 31, or the noise guesser 32, or a combination of the two. If there are more symbols to receive, the method 40 returns to process 43.
However, if there are no more symbols to receive, the method 40 continues to a process 47 of forming one or more words by inverting a noise effect sequence on the received block according to the noise effect guessing order determined in process 42. It should be observed that process 44 receives soft information that is associated with each data symbol, while process 47 inverts the noise effect on the entire received block to form a sequence of “words” (i.e. putative codewords). In particular, the soft information is applied at a level of granularity that is finer than that of the received block, while the reversal of the guessed noise effect is applied at the coarser level of granularity of the entire block via binned noise effect sequences.
Thus, to complete the method 40, a process 48 determines whether each of the formed words is a codeword. The process 48 may be performed in some embodiments by a codeword validator 34 using a codebook 35. It is appreciated that the process 47 may form a sequence of words using the most likely (or approximate most likely) order of noise according to the received soft information, determined as described below, and thus that the process 48 produces a sequence of decodings of the received data symbols into codewords in decreasing order of likelihood.
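By way of non-limiting illustration, processes 47 and 48 together amount to the core GRAND query loop: invert a guessed noise effect on the received block (for binary symbols, an XOR) and test the result for code-book membership. The sketch below uses a toy even-parity code as the membership test and a no-error-then-single-bit-flip guessing order; both are illustrative choices, not limitations of the method:

```python
def grand_decode(received, noise_sequences, is_codeword):
    """Return (codeword, noise_effect) for the first guess yielding a codeword,
    or None if the guessing order is exhausted (i.e. an erasure)."""
    for effect in noise_sequences:
        word = [r ^ e for r, e in zip(received, effect)]  # invert the noise effect
        if is_codeword(word):
            return word, effect
    return None

# Toy membership test: codewords are the even-parity words (illustrative only).
even_parity = lambda w: sum(w) % 2 == 0

# Guess no error first, then each single-bit flip (a maximum-likelihood order for a BSC).
guesses = [[0, 0, 0, 0]] + [[1 if i == j else 0 for i in range(4)] for j in range(4)]
```

For example, a received block [1, 0, 0, 0] fails the all-zero noise guess but decodes on the first single-bit-flip guess, yielding the codeword [0, 0, 0, 0].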
The method 50 includes a first process 51 further receiving, for one or more of the received symbols, up to log2(Q) bits of associated soft information, where log2(Q) is an integer.
The method 50 next includes a second process 52 assigning, to each of the one or more of the received symbols, one or more noise effect symbols having a respective weight that is determined by the up to log2(Q) bits of associated soft information. The weight for each noise effect symbol may be, illustratively, a Hamming weight of the noise effect symbol. In this way, each received symbol may be associated with a list of possible noise effects weighted by relative likelihood.
The method 50 continues with a third process 53 forming noise effect sequences from the noise effect symbols. As described above, each noise effect sequence will have a total weight that equals the summed weights of its noise effect symbols. Thus, the total weight of the noise effect sequence may be viewed as a measure of the proximity of the noise effect sequence to the received signal.
The method 50 next continues with a fourth process 54 determining a noise effect sequence guessing order according to the respective weights. This may be done, as described above, by placing noise effect sequences into bins according to their respective total weights. That is, each bin holds one or more noise effect sequences, and all of the noise effect sequences in a given bin have the same total weight or fall within the same range of total weights.
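The binning of process 54 can be sketched directly: compute each candidate noise effect sequence's total weight by summing per-symbol weights (per process 53), and group the sequences into bins emitted in increasing-weight order. Function and variable names here are illustrative, not drawn from this disclosure:

```python
from collections import defaultdict

def bin_by_weight(sequences, weights):
    """Group 0/1 noise effect sequences into bins keyed by total weight.
    weights[i] is the per-symbol weight of a flip at position i.
    Returns a list of (total_weight, sequences) pairs in increasing weight order."""
    bins = defaultdict(list)
    for seq in sequences:
        total = sum(w for w, flip in zip(weights, seq) if flip)
        bins[total].append(seq)
    return sorted(bins.items())
```

For instance, with per-bit weights [1, 2, 1], the single-flip sequences [1,0,0] and [0,0,1] share the weight-1 bin, while [0,1,0] occupies the weight-2 bin.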
The method 50 further continues with a fifth process 55 forming one or more words by inverting a set of sequences of noise effect symbols, on the plurality of received symbols, according to the noise effect sequence guessing order. Thus, noise effect sequences are chosen according to the noise effect sequence guessing order, and putative codewords are formed in the receiver by reversing the effects of the chosen sequences. Note that soft information applies to each received symbol, but the effect of noise is reversed on a block-by-block basis using noise effect sequences.
The method 50 next includes a decision process 56 determining whether each of the formed one or more words is a codeword. Due to the determination of the noise effect sequence guessing order using the techniques described herein, it is expected that, the majority of the time, a formed word will be a codeword on the first such determination. In such cases, the method 50 concludes with a process 57 terminating according to a termination condition (e.g. all symbols have been received and correctly decoded).
If one or more of the formed words is not a codeword, the decoding attempt is considered incorrect, and the method 50 continues to a process 58 choosing a next noise effect sequence according to the noise effect sequence guessing order. In various embodiments as described above, noise effect sequences are guessed in maximum-likelihood order (i.e. the most likely noise effect, no bit flips, is guessed first, followed by noise effect sequences having any one bit error, and so on). The method 50 then returns to the process 55 to form a next set of putative codewords. The method continues in this way until either all formed words are codewords, or another termination condition is reached (e.g. a fixed number of sets of noise effect sequences has been tried without success), in which case an erasure may be output.
Embodiments of DSGRAND disclosed herein advantageously allocate received bits into ordered reliability ranges on the basis of soft information, thereby forming groups of bits having similar reliability. By contrast, in ORBGRAND the received bits are individually ordered by their absolute reliability. While the quantization in DSGRAND discretizes the reliability of individual bits, the putative noise effect sequences are queried in increasing order of their weight, which is calculated as the sum, over the quantized reliability levels, of the level's reliability multiplied by the number of bits in the putative noise effect sequence having that quantized reliability.
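The weight calculation just described can be stated compactly. In the non-limiting sketch below, counts[level] holds the number of bits of a given quantized reliability level that a putative noise effect sequence flips (the notation is ours):

```python
def dsgrand_weight(counts):
    """Weight of a putative noise effect sequence: the sum over quantized
    reliability levels of level * (number of flipped bits at that level)."""
    return sum(level * n_bits for level, n_bits in counts.items())
```

For example, a sequence flipping one bit of level 1 and one bit of level 2 has weight 1*1 + 1*2 = 3.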
We also compare the performance of GRAND algorithms with CA-Polar codes, which will be used for control channel communications in 5G NR. CA-Polar codes are concatenated Polar inner codes with a CRC outer code. In their dedicated decoder, CA-SCL, the Polar bits are used for soft detection list decoding, typically of length 8, and the CRC bits are used to select a decoding from that list. As a comparator, for CA-Polar codes we also show results for the dedicated soft-detection decoder CA-SCL as implemented in the AFF3CT toolbox. As the product of a linear code with a linear code is a linear code, GRAND algorithms use both the CRC and Polar bits of a CA-Polar code for error correction.
Reliabilities above level 2 represent bits whose demodulation may, in some embodiments, be treated as certain and for which noise guessing is not required. Such embodiments may advantageously yield fast demodulation of these bits. However, in alternate embodiments and as shown in
It is appreciated that the thresholds for reliability that define the quantization levels need not be chosen to be uniformly spaced, as they are shown in
Once the soft information has been quantized, noise effect guessing proceeds as now described.
Weight 1 represents 1 bit in the least reliable quantization level (i.e. level 1) and 0 bits in the second least reliable level (i.e. level 2); as a formula, 1=1*1+0*2. Weight 2 represents 0 bits in level 1 and 1 bit in level 2; that is, 2=0*1+1*2. Weight 3 represents 1 bit in level 1 and 1 bit in level 2; that is, 3=1*1+1*2. Higher weights are distributed similarly.
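These decompositions are instances of the integer-partition computation noted earlier: every way of assembling a target weight from the available level values. A direct recursive sketch (our formulation, not code from this disclosure):

```python
def weight_decompositions(target, levels=(1, 2)):
    """Return all {level: count} dicts satisfying sum(level * count) == target."""
    results = []

    def descend(idx, remaining, current):
        if idx == len(levels):
            if remaining == 0:
                results.append(dict(current))
            return
        level = levels[idx]
        # Try every feasible count of bits at this level, then recurse.
        for count in range(remaining // level + 1):
            current[level] = count
            descend(idx + 1, remaining - level * count, current)
        current.pop(level, None)

    descend(0, target, {})
    return results
```

For weight 3 with levels 1 and 2, this yields exactly two decompositions: one level-1 bit plus one level-2 bit (as in the text), and three level-1 bits.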
It should be appreciated that the distribution in
Noise effect guessing proceeds by first selecting bits in lower reliability levels, as these bits are more likely to be in error than bits in higher reliability levels. Thus, the noise guess labeled 0 in
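Once a decomposition fixes how many bits to flip at each reliability level, the concrete bit-flip masks follow by choosing which positions within each level to flip. A non-limiting sketch, assuming the receiver tracks which bit positions of the block fell into each level:

```python
from itertools import combinations, product

def flip_masks(level_positions, counts, n):
    """Yield length-n 0/1 noise masks that flip counts[L] bits chosen from
    level_positions[L] for each level L, in all possible combinations."""
    per_level = [list(combinations(level_positions.get(level, ()), count))
                 for level, count in counts.items()]
    for choice in product(*per_level):
        mask = [0] * n
        for group in choice:
            for pos in group:
                mask[pos] = 1
        yield mask
```

For instance, if bits 0 and 2 landed in level 1 and bit 1 in level 2, flipping one level-1 bit produces the masks [1, 0, 0] and [0, 0, 1].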
It is appreciated that DSGRAND advantageously permits arbitrary numbers of bits in each received block to be placed in each quantization level of noise effect, and that therefore the noise effect sequence levels and associated noise effect sequence guessing patterns for the received block will change if different numbers of bits land in different quantization levels. It is further appreciated that the example of
In illustrative implementations of the concepts described herein, one or more computers (e.g., integrated circuits, microcontrollers, controllers, microprocessors, processors, field-programmable gate arrays, personal computers, onboard computers, remote computers, servers, network hosts, or client computers) may be programmed and specially adapted: (1) to perform any computation, calculation, program or algorithm described or implied above; (2) to receive signals indicative of human input; (3) to output signals for controlling transducers for outputting information in human-perceivable format; (4) to process data, to perform computations, and to execute any algorithm or software; and (5) to control the read or write of data to and from memory devices. The one or more computers may be connected to each other or to other components in the system either: (a) wirelessly, (b) by wired or fiber optic connection, or (c) by any combination of wired, fiber optic or wireless connections.
In illustrative implementations of the concepts described herein, one or more computers may be programmed to perform any and all computations, calculations, programs and algorithms described or implied above, and any and all functions described in the immediately preceding paragraph. Likewise, in illustrative implementations of the concepts described herein, one or more non-transitory, machine-accessible media may have instructions encoded thereon for one or more computers to perform any and all computations, calculations, programs and algorithms described or implied above, and any and all functions described in the immediately preceding paragraph.
For example, in some cases: (a) a machine-accessible medium may have instructions encoded thereon that specify steps in a software program; and (b) the computer may access the instructions encoded on the machine-accessible medium, in order to determine steps to execute in the software program. In illustrative implementations, the machine-accessible medium may comprise a tangible non-transitory medium. In some cases, the machine-accessible medium may comprise (a) a memory unit or (b) an auxiliary memory storage device. For example, in some cases, while a program is executing, a control unit in a computer may fetch the next coded instruction from memory.
In some cases, one or more computers are programmed for communication over a network. For example, in some cases, one or more computers are programmed for network communication: (a) in accordance with the Internet Protocol Suite, or (b) in accordance with any other industry standard for communication, including any USB standard, ethernet standard (e.g., IEEE 802.3), token ring standard (e.g., IEEE 802.5), or wireless communication standard, including IEEE 802.11 (Wi-Fi®), IEEE 802.15 (Bluetooth®/Zigbee®), IEEE 802.16, IEEE 802.20, GSM (global system for mobile communications), UMTS (universal mobile telecommunication system), CDMA (code division multiple access, including IS-95, IS-2000, and WCDMA), LTE (long term evolution), or 5G (e.g., ITU IMT-2020).
As used herein, “including” means including without limitation. As used herein, the terms “a” and “an”, when modifying a noun, do not imply that only one of the noun exists. As used herein, unless the context clearly indicates otherwise, “or” means and/or. For example, A or B is true if A is true, or B is true, or both A and B are true. As used herein, “for example”, “for instance”, “e.g.”, and “such as” refer to non-limiting examples that are not exclusive examples. The word “consists” (and variants thereof) is to be given the same meaning as the word “comprises” or “includes” (or variants thereof).
The above description (including any attached drawings and figures) illustrates example implementations of the concepts described herein. However, the concepts described herein may be implemented in other ways. The methods and apparatus which are described above are merely illustrative applications of the principles of the described concepts. Numerous modifications may be made by those skilled in the art without departing from the scope of the invention. Also, the described concepts include without limitation each combination, sub-combination, and permutation of one or more of the abovementioned implementations, embodiments and features.
Various embodiments of the concepts, systems, devices, structures and techniques sought to be protected are described herein with reference to the related drawings. Alternative embodiments can be devised without departing from the scope of the concepts, systems, devices, structures and techniques described herein. It is noted that various connections and positional relationships (e.g., over, below, adjacent, etc.) are set forth between elements in the following description and in the drawings. These connections and/or positional relationships, unless specified otherwise, can be direct or indirect, and the described concepts, systems, devices, structures and techniques are not intended to be limiting in this respect. Accordingly, a coupling of entities can refer to either a direct or an indirect coupling, and a positional relationship between entities can be a direct or indirect positional relationship.
As an example of an indirect positional relationship, references in the present description to forming layer “A” over layer “B” include situations in which one or more intermediate layers (e.g., layer “C”) is between layer “A” and layer “B” as long as the relevant characteristics and functionalities of layer “A” and layer “B” are not substantially changed by the intermediate layer(s). The following definitions and abbreviations are to be used for the interpretation of the specification. As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” “contains” or “containing,” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a composition, a mixture, process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but can include other elements not expressly listed or inherent to such composition, mixture, process, method, article, or apparatus.
Additionally, the term “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. The terms “at least one” and “one or more” are understood to include any integer number greater than or equal to one, i.e. one, two, three, four, etc. The term “a plurality” is understood to include any integer number greater than or equal to two, i.e. two, three, four, five, etc. The term “connection” can include an indirect “connection” and a direct “connection.”
References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described can include a particular feature, structure, or characteristic, but every embodiment may not include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
For purposes of the description hereinafter, the terms “upper,” “lower,” “right,” “left,” “vertical,” “horizontal,” “top,” “bottom,” and derivatives thereof shall relate to the described structures and methods, as oriented in the drawing figures. The terms “overlying,” “atop,” “on top,” “positioned on” or “positioned atop” mean that a first element, such as a first structure, is present on a second element, such as a second structure, where intervening elements such as an interface structure can be present between the first element and the second element. The term “direct contact” means that a first element, such as a first structure, and a second element, such as a second structure, are connected without any intermediary elements.
Use of ordinal terms such as “first,” “second,” “third,” etc., in the specification to modify an element does not by itself connote any priority, precedence, or order of one element over another, or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one element having a certain name from another element having the same name (but for use of the ordinal term).
The terms “approximately” and “about” may be used to mean within ±20% of a target value in some embodiments, within ±10% of a target value in some embodiments, within ±5% of a target value in some embodiments, and yet within ±2% of a target value in some embodiments. The terms “approximately” and “about” may include the target value. The term “substantially equal” may be used to refer to values that are within ±20% of one another in some embodiments, within ±10% of one another in some embodiments, within ±5% of one another in some embodiments, and yet within ±2% of one another in some embodiments.
The term “substantially” may be used to refer to values that are within ±20% of a comparative measure in some embodiments, within ±10% in some embodiments, within ±5% in some embodiments, and yet within ±2% in some embodiments. For example, a first direction that is “substantially” perpendicular to a second direction may refer to a first direction that is within ±20% of making a 90° angle with the second direction in some embodiments, within ±10% of making a 90° angle with the second direction in some embodiments, within ±5% of making a 90° angle with the second direction in some embodiments, and yet within ±2% of making a 90° angle with the second direction in some embodiments.
It is to be understood that the disclosed subject matter is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception, upon which this disclosure is based, may readily be utilized as a basis for the designing of other structures, methods, and systems for carrying out the several purposes of the disclosed subject matter.
Accordingly, although the disclosed subject matter has been described and illustrated in the foregoing exemplary embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the disclosed subject matter may be made without departing from the spirit and scope of the disclosed subject matter.
This invention was made with government support under HR0011-2-12-0008 awarded by the Defense Advanced Research Projects Agency. The government has certain rights in the invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2023/061153 | 1/24/2023 | WO |
Number | Date | Country
---|---|---
63323180 | Mar 2022 | US