Examples described herein relate to neural networks for use in decoding encoded data. Examples of neural networks are described which may be used with error correction coding (ECC) memory, where a neural network may be used to decode encoded data.
Error correction coding (ECC) may be used in a variety of applications, such as memory devices or wireless baseband circuitry. Generally, error correction coding techniques may encode original data with additional bits to describe the original bits which are intended to be stored, retrieved, and/or transmitted. The additional bits may be stored together with the original bits. Accordingly, there may be L bits of original data to be stored and/or transmitted. An encoder may provide N-L additional bits, such that the encoded data may be N bits of data. The original bits may be stored unchanged, or may be changed by the encoder to form the encoded N bits of stored data. A decoder may decode the N bits to retrieve and/or estimate the original L bits, which may be corrected in some examples in accordance with the ECC technique.
Multi-layer neural networks may be used to decode encoded data (e.g., data encoded using one or more encoding techniques). The neural networks may have nonlinear mapping and distributed processing capabilities which may be advantageous in many systems employing the neural network decoders. In this manner, neural networks described herein may be used to implement error correction coding (ECC) decoders.
An encoder may have L bits of input data (a1, a2, . . . aL). The encoder may encode the input data in accordance with an encoding technique to provide N bits of encoded data (b1, b2, . . . bN). The encoded data may be stored and/or transmitted, or some other action taken with the encoded data, which may introduce noise into the data. Accordingly, a decoder may receive a version of the N bits of encoded data (x1, x2, . . . xN). The decoder may decode the received encoded data into an estimate of the L bits of original data (y1, y2, . . . yL).
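By way of illustration only, a minimal Python sketch of this data flow follows, using a toy single-parity-check encoder and a bit-flip channel; the function names and the choice of code are hypothetical and are not the encoding techniques described herein.

```python
import numpy as np

def spc_encode(a):
    """Toy single-parity-check encoder: append one XOR parity bit,
    mapping L original bits (a1..aL) to N = L + 1 encoded bits (b1..bN)."""
    parity = np.bitwise_xor.reduce(a)
    return np.append(a, parity)

def noisy_channel(b, flip_prob, rng):
    """Model storage/transmission noise by flipping each encoded bit
    independently with probability flip_prob."""
    flips = rng.random(b.shape) < flip_prob
    return np.bitwise_xor(b, flips.astype(b.dtype))

rng = np.random.default_rng(0)
a = np.array([1, 0, 1, 1])           # L = 4 original bits (a1..aL)
b = spc_encode(a)                    # N = 5 encoded bits (b1..bN)
x = noisy_channel(b, 0.05, rng)      # received version of the encoded bits (x1..xN)
# A decoder maps the N received bits x back to an estimate (y1..yL) of the L original bits.
```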
Examples of wireless baseband circuitry may utilize error correction coding (such as low density parity check coding, LDPC). An encoder may add particularly selected N-L bits to an original L bits of data, which may allow a decoder to decode the data and reduce and/or minimize errors introduced by noise, interference, and other practical factors during data storage and transmission.
There are a variety of particular error correction coding techniques, including low density parity check (LDPC) coding, Reed-Solomon coding, Bose-Chaudhuri-Hocquenghem (BCH) coding, and polar coding. The use of these coding techniques, however, may come at the cost of decreased frequency, channel, and/or storage resource usage efficiency and increased processing complexity. For example, the use of coding techniques may increase the amount of data which may be stored and/or transmitted. Moreover, processing resources may be necessary to implement the encoding and decoding. In some examples, the decoder may be one of the processing blocks that costs the most computational resources in wireless baseband circuitry and/or memory controllers, which may reduce the desirability of existing decoding schemes in many emerging applications, such as Internet of Things (IoT) and/or tactile internet applications, where ultra-low power consumption and ultra-low latency are highly desirable.
Examples described herein utilize multi-layer neural networks to decode encoded data (e.g., data encoded using one or more encoding techniques). The neural networks have nonlinear mapping and distributed processing capabilities which may be advantageous in many systems employing the neural network decoders.
Generally, a neural network may be used including multiple stages of nodes. The nodes may be implemented using processing elements which may execute one or more functions on inputs received from a previous stage and provide the output of the functions to the next stage of the neural network. The processing elements may be implemented using, for example, one or more processors, controllers, and/or custom circuitry, such as an application specific integrated circuit (ASIC) and/or a field programmable gate array (FPGA). The processing elements may be implemented as combiners and/or summers and/or any other structure for performing functions allocated to the processing element. In some examples, certain of the processing elements of neural networks described herein perform weighted sums, e.g., may be implemented using one or more multiplication/accumulation units, which may be implemented using processor(s) and/or other circuitry.
In the example of
The neural network 100 may have a next layer, which may be referred to as a ‘hidden layer’ in some examples. The next layer may include combiner 102, combiner 104, combiner 106, and combiner 108, although any number of elements may be used. While the processing elements in the second stage of the neural network 100 are referred to as combiners, generally the processing elements in the second stage may perform a nonlinear activation function using the input data bits received at the processing element. Any number of nonlinear activation functions may be used. Examples of functions which may be used include Gaussian functions, such as
Examples of functions which may be used include multi-quadratic functions, such as f(r) = (r² + σ²)^(1/2). Examples of functions which may be used include inverse multi-quadratic functions, such as f(r) = (r² + σ²)^(−1/2). Examples of functions which may be used include thin-plate-spline functions, such as f(r) = r² log(r). Examples of functions which may be used include piece-wise linear functions, such as f(r) = ½(|r + 1| − |r − 1|). Examples of functions which may be used include cubic approximation functions, such as f(r) = ½(|r³ + 1| − |r³ − 1|). In these example functions, σ represents a real parameter (e.g., a scaling parameter) and r is the distance between the input vector and the current vector. The distance may be measured using any of a variety of metrics, including the Euclidean norm.
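For illustration, a Python/NumPy sketch of these activation functions follows. The Gaussian form f(r) = exp(−r²/σ²) is assumed here, since its expression is not reproduced above; the other functions follow the expressions given in this paragraph.

```python
import numpy as np

def gaussian(r, sigma):
    # Common Gaussian RBF form, assumed here: f(r) = exp(-r^2 / sigma^2)
    return np.exp(-(r ** 2) / sigma ** 2)

def multiquadric(r, sigma):
    # f(r) = (r^2 + sigma^2)^(1/2)
    return np.sqrt(r ** 2 + sigma ** 2)

def inverse_multiquadric(r, sigma):
    # f(r) = (r^2 + sigma^2)^(-1/2)
    return 1.0 / np.sqrt(r ** 2 + sigma ** 2)

def thin_plate_spline(r):
    # f(r) = r^2 * log(r); taken as 0 at r = 0 by convention
    return np.where(r > 0, r ** 2 * np.log(np.where(r > 0, r, 1.0)), 0.0)

def piecewise_linear(r):
    # f(r) = 1/2 * (|r + 1| - |r - 1|)
    return 0.5 * (np.abs(r + 1) - np.abs(r - 1))

def cubic_approximation(r):
    # f(r) = 1/2 * (|r^3 + 1| - |r^3 - 1|)
    return 0.5 * (np.abs(r ** 3 + 1) - np.abs(r ** 3 - 1))

# r is typically the Euclidean distance between the input vector X(n)
# and a hidden-element center vector C_i, e.g. r = np.linalg.norm(X - C_i).
```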
Each element in the ‘hidden layer’ may receive as inputs selected bits (e.g., some or all) of the input data. For example, each element in the ‘hidden layer’ may receive as inputs the outputs of multiple selected elements (e.g., some or all elements) in the input layer. For example, the combiner 102 may receive as inputs the output of node 118, node 120, node 122, and node 124. While a single ‘hidden layer’ is shown by way of example in
The neural network 100 may have an output layer. The output layer in the example of
In some examples, the neural network 100 may be used to provide L output bits which represent decoded data corresponding to N input bits. For example, in the example of
Examples of neural networks may be trained. Training generally refers to the process of determining weights, functions, and/or other attributes to be utilized by a neural network to create a desired transformation of input data to output data. In some examples, neural networks described herein may be trained to transform encoded input data to decoded data (e.g., an estimate of the decoded data). In some examples, neural networks described herein may be trained to transform noisy encoded input data to decoded data (e.g., an estimate of the decoded data). In this manner, neural networks may be used to reduce and/or correct errors which may be introduced by noise present in the input data. In some examples, neural networks described herein may be trained to transform noisy encoded input data to encoded data with reduced noise. The encoded data with reduced noise may then be provided to any decoder (e.g., a neural network and/or other decoder) for decoding of the encoded data. In this manner, neural networks may be used to reduce and/or correct errors which may be introduced by noise.
Training as described herein may be supervised or un-supervised in various examples. In some examples, training may occur using known pairs of anticipated input and desired output data. For example, training may utilize known encoded data and decoded data pairs to train a neural network to decode subsequent encoded data into decoded data. In some examples, training may utilize known noisy encoded data and decoded data pairs to train a neural network to decode subsequent noisy encoded data into decoded data. In some examples, training may utilize known noisy encoded data and encoded data pairs to train a neural network to provide encoded data having reduced noise relative to the input noisy encoded data. Examples of training may include determining weights to be used by a neural network, such as neural network 100 of
Examples of training can be described mathematically. For example, consider input data at a time instant (n), given as:
X(n) = [x_1(n), x_2(n), . . . , x_N(n)]^T
the center vector for each element in hidden layer(s) of the neural network (e.g., combiner 102, combiner 104, combiner 106, and combiner 108 of
The output of each element in a hidden layer may then be given as:
h_i(n) = f_i(∥X(n) − C_i∥), for i = 1, 2, . . . , H    (1)
where C_i denotes the center vector for the i'th hidden layer element and H is the number of hidden layer elements.
The connections between a last hidden layer and the output layer may be weighted. Each element in the output layer may have a linear input-output relationship such that it may perform a summation (e.g., a weighted summation). Accordingly, an output of the i'th element in the output layer at time n may be written as:
y_i(n) = Σ_{j=1}^{H} W_ij h_j(n) = Σ_{j=1}^{H} W_ij f_j(∥X(n) − C_j∥)    (2)
for i = 1, 2, . . . , L, where L is the number of elements in the output layer and W_ij is the connection weight between the j'th element in the hidden layer and the i'th element in the output layer.
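For illustration, a minimal Python/NumPy sketch of Equations 1 and 2 (a single forward pass through the hidden layer and output layer) follows; a single Gaussian scaling parameter σ is assumed for simplicity, and the function name rbf_forward is hypothetical.

```python
import numpy as np

def rbf_forward(X, centers, W, sigma):
    """Forward pass of the RBF-style decoder network.

    X       : length-N received vector X(n)
    centers : H x N array of center vectors C_i
    W       : L x H connection-weight matrix W_ij
    sigma   : scaling parameter of the Gaussian activation (assumed form)
    Returns the length-L output vector Y(n).
    """
    # Equation (1): h_i(n) = f_i(||X(n) - C_i||)
    r = np.linalg.norm(X - centers, axis=1)      # distance to each center vector
    h = np.exp(-(r ** 2) / sigma ** 2)           # Gaussian activation (assumed)
    # Equation (2): y_i(n) = sum_j W_ij * h_j(n)
    return W @ h
```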
Generally, a neural network architecture (e.g., the neural network 100 of
Examples of neural networks may accordingly be specified by attributes (e.g., parameters). In some examples, two sets of parameters may be used to specify a neural network: connection weights and center vectors (e.g., thresholds). The parameters may be determined from selected input data (e.g., encoded input data) by solving an optimization function. An example optimization function may be given as:
E = Σ_{n=1}^{M} ∥Y(n) − Ŷ(n)∥²    (3)
where M is the number of training input vectors (e.g., training encoded data inputs), Y(n) is the output vector computed from the n'th sample input vector using Equations 1 and 2 above, and Ŷ(n) is the corresponding desired (e.g., known) output vector. The output vector Y(n) may be written as:
Y(n) = [y_1(n), y_2(n), . . . , y_L(n)]^T
Various methods (e.g., gradient descent procedures) may be used to solve the optimization function. However, in some examples, another approach may be used to determine the parameters of a neural network, which may generally include two steps—(1) determining center vectors Ci (i=1, 2, . . . , H) and (2) determining the weights.
In some examples, the center vectors may be chosen from a subset of available sample vectors. In such examples, the number of elements in the hidden layer(s) may be relatively large to cover the entire input domain. Accordingly, in some examples, it may be desirable to apply k-means clustering algorithms. Generally, k-means clustering distributes the center vectors according to the natural measure of the attractor (e.g., if the density of the data points is high, so is the density of the centers). A k-means clustering algorithm may find a set of cluster centers and partition the training samples into subsets. Each cluster center may be associated with one of the H hidden layer elements in this network. The data may be partitioned in such a way that the training points are assigned to the cluster with the nearest center. Each cluster center corresponds to one of the minima of an optimization function. An example optimization function for use with a k-means clustering algorithm may be given as:
E_k-means = Σ_{j=1}^{H} Σ_{n=1}^{M} B_jn ∥X(n) − C_j∥²    (4)
where Bjn is the cluster partition or membership function forming an H×M matrix. Each column may represent an available sample vector (e.g., known input data) and each row may represent a cluster. Each column may include a single ‘1’ in the row corresponding to the cluster nearest to that training point, and zeros elsewhere.
The center of each cluster may be initialized to a different randomly chosen training point. Then each training example may be assigned to the cluster whose center is nearest to it. When all training points have been assigned, the average position of the training points in each cluster may be found and the cluster center is moved to that point. The resulting cluster centers may become the centers of the hidden layer elements.
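A minimal Python/NumPy sketch of this center-selection procedure follows; it is an illustrative k-means loop (effectively minimizing Equation 4), not the claimed implementation, and the function name kmeans_centers is hypothetical.

```python
import numpy as np

def kmeans_centers(samples, H, iters=50, seed=0):
    """Select H hidden-layer center vectors from an M x N array of training samples."""
    rng = np.random.default_rng(seed)
    # Initialize each cluster center to a different randomly chosen training point.
    centers = samples[rng.choice(len(samples), size=H, replace=False)].copy()
    for _ in range(iters):
        # Assign each training point to the nearest center (membership B_jn).
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)  # M x H
        assign = np.argmin(d, axis=1)
        # Move each center to the average position of its assigned training points.
        for j in range(H):
            members = samples[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return centers
```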
In some examples, for some transfer functions (e.g., the Gaussian function), the scaling factor σ may be determined, and may be determined before determining the connection weights. The scaling factor may be selected to cover the training points and allow a smooth fit of the desired network outputs. Generally, this means that any point within the convex hull of the processing element centers may significantly activate more than one element. To achieve this goal, each hidden layer element may activate at least one other hidden layer element to a significant degree. An appropriate method to determine the scaling parameter σ may be based on the P-nearest neighbor heuristic, which may be given as
σ_i = ((1/P) Σ_{j=1}^{P} ∥C_j − C_i∥²)^{1/2}
where C_j (for j = 1, 2, . . . , P) are the P-nearest neighbors of C_i.
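For illustration, a Python/NumPy sketch of the P-nearest neighbor heuristic follows; the root-mean-square form σ_i = ((1/P) Σ ∥C_j − C_i∥²)^(1/2) is assumed, and the function name is hypothetical.

```python
import numpy as np

def scaling_factors(centers, P):
    """Per-element scaling factor sigma_i from the P nearest neighboring centers.

    centers : H x N array of center vectors C_i
    Returns a length-H array of sigma_i values.
    """
    H = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)  # H x H distances
    sigmas = np.empty(H)
    for i in range(H):
        nearest = np.sort(d[i][d[i] > 0])[:P]    # distances to the P nearest other centers
        sigmas[i] = np.sqrt(np.mean(nearest ** 2))
    return sigmas
```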
The connection weights may additionally or instead be determined during training. In an example of a neural network, such as neural network 100 of
Ŷ = W F    (5)
where W = {W_ij} is the L×H matrix of the connection weights, F is the H×M matrix of the outputs of the hidden layer processing elements, whose matrix elements are computed using
F_in = f_i(∥X(n) − C_i∥)  (i = 1, 2, . . . , H; n = 1, 2, . . . , M)
and Ŷ = [Ŷ(1), Ŷ(2), . . . , Ŷ(M)] is the L×M matrix of the desired (e.g., known) outputs. The connection weight matrix W may be found from Equation 5 and may be written as follows:
W = Ŷ F⁺
where F⁺ is the pseudo-inverse of F. In this manner, the above may provide a batch-processing method for determining the connection weights of a neural network. It may be applied, for example, where all input sample sets are available at one time. In some examples, each new sample set may become available recursively, such as in recursive least squares (RLS) algorithms. In such cases, the connection weights may be determined as follows.
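For illustration, a minimal Python/NumPy sketch of this batch-processing solution follows; it builds the matrix F from Equation 1 (a Gaussian activation is assumed) and computes W = Ŷ F⁺ with a pseudo-inverse. The function names are hypothetical.

```python
import numpy as np

def hidden_matrix(samples, centers, sigmas):
    """Build the H x M matrix F with F[i, n] = f_i(||X(n) - C_i||) for all M
    training samples (Equation 1; Gaussian activation assumed)."""
    d = np.linalg.norm(centers[:, None, :] - samples[None, :, :], axis=2)  # H x M distances
    return np.exp(-(d ** 2) / sigmas[:, None] ** 2)

def batch_weights(F, Y_desired):
    """Batch (least-squares) solution of Equation 5: W = Y_desired @ pinv(F).

    F         : H x M matrix of hidden-layer outputs
    Y_desired : L x M matrix of desired (known) outputs
    Returns the L x H connection-weight matrix W.
    """
    return Y_desired @ np.linalg.pinv(F)
```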
First, connection weights may be initialized to any value (e.g., random values may be used).
The output vector Y(n) may be computed using Equation 2. The error term ei(n) of each output element in the output layer may be computed as follows:
e_i(n) = y_i(n) − ŷ_i(n)  (i = 1, 2, . . . , L)
where ŷ_i(n) is the i'th element of the desired (e.g., known) output vector Ŷ(n).
The connection weights may then be adjusted based on the error term, for example as follows:
W_ij(n+1) = W_ij(n) + γ e_i(n) f_j(∥X(n) − C_j∥)
(i = 1, 2, . . . , L; j = 1, 2, . . . , H)
where γ is the learning-rate parameter which may be fixed or time-varying.
The total error may be computed based on the output from the output layer and the desired (known) data:
ϵ = ∥Y(n) − Ŷ(n)∥²
The process may be iterated by again calculating a new output vector, error term, and again adjusting the connection weights. The process may continue until weights are identified which reduce the error to equal to or less than a threshold error.
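For illustration, a Python/NumPy sketch of this iterative procedure follows. It combines Equations 1 and 2, the error term, and the weight adjustment above; a Gaussian activation is assumed, and the sign of the update is chosen so that each step reduces the squared error when the error is defined as the network output minus the desired output.

```python
import numpy as np

def train_weights_iterative(samples, desired, centers, sigmas,
                            gamma=0.01, threshold=1e-3, max_epochs=1000, seed=0):
    """Iterative weight-update sketch following the recursive procedure above.

    samples : M x N array of training inputs X(n)
    desired : M x L array of desired outputs
    Returns the L x H connection-weight matrix W.
    """
    rng = np.random.default_rng(seed)
    H, L = len(centers), desired.shape[1]
    W = rng.normal(scale=0.1, size=(L, H))       # initialize weights to random values
    for _ in range(max_epochs):
        total_error = 0.0
        for X, target in zip(samples, desired):
            r = np.linalg.norm(X - centers, axis=1)
            h = np.exp(-(r ** 2) / sigmas ** 2)  # hidden outputs (Equation 1)
            y = W @ h                            # network outputs (Equation 2)
            e = y - target                       # error term e_i(n)
            W -= gamma * np.outer(e, h)          # adjust weights to reduce the error
            total_error += float(np.sum(e ** 2))
        if total_error <= threshold:             # stop once the error is small enough
            break
    return W
```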
Accordingly, the neural network 100 of
Recall that the structure of neural network 100 of
In examples of supervised learning, the input training samples [x1(n), x2(n), . . . xN(n)] may be generated by passing the encoded samples [b1(n), b2(n), . . . bN(n)] through a noisy channel and/or by adding noise. The supervised output samples may be the corresponding original code words [a1(n), a2(n), . . . aL(n)] which were used by the encoder to generate [b1(n), b2(n), . . . bN(n)]. Once these parameters are determined in an offline mode, the desired decoded code word can be obtained from input data utilizing the neural network (e.g., by computing Equation 2), which may avoid the complex iterations and feedback decisions used in traditional error-correcting decoding algorithms. In this manner, neural networks described herein may provide a reduction in processing complexity and/or latency, because some complexity has been transferred to an off-line training process which is used to determine the weights and/or functions which will be used. Further, the same neural network (e.g., the neural network 100 of
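For illustration, a Python/NumPy sketch of generating such supervised training pairs follows; an additive white Gaussian noise channel is assumed, and the encoder is passed in as a callable, so the helper is hypothetical rather than a specific encoding technique.

```python
import numpy as np

def make_training_pairs(original_words, encoder, noise_std, seed=0):
    """Generate supervised training pairs: pass encoded samples through a noisy
    channel and pair the noisy inputs with the original code words.

    original_words : M x L array of original bit vectors [a1 .. aL]
    encoder        : callable mapping a length-L bit vector to a length-N code word
    Returns (noisy_inputs, targets) as M x N and M x L float arrays.
    """
    rng = np.random.default_rng(seed)
    encoded = np.array([encoder(a) for a in original_words], dtype=float)
    noisy = encoded + rng.normal(scale=noise_std, size=encoded.shape)  # AWGN channel (assumed)
    return noisy, original_words.astype(float)
```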
The processing unit 230 may receive input data (e.g. x1(n), x2(n), . . . xN(n)) from a memory device, communication transceiver and/or other component. In some examples, the input data may be encoded in accordance with an encoding technique. The processing unit 230 may function to process the encoded input data to provide output data—e.g., y1(n), y2(n), . . . yL(n). The output data may be the decoded data (e.g., an estimate of the decoded data) corresponding to the encoded input data in some examples. The output data may be the data corresponding to the encoded input data, but having reduced and/or modified noise.
While two stages are shown in
Generally, each multiplication/accumulation unit of
Z_out = Σ_{i=1}^{I} W_i Z_in(i)    (6)
where “I” is the number of multiplications to be performed by the unit, W_i refers to the coefficients to be used in the multiplications, and Z_in(i) is a factor for multiplication which may be, for example, input to the system and/or stored in one or more of the table look-ups.
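For illustration, a short Python/NumPy sketch of Equation 6 for a single multiplication/accumulation unit follows; in hardware this would be performed by multipliers and accumulators rather than a dot product, so the code is only a behavioral model.

```python
import numpy as np

def multiply_accumulate(weights, z_in):
    """Equation (6): Z_out = sum_i W_i * Z_in(i) for one multiplication/accumulation unit."""
    return float(np.dot(weights, z_in))

# Example: a unit configured with I = 4 coefficients retrieved from the weight memory.
w = np.array([0.5, -1.2, 0.3, 0.9])
z = np.array([1.0, 0.0, 1.0, 1.0])
print(multiply_accumulate(w, z))   # 0.5 + 0.3 + 0.9 = 1.7
```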
The table-lookups shown in
Accordingly, the hardware implementation of neural network 200 may be used to convert an input code word (e.g. x1(n),x2(n), . . . xN(n)) to an output code word (e.g., y1(n),y2(n), . . . yL(n)). Examples of the conversion have been described herein with reference to
The mode configuration control 202 may be implemented using circuitry (e.g., logic), one or more processor(s), microcontroller(s), controller(s), or other elements. The mode configuration control 202 may select certain weights and/or other parameters from weight memory 228 and provide those weights and/or other parameters to one or more of the multiplication/accumulation units and/or table look-ups of
The host 302 may be a host system such as a personal laptop computer, a desktop computer, a digital camera, a mobile telephone, or a memory card reader, among various other types of hosts. The host 302 may include a number of memory access devices (e.g., a number of processors). The host 302 may also be a memory controller, such as where memory system 304 is a memory device (e.g., a memory device having an on-die controller).
The memory system 304 may be a solid state drive (SSD) or other type of memory and may include a host interface 306, a controller 308 (e.g., a processor and/or other control circuitry), and a number of memory device(s) 314. The memory system 304, the controller 308, and/or the memory device(s) 314 may also be separately considered an “apparatus.” The memory device(s) 314 may include a number of solid state memory devices such as NAND flash devices, which may provide a storage volume for the memory system 304. Other types of memory may also be used.
The controller 308 may be coupled to the host interface 306 and to the memory device(s) 314 via a plurality of channels to transfer data between the memory system 304 and the host 302. The interface 306 may be in the form of a standardized interface. For example, when the memory system 304 is used for data storage in the apparatus 300, the interface 306 may be a serial advanced technology attachment (SATA), peripheral component interconnect express (PCIe), or a universal serial bus (USB) interface, among other connectors and interfaces. In general, interface 306 provides an interface for passing control, address, data, and other signals between the memory system 304 and the host 302 having compatible receptors for the interface 306.
The controller 308 may communicate with the memory device(s) 314 (which in some embodiments can include a number of memory arrays on a single die) to control data read, write, and erase operations, among other operations. The controller 308 may include a discrete memory channel controller for each channel (not shown in
The controller 308 may include an ECC encoder 310 for encoding data bits written to the memory device(s) 314 using one or more encoding techniques. The ECC encoder 310 may include a single parity check (SPC) encoder, and/or an algebraic error correction circuit such as one of the group including a Bose-Chaudhuri-Hocquenghem (BCH) ECC encoder and/or a Reed Solomon ECC encoder, among other types of error correction circuits. The controller 308 may further include an ECC decoder 312 for decoding encoded data, which may include identifying erroneous cells, converting erroneous cells to erasures, and/or correcting the erasures. The memory device(s) 314 may, for example, include one or more output buffers which may read selected data from memory cells of the memory device(s) 314. The output buffers may provide output data, which may be provided as encoded input data to the ECC decoder 312. The neural network 100 of
The ECC encoder 310 and the ECC decoder 312 may each be implemented using discrete components such as an application specific integrated circuit (ASIC) or other circuitry, or the components may reflect functionality provided by circuitry within the controller 308 that does not necessarily have a discrete physical form separate from other portions of the controller 308. Although illustrated as components within the controller 308 in
The memory device(s) 314 may include a number of arrays of memory cells (e.g., non-volatile memory cells). The arrays can be flash arrays with a NAND architecture, for example. However, embodiments are not limited to a particular type of memory array or array architecture. Floating-gate type flash memory cells in a NAND architecture may be used, but embodiments are not so limited. The cells may be multi-level cells (MLC) such as triple level cells (TLC) which store three data bits per cell. The memory cells can be grouped, for instance, into a number of blocks including a number of physical pages. A number of blocks can be included in a plane of memory cells and an array can include a number of planes. As one example, a memory device may be configured to store 8 KB (kilobytes) of user data per page, 128 pages of user data per block, 2048 blocks per plane, and 16 planes per device.
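As a worked example of the geometry quoted above (illustrative numbers only), the user-data capacity of such a device can be computed as follows.

```python
# 8 KB per page, 128 pages per block, 2048 blocks per plane, 16 planes per device.
kb_per_page = 8
pages_per_block = 128
blocks_per_plane = 2048
planes_per_device = 16

kb_per_device = kb_per_page * pages_per_block * blocks_per_plane * planes_per_device
print(kb_per_device // (1024 * 1024), "GB of user data per device")  # 32 GB
```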
According to a number of embodiments, controller 308 may control encoding of a number of received data bits according to the ECC encoder 310 that allows for later identification of erroneous bits and the conversion of those erroneous bits to erasures. The controller 308 may also control programming the encoded number of received data bits to a group of memory cells in memory device(s) 314.
The apparatus shown in
From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made while remaining within the scope of the claimed technology. Certain details are set forth herein to provide an understanding of described embodiments of technology. However, other examples may be practiced without various of these particular details. In some instances, well-known circuits, control signals, timing protocols, neural network structures, algorithms, and/or software operations have not been shown in detail in order to avoid unnecessarily obscuring the described embodiments. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.
Block 402 recites “receive known encoded and decoded data pairs, the encoded data encoded with a particular encoding technique.” The known encoded and decoded data pairs may be received by a computing device that includes a neural network, such as the neural network 100 of
Block 404 may follow block 402. Block 404 recites “determine a set of weights for a neural network to decode data encoded with the particular encoding technique.” For example, a neural network (e.g., any of the neural networks described herein) may be trained using the encoded and decoded data pairs received in block 402. The weights may be numerical values which, when used by the neural network, allow the neural network to output decoded data corresponding to encoded input data encoded with the particular encoding technique. The weights may be stored, for example, in the weight memory 228 of
In some examples, multiple sets of data pairs may be received (e.g., in block 402), with each set corresponding to data encoded with a different encoding technique. Accordingly, multiple sets of weights may be determined (e.g., in block 404), each set corresponding to a different encoding technique. For example, one set of weights may be determined which may be used to decode data encoded in accordance with LDPC coding while another set of weights may be determined which may be used to decode data encoded with BCH coding.
Block 406 may follow block 404. Block 406 recites “receive data encoded with the particular encoding technique.” For example, data (e.g., signaling indicative of data) encoded with the particular encoding technique may be retrieved from a memory of a computing device and/or received using a wireless communications receiver. Any of a variety of encoding techniques may have been used to encode the data.
Block 408 may follow block 406. Block 408 recites “decode the data using the set of weights.” By processing the encoded data received in block 406 using the weights, which may have been determined in block 404, the decoded data may be determined. For example, any neural network described herein may be used to decode the encoded data (e.g., the neural network 100 of
Block 410 may follow block 408. Block 410 recites “writing the decoded data to or reading the decoded data from memory.” For example, data decoded in block 408 may be written to a memory, such as the memory 314 of
In some examples, blocks 406-410 may be repeated for data encoded with different encoding techniques. For example, data may be received in block 406, encoded with one particular encoding technique (e.g., LDPC coding). A set of weights may be selected that is for use with LDPC coding and provided to a neural network for decoding in block 408. The decoded data may be obtained in block 410. Data may then be received in block 406, encoded with a different encoding technique (e.g., BCH coding). Another set of weights may be selected that is for use with BCH coding and provided to a neural network for decoding in block 408. The decoded data may be obtained in block 410. In this manner, one neural network may be used to decode data that had been encoded with multiple encoding techniques.
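For illustration, a Python/NumPy sketch of selecting a weight set by encoding technique follows; the dictionary-based weight store and the function name are hypothetical, standing in for the weight memory and mode configuration control described earlier.

```python
import numpy as np

def decode_with_technique(x, technique, weight_sets, centers, sigmas):
    """Select the weight set trained for the given encoding technique and decode.

    weight_sets : dict mapping a technique name (e.g. 'LDPC', 'BCH') to an L x H
                  weight matrix determined during training (hypothetical store)
    x           : length-N received encoded vector
    Returns the length-L decoded estimate.
    """
    W = weight_sets[technique]                   # swap in the weights for this code
    r = np.linalg.norm(x - centers, axis=1)      # Equation 1
    h = np.exp(-(r ** 2) / sigmas ** 2)          # Gaussian activation (assumed)
    return W @ h                                 # Equation 2

# Usage sketch: the same network structure decodes different codes by swapping weights.
# y_ldpc = decode_with_technique(x, "LDPC", weight_sets, centers, sigmas)
# y_bch  = decode_with_technique(x, "BCH",  weight_sets, centers, sigmas)
```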
Examples described herein may refer to various components as “coupled” or signals as being “provided to” or “received from” certain components. It is to be understood that in some examples the components are directly coupled one to another, while in other examples the components are coupled with intervening components disposed between them. Similarly, signals may be provided directly to and/or received directly from the recited components without intervening components, but may also be provided to and/or received from those components through intervening components.