Exemplary embodiments of the present inventive concept relate to encoding and decoding of data, and more particularly to encoding and decoding of data using generalized low-density parity check codes for storage on a memory device.
Both low-density parity-check (LDPC) codes and turbo product codes (TPCs) are known for their excellent error-correction capability and their low encoding/decoding complexity. Even better error-correction capability can be achieved by generalized LDPC (GLDPC) codes, in which the local check nodes in a Tanner graph are allowed to be arbitrary codes, as opposed to the single-parity checks of "plain" LDPC codes.
In coding theory, a Hamming code is a linear error-correcting code that encodes data with parity bits. For example, a Hamming(7,4) code encodes four bits of data into seven bits by adding three parity bits. GLDPC codes based on Hamming codes provide an excellent combination of high raw bit-error rate (rBER) coverage and low encoding and decoding complexity. Due to a typical error floor, however, the high rBER coverage of these codes is attainable only for a moderate target frame error rate (FER) on the order of 10^−8. Here, the term “error floor” refers to a situation in which, below a certain FER value, it becomes very difficult to decrease the FER further. While a moderate FER is sufficient for some applications, this is not the case for nonvolatile memories such as NAND flash memories, where a very low FER on the order of 10^−11 is typically required. Thus, data cannot reliably be encoded for storage on NAND flash memories using conventional GLDPC codes based on Hamming codes.
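For reference, one standard parity-check matrix for the Hamming(7,4) code takes its columns to be the binary representations of 1 through 7:

$$
H = \begin{pmatrix}
0 & 0 & 0 & 1 & 1 & 1 & 1\\
0 & 1 & 1 & 0 & 0 & 1 & 1\\
1 & 0 & 1 & 0 & 1 & 0 & 1
\end{pmatrix}
$$

With this choice, a single bit error in position j produces a syndrome equal to the binary representation of j, which is what gives the code its single-error-correcting capability.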
According to an exemplary embodiment of the disclosure, a method of processing a request by a host to access data stored in a memory device is provided. The method includes reading data from the memory device in response to the request; applying an iterative decoder to the read data; performing an error correction upon determining that the iterative decoder is oscillating; and outputting the corrected data to the host. The error correction includes determining a total number of rows in first data the decoder attempted to correct; estimating first visible error rows among the total number that continue to have an error after the attempt; estimating residual error rows among the total number that no longer have an error after the attempt; determining second visible error rows in second data of the decoder that continue to have an error by permuting indices of the residual error rows according to a permutation; determining whether zero or more first hidden error rows are present in the first data from the second visible error rows, where each hidden error row has an error and is a valid Hamming codeword; and correcting the first data using the first visible error rows and the determined number of first hidden error rows.
According to an exemplary embodiment of the disclosure, a memory system including a memory device and a controller is provided. The controller is configured to read data from the memory device. The controller includes an iterative decoder. The controller is configured to apply the iterative decoder to the read data and determine whether the iterative decoder is oscillating. The controller is configured to determine a total number of rows in first data the decoder attempted to correct, estimate residual error rows among the total number that no longer have an error after the attempt, determine second visible error rows in second data of the decoder that continue to have an error by permuting indices of the residual error rows according to a permutation, determine whether zero or more first hidden error rows are present in the first data from the second visible error rows, and correct the first data using the first visible error rows and the determined number of first hidden error rows when it is determined that the iterative decoder is oscillating. Each hidden error row has an error and is a valid Hamming codeword.
According to an exemplary embodiment of the disclosure, a memory device is provided that includes a memory array, an iterative decoder, and a logic circuit configured to apply the iterative decoder to decode data read from the memory array. The logic circuit is configured to determine a total number of rows in first data the decoder attempted to correct, estimate residual error rows among the total number that no longer have an error after the attempt, determine second visible error rows in second data of the decoder that continue to have an error by permuting indices of the residual error rows according to a permutation, and correct the first data using the first visible error rows when the iterative decoder is repeatedly changing between two states during the decode.
According to an exemplary embodiment of the disclosure, a method of correcting data stored in a memory device is provided. The method includes: applying an iterative decoder to the data; determining a total number of rows in first data the decoder attempted to correct; estimating first visible error rows among the total number that continue to have an error after the attempt; estimating residual error rows among the total number that no longer have an error after the attempt; determining second visible error rows in second data of the decoder that continue to have an error by permuting indices of the residual error rows according to a permutation; and correcting the first data using the first visible error rows.
The present inventive concept will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
Hereinafter, exemplary embodiments of the inventive concept will be described in conjunction with the accompanying drawings. Below, details, such as detailed configurations and structures, are provided to aid a reader in understanding embodiments of the inventive concept. Therefore, embodiments described herein may be variously changed or modified without departing from embodiments of the inventive concept.
Modules in the drawings or the following detailed description may be connected with other modules in addition to the components described in the detailed description or illustrated in the drawings. Each connection between the modules or components may be a connection by communication or may be a physical connection.
Referring to
The host controller 110 controls read and write operations of the memory controller 120 and may correspond to a central processing unit (CPU), for example. The memory controller 120 stores data when performing a write operation and outputs stored data when performing a read operation under the control of the host controller 110. The memory controller 120 includes a host interface 121 and an access controller 125. The host interface 121 and the access controller 125 may be connected to one another via an internal bus 127. The access controller 125 is configured to interface with a nonvolatile memory device 126. In an exemplary embodiment, the nonvolatile memory device 126 is implemented by a flash memory device. In an alternate embodiment, the nonvolatile memory device 126 is replaced with a volatile memory but is described herein as nonvolatile for ease of discussion.
The host interface 121 may be connected with a host (e.g., see 4100 in
The access controller 125 is configured to write data to the memory device 126, and read data from the memory device 126. The memory device 126 may include one or more non-volatile memory devices.
The host controller 110 exchanges signals with the memory controller 120 through the host interface 121. The access controller 125 controls an access operation on a memory in which data will be stored within the memory device 126 when a write operation is performed and controls an access operation on a memory in which data to be outputted is stored within the memory device 126 when a read operation is performed. The memory device 126 stores data when a write operation is performed and outputs stored data when a read operation is performed. The access controller 125 and the memory device 126 communicate with one another through a data channel 130. While only a single memory device 126 is illustrated in
Referring to
Herein, the term [n, k, d] code refers to a linear binary code of length n, dimension k, and minimum Hamming distance d. Also, an [n, k] code is an [n, k, d] code for some d.
Hamming Codes
A Hamming code Ham is a [2^m−1, 2^m−m−1, 3] code defined by an m×(2^m−1) parity-check matrix whose 2^m−1 columns are all of the non-zero binary vectors of length m, where m is some positive integer.
Extended Hamming Codes
An extended Hamming code eHam is the [2^m, 2^m−m−1, 4] code obtained by adjoining a parity bit to all codewords of the Hamming code Ham.
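As an illustrative sketch (not part of the described embodiments), the parity-check matrices of Ham and eHam can be built directly from the definitions above; the helper below assumes NumPy and uses hypothetical function names:

```python
import numpy as np

def hamming_parity_check(m: int) -> np.ndarray:
    """m x (2^m - 1) parity-check matrix whose columns are all
    non-zero binary vectors of length m (the Hamming code Ham)."""
    n = 2**m - 1
    cols = [[(j >> bit) & 1 for bit in range(m - 1, -1, -1)] for j in range(1, n + 1)]
    return np.array(cols, dtype=np.uint8).T  # shape (m, 2^m - 1)

def extended_hamming_parity_check(m: int) -> np.ndarray:
    """(m+1) x 2^m parity-check matrix of the extended Hamming code eHam:
    Ham's matrix with a zero column appended for the new parity bit,
    plus an all-ones row enforcing overall even parity."""
    H = hamming_parity_check(m)
    H_ext = np.hstack([H, np.zeros((m, 1), dtype=np.uint8)])
    overall_parity = np.ones((1, 2**m), dtype=np.uint8)
    return np.vstack([H_ext, overall_parity])

# Example: m = 3 gives the [7, 4, 3] Hamming code and the [8, 4, 4] extended code.
print(hamming_parity_check(3).shape)           # (3, 7)
print(extended_hamming_parity_check(3).shape)  # (4, 8)
```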
Shortening
If C is an [n,k] code and I⊆{1, . . . , n}, then the code obtained by shortening C on I is obtained by first taking only the codewords (c1, . . . , cn) of C with ci=0 for all i∈I, and then deleting all coordinates from I (all of them zero coordinates). The resulting code has length n−|I| and dimension at least k−|I|. The dimension equals k−|I| if I is a subset of the information part in systematic encoding.
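In terms of parity-check matrices, fixing the coordinates in I to zero and then deleting them amounts to deleting the corresponding columns; a minimal sketch, reusing the hypothetical helper above and assuming 0-based coordinates:

```python
import numpy as np

def shorten_parity_check(H: np.ndarray, I: set) -> np.ndarray:
    """Parity-check matrix of the code obtained by shortening on the
    coordinate set I: keep only the columns outside I."""
    keep = [j for j in range(H.shape[1]) if j not in I]
    return H[:, keep]

# A shortened eH-code of length n is obtained by shortening eHam on 2^m - n coordinates,
# e.g. shortening the [8, 4, 4] extended Hamming code on two coordinates gives length 6.
H_short = shorten_parity_check(extended_hamming_parity_check(3), {0, 1})
```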
Shortened eH-Codes
C is a shortened eH-code if it is obtained by shortening eHam on some subset of its coordinates.
eH-GLDPC Codes
For positive integers n≥2m and N, let C_rows be some fixed shortened eH-code of length n, and let π be a permutation on the coordinates of N×n matrices. The eH-GLDPC code C⊂F_2^(N×n) is defined as the set of all N×n binary matrices M that satisfy the following conditions:
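The membership conditions themselves do not survive in the text above. In two-sided GLDPC/product-code constructions of this kind they typically take the following form, stated here only as an assumption consistent with the remainder of the description: (i) every row of M is a codeword of C_rows; and (ii) every row of the matrix obtained by permuting the entries of M according to π is a codeword of C_rows. The two sets of row constraints then play the roles of the two decoding sides J1 and J2 discussed below.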
Property 1 (Line-Intersection Property)
The set of indices obtained by applying a permutation π to a row intersects each row at most once. Here, a row stands for a set of indices of the form {(i,1), (i,2), . . . , (i,n)} for some i∈{1, . . . , N}. It may be verified that π has the line-intersection property if and only if the inverse permutation π−1 has the line-intersection property. Further, the property requires N≥n. The line-intersection property is illustrated in
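A small sketch of how the line-intersection property could be checked for a candidate permutation; coordinates are modeled as 0-based pairs (i, j), and the helper name is illustrative only:

```python
from itertools import product

def has_line_intersection_property(pi: dict, N: int, n: int) -> bool:
    """pi maps each coordinate (i, j) of an N x n matrix to another coordinate.
    The property holds if, for every row i, the image of row i under pi
    meets each row in at most one coordinate."""
    for i in range(N):
        image_rows = [pi[(i, j)][0] for j in range(n)]
        if len(image_rows) != len(set(image_rows)):  # some row is hit twice
            return False
    return True

# Toy example with N = n = 3: a transpose-like permutation maps row i onto
# column i, so its image meets every row exactly once.
N = n = 3
pi = {(i, j): (j, i) for i, j in product(range(N), range(n))}
assert has_line_intersection_property(pi, N, n)
```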
Pseudo-Errors
A certain type of error (hereinafter referred to as “Pseudo-errors”) is the reason for the error floor in eH-GLDPC codes. Pseudo-errors can be thought of as a special case of near-codewords/trapping sets, i.e., low-weight errors that violate only a very small number of local constraints. They are special in the sense that they result in oscillations between the two decoding sides J1 and J2.
By definition, a pseudo-error is an error pattern (say, at J1) that results in decoder oscillations. Pseudo-errors for which the post-decoding patterns at Ji (i=1, 2) have only rows of weight 4 are considered herein. The pre-decoding pseudo-error at Ji (i=1, 2) as illustrated in
In an embodiment, pseudo-errors with two properties are considered: i) in visible-error rows, there are only wrong corrections (i.e., all bits flipped by the decoder 228 should not have been flipped); and ii) all visible-error (wrong) corrections are mapped through π or π^−1 (depending on whether i equals 1 or 2, respectively) to rows without an “X”, where an X marks an error present both before and after the decoding.
The method of
The method of
The method of
The choosing of the number of hidden error rows, the choosing of the number of visible error rows and their locations, and the choosing of the number of residual error rows and their locations may be referred to as selecting parameters for a scan.
The method of
The method of
The method of
The method of
The method of
If hidden error rows are not present, then step 801 includes completing a visible error row in the second data to a weight-4 vector using the first visible errors of the first data (see right side of 801). Then it is verified whether the weight-4 vector is a valid codeword in step 802 (see right side of 802).
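As a rough sketch of the completion-and-verification idea of steps 801 and 802, assuming a syndrome check against a shortened-eH parity-check matrix (the helper name and interface are assumptions, not the literal implementation):

```python
import numpy as np

def complete_to_weight4_codeword(known_positions: set, H_short: np.ndarray) -> list:
    """Given coordinates already believed to be in error in a row, return the
    candidate extra coordinate(s) that would make the error pattern a weight-4
    codeword of the shortened eH-code (zero syndrome)."""
    n = H_short.shape[1]
    partial = np.zeros(n, dtype=np.uint8)
    partial[list(known_positions)] = 1
    candidates = []
    for j in range(n):
        if j in known_positions:
            continue
        trial = partial.copy()
        trial[j] = 1
        if not ((H_short @ trial) % 2).any():  # valid codeword check
            candidates.append(j)
    return candidates
```

Because the shortened eH-code has minimum distance 4, three fixed coordinates determine at most one completing fourth coordinate, so an empty candidate list can be used to screen out a wrong hypothesis.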
The method of
The method of
The method of
As discussed above, step 707 of
The method of
The method of
The method of
The method of
As shown in the corresponding figure, one side may correspond to J1 and the other side may correspond to J2. Since 1 correction was made to each of the 3 visible error rows K1 in J1, one hidden row H1 with 4 errors was chosen, and each of the 3 visible error rows was completed to a weight-4 codeword, the left side of Equation 1 reduces to 13 errors. For example, 4H1+3*(4−1)=13. The right side of Equation 1 reduces to 4H2+5, since J2 includes two visible error rows, where the first visible error row had 1 correction and the second visible error row had 2 corrections. For example, 4H2+1*(4−1)+1*(4−2)=4H2+5. H2 is then determined to be 2, since 4H2+5=13. Equation 1 assumes that the number of errors in J1 is equal to the number of errors in J2.
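Although Equation 1 itself is not reproduced here, the error-count balance applied above can be written as follows, stated as an assumption consistent with the worked numbers (f denotes the number of corrections the decoder made in a visible error row, and H1, H2 denote the numbers of hidden error rows on each side):

$$
4H_1 + \sum_{\text{visible error rows of } J_1} (4 - f) \;=\; 4H_2 + \sum_{\text{visible error rows of } J_2} (4 - f)
$$

With H1 = 1, three J1 rows with f = 1, and two J2 rows with f = 1 and f = 2, the left side is 4 + 3·3 = 13 and the right side is 4H2 + 3 + 2 = 4H2 + 5, giving H2 = 2 as in the example above.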
The method of
The method of
The method of
The method of
The method of
The method of
In
In the general case, if the number of X rows of the relevant side is larger than 4, then an additional scan over M intersection options is required, and in what follows one considers the case where the scan hits the correct option. Note also that the number of X rows of that side is assumed to be known at this stage, since both quantities contributing to it have been calculated from the current values of the scanned parameters.
In each instance of this scan over M options, for each of the visible error rows of one side there is either 0 or 1 X's coming from the hidden error row of the other side, in the case where the other side has a hidden error row. In the case where the other side has no hidden error rows, it is clear that there are 0 X's from hidden error rows in each visible error row, as there are no hidden error rows on that side. This is included in the more general case where each visible error row has either 0 or 1 X's from hidden error rows of the other side. Therefore, unless noted otherwise, it is assumed that the other side has a single hidden error row.
It is noted that all the X's in the visible error rows of one side that do not come from the hidden error row of the other side come from the visible error rows of the other side, and by assumption, each such row intersects each visible error row of the first side at most once, in a known coordinate.
In what follows, one can simultaneously recover, on one side, the X's coming from the visible error rows of the other side and the identity of the hidden error row of the other side (if it exists). Moreover, one can reconstruct some unknowns in several different ways, and checking whether the resulting values for the same unknown agree will be used as a criterion for screening out wrong assumptions.
A visible error row is fixed on one side, and it is assumed that the decoder flipped f∈{1,2,3} coordinates in this row. For example, in the illustrated case there are visible error rows with f=1 flips and one row with f=2 flips. In this row, a shortened-eH word of weight 4 has exactly the following “1”s: i) up to one X from the hidden error row of the other side, where such an X is assumed if and only if the other side has a hidden error row and the current value of the scan over M options described above implies that this visible error row indeed intersects with the hidden error row of the other side (mh∈{0,1} is written for the number of X's from the hidden error row of the other side); ii) exactly the f coordinates flipped by the decoder; and iii) exactly ma:=4−f−mh X's coming from the visible error rows of the other side, each such X coming from a different row of that side.
The algorithm may then run on all the visible error rows of that side as follows:
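The enumerated steps do not survive in the text above; the sketch below only illustrates the general scan-and-screen pattern described in the surrounding paragraphs, and every name (including the structure of a "hypothesis") is an assumption:

```python
def scan_and_screen(hypotheses, visible_error_rows, complete_row, consistent):
    """Generic scan: for each hypothesis about hidden-row intersections, try to
    complete every visible error row to a weight-4 codeword and keep only the
    hypotheses whose per-row completions agree with one another."""
    surviving = []
    for hyp in hypotheses:                  # e.g. the M intersection options
        completions = []
        ok = True
        for row in visible_error_rows:
            cand = complete_row(row, hyp)   # weight-4 completion, or None
            if cand is None:                # no valid codeword: screen out
                ok = False
                break
            completions.append(cand)
        if ok and consistent(completions):  # cross-row agreement check
            surviving.append((hyp, completions))
    return surviving
```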
Typically, and with high probability, only the correct solution will not be screened out by the above process. In addition, if one of the fixed parameters from outer scans is incorrect, then typically all solutions will be screened out, and it will be clear that the decoder must proceed to the next hypothesis.
For example, in the illustrated case, one scans on the choices of a single visible error row from J1, and completes the X resulting from the intersection of this J1-row with the J2 row, together with the 2 □'s, to a weight-4 shortened-eH codeword (if possible). This results in one additional X on the J2 row. Similarly, for the visible J2 row with a single □, one scans on the choices of two visible error rows from J1, and again completes the 3 resulting coordinates, coming from the 2 X's mapped from J1 and the single □, to a fourth coordinate of a weight-4 codeword. If the two completions from the two rows are mapped to the same row of J1, then this option is retained. The situation after this stage is depicted in the corresponding figure.
As explained above, at this stage, the only unknown X's (if any) are those of the hidden error rows of the other side. For example, in the illustrated case, the hidden error row of J1 is no longer hidden, as the decoder has a hypothesis for this row. Now there are two different options to proceed: 1) since there are no longer any hidden error rows in J1, the remaining pseudo-error can be solved by the simpler method for the case where there are hidden error rows only on one side; and 2) the remaining pseudo-error can be solved directly, similarly to the above method.
The 2nd Option
If the number of hidden error rows on the remaining side is 0, then there is nothing to solve, and the entire pattern is already known. If it is 1, then work is performed similarly to the above in order to find the single hidden error row of that side, and consequently all missing X's. In an embodiment, one can find the hidden error row by completing triples of known coordinates in rows of the opposite side to weight-4 codewords. These completions need to be mapped to the same row (verification), which is then the estimated hidden error row. The case where the number of hidden error rows is 2 is handled in a similar manner, as illustrated in the corresponding figure.
In some cases, it is sufficient to consider only pseudo-errors that are allowed to have hidden error rows only on one side. For example, such cases may arise at an intermediate stage of pseudo-error decoding with hidden rows on both sides, as described in the previous section. As another example, when modifying some decoder parameters, it is possible to assure that practically all pseudo-errors have hidden error rows only on one side, at the cost of slightly decreasing the rBER coverage.
It is assumed that all hidden error rows appear only on one side. In this case, one first scans over the two options for the side that might contain hidden rows. By assumption, there are no hidden error rows on the other side. This means that the decoder of the other side acted exactly in the rows that contain the permutation-map of the pseudo-error at the output of the first side's decoder. Referring to the rows of the other side in which the decoder acted as visible error rows, this suggests the following line of action:
As an alternative, one can set the output LLRs of all visible error rows of the side without hidden rows to zero, set the magnitudes of the output LLRs of all rows that are not visible error rows on that side to their maximum possible value, and proceed with the eH-GLDPC decoding iterations. Note that when proceeding with the eH-GLDPC decoding iterations, the first step is to map output LLRs from that side to the side that might contain hidden rows. In particular, in each row of the latter side, the zero LLRs mark exactly its intersection with the visible error rows of the former side, and they are now the lowest LLRs of the row.
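A minimal sketch of this alternative LLR adjustment, assuming an (N, n) array of output LLRs per side and NumPy (the function and parameter names are illustrative, not from the original):

```python
import numpy as np

def prepare_llrs_for_retry(llrs: np.ndarray, visible_error_rows, max_llr_magnitude: float = 15.0) -> np.ndarray:
    """Zero the LLRs of the visible error rows and saturate all other rows.

    llrs: (N, n) output LLRs of the side assumed to have no hidden error rows;
    visible_error_rows: indices of rows in which the decoder acted.
    The signs of the saturated rows are preserved."""
    adjusted = llrs.copy()
    adjusted[list(visible_error_rows), :] = 0.0                     # no confidence in suspect rows
    mask = np.ones(llrs.shape[0], dtype=bool)
    mask[list(visible_error_rows)] = False
    adjusted[mask, :] = np.sign(llrs[mask, :]) * max_llr_magnitude  # full confidence elsewhere
    return adjusted
```

After this adjustment, the LLRs would be permuted to the other side and the eH-GLDPC iterations would resume, so the zeroed coordinates appear as the least reliable positions in every row they intersect.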
Referring back to
The above-described methods may be tangibly embodied on one or more computer readable medium(s) (i.e., program storage devices such as a hard disk, magnetic floppy disk, RAM, ROM, CD ROM, Flash Memory, etc., and executable by any device or machine comprising suitable architecture, such as a general purpose digital computer having a processor, memory, and input/output interfaces).
The host 4100 may write data in the SSD 4200 or read data from the SSD 4200. The host controller 4120 may transfer signals SGL such as a command, an address, a control signal, and the like to the SSD 4200 via the host interface 4111. The DRAM 4130 may be a main memory of the host 4100.
The SSD 4200 may exchange signals SGL with the host 4100 via the host interface 4211, and may be supplied with a power via a power connector 4221. The SSD 4200 may include a plurality of nonvolatile memories 4201 through 420n, an SSD controller 4210, and an auxiliary power supply 4220. Herein, the nonvolatile memories 4201 to 420n may be implemented by NAND flash memory. The SSD controller 4210 may be implemented by the controller 125 of
The plurality of nonvolatile memories 4201 through 420n may be used as a storage medium of the SSD 4200. The plurality of nonvolatile memories 4201 to 420n may be connected with the SSD controller 4210 via a plurality of channels CH1 to CHn. One channel may be connected with one or more nonvolatile memories. Each of the channels CH1 to CHn may correspond to the data channel 130 depicted in
The SSD controller 4210 may exchange signals SGL with the host 4100 via the host interface 4211. Herein, the signals SGL may include a command (e.g., the CMD), an address (e.g., the ADDR), data, and the like. The SSD controller 4210 may be configured to write or read out data to or from a corresponding nonvolatile memory according to a command of the host 4100.
The auxiliary power supply 4220 may be connected with the host 4100 via the power connector 4221. The auxiliary power supply 4220 may be charged by a power PWR from the host 4100. The auxiliary power supply 4220 may be placed within the SSD 4200 or outside the SSD 4200. For example, the auxiliary power supply 4220 may be put on a main board to supply an auxiliary power to the SSD 4200.
While an embodiment with respect to
Referring to
The main processor 1100 may control all operations of the system 1000, more specifically, operations of other components included in the system 1000. The main processor 1100 may be implemented as a general-purpose processor, a dedicated processor, or an application processor.
The main processor 1100 may include at least one CPU core 1110 and further include a controller 1120 configured to control the memories 1200a and 1200b and/or the storage devices 1300a and 1300b. In some embodiments, the main processor 1100 may further include an accelerator 1130, which is a dedicated circuit for a high-speed data operation, such as an artificial intelligence (AI) data operation. The accelerator 1130 may include a graphics processing unit (GPU), a neural processing unit (NPU) and/or a data processing unit (DPU) and be implemented as a chip that is physically separate from the other components of the main processor 1100. The accelerator 1130 may include the ECC encoder 222 and the ECC decoder 228 similar to the accelerator 128 illustrated in
The storage devices 1300a and 1300b may serve as non-volatile storage devices configured to store data regardless of whether power is supplied thereto, and have larger storage capacity than the memories 1200a and 1200b. The storage devices 1300a and 1300b may respectively include storage controllers (STRG CTRL) 1310a and 1310b and NVM (Non-Volatile Memory)s 1320a and 1320b configured to store data via the control of the storage controllers 1310a and 1310b. Although the NVMs 1320a and 1320b may include flash memories having a two-dimensional (2D) structure or a three-dimensional (3D) V-NAND structure, the NVMs 1320a and 1320b may include other types of NVMs, such as PRAM and/or RRAM.
The storage devices 1300a and 1300b may be physically separated from the main processor 1100 and included in the system 1000 or implemented in the same package as the main processor 1100. In addition, the storage devices 1300a and 1300b may be solid-state devices (SSDs) or memory cards and may be removably combined with other components of the system 1000 through an interface, such as the connecting interface 1480 that will be described below. The storage devices 1300a and 1300b may be devices to which a standard protocol, such as a universal flash storage (UFS), an embedded multi-media card (eMMC), or a non-volatile memory express (NVMe), is applied, without being limited thereto.
The image capturing device 1410 may capture still images or moving images. The image capturing device 1410 may include a camera, a camcorder, and/or a webcam.
The user input device 1420 may receive various types of data input by a user of the system 1000 and include a touch pad, a keypad, a keyboard, a mouse, and/or a microphone.
The sensor 1430 may detect various types of physical quantities, which may be obtained from the outside of the system 1000, and convert the detected physical quantities into electric signals. The sensor 1430 may include a temperature sensor, a pressure sensor, an illuminance sensor, a position sensor, an acceleration sensor, a biosensor, and/or a gyroscope sensor.
The communication device 1440 may transmit and receive signals between other devices outside the system 1000 according to various communication protocols. The communication device 1440 may include an antenna, a transceiver, and/or a modem.
The display 1450 and the speaker 1460 may serve as output devices configured to respectively output visual information and auditory information to the user of the system 1000.
The power supplying device 1470 may appropriately convert power supplied from a battery (not shown) embedded in the system 1000 and/or an external power source, and supply the converted power to each of components of the system 1000.
The connecting interface 1480 may provide connection between the system 1000 and an external device, which is connected to the system 1000 and capable of transmitting and receiving data to and from the system 1000. The connecting interface 1480 may be implemented by using various interface schemes, such as advanced technology attachment (ATA), serial ATA (SATA), external SATA (e-SATA), small computer system interface (SCSI), serial attached SCSI (SAS), peripheral component interconnection (PCI), PCI express (PCIe), NVMe, IEEE 1394, a universal serial bus (USB) interface, a secure digital (SD) card interface, a multi-media card (MMC) interface, an eMMC interface, a UFS interface, an embedded UFS (eUFS) interface, and a compact flash (CF) card interface.
Referring to
The application servers 3100 to 3100n may communicate with the storage servers 3200 to 3200m through a network 3300. The network 3300 may be implemented by using a fiber channel (FC) or Ethernet. In this case, the FC may be a medium used for relatively high-speed data transmission and use an optical switch with high performance and high availability. The storage servers 3200 to 3200m may be provided as file storages, block storages, or object storages according to an access method of the network 3300.
In an embodiment, the network 3300 may be a storage-dedicated network, such as a storage area network (SAN). For example, the SAN may be an FC-SAN, which uses an FC network and is implemented according to an FC protocol (FCP). As another example, the SAN may be an Internet protocol (IP)-SAN, which uses a transmission control protocol (TCP)/IP network and is implemented according to a SCSI over TCP/IP or Internet SCSI (iSCSI) protocol. In another embodiment, the network 3300 may be a general network, such as a TCP/IP network. For example, the network 3300 may be implemented according to a protocol, such as FC over Ethernet (FCoE), network attached storage (NAS), and NVMe over Fabrics (NVMe-oF).
Hereinafter, the application server 3100 and the storage server 3200 will mainly be described. A description of the application server 3100 may be applied to another application server 3100n, and a description of the storage server 3200 may be applied to another storage server 3200m.
The application server 3100 may store data, which is requested by a user or a client to be stored, in one of the storage servers 3200 to 3200m through the network 3300. Also, the application server 3100 may obtain data, which is requested by the user or the client to be read, from one of the storage servers 3200 to 3200m through the network 3300. For example, the application server 3100 may be implemented as a web server or a database management system (DBMS).
The application server 3100 may access a memory 3120n or a storage device 3150n, which is included in another application server 3100n, through the network 3300. Alternatively, the application server 3100 may access memories 3220 to 3220m or storage devices 3250 to 3250m, which are included in the storage servers 3200 to 3200m, through the network 3300. Thus, the application server 3100 may perform various operations on data stored in application servers 3100 to 3100n and/or the storage servers 3200 to 3200m. For example, the application server 3100 may execute an instruction for moving or copying data between the application servers 3100 to 3100n and/or the storage servers 3200 to 3200m. In this case, the data may be moved from the storage devices 3250 to 3250m of the storage servers 3200 to 3200m to the memories 3120 to 3120n of the application servers 3100 to 3100n directly or through the memories 3220 to 3220m of the storage servers 3200 to 3200m. The data moved through the network 3300 may be data encrypted for security or privacy.
The storage server 3200 will now be described as an example. An interface 3254 may provide physical connection between a processor 3210 and a controller 3251 and a physical connection between a network interface card (NIC) 3240 and the controller 3251. For example, the interface 3254 may be implemented using a direct attached storage (DAS) scheme in which the storage device 3250 is directly connected with a dedicated cable. For example, the interface 3254 may be implemented by using various interface schemes, such as ATA, SATA, e-SATA, an SCSI, SAS, PCI, PCIe, NVMe, IEEE 1394, a USB interface, an SD card interface, an MMC interface, an eMMC interface, a UFS interface, an eUFS interface, and/or a CF card interface.
The storage server 3200 may further include a switch 3230 and the NIC (Network InterConnect) 3240. The switch 3230 may selectively connect the processor 3210 to the storage device 3250 or selectively connect the NIC 3240 to the storage device 3250 via the control of the processor 3210.
In an embodiment, the NIC 3240 may include a network interface card and a network adaptor. The NIC 3240 may be connected to the network 3300 by a wired interface, a wireless interface, a Bluetooth interface, or an optical interface. The NIC 3240 may include an internal memory, a digital signal processor (DSP), and a host bus interface and be connected to the processor 3210 and/or the switch 3230 through the host bus interface. The host bus interface may be implemented as one of the above-described examples of the interface 3254. In an embodiment, the NIC 3240 may be integrated with at least one of the processor 3210, the switch 3230, and the storage device 3250.
In the storage servers 3200 to 3200m or the application servers 3100 to 3100n, a processor may transmit a command to storage devices 3150 to 3150n and 3250 to 3250m or the memories 3120 to 3120n and 3220 to 3220m and program or read data. In this case, the data may be data of which an error is corrected by an ECC engine. The data may be data on which a data bus inversion (DBI) operation or a data masking (DM) operation is performed, and may include cyclic redundancy code (CRC) information. The data may be data encrypted for security or privacy.
Storage devices 3150 to 3150n and 3250 to 3250m may transmit a control signal and a command/address signal to NAND flash memory devices 3252 to 3252m in response to a read command received from the processor. Thus, when data is read from the NAND flash memory devices 3252 to 3252m, a read enable (RE) signal may be input as a data output control signal, and thus, the data may be output to a DQ bus. A data strobe signal DQS may be generated using the RE signal. The command and the address signal may be latched in a page buffer depending on a rising edge or falling edge of a write enable (WE) signal.
The controller 3251 may control all operations of the storage device 3250. In an embodiment, the controller 3251 may include SRAM. In an embodiment, the controller 3251 may include the ECC encoder 222 and the ECC decoder 228 of
Although the present inventive concept has been described in connection with exemplary embodiments thereof, those skilled in the art will appreciate that various modifications can be made to these embodiments without substantially departing from the principles of the present inventive concept.
This application is a continuation application of U.S. patent application Ser. No. 17/706,179, filed in the United States Patent and Trademark Office on Mar. 28, 2022, the disclosure of which is incorporated by reference in its entirety herein.