Systems and methods relating generally to data processing, and more particularly to systems and methods for encoding and decoding information.
Data transfers often include encoding of a data set to be transferred to yield an encoded data set, and subsequent decoding of the encoded data set to recover the original data set. The encoding typically includes the addition of information designed to aid in recovering data transferred via a potentially lossy medium. In some cases, the encoding and decoding fail to provide sufficient aid in recovering a transferred data set and/or waste bandwidth by adding too much information to aid in the recovery.
Hence, for at least the aforementioned reasons, there exists a need in the art for advanced systems and methods for data processing.
Various embodiments of the present invention provide data processing systems that include a two step encoder circuit. The two step encoder circuit is operable to: receive a user data set that includes a first data portion and a second data portion; apply a first level encoding on a first section by section basis to the first data portion to yield a first encoding data, wherein the first encoding data includes a first encoded portion and a second encoded portion; apply a second level encoding on a second section by section basis to the first encoded portion to yield a first parity set; apply a third level encoding on the first section by section basis to a combination of at least the first data portion, the second data portion, and the first encoding data to yield a second parity set; and assemble at least the first data portion, the second data portion, the first parity set and the second parity set to yield an encoded data set.
This summary provides only a general outline of some embodiments of the invention. The phrases “in one embodiment,” “according to one embodiment,” “in various embodiments”, “in one or more embodiments”, “in particular embodiments” and the like generally mean the particular feature, structure, or characteristic following the phrase is included in at least one embodiment of the present invention, and may be included in more than one embodiment of the present invention. Importantly, such phrases do not necessarily refer to the same embodiment. Many other embodiments of the invention will become more fully apparent from the following detailed description, the appended claims and the accompanying drawings.
A further understanding of the various embodiments of the present invention may be realized by reference to the figures which are described in remaining portions of the specification. In the figures, like reference numerals are used throughout several figures to refer to similar components. In some instances, a sub-label consisting of a lower case letter is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.
a is a graphical depiction of a codeword encoded in accordance with one or more embodiments of the present invention;
b and
a-9i are graphical representations of different stages of codeword decoding in accordance with some embodiments of the present invention; and
a-10e are graphical representations of different stages of codeword encoding in accordance with some embodiments of the present invention.
In some instances of the aforementioned embodiments, the first section by section basis is orthogonal to the second section by section basis. In various cases, the first section by section basis is a row by row basis, and the second section by section basis is a column by column basis. In various instances of the aforementioned embodiments, the first level encoding is a strong row encoding, the third level encoding is a weak row by row encoding, and the second level encoding is a column encoding.
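As a deliberately simplified illustration of two orthogonal section by section bases, the sketch below applies single-bit XOR parity on a row by row basis and, separately, on a column by column basis over a small data array. The actual row and column codes used by the embodiments are not specified here and would be considerably stronger than single-bit parity.

```python
# Sketch: two orthogonal "section by section" encodings over a 2-D data
# array, using single-bit XOR parity as a stand-in for the real component
# codes (assumption: the true codes are stronger, e.g. LDPC or RS codes).

def row_parity(rows):
    # First basis: encode each row independently (row by row).
    return [sum(r) % 2 for r in rows]

def column_parity(rows):
    # Second basis: encode each column independently (column by column),
    # orthogonal to the row-by-row pass.
    return [sum(col) % 2 for col in zip(*rows)]

data = [
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 0],
]

r_par = row_parity(data)     # one parity bit per row
c_par = column_parity(data)  # one parity bit per column
```

Because the two bases are orthogonal, every data element is protected by exactly one row code and one column code, which is what allows a column code to supply side information about rows that fail to decode.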
In some instances of the aforementioned embodiments, the user data set further includes a third data portion, and the two step data encoder circuit is further operable to: apply a fourth level encoding on the first section by section basis to a combination of the first data portion, the second data portion, and a portion derived from the first encoded portion to yield a second encoding data, where the second encoding data includes a third encoded portion and a fourth encoded portion; and apply a fifth level encoding on the second section by section basis to the third encoded portion to yield a third parity set. In such instances, applying the third level encoding includes applying the third level encoding on the first section by section basis to a combination of the first data portion, the second data portion, the third data portion, the first encoding data, and the second encoding data to yield the second parity set; and the assembling includes assembling the first data portion, the second data portion, the third data portion, the first parity set, the second parity set and the third parity set to yield the encoded data set.
In some such instances, the first level encoding is a strong row encoding, the third level encoding is a weak row by row encoding, and the fourth level encoding is a medium row encoding. In various such instances, the data processing system further includes a data decoder circuit. The data decoder circuit is operable to: apply a weak row decoding to the encoded data set on a row by row basis to yield at least a first decoded row and a second decoded row; calculate strong row parity and medium row parity for each of the first decoded row and the second decoded row that failed to converge; reconstruct a medium column code based upon the third parity set; apply erasure decoding to columns corresponding to the third parity set to yield a first syndrome; apply medium row decoding to each of the first decoded row and the second decoded row that failed to converge using the first syndrome to yield a first second pass decode row and a second second pass decode row; calculate strong row parity for each of the first second pass decode row and the second second pass decode row that failed to converge; reconstruct a weak column code based upon the first parity set; apply erasure decoding to columns corresponding to the first parity set to yield a second syndrome; and apply strong row decoding to each of the first second pass decode row and the second second pass decode row that failed to converge using the second syndrome to yield a first third pass decode row and a second third pass decode row.
Other embodiments of the present invention provide methods for data processing that include: receiving a user data set that includes a first data portion, a second data portion, and a third data portion; applying a first level encoding on a first section by section basis to the first data portion to yield a first encoding data, where the first encoding data includes a first encoded portion and a second encoded portion; applying a second level encoding on a second section by section basis to the first encoded portion to yield a first parity set; applying a third level encoding on the first section by section basis to a combination of the first data portion, the second data portion, and a portion derived from the first encoded portion to yield a second encoding data, wherein the second encoding data includes a third encoded portion and a fourth encoded portion; applying a fourth level encoding on the second section by section basis to the third encoded portion to yield a third parity set; applying a fifth level encoding on the first section by section basis to a combination of at least the first data portion, the second data portion, the third data portion, the first encoding data, and the second encoding data to yield a second parity set; and assembling at least the first data portion, the second data portion, the third data portion, the first parity set, the second parity set, and the third parity set to yield an encoded data set.
In some instances of the aforementioned embodiments, the first level encoding is a strong row encoding, the third level encoding is a medium row encoding, the fifth level encoding is a weak row encoding, the second level encoding is a weak column encoding, and the fourth level encoding is a strong column encoding. In some cases, the methods further include: applying a weak row decoding to the encoded data set on a row by row basis to yield at least a first decoded row and a second decoded row; calculating strong row parity and medium row parity for each of the first decoded row and the second decoded row that failed to converge; reconstructing a medium column code based upon the third parity set; applying erasure decoding to columns corresponding to the third parity set to yield a first syndrome; applying medium row decoding to each of the first decoded row and the second decoded row that failed to converge using the first syndrome to yield a first second pass decode row and a second second pass decode row; calculating strong row parity for each of the first second pass decode row and the second second pass decode row that failed to converge; reconstructing a weak column code based upon the first parity set; applying erasure decoding to columns corresponding to the first parity set to yield a second syndrome; and applying strong row decoding to each of the first second pass decode row and the second second pass decode row that failed to converge using the second syndrome to yield a first third pass decode row and a second third pass decode row.
In various cases, a data processing circuit is included that includes a data detector circuit and a data decoder circuit. The data detector circuit is operable to apply a data detection algorithm to a codeword to yield a detected output, and the data decoder circuit is operable to apply a data decode algorithm to a decoder input derived from the detected output to yield a decoded output. Processing a codeword through both the data detector circuit and the data decoder circuit is generally referred to as a “global iteration”. During a global iteration, the data decode algorithm may be repeatedly applied. Each application of the data decode algorithm during a given global iteration is referred to as a “local iteration”.
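The global/local iteration control flow described above can be sketched as follows. The detect() and decode() functions are hypothetical stand-ins for the data detector and data decoder circuits, with a toy convergence rule standing in for real error correction.

```python
# Sketch of global vs. local iterations (assumption: toy detect/decode
# functions; a real implementation would run soft-decision algorithms).

def detect(codeword, guide=None):
    # Toy detector: each pass, guided by prior decoder output, improves
    # the "quality" of the detected data by one unit.
    return codeword + (guide or 0) + 1

def decode(detected):
    # Toy decoder: "converges" once input quality crosses a threshold.
    converged = detected >= 3
    return detected, converged

def process(codeword, max_global=5, max_local=2):
    guide = None
    for g in range(1, max_global + 1):         # one global iteration =
        detected = detect(codeword, guide)     # one detector pass ...
        for _ in range(max_local):             # ... plus repeated decoder
            out, converged = decode(detected)  # passes (local iterations)
            if converged:
                return g, out
            detected = out
        guide = out                            # feed result back to detector
    return None, out

globals_used, result = process(0)
```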
Turning to
In a typical read operation, read/write head 176 is accurately positioned by motor controller 168 over a desired data track on disk platter 178. Motor controller 168 both positions read/write head 176 in relation to disk platter 178 and drives spindle motor 172, moving read/write head 176 to the proper data track on disk platter 178 under the direction of hard disk controller 166. Spindle motor 172 spins disk platter 178 at a determined spin rate (RPMs). Once read/write head 176 is positioned adjacent to the proper data track, magnetic signals representing data on disk platter 178 are sensed by read/write head 176 as disk platter 178 is rotated by spindle motor 172. The sensed magnetic signals are provided as a continuous, minute analog signal representative of the magnetic data on disk platter 178. This minute analog signal is transferred from read/write head 176 to read channel circuit 110 via preamplifier 170. Preamplifier 170 is operable to amplify the minute analog signals accessed from disk platter 178. In turn, read channel circuit 110 digitizes and decodes the received analog signal to recreate the information originally written to disk platter 178. This data is provided as read data 103 to a receiving circuit. A write operation is substantially the opposite of the preceding read operation, with write data 101 being provided to read channel circuit 110. This data is then encoded and written to disk platter 178.
In operation, data written to disk platter 178 is encoded by read channel circuit 110 using a two step concatenation encoding that yields equal payloads. In some cases, the encoding may be done by a circuit similar to that discussed below in relation to
It should be noted that storage system 100 may be integrated into a larger storage system such as, for example, a RAID (redundant array of inexpensive disks or redundant array of independent disks) based storage system. Such a RAID storage system increases stability and reliability through redundancy, combining multiple disks as a logical unit. Data may be spread across a number of disks included in the RAID storage system according to a variety of algorithms and accessed by an operating system as if it were a single disk. For example, data may be mirrored to multiple disks in the RAID storage system, or may be sliced and distributed across multiple disks in a number of techniques. If a small number of disks in the RAID storage system fail or become unavailable, error correction techniques may be used to recreate the missing data based on the remaining portions of the data from the other disks in the RAID storage system. The disks in the RAID storage system may be, but are not limited to, individual storage systems such as storage system 100, and may be located in close proximity to each other or distributed more widely for increased security. In a write operation, write data is provided to a controller, which stores the write data across the disks, for example by mirroring or by striping the write data. In a read operation, the controller retrieves the data from the disks. The controller then yields the resulting read data as if the RAID storage system were a single disk.
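The XOR-based recovery that underlies this kind of redundancy can be sketched as follows. The single fixed parity block is a simplification (RAID-5-style layouts rotate parity across disks), and the byte values are arbitrary examples.

```python
from functools import reduce

# Sketch: recreating a missing disk's data from XOR parity across a
# stripe (assumption: simplified non-rotating parity, toy 2-byte blocks).

def xor_blocks(blocks):
    # Byte-wise XOR across equal-length blocks.
    return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

disk0 = b"\x0f\xf0"
disk1 = b"\x33\x55"
disk2 = b"\xaa\x01"
parity = xor_blocks([disk0, disk1, disk2])  # written at stripe time

# Suppose disk1 fails: rebuild its data from the survivors plus parity,
# since XORing all blocks including parity yields zero.
rebuilt = xor_blocks([disk0, disk2, parity])
```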
A data decoder circuit used in relation to read channel circuit 110 may be, but is not limited to, a low density parity check (LDPC) decoder circuit as are known in the art. Such low density parity check technology is applicable to transmission of information over virtually any channel or storage of information on virtually any media. Transmission applications include, but are not limited to, optical fiber, radio frequency channels, wired or wireless local area networks, digital subscriber line technologies, wireless cellular, Ethernet over any medium such as copper or optical fiber, cable channels such as cable television, and Earth-satellite communications. Storage applications include, but are not limited to, hard disk drives, compact disks, digital video disks, magnetic tapes and memory devices such as DRAM, NAND flash, NOR flash, other non-volatile memories and solid state drives.
In addition, it should be noted that storage system 100 may be modified to include solid state memory that is used to store data in addition to the storage offered by disk platter 178. This solid state memory may be used in parallel to disk platter 178 to provide additional storage. In such a case, the solid state memory receives and provides information directly to read channel circuit 110. Alternatively, the solid state memory may be used as a cache where it offers faster access time than that offered by disk platter 178. In such a case, the solid state memory may be disposed between interface controller 120 and read channel circuit 110 where it operates as a pass through to disk platter 178 when requested data is not available in the solid state memory or when the solid state memory does not have sufficient storage to hold a newly written data set. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of storage systems including both disk platter 178 and a solid state memory.
Turning to
During operation, data is received by transmitter 210 where it is encoded. The encoding is a two step encoding that may be performed using a circuit similar to that discussed below in relation to
Turning to
Turning to
Turning to
Turning to
Analog to digital converter circuit 514 converts processed analog signal 512 into a corresponding series of digital samples 516. Analog to digital converter circuit 514 may be any circuit known in the art that is capable of producing digital samples corresponding to an analog input signal. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of analog to digital converter circuits that may be used in relation to different embodiments of the present invention. Digital samples 516 are provided to an equalizer circuit 520. Equalizer circuit 520 applies an equalization algorithm to digital samples 516 to yield an equalized output 525. In some embodiments of the present invention, equalizer circuit 520 is a digital finite impulse response filter circuit as are known in the art. In some cases, equalized output 525 may be received directly from a storage device in, for example, a solid state storage system. In such cases, analog front end circuit 510, analog to digital converter circuit 514 and equalizer circuit 520 may be eliminated where the data is received as a digital data input. Equalized output 525 is stored to an input buffer 553 that includes sufficient memory to maintain a number of codewords until processing of those codewords is completed through a data detector circuit 530 and three step data decoding circuit 570 including, where warranted, multiple global iterations (passes through both data detector circuit 530 and three step data decoding circuit 570) and/or local iterations (passes through three step data decoding circuit 570 during a given global iteration). An output 557 is provided to data detector circuit 530.
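As one hypothetical form of such an equalizer, a digital finite impulse response (FIR) filter computes each output as a dot product of filter taps with the most recent input samples. The tap values below are arbitrary examples, not the actual coefficients of equalizer circuit 520.

```python
# Sketch: a digital FIR filter as a stand-in for equalizer circuit 520
# (assumption: example taps; a real equalizer adapts its coefficients).

def fir(samples, taps):
    # Each output is the dot product of the taps with the most recent
    # input samples, zero-padded at the start.
    out = []
    history = [0.0] * len(taps)
    for s in samples:
        history = [s] + history[:-1]
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

# Feeding an impulse through the filter reproduces its tap values.
equalized = fir([1.0, 0.0, 0.0, 0.0], [0.5, 0.3, 0.2])
```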
Data detector circuit 530 may be a single data detector circuit or may be two or more data detector circuits operating in parallel on different codewords. Whether it is a single data detector circuit or a number of data detector circuits operating in parallel, data detector circuit 530 is operable to apply a data detection algorithm to a received codeword or data set. In some embodiments of the present invention, data detector circuit 530 is a Viterbi algorithm data detector circuit as are known in the art. In other embodiments of the present invention, data detector circuit 530 is a maximum a posteriori data detector circuit as are known in the art. Of note, the general phrases “Viterbi data detection algorithm” or “Viterbi algorithm data detector circuit” are used in their broadest sense to mean any Viterbi detection algorithm or Viterbi algorithm detector circuit or variations thereof including, but not limited to, a bi-directional Viterbi detection algorithm or a bi-directional Viterbi algorithm detector circuit. Also, the general phrases “maximum a posteriori data detection algorithm” or “maximum a posteriori data detector circuit” are used in their broadest sense to mean any maximum a posteriori detection algorithm or detector circuit or variations thereof including, but not limited to, simplified maximum a posteriori data detection algorithm and a max-log maximum a posteriori data detection algorithm, or corresponding detector circuits. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of data detector circuits that may be used in relation to different embodiments of the present invention.
In some cases, one data detector circuit included in data detector circuit 530 is used to apply the data detection algorithm to the received codeword for a first global iteration applied to the received codeword, and another data detector circuit included in data detector circuit 530 is operable to apply the data detection algorithm to the received codeword guided by a decoded output accessed from a central memory circuit 550 on subsequent global iterations.
Upon completion of application of the data detection algorithm to the received codeword on the first global iteration, data detector circuit 530 provides a detector output 533. Detector output 533 includes soft data. As used herein, the phrase “soft data” is used in its broadest sense to mean reliability data with each instance of the reliability data indicating a likelihood that a corresponding bit position or group of bit positions has been correctly detected. In some embodiments of the present invention, the soft data or reliability data is log likelihood ratio data as is known in the art. Detector output 533 is provided to a local interleaver circuit 542. Local interleaver circuit 542 is operable to shuffle sub-portions (i.e., local chunks) of the data set included as detected output and provides an interleaved codeword 546 that is stored to central memory circuit 550. Local interleaver circuit 542 may be any circuit known in the art that is capable of shuffling data sets to yield a re-arranged data set.
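A minimal illustration of log likelihood ratio soft data is sketched below. The probabilities are arbitrary examples, and the detector's actual reliability metric may differ in scaling and sign convention.

```python
import math

# Sketch: soft data as log likelihood ratios (LLRs). The sign carries
# the hard decision and the magnitude carries the reliability
# (assumption: LLR = ln(P(bit = 0) / P(bit = 1)) convention).

def llr(p_zero):
    return math.log(p_zero / (1.0 - p_zero))

confident_zero = llr(0.99)  # large positive: almost certainly a 0
confident_one = llr(0.01)   # large negative: almost certainly a 1
uncertain = llr(0.5)        # near zero: the detector has no preference
```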
Once three step data decoding circuit 570 is available, a previously stored interleaved codeword 546 is accessed from central memory circuit 550 as a stored codeword 586 and globally interleaved by a global interleaver/de-interleaver circuit 584. Global interleaver/de-interleaver circuit 584 may be any circuit known in the art that is capable of globally rearranging codewords. Global interleaver/de-interleaver circuit 584 provides a decoder input 552 to three step data decoding circuit 570. Decoder input 552 may be encoded similarly to that discussed above in relation to
Where decoded output 571 fails to converge (i.e., fails to yield the originally written data set) and a number of local iterations through three step data decoding circuit 570 exceeds a threshold, the resulting decoded output is provided as a decoded output 554 back to central memory circuit 550 where it is stored awaiting another global iteration through a data detector circuit included in data detector circuit 530. Prior to storage of decoded output 554 to central memory circuit 550, decoded output 554 is globally de-interleaved to yield a globally de-interleaved output 588 that is stored to central memory circuit 550. The global de-interleaving reverses the global interleaving earlier applied to stored codeword 586 to yield decoder input 552. When a data detector circuit included in data detector circuit 530 becomes available, a previously stored de-interleaved output 588 is accessed from central memory circuit 550 as a decoder output 548 and locally de-interleaved by a de-interleaver circuit 544. De-interleaver circuit 544 re-arranges decoder output 548 to reverse the shuffling originally performed by local interleaver circuit 542. A resulting de-interleaved output 597 is provided to data detector circuit 530 where it is used to guide subsequent detection of a corresponding data set previously received as equalized output 525.
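The requirement that de-interleaving exactly reverse the earlier interleaving can be illustrated with a generic permutation. The permutation shown is an arbitrary stand-in for the local and global shuffles actually applied.

```python
# Sketch: de-interleaving as the exact inverse of interleaving
# (assumption: an arbitrary example permutation, not the real shuffle).

def interleave(data, perm):
    # Shuffle: output position i takes the input element at perm[i].
    return [data[p] for p in perm]

def deinterleave(data, perm):
    # Reverse the shuffle: the element now at position i originally
    # lived at position perm[i], so send it back there.
    out = [None] * len(data)
    for i, p in enumerate(perm):
        out[p] = data[i]
    return out

perm = [2, 0, 3, 1]
original = ["a", "b", "c", "d"]
shuffled = interleave(original, perm)
restored = deinterleave(shuffled, perm)
```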
Alternatively, where the decoded output converges (i.e., yields the originally written data set), the resulting decoded output is provided as an output codeword 572 to a de-interleaver circuit 580 that rearranges the data to reverse both the global and local interleaving applied to the data to yield a de-interleaved output 582. De-interleaved output 582 is provided to a hard decision buffer circuit 528 that buffers de-interleaved output 582 as it is transferred to the requesting host as a hard decision output 529.
In operation, an encoded data set (e.g., a data set similar to that discussed above in relation to
Decoding result 1110 of
In a second decoding step, a medium column code (i.e., column code (Q2) generated during the encoding process when column encoding of non-exclusive payload area 420 is performed) is reconstructed. This reconstruction includes XORing strong data (SD2) with medium data (MD2), and XORing strong parity (SP2) with medium parity (MP2). Result 1120 of
Using the calculated medium column codes, erasure decoding is applied to columns to recover the syndrome for each of the failed rows (i.e., row 1112d, row 1112e, row 1112h, and row 1112l). These syndromes are shown in result 1123 of
In a third decoding step, a weak column code (i.e., column code (Q1) generated during the encoding process when column encoding of non-exclusive payload area 410 is performed) is reconstructed. This reconstruction includes XORing strong data (SD1) with data (D1), and XORing strong parity (SP1) with encoding data (P1) to yield Q1. Result 1141 of
Using the calculated weak column codes, erasure decoding is applied to columns to recover the syndrome for each of the failed rows (i.e., row 1112d). These syndromes are shown in result 1142 of
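The reconstruct-then-erasure-decode pattern used in these decoding steps can be modeled with a toy single-bit XOR parity column code. The real column codes are stronger and can recover more than one failed row per column; this sketch handles exactly one.

```python
# Toy model of the column-code step (assumption: single-bit XOR parity
# per column stands in for the real, stronger column code).

def column_code(rows):
    # Reconstructed column parity, one bit per column.
    return [sum(col) % 2 for col in zip(*rows)]

rows = [
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
]
q = column_code(rows)

# Suppose row 1 failed to converge. Erasure decoding per column treats
# that row's symbols as erasures: each erased bit is the XOR of the
# column parity with all surviving bits in that column.
failed = 1
known = [rows[i] for i in range(len(rows)) if i != failed]
recovered = [(qc + sum(col)) % 2 for qc, col in zip(q, zip(*known))]
```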
Turning to
In operation, first step data encoding circuit 610 applies strong row encoding to a first subset (D0) of data input 602 to yield strong encoded blocks (SD1, SP1, SD2, SP2, SP3). An example result 1000 from applying strong row encoding is shown in
First step data encoding circuit 610 XORs the strong parity block (SP1) with a second subset (D1) to yield a first modified parity block. Weak column encoding is applied to the first modified parity block to yield a first column code (Q1). As used herein, the phrase “weak column encoding” is used in its broadest sense to mean an encoding that generates an amount of encoding data for a direction different from row encoding of an array of data that is less than both “medium column encoding” and “strong column encoding”. By implication, the phrase “medium column encoding” is used in its broadest sense to mean an encoding that generates an amount of encoding data for a direction different from row encoding of an array of data that is greater than “weak column encoding” and less than “strong column encoding”; and the phrase “strong column encoding” is used in its broadest sense to mean an encoding that generates an amount of encoding data for a direction different from row encoding of an array of data that is greater than both “weak column encoding” and “medium column encoding”. An example result 1010 from applying the aforementioned weak column encoding is shown in
Parity data P1 is then calculated from the first column code Q1 and the strong parity code SP1 by XORing SP1 and Q1. This parity data is then stored for inclusion in the final encoded codeword. Medium row encoding is then applied to D0 and the columns including SD1 XOR D1 and Q1 to yield medium row encoded data MD2, MP2, MP3. An example result 1020 from applying the aforementioned medium row encoding is shown in
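The relation P1 = SP1 XOR Q1 is invertible, which is what lets a decoder later re-derive any one of the three quantities from the other two. A sketch with arbitrary example byte values:

```python
# Sketch of the XOR parity relation P1 = SP1 ^ Q1 (assumption: the
# byte values below are arbitrary examples, not real code output).

sp1 = 0b1011_0010   # strong parity block (example value)
q1 = 0b0110_1100    # first column code (example value)

p1 = sp1 ^ q1        # parity data stored in the final codeword

# Decoder side: because XOR is its own inverse, either input can be
# recovered from P1 and the other input.
sp1_again = p1 ^ q1
q1_again = p1 ^ sp1
```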
Second step data encoding circuit 620 assembles D0, D1, D2, P1 and P2 into an interim encoded data set with P1 and P2 distributed throughout the data set to yield a uniform sector payload. This interim encoded data set is similar to codeword 400 of
Turning to
Decoding result 1110 of
In a second decoding step (block 702), a medium column code (i.e., column code (Q2) generated during the encoding process when column encoding of non-exclusive payload area 420 is performed) is reconstructed (block 726). This reconstruction includes XORing strong data (SD2) with medium data (MD2), and XORing strong parity (SP2) with medium parity (MP2). Result 1120 of
Using the calculated medium column codes, erasure decoding is applied to columns to recover the syndrome for each of the failed rows (e.g., row 1112d, row 1112e, row 1112h, and row 1112l) (block 729). These syndromes are shown in result 1123 of
In a third decoding step (block 703), a weak column code (i.e., column code (Q1) generated during the encoding process when column encoding of non-exclusive payload area 410 is performed) is reconstructed (block 747). This reconstruction includes XORing strong data SD1 with data D1, and XORing strong parity SP1 with encoding data P1 to yield Q1. Result 1141 of
Using the calculated weak column codes, erasure decoding is applied to columns to recover the syndrome for each of the failed rows (e.g., row 1112d) (block 750). These syndromes are shown in result 1142 of
Turning to
Strong parity block SP1 is XORed with second user data set D1 to yield a first modified parity block (block 815). Weak column encoding is applied to the first modified parity block to yield a first column code (Q1) (block 820). An example result 1010 from applying the aforementioned weak column encoding (i.e., applied to the result of SD1 XOR D1 to yield Q1) is shown in
Medium row encoding is then applied to D0 and the columns including (SD1 XOR D1) and Q1 to yield medium row encoded data MD2, MP2, MP3 (block 825). An example result 1020 from applying the aforementioned medium row encoding is shown in
D0, D1, D2, P1 and P2 are assembled into an interim encoded data set with P1 and P2 distributed throughout the data set to yield a uniform sector payload (block 865). This interim encoded data set is similar to codeword 400 of
It should be noted that the various blocks discussed in the above application may be implemented in integrated circuits along with other functionality. Such integrated circuits may include all of the functions of a given block, system or circuit, or a subset of the block, system or circuit. Further, elements of the blocks, systems or circuits may be implemented across multiple integrated circuits. Such integrated circuits may be any type of integrated circuit known in the art including, but not limited to, a monolithic integrated circuit, a flip chip integrated circuit, a multichip module integrated circuit, and/or a mixed signal integrated circuit. It should also be noted that various functions of the blocks, systems or circuits discussed herein may be implemented in either software or firmware. In some such cases, the entire system, block or circuit may be implemented using its software or firmware equivalent. In other cases, one part of a given system, block or circuit may be implemented in software or firmware, while other parts are implemented in hardware.
In conclusion, the invention provides novel systems, devices, methods and arrangements for data encoding and decoding. While detailed descriptions of one or more embodiments of the invention have been given above, various alternatives, modifications, and equivalents will be apparent to those skilled in the art without departing from the spirit of the invention. Therefore, the above description should not be taken as limiting the scope of the invention, which is defined by the appended claims.
The present application claims priority to (and is a non-provisional of) U.S. Pat. App. No. 61/869,641 entitled “Systems and Methods for Enhanced Data Encoding and Decoding”, and filed Aug. 23, 2013 by Wilson et al. The entirety of the aforementioned provisional patent application is incorporated herein by reference for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
4553221 | Hyatt | Nov 1985 | A |
4805174 | Kubot | Feb 1989 | A |
5278703 | Rub et al. | Jan 1994 | A |
5278846 | Okayama | Jan 1994 | A |
5317472 | Schweitzer, III | May 1994 | A |
5325402 | Ushirokawa | Jun 1994 | A |
5351274 | Chennakeshu | Sep 1994 | A |
5392299 | Rhines | Feb 1995 | A |
5406593 | Chennakeshu | Apr 1995 | A |
5417500 | Martinie | May 1995 | A |
5450253 | Seki | Sep 1995 | A |
5513192 | Janku | Apr 1996 | A |
5523903 | Hetzler | Jun 1996 | A |
5550810 | Monogioudis et al. | Aug 1996 | A |
5550870 | Blaker | Aug 1996 | A |
5612964 | Haraszti | Mar 1997 | A |
5696504 | Oliveros | Dec 1997 | A |
5710784 | Kindred | Jan 1998 | A |
5717706 | Ikeda | Feb 1998 | A |
5719871 | Helm | Feb 1998 | A |
5802118 | Bliss | Sep 1998 | A |
5844945 | Nam et al. | Dec 1998 | A |
5898710 | Amrany | Apr 1999 | A |
5923713 | Hatakeyama | Jul 1999 | A |
5978414 | Nara | Nov 1999 | A |
5983383 | Wolf | Nov 1999 | A |
6005897 | McCallister | Dec 1999 | A |
6023783 | Divsalar | Feb 2000 | A |
6029264 | Kobayashi | Feb 2000 | A |
6065149 | Yamanaka | May 2000 | A |
6097764 | McCallister | Aug 2000 | A |
6145110 | Khayrallah | Nov 2000 | A |
6175588 | Visotsky | Jan 2001 | B1 |
6216249 | Bliss | Apr 2001 | B1 |
6216251 | McGinn | Apr 2001 | B1 |
6266795 | Wei | Jul 2001 | B1 |
6317472 | Choi | Nov 2001 | B1 |
6351832 | Wei | Feb 2002 | B1 |
6377610 | Hagenauer | Apr 2002 | B1 |
6381726 | Weng | Apr 2002 | B1 |
6393074 | Mandyam | May 2002 | B1 |
6412088 | Patapoutian et al. | Jun 2002 | B1 |
6473878 | Wei | Oct 2002 | B1 |
6535553 | Limberg et al. | Mar 2003 | B1 |
6625775 | Kim | Sep 2003 | B1 |
6643814 | Cideciyan et al. | Nov 2003 | B1 |
6697441 | Bottomley | Feb 2004 | B1 |
6747827 | Bassett et al. | Jun 2004 | B1 |
6748034 | Hattori | Jun 2004 | B2 |
6757862 | Marianetti, II | Jun 2004 | B1 |
6785863 | Blankenship | Aug 2004 | B2 |
6807238 | Rhee | Oct 2004 | B1 |
6810502 | Eidson | Oct 2004 | B2 |
6839774 | Ahn et al. | Jan 2005 | B1 |
6948113 | Shaver | Sep 2005 | B1 |
6970511 | Barnette | Nov 2005 | B1 |
6975692 | Razzell | Dec 2005 | B2 |
6986098 | Poeppelman | Jan 2006 | B2 |
7035327 | Nakajima et al. | Apr 2006 | B2 |
7047474 | Rhee | May 2006 | B2 |
7058853 | Kavanappillil et al. | Jun 2006 | B1 |
7058873 | Song | Jun 2006 | B2 |
7073118 | Greenberg | Jul 2006 | B2 |
7093179 | Shea | Aug 2006 | B2 |
7117427 | Ophir | Oct 2006 | B2 |
7133228 | Fung | Nov 2006 | B2 |
7136244 | Rothberg | Nov 2006 | B1 |
7184486 | Wu | Feb 2007 | B1 |
7191378 | Eroz | Mar 2007 | B2 |
7203887 | Eroz | Apr 2007 | B2 |
7230550 | Mittal | Jun 2007 | B1 |
7237181 | Richardson | Jun 2007 | B2 |
7308061 | Huang | Dec 2007 | B1 |
7310768 | Eidson | Dec 2007 | B2 |
7313750 | Feng | Dec 2007 | B1 |
7370258 | Iancu | May 2008 | B2 |
7415651 | Argon | Aug 2008 | B2 |
7502189 | Sawaguchi | Mar 2009 | B2 |
7523375 | Spencer | Apr 2009 | B2 |
7587657 | Haratsch | Sep 2009 | B2 |
7590168 | Raghavan | Sep 2009 | B2 |
7596196 | Liu et al. | Sep 2009 | B1 |
7646829 | Ashley | Jan 2010 | B2 |
7702986 | Bjerke | Apr 2010 | B2 |
7738202 | Zheng | Jun 2010 | B1 |
7752523 | Chaichanavong | Jul 2010 | B1 |
7779325 | Song | Aug 2010 | B2 |
7802172 | Casado | Sep 2010 | B2 |
7952824 | Dziak | May 2011 | B2 |
7957251 | Ratnakar Aravind | Jun 2011 | B2 |
7958425 | Chugg | Jun 2011 | B2 |
7996746 | Livshitz | Aug 2011 | B2 |
8018360 | Nayak | Sep 2011 | B2 |
8020069 | Feng | Sep 2011 | B1 |
8020078 | Richardson | Sep 2011 | B2 |
8161361 | Song et al. | Apr 2012 | B1 |
8201051 | Tan | Jun 2012 | B2 |
8225168 | Yu et al. | Jul 2012 | B2 |
8237597 | Liu | Aug 2012 | B2 |
8255765 | Yeo | Aug 2012 | B1 |
8261171 | Annampedu | Sep 2012 | B2 |
8291284 | Savin | Oct 2012 | B2 |
8291299 | Li | Oct 2012 | B2 |
8295001 | Liu | Oct 2012 | B2 |
8296637 | Varnica | Oct 2012 | B1 |
8370711 | Alrod | Feb 2013 | B2 |
8381069 | Liu | Feb 2013 | B1 |
8413032 | Song | Apr 2013 | B1 |
8429498 | Anholt | Apr 2013 | B1 |
8443267 | Zhong et al. | May 2013 | B2 |
8458555 | Gunnam | Jun 2013 | B2 |
8464142 | Gunnam | Jun 2013 | B2 |
8495462 | Liu | Jul 2013 | B1 |
8516339 | Lesea | Aug 2013 | B1 |
8527849 | Jakab | Sep 2013 | B2 |
8949704 | Zhang et al. | Feb 2015 | B2 |
20010010089 | Gueguen | Jul 2001 | A1 |
20010016114 | Van Gestel et al. | Aug 2001 | A1 |
20020021519 | Rae | Feb 2002 | A1 |
20020067780 | Razzell | Jun 2002 | A1 |
20020168033 | Suzuki | Nov 2002 | A1 |
20030031236 | Dahlman | Feb 2003 | A1 |
20030123364 | Nakajima et al. | Jul 2003 | A1 |
20030126527 | Kim et al. | Jul 2003 | A1 |
20030138102 | Kohn et al. | Jul 2003 | A1 |
20030147168 | Galbraith et al. | Aug 2003 | A1 |
20030188252 | Kim | Oct 2003 | A1 |
20040042436 | Terry et al. | Mar 2004 | A1 |
20040194007 | Hocevar | Sep 2004 | A1 |
20040228021 | Yamazaki | Nov 2004 | A1 |
20040264284 | Priborsky et al. | Dec 2004 | A1 |
20050047514 | Bolinth et al. | Mar 2005 | A1 |
20050149842 | Kyung | Jul 2005 | A1 |
20050210367 | Ashikhmin | Sep 2005 | A1 |
20050243456 | Mitchell et al. | Nov 2005 | A1 |
20060002689 | Yang et al. | Jan 2006 | A1 |
20060159355 | Mizuno | Jul 2006 | A1 |
20060195730 | Kageyama | Aug 2006 | A1 |
20070168834 | Eroz et al. | Jul 2007 | A1 |
20070185902 | Messinger et al. | Aug 2007 | A1 |
20070234178 | Richardson | Oct 2007 | A1 |
20070297496 | Park et al. | Dec 2007 | A1 |
20080037676 | Kyun et al. | Feb 2008 | A1 |
20080069373 | Jiang | Mar 2008 | A1 |
20080140686 | Hong | Jun 2008 | A1 |
20080304558 | Zhu et al. | Dec 2008 | A1 |
20090003301 | Reial | Jan 2009 | A1 |
20090092174 | Wang | Apr 2009 | A1 |
20090106633 | Fujiwara | Apr 2009 | A1 |
20090125780 | Taylor | May 2009 | A1 |
20090132893 | Miyazaki | May 2009 | A1 |
20090150745 | Langner et al. | Jun 2009 | A1 |
20090177852 | Chen | Jul 2009 | A1 |
20090185643 | Fitzpatrick | Jul 2009 | A1 |
20090216942 | Yen | Aug 2009 | A1 |
20090273492 | Yang et al. | Nov 2009 | A1 |
20100077276 | Okamura et al. | Mar 2010 | A1 |
20100088575 | Sharon et al. | Apr 2010 | A1 |
20100150252 | Camp | Jun 2010 | A1 |
20100172046 | Liu et al. | Jul 2010 | A1 |
20100241921 | Gunnam | Sep 2010 | A1 |
20100268996 | Yang | Oct 2010 | A1 |
20100322048 | Yang et al. | Dec 2010 | A1 |
20100325511 | Oh | Dec 2010 | A1 |
20110041040 | Su | Feb 2011 | A1 |
20110043938 | Mathew | Feb 2011 | A1 |
20110066768 | Brittner et al. | Mar 2011 | A1 |
20110167227 | Yang | Jul 2011 | A1 |
20110258508 | Ivkovic | Oct 2011 | A1 |
20110258509 | Ramamoorthy | Oct 2011 | A1 |
20110264987 | Li | Oct 2011 | A1 |
20110299629 | Luby et al. | Dec 2011 | A1 |
20110307760 | Pisek | Dec 2011 | A1 |
20110320902 | Gunnam | Dec 2011 | A1 |
20120020402 | Ibing | Jan 2012 | A1 |
20120038998 | Mathew | Feb 2012 | A1 |
20120063023 | Mathew | Mar 2012 | A1 |
20120079353 | Liikanen | Mar 2012 | A1 |
20120124118 | Ivkovic | May 2012 | A1 |
20120182643 | Zhang | Jul 2012 | A1 |
20120185744 | Varnica | Jul 2012 | A1 |
20120203986 | Strasser et al. | Aug 2012 | A1 |
20120207201 | Xia | Aug 2012 | A1 |
20120212849 | Xu | Aug 2012 | A1 |
20120236428 | Xia | Sep 2012 | A1 |
20120262814 | Li | Oct 2012 | A1 |
20120265488 | Sun | Oct 2012 | A1 |
20120317462 | Liu et al. | Dec 2012 | A1 |
20130024740 | Xia | Jan 2013 | A1 |
20130031440 | Sharon | Jan 2013 | A1 |
20130120169 | Li | May 2013 | A1 |
20130194955 | Chang | Aug 2013 | A1 |
20130198580 | Chen | Aug 2013 | A1 |
20130238955 | D'Abreu | Sep 2013 | A1 |
20130326306 | Cideciyan et al. | Dec 2013 | A1 |
Number | Date | Country |
---|---|---|
2001319433 | Nov 2001 | JP |
WO 2010059264 | May 2010 | WO |
WO 2010126482 | Nov 2010 | WO |
Entry |
---|
Casado et al., “Multiple-Rate Low-Density Parity-Check Codes with Constant Blocklength”, IEEE Transactions on Communications, Jan. 2009, vol. 57, pp. 75-83. |
Cui et al., “High-Throughput Layered LDPC Decoding Architecture”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 17, No. 4 (Apr. 2009). |
Fan et al., “Constrained coding techniques for soft iterative decoders” Proc. IEEE Global Telecommun. Conf. vol. 1b, pp. 631-637 (1999). |
Gross, “Stochastic Decoding of LDPC Codes over GF(q)”, HDPCC Workshop, Tel Aviv (Mar. 2, 2010). |
Gunnam et al., “VLSI Architectures for Layered Decoding for Irregular LDPC Codes of WiMax”, IEEE ICC Proceedings (2007). |
Hagenauer, J. et al., “A Viterbi Algorithm with Soft-Decision Outputs and its Applications”, in Proc. IEEE Globecom, pp. 47.11-47, Dallas, TX, Nov. 1989. |
Han and Ryan, “Pinning Techniques for Low-Floor Detection/Decoding of LDPC-Coded Partial Response Channels”, 5th International Symposium on Turbo Codes & Related Topics, 2008. |
Kautz, “Fibonacci Codes for Synchronization Control”, IEEE Trans. Info. Theory, vol. 11, No. 2, pp. 284-292 (Apr. 1965). |
Kschischang et al., “Factor Graphs and the Sum-Product Algorithm”, IEEE Transactions on Information Theory, vol. 47, No. 2 (Feb. 2001). |
Leduc-Primeau et al., “A Relaxed Half-Stochastic Iterative Decoder for LDPC Codes”, IEEE Communications Society, IEEE Globecom proceedings (2009). |
Lee et al., “Partial Zero-Forcing Adaptive MMSE Receiver for DS-CDMA Uplink in Multicell Environments” IEEE Transactions on Vehicular Tech. vol. 51, No. 5, Sep. 2002. |
Li et al., “Efficient Encoding of Quasi-Cyclic Low-Density Parity Check Codes”, IEEE Transactions on Communications, 53 (11), 1973-1973, 2005. |
Lim et al. “Convergence Analysis of Constrained Joint Adaptation in Recording Channels” IEEE Trans. on Signal Processing vol. 54, No. 1 Jan. 2006. |
Lin et al., “An Efficient VLSI Architecture for Nonbinary LDPC Decoders”, IEEE Transactions on Circuits and Systems II, vol. 57, Issue 1 (Jan. 2010), pp. 51-55. |
Moon et al, “Pattern-dependent noise prediction in signal-dependent Noise,” IEEE JSAC, vol. 19, No. 4 pp. 730-743, Apr. 2001. |
Moon et al., “Maximum transition run codes for data storage systems”, IEEE Trans. Magn., vol. 32, No. 5, pp. 3992-3994 (Sep. 1996). |
Patapoutian et al “Improving Re-Read Strategies by Waveform Averaging” IEEE Transactions on Mag. vol. 37 No. 6, Nov. 2001. |
Planjery et al., “Finite Alphabet Iterative Decoders, Part I: Decoding Beyond Belief Propagation on BSC”, Jul. 2012, printed from the internet Apr. 21, 2014, http://arxiv.org/pdf/1207.4800.pd. |
Richardson, T “Error Floors of LDPC Codes” Flarion Technologies Bedminster NJ 07921, tjr@flarion.com (not dated). |
Shokrollahi “LDPC Codes: An Introduction”, Digital Fountain, Inc. (Apr. 2, 2003). |
Spagnol et al., “Hardware Implementation of GF(2^m) LDPC Decoders”, IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 56, No. 12 (Dec. 2009). |
Tehrani et al., “Fully Parallel Stochastic LDPC Decoders”, IEEE Transactions on Signal Processing, vol. 56, No. 11 (Nov. 2008). |
Todd et al., “Enforcing maximum-transition-run code constraints and low-density parity check decoding”, IEEE Trans. Magn., vol. 40, No. 6, pp. 3566-3571 (Nov. 2004). |
Vasic, B., “High-Rate Girth-Eight Codes on Rectangular Integer Lattices”, IEEE Trans. Communications, vol. 52, Aug. 2004, pp. 1248-1252. |
Weon-Cheol Lee et al., “Viterbi Decoding Method Using Channel State Info. in COFDM System”, IEEE Trans. on Consumer Elect., IEEE Service Center, NY, NY, vol. 45, No. 3, Aug. 1999. |
Xiao et al., “Nested Codes With Multiple Interpretations”, retrieved from the Internet URL: http://www.ece.nmsu.edu/˜jkliewer/paper/XFKC_CISS06 (retrieved on Dec. 5, 2012). |
Yeo et al., “VLSI Architecture for Iterative Decoders in Magnetic Storage Channels”, Mar. 2001, pp. 748-755, IEEE trans. Magnetics, vol. 37, No. 2. |
Zhang et al., “Analysis of Verification-Based Decoding on the q-ary Symmetric Channel for Large q”, IEEE Trans. on Information Theory, vol. 57, No. 10 (Oct. 2011). |
Zhong et al., “Quasi Cyclic LDPC Codes for the Magnetic Recording Channel: Code Design and VLSI Implementation”, IEEE Transactions on Magnetics, v. 43, pp. 1118-1123, Mar. 2007. |
Zhong, “Block-LDPC: A Practical LDPC Coding System Design Approach”, IEEE Trans. on Circuits and Systems I: Regular Papers, vol. 5, No. 4, pp. 766-775, Apr. 2005. |
Number | Date | Country | |
---|---|---|---|
20150058693 A1 | Feb 2015 | US |
Number | Date | Country | |
---|---|---|---|
61869641 | Aug 2013 | US |