The present inventions are related to systems and methods for decoding information, and more particularly to systems and methods for data processing that includes data shuffling.
Various data transfer systems have been developed including storage systems, cellular telephone systems, and radio transmission systems. In each of these systems, data is transferred from a sender to a receiver via some medium. For example, in a storage system, data is sent from a sender (i.e., a write function) to a receiver (i.e., a read function) via a storage medium. The effectiveness of any transfer is impacted by data losses caused by various factors. In some cases, an encoding/decoding process is used to enhance the ability to detect a data error and to correct such data errors. As an example, a simple data detection and decode may be performed; however, such a simple process often lacks the capability to converge on a corrected data stream. To increase the possibility of convergence, various existing processes utilize two or more detection and decode iterations. Further, data may be shuffled to limit the impact of burst errors on the ability to converge on the proper data set. In many cases, the aforementioned systems are inefficient.
Hence, for at least the aforementioned reasons, there exists a need in the art for advanced systems and methods for data processing.
The present inventions are related to systems and methods for decoding information, and more particularly to systems and methods for data processing that includes data shuffling.
Various embodiments of the present invention provide methods for data processing that include: receiving a data input having at least a first local chunk and a second local chunk, the data input also being defined as having at least a first global chunk and a second global chunk; rearranging an order of the first local chunk and the second local chunk to yield a locally interleaved data set; storing the locally interleaved data set to a first memory, such that the first global chunk is stored to a first memory space, and the second global chunk is stored to a second memory space; accessing the locally interleaved data set from the first memory; and storing the locally interleaved data set to a second memory. The first global chunk is stored to a third memory space defined at least in part based on the first memory space, and the second global chunk is stored to a fourth memory space defined at least in part based on the second memory space.
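For purposes of illustration only, the following is a minimal sketch of the memory-space relationship described above, assuming a simple (row, column) addressing model in which each global chunk occupies one column; the function names, the Python dictionaries standing in for the first and second memories, and the new_rows mapping are illustrative assumptions rather than elements of the embodiments.

```python
# Minimal sketch: first/second memory spaces share a row of the first memory,
# and the third/fourth memory spaces keep the same columns in the second memory.

def store_to_first_memory(first_memory, row, global_chunks):
    """Store a locally interleaved data set so each global chunk occupies one column."""
    for col, chunk in enumerate(global_chunks):
        first_memory[(row, col)] = chunk            # first and second memory spaces

def store_to_second_memory(first_memory, row, num_cols, second_memory, new_rows):
    """Store to the second memory; each target space keeps the column of its source space."""
    for col in range(num_cols):
        second_memory[(new_rows[col], col)] = first_memory[(row, col)]   # third and fourth memory spaces
```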
In some instances of the aforementioned embodiments, the first memory space is a first column and a first row, and the second memory space is a second column and the first row. In some such instances, the first row is a randomly selected row. In various of such instances, the third memory space is a third column and a second row, and the fourth memory space is a fourth column and a third row. In some such instances, the second row is randomly selected, and the third row is randomly selected. In other such instances, the third column is selected based at least in part on the first column, and the fourth column is selected based at least in part on the second column. In yet other such instances, the third column is the same as the first column, and the fourth column is the same as the second column.
In one or more instances of the aforementioned embodiments, the methods further include: applying a data detection algorithm to a data set to yield the data input; accessing a globally interleaved data set from a fifth memory space in the second memory; and applying a data decode algorithm to the globally interleaved data set. In some such instances, the third memory space is a third column and a second row, the fourth memory space is a fourth column and a third row, and the first memory space is the second row including at least the first global chunk. In various such instances, the data detection algorithm may be, but is not limited to, a maximum a posteriori data detection algorithm, or a Viterbi algorithm data detection algorithm. In some cases, the data decode algorithm is a low density parity check algorithm.
Other embodiments of the present invention provide data processing systems that include: a local interleaver circuit and a column controlled interleaver circuit. The local interleaver circuit is operable to: receive a data input that includes at least a first local chunk and a second local chunk, rearrange an order of the first local chunk and the second local chunk to yield a locally interleaved data set, and write the locally interleaved data set to a first row of a first memory. The locally interleaved data set includes at least a first global chunk stored to a first column of the first memory, and a second global chunk stored to a second column of the first memory. The column controlled interleaver circuit is operable to: access the locally interleaved data set from the first row of the first memory, store the first global chunk to the first column and a second row of a second memory, store the second global chunk to the second column and a third row of the second memory.
In some instances of the aforementioned embodiments, the data processing system is implemented as, but is not limited to, a storage device or a receiving device. In various instances of the aforementioned embodiments, the data processing system is implemented as part of an integrated circuit. In one or more instances of the aforementioned embodiments, the first row of the first memory is randomly selected, the second row of the second memory is randomly selected, and the third row of the second memory is randomly selected. In various instances of the aforementioned embodiments, the first column of the second memory is selected to correspond to the first column of the first memory, and the second column of the second memory is selected to correspond to the second column of the first memory. In some instances of the aforementioned embodiments, the system further includes: a data detector circuit and a data decoder circuit. The data detector circuit is operable to apply a data detection algorithm to a data set to yield the data input. The data decoder circuit is operable to apply a data decode algorithm to a globally interleaved data set generated by accessing the second row of the second memory including the first global chunk.
This summary provides only a general outline of some embodiments of the invention. Many other objects, features, advantages and other embodiments of the invention will become more fully apparent from the following detailed description, the appended claims and the accompanying drawings.
A further understanding of the various embodiments of the present invention may be realized by reference to the figures which are described in remaining portions of the specification. In the figures, like reference numerals are used throughout several figures to refer to similar components. In some instances, a sub-label consisting of a lower case letter is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sub-label, it is intended to refer to all such multiple similar components.
FIGS. 3a and 3b show an example of a two step global interleaving process in accordance with some embodiments of the present invention;
The present inventions are related to systems and methods for decoding information, and more particularly to systems and methods for data processing that includes data shuffling.
Various embodiments of the present invention provide for shuffling data between operations of a data detector circuit and a data decoder circuit. The shuffling process, also referred to herein as “interleaving”, includes both a local interleaving and a global interleaving. As used herein, the phrase “local interleaving” or “local shuffling” is used in its broadest sense to mean rearranging data within a defined codeword. Also, as used herein, the phrase “global interleaving” or “global shuffling” is used in its broadest sense to mean rearranging data across multiple codewords. As used herein, the terms “de-interleaving” and “de-shuffling” are used in their broadest sense to mean reversing the process of interleaving and shuffling. In some of the embodiments discussed herein, a combination of local interleaving and global interleaving is used to minimize the effects of burst errors in a given codeword upon the data decoding process. A two step global interleaving and corresponding de-interleaving are used that reduce the amount of circuitry needed when compared with a single step global interleaving.
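As an informal illustration of the terminology above, the following sketch contrasts the two kinds of shuffling; the chunk length and permutations are arbitrary example values, not parameters taken from the embodiments.

```python
# Local interleaving rearranges chunks within one codeword; global interleaving
# rearranges chunks drawn from multiple codewords.

def local_shuffle(codeword, chunk_len, perm):
    """Local interleaving: rearrange chunks within a single codeword."""
    chunks = [codeword[i:i + chunk_len] for i in range(0, len(codeword), chunk_len)]
    return [sym for idx in perm for sym in chunks[idx]]

def global_shuffle(codeword_chunks, perm):
    """Global interleaving: rearrange chunks drawn from multiple codewords."""
    pool = [chunk for codeword in codeword_chunks for chunk in codeword]
    return [pool[idx] for idx in perm]

codeword = list(range(8))
print(local_shuffle(codeword, 2, [3, 1, 0, 2]))   # [6, 7, 2, 3, 0, 1, 4, 5]
```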
Turning to
Analog to digital converter circuit 114 converts processed analog signal 112 into a corresponding series of digital samples 116. Analog to digital converter circuit 114 may be any circuit known in the art that is capable of producing digital samples corresponding to an analog input signal. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of analog to digital converter circuits that may be used in relation to different embodiments of the present invention. Digital samples 116 are provided to an equalizer circuit 120. Equalizer circuit 120 applies an equalization algorithm to digital samples 116 to yield an equalized output 125. In some embodiments of the present invention, equalizer circuit 120 is a digital finite impulse response filter circuit as are known in the art. In some cases, equalizer 120 includes sufficient memory to maintain one or more codewords until a data detector circuit 130 is available for processing.
Equalized output 125 is provided to data detector circuit 130 that is operable to apply a data detection algorithm to a received codeword, and in some cases can process two or more codewords in parallel. In some embodiments of the present invention, data detector circuit 130 is a Viterbi algorithm data detector circuit as are known in the art. In other embodiments of the present invention, data detector circuit 130 is a maximum a posteriori data detector circuit as are known in the art. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize a variety of data detector circuits that may be used in relation to different embodiments of the present invention. Data detector circuit 130 is started based upon availability of a codeword from either equalizer circuit 120 or efficient interleaving/de-interleaving circuit 140.
Data detector circuit 130 applies the data detection algorithm to either a codeword received as equalized output 125 or to a codeword received as de-interleaved output 197 from efficient interleaving/de-interleaving circuit 140. The result of applying the data detection algorithm is a detected output 195 that is provided to efficient interleaving/de-interleaving circuit 140. When a detected output 195 is ready, it is stored to a central memory circuit 150 where it awaits processing by a data decoder circuit 170. In some cases, detected output 195 is log likelihood ratio data. Before being stored to central memory circuit 150, detected output 195 is processed through local interleaver circuit 142 that shuffles sub-portions (i.e., local chunks) of the codeword included as detected output 195 and provides an interleaved codeword 146 that is stored to central memory circuit 150.
Subsequent to processing by local interleaver circuit 142, the local chunks are placed in a different order. This rearranging increases the randomness of the data and thereby mitigates the effect of any burst errors. In prior art systems, the write operation of interleaved codeword 146 to central memory circuit 150 involves writing one interleaved codeword 146 after another on a row by row basis into central memory circuit 150, with the global interleaving done when the data is transferred out of central memory circuit 150. In contrast, in efficient interleaving/de-interleaving circuit 140, when writing interleaved codeword 146 to central memory circuit 150, each instance of interleaved codeword 146 is written to a random row location in central memory circuit 150. The random row mapping may be done based upon a random number generator limited to row numbers in central memory circuit 150 that are known to be available. In this way, a random row write does not overwrite needed data, but rather is limited to vacated row locations. In some cases, the row mapping function is programmed into a look up table (not shown). This is the first step of a two step global interleaving process.
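The following is a rough sketch of this first global interleaving step, assuming the central memory is modeled as a list of rows plus a set of vacant row indices and that the row mapping is recorded in a lookup table; the class and method names are illustrative only.

```python
import random

class CentralMemory:
    def __init__(self, num_rows):
        self.rows = [None] * num_rows
        self.vacant = set(range(num_rows))
        self.row_map = {}                               # codeword id -> selected row

    def write_random_row(self, codeword_id, interleaved_codeword):
        """Write a locally interleaved codeword to a randomly selected vacant row."""
        row = random.choice(sorted(self.vacant))        # limited to rows known to be available
        self.vacant.remove(row)
        self.rows[row] = interleaved_codeword
        self.row_map[codeword_id] = row
        return row

    def release_row(self, row):
        """Mark a row as vacated once its codeword has been transferred out."""
        self.rows[row] = None
        self.vacant.add(row)
```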
A ping/pong memory circuit 165 is used to pull a global interleaved data set 162 from central memory circuit 150 for data decoder circuit 170 by way of column controlled interleaver/de-interleaver circuit 160. Once data decoder circuit 170 is available, a global interleaved codeword 167 is pulled from ping/pong memory circuit 165 and data decoder circuit 170 applies a data decode algorithm to the received codeword. In some embodiments of the present invention, the data decode algorithm is a low density parity check algorithm as are known in the art. Based upon the disclosure provided herein, one of ordinary skill in the art will recognize other decode algorithms that may be used in relation to different embodiments of the present invention. As the data decode algorithm completes on a given codeword, the completed codeword is written back as a decoded output 169 to ping/pong memory circuit 165. Once the write back to ping/pong memory circuit 165 is complete, a corresponding codeword 164 is transferred to central memory circuit 150 by way of column controlled interleaver/de-interleaver circuit 160.
When a codeword is transferred from central memory circuit 150 as a partially globally interleaved codeword 152, column controlled interleaver/de-interleaver circuit 160 again modifies the row into which a given global chunk is placed. In some cases, a global chunk is the same size as the local chunks, while in other cases the global chunks differ in size from the local chunks. Of note, when transferring data from central memory circuit 150 to ping/pong memory circuit 165, column controlled interleaver/de-interleaver circuit 160 changes the row into which a given global chunk is written, but maintains the column. The row into which a global chunk is placed may be randomly selected or selected based upon a mapping scheme. Thus, a global chunk is written to the same column in ping/pong memory circuit 165 from which it was pulled in central memory circuit 150. By maintaining consistent columns between a location in central memory circuit 150 and the corresponding location in ping/pong memory circuit 165, a layer of multiplexers may be eliminated, yielding a more efficient global interleaving/de-interleaving with a corresponding reduction in power consumption compared with a global interleaving/de-interleaving that allows randomly assigned columns. This process of modifying the rows while maintaining consistent column location is shown in
When codeword 164 is written from ping/pong memory circuit 165 to central memory circuit 150, column controlled interleaver/de-interleaver circuit 160 reverses the row modification applied when the data was originally written from central memory circuit 150 to ping/pong memory circuit 165. This reversal yields a partially globally interleaved codeword 154 that is written to central memory circuit 150. When data detector circuit 130 becomes free, a corresponding partially globally interleaved codeword 148 is provided to data detector circuit 130 as a de-interleaved codeword 197 by a local de-interleaver circuit 144. Local de-interleaver circuit 144 reverses the processes originally applied by local interleaver circuit 142. Once data detector circuit 130 completes application of the data detection algorithm to de-interleaved codeword 197, the result is provided as detected output 195.
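A compact sketch of the column-controlled transfer and its reversal might look as follows, assuming both memories are addressed as (row, column) dictionaries and that the destination row for each column is either random or supplied by a mapping scheme; the helper names and the free-row bookkeeping are illustrative assumptions, not a description of the actual circuits.

```python
import random

def central_to_pingpong(central, src_row, num_cols, pingpong, free_rows_per_col):
    """Move each global chunk to a new row of the ping/pong memory, keeping its column."""
    row_map = {}
    for col in range(num_cols):
        dst_row = random.choice(free_rows_per_col[col])     # or a programmed mapping
        free_rows_per_col[col].remove(dst_row)
        pingpong[(dst_row, col)] = central[(src_row, col)]  # column preserved, row changed
        row_map[col] = dst_row
    return row_map

def pingpong_to_central(pingpong, row_map, dst_row, central):
    """Reverse the row modification when the decoded codeword is written back."""
    for col, src_row in row_map.items():
        central[(dst_row, col)] = pingpong[(src_row, col)]
```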
Where data decoder circuit 170 converges (i.e., results in the originally written data), the resulting decoded data is provided as a hard decision output 172 to a de-interleaver circuit 180. De-interleaver circuit 180 rearranges the data to reverse both the global and local interleaving applied to the data to yield a de-interleaved output 182. De-interleaved output 182 is provided to a hard decision output circuit 190. Hard decision output circuit 190 is operable to re-order codewords that may complete out of order back into their original order. The originally ordered codewords are then provided as a hard decision output 192.
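As a brief illustration of the reordering performed by hard decision output circuit 190, the following sketch buffers out-of-order completions and releases them in their original sequence; the generator-based buffer is an illustrative assumption, not a description of the actual circuit.

```python
def reorder_completions(completions):
    """Yield (index, codeword) pairs in original order as out-of-order results arrive."""
    pending = {}
    next_idx = 0
    for idx, codeword in completions:          # completions arrive in convergence order
        pending[idx] = codeword
        while next_idx in pending:
            yield next_idx, pending.pop(next_idx)
            next_idx += 1

# Example: codewords 2 and 1 converge before codeword 0.
print(list(reorder_completions([(2, "c2"), (1, "c1"), (0, "c0")])))
# [(0, 'c0'), (1, 'c1'), (2, 'c2')]
```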
Turning to
It is determined whether a data detector circuit is available (block 420). Where a data detector circuit is available (block 420), a data detection algorithm is applied to the equalized output, guided by a de-interleaved codeword where such a de-interleaved codeword corresponding to the equalized output is available (i.e., on the second and later iterations through the data detector circuit and the data decoder circuit). This process yields a detected output (block 425). In some embodiments of the present invention, the data detection algorithm is a Viterbi algorithm as are known in the art. In other embodiments of the present invention, the data detection algorithm is a maximum a posteriori data detection algorithm as are known in the art. Local chunks in the detected output are re-arranged or shuffled to yield a locally interleaved data set (block 430).
A row of a central memory is randomly selected (block 435), and the locally interleaved data set is stored to the selected row (block 440). The process of writing the locally interleaved data set to a randomly selected row of the central memory completes the first step of a two step global interleaving process. It is then determined whether a partially de-interleaved data set is available for use in the data detection process (block 445). Where a partially de-interleaved data set is available (block 445), the partially de-interleaved data set is accessed from the central memory (block 450) and is de-interleaved to yield the de-interleaved data set for use in guiding the detection process (block 455). De-interleaving the partially de-interleaved data set is the reverse of the process described above in relation to
In parallel to the previously discussed processing, it is determined whether a data decoder circuit is available (block 460). Where the data decoder circuit is available (block 460), a previously stored locally interleaved data set is accessed from the central memory (block 465). A first row in a second memory having an available column location corresponding to the column of a first global chunk of the locally interleaved codeword is selected, and a second row in the second memory having an available column location corresponding to the column of a second global chunk of the locally interleaved codeword is selected (block 470). The first global chunk of the locally interleaved data set is written to the first selected row and its corresponding column in the second memory, and the second global chunk of the locally interleaved data set is written to the second selected row and its corresponding column in the second memory (block 475). The process of writing the global chunks to the selected rows and columns of the second memory completes the second step of the two step global interleaving process. An example of this second step is shown in
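A short sketch of the row selection and write of blocks 470 and 475 might look as follows, assuming availability is tracked per (row, column) cell of the second memory; the occupancy set and helper names are illustrative only.

```python
def select_row_with_free_column(occupied, num_rows, col):
    """Select a row of the second memory whose cell in the given column is available."""
    for row in range(num_rows):
        if (row, col) not in occupied:
            return row
    raise RuntimeError("no available row for column %d" % col)

def write_global_chunks(chunks_by_col, occupied, num_rows, second_memory):
    """Write each global chunk to its selected row while preserving its column."""
    for col, chunk in chunks_by_col.items():
        row = select_row_with_free_column(occupied, num_rows, col)
        occupied.add((row, col))
        second_memory[(row, col)] = chunk
```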
It is determined whether the data decode algorithm converged (i.e., the original data set is identified) (block 485). Where the data decode algorithm converged (block 485), the decoded output is provided as a data output (block 499). Otherwise, where the data decode algorithm failed to converge (block 485), the decoded output is partially de-interleaved (block 490). This partial de-interleaving includes reversing the processes discussed above in relation to
Turning to
Turning to
In a typical read operation, read/write head assembly 676 is accurately positioned by motor controller 668 over a desired data track on disk platter 678. Motor controller 668 both positions read/write head assembly 676 in relation to disk platter 678 and drives spindle motor 672 by moving read/write head assembly 676 to the proper data track on disk platter 678 under the direction of hard disk controller 666. Spindle motor 672 spins disk platter 678 at a determined spin rate (RPMs). Once read/write head assembly 676 is positioned adjacent the proper data track, magnetic signals representing data on disk platter 678 are sensed by read/write head assembly 676 as disk platter 678 is rotated by spindle motor 672. The sensed magnetic signals are provided as a continuous, minute analog signal representative of the magnetic data on disk platter 678. This minute analog signal is transferred from read/write head assembly 676 to read channel circuit 610 via preamplifier 670. Preamplifier 670 is operable to amplify the minute analog signals accessed from disk platter 678. In turn, read channel circuit 610 decodes and digitizes the received analog signal to recreate the information originally written to disk platter 678. This data is provided as read data 603 to a receiving circuit. A write operation is substantially the opposite of the preceding read operation, with write data 601 being provided to read channel circuit 610. This data is then encoded and written to disk platter 678.
During a read operation, data received from preamplifier circuit 670 is converted from an analog signal to a series of corresponding digital samples, and the digital samples are equalized to yield an equalized output. The equalized output is then provided to a data processing circuit including both a data detector circuit and a data decoder circuit. Data is passed between the data decoder and data detector circuit via an efficient interleaving/de-interleaving circuit. The efficient interleaving/de-interleaving circuit may be implemented similar to that discussed above in relation to
It should be noted that storage system 600 may be integrated into a larger storage system such as, for example, a RAID (redundant array of inexpensive disks or redundant array of independent disks) based storage system. It should also be noted that various functions or blocks of storage system 600 may be implemented in either software or firmware, while other functions or blocks are implemented in hardware.
It should be noted that the various blocks discussed in the above application may be implemented in integrated circuits along with other functionality. Such integrated circuits may include all of the functions of a given block, system or circuit, or only a subset of the block, system or circuit. Further, elements of the blocks, systems or circuits may be implemented across multiple integrated circuits. Such integrated circuits may be any type of integrated circuit known in the art including, but not limited to, a monolithic integrated circuit, a flip chip integrated circuit, a multichip module integrated circuit, and/or a mixed signal integrated circuit. It should also be noted that various functions of the blocks, systems or circuits discussed herein may be implemented in either software or firmware. In some such cases, the entire system, block or circuit may be implemented using its software or firmware equivalent. In other cases, one part of a given system, block or circuit may be implemented in software or firmware, while other parts are implemented in hardware.
In conclusion, the invention provides novel systems, devices, methods and arrangements for data processing. While detailed descriptions of one or more embodiments of the invention have been given above, various alternatives, modifications, and equivalents will be apparent to those skilled in the art without departing from the spirit of the invention. Therefore, the above description should not be taken as limiting the scope of the invention, which is defined by the appended claims.