Embodiments of the invention generally relate to the processing of digital data streams with error management. More particularly, the invention relates to processing of digital data streams with error management in Flash storage media.
Writing data to, erasing data from, and reading data from memory cells can introduce noise into the process, which can result in errors in the data read from the memory cells. To ensure that the data is error free following a read operation, error correction techniques are employed. For example, error correction codes (ECC) are used to encode the data before it is written to the memory cells, and the encoded data are then decoded following the read operation. One code that can correct more than one error in data is, for example, the Bose-Chaudhuri-Hocquenghem (BCH) code. With ECC, redundant information is stored or transmitted alongside the regular information-bearing data, permitting an ECC decoder to deduce the originally transmitted or stored information even in the presence of errors.
Depending on the number of error corrections desired, BCH codes take up a certain amount of area and consume a certain amount of power. In order to provide a greater number of error corrections, more implementation space for BCH encoders is required and more power must be consumed during operation. Thus, limitations exist with conventional BCH encoding technology.
These drawings and the associated description herein are provided to illustrate specific embodiments of the invention and are not intended to be limiting.
In this description, reference is made to the drawings in which like reference numerals may indicate identical or functionally similar elements. The drawings referred to in this description should not be understood as being drawn to scale unless specifically noted.
Various embodiments are described below, with reference to detailed illustrative embodiments, in the context of variable error bit (“T”) Bose, Chaudhuri, and Hocquenghem (BCH) encoding. It will be apparent from the description provided herein that the systems, apparatuses and methods can be embodied in a wide variety of forms. Consequently, the specific structural and functional details disclosed herein are representative and do not limit the scope of embodiments of the present technology.
The following definitions and examples may be helpful in understanding the discussion below. The examples are not intended to be limiting.
ECC: Error Correction Coding is a class of techniques in which redundant information (parity) is added to information bits in such a way that if errors are subsequently introduced, the original information bits can be recovered. ECC can also stand for error correction code, corresponding to the parity symbols themselves. An ECC has a correction capability, which represents its ability to correct errors. In the simplest form, the ability to correct errors may be limited to a certain number of error bits (correctable errors) T per ECC code word length.
BCH code: Corrects up to a fixed number T of bit errors in a forward error correction (FEC) block; BCH codes are constructed using finite field arithmetic-based encoding and decoding. Standard BCH encoding and decoding are understood by those skilled in the art. A more comprehensive discussion of error control techniques, including BCH coding, can be found in the book, "Error Control Coding" (Second Edition) by Shu Lin and Daniel J. Costello, Jr., published in 2004 by Pearson Prentice Hall (Upper Saddle River, N.J.).
Two examples illustrating the expression for a BCH FEC block are shown below. Example two is more applicable to a flash memory situation in which “N”, the total number of bits in a packet, is large. Definitions of the variables used in examples one and two are as follows:
“K”: Maximum number of data bits.
"N": Total number of bits in the packet (i.e. K+P ["P" is defined shortly]). Of note, N is related to "m" (defined shortly) as follows: 2^m − 1 = N (the maximum total number of bits that can be supported in a FEC block).
“T”: Number of correctable errors in the packet.
“m”: The number of parity bits required per T.
“P=m*T”: Number of parity bits needed, to be added to a FEC block.
(N,K) or (N,K,T): These notational forms are used periodically in the following discussion and are short-hand forms of describing FEC block parameters.
Example One:
When N=255, m=8 (supporting up to N=255), and T=4,
then P=8×4=32 (bits of parity needed to be added to the FEC block);
K=Maximum number of data bits, thus K=255−32=223, i.e. (K=N−bits of parity needed to be added to the FEC block);
Data size can be any number between 1 and 223 bits; and
the BCH FEC block is then expressed as (255, 223, 4).
Example Two:
When K=33792, m is selected as 16, and T=64, then P=16×64=1024 bits of parity need to be added to the FEC block.
The BCH FEC block is then expressed as (34816, 33792, 64), whereby N=34816.
Of note, usually, K and T are defined by the application and then the appropriate m is selected, as is done in Example Two. Additionally, as is also shown in Example Two, in some cases, very large T and m values are required (e.g., in flash device use cases).
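The parameter arithmetic in Examples One and Two can be condensed into a few lines of Python; the helper name bch_params is illustrative only and not part of any standard library.

```python
# Sketch of the BCH FEC block parameter arithmetic from Examples One and Two.

def bch_params(m, t, k=None):
    """Return (N, K, T) for a BCH code with m parity bits per correctable error."""
    n_max = 2**m - 1          # maximum FEC block length supported by this m
    p = m * t                 # P = m*T parity bits added to the FEC block
    if k is None:
        k = n_max - p         # Example One: K derived from the maximum N
        return (n_max, k, t)
    return (k + p, k, t)      # Example Two: N derived from the given K

# Example One: N = 255, m = 8, T = 4
print(bch_params(8, 4))             # -> (255, 223, 4)
# Example Two: K = 33792, m = 16, T = 64
print(bch_params(16, 64, k=33792))  # -> (34816, 33792, 64)
```

The first call reproduces the (255, 223, 4) block and the second the (34816, 33792, 64) block described above.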
The discussion will begin with a description of conventional approaches for BCH encoding and the limitations involved.
When a host processor issues write and read commands to a memory system that includes one or more memory chips, data is written to the memory cells of the memory chips. The introduction of undesirable errors occurs when this transmitted data is changed by the environment, thus changing a value of a bit from an intended value to an unintended value. Such undesirable errors can be introduced by any type of memory cell that is placed in communication with a memory controller.
More particularly, data (e.g., images, email) is transmitted through communication channels to one or more memory chips. This transmitted data is essentially an ordered set of ones and zeros, and it tends to become corrupted as it moves through the communication channels. For instance, if cell phone A transmits an image (message) to cell phone B, it is the movement of the image data through a communication channel that introduces errors into the data. Similarly, when information is written to flash memory stored on a chip and the data that is read back is not the same as the data that was intended to be written, the data is said to contain errors and corruption has occurred.
To counter this corruption, a method called error correction coding (ECC) is performed. BCH coding is a type of ECC. FEC is a technique used for controlling errors in data transmission over unreliable or noisy communication channels through ECC, such as BCH coding. The central idea of FEC is that the sender encodes the message in a redundant way by using an error correcting code. Thus, in FEC, the redundancy allows the receiver to detect a limited number of errors that may occur anywhere in the message, and to correct these errors without retransmission of the message. However, such error identification costs a fixed, higher forward channel bandwidth, since additional bits must be transmitted along with the data. FEC is therefore applied in situations where retransmissions are costly or impossible, such as one-way communication links and transmission to multiple receivers in multicast. Most telecommunication systems use a fixed channel code that is designed to tolerate the expected worst-case bit error rate. However, some systems adapt to the given channel error conditions by using a variety of FEC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or removing them when they are not needed.
One main type of FEC code is the classical block code. Block codes work on fixed-size blocks (packets) of bits or symbols of a predetermined size. Practical block codes can generally be hard-decoded in time polynomial in their block length. The aforementioned BCH code is one such type of block code (and thus one type of error correction technology).
BCH codes are used in applications such as, but not limited to, satellite communications, compact disc players, DVDs, disk drives, solid-state drives and two-dimensional bar codes. Additionally, BCH encoders and decoders are used in conjunction with flash devices. Flash devices require the ability to support a flexible coding rate since channel conditions on flash devices are highly variable in at least the following use case scenarios, as known in the art: device aging; retention requirements; type of flash memory; page type; wear leveling; and die to die variations. The term, “coding rate”, refers to the amount of data that is to be transmitted divided by the full size of the block that is transmitted, including the amount of parity bits, P, that are attached. (Coding Rate=K/N.) Thus, if 1,000 bits of data are to be transmitted, along with an attached 100 bits of parity, then the coding rate would be 1,000 divided by 1,100, or 91%.
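The coding-rate arithmetic above can be checked with a short sketch (the function name coding_rate is ours):

```python
def coding_rate(k, p):
    """Coding Rate = K / N, where N = K + P (data bits over total block bits)."""
    return k / (k + p)

# 1,000 data bits with 100 attached parity bits -> 1000/1100, about 91%
print(round(coding_rate(1000, 100) * 100))  # -> 91
```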
In a standard BCH application, the value of T is selected based on an understanding of the expected number of errors that occur during data reception. Higher values of T are required for higher error rate systems. In general, the coding rate decreases as the error rate and the value of T increase. Additionally, higher settings of T increase the area of an encoder's circuit, increase the overall FEC block size, and increase the area and power consumption of the decoder. Conventional technology often requires an overabundance of Ts for any expected bit-error rate.
Flash memory storage presents a challenge to the conventional standard of providing a predetermined number of parity bits, P, to a FEC block during BCH coding. Because the number of parity bits needed by flash memory storage is variable, different values of P must be added to different FEC blocks. In one instance, 600 parity bits, 700 parity bits, 800 parity bits, and 900 parity bits may each be utilized. The conventional BCH encoder is not well equipped to operate with variable parity bit values. For example, conventionally, every time the number of parity bits, P, on a coding block is changed from, say, 700 parity bits for FEC block A to 900 parity bits for FEC block B, an entirely new BCH encoder is needed. Thus, conventionally, with a flash device, since a variable number of parity bits, P, is used, many different BCH encoders are also used, and the conventional circuit becomes increasingly large and inefficient. Of note, a group of individual FEC blocks (e.g., FEC block A, FEC block B, etc.) has a predetermined P, and each individual FEC block has its own setting of P (for example, FEC block A may have a predetermined P of 700 and FEC block B may have a predetermined P of 900).
An overview of a conventional BCH encoding method, as is known to those skilled in the art, and occurring at a conventional BCH encoder is presented as follows:
Step 1:
Data bits d = [d_0, d_1, . . . , d_{K-1}] are first represented in polynomial form as follows: d(x) = d_0 + d_1·x + . . . + d_{K-1}·x^{K-1}.
Here, d(x) is a large polynomial with binary coefficients, which has the input data bits as its binary coefficients.
Step 2:
x^{N-K}·d(x) is then divided by the generator polynomial of a T error correcting (N,K) BCH code:
g(x) = g_0 + g_1·x + . . . + g_{N-K}·x^{N-K}.
g(x) is the lowest order polynomial over GF(2) that has α, α^2, α^3, . . . , α^{2T} as its roots, where α is a primitive element in the Galois Field GF(2^m). N−K is the total number of bits in a packet minus the number of data bits, which equals the parity size. The generator polynomial is the least common multiple of the minimal polynomials associated with the α^j roots, that is:
g(x) = Π g_i(x).
That is, the generator polynomial g(x) is the product of the distinct minimal polynomials g_i(x).
BCH codes rely on the mathematical structure of extended Galois Fields; Galois Fields (GF) are known to at least those skilled in the art of BCH coding.
Step 3:
The remainder r(x) of the division in step 2 above is then calculated. r(x) represents the BCH parity bits in polynomial form. The remainder is computed with respect to the known generator polynomial g(x).
Step 4:
The BCH code word C(x) polynomial is constructed as:
C(x) = r(x) + x^{N-K}·d(x).
In general, the more parity bits that are included in the transmitted message, the stronger the correction of that message will be.
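Steps 1 through 4 can be sketched as GF(2) arithmetic on Python integers, where bit i of an integer holds the coefficient of x^i. This is an illustrative toy, not the encoder of the claimed invention; the small (7,4) generator x^3 + x + 1 is chosen only so the arithmetic can be checked by hand.

```python
# Toy systematic BCH encoding over GF(2), polynomials as integer bit masks.

def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (XOR-based long division)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift
    return dividend

def bch_encode(data, g, n, k):
    """Steps 2-4: r(x) = x^(N-K) d(x) mod g(x); C(x) = r(x) + x^(N-K) d(x)."""
    shifted = data << (n - k)   # Step 2: multiply d(x) by x^(N-K)
    r = gf2_mod(shifted, g)     # Step 3: parity bits in polynomial form
    return shifted | r          # Step 4: systematic code word C(x)

g = 0b1011                      # x^3 + x + 1, generator of a (7,4) code
codeword = bch_encode(0b1101, g, 7, 4)
print(bin(codeword))            # -> '0b1101001' (data bits followed by parity)
```

As a sanity check, a valid code word leaves remainder zero when divided by g(x), which is exactly how the decoder detects errors.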
A central XOR array 102 is shown and is arranged to receive input 108 from a Mux (multiplexer) (not shown), the input 108 being exactly D bits wide. Once the D value is selected, the array can generally support only that one selected setting. Of note, the possible supported D in a single clock cycle for any encoding device ranges from one to K data bits, and ideally K divided by D will be an integer. An encoder receives a total of K values at its input and then presents K+P values at its output. For BCH encoding, the first K input and K output values are identical. After the K values are output, the P values are generated.
For example, in one instance, the m*T parity bits are set to zero at the start of encoding. D bits of data come into the XOR array 102, and the XOR array 102 updates the m*T parity bits. The central XOR array 102 sends the updated m*T parity bits 110 to the parity delay shift register 104. The parity delay shift register 104 not only stores the updated m*T parity bits 110 but also, according to conventional types commercially available, enables periodic shifting along the register (a cascade of flip flops making up the parity delay shift register 104) such that the inputs are sequentially added to modify the stored data accordingly.
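The parity update performed by the XOR array and shift register can be modeled serially (D = 1 bit per step) as a dividing linear feedback shift register; a D-bit-wide XOR array simply performs D such steps per clock. The function name and bit conventions below are ours, assuming polynomials stored as integers with bit i the coefficient of x^i.

```python
# Serial model of the parity register update: one data bit per step.

def lfsr_parity(data_bits, g, p_len):
    """Feed data bits MSB-first through a division LFSR; return P parity bits."""
    parity = 0                                    # m*T parity bits start at zero
    for bit in data_bits:
        feedback = bit ^ (parity >> (p_len - 1))  # tap at the register's top bit
        parity = (parity << 1) & ((1 << p_len) - 1)
        if feedback:
            parity ^= g & ((1 << p_len) - 1)      # XOR in the generator taps
    return parity

# For g(x) = x^3 + x + 1, feeding data 1101 leaves parity 001,
# matching the remainder of x^3*d(x) mod g(x) computed by long division.
print(bin(lfsr_parity([1, 1, 0, 1], 0b1011, 3)))  # -> '0b1'
```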
Conventionally, a plurality of central XOR arrays, such as XOR arrays Ta-Te, are provided, each of which is arranged for a particular supported T (the predetermined number of correctable errors in the packet). In other words, for each supported T, conventionally, there will be a different central XOR array 102. For example, if five values of T are supported, there will be five XOR arrays, one arranged for each of the five Ts. Generally, there is little sharing between these different encoder XOR arrays while BCH encoding is occurring. However, the plurality of XOR arrays Ta-Te will generally be compiled into a single larger XOR array, such as single larger XOR array 205, that combines the operation of the plurality of XOR arrays. The single larger XOR array 205 uses a larger area and also consumes more power than the individual XOR array 102.
In this example of a conventional arrangement, the implementation of the plurality of central XOR arrays Ta-Te is such that the plurality of central XOR arrays are arranged in parallel with each other. Of note, the number of supported XOR arrays is determined based on the required number of code rates that also need to be supported by the BCH encoder. For example, in terms of a flash device, the BCH encoder may support the following five different T values (or five T XOR arrays) that correspond to a particular condition of the flash device: at the start of the flash device life when the code rate is higher, T=10; at 1,000 PE cycles of the flash device, T=20; at 5,000 PE cycles of the flash device, T=40; at 10,000 PE cycles of the flash device, T=60; at the end of life of the flash device when the code rate is lower than at the start of use of the flash device life, T=100. In this example, there are only five T values for selection and the controller will select from any of the above five T values depending on the conditions on the flash device and the errors that will likely occur. For example, early in the flash device's life, the controller will select a low T value, such as T=10. While at the end of the flash device's life, the controller will select a high T value, such as T=100. Of note, conventionally, each T value is associated with a different XOR array, and the higher the T value that is selected, the lower the code rate that results for the system.
Thus, depending on the use situation, a flash drive requires a variable amount of error correction strength (parity overhead). Conventionally, the different XOR arrays provide different code rate options to the BCH encoder. In one conventional operational instance of a flash drive that uses BCH encoding and has thirty-two XOR arrays, all thirty-two XOR arrays are utilized. In another conventional operational instance of the same system, only one XOR array of the thirty-two is utilized. Even though only one XOR array is utilized in the latter example, the hardware is still burdened with all thirty-two XOR arrays, which take up space and consume power.
Thus, for applying BCH ECC, a method is desired that supports a maximum number of Ts while using the smallest number of XOR arrays necessary for operation. Using fewer XOR arrays results in a correspondingly smaller implementation area needed to enable their operation, and thus also reduces the amount of power consumed by the circuit. According to an embodiment of the present invention and with reference to
The system and process for variable T BCH encoding of the present technology will next be described in detail.
Embodiments of the present technology enable a reduced number of XOR arrays to be used during BCH encoding while also enabling a reduced number of BCH encoders to be used and reused during operation, thereby creating a smaller and more efficient circuit than the conventional circuit described above. Embodiments may be used with flash storage technology, though are not limited to use with such technology. Even though flash storage requires a higher granularity of desired code rates, as described herein, one embodiment may use just a single BCH encoder to handle a variable amount of attached parity bits. In one embodiment, the single BCH encoder is reused for each parity bit value, thus making the circuit smaller and more efficient. Another embodiment uses multiple BCH encoders, wherein each BCH encoder supports a different range of T values. Embodiments improve the decoding performance over a range of code rates, while reducing the implementation area. This occurs because embodiments enable a particular T to be selected for a given use case scenario (e.g., channel condition), wherein the possible T selections are unrestricted, in that any value of T that is likely to be needed is available for selection.
In contrast with embodiments, conventional technology provides a less agile implementation that restricts the range of possible T selections, thereby yielding poorer bit error rate ("BER") performance. For example, and as already stated herein, in a conventional BCH application, the value of T is selected based on an understanding of the expected number of errors that occur during data reception, and the available T selections are restricted (limited such that not every T value that is likely to be needed is available for selection). Conventional technology has an abundance of XOR arrays that remain unused during much of the operation of flash devices, thus requiring a more sizable implementation area for functioning. The higher the expected number of errors, the higher the value of T that is selected. In general, the coding rate decreases as the error rate and the value of T increase. Additionally, higher settings of selected T (and thus a higher number of XOR arrays) increase the implementation area used by the BCH encoder's circuit, increase the overall FEC block size, and increase the area and power consumption of the decoder. In general, it is desirable to operate with the minimum value of T, and thus the minimum number of XOR arrays, for an expected bit-error rate. However, conventional technology requires a large number of Ts for large expected bit-error rates, and thus a large number of XOR arrays.
In one embodiment, the memory system 402 is known in the art as flash memory. The memory system 402 can be configured as a solid state disk (SSD) or implemented as removable memory commonly referred to as a thumb drive or a memory stick. In one embodiment, a non-limiting example of an SSD is made using 512 two gigabit NAND chips. The 512 two gigabit NAND chips are configured sixteen to a channel with a total of thirty-two channels for a nominal capacity of one Terabyte (TByte) of storage. Other configurations of chip size, number of chips, and number of channels can be configured depending on the particulars of the use case. Embodiments of the invention are not limited by the size of the memory system selected for a given use case. In the example above, NAND devices were used. Alternatively, NOR memory can be used in place of the NAND memory. Embodiments of the invention are not limited by the particular technology or circuit design underlying a memory cell. Embodiments of the invention can be used with user defined memory cells, with resistive memory, and with memory cells that are yet to be invented.
The memory controller 404 is communicatively coupled, wired and/or wirelessly, to a host processor 420. The host processor 420 includes a dynamically accessible memory indicated by DRAM 422. In various embodiments, the host processor 420 (as well as the communicatively coupled memory system 402) can reside in a variety of devices such as a computer of any type (e.g., stationary, desk top, tablet, and notebook, without limitation). In other embodiments, the memory system 402 can be used with various portable devices such as mobile phones, digital cameras, digital video cameras, global positioning systems, audio/visual media devices as well as devices yet to be invented. Embodiments of the invention are not limited by the purpose or name of the device in which the memory system 402 is used.
In various embodiments, the memory controller 404 may be implemented by one or more hardware components, one or more software components, or some combination thereof. Examples of hardware components include but are not limited to a combinational logic circuit, a sequential logic circuit, a microprocessor, an embedded processor, an embedded controller or the like. Examples of software components include but are not limited to a computing program, computing instructions, a software routine, e.g., firmware or the like.
In various embodiments, the memory system 402 is implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the memory system 402 is implemented in a single integrated circuit die. In other embodiments, the memory system 402 is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
It should be appreciated that
In various embodiments, the data management module 506, circuit 508, decoder 510, communication link 515, circuit 516, decoder 518, and communication link 524 may be implemented by one or more hardware components, one or more software components, or some combination thereof. Examples of hardware components include but are not limited to a combinational logic circuit, a sequential logic circuit, a microprocessor, an embedded processor, an embedded controller or the like. Examples of software components include but are not limited to a computing program, computing instructions, a software routine, e.g., firmware or the like.
In various embodiments, the data management module 506 is implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the data management module 506 is implemented in a single integrated circuit die. In other embodiments, the data management module 506 is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
In various embodiments, the circuit 508 is implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the circuit 508 is implemented in a single integrated circuit die. In other embodiments, the circuit 508 is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
In various embodiments, the decoder 510 is implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the decoder 510 is implemented in a single integrated circuit die. In other embodiments, the decoder 510 is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
In various embodiments, the circuit 516 is implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the circuit 516 is implemented in a single integrated circuit die. In other embodiments, the circuit 516 is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
In various embodiments, the decoder 518 is implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the decoder 518 is implemented in a single integrated circuit die. In other embodiments, the decoder 518 is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
The polynomial multiplier 616 (or in one embodiment, the polynomial multiplier/divider 705) performs a multiplication operation on the data 602. The product of this multiplication operation is then sent to the BCH encoder 606. The BCH encoder 606 encodes the product value, and finds its associated remainder (described below). The BCH encoder 606 next sends the remainder value back to the polynomial divider 618 (or in one embodiment, the polynomial multiplier/divider 705). The polynomial divider 618 then performs a division operation on the remainder value. The resulting quotient of the division operation is sent to the memory cells 614 of the memory chips that reside, in one embodiment, at the memory system 402.
Of note, the polynomial multiplier/divider module 705 (as well as the polynomial multiplier 616) performs a polynomial multiplication (i.e., a linear feed-forward shift register polynomial multiplier, which performs a convolution operation over the Galois Field GF(2)). The polynomial multiplier/divider module 705 (as well as the polynomial divider 618) performs a polynomial division (i.e., a linear feedback shift register for polynomial division over GF(2)). Of note, the foregoing methods of polynomial multiplication and division are well known to those skilled in the art.
Embodiments of the present technology reduce the Ts, and thus XOR arrays, necessary for implementation of ECC via BCH coding, in a “T reduction” (T−1 error correcting BCH code) method, as will be described below in detail. A brief overview of the T reduction method follows, in accordance with embodiments. Following the brief overview of the T reduction method, a more detailed description is put forth with reference to
According to an embodiment, the T reduction method achieves a T that is one less than the original T error correcting code (T minus one).
First, the value of g′(x), based on the selected BCH code, is calculated. g′(x) is the product of the minimal polynomials g_i(x) that form the generator polynomial g(x), except that g_l(x), the last g_i(x) in the product, is omitted from the g′(x) equation. g_l(x) is the minimal polynomial that distinguishes the original T error correcting code from the derived T−1 (T minus one) error correcting code.
Using the Chinese Remainder Theorem and polynomial operations over GF, those skilled in the art can show that the remainder of the polynomial representation of the input message, m(x), with respect to g′(x) can be calculated by first multiplying m(x) by g_l(x), then calculating the remainder of this product modulo g(x), and finally dividing the resulting remainder by g_l(x).
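This identity can be checked numerically over GF(2). In the sketch below, the factors g′(x) = x^3 + x + 1 and g_l(x) = x^3 + x^2 + 1 are illustrative stand-ins, not polynomials taken from the specification; since g(x) = g′(x)·g_l(x), multiplying m(x) by g_l(x), reducing modulo g(x), and dividing back by g_l(x) recovers m(x) mod g′(x).

```python
# Checking the remainder identity over GF(2); polynomials as integer bit masks.

def gf2_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

def gf2_divmod(dividend, divisor):
    """GF(2) polynomial long division; returns (quotient, remainder)."""
    q = 0
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        q |= 1 << shift
        dividend ^= divisor << shift
    return q, dividend

g_prime = 0b1011                 # x^3 + x + 1        (the reduced generator g'(x))
g_l     = 0b1101                 # x^3 + x^2 + 1      (the omitted minimal polynomial)
g       = gf2_mul(g_prime, g_l)  # g(x) = g'(x)*g_l(x)

m_poly  = 0b101101               # an arbitrary example message polynomial m(x)
direct  = gf2_divmod(m_poly, g_prime)[1]   # m(x) mod g'(x), computed directly
step1   = gf2_mul(m_poly, g_l)             # multiply m(x) by g_l(x)
step2   = gf2_divmod(step1, g)[1]          # remainder modulo g(x)
via_crt = gf2_divmod(step2, g_l)[0]        # divide the remainder back by g_l(x)
print(direct == via_crt)                   # -> True
```

The identity holds because m(x)·g_l(x) mod g(x) equals (m(x) mod g′(x))·g_l(x), whose degree is below deg g(x), so the final division is exact.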
Referring to
Referring to
Referring now to
Thus far, the reduction of the T parameter by one of the native BCH codes has been discussed, in accordance with one embodiment. However, further embodiments provide for the reduction of T by more than one.
As previously noted,
With reference now to
Assuming the T reduction of e=4 is to be processed, the polynomial multiplier/divider 904, at data block 1104, multiplies the input message m(x) by g_{l-e+1}(x). g_{l-e+1}(x) is the eth (4th when e=4) minimal polynomial omitted from the generator polynomial of the base BCH code, g(x), in the process of the T reduction. (Recall, as described above, that g′(x) is the product of the minimal polynomials g_i(x) that formed the generator polynomial g(x), except that g_l(x), the last g_i(x) in the product, is omitted from the g′(x) equation. Thus, in this example, g_l(x) is the last minimal polynomial that distinguishes the original T error correcting code from the derived T−4 [T minus four] error correcting code.)
At step 1106, the Mux 1108 selects and passes the results of the multiplication operation performed at the data block 1104 to the data block 1110. At the data block 1110, the polynomial multiplier/divider 904 multiplies the results of the multiplication operation performed at the data block 1104 by g_{l-e+2}(x). Of note, g_{l-e+2}(x) is the (e−1)th (3rd when e=4) minimal polynomial omitted from the generator polynomial of the base BCH code g(x). However, if T is to be reduced by e=3, the input message (message bits 1102) enters the Mux 1108 first, is directed to the data block 1110, and is multiplied by the polynomial multiplier/divider 904 by g_{l-e+2}(x).
At step 1112, if T is to be reduced by e=4 or e=3, the Mux 1114 selects and passes the results of the multiplication operation performed at the data block 1110 to the data block 1116. At the data block 1116, the polynomial multiplier/divider 904 multiplies the results of the multiplication operation performed at the data block 1110 by g_{l-e+3}(x). Of note, g_{l-e+3}(x) is the (e−2)th (2nd when e=4) minimal polynomial omitted from the generator polynomial of the base BCH code g(x). However, if T is to be reduced by e=2, the message bits 1102 enter the Mux 1114 first, are directed to the data block 1116, and are multiplied by the polynomial multiplier/divider 904 by g_{l-e+3}(x).
At step 1118, if T is to be reduced by e=4, e=3, or e=2, the Mux 1120 selects and passes the results of the multiplication operation performed at the data block 1116 to the data block 1122. At the data block 1122, the polynomial multiplier/divider 904 multiplies the results of the multiplication operation performed at the data block 1116 by g_l(x), the last of the omitted minimal polynomials. However, if T is to be reduced by e=1, the message bits 1102 enter the Mux 1120 first, are directed to the data block 1122, and are multiplied by the polynomial multiplier/divider 904 by g_l(x).
At step 1124, if T is to be reduced by e=4, e=3, e=2, or e=1, then the Mux 1126 selects and passes the results of the multiplication operation performed at the data block 1122 to a shifter/zero padding block 1128. (Of note, when no T reduction is selected, the Mux 1126 selects and passes the message bits 1102 directly to the shifter/zero padding block 1128.) At the shifter/zero padding block 1128, the data that was passed thereto by the Mux 1126 is multiplied by x^(N−K̃) (i.e., the data is shifted by zero padding) to align the data correctly with the boundaries of the base T error correcting BCH code. Here, K̃=K+ΔT*m is the new number of message bits after reducing T (as described herein, ΔT is the change in T implemented by the T reduction method). After the multiplication operation has been performed by the shifter/zero padding block 1128, the result is sent to the base BCH encoder 1130, at which the remainder over the base generator polynomial g(x) is calculated.
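The shift-then-remainder step can be sketched in a few lines of Python, representing GF(2) polynomials as integer bit-vectors (bit i holds the coefficient of x^i). The (15,7) two-error-correcting BCH generator is used here as a small illustrative stand-in for the much larger base code of the embodiments; the figure's block numbers are not modeled.

```python
def gf2_divmod(a, b):
    """Carry-less (GF(2)) polynomial division: returns (quotient, remainder)."""
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        s = a.bit_length() - db
        q ^= 1 << s
        a ^= b << s
    return q, a

N, K_tilde = 15, 7
g = 0b111010001                   # (15,7) BCH generator, degree N - K~ = 8
data = 0b1011001                  # K~ message bits as a polynomial

padded = data << (N - K_tilde)    # multiply by x^(N-K~), i.e. zero-pad the low terms
_, parity = gf2_divmod(padded, g) # remainder over the base generator g(x)
codeword = padded | parity        # systematic codeword: message bits then parity bits
```

Because the parity is the remainder of the zero-padded message, the resulting systematic codeword is exactly divisible by g(x).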
Still referring to
At step 1134, if the T reduction of e=4 was selected, the Mux 1136 will select and pass the quotient of the previous division operation performed at the data block 1132 to the data block 1138. However, if the T reduction of e=3 was selected, the output of the BCH Encoder 1130 will be directed to the Mux 1136, which will, in turn, pass the output to the data block 1138. At the data block 1138, the polynomial multiplier/divider 904 divides the output directed to it by the Mux 1136 by gl-e+2(x).
At step 1140, if the T reduction of e=4 or e=3 was selected, the Mux 1142 will select and pass the quotient of the previous division operation performed at the data block 1138 to the data block 1144. However, if the T reduction of e=2 was selected, the output of the BCH Encoder 1130 will be directed to the Mux 1142, which will, in turn, pass the output to the data block 1144. At the data block 1144, the polynomial multiplier/divider 904 divides the output directed to it by the Mux 1142 by gl-e+3(x).
At step 1146, if the T reduction of e=4, e=3, or e=2 was selected, the Mux 1148 selects and passes the quotient of the previous division operation performed at the data block 1144 to the data block 1150. However, if the T reduction of e=1 was selected, the output of the BCH Encoder 1130 will be directed to the Mux 1148, which will, in turn, pass the output to the data block 1150. At the data block 1150, the polynomial multiplier/divider 904 divides the output directed to it by the Mux 1148 by gl(x).
At step 1152, if the T reduction of e=4, e=3, e=2, or e=1 was selected, the Mux 1154 selects and passes the quotient of the previous division operation performed at the data block 1150 to the output 1158 as parity bits. However, if no T reduction was selected, the output of the BCH Encoder 1130 will be sent directly to the Mux 1154, which will, in turn, pass the output of the BCH Encoder 1130 to the output 1158.
As described, if the user selects not to reduce T, the operation, according to an embodiment, will begin at the very bottom Mux, which in this example is Mux 1126, and the message bits 1102 will not be disturbed by any of the polynomial multipliers. The Mux 1154 will receive the output of the BCH Encoder 1130 and will send this output to the output 1158 as parity bits without any disturbance from the polynomial dividers.
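The chain of muxes described above reduces to a small lookup: for a reduction amount e in 0..4, the message enters at a later mux, so only the last e multiplier blocks (and, symmetrically, the last e divider blocks) operate on it. The block numbers below are taken from the text; the function itself is only an illustrative sketch of the selection logic.

```python
# Multiplier blocks applied before the base encoder, and divider blocks
# applied after it, in chain order (block numbers from the description).
MULT_STAGES = [1104, 1110, 1116, 1122]
DIV_STAGES = [1132, 1138, 1144, 1150]

def active_stages(e):
    """For a T reduction of e (0 = no reduction), return the multiplier and
    divider blocks that actually process the data; the rest are bypassed."""
    assert 0 <= e <= 4
    return MULT_STAGES[4 - e:], DIV_STAGES[4 - e:]
```

For example, e=1 engages only blocks 1122 and 1150 (multiply and divide by gl(x)), while e=0 bypasses the whole chain, matching the "undisturbed" path through the Mux 1126 and the Mux 1154.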
For T reduction to occur, the following information is given: the generator polynomial g(x) for the base T error correcting BCH code, and the minimal polynomials gi(x), i=1 . . . l, from which the generator polynomial is constructed (i.e., g(x) is the product g1(x)·g2(x) · · · gl(x)). Additionally, the minimal polynomials are sorted in ascending order such that gl(x) is the difference between the generator polynomial of the T error correcting code g(x) and that of the T−1 error correcting code g′(x). Similarly, the generator polynomial for the T−2 error correcting BCH code, g″(x), misses the last two minimal polynomials gl(x) and gl-1(x), and so on. The difference polynomial between the base T error correcting polynomial and that of the T−ΔT error correcting polynomial is hence given as fΔT(x)=gl(x)·gl-1(x) · · · gl-ΔT+1(x), the product of gl-j(x) for j=0 . . . ΔT−1.
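This construction can be sketched over GF(2) with integers as bit-vectors (bit i holds the coefficient of x^i). The minimal polynomials below are those of α, α^3, and α^5 over GF(2^4), which yield the classic (15,5) triple-error-correcting BCH generator; they are a small worked example, not the codes of the embodiments.

```python
from functools import reduce

def gf2_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication on integer bit-vectors."""
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

# Minimal polynomials g_1(x)..g_l(x): x^4+x+1, x^4+x^3+x^2+x+1, x^2+x+1
minimals = [0b10011, 0b11111, 0b111]

g = reduce(gf2_mul, minimals)               # base T=3 generator g(x)
dT = 1
f = reduce(gf2_mul, minimals[-dT:])         # difference polynomial f_dT(x)
g_prime = reduce(gf2_mul, minimals[:-dT])   # reduced T-dT generator g'(x)

assert gf2_mul(g_prime, f) == g             # g(x) factors as g'(x) * f_dT(x)
```

Dropping the last minimal polynomial here leaves the (15,7) two-error-correcting generator, illustrating how each omitted factor lowers T by one.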
In this example embodiment, the first BCH encoder 1202A supports any T value greater than or equal to ninety-six and less than or equal to one hundred and twenty-eight. The second BCH encoder 1202B supports any T value greater than or equal to sixty-four and less than ninety-six. In one instance, if the user selects T to be one hundred and twenty-six, then embodiments will use the BCH encoder 1202A that supports any T value greater than or equal to ninety-six and less than or equal to one hundred and twenty-eight. With this T selection and the use of the BCH encoder 1202A, two reductions will be made, since one hundred and twenty-eight (the maximum supported T value) minus one hundred and twenty-six (the T value selected) is equal to two (i.e., 128−126=2). In another instance, if the user selects T as ninety-eight, then embodiments will also use the BCH encoder 1202A that supports any T value greater than or equal to ninety-six and less than or equal to one hundred and twenty-eight. In that instance, thirty reductions will be made (128−98=30).
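The encoder choice and reduction count follow directly from the ranges above; a minimal sketch, assuming the example ranges from the text (encoder 1202A with a base T of 128 for T in 96..128, encoder 1202B with a base T of 96 for T in 64..95):

```python
def pick_encoder(t):
    """Return (encoder name, number of T reductions) for a selected T.
    Ranges and base T values are the illustrative ones from the text."""
    if 96 <= t <= 128:
        return "1202A", 128 - t   # reductions = base T minus selected T
    if 64 <= t < 96:
        return "1202B", 96 - t
    raise ValueError("T outside the supported ranges of this example")

# e.g., T=126 -> ("1202A", 2); T=98 -> ("1202A", 30); T=70 -> ("1202B", 26)
```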
The rationale behind creating two BCH encoders instead of one is that two BCH encoders supporting two different ranges of T require less implementation area than one BCH encoder supporting a large range of Ts. For example, if there were only one BCH encoder that supported any T up to one hundred and twenty-eight, then if the user selects T as seventy, fifty-eight reductions will be made (128−70=58). However, if there are two BCH encoders, with one of the BCH encoders supporting any T from sixty-four to ninety-six, then if the user selected T as seventy, only twenty-six reductions would be made (96−70=26). Each reduction requires a new set of multiplier and divider operations to occur (to account for the selected e and the operations associated therewith and described herein with reference at least to
Additionally, of note, the greater the number of different Ts that must be supported, the more gates are required. Additionally, after the polynomial multiplier/dividers have performed their operations and before parity output begins, a parity delay occurs. In one example instance, 100 clock cycles of parity delay may occur. As more reductions occur, according to embodiments, more clock cycles of delay will also occur.
Embodiments of the present invention provide a system and method to reuse BCH encoders by presenting polynomial multiplier and polynomial divider functions before and after the BCH encoder performs its operations. Such methods and systems support a variable number of T settings within a high speed BCH encoder. Such methods and systems enable greater error correction capacity than standard T sharing approaches, while reducing implementation area and increasing power efficiency. Such embodiments are desirable for flash memory applications to provide full rate flexibility.
With reference now to
In one embodiment, the receiver 1310 receives the message polynomial 1305. In one embodiment and as described herein above, the polynomial multiplier 1320 accesses the message polynomial 1305 and generates a first value 1325 by multiplying the message polynomial 1305 by the difference polynomial.
In one embodiment and as already described herein, the shifter/zero padder 1330 accesses the first value 1325 and calculates a second value 1335 by multiplying the first value 1325 by x^(N−K̃). The shifter/zero padder 1330 then passes the calculated second value 1335 to the BCH encoder 1340.
In one embodiment, the BCH encoder 1340 generates a third value 1345 by dividing the second value 1335 by the generator polynomial of the T error correcting BCH code g(x) and calculating the remainder based on the division. The BCH encoder 1340 then passes the third value 1345 to the polynomial divider 1350.
In one embodiment and as already described herein, the polynomial divider 1350 calculates a fourth value 1355 by dividing the third value 1345 by the difference polynomial. The fourth value 1355 includes the parity of the T−ΔT error correcting BCH code. (Of note, the fourth value 1355 also includes the original raw data from the user as well as the generated m*(T−ΔT) parity bits.) In one embodiment, the polynomial divider 1350 then passes the fourth value 1355 to the parity output module 1360, which outputs the fourth value 1355 from the system 1300.
Of note, the BCH encoder 1340, in one embodiment, is communicatively coupled (wired and/or wirelessly) with the polynomial multiplier 1320, the shifter/zero padder 1330 and the polynomial divider 1350.
Of note, while in one embodiment, the memory storage area 1360 resides on the memory system 1305, in another embodiment, the memory storage area 1360 resides external to, but communicatively (wired and/or wirelessly) coupled with the memory system 1305.
With reference to
Although specific procedures are disclosed in flow diagrams 1400 and 1450, such procedures are examples. That is, embodiments are well suited to performing various other operations or variations of the operations recited in the processes of flow diagrams 1400 and 1450. Likewise, in some embodiments, the operations in flow diagrams 1400 and 1450 may be performed in an order different than presented, not all of the operations described in one or more of these flow diagrams may be performed, and/or one or more additional operations may be added.
At operation 1405, in one embodiment and as described herein, a message polynomial is received, wherein the message polynomial includes data bits as coefficients. The T reduction amount ΔT is also received.
At operation 1410, in one embodiment and as described herein, a polynomial multiplier (such as polynomial multiplier 1320 of
At operation 1415, in one embodiment and as described herein, a shifter/zero-padder (such as shifter/zero-padder 1330 of
At operation 1420, in one embodiment and as described herein, the BCH encoder (such as BCH encoder 1340 of
At operation 1425, in one embodiment and as described herein, a polynomial divider (such as the polynomial divider 1350 of
At operation 1430, in one embodiment and as described herein, the fourth value is output. (The fourth value is output as the parity of the T−ΔT error correcting BCH code.) In one embodiment, for example, this fourth value is output from a memory system, such as memory system 1305 of
At operation 1435, in one embodiment and as described herein, a polynomial multiplier/divider module (such as the polynomial multiplier/divider 705 of
At operation 1455, in one embodiment and as described herein, a message polynomial is received, wherein the message polynomial includes data bits as coefficients.
At operation 1460, in one embodiment and as described herein, a selected T reduction parameter value is received, wherein the selected T reduction parameter value is the maximum number of T reductions that are to be applied to an original T error correcting code value during the BCH coding to achieve a reduced number value of Ts, such that the reduced number value of Ts, which is less than the original T error correcting BCH code value, is used for the BCH coding.
At operation 1465, in one embodiment and as described herein, based on the selected T reduction parameter value as compared to the original T error correcting code value, multiplier operations, encoding operations and divider operations are applied to the message polynomial to achieve an output, wherein the output includes parity bits.
In one embodiment, the multiplying operations of step 1465 include multiplying the message polynomial by a difference polynomial to achieve a first value, wherein the difference polynomial includes minimal polynomials that are present in the original T error correcting BCH code and are absent from a T−ΔT error correcting BCH code.
In one embodiment, the encoding operations at step 1465 include multiplying a result of the multiplying operations by x^(N−K̃) to achieve an encoding multiplying value, dividing the encoding multiplying value by a generator polynomial of the T error correcting BCH code, and calculating a remainder based on the dividing to achieve an encoding remainder value.
In one embodiment, the divider operations of step 1465 include dividing a result of the encoding operations by the difference polynomial to achieve a divider quotient value that includes parity of a T−ΔT error correcting BCH code.
In one embodiment, the multiplier operations and the divider operations of step 1465 occur at separate times.
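The multiply, encode, and divide operations of step 1465 can be checked end to end on a toy example. The sketch below assumes integer bit-vector GF(2) arithmetic and reuses the small (15,5)/(15,7) BCH generators as hypothetical stand-ins for the large codes of the embodiments: the parity produced by the reuse path (multiply by the difference polynomial, shift, take the remainder over the base generator, divide by the difference polynomial) matches direct encoding with the reduced generator, and the final division leaves no remainder.

```python
def gf2_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication on integer bit-vectors."""
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

def gf2_divmod(a, b):
    """Carry-less (GF(2)) polynomial division: returns (quotient, remainder)."""
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        s = a.bit_length() - db
        q ^= 1 << s
        a ^= b << s
    return q, a

f = 0b111                        # difference polynomial f_dT(x) (dT = 1 here)
g_reduced = 0b111010001          # generator g'(x) of the reduced (15,7) code
g_base = gf2_mul(g_reduced, f)   # base (15,5) generator g(x) = g'(x) * f(x)

msg = 0b1011001                      # K~ = 7 message bits
shift = g_reduced.bit_length() - 1   # N - K~ = deg g'(x) = 8

# direct systematic encoding with the reduced generator
_, parity_direct = gf2_divmod(msg << shift, g_reduced)

# reuse path: multiply by f_dT, shift, reduce mod g(x), divide by f_dT
_, rem = gf2_divmod(gf2_mul(msg, f) << shift, g_base)
parity_reused, leftover = gf2_divmod(rem, f)

assert leftover == 0 and parity_reused == parity_direct
```

The exactness of the final division follows because m(x)·f(x)·x^(N−K̃) differs from a multiple of g(x) by p(x)·f(x), whose degree is below that of g(x), so the remainder is precisely the reduced-code parity times f(x).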
Connection with a network is obtained through communications channel 1532 via communications module 1530, as is recognized by those of skill in the art, which enables the data processing device 1500 to communicate with devices in remote locations. Communications channel 1532 and communications module 1530 flexibly represent communication elements in various implementations, and can represent various forms of telemetry, GPRS, Internet, and combinations thereof.
In various embodiments, a pointing device such as a stylus is used in conjunction with a touch screen, for example, via channel 1529 and miscellaneous I/O 1528.
For purposes of discussing and understanding the embodiments of the invention, it is to be understood that various terms are used by those knowledgeable in the art to describe techniques and approaches. Furthermore, in the description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one of ordinary skill in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention.
Some portions of the description may be presented in terms of algorithms and symbolic representations of operations on, for example, data bits within a computer memory. These algorithmic descriptions and representations are the means used by those of ordinary skill in the data processing arts to most effectively convey the substance of their work to others of ordinary skill in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of acts leading to a desired result. The acts are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, “displaying”, “generating”, “multiplying”, “receiving”, “sending”, “outputting”, “reusing”, “accessing”, “performing”, “storing”, “updating”, “dividing”, “applying” or the like, can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
An apparatus for performing the operations herein can implement the present invention. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer, selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, hard disks, optical disks, compact disk-read only memories (CD-ROMs), and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), FLASH memories, magnetic or optical cards, etc., or any type of media suitable for storing electronic instructions either local to the computer or remote to the computer.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method. For example, any of the methods according to the present invention can be implemented in hard-wired circuitry, by programming a general-purpose processor, or by any combination of hardware and software. One of ordinary skill in the art will immediately appreciate that the invention can be practiced with computer system configurations other than those described, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, digital signal processing (DSP) devices, set top boxes, network PCs, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
The methods herein may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, application, driver), as taking an action or causing a result. Such expressions are merely a shorthand way of saying that execution of the software by a computer causes the processor of the computer to perform an action or produce a result.
It is to be understood that various terms and techniques are used by those knowledgeable in the art to describe communications, protocols, applications, implementations, mechanisms, etc. One such technique is the description of an implementation of a technique in terms of an algorithm or mathematical expression. That is, while the technique may be, for example, implemented as executing code on a computer, the expression of that technique may be more aptly and succinctly conveyed and communicated as a formula, algorithm, or mathematical expression. Thus, one of ordinary skill in the art would recognize a block denoting A+B=C as an additive function whose implementation in hardware and/or software would take two inputs (A and B) and produce a summation output (C). Likewise, one of ordinary skill in the art would recognize the implementation in hardware and/or software of a block denoting polynomial multiplication (A*B=C) and polynomial division (A/B=D) would take at least two inputs (A and B) and produce the product output (C) or the quotient output (D), respectively. Thus, the use of formula, algorithm, or mathematical expression as descriptions is to be understood as having a physical embodiment in at least hardware and/or software (such as a computer system in which the techniques of the present invention may be practiced as well as implemented as an embodiment).
A machine-readable medium is understood to include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.
As used in this description, “one embodiment” or “an embodiment” or similar phrases means that the feature(s) being described are included in at least one embodiment of the invention. References to “one embodiment” in this description do not necessarily refer to the same embodiment; however, neither are such embodiments mutually exclusive. Nor does “one embodiment” imply that there is but a single embodiment of the invention. For example, a feature, structure, act, etc. described in “one embodiment” may also be included in other embodiments. Thus, the invention may include a variety of combinations and/or integrations of the embodiments described herein.
Various example embodiments are thus described. All statements herein reciting principles, aspects, and embodiments of the invention as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents and equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure. The scope, therefore, is not intended to be limited to the embodiments shown and described herein. Rather, the scope and spirit is embodied by the appended claims.
20150043286 | Park et al. | Feb 2015 | A1 |
20150046625 | Peddle et al. | Feb 2015 | A1 |
20150127883 | Chen et al. | May 2015 | A1 |
20150131373 | Alhussien et al. | May 2015 | A1 |
20150149871 | Chen et al. | May 2015 | A1 |
20150169468 | Camp et al. | Jun 2015 | A1 |
20150186055 | Darragh | Jul 2015 | A1 |
20150221381 | Nam | Aug 2015 | A1 |
20150242268 | Wu et al. | Aug 2015 | A1 |
20150332780 | Kim et al. | Nov 2015 | A1 |
20150371718 | Becker et al. | Dec 2015 | A1 |
20160034206 | Ryan et al. | Feb 2016 | A1 |
20160049203 | Alrod et al. | Feb 2016 | A1 |
20160071601 | Shirakawa et al. | Mar 2016 | A1 |
20160072527 | Tadokoro et al. | Mar 2016 | A1 |
20160155507 | Grunzke | Jun 2016 | A1 |
20160179406 | Gorobets et al. | Jun 2016 | A1 |
20160247581 | Yoshida et al. | Aug 2016 | A1 |
20160293259 | Kim et al. | Oct 2016 | A1 |
20170213597 | Micheloni et al. | Jul 2017 | A1 |
20180033490 | Marelli et al. | Feb 2018 | A1 |
Entry |
---|
NVM Express, Revision 1.0; Intel Corporation, Jul. 12, 2011, pp. 103-106 and 110-114. |
NVM Express, Revision 1.0; Intel Corporation, Mar. 1, 2011, pp. 1-122. |
RapidIO, PCI Express, and Gigabit Ethernet Comparison: Pros and Cons of Using Three Interconnects in Embedded Systems; RapidIO Trade Association, Technical White Paper, Revision May 3, 2005, pp. 1-36. |
PCI Express Base Specification Revision 3.0 (PCI Express Base Specification, PCI-SIG, hereinafter "PCIExpress"), Nov. 10, 2010, pp. 1-860. |
RFC 793: Transmission Control Protocol, RFC 793, University of Southern California, IETF, Sep. 1981, pp. 1-89. |
Cai, et al., “Data Retention in MLC NAND Flash Memory: Characterization, Optimization, and Recovery”, 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA); Carnegie Mellon University, LSI Corporation, 2015, pp. 551-563. |
Chen, et al., “Increasing flash memory lifetime by dynamic voltage allocation for constant mutual information”, 2014 Information Theory and Applications Workshop (ITA), 2014, pp. 1-5. |
Peleato, et al., “Probabilistic graphical model for flash memory programming”, Statistical Signal Processing Workshop (SSP), 2012 IEEE, 2012, pp. 1-4. |
Wu, et al., “Reducing SSD Read Latency via NAND Flash Program and Erase Suspension”, Proceedings of FAST'2012; Department of Electrical and Computer Engineering Virginia Commonwealth University, Richmond, VA 23284, 2012, pp. 117-123. |
Number | Date | Country | |
---|---|---|---|
20180034483 A1 | Feb 2018 | US |
Number | Date | Country | |
---|---|---|---|
62368491 | Jul 2016 | US |