Memory controller method and system compensating for memory cell data losses

Information

  • Patent Grant
  • Patent Number
    9,064,600
  • Date Filed
    Tuesday, February 25, 2014
  • Date Issued
    Tuesday, June 23, 2015
Abstract
A computer system includes a memory controller coupled to a memory module containing several DRAMs. The memory module also includes a non-volatile memory storing row addresses identifying rows containing DRAM memory cells that are likely to lose data during normal refresh of the memory cells. Upon power-up, the data from the non-volatile memory are transferred to a comparator in the memory controller. The comparator compares the row addresses to row addresses from a refresh shadow counter that identify the rows in the DRAMs being refreshed. When a row of memory cells is being refreshed that is located one-half of the rows away from a row that is likely to lose data, the memory controller causes the row that is likely to lose data to be refreshed. The memory controller also includes error checking circuitry for identifying the rows of memory cells that are likely to lose data during refresh.
Description
TECHNICAL FIELD

This invention relates to dynamic random access memory (“DRAM”) devices and controllers for such memory devices, and, more particularly, to a method and system for controlling the operation of a memory controller, a memory module or a DRAM to manage the rate at which data bits stored in the DRAM are lost during refresh.


BACKGROUND OF THE INVENTION

As the use of electronic devices, such as personal computers, continues to increase, it is becoming ever more important to make such devices portable. The usefulness of portable electronic devices, such as notebook computers, is limited by the length of time batteries are capable of powering the device before needing to be recharged. This problem has been addressed by attempts to increase battery life and attempts to reduce the rate at which such electronic devices consume power.


Various techniques have been used to reduce power consumption in electronic devices, the nature of which often depends upon the type of power consuming electronic circuits that are in the device. For example, electronic devices, such as notebook computers, typically include dynamic random access memory (“DRAM”) devices that consume a substantial amount of power. As the data storage capacity and operating speeds of DRAM devices continue to increase, the power consumed by such devices has continued to increase in a corresponding manner.


In general, the power consumed by a DRAM increases with both the capacity and the operating speed of the DRAM devices. The power consumed by DRAM devices is also affected by their operating mode. A DRAM, for example, will generally consume a relatively large amount of power when the memory cells of the DRAM are being refreshed. As is well-known in the art, DRAM memory cells, each of which essentially consists of a capacitor, must be periodically refreshed to retain data stored in the DRAM device. Refresh is typically performed by essentially reading data bits from the memory cells in each row of a memory cell array and then writing those same data bits back to the same cells in the row. A relatively large amount of power is consumed when refreshing a DRAM because rows of memory cells in a memory cell array are being actuated in rapid sequence. Each time a row of memory cells is actuated, a pair of digit lines for each memory cell is switched to complementary voltages and then equilibrated. As a result, DRAM refreshes tend to be particularly power-hungry operations. Further, since refreshing memory cells must be accomplished even when the DRAM is not being used and is thus inactive, the amount of power consumed by refresh is a critical determinant of the amount of power consumed by the DRAM over an extended period. Thus many attempts to reduce power consumption in DRAM devices have focused on reducing the rate at which power is consumed during refresh.


Refresh power can, of course, be reduced by reducing the rate at which the memory cells in a DRAM are being refreshed. However, reducing the refresh rate increases the risk of data stored in the DRAM memory cells being lost. More specifically, since, as mentioned above, DRAM memory cells are essentially capacitors, charge inherently leaks from the memory cell capacitors, which can change the value of a data bit stored in the memory cell over time. However, current leaks from capacitors at varying rates. Some capacitors are essentially short-circuited and are thus incapable of storing charge indicative of a data bit. These defective memory cells can be detected during production testing, and can then be repaired by substituting non-defective memory cells using conventional redundancy circuitry. On the other hand, current leaks from most DRAM memory cells at much slower rates that span a wide range. A DRAM refresh rate is chosen to ensure that all but a few memory cells can store data bits without data loss. This refresh rate is typically once every 64 ms. The memory cells that cannot reliably retain data bits at this refresh rate are detected during production testing and replaced by redundant memory cells. However, the rate of current leakage from DRAM memory cells can change after production testing, both as a matter of time and from subsequent production steps, such as in packaging DRAM chips. Current leakage, and hence the rate of data loss, can also be affected by environmental factors, such as the temperature of DRAM devices. Therefore, despite production testing, a few memory cells will typically be unable to retain stored data bits at normal refresh rates.
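As back-of-the-envelope arithmetic, the 64 ms interval cited above implies a fixed per-row refresh cadence. The 64 ms figure comes from the text; the 8192-row array size below is an assumed, typical figure used only for illustration.

```python
# Back-of-the-envelope refresh timing. The 64 ms refresh interval comes from
# the text; the 8192-row array size is an assumed, typical figure.
REFRESH_INTERVAL_MS = 64.0   # every row must be refreshed within this window
NUM_ROWS = 8192              # hypothetical number of rows in the array

# With distributed refresh, rows are visited evenly across the interval.
row_spacing_us = REFRESH_INTERVAL_MS * 1000.0 / NUM_ROWS
print(f"one row refreshed roughly every {row_spacing_us:.4f} us")  # about 7.8 us
```

Halving the refresh rate would double this spacing and roughly halve refresh power, at the cost of the data-loss risk the passage describes.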


One technique that has been used to prevent data errors during refresh is to generate an error correcting code (“ECC”) from each item of stored data, and then store the ECC along with the data. A computer system 10 employing typical ECC techniques is shown in FIG. 1. The computer system 10 includes a central processor unit (“CPU”) 14 coupled to a system controller 16 through a processor bus 18. The system controller 16 is coupled to input/output (“I/O”) devices (not shown) through a peripheral bus 20 and to an I/O controller 24 through an expansion bus 26. The I/O controller 24 is also connected to various peripheral devices (not shown) through an I/O bus 28.


The system controller 16 includes a memory controller 30 that is coupled to several memory modules 32a-c through an address bus 36, a control bus 38, a syndrome bus 40, and a data bus 42. Each of the memory modules 32a-c includes several DRAM devices (not shown) that store data and an ECC. The data are coupled through the data bus 42 to and from the memory controller 30 and locations in the DRAM devices mounted on the modules 32a-c. The locations in the DRAM devices to which data are written and from which data are read are designated by addresses coupled to the memory modules 32a-c on the address bus 36. The operation of the DRAM devices in the memory modules 32a-c is controlled by control signals coupled to the memory modules 32a-c on the control bus 38.


In operation, when data are to be written to the DRAM devices in the memory modules 32a-c, the memory controller 30 generates an ECC, and then couples the ECC and the write data to the memory modules 32a-c through the syndrome bus 40 and the data bus 42, respectively, along with control signals coupled through the control bus 38 and a memory address coupled through the address bus 36. When the stored data are to be read from the DRAM devices in the memory modules 32a-c, the memory controller 30 applies control signals to the memory modules 32a-c through the control bus 38 and a memory address through the address bus 36. Read data and the corresponding syndrome are then coupled from the memory modules 32a-c to the memory controller 30 through the data bus 42 and syndrome bus 40, respectively. The memory controller 30 then uses the ECC to determine if any bits of the read data are in error, and, if not too many bits are in error, to correct the read data.


One example of a conventional memory controller 50 is shown in FIG. 2. The operation of the memory controller 50 is controlled by a memory control state machine 54, which outputs control signals on the control bus 38. The state machine 54 also outputs a control signal to an address multiplexer 56 that outputs an address on the address bus 36. The most significant or upper bits of an address are coupled to a first port of the multiplexer 56 on an upper address bus 60, and the least significant or lower bits of an address are coupled to a second port of the multiplexer 56 on a lower address bus 62. The upper and lower address buses 60, 62, respectively, are coupled to an address bus portion 18A of the processor bus 18 (FIG. 1).


A data bus portion 18D of the processor bus 18 on which write data are coupled is connected to a buffer/transceiver 70 and to an ECC generator 72. A data bus portion 18D′ on which read data are coupled is connected to an ECC check/correct circuit 76. In practice, both data bus portions 18D and 18D′ comprise a common portion of the processor bus 18, but they are illustrated as being separate in FIG. 2 for purposes of clarity. The ECC generator 72 generates an ECC from the write data on bus 18D, and couples the syndrome to the buffer/transceiver 70 through an internal ECC syndrome bus 74. The ECC check/correct circuit 76 receives read data from the buffer/transceiver 70 through an internal read bus 78 and a syndrome through an internal ECC syndrome bus 80. The buffer/transceiver 70 applies the syndrome received from the ECC generator 72 to the memory modules 32a-c (FIG. 1) through the syndrome bus 40. The buffer/transceiver 70 couples the syndrome to the memory modules 32a-c along with the write data, which are coupled through the data bus 42. The buffer/transceiver 70 also couples read data from the data bus 42 and a syndrome from the syndrome bus 40 to the ECC check/correct circuit 76. The ECC check/correct circuit 76 then determines whether or not any of the bits of the read data are in error. If the ECC check/correct circuit 76 determines that any of the bits of the read data are in error, it corrects those bits as long as a sufficiently low number of bits are in error that they can be corrected. As is well-known in the art, the number of bits in the syndrome determines the number of bits of data that can be corrected. The uncorrected read data, if no error was detected, or the corrected read data, if an error was detected, are then coupled through the data bus 18D′. In the event a correctable error was found, the ECC check/correct circuit 76 generates a read error R_ERROR signal, which is coupled to the memory control state machine 54.
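The check/correct principle the circuit 76 relies on can be illustrated with a minimal Hamming(7,4) code, which detects and corrects any single-bit error in a 4-bit data nibble. This is a sketch only: real controllers use wider SECDED codes over 64-bit words, and the patent does not specify the code used.

```python
# Minimal Hamming(7,4) sketch of ECC generate / check / correct.

def encode(d):
    """Encode 4 data bits [d0..d3] into a 7-bit Hamming codeword."""
    d0, d1, d2, d3 = d
    p1 = d0 ^ d1 ^ d3                    # parity over codeword positions 1,3,5,7
    p2 = d0 ^ d2 ^ d3                    # parity over codeword positions 2,3,6,7
    p3 = d1 ^ d2 ^ d3                    # parity over codeword positions 4,5,6,7
    return [p1, p2, d0, p3, d1, d2, d3]  # codeword positions 1..7

def check_correct(cw):
    """Return (corrected data bits, error position or 0 if clean)."""
    c = cw[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # recompute parity: positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]       # positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]       # positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3      # nonzero syndrome = 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1             # flip the failing bit back
    return [c[2], c[4], c[5], c[6]], syndrome

data = [1, 0, 1, 1]
cw = encode(data)
cw[4] ^= 1                               # simulate a leaky cell flipping one bit
fixed, pos = check_correct(cw)
assert fixed == data and pos == 5        # error located and corrected
```

A nonzero syndrome plays the role of the R_ERROR signal; an error pattern beyond the code's correction capability would correspond to F_ERROR.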
If, however, too many bits of the read data were in error to be corrected, the ECC check/correct circuit 76 generates a fatal error F_ERROR signal, which is coupled to the CPU 14 (FIG. 1).


The memory controller 50 also includes a refresh timer 84 that schedules a refresh of the DRAM devices in the memory modules 32a-c at a suitable rate, such as once every 64 ms. The refresh timer 84 periodically outputs a refresh trigger signal on line 88 that causes the memory control state machine 54 to issue an auto refresh command on the control bus 38.


The use of ECCs in the memory controller 50 shown in FIG. 2 can significantly improve the reliability of data stored in the DRAM devices in the memory modules 32a-c. Furthermore, the refresh timer 84 can cause the DRAMs to be refreshed at a slower refresh rate since resulting data bit errors can be corrected. The use of a slower refresh rate can provide the significant advantage of reducing the power consumed by the DRAM. However, the use of ECCs requires that a significant portion of the DRAM storage capacity be used to store the ECCs, thus effectively reducing the storage capacity of the DRAM. Further, the use of ECCs can reduce the rate at which the DRAM can be refreshed because the ECC must be used to check and possibly correct each item of data read from the DRAM during refresh. Furthermore, the need to perform ECC processing on all read data during refresh can consume a significant amount of power. Also, if the ECCs are not used during normal operation, it is necessary to refresh the DRAM array at the normal refresh rate while checking the entire array for data errors and correcting any errors that are found before switching to the normal operating mode.


There is therefore a need for a method and system that eliminates or corrects data storage errors produced during refresh of a DRAM either without the use of ECCs or without the need to repetitively correct data errors with ECCs.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a conventional computer system.



FIG. 2 is a block diagram of a conventional memory controller that may be used in the computer system of FIG. 1.



FIG. 3 is a block diagram of a computer system according to one embodiment of the invention.



FIG. 4 is a block diagram of a memory controller according to one embodiment of the invention that may be used in the computer system of FIG. 3.



FIG. 5 is a flow chart showing a procedure for transferring error-prone row addresses from a memory module to the memory controller of FIG. 4 and for storing the error-prone row addresses in the memory controller.



FIG. 6 is a flow chart showing a procedure for identifying error-prone row addresses and for storing information about the error-prone row addresses in a memory module.



FIG. 7 is a schematic diagram illustrating the manner in which the memory controller of FIG. 3 may insert extra refreshes of rows containing at least one error-prone memory cell.



FIG. 8 is a block diagram of a computer system according to another embodiment of the invention.



FIG. 9 is a block diagram of a computer system according to still another embodiment of the invention.





DETAILED DESCRIPTION

One embodiment of a computer system 100 according to one embodiment of the invention is shown in FIG. 3. The computer system 100 uses many of the same components that are used in the conventional computer system 10 of FIG. 1. Therefore, in the interest of brevity, these components have been provided with the same reference numerals, and an explanation of their operation will not be repeated. The computer system 100 of FIG. 3 differs from the computer system 10 of FIG. 1 by including memory modules 102a-c that each include a non-volatile memory 110a-c, respectively (only 110a is shown in FIG. 3). The non-volatile memories 110a-c store row addresses identifying rows containing one or more memory cells in the DRAM devices in the respective modules 102a-c that are prone to errors because they discharge at a relatively high rate. The computer system 100 also differs from the computer system 10 of FIG. 1 by including circuitry that detects and identifies these error-prone memory cells and subsequently takes protective action. More specifically, as described in greater detail below, a memory controller 120 in the computer system 100 uses ECC techniques to determine which memory cells are error-prone during refresh. Once these error-prone memory cells have been identified, the memory controller 120 inserts additional refreshes for the rows containing these memory cells. As a result, this more rapid refresh is performed only on the rows containing memory cells that need to be refreshed at a more rapid rate so that power is not wasted refreshing memory cells that do not need to be refreshed at a more rapid rate.


One embodiment of the memory controller 120 that is used in the computer system 100 is shown in FIG. 4. The memory controller 120 uses many of the same components that are used in the conventional memory controller 50 of FIG. 2. Again, in the interest of brevity, these components have been provided with the same reference numerals, and an explanation of their operation will not be repeated except to the extent that they perform different or additional functions in the memory controller 120. In addition to the components included in the memory controller 50, the memory controller 120 includes a failing address register and comparator unit (“FARC”) 124 that stores the row addresses containing error-prone memory cells requiring refreshes at a more rapid rate. The FARC 124 is coupled to the raw write data bus 18D to receive from the CPU 14 (FIG. 3) the row addresses that are stored in the non-volatile memories 110a-c (FIG. 3). At power-up of the computer system 100, the CPU 14 performs a process 130 to either transfer the row addresses from the non-volatile memories 110a-c to the FARC 124 as shown in the flow-chart of FIG. 5 or to test the DRAMs in the memory modules 102a-c to determine which rows contain at least one error-prone memory cell and then program the non-volatile memories 110a-c and the FARC, as shown in the flow-chart of FIG. 6.


With reference, first, to FIG. 5, the process 130 is entered during power-on at step 134. The non-volatile memories 110a-c are then read at 136 by the CPU 14 coupling read addresses to the non-volatile memories 110a-c and the I/O controller coupling control signals to the non-volatile memories 110a-c through line 137. The FARC 124 is then initialized at 140 by the CPU 14 coupling the row addresses through the raw write data bus 18D and the data bus 126, before continuing at 142.
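The power-up transfer of process 130 can be sketched as follows. The helper `read_nonvolatile` is a hypothetical stand-in for the CPU/I/O access path to the non-volatile memories 110a-c; the step numbers in the comments refer to FIG. 5.

```python
# Sketch of process 130: read failing-row addresses from each module's
# non-volatile memory and load them into the FARC at power-up.

def load_farc(module_ids, read_nonvolatile):
    """Initialize the FARC (step 140) and load it with every module's
    failing-row addresses (steps 136 and 142)."""
    farc = set()                               # FARC starts out empty
    for module in module_ids:
        farc.update(read_nonvolatile(module))  # addresses from non-volatile memory
    return farc

# Simulated non-volatile contents for three modules.
stored = {"102a": [2, 300], "102b": [], "102c": [5]}
farc = load_farc(stored, lambda m: stored[m])
assert farc == {2, 300, 5}
```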


In the event row addresses have not yet been stored in the non-volatile memories 110a-c, the memory controller 120 may determine which rows contain error-prone memory cells and program the non-volatile memories 110a-c with the addresses of such rows. The non-volatile memories 110a-c are initially programmed by the CPU 14 writing data to the DRAMs in the memory modules 102a-c and then reading the stored data from the DRAMs after the DRAMs have been refreshed over a period. Any errors that have arisen as a result of excessive discharge of memory cells during the refresh are detected by the ECC check/correct circuit 76. As the DRAMs are read, the row addresses coupled to the DRAMs through the address bus 18A are stored in address holding registers 128 and coupled to the FARC 124. If the read data are in error, the ECC check/correct circuit 76 outputs an R_ERROR signal that is coupled through line 148 to the memory control state machine 54. The memory control state machine 54 then processes the R_ERROR signal using the process 150 shown in FIG. 6. The process is initiated by the memory control state machine 54 upon receipt of the R_ERROR signal at step 154. The address holding register 128 is then read at 156, and a determination is made at 160 whether the row responsible for the R_ERROR signal being generated is a new row in which an error-prone memory cell has not previously been detected. If an error-prone memory cell was previously detected, the row address being output from the address holding register 128 has already been recorded for extra refreshes. The process 150 can therefore progress directly to the final continue step 162 without the need for further action.
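The discovery procedure above can be sketched as a write/wait/read-back loop. The helpers `write_row` and `read_row_ok` are hypothetical stand-ins for writing test data to a DRAM row and for the ECC check/correct circuit 76 reporting whether the row read back cleanly.

```python
# Sketch of error-prone row discovery: write a pattern, allow refresh to run
# for a period, then record each row whose readback raises R_ERROR.

def find_error_prone_rows(num_rows, write_row, read_row_ok):
    """Return the sorted addresses of rows that failed the ECC check,
    recording each failing row only once (steps 160/164 of process 150)."""
    for row in range(num_rows):
        write_row(row)                # write test data to every row
    failing = set()
    for row in range(num_rows):
        if not read_row_ok(row):      # ECC detected a data error in this row
            failing.add(row)          # record the row address for extra refreshes
    return sorted(failing)

# Simulated DRAM in which row 2 leaks too quickly to survive normal refresh.
leaky = {2}
assert find_error_prone_rows(8, lambda r: None, lambda r: r not in leaky) == [2]
```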


If an error-prone memory cell had not previously been detected in the current row, the row address being output from the address holding register 128 is transferred to the FARC 124 at step 164. This is accomplished by the memory control state machine 54 outputting a “FAIL” signal on line 132 that causes the FARC 124 to store the current row address, which is output from the address holding registers 128 on bus 138. The address is also appended at step 168 to the non-volatile memory 110 in the memory module 102a-c containing the DRAM having the error-prone memory cell. This is accomplished by coupling data identifying the row addresses containing error-prone memory cells to the raw write data bus 18D. The data identifying the row addresses are then coupled to the memory modules 102a-c for storage in the non-volatile memories 110a-c.


Once either the process 130 of FIG. 5 or the process 150 of FIG. 6 has been completed for all rows, the row addresses identifying rows containing one or more error-prone memory cells have been stored in the FARC 124. The memory controller 120 is then ready to insert extra refreshes of such rows. As is well known in the art, when an auto-refresh command is issued to a DRAM, an internal refresh counter in the DRAM generates row addresses that are used to select the rows being refreshed. However, since these row addresses are not coupled from the DRAMs to the memory controller 120, the address of each row being refreshed must be determined in the memory controller 120. This is accomplished by using a refresh shadow counter 170 to generate refresh row addresses in the same manner that the refresh counters in the DRAMs generate such addresses. Furthermore, in this embodiment, the addresses that are used for refreshing the memory cells in the DRAMs are generated by the memory controller 120. When the memory control state machine 54 issues an auto-refresh command to a DRAM, it outputs a trigger signal on line 174 that resets the refresh shadow counter 170 and the refresh timer 84 and causes the refresh shadow counter 170 to begin outputting incrementally increasing row addresses. These incrementally increasing row addresses are coupled to the DRAMs via the address bus 18A, and they are also coupled to the FARC 124 via bus 176. However, the most significant bit (“MSB”) of the row address is applied to an inverter 178 so that the FARC 124 receives a row address that is offset from the current row address by one-half the number of rows in the DRAMs. This offset row address is compared to the addresses of the rows containing error-prone memory cell(s) that are stored in the FARC 124. In the event of a match, the FARC 124 outputs a HIT signal on line 180.
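The comparison performed by the FARC 124 can be sketched as follows, assuming the 13-bit row addresses of FIG. 7. The shadow counter's current address has its MSB inverted (the role of inverter 178), so a stored failing address matches exactly half the array away from its own normally scheduled refresh.

```python
# Sketch of the FARC comparison with a 13-bit row address (per FIG. 7).

ROW_BITS = 13
MSB = 1 << (ROW_BITS - 1)

def farc_hit(shadow_count, failing_addresses):
    """Return True (a HIT on line 180) when the MSB-inverted shadow address
    matches a stored error-prone row address."""
    offset_address = shadow_count ^ MSB   # flip only the most significant bit
    return offset_address in failing_addresses

failing = {0b0000000000010}                    # the example row from FIG. 7
assert not farc_hit(0b0000000000010, failing)  # the row's own normal refresh
assert farc_hit(0b1000000000010, failing)      # half the rows later: HIT
```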


The memory control state machine 54 responds to the HIT signal by inserting an extra refresh of the row identified by the offset address. For this purpose, the address bus 18A receives all but the most significant bit of the row address from the refresh shadow counter 170 and the most significant bit from the FARC 124 on line 182. As a result, the row identified by the offset is refreshed twice as often as other rows, i.e., once when the address is output from the refresh shadow counter 170 and once when the row address offset from the address by one-half the number of rows is output from the refresh shadow counter 170.


The manner in which extra refreshes of rows occurs will be apparent with reference to FIG. 7, which shows the output of the refresh shadow counter 170 (FIG. 4) on the left hand side and the addresses of the rows actually being refreshed on the right hand side. Every 64 ms, the refresh shadow counter 170 outputs row addresses that increment from “0000000000000” to “1111111111111.” For purposes of illustration, assume that row “0000000000010” contains one or more error-prone memory cells. This row will be refreshed in normal course when the refresh shadow counter 170 outputs “0000000000010” on the third count of the counter 170. When the refresh shadow counter 170 has counted three counts past one-half of the rows, it outputs count “1000000000010.” However, the MSB is inverted by the inverter 178 so that the FARC 124 receives a count of “0000000000010.” Since this count corresponds to an address for a row containing one or more error-prone memory cells, a refresh of row “0000000000010” is inserted between row “1000000000010” and row “1000000000011,” as shown on the right hand side of FIG. 7.
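The insertion behavior of FIG. 7 can be simulated directly. The sketch below uses 4-bit row addresses instead of 13 to keep the trace short; the mechanism is otherwise the same.

```python
# Simulation of FIG. 7's extra-refresh insertion with 4-bit addresses.

ROW_BITS = 4
MSB = 1 << (ROW_BITS - 1)
failing = {0b0010}                       # assume row 2 holds an error-prone cell

refreshed = []
for count in range(1 << ROW_BITS):       # one full pass of the shadow counter
    refreshed.append(count)              # row refreshed in the normal course
    if count ^ MSB in failing:           # HIT: MSB-inverted count matches
        refreshed.append(count ^ MSB)    # insert the extra refresh here

assert refreshed.count(0b0010) == 2                       # refreshed twice per cycle
assert refreshed[refreshed.index(0b1010) + 1] == 0b0010   # inserted right after 1010
```

Because the inserted refresh lands half a cycle after the row's normal refresh, the error-prone row's effective refresh interval is halved while every other row keeps the normal cadence.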


Although the memory controller 120 refreshes rows containing one or more error-prone memory cells twice as often as other rows, it may alternatively refresh rows containing error-prone memory cells more frequently. This can be accomplished by inverting the MSB and the next to MSB (“NTMSB”) of the row address coupled from the refresh shadow counter 170 to the FARC 124. A row would then be refreshed when the refresh shadow counter 170 outputs its address, when the refresh shadow counter 170 outputs its address with the NTMSB inverted, when the refresh shadow counter 170 outputs its address with the MSB inverted, and when the refresh shadow counter 170 outputs its address with both the MSB and the NTMSB inverted. Other variations will be apparent to one skilled in the art.
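The more-frequent variant can be sketched the same way: inverting both the MSB and the NTMSB makes a stored failing address match at four evenly spaced counter values per cycle. As before, 4-bit addresses are used purely for illustration.

```python
# Sketch of the fourfold-refresh variant using MSB and NTMSB inversion.

ROW_BITS = 4
MSB = 1 << (ROW_BITS - 1)
NTMSB = 1 << (ROW_BITS - 2)
failing_row = 0b0001

# Counter values at which this row is refreshed: its own address plus the
# three variants with the MSB and/or the NTMSB inverted.
triggers = sorted({failing_row ^ mask for mask in (0, NTMSB, MSB, MSB | NTMSB)})
assert triggers == [0b0001, 0b0101, 0b1001, 0b1101]   # four refreshes per cycle
```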


A computer system 190 according to another embodiment of the invention is shown in FIG. 8. In this embodiment, the computer system 190 includes the conventional memory controller 30 of FIG. 1 coupled to memory modules 194a-c. Each of the memory modules 194a-c includes several DRAMs 196, although only one DRAM is shown in FIG. 8. The DRAM 196 includes the FARC 124, which is coupled to a refresh counter 198 through inverting circuitry 200. The FARC 124 is initialized with data stored in a non-volatile memory 202 that identifies the addresses of the rows containing one or more error-prone memory cells. The non-volatile memory 202 is initially programmed in the same manner that the non-volatile memories 110a-c were programmed, as explained above, using ECC circuitry 204. The inverting circuitry 200 inverts appropriate bits of refresh addresses generated by the refresh counter 198 to schedule extra refreshes of rows containing one or more error-prone memory cells. The DRAM 196 also includes a memory control state machine 210 that controls the operation of the above-described components.


A computer system 220 according to another embodiment of the invention is shown in FIG. 9. This embodiment includes several memory modules 224a-c coupled to a memory controller 230. The memory modules 224a-c each include the ECC generator 72 and ECC check/correct circuit 76 of FIGS. 2 and 3 as well as the other components that are used to determine which rows contain one or more error-prone memory cells. The computer system 220 does not include a syndrome bus 40, of course, since the ECC syndromes are generated in the memory modules 224a-c. However, once each of the memory modules 224a-c has determined the addresses of rows containing one or more error-prone memory cells, it programs a non-volatile memory device 234 in the memory module with those addresses. DRAMs 238 each include the FARC 124, the refresh counter 198, the inverting circuitry 200, and the memory control state machine 210 of FIG. 8 to schedule extra refreshes of rows containing one or more error-prone memory cells, as previously explained.


Although the components of the various embodiments have been explained as being in either a memory controller, a memory module or a DRAM, it will be understood that there is substantial flexibility in the location of many components. For example, the FARC 124 may be either in the memory controller as shown in FIG. 4, in the DRAMs as shown in FIGS. 8 and 9, or in the memory modules separate from the DRAMs. Furthermore, although the present invention has been described with reference to the disclosed embodiments, persons skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.

Claims
  • 1. An apparatus, comprising: a memory module, including: a memory device; and a data record configured to store in the memory module identifying information corresponding to memory cells in the memory device having relatively weak data retention characteristics; and a memory controller coupled to the memory module, the memory controller configured to transfer at least some of the identifying information from the memory module to the memory controller, the memory controller further configured to invert a most significant bit of an address to be refreshed, compare the address to be refreshed with the inverted most significant bit to the identifying information, and apply signals to the memory module that cause memory cells in the memory device having relatively weak data retention characteristics to be refreshed at a rate that is faster than a rate at which other memory cells in the memory device are refreshed based on the comparison of the address to be refreshed with the inverted most significant bit to the identifying information indicating a match.
  • 2. The apparatus of claim 1, wherein the memory controller further includes a programmable storage device configured to store at least some of the identifying information transferred from the memory module to the memory controller.
  • 3. The apparatus of claim 1, wherein the memory controller includes: a comparator configured to compare received refresh addresses to the transferred identifying information and to generate a signal responsive to a match between a characteristic of the received refresh addresses and a characteristic of the transferred identifying information; and a control circuit configured to apply a refresh command to the memory module responsive to the signal.
  • 4. The apparatus of claim 1, wherein the memory device comprises a dynamic random access memory device.
  • 5. The apparatus of claim 1, wherein the memory controller includes an address generating circuit configured to sequentially output the address to be refreshed.
  • 6. The apparatus of claim 5, wherein the address generating circuit comprises a refresh shadow counter configured to increment responsive to a periodic signal at the same rate that a refresh counter in the memory device increments.
  • 7. An apparatus, comprising: a memory including a data record configured to store data indicating an address of cells of the memory having relatively weak data retention characteristics; and a memory controller coupled to the memory, the memory controller configured to refresh the cells of the memory having relatively weak data retention characteristics more frequently than other cells in the memory responsive to a match signal, wherein the memory controller is further configured to flip a most significant bit of an address of each cell as the cells are being refreshed and is further configured to compare the address with the flipped most significant bit to the data record indicating the address of cells of the memory having relatively weak data retention characteristics to generate the match signal.
  • 8. The apparatus of claim 7, wherein the memory controller is configured to sequentially refresh the cells of the memory.
  • 9. The apparatus of claim 7, wherein the memory controller, based on the match signal, is configured to refresh the address of the cell with relatively weak data retention characteristics that matched the address with the flipped most significant bit before moving to a next address in the sequence.
  • 10. The apparatus of claim 7, wherein the memory controller is further configured to flip a next most significant bit of the address of each cell as the cells are being refreshed.
  • 11. The apparatus of claim 7, wherein the memory and the memory controller are part of a dynamic random access memory module included in a computing system.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a divisional of pending U.S. patent application Ser. No. 12/943,830, filed Nov. 10, 2010, which is a divisional of U.S. patent application Ser. No. 12/235,298, filed Sep. 22, 2008, and issued as U.S. Pat. No. 7,836,374 on Nov. 16, 2010, which is a divisional of U.S. patent application Ser. No. 11/269,248, filed Nov. 7, 2005, and issued as U.S. Pat. No. 7,428,687 on Sep. 23, 2008, which is a divisional of U.S. patent application Ser. No. 10/839,942, filed May 6, 2004, and issued as U.S. Pat. No. 7,099,221 on Aug. 29, 2006. These applications and patents are incorporated herein by reference in their entirety for any purpose.

US Referenced Citations (284)
Number Name Date Kind
4006468 Webster Feb 1977 A
4334295 Nagami Jun 1982 A
4369511 Kimura et al. Jan 1983 A
4380812 Ziegler et al. Apr 1983 A
4433211 McCalmont et al. Feb 1984 A
4493081 Schmidt Jan 1985 A
4598402 Matsumoto et al. Jul 1986 A
4617660 Sakamoto Oct 1986 A
4667330 Kumagai May 1987 A
4694454 Matsuura Sep 1987 A
4706249 Nakagawa et al. Nov 1987 A
4710934 Traynor Dec 1987 A
4766573 Takemae Aug 1988 A
4780875 Sakai Oct 1988 A
4858236 Ogasawara Aug 1989 A
4860325 Aria et al. Aug 1989 A
4862463 Chen Aug 1989 A
4888773 Arlington et al. Dec 1989 A
4918692 Hidaka et al. Apr 1990 A
4937830 Kawashima et al. Jun 1990 A
4958325 Nakagome et al. Sep 1990 A
5012472 Arimoto et al. Apr 1991 A
5033026 Tsujimoto Jul 1991 A
5056089 Furuta et al. Oct 1991 A
5127014 Raynham Jun 1992 A
5172339 Noguchi et al. Dec 1992 A
5208782 Sakuta et al. May 1993 A
5278796 Tillinghast et al. Jan 1994 A
5291498 Jackson et al. Mar 1994 A
5307356 Fifield Apr 1994 A
5313425 Lee et al. May 1994 A
5313464 Reiff May 1994 A
5313475 Cromer et al. May 1994 A
5313624 Harriman et al. May 1994 A
5321661 Iwakiri et al. Jun 1994 A
5331601 Parris Jul 1994 A
5335201 Walther et al. Aug 1994 A
5369651 Marisetty Nov 1994 A
5418796 Price et al. May 1995 A
5428630 Weng et al. Jun 1995 A
5432802 Tsuboi Jul 1995 A
5446695 Douse et al. Aug 1995 A
5448578 Kim Sep 1995 A
5450424 Okugaki et al. Sep 1995 A
5455801 Blodgett et al. Oct 1995 A
5459742 Cassidy et al. Oct 1995 A
5481552 Aldereguia et al. Jan 1996 A
5509132 Matsuda et al. Apr 1996 A
5513135 Dell et al. Apr 1996 A
5515333 Fujita et al. May 1996 A
5555527 Kotani et al. Sep 1996 A
5588112 Dearth et al. Dec 1996 A
5596521 Tanaka et al. Jan 1997 A
5600662 Zook Feb 1997 A
5604703 Nagashima Feb 1997 A
5623506 Dell et al. Apr 1997 A
5629898 Idei et al. May 1997 A
5631914 Kashida et al. May 1997 A
5644545 Fisch Jul 1997 A
5703823 Douse et al. Dec 1997 A
5706225 Buchenrieder et al. Jan 1998 A
5712861 Inoue et al. Jan 1998 A
5732092 Shinohara Mar 1998 A
5740188 Olarig Apr 1998 A
5742554 Fujioka Apr 1998 A
5754753 Smelser May 1998 A
5761222 Baldi Jun 1998 A
5765185 Lambrache et al. Jun 1998 A
5784328 Irrinki et al. Jul 1998 A
5784391 Konigsburg Jul 1998 A
5790559 Sato Aug 1998 A
5808952 Fung et al. Sep 1998 A
5841418 Bril et al. Nov 1998 A
5864569 Roohparvar Jan 1999 A
5878059 Maclellan Mar 1999 A
5896404 Kellogg et al. Apr 1999 A
5909404 Schwarz Jun 1999 A
5912906 Wu et al. Jun 1999 A
5925138 Klein Jul 1999 A
5953278 Mcadams et al. Sep 1999 A
5961660 Capps, Jr. et al. Oct 1999 A
5963103 Blodgett Oct 1999 A
6009547 Jaquette et al. Dec 1999 A
6009548 Chen et al. Dec 1999 A
6018817 Chen et al. Jan 2000 A
6041001 Estakhri Mar 2000 A
6041430 Yamauchi Mar 2000 A
6052815 Zook Apr 2000 A
6052818 Dell et al. Apr 2000 A
6078543 Kim Jun 2000 A
6085283 Toda Jul 2000 A
6085334 Giles et al. Jul 2000 A
6092231 Sze Jul 2000 A
6101614 Gonzales et al. Aug 2000 A
6125467 Dixon Sep 2000 A
6134167 Atkinson Oct 2000 A
6137739 Kim Oct 2000 A
6166908 Samaras Dec 2000 A
6178537 Roohparvar Jan 2001 B1
6199139 Katayama et al. Mar 2001 B1
6212118 Fujita Apr 2001 B1
6212631 Springer et al. Apr 2001 B1
6216246 Shau Apr 2001 B1
6216247 Creta et al. Apr 2001 B1
6219807 Ebihara et al. Apr 2001 B1
6223309 Dixon et al. Apr 2001 B1
6233717 Choi May 2001 B1
6262925 Yamasaki Jul 2001 B1
6279072 Williams et al. Aug 2001 B1
6310825 Furuyama Oct 2001 B1
6324119 Kim Nov 2001 B1
6349068 Takemae et al. Feb 2002 B2
6349390 Dell et al. Feb 2002 B1
6353910 Carnevale et al. Mar 2002 B1
6397290 Williams et al. May 2002 B1
6397357 Cooper May 2002 B1
6397365 Brewer et al. May 2002 B1
6404687 Yamasaki Jun 2002 B2
6426908 Hidaka Jul 2002 B1
6438066 Ooishi et al. Aug 2002 B1
6442644 Gustavson et al. Aug 2002 B1
6457153 Yamamoto et al. Sep 2002 B2
6484246 Tsuchida et al. Nov 2002 B2
6487136 Hidaka Nov 2002 B2
6510537 Lee Jan 2003 B1
6518595 Lee Feb 2003 B2
6526537 Kishino Feb 2003 B2
6545899 Derner et al. Apr 2003 B1
6549460 Nozoe et al. Apr 2003 B2
6556497 Cowles et al. Apr 2003 B2
6557072 Osborn Apr 2003 B2
6560155 Hush May 2003 B1
6570803 Kyung May 2003 B2
6584543 Williams et al. Jun 2003 B2
6591394 Lee et al. Jul 2003 B2
6594796 Chiang Jul 2003 B1
6601211 Norman Jul 2003 B1
6603694 Frankowsky et al. Aug 2003 B1
6603696 Janzen Aug 2003 B2
6603697 Janzen Aug 2003 B2
6603698 Janzen Aug 2003 B2
6609236 Watanabe et al. Aug 2003 B2
6614698 Ryan et al. Sep 2003 B2
6618281 Gordon Sep 2003 B1
6618314 Fiscus et al. Sep 2003 B1
6618319 Ooishi et al. Sep 2003 B2
6628558 Fiscus Sep 2003 B2
6633509 Scheuerlein et al. Oct 2003 B2
6636444 Uchida et al. Oct 2003 B2
6636446 Lee et al. Oct 2003 B2
6646942 Janzen Nov 2003 B2
6662333 Zhang et al. Dec 2003 B1
6665231 Mizuno et al. Dec 2003 B2
6678860 Lee Jan 2004 B1
6681332 Byrne et al. Jan 2004 B1
6697926 Johnson et al. Feb 2004 B2
6697992 Ito et al. Feb 2004 B2
6701480 Karpuszka et al. Mar 2004 B1
6704230 DeBrosse et al. Mar 2004 B1
6715104 Imbert de Tremiolles et al. Mar 2004 B2
6715116 Lester et al. Mar 2004 B2
6721223 Matsumoto et al. Apr 2004 B2
6735726 Muranaka et al. May 2004 B2
6751143 Morgan et al. Jun 2004 B2
6754858 Borkenhagen et al. Jun 2004 B2
6775190 Setogawa Aug 2004 B2
6778457 Burgan Aug 2004 B1
6781908 Pelley et al. Aug 2004 B1
6785837 Kilmer et al. Aug 2004 B1
6788616 Takahashi Sep 2004 B2
6789209 Suzuki et al. Sep 2004 B1
6792567 Laurent Sep 2004 B2
6795362 Nakai et al. Sep 2004 B2
6799291 Kilmer et al. Sep 2004 B1
6807108 Maruyama et al. Oct 2004 B2
6810449 Barth et al. Oct 2004 B1
6819589 Aakjer Nov 2004 B1
6819624 Acharya et al. Nov 2004 B2
6834022 Derner et al. Dec 2004 B2
6920523 Le et al. Jul 2005 B2
6934199 Johnson et al. Aug 2005 B2
6940773 Poechmueller Sep 2005 B2
6940774 Perner Sep 2005 B2
6944074 Chung et al. Sep 2005 B2
6954387 Kim et al. Oct 2005 B2
6965537 Klein et al. Nov 2005 B1
7002397 Kubo et al. Feb 2006 B2
7027337 Johnson et al. Apr 2006 B2
7051260 Ito et al. May 2006 B2
7095669 Oh Aug 2006 B2
7096407 Olarig Aug 2006 B2
7099221 Klein Aug 2006 B2
7116602 Klein Oct 2006 B2
7117420 Yeung et al. Oct 2006 B1
7149141 Johnson et al. Dec 2006 B2
7167403 Riho et al. Jan 2007 B2
7171605 White Jan 2007 B2
7184351 Ito et al. Feb 2007 B2
7184352 Klein et al. Feb 2007 B2
7190628 Choi et al. Mar 2007 B2
7216198 Ito et al. May 2007 B2
7225390 Ito et al. May 2007 B2
7231488 Poechmueller Jun 2007 B2
7249289 Muranaka et al. Jul 2007 B2
7254067 Johnson et al. Aug 2007 B2
7269085 Sohn et al. Sep 2007 B2
7272066 Klein Sep 2007 B2
7272773 Cargnoni et al. Sep 2007 B2
7277345 Klein Oct 2007 B2
7280386 Klein Oct 2007 B2
7317648 Jo Jan 2008 B2
7318183 Ito et al. Jan 2008 B2
7340668 Klein Mar 2008 B2
7372749 Poechmueller May 2008 B2
7428687 Klein Sep 2008 B2
7444577 Best et al. Oct 2008 B2
7447973 Klein Nov 2008 B2
7447974 Klein Nov 2008 B2
7453758 Hoffmann Nov 2008 B2
7461320 Klein Dec 2008 B2
7478285 Fouquet-Lapar Jan 2009 B2
7493531 Ito et al. Feb 2009 B2
7500171 Suzuki Mar 2009 B2
7526713 Klein Apr 2009 B2
7539926 Lesea May 2009 B1
7558142 Klein Jul 2009 B2
7836374 Klein Nov 2010 B2
7894289 Pawlowski Feb 2011 B2
7900120 Pawlowski et al. Mar 2011 B2
8413007 Pawlowski et al. Apr 2013 B2
20010023496 Yamamoto et al. Sep 2001 A1
20010029592 Walker et al. Oct 2001 A1
20010044917 Lester et al. Nov 2001 A1
20010052090 Mio Dec 2001 A1
20010052102 Roohparvar Dec 2001 A1
20020013924 Yamamoto Jan 2002 A1
20020029316 Williams et al. Mar 2002 A1
20020144210 Borkenhagen et al. Oct 2002 A1
20020152444 Chen et al. Oct 2002 A1
20020162069 Laurent Oct 2002 A1
20020184592 Koga et al. Dec 2002 A1
20030009721 Hsu et al. Jan 2003 A1
20030070054 Williams et al. Apr 2003 A1
20030093744 Leung et al. May 2003 A1
20030097608 Rodeheffer et al. May 2003 A1
20030101405 Shibata May 2003 A1
20030128612 Moore et al. Jul 2003 A1
20030149855 Shibata et al. Aug 2003 A1
20030167437 DeSota et al. Sep 2003 A1
20030191888 Klein Oct 2003 A1
20040008562 Ito et al. Jan 2004 A1
20040064646 Emerson et al. Apr 2004 A1
20040083334 Chang et al. Apr 2004 A1
20040098654 Cheng et al. May 2004 A1
20040100847 Derner et al. May 2004 A1
20040117723 Foss Jun 2004 A1
20040205429 Yoshida et al. Oct 2004 A1
20040225944 Brueggen Nov 2004 A1
20050099868 Oh May 2005 A1
20050146958 Moore et al. Jul 2005 A1
20050249010 Klein Nov 2005 A1
20050289444 Klein Dec 2005 A1
20060010339 Klein Jan 2006 A1
20060013052 Klein Jan 2006 A1
20060044913 Klein Mar 2006 A1
20060056259 Klein Mar 2006 A1
20060056260 Klein Mar 2006 A1
20060069856 Klein Mar 2006 A1
20060152989 Klein Jul 2006 A1
20060158949 Klein Jul 2006 A1
20060158950 Klein Jul 2006 A1
20060206769 Klein Sep 2006 A1
20060218469 Klein Sep 2006 A1
20070268756 Johnson et al. Nov 2007 A1
20080002503 Klein Jan 2008 A1
20080092016 Pawlowski Apr 2008 A1
20080109705 Pawlowski et al. May 2008 A1
20080151671 Klein Jun 2008 A1
20090024884 Klein Jan 2009 A1
20090067267 Johnson et al. Mar 2009 A1
20100054070 Klein Mar 2010 A1
20110038217 Johnson et al. Feb 2011 A1
20130003467 Klein Jan 2013 A1
20130254626 Pawlowski et al. Sep 2013 A1
Non-Patent Literature Citations (2)
Entry
Idei, Y. et al., "Dual-Period Self-Refresh Scheme for Low-Power DRAM's with On-Chip PROM Mode Register", IEEE Journal of Solid-State Circuits, vol. 33, no. 2, Feb. 1998, pp. 253-259.
Stojko, J. et al., “Error-Correction Code”, IBM Technical Disclosure Bulletin, vol. 10, No. 10, Mar. 1968.
Related Publications (1)
Number Date Country
20140181613 A1 Jun 2014 US
Divisions (4)
Number Date Country
Parent 12943830 Nov 2010 US
Child 14189607 US
Parent 12235298 Sep 2008 US
Child 12943830 US
Parent 11269248 Nov 2005 US
Child 12235298 US
Parent 10839942 May 2004 US
Child 11269248 US