Embodiments of the disclosure relate to integrated circuits, in particular to methods and apparatuses for performing write operations on digital memory devices at a granularity level less than a data word.
For well over three decades, semiconductor memories such as DRAMs, SRAMs, ROMs, EPROMs, EEPROMs, Flash EEPROMs, Ferroelectric RAMs, MAGRAMs and others have played a vital role in many electronic systems. Their functions for data storage, code (instruction) storage, and data retrieval/access (Read/Write) continue to span a wide variety of applications. Usage of these memories in both stand-alone/discrete memory product forms, as well as embedded forms such as, for example, memory integrated with other functions like logic, in a module or monolithic IC, continues to grow. Cost, operating power, bandwidth, latency, ease of use, the ability to support broad applications (balanced vs. imbalanced accesses), and nonvolatility are all desirable attributes in a wide range of applications.
Soft error correction is a challenge facing digital memory designers as memory cell density within digital memory designs, in particular DRAM and SRAM designs, continues to increase. As density increases, a single random event, such as an alpha particle collision, is more likely to cause soft errors or bit flips. Also, as density increases, such events are more likely to result in a larger number of flipped bits than in lower-density memory devices. As a result, soft error correction is of increasing concern, and chip designers take care to choose semiconductor and packaging materials to minimize the occurrence of cell or bit upset events. However, in most systems, soft errors are inevitable and must be corrected for.
Typically, error correction schemes are employed to detect and correct soft errors. For example, forward error correction may be used; such schemes store redundant data with each data word. Alternatively, roll-back error correction may be used; such schemes use error correction codes, such as parity or Hamming codes, to detect and, where possible, correct bit errors. Typical implementations utilize single-bit error correction/single-bit error detection schemes; error correction schemes capable of correcting additional bit errors are also known. During a typical Read-Modify-Write (RMW) cycle, a data word is read from memory and an error correction engine detects any bit errors. Then, assuming an error is detected, the entire data word, including the corrected bit(s), is written back to the memory device. The access operations required to do so, including precharging the bit lines, introduce delay and consume power. In some systems, a data word may be distributedly stored across multiple memory devices. In these systems, the entire corrected word is written back, even though there may be only a single bit error corresponding to a single memory cell in only one of the memory devices, resulting in increased latency and power consumption across all memory devices.
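The single-error-correcting behavior described above can be illustrated with a short sketch. The following Python model is not taken from the disclosure; it is a minimal, hypothetical implementation of a Hamming-style code (function names invented for illustration), showing how a syndrome identifies the one flipped bit that a conventional RMW cycle would repair by rewriting the whole word.

```python
def hamming_syndrome(codeword_bits):
    """XOR of the 1-indexed positions of all set bits: 0 for a valid
    codeword, else the position of a single flipped bit."""
    s = 0
    for pos, bit in enumerate(codeword_bits, start=1):
        if bit:
            s ^= pos
    return s


def hamming_encode(data_bits):
    """Place data bits at non-power-of-two positions, then set the
    parity bits (positions 1, 2, 4, ...) so the syndrome is zero."""
    r = 0
    while (1 << r) < r + len(data_bits) + 1:
        r += 1
    bits = iter(data_bits)
    code = [0 if pos & (pos - 1) == 0 else next(bits)
            for pos in range(1, r + len(data_bits) + 1)]
    s = hamming_syndrome(code)
    for i in range(r):
        if s >> i & 1:
            code[(1 << i) - 1] = 1  # toggling parity bit 2^i clears syndrome bit i
    return code


def correct_single_bit(codeword_bits):
    """Return (corrected codeword, index of the flipped bit, or None)."""
    s = hamming_syndrome(codeword_bits)
    if s == 0:
        return list(codeword_bits), None
    fixed = list(codeword_bits)
    fixed[s - 1] ^= 1               # repair only the erroneous bit
    return fixed, s - 1
```

A conventional RMW would write all codeword bits back to memory; the embodiments described below instead write back only the single corrected position returned by `correct_single_bit`.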
Embodiments of the disclosure will be readily understood by the following detailed description in conjunction with the accompanying drawings. Embodiments of the disclosure are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which are shown, by way of illustration, specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims and their equivalents.
Various operations may be described as multiple discrete operations in turn, in a manner that may be helpful in understanding various embodiments; however, the order of description should not be construed to imply that these operations are order dependent. Also, embodiments may have fewer operations than described. A description of multiple discrete operations should not be construed to imply that all operations are necessary.
The term “coupled,” along with its derivatives, may be used. In particular embodiments, “coupled” may mean that two or more elements are in direct physical or electrical contact. However, “coupled” may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other.
The description may use the phrases “various embodiments,” “in an embodiment,” or “according to one embodiment,” each of which may refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments described in the present disclosure, are synonymous.
Various embodiments may employ a controller to write one or more bits of a data word to a digital memory device, wherein the data word comprises multiple bits including the one or more bits, and the writing may be performed at a granularity level less than a data word. In embodiments, the granularity level may be smaller than a nibble. A nibble, as used herein, is a data word smaller than a byte. In embodiments, a nibble may be accessed by a digital memory device serially, rather than in parallel. In other embodiments, a nibble may be accessed in parallel. In embodiments, the writing may be performed particularly for the one or more bit(s) to be written. In embodiments, bit lines corresponding to the memory cell(s) for the bit(s) to be written may be precharged, wherein the precharging may occur at a granularity level smaller than a memory bank; in embodiments, the precharging may occur at a granularity level smaller than a data word; in embodiments, the precharging may be performed particularly for the bit(s) to be written. In embodiments, the memory controller may determine an idle time at which to perform the write operation. In embodiments, the memory controller may perform intervening access operations (such as read, write, precharge, or other operations) on the memory device containing the memory cell(s) corresponding to the one or more bits prior to determining an idle time at which to write the one or more bits to the corresponding memory cell(s).
In embodiments, an error correction engine may be employed to determine whether any bits of a data word read from a memory device are erroneous and, in embodiments, to correct one or more erroneous data bits. In embodiments, a controller may determine and/or receive a corrected bit to be written to the memory device. In embodiments, the controller may write the corrected bit to the memory device, the writing occurring at a granularity level smaller than a data word, a nibble, or performed particularly for the corrected bits.
In embodiments, the data word may be distributedly stored across multiple digital memory devices, such as for example multiple dual in-line memory modules (DIMM) or other memory devices. In embodiments, a controller may be configured to write a corrected or altered bit of the distributed data word by performing a write operation on only the memory device containing a memory cell corresponding to the corrected or altered bit, while performing no write operations on some or all of the remaining memory devices.
The term “data word” is used throughout. This term may refer, in embodiments, to multiple bits corresponding to a logical unit of data. Such a unit may include, in embodiments, 2, 4, 8, 16, 32, or 64 bits. In various embodiments, a data word may comprise any number of bits greater than a single bit. In embodiments, all bits of a data word may be accessed in parallel in a first access operation. In embodiments, some burst access operations may occur in a serial or sequential manner following a first access operation. In embodiments, some nibble access operations may occur in a serial or sequential manner following a first access operation.
According to various embodiments,
Address command and control circuit 107 may be configured to receive, from I/O terminals not shown, an address corresponding to a particular one or more of memory cells 101 and a corresponding command to write values to the particular one or more of memory cells 101. The particular one or more memory cells 101 may, in embodiments, comprise less than a data word. In embodiments, the particular one or more memory cells 101 may comprise less than a nibble. In embodiments, address command and control circuit 107 may be configured to receive an address corresponding to only a particular one of memory cells 101. The received address may comprise a row portion corresponding to one of word lines 103 and a column portion corresponding to one or more of bit lines 105. Address command and control circuit 107 may be configured to pass the row portion of the received address to row decoder 113 and the column portion to column decoder 109, which may be configured to decode the received row and column portions, respectively.
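The row/column decomposition described above can be illustrated with a rough sketch. The bit widths below are hypothetical (they are not specified by the disclosure), and the function name is invented for illustration: a flat cell address is split into the portion routed to the row decoder and the portion routed to the column decoder.

```python
ROW_BITS = 13   # hypothetical geometry: 8192 word lines
COL_BITS = 10   # hypothetical geometry: 1024 column addresses

def split_address(addr):
    """Split a flat cell address into a row portion (for the row
    decoder) and a column portion (for the column decoder)."""
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    return row, col
```

For example, an address assembled as `(5 << COL_BITS) | 7` decomposes back into row 5 and column 7.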
Sense amplifier and precharge circuit 111, which may be coupled to column decoder 109, may be configured to precharge a particular one or more of bit lines 105 corresponding to the received column portion and/or the particular one or more of memory cells 101. In embodiments, sense amplifier and precharge circuit 111 may be configured to perform the precharging at a granularity level less than an entire bank of memory cells. In embodiments, it may be configured to perform the precharging at a granularity level of less than a byte or a nibble, or configured to perform the precharging particularly for the bit line(s) corresponding to the bit(s) to be written. For example, if the received address corresponds to three of memory cells 101, then sense amplifier and precharge circuit 111 may be configured, in embodiments, to precharge those of bit lines 105 corresponding to those particular three memory cells.
Column decoder 109 may be configured to cause sense amplifiers within sense amplifier and precharge circuit 111 to drive the particular one or more of bit lines 105 to one or more voltage values corresponding to one or more logical bit values to be written to the particular one or more of corresponding memory cells 101. Row decoder 113 may be configured to receive the row portion of the received address and to activate a one of word lines 103 corresponding to the particular one or more of memory cells 101 to be written. Such activation of one of word lines 103 may serve to activate the particular one or more memory cells 101 connected to the activated one of word lines 103. In embodiments, additional memory cells 101 may also be activated. In embodiments, additional action may need to be taken to activate the particular one or more of memory cells 101. Such activation of the particular one or more memory cells 101 may cause the voltages driven onto the particular one or more bit lines 105 to be input, with assistance from sense amplifiers within sense amplifier and precharge circuit 111, to storage element(s) within the particular one or more of memory cells 101, thus completing a write operation to the particular one or more memory cells 101.
In embodiments, only a single one of memory cells 101 may be activated and a corresponding data value input into its storage element. In embodiments, multiple memory cells 101 numbering less than a data word may be activated and corresponding data values input into their corresponding storage elements. In this way, memory device unit 100 may be configured to be operated to perform a write operation at a granularity less than a whole data word, less than a nibble, or particularly for the bit(s) of data to be written. In particular, a single bit of a data word may be written in embodiments to a corresponding memory cell 101 without simultaneously writing any other bits of the data word. In alternative embodiments, memory device unit 100 may be configured to be operated to write multiple bits of data to multiple memory cells 101 comprising less than a whole data word. Thus, less power may be consumed by, for example, precharging less than an entire memory bank, or precharging at a granularity less than a subbank, array, subarray, data word or nibble, or precharging particularly for the bit line(s) corresponding to the bit(s) to be written. Also, memory device unit 100 may consume less power by virtue of writing at a granularity level less than a data word or a nibble, or by performing a write operation particularly for the bit(s) to be written. In embodiments, memory device unit 100 may perform write operations with reduced latency by not being required to wait for all bit lines in a memory bank to be precharged before writing the bit(s) to be written. Also, latency may be reduced by writing at a granularity level less than a data word or a nibble, or particularly for the bit(s) to be written.
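The precharge-then-write sequence of the preceding paragraphs can be summarized behaviorally. The sketch below is an illustrative software model only, not a circuit description, and the class and method names are invented for the example: it tracks which bit lines have been precharged and writes only the addressed cells, refusing to write a cell whose bit line was not precharged.

```python
class MemoryBankModel:
    """Behavioral sketch of sub-word writes: only the named bit
    lines are precharged, and only the addressed cells are written."""

    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]
        self.precharged = set()

    def precharge(self, bit_lines):
        # precharge only the bit lines for the bit(s) to be written,
        # rather than every bit line in the bank
        self.precharged = set(bit_lines)

    def write_bits(self, row, col_value_pairs):
        for col, value in col_value_pairs:
            if col not in self.precharged:
                raise RuntimeError("bit line %d not precharged" % col)
            self.cells[row][col] = value
        self.precharged.clear()  # release after the write completes
```

Writing a single corrected bit then touches exactly one bit line and one cell: `bank.precharge([3])` followed by `bank.write_bits(2, [(3, 1)])` leaves every other cell, and every other bit line, untouched.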
In embodiments the bits to be written to the particular one or more of memory cells 101 may include one or more altered bits, according to various embodiments. Such altered bits may, in embodiments, be corrected bits corresponding to erroneous bits detected by an error correction engine or other device. In embodiments, such erroneous bits may have been caused by any of various soft errors. In embodiments, the number of altered bits may equal the number of the particular one or more of memory cells 101. In other embodiments, the number of altered bits may be fewer than the number of the particular one or more of memory cells 101.
Once an idle time is determined, the controller may command the memory device to precharge one or more bit lines 211. In embodiments, the one or more bit lines may correspond to a granularity level less than a bank, subbank, array, subarray, data word, or nibble. In embodiments, the memory device may precharge bit lines particular to the determined corrected bit(s) to be written. Once precharged, the corrected bit(s) may be written to corresponding memory cells of the digital memory device 213. In this way, erroneous bits caused by soft errors may be corrected during an idle time of the device; this may, in embodiments, improve performance by not delaying scheduled operations that are not affected by the soft error. Also, because the precharging and writing of only a small number of bits may in embodiments require only a very small amount of time and/or power, error correction may be performed with virtually no impact on the operating speed or power consumption of the memory device. In embodiments, a timer or other mechanism may be employed as a fail-safe in the event that an idle time is not determined within a reasonable amount of time. In embodiments, the controller may write the corrected bit without waiting for an idle time if a READ operation is scheduled for the data word containing the erroneous bit. In embodiments, the controller may abandon the writing of the corrected bit if a WRITE operation is scheduled for the data word containing the erroneous bit.
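The idle-time deferral and fail-safe timer described above can be sketched as a small scheduler. This is an assumption-laden illustration, not the disclosed controller: the class name, the `device_idle` flag, and the fixed deferral window are all invented for the example.

```python
import time


class CorrectionScheduler:
    """Sketch of deferred single-bit write-back: corrected bits are
    held until the device is idle, with a per-entry deadline acting
    as the fail-safe timer."""

    def __init__(self, max_defer_s=0.001):
        self.pending = []            # (deadline, address, bit_value)
        self.max_defer_s = max_defer_s

    def defer(self, address, bit_value, now=None):
        now = time.monotonic() if now is None else now
        self.pending.append((now + self.max_defer_s, address, bit_value))

    def due(self, device_idle, now=None):
        """Return corrections to issue now: all of them if the device
        is idle, otherwise only those past their fail-safe deadline."""
        now = time.monotonic() if now is None else now
        if device_idle:
            issue, self.pending = self.pending, []
        else:
            issue = [p for p in self.pending if p[0] <= now]
            self.pending = [p for p in self.pending if p[0] > now]
        return [(addr, bit) for _, addr, bit in issue]
```

Under this sketch, a correction deferred while the device is busy is issued either at the next idle window or, failing that, when its deadline expires, so no corrected bit waits indefinitely.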
Controller 301 may be configured to determine one or more bits of a data word to be written. Such bit(s) may be, in embodiments, altered or corrected bit(s). In embodiments, such altered or corrected bit(s) may correspond to detected soft error(s). In other embodiments, such altered bit(s) may correspond to bit(s) altered for another purpose. In embodiments, the one or more bits to be written may all correspond to sections of the data word that are stored in one or more of memory devices 303 that comprise less than n memory devices. In embodiments, controller 301 may be configured to perform a write operation only on those of memory devices 303 that contain memory cell(s) corresponding to the determined one or more bits of a data word to be written. In such embodiments, controller 301 may be configured to perform no write operations on those of memory devices 303 that do not contain memory cells corresponding to the determined one or more bits of a data word to be written. Thus, controller 301 may be configured to perform write operations on some, but not all, memory devices distributedly storing the data word. As such, less power may be consumed by operating only a subset of memory devices 303. Also, the others of memory devices 303 may, in embodiments, remain free to perform other unrelated operations. Each of memory devices 303 that do contain memory cells corresponding to the data bits to be written may be configured to only precharge bit lines corresponding to those memory cells and may be configured to only perform write operations on the corresponding memory cells, thus saving additional power and further reducing latency as described elsewhere within this application.
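For a data word striped across n memory devices, selecting which devices receive a write command reduces to grouping corrected bit positions by the device that stores them. A minimal sketch, assuming (hypothetically; the disclosure does not fix a mapping) that consecutive bit positions map to consecutive devices:

```python
def devices_to_write(corrected_bit_indices, bits_per_device):
    """Group corrected bit positions of a distributed data word by
    the device storing them; only these devices are commanded to
    write, and each writes only its local bit offsets."""
    plan = {}
    for i in corrected_bit_indices:
        plan.setdefault(i // bits_per_device, []).append(i % bits_per_device)
    return plan
```

For example, for a 64-bit word striped across eight x8 devices with corrected bits at positions 5 and 17, only devices 0 and 2 would be written; the remaining six devices receive no write command at all.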
Other than the teachings of the various embodiments of the present invention, each of the elements of computer system/device 400 may perform its conventional functions known in the art. In particular, system memory 404 and mass storage 406 may be employed to store a working copy and a permanent copy of programming instructions implementing one or more software applications.
Although
In various embodiments, the earlier described memory cells are embodied in an integrated circuit. Such an integrated circuit may be described using any one of a number of hardware design languages, such as but not limited to VHSIC hardware description language (VHDL) or Verilog. The compiled design may be stored in any one of a number of data formats, such as but not limited to GDS or GDS II. The source and/or compiled design may be stored on any one of a number of media, such as but not limited to a DVD.
Although specific embodiments have been illustrated and described herein for purposes of description of the preferred embodiment, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent implementations may be substituted for the specific embodiment shown and described without departing from the scope of the present invention. Those with skill in the art will readily appreciate that the present invention may be implemented in a very wide variety of embodiments. This application is intended to cover any adaptations or variations of the embodiments discussed herein.
Number | Date | Country
---|---|---
20090106505 A1 | Apr 2009 | US