This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2014-45438, filed on Mar. 7, 2014, the entire contents of which are incorporated herein by reference.
Embodiments relate to a cache memory.
There is a tendency for cache memories to have larger capacities, which raises the problem of increased leakage current in the cache memories. MRAMs (Magnetoresistive RAMs), which are non-volatile, attract attention as a candidate for a large-capacity cache memory. MRAMs have the feature of much smaller leakage current than the SRAMs currently used in cache memories.
Spin injection magnetization inversion is one MRAM data writing technique. In spin injection magnetization inversion, a write current having a specific current value or larger flows into a magnetic tunnel junction element (MTJ element) of an MRAM. Also in data reading, a specific read current flows into the MTJ element.
The current value of a write current for spin injection MRAMs is set to be equal to or larger than an inversion threshold value at which spin injection causes magnetization inversion. The current value of a read current for spin injection MRAMs is set to be smaller than the inversion threshold value.
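For illustration only, the relation between these current values can be sketched as follows; the threshold and current values used here are hypothetical and not taken from the embodiment.

```python
# Hypothetical inversion threshold of an MTJ element (microamperes).
INVERSION_THRESHOLD_UA = 50.0

def is_valid_write_current(i_write_ua: float) -> bool:
    """A write current causes magnetization inversion only at or above the threshold."""
    return i_write_ua >= INVERSION_THRESHOLD_UA

def is_safe_read_current(i_read_ua: float) -> bool:
    """A read current must stay below the threshold to avoid read disturb."""
    return i_read_ua < INVERSION_THRESHOLD_UA

assert is_valid_write_current(60.0) and is_safe_read_current(10.0)
```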
However, due to characteristic variation among the plurality of MTJ elements of an MRAM, the inversion threshold value varies from one MTJ element to another. Moreover, when data is repeatedly written to the same MTJ element, the inversion threshold value of that MTJ element becomes unstable.
The above-described drawbacks may cause several problems, such as write errors in data writing, read disturb caused by the read current in data reading, magnetization inversion due to thermal agitation during data retention, and the resulting retention failure.
One proposal to deal with these failures is an MRAM provided with ECC (Error Correcting Code) circuitry that performs error correction in data reading. Another proposed technique is to rewrite data with a longer write pulse width when an error is detected. However, since the write pulse width is lengthened for rewriting only after an error has been detected, the average latency in accessing the MRAM becomes longer.
The present embodiment provides a cache memory having a data cache to store data per cache line, a tag to store address information of the data stored in the data cache, a cache controller to determine whether an address specified by an access request from a processor matches the address information stored in the tag and to control access to the data cache and the tag, and a write period controller to control a period required for writing data in the data cache based on at least one of an occurrence frequency of read errors for data stored in the data cache and a degree of reduction in performance of the processor due to delay in reading the data stored in the data cache.
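Purely as an illustrative sketch, the components listed above may be modeled as follows; the class names, fields, and thresholds are assumptions made for illustration rather than part of the embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class CacheLine:
    tag: int | None = None      # address information held by the tag
    data: bytes = b""           # cache line data held by the data cache
    read_error_count: int = 0   # occurrence frequency of read errors for this line
    access_count: int = 0       # used to estimate how much a read delay hurts the processor

@dataclass
class CacheMemorySketch:
    """Illustrative model of the data cache, tag, and write period controller."""
    lines: list[CacheLine] = field(default_factory=lambda: [CacheLine() for _ in range(1024)])

    def write_period_cycles(self, line: CacheLine) -> int:
        # The write period controller uses a longer write period for lines that
        # show frequent read errors or that are read often (where a read delay
        # would hurt the processor most), and a shorter period otherwise.
        # The thresholds 3 and 100 are arbitrary illustrative values.
        if line.read_error_count > 3 or line.access_count > 100:
            return 4
        return 2
```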
Hereinafter, embodiments will be explained with reference to the drawings. The following embodiments are explained mainly in terms of configurations and operations unique to the cache memory and the processor system. However, the cache memory and the processor system may have other configurations and operations that are not described below. These omitted configurations and operations are also included in the scope of the embodiments.
The MMU 4 converts a virtual address issued by the processor core 3 into a physical address to access the main memory 8 and the cache memory 1. Based on the history of memory addresses accessed by the processor core 3, the MMU 4 looks up a page table (PT) 9 stored in the main memory 8 to acquire the page table entry corresponding to the address currently being accessed, and updates its conversion table between virtual addresses and physical addresses. The page table 9 is usually managed by an OS. However, a mechanism for managing the page table may be provided in the cache memory 1.
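As a rough illustration of this translation flow, the following sketch consults a cached conversion table and, on a miss, fetches the corresponding page table entry; the page size, the table contents, and all names are assumed for illustration.

```python
PAGE_SIZE = 4 * 1024  # one page is assumed to be 4 kbytes, as in the example below

# page_table maps virtual page numbers to physical page numbers; in the text it
# resides in the main memory 8 and is usually managed by the OS.
page_table = {0x1000: 0x8000, 0x1001: 0x2A00}

# conversion_table is the MMU's cached subset of page table entries.
conversion_table: dict[int, int] = {}

def translate(virtual_address: int) -> int:
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    if vpn not in conversion_table:
        # On a miss, acquire the page table entry for the address currently
        # being accessed and update the conversion table.
        conversion_table[vpn] = page_table[vpn]
    return conversion_table[vpn] * PAGE_SIZE + offset

assert translate(0x1000 * PAGE_SIZE + 0x10) == 0x8000 * PAGE_SIZE + 0x10
```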
When the operation of the processor core 3 has been halted for a certain period and there is no operation request from outside, the power monitoring circuitry 17 supplies a power control signal to the processor system 2. The power control signal lowers the power supply voltage of, or halts the power supply to, at least a part of the circuit blocks in the processor system 2.
The cache memory 1 of
The L1-cache 6 has a memory capacity of, for example, several tens of kbytes. The L2-cache 7 has a memory capacity of, for example, several hundred kbytes to several Mbytes. The main memory 8 has a memory capacity of, for example, several Gbytes. The processor core 3 usually accesses the L1-cache 6 and the L2-cache 7 per cache line, and the main memory 8 per page. Each cache line has, for example, 512 bytes, and one page has, for example, 4 kbytes. The number of bytes for the cache lines and the pages can be set arbitrarily.
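As a quick arithmetic check under these example sizes, one page spans eight cache lines; the 512-kbyte L2 capacity used below is merely one hypothetical value within the stated range.

```python
CACHE_LINE_BYTES = 512
PAGE_BYTES = 4 * 1024

# With the example sizes above, one page spans 4096 / 512 = 8 cache lines,
# and a 512-kbyte L2-cache holds 1024 cache lines.
lines_per_page = PAGE_BYTES // CACHE_LINE_BYTES      # 8
l2_lines = (512 * 1024) // CACHE_LINE_BYTES          # 1024
assert lines_per_page == 8 and l2_lines == 1024
```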
Data that is stored in the L1-cache 6 is also usually stored in the L2-cache 7. Data that is stored in the L2-cache 7 is also usually stored in the main memory 8.
The L2-cache 7 of
The data cache 12 stores cache line data that are accessible per cache line. The tag unit 13 stores address information of the cache line data.
The redundant code memory 14 stores a redundant code for correcting an error of each cache line data stored in the data cache 12. The redundant code memory 14 may also store redundant codes for the address information stored in the tag unit 13.
The data cache 12 has non-volatile memories, for example. A non-volatile memory usable for the data cache 12 is, for example, an MRAM (Magnetoresistive RAM), which can easily be configured to have a large capacity.
The tag unit 13 has volatile memories, for example. A volatile memory usable for the tag unit 13 is, for example, an SRAM (Static RAM), which operates at a higher speed than the MRAM.
The cache controller 15 determines whether data corresponding to an address issued by the processor core 3 is stored in the data cache 12. More specifically, the cache controller 15 performs a hit/miss determination on whether the address issued by the processor core 3 matches the address information stored in the tag unit 13, and controls data writes to and reads from the L2-cache 7 as well as write-back to the main memory 8.
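The hit/miss determination can be sketched as follows; the direct-mapped organization, the set count, and the address field layout are assumptions made only for illustration and are not specified by the embodiment.

```python
CACHE_LINE_BYTES = 512
NUM_SETS = 1024  # assumed direct-mapped organization for illustration

def split_address(physical_address: int) -> tuple[int, int, int]:
    """Split a physical address into tag, index, and line offset (assumed layout)."""
    offset = physical_address % CACHE_LINE_BYTES
    index = (physical_address // CACHE_LINE_BYTES) % NUM_SETS
    tag = physical_address // (CACHE_LINE_BYTES * NUM_SETS)
    return tag, index, offset

def is_hit(tag_unit: list[int | None], physical_address: int) -> bool:
    """Hit/miss determination: compare the address's tag with the stored tag information."""
    tag, index, _ = split_address(physical_address)
    return tag_unit[index] == tag
```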
When new cache line data is stored in the L2-cache 7, the error correction controller 16 generates a redundant code for correcting an error of the cache line data and stores the redundant code in the redundant code memory 14. When cache line data requested by the processor core 3 is read from the data cache 12, the error correction controller 16 reads the redundant code corresponding to that data from the redundant code memory 14 and performs an error correction process. The error correction controller 16 then transfers the error-corrected cache line data to the processor core 3.
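The write and read flow involving the redundant code memory 14 can be sketched as follows; since the concrete code used by the error correction controller 16 is not specified here, a simple checksum that merely detects a mismatch stands in for a real error-correcting code, which would also correct the error.

```python
import hashlib

redundant_code_memory: dict[int, bytes] = {}

def store_line(index: int, data: bytes, data_cache: dict[int, bytes]) -> None:
    # When storing new cache line data, generate a redundant code for it and
    # keep the code in the redundant code memory.
    redundant_code_memory[index] = hashlib.md5(data).digest()
    data_cache[index] = data

def read_line(index: int, data_cache: dict[int, bytes]) -> bytes:
    # When reading, check the data against its redundant code.  A real ECC
    # (for example, a Hamming code) would correct single-bit errors here;
    # this sketch only detects a mismatch and signals it.
    data = data_cache[index]
    if hashlib.md5(data).digest() != redundant_code_memory[index]:
        raise IOError("read error detected on cache line %d" % index)
    return data
```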
The memory array 20 has a plurality of MRAM cells arranged vertically and horizontally. The corresponding word lines (not shown) are connected to the gates of the MRAM cells aligned in the row direction, and the corresponding bit lines and source lines (not shown) are connected to the drains and sources of the MRAM cells aligned in the column direction.
The timing generator 22 controls the timing of each block of the data cache 12. The decoder 21 drives a word line in synchronism with a signal from the timing generator 22, based on a result of decoding an address to which there is an access request from the processor core 3.
The writing circuitry 23 writes data in the memory array 20 in synchronism with a write pulse signal from the timing generator 22. The reading circuitry 24 reads data from the memory array 20 in synchronism with a read pulse signal from the timing generator 22.
The cache controller 15 controls access to the tag unit 13 and the data cache 12. The pulse controller 25 generates a clock signal CLK having a cycle that varies depending on at least either the occurrence frequency of read errors for data stored in the data cache 12 or the degree of reduction in performance of the processor core 3 due to delay in reading data stored in the data cache 12. The pulse controller 25 also generates a write enable signal WE having a pulse width corresponding to one cycle of the clock signal CLK. The timing generator 22 generates a write pulse signal W in synchronism with the write enable signal WE. The pulse width (write effective term) of the write pulse signal W varies depending on the pulse width of the write enable signal WE.
Therefore, the pulse width of the write enable signal WE varies depending on at least either the occurrence frequency of read errors for data stored in the data cache 12 or the degree of reduction in performance of the processor core 3 due to delay in reading data stored in the data cache 12. The occurrence frequency of data read errors indicates how often errors are detected by the error correction controller 16 in data reading. The degree of reduction in performance of the processor core 3 due to delay in data reading indicates, more specifically, how much the delay in data reading affects the operation of the processor core 3. As one example, it is determined that the performance of the processor core 3 is reduced to a higher degree as data is accessed at a higher frequency. It is also determined that the performance of the processor core 3 is reduced to a higher degree in a read access to data without which the processor core 3 cannot perform the succeeding process, that is, critical data, even if the data is accessed at a low frequency.
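One possible way to combine these two criteria into a write pulse width is sketched below; the thresholds, cycle counts, and the explicit criticality flag are assumptions for illustration only.

```python
def select_write_pulse_cycles(read_error_count: int,
                              access_count: int,
                              is_critical: bool) -> int:
    """Choose a write pulse width (in clock cycles) for a cache line.

    Lines with frequent read errors, frequently read lines, and critical lines
    (data without which the core cannot perform the succeeding process) get a
    longer, more reliable write; other lines get a short write to keep latency
    and energy low.  The thresholds are illustrative assumptions.
    """
    if read_error_count >= 2:
        return 4      # errors seen before: write as reliably as possible
    if is_critical or access_count >= 64:
        return 2      # a read delay would stall the core: write reliably
    return 1          # rarely read data: favor a short write period
```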
The timing generator 22 generates a write pulse signal W having a pulse width in accordance with the pulse width of the write enable signal WE. In more detail, the write pulse signal W has a pulse width corresponding to the period in which the write enable signal WE and a signal CSLE, which will be explained later, are both at a high level. Therefore, the longer the pulse width of the write pulse signal W, the longer it takes to write data in the data cache 12. This means that the longer the pulse width of the write pulse signal W, the more reliably data can be written in the data cache 12, which lowers the error occurrence frequency. Moreover, the timing generator 22 generates a read pulse signal R that indicates the period in which the reading circuitry 24 performs a read operation, and also generates other control signals.
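The relation between the write pulse signal W, the write enable signal WE, and the signal CSLE can be illustrated with the following sketch; the signal names come from the text, but the waveform values themselves are made up.

```python
# Cycle-by-cycle levels of the write enable signal WE and the signal CSLE
# (1 = high, 0 = low); the waveforms are made up for illustration.
WE   = [0, 1, 1, 1, 1, 0, 0, 0]
CSLE = [0, 0, 1, 1, 1, 1, 0, 0]

# The write pulse signal W is high only while WE and CSLE are both high,
# so its pulse width (here 3 cycles) determines how long the writing
# circuitry 23 drives the write current.
W = [we & csle for we, csle in zip(WE, CSLE)]
assert sum(W) == 3
```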
As described later, the tag unit 13 can store information that indicates the occurrence frequency of data read errors and information that indicates the access frequency. Therefore, based on the information stored in the tag unit 13, the pulse controller 25 can generate the clock signal CLK and the write enable signal WE.
The pulse controller 25 has, for example, a built-in frequency divider whose frequency division ratio can be changed in two or more ways. The frequency divider changes the frequency division ratio based on the information stored in the tag unit 13 described above. With the frequency divider, the pulse controller 25 can generate a variable-cycle clock signal CLK and a write enable signal WE having a pulse width corresponding to one cycle of the clock signal CLK.
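A minimal sketch of the relation between the division ratio and the pulse width of the write enable signal WE is given below; the base clock period and the concrete ratio are assumptions for illustration.

```python
def we_pulse_width_ns(base_clock_ns: float, division_ratio: int) -> float:
    """WE is held high for one cycle of the divided clock, so its pulse width
    equals the base clock period multiplied by the selected division ratio."""
    return base_clock_ns * division_ratio

# With a hypothetical 1 ns base clock, switching the ratio from 1 to 2 doubles
# the WE pulse width, and therefore the pulse width of the write pulse signal W,
# which the simulation described below finds cuts write errors to about 1/100.
assert we_pulse_width_ns(1.0, 2) == 2 * we_pulse_width_ns(1.0, 1)
```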
The timing generator 22 of
The reading circuitry 24 reads data from the data cache 12 during a period in which the read pulse signal R is high. The writing circuitry 23 writes data in the data cache 12 during a period in which the write pulse signal W is high. As shown in
The internal configuration of the timing generator 22 is not limited to that shown in
The data cache 12 and the tag unit 13 of
As shown in
A virtual address issued by the processor core 3 is converted into a physical address by the MMU 4. As shown in
In the case of
In the case of
In the case of
As understood from the results of
It is also found in this simulation that, by doubling the pulse width of the write pulse signal W, the probability of occurrence of write errors can be reduced to about 1/100.
As described above, in the present embodiment, the pulse width of the write pulse signal W is controlled based on at least either the occurrence frequency of read errors for data stored in the data cache 12 or the degree of reduction in performance of the processor core 3 due to delay in reading data stored in the data cache 12. With this control, it is possible to reduce the frequency of error detection and error correction by the error correction controller 16, improve the average latency of the processor core 3, and suppress power consumption.
Although several embodiments of the present invention have been explained above, these embodiments are examples and are not intended to limit the scope of the invention. These new embodiments can be carried out in various forms, with various omissions, replacements, and modifications, without departing from the conceptual idea and gist of the present invention. The embodiments and their modifications are included in the scope and gist of the present invention, and also in the inventions defined in the accompanying claims and their equivalents.