The present invention relates generally to integrated circuit memory devices and, more particularly, to a design structure for implementing dynamic refresh protocols for DRAM based cache.
Memory devices are used in a wide variety of applications, including computer systems. Computer systems and other electronic devices containing a microprocessor or similar device typically include system memory, which is generally implemented using dynamic random access memory (DRAM). A DRAM memory cell generally includes, as basic components, an access transistor (switch) and a capacitor for storing a binary data bit in the form of an electrical charge. Typically, a first voltage stored on the capacitor represents a logic HIGH or binary “1” value (e.g., VDD), while a second voltage represents a logic LOW or binary “0” value (e.g., ground). A principal advantage of DRAM is that it uses relatively few components to store each bit of data, making it an inexpensive means of providing high-capacity system memory.
As a result of the package/pin limitations associated with discrete, self-contained devices such as DRAMs, memory circuit designers have used certain multiplexing techniques in order to access the large number of internal memory array addresses through the narrow, pin-bound interfaces. Because these discrete DRAMs have been in use for some time, a standard interface has understandably emerged over the years for reading from and writing to these arrays. More recently, embedded DRAM (eDRAM) macros have been offered, particularly in Application Specific Integrated Circuit (ASIC) technologies. For example, markets in portable and multimedia applications such as cellular phones and personal digital assistants utilize the increased density of embedded memory for higher function and lower power consumption. Unlike their discrete counterparts, eDRAM devices are not constrained by limited I/O pin interfaces and the associated memory management circuitry; in fact, typical I/O counts for eDRAM devices can number in the hundreds.
One disadvantage of DRAM (including eDRAM), however, is that the charge on a memory cell's capacitor eventually leaks away, so provisions must be made to periodically “refresh” the capacitor charge; otherwise, the data bit stored by the memory cell is lost. While an array of memory cells is being refreshed, it cannot be accessed for a read or a write memory access. The need to refresh DRAM memory cells does not present a significant problem in most applications; however, it can prevent the use of DRAM in applications where immediate access to memory cells is required or highly desirable.
Thus, in certain instances, the refresh process involves accessing the same memory locations from which data is needed for system operation. This contention with refresh increases the average latency of operational accesses. In addition to decreasing memory availability, the refresh requirements of eDRAM can also negatively impact power efficiency and add complexity to system implementation. On the other hand, alternative refresh solutions (such as conventional selective/partial refreshing, for example) save power, but at the cost of performance degraded by the loss of data. Accordingly, it is desirable to provide improved approaches to refresh protocol management that simultaneously improve memory availability, system performance, and power consumption.
The foregoing discussed drawbacks and deficiencies of the prior art are overcome or alleviated by a hardware description language (HDL) design structure embodied on a machine-readable data storage medium, the HDL design structure comprising elements that, when processed in a computer aided design system, generate a machine executable representation of a device for implementing dynamic refresh protocols for DRAM based cache. The HDL design structure further includes a DRAM cache partitioned into a refreshable portion and a non-refreshable portion; and a cache controller configured to assign incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines; wherein cache lines corresponding to data having a usage history below a defined frequency are assigned by the controller to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache.
Referring to the exemplary drawings wherein like elements are numbered alike in the several Figures:
FIGS. 5(a) through 5(d) are flow diagrams illustrating an exemplary operation of the hardware implementation of the dynamically controlled refresh protocol described herein.
Disclosed herein is a method and system for implementing dynamic refresh protocols for DRAM based cache by partitioning regions of the cache into “refreshable” and “non-refreshable” portions. In addition to the partitioning, the embodiments disclosed herein also track the usage history of individual cache lines in order to determine appropriate placement of incoming cache lines into either the refreshable or non-refreshable portion of the DRAM cache. In an exemplary software implementation thereof, a “hint” bit may be provided in a load instruction such that when the instruction is decoded, the decoder may determine the preferred location for the load instruction's data (e.g., refreshable cache partition or non-refreshable cache partition). Alternatively, an exemplary hardware implementation can track and store the usage history of the cache line through various counting and status bits in the cache tag array and in the next level of memory hierarchy, as described more fully hereinafter. When the hardware history saved in the next level of memory hierarchy is sent along with an incoming line, the cache controller decodes the bit to determine the preferred location for that line.
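By way of a hedged illustration, the controller's placement decision can be reduced to a few lines of C; the enum and function names below are invented for the sketch and do not appear in the disclosure.

```c
#include <stdbool.h>

/* Cache partitions as described above (names are illustrative). */
typedef enum { PART_REFRESHABLE, PART_NON_REFRESHABLE } partition_t;

/* The usage-history ("hot") bit arrives with an incoming line from the
   next level of the memory hierarchy; the controller decodes it to pick
   the line's destination. Hot lines are expected to be re-accessed often
   enough to stay valid without periodic refresh. */
partition_t choose_partition(bool hot_bit)
{
    return hot_bit ? PART_NON_REFRESHABLE : PART_REFRESHABLE;
}
```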
Referring initially to the figures, an exemplary computer system having a hierarchical memory structure includes a central processing unit (CPU) 102, a first level cache memory 104, a second level cache memory 112, and a main memory 106.
The first level cache memory 104 is integrated on the same chip as the CPU 102 and, as a result, is faster than main memory 106, with a higher bandwidth and shorter wire lengths, thereby avoiding delays associated with transmitting and/or receiving signals to and/or from an external chip. The second level cache memory 112 is located on a different chip 114 than the CPU 102, and has a larger capacity than the first level cache memory 104 but a smaller capacity than the main memory 106.
The cache memories 104, 112 serve as buffers between the CPU 102 and the main memory 106. In each of the cache memories 104, 112, data words are grouped into small blocks called “cache blocks” or “cache lines”. The contents of the cache memory are a copy of a set of main memory blocks. Each cache line is marked with a “TAG address” that associates the cache line with a corresponding part of the main memory. The TAG addresses (which may be non-contiguous) assigned to the corresponding cache lines are stored in a special memory, called a TAG memory or directory.
In the first level cache memory 104, when an address is requested by the CPU 102 to access certain data, the requested address is compared to TAG addresses stored in a TAG memory of the first level cache memory 104. If the requested address is found among the TAG addresses in the TAG memory, it is determined that data corresponding to the requested address is present in the cache memory 104, which is referred to as a “hit”. Upon finding the data of the requested address in the cache memory 104, the data is transferred to the CPU 102. The TAG memory may also contain an offset address to locate the data in the cache memory 104. Locating data in a cache memory is well known in the art, thus a detailed description thereof is omitted herein.
On the other hand, if the requested address is not found in the TAG memory of the cache memory 104, it is determined that the data corresponding to the requested address is not present in the cache memory 104, which is referred to as a “miss”. When a miss occurs in the first level cache memory 104, the requested address is sent to a lower level memory, for example, the second level cache memory 112. If a miss occurs in the second level cache memory 112 (i.e., the data is not present in the second level cache memory), the requested address is sent to a third level cache memory (if available) or a main memory.
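The hit/miss determination described in the preceding two paragraphs amounts to a tag compare. The following minimal C sketch assumes a direct-mapped cache with illustrative sizes; the names (tag_entry_t, lookup) and geometry are assumptions, not taken from the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_SIZE  64     /* bytes per cache line (illustrative) */
#define NUM_LINES  1024   /* lines in the cache (illustrative)   */

typedef struct {
    uint32_t tag;    /* TAG address tying the line to main memory */
    bool     valid;
} tag_entry_t;

static tag_entry_t tag_memory[NUM_LINES];

/* Returns true on a hit and sets *line_index so the data can be read
   out; on a miss, the caller forwards the request to the next level. */
bool lookup(uint32_t addr, uint32_t *line_index)
{
    uint32_t index = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag   = addr / (LINE_SIZE * NUM_LINES);

    if (tag_memory[index].valid && tag_memory[index].tag == tag) {
        *line_index = index;   /* hit: data present in this cache */
        return true;
    }
    return false;              /* miss: request goes to lower level */
}
```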
As the present invention embodiments described herein may be applied to any DRAM (and more specifically eDRAM) cache memory used in a hierarchical memory system to support main memory, the implementation of dynamic refresh protocols for DRAM based cache could be applied to the first level (L1) cache memory 104, as well as to the second level (L2) cache memory 112.
Dynamically Controlled Refresh Protocol Using Software Hints
As used herein, a refreshable portion of the cache refers to locations that are subjected to a refresh cycle apart from any refresh that occurs due to a read/write operation, while data in a non-refreshable portion of the cache is not refreshed unless the line is accessed by a read/write operation. Thus, data in the non-refreshable portion of the cache is left to expire if not continually accessed. Further, with respect to partitioning a cache into refreshable and non-refreshable portions, the partitioning may represent either a physical division or simply a designated (logical) one. The relative percentage breakdown of refreshable versus non-refreshable portions of the cache may depend on the desired tradeoff characteristics; for example, the non-refreshable portion could represent up to about half the total cache size.
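As one possible rendering of such a designated partitioning, the sketch below walks only the refreshable sets during a refresh cycle; the half-and-half split and the refresh_set() stub are assumptions chosen for illustration.

```c
/* Refresh engine honoring the refreshable/non-refreshable split.
   Here the lower half of the sets is designated refreshable; the
   actual ratio is a design tradeoff (the text notes the
   non-refreshable portion could be up to about half the cache). */
#define NUM_SETS          512
#define REFRESHABLE_SETS  (NUM_SETS / 2)

static void refresh_set(int set)
{
    (void)set;  /* stand-in for the hardware row-refresh operation */
}

void refresh_cycle(void)
{
    /* Sets at or above REFRESHABLE_SETS receive no periodic refresh;
       their contents survive only through read/write accesses. */
    for (int set = 0; set < REFRESHABLE_SETS; set++)
        refresh_set(set);
}
```

Refreshing by set index keeps the refresh engine oblivious to per-line tag state, which is one attraction of a simple designated split.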
With respect to determining whether particular loaded data needs refreshing to begin with (i.e., assigning the value of the hint bit), the compiler may determine the lifetime of data loaded from the memory into the DRAM-based cache. Such an analysis can be done offline for an application, and is independent of the hardware. In one embodiment, the compiler can track an application's phase changes, and consequently the working set changes. If a particular segment of memory was actively used in one part of the program, and the program then moves to another phase where the data in that segment is rarely used, then the compiler can set the hint bit in the load instruction to indicate that the data being loaded is now a candidate for no refresh.
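In compiler terms, this decision reduces to a per-phase profitability test. Everything in the sketch below (the profile structure, the threshold) is a hypothetical stand-in for the compiler's actual offline lifetime analysis.

```c
#include <stdbool.h>
#include <stddef.h>

/* Offline, per-application analysis: for each program phase, decide
   whether loads of a given memory segment should carry the
   "do not refresh" hint. */
typedef struct {
    size_t accesses_this_phase;   /* profiled uses of the segment */
} segment_profile_t;

#define RARELY_USED_THRESHOLD 4   /* hypothetical cutoff */

/* Returns the hint-bit value to encode into loads of this segment
   for the upcoming phase: true = mark the data "do not refresh". */
bool hint_for_phase(const segment_profile_t *upcoming)
{
    return upcoming->accesses_this_phase < RARELY_USED_THRESHOLD;
}
```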
Thus, when the load instruction is decoded, if the hint bit in the load instruction is set to 1, the data accessed by the load, if already present in the cache, is marked “do not refresh” for future use. If the corresponding data is not present in the cache, the data can be placed in the non-refreshable portion of the cache, where no refresh signal is sent. Managing the refresh protocol by using the hint bit therefore allows the same data to be dynamically marked as refreshable during one phase of an application, and subsequently as non-refreshable in another phase of the same application.
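A hedged sketch of this decode-time behavior follows; the hint-bit position within the opcode and the controller primitives (lookup, mark_do_not_refresh, fill_line) are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

#define HINT_BIT  (1u << 31)   /* hypothetical position of the hint bit */

typedef enum { PART_REFRESHABLE, PART_NON_REFRESHABLE } partition_t;

/* Hypothetical controller primitives (declared, not defined here). */
extern bool lookup(uint32_t addr, uint32_t *line_index);
extern void mark_do_not_refresh(uint32_t line_index);
extern void fill_line(uint32_t addr, partition_t where);

void decode_load(uint32_t opcode, uint32_t addr)
{
    bool no_refresh = (opcode & HINT_BIT) != 0;
    uint32_t line;

    if (lookup(addr, &line)) {
        if (no_refresh)
            mark_do_not_refresh(line);   /* data already cached */
    } else {
        /* Miss: place the incoming data per the compiler's hint. */
        fill_line(addr, no_refresh ? PART_NON_REFRESHABLE
                                   : PART_REFRESHABLE);
    }
}
```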
Dynamically Controlled Refresh Protocol Using Hardware Assists
In lieu of encoded load instructions, the cache hardware may be configured with predictors that determine whether data already present in the cache, or new data incoming to the cache, is to be marked “do not refresh” for the future. In one embodiment, all modified lines in the cache are kept in a predetermined set/way, which is always refreshed. In another embodiment, a cache directory entry (if the data is not modified) is invalidated if a refresh has failed; such invalid lines can be used to accommodate new incoming lines due to demand fetch or prefetch. In still another embodiment, the hardware tracks cache line usage during its residency in the cache using a 2-bit access counter per line, as illustrated in the sketch below.
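One way to picture the per-line tracking state is the bit-field below; the 2-bit saturating reuse counter follows the text, while the exact field layout and names are assumptions.

```c
#include <stdint.h>

/* Per-line tag state for the hardware-assist scheme. */
typedef struct {
    uint32_t tag;
    uint8_t  valid    : 1;
    uint8_t  modified : 1;
    uint8_t  reuse    : 2;   /* saturating count of accesses since
                                the last refresh cycle */
} line_state_t;

void on_access(line_state_t *line)
{
    if (line->reuse < 3)
        line->reuse++;       /* saturate instead of wrapping */
}

void on_refresh_cycle(line_state_t *line)
{
    line->reuse = 0;         /* start a new observation window */
}
```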
FIG. 5(c) illustrates the results of line replacement in the cache. If a cache line is to be evicted (replaced from the cache), as shown in block 514, it is first determined whether the line is in a non-refreshable portion of the cache, as reflected in decision block 516. If so, the tag data is checked at decision block 518 to determine whether the data is invalid; if it is, the cache line is marked as cold by setting the hot/cold indicator bit to a first value (e.g., 0) in block 520. On the other hand, if the cache line in a non-refreshable portion is still valid, or if the cache line is in a refreshable portion of the cache, it is then determined in decision block 522 whether the value of the reuse counter for that line is at least 2 (i.e., whether the line has experienced at least 2 usages since the last refresh cycle). For lines having at least 2 usages since the most recent refresh cycle, the cache line is marked as hot by setting the hot/cold indicator bit to a second value (e.g., 1) in block 524. However, if the cache line to be evicted was not used at all, or was used only once since its last refresh, the line is marked as cold in block 526. Again, the hot/cold indicator bit is stored in the next level of the memory hierarchy, thereby tracking and maintaining a usage history for that line.
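Assuming the per-line state sketched earlier, the decision flow of blocks 514 through 526 reduces to two comparisons:

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal per-line state mirroring the earlier tag sketch. */
typedef struct { uint8_t valid : 1; uint8_t reuse : 2; } line_state_t;

/* Returns the hot/cold indicator to be stored with the line in the
   next level of the hierarchy: true = hot, false = cold. */
bool hot_on_eviction(const line_state_t *line, bool in_non_refreshable)
{
    /* Blocks 516-520: a line that expired in the non-refreshable
       portion is marked cold. */
    if (in_non_refreshable && !line->valid)
        return false;

    /* Blocks 522-526: at least 2 uses since the last refresh -> hot;
       0 or 1 uses -> cold. */
    return line->reuse >= 2;
}
```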
When the line is subsequently brought back into the cache by a demand fetch or prefetch, a cold line is placed in the refreshable portion of the cache, and a hot line is placed in the non-refreshable partition of the cache, as depicted in blocks 528, 530, 532 and 534 of FIG. 5(d).
Conversely, a hot line present in the non-refreshable partition is not refreshed periodically, and therefore remains valid only if accessed. Thus, when such a line is replaced, if it is invalid (i.e., the data has expired), it is marked as “cold” back in the next level and is therefore designated for placement in the refreshable partition upon its next residency in the cache. In still another embodiment, a refreshless DRAM cache, such as that disclosed in U.S. patent application Ser. No. 11/950,015 (assigned to the assignee of the present application, the contents of which are incorporated herein in their entirety), can be used to observe the data usage pattern in the non-refreshable portion and thereby determine whether the line should be marked as hot or cold during replacement.
Design process 620 preferably employs and incorporates hardware and/or software modules for synthesizing, translating, or otherwise processing a design/simulation functional equivalent of the components, circuits, devices, or logic structures shown in the figures described above.
Design process 620 may include hardware and software modules for processing a variety of input data structure types, including netlist 630. Such data structure types may reside, for example, within library elements 635, and include a set of commonly used elements, circuits, and devices, including models, layouts, and symbolic representations, for a given manufacturing technology (e.g., different technology nodes such as 32 nm, 45 nm, 90 nm, etc.). The data structure types may further include design specifications 640, characterization data 650, verification data 660, design rules 670, and test data files 680, which may include input test patterns, output test results, and other testing information. Design process 620 may further include, for example, standard mechanical design processes such as stress analysis, thermal analysis, mechanical event simulation, and process simulation for operations such as casting, molding, and die press forming. One of ordinary skill in the art of mechanical design can appreciate the extent of possible mechanical design tools and applications used in design process 620 without deviating from the scope and spirit of the invention. Design process 620 may also include modules for performing standard circuit design processes such as timing analysis, verification, design rule checking, and place and route operations.
Design process 620 employs and incorporates logic and physical design tools such as HDL compilers and simulation model build tools to process design structure 610, together with some or all of the depicted supporting data structures and any additional mechanical design or data (if applicable), to generate a second design structure 690. Design structure 690 resides on a storage medium or programmable gate array in a data format used for the exchange of data of mechanical devices and structures (e.g., information stored in an IGES, DXF, Parasolid XT, JT, DRG, or any other suitable format for storing or rendering such mechanical design structures). Similar to design structure 610, design structure 690 preferably comprises one or more files, data structures, or other computer-encoded data or instructions that reside on transmission or data storage media and that, when processed by an ECAD system, generate a logically or otherwise functionally equivalent form of one or more of the embodiments of the invention shown in the figures described above.
Design structure 690 may also employ a data format used for the exchange of layout data of integrated circuits and/or a symbolic data format (e.g., information stored in GDSII (GDS2), GL1, OASIS, map files, or any other suitable format for storing such design data structures). Design structure 690 may comprise information such as, for example, symbolic data, map files, test data files, design content files, manufacturing data, layout parameters, wires, levels of metal, vias, shapes, data for routing through the manufacturing line, and any other data required by a manufacturer or other designer/developer to produce a device or structure as described above and shown in the figures.
In view of the above, the present method embodiments may therefore take the form of computer or controller implemented processes and apparatuses for practicing those processes. The disclosure can also be embodied in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer or controller, the computer becomes an apparatus for practicing the invention.
While the invention has been described with reference to a preferred embodiment or embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims.
This non-provisional U.S. patent application is a continuation-in-part of pending U.S. patent application Ser. No. 11/949,904, which was filed Dec. 4, 2007, and is assigned to the present assignee.
Relation | Number | Date | Country
---|---|---|---
Parent | 11949904 | Dec 2007 | US
Child | 12126499 | | US