The present invention relates to a method for operating a cache memory whose memory area is split into sets and is addressed using an address which has a first, a second and a third field.
The performance of a processor system is determined, inter alia, by the access times of the connected memory systems. Although the speed of main memories has increased, it does not keep pace with the processing speeds of modern processors, and the main memories cannot supply or store data at the required speed. Read or write commands issued by the processors to the main memory thus incur “latencies”.
To increase the performance of the overall system, present-day processor architectures contain cache memories, e.g. for data (D cache), instructions (I cache) or addresses (TLB, translation lookaside buffer). Cache memories are generally smaller than main memories or external memories, i.e. in the number of bytes which can be stored. They are fast buffer stores which are used to reduce the latency when a processor accesses slow external memories. In this case, the cache memory covers selected address areas in the external memory and contains the temporarily modified data and also information relating to their location.
The textbook Hennessy, Patterson, Computer Architecture: A Quantitative Approach, 2nd ed., Morgan Kaufmann, San Francisco, 1996, describes the common cache architectures and their manners of operation. A cache memory comprises an address bank which comprises at least one index or index field, also called a set, and a marker or marker field. The data from a main memory location are stored, together with the associated main memory address, in a line in the cache memory. An address for a cache memory has 12 address bits, for example, with the more significant bits (for example the 6 most significant bits) forming the marker and the less significant bits (for example the 5 least significant bits) forming the index. The data from the main memory are stored together with the marker in a line in the cache memory, which line corresponds to the index of this address. A line in the cache memory thus comprises an address and the main memory data which correspond to this address. A line is the smallest unit of information which can be moved between main memory and cache memory and is also called a block. A processor uses the index bits to address the marker bits which are stored in the cache memory. These stored marker bits are compared with the marker bits of the address generated by the processor. If there is a match, the data corresponding to the address can be read from the cache memory.
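By way of illustration, the following sketch in C reproduces this lookup with the example widths given above (12-bit address, 6 more significant marker bits, 5 less significant index bits). The structure and function names, the block size and the decision to leave the remaining address bit unused are assumptions made only for the sketch, not part of the described cache.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS   32u   /* 5 index bits -> 2^5 = 32 sets (from the example above) */
#define LINE_BYTES 16u   /* assumed block size, for illustration only              */

/* One cache line: valid flag, stored marker, and the data copied from main memory. */
typedef struct {
    bool     valid;
    uint16_t marker;
    uint8_t  data[LINE_BYTES];
} cache_line_t;

static cache_line_t cache[NUM_SETS];

/* Look up a 12-bit address: the 5 less significant bits (index) select the set,
 * and the stored marker bits are compared with the 6 more significant bits
 * (marker) of the address generated by the processor.                           */
bool cache_lookup(uint16_t addr, const uint8_t **data_out)
{
    uint16_t index  = addr & 0x1Fu;          /* bits 0..4: index   */
    uint16_t marker = (addr >> 6) & 0x3Fu;   /* bits 6..11: marker */

    const cache_line_t *line = &cache[index];
    if (line->valid && line->marker == marker) {
        *data_out = line->data;              /* hit: data can be read from the cache */
        return true;
    }
    return false;                            /* miss */
}
```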
The cache memories can be characterized as buffer stores with “N-way set associative”, “direct mapped” or “fully associative” memory arrays.
In the text below, N-way set associative and direct mapped cache memories will be assumed. With an N-way set associative cache memory, the same memory areas in a main memory are always mapped onto the same sets in the cache memory. The lines in the main memory can be mapped onto different lines within the sets, however, for example by using LRU (least recently used) algorithms, which select the memory line in the cache memory whose use lies furthest back in time in relation to all the memory lines of a set. In a direct mapped cache memory, each memory line in a main memory is assigned a fixed memory line in the cache memory. The arrangement of the areas in the main memory thus corresponds precisely to the arrangement of the memory lines in the cache memory.
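A minimal sketch of such an LRU selection, assuming a per-line time stamp and four lines per set (both assumptions chosen only for illustration), could look as follows:

```c
#include <stdint.h>

#define NUM_WAYS 4u   /* assumed number of lines per set, for illustration only */

/* Per-line bookkeeping: the time stamp of the most recent access. */
typedef struct {
    uint32_t last_used;
    /* ... marker, data, valid flag ... */
} way_t;

/* Select the line of the set whose use lies furthest back in time in relation
 * to all the memory lines of the set; this line is replaced by the new block. */
unsigned lru_victim(const way_t set[NUM_WAYS])
{
    unsigned victim = 0;
    for (unsigned w = 1; w < NUM_WAYS; w++) {
        if (set[w].last_used < set[victim].last_used) {
            victim = w;
        }
    }
    return victim;
}
```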
Generally, the data are stored in blocks of 2^b bytes per memory entry. In the case of an N-way set associative or direct mapped cache memory with N=2^n ways, the memory address is split into a marker field, an index field and an offset field. During a read or write operation in the cache memory, i.e. when a data item is accessed, the index field is used for directly addressing the set. In the case of these cache memories, the stored marker field is used to identify the respective line in the cache, since the set contains a plurality of lines in which a data item is actually stored. The offset field is used to address the data item in the line.
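Assuming that the offset field comprises b bits (block size 2^b bytes) and that the index field comprises n bits, the field split described above can be sketched as follows; the type and function names are chosen here purely for illustration.

```c
#include <stdint.h>

/* Field split of an address for a cache with a block size of 2^b bytes,
 * assuming an index field of n bits and an offset field of b bits.      */
typedef struct {
    uint32_t marker;
    uint32_t index;
    uint32_t offset;
} address_fields_t;

address_fields_t split_address(uint32_t addr, unsigned b, unsigned n)
{
    address_fields_t f;
    f.offset = addr & ((1u << b) - 1u);         /* addresses the data item in the line */
    f.index  = (addr >> b) & ((1u << n) - 1u);  /* directly addresses the set          */
    f.marker = addr >> (b + n);                 /* identifies the line in the set      */
    return f;
}
```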
A fundamental drawback of the fixed mapping of memory areas in the main memory onto the sets in the cache memory is, firstly, that particular configurations of program and data segments cause frequently used blocks to be repeatedly expelled from their sets, while other sets contained in the cache are utilized less efficiently. This presents a significant performance drawback.
In addition, physically reading out the arrangement of the data in the cache memory, for example using electron beam analysis, allows the data in an external memory or main memory to be reconstructed. This can be seen as a further significant drawback with regard to physical security, particularly in chip card controllers or other security controllers.
It is an object of the present invention to specify a method for operating a cache memory which improves the utilization level of the sets in the cache memory and increases the physical security of the cache memory, so that a relatively long residence time for the blocks in these sets can be achieved.
The inventive method for operating a cache memory whose memory area is split into sets and is addressed using an address which is split into at least two fields involves the second field for addressing the sets in the cache memory being recalculated by performing a combinational logic function on the basis of a modulo N operation, where N corresponds to the number of sets in the cache memory. Calculating a new field for addressing the sets has the advantage that the individual sets within a cache memory can be utilized more beneficially.
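The claimed method does not fix a particular combinational logic function. The following sketch therefore only illustrates one conceivable choice, in which the marker and index fields are combined by an XOR (an assumption made here) and the result is reduced modulo N, the number of sets, to obtain the new field for addressing the sets.

```c
#include <stdint.h>

/* Recalculate the second field (index) used for addressing the sets by a
 * combinational logic function based on a modulo-N operation, N being the
 * number of sets. The XOR combination of marker and index chosen here is
 * purely an assumption for illustration; the method only requires some
 * combinational function followed by the modulo-N reduction.              */
uint32_t remap_index(uint32_t marker, uint32_t index, uint32_t num_sets)
{
    uint32_t combined = marker ^ index;   /* assumed combinational step */
    return combined % num_sets;           /* modulo-N operation         */
}
```

If N is a power of two, the modulo-N reduction amounts to masking the lower bits of the combined value, so that the recalculation remains a purely combinational operation without additional memory accesses.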
The inventive method is explained in more detail below using an exemplary embodiment with reference to the figures. Identical or corresponding elements in different figures have been provided with the same reference symbols.
In the figures:
The address 1 is divided into a marker field 2, an index field 3 and an offset field 4. In this case, an arrow pointing from the index field 3 in the address 1 to the cache memory 5 is intended to indicate that the index field 3 is used for addressing the sets 61, 62, 6N in the cache memory 5. The marker field 2 is used to identify the respective line in the cache, since in the case of set associative cache memories the set has a plurality of lines available in which the data item can actually be stored. The marker field 2 of an address generated by a processor (not shown in the present case) is stored together with the respective data, so that when the data item is fetched the marker field 2 of the address 1 generated by the processor is compared with the stored marker field 2 in the addressed set in order to find the data item in question.
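The comparison across the plurality of lines of the addressed set can be sketched as follows; the number of lines per set and the structure names are assumptions made only for this sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define WAYS 4u   /* assumed number of lines per set, for illustration only */

typedef struct {
    bool     valid;
    uint32_t marker;
    /* ... data of the line ... */
} line_t;

/* Compare the marker field of the address generated by the processor with the
 * stored marker fields of all lines of the addressed set; a match identifies
 * the line that holds the data item in question.                              */
int find_line(const line_t set[WAYS], uint32_t addr_marker)
{
    for (unsigned w = 0; w < WAYS; w++) {
        if (set[w].valid && set[w].marker == addr_marker) {
            return (int)w;   /* line containing the data item */
        }
    }
    return -1;               /* data item not present in this set */
}
```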
In block 14, the address with the new index field 3 is forwarded to the cache memory 5. Block 15 indicates the end of this program flowchart.
The inventive method has the advantage that the combinational logic function can be taken as a basis for calculating a new address field for addressing the sets in the cache memory, so that the utilization level of the individual sets is improved when the cache memory is in heavy use. Since the data can therefore also be stored in other sets, a relatively long residence time for the stored blocks in these sets is also achieved.
The security of security controllers is significantly increased, since the arrangement of the data in the cache obtained through physical reading methods no longer matches that of a fresh program cycle, and hence no conclusion can be drawn about the data structure in the main memory.
Number | Date | Country | Kind |
---|---|---|---|
102 58 767.1 | Dec 2002 | DE | national |
This application is a continuation of International Patent Application Serial No. PCT/DE2003/003984, filed Dec. 3, 2003, which published in German on Jul. 1, 2004 as WO 2004/055678, and is incorporated herein by reference in its entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/DE03/03984 | Dec 2003 | US
Child | 11153914 | Jun 2005 | US