1. Field
The present disclosure pertains to the field of caching in data processing apparatuses, and, more specifically, to the field of cache flushing.
2. Description of Related Art
The maintenance of a cache memory in a data processing apparatus, particularly in multiprocessor systems, includes flushing the cache from time to time. A typical cache includes one dirty bit per line to indicate whether the information in the cache line was modified while in the cache. A cache flush may be performed with a software routine that includes checking the dirty bit for every line in the cache and writing the lines that are dirty back to memory.
The present invention is illustrated by way of example and not limitation in the accompanying figures.
The following description describes embodiments of techniques for using a dirty line hint array when flushing a cache. In the following description, numerous specific details, such as logic and circuit configurations, are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art, that the invention may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like have not been shown in detail, to avoid unnecessarily obscuring the present invention.
Embodiments of the present invention provide techniques for using a dirty line hit array when flushing a cache, and may be applied to any cache, regardless of size, level of set associativity, level in the memory hierarchy, or other attributes. These techniques may be used when flushing a cache for any purpose, including flushing shared caches in multiprocessor systems and flushing caches before entering a sleep or other low power mode.
The embodiment of
When cache 100 is flushed, before a dirty bit 102 is checked to determine whether the corresponding cache line 101 must be written back to memory, the hint bit 111 for the set to which the cache line 101 belongs is read. If the hint bit 111 is set, then at least one of the cache lines 101 in that set must be dirty. Therefore, the dirty bit 102 is checked and the cache flush continues as normal. However, if the hint bit 111 is not set, then every cache line 101 in that set must be clean, so there is no need to check the dirty bit 102. Accordingly, the time and power required to check the dirty bit 102 may be saved.
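The gating described above may be sketched as a minimal software model. The class name `HintedCache` and its method names are illustrative assumptions, not terms from the disclosure; the booleans stand in for dirty bits 102 and hint bits 111.

```python
class HintedCache:
    """Toy model of a set-associative cache with one hint bit per set."""

    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        # dirty[s][w] models dirty bit 102 for the line in set s, way w.
        self.dirty = [[False] * ways for _ in range(num_sets)]
        # hint[s] models hint bit 111: set whenever any line in set s is dirtied.
        self.hint = [False] * num_sets

    def write(self, set_index, way):
        """Model a store that dirties a line and marks its set's hint bit."""
        self.dirty[set_index][way] = True
        self.hint[set_index] = True

    def flush(self):
        """Flush the cache; return how many dirty-bit checks were performed."""
        checks = 0
        for s in range(self.num_sets):
            if not self.hint[s]:
                continue  # every line in this set is clean; skip all dirty-bit reads
            for w in range(self.ways):
                checks += 1
                if self.dirty[s][w]:
                    self.dirty[s][w] = False  # model the writeback to memory
            self.hint[s] = False  # whole set examined; hint bit may be cleared
        return checks
```

In this sketch, a flush of a mostly clean cache touches only the dirty bits of sets whose hint bit is set, which models the time and power savings described above.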
Furthermore, the hint bit 111 corresponding to a given set, or any other segment, section, or partition, may be read to potentially eliminate other cache accesses during the flush. For example, a hint bit 111 may be read before accessing a cache to determine if there is a hit to a designated address for a possible writeback. If the hint bit 111 is read as clean, then no cache access is needed to determine if the cache line corresponding to the designated address is present and valid.
In other embodiments, a hint bit in a hint array may correspond to any number of dirty bits in a cache. For example, a hint array may have 512K hint bits, one for each of the 512K sets in a four-way set associative cache having 2M lines. In this configuration, there are four dirty bits per hint bit. Alternatively, a hint array may have 32K hint bits, and an eight-way set associative cache having 2M lines (and therefore 256K sets) may be logically divided into 32K segments of eight sets each, where each hint bit corresponds to one segment. In this configuration, there are 64 dirty bits per hint bit. The number of dirty bits per hint bit and the size and configuration of the hint array may be chosen based on any considerations, such as providing an access time short enough that the hint bit lookup may be used to gate the cache access.
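The arithmetic behind these two configurations can be checked with a short sketch. The contiguous-segment layout assumed by `hint_index` is hypothetical; the disclosure leaves the mapping of sets to hint bits open.

```python
def hint_index(set_index, num_sets, num_hint_bits):
    """Map a set to its hint bit, assuming each hint bit covers a
    contiguous segment of sets (one possible layout among many)."""
    sets_per_segment = num_sets // num_hint_bits
    return set_index // sets_per_segment

# Second example from the text: a 2M-line, eight-way set associative cache.
num_lines = 2 * 1024 * 1024
ways = 8
num_sets = num_lines // ways          # 256K sets
num_hint_bits = 32 * 1024             # 32K hint bits -> 8 sets per segment
dirty_bits_per_hint = (num_sets // num_hint_bits) * ways  # 64
```

With one hint bit per set, as in the first example (512K hint bits for 512K sets of a four-way cache), the same formula yields four dirty bits per hint bit.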
Maintaining a hint array may include clearing a hint bit whenever a cache flush routine completes looping through all of the memory addresses or cache lines that may be mapped to the hint bit.
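The clearing rule above, applied at segment granularity, might be sketched as follows. The function name and the list-of-lists representation are illustrative assumptions; the point is that the hint bit is cleared only after the loop has visited every line mapped to it.

```python
def flush_segment(dirty, hint, segment, sets_per_segment, ways):
    """Flush one hint-bit segment of a cache modeled as dirty[set][way];
    clear the hint bit only after looping through every covered set."""
    if not hint[segment]:
        return  # segment known clean: no dirty-bit reads needed
    base = segment * sets_per_segment
    for s in range(base, base + sets_per_segment):
        for w in range(ways):
            if dirty[s][w]:
                dirty[s][w] = False  # model the writeback to memory
    hint[segment] = False  # loop over all mapped lines complete: clear hint
```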
However, if the hint bit is read as dirty, then, in block 230, the cache is accessed to determine if the designated line is present and valid in the cache. If it is not, then flow proceeds to block 211 as described above. However, if the designated line is present and valid, then, in block 240, the dirty bit for that cache line is read. If the dirty bit is read as clean, then flow proceeds to block 211 as described above. However, if the dirty bit is read as dirty, then, in block 250, the cache line is written back to memory. Then flow proceeds to block 211 as described above.
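One iteration of this per-address flow might be sketched as below. The block numbers follow the text; `cache_lookup` and `hint_is_dirty` are hypothetical helpers standing in for the cache access and the hint array read, and the returned strings stand in for the flowchart transitions.

```python
def flush_step(cache_lookup, hint_is_dirty, address):
    """One iteration of the flush flow for a designated address.
    cache_lookup(address) -> (present_and_valid, dirty) is assumed."""
    if not hint_is_dirty(address):
        return "next"       # hint clean: skip the cache access, go to block 211
    present, dirty = cache_lookup(address)   # block 230: access the cache
    if not present:
        return "next"       # line absent or invalid: block 211
    if not dirty:           # block 240: read the line's dirty bit
        return "next"       # clean line: block 211
    return "writeback"      # block 250: write the line back to memory
```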
Other embodiments of methods for using a dirty line hint array when flushing a cache are possible within the scope of the present invention. For example, the flush routine may designate the way in the cache and then increment through the applicable sets. In this embodiment, there would be no need to check for a cache hit.
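This way-designated variant might be sketched as below, again as an illustrative assumption rather than the disclosed implementation: because every (set, way) slot is visited directly, no hit check is needed, and the hint array still gates the dirty-bit reads per set.

```python
def flush_by_way(dirty, hint, ways):
    """Flush a cache modeled as dirty[set][way] by designating each way
    and incrementing through the sets, with one hint bit per set."""
    num_sets = len(dirty)
    for w in range(ways):                  # designate the way...
        for s in range(num_sets):          # ...then increment through the sets
            if hint[s] and dirty[s][w]:    # hint bit gates the dirty-bit read
                dirty[s][w] = False        # model the writeback to memory
    for s in range(num_sets):
        hint[s] = False  # every line has been visited: clear all hint bits
```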
System 300 also includes memory 330 coupled to cache 100 through bus 335, or through any other buses or components. Memory 330 may be any type of memory capable of storing data to be operated on by processors 310 and 320, such as static or dynamic random access memory, semiconductor-based read only memory, or a magnetic or optical disk memory. The data stored in memory 330 may be cached in cache 100. Memory 330 may also store instructions to implement the cache flush routine of the embodiment of
Furthermore, any combination of the elements shown in
Component 340, or any other component or portion of a component designed according to an embodiment of the present invention, may be designed in various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally or alternatively, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level where they may be modeled with data representing the physical placement of various devices. In the case where conventional semiconductor fabrication techniques are used, the data representing the device placement model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce an integrated circuit.
In any representation of the design, the data may be stored in any form of a machine-readable medium. An optical or electrical wave modulated or otherwise generated to transmit such information, a memory, or a magnetic or optical storage medium, such as a disc, may be the machine-readable medium. Any of these mediums may “carry” or “indicate” the design, or other information used in an embodiment of the present invention, such as the instructions in a cache flush routine. When an electrical carrier wave indicating or carrying the information is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, the actions of a communication provider or a network provider may be making copies of an article, e.g., a carrier wave, embodying techniques of the present invention.
Thus, techniques for using a dirty line hint array when flushing a cache have been disclosed. While certain embodiments have been described, and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.