1. Field of the Invention
This invention relates to the field of data processing systems. More particularly, this invention relates to the field of accessing data within a cache.
2. Description of the Prior Art
Caches within data processors can store large amounts of data. Accessing data within caches can be quite a complicated procedure requiring addresses of a relatively large size. Manipulation of such addresses can therefore require significant amounts of power and time. Caches have been organised in a number of ways in order to reduce power and time overheads involved in accessing storage locations within the caches.
One popular way of configuring a cache is the so-called ‘set associative’ cache. A 16 Kbyte set associative cache is shown in
In order to address data stored in this sort of a cache, an address 35 is used comprising a SET or index portion 37, which indicates which of the sets or lines the address is referring to, and a TAG portion 36, which indicates which of the four ways it is in. Such an address identifies a cache line and a cache way, the line being identified by the set and the way by a comparison and match of the TAGs stored in the 4 TAG RAMs 25 with the TAGs in the corresponding set of the 4 caches 10. In reality more than one data word may be stored in a cache line within a cache way and thus the address may contain further information.
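The address decomposition described above can be sketched as a simple behavioural model. This is an illustration only, not the claimed hardware; the 32-byte line size and 32-bit address width are assumptions made for this sketch, as the text does not specify them for the 16 Kbyte example.

```python
# Illustrative decomposition of a 32-bit address for a 16 Kbyte,
# 4-way set associative cache. The 32-byte line size is an assumption.
LINE_BYTES = 32
WAYS = 4
CACHE_BYTES = 16 * 1024
SETS = CACHE_BYTES // (WAYS * LINE_BYTES)   # 128 sets per way

OFFSET_BITS = LINE_BYTES.bit_length() - 1   # 5 bits select a byte in the line
INDEX_BITS = SETS.bit_length() - 1          # 7 bits select the set (SET portion)

def split_address(addr):
    """Split an address into (tag, index, offset) fields."""
    offset = addr & (LINE_BYTES - 1)
    index = (addr >> OFFSET_BITS) & (SETS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)  # TAG portion identifies the way
    return tag, index, offset
```

The index selects one set; the tag is then compared against the four tags stored for that set to determine the way.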
When accessing data stored in a cache organised in this way, any virtual address produced by a programming model will need to be converted to a physical address. This can slow the procedure, as the program will produce the virtual address early, but the data cannot be accessed until it is converted to a physical address.
A known way of converting a virtual address to a physical address is by the use of a translation lookaside buffer or TLB.
This is one way in which data identified by a virtual address can be accessed. The initial step in this procedure is conversion of the virtual address to a physical address using a translation lookaside buffer. This is not a fast step and thus, having this as the first step in the procedure considerably slows the critical path. An alternative to this is shown in
This is faster than the data access shown in
Furthermore, this process requires the accessing of multiple RAMs: multiple cache tag directories and cache data arrays are accessed during the procedure, and power consumption is therefore high.
A first aspect of the present invention provides a data processor operable to process data, said data processor comprising: a cache in which a data item storage location within said cache is identified by an address; a hash value generator operable to generate a hash value from at least some of said bits of said address, said hash value having fewer bits than said address; a buffer operable to store a plurality of hash values relating to a plurality of storage locations within said cache; wherein in response to a request to access said data item storage location said data processor is operable to compare a hash value generated from said address with at least some of said plurality of hash values stored within said buffer.
The provision of a buffer which stores a plurality of hash values that are each generated from an address of a corresponding storage location within the cache but which have fewer bits than these addresses enables a data access to be performed using a comparison of these stored values with a generated hash value. This has the advantage that a comparison of values having a reduced width, as compared to the width of the tag, can be performed. This comparison therefore has a power and time saving associated with it. Furthermore, as hash values are narrower than the address from which they were generated accessing the hash buffer requires less power than accessing a conventional tag directory.
If a match is found in response to the comparison then the data item is assumed to be located in the indicated storage location. A possible drawback is that as the hash values are smaller in width than the identifying address they may not be unique to one particular address (some addresses will have the same hash value) and thus in some cases the wrong data item may be accessed. It should be noted in this context that a data item refers to any item of data, including instructions.
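The lookup described in the preceding paragraphs can be modelled minimally as follows. This is a sketch under stated assumptions (four ways, one stored hash per way for the indexed set); the function names are illustrative, not taken from the specification.

```python
def hash_hint(hashes_for_set, generated_hash):
    """Return the way whose stored hash matches, or None.

    hashes_for_set: the hash value stored for each way of the indexed set.
    A match is only a hint: because hashes have fewer bits than the address,
    two addresses can share a hash value, so the full tag must still be
    checked before the accessed data is trusted.
    """
    for way, stored in enumerate(hashes_for_set):
        if stored == generated_hash:
            return way
    return None
```

A `None` result (hash miss) or a match that later fails the full tag check both fall back to the conventional, slower lookup path.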
In some embodiments, said cache is a set associative cache divided into a plurality of cache ways; said address comprising a tag portion and an index portion, said index portion indicating a set that comprises said data item storage location and said tag portion being indicative of one of said plurality of cache ways; said at least some bits of said address used to generate said hash value comprise bits from said tag portion, said hash value having fewer bits than said tag portion.
Set associative caches are a well known way of storing data. If such a storage means is used then it has been found to be advantageous to generate a hash value from the tag portion of the address and to use the index portion in conjunction with the hash value to indicate a line within a particular cache way.
In embodiments, said data processor is operable to access said data item storage location by accessing one of said plurality of cache ways indicated by a match of said generated hash value with one of said plurality of hash values stored within said buffer and a line in said cache way indicated by said index portion of said address.
Thus, in this embodiment the hash value is used to indicate which cache way should be accessed rather than using the tag comparison of the systems shown in
In one embodiment said buffer comprises a plurality of lines corresponding to said plurality of cache lines in each of said cache ways, each of said plurality of buffer lines comprising a plurality of hash values, each hash value in a buffer line corresponding to a respective one of said plurality of cache ways; wherein said data processor is operable in response to said request to access said data item storage location to compare said plurality of hash values stored within a line of said buffer indicated by said index portion of said address with said hash value generated from said address.
By using hash values having a reduced bit length compared to either the tag portion or the whole address of the data item, several of these values can be stored in a single line within the buffer. This enables the hash values relating to several cache ways to be stored within lines of a single buffer. Thus, when the comparison is to be made only this single RAM needs to be accessed. In traditional cases where the tag portion of the address was stored, a RAM corresponding to each cache way was required. The access of these RAMs had a power and time overhead associated with it.
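The packing of one hash per way into a single buffer line can be sketched as plain bit arithmetic. The 6-bit hash width is taken from the embodiments described later in this document; the function names are illustrative.

```python
HASH_BITS = 6  # hash width used in the embodiments described below
HASH_MASK = (1 << HASH_BITS) - 1

def pack_line(hashes):
    """Pack one hash value per way into a single buffer line word."""
    line = 0
    for way, h in enumerate(hashes):
        line |= (h & HASH_MASK) << (way * HASH_BITS)
    return line

def hash_for_way(line, way):
    """Extract the stored hash for one way from a packed buffer line."""
    return (line >> (way * HASH_BITS)) & HASH_MASK
```

Because all four ways' hashes sit in one line of one RAM, a single read suffices where four tag RAM reads were needed before.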
In other embodiments said data processor comprises a plurality of hash buffers each of said plurality of hash buffers corresponding to a respective one of said plurality of cache ways.
It has been found to be advantageous in some embodiments, to have a plurality of hash buffers corresponding to the plurality of cache ways rather than a single hash buffer. The hash values are arranged such that different bits of the index indicate a particular portion of the buffer such as a row. As the hash values have fewer bits than a traditional tag value, they can be stored in fewer rows and thus, require fewer bits to identify a particular row which can make accessing the hash buffer faster.
In some embodiments in response to said request to access said data item storage location said data processor is operable to access said plurality of cache ways while accessing said hash buffer, and in response to a hash match to drive sensing amplifiers of said cache way selected by said hash match and not to drive sensing amplifiers of said cache ways not selected.
As the hash values have a reduced bit length their comparison can be done quite quickly. Thus, the cache way that needs to be accessed is found sooner than would be the case if a longer data value such as the tag of the address were compared. This enables only the sensing amplifiers of the cache way selected by the hash match to be fired, while the sensing amplifiers of the other cache ways remain off; this results in a power saving.
In an embodiment, said address is a virtual address, said data processor further comprising: a plurality of cache tag directories, each of said plurality of cache tag directories corresponding to a respective one of said plurality of cache ways; and a translation lookaside buffer, said data processor being operable to generate a physical address from said virtual address and said translation lookaside buffer; said data processor being operable to compare a physical tag portion of said physical address generated from said virtual address with tags from said plurality of tag directories and in response to a tag match to identify said data item storage location, and to check that said data item accessed in response to said comparison of said hash values is said data item identified by said physical address.
The hash value comparison provides a hint as to where the data item may be located; however, as the hash value is not unique to the address, in some cases this hint may not be correct. In order to avoid the wrong piece of data being used in subsequent operations, the traditional way of accessing data, using a translation lookaside buffer to generate a physical address from the virtual address and performing a comparison of this physical address with a cache tag directory, is done in parallel to the data access using the hash values. By placing this operation in parallel it is decoupled from the critical cache access path and thus, it does not matter that it takes longer than the cache access performed with the hash buffer, as it is only there to confirm that the hash hit is indeed correct and therefore only becomes important in case of a cache miss. Provided the hash values are selected appropriately the probability of a cache miss occurring is small.
In embodiments, said processor is operable in response to said access request for said data item to delay accessing said plurality of tag directories until after said hash match and then to access only said tag directory corresponding to said cache way indicated by said hash match.
As the access of the tag directories using the physical tag is only used as a check and is not generally in the critical cache access path, it has been found to be advantageous to delay it until after the hash match. This means that only the cache tag directory indicated by the hash match need be accessed in most cases. In the case of a hash miss it would be necessary to access all the cache tag directories. However, in most cases only one of them needs to be accessed, which provides considerable power savings.
In some embodiments, said data processor is operable to perform a plurality of processes or a plurality of applications on said data, said data processor comprising: a data storage unit operable to store a process or application identifier defining a process or application that is currently being performed by said data processor on said data; wherein said hash value generator is operable to generate a hash value from at least some of said bits of said address and at least one bit of said process or application identifier.
The location within a cache used to store data may depend quite strongly on the process or application that the data processor is performing on that data. Thus, it has been found to be particularly advantageous to use data identifying the process or application being performed in the generation of the hash along with portions of the address itself.
In embodiments, said hash value generator is operable to generate said hash value by performing logical operations on said at least some bits of said address and said at least some bits of said process or application identifier.
A suitable hash value generator can be produced by performing logical operations on the bits of the address and the process or application identifier. It may be that not all bits of the address or all bits of the process or application identifier need to be used to provide a result that gives a reliable hint as to the data location.
In embodiments, each of said bits of said hash value is generated from a logical operation performed on at least one bit of said process or application identifier and at least two bits of said address.
A hash value which is likely to provide an accurate hint to where the data item is located is produced if each of the bits of the hash value is derived from a logic operation performed on at least one bit of the process or application identifier and some bits of the address.
In some embodiments, said at least two bits of said address are bits separated from each other by a number of bits, said number being similar in size to a number of bits of said hash value.
By producing individual bits from logical operations performed on bits of the address that are remote from each other rather than on bits that are adjacent, it has been found that a hash value that has a high probability of providing an accurate indication of where a data item is located is produced. In particular, separating them by approximately the number of bits of the hash value has been found to give an accurate indication.
In an embodiment of the invention said hash value generator is operable to generate at least one bit of said hash value by performing two step logical operations on said at least some bits of said address and said at least one bit of said process or application identifier.
Combining the bits of the address and the process or application identifier using logical operations is an effective way of producing the hash value. Increasing the number of logical operations performed can increase the number of bits that are used to produce the hash value and thus possibly increase its accuracy; however, it also extends the time required to create the hash values. For these reasons it has been found to be advantageous to use two step logical operations. This is a good compromise between combining several bits while not extending the length of time or circuit area required to produce these values.
In one embodiment, said hash generator is operable to generate at least one bit of said hash value by performing an additional first step comprising performing a logical operation on at least two of said process or application identifier bits.
Although it is advantageous to keep the number of steps of logical operations low, it can also be advantageous to combine several bits, which requires more steps. As the process or application identifier is available before the address, bits of it can be combined in an additional step without impacting on the time taken to produce the hash value. Thus, in some embodiments, some bits of the hash value are produced using a logically combined value of some of the bits of the process or application identifier.
A variety of different logical operations can be used to combine the bits to produce the hash value, for example, in some embodiments said logical operations comprise exclusive OR operations.
A further aspect of the present invention provides a method of processing data comprising the steps of: storing a data item in a storage location within a cache, said storage location being identified by an address; generating a hash value from at least some of said bits of said address, said hash value having fewer bits than said address; storing a plurality of hash values relating to a plurality of storage locations within said cache within a buffer; and in response to a request to access said data item storage location, comparing a hash value generated from said address with at least some of said plurality of hash values stored within said buffer.
A still further aspect of the present invention provides a means for processing data, said means for processing data comprising: a means for storing data in which a data item storage location within said means for storing data is identified by an address; a means for generating a hash value, said means being operable to generate a hash value from at least some of said bits of said address, said hash value having fewer bits than said address; a means for storing a plurality of hash values relating to a plurality of storage locations within said means for storing data; wherein in response to a request to access said data item storage location said means for processing data is operable to compare a hash value generated from said address with at least some of said plurality of hash values stored within said buffer.
The above, and other objects, features and advantages of this invention will be apparent from the following detailed description of illustrative embodiments which is to be read in connection with the accompanying drawings.
However, as the hash values are not necessarily unique, there is a possibility that the wrong data item has been accessed. Thus, in parallel to this data access a slower and more rigorous identification of the data item storage location is performed. In this case the virtual address 81 is fed to a TLB 30 and the virtual tag portion of the address is converted to a physical tag portion. The virtual index is input to the cache tag directories 40 and the four possible tags may be compared by comparator 60 to the physical tag. A match identifies which of the cache ways the data item is located in. A check can then be made to see if the data access performed in response to the hash value was indeed correct.
It should be noted that although in this embodiment a TLB is still consulted and the tags are compared, this access and comparison is no longer in the cache access path and as such does not slow down the procedure unless a cache miss has occurred. Thus, provided cache misses are rare the procedure is speeded up.
In alternative embodiments (see
In embodiments where there is a single hash buffer, the accessing of only one buffer can save power and time. Furthermore, in all embodiments the comparison of the smaller 6 bit hash values rather than the larger 19 bit tags provides both a time and power saving. However, the cache directories 40 are accessed in parallel to provide a check that the hash value provides the correct data item. Although, this does not increase the time of the cache access (unless there is a cache miss) it does mean that the potential reduction in power may not occur. However, as this cache directory access is not in the critical path it is possible to delay it by a cycle and then when the hash value 82 has produced a match with the hash buffer, the way indicated by the match can be used so that only access to the cache tag directory 40 relating to this way is made. By doing this significant power savings can be made.
The timing of the accesses is shown in
The following table, table 1, compares the number of RAM accesses of the traditional access system with that of the access system of embodiments in cases where a hit and miss occur. As can be seen, in the traditional method the four data RAMs of the cache 50 need to be accessed and the four tag RAMs of cache directory 40 need to be accessed in each case. In this embodiment only one of the RAMs of the cache directory needs to be accessed in the case of a cache hit, and although all four of the cache data RAMs of the cache 50 are accessed initially, only one bank of the sense amplifiers is enabled, the others being turned off in response to the hash value showing that they are not needed. In the case of a cache miss, such that the comparison of the physical tag with the indicated way of the cache tag directory does not give a corresponding hit, all four of the cache tag directories need to be accessed to determine where the data item is.
In summary by decoupling the TLB/Cache directory path from the data selection path cache access can be speeded up. Furthermore, removing this from the critical path also enables the cache tag directory access to be pushed out further allowing only one cache tag directory RAM to be accessed.
The use of a hash value and a hash value buffer can therefore improve data access times and reduce power consumption in cases where a cache hit occurs. It should be noted that the hash value only provides a hint as to the data storage location, and in the cases where a cache miss occurs the resulting process is worse than if the hash buffer was not there. It is therefore very important that the number of cache misses is kept low. The number of cache misses occurring depends on the type of logic operations that are performed and the data that is used to form the hash values. It has been found that using bits of the virtual address and bits of the process identifier or application specific identifier produces an algorithm giving an accurate hint as to the data location in most cases.
In the algorithm shown each hash bit is generated from virtual address bits that are separated by approximately the number of hash bits produced, in this case six. These bits are exclusive ORed together. For example, hash [0] is generated from virtual address bits [13], [19] and [26]. This spreads the bits of the virtual address across the bits of the hash value. The algorithm also spreads the ASID/PID bits across each hash bit. However, since the ASID/PID consists of eight bits and the hash value only consists of six bits, in this example three of the ASID/PID bits are XORed together and used to generate hash [0]. It should be noted that as the ASID/PID bits are available earlier than any of the virtual address bits, the extra gates to exclusive OR all these together do not slow down the generation of hash [0] compared to the other hash bits.
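The hash generation just described can be sketched as follows. Only the hash [0] grouping (VA bits [13], [19], [26] plus three ASID/PID bits) is stated explicitly above; the groupings chosen below for the remaining five bits follow the same spacing rule but are assumptions of this sketch, not the exact algorithm of the figure.

```python
HASH_BITS = 6

def hvab_hash(va, asid):
    """Form a 6-bit hash from VA[31:13] and an 8-bit ASID/PID.

    hash[i] XORs VA bits separated by roughly the hash width
    (13+i, 19+i, 26+i); hash[0] additionally folds in three ASID
    bits, the others one ASID bit each. The exact bit groupings
    beyond hash[0] are illustrative assumptions.
    """
    bit = lambda v, n: (v >> n) & 1
    h = 0
    for i in range(HASH_BITS):
        b = bit(va, 13 + i) ^ bit(va, 19 + i) ^ bit(va, 26 + i)
        if i == 0:
            # three ASID bits pre-combined; ASID is available early,
            # so this extra XOR level does not lengthen the path
            b ^= bit(asid, 0) ^ bit(asid, 1) ^ bit(asid, 2)
        else:
            b ^= bit(asid, 2 + i)  # one ASID bit for hash[1..5]
        h |= b << i
    return h
```

Spreading widely separated VA bits into each hash bit is what keeps the 6-bit hint close to the hit rate of a full tag compare.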
It has been found in the running of several benchmarks that although the use of hash values representing a subset of the tag bits instead of the entire tags can result in some performance degradation, in fact this has minimal impact on performance, and this slight degradation is more than compensated for by the increase in frequency that is achieved by this implementation.
Although it is desirable to reduce the number of cache misses, they will sometimes occur. For example, when using virtual indexes to access caches, a problem of aliasing, in which multiple virtual addresses map to the same physical address, may arise. This occurs in some caches where a virtual index does not uniquely identify all of the storage locations within that cache. In these cases, an index portion may not be sufficient to uniquely identify each of the cache lines and thus it may be that one of two lines is indicated by a particular index value. Data integrity is compromised if this is allowed to occur. This is a normal alias condition and is referred to hereinafter as the PC/ALIAS condition; it may be present in traditional cache systems where the index portion of the address does not comprise sufficient bits to uniquely identify every line within the cache.
Furthermore, in embodiments, a hash buffer has been created to provide an alternative to the cache directory when accessing cache data. The hash buffer contains reduced bit indications of address locations. As mentioned previously, comparing data within this hash buffer requires less power and less time than comparing full length addresses. However, by their very nature of being reduced bit values, the hash values do not provide a unique identification of a storage location within a cache and as such further aliasing may occur. The hashes are generated in such a way as to provide a good indication of storage location within the cache; however, it is always possible that a hash hit will occur which indicates an incorrect data storage location. In order to avoid incorrect data being accessed and used, a check is made of each data access. Provided most data accesses are correct, this check should not unduly slow down the processor. However, it is important that the check, and in particular the correction of any false cache hits, is done efficiently.
With this in mind, each data access is checked using the physical address derived from the virtual address, the TLB 30 and the cache tag directories 40 (see
In order to do this, the rest of the physical address tags within the line of the cache directory indicated by the virtual address are checked. If a cache hit is not found, then it is concluded that the data is not present in the cache, i.e. there is a hash hit/cache miss and the data needs to be written from memory to the cache. Thus, the storage location indicated by the hash hit is invalidated, or in some cases cleaned and invalidated and then data is retrieved from the memory and written to the storage location indicated by the hash hit. By writing the data to the storage location indicated by the hash hit, a hash value generated from this virtual address in the future should give a correct indication of the storage location. The instruction that caused the replay can then be reissued.
In the case that there is a cache hit, i.e. a physical address match is found in the line of the cache directory indicated by the address, but in a different cache way to that indicated by the hash, the hash buffer itself needs to be amended to avoid having two identical hashes within the same line of the hash buffer. Thus, in such a case as this the hash value within the hash buffer corresponding to the originally indicated storage location is invalidated (and the line potentially evicted if dirty) and the generated hash value is written to the storage location corresponding to the physical address hit. Thus, the hash buffer is repaired and future data accesses to this address should not encounter a false hash hit. The instruction that caused the stall can then be reissued.
If there is a cache hit, then the hash miss was a false miss and the memory access can be cancelled. The hash buffer then needs to be updated and the generated hash value should be written to the location corresponding to the location identified by the cache directory hit. This ensures that next time that data should be accessed there will be a hash hit. The instruction can then be reissued.
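The three repair cases described above (hash hit/cache miss, hash hit in the wrong way, and false hash miss) can be sketched as follows. This is a behavioural outline of the decisions, not the pipeline control logic; the representation of a buffer line as a list of per-way hashes, and the returned action labels, are assumptions of this sketch.

```python
def resolve_check(hash_hit_way, pa_hit_way, hash_line, generated_hash):
    """Decide the repair action after the physical-tag check completes.

    hash_hit_way: way indicated by the hash compare, or None on a hash miss.
    pa_hit_way:   way whose stored physical tag matched, or None on a miss.
    hash_line:    per-way stored hashes for the indexed buffer line (mutated).
    """
    if hash_hit_way is not None and pa_hit_way is None:
        # hash hit / cache miss: invalidate (clean if dirty) and refill the
        # line into the hash-indicated way, so future hashes hit correctly
        return ("refill", hash_hit_way)
    if hash_hit_way is not None and pa_hit_way is not None \
            and hash_hit_way != pa_hit_way:
        # false hash hit: remove the aliasing hash, install the generated
        # hash at the way the physical tag identified, then reissue
        hash_line[hash_hit_way] = None
        hash_line[pa_hit_way] = generated_hash
        return ("repair", pa_hit_way)
    if hash_hit_way is None and pa_hit_way is not None:
        # false hash miss: cancel the memory access, update the hash buffer
        hash_line[pa_hit_way] = generated_hash
        return ("update", pa_hit_way)
    # hash hit confirmed by the tag check, or a true miss with no hash hit
    return ("confirmed" if hash_hit_way is not None else "miss", hash_hit_way)
```

In every mismatch case the hash buffer ends up consistent with the tag directory, so the reissued instruction sees a correct hash hit.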
A further preferred embodiment is described below:
In a traditional physically tagged way associative data cache, the tag array and data array will have to be fired in parallel with the TLB lookup. Once the TLB generates the physical address, it gets compared with the tag array physical tags, which then generate the way hit signal to select the appropriate data array way. There are a couple of problems with this approach.
All the data ways have to be fired in parallel, which consumes a lot of power. A system using an embodiment of this invention, hereafter referred to as Tiger, will be used in wireless systems where low power consumption (less than a watt) is very important. To reduce power, the data array could be fired after the tag array has been fired and tag hits have been determined, but this would make the data cache a slower implementation with longer latency. Going from a 2 cycle cache access to a 3 cycle cache access would cause a 5-6% drop in performance. Also, all the way tags have to be fired in parallel.
Secondly, in the traditional implementation, in order to get a fast tag array hit, the TLB has to be made fast which implies that the adder that generates the virtual address has to be fast. Thus, there is a lot of pressure on the adder and TLB implementation.
In the HVAB (hashed virtual address buffer) approach, an array which is smaller than the tag array (in terms of number of bits to be compared) called the HVAB array is looked up first and a hashed virtual address (in Tiger's case 6 bits) is used to compare against this array. Not only is the field to be compared smaller, it also uses virtual address bits rather than physical address bits as is normally used in a traditional design. If there is a hit, only the hitting tag and data way are fired. The physical tag from the tag array is then compared with the TLB's PA to make sure that the hash hit is correct. While the tag validation is going on, the data array that was fired in parallel with the tag array way completes and the data is ready for alignment and forwarding to the various functional units.
The advantages of the HVAB scheme are:
If the hash lookup misses, then all 4 ways of the tag array are fired and compared with the TLB PA just as in a traditional design (please note that due to virtual aliasing problems, 8 ways have to be compared in the tag array: 4 ways belonging to VA[12]=1 and 4 ways belonging to VA[12]=0, where VA is the virtual address; this is described in more detail in later sections). However, the cache doesn't signal a hit even if the PA matches, as this is a case of hash alias: the instruction is replayed (pipes flushed to Idecode stage D2) while the hash is updated. The instruction, when re-executed, will get a hash and PA hit. If the PA also misses, then it is a cache miss and the access will be sent to L2. If the hash hits and the PA misses, then again the instruction is replayed, the miss resolved and the instruction re-executed. However, in this case, the same way as the hash hit way is replaced, as otherwise the hash array will have multiple ways with the same hash. When the hash hits and the PA misses, all 4 ways of the tag array have to be looked up to see if the line is in another way, as the hash hit could have been a false hash hit (an alias). The hash hit way will then have to be flushed out of the cache in addition to updating the hash for the tag way hit; otherwise, two different hashes in two ways would point to the same PA.
Another advantage of HVAB is that way hit information is known early and can be used to determine Store Buffer forwarding for loads/stores and store merging, which is another traditional speed path in a cache design where the PA is used. This is explained in detail later.
However, the HVAB scheme does suffer from the problem of aliasing due to the usage of virtual addresses.
Cache Organisation—HVAB array
Since the Data cache is 32K and 4 way set associative, each way represents 8K, which is 2^n where n=13, or Virtual Address (VA) bits 12:0. Since the cache line is 64 bytes or 512 bits, the HVAB array index will be VA 12:6, or 128 rows × 8 bits (6 hash bits + 1 Non-Secure Valid bit + 1 Secure Valid bit). The 128×8 array can be broken down physically to be 64×16 or 32×32 bits (4 hashes are represented in one row); a 32×32 array represents one way, and there are 4 such arrays for the 4 ways. Each way is then broken down further into 16×32 structures; thus there are 8 16×32 structures, 2 for each way. The 16 rows are indexed by VA 11:8, bit 12 is used to select between the two 16×32 structures for a way, and then bits 7:6 are used to select the appropriate hash before it is compared with the incoming VA hash. The Secure and Non-Secure valid bits cannot both be set at the same time; only one bit can be set at a time, and they are a copy of the NS bit from the TLB. If the MMU is disabled, then these bits are set using the processor Secure state bit.
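The bit slicing just described can be sketched as plain arithmetic. This models only the index derivation for the 32K, 4-way, 64-byte-line organisation above; the function name and tuple ordering are illustrative.

```python
def hvab_index(va):
    """Derive (bank, row, slot) for one way's HVAB structures.

    VA[11:8] indexes one of 16 rows, VA[12] selects between the two
    16x32 banks of a way, and VA[7:6] selects one of the 4 hashes
    packed in the 32-bit row.
    """
    row = (va >> 8) & 0xF    # VA[11:8]: one of 16 rows
    bank = (va >> 12) & 0x1  # VA[12]: which 16x32 structure
    slot = (va >> 6) & 0x3   # VA[7:6]: which hash within the row
    return bank, row, slot
```

Together these cover VA[12:6], the 128-entry index stated above, while keeping each physical array small and fast.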
The HVAB array is actually built as a register file and has two ports—one read and one write port. The D bits which were originally in the HVAB array have been moved to the data array—so, any reference to D bits in HVAB array should be ignored.
The contents of HVAB Array are:
Secure Valid bit
Non-Secure Valid bit
6 bit Hash
(total of 8 bits per entry)
In order to resolve virtual aliasing conditions, 4 ways from VA[12]=0 and 4 ways from VA[12]=1 are read out each cycle; the hash compare always takes place for the VA[12] bit that was produced by the adder. So, the 4 bit hash hit signal that is produced always corresponds to the VA[12] bit out of the adder. The valid bits from the opposite VA[12] bit are required in order to validate the 8 tag compares on a hash miss. We wouldn't have needed to do this had there been a 2:1 mux on the HVAB read port; but, since this would be a speed path, 4 extra valid bits need to be read out in case there is a hash miss.
Data cache organisation is shown in
Valid Bits and Reset of Valid Bits in HVAB Array
There are two bits in the HVAB array to indicate validity of the line: the Non-Secure valid bit and the Secure valid bit. When Tiger is in Secure mode, the Secure valid bit is set to 1 for cache allocations; when Tiger is in Non-Secure mode, the Non-Secure valid bit is set to 1. In Non-Secure mode, the Non-Secure valid bit is selected and used to validate the hash compare. In Secure mode, the OR of the Secure and Non-Secure valid bits is used to validate the hash compare; otherwise the same cache line could exist in two different places, differing only by the Secure/Non-Secure valid bit. A Secure mode process can access both Secure and Non-Secure lines.
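The valid-bit selection rule above can be summarised in a minimal sketch:

```python
def line_valid(secure_mode: bool, s_valid: int, ns_valid: int) -> int:
    # Secure mode can hit both Secure and Non-Secure lines;
    # Non-Secure mode only hits Non-Secure lines.
    return (s_valid | ns_valid) if secure_mode else ns_valid
```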
Resetting of the valid bits for the entire data cache is done out of reset through a state machine that cycles through all indices, or through CP15 cache maintenance operations that can invalidate a single entry.
Replacement Policies on a Cache Miss
A 4-bit random replacement counter is used when all ways are valid. The counter shifts left by one bit every clock cycle and is sampled on a miss to select the way to replace.
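One plausible reading of this counter, assumed here to be a rotating one-hot pointer (a plain left shift would empty a 4-bit register after four cycles), is:

```python
def next_replace_ptr(state: int) -> int:
    # Rotate a 4-bit one-hot value left by one each clock cycle; the position
    # of the set bit, sampled on a miss, names the way to replace.
    return ((state << 1) | (state >> 3)) & 0xF

state = 0b0001
for _ in range(5):
    state = next_replace_ptr(state)
print(bin(state))  # 0b10 after five rotations
```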
Virtual Hash
The virtual hash is formed from VA[31:13] and the Process ID (PID)/Address Space ID (ASID) by a two-level XOR. The PID/ASID is included so that the various processes are well represented in the virtual hash. Various hash schemes were studied: 5-bit, 6-bit and 7-bit hashes with various levels of XORs. A 6-bit hash has been chosen that gives almost the same hit rate as using the entire physical tag; there was a 1% degradation in hit rate due to aliasing. There can be aliases due to the following reasons:
If timing does not work out with the two-level XOR, then a one-level XOR will be used. At present the hash generation path meets timing, but if it becomes a speed path during implementation, the algorithm will be changed to a one-level XOR hash.
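The actual XOR groupings are not given here; the following is a hypothetical two-level XOR reduction of VA[31:13] and an 8-bit ASID into a 6-bit hash, purely to illustrate the structure:

```python
def virtual_hash(va: int, asid: int) -> int:
    # 19 VA bits (31:13) plus 8 ASID bits = 27 input bits (bit grouping assumed)
    inputs = [(va >> (13 + i)) & 1 for i in range(19)] + \
             [(asid >> i) & 1 for i in range(8)]
    # first XOR level: fold the 27 input bits into 12 partial terms
    level1 = [0] * 12
    for i, b in enumerate(inputs):
        level1[i % 12] ^= b
    # second XOR level: fold the 12 partial terms into 6 hash bits
    return sum((level1[i] ^ level1[i + 6]) << i for i in range(6))
```

Because VA[12:0] does not participate, two addresses with the same VA[31:13] and ASID always produce the same 6-bit hash; distinct pages can still collide, which is the aliasing source behind the 1% degradation noted above.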
Page Coloring Problem
As mentioned earlier, VA[12] and PA[12] cannot be guaranteed to be the same when multiple virtual addresses map to the same physical address: VA[12]=0 and VA[12]=1 can both map to the same PA[12]=0 (or 1). This means that when VA[12]=0 is used to index into the hash and tag arrays and a miss is detected, the miss is not an indication of a true miss, as the line can be at VA[12]=1, which could have brought the line in. Thus 8 different places have to be examined: 4 ways at VA[12]=0 and 4 ways at VA[12]=1. Once the line is detected at VA[12]=1 (or vice versa), the line at VA[12]=1 will be evicted to L2 and brought into VA[12]=0. Of course, a line at VA[12]=0 may have to be evicted to make room for the line being brought into VA[12]=0 from VA[12]=1, so this could involve two evictions.
More importantly, a hash miss requires a look at 8 different places. For integer loads, a replay is enabled in any case other than a hash hit with a PA hit. While waiting for the L2 data to come back, the 8 ways of the tag array can be looked up to see whether the line is present in any of the 8 places. If there is a tag hit, the L2 request is cancelled and the double eviction described above takes place. If there is no hit, the L2 request is allowed to proceed.
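A behavioural sketch of this 8-place probe (structure and names assumed for illustration):

```python
def probe_eight_places(tags, valids, pa_tag, va12):
    """On a hash miss, check 4 ways at the indexed VA[12] and 4 ways at the
    opposite VA[12]. Returns (hit, needs_double_eviction)."""
    for half in (va12, va12 ^ 1):
        for way in range(4):
            if valids[half][way] and tags[half][way] == pa_tag:
                # a hit in the opposite half means the line must be moved:
                # evict it to L2, then refill it at the indexed VA[12],
                # possibly evicting a victim there as well
                return True, half != va12
    return False, False
```

If `hit` is true the pending L2 request is cancelled; the opposite-half case triggers the double eviction described above.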
HVAB Array Organization to Solve Virtual Aliasing Conditions
The hash array is already broken down into smaller arrays that can be selected by the VA[12] bit. However, 8-way compares do not need to be implemented, for the following reason: only the 4-way hash hit for the appropriate VA[12] is needed, because only the appropriate way of the 4 ways of the data array corresponding to the VA[12] generated by the AGU is accessed, rather than accessing 8 ways as is done in the tag array. If the line is found at the other VA[12], it is pushed out to L2. However, since the valid bits are in the hash array, the valid bits corresponding to the 8 ways are read out for every access; the hash array is broken down into smaller sub-arrays of 8 entries, so twice the number of entries are accessed with respect to an organisation not supporting hardware page coloring. The valid bits are required to validate the 8-way tag compare and will be sent to the tag array on a lookup. On a replay, the valid bits will be saved and used to validate the 8-way tag compares, while for a Neon (SIMD processing) store access, the valid bits will be read out on E2 and used in E3 (see
Virtual Aliasing Solutions Explored
The following page coloring solutions were explored:
If the L1 cache hit rate is high, then for integer loads all 8 ways have to be accessed only on a miss and replay.
However, since Neon and No-Write-Allocate stores have to be streamed, all 8 ways of the tag array get accessed for every Neon access and every integer store access.
The integer store problem can be solved by making stores write-allocate; however, there is a performance loss when this is done at L1, as it displaces useful information and stores have to be exposed to the replay penalty. Since the percentage of stores that miss L1 is probably small, this is probably acceptable with respect to power.
Neon is the bigger problem, as every Neon memory access has to fire up all 8 tags. One solution being explored is to assume that Neon accesses will be marked Inner Non-Cacheable in the TLB, and to replay if an access is marked Cacheable. During the replay, the cache will be inspected to see whether the line is resident in L1; if not, the access will be sent to L2. If the line is found in L1, it is evicted to L2. If the OS cannot get the TLB settings right, performance for Neon accesses will be poor.
There can also be a mode bit indicating that the OS supports software page coloring, in which case only 4 of the 8 ways need to be fired, further reducing power.
Alias Types
Following are the various kinds of aliases possible:
The data array is always looked up using the 4-bit hash way hit/miss signal from the HVAB array corresponding to the VA[12] bit that was generated by the AGU adder.
Cache Pipelines
A simplified pipeline is shown in
The memory pipeline comprises three stages:
Although illustrative embodiments of the invention have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications can be effected therein by one skilled in the art without departing from the scope and spirit of the invention as defined by the appended claims.