The exemplary and non-limiting embodiments of this invention relate generally to data storage systems, devices, apparatus, methods and computer programs and, more specifically, relate to cache memory systems, devices, apparatus, methods and computer programs.
This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived, implemented or described. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
The following abbreviations that may be found in the specification and/or the drawing figures are defined as follows:
BO byte offset
CMH (multi-channel) cache miss handler
CPU central processing unit
DRAM dynamic random access memory
HW hardware
LSB least significant bit
MC multi-channel
MC_Cache multi-channel cache
MCMC multi-channel memory controller
MMU memory management unit
PE processing element
SIMD single instruction, multiple data
SW software
TLB translation look-aside buffer
VPU vector processing unit
μP microprocessor
Processing apparatus typically comprise one or more processing units and a memory. In some cases accesses to the memory may be slower than desired. This may be due to, for example, contention between parallel accesses and/or because the memory storage used has a fundamental limit on its access speed. To alleviate this problem a cache memory may be interposed between a processing unit and the memory. The cache memory is typically smaller than the memory and may use memory storage that has a faster access speed.
Multiple processing units may be arranged with a cache available for each processing unit. Each processing unit may have its own dedicated cache. Alternatively a shared cache memory unit may comprise separate caches with the allocation of the caches between processing units determined by an integrated crossbar.
The foregoing and other problems are overcome, and other advantages are realized, in accordance with the exemplary embodiments of this invention.
In a first aspect thereof the exemplary embodiments of this invention provide a method that comprises determining a need to update a multi-channel cache memory due at least to one of an occurrence of a cache miss or a data prefetch being needed; and operating a multi-channel cache miss handler to update at least one cache channel storage of the multi-channel cache memory from a main memory.
In another aspect thereof the exemplary embodiments of this invention provide an apparatus that comprises a multi-channel cache memory comprising a plurality of cache channel storages. The apparatus further comprises a multi-channel cache miss handler configured to respond to a need to update the multi-channel cache memory, due at least to one of an occurrence of a cache miss or a data prefetch being needed, to update at least one cache channel storage of the multi-channel cache memory from a main memory.
The foregoing and other aspects of the exemplary embodiments of this invention are made more evident in the following Detailed Description, when read in conjunction with the attached Drawing Figures, wherein:
The exemplary embodiments of this invention relate to cache memory in a memory hierarchy, and provide a technique to update data in a multi-channel cache at least when a cache miss occurs, or when a need exists to prefetch data to the multi-channel cache from a main memory. That is, the exemplary embodiments can also be used to prefetch data from a next level of the memory hierarchy to the multi-channel cache, without a cache miss occurring. The exemplary embodiments provide for refreshing data in the multi-channel cache, taking into account the unique capabilities of the multi-channel memory hierarchy. The exemplary embodiments enable a cache line update to be efficiently performed in the environment of a multi-channel cache memory.
Before describing in detail the exemplary embodiments of this invention it will be useful to review with reference to
Referring back to block 2 of
The association is recorded in suitable storage for future use. The association may be direct, for example, a cache block 20 (
In block 4 in
Thus, referring to
It should be noted, from
It should also be noted, that although the unique address spaces 10 are illustrated in
In some embodiments the memory access requests may be in respect of a single processing unit. In other embodiments the memory access requests may be in respect of multiple processing units.
In some embodiments the memory access requests may originate from the processing units that they are in respect of, whereas in other embodiments the memory access requests may originate at circuitry other than the processing units that they are in respect of. The response to a memory access request is returned to the processing unit that the memory access request is for.
The system 18 comprises: a plurality of cache channels 11A, 11B, 11C; arbitration circuitry 24; and multiple processing units 22A, 22B. Although a particular number of cache channels 11 are illustrated this is only an example, there may be M cache channels where M>1. Although a particular number of processing units 22 are illustrated this is only an example, there may be P processing units where P is greater than or equal to 1.
In this embodiment the first processing unit 22A is configured to provide first memory access requests 23A to the arbitration circuitry 24. The second processing unit 22B is configured to provide second memory access requests 23B to the arbitration circuitry 24. Each processing unit 22 can provide memory access requests to all of the cache channels 11A, 11B, 11C via the arbitration circuitry 24.
Each memory access request (depicted by an arrow 23) comprises a memory address. The memory access requests 23 may be described as corresponding to some amount of memory data associated with the memory address, which may be located anywhere in the main memory of the system.
The arbitration circuitry 24 directs a received memory access request 23, as a directed memory access request 25, to the appropriate cache channel based upon the memory address comprised in the request. Each cache channel 11 receives only the (directed) memory access requests 25 that include a memory address that lies within the unique address space 10 associated with the cache channel 11.
Each of the cache channels 11A, 11B, 11C serves a different unique address space 10A, 10B, 10C. A cache channel 11 receives only those memory access requests that comprise a memory address that falls within the unique address space 10 associated with that cache channel. Memory access requests (relating to different unique address spaces) are received and processed by different cache channels in parallel, that is, for example, during the same clock cycle.
However, as a single cache channel 11 may simultaneously receive memory access requests from multiple different processing units, the cache channel preferably includes circuitry for buffering memory access requests.
All of the cache channels 11A, 11B, 11C may be embodied within a single multi-channel unit, or distributed among single-channel units, multi-channel units, or a combination of single-channel and multi-channel units. The units may be distributed through the system 18 and need not be located at the same place.
In this example the arbitration circuitry 24 comprises input interfaces 28, control circuitry 30 and output interfaces 29.
In this particular non-limiting example the arbitration circuitry 24 comprises local data storage 27. In other implementations storage 27 may be in another component. The data storage 27 is any suitable storage facility which may be local or remote, and is used to store a data structure that associates each one of a plurality of defined, unique address spaces 10 with, in this example, a particular one of a plurality of different output interfaces 29.
In other implementations the association between each one of a plurality of defined, unique address spaces 10 with a cache channel may be achieved in other ways.
The input interface 28 is configured to receive memory access requests 23. In this example there are two input interfaces 28A, 28B. A first input interface 28A receives memory access requests 23A for a first processing unit 22A. A second input interface 28B receives memory access requests 23B for a second processing unit 22B.
Each of the output interfaces 29 is connected to only a respective single cache channel 11. Each cache channel 11 is connected to only a respective single output interface 29. That is, there is a one-to-one mapping between the output interfaces 29 and the cache channels 11.
The control circuitry 30 is configured to route received memory access requests 23 to appropriate output interfaces 29. The control circuitry 30 is configured to identify, as a target address, the memory address comprised in a received memory access request. The control circuitry 30 is configured to use the data storage 27 to identify, as a target unique address space, the unique address space 10 that includes the target address. The control circuitry 30 is configured to access the data storage 27 and select the output interface 29 associated with the target unique address space in the data storage 27. The selected output interface 29 is controlled to send the memory access request 25 to one cache channel 11 and to no other cache channel 11.
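By way of a non-limiting illustration, the following C sketch models in software the selection logic just described. The table-of-ranges representation of the unique address spaces 10, the structure and function names, and the example address ranges are illustrative assumptions only and do not define the actual circuitry or the data structure held in the storage 27.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical software model of the association held in the data storage 27:
 * each unique address space 10 is represented as a contiguous range mapped to
 * exactly one output interface 29 (and hence one cache channel 11). */
typedef struct {
    uint32_t base;      /* first address of the unique address space */
    uint32_t limit;     /* last address of the unique address space  */
    int      interface; /* output interface / cache channel number   */
} address_space_entry;

/* Return the output interface for a target address, or -1 if unmapped. */
static int select_output_interface(const address_space_entry *table,
                                   int entries, uint32_t target)
{
    for (int i = 0; i < entries; i++) {
        if (target >= table[i].base && target <= table[i].limit)
            return table[i].interface;   /* one and only one interface */
    }
    return -1;
}

int main(void)
{
    /* Example association: three unique address spaces, one per channel. */
    const address_space_entry storage27[] = {
        { 0x00000000, 0x3FFFFFFF, 0 },   /* address space 10A -> channel 11A */
        { 0x40000000, 0x7FFFFFFF, 1 },   /* address space 10B -> channel 11B */
        { 0x80000000, 0xBFFFFFFF, 2 },   /* address space 10C -> channel 11C */
    };
    uint32_t request_address = 0x40001234;   /* a memory access request 23 */
    printf("request 0x%08X -> cache channel %d\n", (unsigned)request_address,
           select_output_interface(storage27, 3, request_address));
    return 0;
}
```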
In this non-limiting example a received memory access request may be for any one of a plurality of processing units, and the selection of an output interface 29 is independent of the identity of the processing unit for which the memory access request was made.
In this non-limiting example the control circuitry 30 is configured to process in parallel multiple memory access requests 23 and select separately, in parallel, different output interfaces 29.
The arbitration circuitry 24 may comprise buffers for each output interface 29. A buffer would then buffer memory access requests 25 for a particular output interface/cache channel. The operation of the arbitration circuitry 24 may be described as follows. Memory access requests 23 are received from a plurality of processing units 22. A received first memory access request 23A that comprises a first memory address is sent to only a first cache channel 11A if the first memory address is from a defined first portion 10A of the address space of the memory, but not if the first memory address is from a portion 10B or 10C of the address space other than the defined first portion 10A; the first memory access request 23A is sent to only a second cache channel 11B if the first memory address is from a defined second portion 10B of the address space, but not if the first memory address is from a portion 10A or 10C of the address space other than the defined second portion 10B. Similarly, a received second memory access request 23B that comprises a second memory address is sent to only the first cache channel 11A if the second memory address is from the defined first portion 10A of the address space, but not if the second memory address is from a portion 10B or 10C of the address space other than the defined first portion 10A; the second memory access request 23B is sent to only the second cache channel 11B if the second memory address is from the defined second portion 10B of the address space, but not if the second memory address is from a portion 10A or 10C of the address space other than the defined second portion 10B.
The implementation of the arbitration circuitry 24 and, in particular, the control circuitry 30 can be in hardware alone, or it may have certain aspects in software including firmware alone, or it can be a combination of hardware and software (including firmware).
The arbitration circuitry 24 and, in particular, the control circuitry 30 may be implemented using instructions that enable hardware functionality, for example, by using executable computer program instructions in a general-purpose or special-purpose processor, which instructions may be stored on a computer readable storage medium (disk, semiconductor memory, etc.) to be executed by such a processor.
One or more memory storage units may be used to provide cache blocks for the cache channels. In some implementations each cache channel 11 may have its own cache block that is used to service memory access requests sent to that cache channel. The cache blocks may be logically or physically separated from other cache blocks. The cache blocks, if logically defined, may be reconfigured by moving the logical boundary between blocks.
The cache blocks 20A, 20B, 20C and 20D are considered to be isolated one from another as indicated by the dashed lines surrounding each cache block 20. ‘Isolation’ may be, for example, ‘coherency isolation’ where a cache does not communicate with the other caches for the purposes of data coherency. ‘Isolation’ may be, for example, ‘complete isolation’ where a cache does not communicate with the other caches for any purpose. The isolation configures each of the plurality of caches to serve a specified address space of the memory. As the plurality of caches are not configured to serve any shared address space of the memory, coherency circuitry for maintaining coherency between cache blocks is not required and is absent.
The plurality of parallel input ports 44A, 44B, 44C, and 44D are configured to receive, in parallel, respective memory access requests 25A, 25B, 25C and 25D. Each parallel input port 44 receives only memory access requests for a single unique address space 10.
In this example each of the plurality of parallel input ports 44 is shared by the processing units 22 (but not by the cache blocks 20) and is configured to receive memory access requests for all the processing units 22. The plurality of cache blocks 20 are arranged in parallel and, in combination, are configured to process in parallel multiple memory access requests from multiple different processing units.
Each of the plurality of cache blocks 20 comprises a multiplicity of entries 49. In general, each entry includes means for identifying an associated data word and its validity. In the illustrated example each entry 49 comprises a tag field 45 and at least one data word 46. In this example, each entry also comprises a validity bit field 47. Each entry 49 is referenced by a look-up index 48. It should be appreciated that this is only one exemplary implementation.
The operation of an individual cache block 20 is well documented in available textbooks and will not be discussed in detail. For completeness, however, a brief overview will be given of how a cache block 20 handles a memory (read) access request. Note that this discussion of the operation of an individual cache block 20 should not be construed as indicating that it is known to provide a plurality of such cache blocks 20 in the context of a multi-channel cache memory in accordance with exemplary aspects of the invention.
An index portion of the memory address included in the received memory access request 25 is used to access the entry 49 referenced by that index. A tag portion of the received memory address is used to verify the tag field 45 of the accessed entry 49. Successful verification results in a ‘cache hit’ and the generation of a hit response comprising the word 46 from the accessed entry 49. An unsuccessful verification results in a ‘miss’, a read access to the memory and an update to the cache.
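For purposes of illustration only, the conventional look-up just outlined may be modeled in C roughly as follows. The direct-mapped organization, the number of entries, the derivation of the index and tag portions, and the names used are illustrative assumptions and are not intended to define the internal organization of the cache blocks 20.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_ENTRIES 256          /* assumed number of entries 49 per cache block */

/* One entry 49: tag field 45, validity bit field 47, and one data word 46. */
typedef struct {
    uint32_t tag;
    bool     valid;
    uint32_t word;
} cache_entry;

typedef struct {
    cache_entry entries[NUM_ENTRIES];
} cache_block;

/* Model of a read access: the index portion of the address selects an entry,
 * the tag portion is compared against the stored tag field 45, and a hit
 * returns the stored word 46.  A miss would trigger a read of the memory and
 * an update of the entry (not shown). */
static bool cache_lookup(const cache_block *cb, uint32_t address, uint32_t *word_out)
{
    uint32_t index = (address >> 2) % NUM_ENTRIES;   /* assumed index portion */
    uint32_t tag   = (address >> 2) / NUM_ENTRIES;   /* assumed tag portion   */
    const cache_entry *e = &cb->entries[index];

    if (e->valid && e->tag == tag) {
        *word_out = e->word;      /* 'cache hit' */
        return true;
    }
    return false;                 /* 'miss': memory read and cache update follow */
}

int main(void)
{
    static cache_block block;              /* zero-initialized: all entries invalid */
    uint32_t addr = 0x1234, word = 0;

    /* Install the word so that an access to the same address then hits. */
    block.entries[(addr >> 2) % NUM_ENTRIES].tag   = (addr >> 2) / NUM_ENTRIES;
    block.entries[(addr >> 2) % NUM_ENTRIES].valid = true;
    block.entries[(addr >> 2) % NUM_ENTRIES].word  = 0xCAFEu;

    if (cache_lookup(&block, addr, &word))
        printf("hit, word = 0x%X\n", (unsigned)word);
    else
        printf("miss\n");
    return 0;
}
```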
In the illustrated example each cache block 20 has an associated dedicated buffer 42 that buffers received, but not yet handled, memory access requests for the cache channel. These buffers are optional, although their presence is preferred to resolve at least contention situations that can arise when two or more PUs attempt to simultaneously access the same cache channel.
The multi-channel cache memory unit 40 may, for example, be provided as a module. As used here ‘module’ may refer to a unit or apparatus that excludes certain parts/components that would be added by an end manufacturer or a user.
In this example, the arbitration circuitry 24 is an integral part of the accelerator 50. The accelerator 50 has a number of parallel interconnects 52 between the arbitration circuitry 24 and the multi-channel cache. Each interconnect connects a single output interface 29 of the arbitration circuitry 24 with a single cache input port 44.
The processing units 22 in this example include a general purpose processing unit (CPU) 22A, an application specific processing element (PE) 22B and a vector processing unit (VPU) 22C. The CPU 22A and the PE 22B generate their own memory access requests. The VPU 22C is a SIMD-type of processing element and, in this example, requires four parallel data words. Each processing unit executes its own tasks and individually accesses the memory 56.
Although
The system 18 in this embodiment, and also in previously described embodiments, may perform a number of functions. For example, the arbitration circuitry 24 may re-define the unique address spaces and change the association recorded in storage 27. As a consequence, each cache block 20 may become associated with a different unique address space 10.
The control circuitry 30 of the arbitration circuitry 24 is configured to access the data storage 27 to re-define the unique address spaces and configured to generate at least one control signal for the cache blocks 20 as a consequence.
The arbitration circuitry 24 may re-define the unique address spaces after detecting a particular predetermined access pattern to the memory by a plurality of processing units 22. For example, the arbitration circuitry 24 may identify a predetermined access pattern to the memory by a plurality of processing units and then re-define the unique address spaces 10 based on that identification. The redefinition of the unique address spaces may enable more efficient use of the cache channels by increasing the percentage of hits. For example, the redefinition may increase the probability that all of the cache channels are successfully accessed in each cycle. The MCC memory unit 40 is configured to respond to the control signal by setting all of the validity bit fields 47 in the multi-channel cache memory unit 40 to invalid. A single global control signal may be used for all the cache blocks 20, or a separate control signal may be used for each cache block 20. In some embodiments only portions of the unique address spaces 10 may be redefined, and the separate control signals may be used to selectively set validity bits in the MCC memory unit 40 to invalid.
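A minimal C sketch of this invalidation response is shown below; the representation of the control signal as a simple function call is an illustrative assumption only, and the types are the same illustrative ones used in the earlier look-up sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_ENTRIES 256                      /* assumed entries per cache block */
typedef struct { uint32_t tag; bool valid; uint32_t word; } cache_entry;
typedef struct { cache_entry entries[NUM_ENTRIES]; } cache_block;

/* Respond to a control signal by setting every validity bit field 47 in a
 * cache block 20 to invalid; a separate control signal per cache block would
 * simply invoke this on the selected block only. */
static void invalidate_cache_block(cache_block *cb)
{
    for (int i = 0; i < NUM_ENTRIES; i++)
        cb->entries[i].valid = false;
}
```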
Referring to
When the cache block 20 receives a memory access request 25 and generates a response 70 following a cache look-up, the response includes the identification reference(s) received in the memory access request.
Having thus described the exemplary embodiments of the invention described in commonly-owned PCT/EP2009/062076, the exemplary embodiments of this invention will now be described with respect to
It is first noted that HW parallelism in the form of multi-core processing, multi-channel cache and multi-channel DRAM can be expected to increase in order to enhance processing performance. The exemplary embodiments of this invention provide a miss handler for a multi-channel cache (a cache miss handler or CMH 102, shown in
The system shown in
In general, cache memory contents need to be updated in certain situations (e.g., when a cache miss occurs or when a cache prefetch is performed). That is, cache contents are loaded/stored from/to a next level of the memory hierarchy (such as DRAM 56 or Flash memory 118). However, in environments having several memory masters, multi-channel memory, and multi-channel cache, traditional cache update policies either will not be operable or will yield low performance.
Compared to traditional caches, the multi-channel cache (MC_Cache) 40 provides enhanced functionality. However, traditional techniques for handling cache misses may not be adequate. One specific question with the MC_Cache 40 is what data is accessed from the next level of the memory hierarchy. Another issue that may arise with the MC_Cache 40 is that several channels may access the same or subsequent addresses in several separate transactions, which can reduce bandwidth.
Contemporary caches take advantage of spatial locality of the accesses. That is, when some data element is accessed an assumption is made that data located close to that data element will probably be accessed in the near future. Therefore, when a miss occurs in the cache (i.e., a requested data element is not resident in the cache), not only is the required data brought into the cache, but data around the required address is fetched into the cache as well. The amount of accessed data may be referred to as a "cache line" or as a "cache block".
The multi-channel cache miss handler (CMH) 102 shown in
The exemplary embodiments of the CMH 102 provide a number of cache update methods (described in detail below) to update the MC_Cache 40 from the next level of the memory hierarchy (or from any following level of the memory hierarchy) when a cache miss occurs. Moreover, the CMH 102 operates to combine the accesses from several cache channels when possible. The CMH 102 may fetch data into channels other than the channel that produced the miss, and may also combine accesses initiated from several cache channels.
Describing now in greater detail the cache update methods, the memory address interpretation, including the channel allocation, can be explained as follows. Assume a 32-bit address space and a 4-channel (Ch) MC_Cache 40 as shown in
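While the exact allocation of address bits is shown in the referenced figure, an interleaving that is consistent with the worked examples below (e.g., address 12 residing in channel 1 at index 1) can be modeled as in the following C sketch, in which each channel owns interleaved 8-byte (two-word) blocks. The constants and function names are illustrative assumptions derived from those examples and do not define the actual address interpretation.

```c
#include <stdio.h>

/* Assumed parameters, chosen to be consistent with the worked examples that
 * follow: 32-bit (4-byte) words, 8-byte (two-word) blocks, interleaved over
 * four cache channels. */
#define WORD_BYTES   4u
#define BLOCK_BYTES  8u
#define NUM_CHANNELS 4u

/* Channel allocation: consecutive 8-byte blocks rotate through the channels. */
static unsigned channel_of(unsigned addr)
{
    return (addr / BLOCK_BYTES) % NUM_CHANNELS;
}

/* Index within a channel: one index per word held by that channel. */
static unsigned index_of(unsigned addr)
{
    unsigned words_per_block = BLOCK_BYTES / WORD_BYTES;
    unsigned sweep = addr / (BLOCK_BYTES * NUM_CHANNELS); /* full passes over all channels */
    unsigned word_in_block = (addr % BLOCK_BYTES) / WORD_BYTES;
    return sweep * words_per_block + word_in_block;
}

int main(void)
{
    /* Address 12 decodes to channel 1, index 1, matching the examples below. */
    for (unsigned addr = 0; addr < 48; addr += WORD_BYTES)
        printf("address %2u -> Ch%u, In%u\n", addr, channel_of(addr), index_of(addr));
    return 0;
}
```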
The following examples pertain to cache data update methods from the next level of the memory hierarchy. Unless otherwise indicated these non-limiting examples assume that a miss occurs on each access to the MC_Cache 40.
In a conventional (non-multi-channel) cache the cache line is straightforwardly defined. For example, with 32-bit words and a cache line length of 16 bytes, addresses 0 . . . 15 form a single line, addresses 16 . . . 31 form a second line, and so on. Thus, the cache lines are aligned next to each other. In this case, when a processor accesses one word from address 12 (and a cache miss occurs), the entire line is updated in the cache; that is, data from addresses 0 to 15 is accessed from the main memory and stored in the cache.
As an example for the MC_Cache 40, assume the use of four channels (Ch0, Ch1, Ch2, Ch3), and assume the addresses are allocated as shown in
1) The first possibility is to access only the data that caused the cache miss to occur (i.e., a word from address 12 in this case).
2) A second possibility is to access a cache line length of data only to the channel where the miss occurs. Address 12 is located in channel 1 (Ch1) at index 1 (In1); therefore, indexes In0, In1, In2 and In3 in channel 1 are updated. In this example this means addresses 8-15 and 40-47.
3) A third possibility is to access addresses from 0 to 15, meaning that two of the cache channels (Ch0 and Ch1) are updated although a miss occurs only in one channel. This is based on the assumption that the desired cache line size is 16 bytes.
Optionally, a cache line amount of data is accessed from both the channels (Ch0 and Ch1). In this case addresses 0 to 15 and 32 to 47 are accessed.
4) A fourth possibility is to access the same index of all of the cache channels. Therefore, since a miss occurs at address 12 (index 1 in channel 1), data is updated to index 1 in all of the channels (addresses 4, 12, 20, and 28). In this case the same amount of data is loaded to all channels of the MC_Cache 40 from the main memory 56. With an optional minimum cache line granularity for each channel, the accessed addresses are from 0 to 63, resulting in a total of 64 bytes being updated. (These four possibilities are contrasted in the program sketch following this list.)
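Under the assumed interleaving of the earlier sketch, the four possibilities may be contrasted in C as follows. The 16-byte line size, the helper that maps a (channel, index) pair back to a memory address, and all names are illustrative assumptions; for a miss at address 12 the program reproduces the address sets listed above.

```c
#include <stdio.h>

#define WORD_BYTES   4u
#define BLOCK_BYTES  8u
#define NUM_CHANNELS 4u
#define LINE_BYTES   16u   /* assumed cache line length per channel */

static unsigned channel_of(unsigned a) { return (a / BLOCK_BYTES) % NUM_CHANNELS; }
static unsigned index_of(unsigned a)
{
    return (a / (BLOCK_BYTES * NUM_CHANNELS)) * (BLOCK_BYTES / WORD_BYTES)
         + (a % BLOCK_BYTES) / WORD_BYTES;
}

/* Inverse mapping: the memory address of the word held at (channel, index). */
static unsigned addr_of(unsigned ch, unsigned idx)
{
    unsigned words_per_block = BLOCK_BYTES / WORD_BYTES;
    return (idx / words_per_block) * (BLOCK_BYTES * NUM_CHANNELS)
         + ch * BLOCK_BYTES
         + (idx % words_per_block) * WORD_BYTES;
}

int main(void)
{
    unsigned miss = 12;                      /* word access that missed */
    unsigned ch   = channel_of(miss);        /* -> channel 1            */
    unsigned idx  = index_of(miss);          /* -> index 1              */
    unsigned words_per_line = LINE_BYTES / WORD_BYTES;
    unsigned line_start = (idx / words_per_line) * words_per_line;

    /* 1) Only the word that caused the miss. */
    printf("1) address %u\n", miss);

    /* 2) A cache line worth of data, but only in the missing channel:
     *    indexes In0..In3 of channel 1, i.e. addresses 8-15 and 40-47. */
    printf("2)");
    for (unsigned i = line_start; i < line_start + words_per_line; i++)
        printf(" %u", addr_of(ch, i));
    printf("\n");

    /* 3) A cache line of subsequent addresses: the aligned 16 bytes that
     *    contain the miss (addresses 0-15), touching channels 0 and 1.  */
    unsigned base = miss - (miss % LINE_BYTES);
    printf("3)");
    for (unsigned a = base; a < base + LINE_BYTES; a += WORD_BYTES)
        printf(" %u", a);
    printf("\n");

    /* 4) The same index in every channel: addresses 4, 12, 20 and 28.   */
    printf("4)");
    for (unsigned c = 0; c < NUM_CHANNELS; c++)
        printf(" %u", addr_of(c, idx));
    printf("\n");
    return 0;
}
```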
Another example with the MC_Cache 40 pertains to the case where memory spaces allocated to separate channels are relatively large. As an example with two channels, addresses 0 . . . 4K−1 belong to channel 0 (K=1024), addresses 4K . . . 8K−1 belong to channel 1, addresses 8K . . . 12K−1 to channel 0, and so on. This condition is shown in
A) Addresses 12 . . . 15 are updated;
B) Addresses 0 . . . 15 are updated (indexes In0 . . . In3 in channel 0);
C) Addresses 0 . . . 15 are updated; or
D) Update addresses 12 and 4K+12 (index In3 in both channels 0 and 1).
Thus, only 8 bytes are accessed in case D) since two channels exist in this example. Optionally, the accessed addresses would be 0 . . . 15 and 4K . . . 4K+15. A total of 32 bytes are accessed in this case.
To summarize the cache update methods consider the following.
The multi-channel cache miss handler 102 has the potential to operate with several cache update methods to update the MC_Cache 40 from the next level of memory hierarchy (or from any following level of memory hierarchy) when a cache miss occurs. The multi-channel cache miss handler 102 can switch from using one particular update method to using another, such as by being programmably controlled from the MMU 100. The cache update methods are designated as A, B, C and D below, and correspond to the possibilities 1, 2, 3 and 4, respectively, that were discussed above.
Cache update method A): Update just the data that caused the cache miss to happen. However, this approach may not be efficient due to, for example, the manner in which DRAM read operations to the memory 56 are implemented.
Cache update method B): Update a cache line worth of data for a single cache channel storage. Therefore, data is updated only to the cache channel where the miss occurs.
Cache update method C): Update a cache line worth of data from subsequent addresses. In this case data can be updated to several cache channels.
Cache update method D): Update the same index in all of the channels. In this case data is updated to all of the channels, producing the same bandwidth to all the channels.
Methods C and D can optionally be utilized with a minimum granularity of one cache line per channel. In this case an aligned cache line is the smallest amount of data accessed for a single channel.
The size of the cache line can be selected more freely than in a traditional system. A typical cache line is 32 or 64 bytes. Since some of the above methods multiply the number of refresh (i.e., multi-channel cache update) actions by the number of channels, it may be desirable to limit the size of the cache line. The minimum efficient cache line size is basically determined by the memory technology (mainly by the size of read bursts).
For efficient usage, the configuration of the next level of the memory hierarchy (e.g., multi-channel main memory) is preferably taken into account, together with the multi-channel cache configuration, when applying the above-mentioned methods.
Discussed now is vector access and combination of accesses.
In this example parallel accesses produce a miss at address 4 and a miss at address 12. The accessed addresses are as follows according to the above-described methods B, C and D (assume a cache line length of 16 bytes; method A is not shown in this example):
1) Due to the miss in address 4, addresses 0, 4, 16, 20 are accessed (channel 0 indexes In0, In1, In2, and In3). Due to the miss in address 12, addresses 8, 12, 24, 28 are accessed (channel 1 indexes In0, In1, In2, and In3).
2) Due to the miss in address 4, addresses 0, 4, 8, 12 are accessed. Due to the miss in address 12, addresses 0, 4, 8, 12 are accessed.
3) Due to the miss in address 4, addresses 4 and 12 are accessed (channels 0 and 1, index In1). Due to the miss in address 12, addresses 4 and 12 are accessed (channels 0 and 1, index In1).
In these methods the accesses can be combined as follows.
1) Combine as a single access: access addresses 0 to 28 as a single long transaction. This will typically produce better performance than the use of two separate accesses due to characteristics of contemporary buses, DRAMs, and Flash memories, which tend to operate more efficiently with longer access bursts than with shorter access bursts.
2) There are two similar accesses. Combine accesses as a single access (access addresses 0 to 12).
3) There are two similar accesses. Combine accesses as a single access (access addresses 4 and 12).
To conclude the combination of accesses, the multi-channel cache miss handler 102 combines the accesses from the several cache channels when possible. Generally, duplicate accesses to the same addresses are avoided and longer access transactions are formed when possible.
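A simple software model of this combining step is sketched below in C. The representation of each access as an inclusive byte range, and the rule of fusing duplicate, overlapping or back-to-back ranges into a single longer burst, are illustrative assumptions rather than a description of the actual miss handler implementation; the example input corresponds to the two method-B miss responses discussed above.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One requested main-memory access, expressed as an inclusive byte range. */
typedef struct { uint32_t first; uint32_t last; } mem_access;

static int cmp_access(const void *a, const void *b)
{
    const mem_access *x = a, *y = b;
    return (x->first > y->first) - (x->first < y->first);
}

/* Combine accesses from several cache channels: sort them, drop duplicates,
 * and fuse overlapping or back-to-back ranges into single longer transactions.
 * Returns the number of combined accesses written back into 'req'. */
static int combine_accesses(mem_access *req, int n)
{
    if (n == 0) return 0;
    qsort(req, n, sizeof req[0], cmp_access);
    int out = 0;
    for (int i = 1; i < n; i++) {
        if (req[i].first <= req[out].last + 1) {        /* overlap or adjacent */
            if (req[i].last > req[out].last)
                req[out].last = req[i].last;
        } else {
            req[++out] = req[i];
        }
    }
    return out + 1;
}

int main(void)
{
    /* Accesses produced by the two misses under method B in the example above:
     * channel 0 needs bytes 0-7 and 16-23, channel 1 needs bytes 8-15 and 24-31. */
    mem_access req[] = { {0, 7}, {16, 23}, {8, 15}, {24, 31} };
    int n = combine_accesses(req, 4);
    for (int i = 0; i < n; i++)               /* prints one burst: bytes 0..31 */
        printf("burst: %u..%u\n", (unsigned)req[i].first, (unsigned)req[i].last);
    return 0;
}
```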
One approach to implement the MC_Cache 40 is to utilize traditional cache storages and separate cache miss handlers 102 as building blocks.
Another approach to implement the MC_Cache 40 uses a more distributed version of the general cache miss handler 102 and is shown in
It can be noted that the embodiment of
There are a number of technical advantages and technical effects that can be realized by the use of the exemplary embodiments of this invention. For example, and with respect to the four cache update methods A-D discussed above, there is an enhanced usable bandwidth towards the next level of the memory hierarchy because (a) accesses from several cache channels to the same address are combined and (b) accesses to subsequent addresses are combined to form a single longer access transaction. This is relatively faster due to the characteristics of DRAM, Flash memory, and conventional interconnections, which typically are more efficient when used with long access bursts.
With specific regard to the update method B, this method is simpler to implement with standard cache units and allows enhanced parallel implementations.
With specific regard to the update method C, from an application perspective spatial locality is utilized as with traditional caches.
With specific regard to the update method D, an advantage is that the utilized throughput is equal in all the cache channels.
Based on the foregoing it should be apparent that the exemplary embodiments of this invention provide a method, apparatus and computer program(s) to provide a miss handler for use with a multi-channel cache memory. In accordance with the exemplary embodiments the cache miss handler 102, which may also be referred to without a loss of generality as a multi-channel cache update handler, is configured to operate as described above at least upon an occurrence of a multi-channel cache miss condition, and upon an occurrence of a need to prefetch data to the multi-channel cache 40 for any reason.
Further in accordance with the method shown in
Further in accordance with the method as recited in the previous paragraphs, where the multi-channel cache miss handler updates a cache line for a single cache channel storage, where the updated cache line includes the data that caused the cache miss to occur.
Further in accordance with the method as recited in the previous paragraphs, where the multi-channel cache miss handler updates a cache line for an address subsequent to an address that caused the cache miss to occur.
Further in accordance with the method as recited in the preceding paragraph, where updating the cache line for an address subsequent to the address that caused the cache miss to occur updates data for a plurality of cache channel storages.
Further in accordance with the method as recited in the previous paragraphs, where the multi-channel cache miss handler updates data associated with a same index in each cache channel storage.
Further in accordance with the method as recited in the previous paragraphs, where the update occurs with a minimum granularity of a single cache line for a single channel of the multi-channel cache memory.
Further in accordance with the method as recited in the previous paragraphs, where the multi-channel cache miss handler operates, when updating a plurality of cache channel storages, to combine accesses to the main memory for the plurality of cache storages.
Further in accordance with the method as recited in the previous paragraphs, where each individual cache channel storage is served by an associated cache miss handler, where the cache miss handlers together form a distributed multi-channel cache miss handler.
Further in accordance with the method as recited in certain ones of the previous paragraphs, where each individual cache channel storage is served by a single centralized multi-channel cache miss handler.
Further in accordance with the method as recited in the previous paragraphs, where the multi-channel cache memory comprises a plurality of parallel input ports, each of which corresponds to one of the channels, and is configured to receive, in parallel, memory access requests, each parallel input port is configured to receive a memory access request for any one of a plurality of processing units, and where the multi-channel cache memory further comprises a plurality of cache blocks wherein each cache block is configured to receive memory access requests from a unique one of the plurality of input ports such that there is a one-to-one mapping between the plurality of parallel input ports and the plurality of cache blocks, where each of the plurality of cache blocks is configured to serve a unique portion of an address space of the memory.
Also encompassed by the exemplary embodiments of this invention is a tangible memory medium that stores computer software instructions the execution of which results in performing the method of any one of preceding paragraphs.
The exemplary embodiments also encompass an apparatus that comprises a multi-channel cache memory comprising a plurality of cache channel storages; and a multi-channel cache miss handler configured to respond to a need to update the multi-channel cache memory, due at least to one of an occurrence of a cache miss or a data prefetch being needed, to update at least one cache channel storage of the multi-channel cache memory from a main memory.
In general, the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the exemplary embodiments of this invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
It should thus be appreciated that at least some aspects of the exemplary embodiments of the inventions may be practiced in various components such as integrated circuit chips and modules, and that the exemplary embodiments of this invention may be realized in an apparatus that is embodied as an integrated circuit. The integrated circuit, or circuits, may comprise circuitry (as well as possibly firmware) for embodying at least one or more of a data processor or data processors, a digital signal processor or processors, baseband circuitry and radio frequency circuitry that are configurable so as to operate in accordance with the exemplary embodiments of this invention.
Various modifications and adaptations to the foregoing exemplary embodiments of this invention may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. However, any and all modifications will still fall within the scope of the non-limiting and exemplary embodiments of this invention.
It should be noted that the terms “connected,” “coupled,” or any variant thereof, mean any connection or coupling, either direct or indirect, between two or more elements, and may encompass the presence of one or more intermediate elements between two elements that are “connected” or “coupled” together. The coupling or connection between the elements can be physical, logical, or a combination thereof. As employed herein two elements may be considered to be “connected” or “coupled” together by the use of one or more wires, cables and/or printed electrical connections, as well as by the use of electromagnetic energy, such as electromagnetic energy having wavelengths in the radio frequency region, the microwave region and the optical (both visible and invisible) region, as several non-limiting and non-exhaustive examples.
The exemplary embodiments of this invention are not to be construed as being limited for use with only the number (32) of address bits described above, as more or fewer address bits may be present in a particular implementation. Further, the MC_Cache 40 may have any desired number of channels equal to two or more, in which case a number of memory address bits other than two may be decoded to identify a particular channel number of the multi-channel cache. For example, if the MC_Cache 40 is constructed to include eight parallel input ports then three address bits can be decoded to identify one of the parallel input ports (channels). The numbers of bits of the tag and index fields may also be different than the values discussed above and shown in the Figures. Other modifications to the foregoing teachings may also occur to those skilled in the art, however such modifications will still fall within the scope of the exemplary embodiments of this invention.
Furthermore, some of the features of the various non-limiting and exemplary embodiments of this invention may be used to advantage without the corresponding use of other features. As such, the foregoing description should be considered as merely illustrative of the principles, teachings and exemplary embodiments of this invention, and not in limitation thereof.