Unless otherwise indicated herein, the materials described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
In multi-core microprocessor systems, each silicon die processor may contain multiple processing elements (“cores”). These cores may have the ability to parallel process vast amounts of data, using algorithms that may be diversified per core. Some algorithms require that threads of execution (“threads”) execute in parallel on multiple cores in a cooperative manner. In these situations, sharing of data may be essential.
One way to support sharing of data between threads executing on multi-core microprocessors is to supply each core with a respective cache coherent memory mechanism, which may include a cache and a cache controller. Generally, these mechanisms work in hardware to maintain the status of main memory contents that may be present in one or more of the cores' caches.
Two classes of schemes may be utilized to maintain cache coherence, namely bus snoop schemes and coherence directory schemes. In bus snooping cache coherence schemes, the cache controller in each core of the processor monitors an interconnect that couples the processor to a memory to detect writes to and reads from the memory, and then updates the corresponding cache lines accordingly. The bus snooping scheme operates under the assumption that the interconnect is globally observable by all of the cache controllers. The present disclosure appreciates that such interconnects do not scale well, and may not support multi-core microprocessors with a large number of cores per die, such as in excess of 16 cores per die.
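For illustration only, the bus snooping scheme described above may be sketched as follows. The classes, method names, and invalidate-on-write snoop policy are assumptions made for this sketch and are not part of the disclosure:

```python
# Illustrative sketch of a bus-snooping scheme: every cache controller
# observes every write on a shared, globally-observable interconnect and
# invalidates its own copy of the written line.

class SnoopingCache:
    def __init__(self, bus):
        self.lines = {}                      # address -> cached data
        self.bus = bus
        bus.attach(self)

    def load(self, address):
        if address not in self.lines:        # cache miss: fetch from memory
            self.lines[address] = self.bus.memory[address]
        return self.lines[address]

    def store(self, address, value):
        self.lines[address] = value
        self.bus.broadcast_write(self, address, value)  # peers snoop this

    def snoop_write(self, address):
        self.lines.pop(address, None)        # invalidate the stale copy


class Bus:
    """Globally-observable interconnect; every controller sees every write."""
    def __init__(self, memory):
        self.memory = memory
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def broadcast_write(self, writer, address, value):
        self.memory[address] = value
        for cache in self.caches:
            if cache is not writer:
                cache.snoop_write(address)
```

Note that every write is broadcast to every attached cache controller; this traffic grows with the number of cores, which illustrates why such interconnects may not scale to a large number of cores per die.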
The second class of cache coherence schemes employs a coherence directory that is maintained either in main memory or in a combination of main memory and the individual caches. Entries (“descriptors”) in this coherence directory store the status of respective sets of memory locations, such as cache-line-sized rows of main memory. The status information stored in the descriptors may include, for example, whether a particular cache-line-sized row of main memory is cached in a particular set of caches.
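For illustration only, one possible layout of such a coherence directory may be sketched as follows; the field names and coherence states are assumptions for this sketch and are not part of the disclosure:

```python
# Illustrative coherence-directory layout: one descriptor per
# cache-line-sized row of main memory, recording which cores' caches
# hold a copy of the row and the row's coherence state.

from dataclasses import dataclass, field

@dataclass
class DirectoryDescriptor:
    state: str = "uncached"                    # e.g. "uncached", "shared", "dirty"
    sharers: set = field(default_factory=set)  # ids of cores caching this row

class CoherenceDirectory:
    """One descriptor per cache-line-sized row of main memory."""
    def __init__(self, num_rows):
        self.descriptors = [DirectoryDescriptor() for _ in range(num_rows)]

    def record_read(self, row, core_id):
        # A core that reads the row becomes a recorded sharer.
        descriptor = self.descriptors[row]
        descriptor.sharers.add(core_id)
        if descriptor.state == "uncached":
            descriptor.state = "shared"

    def is_cached_by(self, row, core_id):
        return core_id in self.descriptors[row].sharers
```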
The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several examples in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings, in which:
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative examples described in the detailed description, drawings, and claims are not meant to be limiting. Other examples may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly and implicitly contemplated and made a part of this disclosure.
Described herein, inter alia, are examples of methods, apparatus, computer programs and systems related to multi-core parallel processing directory-based cache coherence.
When one of the plurality of processor cores 101 initiating a read request determines that read data are not present in the corresponding cache 108, the one of the plurality of processor cores 101 may request the data from the main memory 103. In directory-based cache coherence schemes, a directory descriptor 104 for the associated set of memory locations, such as a cache-line-sized row 105, in the main memory 103 may be updated with information indicating that the particular one of the plurality of processor cores 101 that initiated the read request now has a copy of this data. The cache-line-sized row 105 in the main memory 103 comprises the data bits that may be stored in each row of the cache memory 108 in each of the plurality of processor cores 101. A separate directory descriptor 104 may be provided for each cache-line-sized row 105 in the main memory 103. Each directory descriptor 104 may contain a record of all of the plurality of processor cores 101 having a cache 108 that contains the data stored in the respective cache-line-sized row 105.
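For illustration only, the read-miss flow described above may be sketched as follows; the function name and the plain-dict representation of the descriptor are assumptions for this sketch:

```python
# Illustrative read-miss flow in a directory-based scheme: on a miss,
# the core fetches the row from main memory, and the row's directory
# descriptor is updated to record that this core now holds a copy.

def handle_read(core_id, row, cache, main_memory, directory):
    if row in cache:                        # cache hit: no directory traffic
        return cache[row]
    data = main_memory[row]                 # cache miss: fetch from memory
    cache[row] = data
    directory[row]["sharers"].add(core_id)  # descriptor now lists this core
    return data
```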
When one of the plurality of processor cores 101 in a set, such as core i, asserts a write request to cause the respective one of the plurality of memory controllers 102 for the set to write data in the main memory 103, the processor core may also cause the memory controller 102 to inform the main memory 103 to update the status of the line of the directory descriptor 104 in the main memory 103. For example, the status of the line of the directory descriptor may be changed to “dirty” or “exclusive” depending on the specific example of the directory-based cache coherence scheme. The status of the line of the directory descriptor 104 may be changed by the directory descriptor 104 for the associated cache-line-sized row 105 to which the data are written, providing an indication to all of the plurality of processor cores 101 having a cache that contains the data that have been overwritten. The processor core i 101 may then cause the respective one of the memory controllers 102 to cause an indication to be stored in the main memory 103 that marks the cached data for the associated cache-line-sized row 105 as invalid so that a cache miss will occur if the processor core i 101 subsequently asserts a read request to cause the respective one of the plurality of memory controllers 102 for the set to attempt to read the data stored in the cache-line-sized row 105. A cache miss may cause the processor core i 101 to assert a read request to cause the respective one of the plurality of memory controllers 102 for the set to read the data from the main memory 103 and may also cause the read data to be stored in the cache 108 for that processor core i 101.
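For illustration only, the write flow described above may be sketched as follows. The invalidate-all policy (including the writer's own copy, so that a subsequent read misses and refetches, as in the example above) is one assumption for this sketch; the exact policy depends on the specific directory-based scheme:

```python
# Illustrative write flow: the writing core updates main memory, the
# row's descriptor is marked "dirty", and every recorded sharer's cached
# copy of the row is invalidated so that a later read misses and
# refetches the data from main memory.

def handle_write(core_id, row, value, caches, main_memory, directory):
    main_memory[row] = value
    descriptor = directory[row]
    descriptor["state"] = "dirty"          # or "exclusive", per the scheme
    # Invalidate every cached copy, including the writer's own, so the
    # next read of this row causes a cache miss and a refetch.
    for sharer in descriptor["sharers"]:
        caches[sharer].pop(row, None)
    descriptor["sharers"] = set()
```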
If another one of the plurality of processor cores 101, e.g., core j, asserts a request to cause the respective one of the plurality of memory controllers 102 for the set to access a line after the processor core i has asserted a write request to cause the respective one of the plurality of memory controllers 102 for the set to write data in the main memory 103, the processor core j may assert a request for the respective one of the memory controllers 102 to ask the main memory 103 for the status of this line, which may then be retrieved from the main memory 103 using the directory descriptor 104. When many of the plurality of processor cores 101 are present in the multi-core processor 100, the number of directory accesses may become quite high. The multi-core processor system 10 shown in
When one of the plurality of processor cores 101 first asserts a write request to cause the respective one of the plurality of memory controllers 102 for the set, e.g., memory controller 1, to access a cache-line-sized row 105 in the main memory 203, the corresponding directory descriptor cache 206, i.e., DDC 1, should be updated to show that DDC 1 has the copy 208 of the directory descriptor 104 for that cache-line-sized row 105. This directory descriptor 104 may provide an indication 210 that the cache 108 for the processor core accessing the cache-line-sized row 105 in the main memory 203 has the copy of the accessed data 212. The directory descriptor 104 may also provide a record 214 of the processor cores 101 having a cache 108 that contains the copy of the accessed data 212. However, the other directory descriptor caches 206 should also be updated to show that DDC 1 has the copy 208 of the directory descriptor 104 for the accessed data 212 from the cache-line-sized row 105. Similarly, if another of the plurality of processor cores 101 accesses the cache-line-sized row 105 through a different one of the plurality of memory controllers 102, e.g., memory controller 2, the directory descriptor 104 for that cache-line-sized row 105 stored in DDC 1 should be updated using one of several different techniques in different examples.
In one example, when one of the DDCs 206 is updated, the updated one of the DDCs 206 provides the update information to the others of the DDCs 206. For example, if one of the processor cores 101, e.g., processor core 2, initiates a write or read of data to/from main memory 203 through memory controller 1, the copy 208 of the directory descriptor 104 stored in DDC 1 206 may be updated to provide an indication that the cache 108 for the processor core 2 101 has a copy of the data 212 stored in the associated cache-line-sized row 105 in the main memory 203. The DDC 1 may then transmit the copy 208 of the updated directory descriptor 104 from DDC 1 to DDC 2 and/or DDC k. DDC 2 and/or DDC k may then update the copy 208 of the directory descriptor 104 stored therein to mark the cache-line-sized row of data stored in the caches 108 of processor cores 101 other than the processor core 2 101 as invalid, since that data may have been changed in the main memory 203 and are thus stale.
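For illustration only, this peer-to-peer update between directory descriptor caches may be sketched as follows; the class, method names, and refresh-only-held-rows policy are assumptions for this sketch:

```python
# Illustrative peer-to-peer DDC update: the DDC that is updated on an
# access forwards the new descriptor to its peer DDCs, each of which
# refreshes its own copy of that descriptor if it holds one.

class DirectoryDescriptorCache:
    def __init__(self):
        self.copies = {}     # row -> cached copy of the directory descriptor
        self.peers = []      # the other DDCs in the multi-core processor

    def update(self, row, descriptor):
        self.copies[row] = dict(descriptor)
        for peer in self.peers:              # propagate to the other DDCs
            peer.receive_update(row, descriptor)

    def receive_update(self, row, descriptor):
        if row in self.copies:               # only refresh rows held here
            self.copies[row] = dict(descriptor)
```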
In still another example, when one of the DDCs 206 updates a directory descriptor 104 contained within it, the update may be sent to the main memory 203. The main memory 203 may then either update other DDCs 206 that have copies of the directory descriptor 104, or inform those DDCs that their directory descriptors 104 are stale and should be invalidated.
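For illustration only, the invalidation variant of this memory-mediated approach may be sketched as follows; the function name and dict representation are assumptions for this sketch:

```python
# Illustrative memory-mediated DDC update: the updating DDC publishes
# the change to main memory, which then invalidates (rather than
# refreshes) the stale descriptor copies held by the other DDCs.

def publish_update(source_index, row, descriptor, main_directory, ddcs):
    """ddcs: list of dicts, each mapping row -> cached descriptor copy."""
    main_directory[row] = dict(descriptor)   # authoritative copy updated
    for i, ddc in enumerate(ddcs):
        if i != source_index and row in ddc:
            del ddc[row]                     # stale copy invalidated
```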
In any of these and other examples, the directory descriptor cache 206 as described herein may be configured to allow the multi-core processor 200 including many processor cores 101 to efficiently implement a directory-based cache coherence scheme without undue latency to access the directory descriptors 104 in the main memory 203.
Although the directory descriptor metadescriptor 307 is shown in
Processing for method 400 may begin at block 402 (Provide Directory Descriptor Caches). Block 402 may be followed by block 404 (Store Plurality of Directory Descriptors in Each Directory Descriptor Cache). Block 404 may be followed by block 406 (Update Directory Descriptors in Cache Responsive to Accessing Main Memory).
At block 402, one or more directory descriptor caches 206 may be provided in the multi-core processor 200 or 300. As explained above, each directory descriptor cache 206 may be associated with at least a subset of processor cores (e.g., one or more of the processor cores) in the multi-core processor 200 or 300. At block 404, a plurality of directory descriptors 104 may be stored in each directory descriptor cache 206. As also explained above, each of the directory descriptors 104 may provide an indication of cache sharing status of a respective set of memory locations, such as a cache-line-sized row 105, of the main memory 103. At block 406 the directory descriptors 104 stored in each directory descriptor cache 206 may be updated responsive to one of the processor cores 101 in the subset accessing the respective set of memory locations, such as a cache-line-sized row 105, of main memory 103.
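For illustration only, the three blocks of method 400 may be sketched as follows; the fixed core-subset-to-DDC mapping, the function names, and the dict representation of descriptors are assumptions for this sketch:

```python
# Illustrative sketch of method 400's three blocks, using plain dicts
# for descriptors and one directory descriptor cache (DDC) per subset
# of processor cores.

def method_400(core_subsets, num_rows):
    # Block 402: provide one directory descriptor cache per subset of cores.
    ddcs = {subset_id: {} for subset_id in range(len(core_subsets))}

    # Block 404: store a plurality of directory descriptors in each DDC,
    # one per cache-line-sized row of main memory.
    for ddc in ddcs.values():
        for row in range(num_rows):
            ddc[row] = {"state": "uncached", "sharers": set()}

    # Block 406: update the descriptors in a DDC responsive to a core in
    # its subset accessing the corresponding row of main memory.
    def on_access(subset_id, core_id, row):
        descriptor = ddcs[subset_id][row]
        descriptor["sharers"].add(core_id)
        descriptor["state"] = "shared"

    return ddcs, on_access
```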
Processing for method 500 may begin at block 502 (Access Directory Descriptor in Cache). Block 502 may be followed by block 504 (Is Data Stored in Cache of One of the Cores). Block 504 may be followed by block 506 (Access Data from Cache) when method 500 determines, at block 504, that data is stored in a cache for one of the cores. Otherwise, block 504 may be followed by block 508 (Access Data from Main Memory) when method 500 determines, at block 504, that data is not stored in the cache for one of the cores.
At block 502, a directory descriptor 104 in a directory descriptor cache 206 in the processor 200 or 300 may be accessed by one of the processor cores 101 in the multi-core processor. This should be accomplished before the processor core 101 attempts to access data stored in a set of memory locations, such as cache-line-sized row 105, of the main memory 103. At block 504 the accessed directory descriptor 104 may be used by the core 101 to determine if the data stored in the set of memory locations, such as cache-line-sized row 105, of the main memory 103 are stored in the cache of one of the processor cores 101 of the multi-core processor 200 or 300. If the determination is made at block 504 that the data stored in the set of memory locations, such as the cache-line-sized row 105, of the main memory 103 are stored in the cache of one of the processor cores 101, then at block 506 the data stored in the set of memory locations, such as the cache-line-sized row 105, of the main memory 103 may be accessed from the cache of one of the processor cores 101 of the multi-core processor 200 or 300. Otherwise, the set of memory locations, such as the cache-line-sized row 105, corresponding to the accessed directory descriptor 104 may be accessed in the main memory 103 at block 508.
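For illustration only, the decision flow of method 500 may be sketched as follows; the function name, the dict representation of the descriptor, and the choice of which sharing core to read from are assumptions for this sketch:

```python
# Illustrative sketch of method 500: consult the cached directory
# descriptor (blocks 502/504) to decide whether to access the row from
# a sharing core's cache (block 506) or from main memory (block 508).

def method_500(row, descriptor, core_caches, main_memory):
    # Blocks 502/504: does the descriptor list any core caching this row?
    sharers = descriptor["sharers"]
    if sharers:
        # Block 506: access the data from a sharing core's cache.
        return core_caches[next(iter(sharers))][row]
    # Block 508: access the data from main memory.
    return main_memory[row]
```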
The present disclosure is not to be limited in terms of the particular examples described in this application, which are intended as illustrations of various aspects. Many modifications and examples may be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and examples are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is to be understood that this disclosure is not limited to particular methods, reagents, compounds, compositions, or biological systems, which can, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular examples only, and is not intended to be limiting.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.).
It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to examples containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations).
Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible subranges and combinations of subranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein can be readily broken down into a lower third, middle third and upper third, etc. As will also be understood by one skilled in the art all language such as “up to,” “at least,” “greater than,” “less than,” and the like include the number recited and refer to ranges which can be subsequently broken down into subranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 items refers to groups having 1, 2, or 3 items. Similarly, a group having 1-5 items refers to groups having 1, 2, 3, 4, or 5 items, and so forth.
While the foregoing detailed description has set forth various examples of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples, insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one example, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the examples disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. For example, if a user determines that speed and accuracy are paramount, the user may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the user may opt for a mainly software implementation; or, yet again alternatively, the user may opt for some combination of hardware, software, and/or firmware.
In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative example of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation. Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable”, to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
While various aspects and examples have been disclosed herein, other aspects and examples will be apparent to those skilled in the art. The various aspects and examples disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.