(1) Field of the Invention
The present invention relates to a compiler that converts a source program written in a high-level language such as the C++ language into an executable program written in machine language, and particularly to a compiler that converts said source program into an executable program that is executed on a computer having a cache memory.
(2) Description of the Related Art
A variety of compilers for computers having cache memory have been proposed so far. For example, there is a compiler that lays out a set of data items that are accessed at similar timings (e.g. a set of data items with overlapping lifetimes) in contiguous locations on the main memory (for example, see Japanese Laid-Open Patent Application No. 7-129410). By laying out, in contiguous locations on the main memory, a set of data items that are accessed at similar timings, these data items are laid out on the same block on the cache memory at one time. Accordingly, it becomes possible to increase the hit rate of the cache memory.
However, if addresses on the main memory of the respective data items that are accessed at similar timings are determined in a way that enables such data items to be laid out on the same block, and if the total size of these data items is larger than the size of such block, it is impossible to write all data included in such data items to the same block at one time. This causes cache conflicts on the same block among the data included in such data items, resulting in frequent cache misses. This problem is especially notable in cache memories using a direct mapping scheme in which only one block is associated with one set.
The present invention has been conceived in view of the above problem, and it is an object of the present invention to provide a compiler apparatus that is capable of avoiding conflicts on the same block and of increasing the hit rate of a cache memory.
In order to achieve the above object, the compiler apparatus according to the present invention is a compiler apparatus that targets a computer having a cache memory and that converts a source program into an object program, comprising: a grouping unit operable to analyze grouping information that is used for grouping data objects included in the source program, and place said data objects into groups based on a result of said analysis; and an object program generation unit operable to generate the object program based on a result of the grouping performed by the grouping unit, said object program not allowing data objects belonging to different groups to be laid out in any blocks with the same set number on the cache memory.
With the above configuration, if the grouping information includes information for placing data objects with overlapping lifetimes in different groups, for example, the data objects with overlapping lifetimes are to be placed in set numbers on the cache memory that are different from each other, according to such information. Accordingly, no conflicts occur in which data objects whose lifetimes overlap contend for a block with the same set number on the cache memory and try to flush other data objects. This makes it possible to cause fewer cache misses and therefore to increase the hit rate of the cache memory. Note that in the present specification and the following claims, “object/data object” refers to data such as a variable or a data array.
Moreover, the grouping unit may analyze a directive to the compiler apparatus included in the source program, and place the data objects included in the source program into the groups based on a result of said analysis of the directive. More preferably, the directive is a pragma command for placing a set of one or more data objects specified in said pragma command into one or more groups on a line size basis of the cache memory, and the grouping unit places said specified set of one or more data objects into said one or more groups on a line size basis of the cache memory, based on the pragma command included in the source program.
When an executable program is executed, data objects which are considered by the user to be accessed at similar timings according to a pragma command are laid out in blocks with different set numbers on the cache memory. Accordingly, no conflicts occur in which data objects which are deemed as being accessed at similar timings contend for a block with the same set number on the cache memory and try to flush other data objects. This makes it possible to cause fewer cache misses and therefore to increase the hit rate of the cache memory.
It is also possible that the directive is a pragma command that allows data objects specified in said pragma command to be laid out in blocks with mutually different set numbers and that allows said specified data objects to make exclusive use of the respective blocks, that the grouping unit includes: a grouping processing unit operable to place said specified data objects into groups on a data object basis, based on the pragma command included in the source program; and a set number setting unit operable to set different set numbers to the respective groups, and that the object program generation unit generates the object program that allows the data objects belonging to the respective groups to be laid out in the blocks with the set numbers on the cache memory corresponding to the respective groups and that allows said data objects to make exclusive use of the respective blocks.
With the above configuration, such an object program is generated as enables data objects specified in the pragma command to monopolize the blocks with the set numbers in the cache memory that are set by the set number setting unit. Accordingly, it becomes possible for frequently-used data objects to monopolize the cache memory, as well as to prevent such data objects from being flushed from the cache memory and to achieve high-speed processing.
Moreover, the grouping unit may analyze profile information that is generated when a machine language instruction sequence generated from the source program is executed, and place the data objects included in the source program into the groups based on a result of said analysis of the profile information. More preferably, the profile information includes information related to access frequencies of the respective data objects, and the grouping unit places, into mutually different groups, data objects whose access frequencies are equal to or greater than a predetermined threshold.
When the executable program is executed, data objects with high access frequencies are to be laid out in blocks with different set numbers on the cache memory. Accordingly, it becomes possible for data objects with high access frequencies to monopolize blocks on the cache memory, as well as to prevent such frequently-used data objects from being flushed from the cache memory. This makes it possible to prevent cache misses and to increase the hit rate of the cache memory.
Furthermore, it is also possible that the profile information includes information related to lifetimes of the respective data objects, and that the grouping unit places, into mutually different groups, data objects whose lifetimes overlap.
With the above configuration, data objects whose lifetimes overlap are to be laid out in blocks with set numbers that are different from each other. Accordingly, no conflicts occur in which data objects that are accessed at the same timings contend for a block with the same set number and try to flush other data objects. This makes it possible to prevent cache misses and to increase the hit rate of the cache memory.
More preferably, the grouping unit analyzes an overlapping of lifetimes of the respective data objects included in the source program based on the source program, and places, into mutually different groups, data objects whose lifetimes overlap.
With the above configuration, data objects whose lifetimes overlap are to be laid out in blocks with set numbers that are different from each other. Accordingly, no conflicts occur in which data objects that are accessed at the same timings contend for a block with the same set number and try to flush other data objects. This makes it possible to prevent cache misses and to increase the hit rate of the cache memory.
Note that not only is it possible to embody the present invention as the above compiler apparatus that generates the characteristic object program, but also as a compilation method that includes, as its steps, the characteristic units provided in the above compiler apparatus, and as a program that causes a computer to function as the above compiler apparatus. It should be noted that such a program can be distributed on a recording medium such as a CD-ROM and over a transmission medium such as the Internet.
As described above, the present invention is capable of increasing the hit rate of a cache memory at program execution time.
Furthermore, the present invention is also capable of achieving high-speed processing.
The disclosure of Japanese Patent Application No. 2003-356921 filed on Oct. 16, 2003 including specification, drawings and claims is incorporated herein by reference in its entirety.
These and other objects, advantages and features of the invention will become apparent from the following description thereof taken in conjunction with the accompanying drawings that illustrate a specific embodiment of the invention. In the Drawings:
<Hardware Configuration>
The address register 20 is a register that holds an access address that is used to make an access to the main memory 2. This access address shall be 32 bits. As shown in
The memory unit 31 includes 16 (=2^4) sets (16 blocks here, since a direct mapping scheme is employed), since a set index (SI) is made up of 4 bits.
The valid flag V indicates whether the block is valid or not. The tag is a copy of a 21-bit tag address. The line data is a copy of 128-byte data stored in the main memory 2 whose start address is the address held in the address register 20. The dirty flag D indicates whether or not writing has been performed to the block, i.e. whether or not the line data cached in the block differs from the data stored in the main memory 2 as a result of such writing and therefore needs to be written back to the main memory 2.
Here, the tag address indicates a location on the main memory 2 of line data to be mapped to the memory unit 31 (the size of such location is determined by the number of sets × the size of line data). The size of the location is 2 K bytes, which is determined by the 11-bit address below the least significant bit of the tag address. Moreover, the set index (SI) refers to one of the 16 sets. A set specified by the tag address and the set index (SI) serves as a unit of replacement. The size of line data is 128 bytes, which is determined by the 7-bit address below the least significant bit of the set index (SI). Assuming that one word is 4 bytes, one line of data is made up of 32 words.
The decoder 30 shown in
The comparator 32 compares the tag address in the address register 20 with the tag included in the set selected by the set index (SI) to see if they match or not.
The AND circuit 33 carries out the logical AND between the valid flag (V) and a result of the comparison performed by the comparator 32. When the logical AND is 1, it means that there exists, in the memory unit 31, line data corresponding to the tag address in the address register 20 and to the set index (SI). When the logical AND is 0, it means that a cache miss has occurred.
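The lookup carried out by the decoder 30, the comparator 32, and the AND circuit 33 can be sketched as a minimal C++ model. The field widths follow the description above (a 21-bit tag, a 4-bit set index (SI), and a 7-bit in-line offset of a 32-bit access address); the type and member names here are hypothetical, not part of the embodiment.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Minimal model of the direct-mapped cache described above:
// 16 sets, 128-byte line data, and a 32-bit access address split
// into a 21-bit tag, a 4-bit set index (SI), and a 7-bit offset.
struct CacheSet {
    bool     valid = false;  // valid flag V
    bool     dirty = false;  // dirty flag D
    uint32_t tag   = 0;      // copy of the 21-bit tag address
};

struct CacheModel {
    std::array<CacheSet, 16> sets;

    // The decoder 30 selects one of the 16 sets from SI.
    static uint32_t setIndex(uint32_t addr) { return (addr >> 7) & 0xFu; }
    static uint32_t tagOf(uint32_t addr)    { return addr >> 11; }

    // The comparator 32 matches the tag; the AND circuit 33
    // combines the comparison result with the valid flag.
    bool hit(uint32_t addr) const {
        const CacheSet& s = sets[setIndex(addr)];
        return s.valid && s.tag == tagOf(addr);
    }
};
```

Note that adding 0x800 (1 << 11) to an address changes only the tag field, so two such addresses contend for the same set, which is exactly the conflict situation the invention addresses.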
The control unit 38 exercises an overall control of the cache memory 3.
<Overview of Data Layout Method>
<Compiler System>
The compiler unit 46 receives the following data items as inputs, and converts the source program 44 into an assembler file 48 written in assembly language, based on such received data items: the source program 44 written in a high-level language such as the C++ language; a cache parameter 42 made up of parameter information related to the cache memory 3 (e.g. the number of sets, the size of line data, and the like); and profile data 66 that indicates a result of analysis performed at the time of executing the executable program 58.
The assembler unit 50 creates an object file 52 that is a result of converting the assembler file 48 written in assembly language into a machine language file.
The linker unit 54 links one or more object files 52 (only one object file 52 is illustrated in
The simulator unit 60 virtually executes the executable program 58, and outputs an execution log 62.
The profiler unit 64 generates, by analyzing the execution log 62, the profile data 66 that serves as a hint for obtaining an optimum executable program 58, such as the access frequencies of variables and the lifetimes of variables.
<Compiler Unit>
The parser unit 72, which is a pre-processing unit that extracts a reserved word (keyword) and the like from the source program 44 to be compiled and performs lexical analysis of the extracted word, has a pragma analyzing unit 74 that analyzes a pragma command, in addition to the analyzing functionality of ordinary compilers.
Note that “pragma (or pragma command)” is a directive to the compiler unit 46 that is a character string starting with “#pragma” and that can be arbitrarily specified (placed) by the user within the source program 44.
The assembler code conversion unit 76 is a processing unit that converts each statement in the source program 44 passed from the parser unit 72 into an assembly language code after converting each statement into an intermediate code, and outputs the resultant as the assembler file 48. In addition to the conversion functionality of ordinary compilers, the assembler code conversion unit 76 is equipped with a layout set information setting unit 78 that generates an assembler code that enables an object specified by a pragma analyzed by the pragma analyzing unit 74 to be laid out in a block on the cache memory 3 with an appropriate set number.
Here, there shall be the following three types of pragmas:
Pragma (1) indicates that objects “a”, “b”, and “c” are accessed at similar timings. Note that the number of objects may be any number as long as it is equal to or greater than 1. The meaning of this pragma is given later. Pragma (2) is used to specify that the object “a” should be laid in a block with the “n”th set number on the cache memory 3. Pragma (3) is used to specify that the objects “a” and “b” should be laid in blocks with different set numbers on the cache memory 3 and that these blocks should be monopolized by the objects “a” and “b”, i.e. no object other than the objects “a” and “b” should be laid on these blocks.
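The three pragma types might appear in a source program as sketched below. The exact argument syntax accepted by the compiler unit 46 is not reproduced in this excerpt, so the forms shown are assumptions; ordinary compilers simply ignore unknown pragmas, so the fragment still builds elsewhere.

```cpp
#include <cassert>

// Hypothetical usage of the three pragma types described above.
// The argument syntax is assumed, not taken from the embodiment.

// Pragma (1): a, b, and c are accessed at similar timings.
#pragma _overlap_access_object a, b, c
int a[32], b[32], c[32];   // 128 bytes each, assuming a 4-byte int

// Pragma (2): lay out x in the block with set number 3.
#pragma _cache_set_number x = 3
int x[32];

// Pragma (3): y and z monopolize blocks with different set numbers.
#pragma _cache_set_monopoly y, z
int y[32], z[32];
```

With the cache parameters of this embodiment (128-byte lines), each of these arrays exactly fills one line of data, so each can occupy one block on its own.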
The pragma analyzing unit 74 analyzes the type of a pragma described in the source program 44 (S1). When the type of such pragma is Pragma (1) (_overlap_access_object in S1), the pragma analyzing unit 74 places a set of objects that are indicated after “#pragma_overlap_access_object” into groups in a way that allows the size of each group to be equal to or smaller than the size of one line of data (i.e. 128 bytes) on the cache memory 3 (S2). The following gives a more specific description of this grouping processing (S2).
After the grouping processing (S2), the layout set information setting unit 78 assigns different set numbers to the respective groups (S3 in
Then, the layout set information setting unit 78 generates assembler codes that enable the objects of these groups to be laid out in corresponding blocks on the cache memory 3 whose set numbers are assigned in the group number setting processing (S3) (S4).
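The grouping processing (S2) and the set number assignment (S3) can be sketched as follows. The greedy, in-order packing shown here is an assumption, since the embodiment does not fix a particular packing strategy; all type and function names are hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Sketch of S2/S3: objects named in Pragma (1) are packed into
// groups of at most one line of data (128 bytes) each, and each
// group then receives its own set number.
struct Object { std::string name; uint32_t size; };
struct Group  { std::vector<Object> members; uint32_t size = 0; int set = -1; };

std::vector<Group> groupObjects(const std::vector<Object>& objs,
                                uint32_t lineSize = 128) {
    std::vector<Group> groups;
    for (const Object& o : objs) {
        // Start a new group when the object no longer fits (S2).
        if (groups.empty() || groups.back().size + o.size > lineSize)
            groups.emplace_back();
        groups.back().members.push_back(o);
        groups.back().size += o.size;
    }
    // Assign a different set number to each group (S3).
    for (size_t i = 0; i < groups.size(); ++i)
        groups[i].set = static_cast<int>(i);
    return groups;
}
```

For example, a 128-byte array followed by two 64-byte arrays yields two groups: the first array fills one line by itself, and the two smaller arrays share the next line.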
A description is given of the first three lines. The first line indicates that the command “SECTION” serves as the delimiter of a group and that the group name is “data_a”. The second line indicates that an object described on the third line is to be stored into a storage location on the main memory 2 that enables such object to be laid out in the zeroth set on the cache memory 3. The third line indicates the object itself and that the data size of the object “a” (array “a”) is 128 bytes. The same goes for the fourth line onward.
When the type of a pragma is categorized as Pragma (2) (_cache_set_number in S1), the pragma analyzing unit 74 places objects into groups according to the pragma specification (S5), and assigns set numbers to the respective groups (S6). For example, in the case of a source program as shown in
Then, the layout set information setting unit 78 generates assembler codes that enable objects of these groups to be laid out in corresponding blocks on the cache memory 3 whose set numbers are assigned in the group number setting processing (S6) (S4).
When the type of the pragma is categorized as Pragma (3) (_cache_set_monopoly in S1), the layout set information setting unit 78 places the respective objects specified by the pragma into independent groups (S7). After that, the layout set information setting unit 78 assigns different set numbers to the respective groups (S8). For example, in the case of a source program as shown in
Then, the layout set information setting unit 78 generates assembler codes that enable objects of the groups to be laid out in corresponding blocks on the cache memory 3 whose set numbers are assigned in the group number setting processing (S8) (S4). Note that when Pragma (3) is specified as the type of a pragma, such assembler codes are generated as enable objects specified by the pragma to monopolize the blocks corresponding to the set numbers on the cache memory 3 that are assigned in the group number setting processing (S8). Accordingly, it becomes possible for frequently-used objects to monopolize the cache memory 3, and therefore to prevent such objects from being flushed from the cache memory 3, as well as to achieve high-speed processing.
The above steps (S1 to S8) are executed for all pragmas (Loop A) to generate assembler codes. Note that it is also possible to set a pragma categorized as Pragma (2) “#pragma_cache_set_number” and a pragma categorized as Pragma (3) “#pragma_cache_set_monopoly” together for the same object.
<Linker Unit>
The address setting unit 56 reads in one or more object files 52, and categorizes objects included in said object files 52 into the following two types of objects (S11): objects whose set numbers on the cache memory 3 have already been determined; and objects whose set numbers on the cache memory 3 have not been determined yet. For example, the address setting unit 56 categorizes objects into ones as shown in (a) in
Next, the address setting unit 56 determines the allocations of the respective objects on the main memory 2 (S12). More specifically, the address setting unit 56 allocates, on an object-by-object basis, the objects whose set numbers have already been determined into locations on the main memory 2 that enable such objects to be laid out on blocks with corresponding set numbers on the cache memory 3. Also, the address setting unit 56 allocates the objects without set numbers into locations on the main memory 2 that correspond to set numbers which have not yet been assigned to any objects. At this point of time, as shown in (c) in
Next, the address setting unit 56 checks whether or not all the objects for which set numbers have been determined are laid out on the main memory 2 (S13). If all of such objects have already been laid out on the main memory 2 (YES in S13), the address setting unit 56 terminates the processing. If any one of them has not yet been laid out on the main memory 2 (NO in S13), the address setting unit 56 lays out, on the main memory 2, such object and the subsequent objects, as in the case of the object layout processing (S12). In so doing, nothing shall be laid out in a location corresponding to a set number that has already been assigned to an object at least once, even when such location is empty (S14). Thus, as shown in (c) in
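The core of the address setting step (S12) is finding a main memory address that maps to a given set number. One way this can be sketched, assuming the cache parameters of this embodiment (16 sets, 128-byte lines, hence a 2048-byte mapping period) and a hypothetical function name:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of S12: given the next free address on the main memory 2
// and an object's assigned set number, advance to the nearest
// address that maps to a block with that set number on the cache
// memory 3 (16 sets, 128-byte lines assumed).
uint32_t nextAddressForSet(uint32_t freeAddr, uint32_t setNumber) {
    const uint32_t lineSize = 128, numSets = 16;
    const uint32_t span   = lineSize * numSets;      // 2048-byte mapping period
    const uint32_t target = setNumber * lineSize;    // offset of the set within a span
    const uint32_t base   = (freeAddr / span) * span;
    uint32_t addr = base + target;
    if (addr < freeAddr) addr += span;               // set already passed: next span
    return addr;
}
```

Every 2048 bytes of the main memory 2 maps onto the whole cache once, so the address for a given set number recurs with that period; the skipped gaps are what steps S13 and S14 later fill in or leave vacant.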
As described above, in the present embodiment, when an executable program is executed, objects which are considered by the user as being accessed at similar timings according to a pragma specification are laid out in blocks with different set numbers on the cache memory 3. Accordingly, no conflicts occur in which objects which are deemed as being accessed at similar timings contend for a block with the same set number on the cache memory and try to flush other objects. This makes it possible to cause fewer cache misses and therefore to increase the hit rate of the cache memory.
A partial hardware configuration of a target computer of the compiler system according to the second embodiment of the present invention is the same as the one shown in FIGS. 1 to 3. Also, the configuration of the compiler system according to the present embodiment is the same as the one shown in
The parser unit 82, which is a pre-processing unit that extracts a reserved word (keyword) and the like from the source program 44 to be compiled and performs lexical analysis of the extracted word, has a profile data analyzing unit 84 that analyzes the profile data 66, in addition to the analyzing functionality of ordinary compilers. The profile data 66 is information that serves as a hint for obtaining an optimum executable program 58, such as the access frequencies of objects (variables, and the like) and the lifetimes of objects, as described in the first embodiment.
The assembler code conversion unit 86 is a processing unit that converts each statement in the source program 44 passed from the parser unit 82 into an assembly language code after converting each statement into an intermediate code, and outputs the resultant as the assembler file 48. In addition to the conversion functionality of ordinary compilers, the assembler code conversion unit 86 is equipped with a layout set information setting unit 88 that generates an assembler code that enables an object to be laid out in a block with an appropriate set number, according to a result of analysis performed by the profile data analyzing unit 84.
The profile data analyzing unit 84 analyzes the type of profile information described in the profile data 66 (S21). When such information described in the profile data 66 is related to the access frequencies of objects (Access frequency information in S21), the layout set information setting unit 88 places, into independent groups, the respective objects whose access frequencies are equal to or greater than a predetermined threshold (S22). Moreover, the layout set information setting unit 88 places, into one group, objects whose access frequencies are smaller than such predetermined threshold (S23). Next, the layout set information setting unit 88 sets different set numbers on the cache memory 3 to the respective groups grouped by the grouping processing (S22 and S23) (S24). Then, the layout set information setting unit 88 generates assembler codes for storing the objects in the above groups into locations on the main memory 2 that enable such objects to be laid out in the corresponding blocks with set numbers on the cache memory 3 that are assigned in the group number setting processing (S24) (S25).
Next, providing a concrete example, more detailed descriptions are given of the assembler code generation processing (S22 to S25) that is performed on the basis of access frequency information.
Here, assuming that a threshold is set to 10%, for example, the objects “a” and “b” whose access frequencies are not smaller than 10%, as shown in (c) in
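The threshold-based grouping of steps S22 to S24 can be sketched as follows. The object names, percentages, and the in-order assignment of set numbers are illustrative assumptions.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of S22-S24: objects at or above the access-frequency
// threshold each get their own group (and, later, their own set
// number); all remaining objects share one common group.
struct Profiled  { std::string name; double accessPct; };
struct FreqGroup { std::vector<std::string> names; int set; };

std::vector<FreqGroup> groupByFrequency(const std::vector<Profiled>& objs,
                                        double thresholdPct) {
    std::vector<FreqGroup> groups;
    FreqGroup rest{{}, -1};                  // shared group for infrequent objects
    for (const Profiled& o : objs) {
        if (o.accessPct >= thresholdPct)     // frequent: independent group (S22)
            groups.push_back({{o.name}, -1});
        else                                 // infrequent: one common group (S23)
            rest.names.push_back(o.name);
    }
    if (!rest.names.empty()) groups.push_back(rest);
    for (size_t i = 0; i < groups.size(); ++i)   // different set numbers (S24)
        groups[i].set = static_cast<int>(i);
    return groups;
}
```

With a 10% threshold and access frequencies of 40%, 25%, 5%, and 2%, the first two objects each monopolize a set while the last two share one, mirroring the example in the text.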
If information described in the profile data 66 is related to the lifetimes of objects (Lifetime information in S21), the layout set information setting unit 88 checks how the lifetimes of the respective objects overlap (S26). Then, the layout set information setting unit 88 groups the objects in a way that enables objects with the overlapping lifetimes to be placed into different groups (S27). After that, the layout set information setting unit 88 sets different set numbers on the cache memory 3 to the groups that are grouped in the grouping processing (S26 and S27) (S28). Subsequently, the layout set information setting unit 88 carries out the above-described assembler code generation processing (S25).
Next, providing a concrete example, more detailed descriptions are given of the assembler code generation processing (S26 to S28, and S25) that is performed on the basis of lifetime information.
(a) in
If the lifetimes overlap with one another as above, the objects are grouped as shown in
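The overlap check (S26) and the grouping (S27) can be sketched as interval processing over the objects' lifetimes. The start and end values are hypothetical units, and the greedy first-fit assignment shown is one possible strategy, not necessarily the one used by the layout set information setting unit 88.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of S26/S27: two objects overlap when their lifetime
// intervals intersect; overlapping objects must land in different
// groups. Greedy assignment: place each object into the first
// existing group none of whose members overlaps it.
struct Lifetime { std::string name; int begin, end; };

bool overlaps(const Lifetime& x, const Lifetime& y) {
    return x.begin < y.end && y.begin < x.end;   // interval intersection (S26)
}

std::vector<std::vector<Lifetime>> groupByLifetime(const std::vector<Lifetime>& objs) {
    std::vector<std::vector<Lifetime>> groups;
    for (const Lifetime& o : objs) {
        bool placed = false;
        for (auto& g : groups) {
            bool conflict = false;
            for (const Lifetime& m : g)
                if (overlaps(o, m)) { conflict = true; break; }
            if (!conflict) { g.push_back(o); placed = true; break; }   // (S27)
        }
        if (!placed) groups.push_back({o});      // needs a new group/set number
    }
    return groups;
}
```

Objects whose lifetimes do not overlap may share a group (and hence a set number), since they never contend for the cache at the same time; each resulting group then receives its own set number in step S28.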
As described above, according to the present embodiment, objects with high access frequencies are laid out in blocks with different set numbers on the cache memory, when the executable program is executed. Furthermore, objects with low access frequencies are laid out in a block with another set number that is different from the above set numbers. This makes it possible for objects with high access frequencies to monopolize blocks on the cache memory. Accordingly, by making it difficult for frequently-used objects to be flushed from the cache memory, it becomes possible to prevent cache misses and to increase the hit rate of the cache memory.
Furthermore, objects whose lifetimes overlap with one another are laid out in blocks with different set numbers. Accordingly, no conflicts occur in which objects which are accessed at the same timing contend for a block with the same set number and try to flush other objects. This makes it possible to cause fewer cache misses and therefore to increase the hit rate of the cache memory.
A partial hardware configuration of a target computer of the compiler system according to the third embodiment of the present invention is the same as the one shown in FIGS. 1 to 3. Also, the configuration of the compiler system according to the present embodiment is the same as the one shown in
The parser unit 92, which is a pre-processing unit that extracts a reserved word (keyword) and the like from the source program 44 to be compiled and performs lexical analysis of the extracted word, has an overlapping lifetime analyzing unit 94 that analyzes an overlapping of the lifetimes of objects (variables, and the like), in addition to the analyzing functionality of ordinary compilers.
The overlapping lifetime analyzing unit 94 analyzes the source program 44 to analyze an overlapping of the lifetimes of objects. For example, in the case where the source program 44 as shown in (a) in
As described above, according to the present embodiment, objects whose lifetimes overlap are laid out in blocks with different set numbers. Accordingly, no conflicts occur in which objects which are accessed at the same timing contend for a block with the same set number and try to flush other objects. This makes it possible to cause fewer cache misses and therefore to increase the hit rate of the cache memory.
Although only some exemplary embodiments of this invention have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications are intended to be included within the scope of this invention.
For example, a cache memory using an “n”-way set associative scheme may be used as a cache memory.
Industrial Applicability
The present invention is applicable to a compiler, and more particularly to a compiler and the like that targets a computer having a cache memory.