IBM® is a registered trademark of International Business Machines Corporation, Armonk, N.Y., U.S.A. Other names used herein may be registered trademarks, trademarks or product names of International Business Machines Corporation or other companies.
1. Field of the Invention
This invention relates generally to computer architecture and, more particularly, to methods and computer program products for reducing or eliminating “load-hit-store” delays.
2. Description of Background
Some computer architectures, including System-p and System-z, have performance bottlenecks known as “load-hit-store” delays. Such bottlenecks occur in situations where a store is closely followed by a fetch from a common memory fetch unit. A memory fetch unit is an association of memory locations that share a temporal dependency. This association, specific to the timing characteristics of the architecture under observation, is typically an aligned byte, word, double-word, or page of information. For a “load-hit-store”, a fetch request typically needs to wait K extra cycles for the store to the memory fetch unit to complete. In practice, K varies from five to several hundred cycles depending on the architecture.
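The stall described above can be sketched as a toy model. This is an illustrative simplification, not part of any architecture: it assumes the penalty is the remainder of the K-cycle window after the store, and the function name and interface are invented for the example.

```python
def load_hit_store_penalty(store_cycle, load_cycle, K):
    """Toy model of a load-hit-store stall: a load from the same memory
    fetch unit issued within K cycles of a store waits out the remainder
    of that K-cycle window (K is architecture-dependent, e.g. 5 to
    several hundred cycles)."""
    gap = load_cycle - store_cycle
    # Only a load that follows the store inside the window stalls.
    return K - gap if 0 <= gap < K else 0
```

For example, with K = 10, a load issued 3 cycles after the store stalls for 7 extra cycles, while a load 20 cycles later incurs no penalty.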
One existing approach for mitigating the problem of “load-hit-store” delays is a technique called instruction scheduling. Instruction scheduling attempts to fill the slot of K cycles between the store and the fetch with instructions that are independent of the store and fetch operations. Instruction scheduling, however, is not effective when too few independent instructions are available to hide the “load-hit-store” delay, or when the store and fetch are in different scheduling blocks. Accordingly, what is needed is an improved technique for reducing or eliminating “load-hit-store” delays.
The shortcomings of the prior art are overcome and additional advantages are provided by assigning each of a plurality of memory fetch units to any of a plurality of candidate variables to reduce or eliminate load-hit-store delays, wherein a total number of required memory fetch units is minimized. The plurality of memory fetch units is assigned to the plurality of candidate variables by: identifying a plurality of store/load pairs wherein a store to variable X of the candidate variables is within M instruction cycles of a load of variable Y of the candidate variables, M being a positive integer greater than one; generating a dependency graph by creating a node Nx for each store to variable X and a node Ny for each load of variable Y and, unless X = Y, for each store/load pair of the plurality of store/load pairs, creating an edge between a respective node Nx and a corresponding node Ny; for each created edge, labeling the edge with a heuristic weight ωxy, wherein ωxy is determined by at least one of (a) a probability that the load of variable Y is executed given that the store to variable X is executed, or (b) a cost of the load-hit-store for variables X and Y; labeling each node Nx with a node weight Wx that combines the edge weights of all edges incident on node Nx, such that Wx = Σj ωxj; determining a color for each of the graph nodes using k distinct colors, wherein k is minimized such that no two adjacent nodes joined by an edge have an identical color; and assigning a memory fetch unit to each of the k distinct colors.
Computer program products corresponding to the above-summarized methods are also described and claimed herein. Other methods and/or computer program products according to embodiments will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional systems, methods, and/or computer program products be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
Assigning each of a plurality of memory fetch units to any of a plurality of candidate variables serves to reduce or eliminate load-hit-store delays. This assignment is performed in a manner such that the total number of required memory fetch units is minimized. Illustratively, reducing or eliminating load-hit-store delays is useful in the context of stack-based languages wherein a compiler assigns a plurality of stack-frame slots to hold temporary expressions. Alternatively or additionally, any garbage-collected language may utilize the assignment techniques disclosed herein for re-factoring heaps to thereby mitigate load-hit-store delays in the context of any of a variety of software applications.
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
Next, at block 103, a dependency graph is created by: a) creating a node Nx for each store to variable X and creating a node Ny for each load of variable Y; and b) unless X = Y, for each store/load pair of the plurality of store/load pairs Qxy: {storex, loady}, creating an edge between a respective node Nx and a corresponding node Ny. At block 105, for each edge created in the immediately preceding block, the edge is labeled with a heuristic weight ωxy, where ωxy is a metric that combines the frequency (or probability) of execution with the cost of the load-hit-store, e.g., ωxy = Py|x · Cxy, where Py|x is the probability that the load of variable Y executes given that the store to variable X executes, and Cxy is the cost of the load-hit-store for variables X and Y.
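As an illustrative sketch (not the claimed implementation), blocks 103 and 105 can be modeled as follows, assuming the store/load pairs arrive as tuples carrying estimates of the probability Py|x and the cost Cxy; the tuple layout and function name are assumptions for the example.

```python
from collections import defaultdict

def build_dependency_graph(pairs):
    """Build a weighted dependency graph from store/load pairs.

    `pairs` is an iterable of (x, y, p, c) tuples, where a store to
    variable x is within M instruction cycles of a load of variable y,
    p estimates P(y|x) (probability the load of y executes given the
    store to x), and c estimates Cxy (cost of the load-hit-store).
    Pairs with x == y are skipped, matching the "unless X = Y" rule.
    """
    edges = defaultdict(float)   # (x, y) -> accumulated edge weight w_xy
    adjacency = defaultdict(set) # variable -> neighboring variables
    for x, y, p, c in pairs:
        if x == y:
            continue  # same variable: no edge is created
        edges[(x, y)] += p * c  # heuristic weight w_xy = P(y|x) * Cxy
        adjacency[x].add(y)
        adjacency[y].add(x)
    return edges, adjacency
```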
At block 107, each node Nx is labeled with a node weight Wx that integrates all edge weights of that node such that Wx = Σj ωxj. Next, at block 109, a coloring for each of the graph nodes is determined using a minimal number of k distinct colors such that no adjacent nodes joined by an edge between a respective node Nx and a corresponding node Ny have an identical color. At block 111, a respective memory fetch unit is assigned to each of the k distinct colors.
In performing block 109, many different heuristics for approximating an optimal graph coloring exist. For the sake of completeness and without loss of generality, one example of a graph coloring algorithm is presented herein that may be near-optimal in most load-hit-store situations. Color the nodes in decreasing order of node weight Wi. When determining a color for a node, first identify any colors already used in the graph that are not used to color an adjacent node (i.e., one of the node's neighbors). Out of these identified colors, pick the color with the most space available, where space is defined as follows: Space(color i) = (size of the memory fetch unit) − Σ(sizes of the nodes already assigned color i), where “node size” is the size of the variable occupying a node, for example 4 bytes for an integer variable. Ensure the determined color has enough corresponding memory space remaining to hold the variable corresponding to the node being colored. If no such color is available from the set of colors already in the graph, a new color must be selected.
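A minimal Python sketch of the heuristic of blocks 107–111, assuming per-variable sizes and a fixed memory-fetch-unit size are supplied by the caller (the function name and calling convention are illustrative, not part of the specification):

```python
def color_nodes(adjacency, edge_weights, sizes, unit_size):
    """Greedy coloring: visit nodes in decreasing node weight
    W_x = sum of incident edge weights; reuse the legal color with the
    most remaining space, otherwise open a new color (= new fetch unit)."""
    # Block 107: node weight W_x sums the weights of all edges touching x.
    W = {n: 0.0 for n in adjacency}
    for (x, y), w in edge_weights.items():
        W[x] += w
        W[y] += w
    space = []      # remaining bytes per color (one memory fetch unit each)
    color_of = {}
    # Block 109: color nodes in decreasing order of weight.
    for n in sorted(adjacency, key=lambda v: W[v], reverse=True):
        forbidden = {color_of[m] for m in adjacency[n] if m in color_of}
        # Candidate colors: already in use, not on a neighbor, with room.
        candidates = [c for c in range(len(space))
                      if c not in forbidden and space[c] >= sizes[n]]
        if candidates:
            c = max(candidates, key=lambda c: space[c])  # most space available
        else:
            c = len(space)           # no legal color fits: open a new one
            space.append(unit_size)
        color_of[n] = c
        space[c] -= sizes[n]
    # Block 111: each color corresponds to one memory fetch unit.
    return color_of
```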
Accordingly, store/load pairs are identified as {Store A, Load B}, {Store C, Load A}, and {Store B, Load A}.
Accordingly, store/load pairs are identified as {Store A, Load D} with probability 90%, {Store A, Load C} with probability 90%, and {Store B, Load C} with probability 10%.
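Under hypothetical assumptions (a uniform load-hit-store cost of 10 and no unit-capacity bookkeeping), the pairs above can be weighted and colored with a simple greedy pass; the variable names and cost are illustrative only.

```python
# Worked example: the three pairs above with probabilities 0.9, 0.9, 0.1
# and an assumed uniform load-hit-store cost of 10.
pairs = [("A", "D", 0.9), ("A", "C", 0.9), ("B", "C", 0.1)]
COST = 10
adj = {v: set() for v in "ABCD"}
weight = {v: 0.0 for v in "ABCD"}   # node weight W_x = sum of incident w_xy
for x, y, p in pairs:
    adj[x].add(y); adj[y].add(x)
    weight[x] += p * COST; weight[y] += p * COST
colors = {}
# Color in decreasing node weight; take the lowest color no neighbor uses.
for n in sorted(adj, key=weight.get, reverse=True):
    used = {colors[m] for m in adj[n] if m in colors}
    colors[n] = min(c for c in range(len(adj)) if c not in used)
for x, y, _ in pairs:
    assert colors[x] != colors[y]  # no store/load pair shares a fetch unit
```

Here A (weight 18) is colored first, C (weight 10) must take a second color, and D and B reuse the existing colors, so two memory fetch units suffice for the four variables.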
The capabilities of the present invention can be implemented in software, firmware, hardware or some combination thereof. As an example, one or more aspects of the present invention can be included in an article of manufacture (e.g., one or more computer program products) having, for instance, computer usable media. The media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present invention. The article of manufacture can be included as a part of a computer system or sold separately.
Additionally, at least one program storage device readable by a machine, tangibly embodying at least one program of instructions executable by the machine to perform the capabilities of the present invention can be provided.
The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted or modified. All of these variations are considered a part of the claimed invention.
While the preferred embodiment of the invention has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.