This invention relates to computer systems and their operation and particularly to data persistence within a multi-node system.
Historically, data persistence within a multi-node computer system has been a focus of computer system architecture, inasmuch as improved persistence avoids the stalling of processor requests and the negative impact such stalling has on overall system performance. As here used, reference to a node means an organization of one or more processors and a memory array operatively associated with the processor(s), the memory array having a main memory and one or more levels of cache interposed between the processor(s) and that main memory. Reference to a multi-node system, then, means an organization of a plurality of such nodes.
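By way of illustration only, the organization just defined might be modeled as in the following C sketch; the type names, field names, and node count are hypothetical and form no part of the apparatus described:

```c
/* Illustrative model of a node and a multi-node system.
 * All names and sizes here are hypothetical.                    */
#include <stddef.h>

#define NUM_NODES 4   /* a plurality of nodes; four chosen arbitrarily */

struct cache_level {
    size_t lines;               /* lines this level can hold           */
    struct cache_level *below;  /* next level toward main memory       */
};

struct node {
    int processors[2];          /* one or more processors              */
    struct cache_level l1;      /* closest to the processor(s)         */
    struct cache_level l2;      /* interposed between L1 and memory    */
    unsigned char *main_memory; /* main memory associated with node    */
};

/* A multi-node system is an organization of a plurality of nodes. */
struct multi_node_system {
    struct node nodes[NUM_NODES];
};

int main(void)
{
    struct multi_node_system sys = {0};
    /* The vertical path of node 0: L1 -> L2 -> main memory. */
    sys.nodes[0].l1.below = &sys.nodes[0].l2;
    return 0;
}
```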
Algorithms and design methods intended to minimize and avoid stalling typically have taken only a few forms. The most straightforward of these is an increase in the number and size of cache(s) within a system. In this approach, data persistence improves simply because more lines can be stored in cache and because each line's tenure in a given cache lengthens.
Another approach has been through improved cache line replacement algorithms, which work under the premise that a more intelligent selection of cache lines for eviction, when a new line install is required, will result in the persistence of the most relevant data. The assumption is that a subsequent fetch is more likely to encounter a hit among the lines retained in cache than it would have among the evicted lines.
Yet another approach is by way of pre-fetch algorithms, which by definition do not directly address data persistence, but instead seek to predict lines of future importance and bring them toward the processor(s) in a timely manner.
With all of these approaches, the technologies generally do well in addressing the issue of data persistence in its various forms. However, they have consistently been focused on what is here described as the vertical aspect of a cache structure. While this characterization will be expanded on in the discussion which follows, it can here be noted that the vertical aspect describes the data path to or from a processor or processor complex through the levels of cache directly associated with that processor or processor complex and from or to a main memory element likewise directly associated with that processor or processor complex.
What is described here as the present invention focuses more on horizontal aspects of cache design, particularly in a multi-node system.
With the foregoing in mind, it is a purpose of this invention to improve access to retained data useful to a system by managing data flow through the caches associated with the processors of a multi-node system. In particular, the management of data flow takes advantage of the horizontal relationship among the caches associated with the plurality of processors. In realizing this purpose, a data management facility operable with the processors and memory array directs the flow of data from the processors to the memory array by determining the path along which data evicted from a level of cache close to one of the processors is to return to a main memory, and by directing the evicted data to be stored, if possible, in a horizontally associated cache.
Some of the purposes of the invention having been stated, others will appear as the description proceeds, when taken in connection with the accompanying drawings, in which:
While the present invention will be described more fully hereinafter with reference to the accompanying drawings, in which a preferred embodiment of the present invention is shown, it is to be understood at the outset of the description which follows that persons of skill in the appropriate arts may modify the invention here described while still achieving the favorable results of the invention. Accordingly, the description which follows is to be understood as being a broad, teaching disclosure directed to persons of skill in the appropriate arts, and not as limiting upon the present invention.
Referring now to
In
The problem which arises with such data flow comes from the latency incurred in reaching each level of cache or memory. As a result, a typical processor fetch from L1 cache may incur a penalty of, for example, X, while a fetch from a corresponding L2 cache may incur a penalty of 3X and from main memory a penalty of 24X.
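By way of a worked example only, and with assumed hit rates that are illustrative rather than measured, the blended penalty per fetch implied by those figures can be computed as follows:

```c
/* Worked example of the latency figures above (X, 3X, 24X).
 * Hit rates are illustrative assumptions, not measurements.   */
#include <stdio.h>

int main(void)
{
    double X      = 1.0;   /* L1 hit penalty, in arbitrary units   */
    double l1_hit = 0.90;  /* assumed fraction of fetches hit L1   */
    double l2_hit = 0.08;  /* assumed fraction served from L2      */
    double mem    = 0.02;  /* remainder go to main memory          */

    double avg = l1_hit * X + l2_hit * 3.0 * X + mem * 24.0 * X;
    printf("average fetch penalty: %.2fX\n", avg / X);  /* ~1.62X */
    return 0;
}
```

Under these assumed rates the blended penalty is roughly 1.6X; the disproportion between the 3X and 24X terms is what makes retaining a line anywhere in cache, rather than returning it to main memory, worthwhile.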
It is in attempts to address this latency problem that numerous schemes have been devised to improve caching algorithms and data flow management facilities, such that the lines selected for eviction are better chosen for a given system design and workload. For the same reasons, pre-fetch algorithms and data flow management have been devised in hardware and software in attempts to pre-empt a processor request for a given line, such that the exponential effect of cache latency penalties can be avoided or diminished. Such schemes require the addition of large amounts of hardware and/or software support to provide any measurable gain.
Rather than addressing pre-fetching or improved line selection for cache replacement at the time of a fetch, this invention attends to the route taken by data at the time of eviction, and to the routes that can be taken to assist in the persistence of that data where appropriate. Referring now to
More particularly, the data line being evicted from L1 cache 12A to L2 cache 14A and passing to eventual main storage 15D will pass through L2 cache 14B and be stored there if it is determined that L2 cache 14B has a compartment capable of receiving the data being evicted. It is a two-stage "if . . . then" determination: does the path pass through another L2 cache; and does that L2 cache have capacity.
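By way of illustration only, that determination may be sketched in C as follows; the structure, field, and function names are hypothetical stand-ins and not the implementation of the facility itself:

```c
/* Sketch of the two-stage "if . . . then" determination described
 * above. Structure and field names are hypothetical.              */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct l2_cache {
    int node_id;
    int free_compartments;  /* empty/invalid compartments available */
};

/* path[] holds the horizontally associated L2 caches the evicted line
 * must pass through on its way to its home main memory.               */
static bool route_evicted_line(struct l2_cache *path[], size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* Stage 1: the line's path passes through this L2 cache
         * (implicit: only caches actually on the path are supplied).
         * Stage 2: does that cache have a compartment to receive it? */
        if (path[i]->free_compartments > 0) {
            path[i]->free_compartments--;  /* install: line persists */
            return true;
        }
    }
    return false;  /* no capacity en route: line goes to main memory */
}

int main(void)
{
    struct l2_cache remote = { .node_id = 1, .free_compartments = 2 };
    struct l2_cache *path[] = { &remote };

    if (route_evicted_line(path, 1))
        printf("evicted line installed in L2 of node %d\n", remote.node_id);
    else
        printf("evicted line written back to main memory\n");
    return 0;
}
```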
While past design implementations have extended the single-node line eviction path to a multi-node system with little or no improvement, this invention improves the process by actively scanning the cache of a remote node, through the horizontal association, for empty/invalid compartments in which the line evicted from another node can be installed, if the line must traverse that cache on its way to its respective main memory or storage. This allows the evicted line to persist longer than prior art implementations allow, and it increases the potential for a subsequent fetch to find the line within a higher level of cache in the system, as compared to the traditional approach, which would have displaced the data back to main memory.
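A minimal sketch of such a compartment scan, assuming a set-associative remote cache with a hypothetical geometry (1024 sets, 8 ways, 64-byte lines), might look as follows:

```c
/* Sketch of scanning a remote node's set-associative L2 for an
 * empty/invalid compartment; the geometry and names are assumed. */
#include <stdbool.h>
#include <stdio.h>

#define SETS 1024
#define WAYS 8

struct compartment { bool valid; unsigned long tag; };
struct remote_l2   { struct compartment sets[SETS][WAYS]; };

/* Returns the way index of an empty/invalid compartment in the set
 * the line maps to, or -1 if every compartment in that set is in use. */
static int find_invalid_way(const struct remote_l2 *c, unsigned long addr)
{
    unsigned long set = (addr >> 6) & (SETS - 1);  /* 64-byte lines */
    for (int way = 0; way < WAYS; way++)
        if (!c->sets[set][way].valid)
            return way;   /* evicted line can be installed here */
    return -1;            /* no vacancy: line continues to memory */
}

int main(void)
{
    static struct remote_l2 cache;  /* zeroed: all compartments invalid */
    printf("install at way %d\n", find_invalid_way(&cache, 0x12340));
    return 0;
}
```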
It is contemplated that this invention may be implemented in a variety of ways: in apparatus, in methods, and in program code originated and made available to computer systems configured as multi-node systems as here described.
In the drawings and specification there has been set forth a preferred embodiment of the invention and, although specific terms are used, the description thus given uses terminology in a generic and descriptive sense only and not for purposes of limitation.