1. Technical Field
The present invention is generally directed to the processing of processes and threads. More specifically, the present invention is directed to a system, apparatus and method of reducing cache thrashing in a multi-processor with a shared cache on which a disruptive process is executing.
2. Description of Related Art
Caches are sometimes shared between two or more processors. For example, in some dual-chip modules two processors may share a single L2 cache. Having two or more processors share a cache may be beneficial in certain instances. Particularly, when parallel programs are being processed and the processors need to access the same piece of data, only one of the processors needs to actually fetch the data into the shared cache. In those instances, therefore, system bus contention is avoided.
Nonetheless, disruptive processes (i.e., processes that have either a poor cache affinity or a very large cache footprint) may adversely affect the performance of such systems. Cache affinity is the concept of reusing data that is already in a cache, while cache footprint is the amount of the cache a process actually uses.
As alluded to above, processes that have a good cache affinity often use data that is already in the cache. The data may be in the cache because it was fetched during a previous execution of the process or through pre-fetching. Obviously, if a process has poor cache affinity, it will not use data that is already in the cache. Instead, it will fetch the data. Depending on the location of the data (i.e., whether it resides on disk, in main memory, etc.), performance may be severely impacted.
Processes that have a large cache footprint may fill up the cache rather quickly. Consequently, previously fetched data may have to be discarded to make room for newly accessed data. If the discarded data is to be reused, it has to be fetched once more into the cache. Then, just as in the case of processes with poor cache affinity, performance may be adversely impacted as data will have to be continually fetched into the cache.
In any case, when these processes run in conjunction with other processes on a system having a shared cache, there is a high likelihood that cache thrashing will occur. Thrashing considerably slows down the performance of a system since a processor has to continually move data in and out of the cache instead of doing productive work.
Consequently, what is needed is a system, apparatus and method of reducing the likelihood of cache thrashing in a multi-processor with a shared cache on which a disruptive process is executing.
The present invention provides a system, apparatus and method of reducing cache thrashing in a multi-processor with a shared cache executing a disruptive process (i.e., a thread that has a poor cache affinity or a large cache footprint). As the multi-processor executes threads, it keeps count of the number of processor cycles used to process each instruction (i.e., cycles per instruction, or CPI). After the execution of a thread has been suspended, the average CPI is computed and compared to a user-configurable threshold. If the average CPI is greater than the threshold, it is entered into a table that has a list of all the threads being executed on the multi-processor system. The average CPI is then linked to all the threads that were actually executing on the multi-processor system when the high average CPI was exhibited. When a thread is dispatched, the table is consulted to determine whether the dispatched thread is a disruptive thread (i.e., a thread to which the most high average CPIs are linked). If the dispatched thread is a disruptive thread, a system idle process is dispatched (when possible) on the processor that shares the cache with the processor executing the disruptive thread.
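By way of illustration, the following is a minimal sketch in C of the bookkeeping just described. All of the names and sizes (thread_stat, high_cpi_entry, MAX_HIGH_CPIS, cpi_threshold, etc.) are hypothetical and merely suggest one possible arrangement of the per-thread counters and the table of high average CPIs; they are not drawn from any particular operating system.

    /* Hypothetical data structures for tracking average CPIs. */
    #include <stdint.h>

    #define MAX_THREADS   256   /* threads tracked system-wide         */
    #define MAX_HIGH_CPIS  64   /* rows in the table of high CPIs      */

    /* Per-thread counters, sampled while the thread runs. */
    struct thread_stat {
        int      tid;            /* thread identifier                   */
        uint64_t cycles;         /* processor cycles used this quantum  */
        uint64_t instructions;   /* instructions completed this quantum */
    };

    /* One row per recorded high average CPI; the row is linked to
       every thread that was executing when the high CPI was exhibited. */
    struct high_cpi_entry {
        double avg_cpi;              /* the offending average CPI      */
        int    tids[MAX_THREADS];    /* threads executing at the time  */
        int    ntids;                /* number of linked threads       */
    };

    static struct high_cpi_entry cpi_table[MAX_HIGH_CPIS];
    static int    cpi_table_len;
    static double cpi_threshold = 2.5;  /* user-configurable threshold */

A thread to which many rows of cpi_table are linked would then be the disruptive-thread candidate referred to above.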
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
FIG. 1a depicts a block diagram illustrating an exemplary data processing system in which the present invention may be implemented.
FIG. 1b depicts a block diagram illustrating another exemplary data processing system in which the present invention may be implemented.
With reference now to the figures, FIG. 1a depicts an exemplary data processing system 100 in which the present invention may be implemented.
Returning to FIG. 1a, note that for purposes of simplification, processors will be used instead of processor cores. Note further that although the depicted example employs a PCI bus, other bus architectures such as Accelerated Graphics Port (AGP) and Industry Standard Architecture (ISA) may be used.
An operating system runs on processors 101 and 102 and is used to coordinate and provide control of various components within data processing system 100 of FIG. 1a.
Those of ordinary skill in the art will appreciate that the hardware in FIG. 1a may vary depending on the implementation.
The operating system generally includes a scheduler, a global run queue, one or more per-processor local run queues, and a kernel-level thread library. A scheduler is a software program that coordinates the use of a computer system's shared resources (e.g., a CPU). In doing so, the scheduler usually uses an algorithm such as first-in, first-out (FIFO), round robin, last-in, first-out (LIFO), a priority queue, a tree, or a combination thereof. Basically, if a computer system has three CPUs (CPU1, CPU2 and CPU3), each CPU will accordingly have a ready-to-be-processed queue, or run queue. If the algorithm in use to assign processes to the run queues is the round robin algorithm and the last process created was assigned to the queue associated with CPU2, then the next process created will be assigned to the queue of CPU3. The process created after that will be assigned to the queue associated with CPU1, and so on. Thus, schedulers are designed to give each process a fair share of a computer system's resources.
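By way of illustration, the following toy C program sketches the round-robin assignment just described. The names (assign_round_robin, NCPUS, the process identifiers) are illustrative only and do not correspond to any particular operating system.

    #include <stdio.h>

    #define NCPUS 3              /* CPU1, CPU2 and CPU3 */

    static int next_cpu;         /* queue that receives the next process */

    /* Returns the index of the CPU whose run queue receives the newly
       created process, rotating through the CPUs in order. */
    int assign_round_robin(int pid)
    {
        int cpu = next_cpu;
        next_cpu = (next_cpu + 1) % NCPUS;
        printf("process %d -> run queue of CPU%d\n", pid, cpu + 1);
        return cpu;
    }

    int main(void)
    {
        /* Five new processes land on CPU1, CPU2, CPU3, CPU1, CPU2. */
        for (int pid = 100; pid < 105; pid++)
            assign_round_robin(pid);
        return 0;
    }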
Note that a process is a program in execution; an executing program is also loosely referred to as a task. In most operating systems, there is a one-to-one relationship between a task and a program. However, some operating systems allow a program to be divided into multiple tasks or threads. Such systems are called multithreaded operating systems. For the purpose of simplicity, threads and processes will henceforth be used interchangeably.
Threads must take turns running on a CPU lest one thread prevent other threads from performing work. Thus, another one of the scheduler's tasks is to assign a unit of CPU time (i.e., a quantum) to each thread.
Now suppose Th1 is a disruptive thread (i.e., Th1 has either a large cache footprint or a poor cache affinity). Suppose further that both Th1 and Th2 are dispatched for execution at the same time (i.e., both threads are being executed at the same time). Then, since Th1 is a disruptive thread, it will request a lot of data. In the meantime, Th2 may also be requesting data. Hence, the L2 cache 103 may quickly fill up. If the L2 cache 103 is filled up, data requested anytime thereafter by either processor 101 or processor 102 may have to replace data already in the cache. If either Th1 or Th2 needs to reuse data that has been replaced, it will have to fetch the data once more from main memory 104. As a result, both processors may register a high number of cache misses. (A cache miss is a request for data that cannot be satisfied from the L2 cache 103 and for which main memory 104 has to be consulted.)
When the data is brought in from main memory 104, it may have to replace other data in the cache that had been brought in by either Th1 or Th2. However, modified data in the L2 cache 103 may not be replaced until it has been copied back to main memory 104. Hence, in certain instances thrashing may occur. In other words, both processors 101 and 102 may continually be moving data in and out of the L2 cache 103. Consequently, the two processors may register a high number of cycles per instruction (CPI).
The present invention may be used to decrease the number of cache misses and, therefore, the CPI registered by a processor of a multi-processor system with a shared cache when a thread with a large cache footprint or poor cache affinity is executing thereon. While a thread is executing, the number of cycles used to execute each instruction is counted. After the execution of the thread has been suspended, the average CPI is computed. If the average CPI is greater than a user-configurable threshold, the average CPI may be categorized as a high CPI. All high CPIs are entered into a table that may be used to determine whether a thread is disruptive.
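By way of illustration, the following C sketch shows one possible way to perform the accounting just described. The counter-reading routines (read_cycle_counter, read_instr_counter) are hypothetical stand-ins for whatever hardware performance counters a given processor exposes, and record_high_cpi stands for the table-entry step described above; none of these names is drawn from an actual kernel.

    #include <stdint.h>

    extern uint64_t read_cycle_counter(void);  /* processor cycles     */
    extern uint64_t read_instr_counter(void);  /* instructions retired */
    extern double   cpi_threshold;             /* user-configurable    */
    extern void     record_high_cpi(int tid, double avg_cpi);

    struct cpi_sample {
        uint64_t cycles_at_dispatch;
        uint64_t instrs_at_dispatch;
    };

    /* Called when the thread is dispatched onto a processor. */
    void cpi_sample_begin(struct cpi_sample *s)
    {
        s->cycles_at_dispatch = read_cycle_counter();
        s->instrs_at_dispatch = read_instr_counter();
    }

    /* Called when the thread's execution is suspended: compute the
       average CPI over the quantum and, if it exceeds the threshold,
       enter it into the table of high CPIs. */
    void cpi_sample_end(struct cpi_sample *s, int tid)
    {
        uint64_t cycles = read_cycle_counter() - s->cycles_at_dispatch;
        uint64_t instrs = read_instr_counter() - s->instrs_at_dispatch;

        if (instrs == 0)
            return;                            /* avoid dividing by zero */

        double avg_cpi = (double)cycles / (double)instrs;
        if (avg_cpi > cpi_threshold)
            record_high_cpi(tid, avg_cpi);     /* link to running threads */
    }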
Obviously, an entry 315 will be entered and linked to Th1 in column 310 of the table of FIG. 3.
In any event, when a thread is dispatched for execution on a processor (e.g., CPU1 205), the table is consulted to determine whether the thread is a disruptive thread. A thread to which a large number of high-CPI entries are linked is considered to be a disruptive thread. If the thread is a disruptive thread, a system idle process is dispatched for execution on the other processor (e.g., CPU2 210). Ordinarily, system idle processes run only when no other processes are using the processors. Thus, when a CPU is idle, the system idle process is in action, executing special halt (HLT) instructions that put the CPU into a suspended mode, thereby allowing the CPU to cool down.
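By way of illustration, the following C sketch shows one possible dispatch-time check. All of the routine names (high_cpi_links, max_high_cpi_links, cache_sibling, cpu_available, dispatch_idle_process) are hypothetical and merely stand in for the operations described above.

    #include <stdbool.h>

    extern int  high_cpi_links(int tid);        /* high CPIs linked to tid  */
    extern int  max_high_cpi_links(void);       /* highest link count seen  */
    extern int  cache_sibling(int cpu);         /* CPU sharing the L2 cache */
    extern bool cpu_available(int cpu);         /* sibling free to idle?    */
    extern void dispatch_idle_process(int cpu); /* runs HLT instructions    */

    /* Invoked after a thread tid has been dispatched on processor cpu. */
    void maybe_quiesce_sibling(int tid, int cpu)
    {
        /* The thread is disruptive if the most high-CPI entries in the
           table are linked to it. */
        bool disruptive = high_cpi_links(tid) > 0 &&
                          high_cpi_links(tid) == max_high_cpi_links();

        int sibling = cache_sibling(cpu);
        if (disruptive && cpu_available(sibling))
            dispatch_idle_process(sibling);  /* idle the cache-sharing CPU */
    }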
In the case of the present invention, however, a system idle process is run on each processor that shares a cache with a processor on which a disruptive thread is executing. Although counter-intuitive, tests have shown that the adverse performance impact of leaving one processor idle (in the case of two processors sharing a cache) is considerably less than that of having both processors exhibit a very poor CPI.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
This application is related to co-pending U.S. patent application Ser. No. ______ (IBM Docket No. AUS920040017), entitled SYSTEM, APPARATUS AND METHOD OF REDUCING ADVERSE PERFORMANCE IMPACT DUE TO MIGRATION OF PROCESSES FROM ONE CPU TO ANOTHER, filed on even date herewith and assigned to the common assignee of this application, the disclosure of which is herein incorporated by reference.