1. Field of the Invention
The present invention generally relates to a computer system and more particularly to a multi-node computer system in which software objects are distributed across the various nodes according to their frequency of use.
2. Background Information
An operating system comprises executable code that provides an infrastructure on which application programs may run. Operating systems generally provide a variety of resources which application programs may access. Such resources may include memory allocation services, graphics drivers, and the like, and collectively may be referred to as the operating system “kernel.”
Part of the process of launching an operating system during system initialization involves loading the kernel in memory. For some operating systems, the kernel is loaded into a predetermined area of memory. That is, a pre-designated portion of memory is allocated for the operating system kernel. For such operating systems, that portion of memory allocated for the kernel is not relocatable and is the same region of memory each time the operating system is launched. Such an operating system may be referred to as being “zero-based, memory dependent.”
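By way of illustration only, the following minimal C sketch models such a zero-based, memory dependent load: the kernel image is always copied into the same pre-designated region at the bottom of memory. The buffer, sizes, and routine names are hypothetical assumptions that simulate physical memory in software; they are not taken from any particular embodiment.

```c
#include <stddef.h>
#include <string.h>

/* Simulated physical memory; the first KERNEL_REGION_SIZE bytes stand in
 * for the pre-designated, non-relocatable kernel region ("zero-based"). */
enum { MEM_SIZE = 1 << 20, KERNEL_REGION_SIZE = 1 << 18 };
static unsigned char physical_memory[MEM_SIZE];

/* Load the kernel image at offset 0 of memory -- the same region on
 * every launch, since the operating system cannot relocate it. */
static int load_kernel(const unsigned char *image, size_t image_size)
{
    if (image_size > (size_t)KERNEL_REGION_SIZE)
        return -1;                      /* image does not fit the region */
    memcpy(&physical_memory[0], image, image_size);
    return 0;
}
```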
Some computer system architectures include a plurality of inter-coupled “nodes” with each node comprising a processor, memory and possibly other devices. Each processor may access its own “local” memory (i.e., the memory contained in the processor's node) as well as the memory of other nodes in the system. The processor-memory combination in each node may be referred to as being “tightly coupled” in that it is much easier and faster for a processor to access its own local memory than the memory of other nodes. Accessing remote memory involves submitting a request through the network between nodes for the desired data, whereas accessing local memory does not require use of network communication resources and the associated latencies.
In a zero-based, memory dependent operating system in which the operating system kernel must be loaded in a pre-designated portion of the memory space, the kernel may be loaded into the local memory of a single node. The node containing the kernel thus has easy, rapid access to the kernel resources. Other nodes in the system also have access to the kernel, but not necessarily as rapidly as the node in which the kernel physically resides. Some nodes may be coupled directly to the node containing the kernel, while other nodes may couple to the kernel's node only via other intervening nodes. This latter type of node, which does not have a direct connection to the node containing the kernel, may be granted access to the kernel, but such requests and accesses flow through the nodes intercoupling the node needing the kernel and the node containing the kernel. Moreover, the latency associated with kernel accesses is exacerbated as the number of intervening nodes between the requesting node and the node containing the kernel increases. It is desirable to reduce latency in this regard.
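To make the latency concern concrete, the short C sketch below assumes a simple linear cost model in which each intervening node adds a fixed per-hop penalty to a remote kernel access. The constants and the linear model are illustrative assumptions, not measured values from any embodiment.

```c
#include <stdio.h>

/* Illustrative access-cost model: a local access has a fixed base cost,
 * and each intervening node on the path to the kernel's node adds a
 * per-hop penalty. Both constants are assumptions for illustration. */
enum { LOCAL_COST_NS = 100, PER_HOP_COST_NS = 250 };

static int access_cost_ns(int intervening_nodes)
{
    return LOCAL_COST_NS + intervening_nodes * PER_HOP_COST_NS;
}

int main(void)
{
    /* 0 intervening nodes corresponds to a directly coupled neighbor. */
    for (int hops = 0; hops <= 3; hops++)
        printf("%d intervening node(s): %d ns per kernel access\n",
               hops, access_cost_ns(hops));
    return 0;
}
```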
One or more of the problems noted above may be addressed by a computer system that includes a plurality of nodes coupled together wherein each node may comprise a processor and memory. The system may also include a plurality of software objects usable by any of the nodes. Each object may be provided to, and stored in, the memory of the node that most frequently uses the object. Without limitation, various embodiments of the invention may comprise a single node that performs the functionality described herein, a computer system having a plurality of nodes, and an associated method.
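As a non-limiting sketch of this placement technique, the following C fragment maintains a per-node count of uses for each software object and, on rebalancing, assigns each object to the local memory of the node that has used it most frequently. All names, sizes, and the counter-based bookkeeping are assumptions for illustration, not a definitive implementation.

```c
#define NODES   4
#define OBJECTS 8

/* use_count[o][n]: how often node n has used object o. */
static unsigned long use_count[OBJECTS][NODES];

/* home_node[o]: the node whose local memory currently holds object o. */
static int home_node[OBJECTS];

/* Record one use of an object by a node's processor. */
static void note_use(int obj, int node)
{
    use_count[obj][node]++;
}

/* Give each object to the node that uses it most frequently. */
static void rebalance(void)
{
    for (int obj = 0; obj < OBJECTS; obj++) {
        int best = 0;
        for (int node = 1; node < NODES; node++)
            if (use_count[obj][node] > use_count[obj][best])
                best = node;
        home_node[obj] = best;  /* object migrates to this node's local memory */
    }
}
```

In practice such rebalancing might run periodically or when counts cross a threshold; the disclosure does not mandate a particular trigger.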
For a detailed description of the preferred embodiments of the invention, reference will now be made to the accompanying drawings.
Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . .”. Also, the term “couple” or “couples” is intended to mean either an indirect or direct electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted or otherwise used as limiting the scope of the disclosure, including the claims, unless otherwise specified. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary, and not intended to intimate that the scope of the disclosure, including the claims, is limited to these embodiments.
Referring to the accompanying figures, a computer system 100 may comprise a plurality of nodes 102, 104, 106, and 108 coupled together.
Each node 102-108 may include one or more processors 120, labeled as “P0”-“P3” as shown. Each node may also include local memory 124 that is coupled to the processor contained in that node. The local memory 124 of each node is accessible by that node's processor and by the processors of other nodes. Thus, each processor 120 may access the memory of all other nodes in the system. Each processor 120 may execute one or more applications. Each node may run the same or different applications as are run on other nodes.
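A minimal data-structure sketch of such a node arrangement follows; the struct layout and the read routine are hypothetical, and a real system would route remote reads over the inter-node links rather than through a simple table lookup.

```c
#define NODES       4
#define LOCAL_WORDS 1024

/* Each node couples a processor to its own local memory. */
struct node {
    int processor_id;                     /* P0..P3 */
    unsigned long local_mem[LOCAL_WORDS];
};

static struct node nodes[NODES];

/* Any processor may read any node's local memory; a remote read in
 * hardware would traverse the inter-node fabric instead of a local bus. */
static unsigned long read_word(int node_id, int word)
{
    return nodes[node_id].local_mem[word];
}
```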
The system 100 may include other devices as well. For example, node 102 may couple to bridge device 130 which provides a bus 132 to which other devices may couple. Such other devices may include, for example, a keyboard controller 134 coupled to a keyboard 136 and a floppy disk controller 138. Other and/or different devices may be coupled to the system 100 via bus 132 and bridge 130. Although the bridge 130 is shown coupled to node 102, the bridge may be coupled to another node if desired.
The system may also include an input/output (“I/O”) controller 140 which provides a bus 142 to which one or more I/O devices may be coupled. Examples of such I/O devices may include a small computer system interface (“SCSI”) controller 144 and a network interface card (“NIC”) 146. Other and/or different I/O devices may be coupled to the system 100 via bus 142 and I/O controller 140.
It is possible that an application that runs on two or more of the nodes 102-108 may have an approximately equal frequency of use of a particular kernel resource. In this case, the resource may be loaded on any of such nodes. The frequency of use by the remaining nodes may also be considered. For example, two nodes may run applications that result in an approximately equal frequency of use of a particular kernel resource. While both nodes may have an approximately equal use of that resource, a third node may have a substantial (albeit lower) need for the same resource, and the placement decision may take that third node's usage into account, as in the tie-breaking sketch below.
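One hedged way to implement such a tie-break, sketched below in C, is to treat the top two use counts as “approximately equal” within a tolerance and, when they tie, to place the resource on whichever of the two nodes lies closer to the third-heaviest user. The tolerance, the hop table, and the selection rule are illustrative assumptions only.

```c
#define NODES 4

/* hops[a][b]: number of intervening nodes between nodes a and b
 * (an assumed, precomputed topology table). */
static int hops[NODES][NODES];

/* Two use counts are "approximately equal" within a hypothetical
 * 5% tolerance of the larger count. */
static int about_equal(unsigned long a, unsigned long b)
{
    unsigned long big  = a > b ? a : b;
    unsigned long diff = a > b ? a - b : b - a;
    return big == 0 || diff * 20 <= big;   /* diff <= 5% of the larger */
}

/* Pick a home node for a resource: the heaviest user wins outright,
 * but when the top two users tie, prefer whichever of them lies
 * closer to the third-heaviest user of the same resource. */
static int choose_home(const unsigned long use[NODES])
{
    int first = 0, second = -1, third = -1;
    for (int n = 1; n < NODES; n++) {
        if (use[n] > use[first])      { third = second; second = first; first = n; }
        else if (second < 0 || use[n] > use[second]) { third = second; second = n; }
        else if (third  < 0 || use[n] > use[third])  { third = n; }
    }
    if (second >= 0 && third >= 0 && about_equal(use[first], use[second]))
        return hops[first][third] <= hops[second][third] ? first : second;
    return first;
}
```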
The preferred embodiments permit a more efficient distribution of operating system kernel resources in a multi-node computer system, thereby advantageously reducing latency. The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. For example, although the resources comprising the operating system kernel are described as being distributed across the various nodes, in general, any type of memory-based object may be distributed as described above. Without limitation, such objects may include statically located system environment variables. It is intended that the following claims be interpreted to embrace all such variations and modifications.