Method and system for minimizing memory access latency in a computer system

Information

  • Patent Application
  • Publication Number
    20040123301
  • Date Filed
    December 23, 2002
  • Date Published
    June 24, 2004
Abstract
A computer system includes a plurality of nodes coupled together wherein each node may comprise a processor and memory. The system may also include a plurality of software objects usable by any of the nodes. Each object may be provided to, and stored in, the memory of the node that most frequently uses the object.
Description


BACKGROUND

[0001] 1. Field of the Invention


[0002] The present invention generally relates to a computer system and more particularly to a multi-node computer system in which software objects are distributed across the various nodes according to their frequency of use.


[0003] 2. Background Information


[0004] An operating system comprises executable code that provides an infrastructure on which application programs may run. Operating systems generally provide a variety of resources which application programs may access. Such resources may include memory allocation resources, graphics drivers, etc., and may generally be referred to as the operating system “kernel.”


[0005] Part of the process of launching an operating system during system initialization involves loading the kernel in memory. For some operating systems, the kernel is loaded into a predetermined area of memory. That is, a pre-designated portion of memory is allocated for the operating system kernel. For such operating systems, that portion of memory allocated for the kernel is not relocatable and is the same region of memory each time the operating system is launched. Such an operating system may be referred to as being “zero-based, memory dependent.”


[0006] Some computer system architectures include a plurality of inter-coupled “nodes” with each node comprising a processor, memory and possibly other devices. Each processor may access its own “local” memory (i.e., the memory contained in the processor's node) as well as the memory of other nodes in the system. The processor-memory combination in each node may be referred to as being “tightly coupled” in that it is much easier and faster for a processor to access its own local memory than the memory of other nodes. Accessing remote memory involves submitting a request through the network between nodes for the desired data, whereas accessing local memory does not require use of network communication resources and the associated latencies.


[0007] In a zero-based, memory dependent operating system in which the operating system kernel must be loaded in a pre-designated portion of the memory space, the kernel may be loaded into the local memory of a single node. The node containing the kernel thus has easy, rapid access to the kernel resources. Other nodes in the system also have access to the kernel, but not necessarily as rapidly as the node in which the kernel physically resides. Some nodes may be coupled directly to the node containing the kernel, while other nodes may couple to the kernel's node only through other intervening nodes. This latter type of node, which does not have a direct connection to the node containing the kernel, may be granted access to the kernel, but such requests and accesses flow through the nodes intercoupling the node needing the kernel and the node containing the kernel. Moreover, the latency associated with kernel accesses is exacerbated as the number of intervening nodes increases between the requesting node and the node containing the kernel. It is desirable to reduce latency in this regard.



BRIEF SUMMARY OF EMBODIMENTS OF THE INVENTION

[0008] One or more of the problems noted above may be addressed by a computer system that includes a plurality of nodes coupled together wherein each node may comprise a processor and memory. The system may also include a plurality of software objects usable by any of the nodes. Each object may be provided to, and stored in, the memory of the node that most frequently uses the object. Without limitation, various embodiments of the invention may comprise a single node that performs the functionality described herein, a computer system having a plurality of nodes, and an associated method.







BRIEF DESCRIPTION OF THE DRAWINGS

[0009] For a detailed description of the preferred embodiments of the invention, reference will now be made to the accompanying drawings in which:


[0010]
FIG. 1 shows a system diagram of a multi-node processor system in accordance with embodiments of the invention;


[0011]
FIG. 2 illustrates a memory map in which an operating system kernel may be mapped; and


[0012]
FIG. 3 shows an exemplary method of reducing latency in accordance with embodiments of the invention.







NOTATION AND NOMENCLATURE

[0013] Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to.” Also, the term “couple” or “couples” is intended to mean either an indirect or direct electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.



DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

[0014] The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted or otherwise used as limiting the scope of the disclosure, including the claims, unless otherwise specified. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary, and not intended to intimate that the scope of the disclosure, including the claims, is limited to these embodiments.


[0015] Referring to FIG. 1, a computer system 100 is shown in accordance with embodiments of the invention. The system 100 may include a plurality of nodes 102, 104, 106 and 108. Although four nodes 102-108 are shown in FIG. 1, any number of nodes may be included. As shown, each node couples to two adjacent nodes. Thus, node 102 couples to nodes 104 and 108, node 104 couples to nodes 102 and 106, node 106 couples to nodes 104 and 108, and node 108 couples to nodes 102 and 106. Other configurations are possible as well and are included within the scope of this disclosure.
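
By way of illustration only, the following sketch models the four-node ring of FIG. 1 and computes the number of node "hops" separating any two nodes. The ring ordering, node labels, and the ring_hops() helper are assumptions made for this example rather than details of the disclosure; the same hop count appears again in the tie-breaking sketch accompanying paragraph [0023].

```python
# Hypothetical model of the four-node ring of FIG. 1 (nodes 102, 104, 106, 108).
# Each node couples only to its two neighbors on the ring.
RING = [102, 104, 106, 108]

def ring_hops(src: int, dst: int) -> int:
    """Minimum number of node-to-node hops between two nodes on the ring."""
    i, j = RING.index(src), RING.index(dst)
    forward = (j - i) % len(RING)
    return min(forward, len(RING) - forward)

# Node 104 reaches node 106 in one hop, but node 108 only through an
# intervening node (102 or 106), i.e., two hops.
assert ring_hops(104, 106) == 1
assert ring_hops(104, 108) == 2
assert ring_hops(102, 102) == 0
```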


[0016] Each node 102-108 may include one or more processors 120, labeled as “P0”-“P3” as shown. Each node may also include local memory 124 that is coupled to the processor contained in that node. The local memory 124 of each node is accessible by that node's processor and the processors of other nodes. Thus, each processor 120 may access the memory of all other nodes in the system. Each processor 120 may execute one or more applications. Each node may run the same or different applications as are run on other nodes.


[0017] The system 100 may include other devices as well. For example, node 102 may couple to bridge device 130 which provides a bus 132 to which other devices may couple. Such other devices may include, for example, a keyboard controller 134 coupled to a keyboard 136 and a floppy disk controller 138. Other and/or different devices may be coupled to the system 100 via bus 132 and bridge 130. Although the bridge 130 is shown coupled to node 102, the bridge may be coupled to another node if desired.


[0018] The system may also include an input/output (“I/O”) controller 140 which provides a bus 142 to which one or more I/O devices may be coupled. Examples of such I/O devices may include a small computer system interface (“SCSI”) controller 144 and a network interface card (“NIC”) 146. Other and/or different I/O devices may be coupled to the system 100 via bus 142 and I/O controller 140.


[0019] Referring now to FIG. 2, a memory map 200 is shown. In general, the memory map 200 includes all of the local memories 124 in the system 100 and address ranges assigned to the local memories so that preferably no two memory locations have the same memory address. As explained previously, the operating system kernel may include various resources 202, labeled in FIG. 2 as RESOURCE 1, RESOURCE 2, . . . , RESOURCE n, which are mapped to a non-relocatable and pre-designated portion 204 of memory.
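
As a rough sketch of the memory map 200, each node's local memory 124 might be assigned a distinct, non-overlapping address range, with the kernel resources 202 occupying a fixed, pre-designated window 204. The address values, memory sizes, and helper below are invented for illustration and are not taken from the disclosure.

```python
# Illustrative memory map: sizes and addresses are assumed for this sketch.
MEM_PER_NODE = 0x4000_0000  # assume 1 GiB of local memory per node

# Non-overlapping address ranges, one per node, so no two locations alias.
memory_map = {
    node: (idx * MEM_PER_NODE, (idx + 1) * MEM_PER_NODE - 1)
    for idx, node in enumerate([102, 104, 106, 108])
}

# Pre-designated, non-relocatable window 204 for the kernel resources
# (RESOURCE 1 .. RESOURCE n).  In this sketch it is "zero-based" and so
# falls within node 102's local range, as in paragraph [0007].
KERNEL_WINDOW = (0x0000_0000, 0x00FF_FFFF)

def owning_node(address: int) -> int:
    """Return the node whose local memory backs the given address."""
    for node, (lo, hi) in memory_map.items():
        if lo <= address <= hi:
            return node
    raise ValueError(f"address {address:#x} not mapped")
```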


[0020] Referring to FIGS. 1 and 2 and in accordance with various embodiments of the invention, the resources comprising operating system kernel may be distributed across two or more nodes 102-108 in a way so as to reduce the latency associated with inter-node kernel accesses. A variety of embodiments may be possible for accomplishing this result. One exemplary embodiment involves allocating the kernel's resources 202 to the nodes that typically have a greater need for such resources. For example, one node out of the plurality of nodes 102-108 may run an application that uses a particular kernel resource 202 more often than the other nodes. That being the case, that particular resource 202 may be provided to the node most often needing the resource for storage in that node's memory 124. Further, each resource may be assigned to the node that history as shown most frequently needs the resource. The memory map 200 may remain the same as in conventional systems. That is, the memory map 200 will still list the kernel resources 202 as being located in the pre-designated region of memory 204. However, the kernel resources 202 identified in the memory map 200 may be distributed among the various nodes 102-108 in a manner that takes advantage of the frequency of use of such resources among the various applications that run on the nodes 102-108.
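
The following sketch illustrates the idea of paragraph [0020] under assumed data: the memory map continues to list each kernel resource at its pre-designated address, while a separate placement table records which node's local memory actually holds it, namely the node that history shows uses it most often. The resource names and usage counts are hypothetical.

```python
# Hypothetical usage history: how often each node's applications have used
# each kernel resource.  Counts are invented for illustration.
usage = {
    "RESOURCE 1": {102: 120, 104: 3400, 106: 90, 108: 15},
    "RESOURCE 2": {102: 875, 104: 12, 106: 40, 108: 2100},
}

# Place each resource in the local memory of its most frequent user; the
# logical address of the resource in the memory map 200 is left unchanged.
placement = {
    resource: max(per_node, key=per_node.get)
    for resource, per_node in usage.items()
}

print(placement)  # e.g. {'RESOURCE 1': 104, 'RESOURCE 2': 108}
```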


[0021] Referring now to FIG. 3, a method of implementing this feature may include blocks 250 and 252. In block 250, a profile of use of the kernel resources 202 is generated. This profiling act may include tracking the frequency of use of each kernel resource by each of the various applications running on the various nodes in the system 100. This act can be accomplished by any suitable technique, such as statistical sampling software (e.g., Intel's VTune). In some embodiments, a boot strap processor (“BSP”) node may perform the act of profiling. In general, the boot strap processor node may be responsible for initializing the system 100 and launching the operating system, as well as profiling kernel resource usage in accordance with the preferred embodiment. In general, any node, such as node 102, may function as the BSP, but typically one node is predetermined to be the BSP, or a BSP selection algorithm may be implemented during initialization. Profiling may occur continuously during run-time or at predetermined discrete times. The profiling results generally may include a frequency of use distribution of the various resources with the various applications that may run on the nodes 102-108. The results of the act of profiling kernel resource usage may be stored in a file in non-volatile memory 125 on the boot strap processor node, which may be node 102 as shown. The non-volatile memory 125 may be coupled to the processor 120 and may comprise a suitable type of read only memory (“ROM”), such as electrically erasable programmable read only memory (“EEPROM”), battery backed-up random access memory (“RAM”), or another suitable type of storage medium. Further, the non-volatile memory 125 may be pre-loaded with a default profile data file. Then, during normal system operation, the default profile data file may be updated with new profiling results according to the particular operation of the system 100.
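
A minimal sketch of the profiling act of block 250 follows; the file location, data layout, and function names are assumptions rather than details from the disclosure. A default profile is loaded from the non-volatile memory 125, updated as kernel-resource uses are observed at run-time, and written back at discrete times.

```python
import json
from pathlib import Path

# Assumed location of the default/updated profile data file in the boot strap
# processor node's non-volatile memory 125; the path is purely illustrative.
PROFILE_PATH = Path("/nvram/kernel_profile.json")

def load_profile() -> dict:
    """Load the pre-loaded default (or previously updated) profile, if any."""
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text())
    return {}

def record_use(profile: dict, application: str, resource: str) -> None:
    """Count one observed use of a kernel resource by an application."""
    counts = profile.setdefault(resource, {})
    counts[application] = counts.get(application, 0) + 1

def save_profile(profile: dict) -> None:
    """Write the updated frequency-of-use distribution back to non-volatile memory."""
    PROFILE_PATH.write_text(json.dumps(profile, indent=2))

# Continuous run-time profiling: each sampled use updates the counts.
profile = load_profile()
record_use(profile, "app_db", "RESOURCE 1")
record_use(profile, "app_web", "RESOURCE 2")
# save_profile(profile)  # write back at a discrete time; requires /nvram
```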


[0022] Regardless of how the act 250 of profiling is accomplished, the result may comprise a file or other set of values stored in non-volatile memory in at least one of the nodes 102-108. Such profile data then may be used in block 252 to initialize the memory map of each node. Block 252 may be performed by the boot strap processor node 102 during system initialization. In performing the action described in block 252, the boot strap processor node determines from the profiled data which application most frequently uses a particular kernel resource 202 and provides the kernel resource to the node on which that application will run, or is running. The node on which an application runs, or is to run, may be determined from the agent ID of the node that the profiling software identifies as executing, or having executed, the application.
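
A minimal sketch of block 252 follows; the profile layout, the application-to-node agent-ID table, and the function name are assumptions. During initialization, the boot strap processor reads the stored profile, finds the application that most frequently uses each resource, and targets the node on which that application runs or will run.

```python
# Hypothetical profile: per-resource use counts keyed by application name,
# as might be produced by the profiling step of block 250.
profile = {
    "RESOURCE 1": {"app_db": 5200, "app_web": 310},
    "RESOURCE 2": {"app_web": 940, "app_batch": 55},
}

# Hypothetical agent-ID table recording on which node each application runs
# (or is scheduled to run).
agent_id = {"app_db": 104, "app_web": 108, "app_batch": 102}

def assign_resources(profile: dict, agent_id: dict) -> dict:
    """For each kernel resource, pick the node that runs its heaviest user."""
    assignment = {}
    for resource, uses in profile.items():
        top_application = max(uses, key=uses.get)
        assignment[resource] = agent_id[top_application]
    return assignment

print(assign_resources(profile, agent_id))
# e.g. {'RESOURCE 1': 104, 'RESOURCE 2': 108}
```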


[0023] It is possible that an application that runs on two or more of the nodes 102-108 may have an approximately equal frequency of use of a particular kernel resource. In this case, the resource may be loaded on any such node. The frequency of use by other nodes may also be considered. For example, two nodes may run applications that result in an approximately equal frequency of use of a particular kernel resource. While both nodes may have an approximately equal use of a particular resource, a third node may have a substantial (albeit lower) need for the same resource. With regard to FIG. 1, for example, nodes 106 and 108 may have an approximately equal frequency of use of a particular resource 202, and node 104 may require the same resource, but less often. In this case, the resource may be provided to node 106, rather than node 108. In that way, node 104 need only incur a one-node “hop” to node 106 to acquire the necessary resource. If, instead, the resource 202 had been provided to node 108, node 104 would have had to incur a two-node hop through node 106 to acquire the resource from node 108.
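
The tie-breaking rule of paragraph [0023] might be sketched as follows; the ten percent "approximately equal" threshold, the usage counts, and the ring_hops() helper (repeated here so the sketch is self-contained) are all assumptions. When two nodes use a resource about equally, the resource is given to whichever of them lies fewer hops from the next most frequent user.

```python
def ring_hops(src, dst, ring=(102, 104, 106, 108)):
    """Minimum hops between two nodes on the ring of FIG. 1 (assumed helper)."""
    i, j = ring.index(src), ring.index(dst)
    forward = (j - i) % len(ring)
    return min(forward, len(ring) - forward)

def place_with_tiebreak(per_node_uses: dict, threshold: float = 0.10) -> int:
    """Pick a home node for one resource, breaking near-ties by proximity
    to the next most frequent (albeit lesser) user of the resource."""
    ranked = sorted(per_node_uses, key=per_node_uses.get, reverse=True)
    first, second = ranked[0], ranked[1]
    nearly_equal = (per_node_uses[first] - per_node_uses[second]
                    <= threshold * per_node_uses[first])
    if nearly_equal and len(ranked) > 2:
        third = ranked[2]  # substantial, but lower, user of the resource
        return min((first, second), key=lambda n: ring_hops(n, third))
    return first

# Example mirroring paragraph [0023]: nodes 106 and 108 use the resource
# about equally and node 104 less often; node 106 is chosen because node 104
# is one hop from 106 but two hops from 108.
print(place_with_tiebreak({106: 1000, 108: 990, 104: 400}))  # -> 106
```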


[0024] The preferred embodiments permit a more efficient distribution of operating system kernel resources in a multi-node computer system, thereby advantageously reducing latency. The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. For example, although the resources comprising the operating system kernel are described as being distributed across the various nodes, in general, any type of memory-based object may be distributed as described above. Without limitation, such objects may include statically located system environment variables. It is intended that the following claims be interpreted to embrace all such variations and modifications.


Claims
  • 1. A computer system, comprising: a plurality of nodes coupled together, each node comprising a processor and memory; and a plurality of software objects usable by any of the nodes, wherein each object is provided to, and stored in, the memory of the node that most frequently uses the object.
  • 2. The computer system of claim 1 wherein one of said nodes comprises a boot strap processor that determines to which node to provide and store each software object.
  • 3. The computer system of claim 2 including a plurality of applications that run on the nodes, each application capable of running on one or more nodes, and the boot strap processor includes profile data that is indicative of a frequency of use of each software object by the plurality of applications, and wherein the boot strap processor determines to which node to provide and store each software object by retrieving and examining said profile data.
  • 4. The computer system of claim 1 wherein a software object may be used approximately equally by first and second nodes and to a lesser degree by a third node that is connected to one of said first or second nodes, and said object is provided to the one of said first and second nodes that is connected to said third node.
  • 5. The computer system of claim 1 including a plurality of applications that run on the nodes, each application capable of running on one or more nodes, and at least one of said nodes includes profile data that is indicative of a frequency of use of each software object by the plurality of applications.
  • 6. The computer system of claim 5 wherein said profile data is generated during run-time of said computer system.
  • 7. The computer system of claim 5 wherein said profile data comprises a predetermined data set.
  • 8. The computer system of claim 5 wherein said profile data comprises a plurality of software objects and a frequency of use of each object with an application that runs on one or more nodes.
  • 9. The computer system of claim 1 wherein said software objects comprise operating system kernel resources.
  • 10. A computer node operable in a system comprising a plurality of computer nodes, said computer node comprising: a processor; memory coupled to said processor and containing profile data that is indicative of a frequency of use of one or more operating system kernel resources with regard to various applications that may run on said computer node or other computer nodes; wherein said processor provides each of said operating system kernel resources to the one of said computer nodes that most frequently uses said resource.
  • 11. The computer node of claim 10 wherein said node comprises a boot strap processor that determines to which node to provide and store each operating system kernel resource.
  • 12. The computer node of claim 11 further comprising a boot strap processor.
  • 13. The computer node of claim 10 wherein an operating system kernel resource may be used approximately equally by first and second nodes and to a lesser degree by a third node that is connected to one of said first or second nodes, and said processor provides said resource to the one of said first and second nodes that is connected to said third node.
  • 14. The computer node of claim 10 wherein said system includes a plurality of applications that run in the system, each application capable of running on one or more nodes, and said profile data is indicative of the frequency of use of each operating system kernel resource by the plurality of applications.
  • 15. The computer node of claim 14 wherein said profile data is generated during run-time.
  • 16. The computer node of claim 14 wherein said profile data comprises a predetermined data set.
  • 17. The computer node of claim 14 wherein said profile data comprises a plurality of operating system kernel resources and a frequency of use of each resource with an application that runs on one or more nodes.
  • 18. A method, comprising: (a) obtaining profile data indicative of a frequency of use of software objects with respect to various applications running on nodes in a multi-node computer system; (b) determining which node runs an application that uses a particular software object more frequently than another node running said application; (c) copying the software object to the node that runs the application that uses the software object more frequently than another node; (d) repeating (b) and (c) for additional software objects.
  • 19. The method of claim 18 further including generating said profile data and storing said profile data in a node in the multi-node system.
  • 20. The method of claim 19 wherein (b) includes determining a plurality of nodes that use the particular software object approximately equally and (c) includes copying the software object to one of said plurality of nodes that use the particular software object approximately equally.
  • 21. The method of claim 20 wherein (c) includes copying the software object to one of said plurality of nodes that use the particular software object approximately equally and that connects to another node that uses the software object, albeit less frequently than said plurality of nodes.
  • 22. A computer node operable in a system comprising a plurality of computer nodes, said computer node comprising: memory containing profile data that is indicative of a frequency of use of one or more operating system kernel resources with regard to various applications that may run on said computer node or other computer nodes; and a means for providing each of said operating system kernel resources to the one of said computer nodes that most frequently uses said resource.
  • 23. The computer node of claim 22 wherein an operating system kernel resource may be used approximately equally by first and second nodes and to a lesser degree by a third node that is connected to one of said first or second nodes, and said computer node includes a means for providing said resource to the one of said first and second nodes that is connected to said third node.