System and method for prevention of boot storms in a computer network

Abstract
This invention is useful in a networked system with densely packaged servers or server blades. The servers are connected to a system management network, a communication network and an image server. A management module, attached to the system management network and to a network switch, monitors and controls network booting from an image server on the communication network to prevent overcommitment of network and image server resources and thereby avoid a boot storm. The management module collects system information and calculates the number of servers or clients the networked system can boot at any one instant of time without overburdening the system. The management module logic controls booting via the system management network and service processor elements, which can block server booting and release servers to boot when other servers have completed their boot process.
Description


BACKGROUND OF THE INVENTION

[0001] This invention pertains to computers and other data processing systems and, more particularly, to computers and other data processing systems for use in a network in which an operating system, application, device driver, data or other software is stored on one computer system in the network and downloaded or “served” to other computer systems in the network.


[0002] A typical desktop computer system includes a non-volatile semiconductor memory, such as a well known “flash” memory, for storing program code commonly called “POST” or Power On Self Test. This typical desktop computer also includes a nonvolatile magnetic disk storage device, typically a hard disk drive, onto which the operating system, as well as other programs and data are stored. At power-up, the computer's central processing unit or “CPU” executes the POST code, which performs diagnostic checks, initializes the computer's internal devices, and then loads an operating system program called the “boot loader” from the local hard disk to the computer's main memory. After validating the boot loader code in main memory, control is passed to the boot loader, which loads and executes additional operating system programs and data stored on the hard disk.


[0003] In this way, the system loads the operating system kernel and any device drivers, management agents, communications stacks, application programs, etc., that are required for the computer system to become fully functional. The collection of operating system programs, device drivers, management agents, applications, etc., is often referred to collectively as an “operating system image” and is typically customized for a specific computer system or class of systems.


[0004] In the desktop computer example above, the operating system image was stored locally on the hard disk drive of the computer, an arrangement that can be described as a “local boot” system. In a network of a plurality of computers, computer servers, or other data processing equipment, it is possible to employ the local boot technique described above by storing a copy of the operating system image on each of the computers in the network, such that each computer boots the operating system from its local hard disk or other non-volatile mass storage device within or attached to each computer.


[0005] In addition, a well known “network boot” system can also be used in which the operating system image is not stored locally on each computer in the network, but is stored on a remote computer and downloaded to various computers, servers, and other data processing equipment in the network. The computer, server, or other data processing equipment storing an image to be downloaded will be referred to as an “image server”, and a computer, server, or other data processing equipment in the network that is capable of receiving an image from the image server is referred to as a “client computer” or, simply, a “client.”


[0006] Network booting is beneficial and desirable under circumstances where tight control is required over the operating system image, where the operating system image used by the client computers may change frequently, and where the availability of a local hard disk or other non-volatile mass storage device on the client computers is limited or non-existent. The use of client computers lacking a hard disk drive or other non-volatile mass storage device is particularly beneficial in reducing the total cost of ownership of a large network of computers, such as may be found in large corporations.


[0007] Network booting is not limited to desktop computers and workstations, but is increasingly being used in networks of servers. In addition, network booting is useful in dense-server packaging schemes, such as “server blades.” A server blade is a complete computer server on a single printed circuit board. Typically, a dozen or more individual server blades can be plugged into a server blade chassis, which provides power, control and inter-blade communication capability.


[0008] It is common to find several hundred client computers in a network in which network booting is employed. If a large number of clients are started or restarted simultaneously, then the network and image server resources may become overburdened. For example, this may happen at the restoration of power after a power failure, at initial power-up of multiple rack mounted servers or server blades, or upon reception by a plurality of clients of a command from a management console to restart the operating system or obtain a new operating system image.


[0009] Because the network and server resources are not able to process all of the requests placed by a large number of clients in a relatively short time interval, some requests for images either fail or time out and, therefore, must be retried at some time in the future. This results in a situation where a large number of requests flooding the network interferes with the successful handling of other requests and, therefore, causes a significant increase in the amount of network traffic, which may aggravate the situation even more. The term “boot storm” is used, in particular, to describe the situation in which the image server and network resources are overburdened by too many requests from clients for the operating system image. The term is also used expansively to describe the situation wherein, within a narrow window of time, too many clients make requests for any type of image (application, device drivers, data or other computer code), which results in these resources becoming overburdened.


[0010]
FIG. 1(a) is a graph representing network system boot performance during a boot storm in a prior art system, wherein the vertical axis 101 is indicative of the number of clients attempting to simultaneously access the image server, and the horizontal axis 102 indicates the total time required for the image server to download the boot image to all of the clients. Horizontal line 103 represents the maximum capacity of the network resources to simultaneously download images to requesting clients, in terms of the number of clients simultaneously requesting an image from the image server. Plot 104 represents a boot storm scenario in a prior art system. Note that, between times t0 and t1, the number of clients requesting an image from the image server is below maximum capacity line 103; from time t1 to t2, the number of clients requesting an image exceeds this maximum capacity and does not drop below the line until after time t2. During the interval t1 to t2, when the number of clients requesting service exceeds maximum capacity 103, clients will interfere with each other, messages will be retried, and responses will be lost. Occasionally, a client may abandon the attempt entirely and later retry the process from the beginning, effectively discarding whatever programs and data it collected in the previous attempts. Because of these conflicts and interference, the time to complete the entire process of booting all clients, time t4, is longer than expected.


[0011] By comparison, FIG. 1(b) is identical to the graph of FIG. 1(a), except that plot 105 represents the network system boot performance of a system of the present invention in which the total number of clients requesting an image from the image server is identical to the total number of clients requesting service in the prior art system of FIG. 1(a). Note that at all times, plot 105 is below maximum capacity line 103. More importantly, note that all clients are serviced by time t3 while, in the prior art system of FIG. 1(a), the total time to service all clients is time t4. Thus, as will be described in more detail below, one of the many advantages of the present invention is that it can be used to prevent boot storms, thereby reducing the total boot time of a plurality of clients in a network boot environment.



SUMMARY OF THE INVENTION

[0012] Briefly, in one embodiment, the invention is a network including an image server for downloading software images and a plurality of clients coupled to the image server. A controller is coupled to each of the clients for individually controlling the operation of the clients, and for building a wait list of each client requiring an image from the image server. The controller is also for repetitively enabling each of the clients on the wait list to download an image from the image server until the total number of enabled clients is equal to “M”, or until no more clients remain on the wait list. “M” is the maximum number of clients that are permitted to download an image from the image server at any one time.


[0013] In another embodiment, the invention is an assembly of clients (such as, but not limited to, a server blade chassis) for use with an image server. Included in the assembly are a plurality of clients, which are connectable to the image server. The assembly also includes a controller coupled to each of the clients for individually controlling the operation of the clients, and for building a wait list of each client requiring an image from the image server. The controller is also for repetitively enabling each client on the wait list to download an image from the image server until the total number of enabled clients is equal to “M” or until no more clients remain on the wait list. “M” is the maximum number of clients that are permitted to download an image from the image server at any one time.


[0014] In another embodiment, the invention is a method for controlling the downloading of images from an image server to a plurality of clients. In a first step, the method builds a wait list of clients that are requesting an image from the image server. In a next step, the method repetitively enables each client on the wait list to download an image from the image server until the total number of enabled clients is equal to “M”, or until no more clients remain on the wait list. “M” is the maximum number of clients that are permitted to download an image from the image server at any one time.
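
By way of illustration only, the method of this embodiment may be sketched as follows. This is a minimal sketch in Python, assuming a hypothetical Client interface with needs_image(), enable_download() and download_complete() methods, and an assumed one-second polling interval; it is not a definitive implementation.

```python
import time

def throttle_downloads(clients, m):
    """Enable at most m concurrent image downloads (the claimed method)."""
    wait_list = [c for c in clients if c.needs_image()]   # build the wait list
    active = []                                           # clients now downloading
    while wait_list or active:
        # Enable waiting clients until M are active or the wait list is empty.
        while wait_list and len(active) < m:
            client = wait_list.pop(0)
            client.enable_download()
            active.append(client)
        # As downloads finish, their slots free up for waiting clients.
        active = [c for c in active if not c.download_complete()]
        time.sleep(1)                                     # poll interval (assumed)
```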







BRIEF DESCRIPTION OF THE DRAWINGS

[0015]
FIG. 1(a) is a graph representing network system boot performance during a boot storm of a prior art system, wherein the vertical axis is indicative of the number of clients attempting to simultaneously access the image server, and the horizontal axis indicates the total time required for the clients to boot from the image server.


[0016]
FIG. 1(b) is a graph representing network system boot performance of a system of the present invention, wherein the vertical axis is indicative of the number of clients attempting to simultaneously access the image server, and the horizontal axis indicates the total time required for the clients to boot from the image server.


[0017]
FIG. 2 is a block diagram of a network system of the present invention.


[0018]
FIG. 3 is a logical flow diagram for the network boot logic portion of the management module of the present invention.







DESCRIPTION OF THE ILLUSTRATIVE EMBODIMENTS

[0019] In the following description of the illustrative embodiments, the best mode of practicing the invention presently known to the inventors will be described with some particularity. However, this description is intended as a broad, general teaching of the concepts of the present invention in specific embodiments, and is not intended to limit the present invention to these embodiments. Those skilled in the relevant art will recognize that there are many variations and changes to the specific structure and operation shown and described in these embodiments that embody the broad, general teachings of the present invention.


[0020] The invention will now be described with respect to a network including an image server and a server blade chassis populated with a plurality of server blades. A server blade is a complete computer server on a single printed circuit board. Typically, a dozen or more individual server blades can be plugged into a server blade chassis, which provides power, control and inter-blade communications channels. However, those skilled in the art will recognize that the invention may be practiced not only with server blades, but with any other computer, server, or data processing equipment that is capable of downloading a software image from an image server; these computers, servers and data processing equipment that obtain images from an image server will be referred to generally as a “client.”


[0021] In FIG. 2, a server blade chassis 200 contains a plurality of densely packed “server blades” 202a-202d (collectively referred to as server blades 202). Each server blade 202 is a complete computer server on a printed circuit board and includes a main processor, memory, network interface circuitry, and other well known server circuits and functions (not illustrated). Each server blade 202 is designed to plug into one of a plurality of server blade connectors (not illustrated) on chassis 200 in a manner similar to an adapter board plugging into an I/O connector of a personal computer. Each server blade 202a-202d includes, respectively, a well known service processor element 212a-212d (collectively referred to as SPE's 212), each of which includes a service processor, memory and other well known circuitry. As is well known, each SPE 212 can place the main processor on its server blade in the reset state, and is also configured to remove power from the main processor. Server blade chassis 200 also includes a management module 203, which is connected to each SPE 212 through a management network 208, preferably a well known RS-485 network. Certain functions of each server blade 202 can be controlled by commands sent by management module 203 across management network 208. Thus, management module 203 functions as a controller to individually control each server blade 202, as well as network switch 201.


[0022] A network switch 201, such as a well known gigabit ethernet switch, interconnects each server blade 202 through network links 205a-205d (collectively referred to as links 205). Network switch 201 is also connected to management module 203 through network link 207. In addition, network switch 201 is connected to an external communications network 210 via network link 206. Communications network 210 may include hubs, switches, routers and other well known network elements. Network link 209 connects network 210 to image server 204, which is a well known server or other computer system that includes an operating system image stored on a hard disk drive or other non-volatile mass storage device within or associated with the image server. Preferably, network links 205, 206, 207 and 209 are well known gigabit ethernet links, although other network connections may be used.
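
For orientation, the FIG. 2 topology described above may be modeled with the following minimal sketch; the class names and fields are assumptions introduced for illustration and do not appear in the embodiment itself.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProcessorElement:   # SPE 212: controls one blade's main processor
    in_reset: bool = False
    powered: bool = True

@dataclass
class ServerBlade:               # server blade 202
    name: str
    port: int                    # switch port terminating network link 205
    spe: ServiceProcessorElement = field(default_factory=ServiceProcessorElement)

@dataclass
class Chassis:                   # chassis 200 housing management module 203
    blades: list
    # Management network 208 reaches each blade's SPE; network switch 201
    # connects the blades through network 210 to image server 204.
```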


[0023] To boot server blades 202, the operating system image is downloaded from image server 204 to each server blade, rather than being stored locally on a hard disk drive or other non-volatile mass storage device that may or may not be included with each server blade. In addition, image server 204 may be responsive to requests for images arising from other “clients” (not shown) on other parts of the network 210 via network links 211.


[0024] In the prior art, when a server blade chassis is initially powered on, all server blades initiate a request for their respective operating system image from the image server. Depending upon the number of server blades, the size of the requested images, the bandwidth of the network, and the capabilities of the image server, a boot storm may result in which one or more of the system resources is overburdened. This is compounded when more than one server blade chassis, possibly located in the same mechanical rack, receives power at the same time, or when other clients on the network are also using the resources of the same image server.


[0025] In the current invention, a predetermined number of server blades 202 are allowed to boot to keep the network and the image server load just below the maximum capability. As those server blades finish, additional server blades are allowed to start. The point of control is management module 203, which is a centralized management resource. Using SPE's 212 in each server blade 202, management module 203 may instruct individual server blades 202 to be held in the reset state or have their power removed. In addition, using network connection 207, management module 203 may query and control the operation of network switch 201. Control of network switch 201 includes at least the ability to configure the Virtual LAN's (“VLANs”) that determine the connectivity of network links 205 to external network link 206. Because of this staged booting, which is described in more detail with respect to FIG. 3, many of the retries and restarts of the prior art system are prevented, and the total time required for all server blades 202 to boot (t3 in FIG. 1(b)) is substantially less than the total time in the prior art system (t4 in FIG. 1(a)).


[0026]
FIG. 3 is a logical flow diagram for the network boot logic portion of the management module of the present invention. Referring to this figure, in first step 300, the network boot logic portion of management module 203 detects an event that requires multiple server blades to request an operating system image from image server 204. Examples of such events include powering on chassis 200 and activating a number of server blades from sleep mode.


[0027] In next step 302, management module 203 builds a “wait-to-boot list” of all server blades 202 that currently need an image to boot. In addition, management module 203 sends an appropriate command or string of commands to disable each server blade on the wait-to-boot list, and then sets a counter “N” equal to zero. Counter “N” is used to indicate the total number of server blades currently in the process of booting.


[0028] There are a number of ways to disable a server blade 202, all of which will prevent the boot process from starting. For example, management module 203 may disable a particular server blade 202 by instructing the corresponding service processor element 212 to either remove power from the main processor on the server blade, or to hold the main processor in the reset state. While either of these commands will disable the server blade's main processor and prevent the boot process from starting, the service processor element remains powered and responsive to commands received from management module 203. Alternatively, management module 203 may disable a particular server blade 202 by preventing communications with image server 204. If a particular server blade cannot communicate with its boot image server, it is effectively disabled since it cannot boot without its boot image. To accomplish this, management module 203 instructs network switch 201 to reconfigure the VLAN's within the switch to prevent the server blade from communicating with image server 204 via communications network 210.
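
The three disable mechanisms described in this paragraph may be sketched as follows; the SPE and switch interfaces (hold_reset(), remove_power(), set_vlan()) are hypothetical stand-ins for the commands a real chassis would expose.

```python
RESET, POWER_OFF, VLAN_ISOLATE = "reset", "power_off", "vlan_isolate"

def disable_blade(blade, method, switch):
    """Prevent a blade from starting its network boot (three variants)."""
    if method == RESET:
        blade.spe.hold_reset()       # hold the main processor in reset
    elif method == POWER_OFF:
        blade.spe.remove_power()     # power off the main processor only;
                                     # the SPE itself stays powered and
                                     # responsive to the management module
    elif method == VLAN_ISOLATE:
        # Reconfigure the switch VLANs so the blade cannot reach the image
        # server; without its boot image it is effectively disabled.
        switch.set_vlan(blade.port, vlan="isolated")
```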


[0029] In one embodiment of the invention, each server blade may be preassigned a level of priority. If priority levels have been assigned, then the server blades currently on the wait-to-boot list are reordered in next step 304 in descending order of priority, such that the server blade with the highest priority will be allowed to download its boot image first. Any number of methods may be used to assign a particular level of priority to each of the server blades, as is well understood in the art; the particular method of priority assignment is outside the scope of the present invention.


[0030] In next step 306, management module 203 queries network switch 201 and image server 204 to determine the availability of resources, such as the amount of memory available in the image server, and other parameters indicative of network bandwidth. One way for management module 203 to acquire this information is through the use of well known SNMP instrumentation or other forms of well known management agents. Although only one query of the system resources is illustrated in FIG. 3, it is envisioned that network switch 201 and image server 204 can be queried repeatedly to allow for changing conditions in the system. Having determined the availability of network and server resources and related parameters, management module 203 can now estimate the maximum number “M” of server blades 202 that can be allowed to boot at any one time on the network without overburdening system resources.
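
As one illustrative sketch of this estimate, the management module might take the tightest of several resource constraints. The per-client bandwidth and memory figures below, and the query helpers, are assumptions introduced for illustration rather than values specified by the embodiment.

```python
def estimate_max_concurrent_boots(switch, image_server,
                                  per_client_bandwidth_mbps=100,
                                  per_client_server_mem_mb=64):
    """Estimate M from queried network and image server resources."""
    link_mbps = switch.query_uplink_bandwidth_mbps()   # e.g. via SNMP
    free_mem_mb = image_server.query_free_memory_mb()  # e.g. via a management agent
    by_network = link_mbps // per_client_bandwidth_mbps
    by_memory = free_mem_mb // per_client_server_mem_mb
    return max(1, int(min(by_network, by_memory)))     # tightest constraint wins
```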


[0031] In the alternative, “M” can be determined empirically by simply incrementing “M” and measuring the total time required to boot a fixed number of server blades. This fixed number of server blades must be larger than the maximum value of “M” used during empirical testing. The value of “M” that results in the shortest total boot time for this fixed number of server blades during empirical testing is the value of “M” that should be used during normal operation of the system.
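
A sketch of this empirical calibration follows, assuming a hypothetical boot_all() helper that boots the fixed pool of server blades with a given concurrency limit and returns the elapsed time in seconds.

```python
def calibrate_m(blades, boot_all, m_candidates=range(1, 17)):
    """Return the concurrency limit M with the shortest total boot time."""
    # The fixed pool of blades must exceed the largest M being tried.
    assert len(blades) > max(m_candidates)
    timings = {m: boot_all(blades, m) for m in m_candidates}
    return min(timings, key=timings.get)
```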


[0032] In next step 308, the network boot logic of management module 203 determines if there are any more server blades 202 on the wait-to-boot list. If not, the network boot logic jumps to step 310 to determine if any server blades 202 are currently booting. If no server blades 202 are currently booting, then N=0 and the network boot logic jumps to step 312 to complete the process. In step 312, management module 203 performs other processes before returning to step 300.


[0033] Returning to step 308, if there are one or more server blades 202 on the wait-to-boot list, the network boot logic jumps to step 314 to determine if the number of server blades currently booting N is less than the maximum number M. If N is less than M, the network boot logic jumps to step 320 wherein the next server blade on the wait-to-boot list is enabled, network switch 201 is programmed to place the selected server blade on the operational VLAN, and then N is incremented. Following the completion of step 320, the network boot logic returns to step 308.


[0034] Returning to step 314, if N=M, the network boot logic jumps to step 316 wherein management module 203 checks to determine if any previously booting server blade has completed its boot process. If a server blade has just completed its boot process, the network boot logic jumps to step 318 wherein “N” is decremented. After step 318, the network boot logic returns to step 308.


[0035] If, in step 316, the network boot logic fails to identify a server blade that has just completed its boot process, the network boot logic returns to step 308 after passing through wait state 322, which may be used by management module 203 to perform other management functions during the wait period.
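
Taken together, steps 300 through 322 may be sketched as the following loop. The blade and switch objects, their methods, and the one-second wait in step 322 are assumptions introduced for illustration, not a definitive implementation of the management module.

```python
import time

def network_boot_logic(blades, switch, image_server, estimate_m):
    # Step 302: build the wait-to-boot list, disable each blade, set N = 0.
    wait_list = [b for b in blades if b.needs_boot_image()]
    for blade in wait_list:
        blade.spe.hold_reset()
    n = 0
    # Step 304: reorder by descending priority, if priorities are assigned.
    wait_list.sort(key=lambda b: b.priority, reverse=True)
    # Step 306: query resources and estimate the concurrency limit M.
    m = estimate_m(switch, image_server)
    booting = []
    while wait_list or booting:          # steps 308 and 310
        if wait_list and n < m:          # step 314
            # Step 320: enable the next blade, place it on the operational
            # VLAN, and increment N.
            blade = wait_list.pop(0)
            switch.set_vlan(blade.port, vlan="operational")
            blade.spe.release_reset()
            booting.append(blade)
            n += 1
        else:
            # Step 316: check whether any blade has finished booting.
            done = [b for b in booting if b.boot_complete()]
            for blade in done:
                booting.remove(blade)
                n -= 1                   # step 318
            if not done:
                time.sleep(1)            # wait state 322 (interval assumed)
    # Step 312: all blades have booted; resume other management processes.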


[0036] The present invention as described contemplates a system with densely packed servers which include connections to a system management network 208 and a communication network 210, and a management module 203 for avoiding boot storms when multiple server blades 202 or other clients attempt to boot from an image server 204. Upon power up or other event, management module 203 will detect when multiple server blades 202 require a network boot operation. Management module 203 will acquire enough system information to control and regulate, via system management network 208, the number of server blades 202 allowed to boot from the communication network 210 at any instant of time. Management module 203 continuously monitors the system for changes and, upon detecting the completion of a boot by a server blade 202, will release via the system management network 208 any remaining server blades 202 that have been blocked from booting their respective images.


[0037] It is envisioned that management module 203, which controls the booting process of clients 202, will maintain a log of all transactions. The network administrator can use this log to reconfigure the network topology and improve network booting performance.
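
One possible form for a transaction log record is sketched below; the fields shown are illustrative assumptions, not a format specified by this description.

```python
# A hypothetical transaction log record kept by the management module.
log_entry = {
    "timestamp": "2024-01-01T00:00:00Z",
    "blade": "blade-07",
    "event": "boot_enabled",   # e.g. boot_enabled, boot_complete, boot_retried
    "concurrent_boots": 5,     # value of N when the event was logged
    "limit_m": 8,              # value of M in effect at the time
}
```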


[0038] The instant invention has been shown and described herein in what is considered to be the most practical and preferred embodiment. It is recognized, however, that departures may be made within the scope of the invention and that obvious modifications will occur to a person skilled in the art that are within the scope and spirit of the claimed invention.


Claims
  • 1. A network, comprising: an image server for downloading software images; a plurality of clients coupled to said image server; a controller coupled to each of said clients, said controller for individually controlling the operation of each of said clients and for building a wait list of each of said clients requiring an image from said image server, said controller for repetitively enabling each of said clients on the wait list to download an image from said image server until the total number of said clients that have been enabled to download an image is equal to “M” or until no more of said clients remain on the wait list, wherein “M” is the maximum number of said clients that are permitted to download an image from said image server at any one time.
  • 2. The network of claim 1, wherein said controller is for enabling the next one of said clients on the wait list to download an image from said image server as each previously enabled client completes the download of an image from said image server, such that the total number of said clients downloading an image from said image server at any one time is equal to or less than “M”.
  • 3. The network of claim 1, further comprising a programmable switch coupling each of said clients to said image server, wherein said controller enables at least one of said clients on the wait list to download an image from said image server by programming said switch to communicate with said image server.
  • 4. The network of claim 2, further comprising a programmable switch coupling each of said clients to said image server, wherein said controller enables at least one of said clients on the wait list to download an image from said image server by programming said switch to communicate with said image server.
  • 5. An assembly of clients, for use with an image server, said assembly comprising: a plurality of clients connectable to the image server; a controller coupled to each of said clients, said controller for individually controlling the operation of each of said clients and for building a wait list of each of said clients requiring an image from the image server, said controller for repetitively enabling each of said clients on the wait list to download an image from the image server until the total number of said clients that have been enabled to download an image is equal to “M” or until no more of said clients remain on the wait list, wherein “M” is the maximum number of said clients that are permitted to download an image from the image server at any one time.
  • 6. The assembly of clients of claim 5, wherein said controller is for enabling the next one of said clients on the wait list to download an image from the image server as each previously enabled client completes the download of an image from the image server, such that the total number of said clients downloading an image from the image server at any one time is equal to or less than “M”.
  • 7. The assembly of clients of claim 5, further comprising a programmable switch for coupling each of said clients to the image server, wherein said controller enables at least one of said clients on the wait list to download an image from the image server by programming said switch to communicate with the image server.
  • 8. The assembly of clients of claim 6, further comprising a programmable switch for coupling each of said clients to the image server, wherein said controller enables at least one of said clients on the wait list to download an image from the image server by programming said switch to communicate with the image server.
  • 9. A method for controlling the downloading of images from an image server to a plurality of clients, comprising the steps of: building a wait list of clients that are requesting an image from the image server; and repetitively enabling each client on the wait list to download an image from the image server until the total number of clients that have been enabled to download an image is equal to “M”, or until no more clients remain on the wait list, wherein “M” is the maximum number of clients that are permitted to download an image from the image server at any one time.
  • 10. The method of claim 9, further comprising the step of enabling the next client on the wait list to download an image from the image server as each previously enabled client completes the download of an image from the image server, such that the total number of clients downloading an image from the image server at any one time is equal to or less than “M”.
  • 11. The method of claim 9, wherein the step of repetitively enabling each client on the wait list to download an image further includes the step of programming a switch to establish a communications link between the image server and at least one of the clients.
  • 12. The method of claim 10, wherein the step of repetitively enabling each client on the wait list to download an image further includes the step of programming a switch to establish a communications link between the image server and at least one of the clients.