In order to understand the invention and to see how it may be carried out in practice, a preferred embodiment will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
In the following description, the same reference numerals are used in different figures to refer to identical components.
The switches and the blades (103-113) are in association with the chassis 102 through a backplane 118 having at least one bus, such as a serial bus. The bus is used, for example, for managing, controlling and monitoring the blades (103-113). The switches and blades can also be connected by a computer network 119 or a segment thereof, connecting the blades' NICs (114, 115) to the switches' NICs (116, 117).
Those versed in the art will readily appreciate that the block diagram of
It should also be noted that the description below discloses a blade server with Ethernet switches. However, this is non-limiting, and any other suitable networking protocol can also be applicable.
It should also be noted that the blades 105-113 can be substantially identical. Therefore, unless specifically noted, when hereinafter reference is made to a blade (such as blade 105), the disclosed embodiment is applicable to any of the blades 105-113. In the same way, the switches 103-104 can also be substantially identical, and therefore, unless specifically noted, when hereinafter reference is made to a switch (such as switch 103), the disclosed embodiment is applicable to any of the switches 103-104.
Initially, when installing a new blade server 101, new blades 105 and new switches 103 are swapped into the chassis, where these blades have no operating system and no software executables installed on them. Therefore such a blade is referred to, hereinafter, as an "unloaded blade". After loading at least a kernel and setting basic networking configurations, it is possible to load software executables to run on the blades. Therefore, such a blade, having at least a kernel and a basic networking configuration, and sometimes also an executable stored in its memory, is referred to, hereinafter, as a “loaded blade”. Examples of software executables that can be loaded on to a blade are firewalls, web or mail servers, etc.
According to an embodiment of the invention, a controller executable, referred to hereinafter as a “controller”, can be loaded to a blade.
According to one embodiment of the invention, one or more controllers can be loaded to at least one blade 105 accessible to a blade server 101. The embodiment relates to the case when each controller is loaded to a respective blade. However, this is non-limiting, and multiple controllers can be loaded to the same blade if applicable. In the figure, a second controller 204, redundant to the first controller 201, provides fault tolerance when the first controller 201 fails. Therefore, the first controller 201 is referred to, hereinafter as a “master controller”, while the second controller 204 is referred to, hereinafter, as a “slave controller”. In the case that more than one controller (201, 204) is provided, each may be substantially identical, and therefore, unless specifically noted, hereinafter reference will be made to the controller 201. It should be noted that each of the controllers (201, 204) can have access to a different storage device (202, 205 respectively). For example, each controller can be in association with a local disc attached to its blade 105. The blade 105 can also have an external disc or a RAID device associated therewith, as illustrated in
It should be appreciated that when the master controller fails, the slave controller can become a master controller, thereby providing fault tolerance. Generally, if there is more than one redundant controller in association with a blade server, one of them is selected to be a master controller, while the others become slaves. The selection of the master controller from the multiple controllers can be random, based on a first-swapped-in criterion (i.e., the first controller to be swapped in is the master), or according to any other criterion as appropriate. When the master controller fails, one of the slave controllers can become a master controller (therefore referred to as a “replacing master”). Again, if there is more than one slave controller, the selection of the replacing master can be done according to any appropriate criterion.
It should be noted that unless specifically noted otherwise, whenever the description below refers to an operation performed by a controller, the description refers to the master controller. The master controller notifies each slave controller of any change, in order to synchronize them and the storage devices associated with them, using a mechanism described below with reference to
Being in association with a storage device 202, the controller 201 can store data for other blades 105 and switches 103 accessible to the blade server 101. The data can include, for example, executable code (such as software executables), operating systems (such as UNIX, Linux or Microsoft Windows, etc.), configuration data and any other data, such as information stored in a database, files, etc. This way, the other switches 103 and blades 105 do not need storage devices to be directly associated therewith. According to the described embodiment, executables running on switches 103 and blades 105, including, for example, operating systems, scripts and applications, can be stored on the controller's associated storage device 202. By mounting the controller's associated storage device 202 on a blade, the storage device 202 becomes accessible to the blade, and therefore the blade can run an executable stored thereon. It should be noted that the term “executable” embraces binary code, scripts, Java applets or other software programs that can operate on a computer.
It can be realized, therefore, that according to the described embodiment, where blades can run (i.e., execute) executables stored on the controller's associated storage device 202, installation of executables can take place on the controller's associated storage device, where data (the executables and their respective data) is stored, instead of being installed on a blade's local storage devices. An embodiment of installing executables on a controller's associated storage device is described below, with reference to
Furthermore, with executables (including operating systems) stored on the storage device, the controller can, besides providing storage for the blades, also provide boot and set-up services therefor.
When swapping a blade into the blade server, or when re-starting a blade accessible to a blade server, an operating system, or at least a kernel thereof is required in order to boot the blade. When the blades 105 have no local storage devices, or in those cases when no operating systems are loaded to their local storage devices, a mechanism is required to enable boot, startup or basic configuration thereof, referred to hereinafter as “pre-loading procedure”.
According to one embodiment of the invention, in order to be able to boot the blade 105 and perform basic configuration on it, a Preboot Execution Environment (PXE) should be pre-installed, for example, on the blade's ROM chip or on the boot sector of a blade's dedicated storage device, if one exists. PXE provides a Dynamic Host Configuration Protocol (DHCP) client, which allows the blade 105 to receive an IP address in order to gain access to, and be accessible by, the controller 201. PXE also provides the blade's Basic Input/Output System (BIOS) with a set of Application Program Interfaces (APIs), used to automate the booting of the operating system and other configuration steps. When the blade's power supply is turned on, the blade uses DHCP to receive an IP address from the controller 201, which operates as a DHCP server. The blade 105 also notifies the controller 201 that it is booting, and receives a pointer to a file (such as a file name) that can be used to download the kernel from the controller's associated storage device to the blade's memory. The blade 105 then downloads the file using, for example, Trivial File Transfer Protocol (TFTP) and executes it, thereby loading the operating system's kernel into the blade's memory. An agent to run on the blade can also be included with the kernel, where the agent is responsible, among other things, for allowing the controller 201 to monitor the blade's status. Another exemplary responsibility of the agent is to provide networking services to the blade 105 on which it is running, as explained below.
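By way of non-limiting illustration only, the following minimal Python sketch simulates the pre-loading handshake described above. The names used (Controller, Blade, dhcp_offer, tftp_read and the like) are hypothetical stand-ins for the DHCP and TFTP exchanges; an actual deployment would rely on real DHCP and TFTP servers rather than on these simplified objects.

    # Hypothetical simulation of the pre-loading procedure (PXE-style boot).
    # All names and the address pool are illustrative assumptions.
    class Controller:
        def __init__(self):
            self.next_host = 10        # simplistic pool: 10.0.0.10, 10.0.0.11, ...
            self.boot_files = {}       # blade MAC address -> kernel file name

        def dhcp_offer(self, mac):
            """Acting as a DHCP server: lease an IP and register the blade."""
            ip = "10.0.0.%d" % self.next_host
            self.next_host += 1
            self.boot_files[mac] = "vmlinuz-with-agent"   # kernel + agent image
            return ip

        def boot_pointer(self, mac):
            """Return the file name the blade should fetch over TFTP."""
            return self.boot_files[mac]

        def tftp_read(self, file_name):
            """Stand-in for a TFTP read request; returns the kernel bytes."""
            return b"<kernel and agent image: %s>" % file_name.encode()

    class Blade:
        def __init__(self, mac):
            self.mac, self.ip, self.memory = mac, None, None

        def pxe_boot(self, controller):
            self.ip = controller.dhcp_offer(self.mac)   # 1. receive an IP address
            name = controller.boot_pointer(self.mac)    # 2. pointer to the kernel file
            self.memory = controller.tftp_read(name)    # 3. download kernel to memory
            return self.memory is not None              # kernel loaded: pre-loaded blade

    blade = Blade("00:0c:29:aa:bb:cc")
    assert blade.pxe_boot(Controller())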
After the pre-loading procedures, i.e., after the kernel and the agent are loaded to the blade 105, the blade is operating and ready for running at least one executable. Such a blade is referred to as a pre-loaded blade. According to the disclosed embodiment, executables are stored on the controller's associated storage device, and therefore the controller 201 can load at least one executable on to the blade 105 in order for it to execute (or run) thereon. As was previously mentioned, loading at least one executable can be done by mounting the controller's associated storage device 202, or a partition thereof, on the blade 105. Those versed in the art will appreciate that the controller 201 can identify blades that have passed at least the pre-loading procedures using the bus 118 or the computer network 119. For example, the agent operating on a swapped-in pre-loaded blade can convey, at a predetermined rate, a data packet indicative of its status. These packets are considered the blade's heartbeat. By detecting the heartbeat, the controller can monitor the status of the blade, and more specifically, the controller can detect that the blade is swapped in and operating.
Those versed in the art will appreciate that the blade's heartbeat can be used by the controller 201 also after loading the executable(s) to it, in order to monitor the status of the blade 105 and verify that the blade is operating. Blades whose heartbeat is monitored by the controller are referred to, hereinafter, as “operating blades”. In the same way, switches can also have a heartbeat, thereby enabling the controller to monitor their status. Switches whose heartbeat is monitored by the controller are referred to, hereinafter, as “operating switches”. However, in many aspects there are similarities in handling and monitoring operating switches and operating blades, and therefore, unless specifically noted, the term “operating blades” will also refer to operating switches. Likewise, unless specifically noted, the term “blade heartbeat” also denotes a switch's heartbeat, and “monitoring a blade's heartbeat” applies also to monitoring the heartbeat of a switch, etc.
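The heartbeat mechanism lends itself to a short sketch. In the hypothetical monitor below, the controller records the time at which the last status packet arrived from each blade or switch, and considers a unit operating only while a heartbeat has been seen within a timeout; the three-interval timeout and all names are assumptions made for illustration only.

    import time

    # Hypothetical heartbeat monitor; the 3x-interval timeout is an assumption.
    class HeartbeatMonitor:
        def __init__(self, interval_s=1.0):
            self.interval_s = interval_s
            self.last_seen = {}        # blade/switch id -> last heartbeat time

        def on_heartbeat(self, unit_id):
            """Called whenever a status packet (heartbeat) arrives from a unit."""
            self.last_seen[unit_id] = time.monotonic()

        def operating_units(self):
            """Units whose heartbeat was seen within three intervals are operating."""
            now = time.monotonic()
            return {u for u, t in self.last_seen.items()
                    if now - t <= 3 * self.interval_s}

    monitor = HeartbeatMonitor()
    monitor.on_heartbeat("blade-105")
    assert "blade-105" in monitor.operating_units()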
In order to load on to a blade 105 an executable stored on the storage device 202, the controller 201 has to select an available blade from among the blades accessible to the blade server, i.e., a blade that has enough resources to run the executable, as will be explained below.
Those versed in the art will readily appreciate that the flow chart of
One of the operations performed in
In order to determine whether a blade 105 is an available blade for loading an executable, the controller should have access to information about the resources required by the executable (401), referred to, hereinafter, as the executable's “required resources”. According to one embodiment of the invention, when installing an executable on the controller's associated storage device, it is possible to configure the required resources of the executable, storing this information, for example, on the controller's storage device.
In order to determine whether a blade 105 is an available blade, the controller also needs to find out at 402 what the blade's available resources are. It should be noted that a blade's “available resources” are not necessarily the resources available at the time when the controller makes this determination. Thus, there may be occasions when an executable requires a certain amount of resources, although there are times when it can use fewer resources. The available resources are therefore the blade's “intrinsic resources” (i.e., the resources characteristic of the blade 105 before having any executable or operating system loaded on it, that is, when it was an unloaded blade) less the required resources of the operating system and the executables that were pre-loaded (the agent, for example, is considered here as an executable), i.e., less the pre-load required resources. However, it is also possible that other executables are already running on the blade. Therefore, in order to find out the available resources at 402, the controller also has to subtract the required resources of executables that are already loaded on to the blade.
If (at 403) the available resources are less than the required resources, then the blade is considered unavailable for loading the executable. However, if at 403 the available resources are found to be substantially equal to or more than the executable's required resources, the blade is considered an available blade. This, however, is non-limiting, and other embodiments may require that the available resources be larger than the executable's required resources in order to establish a blade as an available blade.
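The determination of 401-403 amounts to simple resource arithmetic. The sketch below is a non-limiting illustration with hypothetical field names, in which resources are reduced to a single number (for example, megabytes of memory); it computes a blade's available resources as its intrinsic resources less the pre-load required resources and the required resources of executables already loaded:

    # Hypothetical resource check; resources are modeled as one number
    # (e.g. megabytes of memory) purely for illustration.
    def available_resources(intrinsic, preload_required, loaded_required):
        """Intrinsic resources less pre-load and already-loaded requirements."""
        return intrinsic - preload_required - sum(loaded_required)

    def is_available_blade(blade, executable_required):
        avail = available_resources(blade["intrinsic"],
                                    blade["preload"],
                                    blade["loaded"])
        return avail >= executable_required  # equal-or-more, per this embodiment

    blade = {"intrinsic": 4096, "preload": 512, "loaded": [1024]}
    assert is_available_blade(blade, 2048)       # 2560 available: blade is available
    assert not is_available_blade(blade, 3072)   # insufficient: blade is unavailable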
Those versed in the art will readily appreciate that the flow chart of
The following simple example demonstrates loading and running three executables (referred to as Ea, Eb and Ec) on a blade server in association with only two blades that are available to run executables (referred to as Ba and Bb). In a priority list, “available blade” is the lowest priority, Ea is the second lowest priority, Eb is the second highest, and Ec is the highest priority. The required resources of the three applications allow them to run on each of the two blades Ba and Bb, but neither blade Ba nor Bb has enough resources to run more than one executable in parallel. First, according to the example, the controller tries to load an instance of Ea. As “available blade” has the lowest priority in the priority list and as Ba is found to have enough available resources, the controller loads Ea to Ba. Afterwards the controller re-starts Eb. Again, “available blade” is the lowest priority and Bb is available; therefore the controller loads Eb to Bb. Now the controller tries to re-start Ec. The controller cannot find an available blade and therefore checks the second lowest priority in the priority list, which is Ea. In this case, the controller would terminate Ea and load Ec to Ba instead, for example, by sending a terminate signal to Ea or by re-starting the blade. That is, by having a higher priority, Ec is determined to be “more important” than Ea, and therefore if it is impossible to run both at the same time, the controller prefers Ec to Ea.
It should be noted that this example is non-limiting. Blade servers can be in association with more than two blades, and they can load more or fewer than three executables. In addition, an opposite policy can be used when handling the priority list, such that the highest priority is considered first, then the second highest priority, etc.
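The Ea/Eb/Ec example can be restated as a short, purely illustrative scheduling sketch: the function below first tries to place an executable on a free blade, and failing that preempts the lowest-priority running executable, provided the new executable outranks it. The data layout and names are hypothetical.

    # Hypothetical priority-based placement; a higher number means a higher priority.
    def place(executable, priority, blades, running, priorities):
        """Load `executable` on a free blade, else preempt a lower-priority one."""
        for b in blades:
            if b not in running:                   # the "available blade" case
                running[b] = executable
                return b
        # No free blade: find the lowest-priority running executable and
        # preempt it only if the new executable has a higher priority.
        victim = min(running, key=lambda b: priorities[running[b]])
        if priorities[running[victim]] < priority:
            running[victim] = executable           # terminate the victim, load the new one
            return victim
        return None                                # cannot place the executable

    priorities = {"Ea": 1, "Eb": 2, "Ec": 3}       # Ec has the highest priority
    running, blades = {}, ["Ba", "Bb"]
    place("Ea", 1, blades, running, priorities)    # loaded to Ba
    place("Eb", 2, blades, running, priorities)    # loaded to Bb
    assert place("Ec", 3, blades, running, priorities) == "Ba"  # Ea is preempted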
Furthermore, executables running on blade servers often require access to computer communication networks, such as Local Area Networks (LANs). It was previously described (with reference to
It is to be noted that having two NICs for providing network fault tolerance is non-limiting, and it is possible to have a different number of NICs for providing network fault tolerance, as required and appropriate for the case.
Many executables exist that require access to a plurality of LANs. A common, non-limiting example is a firewall. This is achieved in accordance with an embodiment of the invention by an agent that runs on the blade and provides access to multiple virtual bridged LANs.
Those versed in the art will appreciate that the agent can operate, for example, in accordance with IEEE Standard 802.1Q (IEEE Standards for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks, Approved 8 Dec. 1998). The standard describes, amongst other things, Media Access Control (MAC) bridge management and MAC bridges. That is, the agent, operating in accordance with IEEE 802.1Q, can emulate the existence of several NICs although only one NIC is actually in use.
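For concreteness, a minimal sketch of 802.1Q tagging follows: a 4-byte tag, consisting of the tag protocol identifier 0x8100 followed by the tag control information carrying the 12-bit VLAN identifier, is inserted after the destination and source MAC addresses of an Ethernet frame. The helper names are hypothetical; actual bridges and agents perform this in the kernel or in hardware.

    import struct

    TPID = 0x8100   # 802.1Q tag protocol identifier

    def add_vlan_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
        """Insert an 802.1Q tag after the 12 bytes of destination+source MACs."""
        tci = (pcp << 13) | (vlan_id & 0x0FFF)     # priority bits + VLAN id
        return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

    def vlan_of(frame: bytes):
        """Return the VLAN id of a tagged frame, or None if the frame is untagged."""
        (tpid,) = struct.unpack("!H", frame[12:14])
        if tpid != TPID:
            return None
        (tci,) = struct.unpack("!H", frame[14:16])
        return tci & 0x0FFF

    raw = bytes(12) + b"\x08\x00" + b"payload"     # dst MAC, src MAC, EtherType, data
    assert vlan_of(add_vlan_tag(raw, vlan_id=504)) == 504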
In
The NIC 501 is associated with a trunk 511. The trunk 511 is also associated with a blade 105, associated with a NIC 512. By such means, the switch 103 and the blade 105 are mutually accessible through NICs 501 and 512, and via the trunk 511.
As mentioned before in connection with the pre-loading procedures, an agent 513 runs on the blade 105 and is coupled to the NIC 512. The agent 513 operates as a switch configured to provide multiple (four, according to this example) virtual bridged LANs through the NIC 512. In
In the figure, the virtual NICs 504 and 504′ together give rise to a virtual bridged LAN. The virtual NICs 505 and 505′ give rise to a second virtual bridged LAN, 506 and 506′ to a third etc.
As was mentioned before, with reference to
The configuration data can be stored in a storage accessible by the controller, such as the controller's associated storage device 202. The configuration data can include data such as identification of the switch's NIC (such as NIC 501 in
As was previously explained, the controller also runs the agent 513 on a blade, therefore the controller 201 can configure the agent 513 to provide access to at least one virtual bridged LAN, corresponding to the accessible virtual bridged LANs configured on the switch 103.
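The configuration data itself can be pictured as a small record per switch NIC. The structure below is a hypothetical, non-limiting illustration of the kind of data the controller might keep on its associated storage device 202 and use to configure the agent 513 to match the switch:

    from dataclasses import dataclass, field

    # Hypothetical layout of per-NIC switch configuration data.
    @dataclass
    class SwitchNicConfig:
        switch_id: str                                  # e.g. "switch-103"
        nic_id: str                                     # e.g. "NIC-501"
        vlan_ids: list = field(default_factory=list)    # accessible virtual bridged LANs

    # The controller can hand the same VLAN list to the agent on the blade,
    # so that the agent's virtual NICs correspond to the switch configuration.
    config = SwitchNicConfig("switch-103", "NIC-501", vlan_ids=[504, 505, 506, 507])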
Before turning to an embodiment of the invention showing principal operations carried out by an agent for providing multiple virtual bridged LANs access to at least one application running on a blade, it should be remembered that in the exemplary embodiment of
As explained above, the agent 513 detects and decodes the network packets received on the NIC 512 in order to route them to the appropriate virtual bridged NICs. It should be appreciated that, in parallel to routing the packets, the agent can also monitor networking traffic on the NIC 512, i.e., traffic to and from the blade. The agent can also tap communication to and from the blade, and provide the information, or part thereof, to any other application on the same blade or on a different, accessible blade.
As mentioned above, the agent, when running on a blade, can also provide network fault tolerance to at least one application running on the blade. Together with monitoring the NIC 512 (802), the agent observes idle durations (806) of the NIC: if at 806 the agent finds that the NIC is idle for a duration substantially longer than a “predefined idle duration”, i.e., no network packets (no traffic) are detected during the predefined idle duration or longer, the agent 513 suspects a network fault. One way to provide network fault tolerance is by migrating (807) to the redundant NIC 514 associated with the redundant switch 104, which is also accessible to the blade server. If the network fault occurred in the switch 103, in the trunk 511 or in the NIC 512, migrating to the redundant NIC 514, and therefore also to the redundant switch 104, would bypass the switch 103 so as to provide access to the blade via the virtual bridged LANs.
After migrating to the redundant NIC and switch, the agent communicates with the controller at 808, conveying an indication of the migration to the controller.
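The agent-side detection of 806-808 reduces to tracking the time since the last packet was observed on the active NIC. A minimal sketch, assuming a hypothetical idle limit and a callback for notifying the controller:

    import time

    # Hypothetical agent-side network fault detection and NIC migration.
    class AgentFailover:
        def __init__(self, primary_nic, redundant_nic, idle_limit_s=5.0):
            self.active, self.standby = primary_nic, redundant_nic
            self.idle_limit_s = idle_limit_s           # the "predefined idle duration"
            self.last_packet = time.monotonic()

        def on_packet(self):
            """Called for every packet observed on the active NIC (802)."""
            self.last_packet = time.monotonic()

        def check_idle(self, notify_controller):
            """806-808: migrate to the redundant NIC if idle for too long."""
            if time.monotonic() - self.last_packet > self.idle_limit_s:
                self.active, self.standby = self.standby, self.active         # 807
                notify_controller({"event": "migrated", "nic": self.active})  # 808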
Those versed in the art will appreciate that migrating can be done locally by the agent (wherein the agent is coupled to the NIC, as described with reference to
The description turns now to an exemplary embodiment for providing network fault tolerance for a blade server.
When the controller receives data indicative of migration from a blade (901), for example, data indicating that an agent running on the blade has migrated to a redundant NIC and switch, the controller checks the heartbeat (i.e., the operating status) of the switch 103 (902). Upon detecting at 903 that the switch 103 is idle (i.e., not operating) for at least a predetermined switch idle duration, the controller bypasses a connection between the switch and the blade (904). A bypass can be achieved by turning the switch off, for example by sending a termination signal over the bus, turning the switch off or rebooting it thereby. The controller can also alert a fault in the switch (905).
However, if the controller finds (at 903) that the switch is operating, it deduces that the fault occurred in the NIC 512 (coupled to blade 105) or in the trunk 511. In this case, according to one embodiment, the controller turns the blade 105 off and reloads instances of the executables that previously operated on the blade on to a different available blade (906). Then the controller can alert a fault in the blade (907).
However, according to the embodiment described above, services provided by the blade 105 are characterized by downtime: the time required to turn the blade 105 off and to load instances of the executables that previously ran on it on a different available blade. Yet another embodiment (not shown) can reduce the downtime during which the at least one executable is not operating, by identifying an available blade before turning off blade 105. One should recall that blade 105 is operating and communicating via the redundant NIC 514 and switch 104. Therefore, the controller can locate an available blade to run the executables before it turns blade 105 off, reducing the downtime thereby.
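The controller-side decision of 901-907, including the reduced-downtime variant that locates an available blade before turning blade 105 off, can be sketched as follows; the helper callables standing in for bus, heartbeat and alert operations are hypothetical.

    # Hypothetical controller reaction to a migration indication (901-907).
    def on_migration(blade, switch, find_available_blade, helpers):
        if not helpers["switch_heartbeat"](switch):       # 902-903: is the switch idle?
            helpers["turn_off_switch"](switch)            # 904: bypass the switch
            helpers["alert"]("fault in switch %s" % switch)   # 905
            return
        # The switch operates, so the fault is in the blade's NIC or trunk (906-907).
        target = find_available_blade()                   # locate the target blade first,
        helpers["turn_off_blade"](blade)                  # then power the faulty blade down,
        helpers["reload_executables"](blade, target)      # reducing the downtime thereby
        helpers["alert"]("fault in blade %s" % blade)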
As can be realized from the description above, the controller should configure the agent to provide the required network configuration in order to be able to run an application on a blade. After loading the kernel and the agent on to the blade (and more accurately, on to the blade's memory) and after configuring the agent, the blade is considered as a pre-loaded blade, where the kernel and the agent consume part of the blade's intrinsic resources, therefore leaving available resources which are smaller than the blade's intrinsic resources. The available resources can be utilized for loading at least one executable.
It should further be noted that while loading an executable on to a blade, the executable (including binary code, scripts, etc.) and respective data (constituting together an instance) are usually copied to the blade's memory. When the executable is operating, the data sometimes changes to reflect the modifying states of the executable. More specifically, when an executable is operating, data such as configuration data, information stored in databases, files, or sometimes even the executable itself might change. If the computer is turned off and then turned on, for example, it is sometimes preferred that the executable start from the state that characterized it when the computer was turned off, and not from the state that characterized it immediately after loading; this is referred to as recovery. Alternatively, instead of recovery, it is sometimes preferred to run the executable in the state that characterized it after the initial loading, at some time point in the past, or before the occurrence of the last changes. This is required, for example, when it is suspected that the changes caused the executable's failure. Loading an instance of the executable representative of the executable's state at some point in the past is referred to below as rollback. A recovery policy, to be explained below, can be used to define what instance should be loaded in the different situations requiring the controller to re-load an executable.
When installing an application on the controller's storage device, an image of the executable, referred to as a “snapshot”, can be stored on the controller's storage device. The snapshot associates data such as an operating system and/or kernel, the executable code (such as binary code, a script or any other form of executable) and other data such as configuration data (including the agent's network configuration), files, data stored in databases, etc., all referred to as snapshot data. Those versed in the art will appreciate that, being in association with the snapshot data, a snapshot can include data, it can point (by reference) to data stored, for example, on the controller's associated storage device, or a combination thereof. After the creation of a snapshot, the snapshot reflects the image of the executable as it was at the time of saving, before undergoing further changes.
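Since a snapshot, as described, can embed data, point to it by reference, or combine the two, its structure can be illustrated by the following hypothetical record; the field names are assumptions made for the sake of the example.

    from dataclasses import dataclass, field

    # Hypothetical snapshot record: fields may hold data directly or refer to
    # it by path on the controller's associated storage device.
    @dataclass
    class Snapshot:
        kernel_ref: str                                  # operating system / kernel
        executable_ref: str                              # binary code, script, applet ...
        config: dict = field(default_factory=dict)       # incl. agent network config
        data_refs: list = field(default_factory=list)    # files, database data ...

    initial = Snapshot("/images/linux-kernel", "/apps/firewall/fw.bin",
                       config={"vlans": [504, 505]})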
It was previously mentioned that sometimes it is desirable to provide recovery and/or rollback capabilities when restarting an application. Rollback can go as far back as the point in the past when the executable was installed (before loading it on to a blade for the first time). Therefore, before loading the executable for the first time, the controller can store an initial snapshot of the executable. The controller can also store intermediate snapshots of an executable, being images of the executable each saved at a certain time-point in the past, after loading the executable for the first time. Storing a set of intermediate snapshots at different time points while an executable is operating provides an evolution of the executable, since the intermediate snapshots reflect the changes to the executable.
A snapshot from which an instance is instantiated is referred to, hereinafter, as a “running snapshot”. It will be realized that when loading an instance on to an available blade, this instance can undergo changes, such as in state, configuration or even in the executable code, etc. Normally, when an executable undergoes changes, the changes are reflected on the storage device from which this executable was started, or on other associated storage devices. Likewise, according to the embodiment, the changes are reflected by the running snapshot, which changes whenever the instance changes.
A running snapshot can be generated from any other snapshot (referenced hereinafter as a “source snapshot”) for example by copying the source snapshot. In addition, it is possible to generate intermediate snapshots from the running snapshot at different time points, for example by copying it.
It should be noted that sometimes more than one instance of an executable can run at the same time on a blade server. It will readily be appreciated that if there are at least two instances of the same executable, the instances can start from a similar intermediate snapshot, but undergo different, independent changes, giving rise to further dissimilar running snapshots. These different running snapshots can then be used to generate different intermediate snapshots of the same executable.
It was previously mentioned that a running snapshot of an executable reflects the current state of the executable and that it is possible to generate intermediate snapshots from the running snapshot.
In order to understand the snapshot generation and storage process, one should recall that a running snapshot can be generated from any source snapshot, or in other words, that the running snapshot is associated with the source snapshot. Those versed in the art can appreciate that “associated with” can mean that the running snapshot is a copy of the source snapshot. However, this is non-limiting, and according to other embodiments the source snapshot itself can be used as the running snapshot. Any other suitable form of association is also applicable.
It is also possible to keep a pointer (such as a file name) to the source snapshot, to serve as a reference snapshot (1002). In order to load an instance of the running snapshot to an available blade (1003), instantiation is made (for example, by mounting the partition on the blade and starting operation of the executable, as explained above).
By comparing the reference snapshot to the running snapshot (1004) on the first cycle after loading, the controller compares the source snapshot to the running snapshot. If the running snapshot has undergone changes, the two snapshots will be different. Therefore, if (at 1004) the two snapshots are found to be different, an intermediate snapshot is generated from the running snapshot (1005), and the reference snapshot is changed to point to this intermediate snapshot (1006). Those versed in the art can appreciate that generating an intermediate snapshot from a running snapshot can be done, for example, by copying the running snapshot, wherein the copy is the intermediate snapshot. Therefore, on the following cycles, when comparing the reference snapshot to the running snapshot (1004), the controller will compare the last generated intermediate snapshot to the running snapshot, detecting changes to the running snapshot and generating intermediate snapshots when changes are detected.
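The cyclic comparison of 1004-1006 can be expressed compactly. In the sketch below, snapshots are modeled as directory trees compared by content hash, which is merely one possible way, assumed here for illustration, of detecting that a running snapshot has changed:

    import hashlib, pathlib, shutil

    def digest(snapshot_dir):
        """Content hash of a snapshot directory (one way to compare snapshots)."""
        h = hashlib.sha256()
        for p in sorted(pathlib.Path(snapshot_dir).rglob("*")):
            if p.is_file():
                h.update(p.name.encode() + p.read_bytes())
        return h.hexdigest()

    def snapshot_cycle(running_dir, reference, repository_dir, counter):
        """1004-1006: if the running snapshot changed, store an intermediate one."""
        if digest(running_dir) != reference["digest"]:
            dest = "%s/intermediate-%d" % (repository_dir, counter)
            shutil.copytree(running_dir, dest)                # 1005: generate intermediate
            reference.update(path=dest, digest=digest(dest))  # 1006: re-point reference
            return dest
        return None                                           # no change detected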
Those versed in the art can also appreciate that storing snapshots can also be done using an event-triggered mechanism. For example, the controller can use interrupts signaling modifications to files or disk partitions in order to detect changes to the instance referenced by the running snapshot. For example, in the UNIX operating system, whenever such changes occur, signals are raised, and the controller can use them in order to generate an intermediate snapshot by copying the running snapshot.
The controller can maintain a repository adapted to store an initial snapshot and one or more intermediate snapshots, referred to hereinafter as a “repository of snapshots”, where the snapshots reflect the changes that occurred in the past in instances of the application, thereby allowing rollback.
It was previously mentioned that recovery policies can be used to select an intermediate snapshot that should serve as a source snapshot for generating a running snapshot. That way, the recovery policy can determine, for example, that on a re-load after normal termination the controller should perform normal recovery, i.e., it should load the most recent snapshot stored in the repository of snapshots; but in any case of failure, the policy can determine that the controller should select at least one snapshot older than the most recent snapshot. It should be noted that this example is non-limiting and any other policy can be used whenever required and appropriate.
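A recovery policy can thus be a simple selection rule over the repository of snapshots. The sketch below encodes the example policy just described (the most recent snapshot on normal termination, an older one after a failure); the repository layout and names are hypothetical.

    # Hypothetical recovery policy; `repository` is a list of snapshot
    # identifiers ordered from the oldest (initial) to the most recent.
    def select_source_snapshot(repository, reason):
        if reason == "normal-termination":
            return repository[-1]          # normal recovery: most recent snapshot
        if reason == "failure" and len(repository) > 1:
            return repository[-2]          # rollback: skip the suspect snapshot
        return repository[0]               # fall back to the initial snapshot

    repo = ["initial", "intermediate-1", "intermediate-2"]
    assert select_source_snapshot(repo, "normal-termination") == "intermediate-2"
    assert select_source_snapshot(repo, "failure") == "intermediate-1"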
It is possible to provide a management utility that provides the ability to delete old intermediate snapshots, to store them on external storage devices such as tapes, or to perform any other management activity, as can be appreciated by a person skilled in the art. The management utility can operate on a cyclic basis, performing its tasks once in a certain time interval, it can also be event-triggered (for example, started when a certain predetermined percentage of the storage device's capacity is consumed) or it can be operated by a system operator.
It should be noted that storing and/or deleting snapshots from the repository of snapshots can be affected by the recovery policy of an application. For example, if the recovery policy determines that it is always the most recent intermediate snapshot that is used for recovery, and the controller should never perform roll-back to older intermediate snapshots, the management utility can delete intermediate snapshots, leaving only the latest one in the repository of snapshots, saving storage space thereby. In other embodiments the controller can ignore recovery policies when managing the repository of snapshots.
After having described how intermediate snapshots can provide the options of recovery and rollback, there will now be described with reference to
After selecting a source snapshot and accessing this selected source snapshot, the controller generates a running snapshot associated with the source snapshot (1106).
The controller also selects an available blade (1107) from among the operating blades accessible to the blade server. This can be done, for example in accordance with the operations described with reference to
It will be appreciated, in light of the above description, that when restarting an application (for providing recovery or rollback) the controller can load it on any available blade, and not necessarily on the same blade where it was previously loaded. That is, if, for example, a blade stops operating because of some fault, when the controller detects that the application (or blade) is not operating, it can reload the application to a different blade and restart it thereon, providing fault tolerance. In the same way, if the controller monitors the resources available on a blade running instances of at least one application, when the controller notices that the blade's resources (for example, memory) are about to be exhausted, it can load new instances of the application(s) to other available blades. The new instances can be generated either from the initial snapshot or from any intermediate or running snapshots of the running instances. By running multiple instances of the same application at the same time, the controller can provide load balancing.
It was previously mentioned, with reference to
After identifying the required operating system and verifying that the required operating system is supported by the controller, the controller provides an image (1303) with which the required operating system and/or kernel is associated. The agent is also associated with the image. It should be noted that a kernel or an operating system, in association with its configuration data (also including, for example, script files) and in association with a networking configuration, can form a snapshot that can be loaded to a blade.
During installation (1304), the executable and the configuration data (if such data exists, that is, if the executable is configured at all at 1305) are also stored in association with the image to form an initial snapshot. The initial snapshot can also be in association with a fault repository listing faults and the actions required of the controller, in association with a list of relative executables' priorities (i.e., the priority list), used by the controller if a re-start on a different blade is required (for fail-over recovery or for load balancing), and in association with a recovery policy, etc. When all of these (the configuration data, the fault repository, the executables' priority list, the recovery policy or any other data) are updated (1306, 1307) for the executable being installed, the controller can store the initial snapshot, terminating the installation thereby. Later, this initial snapshot can be used to create running snapshots, as was previously explained above. That is, steps 1303-1308 can be considered together as storing the initial snapshot (1309).
It should be noted that this embodiment and flow chart are non-limiting. One or more of the steps can be absent, other steps can be added, and their order can change, as appropriate to the case. For example, a snapshot need not necessarily include the operating system and an agent. In such an embodiment “storing an initial snapshot” (1309) can include only steps 1304-1308.
Reverting back again to
Before describing
When a change occurs in the active model (a network configuration change, data being stored in the controller's storage device or any other change that is reflected in the active model), the master controller can perform a two-phase commit in order to certify that the slave controllers will also reflect the change. When a change occurs, the master controller notifies all the slave controllers (1401, 1402) about the change. The master controller then waits for the slave controllers to confirm the change (1403). For example, if the change is data that should be stored on the controller's storage device, a slave controller can confirm the change after storing the data in its own storage device, certifying the successful storage thereby. If no confirmation arrives within a certain predefined time-out (1404), the change fails (according to this example, when the change is data to be stored, the master controller can fail to store the data). However, if the slaves' confirmations arrive on time, the master controller performs the change (1405) by storing the data in its storage device.
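The synchronization of 1401-1405 is in essence a two-phase commit. A minimal sketch, assuming hypothetical slave objects that expose a confirm call (standing in for, e.g., a slave storing the data on its own storage device):

    # Hypothetical two-phase commit between a master and its slave controllers.
    def two_phase_commit(change, slaves, apply_change, timeout_s=5.0):
        """1401-1405: notify the slaves, await confirmations, then apply."""
        for slave in slaves:                            # 1401-1402: notify the change
            if not slave.confirm(change, timeout_s):    # 1403-1404: refusal or time-out
                return False                            # the change fails
        apply_change(change)                            # 1405: master performs the change
        return True

    class StubSlave:
        def confirm(self, change, timeout_s):
            return True                                 # stand-in: slave stored the change

    assert two_phase_commit({"key": "value"}, [StubSlave()], lambda change: None)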
Understanding the invention as disclosed above, those versed in the art can appreciate that by having a first controller (master or slave) in association with a storage device where an active model exists, it is possible to duplicate the first blade server by swapping the first controller and its associated storage device into another blade server. On start-up, the first controller creates another, redundant controller in association with the other blade server, wherein the redundant controller synchronizes with the first controller and becomes identical thereto. Then one of the controllers becomes a master controller and loads the active model onto the other blade server. Because the two controllers are identical, and are the same as those on the first blade server, the other blade server becomes a duplicate of the first blade server.
It is noted, however, that after loading the active model to the other blade server, the active model (i.e., the images, the network configuration, etc.) on the other blade server (and therefore on the first and other controllers) may change and diverge from that of the first blade server.
The switch configuration apparatus 1501 includes a configuration data access unit 1502 and a switch configuration unit 1503. The configuration data access unit can access configuration data stored on an accessible storage device (as is shown at 601 in
The storage processor 1605 stores intermediate snapshots in a repository of snapshots, adapted to store one or more intermediate snapshots. The instance generator 1606 instantiates initial snapshots or intermediate snapshots, loading them to an available blade thereby. One exemplary way to instantiate and load an instance to an available blade is described with reference to
The network failure protection unit 1904 is coupled to an agent 1905, which is adapted to convey indications to a controller coupled to the same blade server when the network failure protection unit migrates to a redundant NIC. It will be appreciated that the agent 1905 can be included in the blade access apparatus 1901, the blade access apparatus 1901 can be included in the agent 1905, or they can be separate units coupled by any known means, such as pipes, network connections or others, as illustrated in
The migration detector 2002 receives migration indications from blades, indicating that agents loaded to the blades have migrated to redundant NICs. The switch status detection unit 2003 checks the status of switches having access to blades accessible to the blade server (and mainly of switches having access to the migrating blades), for example by detecting their heartbeat.
The bypass generator 2004 bypasses a connection between the switch and the blade having access to it. A bypass is generated, for example, when the switch status detection unit 2003 detects that a switch is not operating. The switch fault alerts generator 2005 alerts that one or more faults occurred in switches, for example when the switch status detection unit 2003 detects faults in the switches' operation.
However, if the network fault tolerance apparatus 2001 detects no faults in the switches, it is most probable that the migration detected by the migration detector 2002 was caused by faults in the trunks or in the blades. Therefore, the instance fault tolerance unit 2006 can load at least one instance on to a different blade accessible to the blade server. The blade fault alerts generator 2007 alerts that one or more faults occurred in blades.
It will also be understood that the apparatus according to the invention may be a suitably programmed computer. Likewise, the invention contemplates a computer program being readable by a computer for executing the method of the invention. The invention further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the method of the invention.