The present application incorporates by reference for all purposes the entire contents of U.S. Non-Provisional Ser. No. 12/842,945, titled PERSISTING DATA ACROSS WARM BOOTS, filed Jul. 23, 2010.
The present application herein incorporates by reference for all purposes the entire contents of the following U.S. patents, all assigned to Brocade Communications Systems, Inc.:
(1) U.S. Pat. No. 7,188,237 B2 titled “Reboot Manager Usable to Change Firmware in a High Availability Single Processor System”;
(2) U.S. Pat. No. 7,194,652 B2 titled “High Availability Synchronization Architecture”; and
(3) U.S. Pat. No. 7,284,236 B2 titled “Mechanism to Change Firmware in High Availability Single-Processor System”.
The present disclosure relates to processing systems and more particularly to techniques for providing enhanced availability in single processor-based systems.
Achieving high availability is an important design goal for any network architecture, and several networking technologies have been developed to achieve it. Existing technologies facilitate high availability by providing redundant network devices or by providing multiple physical processors. For example, according to one architecture, redundant network devices are provided for forwarding data, with one network device operating in active mode and the other operating in standby (or passive) mode. In this active-standby model, the active network device performs the data forwarding-related functions while the redundant second network device operates in standby mode. Upon a failover, which may occur, for example, due to an error on the active device, the standby device becomes the active device and takes over data forwarding functionality from the previously active device. The previously active device may then operate in standby mode. The active-standby model using two network devices thus strives to reduce interruptions in data forwarding.
Some network devices comprise multiple physical processors. For example, a network device may comprise two management cards, each having its own physical processor. One management card may be configured to operate in active mode while the other operates in standby mode. The active management card performs the data forwarding-related functions while the redundant second management card operates in standby mode. Upon a failover, the standby management card becomes the active card and takes over data forwarding-related functionality from the previously active management card. The previously active management card may then operate in standby mode. The active-standby model is typically used to enable various networking technologies such as graceful restart, non-stop routing (NSR), and the like.
As described above, conventional networks facilitate high availability by providing redundant network devices or multiple physical processors. However, providing this redundancy increases the expense of the network or network device. Further, there are systems (including several network devices), and subsystems of systems, that comprise only a single physical processor. Such systems and subsystems cannot provide an active-standby capability. For example, line cards in a network device do not comprise redundant physical processors that could enable an active-standby model of operation. As another example, several network devices comprise only a single management card with a single physical CPU and thus do not support an active-standby model.
Embodiments of the present invention provide techniques for achieving high-availability using a single processor (CPU). In one embodiment, in a system comprising a single multi-core CPU, at least two partitions may be configured with each partition being allocated one or more cores of the multiple cores. Each partition may be configured to operate as a virtual machine. The partitions may be configured such that one partition operates in active mode while another partition operates in standby mode. In this manner, a single processor is able to provide active-standby functionality, thereby enhancing the availability of the system comprising the processor.
According to an embodiment of the present invention, techniques are provided in a system comprising a multi-core processor to support an active mode and a standby mode of operation. The plurality of cores provided by the processor may be partitioned into at least a first partition and a second partition, wherein a first set of cores from the plurality of cores is allocated to the first partition and a second set of cores from the plurality of cores is allocated to the second partition. The first set of cores may be different from the second set of cores. The first partition is configured to operate in active mode, wherein a set of functions is performed in the active mode. When the first partition is operating in active mode, the second partition may be configured to operate in a standby mode, wherein the set of functions is not performed in the standby mode. In response to an event, the second partition may be configured to start operating in the active mode instead of the first partition and to start performing the set of functions corresponding to the active mode. The first partition may be configured to operate in the standby mode after the second partition operates in the active mode.
The event that causes the second partition to become the active partition may be of various types. Examples include a reset or restart of the first partition, a software upgrade, a failure in operation of the first partition, a timeout, or an instruction to cause the second partition to operate in the active mode instead of the first partition.
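The transitions described above can be sketched as a small state machine. The following is a minimal illustrative sketch; the `Partition` class, the event names, and the function signatures are assumptions introduced for illustration only and do not correspond to any described implementation.

```python
# Illustrative sketch of the active/standby transition: on a failover event,
# the standby partition begins operating in active mode and the previously
# active partition becomes the standby. All names are hypothetical.

ACTIVE, STANDBY = "active", "standby"

# Example event types drawn from the text; real triggers may differ.
FAILOVER_EVENTS = {"reset", "restart", "software_upgrade",
                   "failure", "timeout", "failover_command"}

class Partition:
    def __init__(self, name, mode):
        self.name = name
        self.mode = mode

def failover(first, second, event):
    """If the event is a failover event, the standby (second) partition
    starts performing the set of functions and the first partition
    subsequently operates in standby mode."""
    if event not in FAILOVER_EVENTS:
        return first, second          # not a failover event; no change
    second.mode = ACTIVE              # standby takes over the active functions
    first.mode = STANDBY              # previously active partition stands by
    return first, second

p1 = Partition("P1", ACTIVE)
p2 = Partition("P2", STANDBY)
failover(p1, p2, "failure")
```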
In one embodiment, a hypervisor may be provided for managing the first partition and the second partition, including allocating processing and memory resources between the partitions.
In one embodiment, the active-standby mode capabilities provided by a single physical processor may be embodied in a network device such as a switch or router. The network device may comprise a multi-core processor that may be partitioned into multiple partitions, with one partition operating in active mode and another operating in standby mode. The set of functions performed in active mode may include one or more functions related to processing of a packet received by the network device. In one embodiment, the processor enabling the active-standby capability may be located on a line card of the network device. The processor may also be located on a management card of the network device.
The foregoing, together with other features and embodiments will become more apparent upon referring to the following specification, claims, and accompanying drawings.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of embodiments of the invention. However, it will be apparent that the invention may be practiced without these specific details.
Embodiments of the present invention provide techniques for achieving high-availability using a single processor (CPU). In a system comprising a multi-core processor, at least two partitions may be configured with each partition being allocated one or more cores of the multiple cores. The partitions may be configured such that one partition operates in active mode while another partition operates in standby mode. In this manner, a single processor is able to provide active-standby functionality, thereby enhancing the availability of the system comprising the processor.
For purposes of this application, the term “system” may refer to a system, a device, or a subsystem of a system or device. For example, the term “system” may refer to a network device such as a router or switch provided by Brocade Communications Systems, Inc. The term “system” may also refer to a subsystem of a system such as a management card or a line card of a router or switch.
Physical processor 102 represents the processing resources of system 100. In one embodiment, processor 102 is a multi-core processor comprising a plurality of processing cores. For example, in the embodiment depicted in
Volatile memory 104 represents the memory resources available to physical processor 102. Information related to runtime processing performed by processor 102 may be stored in memory 104. Memory 104 may be a RAM (e.g., SDR RAM, DDR RAM) and is sometimes referred to as the system's main memory.
Hardware resources of system 100 may include I/O devices 106 and other hardware resources 108. I/O devices 106 may include devices such as Ethernet devices, PCIe devices, eLBC devices, and others. Interconnect 110 may include one or more interconnects or buses.
In one embodiment, the processing, memory, and hardware resources of system 100 may be partitioned into one or more logical partitions (referred to herein as partitions). For example, in the embodiment depicted in
The memory resources provided by memory 104 may also be partitioned and allocated to the different partitions. For example, as depicted in
The memory assigned to a partition may store, during runtime, an operating system for the partition and data related to one or more entities executed by the partition. The data may include code and other data. These entities may include but are not restricted to an application, a process, a thread, an operating system (including a component of the operating system such as an operating system kernel module), a device driver, a hypervisor, and the like. For example, in the embodiment depicted in
Volatile memory 114 allocated to partition P2 may comprise a section 124 storing an operating system OS2 operating on P2 and a section 126 storing data related to one or more entities executed by partition P2. A section 128 of volatile memory 114 may optionally be set aside as warm memory to store data that is to be persisted across a warm boot of that partition.
Shared memory 116 may be shared by different partitions and also by hypervisor 130. Shared memory 116 may be shared by entities from the same partition or by entities from different partitions. A portion 129 of shared memory 116 may be optionally set aside as warm memory that enables stored data to be persisted across a warm boot. In one embodiment, shared memory 116 may be used for messaging between the sharers. Warm memory 129 may be shared between multiple entities, including applications/processes/threads executed by one or more partitions, different operating systems and their components, and the hypervisor. In one embodiment, shared memory 116 is configured such that the contents stored in the shared memory are not affected by a boot of a single partition.
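The warm-memory semantics described above may be sketched as follows. This is a minimal illustrative model, assuming only the behavior stated in the text: data written to a warm-memory section survives a warm boot of a partition, while ordinary volatile data does not; a cold (power-on) reset clears everything. The class and field names are hypothetical.

```python
# Minimal sketch of warm-memory behavior: ordinary volatile data is lost on
# any boot, while data in the warm section persists across a warm boot and
# is cleared only by a cold (power-on) reset. Names are illustrative.

class PartitionMemory:
    def __init__(self):
        self.normal = {}   # ordinary volatile data, lost on any boot
        self.warm = {}     # warm-memory section, persisted across warm boots

    def warm_boot(self):
        # A warm boot reinitializes ordinary memory but preserves warm memory.
        self.normal = {}

    def cold_boot(self):
        # A power-on (cold) reset clears everything, including warm memory.
        self.normal = {}
        self.warm = {}

mem = PartitionMemory()
mem.normal["scratch"] = 1
mem.warm["route_table"] = {"10.0.0.0/8": "port1"}
mem.warm_boot()
```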
The hardware resources of system 100, including I/O devices 106 and other hardware resources 108, may also be partitioned between partitions P1 and P2. A hardware resource may be assigned exclusively to one partition or alternatively may be shared between multiple partitions. For example, in one embodiment, a private Ethernet interface may be assigned to each partition, while access to PCIe may be shared between the partitions. In one embodiment, even though access to PCIe may be shared between the active and standby partitions, PCIe enumeration is performed only by the active partition.
Hypervisor 130 is a software program that facilitates secure partitioning of resources between the partitions of system 100 and management of the partitions. Hypervisor 130 enables multiple operating systems to run concurrently on system 100. Hypervisor 130 presents a virtual machine to each partition and allocates resources between the partitions. For example, the allocation of memory, processing, and hardware resources, as described above, may be facilitated by hypervisor 130. In one embodiment, hypervisor 130 may run directly on processor 102 as an operating system control.
Hypervisor 130 may present a virtual machine to each partition. For example, a virtual machine VM1 may be presented to partition P1 and a virtual machine VM2 may be presented to partition P2. Hypervisor 130 may manage multiple operating systems executed by the partitions. Hypervisor 130 may also facilitate the management of various warm memory portions (e.g., warm memory portions 122, 128, and 129) set aside in volatile memory 104.
Each virtual machine for a partition may operate independently of the other partitions and may not even know that the other partition exists. The operating system executed for one partition may be the same as or different from the operating system for another partition. For example, in
The warm memory portions depicted in
According to an embodiment of the present invention, the multiple partitions configured for system 100 enable system 100 to provide the active-standby model in which one partition of system 100 operates in “active” mode while another partition operates in “standby” mode. For example, in the embodiment depicted in
During normal operation of system 100, there may be some messaging that takes place between the active partition and the standby partition. For example, the active partition may use messaging to pass state information to the standby partition. The state information may comprise information that enables the standby partition to become the active partition upon a failover in a non-disruptive manner. Various different schemes may be used for the messaging including but not restricted to Ethernet-based messaging, PCI-based messaging, shared memory based messaging (such as using shared memory 116), and the like.
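The shared-memory-based messaging described above can be sketched as a simple channel over which the active partition publishes state updates that the standby partition later drains and applies. The channel representation, message format, and function names below are assumptions made for illustration; an actual implementation could equally use Ethernet- or PCI-based messaging.

```python
from collections import deque

# Sketch of messaging from the active partition to the standby partition.
# A deque stands in for a shared-memory message queue; names are hypothetical.

channel = deque()

def send_state_update(state_delta):
    """Active partition publishes a state update onto the channel."""
    channel.append(dict(state_delta))

def receive_state_updates(standby_state):
    """Standby partition drains the channel and applies each update,
    keeping its state information synchronized with the active partition."""
    while channel:
        standby_state.update(channel.popleft())
    return standby_state

send_state_update({"session_count": 42})
send_state_update({"last_synced": "t1"})
standby = receive_state_updates({})
```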
In the manner described above, even though system 100 comprises a single physical processor 102, it is capable of supporting multiple partitions with one partition configured to operate in active mode and another partition configured to operate in standby mode. This enables the single physical processor 102 to support the active-standby model. This in turn enhances the availability of system 100.
There are different ways in which one or more cores of a multi-core processor such as processor 102 depicted in
System 100 may be embodied in various different systems. For example, in one embodiment, system 100 may be embodied in a network device such as a switch or router provided by Brocade Communications Systems, Inc. A network device may be any device that is capable of forwarding data. The data may be received in the form of packets.
Ports 212 represent the I/O plane for network device 200. Network device 200 is configured to receive and forward packets using ports 212. A port within ports 212 may be classified as an input port or an output port depending upon whether network device 200 receives or transmits a data packet using the port. A port over which a data packet is received by network device 200 is referred to as an input port. A port used for communicating or forwarding a data packet from network device 200 is referred to as an output port. A particular port may function both as an input port and an output port. A port may be connected by a link or interface to a neighboring network device or network. Ports 212 may be capable of receiving and/or transmitting different types of data traffic at different speeds including 1 Gigabit/sec, 10 Gigabits/sec, 100 Gigabits/sec, or even more. In some embodiments, multiple ports of network device 200 may be logically grouped into one or more trunks.
Upon receiving a data packet via an input port, network device 200 is configured to determine an output port to be used for transmitting the data packet from the network device to facilitate communication of the packet to its intended destination. Within network device 200, the packet is forwarded from the input port to the determined output port and then transmitted from network device 200 using the output port. In one embodiment, forwarding of packets from an input port to an output port is performed by one or more line cards 204. Line cards 204 represent the data forwarding plane of network device 200. Each line card may comprise one or more packet processors that are programmed to perform forwarding of data packets from an input port to an output port. In one embodiment, processing performed by a line card may comprise extracting information from a received packet, performing lookups using the extracted information to determine an output port for the packet such that the packet can be forwarded to its intended destination, and to forward the packet to the output port. The extracted information may include, for example, the header of the received packet.
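The line-card processing described above (extract information from a received packet, perform a lookup to determine an output port, and forward the packet to that port) may be sketched as follows. The table contents, field names, and dictionary-based lookup are illustrative assumptions; an actual line card would typically perform a longest-prefix match in a hardware packet processor.

```python
# Sketch of the forwarding path: extract header fields, look up an output
# port, and forward. Table entries and field names are invented for
# illustration; real lookups use longest-prefix matching in hardware.

FORWARDING_TABLE = {"10.0.0.0/8": "port3", "192.168.1.0/24": "port7"}

def extract_header(packet):
    return packet["header"]

def lookup_output_port(header, table=FORWARDING_TABLE):
    # A direct dictionary lookup keeps the sketch simple.
    return table.get(header["dst_prefix"])

def forward(packet):
    """Returns (output_port, packet); output_port is None when no
    forwarding entry matches."""
    port = lookup_output_port(extract_header(packet))
    return (port, packet)

out_port, _ = forward({"header": {"dst_prefix": "10.0.0.0/8"}, "payload": b"data"})
```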
Management card 202 is configured to perform management and control functions for network device 200 and thus represents the management plane for network device 200. In one embodiment, management card 202 is communicatively coupled to line cards 204 via switch fabric 206. In the embodiment depicted in
According to an embodiment of the present invention, system 100 depicted in
By providing multiple partitions, each capable of operating independently of the other partition, management card 202 is able to provide processing element redundancy. This redundancy enables management card 202 to support the active-standby model wherein one partition is configured to operate in active mode (as the active partition) and another partition is configured to operate in standby mode. The ability to support the active-standby model, even though management card 202 comprises a single physical processor 208, enhances the availability of management card 202 and allows it to support various high-availability networking protocols such as graceful restart, non-stop routing (NSR), and the like.
In one embodiment, for a line card 204, one partition may be configured to operate in active mode while another partition operates in standby mode. For example, as depicted in
By providing multiple partitions, each capable of operating independently of the other partition, a line card 204 is able to provide processing redundancy. This redundancy enables line card 204 to support the active-standby functionality wherein one partition is configured to operate in active mode (as the active partition) and another partition is configured to operate in standby mode. The ability to support the active-standby model, even though line card 204 comprises a single physical processor 222, enhances the availability of line card 204. For example, even if the active partition of a line card runs into problems, the functions performed by the active partition may be taken over by the standby partition, which then becomes the active partition. In this manner, the functionality of a line card is not interrupted in spite of a failure or problem with one of the partitions. Resources previously owned by the active partition are taken over by the standby partition when it becomes active. These resources may include hardware resources (PCIe devices, memory, CPU cores, device ports, etc.) and software-related resources (message queues, buffers, interrupts, etc.).
In one embodiment (not shown), a network device may be provided with a single physical multi-core CPU, where the CPU is configured to handle functions performed by a line card and a management card. Such a network device is sometimes referred to as a “pizza box.” In such an embodiment, the CPU may be partitioned into multiple partitions, each partition being allocated one or more cores of the multi-core processor. One of the partitions may operate in active mode while another partition operates in standby mode.
For a system comprising a single physical multi-core processor or CPU that can be partitioned into one or more partitions, processing may be performed to determine which partition becomes the active partition and which partition becomes the standby partition. For example, this processing may be performed upon a power-on reset (cold reset) of the system.
Processing may be initiated upon a power up of the system (step 302). The power up may be performed upon a cold boot or a power-on reset. A boot loader is then launched (step 304). The boot loader may run on one or more cores of processor 102.
The boot loader then loads and may update the hardware configuration for the system (step 306). The partition configuration may be determined statically based upon a configuration file loaded by the boot loader. The configuration data may be stored locally or retrieved from a remote location (e.g., from a remote server). The configuration data may identify the number of partitions to be configured for system 100, a set of cores of processor 102 to be assigned to each partition, and the operating system to be loaded for each partition. As part of 306, the boot loader may also determine the processor, memory, and hardware resources that are available for system 100. In one embodiment, the boot loader may dynamically adjust the partition configuration based on specific hardware resources available (typically based upon the amount of memory available).
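The configuration data described above (the number of partitions, the set of cores assigned to each partition, and the operating system to be loaded for each partition) may be sketched as follows. The file format, field names, and validation rules are assumptions introduced for illustration; the validation simply checks that no core is assigned to more than one partition and that every core index is in range.

```python
# Sketch of partition configuration data loaded by the boot loader.
# Field names and structure are hypothetical.

CONFIG = {
    "partitions": [
        {"name": "P1", "cores": [0, 1], "os": "OS1"},
        {"name": "P2", "cores": [2, 3], "os": "OS2"},
    ]
}

def validate_partition_config(config, total_cores):
    """Checks that core sets are disjoint and in range; returns the
    number of partitions to be configured."""
    seen = set()
    for part in config["partitions"]:
        cores = set(part["cores"])
        if cores & seen:
            raise ValueError("core assigned to multiple partitions")
        if not cores <= set(range(total_cores)):
            raise ValueError("core index out of range")
        seen |= cores
    return len(config["partitions"])

num_partitions = validate_partition_config(CONFIG, total_cores=4)
```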
The boot loader then starts the hypervisor (step 308). In one embodiment, the hypervisor is loaded in a section of the memory that is protected from access by the partitions. As part of 308, the boot loader may also pass hardware configuration information to the hypervisor. This information may identify the number of partitions, configuration information for each partition, and other information that may be used by the hypervisor for setting up the partitions.
The hypervisor then sets up the partitions (step 310). In one embodiment, based upon the hardware configuration information received from the boot loader, the hypervisor determines the partitions for the processor and how resources are to be allocated to the partitions. In one embodiment, a compact flash device may be provided for each partition and configured to store information for configuring the associated partition. As part of 310, the hypervisor may be configured to determine, for each partition, the compact flash corresponding to the partition and determine configuration information for the partition from the compact flash. The information for a partition may identify the operating system to be loaded in that partition.
While the hypervisor is responsible for setting up the partitions according to 310, the hypervisor does not determine how the system is to be partitioned. The hypervisor partitions the system based upon the configuration file (also sometimes referred to as a device tree) data loaded in 306. The configuration file may be set by a user or administrator of the system. The hypervisor is thus responsible for creating partitions defined by the configuration file data.
As part of 310, the hypervisor may launch an operating system for each partition. The operating system for a partition may be loaded in a section of memory configured for the partition. For example, in the embodiment depicted in
The partitions then arbitrate for mastership (or active/standby status) (step 312). Processing is performed in 312 to determine which partition is to become the active partition and which partition is to be the standby partition. A deterministic algorithm is typically used to determine mastership. Processing for determining mastership is performed by the operating systems loaded for the partitions (also referred to as the guest operating systems) and not by the hypervisor or boot loader. Accordingly, while a hypervisor facilitates management of resources for partitions, it is not involved or required for processing related to mastership arbitration (and hence is not essential for providing high availability (HA) in a system).
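The text notes that a deterministic algorithm is typically used for mastership arbitration but does not specify one. The following sketch assumes one simple deterministic rule, chosen purely for illustration: among the healthy partitions, the lowest-numbered partition becomes the active partition and the remainder become standbys.

```python
# Illustrative deterministic mastership arbitration: the lowest-numbered
# healthy partition wins active status. The rule itself is an assumption.

def arbitrate_mastership(partitions):
    """partitions: list of (partition_id, healthy) pairs.
    Returns (active_id, list_of_standby_ids)."""
    healthy = sorted(pid for pid, ok in partitions if ok)
    if not healthy:
        raise RuntimeError("no healthy partition available")
    return healthy[0], healthy[1:]

active, standbys = arbitrate_mastership([(1, True), (2, True)])
```

Because the rule depends only on the set of healthy partition identifiers, every partition running the same algorithm reaches the same conclusion without needing the hypervisor to arbitrate.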
As a result of the processing performed in 312, one partition becomes the active partition and the other one becomes the standby partition (step 314). The active partition then takes ownership of and starts managing hardware resources of the system (step 316). In one embodiment, the active partition may take control of all the hardware resources. Certain hardware resources may be shared between the active partition and the standby partition. The sharing is typically done to ensure that the process of the standby partition becoming the active partition in the event of a failover can occur with minimal impact in a non-disruptive manner. Accordingly, data can be shared between the partitions to facilitate a failover.
The active partition may then configure and manage any shared resources (step 318). For example, in
The active partition may then start running one or more applications and perform functions performed by an active partition (step 320). For example, if system 100 is embodied in a line card of a network device, the applications may include applications for forwarding data packets received by the network device. The active partition on a line card may perform functions such as managing I/O devices, managing control state, programming hardware (e.g., programming hardware-based data packet processors), sending out control packets, maintaining protocol/state information, maintaining timing information/logs, and other functions performed by a line card in a network device.
The partition that comes up as the standby partition receives an initial state information dump from the active partition (step 322). The standby partition then periodically receives state information updates from the active partition such that the state information for the standby partition is synchronized with the state information for the active partition. The communication of state information from the active partition to the standby partition is performed as part of the functions performed by the active partition in 320. The active partition may communicate state information to the standby partition using, for example, a messaging mechanism. In one embodiment, the active partition is configured to periodically check if the state information on the standby partition is synchronized with the state information on the active partition. If not in sync, then the active partition communicates state information to the standby partition to bring its state information in synchrony with the state information on the active partition. In one embodiment, a change in state information on the active partition (e.g., a configuration change) may cause the active partition to synchronize the state information with the standby partition. Accordingly, in one embodiment, the standby partition does not interact with the resources owned/managed by the active partition. The standby partition receives the state information from the active partition.
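The periodic synchronization check described above may be sketched as a digest comparison: the active partition compares a digest of its state with a digest of the standby's state and transfers state only when the digests differ. The use of SHA-256 over canonical JSON is an assumption made for this sketch; the text does not specify how synchrony is determined.

```python
import hashlib
import json

# Sketch of the periodic sync check: resend state to the standby only
# when its digest differs from the active partition's digest.

def state_digest(state):
    # Canonical JSON keeps the digest stable regardless of dict ordering.
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def sync_if_needed(active_state, standby_state):
    """Returns (synced_standby_state, resent), where resent indicates
    whether a state transfer to the standby was required."""
    if state_digest(active_state) == state_digest(standby_state):
        return standby_state, False
    return dict(active_state), True   # push the active state to the standby

standby, resent = sync_if_needed({"routes": 10}, {"routes": 9})
```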
The state information that is synchronized or shared between the active partition and the standby partition may comprise information that is needed by the standby partition to become the active partition, when a failover event occurs, in a non-disruptive manner. State information may comprise application data (routing tables, queue structures, buffers, etc.) and hardware-specific state information (ASIC configuration tables, port maps, etc.). In one embodiment, the active partition may not even know of the existence of the standby partition. In another embodiment, the active and standby partitions may be aware of each other. For example, the active partition may know the presence and state (healthy or degraded) of the standby partition. Knowing the state enables the active partition to determine whether a failover to the standby can be performed without causing data disruption.
As described above, a single physical multi-core processor may be partitioned into multiple partitions with one partition being configured as the active partition and another partition configured as the standby partition. The active partition is configured to perform a set of functions related to the system that are not performed by the standby partition. When a failover event occurs, the standby partition becomes the active partition and starts performing the set of functions that were previously performed by the partition that was previously active.
In the embodiment described above, one partition operates in active mode and another operates in standby mode. In alternative embodiments, there may be multiple standby partitions. In such an embodiment, one of the multiple standby partitions may become the active partition upon a failover. The new active partition then resets the former active partition to make it the standby partition.
In one embodiment, at a high level, failover events, i.e., events that cause a failover to occur, may be categorized into one of the following two categories:
(1) a voluntary failover event, and
(2) an involuntary failover event.
A voluntary failover event is one that causes the active partition to voluntarily yield control to the standby partition. For example, a command received from a network administrator to perform a failover is a voluntary failover event. There are various situations in which this may be done. As one example, a voluntary failover may be performed when software on the active partition is to be upgraded; in this situation, a system administrator may voluntarily issue a command or instruction to cause a failover to occur. Details related to processing performed during a failover are provided below. As another example, a voluntary failover may be initiated by the system administrator upon noticing performance degradation on the active partition, or upon noticing that software executed by the active partition is malfunctioning; in these cases, the network administrator may voluntarily issue a command for a failover in the hope that the problems associated with the active partition will be remedied when the standby partition becomes the new active partition. Various interfaces, including a command line interface (CLI), may be provided for initiating a voluntary failover.
An involuntary failover typically occurs due to some critical failure in the active partition. Examples include when a hardware watchdog timer goes off (or times out) and resets the active partition, possibly due to a problem in the kernel of the operating system loaded for the active partition, critical failure of software executed by the active partition, loss of heartbeat, and the like. An involuntary failover event causes the standby partition to automatically become the active partition. An involuntary failover event may be any event that occurs and/or is detected by a system comprising a multi-core processor.
Events that cause a voluntary or involuntary failover may come in different forms. A multi-core CPU system may be configured such that various events that occur in the system, or are detected by the system, or of which the system receives a notification may cause a failover to occur, as a result of which the standby partition becomes the active partition and the active partition may become the standby partition.
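The two-way categorization described above may be sketched as follows. The specific event names in the two sets are illustrative examples drawn from the text (administrator command, software upgrade, watchdog timeout, kernel problem, critical software failure, loss of heartbeat); an actual system may recognize a different set of events.

```python
# Sketch of the failover-event categories: voluntary events cause the
# active partition to yield control; involuntary events reflect a critical
# failure in the active partition. Event names are illustrative.

VOLUNTARY = {"admin_failover_command", "software_upgrade"}
INVOLUNTARY = {"watchdog_timeout", "kernel_panic",
               "software_critical_failure", "heartbeat_loss"}

def classify_failover_event(event):
    if event in VOLUNTARY:
        return "voluntary"
    if event in INVOLUNTARY:
        return "involuntary"
    return "not_a_failover_event"

kind = classify_failover_event("watchdog_timeout")
```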
As depicted in
The hypervisor detects or is notified of a failover-related event (step 404). The failover-related event may be a failover event itself (e.g., a catastrophic failure on the active partition, a watchdog timer going off, a boot of the active partition, etc.) or a signal or interrupt caused by a failover event. The hypervisor then sends a notification to the standby partition (P2) about the failover-related event (step 406). For example, the hypervisor may send a notification to the standby partition (P2) that the active partition (P1) has rebooted.
The detection or notification of failover-related events is not restricted to the hypervisor. The hypervisor may not even be involved with the detection or notification. For example, the active partition may itself send a notification to the standby partition of a failover-related event. For example, the active partition may send a notification to the standby partition that the active partition is to reboot. The standby partition may also be capable of detecting failover-related events and take over as the active partition.
The standby partition (P2) then requests that the hypervisor allocate to it the hardware resources that were previously allocated to the active partition (P1) (step 408). As part of 408, the standby partition (P2) may also request that the hypervisor stop the active partition (P1) so that resources held by the active partition can be reallocated to the standby partition (P2).
The standby partition (P2) then takes over as the new active partition and starts active processing (step 410). In one embodiment, as part of 410, the new active partition may perform processing depicted in steps 316, 318, and 320 described above with respect to
The new active partition (P2) may then attempt to restart the previous active partition (P1) (step 412). As part of 412, the new active partition (P2) may request the hypervisor to restart partition P1. Partition P1 will assume the standby role when it comes up, upon detecting that another partition is already operating as the active partition. In one embodiment, the new active partition may monitor the status of partition P1 to see if it comes up successfully in standby mode (step 414). If partition P1 successfully comes up as the standby partition, then the active partition knows that, in the case of another failover event, a standby partition is available to take over as the active partition without disrupting the functionality of the system. If the active partition determines that the previously active partition could not successfully come up in standby mode, this indicates that, due to the non-availability of a standby partition, a subsequent failover event may cause a disruption of service for the system.
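The takeover sequence of steps 408 through 414 can be sketched with simple role bookkeeping. This is an assumed model for illustration only; the `Hypervisor` class and role strings are not from the original description.

```python
class Hypervisor:
    """Minimal role bookkeeping for two partitions on one CPU."""
    def __init__(self):
        self.roles = {"P1": "active", "P2": "standby"}

    def stop(self, name):
        # Stopping the partition frees its hardware resources for
        # reallocation to the standby partition (step 408).
        self.roles[name] = "stopped"

    def restart(self, name):
        # A restarting partition that detects another active partition
        # assumes the standby role (step 412).
        other_active = any(role == "active"
                           for p, role in self.roles.items() if p != name)
        self.roles[name] = "standby" if other_active else "active"

def handle_failover(hv):
    hv.stop("P1")                 # step 408: stop failed active P1
    hv.roles["P2"] = "active"     # step 410: standby P2 takes over
    hv.restart("P1")              # step 412: restart P1 as standby
    # Step 414: report whether a standby is available again, i.e.
    # whether a later failover could proceed without disruption.
    return hv.roles["P1"] == "standby"
```

Note how the restart logic makes the standby role self-assigning: P1 does not need to be told it lost the failover, it merely observes that an active partition already exists.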
In one embodiment, if any of the processing steps depicted in
After the standby partition becomes the active partition in 410, it synchronizes its state information with that of the previous active partition. As previously discussed, during normal processing, the active partition may communicate state information to the standby partition to update the state information of the standby partition. After the standby partition becomes the active partition, it checks whether the state information that it received from the previous active partition is synchronized with the state information of that partition. If the information is deemed to be synchronized, then the new active partition continues operating with the replicated state (a warm recovery). If the information is not synchronized, or if the warm recovery fails, then the new active partition may perform functions to recover the state information, including potentially initiating cold recovery functions (e.g., resetting hardware) to reinitialize its operational state information. The partition then continues to operate in active mode.
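The warm-versus-cold recovery decision can be sketched as below. The description does not specify how synchronization is checked; comparing sequence numbers of replicated state updates is an assumption introduced here for illustration.

```python
def resume_after_takeover(received_seq, active_seq, cold_recover):
    """Decide between warm and cold recovery after a takeover.

    received_seq: last state update the (former) standby received.
    active_seq:   last state update the previous active partition sent.
    cold_recover: callback that reinitializes operational state,
                  e.g. by resetting hardware.
    """
    if received_seq == active_seq:
        # Replicated state is current: continue with it (warm recovery).
        return "warm"
    # State is stale (or warm recovery failed): fall back to cold
    # recovery to reinitialize operational state information.
    cold_recover()
    return "cold"
```

For example, a takeover with one missed update would trigger the cold path, while a fully synchronized takeover proceeds warm without touching hardware.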
In one embodiment, a failover may be used for making software/firmware (software in general) upgrades to a system without disrupting processing performed by the system. For example, if the system is a network device, a failover may be used to upgrade software for the network device without disrupting traffic forwarding/switching performed by the network device. The upgrade may be stored in non-volatile memory. In one embodiment, the non-volatile memory may store information for the different partitions. In one embodiment, compact flash (CF) serves as the non-volatile memory for storing the information. For example, in one embodiment, each partition has a corresponding CF that may be used to store the upgrades for that partition. In an alternative embodiment, a CF may store information for multiple partitions. In other embodiments, other types of non-volatile memory may also be used.
As discussed above, in one non-limiting embodiment, a CF may be provided for each partition and store information for the partition. In one such embodiment, the CF for a partition may be divided into a primary volume (CF_P) and a secondary volume (CF_S). The primary volume may be used for providing the root file system for the partition and the secondary volume may be used for software upgrades for the partition.
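The per-partition volume layout can be represented compactly. This is a sketch of the described division only; tracking volumes by the image version they hold is an illustrative simplification.

```python
from dataclasses import dataclass

@dataclass
class CompactFlash:
    """Per-partition CF divided into a primary volume (CF_P), which
    provides the partition's root file system, and a secondary volume
    (CF_S), which stages software upgrades for the partition."""
    primary: str     # image version held on CF_P (root file system)
    secondary: str   # image version held on CF_S (upgrade staging)

cf1 = CompactFlash(primary="V1", secondary="V1")   # CF for partition P1
cf2 = CompactFlash(primary="V1", secondary="V1")   # CF for partition P2
```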
In the embodiment depicted in
In one embodiment, there are two different situations for software upgrades. The first case involves upgrading the software executed by a partition and does not involve the upgrade of the hypervisor image. A method for performing such an upgrade is depicted in
As depicted in
A new software image V2 to which the software is to be upgraded is downloaded and stored in CF2_S corresponding to the secondary file system of standby partition P2 (step 602). The new software image V2 is then copied to CF1_S corresponding to the secondary file system of active partition P1 (step 604).
Standby partition P2 is then rebooted/restarted to activate the new image (step 606). As a result of 606, standby partition P2 comes up running the new software image V2 and still remains the standby partition. CF2_S is mounted as the root file system and CF2_P is mounted as the secondary file system for partition P2.
Active partition P1 communicates the system state information to standby partition P2 (step 608). Active partition P1 then initiates a failover by initiating a reboot/restart (step 610). As a result of the failover, standby partition P2 becomes the active partition. Also, as a result of the failover, partition P1 comes back up running the new software image V2 and becomes the standby partition. CF1_S is mounted as the root file system and CF1_P is mounted as the secondary file system for partition P1.
Active partition P2 communicates the system state information to standby partition P1 (step 612). New software V2 is then copied from CF2_S to CF2_P for active partition P2 and from CF1_S to CF1_P for standby partition P1 (step 614). Now all of the volumes have the new software image.
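The upgrade sequence of steps 602 through 614 can be sketched by tracking which image version each CF volume holds. The function name and the dictionary representation are assumptions for illustration; the reboots and failover themselves are summarized in comments since only the volume contents are modeled.

```python
def upgrade_partition_software(volumes, new_image):
    """Non-disruptive upgrade that does not touch the hypervisor image.

    volumes maps volume name -> image version it holds; initially all
    four volumes hold the old image.
    """
    volumes["CF2_S"] = new_image   # step 602: download to standby P2
    volumes["CF1_S"] = new_image   # step 604: copy to active P1
    # Step 606: reboot P2 from CF2_S, so it runs the new image while
    # remaining the standby partition. Steps 608-610: sync state, then
    # fail over; P2 becomes active and P1 reboots from CF1_S as the
    # new standby. Steps 612-614: sync state again, then copy the new
    # image onto the remaining volumes.
    volumes["CF2_P"] = new_image
    volumes["CF1_P"] = new_image
    return all(v == new_image for v in volumes.values())
```

The key property the sketch captures is that each partition reboots from its secondary volume, so at every point one partition is running and forwarding, and at the end all four volumes hold the new image.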
In the manner described above, a software upgrade may be performed in a non-disruptive manner without interrupting processing performed by system 500. For example, if system 500 were a management card in a network device (such as management card 202 depicted in
New software image V2 comprising the new hypervisor image is downloaded and stored in CF2_S corresponding to the secondary file system of standby partition P2 (step 702). V2 is then copied to CF1_S corresponding to the secondary file system of active partition P1 (step 704).
The system state information on active partition P1 is saved to a non-volatile storage medium (step 706). In one embodiment, some of the techniques described in U.S. Pat. Nos. 7,188,237, 7,194,652, and 7,284,236 may be used for saving the system state information in the context of partitions within a single CPU. The entire contents of these patents are incorporated herein by reference for all purposes. Other techniques may be used in alternative embodiments.
The new hypervisor image is then extracted from new image V2 and is activated (step 708). As a result of 708, both active partition P1 and standby partition P2 need to be rebooted. Both partitions are rebooted (step 710). As a result of 710, active partition P1 comes up running new image V2 while remaining the active partition, and standby partition P2 comes up running new image V2 while remaining the standby partition. CF1_S is mounted as the root file system and CF1_P is mounted as the secondary file system for active partition P1. CF2_S is mounted as the root file system and CF2_P is mounted as the secondary file system for standby partition P2.
The system state information saved in 706 is restored on active partition P1 and is communicated to the standby partition P2 (step 712). In one embodiment, the techniques described in U.S. Pat. Nos. 7,188,237, 7,194,652, and 7,284,236 may be used for restoring the system state information in the context of partitions within a single CPU. The entire contents of these patents are incorporated herein by reference for all purposes. Other methods may be used in alternative embodiments.
New software image V2 is then copied from CF1_S to CF1_P for active partition P1, and from CF2_S to CF2_P for standby partition P2 (step 714). Now all of the volumes have the new software images.
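The hypervisor-upgrade sequence of steps 702 through 714 differs from the previous case in that both partitions reboot (with roles preserved) and system state is saved to and restored from non-volatile storage. A sketch under the same illustrative assumptions as before, with the saved state modeled as a dictionary entry:

```python
def upgrade_with_hypervisor(volumes, nv_store, new_image):
    """Upgrade that includes a new hypervisor image, requiring both
    partitions to reboot while keeping their active/standby roles."""
    volumes["CF2_S"] = new_image         # step 702: download to standby P2
    volumes["CF1_S"] = new_image         # step 704: copy to active P1
    nv_store["state"] = "system-state"   # step 706: save state to NV storage
    # Step 708: extract the new hypervisor image from V2 and activate it.
    # Step 710: reboot BOTH partitions; each comes up on its CF_S
    # volume running the new image, with roles unchanged.
    restored = nv_store.pop("state")     # step 712: restore on P1, sync to P2
    volumes["CF1_P"] = new_image         # step 714: propagate the new
    volumes["CF2_P"] = new_image         #           image to the remaining volumes
    return restored == "system-state" and all(
        v == new_image for v in volumes.values())
```

Unlike the partition-only upgrade, there is no failover here: because the hypervisor itself restarts, the state survives the simultaneous reboot via the non-volatile save/restore rather than via the standby partition.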
As described above, techniques are provided that enable an active-standby model to be provided by a single physical multi-core processor. By providing for multiple partitions with one partition operating in active mode and another operating in standby mode, a failover mechanism is provided whereby, when a failover event occurs (e.g., something goes wrong with the active partition, a software upgrade is to be performed), the standby partition can take over as the active partition and start performing the set of functions corresponding to the active mode without disrupting processing that is being performed by the system. As a result, the set of functions related to the system continue to be performed without interruption. This reduces or even eliminates the downtime of the system's functionality, which translates to higher availability of the system. In this manner, even if a system comprises only a single processor, the system can support active-standby mode functionality.
Various embodiments have been described above where a system comprises a single physical multi-core processor configured as described above. This enables the system to provide active-standby functionality even with one physical processor. The scope of the present invention is, however, not restricted to systems comprising a single physical processor. A multi-core processor configured as described above may also be provided in a system comprising multiple processors, where the multi-core processor enables active-standby functionality.
Although specific embodiments of the invention have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the invention. Embodiments of the present invention are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments of the present invention have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present invention is not limited to the described series of transactions and steps.
Further, while embodiments of the present invention have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present invention. Embodiments of the present invention may be implemented only in hardware, or only in software, or using combinations thereof.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims.
Number | Name | Date | Kind |
---|---|---|---|
5159592 | Perkins | Oct 1992 | A |
5278986 | Jourdenais et al. | Jan 1994 | A |
5410710 | Sarangdhar et al. | Apr 1995 | A |
5550816 | Hardwick et al. | Aug 1996 | A |
5649110 | Ben-Nun et al. | Jul 1997 | A |
5878232 | Marimuthu | Mar 1999 | A |
5970232 | Passint et al. | Oct 1999 | A |
5978578 | Azarya et al. | Nov 1999 | A |
6047330 | Stracke, Jr. | Apr 2000 | A |
6097718 | Bion | Aug 2000 | A |
6101188 | Sekine et al. | Aug 2000 | A |
6104700 | Haddock et al. | Aug 2000 | A |
6111888 | Green et al. | Aug 2000 | A |
6115393 | Engel et al. | Sep 2000 | A |
6161169 | Cheng | Dec 2000 | A |
6233236 | Nelson et al. | May 2001 | B1 |
6282678 | Snay et al. | Aug 2001 | B1 |
6331983 | Haggerty et al. | Dec 2001 | B1 |
6374292 | Srivastava et al. | Apr 2002 | B1 |
6397242 | Devine et al. | May 2002 | B1 |
6424629 | Rubino et al. | Jul 2002 | B1 |
6430609 | Dewhurst et al. | Aug 2002 | B1 |
6496510 | Tsukakoshi et al. | Dec 2002 | B1 |
6496847 | Bugnion et al. | Dec 2002 | B1 |
6567417 | Kalkunte et al. | May 2003 | B2 |
6570875 | Hegde | May 2003 | B1 |
6577634 | Tsukakoshi et al. | Jun 2003 | B1 |
6580727 | Yim et al. | Jun 2003 | B1 |
6587469 | Bragg | Jul 2003 | B1 |
6597699 | Ayres | Jul 2003 | B1 |
6604146 | Rempe et al. | Aug 2003 | B1 |
6608819 | Mitchem et al. | Aug 2003 | B1 |
6633916 | Kauffman | Oct 2003 | B2 |
6636895 | Li et al. | Oct 2003 | B1 |
6674756 | Rao et al. | Jan 2004 | B1 |
6675218 | Mahler et al. | Jan 2004 | B1 |
6678248 | Haddock et al. | Jan 2004 | B1 |
6680904 | Kaplan et al. | Jan 2004 | B1 |
6691146 | Armstrong et al. | Feb 2004 | B1 |
6704925 | Bugnion | Mar 2004 | B1 |
6711672 | Agesen | Mar 2004 | B1 |
6725289 | Waldspurger et al. | Apr 2004 | B1 |
6731601 | Krishna et al. | May 2004 | B1 |
6732220 | Babaian et al. | May 2004 | B2 |
6763023 | Gleeson et al. | Jul 2004 | B1 |
6785886 | Lim et al. | Aug 2004 | B1 |
6789156 | Waldspurger | Sep 2004 | B1 |
6791980 | Li | Sep 2004 | B1 |
6795966 | Lim et al. | Sep 2004 | B1 |
6847638 | Wu | Jan 2005 | B1 |
6859438 | Haddock et al. | Feb 2005 | B2 |
6880022 | Waldspurger et al. | Apr 2005 | B1 |
6898189 | Di Benedetto et al. | May 2005 | B1 |
6910148 | Ho et al. | Jun 2005 | B1 |
6938179 | Iyer et al. | Aug 2005 | B2 |
6944699 | Bugnion et al. | Sep 2005 | B1 |
6961806 | Agesen et al. | Nov 2005 | B1 |
6961941 | Nelson et al. | Nov 2005 | B1 |
6975587 | Adamski et al. | Dec 2005 | B1 |
6975639 | Hill et al. | Dec 2005 | B1 |
7039720 | Alfieri et al. | May 2006 | B2 |
7058010 | Chidambaran et al. | Jun 2006 | B2 |
7061858 | Di Benedetto et al. | Jun 2006 | B1 |
7065059 | Zinin | Jun 2006 | B1 |
7093160 | Lau et al. | Aug 2006 | B2 |
7188237 | Zhou et al. | Mar 2007 | B2 |
7194652 | Zhou et al. | Mar 2007 | B2 |
7236453 | Visser et al. | Jun 2007 | B2 |
7269133 | Lu et al. | Sep 2007 | B2 |
7284236 | Zhou et al. | Oct 2007 | B2 |
7292535 | Folkes et al. | Nov 2007 | B2 |
7305492 | Bryers et al. | Dec 2007 | B2 |
7308503 | Giraud et al. | Dec 2007 | B2 |
7315552 | Kalkunte et al. | Jan 2008 | B2 |
7317722 | Aquino et al. | Jan 2008 | B2 |
7324500 | Blackmon et al. | Jan 2008 | B1 |
7327671 | Karino et al. | Feb 2008 | B2 |
7339903 | O'Neill | Mar 2008 | B2 |
7360084 | Hardjono | Apr 2008 | B1 |
7362700 | Frick et al. | Apr 2008 | B2 |
7382736 | Mitchem et al. | Jun 2008 | B2 |
7385977 | Wu et al. | Jun 2008 | B2 |
7406037 | Okita | Jul 2008 | B2 |
7417947 | Marques et al. | Aug 2008 | B1 |
7417990 | Ikeda et al. | Aug 2008 | B2 |
7418439 | Wong | Aug 2008 | B2 |
7441017 | Watson et al. | Oct 2008 | B2 |
7447225 | Windisch et al. | Nov 2008 | B2 |
7483370 | Dayal et al. | Jan 2009 | B1 |
7483433 | Simmons et al. | Jan 2009 | B2 |
7518986 | Chadalavada et al. | Apr 2009 | B1 |
7522521 | Bettink et al. | Apr 2009 | B2 |
7535826 | Cole et al. | May 2009 | B1 |
7599284 | Di Benedetto et al. | Oct 2009 | B1 |
7609617 | Appanna et al. | Oct 2009 | B2 |
7620953 | Tene et al. | Nov 2009 | B1 |
7656409 | Cool et al. | Feb 2010 | B2 |
7694298 | Goud et al. | Apr 2010 | B2 |
7720066 | Weyman et al. | May 2010 | B2 |
7729296 | Choudhary | Jun 2010 | B1 |
7739360 | Watson et al. | Jun 2010 | B2 |
7751311 | Ramaiah et al. | Jul 2010 | B2 |
7787360 | Windisch et al. | Aug 2010 | B2 |
7787365 | Marques et al. | Aug 2010 | B1 |
7788381 | Watson et al. | Aug 2010 | B2 |
7802073 | Cheng et al. | Sep 2010 | B1 |
7804769 | Tuplur et al. | Sep 2010 | B1 |
7804770 | Ng | Sep 2010 | B2 |
7805516 | Kettler et al. | Sep 2010 | B2 |
7830802 | Huang et al. | Nov 2010 | B2 |
7843920 | Karino et al. | Nov 2010 | B2 |
7886195 | Mayer | Feb 2011 | B2 |
7894334 | Wen et al. | Feb 2011 | B2 |
7929424 | Kochhar et al. | Apr 2011 | B2 |
7940650 | Sandhir et al. | May 2011 | B1 |
7944811 | Windisch et al. | May 2011 | B2 |
7974315 | Yan et al. | Jul 2011 | B2 |
8009671 | Guo et al. | Aug 2011 | B2 |
8014394 | Ram | Sep 2011 | B2 |
8028290 | Rymarczyk et al. | Sep 2011 | B2 |
8074110 | Vera et al. | Dec 2011 | B2 |
8086906 | Ritz et al. | Dec 2011 | B2 |
8089964 | Lo et al. | Jan 2012 | B2 |
8095691 | Verdoorn et al. | Jan 2012 | B2 |
8099625 | Tseng et al. | Jan 2012 | B1 |
8102848 | Rao | Jan 2012 | B1 |
8121025 | Duan et al. | Feb 2012 | B2 |
8131833 | Hadas et al. | Mar 2012 | B2 |
8149691 | Chadalavada et al. | Apr 2012 | B1 |
8156230 | Bakke et al. | Apr 2012 | B2 |
8161260 | Srinivasan | Apr 2012 | B2 |
8180923 | Smith et al. | May 2012 | B2 |
8181174 | Liu | May 2012 | B2 |
8291430 | Anand et al. | Oct 2012 | B2 |
8335219 | Simmons et al. | Dec 2012 | B2 |
8345536 | Rao et al. | Jan 2013 | B1 |
20020035641 | Kurose et al. | Mar 2002 | A1 |
20020103921 | Nair et al. | Aug 2002 | A1 |
20020129166 | Baxter et al. | Sep 2002 | A1 |
20030105794 | Jasinschi et al. | Jun 2003 | A1 |
20030202520 | Witkowski et al. | Oct 2003 | A1 |
20040001485 | Frick et al. | Jan 2004 | A1 |
20040030766 | Witkowski | Feb 2004 | A1 |
20040078625 | Rampuria et al. | Apr 2004 | A1 |
20050036485 | Eilers et al. | Feb 2005 | A1 |
20050114846 | Banks et al. | May 2005 | A1 |
20050213498 | Appanna et al. | Sep 2005 | A1 |
20060002343 | Nain et al. | Jan 2006 | A1 |
20060004942 | Hetherington et al. | Jan 2006 | A1 |
20060018253 | Windisch et al. | Jan 2006 | A1 |
20060018333 | Windisch et al. | Jan 2006 | A1 |
20060090136 | Miller et al. | Apr 2006 | A1 |
20060143617 | Knauerhase et al. | Jun 2006 | A1 |
20060171404 | Nalawade et al. | Aug 2006 | A1 |
20060176804 | Shibata | Aug 2006 | A1 |
20060224826 | Arai et al. | Oct 2006 | A1 |
20060274649 | Scholl | Dec 2006 | A1 |
20070027976 | Sasame et al. | Feb 2007 | A1 |
20070036178 | Hares et al. | Feb 2007 | A1 |
20070076594 | Khan et al. | Apr 2007 | A1 |
20070162565 | Hanselmann | Jul 2007 | A1 |
20070189213 | Karino et al. | Aug 2007 | A1 |
20080022410 | Diehl | Jan 2008 | A1 |
20080068986 | Maranhao et al. | Mar 2008 | A1 |
20080120518 | Ritz et al. | May 2008 | A1 |
20080159325 | Chen et al. | Jul 2008 | A1 |
20080189468 | Schmidt et al. | Aug 2008 | A1 |
20080201603 | Ritz et al. | Aug 2008 | A1 |
20080222633 | Kami | Sep 2008 | A1 |
20080225859 | Mitchem | Sep 2008 | A1 |
20080243773 | Patel et al. | Oct 2008 | A1 |
20080244222 | Supalov et al. | Oct 2008 | A1 |
20090028044 | Windisch et al. | Jan 2009 | A1 |
20090036152 | Janneteau et al. | Feb 2009 | A1 |
20090049537 | Chen et al. | Feb 2009 | A1 |
20090080428 | Witkowski et al. | Mar 2009 | A1 |
20090086622 | Ng | Apr 2009 | A1 |
20090092135 | Simmons et al. | Apr 2009 | A1 |
20090094481 | Vera et al. | Apr 2009 | A1 |
20090106409 | Murata | Apr 2009 | A1 |
20090198766 | Chen et al. | Aug 2009 | A1 |
20090219807 | Wang | Sep 2009 | A1 |
20090245248 | Arberg et al. | Oct 2009 | A1 |
20090316573 | Lai | Dec 2009 | A1 |
20100017643 | Baba et al. | Jan 2010 | A1 |
20100039932 | Wen et al. | Feb 2010 | A1 |
20100107162 | Edwards et al. | Apr 2010 | A1 |
20100169253 | Tan | Jul 2010 | A1 |
20100257269 | Clark | Oct 2010 | A1 |
20100287548 | Zhou et al. | Nov 2010 | A1 |
20100325381 | Heim | Dec 2010 | A1 |
20100325485 | Kamath et al. | Dec 2010 | A1 |
20110023028 | Nandagopal et al. | Jan 2011 | A1 |
20110072327 | Schoppmeier et al. | Mar 2011 | A1 |
20110125949 | Mudigonda et al. | May 2011 | A1 |
20110126196 | Cheung et al. | May 2011 | A1 |
20110154331 | Ciano et al. | Jun 2011 | A1 |
20110228770 | Dholakia et al. | Sep 2011 | A1 |
20110228771 | Dholakia et al. | Sep 2011 | A1 |
20110228772 | Dholakia et al. | Sep 2011 | A1 |
20110228773 | Dholakia et al. | Sep 2011 | A1 |
20110231578 | Nagappan et al. | Sep 2011 | A1 |
20120023319 | Chin et al. | Jan 2012 | A1 |
Number | Date | Country |
---|---|---|
0887731 | Dec 1998 | EP |
0926859 | Jun 1999 | EP |
1107511 | Jun 2001 | EP |
2084605 | May 2009 | EP |
WO 2008054997 | May 2008 | WO |
Entry |
---|
U.S. Appl. No. 09/703,057, filed Oct. 31, 2000, Brewer et al. |
U.S. Appl. No. 12/823,073, filed Jun. 24, 2010, Nagappan et al. |
U.S. Appl. No. 12/913,572, filed Oct. 27, 2010, Dholakia et al. |
U.S. Appl. No. 12/913,598, filed Oct. 27, 2010, Dholakia et al. |
U.S. Appl. No. 12/913,612, filed Oct. 27, 2010, Dholakia et al. |
U.S. Appl. No. 12/913,650, filed Oct. 27, 2010, Dholakia et al. |
Braden et al., “Integrated Services in the Internet Architecture: an Overview,” Jul. 1994, RFC 1633, Network Working Group, pp. 1-28. |
“Brocade Serveriron ADX 1000, 4000, and 8000 Series Frequently Asked Questions,” pp. 1-10, Copyright 2009, Brocade Communications Systems, Inc. |
Chen, “New Paradigm in Application Delivery Networking: Advanced Core Operating System (ACOS) and Multi-CPU Architecture—They Key to Achieving Availability, Scalability and Preformance.” White Paper, May 2009, A10 Networks, 5 pages. |
Cisco IP Routing Handbook, Copyright 2000, pp. 22-23, 119-135, and 405-406, M&T Books. |
Demers et al., “Analysis and Simulation of a Fair Queueing Algorithm,” Xerox PARC, Copyright 1989, pp. 1-12, ACM. |
Extreme v. Enterasys WI Legal Transcript of Stephen R. Haddock, May 7, 2008, vol. 2, 2 pages. |
Floyd et al., “Link-sharing and Resource Management Models for Packet Networks,” IEEE/ACM Transactions on Networking, Aug. 1995, vol. 3, No. 4, Copyright 1995, IEEE, pp. 1-22. |
Freescale Semiconductor, Inc., “Freescale's Embedded Hypervisor for QorIQ™ P4 Series Communications Platform,” White Paper, Oct. 2008, Copyright 2008, pp. 1-8, Document No. EMHYPQIQTP4CPWP, Rev. 1. |
Freescale Semiconductor, Inc., “Embedded Multicore: An Introduction,” Jul. 2009, Copyright 2009, 73 pages, Document No. EMBMCRM, Rev. 0. |
“GIGAswitch FDDI System—Manager's Guide,” Part No. EK-GGMGA-MG.B01, Jun. 1993 first printing, Apr. 1995 second printing, Copyright 1995, Digital Equipment Corporation, Maynard, MA, 113 pages. |
“GIGAswitch System—Manager's Guide,” Part No. EK-GGMGA-MG.A01, Jun. 1993, Copyright 1993, Digital Equipment Corporation, Maynard, MA, 237 pages. |
Hemminger, “Delivering Advanced Application Acceleration & Security,” Application Delivery Challenge, Jul. 2007, pp. 1-3. |
Kaashoek et al., “An Efficient Reliable Broadcast Protocol,” Operating System Review, Oct. 4, 1989, 15 pages. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks; the internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 1 of 5, May 15, 1997, Copyright 1997 by AT&T, Addison-Wesley Publishing Company, pp. 1-129. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks; the internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 2 of 5, May 15, 1997, Copyright 1997 by AT&T, Addison-Wesley Publishing Company, pp. 130-260. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks; the internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 3 of 5, May 15, 1997, Copyright 1997 by AT&T, Addison-Wesley Publishing Company, pp. 261-389. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks; the internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 4 of 5, May 15, 1997, Copyright 1997 by AT&T, Addison-Wesley Publishing Company, pp. 390-519. |
Keshav, “An Engineering Approach to Computer Networking: ATM Networks; the internet, and the Telephone Network,” Addison-Wesley Professional Computing Series, part 5 of 5, May 15, 1997, Copyright 1997 by AT&T, Addison-Wesley Publishing Company, pp. 520-660. |
May, et al., “An Experimental Implementation of Traffic Control for IP Networks,” 1993, Sophia-Antipolis Cedex, France, 11 pages. |
Moy, “OSPF Version 2,” Network Working Group, RFC 2328, Apr. 1998, 204 pages. |
Order Granting/Denying Request for Ex Parte Reexamination for U.S. Appl. No. 90/010,432, mailed on May 21, 2009, 18 pages. |
Order Granting/Denying Request for Ex Parte Reexamination for U.S. Appl. No. 90/010,433, mailed on May 22, 2009, 15 pages. |
Order Granting/Denying Request for Ex Parte Reexamination for U.S. Appl. No. 90/010,434, mailed on May 22, 2009, 20 pages. |
Pangal, “Core Based Virtualization—Secure, Elastic and Deterministic Computing is Here . . . ,” Blog Posting, May 26, 2009, 1 page, printed on Jul. 13, 2009, at URL: http://community.brocade.com/home/community/brocadeblogs/wingspan/blog/tags/serveri. . . . |
Partridge, “A Proposed Flow Specification,” Sep. 1992, RFC 1363, Network Working Group, pp. 1-20. |
Riggsbee, “From ADC to Web Security, Serving the Online Community,” Blog Posting, Jul. 8, 2009, 2 pages, printed on Dec. 22, 2009, at URL: http://community.brocade.com/home/community/brocadeblogs/wingspan/blog/2009/07/0 . . . . |
Riggsbee, “You've Been Warned, the Revolution Will Not Be Televised,” Blog Posting, Jul. 9, 2009, 2 pages, printed on Dec. 22, 2009, at URL: http://community.brocade.com/home/community/brocadeblogs/wingspan/blog/2009/07/0 . . . . |
Schlansker, et al., “High-Performance Ethernet-Based Communications for Future Multi-Core Processors,” SC07 Nov. 10-16, 2007, 12 pages, Copyright 2007, ACM. |
TCP/IP Illustrated, vol. 2: The Implementation, Gray R. Wright and W. Richard Stevens, Addison-Wesley 1995, pp. 64, 97, 128,158,186,207,248,277,305,340,383,398,437,476,572,680,715,756,797,1028, and 1051. |
Wolf, et al., “Design Issues for High-Performance Active Routers,” IEEE Journal on Selected Areas in Communications, IEEE, Inc. New York, USA, Mar. 2001, vol. 19, No. 3, Copyright 2001, IEEE, pp. 404-409. |
European Search Report for Application No. EP 02254403, dated Mar. 18, 2003, 3 pages. |
European Search Report for Application No. EP 02256444, dated Feb. 23, 2005, 3 pages. |
Non-Final Office Action for U.S. Appl. No. 09/896,228, mailed on Jul. 29, 2005, 17 pages. |
Non-Final Office Action for U.S. Appl. No. 09/896,228, mailed on Sep. 7, 2006, 17 pages. |
Non-Final Office Action for U.S. Appl. No. 09/896,228, mailed on Mar. 5, 2007, 14 pages. |
Final Office Action for U.S. Appl. No. 09/896,228, mailed Aug. 21, 2007, 15 pages. |
Notice of Allowance for U.S. Appl. No. 09/896,228, mailed on Jun. 17, 2008, 20 pages. |
Non-Final Office Action for U.S. Appl. No. 09/953,714, mailed on Dec. 21, 2004, 16 pages. |
Final Office Action for U.S. Appl. No. 09/953,714, mailed on Jun. 28, 2005, 17 pages. |
Non-Final Office Action for U.S. Appl. No. 09/953,714, mailed on Jan. 26, 2006, 15 pages. |
Final Office Action for U.S. Appl. No. 09/953,714, mailed on Aug. 17, 2006, 17 pages. |
Notice of Allowance for U.S. Appl. No. 09/953,714, mailed on Sep. 14, 2009, 6 pages. |
Notice of Allowance for U.S. Appl. No. 09/953,714, mailed on Feb. 5, 2010, 10 pages. |
Non-Final Office Action for U.S. Appl. No. 12/333,029, mailed on May 27, 2010, 29 pages. |
Non-Final Office Action for U.S. Appl. No. 12/210,957, mailed on Sep. 2, 2009, 16 pages. |
Notice of Allowance for U.S. Appl. No. 12/210,957, mailed on Feb. 4, 2010, 10 pages. |
Non-Final Office Action for U.S. Appl. No. 12/333,029, mailed on Mar. 30, 2012, 15 pages. |
Non-Final Office Action for U.S. Appl. No. 12/626,432 mailed on Jul. 12, 2012, 13 pages. |
Non-Final Office Action for U.S. Appl. No. 12/913,572 mailed on Aug. 3, 2012, 6 pages. |
Non-Final Office Action for U.S. Appl. No. 12/823,073 mailed on Aug. 6, 2012, 21 pages. |
Notice of Allowance for U.S. Appl. No. 12/333,029 mailed on Aug. 17, 2012, 7 pages. |
CISCO Systems, Inc., “BGP Support for Nonstop Routing (NSR) with Stateful Switchover (SSO).” Mar. 20, 2006, pp. 1-18. |
CISCO Systems, Inc., “Graceful Restart, Non Stop Routing and IGP routing protocol timer Manipulation,” Copyright 2008, pp. 1-4. |
CISCO Systems, Inc., “Intermediate System-to-Intermediate System (IS-IS) Support for Graceful Restart (GR) and Non-Stop Routing (NSR),” Copyright 2008, pp. 1-3. |
CISCO Systems, Inc., “Internet Protocol Multicast,” Internetworking Technologies Handbook, 2000, 3rd Edition, Chapter 43, pp. 43-1 through 43-16. |
CISCO Systems, Inc., “Multicast Quick—Start Configuration Guide,” Document ID:9356, Copyright 2008-2009, 15 pages. |
CISCO Systems, Inc., “Warm Reload,” CISCO IOS Releases 12.3(2)T, 12.2(18)S, and 12.2(27)SBC, Copyright 2003, pp. 1-14. |
Fenner, et al., “Protocol Independent Multicast—Sparse Mode (PIM-SM): Protocol Specification (Revised).” Network Working Group, RFC 4601, Aug. 2006, pp. 1-151. |
Hardwick, “IP Multicast Explained,” Metaswitch Networks, Jun. 2004, pp. 1-68. |
IP Infusion Brochure, “ZebOS® Network Platform: Transporting You to Next Generation Networks,” ip infusion™ An Access Company, Jun. 2008, pp. 1-6. |
Kakadia, et al., “Enterprise Network Design Patterns: High Availability” Sun Microsystems, Inc., Sun BluePrints™ Online, Revision A, Nov. 26, 2003, pp. 1-35, at URL: http://www.sun.com/blueprints. |
Kaplan, “Part 3 in the Reliability Series: NSR™ Non-Stop Routing Technology,” White Paper, Avici Systems, Copyright 2002, pp. 1-8. |
Khan, “IP Routing Use Cases,” Cisco Press, Sep. 22, 2009, pp. 1-16, at URL:http://www.ciscopress.com/articles/printerfriendly.asp?p=1395746. |
Lee, et al., “Open Shortest Path First (OSPF) Conformance and Performance Testing,” White Papers, Ixia—Leader in Convergence IP Testing, Copyright 1998-2004, pp. 1-17. |
Manolov, et al., “An Investigation into Multicasting, Proceedings of the 14th Annual Workshop on Architecture and System Design,” (ProRISC2003), Veldhoven, The Netherlands, Nov. 2003, pp. 523-528. |
Pepelnjak, et al., “Using Multicast Domains,” informIT, Jun. 27, 2003, pp. 1-29, at URL:http://www.informit.com/articles/printerfriendly.aspx?p=32100. |
Product Category Brochure, “J Series, M Series and MX Series Routers—Juniper Networks Enterprise Routers—New Levels of Performance, Availability, Advanced Routing Features, and Operations Agility for Today's High-Performance Businesses,” Juniper Networks, Nov. 2009, pp. 1-11. |
Rodbell, “Protocol Independent Multicast—Sparse Mode,” CommsDesign, Dec. 19, 2009, pp. 1-5, at URL: http://www.commsdesign.com/main/9811/9811standards.htm. |
Non-Final Office Action for U.S. Appl. No. 12/913,598 mailed on Sep. 6, 2012, 10 pages. |
Non-Final Office Action for U.S. Appl. No. 12/913,612 mailed on Sep. 19, 2012, 11 pages. |
Non-Final Office Action for U.S. Appl. No. 12/913,650 mailed on Oct. 2, 2012, 9 pages. |
Notice of Allowance for U.S. Appl. No. 12/913,572 mailed on Nov. 21, 2012, 7 pages. |
Final Office Action for U.S. Appl. No. 12/823,073 mailed on Jan. 23, 2013, 23 pages. |
Intel® Virtualization Technology, Product Brief, “Virtualization 2.0—Moving Beyond Consolidation”, 2008, 4 pages. |
VMWARE., “Automating High Availability (HA) Services With VMware HA”, VMware Infrastructure, Copyright ® 1998-2006, 15 pages. |
VMWARE, “Resource Management with Vmware DRS”, VMware Infrastructure, Copyright ® 1998-2006, 24 pages. |
VMWARE, “Dynamic Balancing and Allocation of Resources for Virtual Machines”, Product Datasheet, Copyright ® 1998-2006, 2 pages. |
Quickspecs, “HP Online VM Migration (for HP Integrity Virtual Machines)”, Wordwide—Version 4, Sep. 27, 2010, 4 pages. |
VMWARE, “Live Migration for Virtual Machines Without Service Interruption”, Product Datasheet, Copyright ® 2009 Vmware, Inc., 4 pages. |
Burke, “Vmware Counters Oracle, Microsoft With Free Update”, Nov. 13, 2007, 2 pages. |
Final Office Action for U.S. Appl. No. 12/626,432 mailed on Apr. 12, 2013, 14 pages. |
Notice of Allowance for U.S. Appl. No. 12/913,598 mailed on Mar. 12, 2013, 5 pages. |
Notice of Allowance for U.S. Appl. No. 12/913,650 mailed on Mar. 25, 2013, 6 pages. |
Number | Date | Country | |
---|---|---|---|
20120023309 A1 | Jan 2012 | US |