The present disclosure relates to computing systems that provide redundant computing domains. More particularly, in a computing system comprising multiple computing domains, techniques are provided for intentionally biasing the race to gain mastership between competing computing domains in favor of a particular computing domain.
In an effort to reduce downtime and increase availability, computing systems often provide redundant computing domains. In a typical configuration, during normal operations, one of the computing domains is configured to operate in an active mode and perform a set of functions of the computing system while the other computing domain is configured to operate in standby (or passive) mode in which the set of functions performed by the active computing domain are not performed. The standby computing domain remains ready to take over the functions performed by the active computing domain, and in the process become the active computing domain, in case of any event affecting the functionality of the current active computing domain. The process of a standby computing domain becoming the active computing domain is referred to as a failover. As a result of the failover, the computing domain that operated in active mode prior to the failover may then operate in standby mode.
The active-standby model mentioned above is used in various fields to provide enhanced system availability. For example, in the networking field, redundancies are provided at various levels to achieve high availability and minimize data loss. For example, in some network environments, redundant network devices are provided with one network device operating in active mode (the active network device) and the other operating in standby (or passive) mode (the standby network device). The active network device performs the data forwarding-related functions, which the standby network device does not perform. Upon a failover, which may occur, for example, due to an error on the active device, the standby device becomes the active device and takes over data forwarding functionality from the previously active device. The previously active device may then operate in standby mode. The active-standby model using two network devices strives to reduce interruptions in data forwarding.
Redundancies may also be provided within a network device. For example, a network device may comprise multiple cards (e.g., multiple management cards, multiple line cards), each card having its own one or more physical processors. One card may be configured to operate in active mode while the other operates in standby mode. The active card performs the data forwarding and/or management-related functions while the redundant second card operates in standby mode. Upon a failover, the standby card becomes the active card and starts performing the functions performed in active mode. The previously active card may then operate in standby mode.
The active-standby model may also be provided in a system comprising a single multicore processor. For example, as described in U.S. Pat. No. 8,495,418, two partitions may be created in such a system with each partition being allocated one or more cores of the multiple cores of the processor. The partitions may be configured such that one partition operates in active mode while another operates in standby mode. In this manner, a single processor is able to provide active-standby functionality, thereby enhancing the availability of the system comprising the processor.
In a system comprising multiple computing domains configured to operate according to the active-standby model, when the system is power cycled, the multiple computing domains are booted and then compete with each other to become the active computing domain. This competition between the computing domains to become the active computing domain is commonly referred to as the race to gain mastership. Only one computing domain “wins” this race to gain mastership and becomes the active computing domain. The other “losing” computing domain becomes the standby computing domain. In conventional systems, which of the multiple computing domains becomes active is arbitrary.
The present disclosure relates to computing systems that provide multiple computing domains configured to operate according to an active-standby model. In such a computing system, techniques are provided for intentionally biasing the race to gain mastership between competing computing domains (i.e., to determine which computing domain operates in the active mode), in favor of a particular computing domain. The race to gain mastership may be biased in favor of a computing domain operating in a particular mode prior to the occurrence of the event that triggered the race to gain mastership. For example, in certain embodiments, the race to gain mastership may be biased in favor of the computing domain that was operating in the active mode prior to the occurrence of an event that triggered the race to gain mastership.
In certain embodiments, the biasing of the race to gain mastership in favor of a particular computing domain is time limited to a particular period of time. If the particular computing domain towards which the race is biased (e.g., the computing domain that operated in active mode prior to the occurrence of the event that triggered the race to gain mastership) is unable to become the active computing domain before the expiry of that particular time period, the biasing is removed and the race is then opened to all the competing computing domains. This time-limited biasing provides a mechanism to recover in the scenario where the computing domain towards which the race is biased has some problems and cannot become the active computing domain. In this scenario, the other computing domain is automatically, without any human intervention, provided the opportunity to become the active computing domain. In certain embodiments, systems, methods, code or instructions executed by one or more processing units are provided that enable the time-limited biasing described above.
For example, a computing system can comprise a first computing domain and a second computing domain. An event such as power being cycled to the system or a reset or reboot of the computing domains may cause a race to gain mastership to be triggered in the computing system between the two computing domains. Upon the occurrence of an event that triggers the race to gain mastership, each computing domain may execute arbitration logic to determine which computing domain becomes the active computing domain. The processing may be performed in parallel by the first and second computing domains.
In certain embodiments, as a result of executing the arbitration logic, a particular computing domain is configured to determine whether it operated in a mode (e.g., a first mode) towards which the race to gain mastership is to be biased. Upon determining that the computing domain did not operate in the first mode prior to the occurrence of the event triggering the race to gain mastership, the computing domain refrains, for a period of time, from attempting to start operating in the first mode. During this period of time, the other computing domain participating in the race to gain mastership is allowed to attempt to start operating in the first mode unhindered. In this manner, for the period of time, the computing domain against which the race to gain mastership is to be biased does not compete in the race to gain mastership while the computing domain in favor of which the race to gain mastership is to be biased competes uninhibited. After the period of time has passed, the particular computing domain that has refrained from participating in the race to gain mastership determines whether the other computing domain is operating in the first mode. Upon determining that the other computing domain is operating in the first mode, i.e., has won the race to gain mastership, the particular computing domain that refrained from the race for the period of time starts operating in a second mode different from the first mode. Upon determining that the other computing domain is not operating in the first mode, i.e., has not already won the race to gain mastership, the particular computing domain that refrained from the race for the period of time is now allowed to attempt to operate in the first mode.
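For illustration only, the arbitration processing described above may be sketched in C as follows. The sketch assumes hypothetical platform hooks (was_in_first_mode, other_domain_in_first_mode, try_enter_first_mode, enter_second_mode, sleep_ms) and an illustrative bias period; none of these names or values are part of the disclosure.

    #include <stdbool.h>

    /* Hypothetical platform hooks (not part of the disclosure): */
    extern bool was_in_first_mode(void);          /* read prior mode from non-volatile memory */
    extern bool other_domain_in_first_mode(void); /* read shared-memory mastership information */
    extern void try_enter_first_mode(void);       /* compete in the race to gain mastership */
    extern void enter_second_mode(void);          /* operate in the second (e.g., standby) mode */
    extern void sleep_ms(unsigned ms);

    #define BIAS_PERIOD_MS 5000u  /* illustrative period for which the race is biased */

    void arbitrate(void)
    {
        if (was_in_first_mode()) {
            /* The race is biased in this domain's favor: compete immediately. */
            try_enter_first_mode();
            return;
        }

        /* The race is biased against this domain: refrain for the bias period. */
        sleep_ms(BIAS_PERIOD_MS);

        if (other_domain_in_first_mode()) {
            /* The favored domain won the race within the period. */
            enter_second_mode();
        } else {
            /* The bias period expired with no winner: the race is now open. */
            try_enter_first_mode();
        }
    }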
In certain embodiments, the race to gain mastership may be biased in favor of a computing domain that operated in the active mode prior to the occurrence of the event that triggered the race to gain mastership. The computing domain operating in active mode may be configured to perform a set of functions that are not performed by the computing domain operating in standby mode (i.e., by the standby computing domain). In other embodiments, the race to gain mastership can be biased in favor of a computing domain that operated in the standby mode prior to the occurrence of the event that triggered the race to gain mastership. In this manner, the race to gain mastership can be biased in favor of a computing domain operating in a particular mode prior to occurrence of the event triggering the race to gain mastership. The biasing is time-limited for a period of time. Different techniques may be used for calculating this period of time.
In certain embodiments, whether or not a particular computing domain refrains from participating in the race to gain mastership for a period of time is determined based upon the mode in which the particular computing domain operated prior to the occurrence of the event that triggered the race to gain mastership. The particular computing domain may determine this based upon information stored in non-volatile memory prior to the occurrence of the event causing the race to gain mastership, where the stored information can be used to determine whether or not the particular computing domain operated in the mode in favor of which the race to gain mastership is to be biased. For example, the particular computing domain may read information from a non-volatile memory, the information stored in the non-volatile memory prior to the occurrence of the event triggering the race to gain mastership. Based upon this information, the particular computing domain may determine whether it is to refrain from participating in the race to gain mastership for a period of time.
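As a non-limiting sketch of the mechanism just described, the mode-indicating information might be persisted and read back as follows, with an ordinary file standing in for the non-volatile memory; the path, format, and function names are illustrative assumptions, and the reader function could back the was_in_first_mode() hook used in the earlier sketch.

    #include <stdbool.h>
    #include <stdio.h>

    #define MODE_FLAG_PATH "/nvm/active_flag"  /* hypothetical non-volatile location */

    /* Called during normal operation (e.g., after a failover) to record the mode. */
    void persist_mode(bool active)
    {
        FILE *f = fopen(MODE_FLAG_PATH, "w");
        if (f != NULL) {
            fputc(active ? '1' : '0', f);
            fclose(f);
        }
    }

    /* Called after reboot: did this domain operate in active mode before the event? */
    bool was_in_first_mode(void)
    {
        FILE *f = fopen(MODE_FLAG_PATH, "r");
        if (f == NULL)
            return false;  /* absence of the flag indicates standby mode (see below) */
        int c = fgetc(f);
        fclose(f);
        return c == '1';
    }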
After refraining from participating in the race to gain mastership for a period of time, a computing domain is configured to determine if the other computing domain has already won the race to gain mastership, i.e., is already operating in the first mode, and take appropriate actions based upon this determination. In certain embodiments, the computing domain may make this determination based upon information stored in a memory location, the memory location being readable and writable by both the computing domains. For example, a bit may be stored in memory indicative of whether the race to gain mastership has already been won. Based upon this bit information, the computing domain can determine whether the other computing domain is operating in the first mode.
In certain embodiments, each computing domain in a computing system is allocated its own set of resources, such as processing resources, memory resources, and the like. For example, in a computing system comprising a first computing domain and a second computing domain, a first set of one or more processing units can be allocated to the first computing domain and a second set of one or more processing units can be allocated to the second computing domain. The processing resources allocated to a computing domain can include one or more processing units. A processing unit can be a processor or a core of a multicore processor. Accordingly, the one or more processing resources allocated to a computing domain can include a single processor, a single core, multiple processors, multiple cores, or various combinations of cores and processors.
In certain embodiments, the computing system may be embodied in a network device configured to forward data (e.g., data packets). The network device can comprise a set of one or more ports for forwarding one or more data packets from the network device, a plurality of processing units, a first computing domain that is allocated a first set of one or more processing units from the plurality of processing units, and a second computing domain that is allocated a second set of one or more processing units from the plurality of processing units. In response to the occurrence of an event that triggers a race to gain mastership, the first computing domain may be configured to refrain from attempting to start operating in a first mode for a period of time. In response to the occurrence of the event, the second computing domain may be configured to attempt to start operating in the first mode during the period of time. After the period of time has passed, the first computing domain may be configured to, if the second computing domain is operating in the first mode, start to operate in a second mode different from the first mode, and, if the second computing domain is not operating in the first mode, attempt to operate in the first mode.
In one embodiment, the network device may be configured such that a set of functions is performed by the first computing domain or the second computing domain when operating in the first mode and not performed when operating in the second mode. In another embodiment, the network device may be configured such that a set of functions is performed when operating in the second mode and is not performed when operating in the first mode. The first computing domain may be configured to, based upon information stored prior to the occurrence of the event, determine whether the first computing domain operated in the first mode prior to the occurrence of the event.
In certain embodiments, the first computing domain in the network device corresponds to a first card of the network device and the second computing domain corresponds to a second card of the network device. In some other embodiments, the first computing domain and the second computing domain may be located on a card of the network device.
In certain embodiments, the first computing domain in the network device may be allocated a first set of one or more processing units and the second computing domain may be allocated a second set of one or more processing units. A processing unit can be a processor or a core of a multicore processor. Accordingly, the processing resources allocated to a computing domain in the network device can include a single processor, a single core, multiple processors, multiple cores, or various combinations of cores and processors.
The foregoing, together with other features and embodiments, will become more apparent upon referring to the following specification, claims, and accompanying drawings.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of embodiments of the invention. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive.
The present disclosure relates to computing systems that provide multiple computing domains configured to operate according to an active-standby model. In such a computing system, techniques are provided for intentionally biasing the race to gain mastership between competing computing domains, i.e., to determine which of the computing domains operates in the active mode, in favor of a particular computing domain. The race to gain mastership may be biased in favor of a computing domain operating in a particular mode prior to the occurrence of the event that triggered the race to gain mastership. For example, in certain embodiments, the race to gain mastership may be biased in favor of the computing domain that was operating in the active mode prior to the occurrence of an event that triggered the race to gain mastership.
Computing system 100 may be configured to provide multiple computing domains such as a computing domain “A” (CD_A) 102 and a computing domain “B” (CD_B) 104 depicted in FIG. 1.
In certain embodiments, a computing domain can logically be considered as a collection of resources. Each computing domain may be allocated its own share of system resources of computing system 100, including but not limited to processing resources, memory resources, input/output (I/O) resources, networking resources, and other types of resources. For example, as shown in FIG. 1, CD_A 102 and CD_B 104 may each be allocated their own processing and memory resources.
The resources allocated to a computing domain can be accessed and used only by that computing domain and are not accessible to the other computing domains. In this manner, each computing domain has its own secure and private set of resources. A computing domain may use its allocated resources and operate independently of the other computing domains in computing system 100. For example, a first computing domain may execute a first set of programs independently of a second computing domain that may be executing a different set of programs possibly in parallel with the first computing domain. In some embodiments, one computing domain may not even be aware of the other computing domains within computing system 100.
In some embodiments, certain resources of computing system 100 may be configured as shared resources 122 that can be accessed and used by multiple computing domains. For example, in the embodiment depicted in FIG. 1, shared resources 122 may include memory that is readable and writable by both CD_A 102 and CD_B 104.
In some embodiments, an administrator or user of computing system 100 may configure the number of computing domains provided by computing system 100, the resources allocated to each computing domain, and the shared resources. For example, the administrator may specify configuration information identifying the number of computing domains to be set up, the resources to be allocated to each computing domain, information related to shared resources, and other configuration information. In some embodiments, when computing system 100 is powered up, after booting up (or as part of its boot up sequence), system 100 can be configured to read the configuration information, and per the configuration information, create one or more computing domains, allocate resources to the computing domains, and configure shared resources.
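By way of a sketch only, the configuration information described above might be represented by a structure along the following lines; the field names and the two-domain layout are illustrative assumptions, not a definition from the disclosure.

    #include <stddef.h>

    struct domain_spec {
        unsigned num_cores;        /* processing units allocated to the domain */
        size_t   volatile_mem;     /* bytes of system memory allocated */
        size_t   non_volatile_mem; /* bytes of persistent memory allocated */
    };

    struct system_config {
        unsigned           num_domains; /* e.g., 2 for an active-standby pair */
        struct domain_spec domain[2];   /* per-domain resource allocations */
        size_t             shared_mem;  /* memory configured as a shared resource */
    };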
The processing resources allocated to a computing domain can include one or more processing units. A processing unit can be a processor or a core of a multicore processor. Accordingly, the processing resources allocated to a computing domain can include a single processor, a single core, multiple processors, multiple cores, or various combinations of cores and processors.
The memory resources allocated to a computing domain can include volatile and non-volatile memory resources. Volatile memory resources, also sometimes referred to as system memory resources, may include random access memory (RAM), and the like. Non-volatile memory resources may include flash memory, disk memory, and the like. For example, in the embodiment depicted in FIG. 1, CD_A 102 is allocated non-volatile memory 130 and CD_B 104 is allocated non-volatile memory 134, with each computing domain also allocated its own volatile memory.
The volatile memory of a computing domain may store various data used by the computing domain during runtime operations. For example, the volatile memory assigned to a computing domain may store, during runtime, an operating system for the computing domain and data related to one or more entities executed by the computing domain. The data may include code or programs or instructions that are executed by the processing resources of the computing domain and other data. The entities executed by a computing domain may include, without restriction, an application, a process, a thread, an operating system, a device driver, and the like. For example, in the embodiment depicted in FIG. 1, the volatile memory allocated to each of CD_A 102 and CD_B 104 may store the operating system and the programs executed by that computing domain.
The non-volatile memory of a computing domain may store data that is to be persisted, for example, data that is to be persisted even when computing system 100 or the particular computing domain is power cycled. For example, data used or written by CD_A 102 that is to be persisted across a power cycle or a boot up of CD_A 102 may be stored in non-volatile memory 130 of CD_A 102.
The I/O and networking resources allocated to computing domains can include various hardware resources, access to various communication buses (e.g., access to PCIe), Ethernet interfaces, resources such as security engines, queue managers, buffer managers, pattern matching engines, direct memory access (DMA) engines, and the like. The I/O and networking resources of computing system 100 may be allocated exclusively to each computing domain or alternatively may be shared between multiple computing domains. For example, in one embodiment, a private Ethernet interface may be assigned to each computing domain, while access to PCIe may be shared between the computing domains.
In a computing system, such as computing system 100, that can be configured to provide multiple computing domains, in an effort to reduce downtime and increase availability of the computing system, during normal operations, one of the computing domains can be configured to operate in an active mode while the other computing domain operates in standby (or passive) mode. For example, in the embodiment depicted in FIG. 1, CD_A 102 may operate in active mode while CD_B 104 operates in standby mode.
The set of functions that are performed in active mode and not performed in standby mode depend upon the context of use of computing system 100. For example, if computing system 100 were a network device such as a router or switch, the active computing domain may be configured to perform functions related to data forwarding such as managing network topology information, maintaining routing tables information, managing and programming I/O devices such as I/O ports of the network device, programming forwarding hardware, executing various network protocols and maintaining protocol/state information, maintaining timing information/logs, and the like. These functions are not performed by the standby computing domain.
According to the active-standby model, the standby computing domain remains ready to take over and perform the functions performed by the active computing domain when an event occurs that results in the current active computing domain not being able to perform the set of functions. When such an event occurs, the standby computing domain is configured to start operating in the active mode (i.e., become the active computing domain) and start performing the set of functions that are performed in the active mode. The process of a standby computing domain becoming the active computing domain and taking over performance of the set of functions from the previous active computing domain is referred to as a failover. As a result of the failover, the previous active computing domain, i.e., the computing domain operating in the active mode prior to the failover, may be reset and then operate in the standby mode.
For example, if computing system 100 is a network device, the new active computing domain may start performing the data forwarding functions that were previously performed by the previous active computing domain, preferably without any impact on the functionality of the network device or without any loss of data. A failover thus increases the availability of the network device while ensuring continued forwarding operations without any data loss.
An event that causes a failover (a failover event) can, at a high level, be categorized into one of the following two categories: voluntary failover events and involuntary failover events.
A voluntary failover event is one that causes the active computing domain to voluntarily initiate a failover and yield control to the standby computing domain. An example of such an event is receiving a command line instruction from a network administrator or user of the computing system to perform a voluntary failover. Upon receiving this command, the active computing domain initiates failover processing as a result of which the standby computing domain becomes the active computing domain and the previous active computing domain may become the standby computing domain.
There are various situations when this may be performed. As one example, a voluntary failover may be performed when software on the active computing domain is to be upgraded. In this situation, an administrator may voluntarily issue a command or instruction to cause a failover to occur. For further details related to performing a failover in a network device to perform a software upgrade, please refer to U.S. Pat. Nos. 7,188,237, 7,284,236, and 8,495,418.
As another example, a voluntary failover may be initiated by a system administrator upon noticing performance degradation on the active computing domain or upon noticing that software executed by the active computing domain is malfunctioning. In such situations, the network administrator may voluntarily issue a command for a failover with the hope that problems associated with the active computing domain will be remedied when the standby computing domain becomes the new active computing domain. Various interfaces, including a command line interface (CLI), may be provided for initiating a voluntary failover.
An involuntary failover event typically occurs due to some critical failure in the active computing domain. Examples include when a hardware watchdog timer goes off (or times out) and resets the active computing domain possibly due to a problem in the kernel of the operating system loaded for the active computing domain, critical failure of software executed by the active computing domain, loss of heartbeat, and the like. An involuntary failover event causes the standby computing domain to automatically become the active computing domain.
During normal operations, the active computing domain performs a set of functions that are not performed by the standby computing domain. In order to perform the set of functions, the active computing domain generally maintains various types of state information that is used by the active computing domain for performing these functions. For example, in a network device, the active computing domain may maintain state information comprising network topology information, routing tables, queue structures, data buffers, hardware specific state information such as configuration tables, port maps, etc., and other types of information. When the standby computing domain becomes the active computing domain after a failover, it also needs this state information in order to perform the functions that are performed in active mode and to do so in a non-disruptive manner. In some embodiments, the standby computing domain builds this state information after it becomes the active computing domain. In some other embodiments, the active computing domain may periodically send synchronization updates to the standby computing domain to synchronize the standby's state information with the active's state information. The active computing domain may communicate state information to the standby computing domain using, for example, a messaging mechanism. In one embodiment, the active computing domain is configured to periodically check if the state information on the standby computing domain is synchronized with the state information on the active computing domain. If not synchronized, then the active computing domain communicates state information to the standby computing domain to bring its state information in synchrony with the state information maintained by the active computing domain.
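Purely as an illustrative sketch of the periodic synchronization check described above, state versions might be compared as follows; the sequence-number scheme and all function names are hypothetical stand-ins for whatever messaging mechanism the system actually uses.

    #include <stdint.h>

    /* Hypothetical hooks into the state-synchronization machinery: */
    extern uint64_t local_state_version(void);      /* version of the active's state */
    extern uint64_t standby_acked_version(void);    /* last version acknowledged by the standby */
    extern void send_state_update_to_standby(void); /* messaging mechanism */

    /* Run periodically on the active computing domain. */
    void sync_standby_if_stale(void)
    {
        if (standby_acked_version() != local_state_version()) {
            /* The standby's state information is out of sync; push an update. */
            send_state_update_to_standby();
        }
    }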
Before a computing system comprising multiple computing domains can operate according to the active-standby model, an initial determination is made by the computing system as to which of the multiple computing domains will become the active computing domain and which will become the standby computing domain. For example, in computing system 100, a determination has to be made whether CD_A 102 or CD_B 104 will become the active computing domain. This processing is performed, for example, when computing system 100 is power cycled (e.g., when it is powered on, or it restarts due to power being cycled to the system).
In some embodiments, the determination of which computing domain will become the active computing domain is achieved by executing arbitration logic that results in one computing domain becoming the active computing domain and the other computing domain becoming the standby computing domain. As part of this arbitration logic, the multiple computing domains compete with each other to become the active computing domain. This competition between the computing domains to become the active computing domain (or the “master”) is commonly referred to as the race to gain mastership. Only one computing domain “wins” this race to gain mastership and becomes the active computing domain and the other “losing” computing domain becomes the standby computing domain.
In conventional systems, which of the competing computing domains becomes the active computing domain and wins the race to gain mastership is arbitrary. This arbitrariness however becomes a problem in certain situations. For example, as previously described, the active computing domain in a network device generally stores state information that is used by the active computing domain to perform the set of functions that are performed in the active mode. The standby computing domain may not have this state information when it becomes the active computing domain or, even if it has received synchronization updates from the active computing domain, may not have the most up-to-date information. Accordingly, in many situations, when a standby computing domain becomes the active computing domain, it has to first spend time and resources (e.g., processing and memory resources) to build this information. Thus, after the occurrence of an event that triggers a race to gain mastership (a mastership race triggering event), if the computing domain that was the standby computing domain prior to the event wins the race and becomes the active computing domain, it has to first spend time and resources to build this information. This, however, would not be the case if the race to gain mastership is won by the computing domain that was operating in active mode prior to the mastership race triggering event. It is thus desirable in this scenario that the race to gain mastership be won by the computing domain that was operating in active mode prior to the occurrence of the mastership race triggering event.
While, in the above networking example, it is desirable that the previously active computing domain wins the race to gain mastership and becomes the active computing domain after the occurrence of the mastership race triggering event, in some other situations it may be desirable that the computing domain operating as a standby prior to the occurrence of the mastership race triggering event wins the race to gain mastership and becomes the active computing domain. For example, consider the situation where the mastership race triggering event was caused by a fatal failure in the active computing domain. In this situation, it may be preferred that the standby computing domain wins the race to gain mastership and becomes the active computing domain. Accordingly, depending upon the context, it may be desirable that a computing domain operating in a particular mode, either active or standby, prior to the occurrence of the mastership race triggering event wins the race to gain mastership and becomes the active computing domain.
Certain embodiments of the present invention enable such intentional biasing towards a particular computing domain based upon the mode of operation of the computing domain prior to occurrence of the mastership race triggering event. Between multiple computing domains participating in the race to gain mastership, techniques are provided that automatically and intentionally bias the race to gain mastership in favor of a particular computing domain based upon the mode of operation of that particular computing domain prior to the occurrence of the mastership race triggering event. For example, in certain embodiments, the race to gain mastership may be biased in favor of the computing domain that was operating in the active mode prior to an event that triggered the race to gain mastership.
Additionally, the biasing is time limited. If the particular computing domain (e.g., the active computing domain prior to the occurrence of the mastership race triggering event) towards which the race is biased is unable to become the active computing domain within a particular time period, the biasing is removed after passage of that time period and the race is then opened to all the competing computing domains. This time-limited biasing provides a recovery mechanism to cover scenarios where the computing domain towards which the race is biased has some problems and cannot become the active computing domain.
In the embodiment depicted in FIG. 2, the race to gain mastership is biased in favor of the computing domain that was operating in active mode prior to the occurrence of the event that triggered the race to gain mastership.
In some embodiments, whether the processing is to be biased in favor of the computing domain that was operating in active mode prior to the occurrence of the event that triggered the race to gain mastership or whether the processing is to be biased in favor of the computing domain that was operating in standby mode prior to the occurrence of the event that caused the race to gain mastership may be preconfigured or specified by a user of computing system 100. For example, in one embodiment, the biasing information may be stored in a configuration file. In certain embodiments, a default may be preconfigured. For example, in the default, the processing may be biased in favor of the computing domain that was operating in active mode prior to the event that caused the race to gain mastership processing to be triggered. In some other embodiments, a user of computing system 100 may provide information indicative of the computing domain towards which the processing is to be biased, for example, via a command line instruction.
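As one purely illustrative representation of this configurable biasing information, it might be held in a structure such as the following; the names and the default value are assumptions, not part of the disclosure.

    enum bias_mode {
        BIAS_PREVIOUS_ACTIVE,  /* favor the domain that was active before the event */
        BIAS_PREVIOUS_STANDBY  /* favor the domain that was standby before the event */
    };

    struct mastership_config {
        enum bias_mode bias;     /* which prior mode the race favors */
        unsigned bias_period_ms; /* how long the biasing lasts */
    };

    /* Illustrative preconfigured default: favor the previously active domain. */
    static const struct mastership_config default_config = {
        .bias = BIAS_PREVIOUS_ACTIVE,
        .bias_period_ms = 5000u,
    };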
As depicted in FIG. 2, at 202, an event occurs that triggers a race to gain mastership between the computing domains of computing system 100.
Upon the occurrence of the event in 202, processing depicted in box 203 is then performed by each computing domain that competes in the race to gain mastership. The processing may be performed in parallel by the computing domains. For example, for computing system 100 depicted in FIG. 1, the processing depicted in box 203 may be performed in parallel by CD_A 102 and CD_B 104.
At 204, the computing domain performing the processing is booted. At 206, a check is then made as to whether the computing domain performing the processing was operating in the active mode prior to the occurrence of the event that triggered the race to gain mastership. In some embodiments, this is determined based upon information stored in non-volatile memory for the computing domain prior to the occurrence of the event that triggers the race to gain mastership.
In certain embodiments, during normal processing of computing system 100, a computing domain is configured to store information to its non-volatile memory indicative of the mode of operation (i.e., active mode or passive mode) of the computing domain. The computing domains may be configured to update this information when a failover occurs. For example, in FIG. 1, CD_A 102 may store mode-indicating information to its non-volatile memory 130 and CD_B 104 may store corresponding mode-indicating information to its non-volatile memory 134.
In some embodiments, as described above, both the computing domains may be configured to store operating mode information to their respective non-volatile memories. In some other embodiments, only the computing domain operating in a particular mode may be configured to store the mode indicating information in its non-volatile memory. For example, in certain embodiments, only the computing domain operating in active mode may be configured to store information in its non-volatile memory indicating that the computing domain was operating in active mode. For example, if CD_A 102 was operating in active mode prior to occurrence of the event in 202, then CD_A 102 may store “active flag” information 136 to non-volatile memory 130. In such an embodiment, no such information may be stored by CD_B 104 to its non-volatile memory 134 since it is operating in standby mode. In such embodiments, as part of the processing in 206, each computing domain may access its non-volatile memory to see if it stores any mode indicating information. The computing domain that is able to access and read this information from its non-volatile memory determines that it was operating in active mode prior to the occurrence of the event in 202. Alternatively, absence of this information from a computing domain's non-volatile memory indicates that the computing domain was operating in standby mode prior to the occurrence of the event in 202.
If it is determined in 206 that the computing domain was operating in active mode prior to the event occurrence in 202 (or more generally, was operating in the mode in favor of which the race to gain mastership is to be biased), then at 208, the computing domain participates in the race to gain mastership and attempts to start operating in the active mode.
If it is determined in 206 that the computing domain was operating in standby mode prior to the event occurrence in 202 (or more generally, was operating in a mode other than the mode in favor of which the race to gain mastership is to be biased), then at 210, a check is made to see if the other computing domain has already established mastership (i.e., has won the race to gain mastership) and started to operate in the active mode. If it is determined in 210 that the other computing domain has established mastership and is already operating in the active mode (i.e., has won the race to gain mastership), then at 212, the computing domain performing the processing starts operating in the standby mode.
There are various ways in which a particular computing domain can determine if the other computing domain is already operating in active mode in 210 (and also later in 216). In some embodiments, as soon as a computing domain starts to operate in the active mode, it is configured to write information to a memory location indicating that mastership has already been established. The memory location where this information is written may be part of the memory resources that are shared between the multiple computing domains such that each computing domain is able to write to and read from this memory location. Appropriate mechanisms may be provided such that only one computing domain is able to start operating in active mode and, once that has occurred, no other computing domain can operate in active mode. Appropriate memory locking mechanisms are provided such that only the computing domain that has gained mastership can write, to the memory location, information indicating that a computing domain has started to operate in active mode.
In one embodiment, as shown in FIG. 1, this information may be stored in a memory location within shared resources 122 that is readable and writable by both CD_A 102 and CD_B 104.
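A minimal sketch of such a mechanism is shown below, using C11 atomics as a stand-in for whatever memory-locking facility the platform actually provides; the shared_region layout and function names are hypothetical.

    #include <stdatomic.h>
    #include <stdbool.h>

    struct shared_region {
        atomic_int mastership_taken;  /* 0 = race still open, 1 = a domain is active */
    };

    /* Returns true only for the single caller that wins the race: the
     * compare-exchange is atomic across the domains sharing this memory. */
    bool claim_mastership(struct shared_region *shm)
    {
        int expected = 0;
        return atomic_compare_exchange_strong(&shm->mastership_taken, &expected, 1);
    }

    /* Used at 210 and 216: has the other domain already established mastership? */
    bool mastership_established(struct shared_region *shm)
    {
        return atomic_load(&shm->mastership_taken) != 0;
    }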
If it is determined instead in 210 that no other computing domain has yet started to operate in active mode, then at 214, the computing domain performing the processing refrains from attempting to become the active computing domain for a period of time. Essentially, for that period of time, the computing domain intentionally does not participate in the race to gain mastership. This automatically biases the race to gain mastership in favor of the other computing domain, which is not constrained in any way during this time period from attempting to start operating in the active mode. In this manner, for the period of time, the race to gain mastership is biased in favor of the non-constrained computing domain and biased against the computing domain that refrains from participating in the race to gain mastership.
In certain embodiments, the period of time for which the computing domain refrains from participating in the race to gain mastership is configured by a user or administrator of computing system 100. In some other embodiments, the computing domain performing the processing may be configured to calculate the period of time.
After the time period has passed in 214, the computing domain performing the processing in 203 then checks at 216 if the other computing domain has already established mastership (i.e., has won the race to gain mastership) and started to operate in the active mode. If it is determined in 216 that another computing domain is already operating in active mode, then at 212, the computing domain starts operating in the standby mode. If, however, it is determined in 216 that the other computing domain has not yet become the active computing domain, then at 208, the computing domain performing the processing also joins the race to gain mastership.
As described above, in 214, the computing domain refrains from participating in the race to gain mastership for a period of time, but participates in the race to gain mastership after that period of time has passed. In this manner, the biasing of the race to gain mastership in favor of a particular computing domain is time limited to a period of time. If the particular computing domain towards which the race is biased (e.g., the active computing domain prior to the occurrence of the mastership race triggering event) is unable to become the active computing domain before the expiry of that particular time period, the biasing is removed and the race is then opened to all the competing computing domains. This time-limited biasing provides a mechanism to recover in the scenario where the computing domain towards which the race is biased has some problems and cannot become the active computing domain. In this scenario, the other computing domain is automatically, without any human intervention, provided the opportunity to become the active computing domain.
As an example of application of flowchart 200 in FIG. 2, assume that CD_A 102 was operating in active mode and CD_B 104 was operating in standby mode prior to the occurrence of the event in 202. Upon booting in 204, CD_A 102 determines in 206 that it was operating in active mode and, at 208, immediately attempts to start operating in the active mode. CD_B 104, on the other hand, determines in 206 that it was not operating in active mode and, if CD_A 102 has not already established mastership, refrains at 214 from participating in the race for a period of time. If CD_A 102 gains mastership during that period, CD_B 104 determines this at 216 and starts operating in standby mode at 212; otherwise, CD_B 104 also joins the race at 208.
Comparing flowchart 300 in FIG. 3 with flowchart 200 in FIG. 2, the initial processing is similar. Flowchart 300 differs in how the time-limited biasing is implemented: the computing domain against which the race is biased refrains from the race in increments of a calculated delay, accumulating the delays until a threshold is exceeded, as described below.
After determining in 306 that the computing domain (i.e., the computing domain doing the processing) was not the active computing domain prior to the event triggering the race to gain mastership and further upon determining in 310 that the other computing domain has not yet become the active computing domain, at 320, a variable “AggregateDelay” is initialized to zero. This variable is used to accumulate the total amount of time for which the computing domain refrains from participating in the race to gain mastership.
At 322, a “DELAY” period of time is calculated. In some embodiments, this may be preset or preconfigured by a user of the computing system. In some other embodiments, the DELAY may be calculated based upon one or more factors.
At 324, the computing domain performing the processing refrains from attempting to become the active computing domain for the period of time corresponding to “DELAY”. Essentially, for this period of time, the computing domain intentionally does not participate in the race to gain mastership, thereby biasing the race to gain mastership in favor of the other computing domain, which is not constrained in any way during this time period from attempting to start operating in the active mode.
After the “DELAY” time period has passed in 324, the computing domain performing the processing then checks at 326 if the other computing domain has already established mastership (i.e., has won the race to gain mastership) and started to operate in the active mode. If yes, then at 314, the computing domain starts operating in the standby mode. If, however, it is determined in 326 that the other computing domain has not yet become the active computing domain, then at 328, the “AggregateDelay” variable is updated to “AggregateDelay=AggregateDelay+DELAY”. Thus, with each iteration, the AggregateDelay variable keeps track of the amount of time for which the computing domain has refrained from participating in the race to gain mastership.
At 330, a check is made to see if the AggregateDelay has exceeded a preconfigured threshold. The threshold may be configured by an administrator or user of computing system 100. If it is determined in 330 that the value of AggregateDelay has not exceeded the threshold, then processing continues with 322 wherein another DELAY time period is calculated and then processing continues as described above. If, however, it is determined in 330 that the value of AggregateDelay has exceeded the threshold, then it indicates that the period of time for which the race to gain mastership is to be biased has passed and at 308 the computing domain performing the processing also joins the race to gain mastership.
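The iterative processing of 320 through 330 may be sketched, for illustration only, as follows; the helper functions and the threshold value are the same hypothetical stand-ins used in the earlier sketches.

    #include <stdbool.h>

    /* Hypothetical helpers, as in the earlier sketches: */
    extern unsigned compute_delay_ms(void);
    extern void sleep_ms(unsigned ms);
    extern bool other_domain_in_first_mode(void);
    extern void try_enter_first_mode(void);
    extern void enter_second_mode(void);

    #define BIAS_THRESHOLD_MS 10000u  /* illustrative threshold for AggregateDelay */

    void biased_wait_then_race(void)
    {
        unsigned aggregate_delay = 0;                /* 320: AggregateDelay = 0 */

        for (;;) {
            unsigned delay = compute_delay_ms();     /* 322: calculate DELAY */
            sleep_ms(delay);                         /* 324: refrain from the race */
            if (other_domain_in_first_mode()) {      /* 326: other domain won? */
                enter_second_mode();                 /* 314: operate in standby mode */
                return;
            }
            aggregate_delay += delay;                /* 328: accumulate the delay */
            if (aggregate_delay > BIAS_THRESHOLD_MS) /* 330: threshold exceeded? */
                break;
        }
        try_enter_first_mode();                      /* 308: join the race */
    }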
Various different techniques may be used to calculate the DELAY in each processing iteration in FIG. 3.
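One such technique, offered purely as an illustrative assumption, is a capped exponential backoff in which the refraining domain re-checks frequently at first and less often in later iterations:

    /* Illustrative only: DELAY doubles each iteration, capped at 6400 ms. */
    unsigned compute_delay_ms(void)
    {
        static unsigned iteration = 0;
        unsigned shift = (iteration < 6) ? iteration : 6;
        iteration++;
        return 100u << shift;  /* 100 ms, 200 ms, 400 ms, ... */
    }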
Computing system 100 can be embodied in various different forms. The computing domains within a computing system can also be embodied in various different forms.
Physical processor 402 represents the processing resources of system 400. In one embodiment, processor 402 is a multicore processor comprising a plurality of processing cores. For example, in the embodiment depicted in FIG. 4, processor 402 comprises multiple cores that can be partitioned between computing domains.
Memory resources of system 400 include volatile memory 404 and non-volatile memory 406. Volatile memory 404 represents the system memory resources of system 400 that are available to physical processor 402. Information related to runtime processing performed by processor 402 may be stored in memory 404. Memory 404 may be a RAM (e.g., SDR RAM, DDR RAM) and is sometimes referred to as the system's main memory. Non-volatile memory 406 may be used for storing information that is to be persisted beyond a power cycle of system 400. I/O devices 408 may include devices such as Ethernet devices, PCIe devices, eLBC devices, and others. Hardware resources 410 can include resources such as security engines, queue managers, buffer managers, pattern matching engines, direct memory access (DMA) engines, and so on.
In certain embodiments, system 400 can be configured to provide multiple computing domains. In the embodiment depicted in FIG. 4, system 400 provides two computing domains in the form of partitions P1 and P2, with each partition allocated its own share of the resources of system 400.
The memory resources of system 400 may also be partitioned and allocated to the different partitions. For example, as depicted in FIG. 4, partition P1 may be allocated non-volatile memory 420 and partition P2 may be allocated non-volatile memory 422, with each partition also allocated a portion of volatile memory 404.
I/O devices 408 and hardware resources 410 may also be partitioned between partitions P1 and P2. A hardware resource or an I/O device may be assigned exclusively to one partition or alternatively may be shared between multiple partitions. For example, in one embodiment, a private Ethernet interface may be assigned to each partition, while access to PCIe may be shared between the partitions.
Although not shown in FIG. 4, in certain embodiments a portion of the memory resources of system 400 may be configured as shared memory that is readable and writable by both partitions.
The memory resources assigned to a partition may store, during runtime, an operating system for the partition and data related to one or more entities executed by the partition. The data may include code or programs or instructions that are executed by the processing resources of the computing domain and other data. The entities executed by a computing domain may include, without restriction, an application, a process, a thread, an operating system, a device driver, and the like. For example, in the embodiment depicted in FIG. 4, the memory resources allocated to partition P1 may store an operating system and programs executed by P1, and the memory resources allocated to partition P2 may likewise store an operating system and programs executed by P2.
In some embodiments, each partition may be presented as a virtual machine executed by system 400. A software program like a hypervisor 428 may be executed by system 400 to facilitate creation and management of virtual machines. Hypervisor 428 facilitates secure partitioning of resources between the partitions of system 400 and management of the partitions. Each virtual machine can run its own operating system and this enables system 400 to run multiple operating systems concurrently. In one embodiment, hypervisor 428 presents a virtual machine to each partition and allocates resources to the partitions. For example, the allocation of memory, processing, and hardware resources, as described above, may be facilitated by hypervisor 428. In one embodiment, hypervisor 428 may run on processor 402 as an operating system control. Each virtual machine for a partition can operate independently of the other virtual machines and can operate as an independent virtual system.
In certain embodiments, hypervisor 428 may be configured to determine and set up the partitions based upon configuration information specified by a user or administrator of the system. The hypervisor may then create virtual machines for the partitions and allocate resources as defined by the configuration data.
In some embodiments, the multiple computing domains or partitions of system 400 can be configured to operate according to the active-standby model such that, during normal operations, one partition operates in active mode while another partition operates in standby mode. For example, in the embodiment depicted in FIG. 4, partition P1 may operate in active mode while partition P2 operates in standby mode.
There are different ways in which resources of system 400 can be allocated to the various partitions. For example, with respect to processing resources, in the configuration depicted in FIG. 4, one or more cores of processor 402 may be allocated to partition P1 while one or more other cores are allocated to partition P2.
Upon the occurrence of an event that triggers a race to gain mastership, the partitions compete with each other to determine who will become the active partition. This race to gain mastership can be biased in favor of a particular partition. In this respect, each partition can execute arbitration logic that performs the processing depicted in FIGS. 2 and 3 and described above.
Information related to the mode of operation of P1 prior to the event triggering the race to gain mastership may be stored as mode information 434 in non-volatile memory 420 allocated to P1. Likewise, information related to the mode of operation of P2 prior to the event triggering the race to gain mastership may be stored as mode information 436 in non-volatile memory 422 allocated to P2. In some embodiments, the information may be stored only by the partition operating in a specific mode, for example, by the partition operating in active mode prior to occurrence of the event that triggered the race to gain mastership. Each partition may use the mode information stored in its non-volatile memory to determine its mode of operation prior to the occurrence of the event triggering the race to gain mastership and take appropriate actions per flowcharts 200 or 300 depicted in FIGS. 2 and 3, respectively.
Accordingly, for system 400 depicted in FIG. 4, the race to gain mastership between partitions P1 and P2 may be biased, for a limited period of time, in favor of a particular partition, for example, the partition that operated in active mode prior to the occurrence of the event that triggered the race.
The teachings described above can be embodied in several different systems and devices including, but not restricted to, network devices such as routers and switches that are configured to facilitate forwarding and routing of data, such as the forwarding and routing of data packets according to one or more network protocols. Network devices can be provided in various configurations including chassis-based devices and “pizza box” configurations. A “pizza box” configuration generally refers to a network device comprising a single physical multicore CPU as depicted in FIG. 4.
In the embodiment depicted in FIG. 5, network device 500 comprises a management card 502, one or more line cards 504, a switch fabric 506, and a set of ports 512.
Ports 512 represent the I/O plane for network device 500. Network device 500 is configured to receive and forward packets using ports 512. A port within ports 512 may be classified as an input port or an output port depending upon whether network device 500 receives or transmits a data packet using the port. A port over which a data packet is received by network device 500 is referred to as an input port. A port used for communicating or forwarding a data packet from network device 500 is referred to as an output port. A particular port may function both as an input port and an output port. A port may be connected by a link or interface to a neighboring network device or network. Ports 512 may be capable of receiving and/or transmitting different types of data traffic at different speeds including 1 Gigabit/sec, 10 Gigabits/sec, 100 Gigabits/sec, or even more. In some embodiments, multiple ports of network device 500 may be logically grouped into one or more trunks.
Upon receiving a data packet via an input port, network device 500 is configured to determine an output port to be used for transmitting the data packet from the network device to facilitate communication of the packet to its intended destination. Within network device 500, the packet is then forwarded from the input port to the determined output port and then transmitted from network device 500 using the output port. In certain embodiments, forwarding of packets from an input port to an output port is performed by one or more line cards 504 with possible assistance from management card 502. Line cards 504 represent the data forwarding plane of network device 500. Each line card may be coupled to one or more ports 512 and comprise one or more packet processors that are programmed to perform processing related to determining an output port for the packets and for forwarding the data packets from an input port to the output port. In one embodiment, processing performed by a line card 504 may comprise extracting information from a received packet (e.g., extracting packet header information), performing lookups using the extracted information to determine an output port for the packet such that the packet can be forwarded to its intended destination, forwarding the packet to the output port, and then forwarding the packet from network device 500 via the output port.
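The per-packet processing described above can be illustrated with the following sketch; the packet structure and the lookup helpers are hypothetical stand-ins for the line card's packet processors.

    #include <stddef.h>
    #include <stdint.h>

    struct packet {
        const uint8_t *data;  /* packet contents, beginning with the header */
        size_t len;
    };

    /* Hypothetical helpers standing in for packet-processor lookups: */
    extern int lookup_output_port(const struct packet *p); /* header lookup; < 0 means drop */
    extern void transmit_on_port(int port, const struct packet *p);

    void forward_packet(const struct packet *p)
    {
        int out_port = lookup_output_port(p); /* determine the output port */
        if (out_port >= 0)
            transmit_on_port(out_port, p);    /* forward via the output port */
    }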
Management card 502 is configured to perform management and control functions for network device 500 and represents the management plane for network device 500. In certain embodiments, management card 502 is communicatively coupled to line cards 504 via switch fabric 506. Switch fabric 506 provides a mechanism for enabling communications and forwarding of data between management card 502 and line cards 504, and between line cards 504. As depicted in FIG. 5, management card 502 provides two computing domains that can be configured to operate according to the active-standby model.
In some embodiments, the computing domains of network device 500 may be configured to operate according to the active-standby model. For example, one of the computing domains may be configured to operate in active mode and perform a set of management-related functions while the other computing domain operates in standby mode in which the management-related functions are not performed. The management-related functions performed in active mode may include, for example, maintaining routing tables, programming line cards 504 (e.g., downloading information to a line card that enables the line card to perform data forwarding functions), some data forwarding functions, running various management protocols, and the like. When a failover occurs, the standby computing domain becomes the active computing domain and takes over performance of the set of functions performed in active mode. The previous active computing domain may become the standby computing domain after a failover.
Upon the occurrence of an event that triggers a race to gain mastership on management card 502, the two computing domains of management card 502 compete with each other to become the active computing domain. In certain embodiments, when such a race to gain mastership is triggered between the computing domains on management card 502, each computing domain may perform processing, as described above, that, for a specific period of time, biases the race to gain mastership in favor of a particular computing domain, for example, in favor of the computing domain that operated in active mode prior to the occurrence of the event that triggered the race to gain mastership, as described above. Further, the biasing is time-limited such that if a particular computing domain towards which the race is biased is unable to become the active computing domain within that particular time period, the biasing is removed and the race is then opened to all the competing computing domains.
Network device 600 has similarities with network device 500 depicted in FIG. 5. In network device 600, however, a line card 602 provides two computing domains 604 and 606.
In certain embodiments, the computing domains on line card 602 may be configured to operate according to the active-standby model in which one of the computing domains operates in active mode and performs a set of data forwarding-related functions while the other computing domain operates in standby mode in which the data forwarding-related functions are not performed. The data forwarding-related functions performed in active mode may include, for example, extracting header information from packets, determining output ports for packets, forwarding the packets to the output ports, receiving forwarding information from management card 502 and programming forwarding hardware based upon the received information, running data forwarding networking protocols, managing I/O devices, managing control state, sending out control packets, maintaining protocol/state information (e.g., application data (routing tables, queue structures, buffers, etc.) and hardware specific state information (ASIC configuration tables, port maps, etc.)), maintaining timing information/logs, and the like. When a failover occurs, the standby computing domain becomes the active computing domain and takes over performance of the set of data forwarding-related functions performed in active mode. Resources previously owned by the active computing domain are taken over by the standby computing domain when it becomes active. The resources can be hardware resources (PCIe devices, memory, CPU cores, device ports, etc.) and software related resources (e.g., message queues, buffers, interrupts, etc.). The previously active computing domain may become the standby computing domain after a failover.
Upon the occurrence of an event that triggers a race to gain mastership on a line card 602, the two computing domains of line card 602 compete with each other to become the active computing domain. In certain embodiments, when such a race is triggered, each computing domain may perform processing, as described above, that biases the race for a specific period of time in favor of a particular computing domain, for example, the computing domain that operated in active mode prior to the occurrence of the triggering event. As before, the biasing is time-limited: if the favored computing domain is unable to become the active computing domain within that time period, the biasing is removed and the race is opened to all the competing computing domains.
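Putting the pieces together, the following self-contained example simulates such a race between two line-card computing domains, numbered 604 and 606 purely to match the reference numerals that appear below. The 100 ms bias window is an arbitrary assumed value; the previously active domain claims immediately and therefore wins unless it fails to start within the window.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	// Hypothetical scenario: computing domains 604 and 606 on a line
	// card race for mastership after a power cycle. Domain 604 was
	// active before the event, so the race is biased in its favor for
	// the duration of the bias window.
	const biasWindow = 100 * time.Millisecond
	var master int32 // 0 = mastership unclaimed
	var wg sync.WaitGroup

	domains := []struct {
		id      int32
		favored bool
	}{{604, true}, {606, false}}

	for _, d := range domains {
		wg.Add(1)
		go func(id int32, favored bool) {
			defer wg.Done()
			if !favored {
				time.Sleep(biasWindow) // wait out the bias window
			}
			if atomic.CompareAndSwapInt32(&master, 0, id) {
				fmt.Printf("domain %d gains mastership and becomes active\n", id)
			} else {
				fmt.Printf("domain %d loses the race and becomes standby\n", id)
			}
		}(d.id, d.favored)
	}
	wg.Wait()
}
```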
In certain embodiments, redundancy may also be provided at the management card level, with a network device comprising two management cards that can be configured to operate according to the active-standby model. When so configured, one of the management cards operates in active mode and performs management-related functions while the other management card operates in standby mode in which the management-related functions are not performed. When a failover occurs, the standby management card becomes the active management card and takes over performance of the set of management-related functions performed in active mode. The management card that was active prior to the failover may become the standby management card after the failover.
Upon the occurrence of an event that triggers a race to gain mastership, the two management cards compete with each other to become the active management card. In certain embodiments, when such a race is triggered, each management card may perform processing that biases the race for a specific period of time in favor of a particular management card, for example, the management card that operated in active mode prior to the occurrence of the triggering event. The biasing is time-limited: if the favored management card is unable to become the active management card within that time period, the biasing is removed and the race is opened to both competing management cards.
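One practical question this raises is how a management card knows, after a reboot or power cycle, that it was the active card before the triggering event. The disclosure does not prescribe a mechanism here; the sketch below assumes a persistent marker, with a file path standing in for NVRAM, an EEPROM field, or a shared register, that the active card writes and that each card consults when deciding whether the race should be biased in its favor. All names and the marker path are hypothetical.

```go
package main

import (
	"fmt"
	"os"
)

// wasActiveBeforeEvent reports whether this management card was the
// active card before the event that triggered the race. The marker file
// is a hypothetical stand-in for whatever persistent record a real
// system might consult.
func wasActiveBeforeEvent(markerPath string) bool {
	_, err := os.Stat(markerPath)
	return err == nil
}

// recordActive persists the fact that this card has become active, so
// that a later race can be biased in its favor.
func recordActive(markerPath string) error {
	return os.WriteFile(markerPath, []byte("active"), 0o644)
}

func main() {
	const marker = "/tmp/mgmt-card-active" // hypothetical marker location
	favored := wasActiveBeforeEvent(marker)
	fmt.Println("bias the race in this card's favor:", favored)
	if favored {
		// Proceed to claim mastership immediately; on winning, refresh
		// the marker before assuming active duties.
		_ = recordActive(marker)
	}
}
```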
Network device 800 combines redundancy at both levels. It comprises two management cards 702 and 704 that can be configured to operate according to the active-standby model, with one management card operating in active mode while the other operates in standby mode.
Additionally, line card 602 comprises two computing domains 604 and 606 that can be configured to operate according to the active-standby model such that, during normal operations of network device 800, one of the computing domains operates in active mode and performs data forwarding-related functions while the other computing domain operates in standby mode in which those functions are not performed. When a failover occurs, the standby computing domain becomes the active computing domain on the line card and takes over performance of the set of data forwarding-related functions performed in active mode. The computing domain that was active on the line card prior to the failover may become the standby after the failover. Switch fabric 506 provides a mechanism for enabling communications and forwarding of data between management cards 702 and 704, between a management card and a line card, and between line cards 602.
In certain embodiments, the occurrence of an event may trigger a race to gain mastership both at the management card level and at the line card level; alternatively, an event may trigger a race at only one of the two levels. When a race to gain mastership is triggered between the management cards, each management card may perform processing that, for a specific period of time, biases the race in favor of a particular management card, for example, the management card that operated in active mode prior to the occurrence of the triggering event. Likewise, when a race to gain mastership is triggered between the computing domains on a line card, each computing domain may perform processing that, for a specific period of time, biases the race in favor of a particular computing domain, for example, the computing domain that operated in active mode prior to the occurrence of the triggering event. At both levels, the biasing is time-limited: if the favored management card or computing domain is unable to become active within the applicable time period, the biasing is removed and the race is opened to all competitors at that level.
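Because the two levels race independently, the same biased-race routine can simply be invoked once per level, each invocation with its own competitors, its own favored competitor, and its own bias window. The sketch below packages the mechanism shown earlier into a reusable function and runs it at both levels; the card and domain numbers and the 100 ms window are illustrative assumptions, not values from the disclosure.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// biasedRace runs one time-limited, biased race and returns the winner's
// ID. It is the mechanism sketched earlier, packaged so that it can be
// run independently at the management card level and on each line card.
func biasedRace(competitors []int32, favored int32, window time.Duration) int32 {
	var master int32
	var wg sync.WaitGroup
	for _, id := range competitors {
		wg.Add(1)
		go func(id int32) {
			defer wg.Done()
			if id != favored {
				time.Sleep(window) // non-favored competitors wait out the bias window
			}
			atomic.CompareAndSwapInt32(&master, 0, id)
		}(id)
	}
	wg.Wait()
	return atomic.LoadInt32(&master)
}

func main() {
	// A single event may trigger independent races at both levels; each
	// race is biased toward its own previously active competitor.
	const window = 100 * time.Millisecond
	fmt.Println("active management card:", biasedRace([]int32{702, 704}, 702, window))
	fmt.Println("active line-card domain:", biasedRace([]int32{604, 606}, 606, window))
}
```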
Although specific embodiments have been described, these are not intended to be limiting. Various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the invention. The embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although certain embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope is not limited to the described series of transactions and steps.
Further, while certain embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present invention. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims.