The present disclosure relates generally to failure protection for communication networks and, more particularly, to an N+1 redundancy scheme for virtualized services with low latency fail-over.
There are two main failure protection schemes for maintaining continuity of service in the event that a network node responsible for handling user traffic in a communication network fails: 1+1 protection and N+1 protection. With 1+1 protection, N standby nodes are provided for N network nodes; each network node has its own dedicated standby node, which can take over the traffic currently being handled by its corresponding network node without loss of sessions. This is known as "hot standby." One drawback of 1+1 protection is that it requires a doubling of system resources. With N+1 protection, one standby node is available for N network nodes to take over the function of a single failed primary node. However, the N+1 redundancy scheme typically provides only "cold standby" protection, so that traffic handled by the failed network node is lost at switchover. Existing N+1 solutions do not preserve the state of the failed primary node, resulting in tear-down of existing sessions. Because the standby node is not dedicated to any specific one of the N primary nodes, there has been no solution for making the state of any one of the primary nodes available in the standby node after a failure. Ultimately, the only benefit is that capacity does not drop after a failure; ongoing sessions are not protected.
In the case of Virtual Router Redundancy Protocol (VRRP)-based solutions, a standby node may take over the Internet Protocol (IP) address of a failed primary node as well as the functions of the failed primary node, but these solutions do not take over the real-time state of the failed primary node that would be needed to preserve session continuity for sockets. Moreover, the operator of the network has to configure separate VRRP sessions with separate IP addresses for each VRRP relationship (i.e., the standby node needs a separate VRRP context for each primary node it is deemed to protect). The resulting configuration overhead makes the solution cumbersome in larger clusters.
The present disclosure comprises methods and apparatus for providing N+1 redundancy for a cluster of network nodes including a standby node and a plurality of primary nodes. When the standby node determines that a primary node in the cluster has failed, the standby node configures itself to use an IP address of the failed primary node. The standby node further retrieves session data for user sessions associated with the failed primary node from a low latency database for the cluster and restores the user sessions at the standby node. When the user sessions are restored, the standby node switches from a standby mode to an active mode.
A first aspect of the disclosure comprises methods of providing N+1 redundancy for a cluster of network nodes. In one embodiment, the method comprises determining, by a standby node, that a primary node in a cluster has failed, configuring the standby node to use an IP address of the failed primary node, retrieving session data for user sessions associated with the failed primary node from a low latency database for the cluster, restoring the user sessions at the standby node, and switching from a standby mode to an active mode.
A second aspect of the disclosure comprises a network node configured as a standby node to provide N+1 protection for a cluster of network nodes including the standby node and a plurality of primary nodes. The standby node comprises a network interface for communicating over a communication network and a processing circuit. The processing circuit is configured to determine that a primary node in the cluster has failed. Responsive to determining that a primary node has failed, the processing circuit configures the standby node to use an IP address of the failed primary node. The processing circuit is further configured to retrieve session data for user sessions associated with the failed primary node from a low latency database for the cluster and to restore the user sessions at the standby node. After the user sessions are restored, the processing circuit switches the standby node from a standby mode to an active mode.
A third aspect of the disclosure comprises a computer program comprising executable instructions that, when executed by a processing circuit in a redundancy controller in a network node, cause the redundancy controller to perform the method according to the first aspect. A fourth aspect of the disclosure comprises a carrier containing a computer program according to the third aspect, wherein the carrier is one of an electronic signal, optical signal, radio signal, or non-transitory computer readable storage medium.
Referring now to the drawings,
User sessions (e.g., telephone calls, media streams, etc.) are distributed among the primary nodes 12 by a load balancing node 16. An orchestrator 18 manages the server cluster 10. A distributed, low-latency database 20 serves as a data store for the cluster 10 to store the states of the user sessions being handled by the primary nodes 12 as hereinafter described. An exemplary distributed database 20 is described in Németh, Gábor, Dániel Géhberger, and Péter Mátray, "DAL: A Locality-Optimizing Distributed Shared Memory System," 9th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 17), Santa Clara, Calif., July 10-11, 2017.
The network nodes 12, 14 are part of the same subnet with a common IP prefix. Each user session is associated with a particular IP address, which identifies the primary node 12 that handles user traffic for the user session. State information for the user sessions is stored in a distributed, low-latency database 20 that serves the server cluster 10. In the event that a primary node 12 fails, the standby node 14 can retrieve the state information of the user sessions handled by the failed primary node 12 and restore the "lost" user sessions so that service continuity is maintained for the user sessions.
The failure protection scheme of the present disclosure can be viewed as having three separate phases. In a first phase, referred to as the preparation phase, a redundant system is built so that the system is prepared for a failure of a primary node 12. The second phase comprises a fail-over process in which the standby node 14, upon detecting a failure of a primary node 12, takes over the active user sessions handled by the failed primary node 12. After the fail-over process is complete, a post-failure process restores the capacity and redundancy that were lost through the failure of the primary node 12, so that backup protection is re-established to protect against future network node failures.
During the preparation phase, state information necessary to restore the user sessions is externalized and stored in the database 20 by each primary node 12. Conventional log-based approaches or checkpointing can be used to externalize the state information. Another suitable method of externalizing state information is described in co-pending application 62/770,550 titled “Fast Session Restoration of Latency Sensitive Middleboxes” filed on Nov. 21, 2018.
The data to be stored in the database 20 depends on the application and the communication protocol used by the application. For Transmission Control Protocol (TCP) sessions, such state information may comprise port numbers, counters, sequence numbers, various data on TCP buffer windows, etc. Generally, all state information that is necessary to continue the user session should be stored externally.
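By way of illustration only, the following Python sketch shows how such TCP session state might be externalized as a record in the database 20. The field names, the session key format, and the put() call on the database client are assumptions made for this sketch; they are not part of the disclosure or of any particular database API.

    # Illustrative sketch only: the key-value client "db", its put() method,
    # the key format, and the field names are assumptions, not part of the
    # disclosure. Each primary node 12 writes one such record per user
    # session so that the standby node 14 can later restore the session.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class TcpSessionState:
        session_id: str    # key under which the record is stored
        owner_ip: str      # IP address of the primary node 12 handling the session
        local_port: int
        remote_ip: str
        remote_port: int
        snd_nxt: int       # next sequence number to send
        rcv_nxt: int       # next sequence number expected from the peer
        snd_wnd: int       # current send window
        rcv_wnd: int       # current receive window

    def externalize(db, state: TcpSessionState) -> None:
        # "db" stands for any low-latency key-value store serving the cluster
        # (e.g., a DAL client); put(key, value) is a placeholder for its real API.
        db.put("session/" + state.session_id, json.dumps(asdict(state)))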
In order to ensure that a backup is readily available to replace a primary node 12 that has failed, a “warm” standby node 14 is provisioned and made available to take over any user sessions for a failed primary node 12. During the provisioning, system checks are performed to ensure that:
It is not known in advance which of the primary nodes 12 will fail; however, the standby node 14 is ready to fetch the necessary state information from the database 20 to take over for any one of the primary nodes 12. This standby mode is referred to herein as a "warm" standby.
The fail-over process is triggered by the failure of one of the primary nodes 12. In some embodiments, a failure is detected by the standby node 14 based on "heartbeat" or "keepalive" signaling. In some embodiments, the primary nodes 12 may periodically transmit a "heartbeat" signal, and a failure is detected when the heartbeat signal is not received by the standby node 14. In other embodiments, the standby node 14 may periodically transmit a "keepalive" signal or "ping" message to each of the primary nodes 12. In this case, a failure is detected when a primary node 12 fails to respond. This "keepalive" signaling process should be run continuously, pairwise, between the standby node 14 and each primary node 12.
In other embodiments, the failure of a primary node 12 can be detected by another network entity and communicated to the standby node 14 in the form of a failure notification. For example, one primary node 12 may detect the failure of another primary node 12 and send a failure notification to the standby node 14. In another embodiment, the database 20 may detect the failure of a primary node 12 and send a failure notification to the standby node 14.
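By way of illustration only, the pairwise keepalive probing described above might be implemented along the lines of the following Python sketch. The probe port, the timing values, the UDP echo mechanism, and the on_failure callback are assumptions made for the example.

    # Illustrative sketch only. The standby node 14 periodically probes each
    # primary node 12 and treats a run of missed replies as a failure. The
    # probe port, timing values, and UDP echo mechanism are assumptions.
    import socket
    import time

    KEEPALIVE_PORT = 9999   # assumed probe port on each primary node
    PROBE_INTERVAL = 1.0    # seconds between probe rounds
    PROBE_TIMEOUT = 0.2     # seconds to wait for a reply
    MAX_MISSES = 3          # consecutive misses treated as a node failure

    def monitor_primaries(primary_ips, on_failure):
        misses = {ip: 0 for ip in primary_ips}
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(PROBE_TIMEOUT)
        while True:
            for ip in primary_ips:
                try:
                    sock.sendto(b"keepalive", (ip, KEEPALIVE_PORT))
                    sock.recvfrom(64)      # any reply clears the miss counter
                    misses[ip] = 0
                except socket.timeout:
                    misses[ip] += 1
                    if misses[ip] >= MAX_MISSES:
                        on_failure(ip)     # trigger the fail-over process
                        misses[ip] = 0
            time.sleep(PROBE_INTERVAL)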
When a fail-over is triggered, the standby node 14 retrieves the IP address or addresses of the failed primary node 12, as well as the session states (e.g., application- and protocol-dependent context data) necessary to re-initiate the user sessions at the standby node 14. In some embodiments, the standby node 14 writes the network identity (e.g., IP address) of the failed primary node 12 into a global key called STDBY_IDENTITY, which is stored in the database 20 so that all nodes in the server cluster 10 are aware that the standby node 14 has assumed the role of the failed primary node 12. Responsive to the failure detection or failure indication, the standby node 14 configures its network interface to use the IP address or addresses of the failed primary node 12 and loads the retrieved session states into its own tables. When the standby node 14 is ready to take over, the standby node 14 broadcasts a Gratuitous Address Resolution Protocol (GARP) message with its own Medium Access Control (MAC) address and its newly configured IP address(es), so that the routers in the subnet know to forward packets with the IP address(es) formerly used by the failed primary node 12 to the standby node's MAC address. The same general principle also applies to Internet Protocol version 6 (IPv6) interfaces, using an Unsolicited Neighbor Advertisement message instead.
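By way of illustration only, the takeover sequence of this paragraph might look like the following Python sketch on a Linux host. The database client methods, the /24 prefix, the use of the ip and arping commands, and the restore_session() hook are assumptions made for the example, not a prescribed implementation.

    # Illustrative sketch only. Assumes a Linux host, a key-value client "db"
    # with put()/keys()/get() placeholders, and an application-supplied
    # restore_session() hook. Values such as the /24 prefix are assumptions.
    import json
    import subprocess

    def take_over(db, failed_ip, iface, restore_session):
        # 1. Record which primary node 12 the standby node 14 now represents.
        db.put("STDBY_IDENTITY", failed_ip)

        # 2. Configure the standby node's interface with the failed node's IP.
        subprocess.run(["ip", "addr", "add", failed_ip + "/24", "dev", iface],
                       check=True)

        # 3. Retrieve and restore the session states of the failed primary node.
        for key in db.keys("session/*"):
            state = json.loads(db.get(key))
            if state["owner_ip"] == failed_ip:
                restore_session(state)

        # 4. Announce the new IP-to-MAC binding with gratuitous ARP so routers
        #    in the subnet forward traffic for this IP to the standby node's MAC.
        subprocess.run(["arping", "-c", "3", "-U", "-I", iface, failed_ip],
                       check=True)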
During the post-failure phase, the original capacity of the server cluster 10 with N primary nodes 12 and 1 standby node 14 is restored. There are essentially two alternative approaches to restoring the system capacity.
In a first approach for the post-failure phase, the standby node 14 switches from a standby mode to an active mode and serves only temporarily as a primary node, reverting to a "warm" standby mode when done. The standby node 14 serves only the user sessions that were taken over from the failed primary node 12 and is not assigned any new user sessions by the load balancing node 16. When the orchestrator 18 learns about the failure of a primary node 12, it establishes a new primary node 12 to replace the failed primary node 12 and restore system capacity according to a regular scale-out procedure. The orchestrator 18 should ensure that the IP addresses used by the failed primary node 12 on the user plane are reserved, because these addresses are taken over by the standby node 14. In the case of an OpenStack-based orchestrator 18, reserving the IP addresses means that the "ports" should not be deleted when the failed primary node 12 disappears. This, however, requires garbage collection: when its temporary primary role ends, the standby node 14 sends a trigger to the orchestrator 18 indicating that the ports used by the affected IP addresses can be deleted. After this notification, the IP addresses can be assigned to new network nodes (e.g., VNFs).
During the post-failure phase, the operation of the load balancing node 16 needs to take the failed primary node 12 into account. Immediately after the failure, the load balancing node 16 does not assign new incoming sessions to either the failed primary node 12 or the standby node 14. As noted above, the standby node 14 continues serving the existing user sessions taken over from the failed primary node 12, but does not receive new sessions. After the last session is finished at the standby node 14, or upon expiration of a timer (MAX_STANDBY_LIFETIME), the standby node 14 erases or clears the STDBY_IDENTITY field in the database 20, sends a notification to the orchestrator 18 indicating that the IP addresses of the failed primary node 12 can be released, and transitions back to a "warm" standby mode. The MAX_STANDBY_LIFETIME timer, if used, is started when the standby node 14 takes over for the failed primary node 12.
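By way of illustration only, the temporary-takeover behavior of this first approach might be sketched as follows; the MAX_STANDBY_LIFETIME value, the sessions and orchestrator objects, and their method names are assumptions made for the example.

    # Illustrative sketch only. The standby node 14 serves the taken-over
    # sessions until the last one ends or the lifetime timer expires, then
    # cleans up and returns to warm standby. Object interfaces are assumptions.
    import time

    MAX_STANDBY_LIFETIME = 3600.0   # seconds; assumed value for the example

    def serve_as_temporary_primary(db, orchestrator, sessions, taken_over_ips):
        deadline = time.time() + MAX_STANDBY_LIFETIME
        while sessions.active_count() > 0 and time.time() < deadline:
            time.sleep(1.0)

        db.delete("STDBY_IDENTITY")                         # clear the standby identity key
        orchestrator.notify_ips_releasable(taken_over_ips)  # ports/IPs may now be deleted
        return "warm_standby"                               # revert to warm standby mode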
In a second approach for the post-failure phase, the standby node 14 permanently assumes the role of the failed primary node 12, and the orchestrator 18 re-establishes system capacity by initiating a new standby node 14. In this case, the standby node 14 sends a notification message or other indication to the orchestrator 18 indicating that the IP address(es) of the failed primary node 12 were taken over by the standby node 14, so that the orchestrator 18 knows (i) to which primary node 12 the IP addresses belong, and (ii) that these IP addresses cannot be used for new instances of the primary nodes 12 in case of a scale-out. The standby node 14 (now fully a primary node 12) triggers the orchestrator 18 to launch a new instance for the standby node 14 to restore the original redundancy protection.
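By way of illustration only, the notifications used in this second approach might be sketched as follows; the orchestrator method names are assumptions made for the example.

    # Illustrative sketch only. The standby node 14 keeps the failed node's IP
    # addresses permanently and asks the orchestrator 18 for a replacement
    # standby node. The orchestrator interface is an assumption.
    def become_permanent_primary(orchestrator, failed_node_id, taken_over_ips):
        # Report which IP addresses were taken over, so they are not re-used
        # when new primary-node instances are scaled out.
        orchestrator.notify_ips_taken_over(failed_node_id, taken_over_ips)
        # Trigger creation of a new standby instance to restore N+1 protection.
        orchestrator.launch_new_standby()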
There may be circumstances where a primary node 12 fails only temporarily, typically because of a VM reboot. Following the restart, the primary node 12 may try to use its earlier IP address(es), which would cause a conflict with the standby node 14 that is serving the ongoing user sessions associated with those addresses. Before restarting, the primary node 12 reads the STDBY_IDENTITY key in the database 20. If the STDBY_IDENTITY key matches the identity of the primary node 12, the primary node 12 either pauses and waits until the key is erased, indicating that the IP address used by the standby node 14 has been released, or asks for new configuration parameters from the orchestrator 18.
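By way of illustration only, the restart check described above might be sketched as follows; the polling interval, the maximum wait, and the request_new_config() fallback are assumptions made for the example.

    # Illustrative sketch only. Before re-using its earlier IP address(es)
    # after a reboot, a primary node 12 checks whether the standby node 14 is
    # still serving sessions under that identity. Interfaces and timing values
    # are assumptions.
    import time

    def safe_restart(db, my_identity, orchestrator,
                     poll_interval=2.0, max_wait=600.0):
        waited = 0.0
        while db.get("STDBY_IDENTITY") == my_identity:
            if waited >= max_wait:
                # Give up waiting and ask the orchestrator for fresh parameters
                # (e.g., new IP addresses) instead of reclaiming the old ones.
                return orchestrator.request_new_config(my_identity)
            time.sleep(poll_interval)
            waited += poll_interval
        # Key cleared or never set: the earlier configuration can be reclaimed.
        return "reuse_previous_config"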
In the embodiment shown in
In some embodiments of the method 100, determining that a primary node 12 in a cluster 10 has failed comprises sending a periodic keepalive message to one or more primary nodes 12 in the cluster 10, and determining a node failure when the failed primary node 12 fails to respond to a keepalive message.
In some embodiments of the method 100, determining that a primary node 12 in a cluster has failed comprises receiving a failure notification. As an example, the failure notification can be received from the database 20.
In some embodiments of the method 100, configuring the standby node 14 to use an IP address of the failed primary node 12 comprises configuring a network interface to use the IP address of the failed primary node 12.
In some embodiments of the method 100, configuring the standby node 14 to use an IP address of the failed primary node 12 further comprises announcing a binding between the IP address and a MAC address of the standby node 14.
Some embodiments of the method 100 further comprise setting a standby identity key in the database to an identity of the failed primary node 12.
Some embodiments of the method 100 further comprise, after a last one of the user sessions ends, releasing the IP address of the failed primary node 12 and switching from the active mode to the standby mode.
Some embodiments of the method 100 further comprise, after a last one of the user sessions ends, clearing the standby identity key in the database 20.
Some embodiments of the method 100 further comprise notifying an orchestrator 18 that the standby node 14 has replaced the failed primary node 12 and receiving new user sessions from a load-balancing node 16.
In one embodiment of the method 200, the primary node 12 determines whether an IP address of the primary node 12 is being used by a standby node 14 in the cluster 10 of network nodes by getting a standby identity from a database 20 serving the cluster 10 of network nodes, and comparing the standby identity to an identity of the primary node 12.
In another embodiment of the method 200, the primary node 12 determines when the IP address is released by monitoring the standby identity stored in the database 20 and determining that the IP address is released when the standby identity is cleared or erased.
Those skilled in the art will also appreciate that embodiments herein further include corresponding computer programs. A computer program comprises instructions which, when executed on at least one processor of an apparatus, cause the apparatus to carry out any of the respective processing described above. A computer program in this regard may comprise one or more code modules corresponding to the means or units described above.
Embodiments further include a carrier containing such a computer program. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
In this regard, embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium.
The methods and apparatus described herein provide N+1 redundancy for a cluster of network nodes including a standby node and a plurality of primary nodes. When a primary node in the cluster fails, the user sessions handled by that node can be restored at the standby node. When the user sessions are restored, the standby node switches from a standby mode to an active mode.
The above description of illustrated implementations is not intended to be exhaustive or to limit the scope of the disclosure to the precise forms disclosed. While specific implementations and examples are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the present disclosure, as those skilled in the relevant art will recognize. The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
This application claims priority to U.S. Provisional Application No. 62/779,313, filed Dec. 13, 2018, and U.S. Provisional Application No. 62/770,550, filed Nov. 21, 2018, the disclosures of which are incorporated herein by reference in their entireties.