A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention relates to version control systems and in particular to version control systems having high availability.
Version Control Systems (VCS) provide for the management of changes to documents or any other collection of information. A VCS provides the ability to keep track of changes, revert a document to a previous revision, etc. These features make a VCS a suitable solution for persisting artifacts in a development environment.
In particular a centralized VCS provides additional capabilities (such as centralized access control, a single source of authoritative data, etc.) that make it advantageous in a development environment.
A disadvantage of centralized VCS becomes apparent in cluster environments used by thousands of client systems. In such large systems failures of hardware components in the cluster are the norm, not the exception. Unfortunately, in cluster environments a centralized VCS does not deal with such failures gracefully and therefore cannot guarantee high availability. Due to the architecture of the centralized VCS, a single repository failure can cause complete denial of service for all users. As a consequence, centralized VCSs are fragile and do not scale very well, making them unsuitable as persistent storage of artifacts in a cluster environment.
Accordingly it would be desirable to provide a VCS having the advantages of a centralized VCS while dealing with failures of components in a cluster environment gracefully to guarantee uptime.
It would further be desirable to provide a VCS having the advantages of a centralized VCS without the inherent fragility, and which therefore scales well, making it suitable for persistent storage in a cluster environment.
Embodiments of the present invention provide Version Control Systems (VCS) and methods having high availability.
Embodiments of the present invention provide a high availability VCS and method which has the advantages of a centralized VCS while overcoming the limitations of centralized VCSs in a cluster environment.
Embodiments of the present invention provide a high availability VCS and method having the advantages of a centralized VCS while dealing with failures of components in a cluster environment gracefully to guarantee uptime.
Embodiments of the present invention provide a VCS having the advantages of a centralized VCS without the inherent fragility, and which therefore scales well, making it suitable for persistent storage in a cluster environment.
In an embodiment the present invention provides a VCS which supports high availability in a centralized VCS utilizing a plurality of repositories having a suitable architecture. In particular embodiments the architecture utilizes one or more of: Active-Passive repository replication; Active-Passive repository replication with automatic recovery; Active-Active repository replication; and a hybrid model (Active-Active and Active-Passive repository replication).
Other objects and advantages of the present invention will become apparent to those skilled in the art from the following detailed description of the various embodiments, when read in light of the accompanying drawings.
In the following description, the invention will be illustrated by way of example and not by way of limitation in the figures of the accompanying drawings. References to various embodiments in this disclosure are not necessarily to the same embodiment, and such references mean at least one. While specific implementations are discussed, it is understood that this is provided for illustrative purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the scope and spirit of the invention.
Furthermore, in certain instances, numerous specific details will be set forth to provide a thorough description of the invention. However, it will be apparent to those skilled in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in as much detail so as not to obscure the invention.
Common reference numerals are used to indicate like elements throughout the Figures and detailed description; therefore, reference numerals used in a Figure may or may not be referenced in the detailed description specific to such figure if the element is described elsewhere. The first digit in a three digit reference numeral indicates the series of Figures in which the element first appears.
Although the Figures depict components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in the Figures can be combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
Version Control Systems (VCS) provide for the management of changes to documents or any other collection of information. A VCS provides the ability to keep track of each change, revert a document to a previous revision, etc. When data in the VCS is modified after being retrieved (read) by checking it out, the modification is not, in general, immediately reflected in the repository of the VCS, but must instead be checked in or committed (write). These features make a VCS a suitable solution for persisting artifacts in a development environment. VCSs are often centralized, with a single authoritative data store, the repository, and reads and commits are performed with reference to this central repository. A centralized VCS provides additional capabilities (such as centralized access control, a single source of authoritative data, etc.) that make it advantageous in a development environment, for example a Business Process Management development environment.
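By way of illustration only, the following sketch (in Java, with hypothetical type and method names that are not drawn from any particular VCS implementation) outlines the check-out and commit cycle against a single central repository described above:

    // Illustration of the check-out / commit cycle against a single central
    // repository. All type and method names are hypothetical.
    interface CentralRepository {
        WorkingCopy checkout(String path, long revision);  // read from the central store
        long commit(WorkingCopy copy, String message);     // write; returns the new revision
    }

    final class WorkingCopy {
        private final String path;
        private String content;

        WorkingCopy(String path, String content) {
            this.path = path;
            this.content = content;
        }

        // Local edits are not reflected in the repository until commit() is called.
        void edit(String newContent) { this.content = newContent; }

        String getPath()    { return path; }
        String getContent() { return content; }
    }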
Embodiments of the present invention provide Version Control Systems (VCS) and methods having high availability. The present invention provides a high availability Version Control System and method which has the advantages of a centralized VCS while overcoming the limitations of centralized VCSs in a cluster environment. In particular the present invention provides a VCS and method having the advantages of a centralized VCS while dealing with failures of components in a cluster environment gracefully to guarantee uptime. The VCS and method have the advantages of a centralized VCS without the inherent fragility, and therefore scale well, making them suitable for persistent storage in a cluster environment.
In an embodiment the present invention provides a VCS which supports high availability in a centralized VCS utilizing a plurality of repositories having a suitable architecture.
In particular embodiments the architecture utilizes one or more of: Active-Passive repository replication; Active-Passive repository replication with automatic recovery; Active-Active repository replication; and hybrid model (Active-Active and Active-Passive repository replication) as described below.
All of the repository replication schemas utilize repository replication in which multiple copies of data are replicated across multiple hardware devices in a cluster in order to prevent data loss. Data is duplicated on different hardware nodes to provide one or more backup copies in case of the failure of a hardware node and to decrease the latency of read and write operations. However, where multiple copies of data are stored, it is necessary to provide a coherent view of the repository at any moment and avoid the data inconsistencies which can occur when different users make modifications to the same data.
The CAP theorem (a.k.a. Brewer's theorem) states that it is impossible for a distributed system to simultaneously provide absolute: data consistency (all nodes see the same data at the same time), data availability (a guarantee that every read/write request receives a response indicating whether it succeeded or failed), and data partition tolerance (the system continues to operate despite arbitrary message loss or failure of part of the system). A principal objective of a centralized VCS as persistent storage for a highly concurrent application is data consistency. Accordingly the present solution favors consistency and availability over partition tolerance.
Furthermore the VCS and method also provide failure transparency such that it is not evident to a user when a node fails. An advantageous aspect of the VCS and method is that the user is never aware of the replication mechanism used to provide high availability. The replication is transparent to the user. Furthermore the user interacts with the VCS exactly as he or she would on any usual occasion. This transparency permits the system operator to dynamically change the composition of the cluster without impact on the user experience.
1A. Active-Passive Repository Replication
In the Active-Passive Repository Replication schema, there are multiple nodes which are each a copy of the repository, but only one of them is regarded as the source of authoritative data. This node is called the “active” or “master” node while the others are “passive” or “slave” nodes. Passive nodes synchronize directly from the active node. If the active node is unavailable, the passive nodes switch to read-only mode until the active node becomes available.
In the Active-Passive Repository Replication schema, writes can only occur on the master, while the slaves can only retrieve the information, listen for changes, and stay synchronized with the active node. However, when a user is connected to a passive node all write requests are internally forwarded to the master, so the user cannot tell the difference when connected to a passive node. Having a single writing point ensures that no conflict of concurrent modification can arise. On the other hand, as a consequence of this restriction, the entire writing load relies on a single node instead of being distributed. This can cause congestion in a large system.
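A minimal sketch of this routing behavior is set out below; the interfaces for the master node and the local replica, and all names, are hypothetical and illustrative only, not a definitive implementation:

    // Active-Passive request routing: reads are served from the local replica,
    // while writes are transparently forwarded to the single active (master)
    // node. All names are hypothetical.
    interface MasterNode   { long commit(String path, String newContent); }
    interface LocalReplica { String read(String path); }

    final class PassiveNode {
        private final MasterNode activeNode;   // the current master
        private final LocalReplica localCopy;  // replica kept in sync with the master

        PassiveNode(MasterNode activeNode, LocalReplica localCopy) {
            this.activeNode = activeNode;
            this.localCopy = localCopy;
        }

        // Reads are answered locally, which distributes the read load.
        String read(String path) {
            return localCopy.read(path);
        }

        // Writes are forwarded to the master, so the client cannot tell whether
        // it is connected to the active node or to a passive node.
        long commit(String path, String newContent) {
            return activeNode.commit(path, newContent);
        }
    }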
The main disadvantage of this first configuration is that when the active node goes down users will be able to read from the repository but not to write to it. It is undesirable in this schema to resolve the failure by choosing an available passive node and converting it to a master without human intervention. For example, in a case where the network gets partitioned, if passive nodes are promoted to master nodes, the result could be two or more different nodes acting as masters causing the history to diverge and provoking an inconsistent view of the data depending on which partition the user is connected to. Since a VCS should prefer consistency over availability, divergent history should be avoided. Thus, it is preferable to declare the system as down for writes until an administrator manually sets up a new primary node. Thus the Active-Passive schema supports high availability primarily for the reads whereas the writes depend upon a single (primary) system.
Another disadvantage of this configuration is that the active node has more responsibilities than the passive nodes. Reads can be distributed across the passive nodes, but writes must all be handled on the active node. Consequently there may be congestion in the active node. Additionally, the passive nodes may exhibit a certain level of desynchronization owing to the time required to distribute commits from the active node to the passive nodes. During the synchronization delays, read operations against a passive node may have increased latency to allow the passive node to be synchronized before providing the data.
1B. Active-Passive Repository Replication with Automatic Recovery
As described above, in the Active-Passive schema, when the active node fails, it is preferable to simply declare the system as down for writes until an administrator manually sets up a new primary node. However, the Active-Passive Repository Replication schema can be enhanced with automatic failure recovery. Automatic failure recovery uses a technique dubbed ‘quorum’ (also called ‘consensus’) to decide which member should become primary. Using quorum means that whenever the primary node becomes unreachable, the secondary nodes trigger an election. The first node to receive votes from a majority of the set of secondary nodes will become the primary node. This means that for a passive node to take the role of active node, at least N nodes have to be up and running and have successfully accepted the decision. That number N is called the quorum. In a simple schema, N = (total nodes in the cluster)/2 + 1. This provides automated failover while preventing the previously mentioned case of divergent history in the event of a network partition.
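For illustration, a minimal sketch of the quorum rule, assuming the simple N = (total nodes)/2 + 1 schema described above (class and method names are hypothetical), is as follows:

    import java.util.Set;

    // Quorum rule used for automatic failover: a passive node is promoted to
    // primary only after receiving votes from a majority of the cluster, which
    // prevents two network partitions from each electing their own primary.
    final class Quorum {

        static int requiredVotes(int totalNodesInCluster) {
            // N = (total nodes in the cluster) / 2 + 1
            return totalNodesInCluster / 2 + 1;
        }

        static boolean electionSucceeded(Set<String> votesReceived, int totalNodesInCluster) {
            return votesReceived.size() >= requiredVotes(totalNodesInCluster);
        }

        public static void main(String[] args) {
            // Example: in a five-node cluster the quorum is 3.
            System.out.println(requiredVotes(5));                                 // 3
            System.out.println(electionSucceeded(Set.of("n2", "n3", "n5"), 5));   // true
        }
    }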
There is, however, one particular case in which this configuration requires manual intervention to recover a possibly lost commit. This is the case of a failover in which the primary node has accepted a write operation that has not yet been replicated to the secondary nodes when the failover occurs. In this particular case the node selected to take the place of the primary will have no knowledge of this last commit. Therefore, to avoid consistency issues, that last commit is saved in a particular location so that the administrator can recover it and apply it manually. This situation can be considered the worst case scenario. Note that, in contrast with the abovementioned Active-Passive Repository Replication without automatic recovery, manual intervention is needed only to recover a particular commit, while the system as a whole will choose a new primary automatically and continue working as expected. Thus, the effect on the user of a failure of the primary node is significantly reduced.
2. Active-Active Repository Replication
In the Active-Active Repository Replication schema, there are multiple nodes with authoritative repository information. Although the master role still exists in this schema, the master responsibilities can be switched from node to node. If the master goes down, another node can take over the responsibility of the master node and claim itself as the master. Thus, in this schema all the nodes can be viewed as masters able to write to the repository.
Because writes may be made to any active node, the system could develop data inconsistency due to concurrent modifications of the repository at different nodes. To solve this issue all nodes have to negotiate each write operation so that a majority of nodes accepts the modifications introduced into the repository. This requires the use of two techniques: two-phase commit and “quorum”, the same technique used for primary selection but in this case applied to the acceptance of commits. Using quorum in this context means that for a write operation to succeed, a majority of the nodes forming the cluster at the time of writing have to be up and running and successfully accept the commit without any failure. Then, when the write operation is successful, all other nodes are notified via a notification bus and replicate the change from one of the nodes that accepted the request.
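A sketch of this quorum-based write negotiation, assuming hypothetical node and coordinator types and treating the two-phase commit in a simplified synchronous form, is shown below:

    import java.util.List;

    // Simplified two-phase commit with quorum acceptance in the Active-Active
    // schema. A commit is proposed to every node (phase one) and is applied only
    // if a majority of the cluster accepts it (phase two); otherwise it is undone.
    // The interfaces and names are hypothetical.
    interface ClusterNode {
        boolean prepare(long revision);   // phase one: vote on the proposed commit
        void apply(long revision);        // phase two: make the commit permanent
        void rollback(long revision);     // undo primitive; a no-op where nothing was prepared
    }

    final class ActiveActiveCoordinator {
        private final List<ClusterNode> clusterNodes;

        ActiveActiveCoordinator(List<ClusterNode> clusterNodes) {
            this.clusterNodes = clusterNodes;
        }

        boolean write(long revision) {
            int quorum = clusterNodes.size() / 2 + 1;
            long accepted = clusterNodes.stream().filter(n -> n.prepare(revision)).count();

            if (accepted >= quorum) {
                // Quorum reached: the commit becomes permanent; in the described
                // schema the remaining nodes would be notified over the
                // notification bus and replicate from an accepting node.
                clusterNodes.forEach(n -> n.apply(revision));
                return true;
            }
            // Quorum not reached: the write is rejected and prepared state is undone.
            clusterNodes.forEach(n -> n.rollback(revision));
            return false;
        }
    }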
In this configuration there is no single point of failure since any active node can take the place of another in the quorum. This provides high availability for both reading and writing in the repository. However, the semantics of the two-phase commit and of the required undo primitive are more complex than for commits in the Active-Passive repository schema and generate additional transaction overhead. If the quorum is not satisfied, all write (commit) operations will be rejected; however, read operations will still be allowed. Moreover, the use of quorum for each commit generates significantly more communication overhead between the nodes. This is disadvantageous for large distributed systems having significant communication latency.
3. Hybrid Model
In the Hybrid Model, it is possible to configure an architecture in which Active-Passive Repository Replication and Active-Active Repository Replication are used together in a cluster. For example, in some cases it is advantageous to run Active-Active Repository Replication on a small number of nodes (for example 5 nodes) located in the same data center (facilitating high bandwidth communication between the nodes) and fifty or so passive nodes situated at different locations around the globe providing users with quick read access to the repository. The availability of multiple active nodes also reduces the overhead on the write function, reduces congestion, and increases the failure tolerance as compared to a single-master system. Furthermore, the interaction between the master group and the passive nodes utilizes the less complex and less communication-intensive mechanisms of the Active-Passive Repository Replication schema, which is better suited to large geographically distributed systems.
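By way of a hypothetical configuration sketch only (node names, data center names, and node counts are illustrative assumptions), such a hybrid topology may be described as follows:

    import java.util.List;

    // Hypothetical hybrid topology: a small Active-Active group co-located in one
    // data center forms the write quorum, while geographically distributed passive
    // replicas serve low-latency reads and forward writes to the active group.
    final class HybridClusterConfig {

        record Node(String name, String dataCenter, boolean active) { }

        public static void main(String[] args) {
            List<Node> activeGroup = List.of(
                new Node("active-1", "dc-east", true),
                new Node("active-2", "dc-east", true),
                new Node("active-3", "dc-east", true),
                new Node("active-4", "dc-east", true),
                new Node("active-5", "dc-east", true));

            List<Node> passiveReplicas = List.of(
                new Node("passive-eu-1", "dc-europe", false),
                new Node("passive-ap-1", "dc-asia", false));

            // Only the active group participates in the commit quorum: 3 of 5.
            int writeQuorum = activeGroup.size() / 2 + 1;
            System.out.println("Write quorum: " + writeQuorum
                + ", passive read replicas: " + passiveReplicas.size());
        }
    }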
The Hybrid model thus combines advantageous features of both the Active-Passive Repository Replication Schema and the Active-Active Repository Replication Schema. Additionally, in the Hybrid model, the configuration can be changed dynamically by adding or removing servers to the cluster without user impact. This can be achieved thanks to the node state machine described in the next section.
Node Communication
In embodiments of the present invention, the system and method utilize two different types of communication: node to node communication, and node to cluster communication (or broadcast). Node to node communication is used when it is desired to have a commit replicated to another server. Node to node communication makes use of the VCS's own functionality to replicate history to a different repository. Node to cluster communication is used as a way of notifying all nodes of a write event. Node to cluster communication makes use of a distributed cache to provide a message queue with a write-through configuration for persistence.
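The two communication paths may be sketched as follows; the interfaces are hypothetical stand-ins for the replication facility of the underlying VCS and for the distributed-cache-backed message queue, not definitive APIs:

    // Two communication paths: node-to-node replication of commit content, and
    // node-to-cluster broadcast of write events. All names are hypothetical.
    interface NodeToNodeChannel {
        // Uses the VCS's own facility to replicate history to another repository.
        void replicateCommit(String targetNode, long revision);
    }

    interface NodeToClusterChannel {
        // Publishes a write event to all nodes via a persistent (write-through) queue.
        void broadcastWriteEvent(long revision);
    }

    final class ActiveNodeNotifier {
        private final NodeToClusterChannel broadcast;

        ActiveNodeNotifier(NodeToClusterChannel broadcast) { this.broadcast = broadcast; }

        // After a commit is accepted on the active node, every node in the cluster
        // is notified of the new revision; passive nodes then pull the content
        // over the node-to-node channel if it was not published with the event.
        void onCommitAccepted(long revision) {
            broadcast.broadcastWriteEvent(revision);
        }
    }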
Node State
When the administrator sets up a new node, before this node can take requests it has to be synchronized with the latest data in the repository. A node state machine is used to synchronize the new node with the latest data. When a triggering event occurs (a new node is added to the cluster, the master node has been detected to be down, etc.), the node will change its internal state. The behavior of the node depends on its current state.
The node states and the transitions between them are depicted in the accompanying figures.
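As a hedged sketch only (the full set of states and triggering events is given in the figures; SYNCHRONIZING is an assumed name for the state of a newly added node, while the running and read-only states are referred to below), the state machine may take the following form:

    // Hedged sketch of the node state machine. RUNNING and READ_ONLY are referred
    // to elsewhere in this description; SYNCHRONIZING is an assumed state for a
    // newly added node that is still catching up with the latest repository data.
    final class NodeStateMachine {

        enum State { SYNCHRONIZING, RUNNING, READ_ONLY }

        private State state = State.SYNCHRONIZING;

        // A newly added node may serve requests locally only once it has been
        // synchronized with the latest data in the repository.
        void onSynchronizationComplete() { state = State.RUNNING; }

        // When the active node is detected to be down, a passive node switches to
        // read-only mode until an active node is available again.
        void onActiveNodeDown()      { state = State.READ_ONLY; }
        void onActiveNodeAvailable() { state = State.RUNNING; }

        State currentState() { return state; }
    }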
When a commit has been performed on the active node, the active node should publish the commit revision to the passive nodes using the node to cluster communication mechanism. The passive nodes are encouraged to read the commit information from the node to cluster communication mechanism as soon as possible. The active node may optionally publish the commit content so that the passive nodes can replicate that information. If the commit content was not published by the active node, then the passive nodes should request the commit content from the active node using the node to node communication mechanism.
If a client connects to a passive node for a read request and the passive node status is running or read-only, then it will serve the request using the information stored locally. If the node is in any other state, it will forward the request to the active node.
If a client connected to a passive node tries to perform a commit operation, the passive node will forward the request to the active node. If the commit finishes successfully, the passive node will not report this to the client until the new commit has been synchronized. Once the commit has been synchronized, the client will receive the success confirmation.
This procedure ensures that all the written data can then be immediately found by the client. If the active node goes down while the passive node is synchronizing the new commit, the passive node will respond to the client with the new commit information. In this case, if the client comes back to the passive node before the active node comes back, most client operations will fail because the client will be more up to date than the passive node.
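A sketch of this behavior, with hypothetical interfaces for the connection to the active node and for the local synchronization wait, is given below:

    // Commit handling on a passive node: the write is forwarded to the active
    // node, and success is reported to the client only once the new revision has
    // been synchronized locally, so the client can immediately read back its own
    // write. All names are hypothetical.
    interface ActiveNodeClient { long forwardCommit(String path, String newContent); }
    interface ReplicaSync      { void awaitRevision(long revision) throws InterruptedException; }

    final class PassiveCommitHandler {
        private final ActiveNodeClient activeNode;
        private final ReplicaSync localReplica;

        PassiveCommitHandler(ActiveNodeClient activeNode, ReplicaSync localReplica) {
            this.activeNode = activeNode;
            this.localReplica = localReplica;
        }

        long commit(String path, String newContent) throws InterruptedException {
            long revision = activeNode.forwardCommit(path, newContent); // write occurs on the master
            localReplica.awaitRevision(revision);                       // block until replicated locally
            return revision;                                            // only now confirm to the client
        }
    }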
Metadata Store
To maintain the cluster configuration (the names and locations of the different machines that form the cluster, the configuration chosen for each machine, the paths of the repositories, etc.) another high availability store is required to provide a Metadata store. The Metadata store should be highly available so that a failure in the Metadata store does not cause the whole system to go down. In a particular embodiment the Metadata store is provided in a distributed cache with a write-behind configuration over a database.
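For illustration only, the kind of information held in the Metadata store may be sketched as follows; the distributed, write-behind cache of the embodiment is stood in for here by a plain in-memory map, and the keys and values are hypothetical:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of a Metadata store holding the cluster configuration. In the
    // described embodiment this would be a distributed cache with a write-behind
    // configuration over a database; a concurrent map stands in for it here.
    final class MetadataStore {
        private final Map<String, String> entries = new ConcurrentHashMap<>();

        void put(String key, String value) { entries.put(key, value); }
        String get(String key)             { return entries.get(key); }

        public static void main(String[] args) {
            MetadataStore store = new MetadataStore();
            store.put("node.active-1.location", "dc-east.example.com:8080");
            store.put("node.active-1.role", "active");
            store.put("node.active-1.repositoryPath", "/repos/main");
            System.out.println(store.get("node.active-1.role"));   // prints "active"
        }
    }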
Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art. The invention may also be implemented by the preparation of application specific integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
The various embodiments include a computer program product which is a storage medium (media) having instructions stored thereon/in which can be used to program a general purpose or specialized computing processor(s)/device(s) to perform any of the features presented herein. The storage medium can include, but is not limited to, one or more of the following: any type of physical media including floppy disks, optical discs, DVDs, CD-ROMs, microdrives, magneto-optical disks, holographic storage, ROMs, RAMs, PRAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs); paper or paper-based media; and any type of media or device suitable for storing instructions and/or information. The computer program product can be transmitted in whole or in part over one or more public and/or private networks wherein the transmission includes instructions which can be used by one or more processors to perform any of the features presented herein. The transmission may include a plurality of separate transmissions. In accordance with certain embodiments, however, the computer storage medium containing the instructions is non-transitory (i.e. not in the process of being transmitted) but rather is persisted on a physical device.
The foregoing description of the preferred embodiments of the present invention has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. Embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the relevant art to understand the invention. It is intended that the scope of the invention be defined by the following claims and their equivalents.