The present invention relates to a data storage controller and to a method of controlling data volumes in a data storage system.
There are many scenarios in computer systems where it becomes necessary to move a volume of data (a data chunk) from one place to another place. One particular such scenario arises in server clusters, where multiple servers arranged in a cluster are responsible for delivering applications to clients. An application may be hosted by a particular server in the cluster and then for one reason or another may need to be moved to another server. An application which is being executed depends on a data set to support that application. This data set is stored in a backend storage system associated with the server. When an application is moved from one server to another, it may become necessary to move the data volume so that the new server can readily access the data.
For example, a file system local to each server can comprise a number of suitable storage devices, such as disks. Some file systems have the ability to maintain point-in-time snapshots and provide a mechanism to replicate the difference between two snapshots from one machine to another. This is useful when a change in the location of a data volume is required because an application has migrated from one server to another. One example of a file system which satisfies these requirements is the Open Source ZFS file system.
Different types of backend storage system are available, in particular backend storage systems in which data volumes are stored on storage devices virtually associated with respective machines, rather than physically as in the case of the ZFS file system.
At present, there is a constraint on server clusters in that any particular cluster of servers can only operate effectively with backend storage of a single type. This is because the mechanism and requirements for moving data volumes between the storage devices within a storage system (physically or virtually) depend on the storage type.
Moreover, the cluster has to be configured for a particular storage type based on knowledge of the implementation details for moving data volumes in that type.
According to one aspect of the invention, there is provided a data storage controller for controlling data storage in a storage environment comprising: a backend storage system of a first type in which data volumes are stored on storage devices physically associated with respective machines; and a backend storage system of a second type in which data volumes are stored on storage devices virtually associated with respective machines, the controller comprising: a configuration data store including configuration data which defines for each data volume at least one primary mount, wherein a primary mount is a machine with which the data volume is associated; a volume manager connected to access the configuration data store and having a command interface configured to receive commands to act on a data volume; and a plurality of convergence agents, each associated with a backend storage system and operable to implement a command received from the volume manager by executing steps to control its backend storage system, wherein the volume manager is configured to receive a command which defines an operation on the data volume which is agnostic of, and does not vary with, the backend storage system type in which the data volume to be acted on is stored, and to direct the command to a convergence agent based on the configuration data for the data volume, wherein the convergence agent is operable to act on the command to execute the operation in its backend storage system.
Another aspect of the invention provides a method of controlling data storage in a storage environment comprising a backend storage system of a first type in which data volumes are stored on storage devices physically associated with respective machines; and a backend storage system of a second type in which data volumes are stored on storage devices virtually associated with respective machines, the method comprising: providing configuration data which defines for each data volume at least one primary mount, wherein a primary mount is a machine with which the data volume is associated; generating a command to a volume manager connected to access the configuration data, wherein the command defines an operation on the data volume which is agnostic of, and does not vary with, the backend storage system type in which the data volume to be acted on is stored; implementing the command in a convergence agent based on the configuration data for the data volume, wherein the convergence agent acts on the command to execute the operation in its backend storage system based on the configuration data.
Thus, the generation and recognition of commands concerning data volumes is separated semantically from the implementation of those commands. This allows a system to be built which can be configured to take into account different types of backend storage and to allow different types of backend storage to be added in. Convergence agents are designed to manage the specific implementation details of a particular type of backend storage, and to recognise generic commands coming from a volume manager in order to carry out those implementation details.
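By way of illustration only, the following sketch (in Python, with all names hypothetical) shows one way in which a volume manager might route such a backend-agnostic command to the appropriate convergence agent using the configuration data; it is a minimal sketch of the principle, not a required implementation:

    # A minimal sketch; all class and attribute names are hypothetical.
    class ConfigStore:
        """Configuration data: backend type and primary mount per volume."""
        def __init__(self):
            self.volumes = {}  # volume -> {"backend": ..., "primary_mount": ...}

        def backend_for(self, volume):
            return self.volumes[volume]["backend"]

    class VolumeManager:
        """Routes backend-agnostic commands to the right convergence agent."""
        def __init__(self, config_store):
            self.config = config_store
            self.agents = {}   # backend type -> convergence agent

        def register_agent(self, backend_type, agent):
            self.agents[backend_type] = agent

        def handle(self, command, volume, **kwargs):
            # The command ("create", "move", ...) does not vary with the
            # backend type; the routing decision is taken purely from the
            # configuration data held for the volume.
            backend = self.config.backend_for(volume)
            self.agents[backend].execute(command, volume, **kwargs)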
In preferred embodiments, a leasing/polling system allows the backend storage to be managed in the most effective manner for that storage system type as described more fully in the following.
For a better understanding of the invention and to show how the same may be carried into effect, reference will now be made by way of example, to the accompanying drawings in which:
Each server is associated with a local storage facility 6 which can constitute any suitable storage, for example disks or other forms of memory. The storage facility 6 supports a database or an application running on the server 1 which is, for example, delivering a service to one or more client terminals 7 via the Internet. Embodiments of the invention are particularly advantageous in the field of delivering web-based applications over the Internet.
In
The applications can be run directly or they can be run inside containers. When run inside containers, the containers can mount parts of the host server's dataset. Herein an application-specific chunk of data is referred to as a “volume”. Herein, the term “application” is utilised to explain operation of the various aspects of the invention, but it is to be understood that these aspects apply equally when the server cluster is supporting a database.
Each host server (that is, a server capable of hosting an application or database) is embodied as a physical machine. Each machine can support one or more virtual applications. Applications may be moved between servers in the cluster, and as a consequence of this, it may be necessary to move data volumes so that they are available to the new server hosting the application or database. A data volume is referred to as being “mounted on” a server (or machine) when it is associated with that machine and accessible to the application(s) running on it. A mount (sometimes referred to as a manifestation) is an association between the data volume and a particular machine. A primary mount is read-write and guaranteed to be up to date. Any others are read only.
For example, the system might start with a requirement that:
“Machine 1 runs a PostgreSQL server inside a container, storing its data on a local volume”, and later on the circumstances will alter such that the new requirement is:
“to run PostgreSQL server on machine 2”.
In the latter state, it is necessary to ensure that the volume originally available on machine 1 will now be available on machine 2. These machines can correspond, for example, to the servers 1W/1E in
In addition, the server supports one or more convergence agents 36, to be described later, implemented by the processor 5.
As already mentioned, there are a variety of different distributed storage backend types. Each backend type has a different mechanism for moving data volumes. A system in charge of creating and moving data volumes is a volume manager. Volume managers are implemented differently depending on the backend storage type:
In a Peer-to-Peer backend, data is stored initially locally on one of machine A's hard drives, and when it is moved it is copied over to machine B's hard drive. Thus, a Peer-to-Peer backend storage system comprises the hard drives of the machines themselves.
Cloud services like Amazon Web Services (AWS) provide on-demand virtual machines and offer block devices that can be accessed over a network (e.g. AWS has the Elastic Block Store, EBS). These reside on the network and are mounted locally on the virtual machines within the cloud as a block device. They emulate a physical hard drive. To accomplish the command:
“Machine 1 will run a PostgreSQL server inside a container, storing its data on a local volume”
such a block device is attached on machine 1, formatted as a file system, and the data from the application or database is written there. To accomplish the “move” command such that the volume will now be available on machine 2, the block device is detached from machine 1 and reattached to machine 2. Since the data was in any case always on a remote server (in the cloud) accessible via the network, no copying of the data is necessary. SAN setups would work similarly.
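Purely as an illustrative sketch, the detach/reattach “move” might be effected on AWS EBS using the boto3 client as follows; the instance and volume identifiers and the device name are placeholders, and error handling is omitted:

    # A sketch only: moving an EBS-backed volume by detach/reattach.
    # No data is copied; only the attachment point changes.
    import boto3

    def move_ebs_volume(volume_id, source_instance, dest_instance,
                        device="/dev/xvdf"):
        ec2 = boto3.client("ec2")
        ec2.detach_volume(VolumeId=volume_id, InstanceId=source_instance)
        # Wait until the block device is free before reattaching it.
        ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
        ec2.attach_volume(VolumeId=volume_id, InstanceId=dest_instance,
                          Device=device)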
Rather than a network available block device, there may be a network file system. For example, there may be a file server which exports its local file system via NFS or SMB network file systems. Initially, this remote file system is mounted on machine O. To “move” the data volumes, the file system is unmounted and then mounted on machine D. No copying is necessary.
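A hedged sketch of this unmount/remount style of “move”, assuming NFS and illustrative mount points (in practice the first call runs on machine O and the second on machine D):

    # A sketch only: "moving" a network file system volume is an unmount
    # on the origin followed by a mount on the destination; no data copy.
    import subprocess

    def unmount_on_origin(mount_point):
        subprocess.run(["umount", mount_point], check=True)

    def mount_on_destination(server_export, mount_point):
        # e.g. server_export = "fileserver:/exports/volume_v1"
        subprocess.run(["mount", "-t", "nfs", server_export, mount_point],
                       check=True)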
Local Storage on a Single Node only is also a backend storage type which may need to be supported.
One example of a Peer-to-Peer backend storage system is the Open Source ZFS file system. This provides point in time snapshots, each named with a locally unique string, and a mechanism to replicate the difference between two snapshots from one machine to another.
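As an illustrative sketch only, such incremental replication can be driven with the standard zfs send/recv commands; the dataset, snapshot and host names below are placeholders:

    # A sketch only: replicate the difference between two ZFS snapshots
    # by piping "zfs send -i" into "zfs recv" on the destination via ssh.
    import subprocess

    def replicate_increment(dataset, old_snap, new_snap, dest_host):
        send = subprocess.Popen(
            ["zfs", "send", "-i", f"{dataset}@{old_snap}",
             f"{dataset}@{new_snap}"],
            stdout=subprocess.PIPE)
        subprocess.run(["ssh", dest_host, "zfs", "recv", dataset],
                       stdin=send.stdout, check=True)
        send.stdout.close()
        send.wait()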
From the above description, it is evident that the mechanism by which data volumes are moved depends on the backend storage system which is implemented. Furthermore, read-only access to data on other machines might be available (although possibly out of date). In the Peer-to-Peer system this would be done by copying data every once in a while from the main machine that is writing to the other machines. In the network file system set up, the remote file system can be mounted on another machine, although without write access to avoid corrupting the database files. In the block device scenario this access is not possible without introducing some reliance on the other two mechanisms (copying or a network file system).
There are other semantic differences. In the case of the Peer-to-Peer system, the volume only really exists given a specific instantiation on a machine. In the other two systems, the volume and its data may exist even if they are not accessible on any machine.
A summary of the semantic differences between these backend storage types is given below.
In the scenario outlined above, the problem that manifests itself is how to provide a mechanism that allows high level commands to be implemented without the requirement of the command issuer understanding the mechanism by which the command itself will be implemented. Commands include for example:
“Move data” [as discussed above]
“Make data available here”
“Add read-only access here”
“Create volume”
“Delete volume”, etc.
This list of commands is not all-encompassing, and a person skilled in the art will readily understand the nature of commands which are to be implemented in a volume manager.
The control service 30 understands the configuration data but does not need to understand the implementation details of the backend storage type. At most, it knows that certain backends have certain restrictions on the allowed configuration.
The architecture comprises convergence agents 36 which are processes which request the configuration from the control service and then ensure that the actual system state matches the desired configuration. The convergence agents are implemented as code sequences executed by a processor. The convergence agents are the entities which are able to translate a generic model operating at the control service level 30 into specific instructions to control different backend storage types. Each convergence agent is shown associated with a different backend storage type. The convergence agents understand how to do backend specific actions and how to query the state of a particular backend. For example, if a volume was on machine O and is now supposed to be on machine D, a Peer-to-Peer convergence agent will instruct copying of the data, but an EBS agent will instruct attachment and detachment of cloud block devices. Because of the separation between the abstract model operating in the control service 30 and the specific implementation actions taken by the convergence agents, it is simple to add new backends by implementing new convergence agents. This is shown for example by the dotted lines in
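A minimal sketch (hypothetical names, stub helpers) of how two convergence agents might implement the same generic “move” command in backend-specific ways:

    # A sketch only; the helpers below are stubs standing in for
    # backend-specific operations.
    def copy_data(volume, origin, destination):
        pass  # e.g. incremental snapshot replication (stub)

    def detach_block_device(volume, machine):
        pass  # e.g. a cloud API detach call (stub)

    def attach_block_device(volume, machine):
        pass  # e.g. a cloud API attach call (stub)

    class ConvergenceAgent:
        def execute(self, command, volume, **kwargs):
            getattr(self, command)(volume, **kwargs)

    class PeerToPeerAgent(ConvergenceAgent):
        def move(self, volume, origin, destination):
            # Peer-to-peer storage: the data must actually be copied from
            # the origin machine's drives to the destination's.
            copy_data(volume, origin, destination)

    class EBSAgent(ConvergenceAgent):
        def move(self, volume, origin, destination):
            # Network block device: detach and reattach; nothing is copied.
            detach_block_device(volume, origin)
            attach_block_device(volume, destination)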
The abstract configuration model operated at the control service 30 has the following properties.
A “volume” is a cluster wide object that stores a specific set of data. Depending on the backend storage type, it may exist even if no nodes have access to it. A node in this context is a server (or machine).
Volumes can manifest on specific nodes.
A manifestation may be authoritative, meaning it has the latest version of the data and can be written to. This is termed a “primary mount”.
Otherwise, the manifestation is non-authoritative and cannot be written to. This is termed a “replica”.
A primary mount may be configured as read-only, but this is a configuration concern, not a fundamental implementation restriction.
If a volume exists, it can have the following manifestations depending on the backend storage type being used, given N servers in the cluster:
Given the model above, the cluster is configured to have a set of named volumes. Each named volume can be configured with a set of primary mounts and a set of replicas. Depending on the backend storage type, specific restrictions may be placed on a volume's configuration, for example, when using EBS no replicas are supported and no more than one primary mount is allowed.
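Such configuration-time checking might be sketched as follows; only the EBS restriction is stated above, so the entries for the other backend types are illustrative assumptions:

    # A sketch only. Restrictions per backend type:
    # (maximum number of primary mounts, whether replicas are allowed).
    RESTRICTIONS = {
        "ebs": (1, False),            # stated above
        "peer-to-peer": (1, True),    # illustrative assumption
        "network-file-system": (1, True),  # illustrative assumption
    }

    def validate_volume_config(backend, primary_mounts, replicas):
        max_primaries, replicas_allowed = RESTRICTIONS[backend]
        if len(primary_mounts) > max_primaries:
            raise ValueError("too many primary mounts for " + backend)
        if replicas and not replicas_allowed:
            raise ValueError(backend + " does not support replicas")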
The architecture of
The architecture shown in
Embodiments of the present invention provide a way to do this which works with various distributed storage backend types, such that the system that is in charge of the processes does not need to care about the implementation details of the system that is in charge of the data. The concept builds on the volume manager described above which is in charge of creating and moving volumes. The scheduler layer 38 provides a container scheduling system that decides which container runs on which machine in the cluster. In principle, the scheduler and the volume manager operate independently. However, there needs to be coordination. For example, if a container is being executed on machine O with a volume it uses to store data, and then the scheduler decides to move the container to machine D, it needs to tell the volume manager to also move the volume to machine D. In principle, a three-step process driven by the scheduler would accomplish this:
A difficulty with this scenario is that it can lead to significant downtime for the application. In the case where the backend storage type is Peer-to-Peer, all of the data may need to be copied from machine O to machine D in the second step. In the case where the backend storage type is network block device, the three-step process may be slow if machine O and machine D are in different data centres, for example, in AWS EBS a snapshot will need to be taken and moved to another data centre.
As already mentioned in the case of the ZFS system, one way of solving this is to use incremental copying of data which would lead for example to the following series of steps:
The problem associated with this approach is that it imposes a much more significant requirement for coordination between the scheduler and the volume manager. Different backends have different coordination requirements: Peer-to-Peer backends, as well as cross-datacentre block device backends, require a four-step solution to move volumes, while single-datacentre block device backends, as well as network file system backends, only need the three-step solution. It is an aim of embodiments of the present invention to support multiple different scheduler implementations, and also to allow adoption of the advantageous volume manager architecture already described.
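One plausible rendering of the four-step solution is sketched below, with hypothetical vm (volume manager) and scheduler objects; it illustrates why the incremental approach keeps downtime short rather than defining a required sequence:

    # A sketch only; vm and scheduler are hypothetical objects.
    def four_step_move(volume, origin, destination, scheduler, vm):
        # Step 1: bulk copy while the container keeps running on the origin.
        first = vm.snapshot(volume, origin)
        vm.replicate(volume, origin, destination, since=None, upto=first)
        # Step 2: stop the container; writes cease and the lease is released.
        scheduler.stop_container(volume, origin)
        # Step 3: a small incremental copy of only the changes made since
        # the first snapshot, so downtime stays short.
        second = vm.snapshot(volume, origin)
        vm.replicate(volume, origin, destination, since=first, upto=second)
        # Step 4: promote the destination copy to primary mount and restart
        # the container there.
        vm.promote(volume, destination)
        scheduler.start_container(volume, destination)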
In order to fit into the framework described with respect to
The solution to this problem is set out below. Reference is made herewith to
For example, one kind of deployment state is whether or not an application A is running on machine M. This true/false value is implicitly represented by whether a particular program (which has somehow been defined as the concrete software manifestation of application A) is running on the operating system of machine M.
Another example is whether a replica of a data volume V exists on machine M. The exact meaning of this condition varies depending on the specific storage system in use. When using the ZFS P2P storage system, the condition is true if a particular ZFS dataset exists on a ZFS storage pool on machine M.
In all of these cases, when a part of the system needs to learn the current deployment state, it will interrogate the control service. To produce the answer, the control service will interrogate each machine and collate and return the results. To produce an answer for the control service, each machine will inspect the various heterogeneous sources of the information and collate and return those results.
Put another way, the deployment state mostly does not exist in any discrete storage system but is widely spread across the entire cluster.
The only exception to this is the lease state which is kept together with the configuration data in the discrete configuration store mentioned above.
The desired volume configuration is changed once, when the operation is initiated. When a desired change of container location is communicated to the container scheduler (message 60), it changes the volume manager configuration appropriately. After that, all interactions between the scheduler and the volume manager are based on the current deployment state, via leases, a mobility attribute and polling/notifications of changes to that state:
Leases on primary mounts are part of the current deployment state, but can be controlled by the scheduler: a lease prevents a primary mount from being removed. When the scheduler mounts a volume's primary mount into a container it should first lease it from the volume manager, and release the lease when the container stops. This will ensure the primary mount is not moved while the container is using it. This is shown in the lease state 40 in the primary mount associated with volume V1. For example, the lease state can be implemented as a flag: for a particular data volume, either the lease is held or not held.
Leases are on the actual state, not the configuration. If the configuration says “volume V should be on machine D” but the primary mount is still on machine O, a lease can only be acquired on the primary mount on machine O since that is where it actually is.
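A minimal sketch of this rule, with hypothetical names: the lease is taken against the actual primary mount record, not against the desired configuration:

    # A sketch only; the mount record describes actual deployment state.
    from dataclasses import dataclass

    class LeaseError(Exception):
        pass

    @dataclass
    class PrimaryMount:
        volume: str
        machine: str       # where the primary mount actually is
        leased: bool = False

    def acquire_lease(mount, requesting_machine):
        # Even if the desired configuration says the volume should be
        # elsewhere, a lease can only be taken where the mount actually is.
        if mount.machine != requesting_machine:
            raise LeaseError("primary mount is actually on " + mount.machine)
        if mount.leased:
            raise LeaseError("lease already held")
        mount.leased = True

    def release_lease(mount):
        mount.leased = False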
A primary mount's state has a mobility flag 42 that can indicate “ready to move to X”. Again, this is not part of the desired configuration, but rather part of the description of the actual state of the system. This flag is set by the volume manager (control service 30).
Notifications let the scheduler know when certain conditions have been met, allowing it to proceed with the knowledge that volumes have been set up appropriately. This may be simulated via polling, i.e. the scheduler continuously asks for the state of the lease and mobility flag 42, see poll messages 50 in
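Such polling might be sketched as follows; the volume_manager object, its primary_mount_state accessor and the mobility string format are assumptions for illustration:

    # A sketch only: simulate notifications by polling the deployment state.
    import time

    def wait_until_ready_to_move(volume_manager, volume, destination,
                                 interval=1.0, timeout=600.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            state = volume_manager.primary_mount_state(volume)
            # The mobility flag is part of the actual state, set by the
            # volume manager when the move can safely proceed.
            if state.mobility == "ready to move to " + destination:
                return
            time.sleep(interval)
        raise TimeoutError("volume never became ready to move")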
When the scheduler first attaches a volume V to a container, say on machine Origin, it acquires a lease 40. Suppose it is now desired to move the volume to node Destination. The scheduler will:
The interface 39 between the scheduler 38 and the volume manager 30 is therefore quite narrow:
There follows a description of how two different volume manager backends might handle this interaction.
First, in the peer-to-peer backend:
Second, in EBS backend within a single datacentre:
Notice that no details of how data is moved are leaked: the scheduler has no idea how the volume manager moves the data and whether it is a full copy followed by an incremental copy, a quick attach/detach or any other mechanism. The volume manager in turn does not need to know anything about containers or how they are scheduled. All it knows is that sometimes volumes are moved, and that it cannot move a volume if the relevant primary mount has a lease.
Embodiments of the invention described herein provide the following features.
These features are explained in more detail below.
The volume manager is a cluster volume manager, not an isolated per-node system. A shared, consistent data storage system 32 stores:
Where the configuration is set by an external API, the API supports:
The convergence agents:
Note that each convergence agent has its own independent queue.
Convergence loop: (this defines the operation of a convergence agent)
Failures result in a task and all tasks that depend on it being removed from the queue; they will be re-added and therefore automatically retried because of the convergence loop.
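One convergence pass might be sketched as follows; the task, agent and control service objects, and the plan and dependencies attributes, are hypothetical:

    # A sketch only of a single convergence pass: compare desired
    # configuration with actual backend state, run the tasks needed to
    # converge, and drop failed tasks (plus dependents) so that the next
    # pass re-derives and thereby retries them.
    def convergence_pass(agent, control_service):
        desired = control_service.desired_configuration()
        actual = agent.inspect_backend_state()
        # Each agent keeps its own independent, ordered task queue.
        pending = list(agent.plan(desired, actual))
        failed = set()
        while pending:
            task = pending.pop(0)
            if any(dep in failed for dep in task.dependencies):
                failed.add(task)   # a prerequisite failed; drop this one too
                continue
            try:
                task.run()
            except Exception:
                failed.add(task)   # re-added, and so retried, next pass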
Given the known configuration and the task queue, it is possible at any time to know what relevant high-level operations are occurring, and to refuse actions as necessary.
Moreover, given a task queue, it is possible to insert new tasks for a node ahead of currently scheduled ones.
The configuration data storage is preferably selected so that nodes can only write to their own section of the task queue, and only external API users can write to the desired configuration.
Nodes will only accept data from other nodes based on desired configuration.
Data will only be deleted if explicitly requested by external API, or automatically based on policy set by cluster administrator. For example, a 7-day retention policy means snapshots will only be garbage collected after they are 7 days old, which means a replicated volume can be trusted so long as the corruption of the master is noticed before 7 days are over.
The task queue will allow nodes to ensure high-level operations finish even in the face of crashes.
This is a side-effect of using a shared (consistent) database.
The API will support operations that include a description of both previous and desired state: “I want to change owner of volume V from node A to node B.” If in the meantime owner changed to node C the operation will fail.
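A minimal sketch of this compare-and-swap style of configuration change, with an in-memory mapping standing in for the configuration store:

    # A sketch only: the request names both the previous and the desired
    # owner; if the owner changed in the meantime (say, to node C), the
    # operation fails instead of silently clobbering the newer state.
    class StaleStateError(Exception):
        pass

    def change_owner(config, volume, from_node, to_node):
        current = config[volume]["owner"]
        if current != from_node:
            raise StaleStateError(
                "owner is %s, not %s" % (current, from_node))
        config[volume]["owner"] = to_node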
Leases on volumes prevent certain operations from being done to them (but do not prevent configuration changes from being made; e.g., configuration about ownership of a volume can be changed while a lease is held on that volume. Ownership will not actually change until the lease is released). When e.g. Docker mounts a volume into a container it leases it from the volume manager, and releases the lease when the container stops. This ensures the volume is not moved while the container is using it.
Notifications let the control service know when certain conditions have been met, allowing it to proceed with the knowledge that volumes have been setup appropriately.
In these scenarios, the scheduler 38 is referred to as an orchestration framework (OF), and the control service 30 is referred to as the volume manager (VM).
A detailed integration scenario for creating a volume for a container:
A detailed integration scenario for two-phase push, moving volume V from node A to node B (presuming previous steps).
The execution model of the distributed volume API is based on asserting configuration changes and, when necessary, observing the system for the events that take place when the deployment state is brought up-to-date with respect to the modified configuration.
Almost all of the APIs defined in this section are for asserting configuration changes in this way (the exception being the API for observing system events).
Change the desired configuration to include a new volume.
Optionally specify a UUID for the new volume.
Optionally specify a node where the volume should exist.
Optionally specify a non-unique user-facing name.
Receive a success response (including the UUID) if the configuration change is accepted (acceptance does not imply that the volume yet exists).
Receive an error response if some problem prevents the configuration change from being accepted (for example, because of lack of consensus).
Change desired configuration to exclude a certain volume.
Specify the UUID of the no-longer desired volume.
Receive a success response if the configuration change is accepted
(volume is not actually destroyed until admin-specified policy dictates; for example, not until seven days have passed).
Receive an error response if some problem prevents the configuration change from being accepted (for example, because of lack of consensus, because there is no such volume)
Change the desired configuration of which node is allowed write access to a volume (bringing that node's version of the volume up to date with the owner's version first if necessary).
Specify the UUID of the volume.
Specify the node which will become the owner.
Optionally specify a timeout—if the volume cannot be brought up to date before the timeout expires, give up
Receive a success response if the configuration change is accepted.
Receive an error response if not (lack of consensus, invalid UUID, invalid node identifier, predictable disk space problems)
Create a replication relationship for a certain volume between the volume's owner and another node.
Specify the UUID of the volume.
Specify the node which should have the replica.
Specify the desired degree of up-to-dateness, e.g. “within 600 seconds of the owner's version” (alternatively, the replica may simply be kept as up to date as possible, with a configurable degree of up-to-dateness as a possible later add-on feature).
Open an event stream describing all changes made to a certain volume.
Specify the UUID of the volume to observe.
Optionally specify an event type to restrict the stream to (alternatively, filtering can always be performed client-side).
Receive a response including a unique event stream identifier (URI) at which events can be retrieved and an idle lifetime after which the event stream identifier will expire if unused.
Receive an error response if the information is unavailable (no such volume, lack of consensus).
Fetch buffered events describing changes made to a certain volume.
Issue request to previously retrieved URI.
Receive a success response with all events since the last request
(events like: volume created, volume destroyed, volume owner changed, volume owner change timed out? replica of volume on node X updated to time Y, lease granted, lease released)
Receive an error response (e.g. lack of consensus, invalid URI)
Retrieve UUIDs of all volumes that exist on the entire cluster, e.g. with paging.
Follow-up: optionally specify node.
Receive a success response with the information if possible.
Receive an error response if the information is unavailable (lack of consensus, etc.)
Retrieve all information about a particular volume.
Specify the UUID of the volume to inspect.
Receive a success response with all details about the specified volume
(where it exists, which node is the owner, snapshots, etc.)
Receive an error response (lack of consensus, etc.)
Mark a volume as in-use by an external system (for example, mounted in a running container) and inhibit certain other operations from taking place (but not configuration changes).
Specify the volume UUID.
Specify lease details as an opaque string meaningful to the OF (for example, “in use by running container ABCD”); such a string is useful for later human interaction and for debugging.
Receive a success response (including a unique lease identifier) if the configuration change is successfully made (the lease is not yet acquired; the lease-holder is placed on a queue to acquire the lease).
Receive an error response for normal reasons (lack of consensus, invalid UUID, etc.)
Mark the currently held lease as no longer in effect
(freeing the system to make deployment changes previously prevented by the lease).
Specify the unique lease id to release.
Receive a success response if the configuration change is accepted
(the lease is not released yet).
Receive an error response (lack of consensus, invalid lease id)
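Finally, purely as an illustrative sketch, an orchestration framework might drive the lease operations above over HTTP as follows; the endpoint URLs, port and payload shapes are assumptions for illustration, not a defined API:

    # A sketch only; the URL scheme and payloads are hypothetical.
    import requests

    BASE = "http://volume-manager.example:4523/v1"

    def acquire_lease(volume_uuid, details):
        r = requests.post("%s/volumes/%s/leases" % (BASE, volume_uuid),
                          json={"details": details})
        r.raise_for_status()
        # Success means the change is accepted; the lease-holder may still
        # be queued rather than actually holding the lease yet.
        return r.json()["lease_id"]

    def release_lease(volume_uuid, lease_id):
        r = requests.delete("%s/volumes/%s/leases/%s"
                            % (BASE, volume_uuid, lease_id))
        r.raise_for_status()  # accepted; the release may be asynchronous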