The present invention relates to a method and system for coordinating lifecycle changes of system components.
A system such as a computer system can be viewed as a collection of cooperating components. Some of these components may depend on others in a way that affects what they can do at any time. For example, in a distributed computer system there will typically be a number of inter-dependent software components such as database and application servers, and it may only be possible to start an application server when its database server is running, or it may only be possible to stop the database when the application server has stopped using it. When starting or stopping such a software system, it is necessary to start or stop all the components in a coordinated way that respects these dependencies; if this is not done, the system may not operate correctly. More generally, any action taken by one component may need to be coordinated with the actions of others. Conceptually, the simplest way to do this is to control all actions from a single point, but this has the disadvantage that the single point needs to know everything about the system and the whole system could stop operating if the single point stops operating.
It is also known to provide a distributed deployment engine; however, this approach requires scripts or structured descriptions to define the order of life cycle operations on the components.
It is an object of the present invention to provide a way of coordinating lifecycle changes in system components that does not require separate coordinating managers but is consistent in operation.
According to a first aspect of the present invention, there is provided a system comprising:
The system enables the components to coordinate their lifecycles with decisions regarding changing their lifecycle states being taken locally at the components; the state-dissemination arrangement provides for a consistent view to all components of the current component lifecycle states. Absence of lifecycle-state information from a component can be taken as indicating that it does not exist so that the existence of a component can be used by another component in determining whether or not to change its current lifecycle state.
Coordination of life cycle transitions can be both on a sequential basis (one component only effects a particular transition after another component has transited to a specific lifecycle state), and/or on a simultaneous basis (two components effect respective particular transitions at substantially the same time upon a further component transiting to a specific lifecycle state).
The state-dissemination arrangement can be arranged to deliver the state information provided by all the resources to every resource user and manager. Preferably, however, each resource user and the or each resource manager is arranged to register with the state-dissemination arrangement to indicate its interest in particular state information, and the state-dissemination arrangement is arranged to use these registered interests to manage the dissemination of state information.
According to a second aspect of the present invention, there is provided a computer system comprising:
According to a third aspect of the present invention, there is provided a method of coordinating the lifecycle of computer system components arranged to operate according to a respective life cycle comprising a plurality of lifecycle states; the method comprising:
Embodiments of the invention will now be described, by way of non-limiting example, with reference to the accompanying diagrammatic drawings, in which:
The embodiments of the invention to be described hereinafter are based on the dissemination of state information about an entity of a system from that entity to other entities of the system.
The state-dissemination service 15 can be arranged simply to supply the state information it receives from any entity to every other entity; however, preferably, each entity that wishes to receive state information registers a state-information indicator with the state-dissemination service 15 to indicate the particular state information that it is interested in receiving. This indicator could, for example, simply indicate that the registering entity wants to receive all state information provided by one or more specified other entities; alternatively, the indicator could indicate the identity of the particular state information that the registering entity wants to receive regardless of the entity providing it. In this latter case, when state information is provided by an entity to the state-dissemination service 15, the providing entity supplies a state-information identifier which the service 15 seeks to match with the indicators previously registered with it; the provided state information is then passed by the state-dissemination service to the entities which have registered indicators that match the identifier of the provided state information.
Rather than this matching being effected by the state-dissemination service 15 at the time the state information is provided to it, entities that intend to provide state information to the service 15 are preferably arranged to register in advance with the service to specify state-information identifier(s) for the state information the registering entity intends to provide; the state-dissemination service 15 then seeks to match the registered identifiers with the registered indicators and stores association data that reflects any matches found. The association data can directly indicate, for each registered identifier, the entities (if any) that have registered to receive that information; alternatively, the association data can be less specific and simply indicate a more general pattern of dissemination required for the state information concerned (for example, where the entities are distributed between processing nodes, the association data can simply indicate the nodes to which the state information should be passed, it then being up to each node to internally distribute the information to the entities wishing to receive it). The association data is updated both when a new identifier is registered and when a new indicator is registered (in this latter case, a match is sought between the new indicator and the registered identifiers).
When an entity subsequently provides state information identified by a state-information identifier to the state-dissemination service, the latter uses the association data to facilitate the dissemination of the state information to the entities that have previously requested it by registering corresponding state-information indicators.
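By way of illustration only, the matching and dissemination just described might be sketched as follows; this is a minimal single-service sketch in Python, and all class, method and parameter names are assumptions made for the purpose of illustration rather than features of the described service.

```python
class StateDisseminationService:
    """Illustrative sketch of identifier/indicator matching (all names assumed)."""

    def __init__(self):
        self.identifiers = {}   # identifier -> set of registered providing entities
        self.indicators = {}    # indicator  -> set of registered interested entities
        self.association = {}   # identifier -> set of entities to receive that information

    def register_identifier(self, identifier, provider):
        # An entity registers in advance the identifier of state information it will provide.
        self.identifiers.setdefault(identifier, set()).add(provider)
        for indicator, listeners in self.indicators.items():
            if self._matches(identifier, indicator):
                self.association.setdefault(identifier, set()).update(listeners)

    def register_indicator(self, indicator, listener):
        # An entity registers an indicator of the state information it wishes to receive.
        self.indicators.setdefault(indicator, set()).add(listener)
        for identifier in self.identifiers:
            if self._matches(identifier, indicator):
                self.association.setdefault(identifier, set()).add(listener)

    def provide(self, identifier, state_info):
        # The stored association data is used to disseminate newly provided state information.
        for listener in self.association.get(identifier, ()):
            listener.receive(identifier, state_info)

    @staticmethod
    def _matches(identifier, indicator):
        # Full match, or a partial (prefix) match of the kind mentioned later in the text.
        return identifier == indicator or identifier.startswith(indicator)
```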
As will be more fully described below, where the entities are distributed between processing nodes, the state-dissemination service is preferably provided by an arrangement comprising a respective state-dissemination server entity at each node. In addition, where the state-dissemination service operates by generating association data from supplied state-information identifiers and indicators, preferably not only are the state-information identifiers and indicators associated with the entities at each node recorded in registration data held by that node, but the association data concerning the state-information identifiers registered by the node entities of that node is also stored at the node. Furthermore, each node preferably stores source data indicating, for each state-information indicator registered by the entities of that node, the origin of the corresponding state information. As will be explained hereinafter, by arranging for this local storage of registration data, association data and source data, a relatively robust and scalable state-dissemination service can be provided.
Each one of the entities 24 to 29 that intends to provide state information to the state-dissemination service is arranged to register a corresponding state-information identifier with the local SD server 50 (that is, with the SD server at the same node). To this end, each such entity instantiates a software “state provider” object P (generically referenced 40) and passes it the identifier of the state information to be provided to the state-dissemination service. The state provider object 40 is operative to register itself and the state-information identifier with the local SD server 50 and the latter stores this registration data in a local register 61; the state provider object 40 is also operative to subsequently provide instances of the identified state information to the SD server.
Similarly, each one of the entities 24 to 29 that wishes to receive particular state information from the state-dissemination service is arranged to register a corresponding state-information indicator with the local SD server 50 (that is, with the SD server at the same node). To this end, each such entity instantiates a software “state listener” object L (generically referenced 41) and passes it the indicator of the state information to be provided by the state-dissemination service. The state listener object 41 is operative to register itself and the state-information indicator with the local SD server 50 and the latter stores this registration data in the local register 61; the state listener object 41 is also operative to subsequently receive the indicated state information from the SD server.
It will be appreciated that the use of software state provider and listener objects 40 and 41 to interface the entities 24 to 29 with their respective SD servers 50 is simply one possible way of doing this.
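Purely as one hypothetical way of realising such an interface, provider and listener objects might be sketched as follows, re-using the assumed registration and delivery calls of the sketch above:

```python
class StateProvider:
    """Sketch of a state provider object P (cf. reference 40); names are assumptions."""

    def __init__(self, sd_server, identifier):
        self.sd_server = sd_server
        self.identifier = identifier
        # Register itself and the state-information identifier with the local SD server.
        sd_server.register_identifier(identifier, self)

    def provide(self, state_info):
        # Subsequently provide instances of the identified state information.
        self.sd_server.provide(self.identifier, state_info)


class StateListener:
    """Sketch of a state listener object L (cf. reference 41); names are assumptions."""

    def __init__(self, sd_server, indicator, on_state):
        self.indicator = indicator
        self.on_state = on_state
        # Register itself and the state-information indicator with the local SD server.
        sd_server.register_indicator(indicator, self)

    def receive(self, identifier, state_info):
        # Called when matching state information is delivered by the SD server.
        self.on_state(identifier, state_info)
```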
In the present example, regarding the provision of state information:
Regarding the receipt of state information:
The data registered by the or each state provider and/or listener associated with a particular node constitutes registration data and is held by the SD server of that node.
In this example, it can be seen that the same state-information labels S1, S2, and S3 have been used for the state-information identifiers and indicators; in this case, the matching of identifiers and indicators carried out by the state-dissemination service simply involves looking for a full match between an identifier and indicator. However, using exactly the same identifiers and indicators is not essential and matching based on parts only of an identifier and/or indicator is alternatively possible (for example, the state-dissemination service can be arranged to determine that a state-information indicator ‘abcd’ is a match for a state-information identifier ‘abcdef’). Furthermore, although not illustrated in the
The state-dissemination service provided by the SD servers 50A-C is arranged to derive association data and source data from the registered state-information identifiers and indicators. In the present case, the association data is used to indicate, for each state-information identifier, the SD server(s) where corresponding indicators have been registered; the source data is used to indicate, for each state-information indicator, the SD server(s) where corresponding identifiers have been registered (of course, the source data can also be considered to be a form of association data; however, the term ‘source data’ is used herein to distinguish this data from the above-mentioned data already labelled with the term ‘association data’). For each identifier, the corresponding association data is held by the SD server where the identifier is registered; similarly, for each indicator, the corresponding source data is held by the SD server where the indicator is registered. As will be more fully explained below with reference to FIGS. 3 to 5, the association data and source data are determined in the present example by making use of a global register 91, maintained by one of the SD servers, that records the SD server(s) where each identifier and indicator has been registered. The global register 91 is only used for compiling the association data and source data and its loss is not critical to the dissemination of state information in respect of previously registered state-information identifiers and indicators already taken account of in the association data held by operative SD servers; furthermore, the contents of the global register can be reconstituted from the registration data held by the operative SD servers.
The state manager 51 comprises a local registry 60, an outbound channel 70 for receiving state information from a local state provider 40 and passing this information on to other SD servers 50 as required, and an inbound channel 80 for distributing state information received from other SD servers 50 to interested local listeners 41. The state manager of one of the SD servers also includes a global registry 90; all SD servers have the capability of instantiating the global registry, and the servers agree amongst themselves, by any appropriate mechanism, which server is to provide the global registry. The global registry is not shown in the state manager 51 of
The local registry 60 comprises the local register 61 for holding the registration data concerning the local entities as represented by the local providers 40 and listeners 41, the association data for the state-information identifiers registered by the local providers 40, and source data for the state-information indicators registered by the local listeners 41. As depicted in
In the local provider table 65, for each identifier registered by a local provider 40, there is both a list of the or each local provider registering that identifier, and a list of every SD server, if any, where a matching state-information indicator has been registered. Table 65 thus holds the registration data for the local providers 40 and their associated identifiers, along with the association data concerning those identifiers.
In the local listener table 66, for each indicator registered by a local listener 41, there is both a list of the or each local listener registering that indicator, and a list of every SD server, if any, where a matching state-information identifier has been registered. Table 66 thus holds the registration data for the local listeners 41 and their associated indicators, along with the source data concerning those indicators.
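One possible in-memory representation of these two tables, offered only as an illustrative assumption, is:

```python
from dataclasses import dataclass, field

@dataclass
class ProviderEntry:
    """One row of the local provider table (cf. table 65); field names are assumed."""
    providers: set = field(default_factory=set)         # local providers registering the identifier
    listener_servers: set = field(default_factory=set)  # SD servers with matching indicators (association data)

@dataclass
class ListenerEntry:
    """One row of the local listener table (cf. table 66); field names are assumed."""
    listeners: set = field(default_factory=set)         # local listeners registering the indicator
    provider_servers: set = field(default_factory=set)  # SD servers with matching identifiers (source data)

local_provider_table = {}   # state-information identifier -> ProviderEntry
local_listener_table = {}   # state-information indicator  -> ListenerEntry
```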
With respect to the global registry 90 (
When a local provider 40 is first instantiated, a registration/deregistration functional element 42 of the provider 40 notifies the local registry 60 and the registration process proceeds as follows:
In a similar manner, when a local listener 41 is first instantiated, a registration/deregistration functional element 43 of the listener 41 notifies the local registry 60 and the registration process proceeds as follows:
With regard to the updating of the source data held in the local listener table 66 of each SD server 50 in response to the registration of a new provider 40 or listener 41, this is effected by the inbound channel 80 of each SD server when it receives state information in respect of an identifier that the registry 60 finds is a match for one or more state-information indicators in the table 66 (the handling of newly-received state information by the state manager 51 is described more fully below).
Rather than a newly registered listener having to wait for a change in state information for which it has registered before receiving that state information, provision can be made for providers of this information to send the current version of the state information of interest to the listener concerned (either by a dedicated exchange of messages or by the provider(s) being triggered to re-send their information via the state-dissemination arrangement).
The deregistration of a provider 40 or listener 41 is effectively the reverse of registration and involves the same functional elements as for registration. The main difference to note is that an identifier/indicator deregistration message is only sent from the local registry 60 to the global registry 90 if a state-information identifier or indicator is removed from the local provider table 65 or local listener table 66 (which is done when there ceases to be any associated provider or listener respectively).
In normal operation, upon an entity detecting a change in state information for which it has a provider 40 registered with its local registry 60, a functional element 44 of the provider notifies the outbound channel 70 of the local SD server that there is new state information in respect of the state-information identifier concerned. A functional element 72 of the outbound channel 70 then looks up, in the local provider table 65 of the registry 60, the association data for the identifier in order to ascertain the SD servers to which the new state information needs to be sent; the new state information is then distributed, together with its identifier, to these servers by functional element 74. This distribution will typically involve use of the communication services provided by block 53; however, where a local listener 41 (that is, one at the same node) has registered to receive the state information, then the functional element 74 simply passes it to the inbound channel 80 of the same server (see arrow 77 in
When an SD server 50 receives new state information, identified by a state-information identifier, from another SD server, it passes the information to the inbound channel 80 of the state manager 51. Upon new state information being received at the inbound channel 80 (whether from another SD server or from the local outbound channel), a functional element 82 of the inbound channel uses the identifier associated with the new state information to look up, in the local listener table 66, the listeners that have registered state-information indicators that match the identifier. The functional element 82 also checks that the SD server that sent the state information is in the list of provider SD servers for each matched indicator; if this is not the case, the list is updated (thereby updating the source data for the indicator concerned). A functional element 84 of the inbound channel is then used to distribute the received state information to the matched listeners 41, where it is received by respective functional elements 45 of the listeners.
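Drawing these outbound and inbound paths together, and building on the table sketch given earlier, the handling of new state information might be expressed roughly as follows; the method names and the transport call are assumptions, not a definition of the described elements 72, 74, 82 and 84.

```python
class StateManagerSketch:
    """Illustrative sketch of the outbound/inbound state-information paths."""

    def __init__(self, server_id, comms, provider_table, listener_table):
        self.server_id = server_id
        self.comms = comms                     # assumed communication services (cf. block 53)
        self.provider_table = provider_table   # identifier -> ProviderEntry (table 65)
        self.listener_table = listener_table   # indicator  -> ListenerEntry (table 66)

    def outbound(self, identifier, state_info):
        # cf. functional elements 72 and 74: use the association data to find target servers.
        entry = self.provider_table[identifier]
        for server in entry.listener_servers:
            if server == self.server_id:
                # A local listener has registered: pass directly to the inbound channel.
                self.inbound(self.server_id, identifier, state_info)
            else:
                self.comms.send(server, identifier, state_info)   # assumed point-to-point send

    def inbound(self, sending_server, identifier, state_info):
        # cf. functional elements 82 and 84: match indicators and deliver to listeners.
        for indicator, entry in self.listener_table.items():
            if identifier == indicator or identifier.startswith(indicator):
                entry.provider_servers.add(sending_server)   # keep the source data up to date
                for listener in entry.listeners:
                    listener.receive(identifier, state_info)
```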
As so far described, the state-dissemination arrangement of the
As will be described below with reference to
It may be noted that, for present purposes, any internal time delays in a node in passing state information received by an SD server to a listener or in notifying it that the information is no longer available, can be discounted. The communication timings between SD servers are therefore taken as being representative of the communication timings between entities (more specifically, between providers and matched listeners).
Considering first the TSD arrangement, the connection-timing functionality 56 added to the communications services block 53 comprises a respective timed-connection functional element 57 for checking the timing of communication between every other SD server and the subject SD server. This check involves checking that communication is possible between every other SD server and the subject server within a predetermined time value (for example, 3 seconds). To this end, every SD server is provided with a heartbeat message function 58 which broadcasts periodic messages, identifying the originating SD server, to every other server; this broadcast is, for example, effected using the UDP service provided by the block 53. When an SD server receives such a heartbeat message, it passes it to the timed-connection functional element 57 associated with the server that originated the heartbeat message. This functional element 57 thereupon resets a timer that was timing out a period equal to the aforesaid predetermined time value. Provided this timer is reset before time out, the connection with the corresponding server is considered to be timely. The interval between heartbeat messages is such that several such messages should be received by an associated timed-connection functional element 57 over a period equal to the predetermined time value, so that it is possible for a heartbeat message to be missed without the corresponding timer timing out.
In the event that the timer of a timed-connection functional element 57 times out, the state manager 51 of the same SD server is notified that timely communication with the server associated with that functional element 57 has been lost. The state manager 51 then uses the source data held in the local register 61 to determine which of the local listeners 41 were registered to receive state information from the SD server with which timely communication has been lost; these listeners are then informed that state information is no longer available from this server.
The heartbeat messages broadcast by an SD server 50 also enable a new SD server to announce itself to the existing SD servers, the connection-timing functionality 56 of each existing SD server being arranged to listen out for broadcast heartbeat messages from new SD servers and to instantiate a new timed-connection functional element 57 for each such server detected.
It will be appreciated that the above described way of checking communication timing is simply one example of how to carry out this task and many other ways are possible, for example, by the use of round trip timing or by time-stamping one-way messages using synchronized clocks at all SD servers.
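As a concrete illustration of the heartbeat/timer scheme described above (and not of the alternatives just mentioned), a timed-connection element might be sketched as follows; the use of a threading timer and the three-second default are assumptions.

```python
import threading

class TimedConnection:
    """Sketch of a timed-connection functional element 57 for one remote SD server."""

    def __init__(self, remote_server, on_untimely, timeout=3.0):
        self.remote_server = remote_server
        self.on_untimely = on_untimely   # assumed callback into the state manager 51
        self.timeout = timeout           # the predetermined time value
        self._timer = None
        self._reset()

    def heartbeat_received(self):
        # Each heartbeat (or substituting operational message) resets the timer.
        self._reset()

    def _reset(self):
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self._expired)
        self._timer.daemon = True
        self._timer.start()

    def _expired(self):
        # Timely communication with the remote server has been lost.
        self.on_untimely(self.remote_server)
```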
The operational messages passed between the SD servers (such as those used to distribute state information) are, in the present example, sent on a point-to-point basis using the TCP service provided by block 53. These messages are preferably also used for checking communication timing, temporarily substituting for the heartbeat messages.
The enhanced state-dissemination service provided by the TSD arrangement ensures that listeners only receive timely information. Furthermore, a state listener can assume that all other state listeners with an equivalent matching indicator will either see the same state information from a given provider within the aforesaid predetermined time limit or be notified that there is no such state information within the same time limit.
Considering next the TPSD arrangement, the partition manager 52 that is interposed between the communication services block 53 and the state manager 51 in each SD server implements a partition membership protocol and a leader election protocol. Suitable implementations of such protocols will be apparent to persons skilled in the art so only a brief description is given here.
The partition manager 52 uses three conceptual views of the SD servers that are participating in the state-dissemination service, each view being determined locally. The first, the connection set, is the set of connections between the subject SD server and other SD servers identified by the communication services block 53. The second view, the connection view 54, is derived directly from the connection set and represents SD servers that are potential members of a partition including the subject SD server. All SD servers in the connection set are admissible to the connection view 54, except those that are untimely or have recently been untimely. All partition managers 52 communicate their connection views 54 to each other whenever these views change, so each SD server has a copy of the connection view derived by every node in its own connection view—the fact that these connections are timely guarantees that the exchanges of connection views are timely.
The collection of connection views 54 known to the partition manager 52, including its own view, is used to derive the partition including the subject SD server. A partition manager 52 is said to be stable when its collection of connection views remains unchanged and all the views agree (i.e. they are all the same). When stable, the partition manager 52 sets the partition 55 to be the same as the local connection view. When unstable, the partition manager 52 reduces the partition by selectively evicting SD servers according to the changes. Each partition manager 52 derives its own partition, but the sharing of connection views and the function used to derive the partition provide the following properties:
The second property actually follows from the first (if two partitions are subsets of each other then clearly they are the same), and so these two properties really amount to one; the second is stated to emphasise the point that the partition managers either converge on the same partition or on distinctly different partitions that do not overlap. As a result, by the time one partition manager stabilizes, all SD servers that are excluded from its partition know that they are excluded; or rather, each derives its own partition that does not intersect it. The third property demonstrates that if the partition remains stable then all SD servers will eventually determine this.
The leader election protocol operates similarly to the partition protocol. As well as exchanging connection views 54, the partition managers 52 exchange leader candidates. Each manager re-evaluates its choice of leader when connection view changes occur, in such a way that they all choose the same leader. Conveniently, the leader SD server provides the global registry 90.
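The stability test, the derivation of the partition and the leader choice might be sketched, under the assumption of one particular eviction rule and one particular deterministic leader rule (neither of which is prescribed by the arrangement described), as follows:

```python
def derive_partition(own_view, peer_views):
    """Sketch of partition derivation from the collection of connection views.

    own_view   -- the local connection view 54 (a set of SD server ids)
    peer_views -- {server_id: that server's last reported connection view}
    """
    stable = all(view == own_view for view in peer_views.values())
    if stable:
        return set(own_view), True                 # partition 55 = local connection view
    # Assumed eviction rule when unstable: keep only servers whose reported view still agrees.
    partition = {s for s in own_view if peer_views.get(s, own_view) == own_view}
    return partition, False


def elect_leader(partition):
    """Assumed deterministic rule applied by every member, e.g. lowest server id."""
    return min(partition) if partition else None
```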
By arranging for each SD server 50 only to send registration messages to the global registry 90 of the same partition 55, the state listeners 41 only see state information from state providers 40 that are in the same partition as them.
The enhanced state-dissemination service provided by the TPSD arrangement enables a state listener to assume that all other state listeners with equivalent matching indicators either are in the same partition and see all the same state information within the given predetermined time limit, or are not in the same partition and do not see any of the same state information within the same time limit.
Listeners are informed by the SD servers when the partition has become unstable. If a provider provides state information s at time t to the TPSD service, then provided the partition remains stable, all interested listeners will receive the information s by time t+Δ. All such listeners can each then know by time t+2Δ that all other interested listeners have received the information s, because each will be aware by this time of any disruption of the partition that would have prevented another interested listener from receiving the information by time t+Δ.
Put another way, whenever an entity is informed by its local SD server that the partition of which it is a member is no longer stable, that entity knows that it cannot rely upon the receipt, by interested entities of the partition, of any item of lifecycle-state information which the entity itself has received within the immediately preceding time period of duration corresponding to 2Δ.
It may be noted that the TPSD service has the effect of partitioning the totality of state-information knowledge. When the partitions are stable, two entities either have access to the same knowledge partition or to non-overlapping knowledge partitions. So whatever state information the entities are interested in knowing, even if these are completely different items of state information, that knowledge will be consistent. Thus, if a first entity knows state information s by time t+Δ, then at time t+2Δ this entity knows that whatever state information a second entity knew by time t+Δ is consistent with the information s, whether it be the information s or something else altogether.
The state-dissemination arrangements described above, including all the variants mentioned, are suited for use in disseminating life-cycle state information between entities formed by components of a system and, in particular, between software components of a distributed computer system (by ‘software component’ is meant a component that can be instantiated and terminated as required and takes the form of one or more processes that work together to provide a particular function). As will be described below, this enables the lifecycle changes of such components to be coordinated.
The life cycle of a component can be expressed as a state machine, with a number of states and transitions between states. Typical states include “STARTING” during which a component is initializing, “STANDBY” during which a component is ready but not active, and “ACTIVE” when a component is actively performing its intended function. The life cycles of the components of a system are often inter-dependent. For example, one component may depend on a second component already being in a particular state before it can transition to its next state. These dependencies are frequently found in system instantiation or termination as in the following examples from a system comprising application server components and a database component:
By way of illustration,
Associated with each possible transition is an explicit or implicit set of one or more conditions that must be fulfilled before the transition can be executed. Condition set 104 in
The explicit condition set 104 shown in
All three conditions must be satisfied before the condition set 104 is fulfilled and the transition 102 can be taken. The condition set 104 is given simply by way of example and it is to be understood that condition sets associated with other state transitions can contain more or fewer conditions as required.
With respect to the management trigger condition, this condition, if present, requires that a particular management input has been received at a management interface of the component concerned. The required management input is, for example, a specific direction or authorisation to transit to the lifecycle state “Y” (that is, the lifecycle state reached by the transition governed by the condition set 104 comprising the management trigger condition). A further example of a required management input is a direction or authorisation to transit lifecycle states until a specific state is reached where this specific state is other than the current lifecycle state “X” of the component concerned.
With regard to the condition concerning the existence or current lifecycle state of each of at least one other component of the system, this type of condition enables the lifecycles of the system components to be coordinated. To this end, each component is arranged to maintain a state variable indicative of its current lifecycle state and to provide this lifecycle state information to the state-dissemination arrangement for delivery to other components that may be interested (generally because the current lifecycle state of the providing component forms part of a lifecycle state transition condition set, such as condition set 104). In the present case, the component 100 is arranged to receive from the state-dissemination arrangement the lifecycle state information it needs for checking the corresponding condition. With regard to determining the existence or otherwise of another component, absence of lifecycle-state information from a component is taken as indicating that the component concerned does not exist.
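To make the mechanics concrete, a life cycle manager driven by such condition sets might be sketched as follows; the way conditions are expressed as callables, and all names, are illustrative assumptions rather than a definition of the life cycle managers described below.

```python
class LifecycleManagerSketch:
    """Illustrative sketch of a component's life cycle manager (all names assumed)."""

    def __init__(self, initial_state, transitions):
        # transitions: {(from_state, to_state): [condition, ...]}, where each condition
        # is a callable taking this manager and returning True when satisfied.
        self.state = initial_state
        self.transitions = transitions
        self.observed = {}             # other component id -> last observed lifecycle state
        self.management_inputs = set()

    def on_state_info(self, component_id, lifecycle_state):
        # Lifecycle state information delivered by the state-dissemination arrangement.
        self.observed[component_id] = lifecycle_state
        self._try_transitions()

    def on_management_input(self, directive):
        # A management trigger received at the component's management interface.
        self.management_inputs.add(directive)
        self._try_transitions()

    def _try_transitions(self):
        progressed = True
        while progressed:
            progressed = False
            for (src, dst), conditions in self.transitions.items():
                if src == self.state and all(cond(self) for cond in conditions):
                    self.state = dst      # a full implementation would now provide the
                    progressed = True     # new lifecycle state to the dissemination arrangement
                    break


def other_component_in_state(component_id, required_state):
    """Condition of the kind in condition set 104: another component must exist and be in
    the required state (absence of information is taken to mean non-existence)."""
    return lambda mgr: mgr.observed.get(component_id) == required_state
```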
Each life cycle manager 130 is arranged to instantiate a state provider 40 for providing the current lifecycle state of the component of which it forms a part to the local SD server.
Thus:
Each provider 40J, 40K and 40L is arranged to provide its associated lifecycle state information upon a change in the current lifecycle state of the component concerned.
Each life cycle manager 130 is further arranged to instantiate a listener 41 for each other component from which it wishes to receive current lifecycle state information as a result of the current lifecycle state (or the existence or non-existence) of that component being in a transition condition set governing the lifecycle transitions of the component of which the life cycle manager forms a part. In the present example, both the components 121 and 122 wish to know the current lifecycle state of the component 120 and their lifecycle managers have accordingly instantiated listeners 41J and 41K respectively, both listeners being in respect of state-information indicator S120.
In the simplest case where the presence of component 120 in a particular state “Z1” is used by both components 121 and 122 as the sole condition for transiting from respective states “Z2” and “Z3”, then when the component 120 is not initially in its state “Z1” and the components 121 and 122 are in their respective states “Z2” and “Z3”, the life cycle managers 130 of the components 121 and 122 will both be waiting to receive lifecycle state information from component 120, via the state-dissemination arrangement, indicative of that component entering its state “Z1”. As soon as this happens, the components 121 and 122 are informed and transit out of their respective states “Z2” and “Z3”. This simple example illustrates that coordination of life cycle transitions can be both on a sequential basis (component 121/122 only effects its transition after the component 120 has transited to a specific lifecycle state), and/or on a simultaneous basis (two components 121, 122 effect respective transitions at substantially the same time upon the component 120 transiting to a specific lifecycle state). However, as will be more fully discussed below, a component can only rely on coordination on a simultaneous basis where the state-dissemination arrangement is the TPSD arrangement, because only in this case can the component be sure that the lifecycle-state information it observes is also observed by all other interested components within a predetermined time limit.
The components observe the following consistency properties depending on the type of dissemination service used:
A particularly useful application of lifecycle coordination concerns fully distributed startup coordination to instantiate an entire system. In this case, a running state dissemination service needs to be present which the components can use to announce their own lifecycle state values and observe the lifecycle state values of others. All components understand their own life cycle and their transition constraints are encoded as predicates associated with the life cycle transitions. All the components can be deployed immediately without any coordination and instructed, via their management interfaces, to perform the transitions that take them to their running state; each component will determine when to perform its own transitions as the appropriate transition condition sets are satisfied. As an alternative to the components being instructed to transit to their running states, the components can simply be arranged to effect whatever transitions become valid as a result of the corresponding condition sets being satisfied (the condition sets not including any required management input).
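Continuing the illustrative sketch given earlier, a fully distributed startup of a hypothetical two-component system (a database and an application server, as in the earlier examples) might proceed along these lines; the component names and conditions are assumptions.

```python
# The database starts when directed via its management interface; the application
# server only leaves STARTING once it observes the database to be ACTIVE.
db = LifecycleManagerSketch("STARTING", {
    ("STARTING", "ACTIVE"): [lambda mgr: "start" in mgr.management_inputs],
})
app = LifecycleManagerSketch("STARTING", {
    ("STARTING", "ACTIVE"): [other_component_in_state("db", "ACTIVE")],
})

db.on_management_input("start")        # db satisfies its condition set and transits to ACTIVE
app.on_state_info("db", db.state)      # the dissemination service would deliver db's new state
assert app.state == "ACTIVE"           # app transits as soon as its condition set is satisfied
```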
Components can make other types of state information, additional to current lifecycle state information, available either by providing it in association with the lifecycle state information or by instantiating additional state providers for that information. As an example, an application server component may provide information about its current workload. This information can then be used in the transition condition sets of other components.
This example assumes that there are no partition changes throughout. Starting one such component would lead to it progressing through STARTED, STANDBY, and ACTIVE_ALONE, finally reaching ACTIVE after starting a second, replicate, component. The second component will reach the STANDBY state. Therefore the normal running configuration has one ACTIVE component and one STANDBY component.
If the component in the ACTIVE state fails, the other component would transit to ACTIVE_ALONE, create a new standby, and then transit to ACTIVE. If the component in STANDBY fails, the other will return to ACTIVE_ALONE to create a new standby component, and then transit back to ACTIVE.
The components provide their system function so long as one of them is in the ACTIVE or ACTIVE_ALONE state and so only a simultaneous failure of both components takes the function out of service.
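Using the same illustrative sketch, the transition predicates implied by the behaviour just described for such a replicated pair might be written roughly as follows; the predicates (and the idea of identifying the peer by a single id) are assumptions inferred from the description rather than part of it.

```python
def peer_state(mgr, peer_id):
    # Absence of lifecycle-state information is taken to mean the peer does not exist.
    return mgr.observed.get(peer_id)

def make_replica(peer_id):
    return LifecycleManagerSketch("STARTED", {
        ("STARTED", "STANDBY"):
            [lambda mgr: True],
        # Become ACTIVE_ALONE only when no peer is providing the system function.
        ("STANDBY", "ACTIVE_ALONE"):
            [lambda mgr: peer_state(mgr, peer_id) not in ("ACTIVE", "ACTIVE_ALONE")],
        # Become fully ACTIVE once a standby replica is known to exist.
        ("ACTIVE_ALONE", "ACTIVE"):
            [lambda mgr: peer_state(mgr, peer_id) == "STANDBY"],
        # If the standby disappears, fall back to ACTIVE_ALONE and recreate it.
        ("ACTIVE", "ACTIVE_ALONE"):
            [lambda mgr: peer_state(mgr, peer_id) is None],
    })
```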
The embodiments described above with reference to FIGS. 7 to 9 provide a fully distributed approach to coordinating component life cycles. There is no central control that needs to gather or maintain information about component states purely to coordinate transitions, or that can fail and render the system temporarily or permanently inoperable. Furthermore, the component life cycle dependencies are declarative and there is no need to derive an explicit sequence of component transitions that satisfy the dependency constraints. As indicated, the system can be created by randomly creating all the components and letting them organize themselves. As a result the mechanism that creates the system can do its job without being involved in the coordination of startup.
It will be appreciated that many variants are possible to the above described embodiments of the invention. For example, the implementations of the state-dissemination arrangement described with reference to FIGS. 2 to 6 are by way of example and other implementations are possible, particularly with respect to how the interest of an entity in particular state information is associated with the source(s) of such information.
Whilst components are preferably arranged to provide their lifecycle state information to the state-dissemination service whenever this lifecycle state information changes, the lifecycle state information can additionally or alternatively be provided to the state-dissemination service in other circumstances, such as at regular time intervals.
It will be appreciated that the SD servers and components described above will typically be implemented using appropriately programmed general purpose program-controlled processors and related hardware devices (such as storage devices and communication devices). However, other implementations are possible.
The state-dissemination arrangements described herein can be used for disseminating other types of state information in addition, or alternatively, to lifecycle state information.