The present invention relates generally to a computing system deployment method. In particular, the invention relates to a computing system deployment method for deploying and migrating computing system components.
Conventional computing systems, for example enterprise applications, typically possess multi-tier architectures. Unlike standalone computing systems in the past, such computing systems provide specialized solutions catering to different business aspects within an organization or across geographically distant installations. The elaborate structure of these computing systems gives rise to a vast quantity of heterogeneous back-end computing.
Management of the computing systems in order to maintain architectural integrity and performance of the computing systems is critical for providing availability of business services to users, for example customers.
The aspects of the computing systems typically requiring management include the deployment and configuration of computing system services, diagnosis of system functionality, maintenance of the integrity of component dependencies within a computing system, and the monitoring and balancing of computing system component loading for improving computing system performance.
In the course of managing the computing systems, a situation requiring components of a computing system to be moved between two host systems residing at different locations may arise. Alternatively, new resources may be made available to the host system within which the computing systems reside. In both these situations, there is a need to reconfigure a previously configured host system. In most cases, the deployment of a computing system or its components requires complicated procedures that require specialized training in the computing system being installed, as system integrity of the host system has to be preserved at all times.
A computing system typically undergoes several configuration changes and a few versions of its associated components in the course of its life. Once a computing system is deployed within a host system and becomes operational, it will undergo further component replacements, enhancements and expansion in scale.
Maintaining the dependencies and the integrity of a large-scale computing system becomes problematic as different components of the computing system are typically provided by different vendors. Furthermore, maintenance of inter-connected host systems, computing systems or their components needs to be performed by an administrator who is deploying the computing system. In such a situation, the dependencies and inter-connection requirements are provided to the administrator in the form of instructional manuals. Further knowledge of the requirements and limitations of each host system, computing system or its components is dependent on the experience and tacit capability of the administrator.
It is therefore desirable to have a common way of capturing or specifying all this information in a structured way, so that the discovery of dependencies can be automated.
A conventional method of deploying a computing system is to remove a computing system from its current deployed location and to deploy a copy of the computing system in its new environment. The dynamic contents generated during the lifetime of the computing system in its previous location would be manually copied to the new location. This will require the presence of an expert of the computing system to be deployed. The extent of the expert's contribution is to make the necessary changes to allow the system or computing system to function. This however, does not establish compatibility of the computing system with other deployed computing systems. As a result, this may expose the host system to integrity loss.
Another method requires the utilisation of a group of experts, for example computing system integrators, to work out a plan for migrating or deploying multi-vendor computing systems and components. Fundamentally, this method is similar to the aforementioned method. These methods require experts to oversee and manage the deployment or migration process, leading to high cost, high consumption of time and effort, and the possibility of future deployment error.
Hence, there is clearly a need for a computing system deployment method for migrating and deploying applications and their components.
Therefore, in accordance with a first aspect of the invention, there is disclosed a computing system deployment method comprising the steps of:
In accordance with a second aspect of the invention, there is disclosed a computing system deployment model comprising:
Embodiments of the invention are described hereinafter with reference to the following drawings, in which:
A computing system deployment method for addressing the foregoing problems is described hereinafter.
An embodiment of the invention, a computing system deployment method (not shown) is described with reference to
The computing system deployment method is preferably for deploying a computing system onto the host system 20, the host system being computer-based and typically comprising a plurality of geographically dispersed sub-systems. A plurality of components, hardware and software, resides within the host system 20. These components are organised into one of service layer 30, system layer 32 and resource layer 34 within the host system 20 as shown in
The service layer 30 contains a plurality of service components 36 as shown in
Allocated in the resource layer 34 are resource components 44 as shown in
The service components 36, the system components 40, and the resource components 44 corresponding to and being grouped within the service cluster 38, the system cluster 42 and the resource cluster 46, can be further grouped into sub-clusters (not shown). For example, the service components 36 within a service cluster 38 are further grouped into sub-clusters based on domain requirements, with each sub-cluster of service components 36 providing service support to other service components 36 within a particular domain.
Associated with each service component 36 is a service profile 48 as shown in
The service profile 48 further contains a list of access controls 56 specifying the ability of a service component 36 contained in another service cluster 38 to access the service component 36 with which the service profile 48 is associated, and vice-versa. The access controls 56 are conventionally provided by the vendors of the service components 36 to prevent the service components 36 supplied by one vendor from accessing or being accessed by service components 36 supplied by another vendor.
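By way of a minimal sketch (the class and function names are illustrative assumptions, not part of the disclosure, and the invention does not prescribe an implementation language), the access controls 56 may be consulted as follows:

```python
# Hypothetical sketch: consulting the access controls 56 of a service profile
# before a service component in another cluster is permitted access.

class ServiceProfile:
    def __init__(self, component_id, cluster_id, access_controls):
        self.component_id = component_id
        self.cluster_id = cluster_id
        # Clusters whose service components may access the component with
        # which this profile is associated (vendor-provided in practice).
        self.access_controls = set(access_controls)

def can_access(requester_cluster_id, target_profile):
    """A service component in another cluster may access the target
    component only if its cluster appears in the access controls."""
    return requester_cluster_id in target_profile.access_controls

profile = ServiceProfile("svc-billing", "cluster-vendor-a", ["cluster-vendor-a"])
# A component within the same vendor's cluster is permitted access...
assert can_access("cluster-vendor-a", profile)
# ...while one from another vendor's cluster is not.
assert not can_access("cluster-vendor-b", profile)
```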
A system profile 58 is associated with each system component 40 as shown in
The system profile 58 further contains a list of access controls 66 specifying the ability of the resource components 44 or system components 40 contained in another system cluster 42 to access the system components 40 with which the system profile 58 is associated, and vice-versa. The access controls 66 are conventionally provided by the vendors of the system components 40 to prevent the system components 40 supplied by one vendor from accessing or being accessed by system components 40 supplied by another vendor.
A resource profile 70 is associated with each resource component 44 as shown in
The resource profile 70 further contains a list of access controls 78 specifying the ability of a resource component 44 contained in another resource cluster 46 to access the resource component 44 with which the resource profile 70 is associated, and vice-versa. The access controls 78 are conventionally provided by the vendors of the resource components 44 to prevent the resource components 44 supplied by one vendor from accessing or being accessed by resource components 44 supplied by another vendor.
Each of the service profiles 48, system profiles 58 and resource profiles 70 contains one of application-specific, vendor-specific or domain-specific data (not shown) for facilitating customisation of the computing system deployment method. Preferably, each of the service profiles 48, system profiles 58 and resource profiles 70 further contains a profile security envelope (not shown) for protecting the contents of the service profiles 48, system profiles 58 and resource profiles 70 from unauthorised access thereto. Access to the contents of the service profiles 48, system profiles 58 and resource profiles 70 is permitted only when a valid authentication (not shown) is provided in accordance to the profile security envelope. The profile security envelope further facilitates implementation of access policies for different users.
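The profile security envelope described above can be sketched as follows. This is an illustration under assumed names only; the specification does not mandate a particular authentication mechanism, and a hash-compared credential is used here purely as an example:

```python
# Hypothetical sketch of a profile security envelope: profile contents are
# returned only when a valid authentication is provided.
import hashlib

class SecuredProfile:
    def __init__(self, contents, secret):
        self._contents = contents
        # The envelope stores only a digest of the credential, never the
        # credential itself.
        self._digest = hashlib.sha256(secret.encode()).hexdigest()

    def read(self, credential):
        """Permit access to the profile contents only in accordance with
        the envelope; otherwise refuse unauthorised access."""
        if hashlib.sha256(credential.encode()).hexdigest() != self._digest:
            raise PermissionError("invalid authentication")
        return self._contents

profile = SecuredProfile({"description": "HTTP service"}, secret="s3cret")
assert profile.read("s3cret")["description"] == "HTTP service"
```

Different access policies for different users could then be layered on top by issuing distinct credentials per user class.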
The corresponding association restrictions 54/64/76 of each of the service profile 48, system profile 58 and resource profile 70 further provide information on potential and known conflicts. The information on the conflicts allows the conflicts to be properly managed or alleviated during the deployment of the computing system.
The corresponding access controls 56/66/78 of each of the service profile 48, system profile 58 and resource profile 70 may be utilised for marketing, political, security or operational reasons. The access controls 56/66/78 allows for further policies on access and associations to be provided therein.
Further specified in each of the service profile 48, system profile 58 and resource profile 70 is a list of corresponding contract specification 57a/67a/79a, a list of corresponding ownership indicator 57b/67b/79b, a list of corresponding component history 57c/67c/79c, and a list of corresponding cost specifications 57d/67d/79d as shown in FIGS. 5 to 7.
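Gathering the fields enumerated so far, a profile record might be sketched as below. The field names mirror the reference numerals of the specification; the types and structure are illustrative assumptions only:

```python
# Illustrative sketch of a profile (service, system or resource) carrying
# the fields described in the specification. Types are assumptions.
from dataclasses import dataclass, field

@dataclass
class Profile:
    description: str                 # description 50/60/72
    association_requirements: list   # 52/62/74: components depended upon
    association_restrictions: list   # 54/64/76: potential and known conflicts
    access_controls: list            # 56/66/78: clusters permitted access
    contract_specification: dict     # 57a/67a/79a: information required for access
    ownership_indicator: dict        # 57b/67b/79b: owner -> relative priority
    component_history: list = field(default_factory=list)   # 57c/67c/79c
    cost_specifications: dict = field(default_factory=dict) # 57d/67d/79d

profile = Profile(
    description="HTTP service",
    association_requirements=["db-engine"],
    association_restrictions=[],
    access_controls=["cluster-a"],
    contract_specification={"alias": "required"},
    ownership_indicator={"System A": 1},
)
assert profile.component_history == []  # no deployments recorded yet
```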
The contract specification 57a/67a/79a states the information to be provided to a service component 36, system component 40 or resource component 44 by another corresponding service component 36, system component 40 or resource component 44 respectively in order for the latter to access the former.
An application of the contract specification 57a/67a/79a is illustrated using a hypertext transfer protocol (HTTP) server (not shown). In this HTTP server example, the system component 40 of an Apache HTTP server (not shown) requires a valid alias and a root directory location to be specified for access thereto. The valid alias and root directory location requirements are stated in the contract specification 67a of the system profile 58 associated with the system component 40 of the Apache HTTP server. Therefore, a service component 36 of an Enterprise server (not shown) requiring access to the system component 40 of the Apache HTTP server has to be provided with the information required by the contract specification 67a thereof. The service component 36 of the Enterprise server then provides the required valid alias and root directory location to the system component 40 of the Apache HTTP server for access thereof, in accordance with the association requirements 52 of the service profile 48 of the service component 36.
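The Apache HTTP server example can be sketched as a simple fulfilment check. The field names below are hypothetical; only the alias and root-directory requirements come from the example in the text:

```python
# Sketch: checking whether an accessing component supplies every item of
# information stated in the target's contract specification.

def contract_satisfied(contract_specification, provided):
    """Access is permitted only when all required information is provided."""
    return all(key in provided for key in contract_specification)

# Contract specification 67a of the Apache HTTP server's system component:
apache_contract = ["alias", "root_directory"]

# The Enterprise server's service component supplies the required items:
request = {"alias": "/app", "root_directory": "/var/www/app"}
assert contract_satisfied(apache_contract, request)

# Access is refused if, say, the root directory location is missing:
assert not contract_satisfied(apache_contract, {"alias": "/app"})
```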
The ownership indicator 57b/67b/79b indicates one or more owners of the service component 36, system component 40 or resource component 44 and the relative priority that each owner has over the respective service component 36, system component 40 or resource component 44 based on the configuration of the deployment. The owner is one or more of any combination of a system including the host system 20, a cluster including the service cluster 38, system cluster 42 and resource cluster 46, and a component including the service component 36, system component 40 and resource component 44.
The component history 57c/67c/79c of a component, for example the service component 36, system component 40 or resource component 44, tracks the current and past configurations the component is deployed upon. The component history 57c/67c/79c further reflects the dependency of other components on the component. The component history 57c/67c/79c is further used for restoring and archiving of deployed computing systems. This enables any corruption to the computing system or the components therein to be rectified by enabling redeployment or restoration of the computing system to its most recent pre-corrupted state.
The ownership indicator 57b/67b/79b and component history 57c/67c/79c are applicable within a system, for example, System A (not shown). In this example, Component B, a system component 40, is configured using a first deployment configuration for use by System A. When another system, for example System C (not shown), requires Component B (not shown) to be configured using a second deployment configuration for use thereby, the component history 67c of Component B is consulted upon. The component history 67c indicates that System A is depended thereon and configured under the first deployment configuration. Next, the ownership indicator 76b is checked for any configuration conflict. If the first deployment configuration is in conflict with System B or the second deployment configuration is in conflict with System A, the relative priorities of both System A and System B are compared. If System A is declared as the main owner of Component B within the ownership indicator 76b thereof and therefore has a higher priority relative to System B, the first deployment configuration is maintained and System B is restricted from configuring Component B for use thereby. However, if there are no configuration conflicts between System C, Component B and System A, the association restrictions 64 of Component B is checked to ensure that System C is not prohibited from accessing Component B.
The list of cost specifications 57d/67d/79d specifies the corresponding cost of using each of the service components 36, system components 40 and resource components 44. The cost of using a component includes virtual memory usage (for example random access memory or RAM), physical storage usage (for example a hard disk drive), the physical storage expansion requirements with respect to time, and similar system resource requirements. The cost specifications 57d/67d/79d allow an administrator of a system to decide upon the viability of installing a component or a cluster of components while considering the current and future impact on system resource requirements if the component is installed.
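A viability assessment over the cost specifications 57d/67d/79d might be sketched as below. The resource names and units are assumptions for illustration:

```python
# Sketch: summing each resource cost across the components to be installed
# and comparing the total against what the host system currently offers.

def deployment_viable(cost_specifications, available):
    required = {}
    for spec in cost_specifications:
        for resource, amount in spec.items():
            required[resource] = required.get(resource, 0) + amount
    return all(required[r] <= available.get(r, 0) for r in required)

costs = [
    {"ram_mb": 512, "disk_mb": 2048},   # a service component
    {"ram_mb": 256, "disk_mb": 8192},   # a system component
]
# The host offers enough RAM and storage for both components:
assert deployment_viable(costs, {"ram_mb": 1024, "disk_mb": 16384})
# With only 512 MB of RAM available, the deployment is not viable:
assert not deployment_viable(costs, {"ram_mb": 512, "disk_mb": 16384})
```

Projected expansion over time could be folded in by scaling the disk figures before the comparison.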
Referring to
A resource map 88 is associated with the resource layer 34 as shown in
A service map 94 is associated with the service layer 30 as shown in
Prior to the deployment of the computing system onto a host system 20, a deployment manager 100 residing in the host system 20, as shown in
The association requirements 52 for each service component 36 are obtained from the associated service profile 48. The system components 40 available in the system layer 32 of the host system 20 are matched with the association requirements 52 of the service components 36. If any of the system components 40 specified in the association requirements 52 are not available on the host system 20, the administrator is immediately prompted for further instructions. If the association requirements 52 are satisfied, the association restrictions 64 of the required system components 40 are checked for any conflicts between the system components 40 and the service components 36 to be installed.
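The matching step can be sketched as follows, using illustrative names; in practice the deployment manager 100 would perform this check against the system layer 32:

```python
# Sketch: matching the association requirements 52 of the service components
# against the system components available in the system layer.

def unmet_requirements(service_profiles, available_system_components):
    """Return the required system components absent from the host system,
    so the administrator can be prompted for further instructions."""
    missing = []
    for profile in service_profiles:
        for requirement in profile["association_requirements"]:
            if requirement not in available_system_components:
                missing.append(requirement)
    return missing

profiles = [{"association_requirements": ["http-server", "db-engine"]}]
# All requirements satisfied: nothing to prompt for.
assert unmet_requirements(profiles, {"http-server", "db-engine"}) == []
# A missing database engine is reported to the administrator.
assert unmet_requirements(profiles, {"http-server"}) == ["db-engine"]
```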
Availability of information required is assessed in accordance to the corresponding contract specification 57a/67a of the service components 36 and system components 36. If information is inadequate, the deployment manager 100 prompts the administrator to provide the deployment manager 100 with more information.
If no conflict arises, the deployment manager 100 proceeds to deploy the computing system onto the host system 20. The host system 20 typically includes one or more physical systems deployed within or across multiple geographical locations, for example, an instance of a single computing system having multiple computing nodes. First, a new service cluster 38 is generated in the service layer 30 to accommodate the service components 36 provided by the computing system if the required service cluster 38 is unavailable. A cluster profile 80 is also generated for the new service cluster 38 for association with the newly generated service cluster 38. Next, the service components 36 and their associated service profiles are deployed onto the service layer 30. The description 82 of the new service cluster 38 and the function descriptor 84 within the cluster profile 80 are updated in accordance to the information contained in the service profiles 48 of the service components 36.
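The deployment sequence above can be sketched as below. The dictionary layout standing in for the service layer 30 and cluster profile 80 is an assumption for illustration:

```python
# Sketch of the deployment sequence: generate a new service cluster (and its
# cluster profile) if unavailable, deploy the service components, then update
# the cluster profile from the service profiles.

def deploy(service_layer, cluster_name, components):
    if cluster_name not in service_layer:
        # Generate the service cluster and its associated cluster profile.
        service_layer[cluster_name] = {
            "components": [],
            "cluster_profile": {"description": cluster_name,
                                "function_descriptor": []},
        }
    cluster = service_layer[cluster_name]
    # Deploy the service components and their associated service profiles.
    cluster["components"].extend(components)
    # Update the function descriptor in accordance with the service profiles.
    cluster["cluster_profile"]["function_descriptor"] = [
        c["description"] for c in cluster["components"]
    ]
    return service_layer

layer = deploy({}, "billing", [{"description": "invoice service"}])
assert "billing" in layer
assert layer["billing"]["cluster_profile"]["function_descriptor"] == ["invoice service"]
```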
Based on the description 50 of the service components 36, the deployment manager identifies an adaptor 102 required for deploying the service components 36. The adaptor 102 shown in
Next, the service address list 96 of the service map 94 is updated with the locations of the newly deployed service components 36 within the host system 20, for example, an instance of the aforementioned computing system with multiple computing nodes. The service dependency list 98 of the service map 94 is also updated with the new associations between the service components 36 and the system components 40. All activities undertaken by the deployment manager 100 to deploy the computing system onto the host system 20 are recorded in a deployment profile 106. The component histories 57c/67c/79c of the corresponding service components 36, system components 40 and resource components 44 are updated with the new associations and configurations derived therefrom.
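The service map update might be sketched as follows; the structure names are illustrative:

```python
# Sketch: after deployment, the service address list 96 records where each
# newly deployed service component resides, and the service dependency
# list 98 records its new associations with system components.

def update_service_map(service_map, deployed):
    for component, node, depends_on in deployed:
        service_map["addresses"][component] = node
        service_map["dependencies"].setdefault(component, []).extend(depends_on)
    return service_map

service_map = {"addresses": {}, "dependencies": {}}
update_service_map(service_map, [("svc-invoice", "node-3", ["http-server"])])
assert service_map["addresses"]["svc-invoice"] == "node-3"
assert service_map["dependencies"]["svc-invoice"] == ["http-server"]
```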
The deployment manager 100 allows the administrator to test the viability of configuring and deploying a specific computing system onto the host system 20. Furthermore, the cost specifications 57d/67d/79d allow the administrator to assess current and future resource requirements for the deployment. This preventive approach is preferred over a rectification approach of trying to solve a compatibility problem only after the deployment of the computing system onto the host system 20.
During the life of the computing system, changes are made to the service components 36, the system components 40, the resource components 44 and the associations therebetween. The requests for these changes are monitored and verified by the deployment manager 100, which readily updates one or more of the affected service profiles 48, system profiles 58, resource profiles 70, cluster profile 80, resource map 88 and service map 94.
When a need arises for a component (for example the service component 36, system component 40 or resource component 44), the components within a cluster (the service cluster 38, system cluster 42 or resource cluster 46), a cluster, or a computing system to be migrated from the host system 20 to a new system (not shown), system integrity has to be maintained for both the host system 20 and the new system. The first phase of migrating the computing system requires that all its service components 36 and its associated components be duplicated on the new system. Using the cluster profile 80 of the service cluster 38 containing the service components 36 of the computing system, the associated service profiles 48 and system profiles 58 are used for duplicating the configuration of the computing system in the host system 20 onto the new system. This allows any changes made to the service components 36 of the computing system to be maintained in the new system without the need for manual reconfiguration of a fresh deployment of the computing system onto the new system.
Once the computing system is deployed onto the new system, the second phase of migrating the computing system requires the removal of the service components 36 residing in the host system 20. In order for the system integrity of the host system to be maintained, the deployment manager has to utilise the information stored within the deployment profile 106 of the computing system and the component history 57c/67c/79c of each corresponding service component 36, system component 40 and resource component 44. Furthermore, removal of the computing system requires information from the service map 94, the resource map 88 and the ownership indicators 57b/67b/79b. This prevents components associated with other computing systems from being removed during the migration process.
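The two migration phases might be sketched as below. The ownership bookkeeping is an illustrative assumption standing in for the deployment profile 106 and ownership indicators:

```python
# Sketch of the two-phase migration: duplicate the computing system's
# components on the new system, then remove from the host system only
# those components not owned by other computing systems.

def migrate(host, new_system, system_name, ownership):
    # Phase 1: duplicate the components and their configurations.
    moved = {c: cfg for c, cfg in host.items() if system_name in ownership[c]}
    new_system.update(moved)
    # Phase 2: consult the ownership indicators and remove only components
    # exclusively owned by the migrating system, preserving host integrity.
    for component in moved:
        if ownership[component] == {system_name}:
            del host[component]
    return host, new_system

host = {"svc-a": "cfg1", "sys-shared": "cfg2"}
ownership = {"svc-a": {"App X"}, "sys-shared": {"App X", "App Y"}}
host, target = migrate(host, {}, "App X", ownership)
assert "svc-a" not in host       # exclusively owned: removed from the host
assert "sys-shared" in host      # shared with App Y: retained on the host
assert target == {"svc-a": "cfg1", "sys-shared": "cfg2"}
```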
In the foregoing manner, a computing system deployment method is described according to an embodiment of the invention for addressing the foregoing disadvantages of conventional computing system deployment methods. Although only one embodiment of the invention is disclosed, it will be apparent to one skilled in the art in view of this disclosure that numerous changes and/or modification can be made without departing from the scope and spirit of the invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/SG02/00095 | 5/16/2002 | WO |