The present invention is related to increasing utilization of server computer capacity. More particularly, the present invention is related to computer-implemented methods, configurations, systems, and computer program products for configuring and deploying portable application containers in a shared platform environment.
Deploying an application, such as an internal business application, to a conventional standalone server can present challenges involving low server capacity utilization, limited availability, long deployment times, and high overall costs. High costs are usually associated with multiple and underutilized servers that have lengthy design and build cycles. Typically, each deployment project acquires a server that is custom built by hand. Thus, numerous server designs, each pairing one application with one server, lend themselves to very poor utilization. Consequently, as the number of discrete servers increases, costs can scale linearly without gaining any cost efficiencies. Also, inconsistencies in build techniques and software lead to increased support costs and operational challenges. And physical partitioning results in wasted compute capacity.
Typically, with conventional systems, the design and build process involves a requester contacting an administrator for the system and attempting to describe application requirements over the phone or over e-mail. The administrator attempts to design and build the application according to the request and may or may not get it right. For instance, some details may have been left out, requiring the administrator to get feedback from the requester and potentially start over. Normally, the administrator figures out what is needed for each requested feature or requirement per request. The administrator then types appropriate commands on each requested server. The administrator may accidentally type a command differently, or in a slightly different order, on one server than on another. This activity can amount to hundreds of tasks for an administrator, making the process highly interactive and non-repeatable.
Accordingly, there is an unaddressed need in the industry to address the aforementioned and other deficiencies and inadequacies.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is the Summary intended for use as an aid in determining the scope of the claimed subject matter.
In accordance with embodiments of the present invention, methods, configurations, systems, and computer program products configure and deploy portable application containers (PACs) in a shared platform application container environment. Embodiments of the present invention develop and implement a pre-provisioned, sustainable, and shared computing infrastructure that provides a standardized approach for efficiently deploying applications. The computing infrastructure is centrally managed, and its support is shared. Embodiments of the present invention include application stacking models that facilitate an increase in overall infrastructure utilization and a reduction in overall infrastructure costs and deployment time. The application stacking models are built for implementation with multiple architectures to keep them agnostic to any particular server technology.
One embodiment provides a computer-implemented method for configuring and deploying PACs in a shared server environment. The method involves receiving metadata describing an application and receiving an instruction on what metadata to use in configuring the application where the application is associated with a PAC. The method also involves transforming the metadata into a list of commands in response to receiving the instruction and deploying the list of commands to a group of servers wherein the commands are operative to create the PAC. The PAC is a logical construct of the application configured to be logically separate from and to execute via any server in the group of servers. Each server in the group of servers can be used by multiple PACs to enable improved utilization of server capacity.
Another embodiment is a deployment engine including a computer-readable medium having control logic stored therein for causing a computer to configure and deploy PACs to a group of servers (PODs). The deployment engine includes a layered POD library having computer-readable program code for causing the computer to abstract a POD storage configuration and provide functions to access sections of the POD storage configuration. The deployment engine also includes a layered PAC library having computer-readable program code for causing the computer to abstract a PAC storage configuration and provide functions to access sections of the PAC storage configuration. Still further, the deployment engine includes a layered consolidated infrastructure software stack (CISS) library having computer-readable program code for causing the computer to abstract a CISS storage configuration and provide functions to access sections of the CISS storage configuration. There is a different CISS library for each type of server.
Still further, another embodiment is a computer-implemented system for configuring and deploying a PAC. The system includes a repository operative to receive metadata describing an application and a deployment engine operative to receive an instruction on what metadata to use in configuring the application where the application is a PAC. The deployment engine is also operative to retrieve the metadata from the repository and generate a list of commands based on the metadata in response to receiving the instruction. Additionally, the deployment engine is operative to deploy the list of commands to a logical group of servers where the commands are operative to create the PAC. The PAC is a logical construct of the application configured to be logically separate from and to execute via any server in the logical group of servers.
Aspects of the invention may be implemented as a computer process, a configuration, a computing system, or as an article of manufacture such as a computer program product or computer-readable medium. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process.
Other configurations, computer program products, methods, features, systems, and advantages of the present invention will be or become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional configurations, methods, systems, features, and advantages be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
As described briefly above, embodiments of the present invention provide methods, configurations, systems, and computer-readable mediums for configuring and deploying a PAC in a shared platform application environment (SPACE). In the following detailed description, references are made to accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments or examples. These illustrative embodiments may be combined, other embodiments may be utilized, and structural changes may be made without departing from the spirit and scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims and their equivalents.
Referring now to the drawings, in which like numerals represent like elements throughout the several figures, aspects of the present invention and the illustrative operating environment will be described.
Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Referring now to
The networked environment 100 also includes the control center server 108 housing, among other components, a deployment engine 110 and a repository 112, which houses metadata 114 received via the PAC requirements package 107 and transferred over the network 104. The metadata defines the PAC for deployment based on inputs received via the PAC requirements package. Additionally, the networked environment 100 includes multiple servers, for instance servers 117a-117x forming a logical group of servers, also referred to as a POD or multiple PODs, to be utilized by PACs deployed over the network 104. The servers 117a-117x include multiple PACs, for instance PACs 120a-120c and PACs 121a-121d stored on a mass storage device (MSD) 118. The servers 117a-117x can also be a part of more than one POD. For example, the PACs 120a-120c on the server 117x and PACs 120d-120g on the server 117a can be part of one POD while the PACs 121a-121d and PACs 121e-121f are part of a different POD. Additional details regarding multiple PODs will be described below with respect to
The MSD 118 also includes an operating system 122 housing a workload management system 124. Each PAC utilizing the server 117x is associated with the operating system 122. If one of the PACs 120 or 121 attempts to consume all the resources associated with a POD, the workload management system 124 keeps that PAC from overwhelming the other applications or PACs while ensuring that the consuming PAC still has some amount of resources with which to operate. The servers 117a-117x are in communication with storage arrays 138a-138n over a storage area network 137 including switches 135. The storage arrays, for instance the storage array 138n, include a memory 140 storing data in the form of a file system 142, volumes 144, and/or disk groups 145. These data and binaries make up or are part of a PAC. Configuration components that define the data and the binaries reside logically in the configuration repository 112. This configuration is deployed across all servers.
Thus, the data provides a portable piece associated with a PAC because the data is not tied to any hardware component and does not physically reside on any server. However, any of the servers 117a-117x can attach to or access the data on the storage arrays 138a-138n via the storage area network 137. The logical configuration of each PAC, including data components residing on the memory 140, is defined by the metadata 114 residing in the repository 112. The deployment engine 110 transforms the metadata 114 into a list of commands that instructs the servers on where the disk groups 145, the volumes 144, and the file systems 142 are located. The list of commands also instructs the servers 117a-117x on necessary addresses to locate the data, what users are involved, what to monitor out of a monitoring system, and back-up information. Additional details regarding the control center server 108 and the deployment engine 110 will be described below with respect to
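By way of a hypothetical illustration, the metadata-to-commands transformation performed by the deployment engine 110 may be sketched as follows. The field names and command strings below are assumptions chosen for illustration, not the literal metadata schema or output of the deployment engine.

```python
# Hypothetical sketch: transforming PAC metadata into an ordered list of
# provisioning commands. Field names and command strings are illustrative
# assumptions, not the actual output of the deployment engine 110.

def metadata_to_commands(metadata):
    """Transform a PAC metadata record into shell-style provisioning commands."""
    commands = []
    # Tell the servers where the volumes and file systems are located.
    for fs in metadata["file_systems"]:
        commands.append("mkfs -F {0} {1}".format(fs["fs_type"],
                                                 fs["logical_volume"]))
        commands.append("mount {0} {1}".format(fs["logical_volume"],
                                               fs["mount_point"]))
    # Configure the address at which the PAC's data can be located.
    commands.append("plumb_ip {0}".format(metadata["ip_address"]))
    # Register the users involved and the backup information.
    for user in metadata["users"]:
        commands.append("useradd {0}".format(user))
    commands.append("schedule_backup {0}".format(metadata["backup"]))
    return commands

# An illustrative metadata record for a single-file-system PAC:
example = {
    "file_systems": [{"fs_type": "vxfs",
                      "logical_volume": "/dev/vx/dsk/pacdg/vol01",
                      "mount_point": "/pacs/web01"}],
    "ip_address": "10.0.0.15",
    "users": ["webadm"],
    "backup": "nightly",
}
```

Because the command list is generated from metadata rather than typed by hand, the same record deploys identically on every server in the POD.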
Similarly, POD names are a preinstalled infrastructure. The PRP interface 202 actually checks naming conventions. Thus, for some of the fields, the PRP interface 202 will not allow the use of entries that are invalid. The PAC type is also a preinstalled infrastructure. The PAC type field 210 can include a web server, a database, an application server, or even a custom designed type. Some components are foundational and are common to all PAC types. For instance, an ORACLE database or a web server will both need an IP address. Other fields include an allocated shares field 212 and an earliest PAC completion date field 214. Allocated shares are part of a workload management profile and are where a request is made for a minimum amount of resources. For instance, if a requester estimated that two CPUs and four gigabytes of RAM are needed to run a web server effectively, the allocated shares field 212 is where such a request is made.
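The allocated-shares concept may be sketched minimally as follows, assuming a simple capacity check; the class and function names are hypothetical and are not part of the described workload management system.

```python
# Hypothetical sketch of an allocated-shares request: a PAC asks for a
# minimum amount of CPU and RAM, and a reviewer (or tool) checks whether
# the POD has enough uncommitted capacity to honor that floor.

class AllocatedShares:
    def __init__(self, cpus, ram_gb):
        self.cpus = cpus          # minimum CPUs requested
        self.ram_gb = ram_gb      # minimum RAM in gigabytes

def can_grant(pod_free_cpus, pod_free_ram_gb, request):
    """Return True if the POD has enough uncommitted capacity for the request."""
    return request.cpus <= pod_free_cpus and request.ram_gb <= pod_free_ram_gb

# A requester estimating two CPUs and four gigabytes of RAM for a web server:
web_server = AllocatedShares(cpus=2, ram_gb=4)
```

A review step can then reject a PRP whose shares exceed the POD's remaining capacity before any deployment begins.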
Other fields include a mount point field 215, a logical volume name field 217, a file system (FS) type field 218, and a size field 220. Other fields are a stripe size field 222, a perms field 224, a disk volume group name indicator field 225, a software mirrored field 227, a mount options field 228, and a backup field 230. Still further, the PRP interface 202 includes a disk volume group name field 232, a lun size field 234, a quantity field 235, a storage type field 237, and a comment field 240. Once the entries are input into the PRP interface 202, a selection of a submit button 242 sends a PRP to a review state. During the review state, one or more reviewers examine the entries to make sure that there is enough capacity, that all entry criteria are met, and that the dates can be met. The PRP is then sent to a finalize status where all of the data in the PRP is forwarded to and stored in the repository 112. The PRP also generates requests in other subsystems via APIs associated with the subsystems. For instance, the PRP generates a request to create a backup job. Additional details regarding the use of metadata received via the PRP interface 202 will be described below with respect to
Referring now to
The DE 110 includes a layered POD library 338, for example PODS.inc, operative to cause the CCS 108 to abstract a POD storage configuration and provide functions to access sections of the POD storage configuration, and a layered PAC library, PACS.inc, 342 operative to cause the CCS 108 to abstract a PAC storage configuration and provide functions to access sections of the PAC storage configuration. The DE 110 also includes a layered consolidated infrastructure software stack (CISS) library 340, CISS.inc, operative to cause the CCS 108 to abstract a CISS storage configuration and provide functions to access sections of the CISS storage configuration. There is a different CISS library 340 for each type of server in the POD. Still further, the DE 110 includes a layered global library, Global.inc, operative to cause the CCS 108 to abstract a shared global configuration and provide functions in support of creating pathnames, configuration locking, time stamps, an IP address pool, a user ID (UID) pool, a group ID (GID) pool, and/or eTrust connectivity.
The DE 110 also includes core scripts or functions 347 that are called to use or view configurations in libraries. Although the CISS library 340 has a different infrastructure stack library for each type of server, the DE 110 will run the commands appropriate for that CISS. The PACs are created in the same pattern, but the tasks necessary for PAC creation vary according to the operating system and storage technology in the POD. The core script “ciss_manage.pl” can be used by outside programs to use or view CISS configurations and by SPACE administrators to manage CISS configurations.
ciss_manage.pl -l to list all CISS
Similarly, the core script “pod_manage.pl” can be used by outside programs to use or view POD configurations and by SPACE admins to manage POD configurations.
pod_manage.pl -sync <podname>
pod_manage.pl -l <podname>
Still further, the core script “pac_manage.pl” can be used by outside programs to use or view PAC configurations and by SPACE admins to manage PAC configurations.
pac_manage.pl -create <podname>
pac_manage.pl -l <podname>
Because access by outside SPACE scripts relies on the functions in the POD library 338, the PAC library 342, and the CISS library 340, the method of storing the configuration can be easily modified in the future. For example, the current flat file structure for storing the SPACE configuration could be converted into a relational database schema.
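This layering can be sketched as follows, assuming a simple key/value flat-file format; the function names and file format are illustrative assumptions rather than the actual contents of PODS.inc.

```python
# Hypothetical sketch of a layered configuration library: outside scripts
# call accessor functions rather than parsing the storage format directly,
# so the flat-file backend could later be swapped for a relational database
# without changing any callers.

def _parse_flat_file(text):
    """Parse an assumed 'key = value' flat-file format into a dict."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

def get_pod_setting(flat_text, key):
    """Accessor analogous to a function exported by a POD library."""
    return _parse_flat_file(flat_text).get(key)

# An illustrative flat-file POD configuration:
sample = """
# POD configuration (illustrative)
pod_name = pod01
os_type  = solaris
"""
```

If the backend later became a database, only `_parse_flat_file` would change; every caller of the accessor would be unaffected.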
The DE 110 utilizes the libraries and the metadata 114 to create and run the commands 337 and to create and push the virtual root 315 to the POD.
The MSD 314 also includes the repository 112 housing the metadata 114 received from the PRP server 105. The metadata 114 includes a cluster service group 322, a workload management profile 324 defining the workload management system 124 (
It should be appreciated that the MSD 314 may be a redundant array of inexpensive disks (RAID) system for storing data. The MSD 314 is connected to the CPU 307 through a mass storage controller (not shown) connected to the system bus 309. The MSD 314 and its associated computer-readable media provide non-volatile storage for the CCS 108. Although the description of computer-readable media contained herein refers to a mass storage device, such as a hard disk or RAID array, it should be appreciated by those skilled in the art that computer-readable media can be any available media that can be accessed by the CPU 307.
The CPU 307 may employ various operations, discussed in more detail below with reference to
According to various embodiments of the invention, the CCS 108 operates in a networked environment, as shown in
A computing system, such as the CCS 108, typically includes at least some form of computer-readable media. Computer readable media can be any available media that can be accessed by the CCS 108. By way of example, and not limitation, computer-readable media might comprise computer storage media and communication media.
Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, disk drives, a collection of disk drives, flash memory, other memory technology, or any other medium that can be used to store the desired information and that can be accessed by the CCS 108.
Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media. Computer-readable media may also be referred to as a computer program product.
Referring now to
Next at operation 710 the PRP system 105 transfers the metadata 114 to the repository 112 of the CCS 108. Then at operation 712, the DE 110 receives an instruction on what metadata to use in configuring a PAC. The operational flow 700 then continues to operation 714.
At operation 714, the DE 110 transforms the metadata 114 into a list of commands 337. As described above, with respect to
At operation 720, the DE 110 creates or updates the virtual root 315 based on the list of commands. The DE 110 then deploys the virtual root 315 to the POD at operation 722.
Next, at operation 724, the POD receives and executes the commands to initiate creation of the PAC. At operation 727, the POD also receives and stores the virtual root 315 for backup or update purposes. Then at operation 730, the POD stores the configured PAC on a target server in the POD. Then the operational flow 700 continues to operation 732. At operation 732, the commands 337 populate the storage arrays 138 via the POD. Next, control returns to other routines at return operation 735.
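The operational flow above can be sketched end to end as follows. The step comments mirror operations 710 through 730, but the data structures are illustrative assumptions, not the actual repository or virtual root formats.

```python
# Hypothetical end-to-end sketch of operational flow 700: metadata arrives
# in the repository, the deployment engine turns it into commands and a
# virtual root, and the POD executes the commands to create the PAC.

def operational_flow(metadata):
    repository = {"metadata": metadata}                 # operation 710
    selected = repository["metadata"]                   # operation 712
    commands = ["create " + name                        # operation 714
                for name in selected["components"]]
    virtual_root = {"commands": commands}               # operations 720-722
    pod = {                                             # operations 724-727
        "virtual_root": virtual_root,
        "pac": {"status": "created",                    # operation 730
                "target_server": selected["target_server"]},
    }
    return pod

# An illustrative run for a PAC with three configured components:
pod = operational_flow({"components": ["filesystem", "ip", "users"],
                        "target_server": "117x"})
```

The point of the sketch is the ordering: the metadata exists in the repository before any command is generated, and the commands exist before the POD ever executes anything, which is what makes the process repeatable.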
Thus, the present invention is embodied as methods, systems, apparatuses, and computer program products or computer-readable mediums encoding computer programs for configuring and deploying PACs for improved server capacity utilization.
The above specification, examples and data provide a complete description of the manufacture and use of the composition of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
The present application claims priority from U.S. provisional application No. 60/621,557 entitled “Shared Platform Application Container Environment,” filed Oct. 22, 2004, said application incorporated herein by reference.