Even experienced development organizations find themselves struggling with the rapid deployment and support of new online technologies. Despite the web's standardizing influence on emerging technologies, the development, deployment and operations processes that evolve around each application are anything but standardized. We refer to the organizations, processes and tools that exist in support of an application as the Application Enterprise. The Application Enterprise includes not only the application itself, but also such things as configuration parameters, data, release notes, deployment tools, operations tools and execution environments, in other words, components that span development, deployment and operations activity.
Effective development teams typically employ experienced staff to properly structure their environments and processes. Tools are selected by development groups (IDEs, source control systems, bug tracking), by operations and support groups (package installation, change management and operational monitoring tools) and by managers (project tracking tools) according to best practices and their individual experiences. However, even if thought can be given ahead of time to the interaction of these tools, they usually become segregated by organization because they are not designed to interoperate. Typically, custom scripts, flat files and informal processes are developed to bridge the gap between development, deployment and operations environments, often in haste as problems are identified. In each Application Enterprise, therefore, an assortment of fragile tools and ad-hoc solutions evolves, creating one-of-a-kind environments understood and managed by only a small number of critical individuals. Development and operations teams spend valuable time learning and struggling with these custom components, unable to leverage previous experience with similar tasks.
Compounding this is the lack of documentation and the use of non-standard tools and processes that characterize organizational interfaces in the Application Enterprise. Without standard tools, inefficiencies occur and wheels are reinvented to solve recurring problems. Custom build environments are developed. Informal scripts are written to package and move applications from one environment to another. Flat files are adopted as a means to define configuration parameters and whole teams are assembled to structure, parameterize and propagate applications. Each ad-hoc tool that is written becomes part of the application itself. This often goes unrecognized, leading to poorly or completely undocumented application components. Undocumented tools and processes lead to installation and execution errors. Releases are delayed and downtime increases. Ultimately, thrashing and finger pointing occurs between development, support and operations groups as the failures fall into gaps between the organizations.
Independent Software Vendors (ISVs) that develop complex, configurable products for their customers quickly recognize the importance of structuring their deployments. The ability to parameterize an application, to train customers on the meaning and tuning of those parameters, and the ability to troubleshoot a system after installation are important application features. The challenge for these organizations is to minimize the development and maintenance costs of deployment and maintenance tools. Building infrastructure of this type can defocus an organization from its core development tasks and can significantly inflate the team needed to deliver and maintain a product. Organizations attempt to decrease lifecycle costs by developing robust deployment tools, but the development involved can in the long run drive up costs, adding new components to the system without creating unique value for the product.
Many product organizations recognize the symptoms of process breakdown within the Application Enterprise. Unfortunately, solutions can be as elusive as an understanding of the problem itself.
Distributed infrastructure for lifecycle management of an application, using a common model of the application. A solution is provided that feeds information forward from development through later phases, such as quality assurance, to operations and support environments. Status and parameterization are fed back from operating environments to interested support, QA, and development staff. In all phases, a common model of the application is provided so that participants speak in common terms and use common tools to manipulate application components.
The Application Enterprise Bus (AEB) includes, in some examples, a suite of data models, tools and interfaces integrating development, test, deployment, support and operations tasks. Once exposed to the AEB, an application's structure, parameterization and operating status are visible and manageable by participants in the Application Enterprise. The AEB, for example, provides release management, configuration management, packaging and deployment tools. It enables applications to be structured in development, configured in the deployment phase, and moved into an operational environment without custom tools. Once installed, applications can be viewed securely in their operating environment, maintained by support and development staff, and troubleshot through a common view onto the system. Operations can integrate monitoring tools with the AEB, gaining visibility into application structure and parameterization, while at the same time exposing information from the live system to support and development.
The technical components of the example AEB include:
The AEB, like a “traditional bus”, operates according to standard interfaces for modular system components. In this case, the system whose interfaces are to be standardized is the Application Enterprise (including development, deployment and operations tasks). The AEB integrates tools and data representations used by IT, support departments, operations and development organizations. Additionally, the bus allows tools to be introduced to bridge the gaps between organizations, providing a unified toolkit and view of application components.
The AEB defines a set of objects, each with its own data model. These objects can be created, viewed and manipulated by tools connected to the AEB. These objects include, for example:
Development organizations maintain strict control over source code, data and known system bugs. It is less common during the development phase to directly consider deployment and operational issues. The AEB, through its interfaces to the development environment, provides the ability to address these issues up front. Deployment and operational factors impacting or caused by the development organization include, for example:
To address these issues, the AEB enables developers to structure and expose application configuration, scripts, and release notes to the application bus. Developers can identify application assets as they are defined during the development phase. Via integration with the IDE, or through a developer's use of the Product Release View, elements in or outside of the source control system can be identified as deployment assets. After identification, these assets are visible as part of the application object and can be viewed and managed through the AEB dashboard. Views (i.e., visibility) and actions are available whether the object remains in development, is packaged and about to be deployed, or is deployed and in operation.
Release Management
Applications are typically an assembly of built products (libraries and executables), content, configuration parameters, scripts, policies, third party applications, documentation and supporting files. Product organizations generally have one or more “buildmeisters” who are responsible for scheduling builds, managing build products and packaging the build products as applications for delivery to operational environments. Packages are versioned and correlated with labels or branches in the source control system, and bug reports are correlated with the released package. The Product Manager tool is used to organize the packaging, versioning and correlation of application components. It saves the buildmeister from writing custom tools for this purpose, as happens in most development organizations. The Product Manager is a bridge from the AEB to the build, bug tracking, and source control systems. The Product Manager is compatible with common source control applications (ClearCase, CVS, SourceSafe), bug tracking systems, build technologies, and packaging tools. The Product Manager gathers targets from the build environment, packaging them into an application object that is connectable to the AEB. The Product Manager's packaging is compatible with existing package formats (tar, jar, RPM, InstallShield, CAB, cpio).
The Product Manager is flexible such that it can be connected at whatever level the organization desires or requires. A simple linkage can be set up at first, for example, wrapping existing packages with the Product Manager then using the AEB to move and manage the package to operations. With time, more functions can be migrated to the Product Manager, making more elements of the application visible to the AEB. Flexibility of the interface and the ability to evolve integration of release processes to the AEB are desirable features of the Product Manager.
Propagation Policies
Applications are characterized by policies that define a way to move them between build and execution environments (development, QA, staging, operations). Policies, such as check-offs from one group to another, tracking of issues and documentation of application status can be organized and managed through the Product Manager. When products are ready for deployment (using the Installer, see below), policies are enforced and movement activities are tracked and can be audited.
A deployed application is a packaged product, configured and deployed to an execution environment. The target host group may include, for example, various types of servers, network elements, storage, data and supporting software components. Structured deployment of a package to a target environment uses an environment description, and typically assumes that the incoming package is sized and parameterized for that environment. The interface between development and operations with respect to product deployment is typically unique for each product, worked out between the groups according to each one's capabilities. Sometimes complete system images are delivered to operations; sometimes only data, content and a few executables are delivered, and IT or operations sets up system components (hardware and/or software) to support the product. In most cases, a deployment team adds parameterization to the product as it is moved into production, to specifically configure it for the target execution environment.
The ability to define an execution environment and to inspect the implementation of the definition (for example to verify it) is provided by the Execution Environment Builder. Using a common view through the Application Enterprise Dashboard, development, support and operations staff can see and understand the target environment and visualize how an application package maps onto it. Parameterization can move in both directions, enabling developers to define and document configurations for operations and support, while enabling the export of operational configurations to development and support environments. This addresses a fundamental set of issues that arise between development, support and operations, such as:
Component Blueprints are supplied for the construction of execution environments, allowing drag-drop style definition of target systems. Component Blueprints provide built-in knowledge of component structure, installed footprint, dependencies, and parameterization for system setup and tuning. Component Blueprints are defined for all layers, from hardware, networking, firewalls and load-balancers, to operating systems (NT, Win2000, Unix), application platforms (.NET, J2EE), web-servers (IIS, Apache, iPlanet), databases (Oracle, SQL Server, DB2) and application suites (SAP, PeopleSoft, Siebel). Once defined, application components and configuration parameters can be viewed, queried and modified through the Application Enterprise Dashboard.
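The layered composition described above can be illustrated with a minimal sketch. The class and field names below are assumptions for illustration only, not the actual AEB schema; the point is that each blueprint carries its dependencies and tunable parameters, and an environment is assembled by stacking blueprints whose dependencies are already present.

```python
from dataclasses import dataclass, field

# Hypothetical Component Blueprint record; field names are illustrative.
@dataclass
class ComponentBlueprint:
    name: str
    layer: str                                       # e.g. "os", "web-server"
    dependencies: list = field(default_factory=list)
    parameters: dict = field(default_factory=dict)   # tunable knobs with defaults

def build_environment(*blueprints):
    """Stack blueprints into an environment, checking dependencies as we go."""
    env = {}
    for bp in blueprints:
        for dep in bp.dependencies:
            if dep not in env:
                raise ValueError(f"{bp.name} requires {dep}, not in environment")
        env[bp.name] = bp
    return env

linux = ComponentBlueprint("linux", "os")
apache = ComponentBlueprint("apache", "web-server", dependencies=["linux"],
                            parameters={"MaxClients": 150})
env = build_environment(linux, apache)
print(sorted(env))  # ['apache', 'linux']
```

A real blueprint would also carry the installed footprint and parsing rules mentioned above; this sketch shows only the dependency-checked composition.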
Installation Tools
The Installer moves application packages and their parameterization and updates from the development or deployment environment to an operations environment. Often a firewall is in place between these organizations to prevent un-audited modification of executing software, or to prevent non-operations staff from accessing data in the operations environment. The Installer provides secure, audited movement of packages to operations and ensures proper deployment of products to selected machines.
Many issues associated with application configuration and installation are addressed by the Installer.
The Installer, in conjunction with the Release Manager, implements transactional installation/deinstallation, roll-in/roll-out of patches and tuning of parameters in such a way that changes are recognized and automatically organized into deployment patches.
Releases are application versions managed by the Release Manager. One step in the workflow of application installation is the check-off that exposes a version as a Release. This function is reversible, allowing a release to be decommissioned when it is obsolete, or withdrawn as an installation candidate if problems are discovered.
Once checked off, the Installer is employed to install a Release to one or more execution environments. The installation process combines a Product application object with an Environment object to map the virtual application to a specific target. At this time, configuration parameters identified by development are visible to the deployment team through the Installation View. Data, environment variables, runtime parameters and other configuration can be populated and verified. Once the configuration has been specified and verified, installation can take place. This involves securely moving application components to their targets, placing components where they belong and verifying that all of the parts are accounted for and are in place. Installation may include the placement of application products on a configured server, or complete imaging of the servers with applications components included. The Installer operates in either mode. Installation workflows can be defined, ordering the installation of components, running scripts, and prompting for feedback during the process. Because installations are transactional, they can be aborted and rolled back at any point in the workflow.
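The transactional workflow described above can be sketched in a few lines. This is a simplified illustration, not the Installer's actual interface: each placed component is recorded so that a failure anywhere in the workflow can be rolled back in reverse order, leaving the target unchanged.

```python
# Sketch of transactional installation with rollback; the API is assumed.
class TransactionalInstaller:
    def __init__(self):
        self.installed = []   # components in place on the target
        self._undo = []       # components placed by the current transaction

    def install(self, components, fail_on=None):
        """Place components one by one; on failure, roll back and return False."""
        try:
            for comp in components:
                if comp == fail_on:               # simulated placement failure
                    raise RuntimeError(f"failed placing {comp}")
                self.installed.append(comp)
                self._undo.append(comp)
        except RuntimeError:
            # Abort: undo in reverse order so the target is left untouched.
            while self._undo:
                self.installed.remove(self._undo.pop())
            return False
        self._undo.clear()                        # commit
        return True

inst = TransactionalInstaller()
inst.install(["webapp.war", "schema.sql", "httpd.conf"])
print(inst.installed)  # ['webapp.war', 'schema.sql', 'httpd.conf']
```

A failed call, e.g. `inst.install(["bad.bin"], fail_on="bad.bin")`, returns False and leaves `inst.installed` exactly as it was before.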
After component installation, a common problem is inability to ‘bring the system up’. This may be due to application misconfiguration, missing components, bad data, hardware/network failure, software version incompatibilities or other problems. The Installer and Development Interfaces can help to structure an application so that misconfiguration and missing components are minimized or eliminated. But beyond application structuring, the Installer allows OS-independent application bring-up to be scripted and made visible to the AEB. The Installer handles security and OS-specific tasks on installation, allowing the application to be activated and deactivated from the Installer View. The Activation step is often overlooked until an application is first deployed and it becomes the operations team's task to script system bring-up and debug problems as they occur. By explicitly defining Activation as an application component and by providing tools to structure and automate the process, a critical gap in the Application Enterprise is filled.
Updating installed systems and tuning application parameters are common, although not always well structured, tasks in the Application Enterprise. Through product definition with the Release Manager, partial updates are applied to installed systems, using similar (if not the same) processes and interfaces as installation. The Installer optimizes updates by installing only those components that have changed, speeding the deployment of patches and thus minimizing downtime for updates.
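The "install only what changed" optimization above amounts to comparing content digests between the installed image and the incoming release. A minimal sketch, with illustrative component names (the actual comparison mechanism is not specified in the text):

```python
import hashlib

def digest(data: bytes) -> str:
    """Content fingerprint used to detect changed components."""
    return hashlib.sha256(data).hexdigest()

def changed_components(installed: dict, release: dict) -> list:
    """Names of components that are new in the release or whose
    content differs from what is already installed."""
    return sorted(
        name for name, data in release.items()
        if name not in installed or digest(data) != digest(installed[name])
    )

installed = {"app.jar": b"v1", "web.xml": b"conf"}
release   = {"app.jar": b"v2", "web.xml": b"conf", "patch.sql": b"alter"}
print(changed_components(installed, release))  # ['app.jar', 'patch.sql']
```

Only the two changed or new components would be shipped; the unchanged `web.xml` is skipped, which is what minimizes patch downtime.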
The tuning of installed applications is called adaptation. Many parts of an application are tunable, including parameters from the hardware, OS, application and network levels. Both the Installer View and Environment View allow enabled users to view and modify parameters that have been exposed to the AEB. All changes are recorded for auditability, and may be rolled back if needed. This capability addresses two fundamental issues in the Application Enterprise. First, explicit definition of tunable parameters helps to document the variables that control application behavior. Second, auditable control over adaptation keeps undocumented changes from creeping into the system, giving operators confidence to allow support and development staff access to the system for troubleshooting.
Data model of an application—abstract blueprint.
(a) Layered, with nesting.
The Application Blueprint describes the generic structure of a software application. It is an abstract model that is not specific to a particular deployed instance. The application may at first be decomposed into (potentially nested) sub-applications. Sub-applications are independent units within the larger application structure that may be separately maintained or released within the Application Enterprise. For example, a billing system or streaming video capability may be considered a sub-application within a larger customer-facing service. Within the sub-application (or the application, if no sub-applications exist), specific host types are identified. Distributed applications typically have different types of computational servers performing specialized functions such as serving web pages or acting as a database server. Each host type has a set of associated components, potentially nested.
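The nesting described above (application → sub-applications → host types → components) can be sketched as a small recursive data model. Class and field names are illustrative assumptions, not the patent's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the layered Application Blueprint.
@dataclass
class HostType:
    name: str
    components: list = field(default_factory=list)  # may itself nest further

@dataclass
class Application:
    name: str
    sub_applications: list = field(default_factory=list)  # nested Applications
    host_types: list = field(default_factory=list)

    def all_host_types(self):
        """Walk the tree, yielding host types of this application and of
        every nested sub-application."""
        yield from self.host_types
        for sub in self.sub_applications:
            yield from sub.all_host_types()

# A billing sub-application inside a larger customer-facing service.
billing = Application("billing", host_types=[HostType("db-server", ["oracle"])])
portal = Application("portal",
                     sub_applications=[billing],
                     host_types=[HostType("web-server", ["apache", "webapp"])])
print([ht.name for ht in portal.all_host_types()])  # ['web-server', 'db-server']
```

Because the model is abstract, nothing here names a specific host; mapping host types onto concrete machines happens later, at deployment.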
The application blueprint data model represents the structure shown in
Component Blueprint
The component blueprint, an example of which is illustrated in
The Managed container holds rules for determining the ‘parts-list’ of the component, identifying all of the pieces belonging to the component, including files, data, registry values, directory server sub-trees or other resources available through interfaces. Overlays can be provided that define rules, annotation and categorization of individual managed elements.
The Configuration container enumerates and defines all of the configuration ‘knobs’ for the component. Structure classes can be provided that define how to parse configuration information, rules, annotation and interpretive information for each configuration element.
The Runtime container identifies the component's runtime signature, including processes, log files and other resources used or modified while the component is running.
The Documentation container collects documentation from the component vendor into one location. This includes files, web pages, data and the output of executables.
The Diagnostics and Utilities containers organize executables that can be used to troubleshoot or administer the component, respectively. Executables and scripts are exposed, along with common parameterizations, as Diagnostics/Utility files. Sequences of actions and conditional logic can be chained together as Macros, allowing typically sequential activities to be gathered together and executed as a unit.
Elements in the blueprint can be categorized and weighted. Categorization facilitates any number of descriptors such as “Security” or “Performance” to be associated with an element. These act as attributes that can be queried for during operations executed against a discovered component. Weights allow the importance of elements in the blueprint to be identified. This allows operations on discovered components to be tuned so that only the most relevant elements are considered.
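Categorization and weighting, as described above, allow queries over blueprint elements to be filtered by descriptor and tuned by relevance. A minimal sketch with hypothetical element records:

```python
# Illustrative blueprint elements carrying categories and weights;
# the record layout is an assumption for this sketch.
elements = [
    {"name": "ssl_cert",  "categories": {"Security"},    "weight": 0.9},
    {"name": "heap_size", "categories": {"Performance"}, "weight": 0.7},
    {"name": "banner",    "categories": {"Cosmetic"},    "weight": 0.1},
]

def query(elements, category, min_weight=0.0):
    """Select element names matching a category, optionally keeping only
    those weighted at or above a relevance threshold."""
    return [e["name"] for e in elements
            if category in e["categories"] and e["weight"] >= min_weight]

print(query(elements, "Security"))          # ['ssl_cert']
print(query(elements, "Performance", 0.5))  # ['heap_size']
```

Raising `min_weight` narrows an operation on a discovered component to only the most relevant elements, which is the tuning the text describes.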
Software components can be defined using the same component blueprint data model. This normalization allows all components to be stored and viewed similarly. Users not familiar with a given component are able to find and work with information in the model because of this normalization.
2. Discovery
Discovery is the process of locating installed components and applications on a set of hosts. The mechanism of discovery is to, in parallel, query an agent software process running on each host that is to be interrogated. The agent process looks for the indicators defined in the component blueprints and reports the results back to a centralized server. At the centralized server, results from all of the agents are correlated into a complete image of the deployment. The results of discovery are stored in a database from which they can be retrieved, viewed and updated.
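The parallel query-and-correlate flow above can be sketched as follows. The agent call here is simulated with a static table; the real agent protocol, and the names `query_agent` and `discover`, are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Simulated per-host agent data standing in for live interrogation.
AGENT_DATA = {
    "web01": {"apache": "2.4"},
    "db01":  {"oracle": "19c"},
}

def query_agent(host, blueprints):
    """An agent looks for blueprint indicators on its host and reports back."""
    found = AGENT_DATA.get(host, {})
    return host, {c: v for c, v in found.items() if c in blueprints}

def discover(hosts, blueprints):
    """Query all agents in parallel, then correlate the per-host results
    into a single deployment image."""
    image = {}
    with ThreadPoolExecutor() as pool:
        for host, components in pool.map(
                lambda h: query_agent(h, blueprints), hosts):
            image[host] = components
    return image

image = discover(["web01", "db01"], {"apache", "oracle"})
print(image)  # {'web01': {'apache': '2.4'}, 'db01': {'oracle': '19c'}}
```

In the real system the correlated image would then be persisted to the database for later retrieval, viewing and update.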
(a) Refresh
After a deployment has been discovered, elements among the managed components may change. For example files may be moved or configuration parameters may be updated. To get a current image of the deployment, and to update the stored deployment image in the database, the deployment may be refreshed. During refresh, agents on each managed host are asked to review all of the managed components, and report differences to the server. The time stamped, updated deployment image is stored in place of the previous deployment image.
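The refresh step above is essentially a diff of the current host state against the stored image, with the updated, time-stamped image replacing the old one. A simplified sketch using flat dictionaries (the real managed elements are richer):

```python
import time

def diff(old: dict, new: dict) -> dict:
    """Differences an agent would report back to the server."""
    return {
        "added":   {k: new[k] for k in new.keys() - old.keys()},
        "removed": sorted(old.keys() - new.keys()),
        "changed": {k: new[k] for k in old.keys() & new.keys()
                    if old[k] != new[k]},
    }

def refresh(stored_image: dict, current: dict) -> dict:
    """Build the time-stamped replacement image plus its change report."""
    return {"timestamp": time.time(), "image": current,
            "changes": diff(stored_image, current)}

old = {"httpd.conf": "MaxClients 150", "app.jar": "v1"}
new = {"httpd.conf": "MaxClients 300", "app.jar": "v1", "patch.sql": "x"}
result = refresh(old, new)
print(result["changes"]["changed"])  # {'httpd.conf': 'MaxClients 300'}
print(result["changes"]["added"])    # {'patch.sql': 'x'}
```

Only the differences cross the wire from agent to server; the server then stores the updated image in place of the previous one.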
(b) Snapshot
To retain an image of a discovered deployment, so that refresh operations do not cause historical information to be lost, a snapshot can be taken. A snapshot causes a duplicate copy of the deployment image to be created in a database. This image is marked as a snapshot and subsequently cannot be modified, since it is a historical record that should remain unchanged from the time that the snapshot is taken.
(c) Compare
Comparison can be used to determine if a deployment is drifting away from a standardized configuration (a gold-standard or template), or it can be used to investigate the difference between different deployments, or the same deployment across time.
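Drift detection against a gold-standard template, as described above, reduces to comparing each templated element with its deployed value. A minimal sketch with hypothetical parameter names:

```python
def drift(template: dict, deployment: dict) -> list:
    """Report (element, expected, actual) for every divergence from the
    gold-standard template; missing elements are reported too."""
    report = []
    for key, expected in template.items():
        actual = deployment.get(key, "<missing>")
        if actual != expected:
            report.append((key, expected, actual))
    return report

gold = {"java.version": "1.4", "heap": "512m"}
prod = {"java.version": "1.4", "heap": "1024m"}
print(drift(gold, prod))  # [('heap', '512m', '1024m')]
```

The same comparison applied to two different deployments, or to two snapshots of one deployment, gives the cross-deployment and across-time views the text mentions.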
(d) Verification
Verification is the process of running rules that have been defined on the elements of a deployment image or snapshot. Rules are Boolean expressions involving the value of one or more elements, or the values of element attributes. All elements in a deployment have a value (for example the value of a registry key is its defined value), and all elements have attributes, which are name-value pairs (for example a managed file has an attribute called size, which is the number of bytes in the file).
Rules are used to define a set of constraints on the deployment image. They can limit a value, or constrain one value relative to another. All rules return a Boolean result, true or false. Rules are assigned a severity, allowing selection at verification time of the severity level of rules to run.
Rules can be defined in the component blueprint, or a rule can be defined directly on the deployment. If defined on the blueprint, the rule is attached to the deployment when it is discovered. Many rules are automatically generated, and these are called implicit rules. Implicit rules are created from data type restrictions and default value specifications. When an element has its data type defined in a component blueprint, a rule is generated that will fail if the value of that element does not conform to the data type. If an element has a default value (a value that the system will use if no other value is defined, for example in a configuration file), then an implicit default value rule will be generated.
When verification is run, a severity level is chosen, and the set of rules to execute is defined. One constraint on the set of rules to run is the rule type. Rule types include Component Blueprint rules, Deployment rules, and default value rules.
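The rule machinery above can be sketched as predicates with severities, including an implicit rule generated from a data-type restriction. The severity scale and rule-record layout are assumptions for illustration:

```python
# Assumed severity ordering; the patent does not specify the scale.
SEVERITY = {"info": 0, "warning": 1, "error": 2}

def make_type_rule(element, expected_type):
    """Implicit rule generated from a blueprint data-type restriction."""
    return {"name": f"{element}:type", "severity": "error",
            "check": lambda image: isinstance(image.get(element), expected_type)}

def verify(image, rules, min_severity="warning"):
    """Run all rules at or above the chosen severity; return the names
    of rules that failed (returned False)."""
    threshold = SEVERITY[min_severity]
    return [r["name"] for r in rules
            if SEVERITY[r["severity"]] >= threshold and not r["check"](image)]

rules = [
    make_type_rule("port", int),
    {"name": "port:range", "severity": "warning",
     "check": lambda img: isinstance(img.get("port"), int)
                          and 0 < img["port"] < 65536},
]
print(verify({"port": "8080"}, rules))  # ['port:type', 'port:range']
print(verify({"port": 8080}, rules))    # []
```

A misconfigured element (here a port stored as a string) trips both the implicit type rule and the explicit range rule; a conforming value passes verification cleanly.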
(e) Export/Import
In addition to a common data model, a portable representation of the model and its contents is defined. The portable format allows deployment images to be exported from one data store, and imported to another. An exported blueprint, deployment or snapshot image is represented in a single file that can be encrypted and easily be moved from one location to another. This allows comparison and verification to take place away from the actual physical location of the deployment. ISVs for example can utilize this capability to take exported images of customer installations and import them within their support organizations to help troubleshoot problems. The export format can also be used for archiving since it is a space efficient representation of the deployment image.
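The export/import round-trip described above can be sketched as serializing a deployment image into one portable blob and reconstructing it elsewhere. Encryption, mentioned in the text, is omitted from this sketch, and the format shown is an assumption:

```python
import base64
import json

def export_image(image: dict) -> str:
    """Serialize a deployment image into a single portable string."""
    payload = json.dumps(image, sort_keys=True).encode()
    return base64.b64encode(payload).decode()

def import_image(blob: str) -> dict:
    """Reconstruct the deployment image in another data store."""
    return json.loads(base64.b64decode(blob))

image = {"host": "web01", "components": {"apache": "2.4"}}
blob = export_image(image)
print(import_image(blob) == image)  # True
```

An ISV support organization could, for instance, import such a blob from a customer site and run comparison or verification against it without access to the live deployment.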
(f) Communication
The deployment image can be used as a communication tool that binds together members of the Application Enterprise. Links into the image can be embedded into conventional communication tools, such as e-mail, so that co-workers can communicate effectively about the exact location of issues within the deployment image. Notes and rules can be attached to the application and component blueprints, or to deployment images, allowing members to annotate the application with information at precise locations within the data model.
Organizations outside of development and operations (for example finance, marketing or sales) may require visibility to portions of the Application Enterprise. Even customers may want access so that they can verify application parameters, verify system functions and ‘feel comfortable’ that their applications are running and are well managed. Via a secured Custom View, guests, customers and/or other users can be granted access to any or all products and execution environments plugged into the Application Enterprise Bus. This is a powerful extension, allowing controlled and auditable access to what is conventionally a closed environment. Access control can be configured so that only selected objects are visible and selected operations are enabled.
A common view shared by all associated organizations and customers helps to expedite troubleshooting during periods of application instability, helps all parties to plan future releases and strategies, and provides a common vocabulary and understanding of how the application runs and how it is developed and released.
From both the development and operations environments, the open interfaces of the Application Enterprise Bus allow integration of all parts of an Application Enterprise into a common view. Project management, schema and OO design tools can be plugged into the bus via standard interfaces. Schema design tools, for example, can be used to provide operations staff and DBAs a sophisticated view of database schemas, triggers, stored procedures and constraints that they would not otherwise be afforded. Alternatively, operational tools (e.g. HP OpenView, Tivoli, Unicenter) can be integrated into the bus, providing developers a view onto the running system. Because these custom tools are accessed through the Application Enterprise Bus, the applications associated with each capability do not have to be installed (saving license costs). The Application Enterprise Bus can be used effectively as an application integration platform.
Custom tools can also be plugged into the Application Enterprise Bus. For example an interface could be developed that allows managers or customers to request summaries of source control system activity, correlated with filed bugs. The custom plug-in would in this case consist of a wrapper around scripts executed against the source control and bug tracking systems.
4. Security
(a) Access Control to the Schema
Access control is configurable to restrict views and/or application objects across the user base. Well-managed security implementations require that policy be coherent and fully documented by a team of security experts. While it is often the case that policies are documented, it is rarely true that implementation of the policy can be accurately or conveniently tracked. The Product Manager allows security policies and associated parameterizations to be defined, viewed and managed from a single interface. This provides a powerful capability to centralize security policy, allowing only those individuals responsible for and knowledgeable of the policies to control their implementation. The security policy of an application can be audited from a single place, and those responsible for security within an organization can be assured that the policy is defined and implemented correctly.
(b) Access Control to Application Object
Users within the Application Enterprise have different needs and restrictions as they view and act on deployment images. Users can be restricted to read, write, or execute access on any object or function within the deployment image. Access can also be controlled to the meta-data, for example the building and modification of application and component blueprints.
(c) Integrate with Existing Security—e.g., Directory Services
Users within the Application Enterprise can be configured and given permissions by an administrator as they are imported from the organization's enterprise directory (e.g. LDAP).
5. Transactional Operations
Because the Application Enterprise Bus federates the Application Enterprise, operations can be transactionally performed across the enterprise in a way not previously possible. Transactional operations are actions on the deployment or deployment image that conform to the well-known ACID properties of transactions. That is, they are (a) Atomic, all parts of the operation happen, or none do; (b) Consistent, the target of a transaction remains in a consistent state before and after the transaction; (c) Isolated, the transaction is isolated from other activity or other transactions in the system; and (d) Durable, once completed and committed, the changes caused by the transaction are permanent. An important feature of transactions is ‘rollback’, allowing changes to be removed before they are committed to the system.
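The commit/rollback behavior described above can be sketched as staged changes that either reach the deployment image atomically or are discarded. The `Transaction` class and its methods are illustrative assumptions, not an actual AEB API:

```python
# Sketch of a transactional change to deployment parameters.
class Transaction:
    def __init__(self, image):
        self.image = image    # the live deployment image (a dict here)
        self.staged = {}      # changes not yet visible in the image

    def set(self, key, value):
        self.staged[key] = value          # isolated: image is untouched

    def commit(self):
        self.image.update(self.staged)    # atomic from the caller's view
        self.staged.clear()               # durable once applied

    def rollback(self):
        self.staged.clear()               # nothing ever reached the image

image = {"heap": "512m"}
tx = Transaction(image)
tx.set("heap", "1024m")
tx.rollback()
print(image)  # {'heap': '512m'}
tx.set("heap", "2048m")
tx.commit()
print(image)  # {'heap': '2048m'}
```

A rolled-back change never touches the image, while a committed change lands all at once, mirroring the rollback property highlighted in the text.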
Transactions across the deployment are enabled by federations and aided by the underlying data model. The transactional operations enabled include
| Number | Date | Country |
|---|---|---|
| 60510590 | Oct 2003 | US |