Integrating design, deployment, and management phases for systems

Information

  • Patent Grant
    8,122,106
  • Date Filed
    Friday, October 24, 2003
  • Date Issued
    Tuesday, February 21, 2012
Abstract
Integrating design, deployment, and management phases for a system in accordance with certain aspects includes using a system definition model to design a system. The system definition model is subsequently used to deploy the system on one or more computing devices and, after deployment of the system, the system definition model is used to manage the system deployed on the one or more computing devices.
Description
TECHNICAL FIELD

The invention relates to an architecture for a distributed computing system.


BACKGROUND

Internet usage has exploded over the past several years and continues to grow. People have become very comfortable with many services offered on the World Wide Web (or simply “Web”), such as electronic mail, online shopping, gathering news and information, listening to music, viewing video clips, looking for jobs, and so forth. To keep pace with the growing demand for Internet-based services, there has been tremendous growth in the computer systems dedicated to hosting Websites, providing backend services for those sites, and storing data associated with the sites.


One type of distributed computer system is a data center (such as an Internet data center (IDC) or an Enterprise Data Center (EDC)), which is a specifically designed complex that houses many computers for hosting network-based services. Data centers, which may also go by the names of “Webfarms” or “server farms”, typically house hundreds to thousands of computers in climate-controlled, physically secure buildings. Data centers typically provide reliable Internet access, reliable power supplies, and a secure operating environment.


Today, large data centers are complex and often called upon to host multiple applications. For instance, some websites may operate several thousand computers, and host many distributed applications. These distributed applications often have complex networking requirements that require operators to physically connect computers to certain network switches, as well as manually arrange the wiring configurations within the data center to support the complex applications. As a result, this task of building physical network topologies to conform to the application requirements can be a cumbersome, time consuming process that is prone to human error. Accordingly, there is a need for improved techniques for designing and deploying distributed applications onto the physical computing system.


SUMMARY

Integrating design, deployment, and management phases for systems is described herein.


In accordance with certain aspects, a system definition model is used to design a system. The system definition model is subsequently used to deploy the system on one or more computing devices. After deployment of the system, the system definition model is used to manage the system deployed on the one or more computing devices.





BRIEF DESCRIPTION OF THE DRAWINGS

The same numbers are used throughout the drawings to reference like features.



FIG. 1 illustrates an example network setting.



FIG. 2 is a block diagram illustrating an example architecture using the SDM definition model.



FIG. 3 illustrates an example layered setting.



FIG. 4 is a flowchart illustrating an example process for using the system definition model (SDM) across the entire lifecycle of a system.



FIG. 5 illustrates an example architecture using an SDM runtime.



FIG. 6 illustrates an example SDM document.



FIG. 7 illustrates an example base definition and members.



FIG. 8 illustrates an example member.



FIG. 9 illustrates example setting values and value lists.



FIG. 10 illustrates an example lifecycle of an SDM application in accordance with certain embodiments.



FIG. 11 shows an example mapping of a web application to a web server host.



FIG. 12 illustrates an example built-in datatype hierarchy.



FIG. 13 illustrates an example of implicit extension of an abstract object definition.



FIG. 14 illustrates an example of implicit extension of an abstract relationship.



FIG. 15 illustrates an example of a change request.



FIG. 16 illustrates an example process of loading new definitions into the runtime.



FIG. 17 illustrates an example of carrying out change requests.



FIG. 18 illustrates examples of connected members.



FIG. 19 illustrates example structures with regard to connections.



FIG. 20 illustrates an example UML diagram that provides an overview of the instance space.



FIG. 21 illustrates a general computer environment which can be used to implement the techniques described herein.





DETAILED DESCRIPTION

The following disclosure describes a number of aspects pertaining to an architecture for designing and implementing a distributed computing system with large-scale application services. The disclosure includes discussion of a system definition model (SDM), which may also be referred to as a service definition model (SDM), and an SDM runtime environment. The SDM provides tools and a context for an application architect to design distributed computer applications and data centers in an abstract manner. The model defines a set of elements that represent functional units of the applications that will eventually be implemented by physical computer resources and software. Associated with the model elements is a schema that dictates how functional operations represented by the components are to be specified.


As used herein, the term “wire” may also be referred to as “connections”, “communication”, or “communication relationship”. Also, the term “system” may be referred to as “module” and the term “resource space” may be referred to as “resources”. Additionally, the term “application space” may also be referred to as “applications”, and the term “instance space” may also be referred to as “instances”. Further, the term “class” may also be referred to as “abstract definition”, the term “port” may also be referred to as “endpoint”, and the term “type” may also be referred to as “definition”.



FIG. 1 illustrates an example network setting 100. In setting 100, multiple (x) computing devices 102(1), 102(2), . . . , 102(x) are coupled to a network 106. Network 106 is intended to represent any of a variety of conventional network topologies and types (including wire and/or wireless networks), employing any of a variety of conventional network protocols (including public and/or proprietary protocols). Network 106 may include, for example, a local area network (LAN), a wide area network (WAN), portions of the Internet, and so forth. Setting 100 represents any of a wide variety of settings, including, for example, data centers (e.g., Internet data centers (IDCs)), office or business settings, home settings, educational or research facilities, retail or sales settings, data storage settings, and so forth.


Computing devices 102 can be any of a variety of conventional computing devices, including desktop PCs, workstations, mainframe computers, server computers, Internet appliances, gaming consoles, handheld computers, cellular telephones, personal digital assistants (PDAs), etc. One or more of devices 102 can be the same types of devices, or alternatively different types of devices. Additionally, even if multiple devices are the same types of devices, the multiple devices may still be configured differently (e.g., two devices 102 may be server computers, but may have different hardware configurations, such as different processors, different amounts of RAM, different sizes of hard disk drives, and so forth).


One or more computing devices 102 may also be re-configured after being added to setting 100. For example, a particular computing device 102 may operate for a period of time (e.g., on the order of minutes, hours, days, months, etc.) performing one function, and then an administrator may decide that a different function is desirable (e.g., change from being a server computer to a workstation computer, from a web server to a local file server, etc.).



FIG. 2 is a block diagram illustrating an example architecture 200 using the system definition model. The SDM is designed to be used across the entire lifecycle of a system. A system is a set of related software and/or hardware resources that can work together to accomplish a common function. One example of such a system is an application, which refers to a set of instructions that can be run or executed by a computing device to perform various functionality. Examples of applications include entertainment applications such as games, productivity applications such as word processors, reference applications such as electronic encyclopedias, distributed applications such as may be used for web services or financial analysis, and so forth. Another example of such a system is an environment on which an application (or another environment) can be deployed. An environment refers to the software and/or hardware resources on which an application (or another environment) is deployed. Such environments can be layered, as discussed in more detail below.


The lifecycle of a system typically includes three primary phases (also referred to as stages): a design or development phase, followed by a deployment or installation phase, followed by an operations or management phase. As the model applies to all three phases of the lifecycle of a system, the model can thus be seen as an integration point for the various phases in the lifecycle of a system, and facilitates each of these phases. Additionally, by using the model, knowledge can be transferred between these phases, such as: knowledge regarding management of the system (e.g., being fed back to the design and development team, allowing the design and development team to modify the system, such as for future versions or to improve the performance of the current version); knowledge of the structure, deployment requirements and operational behavior of the system; knowledge of the operational environment from the desktop to the data center; knowledge of the service level as observed by the end user; and so forth.


Generally, during the design phase, development tools leveraging the SDM are used to define a system comprised of communicating software and hardware components. A system definition contains all information necessary to deploy and operate a distributed system, including required resources, configuration, operational features, policies, etc. During the deployment phase, the system definition is used to automatically deploy the system and dynamically allocate and configure the software and hardware (e.g., server, storage and networking) resources required. The same system definition can be used for deployments to different host environments and to different scales. During the management phase, an SDM Service in the operating system provides a system-level view for managing the system. This enables new management tools to drive resource allocation, configuration management, upgrades, and process automation from the perspective of a system.


The architecture 200 employs the SDM definition model as well as a schema that defines functional operations within the SDM definition model. The definition model includes various different kinds of data structures which are collectively referred to as “definitions”. Functionality of the SDM is exposed through one or more platform services, such as application program interfaces (APIs).


During the design phase for a system, a development system 202 generates a document that contains the system definition, such as an SDM document 204. Development system 202 can be any of a variety of development systems, such as the Visual Studio® development system available from Microsoft® Corporation of Redmond, Wash. SDM document 204 defines all information (also referred to herein as knowledge) related to the deployment and management of the system. Any knowledge necessary for or used when deploying the system or managing the system is included in SDM document 204. Although described herein as a single document, it is to be appreciated that the knowledge could alternatively be spread out and maintained in multiple documents.


A system definition defines a system in terms of one or more of resources, endpoints, relationships and sub-systems. A system definition is declared in an SDM document (e.g., an XML document). Resources may be hardware resources or software resources. Endpoints represent communications across systems. Relationships define associations between systems, resources and endpoints. Sub-systems can be treated as complete systems and are typically part of a larger system.
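
As a rough, hypothetical illustration of this vocabulary (the type and member names below are invented for the sketch and are not the actual SDM schema, which is declared in XML), the pieces of a system definition can be pictured as a few plain C# records:

    using System.Collections.Generic;

    // Hypothetical, heavily simplified picture of a system definition.
    public record Resource(string Name, bool IsHardware);            // hardware or software resource
    public record Endpoint(string Name);                             // represents communication across systems
    public record Relationship(string Kind, string From, string To); // association between systems, resources, endpoints

    public record SystemDefinition(
        string Name,
        IReadOnlyList<Resource> Resources,
        IReadOnlyList<Endpoint> Endpoints,
        IReadOnlyList<SystemDefinition> SubSystems,   // sub-systems can be treated as complete systems
        IReadOnlyList<Relationship> Relationships);

For example, a two-tier application could be captured as a system whose sub-systems are a web front end and a database, joined by one connection relationship between the front end's database-client endpoint and the database's server endpoint.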


A system definition captures the basic structure of a dynamic system. It can be viewed as the skeleton on which all other information is added. This structure is typically specified during the development process, by architects and developers, and typically does not change frequently. In addition to the structure, the SDM can contain deployment information, installation processes, schemas for configuration, events and instrumentation, automation tasks, health models, operational policies, etc. Other information can be added by the operations staff, by vendors, and/or by management systems across the lifetime of a distributed system.


SDM document 204 includes one or more constraints (also referred to as requirements) of the system that an environment in which the system is to be deployed and/or run must satisfy. The environment itself is also described using an SDM document. Such environments can be single computing devices, or alternatively collections of computing devices (e.g., data centers), application hosts, etc. Different systems can be installed to different environments. For example, a data center may include fifty computing devices, and one system may be deployed to five of those computing devices, while another system may be deployed to thirty five of those computing devices. These requirements can take a variety of forms, such as: hardware requirements regarding the computing device(s) on which the system is to be deployed (e.g., a minimum processor speed, a minimum amount of memory, a minimum amount of free hard drive space, a minimum amount of network bandwidth available, particular security mechanisms available, and so forth), software requirements regarding the computing device(s) on which the system is to be deployed (e.g., a particular operating system, one or more other applications that also must be installed, specifications regarding how a particular system and/or the operating system is to be configured, a particular type of security or encryption in use, and so forth), other requirements regarding the computing device(s) on which the system is to be deployed (e.g., particular security keys available, data center policies that must be enforced, authentication that is used, environment topology, etc.).


Requirements can also go in the other direction—that is, the environment can have constraints or requirements on the configuration of the system that is to be installed (e.g., to implement the standards or policies of the environment). These can be “explicit” requirements that are created by the operator of the environment, such as particular settings or configurations the system must have, particular functionality the system must provide or support, particular security mechanisms the system must support, and so forth. These can also be “implicit” requirements that arise because of a particular configuration of the environment. For example, if a host computing device in the environment is using a particular type of file system then it may not be possible for some actions to be performed using that file system (although it may be possible for those same actions to be performed using another file system).


During the design and development phase of the system, SDM document 204 can be used to validate the system for one or more particular environment(s). This is a two-way validation: the system is validated for the environment and the environment is validated for the system. The environment can be validated for the system by comparing the requirements identified in the SDM document 204 with the environment and determining whether all of the requirements are satisfied by the environment. The system can be validated for the environment by comparing the requirements identified in an SDM document for the environment with the system and determining whether all of the requirements are satisfied by the system. If all of the requirements are satisfied by the environment and the system, then the designer or developer knows that the system can be deployed in and will run in the environment. However, if all of the requirements are not satisfied by the environment and/or the system, then the designer or developer is optionally informed of the requirements that were not satisfied, thereby informing the designer or developer of what changes should be made to the SDM document 204 (and correspondingly to the system) and/or to the environment in order for the system to be deployed and run in that environment.
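
A minimal C# sketch of this two-way check, under the simplifying and purely illustrative assumption that requirements can be reduced to name/value pairs (real SDM constraints are considerably richer):

    using System.Collections.Generic;

    static class DesignTimeValidation
    {
        // Hypothetical two-way validation: every system requirement must be satisfied by the
        // environment, and every environment requirement must be satisfied by the system.
        public static IReadOnlyList<string> Validate(
            IReadOnlyDictionary<string, string> systemRequirements,
            IReadOnlyDictionary<string, string> environmentFacts,
            IReadOnlyDictionary<string, string> environmentRequirements,
            IReadOnlyDictionary<string, string> systemFacts)
        {
            var failures = new List<string>();

            foreach (var (name, required) in systemRequirements)
                if (!environmentFacts.TryGetValue(name, out var actual) || actual != required)
                    failures.Add($"Environment does not satisfy system requirement '{name}' = '{required}'.");

            foreach (var (name, required) in environmentRequirements)
                if (!systemFacts.TryGetValue(name, out var actual) || actual != required)
                    failures.Add($"System does not satisfy environment requirement '{name}' = '{required}'.");

            return failures;   // an empty list means the system can be deployed and run in this environment
        }
    }

A non-empty result corresponds to the report of unsatisfied requirements described above, telling the designer what to change in the system, the environment, or both.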


The knowledge regarding deployment of the system that is included in the SDM document 204 describes how the system is to be deployed in one or more environments. The SDM document 204 is made available to a controller 206, which includes a deployment module 208 and a management module 210. In certain embodiments, the SDM document 204 as well as all of the files of the system (e.g., binaries, data, libraries, etc.) needed to install the system are packaged together into a single container (e.g., a single file) referred to as an SDU (System Definition Unit). Controller 206 can be one or more of computing devices 102 of FIG. 1. For example, a single device 102 of FIG. 1 may be the controller for a particular data center, or alternatively the controller responsibilities may be distributed across multiple devices 102.


Deployment module 208 includes services that are used to deploy the system in the environment(s). In FIG. 2, the environment in which the system is deployed is (or is deployed on) one or more target devices 212. Systems may also be deployed to controller 206. These services of deployment module 208 include one or more functions that can be called or invoked to install or deploy one or more systems in the environment.


Different knowledge for deployment in different environments may be included in the SDM document 204. This deployment knowledge describes any changes that need to be made in the environment (e.g., changes to a system registry; folders, directories, or files that need to be created; other setting or configuration parameters of the computing device that need to be set to particular values; and so forth), as well as which files (e.g., program and/or data files) need to be copied to the computing device(s) in the environment and any operations that need to be performed on those files (e.g., some files may need to be decompressed and/or decrypted). In many implementations, the deployment knowledge in the SDM document 204 includes, for example, information analogous to that presently found in typical setup or installation programs for systems.
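
For illustration only, once parsed out of the SDM document this per-environment deployment knowledge might boil down to shapes like the following hypothetical C# records (the names are assumptions, not the SDM's actual deployment schema):

    using System.Collections.Generic;

    // Hypothetical in-memory form of per-environment deployment knowledge.
    public record SettingChange(string Path, string Value);   // e.g., a registry key, folder to create, or configuration value
    public record FileToDeploy(string Source, string Target, bool Decompress = false, bool Decrypt = false);

    public record DeploymentKnowledge(
        string EnvironmentName,                             // the environment this knowledge applies to
        IReadOnlyList<SettingChange> EnvironmentChanges,    // changes to make on the target computing device(s)
        IReadOnlyList<FileToDeploy> Files);                 // files to copy, plus any operations to perform on them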


During the deployment process, controller 206 generates a record or store of the software and hardware resources involved in the deployment as well as the relationships between them. This record or store can subsequently be used by controller 206 during the management phase.


Management module 210 includes services that are used to manage the system once it is installed in the environment(s). These services of management module 210 include one or more functions that can be called or invoked to manage the systems in the environment. The knowledge regarding management of the system that is included in the SDM document 204 describes how the system is to be managed in one or more environments.


Different knowledge for managing a system in different environments may be included in the SDM document 204. The management knowledge includes any knowledge used in the management or operation of the system. Management involves, for example, configuration (and optionally subsequent reconfiguration), patching and upgrading, maintenance tasks (e.g., backup), health or performance monitoring, and so forth.


Changes to deployed systems are made through management module 210. The services of management module 210 include one or more functions that can be called or invoked to make changes to one or more systems deployed in the environment. By making such changes through the management module 210, several benefits can be realized. One such benefit is that controller 206 can maintain a record of the changes that have been made. Controller 206 may maintain a copy of the SDM document 204 for the system and record in the SDM document 204 any changes that are made to the system. Alternatively, controller 206 may maintain a separate record of the changes made to the system.


This record of changes maintained by controller 206 can simplify subsequent operations, such as solving problems with the system and/or environment, or when having to reinstall the system due to a hardware failure (allowing the system to be reinstalled and returned to running with the same parameters/settings as it had at the time of failure). By having such changes made through controller 206 and by having controller 206 maintain the record, some human error can be removed from the environment (e.g., if the administrator making the change is supposed to log the change in a book but forgets to do so there would be no record of the change—this problem is solved by having controller 206 maintain the record).


Furthermore, by making changes to systems through controller 206, as well as deploying systems through controller 206, controller 206 can serve as the repository of knowledge about the environment, the systems deployed in the environment, and interactions between them. Knowledge regarding the environment and/or systems deployed in the environment can be readily obtained from controller 206. This knowledge can be used to ensure the consistency of the controlled environment by validating that the controlled devices in the environment reflect the state stored in the central controller 206.


It should be noted that in some situations changes may be made to a system and/or environment but are not made through controller 206. For example, a computing device may be accidentally turned off or may fail. In these situations, attempts are made to reflect such changes in controller 206. These changes may be reflected in controller 206 automatically (e.g., a system may run that attempts to detect device failures and use the services of management module 210 to notify controller 206 of such failures) or may be reflected in controller 206 manually (e.g., an administrator may use the services of management module 210 to notify controller 206 of such changes). Alternatively, the changes that were made could be reversed to bring the system and/or portion of the environment back into line with the desired state of the system as recorded by controller 206.


The SDM document 204 can thus be viewed as a “live” document—it can be constantly changing based on changes to the environment and/or changes to the system throughout the lifecycle of the system.


The SDM enables the functional composition of systems across a horizontal and vertical axis. Composition along the horizontal axis is done with systems and subsystems. Composition along the vertical axis is done with “layers”. Applications, services, network topologies, and hardware fulfill a role in a distributed system, but are typically defined independently and owned by different teams or organizations. Layering is accomplished by components defining a set of constraints on a host and vice versa.



FIG. 3 illustrates an example layered setting. Four layers are illustrated in FIG. 3: layer 302, layer 304, layer 306, and layer 308. Although four layers are shown in FIG. 3, the actual number of layers can vary, and can be greater or less than four. Additionally, the content of different layers can vary in different embodiments. As can be seen in FIG. 3, the different layers are situated above and/or below other layers (e.g., layer 306 is above layer 304 but below layer 308).


Different systems and subsystems within a layer can interact with one another, and also can interact with systems and subsystems of different layers. For example, a subsystem 310 in layer 308 can interact with a subsystem 312 in layer 308, as well as a subsystem 314 in layer 306. Additionally, each layer can be viewed as the environment for the next higher layer. For example, layer 306 is the environment for systems and subsystems in layer 308, while layer 304 is the environment for systems and subsystems in layer 306. Each layer 302, 304, 306, and 308 has its own associated SDM document.


The different layers 302, 304, 306, and 308 can represent different content. In certain embodiments, layer 302 is a hardware layer, layer 304 is a network topology and operating systems layer, layer 306 is an application hosts layer, and layer 308 is an applications layer. The hardware layer represents the physical devices (e.g., computing devices) on which the layered system is built (e.g., devices 102 of FIG. 1). The network topology and operating systems layer represents the network topology of the computing devices (e.g., network setting 100 of FIG. 1) as well as the operating systems installed on those computing devices. The application hosts layer represents applications installed on the computing devices that can host other applications (e.g., SQL Server, IIS, and so forth). The applications layer represents applications that are installed on the computing devices that do not host other applications (e.g., entertainment applications such as games, productivity applications such as word processors, reference applications such as electronic encyclopedias, distributed applications such as may be used for web services or financial analysis, and so forth).



FIG. 4 is a flowchart illustrating an example process 400 for using the SDM across the entire lifecycle of a system. The various acts of process 400 of FIG. 4 can be implemented in software, firmware, hardware, or combinations thereof.


Initially, the system is designed based on the SDM (act 402). The system is designed to include requirements that an environment(s) must satisfy in order for the system to be deployed and run in the environment, as well as additional knowledge that is used for the deployment and management of the system. This knowledge is included in an SDM document associated with the system. Once designed, the system can optionally be validated using the SDM (act 404). This validation allows the designer or developer to verify that the system will be able to be deployed and run in the environment being validated against. As discussed above, not only is the system validated against the environment in which it is to be deployed, but that environment is also validated against the system. If the validation fails for a particular environment, then additional design steps can be taken to alter the system so that the system can be run in that environment (or alternatively steps can be taken to alter the environment).


Once validated, the system can be deployed using the SDM (act 406). When deploying the system in an environment, the system is installed in that environment so that it can be subsequently run in the environment. The knowledge used to install the system is included in the SDM document associated with the system. Once deployed, the system is monitored and/or managed using the SDM (act 408). Knowledge in the SDM document associated with the system identifies how the system is to be monitored and/or managed, and the system is monitored and/or managed within the environment in accordance with this knowledge.


Process 400 of FIG. 4 can be used with systems that are applications as well as systems that are environments. For example, an application can be validated against the SDM of its environment (e.g., an application in layer 308 of FIG. 3 is validated against the environment of layer 306 of FIG. 3). By way of another example, an operator or system architect can design environments (e.g., layer 306 or layer 304 of FIG. 3) and validate those environments against the environments they are to be deployed in (e.g., layers 304 and 302, respectively).


The constraints on a system and/or the environment can also be used during runtime (while the system is being monitored and/or managed) to validate changes to the system and/or the environment during runtime. Such runtime validation allows, for example, an operator of an environment to determine how changes to the environment may affect a running system, or a system designer to determine how changes to the system may affect its running in the environment.


In the discussions to follow, reference is made to flow and setting flow with respect to the runtime. Flow is used to pass configuration information between parts of a distributed system (e.g., allowing a developer to specify the configuration information in one place or allowing an operator to only provide a single entry). Flow is also used to determine the impact of changes to configuration by following the flow of setting data between parts of the system.



FIG. 5 illustrates an example architecture 500 using an SDM runtime. Architecture 500 is an example of architecture 200 of FIG. 2 using an SDM runtime 510 as well as the example implementation of the SDM discussed below in the section “Example SDM Implementation”. The SDM runtime 510 contains a set of components and processes for accepting and validating SDM files, loading SDUs (System Definition Units—which are packages of one or more SDM files and their related files), creating and executing SDM Change Requests and deploying SDM based systems into target environments. The runtime functionality allows systems described using the SDM to be defined and validated, deployed to a set of computing devices, and managed.


The SDM, which is discussed in more detail below in the section “Example SDM Implementation” is designed to support description of the configuration, interaction and changes to the components in a distributed system (the modeled system). SDM is based on an object-relational model. “Definitions” describe entities that exist in a system and “relationships” identify the links between the various entities. Definitions and relationships are further defined to capture semantic information relevant to the SDM. In particular, definitions are divided into components, endpoints and resources. Relationships are divided into the following: connections (also referred to as communication), containment, hosting, delegation and reference. Further details regarding definitions and relationships are provided below.


The SDM includes “abstract definitions” that provide a common categorization of system parts, provide tool support for a wide range of systems and provide the basis for definition checking at design time. A set of abstract definitions provide a comprehensive basis for service design. “Concrete definitions” represent parts of an actual system or data center design. A concrete definition is generated by selecting an abstract definition and providing an implementation that defines the concrete definition's members and setting values for its properties. Distributed applications are generated using collections of these concrete definitions.
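
Purely as an illustration of the abstract/concrete split (the names below are invented and are not the real SDM schema), a concrete definition can be pictured in C# as a reference to an abstract definition plus members and setting values:

    using System.Collections.Generic;

    // Hypothetical sketch: abstract definitions categorize system parts and declare settings;
    // concrete definitions implement one by supplying members and values for those settings.
    public record AbstractDefinition(string Name, IReadOnlyList<string> SettingNames);

    public record ConcreteDefinition(
        string Name,
        AbstractDefinition Implements,                        // the abstract definition being implemented
        IReadOnlyDictionary<string, string> SettingValues,    // values for (a subset of) the declared settings
        IReadOnlyList<string> Members);                       // members that make up the implementation

    // Example: a concrete web application built from an abstract "WebApplication" definition.
    // var site = new ConcreteDefinition("OrderEntrySite",
    //     new AbstractDefinition("WebApplication", new[] { "AuthMode", "ProcessIsolation" }),
    //     new Dictionary<string, string> { ["AuthMode"] = "Windows" },
    //     new[] { "CatalogVDir", "CheckoutVDir" });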


The SDM also includes “constraints” that model restrictions based on the allowed set of relationships in which an instance of a relationship can participate. Constraints are useful in describing requirements that depend on the configuration of objects involved in a relationship. For example, a constraint may be used to determine whether participants on each end of a communication protocol are using compatible security settings.


In order to effect change on a target system, SDM uses a declarative description of the required changes called a “change request” or CR. SDM defines the process that is used to expand, validate and execute a change request as part of the “SDM execution model”.


The “instance space” captures both the desired and current state of the managed application. Changes in the instance space are tracked and associated with the change request that initiated the change. The instance space is stored in an SDM runtime and reflects the current state of the modeled system. The runtime contains a complete record of the instances that have been created and the relationships between these instances. Each instance has an associated version history where each version is linked to a change request. The process of creating new instances is initiated by a change request. The change request defines a set of create, update and delete requests for definitions and relationships associated with specific members of an existing instance.


The following is a brief, functional discussion of how the components in FIG. 5 work together. An operator or administrator is able to describe an environment into which applications can be deployed, such as the topology of a data center. The operator or administrator produces an SDM file describing the environment, the file being referred to as the “logical infrastructure model” (LIM) 502, or as a data center description or data center model. This SDM file can be generated using any of a variety of development systems, such as the Visual Studio® development system available from Microsoft® Corporation of Redmond, Wash.


Additionally, an application developer is able to design and develop their application using any of a variety of development systems, such as the Visual Studio® development system. As the developer defines components of the application and how these components relate to one another, the developer is able to validate the application description against the datacenter description 502. This is also referred to as “Design Time Validation”.


Once the application is complete, the developer saves the description in an SDM and requests that the application be packaged for deployment as an SDU 504. The SDU includes the application SDM as well as the application binaries and other referenced files used to install the application.


The LIM 502 and SDU 504 are fed to deployment tool 506 of a controller device 520 for deployment. Deployment tool 506 includes a user interface (UI) to enable an operator to load the desired SDU 504. Deployment tool 506 works with create CR module 530 to install the application associated with the SDU 504 in accordance with the information in the SDM within SDU 504. Additionally, SDM definitions and instances from SDU 504 are populated in a store 508 of the SDM runtime 510. SDUs are managed in SDM runtime 510 by SDU management module 540, which makes the appropriate portions of the SDUs available to other components of runtime 510 and target(s) 522.


The operator can also specify what actions he or she wants to take on the targets 522 (e.g., target computing devices) on which the application is being deployed. The operator can do this via a deployment file, which is also referred to herein as a Change Request (CR). The CR is run through one or more engines 512, 514, 516, and 518. Generally, expand CR engine 512 expands the CR to identify all associated components as well as their connections and actions, flow values engine 514 flows values for the components (such as connection strings), check constraints engine 516 checks constraints between the environment and the application, and order actions engine 518 specifies the order for all of the necessary actions for the CR.


To initiate change to the system (including deploying an application) or validation of a model, an operator or process submits a CR. The CR contains a set of actions that the operator wants performed over the instances in the runtime 510. These actions can be, for example, create actions, update actions, and/or delete actions.
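
As a hypothetical C# sketch of the information such a change request carries (not the actual CR schema, which is a declarative document such as an XML file), it can be pictured as a named, atomic batch of instance actions:

    using System.Collections.Generic;

    // Hypothetical shape of a change request: create/update/delete actions over
    // instances that succeed or fail as a group.
    public enum ActionKind { Create, Update, Delete }

    public record InstanceAction(
        ActionKind Kind,
        string MemberPath,                                      // which member/instance the action applies to
        IReadOnlyDictionary<string, string>? Settings = null);  // settings supplied with the action, if any

    public record ChangeRequest(string Name, IReadOnlyList<InstanceAction> Actions);

The expansion, flow, constraint-checking and ordering engines described below each read and add to this batch before anything is dispatched to a target.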


In addition to user or operator initiated change requests, there may also be expansion/automatically generated change requests that are generated as part of the expansion process, discussed in more detail below. Regardless of their source, the change requests, once fully expanded and checked, are executed by sending actions to the targets 522, such as: discover, install, uninstall and change a target instance.


The CR is treated as an atomic set of actions that complete or fail as a group. This allows, for example, the constraint checking engine 516 to consider all actions when testing validity.


In design time validation, the CR will be created by the SDM Compiler 528 and will contain one (or the minimum number) of each SDM component in the SDM file. This CR of create instance commands will flow through the expansion engine 512, the flow values engine 514, and the constraint checking engine 516. Errors found in these three phases will be returned to the user via the development system he or she is using.


In deployment, the operator will create a CR with the UI presented by deployment tool 506. The CR will flow through all the engines 512, 514, 516, and 518 in the SDM runtime 510, and the appropriate actions and information will be sent by CR module 532 to the appropriate target(s) 522, where the request is executed (e.g., the application is installed). The appropriate target(s) 522 for a particular installation are typically those target(s) on which the application is to be installed.


When beginning to process a CR, in a definition resolution phase, create CR module 530 resolves all definitions and members that are referenced in the change request. The change request will assume that these are already loaded by the runtime 510; create CR module 530 initiates a load/compile action if they do not exist. Create CR module 530 also implements a path resolution phase where references to existing instances and instances defined by create actions within the change request are resolved.


The expansion performed by expansion engine 512 is a process where, given a change request, all the remaining actions required to execute the request are populated. In general, these actions are construction and destruction actions for definition and relationship instances. The operator could optionally provide details for all the actions required to construct or destroy an instance, or alternatively portions of the process can be automated: e.g., the operator provides key information about the changes he or she wants by identifying actions on members (e.g., byReference members), and the remainder of the actions are filled in on nested members (e.g., byReference and byValue members) and relationships. By way of another example, automated expansion can also refer to external resource managers that may make deployment decisions based on choosing devices with available resources, locating the application close to the data it requires, and so forth.


Expansion engine 512 also performs “auto wiring”. During auto wiring, engine 512 analyzes the scale-invariant grouping of components and compound components specified in the SDM and determines how the components should be grouped and interconnected when scaled to the requested level.


Expansion engine 512 also performs value member expansion, reference member expansion, and relationship expansion.


Value member expansion refers to identification of all of the non-reference definition members. The cardinality of these members is noted and, since all the required parameters are known, create requests are added to the change request for those members whose parent is being created. If the change request contains destruction operations, then destruction operations are added for all their contained instances.
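
A rough C# sketch of that step, reusing the hypothetical InstanceAction and ActionKind types from the change-request sketch above (illustrative names only):

    using System;
    using System.Collections.Generic;

    static class ValueMemberExpansion
    {
        // Hypothetical recursive expansion: when an instance is created (or destroyed),
        // matching create (or delete) actions are added for all contained value members.
        public static void Expand(
            string parentPath,
            ActionKind kind,                                    // Create or Delete, matching the parent action
            Func<string, IEnumerable<string>> valueMembersOf,   // contained non-reference members of a given member
            ICollection<InstanceAction> changeRequestActions)
        {
            foreach (var member in valueMembersOf(parentPath))
            {
                var childPath = $"{parentPath}/{member}";
                changeRequestActions.Add(new InstanceAction(kind, childPath));
                Expand(childPath, kind, valueMembersOf, changeRequestActions);   // value members can nest
            }
        }
    }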


Reference member expansion refers to reference members (as opposed to non-reference definition members). The cardinality of reference members is often undefined and they can have deployment time settings that require values in order for the instance to be constructed. So the process of expanding a reference member (e.g., a byReference member) can require more information about the instance than the runtime is in a position to provide.


Related to reference member expansion is a process referred to as discovery, which is a process used to find instances that have already been deployed. Discovery is an action typically initiated by an operator of the environment. For example, during an install request, expansion engine 512 determines if the instance already exists, if so determines what exists and if not then creates it. An instance manager (IM) 534 on the controller 520 communicates with the instance managers 526 on the target device 522 to initiate a discovery process. The discovery process returns data regarding the instance from the target device 522 to the controller 520.


The process of discovery populates reference definition members as part of a construction or update action. Typically, only reference members with object managers (instance managers that also do discovery) that support discovery participate in this process.


When a new instance is discovered, a check is made, using instance-specific key values, that the instance does not already exist in the SDM database. Once it is known that it is a new instance, the instance is classified according to the definitions of the members being discovered. If the instance does not match a member, or if there is an ambiguous match, then the member reference is left blank and the instance is marked as offline and incomplete.


Relationship expansion refers to, once all the definition instances that will be constructed are known, creating relationship instances that bind the definition instances together. If definition instances are being destroyed, all relationship instances that reference the definition instances are removed.


To create the relationships, the member space is used to identify the configurations of the relationships that should exist between the instances. Where the definition members have cardinality greater than one, the topology of the relationships is inferred from the base relationship definition. For example, for communication relationships an “auto wiring” can be done, and for hosting relationships a host is picked based on the algorithm associated with the hosting relationship.


During a flow stage, flow values engine 514 evaluates flow across all the relationship instances. Flow values engine 514 may add update requests to the change request for instances that were affected by any altered parameter flow. Engine 514 evaluates flow by determining the set of instances that have updated settings as a result of the change request. For each of these, any outgoing settings flows that depend on the modified settings are evaluated and the target nodes added to the set of changed instances. The process continues until the set is empty or the set contains a cycle.
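
A compact C# sketch of that evaluation as a worklist over a hypothetical flow graph (the delegate stands in for whatever the runtime uses to look up outgoing setting flows):

    using System;
    using System.Collections.Generic;

    static class FlowEvaluation
    {
        // Hypothetical worklist form of the flow stage: start from the instances whose settings
        // the change request modified, follow outgoing setting flows, and continue until no new
        // instances are affected. Re-visiting an already-seen instance (a cycle) adds nothing
        // new, so the loop terminates.
        public static ISet<string> AffectedInstances(
            IEnumerable<string> initiallyChanged,
            Func<string, IEnumerable<string>> flowTargetsOf)   // instances whose settings depend on the given instance
        {
            var changed = new HashSet<string>(initiallyChanged);
            var worklist = new Queue<string>(changed);

            while (worklist.Count > 0)
            {
                var source = worklist.Dequeue();
                foreach (var target in flowTargetsOf(source))
                    if (changed.Add(target))       // newly affected: its outgoing flows must be evaluated too
                        worklist.Enqueue(target);
            }
            return changed;   // update requests are then added to the change request for these instances
        }
    }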


After the flow stage, a process of duplicate detection is performed. The duplicate detection may be performed by one of the engines illustrated in FIG. 5 (e.g., flow values engine 514 or check constraints engine 516), or alternatively by another engine not shown in FIG. 5 (e.g., a duplicate detection engine may be included in SDM runtime 510). The process of duplicate detection matches expanded instances against instances that already exist in the SDM data store. For example, the process detects if another application has installed a shared file. When an instance that already exists is detected, one of several actions can be taken depending on the version of the existing instance: the install can be failed; the instance can be reference counted; the instance can be upgraded; or the installation can be performed side-by-side.
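
A small C# sketch of the version-based choice; the policy shown is invented for illustration, since the real decision depends on the definition and its versioning policy:

    using System;

    public enum DuplicateResolution { FailInstall, ReferenceCount, Upgrade, SideBySide }

    static class DuplicateDetection
    {
        // Hypothetical resolution once an expanded instance matches one already in the store.
        public static DuplicateResolution Resolve(Version existing, Version incoming)
        {
            if (existing == incoming) return DuplicateResolution.ReferenceCount; // same version: share it and count references
            if (incoming > existing)  return DuplicateResolution.Upgrade;        // newer version: upgrade the existing instance
            return DuplicateResolution.SideBySide;                               // otherwise install side-by-side
            // (a stricter policy might return FailInstall instead)
        }
    }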


Check constraints engine 516 implements a constraint evaluation phase in which all the constraints in the model are checked to see if they will still be valid after the change request has been processed.


After check constraints engine 516 finishes the constraint evaluation phase, a complete list of actions is available. So, order actions engine 518 can use the relationships between components to determine a valid change ordering. Any of a variety of algorithms can be used to make this determination.
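
One concrete, and here purely hypothetical, choice of such an algorithm is a depth-first topological sort over the dependency edges implied by the relationships (for example, a guest's install action depends on its host's install action), sketched in C#:

    using System;
    using System.Collections.Generic;

    static class ActionOrdering
    {
        // Hypothetical ordering of change actions by depth-first topological sort.
        // dependenciesOf(a) returns the actions that must complete before action a.
        public static IList<string> Order(
            IEnumerable<string> actions,
            Func<string, IEnumerable<string>> dependenciesOf)
        {
            var ordered = new List<string>();
            var state = new Dictionary<string, int>();   // absent/0 = unvisited, 1 = in progress, 2 = done

            void Visit(string action)
            {
                state.TryGetValue(action, out var s);
                if (s == 2) return;
                if (s == 1) throw new InvalidOperationException($"Cyclic dependency involving '{action}'.");
                state[action] = 1;
                foreach (var dependency in dependenciesOf(action)) Visit(dependency);
                state[action] = 2;
                ordered.Add(action);                     // all of its dependencies are already in the list
            }

            foreach (var action in actions) Visit(action);
            return ordered;
        }
    }

The per-machine grouping described next then slices this ordered list into the subsets sent to each target.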


Once order actions engine 518 is finished determining the ordering, deployment can be carried out by distributing subsets of the ordered set of actions that are machine specific. Once the actions have been ordered and grouped by machine, the actions as well as a copy of the necessary portion of the SDM runtime store 508 with instance information are sent to a target computing device 522. The SDM can be stored temporarily at the target device in a store cache 538.


The target computing device includes a target portion 536 of the SDM runtime that communicates with SDM runtime 510. The target computing device 522 also includes an agent that contains an execution engine 524 and can communicate with the appropriate instance managers (IMs) 526 on the target device to make changes on the target, such as create, update, and delete actions. Each action is sent as an atomic call to the instance manager 526 and the instance manager 526 returns a status message and, for some actions, also returns data (e.g., for discovery). Once all the actions are completed on target 522, the target's agent returns any errors and status to the controller 520. The controller 520 then uses this information to update the SDM runtime store 508.


As discussed above, change is carried out by breaking the change requests down into distributable parts based on the relationships that are affected. Once all the parts are completed (or after one or more has failed) the results are collated in the runtime 510 and a summary returned to the operator. In the event of a failure, all the actions can be “rolled back” and the system returned to the state it was in before the change was initiated.
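
A tiny C# sketch of that all-or-nothing behavior, assuming (hypothetically) that each distributable part exposes an apply step and an undo step; the real runtime works in terms of instance manager tasks:

    using System;
    using System.Collections.Generic;

    static class AtomicChange
    {
        // Hypothetical rollback pattern: apply parts in order; on any failure,
        // undo the already-applied parts in reverse order and surface the error.
        public static void Execute(IEnumerable<(Action Apply, Action Undo)> parts)
        {
            var applied = new Stack<Action>();
            try
            {
                foreach (var (apply, undo) in parts)
                {
                    apply();
                    applied.Push(undo);
                }
            }
            catch
            {
                while (applied.Count > 0) applied.Pop()();   // roll back in reverse order
                throw;                                        // the failure is then collated and returned to the operator
            }
        }
    }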


In certain embodiments, during design time validation discussed above, an SDM Compiler 528 receives an SDM file, creates a test CR, runs the test CR through the expand, flow values and check constraints engines of the SDM runtime, and returns any errors to the development system. This process provides SDM validation for deployment during design time for the developer.


The public interface to SDM runtime 510 and/or controller 520 is through an object model (APIs) library. The library is a managed code object model and allows the following to be performed (a hypothetical usage sketch follows this list):

    • Manage the SDMs in the runtime—SDM files can be loaded into the runtime. SDMs are immutable and are loaded one at a time (i.e., an entire SDM file is loaded rather than only parts of the file (e.g., individual definitions, classes or mappings from the SDM file)). SDMs can be deleted from the runtime and an XML document for an SDM in the runtime can be produced.
    • Manage the SDUs known by the runtime.
    • Manage SDM definitions—find and reflect on SDM elements (from an SDM loaded in the runtime). There is no public API provided for authoring a new SDM (i.e., this is a read only object model over the immutable elements of the SDM). This includes SDMs, SDUs, identities, versions, classes, definitions, binding/mappings and versioning policy.
    • Manage SDM instances—find and reflect on instances of components, endpoints, resources and relationships. In the instance space each instance can be identified by a GUID, a stable path or an array based path. The paths are strings and can be relative. These identifiers, including relative paths, allow instances to be found and referenced in documents such as the change request document.
    • Manipulate instances—make changes to SDM instances, including creating, changing topology, upgrading, changing settings and deleting. Instance changes are made within the bounds of a change request which provides an atomic unit of update so that any errors or constraint violations will result in the entire request failing. Instance requests also allow for instances to exist temporarily without a binding to a host, as an instance must have a host when the request is committed. It also allows for many operations that will affect a single component's installation or settings to be performed and have the installation or settings update deferred until commit so that a single update occurs on the component. The SDM model checking is performed prior to or at change request commit time and the commit will fail on any model or constraint violations.
    • Load a change request—a change request is a document, for example an XML file, that represents a set of instance space operations. This document can take advantage of relative paths to be a reusable ‘script’ for creating or deleting application instances.
    • Find and reflect on change requests—including getting the installation/update tasks and all error information, and retrying the installation/update of components affected by the request.
    • Generate a change request document from a change request in the database. Such documents are somewhat portable.
    • Subscribe to events on change request tasks, such as progress, log or status updates. The lifetime of these event subscriptions is limited by the lifetime of the process that loaded the client library (i.e., these are regular CLR events).
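
To make the list above concrete, here is a hypothetical C# surface for those capabilities; every type and member name below is invented for illustration and is not the actual SDM object model API:

    using System;
    using System.Collections.Generic;

    // Hypothetical managed client library surface (illustrative only).
    public interface ISdmRuntimeClient : IDisposable
    {
        void LoadSdm(string sdmFilePath);                         // SDMs are immutable and loaded whole, one at a time
        void DeleteSdm(string sdmName);
        string ExportSdmXml(string sdmName);                      // produce an XML document for a loaded SDM

        IReadOnlyList<string> FindInstances(string pathOrGuid);   // instances addressed by GUID, stable path or array path
        IChangeRequestHandle LoadChangeRequest(string xmlPath);   // a reusable 'script' of instance-space operations
    }

    public interface IChangeRequestHandle
    {
        void Create(string memberPath);                           // stage instance creations, updates and deletions
        void Update(string memberPath, string setting, string value);
        void Delete(string memberPath);

        void Commit();                                            // atomic: model and constraint checks run here,
                                                                  // and any violation fails the entire request
        event EventHandler<string> StatusUpdated;                 // progress / log / status events (regular CLR events)
    }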


The SDM runtime engine performs the reasoning on the SDM model and the functions surfaced by the APIs. The library communicates to the runtime engine as a web service with fairly coarse calls such as load SDM, create component instance and get entire SDM (for reflecting on SDM entities). The format of many of the parameters for this web service is XML with the same schema for SDM files. The engine may also perform checks on permissions.


The controller 520 can make use of Instance Managers (IMs), which can be associated with any definition or relationship in the model. IMs may perform one or more of the following roles:

    • Support deployment of the instance.
    • Support validation of the instance once it has been deployed (auditing).
    • Support discovery of already deployed instances that were not deployed through the runtime.
    • Support flow of setting values.
    • Support evaluation of constraints.
    • Support expansion of a change request.
    • Support presentation of the instance to a user as a CLR class through the API.


For deployment, an instance manager (IM) plug-in on controller 520 is associated with a class host relation and is separate from the plug-in used in the development system that provides the design experience for the classes and produces the associated binaries in the SDU 504 and the settings schema. Instance managers are supplied to the SDM runtime 510 as CLR classes (e.g., in a dll assembly) that implement an instance manager interface or inherit from an abstract class. An SDM Instance Manager, also referred to as an Instance Manager (IM) plug-in, provides the following functions to the controller 520 (a hypothetical interface sketch follows this list):

    • Generates the files and commands (tasks) to install, uninstall or reinstall component instances on their hosts—When a change request results in a new component instance, removal of a component instance or a change to a component that requires an uninstall and reinstall, it is the instance manager that takes the settings for the instance, the host instance, the definitions associated with the component and the binaries associated with those definitions in the SDU 504 and produces the files and commands needed to perform the install or uninstall on a target server ready for either manual execution or dispatch via the deployment engine.
    • Generates the files and commands (e.g., tasks) to update a component instance when its settings change or when the view from one of its endpoints changes (e.g., due to communication relationship topology changes or a visible endpoint has settings change)
    • Maps the endpoint instances visible on a component instance's endpoints to settings on the component instance—In the SDM a component instance has endpoint instances that, as a result of some communication relationship topology, can see other endpoint instances. The details of the other endpoint instances are mapped to settings that the component instance can fetch at runtime, usually so that it can bind to it. For example, a web site may have a database client endpoint instance so a communication relationship can be established with a database. When correctly established, its database client endpoint is able to see a single database server endpoint instance and the settings on that server endpoint. This information is used by the instance manager to place a connection string for the server in a configuration file under the name of the client endpoint. The end result is that code simply reads the connection string for the database from its configuration settings.
    • Generates the files and commands (tasks) to audit a component instance—Auditing confirms existence and correct settings. This may apply to host instance settings also.
    • Reports status for any task—The IM will translate the output captured, either partial or complete, and provide the status of the task as success, failure or incomplete, and optionally offer progress on incomplete (% or last response), details on failure (error message) and a human readable log on any status. By going back to the instance manager to interpret the output of a task, the instance manager is free to have its tasks log structured information (for example, as XML or even SOAP) rather than having to produce sufficient logging for diagnosis while keeping it human readable.
    • The instance managers may also provide code that does the constraint checking between hosts and their guests. Installers may use a common constraint language, for example based on XML, XPath and XQuery.
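
A hypothetical C# shape for such a plug-in, distilled from the responsibilities above; the interface and type names are invented here and are not the runtime's actual interface or abstract class:

    using System.Collections.Generic;

    // Hypothetical instance manager plug-in surface (illustrative names only).
    public record DeploymentTask(string TargetMachine, string Command, IReadOnlyList<string> Files);

    public enum TaskOutcome { Success, Failure, Incomplete }

    public interface IInstanceManagerPlugin
    {
        // Produce the files and commands to install, uninstall or update an instance on its host,
        // ready for manual execution or for dispatch via the deployment engine.
        IReadOnlyList<DeploymentTask> CreateInstallTasks(string instanceId);
        IReadOnlyList<DeploymentTask> CreateUninstallTasks(string instanceId);
        IReadOnlyList<DeploymentTask> CreateUpdateTasks(string instanceId);

        // Audit an already-deployed instance: confirm it exists and that its settings are correct.
        IReadOnlyList<DeploymentTask> CreateAuditTasks(string instanceId);

        // Discover instances on a host that were not deployed through the runtime.
        IReadOnlyList<string> Discover(string hostInstanceId);

        // Interpret the (possibly structured, e.g. XML) output captured for a task and report status.
        TaskOutcome InterpretOutput(DeploymentTask task, string capturedOutput);
    }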


      Example SDM Implementation


The following discussion describes an embodiment of the schema that defines the elements of the SDM.


Example Computer Environment



FIG. 21 illustrates a general computer environment 600, which can be used to implement the techniques described herein. The computer environment 600 is only one example of a computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the computer and network architectures. Neither should the computer environment 600 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary computer environment 600.


Computer environment 600 includes a general-purpose computing device in the form of a computer 602. Computer 602 can be, for example, a computing device 102 of FIG. 1, or implement development system 202 or be a controller 206 of FIG. 2, or be a target device 212 of FIG. 2, or be a controller 520 or target 522 of FIG. 5. The components of computer 602 can include, but are not limited to, one or more processors or processing units 604, a system memory 606, and a system bus 608 that couples various system components including the processor 604 to the system memory 606.


The system bus 608 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, also known as a Mezzanine bus.


Computer 602 typically includes a variety of computer readable media. Such media can be any available media that is accessible by computer 602 and includes both volatile and non-volatile media, removable and non-removable media.


The system memory 606 includes computer readable media in the form of volatile memory, such as random access memory (RAM) 610, and/or non-volatile memory, such as read only memory (ROM) 612. A basic input/output system (BIOS) 614, containing the basic routines that help to transfer information between elements within computer 602, such as during start-up, is stored in ROM 612. RAM 610 typically contains data and/or program modules that are immediately accessible to and/or presently operated on by the processing unit 604.


Computer 602 may also include other removable/non-removable, volatile/non-volatile computer storage media. By way of example, FIG. 21 illustrates a hard disk drive 616 for reading from and writing to a non-removable, non-volatile magnetic media (not shown), a magnetic disk drive 618 for reading from and writing to a removable, non-volatile magnetic disk 620 (e.g., a “floppy disk”), and an optical disk drive 622 for reading from and/or writing to a removable, non-volatile optical disk 624 such as a CD-ROM, DVD-ROM, or other optical media. The hard disk drive 616, magnetic disk drive 618, and optical disk drive 622 are each connected to the system bus 608 by one or more data media interfaces 626. Alternatively, the hard disk drive 616, magnetic disk drive 618, and optical disk drive 622 can be connected to the system bus 608 by one or more interfaces (not shown).


The disk drives and their associated computer-readable media provide non-volatile storage of computer readable instructions, data structures, program modules, and other data for computer 602. Although the example illustrates a hard disk 616, a removable magnetic disk 620, and a removable optical disk 624, it is to be appreciated that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like, can also be utilized to implement the exemplary computing system and environment.


Any number of program modules can be stored on the hard disk 616, magnetic disk 620, optical disk 624, ROM 612, and/or RAM 610, including, by way of example, an operating system 626, one or more application programs 628, other program modules 630, and program data 632. Each of the operating system 626, one or more application programs 628, other program modules 630, and program data 632 (or some combination thereof) may implement all or part of the resident components that support the distributed file system.


A user can enter commands and information into computer 602 via input devices such as a keyboard 634 and a pointing device 636 (e.g., a “mouse”). Other input devices 638 (not shown specifically) may include a microphone, joystick, game pad, satellite dish, serial port, scanner, and/or the like. These and other input devices are connected to the processing unit 604 via input/output interfaces 640 that are coupled to the system bus 608, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB).


A monitor 642 or other type of display device can also be connected to the system bus 608 via an interface, such as a video adapter 644. In addition to the monitor 642, other output peripheral devices can include components such as speakers (not shown) and a printer 646 which can be connected to computer 602 via the input/output interfaces 640.


Computer 602 can operate in a networked environment using logical connections to one or more remote computers, such as a remote computing device 648. By way of example, the remote computing device 648 can be a personal computer, portable computer, a server, a router, a network computer, a peer device or other common network node, and the like. The remote computing device 648 is illustrated as a portable computer that can include many or all of the elements and features described herein relative to computer 602.


Logical connections between computer 602 and the remote computer 648 are depicted as a local area network (LAN) 650 and a general wide area network (WAN) 652. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.


When implemented in a LAN networking environment, the computer 602 is connected to a local network 650 via a network interface or adapter 654. When implemented in a WAN networking environment, the computer 602 typically includes a modem 656 or other means for establishing communications over the wide area network 652. The modem 656, which can be internal or external to computer 602, can be connected to the system bus 608 via the input/output interfaces 640 or other appropriate mechanisms. It is to be appreciated that the illustrated network connections are exemplary and that other means of establishing communication link(s) between the computers 602 and 648 can be employed.


In a networked environment, such as that illustrated with computing environment 600, program modules depicted relative to the computer 602, or portions thereof, may be stored in a remote memory storage device. By way of example, remote application programs 658 reside on a memory device of remote computer 648. For purposes of illustration, application programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 602, and are executed by the data processor(s) of the computer.


Various modules and techniques may be described herein in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.


An implementation of these modules and techniques may be stored on or transmitted across some form of computer readable media. Computer readable media can be any available media that can be accessed by a computer. By way of example, and not limitation, computer readable media may comprise “computer storage media” and “communications media.”


“Computer storage media” includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.


“Communication media” typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism. Communication media also includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.


Alternatively, portions of the framework may be implemented in hardware or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) or programmable logic devices (PLDs) could be designed or programmed to implement one or more portions of the framework.


Conclusion


Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed invention.

Claims
  • 1. A method comprising: using, by one or more computing devices, a system definition model in a development phase of a system to design the system, the system definition model including one or more requirements of the system to be satisfied by an environment in which the system is to be deployed in order for the system to run in the environment, wherein the system includes an application; validating the environment for the system, by the one or more computing devices, by comparing the one or more requirements of the system with the environment in which the system is to be deployed to determine whether the one or more requirements of the system are satisfied by the environment during the development phase; validating the system for the environment, by the one or more computing devices, by comparing the one or more requirements of the environment with the system to determine whether the one or more requirements of the environment are satisfied by the system during the development phase; subsequently using, by the one or more computing devices, the system definition model in a deployment phase of the system to deploy the system on at least one of the one or more computing devices; after deployment of the system, calling, by the one or more computing devices, one or more functions defined within the system definition model during a management phase of the system to manage the system deployed on the at least one of the one or more computing devices; and integrating the development phase, the deployment phase, and the management phase in a lifecycle of the system based on the system definition model.
  • 2. A method as recited in claim 1, further comprising: using knowledge obtained during management of the system to design a subsequent version of the system.
  • 3. A method as recited in claim 1, wherein the system definition model includes knowledge describing how to deploy the system on the one or more computing devices.
  • 4. A method as recited in claim 1, wherein the system definition model includes knowledge describing how to deploy the system on multiple different computing devices, and wherein the knowledge includes different knowledge describing how to deploy the system on each of the multiple different computing devices.
  • 5. A method as recited in claim 1, wherein the validating the environment compares the one or more requirements of the system with the environment both during the design of the system, prior to the using the system definition model to deploy the system, and during the management phase of the system.
  • 6. A method as recited in claim 1, wherein the system definition model includes knowledge describing how to manage the system after deployment of the system.
  • 7. A method as recited in claim 1, further comprising: during management of the system, using a flow to automatically propagate a configuration change to the system.
  • 8. A method as recited in claim 1, wherein a plurality of environments are deployed on the one or more computing devices, the method further comprising: using a plurality of different system definition models to design each of the plurality of environments, wherein each of the plurality of environments is associated with one of the plurality of different system definition models; using, for each environment, the associated one of the plurality of different system definition models to deploy the environment; and after deployment, using, for each environment, the associated one of the plurality of different system definition models to manage the environment.
  • 9. A method as recited in claim 8, wherein each of the plurality of environments is layered, and wherein each of the plurality of environments serves as an environment to one other of the plurality of environments or to the system.
  • 10. A method as recited in claim 1, wherein the system definition model includes information describing how to deploy the system in multiple different runtimes, and wherein the information includes different information describing how to deploy the system in each of the multiple different runtimes.
  • 11. A method as recited in claim 1, further comprising, prior to the design, deployment, and management of the system, using, by the one or more computing devices, another system definition model to design the environment, wherein the system is deployed to the environment on the one or more computing devices; subsequently using, by the one or more computing devices, the other system definition model to deploy the environment on the one or more computing devices; and after deployment of the environment, using, by the one or more computing devices, the other system definition model to manage the environment deployed on the one or more computing devices, wherein the system definition model includes the one or more requirements of the system to be satisfied by the environment in order for the system to be run on the one or more computing devices, and wherein the other system definition model includes constraints to be satisfied by the system in order for the system to be run on the one or more computing devices.
  • 12. A method as recited in claim 11, wherein the system definition model for the environment is derived through examination of the configuration of the one or more computing devices.
  • 13. A computing device having stored a plurality of instructions that, when executed by a processor, cause the processor to perform operations comprising: using a system definition model in a development phase of a system to design the system, wherein the system includes an application, the system definition model includes a representation of an environment in which the application is to be deployed, and the using includes binding the application to the representation in the system definition model, the representation including definitions for hosts of the environment of their application components and constraints on the configuration of their applications; determining that the environment in which the application is to be deployed satisfies the constraints on the configuration of their applications prior to deploying the system; validating the environment for the system, by the processor, by comparing the one or more requirements of the system with the environment in which the system is to be deployed to determine whether the one or more requirements of the system are satisfied by the environment during the development phase; validating the system for the environment, by the processor, by comparing the one or more requirements of the environment with the system to determine whether the one or more requirements of the environment are satisfied by the system during the development phase; subsequently using the system definition model in a deployment phase of the system to deploy the system on one or more computing devices; during the deployment phase of the system, generating a record of resources involved in the deployment phase and relationships between the resources; and after deployment of the system, invoking one or more functions defined within the system definition model in a management phase of the system to manage the system deployed on the one or more computing devices.
  • 14. The computing device as recited in claim 13, wherein the system definition model includes knowledge describing how to deploy the system.
  • 15. The computing device as recited in claim 13, wherein the system definition model includes knowledge describing how to deploy the system in multiple different environments, and wherein the knowledge includes different knowledge describing how to deploy the system in each of the multiple different environments.
  • 16. The computing device as recited in claim 13, wherein the system definition model includes constraints that must be satisfied by an environment in order for the system to be run in the environment.
  • 17. The computing device as recited in claim 16, wherein to use the system definition model to deploy the system is to use the system definition model to check whether the constraints are to be satisfied by the environment during design of the system.
  • 18. The computing device as recited in claim 13, wherein the system definition model includes knowledge describing how to manage the system.
  • 19. An apparatus comprising: a processor; a controller executed on the processor, configured to use a system definition model in a development phase of a system to design the system, the system definition model includes requirements of the system that must be satisfied by an environment in order for the system to be run in the environment, wherein the system includes an application; a development system executed on the processor to validate the environment by comparing the requirements of the system with the environment to determine whether the requirements of the system are satisfied by the environment during the development phase; the development system further executed on the processor to validate the system by comparing the requirements of the environment with the system to determine whether the requirements of the environment are satisfied by the system during the development phase; a deployment module executed on the processor for subsequently using the system definition model in a deployment phase of the system to deploy the system on one or more computing devices; a management module executed on the processor after deployment of the system, to call one or more functions defined in the system definition model in a management phase of the system to manage the system deployed on the one or more computing devices; and the controller further executed on the processor to use the system definition model to integrate the development phase, the deployment phase, and the management phase in a lifecycle of the system.
  • 20. An apparatus as recited in claim 19, wherein the subsequently using the system definition model in the development phase includes knowledge describing how to deploy the system.
  • 21. An apparatus as recited in claim 19, wherein the subsequently using the system definition model in the development phase includes knowledge describing how to deploy the system in multiple different environments, and wherein the knowledge includes different knowledge describing how to deploy the system in each of the multiple different environments.
  • 22. An apparatus as recited in claim 19, wherein the call one or more functions defined in the system definition model in the management phase of the system includes knowledge describing how to manage the system.
  • 23. A system comprising: a processor; and a plurality of executable instructions which, when executed by the processor, perform operations comprising: using a system definition model to design an application, the system definition model being applicable across a lifecycle of the application, wherein the lifecycle of the application includes design of the application, deployment of the application, and management of the application, and wherein the system definition model includes a representation of an environment in which the application is to be deployed, and the using includes binding the application to the representation in the system definition model, the system definition model including requirements of the application that must be satisfied by the environment in order for the application to run in the environment; a development system executed on the processor to validate the environment by comparing the requirements of the system with the environment to determine whether the requirements of the system are satisfied by the environment during a development phase; the development system further executed on the processor to validate the system by comparing the requirements of the environment with the system to determine whether the requirements of the environment are satisfied by the system during the development phase; subsequently using the system definition model to deploy the application on one or more computing devices; during the deployment phase of the application, generating a record of resources involved in the deployment phase and relationships between the resources; and after deployment of the application, calling one or more functions defined within the system definition model to manage the application deployed on the one or more computing devices; wherein the system further includes a schema to dictate how functional operations within the system definition model are to be specified.
  • 24. A system as recited in claim 23, wherein the system definition model includes information describing how to deploy the application.
  • 25. A system as recited in claim 23, wherein the system definition model includes information describing how to deploy the application in multiple different environments, and wherein the information includes different information describing how to deploy the application in each of the multiple different environments.
  • 26. A system as recited in claim 23, wherein the system definition model further includes requirements of the environment to be satisfied by the application in order for the application to be run in the environment, the plurality of executable instructions to further perform operations comprising using the requirements of the environment during runtime while the application is being managed to validate the changes to the application during runtime.
  • 27. A system as recited in claim 26, wherein the plurality of executable instructions to further perform operations comprising validating the environment by comparing the requirements of the application with the environment to determine whether the requirements are satisfied by the environment during at least the design of the application.
  • 28. A system as recited in claim 27, wherein the plurality of executable instructions to further perform operations comprising validating the application by comparing the requirements of the environment with the application to determine whether the requirements of the environment are satisfied by the application during at least the design of the application.
  • 29. A system as recited in claim 23, wherein the system definition model includes information describing how to manage the application.
  • 30. A system as recited in claim 23, wherein the system further comprises: another system definition model applicable across a lifecycle of the environment, wherein the lifecycle of the environment includes design of the environment, deployment of the environment, and management of the environment; and wherein the schema is further to dictate how functional operations within the other system definition model are to be specified.
  • 31. A system as recited in claim 30, wherein the other system definition model for the environment is derived through examination of the configuration of one or more computing devices.
  • 32. A system as recited in claim 30, wherein the system definition model includes constraints to be satisfied by the environment in order for the application to be run on the environment, and wherein the other system definition model includes constraints to be satisfied by the application in order for the application to be run on the environment.
  • 33. A system as recited in claim 30, wherein the system further comprises: an additional system definition model applicable across a lifecycle of an additional environment, wherein the lifecycle of the additional environment includes design of the additional environment, deployment of the additional environment, and management of the additional environment; wherein the additional environment is layered below the environment; and wherein the schema is further to dictate how functional operations within the additional system definition model are to be specified.
RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 60/452,736, filed Mar. 6, 2003, entitled “Architecture for Distributed Computing System and Automated Design, Deployment, and Management of Distributed Applications”, which is hereby incorporated herein by reference. This application is related to the following U.S. patent applications, all of which are hereby incorporated herein by reference:
  • U.S. patent application Ser. No. 10/382,942, filed on Mar. 6, 2003, titled “Virtual Network Topology Generation”.
  • U.S. patent application Ser. No. 09/695,812, filed on Oct. 24, 2000, titled “System and Method for Distributed Management of Shared Computers”.
  • U.S. patent application Ser. No. 09/695,813, filed on Oct. 24, 2000, titled “System and Method for Logical Modeling of Distributed Computer Systems”.
  • U.S. patent application Ser. No. 09/695,820, filed on Oct. 24, 2000, titled “System and Method for Restricting Data Transfers and Managing Software Components of Distributed Computers”.
  • U.S. patent application Ser. No. 09/695,821, filed on Oct. 24, 2000, titled “Using Packet Filters and Network Virtualization to Restrict Network Communications”.
  • U.S. patent application Ser. No. 09/696,707, filed on Oct. 24, 2000, titled “System and Method for Designing a Logical Model of Distributed Computer System and Deploying Physical Resources According to the Logical Model”.
  • U.S. patent application Ser. No. 09/696,752, filed on Oct. 24, 2000, titled “System and Method Providing Automatic Policy Enforcement in a Multi-Computer Service Application”.
