The present embodiment(s) described below relates to associating virtual machine (VM) performance with VM configuration. More specifically, the embodiment(s) relates to evaluating Software as a Service (SaaS) instances using shadow system configurations.
The cloud computing environment refers to the delivery of hosted services over the Internet. There are different classifications of cloud computing, including private, public, and hybrid. Private cloud services pertain to services delivered from an internal data center to one or more internal users while preserving management, control, and security. Public cloud services relate to a third party delivering the service over the Internet. Hybrid cloud services pertain to a combination of public and private cloud services. More specifically, in the hybrid configuration, an organization provides and manages some resources in-house and has others provided externally. For example, an organization might use a public cloud service for archived data but continue to maintain in-house storage for operational customer data. This hybrid approach allows an organization to take advantage of the scalability and cost effectiveness that a public cloud computing environment offers without exposing internal or confidential applications and data.
Information technology infrastructure is defined by computational maximums or limits. In the case where an internal infrastructure is physically insufficient to handle a job, the functionality may be outsourced to a third party with external cloud capacity. More specifically, external hardware available through a third party may be engaged over the Internet to support capacity requirements.
The invention includes a method, system, and computer program product for performance analysis of one or more system instances.
An application executes in the foreground as a primary system instance. In addition, a first system instance is provided with a first configuration and a second system instance is provided with a second configuration. The application executes as a background process on both the first and second system instances. Performance data is generated for each system instance. More specifically, first performance data is generated for the first system instance and second performance data is generated for the second system instance. The first and second performance data are stored at a first location and a second location, respectively. The first and second performance data are compared and one of the first and second system instances is selected in response to the comparison. More specifically, the configuration of the selected system instance is converted to a new primary configuration, so that the application may be executed on a new primary system instance associated with the converted configuration.
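By way of a non-limiting illustration, the following Python sketch outlines this flow. The helper function, the simulated performance figures, and the cost-based selection rule are assumptions supplied for explanatory purposes only and do not represent a required implementation.

    # Illustrative sketch only; metric values and the selection rule are assumptions.
    import random

    def run_instance(config):
        """Simulate executing the application on a system instance and
        returning its performance data (elapsed time and cost)."""
        hourly_rate = {"vm": 0.20, "container": 0.12}[config["type"]]
        time_s = random.uniform(50, 100) / config["cpu"]
        return {"time_s": time_s, "cost": hourly_rate * time_s / 3600.0}

    # First and second system instances with differing configurations.
    first_config = {"type": "vm", "cpu": 4, "ram_gb": 16}
    second_config = {"type": "container", "cpu": 4, "ram_gb": 16}

    # The application runs in the foreground on the primary instance and in the
    # background on the first and second instances; each result is stored separately.
    first_data = run_instance(first_config)     # stored at a first location
    second_data = run_instance(second_config)   # stored at a second location

    # Compare the stored performance data and select one instance.
    selected = first_config if first_data["cost"] <= second_data["cost"] else second_config

    # The configuration of the selected instance becomes the new primary configuration.
    primary_config = selected
    print("New primary configuration:", primary_config)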
These and other features and advantages will become apparent from the following detailed description of the presently preferred embodiment(s), taken in conjunction with the accompanying drawings.
The drawings referenced herein form a part of the specification. Features shown in the drawings are meant as illustrative of only some embodiments of the invention, and not of all embodiments of the invention unless otherwise explicitly indicated.
It will be readily understood that the components of the present invention, as generally described and illustrated in the Figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the apparatus, system, and method of the present invention, as presented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention.
Reference throughout this specification to “a select embodiment,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “a select embodiment,” “in one embodiment,” or “in an embodiment” in various places throughout this specification are not necessarily referring to the same embodiment.
The illustrated embodiments of the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and processes that are consistent with the invention as claimed herein.
Software as a Service (SaaS) is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network, such as the Internet. SaaS is closely related to the application service provider (ASP) model and to on-demand computing, including hosted application management model(s) and software on demand model(s). The hosted application management model is where a provider hosts software for a customer and delivers the service over the Internet. The software on demand model is where the provider gives customers network-based access to a single copy of an application created specifically for SaaS distribution. An entity faced with a large and frequent data analysis requirement may choose to contract with a SaaS provider, also referred to herein as outsourcing.
Performing services internally has a cost, specifically, the cost of utilizing specific resources. At the same time, outsourcing of services has an explicit cost, namely a fee from the service provider providing the outsourced services. In some cases, the fees of the outsourced provider may be static, and in one embodiment, the fees may be dynamic and subject to change based on various factors. For example, the cost of the outsourced services may vary based on the time of day when the services are performed, recognizing that there may be an increased demand during business hours and having the fees reflect the changes in demand.
Services may be outsourced to a cloud computing environment, essentially utilizing hardware of an external system and associated resources. The cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure comprising a network of interconnected nodes. Referring now to
Computer system/server (112) may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server (112) may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
As shown in
System memory (128) can include computer system readable media in the form of volatile memory, such as random access memory (RAM) (130) and/or cache memory (132). Computer system/server (112) may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system (134) can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus (118) by one or more data media interfaces. As will be further depicted and described below, memory (128) may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
Program/utility (140), having a set (at least one) of program modules (142), may be stored in memory (128) by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, the one or more application programs, the other program modules, and the program data, or some combination thereof, may include an implementation of a networking environment. Program modules (142) generally carry out the functions and/or methodologies of embodiments of the invention as described herein.
Computer system/server (112) may also communicate with one or more external devices (114), such as a keyboard, a pointing device, a display (124), etc.; one or more devices that enable a user to interact with computer system/server (112); and/or any devices (e.g., network card, modem, etc.) that enable computer system/server (112) to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces (122). Still yet, computer system/server (112) can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter (120). As depicted, network adapter (120) communicates with the other components of computer system/server (112) via bus (118). It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server (112). Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.
Referring now to
Referring now to
Virtualization layer (362) provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers; virtual storage; virtual networks, including virtual private networks; virtual applications and operating systems; and virtual clients.
In one example, management layer (364) may provide the following functions: resource provisioning, metering and pricing, security, user portal, service level management, and SLA planning and fulfillment. The functions are described below. Resource provisioning provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing provides cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal provides access to the cloud computing environment for consumers and system administrators. Service level management provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment provides pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workloads layer (366) provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include, but are not limited to: mapping and navigation; software development and lifecycle management; virtual classroom education delivery; data analytics processing; operation processing; and maintenance of consistent application data to support migration within the cloud computing environment.
In the shared pool of configurable computer resources described herein, hereinafter referred to as a cloud computing environment, applications may be processed by different entities and under different system parameters. For example, an application may be processed in a virtual machine environment or a container environment, each referred to as a system instance, which although similar have differing physical parameters. At the same time, each virtual machine or container may be separately configured, with each configuration utilizing different physical machine hardware. Differences in configuration and hardware of the system instances may have different costs. Selection of one system instance over another may affect the cost of application processing. Accordingly, by operating different system configurations as background operations, an optimal system environment may be selected for future application processing.
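As a non-limiting illustration of how a system instance and its configuration may be represented, the following Python sketch defines a minimal descriptor; the field names and cost figures are assumptions provided for explanatory purposes only.

    # Minimal sketch of a system-instance descriptor; field names and cost
    # figures are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class SystemInstance:
        kind: str            # "vm" or "container"
        cpu_cores: int
        ram_gb: int
        storage_gb: int
        hourly_cost: float   # cost attributable to this configuration's hardware

    visible_vm = SystemInstance("vm", 4, 16, 100, 0.20)
    primary_shadow_container = SystemInstance("container", 4, 16, 100, 0.12)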
Referring to
In the example shown herein, the primary shadow container (406) is provided in the external cloud (402) to mimic or imitate the functionality of the visible virtual machine (404). The primary shadow container (406) is configured to reflect the parameters of the visible virtual machine (404) within the confines and parameters of a container configuration. At the same time, at least one set of secondary entities, also referred to as shadow systems, is created. In one embodiment, the quantity of secondary entities may be increased. For descriptive purposes, two sets of secondary entities are shown and described. In relation to the illustration provided, first and second sets of entities, (420) and (430), respectively, are provided and identified.
The first set (420) includes a first shadow virtual machine (424) and a first secondary shadow container (426). The first shadow virtual machine (424) and first secondary shadow container (426) of the first set (420) reflect a specified decrease in supporting hardware. In one embodiment, the container and virtual machine of the first set (420) process an application at a 25% decrease in hardware resources. For example, in one embodiment, the specified decrease of the first set (420) represents a virtual machine and container that are smaller in processing speed, RAM, and/or storage in comparison to the visible virtual machine (404) and the primary shadow container (406). The second set (430) includes a second shadow virtual machine (434) and a second secondary shadow container (436). In one embodiment, the container and virtual machine of the second set (430) process an application at a 25% increase in hardware resources. The second shadow virtual machine (434) and second secondary shadow container (436) of the second set (430) reflect a specified increase in supporting hardware, including but not limited to a specified increase in processing speed, RAM, and/or storage in comparison to the visible virtual machine (404). The quantity of secondary shadow containers and shadow virtual machines shown herein should not be considered limiting. In one embodiment, there may be a minimum of one shadow container or virtual machine. Regardless of the quantity of shadow containers and virtual machines, each shadow container or virtual machine functions to mimic or imitate the functionality of the visible virtual machine (404) and primary shadow container (406), with each secondary set of entities (420) and (430) representing a different configuration. Accordingly, the sets of secondary containers and virtual machines (420) and (430), respectively, demonstrate a shadow system deployed in the external cloud (402) associated with the visible virtual machine (404) and the primary shadow container (406).
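Continuing the illustrative Python sketch above, the secondary sets may be derived from the baseline instances by scaling the supporting hardware; the 25% figures mirror the example in this description, and the scaling helper itself is an assumption provided for illustration only.

    # Sketch of deriving the secondary shadow sets from the baseline instances;
    # builds on the SystemInstance descriptor shown earlier.
    from dataclasses import replace

    def scaled(instance, factor):
        """Return a copy of a SystemInstance with its hardware resources scaled."""
        return replace(
            instance,
            cpu_cores=max(1, round(instance.cpu_cores * factor)),
            ram_gb=max(1, round(instance.ram_gb * factor)),
            storage_gb=max(1, round(instance.storage_gb * factor)),
        )

    # First set (420): a 25% decrease in supporting hardware.
    first_shadow_vm = scaled(visible_vm, 0.75)
    first_shadow_container = scaled(primary_shadow_container, 0.75)

    # Second set (430): a 25% increase in supporting hardware.
    second_shadow_vm = scaled(visible_vm, 1.25)
    second_shadow_container = scaled(primary_shadow_container, 1.25)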
Referring to
The visible virtual machine executes an application (506), and the primary shadow container executes the same application (508). In one embodiment, both the visible virtual machine and the primary shadow container accept input from a user's internal cloud and execute the application under the respective configurations, e.g. virtual machine and container. In one embodiment, the application is an SaaS instance. The visible virtual machine may execute the application (506) in parallel to the primary shadow container executing the application (508). Alternatively, the visible virtual machine may execute the application (506) prior to the primary shadow container executing the application (508). The order of executing the application by the visible virtual machine and the primary shadow container is for exemplary purposes and is not considered to be limiting.
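A non-limiting Python sketch of executing the same application on the visible virtual machine and the primary shadow container in parallel is shown below; the execute_on helper is a hypothetical stand-in for dispatching the SaaS instance to a given system and does not correspond to any actual deployment interface.

    # Sketch of executing the application on the visible virtual machine and the
    # primary shadow container in parallel; execute_on is a hypothetical stand-in.
    from concurrent.futures import ThreadPoolExecutor
    import random
    import time

    def execute_on(system_name, app_input):
        """Pretend to run the application on the named system and return the
        result together with the elapsed execution time."""
        start = time.time()
        time.sleep(random.uniform(0.1, 0.3))   # stands in for real work
        return {"system": system_name,
                "result": sum(app_input),
                "elapsed_s": time.time() - start}

    app_input = [1, 2, 3, 4]   # input accepted from the user's internal cloud
    with ThreadPoolExecutor(max_workers=2) as pool:
        vm_future = pool.submit(execute_on, "visible_vm", app_input)
        container_future = pool.submit(execute_on, "primary_shadow_container", app_input)
        vm_run, container_run = vm_future.result(), container_future.result()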
Performance and price data associated with executing the application on the visible virtual machine is generated (510). Similarly, performance and price data associated with executing the application on the primary shadow container is generated (512). In one embodiment, performance data comprises CPU utilization. Performance data may also comprise I/O rates or disk usage. Also, the performance data may comprise an execution time threshold. The performance and price data associated with the visible virtual machine that results from the executed application may be returned to the user. In one embodiment, the performance and price data associated with the primary shadow container are written to a null pipe to avoid confusing the user. Accordingly, performance and price data are generated for a plurality of sets of virtual machines and associated shadow containers, including at least a visible virtual machine and a primary shadow container, executing the same application.
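Continuing the sketch, performance and price data of the kind described above may be generated for each run, with the shadow container's output directed to a null device so that it is not returned to the user; the sampled metric values and the use of os.devnull as the null pipe are assumptions for illustration.

    # Sketch of generating performance and price data for each run; the metric
    # values shown here are placeholders.
    import os

    def generate_performance_data(run, hourly_cost):
        return {
            "cpu_utilization": 0.62,          # e.g., sampled CPU utilization
            "io_rate_mb_s": 45.0,             # I/O rate
            "disk_usage_gb": 1.2,             # disk usage
            "elapsed_s": run["elapsed_s"],    # compared against a time threshold
            "price": hourly_cost * run["elapsed_s"] / 3600.0,
        }

    vm_data = generate_performance_data(vm_run, hourly_cost=0.20)
    container_data = generate_performance_data(container_run, hourly_cost=0.12)

    # The visible virtual machine's results are returned to the user, while the
    # shadow container's output is written to a null pipe.
    with open(os.devnull, "w") as null_pipe:
        print(container_run["result"], file=null_pipe)
    print("visible VM result:", vm_run["result"])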
The performance and price data associated with the virtual machine is stored in a first memory location (514). Similarly, the performance and price data associated with the primary shadow container is stored in a second memory location (516). Following storage of the data, sets of data may be accessed and compared. In one embodiment, data generated by the visible virtual machine is accessed and compared to data generated by the primary shadow container (518). Accordingly, performance between a virtual machine and a similarly configured container may be evaluated so that an optimal configuration may be selected for future application processing.
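The storage and comparison steps (514), (516), and (518) may be illustrated as follows; the file paths and the weighting of price against elapsed time are assumptions and not a required scoring scheme.

    # Sketch of storing the two data sets at separate locations and comparing them.
    import json

    def store(data, path):
        with open(path, "w") as f:
            json.dump(data, f)

    store(vm_data, "/tmp/perf_visible_vm.json")                 # first memory location
    store(container_data, "/tmp/perf_primary_container.json")   # second memory location

    def compare(a, b):
        """Return a negative value when data set a outperforms data set b."""
        score = lambda d: d["price"] + 0.01 * d["elapsed_s"]
        return score(a) - score(b)

    preferred = "container" if compare(container_data, vm_data) < 0 else "vm"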
Referring to
Data associated with execution and processing of the application on the visible virtual machine is generated and stored (670), and data associated with execution and processing of the application on the shadow container is generated and stored (672). In one embodiment, the data at steps (670) and (672) may be stored at separate memory locations. Each configured background system also executes the application, and associated processing data for each configuration is generated and stored. As shown, two alternative system configurations are provided, with each system including a virtual machine and a shadow container. The application is shown processing in the background under containerP (622), virtual machineX (632), containerX (642), virtual machineY (652), and containerY (662). In one embodiment, additionally configured systems are provided, and the application is processed in the background for each additionally configured system. Performance data for containerP is generated (624) and stored (626), performance data for VMX is generated (634) and stored (636), performance data for containerX is generated (644) and stored (646), performance data for VMY is generated (654) and stored (656), and performance data for containerY is generated (664) and stored (666). In one embodiment, data for each virtual machine and each container operating in the background is separately stored in a different memory location and is separately accessible. After the data is stored, data from any or each of the background application processes may be accessed and employed (680) for evaluation of the operating efficiency of the application under different system parameters.
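By way of further illustration, the background configurations named above may each execute the application and store their data separately; the storage locations and the simulated metrics in this Python sketch are assumptions.

    # Sketch of running the application in the background under several shadow
    # configurations and storing each configuration's data at its own location.
    import json
    import random

    background_systems = ["containerP", "virtual_machineX", "containerX",
                          "virtual_machineY", "containerY"]

    performance_store = {}
    for name in background_systems:
        # Execute the application under this configuration (simulated here)
        # and generate its performance data.
        data = {"elapsed_s": random.uniform(40, 80), "price": random.uniform(0.5, 1.5)}
        # Store each data set separately so that it is separately accessible.
        location = "/tmp/perf_{}.json".format(name)
        with open(location, "w") as f:
            json.dump(data, f)
        performance_store[name] = location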
Once the generated data is stored, the data may be utilized for comparison to the performance and price data generated by the visible virtual machine. As presented in
Following step (706), it is determined whether at least one of the alternative configurations has an economic or processing benefit relative to the primary form of the application execution (708). The determination at step (708) is an evaluation to automatically change the processing environment to one of the alternatively configured systems. The processing benefit may come in different forms. One example of a processing benefit is a faster completion of the application execution. Another example is an increased efficiency associated with the application execution, which may maximize resources. For example, a prior job may have required two cores for execution, while a new alternative configuration may only require one core, leaving the unused core available for a different job execution. The processing benefit may also be expressed as a physical benefit in the form of reduced energy usage with lower operating costs. The processing benefit may also be expressed in the form of a sequential benefit. For example, if a second application depends on completion of a first application, and the first application finishes sooner based on a time improvement, the second application can also have an earlier start and likely an earlier completion. Application processing that completes earlier may also open up physical configuration and resources for other applications, thereby benefiting a hosting service. In one embodiment, two applications are set for sequential processing wherein the same physical system could dynamically be re-configured for the second application, which in one embodiment has different efficiency requirements than the first application. Accordingly, the processing benefits may take on different forms, and in one embodiment, may be expanded to forms that are not explicitly identified herein.
In one embodiment, the comparison data must exceed a threshold value for the determination at step (708) to be affirmative. A positive response to the determination at step (708) is followed by selecting one of the alternative configurations and converting the primary configuration to the selected alternative configuration for application processing (710). In addition, the prior primary configuration may be configured to operate as a background process (712). A negative response to the determination at step (708) is followed by the application continuing to execute and process under the same system configuration without any changes (714). In one embodiment, the evaluation shown and described herein may take place on a periodic basis, or after each application completes execution. Regardless of the period for evaluation, any one of the secondary executing environments may be selected to replace the primary execution environment.
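A minimal Python sketch of the determination at step (708) and the actions at steps (710), (712), and (714) follows; the 10% threshold and the price-based benefit measure are assumptions chosen for illustration.

    # Sketch of promoting an alternative configuration only when its measured
    # benefit over the primary exceeds a threshold value.
    BENEFIT_THRESHOLD = 0.10   # require at least a 10% improvement

    def evaluate(primary_data, alternatives):
        """Return the alternative to promote, or None to keep the primary (714)."""
        best_name, best_gain = None, 0.0
        for name, data in alternatives.items():
            gain = (primary_data["price"] - data["price"]) / primary_data["price"]
            if gain > best_gain:
                best_name, best_gain = name, gain
        return best_name if best_gain > BENEFIT_THRESHOLD else None

    primary_data = {"price": 1.00, "elapsed_s": 60.0}
    alternatives = {"containerX": {"price": 0.82, "elapsed_s": 55.0},
                    "virtual_machineY": {"price": 0.97, "elapsed_s": 59.0}}

    choice = evaluate(primary_data, alternatives)
    if choice is not None:
        # Convert the primary configuration to the selected alternative (710)
        # and operate the prior primary configuration in the background (712).
        primary_configuration, background_configuration = choice, "prior_primary"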
Referring now to
As shown and described above, the generated data may be accessed and evaluated for selection of an appropriate processing environment. An analyzer (850) is shown operatively coupled to the processing unit (812). The analyzer (850) functions to analyze operating efficiency of the executing application and to support comparison of the data, and in one embodiment, to generate one or more reports associated with the generated data. The analyzer (850) provides analysis with respect to container configuration, performance, price, and any combination of resource management variables. In one embodiment, performance data includes processor utilization. Performance data may also include I/O rates or disk usage. Also, the performance data may include an execution time threshold. The analyzer (850) may correlate container configuration and application execution performance. The analyzer (850) may also correlate container configuration to price. The type of analysis and the type of variables described herein are not meant to be limiting and are provided for exemplary purposes. The analyzer (850) compares the performance and price data of the executing form of the application to that of each background process of the application that is executing, and in one embodiment, may select one of the alternative configuration environments as a replacement for the primary configuration. Accordingly, the analysis and actions provided by the processing unit (812) and the analyzer (850) support subsequently executing an instance of the application on a converted configuration.
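One possible, non-limiting realization of such an analyzer is sketched below in Python; the class name, record fields, and selection rule are assumptions for explanatory purposes.

    # Minimal sketch of an analyzer along the lines of element (850).
    class Analyzer:
        def __init__(self, records):
            # records: configuration, performance, and price data per system
            self.records = records

        def correlate(self):
            """Pair each configuration with its execution performance and price."""
            return [(r["config"], r["cpu_utilization"], r["elapsed_s"], r["price"])
                    for r in self.records]

        def best(self, time_threshold_s):
            """Select the cheapest configuration meeting the execution time threshold."""
            eligible = [r for r in self.records if r["elapsed_s"] <= time_threshold_s]
            return min(eligible, key=lambda r: r["price"], default=None)

    records = [
        {"config": "visible_vm",        "cpu_utilization": 0.55, "elapsed_s": 62.0, "price": 1.00},
        {"config": "primary_container", "cpu_utilization": 0.61, "elapsed_s": 58.0, "price": 0.85},
    ]
    analyzer = Analyzer(records)
    replacement = analyzer.best(time_threshold_s=60.0)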
Performance and price data may be reviewed and analyzed to evaluate alternative configurations of containers and/or virtual machines. If threshold values are established for either performance or price, the alternative configurations may be evaluated to determine whether they satisfy the threshold values. The analysis discussed above may be run dynamically and without human interaction. Alternatively, the analyzed data may be reviewed in the form of a generated report. To that end, the analyzer (850) is shown in communication with a report generator (860). The report generator (860) consolidates analysis performed by the analyzer (850) into a deliverable format. In one embodiment, the consolidated analysis comprises a report (862), which is shown stored in memory (816). The report (862) may be sent to, for instance, a requester, an administrator, or other system evaluator. A container configuration change may be authorized in view of the provided analysis of the visible and shadow container systems.
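Continuing the analyzer sketch, a report of the kind produced by the report generator (860) might be consolidated and stored as follows; the report format, file path, and threshold value are assumptions.

    # Sketch of consolidating the analysis into a deliverable report.
    import json

    def generate_report(analyzer, time_threshold_s):
        correlation = analyzer.correlate()
        best = analyzer.best(time_threshold_s)
        return {
            "rows": [{"config": c, "cpu_utilization": u, "elapsed_s": t, "price": p}
                     for c, u, t, p in correlation],
            "meets_time_threshold": [c for c, _, t, _ in correlation
                                     if t <= time_threshold_s],
            "recommended": best["config"] if best else None,
        }

    report = generate_report(analyzer, time_threshold_s=60.0)
    with open("/tmp/shadow_report.json", "w") as f:   # report (862) stored for review
        json.dump(report, f, indent=2)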
Referring now to
The matrix and associated system configuration parameters are provided for exemplary purposes and are not meant to be limiting. In one embodiment, a report is generated and reviewed to compare container and virtual machine configurations in view of performance and cost. Further, the report may indicate whether certain configurations meet or exceed threshold values. In view of the data presented with the associated container and virtual machine configurations, an externally provided container configuration may be pre-selected for subsequent application execution. Accordingly, comparative analysis is provided and supported so that informed decisions may be made about employing external cloud containers and about optimal configurations for the employed containers.
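For illustration only, a comparison matrix of the kind described above might be tabulated as follows; the configuration rows, measured values, and threshold figures in this Python sketch are hypothetical and do not reflect actual measurements.

    # Sketch of a comparison matrix with threshold indicators.
    rows = [
        # (configuration,              elapsed seconds, price)
        ("visible VM (baseline)",      62.0, 1.00),
        ("primary shadow container",   58.0, 0.85),
        ("shadow set, -25% hardware",  74.0, 0.64),
        ("shadow set, +25% hardware",  49.0, 1.21),
    ]
    TIME_THRESHOLD_S, PRICE_THRESHOLD = 60.0, 1.00

    print("{:30} {:>8} {:>7}  meets thresholds".format("configuration", "time(s)", "price"))
    for name, elapsed, price in rows:
        ok = elapsed <= TIME_THRESHOLD_S and price <= PRICE_THRESHOLD
        print("{:30} {:8.1f} {:7.2f}  {}".format(name, elapsed, price, "yes" if ok else "no"))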
The host described above in
Indeed, executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different applications, and across several memory devices. Similarly, operational data may be identified and illustrated herein within the tool, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, as electronic signals on a system or network.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of agents, to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
Referring now to the block diagram of
The computer system can include a display interface (1006) that forwards graphics, text, and other data from the communication infrastructure (1004) (or from a frame buffer not shown) for display on a display unit (1008). The computer system also includes a main memory (1010), preferably random access memory (RAM), and may also include a secondary memory (1012). The secondary memory (1012) may include, for example, a hard disk drive (1014) and/or a removable storage drive (1016), representing, for example, a floppy disk drive, a magnetic tape drive, or an optical disk drive. The removable storage drive (1016) reads from and/or writes to a removable storage unit (1018) in a manner well known to those having ordinary skill in the art. Removable storage unit (1018) represents, for example, a floppy disk, a compact disc, a magnetic tape, or an optical disk, etc., which is read by and written to by removable storage drive (1016).
In alternative embodiments, the secondary memory (1012) may include other similar means for allowing computer programs or other instructions to be loaded into the computer system. Such means may include, for example, a removable storage unit (1020) and an interface (1022). Examples of such means may include a program package and package interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units (1020) and interfaces (1022) which allow software and data to be transferred from the removable storage unit (1020) to the computer system.
The computer system may also include a communications interface (1024). Communications interface (1024) allows software and data to be transferred between the computer system and external devices. Examples of communications interface (1024) may include a modem, a network interface (such as an Ethernet card), a communications port, or a PCMCIA slot and card, etc. Software and data transferred via communications interface (1024) is in the form of signals which may be, for example, electronic, electromagnetic, optical, or other signals capable of being received by communications interface (1024). These signals are provided to communications interface (1024) via a communications path (i.e., channel) (1026). This communications path (1026) carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a radio frequency (RF) link, and/or other communication channels.
In this document, the terms “computer program medium,” “computer usable medium,” and “computer readable medium” are used to generally refer to media such as main memory (1010) and secondary memory (1012), removable storage drive (1016), and a hard disk installed in hard disk drive (1014).
Computer programs (also called computer control logic) are stored in main memory (1010) and/or secondary memory (1012). Computer programs may also be received via communications interface (1024). Such computer programs, when run, enable the computer system to perform the features of the present invention as discussed herein. In particular, the computer programs, when run, enable the processor (1002) to perform the features of the computer system. Accordingly, such computer programs represent controllers of the computer system.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Alternative system configurations are provided to operate in the background, and more specifically, to execute an application in the background. Data associated with application processing is generated and stored. Analysis of the data enables one of the alternative configurations to be selected as a primary processing environment for the application. Accordingly, an optimal processing environment is selected based on analysis of data gathered from alternative configurations processing the application in the background.
It will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the invention. In particular, comparison of data from the background execution of the system instances may yield an alternative system configuration. Selection of a new primary system instance for the application may come from one of the background system instances or from an alternative system configuration. Accordingly, the scope of protection of this invention is limited only by the following claims and their equivalents.