The present invention relates to a data processing method and system for efficient logical partition (LPAR) capacity consolidation, and more particularly to a technique for determining an optimized configuration of LPARs and physical servers that host the LPARs.
Conventional LPAR capacity consolidation systems rely exclusively on processor utilization as a metric, thereby failing to account for other properties and allowing an insufficient number of LPARs or an excessive number of LPARs to be included in a virtualization of a computing system (i.e., an undersubscription or an oversubscription, respectively, of the physical properties of the computing system from an architectural point of view). When undersubscribed, the server is underutilized and wasted. When oversubscribed, the server is overutilized and the customer's service is negatively impacted. Thus, there exists a need to overcome at least one of the preceding deficiencies and limitations of the related art.
In a first embodiment, the present invention provides a computer-implemented method of optimizing a configuration of a plurality of LPARs and a plurality of server computer systems (servers) that host the LPARs. The method of the first embodiment comprises:
receiving configuration data that describes an enterprise configuration of the plurality of LPARs and the plurality of servers;
receiving optimization characteristic data that describes one or more characteristics on which an optimized version of the enterprise configuration (optimized enterprise configuration) is to be based;
a processor of a computer determining the optimized enterprise configuration by determining a best fit of the LPARs into the servers based on a bin packing methodology that applies the configuration data and the optimization characteristic data;
storing the optimized enterprise configuration; and
migrating one or more LPARs of the plurality of LPARs to one or more servers of the plurality of servers, wherein a result of the step of migrating is the plurality of LPARs and the plurality of servers being configured in the optimized enterprise configuration.
In a second embodiment, the present invention provides a computer-implemented method of determining an optimal configuration of a plurality of LPARs and a plurality of servers that host the LPARs. The method of the second embodiment comprises:
a processor of a computing system determining a draft configuration of the plurality of LPARs and the plurality of servers that is a tentative version of the optimal configuration of the plurality of LPARs and the plurality of servers by performing an iteration of a first loop, wherein performing the iteration of the first loop includes iteratively evaluating LPARs from a list of n LPARs in a second loop, and wherein the tentative version of the optimal configuration has a tentative final total cost;
determining no other draft configuration resulting from one or more additional iterations of the second loop or one or more additional iterations of the first loop has a draft total cost less than the tentative final total cost of the tentative version of the optimal configuration;
in response to determining no other draft configuration has the draft total cost less than the tentative final total cost, saving the tentative version of the optimal configuration as a final version of the optimal configuration of the plurality of LPARs and the plurality of servers; and
migrating one or more LPARs of the plurality of LPARs to one or more servers of the plurality of servers so that the plurality of LPARs and the plurality of servers are configured in the final version of the optimal configuration.
Systems, program products and processes for supporting computing infrastructure corresponding to the above-summarized methods are also described herein.
The present invention provides a technique for efficient LPAR capacity consolidation. Further, the present invention may provide energy efficiency by determining a minimum number of physical servers to support all required LPARs, an optimization of computer room floor space that reduces energy requirements, and an optimization of server equipment that favors energy efficient models. Still further, the optimized configuration provided by the present invention may reduce requirements for equipment, power, floor space, cost and support personnel costs. Further yet, the optimal configuration provided by the present invention may enhance standardization efforts when appropriate priorities are established.
Overview
Embodiments of the present invention determine a physical server inventory that accommodates LPARs (i.e., all LPARs in a system) in an optimized configuration so that a total resource utilization of the LPARs collocated on any of the physical servers does not exceed a total capacity of the physical server in any time interval. The optimization may take place in multiple dimensions (e.g., processor, memory, power requirements, footprint (i.e., floor space), equipment cost, etc.), each dimension optimized and prioritized to identify the best fit of available resources for the resources required by the LPARs. The multi-dimensional optimization may employ an N-dimensional cube, where the optimal configuration is the intersection of the N dimensions in prioritized order. In one embodiment, the present invention uses shadow costs to identify the real costs of LPAR reconfiguration and to allow identification of the optimized configuration.
A set of LPARs whose configuration is optimized by embodiments of the present invention is characterized by individual resource requirements, such as memory and central processing unit (CPU) requirements. Further, a set of physical server platforms whose configuration is optimized by embodiments of the present invention is characterized by individual resource availability and cost. The LPAR resource requirements are specified at given time intervals. The time interval may have arbitrary length, but the intervals are the same for all the LPARs. For an additional cost, it is possible to upgrade the physical server resources, within a maximum capacity limit per resource and per server, at the beginning of the first time interval. As used herein, the operating cost is defined as the cost of the physical server and server upgrades, including prorated power costs and costs associated with reconfiguring the current LPAR allocation.
The capacity consolidation provided by embodiments of the present invention utilizes a variant of a solution to the bin packing problem, or a variant of a solution to the multidimensional bin packing problem if the number of resource types is greater than one. Although the bin packing and multidimensional bin packing problems can be solved with general purpose integer optimization tools, such tools do not scale well for larger problem sizes. Further, because the bin packing and multidimensional bin packing problems are each known to be a combinatorial NP-hard problem, the most efficient known algorithms use heuristics (e.g., First Fit, First Fit Decreasing, Best Fit, and Best Fit Decreasing) to provide fast and very good, but often non-optimal, solutions. The variants of solutions to the bin packing and multidimensional bin packing problems disclosed herein provide an optimized solution (i.e., an optimized configuration of LPARs and servers that host the LPARs).
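As a point of reference, the classical First Fit Decreasing heuristic mentioned above may be sketched as follows. This is a minimal one-dimensional illustration only (the identifiers are illustrative and not part of the specification), not the shadow-cost variant disclosed herein:

```python
# First Fit Decreasing (FFD): a classical bin packing heuristic. Items are
# sorted largest-first, and each item is placed into the first bin with
# enough remaining capacity; a new bin is opened only when no bin fits.

def first_fit_decreasing(item_sizes, bin_capacity):
    """Pack items into as few fixed-capacity bins as possible (heuristic)."""
    bins = []  # each entry tracks the remaining free capacity of one bin
    for size in sorted(item_sizes, reverse=True):  # largest items first
        for i, free in enumerate(bins):
            if size <= free:
                bins[i] -= size  # place the item in the first bin it fits
                break
        else:
            bins.append(bin_capacity - size)  # open a new bin
    return len(bins)

# Example: LPAR CPU demands packed onto 16-CPU servers.
demands = [8, 7, 6, 5, 4, 3, 2, 1]
print(first_fit_decreasing(demands, 16))  # → 3
```

In this example the heuristic happens to reach the lower bound (36 total CPUs require at least three 16-CPU servers); in general FFD may use more bins than an exact optimization would.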
System for Determining an Optimized Configuration of LPARs and Servers
Calculation component 108 receives the aforementioned configuration data and data from an optimization characteristics data file 114 from one or more computer data storage units (not shown). Optimization characteristics data file 114 may include optimization dimensions and a prioritization of the optimization dimensions. For example, optimization characteristics data file 114 may include the following dimensions: processor, memory, power, footprint (i.e., floor space), and/or equipment cost.
Using calculations provided by calculation component 108, where the calculations are based on the optimization dimensions and the prioritization retrieved from optimization characteristics data file 114, optimized LPAR consolidator 104 generates an optimized configuration 116 of the LPARs and the servers that host the LPARs. Optimized LPAR consolidator 104 sends the optimized configuration 116 to an LPAR migration tool 118. LPAR migration tool 118 manages migration(s) so that the set of LPARs and the set of servers are configured according to the optimized configuration 116. Each migration is a movement of an LPAR from one server to another server. Instructions of program code included in LPAR migration tool 118 may be carried out by computer system 102 or by another computer system (not shown).
In one embodiment, calculation component 108 sends optimized configuration 116 to a non-automated LPAR migration tool 118, which presents the optimized configuration to a user. The user analyzes optimized configuration 116 and specifies migrations via LPAR migration tool 118.
In another embodiment, a post-processing component (not shown) is included in optimized LPAR consolidator 104 and the LPAR migration tool 118 is an automated tool that receives the optimized configuration 116 and automatically performs the migrations necessary to place the LPARs and servers in a configuration that conforms to the optimized configuration. Before the post-processing component sends the optimized configuration 116 to the LPAR migration tool 118, the post-processing component formats the optimized configuration so that the output of system 100 (i.e., the optimized configuration) can be used by the LPAR migration tool.
Processes for Determining an Optimized Configuration of LPARs and Servers
In one embodiment, the configuration data received in step 202 is received by configuration component 106 (see
In one embodiment, the optimization characteristics data received in step 202 is stored in optimization characteristics data file 114 (see
In step 204, LPAR consolidator 104 (see
In one embodiment, step 204 employs a variant of a solution to a bin packing or multidimensional bin packing problem, where the variant utilizes shadow costs determined by a shadow cost function. The shadow cost function and how shadow costs are used to determine an optimal placement of an LPAR in a target server are described below in the Shadow Costs section.
In step 206, LPAR consolidator 104 (see
In step 208, one or more LPARs are migrated (i.e., moved) to one or more servers, where each migration of an LPAR is a movement of the LPAR from a server that hosts the LPAR in the initial configuration to another server that hosts the LPAR in the best enterprise configuration determined in step 204. In step 210, the process of
In step 304, configuration component 106 (see
In step 306, optimized LPAR consolidator 104 (see
In one embodiment, the configuration component 106 (see
In one embodiment, step 306 includes receiving an adjustment of a shadow cost function that weighs the received optimization dimensions according to the received prioritization. Shadow costs and the shadow cost function are described below in the Shadow Costs section.
In step 308, calculation component 108 (see
In step 310, calculation component 108 (see
In step 312, calculation component 108 (see
In step 314, calculation component 108 (see
In one embodiment, the determination of the optimal placement in step 314 is based on a determination of shadow costs calculated by a shadow cost function. The shadow cost function may utilize the optimization characteristics received in step 306 and the configuration data received in steps 302 and 304. In one embodiment, the shadow costs are used to determine net shadow cost savings between a current configuration and a configuration resulting from migrating the LPAR to a target server, thereby indicating an accurate cost of migrating the LPAR to the target server. Shadow costs and the shadow cost function are described below in the discussion of
After step 314, the process of
In step 318, calculation component 108 (see
In step 320, calculation component 108 (see
If calculation component 108 (see
In step 324, calculation component 108 (see
Returning to step 316, if calculation component 108 (see
Returning to step 322, if calculation component 108 (see
Step 326 in
If calculation component 108 (see
If calculation component 108 (see
Iterations of the loop starting at step 308 (see
If calculation component 108 (see
In one embodiment, step 330 also includes the calculation component 108 (see
In step 332, LPAR migration tool 118 migrates (i.e., moves) one or more LPARs to one or more servers, where each migration of an LPAR is a movement of the LPAR from a server that hosts the LPAR in the current configuration to another server that hosts the LPAR in the best enterprise configuration determined and saved in step 330. In step 334, the process of
Determining a Best Enterprise Configuration
In one embodiment, the loops in the process of
The cost of a system configuration can be granular and in some cases changes only when a server can be removed after all the LPARs have been migrated away from it. However, the actual system configuration cost is insufficient to judge the quality of a solution (i.e., a configuration that is a candidate for the best enterprise configuration), because the burden of the cost of a new server is imposed on the first LPAR carried on the new server, and the cost of each subsequent LPAR is only the migration cost of the subsequent LPAR until the capacity of the server is exhausted and a new server is required. In order for the algorithm to converge on an optimal solution, one embodiment of the present invention provides a finer cost function granularity that reflects differentiable solutions, even when their actual system configuration costs (e.g., actual dollar costs) are the same. Therefore, in one embodiment, a shadow cost function is derived where the shadow cost generated by the shadow cost function reflects a cost per LPAR that decreases as the server's capacity is filled. The term “shadow cost” is used to distinguish this cost from the actual system configuration cost of the configuration provided by a solution. The embodiment that employs the shadow cost function discourages inefficiently used servers by increasing the shadow costs of inefficiently used servers, which increases the likelihood of removing LPARs from and/or diverting LPARs from migrating to the inefficiently used servers, so that the inefficiently used servers can eventually be removed. The shadow cost provides an indication of a given server configuration's attractiveness with respect to a given resource utilization.
In one embodiment, the shadow cost function is expressed as the sum of [(resource x allocated to LPAR/resources x in use by the server)*cost of the resource x in use by the server], where the sum is a summation over one or more types of resource x. Resource x may be processor, storage, environmentals, support costs or any other measurable characteristic. The shadow cost function identifies the cost of all resources allocated to the LPAR in proportion to the overall resources in use.
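The stated formula may be sketched directly in code as follows. This is a minimal illustration under the formula above; the function and dictionary names are illustrative assumptions, not identifiers from the specification:

```python
# Shadow cost of one LPAR on a server: for each resource type, the LPAR is
# charged the cost of that resource in use on the server, in proportion to
# the LPAR's share of the resource in use.

def shadow_cost(lpar_alloc, server_in_use, cost_in_use):
    """Sum over resource types of (LPAR allocation / server in-use amount)
    multiplied by the monetary cost of the resource in use on the server.
    All three arguments are dicts keyed by resource type."""
    return sum(
        (lpar_alloc[r] / server_in_use[r]) * cost_in_use[r]
        for r in lpar_alloc
    )

# Example: an LPAR allocated 2 of 8 in-use CPUs ($800,000 of CPU in use)
# and 16 of 80 in-use GB of memory ($400,000 of memory in use).
print(shadow_cost({'cpu': 2, 'memory': 16},
                  {'cpu': 8, 'memory': 80},
                  {'cpu': 800_000, 'memory': 400_000}))  # → 280000.0
```

Note that as the server's in-use amounts grow, each resident LPAR's proportional share (and hence its shadow cost) shrinks, which is what makes well-filled servers attractive under this cost function.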
In one embodiment the processes of
The upfront cost is the sum of the reconfiguration costs on both the new (i.e., target or receiving) server and the old server (a.k.a. current server or initial server; i.e., the server in the current configuration that hosts the LPAR), plus the migration cost of the LPAR from the current server to the new server. The reconfiguration cost is negative or null on the current server j since the reconfiguration involves a reduction of the resource capacity after the LPAR is removed from the current server (e.g., remove a memory module), and conversely the reconfiguration cost is positive or null on the receiving server. The migration cost is (Fj+Tk) if j is the initial server of the LPAR, (−Tj−Fk) if k is the initial server of the LPAR, and (Tk−Tj) in all other cases, where Fj and Fk are the costs of migrating the LPAR from server j and server k, respectively, and where Tj and Tk are the costs of migrating the LPAR to server j and server k, respectively.
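The three migration-cost cases above may be sketched as follows. The function name and dict-based representation are assumptions for illustration; the quantity computed is the migration-cost difference of placing the LPAR on server k rather than on server j:

```python
# Migration-cost case analysis for comparing placement of an LPAR on
# server k versus server j. F[s] is the cost of migrating any LPAR from
# server s; T[s] is the cost of migrating any LPAR to server s;
# `initial` is the server that currently hosts the LPAR.

def migration_cost_delta(initial, j, k, F, T):
    """Difference in migration cost of placing the LPAR on k rather than j."""
    if initial == j:
        return F[j] + T[k]    # the LPAR must actually move off j onto k
    if initial == k:
        return -T[j] - F[k]   # staying on k avoids the move from k to j
    # In all other cases the LPAR leaves `initial` either way, so only the
    # destination's inbound migration cost differs.
    return T[k] - T[j]

F = {'j': 10_000, 'k': 20_000}   # illustrative from-server costs
T = {'j': 3_000, 'k': 7_000}     # illustrative to-server costs
print(migration_cost_delta('j', 'j', 'k', F, T))  # LPAR starts on j → 17000
```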
As used herein, the shadow cost is defined as the cost of used resources shared by an LPAR on a physical server. The shadow cost is further defined to be equal to the sum of the costs in a monetary unit (e.g., dollars) per shared resource (e.g., frame, memory, or CPU), divided by the amount of resources used by the LPAR. The sum of the costs in a shadow cost may also include one or more other costs, such as the resource footprint (floor space costs), the energy costs, the cost of support human resources (HR) requirements, etc.
To one skilled in the art it is readily apparent that additional dimensions can be used in the calculation of the shadow cost. For example, the energy costs of each server can be included in the shadow cost calculation to emphasize preferred servers that are energy efficient. As another example, floor space may be a lower priority dimension, such that the smallest total footprint that meets the other dimension requirements can be calculated.
In step 404, configuration component 106 (see
In step 502, calculation component 108 (see
In step 504, calculation component 108 (see
If calculation component 108 (see
If calculation component 108 (see
Although not shown in
In step 508, calculation component 108 (see
Shadow Cost Example
In the first view 600-1 of the example, two servers, j and k, each have 16 CPUs as available resources, where each CPU costs $100,000. Servers j and k are each equipped with 256 GB of memory, where each GB of memory costs $5,000. The set of LPARs 604 hosted by server j uses a total of 8 CPUs and 80 GB of memory, and the set of LPARs 610 hosted by server k uses a total of 8 CPUs and 112 GB of memory. The costs of the resources used by LPARs 604 are, by characteristic, 8 CPUs*$100,000=$800,000 and 80 GB*$5,000=$400,000. The costs of the resources used by LPARs 610 are, by characteristic, 8 CPUs*$100,000=$800,000 and 112 GB*$5,000=$560,000. Since LPAR x hosted by server j requires 2 CPUs and 16 GB of memory, calculation component 108 (see
If an LPAR y (i.e., an LPAR of the same size as LPAR x) migrates from server j to server k after LPAR x migrates to server k as described above, the shadow cost savings for the LPAR y migration exceeds the aforementioned shadow cost savings for the LPAR x migration. That is, the shadow cost for the LPAR y migration is 2/(8+2+2)*$800,000+16/(112+16+16)*$560,000 and the difference between the shadow cost for the LPAR y migration and the shadow cost of the configuration after the aforementioned LPAR x migration is: (2/(8+2+2)*$800,000+16/(112+16+16)*$560,000)−(2/(8−2)*$800,000+16/(80−16)*$400,000)=−$171,111 (approximately), or a net shadow cost savings of approximately $171,111. The increase in the net shadow cost savings for migrating LPAR y as compared to the net shadow cost savings for migrating LPAR x is because with subsequent migrations of LPARs from server j to server k, the resource utilization improves on server k and deteriorates on server j, thereby making server k more attractive cost-wise than server j.
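The arithmetic of the LPAR y migration above can be reproduced directly (a minimal sketch; the helper name is illustrative):

```python
# Net shadow cost savings for migrating LPAR y (2 CPUs, 16 GB) from server j
# to server k, after LPAR x (same size) has already migrated to k. The CPU
# and memory costs in use are those from the example: server j $800,000 CPU /
# $400,000 memory, server k $800,000 CPU / $560,000 memory.

def shadow(cpu_share, cpu_in_use, cpu_cost, mem_share, mem_in_use, mem_cost):
    return cpu_share / cpu_in_use * cpu_cost + mem_share / mem_in_use * mem_cost

# Shadow cost of y on the receiving server k (after both x and y arrive):
cost_on_k = shadow(2, 8 + 2 + 2, 800_000, 16, 112 + 16 + 16, 560_000)
# Shadow cost of y on the current server j (after x has already left):
cost_on_j = shadow(2, 8 - 2, 800_000, 16, 80 - 16, 400_000)

print(round(cost_on_j - cost_on_k))  # net shadow cost savings → 171111
```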
To one skilled in the art it is readily apparent that additional dimensions can easily be added to the example depicted in
Integer Linear Programming Formulation
The present invention may utilize a technique for solving a multi-dimensional bin packing problem, where each dimension corresponds to a different resource type. In one embodiment, the Integer Linear Programming (ILP) formulation that may be used to determine an optimized configuration of LPARs and the servers that host the LPARs is as follows:
Given (inputs, constants):
λ Set of LPARs (1)
φ Set of host (physical) server frames (2)
τ Set of time intervals over which optimization is being performed (3)
ρ Set of resource types (e.g., CPU, memory, power, floor space, equipment cost (lease), etc.) (4)
μ_jr Set of possible resource configurations for resource of type r on server j ∈ φ. One of the configurations is the initial configuration, which has cost 0. The costs of other configurations are relative to the initial configuration and can be negative or positive. See X_u presented below. (5)
α_rijt Normalized requirement of resource type r ∈ ρ of LPAR i ∈ λ on server j ∈ φ during time interval t ∈ τ. (6)
a_ij 1 if LPAR i ∈ λ is initially on server j ∈ φ, and 0 otherwise. (7)
F_j cost of migrating any LPAR from host j ∈ φ (8)
T_j cost of migrating any LPAR to host j ∈ φ (9)
M_ij From (6)-(9), the cost of migrating LPAR i ∈ λ to server j ∈ φ is deduced. This cost applies if and only if a_ij = 0 and x_ij = 1. That is, the LPAR has to be moved to incur a migration cost. Therefore:
M_ij = 0, ∀i ∈ λ, ∀j ∈ {φ | a_ij = 1}
M_ij = T_j + F_k, ∀(i,j,k) ∈ {λ×φ×φ | a_ij = 0 ∧ a_ik = 1} (10)
C_j operating cost of server j ∈ φ (11)
X_u cost of configuring resource type r ∈ ρ with u ∈ μ_jr on server j ∈ φ (12)
R_u normalized size of resource r ∈ ρ using configuration u ∈ μ_jr on server j ∈ φ (13)
W_jr normalized reserved size of resource type r ∈ ρ on server j ∈ φ (14). This resource is reserved for future growth on server j.
Find (output, variables):
x_ij 1 if LPAR i ∈ λ is (re)located on server j ∈ φ, and 0 otherwise (15)
y_u 1 if resource r ∈ ρ is configured with u ∈ μ_jr on server j ∈ φ, and 0 otherwise (16)
m_j 1 if server j ∈ φ is used by at least one LPAR, and 0 otherwise. (17)
That minimizes:
Such that:
Explanations for parameters listed above for the ILP include:
In the ILP formulation presented above, all resource requirements and capacities are normalized. For instance, the CPU normalization is computed according to a performance ranking value (e.g., a Relative Performance Estimate 2 value provided by Ideas International located in Hornsby, Australia) of each server j. If some servers do not support fractional resource sizes, their sizes are rounded up to the next supported fraction on that server. The formulation uses server-dependent resource utilizations α_rijt for each LPAR, allowing the expression of server-specific capabilities.
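The assignment core of the formulation (choosing x_ij to minimize server operating costs C_j plus migration costs M_ij, subject to per-resource capacity) can be illustrated with a brute-force search over a toy instance. This is a deliberate simplification that omits the resource-upgrade variables y_u and multiple time intervals; all names and data below are invented for illustration:

```python
# Brute-force sketch of the placement core of the ILP: enumerate every
# assignment of LPARs to servers, discard infeasible ones (capacity), and
# keep the cheapest (operating cost of used servers + migration costs).
from itertools import product

RESOURCES = ('cpu', 'mem')

def solve(lpars, servers, capacity, demand, C, M):
    """Exhaustively find the cheapest feasible placement (tiny inputs only)."""
    best_cost, best_assign = float('inf'), None
    for assign in product(servers, repeat=len(lpars)):
        # Capacity check: total demand per server and resource type.
        used = {(j, r): 0 for j in servers for r in RESOURCES}
        for i, j in zip(lpars, assign):
            for r in RESOURCES:
                used[(j, r)] += demand[i][r]
        if any(used[(j, r)] > capacity[j][r]
               for j in servers for r in RESOURCES):
            continue
        cost = sum(C[j] for j in set(assign))                  # Σ_j C_j·m_j
        cost += sum(M[(i, j)] for i, j in zip(lpars, assign))  # Σ_ij M_ij·x_ij
        if cost < best_cost:
            best_cost, best_assign = cost, dict(zip(lpars, assign))
    return best_cost, best_assign

# Toy instance: three LPARs, two servers.
lpars = ['a', 'b', 'c']
servers = ['s1', 's2']
capacity = {'s1': {'cpu': 4, 'mem': 8}, 's2': {'cpu': 4, 'mem': 8}}
demand = {'a': {'cpu': 2, 'mem': 4}, 'b': {'cpu': 2, 'mem': 4},
          'c': {'cpu': 2, 'mem': 2}}
C = {'s1': 100, 's2': 120}           # operating cost per server
M = {(i, j): 0 if j == 's1' else 10  # migration cost per LPAR and server
     for i in lpars for j in servers}
best_cost, best_assign = solve(lpars, servers, capacity, demand, C, M)
print(best_cost, best_assign)
```

The enumeration is exponential in the number of LPARs, which is precisely why the disclosed approach relies on heuristics guided by shadow costs rather than exact search for realistic problem sizes.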
Computer System
Memory 704 may comprise any known computer readable storage medium, which is described below. In one embodiment, cache memory elements of memory 704 provide temporary storage of at least some program code (e.g., program code 714) in order to reduce the number of times code must be retrieved from bulk storage while instructions of the program code are carried out. Moreover, similar to CPU 702, memory 704 may reside at a single physical location, comprising one or more types of data storage, or be distributed across a plurality of physical systems in various forms. Further, memory 704 can include data distributed across, for example, a local area network (LAN) or a wide area network (WAN).
I/O interface 706 comprises any system for exchanging information to or from an external source. I/O devices 710 comprise any known type of external device, including a display device (e.g., monitor), keyboard, mouse, printer, speakers, handheld device, facsimile, etc. Bus 708 provides a communication link between each of the components in computer system 700, and may comprise any type of transmission link, including electrical, optical, wireless, etc.
I/O interface 706 also allows computer system 700 to store and retrieve information (e.g., data or program instructions such as program code 714) from an auxiliary storage device such as computer data storage unit 712 or another computer data storage unit (not shown). Computer data storage unit 712 may comprise any known computer readable storage medium, which is described below. For example, computer data storage unit 712 may be a non-volatile data storage device, such as a magnetic disk drive (i.e., hard disk drive) or an optical disc drive (e.g., a CD-ROM drive which receives a CD-ROM disk).
Memory 704 may include computer program code 714 that provides the logic for determining an optimized configuration of LPARs and servers that host the LPARs (e.g., the process of
Memory 704, storage unit 712, and/or one or more other computer data storage units (not shown) that are coupled to computer system 700 may store configuration data file 112 (see
As will be appreciated by one skilled in the art, the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system” (e.g., system 100 in
Any combination of one or more computer readable medium(s) (e.g., memory 704 and computer data storage unit 712) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium (i.e., computer readable storage device). A computer readable storage device may be, for example, but not limited to, an electronic, magnetic, electromagnetic, or semiconductor system, apparatus, device or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage device includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage device may be any tangible device that can contain or store a program (e.g., program 714) for use by or in connection with a system, apparatus, or device for carrying out instructions. Each of the terms “computer readable storage device” and “computer readable storage medium” does not include a signal propagation medium such as a copper cable, optical fiber or a wireless transmission medium.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a system, apparatus, or device for carrying out instructions.
Program code (e.g., program code 714) embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code (e.g., program code 714) for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java®, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. Instructions of the program code may be carried out entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server, where the aforementioned user's computer, remote computer and server may be, for example, computer system 700 or another computer system (not shown) having components analogous to the components of computer system 700 included in
Aspects of the present invention are described herein with reference to flowchart illustrations (e.g.,
These computer program instructions may also be stored in a computer readable medium (e.g., memory 704 or computer data storage unit 712) that can direct a computer (e.g., computer system 700), other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer (e.g., computer system 700), other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the instructions which are carried out on the computer, other programmable apparatus, or other devices provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Any of the components of an embodiment of the present invention can be deployed, managed, serviced, etc. by a service provider that offers to deploy or integrate computing infrastructure with respect to the process of determining an optimal configuration of LPARs and servers that host the LPARs. Thus, an embodiment of the present invention discloses a process for supporting computer infrastructure, comprising integrating, hosting, maintaining and deploying computer-readable code (e.g., program code 714) into a computer system (e.g., computer system 700), wherein the code in combination with the computer system is capable of performing a process of determining an optimal configuration of LPARs and servers that host the LPARs.
In another embodiment, the invention provides a business method that performs the process steps of the invention on a subscription, advertising and/or fee basis. That is, a service provider, such as a Solution Integrator, can offer to create, maintain, support, etc. a process of determining an optimal configuration of LPARs and servers that host the LPARs. In this case, the service provider can create, maintain, support, etc. a computer infrastructure that performs the process steps of the invention for one or more customers. In return, the service provider can receive payment from the customer(s) under a subscription and/or fee agreement, and/or the service provider can receive payment from the sale of advertising content to one or more third parties.
The flowcharts in
While embodiments of the present invention have been described herein for purposes of illustration, many modifications and changes will become apparent to those skilled in the art. Accordingly, the appended claims are intended to encompass all such modifications and changes as fall within the true spirit and scope of this invention.
Hill et al.; Storage & Servers; Network Computing; CMP Media Inc.; vol. 17, No. 26; Dec. 2006; pp. 39-46. |
Umeno et al.; Development of Methods for Reducing the Spins of Guest Multiprocessors; Transactions of the Information Processing Society of Japan; vol. 36, No. 3; Mar. 1995; pp. 681-696. |
Office Action (Mail Date Nov. 16, 2009) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Amendment filed Jan. 21, 2010 in response to Office Action (Mail Date Nov. 16, 2009) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Final Office Action (Mail Date Mar. 29, 2010) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Notice of Appeal filed Jun. 3, 2010 in response to Final Office Action (Mail Date Mar. 29, 2010) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Appeal Brief filed Jul. 12, 2010 in response to Final Office Action (Mail Date Mar. 29, 2010) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Office Action (Mail Date Oct. 7, 2010) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Amendment filed Jan. 7, 2011 in response to Office Action (Mail Date Oct. 7, 2010) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Office Action (Mail Date May 12, 2011) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Amendment filed Aug. 12, 2011 in response to Office Action (Mail Date May 12, 2011) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Final Office Action (Mail Date Dec. 14, 2011) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Notice of Appeal filed Feb. 13, 2012 in response to Final Office Action (Mail Date Dec. 14, 2011) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Appeal Brief filed Feb. 23, 2012 in response to Final Office Action (Mail Date Dec. 14, 2011) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Notice of Allowance (Mail Date Apr. 5, 2012) for U.S. Appl. No. 11/960,629, filed Dec. 19, 2007. |
Office Action (Mail Date Aug. 18, 2011) for U.S. Appl. No. 12/046,759, filed Mar. 12, 2008. |
Amendment filed Nov. 18, 2011 in response to Office Action (Mail Date Aug. 18, 2011) for U.S. Appl. No. 12/046,759, filed Mar. 12, 2008. |
Final Office Action (Mail Date Jan. 20, 2012) for U.S. Appl. No. 12/046,759, filed Mar. 12, 2008. |
Amendment After Final filed Apr. 4, 2012 in response to Final Office Action (Mail Date Jan. 20, 2012) for U.S. Appl. No. 12/046,759, filed Mar. 12, 2008. |
Advisory Action (Mail Date Apr. 4, 2012) for U.S. Appl. No. 12/046,759, filed Mar. 12, 2008. |
Notice of Appeal filed Apr. 10, 2012 in response to Advisory Action (Mail Date Apr. 4, 2012) for U.S. Appl. No. 12/046,759, filed Mar. 12, 2008. |
Appeal Brief filed Jun. 11, 2012 in response to Advisory Action (Mail Date Apr. 4, 2012) for U.S. Appl. No. 12/046,759, filed Mar. 12, 2008. |
Notice of Allowance (Mail Date Aug. 20, 2012) for U.S. Appl. No. 12/046,759, filed Mar. 12, 2008. |
Number | Date | Country
---|---|---
20110106922 A1 | May 2011 | US