Systems and methods for double hulled virtualization operations

Information

  • Patent Grant
  • Patent Number
    8,793,688
  • Date Filed
    Friday, March 15, 2013
  • Date Issued
    Tuesday, July 29, 2014
Abstract
A method for storing and processing data includes providing an operating system (OS) virtualization running on a processor and having a plurality of containers. Each container may prevent privilege escalation by a user to an administrator of a global zone running the OS virtualization. The method may also include providing a hardware virtual machine (HVM) for the user, the HVM encapsulated in one of the containers. A system for storing and processing data is provided that includes an operating system (OS) virtualization stored in a memory and running on a processor. The OS virtualization has a plurality of containers, and each container prevents privilege escalation by a user to an administrator of a global zone running the OS virtualization. The system also includes an HVM for the user, which may be encapsulated in one of the containers. A non-transitory computer readable storage medium having a program recorded thereon is also provided.
Description
FIELD OF THE INVENTION

The present invention relates to systems and methods for virtualization infrastructure of a cloud computing environment. More particularly, the present invention relates to a system and method for double hulled virtualization operations.


BACKGROUND

Cloud infrastructure, including storage and processing, is an increasingly important resource for businesses and individuals. Using a cloud infrastructure enables businesses to outsource all or substantially all of their information technology (IT) functions to a cloud service provider. Businesses using a cloud service provider benefit from increased expertise supporting their IT function, higher capability hardware and software at lower cost, and ease of expansion (or contraction) of IT capabilities.


Monitoring a cloud infrastructure is an important function of cloud service providers, and continuity of function is an important selling point for cloud service providers. Downtime due to malware or other failures should be avoided to ensure customer satisfaction. Cloud infrastructure monitoring conventionally includes network packet sniffing, but this is impractical as a cloud infrastructure scales up. Alternatively, host-based systems conventionally collect and aggregate information regarding processes occurring within the host.


SUMMARY OF THE INVENTION

According to exemplary embodiments, the present technology provides a method for storing and processing data. The method may include providing an operating system (OS) virtualization running on a processor and having a plurality of containers. Each container may prevent privilege escalation by a user to an administrator of a global zone running the OS virtualization. The method may also include providing a hardware virtual machine (HVM) for the user, the HVM encapsulated in one of the containers.


The method may include eliminating code paths directed from within each container to outside each container. The method may also include limiting access by the user associated with the HVM to the one container encapsulating the HVM. The method may further include limiting operations of the user within the container to instantiating another HVM.


The method may include configuring the HVM by a quick emulator (QEMU) to limit access by the user via a virtual network interface card (VNIC) to the container encapsulating the HVM. The method may also include preventing the user from changing the VNIC, and limiting actions of the user within the HVM by limiting privileges of the user at instantiation of the HVM by the QEMU.


Resource control of the OS virtualization may be inherited by the HVM. The HVM may access a storage volume for the user via a virtual network interface card (VNIC) or via a virtual disk controller (VDC). Input/output may be dynamically throttled for the HVM by the OS virtualization, and processor scheduling may be performed for the HVM by the OS virtualization.


The method may include providing a debug module in the global zone hosting the OS virtualization. The debug module may be adapted to monitor input/output of the container. The debug module may be adapted to observe virtual register states of the HVM.


The method may include throttling input/output of the HVM by an administrator of the global zone.


A system for storing and processing data is provided that includes an operating system (OS) virtualization stored in a memory and running on a processor. The OS virtualization has a plurality of containers, and each container prevents privilege escalation by a user to an administrator of a global zone running the OS virtualization. The system also includes a hardware virtual machine (HVM) for the user. The HVM may be encapsulated in one of the containers.


A non-transitory computer readable storage medium having a program recorded thereon is provided. The program, when executed, causes a computer to perform a method for storing and processing data.


These and other advantages of the present technology will be apparent when reference is made to the accompanying drawings and the following description.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A and 1B are diagrams illustrating an exemplary embodiment of a global zone.



FIG. 2 is a system level diagram illustrating an exemplary embodiment of a compute/storage server and datacenter administrator.



FIG. 3 is a diagram illustrating an exemplary embodiment of a cloud-based data storage and processing system.



FIG. 4 is a flow chart illustrating an exemplary method.



FIG. 5 is a schematic of a computer system according to an exemplary embodiment.



FIG. 6 is a graphical user interface of an exemplary embodiment of a guest monitoring program.





DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

While this technology is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the technology and is not intended to limit the technology to the embodiments illustrated.


A hardware virtual machine (HVM), also generally referred to as a virtual machine or a hypervisor, is used to emulate a computer for a guest within a host computer system. Virtualization of many features of a motherboard is possible. A hardware virtualization may emulate many motherboard properties by simulating chips, including timers, interrupt controllers, disk controllers, and network controllers.


Virtual machines are useful for cloud providers to enable customers to operate a guest computer within a cloud processing environment. Various specific types of virtual machines exist, including KVM, Xen, and Zones. Containers (also referred to as Zones or jails) are a feature of some virtual machines, for example, an operating system (OS) virtualization. Some virtual machines have better operating features than others. Containers provide good security and resource control (for example, input/output (I/O), network connectivity, and central processing unit (CPU) scheduling). The present technology integrates two virtual machines to access the best features of both, while simultaneously producing a secure and observable virtual machine.


An exemplary method for storing and processing data is provided that includes providing a network connectivity component of a zone-based virtualization system. The exemplary method also includes providing a processing component of a virtual machine. The processing component of the virtual machine accesses the network connectivity component of the zone-based virtualization system for input/output operations.


In an exemplary embodiment, KVM is inserted into a Zone. The I/O path for Zones (which may be based on ZFS (Zettabyte File System) volumes) may be preferred over the I/O features of KVM, and therefore, I/O for Zones is used in the exemplary embodiment. KVM may fully emulate a motherboard, and may therefore have the advantage of providing a guest with an environment that does not require modification of guest software. By combining the two virtual machines in this manner, the best of both may be obtained to provide improved scalability and observability, without undue negative consequences. This is an unexpected result, since doubling virtualizations intuitively suggests a slower and more cumbersome operating system. Some hypervisors, such as KVM, handle I/O and memory by representing a physical machine within the virtual machine, which may be inefficient. By taking away the elements of the KVM virtualization that are less efficient, and instead using an OS virtualization for these elements, a preferred implementation is possible.


The nesting or encapsulating of one virtual machine (e.g., KVM) within another (e.g., Zones), while stripping away any redundancy, may optimize the result. In this manner, each virtualization level does not need to create an abstraction of the bare metal level, but may instead rely on abstractions made by a lower, or earlier-instantiated, level. The exemplary method of nesting virtual machines may include identifying ideal elements for each level, and inheriting the remaining abstractions from a level below. Therefore, using the exemplary method, nesting or layering more than two virtual machines for triple (or more) hulled security may be possible.
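
This layered inheritance may be modeled with a minimal Python sketch: each level registers only the facilities it implements best, and any lookup falls through to the level below. The facility names here are illustrative assumptions, not an actual interface.

```python
from collections import ChainMap

# Illustrative only: each mapping lists the facilities one layer provides.
# A lookup falls through to the layer below, mirroring how a nested
# virtualization level can inherit abstractions it does not override.
bare_metal = {"cpu": "physical cores", "disk": "local disks", "net": "physical NIC"}
os_virtualization = {"disk": "ZFS volume", "net": "VNIC", "scheduling": "fair-share"}
hvm = {"guest_abi": "emulated motherboard (QEMU/KVM)"}

# The HVM layer resolves first, then the container, then bare metal.
stack = ChainMap(hvm, os_virtualization, bare_metal)

print(stack["guest_abi"])  # provided by the HVM itself
print(stack["disk"])       # inherited from the OS virtualization (ZFS volume)
print(stack["cpu"])        # inherited from bare metal
```

Adding a third hull would amount to chaining one more mapping in front, which is why the approach extends naturally beyond two levels.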


Resource controls for a virtual machine include CPU, disk, and network. Disk I/O throttling may be dynamically controlled by a Zone, and other controls in the KVM environment may be inherited from the OS virtualization and/or the motherboard. A Zone limits device controls and presents itself as a complete operating system, without a kernel. Containers are not a process themselves, but a way for the kernel to group processes. Unexpected results from the integration of KVM and OS virtualization include conventional CPU scheduling and other resource controls being inherited by KVM from the OS virtualization.
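
One common way to realize dynamically adjustable I/O throttling is a token bucket; the Python sketch below assumes that approach purely for illustration and is not the actual Zone mechanism. The point of interest is that the rate can be retuned while the guest keeps running.

```python
import time

class TokenBucketThrottle:
    """Token-bucket I/O throttle whose rate can be changed at run time,
    mirroring the dynamically controlled throttling described above."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def set_rate(self, rate_bytes_per_sec):
        # Dynamic control: the administrator can retune a guest's I/O
        # rate without restarting the guest.
        self.rate = rate_bytes_per_sec

    def throttle(self, nbytes):
        # Refill tokens for elapsed time, then block until the requested
        # I/O size is covered. Requests larger than the burst size would
        # need to be split; omitted for brevity.
        assert nbytes <= self.burst
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Example: limit a guest's disk writes to ~10 MB/s with a 1 MB burst.
bucket = TokenBucketThrottle(10 * 1024 * 1024, 1024 * 1024)
bucket.throttle(512 * 1024)  # charge a 512 KB write before issuing it
```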


Some exemplary changes enabling KVM to run inside Zones include modification of the QEMU (Quick Emulator) that emulates a CPU during instantiation of the KVM instance. In particular, the interface between QEMU and the virtual network interface card (VNIC) prevents the KVM guest from spoofing media access control (MAC) and Internet Protocol (IP) addresses and from modifying the networking stack. Each VNIC is assigned to a Zone, and operates like a physical hardware switching port. The QEMU process is run to set up a virtual machine. The VNIC is modified according to an exemplary embodiment to prevent a guest from changing the properties of the VNIC. If modification is attempted, network packets may be dropped. In this manner, the exemplary embodiment passes only the packets to and/or from the Zone having the correct MAC and IP addresses, thereby preventing packet paths from mixing. ZFS is a file management system, which QEMU accesses to address file storage in exemplary embodiments.
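
The anti-spoofing behavior may be sketched as a simple admission test: an outbound frame is forwarded only if its source MAC and IP match the addresses fixed at VNIC creation. The Python model below is illustrative only; the field names and addresses are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vnic:
    # The MAC and IP are fixed at VNIC creation; the guest cannot change them.
    mac: str
    ip: str

def admit_frame(vnic: Vnic, src_mac: str, src_ip: str) -> bool:
    """Forward an outbound frame only if its source MAC and IP match the
    addresses assigned to the zone's VNIC; anything else is dropped,
    defeating spoofing attempts."""
    return src_mac == vnic.mac and src_ip == vnic.ip

vnic = Vnic(mac="02:08:20:aa:bb:cc", ip="10.0.0.5")
print(admit_frame(vnic, "02:08:20:aa:bb:cc", "10.0.0.5"))  # True: forwarded
print(admit_frame(vnic, "02:08:20:de:ad:00", "10.0.0.5"))  # False: dropped
```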


The process flow for instantiation of a KVM guest inside a Zone according to an exemplary embodiment of the present technology includes the global zone (also referred to as the kernel) setting up a Zone. Zones can launch processes, including a virtual machine. After a container is set up, a QEMU process is started to provide an HVM guest. Every action in the OS virtualization requires a privilege. At launch of QEMU, privileges are stripped away, and the exemplary KVM brand provides these properties to control a master spawning process. Even if a breakout (due to, for example, a UNIX vulnerability) from KVM to QEMU is accomplished, the QEMU process cannot execute any new processes, since every action in the Zone requires a privilege. A QEMU guest does not have access to any new devices and cannot create additional KVM guests even if there is a breakout.
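
The privilege stripping at QEMU launch resembles the familiar POSIX drop-privileges-then-exec pattern, sketched below in Python. The platform described here uses fine-grained zone privilege sets rather than plain user and group IDs, so this is only an analogue, and the QEMU command line shown is hypothetical.

```python
import os

def spawn_unprivileged(argv, uid, gid):
    """Fork and exec a process after irrevocably dropping privileges,
    analogous to stripping privileges at QEMU launch. (Only a coarse
    POSIX analogue of zone privilege sets; must start with privilege
    in order to drop it.)"""
    pid = os.fork()
    if pid == 0:
        os.setgroups([])          # drop supplementary groups first
        os.setgid(gid)            # group before user, or setgid would fail
        os.setuid(uid)            # after this, root cannot be regained
        os.execvp(argv[0], argv)  # replace the child image with QEMU
    return pid

# Hypothetical invocation: launch a QEMU guest as an unprivileged user.
# spawn_unprivileged(["qemu-system-x86_64", "-m", "1024"], uid=1001, gid=1001)
```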


Significantly, no privilege escalation from the Zone to the global zone is possible, since no code path exists for promoting a user within a Zone to be an administrator of the global zone. The container can set up the processes of QEMU, and only a few code paths exist crossing the container boundary. Further, all of the code paths are one-directional into the container. The kernel is designed to only allow changes from the global zone into a zone, while preventing any action within a Zone from impacting the global zone.
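
A minimal model of this one-directional boundary is a guard that allows the global zone to act on a non-global zone while rejecting every path in the opposite direction. The Python sketch below illustrates the rule itself, not the kernel's actual enforcement mechanism.

```python
class BoundaryError(PermissionError):
    pass

def cross_boundary(caller_zone: str, target_zone: str, action):
    """Enforce the one-directional rule: the global zone may act on a
    non-global zone, but no non-global zone may act outward, whether
    on the global zone or on a peer."""
    if caller_zone != "global":
        raise BoundaryError("no code path leads out of a non-global zone")
    return action(target_zone)

print(cross_boundary("global", "zone1", lambda z: f"configured {z}"))  # allowed
# cross_boundary("zone1", "global", ...) would raise BoundaryError
```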


A debug module, for example DTrace, may be software for identifying network and processor activity. DTrace can monitor operations inside a Zone, and can determine state data for a virtual register of a virtual machine. Using DTrace or another appropriate debug module, an administrator can profile a guest while the HVM is running, without the guest knowing. DTrace can dynamically observe traffic over a VNIC.
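
As one concrete example of such observation, a global-zone administrator could aggregate system calls per zone: `zonename` is a built-in D variable, and a `tick` probe ends the sampling. The Python wrapper below simply invokes the dtrace command and assumes it runs with DTrace privileges in the global zone.

```python
import subprocess

# Count system calls per zone for five seconds, then exit. The guest
# is never aware that it is being profiled.
D_SCRIPT = "syscall:::entry { @[zonename] = count(); } tick-5sec { exit(0); }"

def sample_syscalls_by_zone() -> str:
    """Run the D script from the global zone and return DTrace's
    aggregated per-zone syscall counts."""
    result = subprocess.run(["dtrace", "-n", D_SCRIPT],
                            capture_output=True, text=True, check=True)
    return result.stdout

# print(sample_syscalls_by_zone())
```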



FIG. 1A illustrates an exemplary embodiment of global zone 100. Global zone 100 is managed by an administrator. Within global zone 100 is storage module 150 including disks 170. Alternatively, disks 170 may be any other appropriate form of persistent memory. Storage module 150 may be operated based on ZFS volumes. The administrator may create OS virtualization 110 for use by a customer, and OS virtualization 110 may be provided with virtual network interface card (VNIC) 130 for communicating outside OS virtualization 110. QEMU 120 may be started for the purpose of creating hardware virtual machine 140 for the customer within OS virtualization 110. QEMU 120 may strip away privileges of hardware virtual machine 140 during instantiation, and may have no function other than creating hardware virtual machine 140. Hardware virtual machine 140 may include network interface card 160 for communicating input/output data to storage module 150 or any other network. Hardware virtual machine 140 may include an emulated disk controller 165 for communicating input/output data to storage module 150. VNIC 130 is the gateway for all network traffic from the emulated network interface card 160. DTrace 180, or any other appropriate visualization and/or debug module, may be used by an administrator in global zone 100 to monitor hardware virtual machine 140. DTrace 180 may be used to monitor network traffic, virtual registers, and/or other processes operating on behalf of hardware virtual machine 140. DTrace 180 may operate without the knowledge of the customer or a user operating hardware virtual machine 140.



FIG. 1B illustrates an exemplary embodiment of global zone 100. Global zone 100 may run on kernel 190, which may be run by an administrator. The administrator may create OS virtualizations 110, 112, 114, 116 and 118 (or more), for use by one or more customers. OS virtualizations 110, 112, 114, 116 and 118 may be provided with virtual network interface cards (VNICs) 130, 132, 134, 136 and 138, respectively, for controlling communications with kernel 190. OS virtualizations 110, 112, 114, 116 and 118 encapsulate hardware virtual machines (HVMs) 140, 142, 144, 146 and 148, respectively. Each of hardware virtual machines 140, 142, 144, 146 and 148 is a different, exemplary type of virtual machine, which may be selected by a customer from a library of possible virtual machines prior to instantiation. Hardware virtual machine 140 is a Windows machine, hardware virtual machine 142 is a Linux machine, hardware virtual machine 144 runs a Java virtual machine (JVM) application, hardware virtual machine 146 runs a database application, and hardware virtual machine 148 runs a node.js application. The encapsulation of hardware virtual machines 140, 142, 144, 146 and 148 by OS virtualizations 110, 112, 114, 116 and 118 creates a double hulled security that prevents mischievous conduct by a customer or guest within a cloud system. Escape or breakout from any of hardware virtual machines 140, 142, 144, 146 and 148 only provides access to OS virtualizations 110, 112, 114, 116 and 118, respectively. Further, OS virtualizations 110, 112, 114, 116 and 118 all provide a secure area that prevents privilege escalation by a user to be an administrator and/or to access global zone 100 or kernel 190.



FIG. 2 is a system level diagram illustrating datacenter 200 including compute/storage servers 100 and 210, and administrator terminal 220. Administrator terminal 220 may be used to control all or a portion of datacenter 200, and/or may be used to operate multiple datacenters. Administrator terminal 220 may communicatively couple with hardware virtual machine 140 of compute/storage server 100, and/or may monitor the operations of OS virtualization 110 and/or storage module 150. In this manner, the present technology enables a datacenter administrator to observe operations of compute and storage to a degree that was previously impossible. In particular, the internal processes of hardware virtual machines 140, 142, 144, 146 and 148 may be identified and visualized, and may be correlated with input/output operations of storage module 150.



FIG. 3 illustrates cloud-based data storage and processing system 300. Cloud-based data storage and processing system 300 includes datacenter 200 communicatively coupled to network 310. Network 310 may be a wide-area network (WAN), a local area network (LAN), the internet, or any other appropriate network. Customers may access cloud-based data storage and processing system 300 by using any of customer terminal 320, customer laptop 330, and/or customer personal computer 340 (or the like) to access network 310.



FIG. 4 illustrates method 400 according to the present technology. Method 400 proceeds from a start oval to operation 410, which indicates to provide an operating system (OS) virtualization having containers. In operation 410, each container prevents privilege escalation by a user to an administrator of a global zone running the OS virtualization. From operation 410, the flow proceeds to operation 420, which indicates to provide a hardware virtual machine (HVM) for the user, the HVM encapsulated in a container. From operation 420, the flow optionally proceeds to operation 430, which indicates to eliminate code paths directed from within each container to outside each container. From operation 430, the flow optionally proceeds to operation 440, which indicates to limit access by the user associated with the HVM to the container encapsulating the HVM, and limit operations of the user within the container to instantiating another HVM. From operation 440, the flow proceeds to end oval 450.



FIG. 5 illustrates an exemplary computing system 500 that may be used to implement an embodiment of the present technology. For example, global zone 100, kernel 190, administrator terminal 220, network 310, customer terminal 320, customer laptop 330 and/or customer personal computer 340 may be implemented by one or more of the components of computing system 500. Additionally or alternatively, computing system 500 may be used to implement method 400 of FIG. 4. The computing system 500 of FIG. 5 includes one or more processors 510 and memory 520. Memory 520 stores, in part, instructions and data for execution by the one or more processors 510. Memory 520 can store the executable code when the computing system 500 is in operation. The computing system 500 of FIG. 5 may further include a mass storage 530, portable storage 540, output devices 550, input devices 560, a graphics display 570, and other peripheral device(s) 580.


The components shown in FIG. 5 are depicted as being connected via a single bus 590. The components may be connected through one or more data transport means. The one or more processors 510 and memory 520 may be connected via a local microprocessor bus, and the mass storage 530, peripheral device(s) 580, portable storage 540, and graphics display 570 may be connected via one or more input/output (I/O) buses.


Mass storage 530, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor 510. Mass storage 530 can store the system software for implementing embodiments of the present technology for purposes of loading that software into memory 520.


Portable storage 540 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disc or digital video disc, to input and output data and code to and from the computing system 500 of FIG. 5. The system software for implementing embodiments of the present technology may be stored on such a portable medium and input to the computing system 500 via the portable storage 540.


Input devices 560 provide a portion of a user interface. Input devices 560 may include an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. Additionally, the computing system 500 as shown in FIG. 5 includes output devices 550. Suitable output devices include speakers, printers, network interfaces, and monitors.


Graphics display 570 may include a liquid crystal display (LCD) or other suitable display device. Graphics display 570 receives textual and graphical information, and processes the information for output to the display device.


Peripheral device(s) 580 may include any type of computer support device to add additional functionality to the computing system. Peripheral device(s) 580 may include a modem or a router.


The components contained in the computing system 500 of FIG. 5 are those typically found in computing systems that may be suitable for use with embodiments of the present technology and are intended to represent a broad category of such computer components that are well known in the art. Thus, the computing system 500 of FIG. 5 can be a personal computer, handheld computing system, telephone, mobile computing system, workstation, server, minicomputer, mainframe computer, or any other computing system. The computer can also include different bus configurations, networked platforms, multi-processor platforms, etc. Various operating systems can be used including SmartOS, UNIX, Linux, Windows, Macintosh OS, Palm OS, and other suitable operating systems.



FIG. 6 is graphical user interface 600 of an exemplary embodiment of a guest monitoring program. The guest monitoring program may be DTrace, a debugging module, or any other appropriate monitoring software. Graphical user interface 600 may indicate target information 610, for instance a virtual machine (by name and/or type) and/or a process (for example, I/O operations or register states). Filtering toggles 620 may enable an administrator using the guest monitoring program to filter the data, for example to include or exclude either “read” or “write” in an analytic view of I/O operations. Data context 630, for example an x-axis identifier and/or a scale indication, may be provided to give additional context to the data displayed. Display area 640 may be used to visualize data and may include different colors, intensities, shapes and positions to indicate different data elements.


The above description is illustrative and not restrictive. Many variations of the technology will become apparent to those of skill in the art upon review of this disclosure. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.

Claims
  • 1. A method for storing and processing data, comprising: providing an operating system (OS) virtualization running on a processor and having a plurality of containers, one or more containers preventing privilege escalation by a user to an administrator of a global zone running the OS virtualization;providing a hardware virtual machine (HVM) for the user, the HVM encapsulated in one of the one or more containers;limiting access by the user associated with the HVM to the one of the one or more containers encapsulating the HVM; andlimiting operations of the user within the one of the one or more containers to instantiating another HVM.
  • 2. The method of claim 1, further comprising eliminating code paths directed from within the one of the one or more containers to outside the one of the one or more containers.
  • 3. The method of claim 1 further comprising configuring the HVM by a quick emulator (QEMU) to limit access by the user via a virtual network interface card (VNIC) to the one of the one or more containers encapsulating the HVM.
  • 4. The method of claim 3, further comprising: preventing the user from changing the VNIC; andlimiting actions of the user within the HVM by limiting privileges of the user at an instantiation of the HVM by the QEMU.
  • 5. The method of claim 1, wherein limited resource control of the OS virtualization is inherited by the HVM.
  • 6. The method of claim 5, wherein: the HVM accesses at least one storage volume for the user via at least one of a virtual network interface card (VNIC) and a virtual disk controller (VDC);input/output is dynamically throttled for the HVM by the OS virtualization; andprocessor scheduling is performed for the HVM by the OS virtualization.
  • 7. The method of claim 1, further comprising providing a debug module in the global zone hosting the OS virtualization, the debug module adapted to monitor input/output of the one of the one or more containers.
  • 8. The method of claim 7, wherein the debug module is adapted to observe virtual register states of the HVM.
  • 9. The method of claim 1, further comprising throttling input/output of the HVM by an administrator of the global zone.
  • 10. A system for storing and processing data, comprising: an operating system (OS) virtualization stored in a memory and running on a processor, the OS virtualization having a plurality of containers, one or more containers preventing privilege escalation by a user to an administrator of a global zone running the OS virtualization;a hardware virtual machine (HVM) for the user, the HVM encapsulated in one of the one or more containers.
  • 11. The system of claim 10, wherein code paths directed from within the one of the one or more containers to outside the one of the one or more containers are eliminated.
  • 12. The system of claim 10, further comprising configuring the HVM by a quick emulator (QEMU) to limit access by the user via at least one of a virtual network interface card (VNIC) and a virtual disk controller (VDC) to the one of the one or more containers encapsulating the HVM.
  • 13. The system of claim 12, wherein: the user is prevented from changing the VNIC; andactions of the user within the HVM are limited by limiting privileges of the user at an instantiation of the HVM by the QEMU.
  • 14. The system of claim 10, wherein: limited resource control of the OS virtualization is inherited by the HVM;the HVM accesses at least one storage volume for the user via at least one of a virtual network interface card (VNIC) and a virtual disk controller;input/output is dynamically throttled for the HVM by the OS virtualization; andprocessor scheduling is performed for the HVM by the OS virtualization.
  • 15. The system of claim 10, further comprising a debug module in the global zone hosting the OS virtualization, the debug module adapted to monitor input/output of the one of the one or more containers, the debug module further adapted to observe register states of the HVM.
  • 16. The system of claim 10, wherein input/output of the HVM is throttled by an administrator of the global zone.
  • 17. A non-transitory computer readable storage medium having a program recorded thereon, the program when executed causing a computer to perform a method for storing and processing data, the method comprising: providing an operating system (OS) virtualization having a plurality of containers, one or more containers preventing privilege escalation by a user to an administrator of a global zone running the OS virtualization;providing a hardware virtual machine (HVM) for the user, the HVM encapsulated in one of the one or more containers;eliminating code paths directed from within the one of the one or more containers to outside the one of the one or more containers;limiting access by the user associated with the HVM to the one of the one or more containers encapsulating the HVM; andlimiting operations of the user within the one of the one or more containers to instantiating another HVM.
  • 18. The non-transitory computer readable storage medium of claim 17, wherein the method further comprises: configuring the HVM by a quick emulator (QEMU) to limit access by the user via at least one of a virtual network interface card (VNIC) and a virtual disk controller (VDC) to the one of the one or more containers encapsulating the HVM;preventing the user from changing the VNIC; andlimiting actions of the user within the HVM by limiting privileges of the user at instantiation of the HVM by the QEMU.
US Referenced Citations (154)
Number Name Date Kind
6393495 Flory et al. May 2002 B1
6553391 Goldring et al. Apr 2003 B1
6901594 Cain et al. May 2005 B1
7222345 Gray et al. May 2007 B2
7265754 Brauss Sep 2007 B2
7379994 Collazo May 2008 B2
7437730 Goyal Oct 2008 B2
7529780 Braginsky et al. May 2009 B1
7581219 Neiger et al. Aug 2009 B2
7603671 Liu Oct 2009 B2
7640547 Neiman et al. Dec 2009 B2
7685148 Engquist et al. Mar 2010 B2
7774457 Talwar et al. Aug 2010 B1
7814465 Liu Oct 2010 B2
7849111 Huffman et al. Dec 2010 B2
7899901 Njemanze et al. Mar 2011 B1
7904540 Hadad et al. Mar 2011 B2
7917599 Gopalan et al. Mar 2011 B1
7933870 Webster Apr 2011 B1
7940271 Wright et al. May 2011 B2
8006079 Goodson et al. Aug 2011 B2
8010498 Gounares et al. Aug 2011 B2
8141090 Graupner et al. Mar 2012 B1
8181182 Martin May 2012 B1
8301746 Head et al. Oct 2012 B2
8336051 Gokulakannan Dec 2012 B2
8370936 Zuk et al. Feb 2013 B2
8417673 Stakutis et al. Apr 2013 B2
8417746 Gillett, Jr. et al. Apr 2013 B1
8429282 Ahuja et al. Apr 2013 B1
8434081 Cervantes et al. Apr 2013 B2
8464251 Sahita et al. Jun 2013 B2
8631131 Kenneth et al. Jan 2014 B2
8677359 Cavage et al. Mar 2014 B1
20020069356 Kim Jun 2002 A1
20020082856 Gray et al. Jun 2002 A1
20020156767 Costa et al. Oct 2002 A1
20020198995 Liu et al. Dec 2002 A1
20030154112 Neiman et al. Aug 2003 A1
20030163596 Halter et al. Aug 2003 A1
20040088293 Daggett May 2004 A1
20050097514 Nuss May 2005 A1
20050108712 Goyal May 2005 A1
20050188075 Dias et al. Aug 2005 A1
20060107087 Sieroka et al. May 2006 A1
20060153174 Towns-von Stauber et al. Jul 2006 A1
20060218285 Talwar et al. Sep 2006 A1
20060246879 Miller et al. Nov 2006 A1
20060248294 Nedved et al. Nov 2006 A1
20060294579 Khuti et al. Dec 2006 A1
20070088703 Kasiolas et al. Apr 2007 A1
20070118653 Bindal May 2007 A1
20070168336 Ransil et al. Jul 2007 A1
20070179955 Croft et al. Aug 2007 A1
20070250838 Belady et al. Oct 2007 A1
20070271570 Brown et al. Nov 2007 A1
20080080396 Meijer et al. Apr 2008 A1
20080103861 Zhong May 2008 A1
20080155110 Morris Jun 2008 A1
20090044188 Kanai et al. Feb 2009 A1
20090077235 Podila Mar 2009 A1
20090164990 Ben-Yehuda et al. Jun 2009 A1
20090172051 Huffman et al. Jul 2009 A1
20090193410 Arthursson et al. Jul 2009 A1
20090216910 Duchesneau Aug 2009 A1
20090259345 Kato et al. Oct 2009 A1
20090260007 Beaty et al. Oct 2009 A1
20090300210 Ferris Dec 2009 A1
20100050172 Ferris Feb 2010 A1
20100057913 DeHaan Mar 2010 A1
20100106820 Gulati et al. Apr 2010 A1
20100114825 Siddegowda May 2010 A1
20100125845 Sugumar et al. May 2010 A1
20100131324 Ferris May 2010 A1
20100131854 Little May 2010 A1
20100153958 Richards et al. Jun 2010 A1
20100162259 Koh et al. Jun 2010 A1
20100223383 Salevan et al. Sep 2010 A1
20100223385 Gulley et al. Sep 2010 A1
20100228936 Wright et al. Sep 2010 A1
20100235632 Iyengar et al. Sep 2010 A1
20100250744 Hadad et al. Sep 2010 A1
20100262752 Davis et al. Oct 2010 A1
20100268764 Wee et al. Oct 2010 A1
20100299313 Orsini et al. Nov 2010 A1
20100306765 DeHaan Dec 2010 A1
20100306767 DeHaan Dec 2010 A1
20100318609 Lahiri et al. Dec 2010 A1
20100332629 Cotugno et al. Dec 2010 A1
20100333087 Vaidyanathan et al. Dec 2010 A1
20110004566 Berkowitz et al. Jan 2011 A1
20110016214 Jackson Jan 2011 A1
20110029969 Venkataraja et al. Feb 2011 A1
20110029970 Arasaratnam Feb 2011 A1
20110047315 De Dinechin et al. Feb 2011 A1
20110055396 DeHaan Mar 2011 A1
20110055398 Dehaan et al. Mar 2011 A1
20110078303 Li et al. Mar 2011 A1
20110107332 Bash May 2011 A1
20110131306 Ferris et al. Jun 2011 A1
20110131329 Kaplinger et al. Jun 2011 A1
20110131589 Beaty et al. Jun 2011 A1
20110138382 Hauser et al. Jun 2011 A1
20110138441 Neystadt et al. Jun 2011 A1
20110145392 Dawson et al. Jun 2011 A1
20110153724 Raja et al. Jun 2011 A1
20110161952 Poddar et al. Jun 2011 A1
20110173470 Tran Jul 2011 A1
20110179132 Mayo et al. Jul 2011 A1
20110179134 Mayo et al. Jul 2011 A1
20110179162 Mayo et al. Jul 2011 A1
20110185063 Head et al. Jul 2011 A1
20110219372 Agrawal et al. Sep 2011 A1
20110270968 Salsburg et al. Nov 2011 A1
20110276951 Jain Nov 2011 A1
20110296021 Dorai et al. Dec 2011 A1
20110302378 Siebert Dec 2011 A1
20110302583 Abadi et al. Dec 2011 A1
20110320520 Jain Dec 2011 A1
20120017210 Huggins et al. Jan 2012 A1
20120054742 Eremenko et al. Mar 2012 A1
20120060172 Abouzour Mar 2012 A1
20120066682 Al-Aziz et al. Mar 2012 A1
20120079480 Liu Mar 2012 A1
20120089980 Sharp et al. Apr 2012 A1
20120124211 Kampas et al. May 2012 A1
20120131156 Brandt et al. May 2012 A1
20120131591 Moorthi et al. May 2012 A1
20120159507 Kwon et al. Jun 2012 A1
20120167081 Sedayao et al. Jun 2012 A1
20120173709 Li et al. Jul 2012 A1
20120179874 Chang et al. Jul 2012 A1
20120185913 Martinez et al. Jul 2012 A1
20120198442 Kashyap et al. Aug 2012 A1
20120204176 Tian et al. Aug 2012 A1
20120221845 Ferris Aug 2012 A1
20120246517 Bender et al. Sep 2012 A1
20120266231 Spiers et al. Oct 2012 A1
20120284714 Venkitachalam et al. Nov 2012 A1
20120303773 Rodrigues Nov 2012 A1
20120311012 Mazhar et al. Dec 2012 A1
20130042115 Sweet et al. Feb 2013 A1
20130060946 Kenneth et al. Mar 2013 A1
20130067067 Miri et al. Mar 2013 A1
20130081016 Saito et al. Mar 2013 A1
20130086590 Morris et al. Apr 2013 A1
20130129068 Lawson et al. May 2013 A1
20130132057 Deng et al. May 2013 A1
20130179881 Calder et al. Jul 2013 A1
20130191835 Araki Jul 2013 A1
20130191836 Meyer Jul 2013 A1
20130318525 Palanisamy et al. Nov 2013 A1
20130339966 Meng et al. Dec 2013 A1
20130346974 Hoffman et al. Dec 2013 A1
Foreign Referenced Citations (1)
Number Date Country
2011088224 Jul 2011 WO
Non-Patent Literature Citations (18)
Entry
Bi et al. “Dynamic Provisioning Modeling for Virtualized Multi-tier Applications in Cloud Data Center”. 2010 IEEE 3rd International Conference on Cloud Computing. pp. 370-377.
Chappell, David. “Introducing Windows Azure”. Microsoft Corporation. Oct. 2010. pp. 1-25.
Yagoubi, Belabbas et al., “Load Balancing in Grid Computing,” Asian Journal of Information Technology, vol. 5, No. 10 , pp. 1095-1103, 2006.
Kramer, “Advanced Message Queuing Protocol (AMQP),” Linux Journal, Nov. 2009, p. 1-3.
Subramoni et al., “Design and Evaluation of Benchmarks for Financial Applications Using Advanced Message Queuing Protocol (AMQP) over InfiniBand,” Nov. 2008.
Richardson et al., “Introduction to RabbitMQ,” Sep. 2008, p. 1-33.
Bernstein et al., “Using XMPP as a Transport in Intercloud Protocols,” Jun. 22, 2010, p. 1-8.
Bernstein et al., “Blueprint for the Intercloud—Protocols and Formats for Cloud Computing Interoperability,” May 28, 2009, pp. 328-336.
Gregg, Brendan, “Visualizing System Latency,” May 1, 2010, ACM Queue, p. 1-13, http://queue.acm.org/detail.cfm?id=1809426.
Gregg, Brendan, “Heat Map Analytics,” Mar. 17, 2009, Oracle, p. 1-7, https://blogs.oracle.com/brendan/entry/heat_map_analytics.
Mundigl, Robert, “There is More Than One Way to Heat a Map,” Feb. 10, 2009, Clearly and Simply, p. 1-12, http://www.clearlyandsimply.com/clearly_and_simply/2009/02/there-is-more-than-one-way-to-heat-a-map.html.
International Search Report and Written Opinion of the International Searching Authority mailed May 5, 2011 in Patent Cooperation Treaty Application No. PCT/US2011/028230, filed Mar. 12, 2011.
Chef Documents. Retrieved Mar. 11, 2014 from http://docs.opscode.com/.
Ansible Documentation. Retrieved Mar. 11, 2014 from http://docs.ansible.com/.
Block IO Controller. Retrieved Mar. 12, 2014 from https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt.
Block Device Bio Throttling Support. Retrieved Mar. 12, 2014 from https://lwn.net/Articles/403889/.
Gregg, Brendan. Systems Performance: Enterprise and the Cloud, Prentice Hall, 2014, pp. 557-558.
Mesnier, Michael. I/O throttling. 2006. Retrieved Apr. 13, 2014 from https://www.usenix.org/legacy/event/fast07/tech/full_papers/mesnier/mesnier_html/node5.html.