Systems and methods for integrating compute resources in a storage area network

Information

  • Patent Grant
  • Patent Number
    8,943,284
  • Date Filed
    Thursday, March 14, 2013
  • Date Issued
    Tuesday, January 27, 2015
Abstract
A data processing and storage system includes a compute module for running at least one virtual machine for processing guest data. State data on the at least one virtual machine is collected. The system also includes a storage module communicating with the compute module and storing the guest data. The storage module accesses the state data for controlling storage operations. A cloud storage/compute system is provided that includes a storage module for storing guest data for a virtual machine and operating based on a clock. The cloud storage/compute system also includes a compute module coupled to the storage module for performing operations on the guest data for the virtual machine and operating based on the clock. A method is provided that includes collecting state data on a virtual machine that processes guest data, and controlling storage operations relating to the guest data based on the state data.
Description
FIELD OF THE INVENTION

The present invention relates to systems and methods for a cloud computing infrastructure. More particularly, the present invention relates to a system and method for integrating compute resources in a storage area network.


BACKGROUND

Cloud infrastructure, including storage and processing, is an increasingly important resource for businesses and individuals. Using a cloud infrastructure enables businesses to outsource all or substantially all of their information technology (IT) functions to a cloud service provider. Businesses using a cloud service provider benefit from increased expertise supporting their IT function, higher capability hardware and software at lower cost, and ease of expansion (or contraction) of IT capabilities.


Monitoring a cloud infrastructure is an important function of cloud service providers, and continuity of function is an important selling point for cloud service providers. Downtime due to malware or other failures should be avoided to ensure customer satisfaction. Cloud infrastructure monitoring conventionally includes network packet sniffing, but this is impractical as a cloud infrastructure scales up. Alternatively, host-based systems conventionally collect and aggregate information regarding processes occurring within the host.


SUMMARY OF THE INVENTION

According to exemplary embodiments, the present technology provides a data processing and storage system. The system may include a compute module for running at least one virtual machine for processing guest data. State data on the at least one virtual machine is collected. The system also includes a storage module communicating with the compute module and storing the guest data. The storage module accesses the state data for controlling storage operations.


The state data may include a process identifier, a username, a central processing unit usage, a memory identifier, an internal application identifier, and/or an internal application code path. The state data may be used to dynamically modify control software of the storage module. The state data may be used to manage input/output throttling of the guest data with respect to the storage module. The state data may be accessible by a system administrator for determining and/or modifying resource usage related to the guest data.


The system may include a clock accessible by the compute module and the storage module for managing operations. The system may include a debug module adapted to access the state data and provide a stack trace for the compute module and the storage module.


The storage module may store a read-only copy of a virtual machine operating system for instantiating new instances of virtual machines in the compute module.


A cloud storage/compute system is provided that includes a storage module for storing guest data for a virtual machine and operating based on a clock. The cloud storage/compute system also includes a compute module communicatively coupled with the storage module for performing operations on the guest data for the virtual machine and operating based on the clock. Clock data may be associated with storage module operations data and compute module operations data.


The system may include a cloud system administrator module accessing the storage module operations data and the compute module operations data for managing operations. The system may include a debug module accessing the storage module operations data and the compute module operations data for providing a stack trace.


A method is provided that includes collecting state data on a virtual machine that processes guest data. The method also includes controlling storage operations relating to the guest data based on the state data.


The method may include communicating by the virtual machine the guest data to a storage module, and storing the guest data in the storage module. The method may include storing in the storage module a read-only copy of a virtual machine operating system for instantiating new instances of virtual machines. The method may further include storing in the storage module modifications to the virtual machine in an instance image file.


These and other advantages of the present technology will be apparent when reference is made to the accompanying drawings and the following description.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram illustrating an exemplary embodiment of a compute/storage server.



FIG. 2 is a system level diagram illustrating an exemplary embodiment of a compute/storage server and datacenter administrator.



FIG. 3 is a diagram illustrating an exemplary embodiment of a cloud-based data storage and processing system.



FIG. 4 is a flow chart illustrating an exemplary method.



FIG. 5 is a schematic of a computer system according to an exemplary embodiment.



FIG. 6 is a diagram illustrating another exemplary embodiment of a compute/storage server.





DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

While this technology is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the technology and is not intended to limit the technology to the embodiments illustrated.


The present technology provides a unified compute and storage model for a datacenter. The present technology modifies the storage area network (SAN) model and provides compute as a service to a SAN. The present technology enables a datacenter administrator to answer customer queries that are difficult to answer in a conventional SAN model. For example, using a SAN model, a system administrator is not able to quickly and easily respond to questions presented by a customer such as: "why is it slow?"; "why is it down?"; "when is it coming back?"; and "when it comes back, will it be OK?". Conventional datacenters built on the SAN model cannot perform a socket-to-socket analysis and do not provide the transparency that would enable an administrator to properly answer these questions.


The present technology may provide cost optimization across a complete range of different instance types, and may provide a system and method for running a virtual machine natively on a modified storage area network. A multi-datacenter object store (a primary system of record) is provided by relaxing a policy around compute and by providing the same core object system across datacenters. The present technology brings compute functionality to the data store, and thereby provides unique capabilities to query, index, MapReduce, transform, and/or perform any other compute function directly on the object store without having to move data.


The present technology collapses the conventional SAN model by combining storage and compute. Policies, security, and software are updated to handle this architecture pursuant to the present technology. Storage volumes may be optimized for storage. An integrated compute/SAN according to the present technology enables I/O throttling for tenants (also referred to as guests or virtual machines) based on co-tenant operations, and, through the proper implementation of operating software, may enable awareness of co-tenant (also referred to as neighbor) operations affecting common resources.


Advantages of the present technology include increased predictability and control, as well as improved unit economics. The present technology also enables improved integration of network, compute, and storage with end-user needs. The present technology avoids hard allocation of compute and storage resources, and redirects the datacenter model to be about data, including both storage and manipulation. In this manner, the present technology ensures that storage resources are synchronized with the compute needs of a guest by providing dynamic allocation of storage and compute resources.


The improved observability enabled by the present technology includes visualization of storage latency, I/O latency, and the effects of I/O on other tenants. Once these latencies are determined, I/O latency may be controlled by, for instance, instituting delays for high-I/O users in order to prevent impairment of neighbor guests using the same storage unit. Improved observability may be enabled in part by compute and storage resources utilizing the same clock. The present technology also enables an administrator of the datacenter to identify I/O code paths for each guest.
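

By way of illustration only, the following minimal sketch (in Python, with the limit, delay step, and API names assumed rather than taken from the patent) shows how such a delay-based throttle for high-I/O tenants might look:

```python
# Minimal sketch of per-tenant I/O throttling, assuming a hypothetical
# shared-context API; the class, limit, and delay step are illustrative,
# not taken from the patent.
import time
from collections import defaultdict

IOPS_LIMIT = 1000    # assumed per-tenant operations budget per second
DELAY_STEP = 0.001   # assumed seconds of delay per operation over budget

class IoThrottle:
    """Delays I/O for tenants whose usage would impair neighbor guests."""
    def __init__(self):
        self.window_start = time.monotonic()
        self.ops = defaultdict(int)  # tenant id -> ops in the current window

    def before_io(self, tenant_id: str) -> None:
        now = time.monotonic()
        if now - self.window_start >= 1.0:  # roll the one-second window
            self.ops.clear()
            self.window_start = now
        self.ops[tenant_id] += 1
        over = self.ops[tenant_id] - IOPS_LIMIT
        if over > 0:
            time.sleep(over * DELAY_STEP)  # penalize only the noisy tenant
```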


By integrating compute and storage in the same server in a datacenter, context information relating to a processor or processors (also referred to as a CPU) may be stored and used to control storage operations. This information relating to compute operations is conventionally lost when a datacenter is built from parts from different manufacturers and connected over a network. The context information (also referred to as state data, state, and statistics) may be tenant specific. For example, an administrator may identify that a guest is using a large amount of storage I/O. Using the present technology, the administrator may also be able to access context to identify that a failover has occurred, such that the virtual machine has been replaced by a new virtual machine. The state data may include a process identifier, a username, a central processing unit usage, a memory identifier, an internal application identifier, and/or an internal application code path.
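

For illustration, the listed fields might be grouped into a single record that the compute module publishes and the storage module consumes; the following sketch assumes a shape and example values not specified by the patent:

```python
# Sketch of a per-tenant state-data record carrying the fields listed
# above; the dataclass shape and example values are assumptions.
from dataclasses import dataclass

@dataclass
class StateData:
    process_id: int      # process identifier within the guest
    username: str        # username associated with the process
    cpu_usage: float     # central processing unit usage (fraction of a core)
    memory_id: str       # memory identifier
    app_id: str          # internal application identifier
    app_code_path: str   # internal application code path

# Example record a compute module might publish for the storage module:
record = StateData(4211, "dbadmin", 0.82, "seg-17", "mysql", "/opt/app/db/query")
```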


A common clock between the storage and compute modules of a compute/storage server in a datacenter enables analysis of the host operating system by an administrator of the datacenter. The present technology enables an analysis of all I/O activity of the SAN components of the present technology, and further enables tracking the data flows in and out of the SAN in real time. Further, the identified I/O may be correlated with processes running on a virtual machine. Time-series-based correlations are possible, and the present technology may utilize DTrace and/or other debugging software to provide a clear view of operations up and down a stack by identifying processes.
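

As an example of such time-series correlation, events stamped against the shared clock might be paired by nearest timestamp; the event format and tolerance below are assumptions for illustration:

```python
# Sketch of time-series correlation under a shared clock: pair each storage
# I/O event with the compute-side process event nearest in time. Event
# shapes and the tolerance are assumptions for illustration.
def correlate(storage_events, compute_events, tolerance=0.005):
    """Each event is (timestamp_seconds, description); both lists sorted."""
    pairs, j = [], 0
    for ts, io_desc in storage_events:
        # advance to the compute event closest to this storage timestamp
        while j + 1 < len(compute_events) and \
                abs(compute_events[j + 1][0] - ts) <= abs(compute_events[j][0] - ts):
            j += 1
        if compute_events and abs(compute_events[j][0] - ts) <= tolerance:
            pairs.append((io_desc, compute_events[j][1]))
    return pairs

storage = [(10.001, "write 128KB vol-7"), (10.250, "read 4KB vol-7")]
compute = [(10.000, "pid 4211 mysql flush"), (10.249, "pid 4211 mysql select")]
print(correlate(storage, compute))   # both pairs match within 5 ms
```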


The present technology may provide ease of management by creating virtual machines (also referred to as instances, tenants, and guests) from a single copy of the software. The read-only copy of the virtual operating system may be stored in the storage component of the rack in which the virtual machine operates in the compute component. Instantiation of the virtual machine may be performed very quickly by accessing the storage component directly from the compute component. Additionally, since (in some embodiments) only the difference (also referred to as the delta) in the operating system file is saved as a new file, the copies of virtual machine operating systems for all, or at least a plurality of, the guests of the host machine may occupy a much smaller amount of disk (or other appropriate) storage. A modified virtual OS may therefore be stored as pointers directed to the read-only copy of the operating system and other pointers directed to the delta file. Accessing the read-only copy of the virtual machine along with the delta file when starting another instance based on the modified virtual machine may also be performed very quickly.
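

A minimal sketch of this pointer-plus-delta arrangement, assuming a simple block-indexed image (the dict-based delta map and method names are illustrative, not the patent's mechanism):

```python
# Sketch of copy-on-write image resolution: blocks modified by a guest live
# in a delta file; unmodified blocks resolve to the shared read-only OS
# image. The dict-based pointer map is an illustrative assumption.
class CowImage:
    def __init__(self, base_blocks):
        self.base = base_blocks      # shared read-only OS image (list of bytes)
        self.delta = {}              # block index -> modified contents

    def write(self, idx, data):
        self.delta[idx] = data       # only the difference is stored

    def read(self, idx):
        return self.delta.get(idx, self.base[idx])

    def clone(self):
        """Instantiating from a modified image copies pointers, not blocks."""
        child = CowImage(self.base)          # base is shared, never copied
        child.delta = dict(self.delta)       # copy only the (small) delta map
        return child
```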


Different images for databases, node.js (a platform built on Chrome's JavaScript runtime for building fast, scalable network applications), and MySQL are commonly stored and offered to customers. Configuring a new virtual machine may therefore be seamless and quick, since the copy-on-write system exists on the same machine that includes both compute and storage, and the process of creating a new instance is vastly accelerated. ZFS may be utilized as the file storage system of the present application.
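

Since ZFS is named as one possible file system, a golden image could, for instance, be cloned with ZFS's native copy-on-write snapshots; the commands below are standard ZFS usage with illustrative dataset names, not steps prescribed by the patent:

```python
# Sketch of instantiating a guest from a golden image via ZFS
# copy-on-write clones; pool/dataset names are illustrative.
import subprocess

def provision(image_dataset: str, instance_name: str) -> None:
    snapshot = f"{image_dataset}@golden"
    # Snapshot the read-only image (naive: fails if the snapshot exists).
    subprocess.run(["zfs", "snapshot", snapshot], check=True)
    # A clone shares all unmodified blocks with the snapshot, so a new
    # instance appears almost instantly and initially occupies no space.
    subprocess.run(["zfs", "clone", snapshot, f"zones/{instance_name}"], check=True)

provision("zones/images/base-os", "vm120")
```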


An exemplary hardware embodiment uses the rack as the primary unit and offers four rack designs. The exemplary hardware embodiment may draw substantially constant power, for example 8 kW, and may be based on the same board, CPUs, DRAM, HBAs (host bus adapters), and ToR (top-of-rack) switches. The exemplary hardware embodiment may require only minimal firmware.


Exemplary compute racks for the present technology include, but are not limited to, any of the following: 1) 512 CPUs, 4 TB DRAM, all 600 GB SAS (68 TB); 2) 512 CPUs, 4 TB DRAM, all 3 TB SAS (600 TB); and 3) 512 CPUs, 4 TB DRAM, all 800 GB SSDs (90 TB/200 TB). Object storage racks according to an exemplary embodiment may include, but are not limited to, 256 CPUs, 4 TB DRAM, all 3 TB/4 TB SATA (800 TB).



FIG. 1 is a diagram illustrating an exemplary compute/storage server 100. Compute/storage server 100 includes compute module 110 and storage module 150. Compute module 110 may be composed of processors (also referred to as processing units, central processing units, and CPUs). Compute module 110 may be used to instantiate one or more virtual machines, for instance virtual machine 120 and virtual machine 130. Virtual machine 120 and virtual machine 130 may operate as guests (also referred to as tenants) on the host machine, and may run for the benefit of one or more customers of the datacenter operator. Virtual machine 120 and virtual machine 130 may, after processing, output guest data to storage module 150 for persistent storage in disks 170 of storage module 150. In alternative exemplary embodiments, disks 170 may be any other appropriate memory device for persistent data storage.


Virtual machine 120 and virtual machine 130 may output context data to context memory 140. Context data may be state data of virtual machine 120 and virtual machine 130, and may include process identifiers within each virtual machine, usernames for each virtual machine, central processing unit usage for each virtual machine, memory identifiers for each virtual machine, internal application identifiers for each virtual machine, and/or internal application code paths for each virtual machine. Context memory 140 may couple to device drivers 160 of storage module 150. Alternatively, context memory 140 may couple to other software elements of storage module 150. Context data may be transferred by context memory 140 to device drivers 160 (or other software elements of storage module 150) and may be used to assist in the operation of storage module 150 and/or disks 170. In particular, context data may be used to dynamically modify device drivers 160. In this manner, data relating to the operation of virtual machine 120 and virtual machine 130 may be used as an input to storage module 150 and may be used to modify a storage algorithm. Likewise, data relating to the processing elements of compute module 110 used to run virtual machine 120 and virtual machine 130 may also be used as an input to storage module 150 and may be used to modify a storage algorithm. Additionally, device drivers 160 (or other software storage control elements of storage module 150) may output data relating to storage operations to context memory 140, and this data may be matched or correlated with data received from virtual machine 120 and virtual machine 130 for use by a system administrator.
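

The following sketch illustrates context memory 140 as such a two-way channel; the class, method names, and example priority policy are assumptions for illustration only:

```python
# Sketch of a context memory acting as a two-way channel between compute
# and storage modules, per FIG. 1; the class and policy are assumptions.
class ContextMemory:
    def __init__(self):
        self.vm_context = {}   # vm id -> latest state data from compute side
        self.storage_ops = []  # (timestamp, vm id, op) records from drivers

    def publish_context(self, vm_id, state):      # called by compute module
        self.vm_context[vm_id] = state

    def record_storage_op(self, ts, vm_id, op):   # called by device drivers
        self.storage_ops.append((ts, vm_id, op))

    def io_priority(self, vm_id):
        # Example driver policy (assumed): deprioritize guests whose CPU
        # usage is already saturated, on the theory that they can tolerate
        # added I/O latency without further harming neighbors.
        state = self.vm_context.get(vm_id, {})
        return "low" if state.get("cpu_usage", 0.0) > 0.9 else "normal"
```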


Compute/storage server 100 also includes clock 180, which may be accessed by both compute module 110 and storage module 150. Because the operations of compute module 110 and storage module 150 may both be based on clock 180, time stamps associated with the operations of the respective modules may be correlated, whether in context memory 140, in another module in compute/storage server 100, in a system administrator server, and/or elsewhere.



FIG. 2 is a system level diagram illustrating datacenter 200, including compute/storage servers 100 and 210, and administrator terminal 220. Administrator terminal 220 may be used to control all or a portion of datacenter 200, and/or may be used to operate multiple datacenters. Administrator terminal 220 may communicatively couple with context memory 140 of compute/storage server 100, and/or may monitor the operations of compute module 110 and/or storage module 150. In this manner, the present technology enables a datacenter administrator to observe operations of compute and storage to a degree that was previously impossible. In particular, the internal processes of a virtual machine may be identified and visualized, and may be correlated with input/output operations of storage module 150. Clock data for context data, as well as for all other data received from compute/storage server 100, is inherently synchronized, since all operations within compute/storage server 100 are performed based on clock 180.



FIG. 3 illustrates cloud-based data storage and processing system 300. Cloud-based data storage and processing system 300 includes datacenter 200 communicatively coupled to network 310. Network 310 may be a wide-area network (WAN), a local area network (LAN), the internet, or any other appropriate network. Customers may access cloud-based data storage and processing system 300 by using any of customer terminal 320, customer laptop 330, and/or customer personal computer 340 (or the like) to access network 310.



FIG. 4 illustrates method 400 according to the present technology. Method 400 proceeds from a start oval to operation 410, which indicates to collect state data on a virtual machine processing guest data. From operation 410, the flow proceeds to operation 420, which indicates to control storage operations relating to the guest data based on the state data. From operation 420, the flow optionally proceeds to operation 430, which indicates to communicate, by the virtual machine, the guest data to a storage module, and to store the guest data in the storage module. From operation 430, the flow optionally proceeds to operation 440, which indicates to manage processing operations and the storage operations based on a clock accessible by the virtual machine and the storage module. From operation 440, the flow proceeds to the end oval.
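

A compact way to read the flow of FIG. 4 is as straight-line pseudocode; all helpers below are placeholders:

```python
# Straight-line sketch of method 400; every object and method here is a
# placeholder for the modules described above, not an API from the patent.
def method_400(vm, storage_module, clock):
    state = vm.collect_state_data()            # operation 410
    storage_module.control_operations(state)   # operation 420
    storage_module.store(vm.guest_data)        # operation 430 (optional)
    vm.manage_operations(clock)                # operation 440 (optional):
    storage_module.manage_operations(clock)    # both modules share one clock
```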



FIG. 5 illustrates an exemplary computing system 500 that may be used to implement an embodiment of the present technology. For example, compute/storage servers 100 and 210, administrator terminal 220, network 310, customer terminal 320, customer laptop 330, and/or customer personal computer 340 may be implemented by one or more of the components of computing system 500. Additionally or alternatively, computing system 500 may be used to implement method 400 of FIG. 4. The computing system 500 of FIG. 5 includes one or more processors 510 and memory 520. Memory 520 stores, in part, instructions and data for execution by the one or more processors 510. Memory 520 can store the executable code when the computing system 500 is in operation. The computing system 500 of FIG. 5 may further include a mass storage 530, portable storage 540, output devices 550, input devices 560, a graphics display 570, and other peripheral device(s) 580.


The components shown in FIG. 5 are depicted as being connected via a single bus 590. The components may be connected through one or more data transport means. The one or more processors 510 and memory 520 may be connected via a local microprocessor bus, and the mass storage 530, peripheral device(s) 580, portable storage 540, and graphics display 570 may be connected via one or more input/output (I/O) buses.


Mass storage 530, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor 510. Mass storage 530 can store the system software for implementing embodiments of the present technology for purposes of loading that software into memory 520.


Portable storage 540 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk, or digital video disc, to input and output data and code to and from the computing system 500 of FIG. 5. The system software for implementing embodiments of the present technology may be stored on such a portable medium and input to the computing system 500 via the portable storage 540.


Input devices 560 provide a portion of a user interface. Input devices 560 may include an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. Additionally, the system 500 as shown in FIG. 5 includes output devices 550. Suitable output devices include speakers, printers, network interfaces, and monitors.


Graphics display 570 may include a liquid crystal display (LCD) or other suitable display device. Graphics display 570 receives textual and graphical information, and processes the information for output to the display device.


Peripheral device(s) 580 may include any type of computer support device to add additional functionality to the computing system. Peripheral device(s) 580 may include a modem or a router.


The components contained in the computing system 500 of FIG. 5 are those typically found in computing systems that may be suitable for use with embodiments of the present technology and are intended to represent a broad category of such computer components that are well known in the art. Thus, the computing system 500 of FIG. 5 can be a personal computer, handheld computing system, telephone, mobile computing system, workstation, server, minicomputer, mainframe computer, or any other computing system. The computer can also include different bus configurations, networked platforms, multi-processor platforms, etc. Various operating systems can be used, including UNIX, Linux, Windows, Macintosh OS, Palm OS, and other suitable operating systems.



FIG. 6 is a diagram illustrating another exemplary embodiment of compute/storage server 100. Compute/storage server 100 includes compute module 110 and storage module 150. Compute module 110 may be composed of processors and may be used to instantiate one or more virtual machines, for instance virtual machine 120, operating as a guest. Storage module 150 may include disks 170, and disks 170 may include read-only OS disk 600, read-write apps config disk 610, library disk 620, and instance image disk 630. As discussed previously, disks 170, read-only OS disk 600, read-write apps config disk 610, library disk 620, and/or instance image disk 630 may be any other appropriate memory device suitable for persistent data storage. Additionally, read-only OS disk 600, read-write apps config disk 610, library disk 620, and instance image disk 630 may instead be collectively stored on one disk, or may be spread across a greater number of disks.


Virtual machine 120 may be instantiated based on a copy-on-write methodology. In particular, when an administrator of the datacenter and/or a customer desires a new virtual machine, compute module 110 may access read-only OS disk 600 of storage module 150. Alternatively, the administrator or customer may desire a particular type of virtual machine, for instance a database or a virtual machine based on node.js and/or MySQL. Due to the direct access of compute module 110 to storage module 150, the instantiation of a virtual machine may be performed very quickly. If the customer or administrator modifies the virtual machine, the changes to the system may be stored in a delta file on instance image disk 630, and a pointer file may provide a map to selectively access read-only OS disk 600 and instance image disk 630. Additionally, if a customer or the datacenter administrator wants to make a copy of a previously modified virtual machine, compute module 110 may access the read-only copy of the operating system for the virtual machine stored on read-only OS disk 600 and the modifications stored on instance image disk 630, based on the contents of the pointer file.


The above description is illustrative and not restrictive. Many variations of the technology will become apparent to those of skill in the art upon review of this disclosure. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.

Claims
  • 1. A storage area network system, comprising: an object store comprising a plurality of tenants; a compute module for: assigning to each of the plurality of tenants a virtual machine, instantiating the virtual machine for each of the plurality of tenants directly on the object store; running at least one virtual machine that is used to process guest data in the object store using at least one compute function that is executed directly on the object store in such a way that data is not moved from the object store, each of the compute operations having a timestamp generated by a clock; collecting context data about each of the virtual machines of the plurality of tenants
  • 2. The system of claim 1, wherein the context data is used to dynamically modify control software of the storage module.
  • 3. The system of claim 2, wherein the context data is used to manage input/output throttling of the guest data with respect to the storage module.
  • 4. The system of claim 1, wherein the context data is accessible by a system administrator for at least one of determining and modifying resource usage related to the guest data.
  • 5. The system of claim 1, further comprising a clock accessible by the compute module and the storage module for managing operations.
  • 6. The system of claim 1, further comprising a debug module adapted to access the context data and provide a stack trace for the compute module and the storage module.
  • 7. The system of claim 1, wherein the storage module stores a read-only copy of a virtual machine operating system for instantiating new instances of virtual machines in the compute module, the storage module further storing modifications to the virtual machine operating system in a delta file on an image disk, separately from the read-only copy of the virtual machine operating system, the delta file being associated with the read-only copy of the virtual machine operating system by a pointer file.
  • 8. An object store, comprising: a storage module for executing storage operations comprising storing guest data for a virtual machine and operating based on a clock, each of the storage operations comprising a timestamp, context data about the virtual machine being collected, wherein the context data comprises an internal application identifier and an internal application code path for the virtual machine; a compute module coupled to the storage module for performing compute operations on the guest data for the virtual machine and operating based on the clock, each of the storage operations comprising a timestamp respectively; and a context memory for storing context data generated by the virtual machine, the context memory being coupled to device drivers of the storage module, the context data being used to dynamically modify the device drivers, the context data comprising a correlation of the storage operations and the compute operations using the timestamps of the storage operations and the timestamps of the compute operations, wherein the timestamps associated with the storage operations are correlated with the timestamps associated with the compute operations, in the context memory.
  • 9. The object store of claim 8, further comprising a cloud system administrator module accessing storage module operations data and compute module operations data for managing operations.
  • 10. The object store of claim 9, further comprising a debug module accessing the storage module operations data and the compute module operations data for providing a stack trace.
  • 11. The object store of claim 8, wherein the context data is used to at least one of dynamically modify control software of the storage module and manage input/output throttling of the guest data with respect to the storage module.
  • 12. A method comprising: collecting context data on each of the virtual machines of the plurality of tenants, the context data comprising a process identifier and a username, a compute module that executes compute operations comprising instantiating each of the virtual machines directly on the object store without moving guest data from the object store, the context data comprising a correlation of storage operations and the compute operations using timestamps of the storage operations and timestamps of the compute operations, wherein the timestamps associated with the storage operations are correlated with the timestamps associated with the compute operations, in the context memory; controlling storage operations for the object store relating to the guest data based on the context data, each of the compute operations and the storage operations comprising timestamps respectively; generating correlated data that comprises correlating the compute operations of the compute module and the storage operations of the storage module using timestamps; and outputting the correlated data to an administrator terminal.
  • 13. The method of claim 12, further comprising: communicating by at least one virtual machine the guest data to a storage module; and storing the guest data in the storage module.
  • 14. The method of claim 13, further comprising: storing in the storage module a read-only copy of a virtual machine operating system for instantiating new instances of virtual machines; and storing in the storage module modifications to the virtual machine in an instance image file.
  • 15. The method of claim 14, wherein the context data is used to: dynamically modify control software of the storage module; modify device drivers of the storage module; manage input/output throttling of the guest data with respect to the storage module; determine resource usage related to the guest data; modify resource usage related to the guest data; debug at least one of the virtual machine and the storage module; and provide a stack trace.
  • 16. The method of claim 12, further comprising managing processing operations and the storage operations based on a clock accessible by the virtual machine and the storage module.
  • 17. The method according to claim 12, wherein the context data further comprises a process identifier.
  • 18. The method according to claim 12, wherein the context data further comprises a username.
  • 19. The method according to claim 12, wherein the context data further comprises a memory identifier.
  • 20. The method according to claim 12, wherein the context data further comprises an internal application identifier.
  • 21. A system for processing and storing data, comprising: a compute module running at least one virtual machine that is used to process guest data in an object store using at least one compute function that is executed directly on the object store in such a way that data is not moved from the object store, the compute module accessing a clock for managing compute operations, context data on the at least one virtual machine being collected; a storage module communicating with the compute module and storing the guest data, the storage module accessing the context data that is used to dynamically modify control software of the storage module and to manage input/output throttling of the guest data with respect to the storage module, the storage module storing a read-only copy of a virtual machine operating system for instantiating new instances of virtual machines in the compute module, the storage module accessing the clock for managing storage operations; a context module for correlating the compute operations of the compute module and the storage operations of the storage module using timestamps, each of the compute operations and the storage operations comprising timestamps respectively; and a system administrator module accessing the context data for determining resource usage related to the guest data and outputting the correlated data to an administrator terminal.
Related Publications (1)
Number Date Country
20140281304 A1 Sep 2014 US