Embodiments of the disclosure relate to the management of cloud-based computing environments. Systems, methods, and media provided herein may be utilized for time-based dynamic allocation of resource management.
A cloud is a resource that typically combines the computational power of a large grouping of processors and/or that combines the storage capacity of a large grouping of computer memories or storage devices. For example, systems that provide a cloud resource may be utilized exclusively by their owners, such as Google™ or Yahoo!™, or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
The cloud may be formed, for example, by a network of servers, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations may depend on the type of business associated with the user.
According to some embodiments, the present technology may be directed to methods for managing requests for computing resources by dynamically throttling requests for computing resources generated by one or more tenants within a multi-tenant system, the requests being directed to a computing resource, the requests of a tenant being selectively throttled based upon a comparison of a usage metric and priority for the tenant.
According to other embodiments, the present technology may be directed to methods for managing requests for computing resources by dynamically throttling requests for computing resources generated by one or more tenants within a multi-tenant system, the requests being directed to a computing resource that receives fluctuating quantities of requests from the multi-tenant system, wherein the one or more tenants that are selectively throttled are determined by comparing a raw number of requests generated by each tenant and selecting one or more of the tenants with the greatest number of requests relative to the other tenants.
According to additional embodiments, the present technology may be directed to systems for managing requests for computing resources. These systems may include: (a) a processor that executes computer-readable instructions; (b) a memory for storing executable instructions that include an operating system that has a filesystem; and (c) a throttling module that manages requests for computing resources by dynamically throttling requests for computing resources generated by one or more tenants within a multi-tenant system, the requests being directed to a computing resource that receives fluctuating quantities of requests from the multi-tenant system, the requests of a tenant being selectively throttled based upon a comparison of a usage metric and priority for the tenant.
According to additional embodiments, the present technology may be directed to computer-readable storage media for managing requests for computing resources. The storage media may include instructions that, when executed by a processor, perform a method that includes dynamically throttling requests for computing resources generated by one or more tenants within a multi-tenant system, the requests being directed to a computing resource that receives fluctuating quantities of requests from the multi-tenant system, the requests of a tenant being selectively throttled based upon a comparison of a usage metric and priority for the tenant.
The accompanying drawings, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed disclosure, and explain various principles and advantages of those embodiments.
The methods and systems disclosed herein have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. It will be apparent, however, to one skilled in the art, that the disclosure may be practiced without these specific details. In other instances, structures and devices are shown in block diagram form only in order to avoid obscuring the disclosure.
Generally speaking, the present technology may control access to one or more computing resources that are subject to an unknown or unpredictable number of requests (e.g., workload). In some instances, these computing resources are physical components that are constrained by a finite number of possible requests that they may process within a given time frame. For example, a physical storage medium may only be able to process up to a thousand read and/or write requests per second.
In some embodiments, the present technology may be utilized in multi-tenant systems. Multi-tenant systems may impose dynamic and drastically varying workloads on computing resources of a cloud. An exemplary computing resource may include a physical storage medium such as a hard disk. Workload imposed on the computing resource may include I/O operations (e.g., read and write operations) and/or network bandwidth usage. Because physical systems such as hard disks have finite operational constraints (e.g., maximum amount of I/O requests that can be fulfilled in a given timespan), monopolization of these resources by one or more tenants in a multi-tenant system may lead to pathological latency issues for the other tenants as they must wait for the computing resource. Such latency issues will diminish the overall performance of the other tenants.
To address these issues, the present technology may dynamically limit the workload that a tenant applies to the computing resource, based upon the number of tenants providing such workloads to the computing resource for processing. Workloads may be understood to include I/O (e.g., input/output, read/write) operations for a computing resource such as a physical storage medium, but may also include any quantifiable request for a process that is executed by the computing resource.
More specifically, when designing a cloud computing platform, a cloud provider may desire to mitigate any performance vagaries due to multi-tenant effects. As stated previously, a cloud computing environment may include a physical machine or plurality of machines that provision a plurality of tenants (e.g., zones) for customers. Groups of tenants are often referred to as a multi-tenant environment.
The term “multi-tenant” may be understood to include not only cloud environments, but also other configurations of computing devices/resources, such as an enterprise system that may have both primary and secondary computing resources. The present technology may ensure that primary computing devices have adequate access to computing resources such as databases or other storage media, while preserving the ability of secondary computing devices to access the storage media on a throttled basis, if necessary.
Because the workload imposed upon a computing resource by each tenant may not be consistent and uniformly distributed, bursts of activity (increases in workload) may affect the performance of other tenants. These tenants may be virtual machines utilizing the system's computing resources, or single applications running on that system. For example, when one tenant monopolizes the available I/O operations of a physical storage media, other tenants may be required to wait for unacceptable periods of time to access the physical storage media.
One way to avoid these multi-tenant effects is to overprovision the cloud to handle spikes in activity (e.g., provide additional physical storage media), but that approach may leave machines or components of the cloud underutilized and may undermine the economics of cloud computing.
The present technology may employ a software virtualized solution within a cloud platform, wherein each tenant is a container built into the underlying operating system of the cloud. The present technology may provision a tenant (also known as a zone) for each customer, and this architecture grants the system additional flexibility when allocating resources to individual tenants. The present technology may observe the activity of all tenants, and can coordinate with the kernel of the cloud to optimize resource management between tenants.
Generally speaking, the four basic computing resources that may require provisioning within a cloud include CPU, memory, I/O, and network bandwidth. For many customer workloads, network bandwidth may occasionally present a bottleneck, and such bottlenecking may increase as applications become more and more distributed.
I/O contention can also be a major factor that negatively impacts customers. For example, on one machine, a single tenant can issue a stream of I/O operations, usually synchronous writes, which disrupt I/O operations for all other tenants. This problem is further exacerbated by filesystem management functionalities, which may buffer asynchronous writes for a single transaction group. These asynchronous writes may include a set of data blocks which are atomically flushed to disk. The process of flushing a filesystem transaction group may occupy all or a significant portion of a computing device's (e.g., a storage medium's) I/O bandwidth, thereby preventing pending read operations by other tenants.
According to some embodiments, the present technology may employ an I/O throttling functionality to remedy I/O contention. The I/O throttling functionality may be generally described as having two components. The first component may monitor and account for each tenant's I/O operations. A second component may throttle a tenant's operations when that tenant exceeds a fair share of disk I/O. When the throttle detects that a tenant is consuming more than is appropriate, each read or write system call is delayed by up to 200 microseconds, which may be sufficient to allow other tenants to interleave I/O requests during those delays. The I/O throttling functionality may calculate an I/O usage metric for each tenant, as will be described in greater detail below. It will be understood that while some embodiments of the present technology may implement a delay of up to 200 microseconds, the actual delay imposed by the system may include any duration desired.
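The following is a minimal sketch, not the kernel implementation itself, of the two-component throttle described above: one part accounts for each tenant's I/O operations, and the other delays a tenant's read or write calls by up to 200 microseconds when that tenant consumes more than an even share. The class and method names (IOThrottle, record_io, maybe_delay) are illustrative assumptions.

```python
import time
from collections import defaultdict

MAX_DELAY_US = 200  # exemplary cap from the description above; configurable

class IOThrottle:
    """Component 1 accounts for per-tenant I/O; component 2 delays callers."""

    def __init__(self):
        self.io_counts = defaultdict(int)

    def record_io(self, tenant_id, n_ops=1):
        # Component 1: monitor and account for each tenant's I/O operations.
        self.io_counts[tenant_id] += n_ops

    def _fair_share(self):
        active = [count for count in self.io_counts.values() if count > 0]
        return sum(active) / len(active) if active else 0.0

    def maybe_delay(self, tenant_id):
        # Component 2: delay a tenant's read/write call (up to 200 microseconds)
        # when it is consuming more than an even share of the observed I/O.
        if self.io_counts[tenant_id] > self._fair_share():
            time.sleep(MAX_DELAY_US / 1_000_000)
```

In this sketch the delay is applied per system call, so other tenants may interleave their requests during the imposed pauses.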
The present technology may prioritize I/O access amongst the tenants, such that certain tenants may be granted prioritized access to the I/O component. These types of prioritizations may be referred to as a “priority.” If desired, each tenant may be provisioned with a usage metric, and the I/O throttling functionality may monitor I/O usage across the zones and compare I/O usage for each tenant to its usage metric. If a zone has a higher-than-average I/O usage (compared to its usage metric), the I/O throttling functionality may throttle or temporarily suspend I/O requests from the tenant to the I/O device. That is, each I/O request may be delayed up to 200 microseconds, depending on the severity of the inequity between the various tenants.
Additionally, the delay applied to the I/O requests may be increased and/or decreased in a stepwise fashion, based upon a velocity of the I/O requests for the tenant. These and other advantages of the present technology will be described in greater detail with reference to the collective figures.
The cloud may be formed, for example, by a network of servers, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depend on the type of business associated with the user.
In some embodiments, the cloud includes a plurality of tenants 110A-N (e.g., zones), where each tenant may represent a virtual computing system for a customer. Each tenant may be configured to perform one or more computing operations such as hosting a web page, enabling a web-based application, facilitating data storage, and so forth.
In other embodiments, the multi-tenant system 105 may include a distributed group of computing devices such as servers that do not share computing resources or workload. Additionally, the multi-tenant system 105 may include a single computing device that has been provisioned with a plurality of programs that each produce instances of event data.
The multi-tenant system 105 may provide the tenants 110A-N with a plurality of computing resources, which may be either virtual or physical components. For the purposes of brevity, the following description may specifically describe a computing resource 130 that includes a physical storage medium such as a hard disk. Again, the computing resource 130 may include physical devices that have operational constraints that can be defined in terms of a finite quantity. For example, there may be an upper limit on the number of I/O requests that the computing resource 130 can handle over a given period of time.
Customers or system administrators may utilize client devices 115 to access their tenant within the multi-tenant system 105. Additionally, the individual parts of the system 100 may be communicatively coupled with one another via a network connection 120. The network connection may include any number or combination of private and/or public communications media, such as the Internet. The multi-tenant system 105 may include a system memory 125.
The filesystem of the multi-tenant system 105 may be provisioned with a throttling layer or “kernel 200,” which will be described in greater detail with regard to
According to some embodiments, the throttling kernel 200 may comprise a priority module 205, a tenant monitor module 210, a metric generator 215, an analytics module 220, a throttling module 225, and an interleaving module 230. It is noteworthy that the throttling kernel 200 may include additional or fewer modules, engines, or components, and still fall within the scope of the present technology. As used herein, the term “module” may also refer to any of an application-specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Prior to throttling requests of tenants within the multi-tenant system, a system administrator may interact with the throttling kernel 200 to establish guidelines that govern the behavior of the throttling kernel 200. For a particular computing resource, such as a physical storage medium that may be accessed by the tenants 110A-N, the system administrator may determine threshold request levels that represent the physical constraints of the computing resource. For example, the system administrator may estimate that the maximum number of I/O requests that a physical storage medium may handle within a one-second period of time is approximately 1,000.
It will be understood that while the throttling kernel 200 may be utilized to manage requests provided by tenants to any number of computing resources, for the purposes of brevity, the following descriptions will be limited to a computing resource such as a physical storage medium (e.g., hard disk).
Based upon this threshold information, in some instances, the priority module 205 may be executed to generate a global priority value for each tenant 110A-N within the multi-tenant system 105. The global priority value defines an acceptable level of usage, relative to other tenants, that may be generated by each tenant. The relative global priority values of tenants determine their relative access to the computing resource, such as a hard disk. The use of global priority values will be discussed in greater detail infra.
In other embodiments, the priority module 205 may generate a tenant specific priority value for each tenant in the multi-tenant system. A tenant specific priority value may be generated by a pricing schedule provided by the multi-tenant system operator. For example, a customer may obtain higher priority by purchasing additional computing resources from the operator. In other cases, increased priority may be obtained by customers purchasing multiple tenants, or other price-based methods that would be known to one of ordinary skill in the art.
The priority module 205 may also distribute available requests across the tenants according to a weighting that is based upon their respective priority values. That is, a tenant with greater priority may receive a greater percentage of the available requests for the computing resource.
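The proportional distribution described above could be sketched as follows; this is an illustrative assumption about how a priority-weighted split might be computed, with hypothetical names (distribute_requests, tenant_a, and so forth) and the 1,000-request figure taken from the earlier example.

```python
def distribute_requests(available_requests, priorities):
    """priorities: dict mapping tenant_id to a numeric priority value."""
    total_priority = sum(priorities.values())
    # Each tenant's share is proportional to its priority weight.
    return {tenant: available_requests * value / total_priority
            for tenant, value in priorities.items()}

# Example: a disk limited to roughly 1,000 requests per second, shared by
# three tenants with priorities 2, 1, and 1.
shares = distribute_requests(1000, {"tenant_a": 2, "tenant_b": 1, "tenant_c": 1})
# shares == {"tenant_a": 500.0, "tenant_b": 250.0, "tenant_c": 250.0}
```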
In some instances, the priority module 205 may not consider a priority for a tenant that has not generated an I/O request or other access to a computing resource within a given timespan. Moreover, these tenants are not considered when comparing global priorities to determine preferential access to the computing resource. Such provisioning ensures that the computing resource is not idle and is being utilized to its fullest potential.
Once priorities have been established for the tenants, the tenant monitor module 210 may be executed to monitor the I/O requests generated by each of the tenants. These I/O requests represent workload that will be placed upon the computing resource when transmitted to the resource. For example, the I/O requests may include read and write requests for the physical disk that were generated by the tenants. The tenant monitor module 210 may obtain raw request numbers for each tenant within the system. By way of non-limiting example, the tenant monitor module 210 may continually obtain raw data from a tenant that includes all I/O requests that were generated by the tenant in the last two seconds.
Once the raw data has been gathered, the metric generator 215 may be executed to calculate usage metrics for each of the tenants. Usage metrics are generated by processing the raw data for a tenant. In some embodiments, the metric generator 215 processes the raw request data generated during a timespan to generate an automatically updated usage metric. The metric may be calculated as the product of the aggregate number of read requests for a tenant over the timespan and the average read latency relative to the computing resource, plus the product of the number of write requests and the average write latency relative to the computing resource.
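Expressed directly in code, the calculation above amounts to the following; the function and parameter names are illustrative, not identifiers from the patent.

```python
def usage_metric(read_requests, avg_read_latency, write_requests, avg_write_latency):
    # (aggregate reads x average read latency) + (writes x average write latency)
    return (read_requests * avg_read_latency) + (write_requests * avg_write_latency)
```

Weighting request counts by their observed latencies means that slow operations count for more than fast ones, so the metric tracks actual pressure on the resource rather than raw request volume alone.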
It will be understood that the usage metric has been referred to as an “automatically updated” metric because the metric generator continually receives raw data from the tenant and updates the usage metric to continually measure the I/O requests generated by a tenant in near real-time. That is, I/O requests for a tenant are typically a fluctuating and variable quantity. A tenant may have periods of high or sustained I/O request generation and may also have periods of relatively little or no I/O request generation. Monitoring and automatically processing the I/O requests generated by the tenants helps ensure that access to the computing resource may be fairly distributed across the tenants as their I/O requests fluctuate.
The metric generator 215 may weigh the raw data based upon temporal aspects of the raw data. For example, new I/O requests may be given greater weight than relatively older I/O requests. Therefore, in some instances, the metric generator 215 may calculate an exponentially decayed average which may be included in the aggregate numbers of read and write requests. It is noteworthy that this average may include I/O requests from a tenant that occurred prior to current I/O requests relative to the timespan of interest. Current I/O requests include the most recent requests generated by the tenant.
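One possible form of the temporal weighting described above is an exponentially decayed average, sketched below under assumed parameters (the decay factor of 0.5 and the function name are illustrative, not values specified in the description).

```python
def decayed_average(samples, decay=0.5):
    """samples: per-interval request counts ordered oldest to newest."""
    avg = 0.0
    for sample in samples:
        # Older contributions shrink geometrically; the newest sample dominates.
        avg = decay * avg + (1.0 - decay) * sample
    return avg

# Recent bursts weigh more than old ones:
# decayed_average([100, 100, 0, 0]) == 18.75, while
# decayed_average([0, 0, 100, 100]) == 75.0
```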
The analytics module 220 may be executed to compare the current usage metric for a tenant to the priority established for the tenant. The analytics module 220 may repeat the comparison for each tenant in the system. If the usage metric for a tenant exceeds its priority, the throttling module 225 may be executed to throttle the tenant. Throttling may include imposing a delay on the communication or transmission of I/O requests to the computing resource. The delay may be based upon the severity of the overuse of the computing resource by the tenant. That is, the greater the difference between the usage metric and the priority, the more delay may be imposed upon the tenant. The exact amount of delay is configurable, but an exemplary delay may range from approximately zero to 200 microseconds in duration.
Because the usage metric for a tenant may be continually or automatically updated, the delay duration imposed upon the tenant may be increased or decreased in a stepwise manner. For example, if the analytics module 220 determines that a tenant is exceeding its allotted I/O request quota (e.g., priority), the tenant may be throttled by imposing a delay (e.g., ten microseconds) on the transmission of its requests to the computing resource. Subsequent updating of the usage metric some time later may indicate that the tenant is still exceeding its priority. Therefore, the throttling module 225 may increase the delay duration by another ten microseconds. The throttling module 225 may also decrease the delay duration in a stepwise fashion as the difference between the usage metric and the priority begins to recede. The ten-microsecond step up or down is configurable and is merely a reference amount for this example.
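A minimal sketch of this stepwise adjustment follows; the ten-microsecond step and 200-microsecond cap mirror the example values above, and the function name is illustrative.

```python
STEP_US = 10        # exemplary step size from the description; configurable
MAX_DELAY_US = 200  # exemplary upper bound; configurable

def adjust_delay(current_delay_us, usage_metric, priority):
    # Grow the delay while the tenant exceeds its priority; shrink it as the
    # difference between usage metric and priority recedes.
    if usage_metric > priority:
        return min(current_delay_us + STEP_US, MAX_DELAY_US)
    return max(current_delay_us - STEP_US, 0)
```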
The ability of the throttling kernel 200 to selectively throttle the I/O requests of the tenants ensures that access to computing resources is allotted fairly across the tenants, according to priority. Furthermore, these types of short microsecond delay durations will not create deleterious performance issues for the tenants.
Upon throttling of a tenant, the interleaving module 230 may be executed to transmit I/O requests for the other tenants to the computing resource during the duration of the delay imposed against the tenant that exceeded its priority. That is, I/O requests generated by other tenants may be interleaved in between I/O requests generated by the tenant that has exceeded its usage. This functionality is particularly important when a tenant has a relatively high priority relative to the other tenants, or when a tenant alone is capable of monopolizing access to the computing resource, for example, by large transfers of write requests to a storage medium.
As mentioned above, in some embodiments, the throttling kernel 200 may assign a global priority to each tenant within the multi-tenant system. The analytics module 220 may compare the raw request data for each tenant to the global priority value and throttle tenants that generate requests for the computing resource that exceed the global priority. In other embodiments, the throttling kernel 200 may simply compare raw request numbers for each of the tenants relative to one another and selectively throttle tenants as their raw request numbers increase or decrease over time.
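The simpler raw-count embodiment could be sketched as follows; selecting only the single heaviest requester (how_many=1) is an assumed default, and the names are illustrative.

```python
def tenants_to_throttle(raw_counts, how_many=1):
    """raw_counts: dict mapping tenant_id to its raw request count for the timespan."""
    # Rank tenants by raw request count, heaviest first, and pick the top few.
    ranked = sorted(raw_counts, key=raw_counts.get, reverse=True)
    return ranked[:how_many]
```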
The method may then include a step 310 of gathering raw request data for each tenant along with a step 315 of processing the raw request data to generate an automatically updating usage metric for each tenant that includes calculations performed on the raw data over time. As stated before, the usage metric may be weighted using an exponentially decayed average.
The method may also include a step 320 of comparing the usage metric for a tenant to the priority for the tenant along with a step 325 of dynamically throttling requests generated by the tenant based upon the comparison. Again, as mentioned previously, the duration of delay applied to the requests of a tenant may be selectively varied as the usage metric changes over time.
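Taken together, steps 310 through 325 might look like the hedged end-to-end sketch below, which reuses the illustrative usage_metric and adjust_delay helpers sketched earlier; the shape of raw_data is an assumption for demonstration only.

```python
def throttling_pass(raw_data, priorities, delays):
    """raw_data: per-tenant request counts and latencies for the latest timespan."""
    for tenant, raw in raw_data.items():                                  # step 310
        metric = usage_metric(raw["reads"], raw["avg_read_latency"],
                              raw["writes"], raw["avg_write_latency"])    # step 315
        delays[tenant] = adjust_delay(delays.get(tenant, 0),
                                      metric, priorities[tenant])         # steps 320-325
    return delays
```

Running such a pass on each sampling interval yields the continually updated, selectively varied delays described above.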
The usage metric may be utilization-based, but it can also be based on other metric types, for example, I/O operations per second (IOPS), a sum of latency, or other metrics. It is noteworthy that utilization, in some contexts (e.g., queuing theory), has a specific meaning: the time a resource was busy.
In some embodiments, the users in the virtualized environment have full I/O access at the start regardless of the size of their virtual machine or zone or their assigned priority. Subsequently, the resources can be limited by blocking access for variable periods of time. This approach may be analogous to metering lights on a freeway entrance. Sometimes the lights are green when the user needs resources, and other times the user has to wait. This time sharing may be accomplished, in some embodiments, in a virtualized hypervisor environment.
The components shown in
Mass storage device 430, which may be implemented with a magnetic disk drive, an optical disk drive, or other storage media, is a non-volatile storage device for storing data and instructions for use by processor unit 410. Mass storage device 430 can store the system software for implementing embodiments of the present technology for purposes of loading that software into main memory store 420.
Portable storage device 440 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk or digital video disc, to input and output data and code to and from the computing system 400 of
Input devices 460 provide a portion of a user interface. Input devices 460 may include an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. Additionally, the system 400 as shown in
Display system 470 may include a liquid crystal display (LCD) or other suitable display device. Display system 470 receives textual and graphical information, and processes the information for output to the display device.
Peripherals 480 may include any type of computer support device to add additional functionality to the computing system. Peripheral device(s) 480 may include a modem or a router.
The components contained in the computing system 400 of
Some of the above-described functions may be composed of instructions that are stored on storage media (e.g., computer-readable medium). The instructions may be retrieved and executed by the processor. Some examples of storage media are memory devices, tapes, disks, SSDs (solid-state drives), and the like. The instructions are operational when executed by the processor to direct the processor to operate in accord with the technology. Those skilled in the art are familiar with instructions, processor(s), and storage media.
It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the technology. The terms “computer-readable storage medium” and “computer-readable storage media” as used herein refer to any medium or media that participate in providing instructions to a CPU for execution. Such media can take many forms, including, but not limited to, non-volatile media, volatile media and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as a fixed disk. Volatile media include dynamic memory, such as system RAM. Transmission media include coaxial cables, copper wire and fiber optics, among others, including the wires that comprise one embodiment of a bus. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, digital video disk (DVD), any other optical medium, any other physical medium with patterns of marks or holes, a RAM, a PROM, an EPROM, an EEPROM, a FLASHEPROM, any other memory chip or data exchange adapter, a carrier wave, or any other medium from which a computer can read.
The above description is illustrative and not restrictive. Many variations of the technology will become apparent to those of skill in the art upon review of this disclosure. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.
In the foregoing specification, the invention is described with reference to specific embodiments thereof, but those skilled in the art will recognize that the invention is not limited thereto. Various features and aspects of the above-described invention can be used individually or jointly. Further, the invention can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. It will be recognized that the terms “comprising,” “including,” and “having,” as used herein, are specifically intended to be read as open-ended terms of art.
This nonprovisional application is a continuation-in-part application that claims priority benefit of U.S. application Ser. No. 13/340,461 filed on Dec. 29, 2011, and this application also claims priority benefit of U.S. Provisional Patent Application No. 61/782,697, filed Mar. 14, 2013, the contents of which are hereby incorporated by reference.