The present application is related to U.S. application Ser. No. 11/276,852, incorporated herein by reference.
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the United States Patent & Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention relates to a resource management system and more specifically to a system and method of providing access to on-demand compute resources.
Managers of clusters desire maximum return on investment, often meaning high system utilization and the ability to deliver various qualities of service to various users and groups. A cluster is typically defined as a parallel computer that is constructed of commodity components and runs commodity software as its system software. A cluster contains nodes, each containing one or more processors, memory that is shared by all of the processors in the respective node, and additional peripheral devices such as storage disks, connected by a network that allows data to move between nodes. A cluster is one example of a compute environment. Other examples include a grid, which is loosely defined as a group of clusters, and a computer farm, which is another organization of computers for processing.
Often a set of resources organized in a cluster or a grid may have jobs to be submitted to the resources that require more capability than the set of resources has available. In this regard, there is a need in the art to be able to easily, efficiently and on demand utilize new resources or different resources to handle a job. The concept of “on-demand” compute resources has been developing in the high performance computing community recently. An on-demand computing environment enables companies to procure compute power for average demand and then contract remote processing power to help in peak loads or to offload all their compute needs to a remote facility. Several reference books having background material related to on-demand computing or utility computing include Mike Ault and Madhu Tumma, Oracle 10g Grid & Real Application Clusters, Rampant TechPress, 2004, and Guy Bunker and Darren Thomson, Delivering Utility Computing: Business-driven IT Optimization, John Wiley & Sons Ltd, 2006.
In Bunker and Thomson, section 3.3 on page 32 is entitled “Connectivity: The Great Enabler,” wherein the authors discuss how the interconnecting of computers will dramatically increase their usefulness. This disclosure addresses that issue. There exists in the art a need for improved solutions to enable communication and connectivity with an on-demand high performance computing center.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth herein.
Various embodiments of the invention include, but are not limited to, methods, systems, computing devices, clusters, grids and computer-readable media that perform the processes and steps described herein.
An on-demand compute environment comprises a plurality of nodes within an on-demand compute environment available for provisioning and a slave management module operating on a dedicated node within the on-demand compute environment, wherein upon instructions from a master management module at a local compute environment, the slave management module modifies at least one node of the plurality of nodes. Methods and computer readable media are also disclosed for managing an on-demand compute environment.
A benefit of the approaches disclosed herein is a reduction in unnecessary costs of building infrastructure to accommodate peak demand. Thus, customers pay for the extra processing power they need only during those times when they need it.
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended documents and drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings.
Various embodiments are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the invention.
This disclosure relates to the access and management of on-demand or utility computing resources at a hosting center.
Products such as Moab provide an essential service for optimization of a local compute environment. Moab provides an analysis into how and when local resources, such as software and hardware devices, are being used for the purposes of charge-back, planning, auditing, troubleshooting and reporting internally or externally. Such optimization enables the local environment to be tuned to get the most out of the resources in the local compute environment. However, there are times when more resources are needed.
Typically a hosting center 102 will have the following attributes. It allows an organization to provide resources or services to customers where the resources or services are custom-tailored to the needs of the customer. Supporting true utility computing usually requires creating a hosting center 102 with one or more capabilities as follows: secure remote access, guaranteed resource availability at a fixed time or series of times, integrated auditing/accounting/billing services, tiered service level (QoS/SLA) based resource access, dynamic compute node provisioning, full environment management over compute, network, storage, and application/service based resources, intelligent workload optimization, high availability, failure recovery, and automated re-allocation.
A management module 108 such as, by way of example, Moab™ (which may also refer to any Moab product such as the Moab Workload Manager®, Moab Grid Monitor®, etc. from Cluster Resources, Inc.) enables utility computing by allowing compute resources to be reserved, allocated, and dynamically provisioned to meet the needs of internal or external workload. Thus, at peak workload times, the local compute environment does not need to be built out with peak usage in mind. As periodic peak resources are required, triggers can cause overflow to the on-demand environment and thus save money for the customer. The module 108 is able to respond to either manual or automatically generated requests and can guarantee resource availability subject to existing service level agreement (SLA) or quality of service (QoS) based arrangements. As an example,
Other software is shown by way of example, including a distributed resource manager such as Torque 128 and various nodes 130, 132 and 134. The management modules (both master and/or slave) may interact and operate with any resource manager, such as Torque, LSF, SGE, PBS and LoadLeveler, and are agnostic in this regard. Those of skill in the art will recognize these different distributed resource manager software packages.
A hosting master or hosting management module 106 may also be an instance of a Moab software product with hosting center capabilities to enable an organization to dynamically control network, compute, application, and storage resources and to dynamically provision operating systems, security, credentials, and other aspects of a complete end-to-end compute environment. Module 106 is responsible for knowing all the policies, guarantees and promises, and also for managing the provisioning of resources within the utility computing space 102. In one sense, module 106 may be referred to as the “master” module in that it couples and needs to know all of the information associated with both the utility environment and the local environment. However, in another sense it may be referred to as the slave module or provisioning broker wherein it takes instructions from the customer management module 108 for provisioning resources and builds whatever environment is requested in the on-demand center 102. A slave module would have none of its own local policies but rather follows all requests from another management module. For example, when module 106 is the slave module, a master module 108 would submit automated or manual (via an administrator) requests that the slave module 106 simply follows to manage the build-out of the requested environment. Thus, for both IT and end users, a single easily usable interface can increase efficiency, reduce costs including management costs, and improve investments in the local customer environment. The interface to the local environment, which also has the access to the on-demand environment, may be a web interface or access portal as well. Only feasibility restrictions may exist. The customer module 108 would have rights and ownership of all resources. The allocated resources would not be shared but would be dedicated to the requestor. As the slave module 106 follows all directions from the master module 108, any policy restrictions will preferably occur on the master module 108 in the local environment.
The modules also provide data management services that simplify adding resources from across a local environment. For example, if the local environment comprises a wide area network, the management module 108 provides a security model that ensures, when the environment dictates, that administrators can rely on the system even when untrusted resources at a certain level have been added to the local environment or the on-demand environment. In addition, the management modules comply with n-tier web services based architectures, and therefore scalability and reporting are inherent parts of the system. A system operating according to the principles set forth herein also has the ability to track, record and archive information about jobs or other processes that have been run on the system.
A hosting center 102 provides scheduled dedicated resources to customers for various purposes and typically has a number of key attributes: secure remote access, guaranteed resource availability at a fixed time or series of times, tightly integrated auditing/accounting services, varying quality of service levels providing privileged access to a set of users, and node image management allowing the hosting center to restore an exact customer-specific image before enabling access. Resources available to a module 106, which may also be referred to as a provider resource broker, will have both rigid (architecture, RAM, local disk space, etc.) and flexible (OS, queues, installed applications, etc.) attributes. The provider or on-demand resource broker 106 can typically provision (dynamically modify) flexible attributes but not rigid attributes. The provider broker 106 may possess multiple resources, each of a different type with different rigid attributes (i.e., single-processor and dual-processor nodes, Intel nodes, AMD nodes, nodes with 512 MB RAM, nodes with 1 GB RAM, etc.).
This combination of attributes presents unique constraints on a management system. We describe herein how the management modules 108 and 106 are able to effectively manage, modify and provision resources in this environment and provide a full array of services on top of these resources.
Utility-based computing technology allows a hosting center 102 to quickly harness existing compute resources, dynamically co-allocate the resources, and automatically provision them into a seamless virtual cluster. The management modules' advanced reservation and policy management tools provide support for the establishment of extensive service level agreements, automated billing, and instant chart and report creation.
Also shown in
The modules address these and similar issues through the use of the identity manager 112. The identity manager 112 allows the module to exchange information with an external identity management service. As with the module's resource manager interfaces, this service can be a full commercial package designed for this purpose, or something far simpler by which the module obtains the needed information from a web service, text file, or database.
Next attention is turned to the node provisioner 118, and as an example of its operation, the node provisioner 118 can enable the allocation of resources in the hosting center 102 for workload from a local compute environment 104. The customer management module 108 will communicate with the hosting management module 106 to begin the provisioning process. In one aspect, the provisioning module 118 may generate another instance of necessary management software 120 and 122 which will be created in the hosting center environment as well as compute nodes 124 and 126 to be consumed by a submitted job. The new management module 120 is created on the fly, may be associated with a specific request and will preferably be operative on a dedicated node. If the new management module 120 is associated with a specific request or job, then as the job consumes the resources associated with the provisioned compute nodes 124, 126 and becomes complete, the system would remove the management module 120 since it was only created for the specific request. The new management module 120 may connect to other modules such as module 108. The module 120 does not necessarily have to be created in advance but may be generated on the fly as necessary to assist in communication, provisioning and use of the resources in the utility environment 102. For example, the module 106 may go ahead and allocate nodes within the utility computing environment 102 and connect these nodes directly to module 108, but in that case some batch capability may be lost as a tradeoff. The hosting master 128 having the management module 106, identity manager 112 and node provisioner 118 is preferably co-located with the utility computing environment but may be distributed. The management module on the local environment 108 may then communicate directly with the created management module 120 in the hosting center to manage the transfer of workload and consumption of on-demand center resources.
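By way of illustration only, the lifecycle of the dynamically created management module 120 can be sketched in code. The following is a minimal sketch; the class and function names are invented for this example and do not correspond to any actual Moab interface:

```python
class ManagementModule:
    """Stand-in for a management module created on the fly (module 120)."""

    def __init__(self, request_id):
        self.request_id = request_id

    def run(self, job, nodes):
        # In a real system, the job would consume the provisioned nodes here.
        print(f"request {self.request_id}: running {job} on {len(nodes)} nodes")


def handle_request(request_id, job, node_count):
    # The management module is generated on the fly, preferably on a
    # dedicated node, and is associated with this specific request.
    module = ManagementModule(request_id)
    nodes = [f"node-{i}" for i in range(node_count)]  # e.g., nodes 124 and 126
    try:
        module.run(job, nodes)
    finally:
        # Since the module was created only for this request, it is
        # removed once the job completes.
        del module


handle_request("req-1", "weather-analysis", 2)
```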
There are two primary supported usage models: a manual model and an automatic model. In manual mode, utilizing the hosted resources can be as easy as going to a web site, specifying what is needed, selecting one of the available options, and logging in when the virtual cluster is activated. In automatic mode, it is even simpler. To utilize hosted resources, the user simply submits jobs to the local cluster. When the local cluster can no longer provide an adequate level of service, it automatically contacts the utility hosting center, allocates additional nodes, and runs the jobs. The end user is never aware that the hosting center even exists. He merely notices that the cluster is now bigger and that his jobs are being run more quickly.
When a request for additional resources is made from the local environment, either automatically or manually, a client module or client resource broker (which may be, for example, an instance of a management module 108 or 120) will contact the provider resource broker 106 to request resources. It will send information regarding rigid attributes of needed resources as well as the quantity of resources needed, request duration, and request timeframe (i.e., start time, feasible times of day, etc.). It will also send flexible attributes which must be provisioned on the nodes 124, 126. Both flexible and rigid resource attributes can come from explicit workload-specified requirements or from implicit requirements associated with the local or default compute resources. The provider resource broker 106 must indicate if it is possible to locate the requested resources within the specified timeframe for sufficient duration and of the sufficient quantity. This task includes matching rigid resource attributes and identifying one or more provisioning steps required to put in place all flexible attributes.
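The content of such a request can be pictured as a simple data structure. The following is an illustrative sketch only; the field names and example values are assumptions and do not reflect an actual broker protocol:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ResourceRequest:
    """Illustrative request a client resource broker might send to a
    provider resource broker."""
    # Rigid attributes: traits the provider cannot provision (must match).
    rigid: dict = field(default_factory=lambda: {
        "architecture": "x86_64", "ram_mb": 512, "local_disk_gb": 40})
    # Flexible attributes: traits the provider can provision on the nodes.
    flexible: dict = field(default_factory=lambda: {
        "os": "linux", "queues": ["batch"], "applications": ["weather-model"]})
    quantity: int = 20                         # number of nodes needed
    duration: timedelta = timedelta(hours=36)  # how long they are needed
    earliest_start: datetime = datetime(2006, 3, 16, 9, 0)  # request timeframe

request = ResourceRequest()
print(request.quantity, "nodes for", request.duration)
```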
When provider resources are identified and selected, the client resource broker 108 or 120 is responsible for seamlessly integrating these resources with other local resources. This includes reporting resource quantity, state, configuration and load. This further includes automatically enabling a trusted connection to the allocated resources which can perform last-mile customization, data staging, and job staging. Commands are provided to create this connection to the provider resource broker 106, query available resources, allocate new resources, expand existing allocations, reduce existing allocations, and release all allocated resources.
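The command set described above suggests a small broker interface. The sketch below is hypothetical; the method names are invented for illustration and are not actual Moab commands:

```python
class ClientResourceBroker:
    """Hypothetical client-side interface mirroring the commands above."""

    def __init__(self):
        self.provider = None
        self.allocations = {}          # allocation id -> node count

    def connect(self, provider_url):
        # Establish a trusted connection to the provider resource broker.
        self.provider = provider_url

    def query_available(self, quantity):
        # A real broker would query the provider; here we assume availability.
        return quantity

    def allocate(self, alloc_id, nodes):
        self.allocations[alloc_id] = nodes

    def expand(self, alloc_id, extra):
        self.allocations[alloc_id] += extra

    def reduce(self, alloc_id, fewer):
        self.allocations[alloc_id] = max(0, self.allocations[alloc_id] - fewer)

    def release_all(self):
        self.allocations.clear()

broker = ClientResourceBroker()
broker.connect("https://hosting-center.example")
broker.allocate("alloc-1", broker.query_available(20))
broker.expand("alloc-1", 5)      # grow the existing allocation
broker.reduce("alloc-1", 10)     # shrink it
broker.release_all()             # release all allocated resources
```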
In most cases, the end goal of a hosting center 102 is to make available to a customer a complete, secure, packaged environment which allows the customer to accomplish one or more specific tasks. This packaged environment may be called a virtual cluster and may consist of the compute, network, data, software, and other resources required by the customer. For successful operation, these resources must be brought together and provisioned or configured so as to provide a seamless environment which allows customers to quickly and easily accomplish their desired tasks.
Another aspect of the invention is the cluster interface. The desired operational model for many environments is providing the customer with a fully automated self-service web interface. Once a customer has registered with the host company, access to a hosting center portal is enabled. Through this interface, customers describe their workload requirements, time constraints, and other key pieces of information. The interface communicates with the backend services to determine when, where, and how the needed virtual cluster can be created and reports back a number of options to the user. The user selects the desired option and can monitor the status of that virtual cluster via web and email updates. When the virtual cluster is ready, web and email notification is provided including access information. The customer logs in and begins working.
The hosting center 102 will have related policies and service level agreements. Enabling access in a first-come, first-served model provides real benefits, but in many cases customers require reliable resource access with guaranteed responsiveness. These requirements may be any performance, resource or time based rule, such as in the following examples: I need my virtual cluster within 24 hours of asking; I want a virtual cluster available from 2 to 4 PM every Monday, Wednesday, and Friday; I want to always have a virtual cluster available and automatically grow/shrink it based on current load; etc.
Quality of service or service level agreement policies allow customers to convert the virtual cluster resources to a strategic part of their business operations greatly increasing the value of these resources. Behind the scenes, a hosting center 102 consists of resource managers, reservations, triggers, and policies. Once configured, administration of such a system involves addressing reported resource failures (i.e., disk failures, network outages, etc) and monitoring delivered performance to determine if customer satisfaction requires tuning policies or adding resources.
The modules associated with the local environment 104 and the hosting center environment 102 may be referred to as a master module 108 and a slave module 106. This terminology relates to the functionality wherein the hosting center 102 receives requests for workload and provisioning of resources from the module 108 and essentially follows those requests. In this regard, the module 108 may be referred to as a client resource broker 108 which will contact a provider resource broker 106 (such as an On-Demand version of Moab).
The management module 108 may also be, by way of example, a Moab Workload Manager® operating in a master mode. The management module 108 communicates with the compute environment to identify resources, reserve resources for consumption by jobs, provision resources and in general manage the utilization of all compute resources within a compute environment. As can be appreciated by one of skill in the art, these modules may be programmed in any programming language, such as C or C++; the choice of language is immaterial to the invention.
In a typical operation, a user or a group submits a job to a local compute environment 104 via an interface to the management module 108. An example of a job is a submission of a computer program that will perform a weather analysis for a television station that requires the consumption of a large amount of compute resources. The module 108 and/or an optional scheduler 128 such as TORQUE, as those of skill in the art understand, manages the reservation of resources and the consumption of resources within the environment 104 in an efficient manner that complies with policies and restrictions. The use of a resource manager like TORQUE 128 is optional and not specifically required as part of the disclosure.
A user or a group of users will typically enter into a service level agreement (SLA) which will define the policies and guarantees for resources on the local environment 104. For example, the SLA may provide that the user is guaranteed 10 processors and 50 GB of hard drive space within 5 hours of a submission of a job request. Associated with any user may be many parameters related to permissions, guarantees, priority level, time frames, expansion factors, and so forth. The expansion factor is a measure of how long the job takes to run on a local environment while sharing the environment with other jobs versus how long it would take if the cluster were dedicated to the job only. It therefore relates to the impact of other jobs on the performance of the particular job. Once a job is submitted, it will sit in a job queue waiting to be inserted into the cluster 104 to consume those resources. The management software will continuously analyze the environment 104 and make reservations of resources to seek to optimize the consumption of resources within the environment 104. The optimization process must take into account all the SLAs of users, other policies of the environment 104 and other factors.
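The expansion factor described above can be made concrete. As a minimal sketch, assuming it is computed as the ratio of a job's run time in the shared environment to its run time on a dedicated cluster (other formulations exist):

```python
def expansion_factor(shared_runtime_hours, dedicated_runtime_hours):
    """Ratio of a job's run time while sharing the environment with other
    jobs to its run time if the cluster were dedicated to the job only.
    A value of 1.0 means other jobs have no impact."""
    return shared_runtime_hours / dedicated_runtime_hours

# A job taking 6 hours in the shared environment that would take 2 hours
# on a dedicated cluster has an expansion factor of 3.0.
print(expansion_factor(6.0, 2.0))
```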
As introduced above, this disclosure provides improvements in the connectivity between a local environment 104 and an on-demand center 102. The challenges that exist in accomplishing such a connection include managing all of the capabilities of the various environments, their various policies, current workload, workload queued up in the job queues and so forth.
As a general statement, disclosed herein is a method and system for customizing an on-demand compute environment based on both implicit and explicit job or request requirements. For example, explicit requirements may be requirements specified with a job, such as a specific number of nodes or processors and a specific amount of memory. Many other attributes or requirements may be explicitly set forth with a job submission, such as requirements set forth in an SLA for that user. Implicit requirements may relate to attributes of the compute environment that the job is expecting because of where it is submitted. For example, the local compute environment 104 may have particular attributes, such as, for example, a certain bandwidth for transmission, memory, software licenses, processors and processor speeds, hard drive memory space, and so forth. Any parameter that may be an attribute of the local environment in which the job is submitted may relate to an implicit requirement. As a local environment 104 communicates with an on-demand environment 102 for the transfer of workload, the implicit and explicit requirements are seamlessly imported into the on-demand environment 102 such that the user's job can efficiently consume resources in the on-demand environment 102 because of the customization of that environment for the job. This seamless communication occurs between a master module 108 and a slave module 106 in the respective environments. As shown in
Part of the seamless communication process includes the analysis and provisioning of resources taking into account the need to identify resources such as hard drive space and bandwidth capabilities to actually perform the transfer of the workload. For example, if it is determined that a job in the queue has an SLA that guarantees resources within 5 hours of the request, and based on the analysis by the management module of the local environment the resources cannot be available for 8 hours, and if such a scenario is a triggering event, then the automatic and seamless connectivity with the on-demand center 102 will include an analysis of how long it will take to provision an environment in the on-demand center that matches or is appropriate for the job to run. That process, of provisioning the environment in the on-demand center 102 and transferring workload from the local environment 104 to the on-demand center 102, may take, for example, 1 hour. In that case, the on-demand center will begin the provisioning process one hour before the 5-hour required time such that the provisioning of the environment and transfer of data can occur to meet the SLA for that user. This provisioning process may involve reserving resources within the on-demand center 102 from the master module 108, as will be discussed more below.
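The timing arithmetic in this example is simple but worth making explicit. A sketch using the numbers above (5-hour SLA guarantee, 8-hour local availability, 1-hour provisioning and transfer time):

```python
from datetime import timedelta

sla_deadline = timedelta(hours=5)        # resources guaranteed within 5 hours
local_availability = timedelta(hours=8)  # earliest the local environment can deliver
provisioning_time = timedelta(hours=1)   # provision on-demand env + transfer workload

# The local environment cannot meet the SLA, so overflow is triggered.
if local_availability > sla_deadline:
    # Begin provisioning one hour before the 5-hour deadline.
    start_at = sla_deadline - provisioning_time
    print(f"begin on-demand provisioning {start_at} after job submission")
```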
Example triggering events may be related to at least one of a resource threshold, a service threshold, workload and a policy threshold, or other factors. Furthermore, the event may be based on all workload associated with the local compute environment, on a subset of workload associated with the compute environment, or on any other subset of a given parameter, or the event may be external to the compute environment, such as a natural disaster, a power outage or a predicted event.
The disclosure below provides for various aspects of this connectivity process between a local environment 104 and an on-demand center 102. The CD submitted with the priority Provisional Patent Application includes source code that carries out this functionality. The various aspects will include an automatic triggering approach to transfer workload from the local environment 104 to the on-demand center 102, a manual “one-click” method of integrating the on-demand compute environment 102 with the local environment 104 and a concept related to reserving resources in the on-demand compute environment 102 from the local compute environment 104.
The first aspect relates to enabling the automatic detection of a triggering event, such as passing a resource threshold or service threshold within the compute environment 104. This process may be dynamic and involve identifying resources in a hosting center, allocating resources and releasing them after consumption. These processes may be automated based on a number of factors, such as: workload and credential performance thresholds; a job's current time waiting in the queue for execution (queuetime) (i.e., allocate if a job has waited more than 20 minutes to receive resources); a job's current expansion factor, which relates to a comparison of the effect that other jobs consuming local resources have on the particular job in comparison to a value if the job were the only job consuming resources in the local environment; a job's current execution load (i.e., allocate if load on a job's allocated resources exceeds 0.9); quantity of backlog workload (i.e., allocate if more than 50,000 proc-hours of workload exist); a job's average response time in handling transactions (i.e., allocate if a job reports it is taking more than 0.5 seconds to process a transaction); a number of failures workload has experienced (i.e., allocate if a job cannot start after 10 attempts); overall system utilization (i.e., allocate if more than 80% of the machine is utilized); and so forth. This is an example list and those of skill in the art will recognize other factors that may be identified as triggering events.
Other triggering events or thresholds may comprise a predicted workload performance threshold. This would relate to the same listing of events above but be applied in the context of predictions made by a management module or customer resource broker.
Another listing of example events that may trigger communication with the hosting center includes, but is not limited to, events such as resource failures including compute nodes, network, storage, license (i.e., including expired licenses); service failures including DNS, information services, web services, database services, security services; external event detected (i.e., power outage or national emergency reported); and so forth. These triggering events or thresholds may be applied to allocate initial resources, expand allocated resources, reduce allocated resources and release all allocated resources. Thus, while the primary discussion herein relates to an initial allocation of resources, these triggering events may cause any number of resource-related actions. Events and thresholds may also be associated with any subset of jobs or nodes (i.e., allocate only if threshold backlog is exceeded on high priority jobs only or jobs from a certain user or project, or allocate resources only if certain service nodes fail or certain licenses become unavailable).
For example, if a threshold of 95% processor consumption is met because 951 of the 1000 processors in the environment are being utilized, then the system (which may or may not include the management module 108) automatically establishes a connection with the on-demand environment 102. Other types of thresholds may also trigger the automatic connection, such as a service level received threshold, a service level predicted threshold, a policy-based threshold, or a threshold or event associated with environment changes such as a resource failure (compute node, network, storage device, or service failures).
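By way of illustration, the thresholds discussed above reduce to a simple evaluation loop. The metric and threshold names below are assumptions for the sketch, not configuration keys of any actual product:

```python
def first_trigger(metrics, thresholds):
    """Return the name of the first triggering condition met, if any."""
    checks = [
        ("utilization", metrics["busy_procs"] / metrics["total_procs"]
            >= thresholds["utilization"]),           # e.g., 951/1000 >= 0.95
        ("queuetime", metrics["max_queue_minutes"]
            >= thresholds["queue_minutes"]),         # e.g., waited > 20 minutes
        ("backlog", metrics["backlog_proc_hours"]
            >= thresholds["backlog_proc_hours"]),    # e.g., > 50,000 proc-hours
    ]
    for name, fired in checks:
        if fired:
            return name
    return None

metrics = {"busy_procs": 951, "total_procs": 1000,
           "max_queue_minutes": 5, "backlog_proc_hours": 1200}
thresholds = {"utilization": 0.95, "queue_minutes": 20,
              "backlog_proc_hours": 50000}
print(first_trigger(metrics, thresholds))   # -> "utilization"
```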
As an example of a service level threshold, an SLA may specify a certain service level requirement for a customer, such as resources available within 5 hours. If an actual threshold is not met, i.e., a job has now waited 5 hours without being able to consume resources, or where a threshold is predicted to not be met, these can be triggering events for communication with the on-demand center. The module 108 then communicates with the slave manager 106 to provision or customize the on-demand resources 102. The two environments exchange the information necessary to create reservations of resources, provision, handle licensing, and so forth, to enable the automatic transfer of jobs or other workload from the local environment 104 to the on-demand environment 102. For a particular task or job, all or part of the workload may be transferred to the on-demand center. Nothing about a user job 110 submitted to a management module 108 changes. The on-demand environment 102 then instantly begins running the job without any change to the job or perhaps even without any knowledge on the part of the submitter.
There are several aspects of the disclosure that are shown in the source code on the CD. One is the ability to exchange information. For example, for the automatic transfer of workload to the on-demand center, the system will import remote classes, configuration policy information and other information from the local scheduler 108 to the slave scheduler 106 for use by the on-demand environment 102. Information regarding the on-demand compute environment, resources, policies and so forth are also communicated from the slave module 106 to the local module 108.
The triggering event for the automatic establishment of communication with the on-demand center and a transfer of workload to the on-demand center may be a threshold that has been passed or an event that occurred. Threshold values may comprise an achieved service level, predicted service level and so forth. For example, a job sitting in a queue for a certain amount of time may trigger a process to contact the on-demand center and transfer that job to the on-demand center to run. If a queue has a certain number of jobs that have not been submitted to the compute environment for processing, if a job has an expansion factor that has a certain value, if a job has failed to start on a local cluster one or more times for whatever reason, then these types of events may trigger communication with the on-demand center. These have been examples of threshold values that when passed will trigger communication with the on-demand environment.
Example events that also may trigger the communication with the on-demand environment include, but are not limited to, events such as the failure of nodes within the environment, storage failure, service failure, license expiration, management software failure, resource manager failure, etc. In other words, any event that may be related to any resource or the management of any resource in the compute environment may be a qualifying event that may trigger workload transfer to an on-demand center. In the license expiration context, if the license in a local environment of a certain software package is going to expire such that a job cannot properly consume resources and utilize the software package, the master module 108 can communicate with the slave module 106 to determine if the on-demand center has the requisite license for that software. If so, then the provisioning of the resources in the on-demand center can be negotiated and the workload transferred wherein it can consume resources under an appropriate legal and licensed framework.
The basis for the threshold or the event that triggers the communication, provisioning and transfer of workload to the on-demand center may be all jobs/workload associated with the local compute environment or a subset of jobs/workload associated with the local compute environment. In other words, the analysis of when an event and/or threshold should trigger the transfer of workload may be based on a subset of jobs. For example, the analysis may be based on all jobs submitted from a particular person or group, or may be based on a certain type of job, such as the subset of jobs that will require more than 5 hours of processing time to run. Any parameter may be defined for the subset of jobs on which the triggering event is based.
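Evaluating a trigger over a subset of jobs, as just described, amounts to filtering the workload before measuring it. A minimal sketch with assumed job fields:

```python
def subset_backlog(jobs, predicate):
    """Sum backlog (in proc-hours) only over jobs in the chosen subset."""
    return sum(job["proc_hours"] for job in jobs if predicate(job))

jobs = [
    {"user": "alice", "proc_hours": 30000, "est_runtime_hours": 12},
    {"user": "bob",   "proc_hours": 4000,  "est_runtime_hours": 2},
]

# Trigger only on the subset of jobs requiring more than 5 hours to run.
long_jobs = lambda job: job["est_runtime_hours"] > 5
if subset_backlog(jobs, long_jobs) > 20000:
    print("trigger overflow for the long-running job subset")
```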
The interaction and communication between the local compute environment and the on-demand compute environment enables an improved process for dynamically growing and shrinking provisioned resource space based on load. This load balancing between the on-demand center and the local environment may be based on thresholds, events, all workload associated with the local environment or a subset of the local environment workload.
Another aspect of the disclosure is the ability to automate data management between two sites. This involves handling data management between the on-demand environment 102 and the local environment 104 in a manner that is transparent to the user. Typically, environmental information will be communicated between the local environment 104 and the on-demand environment 102. In some cases, job information may not need to be communicated because a job may be gathering its own information, say from the Internet, or for other reasons. Therefore, in preparing to provision resources in the on-demand environment, all information or a subset of information is communicated to enable the process. Yet another aspect of the invention relates to a simple and easy mechanism to enable on-demand center integration. This aspect of the invention involves the ability of the user or an administrator to, in a single action like the click of a button or a one-click action, command the integration of on-demand center information and capability into the local resource manager 108.
This feature is illustrated in
Another aspect provides for a method of integrating an on-demand compute environment into a local compute environment. The method comprises receiving a request from an administrator or via an automated command from an event trigger or administrator action to integrate an on-demand compute environment into a local compute environment. In response to the request, local workload information and/or resource configuration information is routed to an on-demand center and an environment is created and customized in the on-demand center that is compatible with workload requirements submitted to the local compute environment. Billing and costing are also automatically integrated and handled.
The exchange and integration of all the necessary information and resource knowledge may be performed in a single action or click to broaden the set of resources that may be available to users who have access initially only to the local compute environment 104. The system may receive the request to integrate an on-demand compute environment into a local compute environment in other manners as well, such as any type of multi-modal request, voice request, graffiti on a touch-sensitive screen request, motion detection, and so forth. Thus the one-click action may be a single tap on a touch sensitive display or a single voice command such as “integrate” or another command or multi-modal input that is simple and singular in nature. In response to the request, the system automatically integrates the local compute environment information with the on-demand compute environment information to make resources from the on-demand compute environment available to requestors of resources in the local compute environment.
The one-click approach relates to the automated approach except that a human is in the middle of the process. For example, if a threshold or a triggering event is passed, an email or a notice may be sent to an administrator with options to allocate resources from the on-demand center. The administrator may be presented with one or more options related to different types of allocations that are available in the on-demand center, and via one click or one action the administrator may select the appropriate action. For example, three options may include 500 processors in 1 hour; 700 processors in 2 hours; and 1000 processors in 10 hours. The options may be intelligent in that they may take into account the particular triggering event, costs of utilizing the on-demand environment, SLAs, policies, and any other parameters to present options that comply with policies and available resources. The administrator may be given a recommended selection based on SLAs, cost, or any other parameters discussed herein but may then choose the particular allocation package for the on-demand center. The administrator also may have an option, without an alert, to view possible allocation packages in the on-demand center if the administrator knows of an upcoming event that is not capable of being detected by the modules, such as a meeting with a group wherein they decide to submit a large job the next day which will clearly require on-demand resources. The one-click approach encapsulates the command line instruction to proceed with the allocation of on-demand resources.
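The administrator-facing options in this example can be modeled as a short list from which a single action makes the selection. A sketch using the three example packages (the recommendation policy shown is an assumption):

```python
options = [
    {"processors": 500,  "available_in_hours": 1},
    {"processors": 700,  "available_in_hours": 2},
    {"processors": 1000, "available_in_hours": 10},
]

def recommend(options, needed_procs):
    """Recommend the soonest package that meets the need."""
    viable = [o for o in options if o["processors"] >= needed_procs]
    return min(viable, key=lambda o: o["available_in_hours"]) if viable else None

# One click: the administrator accepts the recommended package.
print(recommend(options, 600))   # -> 700 processors available in 2 hours
```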
One aspect of the integration of an on-demand environment 102 and a local compute environment 104 is that the overall data appears locally. In other words, the local scheduler 108 will have access to the resources and knowledge of the on-demand environment 102, but those resources, with the appropriate adherence to local policy requirements, are handled locally and appear locally to users and administrators of the local environment 104.
Another aspect of the invention that is enabled with the attached source code is the ability to specify configuration information and feed it down the line. For example, the interaction between the compute environments supports static reservations. A static reservation is a reservation that a user or an administrator cannot change, remove or destroy. It is a reservation that is associated with the resource manager 108 itself. A static reservation blocks out time frames when resources are not available for other uses. For example, if it takes an hour to provision resources so that a compute environment can have workload run on (or consume) them, then the module 108 may make a static reservation of resources for the provisioning process. The module 108 will locally create a static reservation for the provisioning component of running the job. The module 108 will report on these constraints associated with the created static reservation within the on-demand compute environment.
Then, the module 108 will communicate with the slave module 106 if on-demand resources are needed to run a job. The module 108 communicates with the slave module 106, identifies what resources are needed (20 processors and 512 MB of memory, for example) and inquires when those resources can be available. Assume that module 106 responds that the processors and memory will be available in one hour and that the module 108 can have those resources for 36 hours. Once all the appropriate information has been communicated between the modules 106 and 108, module 108 creates a static reservation to block the first part of the resources, which requires the one hour of provisioning. The module 108 may also block out the resources with a static reservation from hour 36 to infinity until the resources go away. Therefore, from zero to one hour is blocked out by a static reservation, and from the end of the 36 hours to infinity is blocked out. In this way, the scheduler 108 can optimize the on-demand resources and ensure that they are available for local workloads. The communication between the modules 106 and 108 is preferably performed via tunneling.
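The bookkeeping in this example can be pictured as two blocked-out windows surrounding the usable 35-hour span. The sketch below is illustrative only and is not actual reservation syntax:

```python
import math

provisioning_end = 1.0    # hours from now: provisioning takes the first hour
allocation_end = 36.0     # resources go away after 36 hours

# Static reservations (which users cannot change or remove) block both ends.
static_reservations = [
    (0.0, provisioning_end),      # hour 0 to 1: provisioning in progress
    (allocation_end, math.inf),   # hour 36 onward: resources unavailable
]

def usable(t_hours):
    """True if local workload may consume the on-demand resources at time t."""
    return not any(lo <= t_hours < hi for lo, hi in static_reservations)

print(usable(0.5), usable(12.0), usable(40.0))   # False True False
```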
Another aspect relates to receiving requests or information associated with resources in an on-demand center. An example will illustrate. Assume that a company has a reservation of resources within an on-demand center but then finds out that its budget is cut for the year. There is a mechanism for an administrator to enter information such as a request for a cancellation of a reservation so that the company does not have to pay for the consumption of those resources. Any type of modification of the on-demand resources may be contemplated here. This process involves translating a current or future state of the environment for a requirement of the modification of usable resources. Another example is where a group determines that it will run a large job over the weekend that it knows will need more resources than the local environment provides. An administrator can submit to the local resource broker 108 information associated with a parameter, such as a request for resources, and the local broker 108 will communicate with the hosting center 106, and the necessary resources can be reserved in the on-demand center even before the job is submitted to the local environment.
The modification of resources within the on-demand center may be an increase, decrease, or cancellation of resources or reservations for resources. The parameters may be a direct request for resources or a modification of resources, or may be a change in an SLA which then may trigger other modifications. For example, if an SLA prevented a user from obtaining more than 500 nodes in an on-demand center and a current reservation has maximized this request, a change in the SLA that extended this parameter may automatically cause the module 106 to increase the reservation of nodes according to the modified SLA. Changing policies in this manner may or may not affect the resources in the on-demand center.
Receiving resource requirement information may be based on user specification, or on current or predicted workload. The specification of resources may be fully explicit, or may be partially or fully implicit based on workload or on a virtual private cluster (VPC) package concept, where a VPC package can include aspects of an allocated or provisioned support environment and adjustments to resource request timeframes, including pre-allocation, allocation duration, and post-allocation timeframe adjustments. The Application incorporated above provides information associated with the VPC that may be utilized in many respects in this invention. The reserved resources may be associated with provisioning or customizing the delivered compute environment. A reservation may involve the co-allocation of resources including any combination of compute, network, storage, license, or service resources (i.e., parallel database services, security services, provisioning services) as part of a reservation across multiple different resource types. Also, the co-allocation of resources over disjoint timeframes to improve availability and utilization of resources may be part of a reservation or a modification of resources. Resources may also be reserved with automated failure handling and resource recovery.
Another feature associated with reservations of resources within the on-demand environment is the use of provisioning padding. This is an alternate approach to the static reservation discussed above. For example, if a reservation of resources would require 2 hours of processing time for 5 nodes, then that reservation may be created in the on-demand center as directed by the client resource broker 108. As part of that same reservation or as part of a separate process, the reservation may be modified or adjusted to increase its duration to accommodate provisioning overhead and clean-up processes. Therefore, there may need to be ½ hour of time in advance of the beginning of the two-hour block wherein data transmission, operating system setup, or any other provisioning step needs to occur. Similarly, at the end of the two hours, there may need to be 15 minutes to clean up the nodes and transmit processed data to storage or back to the local compute environment. Thus, an adjustment of the reservation may occur to account for this provisioning in the on-demand environment. This may or may not occur automatically; for example, the user may request resources for 2 hours and the system may automatically analyze the job submitted or utilize other information to automatically adjust the reservation for the provisioning needs. The administrator may also understand the provisioning needs and specifically request a reservation with provisioning pads on one or both ends of the reservation.
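Provisioning padding, as described, simply widens the requested window on one or both ends. A sketch with the numbers from the example (a 2-hour reservation with a half-hour pre-pad and a 15-minute post-pad; the start time is illustrative):

```python
from datetime import datetime, timedelta

requested_start = datetime(2006, 3, 16, 10, 0)    # illustrative start time
requested_duration = timedelta(hours=2)           # what the user asked for

pre_pad = timedelta(minutes=30)   # data staging, OS setup before the job
post_pad = timedelta(minutes=15)  # node cleanup, data transmission after

# The reservation actually created covers the padded window.
reservation_start = requested_start - pre_pad
reservation_end = requested_start + requested_duration + post_pad
print(reservation_start, "->", reservation_end)   # 2 h 45 m in total
```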
A job may also be broken into component parts and only one aspect of the job transferred to an on-demand center for processing. In that case, the modules will work together to enable co-allocation of resources across local resources and on-demand resources. For example, memory and processors may be allocated in the local environment while disk space is allocated in the on-demand center. In this regard, the local management module could request the particular resources needed for the co-allocation from the on-demand center, and when the job is submitted for processing, that portion of the job would consume on-demand center resources while the remaining portion of the job consumes local resources. This also may be a manual or automated process to handle the co-allocation of resources.
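Co-allocation across the two environments can be pictured as partitioning a job's resource requirements. The split below follows the example above (processors and memory local, disk on-demand) and is illustrative only:

```python
job_needs = {"processors": 16, "memory_gb": 64, "disk_gb": 2000}

# Memory and processors are allocated in the local environment, while
# disk space is allocated in the on-demand center, as in the example.
local_allocation = {k: job_needs[k] for k in ("processors", "memory_gb")}
on_demand_allocation = {"disk_gb": job_needs["disk_gb"]}

print("local:", local_allocation)
print("on-demand:", on_demand_allocation)
```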
Another aspect relates to the interaction between the master management module 108 and the slave management module 106. Assume a scenario where the local compute environment requests immediate resources from the on-demand center. Via the communication between the local and the on-demand environments, the on-demand environment notifies the local environment that resources are not available for eight hours but provides the information about those resources in the eight hours. At the local environment, the management module 108 may instruct the on-demand management module 106 to establish a reservation for those resources as soon as possible (in eight hours), including, perhaps, provisioning padding for overhead. Thus, although the local environment requested immediate resources from the on-demand center, the best that could be done in this case is a reservation of resources in eight hours given the provisioning needs and other workload and jobs running on the on-demand center. Thus, jobs running or in the queue at the local environment will have an opportunity to tap into the reservation; given a variety of parameters, say, job number 12 may have priority or an opportunity to get a first choice of those reserved resources.
With reference to
Although the exemplary environment described herein employs the hard disk, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, memory cartridges, random access memories (RAMs), read only memory (ROM), and the like, may also be used in the exemplary operating environment. The system above provides an example server or computing device that may be utilized and networked with a cluster, clusters or a grid to manage the resources according to the principles set forth herein. It is also recognized that other hardware configurations may be developed in the future upon which the method may be operable.
Embodiments within the scope of the present invention may also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media.
Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
Those of skill in the art will appreciate that other embodiments of the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. As can also be appreciated, the compute environment itself, being managed according to the principles of the invention, may be an embodiment of the invention. Thus, separate embodiments may include an on-demand compute environment, a local compute environment, both of these environments together as a more general compute environment, and so forth. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. Accordingly, the scope of the claims should be governed by the claims and their equivalents below rather than by any particular example in the specification.
The present application is a continuation of U.S. patent application Ser. No. 13/758,164, filed Feb. 4, 2013, which is a continuation of U.S. patent application Ser. No. 12/752,622, filed Apr. 1, 2010, now U.S. Pat. No. 8,370,495, issued Feb. 5, 2013, which is a continuation of U.S. patent application Ser. No. 11/276,856, filed Mar. 16, 2006, now U.S. Pat. No. 7,698,430, issued Apr. 13, 2010, which claims priority to U.S. Provisional Application No. 60/662,240 filed Mar. 16, 2005, the contents of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4215406 | Gomola et al. | Jul 1980 | A |
4412288 | Herman | Oct 1983 | A |
4525780 | Bratt et al. | Jun 1985 | A |
4532893 | Day et al. | Aug 1985 | A |
4553202 | Trufyn | Nov 1985 | A |
4677614 | Circo | Jun 1987 | A |
4850891 | Walkup et al. | Jul 1989 | A |
4852001 | Tsushima et al. | Jul 1989 | A |
4943932 | Lark et al. | Jul 1990 | A |
5146561 | Carey et al. | Sep 1992 | A |
5257374 | Hammer et al. | Oct 1993 | A |
5299115 | Fields et al. | Mar 1994 | A |
5325526 | Cameron et al. | Jun 1994 | A |
5349682 | Rosenberry | Sep 1994 | A |
5377332 | Entwistle et al. | Dec 1994 | A |
5495533 | Linehan et al. | Feb 1996 | A |
5542000 | Semba | Jul 1996 | A |
5598536 | Slaughter et al. | Jan 1997 | A |
5600844 | Shaw et al. | Feb 1997 | A |
5651006 | Fujino et al. | Jul 1997 | A |
5675739 | Eilert et al. | Oct 1997 | A |
5701451 | Rogers et al. | Dec 1997 | A |
5732077 | Whitehead | Mar 1998 | A |
5737009 | Payton | Apr 1998 | A |
5761433 | Billings | Jun 1998 | A |
5761484 | Agarwal et al. | Jun 1998 | A |
5774660 | Brendel et al. | Jun 1998 | A |
5774668 | Choquier et al. | Jun 1998 | A |
5781624 | Mitra et al. | Jul 1998 | A |
5799174 | Muntz et al. | Aug 1998 | A |
5801985 | Roohparvar et al. | Sep 1998 | A |
5826239 | Du et al. | Oct 1998 | A |
5828888 | Kozaki et al. | Oct 1998 | A |
5854887 | Kindell et al. | Dec 1998 | A |
5874789 | Su | Feb 1999 | A |
5911143 | Deinhart et al. | Jun 1999 | A |
5920545 | Räsänen et al. | Jul 1999 | A |
5930167 | Lee et al. | Jul 1999 | A |
5935293 | Detering et al. | Aug 1999 | A |
5961599 | Kalavade et al. | Oct 1999 | A |
5978356 | Elwalid et al. | Nov 1999 | A |
5987611 | Freund | Nov 1999 | A |
6006192 | Cheng et al. | Dec 1999 | A |
6012052 | Altschuler et al. | Jan 2000 | A |
6052707 | D'Souza | Apr 2000 | A |
6078953 | Vaid et al. | Jun 2000 | A |
6085238 | Yuasa et al. | Jul 2000 | A |
6097882 | Mogul | Aug 2000 | A |
6108662 | Hoskins et al. | Aug 2000 | A |
6151598 | Shaw et al. | Nov 2000 | A |
6161170 | Burger et al. | Dec 2000 | A |
6175869 | Ahuja et al. | Jan 2001 | B1 |
6182142 | Win et al. | Jan 2001 | B1 |
6185575 | Orcutt | Feb 2001 | B1 |
6185601 | Wolff | Feb 2001 | B1 |
6195678 | Komuro | Feb 2001 | B1 |
6201611 | Carter et al. | Mar 2001 | B1 |
6202080 | Lu et al. | Mar 2001 | B1 |
6223202 | Bayeh | Apr 2001 | B1 |
6226677 | Slemmer | May 2001 | B1 |
6247056 | Chou et al. | Jun 2001 | B1 |
6253230 | Couland et al. | Jun 2001 | B1 |
6259675 | Honda | Jul 2001 | B1 |
6289382 | Bowman-Amuah | Sep 2001 | B1 |
6314114 | Coyle et al. | Nov 2001 | B1 |
6317787 | Boyd et al. | Nov 2001 | B1 |
6327364 | Shaffer et al. | Dec 2001 | B1 |
6330562 | Boden et al. | Dec 2001 | B1 |
6330605 | Christensen et al. | Dec 2001 | B1 |
6338085 | Ramaswamy | Jan 2002 | B1 |
6338112 | Wipfel et al. | Jan 2002 | B1 |
6339717 | Baumgartl et al. | Jan 2002 | B1 |
6343311 | Nishida et al. | Jan 2002 | B1 |
6343488 | Hackfort | Feb 2002 | B1 |
6345287 | Fong et al. | Feb 2002 | B1 |
6351775 | Yu | Feb 2002 | B1 |
6353844 | Bitar et al. | Mar 2002 | B1 |
6363434 | Eytchison | Mar 2002 | B1 |
6363488 | Ginter et al. | Mar 2002 | B1 |
6366945 | Fong et al. | Apr 2002 | B1 |
6370584 | Bestavros et al. | Apr 2002 | B1 |
6374254 | Cochran et al. | Apr 2002 | B1 |
6385302 | Antonucci et al. | May 2002 | B1 |
6392989 | Jardetzky et al. | May 2002 | B1 |
6393569 | Orenshteyn | May 2002 | B1 |
6393581 | Friedman et al. | May 2002 | B1 |
6404768 | Basak et al. | Jun 2002 | B1 |
6418459 | Gulick | Jul 2002 | B1 |
6434568 | Bowman-Amuah | Aug 2002 | B1 |
6438125 | Brothers | Aug 2002 | B1 |
6438134 | Chow et al. | Aug 2002 | B1 |
6438594 | Bowman-Amuah | Aug 2002 | B1 |
6452924 | Golden et al. | Sep 2002 | B1 |
6453383 | Stoddard et al. | Sep 2002 | B1 |
6463454 | Lumelsky et al. | Oct 2002 | B1 |
6464261 | Dybevik et al. | Oct 2002 | B1 |
6466980 | Lumelsky et al. | Oct 2002 | B1 |
6477580 | Bowman-Amuah | Nov 2002 | B1 |
6487390 | Virine et al. | Nov 2002 | B1 |
6490432 | Wegener et al. | Dec 2002 | B1 |
6502135 | Munger et al. | Dec 2002 | B1 |
6520591 | Jun et al. | Feb 2003 | B1 |
6529499 | Doshi et al. | Mar 2003 | B1 |
6564261 | Gudjonsson et al. | May 2003 | B1 |
6571215 | Mahapatro | May 2003 | B1 |
6571391 | Acharya et al. | May 2003 | B1 |
6578068 | Bowman-Amuah | Jun 2003 | B1 |
6587469 | Bragg | Jul 2003 | B1 |
6600898 | De Bonet et al. | Jul 2003 | B1 |
6601234 | Bowman-Amuah | Jul 2003 | B1 |
6606660 | Bowman-Amuah | Aug 2003 | B1 |
6622168 | Datta | Sep 2003 | B1 |
6626077 | Gilbert | Sep 2003 | B1 |
6628649 | Raj et al. | Sep 2003 | B1 |
6629081 | Cornelius et al. | Sep 2003 | B1 |
6633544 | Rexford et al. | Oct 2003 | B1 |
6640238 | Bowman-Amuah | Oct 2003 | B1 |
6651098 | Carroll et al. | Nov 2003 | B1 |
6661787 | O'Connell et al. | Dec 2003 | B1 |
6724733 | Schuba et al. | Apr 2004 | B1 |
6735188 | Becker et al. | May 2004 | B1 |
6738736 | Bond | May 2004 | B1 |
6772211 | Lu et al. | Aug 2004 | B2 |
6775701 | Pan | Aug 2004 | B1 |
6779016 | Aziz et al. | Aug 2004 | B1 |
6781990 | Puri et al. | Aug 2004 | B1 |
6785724 | Drainville | Aug 2004 | B1 |
6816903 | Rakoshitz et al. | Nov 2004 | B1 |
6816905 | Sheets et al. | Nov 2004 | B1 |
6857020 | Chaar et al. | Feb 2005 | B1 |
6862606 | Major et al. | Mar 2005 | B1 |
6868097 | Soda et al. | Mar 2005 | B1 |
6874031 | Corbeil | Mar 2005 | B2 |
6928471 | Pabari et al. | Aug 2005 | B2 |
6934702 | Faybishenko et al. | Aug 2005 | B2 |
6947982 | McGann et al. | Sep 2005 | B1 |
6950833 | Costello et al. | Sep 2005 | B2 |
6971098 | Khare et al. | Nov 2005 | B2 |
6978310 | Rodriguez et al. | Dec 2005 | B1 |
7013322 | Lahr | Mar 2006 | B2 |
7020719 | Grove et al. | Mar 2006 | B1 |
7035854 | Hsiao et al. | Apr 2006 | B2 |
7058070 | Tran et al. | Jun 2006 | B2 |
7076717 | Grossman et al. | Jul 2006 | B2 |
7080378 | Noland et al. | Jul 2006 | B1 |
7082606 | Wood et al. | Jul 2006 | B2 |
7085825 | Pishevar et al. | Aug 2006 | B1 |
7085837 | Kimbrel et al. | Aug 2006 | B2 |
7085893 | Krissell et al. | Aug 2006 | B2 |
7089294 | Baskey et al. | Aug 2006 | B1 |
7099933 | Wallace et al. | Aug 2006 | B1 |
7100192 | Igawa et al. | Aug 2006 | B1 |
7102996 | Amdahl et al. | Sep 2006 | B1 |
7103625 | Hipp et al. | Sep 2006 | B1 |
7124289 | Suorsa | Oct 2006 | B1 |
7126913 | Patel et al. | Oct 2006 | B1 |
7127613 | Pabla et al. | Oct 2006 | B2 |
7127633 | Olson et al. | Oct 2006 | B1 |
7140020 | McCarthy et al. | Nov 2006 | B2 |
7143088 | Green et al. | Nov 2006 | B2 |
7146233 | Aziz et al. | Dec 2006 | B2 |
7146416 | Yoo et al. | Dec 2006 | B1 |
7155478 | Ims et al. | Dec 2006 | B2 |
7155502 | Galloway et al. | Dec 2006 | B1 |
7171415 | Kan et al. | Jan 2007 | B2 |
7177823 | Lam et al. | Feb 2007 | B2 |
7185046 | Ferstl et al. | Feb 2007 | B2 |
7197549 | Salama et al. | Mar 2007 | B1 |
7197559 | Goldstein et al. | Mar 2007 | B2 |
7206819 | Schmidt | Apr 2007 | B2 |
7213065 | Watt | May 2007 | B2 |
7216173 | Clayton et al. | May 2007 | B2 |
7225249 | Barry et al. | May 2007 | B1 |
7228350 | Hong et al. | Jun 2007 | B2 |
7231445 | Aweya et al. | Jun 2007 | B1 |
7242501 | Ishimoto | Jul 2007 | B2 |
7243351 | Kundu | Jul 2007 | B2 |
7249179 | Romero et al. | Jul 2007 | B1 |
7251688 | Leighton et al. | Jul 2007 | B2 |
7275249 | Miller et al. | Sep 2007 | B1 |
7278008 | Case et al. | Oct 2007 | B1 |
7278142 | Bandhole et al. | Oct 2007 | B2 |
7281045 | Aggarwal et al. | Oct 2007 | B2 |
7284109 | Paxie et al. | Oct 2007 | B1 |
7293092 | Sukegawa | Nov 2007 | B2 |
7299294 | Bruck et al. | Nov 2007 | B1 |
7305464 | Phillipi et al. | Dec 2007 | B2 |
7313793 | Traut et al. | Dec 2007 | B2 |
7320025 | Steinberg et al. | Jan 2008 | B1 |
7324555 | Chen et al. | Jan 2008 | B1 |
7328406 | Kalinoski et al. | Feb 2008 | B2 |
7334108 | Case et al. | Feb 2008 | B1 |
7334230 | Chung et al. | Feb 2008 | B2 |
7340578 | Khanzode | Mar 2008 | B1 |
7343467 | Brown et al. | Mar 2008 | B2 |
7350186 | Coleman et al. | Mar 2008 | B2 |
7353276 | Bain et al. | Apr 2008 | B2 |
7356655 | Allen et al. | Apr 2008 | B2 |
7356770 | Jackson | Apr 2008 | B1 |
7366101 | Varier et al. | Apr 2008 | B1 |
7373391 | Iinuma | May 2008 | B2 |
7373524 | Motsinger et al. | May 2008 | B2 |
7380039 | Miloushev et al. | May 2008 | B2 |
7386586 | Headley et al. | Jun 2008 | B1 |
7386611 | Dias et al. | Jun 2008 | B2 |
7389310 | Bhagwan et al. | Jun 2008 | B1 |
7392325 | Grove et al. | Jun 2008 | B2 |
7398216 | Barnett et al. | Jul 2008 | B2 |
7398471 | Rambacher | Jul 2008 | B1 |
7401355 | Supnik et al. | Jul 2008 | B2 |
7415709 | Hipp et al. | Aug 2008 | B2 |
7418518 | Grove et al. | Aug 2008 | B2 |
7421402 | Chang et al. | Sep 2008 | B2 |
7421500 | Talwar et al. | Sep 2008 | B2 |
7426489 | Van Soestbergen et al. | Sep 2008 | B2 |
7426546 | Breiter et al. | Sep 2008 | B2 |
7428540 | Coates et al. | Sep 2008 | B1 |
7433304 | Galloway et al. | Oct 2008 | B1 |
7437460 | Chidambaran et al. | Oct 2008 | B2 |
7437730 | Goyal | Oct 2008 | B2 |
7441261 | Slater et al. | Oct 2008 | B2 |
7451199 | Kandefer et al. | Nov 2008 | B2 |
7451201 | Alex et al. | Nov 2008 | B2 |
7454467 | Girouard et al. | Nov 2008 | B2 |
7461134 | Ambrose | Dec 2008 | B2 |
7463587 | Rajsic et al. | Dec 2008 | B2 |
7464159 | DiLuoffo et al. | Dec 2008 | B2 |
7467225 | Anerousis et al. | Dec 2008 | B2 |
7475419 | Basu et al. | Jan 2009 | B1 |
7483945 | Blumofe | Jan 2009 | B2 |
7487254 | Walsh et al. | Feb 2009 | B2 |
7492720 | Pruthi et al. | Feb 2009 | B2 |
7502884 | Shah | Mar 2009 | B1 |
7503045 | Aziz et al. | Mar 2009 | B1 |
7516221 | Souder et al. | Apr 2009 | B2 |
7529835 | Agronow et al. | May 2009 | B1 |
7533385 | Barnes | May 2009 | B1 |
7543052 | Cesa Klein | Jun 2009 | B1 |
7546553 | Bozak et al. | Jun 2009 | B2 |
7551614 | Teisberg et al. | Jun 2009 | B2 |
7554930 | Gaddis et al. | Jun 2009 | B2 |
7568199 | Bozak et al. | Jul 2009 | B2 |
7577722 | Khandekar et al. | Aug 2009 | B1 |
7577834 | Traversat et al. | Aug 2009 | B1 |
7577959 | Nguyen et al. | Aug 2009 | B2 |
7583607 | Steele et al. | Sep 2009 | B2 |
7584274 | Bond | Sep 2009 | B2 |
7590746 | Slater et al. | Sep 2009 | B2 |
7590747 | Coates et al. | Sep 2009 | B2 |
7594011 | Chandra | Sep 2009 | B2 |
7594015 | Bozak | Sep 2009 | B2 |
7596784 | Abrams et al. | Sep 2009 | B2 |
7610289 | Muret et al. | Oct 2009 | B2 |
7627691 | Buchsbaum et al. | Dec 2009 | B1 |
7631066 | Schatz et al. | Dec 2009 | B1 |
7640547 | Neiman et al. | Dec 2009 | B2 |
7657535 | Moyaux et al. | Feb 2010 | B2 |
7668809 | Kelly et al. | Feb 2010 | B1 |
7680933 | Fatula, Jr. | Mar 2010 | B2 |
7685281 | Saraiya et al. | Mar 2010 | B1 |
7685602 | Tran et al. | Mar 2010 | B1 |
7693976 | Perry et al. | Apr 2010 | B2 |
7693993 | Sheets et al. | Apr 2010 | B2 |
7694305 | Karlsson et al. | Apr 2010 | B2 |
7698386 | Amidon et al. | Apr 2010 | B2 |
7698398 | Lai | Apr 2010 | B1 |
7698430 | Jackson | Apr 2010 | B2 |
7701948 | Rabie et al. | Apr 2010 | B2 |
7716334 | Rao et al. | May 2010 | B2 |
7725583 | Jackson | May 2010 | B2 |
7739541 | Rao et al. | Jun 2010 | B1 |
7743147 | Suorsa et al. | Jun 2010 | B2 |
7747451 | Keohane et al. | Jun 2010 | B2 |
RE41440 | Briscoe et al. | Jul 2010 | E |
7752258 | Lewin et al. | Jul 2010 | B2 |
7752624 | Crawford, Jr. et al. | Jul 2010 | B2 |
7756658 | Kulkarni et al. | Jul 2010 | B2 |
7757236 | Singh | Jul 2010 | B1 |
7761557 | Fellenstein et al. | Jul 2010 | B2 |
7765288 | Bainbridge et al. | Jul 2010 | B2 |
7765299 | Romero | Jul 2010 | B2 |
7769620 | Fernandez et al. | Aug 2010 | B1 |
7769803 | Birdwell et al. | Aug 2010 | B2 |
7770120 | Baudisch | Aug 2010 | B2 |
7774331 | Barth et al. | Aug 2010 | B2 |
7778234 | Cooke et al. | Aug 2010 | B2 |
7788403 | Darugar et al. | Aug 2010 | B2 |
7793288 | Sameske | Sep 2010 | B2 |
7796619 | Feldmann et al. | Sep 2010 | B1 |
7813822 | Hoffberg | Oct 2010 | B1 |
7827361 | Karlsson et al. | Nov 2010 | B1 |
7844787 | Ranganathan et al. | Nov 2010 | B2 |
7900206 | Joshi et al. | Mar 2011 | B1 |
7930397 | Midgley | Apr 2011 | B2 |
8078708 | Wang | Dec 2011 | B1 |
8185776 | Gentes et al. | May 2012 | B1 |
8196133 | Kakumani et al. | Jun 2012 | B2 |
8260893 | Bandhole et al. | Sep 2012 | B1 |
8261349 | Peng | Sep 2012 | B2 |
8321048 | Coss et al. | Nov 2012 | B1 |
8464250 | Ansel | Jun 2013 | B1 |
8726278 | Shawver et al. | May 2014 | B1 |
8863143 | Jackson | Oct 2014 | B2 |
8954584 | Subbarayan et al. | Feb 2015 | B1 |
9116755 | Jackson | Aug 2015 | B2 |
20010015733 | Sklar | Aug 2001 | A1 |
20010051929 | Suzuki | Dec 2001 | A1 |
20010052016 | Skene et al. | Dec 2001 | A1 |
20020002578 | Yamashita | Jan 2002 | A1 |
20020002636 | Vange et al. | Jan 2002 | A1 |
20020010783 | Primak et al. | Jan 2002 | A1 |
20020032716 | Nagato | Mar 2002 | A1 |
20020049608 | Hartsell et al. | Apr 2002 | A1 |
20020059094 | Hosea et al. | May 2002 | A1 |
20020059274 | Hartsell et al. | May 2002 | A1 |
20020062377 | Hillman et al. | May 2002 | A1 |
20020062451 | Scheidt et al. | May 2002 | A1 |
20020083299 | Van Huben et al. | Jun 2002 | A1 |
20020091786 | Yamaguchi et al. | Jul 2002 | A1 |
20020093915 | Larson | Jul 2002 | A1 |
20020103886 | Rawson, III | Aug 2002 | A1 |
20020107962 | Richter et al. | Aug 2002 | A1 |
20020116234 | Nagasawa | Aug 2002 | A1 |
20020116721 | Dobes et al. | Aug 2002 | A1 |
20020120741 | Webb et al. | Aug 2002 | A1 |
20020133537 | Lau | Sep 2002 | A1 |
20020133821 | Shteyn | Sep 2002 | A1 |
20020138635 | Redlich | Sep 2002 | A1 |
20020152305 | Jackson et al. | Oct 2002 | A1 |
20020156891 | Ulrich et al. | Oct 2002 | A1 |
20020156984 | Padovano | Oct 2002 | A1 |
20020161869 | Griffin et al. | Oct 2002 | A1 |
20020166117 | Abrams et al. | Nov 2002 | A1 |
20020173984 | Robertson et al. | Nov 2002 | A1 |
20020174165 | Kawaguchi | Nov 2002 | A1 |
20020174227 | Hartsell et al. | Nov 2002 | A1 |
20020198734 | Greene et al. | Dec 2002 | A1 |
20030004772 | Dutta et al. | Jan 2003 | A1 |
20030014503 | Legout et al. | Jan 2003 | A1 |
20030014524 | Tormasov | Jan 2003 | A1 |
20030014539 | Reznick | Jan 2003 | A1 |
20030028656 | Babka | Feb 2003 | A1 |
20030036820 | Yellepeddy et al. | Feb 2003 | A1 |
20030039246 | Guo et al. | Feb 2003 | A1 |
20030041308 | Ganesan et al. | Feb 2003 | A1 |
20030050989 | Marinescu et al. | Mar 2003 | A1 |
20030058277 | Bowman-Amuah | Mar 2003 | A1 |
20030065703 | Aborn | Apr 2003 | A1 |
20030069949 | Chan et al. | Apr 2003 | A1 |
20030097429 | Wu et al. | May 2003 | A1 |
20030097439 | Strayer et al. | May 2003 | A1 |
20030101084 | Otero Perez | May 2003 | A1 |
20030105721 | Ginter et al. | Jun 2003 | A1 |
20030112792 | Cranor et al. | Jun 2003 | A1 |
20030120472 | Lind | Jun 2003 | A1 |
20030120701 | Pulsipher et al. | Jun 2003 | A1 |
20030120710 | Pulsipher et al. | Jun 2003 | A1 |
20030126013 | Shand | Jul 2003 | A1 |
20030126202 | Watt | Jul 2003 | A1 |
20030126283 | Prakash et al. | Jul 2003 | A1 |
20030144894 | Robertson et al. | Jul 2003 | A1 |
20030154112 | Neiman et al. | Aug 2003 | A1 |
20030158940 | Leigh | Aug 2003 | A1 |
20030177121 | Moona et al. | Sep 2003 | A1 |
20030177334 | King et al. | Sep 2003 | A1 |
20030182429 | Jagels | Sep 2003 | A1 |
20030191857 | Terrell et al. | Oct 2003 | A1 |
20030195931 | Dauger | Oct 2003 | A1 |
20030202709 | Simard et al. | Oct 2003 | A1 |
20030204773 | Petersen et al. | Oct 2003 | A1 |
20030210694 | Jayaraman et al. | Nov 2003 | A1 |
20030212738 | Wookey et al. | Nov 2003 | A1 |
20030231647 | Petrovykh | Dec 2003 | A1 |
20040003077 | Bantz et al. | Jan 2004 | A1 |
20040003086 | Parham et al. | Jan 2004 | A1 |
20040010544 | Slater et al. | Jan 2004 | A1 |
20040010550 | Gopinath | Jan 2004 | A1 |
20040015579 | Cooper et al. | Jan 2004 | A1 |
20040015973 | Skovira | Jan 2004 | A1 |
20040034873 | Zenoni | Feb 2004 | A1 |
20040039815 | Evans et al. | Feb 2004 | A1 |
20040044718 | Ferstl | Mar 2004 | A1 |
20040054630 | Ginter et al. | Mar 2004 | A1 |
20040054777 | Ackaouy et al. | Mar 2004 | A1 |
20040054780 | Romero | Mar 2004 | A1 |
20040066782 | Nassar | Apr 2004 | A1 |
20040068730 | Miller et al. | Apr 2004 | A1 |
20040071147 | Roadknight et al. | Apr 2004 | A1 |
20040073908 | Benejam | Apr 2004 | A1 |
20040103078 | Smedberg et al. | May 2004 | A1 |
20040103305 | Ginter et al. | May 2004 | A1 |
20040107273 | Biran et al. | Jun 2004 | A1 |
20040111307 | Demsky | Jun 2004 | A1 |
20040117610 | Hensley | Jun 2004 | A1 |
20040121777 | Schwarz et al. | Jun 2004 | A1 |
20040128495 | Hensley | Jul 2004 | A1 |
20040128670 | Robinson et al. | Jul 2004 | A1 |
20040133665 | Deboer et al. | Jul 2004 | A1 |
20040139202 | Talwar et al. | Jul 2004 | A1 |
20040143664 | Usa et al. | Jul 2004 | A1 |
20040150664 | Baudisch | Aug 2004 | A1 |
20040179528 | Powers et al. | Sep 2004 | A1 |
20040181370 | Froehlich et al. | Sep 2004 | A1 |
20040181476 | Smith et al. | Sep 2004 | A1 |
20040189677 | Amann et al. | Sep 2004 | A1 |
20040194098 | Chung et al. | Sep 2004 | A1 |
20040199621 | Lau | Oct 2004 | A1 |
20040199646 | Susai et al. | Oct 2004 | A1 |
20040203670 | King et al. | Oct 2004 | A1 |
20040213395 | Ishii et al. | Oct 2004 | A1 |
20040218615 | Griffin et al. | Nov 2004 | A1 |
20040221038 | Clarke et al. | Nov 2004 | A1 |
20040236852 | Birkestrand et al. | Nov 2004 | A1 |
20040260746 | Brown et al. | Dec 2004 | A1 |
20040267897 | Hill et al. | Dec 2004 | A1 |
20050010465 | Drew et al. | Jan 2005 | A1 |
20050021759 | Gupta et al. | Jan 2005 | A1 |
20050021862 | Schroeder et al. | Jan 2005 | A1 |
20050022188 | Tameshige et al. | Jan 2005 | A1 |
20050027863 | Talwar et al. | Feb 2005 | A1 |
20050027864 | Bozak et al. | Feb 2005 | A1 |
20050027865 | Bozak et al. | Feb 2005 | A1 |
20050034070 | Meir et al. | Feb 2005 | A1 |
20050038835 | Chidambaran et al. | Feb 2005 | A1 |
20050044226 | McDermott et al. | Feb 2005 | A1 |
20050044228 | Birkestrand et al. | Feb 2005 | A1 |
20050049884 | Hunt et al. | Mar 2005 | A1 |
20050050057 | Mital et al. | Mar 2005 | A1 |
20050050200 | Mizoguchi | Mar 2005 | A1 |
20050054354 | Roman et al. | Mar 2005 | A1 |
20050055322 | Masters et al. | Mar 2005 | A1 |
20050055694 | Lee | Mar 2005 | A1 |
20050055698 | Sasaki et al. | Mar 2005 | A1 |
20050060360 | Doyle et al. | Mar 2005 | A1 |
20050060608 | Marchand | Mar 2005 | A1 |
20050066358 | Anderson et al. | Mar 2005 | A1 |
20050076145 | Ben-Zvi et al. | Apr 2005 | A1 |
20050080845 | Gopinath | Apr 2005 | A1 |
20050080891 | Cauthron | Apr 2005 | A1 |
20050080930 | Joseph | Apr 2005 | A1 |
20050102396 | Hipp | May 2005 | A1 |
20050102683 | Branson | May 2005 | A1 |
20050108407 | Johnson et al. | May 2005 | A1 |
20050114862 | Bisdikian et al. | May 2005 | A1 |
20050120160 | Plouffe et al. | Jun 2005 | A1 |
20050125213 | Chen et al. | Jun 2005 | A1 |
20050125537 | Martins et al. | Jun 2005 | A1 |
20050125538 | Tawil | Jun 2005 | A1 |
20050131898 | Fatula | Jun 2005 | A1 |
20050132378 | Horvitz et al. | Jun 2005 | A1 |
20050138618 | Gebhart | Jun 2005 | A1 |
20050144315 | George et al. | Jun 2005 | A1 |
20050149940 | Calinescu | Jul 2005 | A1 |
20050160137 | Ishikawa et al. | Jul 2005 | A1 |
20050165925 | Dan et al. | Jul 2005 | A1 |
20050177600 | Eilam et al. | Aug 2005 | A1 |
20050187866 | Lee | Aug 2005 | A1 |
20050188088 | Fellenstein et al. | Aug 2005 | A1 |
20050190236 | Ishimoto | Sep 2005 | A1 |
20050192771 | Fischer et al. | Sep 2005 | A1 |
20050193103 | Drabik | Sep 2005 | A1 |
20050193231 | Scheuren | Sep 2005 | A1 |
20050198200 | Subramanian et al. | Sep 2005 | A1 |
20050204040 | Ferri et al. | Sep 2005 | A1 |
20050209892 | Miller | Sep 2005 | A1 |
20050210470 | Chung et al. | Sep 2005 | A1 |
20050213507 | Banerjee et al. | Sep 2005 | A1 |
20050213560 | Duvvury | Sep 2005 | A1 |
20050234846 | Davidson et al. | Oct 2005 | A1 |
20050235150 | Kaler et al. | Oct 2005 | A1 |
20050243867 | Petite | Nov 2005 | A1 |
20050246705 | Etelson et al. | Nov 2005 | A1 |
20050267948 | McKinley et al. | Dec 2005 | A1 |
20050268063 | Diao et al. | Dec 2005 | A1 |
20050278392 | Hansen et al. | Dec 2005 | A1 |
20050278760 | Dewar et al. | Dec 2005 | A1 |
20050283822 | Appleby et al. | Dec 2005 | A1 |
20050288961 | Tabrizi | Dec 2005 | A1 |
20050289540 | Nguyen et al. | Dec 2005 | A1 |
20060008256 | Khedouri et al. | Jan 2006 | A1 |
20060015555 | Douglass et al. | Jan 2006 | A1 |
20060015637 | Chung | Jan 2006 | A1 |
20060015773 | Singh et al. | Jan 2006 | A1 |
20060028991 | Tan et al. | Feb 2006 | A1 |
20060031379 | Kasriel et al. | Feb 2006 | A1 |
20060031547 | Tsui et al. | Feb 2006 | A1 |
20060031813 | Bishop et al. | Feb 2006 | A1 |
20060039246 | King et al. | Feb 2006 | A1 |
20060041444 | Flores | Feb 2006 | A1 |
20060047920 | Moore et al. | Mar 2006 | A1 |
20060048157 | Dawson et al. | Mar 2006 | A1 |
20060053215 | Sharma | Mar 2006 | A1 |
20060059253 | Goodman et al. | Mar 2006 | A1 |
20060069671 | Conley et al. | Mar 2006 | A1 |
20060069774 | Chen | Mar 2006 | A1 |
20060069926 | Ginter et al. | Mar 2006 | A1 |
20060089894 | Balk et al. | Apr 2006 | A1 |
20060090136 | Miller et al. | Apr 2006 | A1 |
20060095917 | Black-Ziegelbein et al. | May 2006 | A1 |
20060112184 | Kuo | May 2006 | A1 |
20060117208 | Davidson | Jun 2006 | A1 |
20060117317 | Crawford et al. | Jun 2006 | A1 |
20060126619 | Teisberg et al. | Jun 2006 | A1 |
20060126667 | Smith et al. | Jun 2006 | A1 |
20060129687 | Goldszmidt et al. | Jun 2006 | A1 |
20060136235 | Keohane et al. | Jun 2006 | A1 |
20060136928 | Crawford et al. | Jun 2006 | A1 |
20060136929 | Miller et al. | Jun 2006 | A1 |
20060143350 | Miloushev et al. | Jun 2006 | A1 |
20060149695 | Bossman et al. | Jul 2006 | A1 |
20060153191 | Rajsic et al. | Jul 2006 | A1 |
20060155740 | Chen et al. | Jul 2006 | A1 |
20060155912 | Singh et al. | Jul 2006 | A1 |
20060159088 | Aghvami et al. | Jul 2006 | A1 |
20060161466 | Trinon et al. | Jul 2006 | A1 |
20060161585 | Clarke et al. | Jul 2006 | A1 |
20060168107 | Balan et al. | Jul 2006 | A1 |
20060168224 | Midgley | Jul 2006 | A1 |
20060173730 | Birkestrand | Aug 2006 | A1 |
20060189349 | Montulli et al. | Aug 2006 | A1 |
20060190775 | Aggarwal et al. | Aug 2006 | A1 |
20060190975 | Gonzalez | Aug 2006 | A1 |
20060212332 | Jackson | Sep 2006 | A1 |
20060212333 | Jackson | Sep 2006 | A1 |
20060212740 | Jackson | Sep 2006 | A1 |
20060224725 | Bali et al. | Oct 2006 | A1 |
20060224741 | Jackson | Oct 2006 | A1 |
20060227810 | Childress et al. | Oct 2006 | A1 |
20060251419 | Zadikian et al. | Nov 2006 | A1 |
20060294238 | Naik et al. | Dec 2006 | A1 |
20070003051 | Kiss et al. | Jan 2007 | A1 |
20070050777 | Hutchinson et al. | Mar 2007 | A1 |
20070061441 | Landis et al. | Mar 2007 | A1 |
20070067366 | Landis | Mar 2007 | A1 |
20070067435 | Landis et al. | Mar 2007 | A1 |
20070083899 | Compton et al. | Apr 2007 | A1 |
20070088822 | Coile et al. | Apr 2007 | A1 |
20070094691 | Gazdzinski | Apr 2007 | A1 |
20070124344 | Rajakannimariyan et al. | May 2007 | A1 |
20070143824 | Shahbazi | Jun 2007 | A1 |
20070155406 | Dowling et al. | Jul 2007 | A1 |
20070180380 | Khavari et al. | Aug 2007 | A1 |
20070264986 | Warrillow et al. | Nov 2007 | A1 |
20070266136 | Esfahany et al. | Nov 2007 | A1 |
20070271375 | Hwang | Nov 2007 | A1 |
20070297350 | Eilam | Dec 2007 | A1 |
20080104231 | Dey et al. | May 2008 | A1 |
20080215730 | Sundaram et al. | Sep 2008 | A1 |
20080255953 | Chang et al. | Oct 2008 | A1 |
20080263558 | Lin et al. | Oct 2008 | A1 |
20080279167 | Cardei et al. | Nov 2008 | A1 |
20090043809 | Fakhouri et al. | Feb 2009 | A1 |
20090070771 | Yuyitung et al. | Mar 2009 | A1 |
20090100133 | Giulio et al. | Apr 2009 | A1 |
20090103501 | Farrag et al. | Apr 2009 | A1 |
20090105059 | Dorry et al. | Apr 2009 | A1 |
20090113056 | Tameshige et al. | Apr 2009 | A1 |
20090138594 | Fellenstein et al. | May 2009 | A1 |
20090178132 | Hudis et al. | Jul 2009 | A1 |
20090210356 | Abrams et al. | Aug 2009 | A1 |
20090210495 | Wolfson et al. | Aug 2009 | A1 |
20090216910 | Duchesneau | Aug 2009 | A1 |
20090217329 | Riedl et al. | Aug 2009 | A1 |
20090225360 | Shirai | Sep 2009 | A1 |
20090234962 | Strong et al. | Sep 2009 | A1 |
20090234974 | Arndt et al. | Sep 2009 | A1 |
20090235104 | Fung | Sep 2009 | A1 |
20090238349 | Pezzutti | Sep 2009 | A1 |
20090240547 | Fellenstein et al. | Sep 2009 | A1 |
20090292824 | Marashi et al. | Nov 2009 | A1 |
20090327489 | Swildens et al. | Dec 2009 | A1 |
20100036945 | Allibhoy et al. | Feb 2010 | A1 |
20100049931 | Jacobson et al. | Feb 2010 | A1 |
20100088205 | Robertson | Apr 2010 | A1 |
20100091676 | Moran et al. | Apr 2010 | A1 |
20100103837 | Jungck et al. | Apr 2010 | A1 |
20100114531 | Korn et al. | May 2010 | A1 |
20100131624 | Ferris | May 2010 | A1 |
20100153546 | Clubb et al. | Jun 2010 | A1 |
20100217801 | Leighton et al. | Aug 2010 | A1 |
20100235234 | Shuster | Sep 2010 | A1 |
20100281166 | Buyya et al. | Nov 2010 | A1 |
20100318665 | Demmer et al. | Dec 2010 | A1 |
20100332262 | Horvitz et al. | Dec 2010 | A1 |
20110179134 | Mayo et al. | Jul 2011 | A1 |
20120137004 | Smith | May 2012 | A1 |
20120159116 | Lim et al. | Jun 2012 | A1 |
20120185334 | Sarkar et al. | Jul 2012 | A1 |
20120218901 | Jungck et al. | Aug 2012 | A1 |
20120226788 | Jackson | Sep 2012 | A1 |
20140317292 | Odom | Oct 2014 | A1 |
Number | Date | Country |
---|---|---|
2496783 | Mar 2004 | CA |
0268435 | May 1988 | EP |
0859314 | Aug 1998 | EP
1331564 | Jul 2003 | EP |
1365545 | Nov 2003 | EP |
1492309 | Dec 2004 | EP |
1865684 | Dec 2007 | EP |
2391744 | Feb 2004 | GB |
20040107934 | Dec 2004 | KR |
WO 1998011702 | Mar 1998 | WO |
WO 1999015999 | Apr 1999 | WO |
WO 1999057660 | Nov 1999 | WO |
WO 2000014938 | Mar 2000 | WO |
WO 2000060825 | Oct 2000 | WO |
WO 2001009791 | Feb 2001 | WO |
WO 2001014987 | Mar 2001 | WO
WO 2001015397 | Mar 2001 | WO
WO 2001039470 | May 2001 | WO |
WO 2003046751 | Jun 2003 | WO |
WO 2004070547 | Aug 2004 | WO |
WO 2004092884 | Oct 2004 | WO |
WO 2005017783 | Feb 2005 | WO |
WO 2006036277 | Apr 2006 | WO |
WO 2006112981 | Oct 2006 | WO |
Entry |
---|
US 7,774,482 B1, 08/2010, Szeto et al. (withdrawn) |
Kuan-Wei Cheng, Chao-Tung Yang, Chuan-Lin Lai and Shun-Chyi Chang, “A parallel loop self-scheduling on grid computing environments,” 7th International Symposium on Parallel Architectures, Algorithms and Networks, 2004. Proceedings., 2004, pp. 409-414 (Year: 2004). |
Banicescu et al., “Efficient Resource Management for Scientific Applications in Distributed Computing Environment”, 1998, Mississippi State Univ, Dept of Comp. Science, p. 45-54. (Year: 1998). |
Banicescu et al., “Competitive Resource Management in Distributed Computing Environments with Hectiling”, 1999, High Performance Computing Symposium, p. 1-7 (Year: 1999). |
Liu, Simon. “Securing the Clouds: Methodologies and Practices.” Encyclopedia of Cloud Computing (2016): 220. (Year: 2016). |
Final Office Action on U.S. Appl. No. 14/154,912 dated Dec. 7, 2017. |
Notice of Allowance on U.S. Appl. No. 14/331,772 dated Jan. 10, 2018. |
Non-Final Office Action on U.S. Appl. No. 14/590,102 dated Aug. 15, 2017. |
Non-Final Office Action issued on U.S. Appl. No. 14/833,673, dated Sep. 24, 2015. |
Non-Final Office Action on U.S. Appl. No. 14/154,912 dated Jul. 20, 2017. |
Non-Final Office Action on U.S. Appl. No. 14/331,772 dated Aug. 11, 2017. |
U.S. Appl. No. 60/662,240, filed Mar. 2005, Jackson. |
U.S. Appl. No. 11/279,007, filed Apr. 2006, Smith et al. |
Chandra, Abhishek et al., “Quantifying the Benefits of Resource Multiplexing in On-Demand Data Centers”, Department of Computer Science, University of Massachusetts Amherst, 2003. |
Rolia, Jerome et al., “Adaptive Internet Data Centers”, Hewlett Packard Labs, Palo Alto, CA, USA , 2000. |
Doyle, Ronald et al., “Model-Based Resource Provisioning in a Web Service Utility”, IBM, Research Triangle Park, Department of Computer Science, Duke University, 2003. |
Bradford, Lindsay et al., “Experience Using a Coordination-based Architecture for Adaptive Web Content Provision”, Centre for Information Technology Innovation, Queensland University of Technology, Australia, 2005. |
Ranjan, S. et al., “QoS-Driven Server Migration for Internet Data Centers”, In Proceedings of IWQoS, 2002. |
Clark, Russell, et al., “Providing Scalable Web Service Using Multicast Delivery”, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280, 1995. |
Reed, Daniel et al., “The Next Frontier: Interactive and Closed Loop Performance Steering”, Department of Computer Science, University of Illinois, Urbana, Illinois 61801, International Conference on Parallel Processing Workshop, 1996. |
Bian, Qiyong, et al., “Dynamic Flow Switching, A New Communication Service for ATM Networks”, 1997. |
Feldmann, Anja, et al., “Reducing Overhead in Flow-Switched Networks: An Empirical Study of Web Traffic”, AT&T Labs—Research, Florham Park, NJ, 1998. |
Feldmann, Anja, et al., “Efficient Policies for Carrying Web Traffic Over Flow-Switched Networks”, IEEE/ACM Transactions on Networking, vol. 6, No. 6, Dec. 1998. |
Feng, Chen, et al., “Replicated Servers Allocation for Multiple Information Sources in a Distributed Environment”, Department of Computer Science, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, Sep. 2000. |
Wang, Z., et al., “Resource Allocation for Elastic Traffic: Architecture and Mechanisms”, Bell Laboratories, Lucent Technologies, IEEE/IFIP Network Operations and Management Symposium 2000, pp. 157-170, Apr. 2000. |
Fan, Li, et al., “Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol”, IEEE/ACM Transactions on networking, vol. 8, No. 3, Jun. 2000. |
Yang, Chu-Sing, et al., “Building an Adaptable, Fault Tolerant, and Highly Manageable Web Server on Clusters of Non-dedicated Workstations”, Department of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan, R.O.C., 2000. |
Appleby, K., et al., “Oceano-SLA Based Management of a Computing Utility”, IBM T.J. Watson Research Center, P.O.Box 704, Yorktown Heights, New York 10598, USA. Proc. 7th IFIP/IEEE Int'l Symp. Integrated Network Management, IEEE Press 2001. |
Abdelzaher, Tarek, et al., “Performance Guarantees for Web Server End-Systems: A Control-Theoretical Approach”, IEEE Transactions on Parallel and Distributed Systems, vol. 13, No. 1, Jan. 2002. |
Garg, Rahul, et al., “A SLA Framework for QoS Provisioning and Dynamic Capacity Allocation”, 2002. |
Xu, Jun, et al., “Sustaining Availability of Web Services under Distributed Denial of Service Attacks”, IEEE Transactions on Computers, vol. 52, No. 2, pp. 195-208, Feb. 2003. |
McCann, Julie, et al., “Patia: Adaptive Distributed Webserver (A Position Paper)”, Department of Computing, Imperial College London, SW1 2BZ, UK. 2003. |
Urgaonkar, Bhuvan, et al., “Sharc: Managing CPU and Network Bandwidth in Shared Clusters”, IEEE Transactions on Parallel and Distributed Systems, vol. 15, No. 1, pp. 2-17, Jan. 2004. |
Liao, Raymond, et al., “Dynamic Core Provisioning for Quantitative Differentiated Services”, IEEE/ACM Transactions on Networking, vol. 12, No. 3, pp. 429-442, Jun. 2004. |
Amini, Lisa, et al., “Effective Peering for Multi-provider Content Delivery Services”, IBM Research, Hawthorne, New York, Columbia University, New York, New York. 2004. |
Soldatos, John, et al., “On the Building Blocks of Quality of Service in Heterogeneous IP Networks”, IEEE Communications Surveys, The Electronic Magazine of Original Peer-Reviewed Survey Articles, vol. 7, No. 1. First Quarter 2005. |
Devarakonda, Murthy, et al., “Policy-Based Multi-Datacenter Resource Management”, IBM Research, P.O.Box 704, Yorktown Hts, NY. Sixth IEEE International Workshop on Policies for Distributed Systems and Networks, pp. 247-250, Jun. 2005. |
Rashid, Mohammad, et al., “An Analytical Approach to Providing Controllable Differentiated Quality of Service in Web Servers”, IEEE Transactions on Parallel and Distributed Systems, vol. 16, No. 11, pp. 1022-1033, Nov. 2005. |
Braumandl, R. et al., “ObjectGlobe: Ubiquitous query processing on the Internet”, Universität Passau, Lehrstuhl für Informatik, 94030 Passau, Germany. Technische Universität München, Institut für Informatik, 81667 München, Germany. Edited by F. Casati, M.-C. Shan, D. Georgakopoulos. Accepted: Mar. 14, 2001. Published online: Jun. 7, 2001. © Springer-Verlag 2001. |
Kai, Shen, et al., “Supporting Cluster-based Network Services on Functionally Symmetric Software Architecture”, Proceedings of the 2004 ACM/IEEE conference on Supercomputing, 2004. |
Benkner, Siegfried, et al., “VGE—A Service-Oriented Grid Environment for On-Demand Supercomputing”, Institute for Software Science, University of Vienna, Nordbergstrasse 15/C/3, A-1090 Vienna, Austria. Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing. pp. 11-18. 2004. |
Amir, Yair et al., “WALRUS—a Low Latency, High Throughput Web Service Using Internet-wide Replication”, Department of Computer Science, The Johns Hopkins University, 1998. |
Azuma, K. et al., “Design, Implementation and Evaluation of Resource Management System for Internet Servers”, Journal of High Speed Networking, IOS Press, vol. 14, No. 4/2005, Oct. 2005. |
Baentsch, Michael et al., “World Wide Web Caching: The Application-Level View of the Internet”, Communications Magazine, IEEE, vol. 35, Issue 6, pp. 170-178, Jun. 1997. |
Banga, Gaurav et al., “Resource Containers: A New Facility for Resource Management in Server Systems”, Rice University, originally published in the Proceedings of the 3rd Symposium on Operating Systems Design and Implementation, New Orleans, Louisiana, Feb. 1999. |
Belloum, A. et al., “A Scalable Web Server Architecture”, World Wide Web: Internet and Web Information Systems, 5, 5-23, 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. 2000. |
Cardellini, Valeria et al., “Geographic Load Balancing for Scalable Distributed Web Systems”, Proceedings of the 8th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, pp. 20-27. 2000. |
Casalicchio, Emiliano, et al., “Static and Dynamic Scheduling Algorithms for Scalable Web Server Farm”, University of Roma Tor Vergata, Roma, Italy, 00133. 2001. |
Chase, Jeffrey et al., “Dynamic Virtual Clusters in a Grid Site Manager”, Department of Computer Science, Duke University, Box 90129, Durham, NC 27708, U.S.A. Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing, 2003. |
Chawla, Hamesh et al., “HydraNet: Network Support for Scaling of Large-Scale Services”, Proceedings of the 7th International Conference on Computer Communications and Networks, Oct. 1998. |
Chen, Xiangping et al., “Performance Evaluation of Service Differentiating Internet Servers”, IEEE Transactions on Computers, vol. 51, No. 11, pp. 1368-1375, Nov. 2002. |
Chu, Wesley et al., “Task Allocation and Precedence Relations for Distributed Real-Time Systems”, IEEE Transactions on Computers, vol. C-36, No. 6, pp. 667-679, Jun. 1987. |
Clarke, Michael et al., “An Architecture for Dynamically Extensible Operating Systems”, Distributed Multimedia Research Group, Department of Computing, Lancaster University, Bailrigg, Lancaster, LA1 4YR, U.K. Proceedings on Fourth International Conference on Configurable Distributed Systems. May 1998. |
Colajanni, Michele et al., “Dynamic Load Balancing in Geographically Distributed Heterogeneous Web Servers”, Dip. di Informatica, Sistemi e Produzione, Universita di Roma, Tor Vergata, Roma, Italy 00133, 18th International Conference on Distributed Computing Systems, pp. 295-302, May 1998. |
Colajanni, Michele et al., “Analysis of Task Assignment Policies in Scalable Distributed Web-server Systems”, IEEE Transactions on Parallel and Distributed Systems, vol. 9, No. 6, Jun. 1998. |
Conti, Marco, et al., “Client-side content delivery policies in replicated web services: parallel access versus single server approach”, Istituto di Informatica e Telematica (IIT), Italian National Research Council (CNR), Via G. Moruzzi 1, 56124 Pisa, Italy, Performance Evaluation 59 (2005) 137-157, Available online Sep. 11, 2004. |
Dilley, John, et al., “Globally Distributed Content Delivery”, IEEE Internet Computing, 1089-7801/02/$17.00 © 2002 IEEE, pp. 50-58, Sep.-Oct. 2002. |
Ercetin, Ozgur et al., “Market-Based Resource Allocation for Content Delivery in the Internet”, IEEE Transactions on Computers, vol. 52, No. 12, pp. 1573-1585, Dec. 2003. |
Fong, L.L. et al., “Dynamic Resource Management in an eUtility”, IBM T. J. Watson Research Center, 0-7803-7382-0/02/$17.00 © 2002 IEEE. |
Foster, Ian et al., “The Anatomy of the Grid—Enabling Scalable Virtual Organizations”, To appear: Intl J. Supercomputer Applications, 2001. |
Gayek, P., et al., “A Web Content Serving Utility”, IBM Systems Journal, vol. 43, No. 1, pp. 43-63. 2004. |
Genova, Zornitza et al., “Challenges in URL Switching for Implementing Globally Distributed Web Sites”, Department of Computer Science and Engineering, University of South Florida, Tampa, Florida 33620. 0-7695-0771-9/00 $10.00 © 2000 IEEE. |
Guo, Jiani et al., “QoS Aware Job Scheduling in a Cluster-based Web Server for Multimedia Applications”, Computer Science and Engineering, University of California, Riverside, CA 92521, 0-7695-2312-9/05/$20.00 (c) 2005 IEEE. |
Hu, E.C. et al., “Adaptive Fast Path Architecture”, Copyright 2001 by International Business Machines Corporation, pp. 191-206, IBM J. Res. & Dev. vol. 45 No. 2 Mar. 2001. |
Huang, Chengdu, et al., “An Architecture for Real-Time Active Content Distribution”, Proceedings of the 16th Euromicro Conference on Real-Time Systems (ECRTS'04), 1068-3070/04 $20.00 © 2004 IEEE. |
Jann, Joefon et al., “Web Applications and Dynamic Reconfiguration in UNIX Servers”, IBM, Thomas J. Watson Research Center, Yorktown Heights, New York 10598, 0-7803-7756-7/03/$17.00 © 2003 IEEE. pp. 186-194. |
Jiang, Xuxian et al., “SODA: a Service-On-Demand Architecture for Application Service Hosting Utility Platforms”, Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing (HPDC'03) 1082-8907/03 $17.00 © 2003 IEEE. |
Kant, Krishna et al., “Server Capacity Planning for Web Traffic Workload”, IEEE Transactions on Knowledge and Data Engineering, vol. 11, No. 5, Sep./Oct. 1999, pp. 731-747. |
Kapitza, Rudiger et al., “Decentralized, Adaptive Services: The AspectIX Approach for a Flexible and Secure Grid Environment”, M. Jeckle, R. Kowalczyk, and P. Braun (Eds.): GSEM 2004, LNCS 3270, pp. 107-118, 2004. © Springer-Verlag Berlin Heidelberg 2004. |
Koulopoulos, D. et al., “PLEIADES: An Internet-based parallel/distributed system”, Software-Practice and Experience 2002; 32:1035-1049 (DOI: 10.1002/spe.468). |
Kuz, Ihor et al., Delft University of Technology / Vrije Universiteit, Delft, The Netherlands, 0-7695-0819-7/00 $10.00 © 2000 IEEE. |
Lu, Chenyang et al., “A Feedback Control Approach for Guaranteeing Relative Delays in Web Servers”, Department of Computer Science, University of Virginia, Charlottesville, VA 22903, 0-7695-1134-1/01 $10.00. 2001 IEEE. |
Mahon, Rob et al., “Cooperative Design in Grid Services”, The 8th International Conference on Computer Supported Cooperative Work in Design Proceedings. pp. 406-412. IEEE 2003. |
Montez, Carlos et al., “Implementing Quality of Service in Web Servers”, LCMI—Depto de Automacao e Sistemas—Univ. Fed. de Santa Catarina, Caixa Postal 476-88040-900—Florianopolis—SC—Brasil, 1060-9857/02 $17.00. 2002 IEEE. |
Haddad, Ibrahim et al., “MOSIX: A Cluster Load-Balancing Solution for Linux”, Linux Journal, vol. 2001, Issue 85es, Article No. 6, May 2001. |
Naik, Vijay et al., “Adaptive Resource Sharing in a Web Services Environment”, Middleware Conference, vol. 78, Proceedings of the 5th ACM/IFIP/USENIX international conference on Middleware, pp. 311-330. 2004. |
Nakrani, Sunil et al., “On Honey Bees and Dynamic Server Allocation in Internet Hosting Centers”, Computing Laboratory, University of Oxford, Oxford OX1 3QD, England, UK. Copyright © 2004 International Society for Adaptive Behavior, vol. 12(3-4): pp. 223-240. 2004. |
Abdelwahed, Sherif et al., “A Control-Based Framework for Self-Managing Distributed Computing Systems”, WOSS'04 Oct. 31-Nov. 1, 2004 Newport Beach, CA, USA. Copyright 2004 ACM 1-58113-989-6/04/0010. |
Aweya, James et al., “An adaptive load balancing scheme for web servers”, International Journal of Network Management 2002; 12: 3-39 (DOI: 10.1002/nem.421), Copyright 2002 John Wiley & Sons, Ltd. |
Chen, Liang et al., “Resource Allocation in a Middleware for Streaming Data”, 2nd Workshop on Middleware for Grid Computing Toronto, Canada, pp. 5-10, Copyright 2004 ACM. |
Workshop on Performance and Architecture of Web Servers (PAWS-2000) Jun. 17-18, 2000, Santa Clara, CA (Held in conjunction with SIGMETRICS-2000). |
Hadjiefthymiades, Stathes et al., “Using Proxy Cache Relocation to Accelerate Web Browsing in Wireless/Mobile Communications”, University of Athens, Dept. of Informatics and Telecommunications, Panepistimioupolis, Ilisia, Athens, 15784, Greece. WWW10, May 1-5, 2001, Hong Kong. |
Fox, Armando et al., “Cluster-Based Scalable Network Services”, University of California at Berkeley, SOSP-16 10/97 Saint-Malo, France, ACM 1997. |
Chen, Thomas, “Increasing the Observability of Internet Behavior”, Communications of the ACM, vol. 44, No. 1, pp. 93-98, Jan. 2001. |
Shaikh, Anees et al., “Implementation of a Service Platform for Online Games”, Network Software and Services, IBM T.J. Watson Research Center, Hawthorne, NY 10532, SIGCOMM'04 Workshops, Aug. 30 & Sep. 3, 2004, Portland, Oregon, USA. Copyright 2004 ACM. |
Chellappa, Ramnath et al., “Managing Computing Resources in Active Intranets”, International Journal of Network Management, 2002, 12:117-128 (DOI:10.1002/nem.427). |
Lowell, David et al., “Devirtualizable Virtual Machines Enabling General, Single-Node, Online Maintenance”, ASPLOS'04, Oct. 9-13, 2004, Boston, Massachusetts, USA. pp. 211-223, Copyright 2004 ACM. |
Shen, Kai et al., “Integrated Resource Management for Cluster-based Internet Services”, USENIX Association, 5th Symposium on Operating Systems Design and Implementation. vol. 36, Issue SI (Winter 2002), pp. 225-238. 2002. |
Cardellini, Valeria et al., “The State of the Art in Locally Distributed Web-Server Systems”, ACM Computing Surveys, vol. 34, No. 2, Jun. 2002, pp. 263-311. |
Grajcar, Martin, “Genetic List Scheduling Algorithm for Scheduling and Allocation on a Loosely Coupled Heterogeneous Multiprocessor System”, Proceedings of the 36th annual ACM/IEEE Design Automation Conference, New Orleans, Louisiana, pp. 280-285. 1999. |
Chandra, Abhishek et al., “Dynamic Resource Allocation for Shared Data Centers Using Online Measurements” Proceedings of the 11th international conference on Quality of service, Berkeley, CA, USA pp. 381-398. 2003. |
Grimm, Robert et al., “System Support for Pervasive Applications”, ACM Transactions on Computer Systems, vol. 22, No. 4, Nov. 2004, pp. 421-486. |
Bent, Leeann et al., “Characterization of a Large Web Site Population with Implications for Content Delivery”, WWW2004, May 17-22, 2004, New York, New York, USA ACM 1-58113-844-X/04/0005, pp. 522-533. |
Pacifici, Giovanni et al., “Performance Management for Cluster Based Web Services”, IBM TJ Watson Research Center, May 13, 2003. |
Conti, Marco et al., “Quality of Service Issues in Internet Web Services”, IEEE Transactions on Computers, vol. 51, No. 6, pp. 593-594, Jun. 2002. |
Raunak, Mohammad et al., “Implications of Proxy Caching for Provisioning Networks and Servers”, IEEE Journal on Selected Areas in Communications, vol. 20, No. 7, pp. 1276-1289, Sep. 2002. |
Reumann, John et al., “Virtual Services: A New Abstraction for Server Consolidation”, Proceedings of 2000 USENIX Annual Technical Conference, San Diego, California, Jun. 18-23, 2000. |
Rolia, J. et al., “Resource Access Management for a Utility Hosting Enterprise Applications”, IFIP/IEEE Eighth International Symposium on Integrated Network Management, pp. 549-562, Mar. 2003. |
Ryu, Kyung Dong et al., “Resource Policing to Support Fine-Grain Cycle Stealing in Networks of Workstations”, IEEE Transactions on Parallel and Distributed Systems, vol. 15, No. 10, pp. 878-892, Oct. 2004. |
Sacks, Lionel et al., “Active Robust Resource Management in Cluster Computing Using Policies”, Journal of Network and Systems Management, vol. 11, No. 3, pp. 329-350, Sep. 2003. |
Sit, Yiu-Fai et al., “Cyclone: A High-Performance Cluster-Based Web Server with Socket Cloning”, Department of Computer Science and Information Systems, The University of Hong Kong, Cluster Computing 7, 21-37, 2004. Kluwer Academic Publishers. |
Sit, Yiu-Fai et al., “Socket Cloning for Cluster-Based Web Servers”, Department of Computer Science and Information Systems, The University of Hong Kong, Proceedings of the IEEE International Conference on Cluster Computing, IEEE 2002. |
Snell, Quinn et al., “An Enterprise-Based Grid Resource Management System”, Brigham Young University, Provo, Utah 84602, Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing, 2002. |
Tang, Wenting et al., “Load Distribution via Static Scheduling and Client Redirection for Replicated Web Servers”, Department of Computer Science and Engineering, 3115 Engineering Building, Michigan State University, East Lansing, MI 48824-1226, Proceedings of the 2000 International Workshop on Parallel Processing, pp. 127-133, IEEE 2000. |
Taylor, Steve et al., “Grid Resources for Industrial Applications”, Proceedings of the IEEE International Conference on Web Services. 2004. |
Vidyarthi, Deo Prakash et al., “Cluster-Based Multiple Task Allocation in Distributed Computing System”, Proceedings of the 18th International Parallel and Distributed Processing Symposium, IEEE 2004. |
Villela, Daniel et al., “Provisioning Servers in the Application Tier for E-commerce Systems”, pp. 57-66, IEEE 2004. |
Xu, Zhiwei et al., “Cluster and Grid Superservers: The Dawning Experiences in China”, Institute of Computing Technology, Chinese Academy of Sciences, P.O. Box 2704, Beijing 100080, China. Proceedings of the 2001 IEEE International Conference on Cluster Computing. IEEE 2002. |
Zeng, Daniel et al., “Efficient Web Content Delivery Using Proxy Caching Techniques”, IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 34, No. 3, pp. 270-280, Aug. 2004. |
Zhang, Qian et al., “Resource Allocation for Multimedia Streaming Over the Internet”, IEEE Transactions on Multimedia, vol. 3, No. 3, pp. 339-355, Sep. 2001. |
Gupta et al., “Provisioning a virtual private network: a network design problem for multicommodity flow”, Proceedings of the thirty-third annual ACM symposium on Theory of Computing [online], Jul. 2001, pp. 389-398, abstract [retrieved on Jun. 14, 2007], Retrieved from the Internet: URL:http://portal.acm.org/citation.cfm?id=380830&dl=ACM&coll=GUIDE. |
Foster et al., “A Distributed Resource Management Architecture that Supports Advance Reservations and Co-Allocation,” Seventh International Workshop on Quality of Service (IWQoS '99), 1999, pp. 27-36. |
Final Office Action on U.S. Appl. No. 14/691,120 dated Sep. 13, 2017. |
Chen et al., “A flexible service model for advance reservation,” Computer Networks 37 (2001), pp. 251-262. |
Final Office Action issued on U.S. Appl. No. 11/276,855, dated Jan. 26, 2012. |
Final Office Action issued on U.S. Appl. No. 11/276,852, dated Mar. 5, 2013. |
Final Office Action issued on U.S. Appl. No. 11/276,854, dated Apr. 18, 2011. |
Final Office Action issued on U.S. Appl. No. 11/276,854, dated Jun. 8, 2010. |
Final Office Action on U.S. Appl. No. 11/276,853, dated Oct. 16, 2009. |
Final Office Action on U.S. Appl. No. 11/276,855, dated Aug. 13, 2009. |
Non-Final Office Action for U.S. Appl. No. 11/276,855, dated Dec. 31, 2009. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,852, dated Feb. 10, 2009. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,852, dated Mar. 17, 2011. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,852, dated Mar. 4, 2010. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,852, dated Jan. 16, 2014. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,854, dated Oct. 27, 2010. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,854, dated Nov. 26, 2008. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,854, dated Aug. 1, 2012. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,855, dated Dec. 30, 2008. |
Non-Final Office Action on U.S. Appl. No. 11/276,854, dated Jun. 5, 2013. |
Non-Final Office Action on U.S. Appl. No. 11/276,855, dated Dec. 7, 2010. |
Non-Final Office Action on U.S. Appl. No. 14/691,120 dated Mar. 2, 2017. |
Notice of Allowance issued on U.S. Appl. No. 11/276,852, dated Nov. 26, 2014. |
Notice of Allowance issued on U.S. Appl. No. 11/276,854, dated Mar. 6, 2014. |
Notice of Allowance issued on U.S. Appl. No. 13/758,164, dated Apr. 15, 2015. |
Notice of Allowance issued on U.S. Appl. No. 14/704,231, dated Sep. 2, 2015. |
Notice of Allowance on U.S. Appl. No. 11/276,855, dated Sep. 13, 2013. |
Notice of Allowance on U.S. Appl. No. 14/833,673, dated Dec. 2, 2016. |
Non-Final Office Action on U.S. Appl. No. 14/691,120 dated Feb. 12, 2018. |
Notice of Allowance on U.S. Appl. No. 14/590,102 dated Jan. 22, 2018. |
Non-Final Office Action on U.S. Appl. No. 14/154,912 dated May 8, 2018. |
Non-Final Office Action on U.S. Appl. No. 14/987,059, dated May 11, 2018. |
Final Office Action on U.S. Appl. No. 14/154,912 dated Oct. 11, 2018, 17 pps. |
Final Office Action on U.S. Appl. No. 14/691,120 dated Aug. 27, 2018, 16 pps. |
Final Office Action on U.S. Appl. No. 14/987,059 dated Oct. 11, 2018, 15 pps. |
Non-Final Office Action on U.S. Appl. No. 15/478,467 dated Jul. 13, 2018, 21 pps. |
Final Office Action on U.S. Appl. No. 15/478,467 dated Jan. 11, 2019. |
Non-Final Office Action on U.S. Appl. No. 14/691,120 dated Mar. 22, 2019. |
Non-Final Office Action on U.S. Appl. No. 14/987,059 dated Jan. 31, 2019. |
Notice of Allowance on U.S. Appl. No. 14/154,912 dated Feb. 7, 2019. |
Notice of Allowance on U.S. Appl. No. 14/154,912 dated Apr. 3, 2019. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,854, dated Jun. 10, 2009, 16 pages. |
Final Office Action issued on U.S. Appl. No. 11/276,852, dated Oct. 16, 2009, 15 pages. |
Final Office Action issued on U.S. Appl. No. 11/276,855, dated Jul. 22, 2010, 16 pages. |
Final Office Action issued on U.S. Appl. No. 11/276,852, dated Oct. 4, 2010, 19 pages. |
Non-Final Office Action on U.S. Appl. No. 11/276,855, dated Jun. 27, 2011, 16 pages. |
Final Office Action issued on U.S. Appl. No. 11/276,852, dated Oct. 5, 2011, 20 pages. |
Non-Final Office Action issued on U.S. Appl. No. 11/276,852, dated Jun. 26, 2012, 21 pages. |
Final Office Action issued on U.S. Appl. No. 14/833,673, dated Feb. 11, 2016, 14 pages. |
Non-Final Office Action issued on U.S. Appl. No. 14/833,673, dated Jun. 10, 2016, 11 pages. |
Notice of Allowance issued on U.S. Appl. No. 15/478,467 dated May 30, 2019, 14 pages. |
Notice of Allowance issued on U.S. Appl. No. 14/987,059 dated Jul. 8, 2019, 7 pages. |
Final Office Action issued on U.S. Appl. No. 14/691,120, dated Oct. 3, 2019, 15 pages. |
Notice of Allowance issued on U.S. Appl. No. 14/987,059 dated Nov. 7, 2019, 7 pages. |
Notice of Allowance issued on U.S. Appl. No. 14/987,059 dated Feb. 14, 2020, 7 pages. |
Caesar et al., “Design and Implementation of a Routing Control Platform,” Usenix, NSDI '05 Paper, Technical Program, obtained from the Internet, on Apr. 13, 2021, at URL <https://www.usenix.org/legacy/event/nsdi05/tech/full_papers/caesar/caesar_html/>, 23 pages. |
Bader et al.; “Applications”; The International Journal of High Performance Computing Applications, vol. 15; pp. 181-185; Summer 2001. |
Coomer et al.; “Introduction to the Cluster Grid—Part 1”; Sun Microsystems White Paper; 19 pages; Aug. 2002. |
Joseph et al.; “Evolution of grid computing architecture and grid adoption models”; IBM Systems Journal, vol. 43, No. 4; 22 pages; 2004. |
Smith et al.; “Grid computing”; MIT Sloan Management Review, vol. 46, Iss. 1.; 5 pages; Fall 2004. |
“Microsoft Computer Dictionary, 5th Ed.”; Microsoft Press; 3 pages; 2002. |
“Random House Concise Dictionary of Science & Computers”; 3 pages; Helicon Publishing; 2004. |
Number | Date | Country | |
---|---|---|---|
20150381521 A1 | Dec 2015 | US |
Number | Date | Country | |
---|---|---|---|
60662240 | Mar 2005 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 13758164 | Feb 2013 | US |
Child | 14827927 | US | |
Parent | 12752622 | Apr 2010 | US |
Child | 13758164 | US | |
Parent | 11276856 | Mar 2006 | US |
Child | 12752622 | US |