1. Technical Field
The present invention relates in general to improved grid computing and in particular to efficient client-side estimation of future grid job costs. Still more particularly, the present invention relates to customer estimation of future grid job costs by comparing a current grid job of a particular classification with a history of stored costs for other grid jobs of that customer of that particular classification.
2. Description of the Related Art
Ever since the first connection was made between two computer systems, new ways of transferring data, resources, and other information between two computer systems via a connection continue to develop. In typical network architectures, when two computer systems are exchanging data via a connection, one of the computer systems is considered a client sending requests and the other is considered a server processing the requests and returning results. In an effort to increase the speed at which requests are handled, server systems continue to expand in size and speed. Further, in an effort to handle peak periods when multiple requests are arriving every second, server systems are often joined together as a group and requests are distributed among the grouped servers. Multiple methods of grouping servers have developed such as clustering, multi-system shared data (sysplex) environments, and enterprise systems. With a cluster of servers, one server is typically designated to manage distribution of incoming requests and outgoing responses. The other servers typically operate in parallel to handle the distributed requests from clients. Thus, one of multiple servers in a cluster may service a client request without the client detecting that a cluster of servers is processing the request.
Typically, servers or groups of servers operate on a particular network platform, such as Unix or some variation of Unix, and provide a hosting environment for running applications. Each network platform may provide functions ranging from database integration, clustering services, and security to workload management and problem determination. Each network platform typically offers different implementations, semantic behaviors, and application programming interfaces (APIs).
Merely grouping servers together to expand processing power, however, is a limited method of improving efficiency of response times in a network. Thus, increasingly, within a company network, rather than just grouping servers, servers and groups of server systems are organized as distributed resources. There is an increased effort to collaborate, share data, share cycles, and improve other modes of interaction among servers within a company network and outside the company network. Further, there is an increased effort to outsource nonessential elements from one company network to that of a service provider network. Moreover, there is a movement to coordinate resource sharing between resources that are not subject to the same management system, but still address issues of security, policy, payment, and membership. For example, resources on an individual's desktop are not typically subject to the same management system as resources of a company server cluster. Even different administrative groups within a company network may implement distinct management systems.
The problems with decentralizing the resources available from servers and other computing systems operating on different network platforms, located in different regions, with different security protocols and each controlled by a different management system, have led to the development of Grid technologies using open standards for operating a grid environment. Grid environments support the sharing and coordinated use of diverse resources in dynamic, distributed, virtual organizations. A virtual organization is created within a grid environment when a selection of resources, from geographically distributed systems operated by different organizations with differing policies and management systems, is organized to handle a job request. A grid vendor may develop a grid environment to which a buyer may submit grid jobs, for example.
Grid vendors may offer to process grid jobs with different performance promises and with different pricing policies. Even if standards, such as those proposed by the open standards organization for Grid technologies, define standard monitoring, metering, rating, accounting, and billing interfaces, grid vendors will still have different physical resources available to process grid jobs, and thus pricing and performance will still vary among grid vendors. In one example, grid vendors have to measure the use of the grid vendor's resources by a grid job, which may involve complex formulas that take into account multiple factors in addition to the actual use of resources. For example, a grid vendor may dedicate a particular processor resource to a particular job and charge the grid job for the dedicated use of the processor, in addition to the actual number of processor cycles the grid job required.
While grid vendors are focused on monitoring, metering, accounting, and billing for the actual usage of physical resources at a computational cycle level, grid clients or customers are focused on the processing of applications and jobs at an application type level. As a result, there is a lack of connection between the way that grid customers and grid vendors view the costs associated with grid jobs. Further, currently, each grid vendor still monitors, meters, and bills for grid jobs using different units of physical resource measurement. Thus, because of the disconnect between the client grid job at an application level and the grid vendor measurement of use of physical resources, it is difficult for grid clients to compare the costs of processing grid jobs at different grid vendors and to estimate future costs of submitting grid jobs to the same grid vendor.
Therefore, in view of the foregoing, it would be advantageous to provide a method, system, and program for estimating future job costs by classifying grid jobs in categories with client-defined application based metric units, converting the grid vendor defined metric costs to perform grid jobs into the client-defined application based metric unit costs by category of grid job, and storing the converted client-defined application based metric unit costs for predicting future costs of grid jobs of the same category. In particular, it would be advantageous to submit grid job microcosms, or smaller representative grid jobs, to multiple grid vendors to retrieve actual costs for each category of grid job on a smaller basis, convert the grid vendor defined metric costs to a client-defined application based metric unit cost, and compare the costs at the client-defined application based metric level, before submitting larger grid jobs in the future to the most cost effective grid vendor.
In view of the foregoing, the present invention provides, in general, for improved grid computing and, in particular, for client-side estimation of future grid job costs. Still more particularly, the present invention relates to customer estimation of future grid job costs by comparing a current grid job of a particular classification with a history of stored costs for other grid jobs of that customer of that particular classification.
In one embodiment, a grid client agent for a client system, which is enabled to submit grid jobs to a grid provider that facilitates an on-demand grid environment, calculates a ratio of an application based metric to a grid provider metric for processing a particular grid job. Then, the grid client agent creates a table with an entry comparing the application based metric to a cost per grid provider metric for the grid provider based on the calculated ratio. Next, the grid client agent stores the table with the entry. Then, responsive to detecting a next grid job, the grid client agent estimates a cost for the grid provider to process the next grid job based on a particular number of application based metric operations required for the next grid job, translated by the ratio into the grid provider metric and multiplied by the cost per grid provider metric.
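Expressed as a formula, under the assumption that the symbols below (which do not appear in the original text) stand for the quantities just described, the estimate is:

```latex
\text{estimated cost} \;=\; \frac{N_{\mathrm{app}}}{R} \times P
```

where N_app is the number of application based metric operations required for the next grid job, R is the calculated ratio of application based metric operations per grid provider metric unit, and P is the cost per grid provider metric unit.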
To calculate the ratio, first, the grid client agent distributes a job microcosm, which is a smaller representation of a particular grid job, to the grid provider for processing in the on-demand grid environment. Responsive to receiving the result of the job microcosm and a charge for processing the job microcosm based on a grid provider metric for the grid provider, the grid client agent calculates the ratio of the application based metric to the grid provider metric and identifies the cost per grid provider metric from the charge for processing the job microcosm. In addition, the grid client agent may first distribute a job request with requirements for the grid microcosm to the grid provider and receive a bid from the grid provider specifying the cost and other agreements for processing the grid microcosm. Further, the grid client agent may also identify the cost per grid provider metric from a published rate by the grid provider.
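A minimal sketch of this calculation, with hypothetical function and variable names and illustrative numbers (none of which appear in the original text), might look like:

```python
def calculate_entry(app_ops_in_microcosm, provider_units_billed, charge):
    """Derive the translation ratio and the cost per grid provider metric from
    one job microcosm: app_ops_in_microcosm is the number of application based
    metric operations (e.g., record merges) in the microcosm,
    provider_units_billed is the quantity of the provider's own metric (hours,
    composite units, MFP operations) shown on the charge, and charge is the
    total amount billed for processing the microcosm."""
    ratio = app_ops_in_microcosm / provider_units_billed   # app ops per provider unit
    cost_per_provider_unit = charge / provider_units_billed
    return ratio, cost_per_provider_unit

# Illustration: a 200,000-merge microcosm billed as 0.0667 hours for $2.67
ratio, price = calculate_entry(200_000, 0.0667, 2.67)
# ratio is roughly 3,000,000 merges per hour; price is roughly $40 per hour
```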
When the grid client agent detects an adjusted cost per grid provider metric, whether through a pricing notification or the charges for another grid job, the grid client agent updates the entry for the grid provider with the adjusted cost per grid provider metric and automatically reestimates the cost for the grid provider to process the next grid job based on the adjusted cost per grid provider metric, without requiring a new calculation of the ratio.
The table may include additional entries from additional grid providers who process grid microcosms of a particular job, where the grid client agent calculates a ratio of the application based metric to each grid provider's different metric for each entry in the table. Where multiple grid provider entries are available in the table, the grid client agent may estimate the cost for each grid provider to process the next grid job based on the ratio and cost per grid provider metric in each entry and compare the costs calculated for the number of application based metric operations required for the next grid job.
In addition, entries in the table are classified by category of grid job. Thus, the grid client agent will detect the next grid job, classify the grid job within one of the categories of grid job, and access those entries in the table that are also classified by the same category of grid job.
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
Referring now to the drawings and in particular to
In one embodiment, computer system 100 includes a bus 122 or other device for communicating information within computer system 100, and at least one processing device such as processor 112, coupled to bus 122 for processing information. Bus 122 may include low-latency and higher latency paths connected by bridges and adapters and controlled within computer system 100 by multiple bus controllers. When implemented as a server system, computer system 100 typically includes multiple processors designed to improve network servicing power.
Processor 112 may be a general-purpose processor such as IBM's PowerPC (PowerPC is a registered trademark of International Business Machines Corporation) processor that, during normal operation, processes data under the control of operating system and application software accessible from a dynamic storage device such as random access memory (RAM) 114 and a static storage device such as Read Only Memory (ROM) 116. The operating system may provide a graphical user interface (GUI) to the user. In one embodiment, application software contains machine executable instructions that when executed on processor 112 carry out the operations depicted in the flowchart of
The present invention may be provided as a computer program product, included on a machine-readable medium having stored thereon the machine executable instructions used to program computer system 100 to perform a process according to the present invention. The term “machine-readable medium” as used herein includes any medium that participates in providing instructions to processor 112 or other components of computer system 100 for execution. Such a medium may take many forms including, but not limited to, non-volatile media, volatile media, and transmission media. Common forms of non-volatile media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape or any other magnetic medium, a compact disc ROM (CD-ROM) or any other optical medium, punch cards or any other physical medium with patterns of holes, a programmable ROM (PROM), an erasable PROM (EPROM), electrically EPROM (EEPROM), a flash memory, any other memory chip or cartridge, or any other medium from which computer system 100 can read and which is suitable for storing instructions. In the present embodiment, an example of a non-volatile medium is mass storage device 118 which as depicted is an internal component of computer system 100, but will be understood to also be provided by an external device. Volatile media include dynamic memory such as RAM 114. Transmission media include coaxial cables, copper wire or fiber optics, including the wires that comprise bus 122. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency or infrared data communications.
Moreover, the present invention may be downloaded as a computer program product, wherein the program instructions may be transferred from a remote virtual resource, such as a virtual resource 160, to requesting computer system 100 by way of data signals embodied in a carrier wave or other propagation medium via a network link 134 (e.g., a modem or network connection) to a communications interface 132 coupled to bus 122. Virtual resource 160 may include a virtual representation of the resources accessible from a single system or systems, wherein multiple systems may each be considered discrete sets of resources operating on independent platforms, but coordinated as a virtual resource by a grid manager. Communications interface 132 provides a two-way data communications coupling to network link 134 that may be connected, for example, to a local area network (LAN), wide area network (WAN), or an Internet Service Provider (ISP) that provides access to network 102. In particular, network link 134 may provide wired and/or wireless network communications to one or more networks, such as network 102, through which use of virtual resources, such as virtual resource 160, is accessible as provided within a grid environment 150. Grid environment 150 may be part of multiple types of networks, including a peer-to-peer network, or may be part of a single computer system, such as computer system 100.
As one example, network 102 may refer to the worldwide collection of networks and gateways that use a particular protocol, such as Transmission Control Protocol (TCP) and Internet Protocol (IP), to communicate with one another. Network 102 uses electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 134 and through communication interface 132, which carry the digital data to and from computer system 100, are exemplary forms of carrier waves transporting the information. It will be understood that alternate types of networks, combinations of networks, and infrastructures of networks may be implemented.
When implemented as a server system, computer system 100 typically includes multiple communication interfaces accessible via multiple peripheral component interconnect (PCI) bus bridges connected to an input/output controller. In this manner, computer system 100 allows connections to multiple network computers.
Additionally, although not depicted, multiple peripheral components and internal/external devices may be added to computer system 100, connected to multiple controllers, adapters, and expansion slots coupled to one of the multiple levels of bus 122. For example, a display device, audio device, keyboard, or cursor control device may be added as a peripheral component.
Those of ordinary skill in the art will appreciate that the hardware depicted in
With reference now to
It will be understood that grid environment 150 may be provided by a grid vendor or provider, where a cost for use of resources within grid environment 150 may be calculated based on the amount of time required for a grid job to execute or the actual amount of resources used, for example. In addition, it will be understood that grid environment 150 may include grid resources supplied by a single grid vendor, such as a particular business enterprise, or multiple vendors, where each vendor continues to monitor and manage the vendor's group of resources, but grid management system 240 is able to monitor unintended changes across all the resources, regardless of which vendors provide which resources. Further, it will be understood that although resource discovery mechanisms for discovering available grid resources are not depicted, client system 200 or grid management system 240 may discover grid resources advertised from local and global directories available within and outside of grid environment 150.
The central goal of a grid environment, such as grid environment 150, is the organization and delivery of resources from multiple discrete systems viewed as virtual resource 160. Client system 200, server clusters 222, servers 224, workstations and desktops 226, data storage systems 228, networks 230, and the systems creating grid management system 240 may be heterogeneous and regionally distributed with independent management systems, but enabled to exchange information, resources, and services through a grid infrastructure enabled by grid management system 240. Further, server clusters 222, servers 224, workstations and desktops 226, data storage systems 228, and networks 230 may be geographically distributed across countries and continents or locally accessible to one another.
In the example, client system 200 interfaces with grid management system 240. Client system 200 may represent any computing system sending requests to grid management system 240. In particular, client system 200 may send virtual job requests (or requests for quote (RFQs)) and jobs to grid management system 240. Further, while in the present embodiment client system 200 is depicted as accessing grid environment 150 with a request, in alternate embodiments client system 200 may also operate within grid environment 150.
While the systems within virtual resource 160 are depicted in parallel, in reality, the systems may be part of a hierarchy of systems where some systems within virtual resource 160 may be local to client system 200, while other systems require access to external networks. Additionally, it is important to note that systems depicted within virtual resource 160 may be physically encompassed within client system 200.
To implement grid environment 150, grid management system 240 facilitates grid services. Grid services may be designed according to multiple architectures, including, but not limited to, the Open Grid Services Architecture (OGSA). In particular, grid management system 240 refers to the management environment which creates a grid by linking computing systems into a heterogeneous network environment characterized by sharing of resources through grid services.
In particular, as will be described with reference to
According to an advantage of the invention, client system 200 includes a grid client agent for estimating future costs of grid jobs. As will be described with reference to
Referring now to
Within the layers of architecture 300, first, a physical and logical resources layer 330 organizes the resources of the systems in the grid. Physical resources include, but are not limited to, servers, storage media, and networks. The logical resources virtualize and aggregate the physical layer into usable resources such as operating systems, processing power, memory, I/O processing, file systems, database managers, directories, memory managers, and other resources.
Next, a web services layer 320 provides an interface between grid services 310 and physical and logical resources 330. Web services layer 320 implements service interfaces including, but not limited to, Web Services Description Language (WSDL), Simple Object Access Protocol (SOAP), and eXtensible Markup Language (XML) executing atop an Internet Protocol (IP) or other network transport layer. Further, the Open Grid Services Infrastructure (OGSI) standard 322 builds on top of current web services 320 by extending web services 320 to provide capabilities for dynamic and manageable Web services required to model the resources of the grid. In particular, by implementing OGSI standard 322 with web services 320, grid services 310 designed using OGSA are interoperable. In alternate embodiments, other infrastructures or additional infrastructures may be implemented atop web services layer 320.
Grid services layer 310 includes multiple services, the combination of which may implement grid management system 240. For example, grid services layer 310 may include grid services designed using OGSA, such that a uniform standard is implemented in creating grid services. Alternatively, grid services may be designed under multiple architectures. Grid services can be grouped into four main functions. It will be understood, however, that other functions may be performed by grid services.
First, a resource management service 302 manages the use of the physical and logical resources. Resources may include, but are not limited to, processing resources, memory resources, and storage resources. Management of these resources includes scheduling jobs, distributing jobs, and managing the retrieval of the results for jobs. Resource management service 302 monitors resource loads and distributes jobs to less busy parts of the grid to balance resource loads and absorb unexpected peaks of activity. In particular, a user may specify preferred performance levels so that resource management service 302 distributes jobs to maintain the preferred performance levels within the grid.
Second, information services 304 manages the information transfer and communication between computing systems within the grid. Since multiple communication protocols may be implemented, information services 304 manages communications across multiple networks utilizing multiple types of communication protocols.
Third, a data management service 306 manages data transfer and storage within the grid. In particular, data management service 306 may move data to nodes within the grid where a job requiring the data will execute. A particular type of transfer protocol, such as Grid File Transfer Protocol (GridFTP), may be implemented.
Finally, a security service 308 applies a security protocol for security at the connection layers of each of the systems operating within the grid. Security service 308 may implement security protocols, such as Secure Sockets Layer (SSL), to provide secure transmissions. Further, security service 308 may provide a single sign-on mechanism, so that once a user is authenticated, a proxy certificate is created and used when performing actions within the grid for the user.
Multiple services may work together to provide several key functions of a grid computing system. In a first example, computational tasks are distributed within a grid. Data management service 306 may divide a computational task into separate grid service requests for packets of data, which are then distributed by and managed by resource management service 302. The results are collected and consolidated by data management service 306. In a second example, the storage resources across multiple computing systems in the grid are viewed as a single virtual data storage system managed by data management service 306 and monitored by resource management service 302.
An applications layer 340 includes applications that use one or more of the grid services available in grid services layer 310. Advantageously, applications interface with the physical and logical resources 330 via grid services layer 310 and web services 320, such that multiple heterogeneous systems can interact and interoperate.
With reference now to
In the example, the grid management system for a grid provider includes a grid provider bid request portal 404 at which job requests are received and queued. Grid provider bid request portal 404 directs each job request to a workload calculator 408 which calculates the workload requirements of job request 402. In particular, workload requirements may include, for example, an estimation of the computational cycles that a job will require and the type of hardware and software platforms required. Workload calculator 408 distributes the workload calculations as workload data 412 to a bid formalizer 418 and as workload data 410 to a cost calculator 414. Cost calculator 414 uses the workload calculation, job request requirements, and current and estimated costs for use of resources to estimate a cost for processing the grid job specified in job request 402. Cost calculator 414 returns cost data 416 to bid formalizer 418. Bid formalizer 418 gathers workload data 412 and cost data 416 into a bid 420 which is returned from the grid provider to client system 200. Bid 420 may agree to perform the grid job exactly as requested or may include exceptions, exclusions, and other variations from the specification in job request 402. In addition, bid 420 may be viewed as a service level agreement, specifying a performance standard to which the grid provider agrees if the grid job is later submitted to the grid provider. Further, bid formalizer 418 may create a bid based on a pricing contract agreement reached between the grid client and the provider before or after the bid placement.
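For illustration only, the provider-side flow just described might be sketched as follows; the class and function names, the simple cycles-times-rate pricing, and the numeric values are assumptions and not the grid provider's actual calculations:

```python
from dataclasses import dataclass, field

@dataclass
class JobRequest:                       # analogous to job request 402
    cycles_estimate: float              # estimated computational cycles required
    platform: str                       # required hardware/software platform

@dataclass
class Bid:                              # analogous to bid 420
    workload: float                     # workload data 412
    cost: float                         # cost data 416
    exceptions: list = field(default_factory=list)

def formalize_bid(request: JobRequest, rate_per_cycle: float) -> Bid:
    workload = request.cycles_estimate                  # workload calculator 408
    cost = workload * rate_per_cycle                    # cost calculator 414
    return Bid(workload=workload, cost=cost)            # bid formalizer 418

bid = formalize_bid(JobRequest(cycles_estimate=1e9, platform="unix"), rate_per_cycle=4e-8)
```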
Referring now to
In the example, the grid management system for a grid provider includes a job queue 504 that receives job 502 and holds job 502 until grid scheduler 506 can schedule and dispatch job 502 to grid resources. In particular, grid scheduler 506 accesses service level agreement (SLA) 508, which includes the performance requirements for job 502, based on a bid placed by the grid provider for the specific job or an agreement for job performance requirements for jobs received from a particular client system, for example. Grid scheduler 506 accesses the grid resources required to handle job 502, for example server A 516, server B 518, and server N 520. Although not depicted, grid scheduler 506 may access a grid manager and other components of the grid management system that build the required resources for a grid job, access resources from other grid environments, and sell off grid jobs if necessary to other grid providers.
In the example, grid scheduler 506 divides job 502 into job parts 510, 512, and 514 that are distributed to server A 516, server B 518, and server N 520, respectively. A job results manager 528 collects results 522, 524, and 526 from server A 516, server B 518, and server N 520, respectively. Job results manager 528 returns complete results 530 to client system 200. In addition, job results manager 528 updates an accounting manager 532 when the job is complete. Accounting manager 532 communicates with a workload manager (not depicted) that monitors the use of server A 516, server B 518, and server N 520 by job 502 to calculate the total workload of job 502 and the total cost of job 502. In particular, SLA 508 may specify factors that control the total cost of job 502, such as a maximum cost, a fixed cost, a sliding cost scale if performance requirements are not met, and other pricing adjustment factors.
With reference now to
Grid providers 604 and 614 process grid job microcosms 602 and 612, respectively, and return results 606 and 616 in the same manner as described with reference to a grid provider processing a grid job in
According to an advantage, by sampling the actual performance and cost for each provider and translating the cost into a client-defined application metric basis, client system 200 can compare the actual cost for performance, rather than the promised cost for performance, on a client-defined application metric basis, before sending a large grid job or multiple large grid jobs. In the example, after sampling the results and cost for each of grid job microcosms 602 and 612, client system 200 selects to send full grid job 620, of which grid job microcosms 602 and 612 are representative sets, to grid provider 604. Grid provider 604 processes full grid job 620, as described with reference to
Referring now to
A job microcosm controller 702 controls the process, as described with reference to
In particular, job microcosm controller 702 may first query multiple grid providers with a job request for the job microcosm. In addition to querying grid providers with job requests as described with reference to
Next, once job microcosm controller 702 acquires bids and rate quotes from multiple grid providers, job microcosm controller 702 submits job microcosms, which are small jobs representative of larger grid jobs to be submitted, to a selection of the multiple grid providers. In one example, if a corporation needs an average of 20,000,000 records merged each night, then the job microcosm distributed to each of the selected grid providers may include 1% of these records. In another example, a client does not send a portion of the actual grid job, but instead submits a job microcosm of an analogous job with tester data.
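As a sketch of how such a microcosm might be formed on the client side, the sampling function below is an assumption for illustration; the 1% fraction and the fixed seed are likewise illustrative:

```python
import random

def build_microcosm(records, fraction=0.01, seed=0):
    """Draw a small representative sample of the records in the full job to
    form a job microcosm; here roughly 1% of the records, chosen at random."""
    rng = random.Random(seed)
    sample_size = max(1, int(len(records) * fraction))
    return rng.sample(records, sample_size)

# Example: a microcosm of a nightly 20,000,000-record merge job
# microcosm = build_microcosm(all_records)   # roughly 200,000 records
```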
When job microcosm controller 702 receives the computational results of the job microcosms, the charges from each grid provider for each job microcosm are also received. Job microcosm controller 702 also detects the time taken, once the job microcosm was submitted to a grid provider, for the grid provider to return a result. In the example, the first provider takes five minutes to return a result and charges $2.20, the second provider takes one minute to return a result and charges $1.74, and the third provider takes two minutes to return a result and charges $3.40. In particular, it will be understood that job microcosm controller 702 may receive the charges from each grid provider through multiple communication media, including a separate transmission from the grid provider to client system 200, an email communication, or an embedded accounting token digitally signed and returned to client system 200 with a transaction receipt.
Once all the costs per grid microcosm are received, a cost comparator 706 compares the actual costs by grid provider for performing a particular category of grid job. In particular, cost comparator 706 calculates a cost by client-defined application metric for each grid job microcosm. Each grid provider submits provider-defined metric costs, such as a cost per hour or a cost per provider-based complex formula. The client, however, defines grid jobs at an application level granularity. For example, a client-defined application metric is a cost per record merge. Once cost comparator 706 calculates a client-defined application metric to grid-provider metric ratio, then cost comparator 706 can translate the number of client-defined application metric operations required for a full job into a price, using the client-defined application metric to grid-provider metric ratio as a translation value. Cost comparator 706 determines the most cost effective grid provider and triggers submission of the remainder of the grid job or the actual grid job to the most cost effective grid provider.
In addition, cost comparator 706 calculates the client-defined application metric to grid-provider metric cost ratio for cost tables 710 and stores the cost by client-defined application metric in cost tables 710. As illustrated with reference to
Cost tables 710 includes a second column for a provider identifier 806. In the example, values listed under provider identifiers 806 are “acme grid”, “wiley grid”, and “coyote grid”. It will be understood that other types of provider identifiers may be implemented, including an address and other indicia of a grid provider.
Cost tables 710 includes a third column for a grid-provider metric 808. In the example, the values listed under grid-provider metric 808 are “hourly charge”, “proprietary composite charge”, and “million floating point (MFP) operations charge”. It will be understood that additional types of grid-provider metric values may be defined by grid providers. Further, it will be understood that grid providers may designate a grid-provider metric when bidding on a job request or with the charges returned for processing a grid microcosm. In addition, a “proprietary composite charge” refers to a charge calculated by the grid provider based on multiple factors, including for example, the data volume moved across the network, jobs submitted to the processor run queues, and bytes written to and read from a grid provider's own storage system.
A fourth column in cost tables 710 includes a translation value 810 which represents the ratio of the client-defined application metric to the grid-provider metric. In the example, values listed under translation value 810 include “3,000,000 merges per hour”, “600 merges per composite unit”, and “2000 merges per MFP operations”. In particular, the translation values are calculated by cost comparator 706 and represent the number of client-defined application metric operations accomplished per grid-provider metric unit. As previously described, translation values may be calculated based on a grid microcosm. In other embodiments, however, translation values may also be calculated and updated based on a full job submission.
Finally, the fifth column in cost tables 710 includes an offered pricing per grid-provider metric 812. In the example, the values listed under offered pricing per grid-provider metric 812 include “$40 per hour”, “$0.02 per composite unit”, and “$0.08 per MFP”. In an alternate embodiment, historical pricing ranges may be given by provider, as well as the most recent price by provider.
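As a hedged sketch, the columns just described might be held in a structure like the following; the field names and the “record merge” category value are assumptions, while the provider, metric, translation, and pricing values repeat the example above:

```python
# Each entry mirrors one row of cost tables 710: a job category column,
# provider identifier 806, grid-provider metric 808, translation value 810,
# and offered pricing per grid-provider metric 812.
cost_table = [
    {"category": "record merge", "provider": "acme grid",
     "metric": "hourly charge", "translation": 3_000_000, "price": 40.00},
    {"category": "record merge", "provider": "wiley grid",
     "metric": "proprietary composite charge", "translation": 600, "price": 0.02},
    {"category": "record merge", "provider": "coyote grid",
     "metric": "MFP operations charge", "translation": 2_000, "price": 0.08},
]
```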
According to an advantage of the invention, when a client wants to estimate a cost of a new grid job, grid job classifier 704 classifies the grid job and future cost estimator 708 searches cost tables 710 for client-defined application metric based costs for that category of grid job. Then, based on the client-defined application metric requirements of the new grid job, future cost estimator 708 estimates the cost for the new grid job according to grid provider. In one example, based on the values illustrated in cost tables 710, a new grid job requiring 3,000,000 batch merges would cost $40 on “acme grid” (3,000,000 merges per hour/$40 per hour), $100 on “wiley grid” (600 merges per composite unit/$0.02 per composite unit), and $120 on “coyote grid” (2000 merges per MFP operations/$0.08 per MFP).
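A minimal sketch of that estimate over the cost_table structure sketched above (the helper name is hypothetical):

```python
def estimate_costs(cost_table, category, app_ops):
    """Estimate, per grid provider, the cost of a job requiring app_ops
    client-defined application metric operations of the given category."""
    return {row["provider"]: app_ops / row["translation"] * row["price"]
            for row in cost_table if row["category"] == category}

print(estimate_costs(cost_table, "record merge", 3_000_000))
# roughly {'acme grid': 40.0, 'wiley grid': 100.0, 'coyote grid': 120.0}
```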
Further, according to the advantage, if future cost estimator 708 searches cost tables 710 for client-defined application metric based costs for that category of grid job and none are available or the costs are out of date, then future cost estimator 708 initiates job microcosm controller 702 to determine current costs for microcosms of the particular classification category of grid job.
In addition, it is important to note that as pricing per grid-provider metric changes over time or in response to market conditions, the new cost can be inserted into the offered pricing per grid-provider metric 812 column in cost tables 710 and price estimates for jobs classified within the categories updated, without changing the translation values. For example, if the “acme grid” price increases to $55 per hour, the “wiley grid” price drops to $0.013 per composite unit, and the “coyote grid” price drops to $0.05 per MFP, the client-defined application metric to grid-provider metric ratio listed under the translation value 810 column does not change, so future cost estimator 708 can still estimate the future cost to complete a 5,000,000 batch merge job tomorrow based on the updated prices, e.g. $91.67 on “acme grid” (3,000,000 merges per hour/$55 per hour), $108.33 on “wiley grid” (600 merges per composite unit/$0.013 per composite unit) and $125 on “coyote grid” (2000 merges per MFP operation/$0.05 per MFP).
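Continuing the earlier sketch, a price adjustment only replaces the stored price for the affected provider; the translation value is reused unchanged (the helper name and the update mechanism are assumptions):

```python
def update_price(cost_table, provider, new_price):
    """Record an adjusted cost per grid-provider metric for one provider
    without recalculating that provider's translation ratio."""
    for row in cost_table:
        if row["provider"] == provider:
            row["price"] = new_price

update_price(cost_table, "acme grid", 55.00)
update_price(cost_table, "wiley grid", 0.013)
update_price(cost_table, "coyote grid", 0.05)
print(estimate_costs(cost_table, "record merge", 5_000_000))
# roughly {'acme grid': 91.67, 'wiley grid': 108.33, 'coyote grid': 125.0}
```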
Referring now to
Returning to block 904, if there are not already grid jobs of the same category priced in the cost table, then the process passes to block 908. In addition, although not depicted, at block 904 a determination may also be made that, even though there are grid jobs of the same category already priced in the cost table, the pricing is outdated, and the process passes to block 908.
Block 908 depicts creating a grid job request for a small part of the grid job. Next, block 910 illustrates distributing the grid job request to multiple grid providers. Thereafter, block 912 depicts a determination whether the grid client agent receives grid bids for the small part of the grid job. As depicted, if no bids are yet received, the process iterates at block 912; however, if no bids are received after a period of time, then the job request may be adjusted and resubmitted to the multiple grid providers. Once bids are received, then the process passes to block 914.
Block 914 depicts selecting those grid job providers whose bids meet the bid request requirements. Next, block 916 illustrates distributing small parts of the grid job to the selection of the grid job providers. Thereafter, block 918 depicts a determination whether the grid client agent receives all the results of the grid job processing with costs. As illustrated, if the results are not yet received, the process iterates at block 918; however, if not all results are received within the expected period of time for response, then those grid providers not returning results may be queried. Once the results are retrieved, then the process passes to block 920.
Block 920 depicts calculating the client-defined application metric to grid provider metric ratio for each grid provider. Next, block 921 illustrates calculating the cost per grid provider using the ratio based on the number of client-defined application metric operations required for the large grid job. Next, block 922 illustrates comparing the actual costs per grid provider. Thereafter, block 924 depicts distributing the remainder of the grid job to the projected least expensive grid provider based on client-defined application metrics. Then, block 926 illustrates storing the ratio and grid provider costs in the cost table according to the client-defined classification of the grid job, and the process ends.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
This application is a continuation of commonly assigned U.S. patent application Ser. No. 11/034,305, filed Jan. 12, 2005, which is hereby incorporated herein by reference. The present application is related to the following co-pending applications, hereby incorporated herein by reference: (1) U.S. patent application Ser. No. 11/034,303; and (2) U.S. patent application Ser. No. 11/034,335.
Number | Name | Date | Kind |
---|---|---|---|
4096561 | Trinchieri | Jun 1978 | A |
4591980 | Huberman | May 1986 | A |
5220674 | Morgan | Jun 1993 | A |
5325525 | Shan et al. | Jun 1994 | A |
5392430 | Chen et al. | Feb 1995 | A |
5416840 | Cane et al. | May 1995 | A |
5559978 | Spilo et al. | Sep 1996 | A |
5630156 | Privat et al. | May 1997 | A |
5640569 | Miller | Jun 1997 | A |
5655081 | Bonnell | Aug 1997 | A |
5729472 | Seiffert et al. | Mar 1998 | A |
5881238 | Aman et al. | Mar 1999 | A |
5884046 | Antonov | Mar 1999 | A |
5905975 | Ausubel | May 1999 | A |
5931911 | Remy et al. | Aug 1999 | A |
5960176 | Kuroki et al. | Sep 1999 | A |
5978583 | Ekanadham et al. | Nov 1999 | A |
5996013 | Delp et al. | Nov 1999 | A |
6003075 | Arendt et al. | Dec 1999 | A |
6021398 | Ausubel | Feb 2000 | A |
6023612 | Harris et al. | Feb 2000 | A |
6038560 | Wical | Mar 2000 | A |
6049828 | Dev | Apr 2000 | A |
6064810 | Raad et al. | May 2000 | A |
6067580 | Aman et al. | May 2000 | A |
6119186 | Watts et al. | Sep 2000 | A |
6134680 | Yeomans | Oct 2000 | A |
6154787 | Urevig et al. | Nov 2000 | A |
6167445 | Gai | Dec 2000 | A |
6182139 | Brendel | Jan 2001 | B1 |
6304892 | Bhoj et al. | Oct 2001 | B1 |
6310889 | Parsons et al. | Oct 2001 | B1 |
6324656 | Gleichaf | Nov 2001 | B1 |
6356909 | Spencer | Mar 2002 | B1 |
6370565 | Van Gong | Apr 2002 | B1 |
6397197 | Gindlesperger | May 2002 | B1 |
6418462 | Xu | Jul 2002 | B1 |
6430711 | Sekizawa | Aug 2002 | B1 |
6438704 | Harris et al. | Aug 2002 | B1 |
6452692 | Yacoub | Sep 2002 | B1 |
6453376 | Fairman et al. | Sep 2002 | B1 |
6463454 | Lumelsky et al. | Oct 2002 | B1 |
6463457 | Armentrout | Oct 2002 | B1 |
6466947 | Arnold et al. | Oct 2002 | B2 |
6470384 | O'Brien et al. | Oct 2002 | B1 |
6480955 | DeKoning et al. | Nov 2002 | B1 |
6516312 | Kraft et al. | Feb 2003 | B1 |
6552813 | Yacoub | Apr 2003 | B2 |
6560609 | Frey et al. | May 2003 | B1 |
6564377 | Jayasimha | May 2003 | B1 |
6567935 | Figueroa | May 2003 | B1 |
6578160 | MacHardy et al. | Jun 2003 | B1 |
6594684 | Hodjat et al. | Jul 2003 | B1 |
6597956 | Aziz et al. | Jul 2003 | B1 |
6606602 | Kolis | Aug 2003 | B1 |
6615373 | Elko | Sep 2003 | B2 |
6625643 | Colby et al. | Sep 2003 | B1 |
6633868 | Min et al. | Oct 2003 | B1 |
6640241 | Ozzie et al. | Oct 2003 | B1 |
6647373 | Carlton-Foss | Nov 2003 | B1 |
6654759 | Brunet et al. | Nov 2003 | B1 |
6654807 | Farber et al. | Nov 2003 | B2 |
6671676 | Shacham | Dec 2003 | B1 |
6681251 | Leymann et al. | Jan 2004 | B1 |
6697801 | Eldredge et al. | Feb 2004 | B1 |
6701342 | Bartz et al. | Mar 2004 | B1 |
6714987 | Amin et al. | Mar 2004 | B1 |
6717694 | Fukunaga et al. | Apr 2004 | B1 |
6735200 | Novaes | May 2004 | B1 |
6738736 | Bond | May 2004 | B1 |
6748416 | Carpenter et al. | Jun 2004 | B2 |
6752663 | Bright et al. | Jun 2004 | B2 |
6799208 | Sankaranarayan et al. | Sep 2004 | B1 |
6816905 | Sheets et al. | Nov 2004 | B1 |
6816907 | Mei et al. | Nov 2004 | B1 |
6941865 | Kato | Sep 2005 | B2 |
6954739 | Bouillet et al. | Oct 2005 | B1 |
6963285 | Fischer et al. | Nov 2005 | B2 |
7050184 | Miyamoto | May 2006 | B1 |
7055052 | Chalasani et al. | May 2006 | B2 |
7080077 | Ramamurthy et al. | Jul 2006 | B2 |
7086086 | Ellis | Aug 2006 | B2 |
7093259 | Pulsipher et al. | Aug 2006 | B2 |
7096248 | Masters et al. | Aug 2006 | B2 |
7123375 | Nobutani et al. | Oct 2006 | B2 |
7124062 | Gebhart | Oct 2006 | B2 |
7171654 | Werme et al. | Jan 2007 | B2 |
7181302 | Bayne | Feb 2007 | B2 |
7181743 | Werme et al. | Feb 2007 | B2 |
7234032 | Durham et al. | Jun 2007 | B2 |
7243121 | Neiman et al. | Jul 2007 | B2 |
7243147 | Hodges et al. | Jul 2007 | B2 |
7245584 | Goringe et al. | Jul 2007 | B2 |
7269757 | Lieblich et al. | Sep 2007 | B2 |
7272732 | Farkas et al. | Sep 2007 | B2 |
7283935 | Pritchard et al. | Oct 2007 | B1 |
7293092 | Sukegawa | Nov 2007 | B2 |
7340654 | Bigagli et al. | Mar 2008 | B2 |
7426267 | Caseau | Sep 2008 | B1 |
7433931 | Richoux | Oct 2008 | B2 |
7437675 | Casati et al. | Oct 2008 | B2 |
7451106 | Gindlesperger | Nov 2008 | B1 |
7472079 | Fellenstein et al. | Dec 2008 | B2 |
7472112 | Pfleiger et al. | Dec 2008 | B2 |
7533168 | Pabla et al. | May 2009 | B1 |
7533170 | Fellenstein et al. | May 2009 | B2 |
7552437 | Di Luoffo et al. | Jun 2009 | B2 |
7562035 | Fellenstein et al. | Jul 2009 | B2 |
7562143 | Fellenstein et al. | Jul 2009 | B2 |
7571120 | Fellenstein et al. | Aug 2009 | B2 |
7584274 | Bond et al. | Sep 2009 | B2 |
7620706 | Jackson | Nov 2009 | B2 |
7739155 | Fellenstein et al. | Jun 2010 | B2 |
20020023168 | Bass et al. | Feb 2002 | A1 |
20020057684 | Miyamoto et al. | May 2002 | A1 |
20020072974 | Pugliese et al. | Jun 2002 | A1 |
20020103904 | Hay | Aug 2002 | A1 |
20020116488 | Subramanian et al. | Aug 2002 | A1 |
20020147578 | O'Neil et al. | Oct 2002 | A1 |
20020152305 | Jackson et al. | Oct 2002 | A1 |
20020152310 | Jain | Oct 2002 | A1 |
20020165979 | Vincent | Nov 2002 | A1 |
20020171864 | Sesek | Nov 2002 | A1 |
20020188486 | Gil et al. | Dec 2002 | A1 |
20030011809 | Suzuki et al. | Jan 2003 | A1 |
20030023499 | Das et al. | Jan 2003 | A1 |
20030036886 | Stone | Feb 2003 | A1 |
20030041010 | Yonao-Cowan | Feb 2003 | A1 |
20030058797 | Izmailov et al. | Mar 2003 | A1 |
20030088671 | Klinker et al. | May 2003 | A1 |
20030101263 | Boilet et al. | May 2003 | A1 |
20030105868 | Kimbrel et al. | Jun 2003 | A1 |
20030108018 | Dujardin et al. | Jun 2003 | A1 |
20030110419 | Banerjee et al. | Jun 2003 | A1 |
20030112809 | Bharali et al. | Jun 2003 | A1 |
20030115099 | Burns et al. | Jun 2003 | A1 |
20030120701 | Pulsipher et al. | Jun 2003 | A1 |
20030126240 | Vosseler | Jul 2003 | A1 |
20030126265 | Aziz et al. | Jul 2003 | A1 |
20030128186 | Laker | Jul 2003 | A1 |
20030140143 | Wolf et al. | Jul 2003 | A1 |
20030145084 | McNerney | Jul 2003 | A1 |
20030161309 | Karuppiah | Aug 2003 | A1 |
20030172061 | Krupin et al. | Sep 2003 | A1 |
20030191795 | Bernardin et al. | Oct 2003 | A1 |
20030195813 | Pallister et al. | Oct 2003 | A1 |
20030200347 | Weitzman | Oct 2003 | A1 |
20030204485 | Triggs | Oct 2003 | A1 |
20030204758 | Singh | Oct 2003 | A1 |
20030212782 | Canali et al. | Nov 2003 | A1 |
20040003077 | Bantz | Jan 2004 | A1 |
20040015976 | Lam | Jan 2004 | A1 |
20040019624 | Sukegawa | Jan 2004 | A1 |
20040059729 | Krupin et al. | Mar 2004 | A1 |
20040064548 | Adams et al. | Apr 2004 | A1 |
20040078471 | Yang | Apr 2004 | A1 |
20040093381 | Hodges et al. | May 2004 | A1 |
20040095237 | Chen | May 2004 | A1 |
20040098606 | Tan et al. | May 2004 | A1 |
20040103339 | Chalasani et al. | May 2004 | A1 |
20040120256 | Park | Jun 2004 | A1 |
20040128186 | Breslin et al. | Jul 2004 | A1 |
20040128374 | Hodges et al. | Jul 2004 | A1 |
20040145775 | Kubler et al. | Jul 2004 | A1 |
20040193461 | Keohane et al. | Sep 2004 | A1 |
20040213220 | Davis | Oct 2004 | A1 |
20040215590 | Kroening | Oct 2004 | A1 |
20040215973 | Kroening | Oct 2004 | A1 |
20040225711 | Burnett et al. | Nov 2004 | A1 |
20050015437 | Strait | Jan 2005 | A1 |
20050021349 | Chiliotis et al. | Jan 2005 | A1 |
20050021742 | Yemini et al. | Jan 2005 | A1 |
20050027691 | Brin et al. | Feb 2005 | A1 |
20050027785 | Bozak et al. | Feb 2005 | A1 |
20050041583 | Su et al. | Feb 2005 | A1 |
20050044228 | Birkestrand et al. | Feb 2005 | A1 |
20050065994 | Creamer et al. | Mar 2005 | A1 |
20050071843 | Guo et al. | Mar 2005 | A1 |
20050108394 | Braun | May 2005 | A1 |
20050120160 | Plouffe et al. | Jun 2005 | A1 |
20050132041 | Kundu | Jun 2005 | A1 |
20050138162 | Byrnes | Jun 2005 | A1 |
20050138175 | Kumar et al. | Jun 2005 | A1 |
20050149294 | Gebhart | Jul 2005 | A1 |
20050160423 | Bantz et al. | Jul 2005 | A1 |
20050182838 | Sheets et al. | Aug 2005 | A1 |
20050187797 | Johnson | Aug 2005 | A1 |
20050187977 | Frost | Aug 2005 | A1 |
20050192968 | Beretich et al. | Sep 2005 | A1 |
20050234937 | Ernest et al. | Oct 2005 | A1 |
20050257079 | Arcangeli | Nov 2005 | A1 |
20050283788 | Bigagli et al. | Dec 2005 | A1 |
20060047802 | Iszlai et al. | Mar 2006 | A1 |
20060064698 | Miller et al. | Mar 2006 | A1 |
20060069621 | Chang et al. | Mar 2006 | A1 |
20060075041 | Antonoff et al. | Apr 2006 | A1 |
20060075042 | Wang et al. | Apr 2006 | A1 |
20060149576 | Ernest et al. | Jul 2006 | A1 |
20060288251 | Jackson | Dec 2006 | A1 |
20060294218 | Tanaka et al. | Dec 2006 | A1 |
20060294238 | Naik et al. | Dec 2006 | A1 |
20070022425 | Jackson | Jan 2007 | A1 |
20080168451 | Challenger et al. | Jul 2008 | A1 |
20090083425 | Bozak et al. | Mar 2009 | A1 |
20090240547 | Fellenstein et al. | Sep 2009 | A1 |
Number | Date | Country |
---|---|---|
1336054 | Feb 2002 | CN |
0790559 | Aug 1997 | EP |
1109353 | Jun 2001 | EP |
1267552 | Dec 2002 | EP |
08-272638 | Oct 1996 | JP |
2000-066904 | Mar 2000 | JP |
2000-194572 | Jul 2000 | JP |
2002-182932 | Jun 2002 | JP |
2003-067199 | Mar 2003 | JP |
2003-233515 | Aug 2003 | JP |
0074313 | Jul 2000 | WO |
03067494 | Aug 2003 | WO |
Entry |
---|
“Grid computing set for big growth”. Tanner, John. America's Network, vol. 107, No. 8. p. 32. May 15, 2003. |
“IBM Girds for Grids”. McConnell, Chris. Enterprise System Journal. vol. 16, No. 10. p. 10. Oct. 2001. |
Office Action, U.S. Appl. No. 11/767,502, filed Jun. 23, 2007, Zhendong Bao, Mailed Jun. 25, 2009, pp. 1-14. |
Weng et al, “A cost-based online scheduling algorithm for job assignment on computational grids”, Springer-Verlag Berlin Heidelberg, 2003, pp. 343-351. |
Andrade et al, “Our grid: An approach to easily assemble grids with equitable resource sharing”, Springer-Verlag Berlin Heidelberg, 2003, pp. 61-86. |
Chase, JS et al, “Dynamic Virtual Clusters in a Grid Site Manager,” High Performance Distributed Computing 2003. Proceedings, 12th IEEE International Symposium, Jun. 22-24, 2003, Piscataway, NJ USA, IEEE, pp. 90-100. |
Office Action, U.S. Appl. No. 10/940,452, filed Sep. 14, 2004, Craig Fellenstein, Mailed Jun. 23, 2009, pp. 1-13. |
“IBM Girds for Grids”. McConnell, Chris. Enterprise System Journal, Oct. 2001, 1 page. |
“Grid Computing set for big growth”. Tanner, John, America's Network, vol. 107, No. 8, May 15, 2003, 6 pages. |
Office Action, U.S. Appl. No. 12/125,892, filed May 22, 2008, mailed Aug. 26, 2009. |
Office Action, U.S. Appl. No. 12/125,879, filed May 22, 2008, mailed Sep. 15, 2009. |
Notice of Allowance, U.S. Appl. No. 12/194,989, filed Aug. 20, 2008, mailed Sep. 30, 2009. |
Office Action, U.S. Appl. No. 12/211,243, filed Sep. 16, 2008, Di Luoffo et al, Mailed Aug. 12, 2009, pp. 1-18. |
Office Action, U.S. Appl. No. 11/031,542, filed Jan. 6, 2005, Dawson et al, Mailed Jul. 7, 2009, pp. 1-15. |
Cao et a “Grid Flow: Workflow Management for Grid Computing”, Cluster Computing and the Grid, 2003, Proceedings. CCGrid 2003. 3rd IEEE/ACM International Symposium on : Publication Date May 12-15, 2003. |
Moore et al, “Managing Mixed Use Clusters with Cluster on Demand”, Duke University, Nov. 2002. |
Office Action, U.S. Appl. No. 11/031,426, filed Jan. 6, 2005, Carl Philip Gusler et al., mailed Nov. 13, 2009, 21 Pages. |
Notice of Allowance, U.S. Appl. No. 11/031,403, filed Jan. 6, 2005, Leslie Mark Ernest et al., Mailed Oct. 5, 2009, 15 Pages. |
In re Vincent Valentino Di Luoffo, Notice of Allowance, U.S. Appl. No. 12/211,243, filed Sep. 16, 2003, mail date Dec. 31, 2009, 18 pages. |
In re Fellenstein, Final Office Action, U.S. Appl. No. 11/031,490, filed Jan. 6, 2005, mail date Dec. 28, 2009, 21 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 12/364/469, filed Feb. 2, 2009, mail date Jan. 5, 2010, 27 pages. |
In re Fellenstein, Supplemental Notice of Allowance, U.S. Appl. No. 12/364,469, filed Feb. 2, 2009, mail date Jan. 19, 2010, 7 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 11/031,542, filed Jan. 6, 2005, mail date Dec. 8, 2009, 35 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 10/940,452, filed Sep. 14, 2004, mail date Dec. 16, 2009, 28 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 12/125,879, filed May 22, 2008, mail date Jan. 29, 2010, 24 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 11/031,490, filed Jan. 6, 2005, mail date Mar. 9, 2010, 12 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 12/359,216, filed Jan. 23, 2009, mail date Feb. 1, 2010, 25 pages. |
TTI Cluster Computing Services On Demand, ClusterOnDemand.com, publicly available and archived by Archive.org on or before Dec. 8, 2004, 4 pages. |
In re Fellenstein, Office Action, U.S. Appl. No. 12/196,287, filed Aug. 22, 2008, mail date Mar. 30, 2010, 24 pages. |
In re Fellenstein, Office Action, U.S. Appl. No. 11/031,489, filed Jan. 6, 2005, mail date Apr. 5, 2010, 28 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 12/364,469, filed Feb. 2, 2009, mail date Apr. 14, 2010, 16 pages. |
In re Gusler, Office Action, U.S. Appl. No. 11/031,426, filed Jan. 6, 2005, mail date Apr. 29, 2010, 26 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 12/435,370, filed May 4, 2009, mailing date Sep. 1, 2010, 43 pages. |
In re Bao, US Office Action, U.S. Appl. No. 11/767,502, filed Jun. 23, 2007, mailing date Jul. 12, 2010, 35 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 12/480,939, filed Jun. 9, 2009, mailing date Sep. 9, 2010, 13 pages. |
In re Fellenstein, USPTO Notice of Allowance, U.S. Appl. No. 10/756,134, filed Jan. 13, 2004, mailing date Apr. 22, 2008, 12 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 10/756,134, filed Jan. 13, 2004, mailing date Oct. 31, 2007, 17 pages. |
In re Di Luoffo, USPTO Notice of Allowance, U.S. Appl. No. 10/757,270, filed Jan. 14, 2004, mailing date Aug. 4, 2008, 10 pages. |
In re Di Luoffo, USPTO Office Action, U.S. Appl. No. 10/757,270, filed Jan. 14, 2004, mailing date Jan. 24, 2008, 20 pages. |
In re Fellenstein, USPTO Notice of Allowance, U.S. Appl. No. 10/756,138, filed Jan. 13, 2004, mailing date Jun. 5, 2008, 31 pages. |
In re Fellenstein, USPTO Notice of Allowance, U.S. Appl. No. 10/756,138, filed Jan. 13, 2004, mailing date Feb. 6, 2009, 51 pages. |
Rolia, Jerry et al, Service Centric Computing—Next Generation Internet Computing, 2002 Springer-Verlag Berlin Heidelberg, 17 pages. |
Belloum, Adam et al, VLAM-G; a grid based virtual laboratory, 2002, Future Generation Computer Systems 19, Elsevier Science B.V., 9 pages. |
Min D and Mutka M, Efficient Job Scheduling in a Mesh Multicomputer Without Discrimination Against Large Jobs, 1995, IEEE, 8 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 10/756,138, filed Sep. 27, 2007, mailing date Sep. 27, 2007, 49 pages. |
In re Di Luoffo, USPTO Notice of Allowance, U.S. Appl. No. 11/034,304, filed Jul. 1, 2008, mailing date Oct. 1, 2008, 6 pages. |
In re Di Luoffo, USPTO Office Action, U.S. Appl. No. 11/034,304, filed Jan. 12, 2005, mailing date Nov. 28, 2007, 26 pages. |
In re Di Luoffo, USPTO Office Action, U.S. Appl. No. 12/194,989, filed Apr. 16, 2009, mailing date Apr. 16, 2009, 5 pages. |
In re Fellenstein, USPTO Notice of Allowance, U.S. Appl. No. 11/034,335, filed Jan. 12, 2005, mailing date Aug. 7, 2008, 7 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 11/034,335, filed Jan. 12, 2005, mailing date Feb. 22, 2008, 29 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 11/034,490, filed Jan. 6, 2005, mailing date May 29, 2009, 66 pages. |
Avellino et al, “The DataGrid Workload Management System: Challenges and Results”, Journal of Grid Computing, 2004, copyright Springer 205, pp. 353-367, 15 pages. |
In re Bao, USPTO Office Action, U.S. Appl. No. 10/865,270, filed Jun. 10, 2004, mailing date Nov. 7, 2006, 20 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 11/031,541, filed Jan. 6, 2005, mailing date May 20, 2008, 35 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 11/031,543, filed Jan. 6, 2005, mailing date Dec. 7, 2007, 17 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 11/031,543, filed Jan. 6, 2005, mailing date Jan. 27, 2009, 25 pages. |
In re Fellenstein, USPTO Notice of Allowance, U.S. Appl. No. 11/031,543, filed Jan. 6, 2005, mailing date May 11, 2009, 72 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 11/031,543, filed Jan. 6, 2005, mailing date Jun. 25, 2007, 41 pages. |
In re Fellenstein, USPTO Office Action, U.S. Appl. No. 11/031,543, filed Jan. 6, 2005, mailing date Jul. 10, 2008, 45 pages. |
Ding et al, “An Agent Model for Managing Distributed Software Resources in Grid Environment”, School of Computer Engineering and Science, China, 2003, pp. 971-980. |
In re Ernest, USPTO Office Action, U.S. Appl. No. 11/031,403, filed Jan. 6, 2005, mailing date Apr. 24, 2009, 28 pages. |
In re Ernest, USPTO Office Action, U.S. Appl. No. 11/031,403, filed Jan. 6, 2005, mailing date Oct. 24, 2008, 19 pages. |
Hill, J.R., “A management platform for commercial Web Services”, BT Technology Journal (Jan. 2004), vol. 22, No. 1, pp. 52-62. |
Al-Theneyan, Ahmed Hamdan, “A Policy-Based Resource Brokering Environment for Computational Grids”, 2002, Ph.D. dissertation, Old Dominion University, Virginia, United States, as cited by the Examiner in In re Ernest, USPTO Office Action, U.S. Appl. No. 11/031,403, filed Jan. 6, 2005, mailing date Oct. 24, 2008. |
Leff, Avraham, Rayfield, James T. et al “Service-Level Agreements and Commercial Grids”, IEEE Internet Computing, Jul.-Aug. 2003, pp. 44-50, discloses monitoring and enforcing SLAs on pp. 48-49. |
Alexander Keller et al “The WSLA Framework: Specifying and Monitoring Service Level Agreements for Web Services” Journal of Network and Systems Management, vol. 11, No. 1, Mar. 2003, pp. 57-81. |
Menasce, Daniel A and Casalicchio, Emiliano “QoS in Grid Computing”, IEEE Internet Computing, Jul.-Aug. 2004, pp. 85-87. |
T Boden, “The Grid Enterprise—structuring the agile business for the future”, BT Technology Journal, vol. 22, No. 1, Jan. 2004, pp. 107-117. |
In re Gusler, USPTO Office Action, U.S. Appl. No. 11/031,426, filed Jan. 6, 2005, mailing date Apr. 1, 2009, 42 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 11/031,427, filed Jan. 6, 2005, mailing date Oct. 23, 2008, 14 pages. |
In re Fellenstein, Office Action, U.S. Appl. No. 11/031,427, filed Jan. 6, 2005, mailing date May 21, 2008, 26 pages. |
In re Bao Notice of Allowance, U.S. Appl. No. 10/865,270, filed Jun. 10, 2004, mailing date May 3, 2007, 9 pages. |
In re Bao Notice of Allowance, U.S. Appl. No. 11/767,502, filed Jun. 23, 2007, mailing date Oct. 22, 2010, 11 pages. |
In re Fellenstein, Office Action, U.S. Appl. No. 11/034,305, filed Jan. 12, 2005, mailing date Oct. 2, 2008, 30 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 11/034,305, filed Jan. 12, 2005, mailing date Mar. 24, 2009, 10 pages. |
In re Fellenstein, Office Action, U.S. Appl. No. 11/034,303, filed Jan. 12, 2005, mailing date Sep. 17, 2008, 27 pages. |
In re Fellenstein, Notice of Allowance, U.S. Appl. No. 11/034,303, filed Jan. 12, 2005, mailing date Mar. 9, 2009, 9 pages. |
Schneider, “What's So Great About Grid”, Wall Street and Technology, New York, Jul. 2004, p. 24, 4 pages, recovered from Proquest on Sep. 12, 2008, as cited in In re Fellenstein, Office Action, U.S. Appl. No. 11/034,303, filed Jan. 12, 2005, mailing date Sep. 17, 2008, 27 pages. |
“SGI and Platform Computing Announce Global Alliance for Grid Computing Solutions”, PR Newswire, New York, Jul. 16, 2002, p. 1. |
Massie ML et al, “The Ganglia Distributed Monitoring System: Design, Implementation, and Experience” Parallel Computing Elsevier Netherlands, vol. 30, No. 7, Jul. 2004, pp. 817-840. |
Fenglian Xu et al, “Tools and Support for Deploying Applications on the Grid”, Services Computing, 2004. Proceedings 2004 International Conference on Shanghai, China, Sep. 15-18, 2004, Piscataway, NJ, IEEE, pp. 281-287. |
Ian Foster and Carl Kesselman, “Grid2—Blueprint for a New Computing Infrastructure” 2004, Elsevier, San Francisco, CA, chapter 20, Instrumentation and Monitoring, pp. 319-343. |
Smallen S et al, “The Inca Test Harness and Reporting Framework” Supercomputing 2004. Proceedings of the ACM/IEEE SC2004 Conference Pittsburgh, PA, Nov. 2004, p. 1-10. |
Allen G, et al, “The Cactus Worm: Experiments with Dynamic Resource Discovery and Allocation in a Grid Environment”, International Journal of High Performance Computing Applications, Sage Science Press, Thousand Oaks, US, vol. 15, No. 4, 2001, pp. 345-358. |
Hwa Min Lee, “A Fault Tolerance Service for QoS in Grid Computing”, Lecture Notes in Computer Science, vol. 2659, Aug. 2003, pp. 286-296. |
Tianyi Zang, et al, “The Design and Implementation of an OGSA-based grid information service” Web Services, 2004. Proceedings IEEE International Conference on San Diego CA, Piscataway, NJ, IEEE, Jul. 6, 2004, pp. 566-573. |
Sample N, et al, “Scheduling Under Uncertainty: Planning for the Ubiquitous Grid”, Coordination Models and Languages, 5th International Conference, Coordination 2002. Proceedings (Lecture Notes in Computer Science, vol. 2315), Springer-Verlag Berlin, Germany, 2002, pp. 300-316. |
Geyer DH, et al, “WWW-based high performance computing support of acoustic matched field processing”, MTS/IEEE Oceans 2001. An Ocean Odyssey. Conference Proceedings (IEEE Cat. No. 01CH37295), Marine Technology Soc., Washington, DC, vol. 4, 2001, pp. 2541-2548. |
Chase, JS et al, “Dynamic Virtual Clusters in a Grid Site Manager”, High Performance Distributed Computing 2003. Proceedings. 12th IEEE International Symposium, Jun. 22-24, 2003, Piscataway, NJ, USA, IEEE, pp. 90-100. |
“Method of Providing On-Demand-Computing for Server Blades”, IP.com Journal, IP.com Inc., West Henrietta, NY, US, Sep. 8, 2003, p. 1. |
Kubicek, C, et al., “Dynamic Allocation of Servers to Jobs in a Grid Hosting Environment”, BT Technology Journal, vol. 22, No. 3, Jul. 2004, pp. 251-260. |
Yang, Kun, et al, “Network Engineering Towards Efficient Resource On-Demand in Grid Computing”, Communication Technology Proceedings, 2003, ICCT 2003, International Conference on Apr. 9-11, 2003, Piscataway, NJ, USA, IEEE, vol. 2, Apr. 9, 2003, pp. 1715-1718. |
Foster et al., “The Anatomy of the Grid, Enabling Scalable Virtual Organizations,” 2001, pp. 1-25, 25 pages, [online], [print accessed on Nov. 27, 2003]. Retrieved from the internet <http://www.globus.org/research/papers/anatomy.pdf>. |
Foster et al., “The Physiology of the Grid, An Open Grid Services Architecture for Distributed Systems Integration,” Jun. 22, 2002, pp. 1-31, 31 pages, [online], [print accessed on Nov. 27, 2003]. Retrieved from the Internet <http://www.globus.org/research/papers/ogsa.pdf>. |
Foster, Ian, “What is the Grid? A Three Point Checklist,” Jul. 20, 2002, 4 pages, [online], [print accessed on Nov. 27, 2003]. Retrieved from the Internet <http://www-fp.mcs.anl.gov/˜foster/Articles/WhatIsTheGrid.pdf>. |
Ferreira et al., “IBM Redpaper—Globus Toolkit 3.0 Quick Start,” Sep. 2003, 36 pages, [online], [print accessed on Nov. 27, 2003]. Retrieved from the Internet <http://www.redbooks.ibm.com/redpapers/pdfs/redp369>. |
“IBM Grid Computing—What is Grid Computing,” 1 page, [online], [print accessed on Nov. 27, 2003]. Retrieved from the internet <http://www-1.ibm.com/grid/about_grid/what_is.shtml>. |
Berstis, Viktors, “IBM Redpaper—Fundamentals of Grid Computing,” 200, pp. 1-28, 28 pages, [online], [print accessed on Nov. 27, 2003]. Retrieved from the internet <http://www.redbooks.ibm.com/redpapers/pdfs/redp3613.pdf>. |
Jacob, Bart; “IBM Grid Computing—Grid Computing: What are the key components?” Jun. 2003, 7 pages, [online], [print accessed on Nov. 27, 2003]. Retrieved from the Internet <http://www-106.ibm.com/developerworks/grid/library/gr-overview>. |
Unger et al., “IBM Grid Computing—A Visual Tour of Open Grid Services Architecture,” Aug. 2003, 9 pages, [online], [print accessed on Nov. 27, 2003]. Retrieved from the Internet <http://www-106.ibm.com/developerworks/grid/library/gr-visual>. |
Edited by Rajkumar Buyya, “Grid Computing Info Centre: Frequently Asked Questions (FAQ),” 3 pages, [online], [print accessed on Nov. 27, 2003]. Retrieved from the Internet <http://www.cs.mu.oz.au/˜raj/GridInfoware/gridfaq.html>. |
U.S. Appl. No. 12/470,225, filed May 21, 2009, Vincent Valentino Di Luoffo, Non-Final Office Action, mailed May 12, 2011, 64 pages. |
Notice of Allowance, U.S. Appl. No. 12/470,225, filed May 21, 2009, Vincent Valentino Di Luoffo, mailing date Nov. 4, 2011, 26 pages. |
Office Action, U.S. Appl. No. 12/535,404, filed Aug. 4, 2009, Craig William Fellenstein, mailing date Nov. 2, 2011, 151 pages. |
Japanese Patent Office Action, Information Material for IDS, dated Oct. 27, 2010, 2 pages. |
Final Office Action, U.S. Appl. No. 12/435,370, filed May 4, 2009, Craig Fellenstein, mailing date Mar. 22, 2011, 43 pages. |
Notice of Allowance, U.S. Appl. No. 12/435,370, filed May 4, 2009, In re Craig Fellenstein, mailing date May 11, 2012, 130 pages. |
Krauter et al., A Taxonomy and Survey of Grid Resource Management Systems for Distributed Computing, Sep. 17, 2001, John Wiley & Sons, pp. 1-32. |
He et al. “Hybrid performance-based workload management for multiclusters and grids”, 2004, IET Journals and Magazines, vol. 8, issue 4, pp. 224-231. |
He et al., “Dynamic scheduling of parallel jobs with QoS demands in multiclusters and grids”, 2004, Grid Computing, 2004. Proceedings. IEEE/ACM International Workshop, pp. 402-409. |
Cao, J., “Self-Organizing agents for grid load balancing”, 2004, Grid Computing, 2004. Proceedings. Fifth IEEE/ACM International Workshop, pp. 388-395. |
Notice of Allowance, mailing date Jul. 19, 2012, U.S. Appl. No. 12/480,939, filed Jun. 9, 2009, In re Fellenstein, 32 pages. |
Joseph, Joshy and Fellenstein, Craig, “Grid Computing”, IBM Press, Dec. 30, 2003, ISBN-10: 0-13-145660-1, 378 pages in print edition, also available online from <http://my.safaribooksonline.com/book/software-engineering-and-development/grid-computing/0131456601>. |
Fellenstein et al., Notice of Allowance, U.S. Appl. No. 12/480,939, filed Jun. 9, 2009, mailing date Mar. 3, 2011, 55 pages. |
Office Action, U.S. Appl. No. 12/535,404, filed Aug. 4, 2009, Craig William Fellenstein, mailing date May 25, 2012, 47 pages. |
U.S. Appl. No. 10/757,282, filed Jan. 14, 2004, Di Luoffo et al., US Patent 7,552,437, Final Rejection, mailing date Jun. 28, 2008, 27 pages. |
U.S. Appl. No. 10/757,282, filed Jan. 14, 2004, Di Luoffo et al, US Patent 7,552,437, Notice of Allowance, mailing date Feb. 24, 2009, 10 pages. |
U.S. Appl. No. 10/757,282, filed Jan. 14, 2004, Di Luoffo et al, US Patent 7,552,437, Office Action, mailing date Dec. 26, 2007, 16 pages. |
U.S. Appl. No. 11/031,489, filed Jan. 6, 2005, US Publication 20060149652, Fellenstein et al., Final Office Action, mailing date Nov. 26, 2010, 78 pages. |
Hai et al, Fault-Tolerant Grid Architecture and Practice, Jul. 2003, vol. 18, pp. 423-433, J Computer Sci and Technology, 11 pages. |
Tcherevik, Dmitri; Managing the Service-Oriented Architecture (SOA) and On-Demand Computing; copyright 2004 Computer Associates International, Inc., pp. 1-11. |
Sven Graupner et al., “Management +=Grid”, reference numeral HPL 2003-114, copyright Hewlett-Packard Company 2003, pp. 1-2, available at http://www.hpl.hp.com/techreports/2003/HPL-2003-114.html as of Nov. 14, 2004. |
Hughes, Baden and Bird, Steven (2003), Grid-Enabling Natural Language Engineering by Stealth. In Proceedings HLT-NAACL03 Workshop on the Software Engineering and Architecture of Language Technology Systems, pp. 31-38, Edmonton, Canada, available from http://eprints.unimelb.edu.au/archive/00000491 as of May 3, 2004. |
Zhu et al., “Scheduling Optimization for resource-intensive Web requests on server clusters”, ACM Symposium on Parallel Algorithms and Architectures, 1999, p. 13-22. |
Rumsewicz et al, “Preferential Load Balancing for Distributed Internet Servers”, Cluster Computing and the Grid, Proceedings. First IEEE/ACM International Symposium, May 2001, p. 363-370. |
Kim et al., “Request Rate adaptive dispatching architecture for scalable Internet Server”, Cluster Computing, 2000, Proceedings. IEEE conference on Nov. 28-Dec. 1, 2000, p. 289-296. |
Casalicchio et al., “Scalable Web Clusters with Static and Dynamic Contents”, Cluster Computing, 2000, Proceedings. IEEE conference on Nov. 28-Dec. 1, 2000, p. 170-177. |
Fox et al., “Cluster-based scalable network services”, Oct. 1997 ACM SIGOPS Operating Systems Review, Proceedings of the 16.sup.th ACM symposium on operating systems principles, vol. 31, Issue 5, p. 78-91. |
Bodhuin et al, “Using Grid Technologies for Web-enabling Legacy Systems”, Research Centre on Software Technology, available at http://www.bauhaus-stuttgart.de/sam/bodhuin.pdf as of at least Jun. 21, 2004. |
IBM, “Process and method for IT energy optimization”, Research Disclosure, Feb. 2002, pp. 366-367, 2 pages. |
Gillmor, Steve, “Ahead of the curve, Grid Will Hunting”, InfoWorld, Feb. 25, 2002, vol. 24, p. 66, 1 page. |
U.S. Appl. No. 12/143,776, filed Jun. 21, 2008, Craig Fellenstein, Non-Final Office Action, mailed Jan. 6, 2012, 93 pages. |
U.S. Appl. No. 12/143,776, filed Jun. 21, 2008, Craig Fellenstein, Non-Final Office Action, mailed Aug. 14, 2012, 17 pages. |
Number | Date | Country
---|---|---
20090259511 A1 | Oct. 2009 | US

Number | Date | Country
---|---|---
Parent 11/034,305 | Jan. 2005 | US
Child 12/491,172 | | US