Method and apparatus for dynamic distributed computing over a network

Information

  • Patent Grant
  • Patent Number
    7,210,148
  • Date Filed
    Friday, March 16, 2001
  • Date Issued
    Tuesday, April 24, 2007
Abstract
A homogeneous execution environment operates within a heterogeneous client-server network. A client selects a server and transmits a procedure call with parameters. In response, a server dynamically and securely downloads code to a compute server; invokes a generic compute method; executes the code on the compute server; and returns the results to the calling client method, preserving the result on the compute server if requested. This technique is efficient because it avoids downloading or compiling multiple copies of code: the code can be compiled once, downloaded to the various servers as byte-codes when needed, and executed there, since the same byte-codes run on each of the different systems.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


This invention generally relates to distributed computing systems and more particularly, to a method and apparatus for performing dynamic distributed computing over a network.


2. Description of the Related Art


In a distributed computing network, users can harness the processing capabilities of numerous computers coupled to the network. Tasks with many different independent calculations can be quickly processed in parallel by dividing the processing among different computers on the network. Further, specialized tasks can be computed more quickly by locating a computer on the network most suitable for processing the data. For example, a task executing on a client system which performs an intense floating point calculation may execute faster on a server system coupled to the network which has specialized floating point hardware suitable for the particular calculations.


Unfortunately, conventional techniques for distributed computing are not easily implemented in typical heterogeneous computing environments. The computers on the network typically contain different processor and operating system combinations and therefore require different object modules for execution. On the client side, this means the user must compile a different version of the task for each platform and load each module onto the corresponding platform, which adds storage requirements to each client and requires porting and compiling the same task multiple times. Further, conventional techniques require that the code be distributed over the computers well before the code is executed. The extensive preparation required for performing distributed computing in conventional systems deterred many from exploiting this technology.


Distributed computing systems based on scripting languages improve on some conventional distributed computing systems: they eliminate the need to recompile code, but they remain very inefficient. A scripting-based distributed system can execute the same instructions on multiple platforms because the language is interpreted by an interpreter located on each system. Consequently, most scripting languages are slow, since they must translate high-level scripting instructions into low-level native instructions in real time. Moreover, scripting languages are hard to optimize and can waste storage space because they are generally not compressed.


Based on the above limitations found in conventional systems, it is desirable to improve distributed computing systems.


SUMMARY OF THE INVENTION

In one aspect of the present invention associated with a client computer, a method and apparatus for dynamic distributed computing is provided. Initially, the client selects a server from the network to process the task. This selection can be based on the availability of the server or the specialized processing capabilities of the server. Next, a client stub marshals the parameters and data into a task request. The client sends the task request to the server, which invokes a generic compute method. The server automatically determines if the types associated with the task are available on the server and downloads the task types from the network as necessary. Information in the task types is used to extract parameters and data stored in the particular task request. The generic compute method is used to execute the task request on the selected server. After the server processes the task request, the client receives the results, or the computed task, back from the selected server.


In another aspect of the present invention associated with a server computer, a method and apparatus for dynamic distributed computing is provided. Initially, the server will automatically determine which task types are available on the server and will download task types from the network as necessary. These task types help the server unmarshal parameters and data from a task request and generate a local task. Next, the server invokes a generic compute method capable of processing all types of compute tasks or subtypes of a compute task. The generic compute method is used to execute the task request on the selected server. If a subsequent task will use the results, the server stores the results from the computed tasks in a local cache. Once the task has completed, the server returns the results, or the computed task, to the client.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the advantages and principles of the invention.


In the drawings:



FIG. 1 illustrates a network suitable for use with methods and systems consistent with the present invention;



FIG. 2 is a block diagram of a computer system suitable for use with methods and systems consistent with the present invention;



FIG. 3 is a block diagram representation of a client-server networking environment suitable for use with methods and systems consistent with the present invention;



FIG. 4 is a flow chart of the steps a client performs in accordance with methods and systems consistent with the present invention; and



FIG. 5 is a flow chart of the steps performed by a server in accordance with methods and systems consistent with the present invention.





DETAILED DESCRIPTION OF THE INVENTION

Introduction


Reference will now be made in detail to an implementation of the present invention as illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the following description to refer to the same or like parts.


Systems consistent with the present invention address shortcomings of the prior art and provide a dynamic distributed computing system used over a network of server computers. This dynamic distributed computing system is particularly useful in heterogeneous computer networks having computers with different processors, different operating systems, and combinations thereof. Such a system allows a client application to select a server computer at runtime to execute a particular task. In methods and systems consistent with the present invention, the task is an object having a particular type or class definition. The server can generally defer knowing the actual class definition until the parameters and data associated with the object task are received on the server. Consequently, the particular type is downloaded by the server if it is not available on the server. For example, if an object instance of an unknown class is transmitted to the server, the server downloads the unknown class. The server then uses this class to process the object. This late association of a class definition to an object increases the flexibility in processing complex tasks over a network of server computers. Further, the present design facilitates this flexibility with minimal additional overhead by utilizing features in existing remote procedure call subsystems such as the Remote Method Invocation (RMI) subsystem developed by Sun Microsystems, Inc. of Mountain View, Calif. For more information on Remote Method Invocation (RMI), see co-pending U.S. Patent Application, “System and Method For Facilitating Loading of “Stub” Information to Enable a Program Operating in One Address Space to Invoke Processing of a Remote Method or Procedure in Another Address Space,” having Ser. No. 08/636,706, filed Apr. 23, 1996, now U.S. Pat. No. 6,938,263, by Ann M. Wollrath, James Waldo, and Roger Riggs, assigned to a common assignee and hereby incorporated by reference. RMI is also described in further detail in the “Java (™) Remote Method Invocation Specification,” available on the JavaSoft web page provided by Sun Microsystems, Inc., which is also hereby incorporated by reference.


Unlike conventional systems, a task in the dynamic distributed system consistent with the present invention can be written once and executed on any server computer in a network. This capability is particularly advantageous in a heterogeneous network because the task does not have to be ported to every platform before it is executed. Instead, a generic compute task designed in accordance with the present invention is loaded on each system. This generic compute task is capable of executing a wide variety of tasks specified by the client at runtime. For example, one can develop a type called “Compute” and a generic compute task which accepts the “Compute” type in an object-oriented language, such as Java. Java is described in many texts, including one that is entitled “The Java Language Specification” by James Gosling, Bill Joy, and Guy Steele, Addison-Wesley (1996), which is hereby incorporated by reference. The client creates a task having a subtype of the type “Compute” and passes an object corresponding to the task to the generic compute task on the server. A remote procedure call mechanism downloads the object to the server, and the generic compute task executes the task.


In Java, the task transmitted by the client is actually an object including a series of bytecodes. These bytecodes can be executed immediately as long as the server implements a Java Virtual Machine (JVM). The JVM can be implemented directly in hardware or efficiently simulated in a software layer running on top of the native operating system. The Java language was designed to run on computing systems with characteristics that are specified by the Java Virtual Machine (JVM) Specification. The JVM specification is described in greater detail in Lindholm and Yellin, The Java Virtual Machine Specification, Addison-Wesley (1997), which is hereby incorporated by reference. This uniform JVM environment allows homogeneous execution of tasks even though the computer systems are heterogeneous and have different processors, different operating systems, and combinations thereof. Combining a powerful remote procedure call subsystem with a generic compute task on the server, designed in accordance with the present invention, results in a powerful dynamic distributed computing environment.


A compute server using bytecodes can process a task much faster than systems using conventional text based scripting languages or other character based languages. Each bytecode is compact (8 bits) and is in a numeric format. Consequently, the server computer does not spend compute cycles parsing the characters and arguments at run time. Also, the bytecodes can be optimized on the client before transporting them to the server. The server optionally can convert the bytecodes to native instructions for execution directly on the hardware at run time using a processing mechanism such as a Just-in-Time (JIT) compiler. For more information on JIT compilers see The Java Virtual Machine Specification.


A system designed in accordance with the present invention assumes that each client is capable of communicating with each server over a common networking protocol such as TCP/IP. It is also assumed that there is a remote procedure call (RPC) subsystem on the client and server capable of receiving remote requests from a client and executing them on the server. This RPC subsystem also automatically downloads code and related information needed for performing the task at run time. RMI, developed by Sun Microsystems, Inc., is a suitable RPC subsystem providing these features. One skilled in the art, however, will appreciate that other RPC subsystems, such as DCOM/COM from Microsoft, Inc., may be used in lieu of RMI.
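By way of illustration only (this sketch is not part of the patent text), a client using RMI might obtain a reference to a remote compute server through the RMI registry. The host name and the binding name “ComputeServer” below are assumptions made for this example, and the Compute interface is the one shown later in the exemplary implementation:

    import java.rmi.Naming;

    public class ComputeClient {
        // Hypothetical illustration: look up a remote Compute server that a
        // server process has previously bound in an RMI registry under the
        // name "ComputeServer". The host and binding name are assumed here.
        public static Compute findComputeServer() throws Exception {
            return (Compute) Naming.lookup("rmi://server.example.com/ComputeServer");
        }
    }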


Computer Network



FIG. 1 illustrates a network 100 in which one embodiment of the present invention can be implemented. In its essential configuration, network 100 includes Local Area Network (LAN) 101, a backbone or Wide Area Network (WAN) 112, and Local Area Network (LAN) 116. LAN 101 includes a series of workstations and server computers 102, 104, 106, and 108. LAN 116 includes a series of workstations and server computers 118, 120, 122, and 124. These computer systems 102–108 and 118–124 are coupled together to share information, transmit data, and also share computational capabilities. LAN 101 is coupled to the larger overall network using a network interconnect device 110. The specific type of network interconnect device can be a router, a switch, or a hub depending on the particular network configuration. In general, network interconnect device 110 includes routers, switches, hubs, or any other network interconnect device capable of coupling together LAN 101, WAN 112, and LAN 116 with user terminals into an integrated network. Network interconnect device 114 can also include routers, switches, hubs, or any other network interconnect device capable of coupling the computers on LAN 116 with user terminals into an integrated network. In general, a dynamic distributed computing system designed in accordance with the present invention is typically located on each computer system coupled to network 100. Accordingly, each computer may operate as either a client or a server depending on the particular request being made and the services being provided. Typically, the client requests that a task be computed on a server computer, and the server computer processes the task.


Computer System


Referring now to FIG. 2, the system architecture for a computer system 200 suitable for practicing methods and systems consistent with the present invention is illustrated. The exemplary computer system 200 is for descriptive purposes only. Although the description may refer to terms commonly used in describing particular computer systems, such as an IBM PS/2 personal computer, the description and concepts equally apply to other computer systems, such as network computers, workstations, and even mainframe computers having architectures dissimilar to that of FIG. 2.


Furthermore, the implementation is described with reference to a computer system implementing the Java programming language and Java Virtual Machine specifications, although the invention is equally applicable to other computer systems having similar requirements. Specifically, the present invention may be implemented with both object-oriented and non-object-oriented programming systems.


Computer system 200 includes a central processing unit (CPU) 205, which may be implemented with a conventional microprocessor, a random access memory (RAM) 210 for temporary storage of information, and a read only memory (ROM) 215 for permanent storage of information. A memory controller 220 is provided for controlling RAM 210.


A bus 230 interconnects the components of computer system 200. A bus controller 225 is provided for controlling bus 230. An interrupt controller 235 is used for receiving and processing various interrupt signals from the system components.


Mass storage may be provided by diskette 242, CD ROM 247, or hard drive 252. Data and software may be exchanged with computer system 200 via removable media such as diskette 242 and CD ROM 247. Diskette 242 is insertable into diskette drive 241 which is, in turn, connected to bus 230 by a controller 240. Similarly, CD ROM 247 is insertable into CD ROM drive 246 which is, in turn, connected to bus 230 by controller 245. Hard disk 252 is part of a fixed disk drive 251 which is connected to bus 230 by controller 250.


User input to computer system 200 may be provided by a number of devices. For example, a keyboard 256 and mouse 257 are connected to bus 230 by controller 255. It will be obvious to those reasonably skilled in the art that other input devices, such as a pen and/or tablet, may be connected to bus 230 by an appropriate controller and software, as required. DMA controller 260 is provided for performing direct memory access to RAM 210. A visual display is generated by video controller 265, which controls video display 270.


Computer system 200 also includes a communications adaptor 290 which allows the system to be interconnected to a local area network (LAN) or a wide area network (WAN), schematically illustrated by bus 291 and network 295.


Operation of computer system 200 is generally controlled and coordinated by operating system software. The operating system controls allocation of system resources and performs tasks such as process scheduling, memory management, networking, and services, among other things.


Dynamic Distributed Computing


Dynamic distributed computing is generally a client-server process. The client-server relationship is established for each call being made, and generally the roles can change. Typically, the client is defined as the process making a call to request resources located or controlled by the server. In this context, the computer or processor executing the requesting process may also be referred to as a client. However, these roles may change depending on the context of information and the particular processing taking place.



FIG. 3 is a block diagram representation of a client-server networking environment used to implement one embodiment of the present invention. To emphasize that embodiment, the diagram includes only those subsystems closely related to the present invention. Additional subsystems, excluded from FIG. 3, may be necessary depending on the actual implementation.


Accordingly, FIG. 3 includes a client 302, a server 316, and an object/method repository 314, which are all operatively coupled to a network 312. Client 302 includes an application 304 which makes a remote compute call 306 to process a task on a remote server computer. A remote stub 310, typically generated using a remote procedure call subsystem as described in the RMI specification, is used to package parameters and data associated with the specific remote compute call 306. The typical client also includes a collection of local objects/methods 308, which may contain the type of task that client 302 asks remote compute call 306 to execute. Alternatively, the tasks can be located in object/method repository 314 and accessed by compute method 320 as needed. Server 316 includes a remote skeleton 322 to unmarshal the parameters and data transmitted from the client. Remote skeleton 322 prepares information for use by compute method 320. Local objects/methods 324 also include tasks that client 302 can ask server 316 to process.


In operation, remote compute call 306 makes a call to compute method 320 to process a particular task. Remote stub 310 marshals information on the calling method so that compute method 320 on server 316 can execute the task. Remote stub 310 may also marshal basic parameters used as arguments by compute method 320 on server 316. Remote skeleton 322 receives the task, unmarshals the data and parameters received over the network, and provides them to compute method 320. If the task and related types are not available on server 316, the skeleton downloads the types from client 302, object/method repository 314, or some other safe and reliable source of the missing types. The type information maps the location of data in the object and allows the remote skeleton to complete processing the object. RMI (not shown) is one remote procedure call (RPC) system capable of providing remote stub 310 and remote skeleton 322. Once the object is processed by the skeleton, compute method 320 executes the task and returns the computed task or computed task results to client 302.



FIG. 4 is a flow chart of the steps performed by a client when utilizing the dynamic distributed computing system and method consistent with the present invention. Initially, the client selects a suitable server from the network to process the task (step 402). The selection criteria can be based upon the overall processing load distribution among the collection of server computers or the specialized computing capabilities of each server computer. For example, load balancing techniques may be used to automatically determine which computer has the least load at a given moment. Further, some computers having specialized hardware, such as graphics accelerators or math co-processors, may be selected by the client because the task involves intense graphics calculations, such as rendering three-dimensional wireframes, or must perform many floating point calculations.
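One way such a load-based selection policy might be realized is sketched below. This is illustrative only: the ReportsLoad interface and its getLoad( ) method are hypothetical and are not defined by the patent.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.util.List;

    // Hypothetical remote interface (not defined by the patent): each
    // candidate server reports a scalar load metric.
    interface ReportsLoad extends Remote {
        double getLoad() throws RemoteException;
    }

    public class ServerSelector {
        // Pick the candidate reporting the lowest load; servers that fail
        // to respond are skipped.
        public static ReportsLoad selectLeastLoaded(List<ReportsLoad> candidates) {
            ReportsLoad best = null;
            double bestLoad = Double.MAX_VALUE;
            for (ReportsLoad s : candidates) {
                try {
                    double load = s.getLoad();
                    if (load < bestLoad) {
                        bestLoad = load;
                        best = s;
                    }
                } catch (RemoteException e) {
                    // Unreachable server: ignore it and try the next candidate.
                }
            }
            return best;
        }
    }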


Once the server is selected, the client invokes a remote compute method on the selected server (step 404). An RPC system, such as RMI, facilitates invoking the remote compute method on a server computer. Typically, the client need only know that the remote compute method can be used as a conduit to process a particular task on a remote computer. For example, in Java the remote instruction “Server.runTask(new PI(1000))” executed on a client causes a remote method “runTask” to be invoked on a remote server “Server” of type “ComputeServer”. This step provides the task (in this case, a task object instantiated by “new PI(1000)”) as a parameter to the generic compute method through the remote method “runTask”. The “runTask” method on the server implements the Compute remote interface. Optionally, this instruction can indicate to the server that results from the computed task should be stored in a result cache on the selected server. This enables subsequent tasks to share the results between iterations. For example, the results from calculating “PI” may be used later by another remote method to compute the volume of a sphere or perform another precise calculation using the value of “PI”.


A stub is used to marshal parameters and data into a task request (step 406). The task request is then provided to the selected server. Typically, the task request includes data and parameters for the task as well as a network location for the type or class if it is not present on the server. A skeleton on the server uses the type or class information to process the object and unmarshal data and parameters. In a system using Java and RMI, the task request is an object and the class location information is contained in a codebase URL (Uniform Resource Locator) parameter. Further details on this are contained in the RMI Specification. The server can schedule the task for execution immediately or whenever the server finds a suitable time for executing the task. After the server performs the computation, the client receives the results from the computed task (step 408).
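In an RMI-based system of this kind, the client conventionally advertises where its classes can be downloaded by setting the java.rmi.server.codebase system property before making remote calls. The sketch below is illustrative; the URL is a placeholder, not a value taken from the patent:

    public class CodebaseSetup {
        public static void main(String[] args) {
            // The codebase URL travels with marshalled objects so that a
            // receiving JVM knows where to download missing class
            // definitions. The URL here is a placeholder for illustration.
            System.setProperty("java.rmi.server.codebase",
                               "http://client.example.com/classes/");
        }
    }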



FIG. 5 is a flow chart of the steps performed by a server in the dynamic distributed computing system and methods consistent with the present invention. Initially, a skeleton on the server unmarshals parameters and data from a task request and recreates the original task as transmitted (step 504). Unmarshalling these parameters may include downloading several additional types. The skeleton determines whether the types related to the task request are available on the server (step 506). If the types associated with the task request are not available, the skeleton must download the types from one of the areas on the network (step 509). For example, if a “PI( )” class is not on the server, the skeleton will download this type from the client. The type or class is used by the skeleton to map data in the object and unmarshal parameters and data.


Typically, the client will indicate in the request package where the particular type is located. The skeleton can download the requested type from an object/method repository and can cache the type for future server requests. The requested type could also be located on the client. For example, in Java and RMI the class containing the particular type is located at the codebase URL (Uniform Resource Locator) transmitted by the client. Dynamic class loading features in RMI facilitate the automatic downloading of the class using the codebase. These types enable the skeleton to parse the task request and extract the appropriate data and parameters. The steps outlined above make the parameters and data readily available for further processing.
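A simplified sketch of this resolution step follows. In practice RMI performs it automatically during unmarshalling; the wrapper method here is introduced only for illustration around the RMIClassLoader.loadClass API:

    import java.net.MalformedURLException;
    import java.rmi.server.RMIClassLoader;

    public class TypeResolver {
        // Simplified sketch: try the server's local class path first; if
        // the type is absent, download it from the codebase URL supplied
        // by the client. RMI normally performs this resolution itself.
        public static Class<?> resolve(String typeName, String codebase)
                throws ClassNotFoundException, MalformedURLException {
            try {
                return Class.forName(typeName);  // type already on the server
            } catch (ClassNotFoundException e) {
                return RMIClassLoader.loadClass(codebase, typeName);
            }
        }
    }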


Once the appropriate types are available, the skeleton invokes the generic compute method (step 508). The generic compute method on the server then executes the specific task requested by the client (step 510). For example, assume the client calls “ComputeServer.runTask(new PI(1000))”. The skeleton will invoke the generic compute method “runTask” on the server. The “runTask” method calls the “run( )” method embedded in the task provided by the client. Further, the “runTask” method implements the remote interface “Compute”, which maintains the remote connection with the client. At the option of the client, or based on a predetermined setting on the server, the skeleton stores results from the computed task in a cache if a subsequent task will use the results. As a final step on the server, the computed task or results are returned to the client by executing “return t.run( )” on the server (step 512).
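The exemplary server implementation in the next section omits this optional result cache; a hedged sketch of how one might be grafted on follows. The runCachedTask method, the key scheme, and the getCachedResult accessor are hypothetical and are not part of the patent's interface; Task is the interface shown below.

    import java.util.HashMap;
    import java.util.Map;

    public class CachingComputeServer {
        // Hypothetical result cache keyed by a caller-supplied name; the
        // patent leaves the caching mechanism unspecified.
        private final Map<String, Object> resultCache =
                new HashMap<String, Object>();

        // Execute the task and, if a key is supplied, preserve the result
        // so a subsequent task (e.g., one reusing the value of PI) can
        // fetch it without recomputing.
        public synchronized Object runCachedTask(Task t, String resultKey) {
            Object result = t.run();
            if (resultKey != null) {
                resultCache.put(resultKey, result);
            }
            return result;
        }

        // Retrieve an earlier result by its key, or null if absent.
        public synchronized Object getCachedResult(String resultKey) {
            return resultCache.get(resultKey);
        }
    }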


EXEMPLARY IMPLEMENTATION

Consistent with the present invention, the following code sample is provided as one implementation. Although this example is provided in the object-oriented Java programming language, other programming languages could also be used. For example, the server can include the following Java code:


THE TASK:

    // This interface allows a class (e.g., the "PI" class below) to
    // implement the abstract run( ) method.
    public interface Task extends Serializable {
        public Object run( );
    }



THE REMOTE INTERFACE:

    import java.rmi.*;

    // The RMI/RPC interface exposing the abstract runTask method.
    public interface Compute extends Remote {
        public Object runTask(Task t) throws RemoteException;
    }



THE COMPUTE SERVER IMPLEMENTATION:

    import java.rmi.*;
    import java.rmi.server.*;

    public class ComputeServer extends UnicastRemoteObject
            implements Compute {

        public ComputeServer( ) throws RemoteException { }

        // ... Code in this area is used for initializing the routine with
        // the RPC system.

        // runTask implements the abstract method defined in the Compute
        // interface.
        public Object runTask(Task t) throws RemoteException {
            return t.run( );
        }
    }



The following exemplary Java code can be used on a client performing dynamic distributed computing consistent with the present invention:

    class PI implements Task {

        private int precision;

        // Sets the precision of the PI value to be calculated later.
        PI(int howManyPlaces) {
            precision = howManyPlaces;
        }

        // Implements the abstract run method in the Task interface.
        public Object run( ) {
            double pi = computePIsomehow(precision);  // calculate pi
            return new Double(pi);
        }

        public static void main(String[ ] args) {
            // Select a server from the network; getAComputeServer( ) is a
            // placeholder for the server-selection step, and the returned
            // reference implements the Compute remote interface used by
            // the RMI RPC subsystem.
            Compute server = getAComputeServer( );

            // Invoke the abstract remote method to execute the "pi"
            // computation defined in the PI class.
            Double pi = (Double) server.runTask(new PI(1000));

            // The result is returned in the "pi" variable and printed to
            // standard out.
            System.out.println("PI seems to be " + pi);
        }
    }


While specific embodiments have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the invention. Those skilled in the art understand that the present invention can be implemented in a wide variety of hardware and software platforms and is not limited to the traditional routers, switches, and intelligent hub devices discussed above. Accordingly, the invention is not limited to the above described embodiments, but instead is defined by the appended claims in light of their full scope of equivalents.

Claims
  • 1. A method performed on a processor operatively coupled to a collection of servers, which enables a client associated with the processor to dynamically distribute a task to a server, the method comprising: selecting a server to process the task; forming a task request from parameters and data; sending the task request to the selected server, wherein the selected server: downloads a class definition after receiving the task request, wherein the class definition maps locations of information in the task request and allows the selected server to process the task request; extracts parameters and data from the task request using the downloaded class definition; and invokes a generic compute technique capable of executing a plurality of types of tasks, wherein the generic compute technique executes the task request using the extracted parameters and data; and receiving results associated with the executed task request from the selected server.
  • 2. The method of claim 1, wherein the processor is operatively coupled to a computer system having a primary storage device, a secondary storage device, a display device, and an input/output mechanism.
  • 3. The method of claim 1, wherein the task is developed in a programming language and environment compatible with each of the server computers.
  • 4. The method of claim 3, wherein the environment includes a remote procedure call subsystem.
  • 5. The method of claim 4, wherein the remote procedure call subsystem is the Remote Method Invocation (RMI) system.
  • 6. The method of claim 1, wherein the server is selected from a plurality of heterogeneous computer systems.
  • 7. The method of claim 6, wherein the selected server has the lowest load characteristic compared with the average load characteristic of the servers over a predetermined time period.
  • 8. The method of claim 1, wherein selecting the server comprises selecting the server based on the overall processing load distribution among the collection of servers.
  • 9. The method of claim 1, wherein selecting the server comprises selecting the server based on the specialized computing capabilities of each server.
  • 10. The method of claim 9, wherein the specialized computing capabilities include a capability to render images.
  • 11. The method of claim 1, wherein the sending step further comprises the substeps of: determining if code related to the requested task is present on the selected server; and downloading the code onto the selected server when the code is not present on the selected server.
  • 12. The method of claim 1, wherein the sending step further comprises: providing the task as a parameter to the generic compute method.
  • 13. The method of claim 3 further comprising the step of indicating to the server that results from a computed task should be stored in a result cache on the selected server for subsequent tasks to use.
  • 14. The method of claim 1, wherein the results are used for further processing on the client.
  • 15. The method of claim 1, wherein the results comprise an object.
  • 16. The method of claim 1, wherein the server downloads the class definition from a location indicated by a URL parameter in the task request.
  • 17. The method of claim 1, wherein the server provides the task request as a parameter to the generic compute technique.
  • 18. A method performed on a processor operatively coupled to a collection of servers, which enables a server associated with the processor to dynamically receive and process a task from a client computer, wherein the task is in an executable programming language compatible with each of the server computers, the method comprising: downloading a class definition after receiving a task request, wherein the class definition maps locations of information in the task request and allows the server to process the task request; assembling parameters and data from the task request into a task, using the downloaded class definition; invoking a generic compute method, capable of processing a plurality of types of tasks, on the server, wherein the generic compute method executes the task and generates results; and returning results to the client.
  • 19. The method of claim 18, wherein the processor is operatively coupled to a computer system having a primary storage device, a secondary storage device, a display device, and an input/output mechanism.
  • 20. The method of claim 18, wherein the task is developed in a programming language and environment compatible with each of the server computers.
  • 21. The method of claim 18, wherein the task is developed using the Java programming language and environment.
  • 22. The method of claim 21, wherein the environment includes a remote procedure call subsystem.
  • 23. The method of claim 22, wherein the remote procedure call subsystem is the Remote Method Invocation (RMI) system.
  • 24. The method of claim 18, wherein the assembling step further comprises: determining if types related to the task are available on the server; when types are not available on the server, downloading the types onto the server from a location as indicated by the parameters provided by the client; and executing the task based upon the data and parameters provided by the client.
  • 25. The method of claim 24, wherein the determining step and the downloading steps are performed by a remote procedure call (RPC) subsystem.
  • 26. The method of claim 25, wherein the determining step is performed by a Remote Method Invocation (RMI) type of remote procedure call subsystem.
  • 27. The method of claim 18, further comprising the substep of storing the results from the task in a cache if a subsequent task will use the results.
  • 28. A computer readable medium containing instructions for controlling a computer system comprising a collection of servers to perform a method for enabling a client to dynamically distribute a task to a server, the method comprising the steps of: selecting a server to process the task; forming a task request from parameters and data; sending the task request to the selected server, wherein the selected server: downloads a class definition after receiving the task request, wherein the class definition maps locations of information in the task request and allows the selected server to process the task request; extracts parameters and data from the task request using the downloaded class definition; and invokes a generic compute method capable of executing a plurality of types of tasks, wherein the generic compute technique executes the task request on the selected server using the extracted parameters and data; and receiving results associated with the executed task request from the selected server.
  • 29. The computer readable medium of claim 28, wherein the computer system is operatively coupled to a primary storage device, a secondary storage device, a display device, and an input/output mechanism.
  • 30. The computer readable medium of claim 28, wherein the task is developed in a programming language and environment compatible with each of the servers.
  • 31. The computer readable medium of claim 30, wherein the environment includes a remote procedure call subsystem.
  • 32. The computer readable medium of claim 31, wherein the remote procedure call subsystem is the Remote Method Invocation (RMI) system.
  • 33. The computer readable medium of claim 28, wherein the selected server is selected from a plurality of heterogeneous computer systems.
  • 34. The computer readable medium of claim 28, wherein selecting the server comprises selecting the server based on the overall processing load distribution among the collection of servers.
  • 35. The computer readable medium of claim 28, wherein selecting the server comprises selecting the server based on a lowest load characteristic compared to an average load characteristic of the servers over a predetermined period of time.
  • 36. The computer readable medium of claim 28, wherein selecting the server comprises selecting the server based on the specialized computing capabilities of each server.
  • 37. The computer readable medium of claim 36, wherein the specialized computing capabilities include a capability to render images.
  • 38. The computer readable medium of claim 28, wherein the sending step further comprises: determining whether code related to the requested task is present on the selected server; and downloading the code onto the selected server if the code is not present on the selected server.
  • 39. The computer readable medium of claim 28, wherein the sending step further comprises: providing the task as a parameter to the generic compute method.
  • 40. The computer readable medium of claim 28 further comprising the step of indicating to the server that results from a computed task should be stored in a result cache on the selected server for subsequent tasks to use.
  • 41. The computer readable medium of claim 28, wherein the results are used for further processing on the client.
  • 42. The computer readable medium of claim 28, wherein the results comprise an object.
  • 43. A computer readable medium containing instructions for controlling a computer system comprising a collection of servers to perform a method for enabling a server to dynamically receive and process a task from a client computer, wherein the task is in an executable programming language compatible with each of the servers, the method comprising: downloading a class definition after receiving a task request, wherein the class definition maps locations of information in the task request and allows the server to process the task request; assembling parameters and data from the task request into a task, using the downloaded class definition; invoking a generic compute method, capable of processing a plurality of types of tasks, on the server, wherein the generic compute method executes the task and generates results; and returning results to the client.
  • 44. The computer readable medium of claim 43, wherein the computer system is operatively coupled to a primary storage device, a secondary storage device, a display device, and an input/output mechanism.
  • 45. The computer readable medium of claim 43, wherein the task is developed in a programming language compatible with each of the servers.
  • 46. The computer readable medium of claim 43, wherein the task is developed using a Java programming language and environment.
  • 47. The computer readable medium of claim 43, wherein the environment includes a remote procedure call subsystem.
  • 48. The computer readable medium of claim 47, wherein the remote procedure call subsystem is the Remote Method Invocation (RMI) system.
  • 49. The computer readable medium of claim 43, wherein the assembling step further comprises: determining if types related to the task are available on the server; when the types are not available on the server, downloading the types onto the server from a location as indicated by the parameters provided by the client; and executing the task based upon the data and parameters provided by the client.
  • 50. The computer readable medium of claim 49, wherein the determining step and the downloading steps are performed by a remote procedure call (RPC) subsystem.
  • 51. The computer readable medium of claim 50, wherein the determining step is performed by a Remote Method Invocation (RMI) type of remote procedure call subsystem.
  • 52. The computer readable medium of claim 43, further comprising: storing the results from the task in a cache if a subsequent task will use the results.
Parent Case Info

This application is a continuation of application Ser. No. 09/030,840, filed Feb. 26, 1998, now U.S. Pat. No. 6,446,070, which is incorporated herein by reference.

US Referenced Citations (295)
Number Name Date Kind
3449669 Granqvist Jun 1969 A
4430699 Segarra et al. Feb 1984 A
4491946 Kryskow, Jr. et al. Jan 1985 A
4558413 Schmidt et al. Dec 1985 A
4567359 Lockwood Jan 1986 A
4713806 Oberlander et al. Dec 1987 A
4800488 Agrawal et al. Jan 1989 A
4809160 Mahon et al. Feb 1989 A
4819233 Delucia et al. Apr 1989 A
4823122 Mann et al. Apr 1989 A
4939638 Stephenson et al. Jul 1990 A
4956773 Saito et al. Sep 1990 A
4992940 Dworkin Feb 1991 A
5088036 Ellis et al. Feb 1992 A
5101346 Ohtsuki Mar 1992 A
5109486 Seymour Apr 1992 A
5187787 Skeen et al. Feb 1993 A
5218699 Brandle et al. Jun 1993 A
5253165 Leiseca et al. Oct 1993 A
5257369 Skeen et al. Oct 1993 A
5293614 Ferguson et al. Mar 1994 A
5297283 Kelly, Jr. et al. Mar 1994 A
5303042 Lewis et al. Apr 1994 A
5307490 Davidson et al. Apr 1994 A
5311591 Fischer May 1994 A
5319542 King, Jr. et al. Jun 1994 A
5327559 Priven et al. Jul 1994 A
5339430 Lundin et al. Aug 1994 A
5339435 Lubkin et al. Aug 1994 A
5341477 Pitkin et al. Aug 1994 A
5386568 Wold et al. Jan 1995 A
5390328 Frey et al. Feb 1995 A
5392280 Zheng Feb 1995 A
5423042 Jalili et al. Jun 1995 A
5440744 Jacobson et al. Aug 1995 A
5446901 Owicki et al. Aug 1995 A
5448740 Kiri et al. Sep 1995 A
5452459 Drury et al. Sep 1995 A
5455952 Gjovaag Oct 1995 A
5459837 Caccavale Oct 1995 A
5471629 Risch Nov 1995 A
5475792 Stanford et al. Dec 1995 A
5475817 Waldo et al. Dec 1995 A
5475840 Nelson et al. Dec 1995 A
5481721 Serlet et al. Jan 1996 A
5491791 Glowny et al. Feb 1996 A
5504921 Dev et al. Apr 1996 A
5506984 Miller Apr 1996 A
5511196 Shackelford et al. Apr 1996 A
5511197 Hill et al. Apr 1996 A
5524244 Robinson et al. Jun 1996 A
5544040 Gerbaulet Aug 1996 A
5548724 Akizawa et al. Aug 1996 A
5548726 Pettus Aug 1996 A
5553282 Parrish et al. Sep 1996 A
5555367 Premerlani et al. Sep 1996 A
5555427 Aoe et al. Sep 1996 A
5557798 Skeen et al. Sep 1996 A
5560003 Nilsen et al. Sep 1996 A
5561785 Blandy et al. Oct 1996 A
5577231 Scalzi et al. Nov 1996 A
5592375 Salmon et al. Jan 1997 A
5594921 Pettus Jan 1997 A
5603031 White et al. Feb 1997 A
5617537 Yamada et al. Apr 1997 A
5628005 Hurvig May 1997 A
5640564 Hamilton et al. Jun 1997 A
5644720 Boll et al. Jul 1997 A
5644768 Periwal et al. Jul 1997 A
5652888 Burgess Jul 1997 A
5655148 Richman et al. Aug 1997 A
5659751 Heninger Aug 1997 A
5664110 Green et al. Sep 1997 A
5664111 Nahan et al. Sep 1997 A
5664191 Davidson et al. Sep 1997 A
5666493 Wojcik et al. Sep 1997 A
5671225 Hooper et al. Sep 1997 A
5671279 Elgamal Sep 1997 A
5674982 Greve et al. Oct 1997 A
5675796 Hodges et al. Oct 1997 A
5675797 Chung et al. Oct 1997 A
5675804 Sidik et al. Oct 1997 A
5680573 Rubin et al. Oct 1997 A
5680617 Gough et al. Oct 1997 A
5682534 Kapoor et al. Oct 1997 A
5684955 Meyer et al. Nov 1997 A
5689709 Corbett et al. Nov 1997 A
5694551 Doyle et al. Dec 1997 A
5706435 Barbara et al. Jan 1998 A
5706502 Foley et al. Jan 1998 A
5710887 Chelliah et al. Jan 1998 A
5715314 Payne et al. Feb 1998 A
5721825 Lawson et al. Feb 1998 A
5721832 Westrope et al. Feb 1998 A
5724540 Kametani Mar 1998 A
5724588 Hill et al. Mar 1998 A
5727048 Hiroshima et al. Mar 1998 A
5727145 Nessett et al. Mar 1998 A
5729594 Klingman Mar 1998 A
5732706 White et al. Mar 1998 A
5737607 Hamilton et al. Apr 1998 A
5742768 Gennaro et al. Apr 1998 A
5745678 Herzberg et al. Apr 1998 A
5745695 Gilchrist et al. Apr 1998 A
5745703 Cejtin et al. Apr 1998 A
5745755 Covey Apr 1998 A
5748897 Katiyar May 1998 A
5754849 Dyer et al. May 1998 A
5754977 Gardner et al. May 1998 A
5757925 Faybishenko May 1998 A
5758077 Danahy et al. May 1998 A
5758328 Giovannoli May 1998 A
5758344 Prasad et al. May 1998 A
5761507 Govett Jun 1998 A
5761656 Ben-Shachar Jun 1998 A
5764897 Khalidi Jun 1998 A
5764915 Heimsoth et al. Jun 1998 A
5764982 Madduri Jun 1998 A
5768532 Megerian Jun 1998 A
5774551 Wu et al. Jun 1998 A
5774729 Carney et al. Jun 1998 A
5778179 Kanai et al. Jul 1998 A
5778187 Monteiro et al. Jul 1998 A
5778228 Wei Jul 1998 A
5778368 Hogan et al. Jul 1998 A
5784560 Kingdon et al. Jul 1998 A
5787425 Bigus Jul 1998 A
5787431 Shaughnessy Jul 1998 A
5790548 Sistanizadeh et al. Aug 1998 A
5790677 Fox et al. Aug 1998 A
5794207 Walker et al. Aug 1998 A
5799173 Gossler et al. Aug 1998 A
5802367 Held et al. Sep 1998 A
5805805 Civanlar et al. Sep 1998 A
5806042 Kelly et al. Sep 1998 A
5808911 Tucker et al. Sep 1998 A
5809144 Sirbu et al. Sep 1998 A
5809507 Cavanaugh, III Sep 1998 A
5812819 Rodwin et al. Sep 1998 A
5813013 Shakib et al. Sep 1998 A
5815149 Mutschler, III et al. Sep 1998 A
5815709 Waldo et al. Sep 1998 A
5815711 Sakamoto et al. Sep 1998 A
5818448 Katiyar Oct 1998 A
5828842 Sugauchi et al. Oct 1998 A
5829022 Watanabe et al. Oct 1998 A
5832219 Pettus Nov 1998 A
5832529 Wollrath et al. Nov 1998 A
5832593 Wurst et al. Nov 1998 A
5835737 Sand et al. Nov 1998 A
5842018 Atkinson et al. Nov 1998 A
5844553 Hao et al. Dec 1998 A
5845090 Collins, III et al. Dec 1998 A
5845129 Wendorf et al. Dec 1998 A
5850442 Muftic Dec 1998 A
5860004 Fowlow et al. Jan 1999 A
5860153 Matena et al. Jan 1999 A
5864862 Kriens et al. Jan 1999 A
5864866 Henckel et al. Jan 1999 A
5872928 Lewis et al. Feb 1999 A
5872973 Mitchell et al. Feb 1999 A
5875335 Beard Feb 1999 A
5878411 Burroughs et al. Mar 1999 A
5884024 Lim et al. Mar 1999 A
5884079 Furusawa Mar 1999 A
5887134 Ebrahim Mar 1999 A
5887172 Vasudevan et al. Mar 1999 A
5889951 Lombardi Mar 1999 A
5889988 Held Mar 1999 A
5890158 House et al. Mar 1999 A
5892904 Atkinson et al. Apr 1999 A
5905868 Baghai et al. May 1999 A
5913029 Shostak Jun 1999 A
5915112 Boutcher Jun 1999 A
5925108 Johnson et al. Jul 1999 A
5933497 Beetcher et al. Aug 1999 A
5933647 Aronberg et al. Aug 1999 A
5935249 Stern et al. Aug 1999 A
5940827 Hapner et al. Aug 1999 A
5944793 Islam et al. Aug 1999 A
5946485 Weeren et al. Aug 1999 A
5946694 Copeland et al. Aug 1999 A
5949998 Fowlow et al. Sep 1999 A
5951652 Ingrassia, Jr. et al. Sep 1999 A
5956509 Kevner Sep 1999 A
5960404 Chaar et al. Sep 1999 A
5961582 Gaines Oct 1999 A
5963924 Williams et al. Oct 1999 A
5963947 Ford et al. Oct 1999 A
5966435 Pino Oct 1999 A
5966531 Skeen et al. Oct 1999 A
5969967 Aahlad et al. Oct 1999 A
5974201 Chang et al. Oct 1999 A
5978484 Apperson et al. Nov 1999 A
5978773 Hudetz et al. Nov 1999 A
5982773 Nishimura et al. Nov 1999 A
5987506 Carter et al. Nov 1999 A
5991808 Broder et al. Nov 1999 A
5996075 Matena Nov 1999 A
5999179 Kekic et al. Dec 1999 A
5999988 Pelegri-Llopart et al. Dec 1999 A
6003050 Silver et al. Dec 1999 A
6003065 Yan et al. Dec 1999 A
6003763 Gallagher et al. Dec 1999 A
6009103 Woundy Dec 1999 A
6009413 Webber et al. Dec 1999 A
6009464 Hamilton et al. Dec 1999 A
6014686 Elnozahy et al. Jan 2000 A
6016496 Roberson Jan 2000 A
6016516 Horikiri Jan 2000 A
6018619 Allard et al. Jan 2000 A
6023586 Gaisford et al. Feb 2000 A
6026414 Anglin Feb 2000 A
6031977 Pettus Feb 2000 A
6032151 Arnold et al. Feb 2000 A
6034925 Wehmeyer Mar 2000 A
6041351 Kho Mar 2000 A
6044381 Boothby et al. Mar 2000 A
6052761 Hornung et al. Apr 2000 A
6055562 Devarakonda et al. Apr 2000 A
6058381 Nelson May 2000 A
6058383 Narasimhalu et al. May 2000 A
6061699 DiCecco et al. May 2000 A
6061713 Bharadhwaj May 2000 A
6067575 McManis et al. May 2000 A
6078655 Fahrer et al. Jun 2000 A
6085030 Whitehead et al. Jul 2000 A
6085255 Vincent et al. Jul 2000 A
6092194 Touboul Jul 2000 A
6093216 Adl-Tabatabai et al. Jul 2000 A
6101528 Butt Aug 2000 A
6104716 Crichton et al. Aug 2000 A
6108346 Doucette et al. Aug 2000 A
6134603 Jones et al. Oct 2000 A
6154844 Touboul et al. Nov 2000 A
6157960 Kaminsky et al. Dec 2000 A
6182083 Scheifler et al. Jan 2001 B1
6185602 Bayrakeri Feb 2001 B1
6185611 Waldo et al. Feb 2001 B1
6189046 Moore et al. Feb 2001 B1
6192044 Mack Feb 2001 B1
6199068 Carpenter Mar 2001 B1
6199116 May et al. Mar 2001 B1
6212578 Racicot et al. Apr 2001 B1
6216138 Wells et al. Apr 2001 B1
6216158 Luo et al. Apr 2001 B1
6219675 Pal et al. Apr 2001 B1
6226746 Scheifler May 2001 B1
6243716 Waldo et al. Jun 2001 B1
6243814 Matena Jun 2001 B1
6247091 Lovett Jun 2001 B1
6253256 Wollrath et al. Jun 2001 B1
6263350 Wollrath et al. Jul 2001 B1
6263379 Atkinson et al. Jul 2001 B1
6269401 Fletcher et al. Jul 2001 B1
6272559 Jones et al. Aug 2001 B1
6282295 Young et al. Aug 2001 B1
6282568 Sondur et al. Aug 2001 B1
6282581 Moore et al. Aug 2001 B1
6292934 Davidson et al. Sep 2001 B1
6301613 Ahlstrom et al. Oct 2001 B1
6321275 McQuistan et al. Nov 2001 B1
6327677 Garg et al. Dec 2001 B1
6339783 Horikiri Jan 2002 B1
6343308 Marchesseault Jan 2002 B1
6351735 Deaton et al. Feb 2002 B1
6360266 Pettus Mar 2002 B1
6363409 Hart et al. Mar 2002 B1
6378001 Aditham et al. Apr 2002 B1
6385643 Jacobs et al. May 2002 B1
6408342 Moore et al. Jun 2002 B1
6418468 Ahlstrom et al. Jul 2002 B1
6446070 Arnold et al. Sep 2002 B1
6463480 Kikuchi et al. Oct 2002 B2
6505248 Casper et al. Jan 2003 B1
6564174 Ding et al. May 2003 B1
6578074 Bahlmann Jun 2003 B1
6603772 Moussavi et al. Aug 2003 B1
6604127 Murphy et al. Aug 2003 B2
6604140 Beck et al. Aug 2003 B1
6654793 Wollrath et al. Nov 2003 B1
6704803 Wilson et al. Mar 2004 B2
6757729 Devarakonda et al. Jun 2004 B1
6801940 Moran et al. Oct 2004 B1
6801949 Bruck et al. Oct 2004 B1
6804711 Dugan et al. Oct 2004 B1
6804714 Tummalapalli Oct 2004 B1
20010003824 Schnier Jun 2001 A1
20010011350 Zabelian Aug 2001 A1
20020059212 Takagi May 2002 A1
20020073019 Deaton Jun 2002 A1
20020111814 Barnett et al. Aug 2002 A1
20030005132 Nguyen et al. Jan 2003 A1
20030084204 Wollrath et al. May 2003 A1
20030191842 Murphy et al. Oct 2003 A1
Foreign Referenced Citations (44)
Number Date Country
0 300 516 Jan 1989 EP
0 351 536 Jan 1990 EP
0 384 339 Aug 1990 EP
0 472 874 Mar 1992 EP
0 474 340 Mar 1992 EP
497 022 Aug 1992 EP
0 555 997 Aug 1993 EP
0 565 849 Oct 1993 EP
0 569 195 Nov 1993 EP
0 625 750 Nov 1994 EP
0 635 792 Jan 1995 EP
0 651 328 May 1995 EP
0 660 231 Jun 1995 EP
0 697 655 Feb 1996 EP
0 718 761 Jun 1996 EP
0 767 432 Apr 1997 EP
0 778 520 Jun 1997 EP
0 794 493 Sep 1997 EP
0 803 810 Oct 1997 EP
0 803 811 Oct 1997 EP
0 805 393 Nov 1997 EP
0 810 524 Dec 1997 EP
0 817 020 Jan 1998 EP
0 817 022 Jan 1998 EP
0 817 025 Jan 1998 EP
0 836 140 Apr 1998 EP
2 253 079 Aug 1992 GB
2 262 825 Jun 1993 GB
2 305 087 Mar 1997 GB
7-168744 Apr 1995 JP
WO9207335 Apr 1992 WO
WO9209948 Jun 1992 WO
WO9325962 Dec 1993 WO
WO9403855 Feb 1994 WO
WO9603692 Feb 1996 WO
WO9610787 Apr 1996 WO
WO9618947 Jun 1996 WO
WO9624099 Aug 1996 WO
WO9802814 Jan 1998 WO
WO9804971 Feb 1998 WO
WO 9917194 Apr 1999 WO
WO 0113228 Feb 2001 WO
WO 0186394 Nov 2001 WO
WO 0190903 Nov 2001 WO
Related Publications (1)
Number Date Country
20010049713 A1 Dec 2001 US
Continuations (1)
Number Date Country
Parent 09030840 Feb 1998 US
Child 09809201 US