Claims
- 1. A method for parallelizing an application, comprising the steps of:
providing a plurality of adapters each affording a specific type of processing algorithm; at successive portions in an application, identifying an adapter appropriate for parallelizing that portion of the application; associating the identified adapter with the portion; generating a code segment which represents the functionality of each identified portion and which includes a call to a client-server library; and including a call to the client-server library in the application which, at run-time, launches said code segments from a main entry point in each respective code segment in lieu of executing the portion.
- 2. The method of claim 1, wherein, at run-time, the client-server library returns values to the application as a result of launching said code segments.
- 3. The method of claim 1, including the additional step of providing a software server, the server cooperating with the adapters to control and supervise distributed processing of one or more of said code segments.
- 4. The parallel computing method of claim 3, wherein the distributed processing functions include at least one of mapping, load balancing, and error detection and correction.
- 5. The parallel computing method of claim 3, wherein the distributed processing functions coordinate results of the computing in real-time and return said results according to the application.
- 6. A method for running a parallelized application in which a pool of work is to be performed, comprising the steps of:
using a master server that operates in a master mode, instantiating a stateless server which contains a first object including a code segment suitable for processing work from the pool; dispatching from the master server to the stateless server a first portion of work from the pool; reporting to the master server a progress of the first portion of work dispatched to the stateless server; and distributing additional portions of work from the master server to the stateless server once a prescribed amount of work progress has been reported by the stateless server.
- 7. The method of claim 6, including the additional steps of:
establishing the stateless server in a slave mode prior to the reporting step; and performing the first portion of work on the stateless server.
- 8. The method of claim 6, including the additional steps of:
establishing the stateless server in a master mode prior to the reporting step; instantiating a plurality of further servers, a set of said further servers operating in a slave mode; splitting the first portion of work for performance by said further servers; performing the split first portion of work at said further servers; and advising the stateless server of the progress of the first portion of work performed by said further servers.
- 9. The method of claim 8, wherein the splitting step comprises splitting the first portion of work from the pool into overlapping parts.
- 10. The method of claim 8, wherein the splitting step comprises splitting the first portion of work from the pool into overlapping parts.
- 11. The method of claim 6, including the additional steps of:
monitoring at the master server the progress reported by the stateless server; and redistributing from the master server the first portion of work from the pool to a different stateless server in the event that a prescribed criterion is not met.
- 12. The method of claim 11, wherein the prescribed criterion is that a progress measure has advanced by a threshold amount since a prior reporting step.
- 13. The method of claim 11, including the additional step of terminating the first portion of work running on the stateless server.
- 14. The method of claim 6, wherein the first portion of work has a first size, the method including the additional steps of:
analyzing the progress reported to the master server by the stateless server; establishing a second size for said additional portions of the work in the pool to be distributed to the stateless server in response to the analyzing step; and distributing additional portions of work from the master server to the stateless server with each additional portion having the second size, whereby the master server balances the load to the stateless servers in real-time in order to maintain the prescribed amount of work progress below 100%.
- 15. The method of claim 14, wherein the second size is different than the first size.
- 16. The method of claim 14, wherein the second size is the same as the first size.
- 17. The method of claim 6, wherein the first portion of work has a first size, the method including the additional steps of:
establishing the stateless server in a master mode prior to the reporting step; instantiating a plurality of further servers, a set of said further servers operating in a slave mode; splitting the first portion of work for performance by said further servers; performing the split first portion of work at said further servers; reporting to the stateless server the progress of the first portion of work performed by said further servers; analyzing the progress reported to the stateless server; establishing a second size for said additional portions of the work in the pool to be distributed to said further servers in response to the analyzing step; and distributing additional portions of work from the stateless server to said further servers with each additional portion having the second size, whereby the stateless server balances the load to said further servers in real-time in order to maintain the prescribed amount of work progress below 100%.
- 18. A method for running a parallelized application, comprising the steps of:
obtaining a portion of an application at a first server, the portion containing an algorithm to process and an interface to a crosscaller; executing the portion at the first server in a master mode; calling the interface to obtain global data for use as an input to the algorithm executing at the server; replicating the portion from the first server to at least one other resource; and permitting the replicated portion to dynamically select whether to operate in a master or slave mode.
- 19. The method of claim 18, including the additional step of monitoring the distributed portions for stalls/dead threads.
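
The methods recited above lend themselves to short illustrative sketches. The adapter and code-segment flow of claims 1 and 2, for example, can be pictured with the minimal Python sketch below. Every name here (`Adapter`, `ClientServerLibrary`, `generate_code_segment`, `parallelize_application`, `map_adapter`) is an illustrative assumption rather than the patented implementation; the sketch only shows an adapter being identified for a portion, a code segment generated from it, and the run-time launch routed through a client-server library call in lieu of executing the portion directly.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Adapter:
    """An adapter affording one specific type of processing algorithm (assumed shape)."""
    name: str
    matches: Callable[[str], bool]                # does this adapter fit a given portion?
    parallelize: Callable[[Callable], Callable]   # wraps the portion for parallel execution


class ClientServerLibrary:
    """Stand-in for the client-server library the generated segments call into."""
    def launch(self, segment: Callable, *args):
        # At run-time the library launches the code segment from its main entry
        # point in lieu of executing the original portion, and returns the result.
        return segment(*args)


def generate_code_segment(adapter: Adapter, portion: Callable,
                          library: ClientServerLibrary) -> Callable:
    """Generate a code segment representing the portion's functionality,
    including a call into the client-server library."""
    parallel_body = adapter.parallelize(portion)

    def segment(*args):                            # the segment's main entry point
        return library.launch(parallel_body, *args)

    return segment


def parallelize_application(portions: List[Callable], adapters: List[Adapter],
                            library: ClientServerLibrary) -> Dict[str, Callable]:
    """At successive portions, identify an appropriate adapter and associate it."""
    segments = {}
    for portion in portions:
        adapter = next(a for a in adapters if a.matches(portion.__name__))
        segments[portion.__name__] = generate_code_segment(adapter, portion, library)
    return segments


# Hypothetical usage with a trivial "map" adapter:
map_adapter = Adapter(
    name="map",
    matches=lambda portion_name: portion_name.startswith("map_"),
    parallelize=lambda fn: fn,        # a real adapter would distribute the call
)

def map_prices(prices):
    return [p * 1.1 for p in prices]

segments = parallelize_application([map_prices], [map_adapter], ClientServerLibrary())
print(segments["map_prices"]([10, 20]))
```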
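The master/stateless-server dispatch loop of claims 6 and 7 might be simulated in a single process as follows. Threads and in-memory queues stand in for the distributed servers, and `PROGRESS_THRESHOLD` is an assumed stand-in for the prescribed amount of work progress; none of these names come from the patent text.

```python
import queue
import threading

PROGRESS_THRESHOLD = 0.5   # assumed "prescribed amount" of progress before more work is sent


class StatelessServer(threading.Thread):
    """Worker holding a first object (the code segment) and reporting progress."""

    def __init__(self, code_segment, inbox, progress_out):
        super().__init__(daemon=True)
        self.code_segment = code_segment
        self.inbox = inbox                 # portions dispatched by the master server
        self.progress_out = progress_out   # progress reported back to the master

    def run(self):
        while True:
            portion = self.inbox.get()
            if portion is None:            # shutdown signal from the master
                break
            for i, item in enumerate(portion, 1):
                self.code_segment(item)
                self.progress_out.put(i / len(portion))   # fraction of this portion done


def master(pool, code_segment, portion_size=4):
    """Master mode: dispatch a first portion, then distribute additional portions
    once the prescribed amount of progress has been reported."""
    inbox, progress = queue.Queue(), queue.Queue()
    worker = StatelessServer(code_segment, inbox, progress)
    worker.start()

    portions = [pool[i:i + portion_size] for i in range(0, len(pool), portion_size)]
    inbox.put(portions.pop(0))                     # first portion of work from the pool
    while portions:
        if progress.get() >= PROGRESS_THRESHOLD:   # prescribed progress reported
            inbox.put(portions.pop(0))             # distribute an additional portion
    inbox.put(None)
    worker.join()


master(list(range(20)), code_segment=lambda item: item * item)
```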
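For claims 8 through 10, a stateless server established in master mode splits its portion across further servers operating in slave mode and is advised of their progress. A rough sketch, with a thread pool standing in for the further servers and `split`/`run_in_master_mode` as invented helper names:

```python
from concurrent.futures import ThreadPoolExecutor


def split(portion, n_parts):
    """Split the first portion of work into contiguous parts for further servers."""
    step = max(1, len(portion) // n_parts)
    return [portion[i:i + step] for i in range(0, len(portion), step)]


def run_in_master_mode(portion, code_segment, n_slaves=3, advise=print):
    """A stateless server in master mode: instantiate further servers in slave
    mode, split the portion among them, and be advised of their progress."""
    parts = split(portion, n_slaves)
    completed = 0
    with ThreadPoolExecutor(max_workers=n_slaves) as further_servers:
        for _ in further_servers.map(lambda part: [code_segment(x) for x in part], parts):
            completed += 1
            advise(f"progress: {completed}/{len(parts)} parts complete")
    return completed


run_in_master_mode(list(range(12)), code_segment=lambda x: x + 1)
```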
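Claims 11 through 13 (and the stall monitoring of claim 19) describe redistributing a portion when progress has not advanced by a threshold since the prior report. A toy sketch follows; the `StatelessServer` class and its `assign`/`step`/`terminate` methods are invented stand-ins for remote servers, with the stall simulated deterministically.

```python
class StatelessServer:
    """Minimal stand-in; a real stateless server would run on a remote resource."""

    def __init__(self, name, stalled=False):
        self.name = name
        self.progress = 0.0
        self.stalled = stalled          # simulates a dead/stalled thread

    def assign(self, portion):
        self.portion = portion
        self.progress = 0.0

    def step(self):                     # one polling interval of (simulated) work
        if not self.stalled:
            self.progress = min(1.0, self.progress + 0.25)

    def terminate(self):                # terminate the work running on this server
        self.stalled = True


def supervise(portion, servers, threshold=0.1):
    """Monitor reported progress at the master; if it has not advanced by the
    threshold since the prior report, terminate the work and redistribute the
    portion to a different stateless server."""
    server = servers.pop()
    server.assign(portion)
    last_reported = 0.0
    while server.progress < 1.0:
        server.step()                                      # stands in for a reporting interval
        if server.progress - last_reported < threshold:    # prescribed criterion not met
            server.terminate()
            server = servers.pop()                         # a different stateless server
            server.assign(portion)
            last_reported = 0.0
        else:
            last_reported = server.progress
    return server.name


print(supervise(list(range(8)), [StatelessServer("s1"), StatelessServer("s0", stalled=True)]))
# s0 never advances, so the portion is redistributed to s1
```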
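The real-time load balancing of claims 14 through 17 turns on analyzing reported progress and establishing a second size for additional portions. One simple way this could be done is sketched below; the `target_seconds` pacing heuristic is an assumption for illustration, not a rule taken from the patent.

```python
import time


def establish_second_size(elapsed_seconds, completed_items, target_seconds=2.0):
    """Analyze the reported progress and establish a second size so that each
    additional portion should take roughly target_seconds to complete."""
    rate = completed_items / max(elapsed_seconds, 1e-9)   # items completed per second
    return max(1, int(rate * target_seconds))


def distribute(pool, code_segment, first_size=4):
    """Dispatch a first portion of a first size, then size every additional
    portion from the analysis of the progress reported for the previous one;
    the second size may be different from, or the same as, the first size."""
    size, start = first_size, 0
    while start < len(pool):
        portion = pool[start:start + size]
        began = time.monotonic()
        for item in portion:                  # stands in for the stateless server's work
            code_segment(item)
        elapsed = time.monotonic() - began
        size = establish_second_size(elapsed, len(portion))
        start += len(portion)


distribute(list(range(100)), code_segment=lambda x: x * x)
```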
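Finally, claim 18's crosscaller interface and dynamic master/slave selection could look roughly like the sketch below. The shared `master_claimed` list merely stands in for whatever cluster-wide election mechanism the real system would use, and `CrossCaller`/`ReplicatedPortion` are illustrative names only.

```python
class CrossCaller:
    """Interface through which a portion obtains global data as algorithm input."""

    def __init__(self, global_data):
        self._global_data = global_data

    def get_global_data(self):
        return self._global_data


class ReplicatedPortion:
    """A portion of the application: an algorithm plus a crosscaller interface.
    Each replica dynamically selects whether to operate in master or slave mode."""

    def __init__(self, algorithm, crosscaller):
        self.algorithm = algorithm
        self.crosscaller = crosscaller

    def run(self, master_claimed):
        # The first replica to claim the shared marker operates as master;
        # later replicas see it taken and dynamically fall back to slave mode.
        if not master_claimed:
            master_claimed.append(True)
            mode = "master"
        else:
            mode = "slave"
        data = self.crosscaller.get_global_data()    # call the interface for global data
        return mode, self.algorithm(data)


master_claimed = []                 # stands in for a cluster-wide election mechanism
caller = CrossCaller(global_data=[1, 2, 3, 4])
replicas = [ReplicatedPortion(sum, caller) for _ in range(3)]
print([replica.run(master_claimed) for replica in replicas])
# -> [('master', 10), ('slave', 10), ('slave', 10)]
```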
Parent Case Info
[0001] This patent application claims the benefit of priority under 35 U.S.C. § 119 of U.S. Provisional Patent Application Serial No. 60/338,278, filed Dec. 4, 2001, entitled “Parallel Computing System And Architecture,” the entirety of which is hereby incorporated by reference.
Provisional Applications (1)

| Number | Date | Country |
| --- | --- | --- |
| 60/338,278 | Dec. 4, 2001 | US |