Claims
- 1. A parallel processing system, comprising:
a plurality of processing nodes arranged in a Howard Cascade; a first home node, responsive to an algorithm processing request, for (a) broadcasting the algorithm processing request to the plurality of processing nodes in a time and sequence order within the Howard Cascade and for (b) broadcasting a dataset of an algorithm to at least top level processing nodes of the Howard Cascade simultaneously; contiguous processing nodes within the plurality of processing nodes being operable to process contiguous parts of the dataset and to agglomerate results contiguously to the first home node in reverse to the time and sequence order.
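For illustration, the time-and-sequence distribution and reverse-order agglomeration recited in claim 1 can be sketched in Python (a minimal single-channel simulation; the node numbering and the `cascade_expand` helper are illustrative assumptions, not part of the claim):

```python
def cascade_expand(intervals):
    """Howard Cascade expansion with one channel per node: in each
    discrete time interval, the home node and every node already
    holding the request each forward it to exactly one new node."""
    joined = []                     # (interval, node id) in time-and-sequence order
    for t in range(1, intervals + 1):
        recruits = len(joined) + 1  # home node plus all current holders
        joined += [(t, len(joined) + i) for i in range(recruits)]
    return joined

nodes = cascade_expand(3)
assert len(nodes) == 2**3 - 1       # 7 nodes reached in 3 time intervals
# Agglomeration proceeds in reverse to the time-and-sequence order:
agglomeration_order = list(reversed(nodes))
assert agglomeration_order[0][0] == 3   # deepest-joined nodes report first
```

With one communication channel per node the number of holders doubles each interval, so t intervals reach 2^t - 1 processing nodes.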
- 2. A parallel processing system of claim 1, further comprising:
a broadcast communication channel between the first home node and the plurality of processing nodes, the first home node initiating a multicast message group on the broadcast communication channel and, when each of the plurality of processing nodes has joined the broadcast communication, communicating the dataset to the plurality of processing nodes.
- 3. A parallel processing system of claim 2, each of the plurality of nodes leaving the multicast message group upon receiving the dataset.
- 4. A parallel processing system of claim 3, the first home node being operable to detect whether any of the processing nodes failed to leave the multicast message group.
- 5. A parallel processing system of claim 4, the first home node being further configured for opening a discrete communication channel with a failed processing node to attempt recovery of the failed processing node.
- 6. A processing system of claim 1, the first home node comprising a controller for distributing the algorithm processing request to the plurality of processing nodes, and for appending, to the algorithm processing request, information defining data distribution among a number of the plurality of processing nodes.
- 7. A processing system of claim 1, each of the plurality of processing nodes being constructed and arranged to manage internal memory by storing all or part of the dataset.
- 8. A processing system of claim 1, each of the plurality of processing nodes having information from the algorithm processing request about how many other processing nodes are downstream within the Howard Cascade.
- 9. A processing system of claim 1, further comprising a gateway for communicating the algorithm processing request from a remote host to the first home node and for communicating agglomerated results to the remote host.
- 10. A processing system of claim 9, each of the plurality of processing nodes having an algorithm library for storing computationally intensive functions, the remote host having an API for defining an interface for the computationally intensive functions.
- 11. A processing system of claim 10, each of the plurality of processing nodes having like computationally intensive functions stored within its library.
- 12. A processing system of claim 9, each of the plurality of processing nodes having a data template and control software, the control software routing the algorithm processing request to the data template, the data template determining data indexing and input parameters to communicate with a particular function in the library and defined by the algorithm processing request.
- 13. A processing system of claim 12, the data template further determining whether or not its associated processing node requires data associated with the particular function.
- 14. A processing system of claim 13, the associated processing node communicating a message to the first home node to request the required data.
- 15. A processing system of claim 1, the first home node broadcasting the dataset to each of the plurality of processing nodes simultaneously.
- 16. A processing system of claim 1, a first group of the plurality of processing nodes being configured within a first strip, a second group of the plurality of processing nodes being configured in a second strip, the first and second strips being communicatively independent from each other during broadcasting of the dataset and being communicatively dependent upon each other during agglomeration of the results.
- 17. A processing system of claim 16, the first group comprising one top-level processing node and at least one other lower-level processing node, the second group comprising one top-level processing node.
- 18. A processing system of claim 17, each communication between processing nodes on adjacent levels utilizing one time unit in the time and sequence order, each communication between processing nodes of the first and second strip utilizing one time unit in the time and sequence order.
- 19. A processing system of claim 16, communication between the first and second strip occurring only between the top level processing nodes within the first and second strips.
- 20. A processing system of claim 16, each top processing node within the first and second strips relaying the dataset to lower level processing nodes within respective first and second strips.
- 21. A processing system of claim 20, further comprising a router for enabling communication between the first home node and the top processing nodes.
- 22. A processing system of claim 16, further comprising a first switch within the first strip for enabling intra-strip communications between processing nodes of the first group, and a second switch within the second strip for enabling intra-strip communications between processing nodes of the second group.
- 23. A processing system of claim 1, the plurality of processing nodes being grouped into two or more strips of the Howard Cascade, each of the strips having at least one switch for enabling intra-strip communications between processing nodes of a common strip, and further comprising at least one router for enabling communication between the first home node and top-level processing nodes of each strip.
- 24. A processing system of claim 23, the Howard Cascade being reconfigurable, via the switches and router, to separate one strip into a plurality of strips to accommodate boundary conditions.
- 25. A processing system of claim 23, the Howard Cascade being balanced between the strips.
- 26. A processing system of claim 25, the Howard Cascade being balanced by distributing computational load across the processing nodes by assigning every nth term in a series expansion to each processing node, thereby normalizing low and high order terms across nodes of the Howard Cascade.
- 27. A processing system of claim 26, the Howard Cascade being balanced by advancing, by one, a starting term in the dataset for each of the nodes on successive intervals and by rotating each processing node that computed a last term, to normalize imbalances across each nth interval of the series expansion.
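The nth-term assignment and rotation recited in claims 26-27 can be illustrated as follows (a Python sketch under the assumption that term cost grows with term order; the `assign_terms` helper and its parameters are illustrative, not part of the claims):

```python
def assign_terms(num_terms, num_nodes, rotation=0):
    """Deal series-expansion terms round-robin so every node receives
    every nth term, mixing cheap low-order and costly high-order terms
    (claim 26); the rotation offset advances the starting node on
    successive intervals (claim 27) to normalize residual imbalance."""
    assignment = [[] for _ in range(num_nodes)]
    for term in range(num_terms):
        assignment[(term + rotation) % num_nodes].append(term)
    return assignment

a = assign_terms(9, 3)
assert a[0] == [0, 3, 6] and a[2] == [2, 5, 8]
# Rotating by one shifts which node takes the first (and last) terms.
b = assign_terms(9, 3, rotation=1)
assert b[1] == [0, 3, 6]
```

Round-robin dealing gives each node an arithmetic progression of terms, so low- and high-order terms are spread evenly across the cascade.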
- 28. A processing system of claim 1, each of the processing nodes comprising at least one processor and at least one communication interface.
- 29. A processing system of claim 28, the communication interface comprising a network interface card.
- 30. A processing system of claim 28, each of the processing nodes comprising a plurality of parallel communication interfaces.
- 31. A processing system of claim 28, each of the processing nodes having a plurality of processors.
- 32. A processing system of claim 1, at least one of the processing nodes being reconfigurable to operate as the first home node if the home node fails.
- 33. A processing system of claim 1, further comprising one or more second home nodes for communicating with the processing nodes like the first home node, at least one of the second home nodes being reconfigurable to function as the first home node if the first home node fails.
- 34. A processing system of claim 33, further comprising a home node switch network for enabling communication among the first and second home nodes.
- 35. A processing system of claim 33, further comprising a processing node switch network for enabling communication between the first and second home nodes and the plurality of processing nodes.
- 36. A processing system of claim 33, the plurality of processing nodes being grouped in association with the first and second home nodes, the first and second home nodes communicating to reallocate one or more of the grouped processing nodes of the first home node to function in association with the second home nodes, to reallocate processing power according to a need of the second home nodes.
- 37. A processing system of claim 1, the Howard Cascade sharing overlapped data between the plurality of processing nodes to decrease I/O data transfer.
- 38. A processing system of claim 1, further comprising a hidden function API to hide knowledge of functions within the processing nodes.
- 39. A method for processing a context-based algorithm for enhanced parallel processing within a parallel processing architecture, comprising the steps of:
A. determining whether work to be performed by the algorithm is intrinsic to the algorithm or to another algorithm; B. determining whether the algorithm requires data movement; and C. parallelizing the algorithm based upon whether the work is intrinsic to the algorithm and whether the algorithm requires data movement.
- 40. A method of claim 39, the steps of determining comprising classifying the algorithm as one of transactional, ICNADM, ICADM, ECNADM and ECADM.
- 41. A method of claim 39, the steps of determining comprising determining that the algorithm comprises a series expansion with the work intrinsic to the algorithm that does not require data movement, the step of parallelizing comprising the step of assigning every nth term of the series expansion to each node in the parallel processing architecture.
- 42. A method of claim 41, further comprising the step of balancing series terms among nodes of the parallel processing architecture when the series terms are not evenly divisible.
- 43. A method of claim 39, the step of parallelizing comprising utilizing a run-time translation process when the work is intrinsic to the algorithm and wherein one of (a) the algorithm requires data movement and (b) the algorithm does not require data movement.
- 44. A method for parallel processing an algorithm for use within a Howard Cascade, comprising the steps of:
extracting input and output data descriptions for the algorithm; acquiring data for the algorithm; processing the algorithm on nodes of the Howard Cascade; agglomerating node results through the Howard Cascade; and returning results to a remote host requesting parallel processing of the algorithm.
- 45. A method for parallel computation comprising:
transmitting an algorithm computation request and associated data from a requesting host to a home node of a computing system wherein the request includes a requested number (N) of processing nodes to be applied to computation of the request; distributing the computation request from the home node to a plurality of processing nodes wherein the plurality of processing nodes includes N processing nodes coupled to the home node and wherein the distribution is in a hierarchical ordering; broadcasting the associated data from the home node to all of the plurality of processing nodes; agglomerating a final computation result from partial computation results received from the plurality of processing nodes wherein the agglomeration is performed in the reverse order of the hierarchical ordering; and returning the final computation result from the home node to the requesting host.
- 46. The method of claim 45 wherein the step of distributing comprises:
distributing the computation request to the N processing nodes in a time sequence hierarchical order.
- 47. The method of claim 45 wherein the step of distributing comprises:
distributing the computation request to the N processing nodes in a hierarchical order so as to minimize the utilization of communication channels coupling the N processing nodes to one another and the communication channels coupling the N processing nodes to the home node.
- 48. The method of claim 45 wherein the step of distributing comprises:
distributing the computation request to the N processing nodes in a hierarchical order time interval sequence such that the N processing nodes form a Howard Cascade structure.
- 49. The method of claim 45 wherein the step of distributing comprises:
distributing the computation request to the N processing nodes in a hierarchical order time interval sequence, wherein each processing node includes a number of processors (P) and a number of communication channels (C) and wherein a first processing node that receives the computation request in a first time interval may forward the computation request to a second processing node hierarchically deeper than the first processing node in a next time interval and wherein the total number of time intervals (t) required to distribute the computation request to all processing nodes is at most: t = ⌈log((N+P)/P) / log(C+1)⌉.
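As a sanity check, the bound recited above, t = ⌈log((N+P)/P)/log(C+1)⌉, can be evaluated with exact integer arithmetic (a sketch, not part of the claim; the `cascade_time` name and the counting convention are assumptions):

```python
def cascade_time(N, P=1, C=1):
    """Smallest t with P * (C+1)**t >= N + P, i.e. the claimed bound
    t = ceil(log((N+P)/P) / log(C+1)), computed without floating
    point so boundary cases are exact."""
    t, reach = 0, P
    while reach < N + P:
        reach *= (C + 1)
        t += 1
    return t

# One processor and one channel per node: 2**t - 1 nodes in t intervals.
assert cascade_time(7) == 3
assert cascade_time(8) == 4
# More channels per node increase the fan-out each interval.
assert cascade_time(15, P=1, C=3) == 2
```

Using repeated integer multiplication rather than `math.log` avoids floating-point rounding exactly at the boundary values where the ceiling changes.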
- 50. The method of claim 45 further comprising:
partitioning the associated data to generate data set information identifying a portion of the associated data to be utilized by each corresponding processing node, wherein the step of distributing includes distributing the data set information to the processing nodes.
- 51. The method of claim 50 further comprising:
determining a performance measure for processing of each processing node, wherein the step of partitioning comprises:
partitioning the associated data in accordance with the performance measure of each processing node to balance processing load between the processing nodes.
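The performance-weighted partitioning of claims 50-51 might be sketched as follows (illustrative Python; the proportional-split policy and the `partition_by_performance` name are assumptions, as the claims do not mandate a particular formula):

```python
def partition_by_performance(data_len, perf):
    """Split indices [0, data_len) into contiguous portions whose sizes
    are proportional to each node's performance measure, so faster
    nodes receive proportionally more of the associated data."""
    total = sum(perf)
    portions, start = [], 0
    for i, p in enumerate(perf):
        # the last node absorbs any rounding remainder
        if i < len(perf) - 1:
            size = round(p / total * data_len)
        else:
            size = data_len - start
        portions.append((start, start + size))
        start += size
    return portions

p = partition_by_performance(100, [1, 1, 2])
assert p == [(0, 25), (25, 50), (50, 100)]
```

The contiguous ranges match claim 1's "contiguous parts of the dataset", while the weights realize claim 51's balancing of processing load by per-node performance measure.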
- 52. A method of distributing an algorithm computation request comprising:
receiving within a home node of a distributed parallel computing system a computation request and associated data from a requesting host system; determining a number of processing nodes (N) of the parallel computing system to be applied to performing the computation request; partitioning the associated data to identify a portion of the data associated with each processing node; and recursively communicating the computation request and information regarding the partitioned data from the home node to each of the N processing nodes over a plurality of communication channels during a sequence of discrete time intervals, wherein each communication channel is used for communication by at most one processing node or home node during any one time interval, and wherein the number of discrete time intervals to recursively communicate to all N processing nodes is minimized.
- 53. The method of claim 52 wherein each processing node is associated with a corresponding communication channel of said plurality of communication channels and wherein the step of recursively communicating is completed in at most a number of time intervals (t) such that:
- 54. The method of claim 52 further comprising:
initiating computation by each processing node following completion of the recursive communication to all of the N processing nodes; and agglomerating a partial result generated by computation in each processing node and communicated to a corresponding higher processing node in the recursive order defined by the recursive communication to generate an agglomerated result.
- 55. The method of claim 54 further comprising:
returning the agglomerated result as a final result from the home node to the requesting host.
- 56. The method of claim 54 further comprising:
recursively communicating the agglomerated result to the N processing nodes; continuing computation by each processing node following completion of the recursive communication of the agglomerated result to all N processing nodes; repeating the steps of agglomerating, recursively communicating the agglomerated result and continuing; and returning the agglomerated result as a final result from the home node to the requesting host when the computation request has completed.
- 57. The method of claim 54 wherein the step of recursively communicating and the step of agglomerating utilize separate communication channels associated with a corresponding processing node.
- 58. The method of claim 54 wherein the step of determining comprises:
determining N from information in the computation request.
- 59. A method for distributing an algorithm computation request for a complex algorithm in a parallel processing system comprising:
receiving from a requesting host a computation request for a complex algorithm wherein the complex algorithm includes a plurality of computation sections; expanding the computation request to a plurality of nodes configured as a Howard Cascade; computing within the Howard Cascade a first computation section to generate a partial result; returning the partial result to a control device; receiving further direction from the control device; computing within the Howard Cascade a next computation section to generate a partial result in response to receipt of further direction to compute the next computation section; repeating the steps of returning, receiving and computing the next computation section in response to receipt of further direction to compute the next computation section; and returning the partial result to the requesting host as a final result in response to further direction to complete processing of the complex algorithm.
- 60. The method of claim 59 wherein the control device is the requesting host.
- 61. The method of claim 59 wherein the control device is a node of the plurality of nodes.
- 62. A method for parallelizing an algorithm comprising:
receiving a new algorithm description; automatically annotating the new algorithm description with template information relating to data used by the new algorithm and relating to data generated by the new algorithm; and storing the annotated new algorithm in each processing node of a Howard Cascade parallel processing system.
- 63. The method of claim 62 further comprising:
storing the template information in each node of the Howard Cascade.
- 64. The method of claim 62 wherein the step of annotating includes:
adding elements to extract input data descriptions and output data descriptions for each processing node on which the new algorithm may be computed prior to computing the new algorithm; adding elements to obtain input data for each said processing node based on the input data description prior to computing the new algorithm; adding elements to agglomerate partial results from other processing nodes of the Howard Cascade following computation of the new algorithm; and adding elements to return the partial result generated by computation of the new algorithm on each processing node to another processing node of the Howard Cascade.
- 65. A computer readable storage medium tangibly embodying program instructions for a method for parallel computation, the method comprising:
transmitting an algorithm computation request and associated data from a requesting host to a home node of a computing system wherein the request includes a requested number (N) of processing nodes to be applied to computation of the request; distributing the computation request from the home node to a plurality of processing nodes wherein the plurality of processing nodes includes N processing nodes coupled to the home node and wherein the distribution is in a hierarchical ordering; broadcasting the associated data from the home node to all of the plurality of processing nodes; agglomerating a final computation result from partial computation results received from the plurality of processing nodes wherein the agglomeration is performed in the reverse order of the hierarchical ordering; and returning the final computation result from the home node to the requesting host.
- 66. The computer readable storage medium of claim 65 wherein the method step of distributing comprises:
distributing the computation request to the N processing nodes in a time sequence hierarchical order.
- 67. The computer readable storage medium of claim 65 wherein the method step of distributing comprises:
distributing the computation request to the N processing nodes in a hierarchical order so as to minimize the utilization of communication channels coupling the N processing nodes to one another and the communication channels coupling the N processing nodes to the home node.
- 68. The computer readable storage medium of claim 65 wherein the method step of distributing comprises:
distributing the computation request to the N processing nodes in a hierarchical order time interval sequence such that the N processing nodes form a Howard Cascade structure.
- 69. The computer readable storage medium of claim 65 wherein the method step of distributing comprises:
distributing the computation request to the N processing nodes in a hierarchical order time interval sequence, wherein each processing node includes a number of processors (P) and a number of communication channels (C) and wherein a first processing node that receives the computation request in a first time interval may forward the computation request to a second processing node hierarchically deeper than the first processing node in a next time interval and wherein the total number of time intervals (t) required to distribute the computation request to all processing nodes is at most: t = ⌈log((N+P)/P) / log(C+1)⌉.
- 70. The computer readable storage medium of claim 65 wherein the method further comprises:
partitioning the associated data to generate data set information identifying a portion of the associated data to be utilized by each corresponding processing node, wherein the method step of distributing includes distributing the data set information to the processing nodes.
- 71. The computer readable storage medium of claim 70 wherein the method further comprises:
determining a performance measure for processing of each processing node, wherein the method step of partitioning comprises:
partitioning the associated data in accordance with the performance measure of each processing node to balance processing load between the processing nodes.
- 72. A computer readable storage medium tangibly embodying program instructions for a method of distributing an algorithm computation request, the method comprising:
receiving within a home node of a distributed parallel computing system a computation request and associated data from a requesting host system; determining a number of processing nodes (N) of the parallel computing system to be applied to performing the computation request; partitioning the associated data to identify a portion of the data associated with each processing node; and recursively communicating the computation request and information regarding the partitioned data from the home node to each of the N processing nodes over a plurality of communication channels during a sequence of discrete time intervals, wherein each communication channel is used for communication by at most one processing node or home node during any one time interval, and wherein the number of discrete time intervals to recursively communicate to all N processing nodes is minimized.
- 73. The computer readable storage medium of claim 72 wherein each processing node is associated with a corresponding communication channel of said plurality of communication channels and wherein the method step of recursively communicating is completed in at most a number of time intervals (t) such that:
- 74. The computer readable storage medium of claim 72 wherein the method further comprises:
initiating computation by each processing node following completion of the recursive communication to all of the N processing nodes; and agglomerating a partial result generated by computation in each processing node and communicated to a corresponding higher processing node in the recursive order defined by the recursive communication to generate an agglomerated result.
- 75. The computer readable storage medium of claim 74 wherein the method further comprises:
returning the agglomerated result as a final result from the home node to the requesting host.
- 76. The computer readable storage medium of claim 74 wherein the method further comprises:
recursively communicating the agglomerated result to the N processing nodes; continuing computation by each processing node following completion of the recursive communication of the agglomerated result to all N processing nodes; repeating the method steps of agglomerating, recursively communicating the agglomerated result and continuing; and returning the agglomerated result as a final result from the home node to the requesting host when the computation request has completed.
- 77. The computer readable storage medium of claim 74 wherein the method step of recursively communicating and the method step of agglomerating utilize separate communication channels associated with a corresponding processing node.
- 78. The computer readable storage medium of claim 74 wherein the method step of determining comprises:
determining N from information in the computation request.
- 79. A computer readable storage medium tangibly embodying program instructions for a method for distributing an algorithm computation request for a complex algorithm in a parallel processing system, the method comprising:
receiving from a requesting host a computation request for a complex algorithm wherein the complex algorithm includes a plurality of computation sections; expanding the computation request to a plurality of nodes configured as a Howard Cascade; computing within the Howard Cascade a first computation section to generate a partial result; returning the partial result to a control device; receiving further direction from the control device; computing within the Howard Cascade a next computation section to generate a partial result in response to receipt of further direction to compute the next computation section; repeating the method steps of returning, receiving and computing the next computation section in response to receipt of further direction to compute the next computation section; and returning the partial result to the requesting host as a final result in response to further direction to complete processing of the complex algorithm.
- 80. The computer readable storage medium of claim 79 wherein the control device is the requesting host.
- 81. The computer readable storage medium of claim 79 wherein the control device is a node of the plurality of nodes.
- 82. A computer readable storage medium tangibly embodying program instructions for a method for parallelizing an algorithm, the method comprising:
receiving a new algorithm description; automatically annotating the new algorithm description with template information relating to data used by the new algorithm and relating to data generated by the new algorithm; and storing the annotated new algorithm in each processing node of a Howard Cascade parallel processing system.
- 83. The computer readable storage medium of claim 82 wherein the method further comprises:
storing the template information in each node of the Howard Cascade.
- 84. The computer readable storage medium of claim 82 wherein the method step of annotating includes:
adding elements to extract input data descriptions and output data descriptions for each processing node on which the new algorithm may be computed prior to computing the new algorithm; adding elements to obtain input data for each said processing node based on the input data description prior to computing the new algorithm; adding elements to agglomerate partial results from other processing nodes of the Howard Cascade following computation of the new algorithm; and adding elements to return the partial result generated by computation of the new algorithm on each processing node to another processing node of the Howard Cascade.
- 85. A system for parallel computation comprising:
means for transmitting an algorithm computation request and associated data from a requesting host to a home node of a computing system wherein the request includes a requested number (N) of processing nodes to be applied to computation of the request; means for distributing the computation request from the home node to a plurality of processing nodes wherein the plurality of processing nodes includes N processing nodes coupled to the home node and wherein the distribution is in a hierarchical ordering; means for broadcasting the associated data from the home node to all of the plurality of processing nodes; means for agglomerating a final computation result from partial computation results received from the plurality of processing nodes wherein the agglomeration is performed in the reverse order of the hierarchical ordering; and means for returning the final computation result from the home node to the requesting host.
- 86. The system of claim 85 wherein the means for distributing comprises:
means for distributing the computation request to the N processing nodes in a time sequence hierarchical order.
- 87. The system of claim 85 wherein the means for distributing comprises:
means for distributing the computation request to the N processing nodes in a hierarchical order so as to minimize the utilization of communication channels coupling the N processing nodes to one another and the communication channels coupling the N processing nodes to the home node.
- 88. The system of claim 85 wherein the means for distributing comprises:
means for distributing the computation request to the N processing nodes in a hierarchical order time interval sequence such that the N processing nodes form a Howard Cascade structure.
- 89. The system of claim 85 wherein the means for distributing comprises:
means for distributing the computation request to the N processing nodes in a hierarchical order time interval sequence, wherein each processing node includes a number of processors (P) and a number of communication channels (C) and wherein a first processing node that receives the computation request in a first time interval may forward the computation request to a second processing node hierarchically deeper than the first processing node in a next time interval and wherein the total number of time intervals (t) required to distribute the computation request to all processing nodes is at most: t = ⌈log((N+P)/P) / log(C+1)⌉.
- 90. The system of claim 85 further comprising:
means for partitioning the associated data to generate data set information identifying a portion of the associated data to be utilized by each corresponding processing node, wherein the means for distributing includes distributing the data set information to the processing nodes.
- 91. The system of claim 90 further comprising:
means for determining a performance measure for processing of each processing node, wherein the means for partitioning comprises: means for partitioning the associated data in accordance with the performance measure of each processing node to balance processing load between the processing nodes.
- 92. A system of distributing an algorithm computation request comprising:
means for receiving within a home node of a distributed parallel computing system a computation request and associated data from a requesting host system; means for determining a number of processing nodes (N) of the parallel computing system to be applied to performing the computation request; means for partitioning the associated data to identify a portion of the data associated with each processing node; and means for recursively communicating the computation request and information regarding the partitioned data from the home node to each of the N processing nodes over a plurality of communication channels during a sequence of discrete time intervals, wherein each communication channel is used for communication by at most one processing node or home node during any one time interval, and wherein the number of discrete time intervals to recursively communicate to all N processing nodes is minimized.
- 93. The system of claim 92 wherein each processing node is associated with a corresponding communication channel of said plurality of communication channels and wherein the means for recursively communicating is completed in at most a number of time intervals (t) such that:
- 94. The system of claim 92 further comprising:
means for initiating computation by each processing node following completion of the recursive communication to all of the N processing nodes; and means for agglomerating a partial result generated by computation in each processing node and communicated to a corresponding higher processing node in the recursive order defined by the recursive communication to generate an agglomerated result.
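Agglomeration per claim 94 walks the distribution tree in reverse: each node folds in its children's results before reporting upward. A minimal sketch (the tree shape and `combine` operation are illustrative assumptions, not the patent's implementation):

```python
def agglomerate(tree: dict, partial: dict, node="home", combine=None):
    """Fold each node's partial result with its children's agglomerated
    results, mirroring the reverse of the distribution order."""
    if combine is None:
        combine = lambda a, b: a + b  # assumed associative combine step
    result = partial[node]
    for child in tree.get(node, []):
        result = combine(result, agglomerate(tree, partial, child, combine))
    return result
```

For a two-level cascade `tree = {"home": [1, 2], 1: [3]}`, node 3 reports to node 1, which reports to the home node together with node 2.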
- 95. The system of claim 94 further comprising:
means for returning the agglomerated result as a final result from the home node to the requesting host.
- 96. The system of claim 94 further comprising:
means for recursively communicating the agglomerated result to the N processing nodes; means for continuing computation by each processing node following completion of the recursive communication of the agglomerated result to all N processing nodes; means for repeating the steps of agglomerating, recursively communicating the agglomerated result and continuing; and means for returning the agglomerated result as a final result from the home node to the requesting host when the computation request has completed.
- 97. The system of claim 94 wherein the means for recursively communicating and the means for agglomerating utilize separate communication channels associated with a corresponding processing node.
- 98. The system of claim 94 wherein the means for determining comprises:
means for determining N from information in the computation request.
- 99. A system for distributing an algorithm computation request for a complex algorithm in a parallel processing system comprising:
means for receiving from a requesting host a computation request for a complex algorithm wherein the complex algorithm includes a plurality of computation sections; means for expanding the computation request to a plurality of nodes configured as a Howard Cascade; means for computing within the Howard Cascade a first computation section to generate a partial result; means for returning the partial result to a control device; means for receiving further direction from the control device; means for computing within the Howard Cascade a next computation section to generate a partial result in response to receipt of further direction to compute the next computation section; means for repeating the steps of returning, receiving and computing the next computation section in response to receipt of further direction to compute the next computation section; and means for returning the partial result to the requesting host as a final result in response to further direction to complete processing of the complex algorithm.
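Claim 99's section-by-section flow could be sketched as a control loop (the `control` callback and its return values are hypothetical): compute one section within the cascade, hand the partial result to the control device, and either continue with the next section or stop as directed.

```python
def run_complex_algorithm(sections, control):
    """Execute computation sections in order; after each, ask the
    control device whether to continue or finish (claim 99 sketch)."""
    partial = None
    for compute_section in sections:
        partial = compute_section(partial)  # computed within the cascade
        if control(partial) == "done":      # further direction received
            break
    return partial
```

The control device here may be the requesting host or a node of the cascade, as claims 100 and 101 provide.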
- 100. The system of claim 99 wherein the control device is the requesting host.
- 101. The system of claim 99 wherein the control device is a node of the plurality of nodes.
- 102. A system for parallelizing an algorithm comprising:
means for receiving a new algorithm description; means for automatically annotating the new algorithm description with template information relating to data used by the new algorithm and relating to data generated by the new algorithm; and means for storing the annotated new algorithm in each processing node of a Howard Cascade parallel processing system.
- 103. The system of claim 102 further comprising:
means for storing the template information in each node of the Howard Cascade.
- 104. The system of claim 102 wherein the means for annotating includes:
means for adding elements to extract input data descriptions and output data descriptions for each processing node on which the new algorithm may be computed prior to computing the new algorithm; means for adding elements to obtain input data for each said processing node based on the input data description prior to computing the new algorithm; means for adding elements to agglomerate partial results from other processing nodes of the Howard Cascade following computation of the new algorithm; and means for adding elements to return the partial result generated by computation of the new algorithm of each processing node to another processing node of the Howard Cascade.
RELATED APPLICATIONS
[0001] This application is a continuation-in-part of commonly-owned and co-pending U.S. patent application Ser. No. 09/603,020, filed on Jun. 26, 2000, entitled MASSIVELY PARALLEL INTERNET COMPUTING, and incorporated herein by reference. This application also claims priority to U.S. Patent Application No. 60/347,325, filed on Jan. 10, 2002, entitled PARALLEL PROCESSING SYSTEMS AND METHODS, and incorporated herein by reference.
Provisional Applications (1)

| Number     | Date     | Country |
|------------|----------|---------|
| 60/347,325 | Jan 2002 | US      |
Continuation in Parts (1)

| Number             | Date     | Country |
|--------------------|----------|---------|
| Parent: 09/603,020 | Jun 2000 | US      |
| Child: 10/340,524  | Jan 2003 | US      |