This invention relates to execution of graph-based computations.
Complex computations can often be expressed as a data flow through a directed graph, with components of the computation being associated with the vertices of the graph and data flows between the components corresponding to links (arcs, edges) of the graph. A system that implements such graph-based computations is described in U.S. Pat. No. 5,966,072, EXECUTING COMPUTATIONS EXPRESSED AS GRAPHS. One approach to executing a graph-based computation is to execute a number of processes, each associated with a different vertex of the graph, and to establish communication paths between the processes according to the links of the graph. For example, the communication paths can use TCP/IP or UNIX domain sockets, or use shared memory to pass data between the processes.
In one aspect, in general, a method for processing transactions using graph-based computations includes determining that at least one of a plurality of graph elements of a computation graph of a set of one or more computation graphs includes a computation to be performed for a given transaction, associating the given transaction with an instance of the computation graph that includes reusable computation elements associated with respective graph elements, and executing the graph to perform the computation.
Aspects can include one or more of the following features.
At least some instances of the graphs in the set of computation graphs share one or more of the computation elements.
The computation elements include computations executed by at least one of an operating system process and a process thread.
The graph elements include vertices of the computation graphs.
Associating the transaction with an instance of the computation graph includes assigning a computation element corresponding to each graph element in the computation graph to the instance of the computation graph before beginning execution of the graph elements.
Associating the transaction with an instance of the computation graph includes assigning a computation element corresponding to a graph element in the computation graph to the instance of the computation graph after executing another graph element using a computation element already assigned to the instance.
At least two of the graph elements use a common resource, and executing the graph to perform the computation includes assigning each of the graph elements using the common resource to a single computation element.
The single computation element is already initiated when the graph elements are assigned to the computation element.
The common resource includes a database.
The common resource includes a specific port.
Processing the transaction includes receiving a request for the transaction.
The method also includes determining that the same computation graph is associated with a computation to be performed for a second transaction, associating the second transaction with a second instance of the computation graph, and executing the second instance of the graph to perform the computation for the second transaction.
The computations for transactions performed using different instances of computation graphs are performed in a time-interleaved manner.
Multiple transactions are processed concurrently.
Each transaction is associated with one or more work elements that are processed according to the corresponding computation graph.
At least some transactions are each associated with one work element that is processed according to the corresponding computation graph.
The method further includes forming multiple instances of at least some of the computation graphs.
The method further includes identifying that an error has occurred in the performing of a computation for one of the transactions, and continuing the performing of a computation for another one of the transactions.
The processing of a first transaction of the plurality of transactions starts at a first time, and the processing of a second transaction of the plurality of transactions starts at a second time later than the first time; the method further includes completing the performing of the computation for the second transaction before completing the performing of the computation for the first transaction.
In another aspect, in general, a system for processing transactions using graph-based computations includes means for determining that at least one of a plurality of graph elements of a computation graph of a set of one or more computation graphs includes a computation to be performed for a given transaction, means for associating the given transaction with an instance of the computation graph that includes reusable computation elements associated with respective graph elements, and means for executing the graph to perform the computation.
In another aspect, in general, a computer-readable medium stores a computer program for processing transactions using graph-based computations. The computer program includes instructions for causing a computer system to: determine that at least one of a plurality of graph elements of a computation graph of a set of one or more computation graphs includes a computation to be performed for a given transaction, associate the given transaction with an instance of the computation graph that includes reusable computation elements associated with respective graph elements, and execute the graph to perform the computation.
In another aspect, in general, a method for processing graph-based computations includes: within a graph including vertices representing graph components that process work elements according to links joining the vertices, providing at least one error-handling graph component configured to provide error information to a process external to the graph, and processing data, including, in response to a graph component encountering an error while processing, redirecting processing to the error-handling graph component including directing at least some of the work elements to the error-handling component according to at least one link to a vertex representing the error-handling component.
Aspects can include one or more of the following features.
Redirecting processing to the error-handling graph component includes removing work elements from at least one input queue.
Redirecting processing to the error-handling graph component includes processing the work elements directed to the error-handling graph component.
Processing the work elements directed to the error-handling graph component includes rolling back changes to a database made prior to the error.
Processing the data includes, for graph components not included in handling the error, discarding work elements directed to those graph components.
A sub-graph is provided, the sub-graph including an error-handling sub-graph component configured to provide an error code as an output of the sub-graph.
If output provided by the sub-graph indicates that an error occurred in the sub-graph, processing is redirected to the error-handling graph component.
Redirecting processing to the error-handling graph component includes communicating, from the graph component that encountered the error, to the error-handling graph component, work elements that the graph component was processing when the error occurred.
The work elements are communicated according to the link to the vertex representing the error-handling component.
Redirecting processing to the error-handling graph component includes communicating, from the graph component that encountered the error, to the error-handling graph component, reporting information about the error.
The reporting information is communicated according to an implicit connection between the graph component that encountered the error and the error-handling component.
The implicit connection is revealed as an explicit link between a vertex representing the graph component that encountered the error and a vertex representing the error-handling component in response to a user request.
Providing the error-handling graph component includes providing a plurality of error-handling graph components, and redirecting processing to the error-handling graph component includes selecting an error-handling graph component based on output provided from the graph component that encountered the error.
Processing the data also includes, if a graph component encounters an error while processing, outputting an identification of a work element that caused the error.
Processing includes: enabling a first component of the graph; disabling the error-handling component; and for each component downstream of the first component other than the error-handling component, enabling the component if a component immediately upstream of the component is enabled.
Redirecting processing to the error-handling graph component includes: stopping execution of each enabled graph component; disabling the component that encountered the error; enabling the error-handling component; disabling components downstream of the component that encountered the error that are not downstream of the error-handling component; and enabling components upstream of the error-handling component.
Redirecting processing to the error-handling graph component includes, where the error occurred in a first component, if the error occurs under a first condition, directing process flow from the first component to a first error-handling component upstream of the first component, and if the error occurs under a second condition, directing process flow from the first component to a second error-handling component downstream of the first component.
The first condition is that a counter is below a limit.
The second condition is that a counter is above a limit.
Redirecting processing to the error-handling graph component also includes enabling a set of graph components, the set having been determined prior to the error.
In another aspect, in general, a system for processing graph-based computations includes, within a graph including vertices representing graph components that process work elements according to links joining the vertices, means for providing at least one error-handling graph component configured to provide error information to a process external to the graph, and means for processing data, including, in response to a graph component encountering an error while processing, redirecting processing to the error-handling graph component including directing at least some of the work elements to the error-handling component according to at least one link to a vertex representing the error-handling component.
In another aspect, in general, a computer-readable medium stores a computer program for processing graph-based computations. The computer program includes instructions for causing a computer system to: within a graph including vertices representing graph components that process work elements according to links joining the vertices, provide at least one error-handling graph component configured to provide error information to a process external to the graph, and process data, including, in response to a graph component encountering an error while processing, redirecting processing to the error-handling graph component including directing at least some of the work elements to the error-handling component according to at least one link to a vertex representing the error-handling component.
Other features and advantages of the invention are apparent from the following description, and from the claims.
1. Overview
This application is related to U.S. patent application Ser. Nos. 10/268,509, Startup and Control of Graph-Based Computation, filed Oct. 10, 2002, and 11/733,579, Transactional Graph-Based Computation, filed Apr. 10, 2007, which is a continuation of application Ser. No. 10/268,509. Both are incorporated herein by reference.
The system described below implements a method for executing computations that are defined in terms of computation graphs. Referring to
A process for a vertex is ready to run when at least one work element is queued at each of the vertex's inputs. As illustrated in
In some examples, a work flow may include work elements from multiple transactions (i.e., a first set of one or more work elements corresponds to a first transaction, a second set of one or more work elements corresponds to a second transaction, etc.). A transaction can include a set of work elements representing actions that are all to be processed as a set, such that if one action fails, none should be carried out. Multiple instances of a graph may be used to process multiple transactions, and multiple instances of individual graph components (represented by vertices of a computation graph) may be created as needed by implementing computations of a graph component with a reusable computation element (e.g., an operating system process). By associating different transactions with different respective instances of graphs, multiple transactions can be processed concurrently. By enabling multiple computation elements to be assigned as needed to graph instances, efficient resource sharing can be realized by having a computation element be used by one graph instance and reused by another graph instance, as described in more detail below.
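As a rough illustration of the relationships just described, the following Python sketch models transactions, their work elements, and graph instances with reusable computation elements; the class names (WorkElement, Transaction, GraphInstance) and their fields are hypothetical, chosen only for this example and not taken from the system itself.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class WorkElement:
    """A unit of data flowing through a computation graph."""
    transaction_id: str
    payload: dict

@dataclass
class Transaction:
    """A set of work elements that must all succeed or all be abandoned."""
    transaction_id: str
    work_elements: List[WorkElement] = field(default_factory=list)

@dataclass
class GraphInstance:
    """One instance of a computation graph, assigned to a single transaction.

    Each vertex may be served by a reusable computation element (for example,
    a pooled operating system process); one element can serve several
    vertices, and it can be reused by another instance once this transaction
    completes.
    """
    graph_name: str
    transaction: Optional[Transaction] = None
    vertex_to_element: Dict[str, object] = field(default_factory=dict)
```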
Referring to
A transaction subscription module 220 of the system receives control inputs 222 from a transaction subscribing graph component (e.g., a component providing commands without necessarily processing work elements, such as the component represented by vertex 10
In some examples, the scheduler of the transaction subscription module 220 uses a remote procedure call (RPC) process. When the scheduler receives a work element for a given transaction, it assigns the work element to the appropriate component of a graph instance associated with (i.e., assigned to) the transaction. The process assigned to that graph instance executes the computation of that component. The data associated with the work element is written to a temporary space available for the graph instance and accessible by the process. The scheduler is notified that the transaction subscription module 220 is done with that component, and the scheduler then schedules any downstream graph components for execution. Eventually the transaction will progress through the whole graph (as the graph is executed using the graph computation processing resources 230), and be output by way of an RPC publish process. This takes the data accumulated in the temporary space and commits it to the appropriate output channel, e.g., the database output 6 in
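A minimal sketch of the scheduling flow described above, under assumed names (Scheduler, on_work_element, publish) that do not come from the system itself: a work element arrives for a transaction, is routed to the appropriate component of that transaction's graph instance, results accumulate in a per-instance temporary space, downstream components are scheduled, and the accumulated data is committed to the output channel only after the transaction has traversed the whole graph.

```python
from collections import defaultdict, deque

class Scheduler:
    """Toy scheduler: routes work elements to a transaction's graph instance
    and publishes the accumulated temporary space once the last component
    finishes (a sketch, not the RPC-based implementation described here)."""

    def __init__(self, graph_edges):
        self.graph_edges = graph_edges   # component -> list of downstream components
        self.temp_spaces = {}            # transaction id -> per-instance temporary space
        self.ready = deque()             # (transaction id, component, data)

    def on_work_element(self, transaction_id, component, data):
        self.temp_spaces.setdefault(transaction_id, defaultdict(list))
        self.ready.append((transaction_id, component, data))

    def run(self, compute, publish):
        while self.ready:
            txn, component, data = self.ready.popleft()
            result = compute(component, data)                # execute the component
            self.temp_spaces[txn][component].append(result)  # write to temporary space
            downstream = self.graph_edges.get(component, [])
            if downstream:
                for nxt in downstream:                       # schedule downstream components
                    self.ready.append((txn, nxt, result))
            else:
                publish(txn, dict(self.temp_spaces[txn]))    # commit to the output channel

# Example with a trivial two-component graph.
edges = {"validate": ["update_balance"], "update_balance": []}
sched = Scheduler(edges)
sched.on_work_element("txn-1", "validate", {"account": "A", "amount": 100})
sched.run(compute=lambda comp, d: {**d, "step": comp},
          publish=lambda txn, space: print("commit", txn, space))
```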
In general, different transactions may be processed concurrently, each being processed by a different instance of a graph. System 200, through the transaction subscription module 220, allocates resources for an instance of a computation graph for each transaction and, through the graph computation processing resources 230, controls their execution to process the work flows.
2. Graph Data Structures
System 200 includes a number of features that provide rapid startup of graph computations as well as efficient sharing of limited resources.
Before processing a transaction with an instance of a computation graph, the transaction subscription module 220 creates a runtime data structure for that graph instance in a functionally shared memory. In one embodiment, a single shared memory segment is created in which all the runtime data structures for graph instances are created.
The process or processes bound to a transaction are associated with the vertices of the graph and each of these processes maps the shared memory segment into its address space. The processes may be associated with vertices when graph instances are created for individual transactions or they may not be associated with vertices until instances of individual graph components are created or executed. The processes read and write work elements from and to the runtime data structures for the graph instances during processing of the transaction. That is, data for the transactions that flow through the graph are passed from component to component, and from process to process if more than one process is bound to the transaction, through these runtime data structures in the shared memory segment. By containing the data for a given transaction in a memory space accessible to each component of the graph and executing each component with a consistent process or set of processes, state can be shared between the components. Among other advantages, this allows all the database operations associated with executing the computations for a transaction to be committed at once, after it is confirmed that the transaction executed successfully.
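The deferred, all-at-once commit behavior described above can be sketched as follows; TransactionBuffer and its methods are assumed names standing in for the runtime data structure in the shared memory segment, and the database is mocked as an in-memory dict.

```python
class TransactionBuffer:
    """Buffers a transaction's database operations in a per-instance runtime
    structure (here a plain list, standing in for the shared-memory runtime
    data structure) so they can be committed together or rolled back."""

    def __init__(self):
        self.pending_ops = []     # operations staged by graph components
        self.committed = False

    def stage(self, op):
        """Called by any component bound to the transaction's graph instance."""
        self.pending_ops.append(op)

    def commit(self, database):
        """Apply all staged operations at once after successful execution."""
        for op in self.pending_ops:
            op(database)
        self.committed = True

    def rollback(self):
        """Discard staged operations; the database was never touched."""
        self.pending_ops.clear()

# Example: two components stage updates; the commit happens once, at the end.
db = {"balance": 50}
buf = TransactionBuffer()
buf.stage(lambda d: d.__setitem__("balance", d["balance"] + 100))  # deposit component
buf.stage(lambda d: d.__setitem__("hold", 25))                     # account-hold component
transaction_succeeded = True
if transaction_succeeded:
    buf.commit(db)
else:
    buf.rollback()
print(db)   # {'balance': 150, 'hold': 25}
```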
3. Process Pools
As introduced above, graph computation processing resources 230 for executing the components of a graph instance can be implemented using process pools managed and allocated by the scheduler. For each of a number of different types of computation, a pool of processes is created prior to beginning processing of work flows of transactions using graph components requiring that type of computation. When a transaction is assigned to a graph instance, if computation of a particular type will be needed to perform the computation for a given component of the graph instance, the scheduler allocates a member of the process pool for use by the graph instance and with the given component. The member of the process pool remains associated with that graph instance for the duration of processing of the transaction, and may be re-used for other components within that graph instance that require the same type of computation. The process may be released back to the pool once no work elements remain upstream of the last component in the graph instance for that transaction that needs that type of computation. There may be many different pools of processes, each associated with a corresponding type of computation. Processes in a pool may be used for components in the same or different graph instances, including for a given type of component in different graph instances, and for multiple different components in one graph instance, for example.
In some implementations, each process in a process pool is a separate process (e.g., a UNIX process) that is invoked by the transaction subscription module 220, which manages the process pools. The module 220 maintains a separate work queue for each process pool. Each entry in a work queue identifies a specific vertex of a graph instance for which the process is to perform computation.
Some processes reserve or consume fixed resources. An example of such a process is one that makes a connection to a database, such as an Oracle® database. Since resources are consumed with forming and maintaining each database connection, it is desirable to limit the number of such processes that are active. If a graph includes multiple components that access a database, it may be desirable for all the database operations for a given transaction to take place in a single database process. To accommodate this, a set of processes may be established that each maintain a connection to the database and are each capable of performing the database functions that a given graph instance may require. When a graph instance is assigned to a given transaction, one process from the set is assigned to that graph instance for the entire transaction, as described above, and all of the database components are multiplexed to that process. When a vertex requires a process for accessing the database to process a work element of the transaction, the assigned process (which has already established its connection with the database) is associated with that vertex. In this way, the overhead of the initialization steps of that process that would have been required to connect to that database is avoided, and all database actions for a given transaction are handled by the same process. Other types of processes can be handled in the same way.
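The pooling and per-transaction stickiness just described might look roughly like the following sketch; ProcessPool and its methods are hypothetical names, and a pooled "process" is mocked as an object holding an already-established (here, fake) database connection rather than a real UNIX process.

```python
from collections import deque

class PooledWorker:
    """Stands in for a pooled process that already holds a costly resource
    (e.g., a database connection established when the pool was created)."""
    def __init__(self, worker_id):
        self.worker_id = worker_id
        self.connection = f"db-conn-{worker_id}"   # pretend connection

    def run(self, vertex, work_element):
        return f"{vertex} processed {work_element} via {self.connection}"

class ProcessPool:
    """One pool per type of computation; a member assigned to a graph
    instance is reused for every vertex of that instance needing this type,
    then released back to the pool when the transaction is done."""
    def __init__(self, size):
        self.idle = deque(PooledWorker(i) for i in range(size))
        self.assigned = {}    # graph-instance id -> worker

    def worker_for(self, instance_id):
        if instance_id not in self.assigned:
            self.assigned[instance_id] = self.idle.popleft()
        return self.assigned[instance_id]

    def release(self, instance_id):
        self.idle.append(self.assigned.pop(instance_id))

# All database vertices of instance "g1" are multiplexed onto one worker.
db_pool = ProcessPool(size=2)
for vertex in ("lookup_account", "update_balance", "write_audit_row"):
    worker = db_pool.worker_for("g1")
    print(worker.run(vertex, {"txn": "txn-1"}))
db_pool.release("g1")
```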
System 200 supports different approaches to configuring processes for vertices, which differ in when the vertices are associated with processes and when the computation for the vertices is initiated. In one type of configuration, a process is not associated with a vertex until all the data for all of its input work elements are completely available. If a work element is large, it may take some time for the entire work element to be computed by the upstream vertex and to be available. This type of configuration avoids blocking the process waiting for input to become available, so that it can be used by other vertices in that graph instance.
Another type of configuration uses a streaming mode. A process is associated with a vertex and initiated when at least the start of each input is available. The remainder of each of its inputs becomes available while the process executes. If that input becomes available sufficiently quickly, the process does not block waiting for input. However, if the inputs do not become available, the process may block.
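The two configuration modes can be contrasted with a small readiness check; the function and parameter names below (vertex_ready, streaming) are assumptions for illustration, not names used by the system.

```python
def vertex_ready(input_states, streaming):
    """Decide whether to bind a process to a vertex and start it.

    input_states maps each input to one of:
      "absent"   - nothing has arrived yet
      "partial"  - the start of the work element is available
      "complete" - the whole work element is available

    In non-streaming mode the process is associated with the vertex only once
    every input is complete, so it never blocks waiting for data.  In
    streaming mode it may start as soon as every input has at least begun to
    arrive, at the risk of blocking if the remainder is slow to appear.
    """
    if streaming:
        return all(state in ("partial", "complete") for state in input_states.values())
    return all(state == "complete" for state in input_states.values())

inputs = {"in0": "complete", "in1": "partial"}
print(vertex_ready(inputs, streaming=False))  # False: wait for all data
print(vertex_ready(inputs, streaming=True))   # True: start now, may block later
```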
4. Computation Control
5. Alternatives
As noted above, it is possible to pre-create graph pools of already instantiated instances of computation graphs in anticipation of there being transactions that will require them. When a transaction is received and needs a graph instance, if one is available from a graph pool, it is assigned from the pool rather than having to be created. In this way, the startup cost for a transaction is further reduced. When the computation for the transaction is completed, the graph is reset by restoring variables to their initial values prior to having been assigned to the transaction and freeing any dynamically-assigned memory. After the graph instance is reset it is returned to the pool.
In some examples, the number of graph instances in a graph pool can be allowed to grow as needed. For instance, there might be a minimum number of instances of each graph, and more may be created as needed.
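A sketch of the graph-pool behavior under assumed names (GraphPool, PooledGraph, acquire, release, reset): instances are created ahead of time, checked out for a transaction, reset by restoring initial variable values and freeing per-transaction state, returned to the pool, and additional instances are created if the pool runs dry.

```python
from collections import deque

class PooledGraph:
    """A pre-instantiated computation graph that can be reset and reused."""
    def __init__(self, defaults):
        self.defaults = dict(defaults)
        self.variables = dict(defaults)
        self.scratch = []                 # stands in for dynamically assigned memory

    def reset(self):
        """Restore initial variable values and free per-transaction state."""
        self.variables = dict(self.defaults)
        self.scratch.clear()

class GraphPool:
    """Pool of pre-created graph instances; grows on demand."""
    def __init__(self, defaults, minimum=2):
        self.defaults = defaults
        self.idle = deque(PooledGraph(defaults) for _ in range(minimum))

    def acquire(self):
        # Assign from the pool if an instance is available; otherwise create one.
        return self.idle.popleft() if self.idle else PooledGraph(self.defaults)

    def release(self, graph):
        graph.reset()                     # reset before returning to the pool
        self.idle.append(graph)

pool = GraphPool({"retries": 0}, minimum=1)
g = pool.acquire()
g.variables["retries"] = 3
g.scratch.append("txn-1 partial result")
pool.release(g)
print(g.variables, g.scratch)             # {'retries': 0} []
```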
In the description above, processes may be assigned to vertices in the graph in an on-demand manner where they are not associated with a vertex until after all the inputs to that vertex are available, though they are bound to the particular graph instance and transaction. Another approach is to associate the processes to the vertices when the transaction is associated with the graph instance and to maintain the association until the transaction's entire work flow has been processed.
6. Applications
One application of computation graphs of the type described above is for processing financial transactions in a banking application. In general, different types of transactions require different types of computation graphs. A typical computation graph is associated with some combination of a type of customer transaction and “backend” services that are needed to process the transaction. For example, transactions can be ATM requests, bank teller inputs, and business-to-business transactions between computers or web servers. Different customers might have different backend systems, particularly when banks consolidate and customers are combined from different original banks. Their accounts may be maintained on very different backend systems even though they are all customers of the acquiring bank. Therefore, different vertices in a graph may be used to process different transactions. Different services may be associated with vertices in the graph. For example, some of the vertices may be associated with functions such as updating a balance, depositing money in an account, or performing an account hold so funds are held in an account. In some implementations, on-the-fly assignment of processes to vertices avoids the overhead of having processes for unused vertices remain idle.
An advantage of allocating graph instances on a per-transaction basis is that it allows parallelization of data streams that otherwise would have to be processed serially. Graph instances assigned to different transactions may finish in a different order than they started, for example, if the first transaction was more complicated than the second. This may allow the second graph instance to be released and available to process a third transaction when a serialized system would still be processing the first transaction.
7. Error Handling
An advantage of allocating graph instances on a per-transaction basis is that failures due to errors in executing a graph instance can be contained to that transaction, and do not compromise the concurrent processing of other graph instances. By delaying committing the results of the computation graph until the entire transaction is completed, the data can be "rolled back" in the event of an error to the state it was in before the system began to process the transaction. Errors can be handled in several ways.
In some examples, an “error handling” component is included in a graph. The error handling component is a special case in that it does not have to execute for the graph to complete. In the event that the component at any vertex generates an error, instead of causing the whole computation to abort, execution of the graph is redirected to the error handling component. An explicit relationship between a given component and an error handling component (including a work flow from an output port of a component to an input port of the error handling component) is referred to as an exception flow. The scheduler removes work elements that were part of the failed computation from the graph instance and the error handling component provides an output which the graph can use to provide an error message as output to the process that called it. The error handling component may receive data input other than through an exception flow, depending on the implementation.
For any component in a graph, there is a designated error handling component. This may be a component that directly receives an exception flow output or other error data output from another graph component, or it may be defined as the designated error handling component for a set of components regardless of whether it receives an exception flow. In some examples, exception flow is handled as shown in
If an error occurs, the scheduler halts execution of the erring component, allows any other components that are already executing to finish, and propagates any relevant data (e.g., exception flow output of the completed components, or “error reporting output” of the erring component) to the error handling component. For example, if the call web service component 910 triggers an error, the exception flow from replicate component 906 and error reporting output from a reject port 921 of the call web service component 910 are input to the error handling component 916 at inputs 922, 924, respectively. Error reporting output ports (shown as ports on the bottom of some of the components in the graph 900) can be used to provide information about any errors that have occurred including, for example, information characterizing what error(s) occurred, where the error(s) occurred, and any rejected work elements associated with the error(s).
In this example, there are three error reporting output ports for the replicate component 906. The reject port 921 provides work elements that may have caused the error or are in some way related to the error. The error port 923 provides an error message describing relevant information about the error. The log port 925 can optionally provide information logging that the error occurred. The log port 925 can also provide log information about events during the normal course of execution even if no errors occur. In this example, the reject port 921 is explicitly shown as connected for those components (e.g., the call web service component 910) that may need to use the port. However, the error port 923 and log port 925 are not explicitly shown as connected, but have implicit connections to the error handling component 916. For example, the ports can be connected by a developer and then hidden using an interface control. In some implementations, the system can automatically determine implicit connections to a default error handling component, which may then be overridden by the developer. For large and/or complicated graphs, this "implicit wiring" for one or more types of error reporting ports improves visual comprehension of a graph by a developer, which is one of the benefits of graph-based programming. In some implementations, visual cues can be provided to indicate that a port is implicitly connected to a port of another component (e.g., an icon or a shaded or colored port). Some or all of the hidden implicit work flow connections can also be revealed as explicit links in response to a user request (e.g., clicking a button or hovering over a port).
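To make the port layout concrete, the sketch below models a component with reject, error, and log output ports, where the error and log ports fall back to an implicit connection to a default error handler unless a link is drawn explicitly; all class and attribute names here are assumptions for illustration, not names used by the system.

```python
class ErrorHandler:
    def __init__(self, name):
        self.name = name
        self.received = []

    def accept(self, port, payload):
        self.received.append((port, payload))

class Component:
    """Graph component with three error-reporting output ports."""
    def __init__(self, name, default_handler, explicit_reject_link=None):
        self.name = name
        # The reject port is drawn as an explicit link only where needed;
        # the error and log ports fall back to implicit connections.
        self.reject_link = explicit_reject_link or default_handler
        self.error_link = default_handler      # implicit connection
        self.log_link = default_handler        # implicit connection

    def process(self, work_element):
        try:
            if "amount" not in work_element:
                raise ValueError("missing amount")
            self.log_link.accept("log", f"{self.name}: processed ok")
            return work_element
        except ValueError as exc:
            self.reject_link.accept("reject", work_element)           # offending work element
            self.error_link.accept("error", f"{self.name}: {exc}")    # error message
            self.log_link.accept("log", f"{self.name}: error logged") # log entry
            return None

handler = ErrorHandler("error_handling_component")
call_web_service = Component("call_web_service", default_handler=handler)
call_web_service.process({"account": "A"})   # triggers reject, error, and log output
print(handler.received)
```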
The exception flow output from the replicate component 906 may have already been queued at the input 922, if the replicate had finished operation before the error occurred. The scheduler then enables the error handling component (916 in this example), disables the erring component (910 in this example), and performs enablement propagation from the error handling component (enabling 918, 904, 920 in this example). Any component downstream of the disabled erring component is also disabled as long as that component does not receive a flow from an enabled component downstream of the error handling component (disabling 912 and 914 in this example). Finally, any remaining component that provides a flow to an enabled component is enabled (enabling 906 and 902 in this example).
Thus, the result of this procedure is shown by the indication of “<enabled>” and “<disabled>” components in
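The enablement and disablement propagation just described can be sketched as a function over the graph topology; this is one interpretation of the rules in the text under assumed names (redirect_to_error_handler, with edges given as a component-to-downstream mapping), not the scheduler's actual code, and it ignores the stopping of already-running components.

```python
def redirect_to_error_handler(edges, erring, handler):
    """Recompute which components are enabled and disabled after an error.

    edges maps each component to the components immediately downstream of it
    (exception flows included).  Following the description above: enable the
    error-handling component and everything downstream of it, disable the
    erring component and anything downstream of it that is not fed from the
    error-handling side, then enable any remaining component that provides a
    flow to an enabled component."""

    def downstream_of(start):
        seen, stack = set(), [start]
        while stack:
            for nxt in edges.get(stack.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    enabled = {handler} | downstream_of(handler)     # enablement propagation
    enabled.discard(erring)
    disabled = {erring}
    for node in downstream_of(erring):
        if node not in enabled:                      # not fed from the handler side
            disabled.add(node)
    changed = True
    while changed:                                   # enable upstream feeders
        changed = False
        for src, dests in edges.items():
            if src not in enabled and src not in disabled and any(d in enabled for d in dests):
                enabled.add(src)
                changed = True
    return enabled, disabled

# Toy graph loosely following the shape of the example in the text:
# read -> replicate -> call_service -> reformat -> write, with an exception
# flow from replicate to an error handler that feeds an error log.
edges = {
    "read": ["replicate"],
    "replicate": ["call_service", "error_handler"],
    "call_service": ["reformat"],
    "reformat": ["write"],
    "error_handler": ["error_log"],
}
enabled, disabled = redirect_to_error_handler(edges, erring="call_service", handler="error_handler")
print(sorted(enabled))    # ['error_handler', 'error_log', 'read', 'replicate']
print(sorted(disabled))   # ['call_service', 'reformat', 'write']
```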
As noted above, data may flow to the error handling component as part of an exception flow or as part of an error reporting output of another component. Data that is available before the error occurs, for example, output data from the replicate module 906 in
In some examples, as shown in
If a sub-graph does not have error handling, its errors flow upwards in the hierarchy of sub-graphs of which it is a part until they reach a graph level that does have error handling, at which point that level's error-handling component is activated.
The data escrowed at the input of the error-handling component may be a subset of a work flow, all the data associated with a transaction, or an entire data flow. If the error-handling component has an error output port, it will output the record that caused the error or other error information based on the escrowed data or the input received from the component that had the error. If it does not have such a port, it may simply output the offending record as normal output on its output port.
If a sub-graph does not have error handling, errors in its components flow upwards in the hierarchy of sub-graphs of which it is a part until they reach a graph level that does have error handling, at which point that level's error-handling component receives appropriate input and generates an appropriate error output.
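The upward escalation through nested sub-graphs could be sketched as a walk up the hierarchy; GraphLevel and find_error_handler are hypothetical names used only for this illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GraphLevel:
    """One level in a hierarchy of nested sub-graphs."""
    name: str
    error_handler: Optional[str] = None      # this level's handler component, if any
    parent: Optional["GraphLevel"] = None

def find_error_handler(level):
    """Walk upward from the sub-graph where the error occurred until a level
    with an error-handling component is found."""
    while level is not None:
        if level.error_handler is not None:
            return level.name, level.error_handler
        level = level.parent
    return None, None    # no handler anywhere: fall back to passive handling

top = GraphLevel("top_graph", error_handler="top_error_handler")
middle = GraphLevel("billing_subgraph", parent=top)
inner = GraphLevel("format_subgraph", parent=middle)
print(find_error_handler(inner))   # ('top_graph', 'top_error_handler')
```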
Error handling can allow cyclic graph arrangements that would ordinarily be avoided in graph-based computation processing. For example, as shown in
To ensure that a cyclic graph is well-defined, the set of elements that will be enabled on error is determined in advance based on the topology of the graph, rather than as needed, as described above.
In some examples, other rules are used to ensure that error handling works correctly. For example, in some implementations, error handling can only be triggered on one exception port of one component within a graph (any simultaneous errors may be ignored). If a graph component or sub-graph is linked to an error handling component, it must use that component on any error. If a graph component or sub-graph is not linked to an error handling component, errors must be handled by the generic error handler for the present scope. Each graph component is typically associated with exactly one error handler. These rules may be modified or combined depending on the requirements of the system. They can be useful where tight control of the process for each transaction is needed.
In some examples, when an error occurs, the operating system determines which error-handling component is associated with the component that experienced the error, and then determines which input flow, if any, to that error-handling component should be used. If there are multiple inputs, the one that most recently had data written to it is used.
Error handling may be active, as just described, where components or sub-graphs handle their own errors and produce error codes that can be used by other components to diagnose or work around the error, or it can be passive. In a passive system, a graph that encounters an error simply fails, and allows the operating system to provide error handling, for example by providing a stack dump to a debugging process.
Each component of a graph is implicitly connected to a scheduler, which does not need a specific invitation from a graph to intervene and handle errors. The scheduler can remove data related to an error from a graph instance and, in some examples, does not need to know the nature of the error. In some cases, the scheduler may return resources assigned to a graph to their respective pools in stages, allowing the graph to complete processing work elements that were not affected by the error.
8. Implementation
The invention may be implemented in hardware or software, or a combination of both (e.g., programmable logic arrays). Unless otherwise specified, the algorithms described are not inherently related to any particular computer or other apparatus. In particular, various general purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus (e.g., integrated circuits) to perform particular functions. Thus, the invention may be implemented in one or more computer programs executing on one or more programmed or programmable computer systems (which may be of various architectures such as distributed, client/server, or grid) each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.
Each such program may be implemented in any desired computer language (including machine, assembly, or high level procedural, logical, or object oriented programming languages) to communicate with a computer system. In any case, the language may be a compiled or interpreted language.
Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.
It is to be understood that the foregoing description is intended to illustrate and not to limit the scope of the invention, which is defined by the scope of the appended claims. Other embodiments are within the scope of the following claims.
This application claims priority to U.S. Application Ser. No. 60/952,075, filed on Jul. 26, 2007, incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
3662343 | Goldstein et al. | May 1972 | A |
3662401 | Collins et al. | May 1972 | A |
4922418 | Dolecek | May 1990 | A |
4972314 | Getzinger et al. | Nov 1990 | A |
5127104 | Dennis | Jun 1992 | A |
5276899 | Neches | Jan 1994 | A |
5280619 | Wang | Jan 1994 | A |
5301336 | Kodosky et al. | Apr 1994 | A |
5323452 | Dickman et al. | Jun 1994 | A |
5333319 | Silen | Jul 1994 | A |
5357632 | Pian et al. | Oct 1994 | A |
5495590 | Comfort et al. | Feb 1996 | A |
5630047 | Wang | May 1997 | A |
5692168 | McMahan | Nov 1997 | A |
5701400 | Amado | Dec 1997 | A |
5712971 | Stanfill et al. | Jan 1998 | A |
5745778 | Alfieri | Apr 1998 | A |
5802267 | Shirakihara et al. | Sep 1998 | A |
5805462 | Poirot et al. | Sep 1998 | A |
5857204 | Lordi et al. | Jan 1999 | A |
5923832 | Shirakihara et al. | Jul 1999 | A |
5924095 | White | Jul 1999 | A |
5930794 | Linenbach et al. | Jul 1999 | A |
5933640 | Dion | Aug 1999 | A |
5966072 | Stanfill et al. | Oct 1999 | A |
5999729 | Tabloski, Jr. et al. | Dec 1999 | A |
6006242 | Poole et al. | Dec 1999 | A |
6012094 | Leymann | Jan 2000 | A |
6014670 | Zamanian et al. | Jan 2000 | A |
6016516 | Horikiri | Jan 2000 | A |
6032158 | Mukhopadhhyay et al. | Feb 2000 | A |
6038558 | Powers et al. | Mar 2000 | A |
6044211 | Jain | Mar 2000 | A |
6044374 | Nesamoney et al. | Mar 2000 | A |
6044394 | Cadden et al. | Mar 2000 | A |
6088716 | Stanfill et al. | Jul 2000 | A |
6145017 | Ghaffari | Nov 2000 | A |
6173276 | Kant et al. | Jan 2001 | B1 |
6208345 | Sheard et al. | Mar 2001 | B1 |
6256637 | Venkatesh et al. | Jul 2001 | B1 |
6259988 | Galkowski et al. | Jul 2001 | B1 |
6272650 | Meyer et al. | Aug 2001 | B1 |
6301601 | Helland | Oct 2001 | B1 |
6314114 | Coyle et al. | Nov 2001 | B1 |
6324437 | Frankel et al. | Nov 2001 | B1 |
6330008 | Razdow et al. | Dec 2001 | B1 |
6339775 | Zamanian et al. | Jan 2002 | B1 |
6400996 | Hoffberg et al. | Jun 2002 | B1 |
6401216 | Meth et al. | Jun 2002 | B1 |
6437796 | Sowizral et al. | Aug 2002 | B2 |
6449711 | Week | Sep 2002 | B1 |
6480876 | Rehg et al. | Nov 2002 | B2 |
6496961 | Gupta et al. | Dec 2002 | B2 |
6538651 | Hayman et al. | Mar 2003 | B1 |
6584581 | Bay et al. | Jun 2003 | B1 |
6608628 | Ross et al. | Aug 2003 | B1 |
6611862 | Reisman | Aug 2003 | B2 |
6651234 | Gupta et al. | Nov 2003 | B2 |
6654907 | Stanfill et al. | Nov 2003 | B2 |
6658464 | Reisman | Dec 2003 | B2 |
6715145 | Bowman-Amuah | Mar 2004 | B1 |
6728879 | Atkinson | Apr 2004 | B1 |
6813761 | Das et al. | Nov 2004 | B1 |
6816825 | Ashar et al. | Nov 2004 | B1 |
6832369 | Kryka et al. | Dec 2004 | B1 |
6848100 | Wu et al. | Jan 2005 | B1 |
6879946 | Rong et al. | Apr 2005 | B2 |
7062483 | Ferrari et al. | Jun 2006 | B2 |
7082604 | Schneiderman | Jul 2006 | B2 |
7085426 | August | Aug 2006 | B2 |
7103597 | McGovern | Sep 2006 | B2 |
7103620 | Kunz et al. | Sep 2006 | B2 |
7130484 | August | Oct 2006 | B2 |
7137116 | Parkes et al. | Nov 2006 | B2 |
7164422 | Wholey et al. | Jan 2007 | B1 |
7165030 | Yi et al. | Jan 2007 | B2 |
7167850 | Stanfill | Jan 2007 | B2 |
7316001 | Gold et al. | Jan 2008 | B2 |
7356819 | Ricart et al. | Apr 2008 | B1 |
7398514 | Russell | Jul 2008 | B2 |
7417645 | Beda et al. | Aug 2008 | B2 |
7457984 | Kutan | Nov 2008 | B2 |
7467383 | Inchingolo et al. | Dec 2008 | B2 |
7505975 | Luo | Mar 2009 | B2 |
7577628 | Stanfill | Aug 2009 | B2 |
7636699 | Stanfill | Dec 2009 | B2 |
7716630 | Wholey et al. | May 2010 | B2 |
7756940 | Sagawa | Jul 2010 | B2 |
7840949 | Schumacher et al. | Nov 2010 | B2 |
7870556 | Wholey et al. | Jan 2011 | B2 |
7877350 | Stanfill et al. | Jan 2011 | B2 |
7979479 | Staebler et al. | Jul 2011 | B2 |
8566641 | Douros et al. | Oct 2013 | B2 |
20010055019 | Sowizral et al. | Dec 2001 | A1 |
20020080181 | Razdow et al. | Jun 2002 | A1 |
20020087921 | Rodriguez | Jul 2002 | A1 |
20020091747 | Rehg et al. | Jul 2002 | A1 |
20020091748 | Rehg et al. | Jul 2002 | A1 |
20020111876 | Rudraraju et al. | Aug 2002 | A1 |
20020129340 | Tuttle | Sep 2002 | A1 |
20020147745 | Houben et al. | Oct 2002 | A1 |
20020184616 | Chessell et al. | Dec 2002 | A1 |
20030004771 | Yaung | Jan 2003 | A1 |
20030023413 | Srinivasa | Jan 2003 | A1 |
20030033432 | Simpson et al. | Feb 2003 | A1 |
20030091055 | Craddock et al. | May 2003 | A1 |
20030126240 | Vosseler | Jul 2003 | A1 |
20030204804 | Petri et al. | Oct 2003 | A1 |
20040006745 | Van Heldan et al. | Jan 2004 | A1 |
20040041838 | Adusumilli et al. | Mar 2004 | A1 |
20040073529 | Stanfill | Apr 2004 | A1 |
20040093559 | Amaru et al. | May 2004 | A1 |
20040098452 | Brown et al. | May 2004 | A1 |
20040107414 | Bronicki et al. | Jun 2004 | A1 |
20040111469 | Manion et al. | Jun 2004 | A1 |
20040148373 | Childress et al. | Jul 2004 | A1 |
20040177099 | Ganesh et al. | Sep 2004 | A1 |
20040205726 | Chedgey et al. | Oct 2004 | A1 |
20040207665 | Mathur | Oct 2004 | A1 |
20040210831 | Feng et al. | Oct 2004 | A1 |
20040225657 | Sarkar | Nov 2004 | A1 |
20040260590 | Golani et al. | Dec 2004 | A1 |
20050021689 | Marvin et al. | Jan 2005 | A1 |
20050034112 | Stanfill | Feb 2005 | A1 |
20050039176 | Fournie | Feb 2005 | A1 |
20050059046 | Labrenz et al. | Mar 2005 | A1 |
20050086360 | Mamou et al. | Apr 2005 | A1 |
20050097561 | Schumacher et al. | May 2005 | A1 |
20050102670 | Bretl et al. | May 2005 | A1 |
20050144277 | Flurry et al. | Jun 2005 | A1 |
20050144596 | McCullough et al. | Jun 2005 | A1 |
20050149935 | Benedetti | Jul 2005 | A1 |
20050177531 | Bracewell | Aug 2005 | A1 |
20050193056 | Schaefer et al. | Sep 2005 | A1 |
20050216421 | Barry et al. | Sep 2005 | A1 |
20050240621 | Robertson et al. | Oct 2005 | A1 |
20050262470 | Gavrilov | Nov 2005 | A1 |
20050289527 | Illowsky et al. | Dec 2005 | A1 |
20060085462 | Todd | Apr 2006 | A1 |
20060095722 | Biles et al. | May 2006 | A1 |
20060098017 | Tarditi et al. | May 2006 | A1 |
20060206872 | Krishnaswamy | Sep 2006 | A1 |
20060282474 | MacKinnon | Dec 2006 | A1 |
20060294150 | Stanfill et al. | Dec 2006 | A1 |
20060294459 | Davis et al. | Dec 2006 | A1 |
20070011668 | Wholey et al. | Jan 2007 | A1 |
20070022077 | Stanfill | Jan 2007 | A1 |
20070035543 | David et al. | Feb 2007 | A1 |
20070094211 | Sun et al. | Apr 2007 | A1 |
20070118839 | Berstis et al. | May 2007 | A1 |
20070139441 | Lucas et al. | Jun 2007 | A1 |
20070143360 | Harris et al. | Jun 2007 | A1 |
20070150429 | Huelsman et al. | Jun 2007 | A1 |
20070174185 | McGovern | Jul 2007 | A1 |
20070179923 | Stanfill | Aug 2007 | A1 |
20070239766 | Papaefstathiou et al. | Oct 2007 | A1 |
20070271381 | Wholey et al. | Nov 2007 | A1 |
20070285440 | MacInnis et al. | Dec 2007 | A1 |
20080049022 | Sherb et al. | Feb 2008 | A1 |
20080126755 | Wu et al. | May 2008 | A1 |
20080288608 | Johnson | Nov 2008 | A1 |
20080294615 | Furuya et al. | Nov 2008 | A1 |
20090030863 | Stanfill et al. | Jan 2009 | A1 |
20090064147 | Beckerle et al. | Mar 2009 | A1 |
20090083313 | Stanfill et al. | Mar 2009 | A1 |
20090182728 | Anderson | Jul 2009 | A1 |
20090193417 | Kahlon | Jul 2009 | A1 |
20090224941 | Kansal et al. | Sep 2009 | A1 |
20090327196 | Studer et al. | Dec 2009 | A1 |
20100070955 | Kahlon | Mar 2010 | A1 |
20100169137 | Jastrebski et al. | Jul 2010 | A1 |
20100174694 | Staebler et al. | Jul 2010 | A1 |
20100180344 | Malyshev et al. | Jul 2010 | A1 |
20100211953 | Wakeling et al. | Aug 2010 | A1 |
20100218031 | Agarwal et al. | Aug 2010 | A1 |
20100281488 | Krishnamurthy et al. | Nov 2010 | A1 |
20110078500 | Douros et al. | Mar 2011 | A1 |
20110093433 | Stanfill et al. | Apr 2011 | A1 |
20120054255 | Buxbaum et al. | Mar 2012 | A1 |
Number | Date | Country |
---|---|---|
64-013189 | Jan 1989 | JP |
06-236276 | Aug 1994 | JP |
08-278892 | Oct 1996 | JP |
08-305576 | Nov 1996 | JP |
63-231613 | Sep 1998 | JP |
11-184766 | Jul 1999 | JP |
2000-99317 | Apr 2000 | JP |
2002-229943 | Aug 2002 | JP |
2005-317010 | Nov 2005 | JP |
2006-504160 | Feb 2006 | JP |
WO 9800791 | Jan 1998 | WO |
WO 0211344 | Feb 2002 | WO |
WO2005001687 | Jan 2005 | WO |
WO 2005086906 | Sep 2005 | WO |
WO 2008124319 | Oct 2008 | WO |
WO 2009039352 | Mar 2009 | WO |
Entry |
---|
Vajracharya et al, “Asynchronous Resource Management”, Proceedings 15th International Parallel and Distributed Processing Symposium, Issue Date: Apr. 2001, Date of Current Version: Aug. 7, 2002. |
Krsul et al, “VMPlants: Providing and Managing Virtual Machine Execution Environments for Grid Computing”, Proceedings of the ACM/IEEE SC2004 Conference on Supercomputing, 2004, Issue Date: Nov. 6-12, 2004, Date of Current Version: Mar. 21, 2005. |
Babaoglu, O. et al., “Mapping parallel computations onto distributed systems in Paralex,” CompEuro '91: Advanced Computer Technology, Reliable Systems and Applications. 5th Annual European Computer Conference. Proceedings. Bologna, Italy, May 13-16, 1991, Los Alamitos, CA, USA, IEEE Comput. Soc, US, May 13, 1991, pp. 123-130. |
Baer, J.L. et al., “Legality and Other Properties of Graph Models of Computations.” Journal of the Association for Computing Machinery, vol. 17, No. 3, Jul. 1970, pp. 543-554. |
Bookstein, A. et al., “Modeling Word Occurrences for the Compression of Concordances.” ACM Transactions on Information Systems, vol. 15, No. 3, Jul. 1997, pp. 254-290. |
Cytron, Ron et al., “Efficiently Computing Static Single Assignment Form and the Control Dependence Graph.” ACM Transactions on Programming Languages and Systems, vol. 13, No. 4, Oct. 1991, pp. 451-490. |
Ebert, Jurgen et al., “A Declarative Approach to Graph-Based Modeling.” Workshop on Graph-Theoretic Concepts in Computer Science, 1994, pp. 1-19. |
Gamma et al. “Design Patterns: Elements of Reusable Object-Oriented Software”, Sep. 1999. |
International Search Report & Written Opinion issued in PCT application No. PCT/US08/71206, mailed Oct. 22, 2008, 12 pages. |
Jawadi, Ramamohanrao et al., “A Graph-based Transaction Model for Active Databases and its Parallel Implementation.” U. Florida Tech. Rep TR94-0003, 1994, pp. 1-29. |
Kebschull, U. et al., “Efficient Graph-Based Computation and Manipulation of Functional Decision Diagrams.” University of Tubingen, 1993 IEEE, pp. 278-282. |
Li, Xiqing et al., “A Practical External Sort for Shared Disk MPPs.” Proceedings of Supercomputing '93, 1993, 24 pages. |
Martin, David et al., “Models of Computations and Systems—Evaluation of Vertex Probabilities in Graph Models of Computations.” Journal of the Association for Computing Machinery, vol. 14, No. 2, Apr. 1967, pp. 281-299. |
Ou, Chao-Wei et al., “Architecture-Independent Locality-Improving Transformations of Computational Graphs Embedded in κ-Dimensions.” Proceedings of the 9th International Conference on Supercomputing, 1995, pp. 289-298. |
“RASSP Data Flow Graph Design Application Note.” International Conference on Parallel Processing, Dec. 2000, Retrieved from Internet <http://www.atl.external.lmco.com/projects/rassp/RASSP_legacy/appnotes/FLOW/APNOTE_FLOW_02>, 5 pages. |
Stanfill, Craig, “Massively Parallel Information Retrieval for Wide Area Information Servers.” 1991 IEEE International Conference on Systems, Man and Cybernetics, Oct. 1991, pp. 679-682. |
Stanfill, Craig et al., “Parallel Free-Text Search on the Connection Machine System.” Communications of the ACM, vol. 29, No. 12, Dec. 1986, pp. 1229-1239. |
Stanfill, Craig, “The Marriage of Parallel Computing and Information Retrieval.” IEE Colloquium on Parallel Techniques for Information Retrieval, Apr. 1989, 5 pages. |
Wah, B.W. et al., “Report on Workshop on High Performance Computing and Communications for Grand Challenge Applications: Computer Vision, Speech and Natural Language Processing, and Artificial Intelligence.” IEEE Transactions on Knowledge and Data Engineering, vol. 5, No. 1, Feb. 1993, 138-154. |
Burch, J.R. et al., “Sequential circuit verification using symbolic model checking.” In Design Automation Conference, 1990, Proceedings of the 27th ACM/IEEE. Jun. 24-28, 1990, pp. 46-51. |
Guyer et al., “Finding Your Cronies: Static Analysis for Dynamic Object Colocation.” Oct. 2004, ACM, pp. 237-250. |
Grove et al., “A Framework for Call Graph Construction Algorithms.” Nov. 2001, ACM TOPLAS, vol. 23, Issue 6, pp. 685-746. |
Herniter, Marc E., “Schematic Capture with MicroSim PSpice,” 2nd Edition, Prentice Hall, Upper Saddle River, N.J., 1996, pp. 51-52, 255-280, 292-297. |
International Search Report & Written Opinion issued in PCT application No. PCT/US01/23552, mailed Jan. 24, 2002, 5 pages. |
International Search Report & Written Opinion issued in PCT application No. PCT/US06/24957, dated Jan. 17, 2008, 14 pages. |
International Search Report & Written Opinion issued in PCT application No. PCT/US07/75576, mailed Sep. 16, 2008, 13 pages. |
International Search Report & Written Opinion received in PCT application No. PCT/US10/24036, mailed Mar. 23, 2010, 11 pages. |
Just et al., “Review and Analysis of Synthetic Diversity for Breaking Monocultures.” Oct. 2004, ACM, pp. 23-32. |
Krahmer et al., “Graph-Based Generation of Referring Expressions.” Mar. 2003, MIT Press, vol. 29, No. 1, pp. 53-72. |
Supplemental European Search Report issued in application No. EP07813940, dated Nov. 26, 2009, 7 pages. |
European Search Report issued in application No. EP10003554, dated Sep. 24, 2010, 7 pages. |
Supplemental European Search Report issued in application No. EP08796632, dated Sep. 24, 2010, 6 pages. |
International Search Report & Written Opinion issued in PCT application No. PCT/US10/49966, dated Nov. 23, 2010, 8 pages. |
International Search Report & Written Opinion received in PCT application No. PCT/US2011/040440, mailed Oct. 12, 2011, 13 pages. |
Control-M; New Dimension Software. User Manual. New Dimension Software Ltd., 1999. |
Romberg, M., “UNICORE: Beyond Web-based Job-Submission,” Proceedings of the 42nd Cray User Group Conference, Noordwijk (May 22-26, 2000). |
“Unicenter AutoSys Job Management,” Computer Associates, Copyright 2001. |
European Search Report issued in application No. EP10741775, dated Nov. 14, 2012, 4 pages. |
Russell, Nick, et al., “Workflow Control-Flow Patterns a Revised View,” Workflow Patterns Initiative, 2006, pp. 1-134. |
van der Aalst, W.M.P., et al., “Workflow Patterns,” Distributed and Parallel Databases, 14, 5-51, 2003. |
Japanese Office Action, with English Translation, JP application No. 2011-000948, mailed Jan. 8, 2013, 11 pages. |
Japanese Office Action, with English Translation, JP application No. 2008-519474, mailed Sep. 25, 2012, 8 pages. |
Shoten, Iwanami, “Encyclopedic Dictionary of Computer Science,” (with English Translation), May 25, 1990, p. 741. |
Japanese Office Action, with English Translation, JP application No. 2009-523997, mailed Oct. 23, 2012, 7 pages. |
Supplemental European Search Report issued in application No. EP06774092, dated Dec. 19, 2012, 5 pages. |
“Topological sorting,” Wikipedia, accessed Dec. 10, 2012, 2 pages. |
Japanese Office Action, with English Translation, JP application No. 2010-518415, mailed Feb. 21, 2013, 11 pages. |
“Visual Lint: Squash Bugs Early with Interactive C/C++, C# and Java Code Analysis for Microsoft Visual Studio and Eclipse,” [retrieved from the internet Dec. 3, 2012: www.riverblade.co.uk/products/visual_lint.] (2 pages). |
Transaction History, U.S. Appl. No. 09/627,252, Jul. 8, 2013, 2 pages. |
Transaction History, U.S. Appl. No. 10/268,509, Jul. 8, 2013, 2 pages. |
Transaction History, U.S. Appl. No. 11/467,724, Jul. 8, 2013, 2 pages. |
Transaction History, U.S. Appl. No. 11/733,579, Jul. 8, 2013, 2 pages. |
Transaction History, U.S. Appl. No. 11/169,014, Jul. 8, 2013, 2 pages. |
Transaction History, U.S. Appl. No. 11/167,902, Jul. 8, 2013, 3 pages. |
Transaction History, U.S. Appl. No. 12/977,545, Jul. 8, 2013, 6 pages. |
Transaction History, U.S. Appl. No. 11/836,349, Jul. 8, 2013, 4 pages. |
Transaction History, U.S. Appl. No. 12/704,998, Jul. 8, 2013, 2 pages. |
Transaction History, U.S. Appl. No. 13/161,010, Jul. 8, 2013, 2 pages. |
Transaction History, U.S. Appl. No. 12/638,588, Jul. 8, 2013, 3 pages. |
Transaction History, U.S. Appl. No. 13/678,921, Jul. 8, 2013, 1 page. |
Transaction History, U.S. Appl. No. 13/678,928, Jul. 8, 2013, 1 page. |
Transaction History, U.S. Appl. No. 13/936,330, Aug. 7, 2013, 1 page. |
Dillon, Laura K., et al., “Inference Graphs: A Computational Structure Supporting Generation of Customizable and Correct Analysis Components,” IEEE Transactions on Software Engineering, vol. 29, No. 2, Feb. 2003, pp. 133-150. |
Evripidou, Paraskevas, et al., “Incorporating input/output operations into dynamic data-flow graphs,” Parallel Computing 21 (1995) 1285-1311. |
Extended European Search Report, EP 12165575, mailed May 10, 2013, 9 pages. |
Frankl, Phyllis G., et al., “An Applicable Family of Data Flow Testing Criteria,” IEEE Transactions on Software Engineering, vol. 14, No. 10, Oct. 1988, pp. 1483-1498. |
Whiting, Paul G., et al., “A History of Data-Flow Languages,” IEEE Annals of the History of Computing, vol. 16, No. 4, 1994, pp. 38-59. |
Japanese Office Action for Japanese Application No. 2010-518415 dated Nov. 18, 2013 (11 pages). |