Computer data system data source having an update propagation graph with feedback cyclicality

Information

  • Patent Grant
  • Patent Number
    10,002,154
  • Date Filed
    Tuesday, November 14, 2017
  • Date Issued
    Tuesday, June 19, 2018
Abstract
Described are methods, systems and computer readable media for data source refreshing using an update propagation graph with feedback cyclicality.
Description

Embodiments relate generally to computer data systems, and more particularly, to methods, systems and computer readable media for data source refreshing using an update propagation graph with feedback cyclicality.


Data sources or objects within a computer data system may include static sources and dynamic sources. Some data sources or objects (e.g., tables) may depend on other data sources. As new data is received or obtained for dynamic data sources, those dynamic data sources may be refreshed (or updated). Data sources or objects that are dependent on one or more dynamic sources that have been refreshed may also need to be refreshed. The refreshing of data sources may need to be performed in an order based on dependencies. The dependencies may be defined by an update propagation graph. A need may exist to provide a feedback mechanism within the update propagation graph for purposes such as backtesting of a computer data system or of a data model or technique associated with the computer data system.


Some implementations were conceived in light of the above mentioned needs, problems and/or limitations, among other things.


Some implementations can include a system for updating a data object using an update propagation graph having a cyclicality feedback provider. The system can include one or more hardware processors coupled to a nontransitory computer readable medium having stored thereon software instructions that, when executed by the one or more processors, cause the one or more processors to perform operations. The operations can include constructing a cyclicality feedback provider including a cyclicality feedback provider object including one or more feedback data fields, and obtaining a reference to the cyclicality feedback provider object. The operations can also include constructing a computer data system update propagation graph having one or more update propagation graph data fields that correspond to the one or more feedback data fields. The operations can further include adding a feedback provider listener to the computer data system update propagation graph, wherein the feedback provider listener provides feedback updates to the one or more feedback data fields of the cyclicality feedback provider object when changes to the one or more update propagation graph data fields corresponding to the one or more feedback data fields are detected, and wherein the feedback updates are provided to the one or more feedback data fields of the cyclicality feedback provider object based on a state of a logical clock and on completion of update processing for a given logical clock cycle.
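By way of illustration only, the following Java sketch outlines the shape of these operations. The types and method names (CyclicalityFeedbackProvider, UpdatePropagationGraph, feedbackObject, enqueueFeedback, addFeedbackListener) are hypothetical placeholders and are not drawn from any particular implementation described herein.

import java.util.Map;
import java.util.function.Consumer;

// Hypothetical shapes for the operations described above; all names are
// illustrative assumptions rather than an actual API.
interface CyclicalityFeedbackProvider {
    Object feedbackObject();                            // e.g., a table holding the feedback data fields
    void enqueueFeedback(Map<String, Object> changes);  // applied after the current clock cycle completes
}

interface UpdatePropagationGraph {
    void addFeedbackListener(Consumer<Map<String, Object>> listener); // fires when corresponding fields change
}

final class FeedbackWiringSketch {
    static Object wire(CyclicalityFeedbackProvider provider, UpdatePropagationGraph graph) {
        // Route detected changes in the graph's corresponding data fields back into
        // the provider; the provider defers applying them until the end of the cycle.
        graph.addFeedbackListener(provider::enqueueFeedback);
        // Return the reference to the feedback provider object so that dependent
        // data sources in the graph can be constructed from it.
        return provider.feedbackObject();
    }
}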


The cyclicality feedback provider object can include a computer data system table object. The update propagation graph can include a hybrid directed acyclic graph having a clock-state controlled cyclicality feedback provided by the cyclicality feedback provider and a state of a logical clock.


The operations can also include determining that a logical clock has transitioned to an update state, and processing events and updates to data sources of the update propagation graph for a current logical clock cycle, wherein processing the events and updates is performed on the hybrid directed acyclic graph as if the cyclicality feedback is not present. The operations can further include, after the processing of events and updates has completed, providing events and updates from the cyclicality feedback provider object to one or more data objects within the update propagation graph, wherein the events and updates from the cyclicality feedback provider object will be processed through the update propagation graph in a next logical clock cycle.


Processing events and updates to the data sources can include invoking a data source refresh method for a data source for which changes are being processed, and determining whether a priority queue for the data source is empty. The operations can also include, when the priority queue is not empty, retrieving a next change notification message from the priority queue, delivering the change notification to a corresponding data source, and repeating determining whether the priority queue is empty. The operations can further include, when the priority queue is empty, setting the logical clock to an idle state.


The operations can also include performing a backtesting operation by providing predetermined input data to the update propagation graph as one or more events and updates to one or more data sources, and receiving output results from the update propagation graph for each logical clock cycle. The operations can further include comparing the output results received from the update propagation graph with one or more reference values, and generating an output signal based on the comparing.


Some implementations can include a computer-implemented method for updating a data object using an update propagation graph having a cyclicality feedback provider. The method can include constructing a cyclicality feedback provider including a cyclicality feedback provider object including one or more feedback data fields, and obtaining a reference to the cyclicality feedback provider object. The method can also include constructing a computer data system update propagation graph having one or more update propagation graph data fields that correspond to the one or more feedback data fields, and adding a feedback provider listener to the computer data system update propagation graph, wherein the feedback provider listener provides feedback updates to the one or more feedback data fields of the cyclicality feedback provider object when changes to the one or more update propagation graph data fields corresponding to the one or more feedback data fields are detected, and wherein the feedback updates are provided to the one or more feedback data fields of the cyclicality feedback provider object based on a state of a logical clock and on completion of update processing for a given logical clock cycle.


The cyclicality feedback provider object can include a computer data system table object. The update propagation graph can include a hybrid directed acyclic graph having a clock-state controlled cyclicality feedback provided by the cyclicality feedback provider and a state of a logical clock.


The method can also include determining that a logical clock has transitioned to an update state, and processing events and updates to data sources of the update propagation graph for a current logical clock cycle, wherein processing the events and updates is performed on the hybrid directed acyclic graph as if the cyclicality feedback is not present. The method can further include, after the processing of events and updates has completed, providing events and updates from the cyclicality feedback provider object to one or more data objects within the update propagation graph, wherein the events and updates from the cyclicality feedback provider object will be processed through the update propagation graph in a next logical clock cycle.


Processing events and updates to the data sources can include invoking a data source refresh method for a data source for which changes are being processed, and determining whether a priority queue for the data source is empty. The method can also include, when the priority queue is not empty, retrieving a next change notification message from the priority queue, delivering the change notification to a corresponding data source, and repeating determining whether the priority queue is empty. The method can further include, when the priority queue is empty, setting the logical clock to an idle state.


The method can also include performing a backtesting operation by providing predetermined input data to the update propagation graph as one or more events and updates to one or more data sources, and receiving output results from the update propagation graph for each logical clock cycle. The method can further include comparing the output results received from the update propagation graph with one or more reference values, and generating an output signal based on the comparing.


Some implementations can include a nontransitory computer readable medium having stored thereon software instructions that, when executed by one or more processors, cause the one or more processors to perform operations. The operations can include constructing a cyclicality feedback provider including a cyclicality feedback provider object including one or more feedback data fields, and obtaining a reference to the cyclicality feedback provider object. The operations can also include constructing a computer data system update propagation graph having one or more update propagation graph data fields that correspond to the one or more feedback data fields, and adding a feedback provider listener to the computer data system update propagation graph, wherein the feedback provider listener provides feedback updates to the one or more feedback data fields of the cyclicality feedback provider object when changes to the one or more update propagation graph data fields corresponding to the one or more feedback data fields are detected, and wherein the feedback updates are provided to the one or more feedback data fields of the cyclicality feedback provider object based on a state of a logical clock and on completion of update processing for a given logical clock cycle.


The cyclicality feedback provider object can include a computer data system table object. The update propagation graph can include a hybrid directed acyclic graph having a clock-state controlled cyclicality feedback provided by the cyclicality feedback provider and a state of a logical clock. The operations can also include determining that a logical clock has transitioned to an update state, and processing events and updates to data sources of the update propagation graph for a current logical clock cycle, wherein processing the events and updates is performed on the hybrid directed acyclic graph as if the cyclicality feedback is not present. The operations can further include, after the processing of events and updates has completed, providing events and updates from the cyclicality feedback provider object to one or more data objects within the update propagation graph, wherein the events and updates from the cyclicality feedback provider object will be processed through the update propagation graph in a next logical clock cycle.


Processing events and updates to the data sources can include invoking a data source refresh method for a data source for which changes are being processed, and determining whether a priority queue for the data source is empty. The operations can also include, when the priority queue is not empty, retrieving a next change notification message from the priority queue, delivering the change notification to a corresponding data source, and repeating determining whether the priority queue is empty. The operations can further include, when the priority queue is empty, setting the logical clock to an idle state.


The operations can also include performing a backtesting operation by providing predetermined input data to the update propagation graph as one or more events and updates to one or more data sources, and receiving output results from the update propagation graph for each logical clock cycle. The operations can further include comparing the output results received from the update propagation graph with one or more reference values, and generating an output signal based on the comparing.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram of an example computer data system showing an example data distribution configuration in accordance with some implementations.



FIG. 2 is a diagram of an example computer data system showing an example administration/process control arrangement in accordance with some implementations.



FIG. 3 is a diagram of an example computing device configured for data source refreshing using an update propagation graph with feedback cyclicality in accordance with some implementations.



FIGS. 4A and 4B show example data source definitions and a corresponding directed acyclic graph (DAG) in accordance with some implementations.



FIGS. 5A and 5B show example data source definitions and a corresponding hybrid DAG having a cyclicality feedback provider in accordance with some implementations.



FIG. 6 is a flowchart of an example cyclicality feedback provider process in accordance with some implementations.



FIG. 7 is a flowchart of an example data source refresh process using a hybrid DAG having a cyclicality feedback provider in accordance with some implementations.



FIG. 8 is a diagram of an example computer data system and backtesting application using a hybrid DAG having a cyclicality feedback provider in accordance with some implementations.



FIG. 9 is a diagram of an example machine learning system using a hybrid DAG having a cyclicality feedback provider in accordance with some implementations.





DETAILED DESCRIPTION

Reference may be made herein to the Java programming language, Java classes, Java bytecode and the Java Virtual Machine (JVM) for purposes of illustrating example implementations. It will be appreciated that implementations can include other programming languages (e.g., groovy, Scala, R, Go, etc.), other programming language structures as an alternative to or in addition to Java classes (e.g., other language classes, objects, data structures, program units, code portions, script portions, etc.), other types of bytecode, object code and/or executable code, and/or other virtual machines or hardware implemented machines configured to execute a data system query.



FIG. 1 is a diagram of an example computer data system and network 100 showing an example data distribution configuration in accordance with some implementations. In particular, the system 100 includes an application host 102, a periodic data import host 104, a query server host 106, a long-term file server 108, and a user data import host 110. While tables are used as an example data object in the description below, it will be appreciated that the data system described herein can also process other data objects such as mathematical objects (e.g., a singular value decomposition of values in a given range of one or more rows and columns of a table), TableMap objects, etc. A TableMap object provides the ability to look up a Table by some key. This key represents a unique value (or unique tuple of values) from the columns aggregated on in a byExternal( ) statement execution, for example. A TableMap object can be the result of a byExternal( ) statement executed as part of a query. It will also be appreciated that the configurations shown in FIGS. 1 and 2 are for illustration purposes and in a given implementation each data pool (or data store) may be directly attached or may be managed by a file server.
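By way of illustration only, a minimal Java sketch of the TableMap concept follows; the interface and method names are hypothetical placeholders and do not reflect an actual API.

// Hypothetical sketch of the TableMap concept described above (e.g., the result
// of a byExternal( )-style grouping); the interface and method names are
// illustrative assumptions.
interface Table { }

interface TableMap {
    Table get(Object key);        // look up the sub-table for a key (or tuple of key values)
    Iterable<Object> keys();      // the distinct values of the columns aggregated on
}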


The application host 102 can include one or more application processes 112, one or more log files 114 (e.g., sequential, row-oriented log files), one or more data log tailers 116 and a multicast key-value publisher 118. The periodic data import host 104 can include a local table data server 124, direct or remote connection to a periodic table data store 122 (e.g., a column-oriented table data store) and a data import server 120. The query server host 106 can include a multicast key-value subscriber 126, a performance table logger 128, local table data store 130 and one or more remote query processors (132, 134) each accessing one or more respective tables (136, 138). The long-term file server 108 can include a long-term data store 140. The user data import host 110 can include a remote user table server 142 and a user table data store 144. Row-oriented log files and column-oriented table data stores are discussed herein for illustration purposes and are not intended to be limiting. It will be appreciated that log files and/or data stores may be configured in other ways. In general, any data stores discussed herein could be configured in a manner suitable for a contemplated implementation.


In operation, the input data application process 112 can be configured to receive input data from a source (e.g., a securities trading data source), apply schema-specified, generated code to format the logged data as it is being prepared for output to the log file 114 and store the received data in the sequential, row-oriented log file 114 via an optional data logging process. In some implementations, the data logging process can include a daemon, or background process task, that is configured to log raw input data received from the application process 112 to the sequential, row-oriented log files on disk and/or a shared memory queue (e.g., for sending data to the multicast publisher 118). Logging raw input data to log files can additionally serve to provide a backup copy of data that can be used in the event that downstream processing of the input data is halted or interrupted or otherwise becomes unreliable.


A data log tailer 116 can be configured to access the sequential, row-oriented log file(s) 114 to retrieve input data logged by the data logging process. In some implementations, the data log tailer 116 can be configured to perform strict byte reading and transmission (e.g., to the data import server 120). The data import server 120 can be configured to store the input data into one or more corresponding data stores such as the periodic table data store 122 in a column-oriented configuration. The periodic table data store 122 can be used to store data that is being received within a time period (e.g., a minute, an hour, a day, etc.) and which may be later processed and stored in a data store of the long-term file server 108. For example, the periodic table data store 122 can include a plurality of data servers configured to store periodic securities trading data according to one or more characteristics of the data (e.g., a data value such as security symbol, the data source such as a given trading exchange, etc.).


The data import server 120 can be configured to receive and store data into the periodic table data store 122 in such a way as to provide a consistent data presentation to other parts of the system. Providing/ensuring consistent data in this context can include, for example, recording logged data to a disk or memory, ensuring rows presented externally are available for consistent reading (e.g., to help ensure that if the system has part of a record, the system has all of the record without any errors), and preserving the order of records from a given data source. If data is presented to clients, such as a remote query processor (132, 134), then the data may be persisted in some fashion (e.g., written to disk).


The local table data server 124 can be configured to retrieve data stored in the periodic table data store 122 and provide the retrieved data to one or more remote query processors (132, 134) via an optional proxy.


The remote user table server (RUTS) 142 can include a centralized consistent data writer, as well as a data server that provides processors with consistent access to the data that it is responsible for managing. For example, users can provide input to the system by writing table data that is then consumed by query processors.


The remote query processors (132, 134) can use data from the data import server 120, local table data server 124 and/or from the long-term file server 108 to perform queries. The remote query processors (132, 134) can also receive data from the multicast key-value subscriber 126, which receives data from the multicast key-value publisher 118 in the application host 102. The performance table logger 128 can log performance information about each remote query processor and its respective queries into a local table data store 130. Further, the remote query processors can also read data from the RUTS, from local table data written by the performance logger, or from user table data read over NFS, for example.


It will be appreciated that the configuration shown in FIG. 1 is a typical example configuration that may be somewhat idealized for illustration purposes. An actual configuration may include one or more of each server and/or host type. The hosts/servers shown in FIG. 1 (e.g., 102-110, 120, 124 and 142) may each be separate, or two or more servers may be combined into one or more combined server systems. Data stores can be local or remote, shared or isolated, and/or redundant. Any table data may flow through optional proxies indicated by an asterisk on certain connections to the remote query processors. Also, it will be appreciated that the term “periodic” is being used for illustration purposes and can include, but is not limited to, data that has been received within a given time period (e.g., millisecond, second, minute, hour, day, week, month, year, etc.) and which has not yet been stored to a long-term data store (e.g., 140).



FIG. 2 is a diagram of an example computer data system 200 showing an example administration/process control arrangement in accordance with some implementations. The system 200 includes a production client host 202, a controller host 204, a GUI host or workstation 206, and query server hosts 208 and 210. It will be appreciated that there may be one or more of each of 202-210 in a given implementation.


The production client host 202 can include a batch query application 212 (e.g., a query that is executed from a command line interface or the like) and a real time query data consumer process 214 (e.g., an application that connects to and listens to tables created from the execution of a separate query). The batch query application 212 and the real time query data consumer 214 can connect to a remote query dispatcher 222 and one or more remote query processors (224, 226) within the query server host 208.


The controller host 204 can include a persistent query controller 216 configured to connect to a remote query dispatcher 232 and one or more remote query processors 228-230. In some implementations, the persistent query controller 216 can serve as the “primary client” for persistent queries and can request remote query processors from dispatchers, and send instructions to start persistent queries. For example, a user can submit a query to 216, and 216 starts and runs the query every day. In another example, a securities trading strategy could be a persistent query. The persistent query controller can start the trading strategy query every morning before the market opens, for instance. It will be appreciated that 216 can operate on time scales other than days. In some implementations, the controller may require its own clients to request that queries be started, stopped, etc. This can be done manually, or by scheduled (e.g., cron) jobs. Some implementations can include “advanced scheduling” (e.g., auto-start/stop/restart, time-based repeat, etc.) within the controller.


The GUI host or workstation 206 can include a user console 218 and a user query application 220. The user console 218 can be configured to connect to the persistent query controller 216. The user query application 220 can be configured to connect to one or more remote query dispatchers (e.g., 232) and one or more remote query processors (228, 230).



FIG. 3 is a diagram of an example computing device 300 in accordance with at least one implementation. The computing device 300 includes one or more processors 302, an operating system 304, a computer readable medium (e.g., memory) 306 and a network interface 308. The computer readable medium 306 can include a feedback cyclicality application 310 and a data section 312 (e.g., for storing in-memory tables, etc.).


In operation, the processor 302 may execute the application 310 stored in the memory 306. The application 310 can include software instructions that, when executed by the processor, cause the processor to perform operations for data source refreshing using an update propagation graph with feedback cyclicality in accordance with the present disclosure (e.g., performing one or more of 602-618 and/or 702-708 described below).


The application program 310 can operate in conjunction with the data section 312 and the operating system 304.


As used herein, a data source can include, but is not limited to, a real time or near real time data source such as securities market data (e.g., over a multicast distribution mechanism (e.g., 118/126) or through a tailer (e.g., 116)), system generated data, historical data, user input data from a remote user table server, tables programmatically generated in-memory, or an element upstream in an update propagation graph (UPG) such as a directed acyclic graph (DAG), and/or any data (e.g., a table, mathematical object, etc.) having a capability to refresh itself/provide updated data.


When a data source is updated, it will send add, modify, delete, reindex (AMDR) notifications through the DAG. It will be appreciated that a DAG is used herein for illustration purposes of a possible implementation of the UPG, and that the UPG can include other implementations. A reindex message is a message to change the indexing of a data item, but not change the value. When a table is exported from the server to a client, an exported table handle is created and that handle attaches itself to the DAG as a child of the table to be displayed. When the DAG updates, that handle's node in the DAG is reached and a notification is sent across the network to the client that includes the rows which have been added/modified/deleted/reindexed. On the client side, those rows are reconstructed and an in-memory copy of the table (or portion thereof) is maintained for display (or other access).


There can be two cases in which a view is updated. In the first case, a system clock ticks, and there is new data for one or more source (parent) nodes in the DAG, which percolates down to the exported table handle. In the second case, a user changes the “viewport”, which is the active set of rows and columns.


There can be various ways the viewport is caused to be updated, such as: (i) scrolling the view of the table, (ii) showing or hiding a table, (iii) when the user or client program programmatically accesses the table, and/or (iv) adding/removing columns from a view. When the viewport is updated, the viewport is automatically adjusted to include the rows/columns that the user is trying to access with exponential expansion up to a limit for efficiency. After a timeout, any automatically created viewports are closed.


A query result may not change without a clock tick that has one or more AMDR messages which traverse the DAG. However, the portion of a query result that is displayed to the user (e.g., the viewport) might change. When a user displays a table, a set of visible columns and rows is computed. In addition to the visible set of rows/columns, the system may compute (and make available for possible display) more data than is visible. For example, the system may compute and make available for possible display three screens of data: the currently visible screen and one screen before and one screen after. If there are multiple views of the same table, either multiple exported table handles are created, in which case the views are independent, or a single exported table handle is created, in which case the viewport is the union of the visible sets. As the user scrolls the table, the viewport may change. When the viewport changes, the visible area (with a buffer of rows up and down, and columns left and right, so that scrolling is smooth) is computed and the updated visible area is sent to the server. In response, the server sends a snapshot with relevant portions of those newly visible rows/columns. For non-displayed tables, the system can consider the visible area to be the whole table so that a consistent table view is available for further processing (e.g., all rows and one or more columns of the data object may be sent to the client).


The snapshot can be generated asynchronously from the DAG update/table refresh loop under the condition that a consistent snapshot (i.e., the clock value remains the same throughout the snapshot) is able to be obtained. If a consistent snapshot is not obtained after a given number of attempts (e.g., three attempts), a lock can be obtained (e.g., the LiveTableMonitor lock) at the end of the current DAG update cycle to lock out updates while the snapshot is created.
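By way of illustration only, the following Java sketch shows one way the retry-then-lock snapshot strategy described above could be structured. The clock source, state reader, and lock are illustrative stand-ins for whatever a given implementation provides.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.LongSupplier;
import java.util.function.Supplier;

// Hypothetical sketch of the retry-then-lock snapshot strategy described above;
// all names are illustrative assumptions.
final class SnapshotSketch {
    private static final int MAX_OPTIMISTIC_ATTEMPTS = 3;
    private final Lock updateLock = new ReentrantLock();   // stands in for the update cycle's lock

    <T> T consistentSnapshot(LongSupplier clockValue, Supplier<T> readState) {
        for (int attempt = 0; attempt < MAX_OPTIMISTIC_ATTEMPTS; attempt++) {
            long clockBefore = clockValue.getAsLong();
            T candidate = readState.get();
            // Consistent only if the logical clock value did not change while reading.
            if (clockValue.getAsLong() == clockBefore) {
                return candidate;
            }
        }
        // Fall back: lock out updates at the end of the current cycle while the snapshot is created.
        updateLock.lock();
        try {
            return readState.get();
        } finally {
            updateLock.unlock();
        }
    }
}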


Further, the remote query processor (or server) has knowledge of the visible regions and will send data updates for the visible rows/columns (e.g., it can send the entire AMDR message information so that the client has information about what has been updated, just not what the actual data is outside of its viewport). This enables the client to optionally cache data even if the data is outside the viewport, and to invalidate the data only once the data actually changes.


The DAG structure can be maintained in the memory of a remote query processor. Child nodes have hard references back to their parents, and parents have weak references to their children. This ensures that if a child exists, its parent will also exist, but if there are no external references to a child, then a garbage collection event can properly clean the child up (and the parent won't hold onto the child). For the exported table handles, a component (e.g., an ExportedTableHandleManager component) can be configured to hold hard references to the exported tables. If a client disconnects, then the references for its tables can be cleaned up. Clients can also proactively release references.
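By way of illustration only, the following Java sketch shows the reference discipline described above, with hypothetical class and method names.

import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a child holds hard references to its parents, while a
// parent holds only weak references to its children, so an otherwise
// unreferenced child can be garbage collected. Names are illustrative.
final class DagNodeSketch {
    private final List<DagNodeSketch> parents = new ArrayList<>();                  // hard references
    private final List<WeakReference<DagNodeSketch>> children = new ArrayList<>();  // weak references

    void addChild(DagNodeSketch child) {
        child.parents.add(this);                   // child keeps its parent alive
        children.add(new WeakReference<>(child));  // parent does not keep the child alive
    }

    void notifyChildren() {
        children.removeIf(ref -> ref.get() == null);   // drop entries for collected children
        for (WeakReference<DagNodeSketch> ref : children) {
            DagNodeSketch child = ref.get();
            if (child != null) {
                child.onParentUpdated(this);
            }
        }
    }

    void onParentUpdated(DagNodeSketch parent) {
        // Recompute this node's state from its parents (elided in this sketch).
    }
}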



FIGS. 4A and 4B show data source definitions and a corresponding directed acyclic graph (DAG) in accordance with some implementations. In FIG. 4A, example code defines the data sources as tables (t1-t5). From the code for the data sources, a DAG can be generated as shown by the graph in FIG. 4B. The DAG in FIG. 4B shows dependencies between the nodes, which correspond to table data sources.


Data sources can include market data (e.g., data received via multicast distribution mechanism or through a tailer), system generated data, historical data, user input data from the remote user table server, tables programmatically generated in-memory, or something further upstream in the DAG. In general, anything represented in the data system as a table and which can refresh itself/provide data can be a data source. Also, data sources can include non-table data structures which update, for example, mathematical data structures such as a singular value decomposition (SVD) of a table. Similarly, correlation matrices, linear algebra, PDE solvers, a non-matrix, non-tabular data object, etc. can be supported.


In some implementations, code can be converted into the in-memory data structures holding the DAG. For example, the source code of FIG. 4A gets converted into the DAG data structure in memory. The DAG connectivity can change by executing code. For example, assume a set of code CODE1 is executed. CODE1 leads to a DAG1 being created. Data can be processed through DAG1, leading to table updates. Now assume that the user wants to compute a few more tables. The user can run a few more lines of code CODE2, which use variables computed in CODE1. The execution of CODE2 leads to a change in the DAG. As a simple example, assume that the first 3 lines in FIG. 4A are executed. The user could come along later and execute line 4, which would modify the DAG data structure. Also, some implementations can permit other programs to listen to changes from a node representing a data object (e.g., table or non-table object) or an internal node.


In some implementations, when a table changes, an application programming interface (API) can specify rows where add, modify, delete, or reindex (AMDR) changes were made. A reindex is a change in which a row is moved but the value contained in the row is not modified. The API can also provide a mechanism to obtain a value prior to the most recent change. When the DAG is processed during the refresh, the AMDR info on “upstream” data objects (e.g., tables, etc.) or nodes is used to compute changes in “downstream” data objects or nodes. In some implementations, the entire DAG can be processed during the refresh cycle.
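By way of illustration only, the following Java sketch shows one possible shape for such an API; the interface and method names are hypothetical placeholders.

import java.util.Set;

// Hypothetical sketch of the change-notification shape described above; the
// interface and method names are illustrative assumptions.
interface RowUpdate {
    Set<Long> added();
    Set<Long> modified();
    Set<Long> deleted();
    Set<Long> reindexed();                        // row moved; value not modified
    Object current(String column, long row);      // value after the change
    Object previous(String column, long row);     // value prior to the most recent change
}

interface DownstreamListener {
    // Invoked during the refresh so a downstream node can compute its own
    // changes from the upstream AMDR information.
    void onUpdate(RowUpdate upstreamUpdate);
}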


In general, a DAG can be comprised of a) dynamic nodes (DN); b) static nodes (SN); and c) internal nodes (IN) that can include nodes with DN and/or SN and/or IN as inputs.


DNs are nodes of the graph that can change. For example, DN can be data sources that update as new data comes in. DN could also be timers that trigger an event based on time intervals. In other examples, DN could also be MySQL monitors, specialized filtering criteria (e.g., update a “where” filter only when a certain event happens). Because these nodes are “sources”, they may occur as root nodes in the DAG. At the most fundamental level, DN are root DAG nodes which change (e.g., are “alive”).


SNs are nodes of the DAG that do not change. For example, historical data does not change. INs are interior nodes of the DAG. The state of an IN can be defined by its inputs, which can be DN, SN, and/or IN. If all of the IN inputs are “static”, the IN will be static. If one or more of the IN inputs is “dynamic”, the IN will be dynamic. INs can be tables or other data structures. For example, a “listener IN” can permit code to listen to a node of the DAG. A listener node or associated listener monitoring code can place (or “fire”) additional events (or notifications) into a priority queue of a DAG.
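By way of illustration only, the following Java sketch models the node kinds described above and a listener that fires notifications into a priority queue; all names are hypothetical.

import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch: a dynamic node (DN) changes, a static node (SN) does not,
// and an internal node (IN) is dynamic exactly when at least one input is dynamic.
// A listener node fires notifications into the DAG's priority queue.
abstract class Node {
    abstract boolean isDynamic();
}

final class DynamicNode extends Node {           // root source that updates ("alive")
    @Override boolean isDynamic() { return true; }
}

final class StaticNode extends Node {            // e.g., historical data
    @Override boolean isDynamic() { return false; }
}

class InternalNode extends Node {
    private final List<Node> inputs;             // inputs may be DN, SN, and/or IN
    InternalNode(List<Node> inputs) { this.inputs = inputs; }
    @Override boolean isDynamic() { return inputs.stream().anyMatch(Node::isDynamic); }
}

final class Notification implements Comparable<Notification> {
    final long priority;                          // e.g., construction order of the target node
    final Runnable deliver;
    Notification(long priority, Runnable deliver) { this.priority = priority; this.deliver = deliver; }
    @Override public int compareTo(Notification other) { return Long.compare(priority, other.priority); }
}

final class ListenerNode extends InternalNode {
    private final PriorityQueue<Notification> dagQueue;
    ListenerNode(List<Node> inputs, PriorityQueue<Notification> dagQueue) {
        super(inputs);
        this.dagQueue = dagQueue;
    }
    void onInputChange(Notification notification) {
        dagQueue.add(notification);               // "fire" an additional event into the DAG's queue
    }
}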


In general, a DAG can be composed of static and/or dynamic subgraphs. Update processing occurs on dynamic subgraphs (because static subgraphs are not changing). Only dynamic nodes are in the DataMonitor loop. For Tables, AMDR messages are used for communication within the DAG.


When query code is executed, the DAG is created or modified. As part of this process, the system records the order in which the DAG nodes were constructed. This “construction ordering” can be used to determine the order in which nodes are processed in the DAG.


For example, consider:


a=db.i(...), where a is a dynamic node (or DN)


b=a.where(“A=1”)


c=b.where(“B=2”)


d=c.join(b)


Assume (a) has changes to be processed during a refresh cycle. The order of processing will be (a), (b), (c), and then (d).


When (d) is processed, it will process input changes from both (b) and (c) before creating AMDR notification messages for (d). This ordering prevents (d) from creating more than one set of AMDRs per input change, and it can help ensure that all AMDRs are consistent with all data being processed for the clock cycle. If this ordering were not in place, it might be possible to get multiple ticks per cycle and some of the data could be inconsistent. Also, the ordering can help ensure that joins produce consistent results.
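By way of illustration only, the following Java sketch shows construction-order processing using a priority queue keyed on the order in which nodes were built; the names are hypothetical and the per-node change processing is elided.

import java.util.PriorityQueue;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of construction-order processing: each node records the
// order in which it was built, and the refresh drains a priority queue keyed on
// that order, so (a) is processed before (b), (b) before (c), and a node such as
// (d) sees all of its inputs' changes before producing its own AMDRs.
final class OrderedNode implements Comparable<OrderedNode> {
    private static final AtomicLong NEXT_ORDER = new AtomicLong();
    final long constructionOrder = NEXT_ORDER.getAndIncrement();

    void processPendingChanges() {
        // Consume input AMDRs and emit at most one set of AMDRs for this cycle (elided).
    }

    @Override
    public int compareTo(OrderedNode other) {
        return Long.compare(constructionOrder, other.constructionOrder);
    }
}

final class ConstructionOrderRefreshSketch {
    static void refresh(PriorityQueue<OrderedNode> pending) {
        while (!pending.isEmpty()) {
            pending.poll().processPendingChanges();  // earliest-constructed node first
        }
    }
}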



FIGS. 5A and 5B show example data source definitions and a corresponding hybrid DAG having a cyclicality feedback provider in accordance with some implementations. In FIG. 5A, example code defines the data sources as tables (t1-t5), where table t1 includes data from the cyclicality feedback provider that provides a feedback path within the hybrid DAG for data from table t3. From the code for the data sources in FIG. 5A, a hybrid DAG can be generated as shown by the graph in FIG. 5B. The hybrid DAG in FIG. 5B shows dependencies between the nodes, which correspond to table data sources, and also shows the cyclicality feedback provider that provides feedback data from table t3 to table t1. It will be appreciated that a cyclicality feedback provider can provide feedback from one or more tables (or other data sources) within an update propagation graph to one or more other tables (or other data sources) at a higher level within the update propagation graph. The cyclicality feedback provider can include a table that is maintained in memory or stored to a storage device such as a disk and/or a non-table data source as mentioned above.


While the cyclicality feedback provider is shown as a separate table for illustration purposes in FIG. 5B, it will be appreciated that in some implementations, the cyclicality feedback provider could be part of another data source (e.g., table). The cyclicality feedback provider can be configured to listen to one or more tables (or other data sources). In addition to listening (or monitoring) for changes in data sources, a cyclicality feedback provider listener can listen for events such as state changes, state updates or other changes in addition to or as an alternative to listening for AMDR-type changes or events.



FIG. 6 is a flowchart of an example cyclicality feedback provider method 600 in accordance with some implementations. Processing begins at 602, where a new cyclicality feedback provider is created. For example, the pseudo code of FIG. 5A includes lines (e.g., the first three lines of FIG. 5A) that create a new cyclicality feedback provider. Creation of a cyclicality feedback provider can include instantiating an object of a class that has been configured to operate as a cyclicality feedback provider object. Processing continues to 604.


At 604, a reference to the cyclicality feedback provider object is obtained. For example, a reference to a table of the cyclicality feedback provider can be requested as illustrated in line four of the pseudo code in FIG. 5A. Processing continues to 606.


At 606, an update propagation graph is constructed. For example, using techniques described above, an update propagation graph (e.g., as represented by FIG. 5B) is constructed. Processing continues to 608.


At 608, the cyclicality feedback provider is configured to listen to changes (e.g., events and/or updates) of the data sources with which the cyclicality feedback provider is associated (e.g., the data sources or tables having data fields for which the cyclicality feedback provider is configured to listen for events or changes). Processing continues to 610.


At 610, events and/or updates for the data sources (e.g., tables) within the update propagation graph are processed according to the techniques mentioned above. Processing continues to 612.


At 612, the cyclicality feedback provider listener listens for changes. For example, the listener could programmatically detect a change in one or more data sources or data fields that the cyclicality feedback provider is listening to. Processing continues to 614.


At 614, it is determined whether any changes were detected. If so, processing continues to 616. If no changes were detected, processing continues to 610.


At 616, the cyclicality feedback provider data object (e.g., table) is updated once the events and/or changes for the update propagation graph have been processed for the current logical clock cycle. For example, data changes in data sources monitored by the cyclicality feedback provider can be reflected in the cyclicality feedback provider data object. Processing continues to 618.


At 618, data sources that depend on the cyclicality feedback provider are updated at the end of the current logical cycle so that the feedback data is available for processing during the next logical clock cycle.


It will be appreciated that 602-618 can be repeated in whole or in part to perform a cyclicality feedback operation.
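By way of illustration only, the following Java sketch shows how a feedback provider listener consistent with 610-618 could buffer detected changes during a cycle and apply them only after update processing for the current logical clock cycle completes; all names are hypothetical.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a feedback provider listener consistent with 610-618:
// changes detected during the current cycle are buffered and applied to the
// feedback data object only after graph processing for that cycle completes,
// so dependents consume the feedback on the next cycle.
final class FeedbackListenerSketch {
    interface FeedbackObject {
        void apply(Map<String, Object> change);   // update the feedback data object (e.g., a table)
        void notifyDependents();                  // dependents will process this data next cycle
    }

    private final List<Map<String, Object>> bufferedChanges = new ArrayList<>();

    // 612-614: invoked whenever a monitored data field changes during the cycle.
    void onMonitoredChange(Map<String, Object> change) {
        bufferedChanges.add(change);
    }

    // 616-618: invoked once, after update processing for the current logical clock cycle.
    void flushAtEndOfCycle(FeedbackObject feedbackObject) {
        for (Map<String, Object> change : bufferedChanges) {
            feedbackObject.apply(change);
        }
        bufferedChanges.clear();
        feedbackObject.notifyDependents();
    }
}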



FIG. 7 is a flowchart of an example data source refresh process 700 using a hybrid DAG having a cyclicality feedback provider in accordance with some implementations. Processing begins at 702, where it is determined that a logical clock has transitioned to an update state. The transition to an update state can mark the beginning of a logical clock cycle for updating an update propagation graph as discussed above. Processing continues to 704.


At 704, events and updates that have been queued are processed through the update propagation graph as described above. Processing continues to 706.


At 706, after the events and updates for the update propagation graph have been processed, events and updates from the cyclicality feedback provider are provided to the data sources that are dependent on the cyclicality feedback provider. Processing continues to 708.


At 708, the logical clock transitions to an idle state, which can indicate the end of a current logical clock cycle in advance of a next logical clock cycle.
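By way of illustration only, the following Java sketch outlines a refresh cycle consistent with 702-708; the clock states, queue contents, and names are hypothetical placeholders.

import java.util.PriorityQueue;

// Hypothetical sketch of a refresh cycle consistent with 702-708. The queue is
// assumed to be ordered (e.g., by node construction order); all names are
// illustrative assumptions.
final class RefreshCycleSketch {
    enum ClockState { IDLE, UPDATING }

    private ClockState clockState = ClockState.IDLE;

    void runOneCycle(PriorityQueue<Runnable> queuedNotifications, Runnable deliverFeedback) {
        clockState = ClockState.UPDATING;            // 702: logical clock transitions to an update state
        while (!queuedNotifications.isEmpty()) {     // 704: process queued events and updates through the graph
            queuedNotifications.poll().run();
        }
        deliverFeedback.run();                       // 706: feedback delivered to its dependent data sources
        clockState = ClockState.IDLE;                // 708: end of the current logical clock cycle
    }
}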



FIG. 8 is a diagram of an example computer data system 802 and backtesting application 804 using a hybrid DAG having a cyclicality feedback provider 806 in accordance with some implementations.


In operation, the backtesting application 804 can provide predetermined input data 808 to the update propagation graph of the computer data system 802. As the input data 808 is processed across one or more logical clock cycles, the cyclicality feedback provider 806 can provide feedback within the update propagation graph. Output 810 from the computer data system 802 can be received by the backtesting application/system 804 and programmatically evaluated to generate an output signal (e.g., new/modified input data 808, and/or an indication of how the computer data system 802 is performing based on the input data 808).
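By way of illustration only, the following Java sketch shows a simple backtesting driver consistent with the description above: it submits predetermined inputs, collects per-cycle outputs, compares them with reference values, and returns an output signal. The interface and method names are hypothetical.

import java.util.List;

// Hypothetical sketch of the backtesting loop described above; all names are
// illustrative assumptions.
final class BacktestSketch {
    interface GraphUnderTest {
        void submitInput(Object inputEvent);   // delivered as events/updates to one or more data sources
        Object runCycleAndGetOutput();         // output results for one logical clock cycle
    }

    // Feeds predetermined inputs cycle by cycle, compares each cycle's output with
    // the corresponding reference value, and returns an output signal.
    static boolean backtest(GraphUnderTest graph, List<Object> inputs, List<Object> reference) {
        boolean matchesReference = true;
        for (int cycle = 0; cycle < inputs.size(); cycle++) {
            graph.submitInput(inputs.get(cycle));
            Object output = graph.runCycleAndGetOutput();
            if (!output.equals(reference.get(cycle))) {
                matchesReference = false;      // the data system diverged from the reference for this cycle
            }
        }
        return matchesReference;
    }
}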


It will be appreciated that the backtesting application/system 804 can be part of the computer data system 802, or can be a separate system. The cyclicality feedback provider 806 can be part of the backtesting application/system 804 or part of the computer data system 802, or distributed between the two.



FIG. 9 is a diagram of an example machine learning system 906 using a hybrid DAG having a cyclicality feedback provider in accordance with some implementations. In operation, data from data sources 1 and 2 (902 and 904) is provided to the machine learning model or system 906, which processes the input data according to a machine learning technique (e.g., neural network, etc.). The machine learning model or system 906 generates a prediction 908 (or other inference or estimation). A cyclicality feedback provider 910 listens to changes in the machine learning model or system 906 and provides updates to the machine learning model or system 906 as described above.


It will be appreciated that the modules, processes, systems, and sections described above can be implemented in hardware, hardware programmed by software, software instructions stored on a nontransitory computer readable medium or a combination of the above. A system as described above, for example, can include a processor configured to execute a sequence of programmed instructions stored on a nontransitory computer readable medium. For example, the processor can include, but not be limited to, a personal computer or workstation or other such computing system that includes a processor, microprocessor, microcontroller device, or is comprised of control logic including integrated circuits such as, for example, an Application Specific Integrated Circuit (ASIC), a field programmable gate array (FPGA), GPGPU, GPU, or the like. The instructions can be compiled from source code instructions provided in accordance with a programming language such as Java, C, C++, C#.net, assembly or the like. The instructions can also comprise code and data objects provided in accordance with, for example, the Visual Basic™ language, a specialized database query language, or another structured or object-oriented programming language. The sequence of programmed instructions, or programmable logic device configuration software, and data associated therewith can be stored in a nontransitory computer-readable medium such as a computer memory or storage device which may be any suitable memory apparatus, such as, but not limited to ROM, PROM, EEPROM, RAM, flash memory, disk drive and the like.


Furthermore, the modules, processes, systems, and sections can be implemented as a single processor or as a distributed processor. Further, it should be appreciated that the steps mentioned above may be performed on a single or distributed processor (single and/or multi-core, or cloud computing system). Also, the processes, system components, modules, and sub-modules described in the various figures of and for embodiments above may be distributed across multiple computers or systems or may be co-located in a single processor or system. Example structural embodiment alternatives suitable for implementing the modules, sections, systems, means, or processes described herein are provided below.


The modules, processors or systems described above can be implemented as a programmed general purpose computer, an electronic device programmed with microcode, a hard-wired analog logic circuit, software stored on a computer-readable medium or signal, an optical computing device, a networked system of electronic and/or optical devices, a special purpose computing device, an integrated circuit device, a semiconductor chip, and/or a software module or object stored on a computer-readable medium or signal, for example.


Embodiments of the method and system (or their sub-components or modules), may be implemented on a general-purpose computer, a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmed logic circuit such as a PLD, PLA, FPGA, PAL, or the like. In general, any processor capable of implementing the functions or steps described herein can be used to implement embodiments of the method, system, or a computer program product (software program stored on a nontransitory computer readable medium).


Furthermore, embodiments of the disclosed method, system, and computer program product (or software instructions stored on a nontransitory computer readable medium) may be readily implemented, fully or partially, in software using, for example, object or object-oriented software development environments that provide portable source code that can be used on a variety of computer platforms. Alternatively, embodiments of the disclosed method, system, and computer program product can be implemented partially or fully in hardware using, for example, standard logic circuits or a VLSI design. Other hardware or software can be used to implement embodiments depending on the speed and/or efficiency requirements of the systems, the particular function, and/or particular software or hardware system, microprocessor, or microcomputer being utilized. Embodiments of the method, system, and computer program product can be implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the function description provided herein and with a general basic knowledge of the software engineering and computer networking arts.


Moreover, embodiments of the disclosed method, system, and computer readable media (or computer program product) can be implemented in software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like.


It is, therefore, apparent that there is provided, in accordance with the various embodiments disclosed herein, methods, systems and computer readable media for data source refreshing using an update propagation graph with feedback cyclicality.


Application Ser. No. 15/154,974, entitled “DATA PARTITIONING AND ORDERING” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,975, entitled “COMPUTER DATA SYSTEM DATA SOURCE REFRESHING USING AN UPDATE PROPAGATION GRAPH” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,979, entitled “COMPUTER DATA SYSTEM POSITION-INDEX MAPPING” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,980, entitled “SYSTEM PERFORMANCE LOGGING OF COMPLEX REMOTE QUERY PROCESSOR QUERY OPERATIONS” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,983, entitled “DISTRIBUTED AND OPTIMIZED GARBAGE COLLECTION OF REMOTE AND EXPORTED TABLE HANDLE LINKS TO UPDATE PROPAGATION GRAPH NODES” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,984, entitled “COMPUTER DATA SYSTEM CURRENT ROW POSITION QUERY LANGUAGE CONSTRUCT AND ARRAY PROCESSING QUERY LANGUAGE CONSTRUCTS” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,985, entitled “PARSING AND COMPILING DATA SYSTEM QUERIES” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,987, entitled “DYNAMIC FILTER PROCESSING” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,988, entitled “DYNAMIC JOIN PROCESSING USING REAL-TIME MERGED NOTIFICATION LISTENER” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,990, entitled “DYNAMIC TABLE INDEX MAPPING” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,991, entitled “QUERY TASK PROCESSING BASED ON MEMORY ALLOCATION AND PERFORMANCE CRITERIA” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,993, entitled “A MEMORY-EFFICIENT COMPUTER SYSTEM FOR DYNAMIC UPDATING OF JOIN PROCESSING” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,995, entitled “QUERY DISPATCH AND EXECUTION ARCHITECTURE” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,996, entitled “COMPUTER DATA DISTRIBUTION ARCHITECTURE” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,997, entitled “DYNAMIC UPDATING OF QUERY RESULT DISPLAYS” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,998, entitled “DYNAMIC CODE LOADING” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/154,999, entitled “IMPORTATION, PRESENTATION, AND PERSISTENT STORAGE OF DATA” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/155,001, entitled “COMPUTER DATA DISTRIBUTION ARCHITECTURE” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/155,005, entitled “PERSISTENT QUERY DISPATCH AND EXECUTION ARCHITECTURE” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/155,006, entitled “SINGLE INPUT GRAPHICAL USER INTERFACE CONTROL ELEMENT AND METHOD” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/155,007, entitled “GRAPHICAL USER INTERFACE DISPLAY EFFECTS FOR A COMPUTER DISPLAY SCREEN” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/155,009, entitled “COMPUTER ASSISTED COMPLETION OF HYPERLINK COMMAND SEGMENTS” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/155,010, entitled “HISTORICAL DATA REPLAY UTILIZING A COMPUTER SYSTEM” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/155,011, entitled “DATA STORE ACCESS PERMISSION SYSTEM WITH INTERLEAVED APPLICATION OF DEFERRED ACCESS CONTROL FILTERS” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/155,012, entitled “REMOTE DATA OBJECT PUBLISHING/SUBSCRIBING SYSTEM HAVING A MULTICAST KEY-VALUE PROTOCOL” and filed in the United States Patent and Trademark Office on May 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/351,429, entitled “QUERY TASK PROCESSING BASED ON MEMORY ALLOCATION AND PERFORMANCE CRITERIA” and filed in the United States Patent and Trademark Office on Nov. 14, 2016, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/813,112, entitled “COMPUTER DATA SYSTEM DATA SOURCE REFRESHING USING AN UPDATE PROPAGATION GRAPH HAVING A MERGED JOIN LISTENER” and filed in the United States Patent and Trademark Office on Nov. 14, 2017, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/813,127, entitled “COMPUTER DATA DISTRIBUTION ARCHITECTURE CONNECTING AN UPDATE PROPAGATION GRAPH THROUGH MULTIPLE REMOTE QUERY PROCESSORS” and filed in the United States Patent and Trademark Office on Nov. 14, 2017, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


Application Ser. No. 15/813,119, entitled “KEYED ROW SELECTION” and filed in the United States Patent and Trademark Office on Nov. 14, 2017, is hereby incorporated by reference herein in its entirety as if fully set forth herein.


While the disclosed subject matter has been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications and variations would be, or are, apparent to those of ordinary skill in the applicable arts. Accordingly, Applicants intend to embrace all such alternatives, modifications, equivalents and variations that are within the spirit and scope of the disclosed subject matter.

Claims
  • 1. A system for updating a data object using an update propagation graph having a cyclicality feedback provider, the system comprising: one or more hardware processors coupled to a nontransitory computer readable medium having stored thereon software instructions that, when executed by the one or more processors, cause the one or more processors to perform operations including: constructing a cyclicality feedback provider including a cyclicality feedback provider object including one or more feedback data fields; obtaining a reference to the cyclicality feedback provider object; constructing a computer data system update propagation graph having one or more update propagation graph data fields that correspond to the one or more feedback data fields; and adding a feedback provider listener to the computer data system update propagation graph, wherein the feedback provider listener provides feedback updates to the one or more feedback data fields of the cyclicality feedback provider object when changes to the one or more update propagation graph data fields corresponding to the one or more feedback data fields are detected, and wherein the feedback updates are provided to the one or more feedback data fields of the cyclicality feedback provider object based on a state of a logical clock and on completion of update processing for a given logical clock cycle.
  • 2. The system of claim 1, wherein the cyclicality feedback provider object includes a computer data system table object.
  • 3. The system of claim 1, wherein the update propagation graph includes a hybrid directed acyclic graph having a clock-state controlled cyclicality feedback provided by the cyclicality feedback provider and a state of a logical clock.
  • 4. The system of claim 3, wherein the operations further include:
    determining that a logical clock has transitioned to an update state;
    processing events and updates to data sources of the update propagation graph for a current logical clock cycle, wherein processing the events and updates is performed on the hybrid directed acyclic graph as if the cyclicality feedback is not present; and
    after the processing of events and updates has completed, providing events and updates from the cyclicality feedback provider object to one or more data objects within the update propagation graph, wherein the events and updates from the cyclicality feedback provider object will be processed through the update propagation graph in a next logical clock cycle.
  • 5. The system of claim 4, wherein processing events and updates to the data sources includes:
    invoking a data source refresh method for a data source for which changes are being processed;
    determining whether a priority queue for the data source is empty;
    when the priority queue is not empty, retrieving a next change notification message from the priority queue, delivering the change notification to a corresponding data source, and repeating determining whether the priority queue is empty; and
    when the priority queue is empty, setting the logical clock to an idle state.
  • 6. The system of claim 5, wherein the operations further include:
    performing a backtesting operation by providing predetermined input data to the update propagation graph as one or more events and updates to one or more data sources; and
    receiving output results from the update propagation graph for each logical clock cycle.
  • 7. The system of claim 6, wherein the operations further include:
    comparing the output results received from the update propagation graph with one or more reference values; and
    generating an output signal based on the comparing.
  • 8. A computer-implemented method for updating a data object using an update propagation graph having a cyclicality feedback provider, the method comprising:
    constructing a cyclicality feedback provider including a cyclicality feedback provider object including one or more feedback data fields;
    obtaining a reference to the cyclicality feedback provider object;
    constructing a computer data system update propagation graph having one or more update propagation graph data fields that correspond to the one or more feedback data fields; and
    adding a feedback provider listener to the computer data system update propagation graph, wherein the feedback provider listener provides feedback updates to the one or more feedback data fields of the cyclicality feedback provider object when changes to the one or more update propagation graph data fields corresponding to the one or more feedback data fields are detected, and wherein the feedback updates are provided to the one or more feedback data fields of the cyclicality feedback provider object based on a state of a logical clock and on completion of update processing for a given logical clock cycle.
  • 9. The computer-implemented method of claim 8, wherein the cyclicality feedback provider object includes a computer data system table object.
  • 10. The computer-implemented method of claim 8, wherein the update propagation graph includes a hybrid directed acyclic graph having a clock-state controlled cyclicality feedback provided by the cyclicality feedback provider and a state of a logical clock.
  • 11. The computer-implemented method of claim 10, further comprising:
    determining that a logical clock has transitioned to an update state;
    processing events and updates to data sources of the update propagation graph for a current logical clock cycle, wherein processing the events and updates is performed on the hybrid directed acyclic graph as if the cyclicality feedback is not present; and
    after the processing of events and updates has completed, providing events and updates from the cyclicality feedback provider object to one or more data objects within the update propagation graph, wherein the events and updates from the cyclicality feedback provider object will be processed through the update propagation graph in a next logical clock cycle.
  • 12. The computer-implemented method of claim 11, wherein processing events and updates to the data sources includes:
    invoking a data source refresh method for a data source for which changes are being processed;
    determining whether a priority queue for the data source is empty;
    when the priority queue is not empty, retrieving a next change notification message from the priority queue, delivering the change notification to a corresponding data source, and repeating determining whether the priority queue is empty; and
    when the priority queue is empty, setting the logical clock to an idle state.
  • 13. The computer-implemented method of claim 12, further comprising:
    performing a backtesting operation by providing predetermined input data to the update propagation graph as one or more events and updates to one or more data sources; and
    receiving output results from the update propagation graph for each logical clock cycle.
  • 14. The computer-implemented method of claim 13, further comprising:
    comparing the output results received from the update propagation graph with one or more reference values; and
    generating an output signal based on the comparing.
  • 15. A nontransitory computer readable medium having stored thereon software instructions that, when executed by one or more processors, cause the one or more processors to perform operations including:
    constructing a cyclicality feedback provider including a cyclicality feedback provider object including one or more feedback data fields;
    obtaining a reference to the cyclicality feedback provider object;
    constructing a computer data system update propagation graph having one or more update propagation graph data fields that correspond to the one or more feedback data fields; and
    adding a feedback provider listener to the computer data system update propagation graph, wherein the feedback provider listener provides feedback updates to the one or more feedback data fields of the cyclicality feedback provider object when changes to the one or more update propagation graph data fields corresponding to the one or more feedback data fields are detected, and wherein the feedback updates are provided to the one or more feedback data fields of the cyclicality feedback provider object based on a state of a logical clock and on completion of update processing for a given logical clock cycle.
  • 16. The nontransitory computer readable medium of claim 15, wherein the cyclicality feedback provider object includes a computer data system table object.
  • 17. The nontransitory computer readable medium of claim 15, wherein the update propagation graph includes a hybrid directed acyclic graph having a clock-state controlled cyclicality feedback provided by the cyclicality feedback provider and a state of a logical clock.
  • 18. The nontransitory computer readable medium of claim 17, wherein the operations further include:
    determining that a logical clock has transitioned to an update state;
    processing events and updates to data sources of the update propagation graph for a current logical clock cycle, wherein processing the events and updates is performed on the hybrid directed acyclic graph as if the cyclicality feedback is not present; and
    after the processing of events and updates has completed, providing events and updates from the cyclicality feedback provider object to one or more data objects within the update propagation graph, wherein the events and updates from the cyclicality feedback provider object will be processed through the update propagation graph in a next logical clock cycle.
  • 19. The nontransitory computer readable medium of claim 18, wherein processing events and updates to the data sources includes:
    invoking a data source refresh method for a data source for which changes are being processed;
    determining whether a priority queue for the data source is empty;
    when the priority queue is not empty, retrieving a next change notification message from the priority queue, delivering the change notification to a corresponding data source, and repeating determining whether the priority queue is empty; and
    when the priority queue is empty, setting the logical clock to an idle state.
  • 20. The nontransitory computer readable medium of claim 19, wherein the operations further include:
    performing a backtesting operation by providing predetermined input data to the update propagation graph as one or more events and updates to one or more data sources;
    receiving output results from the update propagation graph for each logical clock cycle;
    comparing the output results received from the update propagation graph with one or more reference values; and
    generating an output signal based on the comparing.
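Illustrative Sketches

The clock-gated feedback behavior recited in claims 1-4 above can be pictured with a short, self-contained sketch. The Java program below is illustrative only and is not the patented implementation; all class, method, and field names (LogicalClock, UpdatePropagationGraph, CyclicalityFeedbackProvider, the "result." and "feedback." field prefixes, and so on) are hypothetical stand-ins chosen for this example. It shows the essential ordering: within a logical clock cycle the graph is processed as an ordinary directed acyclic graph, a feedback listener buffers monitored result fields while the clock is in its update state, and the buffered values are republished as source events that the graph processes only in the next cycle.

    // Illustrative sketch only, not the patented implementation: the class, method,
    // and field names below are hypothetical stand-ins chosen for this example.
    import java.util.ArrayDeque;
    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Queue;
    import java.util.function.BiConsumer;

    final class LogicalClock {
        enum State { IDLE, UPDATING }
        State state = State.IDLE;
        long cycle = 0;
    }

    /** A toy update propagation graph: named data fields plus a listener fired on change. */
    final class UpdatePropagationGraph {
        private final Map<String, Object> fields = new HashMap<>();
        private final Queue<Runnable> notifications = new ArrayDeque<>();
        private BiConsumer<String, Object> changeListener = (field, value) -> { };

        void setChangeListener(BiConsumer<String, Object> listener) { changeListener = listener; }

        /** Enqueue an update; it is delivered when the notification queue is processed. */
        void enqueueUpdate(String field, Object value) {
            notifications.add(() -> {
                fields.put(field, value);
                changeListener.accept(field, value);   // hook used by the feedback listener
            });
        }

        /** Drain the notification queue as if the graph were a plain DAG (no feedback edge). */
        void processNotifications() {
            while (!notifications.isEmpty()) {
                notifications.poll().run();
            }
        }

        Object get(String field) { return fields.get(field); }
    }

    /** Buffers monitored changes during a cycle and republishes them as next-cycle source data. */
    final class CyclicalityFeedbackProvider {
        private final Map<String, Object> pending = new LinkedHashMap<>();

        void bufferFeedback(String field, Object value) { pending.put(field, value); }

        /** Called at the start of the next cycle: feed the prior cycle's results back in. */
        void publishTo(UpdatePropagationGraph graph) {
            pending.forEach((field, value) -> graph.enqueueUpdate("feedback." + field, value));
            pending.clear();
        }
    }

    public final class ClockGatedFeedbackDemo {
        public static void main(String[] args) {
            LogicalClock clock = new LogicalClock();
            UpdatePropagationGraph graph = new UpdatePropagationGraph();
            CyclicalityFeedbackProvider feedback = new CyclicalityFeedbackProvider();

            // Feedback provider listener: defers monitored changes while the clock is UPDATING.
            graph.setChangeListener((field, value) -> {
                if (clock.state == LogicalClock.State.UPDATING && field.startsWith("result.")) {
                    feedback.bufferFeedback(field, value);
                }
            });

            for (long cycle = 1; cycle <= 3; cycle++) {
                clock.state = LogicalClock.State.UPDATING;
                clock.cycle = cycle;

                feedback.publishTo(graph);                          // prior cycle's feedback, if any
                graph.enqueueUpdate("result.position", cycle * 10); // this cycle's external event
                graph.processNotifications();                       // acyclic processing for this cycle

                clock.state = LogicalClock.State.IDLE;              // update processing complete
                System.out.println("cycle " + cycle + ": feedback value seen by graph = "
                        + graph.get("feedback.result.position"));
            }
        }
    }

Running the sketch prints the feedback value lagging the result by exactly one cycle, which illustrates the one-cycle delay that keeps the hybrid graph acyclic within any single update phase.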
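The backtesting operation recited in claims 6-7, 13-14 and 20 amounts to driving a model cycle by cycle with predetermined inputs, collecting the per-cycle outputs, comparing them against reference values, and generating an output signal from the comparison. The sketch below is again hypothetical: the model under test is a trivial running sum standing in for an update propagation graph, and names such as referenceValues and outputSignal are invented for illustration.

    // Illustrative backtesting harness sketch; the "model" is a stand-in running sum.
    import java.util.List;

    public final class BacktestHarnessDemo {
        public static void main(String[] args) {
            // Predetermined input data, one value per logical clock cycle.
            List<Long> inputPerCycle = List.of(5L, 7L, 11L, 13L);
            // Reference values the per-cycle outputs are compared against.
            List<Long> referenceValues = List.of(5L, 12L, 23L, 36L);

            long state = 0;            // stands in for the graph's output for this toy model
            boolean allMatch = true;

            for (int cycle = 0; cycle < inputPerCycle.size(); cycle++) {
                // "Update phase": feed this cycle's predetermined event through the model.
                state += inputPerCycle.get(cycle);

                // Receive the output result for this cycle and compare with the reference.
                long output = state;
                boolean match = output == referenceValues.get(cycle);
                allMatch &= match;
                System.out.printf("cycle %d: output=%d reference=%d match=%b%n",
                        cycle + 1, output, referenceValues.get(cycle), match);
            }

            // Output signal generated based on the comparing.
            String outputSignal = allMatch ? "BACKTEST_PASS" : "BACKTEST_FAIL";
            System.out.println(outputSignal);
        }
    }

In a full system the per-cycle outputs would come from the update propagation graph itself rather than a running sum, but the shape of the loop, feed inputs, collect outputs, compare, signal, is the same.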
Parent Case Info

This application claims the benefit of U.S. Provisional Application No. 62/549,908, entitled “COMPUTER DATA SYSTEM” and filed on Aug. 24, 2017, which is incorporated herein by reference in its entirety.

US Referenced Citations (421)
Number Name Date Kind
5335202 Manning et al. Aug 1994 A
5452434 MacDonald Sep 1995 A
5469567 Okada Nov 1995 A
5504885 Alashqur Apr 1996 A
5530939 Mansfield et al. Jun 1996 A
5568632 Nelson Oct 1996 A
5673369 Kim Sep 1997 A
5701461 Dalal et al. Dec 1997 A
5701467 Freeston Dec 1997 A
5764953 Collins Jun 1998 A
5787428 Hart Jul 1998 A
5806059 Tsuchida et al. Sep 1998 A
5859972 Subramaniam et al. Jan 1999 A
5875334 Chow et al. Feb 1999 A
5878415 Olds Mar 1999 A
5890167 Bridge et al. Mar 1999 A
5899990 Maritzen et al. May 1999 A
5920860 Maheshwari et al. Jul 1999 A
5943672 Yoshida Aug 1999 A
5960087 Tribble et al. Sep 1999 A
5991810 Shapiro et al. Nov 1999 A
5999918 Williams et al. Dec 1999 A
6006220 Haderle et al. Dec 1999 A
6032144 Srivastava et al. Feb 2000 A
6032148 Wilkes Feb 2000 A
6038563 Bapat et al. Mar 2000 A
6058394 Bakow et al. May 2000 A
6061684 Glasser et al. May 2000 A
6138112 Slutz Oct 2000 A
6266669 Brodersen et al. Jul 2001 B1
6289357 Parker Sep 2001 B1
6292803 Richardson et al. Sep 2001 B1
6304876 Isip Oct 2001 B1
6317728 Kane Nov 2001 B1
6327702 Sauntry et al. Dec 2001 B1
6336114 Garrison Jan 2002 B1
6353819 Edwards et al. Mar 2002 B1
6367068 Vaidyanathan et al. Apr 2002 B1
6389414 Delo et al. May 2002 B1
6389462 Cohen et al. May 2002 B1
6438537 Netz et al. Aug 2002 B1
6446069 Yaung et al. Sep 2002 B1
6460037 Weiss et al. Oct 2002 B1
6473750 Petculescu et al. Oct 2002 B1
6487552 Lei et al. Nov 2002 B1
6496833 Goldberg et al. Dec 2002 B1
6505189 Au et al. Jan 2003 B1
6505241 Pitts Jan 2003 B2
6510551 Miller Jan 2003 B1
6530075 Beadle et al. Mar 2003 B1
6538651 Hayman et al. Mar 2003 B1
6546402 Beyer et al. Apr 2003 B1
6553375 Huang et al. Apr 2003 B1
6584474 Pereira Jun 2003 B1
6604104 Smith Aug 2003 B1
6618720 Au et al. Sep 2003 B1
6631374 Klein et al. Oct 2003 B1
6640234 Coffen et al. Oct 2003 B1
6697880 Dougherty Feb 2004 B1
6701415 Hendren Mar 2004 B1
6714962 Helland et al. Mar 2004 B1
6725243 Snapp Apr 2004 B2
6732100 Brodersen et al. May 2004 B1
6745332 Wong et al. Jun 2004 B1
6748374 Madan et al. Jun 2004 B1
6748455 Hinson et al. Jun 2004 B1
6760719 Hanson et al. Jul 2004 B1
6775660 Lin et al. Aug 2004 B2
6785668 Polo et al. Aug 2004 B1
6795851 Noy Sep 2004 B1
6816855 Hartel et al. Nov 2004 B2
6820082 Cook et al. Nov 2004 B1
6829620 Michael et al. Dec 2004 B2
6832229 Reed Dec 2004 B2
6851088 Conner et al. Feb 2005 B1
6882994 Yoshimura et al. Apr 2005 B2
6925472 Kong Aug 2005 B2
6934717 James Aug 2005 B1
6947928 Dettinger et al. Sep 2005 B2
6983291 Cochrane et al. Jan 2006 B1
6985895 Witkowski et al. Jan 2006 B2
6985899 Chan et al. Jan 2006 B2
6985904 Kaluskar et al. Jan 2006 B1
7020649 Cochrane et al. Mar 2006 B2
7024414 Sah et al. Apr 2006 B2
7031962 Moses Apr 2006 B2
7058657 Berno Jun 2006 B1
7089228 Arnold et al. Aug 2006 B2
7089245 George et al. Aug 2006 B1
7096216 Anonsen Aug 2006 B2
7103608 Ozbutun et al. Sep 2006 B1
7110997 Turkel et al. Sep 2006 B1
7127462 Hiraga et al. Oct 2006 B2
7146357 Suzuki et al. Dec 2006 B2
7149742 Eastham et al. Dec 2006 B1
7167870 Avvari et al. Jan 2007 B2
7171469 Ackaouy et al. Jan 2007 B2
7174341 Ghukasyan et al. Feb 2007 B2
7181686 Bahrs Feb 2007 B1
7188105 Dettinger et al. Mar 2007 B2
7200620 Gupta Apr 2007 B2
7216115 Walters et al. May 2007 B1
7216116 Nilsson et al. May 2007 B1
7219302 O'Shaughnessy et al. May 2007 B1
7225189 McCormack et al. May 2007 B1
7254808 Trappen et al. Aug 2007 B2
7257689 Baird Aug 2007 B1
7272605 Hinshaw et al. Sep 2007 B1
7308580 Nelson et al. Dec 2007 B2
7316003 Dulepet et al. Jan 2008 B1
7330969 Harrison et al. Feb 2008 B2
7333941 Choi Feb 2008 B1
7343585 Lau et al. Mar 2008 B1
7350237 Vogel et al. Mar 2008 B2
7380242 Alaluf May 2008 B2
7401088 Chintakayala et al. Jul 2008 B2
7426521 Harter Sep 2008 B2
7430549 Zane et al. Sep 2008 B2
7433863 Zane et al. Oct 2008 B2
7447865 Uppala et al. Nov 2008 B2
7478094 Ho et al. Jan 2009 B2
7484096 Garg et al. Jan 2009 B1
7493311 Cutsinger et al. Feb 2009 B1
7529734 Dirisala May 2009 B2
7529750 Bair May 2009 B2
7542958 Warren et al. Jun 2009 B1
7610351 Gollapudi et al. Oct 2009 B1
7620687 Chen et al. Nov 2009 B2
7624126 Pizzo et al. Nov 2009 B2
7627603 Rosenblum et al. Dec 2009 B2
7661141 Dutta et al. Feb 2010 B2
7664778 Yagoub et al. Feb 2010 B2
7672275 Yajnik et al. Mar 2010 B2
7680782 Chen et al. Mar 2010 B2
7711716 Stonecipher May 2010 B2
7711740 Minore et al. May 2010 B2
7761444 Zhang et al. Jul 2010 B2
7797356 Iyer et al. Sep 2010 B2
7827204 Heinzel et al. Nov 2010 B2
7827403 Wong et al. Nov 2010 B2
7827523 Ahmed et al. Nov 2010 B2
7882121 Bruno et al. Feb 2011 B2
7882132 Ghatare Feb 2011 B2
7904487 Ghatare Mar 2011 B2
7908259 Branscome et al. Mar 2011 B2
7908266 Zeringue et al. Mar 2011 B2
7930412 Yeap et al. Apr 2011 B2
7966311 Haase Jun 2011 B2
7966312 Nolan et al. Jun 2011 B2
7966343 Yang et al. Jun 2011 B2
7970777 Saxena et al. Jun 2011 B2
7979431 Qazi et al. Jul 2011 B2
7984043 Waas Jul 2011 B1
8019795 Anderson et al. Sep 2011 B2
8027293 Spaur et al. Sep 2011 B2
8032525 Bowers et al. Oct 2011 B2
8037542 Taylor et al. Oct 2011 B2
8046394 Shatdal Oct 2011 B1
8046749 Owen et al. Oct 2011 B1
8055672 Djugash et al. Nov 2011 B2
8060484 Bandera et al. Nov 2011 B2
8171018 Zane et al. May 2012 B2
8180789 Wasserman et al. May 2012 B1
8196121 Peshansky et al. Jun 2012 B2
8209356 Roesler Jun 2012 B1
8286189 Kukreja et al. Oct 2012 B2
8321833 Langworthy et al. Nov 2012 B2
8332435 Ballard et al. Dec 2012 B2
8359305 Burke et al. Jan 2013 B1
8375127 Lita Feb 2013 B1
8380757 Bailey et al. Feb 2013 B1
8418142 Ao et al. Apr 2013 B2
8433701 Sargeant et al. Apr 2013 B2
8458218 Wildermuth Jun 2013 B2
8473897 Box et al. Jun 2013 B2
8478713 Cotner et al. Jul 2013 B2
8515942 Marum et al. Aug 2013 B2
8543620 Ching Sep 2013 B2
8553028 Urbach Oct 2013 B1
8555263 Allen et al. Oct 2013 B2
8560502 Vora Oct 2013 B2
8595151 Hao et al. Nov 2013 B2
8601016 Briggs et al. Dec 2013 B2
8631034 Peloski Jan 2014 B1
8650182 Murthy Feb 2014 B2
8660869 MacIntyre et al. Feb 2014 B2
8676863 Connell et al. Mar 2014 B1
8683488 Kukreja et al. Mar 2014 B2
8713518 Pointer et al. Apr 2014 B2
8719252 Miranker et al. May 2014 B2
8725707 Chen et al. May 2014 B2
8726254 Rohde et al. May 2014 B2
8745014 Travis Jun 2014 B2
8745510 D'Alo' et al. Jun 2014 B2
8751823 Myles et al. Jun 2014 B2
8768961 Krishnamurthy Jul 2014 B2
8788254 Peloski Jul 2014 B2
8793243 Weyerhaeuser et al. Jul 2014 B2
8805947 Kuzkin et al. Aug 2014 B1
8806133 Hay et al. Aug 2014 B2
8812625 Chitilian et al. Aug 2014 B1
8838656 Cheriton Sep 2014 B1
8855999 Elliot Oct 2014 B1
8863156 Lepanto et al. Oct 2014 B1
8874512 Jin et al. Oct 2014 B2
8880569 Draper et al. Nov 2014 B2
8880787 Kimmel et al. Nov 2014 B1
8881121 Ali Nov 2014 B2
8886631 Abadi et al. Nov 2014 B2
8903717 Elliot Dec 2014 B2
8903842 Bloesch et al. Dec 2014 B2
8922579 Mi et al. Dec 2014 B2
8924384 Driesen et al. Dec 2014 B2
8930892 Pointer et al. Jan 2015 B2
8954418 Faerber et al. Feb 2015 B2
8959495 Chafi et al. Feb 2015 B2
8996864 Maigne et al. Mar 2015 B2
9031930 Valentin May 2015 B2
9077611 Cordray et al. Jul 2015 B2
9122765 Chen Sep 2015 B1
9195712 Freedman et al. Nov 2015 B2
9298768 Varakin et al. Mar 2016 B2
9311357 Ramesh et al. Apr 2016 B2
9372671 Balan et al. Jun 2016 B2
9384184 Acuña et al. Jul 2016 B2
9613018 Zeldis et al. Apr 2017 B2
9633060 Gaudy et al. Apr 2017 B2
9832068 McSherry Nov 2017 B2
20020002576 Wollrath et al. Jan 2002 A1
20020007331 Lo et al. Jan 2002 A1
20020054587 Baker et al. May 2002 A1
20020065981 Jenne et al. May 2002 A1
20020156722 Greenwood Oct 2002 A1
20030004952 Nixon et al. Jan 2003 A1
20030061216 Moses Mar 2003 A1
20030074400 Brooks et al. Apr 2003 A1
20030110416 Morrison et al. Jun 2003 A1
20030167261 Grust et al. Sep 2003 A1
20030182261 Patterson Sep 2003 A1
20030208484 Chang et al. Nov 2003 A1
20030208505 Mullins et al. Nov 2003 A1
20030233632 Aigen et al. Dec 2003 A1
20040002961 Dettinger et al. Jan 2004 A1
20040076155 Yajnik et al. Apr 2004 A1
20040111492 Nakahara et al. Jun 2004 A1
20040148630 Choi Jul 2004 A1
20040186813 Tedesco et al. Sep 2004 A1
20040216150 Scheifler et al. Oct 2004 A1
20040220923 Nica Nov 2004 A1
20040254876 Coval et al. Dec 2004 A1
20050015490 Saare et al. Jan 2005 A1
20050060693 Robison et al. Mar 2005 A1
20050097447 Serra et al. May 2005 A1
20050102284 Srinivasan et al. May 2005 A1
20050102636 McKeon et al. May 2005 A1
20050131893 Glan Jun 2005 A1
20050132384 Morrison et al. Jun 2005 A1
20050138624 Morrison et al. Jun 2005 A1
20050165866 Bohannon et al. Jul 2005 A1
20050198001 Cunningham et al. Sep 2005 A1
20060059253 Goodman et al. Mar 2006 A1
20060074901 Pirahesh et al. Apr 2006 A1
20060085490 Baron et al. Apr 2006 A1
20060100989 Chinchwadkar et al. May 2006 A1
20060101019 Nelson et al. May 2006 A1
20060116983 Dettinger et al. Jun 2006 A1
20060116999 Dettinger Jun 2006 A1
20060136361 Peri et al. Jun 2006 A1
20060173693 Arazi et al. Aug 2006 A1
20060195460 Nori et al. Aug 2006 A1
20060212847 Tarditi et al. Sep 2006 A1
20060218123 Chowdhuri et al. Sep 2006 A1
20060218200 Factor et al. Sep 2006 A1
20060230016 Cunningham et al. Oct 2006 A1
20060253311 Yin et al. Nov 2006 A1
20060271510 Harward et al. Nov 2006 A1
20060277162 Smith Dec 2006 A1
20070011211 Reeves et al. Jan 2007 A1
20070027884 Heger et al. Feb 2007 A1
20070033518 Kenna et al. Feb 2007 A1
20070073765 Chen Mar 2007 A1
20070101252 Chamberlain et al. May 2007 A1
20070169003 Branda et al. Jul 2007 A1
20070256060 Ryu et al. Nov 2007 A1
20070258508 Werb et al. Nov 2007 A1
20070271280 Chandasekaran Nov 2007 A1
20070299822 Jopp et al. Dec 2007 A1
20080022136 Mattsson et al. Jan 2008 A1
20080033907 Woehler et al. Feb 2008 A1
20080046804 Rui et al. Feb 2008 A1
20080072150 Chan et al. Mar 2008 A1
20080120283 Liu et al. May 2008 A1
20080155565 Poduri Jun 2008 A1
20080168135 Redlich et al. Jul 2008 A1
20080235238 Jalobeanu et al. Sep 2008 A1
20080263179 Buttner et al. Oct 2008 A1
20080276241 Bajpai et al. Nov 2008 A1
20080319951 Ueno et al. Dec 2008 A1
20090019029 Tommaney et al. Jan 2009 A1
20090022095 Spaur et al. Jan 2009 A1
20090037391 Agrawal et al. Feb 2009 A1
20090055370 Dagum et al. Feb 2009 A1
20090083215 Burger Mar 2009 A1
20090089312 Chi et al. Apr 2009 A1
20090248902 Blue Oct 2009 A1
20090254516 Meiyyappan et al. Oct 2009 A1
20090300770 Rowney et al. Dec 2009 A1
20090319058 Rovaglio et al. Dec 2009 A1
20090319484 Golbandi et al. Dec 2009 A1
20090327242 Brown et al. Dec 2009 A1
20100036801 Pirvali et al. Feb 2010 A1
20100042587 Johnson et al. Feb 2010 A1
20100047760 Best et al. Feb 2010 A1
20100049715 Jacobsen et al. Feb 2010 A1
20100161555 Nica et al. Jun 2010 A1
20100186082 Ladki et al. Jul 2010 A1
20100199161 Aureglia et al. Aug 2010 A1
20100205017 Sichelman et al. Aug 2010 A1
20100205351 Wiener et al. Aug 2010 A1
20100281005 Carlin et al. Nov 2010 A1
20100281071 Ben-Zvi et al. Nov 2010 A1
20110126110 Vilke et al. May 2011 A1
20110126154 Boehler et al. May 2011 A1
20110153603 Adiba et al. Jun 2011 A1
20110161378 Williamson Jun 2011 A1
20110167020 Yang et al. Jul 2011 A1
20110178984 Talius et al. Jul 2011 A1
20110194563 Shen et al. Aug 2011 A1
20110219020 Oks et al. Sep 2011 A1
20110314019 Peris Dec 2011 A1
20120110030 Pomponio May 2012 A1
20120144234 Clark et al. Jun 2012 A1
20120159303 Friedrich et al. Jun 2012 A1
20120191446 Binsztok et al. Jul 2012 A1
20120192096 Bowman et al. Jul 2012 A1
20120197868 Fauser et al. Aug 2012 A1
20120209886 Henderson Aug 2012 A1
20120215741 Poole et al. Aug 2012 A1
20120221528 Renkes Aug 2012 A1
20120246052 Taylor et al. Sep 2012 A1
20120254143 Varma et al. Oct 2012 A1
20120259759 Grist et al. Oct 2012 A1
20120296846 Teeter Nov 2012 A1
20130041946 Joel et al. Feb 2013 A1
20130080514 Gupta et al. Mar 2013 A1
20130086107 Genochio et al. Apr 2013 A1
20130166556 Baeumges et al. Jun 2013 A1
20130173667 Soderberg et al. Jul 2013 A1
20130179460 Cervantes et al. Jul 2013 A1
20130185619 Ludwig Jul 2013 A1
20130191370 Chen et al. Jul 2013 A1
20130198232 Shamgunov et al. Aug 2013 A1
20130226959 Dittrich et al. Aug 2013 A1
20130246560 Feng et al. Sep 2013 A1
20130263123 Zhou et al. Oct 2013 A1
20130290243 Hazel et al. Oct 2013 A1
20130304725 Nee et al. Nov 2013 A1
20130304744 McSherry et al. Nov 2013 A1
20130311352 Kayanuma et al. Nov 2013 A1
20130311488 Erdogan et al. Nov 2013 A1
20130318129 Vingralek et al. Nov 2013 A1
20130346365 Kan et al. Dec 2013 A1
20140019494 Tang Jan 2014 A1
20140040203 Lu et al. Feb 2014 A1
20140059646 Hannel et al. Feb 2014 A1
20140082724 Pearson et al. Mar 2014 A1
20140136521 Pappas May 2014 A1
20140143123 Banke et al. May 2014 A1
20140149997 Kukreja et al. May 2014 A1
20140156618 Castellano Jun 2014 A1
20140173023 Varney et al. Jun 2014 A1
20140181036 Dhamankar et al. Jun 2014 A1
20140181081 Veidhuizen Jun 2014 A1
20140188924 Ma et al. Jul 2014 A1
20140195558 Murthy et al. Jul 2014 A1
20140201194 Reddy et al. Jul 2014 A1
20140215446 Araya et al. Jul 2014 A1
20140222768 Rambo et al. Aug 2014 A1
20140229506 Lee Aug 2014 A1
20140229874 Strauss Aug 2014 A1
20140244687 Shmueli et al. Aug 2014 A1
20140279810 Mann et al. Sep 2014 A1
20140280522 Watte Sep 2014 A1
20140282444 Araya et al. Sep 2014 A1
20140282540 Bonnet et al. Sep 2014 A1
20140287777 Nixon et al. Sep 2014 A1
20140297611 Abbour et al. Oct 2014 A1
20140317084 Chaudhry et al. Oct 2014 A1
20140324821 Meiyyappan et al. Oct 2014 A1
20140330700 Studnitzer et al. Nov 2014 A1
20140330807 Weyerhaeuser et al. Nov 2014 A1
20140344186 Nadler Nov 2014 A1
20140344391 Varney et al. Nov 2014 A1
20140359574 Beckwith et al. Dec 2014 A1
20140372482 Martin et al. Dec 2014 A1
20140380051 Edward et al. Dec 2014 A1
20150019516 Wein et al. Jan 2015 A1
20150026155 Martin Jan 2015 A1
20150067640 Booker et al. Mar 2015 A1
20150074066 Li et al. Mar 2015 A1
20150082218 Affoneh et al. Mar 2015 A1
20150088894 Czarlinska et al. Mar 2015 A1
20150095381 Chen et al. Apr 2015 A1
20150127599 Schiebeler May 2015 A1
20150154262 Yang Jun 2015 A1
20150172117 Dolinsky et al. Jun 2015 A1
20150188778 Asayag et al. Jul 2015 A1
20150205588 Bates et al. Jul 2015 A1
20150254298 Bourbonnais et al. Sep 2015 A1
20150304182 Brodsky et al. Oct 2015 A1
20150317359 Tran et al. Nov 2015 A1
20160026442 Chhaparia Jan 2016 A1
20160065670 Kimmel et al. Mar 2016 A1
20160092599 Barsness et al. Mar 2016 A1
20160125018 Tomoda et al. May 2016 A1
20160171070 Hrle et al. Jun 2016 A1
20160253294 Allen et al. Sep 2016 A1
20160316038 Jolfaei Oct 2016 A1
20160335330 Teodorescu Nov 2016 A1
20160335361 Teodorescu Nov 2016 A1
20170161514 Dettinger et al. Jun 2017 A1
Foreign Referenced Citations (13)
Number Date Country
2309462 Dec 2000 CA
1406463 Apr 2004 EP
1198769 Jun 2008 EP
2199961 Jun 2010 EP
2423816 Feb 2012 EP
2743839 Jun 2014 EP
2421798 Jun 2011 RU
2000000879 Jan 2000 WO
2001079964 Oct 2001 WO
2011120161 Oct 2011 WO
2012136627 Oct 2012 WO
2014026220 Feb 2014 WO
2014143208 Sep 2014 WO
Non-Patent Literature Citations (5)
Entry
Kramer, The Combining DAG: A Technique for Parallel Data Flow Analysis, IEEE Transactions on Parallel and Distributed Systems, vol. 5, No. 8, August 1994, pp. 805-813.
Breitbart, Update Propagation Protocols for Replicated Databases, SIGMOD '99 Philadelphia PA, 1999, pp. 97-108.
Jellema, Lucas. “Implementing Cell Highlighting in JSF-based Rich Enterprise Apps (Part 1)”, dated Nov. 2008. Retrieved from http://www.oracle.com/technetworld/articles/adf/jellema-adfcellhighlighting-087850.html (last accessed Jun. 16, 2016).
Murray, Derek G. et al. “Naiad: a timely dataflow system.” SOSP '13 Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles. pp. 439-455. Nov. 2013.
“GNU Emacs Manual”, dated Apr. 15, 2016, pp. 43-47. Retrieved from https://web.archive.org/web/20160415175915/http://www.gnu.org/software/emacs/manual/html_mono/emacs.html.
Provisional Applications (1)
Number Date Country
62549908 Aug 2017 US