The present disclosure relates to data synchronization across multiple data storage systems, specifically to data synchronization across immutable and mutable data storage systems.
Decentralized web (also referred to as Web3) represents the next phase in the evolution of the Internet. It is an emerging paradigm that aims to decentralize online platforms and services, fostering greater user control, privacy, and security. Unlike its predecessor, Web 2.0, which is predominantly centralized and relies on intermediaries, Web3 is characterized by decentralized platforms, blockchain technology, and enhanced user control. It aims to create a more transparent, secure, and user-centric online ecosystem by leveraging concepts such as cryptocurrencies, smart contracts, decentralized applications (dApps), decentralized finance (DeFi), and decentralized identity (DID) solutions. Web3 strives to remove the reliance on intermediaries, promote peer-to-peer interactions, and empower individuals with greater privacy, security, and ownership of their data and digital assets.
The concept of decentralization may be achieved through distributed ledger technology, e.g., blockchain and smart contracts. Blockchain allows for transparent and immutable record-keeping, ensuring trust and eliminating the need for a central authority. Smart contracts, on the other hand, are self-executing contracts with predefined rules encoded on the blockchain, enabling automated and secure transactions.
While decentralized systems or blockchain technologies offer many advantages, they also have some limitations. For example, blockchain database systems face challenges in handling large volumes of transactions and scaling to meet the demands of a rapidly growing user base, while centralized databases can typically handle higher transaction volumes more efficiently. Further, due to the distributed nature of blockchains, transaction processing can be slower in blockchains compared to centralized databases. The consensus mechanisms and cryptographic computations involved in validating transactions can introduce delays.
Embodiments described herein include a system and a method for data synchronization across multiple immutable and mutable data storage systems. For example, a system provides an endpoint configured to receive messages associated with database actions (e.g., POST actions, GET actions, DELETE actions, PATCH actions, etc.) from client devices. Responsive to receiving a message associated with a database action via the endpoint, the system routes the message to an action queue. In some embodiments, the message is routed to one of a plurality of action queues based in part on a set of rules.
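As an illustrative sketch (not an authoritative implementation), the rule-based routing of messages to action queues described above might look like the following; the `Message` fields and the rule keys are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """A client message describing a database action (field names are illustrative)."""
    action: str                 # e.g., "POST", "GET", "PATCH", "DELETE"
    module: str                 # e.g., "users", "fungibles", "marketplace"
    payload: dict = field(default_factory=dict)

def route_message(message, queues, rules):
    """Route a message to one of a plurality of action queues based on a set
    of rules. `rules` maps (action, module) pairs to a queue name; unmatched
    messages fall back to a default queue."""
    queue_name = rules.get((message.action, message.module), "default")
    queues.setdefault(queue_name, []).append(message)
    return queue_name
```

For example, with `rules = {("POST", "users"): "post-users"}`, a POST message for the users module lands in the "post-users" queue, while everything else falls through to "default".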
The system transmits the message from the action queue to a plurality of data engines corresponding to a plurality of data storage systems that store data, causing the plurality of data engines to perform the database action based on the message. The plurality of data engines includes a mutable data engine corresponding to a mutable data storage system (e.g., a centralized database) and an immutable data engine corresponding to an immutable data storage system (e.g., a blockchain, a distributed ledger, or decentralized database). In some embodiments, the mutable data engine corresponds to at least one of an in-memory database, a relational database, a NoSQL database, and/or a timeseries database. In some embodiments, the immutable data engine corresponds to a blockchain or a distributed ledger. The system also tracks the action queue to determine an action performance speed of each of the plurality of data engines.
In some embodiments, the message comprises a request to achieve a specific level of consensus among the plurality of data engines. In some embodiments, the specific level of consensus may include a minimum number of data engines to return a same confirmation responsive to performing the action. In some embodiments, the specific level of consensus may include a minimum number and/or a mixture of immutable and mutable data engines to return a same confirmation responsive to performing the action.
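A minimal sketch of such a consensus-level check, assuming confirmations are tagged by engine kind (the parameter names and tuple format are assumptions for illustration):

```python
from collections import Counter

def consensus_reached(confirmations, min_total=2, min_immutable=0, min_mutable=0):
    """Check whether engine confirmations satisfy a requested consensus level.

    `confirmations` is a list of (engine_kind, result) tuples, where
    engine_kind is "mutable" or "immutable". Consensus requires at least
    `min_total` engines returning the same result, including at least
    `min_immutable` immutable and `min_mutable` mutable engines.
    Returns the agreed result, or None if no result meets the level.
    """
    by_result = {}
    for kind, result in confirmations:
        by_result.setdefault(result, []).append(kind)
    for result, kinds in by_result.items():
        counts = Counter(kinds)
        if (len(kinds) >= min_total
                and counts["immutable"] >= min_immutable
                and counts["mutable"] >= min_mutable):
            return result
    return None
```

This lets a single message request, e.g., "at least two engines, one of them immutable, must return the same confirmation" before the action is considered settled.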
Other aspects include components, devices, systems, improvements, methods, processes, applications, computer readable mediums having program code encoded thereon, and other technologies related to any of the above.
The disclosure will be understood more fully from the detailed description given below and from the accompanying figures of embodiments of the disclosure. The figures are used to provide knowledge and understanding of embodiments of the disclosure and do not limit the scope of the disclosure to these specific embodiments. Furthermore, the figures are not necessarily drawn to scale.
Embodiments described herein relate to a data distribution system or method configured to update and synchronize data across multiple immutable and mutable data storage systems. The data distribution system ensures synchronization and resilience among a diverse range of immutable and mutable storage systems, each with varying transaction per second (TPS) capabilities. By keeping all systems in sync, the platform guarantees the ability to respond with the latest state, irrespective of the status of any particular system. This approach enhances resilience by mitigating failures of individual systems and enables consistent access to up-to-date data.
Web3, also known as the decentralized web, represents the next phase in the evolution of the Internet. It is an emerging paradigm that aims to decentralize online platforms and services, fostering greater user control, privacy, and security. Unlike its predecessor, Web2, which is predominantly centralized and relies on intermediaries, Web3 seeks to empower individuals by enabling peer-to-peer interactions and removing the need for intermediaries.
While decentralized systems or blockchain technologies offer many advantages, they also have some limitations. For example, blockchain database systems face challenges in handling large volumes of transactions and scaling to meet the demands of a rapidly growing user base, while centralized databases can typically handle higher transaction throughput more efficiently. Further, due to the distributed nature of blockchains, and the consensus mechanisms and cryptographic computations involved in validation, transaction processing can be slower compared to centralized databases.
With the shift towards web3, there is a growing need for online systems to exhibit high scalability, measured by their ability to handle a significant number of transactions per second (TPS). There is often a trade-off between opting for larger, existing immutable systems and smaller, but faster ones, which may lead to compatibility issues and potential customer loss in the process.
The principles described herein solve the above-described problem by providing a data distribution system or method that enables updating and synchronizing data from multiple immutable and mutable data storage systems.
The data distribution system (hereinafter also referred to as the “system” or “data broker”) provides a variety of storage systems with different TPS capabilities. The data distribution system is configured to keep the variety of storage systems in sync, such that it is resilient to failures of individual systems and able to respond with the latest state regardless of the status of any individual system.
In some embodiments, the system includes a message broker, one or more database prosumer blocks, and one or more data engines, each of which corresponds to a mutable or an immutable database. A data prosumer is a component that both produces and consumes data. In some embodiments, the databases (corresponding to the data engines) include at least one in-memory database configured to store the current state, at least one NoSQL or timeseries database, and at least one immutable data storage. The database prosumer blocks include pieces of middleware that keep track of each engine's limitations and current state, such as the current TPS for a particular operation, the current state, and the number of transactions behind the current state. The message broker is configured to redirect requests to all the required database prosumer blocks, which keeps every data engine in sync.
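The bookkeeping kept by a prosumer block, and the broker's fan-out of a request to every prosumer, can be sketched as follows (a simplified model; the class and field names are assumptions, not the actual middleware):

```python
from dataclasses import dataclass, field

@dataclass
class Prosumer:
    """Per-engine bookkeeping kept by a database prosumer block."""
    engine_name: str
    current_tps: float = 0.0     # measured throughput for the engine
    last_state_id: int = 0       # last state the engine has applied
    inbox: list = field(default_factory=list)

def fan_out(message, prosumers):
    """The message broker redirects a request to every prosumer block,
    so each underlying data engine receives the same action."""
    for p in prosumers:
        p.inbox.append(message)

def transactions_behind(prosumer, prosumers):
    """How far one engine lags behind the most up-to-date engine
    (the one with the highest last-state ID)."""
    newest = max(p.last_state_id for p in prosumers)
    return newest - prosumer.last_state_id
```

An immutable engine with a lower TPS will typically show a positive `transactions_behind` value relative to an in-memory engine that has already applied more actions.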
In some embodiments, summarizing and/or compression techniques are also implemented to avoid lagging responsive to TPS exceeding a predefined number. In some embodiments, when a greater level of confidence is required, the system seeks and waits for consensus among several data engines.
In some embodiments, when a collection of tokens, e.g., non-fungible tokens (NFTs) or fungible tokens, is approved to be lazy minted, the system calls a mutable data engine only, causing the mutable data engine to mint the collection of tokens in a virtual wallet corresponding to a real wallet on blockchain. Responsive to receiving a call to retrieve contents in the real wallet, the system calls a read function to read contents in the real wallet. The system also calls the mutable database to see if a virtual wallet corresponding to the real wallet exists. If the virtual wallet exists, the contents in the virtual wallet are returned, but metadata is marked to show that the tokens in the virtual wallet are not really minted, but virtually minted. The contents from the real wallet and the virtual wallet are joined and returned. The system presents the contents to the user, indicating whether particular tokens are really minted on the blockchain or virtually minted by a mutable data engine.
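The join-and-mark step above can be sketched in a few lines; the metadata key and its values are assumptions chosen for illustration:

```python
def read_wallet(real_contents, virtual_contents):
    """Join contents of a real (on-chain) wallet and its corresponding
    virtual wallet, marking metadata so lazy-minted items are
    distinguishable from items actually minted on the blockchain."""
    merged = [dict(item, minted="on-chain") for item in real_contents]
    merged += [dict(item, minted="virtual") for item in virtual_contents]
    return merged
```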
In some embodiments, the database prosumers 140 include one or more data system wrappers 142 that extract content from messages in the action queue and transform it into a different form. In some embodiments, the data system wrappers 142 include software interfaces capable of communicating with the action queues 135 and provide direct connections with specific data engines 150 (which may be mutable or immutable). The data engines 150 correspond to underlying data storage systems configured to store the data. The data storage systems corresponding to the data engines 150 may include (but are not limited to) a relational database, a NoSQL database, an in-memory database, a timeseries database, a public blockchain, a private blockchain, etc. The data engines 150 may have varying TPSs. Typically, an immutable database has a lower TPS than a mutable database.
In some embodiments, the data engines 150 are ranked by their trustworthiness scores, such that if any discrepancy happens inside the system, the action recorded in a higher-ranked data engine controls. For example, if a first data engine with a first rank has recorded an action that is not present in a second data engine with a second rank that is higher than the first rank, the first data engine will subordinate itself to the second data engine.
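A minimal sketch of rank-based conflict resolution, assuming records are keyed by engine name and a higher rank number means more trusted (both assumptions for illustration):

```python
def resolve_discrepancy(records_by_engine, ranks):
    """When engines disagree, the record from the engine with the highest
    trustworthiness rank controls; all other engines subordinate to it."""
    best_engine = max(records_by_engine, key=lambda e: ranks[e])
    return records_by_engine[best_engine]
```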
Tracking queues 135 are used to get updates on the last action carried out by each of the database prosumers. Tasks carried out by tracking queues 135 include keeping track of updates to the database, measuring the current TPS, and/or keeping track of how far behind a specific data engine is from the most up-to-date data engine (e.g., the one with the highest value for the last state ID).
In some embodiments, each database prosumer in the system, whether mutable or immutable, is ranked according to its importance to the end application of the system. When a discrepancy arises among different engines, it is resolved by enforcing the state of the highest-ranking database onto the lower-ranking one.
In some embodiments, there may be more than one immutable database, each with different capabilities regarding TPS and uptime availability.
In some embodiments, there are several compressor and/or summarizer mechanisms, used depending on the pressure on the system, to virtually increase the TPS for a specific prosumer so that it keeps up with requests. The type of mechanism to use depends on whether only the final state is relevant, or whether the information on how the final state was achieved (e.g., the path) is also relevant.
In some embodiments, a lossless summary is performed in case the path is irrelevant. A lossless summary includes summarizing actions by writing only the final state. For example, the system aggregates transactions regarding several transfers to the same place into just one single transfer for the full amount. If a digital item is moved among several digital addresses without other actions, the system moves it directly to the final destination.
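The transfer-aggregation case can be sketched as follows (field names are assumptions for illustration); several transfers between the same source and destination collapse into one transfer for the full amount:

```python
def summarize_transfers(transfers):
    """Lossless summary when the path is irrelevant: collapse transfers
    sharing the same (source, destination) pair into a single transfer
    carrying the summed amount."""
    totals = {}
    for t in transfers:
        key = (t["src"], t["dst"])
        totals[key] = totals.get(key, 0) + t["amount"]
    return [{"src": s, "dst": d, "amount": a} for (s, d), a in totals.items()]
```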
In some embodiments, lossy compression is performed in case the path or the time between events is relevant. Lossy compression includes summarizing actions to keep up with a higher requirement of TPS, and the final state can still be used as the main action to be recorded. In some embodiments, lossy compression may require inclusion of a digitally signed data addendum including the path and time between events.
In some embodiments, several mutable databases can coexist, each with different features regarding TPS and uptime availability. For example, another mutable process is shown in steps 341 to 347, equivalent or similar to those from 331 to 337.
Reading can happen with or without requiring consensus. When consensus is required, the system may expect a confirmation from an immutable database up to a certain state ID, which corresponds to a time when a specific action took place, but not necessarily the very last state ID. For instance, if confirmation is expected from an immutable database for a specific operation that took place in the past, the system may need to wait until said immutable database has arrived at that state where the changes in question are observed regardless of the very last known state.
Given the degree of consensus necessary, the message routing agent 402 may pass the request over to a specific set of mutable and immutable databases to fulfill the request. The particular reading process may be the same when the database is mutable or immutable. As illustrated, reading process 410 is performed by a first database (which may be mutable or immutable), and reading process 420 is performed by a second database (which may be mutable or immutable).
In case the database has arrived at the desired state, the specific reading task is sent to the wrapper 414, which will retry 416 until it gets a response 415 and the result is sent to the consensus process 450. The same process is repeated for any other data engines requested by the read action, e.g., 421 to 425. After enough responses 415, 425 have been gathered to fulfill the request, the consensus process 450 is started.
The consensus process 450 takes all the answers from individual databases and matches these answers against each other, looking for inconsistencies. In case an inconsistency is found, the consensus mechanism will seek to resolve it in favor of a higher-ranking database. When only the latest known state is required without any consensus, the system will just return the result from the database with the latest state ID.
Traditionally, a consumer is an automatic process or a job scheduler (e.g., CRON) that is constantly listening to a specific queue. In some cloud platforms, creation of a traditional automatic process is expensive due to the number of requests it generates. Unlike the traditional approach, the embodiments described herein use a message queue manager, which works in a similar way and costs much less than a traditional automatic process. Additionally, the message queue manager does not have to wait to read the received messages.
In some cases, the query time of a message queue manager may exceed one second when memory is insufficient, causing the request to time out. To solve this problem, the configuration of the consuming serverless compute service may be modified by increasing the memory and the timeout, significantly decreasing the processing time for each transaction.
In some embodiments, multiple events are allocated in a single serverless service, allowing the single serverless service to be connected to multiple queues. The serverless service is configured to dynamically retrieve information received from each queue and process the retrieved information.
In some embodiments, the system is able to generate parallel processing with multiple queues. In some embodiments, all requests received in a single queue are processed sequentially, which may result in a delay in processing. In some embodiments, one queue per method (e.g., POST, GET, PATCH, DELETE) per module (user, fungibles, non-fungibles, marketplace) is created and implemented to receive transactions in parallel and avoid waiting times.
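Enumerating one queue per method per module is straightforward; the naming scheme below (`module-method`) is an assumption for illustration:

```python
def build_queue_names(methods, modules):
    """One queue per method per module, so requests are consumed in
    parallel instead of serializing behind a single queue."""
    return [f"{module}-{method.lower()}"
            for module in modules
            for method in methods]
```

With four methods and four modules, this yields sixteen independent queues, each of which can be drained by its own consumer.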
In some embodiments, multiple actions are integrated into a single action to speed up processing in a database that has a lower performance speed. It may be detected that, for the mining and transfer of consumables, transactions can be compressed (e.g., by adding up the amounts for a same user), such that processing speed is improved.
In some embodiments, the system described herein is configured to provide an answer in less than a second without relying on a blockchain. The processing times in blockchain are often high. The principles described herein implement a mutable database that simulates the functionality of a blockchain, yet provides an instant response. At the same time, the mutable database sends the request in the background to the blockchain without waiting for a result from the blockchain.
In some embodiments, transactions are executed in a main database, which generates a unique identifier for each transaction. The transactions are replicated in a secondary database, which is a blockchain. The blockchain generates an identifier that is related to the identifier of the primary database. This allows the primary database and secondary database to be in sync and the processing in the secondary database may be executed in the background.
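A minimal sketch of the identifier relationship between the primary database and the blockchain (class and method names are assumptions; a real system would persist this mapping rather than hold it in memory):

```python
import uuid

class IdRegistry:
    """Relate the unique identifier generated by the primary (mutable)
    database to the identifier later produced by the secondary database
    (the blockchain), so both can be kept in sync while replication
    runs in the background."""
    def __init__(self):
        self.primary_to_secondary = {}

    def record_primary(self):
        """A transaction executes in the primary database first."""
        primary_id = str(uuid.uuid4())
        self.primary_to_secondary[primary_id] = None  # chain ID not known yet
        return primary_id

    def record_secondary(self, primary_id, chain_tx_id):
        """The blockchain's identifier is related back to the primary ID."""
        self.primary_to_secondary[primary_id] = chain_tx_id

    def pending(self):
        """Transactions still awaiting background replication on-chain."""
        return [p for p, s in self.primary_to_secondary.items() if s is None]
```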
Multiple development environments, including blockchains, centralized databases, and a message queue manager are configured to be interconnected to provide the system described herein.
Transaction tracking queues are generated to identify a number of transactions executed in each database. Lazy minting may be used to modify the transaction tracking. A consensus module may be implemented with a register of unique identifiers registered by the databases.
The system described herein also prevents transactions from executing in an incorrect order with multiprocessing in blockchain. A retry system of up to a maximum number of times may be implemented for when a transaction is to be executed. The maximum number may also be determined based on whether an error is identified and whether the retry is due to the detected error.
In some embodiments, multiprocessing is implemented at the blockchain, similarly as in the primary database, to avoid accumulating transactions. The multiprocessing takes into account a waiting time for when a transaction is to be executed, depending on another transaction that has not yet been executed, or whether there is an error or a retry pending for the other transaction.
In some embodiments, the system described herein does not remove a message from a queue until it has been processed, to avoid loss of information. In some embodiments, messages or transactions are processed from the replication queues by consumers. In case there is an error, a retry up to a maximum number may be performed. If there is still an error, the message may be removed from the queue and sent to a collection to be reviewed later. If there is no response from the endpoint to which the request is sent, the message is saved and re-sent until a response is received.
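The retry-then-park behavior can be sketched as below (function and parameter names are assumptions; returning True stands in for acknowledging the message so the queue removes it):

```python
def consume(message, handler, review_collection, max_retries=3):
    """Process a message without losing it: retry on error up to a maximum,
    then move the message to a collection for later review. The message is
    acknowledged (True) only once it is handled or parked for review."""
    for attempt in range(max_retries):
        try:
            handler(message)
            return True                  # processed: safe to remove from queue
        except Exception:
            continue                     # error: retry
    review_collection.append(message)    # still failing: park for review
    return True
```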
The principles described herein provide a data distribution platform that allows entities to create their own data distribution systems based on the needs of their own applications.
As illustrated, the data broker 500A also includes a data system wrapper 520A, which corresponds to data system wrapper 142 of
When a response or confirmation is obtained from one or more data engines 530A, the responses from different data engines 530A are compared to determine whether there is consensus. It also matters whether the response includes an OK status code (e.g., a 200 status code) indicating that the request was successful, although the meaning of success depends on the request method used. For example, for a GET request, a 200 status code means that the requested resource has been fetched and transmitted in the message body. For a POST request, a 200 status code means that a description of the result of the action is transmitted in the message body. When there is no consensus and no 200 status code, a negative result is returned to a translator 502. When there is no consensus but a 200 status code, a tracking queue is updated, transaction history is saved, and replication is performed. When there are both consensus and a 200 status code, the tracking queue is updated, transaction history is saved, and replication is performed. In some embodiments, the consensus is recorded.
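The decision logic above reduces to a small table; the sketch below covers only the combinations described (keys and return values are assumptions for illustration):

```python
def handle_responses(has_consensus, status_ok):
    """Post-comparison decision logic: only the no-consensus / non-OK case
    returns a negative result; otherwise the tracking queue is updated,
    history is saved, and replication proceeds. Consensus, when present,
    is additionally recorded."""
    if not has_consensus and not status_ok:
        return {"result": "negative"}
    outcome = {"result": "ok",
               "update_tracking_queue": True,
               "save_history": True,
               "replicate": True}
    if has_consensus:
        outcome["record_consensus"] = True
    return outcome
```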
Generally, processing times in blockchain are high. To achieve an answer in less than a second without relying on the blockchain, the data distribution system further includes an intermediate database (that is a mutable database) that simulates the functionality of the blockchain giving an instant response and sending the request in the background to the blockchain without waiting for a result.
In some embodiments, the intermediate database is a main database, and the blockchain is a secondary database. Transactions are executed in the intermediate database, which generates a unique identifier for each transaction. The transactions are replicated in the secondary database, and the generated identifier is related to the identifier of the primary database. This allows the primary and secondary databases to be kept in sync while the processing is executed in the background. The multiple development environments, such as the blockchain, intermediate database, and MQ, are connected to each other. A configuration of environments is generated to indicate the relationship and connection parameters of each of them.
Users can enter the broker name. Generally, personally identifiable information or other confidential or sensitive information should not be included in broker names. Broker names may be accessible to other client services, including cloud monitor logs. Broker names are not intended to be used for private or sensitive data.
In some embodiments, on a configuration setting page, in the message broker access section, the user can provide a username and a password. Certain restrictions may be applied to broker usernames and passwords. For example, a username may be required to contain only alphanumeric characters, dashes, periods, and underscores (- . _). This value must not contain any tilde (˜) characters. Amazon MQ prohibits using guest as a username. The password may be required to be at least a threshold number of characters long, contain at least a threshold number of unique characters, and not contain certain special characters.
After username and password are entered, a review and create page may be presented, and the user can review their selections and edit them as needed. Alternatively, the user can choose to create the broker. Creating a broker may take a few minutes. During the time the broker is being created, the system may display a creation in progress status. After the broker is created, the system may show a running status.
In some embodiments, a remote procedure call (RPC) service may be used to create a client class, which exposes a method named call that sends an RPC request and blocks until an answer is received. Example code for a client interface of an RPC service is shown below.
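The original example code is not reproduced in this text. As an illustrative stand-in, the sketch below shows the same pattern: a client class exposing a `call` method that sends a request and blocks until the answer is received. An in-memory queue pair and a toy server thread stand in for a real message broker, so the correlation-id / callback-queue mechanics are visible without any broker dependency; all names here are assumptions.

```python
import queue
import threading
import uuid

class RpcClient:
    """RPC client sketch: `call` sends a request tagged with a correlation
    ID and a callback queue, then blocks until the matching answer arrives."""
    def __init__(self, request_queue):
        self.request_queue = request_queue
        self.callback_queue = queue.Queue()   # exclusive reply queue

    def call(self, payload):
        corr_id = str(uuid.uuid4())
        # send the callback-queue address and correlation ID with the request
        self.request_queue.put({"reply_to": self.callback_queue,
                                "correlation_id": corr_id,
                                "body": payload})
        while True:                           # block until *our* answer arrives
            response = self.callback_queue.get()
            if response["correlation_id"] == corr_id:
                return response["body"]

def echo_server(request_queue):
    """Toy server: answers one request on its callback queue."""
    msg = request_queue.get()
    msg["reply_to"].put({"correlation_id": msg["correlation_id"],
                         "body": {"echo": msg["body"]}})
```

In a real deployment the request and callback queues would live in the message broker, and the client would publish with the message properties described below rather than passing a queue object directly.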
In general, a client sends a request message and a server replies with a response message. In order to receive a response, the client needs to send a “callback” queue address with the request. Example pseudo code for a call queue is shown below.
In some embodiments, delivery_mode property is used to mark a message as persistent (e.g., with a value of 2) or transient (e.g., with any other value). In some embodiments, content_type property is used to describe the mime-type of the encoding. For example, for JSON encoding, it is advantageous to set this property to: application/json. In some embodiments, reply_to property is used to name a callback queue. In some embodiments, correlation_id is useful to correlate RPC responses with requests.
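The four properties above can be assembled as in the sketch below (a plain dict; a real AMQP client such as pika wraps the same fields in a BasicProperties object — the helper name is an assumption):

```python
import uuid

def make_properties(callback_queue_name, persistent=True):
    """Build the message properties described above."""
    return {
        "delivery_mode": 2 if persistent else 1,   # 2 marks the message persistent
        "content_type": "application/json",        # mime-type of the encoding
        "reply_to": callback_queue_name,           # names the callback queue
        "correlation_id": str(uuid.uuid4()),       # ties responses to requests
    }
```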
A serverless compute service (e.g., AWS® Lambda) can connect to and consume messages from the message broker. When a user connects a broker to a serverless compute service, an event source mapping is created. The event source mapping reads messages from a queue and invokes the function synchronously. The event source mapping reads messages from the message broker in batches and converts them into a payload (e.g., a Lambda payload) in the form of a data object (e.g., a JSON object).
In some embodiments, a configuration file (e.g., a template YAML file) is used for the creation of the serverless compute services. The configuration file indicates the configuration of each one of the compute services. In a same serverless compute service, multiple events are indicated that will function as a consumer to the queue indicated in the “Queues” property.
In some embodiments, a primary consumer lambda is used to execute a transaction in a primary database. The primary consumer lambda includes a method defined as def store_DB_ID(ID, exchange, resource, action):
Input parameters include ID, exchange, resource, and action. ID is the primary ID to store in the identifiers collection. Exchange identifies the exchange in which to store the ID. Resource indicates the module to which the transaction belongs (e.g., users, nonfungibles, etc.). Action indicates the action to be executed.
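A minimal sketch of how such a method might behave is shown below; the body is an assumption (the original implementation is not given), and the extra `identifiers` parameter stands in for the real collection handle.

```python
def store_DB_ID(ID, exchange, resource, action, identifiers=None):
    """Store a primary transaction ID in an identifiers collection,
    recording the exchange, module (resource), and action alongside it."""
    if identifiers is None:
        identifiers = []
    identifiers.append({"id": ID, "exchange": exchange,
                        "resource": resource, "action": action})
    return identifiers
```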
An example YAML description for primary consumer in accordance with some embodiments is shown below.
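The referenced YAML description is not reproduced in this text. As an illustrative sketch only, an AWS SAM-style declaration of such a consumer might look like the following, where the function name, handler path, ARNs, and queue name are all hypothetical:

```yaml
PrimaryConsumerFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: primary_consumer.handler
    Runtime: python3.9
    MemorySize: 512        # raised to avoid queue-read timeouts
    Timeout: 30
    Events:
      PostUsersEvent:
        Type: MQ           # event source mapping reading from the broker
        Properties:
          Broker: arn:aws:mq:us-east-1:123456789012:broker:data-broker:b-0000
          Queues:
            - users-post   # queue this consumer listens to
          SourceAccessConfigurations:
            - Type: BASIC_AUTH
              URI: arn:aws:secretsmanager:us-east-1:123456789012:secret:mq-creds
```

Multiple events under the same function, each naming a different queue in its "Queues" property, let one serverless service consume from several queues as described above.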
In some embodiments, a consumer replicator lambda is used to replicate transaction in a secondary database, obtaining the transaction from a message queue for further processing. An example YAML description for consumer replicator in accordance with some embodiments is shown below.
In some embodiments, a notification lambda is used to get messages that require a transaction identifier to be recalled. An example YAML description for consumer notification in accordance with some embodiments is shown below.
In some embodiments, an action is associated with a token, e.g., an NFT or a fungible token, on a blockchain. At a later stage of purchasing a token, a step called lazy minting is performed. Minting a token includes creating a unique token on the blockchain. Only after minting does the token become a digital collectible stored on the blockchain. When a data broker includes both an immutable database (e.g., a blockchain) that is used to host tokens, and a mutable database that is used as a main system, a new lazy minting process is implemented. In the data broker, lazy minting may be used to modify the transaction tracker to generate or track consensus between multiple databases.
Referring to
When a call to transfer 1330 a particular item to a real wallet is received, the system calls 1332 a corresponding minting function to mint the particular item. The real minted item is transferred 1334 to a particular destination. If the transfer is successful, the system deletes 1336 that item from the virtual wallets, and returns 1338 a successful transfer message.
Referring to
When a call to buy 1370 a particular item is received, the system intercepts 1372 the call and checks whether the sale could be successful. If the sale could be successful, the item at the LPS is cancelled 1374. The system calls 1376 a corresponding minting function. After successfully minting the item, the item is deleted 1378 from the virtual wallet. The real item is put 1380 for sale in a normal LPS that corresponds to the virtual one. The endpoint for buying is called 1382 again but for the real UUID. Again, the backend sees 1384 no difference in the process.
Responsive to receiving the buyer's indication, a purchase table for a pre-purchase is created 1516. Responsive to creating the purchase table for the pre-purchase, mutable database 1500A causes a blockchain to mint 1520 the NFT, and puts 1522 the NFT for sale at an LPM under the same conditions as in mutable database 1500A. If an LPM with the specified rakes and fees does not exist, the immutable database 1500B creates 1524 one. The immutable database 1500B calls 1526 a purchase. If the purchase succeeds, the NFT is sent 1528 to the user's wallet, and the immutable database 1500B causes the mutable database 1500A to update 1518 the purchase. If the purchase fails, the immutable database 1500B burns 1530 the NFT to avoid reporting, and clears the UUID in the data broker's database.
Note that while the above-described NFT transaction is a sales transaction or a gift transaction, embodiments described herein are not limited to sales of NFTs. For example, in some embodiments, a transaction associated with an NFT may be a lease transaction, an in-kind transaction, or any other transaction the parties agree upon. Further, the above-described principles are also applicable to fungible tokens, such as cryptocurrencies.
Many of the systems and subsystems described herein (such as, but not limited to, data brokers, data engines, mutable database systems, immutable database systems, user client devices, cloud database systems) include one or more computer systems.
The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 1600 includes a processing device 1602, a main memory 1604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 1606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1618, which communicate with each other via a bus 1630.
Processing device 1602 represents one or more processors such as a microprocessor, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 1602 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 1602 may be configured to execute instructions 1626 for performing the operations and steps described herein.
The computer system 1600 may further include a network interface device 1608 to communicate over the network 1620. The computer system 1600 also may include a video display unit 1610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1612 (e.g., a keyboard), a cursor control device 1614 (e.g., a mouse), a graphics processing unit 1622, a signal generation device 1616 (e.g., a speaker), a video processing unit 1628, and an audio processing unit 1632.
The data storage device 1618 may include a machine-readable storage medium 1624 (also known as a non-transitory computer-readable medium) on which is stored one or more sets of instructions 1626 or software embodying any one or more of the methodologies or functions described herein. The instructions 1626 may also reside, completely or at least partially, within the main memory 1604 and/or within the processing device 1602 during execution thereof by the computer system 1600, the main memory 1604 and the processing device 1602 also constituting machine-readable storage media.
In some implementations, the instructions 1626 include instructions to implement functionality corresponding to the present disclosure. While the machine-readable storage medium 1624 is shown in an example implementation to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine and the processing device 1602 to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm may be a sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Such quantities may take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. Such signals may be referred to as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the present disclosure, it is appreciated that throughout the description, certain terms refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage devices.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may include a computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various other systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.
In the foregoing disclosure, implementations of the disclosure have been described with reference to specific example implementations thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of implementations of the disclosure as set forth in the following claims. Where the disclosure refers to some elements in the singular form, more than one element can be depicted in the figures, and like elements are labeled with like numerals. The disclosure and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
This application claims priority to U.S. Provisional Patent Application Ser. No. 63/358,487, titled “High Efficiency Method to Update and Synchronize Data from A Plurality of Immutable and Mutable Data Storage Systems,” filed Jul. 5, 2022, which is incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
63358487 | Jul 2022 | US