Databases typically incorporate indexes to enable efficient retrieval of certain information. A B-tree is a popular indexing data structure that is optimized for use in a database that reads and writes large blocks of data and that enables efficient database searching. A B-tree data structure includes a root and a plurality of leaves. The root uses a different key value to identify each leaf. Each leaf points to the records that contain the key value. The key values are sorted in order to form a sorted list. Specifically, a given leaf includes a “left sibling” (the next leaf to the left) and a “right sibling” (the next leaf to the right) in the sorted order. The first or left-most leaf and the last or right-most leaf include entries denoting the ends of the list of leaves for that root.
Typically, each leaf has a fixed memory size. As more data is added to the database, the leaf grows in size until it reaches a size threshold, at which point the leaf is split into new left and right leaves at a particular key value. The left leaf receives values that are less than the key value and the right leaf receives the remaining values with appropriate modifications to the root.
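For illustration only, the following sketch shows one way such a leaf split could be coded. The Leaf class and split_leaf function are hypothetical and are not the implementation described in this disclosure; they simply mirror the rule above, in which entries with keys below the split key value stay in the left leaf and the remaining entries move to the right leaf.

```python
# Hypothetical sketch of a B-tree leaf split at a key value; not the
# implementation described in this disclosure.
from bisect import bisect_left
from dataclasses import dataclass, field

@dataclass
class Leaf:
    entries: list = field(default_factory=list)  # sorted (key, record) pairs

def split_leaf(leaf: Leaf, split_key) -> tuple:
    """Split a sorted leaf: keys < split_key stay left, the rest go right."""
    keys = [key for key, _ in leaf.entries]
    cut = bisect_left(keys, split_key)
    return Leaf(leaf.entries[:cut]), Leaf(leaf.entries[cut:])

# Example: splitting at key 30 keeps keys 10 and 20 in the left leaf.
full = Leaf([(10, "a"), (20, "b"), (30, "c"), (40, "d"), (50, "e")])
left, right = split_leaf(full, 30)
print(left.entries, right.entries)
```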
In centrally based and non-shared databases, the splitting process is efficient because generally there is only one copy of the index in the database system. The split is easy to effect by quiescing the data processing system during the actual splitting operation. In a distributed database with many copies of the index, each copy of the index should be split to maintain accuracy, completeness, and data integrity. Unfortunately, splitting multiple copies of the same index can cause a race condition that leads to an erroneous or inconsistent split.
In order to assure consistency following the split of a given index in a node, some existing approaches implement locks. A lock is applied to individual pages or records while the index is being split. The lock prevents additional data from being added or removed from the database until after the index has been split. However, locking a database during an index split is not a scalable approach. Locking can also increase the latency associated with adding information to the database.
Embodiments of the present technology include methods of splitting a first index atom in a plurality of atoms in a distributed database. The distributed database includes a plurality of nodes. Each node in the plurality of nodes comprises a corresponding processor and a corresponding memory. One node in the plurality of nodes is designated as a chairman and includes a chairman's copy of the first index atom. An example method comprises splitting the chairman's copy of the first index atom by the chairman. The chairman's copy of the first index atom represents data and/or metadata stored in the distributed database. The chairman transmits instructions to split respective copies of the first index atom to the other nodes in the plurality of nodes. The respective copies of the first index atom in other nodes are replicas of the chairman's copy of the first index atom. A first node in the plurality of nodes splits a first copy of the first index atom into a first copy of a source atom and a first copy of a target atom. The first node transmits an acknowledgement indicating that the first copy of the first index atom has been split. The acknowledgement is transmitted to the chairman and to each other node in the plurality of nodes.
In some cases, the chairman splits the first copy of the first index atom in response to a request from another node in the plurality of nodes. The method also comprises forwarding a message from the first copy of the source atom to the first copy of the target atom at the first node. In some cases, transmitting the acknowledgement from the first node to the chairman and to each other node in the plurality of nodes can occur after the first copy of the source atom forwards the message to the first copy of the target atom.
Another embodiment includes a method of splitting an index atom in a plurality of atoms in a distributed database. Again, the distributed database includes a plurality of nodes, each of which comprises a corresponding processor and a corresponding memory. One of these nodes is designated as a chairman for the index atom and includes a chairman's instance of the index atom, which represents data and/or metadata stored in the distributed database. The method includes splitting, by the chairman, the chairman's instance of the index atom. The chairman transmits the instructions to split the index atom to at least a subset of the nodes. Each node in the subset includes a corresponding instance of the index atom. A first node in the subset splits its (first) instance of the index atom into a first instance of a source atom and a first instance of a target atom. The first node also re-transmits the instructions to split the index atom to each other node in the subset. And the first node transmits, to the chairman, an acknowledgement indicating that the first instance of the index atom has been split. The chairman transmits a message indicating the index atom has been split to the subset of nodes.
Yet another embodiment includes a method of splitting an index atom in a plurality of atoms in a distributed database that includes a plurality of nodes, each of which comprises a corresponding processor and a corresponding memory. In this method, one of the nodes splits a local instance of the index atom into a local instance of a source atom and a local instance of a target atom. The local instance of the source atom includes values less than a split key value and the local instance of the target atom includes values greater than the split key value. The node receives a message referring to a key value greater than the split key value on the local instance of the source atom. And the node forwards the message from the local instance of the source atom to the local instance of the target atom.
It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
Other systems, processes, and features will become apparent to those skilled in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, processes, and features be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
The skilled artisan will understand that the drawings primarily are for illustrative purposes and are not intended to limit the scope of the inventive subject matter described herein. The drawings are not necessarily to scale; in some instances, various aspects of the inventive subject matter disclosed herein may be shown exaggerated or enlarged in the drawings to facilitate an understanding of different features. In the drawings, like reference characters generally refer to like features (e.g., functionally similar and/or structurally similar elements).
Embodiments described herein generally relate to distributed databases and more particularly to splitting indexes in distributed databases. The systems and processes disclosed herein use a two-stage index splitting process to address problems associated with maintaining correctness while splitting many copies of the same index in a distributed database without locking the distributed database during the splitting process. During the first stage of the index splitting process, the nodes in the distributed database with the index atom split the index atom into a source atom and a target atom. And during the second stage of the index splitting process, the nodes with the index atom flush the messages being forwarded from the source atom to the target atom. This two-stage splitting process makes it easier to maintain correctness, concurrency, and consistency across the distributed database if data is being inserted while the index atom is being split.
Distributed Databases
Each node in
Transactional Nodes
At any given time, the transactional node 32 contains only those portions of the database that are then relevant to user applications active on the transactional node 32. Moreover, the portions of the distributed database in use at a given time at the transactional node 32 reside in the memory 38. There is no need for supplementary storage, such as disk storage, at the transactional node 32 during the operation of this system.
Atoms
In this system, the classes/objects set 42 is divided into a subset 43 of atom classes, a subset 44 of message classes, and a subset 45 of helper classes. Each atom class 43 in
Each time a copy of an atom is changed in any transactional node, the copy of the atom receives a new change number. Element 76E records that change number. Whenever a node requests an atom from another node, there is an interval during which the requesting node may not be known to the other transactional nodes. Element 76F is a list of all the nodes to which the supplying node relays messages that contain the atom until the request is completed.
Operations of the database system are also divided into cycles. A cycle reference element 76G provides the cycle number of the last access to the atom. Element 76H is a list of all the active nodes that contain the atom. Element 76I includes several status indicators. Element 76J contains a binary tree of index nodes to provide a conventional indexing function. Element 76K contains an index level.
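For illustration only, the sketch below gathers these bookkeeping elements into a single structure. The field names and class are hypothetical and are modeled on elements 76E through 76K described above; they are not taken from the disclosed implementation.

```python
# Illustrative sketch of the per-atom bookkeeping fields described above.
# All names are hypothetical; element numbers refer to the description.
from dataclasses import dataclass, field

@dataclass
class IndexAtomState:
    change_number: int = 0          # element 76E: bumped on every local change
    pending_relay_nodes: list = field(default_factory=list)  # element 76F
    last_access_cycle: int = 0      # element 76G
    active_nodes: list = field(default_factory=list)         # element 76H
    status_flags: set = field(default_factory=set)           # element 76I
    index_tree: dict = field(default_factory=dict)           # element 76J
    index_level: int = 0            # element 76K

    def record_change(self) -> None:
        """Give this local copy a new change number after any modification."""
        self.change_number += 1
```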
Chairmen
When a transactional node in the distributed database creates a new atom, that transactional node is designated as the new atom's chairman. Each atom can have a different chairman, and a given node can be the chairman for more than one atom. As the new atom's chairman, the transactional node establishes and maintains an ordered list of other nodes in the distributed database with copies of the new atom. The order of this list is as follows: first the chairman, then any transactional nodes with the new atom, and then any archival nodes with the new atom.
When the transactional node creates the new atom, it is the first and only entry in the ordered list. As other nodes obtain copies of the new atom, they are added to the ordered list. Each transactional node with a copy of the new atom also keeps a copy of the ordered list. If the chairman becomes inactive for any reason, the next transactional node on the ordered list becomes the chairman. If there are no transactional nodes on the ordered list, the first non-synchronizing archival node becomes the chairman.
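As a hypothetical illustration of this bookkeeping (the class and method names below are assumptions, not the disclosed implementation), the ordered list and chairman succession might be maintained as follows:

```python
# Hypothetical sketch of the ordered list that determines chairmanship.
# Order: the creating (chairman) node first, then other nodes as they
# obtain copies of the atom.

class ChairmanList:
    def __init__(self, creator_node_id: str):
        # The creating node starts as the only entry, and so is the chairman.
        self.ordered_nodes = [creator_node_id]

    def add_copy_holder(self, node_id: str) -> None:
        """A node that obtains a copy of the atom is appended to the list."""
        if node_id not in self.ordered_nodes:
            self.ordered_nodes.append(node_id)

    def chairman(self) -> str:
        return self.ordered_nodes[0]

    def handle_failure(self, failed_node_id: str) -> str:
        """If the chairman becomes inactive, the next node on the ordered
        list becomes the chairman."""
        if failed_node_id in self.ordered_nodes:
            self.ordered_nodes.remove(failed_node_id)
        return self.chairman()
```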
Messaging Among Nodes
The nodes transfer atoms and information about atoms via asynchronous messages to maintain the distributed database in a consistent and concurrent state. As mentioned above, each node in the distributed database can communicate with every other node in the distributed database. When one node generates a message involving a specific atom, it can transmit or broadcast that message to the other nodes with replicas of that specific atom. Each node generates these messages independently of other nodes. It is possible that, at any given instant, multiple nodes may contain copies of a given atom and different nodes may be at various stages of processing those messages.
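A minimal, hypothetical sketch of this replica-directed, asynchronous messaging is shown below; the Node class, its fields, and its transport are assumptions introduced only to illustrate the idea of broadcasting a message about an atom to the nodes that hold replicas of that atom.

```python
# Hypothetical sketch: a node broadcasts a message about a specific atom only
# to the nodes known to hold replicas of that atom, asynchronously.
import asyncio
from collections import defaultdict

class Node:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.replica_holders = defaultdict(set)  # atom_id -> node ids with a copy
        self.peers = {}                           # node_id -> Node

    async def send(self, peer_id: str, message: dict) -> None:
        # Stand-in for the real transport; delivers in-process for the sketch.
        await self.peers[peer_id].receive(message)

    async def receive(self, message: dict) -> None:
        print(f"{self.node_id} received {message}")

    async def broadcast_about_atom(self, atom_id: str, message: dict) -> None:
        """Send the message to every other node holding a replica of atom_id."""
        targets = self.replica_holders[atom_id] - {self.node_id}
        await asyncio.gather(*(self.send(t, {"atom": atom_id, **message})
                               for t in targets))
```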
Data Integrity During Index Splitting
As mentioned above, distributed databases suffer from data integrity problems that do not affect other types of databases. Many of these data integrity problems arise from the need to maintain consistency across the nodes containing instances (copies) of a given atom (piece of data or metadata). If the data is not consistent across all nodes, then two nodes could supply different answers to the same query.
When the atom is split, the nodes conventionally rebroadcast messages about the split to other nodes in the database. Unfortunately, rebroadcasts can lead to multiple scenarios that result in transient consistency violations. If the chairman fails during the split, those inconsistencies could become permanent, or at least persist until an atom with incorrect data is dropped. These problems include incorrect references to a target atom on a node that has yet to split its instance of the index atom, which can cause consistency problems or crashes. If the references to the target atom are never updated, the distributed database may enter an infinite loop in a backward scan (while holding a cycle lock). In addition, it is possible to miss a split message while fetching an object from a node before that node has split, if the node originating the split fails before sending any final messages about the split.
The splitter node 620 responds to the insertion message 601 by splitting the index atom into a source atom and a target atom, with entries equal to or less than the split key value in the source atom and entries greater than the split key value in the target atom. The splitter node 620 also rebroadcasts (at 602) the insertion message to the reader node 630, which responds to the rebroadcast by updating its instance of a root atom that refers to the index atom to show that the index atom has been split. But if the reader node 630 receives a commit transaction message 603 before it receives the rebroadcast 602, it may return potentially incorrect information during a period 604 between the arrival of the commit transaction message 603 and the arrival of the rebroadcast 602. And if the splitter node 620 fails before sending the rebroadcast 602, the reader node 630 may never learn about the split, leaving the distributed database inconsistent and possibly incorrect.
Maintaining Correctness During Index Splitting
Although the process 700 in
Other potential problems associated with the prior process 700 include “chairmanship pileup” and the difficulty of exhaustive testing. A chairmanship pileup occurs in the prior process 700 because the root chairman for the index atom orchestrates the split. As a result, the root chairman becomes the chairman for the new atoms created during the split; in other words, the new atoms “pile up” on the root chairman, leaving the distributed database more vulnerable if the root chairman fails.
Exhaustive testing becomes difficult when considering a state machine on a given node and the events that move this state machine from state to state. For exhaustive testing, each valid state/event pair should be verified. Since a given state is composed of four atoms (each in several states itself), the number of unit tests for exhaustive testing becomes prohibitive.
Exhaustively testing a particular system typically involves generating a set of valid state/event pairs and then generating a test for each pair. For illustration, consider a system that can have two states A and B and two possible events X and Y. This gives four state/event pairs—here, AX, AY, BX and BY—each of which should be tested. The number of tests is the Cartesian product of events and states.
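For illustration only, the short sketch below enumerates this Cartesian product and produces one test case per state/event pair; the names are hypothetical.

```python
# Illustrative sketch: exhaustive testing enumerates the Cartesian product of
# states and events and generates one test per state/event pair.
from itertools import product

states = ["A", "B"]
events = ["X", "Y"]

test_cases = list(product(states, events))
# -> [('A', 'X'), ('A', 'Y'), ('B', 'X'), ('B', 'Y')], i.e., four tests

for state, event in test_cases:
    # Each pair would drive the state machine from `state` with `event`
    # and check the resulting state and outputs.
    print(f"test: state={state}, event={event}")
```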
In an example distributed database, the state of the system is defined by the state of the relevant atoms. In the process 700 in
In the first stage of the process 750, the nodes split the index atom into a source index atom, or source, and a target index atom, or target. The process begins when the chairman 711 of the index atom determines that the index atom should be split, e.g., in response to an attempt to insert a value into its instance of the index atom. If the chairman 711 determines that the index atom should be split, it selects a key value for the split. This key value indicates which records will stay in the original source index atom and which records will be transferred to the new target index atom created by the split.
The chairman 711 splits its copy of the index atom at the key value to create its own copies of the source index atom and target index atom. It also broadcasts an “execute split” message 741 to the other nodes 720, 730a in the distributed database with instances of the index atom. In response to receiving the “execute split” message 741 from the chairman, each of these other nodes 720, 730a splits its own copy of the index atom at the key value to create its own copies of the source index atom and target index atom. Unlike in other index splitting processes, each of these nodes also re-transmits the “execute split” message 742 to the other nodes with the index atom, including the chairman 711. Once the other nodes 720, 730a have received “execute split” messages 742 from every possible source and have split their own instances of the index atom, they transmit a “split applied” message 743 to the chairman 711. The chairman 711 then broadcasts a “split done” message 744 to the nodes 720, 730a with the split index atom and to other nodes affected by the split, including nodes with root atoms that point to the split index atom (e.g., transactional node 730b). This completes the index splitting process 740 in
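A hedged sketch of how a non-chairman node might handle this first stage appears below. The message names follow the description above; the class, its fields, and its transport are assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of a non-chairman node's first-stage behavior: on an
# "execute split" message, split the local copy of the index atom, re-transmit
# the message to the other nodes with the atom (including the chairman), and
# send "split applied" to the chairman once the message has arrived from every
# possible source.

class SplitParticipant:
    def __init__(self, node_id, nodes_with_atom, chairman_id, send):
        self.node_id = node_id
        self.chairman_id = chairman_id
        self.send = send                                   # send(peer_id, message)
        self.expected_sources = set(nodes_with_atom) - {node_id}
        self.seen_execute_split_from = set()
        self.split_applied_locally = False
        self.split_applied_reported = False

    def on_execute_split(self, sender_id, split_key, local_index_atom):
        self.seen_execute_split_from.add(sender_id)
        if not self.split_applied_locally:
            # Split the local copy into source and target at the split key.
            local_index_atom.split_at(split_key)
            self.split_applied_locally = True
            # Re-transmit "execute split" to the other nodes with the atom.
            for peer in self.expected_sources:
                self.send(peer, {"type": "execute split", "key": split_key})
        # Report "split applied" once the message has been received from
        # every possible source and the local split is done.
        if (not self.split_applied_reported
                and self.seen_execute_split_from >= self.expected_sources):
            self.split_applied_reported = True
            self.send(self.chairman_id, {"type": "split applied"})
```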
As explained below, the source index atoms forward messages to the target index atoms during a portion of the splitting process 740. To ensure that these messages are forwarded correctly, each node containing a copy of the index atom (including the chairman 711) tracks the index splitting progress using its ordered list of all of the nodes in the distributed database that contain a copy of the index atom. This is another difference from previous index splitting processes.
The nodes track the index splitting progress as follows. Once each node has received a “split applied” message 752 from each other node on the ordered list, it transmits a “split applied all” message 753 to the chairman 711 and the other nodes with split index atoms. This signifies that every copy of the index atom has been split into a source and a target. The nodes then exchange “split applied ack” messages 754 acknowledging the “split applied all” messages 753. Once the chairman 711 has received a “split applied ack” message 754 from the affected nodes, it broadcasts a “split complete” message 755, to which the affected nodes respond with “split complete ack” messages 756.
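As a hypothetical illustration of this tracking on a single node, the sketch below gathers “split applied” messages against the ordered list of nodes holding the index atom, announces “split applied all,” and turns off forwarding once “split applied all” has been seen from every other node. The class and field names are assumptions.

```python
# Hypothetical sketch of second-stage progress tracking on one node, using the
# ordered list of nodes that hold the index atom.

class SplitProgress:
    def __init__(self, node_id, ordered_nodes, send):
        self.node_id = node_id
        self.all_nodes = set(ordered_nodes)   # every node with the index atom
        self.send = send                      # send(peer_id, message) callable
        self.applied_from = {node_id}         # the local split is already done
        self.applied_all_from = {node_id}
        self.forwarding_active = True

    def on_split_applied(self, sender_id):
        self.applied_from.add(sender_id)
        if self.applied_from >= self.all_nodes:
            # Every copy of the index atom has been split into source/target.
            for peer in self.all_nodes - {self.node_id}:
                self.send(peer, {"type": "split applied all"})

    def on_split_applied_all(self, sender_id):
        self.applied_all_from.add(sender_id)
        self.send(sender_id, {"type": "split applied ack"})
        if self.applied_all_from >= self.all_nodes:
            # No further messages will need source-to-target forwarding,
            # so forwarding can be flushed and turned off.
            self.forwarding_active = False
```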
Again, the source index atoms forward messages to the target index atoms during a portion of the splitting process 750 as explained above with respect to
Message Forwarding During Index Splitting
As mentioned above, the distributed database is not locked during the index splitting processes 740 or 750. As a result, information can be added to the index atom while it is being split and new copies of the index atom can be created during the split. This reduces latency and makes it simpler and easier to scale the distributed database.
To maintain correctness and data integrity during the index splitting process, the nodes forward messages received during certain periods of the index splitting process. More specifically, a node forwards messages that are broadcast on the source atom but should be applied on the target atom. These messages are generated before T0 in
Forwarding occurs as follows. If a node receives a message addressed to the index atom after its copy of the index atom has been split into a source and a target, it directs the message to the source. If the message's destination has a key value that is equal to or less than the split key value, the source acts on the message. And if the message's destination has a key value that is greater than the split key value, the source forwards the message to the target (if the target is present on this node), which acts on the message. The target atom cannot exist on the node without the source atom, so forwarding from the source atom to the target atom is an operation local to the node.
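A hedged illustration of this routing rule follows; the AtomStub class and route_index_message function are hypothetical stand-ins introduced only to show how messages with keys above the split key value are forwarded locally from the source to the target.

```python
# Hypothetical sketch of the local routing rule after a split: messages
# addressed to the old index atom are directed to the source; the source acts
# on keys at or below the split key and forwards larger keys to the target,
# which resides on the same node.

class AtomStub:
    """Stand-in for an index atom instance; just records applied messages."""
    def __init__(self, name):
        self.name = name
        self.applied = []

    def apply(self, message):
        self.applied.append(message)

def route_index_message(source, target, split_key, message):
    if message["key"] <= split_key:
        source.apply(message)        # key <= split key: the source acts on it
    else:
        target.apply(message)        # key > split key: forwarded locally

source, target = AtomStub("source"), AtomStub("target")
route_index_message(source, target, split_key=30, message={"key": 42})
print(target.applied)  # the key-42 message ends up on the target
```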
Message forwarding ensures that messages destined for the target actually reach the target. It accounts for the possibility that splitting the index could occur simultaneously in all the nodes, or simultaneously in some nodes and at a different time in other nodes, or at different times in each node that has a copy of the index atom. Message forwarding continues until the node receives a “split applied all” message from each other node in processes 740 and 750 shown in
In other words, the index splitting process 750 is considered to be complete when: 1) every node containing the index atom has split it into a source and a target; 2) every node acknowledges that it is no longer accepting message forwarding; and 3) the root is modified to include a reference to the target. That is, the index splitting process 750 ends when every node has obtained both “split applied all” messages from each other node and a “split done” message from the chairman and has determined that message forwarding is no longer necessary for the source and target.
Advantages of Two-Stage Index Splitting
Previous processes for splitting an index atom do not include broadcasting “split” messages from non-chairman nodes to other (non-chairman) nodes as in the process 740 of
While various inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize or be able to ascertain, using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
The above-described embodiments can be implemented in any of numerous ways. For example, embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
Also, a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
Such computers may be interconnected by one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, and intelligent network (IN) or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
The various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
Also, various inventive concepts may be embodied as one or more methods, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
All publications, patent applications, patents, and other references mentioned herein are incorporated by reference in their entirety.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e., “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03.
This application is a U.S. National Stage Application submitted under 35 U.S.C. § 371 of International Application PCT/US2018/000142, filed Aug. 15, 2018, and entitled “Index Splitting in Distributed Databases.” International Application PCT/US2018/000142 claims the priority benefit, under 35 U.S.C. § 119(e), of U.S. Application No. 62/545,791, filed on Aug. 15, 2017, and entitled “Index Splitting in Distributed Databases.” Each of these applications is incorporated herein by reference in its entirety.