Systems and methods for hierarchical key management in encrypted distributed databases

Information

  • Patent Grant
  • Patent Number
    10,673,623
  • Date Filed
    Thursday, May 25, 2017
  • Date Issued
    Tuesday, June 2, 2020
Abstract
According to one aspect, methods and systems are provided for modifying an encryption scheme in a database system. The methods and systems can include at least one internal database key; at least one database configured to be encrypted and decrypted using the at least one internal database key; a memory configured to store a master key; a key management server interface configured to communicate with a key management server; and a database application configured to receive, into the memory, the master key from the key management server via the key management server interface, and encrypt and decrypt the at least one internal database key using the master key.
Description
BACKGROUND

Technical Field


The present invention relates to distributed database systems and methods for securely encrypting both the data stored in the databases and the encryption keys used to encrypt the data.


Background Discussion


Encryption techniques exist for database systems storing sensitive or confidential material. Individual databases may be encrypted using internal database keys, and the internal database keys themselves may be encrypted using a master key that is stored locally or at a key management server.


SUMMARY

Conventional approaches to encrypting databases involve the use of internal database keys. The internal database keys may be stored locally and used to encrypt and decrypt the database as needed. Because those internal database keys provide access to any sensitive information stored in the database, the internal database keys themselves may need to be stored in an encrypted file or otherwise securely stored.


Various aspects are provided for management of internal and external database encryption keys. According to an embodiment, management interfaces and processes are provided that automate time-consuming and error-prone operations, including, for example, key rotation operations. In these embodiments, key management functions can be executed with no downtime, in that data can be accessed during the key rotation. A single master key may be used to encrypt and decrypt the internal database keys. The master key may be stored locally in an encrypted keyfile, or may be stored at a (possibly third party) key management server and requested as needed. When the master key is received, it is stored temporarily in memory rather than in permanent storage, thereby reducing the risk of a security breach.


Security breaches, as well as regulatory requirements, may require that the master key and/or the internal database keys be rotated, or changed, on occasion or on a particular schedule (e.g., once a year). During such a change event, the master key may be used to decrypt the internal database keys. If desired, the internal database keys can then be used to decrypt the database itself; new internal database keys can be generated and used to re-encrypt the database. A new master key may also be generated and used to re-encrypt the internal database keys, whether or not they have changed.
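The key hierarchy and master-key rotation described above can be sketched as follows. The cipher here is a toy XOR keystream standing in for a real symmetric algorithm such as AES-256, and all key names are illustrative; note that rotating only the master key re-wraps the internal keys without ever decrypting the database itself.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher (stand-in for AES-256): XOR with a
    SHA-256-derived keystream. The same call encrypts and decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Envelope encryption: the master key wraps (encrypts) the internal
# database key; the internal key encrypts the database contents.
internal_key = secrets.token_bytes(32)
old_master = secrets.token_bytes(32)
wrapped = keystream_xor(old_master, internal_key)

# Master-key rotation: unwrap with the old master key, re-wrap with a
# new one. The internal key (and the data it protects) is unchanged.
new_master = secrets.token_bytes(32)
recovered = keystream_xor(old_master, wrapped)
rewrapped = keystream_xor(new_master, recovered)

assert keystream_xor(new_master, rewrapped) == internal_key
```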


Performing such a “key rotation” may require that the database be unavailable for read/write operations for some period of time, as the database and the keys must be available in an unencrypted format during the process, thereby creating a potential security issue. This downtime creates additional issues where the master key and/or internal database keys of more than one database node need to be changed. For example, where a high level of performance and availability is required, database systems may be arranged as replica sets, in which a number of nodes storing the same information are available to respond to database operations (e.g., read and write requests). Replica sets may be configured to include a primary node and a number of secondary nodes. The primary node contains the definitive version of the data stored therein, and may be where any write operations are initially performed. Any write operations or other changes to the primary node are eventually propagated to the secondary nodes, which may be configured to handle read operations according to load balancing and other considerations.


According to one aspect, in a database incorporating such replica sets, there is therefore a need for a system and method for rotating the master key and/or internal database keys while maintaining availability to the data stored in the replica set. In some embodiments, a process is provided for rotating the keys of a node within the replica set while maintaining the availability to the rest of the replica set, and repeating the process for each node while continuing to maintain that availability.


According to one aspect a distributed database system is provided. The system comprises at least a first database node hosting data of the database system, at least one internal database key, at least one database configured to be encrypted and decrypted using the at least one internal database key comprising at least a portion of the data of the distributed database system, a memory configured to store a master key, a key management server interface configured to communicate with a key management server, and a database application configured to, receive, into the memory, the master key from the key management server via the key management server interface, and encrypt and decrypt the at least one internal database key using the master key.


According to one embodiment, the system further comprises a storage engine configured to write encrypted data to the at least one database, the encrypted data generated with reference to the at least one internal database key. According to one embodiment, the database application is further configured to manage key rotation functions for the at least one database. According to one embodiment, the key rotation functions are performed on the database while the database is available for read and write operations. According to one embodiment, the database application is further configured to perform a key rotation function on a node in a replica set by performing the key rotation function on a first secondary node. According to one embodiment, the database application is further configured to perform a key rotation function on a node in a replica set by performing the key rotation function on a second secondary node. According to one embodiment, the database application is further configured to, demote a current primary node to be a secondary node of the replica set, and elect one of the first secondary node and the second secondary node to be a next primary node of the replica set.


According to one aspect a distributed database system is provided. The system comprises at least a first database node hosting data of the database system, at least one database instance configured to be encrypted and decrypted using at least one internal database key comprising at least a portion of the data of the distributed database system, a stored keyfile, a database application configured to encrypt and decrypt the at least one internal database key using the stored keyfile, and a storage engine configured to write encrypted data to the at least one database, the encrypted data generated with reference to the at least one internal database key.


According to one aspect a method for modifying an encryption scheme of a database system is provided. The method comprises (A) disabling read and write access to a node of a replica set; (B) for at least one database on the node of the replica set, decrypting an internal database key using a first master key; (C) obtaining a second master key; (D) for the at least one database on the node of the replica set, encrypting the internal database key using the second master key; (E) restoring read and write access to the node of the replica set; and repeating steps (A)-(E) for at least one other node of the replica set in a rolling manner. According to one embodiment, the second master key is obtained from a key management server, and the method further comprises receiving the second master key via a key management interoperability protocol (KMIP). According to one embodiment, the second master key is obtained from a key management server, and the method further comprises receiving the second master key via an Application Programming Interface (API).


According to one aspect a method for modifying an encryption scheme of a database system is provided. The method comprises (A) disabling read and write access to a node of a replica set; (B) for at least one database on the node of the replica set, decrypting a first internal database key using a first master key; (C) decrypting the at least one database using the first internal database key; (D) generating a second internal database key for each of the at least one database; (E) encrypting the at least one database using the second internal database key for the at least one database; (F) obtaining a second master key; (G) encrypting the second internal database key for the at least one database using the second master key; (H) restoring read and write access to the node of the replica set; and repeating steps (A)-(H) for at least one other node of the replica set in a rolling manner. According to one embodiment, the act of obtaining the second master key comprises requesting the second master key from a key management server via a key management interoperability protocol (KMIP).
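Steps (B)-(G) for a single node can be sketched as follows, again with a toy XOR cipher standing in for real encryption; the data layout (`{name: (ciphertext, wrapped_key)}`) is an assumption made for illustration. Unlike a master-key-only rotation, this full rotation decrypts and re-encrypts the data itself under fresh internal keys.

```python
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy repeating-key XOR as a stand-in for AES; the same call
    # encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def rotate_all_keys(databases, old_master: bytes, new_master: bytes):
    """Full rotation for one node: every database receives a fresh
    internal key and is re-encrypted, and the new internal keys are
    wrapped with the new master key."""
    rotated = {}
    for name, (ciphertext, wrapped_key) in databases.items():
        old_internal = xor_stream(old_master, wrapped_key)    # (B) unwrap old key
        plaintext = xor_stream(old_internal, ciphertext)      # (C) decrypt data
        new_internal = secrets.token_bytes(32)                # (D) fresh internal key
        new_cipher = xor_stream(new_internal, plaintext)      # (E) re-encrypt data
        new_wrapped = xor_stream(new_master, new_internal)    # (F)-(G) re-wrap key
        rotated[name] = (new_cipher, new_wrapped)
    return rotated
```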


Still other aspects, embodiments, and advantages of these exemplary aspects and embodiments, are discussed in detail below. Any embodiment disclosed herein may be combined with any other embodiment in any manner consistent with at least one of the objects, aims, and needs disclosed herein, and references to “an embodiment,” “some embodiments,” “an alternate embodiment,” “various embodiments,” “one embodiment” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. The appearances of such terms herein are not necessarily all referring to the same embodiment. The accompanying drawings are included to provide illustration and a further understanding of the various aspects and embodiments, and are incorporated in and constitute a part of this specification. The drawings, together with the remainder of the specification, serve to explain principles and operations of the described and claimed aspects and embodiments.





BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects of at least one embodiment are discussed herein with reference to the accompanying figures, which are not intended to be drawn to scale. The figures are included to provide illustration and a further understanding of the various aspects and embodiments, and are incorporated in and constitute a part of this specification, but are not intended as a definition of the limits of the invention. Where technical features in the figures, detailed description or any claim are followed by reference signs, the reference signs have been included for the sole purpose of increasing the intelligibility of the figures, detailed description, and/or claims. Accordingly, neither the reference signs nor their absence are intended to have any limiting effect on the scope of any claim elements. In the figures, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral. For purposes of clarity, not every component may be labeled in every figure.


In the figures:



FIG. 1 illustrates a block diagram of an example architecture for a storage node, according to aspects of the invention;



FIG. 2 illustrates a block diagram of an example architecture for a storage node, according to aspects of the invention;



FIG. 3 illustrates a block diagram of an example architecture for a database replica set, according to aspects of the invention;



FIG. 4 illustrates an example process flow for encrypting a database according to aspects of the embodiment;



FIG. 5 illustrates another example process flow for encrypting a database according to aspects of the embodiment;



FIG. 6 is a block diagram of an example distributed database system in which various aspects of the present invention can be practiced;



FIG. 7 is a block diagram of an example distributed database system in which various aspects of the present invention can be practiced; and



FIG. 8 is a block diagram of an example distributed database system in which various aspects of the present invention can be practiced.





DETAILED DESCRIPTION

According to various embodiments, a system and method are provided for modifying the encryption scheme of a database system by sequentially rotating the keys of each node in a replica set, while the replica set remains available for normal read/write operations. In a preferred embodiment where a master key is stored at a key management server, a database node is removed from normal operation, and the master key is obtained, such as with a Key Management Interoperability Protocol (KMIP) request, and used to decrypt one or more internal database keys. A new master key is then generated and/or obtained and used to re-encrypt the one or more internal database keys. In such an embodiment, only a new master key may be generated, and used to re-encrypt the (previously used) internal database keys.


In another embodiment, where the master key is stored locally in a keyfile, responsibility for securing the master key is on the system administrator or other user. In some embodiments, it may be desirable to rotate both the master key and the internal database keys. Accordingly, a database node is removed from normal operation, and the master key is obtained from the keyfile (e.g., local or remote keys) and used to decrypt one or more internal database keys. The internal database keys are then used to decrypt the database itself. New internal database keys are generated and used to re-encrypt the database, and a new master key is generated and used to re-encrypt the new one or more internal database keys.


According to one aspect, an encryption management system provides functions and user interfaces for managing encryption schemes for a database. According to some embodiments, the system automates key management functions (e.g., key rotation) to reduce error in execution, improve execution efficiency of the computer system, and provide user-configurable compliance options for managing encryption keys, among other options. For example, the user can set a timetable for key rotation that is automatically executed by the system. In another embodiment, the user can also establish settings for a type of key rotation (e.g., full rotation or internal key rotations, etc.).


Examples of the methods, devices, and systems discussed herein are not limited in application to the details of construction and the arrangement of components set forth in the following description or illustrated in the accompanying drawings. The methods and systems are capable of implementation in other embodiments and of being practiced or of being carried out in various ways. Examples of specific implementations are provided herein for illustrative purposes only and are not intended to be limiting. In particular, acts, components, elements and features discussed in connection with any one or more examples are not intended to be excluded from a similar role in any other examples.


Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. Any references to examples, embodiments, components, elements or acts of the systems and methods herein referred to in the singular may also embrace embodiments including a plurality, and any references in plural to any embodiment, component, element or act herein may also embrace embodiments including only a singularity. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements. The use herein of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms.


An example of a database storage node 100 is shown in FIG. 1. The storage node 100 represents a subsystem (e.g., a server) on which a particular set or subset of data may be stored, as well as functional components for interacting with the data. For example, the storage node 100 may be a standalone database, or may be a primary node or a secondary node within a replica set, wherein particular data is stored by more than one node to ensure high availability and stability in the event that one or more nodes becomes unavailable for some period of time. In other embodiments, the storage node 100 may be a shard server storing a certain range of data within a database system.


The storage node 100 may be arranged as a relational database, or as a non-relational database, such as the MongoDB database system offered by MongoDB, Inc. of New York, N.Y. and Palo Alto, Calif. The storage node 100 includes a database 10 configured to store the primary data of a database. In a preferred embodiment, the storage node 100 is a non-relational database system wherein the database 10 stores one or more collections of documents allowing for dynamic schemas. In such scenarios, a “document” is a collection of attribute-value associations relating to a particular entity, and in some examples forms a base unit of data storage for the managed database system. Attributes are similar to rows in a relational database, but do not require the same level of organization, and are therefore less subject to architectural constraints. A collection is a group of documents that can be used for a loose, logical organization of documents. It should be appreciated, however, that the concepts discussed herein are applicable to relational databases and other database formats, and this disclosure should not be construed as being limited to non-relational databases in the disclosed embodiments.


In one example, the database data may include logical organizations of subsets of database data. The database data may include index data, which may include copies of certain fields of data that are logically ordered to be searched efficiently. Each entry in the index may consist of a key-value pair in which the indexed field value serves as the key, and an address or pointer to the low-level disk block where the document or field is stored serves as the value. The database data may also include an operation log (“oplog”), which is a chronological list of write/update operations performed on the data store during a particular time period. The oplog can be used to roll back or re-create those operations should it become necessary to do so due to a database crash or other error. Primary data, index data, or oplog data may be stored in any of a number of database formats, including row store, column store, log-structured merge (LSM) tree, or otherwise.
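The index structure described above can be illustrated with a minimal sketch; the document contents and disk addresses below are invented for the example.

```python
# Hypothetical sketch of an index entry: each entry maps an indexed
# field value (the key) to the disk location of the document (the value).
documents = {
    0x1000: {"_id": 1, "user": "ada"},
    0x2000: {"_id": 2, "user": "grace"},
}

# Build an index on the "user" field: field value -> disk address.
index_on_user = {doc["user"]: addr for addr, doc in documents.items()}

def find_by_user(user):
    addr = index_on_user.get(user)   # ordered lookup on the key
    if addr is None:
        return None
    return documents[addr]           # follow the pointer to the document

assert find_by_user("grace")["_id"] == 2
```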


In other embodiments, the storage node 100 forms or is a member of a relational database system, and the database 10 stores one or more tables comprising rows and columns of data according to a database schema.


The storage node 100 further comprises a database application 20 that handles data requests, manages data access, and performs background management operations for the storage node 100. The database application 20 is configured to interact with various components of the storage node 100, including at least one storage engine 30 for writing data to the database 10. In one embodiment, the at least one storage engine 30 writes data to the database in an encrypted format. In particular, the storage engine 30 is configured to write unencrypted data (i.e., plaintext) in an encrypted format to the database 10 using an encryption algorithm that uses a randomly-generated internal database key 40 as an input. In a preferred embodiment, the internal database key 40 is a symmetric database key such that the same key is used to encrypt and decrypt the data. Such symmetric database keys are used in connection with symmetric encryption/decryption algorithms such as Twofish, Serpent, AES (Rijndael), Blowfish, CAST5, RC4, 3DES, Skipjack, Safer+/++ (Bluetooth), and IDEA. In a preferred embodiment, the storage engine 30 uses a symmetric internal database key 40 to perform 256 bit encryption using AES-256 in cipher block chaining (CBC) mode (e.g., via OpenSSL), or in Galois/Counter (GCM) mode. In other embodiments, the internal database key 40 may be part of a public key cryptographic scheme.
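The CBC chaining used in the preferred embodiment can be illustrated with a toy block operation standing in for AES (a real storage engine would use AES-256 via a library such as OpenSSL, as noted above). This sketch shows only the chaining itself: each plaintext block is XORed with the previous ciphertext block before encryption, and a random initialization vector (IV) seeds the chain.

```python
import hashlib
import secrets

BLOCK = 32

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for the AES block operation: XOR with a key-dependent
    # pad. NOT secure; it exists only to illustrate CBC chaining.
    pad = hashlib.sha256(key + b"pad").digest()
    return bytes(a ^ b for a, b in zip(block, pad))

toy_block_decrypt = toy_block_encrypt  # the XOR pad is its own inverse

def cbc_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """CBC mode: chain each block through the previous ciphertext."""
    assert len(plaintext) % BLOCK == 0, "caller pads to the block size"
    iv = secrets.token_bytes(BLOCK)
    prev, out = iv, [iv]
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_encrypt(key, mixed)
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    iv, body = ciphertext[:BLOCK], ciphertext[BLOCK:]
    prev, out = iv, []
    for i in range(0, len(body), BLOCK):
        cur = body[i:i + BLOCK]
        mixed = toy_block_decrypt(key, cur)
        out.append(bytes(a ^ b for a, b in zip(mixed, prev)))
        prev = cur
    return b"".join(out)
```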


A storage node 100 may include more than one database. For example, FIG. 1 shows a second database 12, and a corresponding internal database key 42. According to one embodiment, for security purposes, it can be preferable to use a unique internal database key for each database. Thus, for example, internal database key 40 could be used to encrypt and decrypt only database 10, which in turn may only be encrypted and decrypted using internal database key 40. Similarly, internal database key 42 could be used to encrypt and decrypt only database 12, which in turn may only be encrypted and decrypted using internal database key 42. It will be appreciated that any number of databases and corresponding unique internal database keys may be provided on a storage node 100 without departing from the spirit of the invention.


According to one embodiment, the internal database keys 40, 42 can be stored on a disk or other storage in the storage node 100, and are generally kept encrypted except for the period of time during which they are actually being used. Because the symmetric internal database keys 40, 42 of the preferred embodiment allow for the encryption and decryption of the databases 10, 12, the internal database keys 40, 42 themselves must also be stored in an encrypted format to avoid unauthorized parties obtaining and using them. In one embodiment, the internal database keys 40, 42 are encrypted with a master key 52 that, for security purposes, is maintained only in a temporary memory 50 as needed and is never paged or written to disk.


In some embodiments, a master key and/or internal keys (e.g., 52, 40, 42) can be stored on a separate key management system and requested at each use, or initialized with a first request and maintained only in a temporary memory (e.g., 50) as needed, which is configured to prevent paging or writing of keys to disk.


In one embodiment, the master key 52 is also a randomly-generated symmetric key that is maintained by and obtained from a key management server 70 (e.g., operated by a third party) via a key management server interface 60. The key management server interface 60 is a network interface capable of communicating with other systems in a network, such as the Internet. For example, the key management server interface 60 may comprise a KMIP appliance or client capable of communicating with the key management server 70 for the sending and receiving of a master key 52. Examples of such KMIP clients include KeySecure, offered by Gemalto (formerly SafeNet) of Belcamp, Md., and Data Security Manager (DSM), offered by Vormetric, Inc. of San Jose, Calif. In other implementations, the database and the key management server and interface can be implemented on cloud resources. In one example, any database components and any key management components can be instantiated as a private cloud and/or can be configured for secure communication.


The database application 20 may obtain the master key 52 via the key management server interface 60 using a suitable protocol or application programming interface. For example, the database application 20 may communicate a request for the master key 52 to the key management server 70 using KMIP, which defines message formats for accessing and manipulating cryptographic keys on a key management server 70. In another example, the database application 20 may obtain the master key 52 by making an application call to an Application Programming Interface (API) on the key management server 70, such as the Public Key Cryptography Standards, Standard #11 (PKCS #11). In further embodiments, the database application itself can be one or more application programming interfaces or include one or more application programming interfaces, wherein at least one of the APIs is configured to call a respective API on the key management server, for example, to obtain a master key, or, in another example, to obtain master and/or local keys. In other examples, the database application can request new keys and trigger key rotation within the database.
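A hedged sketch of this request flow follows. The client class and method names are hypothetical stand-ins, not a real KMIP or PKCS #11 API (though 5696 is the standard KMIP port); the point is that the master key is requested by identifier and held only in memory.

```python
class KeyManagementClient:
    """Hypothetical key-management-server client; an in-memory store
    stands in for the remote server for illustration."""

    def __init__(self, host: str, port: int = 5696):  # 5696: default KMIP port
        self.host, self.port = host, port
        self._store = {}

    def register_key(self, key_id: str, key: bytes):
        # Server-side stand-in for key creation/registration.
        self._store[key_id] = key

    def get_key(self, key_id: str) -> bytes:
        """Models a KMIP Get request for the master key by identifier."""
        return self._store[key_id]

client = KeyManagementClient("kms.example.com")       # hypothetical host
client.register_key("master-key-v1", b"\x00" * 32)

# The database application receives the master key into memory only;
# it is never written to disk.
master_key = client.get_key("master-key-v1")
assert len(master_key) == 32
```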


According to some embodiments, database administrators can access the system and establish a key rotation schedule, which the system is configured to automatically execute. In further embodiments, the system accepts specification of a time-table to rotate master keys and/or a time-table to rotate internal keys, and further specification of rotation of both master and internal keys. Once the type of rotation and time frame are set, the system can automatically perform the rotation operations without user intervention. The type of rotation and time frame can be set by administrator users, and/or can be set by default upon creation of a given database or database instance.
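A minimal sketch of such a schedule check follows; the setting names are assumptions for illustration, not a documented configuration format.

```python
from datetime import date, timedelta

# Illustrative rotation settings established by an administrator.
rotation_config = {
    "rotation_type": "master_only",   # or "full" to rotate internal keys too
    "interval_days": 365,             # e.g., rotate once a year
}

def rotation_due(last_rotated: date, today: date,
                 interval_days: int = rotation_config["interval_days"]) -> bool:
    """True once the configured interval has elapsed since the last
    rotation; the system would then trigger the rotation automatically."""
    return today >= last_rotated + timedelta(days=interval_days)
```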


According to one embodiment, the system is configured to execute any selected rotation functions transparently to the end users. For example, key rotation can be scheduled by the system in anticipation of a set time/date within the time-table so that rotation occurs during the least utilized times. In further embodiments, and in particular when completing full rotation (e.g., internal key rotation), the system can be configured to instantiate new database resources (e.g., cloud resources) to host at least one copy of a secondary node. The copy of the secondary node can serve any one or more of multiple purposes: (1) ensuring that no failure results in data loss (e.g., failed re-encryption can result in an unrecoverable data state); (2) ensuring no significant downtime (e.g., as a failover secondary node in the event of a failed re-encryption); (3) providing the same level of service to clients during rotation (e.g., full rotation takes a node off-line to decrypt and re-encrypt the instance) by serving database requests from the copy; and (4) simplifying recovery operations (e.g., a failed rotation on a secondary can simply de-commission the failed secondary), among other options.


Co-pending patent application Ser. No. 14/969,537, entitled Systems and Methods for Automating Management of Distributed Databases, filed on Dec. 15, 2015, incorporated by reference in its entirety, describes various aspects and embodiments of automation systems that can be implemented to facilitate generation of new nodes in a replica set, and/or manage new node resources during key rotation functions discussed herein.



FIG. 2 depicts another exemplary storage node 200. Storage node 200 includes many of the same components and functions similarly to storage node 100, but need not include a key management server interface. In this embodiment, a master key is not obtained from a key management server 70, as in the storage node 100. Rather, a locally-stored and managed keyfile 54 stores the master key 52 that is used to encrypt and decrypt the internal database keys 40, 42. The keyfile 54 may store the master key 52 as a base64-encoded 16- or 32-character string.


The embodiments shown and discussed with respect to FIGS. 1 and 2 depict a single database storage node 100 or 200. Yet in some embodiments, multiple storage nodes may be provided and arranged in a replica set, such as the embodiments described in U.S. patent application Ser. No. 12/977,563, which is hereby incorporated by reference in its entirety. FIG. 3 shows a block diagram of an exemplary replica set 310. Replica set 310 includes a primary node 320 and one or more secondary nodes 330, 340, 350, each of which is configured to store a dataset that has been inserted into the database. The primary node 320 may be configured to store all of the documents currently in the database, and may be considered and treated as the authoritative version of the database in the event that any conflicts or discrepancies arise, as will be discussed in more detail below. While three secondary nodes 330, 340, 350 are depicted for illustrative purposes, any number of secondary nodes may be employed, depending on cost, complexity, and data availability requirements. In a preferred embodiment, one replica set may be implemented on a single server, or a single cluster of servers. In other embodiments, the nodes of the replica set may be spread among two or more servers or server clusters.


The primary node 320 and secondary nodes 330, 340, 350 may be configured to store data in any number of database formats or data structures as are known in the art. In a preferred embodiment, the primary node 320 is configured to store documents or other structures associated with non-relational databases. The embodiments discussed herein relate to documents of a document-based database, such as those offered by MongoDB, Inc. (of New York, N.Y. and Palo Alto, Calif.), but other data structures and arrangements are within the scope of the disclosure as well.


In one embodiment, both read and write operations may be permitted at any node (including primary node 320 or secondary nodes 330, 340, 350) in response to requests from clients. The scalability of read operations can be achieved by adding nodes and database instances. In some embodiments, the primary node 320 and/or the secondary nodes 330, 340, 350 are configured to respond to read operation requests by either performing the read operation at that node or by delegating the read request operation to another node (e.g., a particular secondary node 330). Such delegation may be performed based on load-balancing and traffic direction techniques known in the art.


In some embodiments, the database only allows write operations to be performed at the primary node 320, with the secondary nodes 330, 340, 350 disallowing write operations. In such embodiments, the primary node 320 receives and processes write requests against the database, and replicates the operation/transaction asynchronously throughout the system to the secondary nodes 330, 340, 350. In one example, the primary node 320 receives and performs client write operations and generates an oplog. Each logged operation is replicated to, and carried out by, each of the secondary nodes 330, 340, 350, thereby bringing those secondary nodes into synchronization with the primary node 320.
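The oplog-based replication described above can be sketched as follows; the operation format and class names are illustrative. The primary performs each write locally and appends it to the oplog, and a secondary replays the log from its last applied position, which models the asynchronous catch-up behavior.

```python
class ReplicaNode:
    """Illustrative replica-set member holding a key-value dataset."""
    def __init__(self):
        self.data = {}

    def apply(self, op):
        kind, key, value = op
        if kind == "set":
            self.data[key] = value
        elif kind == "delete":
            self.data.pop(key, None)

class PrimaryNode(ReplicaNode):
    def __init__(self):
        super().__init__()
        self.oplog = []          # chronological list of write operations

    def write(self, op):
        self.apply(op)           # perform the client write locally
        self.oplog.append(op)    # log it for replication to secondaries

def replicate(primary, secondary, applied=0):
    """Replay the primary's oplog on a secondary, starting after the
    last applied entry; returns the new log position."""
    for op in primary.oplog[applied:]:
        secondary.apply(op)
    return len(primary.oplog)
```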


In some embodiments, the primary node 320 and the secondary nodes 330, 340, 350 may operate together to form a replica set 310 that achieves eventual consistency, meaning that replication of database changes to the secondary nodes 330, 340, 350 may occur asynchronously. When write operations cease, all replica nodes of a database will eventually “converge,” or become consistent. This may be a desirable feature where higher performance is important, such that locking records while an update is stored and propagated is not an option. In such embodiments, the secondary nodes 330, 340, 350 may handle the bulk of the read operations made on the replica set 310, whereas the primary node 320 handles the write operations. For read operations where a high level of accuracy is important (such as the operations involved in creating a secondary node), read operations may be performed against the primary node 320.


It will be appreciated that the difference between the primary node 320 and the one or more secondary nodes 330, 340, 350 in a given replica set may be largely the designation itself and the resulting behavior of the node; the data, functionality, and configuration associated with the nodes may be largely identical, or capable of being identical. Thus, when one or more nodes within a replica set 310 fail or otherwise become unavailable for read or write operations, other nodes may change roles to address the failure. For example, if the primary node 320 were to fail, a secondary node 330 may assume the responsibilities of the primary node, allowing operation of the replica set to continue through the outage. This failover functionality is described in U.S. application Ser. No. 12/977,563, the disclosure of which has been incorporated by reference.
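As an illustrative sketch of the role change just described (not the failover protocol of the incorporated application), promoting a healthy secondary when the primary is unreachable might look like the following; names are hypothetical:

```python
def failover(roles, is_up):
    """If the current primary is unreachable, promote the first healthy
    secondary so the replica set keeps serving write operations."""
    primary = next(n for n, r in roles.items() if r == "primary")
    if is_up(primary):
        return roles  # nothing to do
    new_primary = next(n for n, r in roles.items()
                       if r == "secondary" and is_up(n))
    roles = dict(roles)  # leave the caller's mapping untouched
    roles[primary] = "secondary"
    roles[new_primary] = "primary"
    return roles
```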


Each node in the replica set 310 may be implemented on one or more server systems. Additionally, one server system can host more than one node. Each server can be connected via a communication device to a network, for example the Internet, and each server can be configured to provide a heartbeat signal notifying the system that the server is up and reachable on the network. Sets of nodes and/or servers can be configured across wide area networks, local area networks, intranets, and can span various combinations of wide area, local area and/or private networks. Various communication architectures are contemplated for the sets of servers that host database instances and can include distributed computing architectures, peer networks, virtual systems, among other options.


The primary node 320 may be connected by a LAN, a WAN, or other connection to one or more of the secondary nodes 330, 340, 350, which in turn may be connected to one or more other secondary nodes in the replica set 310. Connections between secondary nodes 330, 340, 350 may allow the different secondary nodes to communicate with each other, for example, in the event that the primary node 320 fails or becomes unavailable and a secondary node must assume the role of the primary node.


Each of the primary node 320 and the secondary nodes 330, 340, and 350 may operate like the storage nodes 100 or 200 in FIGS. 1 and 2, respectively. In a preferred embodiment, the databases on each node are individually encrypted using unique internal database keys, with the unique internal database keys themselves being encrypted using a master key unique to each node. Put differently, a master key used on a given node is preferably different than every other master key used on any other node. Likewise, a unique internal database key used to encrypt a given database on a node is preferably different than every other unique internal database key used on that node or any other node (e.g., the unique internal database key used on database A on a primary node will be different than the unique internal database key used on database A on a secondary node within the same replica set). In other embodiments, the same master key may be used for all nodes in a replica set, and/or the same unique internal database key may be used on multiple databases across one or multiple nodes or replica sets.
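The preferred key hierarchy — one master key per node, and a distinct internal database key per database per node — can be illustrated with the following sketch, in which `os.urandom` stands in for a real key-generation routine:

```python
import os

def provision_replica_set(node_names, db_names):
    """Build the key hierarchy described above: every node gets its own
    master key, and every database on every node gets its own internal
    database key (no key is shared across nodes or databases)."""
    nodes = {}
    for node in node_names:
        nodes[node] = {
            "master_key": os.urandom(32),
            "internal_keys": {db: os.urandom(32) for db in db_names},
        }
    return nodes
```

Under this scheme, database A on a primary node and database A on a secondary node receive different internal database keys, matching the preferred embodiment above.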


For security reasons, it may be desirable to change the master keys and internal database keys used in a particular node or replica set. Such a change may be carried out periodically, in response to a security concern, or on a schedule dictated by regulatory or other frameworks. For a change to a new master key to be implemented, at least the internal database keys must be decrypted as necessary and then re-encrypted using the new master key. For a change to new internal database keys to be implemented, the data in the databases themselves must be decrypted as appropriate using the current internal database keys, then re-encrypted using the new internal database keys. The new internal database keys must themselves then be re-encrypted using the new master key (or the existing master key, if no change to the master key has occurred).
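A master-key-only rotation — unwrap the internal keys with the old master key and rewrap them with the new one, leaving the encrypted databases untouched — can be sketched as follows. The XOR keystream cipher here is a deliberately simple, insecure stand-in for a real algorithm such as AES, and all names are illustrative:

```python
import hashlib
import os

def _keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (illustration only).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a hash-derived keystream, so the
    # same call both encrypts and decrypts. NOT secure; a real system
    # would use an authenticated cipher such as AES-GCM.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def rotate_master_key(wrapped_internal_keys: dict,
                      old_master: bytes, new_master: bytes) -> dict:
    # Unwrap each internal database key with the old master key and
    # rewrap it with the new one; the encrypted databases themselves
    # are never touched.
    return {name: xor_cipher(new_master, xor_cipher(old_master, blob))
            for name, blob in wrapped_internal_keys.items()}
```

Because only the small wrapped-key material is rewritten, this form of rotation is fast regardless of database size.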


Due to the decryption/encryption steps required in changing the master key and/or the internal database keys used on a particular node, including the security issues introduced by the process of changing the encryption scheme, the node is typically taken offline while the keys are changed, with other nodes in the replica set available to handle database requests while the node is unavailable. When some or all of the nodes in a replica set are due to have their master keys and/or internal database keys changed, the process may be carried out in a sequential or rolling manner, with each node taken offline one at a time, its keys changed, and the node returned to service. Once the node has returned to availability for processing database requests, another node may be taken offline to repeat the process, and so on. In this way, some or all of the nodes in a replica set may have their encryption schemes changed in a rolling process.


A process 400 of modifying an encryption scheme of a database system (e.g., the storage node 100 of FIG. 1) is shown in FIG. 4. In this example, a new master key is generated, with the same internal database keys being encrypted by the new master key.


At step 410, process 400 begins.


At step 420, read and write access to a node of a replica set is disabled. In one embodiment, the interface between the node and the outside environment is disabled, for example, by terminating the underlying network connection. In another embodiment, the application or process used to handle database read/write requests is terminated or otherwise disabled. In yet another embodiment, permissions for the database are changed so that read/write requests cannot be performed by anyone, or are limited to administrators. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to gracefully remove the node from operation by isolating it from read/write operations. The primary node and/or other nodes in the replica set may be notified of the node's unavailability.


At step 430, the first master key is optionally obtained. In one embodiment, the first master key is obtained from the key management server using a suitable protocol or application programming interface and stored in a memory. In another embodiment, the first master key is obtained from a locally-stored keyfile that contains the master key in encrypted form. In one example, the first master key is the “current” master key that has been used for some period of time to encrypt and decrypt the internal database keys. The database application may request the master key from the key management server in a KMIP format. In another example, the database application may obtain the master key by making an API call on the key management server. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to obtain the first master key. In another embodiment, the first master key may already be resident in storage or elsewhere accessible to the database application, and need not be requested again.


At step 440, an internal database key, used to encrypt a database on the node of the replica set, is decrypted using the first master key. In particular, a decryption algorithm is applied to the encrypted internal database key (e.g., internal database key 40) using the first master key. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to decrypt the internal database key using the first master key. In some embodiments, particularly where there are multiple databases on the node, there may be multiple internal database keys as well, with each internal database key corresponding to a database, and vice versa. In that case, each of the multiple internal database keys is decrypted using the first master key.


At step 450, a second master key is obtained. In one embodiment, the second master key may be obtained through a local process for generating encryption keys. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to generate the second master key. The second master key may then be stored locally in a keyfile, or may be sent to a key management server for later retrieval and use.


In another embodiment, a request for the second master key may be sent to the key management server storing the second master key. The request may be sent using a suitable protocol or application programming interface, and the received second master key stored in a memory. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter requesting the master key from the key management server in a KMIP format. In another example, the database application may obtain the master key by making an API call on the key management server. If no second master key has yet been generated, a request may be sent to the key management server requesting that the second master key be generated and sent to the system. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter requesting that the key management server generate the second master key (if necessary) and transmit the second master key to the system. In one example, the executable program may be the database application which has been integrated with a key management appliance capable of securely communicating with the key management server.
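The interaction with the key management server described above can be illustrated with an in-memory stub standing in for a real KMIP server; in a deployment the stub would be replaced by an actual KMIP client library, and the class and method names here are hypothetical:

```python
import os

class InMemoryKeyServer:
    """Stand-in for a key management server (illustration only)."""
    def __init__(self):
        self._keys = {}

    def create_key(self, key_id, length=32):
        # Generate the key only if it does not already exist.
        self._keys.setdefault(key_id, os.urandom(length))
        return key_id

    def has_key(self, key_id):
        return key_id in self._keys

    def get_key(self, key_id):
        return self._keys[key_id]


def obtain_second_master_key(server, key_id):
    """Fetch the second master key, first requesting that the server
    generate it if it does not yet exist, as described above. The key
    is returned to the caller to hold in memory, not written to disk."""
    if not server.has_key(key_id):
        server.create_key(key_id)
    return server.get_key(key_id)
```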


In step 460, the internal database key is re-encrypted using the second master key. In particular, an encryption algorithm is applied to the internal database key (e.g., internal database key 40) using the second master key. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to encrypt the internal database key using the second master key. In some embodiments, particularly where there are multiple databases on the node, there may be multiple internal database keys as well, with each internal database key corresponding to a database, and vice versa. In that case, each of the multiple internal database keys is re-encrypted using the second master key.


At step 470, read and write access to the node of the replica set is restored. In one embodiment, the interface between the node and the outside environment is re-enabled, for example, by restoring the underlying network connection. In another embodiment, the application or process used to handle database read/write requests is re-started or otherwise re-enabled. In yet another embodiment, permissions for the database are changed so that read/write requests can be performed according to normal operating conditions. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to restore the node to normal operation.


In step 480, steps 420 through 470 are repeated for one or more additional nodes, one-by-one, until all of the nodes in the replica set have had their internal database keys encrypted using the new master key. In one embodiment, all of the secondary nodes in the replica set are processed one-by-one, followed lastly by the primary node. In one example, a secondary node that has successfully undergone a master key change by steps 420 through 470 may be designated as the primary node, and the then-current primary node redesignated as a secondary node. In this way, it can be ensured that a primary node with internal database keys encrypted by the current master key is always available, even when the then-current primary node is to be taken offline to undergo the master key change.
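The rolling order of step 480 — every secondary first, then a step-down so the old primary can be rotated last — might be sketched as follows; the callback parameters are hypothetical hooks for steps 420 through 470 and for the role change:

```python
def rolling_master_key_rotation(roles, rotate_node, step_down):
    """Apply the per-node rotation (steps 420-470) across a replica set:
    all secondaries one-by-one, then promote an already-rotated
    secondary, demote the old primary, and rotate it last, so that a
    serving primary is always available."""
    secondaries = [n for n, r in roles.items() if r == "secondary"]
    primary = next(n for n, r in roles.items() if r == "primary")
    for node in secondaries:
        rotate_node(node)
    # Hand the primary role to a node whose keys are already rotated.
    step_down(primary, secondaries[0])
    rotate_node(primary)
    return secondaries + [primary]
```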


Process 400 ends at step 490.


Another process 500 of modifying an encryption scheme of a database system (e.g., the storage node 200 of FIG. 2) is shown in FIG. 5. In this example, both a new master key and new internal database key(s) are generated and/or obtained. The database is encrypted using the new internal database key, and the new internal database keys in turn are encrypted by the new master key.


At step 505, process 500 begins.


At step 510, read and write access to a node of a replica set is disabled. In one embodiment, the interface between the node and the outside environment is disabled, for example, by terminating the underlying network connection. In another embodiment, the application or process used to handle database read/write requests is terminated or otherwise disabled. In yet another embodiment, permissions for the database are changed so that read/write requests cannot be performed by anyone, or are limited to administrators. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to gracefully remove the node from operation by isolating it from read/write operations.


At step 515, the first master key is optionally obtained. In one embodiment, the first master key is obtained from a locally-stored keyfile that contains the master key in encrypted form. In another embodiment, the first master key is obtained from the key management server using a suitable protocol or application programming interface and stored in a memory. In one example, the first master key is the “current” master key that has been used for some period of time to encrypt and decrypt the internal database keys.


At step 520, a first internal database key, used to encrypt a database on the node of the replica set, is decrypted using the first master key. In particular, a decryption algorithm is applied to the first encrypted internal database key (e.g., internal database key 40) using the first master key. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to decrypt the internal database key using the first master key. In some embodiments, particularly where there are multiple databases on the node, there may be multiple internal database keys as well, with each internal database key corresponding to a database, and vice versa. In that case, each of the multiple internal database keys is decrypted using the first master key.


At step 525, the database is decrypted using the first internal database key. In particular, a decryption algorithm is applied to the database (e.g., database 10) using the first internal database key. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to decrypt the database using the internal database key. In embodiments where there are multiple databases on the node, there may be multiple internal database keys as well, with each internal database key corresponding to a database, and vice versa. In that case, each database is decrypted using one of the multiple internal database keys.


At step 530, a second internal database key is generated. In one embodiment, the second internal database key may be generated through a local process for generating encryption keys. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to generate the second internal database key. In embodiments where there are multiple databases on the node, a second internal database key is generated for each database on the node.


In step 535, the database is re-encrypted using the second internal database key. In particular, all of the data in the database (e.g., database 10) may be rewritten (e.g., by the storage engine 30) to another copy of the database, with an encryption algorithm being applied to the database using the second internal database key. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to encrypt the database using the second internal database key. In embodiments where there are multiple databases on the node, each database is re-encrypted using one of the second internal database keys generated in step 530.


At step 540, a second master key is obtained. In one embodiment, the second master key may be obtained through a local process for generating encryption keys. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to generate the second master key. The second master key may then be stored locally in a keyfile, or may be sent to a key management server for later retrieval and use.


In step 545, the second internal database key is re-encrypted using the second master key. In particular, an encryption algorithm is applied to the second internal database key (e.g., internal database key 40) using the second master key. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to encrypt the second internal database key using the second master key. In some embodiments, particularly where there are multiple databases on the node, there may be multiple second internal database keys as well, with each internal database key corresponding to a database, and vice versa. In that case, each of the multiple second internal database keys is re-encrypted using the second master key.
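Steps 515 through 545, applied to a single database, can be sketched as follows. As before, the XOR keystream cipher is an insecure stand-in for a real cipher such as AES, and the function names are illustrative only:

```python
import hashlib
import os

def _keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (illustration only).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy XOR keystream cipher; NOT secure, encrypts and decrypts alike.
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def rekey_database(enc_db: bytes, wrapped_ik: bytes,
                   old_master: bytes, new_master: bytes):
    # Step 520: unwrap the first internal key with the first master key.
    old_ik = xor_cipher(old_master, wrapped_ik)
    # Step 525: decrypt the database with the first internal key.
    plaintext = xor_cipher(old_ik, enc_db)
    # Step 530: generate a second internal database key.
    new_ik = os.urandom(32)
    # Steps 535 and 545: rewrite the data under the new internal key,
    # and wrap that key with the second master key.
    return xor_cipher(new_ik, plaintext), xor_cipher(new_master, new_ik)
```

Unlike the master-key-only rotation of process 400, this process rewrites all database data, so its cost scales with database size.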


At step 550, read and write access to the node of the replica set is restored. In one embodiment, the interface between the node and the outside environment is re-enabled, for example, by restoring the underlying network connection. In another embodiment, the application or process used to handle database read/write requests is re-started or otherwise re-enabled. In yet another embodiment, permissions for the database are changed so that read/write requests can be performed according to normal operating conditions. For example, an executable program (e.g., the database application 20) may be called from a command line with a command line parameter instructing the program to restore the node to normal operation.


Steps 510 through 550 describe the process for disabling a node of the replica set, re-encrypting the databases on that node with new internal database keys, encrypting those internal database keys with a new master key, and re-enabling the node. In step 555, steps 510 through 550 are repeated for one or more additional nodes, one-by-one, until all of the nodes in the replica set have had their databases and internal database keys re-encrypted using the new keys. In one embodiment, all of the secondary nodes in the replica set are processed one-by-one, followed lastly by the primary node. In one example, a secondary node that has successfully undergone a master key change by steps 510 through 550 may be designated as the primary node, and the then-current primary node redesignated as a secondary node. In this way, it can be ensured that a primary node with internal database keys encrypted by the current master key is always available, even when the then-current primary node is to be taken offline to undergo the master key change.


Process 500 ends at step 560.


The various processes described herein can be configured to be executed on the systems shown by way of example in FIGS. 1-5. The systems and/or system components shown can be programmed to execute the processes and/or functions described.


Additionally, other computer systems can be configured to perform the operations and/or functions described herein. For example, various embodiments according to the present invention may be implemented on one or more computer systems. These computer systems may be specially configured computers such as those based on Intel Atom, Core, or PENTIUM-type processors, IBM PowerPC, AMD Athlon or Opteron, Sun UltraSPARC, or any other type of processor. Additionally, any system may be located on a single computer or may be distributed among a plurality of computers attached by a communications network.


A special-purpose computer system can be specially configured as disclosed herein. According to one embodiment of the invention the special-purpose computer system is configured to perform any of the described operations and/or algorithms. The operations and/or algorithms described herein can also be encoded as software executing on hardware that defines a processing component, which can define portions of a special-purpose computer, reside on an individual special-purpose computer, and/or reside on multiple special-purpose computers.



FIG. 6 shows a block diagram of an example special-purpose computer system 600 on which various aspects of the present invention can be practiced. For example, computer system 600 may include a processor 606 connected to one or more memory devices 610, such as a disk drive, memory, or other device for storing data. Memory 610 is typically used for storing programs and data during operation of the computer system 600. Components of computer system 600 can be coupled by an interconnection mechanism 608, which may include one or more busses (e.g., between components that are integrated within a same machine) and/or a network (e.g., between components that reside on separate discrete machines). The interconnection mechanism enables communications (e.g., data, instructions) to be exchanged between system components of system 600.


Computer system 600 may also include one or more input/output (I/O) devices 602-604, for example, a keyboard, mouse, trackball, microphone, touch screen, a printing device, display screen, speaker, etc. Storage 612 typically includes a computer readable and writeable nonvolatile recording medium in which computer executable instructions are stored that define a program to be executed by the processor, or information stored on or in the medium to be processed by the program.


The medium can, for example, be a disk 702 or flash memory as shown in FIG. 7. Typically, in operation, the processor causes data to be read from the nonvolatile recording medium into another memory 704 that allows for faster access to the information by the processor than does the medium. This memory is typically a volatile, random access memory such as a dynamic random access memory (DRAM) or static memory (SRAM). According to one embodiment, the computer-readable medium comprises a non-transient storage medium on which computer executable instructions are retained.


Referring again to FIG. 6, the memory can be located in storage 612 as shown, or in memory system 610. The processor 606 generally manipulates the data within the memory 610, and then copies the data to the medium associated with storage 612 after processing is completed. A variety of mechanisms are known for managing data movement between the medium and the integrated circuit memory element, and the invention is not limited thereto. The invention is not limited to a particular memory system or storage system.


The computer system may include specially-programmed, special-purpose hardware, for example, an application-specific integrated circuit (ASIC). Aspects of the invention can be implemented in software, hardware or firmware, or any combination thereof. Although computer system 600 is shown by way of example as one type of computer system upon which various aspects of the invention can be practiced, it should be appreciated that aspects of the invention are not limited to being implemented on the computer system shown in FIG. 6. Various aspects of the invention can be practiced on one or more computers having different architectures or components than those shown in FIG. 6.


It should be appreciated that the invention is not limited to executing on any particular system or group of systems. Also, it should be appreciated that the invention is not limited to any particular distributed architecture, network, or communication protocol.


Various embodiments of the invention can be programmed using an object-oriented programming language, such as Java, C++, Ada, or C# (C-Sharp). Other programming languages may also be used. Alternatively, functional, scripting, and/or logical programming languages can be used. Various aspects of the invention can be implemented in a non-programmed environment (e.g., documents created in HTML, XML or other format that, when viewed in a window of a browser program, render aspects of a graphical-user interface (GUI) or perform other functions). The system libraries of the programming languages are incorporated herein by reference. Various aspects of the invention can be implemented as programmed or non-programmed elements, or any combination thereof.


Various aspects of this invention can be implemented by one or more systems similar to system 800 shown in FIG. 8. For instance, the system can be a distributed system (e.g., client server, multi-tier system) that includes multiple special-purpose computer systems. In one example, the system includes software processes executing on a system associated with hosting database services, processing operations received from client computer systems, interfacing with APIs, receiving and processing client database requests, routing database requests, routing targeted database requests, routing global database requests, determining whether a global request is necessary, determining whether a targeted request is possible, verifying database operations, managing data distribution, replicating database data, migrating database data, etc. These systems can also permit client systems to request database operations transparently, with various routing processes handling and processing requests for data as a single interface, where the routing processes can manage data retrieval from database partitions, merge responses, and return results as appropriate to the client, among other operations.


There can be other computer systems that perform functions such as hosting replicas of database data, with each server hosting database partitions implemented as a replica set, among other functions. These systems can be distributed among a communication system such as the Internet. One such distributed network, as discussed below with respect to FIG. 8, can be used to implement various aspects of the invention. Various replication protocols can be implemented, and in some embodiments, different replication protocols can be implemented for different data: for example, the data stored in the database may be replicated under one model, e.g., asynchronous replication of a replica set, while metadata servers control updating and replication of database metadata under a stricter consistency model, e.g., requiring two-phase commit operations for updates.



FIG. 8 shows an architecture diagram of an example distributed system 800 suitable for implementing various aspects of the invention. It should be appreciated that FIG. 8 is used for illustration purposes only, and that other architectures can be used to facilitate one or more aspects of the invention.


System 800 may include one or more specially configured special-purpose computer systems 804, 806, and 808 distributed among a network 802 such as, for example, the Internet. Such systems may cooperate to perform functions related to hosting a partitioned database, managing database metadata, monitoring distribution of database partitions, monitoring size of partitions, splitting partitions as necessary, migrating partitions as necessary, identifying sequentially keyed collections, optimizing migration, splitting, and rebalancing for collections with sequential keying architectures.


Having thus described several aspects and embodiments of this invention, it is to be appreciated that various alterations, modifications and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only.


Use of ordinal terms such as “first,” “second,” “third,” “a,” “b,” “c,” etc., in the claims to modify or otherwise identify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).

Claims
  • 1. A distributed database system comprising: at least a first database node of a plurality of database nodes hosting data of the distributed database system; at least one internal database key; at least one database with data to be encrypted and decrypted using the at least one internal database key comprising at least a portion of the data of the distributed database system; a memory configured to store at least one master key; a key management server interface configured to communicate with a key management server; and a database component, executed by at least one hardware-based processor, configured to: receive, into the memory, the master key from the key management server via the key management server interface; encrypt and decrypt the at least one internal database key using the at least one master key; and manage the at least one internal and master key for the plurality of database nodes; and wherein the database component is further configured to: manage key rotation functions for the at least one database; demote a current primary node to be a secondary node of a respective replica set; and elect one of at least a first secondary node and a second secondary node to be a next primary node of the respective replica set, wherein election includes validating execution of the key rotation functions, and wherein the next primary node is configured to accept and replicate write operations to secondary nodes in the replica set.
  • 2. The database system of claim 1, further comprising a storage engine configured to write encrypted data to the at least one database, the encrypted data generated with reference to the at least one internal database key.
  • 3. The database system of claim 1, wherein the database component is further configured to manage key rotation functions for the at least one database.
  • 4. The database system of claim 3, wherein the key rotation functions are performed within respective replica sets comprising a first primary node and at least a first and second secondary node, while the respective replica set of the database is available for read and write operations.
  • 5. The database system of claim 4, wherein the database component is further configured to perform a key rotation function on a node in a replica set by performing the key rotation function on the first secondary node of a respective replica set.
  • 6. The database system of claim 5, wherein the database component is further configured to validate the key rotation function prior to continuation of rotation operation on other nodes within the respective replica set of the distributed database.
  • 7. The database system of claim 5, wherein the database component is further configured to perform a key rotation function on a node in a replica set by performing the key rotation function on a second secondary node.
  • 8. The database system of claim 5, wherein the distributed database further comprises at least one storage API configured to: manage retrieval and storage of the at least the portion of the database data using the at least one internal key.
  • 9. The database system of claim 8, wherein the database component is further configured to: validate execution of the rotation function on at least the first and second secondary nodes prior to demotion of the current primary and election of the next primary.
  • 10. The database system of claim 8, wherein the database component is configured to disable read and write access to the demoted primary node and execute the key rotation function on the demoted primary node.
  • 11. The database system of claim 10, wherein the database component executes client requests for write and read operations on the respective replica set, replicating write operations executed on the next primary node to respective secondary nodes.
  • 12. A computer implemented method for managing a distributed database, the method comprising: hosting data of the distributed database system on at least a first database node of a plurality of database nodes; encrypting and decrypting, by at least one hardware-based processor, at least a portion of the data of the distributed database stored on the plurality of database nodes using at least one internal database key; communicating, by the at least one hardware-based processor, with a key management server, wherein communicating includes receiving, by the at least one hardware-based processor, a master key from the key management server via a key management server interface; encrypting and decrypting, by the at least one hardware-based processor, the at least one internal database key using the at least one master key; managing, by the at least one hardware-based processor, the at least one internal and master key for the plurality of database nodes; managing, by the at least one hardware-based processor, key rotation functions for the at least one database; demoting, by the at least one hardware-based processor, a current primary node to be a secondary node of a respective replica set; and electing, by the at least one hardware-based processor, one of at least a first secondary node and a second secondary node to be a next primary node of the respective replica set, wherein electing includes validating execution of the key rotation functions, and wherein the next primary node is configured to accept and replicate write operations to secondary nodes in the replica set.
  • 13. The method of claim 12, further comprising writing encrypted data to the at least one database, the encrypted data generated with reference to the at least one internal database key.
  • 14. The method of claim 12, further comprising managing key rotation functions for the at least one database.
  • 15. The method of claim 14, further comprising performing the key rotation functions within respective replica sets comprising a first primary node and at least a first and second secondary node, while the respective replica set of the database is available for read and write operations.
  • 16. The method of claim 15, wherein performing the key rotation function includes performing the key rotation function on the first secondary node of a respective replica set.
  • 17. The method of claim 16, further comprising validating the key rotation function prior to continuation of the rotation operation on other nodes within the respective replica set of the distributed database.
  • 18. The method of claim 16, wherein performing the key rotation function includes performing the key rotation function on a second secondary node.
  • 19. The method of claim 16, further comprising managing retrieval and storage of the at least the portion of the database data with the at least one internal key via at least one storage API.
  • 20. The method of claim 19, further comprising validating execution of the rotation function on at least the first and second secondary nodes prior to demotion of the current primary and election of the next primary.
RELATED APPLICATIONS

This Application claims the benefit under 35 U.S.C. § 120 of U.S. application Ser. No. 15/604,856, entitled “DISTRIBUTED DATABASE SYSTEMS AND METHODS WITH ENCRYPTED STORAGE ENGINES” filed on May 25, 2017, which is herein incorporated by reference in its entirety. Application Ser. No. 15/604,856 claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Ser. No. 62/343,440, entitled “SYSTEMS AND METHODS FOR HIERARCHICAL KEY MANAGEMENT IN ENCRYPTED DISTRIBUTED DATABASES” filed on May 31, 2016, which is herein incorporated by reference in its entirety. Application Ser. No. 15/604,856 claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Ser. No. 62/341,453, entitled “SYSTEMS AND METHODS FOR KEY MANAGEMENT IN ENCRYPTED DISTRIBUTED DATABASES” filed on May 25, 2016, which is herein incorporated by reference in its entirety. Application Ser. No. 15/604,856 claims the benefit under 35 U.S.C. § 120 of U.S. application Ser. No. 14/992,225, entitled “DISTRIBUTED DATABASE SYSTEMS AND METHODS WITH PLUGGABLE STORAGE ENGINES” filed on Jan. 11, 2016, which is herein incorporated by reference in its entirety. Application Ser. No. 14/992,225 claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Ser. No. 62/232,979, entitled “DISTRIBUTED DATABASE SYSTEMS AND METHODS WITH PLUGGABLE STORAGE ENGINES” filed on Sep. 25, 2015, which is herein incorporated by reference in its entirety. This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Ser. No. 62/343,440, entitled “SYSTEMS AND METHODS FOR HIERARCHICAL KEY MANAGEMENT IN ENCRYPTED DISTRIBUTED DATABASES” filed on May 31, 2016, which is herein incorporated by reference in its entirety. This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application Ser. No. 
62/341,453, entitled “SYSTEMS AND METHODS FOR KEY MANAGEMENT IN ENCRYPTED DISTRIBUTED DATABASES” filed on May 25, 2016, which is herein incorporated by reference in its entirety.

US Referenced Citations (279)
Number Name Date Kind
4918593 Huber Apr 1990 A
5379419 Heffernan et al. Jan 1995 A
5416917 Adair et al. May 1995 A
5471629 Risch Nov 1995 A
5551027 Choy et al. Aug 1996 A
5598559 Chaudhuri Jan 1997 A
5710915 McElhiney Jan 1998 A
5884299 Ramesh et al. Mar 1999 A
5999179 Kekic et al. Dec 1999 A
6065017 Barker May 2000 A
6088524 Levy et al. Jul 2000 A
6112201 Wical Aug 2000 A
6115705 Larson Sep 2000 A
6240406 Tannen May 2001 B1
6240514 Inoue May 2001 B1
6249866 Brundrett Jun 2001 B1
6324540 Khanna et al. Nov 2001 B1
6324654 Wahl et al. Nov 2001 B1
6339770 Leung et al. Jan 2002 B1
6351742 Agarwal et al. Feb 2002 B1
6363389 Lyle et al. Mar 2002 B1
6385201 Iwata May 2002 B1
6385604 Bakalash et al. May 2002 B1
6496843 Getchius et al. Dec 2002 B1
6505187 Shatdal Jan 2003 B1
6611850 Shen Aug 2003 B1
6687846 Adrangi et al. Feb 2004 B1
6691101 MacNicol et al. Feb 2004 B2
6801905 Andrei Oct 2004 B2
6823474 Kampe et al. Nov 2004 B2
6920460 Srinivasan et al. Jul 2005 B1
6959369 Ashton et al. Oct 2005 B1
7020649 Cochrane et al. Mar 2006 B2
7032089 Ranade et al. Apr 2006 B1
7082473 Breitbart et al. Jul 2006 B2
7177866 Holenstein et al. Feb 2007 B2
7181460 Coss et al. Feb 2007 B2
7191299 Kekre et al. Mar 2007 B1
7246345 Sharma et al. Jul 2007 B1
7467103 Murray et al. Dec 2008 B1
7472117 Dettinger et al. Dec 2008 B2
7486661 Van den Boeck et al. Feb 2009 B2
7548928 Dean et al. Jun 2009 B1
7552356 Waterhouse et al. Jun 2009 B1
7558481 Jenkins et al. Jul 2009 B2
7567991 Armangau et al. Jul 2009 B2
7617369 Bezbaruah et al. Nov 2009 B1
7634459 Eshet et al. Dec 2009 B1
7647329 Fischman et al. Jan 2010 B1
7657570 Wang et al. Feb 2010 B2
7657578 Karr et al. Feb 2010 B1
7668801 Koudas et al. Feb 2010 B1
7761465 Nonaka Jul 2010 B1
7957284 Lu et al. Jun 2011 B2
7962458 Holenstein et al. Jun 2011 B2
8005804 Greer Aug 2011 B2
8005868 Saborit et al. Aug 2011 B2
8037059 Bestgen et al. Oct 2011 B2
8082265 Carlson et al. Dec 2011 B2
8086597 Balmin et al. Dec 2011 B2
8099572 Arora et al. Jan 2012 B1
8103906 Alibakhsh et al. Jan 2012 B1
8108443 Thusoo Jan 2012 B2
8126848 Wagner Feb 2012 B2
8170984 Bakalash et al. May 2012 B2
8260840 Sirota et al. Sep 2012 B1
8296419 Khanna et al. Oct 2012 B1
8321558 Sirota et al. Nov 2012 B1
8352450 Mraz et al. Jan 2013 B1
8352463 Nayak Jan 2013 B2
8363961 Avidan et al. Jan 2013 B1
8370857 Kamii et al. Feb 2013 B2
8386463 Bestgen et al. Feb 2013 B2
8392482 McAlister et al. Mar 2013 B1
8572031 Merriman Oct 2013 B2
8589382 Betawadkar-Norwood Nov 2013 B2
8589574 Cormie et al. Nov 2013 B1
8615507 Varadarajulu et al. Dec 2013 B2
8712044 MacMillan Apr 2014 B2
8712993 Ordonez Apr 2014 B1
8751533 Dhavale et al. Jun 2014 B1
8843441 Rath et al. Sep 2014 B1
8869256 Sample Oct 2014 B2
8996463 Merriman et al. Mar 2015 B2
9015431 Resch et al. Apr 2015 B2
9069827 Rath et al. Jun 2015 B1
9116862 Rath et al. Aug 2015 B1
9141814 Murray Sep 2015 B1
9183254 Cole et al. Nov 2015 B1
9262462 Merriman et al. Feb 2016 B2
9274902 Morley et al. Mar 2016 B1
9317576 Merriman et al. Apr 2016 B2
9350633 Cudak et al. May 2016 B2
9350681 Kitagawa et al. May 2016 B1
9460008 Leshinsky et al. Oct 2016 B1
9495427 Abadi et al. Nov 2016 B2
9569481 Chandra et al. Feb 2017 B1
9660666 Ciarlini et al. May 2017 B1
9740762 Horowitz et al. Aug 2017 B2
9792322 Merriman et al. Oct 2017 B2
9805108 Merriman et al. Oct 2017 B2
9881034 Horowitz et al. Jan 2018 B2
9959308 Carman et al. May 2018 B1
10031931 Horowitz et al. Jul 2018 B2
10031956 Merriman et al. Jul 2018 B2
10262050 Bostic et al. Apr 2019 B2
10346430 Horowitz et al. Jul 2019 B2
10346434 Morkel et al. Jul 2019 B1
10366100 Horowitz et al. Jul 2019 B2
10372926 Leshinsky Aug 2019 B1
10394822 Stearn Aug 2019 B2
10423626 Stearn et al. Sep 2019 B2
10430433 Stearn et al. Oct 2019 B2
10489357 Horowitz et al. Nov 2019 B2
10496669 Merriman et al. Dec 2019 B2
20010021929 Lin et al. Sep 2001 A1
20020029207 Bakalash et al. Mar 2002 A1
20020143901 Lupo et al. Oct 2002 A1
20020147842 Breitbart et al. Oct 2002 A1
20020184239 Mosher, Jr. et al. Dec 2002 A1
20030046307 Rivette et al. Mar 2003 A1
20030084073 Hotti May 2003 A1
20030088659 Susarla et al. May 2003 A1
20030182427 Halpern Sep 2003 A1
20030187864 McGoveran Oct 2003 A1
20040078569 Hotti Apr 2004 A1
20040133591 Holenstein et al. Jul 2004 A1
20040168084 Owen et al. Aug 2004 A1
20040186817 Thames et al. Sep 2004 A1
20040186826 Choi et al. Sep 2004 A1
20040205048 Pizzo et al. Oct 2004 A1
20040236743 Blaicher et al. Nov 2004 A1
20040254919 Giuseppini Dec 2004 A1
20050033756 Kottomtharayil et al. Feb 2005 A1
20050038833 Colrain et al. Feb 2005 A1
20050192921 Chaudhuri et al. Sep 2005 A1
20050234841 Miao et al. Oct 2005 A1
20050283457 Sonkin et al. Dec 2005 A1
20060004746 Angus et al. Jan 2006 A1
20060020586 Prompt et al. Jan 2006 A1
20060085541 Cuomo et al. Apr 2006 A1
20060090095 Massa et al. Apr 2006 A1
20060168154 Zhang et al. Jul 2006 A1
20060209782 Miller et al. Sep 2006 A1
20060218123 Chowdhuri et al. Sep 2006 A1
20060235905 Kapur Oct 2006 A1
20060288232 Ho Dec 2006 A1
20060294129 Stanfill et al. Dec 2006 A1
20070050436 Chen et al. Mar 2007 A1
20070061487 Moore et al. Mar 2007 A1
20070094237 Mitchell et al. Apr 2007 A1
20070203944 Batra et al. Aug 2007 A1
20070226640 Holbrook et al. Sep 2007 A1
20070233746 Garbow et al. Oct 2007 A1
20070240129 Kretzschmar et al. Oct 2007 A1
20080002741 Maheshwari et al. Jan 2008 A1
20080071755 Barsness et al. Mar 2008 A1
20080098041 Chidambaran et al. Apr 2008 A1
20080140971 Dankel et al. Jun 2008 A1
20080162590 Kundu et al. Jul 2008 A1
20080288646 Hasha et al. Nov 2008 A1
20090030986 Bates Jan 2009 A1
20090055350 Branish et al. Feb 2009 A1
20090077010 Muras et al. Mar 2009 A1
20090094318 Gladwin et al. Apr 2009 A1
20090222474 Alpern et al. Sep 2009 A1
20090240744 Thomson et al. Sep 2009 A1
20090271412 Lacapra et al. Oct 2009 A1
20100011026 Saha et al. Jan 2010 A1
20100030793 Cooper et al. Feb 2010 A1
20100030800 Brodfuehrer et al. Feb 2010 A1
20100049717 Ryan et al. Feb 2010 A1
20100058010 Augenstein et al. Mar 2010 A1
20100106934 Calder et al. Apr 2010 A1
20100161492 Harvey et al. Jun 2010 A1
20100198791 Wu et al. Aug 2010 A1
20100205028 Johnson et al. Aug 2010 A1
20100235606 Oreland et al. Sep 2010 A1
20100250930 Csaszar Sep 2010 A1
20100333111 Kothamasu Dec 2010 A1
20100333116 Prahlad et al. Dec 2010 A1
20110022642 deMilo et al. Jan 2011 A1
20110125704 Mordvinova et al. May 2011 A1
20110125766 Carozza May 2011 A1
20110125894 Anderson et al. May 2011 A1
20110138148 Friedman et al. Jun 2011 A1
20110202792 Atzmony Aug 2011 A1
20110225122 Denuit et al. Sep 2011 A1
20110225123 D'Souza et al. Sep 2011 A1
20110231447 Starkey Sep 2011 A1
20110246717 Kobayashi et al. Oct 2011 A1
20120054155 Darcy Mar 2012 A1
20120076058 Padmanabh et al. Mar 2012 A1
20120078848 Jennas et al. Mar 2012 A1
20120079224 Clayton et al. Mar 2012 A1
20120084414 Brock et al. Apr 2012 A1
20120109892 Novik et al. May 2012 A1
20120109935 Meijer May 2012 A1
20120130988 Nica et al. May 2012 A1
20120136835 Kosuru et al. May 2012 A1
20120138671 Gaede et al. Jun 2012 A1
20120158655 Dove et al. Jun 2012 A1
20120159097 Jennas, II et al. Jun 2012 A1
20120166390 Merriman et al. Jun 2012 A1
20120166517 Lee et al. Jun 2012 A1
20120198200 Li et al. Aug 2012 A1
20120221540 Rose et al. Aug 2012 A1
20120254175 Horowitz et al. Oct 2012 A1
20120274664 Fagnou Nov 2012 A1
20120320914 Thyni et al. Dec 2012 A1
20130151477 Tsaur et al. Jun 2013 A1
20130290249 Merriman et al. Oct 2013 A1
20130290471 Venkatesh Oct 2013 A1
20130332484 Gajic Dec 2013 A1
20130339379 Ferrari et al. Dec 2013 A1
20130346366 Ananthanarayanan et al. Dec 2013 A1
20140013334 Bisdikian et al. Jan 2014 A1
20140032525 Merriman et al. Jan 2014 A1
20140032579 Merriman et al. Jan 2014 A1
20140032628 Cudak et al. Jan 2014 A1
20140074790 Berman et al. Mar 2014 A1
20140101100 Hu et al. Apr 2014 A1
20140164831 Merriman et al. Jun 2014 A1
20140258343 Nikula Sep 2014 A1
20140279929 Gupta et al. Sep 2014 A1
20140280380 Jagtap et al. Sep 2014 A1
20150074041 Bhattacharjee et al. Mar 2015 A1
20150081766 Curtis et al. Mar 2015 A1
20150242531 Rodniansky Aug 2015 A1
20150278295 Merriman et al. Oct 2015 A1
20150301901 Rath et al. Oct 2015 A1
20150331755 Morgan Nov 2015 A1
20150341212 Hsiao et al. Nov 2015 A1
20150378786 Suparna et al. Dec 2015 A1
20160048345 Vijayrao Feb 2016 A1
20160110284 Athalye et al. Apr 2016 A1
20160110414 Park et al. Apr 2016 A1
20160162374 Mutha et al. Jun 2016 A1
20160188377 Thimmappa et al. Jun 2016 A1
20160203202 Merriman et al. Jul 2016 A1
20160246861 Merriman et al. Aug 2016 A1
20160306709 Shaull Oct 2016 A1
20160323378 Coskun et al. Nov 2016 A1
20160364440 Lee et al. Dec 2016 A1
20170032007 Merriman Feb 2017 A1
20170032010 Merriman Feb 2017 A1
20170091327 Bostic et al. Mar 2017 A1
20170109398 Stearn Apr 2017 A1
20170109399 Stearn et al. Apr 2017 A1
20170109421 Stearn et al. Apr 2017 A1
20170169059 Horowitz et al. Jun 2017 A1
20170262516 Horowitz et al. Sep 2017 A1
20170262517 Horowitz et al. Sep 2017 A1
20170262519 Horowitz et al. Sep 2017 A1
20170262638 Horowitz et al. Sep 2017 A1
20170270176 Horowitz et al. Sep 2017 A1
20170286510 Horowitz et al. Oct 2017 A1
20170286516 Horowitz et al. Oct 2017 A1
20170286517 Horowitz et al. Oct 2017 A1
20170286518 Horowitz et al. Oct 2017 A1
20170322954 Horowitz et al. Nov 2017 A1
20170322996 Horowitz et al. Nov 2017 A1
20170344290 Horowitz et al. Nov 2017 A1
20170344441 Horowitz et al. Nov 2017 A1
20170344618 Horowitz et al. Nov 2017 A1
20170371750 Horowitz et al. Dec 2017 A1
20170371968 Horowitz et al. Dec 2017 A1
20180004804 Merriman et al. Jan 2018 A1
20180095852 Keremane et al. Apr 2018 A1
20180096045 Merriman et al. Apr 2018 A1
20180165338 Kumar et al. Jun 2018 A1
20180300209 Rahut Oct 2018 A1
20180300381 Horowitz et al. Oct 2018 A1
20180300385 Merriman et al. Oct 2018 A1
20180314750 Merriman et al. Nov 2018 A1
20180343131 George et al. Nov 2018 A1
20180365114 Horowitz Dec 2018 A1
20190102410 Horowitz et al. Apr 2019 A1
20190303382 Bostic et al. Oct 2019 A1
Non-Patent Literature Citations (45)
Entry
[No Author Listed], Automated Administration Tasks (SQL Server Agent). https://docs.microsoft.com/en-us/sql/ssms/agent/automated-adminsitration-tasks-sql-server-agent. 2 pages. [downloaded Mar. 4, 2017].
Chang et al., Bigtable: a distributed storage system for structured data. OSDI'06: Seventh Symposium on Operating System Design and Implementation. Nov. 2006.
Cooper et al., PNUTS: Yahoo!'s hosted data serving platform. VLDB Endowment. Aug. 2008.
Decandia et al., Dynamo: Amazon's highly available key-value store. SOSP 2007. Oct. 2004.
Nelson et al., Automate MongoDB with MMS. PowerPoint Presentation. Published Jul. 24, 2014. 27 slides. http://www.slideshare.net/mongodb/mms-automation-mongo-db-world.
Poder, Oracle living books. 2009. <http://tech.e2sn.com/oracle/sql/oracle-execution-plan-operation-reference >.
Stirman, Run MongoDB with Confidence using MMS. PowerPoint Presentation. Published Oct. 6, 2014. 34 slides. http://www.slideshare.net/mongodb/mongo-db-boston-run-mongodb-with-mms-20141001.
Van Renesse et al., Chain replication for supporting high throughput and availability. OSDI. 2004: 91-104.
Walsh et al., Xproc: An XML Pipeline Language. May 11, 2011. <https://www.w3.org/TR/xproc/>.
Wikipedia, Dataflow programming. Oct. 2011. <http://en.wikipedia.org/wiki/Dataflow_programming>.
Wikipedia, Pipeline (Unix). Sep. 2011. <http://en.wikipedia.org/wiki/Pipeline (Unix)>.
Wilkins et al., Migrate DB2 applications to a partitioned database. developerWorks, IBM. Apr. 24, 2008, 33 pages.
U.S. Appl. No. 16/294,227, filed Mar. 6, 2019, Bostic et al.
U.S. Appl. No. 16/525,447, filed Jul. 29, 2019, Horowitz et al.
U.S. Appl. No. 16/456,685, filed Jun. 28, 2019, Horowitz et al.
U.S. Appl. No. 15/074,987, filed Mar. 18, 2016, Merriman.
U.S. Appl. No. 15/654,590, filed Jul. 19, 2017, Horowitz et al.
U.S. Appl. No. 15/706,593, filed Sep. 15, 2017, Merriman et al.
U.S. Appl. No. 15/721,176, filed Sep. 29, 2017, Merriman et al.
U.S. Appl. No. 15/200,721, filed Jul. 1, 2016, Merriman.
U.S. Appl. No. 15/200,975, filed Jul. 1, 2016, Merriman.
U.S. Appl. No. 14/992,225, filed Jan. 11, 2016, Bostic et al.
U.S. Appl. No. 16/035,370, filed Jul. 13, 2018, Horowitz et al.
U.S. Appl. No. 15/605,143, filed May 25, 2017, Horowitz.
U.S. Appl. No. 15/605,391, filed May 25, 2017, Horowitz.
U.S. Appl. No. 15/390,345, filed Dec. 23, 2016, Stearn et al.
U.S. Appl. No. 15/390,351, filed Dec. 23, 2016, Stearn et al.
U.S. Appl. No. 15/390,364, filed Dec. 23, 2016, Stearn et al.
U.S. Appl. No. 15/604,879, filed May 25, 2017, Horowitz.
U.S. Appl. No. 15/604,856, filed May 25, 2017, Horowitz et al.
U.S. Appl. No. 15/605,141, filed May 25, 2017, Horowitz et al.
U.S. Appl. No. 15/605,276, filed May 25, 2017, Horowitz et al.
U.S. Appl. No. 15/605,372, filed May 25, 2017, Horowitz et al.
U.S. Appl. No. 15/605,426, filed May 25, 2017, Horowitz et al.
U.S. Appl. No. 15/627,502, filed Jun. 20, 2017, Horowitz et al.
U.S. Appl. No. 15/627,672, filed Jun. 20, 2017, Horowitz et al.
U.S. Appl. No. 16/013,345, filed Jun. 20, 2018, Horowitz.
U.S. Appl. No. 15/627,613, filed Jun. 20, 2017, Horowitz et al.
U.S. Appl. No. 15/627,631, filed Jun. 20, 2017, Horowitz et al.
U.S. Appl. No. 15/627,645, filed Jun. 20, 2017, Horowitz et al.
U.S. Appl. No. 15/627,656, filed Jun. 20, 2017, Horowitz et al.
U.S. Appl. No. 16/013,720, filed Jun. 20, 2018, Horowitz et al.
U.S. Appl. No. 16/013,706, filed Jun. 20, 2018, Merriman et al.
U.S. Appl. No. 16/013,725, filed Jun. 20, 2018, Merriman et al.
Ongaro et al., In Search of an Understandable Consensus Algorithm. Proceedings of USENIX ATC '14: 2014 USENIX Annual Technical Conference. Philadelphia, PA. Jun. 19-20, 2014; pp. 305-320.
Related Publications (1)
Number Date Country
20170264432 A1 Sep 2017 US
Provisional Applications (3)
Number Date Country
62343440 May 2016 US
62341453 May 2016 US
62232979 Sep 2015 US
Continuation in Parts (2)
Number Date Country
Parent 15604856 May 2017 US
Child 15605512 US
Parent 14992225 Jan 2016 US
Child 15604856 US