KEY MANAGERS FOR DISTRIBUTED COMPUTING SYSTEMS USING KEY SHARING TECHNIQUES

Information

  • Patent Application
  • Publication Number
    20190238323
  • Date Filed
    January 31, 2018
  • Date Published
    August 01, 2019
Abstract
Examples described herein may provide local key managers on computing nodes of distributed computing systems. The local key managers may protect secrets (e.g. cryptographic keys) in the distributed system such that risk of compromise is reduced or eliminated. The local key managers may utilize a master key to protect secrets. The master key may be protected by generating multiple key shares using a key sharing technique (e.g., Shamir's secret sharing). The multiple key shares may be stored on different nodes in the distributed computing system. In some examples, secure processors, such as trusted platform modules (TPMs), may be incorporated in computing nodes of distributed computing systems described herein. The secure processor may aid in securely protecting cryptographic keys in the event of disk or node theft, for example.
Description
TECHNICAL FIELD

Examples described herein relate generally to distributed computing systems. Examples of virtualized systems are described. Key managers are provided in some examples of distributed computing systems described herein to facilitate secure storage of secrets.


BACKGROUND

Computing systems are generally desired to protect sensitive data. Encryption based on keys (e.g. cryptographic keys) is commonly used to secure data, e.g. using symmetric keys or public-private key pairs. Dedicated hardware security modules (HSMs) or key management servers (KMSs) are available to securely store keys used for protecting sensitive data. For distributed systems, local HSMs may be attached to each node, or Network HSMs or KMS products may be used that store keys. However, requiring dedicated HSMs or KMSs may add prohibitive or undesirable expense in some examples.


Distributed systems may store keys locally using password protection. This requires a user to manually supply the password during node restart or service restart. If the password is stored locally, security may be compromised if an attacker obtains control of the computing node (e.g. in the case of disk or node theft), such that they have access to the password, the key encrypted with the password, and the data encrypted with the key.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an example distributed computing system, arranged in accordance with examples described herein.



FIG. 2 is a schematic illustration of a distributed computing system arranged in accordance with examples described herein.



FIG. 3 is a schematic illustration of a distributed computing system arranged in accordance with examples described herein.



FIG. 4 is a schematic illustration of a distributed computing system arranged in accordance with examples described herein.



FIG. 5 is a schematic illustration of a distributed computing system arranged in accordance with examples described herein.



FIG. 6 depicts a block diagram of components of a computing node in accordance with examples described herein.





DETAILED DESCRIPTION

Certain details are set forth below to provide a sufficient understanding of embodiments of the invention. However, it will be clear to one skilled in the art that embodiments of the invention may be practiced without various of these particular details. In some instances, well-known circuits, control signals, timing protocols, and software operations have not been shown in detail in order to avoid unnecessarily obscuring the described embodiments of the invention.


Examples described herein may provide local key managers on computing nodes of distributed computing systems. The local key manager may protect secrets (e.g. cryptographic keys) in the distributed system such that risk of compromise is reduced or eliminated. In some examples, secure crypto processors, such as trusted platform modules (TPMs), may be incorporated in computing nodes of distributed computing systems described herein. The secure processor may securely protect cryptographic keys in the event of disk or node theft, for example.


Examples of key managers described herein may utilize a master key to protect one or more secrets (e.g., other cryptographic keys, access credentials, etc.). Examples of key managers described herein may further protect access to the master key. For example, key managers described herein may modify the master key utilizing a secret sharing technique (e.g., a secret splitting technique such as Shamir's secret sharing) to generate multiple key shares. At least one of the multiple key shares may be stored at another node of the distributed computing system such that at least one key share from another node of the distributed computing system may be required to reconstruct the master key. In some examples, key shares from multiple nodes of the distributed computing system may be required to reconstruct the master key.


Examples described herein may use secure processors, such as TPMs, to securely store keys in a distributed system such that data at rest may remain protected even in the event of theft of a node or multiple nodes. While TPMs are described, any secure crypto processor (e.g. other secure processors, custom FPGAs, etc.) with a key sealed in it and with restricted functionality that prohibits or reduces risk of leaking the key may be used. The hardware sealed key may be used to ultimately protect other generated keys and/or secrets. For example, the hardware sealed key (e.g., a private key of a computing node) may be used to encrypt a key share of a master key (which may be, e.g. a data encryption key and/or used to generate other data encryption keys), and the encrypted key share may be stored locally on the node and/or on other nodes in a distributed system. The metadata about which nodes contain the encrypted master keys and/or key shares may be stored either locally or in a central location.


Each computing node may fetch key shares stored on other nodes using, e.g., a private network connection or similar restrictions, such as firewall rules, that prevent an attacker who steals a node from retrieving the encrypted key shares from outside the cluster. Various mechanisms may be used in other examples to ensure that a node is allowed to retrieve the encrypted key shares required to reconstruct a master key only while the node is part of the cluster.



FIG. 1 is a block diagram of an example distributed computing system, arranged in accordance with examples described herein. The distributed computing system of FIG. 1 generally includes computing node 102 and computing node 112 and storage 140 connected to a network 122. The network 122 may be any type of network capable of routing data transmissions from one network device (e.g., computing node 102, computing node 112, and storage 140) to another. For example, the network 122 may be a local area network (LAN), wide area network (WAN), intranet, Internet, or a combination thereof. The network 122 may be a wired network, a wireless network, or a combination thereof.


The storage 140 may include local storage 124, local storage 130, cloud storage 136, and networked storage 138. The local storage 124 may include, for example, one or more solid state drives (SSD 126) and one or more hard disk drives (HDD 128). Similarly, local storage 130 may include SSD 132 and HDD 134. Local storage 124 and local storage 130 may be directly coupled to, included in, and/or accessible by a respective computing node 102 and/or computing node 112 without communicating via the network 122. Cloud storage 136 may include one or more storage servers that may be located remotely from the computing node 102 and/or computing node 112 and accessed via the network 122. The cloud storage 136 may generally include any type of storage device, such as HDDs, SSDs, or optical drives. Networked storage 138 may include one or more storage devices coupled to and accessed via the network 122. The networked storage 138 may generally include any type of storage device, such as HDDs, SSDs, or optical drives. In various embodiments, the networked storage 138 may be a storage area network (SAN).


The computing node 102 is a computing device for hosting VMs in the distributed computing system of FIG. 1. The computing node 102 may be, for example, a server computer, a laptop computer, a desktop computer, a tablet computer, a smart phone, or any other type of computing device. The computing node 102 may include one or more physical computing components, such as processors. Hardware 150 of the computing node 102 is shown in FIG. 1 and may include local storage 124. TPM 142 may be hardware of the computing node 102 itself, as shown in FIG. 1. Hardware 152 of the computing node 112 is shown in FIG. 1 and may include local storage 130. TPM 144 may be hardware of the computing node 112 itself, as shown in FIG. 1.


The computing node 102 is configured to execute a hypervisor 110, a controller VM 108 and one or more user VMs, such as user VMs 104, 106. The user VMs including user VM 104 and user VM 106 are virtual machine instances executing on the computing node 102. The user VMs including user VM 104 and user VM 106 may share a virtualized pool of physical computing resources such as physical processors and storage (e.g., storage 140). The user VMs including user VM 104 and user VM 106 may each have their own operating system, such as Windows or Linux. While a certain number of user VMs are shown, generally any number may be implemented. Generally, multiple tiers of storage may be included in storage 140. Virtual disks (e.g. “vDisks”) may be structured from storage devices in the storage 140. A vDisk may generally refer to a storage abstraction that may be exposed by services, e.g. controller VMs, described herein, to be used by a user VM. In some examples, vDisks may be exposed using interfaces such as iSCSI (“internet small computer system interface”) or NFS (“network file system”) and may be mounted as a virtual disk on one or more user VMs.


The hypervisor 110 may be any type of hypervisor. For example, the hypervisor 110 may be ESX, ESX(i), Hyper-V, KVM, or any other type of hypervisor. The hypervisor 110 manages the allocation of physical resources (such as storage 140 and physical processors) to VMs (e.g., user VM 104, user VM 106, and controller VM 108) and performs various VM related operations, such as creating new VMs and cloning existing VMs.


Controller VMs described herein, such as the controller VM 108 and/or controller VM 118, may provide services for the user VMs in the computing node. Generally, controller VMs may be used to manage storage and/or I/O activities of the computing node. Controller VMs may in some examples run as virtual machines above hypervisors, such as hypervisor 110 and hypervisor 120. Multiple controller VMs in a distributed system may work together to form a distributed storage system which manages storage 140. Generally, each controller VM, such as controller VM 108 and controller VM 118, may export one or more block devices or NFS server targets which may appear as disks to client VMs (e.g. user VMs). These disks are virtual, at least because they are implemented by software executing in the controller VM. In this manner, to user VMs, controller VMs may appear to be exporting a clustered storage appliance that contains some disks. User data (including the operating system in some examples) in the user VMs may reside on these virtual disks.


The computing node 112 may include user VM 114, user VM 116, a controller VM 118, and a hypervisor 120. The user VM 114, user VM 116, the controller VM 118, and the hypervisor 120 may be implemented similarly to analogous components described above with respect to the computing node 102. For example, the user VM 114 and user VM 116 may be implemented as described above with respect to the user VM 104 and user VM 106. The controller VM 118 may be implemented as described above with respect to controller VM 108. The hypervisor 120 may be implemented as described above with respect to the hypervisor 110. In the embodiment of FIG. 1, the hypervisor 120 may be a different type of hypervisor than the hypervisor 110. For example, the hypervisor 120 may be Hyper-V, while the hypervisor 110 may be ESX(i).


The controller VM 108 and controller VM 118 may communicate with one another via the network 122. By linking the controller VM 108 and controller VM 118 together via the network 122, a distributed network of computing nodes, including computing node 102 and computing node 112, can be created.


Controller VMs, such as controller VM 108 and controller VM 118, may each execute a variety of services and may coordinate, for example, through communication over network 122. Moreover, multiple instances of the same service may be running throughout the distributed system—e.g. a same services stack may be operating on each controller VM. For example, an instance of a service may be running on controller VM 108 and a second instance of the service may be running on controller VM 118. Controller VMs may provide a variety of services which may perform a variety of tasks, including, but not limited to, data deduplication tasks, quality of service (QOS) functions, encryption, and compression.


In some examples, controller VMs may be the primary software component within a node that virtualizes I/O access to hardware resources within storage 140. In this manner, a controller VM may be provided for each computing node in a distributed computing system described herein (e.g. a virtualized data center). Each computing node may include a controller VM to share in an overall workload of the system to handle storage tasks.


Controller VMs described herein may implement a local key manager, such as local key manager 146 and local key manager 148. Other services of the controller VM, such as other service(s) 154 of controller VM 108 or other service(s) 156 of controller VM 118, may utilize secrets which are desirably stored securely. For example, the secrets may be encrypted or otherwise obfuscated, and stored in a distributed manner, for example in storage 140. Accordingly, local key managers described herein may provide access to one or more secrets utilized by other services of the controller VM. Secrets which may be managed by local key managers described herein include encryption keys, such as private encryption keys, identity credentials (e.g. IDs, passwords, and/or certificates), and/or data. Secrets may be used by controller VMs for a variety of purposes, including encrypting and/or decrypting data, or authenticating communications. The secrets may be stored by local key managers described herein encrypted with a master key for the local key manager service. Accordingly, the master key used in multiple or all computing nodes of a distributed computing system may be the same, allowing any computing node with access to the master key to obtain access to the secret(s) stored by any instance of the local key manager in the distributed computing system.


For example, the local key manager 146 may store and/or access secret(s), e.g., in storage 140 encrypted with a master key, MK. During operation, the local key manager 146 may have access to the master key MK. The local key manager 148 may also store and/or access secret(s), e.g., in storage 140, encrypted with the master key MK. The secret(s) accessible by local key manager 148 may be the same or different than the secret(s) accessible by the local key manager 146. During operation, because the local key manager 146 has access to the master key MK, the local key manager 146 may access secret(s) stored by other local key managers in the distributed computing system, such as the local key manager 148, because the local key manager 148 has stored secrets encrypted with the same master key MK. Accordingly, other service(s) 154 and/or other service(s) 156 may store secret(s) using local key manager 146 and/or local key manager 148. This may relieve other service(s) 154 and/or other service(s) 156 of a need to provide their own secure secret storage service. Examples of methods and systems described herein provide further security for the master key MK to reduce or eliminate the ability for MK to be compromised. For example, examples of methods and systems described herein may reduce the ability for the master key MK to be compromised in the event of node or disk theft.


MK may be stored in a memory of the local computing node (e.g., the computing node 102 and computing node 112). When the local key manager of the node (e.g. the local key manager 146 and/or local key manager 148) goes down, such as by power-off or other service disruption, the master key MK will need to be obtained again by the local computing node.


Key managers described herein may utilize key splitting techniques to generate one or more key shares from the master key MK. Generally, key splitting techniques may refer to methods for distributing a secret amongst a group of locations, each of which is allocated a share of the secret (e.g., a key share). The secret can be reconstructed only when a sufficient number of shares are combined together; individual shares are generally of no use on their own (e.g., the secret cannot be reconstructed before the sufficient number of shares are combined).


Accordingly, key managers described herein may utilize the master key (and/or an encrypted version of the master key in some examples) to generate multiple key shares. Generally, each of the key shares may be stored at different computing nodes in the distributed computing system. In some examples, a plurality of key shares may be stored at a particular computing node. Generally, however, each computing node may have fewer than the threshold number of key shares required to reconstruct the master key.


Generally, consider an example distributed system having N nodes. Key managers described herein may generally generate key shares such that K shares are needed to reconstruct the master key. The number K is generally selected to be less than N such that the system has improved fault tolerance (e.g., one or more nodes may be down or inaccessible but it may still be practical to obtain a sufficient number of key shares to reconstruct the master key). In some examples, the number K may be half of the total number N of nodes in the system. For example, in a system having 64 nodes, key shares may be generated such that 32 key shares may be needed to reconstruct the master key. In some examples, the number K may be one-third of the total number of nodes N in a system. In some examples, the number K may be one-quarter of the total number of nodes N in a system. Other fractions may be used in other examples.
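

For illustration, a threshold of this kind could be computed from a node count and a configurable fraction, as in the following sketch. The helper and its defaults are hypothetical and not part of the described systems.

```python
def select_threshold(num_nodes: int, fraction: float = 0.5) -> int:
    """Pick how many key shares (K) are needed to reconstruct the master key.

    fraction=0.5 means roughly half the nodes must contribute a share,
    e.g. 64 nodes -> 32 shares; smaller fractions trade security for
    fault tolerance. This helper and its defaults are illustrative only.
    """
    if num_nodes < 2:
        raise ValueError("a distributed threshold needs at least two nodes")
    k = max(2, int(num_nodes * fraction))
    return min(k, num_nodes)  # K can never exceed the number of nodes


# Example: a 64-node cluster with the one-half policy needs 32 shares.
assert select_threshold(64) == 32
```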


Key shares generated by key managers described herein generally refer to data strings which may be later used by key managers to reconstruct a master key, when a sufficient number of the key shares have been obtained at a single node.


Example key sharing techniques which may be used include key splitting techniques such as Shamir's secret sharing (which may also be referred to as Shamir's Secret Splitting algorithm). Shamir's secret sharing may be used for ensuring quorum for access to sensitive data. The method takes a secret S (e.g., a master key or an encrypted master key) and splits it into n shares, where knowledge of a single share generally provides no knowledge of the original secret S. A parameter to splitting the secret S is the number of shares required to reconstruct S; this parameter may be referred to as the threshold T. If k shares are obtained, where k >= T, then S can be reconstructed. However, S cannot be reconstructed with k < T. In some examples, no information about S is obtained from a number of shares k < T.
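

The following is a minimal sketch of Shamir-style splitting and reconstruction over a prime field, provided only to illustrate the split/threshold behavior described above; it is not the implementation of the described systems, and a production deployment would use a vetted library.

```python
import secrets

# A prime larger than any secret chunk we intend to split (here, > 2**126).
_PRIME = 2**127 - 1  # a Mersenne prime, chosen only for convenience


def split_secret(secret: int, n: int, threshold: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any `threshold` of them reconstruct it."""
    if not 0 <= secret < _PRIME:
        raise ValueError("secret must be smaller than the field prime")
    # Random polynomial of degree threshold-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(_PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation modulo the prime
            y = (y * x + c) % _PRIME
        shares.append((x, y))
    return shares


def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Combine at least `threshold` shares via Lagrange interpolation at x=0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % _PRIME
                den = (den * (xi - xj)) % _PRIME
        secret = (secret + yi * num * pow(den, -1, _PRIME)) % _PRIME
    return secret


# Example: split into 4 shares with threshold 2; any 2 shares recover the secret.
shares = split_secret(secret=123456789, n=4, threshold=2)
assert reconstruct(shares[:2]) == 123456789
```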


In some examples, key sharing techniques described herein may utilize a large prime number (e.g., larger than the secret being split) to generate key shares. In some examples, to reduce and/or minimize a size of the prime number used, the master key MK may be split into portions (e.g., 32 bit chunks in some examples, 16 bit chunks in some examples), and secret sharing techniques are used on each chunk, and the outputs concatenated to form the key shares. To reconstruct MK, the key share may be retrieved, each chunk identified in the key share, and combined to reconstruct a segment of the MK. The segments may be concatenated to form the entire MK.
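

Building on the split_secret/reconstruct sketch above (illustrative only), chunked sharing of a larger master key might look like the following; the 16-bit chunk size and the helper names are assumptions, not details taken from the described systems.

```python
import secrets  # reuses split_secret/reconstruct from the preceding sketch

CHUNK_BITS = 16  # illustrative chunk size; the text mentions 16- or 32-bit chunks


def share_master_key(master_key: bytes, n: int, threshold: int):
    """Split a master key into 16-bit chunks and share each chunk separately.

    Returns, for each of the n nodes, that node's share of every chunk; a
    node's key share is the concatenation of its per-chunk shares.
    """
    chunks = [int.from_bytes(master_key[i:i + 2], "big")
              for i in range(0, len(master_key), 2)]
    per_chunk = [split_secret(c, n, threshold) for c in chunks]
    # Regroup so each node holds its own share of every chunk.
    return [[chunk_shares[node] for chunk_shares in per_chunk]
            for node in range(n)]


def rebuild_master_key(node_shares, key_len: int) -> bytes:
    """Recombine shares from >= threshold nodes chunk by chunk, then concatenate."""
    segments = []
    for chunk_idx in range(key_len // 2):
        shares_for_chunk = [node[chunk_idx] for node in node_shares]
        segments.append(reconstruct(shares_for_chunk).to_bytes(2, "big"))
    return b"".join(segments)


# Example: a 32-byte master key shared across 4 nodes, any 2 rebuild it.
mk = secrets.token_bytes(32)
per_node = share_master_key(mk, n=4, threshold=2)
assert rebuild_master_key(per_node[:2], key_len=32) == mk
```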


Computing nodes described herein may use one or more secure processors, such as TPMs, to further protect the master key utilized by the local key manager. TPM 142 and TPM 144 are shown in FIG. 1. TPM 142 and TPM 144 are hardware implemented physically on the computing node 102 and computing node 112, respectively. Secure processors described herein, such as TPM 142 and TPM 144, may securely store data (e.g., a private key) needed for a node to ultimately obtain the master key MK for the local key manager services in some examples. For example, TPM 142 stores Private Key1. Private Key1 may be stored by the TPM 142, for example, encrypted by a storage root key (SRK) of the TPM 142. In other examples, the Private Key1 encrypted by the SRK may be stored in a location accessible to computing node 102. TPM 142 may be required to obtain access to Private Key1.


To protect a master key, the local key manager 146 may encrypt the master key with a public key for each of the nodes which may utilize the master key (e.g., one encrypted master key may be generated using a public key for node 102 and another encrypted master key may be generated using a public key for node 112). Note that encrypting the master key with a public key may also aid in preventing against network sniffing attacks, because an entity performing network sniffing may not have access to a key needed to decrypt the encrypted master key. The local key manager 146 may utilize key sharing techniques (e.g., Shamir's secret sharing) to generate multiple key shares, a sufficient number of which may be used to reconstruct the master key and/or the encrypted master key. At least one of the key shares may be stored on another computing node such that at least one key share from another computing node is required to reconstruct the master key. When the local key manager 146 of computing node 102 desires to obtain MK (e.g. on being added to a cluster, on startup, or during a recovery from service disruption), the computing node 102 may access at least one key share from another node. In some examples, the computing node 102 may access multiple key shares from other nodes, including accessing at least a sufficient number of key shares to reconstruct MK. In some examples, the computing node 102 itself may store one or more key shares. In examples where the key shares had been generated using an encrypted version of MK, the key shares accessed may be those which may be used to reconstruct the version of MK encrypted with the public key for the computing node 102. Once a sufficient number of key shares have been accessed, the key shares are combined to yield the master key, and/or in some examples to yield the master key encrypted with the public key of the computing node 102. The encrypted version of MK may be decrypted using a key which may be stored at computing node 102 (e.g., the private key of the computing node 102), which may in some examples be protected by the requesting node's secure processor. In other examples, the key used to encrypt MK may be stored without protection by a secure processor, such as being stored in plain text at the requesting node. The computing node 102 may obtain a key share from the computing node 112, combine that key share with a key share stored at the computing node 102 and/or key shares from other node(s), and reconstruct a version of MK encrypted with the public key for the computing node 102. To access the relevant key shares, the local key manager 146 may access metadata specifying which nodes contain the key shares, and how many key shares are required to reconstruct MK. The metadata may be stored in storage 140 in some examples. The computing node 102 may provide the encrypted master key to TPM 142 and request that TPM 142 decrypt the encrypted master key using Private Key1. The TPM 142 may return the MK to computing node 102 for use by local key manager 146. In this manner, a computing node may access two items to obtain MK: (1) information derived from MK (e.g. a key share used to reconstruct an encrypted version of MK) stored at another node; and (2) information stored at the requesting node, which may be protected by a secure processor at the requesting computing node (e.g. the secure processor may be used to decrypt the encrypted version of MK reconstructed from at least the key share obtained from the other node).
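

A hedged sketch of this recovery sequence follows. The names fetch_key_share, combine_shares, read_local_share, the local_store and tpm objects, and the metadata fields are hypothetical placeholders standing in for whatever cluster-internal interfaces a given implementation provides.

```python
def recover_master_key(node_id: str, metadata: dict, local_store, tpm) -> bytes:
    """Sketch of the recovery flow: gather key shares, rebuild the encrypted
    master key E(MK, PubK_node), then ask the node's secure processor (TPM)
    to decrypt it with the sealed private key. All interfaces are hypothetical.
    """
    needed = metadata["threshold"]                  # shares required to rebuild MK
    holders = metadata["share_locations"][node_id]  # nodes holding shares for this node

    shares = [local_store.read_local_share(node_id)]  # a share kept locally, if any
    for other_node in holders:
        if len(shares) >= needed:
            break
        try:
            # Only reachable over the cluster-internal network / firewall rules.
            shares.append(fetch_key_share(other_node, for_node=node_id))
        except ConnectionError:
            continue  # tolerate down nodes as long as `needed` shares are found

    if len(shares) < needed:
        raise RuntimeError("not enough key shares reachable to reconstruct MK")

    encrypted_mk = combine_shares(shares)  # e.g. Shamir reconstruction, as sketched above
    # The private key never leaves the TPM; only the decrypted MK is returned.
    return tpm.decrypt(encrypted_mk)
```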


Analogously, when seeking to reconstruct MK, the computing node 112 may obtain a key share from computing node 102. Additional key shares may be accessed on the computing node 112 and/or other computing nodes until a sufficient number have been accessed to reconstruct MK. To request the key shares, the local key manager 148 may access metadata specifying which nodes contain the key shares, and how many key shares are required to reconstruct MK. The metadata may be stored in storage 140. The computing node 112 may combine the key shares to yield the master key and/or an encrypted version of the master key encrypted by the public key for the computing node 112. The computing node 112 may provide the encrypted master key to TPM 144 and request that TPM 144 decrypt the encrypted master key using Private Key2. The TPM 144 may return the MK to computing node 112 for use by local key manager 148. Note that no one node contains sufficient key shares to reconstruct the master key without access to another node or nodes. In this manner, theft of computing node 102 and/or computing node 112 would not result in sufficient information to obtain MK.


In some examples, additional security may be provided by setting passwords (e.g. cryptographically random passwords) for each TPM's storage root key (SRK). The SRK password may be stored in a distributed fashion on other nodes without being stored on the local node. In this manner, an attacker may need to fetch both the key shares and the SRK password to obtain the master key.


Accordingly, local key managers described herein, such as local key manager 146 and local key manager 148, may utilize a master key which, in order to be obtained at a node, requires data stored at computing nodes of the distributed system other than the computing node at which it is being obtained. Local key managers described herein may modify the master key used by the local key managers to provide information derived from the master key. For example, local key managers described herein may utilize one or more secret sharing techniques to generate multiple key shares. In some examples, the master key may be encrypted before and/or after the key shares are generated. Other modifications may be used in other examples. In some examples, the master key and/or key shares may be encrypted using a public key for the computing node on which the local key manager is running. The master key and/or key shares may then be decrypted using the private key for that computing node, which may, in some examples, be protected by a secure crypto processor. The local key manager, e.g. local key manager 146, may store at least one of the key shares at a computing node other than the computing node 102. For example, local key manager 146 may store at least one key share at computing node 112, e.g. using local storage 130.


Accordingly, local key managers described herein may provide secure secret storage. Any number of secrets may be stored by local key managers described herein. In some examples, the number and size of secrets stored and/or managed by the local key manager is such that the secrets may be stored on the computing node of the local key manager, e.g. in the file system. The master key MK used to encrypt secrets stored by local key managers in a distributed computing system may generally be shared (e.g. the same) at each computing node of the computing system, such that each node having access to MK may access secrets stored at other nodes. For example, MK may be used to encrypt any number of keys used by other services (e.g. the other service(s) 154 and/or other service(s) 156). The encrypted keys may be stored by the local key manager.


Local key managers may store at least one key share at another node of the distributed computing system, different from the node storing information which may be used to obtain the master key. Generally, no single computing node may store a sufficient number of key shares to reconstruct the master key. In some examples, each key share needed to obtain a master key may be stored at a different computing node. In this manner, in order to obtain the master key, at least one key share from another computing node of the distributed system must be obtained. Note that after shutdown or powerdown of a local key manager, the master key used by the service is no longer available to the computing node. The master key must be obtained at startup or power on. Accordingly, even if an attacker obtains physical possession of a computing node (e.g. through theft), it may not be possible to generate the master key used to protect secrets stored by the local key manager, because data from other nodes of the distributed computing system may be necessary to obtain the master key.


On restarting or powering up a computing node, the computing node, e.g. a local key manager, running on the computing node, may request the key share derived from the master key from one or more other computing nodes at which it may be stored. Metadata specifying which computing nodes store which key shares may be used to identify nodes to access to obtain a sufficient number of key shares. Access to the information may be restricted to requests from within the distributed computing system, e.g. by restricting access to be from network 122 and/or use of firewall or other access rules. The local key manager may combine a retrieved key share with additional key shares stored at the computing node or on other computing nodes to obtain the master key or an encrypted version of the master key.
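

For illustration only, the share-location metadata mentioned here might resemble the following structure; the field names and node identifiers are invented and not taken from the described systems.

```python
# Hypothetical share-location metadata, stored in the shared storage pool.
# For each node, it records which other nodes hold shares of that node's
# encrypted master key and how many shares are required to reconstruct it.
SHARE_METADATA = {
    "threshold": 2,  # shares needed to reconstruct E(MK, PubK_i)
    "share_locations": {
        "node-1": ["node-2", "node-3", "node-4"],
        "node-2": ["node-1", "node-3", "node-4"],
        "node-3": ["node-1", "node-2", "node-4"],
        "node-4": ["node-1", "node-2", "node-3"],
    },
}
```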


Each computing node may generally have a different public/private key pair, such as Private Key1 for computing node 102 shown in FIG. 1 and Private Key2 for computing node 112. The master key may be encrypted with the public key for a node prior to generating the key shares and/or the key shares may themselves be encrypted with the public key for the node. Accordingly, the private key for the node may be required to reconstruct the MK. In this manner, two items may be required to obtain a master key used by a local key manager: (1) information stored at another node (e.g., a key share); and (2) information stored at the local node (e.g., the private key). The information stored at the local node may be protected physically, e.g. using a secure crypto processor such as a TPM.


While described herein as keys, generally local key managers may provide protection for any secrets.


During use, secrets protected by local key managers described herein may be used for encryption and/or decryption. For example, the local key manager 146 may receive a request for a data encryption key (e.g. from one or more of the other service(s) 154). The local key manager 146 may have a stored secret which is a data encryption key for use by the other service(s) 154 for data encryption. The data encryption key may be encrypted by the master key. The local key manager 146 may utilize the master key to decrypt the data encryption key and provide the data encryption key to the other service(s) 154. The other service(s) 154 may utilize the data encryption key to encrypt and/or decrypt data. The encrypted data may be stored, e.g. in local storage 124 or other storage in storage 140 accessible to computing node 102.
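

A hedged sketch of this interaction follows. The local_key_manager.get_secret call and the storage object are hypothetical interfaces, and the AES-GCM calls use the third-party cryptography package purely for illustration; none of these names come from the described systems.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_and_store(local_key_manager, storage, name: str, plaintext: bytes) -> None:
    # The key manager decrypts the stored DEK with the master key MK internally;
    # the calling service only ever sees the plaintext DEK, never MK itself.
    dek = local_key_manager.get_secret("DEK1")  # hypothetical API
    nonce = os.urandom(12)                      # 96-bit nonce for AES-GCM
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, None)
    storage.write(name, nonce + ciphertext)


def read_and_decrypt(local_key_manager, storage, name: str) -> bytes:
    dek = local_key_manager.get_secret("DEK1")
    blob = storage.read(name)
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(dek).decrypt(nonce, ciphertext, None)
```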


During use, controller VMs may utilize secrets protected by local key managers for authentication of communications. For example, when a service at computing node 102 wants to communicate with another service, it may utilize local key manager 146 to store its identity credentials that can be used to authenticate to other service(s). For example, a key or other secret (e.g., a password) protected by the local key manager 146 may be used to authenticate a communication sent and/or received by a service running on computing node 102.


In other examples, keys or other secrets stored inside local key managers described herein may be used to provide keys for applications which may utilize computing nodes described herein. For example, one of the other service(s) 154 may be a key management service. The key management service may itself have a key management service master key protected by the local key manager 146. The key management service may protect secrets for other user VMs and/or user applications using the key management service master key.



FIG. 2 is a schematic illustration of a distributed computing system arranged in accordance with examples described herein. The distributed computing system of FIG. 2 includes node 202, node 210, node 218, and node 226. Any of these nodes may be implemented by or used to implement, for example computing node 102 and/or computing node 112 of FIG. 1 in some examples. While four nodes are shown in FIG. 2, any number of nodes may be used in some examples. As described with reference to FIG. 1, the nodes may generally be connected over a network, and may virtualize and share a pool of storage resources. Storage details are not shown in FIG. 2 to avoid obscuring the detail regarding storage and management of keys provided by local key managers described herein.


Each of the nodes in FIG. 2 includes a local key manager—local key manager 206, local key manager 214, local key manager 222, and local key manager 230, respectively. Generally, each node of a distributed computing system described herein may run a local key manager. In some examples, certain nodes may not include a local key manager, however. Local key managers described herein are generally software services which run (e.g. execute) on one or more processing units (e.g. processors) of a computing node.


Each of the nodes in FIG. 2 further includes a TPM. Generally, each node of a distributed computing system described herein may include a secure crypto processor (e.g., a TPM) or other mechanism for securing a secret (e.g., password protection). In the example of FIG. 2, each node includes a TPM—TPM 208, TPM 216, TPM 224, and TPM 232, respectively. The secure crypto processor of each node may protect a secret (e.g., a key) that may be used to reconstruct a master key used by local key managers described herein. In the example of FIG. 2, TPM 208 protects Private Key1, TPM 216 protects Private Key2, TPM 224 protects Private Key3, and TPM 232 protects Private Key4.


Local storage for each node is also illustrated in FIG. 2—local storage 204, local storage 212, local storage 220, and local storage 228, respectively. FIG. 2 illustrates what key share is stored at each node to be combined to reconstruct master keys used by the local key managers in the distributed computing system. The local storage shown forms part of the physical hardware of the node in some examples.


Each local key manager may utilize a master key (“MK” as shown in FIG. 2) to encrypt other secrets (e.g. keys) accessible to the local key manager. As shown in FIG. 2, the master key utilized by each local key manager in the distributed computing system may be the same, or different master keys may be used in other examples. In the example of FIG. 2, each of the local key managers—local key manager 206, local key manager 214, local key manager 222, and local key manager 230—utilize a same master key MK. The master key may be utilized to encrypt secrets protected by the local key managers. Local key managers may have access to any number of secrets, such as data encryption keys (DEKs). The local key managers of FIG. 2 are shown as having access to secrets 250, including data encryption keys DEK1-DEK6. The secrets may be stored in distributed storage, such as storage 140 of FIG. 1, which may be accessible to all nodes in some examples. The secrets 250 may be stored in distributed storage encrypted by MK. In this manner, MK will be needed to obtain any of the secrets after shutdown and/or power down of the node. Local key managers may generally have access to any number of secrets (e.g. keys) encrypted with a master key. As described herein, key share(s) from other node(s) may be needed to obtain the master key.


The master key utilized by a local key manager may be encrypted utilizing a public key of the local computing node. For example, MK at node 202 may be encrypted utilizing a public key for node 202. In some examples, MK may be encrypted prior to generating key shares. In other examples, the key shares may be generated from MK and the key shares may be encrypted with public keys described herein. In other examples, key shares may be generated from unencrypted MK and the key shares may not be encrypted. In the example of FIG. 2, MK may be encrypted with the public key for each of the nodes prior to generating key shares. Accordingly, a local key manager (e.g., local key manager 206) may generate MK encrypted by the public key for the node 202 (e.g., PubK1), which may be written E(MK, PubK1). The same local key manager (or different local key manager(s) in some examples) may generate MK encrypted by the public key for other nodes in the system (e.g., node 210—PubK2, node 218—PubK3, and node 226—PubK4). These may accordingly be written E(MK, PubK2), E(MK, PubK3), and E(MK, PubK4). The local key manager may then utilize secret sharing techniques to generate multiple key shares for each encrypted version of MK. In the example of FIG. 2, four key shares are generated from each version of encrypted MK, although other numbers of key shares may be generated in other examples.


The key shares for E(MK, PubK1) may be written S1(E(MK, PubK1)), S2(E(MK, PubK1)), S3(E(MK, PubK1)), and S4(E(MK, PubK1)). The key shares for E(MK, PubK2) may be written S1(E(MK, PubK2)), S2(E(MK, PubK2)), S3(E(MK, PubK2)), and S4(E(MK, PubK2)). The key shares for E(MK, PubK3) may be written S1(E(MK, PubK3)), S2(E(MK, PubK3)), S3(E(MK, PubK3)), and S4(E(MK, PubK3)). The key shares for E(MK, PubK4) may be written S1(E(MK, PubK4)), S2(E(MK, PubK4)), S3(E(MK, PubK4)), and S4(E(MK, PubK4)). At least one key share needed to reconstruct the master key (or to reconstruct the encrypted master key in some examples) may be stored at another node. In some examples, all key shares may be stored at other nodes. In some examples, half, one-quarter, one-third, or two-thirds of the key shares may be stored on other nodes. Other fractions are possible in other examples. In the example of FIG. 2, each node stores one key share for each version of the encrypted master key. For example, node 202 stores S1(E(MK, PubK1)), S1(E(MK, PubK2)), S1(E(MK, PubK3)), and S1(E(MK, PubK4)). Node 210 stores S2(E(MK, PubK1)), S2(E(MK, PubK2)), S2(E(MK, PubK3)), and S2(E(MK, PubK4)). Node 218 stores S3(E(MK, PubK1)), S3(E(MK, PubK2)), S3(E(MK, PubK3)), and S3(E(MK, PubK4)). Node 226 stores S4(E(MK, PubK1)), S4(E(MK, PubK2)), S4(E(MK, PubK3)), and S4(E(MK, PubK4)). In the example of FIG. 2, two key shares may be needed to reconstruct the encrypted master key. Accordingly, each node itself stores one key share and may request a second key share from any of the other nodes to reconstruct the encrypted master key. In some examples, more than two key shares may be needed to reconstruct the encrypted master key and additional key shares may need to be requested by a local key manager to reconstruct the encrypted master key.
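

As an illustrative sketch of this layout (not the described implementation), the share placement of FIG. 2 could be generated as follows; encrypt_with_public_key and split_bytes_into_shares are hypothetical stand-ins for the public-key encryption and secret-sharing steps described above.

```python
def build_share_layout(master_key: bytes, public_keys: dict) -> dict:
    """Reproduce the FIG. 2 layout as data: for every node j, encrypt MK with
    PubKj, split E(MK, PubKj) into one share per node (threshold 2 here), and
    give node i the i-th share of every encrypted copy. The helpers called
    here are placeholders for the earlier sketches."""
    nodes = list(public_keys)  # e.g. ["node-1", "node-2", "node-3", "node-4"]
    layout = {node: {} for node in nodes}
    for target in nodes:
        e_mk = encrypt_with_public_key(master_key, public_keys[target])  # E(MK, PubKj)
        shares = split_bytes_into_shares(e_mk, n=len(nodes), threshold=2)
        for i, holder in enumerate(nodes):
            # holder i stores share Si(E(MK, PubKj)) for every target node j
            layout[holder][target] = shares[i]
    return layout
```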


The example of FIG. 2 has depicted an example where four key shares may be generated based on each public key-encrypted master key. More generally, in some examples N key shares may be generated based on each encrypted master key and stored across any number of nodes.


The data used to decrypt the master key may be the private key for a node. The private key may be protected by a secure crypto processor, such as TPM 208. The encryption of MK using PubK1 yields an encrypted master key—e.g. encrypted data represented as E(MK, PubK1). When a sufficient number of key shares are combined, the encrypted MK may be reconstructed—e.g., E(MK, PubK1). To decrypt E(MK, PubK1), Private Key1 may be used. Each node may reconstruct an encrypted master key representing the master key encrypted with a public key for that node. The private key for that node may be used to decrypt the encrypted MK. For example, node 210 may decrypt E(MK, PubK2) utilizing Private Key2 to reconstruct MK. The node 218 may decrypt E(MK, PubK3) utilizing Private Key3 to reconstruct MK. The node 226 may decrypt E(MK, PubK4) utilizing Private Key4 to reconstruct MK.


In some examples, no single node may store a sufficient number of key shares to reconstruct an encrypted version of a master key, or the master key itself. In some examples, at least one key share necessary to reconstruct the master key is stored at a node other than the node which may request the master key. On restart or powerup, to obtain a sufficient number of key shares to reconstruct the master key or an encrypted version of the master key, the local key manager 206 may request S2(E(MK, PubK1)) from node 210, S3(E(MK, PubK1)) from node 218, and/or S4(E(MK, PubK1)) from node 226. Metadata specifying where the key shares are stored and/or a number of key shares necessary to reconstruct the master key or the encrypted master key may be stored in the distributed computing system accessible to node 202. On receipt of key shares from another node, local key manager 206 may combine the key shares with one another and/or with S1(E(MK, PubK1)) stored at node 202 to reconstruct E(MK, PubK1). The local key manager 206 may decrypt E(MK, PubK1) using Private Key1 protected using TPM 208, thereby recovering the master key. Once the master key MK is recovered, the local key manager 206 may recover a requested secret of secrets 250 by decrypting the stored version of that secret with MK. In this manner, theft of node 202 would not result in an ability to recover the master key, since additional key shares could not be obtained.


In this manner, local key managers described herein may be provided on every controller VM in a distributed computing system. The local key managers may use a shared master key (e.g. MK). However, in some examples, one or more local key managers may have a different master key than one or more other local key managers. The master key may only be fetched by the local key manager, however, if the local key manager is part of the distributed computing system (e.g. cluster). Local key managers may create shared keys and/or local keys and/or other secrets, which may be encrypted with the master key. Shared keys may be accessed by multiple controller VMs in the distributed computing system, or all controller VMs in the distributed computing system in some examples. For example, identity asymmetric keys for services may be provided as shared keys by local key managers described herein. Instances of a service may store a shared data encryption key and use it to provide encryption capabilities to other services. In some examples, shared keys may be used by distributed databases to encrypt sensitive data and/or as an authentication secret across components distributed across nodes. Local keys may be accessed only by a single computing node and may be used, for example, for self-encrypting drive encryption at a local node or for authentication purposes.


While the description of FIG. 2 has been provided in the context of storage of data encryption keys, it is to be understood that other secrets may be stored and protected in other examples.


While examples described herein, such as the example shown and described with reference to FIG. 2, refer to the distribution of key shares across multiple nodes of a computing system, in some examples, key shares may be stored on a single node of a computing system and distributed amongst multiple disks (e.g., distributed amongst multiple physical or virtual disks) of the computing node to provide protection against disk theft. Accordingly, in some examples a sufficient number of key shares to reconstruct an encrypted version of a master key, or the master key itself, may be stored at a single node in a computing system. However, each physical (or virtual) storage disk of the computing node may not store a sufficient number of key shares to reconstruct the encrypted version of the master key, or the master key itself. Accordingly, key shares from at least two disks (e.g., physical or virtual disks) may be necessary to reconstruct the encrypted version of the master key, or the master key itself—providing some protection against disk theft. To reconstruct the encrypted master key and/or the master key, the local key manager may request key shares from multiple (e.g., at least two) of the disks of a computing node (e.g., physical disks within the local storage of the computing node and/or virtual disks associated with one or more computing nodes).



FIG. 3 is a schematic illustration of a distributed computing system arranged in accordance with examples described herein. The system of FIG. 3 depicts several components used when local key managers described herein are used to facilitate encryption and/or decryption of stored data. The distributed computing system of FIG. 3 includes two computing nodes—one with controller VM 302 and another with controller VM 304. Any number of computing nodes may be present in some examples.


Each computing node in the distributed computing system of FIG. 3 includes a local key manager—e.g. local key manager 306 and local key manager 330. Moreover, each controller VM includes some other service—e.g. other service 310 and other service 312. The other service may be a service providing encryption and/or decryption of data, or having another function involving the encryption and/or decryption of data. Any number of other services may be provided on each controller VM of a distributed computing system described herein, and the services provided at one controller VM may be the same or different than services provided on another controller VM.


Each computing node may have certain physical hardware resources, including node hardware 308 associated with controller VM 302 and node hardware 314 associated with controller VM 304. The node hardware may include one or more hard disk drives (HDD) and/or self-encrypting drive (SED) and/or solid state drives (SSD)—e.g. HDD/SED 316, SSD 318 connected to controller VM 302 and HDD/SED 324 and SSD 326 connected to controller VM 304. As described herein, each computing node may further include a secure crypto processor, such as TPM 320 in communication with controller VM 302 and TPM 328 in communication with controller VM 304.


Example operation of local key manager 330 in use for services encrypting and/or decrypting data will be described. Local key manager 306 may operate in an analogous manner. During operation, other service 310 may have a need to encrypt and/or decrypt data. Other service 310 may request a data encryption key from local key manager 330. The local key manager 330 may provide a data encryption key to the other service 310. The local key manager 330, as described herein, may provide a data encryption key which may not be obtained without the use of data stored on other nodes of the distributed computing system and additionally in some examples without further use of data protected by a secure crypto processor (e.g. TPM 320). The other service 310 may utilize the data encryption key to encrypt data and store the encrypted data in HDD/SED 316, SSD 318, or other storage accessible to the controller VM 302. The other service 310 may utilize the data encryption key to decrypt data stored on HDD/SED 316, SSD 318, or other storage accessible to controller VM 302. Note that, in the event of theft of the computing node including the controller VM 302 and its local storage, an attacker would be unable to decrypt the stored data because they would be unable to obtain the data encryption key without obtaining data (e.g., key shares) from other computing nodes in the system.



FIG. 4 is a schematic illustration of a distributed computing system arranged in accordance with examples described herein. The system of FIG. 4 depicts several components used when local key managers described herein are used to authenticate communications between nodes in the distributed computing system. The distributed computing system of FIG. 4 includes components which have been described with reference to FIG. 3 and bear the same reference numerals. Each controller VM includes some other service—e.g. other service1 402, other service2 406, other service1 404, and other service2 408. The other services may be services which communicate with other computing nodes (e.g. the service1 402 and service1 404 may communicate with one another, e.g. over a network such as the network 122). For example, other service1 402 and other service1 404 may be instances of a same service operating on the controller VM 304 and controller VM 302, respectively. It may be desirable to ensure that communications originating from and/or destined for an instance of other service1 are only sent to and/or received by other instances of service1 in some examples. Any number of other services may be provided on each controller VM of a distributed computing system described herein, and the services provided at one controller VM may be the same or different than services provided on another controller VM.


Example operation of local key manager 330 in use for services in authenticating communications will be described. Local key manager 306 may operate in an analogous manner. During operation, other service1 404 may have a need to communicate with another node and/or receive a communication from another node, such as other service1 402 on controller VM 304. Other service1 404 may request identity credentials from local key manager 330. The local key manager 330 may provide identity credentials (e.g. one or more private key(s)) to the other service1 404. In some examples, the local key manager 330 may access a permissions database or other data structure correlating identity credentials with accounts which may access the identity credentials. Accordingly, local key manager 330 may be able to provide service1 identity credentials to other service1 404, but may not provide service1 identity credentials to other service2 406. Similarly, the local key manager 306 may provide service1 identity credentials to the other service1 402 but may not provide service1 identity credentials to other service2 408. The identity credentials provided by the local key manager 330 and/or local key manager 306 may be used to authenticate communications, e.g. encrypt and/or decrypt communications between other service1 404 and other service1 402.


The local key manager 330, as described herein, may protect identity credentials using a master key which may not be obtained without the use of data (e.g., key shares) stored on other nodes of the distributed computing system and additionally in some examples without further use of data (e.g., a private key) protected by a secure crypto processor (e.g. TPM 320). In some examples, all instances of a same service on each computing node in a system may utilize the same identity credentials (e.g. encryption key), and the identity credentials may be used to identify the service from which the communication originated. For example, other service1 404 and other service1 402 may be two instances of a same service. Each of other service1 404 and other service1 402 may utilize a same identity credential to encode and/or authenticate communications.
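

As one hedged illustration of how a shared identity credential might be used (the description does not mandate a particular construction), instances of a service could compute and verify an HMAC over each message using the credential obtained from the local key manager; get_identity_credential is a hypothetical interface.

```python
import hashlib
import hmac


def sign_message(local_key_manager, service_name: str, message: bytes) -> bytes:
    # Fetch the shared per-service credential; hypothetical API.
    credential = local_key_manager.get_identity_credential(service_name)
    return hmac.new(credential, message, hashlib.sha256).digest()


def verify_message(local_key_manager, service_name: str,
                   message: bytes, tag: bytes) -> bool:
    credential = local_key_manager.get_identity_credential(service_name)
    expected = hmac.new(credential, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison
```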


While the other services 402 and 404 utilizing the local key managers to facilitate authentication have been described separately from the other services in FIG. 3 used to encrypt and/or decrypt data, it is to be understood that some computing nodes may contain both types of services and/or in some examples one service may perform both functions. For example, other service1 404 of FIG. 4 and other service 310 of FIG. 3 may be included on a same node in some examples.



FIG. 5 is a schematic illustration of a distributed computing system arranged in accordance with examples described herein. The system of FIG. 5 depicts several components used when local key managers described herein are used to support key management services which may be exposed to user databases and/or applications (e.g. user VMs). The distributed computing system of FIG. 5 includes a number of components which are analogous to those in FIG. 3 and FIG. 4 and bear the same reference numerals.


Each controller VM in FIG. 5 includes a key management service, e.g. key management service 504 and key management service 506. The key management services may utilize a key management service master key to protect secrets used by user databases and/or applications (e.g. user VMs). The key management service master key may be a secret which is protected by local key managers described herein. Accordingly, key management services may obtain key management services master keys from local key managers (e.g. local key manager 330 and/or local key manager 306) and utilize the key management services master keys to protect secrets that may be provided to user databases and/or user applications (e.g. user VMs), such as user database 502 and/or user app 508. Any number of other services may be provided on each controller VM of a distributed computing system described herein, and the services provided at one controller VM may be the same or different than services provided on another controller VM.


Example operation of local key manager 330 and key management service 504 will be described. Local key manager 306 and key management service 506 may operate in an analogous manner. During operation, key management service 504 may receive a request for a secret (e.g. a key) from a user database or user application (e.g. a user VM) such as user database 502. The key management service 504 may decrypt the requested secret using a key management service master key. The key management service master key may be protected by the local key manager 330. For example, the key management service master key may be encrypted with the master key of the local key manager 330. As described herein, the master key of the local key manager 330 may be protected using data stored at other computing nodes and/or stored using crypto processor(s). The requested secret provided by the key management service 504 may be used by the user application and/or user database for any purpose for which a secret may be desirable or utilized. In this manner, the distributed computing system may itself provide keys to user applications, and expensive dedicated key services may not be needed in some examples.
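

A hedged sketch of this layered key hierarchy follows. The KeyManagementService class, the get_secret call, and the wrap/unwrap helpers are hypothetical placeholders, not interfaces defined by the described systems.

```python
class KeyManagementService:
    """Illustrative only: a key management service whose own master key is a
    secret protected by the local key manager, and which in turn protects
    secrets handed to user databases/applications."""

    def __init__(self, local_key_manager):
        # Decrypting this secret requires MK, which the local key manager can
        # only obtain with key shares from other nodes (and, in some examples,
        # a TPM-held private key).
        self._kms_master_key = local_key_manager.get_secret("kms-master-key")

    def get_user_secret(self, wrapped_secret: bytes) -> bytes:
        # Unwrap (decrypt) a user secret with the KMS master key.
        return unwrap(wrapped_secret, self._kms_master_key)  # hypothetical helper

    def store_user_secret(self, plaintext_secret: bytes) -> bytes:
        # Wrap (encrypt) a user secret before it is stored.
        return wrap(plaintext_secret, self._kms_master_key)  # hypothetical helper
```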


While the operation of key management service 504 and key management service 506 has been described with reference to FIG. 5 separately from the operation of other services, it is to be understood that in some examples, different services may be provided on each computing node. For example, the other service 310 of FIG. 3 and/or the other service 404 of FIG. 4 may both be provided on controller VM 302 together with the key management service 504. In some examples, the services may be integrated together into a single service (e.g. a service which may both authenticate communications and encrypt/decrypt data and/or be exposed to provide keys to users).



FIG. 6 depicts a block diagram of components of a computing node in accordance with an embodiment of the present invention. It should be appreciated that FIG. 6 provides only an illustration of one implementation and does not imply any limitations with regard to the environments in which different embodiments may be implemented. Many modifications to the depicted environment may be made. The computing node of FIG. 6 may be used to implement and/or be implemented by any of the computing nodes described herein.


The computing node of FIG. 6 includes a communications fabric 602, which provides communications between one or more processor(s) 604, memory 606, local storage 608, communications unit 610, and I/O interface(s) 612. The communications fabric 602 can be implemented with any architecture designed for passing data and/or control information between processors (such as microprocessors, communications and network processors, etc.), system memory, peripheral devices, and any other hardware components within a system. For example, the communications fabric 602 can be implemented with one or more buses.


The memory 606 and the local storage 608 are computer-readable storage media. In this embodiment, the memory 606 includes random access memory (RAM) 614 and cache 616. In general, the memory 606 can include any suitable volatile or non-volatile computer-readable storage media. The local storage 608 may be implemented as described above with respect to local storage 124 and/or local storage 130. In this embodiment, the local storage 608 includes an SSD 622 and an HDD 624, which may be implemented as described above with respect to SSD 126, SSD 132 and HDD 128, HDD 134, respectively.


Various computer instructions, programs, files, images, etc. may be stored in local storage 608 for execution by one or more of the respective processor(s) 604 via one or more memories of memory 606. In some examples, local storage 608 includes a magnetic HDD 624. Alternatively, or in addition to a magnetic hard disk drive, local storage 608 can include the SSD 622, a semiconductor storage device, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, or any other computer-readable storage media that is capable of storing program instructions or digital information. As described herein, in some examples the processor(s) 604 may include one or more crypto processors, such as a TPM. In other examples, a TPM may be provided as a separate component from the processor(s) 604.
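As a conceptual illustration only, the role of a crypto processor such as a TPM may be sketched as an object that holds a private key internally and accepts only ciphertext for decryption. The class SoftwareCryptoProcessor below is a hypothetical software stand-in, not a TPM interface; it uses RSA-OAEP from Python's cryptography package under that assumption.

    # Hypothetical software stand-in for a crypto processor (e.g. a TPM):
    # the private key stays inside the object; callers submit ciphertext only.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    _OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None)

    class SoftwareCryptoProcessor:
        def __init__(self):
            # In a real TPM, the private key would be generated and kept on-chip.
            self._private_key = rsa.generate_private_key(public_exponent=65537,
                                                         key_size=2048)

        def public_key(self):
            return self._private_key.public_key()

        def decrypt(self, ciphertext: bytes) -> bytes:
            # Only ciphertext crosses this boundary; the private key never does.
            return self._private_key.decrypt(ciphertext, _OAEP)

    # Usage sketch: protect a short secret (e.g. an encrypted master key blob)
    # so it can only be recovered on the node holding the crypto processor.
    # cp = SoftwareCryptoProcessor()
    # blob = cp.public_key().encrypt(b"example-master-key-bytes", _OAEP)
    # assert cp.decrypt(blob) == b"example-master-key-bytes"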


The media used by local storage 608 may also be removable. For example, a removable hard drive may be used for local storage 608. Other examples include optical and magnetic disks, thumb drives, and smart cards that are inserted into a drive for transfer onto another computer-readable storage medium that is also part of local storage 608.


Communications unit 610, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit 610 includes one or more network interface cards. Communications unit 610 may provide communications through the use of either or both physical and wireless communications links.


I/O interface(s) 612 allows for input and output of data with other devices that may be connected to the computing node of FIG. 6. For example, I/O interface(s) 612 may provide a connection to external device(s) 618 such as a keyboard, a keypad, a touch screen, and/or some other suitable input device. External device(s) 618 can also include portable computer-readable storage media such as, for example, thumb drives, portable optical or magnetic disks, and memory cards. Software (e.g. executable instructions) and data used to practice embodiments described herein can be stored on such portable computer-readable storage media and can be loaded onto local storage 608 via I/O interface(s) 612. I/O interface(s) 612 also connects to a display 620.


Display 620 provides a mechanism to display data to a user and may be, for example, a computer monitor.


From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention.

Claims
  • 1. A method comprising: generating a master key at a first node of a distributed computing system; modifying the master key utilizing a secret sharing technique to provide multiple key shares; storing at least one key share of the multiple key shares at another node of the distributed computing system, different from the first node; after restart of the first node, requesting the at least one key share from the another node of the distributed computing system; and combining the at least one key share with another key share stored at the first node or another node to obtain the master key.
  • 2. The method of claim 1, wherein the secret sharing technique utilizes Shamir's secret sharing.
  • 3. The method of claim 1, further comprising encrypting the master key with a public key of the first node to provide an encrypted master key and wherein modifying the master key using the secret sharing technique comprises generating the multiple key shares from the encrypted master key.
  • 4. The method of claim 3, wherein the combining obtains the encrypted master key, the method further comprising decrypting the encrypted master key using a private key of the first node, wherein the private key of the first node is protected using at least one secure crypto processor at the first node.
  • 5. The method of claim 4, wherein the at least one secure crypto processor comprises a trusted platform module (TPM).
  • 6. The method of claim 1, further comprising storing each of the multiple key shares derived from the master key at respective multiple nodes of the distributed computing system other than the first node.
  • 7. The method of claim 1, further comprising: receiving data for encryption; decrypting an encrypted data encryption key using the master key to provide a data encryption key; encrypting the data using the data encryption key to provide encrypted data; and storing the encrypted data at the first node.
  • 8. The method of claim 1, further comprising fetching identity credentials using the master key, and utilizing the identity credentials to communicate with another node.
  • 9. The method of claim 1, further comprising: decrypting a key management services master key using the master key; decrypting a data encryption key using the key management services master key; and providing the data encryption key to a user application in communication with the first node.
  • 10. A computing node comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, cause the computing node to: generate a master key at the computing node; modify the master key utilizing a secret sharing technique to provide multiple key shares; store the multiple key shares across multiple disks of the computing node, such that key shares from at least two of the multiple disks are necessary to reconstruct the master key; after restart of the computing node, request the key shares from the at least two of the multiple disks of the computing node; and combine the key shares from at least two of the multiple disks of the computing node to obtain the master key.
  • 11. The computing node of claim 10, wherein the secret sharing technique utilizes Shamir's secret sharing.
  • 12. The computing node of claim 10, wherein the instructions further cause the computing node to encrypt the master key with a public key of the computing node to obtain an encrypted master key and wherein the encrypted master key is used to generate the multiple key shares.
  • 13. The computing node of claim 12, further comprising a secure crypto processor configured to protect a private key of the computing node, wherein said combine action provides the encrypted master key, and wherein the instructions further cause the computing node to decrypt the encrypted master key using the private key of the computing node.
  • 14. The computing node of claim 13, wherein the secure crypto processor comprises a trusted platform module (TPM).
  • 15. The computing node of claim 10, wherein the instructions further cause the computing node to: receive data for encryption; decrypt an encrypted data encryption key utilizing the master key to provide a data encryption key; encrypt the data using the data encryption key to provide encrypted data; and store the encrypted data at the computing node.
  • 16. The computing node of claim 10, wherein the instructions further cause the computing node to store the multiple key shares across the multiple disks such that each of the multiple disks stores an insufficient number of the multiple key shares to reconstruct the master key.
  • 17. A computing system comprising: a plurality of computing nodes; first storage accessible to the plurality of computing nodes over a network; respective local storage at each of the plurality of computing nodes, each respective local storage accessible by a respective computing node without using the network; and wherein at least one of the plurality of computing nodes is configured to run a local key manager, the local key manager using a master key reconstructable using key shares stored at respective local storage of at least one other of the plurality of computing nodes, and wherein the local key manager is configured to protect additional keys using the master key.
  • 18. The computing system of claim 17, wherein the additional keys comprise at least one data encryption key.
  • 19. The computing system of claim 17, wherein the key shares are generated using key sharing techniques.
  • 20. The computing system of claim 19, wherein the master key is decrypted using a private key protected by a secure crypto processor.