The present invention generally relates to providing security within computing environments, and in particular to channel key loading of a host bus adapter (HBA) based on a secure key exchange (SKE) authentication response by a node of a computing environment.
Encryption provides security for data and/or other information being transmitted between two entities, such as a source node and a target node coupled via a plurality of endpoints or links. To standardize aspects of encryption, various standards are defined for different types of communication protocols. For instance, the FC-SP-2 and FC-LS-3 standards are provided for Fibre Channel.
The FC-SP-2 standard, as an example, used for encrypting Fibre Channel links includes protocols for mutual authentication of two endpoints, as well as protocols for negotiating encryption keys that are used in communication sessions between the two endpoints. The standard provides support for a variety of mechanisms to authenticate the involved parties, as well as mechanisms by which key material is provided or developed. The standard is defined for several authentication infrastructures, including secret-based, certificate-based, password-based, and pre-shared key based, as examples.
Generally, a certificate-based infrastructure is considered to provide a strong form of secure authentication, as the identity of an endpoint is certified by a trusted certificate authority. The FC-SP-2 standard defines a mechanism by which multiple certified entities can use the public-private key pairs that the certificate binds them to in order to authenticate with each other. This authentication occurs directly between two entities through the use of the Fibre Channel Authentication protocol (FCAP), the design of which is based on authentication that uses certificates and signatures as defined in, for instance, the Internet Key Exchange (IKE) protocol.
However, the inline exchange and validation of certificates is compute-intensive, as well as time-consuming. The FCAP protocol is also performed on every Fibre Channel link between the entities. Since it is to be done before any client traffic flows on the links that are to be integrity and/or security protected, it can negatively impact (elongate) link initialization times and, hence, the time it takes to bring up and begin executing client workloads. The IKE protocol also involves fairly central processing unit (CPU) intensive mathematical computations, and in an environment that includes large enterprise servers with a large number of Fibre Channel physical ports in a dynamic switched fabric connected to a large number of storage controller ports, the multiplier effect of these computations and the high volume of frame exchanges needed to complete the IKE protocol can also negatively affect system initialization and cause resource constraints during heavy normal operation.
Embodiments of the present invention are directed to channel key loading of a host bus adapter (HBA) based on a secure key exchange (SKE) authentication response by a responder node of a computing environment. A non-limiting example computer-implemented method includes receiving an authentication response message at an initiator channel on an initiator node from a responder channel on a responder node to establish a secure communication, the receiving at a local key manager (LKM) executing on the initiator node. A state check can be performed based on a security association of the initiator node and the responder node. An identifier of a selected encryption algorithm can be extracted from the authentication response message. The initiator channel can request to communicate with the responder channel based at least in part on a successful state check and the selected encryption algorithm.
Other embodiments of the present invention implement features of the above-described method in computer systems and computer program products.
Additional technical features and benefits are realized through the techniques of the present invention. Embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed subject matter. For a better understanding, refer to the detailed description and to the drawings.
The specifics of the exclusive rights described herein are particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the embodiments of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
In accordance with one or more embodiments of the present invention, as data is being moved within and across data centers, authentication of the identities exchanging data and encryption of the data are used to strengthen security of the data. In one example, Fibre Channel Endpoint Security (FCES), offered by International Business Machines Corporation, Armonk, New York is used to encrypt data in flight using the Fibre Channel and Fibre Connection (FICON) protocols. FCES helps to ensure the integrity and confidentiality of all data flowing on Fibre Channel links between authorized hosts and storage devices, by creating a trusted storage network that encrypts data in flight. In one or more embodiments of the present invention, security levels are negotiated and established between the host and storage devices using secure key exchange (SKE) messaging. As part of this process, SKE request and response messages are generated and processed to ensure that the correct level of security is used by the end points (i.e., the hosts and the storage devices).
As used herein, the term “secure key exchange” or “SKE” refers to a protocol used to create a security association (SA) between two endpoints, or nodes, in a network. One or more embodiments of the SKE protocol described herein build upon the Internet Key Exchange (IKE) protocol. In accordance with one or more embodiments of the present invention, a local key manager (LKM) executing on each node connects to a security key lifecycle manager, which is used to create shared secret messages to which only the parties involved have access. In accordance with one or more embodiments of the present invention, the LKM acts as a client of the security key lifecycle manager, issuing key management interoperability protocol (KMIP) requests to create keys. One or more embodiments of the SKE protocol involve the exchange of four messages. The first two messages, referred to as “SKE SA Init Request” (also referred to herein as an “SKE SA initialization request”) and “SKE SA Init Response” (also referred to herein as an “SKE SA initialization response”), are unencrypted messages that exchange parameters which are used to derive a set of cryptographic keys. The final two messages, referred to as “SKE Auth Request” (also referred to herein as an “SKE authentication request”) and “SKE Auth Response” (also referred to herein as an “SKE authentication response”), are encrypted messages that establish the authenticity of each endpoint, or node, as well as identify which encryption algorithm will be used to secure the communication between the endpoints. In a Fibre Channel environment, the SKE messages can be encapsulated, for example, in AUTH extended link service requests (AUTH ELS) in a format defined by the FC-SP-2 standard.
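The four-message exchange described above can be sketched as a simple ordered sequence. The following is a minimal illustration only; the enum names mirror the message names in the text, and nothing here is taken from an actual SKE implementation:

```python
from enum import Enum, auto

class SkeMessage(Enum):
    SA_INIT_REQUEST = auto()   # unencrypted; carries key-derivation parameters
    SA_INIT_RESPONSE = auto()  # unencrypted; carries key-derivation parameters
    AUTH_REQUEST = auto()      # encrypted; establishes initiator authenticity
    AUTH_RESPONSE = auto()     # encrypted; confirms responder identity and algorithm

# Expected ordering of the four-message exchange.
SKE_SEQUENCE = (
    SkeMessage.SA_INIT_REQUEST,
    SkeMessage.SA_INIT_RESPONSE,
    SkeMessage.AUTH_REQUEST,
    SkeMessage.AUTH_RESPONSE,
)

def is_encrypted(msg: SkeMessage) -> bool:
    # Only the final two (Auth) messages flow encrypted.
    return msg in (SkeMessage.AUTH_REQUEST, SkeMessage.AUTH_RESPONSE)
```

The split matters because the SA Init pair must be readable before any keys exist, while the Auth pair can already be protected by the keys derived from the Init exchange.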
One or more embodiments of the present invention provide host bus adapter (HBA) security registration with an LKM for SKE message processing to allow secure data to be sent between computing nodes (or between channels on the same computing node) in a computing environment. In accordance with one or more embodiments of the present invention, the LKM, which manages private security keys for the HBAs on a computing node, is initialized on the computing node. The LKM establishes a connection with an external key manager (EKM) remote from the computing node. In addition, the HBAs on the computing node executing the LKM are registered with the LKM. The registration of the HBAs with the LKM allows channels of the HBAs to properly process SKE messages sent to or received from the computing node. Once LKM initialization is complete, the LKM is aware of the security capabilities of the HBAs. The LKM uses this information to build and manage the security of data requests between the computing node and other computing nodes in the computing environment.
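The HBA registration bookkeeping described above can be sketched as follows. This is a minimal illustration under stated assumptions: the class and method names, and the capability strings, are invented for this sketch and are not an actual LKM API:

```python
class LocalKeyManager:
    """Illustrative sketch of the LKM's HBA registration bookkeeping."""

    def __init__(self):
        # Maps an HBA's address information to its advertised security capabilities.
        self._hbas = {}

    def register_hba(self, address, capabilities):
        # HBAs register their security capabilities and address information.
        self._hbas[address] = set(capabilities)

    def is_registered(self, address):
        # SKE messages are only processed for channels of registered HBAs.
        return address in self._hbas

    def supports(self, address, capability):
        # The LKM consults registered capabilities when managing secure requests.
        return capability in self._hbas.get(address, set())
```

A registration check like `is_registered` reflects the text's point that SKE message processing for a channel depends on its HBA having registered with the LKM first.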
One or more embodiments of the present invention provide generation of an SKE SA initialization request, or SKE SA Init Request, to provide security for data transfers between channels in a computing environment. The SKE SA initialization request processing is performed subsequent to LKM initialization and registration of the HBAs on the computing node that is generating the SKE SA initialization request. The SKE SA initialization request can be generated by an LKM executing on the computing node in response to receiving a request from an HBA (also referred to herein as a “channel”) of the computing node to communicate with (e.g., send data to) another channel. The other channel can be located on the same node to provide the ability to securely pass data between two different partitions executing on the node. The other channel can also be located on a different computing node to provide the ability to securely pass data between channels located on different computing nodes.
The node with the channel that is initiating the request to communicate with another channel is referred to herein as the “initiator” or “source” node; and the node that contains the other channel that is the target of the request is referred to herein as the “responder” or “target” node. Upon receiving the request from the HBA, or channel, to communicate with a channel on a target node, the LKM on the source node creates an SA and then sends a request message (referred to herein as an “SKE SA initialization request message”) to the channel on the target node via the requesting channel. In accordance with one or more embodiments of the present invention, the SKE SA initialization request message includes a shared key identifier (provided by the EKM) that identifies a shared key that is to provide secure communication between the source node and the target node. A shared key rekey timer can be set by the LKM to limit the lifespan of the shared key based on a system policy. In addition to the shared key identifier, the SKE SA initialization request message includes a nonce and a security parameter index (SPI) of the initiator channel that are used to derive keys for encrypting and decrypting payloads (e.g., data) sent between the nodes.
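The contents the text ascribes to the SKE SA Init Request (shared key identifier, nonce, initiator SPI) can be sketched as a simple record. The field layout and sizes below are assumptions for illustration, not a wire format:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class SaInitRequest:
    shared_key_id: str    # identifies the EKM-provided shared key
    nonce: bytes          # random value later used to derive session keys
    initiator_spi: bytes  # security parameter index of the initiator channel

def build_sa_init_request(shared_key_id):
    # Nonce and SPI are freshly generated random values; the byte lengths
    # here (32 and 8) are illustrative assumptions.
    return SaInitRequest(shared_key_id, os.urandom(32), os.urandom(8))
```

Note that only the shared key *identifier* travels in the message; the shared key itself is obtained by each LKM from the EKM and never sent between the nodes.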
As used herein, the term “node” or “computing node” refers to, but is not limited to, a host computer or a storage array. A storage array can be implemented, for example, by a control unit and/or a storage controller. A host computer, or host, can be implemented, for example, by a processor, a computer system, and/or a central electronics complex (CEC). As used herein, the term “computing environment” refers to a group of nodes that are coupled together to perform all or a subset of the processing described herein. For FICON channel-to-channel (CTC) connections, each of the ports, or channels, can be both an initiator and a responder. In contrast to FICON channels, Fibre Channel protocol (FCP) storage channels on a host are always the source, or initiator; and the control unit, or storage array, is always the target, or responder.
One or more embodiments of the present invention provide SKE SA initialization processing and message generation at a node of a target channel. The processing and generation of an SKE SA Init Response message is performed in response to the target channel receiving an SKE SA Init Request from a source channel. The processing at the responder, or target, node includes the LKM obtaining the shared key (if needed), and transmitting a nonce generated by the LKM and an SPI describing the target channel to the channel on the initiator node via an SKE SA Init Response message. When the processing associated with the SKE SA Init Request and SKE SA Init Response messages is completed, the initiator and the responder nodes have the shared key information that they need to transmit encrypted messages between them and to decrypt the messages that they receive.
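Once the SA Init exchange completes, both sides hold the shared key plus both nonces and both SPIs, and can feed these into the same derivation to arrive at identical key material without ever transmitting a key. The text does not specify the derivation function; the HMAC-SHA-256 construction below is purely an illustrative stand-in:

```python
import hashlib
import hmac

def derive_session_key(shared_key, initiator_nonce, responder_nonce,
                       initiator_spi, responder_spi):
    # Both endpoints run this same computation over the exchanged parameters,
    # so they obtain matching key material. The concatenation order and the
    # use of HMAC-SHA-256 are assumptions for this sketch.
    material = initiator_nonce + responder_nonce + initiator_spi + responder_spi
    return hmac.new(shared_key, material, hashlib.sha256).digest()
```

Because the derivation is keyed by the EKM-provided shared key, an observer of the unencrypted SA Init messages sees the nonces and SPIs but cannot compute the session key.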
One or more embodiments of the present invention provide SKE SA initialization response message processing and SKE Auth Request message generation at a node of a source channel. The processing and generation of an SKE Auth Request message are performed in response to the source channel receiving an SKE SA Init Response from a target channel. The processing at the initiator, or source, node can include SKE SA Init Response message verification and device group checking based on the SA. The LKM of the source node can generate session keys and build an SKE Auth Request message. The source node transmits the SKE Auth Request message to the target node.
One or more embodiments of the present invention provide SKE Auth Request message processing and SKE Auth Response message generation at a node of a target channel. The processing and generation of an SKE Auth Response message are performed in response to the target channel receiving an SKE Auth Request from a source channel. The processing at the responder, or target, node can include SKE Auth Request message verification and device group checking based on the SA. The LKM of the target node can decrypt the SKE Auth Request message, verify an initiator signature, generate a responder signature, select an encryption algorithm, and build an SKE Auth Response message. The target node transmits the SKE Auth Response message to the source node.
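The responder's encryption-algorithm selection from the initiator's proposal list can be sketched as below. The first-match-in-preference-order policy and the algorithm name strings are assumptions; the text says only that the responder selects one of the proposed algorithms:

```python
def select_encryption_algorithm(proposals, supported):
    """Pick the first proposal (in the initiator's preference order) that the
    responder also supports; the policy shown is an illustrative assumption."""
    for algorithm in proposals:
        if algorithm in supported:
            return algorithm
    raise ValueError("no mutually supported encryption algorithm")
```

The selected algorithm is then carried back in the SKE Auth Response so that both channels load matching key material for the data path.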
One or more embodiments of the present invention provide SKE Auth Response message processing and HBA key loading at a node of a source channel. The processing and HBA key loading are performed in response to the source channel receiving an SKE Auth Response from a target channel. The processing at the initiator, or source, node can include SKE Auth Response message verification and device group checking based on the SA. The LKM of the source node can decrypt the SKE Auth Response message, verify the responder signature, extract the selected encryption algorithm, and load one or more HBA keys at the source channel to support using the selected encryption algorithm in future communication with the target channel. Upon notifying the LKM of the source node that authentication is done, a session key rekey timer can be started to initiate a session key rekey process based on a system policy.
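The initiator-side steps just described (state check against the SA, signature verification, extraction of the selected algorithm, HBA key loading) can be sketched as one handler. Every name below, including the state string and the response field keys, is an illustrative assumption:

```python
def process_auth_response(sa_state, response, verify_signature, load_hba_key):
    """Sketch of initiator-side SKE Auth Response handling; all names are
    illustrative, not an actual LKM interface."""
    if sa_state != "AUTH_REQUEST_SENT":          # state check based on the SA
        raise RuntimeError("unexpected SA state: " + sa_state)
    if not verify_signature(response["responder_signature"]):
        raise RuntimeError("responder signature verification failed")
    algorithm = response["selected_algorithm"]   # extracted from the message
    load_hba_key(algorithm)                      # program the source channel's HBA
    return algorithm
```

Ordering matters in this sketch: the state check and signature verification both precede key loading, so no key material reaches the HBA for a stale or unauthenticated exchange.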
One or more embodiments of the present invention provide a process for refreshing, or rekeying, the shared key(s) and the session key(s). As described previously, a shared key rekey timer can be set to limit the amount of time that a shared key can be used. When the shared key rekey timer expires, a process to generate a new shared key is initiated. Also, as described previously, a session key rekey timer can be set to limit the amount of time that a session key can be used. When the session key rekey timer expires, an SKE SA Init Request message is generated to initiate the derivation of a new set of cryptographic keys for use in communication between the source and target nodes.
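The rekey triggers described above, an elapsed-time limit and, alternatively or additionally, a limit on the number of data exchanges, can be sketched as one timer object. The class shape and limit semantics are assumptions made for this illustration:

```python
import time

class RekeyTimer:
    """Illustrative policy-driven rekey trigger: expires on elapsed time
    and/or on a count of uses, per the system policy described in the text."""

    def __init__(self, max_seconds=None, max_uses=None):
        self._start = time.monotonic()
        self._max_seconds = max_seconds
        self._max_uses = max_uses
        self._uses = 0

    def record_use(self):
        # Called once per data exchange covered by the key.
        self._uses += 1

    def expired(self):
        if self._max_seconds is not None:
            if time.monotonic() - self._start >= self._max_seconds:
                return True
        return self._max_uses is not None and self._uses >= self._max_uses
```

On expiry of a session key timer, the text says a new SKE SA Init Request is generated, so rekeying simply re-runs the four-message exchange to derive fresh cryptographic keys.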
Authentication, via the EKM, between the trusted nodes that share multiple links is performed once, instead of on a link-by-link basis. The ability of both entities to receive a shared key (e.g., a symmetric key) as trusted entities of the EKM and to use it to encrypt/decrypt messages between them proves mutual authentication. Further, secure communication across all links (or selected links) connecting them is provided without additional accesses to the EKM. Instead, the previously obtained shared key is used in communications between the trusted nodes on other links, or channels, coupling the nodes, providing authentication of the links without having to re-authenticate the trusted nodes via the EKM.
In accordance with one or more embodiments described herein, a trusted node initiates and activates an LKM executing on the trusted node to manage security between HBAs. The HBAs register their security capabilities and address information with the LKM in order to allow channels on the HBA to process SKE messages. SKE SA initialization request messages can be built based on an HBA channel on a trusted node requesting SKE SA initialization between itself (an initiator node) and a target, or responder node. The LKM manages the identification or activation of a device group key identifier for the two trusted nodes that is used to build the SKE SA initialization request. The LKM on the initiator node and the LKM on the responder node trade information via SKE SA initialization request and response messages. The traded information is used to encrypt and decrypt data sent between the channels on the respective nodes. The SKE SA initialization request and response messages can be exchanged in an unencrypted format, and SKE authentication request and response messages can be sent in an encrypted format. The SKE authentication request messages can include a proposal list of encryption algorithms to be used for data exchanged between the nodes, and the SKE authentication response messages can confirm which proposal was accepted by the responder node as a selected encryption algorithm. The responder node can also notify the initiator node when to begin data transfers using the selected encryption algorithm, which can be a different encryption format than the encryption used to send the SKE authentication request and response messages. Both the shared key and the session key(s) can be refreshed, or rekeyed, based on programmable timers expiring.
One example of a computing environment 100 to include one or more aspects of the present invention is described with reference to
The computing environment shown in
As shown in
The HBAs 106 in the host 102 and the HBAs 114 in the storage array 110 shown in
It is to be understood that the block diagram of
Although examples of protocols, communication paths and technologies are provided herein, one or more aspects are applicable to other types of protocols, communication paths and/or technologies. Further, other types of nodes may employ one or more aspects of the present invention. Additionally, a node may include fewer, more, and/or different components. Moreover, two nodes coupled to one another may be both the same type of node or different types of nodes. As examples, both nodes are hosts, both nodes are storage arrays, or one node is a host and another node is a storage array, as described in the examples herein. Many variations are possible.
As an example, a host may be a computing device, such as a processor, a computer system, a central electronics complex (CEC), etc. One example of a computer system that may include and/or use one or more aspects of the present invention is depicted in
Referring to
Continuing with
Memory 204 may include, for instance, a cache, such as a shared cache 210, which may be coupled to local caches 212 of processors 202. Further, memory 204 may include one or more programs or applications 214, an operating system 216, and one or more computer readable program instructions 218. Computer readable program instructions 218 may be configured to carry out functions of embodiments of aspects of the invention.
Computer system 200 may also communicate via, e.g., I/O interfaces 206 with one or more external devices 220, one or more network interfaces 222, and/or one or more data storage devices 224. Example external devices include a user terminal, a tape drive, a pointing device, a display, etc. Network interface 222 enables computer system 200 to communicate with one or more networks, such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet), providing communication with other computing devices or systems.
Data storage device 224 may store one or more programs 226, one or more computer readable program instructions 228, and/or data, etc. The computer readable program instructions may be configured to carry out functions of embodiments of aspects of the invention.
Computer system 200 may include and/or be coupled to removable/non-removable, volatile/non-volatile computer system storage media. For example, it may include and/or be coupled to a non-removable, non-volatile magnetic media (typically called a “hard drive”), a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and/or an optical disk drive for reading from or writing to a removable, non-volatile optical disk, such as a CD-ROM, DVD-ROM or other optical media. It should be understood that other hardware and/or software components could be used in conjunction with computer system 200. Examples include, but are not limited to, microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems.
Computer system 200 may be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system 200 include, but are not limited to, personal computer (PC) systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
As indicated above, a computer system is one example of a host that may incorporate and/or use one or more aspects of the present invention. Another example of a host to incorporate and/or employ one or more aspects of the present invention is a central electronics complex, an example of which is depicted in
Referring to
In one example, memory 254 of central electronics complex 250 includes, for example, one or more logical partitions 264, a hypervisor 266 that manages the logical partitions, and processor firmware 268. One example of hypervisor 266 is the Processor Resource/System Manager (PR/SM), offered by International Business Machines Corporation. As used herein, firmware includes, e.g., the microcode of the processor. It includes, for instance, the hardware-level instructions and/or data structures used in implementation of higher level machine code. In one embodiment, it includes, for instance, proprietary code that is typically delivered as microcode that includes trusted software or microcode specific to the underlying hardware and controls operating system access to the system hardware.
Each logical partition 264 is capable of functioning as a separate system. That is, each logical partition can be independently reset, run a guest operating system 270 such as z/OS, offered by International Business Machines Corporation, or another operating system, and operate with different programs 282. An operating system or application program running in a logical partition appears to have access to a full and complete system, but in reality, only a portion of it is available.
Memory 254 is coupled to processors (e.g., CPUs) 260, which are physical processor resources that may be allocated to the logical partitions. For instance, a logical partition 264 includes one or more logical processors, each of which represents all or a share of a physical processor resource 260 that may be dynamically allocated to the logical partition.
Further, memory 254 is coupled to I/O subsystem 262. I/O subsystem 262 may be a part of the central electronics complex or separate therefrom. It directs the flow of information between main storage 254 and input/output control units 256 and input/output (I/O) devices 258 coupled to the central electronics complex.
While various examples of hosts are described herein, other examples are also possible. Further, a host may also be referred to herein as a source, a server, a node, or an endpoint node, as examples. Additionally, a storage device may be referred to herein as a target, a node, or an endpoint node, as examples. Example storage devices include storage controllers or control units. Other examples are also possible.
Turning now to
In accordance with one or more embodiments of the present invention, LKM activation and initiation is triggered by a customer (e.g., the owner of the host 102) requesting that security be applied to the host 102. This can occur, for example, by the customer applying a purchased feature code, that is, enabling the feature code in the SE 128 and rebooting the SE 128. Upon reboot, the SE 128 can trigger the host 102 to initialize a partition for executing the LKM 104. A reboot of the SE 128 is not required to activate and initiate the LKM, and in another example, the LKM is activated and initiated by enabling the feature code in the SE 128 without rebooting the SE 128. The triggering event to activate and initialize the LKM 104 is shown as “1” in
In accordance with one or more embodiments of the present invention, once LKM initialization is complete, the LKM 104 contacts the EKM 122 to request a secure connection to the EKM 122. In accordance with one or more embodiments of the present invention, the host 102 executing the LKM 104 is a trusted node that has been certified by a trusted certificate authority. In accordance with one or more embodiments, the host 102 has the Internet Protocol (IP) address or hostname of the EKM along with host's signed certificate from the trusted certificate authority. Contacting the EKM 122 to request connection is shown as “3” in
In accordance with one or more embodiments, a KMIP is used to request the connection from the EKM 122. The KMIP message can include the certificate from the trusted certificate authority and a KMIP message can be returned from the EKM 122, shown as “4” in
Based on the connection between the LKM 104 and the EKM 122 being established, the LKM 104 notifies the I/O subsystem 306, via the hypervisor 304, that the connection has been established. This is shown as “5” in
It is to be understood that the block diagram of
Turning now to
As shown in the embodiment of the process 400 in
The process flow diagram of
Turning now to
As shown in
As shown by arrow 512 of
If an SA does not exist, then the LKM 520 on host 502 enters a state where it creates an SA between the HBA 518 on host 502 and the HBA 522 on storage array 504. The LKM 520 on host 502 determines whether it has a shared key and shared key identifier for the host 502/storage array 504 pair. The shared key and shared key identifier may be stored, for example, in volatile memory (e.g., cache memory) located on the host 502 that is accessible by the LKM 520. If the LKM 520 does not locate a shared key for the host 502/storage array 504 pair, then it sends a request, as shown by arrow 514 on
The host 502 and/or storage array 504 may be identified to the EKM by their respective world-wide node names (WWNN). In response to receiving the request from the LKM 520, the EKM server 506 authenticates the LKM and, if required, creates a device group that includes the host 502/storage array 504 pair. The EKM server 506 also generates a shared key (also referred to herein as a “shared secret key”) specific to the host 502/storage array 504 pair for use in encrypting and decrypting messages and data transferred between the host 502 and the storage array 504. As shown by arrow 524 of
The receiving of the shared key by the LKM 520 may be a multiple step process. In accordance with one or more embodiments of the present invention, the LKM 520 first requests a shared key identifier from the EKM server 506 for a specified device group. The shared key identifier is a unique identifier that can be used by the EKM server 506 to locate/determine the corresponding shared key. In response to receiving the shared key identifier, the LKM 520 sends a second request that includes the shared key identifier to the EKM server 506 to request the shared key. The EKM server 506 responds by returning the shared key. In accordance with one or more embodiments of the present invention, the device group name is a concatenation of the WWNNs of the host 502 and the storage array 504. In accordance with one or more embodiments of the present invention, upon receiving the shared key, the LKM 520 may start a shared key rekey timer that is used to limit the amount of time that the shared key may be used before a rekey, or refresh, is required. The amount of time may be based on system policies such as, but not limited to, the confidential nature of the data being exchanged, other security protections in place in the computing environment, and/or a likelihood of an unauthorized access attempt. In accordance with one or more embodiments of the present invention, in addition or alternatively to an elapsed amount of time, the shared key rekey timer can expire based on a number of data exchanges between the source node and the target node. The policies can be configured by a customer via a user interface on an HMC such as server HMC 124 or storage array HMC 118 of
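The two-step retrieval described above (first the shared key identifier for the device group, then the key itself) can be sketched as follows. The `ekm` object and its two methods are assumed stand-ins for the underlying KMIP exchange, not a real client API:

```python
def fetch_shared_key(ekm, host_wwnn, storage_wwnn):
    """Illustrative two-step shared key retrieval from the EKM.
    `ekm` is any object providing get_key_id/get_key (an assumption)."""
    # Per the text, the device group name concatenates the two nodes' WWNNs.
    device_group = host_wwnn + storage_wwnn
    key_id = ekm.get_key_id(device_group)   # step 1: identifier for the group
    return key_id, ekm.get_key(key_id)      # step 2: the shared key itself
```

Keeping the identifier separate from the key means the identifier can travel in the clear (as in the SKE SA Init Request) while the key itself is only ever exchanged over each LKM's secure connection to the EKM.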
In response to the LKM 520 having or obtaining a valid shared key and shared key identifier, the LKM 520 generates an SKE SA Init Request message that includes the shared key identifier as well as a nonce and security parameter index (SPI) created by the LKM 520 for the secure communication between the channels. The LKM 520 creates the nonce and SPI using a random number generator. A nonce is an arbitrary number that can be used just once in a cryptographic communication. An SPI is an identification tag that is added to the clear text portion of an encrypted Fibre Channel data frame. The receiver of the frame may use this field to validate the key material used to encrypt the data payload. The SKE SA Init Request message is sent to the HBA 518, or channel, that requested that data be sent to the HBA 522 on storage array 504. This is shown by arrow 516 in
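The receiver-side use of the SPI described above, reading the clear-text tag on a frame to locate the key material for its payload, can be sketched as a table lookup. The table shape is an assumption for illustration:

```python
def lookup_key_by_spi(frame_spi, spi_key_table):
    """The SPI travels in the clear on each encrypted frame; the receiver
    uses it to locate and validate key material for the payload.
    The dict-based table is an illustrative assumption."""
    try:
        return spi_key_table[frame_spi]
    except KeyError:
        raise KeyError("unknown SPI: cannot validate key material") from None
```

A frame carrying an SPI the receiver has no entry for cannot be decrypted, which is why each side contributes its own SPI during the SA Init exchange before any protected traffic flows.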
It is to be understood that the block diagram of
Turning now to
If it is determined, at block 604, that the initiator HBA is registered with the LKM, processing continues at block 606. At block 606, it is determined whether an SA already exists between the initiator channel and the responder channel. If an SA already exists, then processing continues at block 608 with rejecting the request.
If it is determined, at block 606, that an SA does not already exist between the initiator node and the responder node, then processing continues at block 610 with creating the SA. Once an SA state has been created at block 610 for the initiator channel/responder channel pair, processing continues at block 612 with determining whether a device group key, or shared key, for the initiator node/responder node pair exists. If a shared key exists for the initiator node/responder node pair, then processing continues at block 614 with using the existing shared key. At block 616, an SKE SA Init Request message is built by the LKM. In accordance with one or more embodiments, the SKE SA Init Request message includes: an identifier of the shared key; and a nonce and SPI created by the LKM for the secure communication between the channels. At block 618, the SKE SA Init Request message is sent to the initiator channel. The initiator channel then sends the SKE SA Init Request message to the responder channel on the responder node.
If it is determined at block 612 that a shared key does not exist for the initiator node/responder node pair, then processing continues at block 620. At block 620, it is determined whether a device group exists for the initiator node/responder node pair. Determining whether the device group exists can include the LKM asking the EKM if a device group exists for the initiator node/responder node pair, and the EKM responding with an identifier of the corresponding shared key (a shared key identifier) if the device group does exist or responding with an error message if it does not. If it is determined that the device group does exist, processing continues at block 622 with creating the shared key for the initiator node/responder node pair, and at block 624 the shared key is stored at the initiator node. The shared key can be created by the EKM in response to a request from the LKM. In accordance with one or more embodiments of the present invention, the shared key and the shared key identifier are stored in volatile memory so that the shared key is not saved when the initiator node is powered off or restarted. In accordance with one or more embodiments of the present invention, the shared key has a limited life span based, for example, on, but not limited to: a number of security associations that the shared key has been used for and/or an elapsed amount of time since the shared key was created. After the shared key is stored, processing continues at block 616 with the LKM building the SKE SA Init Request message.
If it is determined at block 620 that a device group does not exist for the initiator node/responder node pair, block 626 is performed and a device group is created for the initiator node/responder node pair. The device group and shared key can be created by the EKM in response to a single request (or multiple requests) from the LKM. Once the device group is created, processing continues at block 622 with creating the shared key for the initiator node/responder node pair.
The process flow diagram of
Turning now to
As shown by arrow 526 of
If an SA does not exist, then the LKM 528 on storage array 504 enters a state where it creates an SA between the HBA 518 on host 502 and the HBA 522 on storage array 506. The LKM 528 on storage array 504 determines whether it has a shared key that corresponds to the shared key identifier contained in the SKE SA Init Request. The shared key and its corresponding shared key identifier may be stored, for example, in volatile memory (e.g., cache memory) located on the storage array 504 that is accessible by the LKM 528. If the LKM 528 does not locate a shared key for the host 502/storage array 504 pair, then it sends a request, as shown by arrow 702 on
In response to the LKM 528 having or obtaining a valid shared key, the LKM 528 generates an SKE SA Init Response message that includes a nonce and security parameter index (SPI) created by the LKM 528 for the secure communication between the channels. The SKE SA Init Response message is sent to HBA 522 as shown by arrow 710 in
It is to be understood that the block diagram of
Turning now to
If it is determined, at block 804, that the responder HBA is registered with the LKM, then processing continues at block 806. At block 806, it is determined whether an SA already exists between the node where the initiator channel is located (the initiator node) and the node where the responder channel is located (the responder node). If an SA already exists, then processing continues at block 808 with rejecting the request.
If it is determined, at block 806, that an SA does not already exist between the initiator node and the responder node, then processing continues at block 810 with creating the SA. Once an SA state has been created at block 810 for the initiator node/responder node pair, processing continues at block 812 with determining whether a shared key for the initiator node/responder node pair exists at the responder LKM. If a shared key can be located on the responder LKM for the initiator node/responder node pair, then processing continues at block 820. At block 820, a nonce and an SPI are generated for the responder channel and at block 822 keys that will be used in the encryption and decryption between the initiator channel and the responder channel are derived at the responder node. The keys can be generated using the nonce and SPI of the responder, the nonce and SPI of the initiator, and the shared key.
Key derivation can be based on pseudo random function (PRF) parameters negotiated in a SA payload of an SKE SA Init message exchange established between an initiator and responder. "PRF+" can be a basic operator for generating keys for use in authentication and encryption mode. Key generation can occur over multiple steps. For example, as a first step, a seeding key called "SKEYSEED" can be generated and defined as SKEYSEED=prf(Ni|Nr, Secret_Key), where Ni and Nr are nonces, and the Secret_Key is a shared secret obtained from an EKM, such as EKM server 506. As a second step, a series of seven keys can be generated. For SKE, there can be five keys and two salts, for example, assuming that the SKE SA is protected by a method that requires a salt. A salt is random data that can be used as an additional input to a one-way function that hashes data, such as a password or passphrase. Salts can be generated as part of key material. For example, a multiple-byte salt can be used as part of an initialization vector (IV) input to a hash-based message authentication code (HMAC). As a further example, 32-byte keys and 4-byte salts can be generated with:
{SK_d|SK_ei|Salt_ei|SK_er|Salt_er|SK_pi|SK_pr}=prf+(SKEYSEED,Ni|Nr|SPIi|SPIr)
where Ni and Nr are the initiator and responder nonces, SPIi and SPIr are the initiator and responder SPIs, and SKEYSEED is the seeding key generated in the first step.
For authentication only, these are all of the keys that may be needed. For encryption of user data, the third step generates the data transfer keys and salts. The third step can derive keys and salts for a child SA, starting with a new recursive invocation of the PRF.
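The multi-step derivation described above can be sketched as follows, using HMAC-SHA256 as the negotiated PRF and an IKEv2-style "prf+" expansion; the choice of hash function and the exact prf+ construction are assumptions, as the embodiment leaves them to the negotiated PRF parameters:

```python
import hashlib
import hmac

def prf(key, data):
    """Negotiated PRF, modeled here as HMAC-SHA256 (an assumption)."""
    return hmac.new(key, data, hashlib.sha256).digest()

def prf_plus(key, seed, length):
    """IKEv2-style prf+: T1 | T2 | ..., where Tn = prf(key, T(n-1) | seed | n)."""
    out, t, n = b"", b"", 1
    while len(out) < length:
        t = prf(key, t + seed + bytes([n]))
        out += t
        n += 1
    return out[:length]

def derive_ske_keys(shared_key, ni, nr, spi_i, spi_r):
    # Step 1: SKEYSEED = prf(Ni | Nr, Secret_Key).
    skeyseed = prf(ni + nr, shared_key)
    # Step 2: five 32-byte keys and two 4-byte salts, in the order of the
    # formula above: {SK_d | SK_ei | Salt_ei | SK_er | Salt_er | SK_pi | SK_pr}.
    layout = [("SK_d", 32), ("SK_ei", 32), ("Salt_ei", 4), ("SK_er", 32),
              ("Salt_er", 4), ("SK_pi", 32), ("SK_pr", 32)]
    material = prf_plus(skeyseed, ni + nr + spi_i + spi_r,
                        sum(size for _, size in layout))
    keys, offset = {}, 0
    for name, size in layout:
        keys[name] = material[offset:offset + size]
        offset += size
    return keys
```

The third step would repeat the prf+ expansion, seeded from the derived material, to produce the data transfer keys and salts for the child SA when user data is to be encrypted.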
At block 824, an SKE_SA_Init Response message that includes an identifier of the shared key as well as the nonce and SPI created by the LKM at the responder node for the secure communication between the channels is built by the LKM at the responder node. At block 826, the SKE_SA_Init Response message is sent to the responder channel and the responder channel sends the SKE_SA_Init Response message to the initiator channel on the initiator node via, for example, a SAN network.
If it is determined at block 812 that a shared key does not exist for the initiator node/responder node pair, then processing continues at block 816. At block 816, it is determined whether a device group exists for the initiator node/responder node pair. Determining whether the device group exists can include the LKM asking the EKM if a device group exists for the initiator node/responder node pair, and the EKM responding with an identifier of the corresponding shared key if the device group does exist or responding with an error message if it does not. If it is determined that the device group does exist, processing continues at block 818 with obtaining the shared key from the EKM for the initiator node/responder node pair and processing continues at block 820 with generating a responder SPI and nonce. In accordance with one or more embodiments of the present invention, the shared key and shared key identifier are stored in volatile memory at the responder node so that the shared key is not saved when the responder node is powered off or restarted.
If it is determined at block 816 that a device group does not exist on the responder LKM for the initiator node/responder node pair, block 814 is performed and the initiator node/responder node pair joins a device group. Processing continues at block 818.
The process flow diagram of
As shown by arrow 526 of
It is to be understood that the block diagram of
Turning now to
If the SKE SA Init message is not an SKE SA Init Response message as a verification result at block 1004, an error handler 1012 can be invoked. The error handler 1012 can also be invoked if the SA state is a non-compliant SA state at block 1006, an unexpected message sequence is detected at block 1008, or if the payload type is the Notify message type at block 1010. The error handler 1012 may reject the SKE SA Init message received at block 1002 and support a retry sequence as part of a recovery process in case the error condition was a temporary condition. Under some conditions, such as a shared key error or security association error, the error handler 1012 may perform a recovery process that reinitializes the communication sequence, for instance, by making a new request to the EKM server 506 for a shared key between the initiator node and the responder node. Where a retry fails or under conditions where a retry is not performed, resources reserved to support the communication sequence are released.
After confirming that the SKE SA Init Response message is not a Notify message type at block 1010, the process 1000 advances to block 1014. At block 1014, the LKM can derive a set of cryptographic keys based on an SA payload of the SKE SA Init Response message. Key derivation can be performed, for example, using the steps as previously described in reference to process 800 of
The initiator channel, such as HBA 518, can report security capabilities to the LKM of the host, which are used by the LKM to build a proposal list based on one or more security capabilities supported by the initiator channel at block 1018. For example, the capabilities can include a list of encryption algorithms supported by the initiator node. The encryption algorithms may be stored as a priority list, for instance, defining preferences based on computational complexity or another metric used to establish preferences. The priority list may change over time to ensure that different encryption algorithms are selected over a period of time to further enhance security. For instance, a PRF can be used to establish the priorities in the proposal list.
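As a sketch of the proposal-list construction at block 1018, the reported algorithms can simply be ordered by a priority map; the algorithm names and the source of the priorities are illustrative assumptions:

```python
def build_proposal_list(supported, priorities):
    """Order the initiator channel's reported algorithms by preference.

    `supported` is the list of algorithm names the channel reported;
    `priorities` maps names to ranks (lower is preferred). The priorities
    could themselves be refreshed over time, e.g., via a PRF, so that
    different algorithms are favored across sessions.
    """
    return sorted(supported, key=lambda alg: priorities.get(alg, float("inf")))
```

Algorithms absent from the priority map sort last, so newly reported capabilities are still proposed rather than dropped.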
At block 1020, the LKM builds an SKE Auth Request message based at least in part on the set of cryptographic keys and the proposal list, where one or more of the cryptographic keys are used to compute the initiator signature that is included with the proposal list in the SKE Auth Request message. The initiator node can encrypt the payload of the SKE Auth Request message using a predetermined encryption algorithm. At block 1022, the LKM sends the SKE Auth Request message with encrypted payload to the initiator channel, which transmits the SKE Auth Request message to the responder channel of the responder node.
The process flow diagram of
As shown by arrow 526 of
The SKE Auth Response message is sent to the HBA 522, or channel. This is shown by arrow 1104 in
It is to be understood that the block diagram of
At block 1204, a state check can be performed based on an SA of the initiator node and the responder node. Examples of state checks can include confirming that the SA exists for the initiator node/responder node pair with a shared key. An SA mode check can confirm that the mode of the SA is set to Responder. The state check at block 1204 may also include verifying that a last received message state and a last sent message state of the LKM 528 match expected values. For example, a message sequence state machine can be checked to confirm that the last message sent from the responder node was an SKE SA Init Response message and the last message received was an SKE SA Init Request message.
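A minimal sketch of this state check, with the SA fields and message names assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    """Illustrative SA state tracked by the LKM (field names are assumptions)."""
    mode: str           # "Initiator" or "Responder"
    last_sent: str      # last SKE message sent by this node
    last_received: str  # last SKE message received by this node

def responder_state_ok(sa):
    """State check before processing an SKE Auth Request at the responder."""
    return (sa is not None
            and sa.mode == "Responder"
            and sa.last_sent == "SKE_SA_Init_Response"
            and sa.last_received == "SKE_SA_Init_Request")
```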
If the state is ok (e.g., all expected values are verified) at block 1204, then the payload type of the SKE Auth Request message can be checked at block 1206 to determine whether the message is a Notify message type. A Notify message type can indicate a fault or other condition at the initiator node that prevents further progress in the authentication sequence. For example, the LKM of the initiator node, such as LKM 520, may have a communication error, a key access error, a sequencing error, or other such condition. The Notify message type indicator can appear unencrypted within the payload of the SKE Auth Request message.
If the message payload is not a Notify message type at block 1206, then the message payload can be decrypted at block 1208. After decryption, further validation checks can be performed at block 1210. Validation checks of the SKE Auth Request message can include, for example, checking one or more message header parameters and an identifier of the payload based on decrypting the payload. Parameters that can be checked in the message header may include a version and a payload length. The decrypted payload of the SKE Auth Request message can be checked to confirm that a world-wide node name or world-wide port name identified in the message matches an expected value based on the SKE SA Init Request message.
The LKM 528 can compute an initiator signature at block 1212, and the initiator signature can be checked at block 1214. The initiator signature can be computed based on previously determined values or values extracted from a previous message, such as the SKE SA Init Request message. For example, the initiator signature can be computed at LKM 528 based on a responder nonce, a shared key, an initiator identifier, and at least one key from the set of cryptographic keys. The computed initiator signature can be compared to the initiator signature received in the SKE Auth Request message, where the initiator signature may be extracted from the payload of the SKE Auth Request message after decryption as a further validation.
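One plausible realization of the signature computation and check at blocks 1212 and 1214 is an HMAC over the listed inputs, keyed with a derived key such as SK_pi; the construction, the input ordering, and the use of a constant-time comparison are assumptions, since the embodiment does not fix them:

```python
import hashlib
import hmac

def compute_initiator_signature(responder_nonce, shared_key, initiator_id, sk_pi):
    """Sketch of the initiator signature over the inputs named above."""
    msg = responder_nonce + shared_key + initiator_id
    return hmac.new(sk_pi, msg, hashlib.sha256).digest()

def check_initiator_signature(received_sig, responder_nonce, shared_key,
                              initiator_id, sk_pi):
    """Recompute the signature locally and compare it to the received one."""
    expected = compute_initiator_signature(responder_nonce, shared_key,
                                           initiator_id, sk_pi)
    # compare_digest avoids leaking match length via timing.
    return hmac.compare_digest(expected, received_sig)
```

The responder signature computed at block 1216 would follow the same pattern with the initiator nonce, responder identifier, and a responder-side derived key.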
If the initiator signature check passes at block 1214, a responder signature can be computed at block 1216. The responder signature can be computed based on an initiator nonce, a shared key, a responder identifier, and at least one key from a set of cryptographic keys. One or more values used in computing the responder signature may be based on values extracted from a previous message, such as the SKE SA Init Request message.
At block 1218, an encryption algorithm is selected for encrypting data between the initiator channel and the responder channel based on a proposal list received in the SKE Auth Request message and capabilities of the highest priority encryption algorithm that is supported by the responder node. The capabilities of the HBA 522 can be reported to the LKM 528 to assist the LKM 528 in selecting an encryption algorithm from the proposal list that will be supported by the initiator node and the responder node. If it is determined at block 1220 that an algorithm selection is not possible, where the responder node supports none of the encryption algorithms from the proposal list, then the SKE Auth Request message is rejected at block 1222. The SKE Auth Request message can also be rejected at block 1222 based on an unexpected state at block 1204, a Notify message type detected at block 1206, a validation check failure at block 1210, or a signature check failure at block 1214. Rejection of the SKE Auth Request message can support a retry option, where the responder node is prepared to accept a replacement SKE Auth Request message. There may be a predetermined number of retries supported before the communication session is canceled and associated values are purged.
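The selection at blocks 1218 and 1220 amounts to taking the highest-priority proposed algorithm that the responder also supports; a sketch, with algorithm names as placeholders:

```python
def select_encryption_algorithm(proposal_list, responder_supported):
    """Pick the first (highest-priority) proposed algorithm the responder supports.

    `proposal_list` is ordered by initiator preference; returning None models
    the block 1222 rejection when no proposed algorithm is supported.
    """
    for algorithm in proposal_list:
        if algorithm in responder_supported:
            return algorithm
    return None
```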
If an encryption algorithm selection is possible at block 1220, then LKM 528 builds an SKE Auth Response message at block 1224. Building of the SKE Auth Response message can be based at least in part on a successful state check, a successful validation, and selecting one of the encryption algorithms from the proposal list. The payload of the SKE Auth Response message can include the responder signature as computed in block 1216 and an indicator of the selected encryption algorithm based on the selection at block 1218. The payload of the SKE Auth Response message is encrypted, for example, using the same encryption algorithm as used for encrypting the payload of the SKE Auth Request message. The SKE Auth Response message is encrypted independent of the proposal list.
An LKM Done message is built at block 1226. The LKM Done message can include one or more session keys, an initiator SPI, and a responder SPI to enable encrypted communication between the initiator channel and responder channel using the selected encryption algorithm. The session keys, also referred to as data transfer keys, can be computed based on the selected encryption algorithm and one or more of the set of cryptographic keys previously derived as seeding keys. The session keys can support encryption and decryption of data transfers between the initiator channel and responder channel in combination with knowledge of the selected encryption algorithm by both the initiator node and the responder node. The LKM Done message may also set the SA state to complete and may trigger further cleanup actions associated with the authentication process. In addition, a session key rekey timer may be started. The session key rekey timer can trigger a rekey process as described below with respect to
The SKE Auth Response message and the LKM Done message are sent from the LKM 528 to the HBA 522 at block 1228. After the HBA 522 transmits the SKE Auth Response message to HBA 518, the LKM Done message can trigger reconfiguring of the HBA 522 to communicate with the HBA 518 using the selected encryption algorithm.
The process flow diagram of
As shown by arrow 526 of
In response to receiving the SKE Auth Response message, the HBA 518 sends the SKE Auth Response message to the LKM 520 located on the host 502 (as shown in by arrow 1302 of
The HBA 518 can be configured to communicate with the HBA 522 using the information from the LKM Done message and finish establishing an encrypted link path using the selected encryption algorithm between the HBA 518 and HBA 522 as depicted by arrow 1306.
It is to be understood that the block diagram of
After confirming that the SKE Auth Response message was received at block 1404, the process 1400 continues to block 1406. At block 1406, a state check can be performed based on an SA of the initiator node and the responder node. Examples of state checks can include confirming that the SA exists for the initiator node/responder node pair with a shared key. An SA mode check can confirm that the mode of the SA is set to Initiator. The state check at block 1406 may also include verifying that a last received message state and a last sent message state of the LKM 520 match expected values. For example, a message sequence state machine can be checked to confirm that the last message sent from the initiator node was an SKE Auth Request message and the last message received was an SKE SA Init Response message.
If the state is okay (e.g., all expected values are verified) at block 1406, then the payload type of the SKE Auth Response message can be checked at block 1408 to determine whether the message is a Notify message type. A Notify message type can indicate a fault or other condition at the responder node that prevents further progress in the authentication sequence. For example, the LKM of the responder node, such as LKM 528, may have a communication error, a key access error, a sequencing error, or other such condition. The Notify message type indicator can appear unencrypted within the payload of the SKE Auth Response message.
If the message payload is not a Notify message type at block 1408, then the message payload can be decrypted at block 1410. After decryption, further validation checks can be performed at block 1412. Validation checks of the SKE Auth Response message can include, for example, checking one or more message header parameters and an identifier of the payload based on decrypting the payload. Parameters that can be checked in the message header may include a version and a payload length. The decrypted payload of the SKE Auth Response message can be checked to confirm that a world-wide node name or world-wide port name identified in the message matches an expected value based on the Start LKM message.
The LKM 520 can compute a responder signature at block 1414, and the responder signature can be checked at block 1416. The responder signature can be computed based on an initiator nonce, a shared key, a responder identifier, and at least one key from a set of cryptographic keys. One or more values used in computing the responder signature may be based on values extracted from a previous message, such as the SKE SA Init Response message. The computed responder signature can be compared to the responder signature received in the SKE Auth Response message, where the responder signature may be extracted from the payload of the SKE Auth Response message after decryption as a further validation. If the signature check validation fails at block 1416, then the SKE Auth Response message is rejected at block 1418. The SKE Auth Response message can also be rejected at block 1418 based on an unexpected message at block 1404, an unexpected state at block 1406, a Notify message type detected at block 1408, or a validation check failure (e.g., an unsuccessful validation result) at block 1412. Rejection of the SKE Auth Response message can support a retry option, where the initiator node is prepared to accept a replacement SKE Auth Response message. There may be a predetermined number of retries supported before the communication session is canceled and associated values are purged. If the responder signature check passes at block 1416, then the selected encryption algorithm from the SKE Auth Response message is identified and saved at block 1420.
An LKM Done message is built at block 1422. The LKM Done message can include one or more session keys, an initiator SPI, and a responder SPI to enable encrypted communication between the initiator channel and responder channel using the selected encryption algorithm. The session keys, also referred to as data transfer keys, can be computed based on the selected encryption algorithm and one or more of the set of cryptographic keys previously derived as seeding keys. The session keys can support encryption and decryption of data transfers between the initiator channel and responder channel in combination with knowledge of the selected encryption algorithm by both the initiator node and the responder node. The LKM Done message may also set the SA state to complete and may trigger further cleanup actions associated with the authentication process.
At block 1424, a session key rekey timer is started. The session key rekey timer can trigger a rekey process as described below with respect to
The process flow diagram of
Turning now to
The process 1500 begins at block 1502 with a rekey timer expiring. At block 1504, it is determined whether the rekey timer is the shared key rekey timer or the session key rekey timer. If it is determined, at block 1504, that the shared key rekey timer has expired, then processing continues at block 1506. As described previously, the shared key rekey timer relates to the amount of time that the shared key obtained from an EKM, such as EKM 506 of
At block 1506, it is determined whether a device group exists between the pair of nodes associated with the shared key rekey timer. If it is determined that a device group does not exist, then processing continues at block 1508 with creating a device group between the pair of nodes in a manner such as that described above with respect to
If it is determined at block 1506, that the device group exists between the pair of nodes, then processing continues at block 1510. At block 1510, a new shared key is created. In accordance with one or more embodiments of the present invention, the LKM sends a request to an EKM, such as EKM server 506 of
If it is determined, at block 1504, that the session key rekey timer has expired, then processing continues at block 1516. As described previously, the session key rekey timer relates to the amount of time that the session key (which may include several keys) remains valid for a communication session between two channels, such as an HBA 518 and an HBA 522 of
At block 1516, the LKM accesses the current shared key associated with the node(s) where the pair of channels that are associated with the expired session key rekey timer are located. At block 1518, the LKM builds an SKE SA Init Request message in a manner such as that described above with respect to
Providing the ability to refresh, or rekey, the keys provides another layer of security to the system. In accordance with some embodiments of the present invention, the shared keys are refreshed less frequently than the session keys.
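The two rekey paths described above can be summarized in a sketch such as the following, where the step labels and the dispatch structure are illustrative:

```python
from enum import Enum

class RekeyTimer(Enum):
    SHARED_KEY = "shared_key"
    SESSION_KEY = "session_key"

def handle_rekey_expiry(timer, device_group_exists):
    """Return the sequence of steps (as labels) the LKM takes on timer expiry."""
    if timer is RekeyTimer.SHARED_KEY:
        # Shared key path: ensure a device group exists, then obtain a new
        # shared key from the EKM and restart the shared key rekey timer.
        steps = [] if device_group_exists else ["create_device_group"]
        return steps + ["create_new_shared_key", "restart_shared_key_rekey_timer"]
    # Session key path: reuse the normal SKE SA Init flow with the current
    # shared key to establish fresh session keys for the channel pair.
    return ["access_current_shared_key", "build_ske_sa_init_request",
            "send_to_initiator_channel"]
```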
The process flow diagram of
Although various embodiments are described herein, other variations and embodiments are possible.
One or more aspects of the present invention are inextricably tied to computer technology and facilitate processing within a computer, improving performance thereof. In one example, performance enhancement is provided in authenticating links between nodes. These links are used to securely transmit messages between the nodes coupled by the links. One or more aspects reduce link initialization time, increase productivity within the computer environment, enhance security within the computer environment, and/or increase system performance.
Further, other types of computing environments may also incorporate and use one or more aspects of the present invention, including, but not limited to, emulation environments, an example of which is described with reference to
Native central processing unit 37 includes one or more native registers 45, such as one or more general purpose registers and/or one or more special purpose registers used during processing within the environment. These registers include information that represents the state of the environment at any particular point in time.
Moreover, native central processing unit 37 executes instructions and code that are stored in memory 39. In one particular example, the central processing unit executes emulator code 47 stored in memory 39. This code enables the computing environment configured in one architecture to emulate another architecture. For instance, emulator code 47 allows machines based on architectures other than the z/Architecture, such as PowerPC processors, or other servers or processors, to emulate the z/Architecture and to execute software and instructions developed based on the z/Architecture.
Further details relating to emulator code 47 are described with reference to
Further, emulator code 47 includes an emulation control routine 57 to cause the native instructions to be executed. Emulation control routine 57 may cause native CPU 37 to execute a routine of native instructions that emulate one or more previously obtained guest instructions and, at the conclusion of such execution, return control to the instruction fetch routine to emulate the obtaining of the next guest instruction or a group of guest instructions. Execution of native instructions 55 may include loading data into a register from memory 39; storing data back to memory from a register; or performing some type of arithmetic or logic operation, as determined by the translation routine.
Each routine is, for instance, implemented in software, which is stored in memory and executed by native central processing unit 37. In other examples, one or more of the routines or operations are implemented in firmware, hardware, software or some combination thereof. The registers of the emulated processor may be emulated using registers 45 of the native CPU or by using locations in memory 39. In embodiments, guest instructions 49, native instructions 55 and emulator code 47 may reside in the same memory or may be dispersed among different memory devices.
One or more aspects may relate to cloud computing.
It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed.
Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
Service Models are as follows:
Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Deployment Models are as follows:
Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes.
Referring now to
Referring now to
Hardware and software layer 60 includes hardware and software components. Examples of hardware components include: mainframes 61; RISC (Reduced Instruction Set Computer) architecture based servers 62; servers 63; blade servers 64; storage devices 65; and networks and networking components 66. In some embodiments, software components include network application server software 67 and database software 68.
Virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75.
In one example, management layer 80 may provide the functions described below. Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 85 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workloads layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and authentication processing 96.
Aspects of the present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
In addition to the above, one or more aspects may be provided, offered, deployed, managed, serviced, etc. by a service provider who offers management of customer environments. For instance, the service provider can create, maintain, support, etc. computer code and/or a computer infrastructure that performs one or more aspects for one or more customers. In return, the service provider may receive payment from the customer under a subscription and/or fee agreement, as examples. Additionally or alternatively, the service provider may receive payment from the sale of advertising content to one or more third parties.
In one aspect, an application may be deployed for performing one or more embodiments. As one example, the deploying of an application comprises providing computer infrastructure operable to perform one or more embodiments.
As a further aspect, a computing infrastructure may be deployed comprising integrating computer readable code into a computing system, in which the code in combination with the computing system is capable of performing one or more embodiments.
As yet a further aspect, a process for integrating computing infrastructure comprising integrating computer readable code into a computer system may be provided. The computer system comprises a computer readable medium, in which the computer medium comprises one or more embodiments. The code in combination with the computer system is capable of performing one or more embodiments.
Although various embodiments are described above, these are only examples. For example, computing environments of other architectures can be used to incorporate and use one or more embodiments. Further, different instructions, commands or operations may be used. Moreover, other security protocols, transmission protocols and/or standards may be employed. Many variations are possible.
Further, other types of computing environments can benefit and be used. As an example, a data processing system suitable for storing and/or executing program code is usable that includes at least two processors coupled directly or indirectly to memory elements through a system bus. The memory elements include, for instance, local memory employed during actual execution of the program code, bulk storage, and cache memory which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output or I/O devices (including, but not limited to, keyboards, displays, pointing devices, DASD, tape, CDs, DVDs, thumb drives and other memory media, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the available types of network adapters.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below, if any, are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of one or more embodiments has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain various aspects and the practical application, and to enable others of ordinary skill in the art to understand various embodiments with various modifications as are suited to the particular use contemplated.
Number | Name | Date | Kind |
---|---|---|---|
4881263 | Herbison et al. | Nov 1989 | A |
5235644 | Gupta et al. | Aug 1993 | A |
5293029 | Iijima | Mar 1994 | A |
5432798 | Blair | Jul 1995 | A |
6178511 | Cohen et al. | Jan 2001 | B1 |
6182215 | Tatebayashi et al. | Jan 2001 | B1 |
6240512 | Fang et al. | May 2001 | B1 |
6243816 | Fang et al. | Jun 2001 | B1 |
6263435 | Dondeti et al. | Jul 2001 | B1 |
6263437 | Liao et al. | Jul 2001 | B1 |
6275944 | Kao et al. | Aug 2001 | B1 |
6490680 | Scheidt et al. | Dec 2002 | B1 |
6636968 | Rosner et al. | Oct 2003 | B1 |
6661896 | Barnett | Dec 2003 | B1 |
6886095 | Hind et al. | Apr 2005 | B1 |
6915434 | Kuroda et al. | Jul 2005 | B1 |
7089211 | Trostle et al. | Aug 2006 | B1 |
7174457 | England et al. | Feb 2007 | B1 |
7362868 | Madoukh et al. | Apr 2008 | B2 |
7879111 | Hardacker et al. | Feb 2011 | B2 |
7899189 | Dawson et al. | Mar 2011 | B2 |
8156231 | Bellifemine et al. | Apr 2012 | B2 |
8200868 | T' Hooft | Jun 2012 | B1 |
8266433 | Przykucki et al. | Sep 2012 | B1 |
8391494 | Serenyi | Mar 2013 | B1 |
8401195 | Fuchs et al. | Mar 2013 | B2 |
8498417 | Harwood et al. | Jul 2013 | B1 |
8774415 | Baba | Jul 2014 | B2 |
8838550 | Meaney | Sep 2014 | B1 |
8908863 | Chen et al. | Dec 2014 | B2 |
9111568 | Goodman et al. | Aug 2015 | B2 |
9152578 | Saad et al. | Oct 2015 | B1 |
9185088 | Bowen | Nov 2015 | B1 |
9258117 | Roth et al. | Feb 2016 | B1 |
9299385 | Carlson et al. | Mar 2016 | B2 |
9519696 | Roth et al. | Dec 2016 | B1 |
9608813 | Roth et al. | Mar 2017 | B1 |
9654469 | Yang | May 2017 | B1 |
9774445 | Gandhasri | Sep 2017 | B1 |
9830278 | Harwood et al. | Nov 2017 | B1 |
9882713 | Raza et al. | Jan 2018 | B1 |
9887975 | Gifford et al. | Feb 2018 | B1 |
9942034 | Le Saint et al. | Apr 2018 | B2 |
10043029 | Murray | Aug 2018 | B2 |
10318754 | Yavuz | Jun 2019 | B2 |
10331895 | Roth et al. | Jun 2019 | B1 |
10333696 | Ahmed | Jun 2019 | B2 |
10361859 | Clark et al. | Jul 2019 | B2 |
10546130 | Chaney | Jan 2020 | B1 |
10678938 | Agerstam et al. | Jun 2020 | B2 |
10785199 | Chhabra et al. | Sep 2020 | B1 |
10817875 | Makhotin et al. | Oct 2020 | B2 |
10917230 | Feng et al. | Feb 2021 | B2 |
11184160 | Zee et al. | Nov 2021 | B2 |
11212264 | Griffin et al. | Dec 2021 | B1 |
11228434 | Yankovskiy et al. | Jan 2022 | B2 |
20020095569 | Jerdonek | Jul 2002 | A1 |
20020154781 | Sowa | Oct 2002 | A1 |
20030026433 | Matt | Feb 2003 | A1 |
20030084336 | Anderson et al. | May 2003 | A1 |
20030126429 | Jinmei et al. | Jul 2003 | A1 |
20030196115 | Karp | Oct 2003 | A1 |
20030226013 | Dutertre | Dec 2003 | A1 |
20030229789 | Morais et al. | Dec 2003 | A1 |
20040103220 | Bostick et al. | May 2004 | A1 |
20040136533 | Takagaki et al. | Jul 2004 | A1 |
20040196979 | Cheng et al. | Oct 2004 | A1 |
20040210673 | Cruciani et al. | Oct 2004 | A1 |
20050027854 | Boulanger et al. | Feb 2005 | A1 |
20050078828 | Zheng | Apr 2005 | A1 |
20050135622 | Fors et al. | Jun 2005 | A1 |
20050136892 | Oesterling et al. | Jun 2005 | A1 |
20050174984 | O'Neill | Aug 2005 | A1 |
20060005237 | Kobata et al. | Jan 2006 | A1 |
20060010324 | Appenzeller et al. | Jan 2006 | A1 |
20060047601 | Peterka et al. | Mar 2006 | A1 |
20060129812 | Mody | Jun 2006 | A1 |
20070038679 | Ramkumar et al. | Feb 2007 | A1 |
20070104329 | England et al. | May 2007 | A1 |
20070127722 | Lam et al. | Jun 2007 | A1 |
20070218875 | Calhoun et al. | Sep 2007 | A1 |
20080063209 | Jaquette et al. | Mar 2008 | A1 |
20080075280 | Ye et al. | Mar 2008 | A1 |
20080095114 | Dutta et al. | Apr 2008 | A1 |
20080165973 | Miranda Gavillan et al. | Jul 2008 | A1 |
20080244174 | Abouelwafa et al. | Oct 2008 | A1 |
20080294906 | Chang et al. | Nov 2008 | A1 |
20090041006 | Chiu | Feb 2009 | A1 |
20090049311 | Carlson et al. | Feb 2009 | A1 |
20090067633 | Dawson et al. | Mar 2009 | A1 |
20090116647 | Korus et al. | May 2009 | A1 |
20090175451 | Greco | Jul 2009 | A1 |
20090316910 | Maeda et al. | Dec 2009 | A1 |
20100023781 | Nakamoto | Jan 2010 | A1 |
20100031045 | Gade et al. | Feb 2010 | A1 |
20100154053 | Dodgson et al. | Jun 2010 | A1 |
20100157889 | Aggarwal et al. | Jun 2010 | A1 |
20100161958 | Cho et al. | Jun 2010 | A1 |
20100246480 | Aggarwal et al. | Sep 2010 | A1 |
20100257372 | Seifert | Oct 2010 | A1 |
20100290624 | Buer et al. | Nov 2010 | A1 |
20110016314 | Hu et al. | Jan 2011 | A1 |
20110016322 | Dean et al. | Jan 2011 | A1 |
20110038477 | Bilodi | Feb 2011 | A1 |
20110150223 | Qi et al. | Jun 2011 | A1 |
20110179278 | Kim | Jul 2011 | A1 |
20110219438 | Maino et al. | Sep 2011 | A1 |
20110296186 | Wong et al. | Dec 2011 | A1 |
20110320706 | Nakajima | Dec 2011 | A1 |
20120030426 | Satran | Feb 2012 | A1 |
20120204032 | Wilkins et al. | Aug 2012 | A1 |
20130044878 | Rich et al. | Feb 2013 | A1 |
20130046972 | Campagna et al. | Feb 2013 | A1 |
20130103945 | Cannon | Apr 2013 | A1 |
20130132722 | Bennett et al. | May 2013 | A1 |
20130159706 | Li et al. | Jun 2013 | A1 |
20130191632 | Spector et al. | Jul 2013 | A1 |
20130198521 | Wu | Aug 2013 | A1 |
20130246813 | Mori et al. | Sep 2013 | A1 |
20130254531 | Liang et al. | Sep 2013 | A1 |
20130305040 | Lee et al. | Nov 2013 | A1 |
20140108789 | Phatak | Apr 2014 | A1 |
20140149740 | Sato et al. | May 2014 | A1 |
20140185805 | Andersen | Jul 2014 | A1 |
20140298037 | Xiao et al. | Oct 2014 | A1 |
20140380056 | Buckley et al. | Dec 2014 | A1 |
20150007262 | Aissi et al. | Jan 2015 | A1 |
20150019870 | Patnala et al. | Jan 2015 | A1 |
20150039883 | Yoon et al. | Feb 2015 | A1 |
20150044987 | Menon et al. | Feb 2015 | A1 |
20150058913 | Kandasamy et al. | Feb 2015 | A1 |
20150074409 | Reid et al. | Mar 2015 | A1 |
20150089241 | Zhao | Mar 2015 | A1 |
20150117639 | Feekes | Apr 2015 | A1 |
20150134971 | Park et al. | May 2015 | A1 |
20150281185 | Cooley | Oct 2015 | A1 |
20150302202 | Yamamoto | Oct 2015 | A1 |
20160065370 | Le Saint et al. | Mar 2016 | A1 |
20160099922 | Dover | Apr 2016 | A1 |
20160260087 | Lee et al. | Sep 2016 | A1 |
20160261407 | Hernandez et al. | Sep 2016 | A1 |
20160323275 | Choi et al. | Nov 2016 | A1 |
20170019380 | Dover | Jan 2017 | A1 |
20170118180 | Takahashi | Apr 2017 | A1 |
20170149740 | Mansour et al. | May 2017 | A1 |
20170171219 | Campagna | Jun 2017 | A1 |
20170264439 | Muhanna et al. | Sep 2017 | A1 |
20170337140 | Ragupathi et al. | Nov 2017 | A1 |
20170353450 | Koved et al. | Dec 2017 | A1 |
20180046586 | Venkatesh | Feb 2018 | A1 |
20180063141 | Kaliski, Jr. et al. | Mar 2018 | A1 |
20180083958 | Avilov et al. | Mar 2018 | A1 |
20180131517 | Block et al. | May 2018 | A1 |
20180212769 | Novak | Jul 2018 | A1 |
20180234409 | Nelson et al. | Aug 2018 | A1 |
20180241561 | Albertson et al. | Aug 2018 | A1 |
20180260125 | Botes et al. | Sep 2018 | A1 |
20180375870 | Bernsen | Dec 2018 | A1 |
20190018968 | Ronca et al. | Jan 2019 | A1 |
20190028437 | Law et al. | Jan 2019 | A1 |
20190068370 | Neerumalla | Feb 2019 | A1 |
20190132296 | Jiang et al. | May 2019 | A1 |
20190180028 | Seo | Jun 2019 | A1 |
20190181997 | Zhao et al. | Jun 2019 | A1 |
20190182240 | Rossi | Jun 2019 | A1 |
20190207927 | Lakhani et al. | Jul 2019 | A1 |
20190238323 | Bunch et al. | Aug 2019 | A1 |
20190268335 | Targali | Aug 2019 | A1 |
20190318102 | Araya et al. | Oct 2019 | A1 |
20190335551 | Williams et al. | Oct 2019 | A1 |
20190349759 | Rosenberg et al. | Nov 2019 | A1 |
20190380029 | Xu | Dec 2019 | A1 |
20200034528 | Yang et al. | Jan 2020 | A1 |
20200067907 | Avetisov et al. | Feb 2020 | A1 |
20200076585 | Sheppard et al. | Mar 2020 | A1 |
20200076600 | Driever et al. | Mar 2020 | A1 |
20200119911 | Shemer et al. | Apr 2020 | A1 |
20200136822 | Vallapakkam et al. | Apr 2020 | A1 |
20200204991 | Parry et al. | Jun 2020 | A1 |
20200205067 | Liu et al. | Jun 2020 | A1 |
20200233963 | Hamamoto et al. | Jul 2020 | A1 |
20200274870 | Zinar et al. | Aug 2020 | A1 |
20200275274 | Kwon et al. | Aug 2020 | A1 |
20200280548 | Toonk et al. | Sep 2020 | A1 |
20200304292 | Mochalov | Sep 2020 | A1 |
20210075621 | Hathorn | Mar 2021 | A1 |
20210075627 | Hathorn | Mar 2021 | A1 |
20210091943 | Hathorn | Mar 2021 | A1 |
20210091944 | Hathorn | Mar 2021 | A1 |
20210168138 | Paruchuri | Jun 2021 | A1 |
20210203498 | Shin et al. | Jul 2021 | A1 |
20210209201 | Ge et al. | Jul 2021 | A1 |
20210218555 | Mastenbrook et al. | Jul 2021 | A1 |
20210266147 | Zee et al. | Aug 2021 | A1 |
20210266152 | Sczepczenski et al. | Aug 2021 | A1 |
20210266154 | Sczepczenski et al. | Aug 2021 | A1 |
20210266156 | Zee et al. | Aug 2021 | A1 |
20210266161 | Zee et al. | Aug 2021 | A1 |
20210266177 | Sczepczenski et al. | Aug 2021 | A1 |
20210266304 | Zee et al. | Aug 2021 | A1 |
20210336966 | Gujarathi et al. | Oct 2021 | A1 |
20210344645 | Vadayadiyil Raveendran et al. | Nov 2021 | A1 |
20210352047 | Singh et al. | Nov 2021 | A1 |
Number | Date | Country |
---|---|---|
101212293 | Dec 2006 | CN |
101662360 | Aug 2008 | CN |
101409619 | Apr 2009 | CN |
101436930 | May 2009 | CN |
102546154 | Jul 2012 | CN |
102821096 | Dec 2012 | CN |
103716797 | Apr 2014 | CN |
109246053 | May 2017 | CN |
107046687 | Aug 2017 | CN |
110446203 | Nov 2019 | CN |
110690960 | Jan 2020 | CN |
2006270363 | Oct 2006 | JP |
2018182665 | Nov 2018 | JP |
2007091002 | Aug 2007 | WO |
2008155066 | Dec 2008 | WO |
WO-2008155066 | Dec 2008 | WO |
2016016656 | Feb 2016 | WO |
2016034453 | Mar 2016 | WO |
2017167741 | Oct 2017 | WO |
2018096449 | May 2018 | WO |
2019133941 | Jul 2019 | WO |
Entry |
---|
Cisco, “Perfect forward secrecy for GETVPN”, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_conn_getvpn/configuration/xe-16-12/sec-get-vpn-xe-16-12-book/sec-get-vpn-pfs.html; Sep. 25, 2019; 8 pages. |
DMTF, Security protocol and data model specification, retrieved from the internet: https://www.dmtf.org/sites/default/files/standards/documents/DSP0274_0.9.0a.pdf, May 30, 2019, 47 pages. |
International Application No. PCT/IB2021/051287 International Search Report and Written Opinion dated May 21, 2021, 10 pages. |
List of IBM Patents or Patent Applications Treated as Related; (Appendix P), Filed Sep. 16, 2021, 2 pages. |
Patrick McDaniel et al., “Antigone: Implementing policy in secure group communication”, http://www.eecs.umich.edu/techreports/cse/2000/CSE-TR-426-00.pdf; May 16, 2000; 33 pages. |
Ran Canetti et al., “Analysis of key-exchange protocols and their use for building secure channels”, https://link.springer.com/content/pdf/10.1007/3-540-44987-6_28.pdf, 2001, 22 pages. |
Robert Friend et al., “Securing fibre channel sans with end-to-end encryption”, Aug. 6, 2019, https://fibrechannel.org/wp-content/uploads/2018/08/FCIA_SolutionsGuide2019_pg12-13.pdf ; 2 pages. |
U.S. Appl. No. 16/120,894, filed Sep. 4, 2018, Entitled: Controlling Access Between Nodes by a Key Server, First Named Inventor: Patricia G. Driever. |
U.S. Appl. No. 16/120,933, filed Sep. 4, 2018, Entitled: Shared Key Processing by a Host to Secure Links, First Named Inventor: Patricia G. Driever. |
U.S. Appl. No. 16/120,975, filed Sep. 4, 2018, Entitled: Securing a Storage Network Using Key Server Authentication, First Named Inventor: Patricia G. Driever. |
U.S. Appl. No. 16/121,006, filed Sep. 4, 2018, Entitled: Shared Key Processing by a Storage Device to Secure Links, First Named Inventor: Patricia G. Driever. |
U.S. Appl. No. 16/121,026, filed Sep. 4, 2018, Entitled: Securing a Path at a Selected Node, First Named Inventor: Patricia G. Driever. |
U.S. Appl. No. 16/121,050, filed Sep. 4, 2018, Entitled: Securing a Path at a Node, First Named Inventor: Patricia G. Driever. |
U.S. Appl. No. 16/121,097, filed Sep. 4, 2018, Entitled: Automatic Re-Authentication of Links Using a Key Server, First Named Inventor: Roger G. Hathorn. |
S. Chandra, et al., “A Comparative survey of Symmetric and Asymmetric Key Cryptography”, IEEE International Conference on Electronics, Communication and Computational Engineering (ICECCE), 2014, 11 pages. |
Number | Date | Country | |
---|---|---|---|
20220006626 A1 | Jan 2022 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16801319 | Feb 2020 | US |
Child | 17476677 | US |