Computer systems operate in communication networks. Typically, these networks include both local area networks (LANs) in a trusted location that allow direct addressing using local IP addresses and wide area networks (WANs) where connections may not be trusted and public IP addressing may be required. Traditionally, communicating from a system operating in a LAN, through the WAN, to another system in a separate LAN involves addressing these two issues of security and address translation.
These issues can be addressed through the use of Virtual Private Networks (VPN). A VPN can be created by combining tunneling with encryption. Examples of VPN implementations are Internet Protocol Security (IPsec) and Secure Sockets Layer/Transport Layer Security (SSL/TLS) VPNs.
The device implementing the VPN technology at the edge of a network is the VPN gateway. The VPN connection can either be made between the edges of the LAN networks in a gateway-to-gateway VPN, or it can be made from an individual device within one LAN network to a gateway at the edge of the other LAN network in a remote access VPN. The gateway can be implemented in hardware, either standalone or within a router or firewall appliance, or in software running on a server.
One limitation of VPN gateways is that the secure connections are typically point-to-point connections in which the tunnel is defined by the IP address of the gateway, and the encryption and authentication keys are negotiated in a key exchange, using Internet Key Exchange (IKE) or SSL/TLS, between the two gateways. The speed of traffic through the VPN is thus limited by the speed of the gateway. Further, the number of connections is limited by the gateway's capacity to perform key exchanges.
The development of virtualized computing environments with virtual machines operating in a cloud infrastructure has exacerbated these limitations. In a virtualized environment, multiple computation stacks, including operating system, middleware, and applications, can operate together in a single server or set of servers. A cloud system is a virtualized environment where the virtual machines can elastically and dynamically scale to match the load or performance demands, where access to the cloud is through a public network, and where the number and capability of virtual machines can be metered by the cloud provider and made available to the specifications of the client using the cloud.
Access to the cloud network still requires secure tunnels as with hardware networks. To properly operate in a virtualized, cloud environment, the VPN gateway must be able to match the cloud requirements: elastic scaling to match load and performance demands, client management, and provider metering. In addition, the security mechanisms of the gateway should be under control of the client to ensure isolation from other traffic into the cloud. Current gateway implementations fail to meet these requirements.
VPN gateways performing IKE/IPsec can operate with two or more devices to provide failover operation, but these fail to provide scaling to increase the number of available connections to all other LAN gateways or remote access devices for either traffic or key negotiation.
Software gateways running on servers as virtual appliances can operate in the cloud environment, but can only adjust to changes in load requirements by replicating the whole gateway, combining key exchange and data protection and using load balancing to distribute among the gateways.
The technology of load balancing to direct traffic to one of a number of servers providing duplicate capability is well known. Approaches that duplicate the security gateway are used in SSL connections and, to a lesser extent, in IKE/IPsec connections. Typically, these require the complete key negotiation and subsequent encryption to be managed by a single server for each inbound connection. Because key negotiation and data protection are tied together on a single device, these approaches are limited in their ability to handle a large volume of traffic from a single source. Conversely, approaches that require sharing of all key and negotiation material among a group of security gateways fail to scale to larger numbers, as every step of every key negotiation must be accurately replicated to all devices. This leads to performance problems, negotiation failures, and a risk of denial-of-service attacks.
A. Recognition of Problems with Prior Art
In order to address the large traffic volume from a given client, current cloud providers are limited to the use of hardware security gateways that can manage the volume of encrypted data. This approach creates a number of limitations as shown in
U.S. Pat. No. 7,426,566, “Methods, systems and computer program products for security processing inbound communications in a cluster computing environment”, describes a system for IKE/IPsec whereby one server is used for IKE negotiation which then distributes the resulting Security Associations to multiple other endpoint servers. However, this solution relies on a dynamically routable Virtual Internet Protocol Address (DVIPA) that makes this approach unusable for providing a general connection on private networks across a public WAN and operating in a cloud environment as it forces a dependency on the endpoint devices as part of the solution. Various other U.S. patents also utilize custom operation of the routing protocol or routing table, such as U.S. Pat. Nos. 7,420,958, 7,116,665, and 6,594,704. U.S. Pat. No. 7,280,534 similarly uses an IP Service Controller to exchange addressing for a VPN on a Layer 2 network.
U.S. Pat. No. 7,743,155, “Active-active operation for a cluster of SSL virtual private network (VPN) devices with load distribution”, describes a system with a cluster of two or more nodes that receive a packet from a load balancing device, in which the load balancing device provides a virtual IP address for the cluster. The virtual connection can failover from one device to another through a dispatcher on each device. This approach fails to provide scalability of the key exchange independent of the encryption capability, does not provide elastic scaling to meet performance demands, and does not address operation in a virtualized or cloud computing environment. Furthermore, this approach is tied to a single virtual IP address and does not consider the issue of client-controlled security.
U.S. Publication No. 2008/0104693, “Transporting keys between security protocols,” which is hereby incorporated by reference herein, describes placing the key exchange server on the local side of the data protection gateway and allowing the remote gateway to negotiate and send tunneled traffic to the local key server. The key server then performs negotiation and forwards the keys to the gateway which transparently performs encryption and decryption.
B. Solutions to These Problems.
Embodiments include a method and corresponding apparatus for providing a security gateway service in a virtualized computing environment. One example embodiment includes a number of virtual machines for protecting data sent to and from a client, called virtual data protection appliances (vDPA's) and a number of virtual machines for exchanging keys that are used to protect the client's data, called virtual key exchange appliances (vKEA's). At any one of the vDPA's, key exchange packets sent from a client are received. The receiving vDPA passes the key exchange packets to one of the vKEA's, referred to as a working vKEA.
The working vKEA performs the key exchange with the client by responding to the key exchange packets sent from the client. The working vKEA then distributes the result of the key exchange, including a key, to all of the vDPA's. Any one of the vDPA's protects the client's data using the distributed result of the key exchange.
The number of vDPA's or vKEA's or both is increased and decreased as the client's demand increases and decreases.
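By way of illustration only, the division of labor described above can be sketched as a toy model. The class names and the XOR "protection" below are invented placeholders, not the IKE/IPsec processing of the embodiments; they only show that once the key exchange result is distributed, any vDPA can protect the client's traffic.

```python
class VKEA:
    """Virtual key exchange appliance: negotiates a key with a client."""
    def perform_key_exchange(self, client_nonce: bytes) -> bytes:
        # Stand-in for a real IKE negotiation; derives a shared key.
        return bytes(b ^ 0x5A for b in client_nonce)

class VDPA:
    """Virtual data protection appliance: protects traffic with a key."""
    def __init__(self):
        self.keys = {}          # client id -> negotiated key
    def install_key(self, client_id: str, key: bytes):
        self.keys[client_id] = key
    def protect(self, client_id: str, data: bytes) -> bytes:
        key = self.keys[client_id]
        return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

# Any vDPA receives the client's key exchange packets...
vdpas = [VDPA() for _ in range(3)]
working_vkea = VKEA()

# ...passes them to a working vKEA, which performs the exchange...
key = working_vkea.perform_key_exchange(b"client-nonce")

# ...and the result is distributed to ALL vDPA's, so any one of them
# can protect this client's subsequent traffic.
for vdpa in vdpas:
    vdpa.install_key("client-a", key)

ct0 = vdpas[0].protect("client-a", b"payload")
ct2 = vdpas[2].protect("client-a", b"payload")
assert ct0 == ct2   # any vDPA yields the same protection
```

The point of the sketch is the distribution step: protection is decoupled from negotiation, so either pool can grow or shrink on its own.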
The embodiments described herein provide a unique solution for taking network data protection that requires point-to-point key exchange, and extending such protection to the demands of elasticity, client control, scaling, and virtualization demanded in cloud networking or virtualized environments.
In scenarios in which provider management of a cloud is independent of a client using the cloud, security is enhanced according to one embodiment by allowing the client to define policies and security parameters, such as certificates requiring private key material. The security is enhanced according to another embodiment by moving the vKEA to the client site.
Some of the described embodiments alleviate the management burden on the provider by restricting the provider's view (or involvement) to provisioning the configuration interface and metering the use of the virtual appliances. The provider is also relieved of the burden of maintaining separate physical hardware to provide clients with private networks in the cloud.
According to one embodiment, a client defines policy configurations, called client policy configuration, in which access to the virtual appliances in the cloud is isolated by network policies. This client policy configuration better matches current network security configurations and matches network segregation required by regulatory bodies, for example. This client policy configuration also allows for more stable deployments, connecting private virtual networks to multiple client offices or between provider buildings.
By using separate virtual appliances for data protection and key exchange, capabilities can be scaled independently and elastically. This separation of capabilities allows, for example, a client to only pay for the capability(s) required at a given time and for a given network. In another example, both virtual key exchange appliances and virtual data protection appliances can be duplicated, backed up, and moved as needed.
In one embodiment, critical state information of the key exchange and data protection virtual appliances is maintained. This state information can be replicated so that a failure of any individual virtual appliance can be recovered by other virtual appliances with minimal loss of traffic, thus improving provider and client operational availability.
The described embodiments offer a realistic approach to providing a security gateway service in a virtualized computing environment that meets network performance requirements without, for example, overloading server computing time and resources. As further described below, encryption performance is independent of both server load and key exchange operations. Also as described below, the use of tunneled key exchange packets and shared state storage of key exchange messages and operations combine to provide a highly robust solution in a dynamic environment.
The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the embodiments.
A description of example embodiments follows.
The teachings of all patents, published applications, and references cited herein are incorporated by reference in their entirety.
The VEGA 200 is made up of the following components implemented within the virtualized environment 205 of a provider cloud:
GW1 215a and GW2 215b. The IKE/IPsec gateways GW1 215a and GW2 215b are located at client sites 255a, 255b. Traffic from the client sites 255a, 255b is encrypted in IPsec tunnels 260a, 260b that pass through an insecure network 265 to reach the provider site. The secure tunnels 260a, 260b are terminated by the VEGA 200, which provides key exchange and data protection as described below.
In one embodiment, data protection in each of the vDPA's 210 operates in software as a virtual machine. The inbound and outbound traffic go through a load balancer 245a that divides the traffic evenly between the vDPA's 210, for example, to minimize load to any one of the vDPA's 210. By increasing the number of vDPA's 210 and providing them with policies and keys for data protection, according to a convenient embodiment, increasing levels of network traffic can be handled.
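As an illustrative aside, one common way a balancer such as the load balancer 245a could divide traffic is a flow hash, which spreads distinct flows across the pool while keeping each flow pinned to one vDPA 210. The function below is a hypothetical sketch, not the balancer of the embodiments; note that because every vDPA holds the distributed keys, any vDPA could in principle handle any flow.

```python
import hashlib

def pick_vdpa(num_vdpas: int, src_ip: str, dst_ip: str,
              src_port: int, dst_port: int) -> int:
    """Map a flow 4-tuple to a vDPA index; same flow -> same vDPA."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(flow).digest()
    return int.from_bytes(digest[:4], "big") % num_vdpas

# The same flow always lands on the same vDPA...
a = pick_vdpa(4, "10.0.0.1", "10.0.1.9", 4500, 4500)
assert a == pick_vdpa(4, "10.0.0.1", "10.0.1.9", 4500, 4500)

# ...while many distinct flows spread over the pool.
hits = {pick_vdpa(4, f"10.0.0.{i}", "10.0.1.9", 4500, 4500)
        for i in range(64)}
assert len(hits) > 1
```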
Unlike current technologies, an embodiment separates the key negotiation function, IKE in this example, onto a separate virtual machine allowing the vKEA's 220 to dynamically increase (or decrease) in number allowing the key exchange operation to handle changing levels of key exchanges independent of network traffic.
Increasing/decreasing the number of vKEA's 220 independent of network traffic may be helpful, for example, if a large number of remote offices are connecting at the start of the work day, requiring a large IKE negotiation load, but these offices do not produce heavy traffic until later in the day when markets open.
A number of new components are implemented to perform the functionalities and capabilities described above.
Virtual Key Exchange Tunneling: Because key exchange traffic (e.g., IKE packets) from a client can be received at any arbitrary vDPA 210 and then forwarded to one particular vKEA 220 to accomplish the key exchange (or negotiation), in an example embodiment, the vDPA 210 creates a tunnel, called a Virtual Key Exchange Tunnel, with the targeted vKEA 220, encapsulating the key exchange packets. In addition, this tunnel also encapsulates packets that are not key exchange packets, but are required for performing key exchange (e.g., packets encrypted with an unknown key or unencrypted packets to a protected address). The Virtual Key Exchange Tunnel itself is capable of being encrypted to protect the key exchange packets.
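A minimal sketch of such encapsulation follows. The header layout, field names, and JSON framing are invented here for illustration; the specification does not fix a wire format, and a real tunnel could additionally encrypt the frame.

```python
import json, base64

def encapsulate(ike_packet: bytes, client_addr: str, vkea_id: str) -> bytes:
    """Wrap a key exchange packet for forwarding from a vDPA to a vKEA."""
    return json.dumps({
        "vkea": vkea_id,         # which vKEA should process the exchange
        "client": client_addr,   # original peer, so replies route back
        "payload": base64.b64encode(ike_packet).decode(),
    }).encode()

def decapsulate(tunneled: bytes):
    """Recover the original peer address and inner packet at the vKEA."""
    frame = json.loads(tunneled)
    return frame["client"], base64.b64decode(frame["payload"])

pkt = b"\x00IKE-SA-INIT..."      # placeholder bytes, not a real IKE message
wire = encapsulate(pkt, "198.51.100.7:500", "vkea-2")
client, inner = decapsulate(wire)
assert client == "198.51.100.7:500" and inner == pkt
```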
Virtual Key Exchange Shared State Storage (or Shared State Storage): Key exchange mechanisms are stateful and include a series of messages that are exchanged, a sequence of operations that are performed, and shared keys that are derived from exchanged knowledge. The key exchange normally takes place between two specific devices, such as client gateway GW1 215a and vKEA2 220b. In one embodiment, the shared state storage 240 is used to coordinate key exchanges between the vKEA's 220 and client gateways, and to provide failover should one vKEA 220 be dynamically removed while that vKEA 220 is maintaining a key exchange. The shared state storage 240 also provides a mechanism for the Master vKEA 225 to verify liveliness of each of the vKEA's 220 and to coordinate policy updates.
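The failover role of the shared state storage might be pictured as follows. The dict-backed store, the exchange identifiers, and the step names are assumptions for illustration only; a real store would be a replicated service.

```python
class SharedStateStorage:
    """Toy stand-in for shared state storage 240: a common record of
    in-progress exchanges that any vKEA can read or resume."""
    def __init__(self):
        self._state = {}    # exchange id -> negotiation state

    def put(self, exchange_id, state):
        self._state[exchange_id] = dict(state)

    def get(self, exchange_id):
        return dict(self._state[exchange_id])

store = SharedStateStorage()

# One vKEA records each step of an exchange with a client gateway...
store.put("gw1-exch-7", {"owner": "vkea-2", "step": "SA_INIT_DONE",
                         "nonce": "c2VjcmV0"})

# ...that vKEA is dynamically removed; a peer resumes from the store.
resumed = store.get("gw1-exch-7")
resumed["owner"] = "vkea-3"
resumed["step"] = "AUTH_IN_PROGRESS"
store.put("gw1-exch-7", resumed)

assert store.get("gw1-exch-7")["owner"] == "vkea-3"
```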
Client Security Management: In separating the key exchange appliance from the data protection appliance and making both appliances virtual, this approach separates the provider configuration, deployment and metering of the VEGA 200 from the client task of configuring policy and security parameters. One embodiment allows the client to configure policies and security parameters, including certificates, in the vKEA's 220 either directly (e.g., from the client site 255a) or via the provider using the vKEA-API 230. In another embodiment, through the use of virtual key exchange tunneling (described above), the vKEA's 220 may be located away from the provider at the client's site.
Virtual Component Control: According to some embodiments, the Master vKEA 225 provides a unique combination of configuration and control. The Master vKEA 225 gives the provider an interface to launch the VEGA 200 and to set limits on a maximum number of vDPA's 210 and a maximum number of vKEA's 220, as well as to limit their respective configurations. The Master vKEA 225 provides the interface for configuring security and policies either directly to the client or via the provider. The Master vKEA 225 also monitors the liveliness of the vKEA's 220 and manages changes in the number of vKEA's 220, vDPA's 210, their respective policies, and failure scenarios. In one embodiment, the Master vKEA 225 operates as a virtual machine with its state maintained in shared state storage 240. As such, the Master vKEA 225 can be moved or can failover with minimal impact to client traffic.
In one example, operation of the VEGA 200 begins with the provider deploying or provisioning the vDPA's 210 and the vKEA's 220, and, in one embodiment, the shared state storage 240. In a convenient embodiment, the Master vKEA 225 is used to configure public and internal IP addressing for the VEGA 200, as well as default policies.
Once the VEGA 200 and its components are provisioned, the client configures security settings and policies for data protection and key exchange. In some embodiments, the client sets initial states for the vDPA and vKEA counts, sets the parameters for elasticity, and manages certificates on the vKEA's 220. In one embodiment, one or more of the foregoing client activities are done through the Master vKEA 225 to which the client connects. The client configures the client's local gateways (e.g., gateways GW1, GW2 215a, 215b) independently.
After the VEGA 200 and its components are configured to perform key exchange and data protection, the key exchange, which in the example below is IKE, and the data protection are carried out as follows, according to one or more embodiments.
After performing the key exchange, a result of the key exchange (which, in the IKE example above, is a Security Association (SA) containing one or more derived keys) is installed on each of the vDPA's 210. In one embodiment, the result of the key exchange is installed on each of the vDPA's 210 with a unicast message or a broadcast message to all vDPA's 210 to install. In another embodiment, a vDPA (e.g., vDPA0 210a) can request the result of the key exchange from the vKEA2 220b upon discovering that the result is needed, as follows.
Upon receiving, at vDPA 210a, an initial outbound data packet from VM 250 to Client A-1 without an SA, or an inbound data packet from Client A-1 with an unknown SPI:
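One way to picture this pull model is the hypothetical sketch below; the vDPA-to-vKEA request protocol and the SA contents are invented for illustration. A vDPA that sees an unknown SPI asks the vKEA for the exchange result and caches it locally.

```python
class KeyStore:
    """Toy table of installed SAs, keyed by SPI."""
    def __init__(self):
        self._sas = {}             # SPI -> security association (a dict)
    def install(self, spi, sa):
        self._sas[spi] = sa
    def lookup(self, spi):
        return self._sas.get(spi)

vkea_sas = KeyStore()              # stands in for a vKEA's exchange results
vkea_sas.install(0x1001, {"key": "k-abc", "peer": "client-a-1"})

class VDPA:
    def __init__(self, vkea):
        self.local = KeyStore()
        self.vkea = vkea
    def sa_for(self, spi):
        sa = self.local.lookup(spi)
        if sa is None:                       # unknown SPI: pull from vKEA
            sa = self.vkea.lookup(spi)
            if sa is not None:
                self.local.install(spi, sa)  # cache for later packets
        return sa

vdpa0 = VDPA(vkea_sas)
assert vdpa0.local.lookup(0x1001) is None    # not yet installed locally
assert vdpa0.sa_for(0x1001)["key"] == "k-abc"
```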
According to another convenient embodiment, the VEGA 200 performs periodic operations to ensure that re-keys are done in a timely manner, that expired keys are removed, and that a failure in any one of the vKEA's 220 does not result in system failure.
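Such periodic operations might be sketched as a maintenance pass over the installed SAs; the lifetime fields and re-key margin below are illustrative assumptions, not values from the embodiments.

```python
def maintain(sas, now, rekey_margin=300):
    """Return (kept, to_rekey): drop SAs whose lifetime has passed and
    flag SAs within rekey_margin seconds of expiry for re-keying."""
    kept, to_rekey = {}, []
    for spi, sa in sas.items():
        if sa["expires_at"] <= now:
            continue                          # expired: remove
        kept[spi] = sa
        if sa["expires_at"] - now <= rekey_margin:
            to_rekey.append(spi)              # nearing expiry: re-key
    return kept, to_rekey

sas = {
    1: {"expires_at": 1000},    # already expired at t=2000
    2: {"expires_at": 2200},    # within the re-key margin at t=2000
    3: {"expires_at": 9000},    # healthy
}
kept, to_rekey = maintain(sas, now=2000)
assert set(kept) == {2, 3}
assert to_rekey == [2]
```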
In the description above, reference is made to the tunneling of IKE traffic (and other related packets) from the vDPA's 210 to the vKEA's 220.
In this example of virtual key exchange tunneling, the IKE stack 330 operates with multiple IP addresses not actually configured on the virtual machine interface (represented as block 340).
The above embodiments are described as an IKE/IPsec solution, but there are a number of different scenarios in which these embodiments may be used. For example, a VEGA, according to one or more embodiments, can provide gateway protection using Secure Sockets Layer (SSL) or Transport Layer Security (TLS) protection by performing key exchange in one set of virtual appliances and data protection in another set of virtual appliances. In this example, in addition to forwarding key exchange packets from the vDPA's through a tunnel, packets maintaining TCP connectivity are also forwarded.
One approach according to one or more embodiments described above can be used with any protocol that requires key exchange or authentication, normally performed in a point-to-point fashion, with data protection where scalability and elasticity are required.
In another embodiment, the internal vDPA 520 re-encrypts decrypted data packets that are sent to a gateway or other remote access device located outside of the provider cloud 510. In this embodiment, another data protection tunnel (not shown) is established with the external gateway or other remote access device, in addition to the data protection tunnel 525 with the client 525. This external type of re-encryption may be used to protect traffic between multiple client sites, e.g., Client A-1 255a and Client A-2 255b of
The protection policies, encryption types, and even network types need not be the same on each side of the vDPA's 515, 520. For example, the external encryption tunnel (or connection) 525 might be encrypted with SSL/TLS while the internal encryption tunnel (or connection) 530 is protected with IPsec.
In the foregoing approach, Client GW-1 720 initiates a key exchange to a public IP address in the cloud 715 at a VEGA load balancer 725. A key exchange packet is forwarded to a vKEA Distributor 730. The vKEA Distributor 730 then sends the key exchange packet in a tunnel 735 to the Local vKEA 705.
The Local vKEA 705 continues the key exchange, tunneling back through the cloud 715 to the vKEA Distributor 730, which sends the key exchange packet back to the original GW-1 720. When the exchange is complete, the Local vKEA 705 sends the keys (and/or security associations) to the vKEA Distributor 730, which installs the keys in each of the vDPA's 740.
The procedure 800 starts at 801. The procedure 800, at any one of the vDPA's, receives (805) key exchange packets sent from the client. The packets being received (805) are sent from a client that has no access to information identifying that the key exchange packets are being received by a virtual machine that does not perform a key exchange. The procedure 800 then passes (810) the key exchange packets to one of the vKEA's. The vKEA to which the key exchange packets are being passed is referred to as a working vKEA.
The procedure 800, at the working vKEA, performs (815) the key exchange with the client by responding to the key exchange packets sent from the client. The procedure 800 then distributes (820) the result of the key exchange including a key to all of the vDPA's.
The procedure 800, at any one of the vDPA's, protects (825) the client's data using the distributed result of the key exchange.
The procedure 800 increases and decreases (830) the number of vDPA's or vKEA's or both as the client's demand increases and decreases.
The procedure 800 ends at 831.
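The elasticity of step 830 can be illustrated with a toy scaling rule in which the vDPA and vKEA counts track their own demand independently, within provider-set maxima (the limits the Master vKEA enforces). The thresholds below are invented for illustration.

```python
def scale(count, load_per_instance, target=0.6, lo=0.3, min_n=1, max_n=8):
    """Grow the pool when average load exceeds target; shrink when it
    falls below lo; always stay within [min_n, max_n]."""
    if load_per_instance > target and count < max_n:
        return count + 1
    if load_per_instance < lo and count > min_n:
        return count - 1
    return count

assert scale(2, 0.9) == 3     # demand up: add an appliance
assert scale(3, 0.1) == 2     # demand down: remove one
assert scale(8, 0.9) == 8     # provider maximum respected
assert scale(1, 0.0) == 1     # never below the minimum
```

Applied separately to the vDPA pool (data load) and the vKEA pool (negotiation load), such a rule captures how the two capabilities scale independently.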
The network interface 905 is configured to send and receive packets 920 (e.g., key exchange packets and data packets) to and from a client 925. The vDPA's 910 and vKEA's 915 are configured to perform the procedure 800 of
It should be understood that the example embodiments described above may be implemented in many different ways. In some instances, the various “machines” and/or “data processors” described herein may each be implemented by a physical, virtual or hybrid general purpose computer having a central processor, memory, disk or other mass storage, communication interface(s), input/output (I/O) device(s), and other peripherals. The general purpose computer is transformed into the machines described above, for example, by loading software instructions into a data processor, and then causing execution of the instructions to carry out the functions described.
As is known in the art, such a computer may contain a system bus, where a bus is a set of hardware lines used for data transfer among the components of a computer or processing system. The bus or busses are essentially shared conduit(s) that connect different elements of the computer system (e.g., processor, disk storage, memory, input/output ports, network ports, etc.) and enable the transfer of information between the elements. One or more central processor units are attached to the system bus and provide for the execution of computer instructions. Also typically attached to the system bus are I/O device interfaces for connecting various input and output devices (e.g., keyboard, mouse, displays, printers, speakers, etc.) to the computer. Network interface(s) allow the computer to connect to various other devices attached to a network. Memory provides volatile storage for computer software instructions and data used to implement an embodiment. Disk or other mass storage provides non-volatile storage for computer software instructions and data used to implement, for example, the various procedures described herein.
Embodiments may therefore typically be implemented in hardware, firmware, software, or any combination thereof.
The data processors that execute the functions described above may be deployed in a cloud computing arrangement that makes available one or more physical and/or virtual data processing machines via a convenient, on-demand network access model to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Such cloud computing deployments are relevant and typically preferred as they allow multiple users to access computing resources as part of a shared marketplace. By aggregating demand from multiple users in central locations, cloud computing environments can be built in data centers that use the best and newest technology, located in sustainable and/or centralized locations, and designed to achieve the greatest per-unit efficiency possible.
In certain embodiments, the procedures, devices, and processes described herein constitute a computer program product, including a computer readable medium (e.g., a removable storage medium such as one or more DVD-ROM's, CD-ROM's, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the system. Such a computer program product can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable, communication and/or wireless connection.
Embodiments may also be implemented as instructions stored on a non-transient machine-readable medium, which may be read and executed by one or more processors. A non-transient machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a non-transient machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and others.
Further, firmware, software, routines, or instructions may be described herein as performing certain actions and/or functions of the data processors. However, it should be appreciated that such descriptions contained herein are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.
It also should be understood that the block and network diagrams may include more or fewer elements, be arranged differently, or be represented differently. But it further should be understood that certain implementations may dictate the block and network diagrams and the number of block and network diagrams illustrating the execution of the embodiments be implemented in a particular way.
Accordingly, further embodiments may also be implemented in a variety of computer architectures, physical, virtual, cloud computers, and/or some combination thereof, and thus the data processors described herein are intended for purposes of illustration only and not as a limitation of the embodiments.
While the embodiments have been particularly shown and described with references to examples thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 61/393,159, filed on Oct. 14, 2010. The entire teachings of the above application are incorporated herein by reference.