The field relates generally to information processing systems, and more particularly to techniques for network monitoring in virtualized information processing systems.
One example of an information processing system is a data center. Security is one of the most important aspects of data center operations. In data centers that employ Software Defined Networking (SDN), logical network connectivity among virtual machines (VMs) and other networked resources is relatively dynamic by comparison with the physical network connectivity of the data center. Inevitably, such dynamic logical network connectivity, as well as the implementation of SDN control functionality, creates new challenges for network monitoring.
Most existing network monitoring solutions rely on attaching physical monitoring devices to data center network devices in order to inspect traffic. This deployment approach works well in small and mid-sized environments with relatively static monitoring requirements. However, in a large, sophisticated data center, deploying and configuring physical monitoring devices to cover the traffic flows of a large number of network devices is a substantial operational effort and consumes significant resources.
Embodiments of the invention provide techniques for network monitoring in a virtualized information processing system.
For example, in one embodiment, a method comprises the following steps. A request is obtained at a monitoring controller to provide a monitoring function for at least one subject virtual processing element (e.g., VM) in a virtualized information processing system. The monitoring controller selects and/or provisions at least one traffic capture appliance configured to capture traffic associated with the subject virtual processing element. The monitoring controller requests the virtualized information processing system to forward a copy of traffic associated with the subject virtual processing element to the traffic capture appliance for analysis. One or more of the steps are performed under control of at least one processing device.
In one example, the monitoring controller requests a system controller of the virtualized information processing system to set up network traffic mirroring and an encapsulation tunnel at one or more logical ports of a virtual switch that corresponds to the subject virtual processing element so as to forward a copy of traffic associated with the subject virtual processing element to the traffic capture appliance.
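The following is a minimal Python sketch of how such a monitoring-controller workflow could be organized. The class and method names (MonitoringController, select_or_provision, mirror_and_tunnel, and so on) are hypothetical illustrations rather than the API of any actual product; they simply stand in for whatever appliance inventory and system-controller interfaces a particular deployment exposes.

```python
# Minimal sketch only: the controller responsibilities described above,
# expressed as a small Python class. All collaborator names are hypothetical.

from dataclasses import dataclass


@dataclass
class MonitoringRequest:
    vm_id: str                 # identifier of the subject virtual processing element
    capture_filter: str = ""   # optional rule, e.g. a BPF-style filter


class MonitoringController:
    def __init__(self, appliance_pool, system_controller):
        # appliance_pool: hypothetical inventory of traffic capture appliances
        # system_controller: hypothetical client for the SDN/system controller
        self.appliance_pool = appliance_pool
        self.system_controller = system_controller

    def provision_monitoring(self, request: MonitoringRequest):
        # 1. Select, or provision on demand, a traffic capture appliance.
        appliance = self.appliance_pool.select_or_provision()

        # 2. Configure the appliance's capture interface (filters/rules).
        appliance.configure(capture_filter=request.capture_filter)

        # 3. Ask the system controller to mirror the subject VM's traffic and
        #    tunnel the copy to the appliance for analysis.
        self.system_controller.mirror_and_tunnel(
            vm_id=request.vm_id,
            tunnel_endpoint=appliance.ip_address,
        )
        return appliance
```

The sketch deliberately separates appliance selection and configuration from the request to the system controller, mirroring the separation of steps described above.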
In another embodiment, an article of manufacture is provided which comprises a processor-readable storage medium having encoded therein executable code of one or more software programs. The one or more software programs, when executed by at least one processing device, implement the steps of the above-described method.
In yet another embodiment, an apparatus comprises a memory and a processor configured to perform steps of the above-described method.
Advantageously, embodiments described herein provide elastic network monitoring services that provision network monitoring on-demand for a VM, and maintain the service functionality across network environment changes that include, but are not limited to, VM migration.
These and other features and advantages of the present invention will become more readily apparent from the accompanying drawings and the following detailed description.
Embodiments of the invention will be described herein with reference to exemplary information processing systems, computing systems, data storage systems, data centers, and associated servers, computers, appliances, controllers, storage units, storage devices, and other processing devices. It is to be appreciated, however, that embodiments of the invention are not restricted to use with the particular illustrative system and device configurations shown. Moreover, the phrases “information processing system,” “computing system,” “data storage system” and “data center” as used herein are intended to be broadly construed, so as to encompass, for example, private or public cloud computing or storage systems, as well as other types of systems comprising distributed virtual and/or physical infrastructure. A computing system that implements virtualization is referred to herein as a “virtualized information processing system.” However, a given embodiment may more generally comprise any arrangement of one or more processing devices.
As mentioned above, the broad adoption of virtualization technology in data centers results in logical network connectivity that is relatively dynamic by comparison to physical networking infrastructure. This makes it difficult to monitor a subject VM that may migrate inside a data center or even across different data centers.
In a virtualized computing environment, the network traffic between VMs that use a common hypervisor may not be visible to existing physical monitoring solutions. Monitoring of this class of traffic may be essential to achieving comprehensive enterprise security protection and full traffic visibility for network analytics.
With the development of software defined networking and network virtualization, there are virtual network controllers that provide an intelligent abstraction layer between end hosts and the physical network, with centralized control to configure and manage virtual networks. An example is VMware NSX™. In a virtualization environment, the network interfaces of VMs are connected to physical networks via virtual switches. A virtual switch is a network switch implemented by hypervisor-resident software. VMs are attached to logical ports of virtual switches. Open vSwitch (OVS) is an example of a virtual switch.
A virtual network controller interacts with virtual switches for configuration purposes via protocols such as OpenFlow™ and the Open vSwitch Database (OVSDB) Management Protocol, a configuration protocol designed to manage Open vSwitch implementations. Some virtual network controllers provide advanced network features, for example, centralized on-demand configuration of network port mirroring and network tunnels. For example, the Nicira Network Virtualization Platform (NVP), on which NSX™ is based, can direct OVS to dynamically mirror all network traffic sent through a logical port, use Generic Routing Encapsulation (GRE) to encapsulate the mirrored traffic, and forward the encapsulated traffic to any destination specifiable by an Internet Protocol (IP) address as a remote GRE tunnel endpoint. A detailed explanation of GRE is provided in two Internet Engineering Task Force (IETF) Request for Comments (RFC) documents, RFC 2784 and RFC 2890. This functionality enables on-demand mirroring, tunnel encapsulation and forwarding of the network traffic of a VM that is attached to NVP/OVS-managed virtual networks. Such traffic capture for analysis is not affected by VM migration because the traffic mirroring, encapsulation and forwarding is configured at an abstraction layer (a logical port on a logical network, which is not affected by VM migration). Live VM migration refers to a process of moving a running VM between different physical (host) machines without disconnecting the associated client or application. Memory, storage, and network connectivity of the VM are transferred from the source host machine to the destination host machine, along with the monitoring functionality and local GRE tunnel endpoint. However, illustrative embodiments of the invention can be applied in a straightforward manner to VM migration that is not necessarily “live,” i.e., migration in which the VM is shut down and then moved.
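As an illustration of the encapsulation format referred to above, the following Python sketch (using the scapy packet library) builds a mirrored Ethernet frame wrapped in an outer IP/GRE header, the way a hypervisor-resident virtual switch might place it on the wire toward a remote tunnel endpoint. The addresses are placeholders; IP protocol number 47 identifies GRE per RFC 2784, and GRE protocol type 0x6558 (Transparent Ethernet Bridging) indicates that an entire layer-2 frame is carried inside the tunnel.

```python
# Illustrative only: what a GRE-encapsulated mirrored frame looks like on the
# wire. All addresses below are placeholders for this example.

from scapy.all import Ether, IP, TCP, GRE, Raw

# Original frame as sent by the monitored VM.
inner = (Ether(src="52:54:00:aa:bb:cc", dst="52:54:00:dd:ee:ff")
         / IP(src="10.0.0.5", dst="10.0.0.9")
         / TCP(dport=3260)               # e.g., iSCSI traffic
         / Raw(b"example payload"))

# Mirrored copy, GRE-encapsulated and addressed to the remote tunnel endpoint.
outer = (IP(src="192.0.2.10", dst="192.0.2.20", proto=47)   # protocol 47 = GRE
         / GRE(proto=0x6558)             # Transparent Ethernet Bridging
         / inner)

outer.show()   # outer IP | GRE | original Ethernet frame
```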
Embodiments of the invention provide an elastic monitoring service configured to provision network monitoring on-demand for a VM and to maintain the service functionality across network environment changes that include VM migration. Existing network analytics and security tools can be used on the network traffic captured by this monitoring service.
As used herein, the term “cloud” refers to a collective computing infrastructure that implements a cloud computing paradigm. For example, as per the National Institute of Standards and Technology (NIST Special Publication No. 800-145), cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. As used herein, a “virtual platform” refers to a computing platform that implements server virtualization. Server virtualization functionality is implemented and realized via a hypervisor. A hypervisor is typically comprised of one or more software programs configured to create and manage virtual assets such as VMs. The hypervisor is directly inserted on top of the computer hardware in order to allocate hardware resources of the physical infrastructure dynamically and transparently. The hypervisor provides, by way of example, the ability for multiple operating systems to run concurrently on a single physical computer and share hardware resources with each other.
By way of example, as shown in FIG. 1, a virtualized information processing system 100 comprises one or more VMs, including a monitored VM 104-1, hosted by a hypervisor, a virtual switch 106 resident in that hypervisor, a converged network 108, and a storage system 122.
The system 100 also includes a network (or system) controller 110. The network controller 110 is configured to interact with virtual switches during runtime for managing switch configuration via protocols such as OpenFlow™ and OVSDB. More generally, the network controller 110 is responsible for managing networking inside and across the SDN-enabled data center, and is thus sometimes referred to as an “SDN controller.” In one illustrative embodiment using OpenStack™, Neutron™ is an example of an SDN controller component that manages the logical networking of OpenStack™ and is able to provision and configure virtual networking via plugins such as the NVP plugin. In some scenarios, NVP can also act as an SDN controller in a data center environment on its own.
As further shown in FIG. 1, the system 100 comprises a second virtual switch 112, a set of traffic capture components 116-1, 116-2, . . . , 116-N, and a process and analysis module 118.
As will be further explained below, virtual switch 112 couples the converged network 108 with traffic capture components 116-1, 116-2, . . . , 116-N. A traffic capture component is a virtual appliance or software program installed on a VM that is configured to capture, model and reassemble network traffic that is sent to it for further processing and analysis. Such processing and analysis is performed by the process and analysis module 118. An example of a virtual appliance that can be used to implement a traffic capture component is an RSA® Security Analytics PacketDecoder, which can be dynamically provisioned and configured to perform network packet capturing and network session rebuilding. As used herein, a traffic capture component may also be referred to as a “traffic capture appliance.”
As further shown in FIG. 1, the system 100 comprises a network monitoring controller 120 that manages the provisioning and configuration of the network monitoring service described herein.
It is to be appreciated that the functionalities of the network controller 110 and the network monitoring controller 120 can be implemented in the same controller component (e.g., an SDN controller), or in separate controller components as illustratively depicted in FIG. 1.
In step 202, a security administrator, or some other individual or system, initiates a request with the network monitoring controller 120 to provision a network monitoring service on one or more VMs specified with network addresses that may include, but are not limited to, Internet Protocol (IP) addresses and Ethernet Media Access Control (MAC) addresses. Other identification implementations may include, but are not limited to, VM name, VM identifier (ID), etc. The network monitoring controller 120 is configured to map such an identifying label to a specific VM. In the example depicted in FIG. 1, the specified VM is assumed to be VM 104-1.
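As a hedged illustration, the sketch below shows one simple way the identifier-to-VM mapping of step 202 might be performed. The request format, the inventory records, and the resolve_vm helper are all hypothetical; an actual controller would resolve the label by querying the virtualization platform or the SDN controller.

```python
# Hypothetical request and inventory for illustration only.

MONITORING_REQUEST = {
    "subject": {"ip": "10.0.0.5", "mac": "52:54:00:aa:bb:cc"},  # identifying "label"
    "capture_filter": "host 10.0.0.5",                          # optional capture rule
}

VM_INVENTORY = [
    {"vm_id": "vm-104-1", "ip": "10.0.0.5", "mac": "52:54:00:aa:bb:cc", "logical_port": "lp-1"},
    {"vm_id": "vm-104-2", "ip": "10.0.0.6", "mac": "52:54:00:aa:bb:cd", "logical_port": "lp-2"},
]


def resolve_vm(subject, inventory):
    """Map an identifying label (IP, MAC, or VM ID) to a VM record."""
    for vm in inventory:
        if (subject.get("ip") == vm["ip"]
                or subject.get("mac") == vm["mac"]
                or subject.get("vm_id") == vm["vm_id"]):
            return vm
    raise LookupError("no VM matches the supplied identifier")


print(resolve_vm(MONITORING_REQUEST["subject"], VM_INVENTORY))
```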
Based on the request, in step 204, the network monitoring controller 120 configures a traffic capture appliance to perform traffic capturing, modeling and reassembling functionality via a traffic capture interface associated with the traffic capture appliance. As illustratively used herein, “capturing” refers to ingesting or accepting network traffic for processing, and may include setting up filters and/or rules about what kinds of network packets to capture (e.g., a rule/filter to capture only packets from a certain IP address). Further, “modeling” refers to the situation where the appliance is driven by a model of the network traffic that specifies the aspects used for analysis. Aspects of the traffic outside the model are generally not usable for analytics, and the traffic data that represents such aspects may be discarded by the appliance. Modeling may also be considered a process for parsing network packets that are in an expected format. Still further, “reassembling” refers to an operation associated with a higher-level protocol session. For example, data for a Transmission Control Protocol (TCP) session may be spread across multiple captured packets that may not be in strict sequential order and may have intervening packets from other sessions in the traffic capture stream. The appliance reassembles the TCP session (headers and data transferred by TCP) from the packets that make up the session.
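To make the “capturing” step concrete, the following sketch uses scapy's sniff() with a BPF capture filter that admits only traffic involving a single IP address, and a per-packet callback that keeps a one-line summary as a trivial stand-in for the appliance's traffic model. The interface name and address are assumptions for the example; a real traffic capture appliance such as a PacketDecoder applies its own filtering and parsing machinery.

```python
# Minimal capture sketch (requires privileges to sniff on the interface).

from scapy.all import sniff


def summarize(pkt):
    # "Modeling" stand-in: retain only the aspects the analysis needs, here a
    # one-line summary per packet; anything outside the model is discarded.
    print(pkt.summary())


# "Capturing" under an illustrative rule: only traffic involving 10.0.0.5,
# the monitored VM's address in this example.
sniff(iface="eth0", filter="host 10.0.0.5", prn=summarize, count=10)
```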
The traffic capture appliance can be provisioned on-demand or selected from a list of already-deployed traffic capture appliances. In the example of FIG. 1, traffic capture appliance 116-2 is assumed to be selected.
In step 206, the network monitoring controller 120 queries the network (SDN) controller 110 to locate the corresponding virtual switch and logical ports for the specified VM, and then requests the network controller 110 to set up one or more traffic mirroring rules on the located logical ports and to configure a network tunnel to the traffic capture appliance specified in step 204.
Based on the request, in step 208, the network controller 110 configures the involved virtual switches (in this example, virtual switch 106 and virtual switch 112) to set up the necessary network flow rules. Taking NVP/OVS as an example, the traffic mirroring rule is configured on OVS by NVP to mirror all traffic of the specified logical port, and a GRE tunnel is configured to transfer the mirrored traffic (i.e., a copy of the original traffic) to the appropriate traffic capture appliance. This is illustrated in FIG. 1.
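For concreteness, the sketch below shows the kind of Open vSwitch configuration that could realize the mirroring rule and GRE tunnel of step 208, driven here from Python via the ovs-vsctl command-line tool. The bridge name, port name, and endpoint address are assumptions; a controller such as NVP/NSX™ would apply equivalent configuration programmatically through OVSDB and OpenFlow rather than by shelling out.

```python
# Illustrative sketch of step-208 style OVS configuration. Names/addresses are
# placeholders chosen for this example.

import subprocess

BRIDGE = "br-int"                  # hypothetical integration bridge
VM_PORT = "vm104-1-port"           # OVS port attached to the monitored VM's interface
TUNNEL_ENDPOINT = "192.0.2.20"     # IP address of the remote GRE tunnel endpoint


def ovs_vsctl(*args):
    subprocess.run(["ovs-vsctl", *args], check=True)


# 1. Create a GRE tunnel port whose remote endpoint is the capture side.
ovs_vsctl("add-port", BRIDGE, "gre-mon",
          "--", "set", "interface", "gre-mon", "type=gre",
          f"options:remote_ip={TUNNEL_ENDPOINT}")

# 2. Mirror all traffic of the VM's port into the GRE tunnel port.
ovs_vsctl("--", "--id=@src", "get", "port", VM_PORT,
          "--", "--id=@out", "get", "port", "gre-mon",
          "--", "--id=@m", "create", "mirror", "name=vm-mirror",
          "select-src-port=@src", "select-dst-port=@src", "output-port=@out",
          "--", "set", "bridge", BRIDGE, "mirrors=@m")
```

The mirror selects both directions of the VM port's traffic (select-src-port and select-dst-port) and outputs the copies to the GRE tunnel port, which corresponds to mirroring all traffic of the specified logical port as described above.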
For the purpose of this example, assume that the monitored VM 104-1 initiates an Internet Small Computer System Interface (iSCSI) request. It is to be appreciated that iSCSI traffic is used here for clarity of example, however, embodiments of the invention apply to any type or form of network traffic and/or protocol.
With reference now to step 210, in the virtual switch (106) within the hypervisor that hosts the monitored VM (104-1), the VM's iSCSI traffic is mirrored to a GRE port, where it is encapsulated and sent to the virtual switch 112 for the traffic capture appliance 116-2 through a GRE tunnel. The use of GRE is specific to NVP/OVS, however, other embodiments of the invention may use different encapsulation and tunneling techniques.
The virtual switch 106 within the hypervisor that hosts the monitored VM 104-1 continues to forward the iSCSI traffic towards the storage system 122 as usual, in step 212.
In step 214, the virtual switch 112 that receives the GRE-encapsulated traffic from the GRE tunnel removes the GRE encapsulation and dispatches the original iSCSI traffic to the logical port connected to the traffic capture appliance 116-2.
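The decapsulation in step 214 can be pictured with the short scapy sketch below, which reads GRE-encapsulated packets from a capture file, strips the outer IP/GRE headers, and recovers the original Ethernet frames. The file name is an assumption made only so the example is self-contained; in the system described here, this step is performed by virtual switch 112 itself rather than by a separate script.

```python
# Minimal decapsulation sketch: strip outer IP/GRE and recover the inner frame.

from scapy.all import rdpcap, Ether, GRE

for pkt in rdpcap("mirrored.pcap"):   # placeholder capture file
    if GRE not in pkt:
        continue
    # The GRE payload carries the monitored VM's original Ethernet frame
    # (Transparent Ethernet Bridging); re-parse it and dispatch as-is.
    inner = Ether(bytes(pkt[GRE].payload))
    print(inner.summary())   # stand-in for forwarding to the capture appliance
```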
The traffic capture appliance 116-2, in step 216, captures and parses the traffic, storing the results for further use based on the configuration and capability of the traffic capture appliance, e.g., the appliance may be configured with a capture filter that only captures traffic of a specified protocol (e.g., Telnet/FTP/POP), and may be able to capture traffic at a given maximum traffic rate.
In step 218, the captured traffic is sent to the process and analysis module 118 where it can be consumed by various known tools to perform further processing and analytics. For example, module 118 may employ RSA® Security Analytics software to perform security monitoring, incident investigation, malware analytics, and compliance reporting operations.
The improved network monitoring techniques described herein leverage network virtualization capability to remove limitations imposed by physical-attach requirements of existing network monitoring solutions. By virtualizing the network monitoring capability, the improved network monitoring techniques can better adapt to dynamic network topology and variable monitoring requirements. Furthermore, by leveraging traffic mirroring features, software-based network monitoring obtains improved configurability and flexibility. The network monitoring service according to embodiments described herein integrates closely with virtualization platforms to obtain full network coverage that enables capture of all traffic streams.
It is to be appreciated that the various elements and steps illustrated and described in FIGS. 1 and 2 can be implemented using one or more processing platforms, an example of which is the cloud infrastructure 300 shown in FIG. 3.
As shown, the cloud infrastructure 300 comprises VMs 302-1, 302-2, . . . , 302-M implemented using a hypervisor 304. The hypervisor 304 runs on physical infrastructure 305. The cloud infrastructure 300 further comprises sets of applications 310-1, 310-2, . . . , 310-M running on respective ones of the VMs 302-1, 302-2, . . . , 302-M (utilizing associated logical storage units or LUNs) under the control of the hypervisor 304.
Although only a single hypervisor 304 is shown in the example of FIG. 3, a given embodiment of the cloud infrastructure 300 may include multiple hypervisors, each running on its own physical infrastructure.
An example of a processing platform on which the cloud infrastructure 300 may be implemented is processing platform 400 shown in FIG. 4. The processing platform 400 in this embodiment comprises a plurality of processing devices, denoted 402-1, 402-2, and so on, which communicate with one another over a network 406.
The processing device 402-1 in the processing platform 400 comprises a processor 410 coupled to a memory 412. The processor 410 may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements.
The memory 412 (or other storage devices) having program code embodied therein is an example of what is more generally referred to herein as a processor-readable storage medium. Articles of manufacture comprising such processor-readable storage media are considered embodiments of the invention. A given such article of manufacture may comprise, for example, a storage device such as a storage disk, a storage array or an integrated circuit containing memory. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Furthermore, memory 412 may comprise electronic memory such as random access memory (RAM), read-only memory (ROM) or other types of memory, in any combination. One or more software programs (program code), when executed by a processing device such as the processing device 402-1, cause the device to perform functions associated with one or more of the elements of system 100. One skilled in the art would be readily able to implement such software given the teachings provided herein. Other examples of processor-readable storage media embodying embodiments of the invention may include, for example, optical or magnetic disks.
Also included in the processing device 402-1 is network interface circuitry 414, which is used to interface the processing device with the network 406 and other system components. Such circuitry may comprise conventional transceivers of a type well known in the art.
The other processing devices 402 of the processing platform 400 are assumed to be configured in a manner similar to that shown for processing device 402-1 in the figure.
The processing platform 400 shown in FIG. 4 is presented by way of example only, and the system 100 may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination.
Also, numerous other arrangements of servers, computers, storage devices or other components are possible for implementing the components shown and described in FIGS. 1 and 2.
It should again be emphasized that the above-described embodiments of the invention are presented for purposes of illustration only. Many variations may be made in the particular arrangements shown. For example, although described in the context of particular system and device configurations, the techniques are applicable to a wide variety of other types of information processing systems, computing systems, data storage systems, processing devices and distributed virtual infrastructure arrangements. For example, alternative embodiments may utilize protocols other than GRE, iSCSI, and the other protocols illustratively mentioned herein. In addition, any simplifying assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the invention. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.
“RSA Netwitness Overview, Network Security Monitoring Platform,” webpage including data sheet, http://www.emc.com/security/rsa-netwitness/rsa-netwitness-decoder.htm, Jun. 2014, 6 pages.
P. Mell et al., “The NIST Definition of Cloud Computing,” U.S. Department of Commerce, Computer Security Division, National Institute of Standards and Technology, Special Publication 800-145, Sep. 2011, 7 pages.
D. Farinacci et al., “Generic Routing Encapsulation (GRE),” Network Working Group, Request for Comments: 2784, Mar. 2000, 9 pages.
G. Dommety, “Key and Sequence Number Extensions to GRE,” Network Working Group, Request for Comments: 2890, Sep. 2000, 7 pages.