The present disclosure relates generally to communication networks, and more particularly, to data center networks.
An ever increasing demand for cloud-based and virtualized services is changing existing network services and storage environments. For example, existing stand-alone storage environments are rapidly being replaced with large storage environments such as data centers, which provide remote access to computing resources through complex and dynamic networks of devices such as servers, routers, switches, hosts, load-balancers, and the like. However, due to the dynamic nature and complexity of these networks of devices, data centers present new challenges regarding performance, latency, reliability, scalability, endpoint migration, traffic isolation, and the like.
The embodiments herein may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identical or functionally similar elements. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
According to one or more embodiments of this disclosure, a software defined networking controller in a data center network establishes a translation table for in-band traffic, where the translation table resolves ambiguous network addresses based on one or more of a virtual network identifier (VNID), a routable tenant address, or a unique loopback address. The network controller device receives packets originating from applications and/or endpoints operating in a network segment associated with a VNID, and translates, according to the translation table (and using the VNID), unique loopback addresses and/or routable tenant addresses associated with the packets into routable tenant addresses and/or unique loopback addresses, respectively.
According to another embodiment of this disclosure, the software defined networking controller device establishes a virtual routing and forwarding (VRF) device for each network segment of a plurality of network segments and, for each VRF device, instantiates at least one bound interface for routing packets. The network controller device further maps, in a mapping table, a virtual network identifier (VNID) (associated with a first network segment) to a first bound interface of one of the VRF devices, and links at least one application executing on the network controller device with one of the VRF devices. The network controller device also writes a packet from the at least one application to the one of the VRF devices to route the packet over the first bound interface into the first network segment associated with the VNID mapped to the first bound interface.
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
As used herein, the terms “network segment”, “virtual network segment”, and “tenant segment”, including combinations thereof, generally refer to an overlay network within a data center network.
A communication network is a geographically distributed collection of nodes interconnected by communication links and segments for transporting data between end nodes, such as personal computers and workstations, or other devices, such as sensors, etc. Many types of networks are available, with the types ranging from local area networks (LANs) and wide area networks (WANs) to overlay and software-defined networks, such as virtual extensible local area networks (VXLANs).
LANs typically connect the nodes over dedicated private communications links located in the same general physical location, such as a building or campus. WANs, on the other hand, typically connect geographically dispersed nodes over long-distance communications links, such as common carrier telephone lines, optical lightpaths, synchronous optical networks (SONET), or synchronous digital hierarchy (SDH) links. Notably, LANs and WANs can include layer 2 (L2) and/or layer 3 (L3) networks and devices.
The Internet is an example of a WAN that connects disparate networks throughout the world, providing global communication between nodes on various networks. The nodes typically communicate over the network by exchanging discrete frames or packets of data according to predefined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). In this context, a protocol can refer to a set of rules defining how the nodes interact with each other. Communication networks may be further interconnected by an intermediate network node, such as a router, to extend the effective “size” of each network.
Overlay networks generally allow virtual networks to be created and layered over a physical network infrastructure. Overlay network protocols, such as virtual extensible LAN (VXLAN), network virtualization using generic routing encapsulation (NVGRE), network virtualization overlays (NVO3), stateless transport tunneling (STT), and the like, provide a traffic encapsulation scheme which allows network traffic to be carried across L2 and L3 networks over a logical tunnel. Such logical tunnels can originate and terminate through one or more virtual tunnel endpoints (VTEPs).
Moreover, overlay networks can include virtual segments or network segments, such as VXLAN segments in a VXLAN overlay network, which can include virtual L2 and/or L3 overlay networks over which virtual machines (VMs) communicate. The virtual segments can be identified through a virtual network identifier (VNID), such as a VXLAN network identifier, which can specifically identify an associated virtual network segment or domain.
In this fashion, overlay network protocols provide a traffic encapsulation scheme which allows network traffic to be carried across L2 and L3 networks over a logical tunnel. Such logical tunnels can originate and terminate through virtual tunnel end points (VTEPs). Importantly, in a data center network context, such overlay network protocols provide traffic isolation between network segments associated with different tenants.
Operatively, nodes/devices 200 communicate over and are interconnected by one or more communication links 106. Communication links 106 may be wired links or shared media (e.g., wireless links, PLC links, etc.) where certain nodes/devices 200 may be in communication with other nodes/devices based on, for example, configuration parameters, distance, signal strength, network/node topology, current operational status, location, network policies, and the like.
Data packets 150, which represent traffic and/or messages, may be exchanged among the nodes/devices 200 in data center network 105 using predefined network communication protocols such as certain known wired protocols (e.g., Interior Gateway Protocol (IGP), Exterior Border Gateway Protocol (E-BGP), TCP/IP, etc.), wireless protocols (e.g., IEEE Std. 802.15.4, WiFi, Bluetooth®, etc.), PLC protocols, VXLAN protocols, or other shared-media protocols where appropriate. In this context, a protocol consists of a set of rules defining how the nodes interact with each other.
Those skilled in the art will understand that any number of nodes, devices, communication links, and the like may be used, and that the view shown herein is for simplicity. Also, those skilled in the art will further understand that while data center network 105 is shown in a particular orientation, such orientation is merely an example for purposes of illustration, not limitation.
Network interface(s) 210 contain the mechanical, electrical, and signaling circuitry for communicating data over links 106 coupled to one or more nodes/devices shown in data center network 105. Network interfaces 210 may be configured to transmit and/or receive data using a variety of different communication protocols, including, inter alia, TCP/IP, UDP, ATM, synchronous optical networks (SONET), VXLAN, wireless protocols, Frame Relay, Ethernet, Fiber Distributed Data Interface (FDDI), etc. Notably, a physical network interface 210 may also be used to implement one or more virtual network interfaces, such as for Virtual Private Network (VPN) access, known to those skilled in the art.
Memory 240 includes a plurality of storage locations that are addressable by processor(s) 220 and network interfaces 210 for storing software programs and data structures associated with the embodiments described herein. Processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures 245. An operating system 242 (e.g., the Internetworking Operating System, or IOS®, of Cisco Systems, Inc.), portions of which are typically resident in memory 240 and executed by the processor(s), functionally organizes the node by, inter alia, invoking network operations in support of software processes and/or services executing on the device. These software processes and/or services may comprise an in-band communication process/service 244, as described herein.
In addition, in-band communication process (services) 244 may include computer executable instructions executed by the processor 220 to perform functions provided by one or more routing protocols, such as various routing protocols as will be understood by those skilled in the art. These functions may, on capable devices, be configured to manage a routing/forwarding table (a data structure 245) containing, e.g., data used to make routing/forwarding decisions.
It will be apparent to those skilled in the art that other processor and memory types, including various computer-readable media, may be used to store and execute program instructions pertaining to the techniques described herein. Also, while the description illustrates various processes, it is expressly contemplated that various processes may be embodied as modules configured to operate in accordance with the techniques herein (e.g., according to the functionality of a similar process). Further, while processes may be shown and/or described separately, those skilled in the art will appreciate that processes may be routines or modules within other processes.
Illustratively, the techniques described herein may be performed by hardware, software, and/or firmware, such as in accordance with in-band communication process 244, which may contain computer executable instructions executed by the processor 220 (or independent processor of network interfaces 210) to perform functions described herein.
Spine switches 1-N can include, for example, layer 3 (L3) switches, and/or they may also perform L2 functionalities (e.g., supporting certain Ethernet speeds, Ethernet Interfaces, etc.). Generally, spine switches 1-N are configured to look up destination addresses for a received packet in their respective forwarding tables and forward the packet accordingly. However, in some embodiments, one or more of spine switches 1-N may be configured to host a proxy function—here, spine switch 1 operates as a proxy switch. In operation, spine switch 1 matches a received packet to a destination address according to its mapping or routing table on behalf of leaf switches that do not have such mapping. In this fashion, leaf switches forward packets with unknown destination addresses to spine switch 1 for resolution.
For example, spine switch 1 can execute proxy functions to parse an encapsulated packet sent by one or more leaf switches, identify a destination address for the encapsulated packet, and route or forward the encapsulated packet accordingly. In some embodiments, spine switch 1 can perform a local mapping lookup in a database (e.g., a routing table) to determine a correct locator address for the packet and forward the packet to the locator address without changing certain fields in the header of the packet.
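As a rough illustration only, the proxy lookup just described can be modeled as a map from an inner destination (and its segment) to a locator address; the following Python sketch uses assumed table contents and locator addresses, not the switch's actual data structures:

```python
from typing import Optional

# Hypothetical model of the proxy mapping lookup; entries and locator
# addresses are illustrative assumptions for this sketch.
PROXY_MAPPING = {
    # (VNID, inner destination address) -> locator address of the hosting leaf/VTEP
    (10001, "1.1.1.10"): "10.0.0.3",
    (10002, "1.1.1.10"): "10.0.0.4",
}

def resolve_locator(vnid: int, inner_dst: str) -> Optional[str]:
    """Resolve the locator for an encapsulated packet with an unknown
    destination; the proxy then forwards the packet there without
    changing the inner header fields."""
    return PROXY_MAPPING.get((vnid, inner_dst))

print(resolve_locator(10001, "1.1.1.10"))  # -> "10.0.0.3"
```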
Leaf switches 1-N are interconnected with one or more spine switches to form, in part, fabric 305. Leaf switches 1-N can include access ports (or non-fabric ports) and fabric ports. Fabric ports typically provide uplinks to one or more of the spine switches, while access ports provide connectivity for devices such as device 310 (e.g., a host, a virtual machine (VM), a hypervisor, etc.), endpoint(s) 315, as well as one or more “external networks” (labeled as shown). Leaf switches 1-N may reside at an edge of fabric 305, and can thus represent a physical network edge. In some cases, leaf switches 1-N can include top-of-rack (ToR) switches configured according to a ToR architecture. In other cases, leaf switches 1-N can be virtual switches embedded in one or more servers, or even aggregation switches in any particular topology—e.g., end-of-row (EoR) or middle-of-row (MoR) topologies.
As shown, leaf switches 1-N also connect with devices and/or modules, such as endpoint(s) 315, which represent physical or virtual network devices (e.g., servers, routers, virtual machines (VMs), etc.), external networks, and/or other computing resources. Operatively, network connectivity for fabric 305 flows through leaf switches 1-N, where the leaf switches provide access to fabric 305 as well as interconnectivity between endpoints 315, external networks, etc. Notably, leaf switches 1-N are responsible for applying network policies, routing and/or bridging packets in fabric 305. In some cases, a leaf switch can perform additional functions, including, for example, implementing a mapping cache, sending packets to proxy spines (e.g., when there is a miss in the cache), encapsulating packets, enforcing ingress or egress policies, and the like. In addition, one or more leaf switches may perform virtual switching, including tunneling (e.g., VPN tunneling, etc.), which supports network connectivity through fabric 305, as well as supports communications in an overlay network.
An overlay network typically refers to a network of physical or virtual devices (e.g., servers, hypervisors, applications, endpoints, virtual workloads, etc.), which operate in isolated network segments, important for traffic isolation in various network environments (e.g., multi-tenant, etc.). Operatively, overlay networks isolate traffic amongst tenants on respective network segments within physical and/or virtualized data centers. For example, in a VXLAN overlay network, native frames are encapsulated with an outer IP overlay encapsulation, along with a VXLAN header and a UDP header. Generally, each network segment or VXLAN segment is addressed according to a 24-bit segment ID (e.g., a virtual network identifier or VXLAN network identifier (VNID)), which supports up to 16M unique, co-existing VXLAN network segments in a single administrative domain. The VNID identifies the scope of the inner MAC frame originated by an individual VM; thus, overlapping MAC addresses may exist across segments without resulting in traffic cross-over. The VNID is included in an outer header that encapsulates the inner MAC frame originated by a VM. Due to this encapsulation, VXLAN provides a traffic encapsulation scheme that allows network traffic to be carried across L2 and L3 networks over a logical tunnel, where such logical tunnels can originate and terminate through one or more virtual tunnel endpoints (VTEPs), hosted by a physical switch or physical server and/or implemented in software or other hardware.
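For concreteness, the 8-byte VXLAN header carrying the 24-bit VNID can be packed and parsed as in the minimal Python sketch below, which follows the standard VXLAN header layout defined in RFC 7348:

```python
import struct

I_FLAG = 0x08  # "VNI valid" flag bit in the VXLAN header (RFC 7348)

def pack_vxlan_header(vnid: int) -> bytes:
    """Build the 8-byte VXLAN header: 8 flag bits and 24 reserved bits,
    then the 24-bit VNID and 8 more reserved bits."""
    return struct.pack("!II", I_FLAG << 24, vnid << 8)

def unpack_vnid(header: bytes) -> int:
    """Recover the 24-bit VNID from the second 32-bit word."""
    return struct.unpack("!I", header[4:8])[0] >> 8

assert unpack_vnid(pack_vxlan_header(10001)) == 10001
```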
As mentioned, leaf switches 1-N support network connectivity through fabric 305 and communications in overlay networks, including such isolated network segments. Further, endpoints 315 may be connected to such overlay networks, and can host virtual workloads, clusters, and/or applications/services that communicate in one or more overlay networks through fabric 305.
Notably, although fabric 305 is illustrated and described as an example leaf-spine architecture employing multiple switches, one of ordinary skill in the art will readily recognize that the subject technology can employ any number of devices (e.g., servers, routers, etc.), and further, the techniques disclosed herein can be implemented in any network fabric. Indeed, other architectures, designs, infrastructures, and variations are contemplated herein. Further, those skilled in the art will appreciate that the devices shown in fabric 305 are for purposes of illustration, not limitation. Any number of other devices (e.g., route reflectors, etc.) can be included (or excluded) in fabric 305, as appreciated by those skilled in the art.
As shown, host devices 410-412 host respective virtual tunnel endpoints (VTEPs) 420, 421, and 422 that communicate in overlay network 402, which includes one or more leaf switches 1-N of fabric 305, discussed above.
Servers 430-433 and VMs 440, 441 are connected to a respective VTEP and operate in a network segment identified by a corresponding VNID. Notably, each VTEP can include one or more VNIDs—e.g., VTEPs 420 and 422 include VNID 1 and VNID 2, while VTEP 421 includes VNID 1. As discussed above, traffic in overlay network 402 is logically isolated according to network segments identified by specific VNIDs. For example, network devices residing in a network segment identified by VNID 1 cannot be accessed by network devices residing in a network segment identified by VNID 2. More specifically, as shown, server 430 can communicate with server 432 and VM 440 because these devices each reside in the same network segment identified by VNID 1. Similarly, server 431 can communicate with VM 441 because these devices reside in the same network segment identified by VNID 2.
VTEPs 420-422 operatively encapsulate/decapsulate packets for respective network segments identified by respective VNID(s) and exchange such packets in overlay network 402. As an example, server 430 sends a packet to VTEP 420, which packet is intended for VM 440, hosted by VTEP 422. VTEP 420 determines the intended destination for the packet (VM 440), encapsulates the packet according to its routing table (e.g., which includes endpoint-to-switch mappings or bindings for VTEP 422, which hosts VM 440), and forwards the encapsulated packet over overlay network 402 to VTEP 422. VTEP 422 decapsulates the packet, and routes the packet to its intended destination—here, VM 440.
In some embodiments, however, the routing table may not include information associated with an intended destination. Accordingly, in such instances, VTEP 420 may be configured to broadcast and/or multicast the packet over overlay network 402 to ensure delivery to VTEP 422 (and thus, to VM 440). In addition, in preferred embodiments, the routing table is continuously and dynamically modified (e.g., removing stale entries, adding new entries, etc.) in order to maintain up-to-date entries in the routing table.
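A simplified model of this VTEP forwarding decision, including the broadcast/multicast fallback on a routing-table miss, might look as follows; the binding table, names, and helper behavior are assumptions for the sketch only:

```python
# Cached endpoint-to-VTEP bindings: (endpoint, VNID) -> hosting VTEP
# (illustrative assumption).
BINDINGS = {("vm440", 1): "vtep-422"}

def encapsulate(payload: bytes, vnid: int) -> bytes:
    # Stand-in for full outer IP/UDP/VXLAN encapsulation.
    return vnid.to_bytes(3, "big") + payload

def unicast(frame: bytes, vtep: str) -> None:
    print(f"unicast {len(frame)} bytes to {vtep}")

def flood(frame: bytes, vnid: int) -> None:
    print(f"broadcast/multicast {len(frame)} bytes to all VTEPs in VNID {vnid}")

def vtep_forward(payload: bytes, dst_endpoint: str, vnid: int) -> None:
    vtep = BINDINGS.get((dst_endpoint, vnid))
    if vtep is not None:
        unicast(encapsulate(payload, vnid), vtep)  # known binding
    else:
        flood(encapsulate(payload, vnid), vnid)    # table miss: flood the segment

vtep_forward(b"frame", "vm440", 1)  # -> unicast to vtep-422
vtep_forward(b"frame", "vm441", 2)  # -> flood within VNID 2
```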
Notably, as is appreciated by those skilled in the art, the views shown herein are provided for purposes of illustration and discussion, not limitation. It is further appreciated that the host devices, servers, and VMs shown are likewise provided as examples.
As discussed above, data centers include a dynamic and complex network of interconnected devices, which present new challenges regarding performance, latency, reliability, scalability, endpoint migration, traffic isolation, and the like. Increasingly, data centers employ overlay networks to provide proper traffic isolation in multi-tenant environments. Typically, as mentioned, in such multi-tenant environments, traffic (e.g., data packets, etc.) is encapsulated and isolated for a particular network segment using an overlay protocol. Operatively, such an overlay protocol often encapsulates a packet with a network identifier (e.g., a virtual network identifier (VNID), etc.) to communicate the packet in a specific network segment. Challenges arise in data center networks and overlay networks due to the complexity of interconnected devices as well as the dynamic nature of resource migration, on-demand scalability, and the like. Accordingly, the techniques described herein particularly provide improvements for managing in-band communications in data center networks (including overlay networks).
Specifically, the techniques described herein dynamically track endpoint migration, preserve traffic isolation, and route and/or forward communications amongst network devices/modules (e.g., applications, network controller devices, virtual machines (VMs), and the like). In particular, these techniques are preferably employed by one or more network controller devices, which connect to a network fabric in a data center network. These techniques further offload tasks such as locating endpoints for respective leaf switches (and/or network controller devices) to one or more proxy devices (or devices with proxy functionality). For example, according to some embodiments discussed in greater detail below, a network controller performs address translation to identify routable addresses, encapsulates packets according to VXLAN encapsulation protocols, and forwards the encapsulated packets to a well-known proxy device (e.g., a proxy spine switch) for address resolution. The proxy device receives the encapsulated packets, determines the appropriate routable addresses, and forwards the encapsulated packets to an endpoint in an appropriate network segment based on a VNID. In operation, the proxy devices maintain, or otherwise update, respective routing tables with real-time locations (e.g., addresses) for endpoints in the data center network.
Data center network 500 comprises a network fabric 505 that employs an overlay protocol such as a VXLAN overlay protocol. As discussed above, a VXLAN overlay protocol encapsulates/decapsulates and routes packets according to a VXLAN network identifier (VNID) carried in a header field. The VNID identifies a specific virtual network or network segment associated with one or more tenants. In addition, data center network 500 also includes one or more software defined networking controller devices, also referred to as application policy infrastructure controllers (APICs)—here, APIC 1-3—which provide a single point for automation and management.
Fabric 505 includes spine switches 1-N and leaf switches 1-N. As shown, spine switch 1 is designated as a proxy device or a VXLAN proxy switch (in the VXLAN overlay protocol). Operationally, unknown VXLAN traffic in fabric 505 is forwarded to proxy switches for address resolution and further routing in the data center network and/or within respective overlay networks.
Leaf switches 1 and 2 are further connected to one or more network controller devices APICs 1, 2, and 3, and leaf switches 3 and 4 are connected to host devices 1 and 2. Host devices 1 and 2 can include, for example, physical or virtual devices such as servers, switches, routers, virtual machines, and the like. Here, hosts 1 and 2 host or execute two service VMs 511 and 512. Each VM 511 and VM 512 serves different tenants associated with respective tenant segments in an overlay network.
Overlay networks, as discussed above, are often employed in data center networks to isolate traffic amongst tenants. Typically, a network segment or tenant segment is associated with a tenant using a VNID. In data center network 500, VM 511 communicates in a network segment in common with application (“app”) 501, which executes on one or more of APICs 1, 2, and/or 3, and VM 512 communicates in a network segment in common with application (“app”) 502, which also executes on one or more APICs 1, 2, and/or 3. Due to network segment isolation, VM 511 and VM 512 may be assigned the same IP address—here, 1.1.1.10.
In operation, app 501 and app 502 send in-band communications (e.g., data packets) over network fabric 505 to respective service VMs 511, 512, and likewise, VMs 511, 512 send in-band communications over network fabric 505 to app 501, 502, respectively. Typically, communications between respective applications and VMs are maintained (e.g., persistent) even if the VM migrates amongst hosts in order to maintain proper routing information.
Regarding traffic isolation for in-band communications, the techniques herein (such as the in-band communication process/services 244) employ a VNID based address translation, where overlapping or shared tenant addresses—here, 1.1.1.10 for VMs 511, 512—are translated into a unique (e.g., different) loopback address according to translation tables indexed or keyed to one or more VNIDs.
For example, a network controller device such as APIC 1 establishes a translation table for in-band traffic in data center network 500 to translate potentially ambiguous addresses. Here, APIC 1 establishes a translation table 520 that includes entries indexed according to a unique address (e.g., loopback address), a VNID for a tenant segment, and/or a routable tenant address. Notably, the routable tenant address in translation table 520 is a public address while the unique address is a private address within data center network 500.
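As an illustration only, translation table 520 can be modeled as a pair of plain Python dictionaries keyed as described above. The entries below mirror the addresses used in this example; VM 512's loopback address is not given above, so 192.168.1.4 is a hypothetical placeholder:

```python
# Illustrative model of translation table 520.
TO_LOOPBACK = {
    # (VNID, routable tenant address) -> unique loopback address
    (10001, "1.1.1.1"):  "192.168.1.1",   # app 501
    (10001, "1.1.1.10"): "192.168.1.2",   # VM 511
    (10002, "1.1.1.1"):  "192.168.1.3",   # app 502
    (10002, "1.1.1.10"): "192.168.1.4",   # VM 512 (hypothetical placeholder)
}
# Reverse index: unique loopback address -> (VNID, routable tenant address).
TO_TENANT = {loopback: key for key, loopback in TO_LOOPBACK.items()}
```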
As discussed, although the same or common routable addresses (1.1.1.10 and 1.1.1.1) may be used to identify more than one app, VM, or other computing resource in data center network 500, an encapsulation scheme (e.g., VXLAN) for a packet carrying the common routable address will also include a VNID in a header field. The network controller devices use the common routable address along with the VNID to translate the common routable address into a corresponding unique address. Alternatively (or in addition), the network controller devices may similarly translate a unique address into a common routable address and a VNID so that network devices within fabric 505 can properly route/forward the packet to an appropriate endpoint.
For example, a packet from VM 511 (1.1.1.10) is encapsulated with VNID 10001, while a packet from VM 512 (1.1.1.10) is encapsulated with VNID 10002. Proxy spine switch 1 receives the packets from VM 511 and/or VM 512 and forwards them to one of the APICs shown for further translation (APIC 1, for example). APIC 1 receives and decapsulates the packets from proxy spine switch 1 to determine respective VNIDs and routable tenant addresses. APIC 1 further translates the routable tenant addresses, based on the VNID, into unique addresses (e.g., loopback addresses) for respective applications, and forwards the message to the appropriate application(s). Here, for example, an encapsulated packet originating from VM 511 will have a VXLAN header indicating VNID 10001, an inner source address field of 1.1.1.10, and an inner destination address field of 1.1.1.1, while an encapsulated packet originating from VM 512 will have a VXLAN header indicating VNID 10002, an inner source address field of 1.1.1.10, and an inner destination address field of 1.1.1.1. The APIC receiving such packets will translate the inner source/destination address fields (e.g., which include routable addresses) into unique loopback addresses based on translation table 520. Specifically, the inner source address field of 1.1.1.10 in VNID 10001 translates into 192.168.1.2 (corresponding to VM 511), and the inner destination address field of 1.1.1.1 in VNID 10001 translates into 192.168.1.1 (corresponding to app 501). In this fashion, the network controller devices (APICs) can translate potentially ambiguous routable tenant addresses (e.g., common or shared by more than one network device) into unique addresses/loopback addresses.
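Continuing the table sketch above, this inbound translation reduces to a keyed lookup; the function below is a hypothetical illustration of the step, not the APIC's actual code:

```python
def translate_inbound(vnid, inner_src, inner_dst):
    """Translate decapsulated inner tenant addresses into unique
    loopback addresses, keyed by the packet's VNID."""
    return TO_LOOPBACK[(vnid, inner_src)], TO_LOOPBACK[(vnid, inner_dst)]

# Packet from VM 511 (VNID 10001): 1.1.1.10 -> 1.1.1.1 becomes
# 192.168.1.2 -> 192.168.1.1, i.e., VM 511 -> app 501.
assert translate_inbound(10001, "1.1.1.10", "1.1.1.1") == ("192.168.1.2", "192.168.1.1")
```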
Similarly, applications—here, apps 501 and 502—can likewise have a shared or common routable tenant address when operating in different network segments. For example, the same address for apps 501 and 502 (1.1.1.1) is translated into different loopback addresses (192.168.1.1 and 192.168.1.3) for different tenant segments based on the VNID associated with a particular network segment. Here, app 501 and app 502 are bound to IP addresses 192.168.1.1 and 192.168.1.3, respectively, and communicate with VMs 511 and 512, respectively. App 501 sends a packet intended for VM 511 to one of the network controller devices (e.g., APIC 1) for address translation. The packet from app 501 includes 192.168.1.1 and 192.168.1.2 as an inner source and a destination IP address, respectively. APIC 1 receives the packet from app 501 and translates the inner source and destination IP addresses into routable tenant addresses 1.1.1.1 and 1.1.1.10, respectively. APIC 1 further encapsulates the packet from app 501 with VNID 10001 in a VXLAN header based on translation table 520.
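The reverse direction likewise reduces to a lookup, this time in the reverse index; again a hypothetical sketch continuing the tables above:

```python
def translate_outbound(src_loopback, dst_loopback):
    """Translate unique loopback addresses back into routable tenant
    addresses plus the VNID used to encapsulate toward the fabric."""
    vnid, src_tenant = TO_TENANT[src_loopback]
    _, dst_tenant = TO_TENANT[dst_loopback]
    return vnid, src_tenant, dst_tenant

# app 501 -> VM 511: inner addresses become 1.1.1.1 -> 1.1.1.10,
# and the packet is encapsulated with VNID 10001.
assert translate_outbound("192.168.1.1", "192.168.1.2") == (10001, "1.1.1.1", "1.1.1.10")
```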
The in-band communication techniques discussed above consolidate address translation in one or more network controller devices while respecting traffic isolation between different network segments. Notably, the translation tables used by the network controller devices may be local and/or distributed across multiple network devices. Further, as discussed, the translation tables include entries keyed or indexed according to VNIDs, routable tenant addresses, and/or unique (loopback) addresses. Based on a combination of a VNID, a routable tenant address, and/or a unique address, the network controller device can translate between routable tenant addresses and unique addresses and/or identify an appropriate VNID for a packet (which VNID is used when encapsulating the packet for forwarding to the proxy device(s)).
Procedure 600 continues to step 615, where operatively, the network controller device receives a packet originating from an application associated with a first unique loopback address. Notably, the packet is also intended for an endpoint in a first network segment associated with a first VNID, and the endpoint is associated with a second unique loopback address. The network controller, in steps 620 and 625, further translates, using the translation table (e.g., translation table 520), the first unique loopback address into a first routable tenant address and a first VNID, and the second unique loopback address into a second routable tenant address and the first VNID, based on the first unique loopback address and the second unique loopback address, respectively. Once translated, the network controller encapsulates (e.g., VXLAN encapsulation), in step 630, the packet as an encapsulated packet having a header field including the first VNID, an outer address field including an address for a proxy device (to forward to a proxy device in the network fabric), an inner source address field including the first routable tenant address, and an inner destination field including the second routable tenant address.
The network controller device further forwards the encapsulated packet, in step 635, to the proxy device to route the encapsulated packet in the data center network to the endpoint in the first network segment associated with the first VNID. The proxy device, as discussed above, receives and decapsulates the packet to determine appropriate routing/forwarding and sends the packet to the endpoint. Notably, in some embodiments, the proxy device tunnels the packet to the endpoint (e.g., in the overlay network/first network segment).
In addition, the proxy device may also update its routing table, in step 640, based on migration of the endpoint from the first network segment to a second network segment, or other such dynamic movement of computing resources. Procedure 600 subsequently ends at step 645, but may continue on to step 615 where the network controller device receives packets from the application (or other applications).
Procedure 700 begins at step 705 and continues on to step 710 where, similar to procedure 600, the network controller device establishes a translation table (e.g., local, distributed, etc.) for resolving network addresses for in-band traffic in a data center network based on one or more of a virtual network identifier (VNID), a routable tenant address, or a unique loopback address. Notably, the translation table may be the same as the one provided in procedure 600.
Procedure 700 continues to step 715 where the network controller device decapsulates a second packet originating from the endpoint in the first network segment to determine the first VNID, the second routable tenant address, and the first routable tenant address.
The network controller device further translates, in steps 720 and 725, the first routable tenant address and the second routable tenant address (e.g., using translation table 520) into the first unique loopback address and the second unique loopback address, respectively, based at least on the first VNID. The network controller device further forwards, in step 730, the second packet to the appropriate application associated with the first unique loopback address. Procedure 700 subsequently ends at step 735, but may continue on to step 715 where the network controller device decapsulates packets from endpoints in corresponding network segments.
It should be noted that certain steps within procedures 600-700 may be optional, and further, the steps shown are merely examples for illustration; certain other steps may be included or excluded as desired, and the particular order of the steps is merely illustrative.
In addition to the embodiments described above, additional embodiments of this disclosure provide a scalable VNID mapping table proportional in size to the number of VNIDs for network segments and/or VRF instances in an overlay network. For example, according to one of these embodiments, the in-band communication techniques leverage Linux VRF instances and a universal TUN/TAP driver and, in particular, tie the fabric VRF instances and the Linux VRF instances together by mapping a TAP interface with a VNID for a network segment.
These embodiments are discussed in detail below.
In operation, a network controller (not shown) (e.g., one of APICs 1-3) creates a Linux VRF instance for each fabric VRF instance. As discussed below, a fabric VRF instance refers to a context for a network segment associated with a VNID (e.g., a tenant network segment). As shown, Linux VRF instances incorporate the fabric VRF context in a VRF device—e.g., VRF devices 801, 802, and 812. The network controller further creates or instantiates an enslaved interface for each VRF instance/device—e.g., interface 1, interface 2, interface 3—which are illustrated as TAP/TUN interfaces. Applications (apps 501, 502, 810, and 811) are tied, respectively, to interfaces on corresponding VRF devices. Notably, apps 810 and 811 operate in the same network segment (e.g., VNID 10003) and are shown as tied or bound to a shared interface 3 on VRF device 812.
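For illustration, this VRF-device and enslaved-interface setup can be approximated with standard iproute2 commands; the sketch below is a hypothetical Python wrapper, where the device names and routing table IDs are assumptions rather than the controller's actual configuration:

```python
import subprocess

def create_vrf_with_tap(vrf: str, table_id: int, tap: str) -> None:
    """Create a Linux VRF device and enslave a TAP interface to it
    using standard iproute2 commands (requires root privileges)."""
    for cmd in (
        f"ip link add {vrf} type vrf table {table_id}",  # the Linux VRF device
        f"ip link set {vrf} up",
        f"ip tuntap add dev {tap} mode tap",             # the TAP interface
        f"ip link set {tap} master {vrf}",               # enslave it to the VRF
        f"ip link set {tap} up",
    ):
        subprocess.run(cmd.split(), check=True)

# e.g., one VRF/interface pair per fabric VRF (names and IDs are hypothetical):
# create_vrf_with_tap("vrf801", 801, "tap1")
```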
In general, in-band module 815 executes or runs on the network controller and constructs a mapping table 820 to map interfaces with VNIDs (associated with respective network segments). In-band module 815 maintains mapping table 820 and performs VXLAN tunneling for traffic exchanged between applications and computing resources (here, VMs). In-band module 815 updates entries in mapping table 820 when interfaces are created (and/or deleted).
As shown, mapping table 820 maps an aggregation of interfaces to respective VNIDs used by the fabric VRF. Mapping table 820 maps each interface with a corresponding VNID associated with a network segment. For example, interface 1 is mapped to VNID 10001 used by the fabric VRF or network segment common with VM 511. Accordingly, in-band module 815 directs traffic destined for app 501 (which operates in the network segment associated with VNID 10001) to interface 1. In greater detail, consider a packet (e.g., a VXLAN encapsulated packet) sent from VM 511 to app 501, coming in from a channel bonding interface 805 (e.g., which represents a channel bonding interface and/or an aggregation of interfaces). In-band module 815 listens, monitors, or otherwise receives the encapsulated packet from bond interface 805 on a UDP socket bound to a reserved port number. In-band module 815 further performs decapsulation (e.g., VXLAN decapsulation) and determines the VNID—here, VNID 10001—from the encapsulated packet header. In-band module 815 identifies an associated interface (e.g., interface 1) and corresponding enslaved VRF device (e.g., VRF device 801) based on a mapping table lookup. In-band module 815 then writes the decapsulated packet into interface 1, which is received by app 501 tied to VRF device 801. Conversely, when app 501 sends a packet to VM 511, the packet is forwarded to interface 1 because interface 1 is the next hop of a default route in VRF device 801. The packet is picked up by in-band module 815, which listens to traffic on interface 1. In-band module 815 performs VXLAN encapsulation, using VNID 10001 in the VXLAN header; the VNID for the received packet is determined by a lookup in mapping table 820, using interface 1 as the key. In-band module 815 further sends the encapsulated packet through the UDP socket to bond interface 805.
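The forwarding loop just described can be sketched as follows. This is a minimal, hypothetical Python illustration, not the in-band module's actual implementation: it assumes the standard VXLAN UDP port (4789) as the reserved port, the standard TUNSETIFF ioctl of the universal TUN/TAP driver, illustrative interface/VNID entries, and a placeholder fabric next-hop address:

```python
import fcntl
import os
import select
import socket
import struct

VXLAN_PORT = 4789  # standard VXLAN UDP port, assumed as the reserved port
TUNSETIFF, IFF_TAP, IFF_NO_PI = 0x400454CA, 0x0002, 0x1000

def open_tap(name: str) -> int:
    """Attach to a TAP interface via the universal TUN/TAP driver."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, struct.pack("16sH", name.encode(), IFF_TAP | IFF_NO_PI))
    return fd

# Mapping table 820: one interface per VNID (entries are illustrative).
tap_by_vnid = {10001: open_tap("tap1"), 10002: open_tap("tap2")}
vnid_by_tap = {fd: vnid for vnid, fd in tap_by_vnid.items()}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", VXLAN_PORT))  # listen on the reserved UDP port

while True:
    ready, _, _ = select.select([sock, *tap_by_vnid.values()], [], [])
    for r in ready:
        if r is sock:
            # Fabric -> application: decapsulate, look up the TAP interface
            # by VNID, and write the inner frame into the enslaved interface.
            pkt, _peer = sock.recvfrom(65535)
            vnid = struct.unpack("!I", pkt[4:8])[0] >> 8  # 24-bit VNID
            os.write(tap_by_vnid[vnid], pkt[8:])
        else:
            # Application -> fabric: read the inner frame, encapsulate with
            # the VNID mapped to this interface, and send it toward the bond
            # interface ("198.51.100.1" is a placeholder next hop).
            frame = os.read(r, 65535)
            header = struct.pack("!II", 0x08 << 24, vnid_by_tap[r] << 8)
            sock.sendto(header + frame, ("198.51.100.1", VXLAN_PORT))
```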
As shown in mapping table 820, each VNID is mapped to a single interface. In this fashion, mapping table 820 remains proportional to the number of VRF instances (e.g., network segments) mapped to respective interfaces. Given a large number of fabric VRF instances, mapping table 820 scales efficiently because its size is unrelated to the number of service VMs in a fabric VRF.
Procedure 900 begins at step 905 and continues on to step 910 where, as discussed above, the network controller device establishes a virtual routing and forwarding (VRF) instance for each network segment of a plurality of network segments. The network controller, at step 915, further instantiates, for each VRF instance, at least one bound interface on a VRF device. As discussed, the bound interface can include, for example, a TAP/TUN interface (e.g., interfaces 1-3, discussed above). At step 920, the network controller device further maps, in a mapping table, a VNID associated with a network segment to the bound interface.
Further, in step 925, the network controller device links or associates one or more applications with the VRF device, and thus links the one or more applications to the respective interface(s) on the VRF device. Procedure 900 continues to step 930 where the network controller and/or the application executing/running on the network controller sends or writes a packet to the linked VRF device, which is a default next hop, in order to route the packet in a network segment over an interface of the VRF device. Typically, an in-band module executing on the network controller device listens to the TAP interface, receives the packet, determines the VNID mapped to the interface from a mapping table (e.g., mapping table 820), and tunnels the packet to an appropriate computing resource (e.g., a service VM). Procedure 900 subsequently ends at step 935, but may continue on to step 910 where the network controller device establishes VRF instances for network segments, discussed above.
It should be noted that certain steps within procedure 900 may be optional, and further, the steps shown are merely examples for illustration; certain other steps may be included or excluded as desired, and the particular order of the steps is merely illustrative.
The techniques described herein manage in-band traffic in a data center network, and in particular, traffic within network segments. These techniques further support endpoint migration for in-band communications using local and/or aggregated translation and mapping tables that may index entries according to VNIDs, tenant addresses, unique (loopback) addresses, bound network interfaces (TAP/TUN interfaces), and combinations thereof.
While there have been shown and described illustrative embodiments for managing in-band communications in a data center network, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the embodiments herein. For example, the embodiments have been shown and described herein with relation to network switches and a control plane comprising the network switches. However, the embodiments in their broader sense are not as limited, and may, in fact, be used with any number of network devices (e.g., routers), and the like. In addition, the embodiments are shown with certain devices/modules performing certain operations (e.g., APICs 1-3, proxy spine switch 1, in-band module 815, and the like), however, it is appreciated that various other devices may be readily modified to perform operations without departing from the spirit and scope of this disclosure. Moreover, although the examples and embodiments described herein particularly refer to VXLAN protocols, the embodiments in their broader sense may be applied to any known encapsulation protocols, as is appreciated by those skilled in the art.
The foregoing description has been directed to specific embodiments. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, it is expressly contemplated that the components and/or elements described herein can be implemented as software being stored on a tangible (non-transitory) computer-readable medium, devices, and memories (e.g., disks/CDs/RAM/EEPROM/etc.) having program instructions executing on a computer, hardware, firmware, or a combination thereof. Further, methods describing the various functions and techniques described herein can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. In addition, devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. Instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Accordingly this description is to be taken only by way of example and not to otherwise limit the scope of the embodiments herein. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the embodiments herein.
This application claims priority to prior filed U.S. Provisional Patent Application Ser. No. 62/342,746, filed on May 27, 2016, the content of which is herein incorporated by reference.
20160359897 | Yadav et al. | Dec 2016 | A1 |
20160359912 | Gupta et al. | Dec 2016 | A1 |
20160359913 | Gupta et al. | Dec 2016 | A1 |
20160359914 | Deen et al. | Dec 2016 | A1 |
20160359915 | Gupta et al. | Dec 2016 | A1 |
20160359917 | Rao et al. | Dec 2016 | A1 |
20160373481 | Sultan et al. | Dec 2016 | A1 |
20170034018 | Parasdehgheibi et al. | Feb 2017 | A1 |
20180006911 | Dickey | Jan 2018 | A1 |
Foreign Patent Documents

Number | Date | Country
---|---|---|
101093452 | Dec 2007 | CN |
101770551 | Jul 2010 | CN |
102521537 | Jun 2012 | CN |
103023970 | Apr 2013 | CN |
103716137 | Apr 2014 | CN |
104065518 | Sep 2014 | CN |
0811942 | Dec 1997 | EP |
1383261 | Jan 2004 | EP |
1450511 | Aug 2004 | EP |
2045974 | Apr 2008 | EP |
2887595 | Jun 2015 | EP |
2009-016906 | Jan 2009 | JP |
1394338 | May 2014 | KR |
WO 2007014314 | Feb 2007 | WO |
WO 2007070711 | Jun 2007 | WO |
WO 2008069439 | Jun 2008 | WO |
WO 2013030830 | Mar 2013 | WO |
WO 2015042171 | Mar 2015 | WO |
WO 2015099778 | Jul 2015 | WO |
WO 2016004075 | Jan 2016 | WO |
WO 2016019523 | Feb 2016 | WO |
Other References

Entry |
---|
Arista Networks, Inc., “Application Visibility and Network Telemetry using Splunk,” Arista White Paper, Nov. 2013, 11 pages. |
Australian Government Department of Defence, Intelligence and Security, “Top 4 Strategies to Mitigate Targeted Cyber Intrusions,” Cyber Security Operations Centre, Jul. 2013, http://www.asd.gov.au/infosec/top-mitigations/top-4-strategies-explained.htm. |
Author Unknown, “Blacklists & Dynamic Reputation: Understanding Why the Evolving Threat Eludes Blacklists,” www.damballa.com, 9 pages, Damballa, Atlanta, GA, USA. |
Aydin, Galip, et al., “Architecture and Implementation of a Scalable Sensor Data Storage and Analysis Using Cloud Computing and Big Data Technologies,” Journal of Sensors, vol. 2015, Article ID 834217, Feb. 2015, 11 pages. |
Backes, Michael, et al., “Data Lineage in Malicious Environments,” IEEE 2015, pp. 1-13. |
Bauch, Petr, “Reader's Report of Master's Thesis, Analysis and Testing of Distributed NoSQL Datastore Riak,” May 28, 2015, Brno, 2 pages. |
Bayati, Mohsen, et al., “Message-Passing Algorithms for Sparse Network Alignment,” Mar. 2013, 31 pages. |
Berezinski, Przemyslaw, et al., “An Entropy-Based Network Anomaly Detection Method,” Entropy, 2015, vol. 17, www.mdpi.com/journal/entropy, pp. 2367-2408. |
Berthier, Robin, et al., “Nfsight: Netflow-based Network Awareness Tool,” 2010, 16 pages. |
Bhuyan, Dhiraj, “Fighting Bots and Botnets,” 2006, pp. 23-28. |
Blair, Dana, et al., U.S. Appl. No. 62/106,006, filed Jan. 21, 2015, entitled “Monitoring Network Policy Compliance.” |
Bosch, Greg, “Virtualization,” 2010, 33 pages. |
Breen, Christopher, “MAC 911, How to dismiss Mac App Store Notifications,” Macworld.com, Mar. 24, 2014, 3 pages. |
Chandran, Midhun, et al., “Monitoring in a Virtualized Environment,” GSTF International Journal on Computing, vol. 1, No. 1, Aug. 2010. |
Chari, Suresh, et al., “Ensuring continuous compliance through reconciling policy with usage,” Proceedings of the 18th ACM Symposium on Access Control Models and Technologies (SACMAT '13), ACM, New York, NY, USA, pp. 49-60. |
Chen, Xu, et al., “Automating network application dependency discovery: experiences, limitations, and new solutions,” 8th USENIX Conference on Operating Systems Design and Implementation (OSDI'08), USENIX Association, Berkeley, CA, USA, pp. 117-130. |
Chou, C.W., et al., “Optical Clocks and Relativity,” Science vol. 329, Sep. 24, 2010, pp. 1630-1633. |
Cisco Systems, Inc., “Cisco Network Analysis Modules (NAM) Tutorial,” Version 3.5. |
Cisco Systems, Inc., “Addressing Compliance from One Infrastructure: Cisco Unified Compliance Solution Framework,” 2014. |
Cisco Systems, Inc., “Cisco Application Dependency Mapping Service,” 2009. |
Cisco Systems, Inc., “White Paper—New Cisco Technologies Help Customers Achieve Regulatory Compliance,” 1992-2008. |
Cisco Systems, Inc., “A Cisco Guide to Defending Against Distributed Denial of Service Attacks,” May 3, 2016, 34 pages. |
Cisco Systems, Inc., “Cisco Application Visibility and Control,” Oct. 2011, 2 pages. |
Cisco Systems, Inc., “Cisco Tetration Platform Data Sheet,” Updated Mar. 5, 2018, 21 pages. |
Cisco Technology, Inc., “Cisco Lock-and-Key: Dynamic Access Lists,” http://www.cisco.com/en/us/support/doc/security-vpn/lock-key/7604-13.html; Updated Jul. 12, 2006, 16 pages. |
Di Lorenzo, Giusy, et al., “EXSED: An Intelligent Tool for Exploration of Social Events Dynamics from Augmented Trajectories,” Mobile Data Management (MDM), pp. 323-330, Jun. 3-6, 2013. |
Duan, Yiheng, et al., “Detective: Automatically Identify and Analyze Malware Processes in Forensic Scenarios via DLLs,” IEEE ICC 2015—Next Generation Networking Symposium, pp. 5691-5696. |
Feinstein, Laura, et al., “Statistical Approaches to DDoS Attack Detection and Response,” Proceedings of the DARPA Information Survivability Conference and Exposition (DISCEX '03), Apr. 2003, 12 pages. |
George, Ashley, et al., “NetPal: A Dynamic Network Administration Knowledge Base,” 2008, pp. 1-14. |
Goldsteen, Abigail, et al., “A Tool for Monitoring and Maintaining System Trustworthiness at Run Time,” REFSQ (2015), pp. 142-147. |
Hamadi, S., et al., “Fast Path Acceleration for Open vSwitch in Overlay Networks,” Global Information Infrastructure and Networking Symposium (GIIS), Montreal, QC, pp. 1-5, Sep. 15-19, 2014. |
Heckman, Sarah, et al., “On Establishing a Benchmark for Evaluating Static Analysis Alert Prioritization and Classification Techniques,” IEEE, 2008, 10 pages. |
Hewlett-Packard, “Effective use of reputation intelligence in a security operations center,” Jul. 2013, 6 pages. |
Hideshima, Yusuke, et al., “STARMINE: A Visualization System for Cyber Attacks,” http://www.researchgate.net/publication/221536306, Feb. 2006, 9 pages. |
Huang, Hing-Jie, et al., “Clock Skew Based Node Identification in Wireless Sensor Networks,” IEEE, 2008, 5 pages. |
InternetPerils, Inc., “Control Your Internet Business Risk,” 2003-2015, http://www.internetperils.com. |
Ives, Herbert E., et al., “An Experimental Study of the Rate of a Moving Atomic Clock,” Journal of the Optical Society of America, vol. 28, No. 7, Jul. 1938, pp. 215-226. |
Janoff, Christian, et al., “Cisco Compliance Solution for HIPAA Security Rule Design and Implementation Guide,” Cisco Systems, Inc., Updated Nov. 14, 2015, part 1 of 2, 350 pages. |
Janoff, Christian, et al., “Cisco Compliance Solution for HIPAA Security Rule Design and Implementation Guide,” Cisco Systems, Inc., Updated Nov. 14, 2015, part 2 of 2, 588 pages. |
Kerrison, Adam, et al., “Four Steps to Faster, Better Application Dependency Mapping—Laying the Foundation for Effective Business Service Models,” BMC Software, 2011. |
Kim, Myung-Sup, et al., “A Flow-based Method for Abnormal Network Traffic Detection,” IEEE, 2004, pp. 599-612. |
Kraemer, Brian, “Get to know your data center with CMDB,” TechTarget, Apr. 5, 2006, http://searchdatacenter.techtarget.com/news/118820/Get-to-know-your-data-center-with-CMDB. |
Lab SKU, “VMware Hands-on Labs—HOL-SDC-1301” Version: 20140321-160709, 2013; http://docs.hol.vmware.com/HOL-2013/holsdc-1301_html_en/ (part 1 of 2). |
Lab SKU, “VMware Hands-on Labs—HOL-SDC-1301” Version: 20140321-160709, 2013; http://docs.hol.vmware.com/HOL-2013/holsdc-1301_html_en/ (part 2 of 2). |
Lachance, Michael, “Dirty Little Secrets of Application Dependency Mapping,” Dec. 26, 2007. |
Landman, Yoav, et al., “Dependency Analyzer,” Feb. 14, 2008, http://jfrog.com/confluence/display/DA/Home. |
Lee, Sihyung, “Reducing Complexity of Large-Scale Network Configuration Management,” Ph.D. Dissertation, Carnegie Mellon University, 2010. |
Li, Ang, et al., “Fast Anomaly Detection for Large Data Centers,” Global Telecommunications Conference (GLOBECOM 2010), Dec. 2010, 6 pages. |
Li, Bingdong, et al., “A Supervised Machine Learning Approach to Classify Host Roles on Line Using sFlow,” in Proceedings of the First Edition Workshop on High Performance and Programmable Networking, 2013, ACM, New York, NY, USA, pp. 53-60. |
Liu, Ting, et al., “Impala: A Middleware System for Managing Autonomic, Parallel Sensor Systems,” In Proceedings of the Ninth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '03), ACM, New York, NY, USA, Jun. 11-13, 2003, pp. 107-118. |
Lu, Zhonghai, et al., “Cluster-based Simulated Annealing for Mapping Cores onto 2D Mesh Networks on Chip,” Design and Diagnostics of Electronic Circuits and Systems, pp. 1-6, Apr. 16-18, 2008. |
Matteson, Ryan, “Depmap: Dependency Mapping of Applications Using Operating System Events: a Thesis,” Master's Thesis, California Polytechnic State University, Dec. 2010. |
Natarajan, Arun, et al., “NSDMiner: Automated Discovery of Network Service Dependencies,” Institute of Electrical and Electronics Engineers INFOCOM, Feb. 2012, 9 pages. |
Navaz, A.S. Syed, et al., “Entropy based Anomaly Detection System to Prevent DDoS Attacks in Cloud,” International Journal of Computer Applications (0975-8887), vol. 62, No. 15, Jan. 2013, pp. 42-47. |
Neverfail, “Neverfail IT Continuity Architect,” 2015, https://web.archive.org/web/20150908090456/http://www.neverfailgroup.com/products/it-continuity-architect. |
Nilsson, Dennis K., et al., “Key Management and Secure Software Updates in Wireless Process Control Environments,” In Proceedings of the First ACM Conference on Wireless Network Security (WiSec '08), ACM, New York, NY, USA, Mar. 31-Apr. 2, 2008, pp. 100-108. |
Nunnally, Troy, et al., “P3D: A Parallel 3D Coordinate Visualization for Advanced Network Scans,” IEEE 2013, Jun. 9-13, 2013, 6 pages. |
O'Donnell, Glenn, et al., “The CMDB Imperative: How to Realize the Dream and Avoid the Nightmares,” Prentice Hall, Feb. 19, 2009. |
Ohta, Kohei, et al., “Detection, Defense, and Tracking of Internet-Wide Illegal Access in a Distributed Manner,” 2000, pp. 1-16. |
Pathway Systems International Inc., “How Blueprints does Integration,” Apr. 15, 2014, 9 pages, http://pathwaysystems.com/company-blog/. |
Pathway Systems International Inc., “What is Blueprints?” 2010-2016, http://pathwaysystems.com/blueprints-about/. |
Popa, Lucian, et al., “Macroscope: End-Point Approach to Networked Application Dependency Discovery,” CoNEXT'09, Dec. 1-4, 2009, Rome, Italy, 12 pages. |
Prasad, K. Munivara, et al., “An Efficient Detection of Flooding Attacks to Internet Threat Monitors (ITM) using Entropy Variations under Low Traffic,” Computing Communication & Networking Technologies (ICCCNT '12), Jul. 26-28, 2012, 11 pages. |
Sachan, Mrinmaya, et al., “Solving Electrical Networks to incorporate Supervision in Random Walks,” May 13-17, 2013, pp. 109-110. |
Sammarco, Matteo, et al., “Trace Selection for Improved WLAN Monitoring,” Aug. 16, 2013, pp. 9-14. |
Shneiderman, Ben, et al., “Network Visualization by Semantic Substrates,” IEEE Transactions on Visualization and Computer Graphics, vol. 12, No. 5, pp. 733-740, Sep.-Oct. 2006. |
Thomas, R., “Bogon Dotted Decimal List,” Version 7.0, Team Cymru NOC, Apr. 27, 2012, 5 pages. |
Wang, Ru, et al., “Learning directed acyclic graphs via bootstrap aggregating,” 2014, 47 pages, http://arxiv.org/abs/1406.2098. |
Wang, Yongjun, et al., “A Network Gene-Based Framework for Detecting Advanced Persistent Threats,” Nov. 2014, 7 pages. |
Witze, Alexandra, “Special relativity aces time trial, ‘Time dilation’ predicted by Einstein confirmed by lithium ion experiment,” Nature, Sep. 19, 2014, 3 pages. |
Woodberg, Brad, “Snippet from Juniper SRX Series,” Jun. 17, 2013, 1 page, O'Reilly Media, Inc. |
Zatrochova, Zuzana, “Analysis and Testing of Distributed NoSQL Datastore Riak,” Spring, 2015, 76 pages. |
Zhang, Yue, et al., “Cantina: A Content-Based Approach to Detecting Phishing Web Sites,” May 8-12, 2007, pp. 639-648. |
Related Publications

Number | Date | Country |
---|---|---|
20170346736 A1 | Nov 2017 | US
Provisional Applications

Number | Date | Country |
---|---|---|
62342746 | May 2016 | US