Reducing ARP/ND flooding in cloud environment

Information

  • Patent Grant
  • Patent Number
    10,659,283
  • Date Filed
    Friday, July 8, 2016
  • Date Issued
    Tuesday, May 19, 2020
Abstract
Aspects of the embodiments are directed to receiving an address resolution protocol (ARP) request message from a requesting virtual machine, the ARP request message comprising a request for a destination address for a destination virtual machine, wherein the destination address comprises one or both of a destination hardware address or a destination media access control address; augmenting the ARP request message with a network service header (NSH), the NSH identifying an ARP service function; and forwarding the augmented ARP request to the ARP service function.
Description
FIELD

This disclosure pertains to reducing address resolution protocol (ARP) and/or neighbor discovery (ND) flooding in a cloud environment.


BACKGROUND

Currently, in Virtual Private Cloud, Hybrid Cloud, or Data Center scenarios, it is common to see different Layer 2 networks/sites connected using various overlay technologies (e.g., EVPN, VXLAN, NVO3). In these scenarios, the Overlay Edge Node (e.g., NVE, PE) uses dataplane-based MAC learning and exchanges MAC reachability over BGP. The Overlay Edge Node can also perform ND/ARP snooping for additional optimization and advertise the IP/MAC reachability information via BGP, so that any Edge Node, upon receiving ND/ARP requests from connected L2 devices (e.g., virtual machines (VMs)), checks its local cache table for existing ND/ARP entries and replies directly to the connected L2 devices if an appropriate match is found. If no match is found, the Edge Node floods the request to remote Edge Nodes and waits for a reply.



FIG. 1 is a block diagram of an example network 100. In FIG. 1, Host2 108 in L2Site2 110 has not originated any traffic. If Host1 102 from L2Site1 104 sends an ND/ARP request for Host2, the ND/ARP request will be flooded by the network virtualization edge network element (NVE1) 106 to all remote NVEs (NVE2 112 and NVE3 118 in this scenario), which in turn will flood the ND/ARP request to other connected L2 sites. The same flooding issue is observed if the entry for any MAC times out on the NVEs. This flooding becomes a challenge in large-scale data center deployments.


It is possible that an orchestrator network element, such as a controller, is made aware of the IP/MAC addresses of the network function virtualization (NFV)/virtual machine (VM) instances, thereby allowing controllers to exchange the IP/MAC address reachability details with adjacent NVEs. However, this still does not address the above flooding challenge, as NVEs are still required to rely on the data plane (learning and/or ND/ARP snooping) and control plane (e.g., BGP advertisements/withdrawals).





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an example network topology in which Host2 in L2Site2 has not originated any traffic.



FIGS. 2A-2E are schematic diagrams of a network topology in accordance with embodiments of the present disclosure.



FIGS. 3A-3B are schematic diagrams of a network topology with virtual machine migration in accordance with embodiments of the present disclosure.



FIG. 4 is a process flow diagram for reducing address resolution protocol flooding in accordance with embodiments of the present disclosure.



FIGS. 5A-5D are schematic diagrams of a network topology that includes a border gateway protocol table in accordance with embodiments of the present disclosure.



FIG. 6 is a process flow diagram for reducing address resolution protocol flooding in accordance with embodiments of the present disclosure.



FIG. 7A illustrates a Service Function Chain (SFC), which may include an initial Classification function, as an entry point into a Service Function Path (SFP), according to some embodiments of the disclosure.



FIGS. 7B-7C illustrate different service paths realized using service function chaining, according to some embodiments of the disclosure.



FIG. 8 shows a system view of a Service Function Chain-aware network element for prescribing a service path of a traffic flow, according to some embodiments of the disclosure.



FIG. 9 shows a system view of a service node, according to some embodiments of the disclosure.





DETAILED DESCRIPTION

This disclosure describes leveraging a controller's knowledge of the IP/MAC addresses of spawned VM/service function (SF)/NFV instances and prepopulating the same in an address resolution protocol service function (ARP-SF) at each L2 site. An NVE sniffs ARP requests from its connected site and forwards each ARP request to an ARP-SF after encapsulating it with a network service header (NSH).


Aspects of the embodiments are directed to one or more computer readable storage media encoded with software comprising computer executable instructions and, when the software is executed, operable to receive an address resolution protocol (ARP) request message from a requesting virtual machine, the ARP request message comprising a request for a destination address for a destination virtual machine, wherein the destination address comprises one or both of a destination hardware address or a destination media access control address; augment the ARP request message with a network service header (NSH), the NSH identifying an ARP service function; and forward the augmented ARP request to the ARP service function.


In some embodiments, the software is further operable to receive an ARP reply message from the ARP service function; decapsulate a network service header from the ARP reply message; and forward the decapsulated ARP reply message to the requesting virtual machine, the ARP reply message including a destination address for a destination virtual machine.


In some embodiments, the software is further operable to update a forwarding table with the destination address for the destination virtual machine.


In some embodiments, the software is further operable to determine, from the ARP reply message received from the ARP service function, that the destination address for the destination virtual machine is not present in an ARP service function database; and transmit an ARP request message to one or more network elements in a network.


Aspects of the embodiments are directed to a network element for performing address resolution, the network element comprising at least one memory element having instructions stored thereon; and at least one processor coupled to the at least one memory element and configured to execute the instructions to cause the network element to receive an address resolution protocol (ARP) request message from a requesting virtual machine, the ARP request message comprising a request for a destination address for a destination virtual machine, wherein the destination address comprises one or both of a destination hardware address or a destination media access control address; augment the ARP request message with a network service header (NSH), the NSH identifying an ARP service function; and forward the augmented ARP request to the ARP service function.


In some embodiments, the at least one processor is configured to cause the network element to receive an ARP reply message from the ARP service function; decapsulate a network service header from the ARP reply message; and forward the decapsulated ARP reply message to the requesting virtual machine, the ARP reply message including a destination address for a destination virtual machine.


In some embodiments, the at least one processor is configured to cause the network element to update a forwarding table with the destination address for the destination virtual machine.


In some embodiments, the network element comprises a network virtualization edge network element.


In some embodiments, the at least one processor is configured to cause the network element to determine, from the ARP reply message received from the ARP service function, that the destination address for the destination virtual machine is not present in an ARP service function database; and transmit an ARP request message to one or more network elements in a network.


Aspects of the embodiments are directed to one or more computer readable storage media encoded with software comprising computer executable instructions and, when the software is executed, operable to receive, at an address resolution protocol (ARP) service function, an ARP request message from a network element; perform a lookup in an ARP database associated with the ARP service function; generate an ARP reply message; and forward the ARP reply message to the network element.


In some embodiments, the software is further operable to determine the presence in the ARP database of a destination address for a destination virtual machine identified in the ARP request message, wherein the destination address comprises one or both of a destination hardware address or a destination media access control address; and augment the ARP reply message with the destination address for the destination virtual machine.


In some embodiments, the software is further operable to determine the presence in the ARP database of a destination address for a destination virtual machine identified in the ARP request message, wherein the destination address comprises one or both of a destination hardware address or a destination media access control address; determine that the virtual machine is a local virtual machine based, at least in part, on the destination address of the destination virtual machine; and ignore the ARP request message.


In some embodiments, the software is further operable to determine that a destination address for a destination virtual machine identified in the ARP request message does not exist in the ARP database, wherein generating an ARP reply message comprises generating an ARP reply message that includes a network service header that indicates the absence of the destination address for the destination.


In some embodiments, the software is further operable to receive from a controller one or more virtual machine destination addresses and, for each virtual machine destination address, a corresponding virtual machine identifier; and store the one or more virtual machine destination addresses and corresponding virtual machine identifiers in a database, wherein the destination address comprises one or both of a destination hardware address or a destination media access control address.


Aspects of the embodiments are directed to a network element for performing address resolution, the network element comprising at least one memory element having instructions stored thereon; and at least one processor coupled to the at least one memory element and configured to execute the instructions to cause the network element to receive, at an address resolution protocol (ARP) service function, an ARP request message from a network element; perform a lookup in an ARP database associated with the ARP service function; generate an ARP reply message; and forward the ARP reply message to the network element.


In some embodiments, the at least one processor is configured to cause the network element to determine the presence in the ARP database of a destination address for a destination virtual machine identified in the ARP request message; and augment the ARP reply message with the destination address for the destination virtual machine; wherein the destination address comprises one or both of a destination hardware address or a destination media access control address.


In some embodiments, the at least one processor is configured to cause the network element to determine the presence in the ARP database of a destination address for a destination virtual machine identified in the ARP request message; determine that the virtual machine is a local virtual machine based, at least in part, on the destination address of the destination virtual machine; and ignore the ARP request message; wherein the destination address comprises one or both of a destination hardware address or a destination media access control address.


In some embodiments, the at least one processor is configured to cause the network element to determine that a destination address for a destination virtual machine identified in the ARP request message does not exist in the ARP database, wherein generating an ARP reply message comprises encapsulating the ARP request message, setting a flag in the network service header indicating that the ARP request should be flooded, and forwarding the encapsulated message, with a network service header that indicates the absence of the destination address for the destination, to the network element.


In some embodiments, the at least one processor is configured to cause the network element to receive from a controller one or more virtual machine destination addresses and, for each virtual machine destination address, a corresponding virtual machine identifier; and store the one or more virtual machine destination addresses and corresponding virtual machine identifiers in a database.


In some embodiments, the destination address comprises one or both of an internet protocol (IP) address or a media access control (MAC) address.


Aspects of the embodiments are directed to an NVE that is configured to receive the ND/ARP request message from the requesting VM. The NVE can look up a BGP table for the destination address (hardware address or MAC address) information for the destination VM. The NVE can create a reply, and send the reply message to the VM. The NVE can also update the forwarding table, the entry of which would expire after some time period.


The ARP-SF replies with the details if it has them in its local database; otherwise, it signals the NVE to flood the request. This provides a scalable and dynamic way of reducing ARP flooding.


The idea is to dedicate one or more Service Functions (SFs), dubbed ND/ARP-SF, that keep track of the IP/MAC details of each VM instance in local and remote L2 sites and provide the IP/MAC reachability information to the Overlay Edge Node (e.g., NVE, PE) when requested, using the Network Services Header (NSH) mechanism.


The Overlay Edge Node does NOT populate the IP/MAC reachability information in its forwarding table by default, until/unless there is corresponding traffic. This is because the IP/MAC reachability details are outsourced to the ND/ARP-SF.


The ND/ARP-SF service function is populated with the IP/MAC details of each VM instance after the instance is spun up, and the entry is deleted when the VM instance is deleted. To keep the per-VM IP/MAC details up to date, the ND/ARP-SF communicates (using APIs (e.g., REST), preferably in a pub/sub manner) with either a Virtual Infrastructure Manager (VIM), such as vCenter or OpenStack, or a controller, such as the Virtual Topology System (VTS), depending on the deployment model.


Note that there could be more than one ND/ARP-SF and more than one VIM/controller for scale purposes. Controllers are typically layered on top of a VIM for scale.


The IP/MAC reachability details are exchanged among the controllers of different sites using a (e.g., REST) API or BGP EVPN. This consolidated information is populated in the ARP-SF.
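
As a rough illustration of the database just described, the following Python sketch models an ARP-SF table keyed by VM IP address and updated by controller notifications. All names here (ArpServiceFunction, Binding, on_vm_event) are hypothetical; the disclosure does not prescribe any particular schema or API.

```python
from dataclasses import dataclass

@dataclass
class Binding:
    mac: str         # MAC address of the VM instance
    remote_nve: str  # overlay edge node (NVE/PE) behind which the VM sits

class ArpServiceFunction:
    """Keeps IP/MAC/NVE bindings for local and remote L2 sites."""

    def __init__(self, local_site: str):
        self.local_site = local_site
        self.table: dict[str, Binding] = {}   # keyed by VM IP address
        self.site_of: dict[str, str] = {}     # IP -> site, to detect local VMs

    # Called by the VIM/controller (e.g., over a REST pub/sub channel)
    # when a VM is spun up, migrated, or torn down.
    def on_vm_event(self, event: str, ip: str, mac: str, nve: str, site: str):
        if event in ("spawn", "move"):
            self.table[ip] = Binding(mac, nve)
            self.site_of[ip] = site
        elif event == "delete":
            self.table.pop(ip, None)
            self.site_of.pop(ip, None)
```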



FIGS. 2A-2E are schematic diagrams of a network topology 200 in accordance with embodiments of the present disclosure. Topology 200 includes a first site (Site1) 202. Site1 202 can include one or more instantiated virtual machines, such as VM1 206a and VM9 206b. Though shown as two VMs, any site can include a plurality of VM instantiations.


The ND/ARP-SF service function is populated with the IP/MAC details of each VM instance after the instance is spun up, and the entry is deleted when the VM instance is deleted. To keep the per-VM IP/MAC details up to date, the ND/ARP-SF communicates (using APIs (e.g., REST), preferably in a pub/sub manner) with a controller 240 (which can be either a Virtual Infrastructure Manager (VIM), such as vCenter or OpenStack, or a controller such as the Virtual Topology System (VTS), depending on the deployment model). The controller 240 can populate an ARP-SF database 210.


Site1 202 includes a network virtualization edge (NVE1) 204. A Network Virtualization Edge (NVE) is a component in network virtualization overlay technology. An NVE can provide different types of virtualized network services to multiple tenants, i.e., an L2 service or an L3 service. Note that an NVE may be capable of providing both L2 and L3 services for a tenant.


An L2 NVE implements Ethernet LAN emulation, an Ethernet-based multipoint service similar to an IETF VPLS or EVPN service, where the tenant systems appear to be interconnected by a LAN environment over an L3 overlay. As such, an L2 NVE provides a per-tenant virtual switching instance (L2 VNI) and L3 (IP/MPLS) tunneling encapsulation of tenant MAC frames across the underlay. Note that the control plane for an L2 NVE could be implemented locally on the NVE or in a separate control entity.


Site1 202 can also include an address resolution protocol service function (ARP-SF) 208. ARP-SF 208 can maintain or access an ARP-SF database that cross-references IP addresses/MAC addresses and remote NVEs for destination address resolution.


Other sites, such as site2 212 and site3 222, can include similar features. For example, site2 212 includes NVE2 214, VM2 216 (an instantiated VM), and ARP-SF 218. Site3 222 includes NVE3 224, instantiated VM3 226, and ARP-SF 228. The sites can be connected through a cloud service provider 230. A controller 240 can provide orchestration and other control functions for each site or for all sites, collectively. Controller 240 can include a virtual infrastructure management (VIM) network element. In embodiments, the controller 240 can populate the ARP-SF database 210. In embodiments, the ARP-SF database 210 can also be populated from the NVE forwarding table.


In the topology 200 shown in FIG. 2B, NVE1 204 can receive an ND/ARP request from a device (e.g., VM1 206a in local L2 site1 202). In this example, the ND/ARP request can be a request for VM2 216 in L2 site2 212. NVE1 204 can encapsulate the request in a network service header (NSH). NVE1 204 can forward the encapsulated ND/ARP request towards the ARP-SF 208. In embodiments, the ARP-SF 208 can also handle neighbor discovery (ND) requests.
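
A minimal sketch of this redirect step, treating the NSH as a plain dictionary; the field names, the redirect_arp_request helper, and the sender address are illustrative assumptions rather than the patent's encoding.

```python
def redirect_arp_request(arp_request: dict, arp_sf_path_id: int) -> dict:
    """Encapsulate a sniffed ARP request with an NSH naming the ARP-SF path."""
    return {
        "nsh": {
            "service_path_id": arp_sf_path_id,  # identifies the path to the ARP-SF
            "service_index": 255,               # location within that path
            "metadata": {},                     # room for ingress-NVE details, etc.
        },
        "payload": arp_request,                 # original ARP request, unchanged
    }

# e.g., NVE1 redirects VM1's request for VM2's address instead of flooding it
pkt = redirect_arp_request({"sender": "10.1.1.1", "target": "10.1.1.2"},
                           arp_sf_path_id=7)
```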


As shown in FIG. 2C, the ARP-SF 208 can process the request by performing a lookup in database 210. If a match is found for VM2 in database 210, the ARP-SF 208 can generate the ND/ARP reply, encapsulate it with an NSH (along with the remote NVE associated with that VM entry, a VXLAN tunnel endpoint (VTEP), etc., in the NSH metadata), and unicast it to NVE1 204.


NVE1, on receiving the reply, can use the details (IP/MAC and remote NVE in the metadata) to update its forwarding table (e.g., the cache table shown in FIG. 3A), decapsulate the reply from the ARP-SF 208, and forward the ND/ARP reply towards the originating device (here, VM1 206a). For example, ARP-SF 208 can reply to NVE1 with {IP=10.1.1.2; MAC=2.2.2; remoteNVE=NVE2}. NVE1 will program the details locally, decapsulate the network service header, and send the reply to VM1. By using stored information in the ARP-SF database 210, NVE1 204 in this scenario can forgo flooding the ND/ARP request.
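
Continuing the same dictionary-based sketch, the NVE-side reply handling might look as follows. The forwarding-table layout is an assumption; the example values come from the {IP=10.1.1.2; MAC=2.2.2; remoteNVE=NVE2} reply above.

```python
# Hypothetical NVE handling of a successful ARP-SF reply: program the
# forwarding table from the NSH metadata, then decapsulate for the VM.
def handle_arp_sf_reply(forwarding_table: dict, reply: dict) -> dict:
    meta = reply["nsh"]["metadata"]
    forwarding_table[meta["ip"]] = {"mac": meta["mac"],
                                    "remote_nve": meta["remote_nve"]}
    return reply["payload"]  # the plain ARP reply, forwarded on to VM1

fib: dict = {}
reply = {"nsh": {"metadata": {"ip": "10.1.1.2", "mac": "2.2.2",
                              "remote_nve": "NVE2"}},
         "payload": {"arp_reply_for": "10.1.1.2"}}
arp_reply = handle_arp_sf_reply(fib, reply)  # fib now maps 10.1.1.2 -> NVE2
```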


In embodiments, the NVE could be configured not to redirect the ND/ARP request for certain ranges of IP prefixes, to avoid having the NVE try to resolve requests originated in local sites (since the underlay or overlay could implicitly resolve such requests).


In some embodiments, VM1 can flood a request for VM10 206n, which is in L2 Site1 202. All nodes in L2Site1 (including NVE1 and VM10) will receive the request. NVE1 will encapsulate the request with an NSH and forward it to the ARP-SF. Since VM10 is local, the ARP-SF will simply ignore it. In the meantime, VM10 206n will reply to VM1 206a with the address resolution.



FIG. 2D is a schematic diagram of a network topology in accordance with embodiments of the present disclosure wherein the ARP-SF 208 cannot resolve the ND/ARP request (e.g., because the ARP-SF 208 does not have the MAC details in database 210). As shown in FIG. 2D, VM1 206a sends a request for VM300 206n. NVE1 204 can check its cache (forwarding) table for VM300 information. When NVE1 204 does not have IP/MAC information for VM300, NVE1 can encapsulate the ARP request with an NSH and forward the encapsulated ARP request to ARP-SF 208.


In FIG. 2E, ARP-SF 208, after receiving the request from NVE1 204, will reply with the relevant details (including a flag in the metadata) signaling that it does not have any local entry for VM300. NVE1 204, after receiving such a reply, will flood the ARP request to all remote NVEs (e.g., NVE2 214 and NVE3 224). This is backward compatible and will not cause any packet loss if ARP-SF 208 has not been updated with the MAC details.
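
The miss case can be layered on the previous sketch. A hypothetical no_entry flag in the NSH metadata stands in for the flag the text describes; the exact encoding is an assumption.

```python
# Hypothetical dispatch: flood on a "no entry" flag, otherwise program the
# table and unicast the decapsulated reply (handle_arp_sf_reply from above).
def handle_reply_or_flood(remote_nves: list, forwarding_table: dict, reply: dict):
    if reply["nsh"]["metadata"].get("no_entry"):
        # ARP-SF had no local entry (e.g., for VM300): fall back to flooding.
        return [("flood", nve, reply["payload"]) for nve in remote_nves]
    return [("unicast", handle_arp_sf_reply(forwarding_table, reply))]
```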


The above concept does not require the NVE to be loaded with details of all other nodes in remote sites. Instead, it uses a scalable approach of spinning up ARP-SF instances as required and having the NVE redirect requests to the ARP-SF, thereby reducing the flooding.


VM Move Scenario:



FIGS. 3A-3B are schematic diagrams of a network topology for reducing address resolution flooding for a virtual machine that has moved in accordance with embodiments of the present disclosure. In FIG. 3A, VM2 216 undergoes VM migration from site2 212 to site3 222. In embodiments, VM migration does not involve any change in the IP/MAC associated with the instance. For example, VM2 216 from site2 212 can migrate to site3 (shown as VM2 227 in FIG. 3B). During such scenarios, all ARP-SFs in the different L2 sites will be programmed with the new details (such as the remote NVE details). For example, a local ARP-SF 228 can inform a central controller (e.g., VIM/controller), which can publish the move to the other ARP-SFs.


When VM1 206a sends a new ARP request for the MAC address for VM2 216 within an L2 site, and the local NVE1 204 does not have an entry for that MAC in its forwarding table, then NVE1 204 simply relies on the process (as described above) to learn the latest binding from ARP-SF 208.


However, if NVE1 204 does have an entry (which might be stale) for that MAC for VM2 216 in its forwarding table (pointing to the old remote NVE2 214), then one of two embodiments could apply (depending on Stateful or Stateless ARP-SF logic):


(a) Stateful ARP-SF: a stateful ARP-SF would retain the identity of each NVE that interacted with the ARP-SF for each IP/MAC. The stateful ARP-SF could send the updated IP/MAC:NVE binding info to the NVE on an unsolicited basis using an NSH (a new flag could be set), as soon as the ARP-SF learns about the updated info (from the controller 240).


(b) Stateless ARP-SF: a stateless ARP-SF does not keep track of which NVE interacted for which IP/MAC:NVE binding. The NVE would follow the current dataplane learning (and flooding) to update its local forwarding table. Note that, given the redirection by the remote NVE, there would be minimal or no loss.
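
A sketch of the stateful variant, extending the hypothetical ArpServiceFunction from the earlier sketch. The unsolicited push (and the idea of a new NSH flag) follows the text; the code structure itself is an illustrative assumption.

```python
def send_unsolicited_update(requester_nve: str, ip: str, mac: str, new_nve: str):
    # Stand-in for an NSH-encapsulated push with the "unsolicited" flag set.
    print(f"to {requester_nve}: {ip}/{mac} is now behind {new_nve}")

class StatefulArpServiceFunction(ArpServiceFunction):
    def __init__(self, local_site: str):
        super().__init__(local_site)
        self.interested: dict[str, set] = {}  # IP -> NVEs that resolved it

    def record_lookup(self, ip: str, nve: str):
        self.interested.setdefault(ip, set()).add(nve)

    def on_vm_event(self, event, ip, mac, nve, site):
        super().on_vm_event(event, ip, mac, nve, site)
        if event == "move":
            # Push the fresh IP/MAC:NVE binding to every NVE that may hold
            # a stale entry, as soon as the controller reports the move.
            for requester in self.interested.get(ip, set()):
                send_unsolicited_update(requester, ip, mac, nve)
```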


The above concepts do not require the NVE to be loaded with details of all other nodes in remote sites. Instead, ARP-SF instances are spun up scalably and as required, and the NVE redirects requests to the ARP-SF to reduce the flooding.


When a VM is torn down, the VIM/controller 240 can delete the entry from the local ARP-SF instance or from any ARP-SF instance under the control of the VIM/controller 240. In some embodiments, an ARP-SF instance that has an entry deleted can alert other ARP-SF instances to delete entries.



FIG. 4 is a process flow diagram 400 for reducing address resolution protocol flooding in accordance with embodiments of the present disclosure. A virtual machine can send an ND/ARP request to an NVE. The NVE can receive the ND/ARP request from the VM (402). The NVE can encapsulate the ND/ARP request with a network service header (NSH) for an ARP-SF instance common to the L2 site of the VM (404). The NVE can forward the encapsulated ND/ARP request to the ARP-SF (406).


The ARP-SF can look up a destination address for the destination VM in a database stored with, associated with, or accessible by the ARP-SF (408). If the address is not in the table, the ARP-SF can generate a reply with a flag in the reply metadata indicating that there is no entry for the destination address for the destination VM (412). The ARP-SF can send the reply to the NVE. The NVE can then flood the ND/ARP request to other NVEs in the network.


If the address is in the ARP-SF table, the ARP-SF can determine whether the address is local to the site of the requesting VM (416). If the destination VM is local, then the ARP-SF can ignore the ND/ARP request because the local VM will respond to the requesting VM (418).


If the destination VM is not local, then the ARP-SF can generate an ARP reply (420). The ARP-SF can encapsulate the ARP reply with an NSH (along with the remote NVE associated with that entry in the NSH metadata). The ARP-SF can unicast the encapsulated ARP reply to the NVE that redirected the ND/ARP request (422).


On receiving the reply, the NVE can decapsulate the ARP reply and forward it to the requesting VM (424). The NVE can also update its forwarding table with the address in the ARP reply.
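
The ARP-SF-side branch points of FIG. 4 can be compressed into one function. This reuses the hypothetical ArpServiceFunction sketch from earlier; the parenthesized numbers refer to the steps above.

```python
# Decision logic mirroring steps 408-422: miss -> no-entry flag (412),
# local hit -> ignore (418), remote hit -> unicast NSH reply (420)-(422).
def resolve(sf: ArpServiceFunction, target_ip: str, requesting_site: str):
    binding = sf.table.get(target_ip)            # lookup (408)
    if binding is None:
        return ("reply_no_entry_flag", None)     # (412): NVE will flood
    if sf.site_of.get(target_ip) == requesting_site:
        return ("ignore", None)                  # (418): local VM answers itself
    return ("unicast_nsh_reply", binding)        # (420)-(422)
```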



FIGS. 5A-5D are schematic diagrams of a network topology 500 that includes a border gateway protocol table 510 in accordance with embodiments of the present disclosure. FIG. 5A is a schematic diagram illustrating an overview of the BGP scenario. In scenarios where the number of IP/MAC entries to be retained is not at massive scale, the ARP-SF could be eliminated if all IP/MAC address reachability is kept in a BGP table 510 associated with the NVE/PE routers. The IP/MAC addresses are not installed in the NVE forwarding tables (at first) and are not advertised further (so remote MAC addresses are not advertised locally).


In that case, the NVE 504 would receive the IP/MAC details from the controller 540 and populate them in BGP table 510 (or another local control plane table). VM1 506 would send the ARP to NVE/PE1 504, and NVE/PE1 504 would look in the border gateway protocol (BGP) table 510 for that destination IP. If the address exists in the BGP table 510, then the NVE 504 can use the incoming ARP request as a trigger to install the IP/MAC address in the forwarding table 508 of the NVE 504. The NVE 504 can send the ARP reply to VM1 506.


After not being used for a predetermined amount of time, the address information can be purged from the forwarding table, but not from the BGP table 510, unless the controller 540 updates the BGP table 510 due to VM tear-down/spin-up and/or VM migration.


In FIG. 5B, a controller 540 can update BGP tables 510a, 510b, and 510c. The controller 540 can push the local site MAC details to the NVE nodes' BGP tables.


In FIG. 5C, VM1 506 sends an ARP request message to the NVE 504. In FIG. 5D, the NVE checks the BGP table 572. NVE1 504 installs the BGP entry into the forwarding table and replies to VM1.



FIG. 6 is a process flow diagram 600 for reducing address resolution protocol flooding in accordance with embodiments of the present disclosure. The NVE can receive an ND/ARP request from a VM (602). The NVE can perform a lookup for a destination address for the destination VM in a BGP table (604). The BGP table can be populated with VM addresses and corresponding VM identification information by the VIM/controller. The NVE can create an ND/ARP reply with the destination address (606). The NVE can update its forwarding table with the address for the destination VM (608). The NVE can remove the destination address for the VM after the expiration of a predetermined time period (610).
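
A sketch of the FIG. 6 variant, with the BGP and forwarding tables as plain dictionaries and a simple idle-expiry timestamp; the table layouts and the expiry mechanism shown here are illustrative assumptions.

```python
import time

# (604): look up the controller-populated BGP table; (606)/(608): reply and
# install into the forwarding table only when traffic actually triggers it.
def resolve_via_bgp(bgp_table: dict, forwarding_table: dict, target_ip: str):
    entry = bgp_table.get(target_ip)
    if entry is None:
        return None  # not known: fall back to ordinary flooding
    forwarding_table[target_ip] = {**entry, "last_used": time.time()}
    return {"ip": target_ip, "mac": entry["mac"]}

# (610): purge idle forwarding-table entries; the BGP table is untouched.
def purge_idle(forwarding_table: dict, max_idle_s: float):
    now = time.time()
    for ip in [k for k, v in forwarding_table.items()
               if now - v["last_used"] > max_idle_s]:
        del forwarding_table[ip]
```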


Basics of Network Service Chaining or Service Function Chains in a Network


To accommodate agile networking and flexible provisioning of network nodes in the network, Service Function Chains (SFCs) can be used to ensure that an ordered set of Service Functions (SFs) is applied to packets and/or frames of a traffic flow. SFC provides a method for deploying SFs in a way that enables dynamic ordering and topological independence of those SFs. A service function chain can define an ordered set of service functions that is applied to packets and/or frames of a traffic flow, where the ordered set of service functions is selected as a result of classification. The implied order may not be a linear progression, as the architecture allows for nodes that copy to more than one branch. The term service chain is often used as shorthand for service function chain.



FIG. 7A illustrates a Service Function Chain (SFC), which may include an initial service classification function 702, as an entry point into a Service Function Path (SFP) 704 (or service path). The (initial) service classification function 702 prescribes a service path, and encapsulates a packet or frame with the service path information which identifies the service path. The classification potentially adds metadata, or shared context, to the SFC encapsulation part of the packet or frame. The service function path 704 may include a plurality of service functions (shown as “SF1”, . . . “SFN”).


A service function can be responsible for specific treatment of received packets. A service function can act at the network layer or other OSI layers (e.g., application layer, presentation layer, session layer, transport layer, data link layer, and physical link layer). A service function can be a virtual instance or be embedded in a physical network element such as a service node. When a service function or other module of a service node is executed by the at least one processor of the service node, the service function or other module can be configured to implement any one of the methods described herein. Multiple service functions can be embedded in the same network element. Multiple instances of the service function can be enabled in the same administrative SFC-enabled domain. A non-exhaustive list of SFs includes: firewalls, WAN and application acceleration, Deep Packet Inspection (DPI), server load balancers, NAT44, NAT64, HOST_ID injection, HTTP Header Enrichment functions, TCP optimizer, etc. An SF may be SFC-encapsulation aware (that is, it receives and acts on information in the SFC encapsulation) or unaware (in which case data forwarded to the service does not contain the SFC encapsulation).


A Service Node (SN) can be a physical network element (or a virtual element embedded on a physical network element) that hosts one or more service functions (SFs) and has one or more network locators associated with it for reachability and service delivery. In many standardization documents, "service functions" can refer to the service nodes described herein as having one or more service functions hosted thereon. A Service Function Path (SFP) (sometimes referred to simply as a service path) relates to the instantiation of an SFC in the network. Packets follow a service path from a classifier through the requisite service functions.
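
The chain-versus-path distinction can be pictured in a few lines of Python; the function names are purely illustrative, and the node identifiers are taken from FIGS. 7B-7C.

```python
# An SFC is an ordered set of service function types; an SFP binds that
# order to concrete service nodes, as selected by classification.
service_function_chain = ["firewall", "dpi", "load_balancer"]

service_function_path = [            # one instantiation, cf. FIG. 7C
    ("firewall", "service node 706"),
    ("dpi", "service node 708"),
    ("load_balancer", "service node 712"),
]

for sf, node in service_function_path:
    print(f"apply {sf} at {node}")   # packets visit each SF in order
```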



FIGS. 7B-7C illustrate different service paths realized using service function chaining. These service paths can be implemented by encapsulating packets of a traffic flow with a network service header (NSH) or some other suitable packet header which specifies a desired service path (e.g., by identifying a particular service path using service path information in the NSH). In the example shown in FIG. 7B, a service path 720 can be provided between endpoint 760 and endpoint 780 through service node 706 and service node 710. In the example shown in FIG. 7C, a service path 730 (a different instantiation) can be provided between endpoint 770 and endpoint 790 through service node 706, service node 708, and service node 712.


Network Service Header (NSH) Encapsulation


Generally speaking, an NSH includes service path information, and the NSH is added to a packet or frame. For instance, an NSH can include a data plane header added to packets or frames. Effectively, the NSH creates a service plane. The NSH includes information for service chaining, and in some cases, the NSH can include metadata added and/or consumed by service nodes or service functions. The packets and NSH are encapsulated in an outer header for transport. To implement a service path, a network element such as a service classifier (SCL) or some other suitable SFC-aware network element can process packets or frames of a traffic flow and perform NSH encapsulation according to a desired policy for the traffic flow.



FIG. 8 shows a system view of an SFC-aware network element, e.g., a (initial) service classifier (SCL), for prescribing a service path of a traffic flow, according to some embodiments of the disclosure. Network element 802 includes processor 804 and (computer-readable non-transitory) memory 806 for storing data and instructions. Furthermore, network element 802 includes service classification function 808 and service header encapsulator 810 (both can be provided by processor 804 when processor 804 executes the instructions stored in memory 806).


The service classification function 808 can process a packet of a traffic flow and determine whether the packet requires servicing and correspondingly which service path to follow to apply the appropriate service. The determination can be performed based on business policies and/or rules stored in memory 806. Once the determination of the service path is made, service header encapsulator 810 generates an appropriate NSH having identification information for the service path and adds the NSH to the packet. The service header encapsulator 810 provides an outer encapsulation to forward the packet to the start of the service path. Other SFC-aware network elements are thus able to process the NSH while other non-SFC-aware network elements would simply forward the encapsulated packets as is. Besides inserting an NSH, network element 802 can also remove the NSH if the service classification function 808 determines the packet does not require servicing.


Network Service Headers


A network service header (NSH) can include a (e.g., 64-bit) base header, and one or more context headers. Generally speaking, the base header provides information about the service header and service path identification (e.g., a service path identifier), and context headers can carry opaque metadata (such as the metadata described herein reflecting the result of classification). For instance, an NSH can include a 4-byte base header, a 4-byte service path header, and optional context headers. The base header can provide information about the service header and the payload protocol. The service path header can provide path identification and location within a path. The (variable length) context headers can carry opaque metadata and variable length encoded information. The one or more optional context headers make up a context header section in the NSH. For instance, the context header section can include one or more context header fields having pieces of information therein, describing the packet/frame. Based on the information in the base header, a service function of a service node can derive policy selection from the NSH. Context headers shared in the NSH can provide a range of service-relevant information such as traffic classification. Service functions can use NSH to select local service policy.
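
For instance, the 4-byte service path header described above (path identification plus location within the path) could be packed as a 24-bit service path identifier and an 8-bit service index. The split shown here is a sketch consistent with the description, not a normative wire format.

```python
import struct

def pack_service_path_header(service_path_id: int, service_index: int) -> bytes:
    """Pack a 24-bit path identifier and 8-bit index into 4 network-order bytes."""
    assert 0 <= service_path_id < (1 << 24) and 0 <= service_index < (1 << 8)
    return struct.pack("!I", (service_path_id << 8) | service_index)

def unpack_service_path_header(data: bytes):
    (word,) = struct.unpack("!I", data)
    return word >> 8, word & 0xFF   # (service_path_id, service_index)
```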


Service Nodes and Proxy Nodes


Once properly encapsulated, the packet having the NSH is then forwarded to one or more service nodes where service(s) can be applied to the packet/frame. FIG. 9 shows a system view of a service node, according to some embodiments of the disclosure. Service node 900, generally a network element, can include processor 902 and (computer-readable non-transitory) memory 904 for storing data and instructions. Furthermore, service node 900 includes service function(s) 906 (e.g., for applying service(s) to the packet/frame, or classifying the packet/frame) and service header processor 908. The service function(s) 906 and service header processor 908 can be provided by processor 902 when processor 902 executes the instructions stored in memory 904. Service header processor 908 can extract the NSH and, in some cases, update the NSH as needed. For instance, the service header processor 908 can decrement the service index if a service index of 0 is used to indicate that a packet is to be dropped by the service node 900. In another instance, the service header processor 908 or some other suitable module provided by the service node can update context header fields if new/updated context is available.
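
Using the pack/unpack helpers from the previous sketch, the service-index bookkeeping just described might look like this; the drop-at-zero behavior follows the example in the text.

```python
# Hypothetical per-node NSH processing: decrement the service index after
# applying the service, and drop the packet when the index reaches 0.
def process_at_service_node(path_header: bytes):
    spi, si = unpack_service_path_header(path_header)
    si -= 1
    if si == 0:
        return None  # a service index of 0 signals the packet is to be dropped
    return pack_service_path_header(spi, si)
```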


Within the context of the application, “metadata” refers to one or more pieces of information (e.g., bits of data, encoded values) in a context header section of a network service header. Metadata can refer to contents of the entire context header section, which can include the contents of one or more context header fields describing various attributes of the packet/frame. Metadata can also refer to contents of one individual context header field or a subset of context header fields in the context header section.


Moreover, the terms "first service node" and "second service node" do not necessarily imply that the "first service node" and the "second service node" are the first and second service nodes at the beginning of the service path that the packet/frame reaches as the packet/frame traverses the service path. For instance, the first service node can be any suitable one of the service nodes among many service nodes in the service path (e.g., the third one the packet/frame reaches as it traverses the service path, the fourth one, the fifth one, etc.). The second service node can be any suitable one of the service node(s) subsequent to the first service node downstream in the service path.


Within the context of the disclosure, a network used herein represents a series of points, nodes, or network elements of interconnected communication paths for receiving and transmitting packets of information that propagate through a communication system. A network offers communicative interface between sources and/or hosts, and may be any local area network (LAN), wireless local area network (WLAN), metropolitan area network (MAN), Intranet, Extranet, Internet, WAN, virtual private network (VPN), or any other appropriate architecture or system that facilitates communications in a network environment depending on the network topology. A network can comprise any number of hardware or software elements coupled to (and in communication with) each other through a communications medium.


In one particular instance, the architecture of the present disclosure can be associated with a service provider deployment. In other examples, the architecture of the present disclosure would be equally applicable to other communication environments, such as an enterprise wide area network (WAN) deployment. The architecture of the present disclosure may include a configuration capable of transmission control protocol/internet protocol (TCP/IP) communications for the transmission and/or reception of packets in a network.


As used herein in this Specification, the term ‘network element’ is meant to encompass any of the aforementioned elements, as well as servers (physical or virtually implemented on physical hardware), machines (physical or virtually implemented on physical hardware), end user devices, routers, switches, cable boxes, gateways, bridges, loadbalancers, firewalls, inline service nodes, proxies, processors, modules, or any other suitable device, component, element, proprietary appliance, or object operable to exchange, receive, and transmit information in a network environment. These network elements may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate the network service header features/operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information.


In one implementation, nodes with NSH capabilities may include software to achieve (or to foster) the functions discussed herein for providing the NSH-related features/functions where the software is executed on one or more processors to carry out the functions. This could include the implementation of instances of service functions, service header processors, metadata augmentation modules and/or any other suitable element that would foster the activities discussed herein. Additionally, each of these elements can have an internal structure (e.g., a processor, a memory element, etc.) to facilitate some of the operations described herein. In other embodiments, these functions may be executed externally to these elements, or included in some other network element to achieve the intended functionality. Alternatively, these nodes may include software (or reciprocating software) that can coordinate with other network elements in order to achieve the functions described herein. In still other embodiments, one or several devices may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof.


In certain example implementations, the NSH-related functions outlined herein may be implemented by logic encoded in one or more non-transitory, tangible media (e.g., embedded logic provided in an application specific integrated circuit [ASIC], digital signal processor [DSP] instructions, software [potentially inclusive of object code and source code] to be executed by one or more processors, or other similar machine, etc.). In some of these instances, one or more memory elements can store data used for the operations described herein. This includes the memory element being able to store instructions (e.g., software, code, etc.) that are executed to carry out the activities described in this Specification. The memory element is further configured to store databases or metadata disclosed herein. The processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, the processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by the processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array [FPGA], an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)) or an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.


Any of these elements (e.g., the network elements, service nodes, etc.) can include memory elements for storing information to be used in achieving the NSH-related features, as outlined herein. Additionally, each of these devices may include a processor that can execute software or an algorithm to perform the NSH-related features as discussed in this Specification. These devices may further keep information in any suitable memory element [random access memory (RAM), ROM, EPROM, EEPROM, ASIC, etc.], software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’ Each of the network elements can also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment.


Additionally, it should be noted that with the examples provided above, interaction may be described in terms of two, three, or four network elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that the systems described herein are readily scalable and, further, can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad techniques of using and augmenting NSH metadata, as potentially applied to a myriad of other architectures.


It is also important to note that the various steps described herein illustrate only some of the possible scenarios that may be executed by, or within, the nodes with NSH capabilities described herein. Some of these steps may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by nodes with NSH capabilities in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.


It should also be noted that many of the previous discussions may imply a single client-server relationship. In reality, there is a multitude of servers in the delivery tier in certain implementations of the present disclosure. Moreover, the present disclosure can readily be extended to apply to intervening servers further upstream in the architecture, though this is not necessarily correlated to the ‘m’ clients that are passing through the ‘n’ servers. Any such permutations, scaling, and configurations are clearly within the broad scope of the present disclosure.


Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.

Claims
  • 1. One or more computer readable, non-transitory storage media encoded with software comprising computer executable instructions and, when the software is executed, operable to: receive an address resolution protocol (ARP) request message from a requesting virtual machine, the ARP request message comprising a request for a destination address for a destination virtual machine, the destination address comprising a destination hardware address and a destination media access control address; augment the ARP request message with a network service header (NSH), the NSH identifying an ARP service function; forward the ARP request message with the NSH to the ARP service function; when the destination address for the destination virtual machine is not present in an ARP service function database, receive an ARP reply message from the ARP service function with a flag indicating no entry; when the destination address for the destination virtual machine is present in the ARP service function database and the destination address is not local, receive the ARP reply message from the ARP service function with a flag indicating the destination address; and determine, from the ARP reply message, whether the destination address for the destination virtual machine is present in the ARP service function database based on the flag indicating no entry or the flag indicating the destination address.
  • 2. The one or more computer readable, non-transitory storage media of claim 1, wherein the software is further operable to: forward the ARP reply message to the requesting virtual machine if it is determined that the destination virtual machine is present in the ARP service function database, the ARP reply message including the destination address for the destination virtual machine.
  • 3. The one or more computer readable, non-transitory storage media of claim 2, wherein the software is further operable to: update a forwarding table with the destination address for the destination virtual machine if it is determined that the destination virtual machine is present in the ARP service function database.
  • 4. The one or more computer readable, non-transitory storage media of claim 1, wherein the software is further operable to: transmit the ARP request message to one or more network elements in a network if it is determined that the destination address for the destination virtual machine is not present in the ARP service function database.
  • 5. The one or more computer readable, non-transitory storage media of claim 1, wherein the flag indicating no entry is an indication in metadata and the flag indicating the destination address is another indication in metadata.
  • 6. A network element for performing address resolution, the network element comprising: at least one memory element having instructions stored thereon; a service classifier node; and at least one processor coupled to the at least one memory element and configured to execute the instructions to cause the service classifier node to: receive an address resolution protocol (ARP) request message from a requesting virtual machine, the ARP request message comprising a request for a destination address for a destination virtual machine, the destination address comprising a destination hardware address and a destination media access control address; augment the ARP request message with a network service header (NSH), the NSH identifying an ARP service function; forward the ARP request message with the NSH to the ARP service function; when the destination address for the destination virtual machine is not present in an ARP service function database, receive an ARP reply message from the ARP service function with a flag indicating no entry; when the destination address for the destination virtual machine is present in the ARP service function database and the destination address is not local, receive the ARP reply message from the ARP service function with a flag indicating the destination address; and determine, from the ARP reply message, whether the destination address for the destination virtual machine is present in the ARP service function database based on the flag indicating no entry or the flag indicating the destination address.
  • 7. The network element of claim 6, wherein the at least one processor is configured to cause the network element to: forward the ARP reply message to the requesting virtual machine if it is determined that the destination virtual machine is present in the ARP service function database, the ARP reply message including the destination address for the destination virtual machine.
  • 8. The network element of claim 7, wherein the at least one processor is configured to cause the network element to: update a forwarding table with the destination address for the destination virtual machine if it is determined that the destination virtual machine is present in the ARP service function database.
  • 9. The network element of claim 7, wherein the at least one processor is configured to cause the network element to: transmit the ARP request message to one or more network elements in a network if it is determined that the destination address for the destination virtual machine is not present in the ARP service function database.
20120102193 Rathore et al. Apr 2012 A1
20120102199 Hopmann et al. Apr 2012 A1
20120131174 Ferris et al. May 2012 A1
20120137215 Kawara May 2012 A1
20120158967 Sedayao et al. Jun 2012 A1
20120159097 Jennas, II et al. Jun 2012 A1
20120166649 Watanabe et al. Jun 2012 A1
20120167094 Suit Jun 2012 A1
20120173541 Venkataramani Jul 2012 A1
20120173710 Rodriguez Jul 2012 A1
20120179909 Sagi et al. Jul 2012 A1
20120180044 Donnellan et al. Jul 2012 A1
20120182891 Lee et al. Jul 2012 A1
20120185632 Lais et al. Jul 2012 A1
20120185913 Martinez et al. Jul 2012 A1
20120192016 Gotesdyner et al. Jul 2012 A1
20120192075 Ebtekar et al. Jul 2012 A1
20120201135 Ding et al. Aug 2012 A1
20120203908 Beaty et al. Aug 2012 A1
20120204169 Breiter et al. Aug 2012 A1
20120204187 Breiter et al. Aug 2012 A1
20120214506 Skaaksrud et al. Aug 2012 A1
20120222106 Kuehl Aug 2012 A1
20120236716 Anbazhagan et al. Sep 2012 A1
20120240113 Hur Sep 2012 A1
20120265976 Spiers et al. Oct 2012 A1
20120272025 Park et al. Oct 2012 A1
20120281706 Agarwal et al. Nov 2012 A1
20120281708 Chauhan et al. Nov 2012 A1
20120290647 Ellison et al. Nov 2012 A1
20120297238 Watson et al. Nov 2012 A1
20120311106 Morgan Dec 2012 A1
20120311568 Jansen Dec 2012 A1
20120324092 Brown et al. Dec 2012 A1
20120324114 Dutta et al. Dec 2012 A1
20130003567 Gallant et al. Jan 2013 A1
20130013248 Brugler et al. Jan 2013 A1
20130036213 Hasan et al. Feb 2013 A1
20130044636 Koponen et al. Feb 2013 A1
20130066940 Shao Mar 2013 A1
20130069950 Adam et al. Mar 2013 A1
20130080509 Wang Mar 2013 A1
20130080624 Nagai et al. Mar 2013 A1
20130091557 Gurrapu Apr 2013 A1
20130097601 Podvratnik et al. Apr 2013 A1
20130104140 Meng et al. Apr 2013 A1
20130111540 Sabin May 2013 A1
20130117337 Dunham May 2013 A1
20130124712 Parker May 2013 A1
20130125124 Kempf et al. May 2013 A1
20130138816 Kuo et al. May 2013 A1
20130144978 Jain et al. Jun 2013 A1
20130152076 Patel Jun 2013 A1
20130152175 Hromoko et al. Jun 2013 A1
20130159097 Schory et al. Jun 2013 A1
20130159496 Hamilton et al. Jun 2013 A1
20130160008 Cawlfield et al. Jun 2013 A1
20130162753 Hendrickson et al. Jun 2013 A1
20130169666 Pacheco et al. Jul 2013 A1
20130179941 McGloin et al. Jul 2013 A1
20130182712 Aguayo et al. Jul 2013 A1
20130185413 Beaty et al. Jul 2013 A1
20130185433 Zhu et al. Jul 2013 A1
20130191106 Kephart et al. Jul 2013 A1
20130198050 Shroff et al. Aug 2013 A1
20130198374 Zalmanovitch et al. Aug 2013 A1
20130204849 Chacko Aug 2013 A1
20130232491 Radhakrishnan et al. Sep 2013 A1
20130232492 Wang Sep 2013 A1
20130246588 Borowicz et al. Sep 2013 A1
20130250770 Zou et al. Sep 2013 A1
20130254415 Fullen et al. Sep 2013 A1
20130262347 Dodson Oct 2013 A1
20130283364 Chang et al. Oct 2013 A1
20130297769 Chang et al. Nov 2013 A1
20130315246 Zhang Nov 2013 A1
20130318240 Hebert et al. Nov 2013 A1
20130318546 Kothuri et al. Nov 2013 A1
20130339949 Spiers et al. Dec 2013 A1
20140006481 Frey et al. Jan 2014 A1
20140006535 Reddy Jan 2014 A1
20140006585 Dunbar et al. Jan 2014 A1
20140019639 Ueno Jan 2014 A1
20140040473 Ho et al. Feb 2014 A1
20140040883 Tompkins Feb 2014 A1
20140052877 Mao Feb 2014 A1
20140059310 Du et al. Feb 2014 A1
20140074850 Noel et al. Mar 2014 A1
20140075048 Yuksel et al. Mar 2014 A1
20140075108 Dong et al. Mar 2014 A1
20140075357 Flores et al. Mar 2014 A1
20140075501 Srinivasan et al. Mar 2014 A1
20140089727 Cherkasova et al. Mar 2014 A1
20140098762 Ghai et al. Apr 2014 A1
20140108985 Scott et al. Apr 2014 A1
20140122560 Ramey et al. May 2014 A1
20140136779 Guha et al. May 2014 A1
20140140211 Chandrasekaran et al. May 2014 A1
20140141720 Princen et al. May 2014 A1
20140156557 Zeng et al. Jun 2014 A1
20140160924 Pfautz et al. Jun 2014 A1
20140164486 Ravichandran et al. Jun 2014 A1
20140188825 Muthukkaruppan et al. Jul 2014 A1
20140189095 Lindberg et al. Jul 2014 A1
20140189125 Amies et al. Jul 2014 A1
20140215471 Cherkasova Jul 2014 A1
20140222953 Karve et al. Aug 2014 A1
20140244851 Lee Aug 2014 A1
20140245298 Zhou et al. Aug 2014 A1
20140269266 Filsfils et al. Sep 2014 A1
20140280805 Sawalha Sep 2014 A1
20140282536 Dave et al. Sep 2014 A1
20140282611 Campbell et al. Sep 2014 A1
20140282669 McMillan Sep 2014 A1
20140282889 Ishaya et al. Sep 2014 A1
20140289200 Kato Sep 2014 A1
20140297569 Clark et al. Oct 2014 A1
20140297835 Buys Oct 2014 A1
20140314078 Jilani Oct 2014 A1
20140317261 Shatzkamer et al. Oct 2014 A1
20140366155 Chang et al. Dec 2014 A1
20140372567 Ganesh et al. Dec 2014 A1
20150006470 Mohan Jan 2015 A1
20150033086 Sasturkar et al. Jan 2015 A1
20150043335 Testicioglu et al. Feb 2015 A1
20150043576 Dixon et al. Feb 2015 A1
20150052247 Threefoot et al. Feb 2015 A1
20150052517 Raghu et al. Feb 2015 A1
20150058382 St Laurent et al. Feb 2015 A1
20150058459 Amendjian et al. Feb 2015 A1
20150058557 Madhusudana et al. Feb 2015 A1
20150070516 Shoemake et al. Mar 2015 A1
20150071285 Kumar et al. Mar 2015 A1
20150071289 Shin Mar 2015 A1
20150089478 Cheluvaraju et al. Mar 2015 A1
20150100471 Curry, Jr. et al. Apr 2015 A1
20150106802 Ivanov et al. Apr 2015 A1
20150106805 Melander et al. Apr 2015 A1
20150109923 Hwang Apr 2015 A1
20150117199 Chinnaiah Sankaran et al. Apr 2015 A1
20150117458 Gurkan et al. Apr 2015 A1
20150120914 Wada et al. Apr 2015 A1
20150149828 Mukerji et al. May 2015 A1
20150178133 Phelan et al. Jun 2015 A1
20150215819 Bosch et al. Jul 2015 A1
20150227405 Jan et al. Aug 2015 A1
20150242204 Hassine et al. Aug 2015 A1
20150249709 Teng et al. Sep 2015 A1
20150271199 Bradley et al. Sep 2015 A1
20150280980 Bitar Oct 2015 A1
20150281067 Wu Oct 2015 A1
20150281113 Siciliano et al. Oct 2015 A1
20150309908 Pearson et al. Oct 2015 A1
20150319063 Zourzouvillys et al. Nov 2015 A1
20150326524 Tankala et al. Nov 2015 A1
20150339210 Kopp et al. Nov 2015 A1
20150373108 Fleming et al. Dec 2015 A1
20150379062 Vermeulen et al. Dec 2015 A1
20160011925 Kulkarni et al. Jan 2016 A1
20160013990 Kulkarni et al. Jan 2016 A1
20160062786 Meng et al. Mar 2016 A1
20160065417 Sapuram et al. Mar 2016 A1
20160094398 Choudhury et al. Mar 2016 A1
20160094480 Kulkarni et al. Mar 2016 A1
20160094643 Jain et al. Mar 2016 A1
20160094894 Inayatullah et al. Mar 2016 A1
20160099847 Melander et al. Apr 2016 A1
20160099873 Gerö et al. Apr 2016 A1
20160103838 Sainani et al. Apr 2016 A1
20160105393 Thakkar et al. Apr 2016 A1
20160127184 Bursell May 2016 A1
20160134557 Steinder et al. May 2016 A1
20160147676 Cha et al. May 2016 A1
20160162436 Raghavan et al. Jun 2016 A1
20160164914 Madhav et al. Jun 2016 A1
20160188527 Cherian et al. Jun 2016 A1
20160234071 Nambiar et al. Aug 2016 A1
20160239399 Babu et al. Aug 2016 A1
20160253078 Ebtekar et al. Sep 2016 A1
20160254968 Ebtekar et al. Sep 2016 A1
20160261564 Foxhoven et al. Sep 2016 A1
20160277368 Narayanaswamy et al. Sep 2016 A1
20160292611 Boe et al. Oct 2016 A1
20160352682 Chang Dec 2016 A1
20160378389 Hrischuk et al. Dec 2016 A1
20170005948 Melander et al. Jan 2017 A1
20170024260 Chandrasekaran et al. Jan 2017 A1
20170026470 Bhargava et al. Jan 2017 A1
20170034199 Zaw Feb 2017 A1
20170041342 Efremov et al. Feb 2017 A1
20170054659 Ergin et al. Feb 2017 A1
20170063674 Maskalik et al. Mar 2017 A1
20170097841 Chang et al. Apr 2017 A1
20170099188 Chang et al. Apr 2017 A1
20170104755 Arregoces et al. Apr 2017 A1
20170118166 Du Apr 2017 A1
20170126583 Xia May 2017 A1
20170126615 Chanda May 2017 A1
20170147297 Krishnamurthy et al. May 2017 A1
20170163569 Koganti Jun 2017 A1
20170171158 Hoy et al. Jun 2017 A1
20170192823 Karaje et al. Jul 2017 A1
20170264663 Bicket et al. Sep 2017 A1
20170302521 Lui et al. Oct 2017 A1
20170310556 Knowles et al. Oct 2017 A1
20170317932 Paramasivam Nov 2017 A1
20170339070 Chang et al. Nov 2017 A1
20180069885 Patterson et al. Mar 2018 A1
20180173372 Greenspan et al. Jun 2018 A1
20180174060 Velez-Rojas et al. Jun 2018 A1
20190158997 Starsinic May 2019 A1
Foreign Referenced Citations (13)
Number Date Country
101719930 Jun 2010 CN
101394360 Jul 2011 CN
102164091 Aug 2011 CN
104320342 Jan 2015 CN
105740084 Jul 2016 CN
2228719 Sep 2010 EP
2439637 Apr 2012 EP
2645253 Nov 2014 EP
10-2015-0070676 May 2015 KR
M394537 Dec 2010 TW
WO 2009155574 Dec 2009 WO
WO 2010030915 Mar 2010 WO
WO 2013158707 Oct 2013 WO
Non-Patent Literature Citations (65)
U.S. Appl. No. 15/236,447, filed Aug. 14, 2016, entitled “Reducing ARP/ND Flooding in Cloud Environment,” Inventors: Nagendra Kumar Nainar, et al.
Fang, K., "LISP MAC-EID-TO-RLOC Mapping (LISP based L2VPN)," Network Working Group Internet Draft, draft-zhiyfang-lisp-mac-eid-00, Jan. 2012; 12 pages.
Rabadan, J., et al., "Operational Aspects of Proxy-ARP/ND in EVPN Networks," BESS Workgroup Internet Draft, draft-snr-bess-evpn-proxy-arp-nd-02, Oct. 6, 2015; 22 pages.
Amedro, Brian, et al., “An Efficient Framework for Running Applications on Clusters, Grids and Cloud,” 2010, 17 pages.
Author Unknown, “A Look at DeltaCloud: The Multi-Cloud API,” Feb. 17, 2012, 4 pages.
Author Unknown, “About Deltacloud,” Apache Software Foundation, Aug. 18, 2013, 1 page.
Author Unknown, “Architecture for Managing Clouds, A White Paper from the Open Cloud Standards Incubator,” Version 1.0.0, Document No. DSP-IS0102, Jun. 18, 2010, 57 pages.
Author Unknown, “Cloud Infrastructure Management Interface—Common Information Model (CIMI-CIM),” Document No. DSP0264, Version 1.0.0, Dec. 14, 2012, 21 pages.
Author Unknown, “Cloud Infrastructure Management Interface (CIMI) Primer,” Document No. DSP2027, Version 1.0.1, Sep. 12, 2012, 30 pages.
Author Unknown, “cloudControl Documentation,” Aug. 25, 2013, 14 pages.
Author Unknown, “Interoperable Clouds, A White Paper from the Open Cloud Standards Incubator,” Version 1.0.0, Document No. DSP-IS0101, Nov. 11, 2009, 21 pages.
Author Unknown, “Microsoft Cloud Edge Gateway (MCE) Series Appliance,” Iron Networks, Inc., 2014, 4 pages.
Author Unknown, “Use Cases and Interactions for Managing Clouds, A White Paper from the Open Cloud Standards Incubator,” Version 1.0.0, Document No. DSP-IS00103, Jun. 16, 2010, 75 pages.
Author Unknown, “Apache Ambari Meetup What's New,” Hortonworks Inc., Sep. 2013, 28 pages.
Author Unknown, “Introduction,” Apache Ambari project, Apache Software Foundation, 2014, 1 page.
Citrix, “Citrix StoreFront 2.0” White Paper, Proof of Concept Implementation Guide, Citrix Systems, Inc., 2013, 48 pages.
Citrix, “CloudBridge for Microsoft Azure Deployment Guide,” 30 pages.
Citrix, “Deployment Practices and Guidelines for NetScaler 10.5 on Amazon Web Services,” White Paper, citrix.com, 2014, 14 pages.
Gedymin, Adam, “Cloud Computing with an emphasis on Google App Engine,” Sep. 2011, 146 pages.
Good, Nathan A., “Use Apache Deltacloud to administer multiple instances with a single API,” Dec. 17, 2012, 7 pages.
Kunz, Thomas, et al., “OmniCloud—The Secure and Flexible Use of Cloud Storage Services,” 2014, 30 pages.
Logan, Marcus, “Hybrid Cloud Application Architecture for Elastic Java-Based Web Applications,” F5 Deployment Guide Version 1.1, 2016, 65 pages.
Lynch, Sean, "Monitoring cache with Claspin," Facebook Engineering, Sep. 19, 2012, 5 pages.
Meireles, Fernando Miguel Dias, “Integrated Management of Cloud Computing Resources,” 2013-2014, 286 pages.
Mu, Shuai, et al., “uLibCloud: Providing High Available and Uniform Accessing to Multiple Cloud Storages,” 2012 IEEE, 8 pages.
Sun, Aobing, et al., “IaaS Public Cloud Computing Platform Scheduling Model and Optimization Analysis,” Int. J. Communications, Network and System Sciences, 2011, 4, 803-811, 9 pages.
Szymaniak, Michal, et al., "Latency-Driven Replica Placement", vol. 47, No. 8, IPSJ Journal, Aug. 2006, 12 pages.
Toews, Everett, “Introduction to Apache jclouds,” Apr. 7, 2014, 23 pages.
Von Laszewski, Gregor, et al., “Design of a Dynamic Provisioning System for a Federated Cloud and Bare-metal Environment,” 2012, 8 pages.
Ye, Xianglong, et al., "A Novel Blocks Placement Strategy for Hadoop," 2012 IEEE/ACIS 11th International Conference on Computer and Information Science, 2012 IEEE, 5 pages.
Author Unknown, “5 Benefits of a Storage Gateway in the Cloud,” Blog, TwinStrata, Inc., Jul. 25, 2012, XP055141645, 4 pages, https://web.archive.org/web/20120725092619/http://blog.twinstrata.com/2012/07/10//5-benefits-of-a-storage-gateway-in-the-cloud.
Author Unknown, “Joint Cisco and VMWare Solution for Optimizing Virtual Desktop Delivery: Data Center 3.0: Solutions to Accelerate Data Center Virtualization,” Cisco Systems, Inc. and VMware, Inc., Sep. 2008, 10 pages.
Author Unknown, “Open Data Center Alliance Usage: Virtual Machine (VM) Interoperability in a Hybrid Cloud Environment Rev. 1.2,” Open Data Center Alliance, Inc., 2013, 18 pages.
Author Unknown, “Real-Time Performance Monitoring on Juniper Networks Devices, Tips and Tools for Assessing and Analyzing Network Efficiency,” Juniper Networks, Inc., May 2010, 35 pages.
Beyer, Steffen, "Module "Data::Locations"?!," YAPC::Europe, London, UK, ICA, Sep. 22-24, 2000, XP002742700, 15 pages.
Borovick, Lucinda, et al., “Architecting the Network for the Cloud,” IDC White Paper, Jan. 2011, 8 pages.
Bosch, Greg, “Virtualization,” last modified Apr. 2012 by B. Davison, 33 pages.
Broadcasters Audience Research Board, "What's Next," http://www.barb.co.uk/whats-next, accessed Jul. 22, 2015, 2 pages.
Cisco Systems, Inc. “Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers,” Cisco White Paper, Apr. 2011, 36 pages, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.pdf.
Cisco Systems, Inc., “Cisco Unified Network Services: Overcome Obstacles to Cloud-Ready Deployments,” Cisco White Paper, Jan. 2011, 6 pages.
Cisco Systems, Inc., “Cisco Intercloud Fabric: Hybrid Cloud with Choice, Consistency, Control and Compliance,” Dec. 10, 2014, 22 pages.
Cisco Technology, Inc., “Cisco Expands Videoscape TV Platform into the Cloud,” Jan. 6, 2014, Las Vegas, Nevada, Press Release, 3 pages.
CSS Corp, "Enterprise Cloud Gateway (ECG)—Policy driven framework for managing multi-cloud environments," originally published on or about Feb. 11, 2012; 1 page; http://www.css-cloud.com/platform/enterprise-cloud-gateway.php.
Herry, William, “Keep it Simple, Stupid: OpenStack nova-scheduler and its algorithm”, May 12, 2012, IBM, 12 pages.
Hewlett-Packard Company, "Virtual context management on network devices", Research Disclosure, vol. 564, No. 60, Mason Publications, Hampshire, GB, Apr. 1, 2011, 524.
Juniper Networks, Inc., “Recreating Real Application Traffic in Junosphere Lab,” Solution Brief, Dec. 2011, 3 pages.
Kenhui, "Musings on Cloud Computing and IT-as-a-Service: [Updated for Havana] OpenStack Compute for vSphere Admins, Part 2: Nova-Scheduler and DRS", Jun. 26, 2013, Cloud Architect Musings, 12 pages.
Kolyshkin, Kirill, “Virtualization in Linux,” Sep. 1, 2006, XP055141648, 5 pages, https://web.archive.org/web/20070120205111/http://download.openvz.org/doc/openvz-intro.pdf.
Lerach, S.R.O., “Golem,” http://www.lerach.cz/en/products/golem, accessed Jul. 22, 2015, 2 pages.
Linthicum, David, “VM Import could be a game changer for hybrid clouds”, InfoWorld, Dec. 23, 2010, 4 pages.
Naik, Vijay K., et al., "Harmony: A Desktop Grid for Delivering Enterprise Computations," Proceedings of the Fourth International Workshop on Grid Computing, Nov. 17, 2003, pp. 1-11.
Nair, Srijith K. et al., “Towards Secure Cloud Bursting, Brokerage and Aggregation,” 2012, 8 pages, www.flexiant.com.
Nielsen, “SimMetry Audience Measurement—Technology,” http://www.nielsen-admosphere.eu/products-and-services/simmetry-audience-measurement-technology/, accessed Jul. 22, 2015, 6 pages.
Nielsen, "Television," http://www.nielsen.com/us/en/solutions/measurement/television.html, accessed Jul. 22, 2015, 4 pages.
OpenStack, "Filter Scheduler," updated Dec. 17, 2017, 5 pages, accessed on Dec. 18, 2017, https://docs.openstack.org/nova/latest/user/filter-scheduler.html.
Saidi, Ali, et al., “Performance Validation of Network-Intensive Workloads on a Full-System Simulator,” Interaction between Operating System and Computer Architecture Workshop, (IOSCA 2005), Austin, Texas, Oct. 2005, 10 pages.
Shunra, “Shunra for HP Software; Enabling Confidence in Application Performance Before Deployment,” 2010, 2 pages.
Son, Jungmin, “Automatic decision system for efficient resource selection and allocation in inter-clouds,” Jun. 2013, 35 pages.
Wikipedia, "Filter (software)", Wikipedia, Feb. 8, 2014, 2 pages, https://en.wikipedia.org/w/index.php?title=Filter_%28software%29&oldid=594544359.
Wikipedia, "Pipeline (Unix)", Wikipedia, May 4, 2014, 4 pages, https://en.wikipedia.org/w/index.php?title=Pipeline_%28Unix%29&oldid=606980114.
Extended European Search Report from the European Patent Office for the corresponding European Patent Application No. EP17180216.8, dated Nov. 2, 2017, 14 pages.
Al-Harbi, S.H., et al., “Adapting k-means for supervised clustering,” Jun. 2006, Applied Intelligence, vol. 24, Issue 3, pp. 219-226.
Bohner, Shawn A., “Extending Software Change Impact Analysis into COTS Components,” 2003, IEEE, 8 pages.
Hood, C. S., et al., "Automated Proactive Anomaly Detection," 1997, Springer Science and Business Media Dordrecht, pp. 688-699.
Vilalta, R., et al., "An efficient approach to external cluster assessment with an application to martian topography," Feb. 2007, 23 pages, Data Mining and Knowledge Discovery 14.1: 1-23. New York: Springer Science & Business Media.
Related Publications (1)
Number Date Country
20180013611 A1 Jan 2018 US