This disclosure relates to computer networking apparatuses and to methods and apparatuses for forwarding data on computer networks.
Modern data networks typically handle a tremendous and ever-increasing quantity of data transmission, and thus it is beneficial to implement techniques and specialized hardware which may reduce the amount of extraneous and/or unnecessary traffic flow in modern network architectures. However, despite the need for efficiency, current network architectures oftentimes employ various procedures which are far from optimal.
One such operation frequently used in traditional Layer 3 networks is the so-called “address resolution protocol” or ARP. ‘ARP-ing’ is typically employed in both the bridging and routing context to facilitate communication between hosts as follows:
Generally, the process of initiating communication between source and destination hosts begins with the source host determining the IP address of the intended destination host through, for example, a ‘domain name service’ (DNS) hosted on a network-accessible server. Once the correct IP address is identified, a source host operating in a traditional Layer 3 network will decide whether a ‘bridging’ or ‘routing’ procedure will be used for forwarding packets to the destination host by assessing whether or not the destination host is located on the source host's own subnet (for example, by applying a subnet mask (255.255.255.0) to its own and the destination host's IP addresses and comparing the results).
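For illustration only, the same-subnet determination just described can be sketched in a few lines of Python using the standard ipaddress module; the specific addresses and the /24 mask below are assumptions chosen purely for the example.

```python
import ipaddress

def same_subnet(src_ip: str, dst_ip: str, mask: str = "255.255.255.0") -> bool:
    """Return True if both hosts fall on the same subnet under the given mask."""
    src_net = ipaddress.ip_network(f"{src_ip}/{mask}", strict=False)
    dst_net = ipaddress.ip_network(f"{dst_ip}/{mask}", strict=False)
    return src_net == dst_net

# A source host deciding whether to bridge or route (addresses are made up).
if same_subnet("192.168.1.10", "192.168.1.42"):
    print("same subnet -> bridge (ARP directly for the destination host)")
else:
    print("different subnet -> route (ARP for the default gateway instead)")
```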
If source and destination hosts are located on the same subnet and packets are to be ‘bridged’ between hosts, the source host will employ ARP to determine the MAC address of the destination host, which is needed to label the IP packets for forwarding. To determine the MAC address via ARP, the source host sends an ARP packet out onto its local subnet. The ARP packet is a Layer 2 broadcast packet. The relevant fields of a broadcast ARP packet are schematically illustrated in
Again, this packet-forwarding procedure is known in the art as ‘bridging’ and works for packet-forwarding between source and destination hosts located on the same subnet. Note that in bridging, the source host was able to identify the Layer 2 MAC address of the destination host without the use of a router-type network device. Further note that once the source host learns the correct MAC address of the destination host, packets transmitted by the source arrive at the destination without intervening modification.
As stated above, if the source host determines that it is not connected on the same subnet as the destination host, a packet forwarding procedure known in the art as ‘routing’ is employed to forward packets instead of the ‘bridging’ procedure just described. Unlike bridging, routing does involve the use of a router (as its name implies), and furthermore, unlike bridging, does result in the modification of the original packet.
In a conventional routing procedure, since the source host has determined that the intended destination host is not connected on its local subnet, the source host forwards packets by setting their Layer 3 destination address field to the intended destination host's IP address, but setting their Layer 2 destination address field to that of the router's MAC address. If the source host doesn't know the router's MAC address, it first ‘ARPs’ for it by sending out a broadcast ARP request packet with Layer 3 destination address field set to the router's IP address. The router then responds with an ARP reply packet carrying the router's MAC address in essentially the same manner described above with respect to local hosts. As indicated, once the router's MAC address is known to the source host, the source host may begin forwarding packets to the destination host by labeling them with the destination host's IP address and the router's MAC address.
When the router receives packets labeled with the router's Layer 2 MAC address, but another host's Layer 3 IP address, the router consults its routing table to forward the packets. If the routing table indicates that the destination IP address is on another directly attached subnet, the router will consult an ARP table to check whether it has the MAC address of the host corresponding to the destination IP address. If it finds the MAC address, the router rewrites the packet's Layer 2 destination address field with this MAC address and forwards the packet to the destination host. If the router does not find the destination host's MAC address in its ARP table, the router ARPs for the destination host's MAC address before rewriting the packet's Layer 2 destination address field and forwarding the packet.
However, when the router receives a packet with its Layer 2 destination field set to its own MAC address, but with its Layer 3 destination field set to an IP address which, according to its routing table, is not in a directly attached subnet, the router determines if the destination host is accessible through another router. If so, the first router forwards the packet to the second router, rewriting the packet's Layer 2 destination address with this second router's MAC address. (If the first router doesn't know the second router's MAC address, it ARPs for it, in the same manner as the original source host used ARP to determine the first router's MAC address.) This process may repeat—and the packet may thus hop from router to router—until it arrives at a router having the intended destination host connected on one of its directly attached subnets (as indicated in that router's routing table).
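The conventional routing behavior described in the preceding paragraphs can be summarized in the following hedged sketch; the table structures and the send_arp_request stub are hypothetical stand-ins for whatever a real router implementation uses.

```python
def send_arp_request(ip):
    """Stub: a real router would broadcast an ARP request for `ip` and await the reply."""
    return "00:00:00:00:00:99"

def route_packet(packet, routing_table, arp_table, router_mac):
    """Sketch of the conventional routing behavior described above."""
    if packet["dst_mac"] != router_mac:
        return packet                        # not addressed to this router; bridge unmodified

    route = routing_table[packet["dst_ip"]]              # longest-prefix match in practice
    if route["directly_attached"]:
        next_hop_ip = packet["dst_ip"]                   # deliver to the destination host
    else:
        next_hop_ip = route["next_hop_router_ip"]        # hop to the next router

    next_hop_mac = arp_table.get(next_hop_ip)
    if next_hop_mac is None:
        next_hop_mac = send_arp_request(next_hop_ip)     # ARP only on an ARP-table miss
        arp_table[next_hop_ip] = next_hop_mac

    # Routing, unlike bridging, rewrites the packet before forwarding it.
    packet["src_mac"] = router_mac
    packet["dst_mac"] = next_hop_mac
    packet["ttl"] -= 1
    return packet

# Example: a packet destined for a host on a directly attached subnet.
pkt = {"dst_ip": "10.0.2.5", "dst_mac": "00:aa:bb:cc:dd:01",
       "src_mac": "00:11:22:33:44:55", "ttl": 64}
routed = route_packet(pkt,
                      routing_table={"10.0.2.5": {"directly_attached": True}},
                      arp_table={}, router_mac="00:aa:bb:cc:dd:01")
```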
Thus, a distinction between bridging and routing is typically maintained in the operation of a traditional network. When a packet is bridged by a network device, it is forwarded by the device on the network without modification of the original packet. This functionality is typically embodied in a device generally referred to in the art as a “switch.” A “router” type network device, as distinct from a “switch,” modifies packets prior to forwarding them, as illustrated by the routing technique just described. Thus, when a packet's destination host is on the same subnet as its source host, the packet is typically forwarded without modification via bridging, and when a packet's destination is on a different subnet than its source the packet is typically modified and forwarded via routing. In practice, it is oftentimes the case that network devices operate as both switches and routers, and thus the distinction between ‘bridging’ and ‘routing’ results in more complicated network devices which must typically have logic devoted to performing both functions, as well as logic devoted to performing a determination, in the first place, of whether to bridge or to route each incoming packet.
Disclosed herein are methods of forwarding data over an IP network. The methods may include receiving a packet from a source host connected to the IP network, identifying the IP address of a destination host designated in the packet, determining the location on the IP network where the destination host designated by the packet is connected, without reference to the MAC address specified in the packet, by using location-identification information stored on the IP network, and forwarding the packet to the location on the IP network where the destination host is connected without reference to the MAC address specified in the packet. In some embodiments, the location-identification information may include a list matching one or more host IP addresses with one or more locations on the IP network where the hosts are connected.
Also disclosed herein are network devices for receiving packets from one or more source hosts connected to an IP network and forwarding the packets to one or more destination hosts connected to the IP network. In some embodiments, the network devices may include logic for receiving a packet from a source host connected to said network device, logic for identifying the IP address of a destination host designated in a received packet, logic for determining the location on the network where a destination host designated by a received packet is connected, without reference to the MAC address specified in the received packet, via look-up of the destination IP address in a list of location-identification information stored on the network, and logic for forwarding the received packet to the network device which is said location on the network.
Also disclosed herein are IP networks which include a first set of multiple network devices for connecting multiple hosts to the network, and a second set of multiple network devices for connecting together the first set of network devices. In some embodiments, the network devices in the first set may include logic for receiving a packet from a source host connected to said network device, logic for identifying the IP address of a destination host designated in a received packet, logic for attempting to determine the location on the network where a destination host designated by a received packet is connected, without reference to the MAC address specified in the received packet, via look-up of the destination IP address in a first list of location-identification information stored on the network, logic for labeling a received packet with said location, and logic for forwarding a received packet to a network device in the second set when said location is not the same network device in the first set having received the packet. In some embodiments, the network devices in the second set may include logic for receiving a packet from a network device in the first set, and forwarding the received packet to the network device in the first set which is the location of the destination host on the network designated in the packet.
The distinctions maintained between bridging and routing in a traditional network, as described above, typically result in various complexities and inefficiencies in a standard implementation. One example is the ARP procedure used to determine host MAC addresses. As illustrated by the sequence described above, a significant disadvantage of employing the ARP procedure to determine host MAC addresses is that ARP request packets are broadcast to every host on a given subnet. Such broadcasts flood a network with traffic. In addition, depending on the topological connectivity of the various subnets on a network, broadcast loops may result. Although protocols based on spanning-tree type algorithms may be used to eliminate the broadcast loops, in so doing, many optimal paths through the network's topology are oftentimes eliminated. Accordingly, it is desirable to avoid or minimize the generation of broadcast ARP packets on a network. Nevertheless, typical networks as implemented in current datacenters do broadcast ARP requests, do eliminate loops using spanning-tree algorithms, etc., and do not employ effective techniques to minimize or eliminate the broadcasting problem associated with the ARP procedure.
To restate the issue another way: the current state of the art is to forward IP packets using combo switch-router network devices based on their destination MAC addresses and VLAN IDs if the packets arrive at a network device carrying a destination MAC address different from that of the router (or if routing is simply not enabled on the network device); otherwise, if a packet's destination MAC address does match that of the switch/router (and assuming routing is enabled), the switch/router forwards the packet based on the destination IP address designated in the packet. However, as indicated above, a significant downside of this approach is that, in the former case, a source host typically utilizes an ARP procedure to discover the MAC address of its desired destination host on the local subnet, leading to inefficient flooding on the local subnet and imposing a significant burden on end hosts that are not interested in the flooded traffic. Thus, in current network implementations ARP requests are typically flooded to all the end devices in the flood domain (often a VLAN), unnecessarily sapping the processing power of the end devices on the local subnet. In fact, in some large modern datacenters, flooded traffic very frequently consumes a large portion of the potentially available server CPU processing power.
More generally, it is desirable to eliminate the distinction between switched/bridged IP packets (packets which carry the MAC address of the receiving switch-router) and routed IP packets (packets which carry a MAC address other than that of the receiving switch-router) so that packets entering a network may be treated uniformly, regardless of their ultimate destination. For example, eliminating the foregoing distinction allows the forwarding tables stored at network ingress points to have a smaller scale: e.g., devices that support both bridging and routing need to maintain two sets of tables, one which stores host IP addresses and another which stores host MAC addresses (the latter being additionally problematic because MAC addresses lack a hierarchical format and therefore cannot be aggregated).
Thus, network devices may operate, whenever possible, by forwarding packets based on the destination IP address (IPv4 or IPv6) designated in the packets that they receive. Note that “network device” should be understood to encompass both switches and routers, as well as combo switch/routers (except where it is clear from the context that one particular type of device or another is being referred to), since the same physical device typically implements both switching/bridging functionality and routing functionality. As stated, IP-based forwarding may be performed by network devices for IP packets, and also, in some cases, for non-IP packets (for example, the ARP family of protocols). In the case of ARP packets, the network devices forward the packets based on the IP address inside the ARP payload after examining the ARP opcode (request or reply). In some embodiments, in order to preserve external semantic behavior for the benefit of hosts and/or network devices designed for legacy networks, although forwarding is based on IP, the network devices may note whether a packet would have been routed or bridged. In the case of bridging in a legacy network (e.g., a packet received by a network device is labeled with a MAC address other than that of the network device, or routing is disabled on a network device's ingress interface, etc.), the network device forwards the packet based on the IP address but does not perform the rewrite operations which might typically be associated with IP routing—rewrite of the source and destination MAC address fields, decrementing the TTL, etc. may be suppressed. On the other hand, if packets are such that a legacy network would expect them to be routed, the packets would be forwarded based on their IP address and the typical routing rewrite operations would be performed.
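As a rough illustration of the forwarding decision just described, the following sketch forwards on the destination IP address (or on the IP address carried in an ARP payload) and performs or suppresses the routing rewrites according to whether a legacy network would have routed or bridged the packet; the field names and helper functions are assumptions made only for this example.

```python
ARP_ETHERTYPE = 0x0806  # IP packets use 0x0800

def forwarding_ip(packet):
    """The IP address the device forwards on: the Layer 3 destination for IP packets,
    or the target IP inside the ARP payload (chosen after examining the ARP opcode)."""
    if packet.get("ethertype") == ARP_ETHERTYPE:
        return packet["arp_target_ip"]
    return packet["dst_ip"]

def forward(packet, device_mac, lookup_location, mac_of):
    """Forward based on IP while preserving the external bridged/routed semantics."""
    dest_ip = forwarding_ip(packet)
    location = lookup_location(dest_ip)   # location-identification lookup, not MAC-based

    if packet.get("dst_mac") == device_mac:
        # A legacy network would have routed this packet: perform the usual rewrites.
        packet["src_mac"] = device_mac
        packet["dst_mac"] = mac_of(dest_ip)
        packet["ttl"] -= 1
    # Otherwise a legacy network would have bridged it: forward on IP, rewrites suppressed.
    return location, packet

# Tiny made-up example: an IP packet addressed to this device's MAC (the "routed" case).
hosts = {"10.0.0.7": ("leaf-3", "00:11:22:33:44:07")}
loc, pkt = forward(
    {"ethertype": 0x0800, "dst_ip": "10.0.0.7", "dst_mac": "00:aa:bb:cc:dd:01", "ttl": 64},
    device_mac="00:aa:bb:cc:dd:01",
    lookup_location=lambda ip: hosts[ip][0],
    mac_of=lambda ip: hosts[ip][1],
)
```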
It is noted in the context of handling ARP request packets that various embodiments of the IP-based forwarding techniques disclosed herein may be particularly advantageous because: (i) they eliminate (or significantly reduce) one of the most common sources of broadcast or flooded traffic (which is especially important for cloud and data center networks); and (ii) they improve network scaling properties by allowing networks to operate with forwarding tables based on IP addresses along with local forwarding tables having the MAC addresses of locally attached hosts, rather than operating with forwarding tables which generally store an IP address and MAC address pair for all hosts/end devices connected to the network. Accordingly, in various embodiments, the foregoing ARP forwarding technique may provide benefits in that it may: (i) eliminate the need for external directory services, (ii) allow resolution of ARP requests in-line with the regular packet flow to end hosts/devices, (iii) better distribute the burden of responding to ARP requests to the end devices targeted by the ARP requests, (iv) efficiently provide opportunities for end devices to update their ARP caches, (v) use the remote station (top) and local station (bottom) tables efficiently, i.e., reduce or eliminate the need for learning MAC addresses, and (vi) allow source IP learning based on conversations (triggered by ARP).
Accordingly, disclosed herein are methods, network devices, and IP networks for forwarding packets of data based on the IP address of the destination host designated in the packets, rather than, and without reference to, the MAC addresses specified in the packets. Generally these packets are IP packets but, as described above, ARP request packets may also be forwarded in this manner since they do provide a destination IP address in their payloads, and by doing so, subnet-wide broadcast of ARP request packets may be avoided. For instance, certain such method embodiments are schematically illustrated by the flowchart in
The location-identification information may reside in a database which may be implemented, for example, as a list which matches one or more host IP addresses with one or more locations on the IP network where the hosts are connected. Depending on the embodiment, such a list, or more generally, such a database of location-identification information, may be associated with (e.g., stored locally on) the network device receiving the packet as it enters the IP network—typically the first network device initially encountered by the packet when it reaches the IP network after it issues from the source host. In other embodiments, such a list or database may be associated with (e.g., stored on) another network device, or multiple other network devices on the IP network, or the database/list may be distributed across multiple network devices, or stored in-whole on one network device or devices while portions of the list/database may be locally-cached on other network devices. Examples will be illustrated below in the context of leaf-spine fabric overlay networks. Thus, depending on which network device has access to the relevant destination host identification-location information—e.g., a particular entry in the aforementioned list—the destination host's location on the network may be determined before or after the packet is forwarded from the first initially-encountered network device receiving the packet. For example, if the relevant destination host information is accessible from another network device, the packet may be forwarded to this second network device and, after said forwarding, the destination host's location on the network may be determined at this second network device.
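By way of a simple, non-authoritative illustration, such a list of location-identification information might be represented as follows; the addresses and locations are invented for the example.

```python
# Hypothetical location-identification list: host IP address -> where the host attaches
# (here, the address of the leaf network device and the local port number).
location_db = {
    "10.1.1.10": {"leaf_device": "10.0.0.1", "port": 7},
    "10.1.2.20": {"leaf_device": "10.0.0.4", "port": 12},
}

def locate(dest_ip, database):
    """Return the network location of the host with this IP address, or None if unknown
    (in which case a device holding a fuller copy of the database would be consulted)."""
    return database.get(dest_ip)

print(locate("10.1.2.20", location_db))  # {'leaf_device': '10.0.0.4', 'port': 12}
```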
In some embodiments, the IP network which implements the disclosed IP-based packet forwarding techniques may be a leaf-spine network fabric. Accordingly, presented below and provided in U.S. Provisional Pat. App. No. 61/900,228, filed Nov. 5, 2013, and titled “NETWORK FABRIC OVERLAY” (incorporated by reference in its entirety and for all purposes) are detailed descriptions of leaf-spine fabric overlay networks which, according to this disclosure, may employ mechanisms for forwarding incoming packets to destination hosts based on the destination IP addresses designated in the incoming packets, and in some embodiments, without reference to the destination MAC address designated in the incoming packets. Thus, for example, in the case of an ARP request packet, although in a legacy layer 2 network an ARP request packet is broadcast to all end devices on a local subnet, in various embodiments of the leaf-spine fabric overlay network set forth below and in U.S. Provisional Pat. App. No. 61/900,228, because an ARP request packet includes the intended destination host's IP address, and because network devices within the leaf-spine network fabric are aware of the locations where hosts are connected to the network, these network devices may forward ARP request packets to their intended destination hosts without broadcasting the ARP request packets within the fabric. A mapping database may keep the relevant location-identification information concerning the connection of end hosts to the leaf-spine network, in some embodiments, in the form of a list which matches one or more host IP addresses with one or more locations on the leaf-spine network where the hosts are connected.
Thus, in the context of the leaf-spine fabric overlay networks described below and in U.S. Provisional Pat. App. No. 61/900,228, and referring again to
It is noted that the IP-based forwarding techniques and operations disclosed herein may be used in connection with IP networks which provide a data abstraction layer oftentimes referred to as an overlay wherein packets are encapsulated with a packet encapsulation scheme/protocol such as VXLAN upon ingress to the network, and are de-encapsulated upon egress from the network. Examples of overlay networks in the context of leaf-spine network architectures utilizing a VXLAN encapsulation scheme/protocol are described in U.S. Provisional Pat. App. No. 61/900,228. Thus, in some embodiments, methods of IP-based packet forwarding may include applying an encapsulation to a packet after being received by the initial network device encountered by the packet as it reaches the network, and removing the encapsulation from the packet as it exits the IP network before it reaches the destination host. In the context of a leaf-spine fabric overlay network, the initially encountered network device is typically a leaf network device and so the encapsulation may be applied by this initially encountered leaf network device. However, it should be noted, of course, that IP-based packet forwarding techniques and operations do not require the existence of an overlay network in order to function and provide the benefits described above.
It should also be noted, particularly in the context of overlay networks, that in some embodiments, the location where the destination host connects may be a virtual switch device operating in a virtualization layer (running on an underlying physical host) and moreover that the destination host itself may be a virtual machine operating in the virtualization layer. (Note that virtualization in the context of a leaf-spine fabric overlay network is also described in detail in U.S. Provisional Pat. App. No. 61/900,228.) Likewise, in certain embodiments, the source host which issued the IP packet may be a physical host connected to a leaf network device which—as the initial network device encountered by the packet when it reaches the leaf-spine fabric overlay network—receives the packet and serves as the packet's ingress point to the network. And, likewise, in some embodiments, the source host may be a virtual machine operating in a virtualization layer (running on an underlying physical host), and the first network “device” in the fabric overlay network encountered by a packet after being issued from the source host may be a virtual switch device also running in the virtualization layer, which then serves as the packet's ingress point to the network.
Returning to the manner in which various IP-based packet forwarding methodologies may access and utilize location-identification information: In some embodiments, the mapping database containing the location-identification information used for determining destination host location—e.g., a list matching host IP addresses with network locations—is associated with the leaf network devices, the spine network devices, with both types of devices, or with a third type of device which provides this information with respect to packets forwarded from a leaf or spine network device, or in some combination of the foregoing.
In certain such embodiments, a partial mapping database is associated with each leaf network device which may be a locally-cached subset of a full global location-identification mapping database associated with the spine network devices—in some embodiments, stored directly on each spine network device, and in other embodiments stored on a third type of network device which is associated with the spine network devices. Portions of the spine's global mapping database—which typically lists the location-identification information associated with every host connected to the network through each leaf network device—may be learned by the leaf network devices as the network operates, as described in U.S. Provisional Pat. App. No. 61/900,228 (incorporated by reference herein).
Thus, various embodiments of the IP-based forwarding techniques and operations disclosed herein work (in the ARP context or in the more general IP-based forwarding context) by looking-up an inbound packet's destination IP address in a mapping database associated locally with the leaf network device which receives the inbound packet. In such embodiments, the destination host's location on the network is determined at the initially encountered leaf network device before the packet is first forwarded from the initially encountered leaf network device. In other embodiments, the mapping database may be associated with a spine network device and therefore the destination host's location on the network is determined from a global mapping database associated with the spine network device after forwarding the packet from the leaf network device to a spine network device having access to this global mapping database. In yet other embodiments, the list may be associated with another type of network device—a proxy-function network device—which is associated with the spine network device receiving the packet, but which is used to perform the actual lookup/determination of the location of the correct destination host. In certain embodiments where packets are encapsulated upon ingress to the IP network, the encapsulation header (e.g., VXLAN header) carries a proxy address associated with or designating this proxy-function network device. The proxy-address may be carried in the destination address field of the encapsulation header, and after the packet is received at the proxy-function network device, said device may replace the proxy-address with the actual location/address on the network where the destination host connects. As mentioned above, whether the determination of destination host location is done at the initially encountered leaf network device or at a spine-network device (or proxy-function network device) after being forwarded from this leaf network device may depend on whether the destination host's location is present in the leaf network device's locally cached subset of the global mapping database associated with the spine. In any event, mapping database(s) which have the relevant location-identification information are employed in the foregoing manner to determine the location within an IP network where a given destination host is located and connected.
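A minimal sketch of the proxy-function behavior described above is given below, under the assumption (for illustration only) that the encapsulation header is represented as a dictionary with outer and inner destination fields; the proxy address and mapping entries are hypothetical.

```python
PROXY_ADDRESS = "10.255.255.1"   # hypothetical address designating the proxy function

# Hypothetical global mapping database held at, or on behalf of, the spine tier:
# destination host IP -> IP address of the leaf network device that connects it.
global_mapping = {
    "10.1.2.20": "10.0.0.4",
}

def apply_proxy_function(encapsulated_packet):
    """If the outer (encapsulation) destination is the proxy address, replace it with
    the address of the leaf network device where the inner destination host attaches."""
    if encapsulated_packet["outer_dst_ip"] == PROXY_ADDRESS:
        inner_dst = encapsulated_packet["inner_dst_ip"]
        encapsulated_packet["outer_dst_ip"] = global_mapping[inner_dst]
    return encapsulated_packet

print(apply_proxy_function({"outer_dst_ip": PROXY_ADDRESS, "inner_dst_ip": "10.1.2.20"}))
```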
To further facilitate an understanding of mapping database usage in IP-based forwarding operations performed in the context of leaf-spine network architectures, a brief description of these architectures is now provided. A more detailed description is provided further below.
The basic leaf-spine network 500 presented in
Thus, among other things,
In a typical embodiment, each leaf network device's locally-cached partial mapping database will contain entries for the end hosts directly connected to it. Hence, communication between end hosts 711 and 712, which are both directly connected to leaf network device 721, may be accomplished without involving the spine, as illustrated by path 751 labeled ‘local’ in
Path 752 shown in
Thus,
Another packet's passage through the fabric is illustrated by path 753, which represents a communication between end hosts 711 and 714. In this instance, as with path 752, the communication between end hosts is non-local and involves multiple leaf network devices but, as indicated by the path 753's label in
Thus, in some network architecture embodiments, if location-identification information corresponding to the destination IP address designated in an inbound packet is found in the local mapping database associated with the initial network device receiving the inbound packet, the packet will be forwarded accordingly—e.g., if the destination host is local to the leaf network device receiving the packet, the packet will be forwarded out a local port on the leaf network device to the destination host. However, if the destination host is remote from the ingress leaf network device, the packet will be encapsulated (e.g. with VXLAN), the encapsulation carrying the address of the remote leaf network device to which the destination host is connected, and sent towards an appropriate spine network device. In some embodiments, if there is a miss in the local mapping database (cache of location-identification information), the packet will be encapsulated with the proxy IP address and sent towards a spine network device that has the proxy function or is associated with a third type of network device providing the proxy function. The proxy function then operates to determine the location of the host on the network having the destination IP address designated in the received packet.
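The ingress-leaf decision just described might be sketched as follows; the cache contents, proxy address, and encapsulate helper are illustrative assumptions rather than an actual implementation.

```python
PROXY_IP = "10.255.255.1"   # hypothetical address of the proxy function
MY_LEAF_IP = "10.0.0.1"     # hypothetical address of this (ingress) leaf network device

# Hypothetical locally cached subset of the global mapping database.
local_cache = {
    "10.1.1.10": {"location": MY_LEAF_IP, "local_port": 7},     # locally attached host
    "10.1.2.20": {"location": "10.0.0.4", "local_port": None},  # host behind a remote leaf
}

def encapsulate(packet, outer_dst_ip):
    """Stand-in for applying a VXLAN (or similar) encapsulation toward the spine."""
    return {"outer_dst_ip": outer_dst_ip, "inner": packet}

def ingress_forward(packet):
    entry = local_cache.get(packet["dst_ip"])
    if entry is None:
        # Cache miss: encapsulate with the proxy address and let the proxy resolve it.
        return "to_spine", encapsulate(packet, PROXY_IP)
    if entry["location"] == MY_LEAF_IP:
        # Destination host is directly attached: forward out the local port, unencapsulated.
        return f"local_port_{entry['local_port']}", packet
    # Destination host is behind a remote leaf: encapsulate with that leaf's address.
    return "to_spine", encapsulate(packet, entry["location"])

print(ingress_forward({"dst_ip": "10.1.2.20"}))   # sent toward the spine, outer dst 10.0.0.4
```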
In any event, referring again to
As indicated above, the foregoing IP-based packet forwarding techniques and operations may be used to handle ARP request packets and prevent their broadcast (generation of broadcast loops, etc.) within the fabric of a leaf-spine network while preserving the external semantic behavior expected by hosts connected via Layer 2 to the network. In one embodiment, an ARP request packet may be forwarded via the IP-based forwarding techniques described above to the leaf network device which connects the end host having the IP address designated in the ARP request packet. At this point, in networks employing packet encapsulation, this leaf network device—since it serves as the ARP request packet's egress point from the network—may de-encapsulate the ARP request packet prior to forwarding the packet to the target destination host designated in the packet. Note that if more than one host is connected on this interface of the leaf network device—the interface connecting the destination host—forwarding the ARP packet out this interface effectively broadcasts the ARP packet on that interface, since the ARP packet is now un-encapsulated and its destination MAC address field is still labeled “broadcast” as shown in
Accordingly, in some embodiments, a leaf network device in a leaf-spine network fabric may receive an ARP request packet from one of its attached hosts or other external devices which is labeled for broadcast. However, the leaf device prevents the packet's broadcasting by forwarding the packet based on the “target IP” address found in the payload of the packet, rather than in the conventional way of forwarding the packet based on the Layer 2 destination address, which is a broadcast address. To provide a specific, non-limiting example: host A connected to a leaf-spine fabric overlay network wants to communicate with host B also connected to the network, but host A does not know host B's MAC address. Host A therefore generates an ARP request packet and forwards it onto the network. The first network device receiving the ARP request packet is the leaf network device to which host A is attached. The ARP request packet includes the following information, similar to that shown in
The ingress leaf network device analyzes this ARP request packet and identifies Host B's IP address in the packet's payload. If this leaf network device determines from Host B's IP address that host B is locally connected to itself, this leaf network device forwards the packet directly to host B without encapsulating it. If the ingress leaf network device recognizes host B's IP address, but determines that it is not a local IP address, this leaf network device encapsulates the packet and forwards it to the spine, the encapsulation identifying the IP address of the leaf network device connecting host B. If the ingress leaf network device does not recognize host B's IP address, this leaf network device produces an encapsulation identifying, as the destination IP address, the IP address of a network device providing the proxy function (either a spine network device or another class of network device which provides the proxy function) and forwards the packet to the spine. The spine then either applies the proxy function itself or forwards the packet to a proxy-function network device, which applies the proxy function and forwards the packet back to the spine. With the packet's encapsulation now identifying the leaf network device connecting host B, the spine network device then sends the ARP request packet to this leaf network device. Note, once again, that the same forwarding procedure generally applies to other types of packets which specify a destination IP address.
In this example of an ARP request packet going from host A to host B, after forwarding from the spine, the receiving leaf network device recognizes the packet as an ARP request and recognizes host B's IP address. The receiving leaf network device may optionally update its forwarding table with information about host A. The leaf network device then forwards the packet to host B, which prepares and sends an ARP reply packet back to the leaf network device. The leaf network device now receives and forwards this ARP reply packet to the spine, which then routes the ARP reply to the leaf network device locally connecting host A. That leaf network device then de-encapsulates the ARP reply and forwards the ARP reply to host A. At this point, the leaf network device connecting host A may update its own forwarding table with information about host B.
Note that the gathering of the location-identification information for the mapping database cached at the leaf network devices may be done through protocols or through learning of the devices attached to the network, for example, as demonstrated in the preceding ARP example. The location-identification information in a local mapping database may include MAC and IP addresses of most or all locally connected host devices; however, as described above, these local mapping databases need not contain the MAC addresses of every host connected to every leaf network device on the network. In some embodiments as described above, the learned location-identification information may be provided in a mapping database resident on the spine, portions of which are locally-cached in the leaf network devices. Of course, it should also be noted that IP-based packet forwarding—whether applied to IP packets generally, or in the context of unicast ARP—may be implemented without an overlay network, and also in networks having other topologies besides the leaf-spine fabric now described in detail.
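As one possible, simplified illustration of such conversation-based learning, a leaf network device might populate its local cache from an observed ARP packet roughly as follows; the field names are assumptions.

```python
def learn_from_arp(local_cache, arp_packet, attachment_point):
    """Populate a local mapping cache from an observed ARP packet: the sender's IP and
    MAC, together with where the packet arrived from, form (or refresh) an entry."""
    local_cache[arp_packet["sender_ip"]] = {
        "mac": arp_packet["sender_mac"],
        "location": attachment_point,
    }
    return local_cache

# Example: a leaf learning about host A from a forwarded ARP request.
cache = {}
learn_from_arp(cache,
               {"sender_ip": "10.1.1.10", "sender_mac": "00:11:22:33:44:55"},
               attachment_point="10.0.0.1")
print(cache)
```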
Detailed Description of Leaf-Spine Network Architectures Versus Traditional Network Architectures
A. Overview of Traditional “Access-Aggregation-Core” Network Architectures
Datacenter network design may follow a variety of topological paradigms—a given topology just referring to the system of networking lines/links which carry network traffic (i.e., data) and the networking switches, which control the flow of traffic over the lines/links in the network. One of the most common topological paradigms in use today is the aptly-named “access-aggregation-core” architecture. As the “core” part of the name suggests, such an architecture follows a hierarchical paradigm, wherein information traveling between hypothetical points A and B first travels up the hierarchy away from point A and then back down the hierarchy toward point B.
Shared usage of links and network devices (such as just described) leads to bottlenecks in a network exhibiting a tree structure architecture like the access-aggregation-core (AAC) network shown in
Though the blocking problem is an inevitable consequence of the tree-structure paradigm, various solutions have been developed within this paradigm to lessen the impact of the problem. One technique is to build redundancy into the network by adding additional links between high traffic nodes in the network. In reference to
B. “Leaf-Spine” Network Architectures
Another way of addressing the ubiquitous “blocking” problem manifested in the modern datacenter's networking infrastructure is to design a new network around a topological paradigm where blocking does not present as much of an inherent problem. One such topology is often referred to as a “multi-rooted tree” topology (as opposed to a “tree”), which can be said to embody a full bipartite graph if each spine network device is connected to each leaf network device and vice versa. Networks based on this topology are oftentimes referred to as “Clos Networks,” “flat networks,” “multi-rooted networks,” or just as “multi-rooted trees.” In the disclosure that follows, a “leaf-spine” network architecture designed around the concept of a “multi-rooted tree” topology will be described. While it is true that real-world networks are unlikely to completely eliminate the “blocking” problem, the described “leaf-spine” network architecture, as well as others based on “multi-rooted tree” topologies, are designed so that blocking does not occur to the same extent as in traditional network architectures.
Roughly speaking, leaf-spine networks lessen the blocking problem experienced by traditional networks by being less hierarchical and, moreover, by including considerable active path redundancy. In analogy to microprocessor design where increased performance is realized through multi-core or multi-processor parallelization rather than simply by increasing processor clock speed, a leaf-spine network realizes higher performance, at least to a certain extent, by building the network “out” instead of building it “up” in a hierarchical fashion. Thus, a leaf-spine network in its basic form consists of two tiers: a spine tier and a leaf tier. Network devices within the leaf tier—i.e. “leaf network devices”—provide connections to all the end devices, and network devices within the spine tier—i.e., “spine network devices”—provide connections among the leaf network devices. Note that in a prototypical leaf-spine network, leaf network devices do not directly communicate with each other, and the same is true of spine network devices. Moreover, in contrast to an AAC network, a leaf-spine network in its basic form has no third core tier connecting the network devices within the second tier to a much smaller number of core network device(s), typically configured in a redundant fashion, which then connect to the outside internet. Instead, the third tier core is absent and connection to the internet is provided through one of the leaf network devices, again effectively making the network less hierarchical. Notably, internet connectivity through a leaf network device avoids forming a traffic hotspot on the spine which would tend to bog down traffic not travelling to and from the outside internet.
It should be noted that very large leaf-spine networks may actually be formed from 3 tiers of network devices. As described in more detail below, in these configurations, the third tier may function as a “spine” which connects “leaves” formed from first and second tier network devices, but a 3-tier leaf-spine network still works very differently than a traditional AAC network due to the fact that it maintains the multi-rooted tree topology as well as other features. To present a simple example, the top tier of a 3-tier leaf-spine network still does not directly provide the internet connection(s), that still being provided through a leaf network device, as in a basic 2-tier leaf-spine network.
Though in
To illustrate, consider, analogously to the example described above, communication between end device A and end device K simultaneous with communication between end devices I and J, which led to blocking in AAC network 400. As shown in
As a second example, consider the scenario of simultaneous communication between end devices A and F and between end devices B and G which will clearly also lead to blocking in AAC network 400. In the leaf-spine network 500, although two leaf network devices 525 are shared between the four end devices 510, specifically network devices 1 and 3, there are still three paths of communication between these two devices (one through each of the three spine network devices I, II, and III) and therefore there are three paths collectively available to the two pairs of end devices. Thus, it is seen that this scenario is also non-blocking (unlike
As a third example, consider the scenario of simultaneous communication between three pairs of end devices—between A and F, between B and G, and between C and H. In AAC network 400, this results in each pair of end devices having ⅓ the bandwidth required for full rate communication, but in leaf-spine network 500, once again, since 3 paths are available, each pair has exactly the bandwidth it needs for full rate communication. Thus, in a leaf-spine network having single links of equal bandwidth connecting devices, as long as the number of spine network devices 535 is equal to or greater than the number of end devices 510 which may be connected to any single leaf network device 525, then the network will have enough bandwidth for simultaneous full-rate communication between the end devices connected to the network.
More generally, the extent to which a given network is non-blocking may be characterized by the network's “bisectional bandwidth,” which is determined by dividing a network that has N end devices attached to it into 2 equal sized groups of size N/2, and determining the total bandwidth available for communication between the two groups. If this is done for all possible divisions into groups of size N/2, the minimum bandwidth over all such divisions is the “bisectional bandwidth” of the network. Based on this definition, a network may then be said to have “full bisectional bandwidth” and have the property of being “fully non-blocking” if each leaf network device's total uplink bandwidth to the spine tier 530 (the sum of the bandwidths of all links connecting the leaf network device 525 to any spine network device 535) is at least equal to the maximum downlink bandwidth to end devices associated with any of the leaf network devices on the network.
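The condition just stated can be expressed as a short check; the per-leaf bandwidth figures used in the example are arbitrary unit values.

```python
def has_full_bisectional_bandwidth(leaves):
    """leaves: list of (total_uplink_bw, total_downlink_bw) tuples, one per leaf network
    device. Returns True when every leaf's uplink bandwidth to the spine tier is at least
    the maximum downlink (end-device) bandwidth of any leaf on the network."""
    max_downlink = max(downlink for _, downlink in leaves)
    return all(uplink >= max_downlink for uplink, _ in leaves)

# Example: 5 leaves, each with 3 unit-bandwidth uplinks and 3 unit-bandwidth downlinks.
print(has_full_bisectional_bandwidth([(3, 3)] * 5))   # True (fully non-blocking)
```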
To be precise, when a network is said to be “fully non-blocking” it means that no “admissible” set of simultaneous communications between end devices on the network will block—the admissibility constraint simply meaning that the non-blocking property only applies to sets of communications that do not direct more network traffic at a particular end device than that end device can accept as a consequence of its own bandwidth limitations. Whether a set of communications is “admissible” may therefore be characterized as a consequence of each end device's own bandwidth limitations (assumed here equal to the bandwidth limitation of each end device's link to the network), rather than arising from the topological properties of the network per se. Therefore, subject to the admissibility constraint, in a non-blocking leaf-spine network, all the end devices on the network may simultaneously communicate with each other without blocking, so long as each end device's own bandwidth limitations are not implicated.
The leaf-spine network 500 thus exhibits full bisectional bandwidth because each leaf network device has at least as much bandwidth to the spine tier (i.e., summing bandwidth over all links to spine network devices) as it does bandwidth to the end devices to which it is connected (i.e., summing bandwidth over all links to end devices). To illustrate the non-blocking property of network 500 with respect to admissible sets of communications, consider that if the 12 end devices in
To implement leaf-spine network 500, the leaf tier 520 would typically be formed from 5 ethernet switches of 6 ports or more, and the spine tier 530 from 3 ethernet switches of 5 ports or more. The number of end devices which may be connected is then the number of leaf tier switches j multiplied by ½ the number of ports n on each leaf tier switch, or ½·j·n, which for the network of
However, not every network is required to be non-blocking and, depending on the purpose for which a particular network is built and the network's anticipated loads, a fully non-blocking network may simply not be cost-effective. Nevertheless, leaf-spine networks still provide advantages over traditional networks, and they can be made more cost-effective, when appropriate, by reducing the number of devices used in the spine tier, or by reducing the link bandwidth between individual spine and leaf tier devices, or both. In some cases, the cost-savings associated with using fewer spine-network devices can be achieved without a corresponding reduction in bandwidth between the leaf and spine tiers by using a leaf-to-spine link speed which is greater than the link speed between the leaf tier and the end devices. If the leaf-to-spine link speed is chosen to be high enough, a leaf-spine network may still be made to be fully non-blocking—despite saving costs by using fewer spine network devices.
The extent to which a network having fewer spine tier devices is non-blocking is given by the ratio of bandwidth from leaf network device to spine tier versus bandwidth from leaf network device to end devices. By adjusting this ratio, an appropriate balance between cost and performance can be dialed in. In
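For illustration, the oversubscription ratio described here can be computed as a simple fraction of total downlink to total uplink bandwidth at a leaf; the figures in the example are hypothetical.

```python
from fractions import Fraction

def oversubscription_ratio(total_downlink_bw, total_uplink_bw):
    """Oversubscription at a leaf, expressed as downlink:uplink bandwidth; a value of 1
    (i.e., 1:1) corresponds to a non-oversubscribed, fully non-blocking leaf."""
    return Fraction(total_downlink_bw, total_uplink_bw)

# Example: a leaf with 6 unit-bandwidth links to end devices but only 2 unit-bandwidth
# links to the spine is oversubscribed 3:1.
print(oversubscription_ratio(6, 2))   # 3
```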
This concept of oversubscription and building cost-effective networks having fewer than optimal spine network devices also illustrates the improved failure domain provided by leaf-spine networks versus their traditional counterparts. In a traditional AAC network, if a device in the aggregation tier fails, then every device below it in the network's hierarchy will become inaccessible until the device can be restored to operation. Furthermore, even if redundancy is built-in to that particular device, or if it is paired with a redundant device, or if it is a link to the device which has failed and there are redundant links in place, such a failure will still result in a 50% reduction in bandwidth, or a doubling of the oversubscription. In contrast, redundancy is intrinsically built into a leaf-spine network and such redundancy is much more extensive. Thus, as illustrated by the usefulness of purposefully assembling a leaf-spine network with fewer spine network devices than is optimal, absence or failure of a single device in the spine (or link to the spine) will only typically reduce bandwidth by 1/k where k is the total number of spine network devices.
It is also noted once more that in some networks having fewer than the optimal number of spine network devices (e.g., less than the number of end devices connecting to the leaf network devices), the oversubscription rate may still be reduced (or eliminated) by the use of higher bandwidth links between the leaf and spine network devices relative to those used to connect end devices to the leaf network devices.
C. Example “Leaf-Spine” Network Architecture
The following describes a sample implementation of a leaf-spine network architecture. It is to be understood, however, that the specific details presented here are for purposes of illustration only, and are not to be viewed in any manner as limiting the concepts disclosed herein. With this in mind, leaf-spine networks may be implemented as follows:
Leaf network devices may be implemented as ethernet switches having: (i) 48 ports for connecting up to 48 end devices (e.g., servers) at data transmission speeds of 10 GB/s (gigabits per second)—i.e. ‘downlink ports’; and (ii) 12 ports for connecting to up to 12 spine network devices at data transmission speeds of 40 GB/s—i.e. ‘uplink ports.’ Thus, each leaf network device has 480 GB/s total bandwidth available for server connections and an equivalent 480 GB/s total bandwidth available for connections to the spine tier. More generally, leaf network devices may be chosen to have a number of ports in the range of 10 to 50 ports, or 20 to 100 ports, or 50 to 1000 ports, or 100 to 2000 ports, wherein some fraction of the total number of ports are used to connect end devices (‘downlink ports’) and some fraction are used to connect to spine network devices (‘uplink ports’). In some embodiments, the ratio of uplink to downlink ports of a leaf network device may be 1:1, or 1:2, or 1:4, or the aforementioned ratio may be in the range of 1:1 to 1:20, or 1:1 to 1:10, or 1:1 to 1:5, or 1:2 to 1:5. Likewise, the uplink ports for connection to the spine tier may have the same bandwidth as the downlink ports used for end device connection, or they may have different bandwidths, and in some embodiments, higher bandwidths. For instance, in some embodiments, uplink ports may have bandwidths which are in a range of 1 to 100 times, or 1 to 50 times, or 1 to 10 times, or 1 to 5 times, or 2 to 5 times the bandwidth of downlink ports.
Moreover, depending on the embodiment, leaf network devices may be switches having a fixed number of ports, or they may be modular, wherein the number of ports in a leaf network device may be increased by adding additional modules. The leaf network device just described having 48 10 GB/s downlink ports (for end device connection) and 12 40 GB/s uplink ports (for spine tier connection) may be a fixed-sized switch, and is sometimes referred to as a ‘Top-of-Rack’ switch. Fixed-sized switches having a larger number of ports are also possible, however, typically ranging in size from 50 to 150 ports, or more specifically from 64 to 128 ports, and may or may not have additional uplink ports (for communication to the spine tier) potentially of higher bandwidth than the downlink ports. In modular leaf network devices, the number of ports obviously depends on how many modules are employed. In some embodiments, ports are added via multi-port line cards in similar manner to that described below with regards to modular spine network devices.
Spine network devices may be implemented as ethernet switches having 576 ports for connecting with up to 576 leaf network devices at data transmission speeds of 40 GB/s. More generally, spine network devices may be chosen to have a number of ports for leaf network device connections in the range of 10 to 50 ports, or 20 to 100 ports, or 50 to 1000 ports, or 100 to 2000 ports. In some embodiments, ports may be added to a spine network device in modular fashion. For example, a module for adding ports to a spine network device may contain a number of ports in a range of 10 to 50 ports, or 20 to 100 ports. In this manner, the number of ports in the spine network devices of a growing network may be increased as needed by adding line cards, each providing some number of ports. Thus, for example, a 36-port spine network device could be assembled from a single 36-port line card, a 72-port spine network device from two 36-port line cards, a 108-port spine network device from a trio of 36-port line cards, a 576-port spine network device could be assembled from 16 36-port line cards, and so on.
Links between the spine and leaf tiers may be implemented as 40 GB/s-capable ethernet cable (such as appropriate fiber optic cable) or the like, and server links to the leaf tier may be implemented as 10 GB/s-capable ethernet cable or the like. More generally, links, e.g. cables, for connecting spine network devices to leaf network devices may have bandwidths which are in a range of 1 GB/s to 1000 GB/s, or 10 GB/s to 100 GB/s, or 20 GB/s to 50 GB/s. Likewise, links, e.g. cables, for connecting leaf network devices to end devices may have bandwidths which are in a range of 10 MB/s to 100 GB/s, or 1 GB/s to 50 GB/s, or 5 GB/s to 20 GB/s. In some embodiments, as indicated above, links, e.g. cables, between leaf network devices and spine network devices may have higher bandwidth than links, e.g. cable, between leaf network devices and end devices. For instance, in some embodiments, links, e.g. cables, for connecting leaf network devices to spine network devices may have bandwidths which are in a range of 1 to 100 times, or 1 to 50 times, or 1 to 10 times, or 1 to 5 times, or 2 to 5 times the bandwidth of links, e.g. cables, used to connect leaf network devices to end devices.
In the particular example of each spine network device implemented as a 576-port @ 40 GB/s switch and each leaf network device implemented as a 48-port @ 10 GB/s downlink & 12-port @ 40 GB/s uplink switch, the network can have up to 576 leaf network devices each of which can connect up to 48 servers, and so the leaf-spine network architecture can support up to 576·48=27,648 servers. And, in this particular example, due to the maximum leaf-to-spine transmission rate (of 40 GB/s) being 4 times that of the maximum leaf-to-server transmission rate (of 10 GB/s), such a network having 12 spine network devices is fully non-blocking and has full cross-sectional bandwidth.
As described above, the network architect can balance cost with oversubscription by adjusting the number of spine network devices. In this example, a setup employing 576-port switches as spine network devices may typically employ 4 spine network devices which, in a network of 576 leaf network devices, corresponds to an oversubscription rate of 3:1. Adding a set of 4 more 576-port spine network devices changes the oversubscription rate to 3:2, and so forth.
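The arithmetic in this example can be verified with a short sketch using the figures given in the text; the helper below is illustrative only.

```python
# Figures taken from the example in the text.
leaf_downlinks, downlink_gbps = 48, 10   # 48 x 10 GB/s server-facing ports per leaf
leaf_uplinks, uplink_gbps = 12, 40       # 12 x 40 GB/s spine-facing ports per leaf
spine_ports = 576                        # leaf-facing ports per spine network device

max_servers = spine_ports * leaf_downlinks
print(max_servers)                       # 27648 servers supported

def oversubscription(spine_devices):
    """Downlink-to-uplink bandwidth ratio at a leaf when only `spine_devices` spine
    network devices (and hence that many leaf uplinks) are deployed."""
    downlink_bw = leaf_downlinks * downlink_gbps                 # 480 GB/s per leaf
    uplink_bw = min(spine_devices, leaf_uplinks) * uplink_gbps
    return downlink_bw / uplink_bw

print(oversubscription(4))    # 3.0 -> 3:1 oversubscription
print(oversubscription(8))    # 1.5 -> 3:2 oversubscription
print(oversubscription(12))   # 1.0 -> fully non-blocking
```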
Datacenters typically consist of servers mounted in racks. Thus, in a typical setup, one leaf network device, such as the ‘Top-of-Rack’ device described above, can be placed in each rack providing connectivity for up to 48 rack-mounted servers. The total network then may consist of up to 576 of these racks connected via their leaf-network devices to a spine-tier rack containing between 4 and 12 576-port spine tier devices.
D. Leaf-Spine Network Architectures Formed from More than Two Tiers of Network Devices
The two-tier leaf-spine network architecture described above having 576-port @ 40 GB/s switches as spine network devices and 48-port @ 10 GB/s downlink & 12-port @ 40 GB/s uplink switches as leaf network devices can support a network of up to 27,648 servers, and while this may be adequate for most datacenters, it may not be adequate for all. Even larger networks can be created by employing spine tier devices with more than 576 ports accompanied by a corresponding increased number of leaf tier devices. However, another mechanism for assembling a larger network is to employ a multi-rooted tree topology built from more than two tiers of network devices—e.g., forming the network from 3 tiers of network devices, or from 4 tiers of network devices, etc.
One simple example of a 3-tier leaf-spine network may be built from just 4-port switches and this is schematically illustrated in
Note that in the foregoing disclosure, numerous specific embodiments were set forth in order to provide a thorough understanding of the inventive concepts disclosed herein. However, it will be appreciated by those skilled in the art that in many cases the disclosed concepts may be practiced with or without certain specific details, such as by the substitution of alternative elements or steps, or by the omission of certain elements or steps, while remaining within the scope and spirit of this disclosure. Furthermore, where certain processes, procedures, operations, steps, elements, devices, modules, components, and/or systems are already well-known to those skilled in the art, they may not be described herein in as great detail as would otherwise be possible, so that the inventive aspects of this disclosure are not unnecessarily obscured. Furthermore, while the foregoing disclosed processes, methods, systems, and apparatuses have been described in detail within the context of specific embodiments for the purpose of promoting clarity and understanding, it will be apparent to one of ordinary skill in the art that there are many alternative ways of implementing these processes, methods, systems, and apparatuses which are within the scope and spirit of this disclosure. Accordingly, the embodiments described herein are to be viewed as illustrative of the disclosed inventive concepts rather than limiting or restrictive, and are not to be used as an impermissible basis for unduly limiting the scope of the appended claims.
This application claims priority to: U.S. Provisional Pat. App. No. 61/900,228, filed Nov. 5, 2013, titled “NETWORK FABRIC OVERLAY”; and U.S. Provisional Pat. App. No. 61/900,349, filed Nov. 5, 2013, titled “IP-BASED FORWARDING OF BRIDGED AND ROUTED IP PACKETS AND UNICAST ARP”; each of which is hereby incorporated by reference in its entirety for all purposes.