The present technology pertains to computer networks, and more specifically to determining the interconnectivity between various nodes in a dense data network.
Determining the interconnectivity between various nodes in a data network is an integral part of successfully troubleshooting and managing the traffic flow in the network. Attempts to accurately explore the detailed interconnectivity in dense CLOS or folded-CLOS topologies have proven inadequate. Further, with a Virtual Extensible Local Area Network (VXLAN), the data traffic is embedded within the VXLAN encapsulation, and thus traditional tools fail to explore connectivity at the VXLAN infrastructure layer.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the subject technology. However, it will be clear and apparent that the subject technology is not limited to the specific details set forth herein and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
The disclosed technology addresses the need in the art for determining the interconnectivity between various nodes in a data network having a dense topology.
Overview
In one aspect of the present disclosure, a method is provided that includes generating, at a first network device in a data network, a traceroute packet, where the traceroute packet includes source and destination address information. The traceroute packet is encapsulated in an outer packet, the outer packet including a destination address based on the destination address information of the inner traceroute packet. The encapsulated traceroute packet is forwarded to a second network device in the data network, where the second network device is identified based on the destination address of the outer packet. The second network device forwards the encapsulated traceroute packet to one or more intermediate network devices in the data network. The first network device receives response information from the second network device and each of the intermediate network devices, the response information identifying a path taken by the traceroute packet through the data network. The first network device determines an end-to-end path taken by the traceroute packet through the data network.
In another aspect of the present disclosure, a system is provided where the system includes a processor; and a computer-readable storage medium having stored therein instructions which, when executed by the processor, cause the processor to perform operations including generating, at a first network device in a data network, a traceroute packet, the traceroute packet including source and destination address information, encapsulating the traceroute packet in an outer packet, the outer packet including a destination address based on the destination address information of the inner traceroute packet, and forwarding the encapsulated traceroute packet to a second network device in the data network, the second network device identified based upon routing information, the destination address, and, in some cases, the destination port of the outer packet. The second network device forwards the encapsulated traceroute packet to one or more intermediate network devices in the data network. The first network device receives response information from the second network device and each of the intermediate network devices, the response information identifying a path taken by the traceroute packet through the data network, and determines an end-to-end path taken by the traceroute packet through the data network.
In yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium having stored therein instructions which, when executed by a processor, cause the processor to perform operations including generating, at a first network device in a data network, a traceroute packet, the traceroute packet including source and destination address information, encapsulating the traceroute packet in an outer packet, the outer packet including a destination address based on the destination address information of the traceroute packet, and forwarding the encapsulated traceroute packet to a second network device in the data network, the second network device identified based on routing information and the destination address of the outer packet. The second network device forwards the encapsulated traceroute packet to one or more intermediate network devices in the data network. The processor performs further operations including receiving response information from the second network device and each of the intermediate network devices, the response information identifying a path taken by the traceroute packet through the data network, and determining an end-to-end path taken by the traceroute packet through the data network.
The present disclosure describes systems, methods, and non-transitory computer-readable storage media for determining the path between two nodes in a network. A traceroute packet is formed and injected into an ingress node's forwarding plane. The traceroute packet is a User Datagram Protocol ("UDP") packet with source and destination addresses set to the tenant source host and destination host addresses. The UDP header of the traceroute packet also includes source and destination port information. The ingress node encapsulates the packet within an outer packet header and forwards the encapsulated packet to a switch, which then forwards the packet to other switches within the network. The destination address of the outer packet is set to the VXLAN tunnel-end-point ("TEP") switch to which the destination host of the traceroute packet is attached. The outer packet is forwarded based on the routing information in the VXLAN infra network. In the case of multiple alternate paths between the originating switch and the destination TEP switch, the path taken by the encapsulated packet is determined by a hash computed from the packet's parameters, which include the UDP source and destination ports. Each intermediate switch receiving the packet sends a copy of the packet to its own CPU while also forwarding the packet toward its original destination, typically using a mechanism known as CPU logging.
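By way of illustration only, the following Python sketch shows how such an inner UDP traceroute packet might be assembled. The addresses, ports, packet identifier, and payload signature are illustrative assumptions rather than values prescribed by the disclosure, and checksums are omitted for brevity.

```python
import socket
import struct


def ipv4_header(src: str, dst: str, payload_len: int, ttl: int, proto: int = 17) -> bytes:
    """Build a 20-byte IPv4 header; the checksum is left at zero in this sketch."""
    version_ihl = (4 << 4) | 5
    total_len = 20 + payload_len
    ident = 0x1234  # packet identifier later reported back by responding switches
    return struct.pack("!BBHHHBBH4s4s", version_ihl, 0, total_len, ident, 0,
                       ttl, proto, 0, socket.inet_aton(src), socket.inet_aton(dst))


def udp_header(sport: int, dport: int, payload_len: int) -> bytes:
    """Build an 8-byte UDP header; a zero checksum is permitted for UDP over IPv4."""
    return struct.pack("!HHHH", sport, dport, 8 + payload_len, 0)


def inner_traceroute_packet(tenant_src: str, tenant_dst: str,
                            sport: int, dport: int, inner_ttl: int) -> bytes:
    """Inner UDP traceroute packet addressed from the tenant source host to the destination host."""
    payload = b"TRACEROUTE-SIG:L1:0001"  # hypothetical signature string used later for CPU logging
    udp = udp_header(sport, dport, len(payload)) + payload
    return ipv4_header(tenant_src, tenant_dst, len(udp), inner_ttl) + udp


# Example probe from tenant host H1 (10.0.1.10) to tenant host H2 (10.0.2.20).
probe = inner_traceroute_packet("10.0.1.10", "10.0.2.20",
                                sport=33434, dport=33435, inner_ttl=8)
```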
Based on the copy of the packet at the intermediate switch CPU, various kinds of information can be derived. This information can include, for example, the identity of the ingress node, the switch port on which the packet was received, packet identification information (including the source and destination addresses of the inner and outer packets and the IPv4 header's identifier), the time-to-live (TTL) for the packet, and time stamp information. Each node receiving the packet copies this information to its CPU and sends a response packet back to the originating node. The originating node collects this information and is able to determine the end-to-end path within the network taken by the traceroute packet. This is done, for a given packet identifier, by forming the end-to-end path from the responding switches' identities sorted by the TTL value each reported.
As discussed above, the packet's path through the infra network is determined by a hash computed over the packet's header information, including the UDP source and destination ports. Therefore, by varying the source and destination port information of the inner packet, which in turn influences the outer VXLAN UDP port information, other end-to-end paths taken by the packet within the network can be determined.
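The effect of varying the inner UDP source port can be sketched as follows; the CRC32 hash and the two-spine fabric are stand-ins for the vendor-specific ECMP hash and topology, so only the mechanism, not the exact mapping, is meaningful.

```python
import socket
import struct
import zlib


def ecmp_uplink(src_ip: str, dst_ip: str, sport: int, dport: int,
                proto: int, uplinks: list) -> str:
    """Select one of several equal-cost uplinks from a hash of the flow tuple.

    Real switches apply vendor-specific hash functions; CRC32 is used here only
    to illustrate that the selection is deterministic per flow tuple.
    """
    key = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
           struct.pack("!HHB", sport, dport, proto))
    return uplinks[zlib.crc32(key) % len(uplinks)]


spines = ["S1", "S2"]
# Varying only the inner UDP source port changes the hash input, so successive
# probes can be steered over different spine switches in the infra network.
for sport in range(33434, 33442):
    print(sport, "->", ecmp_uplink("10.0.1.10", "10.0.2.20", sport, 33435, 17, spines))
```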
The interfaces 168 are typically provided as interface cards (sometimes referred to as "line cards"). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the router 110. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, Digital Subscriber Line ("DSL") interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, Asynchronous Transfer Mode (ATM) interfaces, High Speed Serial Interfaces (HSSI), Packet-Over-SONET (POS) interfaces, Fiber Distributed Data Interfaces (FDDI) and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile random access memory (RAM). The independent processors may control such communications-intensive tasks as packet switching, media control and management. By providing separate processors for the communications-intensive tasks, these interfaces allow the master microprocessor 162 to efficiently perform routing computations, network diagnostics, security functions, etc.
Although the system shown in the accompanying drawings is one specific network device of the present technology, it is by no means the only network device architecture on which the concepts herein can be implemented.
Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory 161) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc.
The communications interface 240 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 230 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 225, read only memory (ROM) 220, and hybrids thereof.
The storage device 230 can include software modules 232, 234, 236 for controlling the processor 210. Other hardware or software modules are contemplated. The storage device 230 can be connected to the system bus 205. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 210, bus 205, display 235, and so forth, to carry out the function.
In one example of the methodology described herein, virtual extensible local area network (“VXLAN”) is utilized as the infrastructure layer's encapsulation protocol. However, the use of VXLAN is exemplary only, and the methodology can be implemented using any encapsulation technology such as, for example, Transparent Interconnection of Lots of Links (TRILL). In VXLAN, the user's data traffic is injected into the VXLAN network from an ingress switch which encapsulates the user's data traffic within a VXLAN packet with the UDP source port set to a value based on the inner packet's header information. This dynamic setting of the UDP source port in a VXLAN header allows the packet to follow alternate Equal Cost Multi-Paths (ECMPs) within the VXLAN infra-network. At the egress switch (the boundary of the VXLAN network), the packet is de-capsulated and the inner packet (the user data packet) is forwarded out.
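A simplified sketch of the de-capsulation performed at the egress switch is shown below; it assumes an outer IPv4 header without options and that the outer UDP destination port has identified the packet as VXLAN, and it omits the error handling a real VTEP would require.

```python
import struct

VXLAN_PORT = 4789   # IANA-assigned UDP destination port for VXLAN
VXLAN_HDR_LEN = 8   # flags byte, reserved bytes, 24-bit VNI, reserved byte


def decapsulate(outer_ip_packet: bytes):
    """Strip the outer IPv4, UDP, and VXLAN headers; return (VNI, inner frame)."""
    ihl = (outer_ip_packet[0] & 0x0F) * 4            # outer IPv4 header length in bytes
    _sport, dport, _length, _cksum = struct.unpack("!HHHH", outer_ip_packet[ihl:ihl + 8])
    if dport != VXLAN_PORT:
        raise ValueError("not a VXLAN packet")
    vxlan = outer_ip_packet[ihl + 8:ihl + 8 + VXLAN_HDR_LEN]
    vni = int.from_bytes(vxlan[4:7], "big")          # VNI occupies bytes 4-6 of the VXLAN header
    inner_frame = outer_ip_packet[ihl + 8 + VXLAN_HDR_LEN:]
    return vni, inner_frame
```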
Network connectivity in the network topology 300 can flow through the leaf switches 304. In fact, in general, the spine switches 302 will only connect to leaf switches 304. Accordingly, any connections to external networks or servers, such as networks 306 and 308, will flow through the leaf switches 304.
In some cases, a leaf switch may only perform routing functions. However, in other cases, a leaf switch can perform one or more additional functions, such as encapsulating packets, enforcing ingress or egress policies, forwarding traceroute packets to spine switches 302, receiving responsive packets from spine switches and other leaf switches, and determining the end-to-end paths taken by traceroute packets injected into infra network 301.
On the other hand, tenant hosts 310C and 310D can connect to leaf switch 304B via network 306. Similarly, the wide area network (WAN) can connect to the leaf switches 304C or 304D via network 308. Networks 306 and 308 can be public and/or private networks. In some cases, network 306 can be a Layer 2 network, and network 308 can be a Layer 3 network, for example.
In one example, it is desirable to determine the interconnectivity between tenant host 310A and tenant host 310C. Data packets sent from host 310A in tenant network 303 can take various paths to tenant host 310C. For example, one path could be from leaf node 304A to spine switch 302A to leaf node 304B and then to tenant host 310C via network 306. Another path could be via spine switch 302B and leaf node 304B to tenant host 310C. In dense networks, due to the large number of leaf nodes 304 and spine switches 302, it is often quite difficult to determine the exact path taken by a data packet. Thus, if a failure occurs during the routing of the packet, it may be difficult to determine where the failure occurred. Further, because data network traffic may involve encapsulation protocols such as VXLAN, it is important to explore the connections at the VXLAN infrastructure layer.
According to an example of the present disclosure, a traceroute packet can be generated at ingress leaf node 304A on behalf of a host, such as tenant host 310A, and injected into the hardware forwarding plane of the ingress leaf node to which tenant host 310A is connected, in this example leaf node 304A, in order to determine a path from the source host 310A to a destination host, i.e., host 310C. In one example, network 300 includes encapsulation technology such as VXLAN, Transparent Interconnection of Lots of Links ("TRILL"), FabricPath, or other encapsulation protocols. Although the present disclosure can be used with any encapsulation technology, the ensuing discussion shall focus on VXLAN encapsulation.
VXLAN provides a traffic encapsulation scheme which allows network traffic to be carried across layer 2 (L2) and layer 3 (L3) networks over a logical tunnel. Such VXLAN tunnels can be originated and terminated through VXLAN tunnel end points (VTEPs). Moreover, VXLANs can include VXLAN segments, which can include VXLAN L2 and/or L3 overlay networks over which VMs communicate. The VXLAN segments can be identified through a VXLAN network identifier (VNI), which can specifically identify an associated VXLAN segment.
Leaf node 304A receives the traceroute packet and encapsulates it within a VXLAN packet with a source UDP port set to a value based on the traceroute packet's header information. For example, a particular source port value in the inner traceroute packet yields a VXLAN header source UDP port that causes leaf node 304A to forward the encapsulated packet to spine switch 302A. Leaf node 304A can randomly choose and dynamically vary the source UDP port of the VXLAN packet in order to route the traceroute packet through different spine switches 302 within infra network 301. In this fashion, alternate Equal Cost Multi-Path ("ECMP") routes within infra network 301 can be explored.
Outer packet 418 also includes an outer IP header 420, which includes a source IP address identified as leaf node L1 304A and a destination IP address identified as leaf node L2 304B. Thus, the traceroute is to be from tenant host H1 to tenant host H2 via leaf node L1 and leaf node L2. The spine switches 302 that will be utilized depend upon the values of source port 404 and destination port 406. Outer IP header 420 also includes an outer packet TTL value 422, which can be set to an arbitrarily high value, such as, for example, 64. The present disclosure is not limited by any specific value for the outer packet TTL value 422. The outer TTL value limits the maximum number of intermediate hops through which the outer packet can reach the outer destination.
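The outer encapsulation described above can be sketched as follows, with the outer UDP source port derived from a hash of the inner headers as RFC 7348 recommends; the VNI, addresses, and identifier are illustrative assumptions, the inner Ethernet header is not shown, and checksums are again omitted.

```python
import socket
import struct
import zlib


def vxlan_source_port(inner_headers: bytes) -> int:
    """Derive the outer UDP source port from a hash of the inner packet headers,
    constrained to the dynamic range (49152-65535) suggested by RFC 7348."""
    return 49152 + (zlib.crc32(inner_headers) % 16384)


def vxlan_encapsulate(inner_frame: bytes, vni: int,
                      outer_src: str, outer_dst: str, outer_ttl: int = 64) -> bytes:
    """Wrap an inner frame in VXLAN / UDP / IPv4 headers with an arbitrarily high outer TTL."""
    vxlan = struct.pack("!BBBB", 0x08, 0, 0, 0) + vni.to_bytes(3, "big") + b"\x00"
    sport = vxlan_source_port(inner_frame[:28])      # hash over the inner IP + UDP headers
    udp = struct.pack("!HHHH", sport, 4789, 8 + len(vxlan) + len(inner_frame), 0)
    total_len = 20 + len(udp) + len(vxlan) + len(inner_frame)
    ip = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, total_len, 0x4242, 0,
                     outer_ttl, 17, 0,
                     socket.inet_aton(outer_src), socket.inet_aton(outer_dst))
    return ip + udp + vxlan + inner_frame


# Composed with the inner-packet sketch above, leaf node L1 (192.0.2.1) might emit:
#   vxlan_encapsulate(probe, vni=5000, outer_src="192.0.2.1", outer_dst="192.0.2.2")
```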
At step 502, leaf node 304A encapsulates the packet within an outer packet that includes UDP port numbers set to a value based on the source port information in the traceroute packet. In one example, the network 300 is a VXLAN and the outer packet includes a VXLAN header 416. At step 504, the encapsulated traceroute packet is forwarded by leaf node 304A to the next switch in the infra network 301. In this example, this could be spine switch S1 302A or spine switch S2 302B. The spine switch 302 that is chosen depends upon the values of source port 404 and destination port 406 of inner packet 402.
At step 506, a copy of the encapsulated packet is sent to the CPU of each intermediate switch 302 to which the traceroute packet has been forwarded. This can be done in several ways, such as by using an access control list ("ACL")-based CPU logging feature that matches on the inner packet's UDP port and TTL value. In another example, a special signature string can be encoded in the payload of the inner packet and then used as a lookup key in the intermediate switches to perform CPU logging. The payload of the inner packet contains information about the originating node (e.g., leaf node L1), the packet ID, and a time signature. All of this information is copied to the CPU of each spine switch 302.
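A minimal sketch of the CPU-logging decision at an intermediate switch is given below, covering both options described above; the port range and signature string are hypothetical, and the original packet continues to be forwarded regardless of the outcome.

```python
PROBE_SIGNATURE = b"TRACEROUTE-SIG"   # hypothetical marker carried in the inner payload


def should_copy_to_cpu(inner_udp_dport: int, inner_ttl: int, inner_payload: bytes) -> bool:
    """Decide whether an intermediate switch should log a copy of this packet to its CPU.

    Mirrors the two mechanisms above: an ACL-style match on the inner UDP port
    and TTL, or a lookup on a signature string encoded in the inner payload.
    """
    acl_match = 33434 <= inner_udp_dport < 33534 and inner_ttl > 0
    signature_match = PROBE_SIGNATURE in inner_payload
    return acl_match or signature_match
```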
At step 508, spine switch 302A (if this is where leaf node 304A forwarded the traceroute packet) processes the stored packet and creates a response packet to the originating node, leaf node 304A. This response packet includes information such as the IP address or identifier of the intermediate switch (switch S1), the value of the outer TTL of the CPU packet copy, the ingress and egress ports on which the packet was captured, outer and inner packet header information such as the packet's source and destination addresses, and time stamp information. For example, spine switch S1 302A can respond back to leaf node L1 304A with information that it received a traceroute packet at a certain time and that the outer TTL of the packet was, for example, 63. Similarly, leaf node L1 304A may have forwarded a traceroute packet to spine switch S2 302B. Switch S2 302B copies the packet to its CPU and forwards a response to leaf node L1 304A with the information described above.
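The response information can be modeled as in the following sketch; the field set matches the items listed above, while the port names and types are illustrative.

```python
import time
from dataclasses import dataclass


@dataclass
class TracerouteResponse:
    """Fields an intermediate switch reports back to the originating leaf node."""
    switch_id: str      # IP address or identifier of the responding switch
    outer_ttl: int      # outer TTL observed on the CPU copy of the probe
    ingress_port: str   # port on which the probe was captured
    egress_port: str    # port on which the probe was forwarded onward
    packet_id: int      # identifier copied from the probe's IP header
    timestamp: float    # time at which the probe was seen


# Example: spine switch S1 saw the probe with outer TTL 63 and reports it to L1.
resp = TracerouteResponse("S1", 63, "eth1/1", "eth1/2", 0x1234, time.time())
```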
At step 510, spine switch S1 302A forwards the traceroute packet to other intermediate nodes in the network 300, for example, leaf node L2 304B. Leaf node L2 304B also copies the packet information to its CPU and generates a response message, containing the information described above, that is forwarded back to the node that originated the packet, leaf node L1 304A. In this example, leaf node L2 304B will inform leaf node L1 304A that the outer TTL of the packet it received had a value of 62. Leaf node L1 304A interprets this to mean that the traceroute packet arrived at a spine switch 302, either switch S1 or S2, before being forwarded to leaf node L2 304B, because the outer packet TTL value detected at the spine switch 302 (63) was higher than the outer packet TTL value detected at leaf node L2 304B (62). In this fashion, the originating node, leaf node L1 304A, can, at step 512, determine the end-to-end order in which the traceroute packet traversed infra network 301. In this example, leaf node L1 304A determines that there are two paths from host H1 to host H2: one path being L1-S1-L2 and the other being L1-S2-L2. This analysis can be expanded to dense networks that include hundreds of paths with multiple hops.
The packet paths are stitched together based on the response packet's identifier field and TTL value. By sorting or ordering the responding switches' identifications (e.g., the switch's IP address) by the TTL value reported for a given packet identifier, a path is constructed. To explore multiple paths, the originating leaf switch L1 304A can generate multiple traceroute packets with varying inner packet UDP source ports. Since each such packet is likely to take a different path within the infra network, the originating switch can explore all paths.
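The stitching step can be sketched as follows; it consumes response records like those in the previous sketch, and the only property relied upon is that a higher reported outer TTL means an earlier hop.

```python
from collections import defaultdict


def stitch_paths(responses, origin="L1"):
    """Reconstruct one end-to-end path per probe from the collected responses.

    Responses are grouped by the probe's packet identifier and ordered by the
    reported outer TTL, highest first, since the node that saw the highest TTL
    was the earliest hop after the originator.
    """
    by_probe = defaultdict(list)
    for r in responses:
        by_probe[r.packet_id].append(r)
    paths = {}
    for packet_id, hops in by_probe.items():
        ordered = sorted(hops, key=lambda r: r.outer_ttl, reverse=True)
        paths[packet_id] = [origin] + [r.switch_id for r in ordered]
    return paths


# Two probes sent with different inner UDP source ports might yield, for example:
#   probe 0x1234: S1 reports TTL 63, L2 reports TTL 62  ->  ["L1", "S1", "L2"]
#   probe 0x1235: S2 reports TTL 63, L2 reports TTL 62  ->  ["L1", "S2", "L2"]
```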
Leaf L1 304A is also able to identify the last node (exit node) that the traceroute packet encountered in infra network 301. This occurs when the last node identifies the inner packet TTL value as 1, and sends a message, i.e., an Internet Control Message Protocol (“ICMP”) unreachable message, to the originating node. When leaf node L1 304A receives this message from the last node in the traceroute packet path, i.e., leaf node L2 304B, leaf node L1 304A knows that there are no further nodes in the path and can thus identify the entire end-to-end path taken by the traceroute packet within infra network 301, including the exit node. By varying the UDP source port of the inner packet, all other possible alternate paths within infra network 301 can be identified using this same method. Finally, once all the end-to-end paths within infra network 301 are known, other methods can be used to determine alternate paths beyond the infra network 301. Thus, once both the interconnectivity for a given flow within infra network 301 and the ECMP paths outside of the infra network 301 are known, an accurate view of the packet path for various flows between any two end points inside and outside of infra network 301 can be determined.
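Because ICMP error messages quote the header of the datagram that triggered them, the originating node can match an ICMP unreachable message from the exit node to a specific probe, as in this sketch; the offsets assume a quoted IPv4 header without options, and the matching rule is an assumption rather than a mechanism stated in the disclosure.

```python
import struct


def probe_id_from_icmp(icmp_message: bytes) -> int:
    """Recover the probe's packet identifier from an ICMP error message.

    The 8-byte ICMP header is followed by the quoted IP header of the offending
    datagram; the identifier field of that quoted header lets the originating
    leaf node associate the message with the probe whose path is now complete.
    """
    quoted_ip_header = icmp_message[8:28]
    ident = struct.unpack("!BBHHHBBH4s4s", quoted_ip_header)[3]
    return ident
```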
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
This application claims priority to U.S. Provisional Patent Application No. 61/900,359, entitled "A Scalable Way to do Aging of a Very Large Number of Entities," filed on Nov. 5, 2013, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country
---|---|---
20150124629 A1 | May 2015 | US

Number | Date | Country
---|---|---
61900359 | Nov 2013 | US