This disclosure relates generally to computer networks, and more specifically, to periodic communications, such as communications used for liveliness detection, between devices in a computer network.
Applications executing within a network environment frequently utilize “keep alive” messaging schemes to monitor the operational status of other applications within the network. For example, applications executing on network devices within a network environment send periodic packets to each other to confirm connectivity and to indicate the operational status of each device. These periodic packets are sometimes referred to as “keepalives” or “hellos.” For example, a first application executing on one network device may send periodic packets to a peer application executing on another network device every 50 milliseconds (ms) to indicate that the first application is still operational. Likewise, the first application may detect reception of corresponding periodic packets from the peer application within the same period of time (e.g., 50 ms). When a threshold number of packets have not been received in the allotted time frame, the application determines that a session failure event has occurred, such as failure of the network device on which the peer application is executing, failure of a link or node connecting the two network devices, or failure of the peer application itself. In response to the failure, the network device on which the first application is executing may take certain actions, such as redirecting communications to a different peer application.
As one example, routers may exchange periodic packets by establishing a session provided by the bidirectional forwarding detection (BFD) protocol. In accordance with BFD, a first router periodically sends BFD packets at a negotiated transmission time interval and detects a session failure event when the router does not receive a BFD packet from a second router within a session detection time interval. For instance, a router may negotiate to receive BFD packets every 50 ms from a peer router and may independently utilize a detection multiplier of three (3) times that interval, i.e., 150 ms in this example, for detecting failure. If the receiving router does not receive a BFD packet from the peer router within the 150 ms session detection time interval, the receiving router detects a connectivity failure with respect to the second router. Consequently, the receiving router may update its routing information to route traffic around the second router. Further details of the BFD protocol may be found in the proposed standard for BFD, “Bidirectional Forwarding Detection (BFD),” RFC 5880, by D. Katz and D. Ward (Juniper Networks, June 2010, ISSN: 2070-1721), the entire content of which is incorporated herein by reference.
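For illustration only, the following Python sketch (not part of the BFD specification or of the techniques claimed herein) shows how a detection time may be derived from a negotiated receive interval and a detection multiplier; the function name and millisecond units are assumptions made for the example.

```python
def bfd_detection_time_ms(negotiated_rx_interval_ms, detect_mult):
    """Detection time is the negotiated receive interval scaled by the
    detection multiplier, e.g., 50 ms * 3 = 150 ms."""
    return negotiated_rx_interval_ms * detect_mult

# Example from the text: BFD packets expected every 50 ms, multiplier of three.
assert bfd_detection_time_ms(50, 3) == 150
```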
In general, techniques of this disclosure are directed to reducing false alarms in network devices utilizing keep-alive messaging schemes. As described herein, in order to potentially avoid false alarms, a transmitting network device may adjust quality of service (QOS)/type of service (TOS) settings in keep-alive probe packets for a communication session that are sent later in a detection interval for the communication session such that the keep-alive probe packets have escalating priorities. That is, when transmitting keep-alive probe packets for a given communication session, the network device monitors whether a response communication (e.g., keep-alive response packets or asynchronous keep-alive probe packets) has been received from the peer device within the current detection interval and sets the QOS/TOS settings within the headers of the keep-alive probe packets accordingly. As such, keep-alive probe packets sent by the network device later in the detection interval receive increased QOS/TOS priority and, therefore, receive preferential treatment by intermediate network elements.
In addition, for keep-alive probe packets that are sent later in the detection interval, the network device may also insert a host-level preferential indicator within each of the packets to request preferential treatment at both the transmitting network device and the peer network device. For example, the host-level preferential indicator may indicate that the corresponding keep-alive probe packet is the last to be sent prior to expiration of the detection interval. Hardware and software of the network devices that handle packet transmissions, such as operating system (kernel) software including the network stack, interface drivers, and network interface hardware such as a network interface card (NIC), provide preferential treatment when servicing the transmission or reception of such keep-alive probe packets and response packets.
Moreover, the peer network device may generate response communications so as to inherit the QOS/TOS settings and any host-level preferential indicator of the most recently received keep-alive probe packet.
In one example, a method includes maintaining, with a network device, a keep-alive transmit timer and a keep-alive detection timer associated with a communication session with a peer network device within a network. The keep-alive transmit timer defines a transmit time interval for transmitting keep-alive probe packets for the communication session, and the keep-alive detection timer defines a current detection interval within which a response communication (e.g., keep-alive response messages or asynchronous keep-alive probe messages) from the peer network device must be received to avoid a failure event for the communication session. The method further includes, responsive to expiration of the keep-alive transmit timer during the current detection interval, outputting, by the network device, a first keep-alive probe packet associated with the communication session with the peer network device, wherein the keep-alive probe packet includes quality of service (QoS) settings that control forwarding priority of the keep-alive probe packet by packet-switching devices within the network, and wherein the QoS settings have a value indicating a first priority level. The method further includes, responsive to a second expiration of the keep-alive transmit timer during the current detection interval, determining whether a communication has been received from the peer network device since output of the first keep-alive probe packet and, when the communication has not been received, outputting a second keep-alive probe packet associated with the communication session, wherein the second keep-alive probe packet includes QoS settings having a value indicating a second priority level increased from the first priority level.
In another example, a method includes receiving, by a network device, a keep-alive probe packet associated with a communication session with a peer network device within a network, wherein the keep-alive probe packet includes quality of service (QoS) settings that control forwarding priority of the keep-alive probe packet by packet-switching devices within the network. The method further includes constructing, with the network device, a keep-alive response packet, copying the QoS settings of the keep-alive probe packet to QoS settings within the keep-alive response packet, and outputting the keep-alive response packet from the network device to the peer network device.
In another example, a network device includes a memory, programmable processor(s), a network interface, and a control unit. The control unit is configured to perform the operations described herein.
In another example, a computer-readable storage medium is encoded with instructions. The instructions cause one or more programmable processors of a network device to perform the operations described herein.
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of this disclosure will be apparent from the description and drawings, and from the claims.
In general, network devices 12 execute applications that periodically send status messages (e.g., “periodic packets” or “keep-alive packets”) to one another in order to monitor the operational and connectivity status of each other. That is, by sending periodic inquiries and detecting receipt of similar periodic inquiries, network devices 12 detect any failures, whether as a result of failure of one or more of network devices 12, network elements 16, or the links 18 between them. Upon detecting such a failure, the detecting network device 12 takes certain actions, such as redirecting communications to a different peer application. Network devices 12 may be end-user computers, desktops, laptops, mobile devices, servers, virtual machines or networking infrastructure, such as routers, switches, gateways, firewalls, or other network-enabled devices.
In the example of
In response, network device 12B transmits keep-alive response packets 14B on communication session 15. That is, upon receiving a keep-alive probe packet 14A, network device 12B constructs a respective keep-alive response packet 14B and outputs the response packet to network device 12A over the communication session. Keep-alive response packets 14B are, therefore, processed by transmit hardware/software of network device 12B when transmitting the packets, processed by hardware/software of network elements 16 (e.g., packet forwarding ASICs, switch fabrics, packet queues) when transporting the packets, and ultimately processed by receive hardware/software of network device 12A.
In operation, network device 12A implements a transmission (or “transmit”) timer that controls transmission of keep-alive probe packets 14A over the communication session. That is, the transmit timer measures intervals for network device 12A to transmit a keep-alive probe packet 14A over the communication session, and triggers transmission of the packet upon reaching the negotiated interval.
Network device 12A implements a detection timer to monitor receipt of keep-alive response packets 14B. The detection timer measures intervals between received keep-alive response packets 14B over the communication session. Using the detection timer, network device 12A determines the operational status of network device 12B, i.e., whether network device 12B is operational and in communication. For instance, if network device 12A does not receive a keep-alive response packet 14B within the session detection time, the network device determines that a network event has occurred that is preventing communication, such as a failure of an intermediate link 18 or network element 16 or failure of hardware and/or software of network device 12B. In many instances, network device 12A sets the detection timer to a multiple (e.g., an integer multiple) of the negotiated transmit interval, such as a value of 3*Transmit_Interval. For example, if the transmit interval being used by network device 12A is 50 ms, network device 12A may determine that a failure has occurred if no keep-alive response packet 14B is received in 150 ms, i.e., three (3) transmit intervals.
In some situations, the detection timer of network device 12A may expire even though network device 12B has not failed and communication connectivity still exists between the devices. For example, network congestion, i.e., heavy traffic loads leading to lengthy packet queues, within intermediate network elements 16 may cause communication delays that exceed the detection interval maintained by network device 12A. During this period, network device 12A will typically have sent multiple keep-alive probe packets 14A, one for each respective transmit interval. Each of the successive keep-alive probe packets 14A sent by network device 12A during the detection interval traverses intermediate elements 16 and links 18 and may be subject to the same congestion and network delays, thereby giving rise to a network event upon expiration of the detection interval at network device 12B.
As described herein, in order to potentially avoid false alarms, network device 12A may adjust quality of service (QOS)/type of service (TOS) settings in keep-alive probe packets 14A that are sent later in the detection interval so as to have escalating priorities. That is, when transmitting keep-alive probe packets 14A, network device 12A monitors whether keep-alive response packets 14B have been received and sets the QOS/TOS settings within the header of the keep-alive probe packets 14A accordingly. For example, network device 12A may utilize an increased QOS/TOS setting in an outbound keep-alive probe packet 14A when an expected keep-alive response packet 14B has not been received within a current detection interval. Moreover, network device 12A may utilize a further increased QOS/TOS setting when transmitting subsequent keep-alive probe packets 14A when a keep-alive response packet 14B still has not been received within the detection interval. As such, keep-alive probe packets 14A sent by network device 12A at a time later in the detection interval of network device 12B receive increased QOS/TOS priority and, therefore, receive preferential treatment by intermediate network elements 16 and by queuing and processing operations of network devices 12A, 12B.
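For purposes of illustration, a minimal Python sketch of this escalation behavior follows; the specific DS/TOS code points, the single-step escalation policy, and the function names are assumptions made for the example rather than requirements of the techniques described herein.

```python
import socket

# Example DS/TOS byte values for each escalation step; the particular code
# points are assumptions, not drawn from the disclosure above.
TOS_BY_LEVEL = {"NORMAL": 0x00, "MEDIUM": 0x60, "HIGH": 0xC0}

def next_probe_tos(response_seen_since_last_probe, current_level):
    """Pick the DS/TOS byte for the next keep-alive probe packet 14A: reset to
    NORMAL when a response packet 14B has arrived in the current detection
    interval, otherwise escalate one level (and saturate at HIGH)."""
    if response_seen_since_last_probe:
        return "NORMAL", TOS_BY_LEVEL["NORMAL"]
    escalation = {"NORMAL": "MEDIUM", "MEDIUM": "HIGH", "HIGH": "HIGH"}
    level = escalation[current_level]
    return level, TOS_BY_LEVEL[level]

def send_probe(sock, payload, tos):
    # IP_TOS sets the DS/TOS byte of outgoing IPv4 packets on Linux and most
    # Unix-like platforms (an assumption of this user-space sketch).
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    sock.send(payload)
```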
In addition, for keep-alive probe packets 14A that are sent later in the detection interval, network device 12A may insert a host-level preferential indicator within each of the packets to request preferential treatment by network devices 12A, 12B. For example, network device 12A may include the host-level preferential indicator within the final keep-alive probe packet 14A to be transmitted before expiration of the detection interval of network device 12B. Keep-alive probe packets 14A containing the host-level preferential indicator are serviced with a higher priority by both the transmission hardware/software of network device 12A and the receive hardware/software of network device 12B. Example hardware/software of network devices 12A, 12B that typically handles packet transmissions includes operating system (kernel) software such as the network stack, interface drivers, and network interface hardware such as a network interface card (NIC). The hardware/software on network devices 12A, 12B may service the transmission or reception of keep-alive probe packets 14A having the host-level preferential indicator on an interrupt-driven basis rather than via a thread polling scheme that would otherwise be used for keep-alive probe packets 14A that do not include the preferential indicator. As another example, hardware/software of network devices 12A, 12B may maintain separate transmit and receive queues for keep-alive probe packets 14A that contain the preferential indicator, thereby bypassing queues used for packets that do not contain the host-level preferential indicator.
In this way, the techniques described herein may help ensure timely delivery of keep-alive probe packets 14A when expiration of the detection interval maintained by network device 12A is approaching, thereby increasing the likelihood of the keep-alive probe packets reaching network device 12B and avoiding false alarms due to network congestion within network elements 16 or network devices 12A, 12B themselves.
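The separate-queue treatment described above might be sketched as follows; the class and method names are illustrative assumptions, and a production implementation would typically reside in kernel, driver, or NIC logic rather than user-space Python.

```python
from collections import deque
from typing import Optional

class KeepAliveTxQueues:
    """Sketch of the separate-queue idea: packets carrying the host-level
    preferential indicator bypass the ordinary transmit queue."""

    def __init__(self):
        self.normal = deque()
        self.preferential = deque()

    def enqueue(self, packet, preferential):
        (self.preferential if preferential else self.normal).append(packet)

    def dequeue(self) -> Optional[bytes]:
        # Service the preferential queue first so flagged keep-alive packets
        # are never stuck behind a backlog of ordinary traffic.
        if self.preferential:
            return self.preferential.popleft()
        if self.normal:
            return self.normal.popleft()
        return None
```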
Moreover, in some example implementations, network device 12B generates keep-alive response packets 14B so as to inherit the QOS/TOS settings and any host-level preferential indicator of the most recently received keep-alive probe packet 14A. That is, when constructing keep-alive response packets 14B, network device 12B may set the QOS/TOS settings within the packet header to be the same as the QOS/TOS settings within the packet header of the most-recently received keep-alive probe packet 14A. Moreover, network device 12B may also set the host-level preferential indicator within keep-alive response packets 14B to be the same as the host-level preferential indicator of the most-recently received keep-alive probe packet 14A. As such, in the event a keep-alive probe packet 14A is received having an escalated priority and a host-level preferential indicator, thereby receiving prioritized processing and avoiding network congestion within intermediate network elements 16 and/or network devices 12A, 12B, the keep-alive response packet 14B sent in response thereto will automatically have the same escalated priority and host-level preferential indicator. Thus, in this example implementation, the keep-alive response packet 14B will similarly have an increased likelihood of avoiding network congestion within intermediate network elements 16 and/or network devices 12A, 12B so as to be received and processed by network device 12A within the expected time frame. In other words, receipt of a keep-alive probe packet 14A having an escalated priority and a host-level preferential indicator provides an indication to network device 12B that network device 12A is operational but is not receiving keep-alive response packets 14B and, therefore, has escalated the priorities and host-level preferential indicator in an effort to avoid false alarm triggering of a failure of the keep-alive messaging scheme. As such, network device 12B mirrors the priorities and host-level preferential indicator into keep-alive response packets 14B when generating the keep-alive response packets to further assist in avoiding triggering false alarms.
In one example implementation, applications executing on network devices 12A, 12B may configure the Differentiated Services Field (DS Field) in the IPv4 and IPv6 headers of the keep-alive probe packets 14A and keep-alive response packets 14B to set the desired QOS/TOS priorities for the packets. Further example details of the differentiated services field in IP are described in RFC 2474, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, December 1998, the entire contents of which are incorporated herein by reference.
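As a non-limiting illustration of the DS field layout defined in RFC 2474, the following sketch packs a 6-bit DSCP value (together with the 2-bit ECN field) into the 8-bit DS byte and applies it to a socket; the availability of the IP_TOS and IPV6_TCLASS socket options is platform-dependent, and their use here is an assumption of the example.

```python
import socket

def ds_byte(dscp, ecn=0):
    """Pack a 6-bit DSCP and a 2-bit ECN value into the 8-bit DS field
    (the former IPv4 TOS byte / IPv6 Traffic Class byte) per RFC 2474."""
    assert 0 <= dscp < 64 and 0 <= ecn < 4
    return (dscp << 2) | ecn

# Class Selector 6 (DSCP 48), commonly used for network control, gives 0xC0.
assert ds_byte(48) == 0xC0

def set_priority(sock, dscp):
    """Apply the DS field to outgoing packets; IP_TOS / IPV6_TCLASS are
    exposed on Linux and most Unix-like platforms (an assumption here)."""
    if sock.family == socket.AF_INET6:
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, ds_byte(dscp))
    else:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ds_byte(dscp))
```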
Moreover, when utilizing TCP/IP-based communication sessions, the applications of network devices 12A, 12B may utilize one or more of the Reserved bits or the “Urgent” (URG) bit of the TCP header to carry the host-level preferential indicator. As another example implementation, when utilizing BFD communication sessions, network devices 12A, 12B may utilize one or more bits of the Diagnostic (Diag) field of the BFD header to carry the host-level preferential indicator. As another example, applications may utilize the Options field within the IP header to carry the host-level preferential indicator.
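For illustration, the following sketch builds the mandatory section of a BFD control packet per RFC 5880 and places a value in its 5-bit Diag field; carrying the host-level preferential indicator in the Diag field follows this disclosure's proposal and is not standard BFD semantics, and the default interval and state values are assumptions of the example.

```python
import struct

def bfd_control_packet(my_disc, your_disc, *, diag=0, state=3,
                       detect_mult=3, min_tx_us=50000, min_rx_us=50000):
    """Build a 24-byte BFD control packet (RFC 5880, no authentication).
    diag: 5-bit Diagnostic field; per this disclosure, a code here could
    carry the host-level preferential indicator (an assumption, not RFC
    semantics). state=3 corresponds to the Up state."""
    vers = 1
    byte0 = (vers << 5) | (diag & 0x1F)      # Vers (3 bits) | Diag (5 bits)
    byte1 = (state & 0x3) << 6               # Sta (2 bits) | P F C A D M = 0
    length = 24                              # mandatory section only
    return struct.pack("!BBBBIIIII", byte0, byte1, detect_mult, length,
                       my_disc, your_disc, min_tx_us, min_rx_us, 0)

# Example: mark the packet with an illustrative "preferential" diag code.
pkt = bfd_control_packet(0x11111111, 0x22222222, diag=0x1F)
assert len(pkt) == 24
```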
As described, keep-alive probe packets 14A, 14B having increased QOS/TOS priority receive preferential treatment by intermediate network elements 16, and such packets having a host-level preferential indicator are further prioritized and efficiently processed by network devices 12A, 12B. In this way, the techniques described herein may help ensure timely delivery of keep-alive probe packets 14A, 14B so as to avoid false alarms due to network congestion within network elements 16 or network devices 12A, 12B themselves.
Moreover, in some examples, both network devices 12A, 12B generate keep-alive probe packets 14A, 14B so as to inherit the QOS/TOS settings and any host-level preferential indicator of the keep-alive probe packet most recently received from the other network device. That is, upon receiving an incoming keep-alive probe packet having increased QOS/TOS settings and a host-level preferential indicator, the receiving one of the network devices 12 copies the settings into the next keep-alive probe packet that it transmits. For example, receipt of a keep-alive probe packet 14A having an escalated priority and a host-level preferential indicator provides an indication to the receiving network device 12B that network device 12A is operational but is not receiving keep-alive probe packets 14B from network device 12B and that, in response, network device 12A has escalated the priorities and host-level preferential indicator in an effort to avoid false alarm triggering of a failure of the keep-alive messaging scheme. As such, network device 12B mirrors the priorities and host-level preferential indicator into keep-alive probe packets 14B when generating the keep-alive probe packets to further assist in avoiding triggering false alarms.
The techniques described above with respect to
In this example, network device 100 includes a network interface 101 to send and receive network packets. In addition, network device 100 includes a microprocessor 110 executing operating system 106 to provide an execution environment for one or more applications 103 that communicate with other network devices over a packet-based network. In general, applications 103 represent any component of a network device that utilizes keep-alive messages to communicate with other network devices. As discussed herein, in example implementations, network device 100 may be an endpoint network device, e.g., a user computing device, backend server or virtual machine executing in the “cloud.” Example user-related applications include email applications, video conferencing applications, peer computing applications, and the like. As additional examples, network device 100 may provide network operations, such as operating as a router, switch, firewall, intrusion detection system, network cache, or DNS server. Example applications 103 include routing protocols and device management applications, such as BFD, SNMP or NETCONF, and the like.
In the example of
Sockets 112 are logical constructs having data structures and state data maintained by operating system 106 and may be viewed as acting as interfaces between applications 103 and protocol stack 114. For instance, sockets 112 may include one or more data structures that define data relating to one or more communication sessions, such as a file descriptor of a socket, a thread identifier of the socket, an active/backup state of the socket, and a pointer to a TCP socket within protocol stack 114. Sockets are used herein as one common mechanism for establishing communication sessions between devices, and the techniques described herein may be applied to any other type of communication session that utilizes session maintenance messages.
In this example, the TCP implementation of protocol stack 114 includes a keep-alive manager 116 that provides keep-alive functionality for each socket 112 instantiated by applications 103. For example, for each socket, keep-alive manager 116 creates timers 122, such as a transmit timer and a detection timer, for triggering transmission of keep-alive probe messages and keep-alive response messages, respectively, for the corresponding socket, i.e., for each communication session established by applications 103. Keep-alive transmit controller (“TX”) 118 and keep-alive receive controller (“RCV”) 120 operate responsive to the transmit and detection timers 122, respectively, in accordance with the techniques described herein.
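A simplified, user-space sketch of per-session transmit and detection timers follows; the use of threading timers, the callback names, and the interval handling are assumptions made for illustration and do not reflect the actual implementation of keep-alive manager 116.

```python
import threading

class SessionTimers:
    """Minimal sketch of per-session transmit and detection timers."""

    def __init__(self, tx_interval_s, detect_mult, on_transmit, on_detect_timeout):
        self.tx_interval_s = tx_interval_s
        self.detect_interval_s = tx_interval_s * detect_mult
        self.on_transmit = on_transmit              # called to send a probe
        self.on_detect_timeout = on_detect_timeout  # called on failure event
        self._tx_timer = None
        self._detect_timer = None

    def start(self):
        self._arm_tx()
        self.reset_detection()

    def _arm_tx(self):
        self._tx_timer = threading.Timer(self.tx_interval_s, self._tx_fired)
        self._tx_timer.daemon = True
        self._tx_timer.start()

    def _tx_fired(self):
        self.on_transmit()
        self._arm_tx()                              # re-arm for the next interval

    def reset_detection(self):
        # Called whenever a response (or asynchronous probe) arrives from the peer.
        if self._detect_timer is not None:
            self._detect_timer.cancel()
        self._detect_timer = threading.Timer(self.detect_interval_s,
                                             self.on_detect_timeout)
        self._detect_timer.daemon = True
        self._detect_timer.start()
```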
For example, keep-alive receive controller 120 produces message 124 to inform keep-alive transmit controller 118 when a keep-alive response message is received for a given communication session. When constructing and transmitting keep-alive probe packets 14A, keep-alive transmit controller 118 sets the QOS/TOS settings within the headers of the packets based on when the most recent keep-alive response packet 14B was received within the current detection interval. For example, keep-alive transmit controller 118 may utilize an increased QOS/TOS setting when constructing a current keep-alive probe packet 14A if a keep-alive response packet 14B has not been received since transmission of the prior keep-alive probe packet 14A within the current detection interval. Moreover, keep-alive transmit controller 118 may utilize a further increased QOS/TOS setting when transmitting subsequent keep-alive probe packets 14A in the event a keep-alive response packet 14B still has not been received at a time later in that same detection interval. As such, keep-alive probe packets 14A sent by keep-alive transmit controller 118 later in the detection interval for the respective socket 112 receive increased QOS/TOS priority and, therefore, receive preferential treatment by intermediate network elements. In addition, for keep-alive probe packets 14A that are sent later in the detection interval, keep-alive transmit controller 118 may insert a host-level preferential indicator within each of the packets to request preferential treatment by the endpoint network devices. In this way, the techniques described herein may help ensure timely delivery of keep-alive probe packets 14A when expiration of the detection interval is approaching, thereby avoiding false alarms due to network congestion within network elements 16 or network devices 12A, 12B themselves.
As one example, keep-alive transmit controller 118 may operate generally as shown in Table 1. In this example, keep-alive receive controller 120 is configured with a detection interval of five (5) times the transmit interval and, in the event keep-alive response packets 14B are not received from the peer network device, keep-alive transmit controller 118 constructs keep-alive probe packets throughout the detection interval as shown in Table 1.
In this example, keep-alive transmit controller 118 constructs the first and second keep-alive probe packets 14A within a current detection interval for the communication session as conventional keep-alive probe packets 14A, i.e., without any increased TOS/QOS settings and without any host-level preferential indicator. In the event a keep-alive response packet 14B has not been received by the time keep-alive transmit controller 118 is to transmit the third keep-alive probe packet 14A within the same detection interval, keep-alive transmit controller 118 constructs the third keep-alive probe packet 14A to have increased priority bits, e.g., set to a MEDIUM priority level, but does not at this time include a host-level preferential indicator within the keep-alive probe packet. In the event a keep-alive response packet 14B has still not been received by keep-alive receive controller 120 when keep-alive transmit controller 118 is triggered to transmit the fourth keep-alive probe packet 14A within the same detection interval, keep-alive transmit controller 118 constructs the fourth keep-alive probe packet 14A to have increased priority bits, e.g., set to a MEDIUM priority level, as well as a host-level preferential indicator. Finally, in this example, if a keep-alive response packet 14B has not been received by keep-alive receive controller 120 when keep-alive transmit controller 118 is triggered to transmit the fifth keep-alive probe packet 14A, the keep-alive transmit controller constructs the fifth keep-alive probe packet 14A within the detection interval to have increased priority bits set to a HIGH priority level and to include a host-level preferential indicator.
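The escalation schedule described above (corresponding to Table 1) can be expressed as data; the priority labels and the clamping behavior beyond the fifth probe are assumptions of this sketch.

```python
NORMAL, MEDIUM, HIGH = "NORMAL", "MEDIUM", "HIGH"

# For the Nth probe sent without a response in the current detection interval,
# use this (priority level, host-level preferential indicator) pair.
TABLE_1_SCHEDULE = {
    1: (NORMAL, False),
    2: (NORMAL, False),
    3: (MEDIUM, False),
    4: (MEDIUM, True),
    5: (HIGH,   True),
}

def probe_settings(probe_number):
    # probe_number is assumed to be >= 1; clamp anything past the end of the
    # schedule to the last (most aggressive) row.
    return TABLE_1_SCHEDULE[min(probe_number, 5)]
```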
In addition, keep-alive transmit controller 118 and keep-alive receive controller 120 of network device 100 operate to respond to keep-alive messages from other devices for a given communication session. For example, keep-alive receive controller 120 may receive an inbound keep-alive probe message 14A′. Responsive to the inbound keep-alive probe message 14A′, keep-alive receive controller 120 outputs message 126 directing keep-alive transmit controller 118 to construct and output a keep-alive response message 14B′. When generating message 126, keep-alive receive controller 120 includes any priority settings (e.g., QOS/TOS bits) and any host-level preferential indicator that were present within the inbound keep-alive probe message 14A′, thereby communicating this information to keep-alive transmit controller 118. In response, keep-alive transmit controller 118 generates keep-alive response packet 14B′ so as to inherit the QOS/TOS settings and any host-level preferential indicator of the most recently received keep-alive probe packet 14A′. That is, when constructing keep-alive response packet 14B′, keep-alive transmit controller 118 sets the QOS/TOS settings and host-level preferential indicator within the packet header to be the same as (i.e., copies) the QOS/TOS settings and host-level preferential indicator within the packet header of the most-recently received keep-alive probe packet 14A′, as specified by message 126. As such, in the event a keep-alive probe packet 14A′ is received having an escalated priority and a host-level preferential indicator, thereby receiving prioritized processing and avoiding network congestion within intermediate network elements and/or host network devices, the keep-alive response packet 14B′ sent in response thereto will automatically have the same escalated priority and host-level preferential indicator.
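The inheritance (mirroring) of the QOS/TOS settings and host-level preferential indicator into the response might be sketched as follows; the packet representation and field names are illustrative assumptions rather than the actual data structures of keep-alive transmit controller 118.

```python
from dataclasses import dataclass

@dataclass
class KeepAlivePacket:
    payload: bytes
    tos: int              # DS/TOS byte taken from the IP header
    preferential: bool    # host-level preferential indicator

def build_response(probe: KeepAlivePacket) -> KeepAlivePacket:
    """Mirror the probe's QoS settings and preferential indicator into the
    response, as described above for keep-alive response packet 14B'."""
    return KeepAlivePacket(payload=b"keep-alive-ack",
                           tos=probe.tos,
                           preferential=probe.preferential)
```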
In this example, router 230 includes a control unit 231 that comprises a routing engine 232 and a forwarding engine 234. In addition, router 230 includes a set of interface cards (IFCs) 250A-250N (collectively, “IFCs 250”) for communicating packets via inbound links 252A-252N (collectively, “inbound links 252”) and outbound links 254A-254N (collectively, “outbound links 254”).
Routing engine 232 primarily provides an operating environment for control plane protocols, such as those included in protocols 240. For example, routing engine 232 executes one or more routing protocols (“RPs”) 247 that maintain routing information 236 to reflect the current topology of a network and other network entities to which it is connected. In particular, each RP 247 updates routing information 236 to accurately reflect the topology of the network and other entities. Example routing protocols include the Multi-Protocol Border Gateway Protocol (mpBGP), the Intermediate System to Intermediate System (ISIS) routing protocol, the Open Shortest Path First (OSPF) routing protocol, and the like.
Routing engine 232 generates and programs forwarding engine 234 with forwarding information 238 that associates network destinations with specific next hops and corresponding interface ports of IFCs 250 in accordance with routing information 236. Routing engine 232 may generate forwarding information 238 in the form of a radix tree having leaf nodes that represent destinations within the network.
Based on forwarding information 238, forwarding engine 234 forwards packets received from inbound links 252 to outbound links 254 that correspond to next hops associated with destinations of the packets. U.S. Pat. No. 7,184,437, the contents of which are incorporated herein by reference in their entirety, provides details on an exemplary embodiment of a router that utilizes a radix tree for route resolution.
In one example, forwarding engine 234 is a rich and dynamic shared forwarding plane, optionally distributed over a multi-chassis router. Moreover, forwarding plane 234 may be provided by dedicated forwarding integrated circuits normally associated with high-end routing components of a network router. Further details of one example embodiment of router 230 can be found in U.S. Provisional Patent Application 61/054,692, filed May 20, 2008, entitled “STREAMLINED PACKET FORWARDING USING DYNAMIC FILTERS FOR ROUTING AND SECURITY IN A SHARED FORWARDING PLANE,” which is incorporated herein by reference.
Moreover, as shown in
BFD module 239′ implements BFD protocol-based functionalities, such as transmitting and monitoring for periodic BFD packets received by forwarding engine 234, thereby conserving resources that would otherwise be expended by routing engine 232. In case of a detected connectivity failure, BFD module 239′ is configured to transmit a failure notification, or other similar indication, to BFD module 239 of routing engine 232. In response to receiving the failure notification from BFD module 239′ of forwarding engine 234, BFD module 239 causes RP 247 to update the network topology currently stored in routing information 236 to reflect the failed link(s) represented by the BFD failure.
As shown in
In general, keep-alive manager 216 operates as described herein, such as with respect to keep-alive manager 116, to help avoid triggering false alarms with respect to the BFD protocol implemented by BFD module 239′. That is, keep-alive manager 216 provides keep-alive functionality for each BFD session instantiated by BFD module 239′. For example, for each communication session, keep-alive manager 216 instantiates a transmit timer and a detection timer for triggering transmission of BFD keep-alive messages and, if required, BFD keep-alive response messages, respectively, for the corresponding BFD session. Although not shown in
The architecture of router 230 illustrated in
Initially, the first network device receives configuration information (300) and establishes a communication session, such as a BFD session, with the peer network device (301). For instance, the first network device may receive configuration information that specifies TOS/QOS settings and host-level preferential indicators that escalate for keep-alive probe packets sent later within a detection interval of the acknowledging device, such as in the example shown in Table 1. The configuration information may be received from an administrator, e.g., via a device management session, or from the peer network device.
Upon establishing the network session with the peer network device, the first network device initiates a transmit timer for the session (302) and, responsive to expiration of the transmit timer, constructs and outputs keep-alive probe packets (304). As described herein, in order to potentially avoid false alarms, the transmitting network device adjusts the quality of service (QOS)/type of service (TOS) settings and any host-level preferential indicators in the keep-alive probe packets that are sent later in the detection interval so as to have escalating priorities. That is, when constructing a given keep-alive probe packet, the network device determines whether a communication (e.g., a keep-alive response packet or a keep-alive probe packet) has been received from the peer network device since the last keep-alive probe packet transmitted by the network device. When a communication has not been received, the network device constructs and outputs the keep-alive probe packet and sets the QoS settings based on the configuration data so as to have increased priority. Moreover, when the communication from the peer network device has not been received, the network device sets the host-level preferential indicator of the keep-alive probe packet to a value indicating that preferential treatment is requested.
The peer network device constructs a response communication (306). As one example, the peer network device may construct a keep-alive response packet in response to receiving the keep-alive probe packet. As another example, the peer network device may construct its own keep-alive probe packet upon expiration of a respective transmit timer. In any event, the peer network device copies the QoS settings and any host-level preferential indicator of the received keep-alive probe packet to the QoS settings and host-level preferential indicator of the response communication (308). Once constructed, the peer network device outputs the response communication to the first network device (310). The peer network device may output the response communication on the same communication session on which the keep-alive probe packet was received. For example, the peer network device may construct the response communication as a keep-alive response packet and output the keep-alive response packet on the same communication session. Alternatively, the peer network device may output the response communication to the first network device on a different communication session. For example, the peer network device may construct the response communication as a keep-alive probe packet (e.g., an asynchronous BFD probe packet) and output the keep-alive probe packet on a communication session different from the one on which the keep-alive probe packet was received from the first network device.
The techniques described in this disclosure may be implemented in hardware or any combination of hardware and software (including firmware). Any features described as units, modules, or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in hardware, the techniques may be realized in a processor, a circuit, a collection of logic elements, or any other apparatus that performs the techniques described herein. If implemented in software, the techniques may be realized at least in part by a non-transitory computer-readable storage medium or computer-readable storage device encoded with, having stored thereon, or otherwise comprising instructions that, when executed, cause one or more processors, such as programmable processor(s), to perform one or more of the methods described above. The non-transitory computer-readable medium may form part of a computer program product, which may include packaging materials. The non-transitory computer-readable medium may comprise random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer readable communication medium that carries or communicates code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer.
The code may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. Likewise, the term “control unit,” as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated software and hardware units configured to perform the techniques of this disclosure. Depiction of different features as units is intended to highlight different functional aspects of the devices illustrated and does not necessarily imply that such units must be realized by separate hardware or software components. Rather, functionality associated with one or more units may be integrated within common or separate hardware or software components.
Various examples have been described. These and other examples are within the scope of the following claims.
Number | Name | Date | Kind |
---|---|---|---|
5253248 | Dravida et al. | Oct 1993 | A |
5613136 | Casavant et al. | Mar 1997 | A |
5721855 | Hinton et al. | Feb 1998 | A |
5826081 | Zolnowsky | Oct 1998 | A |
5848128 | Frey | Dec 1998 | A |
5933601 | Fanshier et al. | Aug 1999 | A |
6052720 | Traversal et al. | Apr 2000 | A |
6101500 | Lau | Aug 2000 | A |
6148337 | Estberg et al. | Nov 2000 | A |
6163544 | Andersson et al. | Dec 2000 | A |
6173411 | Hirst et al. | Jan 2001 | B1 |
6212559 | Bixler et al. | Apr 2001 | B1 |
6223260 | Gujral et al. | Apr 2001 | B1 |
6255943 | Lewis et al. | Jul 2001 | B1 |
6263346 | Rodriquez | Jul 2001 | B1 |
6272537 | Kekic et al. | Aug 2001 | B1 |
6304546 | Natarajan et al. | Oct 2001 | B1 |
6310890 | Choi | Oct 2001 | B1 |
6374329 | McKinney et al. | Apr 2002 | B1 |
6389464 | Krishnamurthy et al. | May 2002 | B1 |
6393481 | Deo et al. | May 2002 | B1 |
6405289 | Arimilli et al. | Jun 2002 | B1 |
6453403 | Singh et al. | Sep 2002 | B1 |
6453430 | Singh et al. | Sep 2002 | B1 |
6466973 | Jaffe | Oct 2002 | B2 |
6477566 | Davis et al. | Nov 2002 | B1 |
6477572 | Elderton et al. | Nov 2002 | B1 |
6480955 | DeKoning et al. | Nov 2002 | B1 |
6502131 | Vaid et al. | Dec 2002 | B1 |
6507869 | Franke et al. | Jan 2003 | B1 |
6510164 | Ramaswamy et al. | Jan 2003 | B1 |
6516345 | Kracht | Feb 2003 | B1 |
6529941 | Haley et al. | Mar 2003 | B2 |
6542934 | Bader et al. | Apr 2003 | B1 |
6563800 | Salo et al. | May 2003 | B1 |
6584499 | Jantz et al. | Jun 2003 | B1 |
6618360 | Scoville et al. | Sep 2003 | B1 |
6636242 | Bowman-Amuah | Oct 2003 | B2 |
6662221 | Gonda et al. | Dec 2003 | B1 |
6681232 | Sistanizadeh et al. | Jan 2004 | B1 |
6684343 | Bouchier et al. | Jan 2004 | B1 |
6725317 | Bouchier et al. | Apr 2004 | B1 |
6738908 | Bonn et al. | May 2004 | B1 |
6751188 | Medved et al. | Jun 2004 | B1 |
6757897 | Shi et al. | Jun 2004 | B1 |
6804816 | Liu et al. | Oct 2004 | B1 |
6816897 | McGuire | Nov 2004 | B2 |
6816905 | Sheets et al. | Nov 2004 | B1 |
6850253 | Bazerman et al. | Feb 2005 | B1 |
6910148 | Ho et al. | Jun 2005 | B1 |
6922685 | Greene et al. | Jul 2005 | B2 |
6934745 | Krautkremer | Aug 2005 | B2 |
6952728 | Alles et al. | Oct 2005 | B1 |
6982953 | Swales | Jan 2006 | B1 |
6983317 | Bishop et al. | Jan 2006 | B1 |
6990517 | Bevan et al. | Jan 2006 | B1 |
7024450 | Deo et al. | Apr 2006 | B1 |
7055063 | Leymann et al. | May 2006 | B2 |
7069344 | Carolan et al. | Jun 2006 | B2 |
7082463 | Bradley et al. | Jul 2006 | B1 |
7082464 | Hasan et al. | Jul 2006 | B2 |
7085277 | Proulx et al. | Aug 2006 | B1 |
7085827 | Ishizaki et al. | Aug 2006 | B2 |
7093280 | Ke et al. | Aug 2006 | B2 |
7099912 | Ishizaki et al. | Aug 2006 | B2 |
7103647 | Aziz | Sep 2006 | B2 |
7120693 | Chang et al. | Oct 2006 | B2 |
7124289 | Suorsa | Oct 2006 | B1 |
7130304 | Aggarwal | Oct 2006 | B1 |
7131123 | Suorsa et al. | Oct 2006 | B2 |
7139263 | Miller et al. | Nov 2006 | B2 |
7151775 | Renwick et al. | Dec 2006 | B1 |
7152109 | Suorsa et al. | Dec 2006 | B2 |
7161946 | Jha | Jan 2007 | B1 |
7184437 | Cole et al. | Feb 2007 | B1 |
7200662 | Hasan et al. | Apr 2007 | B2 |
7206836 | Dinker et al. | Apr 2007 | B2 |
7219030 | Ohtani | May 2007 | B2 |
7236453 | Visser et al. | Jun 2007 | B2 |
7305492 | Bryers et al. | Dec 2007 | B2 |
7310314 | Katz et al. | Dec 2007 | B1 |
7310666 | Benfield et al. | Dec 2007 | B2 |
7313611 | Jacobs et al. | Dec 2007 | B1 |
7336615 | Pan et al. | Feb 2008 | B1 |
7359377 | Kompella et al. | Apr 2008 | B1 |
7362700 | Frick et al. | Apr 2008 | B2 |
7363353 | Ganesan et al. | Apr 2008 | B2 |
7379987 | Ishizaki et al. | May 2008 | B2 |
7391719 | Ellis et al. | Jun 2008 | B2 |
7406030 | Rijsman | Jul 2008 | B1 |
7406035 | Harvey et al. | Jul 2008 | B2 |
7433320 | Previdi et al. | Oct 2008 | B2 |
7447167 | Nadeau et al. | Nov 2008 | B2 |
7463591 | Kompella et al. | Dec 2008 | B1 |
7471638 | Torrey et al. | Dec 2008 | B2 |
7487232 | Matthews et al. | Feb 2009 | B1 |
7499395 | Rahman et al. | Mar 2009 | B2 |
7506194 | Appanna et al. | Mar 2009 | B2 |
7508772 | Ward et al. | Mar 2009 | B1 |
7522599 | Aggarwal et al. | Apr 2009 | B1 |
7523185 | Ng et al. | Apr 2009 | B1 |
7539769 | McGuire | May 2009 | B2 |
7561527 | Katz et al. | Jul 2009 | B1 |
7606898 | Hunt et al. | Oct 2009 | B1 |
7609637 | Doshi et al. | Oct 2009 | B2 |
7639624 | McGee et al. | Dec 2009 | B2 |
7720047 | Katz et al. | May 2010 | B1 |
7720061 | Krishnaswamy et al. | May 2010 | B1 |
7724677 | Iwami | May 2010 | B2 |
7738367 | Aggarwal et al. | Jun 2010 | B1 |
7760652 | Tsillas et al. | Jul 2010 | B2 |
7764599 | Doi et al. | Jul 2010 | B2 |
7765306 | Filsfills et al. | Jul 2010 | B2 |
7765328 | Bryers et al. | Jul 2010 | B2 |
7813267 | Tsai et al. | Oct 2010 | B2 |
7852778 | Kompella | Dec 2010 | B1 |
7860981 | Vinokour et al. | Dec 2010 | B1 |
7911938 | Florit et al. | Mar 2011 | B2 |
7940646 | Aggarwal et al. | May 2011 | B1 |
7957330 | Bahadur et al. | Jun 2011 | B1 |
7990888 | Nadeau et al. | Aug 2011 | B2 |
8019835 | Suorsa et al. | Sep 2011 | B2 |
8077726 | Kumar et al. | Dec 2011 | B1 |
8189579 | Krishnaswamy et al. | May 2012 | B1 |
8254271 | Nadeau et al. | Aug 2012 | B1 |
8266264 | Hasan et al. | Sep 2012 | B2 |
8339959 | Moisand et al. | Dec 2012 | B1 |
8370528 | Bryers et al. | Feb 2013 | B2 |
8488444 | Filsfils et al. | Jul 2013 | B2 |
8503293 | Raszuk | Aug 2013 | B2 |
8543718 | Rahman | Sep 2013 | B2 |
8693398 | Chaganti et al. | Apr 2014 | B1 |
8797886 | Kompella et al. | Aug 2014 | B1 |
8902780 | Hedge et al. | Dec 2014 | B1 |
8948001 | Guichard et al. | Feb 2015 | B2 |
8953460 | Addepalli et al. | Feb 2015 | B1 |
9258234 | Addepalli et al. | Feb 2016 | B1 |
9455894 | Neelam et al. | Sep 2016 | B1 |
20010042190 | Tremblay et al. | Nov 2001 | A1 |
20020007443 | Gharachorloo et al. | Jan 2002 | A1 |
20020032725 | Araujo et al. | Mar 2002 | A1 |
20020038339 | Xu | Mar 2002 | A1 |
20020093954 | Well et al. | Jul 2002 | A1 |
20020105972 | Richter et al. | Aug 2002 | A1 |
20020120488 | Bril et al. | Aug 2002 | A1 |
20020141343 | Bays | Oct 2002 | A1 |
20020158900 | Hsieh et al. | Oct 2002 | A1 |
20020165727 | Greene et al. | Nov 2002 | A1 |
20020169975 | Good | Nov 2002 | A1 |
20020191014 | Hsieh et al. | Dec 2002 | A1 |
20020194497 | Mcguire | Dec 2002 | A1 |
20020194584 | Suorsa et al. | Dec 2002 | A1 |
20030005090 | Sullivan et al. | Jan 2003 | A1 |
20030009552 | Benfield et al. | Jan 2003 | A1 |
20030055933 | Ishizaki et al. | Mar 2003 | A1 |
20030097428 | Afkhami et al. | May 2003 | A1 |
20030112749 | Hassink et al. | Jun 2003 | A1 |
20030123457 | Koppol | Jul 2003 | A1 |
20030149746 | Baldwin et al. | Aug 2003 | A1 |
20030152034 | Zhang | Aug 2003 | A1 |
20040024869 | Davies | Feb 2004 | A1 |
20040116070 | Fishman et al. | Jun 2004 | A1 |
20050013310 | Banker et al. | Jan 2005 | A1 |
20050021713 | Dugan et al. | Jan 2005 | A1 |
20050063458 | Miyake et al. | Mar 2005 | A1 |
20050083936 | Ma | Apr 2005 | A1 |
20050175017 | Christensen et al. | Aug 2005 | A1 |
20050195741 | Doshi et al. | Sep 2005 | A1 |
20050259571 | Battou | Nov 2005 | A1 |
20050259634 | Ross | Nov 2005 | A1 |
20050281192 | Nadeau et al. | Dec 2005 | A1 |
20060018266 | Seo | Jan 2006 | A1 |
20060095538 | Rehman et al. | May 2006 | A1 |
20060133300 | Lee et al. | Jun 2006 | A1 |
20060233107 | Croak et al. | Oct 2006 | A1 |
20060239201 | Metzger et al. | Oct 2006 | A1 |
20060262772 | Guichard et al. | Nov 2006 | A1 |
20060285500 | Booth et al. | Dec 2006 | A1 |
20070014231 | Sivakumar et al. | Jan 2007 | A1 |
20070021132 | Jin et al. | Jan 2007 | A1 |
20070041554 | Newman et al. | Feb 2007 | A1 |
20070061103 | Patzschke et al. | Mar 2007 | A1 |
20070147281 | Dale et al. | Jun 2007 | A1 |
20070165515 | Vasseur | Jul 2007 | A1 |
20070180104 | Filsfils et al. | Aug 2007 | A1 |
20070180105 | Filsfils et al. | Aug 2007 | A1 |
20070207591 | Rahman et al. | Sep 2007 | A1 |
20070220252 | Sinko | Sep 2007 | A1 |
20070263836 | Huang | Nov 2007 | A1 |
20070280102 | Vasseur et al. | Dec 2007 | A1 |
20080004782 | Kobayashi et al. | Jan 2008 | A1 |
20080034120 | Oyadomari et al. | Feb 2008 | A1 |
20080049622 | Previdi et al. | Feb 2008 | A1 |
20080074997 | Bryant et al. | Mar 2008 | A1 |
20080163291 | Fishman et al. | Jul 2008 | A1 |
20080225731 | Mori et al. | Sep 2008 | A1 |
20080247324 | Nadeau et al. | Oct 2008 | A1 |
20080253295 | Yumoto et al. | Oct 2008 | A1 |
20090016213 | Lichtwald | Jan 2009 | A1 |
20090019141 | Bush et al. | Jan 2009 | A1 |
20090046579 | Lu et al. | Feb 2009 | A1 |
20090046723 | Rahman et al. | Feb 2009 | A1 |
20090201799 | Lundstrom et al. | Aug 2009 | A1 |
20090201857 | Daudin | Aug 2009 | A1 |
20090225650 | Vasseur | Sep 2009 | A1 |
20090232029 | Abu-Hamdeh et al. | Sep 2009 | A1 |
20090279440 | Wong | Nov 2009 | A1 |
20100208922 | Erni | Aug 2010 | A1 |
20100299319 | Parson et al. | Nov 2010 | A1 |
20110019550 | Bryers et al. | Jan 2011 | A1 |
20110063973 | VenkataRaman et al. | Mar 2011 | A1 |
20110170408 | Furbeck et al. | Jul 2011 | A1 |
20130028099 | Birajdar et al. | Jan 2013 | A1 |
20130086144 | Wu | Apr 2013 | A1 |
20130185767 | Tirupachur Comerica et al. | Jul 2013 | A1 |
20140149819 | Lu et al. | May 2014 | A1 |
20140321448 | Backholm | Oct 2014 | A1 |
20150063117 | DiBurro | Mar 2015 | A1 |
Number | Date | Country |
---|---|---|
1367750 | Dec 2003 | EP |
1816801 | Aug 2007 | EP |
1861963 | Dec 2007 | EP |
1864449 | Dec 2007 | EP |
1891526 | Feb 2008 | EP |
1891526 | Feb 2012 | EP |
2006104604 | Oct 2006 | WO |
Entry |
---|
Response to Examination Report dated Oct. 29, 2018, from counterpart European Application No. 16204540.5, filed Feb. 26, 2019 10 pp. (Year: 2018). |
“ActiveXperts Ping backgrounds (PING is part of the ActiveSocket Toolkit),” ActiveSocket Network Communication Toolkit 2.4, Activexperts, retrieved from www.activexperts.com/activsocket/toolkits/ping.html, Nov. 10, 2005, 3 pp. |
“Configure an Unnumbered Interface,” retrieved from www.juniper.net/techpubs/software/junos/junos56/index.html, Nov. 7, 2005, 1 p. |
“Configure the loopback Interface,” retrieved from www.juniper.net/techpubs/software/junos/junos56/index.html, Nov. 7, 2005, 2 pp. |
“ICMP (Internet Control Message Protocol),” Data Network Resource, www.rhyshaden.com/icmp.html, last printed Nov. 10, 2005, 4 pp. |
Kessler et al., RFC 2151, “A Primer on Internet and TCP/IP Tools and Utilities,” Chapter 3.4: Traceroute, Nov. 9, 2005, pp. 9-11. |
“Traceroute,” Webopedia, retrieved from www.webopedia.com/TERM/t/traceroute.html, Aug. 26, 2004, 1 p. |
“Using the IP unnumbered configuration FAQ,” APNIC, retrieved from www.apnic.net/info/faq/ip_unnumb.html, Jul. 1, 2005, 2 pp. |
Aggarwal et al., “Bidirectional Forwarding Detection (BFD) for MPLS Label Switched Paths (LSPs),” Internet Engineering Task Force (IETF), RFC 5884, Cisco Systems, Inc., Jun. 2010, 12 pp. |
Aggarwal, “OAM Mechanisms in MPLS Layer 2 Transport Networks,” IEEE Communications Magazine, Oct. 2004, pp. 124-130. |
Atlas, “ICMP Extensions for Unnumbered Interfaces,” Internet Draft, draft-atlas-icmp-unnumbered-01, Feb. 2006, 8 pp. |
Berkowitz, “Router Renumbering Guide,” Network Working Group, RFC 2072, Jan. 1997, 41 pp. |
Bonica et al., “Generic Tunnel Tracing Protocol (GTTP) Specification,” draft-bonica- tunproto-01.txt, IETF Standard-Working-Draft, Internet Engineering Task Force, Jul. 2001, 20 pp. |
Chen et al., “Dynamic Capability for BGP-4,” Network Working Group, Internet Draft, draft-ietf-idr-dynamic-cap-03.txt, Dec. 2002, 6 pp. |
Fairhurst, “Internet Control Message protocol,” Internet Control Protocol, (ICMP), retrieved from www.erg.abdn.ac.uk/users/gorry/course/inet-pages/icmp.html, Sep. 6, 2006, 3 pp. |
Chapter 1: BFD Configuration Commands, Command Manual—BFD-GR, H3C S3610&S5510 Series Ethernet Switches, Version: 20081229-C-1.01, Release 5303, 2006-2008, 13 pp. |
Hegde et al., Multipoint BFD for MPLS, Network Working Group, Internet-Draft, draft- chandra-hedge-mpoint-bfd-for-mpls-00.txt, Mar. 5, 2012, 12 pp. |
Katz et al., “Bidirectional Forwarding Detection (BFD) for Multihop Paths,” Internet Engineering Task Force (IETF), RFC 5883, Jun. 2010, 6 pp. |
Katz et al.,“Bidirectional Forwarding Detection (BFD) for IPv4 and IPv6 (Single Hop),” Internet Engineering Task Force (IETF), RFC 5881, Jun. 2010, 7 pp. |
Katz et al., “Bidirectional Forwarding Detection (BFD),” Internet Engineering Task Force (IETF), RFC 5880, Jun. 2010, 49 pp. |
Katz et al., “Generic Application of Bidirectional Forwarding Detection (BFD),” Internet Engineering Task Force (IETF), RFC 5882, Jun. 2010, 17 pp. |
Katz et al., “BFD for Multipoint Networks”, Network Working Group, Internet Draft, draft-ietf-bfd-multipoint-00.txt, Oct. 18, 2011, 29 pp. |
Kolon, “BFD spots router forwarding failures,” Network World, retrieved from www.networkworld.com/news/tech/2005/030705techupdate.html, Mar. 7, 2005, 3 pp. |
Kompella et al., “Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures,” Network Working Group, RFC 4379, Feb. 2006, 50 pp. |
Kompella et al., Signalling Unnumbered Links in Resource ReSerVation Protocol—Traffic Engineering (RSVP-TE), Network Working Group, RFC 3477, Jan. 2003, 8 pp. |
Mannie, “Generalized Multi-Protocol Label Switching Architecture,” Network Working Group, Internet draft, draft-ietf-ccamp-gmpls-architecture-07.txt, May 2003, 56 pp. |
Nadeau et al., “Bidirectional Forwarding Detection (BFD) for the Pseudowire Virtual Circuit Connectivity Verification (VCCV),” Internet Engineering Task Force (IETF), RFC 5885, Jun. 2010, 14 pp. |
Sangli et al., “Graceful Restart Mechanism for BGP,” Network Working Group, Internet Draft, draft-ietf-idr-restart-06.txt, Jul. 2003, 10 pp. |
Saxena et al., Detecting Data-Plane Failures in Point-to-Multipoint Mpls—Extensions to LSP Ping, Internet Engineering Task Force (IETF), RFC 6425, Nov. 2011, 28 pp. |
Sun Hai-Feng, “Advanced TCP Port Scan and it's Response,” O.L. Automation 2005, vol. 24, No. 4, China Academic Electronic Publishing House, Apr. 24, 2005, 2 pp. (English translation provided for abstract only). |
Mukhi et al., “Internet Control Message Protocol ICMP,” retrieved from www.vijaymukhi.com/vmis/icmp, Sep. 6, 2006, 5 pp. |
Zvon, RFC 2072, [Router Renumbering Guide]—Router Identifiers, Chapter 8.3, Unnumbered Interfaces, Nov. 7, 2005, 2 pp. |
Boney, “Cisco IOS in a Nutshell, 2nd Edition”, O'Reilly Media, Inc., Aug. 2005, 16 pp. |
Harmon, “32-Bit Bus Master Ethernet Interface for the 68030 (Using the Macintosh SE/30),” National Semiconductor Corporation, Apr. 1993, 10 pp. |
Muller, “Managing Service Level Agreements,” International Journal of Network Management, John Wiley & Sons, Ltd., May 1999, vol. 9, Issue 3, pp. 155-166. |
Papavassiliou, “Network and service management for wide-area electronic commerce networks,” International Journal of Network Management, John Wiley & Sons, Ltd., Mar. 2001, vol. 11, Issue 2, pp. 75-90. |
Ramakrishnan et al.,“The Addition of Explicit Congestion Notification (ECN) to IP”, RFC 3168, Network Working Group, Sep. 2001, 63 pp. |
Schmidt, “A Family of Design Patterns for Flexibly Configuring Network Services in Distributed Systems,” Proceedings of the Third International Conference on Configurable Distributed Systems, May 6-8, 1996, IEEE Press, pp. 124-135. |
Troutman, “DP83916EB-AT: High Performance AT Compatible Bus Master Ethernet Adapter Card,” National Semiconductor Corporation, Nov. 1992, 34 pp. |
Mukhi et al., “Internet Control Message Protocol ICMP,” www.vijaymukhi.com/vmis/icmp, last printed Sep. 6, 2006, 5pp. |
Extended Search Report from counterpart European Application No. 16204540.5, dated Apr. 21, 2017, 5 pp. |
U.S. Appl. No. 15/018,669, by Juniper Networks, Inc. (Inventors: Addepalli et al.), filed Feb. 8, 2016. |
U.S. Appl. No. 15/198,756, by Juniper Networks, Inc. (Inventors: Seth et al.), filed Jun. 30, 2016. |
“DARPA Internet Program Protocol Specification,” Transmission Control Protocol, RFC 793, Sep. 1981, 90 pp. |
Nichols, et al., “Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers,” Network Working Group, RFC 2474, Dec. 1998, 19 pp. |
Examination Report from counterpart European Application No. 16204540.5, dated Feb. 8, 2018, 9 pp. |
Response to Examination Report dated Feb. 8, 2018, from counterpart European Application No. 16204540.5, filed Jun. 5, 2018, 23 pp. |
Examination Report from counterpart European Application No. 16204540.5, dated Oct. 29, 2018, 5 pp. |
Response to Extended Search Report dated Apr. 21, 2017, from counterpart European Application No. 16204540.5, filed Jan. 4, 2018, 1 pp. |
Number | Date | Country | |
---|---|---|---|
20170195209 A1 | Jul 2017 | US |