A shared communication link is a communication link that is shared by two or more communication network elements. One example of a shared communication link is a cable communication link where data is carried between a cable modem termination system (CMTS) and a plurality of cable modems (CMs) via shared communication media, such as coaxial electrical cable and/or fiber optic cable. Another example of a shared communication link is a fifth generation (5G) wireless communication link where data is carried between a wireless base station and a plurality of user equipment (UE) devices via shared radio frequency spectrum. Use of shared communication links advantageously enables high-performance communication services to be provided to multiple clients with minimal infrastructure and associated cost.
While shared communication links have revolutionized modern society by enabling large-scale provisioning of high-performance and cost-effective communication services, shared communication links have some limitations. For example,
Shared communication link 102 transports data between first termination device 104 and each second termination device 106. Accordingly, shared communication link 102 is shared by clients 108. Consequently, a given client 108 may not be able to transmit as much data as desired due to the communication link being used by one or more other clients 108. Therefore, communication network 100 is configured to control flow of data between application 110 and clients 108 to help prevent excess queuing of data and/or delay in data transmission, due to capacity limitations of shared communication link 102.
For example, consider a scenario where client 108(1) is exchanging data with application 110, and where client 108(1) is sensitive to data transmission latency, such that client 108(1)'s operation will be impaired by delays in data transmission. Application 110 is accordingly configured to limit flow of data between application 110 and client 108(1) to a level which can be handled by shared communication link 102 under its current operating conditions, without excessive data queuing or excessive data transmission latency. For example, if client 108(1) fails to receive a data structure at an expected time, client 108(1) can infer that (1) there is excessive latency in downlink data 112 transmitted over shared communication link 102, and/or (2) there is excessive queuing of downlink data 112 structures at first termination device 104. In response, client 108(1) may send a message to application 110 requesting that application 110 reduce downlink data 112 transmission rate to a level which prevents excessive latency or queuing. In response, application 110 reduces downlink data 112 transmission rate, for example, by decreasing downlink throughput and/or by increasing compression of downlink data 112.
Delays in downlink data 112 transmission or uplink data 114 transmission may prevent application 110 from quickly reducing data transmission rate between application 110 and client 108(1), in response to an increase in congestion on shared communication link 102. For example, assume that the downlink data 112 path has a relatively low latency of 1 millisecond (ms) and that the uplink data 114 path has a relatively high latency of 10 milliseconds, as illustrated in
Disclosed herein are systems and methods for reducing communication network performance degradation using in-band telemetry data, which may at least partially overcome one or more of the drawbacks discussed above. Certain embodiments include an analytics engine that is capable of detecting performance degradation in at least a portion of a communication network, such as manifested by excessive data structure queuing and/or excessive data transmission latency, without waiting for a data structure to traverse a high-latency communication link. Consequently, communication networks including the new analytics engines can respond to communication network performance degradation more quickly than is possible when using conventional techniques. Certain embodiments of the new analytics engines are capable of detecting communication network performance degradation from telemetry data that is based on in-band telemetry data of data structures flowing through the communication network. Consequently, the analytics engines can advantageously be located upstream of a high-latency communication link, which helps prevent delays in the communication network's response to communication link performance degradation. Furthermore, some embodiments of the analytics engines are capable of detecting communication network characteristics other than, or in addition to, performance degradation. For example, particular embodiments are configured to detect one or more communication network characteristics which can be used to help optimize the communication network, such as by (1) changing the communication network's topology, (2) changing configuration of one or more elements of the communication network, and/or (3) changing resources, e.g. computing resources and/or communication resources, allocated to one or more portions of the communication network.
In this document, “in-band telemetry information” refers to telemetry information transmitted in a data structure, such as in a header or footer of the data structure, where the data structure is not dedicated to carrying the telemetry information. For example, telemetry information may be transmitted in-band by embedding the telemetry information in headers or footers of data structures carrying content data. Out-of-band telemetry information, in contrast, refers to telemetry information transmitted in a data structure that is dedicated to carrying the telemetry information. Examples of in-band telemetry data include, but are not limited to, the items discussed below with respect to Tables 1-4.
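By way of illustration and not of limitation, the following Python sketch shows one way telemetry information could be carried in-band with content data. The header layout, field names, and field sizes are assumptions made for illustration only; they are not a format required by this disclosure.

```python
import struct
import time

# Hypothetical in-band telemetry header (layout assumed for illustration):
# a 4-byte queue-depth field and an 8-byte ingress timestamp, prepended to
# the content payload of the data structure.
TELEMETRY_HEADER = struct.Struct("!IQ")  # queue_depth, ingress_time_ns

def embed_in_band_telemetry(payload: bytes, queue_depth: int) -> bytes:
    """Carry telemetry in the same data structure as the content data."""
    header = TELEMETRY_HEADER.pack(queue_depth, time.time_ns())
    return header + payload  # telemetry rides in-band with the payload

def extract_in_band_telemetry(frame: bytes) -> tuple[int, int, bytes]:
    """Recover the telemetry fields and the original payload."""
    queue_depth, ingress_time_ns = TELEMETRY_HEADER.unpack_from(frame)
    return queue_depth, ingress_time_ns, frame[TELEMETRY_HEADER.size:]
```

An out-of-band design would instead place the same fields in a separate data structure dedicated to carrying telemetry; the in-band design above adds them to data structures that exist anyway.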
Shared communication link 202 transports data between first termination device 204 and each second termination device 206. Accordingly, shared communication link 202 is shared by clients 208. In some embodiments, shared communication link 202 includes one or more of a coaxial electrical cable, a twisted-pair electrical cable, a fiber optic cable, a free space optical communication link, and a wireless communication link (e.g. a fourth generation (4G) wireless communication link, a 5G wireless communication link, a sixth generation (6G) wireless communication link, a satellite communication link, a microwave communication link, and/or a Wi-Fi wireless communication link). First termination device 204 interfaces application 210 with shared communication link 202, and each second termination device 206 interfaces its respective client 208 with shared communication link 202. Although shared communication link 202 is depicted as a single element, shared communication link 202 could include multiple elements. For example, in some embodiments, shared communication link 202 includes (a) a fiber optic cable, (b) a coaxial electrical cable, and (c) a fiber node interfacing the fiber optic cable with the coaxial electrical cable. As another example, in some embodiments, shared communication link 202 includes (a) a fiber optic cable, (b) a wireless communication link, and (c) a transceiver interfacing the fiber optic cable with the wireless communication link.
In some embodiments, first termination device 204 includes a CMTS and each second termination device 206 includes a CM. In these embodiments, data is optionally transmitted along shared communication link 202 according to a Data Over Cable Service Interface Specification (DOCSIS) protocol. In some other embodiments, first termination device 204 includes an optical line terminal (OLT), and each second termination device 206 includes an optical network terminal (ONT) or an optical network unit (ONU). In these embodiments, data is optionally transmitted along shared communication link 202 according to an Ethernet passive optical network (EPON) communication protocol, a radio frequency over glass (RFOG) communication protocol, or a gigabit passive optical network (GPON) communication protocol. In some other embodiments, first termination device 204 includes a wireless base station (e.g. an evolved NodeB (eNB), a next generation NodeB (gNB), an Institute of Electrical and Electronics Engineers (IEEE) 802.11-based wireless access point, an Integrated Access and Backhaul (IAB) access point, a microcell, a picocell, a femtocell, a macrocell, a satellite communication device, etc.), and each second termination device 206 includes a wireless transceiver. In these embodiments, data is optionally transmitted along shared communication link 202 according to a 4G protocol, a 5G protocol, a Wi-Fi protocol, or a satellite protocol.
Application 210 is configured to exchange data with one or more clients 208. In some embodiments, application 210 is at least partially embodied by hardware, and in some embodiments, application 210 is embodied at least partially by a computing processing device executing non-transitory instructions, such as in the form of software or firmware, stored in a storage subsystem. First termination device 204 and application 210 could be combined without departing from the scope hereof. Each client 208 is configured to exchange data with application 210, and each client 208 need not have the same configuration. In some embodiments, one or more of clients 208 are at least partially embodied by hardware. Additionally, in some embodiments, one or more of clients 208 are at least partially embodied by a computing processing device executing non-transitory instructions, such as in the form of software or firmware, stored in a storage subsystem. One or more clients 208 could be combined with their respective second termination devices 206 without departing from the scope hereof. In some embodiments, at least one client 208 includes one or more of a mobile telephone, a computer, a set-top device, a data storage device, an Internet of Things (IoT) device, an entertainment device, a computer networking device, a smartwatch, a wearable device with wireless capability, a medical device, a security device, a monitoring device, and a wireless access device (including, for example, an eNB, a gNB, an IEEE 802.11-based wireless access point, an IAB access point, a microcell, a picocell, a femtocell, a macrocell, a satellite communication device, etc.).
Analytics engine 212, for example, is implemented by analog electronics and/or digital electronics. In some embodiments, analytics engine 212 is at least partially implemented by a computing processing device executing non-transitory instructions, such as in the form of software or firmware, stored in a storage subsystem. Two or more of the elements of communication network 200 could be implemented by common infrastructure. For example, in some embodiments, two or more of analytics engine 212, first termination device 204, and application 210 are implemented by respective containers or virtual machines in common computing infrastructure. As another example, in particular embodiments, two or more of analytics engine 212, first termination device 204, and application 210 are implemented by respective kernels of a field-programmable gate array (FPGA) or a graphics processing unit (GPU).
Communication network 200 could be modified without departing from the scope hereof, as long as communication network 200 includes at least one instance of analytics engine 212. For example, communication network 200 could include a different number of second termination devices 206, and as another example, communication network 200 could include additional applications communicatively coupled to first termination device 204. Furthermore, communication network 200 could include additional elements, such as amplifiers, repeaters, splitters, taps, translators, buffers, analog-to-digital converters, digital-to-analog converters, power inserters, transceivers, etc. coupled to shared communication link 202, as well as additional elements, such as routers, switches, gateways, etc. that are not necessarily directly coupled to shared communication link 202. Moreover, communication network 200 could include one or more additional analytics engines. Furthermore, analytics engine 212 could be combined with first termination device 204 or application 210, or analytics engine 212 could be distributed among multiple elements of communication network 200. Additionally, communication network 200 could include additional communication links, such as another shared communication link and/or a dedicated communication link.
Analytics engine 212 receives data structures 214, e.g. data frames or data packets, from first termination device 204. Data structures 214 include telemetry data based on in-band telemetry data of data structures passing through first termination device 204. In some embodiments, data structures 214 are a copy of some or all of data structures passing through first termination device 204. In some other embodiments, data structures 214 are related to, but not identical to, some or all of data structures passing through first termination device 204. For example, in some embodiments, data structures 214 include in-band telemetry data, but not payload, of data structures passing through first termination device 204. As another example, in some embodiments, data structures 214 include a summary of at least some in-band telemetry data of data structures passing through first termination device 204. In certain embodiments, data structures 214 include, or are related to, downlink data structures leaving first termination device 204 for shared communication link 202 and/or uplink data structures entering first termination device 204 from shared communication link 202. First termination device 204 generates data structures 214, for example, by mirroring some or all data structures passing through first termination device 204, or by using a tap (e.g. an optical tap) communicatively coupled to a data transmission medium of first termination device 204 and/or a data transmission medium connected to first termination device 204.
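A minimal Python sketch of how first termination device 204 might derive data structures 214 from mirrored traffic follows, assuming the illustrative 12-byte telemetry header sketched above; both the payload-stripping and summarizing variants described in this paragraph are shown.

```python
import struct
from statistics import mean

# Assumed illustrative header: 4-byte queue depth, 8-byte ingress timestamp.
TELEMETRY_HEADER = struct.Struct("!IQ")

def telemetry_only_copy(mirrored_frame: bytes) -> bytes:
    """A data structure 214 carrying in-band telemetry data but not payload."""
    return mirrored_frame[:TELEMETRY_HEADER.size]

def telemetry_summary(mirrored_frames: list[bytes]) -> dict:
    """A data structure 214 summarizing telemetry over a batch of frames."""
    depths = [TELEMETRY_HEADER.unpack_from(f)[0] for f in mirrored_frames]
    return {"frames": len(depths),
            "mean_queue_depth": mean(depths),
            "max_queue_depth": max(depths)}
```

Forwarding telemetry without payload reduces the bandwidth consumed by the analytics path while preserving the information analytics engine 212 needs.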
Analytics engine 212 is configured to detect performance degradation of communication network 200, e.g. congestion on shared communication link 202, insufficient throughput in one or more portions of communication network 200, excessive queue depth at one or more portions of communication network 200, and/or excessive errors in one or more portions of communication network 200, from telemetry data in data structures 214. For example, in some embodiments, analytics engine 212 is configured to detect congestion on shared communication link 202 in response to data queue status, as indicated by in-band telemetry data, exceeding a threshold value. As another example, in some embodiments, analytics engine 212 is configured to detect congestion on shared communication link 202 in response to a difference in time stamps, as indicated by in-band telemetry data, exceeding a threshold value. As another example, in certain embodiments, analytics engine 212 is configured to detect performance degradation of communication network 200 from other in-band telemetry data, such as from one or more of the items in Tables 1-4 below. Furthermore, in some embodiments, analytics engine 212 is capable of detecting communication network characteristics other than, or in addition to, performance degradation, from data structures 214.
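The following Python sketch illustrates the two example detection rules above; the threshold values and field names are assumptions for illustration, and a practical implementation could instead use any of the items in Tables 1-4 below.

```python
QUEUE_DEPTH_THRESHOLD = 64            # assumed: data structures queued at a hop
HOP_LATENCY_THRESHOLD_NS = 5_000_000  # assumed: 5 ms between two time stamps

def performance_degraded(queue_depth: int,
                         ingress_ns: int,
                         egress_ns: int) -> bool:
    """Detect congestion from in-band queue status or time stamp differences."""
    if queue_depth > QUEUE_DEPTH_THRESHOLD:
        return True   # excessive queuing reported by in-band telemetry
    if egress_ns - ingress_ns > HOP_LATENCY_THRESHOLD_NS:
        return True   # excessive latency inferred from time stamp difference
    return False
```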
Analytics engine 212 is further configured to communicate with application 210 regarding performance degradation of communication network 200, e.g. congestion on shared communication link 202, and/or regarding another characteristic of communication network 200, via messages 216 transmitted between analytics engine 212 and application 210. In some embodiments, analytics engine 212 is configured to interface with first termination device 204 and/or application 210 at least partially via computer code written in a Programming Protocol-independent Packet Processors (P4) computer language.
In certain embodiments, analytics engine 212 is configured to send one or more messages 216 to application 210 in response to analytics engine 212 detecting performance degradation of communication network 200, e.g. congestion on shared communication link 202, and/or in response to analytics engine 212 detecting another characteristic of communication network 200. Messages 216, for example, notify application 210 of congestion on shared communication link 202. In some embodiments, analytics engine 212 is configured to send messages 216 from time to time, such as on a periodic basis, where each message 216 indicates congestion status of shared communication link 202 and/or other performance degradation of communication network 200. In particular embodiments, if there is no significant change in congestion or other operating parameter on shared communication link 202 since a previous message 216 was sent, message 216 indicates that there is no change in congestion or the other operating parameter. Additionally, in some embodiments, analytics engine 212 is configured to send a message 216 in response to a request from application 210, such as in response to a polling request from application 210. Furthermore, in some embodiments, analytics engine 212 is configured to send messages 216 to application 210 via an application programming interface (API) which allows application 210, and optionally additional or alternative applications, to access one or more aspects of analytics engine 212. For example, an operator of analytics engine 212 could allow a third party to access data of analytics engine 212, such as in exchange for compensation, via an API of analytics engine 212.
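One possible shape for this reporting logic is sketched below in Python; the message schema and status values are assumptions for illustration, not a protocol defined by this disclosure.

```python
from typing import Optional

class CongestionReporter:
    """Sketch of message 216 emission with no-change suppression."""

    def __init__(self) -> None:
        self._last_status: Optional[str] = None

    def periodic_message(self, current_status: str) -> dict:
        """Build a message sent from time to time, e.g. on a periodic timer."""
        if current_status == self._last_status:
            return {"status": "no_change"}  # no significant change to report
        self._last_status = current_status
        return {"status": current_status}

    def poll_response(self, current_status: str) -> dict:
        """Answer an on-demand polling request, e.g. arriving via the API."""
        return {"status": current_status}
```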
In certain embodiments, application 210 responds to congestion or other communication network 200 performance degradation indicated by a message 216 by reducing rate of data transmission between application 210 and one or more clients 208, thereby reducing rate of data transmission through shared communication link 202. For example, application 210 may increase compression of data transmitted between application 210 and one or more clients 208, and/or application 210 may decrease throughput of data transmission between application 210 and one or more clients 208. It should be appreciated that, because analytics engine 212 is located upstream of shared communication link 202 from the standpoint of application 210, analytics engine 212 can communicate with application 210 without traversing shared communication link 202. Consequently, analytics engine 212 can communicate detection of congestion on shared communication link 202, and/or other performance degradation of communication network 200, more quickly than is feasible when using traditional techniques. Additionally, in some embodiments, analytics engine 212 is configured to detect congestion on shared communication link 202, and/or other performance degradation of communication network 200, that is associated with multiple instances of client 208, thereby giving analytics engine 212 insight into the complete operation of shared communication link 202, which analytics engine 212 can communicate to application 210 via messages 216.
Application 210 is not limited to reducing a data transmission rate in response to receipt of a message 216. Alternately or additionally, application 210 may be configured to take a different action in response to receipt of a message 216, including but not limited to, one or more of the following: (a) change a configuration of communication network 200, such as by using software defined network techniques, e.g. to bypass congestion on shared communication link 202, (b) increase, decrease, or reallocate resources in communication network 200, e.g. to help reduce congestion or other communication network 200 performance degradation detected by analytics engine 212, or to reduce cost in response to communication network 200 having excess capacity, (c) notify a system administrator, e.g. so that the system administrator can take corrective action, (d) cause one or more clients 208 to change their operation, e.g. to mitigate impact of congestion or other communication network 200 performance degradation on those clients, and (e) increase rate of data transmission between application 210 and one or more clients 208, such as in response to a message 216 indicating that communication network 200 has excess capacity.
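A toy Python sketch of an application reacting to such messages follows; the class, its attribute names, and the status values continue the assumed schema from the sketch above and are illustrative assumptions only.

```python
class Application:
    """Toy stand-in for application 210; names are illustrative assumptions."""

    def __init__(self) -> None:
        self.rate_mbps = 100.0       # current data transmission rate
        self.compression_level = 1   # current compression setting

    def handle_message(self, message: dict) -> None:
        status = message.get("status")
        if status == "congested":
            self.rate_mbps *= 0.5        # decrease throughput, and/or
            self.compression_level += 1  # increase compression of the data
        elif status == "excess_capacity":
            self.rate_mbps *= 1.1        # raise the rate toward link capacity
        # a "no_change" message leaves the current settings in place

app = Application()
app.handle_message({"status": "congested"})  # rate drops to 50.0 Mb/s
```

Actions (a) through (d) above would slot into the same dispatch structure in place of, or in addition to, the rate adjustments shown.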
Analytics engine 212 determines from QS that shared communication link 202 is experiencing excessive latency. For example, in some embodiments, analytics engine 212 determines that shared communication link 202 is experiencing excessive latency in response to a value of QS exceeding a threshold value. As another example, in certain embodiments, analytics engine 212 determines that shared communication link 202 is experiencing excessive latency in response to QS being set to, or being equal to, a predetermined value. Analytics engine 212 sends a message 320 to application 210 in response to determining that shared communication link 202 is experiencing excessive latency. Message 320 is an embodiment of message 216 of
Client 208(1) sends a data structure 322 including data 2 to second termination device 206(1) at time t7, and second termination device 206(1) sends a data structure 324 including data 2 to first termination device 204 at time t8. First termination device 204 sends a data structure 326 to application 210 at time t9. It should be noted that application 210 receives message 320 at time t6 significantly before application 210 receives data structure 326 at time t10, thereby illustrating how analytics engine 212 helps quickly notify application 210 of congestion on shared communication link 202.
Analytics engine 212 determines from IT and ET that shared communication link 202 is experiencing excessive latency. For example, in some embodiments, analytics engine 212 determines that shared communication link 202 is experiencing excessive latency in response to a difference between ET and IT exceeding a threshold value. Analytics engine 212 sends a message 420 to application 210 in response to determining that shared communication link 202 is experiencing excessive latency. Message 420 is an embodiment of message 216 of
Client 208(1) sends a data structure 422 including data 2 to second termination device 206(1) at time t7, and second termination device 206(1) sends a data structure 424 including data 2 to first termination device 204 at time t8. First termination device 204 sends a data structure 426 to application 210 at time t9. It should be noted that application 210 receives message 420 at time t6 significantly before application 210 receives data structure 426 at time t10, thereby illustrating how analytics engine 212 helps quickly notify application 210 of congestion on shared communication link 202.
vBBU 510 and RU 508(1) collectively form a 5G radio access network (RAN). vBBU 510 includes a central unit (CU) and a distributed unit (DU). The central unit processes non-real-time protocols and services, and the distributed unit processes physical (PHY) level protocols and real-time services. RU 508(1) performs, for example, conversion between analog and digital domains, filtering, amplification, transmission, and reception. First termination device 204, shared communication link 202, and second termination device 206(1) collectively transmit data between vBBU 510 and RU 508(1). RU 508(1) is sensitive to latency, but analytics engine 212 is advantageously capable of notifying vBBU 510 of congestion on shared communication link 202 via messages 216, such as using one or more of the techniques discussed above with respect to
Communication network 500 could be modified without departing from the scope hereof. For instance, communication network 500 could be modified to support an alternative or additional wireless communication type, including but not limited to 4G, 6G, Wi-Fi, microwave, satellite, free space optical, etc. Additionally, communication network 500 could be modified to have a different split between elements. For example,
Analytics engine 212 is not limited to detecting an operating parameter of a shared communication link. Instead, analytics engine 212 could be used to detect congestion, or other communication network performance degradation, at essentially any hop in a communication network, from in-band telemetry data. For example,
Analytics engine 212 receives data structures 828, e.g. data frames or data packets, from network device 820. Data structures 828 include telemetry data based on in-band telemetry data of data structures passing through network device 820. In some embodiments, data structures 828 are a copy of some or all of data structures passing through network device 820. In some other embodiments, data structures 828 are related to, but not identical to, some or all of data structures passing through network device 820. For example, in some embodiments, data structures 828 include in-band telemetry information, but not payload, of data structures passing through network device 820. As another example, in some embodiments, data structures 828 include a summary of at least some in-band telemetry data of data structures passing through network device 820. Network device 820 generates data structures 828, for example, by mirroring some or all data structures passing through network device 820, or by using a tap (e.g. an optical tap) communicatively coupled to a data transmission medium of network device 820 and/or a data transmission medium connected to network device 820.
Analytics engine 212 is configured to detect congestion on dedicated communication link 822, and/or other performance degradation of dedicated communication link 822, from telemetry data in data structures 828, in a manner analogous to that discussed above with respect to
Resource 906 is configured to provide one or more services to client 902. Resource 906 includes, for example, a communication network (e.g. the Internet or an intranet), a server (e.g. an application server, a content server, a gaming server, a data server, a communication server, etc.), a wireless transceiver (e.g. a cellular transceiver, a Wi-Fi transceiver, a satellite transceiver, etc.), an optical transceiver, a termination device, a network device, etc. Software defined network element 904 is communicatively coupled between client 902 and resource 906. In some embodiments, software defined network element 904 includes a software defined router, a software defined switch, a software defined gateway, etc. Network controller 908 is configured to control software defined network element 904. Communication network 900 may include additional elements without departing from the scope hereof. For example, communication network 900 may include additional network elements communicatively coupled between resource 906 and client 902.
Analytics engine 212 receives data structures 910, e.g. data frames or data packets, from software defined network element 904. Data structures 910 include telemetry data based on in-band telemetry data of data structures passing through software defined network element 904. In some embodiments, data structures 910 are a copy of some or all of data structures passing through software defined network element 904. In some other embodiments, data structures 910 are related to, but not identical to, some or all of data structures passing through software defined network element 904. For example, in some embodiments, data structures 910 include in-band telemetry information, but not payload, of data structures passing through software defined network element 904. As another example, in some embodiments, data structures 910 include a summary of at least some in-band telemetry data of data structures passing through software defined network element 904. Software defined network element 904 generates data structures 910, for example, by mirroring some or all data structures passing through software defined network element 904, or by using a tap (e.g. an optical tap) communicatively coupled to a data transmission medium of software defined network element 904 and/or a data transmission medium connected to software defined network element 904.
Analytics engine 212 is configured to detect congestion at software defined network element 904, or another operating parameter of software defined network element 904, from telemetry data in data structures 910, in a manner analogous to that discussed above with respect to
In a block 1004 of method 1000, it is determined from the first telemetry data that there is performance degradation in at least a portion of the communication network. In one example of block 1004, analytics engine 212 determines from data structures 214 that there is congestion on shared communication link 202. In another example of block 1004, analytics engine 212 determines from data structures 828 that there is congestion on dedicated communication link 822. In yet another example of block 1004, analytics engine 212 determines from data structures 910 that there is congestion at software defined network element 904.
In a block 1006 of method 1000, the analytics engine sends a message to a second element of the communication network, in response to determining that there is performance degradation in the portion of the communication network. In one example of block 1006, analytics engine 212 sends a message 216 to application 210 in response to determining that there is congestion on shared communication link 202. In another example of block 1006, analytics engine 212 sends a message 216 to network controller 826 in response to determining that there is congestion on dedicated communication link 822. In yet another example of block 1006, analytics engine 212 sends a message 216 to network controller 908 in response to determining that there is congestion at software defined network element 904.
In an alternate embodiment of method 1000, an alternative network characteristic is determined in block 1004, and in block 1006, a message is sent to the second element in response to determining this alternative network characteristic. For example, in some alternate embodiments, it is determined in block 1004 that the communication network has excess capacity, and a message indicating excess network capacity is sent to the second element in block 1006.
Elements at a network hop could also be configured to use in-band telemetry data without assistance of an analytics engine, such as to help increase efficiency of data routing. For example, in a communication network including multiple paths, one or more network elements may add in-band telemetry data to some or all data structures passing through the network element. By way of example and not of limitation, the telemetry data may include one or more of the items of Tables 1-4 below. One or more of the network elements may then determine which path(s) to send data structures through at least partially according to telemetry data that is based on some or all of the in-band telemetry data. For example, a network element may determine from in-band telemetry data that path A currently has a lower latency than parallel path B, and the network element may cause the data structure to be routed through path A in response thereto. As another example, a network element may determine from in-band telemetry data that path C is currently more robust than parallel path D, e.g. path C has a higher SNR than path D or path C has a lower number of retransmits than path D, and the network element may cause the data structure to be routed through path C in response thereto. As another example, a network element may determine from in-band telemetry data that resources allocated to a network resource should be changed, e.g. increased, decreased, or reallocated, such as in response to performance degradation of the network resource or excess capacity at the network resource.
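By way of example and not of limitation, the following Python sketch selects among parallel paths using telemetry-derived metrics; the path names and metric values are illustrative assumptions.

```python
# Telemetry-derived metrics for two parallel paths (values assumed).
paths = {
    "A": {"latency_ms": 12.0, "snr_db": 28.0, "retransmits": 3},
    "B": {"latency_ms": 9.0,  "snr_db": 21.0, "retransmits": 9},
}

def lowest_latency_path(paths: dict) -> str:
    """Route latency-sensitive data structures via the fastest path."""
    return min(paths, key=lambda p: paths[p]["latency_ms"])

def most_robust_path(paths: dict) -> str:
    """Prefer higher SNR, then fewer retransmits, for robustness."""
    return max(paths, key=lambda p: (paths[p]["snr_db"], -paths[p]["retransmits"]))

print(lowest_latency_path(paths))  # -> "B"
print(most_robust_path(paths))     # -> "A"
```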
Each network element 1102, 1104, 1106, 1108, and 1110 includes, for example, one or more of a router, a switch, a gateway, a translator, a repeater, an amplifier, a concentrator, a splitter, a tap, a termination device, a transceiver, a wireless device (including, for example, an eNB, a gNB, an IEEE 802.11-based wireless access point, an IAB access point, a microcell, a picocell, a femtocell, a macrocell, a satellite communication device, etc.), a user equipment device, etc. Each network element 1102, 1104, 1106, 1108, and 1110 need not have the same configuration. Each communication link 1112, 1114, 1116, 1118, and 1120 includes, for example, one or more of a coaxial electrical cable, a twisted-pair electrical cable (including, for example, a telephone cable or an Ethernet cable), a fiber optic cable, a free space optical communication link, and a wireless communication link (e.g. a fourth generation (4G) wireless communication link, a 5G wireless communication link, a sixth generation (6G) wireless communication link, a satellite communication link, a microwave communication link, and/or a Wi-Fi wireless communication link). Each communication link 1112, 1114, 1116, 1118, and 1120 need not have the same configuration.
Communication network 1100 includes two paths between network elements 1102 and 1106, i.e. Path A and Path B. Path A includes communication link 1112, network element 1104, and communication link 1114. Path B includes communication link 1116, network element 1108, communication link 1118, network element 1110, and communication link 1120. Communication network 1100 can include a different number of network elements and/or a different number of communication links without departing from the scope hereof. Additionally, communication network 1100 can have a different topology without departing from the scope hereof. For example, communication network 1100 could be modified to include one or more additional communication links and/or network elements, to form a third path between network elements 1102 and 1106.
Some or all of network elements 1102, 1104, 1106, 1108, and 1110 add in-band telemetry data to data structures passing therethrough, such as some or all of the telemetry data discussed above. Additionally, in some embodiments, one or more of network elements 1102, 1104, 1106, 1108, and 1110 may from time to time, e.g. periodically, transmit a data structure to another network element even if there is no data to transfer between the network elements, to evaluate alternate paths and generate associated in-band telemetry data.
Network element 1102 collects the telemetry data, such as in a telemetry table stored in a storage subsystem, from data structures passing therethrough, and network element 1102 determines whether to route data structures through Path A or Path B, or through both Path A and Path B, at least partially based on the telemetry data. For example, if network element 1102 determines from telemetry data that Path B has a lower latency than Path A (even though Path B is longer than Path A), network element 1102 may, in response thereto, cause a data structure to be routed via Path B, to minimize data transmission latency. As another example, if network element 1102 determines from telemetry data that Path A has a higher SNR, a lower number of retransmits, or a lower cost of use than Path B, network element 1102 may, in response thereto, cause a data structure to be transmitted via Path A. As yet another example, if network element 1102 determines from telemetry data that Paths A and B have the same latency within a predetermined tolerance range, network element 1102 may, in response thereto, cause data structures to be transmitted via Paths A and B in parallel, to maximize data transmission throughput. One or more other network elements of communication network 1100 in addition to network element 1102, or in place of network element 1102, could be configured to determine which path to use for routing data structures at least partially based on in-band telemetry data.
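A minimal Python sketch of such a telemetry table follows; the hop keys and the per-hop latencies are assumptions chosen to echo the worked example later in this description (Path B totaling 17 ms), and the 20 ms/14 ms split of Path A's 34 ms latency is likewise assumed.

```python
from collections import defaultdict

# Telemetry table: per-path, per-hop latencies in milliseconds (values assumed).
telemetry_table: dict[str, dict[str, float]] = defaultdict(dict)

def record_hop_latency(path: str, hop: str, latency_ms: float) -> None:
    """Update the table as in-band telemetry arrives for one hop of a path."""
    telemetry_table[path][hop] = latency_ms

def path_latency(path: str) -> float:
    """Total path latency is the sum of its per-hop latencies."""
    return sum(telemetry_table[path].values())

def chosen_paths(tolerance_ms: float = 1.0) -> list[str]:
    """Pick the lowest-latency path; use paths in parallel if within tolerance."""
    best = min(telemetry_table, key=path_latency)
    return [p for p in telemetry_table
            if path_latency(p) - path_latency(best) <= tolerance_ms]

for hop, ms in [("1106-1110", 4.0), ("1110-1108", 7.0), ("1108-1102", 6.0)]:
    record_hop_latency("B", hop, ms)
record_hop_latency("A", "1106-1104", 20.0)
record_hop_latency("A", "1104-1102", 14.0)
print(path_latency("B"), path_latency("A"), chosen_paths())  # 17.0 34.0 ['B']
```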
Furthermore, some embodiments of communication network 1100 are configured to take action other than, or in addition to, determining data structure routing from in-band telemetry data. For example, some embodiments of communication network 1100 are configured to change (increase or decrease) the number of paths between network elements 1102 and 1106 based on in-band telemetry data. As another example, some embodiments of communication network 1100 are configured to change (increase, decrease, or reallocate) resources allocated to one or more network elements based on in-band telemetry data.
Arrow 1214 represents a wireless communication link between central wireless access point 1202 and remote wireless access point 1204, and arrow 1216 represents a wireless communication link between remote wireless access point 1204 and remote wireless access point 1208. Arrow 1218 represents a wireless communication link between remote wireless access point 1208 and UE device 1212, and arrow 1220 represents a wireless communication link between central wireless access point 1202 and remote wireless access point 1206. Arrow 1222 represents a wireless communication link between remote wireless access point 1206 and remote wireless access point 1208, and arrow 1224 represents a wireless communication link between remote wireless access point 1206 and remote wireless access point 1210. Arrow 1226 represents a wireless communication link between remote wireless access point 1210 and UE device 1212.
Wireless communication link strength is proportional to arrow thickness in
Each of remote access points 1204, 1206, 1208, and 1210 adds in-band telemetry data, such as some or all of the telemetry data discussed above, to data structures passing therethrough. Additionally, in some embodiments, one or more of remote access points 1204, 1206, 1208, and 1210 may from time to time, e.g. periodically, transmit a data structure to another network element even if there is no data to transfer between the network elements, to evaluate alternate paths and generate associated in-band telemetry data. Central wireless access point 1202 updates a telemetry table from the in-band telemetry data, such as to determine a path between central wireless access point 1202 and UE device 1212 having a lowest latency. Central wireless access point 1202 causes data to be transferred between central wireless access point 1202 and UE device 1212 via a path having a lowest latency, as determined from the telemetry table.
At time t4, network element 1106 sends a data structure 1416 including data 2 to network element 1110. Network element 1110 adds time for data structure 1416 to travel from network element 1106 to network element 1110 (06-10) as in-band telemetry data to data structure 1416, to generate data structure 1418. Network element 1110 then sends data structure 1418 to network element 1108 at t5. Network element 1108 adds time for data structure 1418 to travel from network element 1110 to network element 1108 (10-08) as in-band telemetry data to data structure 1418, to generate data structure 1420. Network element 1108 then sends data structure 1420 to network element 1102 at t6. Network element 1102 subsequently updates the telemetry table at time t7 based on telemetry data included in data structure 1420, as well as time for data structure 1420 to travel from network element 1108 to network element 1102 (08-02). Table 6 shows one example of the telemetry table after it is updated at time t7, if 06-10 is 4 ms, 10-08 is 7 ms, and 08-02 is 6 ms. As shown in Table 6, the total latency for data 2 to travel from network element 1106 to network element 1102 via Path B is 17 ms.
Network element 1102 determines from the updated telemetry table that Path B has a lower latency than Path A, i.e. latency of Path B is 17 ms and latency of Path A is 34 ms. In response, network element 1102 causes the next data structure transferred between network elements 1102 and 1106 to be routed via Path B. For example, network element 1102 may instruct network element 1106 to route its next data structure destined for network element 1102 via Path B. As another example, network element 1102 may send a data structure destined for network element 1106 via Path B instead of Path A. Network element 1102 updates the telemetry table as it receives additional data structures with new in-band telemetry data.
Network element 1102 subsequently updates a telemetry table at time t4 based on telemetry data included in data structure 1516. Table 7 shows one example of the telemetry table after it is updated at time t4, if 02-04 is 8 ms and 04-06 is 11 ms. As shown in Table 7, the total latency for data 1 to travel from network element 1102 to network element 1106 via Path A is 19 ms.
At time t5, network element 1102 sends a data structure 1518 including data 2 to network element 1108. Network element 1108 then generates a data structure 1520 including time for data structure 1518 to travel from network element 1102 to network element 1108 (02-08). Network element 1108 sends data structure 1520 to network element 1102 at time t6. Data structure 1520 optionally includes additional data (not shown). Network element 1102 subsequently updates the telemetry table at time t7 based on telemetry data included in data structure 1520. Table 8 shows one example of the telemetry table after it is updated at time t7, if 02-08 is 2 ms.
The telemetry table does not have complete data for Path B when updated at time t7. However, network element 1102 continues to update the telemetry table as it continues to receive telemetry data using a procedure analogous to that discussed above, such that the telemetry table will over time include complete data for Path B. Network element 1102 subsequently causes data to be routed between network elements 1102 and 1106 via whichever path has a lowest latency.
Features described above may be combined in various ways without departing from the scope hereof. The following examples illustrate some possible combinations:
Changes may be made in the above methods, devices, and systems without departing from the scope hereof. It should thus be noted that the matter contained in the above description and shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween.
This application is a continuation-in-part of U.S. patent application Ser. No. 16/507,893, filed on Jul. 10, 2019, which claims benefit of priority to (a) U.S. Provisional Patent Application Ser. No. 62/695,912, filed on Jul. 10, 2018, (b) U.S. Provisional Patent Application Ser. No. 62/853,491, filed on May 28, 2019, (c) U.S. Provisional Patent Application Ser. No. 62/795,852, filed on Jan. 23, 2019, and (d) U.S. Provisional Patent Application Ser. No. 62/788,283, filed on Jan. 4, 2019. This application also claims benefit of priority to U.S. Provisional Patent Application Ser. No. 62/877,000, filed on Jul. 22, 2019. Each of the above-mentioned applications is incorporated herein by reference.