Within networks such as Wide Area Networks (WANs), tunneling is a mechanism that creates a connection between two locations of a data network while maintaining data security and bandwidth separation. Some existing solutions may implement a full mesh topology, which includes tunnels between every pair of devices.
The present disclosure, in accordance with one or more various examples, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical examples.
The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.
Traditionally, wide area networks (WANs) have bridged gaps between local area networks (LANs), which may reside in different geographical locations. WANs rely on hardware network devices such as gateways or routers to prioritize transmission of data, voice, and video traffic between LANs. The increase of cloud service providers and Software as a Service (SaaS) has triggered a concurrent rise in data being stored on cloud networks. To more effectively adapt to the rapidly growing usage of cloud networks, software-defined WANs (SDWANs) have emerged as a new paradigm. SDWANs provide centralized and/or software-based controls of policies that coordinate traffic paths, failover, and real-time monitoring to reduce delays in data transmission, thereby automating functions that were previously manually configured. Nodes, sites, portions, or branches of a WAN in the SDWAN may be connected via multiprotocol label switching (MPLS), Last Mile Fiber Optic Network, wireless, broadband, virtual private networks (VPNs), Long Term Evolution (LTE), 5G, 6G, and the internet. SDWANs may also implement virtual tunnels, which constitute a logical overlay over an existing physical network of WANs. These tunnels may include any of Internet Protocol Security (IPSec), User Datagram Protocol (UDP), Transmission Control Protocol (TCP), Generic Routing Encapsulation (GRE), Virtual Extensible LAN (VxLAN), Datagram Transport Layer Security (DTLS), gRPC Remote Procedure Call (gRPC), or some other IP-based protocol tunnel. These tunnels may be instrumental in establishing connections to data and services stored at different portions of the SDWANs, for example, among branches or sites, thus providing additional and more efficient access to data and services while maintaining data security. Tunnels may bridge portions of the SDWANs that have disjoint capabilities, policies, and protocols by carrying protocols that are otherwise unsupported by those portions of the SDWANs. As an example, IPSec tunnels protect data traffic by ensuring confidentiality, integrity, authentication, and anti-replay. Confidentiality encompasses encryption of data so that only a sender and receiver would be able to read data packets. Integrity entails transmitting a hash value to both the sender and the receiver so that both parties become aware of any changes to the data packets. Meanwhile, authentication provides the sender and the receiver assurance regarding the identities of each other. Lastly, anti-replay prevents transmission of duplicate packets by a potential attacker. In some examples, IPSec tunnels may be implemented in conjunction with Dynamic Multipoint VPN (DMVPN), MPLS-based L3 VPN, and layer 2 (L2) tunneling protocols.
One type of topology in SDWANs is a hub-and-spoke topology, in which different portions, branches, or sites (hereinafter “branches”) of a SDWAN are connected to other branches via a data center or a centralized center (hereinafter “data center”). In such a hub-and-spoke topology, the branches of a SDWAN are not directly connected to one another via tunnels. However, a full mesh topology has been more frequently implemented. In such a full mesh topology, each branch is connected to all other branches, along with a data center, via tunnels. The tunnels may be bidirectional. Tunneling results in a number of benefits, such as faster transmission as a result of direct connections among branches rather than indirect connections that traverse the data center. Other benefits include more efficient and secure data transmission and increased redundancy of paths to transmit data. However, an excessive amount of tunneling, such as in a full mesh topology, may constitute a double-edged sword.
One downside of direct tunnels among branches is the resulting additional consumption of computing resources, as manifested, for example, by additional bytes added to existing IP packets and increased bandwidth. The additional bytes may increase transmission and queueing delay, thereby affecting jitter and overall packet delay. The added packet size may also result in fragmentation of packets due to the packets exceeding a threshold size. Fragmentation may increase the chance of packet drops and increase consumption of processing power, memory, and CPU cycles. In some applications such as Voice over Internet Protocol (VoIP), overhead resulting from tunnels may consume as much as 40% to 100% additional packet bandwidth, resulting in compromised bandwidth efficiency, increased latency, and packet drops.
Furthermore, the requirement of hardware such as routers, which support tunnels, increases the load on processors of the hardware. This load is exacerbated as the number of sites or branches of a network increases. The support of tunnels entails transmitting, from the routers, for example, periodic probe traffic, or probe packets, to maintain liveness of each of the tunnels. Once the router transmits the probe packets, the router may determine whether any response to the probe packets has been received from respective endpoints (e.g., other network devices such as routers) of the tunnels. If the router detects a response to a probe packet, and the response further indicates an identifier, such as an Internet Protocol (IP) address, of the router, then the router determines that the response was successful. Thus, the tunnel through which the probe packet was transmitted is maintained. However, if the router fails to detect a response to the probe packet, then the router may resend the probe packet a threshold number of times at threshold intervals, such as four retries every five seconds. If there is still no response, or an improper response (e.g., one failing to indicate an identifier of the router), then the router determines that the response is unsuccessful. In such a situation, the tunnel may be removed. Bandwidth consumed in such a process may be computed as the product of the number of tunnels, the probe packet size, and the probe burst size indicating a number of probe packets transmitted simultaneously, divided by the time interval of the transmission of the probe packets. The probe packets may be UDP-based packets of around 200 bytes. The probe packets may be transmitted regardless of whether, or how much, data is being transmitted across the tunnels. In other words, even an actively utilized tunnel does not exempt the router, or other computing component, from transmitting probe packets in order to maintain that tunnel. A full mesh network entails each branch gateway or router at each site forming tunnels with all the other branch gateways or routers at different sites, resulting in n*(n−1)/2 total tunnels, wherein n indicates the number of network devices. Within a branch topology having only 16 network devices, such as routers or gateways, with each network device having three uplinks, in some implementations, an estimated 9.3 Gigabytes (GB) of traffic may be consumed merely to maintain the liveness of the tunnels over a 24-hour duration within a full mesh network. In some examples, uplinks may indicate a number of separate WAN connections or links between two branches. Additionally, a cloud service, or microservice such as an orchestrator, which controls and coordinates operations on the network, computes cryptographic maps for each of the tunnels and propagates these cryptographic maps via a tunnel such as a Remote Procedure Call (RPC) tunnel to each of the devices that are associated with each of the branches. The additional computing resources incurred as a result of a full mesh network may be prohibitive and severely hamper performance within the network, especially as the number of devices continues to proliferate.
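As a rough illustration of that relationship, the sketch below estimates probe-maintenance traffic for the 16-device, three-uplink example above. The assumption of one tunnel per uplink between each device pair and the probe burst size and interval are illustrative values chosen only to show the shape of the computation; they happen to reproduce a figure close to the 9.3 GB per day cited above.

```python
def daily_probe_traffic_gb(num_devices: int, uplinks_per_device: int,
                           probe_size_bytes: int, probe_burst: int,
                           probe_interval_s: float) -> float:
    """Estimate the probe traffic needed to keep a full mesh of tunnels alive.

    Assumes one tunnel per uplink between each pair of devices; bandwidth is
    (number of tunnels * probe packet size * probe burst size) / probe interval.
    """
    device_pairs = num_devices * (num_devices - 1) // 2
    tunnels = device_pairs * uplinks_per_device
    bytes_per_second = tunnels * probe_size_bytes * probe_burst / probe_interval_s
    return bytes_per_second * 24 * 3600 / 1e9


# 16 devices with three uplinks each and ~200-byte UDP probes, per the example
# above; a burst of 3 probes every 2 seconds is an assumed maintenance schedule.
print(daily_probe_traffic_gb(16, 3, 200, probe_burst=3, probe_interval_s=2.0))
# roughly 9.3 GB of probe traffic per day
```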
Examples described herein address these challenges by implementing a computing component, such as a server, that selectively establishes tunnels among certain network devices that constitute a network environment, such as a WAN or SDWAN. This selective establishment entails determining the number of tunnels to be formed, as well as between which network devices the tunnels are to be formed. As alluded to previously, the number of tunnels formed is less than that of a full mesh topology, in order to reduce the cost of computing resources. Therefore, some pairs of network devices have no tunnels directly between them. The tunnels may be selectively formed, for example, between network devices through which data transmission occurs relatively more frequently, and/or through which a relatively large amount of data is transmitted. Meanwhile, if two network devices transmit data relatively less frequently, and/or at a relatively low volume or amount, then these two network devices may not have tunnels connecting them. The selective establishment of tunnels achieves a balance between computing overhead on one hand and data security and efficient data transmission on the other hand. As an initial step in determining the number of tunnels to be formed, the server may separate, partition, or demarcate network devices in a network into sections, portions, groups, or cohorts (hereinafter “cohorts”). The formation of cohorts may reduce or minimize the number of tunnels while still maintaining transmission, either directly or indirectly via tunnels, through each of the network devices. Therefore, tunnels may be selectively established, formed, or provisioned to save computing resources without compromising the range of communication between each of the network devices. Within each cohort, a leader may be selected, evaluated, and re-elected or switched periodically based on the evaluation. Tunnels among the leaders and among the network devices in a same cohort may be formed, while other tunnels may not be formed. Thus, the formation of cohorts sets, or clarifies, a guideline or criteria as to where tunnels are to be formed.
The network devices 120-127, 130-137 may each constitute edge devices and/or separate branches of a WAN or a SDWAN. Although only 16 network devices are shown for the sake of illustration, any number of network devices may be contemplated. The computing component 111 may further provision tunnels or coordinate or initiate the formation thereof. Moreover, the computing component 111 may elect one of the network devices in each cohort as a leader. Each of the leaders communicates with leaders in other cohorts and, in the event of a failure, reroutes data in case the computing component 111 has not updated a topology that indicates particular routes through which traffic is routed. The computing component 111 may include one or more hardware processors and logic 113 that implements instructions to carry out the functions of the computing component 111.
In some examples, the computing component 111 may be associated with a platform or orchestrator 114 (hereinafter “orchestrator”). Any operations attributed to the computing component 111 may also be attributed to the orchestrator 114. In some examples, the orchestrator 114 may include services that use rules or policies (hereinafter “policies”) to automate tasks associated with separating the network devices 120-127, 130-137 into cohorts and selectively forming and provisioning tunnels between a subset of the network devices 120-127, 130-137. In particular, the orchestrator 114 may coordinate a workflow to organize the tasks. In some examples, the computing component 111 may implement policies, services, or microservices of the orchestrator 114, which may be included as part of the logic 113. The computing component 111 may include one or more physical devices or servers, or cloud servers on which services or microservices run. The computing component 111 may store, in a database 112, information about the network, the network devices 120-127 and 130-137, and the cohorts, which may include current data and/or historical data regarding the aforementioned. For example, the database 112 may include data of attributes, metrics, parameters, and/or capabilities of the network devices 120-127 and 130-137. In some examples, the computing component 111 may cache a subset of the data stored in the database 112 in a cache 116. For example, the computing component 111 may cache any of the data within the database 112 that may be frequently accessed, referenced, or analyzed, and/or may be frequently changing (e.g., having a higher than threshold standard deviation and/or higher than threshold variability with respect to time). Such data may include performance metrics, attributes, or parameters of the network devices 120-127 and 130-137.
In
Such a distribution results in fewer tunnels compared to other scenarios. For example, distributing all 16 network devices into a single cohort, or distributing each network device into its own cohort, would amount to a full mesh topology in which 360 tunnels are formed. As another example, distributing 15 network devices into a first cohort and a single network device into a second cohort would result in 318 tunnels being formed. As another example, distributing 14 network devices into a first cohort and two network devices into a second cohort would result in 279 tunnels being formed. As yet another example, distributing four network devices into each of four cohorts would result in 90 tunnels being formed, which is still more than the 84 tunnels in the distribution of
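These counts follow from fully meshing the network devices within each cohort and fully meshing the cohort leaders across cohorts, with three tunnels (one per uplink) between each connected pair of devices. The sketch below reproduces the figures above; the five-cohort split of 4, 3, 3, 3, and 3 devices is an assumed distribution used only to show how an 84-tunnel count can arise.

```python
def tunnel_count(cohort_sizes: list[int], tunnels_per_pair: int = 3) -> int:
    """Tunnels needed when each cohort is fully meshed internally and the
    cohort leaders (one per cohort) are fully meshed with one another.

    tunnels_per_pair models one tunnel per uplink between a connected pair
    (three uplinks per device in this example).
    """
    def pairs(k: int) -> int:
        return k * (k - 1) // 2

    intra_cohort_pairs = sum(pairs(size) for size in cohort_sizes)
    leader_pairs = pairs(len(cohort_sizes))
    return (intra_cohort_pairs + leader_pairs) * tunnels_per_pair


print(tunnel_count([16]))             # 360: one cohort, i.e., a full mesh
print(tunnel_count([15, 1]))          # 318
print(tunnel_count([14, 2]))          # 279
print(tunnel_count([4, 4, 4, 4]))     # 90
print(tunnel_count([4, 3, 3, 3, 3]))  # 84: an assumed five-cohort split of 16 devices
```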
Next, the computing component 111 assigns each network device 120-127, 130-137 to a particular one of the five cohorts, in a clustering procedure, illustrated as cohorts 202, 203, 204, 205, and 206 in
As a particular example, the computing component 111 may be more likely to cluster network devices 120-127, 130-137 that are located closer together with respect to one another into a common cohort. For instance, the computing component 111 may cluster all network devices 120-127, 130-137 that are located within a threshold distance of one another, such as 500 feet. Alternatively, the computing component 111 may cluster all network devices 120-127, 130-137 based on radiofrequency (RF) neighbor data, which may encompass a group of network devices that can detect and recognize signals from one another of at least a threshold level, such as negative 80 decibels relative to one milliwatt (dBm).
Next, regarding the software landscape, the computing component 111 may be more likely to cluster network devices 120-127, 130-137 that have the same or similar embedded software into a common cohort. In such a manner, network devices within a common cohort may be more likely to have compatible software, and thus may communicate more effectively with one another. Furthermore, regarding the bandwidth consumed by different categories of applications, the computing component 111 may be more likely to cluster network devices 120-127, 130-137 that tend to have similar bandwidth consumption patterns, such as amounts or proportions of total bandwidth consumed on particular applications. Moreover, regarding the traffic distributions or patterns, the computing component 111 may be more likely to cluster network devices 120-127, 130-137 that tend to have similar traffic distributions or patterns. For example, the traffic patterns may be relative to a time of day, such as a relative frequency of traffic during daytime compared to nighttime. In another example, the traffic distributions or patterns may indicate relative and/or total amounts of traffic consumed across different categories of traffic, such as data transmission, video, and voice, and how traffic consumption varies over time and/or cyclically. Next, regarding the number of different device types, the computing component 111 may be more likely to cluster network devices 120-127, 130-137 that have similar types, distributions, or proportions of connected device types. Lastly, regarding the reputation, the computing component 111 may be more likely to cluster network devices 120-127, 130-137 that have similar reputations, in an effort to distribute load over a cohort more evenly. In a particular example, if a first network device within a cohort has a low reputation but a second network device within the cohort has a high reputation, then data traffic may be diverted, disproportionately, to the second network device.
Such criteria may be determined based on historical data, which, for example, may be stored in the database 112 and/or cached in the cache 116. An objective of selectively assigning network devices to particular cohorts may be that network devices within each of the cohorts have or are associated with similar characteristics, whether of the network devices themselves or of associated client devices or LANs, so that loads upon each of the network devices may be roughly evenly distributed and the performance and/or other characteristics in a particular cohort may be predictable. For example, if a first network device in one cohort has different characteristics compared to a second network device, such as amounts of traffic or types of client devices that are supportable, then one network device may have to bear an unreasonably high load while the other network device may be unable to support, or limited in its ability to support, certain functions requested by the client devices.
In some examples, attributes of the software landscape, or software stack embedding, may include any of an operating system (OS) version, a software version, corresponding user accounts, a kernel version, OS registry databases, .plist files, running processes, daemon, background, and persistent processes, startup operations, launched entries, application and system errors encountered, DNS lookups, and/or network connections of or associated with client devices. Meanwhile, the traffic pattern may be determined based on a protocol type, a service, flags, a number of source bytes, a number of destination bytes, frequencies of occurrence of incorrect fragments, packet counts per transmission or over a period of time, packet sizes per packet, receiver error rates, types of data transmitted (e.g., media or textual data), and/or fluctuations such as spikes in traffic. Furthermore, a reputation of a network device may be determined based on a total number of unpermitted applications accessed at that network device, a total number of malware or suspected malware URL (uniform resource locator) requests at the network device, a total number of banned file attachments and/or MIME types used in emails or other communications, a total number of anomalous intrusions detected on client devices connected to the network device, and/or a total number of sensitive data breaches detected on the network device. These attributes or parameters may be summed after individually being weighted. The weights may be based on a relative importance of each of the attributes or parameters, which may be the same across all network devices or specific to a particular network device. For example, in a particular network device, a consideration of unpermitted applications may be deemed especially important and thus be weighted heavily. These attributes or parameters may be measured according to a raw number over a given amount of time, such as within the last day, ten days, or month, or according to a frequency of occurrence, adjusted based on a data throughput on the network device.
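As a minimal sketch of that weighted-sum idea, assuming hypothetical attribute names, weights, and a throughput-based adjustment (none of which are values specified in this disclosure):

```python
def reputation_score(counts: dict[str, float], weights: dict[str, float],
                     throughput_gb: float) -> float:
    """Weighted sum of security-related counts, adjusted for device throughput.

    Lower scores indicate a better (cleaner) reputation. Attribute names and
    weights are illustrative only.
    """
    weighted = sum(weights[name] * value for name, value in counts.items())
    # Adjust by throughput so busy devices are not penalized merely for volume.
    return weighted / max(throughput_gb, 1.0)


# Hypothetical counts observed over the last ten days for one network device.
counts = {
    "unpermitted_app_accesses": 12,
    "malware_url_requests": 3,
    "banned_attachments": 1,
    "anomalous_intrusions": 0,
    "sensitive_data_breaches": 0,
}
# Hypothetical per-attribute weights; a particular device could weight
# unpermitted applications more heavily if that factor is deemed most important.
weights = {
    "unpermitted_app_accesses": 1.0,
    "malware_url_requests": 5.0,
    "banned_attachments": 2.0,
    "anomalous_intrusions": 8.0,
    "sensitive_data_breaches": 10.0,
}
print(reputation_score(counts, weights, throughput_gb=250.0))
```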
In some examples, the computing component 111 may implement an artificial intelligence (AI) or supervised, trained machine learning model 117 that incorporates factors 1) through 6) described above to determine an assignment of network devices into cohorts. The machine learning model may be trained, either sequentially or in parallel, using two different training datasets. A first training dataset may include situations or scenarios in which a network device is assigned to a particular cohort because of sufficient similarities between the attributes of the network device and those of other network devices within the cohort. A second training dataset may include situations or scenarios in which a network device is not assigned to a particular cohort due to more than a threshold degree of difference between the attributes of the network device and those of other network devices within the cohort. In some examples, the machine learning model may further be trained based on feedback, following the assignment of network devices into cohorts, regarding certain performance attributes. These performance attributes may include packet transmission rate or speed, network speed, packet drop rates, frequencies of occurrence of incorrect fragments, packet counts per transmission or over a period of time, receiver error rates, and/or fluctuations such as spikes in traffic or packet sizes, over a particular cohort and/or across multiple cohorts. For example, the machine learning model may receive feedback that certain performance attributes fail to satisfy a threshold level or measure, and modify or adapt its criteria in assigning network devices into cohorts. In some scenarios, the machine learning model 117 may be trained based on the performance attributes. For example, if a particular parameter such as packet transmission rate or speed failed to satisfy a threshold, the machine learning model 117 may be trained to weight that parameter more heavily in assigning network devices to cohorts. The computing component 111 may automatically implement, without user input, the determined assignment of network devices into cohorts, or alternatively, provide a recommendation to a user regarding such an assignment so that the user may manually implement the recommendation.
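One way such a model might be sketched, assuming scikit-learn is available, is a binary classifier trained on labeled device-to-cohort feature vectors, where the positive class corresponds to "was assigned to the cohort" and the negative class to "was not assigned." The feature names, the toy training rows, and the choice of a random forest are illustrative assumptions, not details from this disclosure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: similarity features between a candidate device and a cohort
# (e.g., distance, software-version match, bandwidth-pattern similarity,
# traffic-pattern similarity, device-type-mix similarity, reputation gap).
X_train = np.array([
    [0.1, 1.0, 0.9, 0.8, 0.9, 0.1],   # close match: device was assigned
    [0.2, 1.0, 0.8, 0.9, 0.7, 0.2],   # close match: device was assigned
    [0.9, 0.0, 0.2, 0.3, 0.4, 0.8],   # poor match: device was not assigned
    [0.8, 0.0, 0.1, 0.2, 0.3, 0.9],   # poor match: device was not assigned
])
y_train = np.array([1, 1, 0, 0])       # 1 = assigned to the cohort, 0 = not assigned

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Score a new device against each candidate cohort and pick the best fit.
candidate_features = {
    "cohort_202": [0.15, 1.0, 0.85, 0.8, 0.8, 0.15],
    "cohort_203": [0.7, 0.0, 0.3, 0.4, 0.3, 0.7],
}
scores = {c: model.predict_proba([f])[0][1] for c, f in candidate_features.items()}
best_cohort = max(scores, key=scores.get)
print(best_cohort, scores)
```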
In the example scenario of
The computing component 111 may provision tunnels or initiate the formation thereof by generating or computing unique keys, such as symmetric keys, corresponding to each tunnel to be formed between a pair of network devices. The computing component 111 transmits the generated keys to any of the network devices between which a tunnel is to be formed. Once that pair of network devices receives the keys, the pair may exchange encrypted data in order to validate the keys. Once the keys are validated, the tunnel may be formed. In some examples, the computing component 111 may determine that tunnels are to be formed within a cohort (e.g., each of the cohorts 202-206) such that each network device within a cohort can transmit data directly to any other network device within that cohort. In other examples, as illustrated in
The criteria to determine whether or not a network device should be appointed as a leader, or to select a network device in a cohort from among different network devices, may include parameters or attributes such as available amounts of bandwidth, memory, and available computing resources such as CPU cycles of the network devices; uplink speed, uplink jitter, uplink latency, uplink packet loss, and uplink average round trip time consumed by a packet transmission; consumption of bandwidth, memory, or computing resources; models of the network devices; and/or software versions of the network devices. In some examples, historical data of, or indicative of, these parameters may be utilized to determine the appointment of a network device as a leader. In some examples, the machine learning model 119 may predict respective future parameters based on the historical data and/or trends across the historical data. The determination of which network device to appoint as a leader may be based on the predicted future parameters, which may be indicative of a predicted future performance, and/or historical data regarding the parameters. In some examples, a network device having among the lowest computing loads and/or among the best performances, and/or the lowest predicted future computing loads and/or the best or highest predicted performances, as measured by the aforementioned parameters or attributes, may be selected as a leader for a particular cohort. Upon determination of leaders of the cohorts 202-206, the computing component 111 may commence the formation of bidirectional tunnels to create a fully meshed network among the leaders, as illustrated in
In some examples, the appointment of a leader of a cohort may be implemented using an AI or a machine learning model (hereinafter “machine learning model”) 119. The machine learning model 119 may be trained, either sequentially or in parallel, using two different training datasets. A first training dataset may include situations or scenarios in which a network device is assigned as a leader. A second training dataset may include situations or scenarios in which a network device is not assigned as a leader. Thus, the machine learning model 119 may be able to distinguish between different situations or contexts in which a network device is to be appointed as a leader, compared to situations or contexts in which a network device is not to be appointed as a leader. In some examples, the machine learning model 119 may further be trained based on feedback, following the determination of leaders, regarding certain performance attributes or metrics of the determined leader. These performance attributes may include packet transmission rate or speed, network speed, packet drop rates, frequencies of occurrence of incorrect fragments, packet counts per transmission or over a period of time, receiver error rates, and/or fluctuations such as spikes in traffic or packet sizes, over a particular cohort, and/or failure or error rates of the determined leader. For example, the machine learning model may receive feedback that a network device determined or appointed as a leader fails to satisfy certain performance attributes, and modify or adapt the criteria in determining a leader. The computing component 111 may automatically determine and assign particular network devices as respective leaders of different cohorts, without user input, or alternatively, provide a recommendation to a user regarding such an assignment so that the user may manually implement the recommendation.
The computing component 111 may continuously, or periodically, monitor performance metrics or parameters of the determined leader of each cohort. If one or more parameters, and/or an overall measure of performance, of the determined leader fail to satisfy one or more performance attributes, parameters, standards, or thresholds, and/or if the determined leader becomes inoperative (e.g., unreceptive or unable to transmit data), then the computing component 111 may selectively switch out the current leader and determine or appoint a different leader using the same or similar criteria as the determination of a leader alluded to previously (e.g., one or more performance parameters or attributes). In some examples, the machine learning model 119 may be trained based on the parameters of the determined leader of each cohort. For example, if a particular parameter such as uplink jitter failed to satisfy a threshold, the machine learning model 119 may be trained to weight that parameter more heavily in redetermining a leader.
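A simplified sketch of how a leader might be scored from such parameters and periodically re-evaluated follows; the metric names, weights, and re-election threshold are assumptions for illustration rather than values specified in this disclosure.

```python
from dataclasses import dataclass

@dataclass
class DeviceMetrics:
    device_id: str
    available_bandwidth_mbps: float
    available_memory_mb: float
    cpu_idle_pct: float
    uplink_jitter_ms: float
    uplink_latency_ms: float
    uplink_packet_loss_pct: float


def leader_score(m: DeviceMetrics) -> float:
    """Higher is better: reward available headroom, penalize jitter, latency, loss.

    The weights are illustrative only.
    """
    return (0.3 * m.available_bandwidth_mbps / 1000
            + 0.2 * m.available_memory_mb / 1024
            + 0.2 * m.cpu_idle_pct / 100
            - 0.1 * m.uplink_jitter_ms
            - 0.1 * m.uplink_latency_ms
            - 0.1 * m.uplink_packet_loss_pct)


def elect_leader(cohort: list[DeviceMetrics]) -> DeviceMetrics:
    """Pick the device with the best current (or predicted) score as leader."""
    return max(cohort, key=leader_score)


def needs_reelection(current_leader: DeviceMetrics, cohort: list[DeviceMetrics],
                     min_score: float = 0.0) -> bool:
    """Switch leaders if the current one falls below an assumed threshold or is
    no longer the best-scoring device in its cohort."""
    return (leader_score(current_leader) < min_score
            or elect_leader(cohort).device_id != current_leader.device_id)
```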
If a new leader of a particular cohort is selected or determined, the computing component 111 may generate and transmit new keys so that the new leader may form tunnels with each of the other leaders. In some examples, the computing component 111 may implement a make-before-break strategy or mechanism, in which the computing component 111 may determine or verify that the new tunnels become fully functional prior to breaking down existing tunnels with the previous leader.
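The make-before-break mechanism might be sketched roughly as follows, where provision_tunnel, tunnel_is_operational, and remove_tunnel are hypothetical stand-ins for the orchestrator's key-distribution and probing machinery rather than functions defined in this disclosure.

```python
import time

def switch_leader(old_leader: str, new_leader: str, other_leaders: list[str],
                  provision_tunnel, tunnel_is_operational, remove_tunnel,
                  timeout_s: float = 30.0) -> bool:
    """Bring up the new leader's tunnels before tearing down the old leader's."""
    new_pairs = [(new_leader, peer) for peer in other_leaders]

    # "Make": provision tunnels from the new leader to every other cohort leader.
    for a, b in new_pairs:
        provision_tunnel(a, b)

    # Wait (up to a timeout) until every new tunnel reports as operational.
    deadline = time.monotonic() + timeout_s
    while not all(tunnel_is_operational(a, b) for a, b in new_pairs):
        if time.monotonic() > deadline:
            # New tunnels never came up: back out and keep the old leader's tunnels.
            for a, b in new_pairs:
                remove_tunnel(a, b)
            return False
        time.sleep(1.0)

    # "Break": only now remove the old leader's tunnels to the other leaders.
    for peer in other_leaders:
        remove_tunnel(old_leader, peer)
    return True
```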
Upon formation of the tunnels 220-227, 230-237, and the tunnels 302, 304, 306, 308, 310, 312, 314, 316, 318, and 320, the computing component 111 and/or the orchestrator 114 may determine or obtain information regarding routes of data transmission which include all the aforementioned tunnels. The information regarding routes may include all possible routes, regardless of whether the routes are operational. The information regarding routes may be updated, for example, if new network devices are introduced, network devices are removed, and/or tunnels are formed or removed. The computing component 111 or the orchestrator 114 may propagate or advertise the information to all network devices 120-127, 130-137. The computing component 111 or the orchestrator 114 may further receive information regarding a topology. The topology information may include both an advertisement regarding nodes (e.g., information regarding the network device itself such as an identity of the network device) and an advertisement regarding links (e.g., the tunnel or the interface information of the network device, such as whether the tunnel is operational).
In some examples, a link (e.g., tunnel) between two network devices will be construed or considered by the computing component 111 or the orchestrator 114 to be working, operational, or up (hereinafter “operational”), only when both the network devices report that link to be operational, as part of the link advertisement. Each network device may transmit such link information using a protocol such as Overlay Agent Protocol. Referring to
Once the computing component 111 or the orchestrator 114 receives the topology information from each of the network devices 120-127, 130-137, the computing component 111 or the orchestrator 114 may create or form a topological diagram or database. The topological diagram or database may be manifested as a connected graph or a connectivity graph of the network devices 120-127, 130-137 within that branch mesh topology. The connectivity graph may be based on current tunnel statuses between the network devices. The topological diagram or database may be augmented or overlaid with link costs to transport or transmit data packets between any two network devices 120-127, 130-137. The costs may represent computing costs of data transmission along each of the links connecting two network devices. In some examples, a cost to transmit data between two network devices of a common cohort (e.g., between the network devices 120 and 122 in the cohort 202 of
The computing component 111 or the orchestrator 114 may distribute, publish, or propagate the topological diagram or database to every network device (e.g., the network devices 120-127, 130-137) within the branch mesh topology. The orchestrator 114 or the computing component 111 may transmit, to the network devices, updates regarding tunnel statuses only when changes are occurring in the network. The transmission of the topological database updates to the network devices may occur at a higher priority compared to updates regarding route statuses. The computing component 111 or the orchestrator 114 may create and/or store multiple topological diagrams or databases (hereinafter “topological databases”), each of which corresponds to a different branch mesh topology. Each of the topological diagrams or databases may be stored and maintained in a unique database. If a network device is part of two different branch mesh topologies at the same time, the computing component 111 or the orchestrator 114 may publish both topological diagrams or databases, which correspond to the two different branch mesh topologies, to that network device.
After a network device (e.g., any of the network devices 120-127, 130-137) receives the topological diagrams or databases, the network device may calculate a shortest path to route data by using an algorithm, such as Dijkstra's Shortest Path First (SPF) algorithm. In some examples, a shortest path may be based on a lowest total cost to route data from the network device to a destination. If the transmission of data requires multiple hops, meaning that the transmission goes through one or more intermediate or intervening network devices, then the algorithm may be used to select a subsequent hop. The calculation of the shortest path may be based on links within the topological database, and may only consider links that are construed or considered as being working or up.
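A compact sketch of that calculation follows: build a graph from the links that both endpoints report as operational (per the link-advertisement rule above), overlay the link costs, and run Dijkstra's algorithm to find the lowest-cost path. The device names and the specific costs (a lower cost for intra-cohort links than for leader-to-leader links) are assumptions for illustration.

```python
import heapq

# Each entry: (device_a, device_b, reported_up_by_a, reported_up_by_b, cost).
# Only links reported operational by BOTH endpoints are used, as described above.
link_advertisements = [
    ("dev120", "dev121", True, True, 5),    # intra-cohort link (assumed cost)
    ("dev121", "dev122", True, True, 5),
    ("dev120", "dev130", True, True, 10),   # leader-to-leader link (assumed cost)
    ("dev130", "dev131", True, False, 5),   # one side reports it down: ignored
]

graph: dict[str, list[tuple[str, int]]] = {}
for a, b, up_a, up_b, cost in link_advertisements:
    if up_a and up_b:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))   # tunnels are bidirectional


def shortest_path(graph, source: str, destination: str):
    """Dijkstra's Shortest Path First over the operational-link graph."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []


print(shortest_path(graph, "dev120", "dev122"))
# (10, ['dev120', 'dev121', 'dev122'])
```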
In some examples, the network devices determined to be leaders may receive information regarding other leaders, or tunnels between leaders, becoming inoperative, via Dead Peer Detection (DPD). In such examples, the leaders may shunt data to alternative paths such as through a node or a data center, as will be illustrated in
Similarly to
At step 506, the hardware processor(s) 502 may execute machine-readable/machine-executable instructions stored in the machine-readable storage media 504 to cluster network devices into cohorts. Each of the cohorts includes a logical demarcation of a subset of the network devices. For example, a first cohort may include a first set of the network devices and a second cohort may comprise a second set of the network devices. As illustrated in
The clustering may be based on any of the aforementioned criteria described with respect to
At step 508, the hardware processor(s) 502 may execute machine-readable/machine-executable instructions stored in the machine-readable storage media 504 to selectively determine a subset of the network devices among which a full mesh topology is to be formed or created. In some examples, at least a portion of the determined subset of the network devices may include leaders of cohorts, which are responsible for data transmission across different cohorts. In some examples, additionally or alternatively, a portion of the determined subset may be all network devices in a common cohort. In the scenario of the determination of leaders, the determination may be based at least in part on any of amounts of available bandwidth, available memory, available CPU cycles, jitter, latency, packet loss, and average round trip time within the network devices, models of the network devices, and/or software versions of the network devices, as illustrated with reference to
At step 510, the hardware processor(s) 502 may execute machine-readable/machine-executable instructions stored in the machine-readable storage media 504 to provision a first tunnel and a second tunnel, which were selectively determined in step 508. The provisioning of the first tunnel and the second tunnel may include initiating, commencing, facilitating, and/or coordinating a creation of the first tunnel and the second tunnel. The provisioning may entail generating unique keys corresponding to each of the first tunnel and the second tunnel. In particular, a first key pair may be generated to initiate creation of the first tunnel. The first key pair may be transmitted to a first device and a second device through which the first tunnel is to be created or formed. Once the first device and the second device use the key pair to successfully transmit data, the first tunnel is created. Similarly, a second key pair may be generated to initiate creation of the second tunnel. The second key pair may be transmitted to the first device and a third device through which the second tunnel is to be created or formed. Once the first device and the third device use the key pair to successfully transmit data, the second tunnel is created.
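A high-level sketch of that provisioning flow is shown below, under assumed helpers: the randomly generated symmetric key is a placeholder for whatever cryptographic material is actually computed, and send_key, validate_with_peer, and record_tunnel are hypothetical operations standing in for the key delivery and validation steps described above.

```python
import secrets

def provision_tunnel(orchestrator, device_a, device_b) -> bool:
    """Provision one tunnel between a pair of devices.

    1. Generate a unique (here, symmetric) key for this tunnel.
    2. Deliver the key to both endpoints.
    3. Let the endpoints exchange encrypted data to validate the key.
    4. Record the tunnel as created only if validation succeeds.
    """
    key = secrets.token_bytes(32)                   # placeholder key material
    orchestrator.send_key(device_a, device_b, key)  # hypothetical delivery calls
    orchestrator.send_key(device_b, device_a, key)

    if device_a.validate_with_peer(device_b, key) and \
       device_b.validate_with_peer(device_a, key):
        orchestrator.record_tunnel(device_a, device_b)
        return True
    return False


# Step 510 would then provision two tunnels from the first device: one to the
# second device (same cohort) and one to the third device (second cohort):
# provision_tunnel(orchestrator, first_device, second_device)
# provision_tunnel(orchestrator, first_device, third_device)
```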
At step 606, the hardware processor(s) 602 may execute machine-readable/machine-executable instructions stored in the machine-readable storage media 604 to determine that a first tunnel between a first network device of the first cohort and a second network device within the first cohort is to be created. For example, the hardware processor(s) 602 may create tunnels among all network devices within the first cohort such that the network devices within the first cohort are connected in a full mesh topology. In such a manner, the network devices within the first cohort may communicate efficiently while having options of redundant data transmission pathways. Each network device in the first cohort may receive updates regarding statuses of tunnels and/or other network devices, either via periodic DPD signals or from the computing component 600. Therefore, in the event of a failure of a network device, a tunnel, or both, each network device in the first cohort may modify or revise its routing table to determine alternate data transmission paths, such as those that consume the least amount of computing cost.
In some examples, the first network device has a higher historical performance metric or a higher predicted performance metric compared to that of the second network device, based on a comparison of any of amounts of available memory, jitter, latency, packet loss, and average round trip time.
At step 608, the hardware processor(s) 602 may execute machine-readable/machine-executable instructions stored in the machine-readable storage media 604 to determine that a second tunnel between the first network device of the first cohort and a third network device within the second cohort is to be created. For example, the hardware processor(s) 602 may determine that the first network device is a leader of the first cohort and the third network device is a leader of the second cohort. The hardware processor(s) 602 may determine a single leader in each cohort, and determine that tunnels are to connect each of the leaders in a fully meshed topology. In such a manner, data transmission across cohorts may be facilitated.
At step 610, the hardware processor(s) 602 may execute machine-readable/machine-executable instructions stored in the machine-readable storage media 604 to determine not to create, refrain from creating, or skip the creation of, one or more tunnels between first remaining network devices of the first cohort and the second set of network devices of the second cohort. The first remaining network devices may include the first set of network devices besides the first network device. In some examples, the hardware processor(s) 602 may determine not to create a third tunnel between the second network device and the third network device. In some examples, the hardware processor(s) 602 may determine not to create any tunnels between the first remaining network devices and the second set of network devices. For example, only a single network device from the first cohort may be tunneled to only a single network device from the second cohort. On a broader scale, only a single network device from each cohort may be tunneled to only a single network device from each of the other cohorts, as illustrated in
In such a manner, selectively determining not to create tunnels between devices of different cohorts, except for a single device in each cohort, may reduce computing costs by 77% compared to a full mesh topology across an entire network (for example, 84 tunnels rather than 360 in the 16-device scenario described above, a reduction of roughly 77%), without compromising the integrity, speed, or effectiveness of data transmission.
The computer system 700 also includes a main memory 706, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 702 for storing information and instructions to be executed by processor 704. Main memory 706 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the hardware processor(s) 704. Such instructions, when stored in storage media accessible to the hardware processor(s) 704, render computer system 700 into a special-purpose machine that is customized to perform the operations specified in the instructions.
The computer system 700 further includes a read only memory (ROM) 708 or other static storage device coupled to bus 702 for storing static information and instructions for the hardware processor(s) 704. A storage device 710, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 702 for storing information and instructions.
The computer system 700 may be coupled via bus 702 to a display 712, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to the hardware processor(s) 704. Another type of user input device is cursor control 716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the hardware processor(s) 704 and for controlling cursor movement on display 712. In some examples, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
The computing system 700 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
In general, the words “component,” “system,” “database,” “data store,” and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C or C++. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.
The computer system 700 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 700 to be a special-purpose machine. According to one example, the techniques herein are performed by computer system 700 in response to the hardware processor(s) 704 executing one or more sequences of one or more instructions contained in main memory 706. Such instructions may be read into main memory 706 from another storage medium, such as storage device 710. Execution of the sequences of instructions contained in main memory 706 causes the hardware processor(s) 704 to perform the process steps described herein. In alternative examples, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 710. Volatile media includes dynamic memory, such as main memory 706. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 702. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
The computer system 700 also includes a communication interface 718 coupled to bus 702. Communication interface 718 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 718 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 718 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 718 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet.” Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 718, which carry the digital data to and from computer system 700, are example forms of transmission media.
The computer system 700 can send messages and receive data, including program code, through the network(s), network link and communication interface 718. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 718.
The received code may be executed by the hardware processor(s) 704 as it is received, and/or stored in storage device 710, or other non-volatile storage for later execution.
Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and sub-combinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed examples. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.
As used herein, a circuit might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 700.
As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain examples include, while other examples do not include, certain features, elements and/or steps.
Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
Unless the context requires otherwise, throughout the present specification and claims, the word “comprise” and variations thereof, such as “comprises” and “comprising,” are to be construed in an open, inclusive sense, that is, as “including, but not limited to.” Recitation of numeric ranges of values throughout the specification is intended to serve as a shorthand notation of referring individually to each separate value falling within the range inclusive of the values defining the range, and each separate value is incorporated in the specification as if it were individually recited herein. Additionally, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. The phrases “at least one of,” “at least one selected from the group of,” or “at least one selected from the group consisting of,” and the like are to be interpreted in the disjunctive (e.g., not to be interpreted as at least one of A and at least one of B).