This disclosure generally relates to networking systems and, in particular, to analysis of perturbation of links and/or flows in a network, and to network manipulation.
The problem of congestion control is one of the most widely studied areas in data networks. Many congestion control algorithms, including the BBR algorithm recently proposed by Google, are known. The conventional view of the problem of congestion control in data networks has focused on the principle that a flow's performance is uniquely determined by the state of its bottleneck link. This view helped the Internet recover from congestion collapse in 1988, and it has guided the more than 30 years of research and development that followed. A well-known example of the traditional single-bottleneck view is the Mathis equation, which can model the performance of a single TCP flow based on the equation r = (MSS/RTT)·(C/√p),
where MSS is the maximum segment size, RTT is the round trip time of the flow, p is the packet loss probability, and C is a constant that depends on the loss model.
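Purely as an illustration of this single-bottleneck model, the Mathis estimate can be computed as in the following sketch (the constant C ≈ √(3/2) and the function name are assumptions of this example, not part of the disclosure):

```python
from math import sqrt

def mathis_throughput(mss_bytes: float, rtt_s: float, loss_prob: float,
                      c: float = sqrt(3.0 / 2.0)) -> float:
    """Rough single-flow TCP throughput estimate (bytes/second) per the
    Mathis model: rate ~ (MSS / RTT) * (C / sqrt(p))."""
    return (mss_bytes / rtt_s) * (c / sqrt(loss_prob))

# Example: 1460-byte MSS, 50 ms RTT, 0.01% loss -> roughly 3.6 MB/s
print(mathis_throughput(1460, 0.05, 1e-4))
```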
Bottleneck links in congestion-controlled networks do not operate as independent resources, however. For instance, the Mathis equation does not take into account the system-wide properties of a network, including its topology, the routing, and the interactions between flows. In reality, bottleneck links generally operate according to a bottleneck structure described herein that can reveal the interactions of bottleneck links and the system-wide ripple effects caused by perturbations in the network. Techniques using the bottleneck structure, such as the GradientGraph method described below, can address a gap in the analysis performed by the conventional techniques and can provide an alternative methodology to estimate network flow throughput.
Specifically, we present a quantitative technique for expressing bottleneck structures, a mathematical and engineering framework based on a family of polynomial-time algorithms that can be used to reason about and identify optimized solutions in a wide variety of networking problems, including network design, capacity planning, flow control and routing. For each of these applications, we present examples and experiments to demonstrate how bottleneck structures can be practically used to design and optimize data networks.
Accordingly, in one aspect a method is provided for analyzing/managing network flows. The method includes, performing by a processor, for a network having several links and several flows active during a specified time window, constructing a gradient graph. The gradient graph includes one or more link vertices respectively corresponding to one or more links and one or more flow vertices respectively corresponding to one or more flows. The gradient graph also includes one or more link-to-flow edges from a link vertex to one or more flow vertices, where the link-to-flow edges indicate that respective flows corresponding to the one or more flow vertices are bottlenecked at a link corresponding to the link vertex. The method also includes computing and storing, for each link vertex, a respective fair share of a corresponding link.
In some embodiments, the gradient graph includes one or more flow-to-link edges from a flow vertex to one or more link vertices, where a flow corresponding to the flow vertex traverses respective links corresponding to the respective link vertices, but that flow is not bottlenecked at the respective links. In other embodiments, the flow is bottlenecked at at least one of the network links and, as such, at least one of the one or more link-to-flow edges is or includes a bidirectional edge.
Constructing the gradient graph may include determining, for each link in the network, a number of flows bottlenecked at that link, and summing, over the plurality of links, the respective numbers of flows bottlenecked at each link, to obtain a total number of link-to-flow edges in the gradient graph. The method may further include allocating memory based on, at least in part, the total number of link-to-flow edges for the gradient graph. The overall memory allocation may additionally depend, at least in part, on the total number of link vertices, the total number of flow vertices, and the total number of flow-to-link edges. Since, for one or more links, not all flows traversing such links may be bottlenecked at those respective links, the total number of link-to-flow edges (or the total number of bidirectional link-to-flow edges) that are required may be minimized compared to a network graph structure having, for each link, an edge from a corresponding link vertex to vertices corresponding to all flows traversing the link. This can facilitate memory-efficient storage of the gradient graph.
In some embodiments, the method further includes selecting, from the plurality of flows, a flow to be accelerated and determining, by traversing the gradient graph, a target flow associated with a positive flow gradient. In addition, the method may include computing a leap and a fold for the target flow, where the fold includes at least two links having the same or substantially the same fair share. The method may also include reducing the flow rate of the target flow, using a traffic shaper, by a factor up to the leap, and increasing the flow rate of the flow to be accelerated by up to a product of the leap and a gradient of the flow to be accelerated. The factor may be selected to preserve the completion time of the slowest of the flows in the network. The method may include repeating the determining, computing, reducing, and increasing steps.
The gradient graph may include several levels, including a first level of link vertices and a second, lower level of link vertices, where the flows associated with (e.g., bottlenecked at) the lower level of link vertices may generally have higher rates. The method may include, for adding a new flow to the network, designating the new flow to at least one link of the second level, regardless of whether that link is a part of the shortest path for the flow to be added, to improve flow performance.
The method may include selecting, from the links in the network, a link for which capacity is to be increased, computing a leap of a gradient of the selected link, and increasing capacity of the selected link by up to the leap, to improve network performance. The network may include a data network, a transportation network, an energy distribution network, a fluidic network, or a biological network.
In another aspect, a system is provided for analyzing/managing network flows. The system includes a first processor and a first memory in electrical communication with the first processor. The first memory includes instructions that, when executed by a processing unit that includes one or more computing units, where one of such computing units may include the first processor or a second processor, and where the processing unit is in electronic communication with a memory module that includes the first memory or a second memory, program the processing unit to: for a network having several links and several flows active during a specified time window, construct a gradient graph.
The gradient graph includes one or more link vertices respectively corresponding to one or more links and one or more flow vertices respectively corresponding to one or more flows. The gradient graph also includes one or more link-to-flow edges from a link vertex to one or more flow vertices, where the link-to-flow edges indicate that respective flows corresponding to the one or more flow vertices are bottlenecked at a link corresponding to the link vertex. The instructions also configure the processing unit to compute and store, for each link vertex, a respective fair share of a corresponding link. In various embodiments, the instructions can program the processing unit to perform one or more of the method steps described above.
In another aspect, a method is provided for analyzing/managing a network. The method includes performing by a processor the steps of: obtaining network information and determining a bottleneck structure of the network, where the network includes several links and several flows. The method also includes determining propagation of a perturbation of a first flow or link using the bottleneck structure, and adjusting the first flow or link, where the adjustment results in a change in a second flow or link, where the change is based on the propagation of the perturbation or the adjustment to the first flow or link.
The network may include a data network, a transportation network, an energy distribution network, a fluidic network, or a biological network. Determining the propagation may include computing a leap and a fold associated with the first flow or link, and adjusting the first flow or link may include increasing or decreasing a rate of the first flow or increasing or decreasing allotted capacity of the first link.
In another aspect, a system is provided for analyzing/managing a network. The system includes a first processor and a first memory in electrical communication with the first processor. The first memory includes instructions that, when executed by a processing unit that includes one or more computing units, where one of such computing units may include the first processor or a second processor, and where the processing unit is in electronic communication with a memory module that includes the first memory or a second memory, program the processing unit to: obtain network information and determine a bottleneck structure of the network, where the network includes several links and several flows.
The instructions also program the processing unit to determine propagation of a perturbation of a first flow or link using the bottleneck structure, and to adjust or direct adjusting of the first flow or link, where the adjustment results in a change in a second flow or link, and where the change is based on the propagation of the perturbation or the adjustment to the first flow or link. In various embodiments, the instructions can program the processing unit to perform one or more of the method steps described above.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
The present disclosure will become more apparent in view of the attached drawings and accompanying detailed description. The embodiments depicted therein are provided by way of example, not by way of limitation, wherein like reference numerals/labels generally refer to the same or similar elements. In different drawings, the same or similar elements may be referenced using different reference numerals/labels, however. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating aspects of the invention.
In the drawings:
While it is generally true that a flow's performance is limited by the state of its bottleneck link, we recently discovered how bottlenecks in a network interact with each other through a structure—which we call the bottleneck structure—that depends on the topological, routing and flow control properties of the network. A related structure is described in co-pending U.S. patent application Ser. No. 16/580,718, titled “Systems and Methods for Quality of Service (Qos) Based Management of Bottlenecks and Flows in Networks,” filed on Sep. 24, 2019, which is incorporated herein by reference. U.S. patent application Ser. No. 16/580,718 (which may also refer to the graph structure described therein as a bottleneck structure), generally describes qualitative properties of the bottleneck precedence graph (BPG), a structure that analyzes the relationships among links.
In the discussion below, we introduce a new bottleneck structure called the gradient graph. One important difference between the gradient graph and the BPG is that the gradient graph also describes the relationships among flows and links, providing a more comprehensive view of the network. Another important difference is that the gradient graph enables a methodology to quantify the interactions among flows and links, resulting in a new class of techniques and algorithms to optimize network performance. The bottleneck structure describes how the performance of a bottleneck can affect other bottlenecks, and provides a framework to understand how perturbations on a link or flow propagate through a network, affecting other links and flows. If the congestion control problem for data networks were an iceberg, the traditional single-bottleneck view would be its tip and the bottleneck structure would be its submerged portion, revealing how operators can optimize the performance of not just a single flow but of the overall system-wide network.
Thus, we present herein a quantitative theory of bottleneck structures, a mathematical framework and techniques that result in a set of polynomial-time algorithms that allow us to quantify the ripple effects of perturbations in a network. Perturbations can either be unintentional (such as the effect of a link failure or the sudden arrival of a large flow in a network) or intentional (such as the upgrade of a network link to a higher capacity or the modification of a route with the goal of optimizing performance). With the framework described herein, a network operator can quantify the effect of such perturbations and use this information to optimize network performance.
In particular:
The techniques described herein are generally applicable to networks that transport commodity flows. In addition to communication networks, examples include (but are not limited to) vehicle networks, energy networks, fluidic networks, and biological networks. For example, the problem of vehicle networks generally involves identifying optimized designs of the road system that allow a maximal number of vehicles to circulate through the network without congesting it or, similarly, minimizing the level of congestion for a given number of circulating vehicles. In this case, vehicles are analogous to packets in a data network, while flows correspond to the set of vehicles going from location A to location B at a given time that follow the same path.
The capacity planning techniques described below can be used to analyze the need to construct a road to mitigate congestion hotspots, compute the right amount of capacity needed for each road segment, and to infer the projected effect on the overall performance of the road system. Similarly, the routing techniques described below can be used to suggest to drivers alternative paths to their destination that would yield higher throughput or, equivalently, lower their destination arrival time.
The problem of energy networks generally includes transporting energy from the locations where energy is generated to the locations where it is consumed. For instance, energy can be in the form of electricity carried via the electrical grid. Other examples include fluidic networks, which can carry crude oil, natural gas, water, etc., or biological networks that may carry water, nutrients, etc.
Biological networks, through evolution, may tend to organize themselves in optimized structures that maximize their performance (in terms of transporting nutrients) and/or minimize the transportation costs. For instance, a tree transports sap between its root and its branches, in both directions. The sap transported from the root to its branches and leaves is called xylem, which carries energy and nutrients found in the soil where the tree is planted.
The sap transported from the leaves and branches to the root is called phloem, which also carries important nutrients obtained from the biochemical process of photosynthesis performed in the cells of the leaves. In both networks (upward and downward), it is likely that the network transporting the sap performs optimally in terms of minimizing the amount of energy required to transport a given amount of sap. Such optimized designs can be generated for these types of networks, using the bottleneck structures and perturbation propagation based thereon, as discussed below. Biological networks can themselves be optimized based on such analysis.
2.1 Network Model
In their simplest form, networks are systems that can be modeled using two kinds of elements: links, which offer communication resources with a limited capacity; and flows, which make use of such communication resources. We formalize the definition of network as follows:
Definition 1 Network. We say that a tuple N = ⟨L, F, {c_l, ∀l ∈ L}⟩ is a network if:
Each flow f traverses a subset of links L_f ⊂ L and, similarly, each link l is traversed by a subset of flows F_l ⊂ F. We will also adopt the convenient notation f = L_f and l = F_l. That is, a flow is the list of links that it traverses and a link is the list of flows that traverse it. Finally, each flow f transmits data at a rate r_f and the capacity constraint Σ_{f∈F_l} r_f ≤ c_l must hold for all l ∈ L.
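For illustration only, a minimal in-memory representation of this network model might look as follows (a sketch; the Network class and its field names are assumptions of this example, not part of Definition 1):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Network:
    capacity: Dict[str, float]            # c_l for every link l
    flow_links: Dict[str, List[str]]      # L_f: the links traversed by each flow f
    rate: Dict[str, float] = field(default_factory=dict)  # r_f, filled in later

    def flows_on(self, link: str) -> List[str]:
        """F_l: the flows that traverse a given link."""
        return [f for f, links in self.flow_links.items() if link in links]

    def capacity_ok(self) -> bool:
        """Check the constraint: sum of r_f over F_l <= c_l for every link l."""
        return all(sum(self.rate.get(f, 0.0) for f in self.flows_on(l)) <= c + 1e-9
                   for l, c in self.capacity.items())
```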
A core concept upon which our framework resides is the notion of a bottleneck link. Intuitively, a link in a network is a bottleneck if its capacity is fully utilized. Mathematically and in the context of this work, we will use a more subtle definition:
Definition 2 Bottleneck link. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network where each flow f ∈ F transmits data at a rate r_f determined by a congestion control algorithm (e.g., TCP's algorithm). We say that flow f is bottlenecked at link l—equivalently, that link l is a bottleneck to flow f—if and only if:
Flow f traverses link l, and
∂r_f/∂c_l ≠ 0; that is, a perturbation of the capacity of link l causes a change in the transmission rate of flow f.
This definition of bottleneck generalizes some of the classic definitions found in the literature, while differing from them in that it focuses on the notion of perturbation, mathematically expressed as a derivative of a flow rate with respect to the capacity of a link, ∂r_f/∂c_l.
We complete the description of the network model by introducing the concept of fair share:
Definition 3 Fair share of a link. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network. The fair share s_l of a link l ∈ L is defined as the rate of the flows that are bottlenecked at that link.
The flows bottlenecked at a link may all have the same rate, which may be the same as the fair share of the link. As used throughout the discussion below, the concept of link fair share is dual to the concept of flow rate. That is, all the mathematical properties that are applicable to the rate of a flow are also applicable to the fair share of a link.
2.2 The Gradient Graph
Our objective is to derive a mathematical framework capable of quantifying the effects that perturbations on links and flows exert on each other. Because the bottleneck structure described in U.S. patent application Ser. No. 16/580,718 considers only the effects between bottleneck links, we need a generalization of such structure that can also describe the effects of perturbations on flows. We refer to this data structure as the gradient graph, formally defined as follows (the name of this graph derives from the fact that perturbations can mathematically be expressed as derivatives or, more generically, as gradients):
Definition 4A Gradient graph. The gradient graph is a digraph such that:
We may also employ a variation of Definition 4A:
Definition 4B Gradient graph. The gradient graph is a digraph such that:
By way of notation, in the discussion below we will use the terms gradient graph and bottleneck structure interchangeably. Intuitively, a gradient graph describes how perturbations on links and flows propagate through a network as follows. A directed edge from a link l to a flow f indicates that flow f is bottlenecked at link l (Condition 2(a) in Definitions 4A and 4B). A directed edge from a flow f to a link l indicates that flow f traverses but is not bottlenecked at link l (Condition 2(b) in Definition 4A), and a bidirectional edge between a flow f and a link l indicates that flow f traverses (and is bottlenecked at) link l (Condition 2(b) in Definition 4B).
From Definition 2, this necessarily implies that a perturbation in the capacity of link l will cause a change in the transmission rate of flow f, ∂r_f/∂c_l ≠ 0.
The relevancy of the gradient graph as a data structure to help understand network performance is captured in the following lemma, which mathematically describes how perturbations propagate through a network.
Lemma 1 Propagation of network perturbations.
Proof. See Section 7.2.
Leveraging Lemma 1, we are now in a position to formally define the regions of influence of a data network.
Definition 5 Regions of influence in a data network. We define the region of influence of a link l, denoted as R(l), as the set of links and flows that are affected by a perturbation in the capacity c_l of link l, according to Lemma 1. Similarly, we define the region of influence of a flow f, denoted as R(f), as the set of links and flows that are affected by a perturbation in the transmission rate r_f of flow f, according to Lemma 1.
From Lemma 1, we know that the region of influence of a link (or a flow) corresponds to its descendants in the gradient graph. Such regions are relevant to the problem of network performance analysis and optimization because they describe what parts of a network are affected by perturbations on the performance of a link (or a flow). In Section 2.3, it is discussed how such influences can be quantified using the concept of link and flow gradient.
We can now introduce the GradientGraph (Algorithm 1A,
Lemma 2A states the time complexity of the GradientGraph algorithm:
Lemma 2A Time complexity of the GradientGraph algorithm. The time complexity of running GradientGraph( ) is O(H·|L|² + |L|·|F|), where H is the maximum number of links traversed by any flow.
Proof. See Section 7.4.1
Once this link is selected, all unresolved flows remaining in the network that traverse it are resolved. That is, their rates are set to the fair share of the link (line 12) and they are added to the set of vertices of the gradient graph V (line 13). In addition, directed edges are added in the gradient graph between the link and all the flows bottlenecked at it (line 10) and from each of these flows to the other links that they traverse (line 15). Lines 16-17-18 update the available capacity of the link, its fair share, and the position of the link in the min-heap according to the new fair share. Finally, the link itself is also added as a vertex in the gradient graph (line 22). This iterative process may be repeated until all flows have been added as vertices in the gradient graph (line 7). The algorithm returns the gradient graph G, the fair share of each link {s_l, ∀l ∈ L} and the rate of each flow {r_f, ∀f ∈ F}.
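The following Python sketch mirrors the water-filling construction just described, reusing the illustrative Network class above. It is a simplified rendering of Algorithms 1A/1B, not a literal transcription of the figures: for example, the min-heap is updated lazily here rather than by repositioning entries, and ties between equal fair shares are resolved arbitrarily.

```python
import heapq

def gradient_graph(net):
    """Sketch of GradientGraph(): returns (edge list, fair shares s_l, rates r_f)."""
    remaining = dict(net.capacity)                        # unallocated capacity per link
    unresolved = {l: set(net.flows_on(l)) for l in net.capacity}
    edges = []                                            # directed edges of the gradient graph
    fair_share, rate = {}, {}

    def share(l):                                         # current fair-share estimate of link l
        return remaining[l] / len(unresolved[l]) if unresolved[l] else float("inf")

    heap = [(share(l), l) for l in net.capacity]
    heapq.heapify(heap)
    while heap and len(rate) < len(net.flow_links):
        s, l = heapq.heappop(heap)
        if l in fair_share or s != share(l):              # already-resolved link, or stale entry
            continue
        fair_share[l] = s
        for f in list(unresolved[l]):                     # flows bottlenecked at l
            rate[f] = s
            edges.append((l, f))                          # link -> bottlenecked flow
            for l2 in net.flow_links[f]:
                if l2 == l or l2 in fair_share:
                    continue
                edges.append((f, l2))                     # flow -> other traversed link
                unresolved[l2].discard(f)
                remaining[l2] -= s
                heapq.heappush(heap, (share(l2), l2))     # lazy heap update
        unresolved[l].clear()
    return edges, fair_share, rate
```

For the two-link example Network(capacity={'l1': 10, 'l2': 10}, flow_links={'f1': ['l1'], 'f2': ['l1', 'l2']}), this sketch resolves both flows at l1 with a fair share of 5 and adds the edges l1→f1, l1→f2 and f2→l2.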
Lemma 2B provides the run-time complexity of this embodiment of the GradientGraph( ) algorithm:
Lemma 2B. Time complexity of GradientGraph( ). The time complexity of running GradientGraph( ) is O(|L|log|L|·H), where H is the maximum number of flows that traverse a single link.
Proof. See Section 7.4.2.
The GradientGraph is memory efficient, as well. In particular, various embodiments of the GradientGraph include a respective vertex for each link and a respective vertex for each flow. As such, the number of vertices in a GradientGraph is O(|L|+|F|). The edges in the graph from a link vertex to one or more flow vertices do not include, however, an edge to each and every flow vertex where that flow vertex represents a flow traversing the link corresponding to the link vertex. Rather, edges exist from a link vertex to a flow vertex only if, as described above, a flow corresponding to that flow vertex is bottlenecked at the link corresponding to the link vertex. This minimizes the total number of edges in various embodiments and implementations of GradientGraph.
Since the memory required to construct a GradientGraph is a function of (e.g., proportional to) the total number of vertices and the total number of edges, the identification of the bottleneck structure facilitates efficient memory allocation in various embodiments. Specifically, in some cases, the memory to be allocated can be a function of the total number of link-vertex-to-flow-vertex edges, denoted |E^b_{l→f}|, where |E^b_{l→f}| is the sum of the number of bottlenecked flows at each link. The required memory may be proportional to O(|L| + |F| + |E|), where the set {E} includes the set of edges from flow vertices to link vertices, denoted {E_{f→l}}, and the set of edges from link vertices to flow vertices corresponding to bottlenecked flows, denoted {E_{l→f}}. In some cases, the total number of flows bottlenecked at a link l is less than the total number of flows traversing the link l, minimizing the number of edges |E_{l→f}|.
Since, for one or more links, not all flows traversing such links may be bottlenecked at those respective links, the total number of link-to-flow edges (or the total number of bidirectional link-to-flow edges) that are required may be minimized compared to a network graph structure having, for each link, an edge from a corresponding link vertex to vertices corresponding to all flows traversing the link. This can facilitate memory-efficient storage of the gradient graph. Thus, the derivation of the bottleneck structure can minimize the memory required to store and manipulate such a structure, in various embodiments.
2.3 Link and Flow Gradients
In this section, we focus on the problem of quantifying the ripple effects created by perturbations in a network. Because networks include links and flows, generally there are two possible causes of perturbations: (1) those originating from changes in the capacity of a link and (2) those originating from changes in the rate of a flow. This leads to the concept of link and flow gradient, formalized as follows:
Definition 6 Link and flow gradients. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network. We define:
Intuitively, the gradient of a link measures the impact that a fluctuation on the capacity of a link has on other links or flows. In real networks, this corresponds to the scenario of physically upgrading a link or, in programmable networks, logically modifying the capacity of a virtual link. Thus, link gradients can generally be used to resolve network design and capacity planning problems. Similarly, the gradient of a flow measures the impact that a fluctuation on its rate has on a link or another flow. For instance, this scenario corresponds to the case of traffic shaping a flow to alter its transmission rate or changing the route of a flow, which can be seen as dropping the rate of that flow down to zero and adding a new flow on a different path. Thus, flow gradients can generally be used to resolve traffic engineering problems. (In Section 3, applications in real networks that illustrate each of these scenarios are provided.)
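Because the formal statement of Definition 6 is not reproduced here, the following restates the assumed form of the two gradients in the notation of this disclosure, consistent with the perturbation view of Definition 2 and with Lemma 3 (an interpretation, not a quotation of the definition):

```latex
\nabla_{l}(y) = \frac{\partial y}{\partial c_{l}}  % link gradient of link l with respect to an element y
\nabla_{f}(y) = \frac{\partial y}{\partial r_{f}}  % flow gradient of flow f with respect to an element y
```

Here y may be either the fair share of another link or the rate of another flow.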
Before describing how link and flow gradients can be efficiently computed using the gradient graph, we introduce the concept of flow drift:
Definition 7 Drift. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network and assume ⟨G, {s_l, ∀l ∈ L}, {r_f, ∀f ∈ F}⟩ is the output of GradientGraph(N) (Algorithms 1A or 1B). Let δ be an infinitesimally small perturbation performed on the capacity of a link l* ∈ L (equivalently, on the rate of a flow f* ∈ F). Let also s_l + Δ_l and r_f + Δ_f be the fair share of any link l ∈ L and the rate of any flow f ∈ F, respectively, after the perturbation δ has propagated through the network. We will call Δ_l and Δ_f the drift of a link l and a flow f, respectively, associated with perturbation δ.
Intuitively, the drift corresponds to the change of performance experienced by a link or a flow when another link or flow is perturbed. Using this concept, the following lemma describes how the gradient graph structure introduced in Definition 4 encodes the necessary information to efficiently calculate link and flow gradients in a network:
Lemma 3 Gradient graph invariants. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network and let G be its gradient graph. Let δ be an infinitesimally small perturbation performed on the capacity of a link l* ∈ L (equivalently, on the rate of a flow f* ∈ F) and let Δ_l and Δ_f be the drifts caused on a link l ∈ L and a flow f ∈ F, respectively, by such a perturbation. Assume also that the perturbation propagates according to the gradient graph by starting on the link vertex l* (equivalently, on the flow vertex f*) and following all possible directed paths that depart from it, while maintaining the following invariants at each traversed vertex:
Invariant 1: Link equation.
Invariant 2: Flow equation. Δ_f = min{Δ_l, ∀l such that (l, f) is an edge of G}; that is, the drift of a flow equals the smallest drift among the links at which it is bottlenecked.
Let also G′ be the gradient graph of the resulting network after the perturbation has propagated. Then, if G = G′, the link and flow gradients can be computed as follows:
Proof. See Section 7.3.
The previous lemma states that if the gradient graph does not change its structure upon a small perturbation (i.e., G = G′) and the two invariants are preserved, then such a perturbation can be measured directly from the graph. The first invariant ensures that (1) the sum of the drifts arriving to and departing from a link vertex is equal to zero and (2) the drifts departing from a link vertex are equally distributed. Intuitively, this is needed to preserve the congestion control algorithm's objective to maximize network utilization while ensuring fairness among all flows. The second invariant is a capacity feasibility constraint, ensuring that a flow's drift is limited by its most constrained bottleneck.
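A Python sketch of this propagation rule follows, using the edge list produced by the gradient_graph sketch above. It encodes one consistent reading of the two invariants as just described (a link negates and equally splits the sum of the drifts arriving at it among the flows bottlenecked at it, and a flow takes the minimum drift offered by its bottleneck links and forwards it unchanged); the exact link and flow equations of Lemma 3, and the handling of ties or structural changes, are not reproduced here.

```python
from collections import defaultdict
from graphlib import TopologicalSorter

def propagate(edges, links, perturbed_link, delta):
    """Sketch: propagate an infinitesimal capacity perturbation `delta`, applied at
    `perturbed_link`, through the gradient graph and return the drift of every
    affected vertex. `links` is the set of link vertices (all others are flows)."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)

    arriving = defaultdict(list)       # drifts arriving at each vertex
    drift = {}
    vertices = set(succ) | set(pred)
    for v in TopologicalSorter({n: pred[n] for n in vertices}).static_order():
        if v == perturbed_link:
            # a capacity change delta raises the link's fair share by delta / outdegree
            drift[v] = delta / max(len(succ[v]), 1)
        elif not arriving[v]:
            continue                   # vertex lies outside the region of influence
        elif v in links:
            # Invariant 1: negate and split the arriving drifts equally (sum to zero)
            drift[v] = -sum(arriving[v]) / max(len(succ[v]), 1)
        else:
            # Invariant 2: a flow is limited by its most constrained bottleneck
            drift[v] = min(arriving[v])
        for w in succ[v]:
            arriving[w].append(drift[v])
    return drift
```

With these drifts, a gradient such as ∇_{l*}(y) can then be estimated as drift[y]/delta, in the spirit of Lemma 3 and Definition 8.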
It should be noted that it is feasible for a link or flow gradient to have a value larger than 1. Such gradients are of interest because they mean that an initial perturbation of one unit at some location of a network generates a perturbation at another location of more than one unit. For instance, a gradient of the form ∇_{f*}(f) > 1 implies that reducing the rate of flow f* by one unit creates a perturbation that results in an increase on the rate of flow f by more than one unit, thus creating a multiplicative effect. Such gradients can be used to identify arbitrage situations—e.g., configurations of the network that increase the total flow of a network. Because of their relevance, we will use the term power gradient to refer to such effect:
Definition 8 Power gradient. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network and let δ be an infinitesimally small perturbation performed on a flow or link x ∈ L ∪ F, producing a drift Δ_y, for all y ∈ L ∪ F. If Δ_y > δ, equivalently ∇_x(y) > 1, then we will say that ∇_x(y) is a power gradient. In Section 3, we provide examples of power gradients. For now, we conclude this section stating a property of boundedness that all gradients in congestion-controlled networks satisfy:
Property 1 Gradient bound. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network and let G be its gradient graph. Let δ be an infinitesimally small perturbation performed on a flow or link x ∈ L ∪ F, producing a drift Δ_y, for all y ∈ L ∪ F. Then,
Proof. See Section 7.5.
2.4 Leaps and Folds
The concepts of link and flow gradients introduced in the previous section provide a methodology to measure the effect of perturbations on a network that are small enough (infinitesimally small) to avoid a structural change in the gradient graph (see Lemma 3). In this section, we introduce the concepts of leap and fold, which allow us to generalize the framework to measure perturbations of arbitrary sizes. Two simple and intuitive examples of such kinds of perturbations found in real networks include: a link failure, which corresponds to the case in which its capacity goes down to zero; or the re-routing of a flow, which corresponds to the case in which its rate goes down to zero and a new flow is initiated.
From Lemma 3, we know that if a perturbation in the network is significant enough to modify the structure of the gradient graph (i.e., G ≠ G′), then the link and flow equations (
Definition 9 Gradient leap. Let ∇_x(y) be a gradient resulting from an infinitesimally small perturbation δ on a link or flow x, where x, y ∈ L ∪ F. Suppose that we intensify such a perturbation by a factor k, resulting in an actual perturbation of λ = k·δ, for some k > 0. Further, assume that k is the largest possible value that keeps the structure of the gradient graph invariant upon perturbation λ. Then, we will say that λ is the leap of gradient ∇_x(y).
The following lemma shows the existence of folds in the bottleneck structure when its corresponding network is reconfigured according to the direction indicated by a gradient and by an amount equal to its leap:
Lemma 4 Folding links. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network and let G be its gradient graph. Let λ be the leap of a gradient ∇_x(y), for some x, y ∈ L ∪ F. Then, there exist at least two links l and l′ such that: (1) for some f ∈ F, there is a directed path in G of the form l→f→l′; and (2) s_l = s_l′ after the perturbation has propagated through the network.
Proof. See Section 7.6.
Intuitively, the above lemma states that when a perturbation is large enough to change the structure of the gradient graph, such structural change involves two links l and l′ directly connected via a flow f (i.e., forming a path l→f→l′) that have their fair shares collapse on each other (s_l = s_l′) after the perturbation has propagated. The fair shares can be substantially or approximately equal (e.g., the difference between the fair shares can be zero or less than a specified threshold, e.g., 10%, 5%, 2%, 1%, or even less of the fair share of one of the links). Graphically, this corresponds to the folding of two consecutive levels in the bottleneck structure. We can now formalize the definition of fold as follows.
Definition 10 Fold of a gradient. Let λ be the leap of a gradient ∇_x(y), for some x, y ∈ L ∪ F, and let l and l′ be two links that fold once the perturbation λ has propagated through the network (note that from Lemma 4, such links must exist). We will refer to the tuple (l, l′) as a fold of gradient ∇_x(y).
Algorithm 2 shown in
The concept of leap and fold is relevant in that it enables a methodology to efficiently travel along the solution space defined by the bottleneck structure towards reaching a certain performance objective. Specifically, for some x, y ∈ L ∪ F, if x is perturbed negatively so as to benefit another flow or link in the network, but only up to the leap of x, i.e., λ, the negative and positive changes may be balanced. On the other hand, if x is perturbed negatively by more than its leap λ, the positive impact of this perturbation on another flow or link would not exceed λ, potentially resulting in degradation of the overall network performance.
We introduce a method in Algorithm 3, MinimizeFCT( ), shown in
From Lemma 4, we know that the additional traffic shaper changes the structure of the gradient graph, at which point we need to iterate the procedure again (line 1) to recompute the new values of the gradients based on the new structure. This process is repeated iteratively until either no more positive gradients are found or the performance of f_s has increased above a given rate target ρ (lines 3 and 4). In the next section, an example is presented demonstrating how embodiments of MinimizeFCT( ) may be used to optimize the performance of a time-bound constrained flow.
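A high-level Python sketch of this loop is shown below. The helper routines flow_gradients(), leap_fold(), and add_traffic_shaper() are hypothetical placeholders standing in for the computations of Algorithm 2 and for the act of installing a traffic shaper; their names and signatures are assumptions of this sketch, not the disclosed algorithms themselves.

```python
def minimize_fct(net, f_s, low_priority, rate_target):
    """Sketch of the MinimizeFCT() iteration: repeatedly throttle the low-priority
    flow whose (positive) gradient most benefits f_s, by an amount equal to the
    leap of that gradient, until f_s reaches its target rate or no positive
    gradient remains."""
    shapers = {}                                            # flow -> shaped rate
    while True:
        graph, _shares, rates = gradient_graph(net)         # rebuild the bottleneck structure
        if rates[f_s] >= rate_target:
            break
        grads = flow_gradients(net, graph, f_s)             # hypothetical: {f: gradient of f_s w.r.t. f}
        candidates = {f: g for f, g in grads.items() if f in low_priority and g > 0}
        if not candidates:
            break                                           # no remaining flow whose reduction helps f_s
        target = max(candidates, key=candidates.get)
        leap, _fold = leap_fold(net, graph, target)         # hypothetical stand-in for Algorithm 2
        shaped_rate = rates[target] - leap                  # throttle the target flow by its leap
        shapers[target] = shaped_rate
        net = add_traffic_shaper(net, target, shaped_rate)  # hypothetical: model the shaper as a new link
    return shapers
```

In the worked example of the next section, such a loop corresponds to successively adding the traffic shapers on flows f4, f3 and f8, recomputing the bottleneck structure after each shaper is installed.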
Because the existence of bottleneck structures is a fundamental property intrinsic to any congestion-controlled data network, its applications are numerous in a variety of network communication problems. In this section, our goal is to present some examples illustrating how the proposed Theory of Bottleneck Structures (TBS) introduced in the previous section can be used to resolve some of these problems. We show that in each of them, the framework is able to provide new insights into one or more operational aspects of a network. The examples presented in this section are not exhaustive, but only illustrative. To help organize the breadth of applications, we divide them into two main classes: traffic engineering and capacity planning. For each of these classes, we provide specific examples of problems that relate to applications commonly found in modern networks.
3.1 Traffic Engineering
3.1.1 Scheduling Time-Bound Constrained Flows
Suppose that our goal is to accelerate a flow f_s ∈ F in a network with the objective that such flow is completed before a certain time-bound requirement. A common application for the optimization of time-bound constrained flows can be found in research and education networks, where users need to globally share data obtained from their experiments, often involving terabytes or more of information—e.g., when scientists at the European Organization for Nuclear Research (CERN) need to share data with other scientific sites around the world using the LHCONE network. Another common use case can be found in large scale data centers, where massive data backups need to be transferred between sites to ensure redundancy. In this context, suppose the operators are only allowed to sacrifice the performance of a subset of flows F′ ⊂ F∖{f_s}, considered of lower priority than f_s. What flows in F′ present an optimal choice to accelerate f_s? By what amount should the rate of such flows be reduced? And by what amount will flow f_s be accelerated?
To illustrate that we can use TBS to resolve this class of problems, consider the network shown in
To identify an optimal strategy for accelerating an arbitrary flow in a network, we use an implementation of the MinimizeFCT( ) procedure (Algorithm 3,
In line 7, we invoke LeapFold(N, G, f4) (Algorithm 2,
The second iteration, thus, starts with the original network augmented with a traffic shaper l7 that forces the rate of flow f4 to be throttled at 1.875. Using its bottleneck structure (
In
In summary, a strategy to maximally accelerate the performance of flow f7 consists in traffic shaping the rates of flows f3, f4 and f8 down to 1.25, 1.875 and 5.625, respectively. Such a configuration results in an increase to the rate of flow f7 from 10.25 to 16.875, while ensuring no flow performs at a rate lower than the slowest flow in the initial network configuration.
3.1.2 Identification of High-Bandwidth Routes
In this section, we show how TBS can also be used to identify high-bandwidth routes in a network. We will consider one more time the B4 network topology, but assume there are two flows (one for each direction) connecting every data center in the US with every data center in Europe, with all flows following a shortest path. Since there are six data centers in the US and four in Europe, this configuration has a total of 48 flows (|F| = 6×4×2 = 48), as shown in
Note also that all the top-level flows operate at a lower transmission rate (with all rates at 1.667) than the bottom-level flows (with rates between 2.143 and 3). This in general is a property of all bottleneck structures: flows operating at lower levels of the bottleneck structure have higher transmission rates than those operating at levels above. Under this configuration, suppose that we need to initiate a new flow f25 to transfer a large data set from data center 4 to data center 11. Our objective in this exercise is to identify a high-throughput route to minimize the time required to transfer the data.
Because the bottleneck structure reveals the expected transmission rate of a flow based on the path it traverses, we can also use TBS to resolve this problem. In FIG. 8B we show the bottleneck structure obtained for the case that f25 uses the shortest path l15→l10. Such configuration places the new flow at the upper bottleneck level—the lower-throughput level—in the bottleneck structure, obtaining a theoretical rate of r25=1.429.
Note that the presence of this new flow slightly modifies the performance of some of the flows on the first level (flows {f1,f3,f4,f5,f7,f8} experience a rate reduction from 1.667 to 1.429), but it does not modify the performance of the flows operating at the bottom level. This is because, for the given configuration, the new flow only creates a shift in the distribution of bandwidth on the top level, but the total amount of bandwidth used in this level stays constant. (In FIG. ??, the sum of all the flow rates on the top bottleneck level is 1.667×12=20, and in FIG. ?? this value is the same: 1.429×7+1.667×6=20.) As a result, the ripple effects produced from adding flow f25 into the network cancel each other out without propagating to the bottom level.
Assume now that, instead, we place the newly added flow on the non-shortest path l16→l8→l19. The resulting bottleneck structure is shown in
In conclusion, for the given example, the non-shortest path solution achieves both a higher throughput for the newly placed flow and better fairness in the sense that such allocation—unlike the shortest path configuration—does not deteriorate the performance of the most poorly treated flows.
3.2.1 Design of Fat-Tree Networks in Data Centers
In this experiment, we illustrate how TBS can be used to optimize the design of fat-tree network topologies. Fat-trees are generally understood to be universally efficient networks in that, for a given network size s, they can emulate any other network that can be laid out in that size s with a slowdown at most logarithmic in s. This property is one of the underlying mathematical principles that make fat-trees (also known as folded-clos or spine-and-leaf networks) highly competitive and one of the most widely used topologies in large-scale data centers and high-performance computing (HPC) networks.
Consider the network topology in
We fix the capacity of the leaf links to a value λ (i.e., cl
The focus of our experiment is to use the bottleneck structure analysis to identify optimized choices for the tapering parameter τ. In
The first bottleneck structure (
By looking at the bottleneck structure in
On the other hand, the link gradient of any of the spine links with respect to any of the low-level flows is ∇_l(f) = −0.25, for all l ∈ {l5, l6} and f ∈ {f1, f4, f9, f12}. That is, an increase by one unit on the capacity of the spine links increases the rate of the top-level flows by 0.125 and decreases the rate of the low-level flows by 0.25. Since the rates of the top and low-level flows are 2.5 and 5, respectively, this means that the two levels will fold at a point where the tapering parameter satisfies the equation 2.5 + 0.125·τ·λ = 5 − 0.25·τ·λ, resulting in τ·λ = 20/3, i.e., τ = 4/3 for the leaf-link capacity used in this example.
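As a plain-arithmetic check of this fold point (the leaf-link capacity λ = 5 used below is an assumption of this sketch, chosen to be consistent with the τ = 4/3 value discussed next, since the figure defining λ is not reproduced here):

```python
# Solve 2.5 + 0.125*tau*lam == 5 - 0.25*tau*lam for the fold point.
lam = 5.0                              # assumed leaf-link capacity in this example
tau_times_lam = 2.5 / (0.125 + 0.25)   # tau * lam = 20/3, approximately 6.667
tau = tau_times_lam / lam
print(tau)                             # 1.333..., i.e., tau = 4/3
```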
Note that this value corresponds exactly to the leap of the spine links gradient, and thus can also be programmatically obtained using Algorithm 2 (
What is the effect of increasing the tapering parameter above 4/3? This result is shown in
In summary, for the fat-tree network shown in
Note that this result might be counter-intuitive if we take some of the established conventional best practices. For instance, while a full fat-tree (τ = 2, in our example) is generally considered to be efficient, the analysis of its bottleneck structure, as presented above, demonstrates that such a design is inefficient when flows are regulated by a congestion-control protocol, as is the case in many data centers and HPC networks. See Section 4.3, where we experimentally demonstrate this result using TCP congestion control algorithms. It should be understood that the value of τ, in general, will depend on the network topology and would not always be 4/3, but that given a network topology, an optimized value of τ can be determined using the gradient graph and the leap-fold computation, as described above.
We have implemented various embodiments of the algorithms and processes described herein in a tool that provides a powerful, flexible interface to emulate networks of choice with customizable topology, routing, and traffic flow configurations. It uses Mininet and the POX SDN controller to create such highly customizable networks. It also uses iPerf internally to generate network traffic and offers an interface to configure various flow parameters such as the source and destination hosts, start time, and data size, among others. This tool also offers an integration with the sFlow-RT agent that enables real-time access to traffic flows from the Mininet emulated network. Since Mininet uses the real, production-grade TCP/IP stack from the Linux kernel, it can be an ideal testbed to run experiments using congestion control protocols such as BBR and Cubic to study bottleneck structures and flow performance in a realistic way. Apart from its flexible configuration interface, our tool also offers a set of useful utilities to compute and plot various performance metrics such as instantaneous network throughput, flow convergence time, flow completion time, or Jain's fairness index, among others, for a given experiment.
We used our tool to experimentally verify and demonstrate that the framework described above can be used to address various practical network operational issues outlined in Section 3. We ran several experiments by varying the network topology, traffic flow configuration, routing scheme, and congestion control protocols.
Results shown in this section are based on experiments run using the BBR (bottleneck bandwidth and round-trip propagation time) congestion control algorithm and on similar experiments run using Cubic. For each experiment, we used Jain's fairness index as an estimator to measure how closely the bottleneck structure model matches the experimental results. For all BBR experiments presented in the next sections, this index was above 0.99 on a scale from 0 to 1 (see Section 4.4), reflecting the strength of our framework in modeling network behavior.
4.1 Time-Bound Constrained Data Transfers
The objective of this experiment is to empirically demonstrate the results obtained in Section 3.1.1, reproducing the three steps required in that exercise to identify the optimal set of traffic shapers to accelerate flow f7 as shown in
Table 2A shows the transmission rate obtained for each of the flows and for each of the three experiments. Next to each experimental rate, this table also includes the theoretical value according to the bottleneck structure.
Table 2B and
4.2 Identification of High-Throughput Routes
In this set of experiments, we empirically demonstrate the correctness of the high-throughput path identified from the bottleneck structure analysis in Section 3.1.2. We start by creating the B4 network configuration shown in
As shown, flow f25 achieves a performance of 1.226 and 2.386 Mbps for the shortest and longer paths, respectively—with the theoretical rates being 1.428 and 2.5 Mbps, respectively. Thus the longer path yields a 94% improvement on flow throughput with respect to the shortest path.
4.3 Bandwidth Tapering on Fat-Tree Networks
The objective of this experiment is to empirically demonstrate the results obtained in Section 3.2.1, reproducing the steps to identify an optimal tapering parameter τ in the binary fat-tree configuration introduced in
As predicted by TBS, the case τ = 1 has flows operating at one of two bottleneck levels, close to the rates predicted by the bottleneck structure (2.5 Mbps for the upper-level flows and 5 Mbps for the lower-level flows, see
If we want to maximize the rate of the slowest flow, TBS tells us that the right tapering parameter value is 4/3. This case is presented in
(the slowest flow completes in 178 seconds), since in this configuration the leaf links become the bottlenecks and the extra bandwidth added in the spine links does not produce any net benefit, as shown by the bottleneck structure in
provides an optimized design in that it is the least costly network that minimizes the completion time of the slowest flow.
Table 4B and
4.4 Jain's Fairness Index Results
Jain's index is a metric that rates the fairness of a set of values x_1, x_2, . . . , x_n according to the following equation: J(x_1, . . . , x_n) = (Σ_{i=1}^{n} x_i)² / (n · Σ_{i=1}^{n} x_i²).
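A direct transcription of this metric (a minimal sketch):

```python
def jains_index(values):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2); ranges from 1/n to 1."""
    n = len(values)
    if n == 0:
        return 0.0
    total = sum(values)
    return (total * total) / (n * sum(x * x for x in values))

# A perfectly fair allocation scores 1.0:
print(jains_index([2.5, 2.5, 2.5, 2.5]))   # -> 1.0
```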
4.5 Notes on Using the Gradient Graph Framework in Real-Life Networks
In this section we provide notes on using the proposed gradient graph framework in real-life networks (also called production networks). To construct the gradient graph of a network, only the information about a network N = ⟨L, F, {c_l, ∀l ∈ L}⟩ is needed. The set of flows can be obtained from traditional network monitoring tools such as NetFlow or sFlow, though the use of the identified tools is illustrative, not required. Other tools, available now or that may become available subsequently, may be used. For each flow, the GradientGraph procedure (Algorithms 1A (
4.6 Capacity Planning Using NetFlow Logs
In this section, we demonstrate how link gradients (outlined in Section 2.3) can be used by network operators in practice for baselining networks and for identifying performance bottlenecks in a network for capacity planning purposes. We demonstrate this by integrating an embodiment of the framework described herein, where that embodiment is implemented in the tool we developed, with NetFlow logs obtained from the Energy Sciences Network (ESnet). ESnet is the U.S. Department of Energy's (DoE) large scale, high performance network that connects all the US national laboratories and supercomputing centers. It is designed to enable high-speed data transfers and collaboration between scientists across the country. It should be understood that the use of NetFlow and ESnet is illustrative only, and that any tool or technique that can provide the relevant network data, available now or developed subsequently, may be used. Likewise, the techniques described herein are generally applicable to any network and are not limited to ESnet.
Our tool includes plugins that enable integration with standard network monitoring tools such as NetFlow, sFlow, etc., and a Graphical User Interface (GUI) to visualize the gradient graph of practical networking environments. The observations drawn in this section are based on analysis of a week's worth of anonymized NetFlow logs from real traffic flows, and topology information from ESnet as shown in
At a high level, the procedure we use to analyze NetFlow logs includes the following four steps. Step (i) Process topology information: In this step, we read the topology information provided and build a directed graph with the routers as nodes of the graph and the various links (based on BGP, IS-IS, L2 VLAN links) that connect them as directed edges between the nodes in the graph. We also save the link capacities read from the topology information. This is later used in resolving the flow path.
Step (ii) Extract TCP flows: This step consists of identifying TCP flows by deduplicating them using their source, destination IP addresses, port numbers and the flow start time. Using this info, we build a flow cache to track all the active flows in the network during the analysis window.
Step (iii) Build flow path: For each of the flows in flow cache, based on the router where the NetFlow sample was seen and the next hop router from NetFlow log, we build the flow path by correlating it with the topology information we processed earlier in step (i). If a given flow was sampled and logged by multiple routers, we pick a route that includes all such intermediate segments. By the end of this step, we have all the info we need to build a gradient graph.
Step (iv) Compute link and flow gradients: Using the flow and link information extracted in the earlier steps, we compute the link and flow gradients for the network using our tool. A simplified sketch of this pipeline is shown below.
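The four steps above can be sketched as follows (illustrative Python only; the field names such as router, next_hop, src_ip, and capacity_bps are assumptions of this sketch, since real NetFlow exports and ESnet topology files use their own schemas, and the path-stitching of step (iii) is deliberately simplified):

```python
from collections import defaultdict

def build_flows_and_paths(topology_rows, netflow_rows):
    """Sketch of steps (i)-(iv): build the router graph, deduplicate TCP flows,
    and assemble per-flow paths so that a gradient graph can be computed."""
    # (i) process topology: routers as nodes, links as directed edges with capacities
    capacity, adjacency = {}, defaultdict(set)
    for row in topology_rows:
        link = (row["router"], row["next_hop"])
        capacity[link] = float(row["capacity_bps"])
        adjacency[row["router"]].add(row["next_hop"])

    # (ii) extract TCP flows, deduplicated by addresses, ports, and start time
    flow_cache = defaultdict(list)
    for row in netflow_rows:
        key = (row["src_ip"], row["dst_ip"], row["src_port"],
               row["dst_port"], row["start_time"])
        flow_cache[key].append((row["router"], row["next_hop"]))

    # (iii) build each flow's path from the (router, next_hop) segments that sampled it;
    # a fuller implementation would stitch the segments against the topology graph
    flow_links = {key: [hop for hop in hops if hop in capacity]
                  for key, hops in flow_cache.items()}

    # (iv) (capacity, flow_links) is exactly the input the gradient-graph sketch
    # above needs in order to compute link and flow gradients
    return capacity, flow_links
```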
Baseline models can be developed using link gradients and bottleneck levels. For instance,
Bottleneck structures are recently discovered graphs that describe the relationships that exist among bottleneck links and the influences they exert on each other in congestion-controlled networks. While existing work has studied these structures from a qualitative standpoint, in this disclosure we provide a quantitative theory of bottleneck structures that allows us to quantify the effects of perturbations (both unintentional and intentional) as they travel through a network. The analytical strength of a bottleneck structure stems from its ability to capture the solution-space produced by a congestion-control algorithm. This is achieved, at least in part, by combining (1) a graph structure that qualitatively defines how perturbations propagate in the network and (2) the mathematical relationships that quantify the extent to which such perturbations affect the performance of its links and flows.
We show that these perturbations can be expressed in terms of link and flow gradients. Based on this concept, we present a new family of polynomial-time and memory efficient algorithms/processes that allow us to travel within the solution-space of the applicable congestion control technique towards optimizing network performance. The outcome of various techniques and embodiments described herein includes optimized network configurations that can be practically engineered in any network, such as data networks, transportation networks, energy networks, etc., using techniques such as traffic shaping, traffic re-routing, link upgrades, or topology reconfiguration, among others. While we demonstrate the validity of the quantitative theory of bottleneck structures using Mininet, we have also prototyped a tool that allows validation of TBS in real-world networks.
The overall network analysis and/or manipulation or control process is depicted in
In case of data networks, the nodes may be data centers and/or computing centers, the links include data links, whether cable, wireless, or satellite based, the flow rates may include number of bits, bytes, packets, etc., passing through the links, and link capacities may be expressed in terms of available or allotted bandwidth or bit rate. In case of transportation networks, the nodes can be cities, locations within cities or a metropolitan area, airports, marine ports, etc., the links can be roadways, railways, subway routes, airline routes, marine routes, etc., the flow rates and link capacities can be expressed in terms of the number of passengers or travelers, the number of vehicles, etc.
In case of energy networks, the nodes can be energy generators such as power plants and consumers, such as towns, cities, industrial complexes, shopping centers, etc. The links include energy delivery systems including high-voltage transmission lines, substations, local energy distribution lines, etc. The flow rates and link capacity can be expressed in terms of peak energy demand, average energy demand, etc.
In case of fluidic or biological networks, the nodes can be sources and consumers of material, such as oil, gas, nutrients, blood, etc., and the link capacity can be the sizes of conduits or vessels carrying the fluids or biological materials, the pressure in such conduits or vessels, etc. In some cases, the capacity and/or rate of flow in one or more conduits/vessels can be adjusted by shutting off or pruning other conduits/vessels. The flow rate optimization and/or capacity planning can thus be used to manage or control irrigation systems, fertilizer delivery system, plant/crop disease control systems, etc.
After collecting the required information, the bottleneck structure of the network is generated and, thereafter, the GradientGraph that includes various flow and link gradients is generated using embodiments of Algorithms 1A or 1B (
The effect of this perturbation can be observed on the flow(s) and/or link(s) of interest, and the process may be repeated a specified number of times, until a desired effect (e.g., increase in the rate of a flow of interest) is attained, or a maximum feasible change can be attained. Such iterations may be performed under constraints, such as not permitting the flow rate of any flow below the current minimum or a specified lower-bound rate, maintaining the relative order of the flow rates, allotting at least a specified lower-bound capacity to each link, etc.
7.1 Generalization to max-min fairness
Lemma 5 If a link is a bottleneck in the max-min sense, then it is also a bottleneck according to Definition 2, but not vice-versa.
Proof. It is generally known that if a flow f is bottlenecked at link l in the max-min sense, then such a flow must traverse link l and its rate is equal to the link's fair share, rf=sl. Since a change in the capacity of a link always leads to a change in its fair share, i.e
Proof. Let N = ⟨L, F, {c_l, ∀l ∈ L}⟩ be a network and assume G is its gradient graph. Consider the two statements in 1a-1b and assume link l is affected by a perturbation. From Definition 2, we have that
From Definition 4 this corresponds to all the links l* for which there exists an edge (f1, l*) in G. This process of perturbation followed by a propagation repeats indefinitely, affecting all the link and flow vertices that are descendants of link vertex l (that is, the region of influence of link l, R(l), according to Definition 5), which demonstrates the sufficient condition of 1a-1b. The necessary condition of these two statements is also true because, by construction from the definitions of bottleneck link and gradient graph, none of the links and flows outside R(l) will be affected by the perturbation. The proof of the statements in 2a-2b follows a very similar argument if we take into account that an initial perturbation of a flow f will create a perturbation in its bottleneck link. Applying 1a-1b to such link, we conclude that 2a-2b also hold.
7.3 Lemma 3 Gradient Graph Invariants
Let N=⟨L, F, {c_l, ∀l∈L}⟩ be a network and let G be its gradient graph. Let δ be an infinitesimally small perturbation performed on the capacity of a link l*∈L (equivalently, on the rate of a flow f*∈F) and let Δ_l and Δ_f be the drifts caused on a link l∈L and a flow f∈F, respectively, by such a perturbation. Assume also that the perturbation propagates according to the gradient graph by starting on the link vertex l* (equivalently, on the flow vertex f*) and following all possible directed paths that depart from it, while maintaining the following invariants at each traversed vertex:
Proof. Perturbations can be understood as one-time modifications of the configuration of a network that bring its operational point to a new optimum. For instance, a perturbation could be a link capacity change (e.g., due to a link upgrade or a change in the signal-to-noise ratio of a wireless channel) or a flow rate change (e.g., due to a change in the rate of a traffic shaper or in the route of a flow), among others.
When such changes occur, the congestion control algorithm adjusts the rates of the flows to reach another operational target point. In traditional congestion-controlled data networks, such a target includes two objectives: maximizing network utilization while ensuring fairness. Thus, the link and flow equations must take these two objectives into account to ensure that, upon a perturbation, the resulting drifts bring the network to a new operational point that preserves the level of link utilization and fairness within the solution space imposed by the congestion control algorithm.
The link equation captures the first objective: it keeps each link fully utilized by redistributing, among the flows bottlenecked at the link, the capacity freed or consumed by the drifts of the flows arriving at it. The flow equation captures fairness from the perspective of each flow, limiting a flow's drift to the smallest drift among the links at which it is bottlenecked.
The time complexity of running the GradientGraph( ) procedure of Algorithm 1A can be bounded as follows.
Proof. We start by noting that, since a link is removed from the set of unresolved links at line 10 of the GradientGraph algorithm, each of the lines inside the main while loop (line 3) cannot be executed more than |L| times. The complexity of each line inside the while loop is as follows:
The time complexity of running the GradientGraph( ) procedure of Algorithm 1B is O(H·|L|·log(|L|)), where H is the maximum number of flows that traverse a single link.
Proof. Note that each statement in the algorithm runs in constant time except for lines 5, 8, and 18. Each of these is an operation on a heap of size at most |L|, so each runs in log(|L|) time. Lines 5 and 8 each run |L| times, since the two outer loops run at most once for each link. Line 18 runs at most once for every pair of a link and a flow that traverses it. Note that this value is less than the number of edges that are added to the gradient graph in lines 10 and 15. Thus, the number of times line 18 runs is bounded by |L|·H, where H is the maximum number of flows that traverse a single link. In total, therefore, the algorithm runs in time O(H·|L|·log(|L|)).
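To make the role of the heap concrete, the following sketch shows a heap-based variant of the progressive-filling computation given earlier, in which the most constraining link is selected with Python's heapq in O(log|L|) time per extraction. It only illustrates where the log(|L|) factor in the bound comes from; it is not the disclosed Algorithm 1B, and the tolerance used for stale heap entries is an assumption for this sketch.

```python
# Heap-based progressive filling: keep (fair share, link) entries in a min-heap
# and lazily refresh stale entries, so selecting the most constraining link
# costs O(log|L|) instead of a linear scan.
import heapq


def max_min_rates_heap(capacities, routes):
    remaining = dict(capacities)
    unfixed = {f: list(r) for f, r in routes.items()}
    flows_on = {l: {f for f, r in routes.items() if l in r} for l in capacities}
    rates = {}
    heap = [(remaining[l] / len(flows_on[l]), l) for l in capacities if flows_on[l]]
    heapq.heapify(heap)
    done = set()
    while heap:
        share, l = heapq.heappop(heap)                 # O(log|L|) extraction
        if l in done or not flows_on[l]:
            continue                                   # link already resolved
        current = remaining[l] / len(flows_on[l])
        if current > share + 1e-12:                    # stale entry: re-insert with fresh key
            heapq.heappush(heap, (current, l))
            continue
        for f in list(flows_on[l]):                    # fix every flow bottlenecked here
            rates[f] = current
            for l2 in unfixed.pop(f):
                remaining[l2] -= current
                flows_on[l2].discard(f)
        done.add(l)
    return rates


# max_min_rates_heap({"l1": 10.0, "l2": 25.0},
#                    {"f1": ["l1"], "f2": ["l1", "l2"], "f3": ["l2"]})
# -> {'f1': 5.0, 'f2': 5.0, 'f3': 20.0}
```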
7.5 Property 1: Gradient Bound
Let N=⟨L, F, {c_l, ∀l∈L}⟩ be a network and let G be its gradient graph. Let δ be an infinitesimally small perturbation performed on a flow or link x∈L∪F, producing a drift Δ_y for each y∈L∪F. Then,
Proof. From the link and flow equations in Lemma 3, we first observe that the absolute value of a perturbation can only increase when traversing a link vertex. This is because the flow equation Δ_f = min{Δ_l, for all link vertices l such that (l, f) is an edge in G} takes the value of one of the incoming link drifts, so a flow vertex cannot increase the absolute value of the perturbation beyond that of its incoming drifts, whereas a link vertex aggregates the drifts of all the flow vertices arriving at it.
The size of the perturbation will in fact maximally increase when the link outdegree is 1 and the sum of the flow drifts arriving at it is maximal. This is achieved when the bottleneck structure is configured with flows having an outdegree of d and links having an indegree of d, connected by a stage of intermediate links and flows of indegree and outdegree equal to 1, as shown in the accompanying figure.
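The following toy calculation mirrors this argument: a flow vertex forwards the minimum of its incoming link drifts and therefore cannot amplify the perturbation, while a link vertex aggregates the drifts of the d flows arriving at it. The aggregation rule used below (the sum of the incoming flow drifts divided by the link's outdegree) is an assumption made only for illustration; the precise rule is the link equation of Lemma 3.

```python
# Toy amplification example for one worst-case stage of the bottleneck structure.
def flow_drift(incoming_link_drifts):
    return min(incoming_link_drifts)               # flow equation: cannot amplify


def link_drift(incoming_flow_drifts, outdegree):
    return sum(incoming_flow_drifts) / outdegree   # assumed aggregation rule (illustrative)


# d flows, each carrying the same drift delta, all arrive at a link of outdegree 1.
d, delta = 4, 1.0
flows_in = [flow_drift([delta]) for _ in range(d)]   # each flow forwards delta unchanged
print(link_drift(flows_in, outdegree=1))             # 4.0 -> the drift grows by a factor of d
```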
7.6 Lemma 4 Folding Links
Let N=⟨L, F, {c_l, ∀l∈L}⟩ be a network and let G be its gradient graph. Let λ be the leap of a gradient ∇x(y), for some x, y∈L∪F. Then, there exist at least two links l and l′ such that: (1) for some f∈F, there is a directed path in G of the form l→f→l′; and (2) s_l = s_l′ after the perturbation has propagated through the network.
Proof. Let l→f→l′ be a path in G. From the link equation
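A concrete instance of this folding behavior can be obtained by reusing the max_min_rates() helper from the progressive-filling sketch given earlier in this section: as the capacity of l1 grows, the fair shares of l1 and l2 eventually meet, at which point the bottleneck of the flow f2 that traverses both links moves from l1 to l2. The capacity values are illustrative only.

```python
# Sweep the capacity of l1 and watch the fair shares of l1 and l2 converge.
# Requires the max_min_rates() helper defined in the earlier sketch.
for c1 in (10.0, 20.0, 25.0, 30.0):
    rates, bn = max_min_rates(
        {"l1": c1, "l2": 25.0},
        {"f1": ["l1"], "f2": ["l1", "l2"], "f3": ["l2"]},
    )
    s_l1 = rates["f1"]        # fair share of l1 = rate of a flow bottlenecked there
    s_l2 = rates["f3"]        # fair share of l2
    print(c1, s_l1, s_l2, bn["f2"])
# At c1 = 25.0 the two fair shares meet (both 12.5); beyond that point the
# bottleneck of f2 moves from l1 to l2, i.e., the path l1 -> f2 -> l2 folds.
```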
It is clear that there are many ways to configure the device and/or system components, interfaces, communication links, and methods described herein. The disclosed methods, devices, and systems can be deployed on convenient processor platforms, including network servers, personal and portable computers, and/or other processing platforms. Other platforms can be contemplated as processing capabilities improve, including personal digital assistants, computerized watches, cellular phones and/or other portable devices. The disclosed methods and systems can be integrated with known network management systems and methods. The disclosed methods and systems can operate as an SNMP agent, and can be configured with the IP address of a remote machine running a conformant management platform. Therefore, the scope of the disclosed methods and systems is not limited by the examples given herein, but can include the full scope of the claims and their legal equivalents.
The methods, devices, and systems described herein are not limited to a particular hardware or software configuration, and may find applicability in many computing or processing environments. The methods, devices, and systems can be implemented in hardware or software, or a combination of hardware and software. The methods, devices, and systems can be implemented in one or more computer programs, where a computer program can be understood to include one or more processor executable instructions. The computer program(s) can execute on one or more programmable processing elements or machines, and can be stored on one or more storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), one or more input devices, and/or one or more output devices. The processing elements/machines thus can access one or more input devices to obtain input data, and can access one or more output devices to communicate output data. The input and/or output devices can include one or more of the following: Random Access Memory (RAM), Redundant Array of Independent Disks (RAID), floppy drive, CD, DVD, magnetic disk, internal hard drive, external hard drive, memory stick, or other storage device capable of being accessed by a processing element as provided herein, where such aforementioned examples are not exhaustive, and are for illustration and not limitation.
The computer program(s) can be implemented using one or more high level procedural or object-oriented programming languages to communicate with a computer system; however, the program(s) can be implemented in assembly or machine language, if desired. The language can be compiled or interpreted. Sets and subsets, in general, include one or more members.
As provided herein, the processor(s) and/or processing elements can thus be embedded in one or more devices that can be operated independently or together in a networked environment, where the network can include, for example, a Local Area Network (LAN), wide area network (WAN), and/or can include an intranet and/or the Internet and/or another network. The network(s) can be wired or wireless or a combination thereof and can use one or more communication protocols to facilitate communication between the different processors/processing elements. The processors can be configured for distributed processing and can utilize, in some embodiments, a client-server model as needed. Accordingly, the methods, devices, and systems can utilize multiple processors and/or processor devices, and the processor/processing element instructions can be divided amongst such single or multiple processor/devices/processing elements.
The device(s) or computer systems that integrate with the processor(s)/processing element(s) can include, for example, a personal computer(s), workstation (e.g., Dell, HP), personal digital assistant (PDA), handheld device such as a cellular telephone or laptop, or another device capable of being integrated with a processor(s) that can operate as provided herein. Accordingly, the devices provided herein are not exhaustive and are provided for illustration and not limitation.
References to “a processor”, or “a processing element,” “the processor,” and “the processing element” can be understood to include one or more microprocessors that can communicate in a stand-alone and/or a distributed environment(s), and can thus be configured to communicate via wired or wireless communication with other processors, where such one or more processors can be configured to operate on one or more processor/processing element-controlled devices that can be similar or different devices. Use of such “microprocessor,” “processor,” or “processing element” terminology can thus also be understood to include a central processing unit, an arithmetic logic unit, an application-specific integrated circuit (ASIC), and/or a task engine, with such examples provided for illustration and not limitation.
Furthermore, references to memory, unless otherwise specified, can include one or more processor-readable and accessible memory elements and/or components that can be internal to the processor-controlled device, external to the processor-controlled device, and/or can be accessed via a wired or wireless network using a variety of communication protocols, and unless otherwise specified, can be arranged to include a combination of external and internal memory devices, where such memory can be contiguous and/or partitioned based on the application. For example, the memory can be a flash drive, a computer disc, CD/DVD, distributed memory, etc. References to structures include links, queues, graphs, trees, and such structures are provided for illustration and not limitation. References herein to instructions or executable instructions, in accordance with the above, can be understood to include programmable hardware.
Although the methods and systems have been described relative to specific embodiments thereof, they are not so limited. As such, many modifications and variations may become apparent in light of the above teachings. Many additional changes in the details, materials, and arrangement of parts, herein described and illustrated, can be made by those skilled in the art. Accordingly, it will be understood that the methods, devices, and systems provided herein are not to be limited to the embodiments disclosed herein, can include practices otherwise than specifically described, and are to be interpreted as broadly as allowed under the law.
This application claims priority to and benefit of U.S. Provisional Patent Application No. 63/013,183, titled “Systems And Methods For Identifying Bottlenecks In Data Networks,” filed on Apr. 21, 2020, the entire contents of which are incorporated herein by reference.
This invention was made with government support under Contract No. DE-SC0019523 awarded by the U.S. Department of Energy (DoE). The government has certain rights in the invention.
Number | Date | Country
---|---|---
63013183 | Apr 2020 | US