The present invention relates generally to managing the allocation of resources in a network, and in particular embodiments, to techniques and mechanisms for randomized mesh network routing.
Next-generation wireless networks may adopt millimeter wave (mmWave) wireless mesh backhaul networks in place of, or in addition to, traditional wireline (e.g., fiber optic) backhaul networks. In general, mmWave signals refer to wireless transmissions over carrier frequencies between 6 Gigahertz (GHz) and 300 GHz. Due to the free space path loss of carrier frequencies exceeding 6 GHz, mmWave signals tend to exhibit high, oftentimes unacceptable, packet loss rates when transmitted over relatively long distances. Beamforming may be used to extend the range of mmWave signals to a distance that is suitable for implementation in mesh backhaul networks. However, the highly directional nature of beamformed mmWave signals may have the unintended consequence of causing “pass through interference” between the nodes (e.g., access points, gateways, etc.) forming the mesh backhaul network.
Technical advantages are generally achieved by embodiments of this disclosure, which describe techniques and mechanisms for randomized mesh network routing.
In accordance with an embodiment, a method for scheduling wireless transmissions is provided. In this example, the method includes selecting routes between access nodes and one or more gateways in a wireless mesh network, and mapping wireless links in at least some of the routes to timeslots of a frame to form a plurality of synchronized paths between the access nodes and the gateways. The method further includes iteratively adding, or removing, an individual one of the plurality of synchronized paths to, or from, a time division multiplexed (TDM) routing schedule according to each individual state in a state progression of a Markov chain, and instructing the access nodes and the one or more gateways to communicate messages over the wireless links according to the TDM routing schedule. The TDM routing schedule includes a different subset of synchronized paths for each state in the Markov chain. An apparatus for performing this method is also provided.
For a more complete understanding of the present disclosure, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Corresponding numerals and symbols in the different figures generally refer to corresponding parts unless otherwise indicated. The figures are drawn to clearly illustrate the relevant aspects of the embodiments and are not necessarily drawn to scale.
The making and using of embodiments of this disclosure are discussed in detail below. It should be appreciated, however, that the concepts disclosed herein can be embodied in a wide variety of specific contexts, and that the specific embodiments discussed herein are merely illustrative and do not serve to limit the scope of the claims. Further, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of this disclosure as defined by the appended claims. Although much of this disclosure is discussed in the context of beamformed mmWave transmissions in a wireless backhaul network, those of ordinary skill in the art will understand that the inventive aspects provided herein may be applied in any wireless mesh network, including those using non-beamformed transmissions over lower carrier frequencies. Aspects of this disclosure relate to Markov chains and Markov processes. A general description of Markov processes is provided in the text entitled “Random Processes for Engineers” by Bruce Hajek, which is incorporated herein by reference as if reproduced in its entirety.
The term “pass through interference” generally refers to interference experienced by a neighboring node as a result of a transmission between two neighboring nodes in a wireless mesh network. One way of mitigating pass through interference is to schedule transmissions over links in the wireless mesh network according to a time division multiplexed (TDM) scheme. As an example, consider a wireless mesh network that includes two access nodes (e.g., base stations) and one gateway positioned in-between the access nodes. In such an example, a transmission from either of the access nodes to the gateway may result in high levels of interference over a wireless backhaul link between the gateway and the other access node.
In larger mesh backhaul networks, there may be multiple hops (e.g., multiple backhaul links) along a given route between an access node and a gateway. In such networks, TDM schemes may schedule links of a given route to timeslots in a frame to form synchronized paths between the access nodes and gateways. As used herein, the term “synchronized path” refers to the scheduling (or mapping) of links in a given route to timeslots in a frame. In that context, mapping the links in a given route to the timeslots of the frame forms a “synchronized path” through the wireless mesh network. For larger wireless mesh networks (e.g., networks with many nodes, links, and potential routes), determining which synchronized paths to include in the TDM routing schedule becomes a relatively complex optimization problem, particularly when loading is unevenly distributed across the access nodes and fluctuates over time.
Embodiments of this disclosure simplify that optimization problem by iteratively adding, and removing, individual synchronized paths to a TDM routing schedule according to a Markov chain. As used herein, the term “Markov chain” refers to a state diagram for modeling the communication of signaling over a mesh network. In particular, each state of a Markov chain maps a different combination of synchronized paths to the TDM routing schedule. Accordingly, an individual synchronized path is either added or removed when transitioning from one state of the Markov chain to another. In some embodiments, transitioning between states of a Markov chain is performed according to a proportionally fair transition rate. Before transitioning to a new state, a scheduler may determine whether it is feasible to add a synchronized path associated with the Markov state to an existing TDM routing schedule based on an interference model of the mesh network. In one embodiment, the feasibility determination is based on a protocol interference model that prohibits transmissions from being scheduled over two or more interfering links during the same timeslot of the frame. In another embodiment, the feasibility determination is based on a physical interference model that permits transmissions to be scheduled over two or more interfering links during the same timeslot of the frame when an interference cost associated with the transmissions is less than a threshold. In such an embodiment, the interference costs may vary based on the amount of interference experienced between transmissions performed over the two or more interfering links during the same period. The amount of interference experienced between the transmissions may vary according to a number of factors, such as the path loss between the transmitters and receivers, as well as the transmission parameters (e.g., transmit power levels, beam-directions, etc.) used to perform the respective transmissions.
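For illustration, the following Python sketch shows one way the iterative add/remove procedure described above could be organized under a protocol interference model. The data structures (synchronized paths as link-to-timeslot mappings), the helper names, and the uniform random choice between adding and removing a path are assumptions made for this sketch only; they do not reflect the specific transition rates or interference model of any particular embodiment.

```python
import random

# Synchronized paths are represented here as dicts mapping each wireless link of a
# route to the timeslot it occupies, e.g. {"A-B": 0, "B-GW": 1}. These structures and
# helper names are illustrative assumptions.

def conflicts(path_a, path_b, interfering_pairs):
    """Protocol interference model: two synchronized paths conflict if they place
    the same link, or two mutually interfering links, in the same timeslot."""
    for link_a, slot_a in path_a.items():
        for link_b, slot_b in path_b.items():
            if slot_a == slot_b and (link_a == link_b
                                     or (link_a, link_b) in interfering_pairs
                                     or (link_b, link_a) in interfering_pairs):
                return True
    return False

def feasible_to_add(candidate, schedule, interfering_pairs):
    """A candidate path may join the TDM routing schedule only if it conflicts
    with none of the paths already scheduled."""
    return all(not conflicts(candidate, p, interfering_pairs) for p in schedule)

def step(schedule, candidates, interfering_pairs, add_probability=0.5):
    """One state transition: add or remove a single synchronized path, so that
    successive states of the chain differ by exactly one path. A uniform random
    choice is used here in place of the transition rates discussed below."""
    if schedule and random.random() > add_probability:
        schedule.remove(random.choice(schedule))      # move to a state with one fewer path
    else:
        feasible = [c for c in candidates
                    if c not in schedule and feasible_to_add(c, schedule, interfering_pairs)]
        if feasible:
            schedule.append(random.choice(feasible))  # move to a state with one more path
    return schedule
```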
In some embodiments, a transition rate of the Markov chain may be adjusted following a transition from a previous state to a subsequent state. The transition rate may be adjusted periodically, e.g., after each transition, every other transition, every Nth transition (where N is an integer greater than one), etc. Alternatively, the transition rate may be adjusted aperiodically, e.g., randomly, at the discretion of the network operator, after a triggering condition has occurred, etc. The transition rate may be adjusted according to one or more characteristics, parameters, and/or values associated with the network and/or the Markov chain. For example, the transition rate may be adjusted based on a ratio of users to synchronized paths specified by the subsequent state in the Markov chain. In such an example, the ratio of users to synchronized paths may include the summation of ratios between users accessing each of the access nodes and synchronized paths assigned to the corresponding access node in the TDM routing schedule. In one embodiment, the transition rate is a function of beta (β). By way of example, the path setup rate at a given node (access node “s”) may be determined according to the following equations: setup-rate(s)=gamma·exp(beta·ds/ms); release-rate(s)=gamma, where setup-rate(s) is the path setup rate for access node s, release-rate(s) is the path release rate for access node s, ds is the number of users at the access node s, ms is the number of paths set up for access node s, and gamma is a system parameter. In such an example, the transition rate may be adjusted by re-computing beta according to the ratio of users to synchronized paths specified by the subsequent state. By way of example, beta may be computed according to the following equation
wherein intensity is a control variable corresponding to how aggressive the algorithm is in attempting to find paths, nodes is the number of access nodes in the network, and sum_of_ratios_of_users_to_assigned_paths is the summation of ratios between users accessing each access node and the number of paths assigned to the access node in the TDM schedule specified by the current state in the Markov chain. Other examples are also possible. These and other inventive aspects are described in greater detail below.
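As a rough illustration, the Python sketch below computes the per-access-node setup and release rates given above. The beta_update() function is an assumed form, constructed only from the quantities named above (intensity, the number of access nodes, and the summed users-to-paths ratios) and normalized so that the average of beta·ds/ms equals log[intensity/(1−intensity)]; it should not be read as the exact equation of any embodiment.

```python
import math

def setup_rate(gamma, beta, users, paths):
    """Path setup rate for an access node: gamma * exp(beta * ds / ms).
    The max(paths, 1) guard is added for this sketch to avoid division by zero
    when no paths are currently assigned."""
    return gamma * math.exp(beta * users / max(paths, 1))

def release_rate(gamma):
    """Path release rate is simply the system parameter gamma."""
    return gamma

def beta_update(intensity, num_access_nodes, sum_of_ratios):
    """Assumed recomputation of beta: choose beta so that the average of
    beta * ds / ms over the access nodes equals log(intensity / (1 - intensity))."""
    return num_access_nodes * math.log(intensity / (1.0 - intensity)) / sum_of_ratios
```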
In this example, the access nodes 110-180 are connected with one another, as well as with the gateway 190, via wireless links 112, 114, 123, 125, 136, 145, 147, 156, 158, 169, 178, 189. In particular, the access node 110 is interconnected to the access node 120 via the wireless link 112 and to the access node 140 via the wireless link 114. The access node 120 is interconnected to the access node 130 via the wireless link 123 and to the access node 150 via the wireless link 125. The access node 130 is interconnected to the access node 160 via the wireless link 136, and the access node 140 is interconnected to the access node 150 via the wireless link 145 and to the access node 170 via the wireless link 147. The access node 150 is interconnected to the access node 160 via the wireless link 156 and to the access node 180 via the wireless link 158. The access node 160 is interconnected to the gateway 190 via the wireless link 169, the access node 170 is interconnected to the access node 180 via the wireless link 178, and the access node 180 is interconnected to the gateway 190 via the wireless link 189.
Due to pass through interference, transmissions over the wireless links 112, 114, 123, 125, 136, 145, 147, 156, 158, 169, 178, 189 may interfere with one another.
As mentioned above, it is possible to mitigate inter-link interference in a wireless mesh network by communicating transmissions over links in the wireless mesh network according to a time division multiplexed (TDM) routing schedule. Embodiments of this disclosure provide techniques for generating and/or refining the TDM routing schedule according to a Markov chain. In particular, embodiment techniques map wireless links in routes between access nodes and gateway(s) to timeslots of a frame to form synchronized paths through the wireless mesh network. Individual synchronized paths are then iteratively added to, and removed from, a TDM routing schedule by transitioning between states of the Markov chain.
Embodiments of this disclosure generate and/or refine a TDM routing schedule according to a Markov chain.
As shown in
Subsequently, the embodiment Markov chain progression 300 proceeds to state 321, where a synchronized path mapping links of the route 240 to timeslots of the frame is added to the TDM routing schedule. A resulting frame configuration 421 associated with the state 321 is depicted in
Thereafter, the embodiment Markov chain progression 300 proceeds to state 331, where a synchronized path mapping links of the route 230 to timeslots of the frame is added to the TDM routing schedule. A resulting frame configuration 431 associated with the state 331 is depicted in
Next, the embodiment Markov chain progression 300 proceeds to state 341, where a synchronized path mapping links of the route 220 to timeslots of the frame is added to the TDM routing schedule. A resulting frame configuration 441 associated with the state 341 is depicted in
Next, the embodiment Markov chain progression 300 proceeds to state 332, where the synchronized path for the route 240 that was added in state 321 is removed. As shown in
Subsequently, the embodiment Markov chain progression 300 proceeds to state 342, where a synchronized path mapping links of the route 270 to timeslots of the frame is added to the TDM routing schedule. A resulting frame configuration 442 associated with the state 342 is depicted in
Thereafter, the embodiment Markov chain progression 300 proceeds to state 351, where a synchronized path mapping links of the route 280 to timeslots of the frame is added to the TDM routing schedule. A resulting frame configuration 451 associated with the state 351 is depicted in
Next, the embodiment Markov chain progression 300 proceeds to state 361, where a synchronized path mapping links of the route 250 to timeslots of the frame is added to the TDM routing schedule. A resulting frame configuration 461 associated with the state 361 is depicted in
Subsequently, the embodiment Markov chain progression 300 proceeds to state 371, where a synchronized path mapping links of the route 240 to timeslots of the frame is added to the TDM routing schedule. A resulting frame configuration 471 associated with the state 371 is depicted in
Finally, the embodiment Markov chain progression 300 proceeds to state 381, where a synchronized path mapping links of the route 260 to timeslots of the frame is added to the TDM routing schedule. A resulting frame configuration 481 associated with the state 381 is depicted in
As shown in
In particular,
As discussed above, embodiments of this disclosure configure a TDM routing schedule for a wireless mesh network based on a Markov chain. A Markov chain is a technique for modeling a random process that undergoes transitions between states in a state space. Markov chains are memoryless state machines, meaning that the probability of progressing to the next state depends only on the current state.
If the state transition probabilities are constant, then the different states have steady state probabilities (also known as the stationary distribution), and the steady state probabilities (π) can be found by computing P^n as n goes to infinity. In this example, the steady state probabilities would be as follows:
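By way of a generic illustration (using a made-up two-state transition matrix rather than the matrix of this example), the steady state probabilities can be obtained by raising P to a large power:

```python
import numpy as np

P = np.array([[0.9, 0.1],    # hypothetical constant transition probabilities
              [0.4, 0.6]])

Pn = np.linalg.matrix_power(P, 100)   # P^n for large n
print(Pn[0])                          # every row converges to pi = [0.8, 0.2]
```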
Markov chains can be either reducible or irreducible. In an irreducible Markov chain, it is possible to reach any state from any other state.
In a continuous time Markov chain, the probabilities of the equivalent finite state Markov chain are converted into lengths of time that the process spends in a given state.
In some examples, there may also be traffic from node A to node E.
Then (r1, r3, r4) can coexist, and this triple achieves a throughput of 1/3 from A to E as well as the throughput of 2/3 from A to D. If the relative throughputs required were different, then a different frame length than T=3 might well be desirable. For example, a frame length of T=2 allows a throughput of 1/2 from A to E and of 1/2 from A to D.
By making the frame length arbitrarily long, it is possible to approach any convex combination of the two solutions. More generally, for any network, the capacity (or rate) region will approach a convex region as the frame length becomes larger and larger.
Consider the following example, where nr=1 if a synchronized path is currently set up along a route rϵR, and nr=0 otherwise. Here R is the set of possible paths (many of which conflict with each other). Define the vector n=(nr, rϵR). Write that the state n is feasible if the SINR inequalities are simultaneously satisfied for all of the paths in n, that is, all paths r with nr=1. Let N be the set of feasible states. It is a subset of {0,1}^R with the following hierarchical property: if er is the state describing one path on route r, then n+erϵN implies nϵN.
Let S be the set of access points, with S access points, say, in total. Let H, the flow composition matrix, be the incidence matrix identifying which paths serve which access points: i.e. Hsr=1 if path r serves access point s, and Hsr=0 otherwise. (Column sums of H are 1, i.e. each path has a single access point that it serves. Let s(r) be that access point for path r.) The aggregate throughput xs for an access point s is the sum of the paths serving it:
xs=ΣrϵRHsrnr, sϵS, which may also be written as x=Hn.
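For a concrete (hypothetical) numeric illustration of the flow composition matrix and the relation x=Hn, consider two access points served by four candidate synchronized paths; the values of H and n below are made up solely to show the computation.

```python
import numpy as np

H = np.array([[1, 1, 0, 0],    # paths r0, r1 serve access point s0
              [0, 0, 1, 1]])   # paths r2, r3 serve access point s1 (column sums are 1)
n = np.array([1, 0, 1, 1])     # n_r = 1 if a synchronized path is set up on route r

x = H @ n                      # aggregate throughput per access point
print(x)                       # -> [1 2]: s0 has one path set up, s1 has two
```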
A proportionally fair allocation of capacity may be given by maximizing: ΣsϵSds log xs subject to x=Hn over nϵN. (1).
It is possible to determine the rough complexity of this approach. Consider a set of routes R. For each access point sϵS, suppose one or more physical paths to a gateway are constructed. In some examples, the paths are the shortest paths through the physical network to gateways (there may be more than one if the shortest physical path is not unique). In some examples, the paths include a node-disjoint physical path to a gateway. Each physical path has associated with it T synchronized paths. So there are of order O(ST) paths in the set R. The set N is of vastly larger size: it is a subset of {0,1}^R and may grow exponentially with the number of access points S.
If T≥1 then the number of maximal cliques is 2T. We know this, but Algorithm 1, which we are about to describe, does not. Algorithm 1: On the arrival of a new end-user, say an increase of d1 by 1, the corresponding link AB looks through the T paths (AB, t), t=1, 2, . . . , T in a random order to see if one is available: if it finds one, it grabs it. Whether or not it finds one, it shares all the paths it has set up over the d1+1 end-users it now has. On the departure of an end-user, say a decrease of d1 by 1, the corresponding link AB looks to see if it has d1+1 paths set up: if it has, it clears down the oldest of these paths. Similarly for links 2, 3 and 4.
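A minimal Python sketch of Algorithm 1 for a single link is given below. The slot_is_available(t) predicate is a hypothetical stand-in for checking whether the synchronized path in timeslot t can be set up without conflicting with paths already in use by other links, and the class layout is purely illustrative.

```python
import random
from collections import deque

class Link:
    """One link (e.g., AB) in the example, with T timeslots in the frame."""

    def __init__(self, T, slot_is_available):
        self.T = T
        self.slot_is_available = slot_is_available
        self.users = 0
        self.paths = deque()           # timeslots currently held, oldest first

    def on_user_arrival(self):
        """Scan the T timeslots in a random order and grab the first available one.
        Either way, all paths held by the link are shared over its end-users."""
        self.users += 1
        for t in random.sample(range(self.T), self.T):
            if t not in self.paths and self.slot_is_available(t):
                self.paths.append(t)
                break

    def on_user_departure(self):
        """If the link now holds more paths than end-users, clear down the oldest path."""
        self.users -= 1
        if len(self.paths) > self.users:
            self.paths.popleft()
```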
Algorithm 2: Suppose the algorithm is aware of the set of maximal cliques, and so does the following variation. On an increase of d1 by 1, the link AB looks through the T paths (AB, t), t=1, 2, . . . , T in that order for a new available path. The link CD behaves similarly. The link BC looks through its T paths in the reverse order (BC, t), t=T, . . . , 2, 1, and the link DA behaves similarly. When a link clears down a path it chooses the most recently set up of its paths in use. Thus the paths in use by a link will be in contiguous time slots, from 1 upwards for links AB and CD and from T downwards for links BC and DA.
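The only structural difference from Algorithm 1 is the deterministic scan order, sketched below for the four-link example; the link names and the value of T are illustrative.

```python
def scan_order(link_name, T):
    """Timeslot search order for a link in the four-link example: AB and CD scan
    slots 1..T from the bottom up, BC and DA from the top down, so each link's
    occupied slots stay contiguous. When clearing a path, the most recently set
    up path is released (last-in, first-out), unlike Algorithm 1."""
    forward = list(range(1, T + 1))
    return forward if link_name in ("AB", "CD") else list(reversed(forward))

# Example: with T = 4, link AB tries slots [1, 2, 3, 4] while link BC tries
# [4, 3, 2, 1], packing their paths at opposite ends of the frame.
```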
Recall that the numbers of end-users, dr, r=1, 2, 3, 4, are independent Poisson random variables with means of, say, λr, r=1, 2, 3, 4, respectively. Let Xr, r=1, 2, 3, 4 be the number of paths set up to carry the traffic of these end-users. (Both Xr and dr are values of a stochastic process observed at a point in time.)
Algorithm 3. As with Algorithm 1, set up one path from every access node to a network gateway. This time, we don't use attach/detach as a trigger. Instead, every access node establishes a new path at the rate γ exp(θr).
We release paths at a constant rate γ. We can release the oldest path using this approach. In this algorithm, γ controls the rate at which paths are set up and cleared, so γ is controlling the rate at which the system evolves. The parameter θr controls the rate at which individual access nodes try to set up paths. Since θr is based on the ratio of the number of users at an access point to the number of paths at an access point, the setup rate will tend to increase when an access point has a bandwidth deficit and decrease when an access point has a bandwidth surplus.
Regarding Markov Approximation. Consider that N ⊂{0,1}^R is the set of feasible states. Let n be a Markov process with state space N and transition rates q(n,n−er)=γ if nr=1, (4); q(n,n+er)=γ exp(θs(r)) if n+erϵN, (5) for rϵR, where erϵN is a unit vector with a 1 as its rth component and 0s elsewhere. When nr=1, a synchronized path r is currently set up for the given route: it remains set up for a time that is exponentially distributed with parameter γ. Thus the first path serving access point s to clear down does so after an exponentially distributed time with parameter γms, where
ms=ΣrϵRHsrnr is the number of paths set up that are serving access point s. (The results remain the same if every path that is set up clears down after a fixed length of time γ^−1.)
If nr=0, and it is feasible to set up the path r without violating SINR inequalities for existing paths, then path r is set up at rate γ exp (θs(r)), for each path rϵR. (Recall s(r) is the access point served by path r.) Thus the rate at which paths are set up for access point s is ksγ exp(θs) where ks counts the number of paths in the set {rϵR: Hsr=1 and n+erϵN}, i.e. the number of paths serving access point s that are individually system feasible to add to n.
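The following Python sketch simulates one jump of this Markov process using competing exponential clocks, per the rates (4)-(5). The is_feasible() predicate standing in for the SINR check, and the serving_ap and theta mappings, are hypothetical placeholders; this is a sketch of the process, not a definitive implementation.

```python
import math
import random

def simulate_step(paths, all_routes, serving_ap, theta, gamma, is_feasible):
    """One jump of the Markov process n. `paths` is the set of routes whose
    synchronized path is currently set up; release occurs at rate gamma and
    setup of a feasible path r at rate gamma * exp(theta[s(r)])."""
    events = []  # (rate, kind, route)
    for r in all_routes:
        if r in paths:
            events.append((gamma, "release", r))                                   # q(n, n - e_r)
        elif is_feasible(paths, r):
            events.append((gamma * math.exp(theta[serving_ap[r]]), "setup", r))    # q(n, n + e_r)
    total = sum(rate for rate, _, _ in events)
    if total == 0:
        return paths, float("inf")          # no transitions possible from this state
    dt = random.expovariate(total)          # time until the next jump
    pick = random.uniform(0, total)         # pick an event with probability proportional to its rate
    acc = 0.0
    for rate, kind, r in events:
        acc += rate
        if pick <= acc:
            return (paths - {r}, dt) if kind == "release" else (paths | {r}, dt)
    return paths, dt
```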
The equilibrium distribution for n=(nr, rϵR) can be written in the form:
πθ(n)=B exp(ΣrϵRnrθs(r)), nϵN. (6). Here B is a normalizing constant chosen so that the distribution (6) sums to one. This follows, since (πθ(n), nϵN) is a probability distribution and it satisfies the detailed balance condition πθ(n)q(n, n+er)=πθ(n+er)q(n+er, n). The expected number of paths set up that serve access point s is then
Eθ[ms]=ΣnϵNπθ(n)ΣrϵRHsrnr, sϵS, or in matrix form Eθ[m]=HEθ[n].
Formulating the optimization problem. Consider the optimization problem: maximize ΣsϵSds log xs−ΣnϵNp(n)log p(n) subject to xs=ΣnϵNp(n)ΣrϵRHsrnr, sϵS, and ΣnϵNp(n)=1, over p(n)≥0, nϵN; xs, sϵS. (7). The objective function is concave and differentiable and the constraints are linear, so Lagrangian methods may be used. The Lagrangian for the problem is
L=ΣsϵSds log xs−ΣnϵNp(n)log p(n)+ΣsϵSθs(ΣnϵNp(n)ΣrϵRHsrnr−xs)+κ(1−ΣnϵNp(n)), where θs, sϵS, and κ are Lagrange multipliers for the constraints. We know there exist Lagrange multipliers θs, sϵS, κ such that the Lagrangian is maximized at p, x that are feasible, and such p, x are then optimal for the original problem.
We now attempt to maximize L over p(n)≥0 and xs≥0. Differentiating with respect to xs gives ∂L/∂xs=ds/xs−θs, and differentiating with respect to p(n) gives ∂L/∂p(n)=−log p(n)−1+ΣsϵSθsΣrϵRHsrnr−κ. At a maximum over xs, we have that θs=ds/xs. (8). At a maximum over p(n), p(n)=exp(ΣsϵSθsΣrϵRHsrnr−1−κ). Choose κ so that (p(n),nϵN) sum to 1: then p(n)=B exp(ΣrϵRnrθs(r)), of the form (6).
Thus the Markov chain (4)-(5) achieves an equilibrium distribution (6) that solves the optimization problem (7) provided the parameters (θs, sϵS) are set to satisfy (8). The objective function of this optimization problem is the sum of the proportionally fair objective function (1) plus the entropy of the probability distribution (p(n),nϵN). The dual to the optimization problem (7) is to maximize
over θs≥0, sϵS. At the optimum θs=ds/Eθ[ΣrϵRHsrnr] once again. This can be used to develop a convergence proof for sufficiently slow changes of (θs, sϵS) (Exercise 7.23, [4]). When θs=βds/xs, this corresponds to multiplying the first term of the objective function (7) by β, i.e. to increasing the importance of the proportionally fair objective function relative to the entropy term.
Aspects of this disclosure provide an adaptive method for choosing β. Since both β and (θs, sϵS) may be time-varying, the Markov chain (4)-(5) may not be time-homogeneous. After an initial period of convergence, both β and (θs, sϵS) may become relatively stable. Although β and (θs, sϵS) may fluctuate with the random transitions of the Markov chain, these fluctuations may be comparable with fluctuations in the numbers of users (ds, sϵS).
The rates at which paths serving access point s are torn down and set up are γms and ksγ exp(θs), respectively. Thus in equilibrium ms≈ks exp(θs), and thus θs≈log(ms/ks).
The ratio ms/ks is the ratio of paths in use to paths not currently in use but individually system feasible for access point s: so it may be helpful to set the average value of θs over sϵS to be say log [intensity/(1−intensity)] where intensity=90%, where log is the natural logarithm. Thus log10 z=(log10 e)log z. Accordingly, θs may be updated according to
where
to ensure that ΣsϵSθs/S=log [9].
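A sketch of such an update is shown below. It assumes θs is taken proportional to the instantaneous ratio ds/ms and then scaled so that the unweighted average of θs over the S access points equals log[intensity/(1−intensity)] (log 9 at intensity=90%); the proportionality assumption and the guards are made for this sketch only and are not the exact update equation.

```python
import math

def update_theta(users, paths, intensity=0.9):
    """users[s], paths[s]: current end-users and paths set up for access point s.
    Returns theta[s] proportional to d_s/m_s, normalized so the unweighted mean
    of theta over access points equals log(intensity / (1 - intensity))."""
    target_mean = math.log(intensity / (1.0 - intensity))        # log 9 when intensity = 90%
    ratios = {s: users[s] / max(paths[s], 1) for s in users}     # instantaneous d_s / m_s
    total = sum(ratios.values()) or 1.0                          # guard against an all-zero sum
    scale = target_mean * len(ratios) / total
    return {s: scale * ratio for s, ratio in ratios.items()}
```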
This approach is relatively aggressive insofar as it uses the instantaneous value of ms rather than its time-average xs, and because it does not dampen the updates to θs. In some embodiments, exp(θs) is averaged, rather than θs.
In networks with less symmetry, it is possible to update θs by setting it to
where
this may ensure that the weighted average ΣsϵSdsθs/ΣsϵSds=log[9].
In an embodiment, N ⊂{0,1}^R is the set of feasible states, and
respectively are the number of paths each individually capable of serving access point s (note that these may not be compatible with each other, so the maximum capacity available to access point s may be less than hs), and the number of paths currently set up from access point s. Now let n be a Markov process with state space N and transition rates q(n, n−er)=γ if nr=1, (9)
q(n, n+er)=(γ/hs(r)) exp(θs(r)) if n+erϵN, (10) for rϵR. Again a path that is set up remains so for a time that is exponentially distributed with parameter γ. The rate (10) corresponds to access point s attempting to set up a path at rate γ exp(θs(r)), and choosing at random one of the hs paths from access point s to try. (Under the preliminary model, access point s attempts to set up a path at rate γhs exp(θs(r)), choosing at random one of the hs paths to try. Thus the preliminary model is more aggressive for access points s for which hs is larger.)
The equilibrium distribution for n=(nr, rϵR) can be written in the form
nϵN, (11), where B is a normalizing constant chosen so that the distribution (11) sums to one. If we set θ by the relation
then the distribution (11) solves the optimization problem (7) with the amended objective function
[Equivalently, the distribution (11) solves the optimization problem (7) with its entropy term replaced by minus the Kullback-Leibler divergence of the distribution (p(n), nϵN) from the distribution where the components of n are independent Bernoulli random variables, nr with mean 1/(1+hs(r)).]
In another embodiment, n is a Markov process with state space N and transition rates q(n, n−er)=γ if nr=1, (12)
q(n, n+er)=(γ/(hs(r)−ms(r))) exp(θs(r)) if n+erϵN, (13) for rϵR. Again a path that is set up remains so for a time that is exponentially distributed with parameter γ. The rate (13) corresponds to access point s attempting to set up a path at rate γ exp(θs(r)), and choosing at random one of the hs−ms paths it is not already using to try.
The equilibrium distribution for n=(nr, rϵR) can be written in the form
nϵN, (14) where B is a normalizing constant chosen so that the distribution (14) sums to one. If we set θ by the relation
then the distribution (14) solves the optimization problem (7) with the amended objective function
In some embodiments, the processing system 1600 is included in a network device that is accessing, or otherwise part of, a telecommunications network. In one example, the processing system 1600 is in a network-side device in a wireless or wireline telecommunications network, such as a base station, a relay station, a scheduler, a controller, a gateway, a router, an applications server, or any other device in the telecommunications network. In other embodiments, the processing system 1600 is in a user-side device accessing a wireless or wireline telecommunications network, such as a mobile station, a user equipment (UE), a personal computer (PC), a tablet, a wearable communications device (e.g., a smartwatch, etc.), or any other device adapted to access a telecommunications network.
In some embodiments, one or more of the interfaces 1610, 1612, 1614 connects the processing system 1600 to a transceiver adapted to transmit and receive signaling over the telecommunications network.
The transceiver 1700 may transmit and receive signaling over any type of communications medium. In some embodiments, the transceiver 1700 transmits and receives signaling over a wireless medium. For example, the transceiver 1700 may be a wireless transceiver adapted to communicate in accordance with a wireless telecommunications protocol, such as a cellular protocol (e.g., long-term evolution (LTE), etc.), a wireless local area network (WLAN) protocol (e.g., Wi-Fi, etc.), or any other type of wireless protocol (e.g., Bluetooth, near field communication (NFC), etc.). In such embodiments, the network-side interface 1702 comprises one or more antenna/radiating elements. For example, the network-side interface 1702 may include a single antenna, multiple separate antennas, or a multi-antenna array configured for multi-layer communication, e.g., single input multiple output (SIMO), multiple input single output (MISO), multiple input multiple output (MIMO), etc. In other embodiments, the transceiver 1700 transmits and receives signaling over a wireline medium, e.g., twisted-pair cable, coaxial cable, optical fiber, etc. Specific processing systems and/or transceivers may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device.
Although the description has been described in detail, it should be understood that various changes, substitutions and alterations can be made without departing from the spirit and scope of this disclosure as defined by the appended claims. Moreover, the scope of the disclosure is not intended to be limited to the particular embodiments described herein, as one of ordinary skill in the art will readily appreciate from this disclosure that processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, may perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
Other Publications:
Hajek, B., et al., "Random Processes for Engineers," Cambridge University Press, Mar. 2015, 448 pages.
Wang, P., et al., "Practical Computation of Optimal Schedules in Multihop Wireless Networks," IEEE/ACM Transactions on Networking, vol. 19, no. 2, Apr. 2011, pp. 305-318.
Chen, M., et al., "Markov Approximation for Combinatorial Network Optimization," Proceedings IEEE INFOCOM, Mar. 14-19, 2010, 9 pages.