The present disclosure relates to the management of network traffic in a communications network such as the Internet, and in particular to a network bandwidth apportioning system and process.
Network neutrality—the principle that all packets in a network should be treated equally, irrespective of their source, destination or content—remains a principle cherished dearly in the academic community, but is neither mandated nor enforced in much of the world. The USA has seen the most vigorous debate on this topic, with the pendulum swinging one way and then the other every so often, depending on political mood. The underlying problem in the USA remains that there is no competition—more than 60% of households in the USA have a choice of at most two Internet Service Providers (one over a phone line and the other over a cable TV line), which creates public pressure to regulate the ISPs to prevent traffic differentiation. Interestingly, mobile networks in the same country have seen more competition, and hence have been largely exempt from the net-neutrality debates.
In contrast, several other countries in the world have encouraged competition in broadband services, and in some cases have even paid for national broadband infrastructures from the public purse (e.g., Singapore, Australia, New Zealand, Korea, and Japan), which gives subscribers a choice of tens if not hundreds of ISPs to choose from. In the presence of such healthy competition, the inventors believe it would be wrong to impose neutrality on all ISPs because it would force them to provide bland services that compete solely on price; instead, the inventors believe ISPs should be allowed (indeed encouraged) to differentiate their services in unique ways, and the market left to decide how much their offering is worth (and indeed if a net-neutral ISP dominates, so be it).
In view of the above, the inventors have identified a general need for network traffic discrimination that is flexible enough to allow ISPs to innovate and differentiate their offerings, while being open enough to allow consumers to compare these offerings, and rigorous enough for regulators to hold ISPs accountable for the resulting user experience.
It is desired, therefore, to overcome or alleviate one or more difficulties of the prior art, or to at least provide a useful alternative.
In accordance with some embodiments of the present disclosure, there is provided a network bandwidth apportioning process executed by an Internet Service Provider (ISP), the process including the steps of:
In some embodiments, the relationships are defined by respective different analytic formulae, and the process includes generating display data for displaying the analytic formulae to a network user and sending the display data to the network user in response to a request to view the analytic formulae.
In some embodiments, the analytic formulae include one or more analytic formulae with one or more of the following forms:
In some embodiments, the analytic formulae include analytic formulae according to:
Ui(xi) = aixi and Uj(xj) = ajxj, where ai > aj,
wherein class-i's bandwidth demand is always met before class-j receives any allocation.
In some embodiments, the predetermined classes of network traffic include a class for mice flows, a class for elephant flows, and a class for streaming video.
In some embodiments, the predetermined classes of network traffic consist of a class for mice flows, a class for elephant flows, and a class for streaming video.
In some embodiments, the plurality of mutually exclusive predetermined classes of network traffic are no more than a few tens in number.
In accordance with some embodiments of the present disclosure, there is provided at least one computer-readable storage medium having stored thereon processor-executable instructions that, when executed by one or more processors, cause the processors to execute the network bandwidth apportioning process of any one of the above processes.
In accordance with some embodiments of the present disclosure, there is provided a network bandwidth apportioning system, including:
In some embodiments, the network bandwidth apportioning system further includes:
Also described herein is a network bandwidth apportioning system, including:
In some embodiments, the metrics of network performance include one or more of: web page load time, video stalls, and download rate.
In some embodiments, the metrics of network performance include: web page load time, video stalls, and download rate.
In some embodiments, the relationships are defined by respective different analytic formulae, and the system includes a display component to generate display data for displaying the analytic formulae to a network user and send the display data to the network user in response to receipt of a request to view the analytic formulae.
In some embodiments, the analytic formulae include one or more analytic formulae with one or more of the following forms:
where a≠0, k≠0.
Some embodiments of the present disclosure are hereinafter described, by way of example only, with reference to the accompanying drawings, wherein:
In order to address the shortcomings of the prior art, the inventors have developed the present disclosure, embodied as a network bandwidth apportioning system and process, to meet the requirements of the various stakeholders in the following way. For ISPs, the network bandwidth apportioning system and process give flexibility to specify differentiation policies based on any attribute(s), such as content type, content provider, subscriber tier, or any combination thereof. For example, the network bandwidth apportioning system allows prioritizing streaming video over downloads, giving ‘gold’ subscribers a greater share of bandwidth than ‘bronze’ ones, or even restricting certain applications or content. Needless to say, the system's theoretical flexibility will in practice be constrained by the legal and regulatory environment of the region in which it is applied, and ultimately by market forces.
For consumers, the network bandwidth apportioning system described herein allows them to see and compare the policies on offer from the various ISPs, in terms of the number of traffic classes each ISP supports, how traffic streams map to classes, and how bandwidth is shared amongst classes at various levels of congestion. This allows consumers to clearly identify ISPs that better support their specific tastes or requirements, be it gaming or streaming video or large downloads, or indeed non-discrimination. Further, in exposing its policy, the ISP need not reveal any sensitive information about their network (such as provisioned bandwidth) or their subscriber base (such as numbers in each tier).
Lastly, for regulators, the system provides rigor so that the differentiation behaviour during congestion is computable, predictable, and repeatable. Regulators can audit performance to verify that the sharing of bandwidth in the ISP's network conforms to the ISP's stated discrimination policy.
Embodiments of the present disclosure are described herein in the context of a local-exchange/central-office where traffic to/from subscribers (typically a few thousand in number) on a broadband access network (based on DSL, cable, or national infrastructure) is aggregated by one or more broadband network gateways (BNGs) 102, as shown in
For example, if 5,000 subscribers in an access network aggregated at a BNG 102 are each offered a 20 Mbps plan, the ISP would not provision 100 Gbps of backhaul capacity on the BNG 102, since that would be excessive in cost (for example, at the time of writing the list price of bandwidth on an Australian national broadband network shows that even 10 Gbps capacity at the BNG 102 will cost the ISP A$2 million per-year!). The ISP would therefore rely on statistical multiplexing to provision, say, a tenth of the theoretical maximum required bandwidth in order to save cost, equating to an aggregate bandwidth of 10 Gbps (or 2 Mbps per-user on average). Needless to say, this can cause severe congestion during peak hour when many users are active on their broadband connections.
The features of the network bandwidth apportioning system and process that allow the ISP to deal with this congestion in an open, flexible, and rigorous manner are described below.
The first part of the network bandwidth apportioning process described herein requires the ISP to specify the number of traffic classes (queues) they support at this congestion point, and how traffic streams are mapped to their respective classes. For example, at one extreme, the ISP may have only one (FIFO) class, in which case they are net-neutral. At the other extreme, they may have a class per-user per-application stream (akin to the IETF IntServ proposal); though theoretically permissible, this would require hundreds of thousands of queues, making it infeasible in practice. A pragmatic approach is for the ISP to support a small number (say 2 to 16) of classes—while this may sound somewhat similar to the IETF DiffServ proposal, it should be noted that the number of classes and the mapping of traffic streams to classes is decided by the ISP, and is not mandated by any standard. For example, the ISP may choose to have three classes: one each for browsing, video, and large download streams.
In any case, the ISP has to clearly define the criteria by which traffic flows are mapped to classes. For example, the ISP could specify that flows that transfer no more than 4 MB each (referred to by those skilled in the art as ‘mice’) are mapped to the “browsing” class, flows that carry streaming video (deduced from address prefixes, deep packet inspection, statistical profile measurement, and/or any other technique) map to the “video” class, and non-video flows that carry significant volume (referred to by those skilled in the art as ‘elephants’) are mapped to the “downloads” class. Additional classes can be introduced if and when necessary; for example to have a separate class for video from one or more specific providers, say Netflix. However, such changes need to be openly announced by the ISP, including the mapping criteria, as well as the bandwidth sharing, as described below.
In order for all stakeholders to obtain the most benefit from the disclosure, the bandwidth sharing amongst classes has to be specified in a way that: (a) is highly flexible so that ISPs can customize their offerings as they see fit; (b) is rigorous so that it is repeatable and enforceable across the entire range of traffic conditions; (c) is simple to implement at high traffic speeds; (d) does not require ISPs to reveal sensitive information including link speeds and subscriber counts; and (e) is meaningful for customers and regulators.
In work leading up to the disclosure, the inventors rejected several possible bandwidth sharing arrangements, including simplistic ones that specify a minimum bandwidth share per-class (as it may be variable with total capacity, and is ambiguous when some classes do not offer sufficient demand), and complex ones (like in IntServ/DiffServ) requiring sophisticated schedulers. Instead, the network bandwidth apportioning system and process described herein use utility functions to optimally partition bandwidth. Specifically, each class of network traffic is associated with a corresponding utility function that represents the “value” of bandwidth to that class, as determined by the ISP. Though utility functions have been discussed in the networking literature, they usually start with the bandwidth “needs” of an application (voice, video or download) stream, and attempt to distribute bandwidth resources to maximally satisfy application needs. By contrast, the network bandwidth apportioning process described herein flips the viewpoint by having the ISP determine the utility function for a class, based on their perceived value of that traffic class in their network. Stated differently, the utility function for each class is a way for the ISP to state how much they value that class at various levels of resourcing. As shown below, the use of utility functions gives ISPs high flexibility to customise their differentiation policy, protects sensitive information, and is simple to implement, while consumers and regulators benefit from open knowledge of the ISP's differentiation policy that they can meaningfully compare and validate.
An optimal partitioning of a resource (aggregate bandwidth in this case) between classes is deemed to be one in which the total utility is maximized. Stated mathematically, let di denote the traffic demand of class-i, and Ui(xi) its utility when allocated bandwidth xi. For a given capacity C, the objective then is to determine the xi that maximize Σi Ui(xi), subject to Σi xi = C and ∀i: xi ≤ di. Methods for determining this numerically are available in the literature—in particular, a simple approach to compute optimal allocations is to take the partial derivative of the utility function, ∂Ui/∂xi, also known as the marginal utility function, and distribute bandwidth amongst the classes such that their marginal utilities are balanced.
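By way of a non-limiting illustration, the balancing of marginal utilities described above can be sketched as follows; the function name `allocate`, the bisection search on the common marginal value, and the example log-form utilities are illustrative assumptions, not part of the disclosure, and the sketch assumes concave per-class utilities whose (decreasing) marginals are invertible:

```python
from typing import Callable, List

def allocate(inv_marginals: List[Callable[[float], float]],
             demands: List[float], capacity: float,
             iters: int = 100) -> List[float]:
    """Split capacity across classes so their marginal utilities are
    balanced, capping each class's allocation at its demand (x_i <= d_i).
    inv_marginals[i](lam) returns the bandwidth x at which class i's
    marginal utility dU_i/dx_i equals lam."""
    if sum(demands) <= capacity:           # uncongested: meet all demand
        return list(demands)

    def total(lam: float) -> float:
        # Bandwidth consumed if every class stops where its marginal
        # utility equals lam (never negative, never above demand).
        return sum(min(d, max(0.0, inv(lam)))
                   for inv, d in zip(inv_marginals, demands))

    lo, hi = 0.0, 1e9                      # bracket the common marginal
    for _ in range(iters):                 # total(lam) decreases in lam
        mid = (lo + hi) / 2
        if total(mid) > capacity:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return [min(d, max(0.0, inv(lam)))
            for inv, d in zip(inv_marginals, demands)]

# Example with U_i(x) = w_i * ln(1 + x): dU/dx = w_i / (1 + x), whose
# inverse is x = w_i / lam - 1.  With weights 3:2:1 and capacity 9,
# the balanced allocation is approximately [5, 3, 1].
weights = [3.0, 2.0, 1.0]
inv = [lambda lam, w=w: w / lam - 1 for w in weights]
print(allocate(inv, demands=[100.0] * 3, capacity=9.0))
```

Any other root-finding method (e.g., Newton iteration) would serve equally; bisection is shown only because it is robust for any decreasing `total(lam)`.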
As described above, the per-class utility function in the described embodiments is defined by the ISP, not by the consumer or the application. This raises the question of how an ISP chooses the utility functions, and how a consumer interprets them. It should be noted that a general feature of the system and process described herein is that many different flows of network traffic are aggregated into each of the classes, which are relatively few in number. For example, in any hour there may be many (typically thousands to several hundred thousand) different network traffic flows, but these are typically aggregated into at most a few tens (e.g., 40) of different classes, more typically at most ten, and in the examples described below only three, corresponding to the three major types of network traffic of most interest to most consumers.
Some simple example policies will first be described. In one example, an ISP wants to implement a pure priority system wherein class-i gets priority over class-j. The ISP can then choose respective utility functions Ui(xi)=aixi and Uj(xj)=ajxj where ai>aj. This ensures that the marginal utility ∂U/∂x is always higher for class-i than for class-j, and class-i's bandwidth demand is therefore always met before class-j receives any allocation.
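By way of a non-limiting illustration, the strict-priority behaviour induced by linear utilities can be sketched as follows (the function name `priority_allocate` is illustrative):

```python
def priority_allocate(a, demands, capacity):
    """Linear utilities U_i(x_i) = a_i * x_i give constant marginal
    utilities, so total utility is maximized by filling classes in
    decreasing order of a_i: a class receives nothing until every
    class with a larger coefficient has had its full demand met."""
    alloc = [0.0] * len(a)
    remaining = capacity
    for i in sorted(range(len(a)), key=lambda i: -a[i]):
        alloc[i] = min(demands[i], remaining)
        remaining -= alloc[i]
    return alloc

# Class 0 (a=2) outranks class 1 (a=1): with 10 units of capacity,
# class 0's demand of 6 is met first, leaving 4 for class 1.
print(priority_allocate([2, 1], demands=[6.0, 6.0], capacity=10.0))
# [6.0, 4.0]
```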
In a second example, the ISP wants to divide bandwidth amongst the classes in a given proportion: for example, browsing gets 30% of bandwidth, video 50%, and downloads 20%. Then the ISP can choose utility functions of the form Ui(xi)=√(aixi), which ensures that the marginal utilities of the classes are balanced when ai/xi is the same for each class, namely when the bandwidth for class-i is proportional to ai.
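By way of a non-limiting numeric check of the proportional-sharing property just described (the function name `proportional_shares` is illustrative):

```python
def proportional_shares(a, capacity):
    """With U_i(x_i) = sqrt(a_i * x_i), the marginal utility is
    0.5 * sqrt(a_i / x_i), so the marginals are balanced exactly when
    x_i is proportional to a_i: each class gets the share a_i/sum(a)
    of the capacity."""
    total = sum(a)
    return [capacity * ai / total for ai in a]

# The 30/50/20 split from the example, on a notional 10-unit link:
print(proportional_shares([30, 50, 20], capacity=10.0))  # [3.0, 5.0, 2.0]
```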
The flexibility of using utility functions as described herein allows the network bandwidth apportioning system and process to accommodate a much wider variety of bandwidth allocation arrangements than the simple examples described above. For example, consider the three traffic classes—browsing, video, and downloads, and develop utility functions that are meaningful to consumers. In order to keep information on provisioned bandwidths (both aggregate and per-consumer) private, the ISP publicly releases a scaled version of these functions, namely one in which the provisioned backhaul capacity is divided by the number of subscribers multiplexed on that link. Using the example of a link (provisioned at say 10-20 Gbps) that serves 5000 subscribers, two candidate sets of scaled utility functions for the mice (m), video (v), and elephant (e) classes are:

Um = 1 − e^(−1.5x); Uv = 1/(1 + e^(−1.3(x−2.0))); Ue = 1 − e^(−0.16x)   (1)

and

Um = 1 − e^(−1.5x); Uv = 1/(1 + e^(−0.5(x−2.0))); Ue = 1 − e^(−0.50x)   (2)
Comparison of the utility functions of Equations (1) and (2) as shown in
An idealized simulator was built to evaluate the impact of the network bandwidth apportioning system and process on user experience. A single link at the BNG 102 that aggregates multiple subscribers over the access network was considered, wherein each traffic flow is classified into one of multiple queues, and bandwidth is partitioned between the classes based on their respective utility functions. Traffic is modelled as a fluid, and the simulation progresses in discrete time slots. In each time slot, each active flow submits its request (i.e., the number of bits it wants transferred in that slot); the requests are aggregated into classes, allocations are made to each class in a way that maximizes overall utility for the given demands, and the bandwidth allocated to each class is shared evenly amongst the active flows in that class.
Each flow implements standard TCP dynamics to adjust its request for the subsequent time slot based on the allocation in the current slot: if the request is fully met, it increases its rate (linearly or exponentially, depending on whether it is in the congestion-avoidance or slow-start phase), whereas if the request is not fully met, it reduces its rate (by half or to one MSS-per-RTT, depending on the degree of congestion determined by whether the allocation is at least half of its request or not). Further, the rate of any flow is limited by its access link capacity. While the fluid simulation model does not fully capture all the packet dynamics and variants of TCP, it captures its essence, and allows the simulation of large workloads quickly and with reasonable accuracy.
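By way of a non-limiting illustration, one time slot of the fluid model described above can be sketched as follows; the `Flow` and `step` names, the unit used for one MSS-per-RTT and for linear growth, and the even per-flow sharing are assumptions drawn from the description, and the per-class allocations are assumed to be computed separately by the utility maximization for that slot:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    rate: float              # bits requested per slot
    access_cap: float        # access-link limit on the request
    slow_start: bool = True  # exponential vs. linear growth

def step(flows_by_class, class_alloc):
    """Advance the fluid model by one slot: share each class's
    allocation evenly among its active flows, then apply the TCP-like
    rate adjustment to every flow."""
    for cls, flows in flows_by_class.items():
        if not flows:
            continue
        share = class_alloc[cls] / len(flows)
        for f in flows:
            request = min(f.rate, f.access_cap)
            granted = min(request, share)
            if granted >= request:
                # Fully met: slow start doubles the rate, congestion
                # avoidance grows it linearly (one unit per slot here).
                f.rate = f.rate * 2 if f.slow_start else f.rate + 1.0
            elif granted >= request / 2:
                # Mild congestion (at least half the request met):
                # halve the rate and leave slow start.
                f.rate, f.slow_start = f.rate / 2, False
            else:
                # Severe congestion: fall back to one MSS per RTT
                # (one unit per slot in this abstraction).
                f.rate, f.slow_start = 1.0, False
```

A driver loop would call `step` once per slot after recomputing `class_alloc` from the aggregated per-class demands.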
The simulation parameters are adjusted using the graphical user interface (GUI) shown in
The following three metrics were used to quantify user experience: page-load time, also referred to as ‘average flow completion time’ (“AFCT”) in seconds for browsing flows; playback stalls (in seconds per minute) for streaming video flows; and mean rate (in Mbps) for elephant/download flows. These are displayed continuously by the simulation process via the user interface shown in
In the described embodiment, the network bandwidth apportioning process is implemented as executable instructions of software components or modules 1824, 1826, 1828 stored on non-volatile storage 1804, such as a solid-state memory drive (SSD) or hard disk drive (HDD), of a data processing component, as shown in
In the described embodiment, the data processing system includes random access memory (RAM) 1806, at least one processor 1808, and external interfaces 1810, 1812, 1814, all interconnected by at least one bus 1816. The external interfaces include at least one network interface connector (NIC) 1812 which connects the data processing system to the SDN switch, and may include universal serial bus (USB) interfaces 1810, at least one of which may be connected to a keyboard 1818 and a pointing device such as a mouse 1819, and a display adapter 1814, which may be connected to a display device such as a panel display 1822.
The data processing system also includes an operating system 1824 such as Linux or Microsoft Windows, and an SDN or ‘flow rule’ controller 1830 such as the Ryu framework, available from http://osrg.github.io/ryu/. Although the software components 1824, 1826, 1828 and the flow rule controller 1830 are shown as being hosted on a single operating system 1824 and hardware platform, it will be apparent to those skilled in the art that in other embodiments the flow rule controller may be hosted on a separate virtual machine or hardware platform with a separate operating system.
The software components 1824, 1826, 1828 were written in the Go programming language and are as follows:
Unfortunately, the NoviSwitch 2116 SDN switch only allows its queue rates to be modified in steps of 10 Mbps. Consequently, a simple utility curve with a square-root function (i.e., U(x)=k√x) was employed, so that the bandwidth allocations become proportional to k². For example, if an ISP wants to allocate fixed fractions rm, rv, re of the capacity to the three classes, the corresponding parameters k become √rm, √rv, √re respectively.
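By way of a non-limiting illustration, the queue-rate computation under this quantisation constraint can be sketched as follows (the function name is illustrative, and note that rounding each class independently may make the rates not sum exactly to the link capacity):

```python
def quantised_queue_rates(fractions, capacity_mbps, step_mbps=10):
    """With square-root utilities U_i(x) = k_i * sqrt(x) and
    k_i = sqrt(r_i), the optimal allocation to class i is the fraction
    r_i of capacity; round each rate to the switch's step size."""
    return [round(capacity_mbps * r / step_mbps) * step_mbps
            for r in fractions]

# A 25:50:25 split of an 80 Mbps link, quantised to 10 Mbps steps:
print(quantised_queue_rates([0.25, 0.5, 0.25], 80))  # [20, 40, 20]
```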
Three scenarios were tested, namely: a neutral ISP, a video-friendly ISP, and an elephant-friendly ISP, with each run lasting for 100 seconds. In all tests, the network traffic was generated so that computers A, B, and C respectively emulate browsing-heavy, download-heavy, and video-heavy subscribers. At time 1s, mice flows begin on A. At 10s, computer B starts four downloads (that run concurrently until 80s). The traffic mix remains elephants and mice until 30s, when computer C plays a couple of 4K videos on YouTube until 90s.
The performance of video flows (in terms of average buffer health) is shown in
Lastly, elephants perform the best in the neutral scenario, causing mice and videos to suffer, as shown in the graph of average download speed of
With bandwidth held at 80 Mbps, the next experiment uses the network bandwidth apportioning system and process described herein, with utility curves tuned to achieve weighted priorities in the ratio of 25:50:25 for browsing, video, and downloads, respectively. It is now observed that web page load time reduces to 0.34 seconds, the YouTube 4K stream takes 60 seconds to fill its buffers, and the Netflix stream is now able to operate at 720p and takes only 10 seconds to fill its buffers—these performance improvements come at the cost of reducing average download speeds to 20 Mbps. For the final experiment, the utility functions were configured to prioritise video over browsing, and browsing over downloads. In this case, web page load times average 0.38 seconds, YouTube and Netflix take only 10 and 5 seconds respectively to fill their buffers, and downloads are throttled to 15 Mbps. These experiments confirm that the described network bandwidth apportioning system and process can be tuned to greatly enhance performance for browsing and video streams while reducing the aggregate bandwidth requirement, thereby improving user experience while reducing bandwidth costs.
Many modifications will be apparent to those skilled in the art without departing from the scope of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
2019900655 | Feb 2019 | AU | national |
This patent application is a national stage application of PCT/AU2020/050183, filed on Feb. 28, 2020, which claims priority to and the benefit of Australian Patent Application No. 2019900655, filed on Feb. 28, 2019, the entire contents of which are incorporated herein by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/AU2020/050183 | 2/28/2020 | WO | 00 |