The disclosure relates to apparatuses and methods for predicting a metric of quality of a network, in particular a telecommunications network.
Monitoring metrics of a quality of a network, especially a telecommunications network, is a task of the utmost importance in order to ensure customer satisfaction and that the quality of the network is as advertised.
Some metrics cannot be efficiently monitored for every user at the same time. One such metric is a speedtest rate or a speedtest success. A speedtest measures the time it takes to download a certain amount of data from the Internet. Such a measurement cannot be conducted on every user of a telecommunications network at the same time, because it generates real traffic on the telecommunications network and risks saturating it.
One solution is to resort to a heuristic approach, by provisioning a level of capacity in the network such that most speedtests succeed. However, such an approach wastes capacity and still does not ensure that users receive the appropriate quality of service in some extreme scenarios.
In some example embodiments, the disclosure provides an apparatus for building a prediction model adapted to predict a value of a metric of a quality of a point-to-multipoint telecommunications network.
The apparatus may comprise means for:
In some example embodiments, the disclosure also provides a method for building a prediction model adapted to predict a value of a metric of a quality of a point-to-multipoint telecommunications network, the method comprising the steps of:
According to embodiments, such an apparatus or a method may comprise one or more of the features below.
In an embodiment, the plurality of high-paced telemetry traffic measurement values is retrieved from a minority of users of the point-to-multipoint telecommunications network.
In an embodiment, the descriptor of at least one network configuration comprises at least one value relating to a dynamic bandwidth management system.
In an embodiment, the at least one simulated value of the metric of the quality of the point-to-multipoint telecommunications network comprises a plurality of simulated values of the metric of the quality of the point-to-multipoint telecommunications network, wherein the metric of the quality of the point-to-multipoint telecommunications network is selected from the group consisting of data rates, latencies and speedtest results.
In an embodiment, the apparatus further comprises means for:
The at least one simulated value of the metric of the quality of the point-to-multipoint telecommunications network may be computed using one or more of the plurality of cluster traffic models and the descriptor of one or more network configurations.
In an embodiment, the at least one traffic model comprises a Discrete Auto Regressive model.
In an embodiment, the at least one network configuration comprises a plurality of network configurations and the at least one simulated value of the metric of the quality of the point-to-multipoint telecommunications network comprises a plurality of simulated values of the metric of the quality of the point-to-multipoint telecommunications network.
In an embodiment, the prediction model comprises a classifier and/or a regression model.
In an embodiment, the prediction model computes a probability of success of a speedtest and/or a speedtest rate.
In an embodiment, the point-to-multipoint telecommunications network is a Passive Optical Network.
According to an embodiment, an apparatus for building a prediction model adapted to predict a value of a metric of a quality of a point-to-multipoint telecommunications network, comprises:
According to an embodiment, an apparatus for building a prediction model adapted to predict a value of a metric of a quality of a point-to-multipoint telecommunications network, comprises at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to perform:
In some example embodiments, the disclosure also provides an apparatus for predicting a value of a metric of a quality of a point-to-multipoint telecommunications network, the apparatus comprising means for:
In some example embodiments, the disclosure also provides a method for predicting a value of a metric of a quality of a point-to-multipoint telecommunications network, the method comprising the steps of:
In an embodiment, an apparatus for predicting a value of a metric of a quality of a point-to-multipoint telecommunications network, comprises:
In an embodiment, an apparatus for predicting a value of a metric of a quality of a point-to-multipoint telecommunications network, comprises at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to perform:
These and other aspects of the invention will be apparent from and elucidated with reference to example embodiments described hereinafter, by way of example, with reference to the drawings.
With reference to
The telemetry data stream 3 may be slow-paced or high-paced. According to one embodiment, the telemetry data stream 3 transmits new data points with a period of five minutes. However, the telemetry data stream 3 may be high-paced and may transmit new data points with a period of five to ten seconds.
The remote infrastructure 2 may process the telemetry data stream 3 in order to estimate or compute one or several metrics pertaining to a quality of service in the telecommunications network 1.
The telecommunications network 1 comprises a shared medium serving a plurality of users. The telecommunications network 1 may be a fiber-optic communication network, a radio network, a coaxial network or a mobile network. The shared medium of the telecommunications network may include an optical fiber, a radio link, a wavelength channel, a frequency channel, a coaxial cable and so on.
With reference to
A PON scheduler 11 distributes a bandwidth between the first slice and the second slice, using static weights or dynamic weights. Dynamic weights may be computed by a Dynamic Bandwidth Management system.
The first slice comprises a first slice traffic shaper 12, the first slice traffic shaper 12 being configured to limit a bitrate in the first slice. A first slice scheduler 14 is configured to distribute the bandwidth between a plurality of first optical connections 161, . . . , 16k.
The plurality of first optical connections 161, . . . , 16k connects the first slice scheduler 14 to a plurality of first endpoints 201, . . . , 20k, wherein each of the plurality of first endpoints 201, . . . , 20k corresponds to a user.
A plurality of first connection traffic shapers 181, . . . , 18k is placed on the plurality of first optical connections 161, . . . , 16k. The plurality of first connection traffic shapers 181, . . . , 18k is configured to set a maximum bitrate to each of the plurality of first optical connections 161, . . . , 16k, the maximum bitrate not being necessarily identical across the plurality of first optical connections 161, . . . , 16k.
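The connection traffic shapers described above can be illustrated by a token-bucket model, a common way of capping a connection at a maximum bitrate. The following is a minimal illustrative sketch, not the disclosed implementation; the class and parameter names are hypothetical.

```python
class TokenBucketShaper:
    """Toy traffic shaper capping one connection at a maximum rate.

    Hypothetical sketch: `max_rate` is the sustained limit in bytes
    per second, `burst` the bucket depth in bytes.
    """

    def __init__(self, max_rate, burst):
        self.max_rate = max_rate
        self.burst = burst
        self.tokens = burst  # bucket starts full

    def tick(self, dt):
        # Refill tokens for `dt` elapsed seconds, capped at bucket depth.
        self.tokens = min(self.burst, self.tokens + self.max_rate * dt)

    def send(self, nbytes):
        # Let through only as many bytes as there are tokens.
        allowed = min(nbytes, self.tokens)
        self.tokens -= allowed
        return allowed
```

For example, a shaper built with `TokenBucketShaper(1000, 500)` lets an initial burst of 500 bytes through, then throttles further traffic to the refill rate.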
The second slice comprises a second slice traffic shaper 13, the second slice traffic shaper 13 being configured to limit a bitrate in the second slice. The second slice scheduler 15 is configured to distribute the bandwidth between a plurality of second optical connections 171, . . . , 17k. A number of second optical connections may or may not be identical to a number of first optical connections.
The plurality of second optical connections 171, . . . , 17k connects the second slice scheduler 15 to a plurality of second endpoints 211, . . . , 21k, wherein each of the plurality of second endpoints 211, . . . , 21k corresponds to a user.
A plurality of second connection traffic shapers 191, . . . , 19k is placed on the plurality of second optical connections 171, . . . , 17k. The plurality of second connection traffic shapers 191, . . . , 19k is configured to set a maximum bitrate to each of the plurality of second optical connections 171, . . . , 17k, the maximum bitrate not being necessarily identical across the plurality of second optical connections 171, . . . , 17k.
The PON scheduler 11, the first slice scheduler 14 and the second slice scheduler 15 may rely on static weights and/or dynamic weights. Dynamic weights may be computed by the Dynamic Bandwidth Management system. The PON scheduler 11, the first slice scheduler 14 and the second slice scheduler 15 may resort to fair-queue algorithms, such as the Generalized Processor Sharing algorithm.
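The weighted fair sharing performed by such schedulers can be sketched as a water-filling allocation: flows demanding less than their weighted fair share keep their demand, and the leftover capacity is redistributed among the remaining flows. This is an illustrative approximation of fluid fair queuing, not the disclosed scheduler; all names are hypothetical.

```python
def fair_share(capacity, demands, weights):
    """Weighted max-min fair allocation of `capacity` among flows.

    Illustrative water-filling sketch: satisfied flows keep their
    demand, the rest split the remainder in proportion to weight.
    """
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    while active and capacity > 1e-12:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        # Flows whose demand fits within their share are satisfied.
        done = {i for i in active if demands[i] <= share[i] + 1e-12}
        if not done:
            for i in active:
                alloc[i] = share[i]
            return alloc
        for i in done:
            alloc[i] = demands[i]
            capacity -= demands[i]
        active -= done
    return alloc
```

For instance, three equally weighted flows demanding 2, 9 and 9 units on a link of capacity 10 receive 2, 4 and 4 respectively.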
Since the telecommunications network comprises a shared medium, a metric of quality of service may be measured for individual users of the telecommunications network. The metric of quality of service may be a data rate or a latency for one or more individual users of the telecommunications network, for example.
The metric of quality of service may be measured instantaneously or averaged over a chosen length of time. The metric of quality of service may also be measured for an individual user or averaged over a group of users.
Instead of measuring the metric of quality of service, a prediction model may be used within a prediction apparatus. The prediction apparatus is configured to predict values of the metric of quality of service in real-time for a large number of users at the same time, whereas a measurement of the metric of quality of service is sometimes not scalable.
With reference to
A speedtest measures a data rate on a connection of an individual user to the telecommunications network by downloading a certain amount of data from a server. The speedtest is deemed successful or not depending on the download speed.
Speedtests are a widespread metric of the quality of the connection to the telecommunications network. A common criterion for a satisfactory quality of the connection to the telecommunications network overall is that 80% of an advertised data rate is available 80% of the time. Any other percentages may be considered. However, speedtests are not scalable and cannot be carried out on every user of the telecommunications network at the same time. Thus, the speedtest prediction apparatus 100 may help estimate speedtest results for every user of the telecommunications network 1.
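The "80% of the advertised rate, 80% of the time" criterion above can be expressed directly as a check over a series of measured rates. The following is a minimal sketch with hypothetical names; the fractions are parameters, as any other percentages may be considered.

```python
def meets_criterion(measured_rates, advertised_rate,
                    rate_fraction=0.8, time_fraction=0.8):
    """Check whether a fraction `rate_fraction` of the advertised
    rate is available a fraction `time_fraction` of the time.

    `measured_rates` is a sequence of data rates sampled at regular
    intervals (hypothetical input, for illustration only).
    """
    threshold = rate_fraction * advertised_rate
    ok = sum(1 for r in measured_rates if r >= threshold)
    return ok / len(measured_rates) >= time_fraction
```

For example, with an advertised rate of 100 Mb/s, five samples of which four reach at least 80 Mb/s satisfy the criterion, while three out of five do not.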
The speedtest prediction apparatus 100 comprises a model-building module 30 and an inference module 31. The model-building module 30 may be offline and is configured to train a speedtest prediction model 7. The inference module 31 is an online module and is configured to predict values 9 of a speedtest for a plurality of users, using the speedtest prediction model 7 and a telemetry data stream 8.
The model-building module 30 receives a high-paced telemetry dataset 4. The high-paced telemetry dataset 4 is a set of pre-registered time series of traffic measurements over time (i.e. bitrates), each of the pre-registered time series corresponding to an individual user among the users.
The pre-registered time series may have different lengths or starting times and may have a duration ranging from a day to approximately a week. The pre-registered time series are sampled with a high frequency. According to an embodiment, a sampling period is five seconds.
The minority of users is chosen to be representative of the entirety of users of the network. This may be done by studying metadata associated with the users. A sample size of the minority of users is chosen so that the high-paced telemetry dataset 4 is statistically significant.
However, the sample size is still very small compared to an overall number of users of the telecommunications network 1. The minority of users may represent from one hundredth to one thousandth of the users of the telecommunications network 1, or from a few hundred to a few thousand users.
The model-building module 30 comprises a traffic modelling module 32, a simulation module 33 and a training module 34.
The traffic modelling module 32 receives the high-paced telemetry dataset 4 and outputs at least one traffic model 5. The traffic model 5 is a mathematical model fitted on the high-paced telemetry dataset 4.
The traffic modelling module 32 may fit a Discrete Auto-Regressive model to the high-paced telemetry dataset 4 using the Yule-Walker equations. The traffic model 5 may also be based on generative models such as auto-encoders or Generative Adversarial Networks. Finally, known parametric models may be used, such as Hidden Markov models and Fractional Gaussian Noise models.
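The Yule-Walker estimation mentioned above solves a small linear system built from the sample autocovariances of the traffic trace. The sketch below fits a continuous AR(p) model with NumPy as an illustration of the principle; a true Discrete Auto-Regressive model on quantized rate levels relies on the same autocorrelation-based estimation. All names are hypothetical.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(`order`) coefficients of series `x` via the
    Yule-Walker equations (illustrative sketch)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R a = r[1:], with R[i, j] = r[|i - j|].
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:])
```

Fitted on a synthetic AR(1) trace with coefficient 0.8, the estimator recovers a value close to 0.8.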
The traffic model 5 is then transmitted and integrated into the simulation module 33. The simulation module 33 is configured to generate synthetic data traces using the traffic model and to compute simulated speedtest results under different starting conditions.
A synthetic traffic dataset 6 comprises the simulated speedtest results associated with the different starting conditions. The synthetic traffic dataset 6 is then transmitted to the training module 34.
The training module 34 trains the speedtest prediction model 7 using the synthetic traffic dataset 6, so that the speedtest prediction model 7 learns to predict speedtest values 9 under various starting conditions.
The speedtest prediction model 7 may be a classification model, for example a Support Vector Machine. Alternatively, other classification algorithms may be used in order to classify the speedtest as either a success or a failure. For example, Naïve Bayes, logistic regression or a neural network classifier may be used.
The speedtest prediction model 7 may also be a regression model, for example an isotonic regression. Alternatively, other regression algorithms may be used in order to predict a speedtest rate. For example, a linear regression model, a decision tree regression model or a neural network regression model may be used.
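As an illustration of the classifier option, a speedtest success/failure predictor can be trained with a minimal logistic-regression sketch on synthetic features, in lieu of the Support Vector Machine or neural network mentioned above. This is a hypothetical stand-in, not the disclosed training module; feature and function names are assumptions.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Minimal logistic-regression trainer (batch gradient descent).

    X: feature matrix (e.g. slow-paced telemetry features per user),
    y: 1.0 for a successful simulated speedtest, 0.0 otherwise.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # predicted success prob.
        w -= lr * Xb.T @ (p - y) / len(y)      # cross-entropy gradient
    return w

def predict_success(w, X):
    """Classify each row as speedtest success (True) or failure."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5
```

On a toy one-feature dataset where low values fail and high values succeed, the trained model separates the two classes.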
With reference to
In this embodiment, the high-paced telemetry dataset 4 comprises a plurality of high-paced telemetry time-series 401, . . . , 40n. A plurality of elementary segments 4011, . . . , 401l is extracted from the plurality of high-paced telemetry time-series 401, . . . , 40n.
According to an embodiment, the plurality of elementary segments 4011, . . . , 401l corresponds to a data consumption profile associated with a certain activity (such as working, streaming, etc.).
The plurality of elementary segments 4011, . . . , 401l is collected across the plurality of high-paced telemetry time-series 401, . . . , 40n and may be a subset of the plurality of high-paced telemetry time-series 401, . . . , 40n. A number of elementary segments extracted from each of the plurality of high-paced telemetry time-series 401, . . . , 40n may vary.
The elementary segments all have the same duration (e.g. 15 minutes). They are taken over the entire traffic trace except when the traffic is very low (typically during the night).
The plurality of elementary segments 4011, . . . , 401l is then grouped into a plurality of clusters 3221, . . . , 322m using usual time-series clustering techniques.
Dimensionality reduction algorithms may be used, for example Principal Components Analysis.
Clustering algorithms may be used, for example K-means, Local Outlier Factor or other usual clustering algorithms adapted for time series. Dynamic time warping or Euclidean matching may be used as distance measures.
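The grouping of equal-length elementary segments into clusters can be sketched with a plain K-means over the raw samples, using Euclidean matching as the distance measure. This is a simplified stand-in for the dimensionality-reduction and time-series techniques listed above; names are hypothetical.

```python
import numpy as np

def kmeans_segments(segments, k, iters=50, seed=0):
    """Plain K-means over equal-length traffic segments (one row each).

    Returns a cluster label per segment and the cluster centres
    (illustrative sketch, Euclidean distance only).
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(segments, dtype=float)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each segment to its nearest centre (Euclidean matching).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned segments.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

On a toy dataset with two obvious consumption profiles (near-idle versus sustained traffic), the sketch separates them into two clusters.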
A plurality of elementary traffic model-fitting modules 3231, . . . , 323m fits a model on each of the plurality of clusters 3221, . . . , 322m. A plurality of elementary traffic models 3241, . . . , 324m is then transmitted to the simulation module 33 for data synthesis, the simulation module 33 outputting the synthetic traffic dataset 6.
With reference to
With reference to
With reference to
A simulation environment 332 integrates the traffic model 5 and performs a plurality of network simulations, e.g. a very large number of network simulations, which may be hundreds of thousands. An initializing module 331 initializes a set of configuration parameters 334 randomly and transmits the set of configuration parameters 334 to the simulation environment 332.
The set of configuration parameters 334 may comprise:
The traffic model 5 is used to generate synthetic data for the virtual users in each of the network simulations. The network simulations are transient and high-paced, with a timestep similar to the period of the high-paced telemetry data, for example five seconds. The timestep is lower than a typical speedtest duration, which is approximately thirty seconds.
For each of the network simulations, a simulated network is initialized. The simulated network may comprise at least one traffic source, at least one traffic shaper and at least one fair queue scheduler, wherein the traffic source generates an in-flow through the simulated network. The simulated network is initialized with the set of configuration parameters 334.
A simulated duration in the simulated network may range from a few minutes to several hours, so that relevant activity patterns may be witnessed. A virtual speedtest user is designated among the virtual users.
According to an embodiment, a plurality of virtual speedtest users is designated at once. According to an embodiment, the virtual speedtest user or the plurality of virtual speedtest users is chosen at random.
Each network simulation generates synthetic high-paced traffic traces for each virtual user and derives simulated speedtest values 335 for each virtual speedtest user. Simulated speedtest values 335 are computed and transmitted to a dataset-building module 333.
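One such network simulation can be sketched as follows: background virtual users draw synthetic traffic from a stand-in on/off traffic model, the shared link is split between users, and the mean rate achieved by the designated virtual speedtest user over the test window gives the simulated speedtest value. This is a deliberately simplified toy, not the disclosed simulation environment; every name and parameter is hypothetical.

```python
import random

def simulate_speedtest(capacity, n_users, p_active, user_rate,
                       test_rate, steps=6, seed=0):
    """Toy transient simulation of one speedtest on a shared link.

    Background users are on/off sources (active with probability
    `p_active`, drawing `user_rate` when active); the speedtest user
    always demands `test_rate`. With 5 s steps, `steps=6` covers a
    typical 30 s test. Returns the speedtest user's mean rate.
    """
    rng = random.Random(seed)
    achieved = []
    for _ in range(steps):
        active = sum(rng.random() < p_active for _ in range(n_users))
        background = active * user_rate
        # Residual capacity goes to the speedtest user, floored at an
        # equal share of the link when background traffic saturates it.
        leftover = max(capacity - background, capacity / (active + 1))
        achieved.append(min(test_rate, leftover))
    return sum(achieved) / steps
```

On an idle link the speedtest user reaches its full demanded rate; on a saturated link it falls back to its equal share of the capacity.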
The simulation environment 332 also produces slow-paced telemetry information 336 by performing moving averages on the synthetic high-paced traffic traces generated for each virtual user in each simulation. This telemetry information is similar to the slow-paced telemetry stream 81 that will be fed to the inference module 31. It can consist either of individual traffic streams (one stream per virtual user) or of a single aggregated traffic stream (the sum of all the virtual users' traffic streams in the simulated network).
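Deriving slow-paced telemetry from the high-paced synthetic traces can be sketched as a block-average downsampler; with five-second samples, a window of 60 samples yields one five-minute data point. A minimal sketch with hypothetical names:

```python
def downsample(trace, window):
    """Average `trace` over consecutive blocks of `window` samples,
    dropping any incomplete trailing block (illustrative sketch)."""
    n = len(trace) // window
    return [sum(trace[i * window:(i + 1) * window]) / window
            for i in range(n)]
```

For example, averaging a six-sample trace in blocks of three collapses it to two slow-paced points.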
The dataset-building module 333 receives the simulated speedtest values 335, the slow-paced telemetry information 336 and the set of configuration parameters 334 for a plurality of network simulations and outputs the synthetic traffic dataset 6. The synthetic traffic dataset 6 comprises the configuration parameters 334, simulated speedtest values 335 and slow-paced telemetry information 336 resulting from the network simulations.
According to an embodiment, the dataset-building module 333 outputs a plurality of datasets. For example, the simulated speedtest values may be grouped within a first dataset and the plurality of sets of configuration parameters 334 may be grouped within a second dataset. With reference to
According to an embodiment, the inference module 31 receives a slow-paced telemetry data stream 81 and network data 82. The network data 82 may be provided by a network controller and may comprise a configuration of the network (a number of slices, subscribers, traffic schedulers and traffic shapers), a maximum bitrate per subscriber and a configuration of the Dynamic Bandwidth Management system.
The inference module 31 is configured to predict speedtest values 9 for all users of the telecommunications network 1 without actually performing a speedtest. Predictions are carried out with a period of less than five minutes. Thus, the network controller may deduce whether the quality of service is satisfactory. The network controller may use the prediction to change a configuration of the Dynamic Bandwidth Management system.
With reference to
The network controller 260 communicates with an access node 270. The access node 270 is a piece of data communication equipment. According to an embodiment, the access node 270 may be an SDN access node.
The access node 270 retrieves a high-paced telemetry dataset 271 and a slow-paced telemetry data stream 281 from the telecommunications network 1.
The access node 270 transmits the high-paced telemetry dataset 271 and the slow-paced telemetry data stream 281 to the speedtest prediction apparatus 200. The speedtest prediction apparatus 200 comprises a speedtest prediction model, the speedtest prediction model being trained on the high-paced telemetry dataset 271.
The speedtest prediction model, once trained, predicts speedtest values for users of the telecommunications network. The speedtest prediction model takes as input the slow-paced telemetry data stream 281.
The network controller 260 also comprises a network inventory 287 and optionally a Dynamic Bandwidth Management system 288. The network inventory 287 stores network topology data and service-level agreement data. The network topology data and the service-level agreement data may be transmitted to the speedtest prediction apparatus for training and/or inference. The Dynamic Bandwidth Management system 288 may transmit bandwidth management data (such as scheduler weights) to the speedtest prediction apparatus 200.
The speedtest prediction apparatus 200 transmits predictions to a management user interface 290 to display results of the speedtest prediction across the telecommunications network 1 and to change values of parameters of the telecommunications network 1.
The speedtest prediction apparatus 200 may also transmit predictions to the Dynamic Bandwidth Management system 288, so that the Dynamic Bandwidth Management system 288 may update bandwidth management data.
According to another embodiment, represented on
The model-building module 330 outputs a trained speedtest prediction model 307, the trained speedtest prediction model 307 being then integrated within an inference module 331. The inference module 331 is integrated within the network controller 360.
The access node 370 also transmits a slow-paced telemetry data stream 381 to the inference module 331.
The network controller 360 also comprises a network inventory 387 and optionally a Dynamic Bandwidth Management system 388. The network inventory 387 may transmit network topology data and service-level agreement data to the inference module 331 for inference. The Dynamic Bandwidth Management system 388 may transmit bandwidth management data (such as scheduler weights) to the inference module 331. The inference module 331 may transmit speedtest predictions to a management user interface 390.
The speedtest prediction model 307 may be used as an off-the-shelf solution and may be deployed across several telecommunications networks. The speedtest prediction model 307 may be retrained and fine-tuned for a given network configuration.
The invention is not limited to the described example embodiments. The appended claims are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art, and which fairly fall within the basic teaching as set forth herein.
As used in this application, the term “circuitry” may refer to one or more or all of the following:
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
Elements such as the apparatus and its components could be or include e.g. hardware means like e.g. an Application-Specific Integrated Circuit (ASIC), or a combination of hardware and software means, e.g. an ASIC and a Field-Programmable Gate Array (FPGA), or at least one microprocessor and at least one memory with software modules located therein, e.g. a programmed computer.
The use of the verb “to comprise” or “to include” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. Furthermore, the use of the article “a” or “an” preceding an element or step does not exclude the presence of a plurality of such elements or steps. The example embodiments may be implemented by means of hardware as well as software. The same item of hardware may represent several “means”.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the scope of the claims.
Number | Date | Country | Kind
---|---|---|---
22192130.7 | Aug 2022 | EP | regional