The present disclosure relates to wireless networks.
Accurate characterization of application performance over wireless networks is a key component of optimizing wireless network operations. The characterization of application performance allows for the detection of problems that matter to end users of the wireless network, as well as confirmation that those problems are addressed once the wireless network has been optimized. While network traffic throughput is an acceptable indicator of application performance for many application types, network traffic throughput is insufficient for interactive applications, such as video conferencing, where loss, delay, and jitter typically have a greater negative effect on the performance of the interactive applications. Therefore, in order to more accurately optimize wireless networks, it is desirable to determine application performance from the observable network states that are available in access points, without generating probing traffic that requires additional infrastructure at run time, and without tracking live traffic from a specific application.
Techniques presented herein relate to optimizing network performance based on performance of an application program running on a client device that communicates traffic through a network device, even when the application program is not actively communicating traffic through the network device. The network device may collect a training dataset representing one or more states of the network device deployed in a network. The network device may train a first model disposed within the network device with the training dataset. The first model may be trained to generate one or more fabricated attributes of artificial network traffic through the network device. The network device may also train a second model disposed within the network device with the training dataset. The second model may be trained to generate a predictive experience metric that represents a predicted performance of an application program of a client device that is connected to the network device and is communicating traffic via the network device. The network device may then generate the one or more fabricated attributes based on the training of the first model. Furthermore, the network device may generate the predictive experience metric based on the training of the second model and using the one or more fabricated attributes. The network device may then alter one or more configurations of the network based on the predictive experience metric.
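The two-model pipeline above can be sketched as follows. This is a minimal illustrative sketch only: the class and function names are hypothetical, the "generator" merely perturbs observed samples rather than being a trained neural network, and the MOS-like scoring formula is an invented stand-in, not a method from this disclosure.

```python
import random

class NetworkGenerativeModel:
    """Stand-in for the first model: fabricates packet attributes that
    resemble the training dataset (here by small random perturbation of
    observed samples rather than a trained generator)."""
    def __init__(self, dataset):
        self.dataset = dataset  # list of (loss, delay, jitter) tuples

    def generate(self, n):
        return [tuple(v + random.gauss(0.0, 0.001) for v in random.choice(self.dataset))
                for _ in range(n)]

class ApplicationModel:
    """Stand-in for the second model: maps packet attributes to a
    MOS-like predictive experience metric on the 1.0-5.0 scale."""
    def predict(self, attrs):
        loss, delay, jitter = attrs
        # Toy score: degrade from 5.0 as impairments grow (illustrative only).
        return min(5.0, max(1.0, 5.0 - 20 * loss - 10 * delay - 15 * jitter))

def predictive_metrics(training_dataset, n_fabricated=100):
    """Fabricate attributes with the first model, then score each
    fabricated attribute set with the second model."""
    generator = NetworkGenerativeModel(training_dataset)
    app = ApplicationModel()
    return [app.predict(a) for a in generator.generate(n_fabricated)]
```

Note that no live application traffic is consumed: the second model is driven entirely by the fabricated attributes from the first.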
Reference is first made to
The network environment 10 further includes a central server 60. More specifically, the APs 40(1)-40(K) connect to LAN 20 via the routers 30(1)-30(N), and the server 60 is connected to the LAN 20. The server 60 may include a software defined networking (SDN) controller 62 that enables the server 60 to control functions for the APs 40(1)-40(K) and the clients 50(1)-50(M) that are connected to network 20 via the APs 40(1)-40(K). The SDN controller 62 of the server 60 may, among other things, enable the server 60 to perform location functions to track the locations of clients 50(1)-50(M) based on data gathered from signals received at one or more APs 40(1)-40(K). Furthermore, the server 60 may be a physical device or apparatus, or may be a process running in the cloud/datacenter.
With reference to
As further illustrated in
It is known that network devices may utilize a single model to generate a predictive experience metric in order to optimize a network, where the single model models together the interactions of the network device and the applications. To do so, the network devices contain a single model to represent each network device type and specific application program combination. As the number increases for both the types of network devices (e.g., different types of APs) and the different application programs, the number of single network models greatly increases. Furthermore, when a new network device is introduced, or when the software of an application program changes (e.g., an update is issued for the application program), the entire model would need to be retrained. This can result in a significant amount of downtime for generating predictive experience metrics. However, the architecture of the system 100 described above with reference to
Turning to
As further illustrated in
Xfabricated=G(C, Znoise),
where Xfabricated 240 is the fabricated network packet attributes of a network packet, C is a training dataset of conditional classes that may include fabricated conditional classes Cfabricated and the conditional classes Cclass 210 of the training dataset 130, and Znoise 250 is a noise variable. The parameters of the generator model 220 are updated so that the Xfabricated 240 attributes output by the generator model 220 (given Znoise 250 as a random input) cannot be distinguished from actual network packet attributes Xreal(data) by the discriminator model 230. Moreover, Znoise 250 may be determined independently of the data of the training dataset 130.
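The mapping Xfabricated=G(C, Znoise) can be sketched as a toy conditional generator. This is purely illustrative: a real generator model 220 would be a neural network, whereas here `weights` is a stand-in for learned parameters and the map is linear.

```python
def generator(condition, noise, weights):
    """Toy conditional generator Xfabricated = G(C, Znoise): a linear map
    over the concatenated condition and noise vectors. `weights` stands
    in for the generator model's learned parameters."""
    inputs = list(condition) + list(noise)
    # One output attribute per weight row (e.g., loss, delay, jitter).
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]
```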
The discriminator model 230 is configured to analyze network packet attributes for a series of network packets and provide a probability distribution of an identified source S for each set of network packet attributes, as well as a probability distribution of the class condition C. Thus, discriminator model 230 may use the formula:
P(S|X), P(C|X)=D(X),
where P is the probability assigned by the discriminator model 230, S is the source of the analyzed network packet attributes, and X is a dataset that includes both Xfabricated and Xreal(data). More specifically, the discriminator 230 may be trained to maximize the log-likelihood that it assigns the correct source and class to the analyzed packet attributes as follows:
LS=E[log P(S=real|Xreal(data))]+E[log P(S=fabricated|Xfabricated)],
LC=E[log P(C=Cclass|Xreal(data))]+E[log P(C=Cfabricated|Xfabricated)],
where LS is the log-likelihood of the correct source, and LC is the log-likelihood of the correct conditional class. The discriminator model 230 is trained to maximize the value of LS+LC, while the generator model 220 is trained to maximize the value of LC−LS. This architecture of the AC-GAN network generative model 110 permits the network generative model to separate large datasets of network traffic into subsets by conditional class, and then train the generator model 220 and the discriminator model 230 for each of the conditional class subsets.
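The two log-likelihoods and the opposing objectives can be sketched numerically. This toy version collapses each expectation to a single sample: each argument is the probability the discriminator assigns to the correct source or class of one real and one fabricated sample.

```python
import math

def log_likelihoods(p_src_real, p_cls_real, p_src_fab, p_cls_fab):
    """Single-sample versions of LS and LC from the formulas above."""
    LS = math.log(p_src_real) + math.log(p_src_fab)
    LC = math.log(p_cls_real) + math.log(p_cls_fab)
    return LS, LC

def discriminator_objective(LS, LC):
    return LS + LC   # discriminator model 230 maximizes this

def generator_objective(LS, LC):
    return LC - LS   # generator model 220 maximizes this
```

Intuitively, when the generator fools the discriminator on source (p_src_fab near chance) while the class remains recoverable, LC−LS grows, which is exactly what the generator is rewarded for.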
While the embodiment of the network generative model 110 illustrated in
With reference to
At 320, the discriminator model 230 analyzes the dataset of network packet attributes, and assigns to each analyzed network packet attribute a probability that the analyzed network packet attribute is a real attribute of an actual network packet or is a fabricated attribute of an artificial network packet. In other words, at 320, the discriminator model 230 determines whether an analyzed network packet attribute was observed and/or collected by the network device or was generated by the generator model 220. At 325, after analyzing the dataset of both real and fabricated network packet attributes 200, 240, the discriminator model 230 is trained so that it more correctly identifies which network packet attributes are from the training dataset 130 and which network packet attributes were fabricated by the generator model 220. In other words, the parameters of the discriminator model 230 are updated so that the discriminator model 230 accurately identifies the source and class (‘Real’, Cclass) when fed (Xreal(data)), and accurately identifies the source and class (‘Fabricated’, Cfabricated) when fed (Xfabricated). Furthermore, at 330, the generator model 220 is trained so that it generates fabricated network packet attributes 240 that the discriminator model 230 cannot accurately identify as being fabricated by the generator model 220. In other words, the parameters of the generator model 220 are updated so that the discriminator model 230 incorrectly identifies the source and class (‘Real’, Cfabricated) when fed (Xfabricated). The completion of steps 310-330 may constitute one training iteration of the generator and discriminator models 220, 230. At 335, it is determined whether or not a predetermined number of training iterations has been run by the network generative model 110.
If at 335, the number of training iterations that have been run is less than the predetermined value, then the method 300 returns to 310, where the generator model 220 generates a new training set of fabricated network packet attributes 240 based on a fabricated conditional class Cfabricated and a randomly selected noise value 250 as inputs. However, if, at 335, the number of training iterations that have been run equals the predetermined value, then, at 340, generator model 220 generates a set of fabricated network packets 140 to send to an application model 120(1)-120(L), as illustrated in
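The iteration loop of method 300 can be sketched as follows. All four callables are hypothetical stand-ins for the operations at the numbered steps; the sketch shows only the control flow of steps 310-340, not the model updates themselves.

```python
def run_training(num_iterations, generate, discriminate, update_d, update_g):
    """Sketch of steps 310-335: each iteration fabricates a batch of
    attributes, lets the discriminator score them, and updates both
    models; after the predetermined number of iterations the trained
    generator produces the final fabricated set (step 340)."""
    for _ in range(num_iterations):
        fabricated = generate()            # 310: G(Cfabricated, Znoise)
        scores = discriminate(fabricated)  # 315-320: source/class probabilities
        update_d(scores)                   # 325: push LS + LC up
        update_g(scores)                   # 330: push LC - LS up
    return generate()                      # 340: final fabricated attributes
```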
With reference to
At 420, after the application model 120 has finished replaying each of the audio content sets for each one of the network packet attributes of the training dataset 130, the application model 120 calculates a predictive experience metric 150 that is associated with each set of attributes of each of the network packets of the training dataset 130. Thus, each network packet attribute set is associated with a distribution of predictive experience metrics 150, one predictive experience metric 150 for each replay of an audio content set. For example, if the training dataset 130 includes attribute sets for each of 5000 network packets, if the application model 120 collects 80 different sets of audio content, and if each audio content set is replayed 10 times, then each of the 5000 network packet attribute sets contains a distribution of 800 (80 audio content sets×10 replays) predictive experience metrics 150. As previously explained, this predictive experience metric 150 may be an expected MOS value, or a probability distribution over likely MOS values. The MOS may be computed by the application model 120 using standard waveform-based methods, such as, but not limited to, the Perceptual Evaluation of Speech Quality (PESQ) method, the Perceptual Evaluation of Audio Quality (PEAQ) method, the Perceptual Objective Listening Quality Analysis (POLQA) method, etc.
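The accumulation at step 420 can be sketched as follows. The function name and signature are illustrative; `score_fn` stands in for the waveform-based MOS computation (e.g., PESQ or POLQA) applied to one replay of one audio set under one packet-attribute set.

```python
def build_metric_distributions(attribute_sets, audio_sets, replays, score_fn):
    """Sketch of step 420: every packet-attribute set accumulates one
    predictive experience metric per (audio set, replay) pair, so each
    distribution holds len(audio_sets) * replays values, matching the
    80 x 10 = 800 example above."""
    return {i: [score_fn(attrs, audio, r)
                for audio in audio_sets
                for r in range(replays)]
            for i, attrs in enumerate(attribute_sets)}
```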
Once the predictive experience metrics 150 are calculated, at 425, the application model 120 is trained based on the predictive experience metric distributions 150 and each of the network packet attribute sets of the training dataset 130. In other words, the application model 120 is trained to associate a given network packet attribute set with a predictive experience metric distribution 150. At 430, the application model 120 receives the fabricated network attribute sets 240 from the network generative model 110. At 435, the application model 120, based on its training, generates a predictive experience metric distribution 150 for each of the received fabricated network attribute sets 240. Steps 405 through 425 may occur simultaneously with the method 300 illustrated in
While the embodiment of the application models 120(1)-120(L) illustrated in
Once the application model 120 generates the predictive experience metric distribution 150, the system 100 may alter one or more configurations of the network environment 10 to better optimize the network environment 10. In one embodiment, where the client devices 50(1)-50(M) are mobile wireless devices having multiple modes of connectivity, such as, but not limited to, WiFi® and broadband cellular network connectivity (4G, 5G, etc.), the system 100 may be configured to recommend to each one of the client devices 50(1)-50(M) connected to an AP 40(1)-40(K) the optimal type of network connection when operating an associated application program. The server 60 or serving AP for the given client device may send this recommendation to the given client device prior to the given client device operating the associated application program.
In a similar embodiment, where the system 100 resides in the server 60, the server 60 will continuously monitor the operation states of the APs 40(1)-40(K), and compute predictive experience metrics 150 for application programs communicating data through each of the APs 40(1)-40(K). The server 60 can then store the predictive experience metrics 150 (e.g., in the software defined networking (SDN) controller 62) so that when an application is operated on a client device (e.g., initiating a conference call using WebEx or Spark), the server 60 can consult the SDN controller 62 for the corresponding one of the APs 40(1)-40(K) and their associated predictive experience metrics 150. If a specific AP has a poor predictive experience metric 150, the SDN controller 62 will notify/control that specific AP or the relevant clients associated with that specific AP, so that the client devices associated with that specific AP will instead utilize broadband cellular network connectivity.
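The controller decision in this embodiment can be sketched as a simple threshold check. The function name and the MOS floor value are illustrative assumptions, not values from this disclosure.

```python
def connectivity_recommendations(ap_metrics, mos_floor=3.0):
    """Sketch: clients on APs whose stored predictive experience metric
    falls below an illustrative MOS floor are steered to broadband
    cellular connectivity; the rest remain on WiFi."""
    return {ap: ("cellular" if mos < mos_floor else "wifi")
            for ap, mos in ap_metrics.items()}
```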
In yet another embodiment, the system 100 may be utilized to instruct a client device to connect to a specific one of the APs 40(1)-40(K) of the network environment 10. Conventionally, AP selections are initiated by the client devices 50(1)-50(M) based on signal strength, etc. To assist the client devices 50(1)-50(M) in making informed decisions, extensions such as IEEE 802.11k allow for providing a suggested list of APs 40(1)-40(K) to a client device. When equipped with the ability to generate predictive experience metrics 150, either at the APs 40(1)-40(K) or at the central server 60, the server 60 or the APs can continuously update/modify/sort the suggested list of APs so that a given client device can be directed to a particular one of the APs 40(1)-40(K) that can best sustain/support the network traffic needed for an application program running on the given client device.
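The sorting of the suggested AP list can be sketched as follows; the function name is illustrative, and APs lacking a prediction sort last under an assumed default of 0.0.

```python
def sorted_ap_suggestions(candidate_aps, predicted_metrics):
    """Sketch: order an 802.11k-style neighbor list so that the APs
    predicted to best sustain the application's traffic appear first."""
    return sorted(candidate_aps,
                  key=lambda ap: predicted_metrics.get(ap, 0.0),
                  reverse=True)
```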
Furthermore, in another embodiment, because the techniques and the system 100 presented above enable the prediction of application-specific experience metrics 150, the system 100 may provide the client devices 50(1)-50(M) (e.g., directly to the application controller via programmable application programming interfaces (APIs)) with a recommended application mode (e.g., audio-only versus voice and video for a conferencing application) based on current network conditions of the network 20 and the APs 40(1)-40(K). For example, the system 100 may direct the application of a client device 50(1)-50(M) that is attempting to establish a video conference from a public venue during a heightened network traffic time to initiate the video conference with audio-only.
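The mode recommendation can be sketched as a threshold on the predicted video experience. The function name and the floor value are illustrative assumptions.

```python
def recommend_app_mode(predicted_video_mos, video_floor=3.5):
    """Sketch: fall back to audio-only when the predicted experience for
    full video conferencing drops below an illustrative threshold."""
    return "video" if predicted_video_mos >= video_floor else "audio_only"
```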
In a further embodiment, the system 100 may be trained to generate predictive experience metrics 150 for applications, where the predictive experience metrics 150 are conditioned on different network configurations (e.g., treating the traffic flow as prioritized classes). Consequently, this information can be used to guide the server 60 or APs 40(1)-40(K) to dynamically and automatically adjust the quality of service (QoS) configurations to best maximize the expected performance of all network traffic flows sharing the same AP 40(1)-40(K).
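The configuration-conditioned selection in this embodiment can be sketched as follows. `metric_for(config, flow)` is a hypothetical stand-in for a model whose predictive experience metrics are conditioned on the network configuration, and the sum-of-metrics objective is an illustrative choice for "maximize the expected performance of all flows".

```python
def best_qos_configuration(configs, flows, metric_for):
    """Sketch: score each candidate QoS configuration by summing the
    predictive experience metrics of all flows sharing the AP under that
    configuration, then pick the highest-scoring configuration."""
    return max(configs,
               key=lambda cfg: sum(metric_for(cfg, f) for f in flows))
```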
Reference is now made to
The processor(s) 500 may be embodied by one or more microprocessors or microcontrollers, and execute software instructions stored in memory 520 for the control logic 530, the network generative model 110, and the application models 120(1)-120(L) in accordance with the techniques presented herein in connection with
The memory 520 may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible (e.g., non-transitory) memory storage devices. Thus, in general, the memory 520 may comprise one or more computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (by the processor 500) it is operable to perform the operations described herein. For example, the memory 520 stores or is encoded with instructions for control logic 530 for managing the operations of the network generative model 110 of the AP 40 and the multiple application models 120(1)-120(L).
In addition, memory 520 stores data 540 that is observed and collected by the AP 40, and is generated by logic/models 530, 110, 120(1)-120(L), including, but not limited to: the training dataset 130 of real network packet attributes (e.g., packet loss, packet delay, etc.) and class/operational state of the AP 40; fabricated network packet attributes (e.g., packet loss, packet delay, etc.) generated by the network generative model 110; and predictive experience metrics (e.g., MOS distributions) generated by the application models 120(1)-120(L).
Illustrated in
The processor(s) 600 may be embodied by one or more microprocessors or microcontrollers, and execute software instructions stored in memory 620 for the control logic 630, one or multiple network generative models 110, and the application models 120(1)-120(L) in accordance with the techniques presented herein in connection with
Memory 620 may include one or more computer readable storage media that may comprise read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices.
Thus, in general, the memory 620 may include one or more tangible (e.g., non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions, and when the software is executed by the processor(s) 600, the processor(s) 600 are operable to perform the operations described herein by executing instructions associated with the control logic 630 for managing the operations of the one or more network generative models 110 representative of each of the APs 40(1)-40(K), and of the multiple application models 120(1)-120(L).
In addition, memory 620 stores data 640 that is observed and collected by the various APs 40(1)-40(K), and is generated by logic/models 630, 110, 120(1)-120(L), including, but not limited to: the training dataset 130 of real network packet attributes (e.g., packet loss, packet delay, etc.) and class/operational states of the APs 40(1)-40(K); fabricated network packet attributes (e.g., packet loss, packet delay, etc.) 240 generated by the one or more network generative models 110; and predictive experience metrics (e.g., MOS distributions) 150 generated by the application models 120(1)-120(L).
The functions of the processor(s) 600 may be implemented by logic encoded in one or more tangible computer readable storage media or devices (e.g., storage devices compact discs, digital video discs, flash memory drives, etc. and embedded logic such as an ASIC, digital signal processor instructions, software that is executed by a processor, etc.).
While
With reference to
In summary, the method and system described above enable wireless networks to be optimized based on separate neural network models that generate sets of fabricated attributes of artificial network packets and predictive experience metrics of application programs. The method and system described above further enable wireless networks to be optimized even when the application programs are not actively communicating data over the wireless network. This allows wireless networks to be optimized prior to use by an application program, which maximizes application performance and minimizes any time periods where the network may negatively affect application performance. Furthermore, the system described above provides an accurate predictive experience metric of application programs because the application models are based on actual application specific strategies of concealment, loss-recovery, de-jittering, etc., and because the predictive experience metrics capture variations of application content. In addition, because the predictive experience metrics are calculated and generated using a neural network, the computational cost per generated predictive experience metric is greatly reduced compared to that of conventional experience metrics that require actual operation of the application program and/or network traffic. Moreover, the system and method described above enable the continuous optimization of a wireless network because they require minimal amounts of data collection. Once the two neural network models have been trained (i.e., the network generative model and an application model), the models may be continuously utilized to generate predictive experience metrics that are used to optimize wireless network performance. Even if a model is required to be retrained, the data collection time for acquiring a new training dataset may be as little as ten seconds, while the predictive experience metric may be calculated offline or without running any application programs.
The separate model system described above also provides advantages over a single model system. As previously explained, the network generative model may be used for multiple application programs, and for different versions of the same application program (i.e., when the application program is updated). This results in a reduced amount of downtime for retraining of the models when the application programs are altered or updated. Furthermore, the application models may be used with different network generative models. For example, if each AP of the network environment is represented by a network generative model, and additional APs are added to the network environment, the existing application models may be utilized with the new network generative models of the new APs. This prevents the need for retraining each of the application models when changes occur to the network devices that are represented by the network generative models.
In one form, a method is provided comprising: collecting, at a network device, a training dataset representing one or more states of the network device deployed in a network; training, by the network device and based on the training dataset, a first model that generates one or more fabricated attributes of artificial network traffic through the network device; training, by the network device and based on the training dataset, a second model that generates a predictive experience metric that represents a predicted performance of an application program of a client device that is connected to the network device and is communicating traffic via the network device; generating the one or more fabricated attributes based on the training of the first model; generating the predictive experience metric based on the training of the second model using the one or more fabricated attributes; and altering, by the network device, one or more configurations of the network based on the predictive experience metric.
In another form, an apparatus is provided comprising: a network interface unit configured to enable communications over a network; and a processor coupled to the network interface unit, the processor configured to: collect a training dataset representing one or more states of a network device deployed in a network; train, based on the training dataset, a first model that generates one or more fabricated attributes of artificial network traffic through the network device; train, based on the training dataset, a second model that generates a predictive experience metric that represents a predicted performance of an application program of a client device that is connected to the network device and is communicating traffic via the network device; generate the one or more fabricated attributes based on the training of the first model; generate the predictive experience metric based on the training of the second model using the one or more fabricated attributes; and alter one or more configurations of the network based on the predictive experience metric.
In yet another form, a (non-transitory) processor readable storage medium is provided. The computer readable storage medium is encoded with software comprising computer executable instructions, and when the software is executed, operable to: collect a training dataset representing one or more states of a network device deployed in a network; train, based on the training dataset, a first model that generates one or more fabricated attributes of artificial network traffic through the network device; train, based on the training dataset, a second model that generates a predictive experience metric that represents a predicted performance of an application program of a client device that is connected to the network device and is communicating traffic via the network device; generate the one or more fabricated attributes based on the training of the first model; generate the predictive experience metric based on the training of the second model using the one or more fabricated attributes; and alter one or more configurations of the network based on the predictive experience metric.
The above description is intended by way of example only. Various modifications and structural changes may be made therein without departing from the scope of the concepts described herein and within the scope and range of equivalents of the claims.