The present disclosure relates to communication networks, and particularly to traffic engineering in communication networks. In order to improve network utilization and to meet user requirements, the disclosure proposes intent-based smart policy routing. The disclosure is specifically concerned with load balancing traffic over multiple paths of a set of tunnels.
Traffic engineering plays a crucial role in improving network utilization and meeting user requirements. Routing over multiple paths helps to make better use of network capacity and allows dynamic selection of paths based on network conditions.
Multi-path routing is typically used in Software-Defined Wide Area Networks (SD-WAN), where a headquarters site and local branches (e.g., sites) of an enterprise can be interconnected through different networks (e.g., Multiprotocol Label Switching (MPLS), Internet, Long-Term Evolution (LTE)). In such an overlay network, a controller entity is often deployed at the headquarters site to manage the network, while the premises are equipped with access routers (ARs). The controller is responsible for the configuration of access routers and the update of high-level routing policies. In turn, access routers route traffic over the set of available overlay links so as to align with the policy defined by the controller and the local knowledge of network conditions (e.g., average delay, loss, and jitter of overlay links).
At the individual application level, specific Quality of Service (QoS) or Service Level Agreement (SLA) requirements can be defined. Applications are generally mapped to different “flow groups” with different QoS requirements, for instance: Real-time (moderate traffic, high SLA requirements), Business (high traffic, moderate SLA requirements), and Bulk (moderate traffic, low/no SLA requirements).
At the global network level, several goals, or “intents”, may be expressed by the network owner, for instance the minimization of financial cost, the minimization of congestion, or the maximization of performance. Optimizing intents and guaranteeing SLAs may be conflicting objectives by nature. Indeed, a financial cost minimization intent can lead to the selection of borderline paths in terms of QoS, inducing a risk of violating application SLAs.
Typically, load balancing in IP networks is used to reduce congestion and is implemented inside network devices such as switches or routers. As traffic and network conditions evolve, the load balancing of traffic must be adjusted to better use network resources and meet application requirements. In the state of the art, several solutions have been proposed to solve load balancing problems.
The Performance Routing (PfR)/iWAN solution has been released for the dynamic selection of paths in WAN networks so as to satisfy SLA requirements. In this architecture, the user defines a policy at the Master Controller (MC) level in terms of SLA requirements for each application. Access routers monitor the quality of the paths and send monitoring updates to the MC. The controller then compares the quality of paths with application requirements and updates the path selection in routers if needed. This solution requires frequent communication between the device and the controller, as paths are actually selected in the controller. It is therefore slow to react to changes in network conditions. Furthermore, the policy is only defined with QoS requirements or path preferences for each application; it is not optimized to satisfy a given global intent.
A link utilization (LU) based routing has been introduced to dynamically load balance traffic, where a link utilization threshold is assigned to links. When the LU of a link surpasses the threshold, this link is marked as “out-of-policy” (OOP). Then, traffic on OOP links is redirected to other available links until the LU of the link goes back to “in-policy” (under the given threshold). In another LU-based load balancing method, the available bandwidth of links is estimated. From the list of Equal Cost Multi-Paths (ECMP) computed by the shortest path first algorithm, the method determines the minimum available bandwidth (MAB) of each path. Then, the weight of each path is determined according to the MAB, and the flows are allocated to the ECMP paths according to the weights. In one conventional solution, traffic is distributed over paths proportionally to the capacity of the paths. These solutions address the load balancing problem by considering link utilization; however, they cannot guarantee SLAs.
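As a purely illustrative sketch of such MAB-based weighting (the helper names and numeric values below are hypothetical and not taken from the cited solution), weights proportional to the minimum available bandwidth of each ECMP path could be computed as follows:

```python
# Illustrative sketch: weight ECMP paths proportionally to their minimum
# available bandwidth (MAB). Path and link names are hypothetical.

def min_available_bandwidth(path_links, available_bw):
    """MAB of a path is the smallest available bandwidth among its links."""
    return min(available_bw[link] for link in path_links)

def ecmp_weights(paths, available_bw):
    """Return one load-balancing weight per equal-cost path, proportional to its MAB."""
    mabs = {name: min_available_bandwidth(links, available_bw) for name, links in paths.items()}
    total = sum(mabs.values()) or 1.0
    return {name: mab / total for name, mab in mabs.items()}

paths = {"p1": ["l1", "l2"], "p2": ["l3", "l4"]}                  # two equal-cost paths
available_bw = {"l1": 40.0, "l2": 100.0, "l3": 25.0, "l4": 30.0}  # Mbps per link
print(ecmp_weights(paths, available_bw))                          # {'p1': ~0.62, 'p2': ~0.38}
```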
Load balancing may be done based on awareness of congestion in the network, where a feedback loop between the source and destination is formed to detect congestion. However, each application has a different sensitivity to congestion, e.g., Bulk applications tolerate light to moderate congestion while Real-time applications do not. In a delay-based load balancing solution, the delay of links is estimated with the current load and the remaining capacity, then the paths are selected to obtain a combined target (bandwidth, utilization, and delay). In another intent and QoS-based routing framework for OpenFlow, routing decisions are taken in the controller for every flow. Even though this solution considers both SLA and intent, it only runs inside the controller and there is no support from devices to operate load balancing.
In view of the above, this disclosure aims to introduce an intent-based policy routing method inside devices. An objective is to allow devices to support a network controller in meeting global intents. One aim is to minimize the need for communication between the controller and devices. Another aim is to improve the satisfaction of SLAs.
The objective is achieved by embodiments as provided in the enclosed independent claims. Advantageous implementations of the embodiments are further defined in the dependent claims.
A first aspect of the disclosure provides a network entity for routing a plurality of flow groups in a network, wherein the network entity is configured to obtain policy information from a controller, wherein the policy information comprises one or more global intents and information about at least one SLA requirement for each of the plurality of flow groups, wherein each global intent is indicative of one requirement of a network operator; and make one or more routing decisions for the plurality of flow groups based on the policy information.
In this disclosure, the network may be a software-defined network, e.g., SD-WAN. The network entity may be an access device, for instance, an access router (AR) in SD-WAN scenarios. This disclosure proposes a solution for intent-based policy routing inside devices. Possibly, the policy information may include routing policies, or load balancing policies. Devices (e.g., the network entity) support a controller, which provides the policy information, in order to meet global intents.
In an embodiment of the first aspect, each global intent is indicative of a requirement of the network operator related to one of the following: link utilization, financial cost, quality, congestion, safety, stability, and performance.
Typically, the network operator or a network owner may intend to minimize the financial cost of the network, to minimize the network congestion, to maximize the network performance, to maximize the stability of the network (e.g., stick as much as possible to previous configurations), or to maximize the safety of the network (e.g., satisfy a maximum number of SLAs). One or more of these requirements may be included in the policy information, thereby allowing the network entity to make decisions taking such intents or requirements into account.
In an embodiment of the first aspect, the policy information further comprises information about a set of overlay links that can be used for routing the plurality of flow groups, wherein each overlay link comprises a plurality of underlay links.
In some embodiments, a set of active/backup links may be given, for example by a centralized controller. Notably, the centralized controller may be aware of the network topology and/or link characteristics (e.g., capacity, background traffic, packet loss, delay, etc.). The network entity makes routing decisions also taking information of the overlay links into account. Typically, an underlay link or network refers to a physical link or network, while an overlay link is a logical or virtual link that is overlaid on one or more physical links.
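As a minimal, purely hypothetical sketch of how such policy information could be structured (the field names below are assumptions for illustration, not the claimed format), consider:

```python
from dataclasses import dataclass, field

@dataclass
class SlaRequirement:
    max_delay_ms: float    # e.g., 500 ms for a Real-time flow group
    max_jitter_ms: float   # e.g., 200 ms
    max_loss_pct: float    # e.g., 10 %

@dataclass
class PolicyInformation:
    # Global intents with a weight or priority each (higher means more important).
    global_intents: dict = field(default_factory=dict)    # e.g., {"safety": 1.0, "stability": 0.8, "cost": 0.3}
    # At least one SLA requirement per flow group.
    sla_requirements: dict = field(default_factory=dict)  # e.g., {"real_time": SlaRequirement(500, 200, 10)}
    # Overlay links usable for routing, each mapped to its underlay links.
    overlay_links: dict = field(default_factory=dict)     # e.g., {"ovl1": ["mpls"], "ovl2": ["internet", "lte"]}

policy = PolicyInformation(
    global_intents={"safety": 1.0, "stability": 0.8, "cost": 0.3},
    sla_requirements={"real_time": SlaRequirement(500, 200, 10)},
    overlay_links={"ovl1": ["mpls"], "ovl2": ["internet", "lte"]},
)
```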
In an embodiment of the first aspect, the network entity is further configured to obtain a traffic prediction result and/or an SLA prediction result for each of the plurality of flow groups, wherein the traffic prediction result of each flow group comprises one or more traffic parameters of that flow group, and/or the SLA prediction result of each flow group comprises one or more QoS performance indicators of that flow group.
Notably, the network entity may be a Smart Policy Routing (SPR) module implemented inside the AR. The network entity may obtain traffic prediction results and/or SLA prediction results from other modules (such as a traffic prediction module or an SLA prediction module) inside the AR.
In an embodiment of the first aspect, making the one or more routing decisions for the plurality of flow groups comprises: making a routing decision for each of the plurality of flow groups based on the policy information, and the traffic prediction result and/or the SLA prediction result of that flow group.
In some embodiments, when the network entity makes routing decisions for each flow group, in addition to the policy information, it may further take the traffic prediction result and/or the SLA prediction result of that flow group into account. It should be noted that communication with the controller can be reduced and SLA satisfaction is better ensured, due to the use of prediction models for traffic and performance. As the optimization of intents can conflict with the optimization of SLAs, the use of prediction models allows the network entity to make efficient decisions (e.g., routing decisions or load balancing decisions), i.e., so that it can anticipate the consequences of the decisions it is taking.
In an embodiment of the first aspect, the network entity is further configured to monitor the plurality of flow groups to obtain statistical information for each of the plurality of flow groups, wherein the statistical information for each flow group comprises at least one of throughput information and QoS statistics of that flow group; and perform traffic prediction on each of the plurality of flow groups based on the statistical information of that flow group, to obtain the traffic prediction result of each of the plurality of flow groups.
For instance, the network entity may be the AR itself that includes an SPR module. In such a case, the network entity produces the traffic prediction result.
In an embodiment of the first aspect, the traffic prediction is performed using a traffic prediction model.
Possibly, the traffic prediction models can be already embedded into the network entity.
In an embodiment of the first aspect, the network entity is further configured to obtain one or more traffic prediction parameters related to the traffic prediction from the controller; and select the traffic prediction model from one or more trained models using the one or more traffic prediction parameters and/or train the traffic prediction model using the one or more traffic prediction parameters.
Possibly, the parameters related to traffic predictions may be used for selecting a particular traffic prediction model (e.g., history-based, seasonal auto-regressive integrated moving average (SARIMA), or machine learning) to use. In some embodiments, the parameters may be used as model parameters for configuring or training the selected model.
In an embodiment of the first aspect, the network entity is further configured to provide the statistical information of the plurality of flow groups to the controller.
In an embodiment of the first aspect, the network entity is further configured to perform SLA prediction for each of the plurality of flow groups and for each overlay link of the set of overlay links using an SLA prediction model, to obtain the SLA prediction result of each of the plurality of flow groups.
In some embodiments, the network entity may further comprise an SLA prediction module for producing the SLA prediction result.
In an embodiment of the first aspect, the network entity is further configured to obtain one or more SLA prediction parameters related to the SLA prediction from the controller; and determine or activate the SLA prediction model using the one or more SLA prediction parameters.
In some embodiments, the parameters related to SLA predictions may be used for determining or activating the SLA prediction model.
In an embodiment of the first aspect, each global intent is associated with a priority or a weight indicating an order of importance of that global intent in the one or more global intents.
The priority or weight of each global intent may be provided by the network operator or owner.
In an embodiment of the first aspect, the one or more global intents includes a first global intent that is indicative of the requirement related to safety, and the first global intent is associated with a highest priority in the one or more global intents, wherein the first global intent indicates the network entity to make the routing decisions that meet the at least one SLA requirement.
In an embodiment of the first aspect, the one or more global intents includes a second global intent that is indicative of the requirement related to stability, and the second global intent is associated with a second highest priority in the one or more global intents, wherein the second global intent indicates the network entity to make the one or more routing decisions that minimize changes on one or more previous routing decisions.
Notably, the maximization of stability (also called “Stickiness”) is often defined as a secondary intent, and the maximization of safety is often defined as a primary intent.
In an embodiment of the first aspect, the network entity is further configured to periodically make the one or more routing decisions for the plurality of flow groups, or make the one or more routing decisions for the plurality of flow groups in response to one or more trigger-events.
In some embodiments, the intent-based SPR in the devices can be periodic or event-triggered (e.g., by an external controller).
In an embodiment of the first aspect, the network entity is further configured to provide information about whether the network entity supports a particular global intent to the controller.
Notably, the controller can collect information about intent capabilities of devices, and provide the policy information including the global intents accordingly.
In an embodiment of the first aspect, the policy information comprises a first set of global intents dedicated for a first group of applications running on the network entity, and a second set of global intents dedicated for a second group of applications running on the network entity.
Intents can be defined globally for all applications or for different groups of applications/tenants. For instance, it may be desired to reduce the financial expenses induced by a given application from one group, while it may also be desired to increase the quality perceived by an application from another group. Intents can also be defined differently for devices (e.g., core routers versus access routers). For instance, it may be desired to reduce the financial expenses induced by an enterprise branch.
In an embodiment of the first aspect, the network entity is further configured to make one or more load balancing decisions for the plurality of flow groups based on the policy information.
Possibly, the intent-based approach proposed in this disclosure can also apply to load balancing.
In an embodiment of the first aspect, making the one or more load balancing decisions for the plurality of flow groups comprises: making a load balancing decision for each of the plurality of flow groups based on the policy information, and the traffic prediction result and/or the SLA prediction result of that flow group.
In an embodiment of the first aspect, the network entity is an access router, and the network is a software-defined network.
A second aspect of the disclosure provides a controller for assisting in routing a plurality of flow groups in a network, wherein the controller is configured to provide policy information to a network entity, wherein the policy information comprises one or more global intents and information about at least one SLA requirement for each of the plurality of flow groups, wherein each global intent is indicative of one requirement of a network operator.
An embodiment of this disclosure thus proposes a controller for supporting the intent-based SPR method. Intents are provided to access devices, e.g., the network entity, so that the network entity makes efficient decisions (e.g., routing decisions or load balancing decisions), i.e., so that it can anticipate the consequences of the decisions it is taking.
In an embodiment of the second aspect, each global intent is indicative of a requirement of the network operator related to one of the following: link utilization, financial cost, quality, congestion, safety, stability, and performance.
In an embodiment of the second aspect, the policy information further comprises information about a set of overlay links that can be used for routing the plurality of flow groups, wherein each overlay link comprises a plurality of underlay links.
In an embodiment of the second aspect, the controller is configured to provide one or more traffic prediction parameters related to traffic prediction, and/or one or more SLA prediction parameters related to SLA prediction, to the network entity.
Possibly, the traffic prediction models can be already embedded into the devices. The controller may provide parameters related to traffic predictions to the network entity for selecting a particular traffic prediction model (History-based, seasonal auto-regressive integrated moving average (SARIMA), machine learning) to use.
In an embodiment of the second aspect, the controller is configured to obtain statistical information of the plurality of flow groups from the network entity, wherein the statistical information of each flow group comprises at least one of throughput information and QoS statistics of that flow group.
In an embodiment of the second aspect, each global intent is associated with a priority or a weight indicating an order of importance of that global intent in the one or more global intents.
In an embodiment of the second aspect, the controller is configured to obtain information about whether the network entity supports a particular global intent from the network entity.
In an embodiment of the second aspect, providing policy information to a network entity further comprises providing the policy information based on the obtained information about whether the network entity supports a particular global intent.
In some embodiments, the controller can collect information about intent capabilities of devices, and provide the policy information including the global intents accordingly.
A third aspect of the disclosure provides a method for routing a plurality of flow groups in a network, wherein the method comprises: obtaining policy information from a controller, wherein the policy information comprises one or more global intents and information about at least one SLA requirement for each of the plurality of flow groups, wherein each global intent is indicative of one requirement of a network operator; and making one or more routing decisions for the plurality of flow groups based on the policy information.
Embodiments of the method of the third aspect may correspond to the embodiments of the network entity of the first aspect described above. The method of the third aspect and its embodiments achieve the same advantages and effects as described above for the network entity of the first aspect and its embodiments.
A fourth aspect of the disclosure provides a method for assisting in routing a plurality of flow groups in a network, wherein the method comprises providing policy information to a network entity, wherein the policy information comprises one or more global intents and information about at least one SLA requirement for each of the plurality of flow groups, wherein each global intent is indicative of one requirement of a network operator.
Embodiments of the method of the fourth aspect may correspond to the embodiments of the controller of the second aspect described above. The method of the fourth aspect and its embodiments achieve the same advantages and effects as described above for the controller of the second aspect and its embodiments.
A fifth aspect of the disclosure provides a computer program product comprising a program code for carrying out, when implemented on a processor, the method according to the third aspect and any embodiments of the third aspect, or the fourth aspect and any embodiments of the fourth aspect.
It has to be noted that all devices, elements, units and means described in the present application could be implemented in software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the present application as well as the functionalities described to be performed by the various entities are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear for a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof.
The above-described aspects and embodiments of the present disclosure will be explained in the following description of specific embodiments in relation to the enclosed drawings, in which:
Illustrative embodiments of a network entity, a controller, and corresponding methods for traffic routing in a network are described with reference to the figures. Although this description provides a detailed example of possible implementations, it should be noted that the details are intended to be exemplary and in no way limit the scope of the application.
Moreover, an embodiment/example may refer to other embodiments/examples. For example, any description including but not limited to terminology, element, process, explanation and/or technical advantage mentioned in one embodiment/example is applicative to the other embodiments/examples.
For ease of understanding, SD-WAN network architectures are first described here.
Typically, load balancing in IP networks is used to reduce congestion and is implemented inside network devices such as switches or routers using two techniques: 1) hash-based splitting, where a hash is calculated over significant fields of packet headers and is used to select the outgoing path, and 2) Weighted Cost Multi-Pathing (WCMP), where load balancing weights are used to make decisions, e.g., to make sure that the number of flows on each outgoing path meets a certain ratio. These approaches generally aim at minimizing the Maximum Link Utilization (MLU). To select paths according to the QoS requirements of applications, more advanced techniques are required. They are either based on a flow table that maintains a specific forwarding rule for each flow, or based on the continuous assessment of a valid set of paths for multiple flows. The Smart Policy Routing (SPR) technology described here falls in the latter category.
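As a minimal sketch of these two techniques (hypothetical header fields and path names, not any particular vendor implementation):

```python
import hashlib

def hash_based_split(pkt, paths):
    """Hash-based splitting: hash significant header fields so that all packets
    of the same flow deterministically select the same outgoing path."""
    key = f"{pkt['src']}-{pkt['dst']}-{pkt['sport']}-{pkt['dport']}-{pkt['proto']}"
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return paths[digest % len(paths)]

def wcmp_place(flow_counts, weights):
    """WCMP: place a new flow on the path whose current share of flows is
    furthest below its target weight, so per-path flow counts follow the ratios."""
    total = sum(flow_counts.values()) + 1
    return min(weights, key=lambda p: flow_counts[p] / total - weights[p])

paths = ["mpls", "internet", "lte"]
pkt = {"src": "10.0.0.1", "dst": "10.0.1.9", "sport": 4711, "dport": 443, "proto": 6}
print(hash_based_split(pkt, paths))
print(wcmp_place({"mpls": 3, "internet": 1, "lte": 0}, {"mpls": 0.5, "internet": 0.3, "lte": 0.2}))
```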
For each flow group, i.e., a set of flows with similar requirements, access routers get an SPR policy from the controller that contains 1) the SLA requirements that must be satisfied and 2) a set of active/backup links that can be used for load balancing. The quality of each path in the pool of active and backup links (1 overlay link = 1 path) is continuously monitored and evaluated using the Composite Measure Indicator (CMI). For example, CMI = 800 − (min(D, 500) + min(J, 200) + min(L, 10)), with D (delay), L (packet loss), and J (jitter) being the real-time measurements and 500 ms, 10%, and 200 ms being the respective QoS constraints. An overlay link, or a path, violating all constraints will not be chosen because its CMI is negative. The set of overlay links that remain eligible are used to load balance traffic. A switchover period (SW) and a flapping suppression period (FSW) can be used to stabilize the system. During the SW period, if the quality of the path recovers, the switchover is canceled. After switching traffic to a new link, within an FSW period, SPR may not perform a link switchover even if it does not obtain a good CMI (this is optional).
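The CMI-based eligibility check can be sketched as follows, assuming the example formula above; the per-link measurement values are hypothetical:

```python
def cmi(delay_ms, jitter_ms, loss_pct, max_delay=500.0, max_jitter=200.0, max_loss=10.0):
    """Composite Measure Indicator per the example formula: each real-time
    measurement is capped at its QoS constraint before being summed."""
    return 800.0 - (min(delay_ms, max_delay) + min(jitter_ms, max_jitter) + min(loss_pct, max_loss))

# Hypothetical real-time measurements per overlay link: (delay ms, jitter ms, loss %).
measurements = {"ovl_mpls": (30.0, 5.0, 0.1), "ovl_internet": (120.0, 40.0, 1.0), "ovl_lte": (480.0, 190.0, 9.0)}

scores = {link: cmi(*m) for link, m in measurements.items()}
eligible = sorted((l for l, s in scores.items() if s > 0), key=lambda l: -scores[l])
print(eligible)  # links with a positive CMI, best first, remain usable for load balancing
```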
While SPR in devices focuses on SLA satisfaction for applications (e.g., flow groups), it is not aware of the global intents that need to be optimized at the network level.
In a conventional SPR solution, the controller periodically adjusts policies and communicates them to edge devices (i.e., ARs). To adjust policies, the goal of the controller is twofold: 1) helping devices to guarantee SLAs, and 2) satisfying network-level intents.
It takes as inputs:
Based on that, the controller updates, at a slow pace, the SPR policy of devices. The policy for each flow group includes the sets of active and backup links and QoS requirements.
A main issue with this two-level architecture is that the edge devices are not aware of global intents (they only focus on SLA guarantees) and the controller needs to interact frequently with devices to optimize global intents.
The network entity 200 is configured to obtain policy information from a controller 300. The policy information 201 comprises one or more global intents and information about at least one SLA requirement for each of the plurality of flow groups. Notably, each global intent is indicative of one requirement of a network operator. The network entity 200 is further configured to make one or more routing decisions 202 for the plurality of flow groups based on the policy information 201.
The network may be a software-defined network, e.g., SD-WAN. This disclosure proposes a solution for intent-based policy routing inside devices. Devices support the network controller in meeting global intents. For instance, each global intent may be indicative of a requirement of the network operator related to one of the following: link utilization, financial cost, quality, congestion, safety, stability, and performance. Typically, the network owner may intend to minimize the financial cost of the network, to minimize the network congestion, to maximize the network performance, to maximize the stability of the network (e.g., stick as much as possible to previous configurations), or to maximize the safety of the network (e.g., satisfy a maximum number of SLAs).
In some embodiments, each global intent is associated with a priority or a weight indicating an order of importance of that global intent in the one or more global intents. Notably, the maximization of stability (also called “Stickiness”) is often defined as a secondary intent, and the maximization of safety is often defined as a primary intent.
According to an embodiment of this disclosure, the one or more global intents may include a first global intent that is indicative of the requirement related to safety. The first global intent is associated with a highest priority in the one or more global intents, wherein the first global intent indicates the network entity 200 to make the routing decisions 202 that meet the at least one SLA requirement.
As shown in
In the control plane, an intent-based “Smart Policy Routing (SPR)” module may periodically compute a split ratio or forwarding rules for each flow group based on policy information (the policy information 201). According to this disclosure, the policy information 201 as shown in
In some embodiments, the policy information 201 further comprises information about a set of overlay links that can be used for routing the plurality of flow groups, wherein each overlay link comprises a plurality of underlay links.
The AR shown in
Notably, the network entity 200 as shown in
In some embodiments, the network entity 200 may be configured to obtain a traffic prediction result for each of the plurality of flow groups, wherein the traffic prediction result of each flow group comprises one or more traffic parameters of that flow group. Typically, a flow can be characterized by an average throughput and/or a maximum burst size. The traffic parameter may be a “demand”, e.g., the throughput, or other parameters such as peak/burst rate. Traffic demand is typically characterized by the network throughput for a set of origin-destination (OD) flows.
Optionally, the network entity 200 may be configured to obtain an SLA prediction result for each of the plurality of flow groups. The SLA prediction result of each flow group comprises one or more QoS performance indicators (i.e., delay, loss, and jitter) of that flow group.
According to an embodiment of this disclosure, when the network entity 200 makes a routing decision 202 for each of the plurality of flow groups, in addition to the policy information 201, it may further take into account the traffic prediction result and/or the SLA prediction result of that flow group.
In an embodiment of the disclosure, the network entity 200, i.e., the intent-based SPR module, can take decisions by solving an optimization model. In the following example, a versatile intent optimization model for five typical intents is presented:
These models are combined in the objective function with weights that determine their relative priority or importance.
The inputs of the intent-based SPR optimization model are the following:
Notably, dk(t) may be given by the traffic prediction module. qk, Coste, Prk, and βk may be defined by the user and communicated to the devices (e.g., the network entity 200) by the network controller 300. PropDe is given by the monitoring module. The resilient bandwidth cost model is adopted. In this model, a lower price applies to the baseline bandwidth which is agreed between the user and the operator. If the user uses more than this threshold, a higher price will be applied.
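As an illustration of the resilient bandwidth cost model just described (the function and parameter names are assumptions; the actual prices are defined by the user), the per-link cost could be evaluated as:

```python
def resilient_bandwidth_cost(traffic_mbps, baseline_mbps, price_low, price_high):
    """Resilient bandwidth cost: traffic up to the agreed baseline is billed at
    the lower price, and any traffic above the baseline at the higher price."""
    below = min(traffic_mbps, baseline_mbps)
    above = max(0.0, traffic_mbps - baseline_mbps)
    return price_low * below + price_high * above

# A link carrying 120 Mbps with a 100 Mbps baseline: 100*1.0 + 20*3.0 = 160.0
print(resilient_bandwidth_cost(traffic_mbps=120.0, baseline_mbps=100.0, price_low=1.0, price_high=3.0))
```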
The outputs (decision variables) are:
It should be noted that yk(t) is the main output and will be sent to the Traffic scheduler module to redirect the incoming flows to the appropriate overlay link.
The intent-based load balancing optimization model may be expressed as:
This model is a non-linear optimization model and it can be solved by a solver (e.g., the SCIP solver). The objective has multiple terms related to the different available intents. A user can select or combine intents by adjusting the weights. Constraints (1) and (2) compute the load of a flow group/priority on a link, which is exploited to compute the waiting delay using the Non-Preemptive Queuing (NPQ) model from queuing theory for strict priority schedulers in (8) and (9). Constraints (3)-(7) determine the link utilization of each link, the amount of traffic below and above the baseline bandwidth, and the MLU. Constraint (13) derives the delay of a flow group and constraint (10) determines whether the delay meets its SLA requirements. Constraints (11) and (12) compare the difference between the output and the current solution. Faster heuristic algorithms can also be derived if a solver cannot be used.
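Since the model itself is not reproduced here, the following is only a sketch of the general shape such a weighted multi-intent formulation can take; all symbols and weights are assumptions for illustration, with $y_{k,e}(t)$ denoting the fraction of flow group $k$ placed on overlay link $e$, and the waiting delay following the classical non-preemptive strict-priority expression from queuing theory:

$$\min_{y}\;\; w_{\mathrm{cost}} \sum_{e} \mathrm{Cost}_e(x_e) \;+\; w_{\mathrm{cong}}\,\mathrm{MLU} \;+\; w_{\mathrm{safety}} \sum_{k} v_k \;+\; w_{\mathrm{stab}} \sum_{k,e} \bigl| y_{k,e}(t) - y_{k,e}(t-1) \bigr|$$

$$\text{s.t.}\quad x_e = \sum_{k} d_k(t)\, y_{k,e}(t), \qquad u_e = \frac{x_e}{C_e} \le \mathrm{MLU}, \qquad \sum_{e} y_{k,e}(t) = 1 \;\;\forall k,$$

$$W_{p,e} = \frac{R_e}{\bigl(1 - \sum_{i<p}\rho_{i,e}\bigr)\bigl(1 - \sum_{i\le p}\rho_{i,e}\bigr)} \quad \text{(non-preemptive strict-priority waiting delay)},$$

where $C_e$ is the capacity of link $e$, $R_e$ the mean residual service time, $\rho_{i,e}$ the utilization of priority $i$ on link $e$, and $v_k$ a binary variable equal to 1 when the predicted delay of flow group $k$ exceeds its SLA bound.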
It should be noted that, in another example, the network entity 200 may be the AR as shown in
In some embodiments, the network entity 200 may be further configured to monitor the plurality of flow groups to obtain statistical information for each of the plurality of flow groups. For instance, the statistical information for each flow group may comprise at least one of throughput information and QoS statistics of that flow group. In some embodiments, the network entity 200 may be further configured to perform traffic prediction on each of the plurality of flow groups based on the statistical information of that flow group, to obtain the traffic prediction result of each of the plurality of flow groups.
In some embodiments, the traffic prediction is performed using a traffic prediction model. Possibly, the traffic prediction models can be already embedded into the devices. The network entity 200 may be further configured to obtain one or more traffic prediction parameters related to the traffic prediction from the controller 300. The parameters related to traffic predictions may be used for selecting a particular traffic prediction model (e.g., history-based, SARIMA, or machine learning) to use. That is, the network entity 200 may be further configured to select the traffic prediction model from one or more trained models using the one or more traffic prediction parameters.
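A minimal sketch of such a parameter-driven model selection is given below; the parameter keys, the fallback window, and the use of the SARIMAX class from statsmodels are assumptions for illustration, not the disclosed mechanism:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def predict_demand(history, params):
    """Select and run a traffic prediction model from controller-supplied parameters."""
    if params.get("model") == "sarima":
        order = tuple(params.get("order", (1, 0, 1)))
        seasonal = tuple(params.get("seasonal_order", (1, 1, 1, 96)))  # e.g., 96 fifteen-minute slots per day
        fitted = SARIMAX(history, order=order, seasonal_order=seasonal).fit(disp=False)
        return float(fitted.forecast(steps=1)[0])
    # Fallback: history-based prediction (moving average over the last window).
    window = int(params.get("window", 4))
    return float(np.mean(history[-window:]))

history = np.random.default_rng(0).gamma(2.0, 5.0, size=4 * 96)  # synthetic per-slot throughput samples (Mbps)
print(predict_demand(history, {"model": "history", "window": 8}))
```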
In some embodiments, the parameters may be used as model parameters for configuring the selected model with some parameters. In such a case, the network entity 200 may be further configured to train the traffic prediction model using the one or more traffic prediction parameters.
In some embodiments, the network entity 200 may be further configured to provide the statistical information of the plurality of flow groups to the controller 300.
In some embodiments, the network entity 200 may be further configured to perform SLA prediction for each of the plurality of flow groups and for each overlay link of the set of overlay links using an SLA prediction model, to obtain the SLA prediction result of each of the plurality of flow groups. Possibly, the network entity 200 may be further configured to obtain one or more SLA prediction parameters related to the SLA prediction from the controller 300.
The parameters related to SLA predictions may be used for determining or activating the SLA prediction model. For instance, the SLA prediction model may be a closed-form model from queuing theory or network calculus. Alternatively, the SLA prediction model may be a machine learning model.
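For illustration only, a closed-form sketch assuming an M/M/1 approximation per overlay link (the actual prediction model is not specified here, and the packet size and propagation delay below are hypothetical):

```python
def predicted_delay_ms(offered_mbps, capacity_mbps, mean_pkt_bits=12000.0, prop_delay_ms=20.0):
    """M/M/1 sketch: mean sojourn time 1/(mu - lambda) plus propagation delay.
    Rates are converted from Mbps to packets per millisecond."""
    lam = offered_mbps * 1e6 / mean_pkt_bits / 1e3   # packets per ms offered to the link
    mu = capacity_mbps * 1e6 / mean_pkt_bits / 1e3   # packets per ms the link can serve
    if lam >= mu:
        return float("inf")                          # overloaded link: the SLA cannot be met
    return 1.0 / (mu - lam) + prop_delay_ms

# Predicted delay of a flow group if routed on a 100 Mbps overlay link already offered 80 Mbps.
print(predicted_delay_ms(offered_mbps=80.0, capacity_mbps=100.0))  # about 20.6 ms
```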
According to an embodiment of this disclosure, the network entity 200 may periodically make the one or more routing decisions 202 for the plurality of flow groups. According to another embodiment of this disclosure, the network entity 200 may make the one or more routing decisions 202 for the plurality of flow groups in response to one or more trigger-events.
The controller can collect information about intent capabilities of devices. In some embodiments, the network entity 200 may be further configured to provide information about whether the network entity 200 supports a particular global intent to the controller 300.
It may be worth mentioning that, the intents can be defined globally for all applications, or dedicated for different groups of applications/tenants. They can also be different depending on devices (e.g., core routers versus access routers).
According to an embodiment of this disclosure, the policy information 201 comprises a first set of global intents dedicated for a first group of applications running on the network entity 200, and a second set of global intents dedicated for a second group of applications running on the network entity 200. Possibly, the first group of applications and the second group of applications may have different QoS requirements. For instance, the first group may require moderate traffic and high SLA requirements (Real-time type). The second group may require high traffic and moderate SLA requirements (Business type), or moderate traffic and low/no SLA requirements (Bulk type).
The previous embodiments discuss making routing decisions based on policy information including global intents. It should be noted that this disclosure can also apply to load balancing. In some embodiments, the network entity 200 may be further configured to make one or more load balancing decisions for the plurality of flow groups based on the policy information 201.
In some embodiments, in addition to the policy information 201, the network entity 200 may make a load balancing decision for each of the plurality of flow groups based on the policy information 201, and the traffic prediction result and/or the SLA prediction result of that flow group. Details are similar as discussed in the previous embodiments related to making routing decisions.
Notably, a main aspect of this application is that intents are provided to routers (objectives to optimize). As the optimization of intents can conflict with the optimization of SLAs, prediction models need to be embedded into devices so that the intent-based SPR module makes efficient decisions (e.g., routing decisions or load balancing decisions), i.e., so that it can anticipate the consequences of the decisions it is taking.
In particular, the controller 300 is adapted for assisting in routing a plurality of flow groups in a network. The controller is configured to provide policy information 201 to a network entity 200, wherein the policy information 201 comprises one or more global intents and information about at least one SLA requirement for each of the plurality of flow groups, wherein each global intent is indicative of one requirement of a network operator.
This disclosure further proposes a controller for providing necessary information to the network entity 200 and thus assisting the network entity 200 in making routing or load balancing decisions. The network entity 200 may be the network entity shown in
As discussed in previous embodiments, each global intent may be indicative of a requirement of the network operator related to one of the following: link utilization, financial cost, quality, congestion, safety, stability, and performance. In some embodiments, each global intent may be associated with a priority or a weight indicating an order of importance of that global intent in the one or more global intents.
In some embodiments, the policy information 201 further comprises information about a set of overlay links that can be used for routing the plurality of flow groups, wherein each overlay link comprises a plurality of underlay links.
According to an embodiment of the disclosure, the controller 300 may be further configured to provide one or more traffic prediction parameters related to traffic prediction, to the network entity 200.
In some embodiments, the controller 300 may be further configured to provide one or more SLA prediction parameters related to SLA prediction, to the network entity 200.
According to an embodiment of the disclosure, the controller 300 may be further configured to obtain statistical information of the plurality of flow groups from the network entity 200, wherein the statistical information of each flow group comprises at least one of throughput information and QoS statistics of that flow group.
According to an embodiment of the disclosure, the controller 300 may be further configured to obtain information about whether the network entity 200 supports a particular global intent from the network entity 200.
Accordingly, the controller 300 may provide the policy information 201 based on the obtained information about whether the network entity 200 supports a particular global intent.
In this disclosure, an apparatus and a method for intent-based SPR are proposed. Embodiments of the disclosure provide an intent-based policy routing system for network devices. Network devices are enabled to decide load balancing or routing for each flow group they manage based on SLA requirements and global intents. Predictions can also be used (optional). Global intents can be related to the minimization of financial costs, network congestion, or the maximization of performance (i.e., the so-called “high quality” intent). Further, configuration updates may be received at a slow frequency from the network controller. Updates may contain intent parameters and performance prediction models.
The present disclosure has been described in conjunction with various embodiments as examples as well as implementations. However, other variations can be understood and effected by those persons skilled in the art practicing the claimed embodiments of the disclosure, from the study of the drawings, this disclosure, and the independent claims. In the claims as well as in the description the word “comprising” does not exclude other elements or steps and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.
Furthermore, any method according to embodiments of the disclosure may be implemented in a computer program, having code means, which when run by processing means causes the processing means to execute the steps of the method. The computer program is included in a computer-readable medium of a computer program product. The computer-readable medium may comprise essentially any memory, such as a ROM (Read-Only Memory), a PROM (Programmable Read-Only Memory), an EPROM (Erasable PROM), a Flash memory, an EEPROM (Electrically Erasable PROM), or a hard disk drive.
Moreover, it is realized by the skilled person that embodiments of the network entity 200 or the controller 300 comprise the necessary communication capabilities in the form of, e.g., functions, means, units, elements, etc., for performing the solution. Examples of other such means, units, elements and functions are: processors, memory, buffers, control logic, encoders, decoders, rate matchers, de-rate matchers, mapping units, multipliers, decision units, selecting units, switches, interleavers, de-interleavers, modulators, demodulators, inputs, outputs, antennas, amplifiers, receiver units, transmitter units, DSPs, trellis-coded modulation (TCM) encoders, TCM decoders, power supply units, power feeders, communication interfaces, communication protocols, etc., which are suitably arranged together for performing the solution.
Especially, the processor(s) of the network entity 200, or the controller 300 may comprise, e.g., one or more instances of a Central Processing Unit (CPU), a processing unit, a processing circuit, a processor, an Application Specific Integrated Circuit (ASIC), a microprocessor, or other processing logic that may interpret and execute instructions. The expression “processor” may thus represent a processing circuitry comprising a plurality of processing circuits, such as, e.g., any, some or all of the ones mentioned above. The processing circuitry may further perform data processing functions for inputting, outputting, and processing of data comprising data buffering and device control functions, such as call processing control, user interface control, or the like.
This application is a continuation of International Application No. PCT/CN2021/119654, filed on Sep. 22, 2021, the disclosure of which is hereby incorporated by reference in its entirety.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/CN2021/119654 | Sep 2021 | WO |
| Child | 18611120 | | US |