Congestion Control Monitoring

Information

  • Patent Application
  • Publication Number
    20250030640
  • Date Filed
    November 19, 2021
  • Date Published
    January 23, 2025
Abstract
The present disclosure relates to a method of monitoring congestion for incoming data packets and a device (10) performing the method. In a first aspect, a method of a device (10) configured to monitor congestion for incoming data packets is provided. The method comprises determining (S101) whether or not a bit rate of incoming data packets indicated to be sent from a node (11) capable of scalable congestion control exceeds a predetermined bit rate threshold value being lower than a maximum allowable bit rate of outgoing data packets from the device (10); and if so selectively marking (S102) the outgoing data packets as being subjected to congestion, wherein the marking is performed by less frequently marking outgoing data packets having a bit rate closer to the predetermined bit rate threshold value and more frequently marking outgoing data packets having a bit rate closer to the maximum allowable bit rate.
Description
TECHNICAL FIELD

The present disclosure relates to a method of monitoring congestion for incoming data packets and a device performing the method.


BACKGROUND

Upon performing so-called traffic policing of data packets in a communications network, a node performing the traffic policing will typically monitor incoming data packets and conclude whether the incoming data packet rate complies with requirements set forth in e.g. a service level agreement (SLA) setup between a subscriber and a network operator.


If the incoming data rate is higher than what is stipulated in the SLA, the policing node applies policing and subjects the incoming data packets to enforcement: packets are discarded, since the incoming data rate exceeds the maximum allowable rate stipulated in the SLA and thus cannot be met by the policing node.


Alternatively, the node may apply traffic shaping to the incoming data packets by buffering the incoming data packets and then forwarding the buffered packets at the lower output data rate stipulated in the SLA.


The benefit of traffic shaping is that fewer packets are discarded as compared to traffic policing, but this comes at the cost of increased end-to-end packet latency due to the buffering delay, and of a performance impact at the enforcement node due to the requirement of maintaining buffers. Further, there is a limit as to how many packets can be buffered before incoming packets must be dropped. There is thus room for improvement when handling incoming data packets.


SUMMARY

One objective is to solve, or at least mitigate, this problem in the art and to provide an improved method of monitoring congestion for incoming data packets.


This objective is attained in a first aspect by a method of a device configured to monitor congestion for incoming data packets. The method comprises determining whether or not a bit rate of incoming data packets indicated to be sent from a node capable of scalable congestion control exceeds a predetermined bit rate threshold value being lower than a maximum allowable bit rate of outgoing data packets from the device, and if so selectively marking the outgoing data packets as being subjected to congestion, wherein the marking is performed by less frequently marking outgoing data packets having a bit rate closer to the predetermined bit rate threshold value and more frequently marking outgoing data packets having a bit rate closer to the maximum allowable bit rate.


This objective is attained in a second aspect by a device configured to monitor congestion for incoming data packets, said device comprising a processing unit and a memory, said memory containing instructions executable by said processing unit, whereby the device is operative to determine whether or not a bit rate of incoming data packets indicated to be sent from a node capable of scalable congestion control exceeds a predetermined bit rate threshold value being lower than a maximum allowable bit rate of outgoing data packets from the device, and if so selectively marking the outgoing data packets as being subjected to congestion, wherein the marking is performed by less frequently marking outgoing data packets having a bit rate closer to the predetermined bit rate threshold value and more frequently marking outgoing data packets having a bit rate closer to the maximum allowable bit rate.


Advantageously, the predetermined bit rate threshold value is set as a “safety” threshold value allowing the device to signal in advance that there is a risk of upcoming congestion by marking the outgoing data packets accordingly, where incoming data packets with a bit rate closer to the predetermined bit rate threshold value will be outputted with a congestion marking less often than packets having a bit rate closer to the maximum allowable bit rate.


A node receiving the data packets being output from the device may conclude, from the ratio of congestion-marked packets to the total number of received data packets, the risk of congestion occurring; if few congestion-marked packets are received, the risk of congestion is lower, while if many congestion-marked packets are received, the risk is higher.


In an embodiment, the method comprises determining whether or not the bit rate of incoming data packets exceeds a maximum allowable bit rate of outgoing data packets from the device, and if so marking all the outgoing data packets as being subjected to congestion.


In an embodiment, the method comprises computing a measure indicating a frequency with which outgoing data packets are marked as being subjected to congestion, wherein no data packets having a bit rate being equal to or lower than the predetermined bit rate threshold value are marked, while all data packets having a bit rate being equal to or higher than a maximum allowable bit rate are marked as being subjected to congestion.


In an embodiment, the frequency with which outgoing data packets are marked as being subjected to congestion increases linearly starting at the predetermined bit rate threshold value and ending at the maximum allowable bit rate.


In an embodiment, the measure is computed as a relation between the difference between the bit rate of the incoming data packets and the predetermined bit rate threshold value and the difference between the maximum allowable bit rate of outgoing data packets and the predetermined bit rate threshold value.


In an embodiment, the marking of the outgoing data packets being subjected to congestion is performed by setting an Explicit Congestion Notification (ECN) flag in an outgoing data packet header to “11”.


In an embodiment, the device is configured to either discard or buffer incoming data packets should the bit rate of the incoming data packets exceed the maximum allowable bit rate of outgoing data packets from the device.


In a third aspect, a computer program is provided comprising computer-executable instructions for causing a device to perform the method of the first aspect when the computer-executable instructions are executed on a processing unit included in the device.


In a fourth aspect, a computer program product comprising a computer readable medium is provided, the computer readable medium having the computer program of the third aspect embodied thereon.


Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to “a/an/the element, apparatus, component, means, step, etc.” are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.





BRIEF DESCRIPTION OF THE DRAWINGS

Aspects and embodiments are now described, by way of example, with reference to the accompanying drawings, in which:



FIG. 1 shows a fifth generation (5G) communication system in which embodiments may be applied;



FIG. 2 shows a flowchart illustrating a method of a device configured to control bit rate of incoming data packets according to an embodiment;



FIG. 3 shows a flowchart illustrating a method of a device configured to control bit rate of incoming data packets according to a further embodiment;



FIG. 4 illustrates a computed measure indicating risk of congestion according to an embodiment; and



FIG. 5 illustrates a device configured to control bit rate of incoming data packets according to an embodiment.





DETAILED DESCRIPTION

The aspects of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the invention are shown.


These aspects may, however, be embodied in many different forms and should not be construed as limiting; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and to fully convey the scope of all aspects of the invention to those skilled in the art. Like numbers refer to like elements throughout the description.



FIG. 1 illustrates a core network of a fifth generation (5G) communication system, commonly referred to as New Radio (NR), being connected to a Radio Access Network (RAN) 12 and a wireless communications device 11 referred to as a User Equipment (UE), such as a smart phone, a tablet, a gaming console, a connected vehicle, etc., and further being connected to a data network 13 such as the Internet. Commonly, the core network in a 5G telecommunication system is referred to as 5GC. In an example, the UE 11 may be embodied in the form of a tablet communicating with a gaming server 22 hosting computer games to be accessed by a user of the tablet.


The 5GC comprises a number of entities referred to as Network Functions (NFs) which will be described in the following. A User Plane Function (UPF) 10 is a service function that processes user plane packets; processing may include altering the packet's payload and/or header, interconnection to data network(s), packet routing and forwarding, etc. The UPF 10 may in practice be decomposed into many small UPFs referred to as μUPFs. Embodiments may be implemented in such a UPF 10 as subsequently will be discussed.


Further, the 5GC comprises a Network Exposure Function (NEF) 14 for exposing capabilities and events, an NF (Network Function) Repository Function (NRF) 15 for providing discovery and registration functionality for NFs, a Policy Control Function (PCF) 16, Unified Data Management (UDM) 17 for storing subscriber data and profiles, and an Application Function (AF) 18 for supporting application influence on traffic routing.


Moreover, the 5GC comprises an Authentication Server Function (AUSF) 19 storing data for authentication of the UE 11, an Access and Mobility Function (AMF) 20 for providing UE-based authentication, authorization, mobility management, etc., and a Session Management Function (SMF) 21 configured to perform session management, e.g. session establishment, modify and release, etc.


If implemented in a third generation (3G) communication system, commonly referred to as Universal Mobile Telecommunications System (UMTS), embodiments would typically be implemented in a so-called Gateway GPRS (“General Packet Radio Service”) Support Node (GGSN), while the functionality of the PCF would be performed by a Home Location Register (HLR).


If implemented in a fourth generation (4G) communication system, commonly referred to as Long Term Evolution (LTE), embodiments would typically be implemented in a Packet Data Network Gateway (PGW) while the functionality of the PCF would be performed by a Policy and Charging Rules Function (PCRF).


Now, for data packets transmitted in the data plane from the UE 11 over the RAN 12 to the UPF 10 and further to the data network 13 (or in the opposite direction) for reception at the gaming server 22, a data rate for the packets may be controlled by the UPF 10. This is commonly referred to as traffic policing. The UPF 10 when performing traffic policing monitors incoming data packets and concludes whether the incoming data packet rate complies with requirements set forth in e.g. a service level agreement (SLA) setup between a subscriber/user of the UE and a network operator.


Such information is typically supplied to the UPF 10 by the PCF 16 over interface N7 to the SMF 21 and further over interface N4.


If the incoming data rate is higher than what is stipulated in the SLA, the UPF 10 applies policing and subjects the incoming data packets to enforcement: packets are discarded, since the incoming data rate exceeds the maximum allowable rate stipulated in the SLA and thus cannot be met by the UPF 10.


Policing may be enforced using a so-called token bucket algorithm envisaging a virtual bucket filled with tokens stipulating capacity. Whenever a data packet of n bytes arrives, it will be transmitted if there are at least n tokens left in the bucket (which are then consumed) or dropped if there are fewer than n tokens left.
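The token bucket policer described above can be sketched as follows. This is a minimal illustration; the class name, byte-based units, and monotonic-clock refill are assumptions for the sketch, not taken from the disclosure:

```python
import time

class TokenBucket:
    """Minimal token-bucket policer: tokens accrue at `rate` bytes/s up to
    `capacity` bytes; a packet is forwarded only if enough tokens remain."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate                  # refill rate in bytes per second
        self.capacity = capacity          # bucket depth in bytes
        self.tokens = capacity            # bucket starts full
        self.last = time.monotonic() if now is None else now

    def allow(self, packet_bytes, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes   # consume tokens and transmit
            return True
        return False                      # police: drop the packet
```

Passing `now` explicitly makes the refill deterministic for testing; a real policer would simply use the clock.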


Alternatively, the UPF 10 may apply traffic shaping to the incoming data packets by buffering the incoming data packets and then forwarding the buffered packets at the lower output data rate stipulated in the SLA. The benefit of traffic shaping is that fewer packets are dropped as compared to traffic policing, but this comes at the cost of increased end-to-end packet latency due to the buffering delay, and of a performance impact at the enforcement node due to the requirement of maintaining buffers. Further, there is a limit as to how many packets can be buffered before incoming packets must be dropped.


Explicit Congestion Notification (ECN) is an extension to the Internet Protocol (IP) and to the Transmission Control Protocol (TCP) allowing the UPF 10 performing traffic shaping to mark any outgoing data packets—in practice performed by setting an ECN flag in the IP header—to indicate congestion occurring at the UPF 10.
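At the packet level, such marking amounts to rewriting two bits of the IP header. The sketch below assumes IPv4, where the ECN field occupies the two least-significant bits of the TOS byte per RFC 3168; the helper names are illustrative:

```python
# ECN codepoints occupy the two least-significant bits of the IPv4 TOS
# byte (RFC 3168); the upper six bits carry the DSCP and must be preserved.
NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

def get_ecn(tos_byte):
    """Extract the two ECN bits from a TOS byte."""
    return tos_byte & 0b11

def mark_ce(tos_byte):
    """Return the TOS byte with the ECN field set to CE ("11"),
    leaving the DSCP bits untouched."""
    return (tos_byte & ~0b11) | CE
```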


A receiver of the marked packets being output from the UPF 10, i.e. in this example the gaming server 22 in the data network 13, informs the sending UE 11 of the congestion indication provided by the marked data packet, wherein the UE 11 reduces its transmission rate in order to avoid data packet drop or buffering at the UPF 10. The marking approach is commonly referred to as Active Queue Management (AQM).


A slightly more advanced approach to congestion control is an approach referred to as Low Latency, Low Loss, Scalable throughput (L4S). L4S is a queueing mechanism designed to reduce queueing latency for Internet traffic. An L4S-enabled node, such as the UPF 10, will determine the size of its buffering queue and signal congestion by setting a flag when a predetermined queue size threshold is exceeded, thereby indicating that the UPF 10 will shortly reach a buffering limit and thus have to drop packets unless the incoming data rate decreases.


This allows for more fine-grained control of both throughput and queueing delay than classic congestion controllers. L4S repurposes the ECN flags, or codepoints, as illustrated in Table 1 below, such that ECT(1) is used to signal L4S support and ECT(0) is used to signal conventional ECN support.









TABLE 1
L4S header format.

Flag    Name        Meaning
00      Non-ECT     Not ECN-capable transport
01      ECT(1)      L4S-capable transport
10      ECT(0)      ECN-capable transport
11      CE          Congestion experienced
To conclude, current traffic policing functions cap flows to a specific bitrate using some form of token bucket algorithm, where packets are dropped when the incoming rate exceeds the maximum allowed output bitrate. Hence, dropped packets will have to be retransmitted causing application layer jitter. Further, a high degree of packet loss may create situations where the sending-side congestion controller reduces its sending rate more than necessary.


Traffic shapers buffer data packets instead of dropping the packets, which resolves the problem of data packet retransmissions (until the buffering capacity is reached). However, this leads to large end-to-end delays and requires a high memory capacity in the policing node.



FIG. 2 shows a flowchart illustrating a method of a device configured to monitor congestion for incoming data packets according to an embodiment.


The device in which the method is performed is in this embodiment exemplified by the UPF 10.


In a first step S101, upon receiving data packets from a node being capable of scalable congestion control, in this exemplifying embodiment the UE 11, the UPF 10 determines whether or not a bit rate PRI of the received incoming data packets exceeds a predetermined bit rate threshold value TM.


If not, the UPF 10 will conclude that there is no immediate risk of congestion and no congestion marking will be performed.


In this example, TM=8 Mbit/s and a further condition of the predetermined bit rate threshold value TM is that it is lower than a maximum allowable bit rate TO of outgoing data packets from the UPF 10. The maximum allowable bit rate TO is typically stipulated in the previously discussed SLA managed by the PCF 16. The user of the UE 11 thus holds a subscription allowing the user a maximum bit rate TO in the network.


In practice, TM and TO may be included in information specifying a Policy and Charging Control (PCC) rule sent by the PCF 16 to the SMF 21 over the N7 interface. The SMF 21 may translate the PCC rule to a modified QoS (“Quality of Service”) Enforcement Rule (QER) comprising the information, in addition to comprising information specifying that ECN is to be applied.


A UPF in 5GC will apply traffic policing to flows of packets that are matched by packet detection information (PDI) and have a corresponding QER. PDI outlines key filter elements that should be applied in order to detect a specific packet, e.g. IP address, port information, etc. If a modified QER has been received for a flow that matches the PDI, and that flow is advertising ECN capability with the ECT(1) flag set, the QER indicates that L4S policing will be performed.


In this example, TO=10 Mbit/s indicating that the maximum data rate PRO with which the UPF 10 will output data packets pertaining to this particular subscription upon enforcing policing is 10 Mbit/s.


However, if the rate PRI of the incoming packets exceeds the predetermined bit rate threshold value TM, the UPF 10 will conclude that there is an upcoming risk of congestion should the bit rate of the incoming data packets further increase, and a congestion marking will selectively be performed in step S102 by setting the flag of Table 1 to 11, thereby indicating congestion (CE, “congestion experienced”).


In other words, the UPF 10 performs a selective marking in step S102 of the outgoing data packets indicating that there is a risk of upcoming congestion for incoming data packets, wherein the marking is performed by less frequently marking outgoing data packets having a bit rate closer to the predetermined bit rate threshold value TM and more frequently marking outgoing data packets having a bit rate closer to the maximum allowable bit rate TO.


Advantageously, with this embodiment, a safety threshold TM is set allowing the UPF 10 to signal in advance that there is a risk of upcoming congestion by marking the outgoing data packets accordingly. This is communicated by setting the ECN flag to “11” in the header of the outgoing data packets to the gaming server 22, whereby the gaming server 22 subsequently informs the original sending node, i.e. the UE 11, that there is a risk of congestion if the UE 11 were to increase the bit rate PRI of transmitted data packets.


Advantageously, the gaming server 22 may conclude, from the ratio of CE-marked packets (having the ECN flag set to “11”) to the total number of received data packets, the risk of congestion occurring.



FIG. 3 shows a flowchart illustrating a further embodiment. Thus, after having determined that the bit rate PRI of the incoming data packets exceeds the predetermined bit rate threshold value TM, the UPF 10 performs a computation of a measure p indicating a frequency with which outgoing data packets are marked as being subjected to congestion (thereby also indicating the risk of upcoming congestion) as described in the following.









p = 0,                         if PRI ≤ TM
p = (PRI - TM) / (TO - TM),    if TM < PRI ≤ TO        (1)
p = 1,                         if PRI > TO







Hence, if the bit rate PRI of incoming packets is determined in step S101 to not exceed the predetermined bit rate threshold value TM, i.e. the safety threshold, none of the outgoing data packets will be marked as congested. This indicates to the gaming server 22 that there currently is no risk of congestion (or at least a low risk).


At the other end of the spectrum, if the bit rate PRI of incoming data packets is determined in step S101a to exceed the maximum allowable bit rate TO for outgoing data packets, all the outgoing data packets will be CE-marked in step S102a indicating that there indeed is congestion, even at the current bit rate PRI for the incoming data packets. The gaming server 22 will thus conclude that there currently is congestion in that all received data packets are marked to have an ECN flag of “11”.


For incoming data packets with TM < PRI ≤ TO, a measure p indicating the frequency with which outgoing data packets are marked as being subjected to congestion is computed in step S101b according to equation (1).
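Equation (1) can be sketched directly in code; the function name and float bit-rate arguments are illustrative assumptions:

```python
def marking_probability(pr_i, t_m, t_o):
    """Marking frequency p of equation (1): 0 at or below the safety
    threshold TM, rising linearly to 1 at the maximum allowable rate TO."""
    if pr_i <= t_m:
        return 0.0
    if pr_i > t_o:
        return 1.0
    return (pr_i - t_m) / (t_o - t_m)
```

With TM = 8 Mbit/s and TO = 10 Mbit/s as in the example, `marking_probability(9e6, 8e6, 10e6)` yields 0.5.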



FIG. 4 illustrates the risk of upcoming congestion by the UPF 10 using equation (1) to compute measure p indicating the risk given that TM=8 Mbit/s and TO=10 Mbit/s.


As can be seen, as long as the incoming bitrate PRI does not exceed the safety threshold TM=8 Mbit/s, there is no immediate risk of upcoming congestion and no congestion marking of outgoing packets will be undertaken by the UPF 10. The flag of Table 1 would thus be set to either 01 or 10 depending on whether or not the UE 11 is L4S capable.


For any incoming bitrate PRI exceeding the safety threshold TM=8 Mbit/s, the measure p is computed according to equation (1) where for PRI=8.25 Mbit/s, p=0.125, for PRI=9 Mbit/s, p=0.5, and for PRI=9.75 Mbit/s, p=0.875, and so on until PRI reaches the maximum value TO at which there is a 100% marking frequency of outgoing data packets at the UPF 10. In fact, for any PRI exceeding the maximum threshold value TO, there is indeed already congestion as stipulated by TO.


With the example hereinabove, 12.5% of the incoming packets having PRI=8.25 Mbit/s will be CE-marked upon being outputted from the UPF 10, 50% of the incoming packets having PRI=9 Mbit/s will be CE-marked upon being outputted, and so on.
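The selective marking itself can be sketched as a per-packet random draw against the computed measure p, so that on average a fraction p of outgoing packets are CE-marked. This is one plausible realization; the disclosure does not prescribe how the marking frequency is implemented:

```python
import random

CE = 0b11  # "congestion experienced" codepoint of Table 1

def maybe_mark(tos_byte, p, rng=random.random):
    """Selectively CE-mark one outgoing packet: over many packets,
    a fraction p of them leave the device with the ECN field set to 11."""
    if rng() < p:
        return (tos_byte & ~0b11) | CE  # overwrite ECN bits, keep DSCP
    return tos_byte
```

A deterministic alternative would mark every ⌈1/p⌉-th packet instead of drawing randomly.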


Thus, with reference to FIG. 4, the gaming server 22 receiving the data packets being outputted from the UPF 10 may conclude that the computed measure p, as reflected at the gaming server 22 by the ratio of CE-marked packets to non-CE-marked packets, indicates the degree to which the incoming bit rate can be increased before congestion actually occurs; the lower the ratio, the higher the possible increase.
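The receiver-side inference described above can be sketched as follows; the helper names, and the assumption that the receiver knows TM and TO out of band, are illustrative:

```python
def observed_marking_ratio(ecn_fields):
    """Receiver-side estimate of p: the fraction of CE-marked ("11")
    packets among all received packets."""
    if not ecn_fields:
        return 0.0
    return sum(1 for e in ecn_fields if e == 0b11) / len(ecn_fields)

def estimated_headroom(ratio, t_m, t_o):
    """Invert equation (1): infer the sender's incoming bit rate from the
    observed marking ratio and report the remaining headroom up to TO."""
    inferred_rate = t_m + ratio * (t_o - t_m)
    return t_o - inferred_rate
```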


As is understood, if PRI>TO, the UPF 10 may either perform policing thus dropping the incoming packets, or perform shaping thereby buffering the incoming packets, or a combination of both (i.e. dropping some packets while queuing others).


Incoming data packets that do not indicate support for scalable congestion control (i.e. marked “00” as illustrated in Table 1) can coexist with scalable flows in the policing function provided by the UPF 10 in that packets belonging to non-scalable flows are not marked by the UPF 10. If scalable and non-scalable traffic are passing through the policing function of the UPF 10 in parallel, the packets belonging to the scalable flows will be marked based on the incoming packet bitrate of the combined set of data packet flows.



FIG. 5 illustrates a device 10 such as e.g. a UPF configured to monitor congestion for incoming data packets according to an embodiment, where the steps of the method performed by the device 10 in practice are performed by a processing unit 111 embodied in the form of one or more microprocessors arranged to execute a computer program 112 downloaded to a storage medium 113 associated with the microprocessor, such as a Random Access Memory (RAM), a Flash memory or a hard disk drive. The processing unit 111 is arranged to cause the device 10 to carry out the method according to embodiments when the appropriate computer program 112 comprising computer-executable instructions is downloaded to the storage medium 113 and executed by the processing unit 111. The storage medium 113 may also be a computer program product comprising the computer program 112. Alternatively, the computer program 112 may be transferred to the storage medium 113 by means of a suitable computer program product, such as a Digital Versatile Disc (DVD) or a memory stick. As a further alternative, the computer program 112 may be downloaded to the storage medium 113 over a network. The processing unit 111 may alternatively be embodied in the form of a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), etc. The device 10 further comprises a communication interface 114 (wired or wireless) over which it is configured to transmit and receive data.


The device 10 of FIG. 5 may be provided as a standalone device or as a part of at least one further device. For example, the device 10 may be provided in a node of a core network, or in an appropriate device of a radio access network (RAN), such as in a radio base station. Alternatively, functionality of the device 10 may be distributed between at least two devices, or nodes. These at least two nodes, or devices, may either be part of the same network part (such as the core network) or may be spread between at least two such network parts.


Thus, a first portion of the instructions performed by the device 10 may be executed in a first device, and a second portion of the instructions performed by the device 10 may be executed in a second device; the herein disclosed embodiments are not limited to any particular number of devices on which the instructions performed by the device 10 may be executed.


Hence, the method according to the herein disclosed embodiments is suitable to be performed by a device 10 residing in a cloud computational environment. Therefore, although a single processing unit 111 is illustrated in FIG. 5, the processing unit 111 may be distributed among a plurality of devices, or nodes. The same applies to the computer program 112. Embodiments may be entirely implemented in a virtualized environment.


The aspects of the present disclosure have mainly been described above with reference to a few embodiments and examples thereof. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims.


Thus, while various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Claims
  • 1-20. (canceled)
  • 21. A method of operation by a device, to monitor congestion for incoming data packets, the method comprising: determining whether or not a bit rate of incoming data packets indicated to be sent from a node capable of scalable congestion control exceeds a predetermined bit rate threshold value being lower than a maximum allowable bit rate of outgoing data packets from the device; and responsive to the predetermined bit rate threshold value being exceeded, selectively marking the outgoing data packets as being subjected to congestion, wherein the marking is performed by less frequently marking outgoing data packets having a bit rate closer to the predetermined bit rate threshold value and more frequently marking outgoing data packets having a bit rate closer to the maximum allowable bit rate.
  • 22. The method of claim 21, further comprising: determining whether or not the bit rate of incoming data packets exceeds a maximum allowable bit rate of outgoing data packets from the device; and responsive to the maximum allowable bit rate being exceeded, marking all the outgoing data packets as being subjected to congestion.
  • 23. The method of claim 21, further comprising computing a measure indicating a frequency with which outgoing data packets are marked as being subjected to congestion, wherein no data packets having a bit rate being equal to or lower than the predetermined bit rate threshold value is marked while all data packets having a bit rate being equal to or higher than a maximum allowable bit rate are marked as being subjected to congestion.
  • 24. The method of claim 23, wherein the frequency with which outgoing data packets are marked as being subjected to congestion increases linearly starting at the predetermined bit rate threshold value and ending at the maximum allowable bit rate.
  • 25. The method of claim 23, the measure being computed as a relation between a difference between the bit rate of the incoming data packet and the predetermined bit rate threshold value and a difference between maximum allowable bit rate of outgoing data packets and the predetermined bit rate threshold value.
  • 26. The method of claim 21, the scalable congestion control comprising Low Latency, Low Loss, Scalable throughput (L4S).
  • 27. The method of claim 21, the marking of the outgoing data packets being subjected to congestion being performed by setting an Explicit Congestion Notification (ECN) flag in an outgoing data packet header to “11”.
  • 28. The method of claim 21, wherein the device is one of a User Plane Function (UPF) being configured to receive the predetermined bit rate threshold value and the maximum allowable bit rate from a Policy Control Function (PCF), a Gateway General Packet Radio Service Support Node (GGSN) being configured to receive the predetermined bit rate threshold value and the maximum allowable bit rate from a Home Location Register (HLR), and a Packet Data Network Gateway (PGW) being configured to receive the predetermined bit rate threshold value and the maximum allowable bit rate from a Policy and Charging Rules Function (PCRF).
  • 29. The method of claim 21, comprising either discarding or buffering incoming data packets responsive to the bit rate of the incoming data packets exceeding the maximum allowable bit rate of outgoing data packets from the device.
  • 30. A non-transitory computer readable medium storing computer program instructions that, when executed by a processor of a device, configure the device to monitor congestion for incoming data packets based on causing the device to: determine whether or not a bit rate of incoming data packets indicated to be sent from a node capable of scalable congestion control exceeds a predetermined bit rate threshold value being lower than a maximum allowable bit rate of outgoing data packets from the device; and responsive to the predetermined bit rate threshold value being exceeded, selectively mark the outgoing data packets as being subjected to congestion, wherein the marking is performed by less frequently marking outgoing data packets having a bit rate closer to the predetermined bit rate threshold value and more frequently marking outgoing data packets having a bit rate closer to the maximum allowable bit rate.
  • 31. A device comprising: a memory storing instructions; and a processor to execute the instructions, whereby the processor controls the device to monitor congestion for incoming data packets, based on causing the device to: determine whether or not a bit rate of incoming data packets indicated to be sent from a node capable of scalable congestion control exceeds a predetermined bit rate threshold value being lower than a maximum allowable bit rate of outgoing data packets from the device; and if so, selectively mark the outgoing data packets as being subjected to congestion, wherein the marking is performed by less frequently marking outgoing data packets having a bit rate closer to the predetermined bit rate threshold value and more frequently marking outgoing data packets having a bit rate closer to the maximum allowable bit rate.
  • 32. The device of claim 31, the device further being caused to: determine whether or not the bit rate of incoming data packets exceeds a maximum allowable bit rate of outgoing data packets from the device; and if so mark all the outgoing data packets as being subjected to congestion.
  • 33. The device of claim 31, the device further being caused to: compute a measure indicating a frequency with which outgoing data packets are marked as being subjected to congestion; wherein no data packets having a bit rate being equal to or lower than the predetermined bit rate threshold value is marked while all data packets having a bit rate being equal to or higher than a maximum allowable bit rate are marked as being subjected to congestion.
  • 34. The device of claim 33, wherein the frequency with which outgoing data packets are marked as being subjected to congestion increases linearly starting at the predetermined bit rate threshold value and ending at the maximum allowable bit rate.
  • 35. The device of claim 33, the device further being caused to compute the measure as a relation between a difference between the bit rate of the incoming data packet and the predetermined bit rate threshold value and a difference between maximum allowable bit rate of outgoing data packets and the predetermined bit rate threshold value.
  • 36. The device of claim 31, the scalable congestion control comprising Low Latency, Low Loss, Scalable throughput (L4S).
  • 37. The device of claim 31, the device being caused to perform the marking of the outgoing data packets being subjected to congestion by setting an Explicit Congestion Notification (ECN) flag in an outgoing data packet header to “11”.
  • 38. The device of claim 31, wherein the device is one of a User Plane Function (UPF) being configured to receive the predetermined bit rate threshold value and the maximum allowable bit rate from a Policy Control Function (PCF), a Gateway General Packet Radio Service Support Node (GGSN) being configured to receive the predetermined bit rate threshold value and the maximum allowable bit rate from a Home Location Register (HLR), and a Packet Data Network Gateway (PGW) being configured to receive the predetermined bit rate threshold value and the maximum allowable bit rate from a Policy and Charging Rules Function (PCRF).
  • 39. The device of claim 31, the device being caused to either discard or buffer incoming data packets should the bit rate of the incoming data packets exceed the maximum allowable bit rate of outgoing data packets from the device.
PCT Information
Filing Document Filing Date Country Kind
PCT/SE2021/051157 11/19/2021 WO