The disclosure relates to the field of wireless communication networks, and for example, relates to a system and a method for tuning system parameters for one or more network slices.
With the advancements in wireless technology and communication systems, the demand for wireless data traffic has increased since deployment of 4th-generation (4G) communication systems. To meet such demand for wireless data traffic, efforts have been made to develop an improved 5th-generation (5G) or pre-5G communication system. Therefore, the 5G or pre-5G communication system is also called a ‘beyond 4G network’ or a ‘post-long-term evolution (LTE) system’.
5G is developed to provide higher bandwidth, lower End-to-End (E2E) latency, and more flexible and reliable network access. For example, 5G is configured to support stable network connection for high-end user devices and high-density distributed sensors, which are necessary for Internet of Things (IoT) based applications. In addition to these features, 5G provides customized services to the users in terms of specific requirements for different verticals, such as the manufacturing, automotive, and health-care industries, and the like. To provide the above-mentioned services, the concept of network slicing is adopted in 5G. The core idea behind network slicing is to divide a single physical network into multiple E2E logically separated sub-networks, each of which is called a Network Slice (NS). Specifically, every NS owns a management domain and an E2E logical topology. Operators can flexibly create, modify, or delete an NS as per different Quality of Service (QoS) requirements without disrupting other existing NSs.
An example block diagram depicting a system environment including a deployment of the NS is shown in
Further, each NS is identified by the S-NSSAI, which includes a Slice Service Type (SST) for identifying a service for which the NS is suitable. A network operator can use either standardized SST values (e.g., 1 for enhanced mobile broadband, 2 for ultra-reliable low latency communications, 3 for massive IoT, 4 for V2X, and 5 for high-performance machine-type communications) or non-standardized SST values that can be locally defined. The UE is configured with a set of User Equipment Route Selection (URSP) rules 114 that allows the UE 102 to select the S-NSSAI. The S-NSSAI is selected based on the application that the UE is required to use, based on one or more parameters, such as QoS requirements of the application. The UE has two Protocol Data Unit (PDU) sessions, where one is established via the S-NSSAI #1 (URLLC slice) towards the Data Network Name (DNN) of the internet while the other is established via the S-NSSAI #2 (eMBB slice) towards the same DNN. When the UE is required to send traffic of App1, the UE finds the matching traffic descriptor in the URSP rule and selects the PDU session according to the corresponding route selection descriptor (e.g., the PDU session of S-NSSAI #1 and the DNN of the internet).
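The rule matching described above can be sketched in simplified form. The rule fields, application identifiers, and matching logic below are illustrative assumptions rather than a conformant 3GPP URSP data model.

```python
# Illustrative sketch of URSP rule matching: rules are evaluated in
# precedence order, and the first matching traffic descriptor selects
# the route (S-NSSAI/DNN) for the application's traffic.
# Field names and values are hypothetical simplifications.

URSP_RULES = [
    {"precedence": 1, "app_id": "App1",
     "route": {"snssai": "S-NSSAI#1 (URLLC)", "dnn": "internet"}},
    {"precedence": 2, "app_id": "App2",
     "route": {"snssai": "S-NSSAI#2 (eMBB)", "dnn": "internet"}},
]

def select_route(app_id, rules):
    """Return the route selection descriptor of the first matching rule."""
    for rule in sorted(rules, key=lambda r: r["precedence"]):
        if rule["app_id"] == app_id:  # traffic descriptor match
            return rule["route"]
    return None  # in practice a default (match-all) rule would apply

route = select_route("App1", URSP_RULES)
```

In a real UE the traffic descriptor may match on more than the application identifier (e.g., DNN, IP descriptors), but the precedence-ordered, first-match evaluation shown here is the essential behavior.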
Further, a current working example of components associated with the android networking stack for the flow of data packets is shown with the help of
Further, the current implementation assigns static values for 5G without considering one or more scenarios in mmWave bands, such as the blockage problem (e.g., the phenomenon in which the signal cannot pass through an obstacle owing to the directivity, so that the received Signal-to-Noise Ratio (SNR) value is severely degraded), a highly variable channel causing channel fluctuations (e.g., frequent line-of-sight to non-line-of-sight transitions), and the like. Under these scenarios, values associated with the kernel parameters are required to be tuned based on new network conditions. Furthermore, tuning the android networking stack based only on Radio Access Technology (RAT) is not an efficient method; for a given RAT, when the signal condition is not good or under high packet loss conditions, bigger values lead to poor performance. For example, a web page fails to load even if there is enough bandwidth to load the web page.
In general, in the 5GC network 402, a Policy Control Function (PCF) 404 sends a set of URSP rules to an Access and Mobility Management Function (AMF) 406. Further, the 5GC network 402 transmits the set of URSP rules to the UE 408, and the UE 408 may apply the set of URSP rules with default kernel parameter values for all slices, resulting in increased latency for the URLLC slice and lesser throughput for the eMBB slice. An example communication system depicting an application of User Equipment Route Selection (URSP) rules by the UE 408 is shown in
Conventionally, large values are set for the kernel parameters to prioritize Throughput (TP) traffic. This may result in a significant increase in UE stack latency (USL), which affects latency-sensitive traffic of slices, such as the URLLC slice. The USL corresponds to the time taken by packet traversal in the UE stack. In another example, if static values are assigned without considering network conditions for the 5G RAT, then this may lead to poor performance under bad network conditions for throughput-oriented traffic of slices, such as the eMBB slice. Further, URLLC Protocol Data Unit (PDU) sessions involve shorter data transfers. Therefore, their buffers (rmem, wmem) cannot reach peak values quickly, and it is difficult for URLLC traffic to compete with bulk traffic once it crosses the 5G RAN. Hence, boosting the connection speed from the beginning of the session is required for URLLC, for example, by using a bigger initial congestion window (INIT_CWND). Also, during parallel ongoing PDU sessions for each of the URLLC slice and the eMBB slice, the URLLC traffic required to be processed immediately is queued, as cores of the CPU are busy servicing interrupts for the bulk traffic of eMBB and processing it. In yet another example, corresponding to LTE, the USL is not significant compared to network latency, which typically ranges from about 30 ms to 150 ms. However, most of the use cases for 5G, such as cloud gaming and Augmented Reality/Virtual Reality (AR/VR), demand latencies as low as 10 ms. Hence, the USL is comparable to the network latency and is required to be optimized.
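The latency/throughput trade-off above can be illustrated with hypothetical per-slice kernel parameter profiles. The sysctl-style names follow Linux conventions, but all values here are illustrative assumptions, not recommended settings.

```python
# Hypothetical per-slice kernel parameter profiles illustrating the
# trade-off: URLLC favors small buffers and a low unsent-data
# watermark for low stack latency, while eMBB favors large buffers
# for high-bandwidth-delay-product paths. Values are illustrative.

PROFILES = {
    "URLLC": {
        "net.ipv4.tcp_rmem": "4096 87380 524288",
        "net.ipv4.tcp_wmem": "4096 16384 524288",
        "net.ipv4.tcp_notsent_lowat": "16384",
    },
    "eMBB": {
        "net.ipv4.tcp_rmem": "4096 262144 16777216",
        "net.ipv4.tcp_wmem": "4096 262144 16777216",
        "net.ipv4.tcp_notsent_lowat": "4294967295",
    },
}

def max_rmem(profile):
    """Extract the maximum receive-buffer size (bytes) from a profile.

    tcp_rmem is a "min default max" triple; the last field is the max.
    """
    return int(profile["net.ipv4.tcp_rmem"].split()[-1])
```

The point of the sketch is only the asymmetry: a latency-oriented slice caps buffering well below what a throughput-oriented slice needs, which is why a single static configuration serves both poorly.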
Thus, it is desired to address the above-mentioned disadvantages or shortcomings or at least provide a useful alternative for tuning system parameters for one or more network slices.
According to an example embodiment of the present disclosure, a method implemented for tuning system parameters for one or more network slices by a user equipment (UE) is disclosed. The method includes receiving, from a network, a set of user equipment route selection (URSP) rules including slice-specific information for each of the one or more network slices. Further, the method includes determining an application user ID (UID) associated with the one or more network slices based on the slice-specific information included in the received URSP rules. The method includes acquiring, from one or more applications running on the UE, packet information related to each of one or more ongoing protocol data unit (PDU) sessions associated with a corresponding network slice of the one or more network slices based on the received set of URSP rules and the determined application UID. Furthermore, the method includes obtaining a flow rate for each of the one or more ongoing PDU sessions based on the received set of URSP rules, the determined application UID, and the acquired packet information related to each of one or more ongoing PDU sessions associated with the corresponding network slice. The method includes tuning the set of system parameters for the one or more ongoing PDU sessions based on the obtained flow rate and a threshold flow rate. Further, the method includes applying, based on the tuned set of system parameters, one or more policies for the one or more ongoing PDU sessions.
According to an example embodiment of the present disclosure, a user equipment (UE) for tuning system parameters for one or more network slices is disclosed. The UE includes: a memory and one or more processors communicatively coupled to the memory. Further, the one or more processors are configured to receive, from a network, a set of user equipment route selection (URSP) rules including slice-specific information for each of the one or more network slices. Further, the one or more processors are configured to determine an application user ID (UID) associated with the one or more network slices based on slice-specific information included in the received URSP rules. The one or more processors are configured to acquire, from one or more applications running on the UE, packet information related to each of one or more ongoing protocol data unit (PDU) sessions associated with a corresponding network slice of the one or more network slices based on the received set of URSP rules and the determined application UID. Furthermore, the one or more processors are configured to obtain a flow rate for each of the one or more ongoing PDU sessions based on the received set of URSP rules, the determined application UID, and the acquired packet information related to each of one or more ongoing PDU sessions associated with the corresponding network slice. The one or more processors are configured to tune the set of system parameters for the one or more ongoing PDU sessions based on the obtained flow rate and a threshold flow rate. Further, the one or more processors are configured to apply, based on the tuned set of system parameters, one or more policies for the one or more ongoing PDU sessions.
A more detailed description of the various example embodiments will be provided below with reference to the appended drawings. It is appreciated that these drawings depict example embodiments of the disclosure and are therefore not to be considered limiting of its scope. The disclosure will be described and explained with additional specificity and detail with the accompanying drawings.
The above and other features, aspects, and advantages of certain embodiments of the present disclosure will be more apparent from the following detailed description, taken in conjunction with the accompanying drawings in which like characters represent like parts throughout the drawings, and in which:
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flowcharts illustrate the method in terms of operations involved to help to improve understanding of aspects of the present disclosure. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the drawings with details that may be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
Reference will now be made to the various example embodiments and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure or claims is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as illustrated therein being contemplated as would normally occur to one skilled in the art to which the disclosure relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are explanatory of the disclosure and are not intended to be restrictive thereof.
Reference throughout this disclosure to “an aspect”, “another aspect” or similar language may refer, for example, to a particular feature, structure, or characteristic described in connection with the embodiment being included in at least one embodiment of the present disclosure. Thus, appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this disclosure may, but do not necessarily, all refer to the same embodiment.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by “comprises . . . a” does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
In an embodiment of the present disclosure, the set of system parameters corresponds to a set of kernel parameters. For example, the set of kernel parameters corresponds to one or more Transmission Control Protocol/Internet Protocol (TCP/IP) parameters, one or more driver layer parameters, or a combination thereof. The TCP/IP parameters and the one or more driver layer parameters are shown in Table 1 and Table 2. The configuration of
In an embodiment of the present disclosure, each of the one or more network slices represents an independent virtualized instance defined by the allocation of a subset of available network resources. For example, the one or more network slices may be an Enhanced Mobile Broadband (eMBB) slice, an Ultra-Reliable Low Latency Communications (URLLC) slice, an Internet of Things (IoT) slice, and the like.
Referring to
As an example, the one or more processors 502 may be a single processing unit or a number of units, all of which could include multiple computing units. The one or more processors 502 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more processors 502 are configured to fetch and execute computer-readable instructions and data stored in the memory. The one or more processors 502 may include one or a plurality of processors. The one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit, such as a graphics processing unit (GPU) or a visual processing unit (VPU), and/or an Artificial Intelligence (AI)-dedicated processor, such as a neural processing unit (NPU). The one or more processors 502 may control the processing of the input data in accordance with a predefined operating rule or Artificial Intelligence (AI) model stored in the non-volatile memory and the volatile memory, e.g., the memory unit 506. The predefined operating rule or the AI model is provided through training or learning.
The memory unit 506 may include any non-transitory computer-readable medium known in the art including, for example, volatile memory, such as static Random-Access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
Various example embodiments disclosed herein may be implemented using processing circuitry. For example, some example embodiments disclosed herein may be implemented using at least one software program running on at least one hardware device and performing network management functions to control the elements.
In an embodiment of the present disclosure, the one or more processors 502 include, for example, and without limitation, a Communication Processor (CP) and an Application Processor (AP). For example, the CP is like a modem. The CP is configured to handle Layer 2 and other protocols. In an embodiment of the present disclosure, the AP is associated with upper layers, such as the network layer, transport layer, and application layer.
Further, the one or more processors 502 may be disposed in communication with one or more I/O devices via the I/O interface 504. The I/O interface 504 may employ communication protocols, such as code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMAX, or the like.
Using the I/O interface 504, the UE 500 may include various circuitry and communicate with one or more I/O devices, specifically, the user devices associated with human-to-human conversation. For example, the input device may be an antenna, microphone, touch screen, touchpad, storage device, transceiver, video device/source, etc. The output devices may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
The one or more processors 502 may be disposed in communication with a communication network via a network interface. In an embodiment, the network interface may be the I/O interface 504. The network interface may connect to the communication network to enable the connection of the UE 500 with the outside environment. The network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, and the like.
In an embodiment of the present disclosure, the UE 500 is communicatively coupled to a network 508 for receiving a set of User Equipment Route Selection (URSP) rules from the network, as shown in
In various embodiments, the FAST module may be included within the memory. The FAST module may include a set of instructions that may be executed to cause the one or more processors 502 of the UE 500 to perform any one or more of the methods/processes disclosed herein. The FAST module may be configured to perform the steps of the present disclosure using the data stored in the database for tuning system parameters for one or more network slices, as discussed herein. In an embodiment, the FAST module may be a hardware unit that may be outside the memory. Further, the memory may include an operating system for performing one or more tasks of the UE 500, as performed by a generic operating system in the communications domain.
Further, the one or more processors 502 may be configured to determine an application User ID (UID) associated with the one or more network slices based on the slice-specific information included in the received URSP rules.
Furthermore, the one or more processors 502 may be configured to acquire, from one or more applications running on the UE 500, packet information related to each of one or more ongoing Protocol Data Unit (PDU) sessions associated with a corresponding network slice of the one or more network slices based on the received set of URSP rules and the determined application UID.
The one or more processors 502 may be configured to calculate a flow rate for each of the one or more ongoing PDU sessions based on the received set of URSP rules, the determined application UID, and the acquired packet information related to each of one or more ongoing PDU sessions associated with the corresponding network slice. In an example embodiment of the present disclosure, the packet information includes information associated with a source Internet Protocol (IP), a source port, a destination IP, a destination port, a protocol, a packet length, and the like. The calculating the flow rate for each of the one or more ongoing PDU sessions will be described in greater detail below with reference to
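One possible formulation of the flow rate computation is sketched below, under the assumption that the rate is simply the bytes observed per flow divided by the observation window; the disclosure does not fix a formula at this point, so this is illustrative only.

```python
# Sketch of per-flow rate estimation from the acquired packet
# information. A flow is keyed by the 5-tuple (source IP/port,
# destination IP/port, protocol); the rate is total bytes observed
# divided by the observation window. Assumed formulation.

from collections import defaultdict

def flow_rates(packets, window_s):
    """packets: iterable of dicts carrying the packet fields above.

    Returns a mapping from 5-tuple to bytes-per-second over the window.
    """
    byte_count = defaultdict(int)
    for pkt in packets:
        key = (pkt["src_ip"], pkt["src_port"],
               pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
        byte_count[key] += pkt["length"]
    return {k: v / window_s for k, v in byte_count.items()}

pkts = [
    {"src_ip": "10.0.0.2", "src_port": 5000, "dst_ip": "1.2.3.4",
     "dst_port": 443, "proto": "TCP", "length": 1500},
    {"src_ip": "10.0.0.2", "src_port": 5000, "dst_ip": "1.2.3.4",
     "dst_port": 443, "proto": "TCP", "length": 500},
]
rates = flow_rates(pkts, window_s=2.0)  # 2000 bytes / 2 s = 1000.0 B/s
```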
Furthermore, the one or more processors 502 may be configured to dynamically tune the set of system parameters for each of the one or more ongoing PDU sessions associated with the corresponding network slice based on the calculated flow rate and a predefined threshold flow rate. For dynamically tuning the set of system parameters for each of the one or more ongoing PDU sessions, the one or more processors 502 are configured to obtain one or more RAT characteristics from a Modulator-Demodulator (MODEM) upon calculating the flow rate. In an example embodiment of the present disclosure, the one or more RAT characteristics may include Received Signal Strength Indicator (RSSI), Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), New Radio (NR), and Long-Term Evolution (LTE) bands, bandwidth availability, and the like. Further, the one or more processors 502 may be configured to dynamically update/tune the one or more policies for each of the one or more ongoing PDU sessions based on the calculated flow rate, the predefined threshold flow rate, and the obtained one or more RAT characteristics. In an embodiment of the present disclosure, the one or more policies include a throughput enhancement policy, latency reduction policy, default policy, and the like. The dynamically updating/tuning the one or more policies will be described in greater detail below with reference to
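A minimal sketch of how a policy might be chosen from the calculated flow rate, the predefined threshold, and one RAT characteristic follows. The policy names mirror those in the description; the RSRP cutoff and the decision structure are assumptions for illustration.

```python
# Hypothetical policy selection: low-rate flows favor latency
# reduction; bulk flows favor throughput enhancement unless the
# radio link is poor, in which case moderate (default) values are
# kept. The -110 dBm RSRP cutoff is an illustrative assumption.

def select_policy(flow_rate_bps, threshold_bps, rsrp_dbm):
    if flow_rate_bps < threshold_bps:
        # short, sparse transfers (e.g., URLLC): favor low stack latency
        return "latency_reduction"
    if rsrp_dbm < -110:
        # bulk traffic under poor radio conditions: keep moderate values
        return "default"
    # bulk traffic with a healthy link: maximize throughput
    return "throughput_enhancement"
```

Additional RAT characteristics (RSSI, RSRQ, band, available bandwidth) would enter as further branches or a weighted score; a single RSRP test keeps the sketch readable.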
The one or more processors 502 may be configured to apply, based on the dynamically tuned set of system parameters, one or more policies for each of the one or more ongoing PDU sessions associated with the corresponding network slice.
For applying the one or more policies, the one or more processors 502 may be configured to determine whether one or more socket options are available for each of the one or more policies. Further, the one or more processors 502 may be configured to configure the one or more socket options via one or more Extended Berkeley Packet Filters (eBPFs) upon the determination that the one or more socket options are available for each of the one or more policies. The one or more processors 502 may be configured to apply the one or more policies for each of the one or more ongoing PDU sessions associated with the corresponding network slice based on the configured one or more socket options.
Further, for applying the one or more policies, the one or more processors 502 may be configured to configure the set of system parameters via a network interface upon the determination that the one or more socket options are unavailable for each of the one or more policies. In an embodiment of the present disclosure, the set of system parameters is unique to the network slice (eMBB, URLLC, and the like) to give the best experience for each PDU session associated with each network slice. The one or more processors 502 may be configured to apply the one or more policies for each of the one or more ongoing PDU sessions associated with the corresponding network slice based on the configured set of system parameters. The configuring the set of system parameters will be described in greater detail below with reference to
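The two configuration paths above — per-socket options where available, otherwise a system-wide kernel parameter — can be sketched as follows. The appliers are stand-in callables for the eBPF socket-option path and the netd sysctl-style path; the parameter names are placeholders.

```python
# Sketch of the policy application decision: each parameter of a
# policy either has a per-socket option (applied with session scope)
# or must be written as a global kernel parameter. The two setter
# callables stand in for the eBPF and sysctl interfaces.

def apply_policy(policy_params, set_sockopt, set_sysctl):
    """policy_params: name -> (value, has_socket_option)."""
    applied = {}
    for name, (value, has_sockopt) in policy_params.items():
        if has_sockopt:
            set_sockopt(name, value)   # per-PDU-session scope
            applied[name] = "sockopt"
        else:
            set_sysctl(name, value)    # global kernel parameter
            applied[name] = "sysctl"
    return applied
```

The per-socket path is preferred where it exists because it confines the tuning to one PDU session, whereas a sysctl write affects every flow on the device.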
Furthermore, the one or more processors 502 may be configured to obtain one or more statistics for the one or more ongoing PDU sessions from a set of layers of a Kernel via one or more eBPFs. In an embodiment of the present disclosure, the one or more statistics are related to packet drops and error rates. The one or more processors 502 may be configured to generate one or more static values for the set of system parameters based on the obtained one or more statistics. Further, the one or more processors 502 may be configured to dynamically update the set of system parameters for each of the one or more ongoing PDU sessions via one of a netd sysctl interface and the one or more eBPFs based on the generated one or more static values. In an embodiment of the present disclosure, the one or more static values for the set of system parameters are shown in Table 1 and Table 2. In an embodiment of the present disclosure, the one or more static values are determined based on the one or more statistics available at different layers of the kernel, as shown in table 3.
Further, the one or more processors 502 may be configured to determine whether there is a performance degradation in the one or more ongoing PDU sessions based on the obtained one or more statistics. The one or more processors 502 may also be configured to determine whether there is a change in one or more Radio Access Technology (RAT) characteristics, the flow rate, or a combination thereof upon determining that there is a performance degradation in the one or more ongoing PDU sessions. Furthermore, the one or more processors 502 may be configured to dynamically tune the set of system parameters for each of the one or more ongoing PDU sessions associated with the corresponding network slice to one or more new values based on the flow rate and the predefined threshold flow rate upon determining a change in at least one of the one or more RAT characteristics or the flow rate.
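The degradation-triggered retuning above can be sketched as a simple predicate: retune only when the statistics show degradation and the radio or traffic conditions have changed. The drop-ratio threshold and the equality-based change test are assumptions for illustration.

```python
# Sketch of the retune decision: statistics (here a packet-drop
# ratio) indicate degradation, and a change in RAT characteristics
# or flow rate indicates that new parameter values are warranted.
# The 1% drop threshold is an illustrative assumption.

def should_retune(stats, prev_rat, cur_rat, prev_rate, cur_rate,
                  drop_threshold=0.01):
    degraded = stats["drop_ratio"] > drop_threshold
    conditions_changed = (cur_rat != prev_rat) or (cur_rate != prev_rate)
    return degraded and conditions_changed
```

Gating on both tests avoids thrashing: degradation alone (with unchanged conditions) suggests the current values are already the best available, while a condition change alone does not justify disturbing a session that is performing well.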
The one or more processors 502 may be configured to identify a foreground application running on the UE 500. Further, the one or more processors 502 may be configured to determine a type of the identified foreground application. The one or more processors 502 may be configured to load the one or more policies for each of the one or more ongoing PDU sessions based on the determined type of the identified foreground application.
Further, the one or more processors 502 may be configured to dynamically create and tune the one or more policies for each of the one or more ongoing PDU sessions associated with the corresponding network slice based on the calculated flow rate and the predefined threshold flow rate. In an embodiment of the present disclosure, the dynamic creation and tuning of the one or more policies based on the slice-specific information and the flow rate per each PDU session per slice provides a better user experience for latency- and throughput-oriented applications. In an embodiment of the present disclosure, the one or more policies are dynamically tuned to adapt to volatile 5G network conditions by constantly monitoring Radio Access Technology (RAT) characteristics. For example, the RAT may correspond to 5G/LTE or Wi-Fi/Wi-Fi 6E, with characteristics such as RSSI, RSRP, RSRQ, NR and LTE bands, bandwidth availability, and the like. The dynamically creating the one or more policies for each of the one or more ongoing PDU sessions will be described in greater detail below with reference to
The comparison of the FAST module with conventional modules will be described in greater detail below with reference to
In a use case scenario, the UE 500 receives a URSP rule (URLLC, 3rd Generation Partnership Project (3GPP) Access, APP1) from the network 508, e.g., the 5GC. The network 508 requested the URLLC slice for APP1=Bixby. Without the present disclosure, the performance of the Bixby app may be degraded as the UE 500 tunes the android stack to high values that prefer only bulk traffic. However, the present disclosure tunes the android stack to prefer latency traffic and ensures faster processing of data. Thus, lower latency is achieved. For example, the applications for this mode may include Bixby voice applications, chat applications, video/voice calling applications, cloud gaming applications, and the like.
In another use case scenario, the UE 500 receives a URSP rule (eMBB, 3GPP/Non-3GPP access, APP2) from the network 508. Further, the network 508 requested the eMBB slice for streaming the applications over either 3GPP or non-3GPP access. However, the RAT characteristics are not good. This may result in buffering of the video even though enough bandwidth is available to process this data. This happens due to the setting of high values to the kernel parameters in bad network conditions. The present disclosure detects bad network conditions and dynamically adjusts the kernel parameters to moderate values for high throughput. For example, the applications for this mode may include video streaming applications, Augmented Reality (AR), Virtual Reality (VR) applications, and the like.
In another use case scenario, the UE 500 receives the URSP rule (eMBB, 3GPP access, APP2) from the network 508. Further, the network 508 requested the eMBB slice for APP2. Current settings may not handle the high rate of incoming traffic and may result in packet drops. The present disclosure tunes the android stack to ensure the highest throughput and handle bulk traffic. For example, the applications for this mode may include high-resolution video streaming applications, file download applications, and the like.
In another use case scenario, the UE 500 receives two URSP rules (URLLC, 3GPP Access, APP1 and eMBB, 3GPP access, APP2) from the network 508. Further, the network 508 requested the URLLC slice for APP1 and the eMBB slice for APP2. The user is using both APPs. Without the present disclosure, one of the sessions needs to be compromised, resulting in bad performance. The present disclosure tunes the android stack such that both PDU sessions may receive the best Quality of Service (QoS). For example, the applications for this mode may include all low latency and throughput-oriented applications, including voice, online gaming applications, streaming applications, and the like.
In an embodiment of the present disclosure, the set of system parameters is configured for different network slices using a Flow Aware Stack Tuner (FAST) module 602. The FAST module 602 provides techniques for dynamic tuning of kernel parameters to improve latency and enhance the throughput of PDU sessions associated with different network slices. The FAST module 602 minimizes and/or reduces the application delay by boosting the connection speed and by improving the processing time in protocol layers of android stack. As shown in
As depicted, the PCF 512 of the 5GC network 606 sends the set of URSP rules 608 to a UE modem 610 via the AMF 510. The UE 500 runs a set of applications 612, such as App 1, App2, . . . App N. In an embodiment of the present disclosure, the set of URSP rules 608 includes a traffic descriptor and a route selection descriptor. For example, the traffic descriptor may be rule precedence=1 and application identifier=App 1, and the route selection descriptor may be network slice selection: URLLC, SSC mode selection: SSC Mode 3, DNN selection: internet and access type preference: 3GPP access. In another example, the traffic descriptor may be rule precedence=2 and application identifier=App 2, and the route selection descriptor may be network slice selection: eMBB, SSC mode selection: SSC Mode 3, DNN selection: internet and access type preference: non-3GPP access. Further, the modem 610 forwards the set of URSP rules 608 to the URSP manager 614 located at the android framework 604 via a kernel 616. The kernel 616 includes TCP/IP, User Datagram Protocol (UDP) 618, and driver 620. In an embodiment of the present disclosure, the FAST module 602 is communicatively coupled with eBPF programs and Netd of the native layer 621. Further, the android framework includes a telephony manager and a connectivity manager. Furthermore, the FAST module 602 receives the slice-specific information and the application UID from the URSP manager 614.
Further, the FAST module 602 fetches RAT characteristics information, such as RSSI, RSRQ, and the like from the connectivity manager 622 and the telephony manager 624. The FAST module 602 uses eBPF programs 626 to gather statistics of sockets associated with the PDU session of the network slice. In an embodiment of the present disclosure, the eBPF programs are hooked into the kernel from Netd 628. Further, the socket options for a particular PDU session are configured via the eBPF programs 626. The FAST module 602 configures the remaining kernel parameters, which do not have socket options, via the android Netd module.
As depicted in
Further, the traffic differentiator receives the slice-specific information from the URSP manager and classifies the traffic based on the S-NSSAI value, which is part of the route selection descriptor in the set of URSP rules. In an embodiment of the present disclosure, the S-NSSAI value indicates the behavior of the traffic variations of a PDU session. This gives an initial direction for configuring most of the system parameters. In an embodiment of the present disclosure, a single S-NSSAI may correspond to different applications, and the traffic generated for each application varies in burstiness, e.g., the same application can generate both bursty traffic and small amounts of data. To handle this, the traffic differentiator is configured to inspect the flow rate for each connection, which is estimated as below. Further, an operation flow of the operations performed by the traffic differentiator is shown and described in greater detail below with reference to
At step (a), the traffic differentiator 714 is configured to read the set of URSP rules and obtain the application UID for which the network slice is requested. Further, at step (b), the traffic differentiator 714 is further configured to add the application UID to a UidOwnerMap via the Netd module. Thereafter, at step (c), the traffic differentiator 714 is further configured to run one eBPF program from the Netd layer attached to an skfilter for collecting statistics of a PDU session associated with the application UID. In an embodiment of the present disclosure, skfilter is a program type available in android eBPF. The eBPF program is shown as eBPF prog1 802 of Netd in
At step (g), the traffic differentiator 714 is configured to estimate or generate the best policy for a given PDU session based on the flow rate and inform the stack tuner. The traffic differentiator 714 is further configured to provide a traffic load parameter η which indicates an average amount of traffic being downloaded for a given matching flow based on the database pool history. η helps in estimating the number of CPU cores required and the Tx/Rx queues to be allocated for this PDU session.
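The policy estimation of step (g) can be sketched as follows. This is a simplified illustration, not the disclosure's algorithm: the threshold value and the η-to-queue scaling are assumptions chosen for the sketch.

```python
THRESHOLD_BPS = 1_000_000  # assumed flow-rate threshold (bytes/sec), illustrative only

def estimate_flow_rate(bytes_seen, window_secs):
    """Average flow rate in bytes/sec over the observation window."""
    return bytes_seen / window_secs if window_secs > 0 else 0.0

def choose_policy(slice_type, flow_rate, threshold=THRESHOLD_BPS):
    """The S-NSSAI gives the initial direction; the measured per-connection
    flow rate refines it, since one slice can carry both bursty and thin flows."""
    if flow_rate >= threshold:
        return "throughput_enhancement"
    if slice_type == "URLLC":
        return "latency_reduction"
    return "default"

def queues_for_load(eta, max_queues=4):
    """η (average downloaded volume for matching flows, in bytes) sizes the
    Tx/Rx queues and CPU cores allocated to the PDU session."""
    mb = eta / (1 << 20)
    return max(1, min(max_queues, int(mb // 512) + 1))
```

Under these assumptions, a thick eMBB stream exceeding the threshold gets the throughput policy, a thin URLLC flow gets the latency policy, and everything else falls to the default policy.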
In an embodiment of the present disclosure, the stack tuner 716 dynamically creates the one or more policies based on information received from the traffic differentiator, such as the flow rate and the predefined threshold flow rate. Further, the stack tuner 716 configures the kernel parameters 902 via ebpf socket options or via sysctl interface to apply the one or more policies per PDU session. In an embodiment of the present disclosure, the stack tuner 716 also monitors socket-level statistics for each PDU session and dynamically tunes the one or more policies.
The stack tuner 716 creates the one or more policies, such as the throughput enhancement policy 718, the latency reduction policy 720, and the default policy 722. In an embodiment of the present disclosure, the throughput enhancement policy 718 tunes the stack to ensure the highest throughput possible. Under the throughput enhancement policy, queues and buffer sizes are set to high, Generic Receive Offload (GRO) is enabled, parameters specific to low latency are disabled, the lowest interrupt rate is assigned for CPUs, and the like.
Further, the latency reduction policy 720 tunes the stack to achieve the lowest latency. Under the latency reduction policy, the queues and buffer sizes are set to low, and the highest interrupt rate is assigned to forward packets to the application immediately. Further, latency-specific parameters, such as tcp_low_latency, are enabled, which gives preference to latency over throughput. Furthermore, GRO is disabled, and auto-corking is disabled to avoid delays due to coalescing. The latency reduction policy also boosts the connection speed from the beginning of the connection by setting the initial congestion window to a high value and disabling the slow start phase. Furthermore, the default policy tunes the stack with moderate values for all other kinds of traffic. In an embodiment of the present disclosure, the other kinds of traffic correspond to traffic not corresponding to any slice and traffic which does not fall under throughput-oriented or latency-sensitive, such as location detection, application updates in the background, and the like.
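The three policies described above can be represented as parameter tables. The concrete values below are illustrative assumptions for the sketch, not the disclosure's tuned values; only the direction of each setting (high/low, enabled/disabled) follows the description.

```python
# Illustrative per-policy settings; values are assumptions, directions follow the text.
POLICIES = {
    "throughput_enhancement": {
        "net.core.rmem_max": 16 << 20,            # large buffers and queues
        "net.ipv4.tcp_low_latency": 0,            # latency-specific knobs off
        "gro": True,                              # coalesce received segments
        "interrupt_rate": "low",                  # fewer interrupts per CPU
    },
    "latency_reduction": {
        "net.core.rmem_max": 256 << 10,           # small buffers and queues
        "net.ipv4.tcp_low_latency": 1,            # prefer latency over throughput
        "net.ipv4.tcp_autocorking": 0,            # no coalescing delays
        "net.ipv4.tcp_slow_start_after_idle": 0,  # keep the congestion window large
        "gro": False,
        "interrupt_rate": "high",                 # forward packets immediately
        "init_cwnd": 20,                          # boost the start of the connection
    },
    "default": {
        "net.core.rmem_max": 4 << 20,             # moderate values
        "gro": True,
        "interrupt_rate": "moderate",
    },
}
```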
Furthermore, upon receiving the policy to be loaded from the traffic differentiator, the stack tuner 716 configures the set of system parameters, e.g., the kernel parameters 902, with the values already tuned by the one or more policies. To apply a policy on a per-connection basis only, one more eBPF program, e.g., eBPF prog2 904, is inserted into the kernel from the Netd layer 802 and attached to a cgroup as shown in
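The disclosure applies per-connection settings through a cgroup-attached eBPF program; as a rough userspace analogue, the same effect on an individual socket can be approximated with standard socket options. The sketch below is only that approximation, not the eBPF path itself.

```python
import socket

def apply_latency_options(sock):
    """Per-socket approximation of the latency policy: disable Nagle batching
    and keep buffers small, analogous to what the cgroup eBPF hook would set
    on sockets of the matching PDU session."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)       # no send coalescing delays
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)  # small receive buffer
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)  # small send buffer

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
apply_latency_options(s)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```

The difference from the disclosed design is scope: setsockopt acts on one socket, whereas the cgroup hook covers every socket the application creates without cooperation from the application.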
Further, the stack tuner 716 monitors the socket-level statistics, such as packet drops, error rates, and the like. The stack tuner 716 runs an eBPF prog3 906 attached to the tracepoint, as shown in
The stack tuner 716 also monitors the overall statistics available at multiple layers. These statistics are summarized in Table 3 as mentioned above. If there are packet drops or performance degradation, then the stack tuner dynamically tunes the one or more policies at runtime using sysctl to improve the performance. The stack tuner updates the one or more policies based on the RSSI and RSRQ values of the connected RAT. Under bad network conditions and for a lesser η, the one or more policies are tuned to less aggressive values compared to the previously set values for achieving peak throughputs or lowest latencies. For example, buffers are set to moderate values to avoid bufferbloat problems, interrupt rates may be modified from low to moderate for the eMBB slice, and INIT CWND may be set to a lower value to avoid higher network jitter.
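The back-off behavior just described can be sketched as a pure function over a policy table. The RSSI floor and η threshold below are illustrative assumptions; only the direction of the adjustments (moderate buffers, moderate interrupt rate, smaller initial congestion window) follows the text.

```python
SMALL_ETA = 64 << 20   # assumed: flows averaging under 64 MB count as "lesser η"
RSSI_FLOOR = -100      # assumed bad-signal threshold in dBm

def moderate_policy(policy, rssi_dbm, eta):
    """Return a less aggressive copy of the policy under bad radio conditions
    or small traffic load, leaving the original untouched otherwise."""
    tuned = dict(policy)
    if rssi_dbm < RSSI_FLOOR or eta < SMALL_ETA:
        # moderate buffers to avoid bufferbloat
        tuned["net.core.rmem_max"] = min(tuned.get("net.core.rmem_max", 4 << 20), 4 << 20)
        # back off from the lowest interrupt rate
        tuned["interrupt_rate"] = "moderate"
        # smaller initial congestion window to avoid jitter
        if "init_cwnd" in tuned:
            tuned["init_cwnd"] = min(tuned["init_cwnd"], 10)
    return tuned
```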
In an embodiment of the present disclosure, Table 1 depicts the key parameters that are tuned by the stack tuner of the FAST module 602. Further, Table 2 depicts the TCP/IP parameters that are tuned by the stack tuner 716. The set of parameters mentioned in Tables 1 and 2 are available under /proc/sys/net/ipv4, /proc/sys/net/ipv6, /proc/sys/net/core, /sys/class/net/rmnetX (where X=0, 1, . . . , 9), and /sys/class/net/wlan0. The tuned values mentioned in Tables 1 and 2 are not static but are tuned dynamically. Further, the algorithms used in the FAST module 602 for tuning the kernel parameters are explained below as algorithm 1 and algorithm 2.
In a real network slicing deployment scenario, eMBB PDU sessions involve higher incoming or outgoing traffic rates, larger file sizes, and thick streams, whereas URLLC PDU sessions involve short flows, relatively smaller file sizes, and lower traffic rates. To mimic a real network slicing deployment scenario, a test is performed with different file sizes and by varying the test duration. The performance of the FAST module 602 is evaluated in the below scenarios with two S22 devices, one without the FAST module 602, where default values are used, and another device with the FAST module 602.
Further, the congestion control-related parameters are tuned for latency-sensitive traffic, where the initial congestion window is set to a bigger value to push more packets from the beginning of the session, tcp_limit_output_bytes is reduced to reduce buffering in the network stack, and tcp_slow_start_after_idle is disabled (changed from the default 1 to 0) to avoid falling back to a slow start, which keeps the congestion window large. Results of a comparison between different congestion control (CC) techniques, including low-latency CC, such as Data Center Transmission Control Protocol (DCTCP) and High Speed Transmission Control Protocol (HSTCP), and the delay-based CC techniques BBR and Westwood, versus the default BIC CC, are depicted in the graph 1002 of
Further, the throughput is tested with different file sizes by modifying one or more TCP parameters: the buffers tcp_rmem and tcp_wmem and the backlog queues are kept moderate for latency traffic and very high for throughput-oriented traffic. Further, depending on the incoming/outgoing packet rate, dev_weight is increased to let the CPU handle more packets on a New Application Programming Interface (NAPI) interrupt. Furthermore, to avoid delays, tcp_low_latency is enabled, and tcp_auto_corking and GRO are disabled for latency traffic flows. To improve performance under heavy packet loss, tcp_thin_linear_timeouts is enabled, which postpones the exponential back-off mode for up to 6 retransmission timeouts, and tcp_reordering is tuned up to 10, which increases the tolerated packet reordering rate. For extremely latency-sensitive traffic, busy_poll is tuned to a nonzero value to let the CPU continuously poll the receive queues without sleeping. For bulk traffic, the netdev_budget value is tuned to a higher value to let the kernel handle a maximum number of overall packets on a NAPI interrupt. The result of the comparison upon modifying the above-mentioned TCP parameters is illustrated in graph 1004 of
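The loss-resilience knobs named above map directly onto sysctl keys. The sketch below collects them in a table and renders them as `sysctl -w` invocations; on the device the FAST module applies such settings through the netd sysctl interface rather than a shell, so this is a hedged illustration of the values, not the deployment mechanism.

```python
# Sysctl keys named in the text; values follow the description (enable thin-stream
# timeouts, raise reordering tolerance, prefer latency, disable auto-corking).
LOSS_RESILIENCE = {
    "net.ipv4.tcp_thin_linear_timeouts": 1,  # postpone exponential back-off for thin streams
    "net.ipv4.tcp_reordering": 10,           # tolerate more packet reordering
    "net.ipv4.tcp_low_latency": 1,           # prefer latency over throughput
    "net.ipv4.tcp_autocorking": 0,           # avoid coalescing delays
}

def sysctl_commands(settings):
    """Render kernel settings as `sysctl -w` command lines for inspection."""
    return [f"sysctl -w {key}={value}" for key, value in sorted(settings.items())]
```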
To measure the android stack latency, parallel sessions involving both bulk traffic representing eMBB and short flows representing URLLC are tested. In one session, a 5 GB file is downloaded in the background while an online game is played in another session in the foreground. For devices with FAST, /sys/class/net/rmnet_dataX/queues/rx-X and /sys/class/net/rmnet_dataX/queues/tx-X are modified to map two Receiving (Rx) & Transmission (Tx) queues to CPU cores 2,3 for the file downloading session and another two Rx & Tx queues to CPU cores 4,5 for the gaming session, respectively. Further, incoming traffic of the gaming session is redirected to queues 4,5 and the file downloading session is redirected to queues 2,3. To measure the processing time in the kernel, the packets of the gaming socket are timestamped using the SO_TIMESTAMP socket option. The results of the measurement are plotted in the graph 1100 for the gaming session both with the FAST module 602 and without the FAST module 602, where the processing time in the kernel is plotted on the Y-axis in units of usecs, and the number of packets timestamped is plotted on the X-axis. The upper portion of the graph 1100 represents the UE 500 not using the FAST module 602 and the lower portion of the graph 1100 represents the UE 500 with the FAST module 602. There is a consistent improvement of around 1000 usecs in response time with the FAST module 602.
At step 1202, the UE 500 receives the set of URSP rules from the 5GC network.
Further, at step 1204, the UE 500 tunes the set of system parameters per network slice. The UE 500 further estimates the flow rate of each PDU session associated with each network slice, updates the set of system parameters and applies the set of URSP rules after the set of system parameters are updated.
At step 1206, the UE 500 collects statistics for each network slice at multiple layers of the networking stack.
At step 1208, the UE 500 determines whether there is performance degradation due to packet drops. If yes, the UE 500 tunes the set of system parameters per slice to new values to improve the latency and throughput of the PDU sessions at step 1210. Further, in case the result of the determination at step 1208 is no, the UE 500 determines whether there is a change in RAT characteristics or the flow rate at step 1212. In a case, if the result of the determination at step 1212 is yes, then the UE 500 performs the step 1210. However, if the result of the determination at step 1212 is no, then the UE 500 makes no change in the kernel configuration at step 1214. Further, at step 1216, the UE 500 determines if the PDU session has ended. If not, the UE 500 performs the step 1206.
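The decision logic of steps 1208 through 1214 can be condensed into one function per monitoring pass: retune on drop-induced degradation, retune on a RAT or flow-rate change, otherwise leave the kernel configuration alone. The function and field names are illustrative.

```python
def monitor_step(stats, rat_changed, flow_rate_changed):
    """One pass of the monitoring loop: decide whether the per-slice system
    parameters need retuning or can be left as-is."""
    if stats.get("drops", 0) > 0:        # step 1208: performance degradation
        return "retune"                  # step 1210: tune to new values
    if rat_changed or flow_rate_changed:  # step 1212: conditions changed
        return "retune"
    return "no_change"                   # step 1214: keep the kernel configuration
```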
At step 1302, the PCF of the 5GC sends the set of URSP rules to the AMF. At step 1304, the AMF sends the set of URSP rules to the UE modem. At step 1306, the UE modem sends the set of URSP rules to the framework.
At step 1308, the FAST module 602 requests the modem to fetch the RAT characteristics, such as RSSI, RSRP, RSRQ, NR and LTE bands, bandwidth availability, and the like. At step 1310, the FAST module 602 receives the slice-specific information from the URSP manager and derives the application UID from the slice-specific information.
At step 1312, the FAST module 602 configures the socket options via the eBPF programs to apply the one or more policies per PDU session. At step 1314, the FAST module 602 configures the set of system parameters for which the socket options are not available via the netd sysctl interface.
Further, at step 1316, the FAST module 602 collects the statistics, such as source IP, source port, destination IP, destination port, protocol, and packet length, for the ongoing PDU sessions from the eBPF program (which in turn gathers information from the kernel) and estimates the flow rate of all the ongoing PDU sessions. The FAST module 602 collects statistics, such as packet drops, error rates, and the like, for each PDU session associated with a network slice via the eBPF program (which in turn gathers information from multiple layers of the kernel) at step 1318. Furthermore, the FAST module 602 dynamically updates the kernel parameters either via the eBPF programs or via the netd sysctl interface at step 1320 and step 1322, respectively.
At step 1402, the method 1400 includes receiving, from a network (508), a set of User Equipment Route Selection (URSP) rules including slice-specific information for each of the one or more network slices. In an embodiment of the present disclosure, the set of URSP rules includes a traffic descriptor and a route selection descriptor. The traffic descriptor includes a rule precedence and an application identifier. In an example embodiment of the present disclosure, the route selection descriptor includes a network slice selection, a Session and Service Continuity (SSC) mode, a Data Network Name (DNN) selection, an access type preference, and the like.
At step 1404, the method 1400 includes determining an application User ID (UID) associated with the one or more network slices based on the slice-specific information included in the received URSP rules.
At step 1406, the method 1400 includes acquiring, from one or more applications running on the UE 500, packet information related to each of one or more ongoing Protocol Data Unit (PDU) sessions associated with a corresponding network slice of the one or more network slices based on the received set of URSP rules and the determined application UID. In an example embodiment of the present disclosure, the packet information includes information associated with a source Internet Protocol (IP), a source port, a destination IP, a destination port, a protocol, a packet length, and the like.
At step 1408, the method 1400 includes calculating a flowrate for each of the one or more ongoing PDU sessions based on the received set of URSP rules, the determined application UID, and the acquired packet information related to each of one or more ongoing PDU sessions associated with the corresponding network slice.
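Given the packet information of step 1406, the flow-rate calculation of step 1408 amounts to aggregating packet lengths per flow and dividing by the observation window. A minimal sketch, with field names taken from the packet information listed above:

```python
from collections import defaultdict

def flow_rates(packets, window_secs):
    """Aggregate reported packet lengths per 5-tuple and divide by the
    observation window to obtain a bytes/sec rate per flow."""
    totals = defaultdict(int)
    for pkt in packets:
        key = (pkt["src_ip"], pkt["src_port"],
               pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
        totals[key] += pkt["length"]
    return {key: total / window_secs for key, total in totals.items()}
```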
At step 1410, the method 1400 includes dynamically tuning the set of system parameters for each of the one or more ongoing PDU sessions associated with the corresponding network slice based on the calculated flow rate and a predefined threshold flow rate.
At step 1412, the method 1400 includes applying, based on the dynamically tuned set of system parameters, one or more policies for each of the one or more ongoing PDU sessions associated with the corresponding network slice. In an example embodiment of the present disclosure, the set of system parameters corresponds to a set of kernel parameters. The set of kernel parameters corresponds to at least one of one or more Transmission Control Protocol/Internet Protocol (TCP/IP) parameters or one or more driver layer parameters.
For applying the one or more policies, the method 1400 includes determining whether one or more socket options are available for each of the one or more policies. Further, the method 1400 includes configuring the one or more socket options via one or more Extended Berkeley Packet Filters (eBPFs) upon the determination that the one or more socket options are available for each of the one or more policies. Furthermore, the method 1400 includes applying the one or more policies for each of the one or more ongoing PDU sessions associated with the corresponding network slice based on the configured one or more socket options.
Further, for applying the one or more policies, the method 1400 includes configuring the set of system parameters via a network interface upon the determination that the one or more socket options are unavailable for each of the one or more policies. Further, the method 1400 includes applying the one or more policies for each of the one or more ongoing PDU sessions associated with the corresponding network slice based on the configured set of system parameters.
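The two branches above — socket options via eBPF when available, the network interface otherwise — reduce to a dispatch over the policy's parameters. The predicate and parameter names below are illustrative.

```python
def apply_policy(policy, has_socket_option):
    """Split a policy's parameters into those applied via the eBPF
    socket-option path and those applied via the sysctl interface."""
    via_ebpf, via_sysctl = {}, {}
    for key, value in policy.items():
        target = via_ebpf if has_socket_option(key) else via_sysctl
        target[key] = value
    return via_ebpf, via_sysctl
```

For instance, with a predicate that treats `SO_*` names as socket options, buffer options go to the eBPF path and `net.*` keys fall through to sysctl.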
Further, the method 1400 includes obtaining one or more statistics for the one or more ongoing PDU sessions from a set of layers of a Kernel via one or more eBPFs. The one or more statistics are related to packet drops and error rates. The method 1400 includes generating one or more static values for the set of system parameters based on the obtained one or more statistics. Furthermore, the method 1400 includes dynamically updating the set of system parameters for each of the one or more ongoing PDU sessions via one of a netd sysctl interface and the one or more eBPFs based on the generated one or more static values.
Furthermore, the method 1400 includes determining whether there is a performance degradation in the one or more ongoing PDU sessions based on the obtained one or more statistics. The method 1400 includes determining whether there is a change in at least one of one or more Radio Access Technology (RAT) characteristics or the flow rate upon determining that there is performance degradation in the one or more ongoing PDU sessions. Further, the method 1400 includes dynamically tuning the set of system parameters for each of the one or more ongoing PDU sessions associated with the corresponding network slice to one or more new values based on the flow rate and the predefined threshold flow rate upon determining a change in at least one of the one or more RAT characteristics or the flow rate.
Further, the method 1400 includes identifying a foreground application running on the UE 500. The method 1400 includes determining a type of the identified foreground application. Furthermore, the method 1400 includes loading the one or more policies for each of the one or more ongoing PDU sessions based on the determined type of the identified foreground application.
For dynamically tuning the set of system parameters for each of the one or more ongoing PDU sessions, the method 1400 includes obtaining one or more RAT characteristics from a Modulator-Demodulator (MODEM) upon calculating the flow rate. In an example embodiment of the present disclosure, the one or more RAT characteristics include Received Signal Strength Indicator (RSSI), Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), New Radio (NR), and Long-Term Evolution (LTE) bands, bandwidth availability, and the like. The method 1400 includes dynamically updating the one or more policies for each of the one or more ongoing PDU sessions based on the calculated flow rate, the predefined threshold flow rate, and the obtained one or more RAT characteristics. In an example embodiment of the present disclosure, the one or more policies include a throughput enhancement policy, latency reduction policy, default policy, and the like.
Furthermore, the method 1400 includes dynamically creating the one or more policies for each of the one or more ongoing PDU sessions associated with the corresponding network slice based on the calculated flow rate and the predefined threshold flow rate.
While the above steps illustrated in
The disclosed method has several technical advantages over the conventional methods.
The present disclosure provides for various technical advancements based on the key features discussed above. Further, the present disclosure discloses a Flow/Slice Aware Stack Tuner (FAST) which tunes the kernel parameters based on the slice-specific information and the flow rate per connection per slice. Further, the present disclosure (FAST) creates policies, such as throughput enhancement, latency reduction, and default, to configure per slice. The present disclosure also tracks packet drops and error rates with the help of the Extended Berkeley Packet Filter (eBPF) hook added in the kernel and dynamically tunes the policies at runtime. Furthermore, the present disclosure configures the kernel parameters unique to the network slice (eMBB, URLLC, and the like). The present disclosure also dynamically creates or tunes the policies based on the slice-specific information and the flow rate for each session per slice to give a better user experience for latency- and throughput-oriented applications. Furthermore, the present disclosure determines optimum values for the kernel parameters based on statistics available at different layers of the kernel. The present disclosure dynamically tunes the policies to adapt to volatile 5G network conditions by constantly monitoring RAT characteristics, such as RSSI, RSRP, RSRQ, NR and LTE bands, bandwidth availability, and the like. Further, the present disclosure is deployed in the UE 500 to ensure the promised benefits of the network slice, such as lower latency and higher throughput. The present disclosure aims to make 5G more robust by addressing android smartphones' inability to tune kernel parameters for different network slices, such as URLLC (latency-sensitive) and eMBB (throughput-oriented) traffic, and the like.
While specific language has been used to describe the present subject matter, any limitations arising on account thereto, are not intended. Further, while the disclosure has been illustrated and described with reference to various example embodiments, it will be understood that the various example embodiments are intended to be illustrative, not limiting. It will be further understood by those skilled in the art that various changes in form and detail may be made without departing from the true spirit and full scope of the disclosure, including the appended claims and their equivalents. It will also be understood that any of the embodiment(s) described herein may be used in conjunction with any other embodiment(s) described herein.
Number | Date | Country | Kind |
---|---|---|---|
202241038700 | Jul 2022 | IN | national |
202241038700 | Jun 2023 | IN | national |
This application is a continuation of International Application No. PCT/KR2023/008962 designating the United States, filed on Jun. 27, 2023, in the Korean Intellectual Property Receiving Office and claiming priority to Indian Provisional Patent Application No. 2022410038700, filed on Jul. 5, 2022, in the Indian Patent Office, and to Indian Complete Patent Application No. 2022410038700, filed on Jun. 9, 2023, in the Indian Patent Office, the disclosures of each of which are incorporated by reference herein in their entireties.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/KR2023/008962 | Jun 2023 | US |
Child | 18347414 | US |