Dynamic MTU Management in an Enterprise Network

Information

  • Patent Application
  • 20230155950
  • Publication Number
    20230155950
  • Date Filed
    November 11, 2022
  • Date Published
    May 18, 2023
  • Inventors
    • Krishnaswamy; Vijayaraghavan (Campbell, CA, US)
Abstract
Various embodiments of a method and apparatus are disclosed. A higher MTU (Maximum Transmission Unit) is configured between the network core and the eNBs of the network, as well as in the wireless RAN segments. These MTUs are dynamically adjusted to allow the most efficient packet sizes according to the UE capabilities and radio conditions.
Description
BACKGROUND
1 Technical Field

The disclosed method and apparatus relate generally to communication systems. In particular, the disclosed method and apparatus relate to a method and apparatus for efficient communication of information by selection of the amount of information to be included in transmitted packets.


2 Background

Traffic between a core network and BS/AP (base station/access point) usually has a 1500-byte MTU (maximum transmission unit). A BS/AP is typically a WiFi access point or wireless telephone network base station, such as a 4G (4th generation cellular network) eNB (eNodeB) or 5G (5th generation cellular network) gNB (gNodeB). However, the various devices of the network add many encapsulations to the data packet as it passes through them. Each such encapsulation reduces the end-user "application" throughput. That is, with each encapsulation, the payload size decreases by the size of the encapsulation header.


Enterprise networks covered with cellular or Wi-Fi networks are typically well-administered networks, unlike the Internet, and the equipment used may be capable of supporting a higher MTU. The wireless RAN (Radio Access Network) packet transport capacity increases with the use of a higher modulation scheme, a higher MIMO (multi-in-multi-out) capability and carrier aggregation. Accordingly, use of such techniques allows the amount of data that can be communicated by the packets that conform to the MTU in the network to be increased, thus reducing the ratio of the overhead to the payload transmitted through the network. Increasing the MTU also reduces the number of packets that must be transmitted, further increasing the payload-to-overhead ratio. Accordingly, the system and all network elements can transmit user traffic more effectively. However, network packet losses and air-link issues can reduce the throughput. In addition to making it necessary to retransmit packets, relatively small errors in a large packet make it necessary to retransmit the entire large packet. Therefore, while it is more efficient to transmit large packets in order to increase the ratio of the payload to the overhead (i.e., headers), if such large packets have to be retransmitted frequently, it is more efficient to send smaller packets that are less likely to contain errors that require retransmission. In addition, since the packets are smaller, retransmission of those packets that are in error requires fewer network resources. That is, when large packets are being transmitted, valid data has to be retransmitted along with the errant data within the large packets. In contrast, when smaller packets are being transmitted, typically less valid data is retransmitted with the errant data.


In the case of CBRS private networks operating as an LTE (Long Term Evolution, 4G cellular) network, eNBs and EPCs are typically deployed together to provide LTE cellular service for enterprise customer devices. These devices can be regular mobiles/tablets or industry-specific devices such as POS units, cameras, sensors, controls, etc. Some of the devices are high-end devices and can support IPv4 (Internet Protocol, version 4) and IPv6 (Internet Protocol, version 6). These devices can support a wide variety of other features, like 4×4 MIMO and carrier aggregation. In the future, devices will be able to support multiple RATs, including 5G NR (New Radio) cellular networks.


In operator networks, when the eNB and EPC are separated by the Internet, IPSec (internet protocol secure) tunnels are typically deployed. In such cases, the packet sizes can grow large due to the different header encapsulations for GTP/IPSec (GPRS Tunneling Protocol/IPSec) etc., where GPRS is the acronym for general packet radio service. The packet sizes can exceed the typical Ethernet MTU of 1500. This results in a need to fragment and reassemble the packets that are too large to transmit in the network. In general, this fragmentation and reassembly can cause significant transmission delays and reductions in the throughput of the system.


In TCP (Transmission Control Protocol) traffic, for example, the MSS (maximum segment size) may be clamped in an intermediate node to reset the MSS to a value, such as 1280 bytes, that avoids the problems created by the need to fragment packets. In this way, traffic does not suffer fragmentation and reassembly even after the addition of the GTP/IPSec headers, and the packets can be forwarded in the fast path. However, every packet then carries no more than 1280 bytes, rather than carrying a payload of up to 1448 bytes. This is equivalent to a >11% reduction in the downlink application throughput.


On the RAN side, clamping the MSS results in an additional PDCP/RLC/MAC (Packet Data Convergence Protocol/Radio Link Control/Media Access Control) header for each packet. These headers add relatively few bytes, so the overhead is not significantly increased. On the wireless side, multiple IP packets are concatenated into transport blocks and carried to the UE, where they are demultiplexed and delivered to the UE's TCP/IP stack after removal of the PDCP/RLC/MAC headers.



FIG. 1 is an illustration of a network in which the network uses 1500 byte MTU with appropriate reduction for GTP/IP overheads. To transport a 9000 byte payload, the system uses 6 packets of 1448 bytes + 52-byte headers and an additional packet of 312 bytes + 52-byte header.
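The figure's packet arithmetic can be checked with a short sketch; this is a minimal illustration assuming the 52-byte per-packet header overhead shown in the figure:

```python
def fragment(payload_len, mtu=1500, header_len=52):
    """Split a payload into the per-packet payload sizes that fit the MTU.

    Each packet on the wire is header_len bytes of headers plus up to
    (mtu - header_len) bytes of payload."""
    max_seg = mtu - header_len                  # 1448 payload bytes per packet
    segments, remaining = [], payload_len
    while remaining > 0:
        segments.append(min(max_seg, remaining))
        remaining -= segments[-1]
    return segments

segs = fragment(9000)            # six 1448-byte segments plus one 312-byte tail
wire_bytes = sum(segs) + len(segs) * 52   # 9000 payload bytes + 7 headers
```

With a 1500-byte MTU, the 9000-byte payload costs seven headers (364 bytes) on the wire, matching the figure.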


Basic Operation With Normal MTUs


FIG. 3 is an illustration of a network 300 with a static setting of the normal 1500-byte MTU in both the core and the RAN. The following describes some aspects of the basic operation that occurs with the normal MTU size. In the network 300, the MTU of 1500 is used to establish packet-size transactions that avoid fragmentation/reassembly at the IPv4 level. Note that the numerical values are examples and can differ depending on the encapsulations, UE protocols and core-network configuration used.


SUMMARY

Various embodiments of a method and apparatus are disclosed. In some embodiments, a higher MTU (e.g., up to 9000 bytes) is configured between the core network and the eNB, as well as in the wireless RAN segment, and the packet sizes are dynamically adjusted according to the UE capabilities and radio conditions.


It is possible to configure a higher MTU mainly in private CBRS networks, since the elements are usually co-located and from the same vendor, and enterprise networks are easier to reconfigure for a higher MTU. Using higher packet sizes provides increased application throughput, lower packet rates, fewer uplink acknowledgements, etc. Disadvantages include the fact that many parts of the network need to support any necessary retransmissions, in which a full large packet has to be retransmitted, wasting resources, especially on wireless networks. In order to take into account network packet losses and air-link issues, the system is further enhanced to use a more dynamic method of packet-size/MTU setting on a per-UE, per-flow and per-direction basis.


Initial Static settings:

  • Have the CBRS network (including traffic servers) configured with Higher MTU;
  • Have the EPC configure the UE with higher MTU during PDN-session creation (on IPv4 and IPv6 bearers only); and
  • Tune the EPC and eNB/gNB configurations to process big size packets.


However, the UE may not be capable of this higher MTU. So, initially, if there are any DL packets for the UE, the EPC 'pre-fragments to low MTU' and sends the traffic towards the UE. UL packets from the UE are processed as-is. If the UE is capable of using the higher MTU and the UE negotiates a TCP connection with a high MSS value, then the EPC sends the "whole packets" as-is towards the UE. UL packets from the UE are again processed as-is.
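The downlink decision described above can be sketched as follows; this is a hypothetical illustration, and the class and method names are assumptions, not identifiers from the disclosure:

```python
# Hypothetical sketch: until a UE proves it can handle the higher MTU (by
# negotiating a high TCP MSS), the EPC pre-fragments its downlink traffic
# to the conservative 1500-byte MTU; uplink traffic is taken as-is.
class UeDownlinkPolicy:
    LOW_MTU = 1500

    def __init__(self):
        self.high_mtu_ok = False            # conservative default

    def on_tcp_syn_mss(self, mss):
        # A high negotiated MSS (e.g., 8960) signals that the UE
        # accepts large packets.
        self.high_mtu_ok = mss > self.LOW_MTU

    def needs_prefragmentation(self, packet_len):
        return (not self.high_mtu_ok) and packet_len > self.LOW_MTU

policy = UeDownlinkPolicy()                 # new UE: DL is pre-fragmented
policy.on_tcp_syn_mss(8960)                 # UE negotiated a high MSS
```

After the high-MSS observation, large downlink packets pass through whole; a UE that never advertises a high MSS keeps the pre-fragmentation path.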


UE traffic suffers TCP retransmissions (indicating packet loss in the UE/network):

  • The EPC detects this condition by deeply inspecting the SACK packets of the UE's TCP flows and determining that TCP retransmissions are above a threshold.
  • The EPC then, in steps: a) performs pre-fragmentation to a lower packet size; b) performs MSS clamping to reduce the packet sizes of subsequent TCP sessions; and c) sets lower MTU values for any new PDN connections from the UE.
  • The EPC continues monitoring TCP SACKs from the UE's TCP flows until it sees that TCP retransmissions have dropped below the threshold.
  • The EPC then performs the reverse steps: a) stops pre-fragmentation to the lower packet size; b) avoids MSS clamping for subsequent TCP sessions; and c) sets higher MTU values for any new PDN connections from the UE.
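The step-in/step-out behaviour above amounts to a hysteresis controller. A hedged sketch, with illustrative threshold values that are not taken from the disclosure:

```python
# Sketch of the hysteresis described above: the EPC switches a UE to a
# conservative (low-MTU) mode when the observed retransmission rate crosses
# an upper threshold, and back once it falls below a lower threshold.
# The watermark values are illustrative assumptions.
class MtuController:
    def __init__(self, high_watermark=0.05, low_watermark=0.01):
        self.high_watermark = high_watermark
        self.low_watermark = low_watermark
        self.low_mtu_mode = False

    def on_retx_rate(self, rate):
        """Feed the measured retransmission ratio (0.0-1.0); returns mode."""
        if not self.low_mtu_mode and rate > self.high_watermark:
            # step in: pre-fragment, clamp MSS, lower MTU for new PDNs
            self.low_mtu_mode = True
        elif self.low_mtu_mode and rate < self.low_watermark:
            # step out: stop pre-fragmenting, stop clamping, restore MTU
            self.low_mtu_mode = False
        return self.low_mtu_mode
```

The two watermarks prevent the EPC from flapping between modes when the retransmission rate hovers near a single threshold.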


It is possible that TCP-level retransmissions are not occurring but RLC-level retransmissions are, affecting the end-to-end throughput.


UE traffic suffers RLC retransmissions (indicating packet loss in the wireless network):

  • The eNB periodically sends UE-specific radio-condition health counters to the EPC, through a 'novel' proprietary interface specifically used for the private CBRS network.
  • The EPC monitors these counters and detects RLC retransmissions above a threshold for the specific UE's bearers.
  • The EPC then, in steps: a) performs pre-fragmentation to a lower packet size; b) performs MSS clamping to reduce the packet sizes of subsequent TCP sessions; and c) sets lower MTU values for any new PDN connections from the UE.
  • The EPC continues monitoring the counters until it detects that the RLC retransmissions have fallen below the lower threshold for the specific UE.
  • The EPC then performs the reverse steps: a) stops pre-fragmentation to the lower packet size; b) avoids MSS clamping for subsequent TCP sessions; and c) sets higher MTU values for any new PDN connections from the UE.
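One plausible shape for the periodic per-UE report carried over the proprietary eNB-EPC interface is sketched below; the field names are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical per-UE radio-health report the eNB could send periodically;
# the EPC compares the derived ratio against its thresholds.
from dataclasses import dataclass

@dataclass
class UeRadioHealthReport:
    ue_id: int
    bearer_id: int
    rlc_pdus_sent: int
    rlc_retransmissions: int

    def retx_ratio(self):
        """RLC retransmission ratio for this bearer (0.0 when idle)."""
        if self.rlc_pdus_sent == 0:
            return 0.0
        return self.rlc_retransmissions / self.rlc_pdus_sent

report = UeRadioHealthReport(ue_id=7, bearer_id=5,
                             rlc_pdus_sent=1000, rlc_retransmissions=80)
# retx_ratio() is 0.08 here; the EPC would compare it with its watermarks.
```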


The same concept is applicable in CBRS NR networks and Wi-Fi networks, either in standalone mode or when these technologies are utilized concurrently (e.g., LAA/ENDC/carrier-aggregation modes).


If TCP-SACK is not supported by the UE, then the EPC may have to resort to other methods of detecting TCP retransmissions.


The UE might perform its own packet-size adjustments based on its own assessment of the network capability and/or radio conditions; this proposal does not preclude them.


A UE application may perform its own IPMTUD and/or PLPMTUD; this is independent of this proposal.


This proposal cannot be applied to bearers that are neither IPv4 nor IPv6.





BRIEF DESCRIPTION OF THE DRAWINGS

The disclosed method and apparatus, in accordance with one or more various embodiments, is described with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict examples of some embodiments of the disclosed method and apparatus. These drawings are provided to facilitate the reader’s understanding of the disclosed method and apparatus. They should not be considered to limit the breadth, scope, or applicability of the claimed invention. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.



FIG. 1 is an illustration of a network in which the network uses 1500 byte MTU with appropriate reduction for GTP/IP overheads.



FIG. 2 is an illustration of a network in which the core network uses 9192 bytes as the MTU.



FIG. 3 is an illustration of a network with a static setting of the normal 1500-byte MTU in both the core and the RAN.



FIG. 4 explains the static setting with a higher-level 9000-byte MTU in both the core and the RAN and the typical packet-size transactions used to avoid fragmentation/reassembly at the IPv4 level.



FIG. 5 explains the static setting with a higher-level 9000-byte MTU in both the core and the RAN and the typical packet-size transactions used to avoid fragmentation/reassembly at the IPv4 level.



FIG. 6 explains the static setting with a higher-level 9000-byte MTU in both the core and the RAN and how MTU/MSS settings may be adjusted dynamically for different UEs/flows.



FIG. 7 is an illustration of how TCP-SACK is used to efficiently handle retransmissions.





The figures are not intended to be exhaustive or to limit the claimed invention to the precise form disclosed. It should be understood that the disclosed method and apparatus can be practiced with modification and alteration, and that the invention should be limited only by the claims and the equivalents thereof.


DETAILED DESCRIPTION

In some embodiments, using a higher MTU (Maximum Transmission Unit) for enterprise CBRS (Citizens Broadband Radio Service) networks raises challenges. In some embodiments, packet-size selection is made dynamic for each of the UE's (User Equipment) traffic flows, using statistical counters and predictions based on packet losses and delay observations in the CBRS system in the enterprise network. Further, to make the UE-specific radio-health information accessible, a proprietary communication interface between the eNB (eNodeB) and EPC (Evolved Packet Core) is presented in some embodiments.


With regard to CBRS private LTE (Long Term Evolution) networks, when the enterprise owns and operates the cellular network as well as the enterprise network, the eNB and EPC elements are separated only by a VLAN/L3 (Virtual Local Area Network/Layer 3) router. In some such embodiments, there is no IPSec (Internet Protocol Security) involved. It is possible that the MTU setting used in the network can be as high as 9192 bytes. If the core network uses 9192 bytes as the MTU, for example, then the network architecture is as depicted in FIG. 2. The server can send a single 9000-byte packet with a 52-byte header.


Current systems set a different MTU at the UE based on radio-condition monitoring on the UE's side. But the UE is a termination point of both the wireless and the IP connection, whereas some embodiments of the disclosed method and apparatus perform dynamic MTU setting and enforcement in an intermediate node such as the EPC.


Here the saving is about 312 bytes (6 packets × 52 bytes of header information) for a 9000-byte payload, which translates to a saving of just about 3% in overhead. These 312 bytes of packet-header overhead are also saved from going through PDCP ciphering and transmission over the wireless layers of RLC/MAC.


While this packet-header overhead saving of about 3% might in itself seem insignificant, making the packet size large has the indirect benefit of reducing the packet rate: 1 large packet is sent instead of 7 small packets, a reduction of roughly 86%.


As can be seen, the higher MTU in the RAN and core helps reduce the packet-header overheads, and when UEs are provided with IPv6 addresses, this saving in overhead doubles to about 6%.


When using higher packet sizes on the DL for applications like TCP, the TCP acknowledgements that need to flow back are also reduced. In the above example, to transport 36 kbytes, using 1400-odd-byte packets results in 14+ uplink TCP-ACK packets (approximately 14×64 ≈ 900 bytes). However, the same 36 kbytes, when transported using 4×9 kbyte TCP packets, results in just 2 uplink TCP-ACK packets (2×64 = 128 bytes). Essentially, the traffic saving on the UL can be more than 6 times, up to about 85%, and when using IPv6 for UEs, the effective saving on UL overhead is even higher.
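The ACK arithmetic can be sketched as follows, assuming the classic delayed-ACK policy of one ACK per two received segments and 64-byte ACK packets; both are simplifying assumptions, so the counts differ slightly from the rounded figures in the text:

```python
import math

def uplink_ack_load(total_bytes, mss, ack_size=64):
    """Return (ack_count, ack_bytes) for transporting total_bytes downlink,
    assuming delayed ACK: one ACK per two received segments."""
    segments = math.ceil(total_bytes / mss)
    acks = math.ceil(segments / 2)
    return acks, acks * ack_size

small = uplink_ack_load(36_000, 1448)   # 25 segments -> 13 ACKs, 832 bytes
large = uplink_ack_load(36_000, 9000)   # 4 segments  -> 2 ACKs, 128 bytes
```

Under these assumptions the big-packet case sends roughly a sixth of the uplink ACK bytes, in line with the savings claimed above.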


As can be seen, with the higher MTU usage at the TCP level, the uplink demand (packets/bytes) comes down; this is significant and helps even when the DL/UL configuration is DL-centric, as in CBRS LTE-TDD systems.


A similar benefit may be realizable with other transaction-oriented protocols like NetBIOS and QUIC; those examples are not discussed explicitly here.


In typical operator/service-provider networks, these extra packet headers are also counted towards the end user's monthly byte count, but in private LTE networks, transport efficiency, increased throughput, user experience, etc., are valued more than the byte count, and higher packet sizes help in this case.


As discussed, making packets large may not yield significant savings in header overhead, but it reduces the packet rate (effectively to about ⅙th, for a typical 9-kbyte packet).


The packet-rate saving applies to both the eNB and EPC nodes while processing GTP and SCTP traffic.


Similarly, in the intervening switches, where forwarding decisions are made purely on headers and the actual data is switched in cut-through mode, the large packet size and reduced packet rate are again beneficial.


Also, for the eNB, the reduction in packet rate translates to a reduction in the number of scheduling triggers such as buffer-occupancy updates, PDCP ciphering/deciphering operations, and IPSec encryption/decryption operations, if applicable. In ciphering/crypto operations, DMA transfers are typically used, and larger packet sizes are usually beneficial, instead of the CPU initiating multiple small-packet transfers.


Currently, UEs come with a wide variety of RAN UE-capability information, and the amount of information is ever increasing. This information is shared between the EPC and eNB in signaling messages during UE attach, UE handovers, etc., and using the higher MTU allows a lower signaling packet rate and less SCTP-level fragmentation/reassembly overhead for the eNB and EPC.


As the amount of traffic consumed over mobile devices increases, the associated security transactions also increase; almost every transaction requires TLS protection, which benefits from the higher packet size in the form of quicker connection setup, faster page downloads, etc.


Using higher packet sizes on the RAN requires that the UE/mobile be capable of supporting those packet sizes. If certain devices do not support them, then the full system should be capable of supporting both small-MTU UEs and higher-MTU UEs.


Similarly, the core network between the APs and the EPC should also be able to support higher packet sizes. If it cannot, the system falls back to the normal lower MTU values.


Also, if the core network is only partially covered with the higher MTU, difficulties arise when UEs attached via higher-MTU APs hand over or move to APs covered by a lower-MTU network, or vice versa. In such cases, the UE needs to be reconfigured with the appropriate MTU, and for ongoing transactions, those packets will undergo IP-level fragmentation and reassembly in the eNB and EPC, which can impact performance.


If the UEs are in bad SINR conditions where more retransmissions happen at the HARQ level, that is still acceptable. However, if more retransmissions happen at the RLC level, the entire 9-kbyte higher-layer TCP packet has to be retransmitted, rather than retransmitting only the failed packet (among the 6-7 smaller-MTU TCP packets of 1500 and 312 bytes). So the RLC-level retransmission penalty is larger with big packet sizes, and it becomes an issue when a UE moves between high-SINR and low-SINR conditions often.


If there are packet drops in the core, or if the RLC retransmissions fail to recover the packet losses in the RAN, the higher-size packets require retransmission of the entire packet (as opposed to retransmitting a selected few of the 6-7 smaller packets).
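The penalty can be made concrete with a back-of-the-envelope model. Assume, purely for illustration, that each 1500-byte air-link unit is lost independently with probability p; losing any unit of a 9000-byte packet forces the whole packet to be resent, whereas with small packets only the lost fragment is resent:

```python
def expected_retx_bytes(p, payload=9000, unit=1500):
    """Expected bytes retransmitted in the first recovery round, under an
    illustrative independent-loss model with per-unit loss probability p."""
    units = payload // unit                       # 6 air-link units
    # big packet: lost if ANY unit is lost -> resend all `payload` bytes
    big = (1 - (1 - p) ** units) * payload
    # small packets: each lost unit is resent on its own
    small = units * p * unit
    return big, small

big, small = expected_retx_bytes(p=0.02)
# big ≈ 1027 bytes vs small = 180 bytes expected retransmission
```

At a 2% unit-loss rate, the large-packet case retransmits more than five times as many bytes per round in this model, which is the trade-off the dynamic scheme exploits.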


Packet Routing and Transfer Function

The packet routing and transfer function:

  • routes and transfers packets between a mobile TE and a packet data network, i.e. between reference point R and reference points Gi or SGi;
  • routes and transfers packets between mobile TE across different PLMNs, i.e.:
  • between reference point R and reference point Gi via interface Gp;
  • between reference point R and reference point SGi via interface S8;
  • routes and transfers packets between TEs, i.e., between the R reference point in different MSs; and
  • optionally supports IP Multicast routing of packets via a relay function in the GGSN and P GW.


The PDP PDUs shall be routed and transferred between the MS and the GGSN or P-GW as N-PDUs. In order to avoid IP-layer fragmentation between the MS and the GGSN or P-GW, the link MTU size in the MS should be set to the value provided by the network as a part of the IP configuration. The link MTU size for IPv4 is sent to the MS by including it in the PCO (see TS 24.008 [13]). The link MTU size for IPv6 is sent to the MS by including it in the IPv6 Router Advertisement message (see IETF RFC 4861, "Neighbor Discovery for IP version 6 (IPv6)").


When using a packet data network connection of type “non-IP” (see clause 4.3.17.8 of TS 23.401 published by the 3rd Generation Partnership Project; entitled “Technical Specification Group Services and System Aspects; GPRS enhancements for E-UTRAN access”), the maximum uplink packet size that the MS shall use may be provided by the network as a part of the session management configuration by encoding it within the PCO (see TS 24.008 published by the 3rd Generation Partnership Project; entitled “Technical Specification Group Core Network and Terminals; Mobile radio interface Layer 3 specification; Core network protocols; Stage 3”). To provide a consistent environment for application developers, the network shall use a maximum packet size of at least 128 octets (this applies to both uplink and downlink).


In some embodiments, the network configuration ensures that for PDP type IPv4v6 the link MTU values provided to the UE via PCO and in the IPv6 Router Advertisement message are the same. In some embodiments, when this condition cannot be met, the MTU size selected by the UE is unspecified.


When the MT and the TE are separated, e.g., a dongle based MS, it is not always possible to set the MTU value by means of information provided by the network. In some embodiments, the network has the capability of transferring N-PDUs containing PDP PDUs, where the PDP PDUs are of 1500 octets, between the MS and GGSN/P-GW.


In some embodiments, the TE, when it is separated from the MT, can perform MTU configuration itself, which is out of the scope of 3GPP standardization procedures. Thus, when the MT component in the terminal obtains MTU configuration from the network, this does not imply that the MS, considered as a whole, will always employ this MTU. In many terminals having a separated TE, the TE component is configured by default to use an MTU of 1500 octets.


In some embodiments in which the network deployment has an MTU size of 1500 octets in the transport network, providing a link MTU value of 1358 octets to the MS as part of the IP configuration information from the network will prevent IP-layer fragmentation within the transport network between the MS and the GGSN/P-GW.


As the link MTU value is provided as a part of the session management configuration information, in some embodiments, a link MTU value is provided during each PDN connection establishment.


In some embodiments, PDP type PPP is supported only when data is routed over a GGSN employing the Gn/Gp interfaces. In some such embodiments, a P-GW supports PDP types IPv4, IPv6 and IPv4v6 only.


Between the 2G SGSN and the MS, PDP PDUs are transferred with SNDCP. Between the 3G SGSN and the MS, PDP PDUs are transferred with GTP U and PDCP.


Between the SGSN and the GGSN when using Gn/Gp, or between the SGSN and the S GW when using S4, PDP PDUs are routed and transferred with the UDP/IP protocols. The GPRS Tunnelling Protocol (GTP) transfers data through tunnels. A tunnel endpoint identifier (TEID) and an IP address identify a GTP tunnel. When a Direct Tunnel is established, PDP PDUs are routed and transferred directly between the UTRAN and the GGSN using Gn or between UTRAN and the S GW using S12. On S5/S8 interfaces PMIP may be used instead of GTP (see 3GPP standard TS 23.402; entitled “Architecture enhancements for non-3GPP accesses”).


Static Higher MTU Setting for the System

In some embodiments, the following steps are performed in the eNB, the EPC, the intervening network elements, and the UE. This section sets out the steps for statically configuring the MTU to a higher value, such as 9000 bytes, in the entire system.


In some embodiments, the UE can support high MTU/MSS settings, and the EPC/core-network element, while setting up or modifying the packet session, should set up the MTU accordingly.


The core network between the eNB and the EPC is configured to 9192 bytes, and further, the enterprise servers are all configured with higher MTU settings like 9100 bytes (to cover the GTP/IP, and optionally IPSec/VLAN, encapsulation in the RAN network). [If IPv6 is used, an additional 32 bytes of overhead can be added as a buffer.]
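The budget implied above is a simple consistency check: a maximum-size server packet plus the encapsulation overhead must not exceed the transport MTU. A minimal sketch (the 92-byte figure is just the difference of the two example MTUs, not a header-by-header accounting):

```python
def fits(server_mtu, encaps_overhead, transport_mtu=9192):
    """True if a maximum-size server packet survives GTP/IP (and optional
    IPSec/VLAN) encapsulation without outer-IP fragmentation."""
    return server_mtu + encaps_overhead <= transport_mtu

headroom = 9192 - 9100     # 92 bytes left for the tunnel headers
```

Any encapsulation stack heavier than the headroom (for instance with the extra IPv6 buffer noted above) would force outer-IP fragmentation.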


In the eNB and EPC nodes, the system's buffer pools/memory pools shall be retuned to support a greater number of 9192-byte packets in the various software/hardware components and in the crypto and networking driver layers.


If the eNB is configured to do any kind of TCP-MSS clamping operations or packet-size reductions (in either direction), that functionality shall be disabled.


Optionally, both the eNB and the EPC (SGW/PGW) shall be capable of fragmenting and reassembling (GTP-encapsulated) outer IP packets at wire speed with very little impact on performance.


The EPC system should maintain statistics and counters on: the count of bearers using higher MTU sizes; the count of bearers using lower MTU sizes; the count of packets transferred using higher-MTU packets in both the DL and UL directions per UE (and an aggregate counter); and the count of packets transmitted with fragmentation at the IPv4/IPv6 level in the RAN.



FIG. 4 explains the static setting with a higher-level 9000-byte MTU in both the core and the RAN and the typical packet-size transactions used to avoid fragmentation/reassembly at the IPv4 level. Note that the numerical values are examples and can differ depending on the encapsulations, UE protocols, and core-network configuration used.


Dynamic MTU Management for Different UEs/Different Flows - Basic

Apart from using static MTU/MSS settings for the entire system as described above, it is still possible that the UE, for its own internal reasons, may not be able to support the higher MTU capability. For such situations, this section discusses an option to support different MTU values for different UEs/flows.



FIG. 5 explains the static setting with a higher-level 9000-byte MTU in both the core and the RAN and the typical packet-size transactions used to avoid fragmentation/reassembly at the IPv4 level.


Assuming the core network up to the eNB is capable of the higher-level MTU, the system does not initially know whether the UE is really capable. It begins with the assumption that the UE's MTU is still 1500 bytes, and every downlink packet is pushed through after IP-level fragmentation (pre-fragmentation before the GTP/IP/IPSec encapsulations). [For uplink traffic, the UE may use up to 9000 bytes, and the system supports it.]


When the UE begins TCP negotiations and starts with a hint MSS value of <1500 bytes, that is intercepted by the EPC, and the EPC marks that particular UE's MTU capability as limited to 1500 bytes and continues with the IP-fragmentation strategy.


For other UE’s which indicate higher MSS value (like 8960 bytes), EPC allows full 9000-byte higher MTU traffic, without resorting to fragmentation, as described in previous section. Note that the numerical values are example numbers and can be different depending on the encapsulations, ue-protocols, core-network configuration used.


Dynamic MTU Management for Different UEs/Different Flows - Strategic

As noted earlier, it is also possible that using the higher MTU is inefficient on select occasions. So, in order to provide relief and a fallback for those UEs, the system should provide different MTU/MSS settings for different UEs and for different flows of specific UEs. The following steps help achieve such dynamic decision making on MTU/MSS settings. [Exact details of MSS clamping are not described here, to keep this document brief.]



FIG. 6 explains the static setting with a higher-level 9000-byte MTU in both the core and the RAN and how MTU/MSS settings may be adjusted dynamically for different UEs/flows. Note that the numerical values are examples and can differ depending on the encapsulations, UE protocols, and core-network configuration used.


Even though the system and UE are set up with a >9000-byte setting, the EPC assumes the UE has a 1500-byte MTU capability in the DL direction and performs IP fragmentation if higher-packet-size traffic is received from the core.


If the UE initiates a TCP connection with servers and the TCP MSS negotiates to a higher value like 8900+ bytes, this means the UE is capable of accepting the higher MTU, and the EPC marks this UE as such (and suspends IP fragmentation in the downlink direction) for all traffic towards the UE.


The UE and servers keep transacting TCP traffic using higher MSS values (up to 8900 bytes). The EPC keeps intercepting and monitoring the traffic for retransmissions by monitoring the TCP-SACK gaps from the UE.


A mechanism for detecting TCP retransmissions via SACK monitoring is described in Appendix A.


If those TCP transactions indicate packet loss between the server and the UE, then the EPC temporarily marks this UE's capability as 1500 bytes and begins IP pre-fragmentation of DL packets. [The idea here is that, if there are failures, retransmitting the entire 9000-byte packets wastes air-link resources unnecessarily. If the IP packets are pre-fragmented and sent as 1500-byte packets or smaller, then even if there are failures in the air link, the RLC-level recovery needs to deal only with smaller packets.]


At the UE TCP level this is still an 8900-byte DL packet, and it results in fewer ACK packets in the uplink.


As the EPC has marked this UE's MTU capability as 1500 bytes, when the UE tries to set up new TCP/IP sessions and the UE and server try to negotiate the TCP MSS settings, the EPC intercepts them and re-clamps the MSS to a lower value like 1500 bytes.
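MSS re-clamping itself is a small rewrite of the options in the TCP SYN. A hedged sketch using the standard option encoding (MSS is option kind 2, length 4); the clamp value is illustrative, and the TCP checksum recomputation that a real middlebox must also perform is omitted for brevity:

```python
import struct

def clamp_mss(tcp_options: bytearray, clamp: int) -> bytearray:
    """Walk the TCP options of a SYN and lower the MSS option if needed."""
    i = 0
    while i < len(tcp_options):
        kind = tcp_options[i]
        if kind == 0:                        # End of Option List
            break
        if kind == 1:                        # NOP is a single byte
            i += 1
            continue
        length = tcp_options[i + 1]
        if length < 2:                       # malformed option: stop
            break
        if kind == 2 and length == 4:        # MSS option
            (mss,) = struct.unpack_from("!H", tcp_options, i + 2)
            if mss > clamp:
                struct.pack_into("!H", tcp_options, i + 2, clamp)
        i += length
    return tcp_options

opts = bytearray(b"\x02\x04" + struct.pack("!H", 8960))  # SYN advertising 8960
clamp_mss(opts, 1460)                                    # rewritten in place
```

A SYN that already advertises an MSS at or below the clamp passes through unchanged, which is why the same interception path can serve both high-MTU and low-MTU UEs.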


The EPC continues this IP pre-fragmentation of the UE's traffic and the TCP-MSS re-clamping to the lower MSS value as long as TCP packet loss is noticed on other TCP flow(s) for the UE.


In addition to this, if the UE establishes new PDN connections, EPC can set lower MTU value for those PDN-connections to force the UE to use smaller sized payloads.


The EPC continues monitoring the UE's TCP ACKs, and as soon as the SACK gaps reduce for a certain monitoring period, the EPC restores the higher MTU values, restores the UE to step B) above, and avoids pre-fragmentation and lower-MSS clamping for the UE's flows.


In addition to this, if the UE establishes new PDN connections, EPC can set higher MTU value for those PDN-connections to force the UE to use higher byte-count payloads.


Apart from this, the UE can perform its own monitoring of link performance and traffic rates, and can independently decide on its own mechanism for choosing the MTU, packet sizes, and TCP-MSS values.


The system should maintain counters on the number of UEs utilizing the higher MTU versus the number of UEs utilizing the lower MTU. In addition to the above system-level statistics/counters, a count of the UEs whose MTU settings are changed from the higher to the lower value, or vice versa, will be useful for fine-tuning the algorithm.


Dynamic MTU Management for Different UEs/Different Flows - Advanced

Note that if TCP-SACK is not supported or not reliable, the EPC may use other strategies to determine the network behavior and decide on the MTU/MSS values for the UE's flows. For other transport protocols, other elaborate methods of detecting protocol retransmissions between the UE and server may be performed; they are not described in this disclosure.


Even without higher-layer retransmissions, if the eNB detects many RLC retransmissions, throughput may again be reduced, especially with large-MTU packets. In those cases as well, the UE’s MTU/MSS setting shall be reverted so that the effective payload is less than 1500 bytes. (One mechanism for detecting RLC retransmissions via RTT monitoring is described in Appendix B.)


Depending on the TCP flow monitoring and the other approaches mentioned below, the EPC may choose to set the link to higher or lower MTU values during a UE’s bearer setup/modify or PDN setup/modify operations, balancing the higher efficiency of a larger MTU against the waste caused by packet losses.


Additionally, periodic MTU discovery via IPMTUD, PLPMTUD, or the SCTP method may be performed by UEs and servers for different flows, and MSS values decided accordingly. Depending on what the end-to-end network (including the EPC, eNB, and intervening nodes) supports, they may choose MSS values accordingly; this is independent of the proposal described here.


Appendix A: TCP-Retransmission Detection via SACK Information Monitoring

In modern TCP implementations, TCP SACK (selective acknowledgment) is used to handle retransmissions efficiently. It is roughly described and illustrated in FIG. 7, as follows:


The server pushes packets containing TCP sequence numbers from x through x+n1 and all the way up to x+n6. The UE, however, indicates to the server via TCP SACK packets that it has received everything up to sequence number x, and that it has received some segments between x+n1 and x+n2 but is missing the segments between x and x+n1.


Similarly, the UE may send further SACKs indicating additional reception gaps and requesting the server to retransmit.


The EPC is configured to perform deep packet inspection of the TCP packets between the UE and the server, for every TCP connection/flow of every UE.


On reception of every ACK packet containing SACK information, it calculates the retransmission percentage as

retx % = retransmissionBytes / totalBytes
       = [ ((x+n1) - x) + ((x+n3) - (x+n2)) + ((x+n5) - (x+n4)) ] / (x+n6)

where x = x - prevAck, and prevAck = the first ACK sequence number (or) the ACK sequence number seen at the start of the time window.
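The calculation above can be sketched in code. This is a minimal illustration assuming SACK blocks arrive as (left edge, right edge) sequence-number pairs and that at least one block is present; variable and function names are assumptions.

```python
# Sketch of the retx% calculation: the cumulative ACK covers bytes up
# to `ack` (x in the text); SACK blocks report segments received beyond
# the ACK point. Bytes in the gaps between the ACK point and the SACK
# blocks are the ones the server must retransmit.

def retx_percentage(prev_ack: int, ack: int, sack_blocks: list) -> float:
    """Retransmission percentage for one ACK carrying SACK info.

    Assumes at least one SACK block.
    """
    blocks = sorted(sack_blocks)
    # Gap between the cumulative ACK and the first SACK block: (x+n1) - x.
    retx_bytes = blocks[0][0] - ack
    # Gaps between consecutive SACK blocks: (x+n3)-(x+n2), (x+n5)-(x+n4), ...
    for (_, prev_right), (left, _) in zip(blocks, blocks[1:]):
        retx_bytes += left - prev_right
    # totalBytes is the highest SACKed edge relative to prevAck: x+n6.
    total_bytes = blocks[-1][1] - prev_ack
    return 100.0 * retx_bytes / total_bytes

# ACK at 1000, SACK blocks [2000,3000), [4000,5000), [6000,8000):
# 3000 missing bytes out of 8000 in the window.
print(retx_percentage(0, 1000, [(2000, 3000), (4000, 5000), (6000, 8000)]))  # 37.5
```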

















The EPC maintains per-flow information such as:

| Ue-IP | Ue-port | Srvr-IP | Srvr-port | Time     | prevAck-Seq-num | Total received bytes | Total retransmissions | Retransmission percentage |
|-------|---------|---------|-----------|----------|-----------------|----------------------|-----------------------|---------------------------|
| U1    | Up1     | S1      | Sp1       | T1+1 sec | prevAck1        |                      |                       | A11                       |
| U1    | Up1     | S1      | Sp1       | T1+2 sec | prevAck2        |                      |                       |                           |
| U1    | Up1     | S1      | Sp1       | T1+3 sec | prevAck3        |                      |                       |                           |
| U1    | Up2     | S1      | Sp2       | T1+1 sec |                 |                      |                       | A12                       |
| U1    | Up2     | S1      | Sp2       | T1+2 sec |                 |                      |                       |                           |
| U1    | Up2     | S1      | Sp2       | T1+3 sec |                 |                      |                       |                           |
| U1    | Upn     | Sn      | Spn       | T1+1 sec |                 |                      |                       | A13                       |
| U1    | Upn     | Sn      | Spn       | T1+2 sec |                 |                      |                       |                           |
| U1    | Upn     | Sn      | Spn       | T1+3 sec |                 |                      |                       |                           |

Every measurement interval, the retransmission percentages of all of a specific UE’s TCP connections are aggregated and the average ratio determined ((A11+A12+A13)/3). If this ratio is high (say, above 5%) over multiple measurement intervals (say, 3 measurements of 1 second each), either the network or the UE has a noisy connection with many packet drops. If, at that time, the UE/network is using the higher MTU/MSS setting for that UE, the EPC considers downgrading the UE to the lower MTU/MSS setting for its future TCP flows and begins pre-fragmenting packets to the lower MTU/MSS values.
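The aggregation and downgrade decision above can be sketched as follows. The 5% threshold and 3-interval window mirror the examples in the text but are tunable assumptions, as are the function names.

```python
# Sketch of the per-UE aggregation step: the retx percentages of all of
# a UE's TCP flows are averaged each measurement interval, and the UE
# is downgraded to the lower MTU/MSS setting only after the average
# stays above the threshold for several consecutive intervals.

RETX_THRESHOLD_PCT = 5.0      # "say 5 %" in the text (assumption)
CONSECUTIVE_INTERVALS = 3     # "3 measurements of 1 sec each" (assumption)

def average_retx(flow_retx_pcts: list) -> float:
    """(A11 + A12 + A13) / 3 for one measurement interval."""
    return sum(flow_retx_pcts) / len(flow_retx_pcts)

def should_downgrade(interval_averages: list) -> bool:
    """True when the last N interval averages all exceed the threshold."""
    recent = interval_averages[-CONSECUTIVE_INTERVALS:]
    return (len(recent) == CONSECUTIVE_INTERVALS
            and all(a > RETX_THRESHOLD_PCT for a in recent))

print(average_retx([6.0, 4.5, 7.5]))       # 6.0
print(should_downgrade([6.0, 5.5, 7.0]))   # True
print(should_downgrade([6.0, 2.0, 7.0]))   # False (one clean interval)
```

Requiring several consecutive high intervals adds hysteresis, so a single noisy second does not flip the UE’s MTU setting back and forth.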


If the retransmission percentages improve beyond specific threshold levels, the EPC can restore the UE to the higher MTU/MSS values supported in the network configuration.


Appendix B: RLC-Retransmission Detection via RTT/Dup-Ack Information Monitoring

As described earlier, RLC retransmissions are detected most easily at the eNB level, but in this document the CBRS UE’s MTU/MSS setting is controlled at the EPC level, so one approach is to communicate this information from the eNB to the EPC using a non-standard proprietary message. An alternate approach is to detect this condition at the EPC itself by monitoring the RTT values on each of the UE’s protocol flows and adjusting the MTU/MSS setting for the UE’s flows accordingly.


Now consider the situation in which a proprietary channel of communication (S1AP protocol extensions) is available between the eNB and the EPC to periodically communicate per-UE radio-health conditions. For each UE, the eNB gathers UE radio-health information such as:


DL: Tx bytes, PDCP-tx-bytes, Tx-Tonnage, RLC-failures, RLC-retransmissions, RLC-total transmissions, HARQ-retransmission counts, HARQ-failures, Avg DL CQI, Avg Rank used, Avg Pathloss, etc.


UL: Rx bytes, PDCP-rx-bytes, rx-Tonnage, RLC-failures, RLC-retransmissions, RLC-total transmissions, HARQ-retransmission counts, HARQ-failures etc.


RRC-connection time, number of RRC re-establishments, etc.


The EPC is configured to receive the UE radio-health-information messages and perform calculations such as the UE’s RLC failures over the last 1 second, 5 seconds, and 10 seconds in the DL and UL directions.


The EPC is then equipped to make decisions such as: if the RLC failures in the last 1 second increase beyond a certain threshold, the EPC can consider downgrading the UE to the lower MTU/MSS setting and performing pre-fragmentation and/or MSS-clamping-to-lower-MTU strategies.


Similarly, if the RLC failures in the last 1 second improve beyond a certain threshold for the next X time windows (say, the next 3 seconds), the EPC can consider restoring the UE to the higher MTU/MSS setting and stopping pre-fragmentation and/or MSS clamping to lower MTU values.
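The downgrade/restore logic of Appendix B can be sketched as a small state machine. The class name, the failure threshold, and the 3-window restore delay are illustrative assumptions consistent with the examples in the text.

```python
# Sketch of the RLC-failure-driven MTU decision: the EPC processes
# per-UE radio-health reports once per second, downgrades the UE when
# RLC failures exceed a threshold, and restores the higher MTU only
# after X consecutive clean windows (3 seconds in the text).

class UeMtuController:
    def __init__(self, fail_threshold=10, clean_windows_to_restore=3):
        self.fail_threshold = fail_threshold
        self.clean_windows_to_restore = clean_windows_to_restore
        self.clean_windows = 0
        self.high_mtu = True   # start on the higher MTU/MSS setting

    def on_radio_health_report(self, rlc_failures_last_1s: int) -> bool:
        """Process one 1-second report; return True if higher MTU is in use."""
        if rlc_failures_last_1s > self.fail_threshold:
            self.high_mtu = False        # downgrade: pre-fragment / clamp MSS
            self.clean_windows = 0
        else:
            self.clean_windows += 1
            if self.clean_windows >= self.clean_windows_to_restore:
                self.high_mtu = True     # restore higher MTU/MSS
        return self.high_mtu

ctrl = UeMtuController()
print(ctrl.on_radio_health_report(25))  # False: downgraded
print(ctrl.on_radio_health_report(2))   # False: 1 clean window
print(ctrl.on_radio_health_report(1))   # False: 2 clean windows
print(ctrl.on_radio_health_report(0))   # True: restored after 3 clean windows
```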


Although the disclosed method and apparatus is described above in terms of various examples of embodiments and implementations, it should be understood that the particular features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described. Thus, the breadth and scope of the claimed invention should not be limited by any of the examples provided in describing the above disclosed embodiments.


Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term “including” should be read as meaning “including, without limitation” or the like; the term “example” is used to provide examples of instances of the item in discussion, not an exhaustive or limiting list thereof; the terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.


A group of items linked with the conjunction “and” should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as “and/or” unless expressly stated otherwise. Similarly, a group of items linked with the conjunction “or” should not be read as requiring mutual exclusivity among that group, but rather should also be read as “and/or” unless expressly stated otherwise. Furthermore, although items, elements or components of the disclosed method and apparatus may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated.


The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term “module” does not imply that the components or functionality described or claimed as part of the module are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.


Additionally, the various embodiments set forth herein are described with the aid of block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.

Claims
  • 1. A method comprising: determining, based on various network operational factors, an optimal Maximum Transmission Unit (MTU); and establishing the determined MTU.
CLAIM OF PRIORITY TO PREVIOUSLY FILED PROVISIONAL APPLICATION - INCORPORATION BY REFERENCE

This non-provisional application claims priority to earlier-filed provisional application No. 63/279,010, filed Nov. 12, 2021, entitled “Dynamic MTU Management in an Enterprise Network” (ATTY DOCKET NO. CEL-041-PROV), and that provisional application, with all its contents, is hereby incorporated by reference herein as if set forth in full.

Provisional Applications (1)
Number Date Country
63279010 Nov 2021 US