Cellular telephones and other user equipment (UE) typically communicate with a cellular network using concurrent bi-directional data traffic, including the transmission of uplink (UL) data to the cellular network and receipt of downlink (DL) data from the cellular network. The transmission of UL data from UE to cellular network can involve various real-time transmission pre-processing, such as ciphering of UL data before transmission. The resulting pre-processed UL data is then provided to a radio frequency (RF) transceiver for RF transmission.
Typically, UL data is pre-processed for transmission in the order in which it is enqueued. However, the UL data may be composed of data of different transmission priorities, and under this first-in first-out (FIFO) approach, a significant amount of buffered lower-priority UL data can excessively delay later-enqueued higher-priority UL data, which can impact UL throughput, DL throughput, or both. For example, audio or video data of a UL multimedia stream may warrant prioritized transmission and yet be unnecessarily delayed when a significant amount of enqueued UL data of lower priority awaits pre-processing and transmission. As another example, when the UE receives a Transmission Control Protocol (TCP) DL packet from the cellular network, TCP typically requires that the UE transmit a TCP Acknowledge (TCP-ACK) UL packet back to the cellular network to confirm receipt of the TCP DL packet. Some cellular standards, such as Third Generation Partnership Project (3GPP) Fifth Generation New Radio (5G NR), introduce certain procedures, such as the mini-slot concept, which result in a relatively short Transmission Time Interval (TTI). With a relatively short TTI, the UE may not be able to successfully transmit a TCP-ACK UL packet in time in the presence of a significant amount of previously-enqueued UL data, in which case the cellular network must retransmit the same TCP DL packet in response to the failure to receive a TCP-ACK UL packet within the TTI, negatively impacting both DL and UL data throughput.
The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.
Certain protocols in the cellular protocol stack of a wireless device provide for pre-processing of UL data prior to its transmission. For example, the Packet Data Convergence Protocol (PDCP) provides for pre-transmission processing in the form of header compression processes, ciphering processes, and data integrity processes. Such protocols may employ a FIFO-type input queue to buffer UL data awaiting such pre-transmission processes. The protocol layer pulls UL data from this input queue, performs one or more pre-transmission processes, and forwards the resulting processed UL data to the next protocol layer in the protocol stack, ultimately leading to transmission of the processed UL data. As noted, the FIFO nature of this input queue can result in an unacceptable delay in the processing and transmission of certain UL data, due to its priority or otherwise due to a quality-of-service (QoS) requirement or other timing requirement placed on the particular UL data. Not only does the input queue introduce prioritization and delay issues in the presence of too much enqueued UL data, but the input queue is also subject to issues in the event of too little enqueued UL data. To illustrate, the cellular network typically grants network UL transmission resources to the wireless component such that the wireless component must fully utilize the granted network resources or otherwise be subject to a reduced future network resources grant. As such, should the input queue run out of UL data, the protocol stack is incentivized to employ padding to increase the amount of transmitted data and thus ensure that the granted UL resources are fully utilized. For example, if the UL data is insufficient to complete a transport block (TB), null padding “data” (transmission padding) is added to the UL data to complete the transport block. While this maintains the UL resources grant, it results in inefficient UL transmission due to the transmission of padding data.
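The padding behavior described above can be sketched as follows. This is a minimal illustration only: the function name and the fixed byte-oriented transport block size are assumptions for demonstration, not part of any cellular standard, in which transport block sizes are determined by the scheduling grant.

```python
def fill_transport_block(tb_size: int, ul_data: bytes) -> bytes:
    """Complete a transport block of tb_size bytes; when the available
    UL data cannot fill it, append null padding (transmission padding)."""
    payload = ul_data[:tb_size]
    padding = b"\x00" * (tb_size - len(payload))  # wasted grant capacity
    return payload + padding

# 3 bytes of actual data in an 8-byte transport block: 5 bytes of padding
tb = fill_transport_block(8, b"\x01\x02\x03")
```

The padding bytes maintain full utilization of the granted resources but carry no actionable data for the receiver.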
Thus, the input queue for a pre-transmission process can introduce unacceptable delay in higher-priority UL data transmission in the presence of substantial enqueued UL data or introduce excessive transmission inefficiency in the absence of sufficient enqueued UL data.
To provide suitable balance between the risks of higher-priority data transmission delay and transmission inefficiency, disclosed herein are systems and techniques for adaptive flow control for pre-transmission processing via an input queue that employs one or more criteria for control of enqueuing of input data. In at least one embodiment, a modem employs a protocol stack with at least one protocol layer that implements one or more pre-transmission processes for data. These one or more pre-transmission processes are fed by a FIFO input queue that receives data of different priorities. In at least one embodiment, data with a higher priority is generally enqueued in the input queue without restriction, whereas data with a lower priority is enqueued in the input queue selectively based on one or more criteria. In implementations, these one or more criteria include application of one or more thresholds to the current amount of data in the input queue, such as an upper, or maximum, threshold (denoted “VHIGH”) or a lower, or minimum, threshold (denoted “VLOW”), or both the maximum threshold VHIGH and the minimum threshold VLOW. For example, using these particular criteria, when the amount, or volume, of data in the input queue falls to the lower threshold VLOW, the flow control process initiates enqueuing of low-priority data. However, if and when the volume of data in the input queue rises to the upper threshold VHIGH, the flow control process prevents any further enqueuing of lower-priority data into the input queue. Thereafter, should the volume of data in the input queue fall back to the lower threshold VLOW, enqueuing of low-priority data is again initiated, and so on.
To facilitate this selective enqueuing of the low-priority data based on one or more criteria, the flow control process may employ another queue, identified herein as the low-priority queue, to buffer low-priority data while enqueuing of low-priority data into the input queue is blocked, so as to mitigate the risk of loss of the low-priority data.
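The selective enqueuing described above amounts to a hysteresis gate in front of the FIFO input queue, with a side queue parking blocked low-priority data. The sketch below is an illustrative assumption: the class and method names are hypothetical, queue volume is counted in bytes, and an actual modem would operate on protocol data units inside a protocol layer rather than on raw byte strings.

```python
from collections import deque

class FlowControl:
    """Hysteresis gate: high-priority data is always admitted to the
    input queue; low-priority data is admitted only while the gate is
    open, and is parked in a low-priority queue while it is closed."""

    def __init__(self, v_low: int, v_high: int):
        self.v_low, self.v_high = v_low, v_high
        self.input_queue = deque()     # feeds the pre-transmission process
        self.low_prio_queue = deque()  # parks blocked low-priority data
        self.low_prio_allowed = True   # gate state
        self.volume = 0                # bytes currently enqueued

    def _update_gate(self) -> None:
        if self.volume >= self.v_high:
            self.low_prio_allowed = False   # volume rose to V_HIGH: block
        elif self.volume <= self.v_low:
            self.low_prio_allowed = True    # volume fell to V_LOW: resume
            self._drain_low_prio()

    def _drain_low_prio(self) -> None:
        # Move parked low-priority data into the input queue until V_HIGH.
        while self.low_prio_queue and self.volume < self.v_high:
            pkt = self.low_prio_queue.popleft()
            self.input_queue.append(pkt)
            self.volume += len(pkt)

    def enqueue(self, pkt: bytes, high_priority: bool) -> None:
        if high_priority or self.low_prio_allowed:
            self.input_queue.append(pkt)
            self.volume += len(pkt)
        else:
            self.low_prio_queue.append(pkt)  # parked until gate reopens
        self._update_gate()

    def dequeue(self) -> bytes:
        pkt = self.input_queue.popleft()
        self.volume -= len(pkt)
        self._update_gate()
        return pkt
```

With V_LOW=4 and V_HIGH=10, enqueuing two 6-byte low-priority packets closes the gate (volume 12 ≥ 10); a third low-priority packet is parked, while high-priority packets are still admitted; once dequeuing drops the volume to 4 or below, the gate reopens and the parked packet is drained into the input queue.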
Further, in at least one embodiment, the one or more implemented criteria are dynamically adjustable to provide for tuning of the balance between risk of higher-priority UL data delay and excessive transmission padding. To illustrate, using the thresholds VHIGH and VLOW as the selective enqueuing criteria as an example, a lower threshold VLOW set too low for the present transmission conditions risks frequent depletion of the input queue and thus excessive transmission padding, while a VLOW set too high for the present transmission conditions risks excessive enqueuing of low-priority data relative to enqueued high-priority data and thus increases the risk of unacceptable higher-priority UL data transmission delay. Similarly, setting the upper threshold VHIGH too low would result in more aggressive blocking of low-priority data with the attendant excessive data padding risks, while setting this threshold too high likewise can result in higher-priority data transmission delay risks. Accordingly, in at least one embodiment, the wireless component characterizes the current transmission environment through monitoring of the use of padding in UL transmission, and dynamically adjusts one or both of the upper threshold or the lower threshold based on the monitored padding use. This same, or similar, approach may be utilized to dynamically adjust other criteria used for selective enqueuing of low-priority data into the input queue using the guidelines provided herein.
For ease of description and illustration, the adaptive flow rate control techniques for balancing higher-priority data transmission and transmission efficiency are described herein with reference to application of such techniques in a UE for uplink (UL) data transmissions. However, this context is for illustrative purposes only, and the techniques described herein are not limited to UL transmissions only. Rather, the same or similar principles may be similarly applied at a base station or other edge network component for downlink (DL) transmissions to a UE, to another base station, or other wireless component. Thus, reference herein to a UE employing the described techniques should be understood to apply similarly to other wireless devices, such as base stations, using the guidelines provided herein.
The cellular infrastructure network 104 includes a core network 108 and a plurality of edge networks, or radio access networks (RANs), connected via a backhaul infrastructure. Although cellular infrastructure network 104 is illustrated as having core network 108, in other embodiments multiple cellular infrastructure networks 104 can share the same core network 108 and differ instead by the edge network connected to the shared core network 108. Each edge network includes at least one base station (BS) 110 operable to wirelessly communicate with UEs within signal range based on one or more radio access technologies (RATs). Examples of the base station 110 include, for example, a NodeB (or base transceiver station (BTS)) for a Universal Mobile Telecommunications System (UMTS) RAT implementation (also known as “3G”), an enhanced NodeB (eNodeB) for a 4G LTE RAT implementation, a 5G Node B (gNB) for a 5G NR RAT implementation, and the like. As is well known in the art, the base stations 110 operate as an “air interface” to establish radio frequency (RF) wireless connections with UEs (such as UE 102), and these wireless connections (or “links”) then serve as data and voice paths between the UEs and the core network 108 for providing various services to the UEs, including voice services via circuit-switched networks or packet-switched networks, messaging services such as simple messaging service (SMS) or multimedia messaging service (MMS), multimedia content delivery, presence services, and the like. For ease of illustration, the base station 110 is described in various scenarios below as part of a 5G NR radio access network (RAN), and thus is a gNB in such scenarios, while the base station 110 is in various other scenarios part of a 4G LTE RAN, and thus is an eNodeB in such scenarios. However, it will be appreciated that the base station 110 is not limited to these example configurations. For example, the base station 110 could be part of a 3G RAN (e.g., a NodeB).
As a general operational overview, the UE 102 and the base station 110 cooperate to provide for transmission and receipt of RF signaling representative of a bi-directional data flow, including RF signaling representing DL data transmission 112 from the base station 110 to the UE 102 and RF signaling representing UL data transmission 114 from the UE 102 to the BS 110. For DL data transmission 112, DL RF signaling is received at the UE 102 via an RF antenna 116 and an RF transceiver 118, and the resulting digital output representing DL data is processed by one or more protocol layers of a protocol stack 120 of a cellular modem (not shown in
Due to various factors, such as scheduling and resource availability, certain UL pre-transmission processes may employ an input queue to buffer UL data awaiting further processing. To illustrate, a UL pre-transmission process 122 implemented by a protocol layer of the protocol stack 120 may employ an input queue 124 to buffer incoming data to be processed by the UL pre-transmission process 122. For example, the UL pre-transmission process 122 may represent one or more of the header compression, ciphering, or data integrity processes performed by a PDCP layer in the protocol stack 120. The input queue 124 typically is operated as a FIFO queue such that the first data enqueued in the input queue 124 is the first data dequeued to the UL pre-transmission process 122. However, the UL data provided to the protocol stack 120 in preparation for UL transmission typically has different transmission priorities. For example, certain UL data packets that are used to characterize the current transmission environment (e.g., Internet Control Message Protocol (ICMP) packets or Domain Name Service (DNS) packets), UL data subject to QoS requirements (e.g., multimedia data streams), or UL data subject to a narrow transmission window (e.g., TCP-ACK packets) may be considered to have a higher priority (and thus be “high-priority” data) than other types of UL data, such as normal TCP/UDP UL data for non-real-time traffic (which is thus “low-priority”, or “normal”, UL data). Thus, as noted above, the processing and transmission of high-priority UL data may be unacceptably delayed in the event that there is a substantial amount of UL data already enqueued in the input queue ahead of the high-priority data.
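A toy classification following the packet-type examples above might look like the following. The function and its parameters are hypothetical; an actual modem typically derives priority from QoS flow identifiers and bearer configuration rather than by inspecting protocol names.

```python
def classify_ul_packet(proto: str, is_tcp_ack: bool = False,
                       qos_flow: bool = False) -> str:
    """Illustrative priority classification of a UL packet."""
    if proto in ("ICMP", "DNS"):
        # packets used to characterize the current transmission environment
        return "high"
    if is_tcp_ack or qos_flow:
        # narrow transmission window (TCP-ACK) or QoS-bound media streams
        return "high"
    return "low"  # normal TCP/UDP data for non-real-time traffic
```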
Accordingly, in at least one embodiment the UE employs an adaptive flow control process for opportunistic management of the enqueuing of UL data into the input queue 124 of one or more UL pre-transmission processes 122 of the protocol stack 120. Further, in at least one embodiment, a queueing structure for the UL pre-transmission process 122 can include a multiple-level queue structure, such as the input queue 124 having an output to provide enqueued UL data to the UL pre-transmission process 122, as well as two lower-level queues, identified as high-priority queue 128 and low-priority queue 130, to temporarily buffer the high-priority UL data and low-priority UL data, respectively, before it is enqueued into the input queue 124.
In this process, UL data identified as higher priority (high-priority UL data) is permitted to enqueue into the input queue 124 from the high-priority queue 128 (or directly from the source AP or other source component) as it becomes available for enqueuing (queue capacity permitting). However, to achieve a more favorable balance between the risk of excessive delay of high-priority UL data and the risk of inefficient UL data throughput due to excessive padding as a result of queue underflow, the UE employs a flow control module 126 to selectively permit low-priority UL data to be enqueued from the low-priority queue 130 into the input queue 124 based on one or more criteria. Such criteria may be based on, for example, the current fullness (that is, the current enqueued data volume) of the input queue 124. For example, one or more criteria may represent the current volume of data in the input queue 124 meeting, or not meeting, a corresponding fullness threshold. In implementations, such criteria relate to one or both of an upper, or maximum, threshold (denoted “VHIGH”) and a lower, or minimum, threshold (denoted “VLOW”).
For these example threshold criteria, the selective enqueuing of low-priority data includes the flow control module 126 initiating the enqueuing of low-priority data when the fullness of the input queue 124 falls to the lower threshold VLOW (that is, when the data volume falls to meet a lower threshold criterion), while the flow control module 126 prevents any further enqueuing of low-priority data when the fullness of the input queue 124 reaches (that is, is at or above) the upper threshold VHIGH (that is, when the data volume rises to meet an upper threshold criterion). Thereafter, as data, both low-priority and high-priority, is dequeued from the input queue 124, the fullness of the input queue 124 may again fall to the lower threshold VLOW, in response to which the flow control module 126 resumes enqueuing of low-priority data, and so on. Note that although the flow control module 126 is depicted as being external to the protocol stack 120 for ease of illustration, in implementation the flow control module 126 typically is a software component of the protocol layer implementing the UL pre-transmission process 122.
In processing UL data for transmission, one or more protocol layers of the protocol stack 120 may employ data padding to ensure full utilization of the granted network resources. Thus, if the input queue 124 underflows, either the UL pre-transmission process 122 or another pre-transmission process downstream in the protocol stack 120 will compensate for the underflow by padding the incomplete transport block or other transmission data unit with padding (that is, null data). As explained above, this padding is not actionable data at the receiving end, and thus the presence of padding represents a lost opportunity to transmit actual UL data.
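One simple way to quantify the padding use described above is a padding ratio over a monitoring window of transport blocks, computed as the fraction of transmitted bytes that were padding. The definition below is an illustrative assumption; the disclosure does not fix a particular formula for the padding ratio signal 132.

```python
def padding_ratio(tb_sizes, payload_sizes):
    """Fraction of transmitted bytes that were null padding, over a
    window of transport blocks (illustrative definition)."""
    total = sum(tb_sizes)
    padded = sum(tb - data for tb, data in zip(tb_sizes, payload_sizes))
    return padded / total if total else 0.0
```

For instance, two 100-byte transport blocks carrying 100 and 60 bytes of actual data yield a ratio of 40/200 = 0.2.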
Thus, as transmission conditions may change, and as the settings for VHIGH and VLOW impact the balance between high-priority UL transmission delay risk and inefficient UL transmission risk, in at least one embodiment, the flow control module 126 dynamically adjusts one or both of these thresholds to provide for tuning of the balance between risk of higher-priority UL data delay and excessive UL transmission padding, or if other criteria are used, dynamically adjusts one or more other criteria in a similar manner. To illustrate, a lower threshold VLOW set too low for the present transmission conditions risks excessive UL transmission padding, while being set too high for the present transmission conditions increases the risk of unacceptable higher-priority UL data transmission delay. Similarly, setting the upper threshold VHIGH too low likewise may cause an increase in the instances of queue underflow and thus increased risk of excessive UL transmission padding, and setting this threshold too high can result in higher-priority UL data delay risks as a greater overall amount of low-priority data would be permitted to be enqueued in the input queue 124. Accordingly, in at least one embodiment, the flow control module 126 characterizes the current UL transmission environment through monitoring of the use of padding in the UL transmission (represented by padding ratio signal 132 or “δ”), and dynamically adjusts one or both of the upper threshold or the lower threshold based on the monitored padding use. The processes of monitoring the current padding usage and dynamically adjusting one or more criteria, such as dynamically adjusting one or both of VHIGH or VLOW, for the input queue 124 accordingly is described in greater detail below with reference to
The application processor 202 executes executable instructions from a software stack that includes an operating system (OS) 230 and one or more user software applications, such as user software application 232, and which further can include protocol stacks executed by the baseband processor 214 of the RF modem(s) 206. The OS 230, through manipulation of the application processor 202, manages the general operation of the various hardware components of the UE 102 as well as supports the execution of the one or more user software applications, with the executable instructions representing the OS 230 and the user software application typically accessed from system memory 204 for execution by the application processor 202.
The modules of the OS 230 thus include a cellular telephony module 236 for controlling or facilitating the higher-level cellular-related operations of the UE 102, including subscriber identity management, initiation, control, and tear-down of cellular connections (including SIP messaging and RRC messaging), authentication, interfacing between cellular connections and the user software applications, and the like. Further, the memory 216 of the RF modem 206 stores one or more protocol stacks for a corresponding cellular standard, including the protocol stack 120 of
As explained with reference to
In at least one embodiment, the current padding rate is represented by a padding ratio (δ) 310 (one embodiment of padding ratio signal 132,
As explained above, setting VLOW too low or VHIGH too low for the current transmission conditions risks frequent underflow of the input queue 124, and thus triggers the frequent use of data padding as a result of actual data being unavailable from the input queue 124 for transmission. However, setting VLOW too high or VHIGH too high for the current transmission conditions risks enqueuing more low-priority data ahead of high-priority data, and thus risks an excessive delay in the dequeuing and transmission of the high-priority data from the input queue 124. Thus, with the threshold adaptation sub-process 402, the threshold manager 304 seeks to determine suitable values for the criteria represented by VLOW and VHIGH that balance efficient loading of the transport blocks and the timely communication of high-priority data. Thus, at block 406, the threshold manager 304 determines the current padding ratio δ (e.g., padding ratio 310,
At block 408, the threshold manager 304 uses the current padding ratio δ determined at block 406 to update one or both of the threshold criteria (e.g., thresholds VLOW or VHIGH). In some embodiments, the threshold manager 304 employs an algorithm based on certain predetermined factors, such as an expected padding ratio ε (that is, a padding ratio that is selected as a target padding ratio), a minimum padding ratio threshold θ (which is selected to indicate that insufficient data is being maintained in the input queue 124), and the aforementioned current padding ratio δ. In other embodiments, a preset amount in the form of a smoothing delta Δ also can be specified to facilitate incremental increases and decreases in the threshold criteria so as to smooth out any rapid changes.
As an example, VLOW can be dynamically calculated using the expressions:
VLOW(t+1) = UL_TP(t+1) × TLOW(t+1), and

VHIGH(t+1) = k × VLOW(t+1)
where UL_TP(t+1) represents the UL throughput expected for the upcoming interval, TLOW(t+1) represents a target queueing duration for the input queue 124, and k represents a scaling factor (k > 1) relating the upper threshold to the lower threshold.
To set initial values for VLOW and VHIGH (that is, VLOW(0) and VHIGH(0) at time t=0), TLOW(0) can be set to, for example, 1-2 ms to ensure sufficient data enqueuing to avoid excessive padding at the outset.
Thereafter, the value of TLOW(t+1) can be set based on a comparison of the current padding ratio δ to the expected, or target, padding ratio ε and minimum padding ratio threshold θ as follows:
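Since the specific comparison rules are not reproduced above, the following sketch shows one plausible form such an update could take: the queueing-time target TLOW grows by the smoothing delta Δ when the current padding ratio δ exceeds the target ε (the queue is draining too often), and shrinks by Δ when δ falls below the minimum θ (the queue is persistently holding too much low-priority data). The rule details, parameter names, and units (TLOW in ms, throughput in bytes/ms) are assumptions for illustration, not the disclosed algorithm.

```python
def update_thresholds(t_low_ms, delta_ms, pad_ratio, target_eps,
                      min_theta, ul_tp_bytes_per_ms, k):
    """One plausible (assumed) adaptation step for the threshold criteria."""
    if pad_ratio > target_eps:
        # too much padding: queue underflowing, so admit more data
        t_low_ms = t_low_ms + delta_ms
    elif pad_ratio < min_theta:
        # very little padding: queue persistently full, so admit less data
        t_low_ms = max(delta_ms, t_low_ms - delta_ms)
    v_low = ul_tp_bytes_per_ms * t_low_ms  # V_LOW(t+1) = UL_TP(t+1) x T_LOW(t+1)
    v_high = k * v_low                     # V_HIGH(t+1) = k x V_LOW(t+1)
    return t_low_ms, v_low, v_high
```

For example, with TLOW = 2 ms, Δ = 0.5 ms, δ = 0.30 against ε = 0.10, and a throughput of 1000 bytes/ms with k = 2, the target grows to 2.5 ms, giving VLOW = 2500 bytes and VHIGH = 5000 bytes.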
Concurrent with the threshold manager 304 dynamically updating the threshold criteria in sub-process 402, at sub-process 404 the fill manager 302 is managing the enqueuing of data into the input queue 124. As explained above, the fill manager 302 permits high-priority data to be enqueued without regard to the threshold criteria, with the only limitation on enqueuing the high-priority data being the current inability for the input queue 124 to store any more data (that is, the input queue 124 is full). However, the fill manager 302 implements selective enqueuing of low-priority data based on the threshold criteria.
Accordingly, at block 410, the fill manager 302 determines the total volume (or amount) VTOTAL of data, both high-priority and low-priority, enqueued in the input queue 124. At block 412, this total volume of data VTOTAL is compared to one or both of the threshold criteria VLOW and VHIGH. In at least one embodiment, the fill manager 302 employs a hysteresis-type approach to control the enqueuing of low-priority data. In this approach, as represented by block 414, the fill manager 302 temporarily prevents any further enqueuing of low-priority data once the total volume VTOTAL meets the threshold VHIGH (that is, VTOTAL≥VHIGH) and, as represented by block 416, the fill manager 302 subsequently permits enqueuing of low-priority data to resume once the total volume of data VTOTAL has fallen to meet the threshold VLOW (that is, VTOTAL≤VLOW). As represented by block 418, when VTOTAL is between VLOW and VHIGH (that is, VLOW<VTOTAL<VHIGH), the fill manager 302 continues with whatever enqueuing permission state is currently in place for the low-priority data. Thus, if the fill manager 302 resumed enqueuing of low-priority data once VTOTAL fell to VLOW (block 416), then the fill manager 302 continues to permit enqueuing of low-priority data (block 418) until VTOTAL meets VHIGH, at which point the fill manager 302 ceases enqueuing of low-priority data (block 414) and continues to prevent enqueuing of low-priority data (block 418) until VTOTAL once again falls to VLOW, at which point enqueuing of low-priority data is resumed again.
With this approach, the adaptive flow control process described herein balances overall data throughput with responsiveness to high-priority data through selective enqueuing of low-priority data based on monitoring of criteria relative to current input queue fullness and current data padding statistics, and the dynamic modification of such criteria in view of current uplink characteristics.
In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.
A computer readable storage medium may include any storage medium, or combination of storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media can include, but is not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).
Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.
| Number | Date | Country
---|---|---|---
Parent | PCT/US2022/046829 | Oct 2022 | US
Child | 18495301 | | US