Uplink bandwidth estimation over broadband cellular networks

Information

  • Patent Grant
  • 11864020
  • Patent Number
    11,864,020
  • Date Filed
    Tuesday, September 8, 2020
  • Date Issued
    Tuesday, January 2, 2024
Abstract
Disclosed are methods, systems and non-transitory computer readable mediums for estimating bandwidth over packet data networks, for example, 5G networks. The methods, systems and non-transitory computer readable mediums can include modifying a buffer status report (e.g., via an application programming interface) and reporting, to an eNodeB, the modified buffer status report. The methods, systems and non-transitory computer readable mediums can also include calculating the throughput required to transmit a data amount stored at a regular buffer, receiving, from the eNodeB, uplink grants, and transmitting data from the regular buffer. The methods, systems and non-transitory computer readable mediums can also include calculating estimated throughput from the user equipment, determining whether the estimated throughput services the data amount stored at the regular buffer, and, in response to the estimated throughput being insufficient to service the data amount stored at the regular buffer, determining whether a counter is less than a threshold value.
Description
TECHNICAL FIELD

The present technology pertains to packet data networks, and more specifically to estimating uplink bandwidth for user equipment over 5G networks.


BACKGROUND

Fifth generation (5G) mobile and wireless networks will provide enhanced mobile broadband communications and are intended to deliver a wider range of services and applications as compared to prior generation mobile and wireless networks. Compared to prior generations of mobile and wireless networks, the 5G architecture is service based, meaning that wherever suitable, architecture elements are defined as network functions that offer their services to other network functions via common framework interfaces. In order to support this wide range of services and network functions across an ever-growing base of user equipment (UE), 5G networks extend the network slicing concept utilized in previous generation architectures.


Within the scope of the 5G mobile and wireless network architecture, resources are shared between a number of subscribers (e.g., UE). As a result, the overall bandwidth available to subscribers is shared based on one or more parameters (e.g., channel conditions, network congestion, signal to noise ratio, resource availability at the evolved node B (eNodeB)). Consequently, even though the theoretical maximum throughput that the UE can support is known, it is difficult to estimate or predict the amount of throughput any specific UE can actually achieve over the network. As a result, upper layer protocols (e.g., TCP/IP, UDP, etc.) cannot make accurate decisions for traffic over cellular interfaces (e.g., modem).





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 illustrates an example 5G network environment in which one or more aspects of the present disclosure may operate;



FIG. 2 illustrates an example block diagram of user equipment according to one or more aspects of the present disclosure;



FIG. 3 illustrates an example method for estimating bandwidth according to one or more aspects of the present disclosure;



FIG. 4 illustrates an example network device upon which one or more aspects of the present disclosure may be provided; and



FIG. 5 illustrates an example computing system architecture upon which one or more aspects of the present disclosure may be provided.





DESCRIPTION OF EXAMPLE EMBODIMENTS

Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting the scope of the embodiments described herein. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one embodiment or an embodiment in the present disclosure can be references to the same embodiment or to any embodiment, and such references mean at least one of the embodiments.


Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others.


The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification.


Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.


Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.


OVERVIEW

Disclosed are methods, systems and non-transitory computer readable mediums for estimating bandwidth over packet data networks, for example, 5G networks. The methods, systems and non-transitory computer readable mediums can include modifying a buffer status report (e.g., via an application programming interface) and reporting, to an eNodeB, the modified buffer status report. The methods, systems and non-transitory computer readable mediums can also include calculating the throughput required to transmit a data amount stored at a regular buffer, receiving, from the eNodeB, uplink grants, and transmitting data from the regular buffer. The methods, systems and non-transitory computer readable mediums can also include calculating estimated throughput from the user equipment, determining whether the estimated throughput services the data amount stored at the regular buffer, and, in response to the estimated throughput being insufficient to service the data amount stored at the regular buffer, determining whether a counter is less than a threshold value (e.g., 5, configured by a user, etc.). When the counter is less than the threshold value, the methods, systems and non-transitory computer readable mediums can include recording the estimated throughput for the counter, incrementing the counter, reporting a padding amount, and padding a bandwidth estimation buffer with the padding amount.


The methods, systems and non-transitory computer readable mediums can also include modifying the buffer status report by determining an amount of data to be sent from a regular buffer, determining a padding amount from the bandwidth estimation buffer, combining the amount of data to be sent from the regular buffer and the padding amount and modifying the buffer status report with the combined amount.


The methods, systems and non-transitory computer readable mediums can also include, in response to the estimated throughput being sufficient to service the data amount stored at the regular buffer, determining maximum throughput calculated over one or more packet data networks from the user equipment and determining the padding amount by subtracting the estimated throughput from the maximum throughput.


The methods, systems and non-transitory computer readable mediums can also include, when the counter is equal to or greater than the threshold value, calculating an average throughput over one or more estimated throughput values, reporting the average estimated throughput value and emptying the bandwidth estimation buffer. In some examples, the emptying comprises zeroing out the bandwidth estimation buffer.


EXAMPLE EMBODIMENTS

The disclosed technology addresses the need in the art for estimating available bandwidth of user equipment in a 5G network. Disclosed are systems, methods, and computer-readable storage media for estimating available bandwidth by manipulating status reports of a regular buffer (RB) by utilizing a bandwidth estimation buffer (BEB) based on maximum and estimated throughput. A description of network computing environments and architectures, as illustrated in FIG. 1, is first disclosed herein. A discussion of user equipment, as illustrated in FIG. 2, will then follow. A discussion of estimating the overall bandwidth, as illustrated in FIG. 3, will then follow. The discussion then concludes with a description of example devices, as illustrated in FIGS. 4 and 5. These variations shall be described herein as the various embodiments are set forth. The disclosure now turns to FIG. 1.



FIG. 1 depicts an example representation of a network environment 100 in which one or more aspects of the present disclosure may operate. Network 104 can be a broadband cellular network, for example, 5G, 4G, LTE, etc. While not limiting, examples herein will discuss 5G networks. Network 104 can include one or more Evolved Node Bs (eNodeBs) 102A, . . . 102N (collectively “102”). eNodeB 102 can be network hardware connected to network 104 that communicates directly and wirelessly through packet data network (PDN) 112 with user equipment (UE) 110A, . . . 110N (collectively “110”). For example, user equipment can include any component, element or object capable of transferring and receiving voice or data with an eNodeB of the network. In some examples, user equipment can be a router, a switch, a mobile telephone, etc. Data and/or information, as used herein, refers to any type of numeric, voice, video, media, or script data, or any type of source or object code, or any other suitable information in any appropriate format that may be communicated from one point to another. In some examples, eNodeB 102 can recognize a cellular interface (e.g., modem) of UE 110 initiating communication with network 104. UE 110 can transmit and/or receive data through modem 106 (e.g., I/O interface, cellular interface, etc.). UE 110 can also include processor 108 and memory 110 configured to store and execute instructions for the operation of UE 110 and for transmitting and/or receiving data through modem 106. Further details of UE 110 are shown in FIGS. 4 and 5.



FIG. 2 depicts an example representation of user equipment 110 in which one or more aspects of the present disclosure may operate. Modem 106 of UE 110 can include transmitter 240 for sending data via PDN 112 to eNodeB 102 and receiver 242 for receiving data via PDN 112 from eNodeB 102. Transmitter 240 can transmit as much data as the uplink grants allocated by eNodeB 102 allow. That is, eNodeB 102 can allocate uplink grants of a specific amount of throughput that UE 110 can transmit at any given time. Transmitter 240 can also include RB 204 and BEB 232. RB 204 can store data to be transmitted to eNodeB 102. For example, modem 106 transmits, to eNodeB 102, a Buffer Status Report (BSR). The BSR can indicate the amount of data stored in RB 204 that needs to be transmitted to eNodeB 102. eNodeB 102 can then allocate uplink (UL) grants to enable transmitter 240 to transmit the data stored in RB 204 to eNodeB 102. In some situations, it is difficult for the UE to estimate how much throughput can be achieved. BEB 232, as shown in FIG. 3, can store padded data (e.g., data that will not be sent to an eNodeB), which can be used to alter the BSR in order to determine the maximum throughput on any specific eNodeB.
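
For purposes of illustration only, the following Python sketch models the transmitter-side state described above (RB 204, BEB 232, and the BSR derived from them). The class, field and function names are hypothetical and are not part of any disclosed modem interface.

    from dataclasses import dataclass


    @dataclass
    class TransmitterState:
        """Illustrative model of transmitter 240: a regular buffer (RB) and a
        bandwidth estimation buffer (BEB), from which the BSR is derived."""
        rb_bytes: int = 0           # real data queued in RB 204, awaiting UL grants
        beb_padding_bytes: int = 0  # padding tracked in BEB 232; never actually sent

        def buffer_status_report(self) -> int:
            # The BSR reports how much data the modem claims is waiting. With no
            # BEB padding it is simply the RB occupancy; with padding it is
            # inflated, prompting the eNodeB to allocate larger uplink grants.
            return self.rb_bytes + self.beb_padding_bytes


    state = TransmitterState(rb_bytes=120_000)   # 120 kB of real traffic
    print(state.buffer_status_report())          # 120000 -> unmodified BSR
    state.beb_padding_bytes = 80_000             # pad BEB 232 for estimation
    print(state.buffer_status_report())          # 200000 -> modified BSR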


The disclosure now turns to FIG. 3. The method shown in FIG. 3 is provided by way of example, as there are a variety of ways to carry out the method. Additionally, while the example method is illustrated with a particular order of blocks, those of ordinary skill in the art will appreciate that FIG. 3 and the blocks shown therein can be executed in any order that accomplishes the technical advantages of the present disclosure and can include fewer or more blocks than illustrated.


Each block shown in FIG. 3 represents one or more processes, methods or subroutines, carried out in the example method. The blocks shown in FIG. 3 can be implemented in a network environment such as network environment 100 shown in FIGS. 1 and 2. The flow chart illustrated in FIG. 3 will be described in relation to and make reference to at least the elements of network 100 shown in FIGS. 1 and 2.


Method 300 can begin at block 302. At block 302, an estimation of bandwidth can start (e.g., be initiated). Upon initiation, and on a first run through of method 300, a data amount in regular buffer (RB) 304 is determined. On the first run through of method 300, any data in BEB 332, if present, is considered stale and not used. At block 306, the buffer status report (BSR) is modified. The BSR can indicate how much data is stored in RB 304 that modem 106 has to send to eNodeB 102. On the first run through of method 300, the BSR is not modified. On subsequent run throughs, the BSR can be modified to reflect the amount of data in RB 304 and BEB 332. The BSR can be modified by, for example, one or more application programming interfaces (APIs). For example, the API can be utilized to modify the BSR to indicate RB 304 is “full” or “over capacity” (i.e., data waiting to be read into RB 304). By reporting RB 304 as full, eNodeB 102 can provide the maximum available uplink grants for transmitting data in RB 304 (e.g., throughput). When RB 304 is not full, eNodeB 102 can provide uplink grants (e.g., fewer uplink grants than when RB 304 is full) to send the data stored in RB 304.
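
A minimal sketch of the BSR modification at block 306 is given below, assuming amounts are expressed in bytes and that an API exposes a function of roughly this shape; the function name and parameters are illustrative assumptions, not an actual modem API.

    def modify_bsr(rb_bytes: int, beb_padding_bytes: int, first_run: bool) -> int:
        """Return the buffered-data amount to report in the BSR (block 306).

        On the first pass the BSR is left unmodified and reflects only the data
        actually queued in RB 304. On later passes the BEB padding is added so
        the report indicates more data than is really present, nudging the
        eNodeB toward granting the maximum available uplink capacity."""
        if first_run:
            return rb_bytes
        return rb_bytes + beb_padding_bytes


    # First iteration: report only the real RB occupancy.
    print(modify_bsr(rb_bytes=120_000, beb_padding_bytes=0, first_run=True))        # 120000
    # Subsequent iteration: report RB data plus BEB padding.
    print(modify_bsr(rb_bytes=120_000, beb_padding_bytes=80_000, first_run=False))  # 200000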


At block 308, the BSR can be reported to eNodeB 102. In some examples, the eNodeB can receive a plurality of BSRs from a plurality of UEs. In providing uplink grants to each UE, the eNodeB can consider the received plurality of BSRs. At block 310, the UE can calculate the throughput required to service the data in RB 304. At block 312, the eNodeB can provide, and modem 106 can wait for and receive, one or more uplink grants. At block 314, upon receiving the uplink grants, modem 106 can transmit, and eNodeB 102 can receive, the data stored in RB 304.
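
Blocks 310 through 314 can be pictured as in the short sketch below. The reporting interval and the representation of uplink grants as a byte count per interval are assumptions made only for this illustration.

    def required_throughput_bps(rb_bytes: int, interval_s: float) -> float:
        """Block 310: throughput needed to drain RB 304 within one interval."""
        return rb_bytes * 8 / interval_s


    def transmit(rb_bytes: int, granted_bytes: int) -> tuple[int, int]:
        """Block 314: send as much RB data as the received uplink grants allow.

        Returns (bytes_sent, bytes_remaining_in_rb)."""
        sent = min(rb_bytes, granted_bytes)
        return sent, rb_bytes - sent


    print(required_throughput_bps(rb_bytes=625_000, interval_s=0.1))  # 50000000.0 -> 50 Mbps
    print(transmit(rb_bytes=625_000, granted_bytes=375_000))          # (375000, 250000)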


At block 316, estimated throughput (ET) can be calculated. ET can be calculated based on the uplink grants provided by the eNodeB. In some examples, the ET can be calculated from the modulation scheme and the number of physical resource blocks that are assigned (e.g., per 3GPP TS 36.213). At block 318, a determination can be made as to whether the ET can service (e.g., is sufficient to transmit) the data currently stored (and/or all of the data that could be stored) in RB 304. In situations where BEB padding exists, the determination can include the combination of the data in RB 304 and the BEB padding stored in BEB 332. For example, if UL grants of 50 Mbps are required to service the data stored in RB 304, but the calculated ET is 30 Mbps, then the UE knows that the most bandwidth the eNodeB can provide at this instant is less than the UE requires. This value is recorded (block 326). In other examples, if UL grants of 30 Mbps are required to service the data stored in RB 304, but the calculated ET is 30 Mbps, then the modem could request more bandwidth (i.e., since the eNodeB can potentially provide the UE more grants). This can be calculated into the BSR (e.g., BEB padding).
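
The ET calculation at block 316 and the determination at block 318 might look like the sketch below. A real implementation would derive the per-TTI grant sizes from the modulation scheme and the number of assigned physical resource blocks; here the grant sizes are simply given as inputs, and a 1 ms TTI is assumed for illustration.

    def estimated_throughput_bps(grant_bytes_per_tti: list[int], tti_s: float = 0.001) -> float:
        """Block 316: estimate throughput from the uplink grants actually received.

        Each list entry is the number of bytes granted in one TTI."""
        window_s = len(grant_bytes_per_tti) * tti_s
        return sum(grant_bytes_per_tti) * 8 / window_s


    def grants_sufficient(estimated_bps: float, required_bps: float) -> bool:
        """Block 318: can the ET service the data in RB 304 (plus any reported BEB padding)?"""
        return estimated_bps >= required_bps


    et = estimated_throughput_bps([3_750] * 100)     # 3750 bytes/ms -> 30 Mbps
    print(et)                                        # 30000000.0
    print(grants_sufficient(et, required_bps=50e6))  # False -> record ET (block 326)
    print(grants_sufficient(et, required_bps=30e6))  # True  -> compute BEB padding (block 322)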


In some examples, when the traffic (e.g., data) being transmitted over the PDN is of high volume, the BEB will always be empty, since the UL grants being received at the modem will be equal to or less than the total required by the modem to service the data requests in the RB. As such, averaging the received UL grants (block 334) will provide an estimation of the bandwidth, which can be provided to the UE and other upper layer protocols. When the traffic (e.g., data) being transmitted over the PDN is low, the UL grants will be enough to service 100% of the RB. The BEB can then aid in modifying the reported BSR to a higher value than the data stored in the RB. The higher BSR value can provide an estimation of the possible throughput achievable over the uplink (e.g., the modem sends the data from the RB over the UL grants).


When the ET cannot service 100% of the RB, method 300 can proceed to block 324. When the ET can service 100% of the RB, method 300 can proceed to block 320.


At block 320, the maximum possible throughput (MT) is determined for the one or more PDNs from modem 106 to the eNodeB. For example, the MT can be determined based on the capabilities of the modem (e.g., modem category, such as CAT4, CAT5, etc.). At block 322, the BEB padding is calculated by subtracting the ET from the MT.
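
A sketch of blocks 320 and 322 follows, assuming the MT is looked up from the modem category and that the padding is expressed as a byte amount for one reporting interval; the category-to-rate mapping and all numeric values are illustrative assumptions only.

    # Illustrative uplink peak rates per modem category (example values only).
    MAX_UPLINK_BPS = {
        "CAT4": 50e6,
        "CAT5": 75e6,
    }


    def beb_padding_bytes(modem_category: str, estimated_bps: float, interval_s: float) -> int:
        """Blocks 320-322: padding = (maximum throughput - estimated throughput),
        converted to a byte amount for one reporting interval."""
        max_bps = MAX_UPLINK_BPS[modem_category]
        spare_bps = max(0.0, max_bps - estimated_bps)
        return int(spare_bps * interval_s / 8)


    # An ET of 30 Mbps on a CAT4 modem leaves 20 Mbps unprobed; pad BEB 332 accordingly.
    print(beb_padding_bytes("CAT4", estimated_bps=30e6, interval_s=0.1))  # 250000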


At block 324, a determination is made as to whether the BEB counter is less than a threshold amount. The threshold amount can be a number of times that method 300 has been iterated. On the initial run of method 300, BEB counter can be 0. On subsequent runs, the BEB counter will increment. When the BEB counter reaches a threshold amount, the BEB can be emptied and the counter reset (as shown in block 338). While not limiting, in some examples, a threshold amount can be 5. In other examples, the threshold amount can be configurable by a user or automatically configured based on historical data, including historical iterations of method 300.


When the BEB counter is less than the threshold, method 300 can proceed to block 326. At block 326, the ET for the current BEB counter is recorded. In one example, the ET value can be stored in a local memory of the UE, along with the BEB counter value. For example, for BEB counter value 1, the ET could be 74.6 Mbps; for BEB counter value 2, the ET could be 90.0 Mbps; etc. At block 328, the BEB counter is incremented and the BEB padding is reported. At block 330, the BEB is either padded or emptied. When the BEB counter is less than the threshold, BEB 332 is padded and another iteration of method 300 is initiated.
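
Blocks 326 through 330 amount to the bookkeeping sketched below; the record structure, threshold, and numeric values are assumptions chosen to be consistent with the examples given in the text.

    BEB_THRESHOLD = 5                  # example threshold value from the text
    et_records: dict[int, float] = {}  # block 326: ET recorded per BEB counter value


    def record_and_pad(counter: int, estimated_bps: float, padding_bytes: int):
        """Blocks 326-330: record the ET for this counter value, increment the
        counter, report the padding amount, and pad the BEB for the next pass."""
        et_records[counter] = estimated_bps   # block 326: record ET
        counter += 1                          # block 328: increment BEB counter
        reported_padding = padding_bytes      # block 328: report BEB padding
        beb_padding = padding_bytes           # block 330: pad BEB 332
        return counter, reported_padding, beb_padding


    counter, reported, beb = record_and_pad(counter=1, estimated_bps=74.6e6, padding_bytes=250_000)
    print(counter, reported, beb, et_records)  # 2 250000 250000 {1: 74600000.0}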


Upon further iterations, the BEB padding (block 332) and the data in RB 304 can be used in combination to modify the BSR, at block 306, via the API. For example, the combination can be used to modify the BSR to indicate that more data is stored in RB 304 than is actually present. This indication, of the combination of data (e.g., actual data and padding) waiting to be sent to an eNodeB, can notify modem 106 of the actual available bandwidth at that specific time (e.g., received UL grants based on the additional padded data and actual data). That is, the actual available bandwidth, at that specific time, can be the uplink grants from the eNodeB, which are based on the combination of the data in RB 304 and the BEB padding in BEB 332. Method 300 can continue as described above until, at block 324, the BEB counter is equal to or greater than the threshold amount. When the BEB counter is equal to or greater than the threshold amount, method 300 can proceed to block 334.


At block 334, the average throughput over the recorded ET values is calculated. For example, the average of the ET values (e.g., stored at block 326) can represent the estimated bandwidth over the cellular network (e.g., PDN). In one example, the ET values (for each BEB counter value) can be averaged to calculate the average throughput. At block 336, the average throughput over the recorded ET values (e.g., the average ET) is reported (e.g., to UE 110 for use by the upper layer protocols). At block 338, the BEB is emptied and the BEB counter can be reset to zero. For example, the BEB can be zeroed out. Method 300 can then restart the estimation process. In some examples, the method can be repeated at predetermined intervals, on timers, when configured by a user, on demand, etc.
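
Tying the blocks together, a simplified end-to-end sketch of method 300 is shown below. The grant model (a callable that returns the bytes granted for a reported BSR amount), the reporting interval, the modem maximum and the threshold are all hypothetical; the control flow mirrors blocks 302-338 as described above, with the emptying of the BEB and the counter reset at block 338 left implicit in the return.

    from statistics import mean
    from typing import Callable

    INTERVAL_S = 0.1       # assumed reporting interval
    MAX_UPLINK_BPS = 50e6  # assumed modem maximum throughput (MT)
    THRESHOLD = 5          # BEB counter threshold (example value from the text)


    def estimate_bandwidth(rb_bytes: int, request_grants: Callable[[int], int]) -> float:
        """Simplified method 300: return the average estimated throughput in bps.

        `request_grants` stands in for blocks 308 and 312: given a reported BSR
        amount in bytes, it returns the bytes the eNodeB grants for one interval."""
        beb_padding = 0              # BEB 332 starts empty (stale data discarded)
        counter = 0                  # BEB counter
        recorded_et: list[float] = []

        while counter < THRESHOLD:                 # block 324
            bsr = rb_bytes + beb_padding           # block 306: modified BSR
            granted = request_grants(bsr)          # blocks 308, 312: report BSR, receive grants
            # Block 314: the RB data (up to `granted` bytes) would be transmitted here.
            required = bsr * 8 / INTERVAL_S        # block 310: required throughput
            et = granted * 8 / INTERVAL_S          # block 316: estimated throughput
            if et >= required:                     # block 318: ET services the RB
                # Blocks 320-322: pad the BEB with the unprobed headroom.
                beb_padding = int(max(0.0, MAX_UPLINK_BPS - et) * INTERVAL_S / 8)
            recorded_et.append(et)                 # block 326: record ET
            counter += 1                           # block 328: increment counter

        return mean(recorded_et)                   # blocks 334-336: average and report


    def toy_enodeb(bsr_bytes: int) -> int:
        # Grants at most 30 Mbps worth of bytes per interval, never more than requested.
        return min(bsr_bytes, int(30e6 * INTERVAL_S / 8))


    print(estimate_bandwidth(rb_bytes=125_000, request_grants=toy_enodeb) / 1e6, "Mbps")

With this toy eNodeB capped at 30 Mbps, the first unmodified pass records only the 10 Mbps actually needed, while the padded passes converge to the 30 Mbps cap, which illustrates why several padded iterations are averaged before the estimate is reported.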



FIG. 4 depicts an example network device upon which one or more aspects of the present disclosure can be implemented. Although the system shown in FIG. 4 is one specific network device of the present disclosure, it is by no means the only network device architecture on which the concepts herein can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc., can be used. Further, other types of interfaces and media could also be used with the network device 400.


Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory 406) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory 406 could also hold various software containers and virtualized execution environments and data.


The network device 400 can also include an application-specific integrated circuit (ASIC), which can be configured to perform routing, switching, and/or other operations. The ASIC can communicate with other components in the network device 400 via the connection 410, to exchange data and signals and coordinate various types of operations by the network device 400, such as routing, switching, and/or data storage operations, for example.



FIG. 5 illustrates an example computing system architecture 500 including components in electrical communication with each other using a connection 505, such as a bus, upon which one or more aspects of the present disclosure can be implemented. System 500 includes a processing unit (CPU or processor) 510 and a system connection 505 that couples various system components including the system memory 515, such as read only memory (ROM) 520 and random access memory (RAM) 525, to the processor 510. The system 500 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 510. The system 500 can copy data from the memory 515 and/or the storage device 530 to the cache 512 for quick access by the processor 510. In this way, the cache can provide a performance boost that avoids processor 510 delays while waiting for data. These and other modules can control or be configured to control the processor 510 to perform various actions. Other system memory 515 may be available for use as well. The memory 515 can include multiple different types of memory with different performance characteristics. The processor 510 can include any general purpose processor and a hardware or software service, such as service 1 532, service 2 534, and service 3 536 stored in storage device 530, configured to control the processor 510 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 510 may be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.


To enable user interaction with the computing device 500, an input device 545 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 535 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 500. The communications interface 540 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.


Storage device 530 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 525, read only memory (ROM) 520, and hybrids thereof.


The storage device 530 can include services 532, 534, 536 for controlling the processor 510. Other hardware or software modules are contemplated. The storage device 530 can be connected to the system connection 505. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 510, connection 505, output device 535, and so forth, to carry out the function.


For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.


In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.


Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.


Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.


The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.


Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.

Claims
  • 1. A system comprising: at least one processor; and at least one memory storing instructions, which when executed by the at least one processor, cause the at least one processor to perform operations comprising: determine a data amount in a first buffer; calculate an estimated throughput needed to satisfy the data amount in the first buffer; determine whether the estimated throughput is sufficient to service the data stored at the first buffer; determine, in response to the throughput being determined to be insufficient to service the data, whether a counter value for a counter is less than a threshold value; empty, in response to the counter value being above the threshold, a second buffer; in response to the counter value being less than the threshold value: pad the second buffer with a padding amount; incrementing the counter value; and iteratively repeating the operations by returning to the calculating with the incremented counter value.
  • 2. The system of claim 1, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to: when the counter value is less than the threshold value, report the padding amount and increment the counter.
  • 3. The system of claim 1, wherein the at least one memory comprising further instructions, which when executed by the at least one processor, causes the at least one processor to: when the throughput is sufficient to service the data stored in the first buffer, determine maximum throughput calculated over one or more packet data networks from user equipment, and determine the padding amount by subtracting the throughput from the maximum throughput.
  • 4. The system of claim 1, wherein the at least one memory comprising further instructions, which when executed by the at least one processor, causes the at least one processor to: when the counter is equal to or greater than the threshold value, calculate an average throughput over one or more estimated throughput values, report the average throughput, empty the second buffer, and reset the second buffer.
  • 5. The system of claim 1, wherein the at least one memory comprising further instructions, which when executed by the at least one processor, causes the at least one processor to: empty the second buffer and zero out the second buffer.
  • 6. The system of claim 1, wherein the threshold value is 5.
  • 7. A method comprising: determining a data amount in a first buffer; calculating an estimated throughput needed to satisfy the data amount in the first buffer; determining whether the estimated throughput is sufficient to service the data stored at the first buffer; determining, in response to the throughput being determined to be insufficient to service the data, whether a counter value for a counter is less than a threshold value; emptying, in response to the counter value being above the threshold, a second buffer; in response to the counter value being less than the threshold value: padding the second buffer with a padding amount; incrementing the counter value; and iteratively repeating the method by returning to the calculating with the incremented counter value.
  • 8. The method of claim 7, further comprising: when the counter value is less than the threshold value, reporting the padding amount and incrementing the counter.
  • 9. The method of claim 7, further comprising: when the throughput is sufficient to service the data, determining maximum throughput calculated over one or more packet data networks from user equipment, and determining the padding amount by subtracting the throughput from the maximum throughput.
  • 10. The method of claim 7, further comprising: when the counter is equal to or greater than the threshold value, calculating an average throughput over one or more estimated throughput values, reporting the average throughput, emptying the second buffer, and resetting the second buffer.
  • 11. The method of claim 10, further comprising: emptying the second buffer and zeroing out the second buffer.
  • 12. The method of claim 7, wherein the threshold value is 5.
  • 13. A non-transitory computer readable medium, for estimating throughput from user equipment to an eNodeB, storing instructions which when executed by at least one processor, cause the at least one processor to perform operations comprising: determine a data amount in a first buffer; calculate an estimated throughput needed to satisfy the data amount in the first buffer; determine whether the estimated throughput is sufficient to service the data stored at the first buffer; determine, in response to the throughput being determined to be insufficient to service the data, whether a counter value for a counter is less than a threshold value; empty, in response to the counter value being above the threshold, a second buffer; in response to the counter value being less than the threshold value: pad the second buffer with a padding amount; incrementing the counter value; and iteratively repeating the method by returning to the calculating with the incremented counter value.
  • 14. The non-transitory computer readable medium of claim 13, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to: when the counter value is less than the threshold value, report the padding amount and increment the counter.
  • 15. The non-transitory computer readable medium of claim 13, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to: when the throughput is sufficient to service the data, determine maximum throughput calculated over one or more packet data networks from the user equipment, and determine the padding amount by subtracting the throughput from the maximum throughput.
  • 16. The non-transitory computer readable medium of claim 13, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to: when the counter is equal to or greater than the threshold value, calculate an average throughput over one or more estimated throughput values, report the average throughput, empty the second buffer, and reset the second buffer.
  • 17. The non-transitory computer readable medium of claim 16, wherein the instructions, when executed by the at least one processor, further cause the at least one processor to: empty the second buffer includes zero out the second buffer.
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 16/123,830 filed on Sep. 6, 2018, the contents of which are incorporated by reference in their entirety.

Related Publications (1)
Number Date Country
20200404535 A1 Dec 2020 US
Continuations (1)
Number Date Country
Parent 16123830 Sep 2018 US
Child 17014647 US