Embodiments of the present disclosure relate to apparatus and methods that may be used to communicate data according to a schedule.
Wireless communication systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts. Various wireless communication systems rely on scheduled communication of data. For example, in a fifth generation (5G) communication system, an access node may schedule transmission by one or more user equipment devices. The user equipment devices may be responsible for communicating data according to the schedule.
Embodiments of methods and apparatus that may be used to communicate data according to a schedule are disclosed herein.
In one example, an apparatus can include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to obtain a plurality of packets to be transmitted via uplink. The at least one memory and the computer program code can also be configured to, with the at least one processor, cause the apparatus at least to queue the plurality of packets according to logical channel prioritization. The at least one memory and the computer program code can further be configured to, with the at least one processor, cause the apparatus at least to receive a service grant after the queueing. The at least one memory and the computer program code can additionally be configured to, with the at least one processor, cause the apparatus at least to trim the plurality of packets according to a grant size of the service grant.
In another example, a method for data transmission scheduling can include obtaining, by at least one processor, a plurality of packets to be transmitted via uplink. The method can also include queueing, by the at least one processor, the plurality of packets according to logical channel prioritization. The method can further include receiving, by the at least one processor, a service grant after the queueing. The method can additionally include trimming, by the at least one processor, the plurality of packets according to a grant size of the service grant.
In still another example, a non-transitory computer-readable medium can be encoded with instructions that, when executed by at least one processor, perform a process. The process can include obtaining a plurality of packets to be transmitted via uplink. The process can also include queueing the plurality of packets according to logical channel prioritization. The process can further include receiving a service grant after the queueing. The process can additionally include trimming the plurality of packets according to a grant size of the service grant.
In yet another example, an apparatus can include a packet obtaining module configured to obtain a plurality of packets to be transmitted via uplink. The apparatus can also include a packet queueing module configured to queue the plurality of packets according to logical channel prioritization. The apparatus can further include a grant serving module configured to receive a service grant after the queueing. The apparatus can additionally include a packet trimming module configured to trim the plurality of packets according to a grant size of the received service grant.
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate embodiments of the present disclosure and, together with the description, further serve to explain the principles of the present disclosure and to enable a person skilled in the pertinent art to make and use the present disclosure.
Embodiments of the present disclosure will be described with reference to the accompanying drawings.
Although specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the pertinent art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the present disclosure. It will be apparent to a person skilled in the pertinent art that the present disclosure can also be employed in a variety of other applications.
It is noted that references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “some embodiments,” “certain embodiments,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure or characteristic is described in connection with an embodiment, it would be within the knowledge of a person skilled in the pertinent art to effect such feature, structure or characteristic in connection with other embodiments whether or not explicitly described.
In general, terminology may be understood at least in part from usage in context. For example, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
The techniques described herein may be used for various wireless communication networks such as Long Term Evolution (LTE) system, code division multiple access (CDMA) system, time division multiple access (TDMA) system, frequency division multiple access (FDMA) system, orthogonal frequency division multiple access (OFDMA) system, single-carrier frequency division multiple access (SC-FDMA) system, and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), CDMA2000, etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. CDMA2000 covers IS-2000, IS-95, and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as new radio (NR) (e.g., 5G RAT), Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunication System (UMTS). NR is an emerging wireless communications technology under development in conjunction with the 5G Technology Forum (5GTF). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A, and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). CDMA2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the wireless networks and radio technologies mentioned above as well as other wireless networks and radio technologies.
In the uplink (UL) direction, incoming packet data from an external application processor (AP) or a host (e.g., through universal serial bus (USB) or peripheral component interconnect express (PCIe)) in the form of IP packets from a protocol data unit (PDU) session arrives at the Layer 3 protocol stack. These IP packets are classified into the quality of service (QoS) flows in each data radio bearer (DRB), shown as DRB1, DRB2, and DRB3. Packets in each DRB will be dequeued and processed by the packet data convergence protocol (PDCP) layer. PDCP layer processing includes robust header compression (ROHC) and security functions, such as integrity checking and ciphering. Once the PDCP layer processing is done, the packets are queued into their corresponding Layer 2 (L2) logical channels (LCs), identified as LC0, LC1, LC2, LC3, LC4, LC5, and LC6. In the meantime, modem signaling messages also arrive at the Layer 2 logical channels designated for signaling messages.
At the physical (PHY) layer, at every slot, the physical downlink control channel (PDCCH), which contains the downlink control information (DCI), is decoded. The DCI contains the dynamic grant allocation for dynamic uplink transmission, for a slot transmission at an indicated time. The transmission time offset (K2) may be expressed in terms of slots, or, for 5G low latency applications, K2 may be expressed in symbols and may indicate that the transmission needs to be done in the same slot.
At the MAC layer, once the dynamic grant allocation size is calculated, the modem has to dequeue and gather L2 packets from the logical channels through a logical channel prioritization (LCP) algorithm as specified in the 3GPP standard and compose the MAC protocol data unit (PDU) in a transport block for the PHY layer to send out. There is one such transport block for each component carrier. Hence, packet data is transmitted from the packet data stack to the base station (BS) according to the logical channel prioritization, within the base station-allocated uplink grant size for each slot.
MAC sub-PDU (MacSubPDU) packets can be prepared in L2 logical channel queues after L3 data arrives at the modem. Once a dynamic grant is allocated by a base station and received by the MAC layer, the MAC layer can perform logical channel prioritization to create a MAC PDU with the exact grant size. The packets in the logical channels are extracted with priority accordingly from the logical channel prioritization. After that, the MAC PDU is transferred to the physical layer for transmission.
In another approach, logical channel L2 data within each individual logical channel queue are combined a few packets at a time to a continuous block. However, they are not prepared in a MAC PDU format, because the exact grant allocation size is still not known yet. Once a dynamic grant is allocated by the base station and received by the MAC layer, the MAC layer performs the logical channel prioritization to create the MAC PDU with the exact grant size. The packets in the logical channels are extracted with priority accordingly from the logical channel prioritization. After that, the MAC PDU is transferred to the physical layer for transmission. The assembly of a first transport block corresponding to a component carrier (CC) is shown for CC1, but a similar assembly may occur for each of CC2 and CC3 and so on, as well.
One challenge in MAC transmission is how to gather the MacSubPDU or L2 packets in the MAC PDU in the limited time given, from the time when the exact dynamic grant allocation size is received in the current slot to the transmission time. The transmission time offset can be referred to as K2. If the transmission time offset (K2) from current slot n means that transmission will occur two or more time slots later (i.e., slot n+2 or later), the uplink MAC will have a longer time to prepare the MAC PDU for the scheduled slot transmission. However, if the K2 requires that the transmission is in the next slot (i.e., slot n+1) or the same slot (i.e., slot n), then there may be insufficient time to dequeue with priority, gather the MacSubPDUs with exact grant size, and prepare the MAC PDU for transmission encoding.
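By way of illustration and not by way of limitation, the timing constraint described above can be sketched as follows. The function name, the preparation time, and the slot duration are hypothetical values chosen only to show the relationship between K2 and the time available for MAC PDU preparation:

```python
def can_prepare_in_time(k2_slots, prep_time_us, slot_duration_us=500):
    """Return True if MAC PDU preparation fits before the scheduled slot.

    k2_slots: transmission time offset K2 (0 = same slot, 1 = next slot).
    prep_time_us: estimated time to dequeue, gather, and assemble the MAC PDU.
    slot_duration_us: hypothetical slot length in microseconds.
    """
    available_us = k2_slots * slot_duration_us
    return prep_time_us <= available_us

# With K2 >= 2 there is typically enough time to prepare the MAC PDU;
# with K2 = 0 or K2 = 1, preparation after grant receipt may not fit.
```

This sketch makes concrete why, in certain embodiments, preparation is moved ahead of grant receipt: for same-slot or next-slot scheduling, the available interval may be shorter than the preparation time.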
Since, in certain approaches, the logical channel prioritization is performed only after the exact dynamic grant size is known, it may take a long time to gather the packets from various memory locations from the several logical channel queues, and construct the MAC PDU for physical layer transmission. This time may exceed the K2 interval given before the scheduled transmission time, which may cause a transmission error.
Certain embodiments of the present disclosure can avoid frequent transmission errors due to preparation time exceeding the transmission time offset interval. Furthermore, certain embodiments of the present disclosure can avoid excessive delay in forming MAC PDUs from Layer 2 to the MAC layer to the physical layer. Certain embodiments of the present disclosure can also minimize memory storage needed for L2 packet queues. Furthermore, certain embodiments of the present disclosure can provide efficient data movement among L3, L2, MAC, and physical layers. Additionally, certain embodiments of the present disclosure can reduce power expenditure due to reduced memory storage and decreased data movement.
Certain embodiments of the present disclosure relate to a system configured to prepare at least one MAC PDU as far ahead of time as possible. For example, certain embodiments relate to a system that prepares the MAC PDU before a grant indication from a base station arrives with a dynamically allocated grant size. The grant indication may dictate that the user equipment transmit within a very short time following the receipt of this grant.
Certain embodiments of the present disclosure provide an efficient method for scheduling 5G uplink MAC packets for data transmission. Certain embodiments of the present disclosure can allow effective execution of packet preparation before the transmission deadline, which is suitable for high throughput and low latency packets.
According to one aspect, certain embodiments of the present disclosure prepare MAC PDUs ahead of time. For example, the user equipment or other terminal device may prepare one or more MAC PDUs with logical channel prioritization before receiving a grant indication from a base station or other access node. The grant indication from the access node may have a dynamically allocated grant size. The grant indication may also instruct the terminal device to transmit within a very short time following receipt of the grant. A very short time may be within the same slot or the next slot.
According to another aspect, certain embodiments of the present disclosure relate to trimming a prepared MAC PDU packet list in response to a service grant. A service grant, also referred to as an uplink grant or servicing grant, can be transmitted by an access node to a user equipment, for example, in downlink control information (DCI). A given DCI scheduling message may provide information for scheduling one or more user equipment devices. Various DCI message formats may be used, and these formats may depend on, for example, what channel is being allocated, the type of operation being scheduled, the type of transmission being scheduled, and so on. The 3GPP provides some standardization of DCI, but service grants that comply with other standards, or with no standard, are also permitted. The service grant can include an indicator of when transmission for the user equipment is scheduled, which can be expressed by a value K2, which can be an integer, for example, from zero to thirty-two. In certain cases, the value of zero can be implied by not explicitly indicating a K2 value.
By preparing the MAC PDUs according to logical channel prioritization of upper layer packets early enough, and then at the servicing grant time, trimming the prepared MAC PDU packet list to fit into the exact received grant size for transmission at a slot (e.g., n+K2), the packets can be retrieved and encoded for transmission at the scheduled time efficiently.
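By way of illustration and not by way of limitation, the trimming step described above can be sketched as follows. The function name and the representation of packets as byte lengths are hypothetical simplifications; the sketch only shows how a priority-ordered packet list can be cut to an exact grant size, with the boundary packet segmented into a served part and a leftover part:

```python
def trim_to_grant(prepared_sizes, grant_bytes):
    """Trim a prepared, priority-ordered packet list to an exact grant size.

    prepared_sizes: packet lengths ordered from highest to lowest priority,
    as produced by logical channel prioritization.
    Returns (served, leftover), where the served portion fits exactly
    within grant_bytes; a packet straddling the boundary is segmented.
    """
    served, used = [], 0
    for i, size in enumerate(prepared_sizes):
        if used + size <= grant_bytes:
            served.append(size)
            used += size
        else:
            head = grant_bytes - used
            if head > 0:
                # Segment the boundary packet: the first segment is served
                # now, the second segment is queued for the next grant.
                served.append(head)
                return served, [size - head] + prepared_sizes[i + 1:]
            return served, prepared_sizes[i:]
    return served, []
```

Because the list is ordered by priority, the trimmed bytes are always the lowest priority tail, which is consistent with the trimming behavior described for the prepared MAC PDU.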
According to still another aspect, certain embodiments of the present disclosure provide the assembly of prepared L2 MacSubPDU packets into a contiguous memory block. In the MAC PDU preparation with logical channel prioritization, the L2 MacSubPDU packets may be assembled into a contiguous memory block after PDCP layer processing, allowing efficient data transfer streaming to the physical layer.
According to yet another aspect, certain embodiments of the present disclosure provide a fast transmission (FastTx) queue for low latency packets. A FastTx queue may be maintained to allow high priority and/or low latency packets to be served first when scheduling the MAC PDU for the next upcoming transmission.
An access node 220 may be a device that communicates with the user equipment 210, such as a wireless access point, a base station, an enhanced Node B (eNB), a cluster master node, or the like. Access node 220 may have a wired connection to user equipment 210, a wireless connection to user equipment 210, or any combination thereof. Access node 220 may be connected to user equipment 210 by multiple connections, and user equipment 210 may be connected to other access nodes in addition to access node 220. Access node 220 may also be connected to other user equipment. Access node 220 is illustrated by a radio tower by way of illustration and not by way of limitation.
A core network element 230 may serve access node 220 and user equipment 210 to provide core network services. Examples of a core network element 230 include a home subscriber server (HSS), a mobility management entity (MME), a serving gateway (GW), and a packet data network (PDN) GW. These are examples of core network elements of an evolved packet core (EPC) system, which is a core network for the LTE system. Other core network elements may be used in LTE and in other communication systems. Core network element 230 is shown as a set of rack-mounted servers by way of illustration and not by way of limitation.
Core network element 230 may connect with a large network, such as the Internet 240, or another IP network, to communicate packet data over any distance. In this way, data from user equipment 210 may be communicated to other user equipment connected to other access points, including, for example, a personal computer 250 connected to Internet 240, for example, using a wired connection, or a tablet 270 connected to Internet 240 via a router 260. Thus, personal computer 250 and tablet 270 provide additional examples of possible user equipment devices, and router 260 provides an example of another access point device.
A generic example of a rack-mounted server is provided as an illustration of core network element 230. However, there may be multiple elements in the core network including database servers, such as database 280, and security and authentication servers, such as authentication server 290. Database 280 may, for example, manage data related to user subscription to network services. A home location register (HLR) is an example of a standardized database of subscriber information for a mobile network. Likewise, authentication server 290 may handle authentication of users, sessions, and so on. In 5G, an authentication server function (AUSF) may be the specific entity to perform user equipment authentication. In certain embodiments, a single server rack may handle multiple such functions, such that the connections between core network element 230, authentication server 290, and database 280 may be local connections within a single rack.
Certain embodiments of the present disclosure may be implemented in a modem of a user equipment, such as user equipment 210, tablet 270, or personal computer 250. For example, a modem or other transceiver of user equipment 210 may be scheduled for transmission by a communication from access node 220. As described below in detail, user equipment 210 may prepare MAC PDUs in advance of receiving a service grant that schedules transmission and then may finalize one or more MAC PDUs based on the service grant.
Each of the elements of
As shown in
At operation 320, the modem can process a service grant. Specifically, the modem can process a grant received from the physical layer for the next or current slot, either with respect to new or retransmission data. For new data, the modem can compose the MAC PDU with the actual grant size received for the slot. Data for the slot can be retrieved from MAC queues containing prepared MAC PDUs and re-prepared (for example, trimmed or filled out) to fulfill the exact grant received from the network. For retransmission data, the modem can prepare the retransmission data by retrieving the sent data from a hybrid automatic repeat request (HARQ) queue. To illustrate further, the grant service for new data can be as follows. The goal may be to assemble the MAC PDU for physical layer transmission in the fastest possible time. When the MAC PDU packet list was prepared earlier in contiguous memory, a trimming step can be executed quickly to trim the prepared packet list to the exact grant size allocated by the network.
At operation 420, the modem can serve packets from a MAC control element queue (MACCEQ). MAC control element packets may be the highest priority packets that need to be included in the MAC PDU creation. These requests may be queued in the MACCEQ, which may be a separate queue from the other queues. These requests may be served according to their priorities. The MAC CE packets are attached to the end of the MAC PDU packets. The grant bytes are first reduced by the length of these MAC CE packets.
At operation 430, the modem may serve packets from a MAC fast transmission queue (MACFASTTXQ). The low latency, high priority packets from various logical channels can be grouped together and fast-tracked earlier through a separate special MACFASTTXQ. At this stage of MAC PDU assembly, these packets are served first into the MAC PDU. The current grant bytes are then reduced by the length of these fast transmission packets.
At operation 440, the modem can serve prepared data MAC PDUs from a MAC data queue (MACDATAQ) with the leftover grant data (GrantData). For the remaining grant data bytes, the prepared MAC PDUs from the MACDATAQ can be served. Since these MAC PDUs were prepared with an estimated grant size, they may need to be trimmed to the exact leftover grant data bytes.
At operation 442, the modem can determine whether the prepared MAC PDU is greater than the leftover grant data. If so, the prepared MAC PDU has exceeded the remaining grant data. Thus, in such a case, the modem can proceed to, at operation 444, trim the prepared data MAC PDU to the exact grant data. For example, the lower priority packet data bytes may be trimmed off to fit into the exact grant size remaining. The MAC PDU may have been prepared with logical channel prioritization (LCP), which dequeued with priority the packets from the logical channels when assembling the prepared MAC PDU. Thus, for example, the lower priority packet data bytes may be at the end of the prepared MAC PDU. During this trimming process, the last packet will be segmented to form two Radio Link Control (RLC) packets. The first segment will be served in this grant, and the second segment will be queued with high priority for the next grant transmission.
If the modem determines the prepared MAC PDU does not exceed the leftover grant data, the modem can, at operation 446, determine whether the opposite is true, namely, whether the prepared MAC PDU is less than the grant data. If not, this implies that no adjustment to the MAC PDU is needed. In certain cases, however, the modem may determine that the prepared MAC PDU has fewer bytes than the remaining grant data. In this case, the modem may, at 448, grab more prepared data MAC PDU packets to fill the grant data. In such cases, the next prepared MAC PDU is retrieved, and the relevant bytes are extracted to fit in the current remaining grant.
At operation 450, upon either trimming at operation 444 or grabbing more data at operation 448, the modem can assemble a final MAC PDU transport block (TB). The MAC PDU can be assembled with the MAC PDU subblocks, where the fast transmission and data MAC PDU are already prepared in contiguous memory blocks.
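By way of illustration and not by way of limitation, the serving order of operations 420 through 450 can be sketched as follows. The function name, the representation of queues as lists of packet lengths, and the trimming policy are hypothetical simplifications intended only to show the priority order: MAC control elements first, then fast transmission packets, then prepared data MAC PDUs trimmed to the leftover grant:

```python
def serve_grant(grant_bytes, ce_q, fasttx_q, data_q):
    """Assemble a transport block by serving queues in priority order.

    ce_q: MACCEQ packet lengths (highest priority).
    fasttx_q: MACFASTTXQ packet lengths (low latency, served next).
    data_q: MACDATAQ prepared data lengths, trimmed to the leftover grant.
    """
    block = []
    leftover = grant_bytes
    for queue in (ce_q, fasttx_q):          # highest priority queues first
        while queue and queue[0] <= leftover:
            size = queue.pop(0)
            block.append(size)
            leftover -= size
    while data_q and leftover > 0:          # fill the remainder from MACDATAQ
        size = data_q.pop(0)
        if size <= leftover:
            block.append(size)
            leftover -= size
        else:
            block.append(leftover)          # trim: segment the last data packet
            data_q.insert(0, size - leftover)
            leftover = 0
    return block
```

In this sketch, the leftover segment of a trimmed data packet is returned to the head of the data queue, analogous to queueing the second segment with high priority for the next grant transmission.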
At operation 330, upon the completion of grant service at operation 320, whether by using the previously prepared MAC PDU (in the case that the prepared MAC PDU exactly matches the service grant) or by using the MAC PDU transport block assembled at operation 450, the modem can program physical layer transmission with the MAC PDU transport block. More particularly, the MAC PDU transport block can be sent to the physical layer to be transmitted. The bytes may be encoded per the 3GPP standard (or any other desired standard) at the physical layer and sent out accordingly. Operation 330 shown in
Referring back to
At operation 350, the modem can prepare MAC PDUs for upcoming slots.
As shown in
Furthermore, the modem can run logical channel prioritization with respect to data packets at operation 530. The MAC layer of the modem can dequeue data packets from logical channels and can assemble MAC PDUs in contiguous memory. This MAC PDU can then be enqueued to MACDATAQ. Additionally, at operation 540, the modem can dequeue MAC control element requests to MACCEQ. In addition to data packets, MAC control element requests are also composed in contiguous memory and queued in a separate MACCEQ, to be assembled into the tail of the final service MAC PDU.
Although operations 520, 530, and 540 are shown in one order, they may be performed in parallel or in a different order than shown. For example, operations 520, 530, and 540 can be performed on packets that come to the modem for transmission, depending on the type of packet received at the modem. Thus, the various queues can be variously updated on an as-needed and nearly real-time basis.
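By way of illustration and not by way of limitation, the routing of incoming packets to the three queues in operations 520, 530, and 540 can be sketched as follows. The function name and the packet fields are hypothetical; the sketch only shows that each packet type lands in its own queue so the queues can be updated independently as packets arrive:

```python
def enqueue_packet(packet, macce_q, macfasttx_q, macdata_q):
    """Route an incoming packet to the queue matching its type:
    MAC control element requests to MACCEQ, low latency packets to
    MACFASTTXQ, and ordinary data packets to MACDATAQ.

    packet: a dict with hypothetical flags "mac_ce" and "low_latency".
    """
    if packet.get("mac_ce"):
        macce_q.append(packet)
    elif packet.get("low_latency"):
        macfasttx_q.append(packet)
    else:
        macdata_q.append(packet)
```

Because the dispatch depends only on the packet itself, the three enqueue paths can run in parallel or in any order, consistent with the description above.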
As shown in
Once the MAC PDU is finalized, it can be provided to the MACSENDQ, using an operation such as operation 330 in
As shown in
The approach of
As shown in
The queueing at operation 820 can include storing packets according to logical priority in contiguous memory. Thus, for example, the packets may be stored in consecutive bytes in memory in order from the highest priority packet to the lowest priority packet. The addresses of the lowest priority packets may be the last addresses in the memory span, and the addresses of the highest priority packets may be the first addresses in the memory span. As illustrated, for example, in
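By way of illustration and not by way of limitation, the contiguous, priority-ordered storage described above can be sketched with a simple byte buffer. The function name and payload contents are hypothetical; the sketch shows why this layout makes trimming cheap — the lowest priority bytes sit at the end, so trimming reduces to truncating the buffer:

```python
def pack_contiguous(packets_by_priority):
    """Concatenate packet payloads into one contiguous buffer, highest
    priority first, recording (offset, length) for each packet so the
    low priority tail can be trimmed by truncating the buffer."""
    buffer = bytearray()
    index = []
    for payload in packets_by_priority:
        index.append((len(buffer), len(payload)))
        buffer.extend(payload)
    return bytes(buffer), index
```

A contiguous block of this form can also be streamed to the physical layer without gathering packets from scattered memory locations.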
The queueing at operation 820 can include placing the plurality of packets partially into at least one high priority queue and partially into at least one low priority queue. For example, the high priority queue may include MACFASTTXQ and MACCEQ in
At operation 825, a service grant is determined by an access node, such as access node 220 in
As shown in
As shown in
When node 1100 is a user equipment, additional components may also be included, such as a user interface (UI), sensors, and the like. Similarly, node 1100 may be implemented as a blade in a server system when node 1100 is configured as a core network element 230. Other implementations are also possible.
As shown in
As shown in
Similarly, node 1100 can also be configured as personal computer 250, router 260, tablet 270, database 280, or authentication server 290 in
User equipment 1202 can include a packet obtaining module 1210 configured to obtain a plurality of packets to be transmitted via uplink. User equipment 1202 can also include a packet queueing module 1220 configured to queue the plurality of packets according to logical channel prioritization. The queueing by packet queueing module 1220 can include storing packets according to logical priority in contiguous memory. The queueing by packet queueing module 1220 can include placing the plurality of packets partially into at least one high priority queue and partially into at least one low priority queue. For example, the high priority queue may include MACFASTTXQ and MACCEQ in
Access node 1204 can include a service grant determination module 1225 configured to determine a service grant and a service grant transmission module 1235 configured to send the service grant to user equipment 1202.
As shown in
User equipment 1202 can additionally include packet forwarding module 1250 configured to forward the remainder of the plurality of packets to transmission after trimming the packets by packet trimming module 1240. These packets can be received at a packet reception module 1255 of access node 1204.
The modules of
Another aspect of the disclosure is directed to a non-transitory computer-readable medium encoded with instructions that, when executed by at least one processor (e.g., processor 1110 in
Certain embodiments of the present disclosure may have various benefits and/or advantages. For example, certain embodiments of the present disclosure may be able to meet a transmission timeline even when the K2 offset requires the same slot or next slot transmission from when the grant is received. Furthermore, certain embodiments of the present disclosure may provide lower latency in preparing MAC PDU packets from L3, L2 to the MAC and physical layers for transmission. Certain embodiments of the present disclosure may also provide minimal data movement for end-to-end data transfer from L3 to the physical layer. Additionally, certain embodiments of the present disclosure may use less on-chip or external memory because the MAC PDU may be directly prepared without intermediate steps for buffering L2 data queues.
Furthermore, certain embodiments of the present disclosure may reduce power for the entire chipset due to reduced memory and data movement. Moreover, certain embodiments of the present disclosure may also be applicable to a fixed grant allocation scheme as well, for example, in cases where dynamic and fixed grant allocations are concurrently applied. Additionally, certain embodiments of the present disclosure may be applicable for different wireless technologies requiring uplink grant access by the base station, such as 5G, LTE, or future 3GPP or other standards.
According to one aspect of the present disclosure, an apparatus can include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to obtain a plurality of packets to be transmitted via uplink. The at least one memory and the computer program code can also be configured to, with the at least one processor, cause the apparatus at least to queue the plurality of packets according to logical channel prioritization. The at least one memory and the computer program code can further be configured to, with the at least one processor, cause the apparatus at least to receive a service grant after the queueing. The at least one memory and the computer program code can additionally be configured to, with the at least one processor, cause the apparatus at least to trim the plurality of packets according to a grant size of the service grant.
In some embodiments, the at least one memory and the computer program code can be further configured to, with the at least one processor, cause the apparatus at least to forward a remainder of the plurality of packets after the trimming to transmission.
In some embodiments, the at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to queue the plurality of packets at least by placing the plurality of packets partially into at least one high priority queue and partially into at least one low priority queue.
In some embodiments, the at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to trim the plurality of packets at least by removing an integer number of packets from the low priority queue.
In some embodiments, the at least one memory and the computer program code can be further configured to, with the at least one processor, cause the apparatus at least to calculate the grant size based on the received service grant. The at least one memory and the computer program code can also be configured to, with the at least one processor, cause the apparatus at least to identify a size of an untrimmable subset of the plurality of packets. The trimming the plurality of packets can include trimming a remainder of the plurality of packets excluding the untrimmable subset.
In some embodiments, the at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to trim the plurality of packets in a medium access control (MAC) layer. The packets can include MAC packet data units.
In some embodiments, the at least one memory and the computer program code can be further configured to, with the at least one processor, cause the apparatus at least to estimate a predicted grant size prior to receiving the service grant. The at least one memory and the computer program code can also be configured to, with the at least one processor, cause the apparatus at least to pre-assemble the plurality of packets into a packet data unit before receiving the service grant.
In some embodiments, the at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to trim the plurality of packets at least by modifying the packet data unit to exclude one or more packets of the plurality of packets.
In some embodiments, the at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to queue the plurality of packets at least by storing the plurality of packets according to logical priority in contiguous memory.
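The queue-then-trim flow recited above can be sketched as follows. This is a minimal illustration only: the function names, the use of Python deques, the tuple-based packet representation, and the byte-size accounting are assumptions of this sketch, not elements of the disclosure.

```python
from collections import deque

def queue_packets(packets, is_high_priority):
    """Place packets partially into a high priority queue and partially
    into a low priority queue (logical channel prioritization)."""
    high_q, low_q = deque(), deque()
    for pkt in packets:
        (high_q if is_high_priority(pkt) else low_q).append(pkt)
    return high_q, low_q

def trim_to_grant(high_q, low_q, grant_size, size_of):
    """Remove an integer number of whole packets from the low priority
    queue until the queued total fits within the grant size."""
    total = sum(size_of(p) for p in high_q) + sum(size_of(p) for p in low_q)
    while low_q and total > grant_size:
        # Drop the most recently queued low-priority packet as a whole unit.
        total -= size_of(low_q.pop())
    return high_q, low_q
```

After the trim, the remainder of the queued packets would be forwarded to transmission, consistent with the forwarding embodiments above.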
According to another aspect of the present disclosure, a method for data transmission scheduling can include obtaining, by at least one processor, a plurality of packets to be transmitted via uplink. The method can also include queueing, by the at least one processor, the plurality of packets according to logical channel prioritization. The method can further include receiving, by the at least one processor, a service grant after the queueing. The method can additionally include trimming, by the at least one processor, the plurality of packets according to a grant size of the service grant.
In some embodiments, the method can also include forwarding, by the at least one processor, a remainder of the plurality of packets to transmission after the trimming.
In some embodiments, the queueing can include placing the plurality of packets partially into at least one high priority queue and partially into at least one low priority queue.
In some embodiments, the trimming can include removing an integer number of packets from the low priority queue.
In some embodiments, the method can include calculating, by the at least one processor, a grant size based on the received service grant. The method can also include identifying, by the at least one processor, a size of an untrimmable subset of the plurality of packets. The trimming the plurality of packets can include trimming a remainder of the plurality of packets excluding the untrimmable subset.
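The untrimmable-subset handling can be sketched as follows, assuming the trimmable packets are ordered by logical priority and the aggregate size of the untrimmable subset has already been identified; all names here are hypothetical and not taken from the disclosure.

```python
def trim_excluding_untrimmable(trimmable, grant_size, untrimmable_size, size_of):
    """Trim whole packets so the total fits within the grant size,
    leaving the untrimmable subset (of known aggregate size) intact.
    Only the remainder beyond the untrimmable subset may be removed."""
    budget = grant_size - untrimmable_size  # bytes left for trimmable packets
    kept, used = [], 0
    for pkt in trimmable:  # assumed ordered by logical priority
        if used + size_of(pkt) <= budget:
            kept.append(pkt)
            used += size_of(pkt)
    return kept
```

In this sketch, the untrimmable subset's size is simply subtracted from the calculated grant size before whole packets are fitted into the remaining budget.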
In some embodiments, the trimming can be performed in a MAC layer. The packets may be MAC packet data units.
In some embodiments, the method can include estimating, by the at least one processor, a predicted grant size prior to receiving the service grant. The method can also include pre-assembling, by the at least one processor, the plurality of packets into a packet data unit before receiving the service grant.
In some embodiments, the trimming can include modifying the packet data unit to exclude one or more packets of the plurality of packets.
In some embodiments, the queueing can include storing the plurality of packets according to logical priority in contiguous memory.
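The pre-assembly variant (estimate a grant size, assemble the packet data unit before the grant arrives, then modify the unit to exclude whole packets if the actual grant is smaller) might be sketched as follows; the offset-tracking scheme and all names are assumptions of this illustration, not the claimed implementation.

```python
def pre_assemble(packets, predicted_grant, size_of):
    """Pre-assemble packets into a single PDU sized to a predicted grant,
    recording each packet's start offset so the PDU can be modified later."""
    pdu, offsets = bytearray(), []
    for pkt in packets:
        if len(pdu) + size_of(pkt) > predicted_grant:
            break
        offsets.append(len(pdu))
        pdu += pkt
    return pdu, offsets

def trim_pdu(pdu, offsets, actual_grant):
    """Modify the pre-assembled PDU to exclude whole trailing packets
    that no longer fit the actual grant size (mutates the offset list)."""
    cut = len(pdu)
    while offsets and cut > actual_grant:
        cut = offsets.pop()  # fall back to the start of the last whole packet
    return pdu[:cut]
```

Storing the pre-assembled packets contiguously, as in the queueing embodiment above, is what makes this trim a cheap truncation rather than a reassembly.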
According to still another aspect of the present disclosure, a non-transitory computer-readable medium can be encoded with instructions that, when executed by at least one processor, cause the at least one processor to perform a process. The process can include obtaining a plurality of packets to be transmitted via uplink. The process can also include queueing the plurality of packets according to logical channel prioritization. The process can further include receiving a service grant after the queueing. The process can additionally include trimming the plurality of packets according to a grant size of the service grant.
In some embodiments, the process can include forwarding a remainder of the plurality of packets to transmission after the trimming.
In some embodiments, the queueing can include placing the plurality of packets partially into at least one high priority queue and partially into at least one low priority queue.
In some embodiments, the trimming can include removing an integer number of packets from the low priority queue.
In some embodiments, the process can include calculating a grant size based on the received service grant. The process can further include identifying a size of an untrimmable subset of the plurality of packets. The trimming the plurality of packets can include trimming a remainder of the plurality of packets excluding the untrimmable subset.
In some embodiments, the trimming can be performed in a MAC layer. The packets can be MAC packet data units.
In some embodiments, the process can include estimating a predicted grant size prior to receiving the service grant. The process can also include pre-assembling the plurality of packets into a packet data unit before receiving the service grant.
In some embodiments, the trimming can include modifying the packet data unit to exclude one or more packets of the plurality of packets.
In some embodiments, the queueing can include storing the plurality of packets according to logical priority in contiguous memory.
According to yet another aspect of the disclosure, an apparatus can include a packet obtaining module configured to obtain a plurality of packets to be transmitted via uplink. The apparatus can also include a packet queueing module configured to queue the plurality of packets according to logical channel prioritization. The apparatus can further include a grant serving module configured to receive a service grant after the queueing. The apparatus can additionally include a packet trimming module configured to trim the plurality of packets according to a grant size of the received service grant.
In some embodiments, the apparatus can further include a packet forwarding module configured to forward a remainder of the plurality of packets to transmission after the trimming.
In some embodiments, the packet queueing module can be further configured to place the plurality of packets partially into at least one high priority queue and partially into at least one low priority queue.
In some embodiments, the packet trimming module can be further configured to remove an integer number of packets from the low priority queue.
In some embodiments, the grant serving module can include a grant size calculation module configured to calculate a grant size based on the received service grant. The grant serving module can also include an untrimmable amount identification module configured to identify a size of an untrimmable subset of the plurality of packets. The packet trimming module can be further configured to trim a remainder of the plurality of packets excluding the untrimmable subset.
In some embodiments, the trimming can be performed in a MAC layer. The packets can include MAC packet data units.
In some embodiments, the packet queueing module can include a grant size estimation module configured to estimate a predicted grant size prior to receiving the service grant. The packet queueing module can include a packet pre-assembly module configured to pre-assemble the plurality of packets into a packet data unit before receiving the service grant.
In some embodiments, the packet trimming module can be further configured to modify the packet data unit to exclude one or more packets of the plurality of packets.
In some embodiments, the packet queueing module can be further configured to store the plurality of packets according to logical priority in contiguous memory.
The foregoing description of the specific embodiments will so reveal the general nature of the present disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
Embodiments of the present disclosure have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present disclosure as contemplated by the inventor(s), and thus, are not intended to limit the present disclosure and the appended claims in any way.
Various functional blocks, modules, and steps are disclosed above. The particular arrangements provided are illustrative and without limitation. Accordingly, the functional blocks, modules, and steps may be re-ordered or combined in different ways than in the examples provided above. Likewise, certain embodiments include only a subset of the functional blocks, modules, and steps, and any such subset is permitted.
The breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
This is a continuation application of International Application No. PCT/IB2020/058407, filed Sep. 10, 2020, which claims the benefit of priority to U.S. Provisional Application No. 62/964,059 filed Jan. 21, 2020, entitled “EFFICIENT 5G UPLINK MAC DATA TRANSMISSION SCHEDULING METHOD FOR HIGH THROUGHPUT AND LOW LATENCY PACKETS,” the contents of which are hereby incorporated by reference in their entireties.
| Number | Date | Country |
| --- | --- | --- |
| 62964059 | Jan 2020 | US |

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/IB2020/058407 | Sep 2020 | US |
| Child | 17870688 | | US |