The present invention relates generally to the scheduling of user terminals on a shared channel and, more particularly, to allocation of the shared resources among the user terminals in an Orthogonal Frequency Division Multiplexing (OFDM) system.
The Physical Downlink Shared Channel (PDSCH) in Long Term Evolution (LTE) systems is a time and frequency multiplexed channel shared by a plurality of user terminals. User terminals periodically send channel quality indication (CQI) reports to a base station. The CQI reports indicate the instantaneous channel conditions as seen by the receivers in the user terminals. During each 1 ms subframe interval, commonly referred to in the standard as a Transmission Time Interval (TTI), a scheduler at the base station schedules one or more user terminals to receive data on the PDSCH and determines the transmission format for downlink transmissions. The identities of the user terminals scheduled to receive data in a given time interval, along with the transmission format, are transmitted to the user terminals on the Physical Downlink Control Channel (PDCCH).
LTE systems use Orthogonal Frequency-Division Multiplexing (OFDM) and schedule user terminals in both the time and frequency domains. Thus, the scheduler needs to determine the appropriate time (sub-frames) and frequency (sub-bands) to allocate to a given user in order to satisfy the user's QoS (Quality of Service) requirements while maximizing cell capacity and coverage. The common approach to scheduling a shared channel in both time and frequency attempts to share the available PDSCH resource blocks (RBs) equally among the user terminals to be scheduled in a given sub-frame. Each sub-band in the frequency domain corresponds to one or more contiguous RBs. Scheduling is performed in an iterative manner. During each iteration, RBs are allocated to each user and link adaptation is performed. If any RBs are unused, subsequent iterations are performed to re-allocate the unused RBs to other user terminals.
During the first iteration, the number of RBs that can be allocated to each user is capped. The cap level is determined by dividing the number of available RBs by the number of user terminals to be scheduled. The scheduler begins by allocating up to the maximum number of RBs to each user in order beginning with the highest priority user. In general, the scheduler will allocate to each user the best available RBs based on the channel conditions reported by the user. Link adaptation is performed at the end of each iteration. The scheduler determines the modulation and coding scheme (MCS) for each user based on the number of RBs allocated to the user, the amount of buffered data for the user, and the channel quality associated with the sub-bands allocated to the user. If a user does not need all of its allocated RBs, the scheduling process is repeated and the unused RBs are re-allocated to other user terminals in subsequent iterations. This process repeats until all RBs have been allocated or there is no more data to schedule.
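For illustration only, the sketch below mimics this prior-art capped allocation. The RB count, user list, and per-user demand are hypothetical, link adaptation is reduced to a fixed per-user demand, and the per-pass cap is kept on every pass purely for brevity; none of these values or names come from the description above.

```python
# Illustrative sketch of the prior-art capped allocation (all values hypothetical).
num_rbs = 12                    # RBs available on the PDSCH in this subframe
users = ["UE1", "UE2", "UE3"]   # user terminals in descending scheduling priority
cap = num_rbs // len(users)     # blind cap: available RBs divided by number of users

# Hypothetical per-user demand in RBs (stands in for buffer status and link adaptation).
demand = {"UE1": 2, "UE2": 6, "UE3": 5}

allocation = {u: 0 for u in users}
remaining = num_rbs
# First pass applies the cap; later passes re-allocate RBs left unused by earlier users.
while remaining > 0 and any(allocation[u] < demand[u] for u in users):
    progress = False
    for u in users:  # highest-priority user first
        grant = min(cap, demand[u] - allocation[u], remaining)
        if grant > 0:
            allocation[u] += grant
            remaining -= grant
            progress = True
    if not progress:
        break

print(allocation, "unused RBs:", remaining)   # {'UE1': 2, 'UE2': 6, 'UE3': 4} unused RBs: 0
```

In this toy run, UE1 needs fewer RBs than its cap, so the RBs it leaves unused are redistributed to UE2 on the second pass, exactly the kind of extra iteration the prior-art scheme requires.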
The scheduling process used in the prior art has a number of disadvantages. First, the amount of resources that can be allocated to each user is blindly capped without regard to the actual needs of the user terminals. Second, allocating resources to user terminals in order of scheduling priority does not result in the most efficient use of the resources. For example, a resource that is better used by a lower priority user may be assigned to a higher priority user; the resource will then not be available to the lower priority user when its turn for scheduling arrives. Also, the blind cap on resources may cause a resource best used by a higher priority user to be allocated to a lower priority user. Third, the scheduling algorithm is executed sequentially in real time. As increasingly large numbers of wireless user terminals are added to the system, it is becoming more difficult to perform sequential scheduling while meeting the stringent time constraints for making scheduling decisions.
The present invention provides methods and apparatus for scheduling user terminals in an OFDM system. The scheduling approach implemented by embodiments of the present invention attempts to maximize system capacity while meeting QoS requirements for the user terminals. To perform more efficient scheduling, a per sub-band prioritization is performed before allocation of the sub-bands to the user terminals to generate a pre-allocation schedule. The prioritization is performed independently for each sub-band. The resulting pre-allocation schedule indicates the relative priorities of the user terminals for each sub-band taking into account the channel conditions and specific needs of the user terminals. Based on the pre-allocation schedule, the scheduler can more efficiently allocate the radio resources to the user terminals based on the channel conditions and the specific needs of the user terminals. This scheduling approach is suitable for parallel computing architectures. The use of a parallel computing architecture increases MIPS (million instructions per second) capacity and allows faster scheduling in order to meet stringent real-time constraints.
Exemplary embodiments of the invention comprise methods for scheduling user terminals in an OFDM system. In one exemplary method, the scheduler independently determines a scheduling weight for each user terminal for each of a plurality of sub-bands as a function of a channel quality weight for the sub-band and a service quality weight for the user terminal. The scheduler then assigns scheduling priorities to the user terminals based on the per sub-band scheduling weights and determines a pre-allocation schedule for each sub-band based on the assigned scheduling priorities. Finally, the scheduler allocates sub-bands to the user terminals based on the sub-band pre-allocation schedule.
Other embodiments of the invention comprise a base station for communicating with a plurality of user terminals over a shared downlink or uplink channel. In one exemplary embodiment, the base station comprises a transceiver circuit for communicating with the user terminals and a scheduler, which may comprise one or more scheduling processors, to schedule transmissions to or from the user terminals. The scheduler is configured to determine, for each of a plurality of sub-bands, a scheduling weight for each user terminal as a function of a channel quality metric for the corresponding sub-band and a service quality metric for the user terminal. The scheduler is further configured to determine scheduling priorities for the user terminals based on the scheduling weights. The scheduling priorities indicate the priority level of each user terminal on each sub-band of interest. The scheduler generates a pre-allocation schedule for each sub-band based on the scheduling priorities of the user terminals and allocates the sub-bands to the user terminals based on the pre-allocation schedule.
The scheduling approach as herein described provides optimal scheduling in a given scheduling interval based on the scheduling weight, resulting in more efficient use of system resources and greater system capacity. The processing intensive operations can be performed in parallel resulting in more efficient hardware utilization and increased scheduling speed. The parallel processes can be extended across multiple sectors within a cell site utilizing a common pool of digital signal processors.
Turning now to the drawings, an exemplary embodiment of the present invention will be described, for illustrative purposes, in the context of a Long Term Evolution (LTE) system. Those skilled in the art will appreciate, however, that the present invention is more generally applicable to other wireless communication systems, such as WiMAX (IEEE 802.16) systems, where scheduling of frequency resources is performed.
LTE uses Orthogonal Frequency Division Multiplexing (OFDM) in the downlink and Discrete Fourier Transform (DFT) spread OFDM in the uplink. The available radio resources in LTE systems can be viewed as a time-frequency grid.
In LTE systems, data is transmitted to the user terminals 60 over a downlink transport channel known as the Physical Downlink Shared Channel (PDSCH). The PDSCH is a time and frequency multiplexed channel shared by a plurality of user terminals 60. During each 1 ms subframe interval, commonly referred to as a Transmission Time Interval (TTI), a scheduler for the base station 20 schedules one or more user terminals 60 to receive data on the PDSCH. The user terminals 60 scheduled to receive data in a given TTI are chosen based on Channel Quality Indication (CQI) reports from the user terminals 60. The CQI reports indicate the instantaneous channel conditions as seen by a receiver at the user terminals 60. As described in more detail below, the CQI reports may report CQI separately for different sub-bands. The base station 20 also uses the CQI reports from the user terminals 60 and the buffer status for the user terminals 60 to select the transmission format for downlink transmissions. The transmission format includes, for example, the transport block size, modulation, and coding, which are selected to achieve a desired error performance.
In LTE and other OFDM systems, user terminals 60 are scheduled in both the time and frequency domains. The available resources are grouped into resource blocks (RBs). An RB comprises twelve adjacent subcarriers in the frequency domain and one 0.5 ms slot (one half of one subframe) in the time domain. In the frequency domain, the RBs are grouped into sub-bands. Each sub-band comprises one or more contiguous RBs. User terminals 60 are scheduled in 1 ms intervals, each corresponding to two slots (one subframe), i.e., a pair of RBs in the time domain. To schedule the user terminals 60, the scheduler 50 needs to determine the appropriate time (sub-frames) and frequency (sub-bands) to allocate to a given user in order to satisfy user QoS (Quality of Service) requirements and at the same time maximize cell capacity and coverage.
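As a small worked example of this grid (the carrier bandwidth, RB count, and sub-band size below are illustrative assumptions, not values taken from this description), the RBs can be grouped into contiguous sub-bands as follows:

```python
# Illustrative time-frequency grid arithmetic (example values, not from this description).
subcarriers_per_rb = 12     # one RB spans 12 adjacent subcarriers in frequency
slots_per_tti = 2           # one 1 ms TTI = two 0.5 ms slots, i.e., an RB pair in time
total_rbs = 50              # e.g., a 10 MHz LTE carrier provides 50 RBs across frequency
rbs_per_subband = 5         # assumed sub-band size of 5 contiguous RBs

num_subbands = total_rbs // rbs_per_subband
# Sub-band k covers RB indices k*rbs_per_subband .. (k+1)*rbs_per_subband - 1.
subbands = [list(range(k * rbs_per_subband, (k + 1) * rbs_per_subband))
            for k in range(num_subbands)]
print(num_subbands, "sub-bands; sub-band 0 covers RBs", subbands[0])
# -> 10 sub-bands; sub-band 0 covers RBs [0, 1, 2, 3, 4]
```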
In general, the scheduler 50 determines which user terminals 60 to schedule in a given TTI, i.e., sub-frame. The scheduler 50 then allocates the resource blocks for the sub-frame to the scheduled user terminals. It is generally desirable to allocate the RBs in a sub-band to the user terminal 60 with the best channel conditions, subject to service quality requirements. Allocating resources to user terminals 60 with the best channel conditions allows higher data rates to be achieved, and hence greater system capacity.
In exemplary embodiments of the invention, each user terminal 60 being scheduled is assigned a scheduling weight for each sub-band based on the channel quality reported by the user terminal 60 for that sub-band and the quality of service requirements for the user terminal 60. The user terminals 60 may then be prioritized separately for each sub-band. Per sub-band prioritization enables more efficient scheduling and greater system capacity. Additionally, per sub-band prioritization is well suited to parallel processing architectures.
In some embodiments, the scheduler 50 may be co-located with the transceiver 30 and perform scheduling for a single cell. In other embodiments, the scheduler 50 may be located remotely from the transceiver 30 and perform scheduling for multiple cells.
The task of computing the scheduling weights may be assigned by the scheduling controller 52 to different scheduling processors 54. In one exemplary embodiment, the scheduling controller 52 assigns each user terminal 60 to a different scheduling processor 54 to compute scheduling weights. In this case, each scheduling processor 54 computes the scheduling weights for an assigned user terminal 60 for all sub-bands. In other embodiments, the scheduling controller 52 may assign each sub-band to a scheduling processor 54 to compute the scheduling weights for the sub-band. In this case, each scheduling processor 54 is assigned to compute scheduling weights for all user terminals 60 for an assigned sub-band.
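A minimal sketch of the first partitioning is given below, with hypothetical input data: each worker in a thread pool plays the role of a scheduling processor 54 and computes the weights of one assigned user terminal 60 for all sub-bands. The additive combination shown simply anticipates equation (0.1) below; partitioning by sub-band instead would only change the axis over which the work is split.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical inputs: per-user channel quality weights for each sub-band and QoS weights.
w_cq = {"UE1": [3.0, 1.5, 2.0],
        "UE2": [1.0, 2.5, 2.5],
        "UE3": [2.0, 2.0, 1.0]}
w_qos = {"UE1": 0.5, "UE2": 1.5, "UE3": 1.0}

def weights_for_user(user):
    # One "scheduling processor" computes its assigned user's weight on every sub-band.
    return user, [w_qos[user] + cq for cq in w_cq[user]]

with ThreadPoolExecutor(max_workers=len(w_cq)) as pool:
    w_sb = dict(pool.map(weights_for_user, w_cq))

print(w_sb)   # {'UE1': [3.5, 2.0, 2.5], 'UE2': [2.5, 4.0, 4.0], 'UE3': [3.0, 3.0, 2.0]}
```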
In some embodiments of the invention, a service quality weight is computed for each user terminal 60 (block 230). The service quality weight reflects the service quality state of a given user terminal 60 and indicates how well the user terminal 60 is being served relative to its QoS requirements. A user terminal 60 that is being underserved according to its QoS requirements is given a higher service quality weight than a user terminal 60 whose QoS requirements are met. Assigning higher weights to underserved user terminals 60 increases the probability that those user terminals 60 will be scheduled in the next scheduling interval.
The scheduling weights are computed as a function of both the sub-band specific channel quality weights and service quality weights (block 240). The computation of the scheduling weight, denoted WSB is given by:
W
SB
=W
QoS
+W
CQ,SB (0.1)
where W_CQ,SB is the sub-band specific channel quality weight and W_QoS is the service quality weight, which is the same for all sub-bands. The sub-band specific channel quality weight W_CQ,SB is related to the data rate that can be supported within the sub-band: the larger the weight, the higher the supportable data rate. The service quality weight W_QoS indicates how urgently the user terminal 60 needs to be scheduled in order to meet its QoS requirements. The update procedure 200 is then completed (block 250).
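The description does not prescribe how W_QoS is derived. Purely as an illustration, the sketch below derives it from a throughput shortfall against a hypothetical QoS target rate and then combines it with per-sub-band channel quality weights according to equation (0.1); all rates and weights are invented for the example.

```python
# Hypothetical derivation of W_QoS from a throughput shortfall, then Eq. (0.1) per sub-band.
def service_quality_weight(target_kbps, served_kbps):
    # Underserved user terminals get a larger weight; a fully served terminal gets 0.
    shortfall = max(target_kbps - served_kbps, 0.0)
    return shortfall / target_kbps

w_qos = service_quality_weight(target_kbps=500.0, served_kbps=200.0)   # -> 0.6
w_cq_sb = [3.0, 1.5, 2.0]          # channel quality weight per sub-band (illustrative)

w_sb = [w_qos + w_cq for w_cq in w_cq_sb]
print(w_sb)                        # [3.6, 2.1, 2.6] -- W_SB = W_QoS + W_CQ,SB per sub-band
```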
The task of computing the sub-band priorities may be assigned by the scheduling controller 52 to different scheduling processors 54. In one exemplary embodiment, each scheduling processor 54 is assigned to compute the user terminal priorities for an assigned sub-band. Thus, the user terminal priorities for each sub-band can be computed in parallel. In other embodiments, the scheduling controller 52 or one of the scheduling processors 54 may compute the scheduling priorities for all of the sub-bands.
Once the scheduling priorities are determined, the scheduling controller 52 generates a consolidated pre-allocation schedule for all of the sub-bands of interest (block 140).
In some embodiments, the scheduling controller 52 consolidates the sub-band prioritizations performed by the individual scheduling processors 54 to generate the pre-allocation schedule. In other embodiments, the scheduling controller 52, or one of the scheduling processors 54, may simultaneously prioritize the user terminals 60 and generate the pre-allocation schedule.
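As a sketch of this step, reusing the hypothetical per-user, per-sub-band weights from the earlier examples, each sub-band can be prioritized independently (and hence in parallel) and the results consolidated into a single pre-allocation table; the data and data structure are illustrative only, not taken from the described embodiments.

```python
# Per sub-band prioritization and consolidation into a pre-allocation schedule (illustrative).
w_sb = {"UE1": [3.5, 2.0, 2.5],
        "UE2": [2.5, 4.0, 4.0],
        "UE3": [3.0, 3.0, 2.0]}   # scheduling weight of each user terminal on each sub-band
num_subbands = 3

def prioritize(subband):
    # Rank the user terminals on one sub-band, highest scheduling weight first.
    return sorted(w_sb, key=lambda u: w_sb[u][subband], reverse=True)

# The priority list of each sub-band could be computed by a separate scheduling processor.
pre_allocation = {sb: prioritize(sb) for sb in range(num_subbands)}
print(pre_allocation)
# {0: ['UE1', 'UE3', 'UE2'], 1: ['UE2', 'UE3', 'UE1'], 2: ['UE2', 'UE1', 'UE3']}
```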
The scheduler 50 is configured to iteratively allocate the sub-bands to the user terminals 60 in order of the sub-band specific priorities beginning with the user terminals having the highest sub-band specific priority in each sub-band. A user terminal 60 may be pre-allocated multiple sub-bands. In such case, the sub-bands pre-allocated to the user terminal 60 are allocated in the order of best to worst as measured by the scheduling weights. If a user terminal 60 does not require all of the pre-allocated sub-bands, the unused sub-bands can be redistributed to other user terminals 60 in subsequent iterations.
Because buffer status is not considered in the pre-allocation schedule, a user terminal 60 may not use all of the resources, i.e. sub-bands, that it was allocated in the pre-allocation schedule. Therefore, after link adaptation is completed for the first iteration, the scheduling controller 52 determines whether all data has been scheduled (block 340) and, if not, whether there are any unused resources remaining (block 350). If so, a second scheduling iteration is performed to redistribute the unused resources (block 360). Thus, a sub-band pre-allocated to a user terminal 60 having insufficient data to use the pre-allocated sub-band may be redistributed to a second user terminal 60 having data in excess of the capacity of its pre-allocated sub-bands. After re-allocation of the unused resources, the transmission format is determined for the user terminals 60 affected by the re-allocation (block 370). This re-allocation process repeats until all resources are assigned or until all buffered data has been scheduled. Once all resources have been assigned and the transmission format determined for all user terminals, the scheduler 50 updates the service quality weights for the user terminals 60, which are used in the next scheduling interval to determine the scheduling weights for the user terminals 60 (block 380). The computation of the service quality weights can be performed simultaneously for all user terminals 60 by different scheduling processors 54. The resource allocation process ends (block 390) after the service quality weights are updated.
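The loop below sketches this iterative allocation under simplifying assumptions: each user terminal needs a fixed number of sub-bands derived from its buffer status, link adaptation and the transmission format selection are abstracted away, and a sub-band whose highest-priority terminal has no remaining data falls to the next terminal in that sub-band's pre-allocation list. None of the names or values come from the description.

```python
# Illustrative iterative allocation driven by the pre-allocation schedule (hypothetical data).
pre_allocation = {0: ["UE1", "UE3", "UE2"],
                  1: ["UE2", "UE3", "UE1"],
                  2: ["UE2", "UE1", "UE3"]}
need = {"UE1": 1, "UE2": 1, "UE3": 1}   # sub-bands each terminal can fill from its buffer

assigned = {}                            # sub-band -> user terminal
while True:
    free = [sb for sb in pre_allocation if sb not in assigned]
    if not free or all(n == 0 for n in need.values()):
        break                            # all resources assigned or no more data to schedule
    progress = False
    for sb in free:
        # Offer the sub-band to the highest-priority terminal in its pre-allocation
        # list that still has buffered data; otherwise it stays free for a later pass.
        for user in pre_allocation[sb]:
            if need[user] > 0:
                assigned[sb] = user
                need[user] -= 1
                progress = True
                break
    if not progress:
        break                            # remaining sub-bands cannot be used by anyone

print(assigned)   # {0: 'UE1', 1: 'UE2', 2: 'UE3'}
```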
By considering the buffer status of the user terminals 60 in advance, it is possible to reduce the number of scheduling iterations in certain situations.
The scheduling approach as herein described provides optimal scheduling in a given scheduling interval based on the scheduling weight, resulting in more efficient use of system resources and greater system capacity. The processing intensive operations can be performed in parallel resulting in more efficient hardware utilization and increased scheduling speed. The parallel processes can be extended across multiple sectors within a cell site utilizing a common pool of digital signal processors. Although the exemplary embodiment as described is used for scheduling downlink transmission, the techniques as described herein may also be applied to schedule uplink transmissions.
The present invention may, of course, be carried out in other ways than those specifically set forth herein without departing from essential characteristics of the invention. The present embodiments are to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.