Embodiments of the present invention relate to communication networks, more particularly but not exclusively to packet data transmission in mobile communication networks.
Demands for higher data rates for mobile services are steadily increasing. At the same time, modern mobile communication systems such as 3rd generation systems (3G as abbreviation) and 4th generation systems (4G as abbreviation) provide enhanced technologies, which enable higher spectral efficiencies and allow for higher data rates and cell capacities. Users of today's handhelds are becoming more difficult to satisfy. While old feature phones generated only data or voice traffic, current smartphones, tablets, and netbooks run various applications in parallel that can fundamentally differ from each other. Compared to feature phones, this application mix leads to a number of new characteristics. For example, load statistics become highly dynamic. Modern handhelds support various applications that generate bursty traffic, cf. G. Maier, F. Schneider, A. Feldmann, “A First Look at Mobile Hand-held Device Traffic”, In Proc. Int. Conference on Passive and Active Network Measurement (PAM '10), April 2010. Even worse, with multitasking operating systems many of these applications run in parallel and a user may change this mix of active applications at any instant. Consequently, the generated load may change rapidly and high peaks can appear at any time.
Moreover, load statistics can be highly diverse. Even if an application mix remains static, the requested load may fundamentally differ among the applications. Consequently, there is now a larger spectrum of load requests to satisfy than with feature phones. Furthermore, the dynamics of the constraints have increased. Each application can have different requirements in terms of error rate and delay, which may change when the application becomes inactive or the application mix changes. Consequently, guarantees granted to a UE (as abbreviation for User Equipment, in line with the 3GPP terminology, 3GPP abbreviating 3rd Generation Partnership Project) can quickly become obsolete.
These traffic characteristics make it challenging to efficiently allocate wireless channel resources to modern UEs while keeping an acceptable Quality of Service (QoS for abbreviation). First, the load statistics are now unstable and difficult to characterize and predict, cf. F. Schneider, S. Agarwal, T. Alpcan, A. Feldmann, “The New Web: Characterizing AJAX Traffic”, In Proc. Int. Conference on Passive and Active Network Measurement, April 2008. Second, the constraints under which resources are allocated are highly diverse and may change at any time. Finally, the application QoS demands may depend on the user's current environment (e.g., its location, speed, and distance to other users).
G. Bianchi et al., “A Programmable MAC Framework for Utility-Based Adaptive Quality of Service Support”, IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 18, NO. 2, FEBRUARY 2000, discloses the design and evaluation of a programmable medium access control framework, which is based on a hybrid centralized/distributed data link controller. The programmable framework and its associated algorithms are capable of supporting adaptive real-time applications over time-varying and bandwidth limited networks (e.g., wireless networks) in a fair and efficient manner taking into account application-specific adaptation needs. The framework is flexible, extensible and supports the dynamic introduction of new adaptive services on-demand. As part of the service creation process, applications interact with a set of distributed adaptation handlers to program services without the need to upgrade the centralized adaptation controller. This approach is in contrast to techniques that offer a fixed set of “hard-wired” services at the data link from which applications select. A centralized adaptation controller responsible for the fair allocation of available bandwidth among adaptive applications is driven by application specific bandwidth utility curves. A set of distributed adaptation handlers execute at edge devices interacting with a central controller allowing applications to program their adaptation needs in terms of utility curves, adaptation time scales and adaptation policy. The central controller offers a set of simple meta-services called “profiles” that distributed handlers use to build adaptive real-time services.
Embodiments are based on the finding that the application QoS demands may depend on the user's current environment, e.g., its location, speed, and distance to other users. Embodiments are based on the finding that there is a need for more advanced scheduling concepts, which take into account application and application state specific metrics. In other words, embodiments are based on the finding that for modern UEs the users' Quality of Service (QoS) depends not only on its currently running applications but on its context. To efficiently allocate channel resources even under such conditions, embodiments may utilize context information for wireless radio resource management (RRM for abbreviation).
Embodiments can be further based on the finding that context information can be obtained and signaled through a transaction-based architecture and data structures to efficiently access, store, and transfer context information. Moreover, embodiments can be based on the finding that wireless resources can be allocated according to the users' context. Embodiments may provide a scheduling concept to efficiently allocate resources to UEs while accounting for their current application mix and further context information. Hence, embodiments may provide a resource allocation framework that may be completely or partially used to assess, signal, and allocate resources in context-aware wireless networks. Embodiments may therefore also be based on the finding that a more efficient radio resource management can be achieved in a mobile communication system when the resource allocation, i.e. the scheduling, is aware of the users' context. Such context can be defined as information extracted from the users' environment and as a combination of such information.
Embodiments may also be referred to as context-aware resource allocation (CARA for abbreviation), and they may comprise a system with multiple components.
Embodiments may provide an apparatus for a mobile transceiver in a mobile communication system or network. The terms mobile communication system and mobile communication network will be used synonymously in the following. Such an apparatus may be implemented as a context extraction module that observes context information at the UE or mobile transceiver. The context information may then be transported based on transactions which, for each running application, combine data, traffic requirements, and related signaling information within a single protocol data unit. Moreover, embodiments may provide an apparatus for a base station transceiver, which may comprise a corresponding transaction-based scheduler that efficiently solves the resource allocation problem over all applications in the system.
Embodiments may enable efficient radio resource management by using a scheduling concept which is aware of the users' context. Embodiments may enable precise adjustments of the scheduling and resource allocation to the applications' demands. Embodiments may enable quick reactions of a scheduler when an application changes its demands or when these demands cannot be fulfilled. Moreover, embodiments may enable to integrate context-awareness into existing RRM schemes independent on scheduler specifics or traffic models.
More specifically, embodiments may provide an apparatus for a mobile transceiver in a mobile communication system, i.e. embodiments may provide said apparatus to be operated by or included in a mobile transceiver. In the following, the apparatus will also be referred to as mobile transceiver apparatus. The mobile communication system further comprises a base station transceiver. The mobile communication system may, for example, correspond to one of the 3GPP-standardized mobile communication networks, such as an LTE (as abbreviation for Long Term Evolution), an LTE-A (as abbreviation for LTE-Advanced), a UTRAN (as abbreviation for UMTS Terrestrial Radio Access Network, wherein UMTS abbreviates Universal Mobile Telecommunication System), an E-UTRAN (as abbreviation for Evolved-UTRAN), a GERAN (as abbreviation for GSM/EDGE Radio Access Network, GSM abbreviating Global System for Mobile Communication, EDGE abbreviating Enhanced Data Rates for GSM Evolution), generally an OFDMA (as abbreviation for Orthogonal Frequency Division Multiple Access) network, etc.
The mobile transceiver apparatus comprises means for extracting context information from an application being run on the mobile transceiver, from an operating system being run on the mobile transceiver, or from hardware drivers or hardware of the mobile transceiver, the context information comprising information on a state of the application and/or information on a state of the mobile transceiver. In other words, the context information may comprise information on the application; for example, it may comprise information on a user focus, i.e. whether the application is currently displayed in the foreground or in the background, information on the type of application, i.e. web browsing, interactive, streaming, conversational, etc., information on the type of request, i.e. whether the requested data is just a prefetch or is to be displayed immediately, information on certain delay or QoS requirements, etc.
In other words, the context information can be provided per application. For example, two streaming applications may be running in parallel on the mobile transceiver. According to the prior art, both applications' data would be mapped to streaming transport channels at the lower layers. Therefore, according to the prior art, data from the two applications would not be distinguished by a scheduler. According to embodiments, the context information may be available for the applications separately. For example, the context information of one application may indicate that it is displayed in the foreground; the context information of the other application may indicate that it is in the background. Therefore, embodiments can provide the advantage that these two applications and their data can be distinguished by the scheduler and the application running in the foreground can be prioritized. The context information can as well be extracted from the operating system, as an application may not have the information on whether it is in the foreground or background. This information, also determining a state of the application, may be extracted from a window manager of the operating system of the mobile transceiver.
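For illustration only, the following sketch shows how such per-application context information could be represented and collected; the field names, the application classes, and the dictionary-based stand-in for a window-manager interface are assumptions made for this example and are not part of any standardized interface.

```python
from dataclasses import dataclass, field
from enum import Enum
import time


class AppType(Enum):
    """Illustrative application classes (names are assumptions)."""
    WEB_BROWSING = "web_browsing"
    STREAMING = "streaming"
    INTERACTIVE = "interactive"


@dataclass
class AppContext:
    """Per-application context record as a context extraction module
    could assemble it; the field names are illustrative, not standardized."""
    app_id: str
    app_type: AppType
    foreground: bool          # user focus: currently displayed in foreground?
    prefetch: bool            # requested data is only a prefetch
    delay_budget_ms: int      # delay/QoS requirement of the application
    timestamp: float = field(default_factory=time.time)


def extract_context(running_apps, focused_app_id):
    """Build one context record per running application.

    `running_apps` stands in for what a window manager or OS scheduler
    would report (a hypothetical interface); here it is a plain list of
    dictionaries so that the sketch is runnable on its own."""
    return [
        AppContext(
            app_id=app["id"],
            app_type=app["type"],
            foreground=(app["id"] == focused_app_id),
            prefetch=app.get("prefetch", False),
            delay_budget_ms=app["delay_budget_ms"],
        )
        for app in running_apps
    ]


# Two streaming applications: only the focused one should be prioritized.
apps = [
    {"id": "video_a", "type": AppType.STREAMING, "delay_budget_ms": 100},
    {"id": "video_b", "type": AppType.STREAMING, "delay_budget_ms": 100,
     "prefetch": True},
]
for ctx in extract_context(apps, focused_app_id="video_a"):
    print(ctx.app_id, "foreground" if ctx.foreground else "background")
```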
Moreover, the mobile transceiver apparatus may comprise means for communicating data packets associated with the application with a data server through the base station transceiver. In other words, the mobile transceiver uses the base station transceiver to communicate with the data server using data packets. These data packets can be transmitted and received in both directions, from the mobile transceiver to the base station transceiver, i.e. in the uplink, and also from the base station transceiver to the mobile transceiver, i.e. in the downlink. For data scheduling the downlink direction is the more prominent one and the following embodiments will be described with focus on the downlink. However, embodiments can also provide context awareness for uplink scheduling, as e.g. in UTRAN using E-DCH (as abbreviation for Enhanced Dedicated Channel, also referred to as HSUPA abbreviating High Speed Uplink Packet Access). It is to be noted that the data exchange is assumed to be carried out between the mobile transceiver and a data server, through the mobile communication network. The data server can therefore correspond to any other communication equipment, such as, e.g., a data storage, a personal computer, another mobile transceiver, a tablet computer, etc. As the wireless interface between the base station transceiver and the mobile transceiver is likely to be the bottleneck in the transmission chain, scheduling for the wireless interface is critical for the overall transmission and may therefore determine the user satisfaction and whether the QoS requirements are met for the respective service.
Furthermore, the mobile transceiver apparatus comprises means for providing the context information to the base station transceiver. The means for providing the context information can be adapted to provide the context information using a signaling connection to the base station transceiver; it may as well include the context information for a downlink transmission in an uplink transmission and vice versa. In embodiments the context information may comprise information on a quality of service requirement of the application, priority information of the data packets associated with the application, information on a unity of a plurality of the data packets of the application, information on a load demand of the application, information on a delay or error rate constraint of the application, information on the window state, information on the memory consumption, information on the processor usage of the application running on the mobile transceiver, information on the current location, speed, or orientation of the mobile transceiver, and/or a distance of the mobile transceiver to another mobile transceiver.
The unity of the data packets may refer to information indicating that a number of data packets belong together, for example, the application can correspond to an image displaying application and the image data is contained in a plurality of data packets. Then the context information may indicate how many data packets refer to one image. This information may be taken into account by the scheduler. In other words, from the context information the scheduler may determine a certain relation between the data packets, e.g. the user may only be satisfied if the whole image is displayed, therefore all packets referring to the image have to be transmitted to the mobile transceiver in an adequate time interval. Therewith the scheduler can be enabled to plan ahead.
In embodiments the means for extracting can be adapted to extract the context information from an operating system of the mobile transceiver or from the application being run on the mobile transceiver. In other words, the operating system of the mobile transceiver can provide the context information, e.g. as state information of an application (foreground/background, active/suspended, standby, etc.). Another option is that the application itself provides the context information. In embodiments the mobile transceiver apparatus may further comprise means for composing a transaction data packet; the transaction data packet may comprise data packets from the application and the context information. In other words, embodiments may use a protocol having multiple data packets in its payload section and the context information in its control section.
Moreover, embodiments may provide an apparatus for a base station transceiver in the mobile communication system, i.e. embodiments may provide said apparatus to be operated by or included in a base station transceiver. In the following, the apparatus will also be referred to as base station transceiver apparatus. The base station transceiver apparatus comprises means for receiving data packets associated with an application being run on the mobile transceiver and it comprises means for obtaining context information on the data packets associated with the application. Moreover, the base station transceiver apparatus comprises means for scheduling the mobile transceiver for transmission of the data packets based on the context information. As has been described above, the scheduler or the means for scheduling takes into account the context information and therefore carries out context-aware scheduling.
Embodiments of the base station transceiver apparatus may obtain the context information in different ways. Three examples are: the context information is received from the mobile transceiver, the context information is received from the data server, or the context information is determined from the data packets passing through, which are exchanged between the mobile transceiver and the data server, e.g. by sniffing, eavesdropping, or inspecting the data packets. In other words, in embodiments the means for obtaining can be adapted to obtain the context information by inspecting the data packets, by receiving context information from the mobile transceiver, and/or by receiving the context information from a data server. As has been described above, the context information may comprise information on a quality of service requirement of the application, priority information of the data packets associated with the application, information on a unity of a plurality of the data packets of the application, information on a load demand of the application, information on a delay or error rate constraint of the application, information on the window state, information on the memory consumption, information on the processor usage of the application running on the mobile transceiver, information on the current location, speed, or orientation of the mobile transceiver, and/or a distance of the mobile transceiver to another mobile transceiver.
Furthermore, the means for scheduling can be adapted to schedule the mobile transceiver for transmission such that the quality of service requirement for the plurality of data packets to which the information on the unity refers is met. In other words, the scheduler can take into account that user satisfaction may only be achieved when all data packets of the unity are delivered in time and can therefore plan ahead. The means for scheduling can be adapted to determine a transmission sequence of a plurality of transactions, a transaction being a plurality of data packets for which the context information indicates unity and the plurality of transactions referring to a plurality of applications being run by one or more mobile transceivers. In other words, data packets originating from the same application, i.e. sharing the same state and requirements, as e.g. all objects of a web page, may be gathered together with the additional context information, forming a so-called transaction. The transactions may then serve to determine a scheduling class. That is to say, the scheduling may not be carried out on a user basis, e.g. based on a buffer state, but rather on an application or transaction basis. The transactions may then be differentiated by the scheduler rather than differentiating only on a user level. A user or mobile transceiver may utilize multiple transactions for multiple applications and the context information may be obtained for each transaction separately.
The means for scheduling may then determine an order of the sequence of transactions based on a utility function, the utility function depending on a completion time of a transaction, which is determined based on the context information. In other words, the context information may be evaluated using a utility function. The utility function may be a measure for the user satisfaction and may therefore depend on a completion time of a transaction; e.g., for a transaction comprising the data packets of a web page that a web browsing application has requested, the completion time may, for example, be 2 s. In other words, full user satisfaction may be achieved when the full content of the web page is transmitted in less than 2 s. Otherwise, the user satisfaction and therewith the utility function will degrade. The sequence of the transactions can be determined in different ways in embodiments. In some embodiments the transmission sequence is determined from an iteration over multiple different sequences of transactions. The multiple different sequences can correspond to different permutations of the plurality of transactions. The means for scheduling can be adapted to determine the utility function for each of the multiple different sequences and can be further adapted to select, from the multiple different sequences, the transmission sequence corresponding to the maximum utility function. In other words, in embodiments the scheduling decision may be determined based on an optimized user satisfaction or utility function, where the optimization may be based on a limited set of sequences.
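Purely as an illustration of selecting a transmission sequence from a limited set of permutations, the following sketch evaluates candidate orders against per-transaction utilities that decrease with the completion time; the transaction names, durations, and utility shapes are assumptions made for this example.

```python
from itertools import islice, permutations


def completion_times(sequence, duration):
    """Completion time of each transaction when served back to back."""
    t, result = 0.0, {}
    for tx in sequence:
        t += duration[tx]
        result[tx] = t
    return result


def best_sequence(transactions, duration, utility, max_candidates=100):
    """Pick, among a limited set of permutations, the transmission
    sequence with the maximum sum utility (sketch)."""
    candidates = islice(permutations(transactions), max_candidates)
    return max(
        candidates,
        key=lambda seq: sum(
            utility[tx](t) for tx, t in completion_times(seq, duration).items()
        ),
    )


# Example: two web-page transactions; utility decays with completion time.
duration = {"page_fg": 1.0, "page_bg": 3.0}            # seconds to transmit
utility = {"page_fg": lambda t: max(0.0, 1 - t / 2),    # foreground: strict
           "page_bg": lambda t: max(0.0, 1 - t / 10)}   # background: relaxed
print(best_sequence(["page_fg", "page_bg"], duration, utility))
```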
In some embodiments, the actual transmission sequence or scheduling decision may be further based on the radio condition of a particular user, e.g. the means for scheduling can be adapted to further modify the transmission sequence based on the supportable data rate for each transaction. In other embodiments other fairness criteria or rate or throughput criteria may be considered.
Furthermore, embodiments may provide an apparatus for a data server, i.e. embodiments may provide said apparatus to be operated by or included in a data server. In the following, the apparatus will also be referred to as data server apparatus. The data server may communicate data packets associated with an application being run on the mobile transceiver through the mobile communication system to the mobile transceiver. The data server apparatus may comprise means for deriving context information for the data packets and means for transmitting the context information along with the data packets to the mobile communication system. In other words, the application or operating system on the data server may be the counterpart, with respect to context information provision, to the application or operating system on the mobile transceiver. Again, the context information may comprise information on a quality of service requirement of the application, priority information of the data packets associated with the application, information on a unity of a plurality of the data packets of the application, information on a load demand of the application, information on a delay or error rate constraint of the application, information on the window state, information on the memory consumption, information on the processor usage of the application running on the mobile transceiver, etc. The means for deriving can be adapted to extract the context information from an operating system of the data server or from the application being run on the data server.
In embodiments the data server apparatus can further comprise means for composing a data packet, the data packet comprising data packets from the application and the context information. I.e. the data server may terminate the transaction protocol. Thus, the data server apparatus may further comprise means for composing a transaction data packet, the transaction data packet may comprise data packets from the application and the context information.
Embodiments may further provide the corresponding methods. Embodiments may provide a method for a mobile transceiver in a mobile communication system, the mobile communication system further comprising a base station transceiver. The method comprises a step of extracting context information from an application being run on the mobile transceiver, from an operating system being run on the mobile transceiver, or from hardware drivers or hardware of the mobile transceiver, the context information comprising information on a state of the application and/or information on a state of the mobile transceiver. The method further comprises a step of communicating data packets associated with the application with a data server through the base station transceiver and a step of providing the context information to the base station transceiver.
Furthermore, embodiments may provide a method for a base station transceiver in a mobile communication system, the mobile communication system further comprises a mobile transceiver. The method comprises a step of receiving data packets associated with an application being run on the mobile transceiver and a step of obtaining context information on the data packets associated with the application. The method further comprises a step of scheduling the mobile transceiver for transmission of the data packets based on the context information.
Moreover, embodiments may provide a method for a data server. The data server communicates data packets associated with an application being run on a mobile transceiver through a mobile communication system to the mobile transceiver. The method comprises a step of deriving context information for the data packets and a step of transmitting the context information along with the data packets to the mobile communication system.
Embodiments may further provide a mobile transceiver comprising the above mobile transceiver apparatus, a base station transceiver comprising the above base station transceiver apparatus, a data server comprising the above data server apparatus, and/or a communication system comprising the mobile transceiver, the base station transceiver, and/or the data server.
Embodiments can further comprise a computer program having a program code for performing one of the above described methods when the computer program is executed on a computer or processor.
It is to be noted that embodiments may use channel estimation or channel prediction means for determining the channel quality or supportable data rates for transactions in the future. The channel estimation and/or prediction means can be adapted to base the channel estimation and/or prediction on a current channel estimate, a channel estimation history, i.e. former channel estimates, a known propagation condition or propagation loss, statistical knowledge on the radio channel, etc.
Embodiments can provide the advantage of allowing a radio resource management that enables to free channel resources when they are not needed by an application or to prioritize applications only when required, which can improve the efficiency at which the channel resources are used. Simulations showed that embodiments may make more efficient use of the radio resources than current scheduling policies under the PF (as abbreviation for Proportional Fair) constraint or under the minimum average delay constraint (i.e., EDF as abbreviation for Earliest Deadline First). Compared to PF, 75% more load may be supported at equal QoS. Compared to EDF, 65% more may be supported.
Moreover, embodiments may increase the flexibility of the RRM and the applications. Unlike with current RRM schemes, delay may be traded off against data rate and applications can be informed about the RRM status. This may not only allow adjusting resource usage to the users' or the operator's demands. It may also allow RRM and applications to react to changed conditions (channel, load, traffic requirements, UE capabilities) and may thus open more efficient ways for RRM and application design.
Some other features or aspects will be described using the following non-limiting embodiments of apparatuses and/or methods and/or computer programs by way of example only, and with reference to the accompanying figures, in which
A basic structure of an RRM system 600 is illustrated in
Embodiments may provide the advantage that their main operation unit may not be data rate. Examples for objective functions and constraints in data rate can be found in J. Huang, V. G. Subramanian, R. Agrawal, and R. A. Berry “Downlink Scheduling and Resource Allocation for OFDM Systems”, IEEE Trans. Wireless Commun., vol. 8, pp. 288-296, 2009; Wen-Hsing Kuo and Wanjiun Liao, “Utility-based radio resource allocation for QoS traffic in wireless networks”, IEEE Trans. Wireless Commun., vol. 7, pp. 2714-2722, 2008; S. Shakkottai and A. L. Stolyar, “Scheduling Algorithms for a Mixture of Real-Time and Non-Real-Time Data in HDR”, Proc. Int. Teletraffic Congress (ITC-17), 2001; and F. Kelly, “Charging and rate control for elastic traffic”, Euro. Trans. Telecomms., vol. 8, pp. 33-37, 1997.
Examples for objective functions and constraints in bandwidth can be found in G. Bianchi and A. T. Campbell, “A programmable MAC framework for utility-based adaptive quality of service support”, IEEE Journal on Selected Areas in Commun., vol. 18, pp. 244-255, 2000. These objective functions and constraints may be applied in embodiments in addition to the context awareness. Conventional resource allocation schemes may not directly account for delay or error rate requirements of the UEs. Such requirements can be artificially transformed to an average data rate, which may become a poor statistical representation with bursty traffic. This makes it difficult to design rate-based resource allocation schemes that guarantee a certain delay or error rate.
Moreover, embodiments may additionally account for traffic requirements by either adjusting the utility function, for which examples can be found in G. Bianchi and A. T. Campbell, “A programmable MAC framework for utility-based adaptive quality of service support”, IEEE Journal on Selected Areas in Commun., vol. 18, pp. 244-255, 2000; and Wen-Hsing Kuo and Wanjiun Liao, “Utility-based radio resource allocation for QoS traffic in wireless networks”, IEEE Trans. Wireless Commun., vol. 7, pp. 2714-2722, 2008, or the QoS weights for a specific UE, which are exemplified for example in S. Shakkottai and A. L. Stolyar, “Scheduling Algorithms for a Mixture of Real-Time and Non-Real-Time Data in HDR”, Proc. Int. Teletraffic Congress (ITC-17), 2001.
By giving priorities to UEs but not to applications, these schemes may only prioritize all applications of one UE 100 at once. Consequently, they may not separately prioritize single applications or subsets of applications. When a UE 100 runs a multitasking operating system, this UE may run multiple applications in parallel whose demands fundamentally differ from each other, which may not be accounted for in conventional systems.
Furthermore, conventional RRM systems are not context-aware. Embodiments may provide the advantage that additional context information is considered, such as e.g. the load demands of each application currently running on the UE 100, the delay or error rate constraints of each application currently running on the UE 100, information on the window state, information on the memory consumption, information on the processor usage of the application running on the mobile transceiver 100, and/or the current location, speed, orientation of the UE and its distance to other users.
Having access to this context information, the schedulers of embodiments may optimize the resource allocation to the users' current context. Embodiments may use a context-aware approach and may therewith provide a QoS that is higher than the QoS of conventional concepts, and embodiments may achieve this while enabling a more efficient usage of resources.
A typical scheduling approach aims to maximize a utility function Uj(.) that is a function of PHY (as abbreviation for Physical Layer or Layer 1) data rate. Such a scheduler is illustrated in
The UE-specific weight wj can account for fairness and QoS constraints and it can be computed based on a global fairness parameter α and on a UE-specific QoS weight cj. Different fairness modes are typically reflected by the strictly concave utility function, as for example

Uj(r̄j)=cj·(r̄j)^α/α for α≠0, and Uj(r̄j)=cj·log(r̄j) for α=0,  (1)

based on the average PHY rate r̄j of an arbitrary user j.
It is easy to show that for α=1 the utility function (1) represents max-rate scheduling, and it was proven by F. Kelly, “Charging and rate control for elastic traffic”, Euro. Trans. Telecomms., vol. 8, pp. 33-37, 1997, that α=0 results in the widely-used proportional fair scheduling rule. As a first embodiment of a scheduling extension is based on proportional fairness, the basics of this scheme are detailed in the following, starting with the weight computation 604.
Proportional fair scheduling allocates the channel to the user with the maximum instantaneous PHY rate rj with respect to its average rate r̄j, i.e. with the maximum weight

wj=rj/r̄j  (3)

where the average rate is updated as the moving average

r̄j(t+1)=(1−β)·r̄j(t)+β·rj(t)  (4)

with β being a forgetting factor, a parameter between 0 and 1 chosen by the operator and determining the convergence rate.
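For illustration, a minimal sketch of such a proportional fair scheduling step is given below, with the weight as in (3) and the moving average as in (4); whether non-scheduled users are averaged with rate zero is an implementation assumption of this sketch.

```python
def pf_schedule(inst_rates, avg_rates, beta=0.01):
    """One proportional fair scheduling step (sketch).

    inst_rates: dict user_id -> instantaneous PHY rate rj [bit/s]
    avg_rates:  dict user_id -> average PHY rate (moving average) [bit/s]
    beta:       forgetting factor in (0, 1) chosen by the operator
    Returns the scheduled user and the updated averages.
    """
    # PF weight wj = rj / average rate, cf. (3): the user with the
    # largest weight gets the channel in this TTI.
    scheduled = max(inst_rates, key=lambda j: inst_rates[j] / avg_rates[j])

    # Moving average update, cf. (4).  Averaging non-scheduled users
    # with rate 0 is an implementation assumption of this sketch.
    new_avg = {j: (1.0 - beta) * avg_rates[j]
                  + beta * (inst_rates[j] if j == scheduled else 0.0)
               for j in avg_rates}
    return scheduled, new_avg


# Example: the user with the better rate relative to its own average wins.
sched, avg = pf_schedule({"u1": 5e6, "u2": 2e6}, {"u1": 4e6, "u2": 1e6})
print(sched)  # -> u2
```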
Embodiments can make use of the context-aware resource allocation (CARA) using context information (CI) either to improve the QoS or to efficiently allocate channel resources while keeping the applications' QoS constraints.
Moreover,
At the UE 100, the mobile transceiver apparatus 10, which may also be referred to as Context Extraction Module (CEM), collects and processes CI that is then transferred to the BS 200 within a transaction. At the BS 200, the CARA scheme carried out by the base station transceiver apparatus 20 uses the CI and further information to assign the resources to the users' applications. Then, control channels may be used to signal these assignments to the UEs 100.
The mobile transceiver apparatus 10, which may be realized as a CEM, can be integrated into the UE's 100 operating system (OS for abbreviation) or into the applications running on the UE 100. In other words, the means for extracting 12 can be adapted to extract the context information from the operating system of the mobile transceiver 100 or from the application being run on the mobile transceiver 100. Integration of such extracting into the OS can be realized as a module of the OS kernel. Such an implementation may also be supported for hardware drivers and it is supported by many OSs and related development frameworks. Embodiments implementing the CEM as a kernel module may provide the benefit that the CEM can directly communicate with kernel functions, such as an OS scheduler, a window manager, memory management, and a network stack, or with other kernel modules via system calls.
The system calls can be assumed to be public de-facto standards for each OS, i.e. accessible by the CEM 10, and can thus be used by the CEM 10. For instance, the CEM 10, i.e. the apparatus 10 for the mobile transceiver 100, can observe system calls from the processor scheduler 13a and the window manager to extract which applications are currently running in the OS 13 foreground while consuming processing cycles. Thereby, the CEM 10 extracts which applications currently require QoS priority at the base station transceiver 200.
Integrating the CEM or the apparatus 10 at application level can be unified via an application programming interface (API for abbreviation). Most OS vendors provide such APIs and publish their interfaces. In particular, the CEM 10 can be a part of the API's programming libraries, whereby only its interface (function or method call) needs to be known, while its source code may even be unknown. In a software implementation the CEM 10 object code may be statically or dynamically linked to the application. While this would simplify access to internal parameters of each application, it may complicate the observation of other applications or of OS functions. Functions or applications not linked to the CEM 10 library may be indirectly observed. This makes implementing the CEM 10 as a kernel module an embodiment with additional advantages.
In embodiments the apparatus 10 for the mobile transceiver 100 may further comprise means for composing a transaction data packet, the transaction data packet comprising data packets from the application and the context information. In other words, the transaction may correspond to a protocol data unit that includes all communication between an application 11 on the UE 100 and an application or server program running on another UE 300 or in a computing center 300, which are implementations of the data server 300, cf.
Thereby, transactions or transaction data packets may provide the interface and information to perform context-aware RRM while being transparent to the applications. An implementation example for a transaction is illustrated in
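Since the referenced figure is not reproduced here, the following sketch merely illustrates one possible representation of a transaction as a protocol data unit with the context information in a control section and the application's data packets in a payload section; the field names and the example values are assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TransactionContext:
    """Control section of a transaction PDU (illustrative fields)."""
    app_id: str
    foreground: bool        # user focus of the originating application
    qos_delay_ms: int       # delay constraint for the whole transaction
    total_size_bits: int    # unity information: size of all packets together
    priority: int


@dataclass
class Transaction:
    """A transaction: all data packets that belong together plus the
    context information needed to schedule them as one unit."""
    context: TransactionContext
    payload: List[bytes]    # the application's data packets

    def remaining_bits(self) -> int:
        return 8 * sum(len(p) for p in self.payload)


# Example: a web page whose main object and embedded objects form one
# transaction, so the scheduler can plan for the page as a whole.
page = Transaction(
    context=TransactionContext("browser", True, 2000, 8 * 300_000, 1),
    payload=[b"<html>...</html>", b"\x89PNG..."],
)
print(page.remaining_bits())
```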
Again, said context information may comprise information on a quality of service requirement of the application, priority information of the data packets associated with the application, information on a unity of a plurality of the data packets of the application, information on a load demand of the application, information on a delay or error rate constraint of the application, information on the window state, information on the memory consumption, information on the processor usage of the application running on the mobile transceiver 100, information on the current location, speed, or orientation of the mobile transceiver 100, and/or a distance of the mobile transceiver 100 to another mobile transceiver. Moreover, the means for scheduling 26 can be adapted to schedule the mobile transceiver 100 for transmission such that the quality of service requirement for the plurality of data packets to which the information on the unity refers is met.
The means for scheduling 26 can be adapted to determine a transmission sequence of a plurality of transactions. A transaction may correspond to a plurality of data packets for which the context information indicates unity and the plurality of transactions may refer to a plurality of applications being run by one or more mobile transceivers 100. The order of the sequence of transactions can be based on a utility function. The utility function can depend on a completion time of a transaction, which is determined based on the context information.
In other words, in embodiments context-aware RRM schemes may allocate a weight to each transaction and schedule the transaction with the highest weight. To follow the time-variant channel and application demands, it can be assumed that weights and schedule are periodically updated, as e.g. once per transmission time interval (TTI for abbreviation). Therewith, embodiments may schedule based on transactions, they may operate in time rather than in data rate and they may determine a beneficial or improved scheduling sequence before scheduling.
The first function or component 26c can be independent of the scheduler design and is described below. For the second function or component 26d, two embodiments are described that may integrate context awareness into a variety of existing schedulers. As has already been stated above, a sequence of transactions that aims to maximize the sum utility by using CI may be determined. In other words, the transmission sequence can be determined from an iteration over multiple different sequences of transactions, where the multiple different sequences correspond to different permutations of the plurality of transactions. The means for scheduling 26 can be adapted to determine the utility function for each of the multiple different sequences and can be further adapted to select, from the multiple different sequences, the transmission sequence corresponding to the maximum utility function.
More specifically, in the embodiment a constraint that one transaction always has to be processed as a whole may be used. This may substantially reduce the number of possible combinations and, thus, the computational complexity. The sequence determination component 26c in
It may, in a first step, start with an arbitrary transaction sequence S1={T11, T12, . . . } with Tij being the transaction at index j in sequence i. N is the total number of transactions, rj(t) is the estimated PHY capacity in bits that transaction j can transmit at time slot t, and Uj is the utility that transaction j achieves if it finishes at time t. Subsequently, in a second step, the determination component 26c may determine the total sum utility U1 of S1 as follows:
i.e. by summing up the utilities of all transactions in the sequence to obtain a sum utility per sequence. Next, in a third step, the sequence S1 is mutated to obtain sequence S2 with the following function:
Furthermore, the total sum utility U2 of S2 can be calculated in a fourth step as in the second step. Subsequently, in a fifth step, the procedure can be repeated for a predefined number of iterations k as follows:
In other words, the sequence with the maximum utility function among the permutations is searched for. The result is a sequence S1 of ordered transactions that approaches a value close to the maximal sum utility when the transactions are scheduled in this order. The maximum utility (i.e., the optimum) is not reached in practice, as the computation time, i.e., the number of iterations k, is limited. Nonetheless, even small k lead to substantial performance gains, which will be shown in the sequel by simulation results. Moreover, embodiments may repeat the above procedure if the estimated PHY capacity rj and the remaining bits Rj for all transactions j∈{1, . . . , N} cannot be assumed constant or semi-static any more. If either rj or Rj changes, the above steps may be repeated in embodiments.
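The equations referenced in the above steps are not reproduced here; the following sketch therefore only illustrates the described procedure under simplifying assumptions: a constant per-slot capacity per transaction, a sum utility obtained by evaluating each transaction's utility at its finish time, and a mutation that swaps two randomly chosen transactions.

```python
import random


def finish_times(sequence, rate, remaining_bits):
    """Finish time of each transaction when the sequence is served in
    order; a constant per-slot capacity rate[j] is assumed (simplification)."""
    t, finish = 0.0, {}
    for j in sequence:
        t += remaining_bits[j] / rate[j]   # slots needed to complete j
        finish[j] = t
    return finish


def sum_utility(sequence, rate, remaining_bits, utility):
    """Sum utility of a sequence: each transaction contributes its
    utility evaluated at its own finish time."""
    finish = finish_times(sequence, rate, remaining_bits)
    return sum(utility[j](finish[j]) for j in sequence)


def cara_sequence(transactions, rate, remaining_bits, utility, k=100):
    """Iterative search for a high-utility transaction order: start from
    an arbitrary sequence, swap two randomly chosen transactions, and keep
    the better sequence, repeated for k iterations (a local search; the
    optimum is generally not reached)."""
    best = list(transactions)
    best_u = sum_utility(best, rate, remaining_bits, utility)
    for _ in range(k):
        cand = best[:]
        a, b = random.sample(range(len(cand)), 2)
        cand[a], cand[b] = cand[b], cand[a]       # mutation: swap two entries
        cand_u = sum_utility(cand, rate, remaining_bits, utility)
        if cand_u > best_u:
            best, best_u = cand, cand_u
    return best, best_u


# Example: three transactions with constant rates and simple deadline utilities.
rate = {"a": 1e6, "b": 1e6, "c": 2e6}                      # bits per slot
bits = {"a": 2e6, "b": 8e6, "c": 2e6}                      # remaining bits
util = {j: (lambda d: (lambda t: 1.0 if t <= d else 0.2))(d)
        for j, d in {"a": 3, "b": 20, "c": 2}.items()}     # deadlines in slots
seq, u = cara_sequence(["a", "b", "c"], rate, bits, util, k=50)
print(seq, u)
```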
The means for scheduling 26 can be adapted to further modify the transmission sequence based on the supportable data rate for each transaction. For example, proportional fair scheduling may be integrated. A first embodiment may combine the obtained CARA sequence with a proportional fair (PF) scheduling concept and therewith support CQI-aware scheduling, which exploits CQI differences among the UEs.
The PF scheduling weight and moving average can be calculated, for example, as in (3) and (4), respectively. To combine the CARA sequence and the PF scheduling weight, it is assumed that the transactions are ordered as a sequence S1, as given above, such that each transaction can be addressed by an index j. Then, the embodiment may calculate the combined weight vj as follows:
where p is a so-called penalty factor. This free parameter allows trading off the context-optimized CARA sequence against the CQI-optimized PF weight. A penalty factor of p=0 means that pure PF scheduling is used, whereas p→∞ does not change the CARA sequence. Finally, the transaction with the largest weight vj is scheduled. Embodiments may therewith provide the further advantage that fine-tuning is enabled between CARA and PF, or generally between CARA and any other scheduling concept.
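The combination rule itself is given by the equation referenced above, which is not reproduced here. Purely as an illustration of the stated limiting behavior (p=0 yields pure PF, large p preserves the CARA order), the sketch below uses the assumed combination vj=wj−p·(j−1); this particular formula is not taken from the source.

```python
def combined_weights(pf_weights, penalty=1.0):
    """Combine the CARA order with PF weights (illustrative formula).

    pf_weights: PF weights wj listed in CARA sequence order (index 0 is
                the transaction the CARA sequence would schedule first).
    penalty:    trade-off parameter p; p = 0 falls back to pure PF,
                a large p effectively enforces the CARA order.

    NOTE: vj = wj - p * j is an assumed combination that satisfies the
    limiting behavior described in the text; it is not the source's formula.
    """
    return [w - penalty * idx for idx, w in enumerate(pf_weights)]


def schedule(pf_weights, penalty=1.0):
    """Schedule the transaction with the largest combined weight vj."""
    v = combined_weights(pf_weights, penalty)
    return max(range(len(v)), key=lambda j: v[j])


pf = [0.8, 1.5, 0.9]               # PF weights in CARA order
print(schedule(pf, penalty=0.0))   # -> 1: pure PF picks the best channel
print(schedule(pf, penalty=10.0))  # -> 0: a large p enforces the CARA order
```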
To have realistic interference conditions, one tier of interfering base stations is placed around the evaluated cell. These base stations are assumed to be constantly transmitting on all resources. 20 UEs are dropped into the serving area of the evaluated cell uniformly. The base stations and the mobile devices are equipped with isotropic antennas. All base stations transmit with a constant power equally distributed over all resources. For each link, the path loss is fixed during the whole simulation, which allows omitting handovers. Shadowing and fast fading fluctuate according to a fixed velocity to resemble the variations of the radio channel in the time scale of seconds. The details of the radio propagation model are given in the following table.
The scheduler operates at an interval of 1 ms. For simplicity, it can only allocate the whole bandwidth to a single user. The effects of frequency selectivity are well understood in the literature and are therefore not regarded here. The link adaptation is idealized by the Shannon formula, as it is not the focus here. The SINR value is clipped at 20 dB to avoid unrealistically good channel conditions. Transport protocols (e.g. Transmission Control Protocol, TCP) are not considered. It is assumed that all data of a transaction is available at the base station immediately after it has been sent by the server. This approximates the behavior of a system which is equipped with a TCP proxy in the base station. The traffic model is configured on the basis of the NGMN (abbreviating Next Generation Mobile Networks) traffic model, cf. NGMN Alliance, “Radio access performance evaluation methodology,” available online at http://www.ngmn.org/, June 2007.
Furthermore, two traffic classes are selected, which are usually served in the best effort bearer: web surfing (HTTP as abbreviation for Hypertext Transfer Protocol) and file downloads (FTP abbreviating File Transfer Protocol). The HTTP model describes the composition of a web page. It consists of a main object (HTML text, HTML abbreviating Hypertext Markup Language) and a random number of embedded objects (pictures, JavaScript, etc.). The sizes of the main objects and of the embedded objects follow truncated lognormal distributions. The number of embedded objects per page follows a truncated Pareto distribution with a mean of 5.64 and a maximum of 53 (for more details see the above referenced document). All objects of a web page constitute a single transaction.
For simplification, the total size of the web page is calculated (sum of the main object size and all embedded object sizes) and it is assumed that the whole page is transmitted as a single object. Unless mentioned otherwise, aggregated traffic consisting of 90% HTTP and 10% File Transfer Protocol (FTP), corresponding to 20% and 80% of the data volume, respectively, is used.
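As a rough illustration of this simulation setup, the following sketch combines the idealized Shannon link adaptation with 20 dB SINR clipping and a simplified web-page size model; the bandwidth value and the lognormal/Pareto parameters are placeholders, not the calibrated NGMN values.

```python
import math
import random


def link_rate(sinr_db, bandwidth_hz=10e6, sinr_cap_db=20.0):
    """Idealized link adaptation: Shannon capacity over the whole
    bandwidth, with the SINR clipped at 20 dB to avoid unrealistically
    good channel conditions (the bandwidth value is only an example)."""
    sinr_lin = 10.0 ** (min(sinr_db, sinr_cap_db) / 10.0)
    return bandwidth_hz * math.log2(1.0 + sinr_lin)      # bit/s


def embedded_object_count(shape=1.1, scale=2.0, maximum=53):
    """Number of embedded objects: truncated Pareto capped at 53.
    shape/scale are placeholders; the NGMN document specifies the
    calibrated values that yield the stated mean of 5.64."""
    sample = scale * random.paretovariate(shape)
    return min(int(sample - scale), maximum)


def object_size_bits(mu=9.0, sigma=1.4, max_bytes=2_000_000):
    """Object size: truncated lognormal (mu/sigma are placeholders,
    not the NGMN parameters)."""
    return 8 * min(int(random.lognormvariate(mu, sigma)), max_bytes)


def web_page_transaction_bits():
    """Total size of one web page treated as a single transaction:
    main object plus all embedded objects (simplification from the text)."""
    return object_size_bits() + sum(
        object_size_bits() for _ in range(embedded_object_count()))


# Example: transmission time of one page at 15 dB SINR.
print(web_page_transaction_bits() / link_rate(15.0), "s")
```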
In embodiments, context information from the application layer may not be directly exploited for scheduling. CARA may be enabled by giving each transaction a utility function. The utility may depend on the requirements of a transaction and on how these requirements are met. The process flow for utility functions can be as follows:
Context information like the foreground/background status of an application or the application type is used to derive the requirements for the associated transactions. These requirements may then allow deriving utility functions that give a value to the transaction in dependence on its finish or completion time. This derivation, giving the shape and parameters of the respective utility function, can be supported by user experience studies.
E.g., when surfing the web, users are happy with fast page loads, but also tolerate a certain delay, cf. J. Nielsen, “Website response times,” http://www.useit.com/alertbox/response-times.html, 2010, link verified on Feb. 3, 2011. This can then be expressed in a utility function as described below. Utility functions may express the latency requirements of transactions. This allows the scheduler to decide which transaction should be scheduled when. Transactions with relaxed latency requirements can be shifted in time to increase the multi-user diversity and channel awareness. Utility is typically defined as a function of the data rate, which can be extended by embodiments. Embodiments may express the value of a transaction for the user. For most transactions, e.g. downloading a web page, this value depends on the finish time only. The value of all transactions can be defined to be in the range 0 to 1, where 0 means no value (delayed infinitely) and 1 means optimal value.
If the transaction is finished earlier than expected, this can only slightly increase its value. If it is delayed far beyond the point at which the typical user gives up waiting for it, the value cannot become worse, because most of the users are not waiting anymore. Therefore, the function of the value depending on the finish time has an S-shape. The logistic function may be chosen in an embodiment.
It is assumed that the transaction arrives at the scheduler at time tstart. All other points in time are defined as durations relative to tstart. The utility of a transaction finished in the time expected by the user can be defined to be uexp. To allow for a small increase of the utility if the network's performance exceeds the user's expectation, uexp is less than 1. The expected finish time of a transaction depends on its size, on the type of application, and on the user's context. It is assumed that the user has purchased a certain data rate rmax from his operator. The user is ignorant of his current radio channel and therefore expects this data rate to be available at all times. The expected data rate rexp is defined in relation to the purchased data rate:
rexp=f·rmax
The user requests that foreground transactions are served with the full data rate (f=1). For background transactions, this requirement is relaxed and the user is satisfied with a fraction of the rate (f<1). The duration from the start of the transaction to the expected finish time is then determined by

dexp=s/rexp

where s is the size of the transaction in bits and rexp is the expected data rate in bits per second. The duration from the start to the inflection point of the logistic curve (uinflection=0.5) can be modeled to be a multiple of the expected finish duration:
dinflection=x·dexp
The resulting utility function can be given by
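Since the referenced utility-function equation is not reproduced here, the sketch below shows one possible logistic parameterization built from the quantities defined above: it passes through uexp at the expected finish duration dexp and through 0.5 at the inflection duration dinflection=x·dexp; the concrete parameter values and this particular parameterization are assumptions.

```python
import math


def logistic_utility(u_exp=0.9, d_exp=2.0, x=3.0):
    """S-shaped utility over the finish duration d (seconds after tstart).

    The curve passes through u_exp at the expected finish duration d_exp
    and through 0.5 at the inflection duration d_inflection = x * d_exp.
    The numerical values and this parameterization are assumptions."""
    d_inflection = x * d_exp
    # Choose the steepness a such that utility(d_exp) = u_exp holds.
    a = math.log(1.0 / u_exp - 1.0) / (d_exp - d_inflection)

    def utility(d):
        # Close to 1 for very fast completion, 0.5 at the inflection
        # point, and approaching 0 for very long delays.
        return 1.0 / (1.0 + math.exp(a * (d - d_inflection)))

    return utility


u = logistic_utility()
print(u(1.0), u(2.0), u(6.0), u(20.0))  # early, expected, inflection, late
```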
Further embodiments may use utility-based resource allocation. These embodiments directly aim to maximize the overall utility U1 of the determined CARA sequence by removing the constraint of a fixed sequence S1. To do so, in an embodiment it can be determined whether it is advantageous to schedule a transaction j≠1, i.e. a transaction other than the first one within sequence S1.
This is illustrated in
and the expected change of the finish time of Transaction 2
When the utility function is linearized at the expected finish time, the utility difference for this switching operation can be calculated
Then, it can be decided whether the switching operation is advantageous in terms of total utility. For this, the utility gain ΔU can be compared for all transactions and the transaction with the highest gain can be scheduled. It is to be noted that embodiments may use channel estimation or channel prediction means for determining the channel quality or supportable data rates for transactions in the future. The channel estimation and/or prediction means can be adapted to base the channel estimation and/or prediction on a current channel estimate, a channel estimation history, i.e. former channel estimates, a known propagation condition or propagation loss, statistical knowledge on the radio channel, etc.
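As the equations for the expected finish-time changes and the linearized utility difference are not reproduced above, the following sketch only illustrates the idea under simplifying assumptions: constant estimated rates, transactions served as a whole, and pairwise switches with the head of the sequence evaluated via numerically linearized utilities.

```python
def numerical_slope(utility, t, eps=1e-3):
    """Slope of the utility at finish time t (the linearization point)."""
    return (utility(t + eps) - utility(t - eps)) / (2.0 * eps)


def pick_transaction(sequence, rate, remaining_bits, utility):
    """Decide whether serving a transaction other than the head of the
    CARA sequence increases the total utility (sketch).

    Simplifying assumptions: constant estimated rates, transactions are
    served as a whole, and only pairwise switches with the head of the
    sequence are evaluated using linearized utilities."""
    # Expected finish times when the sequence is served in order.
    finish, t = {}, 0.0
    for j in sequence:
        t += remaining_bits[j] / rate[j]
        finish[j] = t

    head = sequence[0]
    d_head = remaining_bits[head] / rate[head]     # service time of the head
    best_j, best_gain = head, 0.0
    for j in sequence[1:]:
        d_j = remaining_bits[j] / rate[j]
        # Transaction j would finish d_head earlier, the head d_j later.
        gain = -numerical_slope(utility[j], finish[j]) * d_head
        loss = numerical_slope(utility[head], finish[head]) * d_j
        if gain + loss > best_gain:
            best_j, best_gain = j, gain + loss
    return best_j


# Example with two transactions and simple decreasing utilities.
util = {"big": lambda t: max(0.0, 1 - t / 60),
        "small": lambda t: max(0.0, 1 - t / 30)}
rate = {"big": 1e6, "small": 1e6}
bits = {"big": 20e6, "small": 1e6}
print(pick_transaction(["big", "small"], rate, bits, util))  # -> small
```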
As for the first embodiment, it is straightforward to extend this second embodiment to frequency-selective scheduling by evaluating the rates for each sub-band index separately.
So far, embodiments have been discussed in which the context information is provided by the mobile transceiver apparatus 10. As has already been discussed, the context information may also be obtained by the base station transceiver apparatus 20, e.g. by packet inspection, or by a corresponding data server apparatus 30.
Moreover, embodiments may provide a computer program having a program code for performing one of the above methods when the computer program is executed on a computer or processor.
A person of skill in the art would readily recognize that steps of various above-described methods can be performed by programmed computers. Herein, some embodiments are also intended to cover program storage devices, e.g., digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, wherein said instructions perform some or all of the steps of said above-described methods. The program storage devices may be, e.g., digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. The embodiments are also intended to cover computers programmed to perform said steps of the above-described methods.
The description and drawings merely illustrate the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
Functional blocks denoted as “means for . . . ” (performing a certain function) shall be understood as functional blocks comprising circuitry that is adapted for performing or to perform a certain function, respectively. Hence, a “means for s.th.” may as well be understood as a “means being adapted or suited for s.th.”. A means being adapted for performing a certain function does, hence, not imply that such means necessarily is performing said function (at a given time instant).
The functions of the various elements shown in the Figures, including any functional blocks labeled as “means”, “means for extracting”, “means for communicating”, “means for providing”, “means for composing”, “means for receiving”, “means for obtaining”, “means for scheduling”, “means for deriving”, “means for transmitting”, “means for controlling”, etc., may be provided through the use of dedicated hardware, such as “a performer”, “an extractor”, “a communicator”, “a provider”, “a composer”, “a receiver”, “an obtainer”, “a scheduler”, “a deriver”, “a transmitter”, “a controller”, etc. as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non volatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the Figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.