This application claims priority under 35 USC §119 or §365 to Great Britain Patent Application No. 1320667.7 entitled “Resource Allocation” filed Nov. 22, 2013 by Zhao et al., the disclosure of which is incorporated in its entirety.
There exist communication systems that allow the user of a device, such as a personal computer or mobile device, to conduct voice or video calls over a packet-based computer network such as the Internet using various applications. Such communication systems include voice or video over Internet Protocol (VoIP) systems. These systems are beneficial to the user as they are often of significantly lower cost than conventional fixed line or mobile cellular networks. This may particularly be the case for long-distance communication. To use a VoIP system, the user installs and executes client software on their user device. The client software sets up the VoIP connections as well as providing other functions such as registration and authentication. In addition to voice communication, the client may also set up connections for other communication media such as instant messaging (“IM”), SMS messaging, file transfer and voicemail. All of these communications utilise the exchange of communication event data for effecting communication. Communication event data may include at least one of audio data, video data and information related to the content of a communication event (such as a video or audio call).
During a real time communication event, resources in the user device are allocated for handling the communication event, for example, processing and memory resources for handling incoming and outgoing data and managing a network interface of the user device. Where the communication event is two way, resources are required for receiving data from another user device through the network (receiving downlink data) and for transmitting data to the other user device through the network (transmitting uplink data). Each user device has constrained resources, which may be required for other activities as well as for managing the communication event. A resource manager allocates resources to receiving downlink communication event data and to transmitting uplink communication event data. The resources could be processing resources of the user equipment, network bandwidth and/or any other resource for handling communication event data in the user equipment.
According to a first aspect, there is provided a resource allocation module configured to: allocate a first set of communication event resources for receiving communication event data at a computer device; allocate a second set of communication event resources for transmitting communication event data from the computer device; and reallocate resources from one of said sets to the other of said sets in dependence on an indication of the relative importance of the received communication event data compared to the transmitted communication event data. The computer device can be a user device and/or a device implemented in a network.
According to another aspect described herein, there is provided a method implemented by an application executed on a device, the method comprising the operations of: allocating a first set of communication event resources for receiving communication event data at a computer device; allocating a second set of communication event resources for transmitting communication event data from the computer device; and reallocating resources from one of said sets to the other of said sets in dependence on an indication of the relative importance of the received communication event data compared to the transmitted communication event data.
According to another aspect described herein, there is provided a computer program product, the computer program product being embodied on a computer readable medium and configured so as when executed on a processor of a device comprising a network interface to: allocate a first set of communication event resources for receiving communication event data at the computer device; allocate a second set of communication event resources for transmitting communication event data from the computer device; and reallocate resources from one of said sets to the other of said sets in dependence on an indication of the relative importance of the received communication event data compared to the transmitted communication event data.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
For an understanding of the following and to show how the same may be put into effect, reference will now be made, by way of example, to the following drawings, in which:
Embodiments will now be described by way of example only.
In an audio or video call the quality of different incoming audio and video streams can be of different importance to different users. An example is when a business user is talking to his customer. The perceived quality for the customer is of higher value than the perceived quality for the business user. Another example is when a speaker is giving a presentation in a multi-party call. The video and/or audio quality from the speaker may be more important than the video and/or audio quality from the mainly “listening-only” participants.
Thus, in order to maximize the user's opinion score (which is a metric indicative of the quality experienced by the user), the system resources such as CPU, bandwidth, etc. can be unequally distributed between the incoming streams. For instance, the incoming stream from the most active participant in a multiparty call may get more resources than the stream from the least active participant. Another example is to assign more resources to a stream that is actively selected by the user. Another example is for one user to configure his client so that the quality of outgoing streams is optimized more than that of incoming streams. The following is focussed on configuring one-to-one and multiparty audio and video calls in such an asymmetric fashion.
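By way of illustration only, the following minimal Python sketch shows one way such an unequal distribution could be computed; the stream names, weights and budget value are hypothetical and do not form part of the above description.

```python
# Illustrative sketch: dividing a resource budget among incoming streams in
# proportion to hypothetical importance weights (not part of the disclosure).

def distribute_budget(total_budget, importance_weights):
    """Split total_budget across streams in proportion to their weights."""
    total_weight = sum(importance_weights.values())
    return {stream: total_budget * weight / total_weight
            for stream, weight in importance_weights.items()}

# Example: the active speaker's stream is weighted above the listen-only participants.
weights = {"speaker": 3.0, "listener_a": 1.0, "listener_b": 1.0}
print(distribute_budget(1000.0, weights))  # e.g. kbps of downlink bandwidth
```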
Consider in
The communication system 100 shown in
The user device 104 executes an instance of a communication client, provided by a software provider associated with the communication system 100. The communication client is a software program executed on a local processor in the user device 104. The client performs the processing required at the user device 104 in order for the user device 104 to transmit and receive data over the communication system 100.
The user device 110 also executes, on a local processor, a communication client which corresponds to the communication client executed at the user device 104. The client at the user device 110 performs the processing required to allow the user 108 to communicate over the network 106 in the same way that the client at the user device 104 performs the processing required to allow the user 102 to communicate over the network 106. The user devices 104 and 110 are endpoints in the communication system 100.
As mentioned above, the user terminal 200 may be, for example, a mobile phone, a tablet, a personal digital assistant (“PDA”), a personal computer (“PC”) (including, for example, Windows™, Mac OS™ and Linux™ PCs), a gaming device or other embedded device able to connect to the network 100 via the network controller 108. The user terminal 200 is arranged to receive information from and output information to a user of the user terminal 200.
The user terminal 200 comprises a central processing unit (“CPU”) 202, to which is connected a display 204 such as a screen or touch screen, input devices such as a keypad 206 and a camera 208. An output audio device 210 (e.g. a speaker) and an input audio device 212 (e.g. a microphone) are connected to the CPU 202. The display 204, keypad 206, camera 208, output audio device 210 and input audio device 212 may be integrated into the user terminal 200 as shown in
The rate at which data can be transmitted over the network 100 from a user device is limited by the uplink bandwidth available to the user device. Similarly, the rate at which data can be transmitted over the network 100 to a user device is limited by the downlink bandwidth available to the user device. The present disclosure considers a reallocation between uplink and downlink bandwidth as described in the following. The uplink bandwidth of a user device is the range of frequencies over which the user device is currently configured to transmit event data. The downlink bandwidth of a user device is the range of frequencies over which the user device is currently configured to receive event data.
The user device 110 of the client 108 is depicted in
The user device 110 may also be configured to allocate computing resources in dependence on indications so as to prioritise the presentation of information to a user 108 of the user device 110 (for example, audio data presented via a loudspeaker and video data presented via a display screen) relative to information collected from the user 108 (for example, audio data from a microphone and video data from a camera). The reverse configuration (i.e. prioritising collection of information over the presentation of information) is also possible.
The logic for resource allocation can be located in the client engine layer 220 of the allocating user device (e.g. in user device 110 in the present embodiment). However, in some embodiments (discussed later), a server located in a network 106 or router 107 may comprise the logic for resource allocation.
The user device may determine to make such an adjustment following the receipt of a direct or indirect indication from the client 108 of the user device 110 of a relative prioritisation of audio to video data. In other words, the user device may determine how to reallocate resources (such as computing resources) based on an indication of the relative priority of the uplink and downlink data channels. The user device may determine how to make such an adjustment following the receipt of an indication in a control signal received from another device, such as the control signal transmitted by the user device 104 in
Commonly, a user device has a certain number of resources, such as processing resources, allocated bandwidth, etc. In this context, allocated bandwidth can be bandwidth allocated by an external resource, such as a WiFi network or WLAN. Some of these resources may be allocated for effecting the communication of communication event data for video and/or audio calls. Commonly, these communication resources are allocated by the user device to uplink communications with at least one other user device and to downlink communications with the at least one other user device to achieve equal quality outcomes in the up and down links (option (1) discussed earlier). The following discloses embodiments in which the number of resources assigned for uplink communications is different to the number of resources assigned for downlink communications. The asymmetric resource allocation is determined in dependence on an indication of the relative importance of the uplink and downlink communication paths to at least one user of one or more of the user devices. When an indication is received from multiple user devices/multiple users, the indications may be aggregated to form a single indication for determining how resources may be reallocated in at least one of those devices.
All of the following embodiments are arranged so that an indication provided by a user and/or a user device on the relative importance of an uplink compared to a downlink for the user can be used to influence the ratio of allocated uplink to downlink resources to achieve different (asymmetric) quality outcomes. The indication may be implicit or explicit. This allows for link quality in a particular direction to be improved, which increases the quality of communications for a designated user.
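As a non-limiting illustration of how such an indication could influence the ratio of allocated uplink to downlink resources, the following sketch assumes a single numeric indication in the range [-1, 1], a symmetric default split and arbitrary bounds; none of these specifics are taken from the embodiments themselves.

```python
# Illustrative sketch: shifting a fixed resource pool between uplink and
# downlink according to an importance indication in [-1, 1], where negative
# values favour the downlink and positive values favour the uplink.
# The 50/50 starting point and the 10%..90% bounds are assumptions.

def split_resources(total, indication):
    uplink_share = 0.5 + 0.4 * indication            # symmetric split when indication == 0
    uplink_share = min(max(uplink_share, 0.1), 0.9)  # keep both links serviceable
    return total * uplink_share, total * (1.0 - uplink_share)

uplink, downlink = split_resources(total=2000, indication=-0.5)
print(uplink, downlink)  # downlink receives the larger share
```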
In a first embodiment, illustrated with reference to
At 401, the first user device 104 is configured to determine the type and number of adjustable resources it has available for handling communication event data. In this context, “handling” includes at least those resources for receiving communication event data, transmitting communication event data, processing communication event data and presenting communication event data to a user of the first user device 104. In this context, “adjustable resources” means those resources over which the first user device 104 has control to reallocate.
At 402, the first user device 104 is configured to allocate a first number of resources to uplink communication event data transmissions to the second user device 110. This allocation may be made using a default allocation mechanism, such as allocating half the number of available resources to the uplink communication event data transmissions. Alternatively, this allocation may be made using information on the current or recent state of the uplink conditions (such as interference).
At 403, the first user device 104 is configured to allocate a second number of resources to downlink communication event data transmissions from the second user device 110. This allocation may be made using a default allocation mechanism or using information about the downlink conditions, as described above in relation to operation 402.
At 404, the first user device 104 is configured to determine whether or not to reallocate the currently allocated resources. This may be determined separately in respect of each type of resource or a single decision may be made that applies to every type of resource. The determination can be based on a plurality of criteria, all of which indicate the relative importance placed by a user on particular streams of communication event data on the uplink and the downlink.
One criterion is that an indication has been received from the second user device 110 indicating that more resources are to be provided to the uplink than the downlink (or vice versa). This indication could be based on an explicit user instruction to the second user device 110 instructing the reallocation of resources of the first user device 104 in a specified way. This indication could be based on implicit information on the relative importance between the uplink and downlink streams of communication event data. For example, implicit information could encompass whether more audio information is currently being detected in the uplink or the downlink direction, whether any windows through which image data from the communication event data is being displayed to a user have been minimised or otherwise covered up, and whether the user of the second device is currently detected in the field of view of a camera of the device.
Another criterion is an indication provided by the first user device 104. Like the indication received from the second user device 110, this indication could be based on implicit information and/or an explicit instruction from the user of the first user device 104 to reallocate resources currently allocated to the uplink communication event data to downlink communication event data (or vice versa).
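Purely for illustration, the sketch below shows one possible way of combining implicit signals of the kind mentioned above into a single indication; the particular signals chosen, their weightings and the scoring convention are assumptions rather than part of the described embodiments.

```python
# Illustrative sketch: combining implicit signals into a single indication.
# Positive scores favour the uplink, negative scores favour the downlink.

def implicit_indication(remote_audio_active, local_audio_active,
                        video_window_minimised, user_in_camera_view):
    score = 0.0
    if remote_audio_active and not local_audio_active:
        score -= 0.5                  # far end is talking: favour the downlink
    if local_audio_active and not remote_audio_active:
        score += 0.5                  # near end is talking: favour the uplink
    if video_window_minimised:
        score += 0.3                  # incoming video is not being watched
    if not user_in_camera_view:
        score -= 0.3                  # outgoing video shows no user
    return max(-1.0, min(1.0, score))

print(implicit_indication(True, False, False, True))  # -0.5: downlink favoured
```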
If it is determined that the resources are not to be reallocated, operation 404 is repeated at a later time.
If it is determined that the resources are to be reallocated, operation 405 is performed, in which the resources are reallocated between the uplink and the downlink in dependence on the result of the determination operation. The first user device 104 may be configured to, at any one time, reallocate the uplink/downlink resource ratio of only one type of resource between the uplink and downlink communication event data. Alternatively, the first user device 104 may be configured to reallocate the uplink/downlink resource ratios of multiple types of resources between the uplink and the downlink communication event data. The first user device 104 could be configured to determine which, and how many, resources to reallocate.
Operations 404 to 405 are subsequently repeated until the call ends.
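The control flow of operations 401 to 405 could, for example, be sketched as follows; the fixed 10% reallocation step, the polling interval and the helper callbacks are hypothetical details introduced only to make the sketch self-contained.

```python
# Sketch of the control flow of operations 401-405, assuming resources are
# tracked as simple numeric quantities and that each reallocation moves a
# fixed 10% step towards the more important link. Helper names are hypothetical.
import time

def run_call(total_units, get_indication, call_active, step=0.1, poll_s=1.0):
    uplink = downlink = total_units / 2          # 402, 403: default 50/50 split
    while call_active():                         # repeat until the call ends
        indication = get_indication()            # 404: gather importance indication
        if indication > 0 and downlink >= total_units * step:
            uplink += total_units * step         # 405: shift resources to the uplink
            downlink -= total_units * step
        elif indication < 0 and uplink >= total_units * step:
            uplink -= total_units * step         # 405: shift resources to the downlink
            downlink += total_units * step
        time.sleep(poll_s)                       # 404 is repeated at a later time
    return uplink, downlink

# Example with stub callbacks: one iteration favouring the downlink.
ticks = iter([True, False])
print(run_call(100, lambda: -1, lambda: next(ticks), poll_s=0.0))
```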
The embodiment described in relation to
The principle outlined above in relation to
In this embodiment, the first user device 104 is configured to execute the same process operations described above in relation to
In the above described embodiments, multiple user devices are described as providing a relative indication of the importance of the uplink and downlink resources. If multiple indications are received from different user devices, the first user device 104 is configured to determine how to reallocate resources between the uplink and downlink in dependence on these multiple indications. This may include weighting each indication in dependence on its source. In this way, indications from, for example, a call moderator may affect the determination of how to reallocate resources more than indications received from regular users. The call moderator may be determined at set-up.
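One possible aggregation of multiple weighted indications, in which a call moderator's indication counts more heavily than those of regular users, is sketched below; the weight of 3.0 and the device names are assumptions made only for illustration.

```python
# Illustrative sketch: aggregating indications from several user devices,
# weighting a call moderator more heavily than regular participants.

def aggregate_indications(indications, roles, moderator_weight=3.0):
    """indications: device -> value in [-1, 1]; roles: device -> 'moderator'/'user'."""
    weighted_sum, weight_total = 0.0, 0.0
    for device, value in indications.items():
        w = moderator_weight if roles.get(device) == "moderator" else 1.0
        weighted_sum += w * value
        weight_total += w
    return weighted_sum / weight_total if weight_total else 0.0

indications = {"device_104": 1.0, "device_110": -1.0, "device_502": -1.0}
roles = {"device_104": "moderator"}
print(aggregate_indications(indications, roles))  # 0.2: moderator's preference dominates
```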
The principles described above may also be extended to the case where the reallocation of resources is performed by a central reallocation unit. In a first embodiment, the central reallocation unit may be based in network 106 and/or routing node 107 in
In the above described embodiments, explicit user instructions are described. To assist a user in determining whether or not to optimise an uplink over a downlink or vice versa, the user may be provided with an indication of the quality of communications over the uplink and an indication of the quality of communications over the downlink that are to be optimised. These quality indications can be presented to the user via a display screen of the user device.
The methods described above can be implemented in software (e.g. in the clients described above), or in hardware. More precisely, the methods described above can be implemented in a computer program product comprising computer readable instructions for execution by computer processing means (e.g. a CPU) at a node of the communication system (e.g. the user terminal 104 or the user terminal 110).
In all of the above described embodiments, the resources of the first user equipment may be at least one of: processing resources of the first user device; network bandwidth; and any other resource for handling communication event data in the user equipment.
In all of the above described embodiments, the ratio of the resources of a first user device allocated to an uplink to the resources of the first user device allocated to a downlink is varied in dependence on an indication from at least one user device of the relative importance of at least one data stream on the uplink or downlink. This reallocation can be performed periodically or aperiodically. The reallocation may be triggered to start only when an explicit instruction from a user of a user device participating in the call has been received. The explicit instruction could indicate to the device that determines the reallocation (i.e. either the first user equipment or the central allocation unit) how the resource allocation may be changed. Alternatively, the explicit instruction could simply indicate to the device that determines the reallocation that a determination is requested. The reallocation determining device may then retrieve information indicative of the relative importance of the uplink communication event data to the downlink communication event data for making this determination. The determination may also be performed, on occasion, without any explicit user input or instruction (e.g. as in the case of the implicit indication described in relation to
As mentioned above, the quality of a stream of communication data may be modified by modifying at least one of: the frame-rate, the resolution and the source coding quality of the stream. The modification may be made so as to prioritise at least one stream of communication event data transmitted or received by a device over other streams of communication event data transmitted or received by that device.
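For illustration, the following sketch maps an allocated bandwidth figure to a frame-rate and resolution for a video stream; the thresholds are arbitrary and serve only to show how a stream given fewer resources ends up with lower quality settings.

```python
# Illustrative sketch: choosing frame-rate and resolution from the bandwidth a
# stream has been allocated. The thresholds are assumptions, not disclosed values.

def choose_video_settings(allocated_kbps):
    if allocated_kbps >= 1500:
        return {"resolution": (1280, 720), "fps": 30}
    if allocated_kbps >= 600:
        return {"resolution": (640, 360), "fps": 30}
    if allocated_kbps >= 250:
        return {"resolution": (640, 360), "fps": 15}
    return {"resolution": (320, 180), "fps": 10}   # lowest-priority stream

for kbps in (2000, 800, 300, 100):
    print(kbps, choose_video_settings(kbps))
```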
It will be appreciated that in the example described above the first user device 104 may require a larger share of the total available bandwidth than the second user device 110 and/or the third user device 502 (i.e. may need to transmit and/or receive data at a higher rate) based on the types of activity performed by the first user device 104.
In other implementations, the resource reallocation unit (embodied in either a user device or in a network entity upstream of a user device) may determine the data rate limits (and thus the bandwidth allocations) for each of the plurality of user devices based on the user devices' level of demand for bandwidth.
For example, when the request for bandwidth received from each of the plurality of users comprises an indication of the activity to be handled (for example, a voice call, a video call, a file transfer etc.) in addition to an indication that bandwidth is required, the resource reallocation unit (embodied in either a user device or in a network entity upstream of a user device) is able to determine, using suitable processing logic, the data rate (and thus the bandwidth needed to provide the determined data rate) required for the particular activity. The resource reallocation unit may be configured with upload and/or download rates required for certain activities, and thus be able to determine an appropriate uplink and/or downlink data rate limit (and thus an appropriate upload and/or download bandwidth) based on detecting the activity to be performed by the application. The resource reallocation unit is able to obtain a global view of the demand for bandwidth from each of a plurality of applications requiring usage of the total available bandwidth and determine a bandwidth allocation for each of the plurality of applications accordingly.
The resource reallocation unit is also able to obtain the global view of the demand for bandwidth from each of a plurality of applications requiring usage of the total available bandwidth when the request for bandwidth from each application comprises an indication of a required upload and/or download data rate (i.e. connection speed). Thus, the resource reallocation unit is able to determine a bandwidth allocation for each of the plurality of applications based on the required upload and/or download data rates.
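A minimal sketch of such an allocation, assuming that an over-subscribed total is simply scaled down proportionally across applications (one possible policy, not necessarily the disclosed behaviour), is given below.

```python
# Illustrative sketch: turning per-application rate requests into data rate
# limits that respect the total available bandwidth. Proportional scaling when
# over-subscribed is an assumption made for illustration only.

def allocate_bandwidth(requests_kbps, total_kbps):
    """requests_kbps: application -> requested data rate; returns per-app limits."""
    demand = sum(requests_kbps.values())
    if demand <= total_kbps:
        return dict(requests_kbps)                 # every application gets its request
    scale = total_kbps / demand                    # over-subscribed: scale down
    return {app: rate * scale for app, rate in requests_kbps.items()}

requests = {"video_call": 1500, "file_transfer": 2000, "voice_call": 64}
print(allocate_bandwidth(requests, total_kbps=2000))
```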
The inventors have recognised that such a symmetric allocation is not always desirable, depending on the type of relationship and the type of communication between the different users. The inventors have therefore proposed a mechanism for reallocating resources for communication event data.
References in the above to a bandwidth may include references to a range of frequencies (Hz), to a connection speed (data rate in bps), or to both.
Modern audio and video processing components (such as encoders, decoders, echo cancellers, noise reducers, anti-aliasing filters etc.) can typically achieve higher output audio/video quality by employing more complex audio/video algorithmic processing operations. These operations are typically implemented by one or more software applications executed by a processor (e.g. CPU) of a computing system. The application(s) may comprise multiple code components (for instance, separate audio and video processing components), each implementing separate processing algorithms. Processor resource management in the present context pertains to adapting the complexity of such algorithms to the processing capabilities of such a processor. As used herein, “complexity” of a code component implementing an algorithm refers to the temporal algorithmic complexity of the underlying algorithm. As is known in the art, the temporal complexity of an algorithm is an intrinsic property of that algorithm which determines the number of elementary operations required for that algorithm to process any given input, with more complex algorithms requiring more elementary processing operations per input than their less sophisticated counterparts. As such, this improved quality comes at a cost, as the more complex, higher-quality algorithms either require more time to process each input or require more processor resources (and thus result in higher CPU loads) if they are to process input data at a rate which is comparable to that of less complex, lower-quality processing algorithms.
For “real-time” data processing, such as processing of audio/video data in the context of audio/video conferencing implemented by real-time audio/video code components of a communication client application, quality of output is not the only consideration: it is also strictly necessary that these algorithmic operations finish in “real-time”. As used herein, in general terms, “real-time” data processing means processing of a stream of input data at a rate which is at least as fast as the input rate at which the input data is received (i.e. such that if N bits are received in a millisecond, processing of these N bits must take no longer than one millisecond); “real-time operation” refers to processing operations meeting this criterion. As such, allowing the more complex algorithms more processing time is not an option as the algorithm has only a limited window in which to process N bits of the stream, that window running from the time at which the N bits are received to the time at which the next N bits in the stream are received; the algorithmic operations needed to process the N bits all have to be performed within this window and cannot be deferred if real-time operation is to be maintained. Therefore more processor resources are required by a code component as its complexity increases if it is to maintain real-time operation. Further, if CPU load is increased beyond a certain point—for instance, by running unduly complex audio/video processing algorithms—then real-time operation will simply not be possible as the audio and/or video components would, in order to operate in real-time, require more processor resources than are actually available. Thus, there is a trade-off between maximising output quality on the one hand and preserving real-time operation on the other.
In the context of audio/video processing specifically, raw audio and video data is processed in portions, which are then packetized for transmission. Each audio data portion may be (e.g.) an audio frame of 20 ms of audio; each video data portion may be (e.g.) a video frame comprising an individual captured image in a sequence of captured images. In order to maintain real-time operation, processing of an audio frame should finalize before capture of the next audio frame is completed; otherwise, subsequent audio frames will be buffered and an increasing delay is introduced into the computing system. Likewise, processing of a video frame should finalize before the next video frame is captured, for the same reason. For unduly complex audio/video algorithms, the processor may have insufficient resources to achieve this.
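The real-time constraint described above can be sketched as a simple adaptation loop in which the complexity of a processing component is stepped down whenever a 20 ms audio frame takes longer than 20 ms to process; the complexity levels and the processing stub below are assumptions introduced purely for illustration.

```python
# Sketch of complexity adaptation under the real-time constraint: a frame must
# be processed within its own duration, so complexity is reduced when the
# deadline is missed and increased when there is ample headroom.
import time

FRAME_MS = 20.0

def process_frame(frame, complexity):
    time.sleep(0.001 * complexity)          # stand-in for algorithmic work

def adaptive_loop(frames, complexity=5, min_complexity=1):
    for frame in frames:
        start = time.perf_counter()
        process_frame(frame, complexity)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms > FRAME_MS and complexity > min_complexity:
            complexity -= 1                 # too slow: prefer real-time over quality
        elif elapsed_ms < 0.5 * FRAME_MS:
            complexity += 1                 # ample headroom: spend it on quality
    return complexity

print(adaptive_loop(frames=[b"\x00" * 320] * 10))
```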
The resources of a particular user device 104, 110, 502 may be embodied in hardware or software. Examples include sampling rate of received data, processor resources for executing code (e.g. number of cycles and/or operating processor clock speed in a particular time period) assigned to audio and/or video data and any other resources assigned for presenting audio and/or video information to a user.
Processor resources may be reallocated by adjusting a number of low-level machine-code instructions needed to implement processing functions such as audio or video processing (as less complex algorithms are realized using fewer machine-code instructions). Processor resources may also be reallocated using a low-level thread scheduler, which allocates resources to different threads by selectively delaying execution of thread instructions relative to one another.
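A minimal sketch of the thread-scheduling approach, assuming a cooperative model in which the deprioritised worker yields the processor by sleeping briefly between data portions, is given below; the delay values and worker names are arbitrary.

```python
# Illustrative sketch: shifting processor time between an uplink worker and a
# downlink worker by delaying the lower-priority worker's loop iterations.
import threading
import time

def worker(name, counters, delay_s, stop):
    while not stop.is_set():
        counters[name] += 1          # stand-in for processing one data portion
        if delay_s:
            time.sleep(delay_s)      # deprioritised worker yields the processor

counters = {"uplink": 0, "downlink": 0}
stop = threading.Event()
threads = [
    threading.Thread(target=worker, args=("uplink", counters, 0.0, stop)),
    threading.Thread(target=worker, args=("downlink", counters, 0.001, stop)),
]
for t in threads:
    t.start()
time.sleep(0.2)
stop.set()
for t in threads:
    t.join()
print(counters)                      # the undelayed uplink worker does far more work
```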
Although the above describes the reallocation of resources in relation to video calls comprising both a video and an audio component, it is understood that the same principles may apply to audio only data streams or video only data streams.
Generally, any of the functions described herein can be implemented using software, firmware, hardware (e.g., fixed logic circuitry), or a combination of these implementations. The terms “module,” “functionality,” “component” and “logic” as used herein generally represent software, firmware, hardware, or a combination thereof. In the case of a software implementation, the module, functionality, or logic represents program code that performs specified tasks when executed on a processor (e.g. CPU, CPUs, or DSP). The program code can be stored in one or more computer readable memory devices. The features of the techniques described below are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.
For example, the user terminals may also include an entity (e.g. software) that causes hardware of the user terminals to perform operations, e.g., processors, functional blocks, and so on. For example, the user terminals may include a computer-readable medium that may be configured to maintain instructions that cause the user terminals, and more particularly the operating system and associated hardware of the user terminals, to perform operations. Thus, the instructions function to configure the operating system and associated hardware to perform the operations and in this way result in transformation of the operating system and associated hardware to perform functions. The instructions may be provided by the computer-readable medium to the user terminals through a variety of different configurations.
One such configuration of a computer-readable medium is a signal bearing medium and thus is configured to transmit the instructions (e.g. as a carrier wave) to the computing device, such as via a network. The computer-readable medium may also be configured as a computer-readable storage medium and thus is not a signal bearing medium. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions and other data.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.