In the past, the technique known as “super resolution” has been used in satellite imaging to boost the resolution of the captured image beyond the intrinsic resolution of the image capture element. This can be achieved if the satellite (or some component of it) moves by an amount corresponding to a fraction of a pixel, so as to capture samples that overlap spatially. In the region of overlap, a higher resolution sample can be generated by interpolating between the values of the two or more lower resolution samples that overlap that region, e.g. by taking an average. The higher resolution sample size is that of the overlapping region, and the value of the higher resolution sample is the interpolated value.
The idea is illustrated schematically in the accompanying drawings.
More recently the concept of super resolution has been proposed for use in video coding. One potential application of this is similar to the scenario described above: if the user's camera physically shifts between frames by an amount corresponding to a non-integer number of pixels (e.g. because it is a handheld camera), and this motion can be detected (e.g. using a motion estimation algorithm), then it is possible to create an image with a higher resolution than the intrinsic resolution of the camera's image capture element by interpolating between pixel samples where the pixels of the two frames partially overlap.
Another potential application is to deliberately lower the resolution of each frame and introduce an artificial shift between frames (as opposed to a shift due to actual motion of the camera). This enables the bit rate per frame to be lowered. This approach is discussed in more detail below with reference to the accompanying drawings.
Embodiments of the present invention receive as an input a video signal comprising a plurality of frames representing a video image at different respective times, each frame comprising a plurality of higher resolution samples. Multiple different projections of the video image are generated, each projection comprising a plurality of lower resolution samples representing the video image at a lower resolution. The lower resolution samples of the different projections represent different but overlapping groups of the higher resolution samples which overlap spatially in a plane of the video image. The video signal is encoded by encoding the different projections into separate respective encoded streams, and each of the separate encoded streams is transmitted to a receiving terminal over a network.
Further embodiments of the present invention decode a video signal comprising a plurality of frames representing a video image at different respective times, each frame comprising a plurality of higher resolution samples. A plurality of separate encoded video streams are received from a transmitting terminal over a network, each of the encoded video streams comprising a different respective one of multiple different projections of the video image. Each projection comprises a plurality of lower resolution samples representing the video image at a lower resolution, wherein the lower resolution samples of the different projections represent different but overlapping groups of the higher resolution samples which overlap spatially in a plane of the video image. The encoded video streams are decoded so as to decode the projections. Higher resolution samples are generated representing the video image at a higher resolution by, for each higher resolution sample thus generated, forming the higher resolution sample from a region of overlap between ones of the lower resolution samples from the different projections. The video signal is output to a screen at the higher resolution following generation from the projections.
The various embodiments may be embodied in a transmitting terminal, in a receiving terminal, as computer program code to be run at the transmitting or receiving side, or may be practiced as a method. The computer program may be embodied on a tangible, computer-readable storage medium.
In further embodiments there may be provided a network element for forwarding a video signal comprising a plurality of frames representing a video image at different respective times, each frame comprising a plurality of higher resolution samples. The network element comprises transceiver apparatus arranged to receive a plurality of separate encoded video streams from a transmitting terminal over a network, each of the encoded video streams comprising a different respective one of multiple different projections of the video image. Each projection comprises a plurality of lower resolution samples representing the video image at a lower resolution, wherein the lower resolution samples of the different projections represent different but overlapping groups of the higher resolution samples which overlap spatially in a plane of the video image. The network element also comprises processing apparatus configured to determine whether to drop at least one of said encoded video streams in dependence on a condition of one of the network, the network element and a receiving terminal, leaving one or more of the encoded video streams remaining. The transceiver apparatus is arranged to forward the one or more remaining streams to the receiving terminal over the network, but not any of the encoded video streams dropped by the processing apparatus.
For a better understanding of the various embodiments and to show how they may be put into effect, reference is made by way of example to the accompanying drawings in which:
Embodiments of the present invention provide a super-resolution based compression technique for use in video coding. Over a sequence of frames, the image represented in the video signal is divided into a plurality of different lower resolution “projections” from which a higher resolution version of the frame can be reconstructed. Each projection is a version of a different respective one of the frames, but with a lower resolution than the original frame. The lower resolution samples of each different projection have different spatial alignments relative to one another within a reference grid of the video image, so that the lower resolution samples of the different projections overlap but are not coincident. For example each projection is based on the same raster grid defining the size and shape of the lower resolution samples, but with the raster being applied with a different offset or “shift” in each of the different projections, the shift being a fraction of the lower resolution sample size in the horizontal and/or vertical direction relative to the raster orientation. Each frame is subdivided into only one projection regardless of the shift step, e.g. ½ or ¼ of a sample.
An example is illustrated schematically in the accompanying drawings.
A given frame F(t) comprises a plurality of higher resolution samples S′ defined by a higher resolution raster, shown by the dotted grid lines in the accompanying drawings.
Each of a sequence of frames F(t), F(t+1), F(t+2), F(t+3) is then converted into a different respective projection (a) to (d). Each of the projections comprises a plurality of lower resolution samples S defined by applying a lower resolution raster to the respective frame, as illustrated by the solid lines overlaid on the higher resolution grid in the accompanying drawings.
Each lower resolution sample S represents a respective group of higher resolution samples S′ (each lower resolution sample covers a whole number of higher resolution samples). In embodiments the value of the lower resolution sample S is determined by combining the values of the higher resolution samples, for example by taking an average such as a mean or weighted mean (although more complex relationships are not excluded). Alternatively the value of the lower resolution sample could be determined by taking the value of a representative one of the higher resolution samples, or averaging a representative subset of the higher resolution values.
The grid of lower resolution samples in the first projection (a) has a certain, first alignment relative to the underlying higher-resolution raster of the video image represented in the signal being encoded, in the plane of the frame. For reference this may be referred to here as a shift of (0, 0). The grid of lower resolution samples formed by each further projection (b) to (d) of the subsequent frames F(t+1), F(t+2), F(t+3) respectively is then shifted by a different respective amount in the plane of the frame. For each successive projection, the shift is by a fraction of the lower resolution sample size in the horizontal or vertical direction. In the example shown, in the second projection (b) the lower resolution grid is shifted right by half a (lower resolution) sample, i.e. a shift of (+½, 0) relative to the reference position (0, 0). In the third projection (c) the lower resolution grid is shifted down by another half a sample, i.e. a shift of (0, +½) relative to the second shift or a shift of (+½, +½) relative to the reference position. In the fourth projection the lower resolution grid is shifted left by another half a sample, i.e. a shift of (−½, 0) relative to the third projection or (0, +½) relative to the reference position. Together these shifts make up a shift pattern.
The value of the lower resolution sample in each projection is taken by combining the values of the higher resolution samples covered by that lower resolution sample, i.e. by combining the values of the respective group of higher resolution samples which that lower resolution sample represents. This is done for each lower resolution sample of each projection based on the respective groups, thereby generating a plurality of different reduced-resolution versions of the image over a sequence of frames.
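By way of illustration only, the following sketch shows how such projections might be generated; the function name make_projection, the 2×2 averaging, and the wrap-around treatment of frame edges are assumptions made for the example rather than details of the described encoder:

```python
import numpy as np

# Hypothetical sketch: generate four half-shifted lower resolution projections
# from a higher resolution frame, each lower resolution sample being the mean
# of a 2x2 group of higher resolution samples. A half-sample shift of a 2x2
# lower resolution sample corresponds to one higher resolution sample.
SHIFT_PATTERN = [(0, 0), (0, 1), (1, 1), (1, 0)]  # offsets in higher-res units

def make_projection(frame, shift):
    """One lower resolution projection of `frame`, offset by `shift`."""
    dy, dx = shift
    # Wrap the frame so every shifted 2x2 group is complete; a real encoder
    # would instead preserve or separately encode the frame-edge samples.
    shifted = np.roll(frame, (-dy, -dx), axis=(0, 1))
    h, w = shifted.shape  # assumed even for this toy example
    groups = shifted.reshape(h // 2, 2, w // 2, 2)
    return groups.mean(axis=(1, 3))  # average each 2x2 group

frame = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 frame
projections = [make_projection(frame, s) for s in SHIFT_PATTERN]
```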
The pattern repeats over multiple sequences of frames. The projection of each frame is encoded and sent to a decoder in an encoded video signal, e.g. being transmitted over a packet-based network such as the Internet. Alternatively the encoded video signal may be stored for decoding later by a decoder.
At the decoder, the different projections of the sequence of frames can then be used to reconstruct a higher resolution sample size from the overlapping regions of the lower resolution samples. For example, in the embodiment described above, each region of overlap between the four half-shifted projections (a) to (d) yields a higher resolution sample whose value is interpolated from the overlapping lower resolution samples.
Over a sequence of frames the video image may be subdivided into a full set of projections, e.g. when the shift is half a sample there are provided four projections over a sequence of four frames, and in the case of a quarter-sample shift sixteen projections over sixteen frames. Therefore overall, the full set of projections together may still recreate the same resolution as if the super resolution technique were not applied, albeit taking longer to build up that resolution.
However, the video image is broken down into separate descriptions or sub-frames, which can be manipulated separately or differently. There are a number of uses for the division of the video into multiple projections, for example as follows.
Note also that, in embodiments, the multiple projections are created according to a predetermined shift pattern, not signalled over the network from the encoder to the decoder and not included in the encoded bitstream. The position of a projection in the order of projections, in combination with the shift pattern, may determine its shift. That is, each of said projections may be of a different respective one of a sequence of said frames, and the projection of each of said sequence of frames may be a respective one of a predetermined pattern of different projections, wherein said pattern repeats over successive sequences of said frames. The decoder is then configured to regenerate a higher resolution version of the video based on the predetermined pattern being pre-stored or pre-programmed at the receiving terminal rather than received from the transmitting terminal in any of the streams.
Alternative embodiments of the present invention divide a given frame into a plurality of different lower resolution projections from which a higher resolution version of the frame can be reconstructed. Each projection is a version of the same frame with a lower resolution than the original frame. The lower resolution samples of each different projection of the same frame have different spatial alignments relative to one another within the frame, so that the lower resolution samples of the different projections overlap but are not coincident. For example each projection is based on the same raster grid defining the size and shape of the lower resolution samples, but with the raster being applied with a different offset or “shift” in each of the different projections, the shift being a fraction of the lower resolution sample size in the horizontal and/or vertical direction relative to the raster orientation.
An example is shown schematically in the accompanying drawings.
A given input frame F(t) comprises a plurality of higher resolution samples S′ defined by a higher resolution raster, shown by the dotted grid lines in the accompanying drawings.
Similarly to the embodiments described above, each of the projections (a) to (d) comprises a plurality of lower resolution samples S defined by applying a lower resolution raster to the frame F(t).
The grid of lower resolution samples in the first projection (a) has a certain, first alignment within the frame F(t), i.e. in the plane of the frame. For reference this may be referred to here as a shift of (0, 0). The grid of lower resolution samples formed by each further projection (b) to (d) of the same frame F(t) is then shifted by a different respective amount in the plane of the frame. For each successive projection, the shift is by a fraction of the lower resolution sample size in the horizontal or vertical direction. In the example shown, similar to the pattern described above, the second projection (b) has a shift of (0, +½), the third projection (c) a shift of (+½, +½), and the fourth projection (d) a shift of (+½, 0), relative to the reference position.
Note that the different projections within the same frame do not necessarily need to be generated in any particular order, and any could be considered the “reference position”. Other ways of describing the same pattern may be equivalent. Other patterns are also possible, e.g. based on a lower resolution sample size of 4×4 higher resolution samples being shifted in a pattern of quarter sample shifts (a quarter of the lower resolution sample size).
Again, the value of the lower resolution sample in each projection is taken by combining the values of the higher resolution samples covered by that lower resolution sample, i.e. by combining the values of the respective group of higher resolution samples which that lower resolution sample represents. This is done for each lower resolution sample of each projection based on the respective groups, thereby generating a plurality of different reduced-resolution versions of the same frame. The process is also repeated for multiple frames.
The effect is that each two dimensional frame now effectively becomes a three dimensional “slab” or cuboid, as shown schematically in the accompanying drawings.
The projections of each frame are encoded and sent to a decoder in an encoded video signal, e.g. being transmitted over a packet-based network such as the Internet. Alternatively the encoded video signal may be stored for decoding later by a decoder.
At the decoder, the multiple different projections of the same frame can then be used to reconstruct a higher resolution sample size from the overlapping regions of the lower resolution samples. For example, in the embodiment described above, each region of overlap between the four half-shifted projections (a) to (d) of the same frame yields a higher resolution sample whose value is interpolated from the overlapping lower resolution samples.
Each frame may be subdivided into a full set of projections, e.g. when the shift is half a sample each frame is represented in four projections, and in the case of a quarter shift into sixteen projections. Therefore overall, the frame including all its projections together may still represent the same resolution as if the super resolution technique was not applied.
However, unlike a conventional video coding scheme the frame is broken down into separate descriptions or sub-frames, which can be manipulated separately or differently. There are a number of uses for this, for example as follows.
Also, again the multiple projections may be created by a predetermined shift pattern, not signalled over the network from the encoder to the decoder and not included in the encoded bitstream.
An example communication system in which the various embodiments may be employed is described with reference to the schematic block diagram of the accompanying drawings.
The communication system comprises a first, transmitting terminal 12 and a second, receiving terminal 22. For example, each terminal 12, 22 may comprise one of a mobile phone or smart phone, tablet, laptop computer, desktop computer, or other household appliance such as a television set, set-top box, stereo system, etc. The first and second terminals 12, 22 are each operatively coupled to a communication network 32 and the first, transmitting terminal 12 is thereby arranged to transmit signals which will be received by the second, receiving terminal 22. Of course the transmitting terminal 12 may also be capable of receiving signals from the receiving terminal 22 and vice versa, but for the purpose of discussion the transmission is described herein from the perspective of the first terminal 12 and the reception is described from the perspective of the second terminal 22. The communication network 32 may comprise for example a packet-based network such as a wide area internet and/or local area network, and/or a mobile cellular network.
The first terminal 12 comprises a tangible, computer-readable storage medium 14 such as a flash memory or other electronic memory, a magnetic storage device, and/or an optical storage device. The first terminal 12 also comprises a processing apparatus 16 in the form of a processor or CPU having one or more cores; a transceiver such as a wired or wireless modem having at least a transmitter 18; and a video camera 15 which may or may not be housed within the same casing as the rest of the terminal 12. The storage medium 14, video camera 15 and transmitter 18 are each operatively coupled to the processing apparatus 16, and the transmitter 18 is operatively coupled to the network 32 via a wired or wireless link. Similarly, the second terminal 22 comprises a tangible, computer-readable storage medium 24 such as an electronic, magnetic, and/or an optical storage device; and a processing apparatus 26 in the form of a CPU having one or more cores. The second terminal comprises a transceiver such as a wired or wireless modem having at least a receiver 28; and a screen 25 which may or may not be housed within the same casing as the rest of the terminal 22. The storage medium 24, screen 25 and receiver 28 of the second terminal are each operatively coupled to the respective processing apparatus 26, and the receiver 28 is operatively coupled to the network 32 via a wired or wireless link.
The storage medium 14 on the first terminal 12 stores at least a video encoder arranged to be executed on the processing apparatus 16. When executed the encoder receives a “raw” (unencoded) input video signal from the video camera 15, encodes the video signal so as to compress it into a lower bitrate stream, and outputs the encoded video for transmission via the transmitter 18 and communication network 32 to the receiver 28 of the second terminal 22. The storage medium on the second terminal 22 stores at least a video decoder arranged to be executed on its own processing apparatus 26. When executed the decoder receives the encoded video signal from the receiver 28 and decodes it for output to the screen 25. A generic term that may be used to refer to an encoder and/or decoder is a codec.
In operation, the projection generator 60 sub-divides the input video signal into a plurality of projections, either generating a respective projection for each successive frame, or generating multiple projections of each and every frame, as discussed above.
Within a given projection, the forward transform module 42 transforms each block of lower resolution samples from a spatial domain representation into a transform domain representation, typically a frequency domain representation, so as to convert the samples of the block to a set of transform domain coefficients. Examples of such transforms include a Fourier transform, a discrete cosine transform (DCT) and a Karhunen-Loève transform (KLT), details of which will be familiar to a person skilled in the art. The transformed coefficients of each block are then passed through the forward quantization module 44 where they are quantized onto discrete quantization levels (coarser levels than used to represent the coefficient values initially). The transformed, quantized blocks are then encoded through the prediction coding stage 45 or 46 and then a lossless encoding stage such as an entropy encoder 48.
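A minimal sketch of these two stages, assuming a block-wise DCT (via SciPy) and a simple uniform quantizer with an arbitrary step size; a real codec's quantization scheme would be more elaborate:

```python
import numpy as np
from scipy.fft import dctn, idctn

def forward_transform_quantize(block, step=10.0):
    """Transform a block of samples to the frequency domain and quantize the
    coefficients onto coarse discrete levels."""
    coeffs = dctn(block, norm="ortho")           # spatial -> frequency domain
    return np.round(coeffs / step).astype(int)   # coarse quantization levels

def dequantize_inverse_transform(levels, step=10.0):
    """Approximate reconstruction, as performed at the decoder side."""
    return idctn(levels * step, norm="ortho")

block = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 block of samples
levels = forward_transform_quantize(block)        # many levels quantize to 0
recon = dequantize_inverse_transform(levels)      # close to, not equal, block
```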
The effect of the entropy encoder 48 is that it requires fewer bits to encode smaller, frequently occurring values, so the aim of the preceding stages is to represent the video signal in terms of as many small values as possible.
The purpose of the quantizer 44 is that the quantized values will be smaller and therefore require fewer bits to encode. The purpose of the transform is that, in the transform domain, there tend to be more values that quantize to zero or to small values, thereby reducing the bitrate when encoded through the subsequent stages.
The encoder may be arranged to encode in either an intra prediction coding mode or an inter prediction coding mode (i.e. motion prediction). If using inter prediction, the inter prediction module 46 encodes the transformed, quantized coefficients from a block of one frame F(t) relative to a portion of a preceding frame F(t−1). The block is said to be predicted from the preceding frame. Thus the encoder only needs to transmit the difference between the predicted version of the block and the actual block, referred to in the art as the residual, along with the motion vector. Because the residual values tend to be smaller, they require fewer bits to encode when passed through the entropy encoder 48.
The location of the portion of the preceding frame is determined by a motion vector, which is determined by the motion prediction algorithm in the inter prediction module 46.
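As a hedged illustration of inter prediction (the function name and array layout are assumptions for the example, not the codec's actual data structures), predicting a block amounts to subtracting a motion-compensated portion of the preceding frame from the current block:

```python
import numpy as np

def inter_predict_residual(curr, prev, top, left, size, mv):
    """Residual of the size x size block at (top, left) in the current frame
    against the portion of the preceding frame displaced by the motion
    vector mv = (dy, dx); only this residual and mv need be encoded."""
    dy, dx = mv
    reference = prev[top + dy : top + dy + size, left + dx : left + dx + size]
    block = curr[top : top + size, left : left + size]
    return block - reference  # decoder adds this back to the reference

# Usage: a static scene yields an all-zero residual for mv = (0, 0).
prev = np.arange(64, dtype=float).reshape(8, 8)
curr = prev.copy()
residual = inter_predict_residual(curr, prev, 2, 2, 4, (0, 0))  # all zeros
```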
In embodiments a block from one projection of one frame is predicted from a different projection having a different shift in a preceding frame. For example, a block in projection (b) of frame F(t+1) may be predicted from projection (a) of the preceding frame F(t), the known shift between the two projections being taken into account together with the motion vector.
Alternatively in embodiments of the present invention in which frames are each split into a plurality of projections, the motion prediction may be between two corresponding projections from different frames, i.e. between projections having the same shift within their respective frames. For example, projection (a) of one frame may be predicted from projection (a) of the preceding frame.
If using intra prediction, the transformed, quantized samples are subject instead to the intra prediction module 45. In this case the transformed, quantized coefficients from a block of the current frame F(t) are encoded relative to a block within the same frame, typically a neighbouring block. The encoder then only needs to transmit the residual difference between the actual block and the predicted version derived from the neighbouring block. Again, because the residual values tend to be smaller they require fewer bits to encode when passed through the entropy encoder 48.
In embodiments of the present invention, the intra prediction module 45 predicts between blocks of the same projection in the same frame. Alternatively or additionally, it may predict the blocks of one projection of a frame from the blocks of another projection of that same frame, e.g. encoding projections (b), (c) and (d) relative to a base projection (a).
The prediction may present more opportunities for reducing the size of the residual, because corresponding counterpart samples from the different projections will tend to be similar and therefore result in a small residual. In embodiments the intra prediction module 45 may be configured to select which of the projections to use as the base projection and which to encode relative to the base projection. For example, the intra prediction module could instead choose projection (c) as the base projection and then encode projections (a), (b) and (d) relative to projection (c). The intra prediction module 45 may be configured to select which is the base projection in order to minimize or at least reduce the residual, e.g. by trying all or a subset of possibilities and selecting that which results in the smallest overall residual bitrate to encode.
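One way such a selection might be implemented is sketched below, using the sum of absolute residuals as a crude stand-in for the true entropy-coded bitrate of each candidate base:

```python
import numpy as np

def choose_base_projection(projections):
    """Try each projection as the base and return the index of the one that
    minimizes the total size of the residuals of the others against it."""
    costs = [sum(np.abs(p - base).sum() for p in projections)
             for base in projections]
    return int(np.argmin(costs))
```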
Once encoded by the intra prediction coding module 45 or inter prediction coding module 46, the blocks of samples of the different projections are passed to the entropy encoder 48 where they are subject to a further, lossless encoding stage. The encoded video output by the entropy encoder 48 is then passed to the transmitter 18, which transmits the encoded video 33 to the receiver 28 of the receiving terminal 22 over the network 32, in embodiments a packet-based network such as the Internet.
In operation, each projection is individually passed through the decoder 50 and treated as a separate stream.
The entropy decoder 58 performs a lossless decoding operation on each projection of the encoded video signal 33 in accordance with entropy coding techniques, and passes the resulting output to either the intra prediction decoding module 55 or the inter prediction decoding module 56 for further decoding, depending on whether intra prediction or inter prediction (motion prediction) was used in the encoding.
If inter prediction was used, the inter prediction module 56 uses the motion vector received in the encoded signal to predict a block from one frame based on a portion of a preceding frame. As discussed, this prediction could be between different projections of different frames, or the same projection in different frames. In the former case the motion vector and the known shift between the projections are added together to determine the overall displacement.
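A small sketch of that combination, with all quantities in higher resolution sample units and the shifts drawn from the predetermined pattern rather than from the bitstream (the function name is illustrative):

```python
def effective_displacement(motion_vector, shift_curr, shift_ref):
    """Total displacement used to locate the reference portion when a block
    of one projection is predicted from a differently shifted projection of
    a preceding frame: the signalled motion vector plus the known difference
    between the two projections' shifts."""
    (mv_y, mv_x), (cy, cx), (ry, rx) = motion_vector, shift_curr, shift_ref
    return (mv_y + cy - ry, mv_x + cx - rx)
```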
If intra prediction was used, the intra prediction module 55 predicts a block from another block in the same frame. In embodiments, this comprises predicting blocks of one projection based on blocks of another, base projection.
The decoded projections are then passed through the reverse quantization module 54 where the quantized levels are converted onto a de-quantized scale, and the reverse transform module 52 where the de-quantized coefficients are converted from the transform domain into lower resolution samples in the spatial domain. The dequantized, reverse transformed samples are supplied on to the super resolution module 70.
The super resolution module uses the lower resolution samples from the different projections to “stitch together” a higher resolution version of the video image represented by the signal being decoded. As discussed, this can be achieved by taking overlapping lower resolution samples from different projections (either from different frames or the same frame), and generating a higher resolution sample corresponding to the region of overlap. The value of the higher resolution sample is found by interpolating between the values of the overlapping lower resolution samples, e.g. by taking an average, as in the shaded region overlapped by four lower resolution samples S from the four different projections (a) to (d) in the accompanying drawings.
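A decoder-side sketch of this stitching, matching the hypothetical make_projection example given earlier (2×2 averaging with half-sample shifts and wrap-around edges): each projection is expanded back onto the higher resolution grid and the overlapping contributions are averaged:

```python
import numpy as np

SHIFT_PATTERN = [(0, 0), (0, 1), (1, 1), (1, 0)]  # as assumed at the encoder

def super_resolve(projections, shape):
    """Interpolate a higher resolution image of the given shape from the
    overlapping lower resolution samples of the four projections."""
    acc = np.zeros(shape)
    for proj, (dy, dx) in zip(projections, SHIFT_PATTERN):
        expanded = np.kron(proj, np.ones((2, 2)))        # 2x2 upsampling
        acc += np.roll(expanded, (dy, dx), axis=(0, 1))  # undo encoder shift
    # Each higher resolution position is overlapped by one lower resolution
    # sample from each projection; average across them.
    return acc / len(projections)
```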
In other embodiments, the process may involve some degradation. For example this may be the case if each lower resolution sample represents four higher resolution samples of the original input frame, but the four projections with shifts of (0, 0); (0, +½); (+½, +½); and (+½, 0) are spread out in time over different successive frames, as in the first-described embodiments. If the image moves between those frames, the projections no longer represent exactly the same image, so the higher resolution version reconstructed from them is only an approximation.
In other embodiments the process of reconstructing the frame from a plurality of projections may be lossless. For example this may be the case if each lower resolution sample represents four higher resolution samples of the original input frame, as shown in the accompanying drawings, and all of the projections are taken from that same frame, so that together the projections preserve the information of the original frame.
This process is performed over all frames in the video signal being decoded. If different projections are provided in different frames, as in the first-described embodiments, the higher resolution version is built up over a sequence of frames; if multiple projections are provided for each and every frame, the higher resolution version can be reconstructed on a frame-by-frame basis.
The different projections are transmitted over the network 32 from the transmitting terminal 12 to the receiving terminal 22 in separate packet streams. Thus each projection is transmitted in a separate set of packets making up the respective stream, in embodiments being distinguished by a separate stream identifier for each stream included in the packets of that stream. At least one of the streams is independently encoded, i.e. using a self-contained encoding, not relative to any others of the streams carrying the other projections. In embodiments several or all of the streams may be encoded in this way, or alternatively some others may be encoded relative to a base projection in one of the streams.
A result of transmitting in different streams is that one or more of the streams can be dropped, or packets of those streams dropped, and it is still possible to decode at least a lower resolution version of the video from one of the remaining projections, or potentially a higher (but not full) resolution version from a subset of remaining projections. The streams or packets may be deliberately dropped, or may be lost in transmission.
Projections may be dropped at various stages of transmission for various reasons. Projections may be dropped by the transmitting terminal 12. It may be configured to do this in response to feedback from the receiving terminal 22 that there are insufficient resources at the receiving terminal (e.g. insufficient processing cycles or downlink bandwidth) to handle a full or higher resolution version of the video, or that a full or higher resolution is not necessarily required by a user of the receiving terminal; or in response to feedback from the network 32 that there are insufficient resources at one or more elements of the network to handle a full or higher resolution version of the video, e.g. there is network congestion such that one or more routers have packet queues full enough that they discard packets or whole streams, or an intermediate server has insufficient processing resources or up or downlink bandwidth. Another case of dropping may occur where the transmitting terminal 12 does not have enough resources to encode at a full or higher resolution (e.g. insufficient processing cycles or uplink bandwidth). Alternatively or additionally, one or more of the streams carrying the different projections may be dropped by an intermediate element of the network 32 such as a router or intermediate server, in response to network conditions (e.g. congestion) or information from the receiving terminal 22 that there are insufficient resources to handle a full or higher resolution or that such resolution is not necessarily required at the receiving terminal 22.
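The following sketch illustrates one possible drop decision at a network element; the stream fields is_base and bitrate, and the single bandwidth condition, are assumptions standing in for whatever conditions the element actually monitors:

```python
def select_streams_to_forward(streams, available_bitrate):
    """Forward as many projection streams as the estimated capacity towards
    the receiver allows, considering the base projection first so that it is
    only dropped as a last resort."""
    ordered = sorted(streams, key=lambda s: not s["is_base"])
    forwarded, used = [], 0.0
    for stream in ordered:
        if used + stream["bitrate"] <= available_bitrate:
            forwarded.append(stream)
            used += stream["bitrate"]
    return forwarded  # any stream not selected is simply dropped

# Usage: with capacity for two streams, the base plus one other is forwarded.
streams = [{"id": "a", "is_base": True, "bitrate": 100.0},
           {"id": "b", "is_base": False, "bitrate": 100.0},
           {"id": "c", "is_base": False, "bitrate": 100.0}]
kept = select_streams_to_forward(streams, 250.0)  # streams "a" and "b"
```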
For example, say a signal is split into four projections (a) to (d) at the encoder side, each in a separate stream. If the receiving terminal 22 receives all four streams, the decoding system can recreate a full resolution version of that frame. If however one or more streams are dropped, e.g. the streams carrying projections (b) and (d), the decoding system can still reconstruct a higher (but not full) resolution version of the video by interpolating only between overlapping samples of the projections (a) and (c) from the remaining streams. Alternatively if only one stream remains, e.g. carrying projection (a), this can be used alone to display only a lower resolution version of the frame. Thus there may be provided a new form of layered or scaled coding based on splitting a video signal into different projections.
If prediction between projections is used then the base projection will not be dropped if it can be avoided, but one, some or all of the other projections predicted from the base projection may be dropped. To this end, the base projection may be marked as a priority by including a tag as side information in the encoded stream of the base projection. Elements of the network 32 such as routers or servers may then be configured to read the tag (or note the absence of it) to determine which streams can be dropped and which should not be dropped if possible (i.e. dropping the higher priority base stream should be avoided).
In some embodiments a hierarchical prediction could be used, whereby one projection is predicted from the base projection, then one or more further projections are predicted in turn from each previously predicted projection. E.g. a second projection (b) may be predicted from a first projection (a), a third projection (c) may be predicted from the second projection (b), and in turn a fourth projection (d) may be predicted from the third projection (c). Further levels may be included if there are more than four projections. Each projection may be tagged with a respective priority corresponding to its order in the prediction hierarchy, and any dropping of projections or the streams carrying the projections may be performed in dependence on this hierarchical tag.
In embodiments the encoder uses a predetermined shift pattern that is assumed by both the encoder side and decoder side without having to be signalled between them over the network, e.g. both being pre-programmed to use a pattern such as (0, 0); (0, +½); (+½, +½); (+½, 0) as described above.
Alternatively if the encoding system is configured to select which to use as a base projection, it may be that an indication concerning the shift pattern is included in the encoded signal. If any expected indication is lost in transmission, the decoding system may be configured to use a default one of the projections alone so at least to be able to display a lower resolution version.
It will be appreciated that the above embodiments have been described only by way of example.
For instance, the various embodiments are not limited to lower resolution samples formed from 2×2 or 4×4 groups of higher resolution samples, nor to any particular number, nor to square or rectangular samples nor any particular shape of sample. The grid structure used to form the lower resolution samples is not limited to being a square or rectangular grid, and other forms of grid are possible. Nor need the grid structure define uniformly sized or shaped samples. As long as there is an overlap between two or more lower resolution samples from two or more different projections, a higher resolution sample can be found from an intersection of lower resolution samples.
In embodiments the encoding is lossless. This may be achieved by preserving edge samples, i.e. explicitly encoding and sending the individual, higher-resolution samples from the edges of each frame in addition to the lower-resolution projections (edge samples cannot be fully reconstructed using the super resolution technique discussed above). Alternatively the edge samples need not be preserved in this manner. Instead the super resolution based technique of splitting a video into projections may be applied only to a portion of a frame (some but not all of the frame) in the interior of the frame, using more conventional coding for regions around the edges. This may also be lossless.
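As an illustration of the second option, a frame might be split into an interior region, to which the projection scheme is applied, and a border of higher resolution samples carried by conventional coding. The sketch below assumes 2×2 lower resolution samples with half-sample shifts, so a one-sample border suffices; details such as padding the interior to a multiple of the lower resolution sample size are omitted:

```python
import numpy as np

def split_interior_and_border(frame):
    """Split a frame into an interior region for the projection scheme and a
    one-sample-wide border of higher resolution samples to be coded
    conventionally, so that the frame edges can be reproduced losslessly."""
    interior = frame[1:-1, 1:-1]
    border = {"top": frame[0, :].copy(), "bottom": frame[-1, :].copy(),
              "left": frame[1:-1, 0].copy(), "right": frame[1:-1, -1].copy()}
    return interior, border
```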
In other embodiments, the encoding need not be lossless—for example some degradation at frame edges may be tolerated.
The various embodiments can be implemented as an intrinsic part of an encoder or decoder, e.g. incorporated as an update to an H.264 or H.265 standard, or as a pre-processing and post-processing stage, e.g. as an add-on to an H.264 or H.265 standard. Further, the various embodiments are not limited to VoIP communications or communications over any particular kind of network, but could be used in any network capable of communicating digital data, or in a system for storing encoded data on a storage medium.
Generally, any of the functions described herein can be implemented using software, firmware, hardware (e.g., fixed logic circuitry), or a combination of these implementations. The terms “module,” “functionality,” “component” and “logic” as used herein generally represent software, firmware, hardware, or a combination thereof. In the case of a software implementation, the module, functionality, or logic represents program code that performs specified tasks when executed on a processor (e.g. CPU or CPUs). The program code can be stored in one or more computer readable memory devices. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.
For example, the user terminals may also include an entity (e.g. software) that causes hardware of the user terminals to perform operations, e.g. processors, functional blocks, and so on. For example, the user terminals may include a computer-readable medium that may be configured to maintain instructions that cause the user terminals, and more particularly the operating system and associated hardware of the user terminals, to perform operations. Thus, the instructions function to configure the operating system and associated hardware to perform the operations and in this way result in transformation of the operating system and associated hardware to perform functions. The instructions may be provided by the computer-readable medium to the user terminals through a variety of different configurations.
One such configuration of a computer-readable medium is a signal bearing medium and thus is configured to transmit the instructions (e.g. as a carrier wave) to the computing device, such as via a network. The computer-readable medium may also be configured as a computer-readable storage medium and thus is not a signal bearing medium. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions and other data.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.