The present invention relates to image compression, and more specifically to a method and an apparatus for coefficient bit modeling.
This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
The Joint Photographic Experts Group (JPEG) has published a standard for compressing image data known as the JPEG standard. The JPEG standard uses a discrete cosine transform (DCT) compression algorithm that uses Huffman encoding. To improve compression quality for a broader range of applications, the JPEG has developed the “JPEG 2000 standard” (International Telecommunications Union (ITU) Recommendation T.800, August 2002). The JPEG 2000 standard uses discrete wavelet transform (DWT) and adaptive binary arithmetic coding compression.
Various embodiments provide a method and apparatus for encoding images.
Various aspects of examples of the invention are provided in the detailed description.
According to a first aspect, there is provided a method comprising:
obtaining a stripe comprising a magnitude bit of two or more coefficients, each magnitude bit belonging to the same bit-plane, said coefficients representing an image or a part of the image;
obtaining a context matrix comprising significance state of said coefficients and significance state of coefficients neighboring said two or more coefficients on a current bit-plane;
obtaining a previous layer context matrix comprising the significance state of said coefficients and the significance state of coefficients neighboring said two or more coefficients on a previous bit-plane which is one layer above the current bit-plane;
obtaining a context stripe of a bit-plane which is one layer above the previous bit-plane comprising the significance state of said coefficients on a bit-plane which is two layers above the current bit-plane;
obtaining a significance propagation state context matrix comprising the significance propagation state of said coefficients and the significance propagation state of coefficients neighboring said two or more coefficients on the current bit-plane;
using at least one of said matrices and/or stripes to construct a context label for each of said two or more magnitude bits in parallel by assigning a context label selected from a set of context labels.
According to a second aspect, there is provided an apparatus comprising:
means for obtaining a stripe comprising a magnitude bit of two or more coefficients, each magnitude bit belonging to the same bit-plane, said coefficients representing an image or a part of the image;
means for obtaining a context matrix comprising significance state of said coefficients and significance state of coefficients neighboring said two or more coefficients on a current bit-plane;
means for obtaining a previous layer context matrix comprising the significance state of said coefficients and the significance state of coefficients neighboring said two or more coefficients on a previous bit-plane which is one layer above the current bit-plane;
means for obtaining a context stripe of a bit-plane which is one layer above the previous bit-plane comprising the significance state of said coefficients on a bit-plane which is two layers above the current bit-plane;
means for obtaining a significance propagation state context matrix comprising the significance propagation state of said coefficients and the significance propagation state of coefficients neighboring said two or more coefficients on the current bit-plane;
means for using at least one of said matrices and/or stripes to construct a context label for each of said two or more magnitude bits in parallel by assigning a context label selected from a set of context labels.
For a more complete understanding of example embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
The following embodiments are exemplary. Although the specification may refer to “an”, “one”, or “some” embodiment(s) in several locations, this does not necessarily mean that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
In the following, some details of digital images are provided. An image may comprise one or more components, as shown in
In some situations, an image may be quite large in comparison to the amount of memory available to the codec. Consequently, it may not always be feasible to code the entire image as a single unit. Therefore, an image may be broken into smaller pieces, each of which may be independently coded. More specifically, an image may be partitioned into one or more disjoint rectangular regions called tiles. An example of such partitioning is depicted in
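As an illustrative sketch only (the function name and tile dimensions are assumptions, not part of any embodiment), the partitioning of an image into disjoint rectangular tiles described above may be expressed as follows:

```python
# Illustrative sketch of tile partitioning; an image is represented here as a
# 2-D list of samples, and edge tiles may be smaller than the nominal size.
def partition_into_tiles(image, tile_h, tile_w):
    """Split an image into disjoint rectangular tiles, row-major order."""
    tiles = []
    rows = len(image)
    cols = len(image[0])
    for top in range(0, rows, tile_h):
        for left in range(0, cols, tile_w):
            # Slicing naturally clips tiles that extend past the image edge.
            tile = [row[left:left + tile_w] for row in image[top:top + tile_h]]
            tiles.append(tile)
    return tiles
```

Each returned tile may then be coded independently of the others, as noted above.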
Since tiles may be coded independently of one another, the input image may be processed one tile at a time.
In the following, the operation of each of the above blocks is explained in more detail.
The forward multicomponent transform block 110 may apply a multicomponent transform to the tile-component data. Such a transform may operate on all of the components together, and may serve to reduce the correlation between components, leading to improved coding efficiency.
The multicomponent transforms may be an irreversible color transform (ICT) or a reversible color transform (RCT). The irreversible color transform is nonreversible and real-to-real in nature, while the reversible color transform is reversible and integer-to-integer. Both of these transforms map image data from the RGB to YCrCb color space. The transforms may operate on the first three components of an image, with the assumption that components 0, 1, and 2 correspond to the red, green, and blue color planes. Due to the nature of these transforms, the components on which they operate are sampled at the same resolution. In other words, the components have the same size. After the multicomponent transform stage in the encoder 100, data from each component may be treated independently.
The intracomponent transform block 120 may operate on individual components.
An example of the intracomponent transform is the discrete wavelet transform (DWT), wherein the intracomponent transform block 120 may apply a two-dimensional discrete wavelet transform (2D DWT). Another example of an intracomponent transform is the change from unsigned number representation to signed number representation, and a further example is a change to a zero DC offset, where the median value is represented by zero, the smallest value by the smallest negative number of the range, and the largest value by the largest positive number of the range. The discrete wavelet transform splits a component into numerous frequency bands (i.e., subbands). Due to the statistical properties of these subband signals, the transformed data may be coded more efficiently than the original untransformed data. Both reversible integer-to-integer and nonreversible real-to-real discrete wavelet transforms may be employed by the encoder 100. The discrete wavelet transform may apply a number of filter banks to the pre-processed image samples and generate a set of wavelet coefficients for each tile.
Since an image is a two-dimensional (2D) signal, the discrete wavelet transform is applied in both the horizontal and vertical directions. The wavelet transform may then be calculated by recursively applying the two-dimensional discrete wavelet transform to the lowpass subband signal obtained at each level in the decomposition.
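The recursive application of the two-dimensional transform to the lowpass subband may be sketched as follows. This sketch uses the simple Haar filter pair purely for illustration; the actual codec would use the reversible 5/3 or irreversible 9/7 filters, and even image dimensions are assumed at every level for brevity:

```python
# Illustrative sketch of a recursive 2-D wavelet decomposition (Haar filters,
# for brevity only).  Subband keys are tagged with the level number.
def haar_1d(signal):
    """One-level 1-D split into lowpass (average) and highpass (difference)."""
    low = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    high = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return low, high

def dwt_2d(image, levels):
    """Apply the 1-D transform along rows then columns; recurse on the LL band."""
    if levels == 0:
        return {"LL": image}
    rows_lo, rows_hi = [], []
    for row in image:                      # horizontal filtering
        lo, hi = haar_1d(row)
        rows_lo.append(lo)
        rows_hi.append(hi)
    def cols(mat):                         # vertical filtering of one half
        lo_m, hi_m = [], []
        for c in range(len(mat[0])):
            col = [mat[r][c] for r in range(len(mat))]
            lo, hi = haar_1d(col)
            lo_m.append(lo)
            hi_m.append(hi)
        # transpose back to row-major layout
        return [list(t) for t in zip(*lo_m)], [list(t) for t in zip(*hi_m)]
    ll, lh = cols(rows_lo)
    hl, hh = cols(rows_hi)
    return {f"LH{levels}": lh, f"HL{levels}": hl, f"HH{levels}": hh,
            **dwt_2d(ll, levels - 1)}
```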
In the following, it is supposed that a (R-1)-level wavelet transform is to be employed. The forward transform may be applied to the tile-component data in an iterative manner, as is illustrated in
Transformed coefficients may be obtained by the two-dimensional discrete wavelet transform so that a number of coefficients are collected from each repetition as is depicted in
The bits of the coefficients may be arranged in different bit-planes e.g. as follows. Signs of the coefficients may form a sign layer; the most significant bits (MSB) of the coefficients may form a most significant bit-plane, or layer n-2, if n is the number of bits of the coefficients (including the sign); the next most significant bits of the coefficients may form the next bit-plane, or layer n-3, etc. The least significant bits (LSB) of the coefficients may form a least significant bit-plane, or layer 0. The bit-planes other than the sign layer may also be called magnitude bit-planes υ(n-2) to υ(0). The sign bit-plane may be called χ.
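The bit-plane arrangement described above may be sketched as follows; the function name and the coefficient values are illustrative assumptions:

```python
# Illustrative sketch: split signed coefficients into a sign plane χ and
# magnitude bit-planes υ(n-2) … υ(0), following the layering described above.
def to_bit_planes(coeffs, n_bits):
    """Return (sign_plane, magnitude_planes) for a list of coefficients.

    magnitude_planes[0] is the most significant magnitude plane (layer n-2);
    magnitude_planes[-1] is the least significant plane (layer 0).
    """
    sign_plane = [1 if c < 0 else 0 for c in coeffs]
    planes = []
    for layer in range(n_bits - 2, -1, -1):   # layer n-2 down to layer 0
        planes.append([(abs(c) >> layer) & 1 for c in coeffs])
    return sign_plane, planes
```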
The quantization block 130 quantizes the transformed coefficients obtained by the two-dimensional discrete wavelet transform. Quantization may allow greater compression to be achieved by representing transform coefficients with reduced precision that is nevertheless high enough to obtain the desired level of image quality. Transform coefficients may be quantized using scalar quantization. A different quantizer may be employed for the coefficients of each subband, and each quantizer may have only one parameter, a step size. Quantization of transform coefficients is one source of information loss in the coding path, wherein, in lossless encoding, the quantization may not be performed. The quantized wavelet coefficients may then be arithmetic coded, for example. Each subband of coefficients may be encoded independently of the other subbands, and a block coding approach may be used.
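A scalar quantizer with a single step-size parameter, as described above, may be sketched as follows. The deadzone form and the mid-point reconstruction are common choices and are shown here only as assumptions:

```python
# Illustrative sketch of a deadzone scalar quantizer with one step-size
# parameter per subband: q = sign(c) * floor(|c| / step).
def quantize(coeffs, step):
    return [int(abs(c) // step) * (-1 if c < 0 else 1) for c in coeffs]

def dequantize(indices, step):
    # Mid-point reconstruction for non-zero indices (one common choice).
    return [0 if q == 0 else (abs(q) + 0.5) * step * (1 if q > 0 else -1)
            for q in indices]
```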
The coefficients for each subband may be partitioned into code-blocks e.g. in the tier-1 coding block 140. Code-blocks are rectangular in shape, and their nominal size may be a free parameter of the coding process, subject to certain constraints. The nominal width and height of a code-block may be an integer power of two, and the product of the nominal width and height may not exceed a certain value, such as 4096. Since code-blocks are not permitted to cross precinct boundaries, a reduction in the nominal code-block size may be required if the precinct size is sufficiently small. The size of the code-blocks of different subbands may be the same or the size of the code-blocks may be different in different subbands.
The encoding of the code-blocks may also be referred to as coefficient bit modeling (CBM), which may be followed by arithmetic encoding. In context modeling, the coefficients in a code-block are processed bit-plane by bit-plane, starting from the bit-plane which has the coefficient with the most significant non-zero bit in the code-block. A context label is generated for each coefficient in the bit-plane in one of three passes: the significance propagation pass (SPP), the magnitude refinement pass (MRP), or the clean up pass (CU), and each context label is used to describe the context (CX) of that coefficient in that bit-plane. In addition, a decision bit (D) is given with each context. A coefficient can become significant in the significance propagation pass or the clean up pass, when the first non-zero magnitude bit is encountered. The significance state of a coefficient bit that has a magnitude of 0 (the value of the bit is 0) can nevertheless affect the context of its neighboring coefficients.
After a subband has been partitioned into code-blocks, each of the code-blocks may be independently coded. For each code-block, an embedded code may be produced, comprised of numerous coding passes. The output of the tier-1 encoding process is, therefore, the arithmetic encoding of a collection of CX-D pairs (of which the sign context-decision pair (CXS-S) is another example) of coding passes for the various code-blocks. In accordance with an embodiment, the coefficient bit modelling is performed using the parallel single-pass coefficient bit modelling unit described later in this specification.
In the tier-2 coding block 150 code-blocks are grouped into so called precincts. The input to the tier-2 encoding process is the set of bit-plane coding passes generated during tier-1 encoding. In tier-2 encoding, the coding pass information is packaged into data units called packets, in a process referred to as packetization. The resulting packets are then output to the final code stream. The packetization process imposes a particular organization on coding pass data in the output code stream. This organization facilitates many of the desired codec features including rate scalability and progressive recovery by fidelity or resolution.
A packet is a collection of coding pass data comprising e.g. two parts: a header and a body. The header indicates which coding passes are included in the packet, while the body contains the actual coding pass data itself. In a coded bit stream, the header and body need not appear together but may also be transmitted separately.
Each coding pass is associated with a particular component, resolution level, subband, and code-block. In tier-2 coding, one packet may be generated for each component, resolution level, layer, and precinct 4-tuple. A packet need not contain any coding pass data at all. That is, a packet can be empty. Empty packets may sometimes be needed since a packet should be generated for every component-resolution-layer-precinct combination even if the resulting packet conveys no new information.
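The enumeration of one packet per 4-tuple may be sketched as follows. The nesting order shown corresponds to one common progression order (layer outermost); this choice, like the function name, is an assumption for illustration:

```python
# Illustrative sketch: one packet per (component, resolution level, layer,
# precinct) 4-tuple, emitted even when it would carry no coding pass data.
def enumerate_packets(n_components, n_resolutions, n_layers, n_precincts):
    return [(c, r, l, p)
            for l in range(n_layers)        # layer outermost: one common order
            for r in range(n_resolutions)
            for c in range(n_components)
            for p in range(n_precincts)]
```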
Since coding pass data from different precincts are coded in separate packets, using smaller precincts reduces the amount of data contained in each packet. If less data is contained in a packet, a bit error is likely to result in less information loss (since, to some extent, bit errors in one packet do not affect the decoding of other packets). Thus, using a smaller precinct size leads to improved error resilience, while coding efficiency may be degraded due to the increased overhead of having a larger number of packets.
The rate control block 160 may achieve rate scalability through layers. The coded data for each tile is organized into L layers, numbered from 0 to L-1, where L≧1. Each coding pass is either assigned to one of the L layers or discarded. The coding passes containing the most important data may be included in the lower layers, while the coding passes associated with finer details may be included in higher layers. During decoding, the reconstructed image quality may improve incrementally with each successive layer processed. In the case of lossy compression, some coding passes may be discarded, wherein the rate control block 160 may decide which passes to include in the final code stream. In the lossless case, all coding passes should be included. If multiple layers are employed (i.e., L>1), rate control block 160 may decide in which layer each coding pass is to be included. Since some coding passes may be discarded, tier-2 coding may be one source of information loss in the coding path. Rate control can also adjust the quantizer used in the quantization block 130.
In the following, more detailed description of the parallel single-pass coefficient bit encoder of tier-1 encoding is provided with reference to the flow diagram of
In the following, it is assumed that the size of the code-blocks is 32×32 bits and each DWT coefficient has 11 bits. However, the principles may be implemented with other code-block sizes, such as 64×64 bits, and coefficient sizes different from 11 bits. Furthermore, the code-block need not be square but may also be rectangular. According to the vertical stripe scanning model, samples of code-blocks are scanned in the order illustrated in
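The vertical stripe scanning order referred to above may be sketched as follows; the sizes are illustrative and the function name is an assumption:

```python
# Illustrative sketch of the vertical stripe scan: the code-block is processed
# in stripes of four rows; within a stripe, samples are visited column by
# column, top to bottom.
def stripe_scan_order(height, width, stripe_h=4):
    order = []
    for stripe_top in range(0, height, stripe_h):
        for col in range(width):
            for row in range(stripe_top, min(stripe_top + stripe_h, height)):
                order.append((row, col))
    return order
```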
Each coefficient of each bit-plane of the code-block may be assigned a variable called the significance state. The significance state value may be, for example, 1 if the sample is significant and 0 if the sample is not significant (i.e. insignificant). At the beginning of the encoding of a bit-plane the significance state of each sample may be assigned the default value “not significant”. The significance state may then toggle to significant during propagation of the encoding process.
The magnitude bit-planes of the code-block may be examined, beginning e.g. from the most significant magnitude bit-plane in which at least one bit is non-zero (i.e. is one). This bit-plane may be called the most significant non-zero bit-plane. Then, the scanning of samples of the code-block may be started from the most significant non-zero bit-plane using the vertical stripe scanning model.
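Locating the most significant non-zero bit-plane may be sketched as follows; the function name is an illustrative assumption:

```python
# Illustrative sketch: find the most significant magnitude bit-plane that
# contains at least one non-zero bit, where scanning of the code-block starts.
def most_significant_nonzero_plane(coeffs, n_bits):
    for layer in range(n_bits - 2, -1, -1):   # from layer n-2 downwards
        if any((abs(c) >> layer) & 1 for c in coeffs):
            return layer
    return None                               # all-zero code-block
```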
Transformed and quantized coefficients 700 of code-blocks or parts of them may have been stored into a code-block memory 702. In accordance with an embodiment, there may be a significance memory 704 from which two past significance states (σ1 and σ2) of coefficients of a stripe in bit-planes one and two layers higher, respectively, can be read.
A context generator block 706 may operate as follows. The context generator block 706 reads the significance states σ1 and σ2 and the magnitude stripe υ and sign stripe χ of the next stripe in processing order. From these, the magnitude υ and significance σ2 are passed directly to the parallel single-pass context modelling and run-length coding blocks. For the others, context matrices are constructed as illustrated in
The context matrices contain two dimensions, one in time t and one in bit order i. In order to facilitate efficient computing of parallel single-pass coefficient bit modelling, the context matrices can be extended outside the stripe region, with the topmost and bottommost levels always containing the value zero. When the context matrix generator creates a new set of significance bits, they become the values in column t0. At the beginning of each processing step, the values of t0 become t1 and the values of t1 become t2. For the processing, the current stripe is located in time t1, and this is where the magnitude υ and significance σ2 stripes are also aligned.
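The t0→t1→t2 column shift described above may be sketched as follows; rotating buffer references rather than copying values mirrors the buffer reordering mentioned later in this description. The dictionary layout is an assumption for illustration:

```python
# Illustrative sketch of the per-step column shift of a context matrix:
# buffer references are rotated instead of copying element values.
def shift_columns(matrix):
    """matrix maps 't0'/'t1'/'t2' to column buffers; rotate for a new step."""
    matrix['t2'] = matrix['t1']
    matrix['t1'] = matrix['t0']
    matrix['t0'] = [0] * len(matrix['t1'])   # fresh column, filled in later
    return matrix
```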
The significance state of a coefficient of a stripe σSPPt0 of the significance propagation pass context matrix SPP may be obtained 802 e.g. as follows. This is illustrated in
Next, some of the markings used in
The elements of the final context matrix σ corresponding to the stripe to which the current sample location belongs may be indicated as σ[i], 0 ≤ i < 4, or σt1[i], 0 ≤ i < 4. Correspondingly, the elements of the final context matrix σ corresponding to the stripe to the left of the current sample location may be indicated as σt2[i], 0 ≤ i < 4, and the elements of the final context matrix σ corresponding to the stripe to the right of the current sample location may be indicated as σt0[i], 0 ≤ i < 4. Similar notations may be used with the other matrices as well (σ1t2[i], σ1t1[i], σ1t0[i]; σSPPt2[i], σSPPt1[i], σSPPt0[i]; χt2[i], χt1[i], χt0[i]). In accordance with an embodiment, the size (height) of the stripe is 4 bits, wherein the size of the context matrix can be 6 bits high and 3 bits wide. However, the stripe and context matrix may also have other sizes, such as 2 bits and 4×3 bits; 8 bits and 10×3 bits; etc. The width of the stripe may also be other than one bit, e.g. two bits, wherein the context matrix may then also be wider than the above examples.
In the beginning of processing a code-block, the context generator block 706 may initialize all context matrices σSPP, σ, σ1, and χ and context memory of σ1 and σ2, so that each element of the matrices indicates an insignificant state (e.g. the elements are set to 0). Also, in the beginning of processing a stripe row, the context generator block 706 may initialize context matrices σSPP, σ1, and χ, so that when the current stripe is being processed in t1, the t2 values are all insignificant.
The context generator block 706 may construct and output to the parallel single-pass context modeling block 142 and to the run-length encoder 143 e.g. the following information regarding the current stripe 144 as illustrated in
The above mentioned data is input to the parallel single-pass context modeling block 142 for bit-plane encoding. Together with the context matrix generator, this block performs the processing depicted in
The significance state of the bit in the same column but in the next row of the bit-plane which is one layer above the current bit-plane may be examined, i.e. the value of the previous context matrix σ1t1[i+1]. If the significance state is significant (i.e. σ1t1[i+1]≠0), significance propagation pass masks may be used 412 in encoding the context and decision pairs for this magnitude bit. This condition is illustrated in the first row in the block 410 of the flow diagram of
Further, the significance state of the bits in the next column t0 of the bit-plane which is one layer above the current bit-plane may be examined, i.e. the values of the previous context matrix σ1t0[i−1], σ1t0[i] and σ1t0[i+1]. If the significance state is significant (i.e. σ1t0[i−1]≠0 or σ1t0[i]≠0 or σ1t0[i+1]≠0), significance propagation pass masks may be used 412 in encoding the context and decision pairs for this magnitude bit. This condition is illustrated in the second row in the block 410 of the flow diagram of
The significance state of bits in the previous column t2 of the current bit-plane may be examined, i.e. the values of the significance propagation context matrix σSPPt2[i−1], σSPPt2[i] and σSPPt2[i+1]. If the significance state is significant (i.e. σSPPt2[i−1]≠0 or σSPPt2[i]≠0 or σSPPt2[i+1]≠0), significance propagation pass masks may be used 412 in encoding the context and decision pairs for this magnitude bit. This condition is illustrated in the third row in the block 410 of the flow diagram of
When the current bit is the first bit in the stripe (i.e. i=0), the previous row refers outside of the current stripe row, i.e. i−1<0. Hence, in accordance with an embodiment, the significance state value of “insignificant” (0) is used for such bit positions. Correspondingly, when the current bit is the last bit in the stripe (i.e. i=3), the next row refers outside of the current stripe row, i.e. i+1>3. Hence, in accordance with an embodiment, the significance state value of “insignificant” (0) is used for such bit positions.
The significance state of the bit in the same column but in the previous row of the current bit-plane may also be examined, i.e. the value of the significance propagation pass matrix σSPPt1[i−1]. If the significance state is significant (i.e. σSPPt1[i−1]≠0), significance propagation pass masks may be used 412 in encoding the context and decision pairs for this magnitude bit. This condition is illustrated in the fourth row in the block 410 of the flow diagram of
If none of the above mentioned examinations indicate that the significance state is significant, the process may continue to block 414 and use clean up masks in encoding the context and decision pairs for this magnitude bit.
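The four neighbourhood tests described above (block 410), together with the boundary rule of treating positions outside the 4-bit stripe as insignificant, may be sketched as follows. The function and parameter names are illustrative assumptions; the magnitude refinement case (block 408) is decided before these checks and is therefore omitted here:

```python
# Illustrative sketch of the pass-selection checks of block 410: the four
# neighbourhood tests select the significance propagation pass, otherwise
# the clean up pass.  sigma1_* and sigma_spp_* are 4-element stripe columns
# of σ1 and σSPP; positions outside the stripe read as insignificant (0).
def get(col, i):
    """Significance at row i of a 4-element stripe column, 0 outside."""
    return col[i] if 0 <= i < 4 else 0

def select_pass(i, sigma1_t1, sigma1_t0, sigma_spp_t2, sigma_spp_t1):
    if get(sigma1_t1, i + 1):                                 # next row, σ1
        return "SPP"
    if any(get(sigma1_t0, j) for j in (i - 1, i, i + 1)):     # next column, σ1
        return "SPP"
    if any(get(sigma_spp_t2, j) for j in (i - 1, i, i + 1)):  # prev column, σSPP
        return "SPP"
    if get(sigma_spp_t1, i - 1):                              # previous row, σSPP
        return "SPP"
    return "CU"
```

As noted below, the four tests may be evaluated in another order, or short-circuited once the first significant state is found, as the `if` chain above does.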
If either the SPP or the CU mask was used and the current magnitude bit is one, the current magnitude bit will become significant and therefore a sign coding context-decision pair CXS-S may also be given. This pair (728, 730) may share the ID (722) of the primary CX-D pair (724, 726).
It should be noted here that the above mentioned four examinations may be performed in another order than described. Further, it is not necessary to perform all these four examinations if the significance state of some of the examined bits is found significant.
In other words, the examinations in block 410 may be interrupted when the first significant state has been found.
After performing the parallel context modelling with the significance propagation pass mask 412, the clean up mask 414 or the magnitude refinement mask 408, the value of the parameter i may be examined 416 to determine whether all the samples of the current stripe have been examined. If not (i<3), the parameter i is incremented by one 418 to take the next sample in the stripe under examination and the process is repeated from the block 404. If the samples of the stripe have been examined (i=3), it is further examined 420 whether the stripe was the last stripe of the stripe row. If so, the next stripe may be examined, if any. Otherwise, the next stripe row may be examined by setting 422 the parameters to correspond with the new stripe: t0=new column (i.e. the next stripe of the new stripe to be examined), t1=t0 (the new stripe to be examined) and t2=t1 (the stripe just examined, which now becomes the previous stripe to the new stripe).
It should be noted that the functions 440 may be done in parallel, i.e. there is no actual advancement of i; the advancement is shown for illustration purposes only. The parameter i may have the values 0, 1, 2, and 3 simultaneously, therefore also outputting all the context fields (
Then, after processing of the current bit-plane the previous context matrix σ1 becomes the second-most previous context stripe σ2, i.e. the second-most previous context stripe σ2 gets the values of the previous context matrix σ1. Also the final context matrix σ becomes the previous context matrix σ1, i.e. the previous context matrix σ1 gets the values of the final context matrix σ. These can be done e.g. by changing the order of the buffers which are used to store the values of the matrices. Hence, no actual copying of values may be needed.
The process explained above may be repeated until all stripe rows of the code-block on the current bit-plane have been examined.
The process explained above may be repeated until all code-blocks of the current tile have been examined.
The process explained above may be repeated until all the tiles of the current image have been examined.
In the following, the use of the significance propagation pass mask 412, the clean up pass mask 414, and the magnitude refinement pass mask 408 is described in more detail with reference to
The significance propagation pass mask 412 structure illustrated in
The clean up mask 414 structure, illustrated in
The magnitude refinement pass mask 408 structure, illustrated in
As a non-limiting example of the processing method of
Since the context selection may be implementation specific and does not affect the selection of the passes 408, 412, 414, no further details are provided in this context.
The described embodiment may also comprise a run-length coding element 143, which may perform run-length coding for the magnitude bits of the stripe and output the run-length context RL in
The output of the above described parallel single-pass context modeling element 142 may be a context label and decision pair for each bit of a stripe 710. A non-limiting example of the context output 710 for one stripe is depicted in
An example of a content of one bit in the context output 710 is depicted in
In accordance with an embodiment, the context output 710 may have e.g. two bits for the run-length, two bits for the uniform field, and four 11-bit context words for each bit of the stripe, as is illustrated in
The context outputs 710 may be input to the arithmetic encoder 144 which encodes the context outputs and provides the encoding result to the tier-2 coding block 150. The rate control block 160 may perform rate control to adjust the amount of data to be transmitted.
As was already mentioned above, the decoder 200 may perform decoding operations which may mainly correspond to inverse operations of the encoder 100. The encoded code stream may be received and provided to the tier-2 decoding block 210 to form reconstructed arithmetic code words. These code words may be decoded by the tier-1 decoding block 220. The resulting reconstructed quantized coefficient values may be dequantized by the dequantization block 230 to produce reconstructed dequantized coefficient values. These may be inverse transformed by the inverse intracomponent transform block 240 and the inverse multicomponent transform block 250 to produce reconstructed pixel values of the encoded image.
In the above description the tier-1 encoding was performed on quantized coefficient values obtained from the discrete wavelet transform. However, similar encoding operations may also be performed on other kinds of data in a rectangular form, such as pixel values of the original image. However, omitting the discrete wavelet transform may cause less efficient compression of the image.
Further, in the above examples the significance state value for “significant” was 1 and the significance state value for “insignificant” was 0. However, these may also be defined otherwise, for example the other way round. Then, the significance state value for “significant” would be 0 and the significance state value for “insignificant” would be 1.
The architecture of the apparatus 100 and/or 200 may be realized e.g. as a general purpose field programmable gate array (FPGA), application specific instruction set processor (ASIP), an application specific integrated circuit (ASIC) implementation or other kind of integrated circuit implementation, or any combination of these, which performs the procedures described above.
The following describes in further detail suitable apparatus and possible mechanisms for implementing the embodiments of the invention. In this regard reference is first made to
The electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system. However, it would be appreciated that embodiments of the invention may be implemented within any electronic device or apparatus which may require transmission of radio frequency signals.
The apparatus 50 may comprise a housing 30 for incorporating and protecting the device. The apparatus 50 further may comprise a display 32 in the form of a liquid crystal display. In other embodiments of the invention the display may be any suitable display technology suitable to display an image or video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the invention any suitable data or user interface mechanism may be employed. For example the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display. The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery 40 (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator). The term battery discussed in connection with the embodiments may also be one of these mobile energy devices. Further, the apparatus 50 may comprise a combination of different kinds of energy devices, for example a rechargeable battery and a solar cell. The apparatus may further comprise an infrared port 41 for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/firewire wired connection.
The apparatus 50 may comprise a controller 56 or processor for controlling the apparatus 50. The controller 56 may be connected to memory 58 which in embodiments of the invention may store both data and/or may also store instructions for implementation on the controller 56. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller 56.
The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC reader and UICC for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system or a wireless local area network. The apparatus 50 may further comprise an antenna 60 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
In some embodiments of the invention, the apparatus 50 comprises a camera 42 capable of recording or detecting images.
With respect to the accompanying drawing, the following describes an example of a system 10 within which embodiments of the present invention can be utilized.
The example communication devices shown in the system 10 may include, but are not limited to, an electronic device or apparatus 50, a combination of a personal digital assistant (PDA) and a mobile telephone 14, a PDA 16, an integrated messaging device (IMD) 18, a desktop computer 20, a notebook computer 22, and a tablet computer. The apparatus 50 may be stationary or mobile when carried by an individual who is moving. The apparatus 50 may also be located in a mode of transport including, but not limited to, a car, a truck, a taxi, a bus, a train, a boat, an airplane, a bicycle, a motorcycle or any similar suitable mode of transport.
Some or further apparatus may send and receive calls and messages and communicate with service providers through a wireless connection 25 to a base station 24. The base station 24 may be connected to a network server 26 that allows communication between the mobile telephone network 11 and the internet 28. The system may include additional communication devices and communication devices of various types.
The communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global system for mobile communications (GSM), universal mobile telecommunications system (UMTS), time division multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol-internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), email, instant messaging service (IMS), Bluetooth, IEEE 802.11, Long Term Evolution (LTE) and any similar wireless communication technology. A communications device involved in implementing various embodiments of the present invention may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any other suitable connection. In the following, some example implementations of apparatuses utilizing the present invention will be described in more detail.
Although the above examples describe embodiments of the invention operating within a wireless communication device, it should be appreciated that the invention as described above may be implemented as a part of any apparatus comprising circuitry in which radio frequency signals are transmitted and received. Thus, for example, embodiments of the invention may be implemented in a mobile phone, in a base station, or in a computer such as a desktop computer or a tablet computer comprising radio frequency communication means (e.g. a wireless local area network, cellular radio, etc.).
In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits or any combination thereof. While various aspects of the invention may be illustrated and described as block diagrams or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
Programs, such as those provided by Synopsys, Inc. of Mountain View, California, and Cadence Design, of San Jose, California, automatically route conductors and locate components on a semiconductor chip using well-established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like), may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
The foregoing description has provided, by way of exemplary and non-limiting examples, a full and informative description of the exemplary embodiment of this invention. Various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention.
Number | Date | Country | Kind
---|---|---|---
1503689.0 | Mar 2015 | GB | national
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/FI2016/050093 | 2/15/2016 | WO | 00