This Application is a Section 371 National Stage Application of International Application No. PCT/FR2015/052194, filed Aug. 11, 2015 which is incorporated by reference in its entirety and published as WO 2016/024067 on Feb. 18, 2016, not in English.
The present invention relates in a general manner to the field of image processing, and more precisely to the coding and decoding of digital images and of sequences of digital images.
The coding/decoding of digital images applies particularly to images from at least one video sequence having:
The present invention similarly applies to 2D- or 3D-type image coding/decoding.
The invention may particularly, but not exclusively, apply to video coding implemented in current AVC and HEVC video coders and their extensions (MVC, 3D-AVC, MV-HEVC, 3D-HEVC, etc.), and to the corresponding decoding.
Current video coders (MPEG, H.264, HEVC, etc.) use a block representation of the video sequence. The images are cut into blocks, which are able to be cut again recursively. Each block is then coded by intra-image or inter-image prediction. Thus, some images are coded by spatial prediction (intra-prediction), and other images are also coded by temporal prediction (inter-prediction) in relation to one or more coded-decoded reference images, using motion compensation known to a person skilled in the art.
For each block, a residue block, also called prediction residual, corresponding to the original block diminished by a prediction, is coded. The residue blocks are transformed using a mathematical transformation operation, and are then quantized using a mathematical quantization operation, for example of scalar type. Coefficients are obtained at the end of the quantization step. They are then scanned in an order of reading that is dependent on the coding mode that has been chosen. In the HEVC standard, for example, the order of reading is dependent on the prediction performed and can be carried out in the “horizontal”, “vertical” or “diagonal” order.
At the end of the aforementioned scanning, a one-dimensional list of coefficients is obtained. The coefficients of this list are then coded in the form of bits by an entropy coding, the aim of which is to code the coefficients without losses.
The bits obtained after entropy coding are written in a data stream or signal that is intended to be transmitted to the decoder.
In a manner that is known per se, such a signal comprises:
Once the stream has been received by the decoder, decoding is performed image by image, and for each image, block by block. For each block, the corresponding elements of the stream are read. The inverse quantization and the inverse transformation of the coefficients of the blocks are performed in order to produce the decoded prediction residual. The prediction for the block is then calculated and the block is reconstructed by adding the prediction to the decoded prediction residual.
The conventional coding/decoding technique that has just been described certainly allows improvements in coding performance levels. In the video context, it particularly allows:
However, such coding performance levels are not currently optimized and can still be improved, particularly from the point of view of minimizing the rate/distortion cost, or even of choosing the best effectiveness/complexity compromise, criteria that are well known to a person skilled in the art.
In particular, such optimization could concern the aforementioned mathematical transformation operation. This is conventionally a linear transformation that, when applied to a residue block containing a determined number of K pixels (K≥1), allows a set of K real coefficients to be obtained.
In the field of video coding, discrete cosine transforms, DCT, or discrete sine transforms, DST, are generally privileged, particularly for the following reasons:
However, with such transforms, the algorithmic complexity of the video coder increases in a marked fashion particularly if the image to be coded is cut into blocks of large size, for example into blocks of size 16×16 or 32×32.
By way of example, for a block of 16×16 pixels that is to be coded, a number K=256 pixels is intended to undergo a transformation of the aforementioned type.
Conventionally, such a transformation involves applying a first transformation operation A to a residue block x having K pixels organized in the form of a 16×16 matrix, where A is a data matrix having an identical size to that of the residue block x, that is to say 16×16 in the presented example. At the end of this first transformation, a first transformed block A·x is obtained.
A transposition operation t is then applied to the transformed block A·x. At the end of this transposition, a transposed block (A·x)^t is obtained.
Finally, a second transformation operation A is applied to the transposed block (A·x)^t. At the end of this second transformation, a second transformed block X having K=16×16 pixels is obtained, such that:
X = A·(A·x)^t
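By way of illustration only, a minimal sketch of this separable transform is given below; it assumes, purely for the purposes of the example, that A is an orthonormal DCT-II matrix of size 16×16, which is one possible choice and not a requirement of the transformation described above.

```python
# Minimal sketch of the separable transform X = A·(A·x)^t described above,
# assuming A is a 16x16 orthonormal DCT-II matrix (illustrative choice only).
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    g = np.arange(n).reshape(1, -1)   # sample index
    a = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * g + 1) * k / (2 * n))
    a[0, :] = np.sqrt(1.0 / n)        # normalization of the DC row
    return a

n = 16                                # block side, hence K = n*n = 256 pixels
A = dct_matrix(n)
x = np.random.randint(-128, 128, size=(n, n)).astype(float)  # residue block

X = A @ (A @ x).T                     # first pass, transposition, second pass
assert X.shape == (n, n)              # K = 256 transformed coefficients
```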
If only a residue block x presenting itself in the form of a column matrix having 16 pixels is considered, then the algorithms used by the coder already perform numerous operations, such as 2×81 additions and 2×33 multiplications, in order to obtain a transformed block X in the form of a column matrix having 16 pixels. Thus, it clearly appears that, in the case of the residue block of 16×16 pixels in the presented example, the algorithms used by the coder must perform the aforementioned number of operations 16 times, that is to say 16×2×81 additions and 16×2×33 multiplications, which amounts to performing 2592 additions and 1056 multiplications in total in order to obtain the K=256 transformed coefficients for a block of size 16×16.
The following table summarizes the number of operations to be performed according to the size of the current pixel block to be coded that is under consideration, when a transformation of DCT type is applied:
It can be concluded from the table above that, particularly for blocks of large size, 16×16 and 32×32, the number of operations becomes particularly large.
Further, when the transform is applied to blocks of size 16×16 or 32×32, it results in a large number of coefficients, 256 or 1024 respectively, being obtained, which are subsequently quantized and then coded by entropy coding.
In the HEVC standard, for example, entropy coding (for example of arithmetic coding or Huffman coding type) is performed in the following manner:
Thus, the greater the number of transformed coefficients, the more complex the subsequent quantization and entropy coding operations are to perform.
It is also worth noting that the very large number of coefficients obtained following such transforms is very costly to signal to the decoder and brings about a large reduction in the transmission rate gain for the coded data between the coder and the decoder.
As far as the decoding of a current block is concerned, for example a current block of 16×16 pixels to be decoded, the 256 coefficients obtained at the end of the aforementioned entropy coding undergo a similar transformation to that performed in the coding.
Consequently, for the decoder, the complexity of the algorithms necessary for transforming the coefficients obtained at the end of the entropy coding is the same as for the algorithms used in the coder to perform the transformation of the current residual block.
One of the aims of the invention is to overcome disadvantages of the aforementioned prior art.
To this end, one subject of the present invention concerns a method for coding at least one image cut into blocks, comprising, for a current block having K pixels to be coded, where K≥1, the steps involving:
The coding method according to the invention is remarkable in that it comprises the steps involving:
Since the determined data set systematically contains a number Mi of data lower than the number K of pixels of the current block, such a provision advantageously allows fewer data to be coded than in the prior art, which potentially requires K data to be coded.
According to a particular embodiment, the group of pixel blocks containing a number Mi of blocks having K pixels and each representing a predetermined texture belongs to L groups of pixel blocks respectively containing a number M1, M2, . . . , Mi, . . . , ML of blocks having K pixels and each representing a predetermined texture, where 1≤i≤L, the coding method comprising, prior to the step of coding the Mi data of the determined data set, the steps involving:
When the selected data set contains a number of data lower than the number K of pixels of the current block, such a provision allows fewer data to be coded than in the prior art, which systematically requires K data to be coded. Moreover, on account of a larger number of texture block groups being available for coding, it is thus possible to test multiple possibilities for coding a current block and, in the coding context, to strike a compromise between a small number of data to be coded and a high restoration quality for the reconstructed image.
According to a particular embodiment, the set of Mi data determined or even selected according to a predetermined coding performance criterion contains a single datum.
Such a provision advantageously allows the reduction in the coding cost to be optimized and avoids having to indicate the position of the coded datum associated with the determined or selected set of Mi data, since Mi=1.
According to another particular embodiment, the predetermined coding performance criterion is the minimization of the rate-distortion cost of the current block to be coded.
The choice of such a criterion optimizes the selection of the set of Mi data from among the L available data sets.
According to another particular embodiment, the coding method further comprises a step of preparation of a data signal containing:
Such a provision advantageously allows reduction of the cost of signaling if the selected data set contains a number of data that is smaller than the number K of pixels of the current block.
The various aforementioned embodiments or implementation features can be added independently or in combination with one another to the steps of the coding method as defined above.
The invention also concerns a device for coding at least one image cut into blocks, comprising, for a current block having K pixels to be coded, where K≥1, a prediction module for predicting the current block using at least one predictor block having K pixels and for determining a residue block having K pixels and representing the difference between the predictor block and the current block.
Such a coding device is remarkable in that it comprises:
Such a coding device is particularly capable of implementing the aforementioned coding method.
The invention also concerns a method for decoding a data signal representing at least one image cut into blocks, comprising, for a current block having K pixels to be reconstructed, where K≥1, a step involving determining at least one predictor block for the current block to be reconstructed, said predictor block containing K pixels.
Such a decoding method is remarkable in that it comprises the steps involving:
Such a provision advantageously allows, if the read data set contains a number of data that is smaller than the number K of pixels of the current block to be decoded, decoding of a smaller number of data than in the prior art, which systematically requires the decoding of K data. This results in less complex and faster decoding.
According to a particular embodiment, the read set of Mi data contains a single datum.
According to a particular embodiment, the group of pixel blocks containing a given number Mi of blocks having K pixels and each representing a predetermined texture belongs to L groups of pixel blocks respectively containing a number M1, M2, . . . , Mi, . . . , ML of blocks having K pixels and each representing a predetermined texture, where 1≤i≤L.
According to another particular embodiment:
According to yet another particular embodiment, the method further entails the steps involving:
Such a provision allows a significant bit rate gain, owing to the fact that it is possible to reduce the cost of signaling the L groups of pixel blocks respectively containing a number M1, M2, . . . , Mi, . . . , ML of blocks having K pixels and each representing a predetermined texture. By way of example, if Mi=1, the positive or negative sign of the unique datum read allows identification of 2L groups of blocks having a predetermined texture, which are coded on log2(L) bits, for example, rather than on log2(2L) bits.
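A minimal numerical sketch of this signaling saving is given below; the value L=8 is hypothetical and chosen purely for illustration.

```python
# Hedged sketch of the sign trick described above: with Mi = 1, the sign of the
# single datum already transmitted distinguishes a group from its negated
# counterpart, so 2L effective groups are addressed while only log2(L) bits of
# explicit index signaling are needed. L = 8 is a hypothetical value.
import math

L = 8
addressable_groups = 2 * L             # groups distinguishable thanks to the sign
index_bits_with_sign = math.log2(L)    # 3 bits actually spent on the group index
index_bits_without = math.log2(2 * L)  # 4 bits otherwise needed for 2L groups
```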
The various aforementioned embodiments or implementation features can be added independently or in combination with one another to the steps of the decoding method as defined above.
Correlatively, the invention also concerns a device for decoding a data signal representing at least one image cut into blocks, comprising a determining module for determining, for a current block of K pixels to be reconstructed, where K≥1, at least one predictor block for the current block to be reconstructed, said predictor block containing K pixels.
Such a decoding device is remarkable in that it comprises:
Such a decoding device is particularly capable of implementing the aforementioned decoding method.
The invention also concerns a computer program having instructions for implementing one of the coding and decoding methods according to the invention when it is executed on a computer.
This program can use any programming language and be in the form of source code, object code, or intermediate code between source code and object code, such as in a partial compiled form, or in any other desirable form.
The invention is also directed at a computer-readable recording medium on which is recorded a computer program, this program having instructions adapted to implement one of the coding or decoding methods according to the invention, as are described above.
The recording medium may be any entity or device capable of storing the program. By way of example, the medium may have a storage means, such as ROM, for example a CD-ROM or a microelectronic circuit ROM, or even a magnetic recording means, for example a USB key or a hard disk.
On the other hand, the recording medium may be a transmissible medium such as an electrical or optical signal, which can be conveyed via an electrical or optical cable, by radio or by other means. The program according to the invention can particularly be downloaded on an Internet-type network.
Alternatively, the recording medium may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the aforementioned coding or decoding method.
Other features and advantages will emerge on reading a preferred embodiment described referring to the figures, in which:
A first embodiment of the invention will now be described, in which the coding method according to the invention is used to code an image or a sequence of images according to a binary stream close to that obtained by coding conforming to the HEVC standard, for example.
In this embodiment, the coding method according to the invention is implemented in a software or hardware manner, for example, by modifications to a coder initially conforming to the HEVC standard. The coding method according to the invention is represented in the form of an algorithm having steps C1 to C11 as represented in
According to the embodiment of the invention, the coding method according to the invention is implemented in a coding device CO represented in
As illustrated in
The coding method represented in
During a step C1 represented in
In particular, such a coding unit groups together sets of pixels of rectangular or square shape, also called blocks, macroblocks, or sets of pixels having other geometric shapes.
Said blocks B1, B2, . . . , Bu, . . . , BS are intended to be coded according to a predetermined order of scanning, which is of raster scan type, for example. This means that the blocks are coded one after the other, from left to right.
Other types of scanning are of course possible. Thus, it is possible to cut the image ICj into multiple subimages called slices and to independently apply a cut of this type to each subimage. It is also possible to code not a succession of rows, as explained above, but a succession of columns. It is also possible to scan the rows or columns in one direction or in the other.
According to one example, the blocks B1, B2, . . . , Bu, . . . , BS have a square shape and all contain K pixels, where K≥1. Depending on the size of the image, which is not necessarily a multiple of the size of the blocks, the last blocks on the left and the last blocks at the bottom may not be square. In an alternative embodiment, the blocks may be of rectangular size and/or not aligned with one another, for example.
Each block may furthermore itself be divided into subblocks that are themselves subdivisible.
During a step C2 represented in
During a step C3 represented in
Such a predictor block is a block of K pixels that has already been coded or even coded and then decoded, or not. Such a predictor block is previously stored in the buffer memory TAMP_CO of the coder CO as represented in
At the end of prediction step C3, an optimum predictor block BPopt is obtained following entry of said predetermined prediction modes into competition, for example by minimization of a rate-distortion criterion well known to a person skilled in the art. The block BPopt is considered to be an approximation of the current block Bu. The pieces of information relating to this prediction are intended to be written to a data signal or stream to be transmitted to a decoder. Such pieces of information particularly comprise the type of prediction (inter or intra) and, if need be, the selected prediction mode, the type of partitioning of the current block if the latter has been subdivided, the reference image index and the displacement vector used in the case where an inter-prediction mode has been selected. These pieces of information are compressed by the coder CO.
During a step C4 represented in
A set of residual data, called residue block Bru, is then obtained at the end of step C4. The residue block Bru contains K residual pixels x1, x2, . . . , xg, . . . xK, where 1≤g≤K.
Steps C3 and C4 are implemented by a predictive coding software module PRED_CO represented in
During a step C5 represented in
A block BTv having a predetermined texture and considered in the group Gi contains K pixels tv,1, tv,2, . . . , tv,g, . . . , tv,K. The predefined group Gi of blocks BT1, BT2, . . . BTv, . . . , BTMi is previously stored in the memory MEM_CO of the coder CO of
More precisely, in accordance with step C5, Mi data Xu,1, Xu,2, . . . , Xu,Mi are calculated, which are scalar values obtained using the following relationship:
Such a step C5 is implemented by a calculation software module CAL1_CO represented in
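The exact relationship is given in the corresponding figure; as a non-authoritative sketch, and assuming (consistently with the pixel-by-pixel multiplication mentioned further below) that each datum Xu,v is the scalar product of the residue block Bru with the texture block BTv, step C5 can be illustrated as follows, with purely illustrative names and dimensions.

```python
# Sketch of step C5, under the assumption that each datum X_{u,v} is the scalar
# product of the residue block Bru with the texture block BT_v of the group Gi.
import numpy as np

def project_on_textures(residue, textures):
    """residue: K residual pixels x_1..x_K (flattened);
    textures: list of Mi texture blocks, each with K pixels t_{v,1}..t_{v,K}.
    Returns the Mi scalar data X_{u,1}, ..., X_{u,Mi}."""
    return np.array([float(np.dot(residue, t)) for t in textures])

K = 16
residue_Bru = np.random.randn(K)                     # residue block Bru
group_Gi = [np.random.randn(K) for _ in range(2)]    # group Gi with Mi = 2 texture blocks
X_u = project_on_textures(residue_Bru, group_Gi)     # Mi data, with Mi < K
```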
Referring to
During a step C6 represented in
During a step C7 represented in
In another embodiment, the Mi quantized data are coded in a similar manner to that used by HEVC. To this end, the Mi quantized data are subjected to entropy coding (for example of arithmetic coding or Huffman coding type) as follows:
Such an entropy coding step is implemented by an entropy coding software module MCE represented in
At the end of the aforementioned coding step C7, a data signal or stream F that contains the set of the coded data of the block Bqu of quantized data is then delivered. Such a stream is then transmitted by a communication network (not represented) to a remote terminal. The latter has the decoder DO represented in
During a step C8 represented in
Such a dequantization step is performed by an inverse quantization software module MQ−1_CO, as represented in
During a step C9 represented in
At the end of step C9, Mi decoded residue blocks BDru,1, BDru,2, . . . , BDru,Mi are obtained.
During a step C10 represented in
In the particular case in which Mi=1, that is to say that a single decoded residue block BDru,v has been calculated, the decoded residue block BDru is equal to the decoded residue block BDru,v.
When Mi≠1, more precisely, the decoded residue block BDru is obtained by adding the Mi decoded residue blocks BDru,1, BDru,2, . . . , BDru,Mi using the following relationship:
The set of the aforementioned steps C9 to C10 are carried out by a calculation software module CAL2_CO as represented in
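A non-authoritative sketch of steps C9 and C10 is given below; it assumes that each decoded residue block BDru,v is the texture block BTv weighted by the corresponding dequantized datum, which is consistent with the projection of step C5, and uses illustrative names.

```python
# Sketch of steps C9-C10: each dequantized datum weights its texture block BT_v
# (assumption consistent with the projection of step C5), and the Mi partial
# blocks BDru,v are summed to form the decoded residue block BDru.
import numpy as np

def reconstruct_residue(dequantized, textures):
    """dequantized: Mi dequantized data; textures: Mi texture blocks of K pixels."""
    partial_blocks = [xd * t for xd, t in zip(dequantized, textures)]  # BDru,v
    return np.sum(partial_blocks, axis=0)                              # BDru

K = 16
textures = [np.random.randn(K) for _ in range(2)]    # group Gi, Mi = 2
dequantized = np.array([3.0, -1.0])                  # hypothetical dequantized data
BDru = reconstruct_residue(dequantized, textures)    # decoded residue, K pixels
```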
During a step C11 represented in
Such a step is implemented by an inverse predictive coding software module PRED−1_CO represented in
Coding steps C2 to C11 that have just been described above are then implemented for each of the blocks B1, B2, . . . , Bu, . . . , BS to be coded from the current image ICj under consideration.
The coding method that has just been described above requires fewer calculation operations than in the aforementioned prior art, given that in calculation step C5, the residue block Bru is multiplied pixel by pixel by a predefined number Mi of blocks having K pixels BT1, BT2, . . . BTv, . . . , BTMi that is smaller than the number K of pixels of the current block Bu.
The result of this is that the data coded following entropy coding step C7 are fewer than in the prior art, which makes it possible to improve the transmission rate gain for such data to the decoder DO of
The four exemplary embodiments mentioned below indicate the number of operations in order to obtain the Mi data Xu,1, Xu,2, . . . , Xu,Mi following the implementation of calculation step C5:
In the case of HEVC coding, it is further possible to reduce the bit rate in the course of entropy coding step C7. As has already been explained earlier on in the introduction to the description, in accordance with HEVC coding, the quantized data are transmitted by indicating the position of the last nonzero quantized datum. Such a position is represented by two indicators called posX and posY that indicate, in the block Bqu of Mi quantized data Xqu,1, Xqu,2, . . . , Xqu,Mi, the coordinates of the last nonzero quantized datum.
By virtue of the invention, the values of posX and/or posY can be prevented from being transmitted in the stream F.
According to a first example, in an embodiment in which Mi=1, there is just a single quantized datum to be transmitted. The position of this single quantized datum is therefore known, since it is necessarily the first quantized datum of the block Bqu. Thus, neither posX nor posY is signaled in the stream F.
According to a second example, in an embodiment in which Mi=2, there are just two quantized data Xqu,1 and Xqu,2 to be transmitted. To this end, only the value of posX or of posY needs to be signaled, a single signaling being sufficient to cover the alternatives Xqu,1 or Xqu,2. Such a value of posX or posY may even be coded on a single bit, for example a bit set to 0 to indicate that the last nonzero quantized datum is Xqu,1 and a bit set to 1 to indicate that the last nonzero quantized datum is Xqu,2.
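A minimal sketch of this conditional signaling is given below; the bitstream is modeled as a simple list and the function name is illustrative, not an HEVC API.

```python
# Sketch of the signaling saving described above: with Mi = 1 no position is
# written, with Mi = 2 a single bit suffices to indicate the last nonzero datum.
def signal_last_position(bitstream, Mi, last_index):
    if Mi == 1:
        return                                         # position is implicit
    if Mi == 2:
        bitstream.append(0 if last_index == 0 else 1)  # one bit: Xq_{u,1} or Xq_{u,2}
    else:
        # general case: posX and posY would be signaled as in conventional coding
        raise NotImplementedError

stream = []
signal_last_position(stream, Mi=2, last_index=1)       # emits a single bit set to 1
```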
It is worth noting, however, that the distortion of the current image ICj may be increased taking account of the reduction in coded data that is obtained by virtue of the aforementioned coding method.
To limit this effect, according to another exemplary embodiment, the memory MEM_CO of the coder CO of
Referring to
The first group G1 contains a number M1 of blocks having a predetermined texture that is, by way of example, equal to the number of pixels of the current block, or K=16. Therefore, M1=16. Thus, as illustrated in
The second group G2 contains a number M2 of blocks having a predetermined texture, such that M2=1. Thus, as illustrated in
The third group G3 contains a number M3 of blocks having a predetermined texture, such that M3=2. Thus, as illustrated in
In accordance with the embodiment of
During step C51 represented in
More precisely, during step C51, for a considered group Gi of blocks having a predetermined texture BTi,1, BTi,2 . . . BTi,v, . . . , BTi,Mi, for which a block BTi,v having a predetermined texture and considered in the group Gi contains K pixels ti,v,1, ti,v,2, . . . , ti,v,g, . . . , ti,v,K, Mi data Xu,i,1, Xu,i,2, . . . , Xu,i,Mi are calculated, which are scalar values obtained using the following relationship:
At the end of step C51, L sets each having M1, M2, . . . , Mi, . . . ML data are obtained.
During a step C610 represented in
During a step C611 represented in
At the end of step C611, L blocks of coded data BCu,1, BCu,2, . . . , BCu,i, . . . , BCu,L are obtained.
During a step C612 represented in
During a step C613 represented in
During a step C614 represented in
Such a calculation step is performed by an information coding software module MCI_CO, as represented in
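Taken together, steps C51 and C610 to C614 amount to putting the L groups into competition. A non-authoritative sketch is given below; it assumes a classical Lagrangian rate-distortion cost J = D + λ·R and uses simplified placeholders for the quantization, the rate estimate and the reconstruction, with illustrative names.

```python
# Sketch of the competition between the L groups (steps C51 and C610-C614),
# assuming a Lagrangian cost J = D + lambda * R; quantization, rate estimate and
# reconstruction are simplified placeholders, not the actual coder operations.
import numpy as np

def rd_select_group(residue, groups, step=8.0, lam=0.1):
    best = None
    for i, textures in enumerate(groups):                          # group Gi
        data = np.array([np.dot(residue, t) for t in textures])        # step C51
        quantized = np.round(data / step)                               # step C610
        rate = np.count_nonzero(quantized) * 8                          # crude rate (C611)
        dequantized = quantized * step                                   # step C612
        rec = sum(xd * t for xd, t in zip(dequantized, textures))        # decoded residue
        dist = float(np.sum((residue - rec) ** 2))                       # distortion
        cost = dist + lam * rate                                         # RD cost (C613)
        if best is None or cost < best[0]:
            best = (cost, i, quantized)
    return best                 # the selected group index is then coded (step C614)

K = 16
residue = np.random.randn(K)
groups = [[np.random.randn(K)], [np.random.randn(K) for _ in range(2)]]  # L = 2 groups
cost, chosen_index, coded_data = rd_select_group(residue, groups)
```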
Identical steps to each of steps C8 to C11 of
The coding method that has just been described in connection with
According to an embodiment of the coding method of
According to a preferred embodiment:
In a particular case in which Mi=K, the operation allowing:
Thus, the obtainment of the K data Xu,1, Xu,2, . . . , Xu,K or of the K data Xu,i,1, Xu,i,2, . . . , Xu,i,K is advantageously implemented using a fast algorithm well known to a person skilled in the art.
Advantageously, the number L−1 is chosen as a power of 2. In this way, the index of the group associated with the block BCu,o of coded data that is selected following step C613 of
In the case of a flag set to 1, the index i of the group Gi associated with the block BCu,o of coded data that is selected following step C613 and that forms part of the L−1 remaining groups of pixel blocks that each contain a single block having a predetermined texture is coded during the aforementioned step C614. Such an index may be represented by a code having a fixed or variable length (of Huffman or arithmetic code type).
In accordance with the preferred embodiment described above, it has been observed that with L=5, L=9, L=17 or L=33, the coder CO of
More precisely, the best coding performance levels are obtained:
According to a variant embodiment represented in
Whether the coding method according to the invention is implemented according to the embodiment represented in
Such a method involves generating one or more groups of blocks having a predetermined texture by learning.
To this end, prior to the coding method according to the invention, a large number of blocks of residue data is collected.
Learning is carried out in two main steps.
According to a first step, one or more different groups GP1, GP2, . . . , GPw, . . . , GPL of blocks of residual data, where 1≤w≤L, are gathered together.
This gathering step is performed by comparing a distortion criterion obtained after coding/decoding residual data by each group of texture blocks of a set of L groups of texture blocks G1, G2, . . . , Gw, . . . , GL that each comprise M1, M2, . . . , Mw, . . . , ML texture blocks. A considered block of residual data is assigned to a considered group GPw if the lowest distortion is obtained by the coding of this considered block of residual data with the corresponding group Gw of texture blocks.
A group GPw containing a given number Nw of blocks BRw,1, BRw,2, . . . , BRw,z, . . . , BRw,Nw of residual data is then considered from among the groups GP1, GP2, . . . , GPw, . . . , GPL.
According to a second step, for said considered group GPw, the group of texture blocks for this group is updated.
To this end:
The texture blocks of the group Gw are then updated in order to make them more suited to the coding of the residual data blocks. This update is performed by an identification method between the set of residual data of the considered group GPw and the residual data quantized with the group Gw of texture blocks. This identification of the optimum group of texture blocks can be performed conventionally by singular value decomposition (SVD) or by principal component analysis, in order to update the set of texture blocks BTw,1, BTw,2, . . . BTw,v, . . . , BTw,Mw and thus find the group of texture blocks minimizing a distortion criterion.
The above process is reiterated, namely the blocks of residual data of the group GPw are again coded by the set of updated texture blocks BTw,1, BTw,2, . . . BTw,v, . . . , BTw,Mw, and then said texture blocks are updated until convergence of the distortion criterion is obtained.
The operations performed during this step are reiterated for each of the groups GP1 to GPL of blocks of residual data in order to obtain G1 to GL groups, respectively, of corresponding texture blocks.
The first and second steps are then reiterated until convergence of the sum of the distortions obtained for each of the groups GP1, GP2, . . . , GPw, . . . , GPL of blocks of residual data is obtained. The aforementioned first and second steps may also include a metric taking into account the number of quantized data transmitted, particularly using an estimation of the bit rate for a given quantization.
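A non-authoritative sketch of this learning procedure is given below; for simplicity it assumes that each group Gw contains a single texture block (Mw=1), updated as the principal singular vector of its assigned residual blocks, and it uses a simplified code/decode to measure distortion. All names are illustrative.

```python
# Sketch of the two-step learning described above, restricted to Mw = 1 texture
# block per group, updated via SVD (principal singular vector of the assigned
# residual blocks); distortion is measured after a simplified code/decode.
import numpy as np

def code_decode(residue, texture, step=8.0):
    q = np.round(np.dot(residue, texture) / step)     # project then quantize
    return (q * step) * texture                       # dequantize and rebuild

def learn_texture_groups(residual_blocks, L, iterations=10, step=8.0):
    K = residual_blocks.shape[1]
    textures = [np.eye(K)[w % K] for w in range(L)]   # arbitrary initialization
    for _ in range(iterations):
        # Step 1: assign each residual block to the group giving the lowest distortion
        assignments = []
        for r in residual_blocks:
            dists = [np.sum((r - code_decode(r, t, step)) ** 2) for t in textures]
            assignments.append(int(np.argmin(dists)))
        # Step 2: update each group's texture block from its assigned residues (SVD)
        for w in range(L):
            members = residual_blocks[[a == w for a in assignments]]
            if len(members):
                _, _, vt = np.linalg.svd(members, full_matrices=False)
                textures[w] = vt[0]                   # principal direction
    return textures

blocks = np.random.randn(200, 16)                     # collected residual blocks
groups = learn_texture_groups(blocks, L=4)
```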
Detailed Description of the Decoding Part
An embodiment of the decoding method according to the invention will now be described, in which the decoding method is implemented in a software or hardware manner by modifications to a decoder initially conforming to the HEVC standard. The decoding method according to the invention is represented in the form of an algorithm having steps D1 to D9 as represented in
As illustrated in
The decoding method represented in
To this end, information representing the current image ICj to be decoded is identified in the data signal or stream F received on the decoder, as delivered following the coding method of
Referring to
Such an identification step is implemented by a stream analysis identification module MI_DO, as represented in
The quantized blocks Bq1, Bq2, . . . , Bqu, . . . BqS are intended to be decoded in a predetermined scanning order, which is sequential, for example, that is to say that said blocks are intended to be decoded one after the other in accordance with the raster scan order in which they have been coded.
Types of scanning other than that which has just been described above are of course possible and depend on the scanning order chosen on coding, examples of which have been mentioned above.
In the preferred embodiment, the blocks to be decoded have a square shape and all have the same size. Depending on the size of the image, which is not necessarily a multiple of the size of the blocks, the last blocks on the left and the last blocks at the bottom may not be square. In an alternative embodiment, the blocks may be of rectangular size and/or not aligned with one another, for example.
Each block may moreover be itself divided into subblocks that are themselves subdivisible.
During a step D2 represented in
During a step D3 represented in
Only when the coding method of
Such a piece of identification information consists of the index i of the group Gi, for example.
This reading step D3 is of no use when the coding method of
The reading step D3 is performed by a reading software module ML_DO, as represented in
More precisely, the reading step D3 comprises a substep D31 of decoding the amplitude and sign information associated with each of the Mi coded quantized data of the block Bqu of quantized data. In the preferred embodiment, the data decoding is entropy decoding of arithmetic or Huffman type. The substep D31 then involves:
At the end of the aforementioned substep D31, a number Mi of pieces of digital information each associated with the Mi quantized data Xqu,1, Xqu,2, . . . , Xqu,Mi that have been coded during the aforementioned step C7 is obtained.
As is known per se, during the aforementioned substep D31, the index opt of the optimum predictor block BPopt that has been used to predict the current block in step C3 of
Such an entropy decoding substep D31 is implemented by an entropy decoding software module MDE represented in
During a step D4 represented in
Such a dequantization step is performed by an inverse quantization software module MQ−1_DO, as represented in
During a step D5 represented in
The abovementioned identification step D5 is performed by a processing software module TR_DO, as represented in
In the example represented in
When it is the coding method of
When it is the coding method of
In a manner similar to the coding method according to the invention, a block BTv having a predetermined texture and considered in the group Gi contains K pixels tv,1, tv,2, . . . , tv,g, . . . , tv,K.
During a step D6 represented in
At the end of step D6, Mi decoded blocks Bru,1, Bru,2, . . . , Bru,Mi of residual data are obtained.
During a step D7 represented in
In the particular case in which Mi=1 and K=16, that is to say that a single decoded residual block Bru,v has been calculated, the decoded residual block Bru is equal to the decoded residual block Bru,v.
Such a provision is particularly advantageous on decoding because the number of data to be decoded is far smaller than in the prior art for which Mi=K. The result of this is less complex and faster decoding.
When Mi≠1, the decoded residual block Bru is obtained by adding the Mi decoded residual blocks Bru,1, Bru,2, . . . , Bru,Mi.
More generally, step D7 is expressed according to the following relationship:
At the end of step D7, a decoded residual block Bru having K pixels xq1, xq2, . . . , xqK is obtained.
The set of the aforementioned steps D6 to D7 is carried out by a calculation software module CAL_DO as represented in
During a step D8 represented in
Such a step is implemented by an inverse predictive decoding software module PRED−1_DO represented in
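As a non-authoritative sketch, and under the same assumptions as on the coder side (each dequantized datum weights its texture block in step D6), steps D7 and D8 can be illustrated as follows, with illustrative names and values.

```python
# Sketch of steps D7 and D8: the Mi partial residual blocks (obtained in D6 as on
# the coder side) are summed into the decoded residual block Bru, and the current
# block is reconstructed by adding the optimum predictor block BPopt.
import numpy as np

def reconstruct_block(partial_residues, predictor):
    Bru = np.sum(partial_residues, axis=0)   # step D7: decoded residual block
    return predictor + Bru                   # step D8: decoded block BDu

K = 16
partial = [3.0 * np.random.randn(K), -1.0 * np.random.randn(K)]  # Mi = 2 blocks from D6
BPopt = np.random.randn(K)                                        # predictor block
BDu = reconstruct_block(partial, BPopt)                           # reconstructed block
```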
At the end of step D8, the decoded block Bu obtained is stored in the buffer memory TAMP_DO of
During a step D9 represented in
Such a step is implemented by an image reconstruction software module URI as represented in
Decoding steps D2 to D9 that have just been described above are then implemented for each of the blocks B1, B2, . . . , Bu, . . . , BS to be decoded for the current image ICj under consideration.
In a variant embodiment of the decoding method as represented in
Such a step is implemented by the reading module ML_DO of
Still referring to
It goes without saying that the embodiments that have been described above have been provided purely by way of indication and without applying any limitation, and that numerous modifications can easily be made by a person skilled in the art without, however, departing from the scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
14 57768 | Aug 2014 | FR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/FR2015/052194 | 8/11/2015 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2016/024067 | 2/18/2016 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
6763070 | Lee | Jul 2004 | B1 |
20090175332 | Karczewicz | Jul 2009 | A1 |
20110274164 | Sole | Nov 2011 | A1 |
20130089151 | Do | Apr 2013 | A1 |
20150071344 | Tourapis | Mar 2015 | A1 |
20150373330 | Jeong | Dec 2015 | A1 |
Other Publications:
Li et al., "Joint Group and Residual Sparse Coding for Image Compressive Sensing," State Key Lab of Integrated Services Networks, Xidian University, Jan. 24, 2019.
Vaswani, "LS-CS-Residual (LS-CS): Compressive Sensing on Least Squares Residual," IEEE, vol. 58, No. 8, Aug. 2010.
Mun et al., "Residual Reconstruction for Block-Based Compressed Sensing of Video," Mississippi State University, IEEE, 2011.
Babacan et al., "Reference-Guided Sparsifying Transform Design for Compressive Sensing MRI," IEEE, 2011.
Ma et al., "Group-Based Truncated L1-2 Model for Image Inpainting," IEEE, Sep. 2017.
Shu et al., "Non-Local Compressive Sampling Recovery," University of Illinois at Urbana-Champaign, 2014.
International Search Report dated Oct. 22, 2015 for corresponding International Application No. PCT/FR2015/052194, filed Aug. 11, 2015.
Yifu Zhang et al., "A Novel Image/Video Coding Method Based on Compressed Sensing Theory," Acoustics, Speech and Signal Processing, ICASSP 2008, IEEE International Conference, Piscataway, NJ, USA, Mar. 31, 2008, pp. 1361-1364, XP031250813.
R.J. Clarke, "Transform Coding of Images: Chapter 3, Orthogonal Transforms for Image Coding," Academic Press, Inc., Jan. 1, 1990, XP055181713.
Written Opinion of the International Searching Authority dated Oct. 22, 2015 for corresponding International Application No. PCT/FR2015/052194, filed Aug. 11, 2015.
ISO/IEC 23008-2, ITU-T Recommendation H.265, High Efficiency Video Coding (HEVC); "Series H: Audiovisual and Multimedia Systems, Infrastructure of Audiovisual Services—Coding of Moving Video," Oct. 2014.
Zhu et al., "A Reversibility-Gain Model for Integer Karhunen-Loeve Transform Design in Video Coding," Frontiers of Information Technology & Electronic Engineering, 16(10), pp. 883-891, 2015.
Lee et al., "DCT Block Conversion for H.264/AVC Video Transcoding," Dept. of Computer Engineering, Pusan National Univ., Busan, Korea, in Cunha J.C., Medeiros P.D. (eds), Euro-Par 2005 Parallel Processing, Lecture Notes in Computer Science, vol. 3648, Springer, Berlin, Heidelberg, 2005.
Pei et al., "The Integer Transforms Analogous to Discrete Trigonometric Transforms," IEEE Transactions on Signal Processing, vol. 48, No. 12, pp. 3345-3364, Dec. 2000.
Erturk, "Warped Discrete Cosine Transform-Based Low Bit-Rate Block Coding Using Image Downsampling," EURASIP Journal on Advances in Signal Processing, Hindawi Publishing Corporation, Article ID 43948, 2007.
Number | Date | Country | |
---|---|---|---|
20170230688 A1 | Aug 2017 | US |