IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Information

  • Patent Application
    20220337865
  • Date Filed
    September 23, 2020
  • Date Published
    October 20, 2022
Abstract
The present disclosure relates to an image processing device and an image processing method capable of suppressing deterioration of image quality and deterioration of encoding efficiency.
Description
TECHNICAL FIELD

The present disclosure relates to an image processing device and an image processing method, and more particularly to an image processing device and an image processing method capable of suppressing deterioration of image quality and deterioration of encoding efficiency.


BACKGROUND ART

In recent years, in order to further improve the encoding efficiency beyond AVC (Advanced Video Coding) and HEVC (High Efficiency Video Coding), standardization of a coding method called VVC (Versatile Video Coding) is in progress (see also the documents supporting the embodiments described later).


For example, NPL 1 discloses a technique of applying motion compensation to a luminance component using an optical flow.


CITATION LIST
Non Patent Literature



  • [NPL 1]

  • Jiancong (Daniel) Luo, Yuwen He, CE4: Prediction refinement with optical flow for affine mode (Test 2.1), JVET-O0070 (version 5, date 2019-07-10)



SUMMARY
Technical Problem

Incidentally, motion compensation using optical flow has conventionally been applied only to the luminance component and has not been applied to the color difference component. Therefore, when there is motion that requires a large affine transform, the difference between the luminance component and the color difference component becomes large, and it is considered that deterioration of subjective image quality and deterioration of encoding efficiency occur.


The present disclosure has been made in view of such a situation, and is intended to suppress deterioration of image quality and deterioration of encoding efficiency.


Solution to Problem

An image processing device according to a first aspect of the present disclosure includes: an inter-prediction unit that performs motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; and an encoding unit that encodes a current pixel in the current prediction block using the prediction pixel.


An image processing method according to a first aspect of the present disclosure includes: allowing an image processing device to execute: performing motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; and encoding a current pixel in the current prediction block using the prediction pixel.


In the first aspect of the present disclosure, a prediction pixel in the current prediction block is generated by performing motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing, and a current pixel in the current prediction block is encoded using the prediction pixel.


An image processing device according to a second aspect of the present disclosure includes: an inter-prediction unit that performs motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; and a decoding unit that decodes a current pixel in the current prediction block using the prediction pixel.


An image processing method according to a second aspect of the present disclosure includes: allowing an image processing device to execute: performing motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; and decoding a current pixel in the current prediction block using the prediction pixel.


In the second aspect of the present disclosure, a prediction pixel in the current prediction block is generated by performing motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing, and a current pixel in the current prediction block is decoded using the prediction pixel.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a diagram illustrating a block and a sub-block.



FIG. 2 is a diagram illustrating a motion vector.



FIG. 3 is a block diagram showing a configuration example of an embodiment of an image processing system to which the present technology is applied.



FIG. 4 is a diagram illustrating a first method of calculating a motion vector for a color difference component.



FIG. 5 is a diagram illustrating a second method of calculating a motion vector for a color difference component.



FIG. 6 is a diagram illustrating an effective situation in which the present technology is applied.



FIG. 7 is a block diagram showing a configuration example of an embodiment of a computer-based system to which the present technology is applied.



FIG. 8 is a block diagram showing a configuration example of an embodiment of an image encoding device.



FIG. 9 is a flowchart illustrating an encoding process.



FIG. 10 is a block diagram showing a configuration example of an embodiment of an image decoding device.



FIG. 11 is a flowchart illustrating a decoding process.



FIG. 12 is a block diagram showing an example of a configuration of one embodiment of a computer to which the present technology is applied.





DESCRIPTION OF EMBODIMENTS
Documents that Support Technical Content and Terms

The scope disclosed in the present specification is not limited to the content of the embodiments. The disclosures in the following reference documents REF1 to REF5, which have been publicly known at the time of filing of the present application, are also incorporated into the present specification by reference. In other words, the disclosures in the following reference documents REF1 to REF5 also serve as the grounds for determination on the support requirements. In addition, the documents referenced in the reference documents REF1 to REF5 also serve as the grounds for determination on the support requirements.


For example, even if Quad-Tree Block Structure, QTBT (Quad Tree Plus Binary Tree) Block Structure, and MTT (Multi-type Tree) Block Structure are not directly defined in the detailed description of the invention, the structures are considered to be included within the scope of the present disclosure and to satisfy the support requirements of the claims. For example, the same applies to the technical terms such as Parsing, Syntax, and Semantics. Even if these technical terms are not directly defined in the detailed description of the invention, the technical terms are considered to be included within the scope of the present disclosure and to satisfy the support requirements of the claims.

  • REF1: Recommendation ITU-T H.264 (April 2017), "Advanced video coding for generic audiovisual services"
  • REF2: Recommendation ITU-T H.265 (February 2018), "High efficiency video coding"
  • REF3: Benjamin Bross, Jianle Chen, Shan Liu, Versatile Video Coding (Draft 6), JVET-O2001-v14 (version 14, date 2019-07-31)
  • REF4: Jianle Chen, Yan Ye, Seung Hwan Kim, Algorithm description for Versatile Video Coding and Test Model 6 (VTM 6), JVET-O2002-v1 (version 1, date 2019-08-15)
  • REF5: Jiancong (Daniel) Luo, Yuwen He, CE4: Prediction refinement with optical flow for affine mode (Test 2.1), JVET-O0070 (version 5, date 2019-07-10)


Terminology

In this application, the following terms are defined as follows.


<Block>

A “block” (not a block indicating a processing unit) used for description as a partial area or a unit of processing of an image (picture) indicates an arbitrary partial area in a picture unless otherwise specified, and the size, shape, characteristics, and the like of the block are not limited. For example, the “block” includes an arbitrary partial area (unit of processing) such as TB (Transform Block), TU (Transform Unit), PB (Prediction Block), PU (Prediction Unit), SCU (Smallest Coding Unit), CU (Coding Unit), LCU (Largest Coding Unit), CTB (Coding Tree Block), CTU (Coding Tree Unit), conversion block, sub-block, macro-block, tile, slice, and the like.


<Specifying Block Size>

Furthermore, in specifying the size of such a block, the block size may be specified not only directly but also indirectly. For example, the block size may be specified using identification information for identifying the size. Furthermore, for example, the block size may be specified by a ratio or a difference with respect to the size of a reference block (for example, an LCU, an SCU, or the like). For example, in a case of transmitting information for specifying the block size as a syntax element or the like, information for indirectly specifying the size as described above may be used as that information. By doing so, the amount of information can be reduced, and the encoding efficiency can be improved in some cases. Furthermore, the specification of the block size also includes specification of a range of the block size (for example, specification of a range of allowable block sizes, or the like).
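

For illustration only, a minimal Python sketch of such indirect specification is shown below. The function names, the log2-difference scheme, and the power-of-two assumption are assumptions made for this sketch and are not syntax defined by the present disclosure.

    # Hypothetical illustration: signal a block size indirectly as a log2
    # difference from a reference block size (e.g., an LCU) instead of
    # signaling the size itself. Power-of-two sizes are assumed.
    def encode_size_as_delta(block_size: int, reference_size: int) -> int:
        """Return the log2 difference between the reference size and the block size."""
        assert 0 < block_size <= reference_size
        return reference_size.bit_length() - block_size.bit_length()

    def decode_size_from_delta(delta: int, reference_size: int) -> int:
        """Recover the block size from the signaled log2 difference."""
        return reference_size >> delta

    if __name__ == "__main__":
        lcu_size = 128                                  # assumed reference block size
        delta = encode_size_as_delta(32, lcu_size)      # small value to be signaled
        print(delta, decode_size_from_delta(delta, lcu_size))  # -> 2 32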


<Unit of Information and Processing>

The data unit in which various types of information described above are set and the data unit to be processed by various types of processing are arbitrary, and are not limited to the above-described examples. For example, these pieces of information and processing may be set for each TU (Transform Unit), TB (Transform Block), PU (Prediction Unit), PB (Prediction Block), CU (Coding Unit), LCU (Largest Coding Unit), sub-block, block, tile, slice, picture, sequence, or component, or data in these data units may be used. Of course, this data unit can be set for each piece of information and each process, and the data units of all pieces of information and processing need not be unified. Note that the storage location of these pieces of information is arbitrary, and they may be stored in a header, a parameter set, or the like of the above-described data unit. Furthermore, the information may be stored in a plurality of locations.


<Control Information>

Control information regarding the present technology may be transmitted from the encoding side to the decoding side. For example, control information (e.g., enabled_flag) for controlling whether to permit (or prohibit) application of the above-described present technology may be transmitted. Furthermore, for example, control information indicating an object to which the above-described present technology is applied (or an object to which the present technology is not applied) may be transmitted. For example, control information for specifying a block size (upper limit, lower limit, or both) to which the present technology is applied (or application is permitted or prohibited), a frame, a component, a layer, or the like may be transmitted.


<Flag>

In the present specification, the “flag” is information for identifying a plurality of states and includes not only information used to identify two states of true (1) and false (0) but also information capable of identifying three or more states. Accordingly, a value of the “flag” may be a binary value of 1/0 or may be, for example, a ternary value or more. That is, any number of bits may be used for the “flag”, and it may be 1 bit or a plurality of bits. For identification information (including the flag), it is assumed that not only the identification information itself but also differential information of the identification information with respect to information serving as a certain standard may be included in a bit stream. Therefore, in the present specification, the “flag” or the “identification information” includes not only the information itself but also differential information with respect to information serving as a standard.


<Association with Metadata>


Furthermore, various types of information (metadata and the like) about encoded data (bitstream) may be transmitted or recorded in any form as long as they are associated with the encoded data. Here, the term “associate” means, for example, making other information available (linkable) when one piece of information is processed. That is, associated information may be collected as one piece of data or may be individual information. For example, information associated with encoded data (image) may be transmitted on a transmission path different from that for the encoded data (image). Further, for example, information associated with encoded data (image) may be recorded on a recording medium (or another recording area of the same recording medium) different from that for the encoded data (image). Meanwhile, this “association” may be for part of data, not the entire data. For example, an image and information corresponding to the image may be associated with a plurality of frames, one frame, or any unit such as a part in the frame.


In the present specification, a term such as “combining,” “multiplexing,” “adding,” “integrating,” “including,” “storing,” “pushing,” “entering,” or “inserting” means collecting a plurality of things into one, for example, collecting encoded data and metadata into one piece of data, and means one method of the above-described “associating”. Further, in the present specification, encoding includes not only the entire process of converting an image into a bitstream but also a part of the process. For example, encoding includes not only processing that includes prediction processing, orthogonal transform, quantization, and arithmetic encoding, but also processing that collectively refers to quantization and arithmetic encoding, and processing that includes prediction processing, quantization, and arithmetic encoding. Similarly, decoding includes not only the entire process of converting a bitstream into an image but also a part of the process. For example, decoding includes not only processing that includes inverse arithmetic decoding, inverse quantization, inverse orthogonal transform, and prediction processing, but also processing that includes inverse arithmetic decoding and inverse quantization, and processing that includes inverse arithmetic decoding, inverse quantization, and prediction processing.


A prediction block means a block that is the unit of processing when performing inter-prediction, and includes sub-blocks in the prediction block. In addition, if the processing unit is unified with an orthogonal transform block that is the unit of processing when performing orthogonal transform or an encoding block that is the unit of processing when performing encoding processing, the prediction block means the same block as the orthogonal transform block and the encoding block.


Inter-prediction is a general term for processing that involves prediction between frames (prediction blocks) such as derivation of motion vectors by motion detection (Motion Prediction/Motion Estimation) and motion compensation using motion vectors. The inter-prediction includes some processes (for example, motion compensation process only) used when generating a prediction image, or all processes (for example, motion detection process and motion compensation process). An inter-prediction mode is meant to include variables (parameters) referred to when deriving the inter-prediction mode, such as the mode number when performing inter-prediction, the index of the mode number, the block size of the prediction block, and the size of a sub-block that is the unit of processing in the prediction block.


In the present disclosure, identification data that identifies a plurality of patterns can be set as the syntax of a bitstream. In this case, the decoder can perform processing more efficiently by parsing and referencing the identification data. A method (data) for identifying the block size includes a method (data) for identifying the difference value with respect to a reference block size (maximum block size, minimum block size, and the like) rather than just digitizing (bitifying) the block size itself.


Hereinafter, specific embodiments to which the present technology is applied will be described in detail with reference to the drawings.


<Conventional Motion Compensation>

With reference to FIGS. 1 and 2, the processing of motion compensation (MC) using affine transform when the chroma format of an input image is 4:2:0 will be described.


In VVC, motion compensation processing using affine transform is performed by further dividing a motion compensation block into 4×4-sample units called sub-blocks.


As shown in FIG. 1, a luminance component Y is subjected to motion compensation processing in 8×8 blocks, and color difference components Cb and Cr are subjected to motion compensation processing in 4×4 blocks. That is, an 8×8 block of the luminance component Y and a 4×4 block of the color difference components Cb and Cr correspond to the same area. At this time, in the motion compensation using the affine transform, the size of the sub-blocks is 4×4, so that the 8×8 block of the luminance component Y is divided into four sub-blocks.


Then, by the technique of JVET-O0070, the processing of optical flow is added to the motion compensation of the luminance component Y.


As shown in FIG. 2, after performing motion compensation for the luminance component Y using the motion vector VSB of the 4×4 sub-block, optical flow processing is applied using the motion vector ΔV(i, j) at the pixel level indicated by a blank arrow. On the other hand, since the optical flow processing is not applied to the color difference components Cb and Cr, it is considered that deterioration of subjective image quality and deterioration of encoding efficiency occur as described above.
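

For reference, the per-pixel refinement of the luminance component described in JVET-O0070 can be sketched in Python as follows; this is an illustrative sketch only, and the function and variable names are assumptions rather than part of the cited specification.

    # Illustrative sketch of prediction refinement with optical flow (PROF)
    # for the luminance component: the sub-block prediction sample I(i, j) is
    # corrected using the pixel-level motion vector difference dV(i, j) and
    # the local gradients of the prediction.
    def refine_luma_pixel(pred, gx, gy, dvx, dvy):
        """Return the refined luminance prediction sample at one pixel.

        pred     : sub-block motion-compensated prediction sample I(i, j)
        gx, gy   : horizontal/vertical gradients of the prediction at (i, j)
        dvx, dvy : pixel-level motion vector difference dV(i, j)
        """
        delta_i = gx * dvx + gy * dvy   # optical flow correction term
        return pred + delta_i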


Therefore, the present technology proposes to apply the optical flow processing to the color difference components Cb and Cr (chroma signals).


<Configuration Example of Image Processing System>


FIG. 3 is a block diagram showing a configuration example of an embodiment of an image processing system to which the present technology is applied.


As shown in FIG. 3, an image processing system 11 includes an image encoding device 12 and an image decoding device 13. For example, in the image processing system 11, the image input to the image encoding device 12 is encoded, the bitstream obtained by the encoding is transmitted to the image decoding device 13, and the decoded image decoded from the bitstream in the image decoding device 13 is output.


The image encoding device 12 has an inter-prediction unit 21, an encoding unit 22, and a setting unit 23, and the image decoding device 13 has an inter prediction unit 31 and a decoding unit 32.


The inter-prediction unit 21 performs motion compensation processing to which an interpolation filter is applied with respect to a current prediction block that is subject to an encoding process, and performs inter-prediction to generate prediction pixels in the current prediction block. At this time, the inter-prediction unit 21 is configured to perform motion compensation processing to which optical flow processing is applied on the color difference component of the current prediction block that is subject to the encoding process (hereinafter referred to as color difference optical flow processing). That is, the inter-prediction unit 21 applies the optical flow processing not only to the luminance component but also to the color difference component.


The encoding unit 22 encodes the current pixels in the current prediction block using the prediction pixels generated by the inter-prediction unit 21 to generate a bitstream.


The setting unit 23 sets identification data for identifying whether to apply the color difference optical flow processing, block size identification data for identifying the block size of a predicted block to which the color difference optical flow processing is applied, and the like. Then, the encoding unit 22 generates a bitstream including the identification data set by the setting unit 23.
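

As a schematic illustration only, the way such identification data could gate the color difference optical flow processing is sketched below in Python; the flag and parameter names are hypothetical and are not syntax elements defined by the present disclosure.

    # Hypothetical illustration: identification data set on the encoding side
    # and referred to on the decoding side to decide whether the color
    # difference optical flow processing is applied to a prediction block.
    from dataclasses import dataclass

    @dataclass
    class ChromaOpticalFlowParams:
        enabled_flag: bool     # whether application of the processing is permitted
        min_block_size: int    # smallest prediction block size it applies to
        max_block_size: int    # largest prediction block size it applies to

    def applies_to_block(params: ChromaOpticalFlowParams, block_w: int, block_h: int) -> bool:
        """Decide whether the color difference optical flow processing is applied."""
        size = max(block_w, block_h)
        return params.enabled_flag and params.min_block_size <= size <= params.max_block_size

    if __name__ == "__main__":
        params = ChromaOpticalFlowParams(enabled_flag=True, min_block_size=8, max_block_size=64)
        print(applies_to_block(params, 16, 16))   # True
        print(applies_to_block(params, 4, 4))     # False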


Similarly to the inter-prediction unit 21, the inter-prediction unit 31 also performs color difference optical flow processing on the color difference component of the current prediction block that is subject to a decoding process, and generates prediction pixels in the current prediction block. The inter-prediction unit 31 can refer to the identification data contained in the bitstream to identify whether or not to apply the color difference optical flow processing and to identify the block size of the prediction block to which the color difference optical flow processing is applied.


The decoding unit 32 decodes the current pixel in the current prediction block using the prediction pixel generated by the inter-prediction unit 31.


In the image processing system 11 configured as described above, the inter-prediction unit 21 and the inter-prediction unit 31 derive pixel-level motion vectors ΔVCb(i, j) and ΔVCr(i, j) for the color difference components Cb and Cr from the motion vector ΔV(i, j) calculated for the luminance component Y. Then, in the image processing system 11, by applying the color difference optical flow processing to the color difference components Cb and Cr, it is possible to suppress deterioration of subjective image quality and deterioration of encoding efficiency.



FIG. 4 is a diagram illustrating a first method of calculating motion vectors for the color difference components Cb and Cr from a motion vector for the luminance component Y.


For example, the first method is to calculate the pixel-level motion vector ΔVCb of the color difference component Cb and the pixel-level motion vector ΔVCr of the color difference component Cr as the average of the motion vectors ΔV used for the luminance component Y.


That is, as shown in FIG. 4, one pixel of the color difference components Cb and Cr corresponds to four pixels of the luminance component Y, and the average of the four motion vectors ΔV(i, j) used in the optical flow processing for the four pixels is calculated and used as the motion vectors ΔVCb(i, j) and ΔVCr(i, j) of the color difference components Cb and Cr.


The x component ΔVCbx(i, j) of the motion vector of the color difference component Cb is calculated according to the following equation (1) using the x component ΔVlx(i, j) of the motion vector on the upper left corner of the luminance component Y, the x component ΔVlx(i+1, j) of the motion vector on the upper right corner of the luminance component Y, the x component ΔVlx(i, j+1) of the motion vector on the lower left corner of the luminance component Y, and the x component ΔVlx(i+1, j+1) of the motion vector on the lower right corner of the luminance component Y. Similarly, the y component ΔVCby(i, j) of the motion vector of the color difference component Cb is calculated according to the following equation (1) using the y component ΔVly(i, j) of the motion vector on the upper left corner of the luminance component Y, the y component ΔVly(i+1, j) of the motion vector on the upper right corner of the luminance component Y, the y component ΔVly(i, j+1) of the motion vector on the lower left corner of the luminance component Y, and the y component ΔVly(i+1, j+1) of the motion vector on the lower right corner of the luminance component Y.









[Math. 1]


ΔVCbx(i,j)=(1/4)(ΔVlx(i,j)+ΔVlx(i+1,j)+ΔVlx(i,j+1)+ΔVlx(i+1,j+1))
ΔVCby(i,j)=(1/4)(ΔVly(i,j)+ΔVly(i+1,j)+ΔVly(i,j+1)+ΔVly(i+1,j+1))  (1)







Similarly, using this equation (1), the x component ΔVCrx(i, j) and the y component ΔVCry(i, j) of the motion vector of the color difference component Cr can be calculated.
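

A minimal Python sketch of the first method is shown below for illustration; it assumes that the four pixel-level luminance motion vectors corresponding to one color difference sample have already been derived, and the function name is an assumption made for this sketch.

    # Illustrative sketch of the first method (equation (1)): the pixel-level
    # motion vector of a color difference sample is the average of the four
    # pixel-level luminance motion vectors of the corresponding 2x2 luminance
    # pixels (chroma format 4:2:0 assumed).
    def chroma_mv_from_luma_average(dv_luma_2x2):
        """dv_luma_2x2: four (dvx, dvy) tuples for the 2x2 luminance pixels
        corresponding to one color difference sample; returns (dVCx, dVCy)."""
        dvx = sum(v[0] for v in dv_luma_2x2) / 4.0
        dvy = sum(v[1] for v in dv_luma_2x2) / 4.0
        return dvx, dvy

    if __name__ == "__main__":
        luma_dv = [(0.5, -0.25), (0.75, -0.25), (0.5, 0.0), (0.75, 0.0)]
        print(chroma_mv_from_luma_average(luma_dv))   # (0.625, -0.125)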


The amount of change ΔCb(i, j) of the color difference component Cb(i, j) is calculated according to the following equation (2) using the gradient gCbx(i, j) in the x direction and the gradient gCby(i, j) in the y direction of the color difference component Cb and the x component ΔVCbx(i, j) and the y component ΔVCby(i, j) of the motion vector ΔVCb(i, j).













[Math. 2]


ΔCb(i,j)=gCbx(i,j)×ΔVCbx(i,j)+gCby(i,j)×ΔVCby(i,j)
gCbx(i,j)=Cb(i+1,j)-Cb(i-1,j)
gCby(i,j)=Cb(i,j+1)-Cb(i,j-1)  (2)







Then, the color difference component Cb′(i, j) corrected by applying the color difference optical flow processing to the color difference component Cb(i, j) at the position (i, j) is calculated according to the following equation (3), by adding the amount of change ΔCb(i, j) calculated by equation (2) to the color difference component Cb(i, j) as a correction value.





[Math. 3]





Cb′(i,j)=Cb(i,j)+ΔCb(i,j)  (3)


Similarly, for the color difference component Cr(i, j) at the position (i, j), the color difference component Cr′(i, j) corrected by applying the color difference optical flow processing can be calculated using the equations (2) and (3).
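

For illustration, equations (2) and (3) can be sketched in Python as follows; boundary handling, clipping, and the fixed-point arithmetic of an actual codec are omitted, and the names are assumptions made for this sketch.

    # Illustrative sketch of equations (2) and (3): correct one color
    # difference prediction sample Cb(i, j) using its horizontal/vertical
    # gradients and the pixel-level chroma motion vector (dVCbx, dVCby).
    def correct_chroma_sample(cb, i, j, dv_cbx, dv_cby):
        """cb: 2-D list of chroma prediction samples, indexed as cb[j][i];
        an interior position is assumed so that i +/- 1 and j +/- 1 are valid."""
        g_cbx = cb[j][i + 1] - cb[j][i - 1]           # gradient in x (equation (2))
        g_cby = cb[j + 1][i] - cb[j - 1][i]           # gradient in y (equation (2))
        delta_cb = g_cbx * dv_cbx + g_cby * dv_cby    # amount of change (equation (2))
        return cb[j][i] + delta_cb                    # corrected sample (equation (3))

    if __name__ == "__main__":
        cb = [[100, 102, 104],
              [101, 103, 105],
              [102, 104, 106]]
        # gradients at (1, 1): g_cbx = 105 - 101 = 4, g_cby = 104 - 102 = 2
        print(correct_chroma_sample(cb, 1, 1, 0.25, -0.5))  # 103 + (4*0.25 + 2*(-0.5)) = 103.0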



FIG. 5 is a diagram illustrating a second method of calculating motion vectors for the color difference components Cb and Cr from the motion vector for the luminance component Y.


For example, in the second method, one of the motion vectors ΔV(i, j) used for the luminance component Y is used as the pixel-level motion vectors ΔVCb(i, j) and ΔVCr(i, j) of the color difference components Cb and Cr.


That is, as shown in FIG. 5, one pixel of the color difference components Cb and Cr corresponds to four pixels of the luminance component Y, and one of the four motion vectors ΔV(i, j) used in the optical flow processing for those four pixels, which is assumed to represent similar motion (in the example shown in FIG. 5, the motion vector of the upper-left pixel), is used as the motion vectors ΔVCb(i, j) and ΔVCr(i, j) of the color difference components Cb and Cr.


As shown in the following equation (4), the x component ΔVCbx(i, j) and the y component ΔVCby(i, j) of the motion vector of the color difference component Cb are set to the x component ΔVlx(i, j) and the y component ΔVly(i, j) of the motion vector on the upper left corner of the luminance component Y.









[Math. 4]


ΔVCbx(i,j)=ΔVlx(i,j)
ΔVCby(i,j)=ΔVly(i,j)  (4)







Then, similarly to the first method, the color difference component Cb′(i, j) and the color difference component Cr′(i, j) corrected by applying the color difference optical flow processing can be calculated using the above-mentioned equations (2) and (3). By adopting the second method, the amount of calculation can be reduced as compared with the first method.
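

For comparison, the second method of equation (4) can be sketched in Python as follows (illustrative only); it simply reuses the pixel-level motion vector of the upper-left luminance pixel, which is why the amount of calculation is smaller than in the first method.

    # Illustrative sketch of the second method (equation (4)): the pixel-level
    # motion vector of the upper-left luminance pixel is reused, unchanged, as
    # the pixel-level motion vector of the color difference sample.
    def chroma_mv_from_upper_left_luma(dv_luma_2x2):
        """dv_luma_2x2: four (dvx, dvy) tuples ordered upper-left, upper-right,
        lower-left, lower-right; returns the upper-left vector as (dVCx, dVCy)."""
        return dv_luma_2x2[0]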


As described above, the image processing system 11 can improve the accuracy of motion compensation by performing motion compensation for the color difference components Cb and Cr at the sub-block level and then performing color difference optical flow processing. Then, by performing the optical flow processing on the luminance component Y and performing the color difference optical flow processing on the color difference components Cb and Cr, it is possible to reduce the shift between the corrected luminance component Y and the color difference components Cb and Cr and suppress deterioration of image quality and deterioration of encoding efficiency.


An effective situation in which the present technology is applied will be described with reference to FIG. 6.


For example, there is a concern that the processing amount will increase when the present technology is applied, and it is therefore preferable to apply the present technology in situations in which it is effective, so that the processing amount can be kept down. In other words, it is effective to apply the present technology to motion compensation in which the motion of the affine transform is large. Therefore, the present technology can be expected to be effective under the condition that the temporal distance referred to by the motion compensation is large.


As shown in A of FIG. 6, a reference POC distance or a Temporal ID is used as a threshold value, and whether or not the present technology will be applied can be determined based on whether or not the correction for the affine transform is expected to be large. For example, it is considered that the present technology is to be used in the affine transform with a large reference POC distance. The same effect can be obtained with the Temporal ID when hierarchical encoding is used. That is, a large motion is compensated under the condition that the Temporal ID is smaller (or larger) than a predetermined threshold value, and the present technology can be effective under this condition.
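

Purely as an illustration, such a threshold determination could be sketched in Python as follows; the threshold values and the comparison direction for the Temporal ID are assumptions made for this sketch, not conditions defined by the present disclosure.

    # Hypothetical illustration: apply the color difference optical flow
    # processing only when a large affine correction is expected, judged from
    # the reference POC distance or from the Temporal ID of hierarchical encoding.
    def should_apply_chroma_optical_flow(current_poc, reference_poc, temporal_id,
                                         poc_threshold=4, tid_threshold=2):
        poc_distance = abs(current_poc - reference_poc)
        large_poc_distance = poc_distance >= poc_threshold   # distant reference -> large motion expected
        low_temporal_layer = temporal_id <= tid_threshold    # assumption: lower layers span longer times
        return large_poc_distance or low_temporal_layer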


Further, as shown in B of FIG. 6, it is considered that the determination using the reference direction is also effective.


For example, although POC 8 is bi-prediction, the L0 and L1 references are in the same time direction as shown by the solid arrows. For other pictures such as POC 4, the directions of the L0 and L1 predictions are past and future, as shown by the broken-line arrows. Therefore, when both the past and the future can be used for reference as in POC 4, motion compensation with a certain degree of accuracy can be performed without correction by optical flow.


In contrast, in the case of past-only references such as POC 8, correction by optical flow can be expected to be effective. Therefore, it is expected that the present technology is more effective under the condition that the reference directions are the same.
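

Similarly, the determination using the reference direction could be sketched in Python as follows; the sign test on POC differences is one simple way, under the assumptions of this sketch, to detect that the L0 and L1 references lie in the same time direction.

    # Hypothetical illustration: in bi-prediction, the correction by optical
    # flow is expected to be more effective when the L0 and L1 references lie
    # in the same time direction relative to the current picture.
    def references_in_same_direction(current_poc, l0_poc, l1_poc):
        d0 = l0_poc - current_poc
        d1 = l1_poc - current_poc
        return d0 * d1 > 0   # same sign -> same time direction

    if __name__ == "__main__":
        print(references_in_same_direction(8, 0, 4))   # True: both references in the past
        print(references_in_same_direction(4, 0, 8))   # False: one past, one future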


In the present embodiment, it has been described that the optical flow processing is applied to the luminance signal Y and the color difference optical flow processing is applied to the chroma signals Cb and Cr, but there is no limitation thereto. For example, the optical flow processing may be applied to the luminance signal Y, and the color difference optical flow processing may be applied to the color difference signals U and V.


<Computer-Based System Configuration Example>


FIG. 7 is a block diagram showing a configuration example of an embodiment of a computer-based system to which the present technology is applied.



FIG. 7 is a block diagram showing a configuration example of a network system in which one or more computers, servers, and the like are connected via a network. The hardware and software environment shown in the embodiment of FIG. 7 is shown as an example capable of providing a platform for implementing the software and/or method according to the present disclosure.


As shown in FIG. 7, a network system 101 includes a computer 102, a network 103, a remote computer 104, a web server 105, a cloud storage server 106, and a computer server 107. Here, in the present embodiment, a plurality of instances are executed by one or a plurality of the functional blocks shown in FIG. 7.


Further, in FIG. 7, a detailed configuration of the computer 102 is illustrated. The functional block shown in the computer 102 is shown for establishing an exemplary function, and is not limited to such a configuration. Further, although the detailed configurations of the remote computer 104, the web server 105, the cloud storage server 106, and the computer server 107 are not shown, they include the same configurations as the functional blocks shown in the computer 102.


The computer 102 may be a personal computer, desktop computer, laptop computer, tablet computer, netbook computer, personal digital assistant, smartphone, or other programmable electronic device capable of communicating with other devices on the network.


The computer 102 includes a bus 111, a processor 112, a memory 113, a non-volatile storage 114, a network interface 115, a peripheral interface 116, and a display interface 117. Each of these functions is, in one embodiment, implemented in an individual electronic subsystem (integrated circuit chip or combination of chips and related devices), or in other embodiments, some of the functions may be combined and mounted on a single chip (SoC (System on Chip)).


The bus 111 can employ a variety of proprietary or industry standard high-speed parallel or serial peripheral interconnect buses.


The processor 112 may employ one designed and/or manufactured as one or more single or multi-chip microprocessors.


The memory 113 and the non-volatile storage 114 are storage media that can be read by the computer 102. For example, the memory 113 can employ any suitable volatile storage device such as DRAM (Dynamic Random Access Memory) or SRAM (Static RAM). The non-volatile storage 114 can employ at least one of a flexible disk, a hard disk, an SSD (Solid State Drive), a ROM (Read Only Memory), an EPROM (Erasable and Programmable Read Only Memory), a flash memory, a compact disk (CD or CD-ROM), a DVD (Digital Versatile Disc), a card-type memory, and a stick-type memory.


Further, a program 121 is stored in the non-volatile storage 114. The program 121 is, for example, a collection of machine-readable instructions and/or data used to create, manage, and control specific software functions. In a configuration in which the memory 113 is much faster than the non-volatile storage 114, the program 121 can be transferred from the non-volatile storage 114 to the memory 113 before being executed by the processor 112.


The computer 102 can communicate and interact with other computers via the network 103 via the network interface 115. The network 103 may adopt, for example, a configuration including a wired, wireless, or optical fiber connection using a LAN (Local Area Network), a WAN (Wide Area Network) such as the Internet, or a combination of LAN and WAN. In general, the network 103 consists of any combination of connections and protocols that support communication between two or more computers and related devices.


The peripheral interface 116 can input and output data to and from other devices that may be locally connected to the computer 102. For example, the peripheral interface 116 provides a connection to an external device 131. The external device 131 includes a keyboard, mouse, keypad, touch screen, and/or other suitable input device. The external device 131 may also include, for example, a thumb drive, a portable optical or magnetic disk, and a portable computer readable storage medium such as a memory card.


In embodiments of the present disclosure, for example, the software and data used to implement the program 121 may be stored in such a portable computer readable storage medium. In such embodiments, the software may be loaded directly into the non-volatile storage 114 or directly into the memory 113 via the peripheral interface 116. The peripheral interface 116 may use an industry standard such as RS-232 or USB (Universal Serial Bus) for connection with the external device 131.


The display interface 117 can connect the computer 102 to the display 132, and can present a command line or graphical user interface to the user of the computer 102 using the display 132. For example, the display interface 117 may employ industry standards such as VGA (Video Graphics Array), DVI (Digital Visual Interface), DisplayPort, and HDMI (High-Definition Multimedia Interface) (registered trademark).


<Configuration Example of Image Encoding Device>


FIG. 8 shows the configuration of an embodiment of an image encoding device as an image processing device to which the present disclosure is applied.


An image encoding device 201 shown in FIG. 8 encodes image data using a prediction process. Here, as the encoding method, for example, a VVC (Versatile Video Coding) method, a HEVC (High Efficiency Video Coding) method, or the like is used.


The image encoding device 201 of FIG. 8 has an A/D conversion unit 202, a screen rearrangement buffer 203, a calculation unit 204, an orthogonal transform unit 205, a quantization unit 206, a lossless encoding unit 207, and a storage buffer 208. Further, the image encoding device 201 includes an inverse quantization unit 209, an inverse orthogonal transform unit 210, a calculation unit 211, a deblocking filter 212, an adaptive offset filter 213, an adaptive loop filter 214, a frame memory 215, a selection unit 216, an intra-prediction unit 217, a motion prediction/compensation unit 218, a prediction image selection unit 219, and a rate control unit 220.


The A/D conversion unit 202 performs A/D conversion of the input image data (Picture(s)) and supplies the same to the screen rearrangement buffer 203. It should be noted that an image of digital data may be input without providing the A/D conversion unit 202.


The screen rearrangement buffer 203 stores the image data supplied from the A/D conversion unit 202, and rearranges the stored images of frames from the display order into the encoding order according to the GOP (Group of Pictures) structure. The screen rearrangement buffer 203 outputs the image in which the order of the frames is rearranged to the calculation unit 204, the intra-prediction unit 217, and the motion prediction/compensation unit 218.


The calculation unit 204 subtracts the prediction image supplied from the intra-prediction unit 217 or the motion prediction/compensation unit 218 via the prediction image selection unit 219 from the image output from the screen rearrangement buffer 203 to obtain difference information, and outputs the difference information to the orthogonal transform unit 205.


For example, in the case of an image to be intra-encoded, the calculation unit 204 subtracts the prediction image supplied from the intra-prediction unit 217 from the image output from the screen rearrangement buffer 203. Further, for example, in the case of an image to be inter-encoded, the calculation unit 204 subtracts the prediction image supplied from the motion prediction/compensation unit 218 from the image output from the screen rearrangement buffer 203.


The orthogonal transform unit 205 performs orthogonal transform such as discrete cosine transform and Karhunen-Loeve transform on the difference information supplied from the calculation unit 204, and supplies the transform coefficient to the quantization unit 206.


The quantization unit 206 quantizes the transform coefficient output by the orthogonal transform unit 205. The quantization unit 206 supplies the quantized transform coefficient to the lossless encoding unit 207.


The lossless encoding unit 207 applies lossless encoding such as variable-length encoding and arithmetic encoding to the quantized transform coefficient.


The lossless encoding unit 207 acquires parameters such as information indicating the intra-prediction mode from the intra-prediction unit 217, and acquires parameters such as information indicating the inter-prediction mode and motion vector information from the motion prediction/compensation unit 218.


The lossless encoding unit 207 encodes the quantized transform coefficient and encodes the acquired parameters (syntax elements) to include (multiplex) the same in a part of the header information of the encoded data. The lossless encoding unit 207 supplies the encoded data obtained by encoding to the storage buffer 208 and stores the same therein.


For example, the lossless encoding unit 207 performs a lossless encoding process such as variable-length encoding or arithmetic encoding. Examples of variable-length encoding include CAVLC (Context-Adaptive Variable Length Coding). Examples of arithmetic encoding include CABAC (Context-Adaptive Binary Arithmetic Coding).


The storage buffer 208 temporarily holds the encoded stream (Encoded Data) supplied from the lossless encoding unit 207, and outputs the encoded stream to a recording device or transmission path (not shown) in the subsequent stage, for example, as an encoded image at a predetermined timing. That is, the storage buffer 208 is also a transmission unit that transmits an encoded stream.


Further, the transform coefficient quantized in the quantization unit 206 is also supplied to the inverse quantization unit 209. The inverse quantization unit 209 dequantizes the quantized transform coefficient by a method corresponding to the quantization by the quantization unit 206. The inverse quantization unit 209 supplies the obtained transform coefficient to the inverse orthogonal transform unit 210.


The inverse orthogonal transform unit 210 performs inverse orthogonal transform on the supplied transform coefficient by a method corresponding to the orthogonal transform process by the orthogonal transform unit 205. The output (restored difference information) that has been subject to inverse orthogonal transform is supplied to the calculation unit 211.


The calculation unit 211 adds the prediction image supplied from the intra-prediction unit 217 or the motion prediction/compensation unit 218 via the prediction image selection unit 219 to the inverse orthogonal transform result supplied from the inverse orthogonal transform unit 210, that is, the restored difference information to obtain a locally decoded image (decoded image).


For example, when the difference information corresponds to an image to be intra-encoded, the calculation unit 211 adds the prediction image supplied from the intra-prediction unit 217 to the difference information. Further, for example, when the difference information corresponds to an image to be inter-encoded, the calculation unit 211 adds the prediction image supplied from the motion prediction/compensation unit 218 to the difference information.


The decoded image which is the addition result is supplied to the deblocking filter 212 and the frame memory 215.


The deblocking filter 212 suppresses the block distortion of the decoded image by appropriately performing the deblocking filter processing on the image from the calculation unit 211, and supplies the filter processing result to the adaptive offset filter 213. The deblocking filter 212 has parameters β and Tc obtained based on a quantization parameter QP. The parameters β and Tc are threshold values (parameters) used for determination regarding the deblocking filter.


The parameters β and Tc of the deblocking filter 212 are extended from β and Tc defined by the HEVC method. Each offset of the parameters β and Tc is encoded by the lossless encoding unit 207 as a parameter of the deblocking filter, and is transmitted to the image decoding device 301 of FIG. 10, which will be described later.


The adaptive offset filter 213 mainly performs an offset filter (SAO: Sample adaptive offset) process for suppressing ringing on the image filtered by the deblocking filter 212.


There are nine types of offset filters in total: two types of band offset, six types of edge offset, and no offset. The adaptive offset filter 213 applies filter processing on the image filtered by the deblocking filter 212 using a quad-tree structure in which the type of offset filter is determined for each divided area and an offset value for each divided area. The adaptive offset filter 213 supplies the filtered image to the adaptive loop filter 214.


In the image encoding device 201, the quad-tree structure and the offset value for each divided area are calculated and used by the adaptive offset filter 213. The calculated quad-tree structure and the offset value for each divided area are encoded by the lossless encoding unit 207 as an adaptive offset parameter and transmitted to the image decoding device 301 of FIG. 10, which will be described later.


The adaptive loop filter 214 performs adaptive loop filter (ALF: Adaptive Loop Filter) processing for each processing unit on the image filtered by the adaptive offset filter 213 using the filter coefficient. In the adaptive loop filter 214, for example, a two-dimensional Wiener filter is used as the filter. Of course, a filter other than the Wiener filter may be used. The adaptive loop filter 214 supplies the filter processing result to the frame memory 215.


Although not shown in the example of FIG. 8, in the image encoding device 201, the filter coefficient is calculated and used by the adaptive loop filter 214 for each processing unit so as to minimize the residual with respect to the original image from the screen rearrangement buffer 203. The calculated filter coefficient is encoded by the lossless encoding unit 207 as an adaptive loop filter parameter and transmitted to the image decoding device 301 of FIG. 10, which will be described later.


The frame memory 215 outputs the stored reference image to the intra-prediction unit 217 or the motion prediction/compensation unit 218 via the selection unit 216 at a predetermined timing.


For example, in the case of an image to be intra-encoded, the frame memory 215 supplies the reference image to the intra-prediction unit 217 via the selection unit 216. Further, for example, when inter-encoding is performed, the frame memory 215 supplies the reference image to the motion prediction/compensation unit 218 via the selection unit 216.


When the reference image supplied from the frame memory 215 is an image to be intra-encoded, the selection unit 216 supplies the reference image to the intra-prediction unit 217. Further, when the reference image supplied from the frame memory 215 is an image to be inter-encoded, the selection unit 216 supplies the reference image to the motion prediction/compensation unit 218.


The intra-prediction unit 217 performs intra-prediction (in-screen prediction) that generates a prediction image using the pixel values in the screen. The intra-prediction unit 217 performs intra-prediction in a plurality of modes (intra-prediction modes).


The intra-prediction unit 217 generates a prediction image in all intra-prediction modes, evaluates each prediction image, and selects the optimum mode. When the optimum intra-prediction mode is selected, the intra-prediction unit 217 supplies the prediction image generated in the optimum mode to the calculation unit 204 and the calculation unit 211 via the prediction image selection unit 219.


Further, as described above, the intra-prediction unit 217 supplies parameters such as intra-prediction mode information indicating the adopted intra-prediction mode to the lossless encoding unit 207 as appropriate.


The motion prediction/compensation unit 218 performs motion prediction on the image to be inter-encoded using the input image supplied from the screen rearrangement buffer 203 and the reference image supplied from the frame memory 215 via the selection unit 216. Further, the motion prediction/compensation unit 218 performs motion compensation processing according to the motion vector detected by the motion prediction, and generates a prediction image (inter-prediction image information).


The motion prediction/compensation unit 218 performs inter-prediction processing in all candidate inter-prediction modes and generates a prediction image. The motion prediction/compensation unit 218 supplies the generated prediction image to the calculation unit 204 and the calculation unit 211 via the prediction image selection unit 219. Further, the motion prediction/compensation unit 218 supplies parameters such as inter-prediction mode information indicating the adopted inter-prediction mode and motion vector information indicating the calculated motion vector to the lossless encoding unit 207.


The prediction image selection unit 219 supplies the output of the intra-prediction unit 217 to the calculation unit 204 and the calculation unit 211 in the case of an image to be intra-encoded, and supplies the output of the motion prediction/compensation unit 218 to the calculation unit 204 and the calculation unit 211 in the case of an image to be inter-encoded.


The rate control unit 220 controls the rate of the quantization operation of the quantization unit 206 so that overflow or underflow does not occur based on the compressed image stored in the storage buffer 208.


The image encoding device 201 is configured in this way; the lossless encoding unit 207 corresponds to the encoding unit 22 in FIG. 3, and the motion prediction/compensation unit 218 corresponds to the inter-prediction unit 21 in FIG. 3. Therefore, as described above, the image encoding device 201 can suppress deterioration of subjective image quality and deterioration of encoding efficiency.


<Operation of Image Encoding Device>

With reference to FIG. 9, the flow of the encoding process executed by the image encoding device 201 as described above will be described.


In step S101, the A/D conversion unit 202 performs A/D conversion of the input image.


In step S102, the screen rearrangement buffer 203 stores the image A/D-converted by the A/D conversion unit 202, and rearranges the image from the display order of each picture to the encoding order.


When the processing target image supplied from the screen rearrangement buffer 203 is an image of a block to be intra-processed, the referenced decoded image is read from the frame memory 215 and is supplied to the intra-prediction unit 217 via the selection unit 216.


Based on these images, in step S103, the intra-prediction unit 217 intra-predicts the pixels of the processing target block in all candidate intra-prediction modes. As the referenced decoded pixel, a pixel not filtered by the deblocking filter 212 is used.


By this process, the intra-prediction is performed in all candidate intra-prediction modes, and the cost function value is calculated for all candidate intra-prediction modes. Then, the optimum intra-prediction mode is selected based on the calculated cost function value, and the prediction image generated by the intra-prediction in the optimum intra-prediction mode and the cost function value thereof are supplied to the prediction image selection unit 219.


When the processing target image supplied from the screen rearrangement buffer 203 is an image to be inter-processed, the referenced image is read from the frame memory 215 and supplied to the motion prediction/compensation unit 218 via the selection unit 216. Based on these images, in step S104, the motion prediction/compensation unit 218 performs motion prediction/compensation processing.


By this processing, motion prediction processing is performed in all candidate inter-prediction modes, cost function values are calculated for all candidate inter-prediction modes, and the optimum inter-prediction mode is determined based on the calculated cost function values. Then, the prediction image generated in the optimum inter-prediction mode and the cost function value thereof are supplied to the prediction image selection unit 219.


In step S105, the prediction image selection unit 219 determines one of the optimum intra-prediction mode and the optimum inter-prediction mode as the optimum prediction mode based on the cost function values output from the intra-prediction unit 217 and the motion prediction/compensation unit 218. Then, the prediction image selection unit 219 selects the prediction image in the determined optimum prediction mode and supplies the prediction image to the calculation units 204 and 211. This prediction image is used for the calculation of steps S106 and S111 described later.
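

As a generic illustration of this selection step only (the actual cost function of the device is not specified here), the decision can be sketched in Python as choosing the candidate with the smaller cost function value.

    # Illustrative sketch of step S105: select whichever of the optimum
    # intra-prediction mode and the optimum inter-prediction mode has the
    # smaller cost function value.
    def select_optimum_prediction(intra_cost, inter_cost):
        """Return 'intra' or 'inter' according to the smaller cost function value."""
        return "intra" if intra_cost <= inter_cost else "inter"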


The selection information of the prediction image is supplied to the intra-prediction unit 217 or the motion prediction/compensation unit 218. When the prediction image of the optimum intra-prediction mode is selected, the intra-prediction unit 217 supplies information indicating the optimum intra-prediction mode (that is, parameters related to the intra-prediction) to the lossless encoding unit 207.


When the prediction image of the optimum inter-prediction mode is selected, the motion prediction/compensation unit 218 outputs information indicating the optimum inter-prediction mode and the information corresponding to the optimum inter-prediction mode (that is, parameters related to the motion prediction) to the lossless encoding unit 207. Examples of the information corresponding to the optimum inter-prediction mode include motion vector information and reference frame information.


In step S106, the calculation unit 204 calculates the difference between the images rearranged in step S102 and the prediction image selected in step S105. The prediction image is supplied to the calculation unit 204 from the motion prediction/compensation unit 218 in the case of inter-prediction and from the intra-prediction unit 217 in the case of intra-prediction via the prediction image selection unit 219.


The amount of difference data is smaller than that of the original image data. Therefore, the amount of data can be compressed as compared with the case where the image is encoded as it is.


In step S107, the orthogonal transform unit 205 performs orthogonal transform on the difference information supplied from the calculation unit 204. Specifically, orthogonal transform such as discrete cosine transform and Karhunen-Loeve transform is performed, and the transform coefficient is output.


In step S108, the quantization unit 206 quantizes the transform coefficient. In this quantization, the rate is controlled as described in the process of step S118 described later.


The difference information quantized as described above is locally decoded as follows. That is, in step S109, the inverse quantization unit 209 dequantizes the transform coefficient quantized by the quantization unit 206 with the characteristics corresponding to the characteristics of the quantization unit 206. In step S110, the inverse orthogonal transform unit 210 performs inverse orthogonal transform on the transform coefficient dequantized by the inverse quantization unit 209 with the characteristics corresponding to the characteristics of the orthogonal transform unit 205.


In step S111, the calculation unit 211 adds the prediction image input via the prediction image selection unit 219 to the locally decoded difference information to generate the locally decoded image (image corresponding to the input to the calculation unit 204).


In step S112, the deblocking filter 212 performs deblocking filter processing on the image output from the calculation unit 211. At this time, as the threshold value for the determination regarding the deblocking filter, the parameters β and Tc extended from β and Tc defined by the HEVC method are used. The filtered image from the deblocking filter 212 is output to the adaptive offset filter 213.


It should be noted that the offsets of the parameters β and Tc used in the deblocking filter 212, which are input by the user operating the operation unit or the like, are supplied to the lossless encoding unit 207 as the parameters of the deblocking filter.


In step S113, the adaptive offset filter 213 performs adaptive offset filter processing. By this processing, filter processing is applied to the image filtered by the deblocking filter 212 using a quad-tree structure in which the type of offset filter is determined for each divided area and an offset value for each divided area. The filtered image is supplied to the adaptive loop filter 214.


The determined quad-tree structure and the offset value for each divided area are supplied to the lossless encoding unit 207 as an adaptive offset parameter.


In step S114, the adaptive loop filter 214 performs adaptive loop filter processing on the image filtered by the adaptive offset filter 213. For example, the image filtered by the adaptive offset filter 213 is filtered for each processing unit using the filter coefficient, and the filter processing result is supplied to the frame memory 215.


In step S115, the frame memory 215 stores the filtered image. Images not filtered by the deblocking filter 212, the adaptive offset filter 213, and the adaptive loop filter 214 are also supplied from the calculation unit 211 and stored in the frame memory 215.


On the other hand, the transform coefficient quantized in step S108 described above is also supplied to the lossless encoding unit 207. In step S116, the lossless encoding unit 207 encodes the quantized transform coefficient output from the quantization unit 206 and the supplied parameters. That is, the difference image is losslessly encoded and compressed by variable-length encoding, arithmetic encoding, and the like. Here, examples of the encoded parameters include deblocking filter parameters, adaptive offset filter parameters, adaptive loop filter parameters, quantization parameters, motion vector information and reference frame information, prediction mode information, and the like.


In step S117, the storage buffer 208 stores the encoded difference image (that is, the encoded stream) as a compressed image. The compressed image stored in the storage buffer 208 is appropriately read and transmitted to the decoding side via the transmission path.


In step S118, the rate control unit 220 controls the rate of the quantization operation of the quantization unit 206 so that overflow or underflow does not occur based on the compressed image stored in the storage buffer 208.
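

The following sketch illustrates one simple form of such rate control, assuming a buffer-fullness feedback that nudges the quantization parameter up or down; the actual control law of the rate control unit 220 is not specified here:

    def update_qp(current_qp, buffer_fullness, target_fullness=0.5, qp_min=0, qp_max=51):
        # Coarser quantization (higher QP) when the storage buffer is fuller
        # than the target, finer quantization when it is emptier, so that
        # overflow and underflow of the compressed-image buffer are avoided.
        if buffer_fullness > target_fullness:
            return min(current_qp + 1, qp_max)
        if buffer_fullness < target_fullness:
            return max(current_qp - 1, qp_min)
        return current_qp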


When the process of step S118 ends, the encoding process ends.


In the encoding process as described above, when the motion prediction/compensation unit 218 performs the motion prediction/compensation process to generate a prediction image in step S104, the color difference optical flow processing is applied to the color difference components Cb and Cr of the current prediction block.
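

The following sketch illustrates the general form of such a refinement applied to a chroma (Cb or Cr) prediction block, following the prediction-refinement-with-optical-flow idea of NPL 1: the spatial gradients of the prediction are weighted by a per-sample correction motion vector and added to the prediction. The gradient operator and the derivation of dvx and dvy are simplifying assumptions, not the exact processing of the motion prediction/compensation unit 218:

    import numpy as np

    def refine_chroma_prediction(chroma_pred, dvx, dvy):
        # Per-sample refinement: add the horizontal/vertical gradients of the
        # chroma prediction weighted by the per-sample correction motion
        # vector (dvx, dvy), then clip to the 8-bit sample range.
        pred = chroma_pred.astype(np.float64)
        gx = np.gradient(pred, axis=1)
        gy = np.gradient(pred, axis=0)
        refined = pred + gx * dvx + gy * dvy
        return np.clip(np.round(refined), 0, 255).astype(np.uint8)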


<Configuration Example of Image Decoding Device>


FIG. 10 shows the configuration of an embodiment of an image decoding device as an image processing device to which the present disclosure is applied. An image decoding device 301 shown in FIG. 10 is a decoding device corresponding to the image encoding device 201 of FIG. 8.


It is assumed that the encoded stream (Encoded Data) encoded by the image encoding device 201 is transmitted to and decoded by the image decoding device 301 corresponding to the image encoding device 201 via a predetermined transmission path.


As shown in FIG. 10, the image decoding device 301 includes a storage buffer 302, a lossless decoding unit 303, an inverse quantization unit 304, an inverse orthogonal transform unit 305, a calculation unit 306, a deblocking filter 307, an adaptive offset filter 308, an adaptive loop filter 309, a screen rearrangement buffer 310, a D/A conversion unit 311, a frame memory 312, a selection unit 313, an intra-prediction unit 314, a motion prediction/compensation unit 315, and a selection unit 316.


The storage buffer 302 is also a receiving unit that receives the transmitted encoded data. The storage buffer 302 receives and stores the transmitted encoded data. This encoded data is encoded by the image encoding device 201. The lossless decoding unit 303 decodes the encoded data read from the storage buffer 302 at a predetermined timing by a method corresponding to the encoding method of the lossless encoding unit 207 of FIG. 8.


The lossless decoding unit 303 supplies parameters such as information indicating the decoded intra-prediction mode to the intra-prediction unit 314, and supplies parameters such as information indicating the inter-prediction mode and motion vector information to the motion prediction/compensation unit 315. Further, the lossless decoding unit 303 supplies the decoded deblocking filter parameters to the deblocking filter 307, and supplies the decoded adaptive offset parameters to the adaptive offset filter 308.


The inverse quantization unit 304 dequantizes the coefficient data (quantization coefficient) decoded by the lossless decoding unit 303 by a method corresponding to the quantization method of the quantization unit 206 of FIG. 8. That is, the inverse quantization unit 304 performs inverse quantization of the quantization coefficient by the same method as the inverse quantization unit 209 of FIG. 8 using the quantization parameters supplied from the image encoding device 201.


The inverse quantization unit 304 supplies the dequantized coefficient data, that is, the orthogonal transform coefficient to the inverse orthogonal transform unit 305. The inverse orthogonal transform unit 305 performs inverse orthogonal transform on the orthogonal transform coefficient by a method corresponding to the orthogonal transform method of the orthogonal transform unit 205 of FIG. 8 to obtain decoded residue data corresponding to the residue data before being subjected to orthogonal transform in the image encoding device 201.


The decoded residue data obtained by the inverse orthogonal transform is supplied to the calculation unit 306. Further, the calculation unit 306 is supplied with a prediction image from the intra-prediction unit 314 or the motion prediction/compensation unit 315 via the selection unit 316.


The calculation unit 306 adds the decoded residue data and the prediction image to obtain the decoded image data corresponding to the image data before the prediction image is subtracted by the calculation unit 204 of the image encoding device 201. The calculation unit 306 supplies the decoded image data to the deblocking filter 307.


The deblocking filter 307 suppresses the block distortion of the decoded image by appropriately performing the deblocking filter processing on the image from the calculation unit 306, and supplies the filter processing result to the adaptive offset filter 308. The deblocking filter 307 is basically configured in the same manner as the deblocking filter 212 of FIG. 8. That is, the deblocking filter 307 has parameters β and Tc obtained based on the quantization parameters. The parameters β and Tc are threshold values used for determination regarding the deblocking filter.


The parameters β and Tc of the deblocking filter 307 are extended from β and Tc defined by the HEVC method. Each offset of the parameters β and Tc of the deblocking filter encoded by the image encoding device 201 is received by the image decoding device 301 as a parameter of the deblocking filter, decoded by the lossless decoding unit 303, and used by the deblocking filter 307.


The adaptive offset filter 308 mainly performs offset filter (SAO) processing for suppressing ringing on the image filtered by the deblocking filter 307.


The adaptive offset filter 308 applies filter processing on the image filtered by the deblocking filter 307 using a quad-tree structure in which the type of offset filter is determined for each divided area and an offset value for each divided area.


The adaptive offset filter 308 supplies the filtered image to the adaptive loop filter 309.


The quad-tree structure and the offset value for each divided area are calculated by the adaptive offset filter 213 of the image encoding device 201, encoded as an adaptive offset parameter, and sent. Then, the quad-tree structure and the offset value for each divided area encoded by the image encoding device 201 are received by the image decoding device 301 as an adaptive offset parameter, decoded by the lossless decoding unit 303, and used by the adaptive offset filter 308.


The adaptive loop filter 309 performs filter processing on the image filtered by the adaptive offset filter 308 for each processing unit using the filter coefficient, and supplies the filter processing result to the frame memory 312 and the screen rearrangement buffer 310.


Although not shown in the example of FIG. 10, in the image decoding device 301, the filter coefficient is calculated for each LCU by the adaptive loop filter 214 of the image encoding device 201, encoded and sent as an adaptive loop filter parameter, decoded by the lossless decoding unit 303, and used by the adaptive loop filter 309.


The screen rearrangement buffer 310 performs rearrangement of the images and supplies the same to the D/A conversion unit 311. That is, the order of the frames rearranged for the encoding order by the screen rearrangement buffer 203 of FIG. 8 is rearranged in the original display order.


The D/A conversion unit 311 performs D/A conversion on an image (Decoded Picture(s)) supplied from the screen rearrangement buffer 310, outputs the image to a display (not shown), and displays the image. In addition, the image may be output as it is as digital data without providing the D/A conversion unit 311.


The output of the adaptive loop filter 309 is also supplied to the frame memory 312.


The frame memory 312, the selection unit 313, the intra-prediction unit 314, the motion prediction/compensation unit 315, and the selection unit 316 correspond to the frame memory 215, the selection unit 216, the intra-prediction unit 217, the motion prediction/compensation unit 218, and the prediction image selection unit 219 of the image encoding device 201, respectively.


The selection unit 313 reads the image to be inter-processed and the reference image from the frame memory 312, and supplies the same to the motion prediction/compensation unit 315. Further, the selection unit 313 reads the image used for the intra-prediction from the frame memory 312 and supplies the same to the intra-prediction unit 314.


Information indicating the intra-prediction mode obtained by decoding the header information and the like is appropriately supplied from the lossless decoding unit 303 to the intra-prediction unit 314. Based on this information, the intra-prediction unit 314 generates a prediction image from the reference image acquired from the frame memory 312, and supplies the generated prediction image to the selection unit 316.


Information obtained by decoding the header information (prediction mode information, motion vector information, reference frame information, flags, various parameters, and the like) is supplied from the lossless decoding unit 303 to the motion prediction/compensation unit 315.


The motion prediction/compensation unit 315 generates a prediction image from the reference image acquired from the frame memory 312 based on the information supplied from the lossless decoding unit 303, and supplies the generated prediction image to the selection unit 316.


The selection unit 316 selects a prediction image generated by the motion prediction/compensation unit 315 or the intra-prediction unit 314 and supplies the same to the calculation unit 306.


The image decoding device 301 is configured in this way; the lossless decoding unit 303 corresponds to the decoding unit 32 of FIG. 3, and the motion prediction/compensation unit 315 corresponds to the inter-prediction unit 31 of FIG. 3. Therefore, as described above, the image decoding device 301 can further suppress deterioration of subjective image quality and deterioration of encoding efficiency.


<Operation of Image Decoding Device>

With reference to FIG. 11, an example of the flow of the decoding process executed by the image decoding device 301 as described above will be described.


When the decoding process is started, in step S201, the storage buffer 302 receives and stores the transmitted encoded stream (data). In step S202, the lossless decoding unit 303 decodes the encoded data supplied from the storage buffer 302. The I picture, P picture, and B picture encoded by the lossless encoding unit 207 of FIG. 8 are decoded.


Prior to decoding the picture, parameter information such as motion vector information, reference frame information, and prediction mode information (intra-prediction mode or inter-prediction mode) is also decoded.


When the prediction mode information is the intra-prediction mode information, the prediction mode information is supplied to the intra-prediction unit 314. When the prediction mode information is inter-prediction mode information, the prediction mode information and the corresponding motion vector information and the like are supplied to the motion prediction/compensation unit 315. The deblocking filter parameters and the adaptive offset parameter are also decoded and supplied to the deblocking filter 307 and the adaptive offset filter 308, respectively.


In step S203, the intra-prediction unit 314 or the motion prediction/compensation unit 315 performs a prediction image generation process corresponding to the prediction mode information supplied from the lossless decoding unit 303.


That is, when the intra-prediction mode information is supplied from the lossless decoding unit 303, the intra-prediction unit 314 generates an intra-prediction image of the intra-prediction mode. When the inter-prediction mode information is supplied from the lossless decoding unit 303, the motion prediction/compensation unit 315 performs the motion prediction/compensation processing in the inter-prediction mode and generates the inter-prediction image.


By this processing, the prediction image (intra-prediction image) generated by the intra-prediction unit 314 or the prediction image (inter-prediction image) generated by the motion prediction/compensation unit 315 is supplied to the selection unit 316.


In step S204, the selection unit 316 selects a prediction image. That is, the prediction image generated by the intra-prediction unit 314 or the prediction image generated by the motion prediction/compensation unit 315 is supplied. Therefore, the supplied prediction image is selected and supplied to the calculation unit 306, and is added to the output of the inverse orthogonal transform unit 305 in step S207 described later.


In step S202 described above, the transform coefficient decoded by the lossless decoding unit 303 is also supplied to the inverse quantization unit 304. In step S205, the inverse quantization unit 304 dequantizes the transform coefficient decoded by the lossless decoding unit 303 with characteristics corresponding to the characteristics of the quantization unit 206 of FIG. 8.


In step S206, the inverse orthogonal transform unit 305 performs inverse orthogonal transform on the transform coefficients dequantized by the inverse quantization unit 304 with the characteristics corresponding to the characteristics of the orthogonal transform unit 205 of FIG. 8. As a result, the difference information corresponding to the input of the orthogonal transform unit 205 (output of the calculation unit 204) in FIG. 8 is decoded.


In step S207, the calculation unit 306 adds the prediction image selected in the process of step S204 described above and input via the selection unit 316 to the difference information. In this way, the original image is decoded.


In step S208, the deblocking filter 307 performs deblocking filter processing on the image output from the calculation unit 306. At this time, as the threshold value for the determination regarding the deblocking filter, the parameters β and Tc extended from β and Tc defined by the HEVC method are used. The filtered image from the deblocking filter 307 is output to the adaptive offset filter 308. In the deblocking filter processing, each offset of the deblocking filter parameters β and Tc supplied from the lossless decoding unit 303 is also used.


In step S209, the adaptive offset filter 308 performs adaptive offset filter processing. By this processing, the filter processing is performed on the image filtered by the deblocking filter 307 using the quad-tree structure in which the type of the offset filter is determined for each divided area and the offset value for each divided area. The filtered image is supplied to the adaptive loop filter 309.


In step S210, the adaptive loop filter 309 performs adaptive loop filter processing on the image filtered by the adaptive offset filter 308. The adaptive loop filter 309 performs filter processing on the input image for each processing unit using the filter coefficient calculated for each processing unit, and supplies the filter processing result to the screen rearrangement buffer 310 and the frame memory 312.


In step S211, the frame memory 312 stores the filtered image.


In step S212, the screen rearrangement buffer 310 rearranges the images filtered by the adaptive loop filter 309 and then supplies the images to the D/A conversion unit 311. That is, the order of the frames rearranged for encoding by the screen rearrangement buffer 203 of the image encoding device 201 is rearranged in the original display order.
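

As a small illustration of this rearrangement (the use of picture order counts here is an assumption, not a statement about the screen rearrangement buffer 310 itself), frames received in decoding order can be sorted back into display order as follows:

    def reorder_for_display(decoded_frames, picture_order_counts):
        # Sort frames that arrive in decoding order back into display order
        # using their picture order count (POC).
        display_order = sorted(range(len(decoded_frames)), key=lambda i: picture_order_counts[i])
        return [decoded_frames[i] for i in display_order]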


In step S213, the D/A conversion unit 311 performs D/A conversion on the images rearranged by the screen rearrangement buffer 310 and outputs the same to a display (not shown), and the images are displayed.


When the process of step S213 ends, the decoding process ends.


In the decoding process as described above, when the motion prediction/compensation unit 315 performs motion prediction/compensation processing to generate a prediction image in step S203, color difference optical flow processing is performed on the color difference components Cb and Cr of the current prediction block.
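

One way to obtain the correction motion vectors for the color difference components, corresponding to the averaging option described in the configuration examples below, is sketched here; the 2x2 grouping of luminance positions per chroma position assumes 4:2:0 sampling, and even block dimensions are assumed:

    import numpy as np

    def chroma_correction_mv_from_luma(luma_dvx, luma_dvy):
        # Average the correction motion vectors of each 2x2 group of luminance
        # positions to obtain one color difference correction motion vector
        # per chroma position (4:2:0 sampling assumed).
        h, w = luma_dvx.shape
        dvx_c = luma_dvx.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        dvy_c = luma_dvy.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        return dvx_c, dvy_c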


<Configuration Example of Computer>

The above-described series of processing (image processing method) can be executed by hardware or software. In a case where the series of processing is executed by software, a program that configures the software is installed in a general-purpose computer or the like.



FIG. 12 is a block diagram showing an example of a configuration of an embodiment of a computer in which a program for executing the aforementioned series of processing is installed.


The program can be recorded in advance in a hard disk 1005 or a ROM 1003 as a recording medium included in the computer.


Alternatively, the program can be stored (recorded) in a removable recording medium 1011 driven by a drive 1009. The removable recording medium 1011 can be provided as so-called package software. Examples of the removable recording medium 1011 include a flexible disk, a compact disc read only memory (CD-ROM), a magneto optical (MO) disk, a digital versatile disc (DVD), a magnetic disk, and a semiconductor memory.


Note that the program can be downloaded to the computer through a communication network or a broadcast network and installed in the hard disk 1005 included in the computer in addition to being installed from the aforementioned removable recording medium 1011 to the computer. That is, the program can be transmitted from a download site to the computer through an artificial satellite for digital satellite broadcast in a wireless manner or transmitted to the computer through a network such as a local area network (LAN) or the Internet in a wired manner, for example.


The computer includes a central processing unit (CPU) 1002, and an input/output interface 1010 is connected to the CPU 1002 through a bus 1001.


When a user operates the input unit 1007 or the like to input a command through the input/output interface 1010, the CPU 1002 executes a program stored in the read only memory (ROM) 1003 according to the command. Alternatively, the CPU 1002 loads a program stored in the hard disk 1005 to a random access memory (RAM) 1004 and executes the program.


Accordingly, the CPU 1002 performs processing according to the above-described flowcharts or processing executed by components of the above-described block diagrams. In addition, the CPU 1002, for example, outputs a processing result from the output unit 1006 through the input/output interface 1010, transmits the processing result from the communication unit 1008, or records the processing result in the hard disk 1005, as necessary.


Note that the input unit 1007 is configured as a keyboard, a mouse, a microphone, or the like. In addition, the output unit 1006 is configured as a liquid crystal display (LCD), a speaker, or the like.


Here, processing executed by a computer according to a program is not necessarily performed according to a sequence described as a flowchart in the present description. That is, processing executed by a computer according to a program also includes processing executed in parallel or individually (e.g., parallel processing or processing according to objects).


In addition, a program may be processed by a single computer (processor) or may be processed by a plurality of computers in a distributed manner. Further, a program may be transmitted to a distant computer and executed.


Further, in the present description, the system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether or not all the components are arranged in a single housing. Thus, a plurality of devices accommodated in separate housings and connected via a network, and one device in which a plurality of modules are accommodated in one housing are both systems.


Further, for example, the configuration described as one device (or one processing unit) may be divided to be configured as a plurality of devices (or processing units). In contrast, the configuration described as the plurality of devices (or processing units) may be collected and configured as one device (or processing unit). A configuration other than the above-described configuration may be added to the configuration of each device (or each processing unit). Further, as long as the configuration and the operation of the entire system remain substantially the same, a part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or another processing unit).


Further, for example, the present technology may have a cloud computing configuration in which one function is shared with and processed by a plurality of devices via a network.


Further, for example, the program described above may be executed on any device. In this case, the device may have a necessary function (a functional block or the like) and may be able to obtain necessary information.


Further, for example, the respective steps described in the above-described flowchart may be executed by one device or in a shared manner by a plurality of devices. Furthermore, in a case where a plurality of steps of processing are included in one step, the plurality of steps of processing included in one step may be executed by one device or by a plurality of devices in a shared manner. In other words, a plurality of kinds of processing included in one step can also be executed as processing of a plurality of steps. In contrast, processing described as a plurality of steps can be collectively performed as one step.


For example, for a program executed by a computer, processing of steps describing the program may be performed chronologically in order described in the present specification or may be performed in parallel or individually at a necessary timing such as the time of calling. That is, processing of each step may be performed in order different from the above-described order as long as inconsistency does not occur. Further, processing of steps describing the program may be performed in parallel to processing of another program or may be performed in combination with processing of another program.


Note that the present technology described as various modes in the present description may be implemented independently alone as long as no contradiction arises. Of course, any plurality of technologies may be implemented together. For example, some or all of the present technologies described in several embodiments may be implemented in combination with some or all of the present technologies described in the other embodiments. A part or all of any above-described present technology can also be implemented together with another technology which has not been described above.


COMBINATION EXAMPLES OF CONFIGURATIONS

The present technology can also be configured as follows.


(1) An image processing device including: an inter-prediction unit that performs motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; and an encoding unit that encodes a current pixel in the current prediction block using the prediction pixel.


(2) The image processing device according to (1), wherein the inter-prediction unit derives a color difference correction motion vector for the color difference component of the current prediction block using a luminance correction motion vector used when performing optical flow processing for the luminance component of the current prediction block as luminance optical flow processing.


(3) The image processing device according to (2), wherein the inter-prediction unit derives a color difference correction motion vector for the color difference component of the current prediction block using an average of a plurality of luminance correction motion vectors used when performing optical flow processing for a plurality of luminance components of the current prediction block as luminance optical flow processing.


(4) The image processing device according to (2), wherein the inter-prediction unit uses one of a plurality of luminance correction motion vectors used when performing optical flow processing for a plurality of luminance components of the current prediction block as luminance optical flow processing as a color difference correction motion vector for the color difference component of the current prediction block.


(5) The image processing device according to any one of (1) to (4), wherein the inter-prediction unit generates a first color difference component of the prediction pixel in the current prediction block by performing the color difference optical flow processing on the first color difference component of the current prediction block, and generates a second color difference component of the prediction pixel in the current prediction block by performing the color difference optical flow processing on the second color difference component of the current prediction block.


(6) The image processing device according to any one of (1) to (5), wherein a Y signal, a Cb signal, and a Cr signal, or a Y signal, a U signal, and a V signal are used as the luminance component, the first color difference component, and the second color difference component.


(7) The image processing device according to any one of (1) to (6), further including: a setting unit that sets identification data for identifying whether to apply the color difference optical flow processing, wherein the encoding unit generates a bitstream including the identification data set by the setting unit.


(8) The image processing device according to any one of (1) to (7), wherein the setting unit sets block size identification data for identifying a block size of a prediction block to which the color difference optical flow processing is applied, and


the encoding unit generates a bitstream including the identification data set by the setting unit.


(9) An image processing method including: allowing an image processing device to execute: performing motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; and encoding a current pixel in the current prediction block using the prediction pixel.


(10) An image processing device including: an inter-prediction unit that performs motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; and a decoding unit that decodes a current pixel in the current prediction block using the prediction pixel.


(11) An image processing method including: allowing an image processing device to execute: performing motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; and decoding a current pixel in the current prediction block using the prediction pixel.


Note that embodiments of the present technology are not limited to the above-mentioned embodiments and can be modified in various manners without departing from the gist of the present technology. The effects described in the present description are merely illustrative and not restrictive, and other effects may be obtained.


REFERENCE SIGNS LIST




  • 11 Image processing system


  • 12 Image encoding device


  • 13 Image decoding device


  • 21 Inter-prediction unit


  • 22 Encoding unit


  • 23 Setting unit


  • 31 Inter-prediction unit


  • 32 Decoding unit


Claims
  • 1. An image processing device comprising: an inter-prediction unit that performs motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; andan encoding unit that encodes a current pixel in the current prediction block using the prediction pixel.
  • 2. The image processing device according to claim 1, wherein the inter-prediction unit derives a color difference correction motion vector for the color difference component of the current prediction block using a luminance correction motion vector used in a case of performing optical flow processing for the luminance component of the current prediction block as luminance optical flow processing.
  • 3. The image processing device according to claim 2, wherein the inter-prediction unit derives a color difference correction motion vector for the color difference component of the current prediction block using an average of a plurality of luminance correction motion vectors used in a case of performing optical flow processing for a plurality of luminance components of the current prediction block as luminance optical flow processing.
  • 4. The image processing device according to claim 2, wherein the inter-prediction unit uses one of a plurality of luminance correction motion vectors used in a case of performing optical flow processing for a plurality of luminance components of the current prediction block as luminance optical flow processing as a color difference correction motion vector for the color difference component of the current prediction block.
  • 5. The image processing device according to claim 2, wherein the inter-prediction unit generates a first color difference component of the prediction pixel in the current prediction block by performing the color difference optical flow processing on the first color difference component of the current prediction block, and generates a second color difference component of the prediction pixel in the current prediction block by performing the color difference optical flow processing on the second color difference component of the current prediction block.
  • 6. The image processing device according to claim 5, wherein a Y signal, a Cb signal, and a Cr signal, or a Y signal, a U signal, and a V signal are used as the luminance component, the first color difference component, and the second color difference component.
  • 7. The image processing device according to claim 1, further comprising: a setting unit that sets identification data for identifying whether to apply the color difference optical flow processing, whereinthe encoding unit generates a bitstream including the identification data set by the setting unit.
  • 8. The image processing device according to claim 7, wherein the setting unit sets block size identification data for identifying a block size of a prediction block to which the color difference optical flow processing is applied, andthe encoding unit generates a bitstream including the identification data set by the setting unit.
  • 9. An image processing method comprising: allowing an image processing device to execute:performing motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; andencoding a current pixel in the current prediction block using the prediction pixel.
  • 10. An image processing device comprising: an inter-prediction unit performs motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; anda decoding unit that decodes a current pixel in the current prediction block using the prediction pixel.
  • 11. An image processing method comprising: allowing an image processing device to execute: performing motion compensation processing to which optical flow processing is applied on a color difference component of a current prediction block that is subject to an encoding process as color difference optical flow processing to generate a prediction pixel in the current prediction block; anddecoding a current pixel in the current prediction block using the prediction pixel.
PCT Information
Filing Document Filing Date Country Kind
PCT/JP2020/035763 9/23/2020 WO
Provisional Applications (1)
Number Date Country
62904453 Sep 2019 US