The present disclosure relates to a feature encoding/decoding method and apparatus and, more specifically, to a feature encoding/decoding method and apparatus for compressing feature data of a plurality of layers from a high-dimensional channel to a low-dimensional channel and reconstructing it back to an original-dimensional channel, and to a recording medium storing a bitstream generated by the feature encoding method/apparatus of the present disclosure.
With the development of machine learning technology, demand for image processing-based artificial intelligence services is increasing. In order to effectively process a vast amount of image data required for artificial intelligence services within limited resources, image compression technology optimized for machine task performance is essential. However, existing image compression technology has been developed with the goal of high-resolution, high-quality image processing for human vision, and has the problem of being unsuitable for artificial intelligence services. Accordingly, research and development on new machine-oriented image compression technology suitable for artificial intelligence services is actively underway.
An object of the present disclosure is to provide a feature encoding/decoding method and apparatus with improved encoding/decoding efficiency.
Another object of the present disclosure is to provide a feature encoding/decoding method and apparatus for compressing and reconstructing channels of feature data in a plurality of layers in units of images or frames.
Another object of the present disclosure is to provide a feature encoding/decoding method and apparatus for allocating different numbers of channels after reduction transform in layers based on a degree of influence on machine task performance.
Another object of the present disclosure is to provide a feature encoding/decoding method and apparatus for setting different quantization parameter values with respect to layers based on a degree of influence on machine task performance.
Another object of the present disclosure is to provide a method of transmitting a bitstream generated by a feature encoding method or apparatus according to the present disclosure.
Another object of the present disclosure is to provide a recording medium storing a bitstream generated by a feature encoding method or apparatus according to the present disclosure.
Another object of the present disclosure is to provide a recording medium storing a bitstream received, decoded and used to reconstruct a feature by a feature decoding apparatus according to the present disclosure.
The technical problems solved by the present disclosure are not limited to the above technical problems and other technical problems which are not described herein will become apparent to those skilled in the art from the following description.
A feature decoding method according to an aspect of the present disclosure is a feature decoding method performed by a feature decoding apparatus. The feature decoding method may comprise obtaining parameter information related to a plurality of layers from a bitstream and reconstructing feature information of the plurality of layers based on the parameter information. The parameter information may comprise at least one of information about the number of feature channels or information about a quantization parameter (QP).
A feature encoding method according to another aspect of the present disclosure is a feature encoding method performed by a feature encoding apparatus. The feature encoding method may comprise extracting feature information of a plurality of layers from an image, generating parameter information related to the plurality of layers, and encoding the parameter information. The parameter information may comprise at least one of information about the number of feature channels or information about a quantization parameter (QP).
A recording medium according to another aspect of the present disclosure may store a bitstream generated by the feature encoding method or the feature encoding apparatus of the present disclosure.
A bitstream transmission method according to another aspect of the present disclosure may transmit a bitstream generated by the feature encoding method or the feature encoding apparatus of the present disclosure to a feature decoding apparatus.
The features briefly summarized above with respect to the present disclosure are merely exemplary aspects of the detailed description below of the present disclosure, and do not limit the scope of the present disclosure.
According to the present disclosure, it is possible to provide a feature encoding/decoding method and apparatus with improved encoding/decoding efficiency.
Additionally, according to the present disclosure, the number of channels after reduction transform is set in units of images or frames, thereby improving the efficiency of feature encoding and decoding.
Additionally, according to the present disclosure, since the number of channels after reduction transform in layers is differently set depending on a degree of influence on machine task performance, the accuracy of the machine task can be improved with the same bitstream size, and the efficiency of feature encoding and decoding can be improved.
Additionally, according to the present disclosure, since QP values of layers are adaptively set according to a degree of influence on machine task performance, the accuracy of the machine task can be improved while maintaining the amount of bits of a bitstream.
It will be appreciated by persons skilled in the art that the effects that can be achieved through the present disclosure are not limited to what has been particularly described hereinabove, and other advantages of the present disclosure will be more clearly understood from the detailed description.
Hereinafter, the embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so as to be easily implemented by those skilled in the art. However, the present disclosure may be implemented in various different forms, and is not limited to the embodiments described herein.
In describing the present disclosure, in case it is determined that the detailed description of a related known function or construction renders the scope of the present disclosure unnecessarily ambiguous, the detailed description thereof will be omitted. In the drawings, parts not related to the description of the present disclosure are omitted, and similar reference numerals are attached to similar parts.
In the present disclosure, when a component is “connected”, “coupled” or “linked” to another component, it may include not only a direct connection relationship but also an indirect connection relationship in which an intervening component is present. In addition, when a component “includes” or “has” other components, it means that other components may be further included, rather than excluding other components unless otherwise stated.
In the present disclosure, the terms first, second, etc. may be used only for the purpose of distinguishing one component from other components, and do not limit the order or importance of the components unless otherwise stated. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
In the present disclosure, components that are distinguished from each other are intended to clearly describe each feature, and do not mean that the components are necessarily separated. That is, a plurality of components may be integrated and implemented in one hardware or software unit, or one component may be distributed and implemented in a plurality of hardware or software units. Therefore, even if not stated otherwise, such embodiments in which the components are integrated or the component is distributed are also included in the scope of the present disclosure.
In the present disclosure, the components described in various embodiments do not necessarily mean essential components, and some components may be optional components. Accordingly, an embodiment consisting of a subset of components described in an embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to components described in the various embodiments are included in the scope of the present disclosure.
The present disclosure relates to encoding and decoding of an image, and terms used in the present disclosure may have a general meaning commonly used in the technical field, to which the present disclosure belongs, unless newly defined in the present disclosure.
The present disclosure may be applied to a method disclosed in a Versatile Video Coding (VVC) standard and/or a Video Coding for Machines (VCM) standard. In addition, the present disclosure may be applied to a method disclosed in an essential video coding (EVC) standard, AOMedia Video 1 (AV1) standard, 2nd generation of audio video coding standard (AVS2), or a next-generation video/image coding standard (e.g., H.267 or H.268, etc.).
This disclosure provides various embodiments related to video/image coding, and, unless otherwise stated, the embodiments may be performed in combination with each other. In the present disclosure, “video” refers to a set of a series of images according to the passage of time. An “image” may be information generated by artificial intelligence (AI). Input information used in the process of performing a series of tasks by AI, information generated during the information processing process, and the output information may be used as images. In the present disclosure, a “picture” generally refers to a unit representing one image in a specific time period, and a slice/tile is a coding unit constituting a part of a picture in encoding. One picture may be composed of one or more slices/tiles. In addition, a slice/tile may include one or more coding tree units (CTUs). The CTU may be partitioned into one or more CUs. A tile is a rectangular region present in a specific tile row and a specific tile column in a picture, and may be composed of a plurality of CTUs. A tile column may be defined as a rectangular region of CTUs, may have the same height as a picture, and may have a width specified by a syntax element signaled from a bitstream part such as a picture parameter set. A tile row may be defined as a rectangular region of CTUs, may have the same width as a picture, and may have a height specified by a syntax element signaled from a bitstream part such as a picture parameter set. A tile scan is a certain continuous ordering method of CTUs partitioning a picture. Here, CTUs may be sequentially ordered according to a CTU raster scan within a tile, and tiles in a picture may be sequentially ordered according to a raster scan order of tiles of the picture. A slice may contain an integer number of complete tiles, or may contain a continuous integer number of complete CTU rows within one tile of one picture. A slice may be exclusively included in a single NAL unit. One picture may be composed of one or more tile groups. One tile group may include one or more tiles. A brick may indicate a rectangular region of CTU rows within a tile in a picture. One tile may include one or more bricks. The brick may refer to a rectangular region of CTU rows in a tile. One tile may be split into a plurality of bricks, and each brick may include one or more CTU rows belonging to a tile. A tile which is not split into a plurality of bricks may also be treated as a brick.
In the present disclosure, a “pixel” or a “pel” may mean a smallest unit constituting one picture (or image). In addition, “sample” may be used as a term corresponding to a pixel. A sample may generally represent a pixel or a value of a pixel, and may represent only a pixel/pixel value of a luma component or only a pixel/pixel value of a chroma component.
In an embodiment, especially when applied to VCM, when there is a picture composed of a set of components having different characteristics and meanings, a pixel/pixel value may represent a pixel/pixel value of a component generated through independent information or through combination, synthesis, and analysis of each component. For example, in RGB input, only the pixel/pixel value of R may be represented, only the pixel/pixel value of G may be represented, or only the pixel/pixel value of B may be represented. For example, only the pixel/pixel value of a luma component synthesized using the R, G, and B components may be represented. For example, only the pixel/pixel values of an image or of information extracted through analysis of the R, G, and B components may be represented.
In the present disclosure, a “unit” may represent a basic unit of image processing. The unit may include at least one of a specific region of the picture and information related to the region. One unit may include one luma block and two chroma (e.g., Cb and Cr) blocks. The unit may be used interchangeably with terms such as “sample array”, “block” or “area” in some cases. In a general case, an M×N block may include samples (or sample arrays) or a set (or array) of transform coefficients of M columns and N rows. In an embodiment, particularly when applied to VCM, the unit may represent a basic unit containing information for performing a specific task.
In the present disclosure, “current block” may mean one of “current coding block”, “current coding unit”, “coding target block”, “decoding target block” or “processing target block”. When prediction is performed, “current block” may mean “current prediction block” or “prediction target block”. When transform (inverse transform)/quantization (dequantization) is performed, “current block” may mean “current transform block” or “transform target block”. When filtering is performed, “current block” may mean “filtering target block”.
In addition, in the present disclosure, a “current block” may mean “a luma block of a current block” unless explicitly stated as a chroma block. The “chroma block of the current block” may be expressed by including an explicit description of a chroma block, such as “chroma block” or “current chroma block”.
In the present disclosure, the terms “/” and “,” should be interpreted to indicate “and/or.” For instance, the expressions “A/B” and “A, B” may mean “A and/or B.” Further, “A/B/C” and “A, B, C” may mean “at least one of A, B, and/or C.”
In the present disclosure, the term “or” should be interpreted to indicate “and/or.” For instance, the expression “A or B” may comprise 1) only “A”, 2) only “B”, and/or 3) both “A and B”. In other words, in the present disclosure, the term “or” should be interpreted to indicate “additionally or alternatively.”
The present disclosure relates to video/image coding for machines (VCM).
VCM refers to a compression technology that encodes/decodes part of a source image/video or information obtained from the source image/video for the purpose of machine vision. In VCM, the encoding/decoding target may be referred to as a feature. The feature may refer to information extracted from the source image/video based on task purpose, requirements, surrounding environment, etc. The feature may have a different information form from the source image/video, and accordingly, the compression method and expression format of the feature may also be different from those of the video source.
VCM may be applied to a variety of application fields. For example, in a surveillance system that recognizes and tracks objects or people, VCM may be used to store or transmit object recognition information. In addition, in intelligent transportation or smart traffic systems, VCM may be used to transmit vehicle location information collected from GPS, sensing information collected from LIDAR, radar, etc., and various vehicle control information to other vehicles or infrastructure. Additionally, in the smart city field, VCM may be used to perform individual tasks of interconnected sensor nodes or devices.
The present disclosure provides various embodiments of feature/feature map coding. Unless otherwise specified, embodiments of the present disclosure may be implemented individually, or may be implemented in combination of two or more.
Referring to
The encoding apparatus 10 may compress/encode a feature/feature map extracted from a source image/video to generate a bitstream, and transmit the generated bitstream to the decoding apparatus 20 through a storage medium or network. The encoding apparatus 10 may also be referred to as a feature encoding apparatus. In a VCM system, the feature/feature map may be generated at each hidden layer of a neural network. The size and number of channels of the generated feature map may vary depending on the type of neural network or the location of the hidden layer. In the present disclosure, a feature map may be referred to as a feature set, and a feature or feature map may be referred to as “feature information”.
The encoding apparatus 10 may include a feature acquisition unit 11, an encoding unit 12, and a transmission unit 13.
The feature acquisition unit 11 may acquire a feature/feature map for the source image/video. Depending on the embodiment, the feature acquisition unit 11 may acquire a feature/feature map from an external device, for example, a feature extraction network. In this case, the feature acquisition unit 11 performs a feature reception interface function. Alternatively, the feature acquisition unit 11 may acquire a feature/feature map by executing a neural network (e.g., CNN, DNN, etc.) using the source image/video as input. In this case, the feature acquisition unit 11 performs a feature extraction network function.
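For illustration only, the following is a minimal sketch, assuming a PyTorch environment and an arbitrary two-layer convolutional backbone (both assumptions, not part of this disclosure), of how the feature acquisition unit 11 could obtain a feature/feature map from a hidden layer by registering a forward hook.

# Minimal sketch (assumption: PyTorch is available); emulates the feature
# acquisition unit 11 acting as a feature extraction network by capturing
# the output of a hidden layer of a small CNN.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),   # hidden layer 0
    nn.ReLU(),
    nn.Conv2d(64, 256, kernel_size=3, stride=2, padding=1),  # hidden layer 1
    nn.ReLU(),
)

captured = {}

def hook(module, inputs, output):
    # The hook stores the hidden-layer output (the feature/feature map).
    captured["feature_map"] = output.detach()

backbone[2].register_forward_hook(hook)

source = torch.randn(1, 3, 224, 224)    # source image/video frame (C=3)
_ = backbone(source)
feature_map = captured["feature_map"]   # shape (1, 256, 56, 56): C'=256 > C=3
print(feature_map.shape)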
Depending on the embodiment, the encoding apparatus 10 may further include a source image generator (not shown) for acquiring the source image/video. The source image generator may be implemented with an image sensor, a camera module, etc., and may acquire the source image/video through an image/video capture, synthesis, or generation process. In this case, the generated source image/video may be sent to the feature extraction network and used as input data for extracting the feature/feature map.
The encoding unit 12 may encode the feature/feature map acquired by the feature acquisition unit 11. The encoding unit 12 may perform a series of procedures such as prediction, transform, and quantization to increase encoding efficiency. The encoded data (encoded feature/feature map information) may be output in the form of a bitstream. The bitstream containing the encoded feature/feature map information may be referred to as a VCM bitstream.
The transmission unit 13 may obtain feature/feature map information or data output in the form of a bitstream and forward it to the decoding apparatus 20 or another external object through a digital storage medium or network in the form of a file or streaming. Here, digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD. The transmission unit 13 may include elements for generating a media file with a predetermined file format or elements for transmitting data through a broadcasting/communication network. The transmission unit 13 may be provided as a separate transmission device from the encoder 12. In this case, the transmission device may include at least one processor that acquires feature/feature map information or data output in the form of a bitstream and a transmission unit for transmitting it in the form of a file or streaming.
The decoding apparatus 20 may acquire feature/feature map information from the encoding apparatus 10 and reconstruct the feature/feature map based on the acquired information.
The decoding apparatus 20 may include a reception unit 21 and a decoding unit 22.
The reception unit 21 may receive a bitstream from the encoding apparatus 10, acquire feature/feature map information from the received bitstream, and send it to the decoding unit 22.
The decoding unit 22 may decode the feature/feature map based on the acquired feature/feature map information. The decoding unit 22 may perform a series of procedures such as dequantization, inverse transform, and prediction corresponding to the operation of the encoding unit 12 to increase decoding efficiency.
Depending on the embodiment, the decoding apparatus 20 may further include a task analysis/rendering unit 23.
The task analysis/rendering unit 23 may perform task analysis based on the decoded feature/feature map. Additionally, the task analysis/rendering unit 23 may render the decoded feature/feature map into a form suitable for task performance. Various machine (oriented) tasks may be performed based on task analysis results and the rendered features/feature map.
As described above, the VCM system may encode/decode the feature extracted from the source image/video according to user and/or machine requests, task purpose, and surrounding environment, and may perform various machine (oriented) tasks based on the decoded feature. The VCM system may be implemented by expanding/redesigning the video/image coding system and may perform various encoding/decoding methods defined in the VCM standard.
Referring to
The first pipeline 210 may include a first stage 211 for encoding an input image/video and a second stage 212 for decoding the encoded image/video to generate a reconstructed image/video. The reconstructed image/video may be used for human viewing, that is, human vision.
The second pipeline 220 may include a third stage 221 for extracting a feature/feature map from the input image/video, a fourth stage 222 for encoding the extracted feature/feature map, and a fifth stage 223 for decoding the encoded feature/feature map to generate a reconstructed feature/feature map. The reconstructed feature/feature map may be used for a machine (vision) task. Here, the machine (vision) task may refer to a task in which images/videos are consumed by a machine. The machine (vision) task may be applied to service scenarios such as, for example, Surveillance, Intelligent Transportation, Smart City, Intelligent Industry, Intelligent Content, etc. Depending on the embodiment, the reconstructed feature/feature map may be used for human vision.
Depending on the embodiment, the feature/feature map encoded in the fourth stage 222 may be transferred to the first stage 211 and used to encode the image/video. In this case, an additional bitstream may be generated based on the encoded feature/feature map, and the generated additional bitstream may be transferred to the second stage 212 and used to decode the image/video.
Depending on the embodiment, the feature/feature map decoded in the fifth stage 223 may be transferred to the second stage 212 and used to decode the image/video.
Meanwhile, in the first pipeline 210, the first stage 211 may be performed by an image/video encoder, and the second stage 212 may be performed by an image/video decoder. Additionally, in the second pipeline 220, the third stage 221 and the fourth stage 222 may be performed by a VCM encoder (or feature/feature map encoder), and the fifth stage 223 may be performed by a VCM decoder (or feature/feature map decoder). Hereinafter, the encoder/decoder structure will be described in detail.
Referring to
The image partitioner 310 may partition an input image (or picture, frame) input to the image/video encoder 300 into one or more processing units. As an example, the processing unit may be referred to as a coding unit (CU). The coding unit may be recursively partitioned according to a quad-tree binary-tree ternary-tree (QTBTTT) structure from a coding tree unit (CTU) or largest coding unit (LCU). For example, one coding unit may be partitioned into a plurality of coding units of deeper depth based on a quad tree structure, binary tree structure, and/or ternary structure. In this case, for example, the quad tree structure may be applied first and the binary tree structure and/or ternary structure may be applied later. Alternatively, the binary tree structure may be applied first. The image/video coding procedure according to the present disclosure may be performed based on a final coding unit that is no longer partitioned. In this case, the largest coding unit may be used as the final coding unit based on coding efficiency according to image characteristics, or, if necessary, the coding unit may be recursively partitioned into coding units of deeper depth to use a coding unit with an optimal size as the final coding unit. Here, the coding procedure may include procedures such as prediction, transform, and reconstruction, which will be described later. As another example, the processing unit may further include a prediction unit (PU) or a transform unit (TU). In this case, the prediction unit and the transform unit may each be divided or partitioned from the final coding unit described above. The prediction unit may be a unit of sample prediction, and the transform unit may be a unit for deriving a transform coefficient and/or a unit for deriving a residual signal from the transform coefficient.
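As a simplified, non-normative sketch of the recursive partitioning idea (quad-tree splitting only; the binary/ternary splits and the normative VVC rules are omitted), the following hypothetical helper shows how a CTU could be recursively divided until final coding units are reached.

# Minimal sketch (not the normative partitioning process): recursively split a
# CTU into smaller coding units using quad-tree splits only.
def partition(x, y, size, min_size, should_split):
    """Return a list of (x, y, size) leaf coding units."""
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]            # final coding unit (no further split)
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += partition(x + dx, y + dy, half, min_size, should_split)
    return leaves

# Example: split a 128x128 CTU down to 32x32 blocks everywhere.
cus = partition(0, 0, 128, 32, lambda x, y, s: True)
print(len(cus))   # 16 leaf coding units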
In some cases, the unit may be used interchangeably with terms such as block or area. In a general case, an M×N block may represent a set of samples or transform coefficients consisting of M columns and N rows. A sample may generally represent a pixel or a pixel value, and may represent only a pixel/pixel value of a luma component, or only a pixel/pixel value of a chroma component. The sample may be used as a term corresponding to pixel or pel.
The image/video encoder 300 may generate a residual signal (residual block, residual sample array) by subtracting a prediction signal (predicted block, prediction sample array) output from the inter predictor 321 or the intra predictor 322 from the input image signal (original block, original sample array) and transmit the generated residual signal to the transformer 332. In this case, as shown, the unit that subtracts the prediction signal (prediction block, prediction sample array) from the input image signal (original block, original sample array) within the image/video encoder 300 may be referred to as the subtractor 331. The predictor may perform prediction on a processing target block (hereinafter referred to as a current block) and generate a predicted block including prediction samples for the current block. The predictor may determine whether intra prediction or inter prediction is applied in current block or CU units. The predictor may generate various information related to prediction, such as prediction mode information, and transfer it to the entropy encoder 340. Information about prediction may be encoded in the entropy encoder 340 and output in the form of a bitstream.
The intra predictor 322 may predict the current block by referring to the samples in the current picture. At this time, the referenced samples may be located in the neighbor of the current block or may be located away from the current block, depending on the prediction mode. In intra prediction, the prediction modes may include a plurality of non-directional modes and a plurality of directional modes. The non-directional mode may include, for example, a DC mode and a planar mode. The directional mode may include, for example, 33 directional prediction modes or 65 directional prediction modes according to the degree of detail of the prediction direction. However, this is merely an example, and more or fewer directional prediction modes may be used depending on the setting. The intra predictor 322 may determine the prediction mode applied to the current block by using a prediction mode applied to a neighboring block.
The inter predictor 321 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector on a reference picture. In this case, in order to reduce the amount of motion information transmitted in the inter prediction mode, the motion information may be predicted in block, subblock, or sample units based on correlation of motion information between the neighboring block and the current block. The motion information may include a motion vector and a reference picture index. The motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information. In the case of inter prediction, the neighboring block may include a spatial neighboring block present in the current picture and a temporal neighboring block present in the reference picture. The reference picture including the reference block and the reference picture including the temporal neighboring block may be the same or different. The temporal neighboring block may be called a collocated reference block, a co-located CU (colCU), and the like, and the reference picture including the temporal neighboring block may be called a collocated picture (colPic). For example, the inter predictor 321 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive a motion vector and/or reference picture index of the current block. Inter prediction may be performed based on various prediction modes, and, for example, in the case of a skip mode and a merge mode, the inter predictor 321 may use motion information of the neighboring block as motion information of the current block. In the case of the skip mode, unlike the merge mode, the residual signal may not be transmitted. In the case of the motion vector prediction (MVP) mode, the motion vector of the neighboring block may be used as a motion vector predictor, and a motion vector difference may be signaled to indicate the motion vector of the current block.
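The following is a minimal, hypothetical sketch of the motion derivation described above: in the merge/skip modes the selected candidate's motion is reused as-is, while in the MVP mode a signaled motion vector difference refines the selected predictor. The candidate values and the function name are illustrative assumptions.

# Minimal sketch (assumed simplification of merge/MVP): deriving a motion
# vector from a candidate list built from neighboring blocks.
def derive_motion_vector(candidates, mode, candidate_index, mvd=(0, 0)):
    """candidates: list of (mvx, mvy) from spatial/temporal neighboring blocks."""
    mvp = candidates[candidate_index]          # candidate selected by a signaled index
    if mode in ("merge", "skip"):
        return mvp                             # reuse neighbor motion as-is
    # MVP mode: the signaled motion vector difference refines the predictor.
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

neighbors = [(4, -2), (3, -1), (0, 0)]         # hypothetical candidate list
print(derive_motion_vector(neighbors, "merge", 0))        # (4, -2)
print(derive_motion_vector(neighbors, "mvp", 1, (1, 2)))  # (4, 1)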
The predictor 320 may generate a prediction signal based on various prediction methods. For example, the predictor may not only apply intra prediction or inter prediction but also simultaneously apply both intra prediction and inter prediction, for prediction of one block. This may be called combined inter and intra prediction (CIIP). In addition, the predictor may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of the block. The IBC prediction mode or the palette mode may be used for content image/video coding of a game or the like, for example, screen content coding (SCC). IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that a reference block is derived within the current picture. That is, IBC may use at least one of the inter prediction techniques described in the present disclosure. The palette mode may be regarded as an example of intra coding or intra prediction. When the palette mode is applied, the sample values within the picture may be signaled based on information about a palette table and a palette index.
The prediction signal generated by the predictor 320 may be used to generate a reconstructed signal or to generate a residual signal. The transformer 332 may generate transform coefficients by applying a transform technique to the residual signal. For example, the transform technique may include at least one of a discrete cosine transform (DCT), a discrete sine transform (DST), a karhunen-loève transform (KLT), a graph-based transform (GBT), or a conditionally non-linear transform (CNT). Here, the GBT refers to transform obtained from a graph when relationship information between pixels is represented by the graph. The CNT refers to transform acquired based on a prediction signal generated using all previously reconstructed pixels. In addition, the transform process may be applied to square pixel blocks having the same size or may be applied to non-square blocks having a variable size.
The quantizer 333 may quantize the transform coefficients and transmit them to the entropy encoder 340. The entropy encoder 340 may encode the quantized signal (information on the quantized transform coefficients) and output a bitstream. The information on the quantized transform coefficients may be referred to as residual information. The quantizer 333 may reorder quantized transform coefficients in a block form into a one-dimensional vector form based on a coefficient scan order and generate information on the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form. The entropy encoder 340 may perform various encoding methods such as, for example, exponential Golomb, context-adaptive variable length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), and the like. The entropy encoder 340 may encode information necessary for video/image reconstruction other than quantized transform coefficients (e.g., values of syntax elements, etc.) together or separately. Encoded information (e.g., encoded video/image information) may be transmitted or stored in units of network abstraction layers (NALs) in the form of a bitstream. The video/image information may further include information on various parameter sets such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS). In addition, the video/image information may further include general constraint information. In addition, the video/image information may further include a method of generating and using encoded information, a purpose, and the like. In the present disclosure, information and/or syntax elements transferred/signaled from the image/video encoder to the image/video decoder may be included in image/video information. The image/video information may be encoded through the above-described encoding procedure and included in the bitstream. The bitstream may be transmitted over a network or may be stored in a digital storage medium. The network may include a broadcasting network and/or a communication network, and the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, SSD, and the like. A transmitter (not shown) transmitting a signal output from the entropy encoder 340 and/or a storage unit (not shown) storing the signal may be configured as an internal/external element of the image/video encoder 300, or the transmitter may be included in the entropy encoder 340.
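As a rough sketch of the quantization and coefficient-reordering steps described above (using a plain uniform quantizer and a raster scan for simplicity; the normative process uses scaling lists, rounding offsets, and sub-block diagonal scans), consider the following.

# Minimal sketch (not the normative process): uniform quantization of a block
# of transform coefficients and reordering into a 1-D vector by a scan order.
import numpy as np

def quantize(coeffs, step):
    # Simplified uniform quantizer with rounding.
    return np.round(coeffs / step).astype(np.int32)

def raster_scan(block):
    # Reorders a 2-D block of quantized coefficients into a 1-D vector
    # (raster order is used here only to illustrate the reordering step
    #  performed before entropy coding).
    return block.reshape(-1)

coeffs = np.array([[40.0, -7.5, 0.2, 0.0],
                   [ 6.3,  1.1, 0.0, 0.0],
                   [ 0.4,  0.0, 0.0, 0.0],
                   [ 0.0,  0.0, 0.0, 0.0]])
levels = quantize(coeffs, step=4.0)
print(raster_scan(levels))   # 1-D coefficient levels passed to the entropy encoder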
The quantized transform coefficients output from the quantizer 333 may be used to generate a prediction signal. For example, the residual signal (residual block or residual samples) may be reconstructed by applying dequantization and inverse transform to the quantized transform coefficients through the dequantizer 334 and the inverse transformer 335. The adder 350 adds the reconstructed residual signal to the prediction signal output from the inter predictor 321 or the intra predictor 322 to generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array). In a case where there is no residual for the processing target block, such as a case where the skip mode is applied, the predicted block may be used as the reconstructed block. The adder 350 may be called a reconstructor or a reconstructed block generator. The generated reconstructed signal may be used for intra prediction of a next processing target block in the current picture and may be used for inter prediction of a next picture through filtering as described below.
Meanwhile, luma mapping with chroma scaling (LMCS) is applicable in a picture encoding and/or reconstruction process.
The filter 360 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filter 360 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and store the modified reconstructed picture in the memory 370, specifically, a DPB of the memory 370. The various filtering methods may include, for example, deblocking filtering, a sample adaptive offset, an adaptive loop filter, a bilateral filter, and the like. The filter 360 may generate various information related to filtering and transmit the generated information to the entropy encoder 340. The information related to filtering may be encoded by the entropy encoder 340 and output in the form of a bitstream.
The modified reconstructed picture transmitted to the memory 370 may be used as the reference picture in the inter predictor 321. Through this, prediction mismatch between the encoder and the decoder may be avoided and encoding efficiency may be improved.
The DPB of the memory 370 may store the modified reconstructed picture for use as a reference picture in the inter predictor 321. The memory 370 may store the motion information of the block from which the motion information in the current picture is derived (or encoded) and/or the motion information of the blocks in the already reconstructed picture. The stored motion information may be transferred to the inter predictor 321 for use as the motion information of the spatial neighboring block or the motion information of the temporal neighboring block. The memory 370 may store reconstructed samples of reconstructed blocks in the current picture and may transfer the stored reconstructed samples to the intra predictor 322.
Meanwhile, the VCM encoder (or feature/feature map encoder) basically performs a series of procedures such as prediction, transform, and quantization to encode the feature/feature map and thus may basically have the same/similar structure as the image/video encoder 300 described with reference to
Referring to
When a bitstream containing video/image information is input, the image/video decoder 400 may reconstruct an image/video in correspondence with the process in which the image/video information is processed in the image/video encoder 300 of
The image/video decoder 400 may receive a signal output from the encoder of
The dequantizer 421 may dequantize the quantized transform coefficients and output transform coefficients. The dequantizer 421 may rearrange the quantized transform coefficients into a two-dimensional block form. In this case, rearranging may be performed based on the coefficient scan order performed in the image/video encoder. The dequantizer 421 may perform dequantization on quantized transform coefficients using quantization parameters (e.g., quantization step size information) and acquire transform coefficients.
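The following sketch assumes an HEVC/VVC-like relation in which the quantization step size roughly doubles for every increase of 6 in the QP; the exact normative scaling, level limits, and clipping are omitted.

# Minimal sketch of dequantization driven by a quantization parameter (QP).
# Assumption: the quantization step approximately doubles every 6 QP values.
import numpy as np

def qp_to_step(qp):
    return 2.0 ** ((qp - 4) / 6.0)

def dequantize(levels, qp):
    # Reconstructs transform coefficients from quantized levels.
    return levels * qp_to_step(qp)

levels = np.array([10, -2, 1, 0])
print(dequantize(levels, qp=22))   # step = 8
print(dequantize(levels, qp=28))   # larger QP -> larger step -> coarser coefficients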
The inverse transformer 422 inversely transforms the transform coefficients to acquire a residual signal (residual block, residual sample array).
The predictor 430 may perform prediction on the current block and generate a predicted block including prediction samples for the current block. The predictor may determine whether intra prediction or inter prediction is applied to the current block based on information about prediction output from the entropy decoder 410, and may determine a specific intra/inter prediction mode.
The predictor 430 may generate a prediction signal based on various prediction methods. For example, the predictor may not only apply intra prediction or inter prediction for prediction of one block, but also may apply intra prediction and inter prediction simultaneously. This may be called combined inter and intra prediction (CIIP). Additionally, the predictor may be based on an intra block copy (IBC) prediction mode or a palette mode for prediction of a block. The IBC prediction mode or palette mode may be used for image/video coding of content such as games, for example, screen content coding (SCC). In IBC, prediction is basically performed within the current picture, but may be performed similarly to inter prediction in that a reference block is derived within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this document. The palette mode may be viewed as an example of intra coding or intra prediction. When the palette mode is applied, information about the palette table and palette index may be included and signaled in the image/video information.
The intra predictor 431 may predict the current block by referencing samples in the current picture. The referenced samples may be located in the neighbor of the current block, or may be located away from the current block, depending on the prediction mode. In intra prediction, prediction modes may include a plurality of non-directional modes and a plurality of directional modes. The intra predictor 431 may determine the prediction mode applied to the current block using the prediction mode applied to the neighboring block.
The inter predictor 432 may derive a predicted block for the current block based on a reference block (reference sample array) specified by a motion vector in the reference picture. At this time, in order to reduce the amount of motion information transmitted in the inter prediction mode, motion information may be predicted in block, subblock, or sample units based on the correlation of motion information between the neighboring block and the current block. The motion information may include a motion vector and a reference picture index. The motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information. In the case of inter prediction, neighboring blocks may include a spatial neighboring block present in the current picture and a temporal neighboring block present in the reference picture. For example, the inter predictor 432 may construct a motion information candidate list based on neighboring blocks and derive a motion vector and/or reference picture index of the current block based on received candidate selection information. Inter prediction may be performed based on various prediction modes, and information about prediction may include information indicating the mode of inter prediction for the current block.
The adder 440 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the acquired residual signal to a prediction signal (predicted block, prediction sample array) output from the predictor (including the inter predictor 432 and/or the intra predictor 431). If there is no residual for a processing target block, such as when skip mode is applied, the predicted block may be used as a reconstruction block.
The adder 440 may be called a reconstructor or a reconstruction block generator. The generated reconstructed signal may be used for intra prediction of a next processing target block in the current picture, may be output after filtering as described later, or may be used for inter prediction of a next picture.
Meanwhile, luma mapping with chroma scaling (LMCS) is applicable in a picture decoding process.
The filter 450 may improve subjective/objective image quality by applying filtering to the reconstructed signal. For example, the filter 450 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and store the modified reconstructed picture in the memory 460, specifically the DPB of the memory 460. Various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, etc.
The (modified) reconstructed picture stored in the DPB of the memory 460 may be used as a reference picture in the inter predictor 432. The memory 460 may store motion information of a block from which motion information in the current picture is derived (or decoded) and/or motion information of blocks in an already reconstructed picture. The stored motion information may be transferred to the inter predictor 432 for use as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks. The memory 460 may store reconstructed samples of reconstructed blocks in the current picture and transfer them to the intra predictor 431.
Meanwhile, the VCM decoder (or feature/feature map decoder) performs a series of procedures such as prediction, inverse transform, and dequantization to decode the feature/feature map, and may basically have the same/similar structure as the image/video decoder 400 described above with reference to
Referring to
The prediction procedure (S510) may be performed by the predictor 320 described above with reference to
Specifically, the intra predictor 322 may predict a current block (that is, a set of current encoding target feature elements) by referencing feature elements in a current feature/feature map. Intra prediction may be performed based on the spatial similarity of feature elements constituting the feature/feature map. For example, feature elements included in the same region of interest (RoI) within an image/video may be estimated to have similar data distribution characteristics. Accordingly, the intra predictor 322 may predict the current block by referencing the already reconstructed feature elements within the region of interest including the current block. At this time, the referenced feature elements may be located adjacent to the current block or may be located away from the current block depending on the prediction mode. Intra prediction modes for feature/feature map encoding may include a plurality of non-directional prediction modes and a plurality of directional prediction modes. The non-directional prediction modes may include, for example, prediction modes corresponding to the DC mode and planar mode of the image/video encoding procedure. Additionally, the directional modes may include prediction modes corresponding to, for example, 33 directional modes or 65 directional modes of an image/video encoding procedure. However, this is an example, and the type and number of intra prediction modes may be set/changed in various ways depending on the embodiment.
The inter predictor 321 may predict the current block based on a reference block (i.e., a set of referenced feature elements) specified by motion information on the reference feature/feature map. Inter prediction may be performed based on the temporal similarity of feature elements constituting the feature/feature map. For example, temporally consecutive features may have similar data distribution characteristics. Accordingly, the inter predictor 321 may predict the current block by referencing the already reconstructed feature elements of features temporally adjacent to the current feature. At this time, motion information for specifying the referenced feature elements may include a motion vector and a reference feature/feature map index. The motion information may further include information about an inter prediction direction (e.g., L0 prediction, L1 prediction, Bi prediction, etc.). In the case of inter prediction, neighboring blocks may include spatial neighboring blocks present within the current feature/feature map and temporal neighboring blocks present within the reference feature/feature map. A reference feature/feature map including a reference block and a reference feature/feature map including a temporal neighboring block may be the same or different. The temporal neighboring block may be referred to as a collocated reference block, etc., and a reference feature/feature map including a temporal neighboring block may be referred to as a collocated feature/feature map. The inter predictor 321 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or reference feature/feature map index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of the skip mode and the merge mode, the inter predictor 321 may use motion information of the neighboring block as motion information of the current block. In the case of the skip mode, unlike the merge mode, the residual signal may not be transmitted. In the case of the motion vector prediction (MVP) mode, the motion vector of the neighboring block is used as a motion vector predictor, and the motion vector of the current block may be indicated by signaling the motion vector difference. The predictor 320 may generate a prediction signal based on various prediction methods in addition to intra prediction and inter prediction described above.
The prediction signal generated by the predictor 320 may be used to generate a residual signal (residual block, residual feature elements) (S520). The residual processing procedure (S520) may be performed by the residual processor 330 described above with reference to
Meanwhile, the feature/feature map encoding procedure may further include a procedure (S530) for encoding information for feature/feature map reconstruction (e.g., prediction information, residual information, partitioning information, etc.) and outputting it in the form of a bitstream, a procedure for generating a reconstructed feature/feature map for the current feature/feature map, and a procedure (optional) for applying in-loop filtering to the reconstructed feature/feature map.
The VCM encoder may derive (modified) residual feature(s) from the quantized transform coefficient(s) through dequantization and inverse transform, and generate a reconstructed feature/feature map based on the predicted feature(s) and (modified) residual feature(s) that are the output of step S510. The reconstructed feature/feature map generated in this way may be the same as the reconstructed feature/feature map generated in the VCM decoder. When an in-loop filtering procedure is performed on the reconstructed feature/feature map, a modified reconstructed feature/feature map may be generated through the in-loop filtering procedure on the reconstructed feature/feature map. The modified reconstructed feature/feature map may be stored in a decoded feature buffer (DFB) or memory and used as a reference feature/feature map in the feature/feature map prediction procedure later. Additionally, (in-loop) filtering-related information (parameters) may be encoded and output in the form of a bitstream. Through the in-loop filtering procedure, noise that may occur during feature/feature map coding may be removed, and feature/feature map-based task performance may be improved. In addition, by performing an in-loop filtering procedure at both the encoder stage and the decoder stage, the identity of the prediction result can be guaranteed, the reliability of feature/feature map coding can be improved, and the amount of data transmission for feature/feature map coding can be reduced.
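As a compact, non-normative sketch of the reconstruction path described above (the transform stage is omitted for brevity, so only quantization distortion is modeled), the reconstructed feature equals the prediction plus the dequantized residual, which is what both the VCM encoder and the VCM decoder would obtain.

# Minimal sketch of the encoder-side reconstruction path: reconstructed
# feature elements = prediction + (quantization-distorted) residual.
import numpy as np

def reconstruct(original, predicted, step):
    residual = original - predicted
    levels = np.round(residual / step)        # quantization (encoder)
    residual_hat = levels * step              # dequantization (encoder and decoder)
    return predicted + residual_hat           # reconstructed feature elements

orig = np.array([1.30, 0.72, -0.15, 2.01])    # current feature elements (example)
pred = np.array([1.25, 0.70, -0.10, 1.90])    # predicted feature elements (example)
print(reconstruct(orig, pred, step=0.05))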
Referring to
Embodiments of the present disclosure propose a method of generating a bitstream related to a prediction process necessary to compress an activation (feature) map generated in a hidden layer of a deep neural network.
Input data input to the deep neural network goes through an operation process of several hidden layers, and the operation results of each hidden layer are output as a feature/feature map having various sizes and channel numbers depending on the type of deep neural network being used and the location of the hidden layer within the deep neural network.
Referring to
An encoding apparatus 720 may compress the output feature map and output it in the form of a bitstream, and a decoding apparatus 730 may reconstruct the (compressed) feature map from the output bitstream. The encoding apparatus 720 may correspond to the encoding unit 12 of
The width, height, and number of channels of an input source (video source) are W, H, and C, respectively, and the width, height, and number of channels of the feature set, which is the output, are W′, H′, and C′. For example, if the input source is RGB, C may be 3. C′ of the output refers to the number of features that make up the feature set, and its value may vary depending on the applied feature extraction method. Generally, C′ is larger than the number of channels C of the input source.
The left side of
The FPN may be widely used in machine tasks such as object detection and object segmentation. Since the size of the object in the image is not fixed, the FPN has a structure that applies a pyramid structure to hierarchically extract feature data to correspond to various sizes. Through this, the FPN may be used to perform accurate machine tasks regardless of the size of the object in the image.
However, in order to perform prediction for a machine task, feature data of a plurality of layers (rectangles indicated by dotted lines) in the pyramid hierarchical structure is required. Therefore, in an FPN and neural networks with a similar structure, not only does the number of channels in a layer increase, but the number of layers through which feature data is transmitted also increases. In other words, in order to perform a machine task using an FPN or a neural network with a similar structure, not just a single piece of feature data but feature data of a plurality of layers must be signaled.
The present disclosure proposes embodiments of generating a compressed bitstream for signaling feature data (feature information) of a plurality of layers in a neural network.
Referring to
The feature encoding apparatus 10 may generate parameter information related to the plurality of layers (S1120), and encode the generated parameter information into a bitstream and signal it (S1130).
The parameter information may include at least one of information about the number of feature channels or information about a quantization parameter (QP). The information about the number of feature channels may indicate the number of feature channels of each layer. The number of feature channels indicated by the information about the number of feature channels may be the number of feature channels dimensionally reduced and transformed. The information about the QP may indicate a QP value or QP difference value applied to each layer.
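Purely as a hypothetical illustration of what such parameter information could carry per layer (the container, field names, and values below are assumptions, not the disclosed syntax), a simple structure might look as follows.

# Hypothetical sketch of per-layer parameter information: number of reduced
# feature channels and a QP difference for each layer.
from dataclasses import dataclass
from typing import List

@dataclass
class LayerParams:
    num_reduced_channels: int   # channels after dimension reduction transform
    qp_delta: int               # QP difference applied to this layer

@dataclass
class FeatureParameterInfo:
    base_qp: int
    layers: List[LayerParams]

params = FeatureParameterInfo(
    base_qp=32,
    layers=[LayerParams(16, -2),   # e.g., a layer with high influence on task performance
            LayerParams(8, 0),
            LayerParams(8, +2)],   # e.g., a layer with low influence on task performance
)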
Referring to
When feature information is extracted using an FPN or a neural network with a similar structure, the feature information may be divided into a plurality of layers, and the parameter information may be information related to the plurality of layers.
For example, the parameter information may include at least one of information about the number of feature channels or information about a quantization parameter (QP). The information about the number of feature channels may indicate the number of feature channels of each layer. The number of feature channels indicated by the information about the number of feature channels may be the number of feature channels dimensionally reduced and transformed. The information about the QP may indicate a QP value or QP difference value applied to each layer.
The feature decoding apparatus 20 may reconstruct feature information based on the parameter information (S1220). For example, the feature decoding apparatus 20 may reconstruct the dimension of the feature information based on information about the number of feature channels. As another example, the feature decoding apparatus 20 may reconstruct the feature information by performing dequantization based on the information about the QP.
Hereinafter, specific embodiments proposed through the present disclosure will be described. Specific embodiments proposed through the present disclosure include 1) a method of compressing feature information of a plurality of layers from a high-dimensional feature channel to a low-dimensional feature channel for each single image and reconstructing it back to an original-dimensional (high-dimensional) feature channel; 2) a method of determining the importance of each layer according to a degree of error propagation considering a hierarchical structure and compressing the feature information of each layer into different numbers of feature channels according to the determined importance; and 3) a method of compressing feature information of a layer with high importance to relatively high quality and compressing feature information of a layer with low importance to relatively low quality.
Even if feature information is obtained from the same neural network, the number of channels after reduction that is efficient may differ for each image or for each frame within a video. However, in a conventional feature channel number compression method (dimension reduction transform method), the same number of channels after reduction is applied to all images. Therefore, according to the conventional method, the same number of channels after reduction is applied to all images for feature information extracted from the same neural network, which may reduce efficiency.
Embodiment 1 is a method of applying the number of channels after reduction in units of images or frames.
The feature information of the C0 to C2 layers may correspond to an intermediate result of transforming the feature information of the S0 to S2 layers into feature information with the same n channels using a convolution network (Conv).
Ultimately, in order to perform a machine task, feature information of three layers, P0 to P2, may be required. The feature information of the P2 layer may be derived to be the same value as the feature information of the C2 layer. The feature information of the P1 layer may be derived to be the sum of the value obtained by up-sampling the feature information of the C2 layer to twice the size horizontally and vertically and the feature information of the C1 layer. The feature information of the P0 layer may be derived to be the sum of the value obtained by up-sampling the feature information of the C1 layer to twice the size horizontally and vertically and the feature information of the C0 layer.
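As a minimal, non-limiting sketch of the derivation described above, assuming NumPy arrays of shape (n, H, W) and plain nearest-neighbor up-sampling (the function names are illustrative and not part of the disclosure):

```python
import numpy as np

def upsample_2x(x: np.ndarray) -> np.ndarray:
    # Nearest-neighbor up-sampling to twice the size horizontally and vertically.
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def derive_p_layers(c0, c1, c2):
    # c0, c1, c2: feature tensors of shape (n, H_i, W_i); C2 is the smallest layer,
    # and each lower layer is twice the spatial size of the layer above it.
    p2 = c2                     # P2 is derived to be the same value as C2.
    p1 = upsample_2x(c2) + c1   # P1 = 2x up-sampled C2 + C1.
    p0 = upsample_2x(c1) + c0   # P0 = 2x up-sampled C1 + C0.
    return p0, p1, p2
```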
In general, the feature information of a layer may consist of a larger number of channels than the original input image. The feature information of the P0 to P2 layers may also have a larger number n of channels than the original input image. In this case, if the dimension of the feature information of the P0 to P2 layers is reduced to a low dimension using a dimension reduction transform method such as PCA (principal component analysis), the feature information may be encoded with a relatively small amount of bits.
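A minimal sketch of channel-wise dimension reduction and reconstruction using PCA is shown below; the use of scikit-learn, the (n, H, W) tensor layout, and the function names are assumptions for illustration only. In an actual codec, the transform basis (and mean) used for reconstruction would also have to be available at the decoder side, which is omitted here.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_channels(feature: np.ndarray, n_reduced: int):
    """Reduce an (n, H, W) feature tensor to (n_reduced, H, W) channels with PCA."""
    n, h, w = feature.shape
    samples = feature.reshape(n, h * w).T      # one n-dimensional sample per spatial position
    pca = PCA(n_components=n_reduced)
    reduced = pca.fit_transform(samples)       # (h*w, n_reduced)
    return reduced.T.reshape(n_reduced, h, w), pca

def reconstruct_channels(reduced: np.ndarray, pca: PCA) -> np.ndarray:
    """Approximately reconstruct the original-dimensional (n, H, W) feature."""
    n_reduced, h, w = reduced.shape
    samples = reduced.reshape(n_reduced, h * w).T
    restored = pca.inverse_transform(samples)  # (h*w, n)
    return restored.T.reshape(-1, h, w)
```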
Referring to
Referring to
In general, in PCA and dimension reduction transform corresponding thereto, the larger the number of channels (n′ in
For example, in
On the other hand, as the number of channels after dimension reduction transform increases, the amount of information that must be encoded increases. For example, assuming that the number of channels of the original feature information is n=64, the size of the bitstream generated through the feature encoding apparatus 10 increases in the case of dimension reduction transform with n′=16 rather than the case of dimension reduction transform with n′=8.
An example of signaling information about the number of feature channels according to Embodiment 1 is shown in Table 1.
In Table 1, Picture_Channels may correspond to information about the number of feature channels. Picture_Channels may specify the number of channels after dimension reduction transform commonly applied to a plurality of layers. That is, Picture_Channels may represent the same value for the plurality of layers.
Picture_Channels may be encoded and obtained through the picture level of a bitstream. For example, as expressed in Table 1, Picture_Channels may be signaled and obtained through a feature picture parameter set (Feature_Picture_Parameter_set( )) level of the bitstream.
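Since Table 1 itself is not reproduced here, the following is only an illustrative sketch of a parsing routine consistent with the semantics described above; the bitstream reader interface (read_ue()) and the use of Exp-Golomb coding are assumptions, not part of the disclosure.

```python
def parse_feature_picture_parameter_set(reader):
    """Illustrative parsing of the picture-level syntax described for Embodiment 1.

    `reader` is a hypothetical bitstream reader; read_ue() is assumed to read one
    unsigned Exp-Golomb-coded syntax element.
    """
    fpps = {}
    # Number of channels after dimension reduction transform, commonly applied to
    # the plurality of layers of the current picture (image/frame).
    fpps["Picture_Channels"] = reader.read_ue()
    return fpps
```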
Therefore, the dimension reduction transform method and the dimension reconstruction transform method of Embodiment 1 may be applied in units of images or frames. That is, according to Embodiment 1, a different number of channels after dimension reduction may be applied to each image or frame. Therefore, according to Embodiment 1, since an efficient number of channels after reduction may be applied to each image or frame, efficiency can be improved.
As described above, in a neural network for performing a machine task, there may be cases where a plurality of layers must be encoded simultaneously, and in general, the layers may have an increased number of channels compared to the original. Therefore, there is a need to transform feature information in a plurality of layers into a reduced number of channels through dimension reduction transform and then encode it. On the other hand, as the number of channels after dimension reduction transform increases, the error after dimension reconstruction transform decreases, but the amount of bits in a bitstream may increase.
Therefore, by considering the effect of the layers on the machine task and adaptively setting the number of channels after dimension reduction transform of the layers, the accuracy of the machine task can be improved while maintaining the amount of bits in the bitstream.
However, the conventional method uniformly applied the same number of channels after reduction transform regardless of importance when encoding a plurality of feature information. According to this conventional method, encoding efficiency may be reduced.
Embodiment 2 is a method of adaptively setting the number of channels after dimension reduction transform according to the importance of layers. Specifically, Embodiment 2 is a method of performing machine tasks more accurately with the same bitstream size, by allocating a larger number of channels after dimension reduction transform to important layers that have more influence on the performance of the machine task, and a smaller number of channels after dimension reduction transform to layers that have less influence on the performance of the machine task.
Referring to
Therefore, it can be seen that error resulting from dimension reduction/reconstruction transform of feature information of the C2 layer will have more influence on the accuracy of the machine task than error resulting from dimension reduction/reconstruction transform of feature information of other layers. In the same concept, it can be seen that the error resulting from the dimension reduction/reconstruction transform of the feature information of the C1 layer will have more influence on the accuracy of the machine task than the error resulting from the dimension reduction/reconstruction transform of the feature information of the C0 layer. Ultimately, the importance of the layers may be determined in the following order: C2 layer, C1 layer, and C0 layer.
Referring to
Since the importance of the C0 to C2 layers is in the order of C2 layer, C1 layer, and C0 layer, the number of channels after dimension reduction transform of C′0 to C′2 layers may have the relationship of n′0<n′1<n′2. This is the result of allocating a larger number of channels after dimension reduction transform to the feature information of a layer with high importance.
Referring to
Examples of signaling information about the number of feature channels according to Embodiment 2 are shown in Tables 2 and 3. Table 2 shows information signaled in picture units, and Table 3 shows information signaled in each layer unit.
In Table 2, Picture_Channels may specify the number of channels after dimension reduction transform commonly applied to a plurality of layers of the picture. Picture_Channels may represent the same value for the plurality of layers.
Picture_LayerChannels_flag may specify whether a layer (or layer feature data) with a number of channels after dimension reduction transform that is different from Picture_Channels is present in the picture. That is, Picture_LayerChannels_flag may specify whether a layer with a number of feature channels that is different from the value of Picture_Channels is present among a plurality of layers. Picture_LayerChannels_flag may be referred to as second channel information.
Picture_Num_Layers may specify the number of layers in a picture. LayerChannels_flag may specify whether a current layer has a number of channels after dimension reduction transform that is different from Picture_Channels. LayerChannels_flag may be encoded and obtained when Picture_LayerChannels_flag specifies that ‘a layer with a number of feature channels different from the value of Picture_Channels is present in a plurality of layers.’ LayerChannels_flag may be referred to as first channel information.
LayerChannels may indicate the number of channels after dimension reduction transform of the current layer. LayerChannels may be encoded and obtained when Picture_LayerChannels_flag specifies that ‘a layer with a number of feature channels different from the value of Picture_Channels is present among a plurality of layers’ and LayerChannels_flag specifies that ‘the current layer has a number of channels after dimension reduction transform that is different from Picture_Channels’.
LayerChannels_flag and LayerChannels may be included and signaled within Feature_Picture_Parameter_set, or may be signaled separately.
Referring to
The feature encoding apparatus 10 may determine whether a layer with a different number of channels is present (S1620). As a result, if a layer with a different number of channels is not present, Picture_LayerChannels_flag (second channel information) of a second value may be encoded (S1630). In contrast, if a layer with a different number of channels is present, Picture_LayerChannels_flag of a first value may be encoded (S1640).
Picture_LayerChannels_flag of the second value may indicate that the layer with the different number of channels is not present and Picture_LayerChannels_flag of the first value may indicate that the layer with the different number of channels is present.
The feature encoding apparatus 10 may determine whether the number of channels of a current layer is different (S1650). The current layer may be included in the plurality of layers and may be referred to as a first layer. If the number of channels of the current layer is not different, LayerChannels_flag (first channel information) of a second value may be encoded (S1660). In contrast, when the number of channels of the current layer is different, LayerChannels_flag of a first value and LayerChannels may be encoded (S1670). LayerChannels_flag of the second value may indicate that the number of channels of the current layer is not different, and LayerChannels_flag of the first value may indicate that the number of channels of the current layer is different.
Since LayerChannels is encoded when the number of channels of the current layer is different (LayerChannels_flag of the first value), LayerChannels (information about the number of feature channels of the first layer) may indicate a value different from Picture_Channels (information about the number of feature channels of the second layer).
The feature encoding apparatus 10 may determine whether encoding of the number of channels of all layers has been completed (S1680). If encoding of the number of channels of all layers has not been completed, the feature encoding apparatus 10 proceeds to a next layer (S1690) and repeatedly performs steps S1650 to S1680, thereby completing encoding of the number of channels of all layers.
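A minimal sketch of this encoding flow (steps S1610 to S1690) is given below. The bitstream writer interface (write_ue(), write_flag()), the use of Exp-Golomb coding, and the convention that 1/0 stand for the 'first'/'second' values are assumptions for illustration only.

```python
def encode_channel_info(writer, picture_channels, layer_channels):
    """Illustrative sketch of steps S1610 to S1690 (Embodiment 2).

    `writer` is a hypothetical bitstream writer; `layer_channels` is a list of the
    per-layer numbers of channels after dimension reduction transform.
    """
    writer.write_ue(picture_channels)                  # S1610: Picture_Channels

    # S1620-S1640: signal whether any layer uses a different number of channels.
    any_different = any(ch != picture_channels for ch in layer_channels)
    writer.write_flag(1 if any_different else 0)       # Picture_LayerChannels_flag
    if not any_different:
        return

    for ch in layer_channels:                          # S1650-S1690: per-layer loop
        if ch == picture_channels:
            writer.write_flag(0)                       # LayerChannels_flag (second value)
        else:
            writer.write_flag(1)                       # LayerChannels_flag (first value)
            writer.write_ue(ch)                        # LayerChannels
```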
Referring to
The feature decoding apparatus 20 may determine whether a layer with a different number of channels is present based on the value of Picture_LayerChannels_flag (S1720). Picture_LayerChannels_flag of a second value may indicate that the layer with the different number of channels is not present, and Picture_LayerChannels_flag of a first value may indicate that the layer with the different number of channels is present.
The feature decoding apparatus 20 may obtain LayerChannels_flag from the bitstream when Picture_LayerChannels_flag indicates a first value (S1730). LayerChannels_flag of a second value may indicate that the number of channels of the current layer is not different, and LayerChannels_flag of a first value may indicate that the number of channels of the current layer is different.
The feature decoding apparatus 20 may determine whether the number of channels of a current layer is different based on LayerChannels_flag (S1740). The feature decoding apparatus 20 may obtain LayerChannels from the bitstream when LayerChannels_flag indicates the first value (S1750).
The feature decoding apparatus 20 may determine whether obtaining of the number of channels of all layers has been completed when LayerChannels_flag indicates the second value in step S1740 or when LayerChannels is obtained in step S1750 (S1760). If obtaining of the number of channels of all layers has not been completed, the feature decoding apparatus 20 may proceed to a next layer (S1770) and repeatedly perform steps S1740 to S1760, thereby completing obtaining of the number of channels of all layers.
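A corresponding sketch of the decoding flow (steps S1710 to S1770) might look as follows, under the same assumptions about the hypothetical bitstream reader interface.

```python
def decode_channel_info(reader, num_layers):
    """Illustrative sketch of the decoding flow S1710 to S1770 (Embodiment 2).

    `num_layers` corresponds to Picture_Num_Layers; `reader` is a hypothetical
    bitstream reader.
    """
    picture_channels = reader.read_ue()                # S1710: Picture_Channels
    layer_channels = [picture_channels] * num_layers

    if reader.read_flag() == 1:                        # S1720: Picture_LayerChannels_flag
        for i in range(num_layers):                    # S1730-S1770: per-layer loop
            if reader.read_flag() == 1:                # S1740: LayerChannels_flag
                layer_channels[i] = reader.read_ue()   # S1750: LayerChannels
    return picture_channels, layer_channels
```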
In a neural network for performing image-related machine tasks, feature information of a plurality of layers dimensionally reduced and transformed is finally transformed into a bitstream through an encoder. In this case, a QP value must be set as a parameter for encoding.
If the importance of each layer is different depending on the influence on the machine task, the accuracy of the machine task can be improved while maintaining the amount of bits in the bitstream, by adaptively setting the QP values of the layers according to the importance.
However, the conventional method related to setting of the QP value uniformly applies the same QP value regardless of importance when encoding a plurality of feature information. According to this conventional method, encoding efficiency may be reduced.
Embodiment 3 is a method of setting different QP values according to importance rather than setting the same QP value when encoding feature information of a plurality of layers. Specifically, Embodiment 3 is a method of performing machine tasks more accurately with the same bitstream size by allocating smaller QP values to important layers that have more influence on the performance of the machine task and allocating larger QP values to layers that have less influence on the performance of the machine task.
Referring to
As in the example described in Embodiment 2, assuming that the importance of feature information of C′0 to C′2 layers is in the order of C′2 layer, C′1 layer, and C′0 layer, the QP value of each layer may be set as q0>q1>q2. This may mean that the feature information of the C′2 layer consumes more bits when encoded than the feature information of the C′1 layer, but a difference from the original is smaller when reconstructed. In addition, this may mean that the feature information of the C′1 layer consumes more bits when encoded than the feature information of the C′0 layer, but a difference from the original is smaller when reconstructed.
Therefore, according to Embodiment 3, since the feature information of the more important layer has a smaller error compared to the original feature information, a higher machine task accuracy is achieved at the same amount of bits, compared to the conventional method of applying the same QP value to all layers.
Examples of signaling information about QP according to Embodiment 3 are shown in Tables 4 to 6. Table 4 shows information signaled in picture units, and Tables 5 and 6 show information signaled in each layer unit. Table 5 shows an example of signaling the QP value applied to the layer, and Table 6 shows an example of signaling the QP difference value.
In Table 4, Picture_InitQP may indicate a basic QP value applied to a plurality of layers (or layer feature data) in the picture. That is, Picture_InitQP may represent the same value for the plurality of layers.
Picture_LayerQP_flag may indicate whether a layer (or layer feature data) with a QP value different from Picture_InitQP is present in the picture. That is, Picture_LayerQP_flag may indicate whether a layer with a QP value different from the value of Picture_InitQP is present among the plurality of layers. Picture_LayerQP_flag may be referred to as second QP information.
Picture_Num_Layers may indicate the number of layers in a picture. LayerQP_flag may indicate whether a current layer has a QP value different from Picture_InitQP. LayerQP_flag may be encoded and obtained when Picture_LayerQP_flag indicates that ‘a layer with a QP value different from the value of Picture_InitQP is present among a plurality of layers’. LayerQP_flag may be referred to as first QP information.
LayerQP may indicate the QP value of the current layer. LayerQP may be encoded and obtained when Picture_LayerQP_flag indicates that ‘a layer with a QP value different from the value of Picture_InitQP is present among a plurality of layers’ and LayerQP_flag indicates that ‘the current layer has a QP value different from Picture_InitQP’.
LayerQPDelta_flag may indicate whether the current layer is a layer to which a QP value different from Picture_InitQP is applied and for which a QP difference value is signaled. LayerQPDelta_flag may be encoded and obtained when Picture_LayerQP_flag indicates that ‘a layer with a QP value different from the value of Picture_InitQP is present among a plurality of layers’. LayerQPDelta_flag may be referred to as first QP information.
LayerQPDelta may represent a difference between the QP value of the current layer and Picture_InitQP, that is, the QP difference value. LayerQPDelta may be encoded and obtained when Picture_LayerQP_flag indicates ‘a layer with a QP value different from the value of Picture_InitQP is present among a plurality of layers’ and LayerQPDelta_flag indicates ‘the current layer corresponds to a layer for which the QP difference value is signaled’.
LayerChannels_flag, LayerChannels, LayerQPDelta_flag, and LayerQPDelta may be included and signaled within Feature_Picture_Parameter_set, or may be signaled separately.
Referring to
The feature encoding apparatus 10 may determine whether a layer with a different QP value is present (S1920). As a result, if a layer with a different QP value is not present, Picture_LayerQP_flag (second QP information) of a second value may be encoded (S1930). In contrast, if a layer with a different QP value is present, Picture_LayerQP_flag of a first value may be encoded (S1940). Picture_LayerQP_flag of the second value may indicate that a layer with a different QP value is not present, and Picture_LayerQP_flag of the first value may indicate that a layer with a different QP value is present.
The feature encoding apparatus 10 may determine whether the QP value of a current layer is different (S1950). The current layer may be included in a plurality of layers and may be referred to as a first layer. If the QP value of the current layer is not different, LayerQP_flag/LayerQPDelta_flag (first QP information) of a second value may be encoded (S1960). In contrast, when the QP value of the current layer is different, LayerQP_flag/LayerQPDelta_flag of a first value and LayerQP/LayerQPDelta may be encoded (S1970).
LayerQP_flag/LayerQPDelta_flag of the second value may indicate that the QP value of the current layer is not different, and LayerQP_flag/LayerQPDelta_flag of the first value may indicate that the QP value of the current layer is different.
Since LayerQP/LayerQPDelta is encoded when the QP value of the current layer is different (LayerQP_flag/LayerQPDelta_flag of the first value), LayerQP/LayerQPDelta (information about the QP of the first layer) may indicate a value different from Picture_InitQP (information about the QP of the second layer).
The feature encoding apparatus 10 may determine whether information about the QPs of all layers has been encoded (S1980). If the information about the QP of some layers has not been encoded, the feature encoding apparatus 10 proceeds to a next layer (S1990) and repeatedly performs the steps S1950 to S1980, thereby encoding the information about the QP of all layers.
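A minimal sketch of this encoding flow (steps S1910 to S1990) is shown below; the hypothetical writer interface (write_ue(), write_se(), write_flag()), the `use_delta` switch between the LayerQP and LayerQPDelta variants, and the 1/0 convention for the 'first'/'second' values are assumptions for illustration only.

```python
def encode_qp_info(writer, picture_init_qp, layer_qps, use_delta=True):
    """Illustrative sketch of steps S1910 to S1990 (Embodiment 3).

    `use_delta` selects between signaling LayerQP directly (Table 5) and signaling
    LayerQPDelta, the difference from Picture_InitQP (Table 6).
    """
    writer.write_ue(picture_init_qp)                   # S1910: Picture_InitQP

    any_different = any(qp != picture_init_qp for qp in layer_qps)
    writer.write_flag(1 if any_different else 0)       # S1920-S1940: Picture_LayerQP_flag
    if not any_different:
        return

    for qp in layer_qps:                               # S1950-S1990: per-layer loop
        if qp == picture_init_qp:
            writer.write_flag(0)                       # LayerQP_flag / LayerQPDelta_flag (second value)
        else:
            writer.write_flag(1)                       # LayerQP_flag / LayerQPDelta_flag (first value)
            if use_delta:
                writer.write_se(qp - picture_init_qp)  # LayerQPDelta
            else:
                writer.write_ue(qp)                    # LayerQP
```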
Referring to
The feature decoding apparatus 20 may determine whether a layer with a different QP value is present based on the value of Picture_LayerQP_flag (S2020). Picture_LayerQP_flag of a second value may indicate that a layer with a different QP value is not present, and Picture_LayerQP_flag of a first value may indicate that a layer with a different QP value is present.
The feature decoding apparatus 20 may obtain LayerQP_flag/LayerQPDelta_flag from the bitstream when Picture_LayerQP_flag indicates a first value (S2030). LayerQP_flag/LayerQPDelta_flag of a second value may indicate that the QP value of the current layer is not different, and LayerQP_flag/LayerQPDelta_flag of a first value may indicate that the QP value of the current layer is different.
The feature decoding apparatus 20 may determine whether the QP value of the current layer is different based on LayerQP_flag/LayerQPDelta_flag (S2040). The feature decoding apparatus 20 may obtain LayerQP/LayerQPDelta from the bitstream when LayerQP_flag/LayerQPDelta_flag indicates a first value (S2050).
The feature decoding apparatus 20 may determine whether information about the QPs of all layers has been obtained, when LayerQP_flag/LayerQPDelta_flag indicates the second value in step S2040 or when LayerQP/LayerQPDelta is obtained in step S2050 (S2060). If a layer for which information about the QP has not been obtained is present, the feature decoding apparatus 20 proceeds to a next layer (S2070) and repeatedly performs steps S2040 to S2060, thereby obtaining information about the QPs of all layers.
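A corresponding sketch of the decoding flow (steps S2010 to S2070) might look as follows, under the same assumptions about the hypothetical reader interface.

```python
def decode_qp_info(reader, num_layers, use_delta=True):
    """Illustrative sketch of the decoding flow S2010 to S2070 (Embodiment 3)."""
    picture_init_qp = reader.read_ue()                 # S2010: Picture_InitQP
    layer_qps = [picture_init_qp] * num_layers

    if reader.read_flag() == 1:                        # S2020: Picture_LayerQP_flag
        for i in range(num_layers):                    # S2030-S2070: per-layer loop
            if reader.read_flag() == 1:                # S2040: LayerQP_flag / LayerQPDelta_flag
                if use_delta:
                    layer_qps[i] = picture_init_qp + reader.read_se()  # LayerQPDelta
                else:
                    layer_qps[i] = reader.read_ue()                    # LayerQP
    return picture_init_qp, layer_qps
```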
Embodiment 4 is a method of differently setting the number of channels after the dimension reduction transform of the layers as described in Embodiment 2 and differently setting the QP values of the layers when encoding the feature information dimensionally reduced and transformed as described in Embodiment 3, according to the importance influencing the machine task.
In Embodiments 2 and 3, only examples were described in which a relatively large number of channels after reduction transform and a relatively small QP value are applied to a layer with relatively large importance. However, depending on the embodiments, a relatively small number of channels after dimension reduction transform and a relatively small QP value may be applied to a layer with relatively large importance. Additionally, depending on embodiments, a relatively large number of channels after dimension reduction transform and a relatively large QP value may be applied to a layer with relatively large importance.
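Purely as an illustration of the first of these variants, a per-layer allocation under Embodiment 4 might look as in the following sketch; all numbers are hypothetical.

```python
# Hypothetical per-layer allocation under Embodiment 4: more important layers receive
# more channels after dimension reduction transform and a smaller QP. Values are
# illustrative only and not part of the disclosure.
layer_config = {
    "C'2": {"channels_after_reduction": 32, "qp": 30},  # most important layer
    "C'1": {"channels_after_reduction": 16, "qp": 34},
    "C'0": {"channels_after_reduction": 8,  "qp": 38},  # least important layer
}
```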
Examples of signaling of information about the number of feature channels and information about a QP according to Embodiment 4 are shown in Tables 7 to 9. Table 7 shows information signaled in picture units, and Tables 8 and 9 show information signaled in each layer unit. Table 8 shows an example of signaling the QP value applied to the layer, and Table 9 shows an example of signaling a QP difference value.
The semantics of each syntax expressed in Tables 7 to 9 may be the same as the semantics of the syntaxes expressed in Tables 2 to 6. Additionally, the encoding and obtaining conditions of each syntax represented in Tables 7 to 9 may be the same as the encoding and obtaining conditions of the syntaxes represented in Tables 2 to 6.
The feature encoding method according to Embodiment 4 may be the same as a combination of the feature encoding method of
While the exemplary methods of the present disclosure described above are represented as a series of operations for clarity of description, it is not intended to limit the order in which the steps are performed, and the steps may be performed simultaneously or in a different order as necessary. In order to implement the method according to the present disclosure, the described steps may further include additional steps, may include only some of the described steps while excluding the others, or may exclude some of the described steps while including additional steps.
In the present disclosure, the image encoding apparatus or the image decoding apparatus that performs a predetermined operation (step) may perform an operation (step) of confirming an execution condition or situation of the corresponding operation (step). For example, if it is described that a predetermined operation is performed when a predetermined condition is satisfied, the image encoding apparatus or the image decoding apparatus may perform the predetermined operation after determining whether the predetermined condition is satisfied.
The various embodiments of the present disclosure are not a list of all possible combinations and are intended to describe representative aspects of the present disclosure, and the matters described in the various embodiments may be applied independently or in combination of two or more.
Embodiments described in the present disclosure may be implemented and performed on a processor, microprocessor, controller, or chip. For example, the functional units shown in each drawing may be implemented and performed on a computer, processor, microprocessor, controller, or chip. In this case, information for implementation (e.g., information on instructions) or algorithm may be stored in a digital storage medium.
In addition, the decoder (decoding apparatus) and the encoder (encoding apparatus), to which the embodiment(s) of the present disclosure are applied, may be included in a multimedia broadcasting transmission and reception device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video chat device, a real time communication device such as video communication, a mobile streaming device, a storage medium, a camcorder, a video on demand (VoD) service providing device, an OTT video (over the top video) device, an Internet streaming service providing device, a three-dimensional (3D) video device, an augmented reality (AR) device, a video telephony video device, a transportation terminal (e.g., a vehicle (including autonomous vehicle) terminal, a robot terminal, an airplane terminal, a ship terminal, etc.), a medical video device, and the like, and may be used to process video signals or data signals. For example, the OTT video devices may include a game console, a Blu-ray player, an Internet access TV, a home theater system, a smartphone, a tablet PC, a digital video recorder (DVR), or the like.
Additionally, a processing method to which the embodiment(s) of the present disclosure is applied may be produced in the form of a program executed by a computer and stored in a computer-readable recording medium. Multimedia data having a data structure according to the embodiment(s) of this document may also be stored in a computer-readable recording medium. Computer-readable recording media include all types of storage devices and distributed storage devices that store computer-readable data. Computer-readable recording media include, for example, Blu-ray Disc (BD), Universal Serial Bus (USB), ROM, PROM, EPROM, EEPROM, RAM, CD-ROM, magnetic tape, floppy disk, and optical data storage device. Additionally, computer-readable recording media include media implemented in the form of carrier waves (e.g., transmission via the Internet). Additionally, the bitstream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
Additionally, the embodiment(s) of the present disclosure may be implemented as a computer program product by program code, and the program code may be executed on a computer by the embodiment(s) of the present disclosure. The program code may be stored on a carrier readable by a computer.
Referring to
The encoding server compresses contents input from multimedia input devices such as a smartphone, a camera, a camcorder, etc. into digital data to generate a bitstream and transmits the bitstream to the streaming server. As another example, when the multimedia input devices such as smartphones, cameras, camcorders, etc. directly generate a bitstream, the encoding server may be omitted.
The bitstream may be generated by an image encoding method or an image encoding apparatus, to which the embodiment of the present disclosure is applied, and the streaming server may temporarily store the bitstream in the process of transmitting or receiving the bitstream.
The streaming server transmits the multimedia data to the user device based on a user's request through the web server, and the web server serves as a medium for informing the user of a service. When the user requests a desired service from the web server, the web server may deliver it to a streaming server, and the streaming server may transmit multimedia data to the user. In this case, the contents streaming system may include a separate control server. In this case, the control server serves to control a command/response between devices in the contents streaming system.
The streaming server may receive contents from a media storage and/or an encoding server. For example, when the contents are received from the encoding server, the contents may be received in real time. In this case, in order to provide a smooth streaming service, the streaming server may store the bitstream for a predetermined time.
Examples of the user device may include a mobile phone, a smartphone, a laptop computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, a slate PC, a tablet PC, an ultrabook, a wearable device (e.g., a smartwatch, smart glasses, a head mounted display), a digital TV, a desktop computer, digital signage, and the like.
Each server in the contents streaming system may be operated as a distributed server, in which case data received from each server may be distributed.
Referring to
In an embodiment, the analysis server may perform a task requested by the user terminal after decoding the encoded information received from the user terminal (or from the encoding server). At this time, the analysis server may transmit the result obtained through the task performance back to the user terminal or may transmit it to another linked service server (e.g., a web server). For example, the analysis server may transmit a result obtained by performing a task of determining a fire to a fire-related server. In this case, the analysis server may include a separate control server, and the control server may serve to control a command/response between the devices associated with the analysis server. In addition, the analysis server may request desired information from the web server based on the task to be performed by the user device and information about the tasks that can be performed. When the analysis server requests a desired service from the web server, the web server transmits it to the analysis server, and the analysis server may transmit data to the user terminal. In this case, the control server of the contents streaming system may serve to control a command/response between devices in the streaming system.
The embodiments of the present disclosure may be used to encode or decode a feature/feature map.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/KR2022/021032 | 12/22/2022 | WO | |