The present disclosure generally relates to video processing, and more particularly, to lightweight spatial upsampling models used for processing video data suitable for machine vision tasks.
With the rise of machine learning technologies and machine vision applications, the amount of videos and images (collectively referred to as “image data”) consumed by machines has been rapidly growing. Typical use cases include autonomous driving, intelligent transportation, smart city, intelligent content management, etc., which incorporate machine vision tasks such as object detection, instance segmentation, and object tracking.
Due to the large volume of image data required by machine vision tasks, it is essential to compress the image data for efficient transmission and storage. However, conventional image/video compression techniques have focused on ensuring the image/video quality perceived by humans, while machines consume and understand visual data differently from humans. As a result, image/video compression techniques suitable for machine vision can differ from conventional ones. New compression techniques are therefore needed to achieve optimized performance for machine usage.
The present disclosure provides techniques for performing lightweight spatial upsampling for machine vision tasks. Specifically, the disclosed embodiments provide a method for decoding a bitstream to output one or more pictures for a video stream, a method for encoding a video sequence into a bitstream, and a non-transitory computer readable storage medium storing a bitstream of a video.
In some embodiments of the present disclosure, there is provided a method for decoding a bitstream to output one or more pictures for a video stream, including: receiving a bitstream; and decoding, using coded information of the bitstream, one or more pictures, wherein the decoding includes: generating one or more decompressed pictures by decompressing one or more compressed pictures included in the bitstream; and performing spatial upsampling on the one or more decompressed pictures by a spatial upsampling model to obtain one or more reconstructed pictures, respectively, wherein a total length of coding bits of parameters of the spatial upsampling model is less than a threshold that is pre-determined based on a desired quality of the reconstructed pictures.
In some embodiments of the present disclosure, there is provided a method for encoding a video sequence into a bitstream, including: receiving a video sequence; encoding one or more pictures of the video sequence; and generating a bitstream associated with the one or more pictures, wherein the encoding includes: performing spatial downsampling on the one or more pictures by a spatial downsampling model to obtain one or more downsampled pictures, respectively; generating one or more compressed pictures by compressing the one or more downsampled pictures; and generating parameters for a spatial upsampling model for decoding the one or more compressed pictures, wherein a total length of coding bits of the parameters of the spatial upsampling model is less than a threshold that is pre-determined based on a desired quality of reconstructed pictures of the one or more pictures.
In some embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium that stores a bitstream generated by operations including: generating one or more decompressed pictures by decompressing one or more compressed pictures included in the bitstream; and performing spatial upsampling on the one or more decompressed pictures by a spatial upsampling model to obtain one or more reconstructed pictures, respectively, wherein a total length of coding bits of parameters of the spatial upsampling model is less than a threshold that is pre-determined based on a desired quality of the reconstructed pictures.
Embodiments and various aspects of the present disclosure are illustrated in the following detailed description and the accompanying figures. Various features shown in the figures are not drawn to scale.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims. Particular aspects of the present disclosure are described in greater detail below. The terms and definitions provided herein control, if in conflict with terms and/or definitions incorporated by reference.
The present disclosure is directed to “Video Coding for Machines” (VCM), which aims at compressing input videos and images or feature maps for machine vision tasks. The disclosed techniques are suitable for compressing image data used by any machine vision task, such as object recognition and tracking, face recognition, image/video search, mobile augmented reality (MAR), autonomous vehicles, Internet of Things (IoT), image matching, 3-dimensional structure construction, stereo correspondence, motion tracking, etc.
As shown in FIG. 1, source device 120 provides encoded bitstream 162 to destination device 140 via communication medium 160.
Referring to FIG. 1, source device 120 may include image/video preprocessor 122, image/video encoder 124, and output interface 126, and destination device 140 may include input interface 142, image/video decoder 144, and machine vision applications 146.
More specifically, source device 120 may further include various devices (not shown) for providing source image data to be preprocessed by image/video preprocessor 122. The devices for providing the source image data may include an image/video capture device, such as a camera, an image/video archive or storage device containing previously captured images/videos, or an image/video feed interface to receive images/videos from an image/video content provider.
Image/video encoder 124 and image/video decoder 144 each may be implemented as any of a variety of suitable encoder or decoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware, or any combinations thereof. When the encoding or decoding is implemented partially in software, image/video encoder 124 or image/video decoder 144 may store instructions for the software in a suitable, non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques consistent with this disclosure. Each of image/video encoder 124 or image/video decoder 144 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective device.
Image/video encoder 124 and image/video decoder 144 may operate according to any video coding standard, such as Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), Versatile Video Coding (VVC), AOMedia Video 1 (AV1), Joint Photographic Experts Group (JPEG), Moving Picture Experts Group (MPEG), etc. Alternatively, image/video encoder 124 and image/video decoder 144 may be customized devices that do not comply with the existing standards.
Output interface 126 may include any type of medium or device capable of transmitting encoded bitstream 162 from source device 120 to destination device 140. For example, output interface 126 may include a transmitter or a transceiver configured to transmit encoded bitstream 162 from source device 120 directly to destination device 140 in real-time. Encoded bitstream 162 may be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to destination device 140.
Communication medium 160 may include transient media, such as a wireless broadcast or wired network transmission. For example, communication medium 160 may include a radio frequency (RF) spectrum or one or more physical transmission lines (e.g., a cable). Communication medium 160 may form part of a packet-based network, such as a local area network, a wide-area network, or a global network such as the Internet. In some embodiments, communication medium 160 may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 120 to destination device 140. For example, a network server (not shown) may receive encoded bitstream 162 from source device 120 and provide encoded bitstream 162 to destination device 140, e.g., via network transmission.
Communication medium 160 may also be in the form of a storage medium (e.g., a non-transitory storage medium), such as a hard disk, flash drive, compact disc, digital video disc, Blu-ray disc, volatile or non-volatile memory, or any other suitable digital storage medium for storing encoded image data. In some embodiments, a computing device of a medium production facility, such as a disc stamping facility, may receive encoded image data from source device 120 and produce a disc containing the encoded image data.
Input interface 142 may include any type of medium or device capable of receiving information from communication medium 160. The received information includes encoded bitstream 162. For example, input interface 142 may include a receiver or a transceiver configured to receive encoded bitstream 162 in real-time.
Machine vision applications 146 include various hardware and/or software for utilizing the decoded image data generated by image/video decoder 144. For example, machine vision applications 146 may include a display device that displays the decoded image data to a user and may include any of a variety of display devices such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device. As another example, machine vision applications 146 may include one or more processors configured to use the decoded image data to perform various machine vision applications, such as object recognition and tracking, face recognition, image matching, image/video search, augmented reality, robot vision and navigation, autonomous driving, 3-dimensional structure construction, stereo correspondence, motion tracking, etc.
Next, exemplary image data encoding and decoding techniques are described in connection with FIGS. 2A-2B and FIGS. 3A-3B.
In FIG. 2A, an encoder can encode video sequence 202 into video bitstream 228 by performing process 200A, in which each picture is divided into basic processing units (BPUs) that are encoded one by one.
The encoder can perform process 200A iteratively to encode each original BPU of the original picture (in the forward path) and generate prediction reference 224 for encoding the next original BPU of the original picture (in the reconstruction path). After encoding all original BPUs of the original picture, the encoder can proceed to encode the next picture in video sequence 202.
Referring to process 200A, the encoder can receive video sequence 202 generated by a video capturing device (e.g., a camera). The term “receive” used herein can refer to receiving, inputting, acquiring, retrieving, obtaining, reading, accessing, or any action in any manner for inputting data.
At prediction stage 204, at a current iteration, the encoder can receive an original BPU and prediction reference 224 and perform a prediction operation to generate prediction data 206 and predicted BPU 208. Prediction reference 224 can be generated from the reconstruction path of the previous iteration of process 200A. The purpose of prediction stage 204 is to reduce information redundancy by extracting prediction data 206 that can be used to reconstruct the original BPU as predicted BPU 208 from prediction data 206 and prediction reference 224.
Ideally, predicted BPU 208 can be identical to the original BPU. However, due to non-ideal prediction and reconstruction operations, predicted BPU 208 is generally slightly different from the original BPU. For recording such differences, after generating predicted BPU 208, the encoder can subtract it from the original BPU to generate residual BPU 210. For example, the encoder can subtract values (e.g., greyscale values or RGB values) of pixels of predicted BPU 208 from values of corresponding pixels of the original BPU. Each pixel of residual BPU 210 can have a residual value as a result of such subtraction between the corresponding pixels of the original BPU and predicted BPU 208. Compared with the original BPU, prediction data 206 and residual BPU 210 can have fewer bits, but they can be used to reconstruct the original BPU without significant quality deterioration. Thus, the original BPU is compressed.
To further compress residual BPU 210, at transform stage 212, the encoder can reduce spatial redundancy of residual BPU 210 by decomposing it into a set of two-dimensional “base patterns,” each base pattern being associated with a “transform coefficient.” The base patterns can have the same size (e.g., the size of residual BPU 210). Each base pattern can represent a variation frequency (e.g., frequency of brightness variation) component of residual BPU 210. None of the base patterns can be reproduced from any combinations (e.g., linear combinations) of any other base patterns. In other words, the decomposition can decompose variations of residual BPU 210 into a frequency domain. Such a decomposition is analogous to a discrete Fourier transform of a function, in which the base patterns are analogous to the basis functions (e.g., trigonometric functions) of the discrete Fourier transform, and the transform coefficients are analogous to the coefficients associated with the basis functions.
Different transform algorithms can use different base patterns. Various transform algorithms can be used at transform stage 212, such as, for example, a discrete cosine transform, a discrete sine transform, or the like. The transform at transform stage 212 is invertible. That is, the encoder can restore residual BPU 210 by an inverse operation of the transform (referred to as an “inverse transform”). For example, to restore a pixel of residual BPU 210, the inverse transform can multiply values of corresponding pixels of the base patterns by the respective associated coefficients and add the products to produce a weighted sum. For a video coding standard, both the encoder and decoder can use the same transform algorithm (thus the same base patterns). Thus, the encoder can record only the transform coefficients, from which the decoder can reconstruct residual BPU 210 without receiving the base patterns from the encoder. Compared with residual BPU 210, the transform coefficients can have fewer bits, but they can be used to reconstruct residual BPU 210 without significant quality deterioration. Thus, residual BPU 210 is further compressed.
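To make the decomposition concrete, the following sketch (illustrative only, not part of the disclosed embodiments) applies a two-dimensional discrete cosine transform to a residual block and restores it with the inverse transform; the 8×8 block size, the random residual values, and the use of SciPy are assumptions of the example.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 8x8 residual BPU (differences between original and predicted pixels).
rng = np.random.default_rng(0)
residual_bpu = rng.integers(-16, 16, size=(8, 8)).astype(np.float64)

# Forward transform: decompose the residual into 2-D base patterns (cosine
# basis functions), each weighted by one transform coefficient.
coefficients = dctn(residual_bpu, norm="ortho")

# Inverse transform: the weighted sum of the base patterns restores the residual.
restored = idctn(coefficients, norm="ortho")
assert np.allclose(residual_bpu, restored)  # the transform is invertible
```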
The encoder can further compress the transform coefficients at quantization stage 214. In the transform process, different base patterns can represent different variation frequencies (e.g., brightness variation frequencies). Because human eyes are generally better at recognizing low-frequency variation, the encoder can disregard information of high-frequency variation without causing significant quality deterioration in decoding. For example, at quantization stage 214, the encoder can generate quantized transform coefficients 216 by dividing each transform coefficient by an integer value (referred to as a “quantization parameter”) and rounding the quotient to its nearest integer. After such an operation, some transform coefficients of the high-frequency base patterns can be converted to zero, and the transform coefficients of the low-frequency base patterns can be converted to smaller integers. The encoder can disregard the zero-value quantized transform coefficients 216, by which the transform coefficients are further compressed. The quantization process is also invertible, in which quantized transform coefficients 216 can be reconstructed to the transform coefficients in an inverse operation of the quantization (referred to as “inverse quantization”).
Because the encoder disregards the remainders of such divisions in the rounding operation, quantization stage 214 can be lossy. Typically, quantization stage 214 contributes the most information loss in process 200A. The larger the information loss is, the fewer bits quantized transform coefficients 216 need. To obtain different levels of information loss, the encoder can use different values of the quantization parameter or any other parameter of the quantization process.
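A minimal sketch of the divide-and-round operation described above follows; the quantization parameter value is illustrative, and real codecs derive the divisor from the quantization parameter in a standard-specific way.

```python
import numpy as np

def quantize(coefficients: np.ndarray, qp: int) -> np.ndarray:
    # Divide each transform coefficient by the quantization parameter and round
    # to the nearest integer; many high-frequency coefficients become zero.
    return np.round(coefficients / qp).astype(np.int32)

def inverse_quantize(quantized: np.ndarray, qp: int) -> np.ndarray:
    # Inverse quantization multiplies back; the rounding remainders are lost,
    # which is the lossy part of the pipeline.
    return quantized.astype(np.float64) * qp

coeffs = np.array([[520.3, 12.7], [-8.2, 1.4]])
q = quantize(coeffs, qp=10)           # [[52, 1], [-1, 0]]
print(inverse_quantize(q, qp=10))     # [[520., 10.], [-10., 0.]] -- close, not exact
```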
At binary coding stage 226, the encoder can encode prediction data 206 and quantized transform coefficients 216 using a binary coding technique, such as, for example, entropy coding, variable length coding, arithmetic coding, Huffman coding, context-adaptive binary arithmetic coding, or any other lossless or lossy compression algorithm. In some embodiments, besides prediction data 206 and quantized transform coefficients 216, the encoder can encode other information at binary coding stage 226, such as, for example, a prediction mode used at prediction stage 204, parameters of the prediction operation, a transform type at transform stage 212, parameters of the quantization process (e.g., quantization parameters), an encoder control parameter (e.g., a bitrate control parameter), or the like. The encoder can use the output data of binary coding stage 226 to generate video bitstream 228. In some embodiments, video bitstream 228 can be further packetized for network transmission.
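As one concrete variable-length code of the kind usable at this stage, the sketch below implements order-0 exponential-Golomb coding, a lossless binary code adopted by several video coding standards; its use here is illustrative rather than mandated by the description above.

```python
def exp_golomb_encode(value: int) -> str:
    # Order-0 exponential-Golomb code: write (value + 1) in binary, prefixed by
    # one zero per bit after the first, so smaller values get shorter codewords.
    code = bin(value + 1)[2:]
    return "0" * (len(code) - 1) + code

for v in range(5):
    print(v, exp_golomb_encode(v))  # 0 -> 1, 1 -> 010, 2 -> 011, 3 -> 00100, 4 -> 00101
```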
Referring to the reconstruction path of process 200A, at inverse quantization stage 218, the encoder can perform inverse quantization on quantized transform coefficients 216 to generate reconstructed transform coefficients. At inverse transform stage 220, the encoder can generate reconstructed residual BPU 222 based on the reconstructed transform coefficients. The encoder can add reconstructed residual BPU 222 to predicted BPU 208 to generate prediction reference 224 that is to be used in the next iteration of process 200A.
It should be noted that other variations of process 200A can be used to encode video sequence 202. In some embodiments, stages of process 200A can be performed by the encoder in different orders. In some embodiments, one or more stages of process 200A can be combined into a single stage. In some embodiments, a single stage of process 200A can be divided into multiple stages. For example, transform stage 212 and quantization stage 214 can be combined into a single stage. In some embodiments, process 200A can include additional stages. In some embodiments, process 200A can omit one or more stages in FIG. 2A.
Generally, prediction techniques can be categorized into two types: spatial prediction and temporal prediction. Spatial prediction (e.g., an intra-picture prediction or “intra prediction”) can use pixels from one or more already coded neighboring BPUs in the same picture to predict the current BPU. That is, prediction reference 224 in the spatial prediction can include the neighboring BPUs. The spatial prediction can reduce the inherent spatial redundancy of the picture. Temporal prediction (e.g., an inter-picture prediction or “inter prediction”) can use regions from one or more already coded pictures to predict the current BPU. That is, prediction reference 224 in the temporal prediction can include the coded pictures. The temporal prediction can reduce the inherent temporal redundancy of the pictures.
Referring to process 200B, in the forward path, the encoder performs the prediction operation at spatial prediction stage 2042 and temporal prediction stage 2044. For example, at spatial prediction stage 2042, the encoder can perform the intra prediction. For an original BPU of a picture being encoded, prediction reference 224 can include one or more neighboring BPUs that have been encoded (in the forward path) and reconstructed (in the reconstruction path) in the same picture. The encoder can generate predicted BPU 208 by extrapolating the neighboring BPUs. The extrapolation technique can include, for example, a linear extrapolation or interpolation, a polynomial extrapolation or interpolation, or the like. In some embodiments, the encoder can perform the extrapolation at the pixel level, such as by extrapolating values of corresponding pixels for each pixel of predicted BPU 208. The neighboring BPUs used for extrapolation can be located with respect to the original BPU from various directions, such as in a vertical direction (e.g., on top of the original BPU), a horizontal direction (e.g., to the left of the original BPU), a diagonal direction (e.g., to the down-left, down-right, up-left, or up-right of the original BPU), or any direction defined in the used video coding standard. For the intra prediction, prediction data 206 can include, for example, locations (e.g., coordinates) of the used neighboring BPUs, sizes of the used neighboring BPUs, parameters of the extrapolation, a direction of the used neighboring BPUs with respect to the original BPU, or the like.
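As a toy illustration of this extrapolation, the sketch below forms a predicted BPU by repeating the reconstructed pixel row above the current BPU (the vertical direction); actual standards define many more directions and additional filtering steps.

```python
import numpy as np

def intra_predict_vertical(top_neighbors: np.ndarray, bpu_height: int) -> np.ndarray:
    # Vertical intra prediction: every row of the predicted BPU repeats the
    # already-coded pixel row directly above the current BPU.
    return np.tile(top_neighbors, (bpu_height, 1))

top_row = np.array([100, 102, 104, 106])  # pixels from a reconstructed neighboring BPU
predicted_bpu = intra_predict_vertical(top_row, bpu_height=4)
print(predicted_bpu)  # each of the 4 rows equals [100 102 104 106]
```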
For another example, at temporal prediction stage 2044, the encoder can perform the inter prediction. For an original BPU of a current picture, prediction reference 224 can include one or more pictures (referred to as “reference pictures”) that have been encoded (in the forward path) and reconstructed (in the reconstruction path). In some embodiments, a reference picture can be encoded and reconstructed BPU by BPU. For example, the encoder can add reconstructed residual BPU 222 to predicted BPU 208 to generate a reconstructed BPU. When all reconstructed BPUs of the same picture are generated, the encoder can generate a reconstructed picture as a reference picture. The encoder can perform an operation of “motion estimation” to search for a matching region in a scope (referred to as a “search window”) of the reference picture. The location of the search window in the reference picture can be determined based on the location of the original BPU in the current picture. For example, the search window can be centered at a location having the same coordinates in the reference picture as the original BPU in the current picture and can be extended out for a predetermined distance. When the encoder identifies (e.g., by using a pel-recursive algorithm, a block-matching algorithm, or the like) a region similar to the original BPU in the search window, the encoder can determine such a region as the matching region. The matching region can have different dimensions (e.g., being smaller than, equal to, larger than, or in a different shape) from the original BPU. Because the reference picture and the current picture are temporally separated in the timeline, it can be deemed that the matching region “moves” to the location of the original BPU as time goes by. The encoder can record the direction and distance of such a motion as a “motion vector.” When multiple reference pictures are used, the encoder can search for a matching region and determine its associated motion vector for each reference picture. In some embodiments, the encoder can assign weights to pixel values of the matching regions of respective matching reference pictures.
The motion estimation can be used to identify various types of motions, such as, for example, translations, rotations, zooming, or the like. For inter prediction, prediction data 206 can include, for example, locations (e.g., coordinates) of the matching region, the motion vectors associated with the matching region, the number of reference pictures, weights associated with the reference pictures, or the like.
For generating predicted BPU 208, the encoder can perform an operation of “motion compensation.” The motion compensation can be used to reconstruct predicted BPU 208 based on prediction data 206 (e.g., the motion vector) and prediction reference 224. For example, the encoder can move the matching region of the reference picture according to the motion vector, in which the encoder can predict the original BPU of the current picture. When multiple reference pictures are used, the encoder can move the matching regions of the reference pictures according to the respective motion vectors and average pixel values of the matching regions. In some embodiments, if the encoder has assigned weights to pixel values of the matching regions of respective matching reference pictures, the encoder can add a weighted sum of the pixel values of the moved matching regions.
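The sketch below pairs a full-search block-matching motion estimator with the corresponding motion compensation; the search range, the sum-of-absolute-differences cost, and the function names are assumptions made for illustration and are not defined by the description above.

```python
import numpy as np

def motion_estimate(original_bpu, reference, top, left, search_range=4):
    # Full-search block matching: scan a search window in the reference picture
    # centered at the BPU's own coordinates and keep the best-matching region.
    h, w = original_bpu.shape
    best_mv, best_cost = (0, 0), np.inf
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > reference.shape[0] or x + w > reference.shape[1]:
                continue  # candidate region falls outside the reference picture
            cost = np.abs(reference[y:y + h, x:x + w] - original_bpu).sum()  # SAD
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

def motion_compensate(reference, top, left, h, w, mv):
    # "Move" the matching region by the motion vector to form the predicted BPU.
    dy, dx = mv
    return reference[top + dy:top + dy + h, left + dx:left + dx + w]
```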
In some embodiments, the inter prediction can be unidirectional or bidirectional. Unidirectional inter predictions can use one or more reference pictures in the same temporal direction with respect to the current picture; for example, a unidirectional inter prediction can use a reference picture that precedes the current picture. Bidirectional inter predictions can use one or more reference pictures in both temporal directions with respect to the current picture.
Still referring to the forward path of process 200B, after spatial prediction stage 2042 and temporal prediction stage 2044, at mode decision stage 230, the encoder can select a prediction mode (e.g., one of the intra prediction or the inter prediction) for the current iteration of process 200B. For example, the encoder can perform a rate-distortion optimization technique, in which the encoder can select a prediction mode to minimize a value of a cost function depending on a bit rate of a candidate prediction mode and distortion of the reconstructed reference picture under the candidate prediction mode. Depending on the selected prediction mode, the encoder can generate the corresponding predicted BPU 208 and prediction data 206.
In the reconstruction path of process 200B, if intra prediction mode has been selected in the forward path, after generating prediction reference 224 (e.g., the current BPU that has been encoded and reconstructed in the current picture), the encoder can directly feed prediction reference 224 to spatial prediction stage 2042 for later usage (e.g., for extrapolation of a next BPU of the current picture). If the inter prediction mode has been selected in the forward path, after generating prediction reference 224 (e.g., the current picture in which all BPUs have been encoded and reconstructed), the encoder can feed prediction reference 224 to loop filter stage 232, at which the encoder can apply a loop filter to prediction reference 224 to reduce or eliminate distortion (e.g., blocking artifacts) introduced by the inter prediction. The encoder can apply various loop filter techniques at loop filter stage 232, such as, for example, deblocking, sample adaptive offsets, adaptive loop filters, or the like. The loop-filtered reference picture can be stored in buffer 234 (or “decoded picture buffer”) for later use (e.g., to be used as an inter-prediction reference picture for a future picture of video sequence 202). The encoder can store one or more reference pictures in buffer 234 to be used at temporal prediction stage 2044. In some embodiments, the encoder can encode parameters of the loop filter (e.g., a loop filter strength) at binary coding stage 226, along with quantized transform coefficients 216, prediction data 206, and other information.
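To illustrate the idea of a deblocking loop filter, the toy sketch below softens the two pixel columns that straddle a vertical block boundary; actual deblocking filters in video coding standards are conditional and considerably more elaborate, so this is a simplified stand-in only.

```python
import numpy as np

def deblock_vertical_edge(picture: np.ndarray, edge_col: int, strength: float = 0.25):
    # Pull the pixels on either side of the block boundary toward their mean,
    # attenuating the artificial step ("blocking artifact") at the edge.
    left = picture[:, edge_col - 1]
    right = picture[:, edge_col]
    mean = (left + right) / 2.0
    picture[:, edge_col - 1] = left + strength * (mean - left)
    picture[:, edge_col] = right + strength * (mean - right)
    return picture

blocky = np.array([[10, 10, 30, 30]] * 2, dtype=np.float64)  # step at column 2
print(deblock_vertical_edge(blocky, edge_col=2))  # edge columns become 12.5 and 27.5
```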
In FIG. 3A, a decoder can decode video bitstream 228 into video stream 304 by performing process 300A, a decompression process corresponding to compression process 200A in FIG. 2A.
The decoder can perform process 300A iteratively to decode each encoded BPU of the encoded picture and generate prediction reference 224 for decoding the next encoded BPU of the encoded picture. After decoding all encoded BPUs of the encoded picture, the decoder can output the picture to video stream 304 for display and proceed to decode the next encoded picture in video bitstream 228.
At binary decoding stage 302, the decoder can perform an inverse operation of the binary coding technique used by the encoder (e.g., entropy coding, variable length coding, arithmetic coding, Huffman coding, context-adaptive binary arithmetic coding, or any other lossless compression algorithm). In some embodiments, besides prediction data 206 and quantized transform coefficients 216, the decoder can decode other information at binary decoding stage 302, such as, for example, a prediction mode, parameters of the prediction operation, a transform type, parameters of the quantization process (e.g., quantization parameters), an encoder control parameter (e.g., a bitrate control parameter), or the like. In some embodiments, if video bitstream 228 is transmitted over a network in packets, the decoder can depacketize video bitstream 228 before feeding it to binary decoding stage 302.
In process 300B, for an encoded basic processing unit (referred to as a “current BPU”) of an encoded picture (referred to as a “current picture”) that is being decoded, prediction data 206 decoded from binary decoding stage 302 by the decoder can include various types of data, depending on what prediction mode was used to encode the current BPU by the encoder. For example, if intra prediction was used by the encoder to encode the current BPU, prediction data 206 can include a prediction mode indicator (e.g., a flag value) indicative of the intra prediction, parameters of the intra prediction operation, or the like. The parameters of the intra prediction operation can include, for example, locations (e.g., coordinates) of one or more neighboring BPUs used as a reference, sizes of the neighboring BPUs, parameters of extrapolation, a direction of the neighboring BPUs with respect to the original BPU, or the like. For another example, if inter prediction was used by the encoder to encode the current BPU, prediction data 206 can include a prediction mode indicator (e.g., a flag value) indicative of the inter prediction, parameters of the inter prediction operation, or the like. The parameters of the inter prediction operation can include, for example, the number of reference pictures associated with the current BPU, weights respectively associated with the reference pictures, locations (e.g., coordinates) of one or more matching regions in the respective reference pictures, one or more motion vectors respectively associated with the matching regions, or the like.
Based on the prediction mode indicator, the decoder can decide whether to perform a spatial prediction (e.g., the intra prediction) at spatial prediction stage 2042 or a temporal prediction (e.g., the inter prediction) at temporal prediction stage 2044. The details of performing such spatial prediction or temporal prediction are described in FIG. 2B.
In process 300B, the decoder can feed prediction reference 224 to spatial prediction stage 2042 or temporal prediction stage 2044 for performing a prediction operation in the next iteration of process 300B. For example, if the current BPU is decoded using the intra prediction at spatial prediction stage 2042, after generating prediction reference 224 (e.g., the decoded current BPU), the decoder can directly feed prediction reference 224 to spatial prediction stage 2042 for later usage (e.g., for extrapolation of a next BPU of the current picture). If the current BPU is decoded using the inter prediction at temporal prediction stage 2044, after generating prediction reference 224 (e.g., a reference picture in which all BPUs have been decoded), the decoder can feed prediction reference 224 to loop filter stage 232 to reduce or eliminate distortion (e.g., blocking artifacts). The decoder can apply a loop filter to prediction reference 224, in a way as described in FIG. 2B.
Referring back to FIG. 4, apparatus 400 can include processor 402, which can execute instructions to perform the methods and processes described herein.
Apparatus 400 can also include memory 404 configured to store data (e.g., a set of instructions, computer codes, intermediate data, or the like). For example, as shown in FIG. 4, the stored data can include program instructions for implementing the stages in processes 200A, 200B, 300A, or 300B.
Bus 410 can be a communication device that transfers data between components inside apparatus 400, such as an internal bus (e.g., a CPU-memory bus), an external bus (e.g., a universal serial bus port, a peripheral component interconnect express port), or the like.
For ease of explanation without causing ambiguity, processor 402 and other data processing circuits are collectively referred to as a “data processing circuit” in this disclosure. The data processing circuit can be implemented entirely as hardware, or as a combination of software, hardware, or firmware. In addition, the data processing circuit can be a single independent module or can be combined entirely or partially into any other component of apparatus 400.
Apparatus 400 can further include network interface 406 to provide wired or wireless communication with a network (e.g., the Internet, an intranet, a local area network, a mobile communications network, or the like). In some embodiments, network interface 406 can include any combination of any number of a network interface controller (NIC), a radio frequency (RF) module, a transponder, a transceiver, a modem, a router, a gateway, a wired network adapter, a wireless network adapter, a Bluetooth adapter, an infrared adapter, a near-field communication (“NFC”) adapter, a cellular network chip, or the like.
In some embodiments, optionally, apparatus 400 can further include peripheral interface 408 to provide a connection to one or more peripheral devices.
It should be noted that video codecs (e.g., a codec performing process 200A, 200B, 300A, or 300B) can be implemented as any combination of any software or hardware modules in apparatus 400. For example, some or all stages of process 200A, 200B, 300A, or 300B can be implemented as one or more software modules of apparatus 400, such as program instructions that can be loaded into memory 404. For another example, some or all stages of process 200A, 200B, 300A, or 300B can be implemented as one or more hardware modules of apparatus 400, such as a specialized data processing circuit (e.g., an FPGA, an ASIC, an NPU, or the like).
It has been observed that the abundance of visual data has increased considerably in recent decades. This trend can be attributed to advancements in multimedia acquisition, representation, and applications. The efficient representation of visual data is crucial in various visual-data-centric applications. Visual data exhibits substantial spatial correlation, and several spatial resampling techniques have been explored to enhance compression efficiency while ensuring optimal human perception quality. The advent of machine vision has further accelerated this trend, as machines have surpassed humans as the primary consumers of visual data in several applications, such as intelligent safety and autonomous driving.
Various algorithms have been proposed to improve compression efficiency for human perception. Specifically, a downsampling-based paradigm was proposed to improve image compression performance at low bitrates, and a local random convolution kernel was proposed to preserve high-frequency information in low-bitrate compression. In the deep learning era, convolutional neural network-based end-to-end image compression frameworks have achieved clear performance improvements compared with several existing coding standards.
Spatial resampling can also be applied to visual data compression for machine vision. Specifically, a joint loss function, consisting of a signal-level distortion term and a machine vision task loss, was proposed to adapt the spatial resampling model to machine vision tasks. Moreover, to improve spatial resampling performance, a codec simulation network was proposed for joint optimization with the spatial resampling model. Although performance improvements have been achieved with spatial resampling, the representation expense of the spatial upsampling model has yet to be fully considered, which may introduce extra bandwidth costs if transmission of the upsampling model is required.
Numerous spatial resampling algorithms have thus been proposed for machine vision, achieving noticeable performance improvements.
The present disclosure provides embodiments that use a lightweight spatial upsampling model to decode video data suitable for machine vision tasks, as described in detail below.
In step 602, the decoder may receive a bitstream.
In step 604, the decoder may decode, using coded information of the bitstream, one or more pictures. Specifically, the decoder can perform spatial upsampling on pictures to obtain corresponding reconstructed pictures for machine vision applications, for example.
In sub-step 612, the decoding module of the decoder may generate one or more decompressed pictures by decompressing one or more compressed pictures included in the bitstream. The decoding module may decode the pictures according to the standard with which the pictures were encoded.
In sub-step 614, the upsampling module of the decoder may perform spatial upsampling on the one or more decompressed pictures by a lightweight spatial upsampling model to obtain one or more reconstructed pictures, respectively. That is, the lightweight spatial upsampling model can be implemented in the upsampling module. Herein, a total length of coding bits of parameters of the lightweight spatial upsampling model can be less than a threshold that is pre-determined based on a desired quality of the reconstructed pictures. That is, the decoder, specifically the structure of the upsampling module, can be represented with limited bits compared to conventional upsampling schemes.
In some embodiments, the decoder may be equipped with the necessary hardware of the upsampling module. However, the structure of the upsampling module may need to be realized according to additional configuration information. For example, in some embodiments, step 604 may further include sub-step 616, which is also implemented by the decoder. In sub-step 616, the decoder, specifically the upsampling module, may extract the parameters included in the bitstream. The parameters may include weights and biases for respective layers of the lightweight spatial upsampling model, for example. As described above, the coding bits of the parameters of the lightweight spatial upsampling model can be less than the threshold. The parameters signaled in the bitstream can be decreased dramatically compared to conventional upsampling schemes. Hence, the coding efficiency for the video stream can be improved.
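For concreteness, a minimal PyTorch sketch of sub-steps 612-616 follows; the layer sizes, the 8-bit cost per parameter, and the threshold value MAX_PARAM_BITS are illustrative assumptions rather than values defined by this disclosure.

```python
import torch
import torch.nn as nn

class LightweightUpsampler(nn.Module):
    # A deliberately small upsampling model: two convolutional layers followed
    # by a pixel-shuffle layer, keeping the parameter count within a tight budget.
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into higher resolution
        )

    def forward(self, x):
        return self.body(x)

def coding_bits(model: nn.Module, bits_per_param: int = 8) -> int:
    # Total length of the coding bits of the model parameters (weights and biases).
    return sum(p.numel() for p in model.parameters()) * bits_per_param

model = LightweightUpsampler()
MAX_PARAM_BITS = 2**20  # hypothetical threshold derived from the desired quality
assert coding_bits(model) < MAX_PARAM_BITS

decompressed = torch.rand(1, 3, 270, 480)  # stand-in for one decompressed picture
reconstructed = model(decompressed)        # upsampled to 1x3x540x960
```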
In some embodiments, the lightweight spatial upsampling model can be achieved by reducing the number of convolutional layers in the network.
In some embodiments, the spatial upsampling model can be made lightweight by using Bottleneck Resblocks, each of which may include three convolutional layers.
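A sketch of one plausible Bottleneck Resblock with three convolutional layers is given below; the channel widths are assumptions, since the description does not fix them. With 64 input channels and a 16-channel bottleneck, the block uses about 4.4K parameters, versus roughly 74K for a plain two-layer 3×3 residual block of the same width.

```python
import torch
import torch.nn as nn

class BottleneckResblock(nn.Module):
    # Three convolutional layers: a 1x1 reduction, a 3x3 convolution in the
    # narrowed channel space, and a 1x1 expansion back to the input width.
    def __init__(self, channels: int = 64, bottleneck: int = 16):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1),
        )

    def forward(self, x):
        return x + self.layers(x)  # residual (skip) connection

block = BottleneckResblock()
print(sum(p.numel() for p in block.parameters()))  # 4448 parameters
```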
In some embodiments, the spatial upsampling model can be made lightweight by reducing numerical precision. For example, the parameters of the lightweight spatial upsampling model can be quantized into a pre-determined format. Specifically, the model weights can be quantized into float16, int8, or even binary bit-depth for inference, achieving a significant reduction in representation cost.
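The sketch below shows one common way to realize such precision reduction: per-tensor symmetric int8 quantization of float32 weights (float16 would simply be weights.astype(np.float16)). The quantization scheme is an assumption, as the description does not prescribe one.

```python
import numpy as np

def quantize_weights_int8(weights: np.ndarray):
    # Per-tensor symmetric quantization: store one float scale plus 8 bits per
    # weight instead of 32 bits per weight.
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_weights(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale  # approximation used at inference time

weights = np.random.randn(64, 16, 3, 3).astype(np.float32)
q, scale = quantize_weights_int8(weights)
print(weights.nbytes, "->", q.nbytes)  # 36864 -> 9216 bytes, a 4x reduction
```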
The lightweighting of the spatial upsampling model can be further improved by jointly combining convolutional layer reduction, Bottleneck Resblocks, and numerical precision reduction.
In step 902, the encoder can receive a video sequence.
In step 904, the encoder can encode one or more pictures of the video sequence.
In step 906, the encoder can generate a bitstream associated with the encoded pictures. The bitstream may include the encoded results generated in step 904.
In sub-step 912, the downsampling module of the encoder may perform spatial downsampling on the one or more pictures by a spatial downsampling model to obtain one or more downsampled pictures, respectively. The downsampling model can be specified, for example, as a fixed model with three convolutional layers, as adopted by the video coding standards.
In sub-step 914, the encoding module of the encoder may generate one or more compressed pictures by compressing the one or more downsampled pictures.
In sub-step 916, the downsampling module may generate parameters for a lightweight spatial upsampling model for decoding the one or more compressed pictures at the decoder side. That is, the encoder may determine the parameters of the upsampling model in the decoder and signal them for the decoder's usage. In some embodiments, the encoder's structure can be fixed, e.g., include three convolutional layers, while the decoder's structure can be determined at the encoder side according to the desired quality of the reconstructed pictures. In some embodiments, a total length of coding bits of parameters of the lightweight spatial upsampling model can be less than a threshold that is pre-determined based on a desired quality of reconstructed pictures of the one or more pictures.
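A rough end-to-end sketch of sub-steps 912-916 follows; codec.compress, the bilinear stand-in for the three-layer downsampling model, and the 8-bit parameter cost are all placeholders used for illustration, not elements defined by this disclosure.

```python
import torch
import torch.nn.functional as F

def encode_pictures(pictures, codec, upsampler, threshold_bits, bits_per_param=8):
    # Sub-step 912: spatial downsampling (a bilinear resize stands in for the
    # fixed three-layer convolutional downsampling model assumed here).
    downsampled = [F.interpolate(p, scale_factor=0.5, mode="bilinear") for p in pictures]

    # Sub-step 914: compress the downsampled pictures with the underlying codec
    # (codec.compress is a placeholder for, e.g., a standard video encoder).
    compressed = [codec.compress(p) for p in downsampled]

    # Sub-step 916: collect the upsampling-model parameters to be signaled and
    # check that their total coding-bit length stays below the threshold.
    params = [p.detach() for p in upsampler.parameters()]
    total_bits = sum(p.numel() for p in params) * bits_per_param
    assert total_bits < threshold_bits

    # The bitstream carries the compressed pictures plus the signaled parameters.
    return {"pictures": compressed, "upsampler_params": params}
```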
Generally speaking, machine vision may not require pictures with ultra-high resolution. Hence, the lightweight spatial upsampling model can be relatively small in scale, saving the bits needed to code its parameters. As such, coding efficiency can be improved and the transmission budget reduced.
In some embodiments, the decoder may be equipped with the necessary hardware of the upsampling module. However, the structure of the upsampling module may need to be realized according to additional configuration information. In some embodiments, the encoder may further signal the parameters of the lightweight spatial upsampling model into the bitstream, in step 906, for example. The parameters may include weights and biases for respective layers of the lightweight spatial upsampling model. As described above, the coding bits of the parameters of the lightweight spatial upsampling model can be less than the threshold. The parameters signaled in the bitstream can be decreased dramatically compared to conventional upsampling schemes.
As described above, the lightweight spatial upsampling model can be achieved by reducing the number of convolutional layers in the network, so as to reduce the parameters to be signaled in the bitstream.
In some embodiments, the spatial upsampling model can be made lightweight by using Bottleneck Resblocks, each of which may include three convolutional layers.
In some embodiments, the spatial upsampling model can be made lightweight by reducing numerical precision. For example, the parameters of the lightweight spatial upsampling model can be quantized into a pre-determined format. Specifically, the model weights can be quantized into float16, int8, or even binary bit-depth for inference, achieving a significant reduction in representation cost.
In some embodiments, in order to further reduce the parameters to be signaled in the bitstream, the lightweighting of the spatial upsampling model can be further improved by jointly combining convolutional layer reduction, Bottleneck Resblocks, and numerical precision reduction.
It is appreciated that an embodiment of the present disclosure can be combined with one or more other embodiments.
In some embodiments, a non-transitory computer-readable storage medium storing one or more bitstreams processed according to the above-described methods is also provided. For example, the one or more bitstreams may be encoded and decoded using the lightweight spatial upsampling models described above.
In some embodiments, a non-transitory computer-readable storage medium including instructions is also provided, and the instructions may be executed by a device (such as the disclosed encoder and decoder) for performing the above-described methods. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, an NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. The device may include one or more processors (CPUs), an input/output interface, a network interface, and/or a memory.
It should be noted that, the relational terms herein such as “first” and “second” are used only to differentiate an entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations. Moreover, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
As used herein, unless specifically stated otherwise, the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a database may include A or B, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or A and B. As a second example, if it is stated that a database may include A, B, or C, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
It is appreciated that the above described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it may be stored in the above-described computer-readable media. The software, when executed by the processor can perform the disclosed methods. The computing units and other functional units described in the present disclosure can be implemented by hardware, or software, or a combination of hardware and software. One of ordinary skill in the art will also understand that multiple ones of the above described modules/units may be combined as one module/unit, and each of the above described modules/units may be further divided into a plurality of sub-modules/sub-units.
In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in figures are only for illustrative purposes and are not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method.
In the drawings and specification, there have been disclosed exemplary embodiments. However, many variations and modifications can be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.
The disclosure claims the benefits of priority to U.S. Provisional Application No. 63/511,660, filed on Jul. 2, 2023, which is incorporated herein by reference in its entirety.