The present disclosure relates to the field of video processing technologies, in particular to the field of video coding technologies, and specifically to a method for sample adaptive offset, an electronic device, and a computer-readable storage medium.
In related technologies, prediction, transformation, quantization, inverse quantization, inverse transformation, reconstruction, entropy encoding, and other processing can be performed on a video frame in a video encoding process, and entropy decoding, inverse quantization, inverse transformation, and other processing can be performed on the video frame in a video decoding process. For a boundary area of the video frame where the pixel value changes drastically, ripples may be generated in the boundary area of a reconstructed frame due to the loss of high-frequency components during the encoding process. This distortion is called the ringing effect.
The Sample Adaptive Offset (SAO for short) algorithm can offset the reconstructed frame, so as to reduce or even remove the ripples in the reconstructed frame. In order to reduce or even remove the ripples in the reconstructed frame, inverse quantization, inverse transformation, and reconstruction can be performed at the encoder to obtain the reconstructed frame. The reconstructed frame can be divided into a plurality of image blocks, and SAO information of each image block can be determined based on a residual between the reconstructed frame and the original video frame. The encoder can perform entropy encoding on the SAO information of each image block, and send it to the decoder in the encoded stream, so that the decoder can perform entropy decoding on the received encoded stream to obtain the SAO information, thereby offsetting the corresponding image blocks in the reconstructed frame to reduce or even remove the ripples in the reconstructed frame.
The methods described in this section are not necessarily methods that have been previously conceived or employed. It should not be assumed that any of the methods described in this section is considered to be prior art merely because it is included in this section, unless otherwise expressly indicated. Similarly, the problems mentioned in this section should not be considered to be universally recognized in any prior art, unless otherwise expressly indicated.
The present disclosure provides a method, an electronic device, and a computer-readable storage medium.
According to an aspect of the present disclosure, there is provided a method, including: obtaining a residual of each of a plurality of target pixels of a target image, the residual being represented by a first preset number of bits; and modifying, in response to determining that the plurality of residuals are within a preset value range, each of the plurality of residuals to be represented by a second preset number of bits, wherein the preset value range is a value range that is representable by the second preset number of bits, and the second preset number is less than the first preset number.
According to another aspect of the present disclosure, there is further provided an electronic device, including: one or more processors; and a memory, storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: obtaining a residual of each of a plurality of target pixels of a target image, the residual being represented by a first preset number of bits; and modifying, in response to determining that the plurality of residuals are within a preset value range, each of the plurality of residuals to be represented by a second preset number of bits, wherein the preset value range is a value range that is representable by the second preset number of bits, and the second preset number is less than the first preset number.
According to another aspect of the present disclosure, there is further provided a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores one or more programs including instructions that, when executed by one or more processors of a computing device, cause the computing device to perform operations including: obtaining a residual of each of a plurality of target pixels of a target image, the residual being represented by a first preset number of bits; and modifying, in response to determining that the plurality of residuals are within a preset value range, each of the plurality of residuals to be represented by a second preset number of bits, wherein the preset value range is a value range that is representable by the second preset number of bits, and the second preset number is less than the first preset number.
It should be understood that the content described in this section is not intended to identify critical or important features of the embodiments of the present disclosure, and is not used to limit the scope of the present disclosure either. Other features of the present disclosure will be easily understood through the following description.
The accompanying drawings exemplarily show embodiments and form a part of the specification, and are used to explain exemplary implementations of the embodiments together with a written description of the specification. The embodiments shown are merely for illustrative purposes and do not limit the scope of the claims. Throughout the accompanying drawings, the same reference numerals denote similar but not necessarily same elements.
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, where various details of the embodiments of the present disclosure are included to facilitate understanding, and should only be considered as exemplary. Therefore, those of ordinary skill in the art should be aware that various changes and modifications can be made to the embodiments described herein, without departing from the scope of the present disclosure. Likewise, for clarity and conciseness, the description of well-known functions and structures is omitted in the following description.
In the present disclosure, unless otherwise stated, the terms “first”, “second”, etc., used to describe various elements are not intended to limit the positional, temporal or importance relationship of these elements, but rather only to distinguish one component from another. In some examples, the first element and the second element may refer to the same instance of the element, and in some cases, based on contextual descriptions, the first element and the second element may also refer to different instances.
The terms used in the description of the various examples in the present disclosure are merely for the purpose of describing particular examples, and are not intended to be limiting. If the number of elements is not specifically defined, there may be one or more elements, unless otherwise expressly indicated in the context. Moreover, the term “and/or” used in the present disclosure encompasses any of and all possible combinations of listed items.
The embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.
For ease of understanding, a brief description of a video encoding process is given first.
The general process of video encoding is as follows: An original video frame is divided into a plurality of image blocks, and encoded in encoding units of image blocks. The image block may be, for example, a square image with a size of 16×16, 32×32, or 64×64, or a rectangular image. As shown in
The intra-frame prediction encoding module may determine intra-frame prediction data and intra-frame prediction related information, and the inter-frame prediction encoding module may determine inter-frame prediction data and inter-frame prediction related information. The intra-frame prediction encoding module or the inter-frame prediction encoding module may be selected by using a switch; prediction data output by the selected prediction module is used as one input of an adder, the original video frame is used as another input of the adder, and the adder outputs a predicted residual. The predicted residual is transformed and quantized to obtain a quantization coefficient. The quantization coefficient, the intra-frame prediction related information, and the inter-frame prediction related information are input into the entropy encoding module, and an encoded stream is obtained through entropy encoding and sent to the decoder. The reconstruction module at the encoder may add the predicted residual back to the corresponding intra-frame prediction data or inter-frame prediction data to obtain a reconstructed frame.
It should be noted that the predicted residual here is calculated based on the predicted frame and the original video frame, which is different from the residual calculated based on the reconstructed frame and the original video frame below.
The general process of video decoding is as follows: As shown in
Due to the loss of high-frequency components during the transformation and quantization at the encoder, ripples caused by the ringing effect may be generated in the reconstructed frame.
In order to solve the above technical problems, as shown in
SAO may offset the reconstructed frame, so as to reduce or even remove the ripples in the reconstructed frame. In order to reduce or even remove the ripples in the reconstructed frame, the reconstructed frame is divided into a plurality of image blocks at the encoder, and the SAO information corresponding to each image block may be determined sequentially. The encoder may send the SAO information of each image block to the decoder, such that the decoder can offset each image block in the reconstructed frame based on the corresponding SAO information, so as to reduce or even remove the ripples in the reconstructed frame.
Any video frame may be composed of three different components, for example, a YUV video frame is composed of a luminance component (Y), a first chrominance component (U), and a second chrominance component (V). Therefore, the SAO information may also include three different types of component information, for example, luminance component information, first chrominance component information, and second chrominance component information.
A video frame may include a plurality of image blocks of different components, for example, a YUV video frame may include a luminance image block, a first chrominance image block, and a second chrominance image block. Image blocks of different components in the reconstructed frame may each be offset through SAO processing.
SAO may be applicable to, but is not limited to, High Efficiency Video Coding (HEVC for short). HEVC can provide higher compression efficiency than previous generations of coding technologies. HEVC can provide a video having the same definition as H.264 at half the bit rate, which can save considerable storage space and video transmission bandwidth.
In related technologies, SAO may include Edge Offset (EO for short) and Band Offset (BO for short).
The general process of EO offset may be as follows: A current pixel is classified by comparing a reconstructed pixel value of the current pixel with a reconstructed pixel value of a pixel adjacent to the current pixel. According to the positions of adjacent pixels, there are four patterns: horizontal, vertical, 135° diagonal, and 45° diagonal. For any EO offset pattern, a current pixel can be classified into the following five categories based on a reconstructed pixel value of the current pixel and reconstructed pixel values of two adjacent pixels: for the first category, the reconstructed pixel value of the current pixel is less than the reconstructed pixel values of the two adjacent pixels; for the second category, the reconstructed pixel value of the current pixel is less than the reconstructed pixel value of one of the adjacent pixels, and equal to the reconstructed pixel value of the other adjacent pixel; for the third category, the reconstructed pixel value of the current pixel is greater than the reconstructed pixel value of one of the adjacent pixels, and equal to the reconstructed pixel value of the other adjacent pixel; for the fourth category, the reconstructed pixel value of the current pixel is greater than the reconstructed pixel values of the two adjacent pixels; and for the fifth category, the reconstructed pixel value of the current pixel is equal to the reconstructed pixel values of the two adjacent pixels. For any EO offset pattern, a category value of pixels belonging to the first category may be set to 0, a category value of pixels belonging to the second category may be set to 1, a category value of pixels belonging to the third category may be set to 2, a category value of pixels belonging to the fourth category may be set to 3, and a category value of pixels belonging to the fifth category may be set to 4.
For any EO offset pattern, if the category value of the current pixel is 0 or 1, an offset value is greater than 0; if the category value of the current pixel is 2 or 3, an offset value is less than 0, so as to achieve a smoothing effect. If the category value of the current pixel is 4, the offset value is equal to 0, and no offset is performed. The encoder may transmit only the absolute value of the offset value, and the decoder may determine, based on the category value of the EO offset pattern of the current pixel, whether the offset value is greater than 0 or less than 0.
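As an illustrative, non-limiting sketch in C (the function names eo_category and eo_offset_sign are hypothetical and not part of any coding standard), the classification and sign rule described above may be expressed as follows:

```c
#include <assert.h>

/* Illustrative sketch of the EO classification described above. cur is the
 * reconstructed value of the current pixel; a and b are the reconstructed
 * values of its two neighbors along the chosen pattern (horizontal,
 * vertical, 135-degree, or 45-degree). Returns the category value 0-4. */
static int eo_category(int cur, int a, int b)
{
    if (cur < a && cur < b) return 0;                             /* first category  */
    if ((cur < a && cur == b) || (cur == a && cur < b)) return 1; /* second category */
    if ((cur > a && cur == b) || (cur == a && cur > b)) return 2; /* third category  */
    if (cur > a && cur > b) return 3;                             /* fourth category */
    return 4;               /* fifth category (and remaining cases): no offset */
}

/* The decoder can recover the sign of the offset from the category alone:
 * categories 0 and 1 take a positive offset, 2 and 3 a negative one. */
static int eo_offset_sign(int category)
{
    if (category == 0 || category == 1) return 1;
    if (category == 2 || category == 3) return -1;
    return 0;
}
```

In this sketch, a pixel whose value lies strictly between its two neighbors also falls through to the no-offset category, consistent with no offset being applied outside the four edge-shaped categories.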
The general process of BO offset may be as follows: A pixel value range is divided equally into a plurality of bands by classification based on the pixel values. An 8-bit image is used as an example. The pixel value range is [0, 255], and it may be set that each band contains eight pixel values, so that the kth band has a pixel value range of [8k, 8k+7], which is similar to a quantization technique. In the encoding process, it is necessary to count the residuals and the numbers of pixels of bands 0-31 of a current image block (CTB). For BO offset, SAO information is specifically band information, which may include the residuals and the numbers of pixels of bands 0-31 in the current image block.
For both EO offset and BO offset, it is necessary to calculate a residual of a pixel, and the residual of the pixel is a difference between a reconstructed pixel value of the pixel in a reconstructed frame and an original pixel value of the pixel in an original video frame. In order to prevent an overflow error in the residual of the pixel, the number of bits of the residual of the pixel is set to be greater than the number of bits of the reconstructed pixel value and the original pixel value. For example, for an 8-bit image, the pixel value of a pixel is represented by 8 bits, and the residual of the pixel is represented by 16 bits. However, this leads to large memory usage and reduces the efficiency of the algorithm.
However, the inventor finds through research that a difference (i.e., a residual) between a reconstructed pixel value of a pixel and an original pixel value of the pixel in an original video frame is generally small, as shown in
In view of this, the present disclosure provides an improved method for sample adaptive offset. In this method, after a plurality of residuals of a target image are calculated, whether the plurality of residuals are within a preset value range is determined. In response to determining that the plurality of residuals are all within the preset value range, each of the plurality of residuals may be truncated to reduce the number of bits of the residuals, thereby reducing occupied memory and improving efficiency of the algorithm.
According to one or more embodiments of the present disclosure, in response to determining that a plurality of residuals of an image are all within a preset value range, each of the plurality of residuals may be truncated to reduce the number of bits of the residuals, thereby enabling a reduction in occupied memory and improving efficiency of the algorithm.
A video frame is an image, and the video frame and the image may be used interchangeably in the embodiments of the present disclosure. The target image in the embodiments of the present disclosure may be the entire video frame, or a part of the video frame.
The technical solutions of the present disclosure are applicable to both EO offset and BO offset, and are also applicable to other methods involving the calculation of residuals, for example, a method for transforming a predicted residual, etc.
The method for sample adaptive offset in the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
As for the method for sample adaptive offset, the method itself does not necessarily perform adaptive offset, but is used for adaptive offset, for example, it may perform a preparation for adaptive offset.
For example, an 8-bit image is used as an example. Both an original pixel value and a reconstructed pixel value of a target pixel are represented by 8 bits, and a calculated residual of the target pixel is represented by 16 bits. In response to determining that the residual of each target pixel of the target image is within the interval [−128, 127], the residual may be truncated to modify the residual to be represented by eight bits. It can be understood that, an 8-bit image is merely used as an example here to describe the specific principles of the present disclosure, and it is not limited that the technical solutions of the present disclosure are applicable to only 8-bit images.
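The range check and truncation described above may be sketched in C as follows (the function names are illustrative; the sketch assumes the 8-bit-image example, where residuals are first held in 16-bit storage):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch for the 8-bit image case: residuals are first computed
 * in 16-bit storage; if every residual lies in [-128, 127] (the value range
 * representable by a signed 8-bit number), the residuals can be repacked
 * into 8-bit storage, halving the memory occupied by the residual buffer. */
static int all_residuals_in_int8_range(const int16_t *res, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (res[i] < INT8_MIN || res[i] > INT8_MAX)
            return 0;
    return 1;
}

static void repack_residuals_to_int8(const int16_t *res, int8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (int8_t)res[i];  /* lossless: caller has verified the range */
}
```

The repacking is lossless only when the range check succeeds for every residual; otherwise the 16-bit representation is kept.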
The target image may be, but not limited to, an image block of a video frame. The target image may be, for example, a luminance image block of a video frame, or an image block of another component. For example, for a YUV video frame, the target image may alternatively be a first chrominance image block or a second chrominance image block.
The target image may be an image block in a reconstructed frame obtained by reconstruction at the encoder, and the residual of each target pixel of the target image may be calculated based on the reconstructed frame and the original video frame. The reconstructed frame has the same size as the original video frame and they include the same number of pixels. The value of the pixel in the reconstructed frame is the reconstructed pixel value of the pixel at the corresponding position in the original video frame, and the specific reconstruction process has been described above.
For ease of description and understanding, in the embodiments of the present disclosure, a pixel in the reconstructed frame and a pixel at the corresponding position in the original video frame are referred to as the same pixel.
The residual of a pixel may be a difference between a reconstructed pixel value of the pixel in the reconstructed frame and an original pixel value of the pixel in the original video frame. The residual of the pixel may be positive, 0, or negative.
According to some embodiments, step S202 of determining whether the plurality of residuals of the target image are within a preset value range may include: determining a maximum residual and a minimum residual of the plurality of residuals. In this case, modifying each of the plurality of residuals to be represented by the second preset number of bits may be performed in response to determining that the maximum residual and the minimum residual are within the preset value range. As long as the maximum residual and the minimum residual are within the preset value range, it can be determined that the plurality of residuals are all within the preset value range, thereby improving the efficiency of the algorithm.
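The maximum/minimum shortcut described above may be sketched as follows (illustrative function names; a single pass finds both extremes, and only those two values are checked against the preset range):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: find the maximum and minimum residual in one pass.
 * If the minimum and maximum both lie within the preset value range, every
 * residual does, so only two comparisons against the range are needed. */
static void residual_minmax(const int16_t *res, size_t n,
                            int16_t *min_out, int16_t *max_out)
{
    int16_t mn = res[0], mx = res[0];
    for (size_t i = 1; i < n; i++) {
        if (res[i] < mn) mn = res[i];
        if (res[i] > mx) mx = res[i];
    }
    *min_out = mn;
    *max_out = mx;
}

static int minmax_in_range(int16_t mn, int16_t mx, int lo, int hi)
{
    return mn >= lo && mx <= hi;
}
```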
According to some embodiments, the method may further include: dividing, in response to determining that the plurality of residuals are within the preset value range, the plurality of target pixels of the target image into a plurality of pixel groups based at least on the second preset number, each pixel group including a plurality of adjacent target pixels. Therefore, offset information of each pixel group can be calculated separately, which implements the optimization of the instruction set and improves the efficiency of the algorithm, and the offset information of the target image can be determined based on the offset information of the plurality of pixel groups. A specific implementation of how to calculate the offset information of each pixel group will be described in detail below.
Exemplarily, a plurality of pixels in each row of the target image may be, but are not limited to being, classified into at least one pixel group, that is, the plurality of target pixels included in each pixel group are located in the same row. It can be understood that the plurality of target pixels included in each pixel group may alternatively be distributed across a plurality of rows (for example, two rows) of the target image. The plurality of target pixels included in each pixel group may be distributed in a matrix. For example, an image block of the reconstructed frame illustrated in
According to some embodiments, the method may further include: determining an offset category of the target image, the offset category including a plurality of category values; obtaining a category value of each target pixel of the target image; and calculating offset information of each pixel group based on the residuals and the category values of the plurality of adjacent target pixels in the pixel group.
Further, the method may further include: determining offset information of the target image based on the respective pieces of offset information of the plurality of pixel groups; and encoding the offset information of the target image to be sent to the decoder. Therefore, the offset information of the target image is calculated based on the pixel groups, which implements the optimization of the instruction set and thereby simplifies the computational complexity of the offset information of the target image. A specific implementation of how to determine the offset information of the target image based on the respective pieces of offset information of the plurality of pixel groups will be described in detail below.
For the sample adaptive offset technology, the offset categories may include, for example, EO offset and BO offset. Any EO offset pattern may include five category values (see above), and offset information is edge information. For BO offset, the category value is a band value, and offset information is band information.
For BO offset, the band information can be calculated based on band values and residuals of a plurality of target pixels of a target image.
An 8-bit image is used as an example. The pixel value range is [0, 255], and it may be set that each band includes eight pixel values, so that the kth band has a pixel value range of [8k, 8k+7]. The pixel value may be divided by 8 and rounded down; for example, the pixel whose pixel value is 15 has a band value of 1. It should be noted that this is only an example to illustrate how to calculate the band value of a pixel; the method for dividing bands is not limited to the one described above, and according to actual requirements, a method for dividing the pixel value range can be set based on the pixel value range to determine the band value of the pixel.
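Under the example band division above, the band value computation reduces to an integer division by 8, equivalently a right shift by 3 (an illustrative sketch, not a normative definition):

```c
#include <assert.h>

/* Band value of an 8-bit pixel under the example division described above:
 * each band spans eight pixel values, so the band value is the pixel value
 * divided by 8, i.e., shifted right by 3 bits. */
static int band_value(int pixel)
{
    return pixel >> 3;
}
```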
There are 32 bands in BO offset, which exceeds the number of vector registers. In related technologies, the residuals and the numbers of pixels of bands 0-31 of a current image block (CTB) can be counted based on C language, which may specifically be as follows: for the current image block, two temporary arrays, stats[32] and count[32], are defined, each with an initial value of 0; a band value bt of a current pixel is calculated, for example, for an 8-bit pixel whose reconstructed pixel value is 15, bt=15/8=1; a difference between the current reconstructed pixel and the original pixel, that is, a residual diff, is calculated; and the residuals are accumulated and the numbers of pixels are counted: stats[bt]+=diff, count[bt]++. It can be seen that a residual needs to be calculated for each pixel and added to the accumulated residual sum of the corresponding band value, and the pixel count of the corresponding band value is incremented by 1, making the computational complexity very high.
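The scalar baseline described above may be written out as the following sketch (the array names stats and count follow the text; the function name and signature are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define NUM_BANDS 32

/* Scalar baseline described above: for each pixel of the current image
 * block, compute its band value bt and its residual diff (reconstructed
 * value minus original value), then accumulate the residual sum and the
 * pixel count of that band. */
static void bo_statistics(const unsigned char *rec, const unsigned char *org,
                          size_t num_pixels,
                          long stats[NUM_BANDS], int count[NUM_BANDS])
{
    for (int b = 0; b < NUM_BANDS; b++) {
        stats[b] = 0;
        count[b] = 0;
    }
    for (size_t i = 0; i < num_pixels; i++) {
        int bt = rec[i] >> 3;                  /* band value, 8-bit pixels */
        int diff = (int)rec[i] - (int)org[i];  /* residual of this pixel   */
        stats[bt] += diff;
        count[bt]++;
    }
}
```

Every pixel triggers one indexed accumulation into stats and count, which is the per-pixel cost the technical solutions below aim to reduce.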
In view of this, the inventor finds through research that, for each image block, although the pixel values of a plurality of adjacent reconstructed pixels mostly differ from one another, most of them belong to the same band, though not all of them do. For some special areas, such as flat areas, most of a plurality of adjacent reconstructed pixels are located in several consecutive bands.
It can be seen from
It can be seen from
The above only uses the luminance component of the reconstructed frame as an example to illustrate the distribution characteristics of the band values of the plurality of adjacent reconstructed pixels. It is easy to understand that the band values of other components of the reconstructed frame (such as the first chrominance component and the second chrominance component) also have the above distribution characteristics.
In view of this, according to some embodiments, as shown in
According to some embodiments, for BO offset, the method may further include: determining band information of the target image based on the respective pieces of band information of the plurality of pixel groups; and encoding the band information of the target image to be sent to the decoder. Therefore, the band information of the target image is calculated based on the pixel groups, and the technical solutions of the present disclosure can simplify the computational complexity of the band information of each pixel group, thereby simplifying the computational complexity for the target image. A specific implementation of how to determine the band information of the target image based on the respective pieces of band information of the plurality of pixel groups will be described in detail below.
After the target image is divided into a plurality of pixel groups, the following band information calculation may be performed for the plurality of adjacent target pixels in each pixel group.
According to some embodiments, the number of the plurality of adjacent target pixels in each pixel group may be an even number. In this case, as shown in
For example, that a plurality of adjacent target pixels are located in the same row is used as an example. The first half of the plurality of target pixels may be used as the first part, and the second half of the plurality of target pixels may be used as the second part. In the embodiments illustrated in
It can be understood that the number of the plurality of adjacent target pixels is not limited to an even number, and may alternatively be an odd number. Exemplarily, in a case that the number of the plurality of adjacent target pixels is an odd number, the method for obtaining the maximum band value and the minimum band value of the plurality of target pixels is similar to that described above, for example, it may be as follows: a plurality of adjacent target pixels are divided into a first part and a second part, and a difference between the number of pixels in the first part and the number of pixels in the second part may be, but not limited to, 1. Likewise, a maximum value obtaining operation may be performed for each target pixel in the first part and the corresponding target pixel in the second part to obtain a plurality of first pixel values, and a minimum value obtaining operation may be performed for each target pixel in the first part and the corresponding target pixel in the second part to obtain a plurality of second pixel values. In contrast to the above, the maximum band value of the plurality of target pixels can be determined based on the plurality of first pixel values and one redundant target pixel in the first part (the target pixel has no corresponding target pixel in the second part), and the minimum band value of the plurality of target pixels can be determined based on the plurality of second pixel values and the one redundant target pixel in the first part.
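The halving reduction described above may be sketched in scalar C as a stand-in for vector max/min instructions (illustrative function names; the sketch assumes the group size is a power of two, at most 64):

```c
#include <assert.h>

/* Scalar sketch of the halving reduction described above: repeatedly take
 * the elementwise maximum of the first half of the group against the second
 * half until a single value remains, then convert that pixel value to a
 * band value. Assumes n is a power of two and n <= 64. */
static int group_max_band(const unsigned char *px, int n)
{
    unsigned char buf[64];
    for (int i = 0; i < n; i++)
        buf[i] = px[i];
    for (int half = n / 2; half >= 1; half /= 2)
        for (int i = 0; i < half; i++)
            if (buf[i + half] > buf[i])
                buf[i] = buf[i + half];
    return buf[0] >> 3;
}

/* The minimum band value is obtained in the same way with elementwise minima. */
static int group_min_band(const unsigned char *px, int n)
{
    unsigned char buf[64];
    for (int i = 0; i < n; i++)
        buf[i] = px[i];
    for (int half = n / 2; half >= 1; half /= 2)
        for (int i = 0; i < half; i++)
            if (buf[i + half] < buf[i])
                buf[i] = buf[i + half];
    return buf[0] >> 3;
}
```

In a vectorized implementation, each inner loop would be a single vector max (or min) instruction, so a group of n pixels needs only log2(n) instructions per extreme.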
After the maximum band value and the minimum band value of the plurality of adjacent target pixels are determined, step S402 may be performed to determine a plurality of preset band values based on the maximum band value and the minimum band value.
According to some embodiments, the plurality of preset band values determined in step S402 may include the maximum band value max and the minimum band value min of the plurality of target pixels, and one or more band values between the maximum band value max and the minimum band value min. For example, the plurality of preset band values may include all integers within the interval [min, max].
After the plurality of preset band values are determined based on the maximum band value and the minimum band value of the plurality of adjacent target pixels, step S403 may be performed to calculate band information for each of the plurality of preset band values.
According to some embodiments, as shown in
Exemplarily, a plurality of mask pixels included in the band mask corresponding to each preset band value are in a one-to-one correspondence with the plurality of adjacent target pixels; a mask pixel corresponding to a target pixel whose band value is the preset band value has a set value, and a mask pixel corresponding to a target pixel whose band value is not the preset band value has a value of zero. The set value may be, for example, but is not limited to, 1, to simplify the calculation of the band information; see band mask 801 in
According to some embodiments, a plurality of adjacent target pixels in each pixel group may be distributed in a matrix. In this case, step S701 of determining the band mask of the plurality of target pixels that corresponds to the preset band value may include: constructing a band matrix having the same size as the matrix of the plurality of target pixels, values of the elements in the band matrix all being equal to the preset band value; and determining the band mask based on a result of comparison between the band values of the plurality of target pixels and corresponding elements in the band matrix. Therefore, the band matrix in which the element values are all the preset band value is constructed, and the band values of the plurality of target pixels are compared with the values of corresponding elements in the band matrix, so that whether there is a target pixel whose band value is the preset band value can be determined, and a position of at least one target pixel whose band value is the preset band value can be determined, thereby quickly determining the band mask corresponding to the preset band value, which improves the efficiency of the algorithm.
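The comparison described above may be sketched as the scalar analogue of a vector compare-equal (illustrative function name; the pixel group is treated as a flat array for simplicity):

```c
#include <assert.h>

/* Sketch of the comparison described above: conceptually, a band matrix
 * filled with one preset band value is compared element by element with the
 * band values of the target pixels. The resulting mask holds 1 (the set
 * value) where a pixel belongs to the preset band and 0 elsewhere. */
static void make_band_mask(const unsigned char *px, int n, int preset_band,
                           unsigned char *mask)
{
    for (int i = 0; i < n; i++)
        mask[i] = ((px[i] >> 3) == preset_band) ? 1 : 0;
}
```

In a vectorized implementation, the loop body would be one compare-equal instruction over a whole register of pixels, which is where the efficiency gain comes from.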
Exemplarily, the band mask obtained based on the above solution may be distributed in a matrix, and may have the same size as the matrix of the plurality of target pixels, so as to facilitate calculation. The specific principle will be described below.
The embodiments illustrated in
In the example illustrated in
Exemplarily, the band information for each preset band value may include an accumulated residual sum and the number of pixels. The target image may be an image block in a reconstructed frame obtained by the encoder. The value of the target pixel in the target image is a reconstructed pixel value, and a difference between the value of the pixel in the original video frame and the value of the target pixel in the target image is the residual.
According to some embodiments, in a case that the band information includes an accumulated residual sum and the number of pixels, step S403 of calculating band information for each of the plurality of preset band values based on the band values and the residuals of the plurality of target pixels may include: calculating the accumulated residual sum for the preset band value of the plurality of target pixels based on the residuals of the plurality of target pixels and the corresponding band mask. The band mask can represent the position of the target pixel whose band value is the preset band value among the plurality of target pixels, which can implement the optimization of the instruction set, and the calculation for the plurality of target pixels can be performed at a time to obtain the accumulated residual sum for the preset band value, thereby simplifying the computational complexity and improving the efficiency of the algorithm.
In exemplary embodiments, calculating the accumulated residual sum for the preset band value of the plurality of target pixels may include: constructing a residual matrix; multiplying the residual of each of the plurality of target pixels by the pixel value of the mask pixel corresponding to the position of the target pixel in the corresponding band mask; and storing the calculated results in the residual matrix. Therefore, the residuals obtained by simultaneously performing calculation for the plurality of target pixels can be stored in the residual matrix, and the residuals can be accumulated based on the residual matrix, which facilitates the implementation of the algorithm. In addition, the multiplication enables the element values of the residual matrix to be represented with twice the binary representation length of the residual of each target pixel, which avoids the overflow problem during accumulation.
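The mask-multiply-and-store step can be sketched as follows; the function names and sample values are illustrative assumptions. In a vectorized implementation the per-element multiplications would be performed by one instruction over the whole matrix, with the products widened to avoid overflow; plain Python loops stand in for that here.

```python
def masked_residual_matrix(residuals, mask):
    """Multiply each residual by the co-located mask value (0 or 1) and
    store the products in a residual matrix. Conceptually the products
    are held in elements twice as wide as a single residual, so the
    later accumulation cannot overflow."""
    return [[r * m for r, m in zip(rrow, mrow)]
            for rrow, mrow in zip(residuals, mask)]

def accumulated_residual_sum(residuals, mask):
    """Accumulate the residual matrix into the sum for one preset band value."""
    rm = masked_residual_matrix(residuals, mask)
    return sum(sum(row) for row in rm)

residuals = [[3, -2, 5], [-1, 4, 0]]
mask      = [[1,  0, 1], [ 0, 1, 0]]
print(accumulated_residual_sum(residuals, mask))  # 3 + 5 + 4 = 12
```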
In the example illustrated in
It can be understood that step S403 is not limited to the above method for calculating the accumulated residual sum for the preset band value of the plurality of target pixels. For example, instead of constructing a residual matrix, the residual of each of the plurality of target pixels can be directly multiplied by the pixel value of the mask pixel corresponding to the position of the target pixel in the corresponding band mask, and the plurality of products obtained by the multiplication for the plurality of target pixels are summed to obtain the accumulated residual sum for the preset band value of the plurality of target pixels.
According to some embodiments, as shown in
It can be understood that the number of the plurality of adjacent target pixels may alternatively be odd. In this case, for example, the pixel value of the first or the last target pixel may be stored directly in the residual matrix, and summation may be performed on the remaining even number of target pixels using the above technical solutions, so as to reduce the occupied memory.
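The handling of an odd pixel count can be sketched as follows, under the simplifying assumption that the even-count summation referred to above is an adjacent-pairs reduction; the function name and inputs are illustrative, not taken from the disclosure.

```python
def pairwise_sum(values):
    """Adjacent-pairs (tree) reduction: elements are summed in
    parallel-friendly pairs; with an odd count, the leftover element is
    carried through directly, mirroring the odd-count handling above."""
    while len(values) > 1:
        tail = [values[-1]] if len(values) % 2 else []
        values = [values[i] + values[i + 1]
                  for i in range(0, len(values) - 1, 2)] + tail
    return values[0]

print(pairwise_sum([1, 2, 3, 4, 5]))  # 15
```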
According to some embodiments, the accumulated residual sum for the same preset band value of the plurality of pixel groups may be calculated, that is, the residuals for the same preset band value of the plurality of pixel groups are summed, to obtain the accumulated residual sum for each preset band value of the target image, thereby reducing the computational complexity of the band information of the target image.
In exemplary embodiments, in a case that the accumulated residual sum for the preset band value is obtained by constructing the residual matrix, the residuals corresponding to the same preset band value of the plurality of pixel groups may be accumulated by accumulating the values of the elements in the residual matrix that are corresponding to the same preset band value of the plurality of pixel groups. The residuals obtained by simultaneously performing calculation for the plurality of target pixels can be stored in the residual matrix, and the residuals can be accumulated based on the residual matrix, thereby facilitating the implementation of the algorithm and reducing the computational complexity of the band information of the target image.
Exemplarily, as shown in
Accumulated residual matrices corresponding to a plurality of preset band values can be constructed sequentially based on the maximum band value and the minimum band value of each pixel group, so that a required accumulated residual matrix can be constructed according to actual requirements.
In the following, an 8-bit image is used as an example to illustrate that a residual of a pixel may be represented by eight bits.
For a 64×64 CTB, for ease of presentation, only a 5×64 pixel area is shown in
In the example illustrated in
Since each element in s1-s16 is obtained by summing 256 residuals, the count of summands contributes 8 bits (2^8 = 256), and each residual is represented by 8 bits, so s1-s16 can be represented by 16 bits, that is, 2^16 = 2^8 x 2^8. The signed numbers representable by 8 bits range from -128 to 127. Therefore, as long as the residuals of the target pixels in the current CTB are in the interval [-128, 127], all residuals in the current CTB can be represented by 8 bits, so that the throughput can be doubled and the processing time can be reduced by half. It can be understood that the elements in the accumulated residual matrix calculated over all target pixels having the same band value can be represented by 8 bits, and in a case that the band values of the plurality of target pixels differ, the elements in the accumulated residual matrix calculated for each band value can likewise be represented by 8 bits, because the number of pixels corresponding to each band value is even smaller.
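The bit-width argument above can be checked numerically; the bounds below merely restate the claim that a sum of 256 eight-bit signed residuals fits in 16 signed bits.

```python
# Each of s1..s16 sums 256 residuals, each fitting in 8 signed bits,
# so the sum needs at most 8 + 8 = 16 bits (2^16 = 2^8 * 2^8).
max_sum = 256 * 127      # largest possible sum of 256 eight-bit residuals
min_sum = 256 * (-128)   # smallest possible sum
fits_16_bits = -2**15 <= min_sum and max_sum <= 2**15 - 1
print(fits_16_bits)  # True
```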
The above describes how to calculate the accumulated residual sum for the preset band value based on the corresponding band mask. Likewise, the number of pixels for the preset band value can also be calculated based on the corresponding band mask.
According to some embodiments, step S403 of calculating band information for each of the plurality of preset band values based on the band values and the residuals of the plurality of target pixels may include: calculating the number of pixels for the preset band value based on the corresponding band mask. Since the band mask is capable of representing the position, among the plurality of target pixels, of each target pixel whose band value is the preset band value, the number of pixels for the preset band value can be calculated based on the corresponding band mask.
Exemplarily, a plurality of mask pixels included in the band mask corresponding to each preset band value are in a one-to-one correspondence with the plurality of adjacent target pixels: the value of a mask pixel corresponding to a target pixel whose band value is the preset band value is a set value, and the value of a mask pixel corresponding to a target pixel whose band value is not the preset band value is zero. The set value may be, for example, but is not limited to, 1, to simplify the calculation of the band information. When the plurality of adjacent target pixels include at least one target pixel whose band value is the preset band value, the values of the plurality of mask pixels included in the band mask can be summed to obtain the number of the at least one target pixel (i.e., the number of pixels, among the plurality of target pixels, for the preset band value). It can be understood that if the plurality of adjacent target pixels do not include any target pixel whose band value is the preset band value, the values of the elements of the band mask corresponding to the preset band value are all 0, and the number of pixels for the preset band value is 0.
It can be understood that step S403 is not limited to the above method for calculating the number of pixels, among the plurality of target pixels, for the preset band value. For example, it is also possible to set a variable i with an initial value of 0 and sequentially determine whether the band value of each of the plurality of target pixels is equal to the preset band value; if so, i++ is executed, and if not, the determination continues with the next target pixel, until all of the plurality of target pixels have been checked, and the final value of i is used as the number of pixels, among the plurality of target pixels, for the preset band value.
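With a set value of 1, the pixel count for a preset band value is simply the sum over the corresponding band mask, as in this sketch (the name and data are illustrative):

```python
def pixel_count_from_mask(mask):
    """Number of target pixels whose band value equals the preset band
    value: with mask values of 1 at matching positions and 0 elsewhere,
    the count is the sum of all mask elements."""
    return sum(sum(row) for row in mask)

mask = [[1, 0, 0, 0], [1, 0, 0, 1]]
print(pixel_count_from_mask(mask))  # 3
```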
According to some embodiments, the method may further include: calculating the number of pixels corresponding to the same preset band value in the plurality of pixel groups to obtain the band information of the target image, thereby reducing the computational complexity of the band information of the target image.
Exemplarily, the total number of pixels corresponding to the same preset band value in the plurality of pixel groups may be calculated by accumulating the pixel values of the plurality of mask pixels of the band mask corresponding to the same preset band value of the plurality of pixel groups.
Referring to
Accumulated number matrices corresponding to a plurality of preset band values can be constructed sequentially based on the maximum band value and the minimum band value of each pixel group, so that a required accumulated number matrix can be constructed according to actual requirements.
As shown in
In step S1201, taking an 8-bit video frame as an example, for each image block in a reconstructed frame (i.e., a target image), a plurality of residuals of a plurality of target pixels are obtained, and the residuals are represented by 16 bits (i.e., stored in 16 bits).
In step S1202, whether each of the plurality of residuals is within the interval [−128,127], that is, whether each of the plurality of residuals can be represented by 8 bits (i.e., stored in 8 bits), is determined.
In step S1203, if all of the plurality of residuals are within the interval [-128, 127], each residual is modified to be represented by 8 bits, that is, stored in 8 bits; otherwise, step S1204 is performed directly.
In step S1204, each image block in the reconstructed frame is divided into a plurality of pixel groups.
In step S1205, a band value of each of the plurality of target pixels included in each pixel group is obtained.
In step S1206, a maximum band value max and a minimum band value min of the plurality of target pixels included in the pixel group are determined.
In step S1207, it is determined that the value interval of the plurality of preset band values is [min, max].
In step S1208, the band information of the pixel group for the preset band value min is calculated.
In step S1209, the preset band value is increased by 1.
In step S1210, it is determined whether the preset band value is still not greater than max; if it is greater than max, the process ends, and otherwise, step S1211 is performed.
In step S1211, the band information of the pixel group for the preset band value is calculated, and step S1209 and step S1210 are performed iteratively until the preset band value is greater than max.
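The overall flow of steps S1201-S1211 can be sketched for a single one-dimensional pixel group as follows. The helper `band_value`, the sample data, and the dictionary-based bookkeeping are illustrative assumptions; the disclosed implementation operates on matrices of adjacent pixels with masked, widened arithmetic.

```python
def band_value(pixel):
    """Band index of an 8-bit pixel value (top 5 bits, assumed convention)."""
    return pixel >> 3

def group_band_statistics(recon, orig):
    """Accumulate (residual sum, pixel count) per band for one pixel
    group, scanning only the bands in [min, max] actually present in
    the group (steps S1206-S1211)."""
    # S1201: residuals between original and reconstructed pixel values.
    residuals = [o - r for o, r in zip(orig, recon)]
    # S1202/S1203: if all residuals lie in [-128, 127], they could be
    # narrowed to 8-bit storage to double the throughput.
    narrow = all(-128 <= d <= 127 for d in residuals)
    bands = [band_value(p) for p in recon]        # S1205
    lo, hi = min(bands), max(bands)               # S1206
    stats = {}
    for b in range(lo, hi + 1):                   # S1207-S1211
        mask = [1 if v == b else 0 for v in bands]
        stats[b] = (sum(d * m for d, m in zip(residuals, mask)), sum(mask))
    return narrow, stats

recon = [10, 12, 15, 33]
orig  = [11, 10, 18, 33]
narrow, stats = group_band_statistics(recon, orig)
print(narrow)    # True
print(stats[1])  # band 1: residuals 1, -2, 3 -> (2, 3)
```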
The technical solutions of the present disclosure may be applied to, but not limited to, video encoding for live streaming or on-demand broadcast.
In a video encoding process, the residual of an 8-bit image or a 10-bit image can generally be represented by 8 bits. Therefore, by truncating the 16-bit residual and modifying it to be represented by 8 bits, the throughput can be doubled and the calculation amount of SAO information statistics can be greatly reduced. According to experimental statistics, the above technical solutions can reduce the calculation amount of SAO information by 40%, thereby achieving a video encoding acceleration of about 4% for live streaming or on-demand broadcast and in turn reducing power consumption. Therefore, the technical solutions of the present disclosure are of great practical value.
According to another aspect of the present disclosure, there is further provided an apparatus for sample adaptive offset. As shown in
According to another aspect of the present disclosure, there is further provided an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores a computer program that, when executed by the at least one processor, implements the above method.
According to another aspect of the present disclosure, there is further provided a non-transitory computer-readable storage medium storing a computer program, where when the computer program is executed by a processor, the above method is implemented.
According to another aspect of the present disclosure, there is further provided a computer program product, including a computer program, where when the computer program is executed by a processor, the above method is implemented.
Referring to
As shown in
The processor 1401 may be a single processing unit or a plurality of processing units, and each processing unit may include a single computing unit, a plurality of computing units, or a plurality of cores. The processor 1401 may be implemented as one or more microprocessors, microcomputers, micro-controllers, digital signal processors, central processing units, state machines, logic circuits, and/or any devices that manipulate signals based on operation instructions. The processor 1401 may be configured to obtain and execute computer-readable instructions stored in the working memory 1402, the storage unit 1407, or other computer-readable media, such as program code of an operating system 1402a, program code of an application program 1402b, etc.
The working memory 1402 and the storage unit 1407 are examples of the computer-readable storage medium used for storing instructions, and the instructions are executed by the processor 1401 to implement the various functions described above. The working memory 1402 may include both a volatile memory and a non-volatile memory (e.g. a RAM, a ROM, etc.). In addition, the storage unit 1407 may include a hard disk drive, a solid state drive, a removable medium, including external and removable drives, a memory card, a flash memory, a floppy disk, an optical disk (e.g. CD, DVD), a storage array, a network attached storage, a storage area network, etc. The working memory 1402 and the storage unit 1407 may be collectively referred to herein as a memory or a computer-readable storage medium, and may be a non-transitory medium capable of storing computer-readable and processor-executable program instructions as computer program code. The computer program code may be executed by the processor 1401 as a specific machine configured to implement the operations and functions described in the examples herein.
The input unit 1404 may be any category of device capable of entering information to the electronic device 1400. The input unit 1404 may receive entered digit or character information, and generate a key signal input related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touchscreen, a trackpad, a trackball, a joystick, a microphone, and/or a remote controller. The output unit may be any type of device capable of presenting information, and may include, but is not limited to, a display unit 1405, a speaker 1406, and another output unit 1409, where the other output unit 1409 may include, but is not limited to, a video/audio output terminal, a vibrator, and/or a printer. The communications unit 1408 allows the electronic device 1400 to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunications networks, and may include, but is not limited to, a modem, a network interface card, an infrared communications device, a wireless communications transceiver, and/or a chipset, for example, a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, and/or a cellular communications device.
The application program 1402b in the working memory 1402 can be loaded to execute various methods and processing described above, such as steps S201 to step S203 in
Various implementations of the systems and technologies described herein above can be implemented in a digital electronic circuit system, an integrated circuit system, a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system-on-chip (SOC) system, a complex programmable logical device (CPLD), computer hardware, firmware, software, and/or a combination thereof. These various implementations may include: implementation in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor. The programmable processor may be a dedicated or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input apparatus, and at least one output apparatus, and transmit data and instructions to the storage system, the at least one input apparatus, and the at least one output apparatus.
Program codes used to implement the method of the present disclosure can be written in any combination of one or more programming languages. These program codes may be provided for a processor or a controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatuses, such that when the program codes are executed by the processor or the controller, the functions/operations specified in the flowcharts and/or block diagrams are implemented. The program codes may be completely executed on a machine, or partially executed on a machine, or may be, as an independent software package, partially executed on a machine and partially executed on a remote machine, or completely executed on a remote machine or a server.
In the context of the present disclosure, the machine-readable medium may be a tangible medium, which may contain or store a program for use by an instruction execution system, apparatus, or device, or for use in combination with the instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More specific examples of the machine-readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
In order to provide interaction with a user, the systems and technologies described herein can be implemented on a computer which has: a display apparatus (for example, a cathode-ray tube (CRT) or a liquid crystal display (LCD) monitor) configured to display information to the user; and a keyboard and a pointing apparatus (for example, a mouse or a trackball) through which the user can provide an input to the computer. Other categories of apparatuses can also be used to provide interaction with the user; for example, feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and an input from the user can be received in any form (including an acoustic input, a voice input, or a tactile input).
The systems and technologies described herein can be implemented in a computing system (for example, as a data server) including a backend component, or a computing system (for example, an application server) including a middleware component, or a computing system (for example, a user computer with a graphical user interface or a web browser through which the user can interact with the implementation of the systems and technologies described herein) including a frontend component, or a computing system including any combination of the backend component, the middleware component, or the frontend component. The components of the system can be connected to each other through digital data communication (for example, a communications network) in any form or medium. Examples of the communications network include: a local area network (LAN), a wide area network (WAN), and the Internet.
A computer system may include a client and a server. The client and the server are generally far away from each other and usually interact through a communications network. A relationship between the client and the server is generated by computer programs running on respective computers and having a client-server relationship with each other.
It should be understood that steps may be reordered, added, or deleted based on the various forms of procedures shown above. For example, the steps recorded in the present disclosure may be performed in parallel, in order, or in a different order, provided that the desired result of the technical solutions disclosed in the present disclosure can be achieved, which is not limited herein.
Although the embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it should be appreciated that the method, system, and device described above are merely exemplary embodiments or examples, and the scope of the present invention is not limited by the embodiments or examples, but defined only by the granted claims and the equivalent scope thereof. Various elements in the embodiments or examples may be omitted or substituted by equivalent elements thereof. Moreover, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It is important that, as the technology evolves, many elements described herein may be replaced with equivalent elements that appear after the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
202110287404.3 | Mar 2021 | CN | national |
The present application is a continuation under 35 U.S.C. 111(a) of PCT International Application No. PCT/CN2022/077212, filed on Feb. 22, 2022, which claims priority to Chinese patent application No. 202110287404.3, filed on Mar. 17, 2021, the entire contents of which are hereby incorporated by reference in their entirety for all purposes.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2022/077212 | Feb 2022 | US |
Child | 18211790 | US |