This application relates to the field of computer technologies, and in particular, to an image file processing method and apparatus, and a storage medium.
With the development of the mobile Internet, the download traffic of terminal devices has increased greatly, and image file traffic accounts for a very large proportion of a user's download traffic. The large quantity of image files also places very heavy pressure on network transmission bandwidth. If the size of an image file can be reduced, not only can the loading speed be improved, but bandwidth and storage costs can also be significantly reduced.
Embodiments of this application provide an image file processing method and apparatus, and a storage medium, to encode RGB data and transparency data separately by using video encoding modes, thereby improving a compression ratio of an image file while ensuring quality of the image file.
According to a first aspect of this application, an embodiment of this application provides an image file processing method performed at a computing device having one or more processors and memory storing a plurality of programs to be executed by the one or more processors, the method comprising:
obtaining RGBA data corresponding to a first image in an image file;
separating the RGBA data to obtain RGB data and transparency data of the first image, the RGB data being color data comprised in the RGBA data, and the transparency data being transparency data comprised in the RGBA data;
encoding the RGB data of the first image, to generate first stream data;
encoding the transparency data of the first image, to generate second stream data; and
combining the first stream data and the second stream data into a stream data segment of the image file;
wherein image header information corresponding to the image file comprises at least image feature information indicating whether there is transparency data in the image file.
According to a second aspect of this application, an embodiment of this application provides a computing device having one or more processors, memory coupled to the one or more processors, and a plurality of programs stored in the memory. The plurality of programs, when executed by the one or more processors, cause the computing device to perform the aforementioned image file processing method.
According to a third aspect of this application, an embodiment of this application provides a non-transitory computer readable storage medium storing a plurality of machine readable instructions in connection with a computing device having one or more processors. The plurality of machine readable instructions, when executed by the one or more processors, cause the computing device to perform the aforementioned image file processing method.
To describe the technical solutions in embodiments of this application more clearly, the following briefly describes the accompanying drawings required for describing the embodiments of this application. Apparently, the accompanying drawings in the following description show merely some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
The following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings in the embodiments of this application. Apparently, the described embodiments are merely some but not all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative efforts shall fall within the protection scope of this application.
Generally, when a large quantity of image files need to be transmitted, one method to reduce bandwidth or storage costs is to reduce the quality of the image files, for example, reduce the quality of an image file in the Joint Photographic Experts Group (JPEG) format from jpeg80 to jpeg70 or even lower; however, the image quality is then greatly decreased, and user experience is greatly affected. Another method is to use a more efficient image file compression method. Current mainstream image file formats mainly include JPEG, Portable Network Graphics (PNG), Graphics Interchange Format (GIF), and the like. All these formats suffer from low compression efficiency when the quality of an image file needs to be ensured.
In view of this, some embodiments of this application provide an image file processing method and apparatus, and a storage medium, to encode RGB data and transparency data separately by using video encoding modes, thereby improving a compression ratio of an image file while ensuring quality of the image file. In the embodiments of this application, when a first image is RGBA data, an encoding apparatus obtains the RGBA data corresponding to the first image in an image file, and separates the RGBA data to obtain RGB data and transparency data of the first image; encodes the RGB data of the first image according to a first video encoding mode, to generate first stream data; encodes the transparency data of the first image according to a second video encoding mode, to generate second stream data; and writes the first stream data and the second stream data into a stream data segment. In this way, through encoding by using the video encoding modes, a compression ratio of the image file can be improved, and a size of the image file can be reduced, so that a picture loading speed can be increased, and network transmission bandwidth and storage costs can be reduced. In addition, the RGB data and the transparency data in the image file are encoded separately, so that the video encoding modes can be used while the transparency data in the image file is preserved, thereby ensuring quality of the image file.
Step 101: Obtain RGBA data corresponding to a first image in an image file, and separate the RGBA data to obtain RGB data and transparency data of the first image.
Specifically, an encoding apparatus running on a terminal device obtains the RGBA data corresponding to the first image in the image file, and separates the RGBA data to obtain the RGB data and the transparency data of the first image. The data corresponding to the first image is the RGBA data. RGBA is a color space representing red, green, blue, and transparency information (Alpha). The RGBA data corresponding to the first image is separated into the RGB data and the transparency data: the RGB data is the color data included in the RGBA data, and the transparency data is the Alpha data included in the RGBA data.
For example, if the data corresponding to the first image is the RGBA data, because the first image is formed by many pixels, and each pixel corresponds to one piece of RGBA data, the first image formed by N pixels includes N pieces of RGBA data. A form of the RGBA data is as follows:
RGBA RGBA RGBA RGBA RGBA RGBA . . . RGBA
Therefore, according to this embodiment of this application, the encoding apparatus needs to separate the RGBA data of the first image, to obtain the RGB data and the transparency data of the first image, for example, perform a separation operation on the foregoing first image formed by the N pixels, to obtain the RGB data and the transparency data of each of the N pixels. The forms of the RGB data and the transparency data are as follows:
RGB RGB RGB RGB RGB RGB . . . RGB
A A A A A A . . . A
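For illustration only, the separation may be sketched as follows in Python (using NumPy; the function name and the interleaved H x W x 4 array layout are assumptions for this sketch, not part of the embodiments):

    import numpy as np

    def separate_rgba(rgba: np.ndarray):
        """Split interleaved RGBA pixels into RGB data and transparency data.

        rgba: array of shape (H, W, 4), one RGBA quadruple per pixel.
        Returns (rgb, alpha) with shapes (H, W, 3) and (H, W).
        """
        rgb = rgba[..., :3].copy()   # R, G, B components: the color data
        alpha = rgba[..., 3].copy()  # A component: the transparency data
        return rgb, alpha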
Further, after the RGB data and the transparency data of the first image are obtained, step 102 and step 103 are performed respectively.
Step 102: Encode the RGB data of the first image according to a first video encoding mode, to generate first stream data.
Specifically, the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode, to generate the first stream data. The first image may be a frame of image included in an image file in a static format; or the first image may be any one of a plurality of frames of images included in an image file in a dynamic format.
Step 103: Encode the transparency data of the first image according to a second video encoding mode, to generate second stream data.
Specifically, the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode, to generate the second stream data.
For step 102 and step 103, the first video encoding mode or the second video encoding mode may include, but is not limited to, an intra-frame prediction (I) frame encoding mode and an inter-frame prediction (P) frame encoding mode. An I frame is a key frame: when I-frame data is decoded, only the data of the current frame is required to reconstruct a complete image. A P frame reconstructs a complete image with reference to a previously encoded frame. A video encoding mode used for each frame of image in the image file in the static format or the image file in the dynamic format is not limited in this embodiment of this application.
For example, for the image file in the static format, because the image file in the static format includes only one frame of image, namely, the first image in this embodiment of this application, I-frame encoding is performed on the RGB data and the transparency data of the first image. For another example, for the image file in the dynamic format, the image file in the dynamic format generally includes at least two frames of images. Therefore, in this embodiment of this application, I-frame encoding is performed on RGB data and transparency data of the first frame of image in the image file in the dynamic format; and I-frame encoding or P-frame encoding may be performed on RGB data and transparency data of a non-first frame of image.
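The rule in the foregoing example can be summarized by the following sketch (the function and its frame indexing are illustrative assumptions):

    def pick_frame_mode(frame_index: int, is_dynamic: bool) -> str:
        """Choose a video encoding mode for one frame of an image file.

        A static image file has a single frame, which is I-frame encoded.
        In a dynamic image file, the first frame is I-frame encoded; a
        non-first frame may be I-frame or P-frame encoded (P is used here
        as a typical default).
        """
        if not is_dynamic or frame_index == 0:
            return "I"  # key frame: reconstructed from its own data alone
        return "P"      # predicted frame: refers to the previously encoded frame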
Step 104: Write the first stream data and the second stream data into a stream data segment of the image file.
Specifically, the encoding apparatus writes, into the stream data segment of the image file, the first stream data generated from the RGB data of the first image, and the second stream data generated from the transparency data of the first image. The first stream data and the second stream data are complete stream data corresponding to the first image, that is, the RGBA data of the first image can be obtained by decoding the first stream data and the second stream data.
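One illustrative way to combine the two streams into a stream data segment is plain concatenation with explicit length prefixes, sketched below. Note that the embodiments described later record stream lengths in frame header information instead, so the 4-byte prefixes here are purely an assumption that keeps this standalone sketch decodable:

    import struct

    def build_stream_data_segment(first_stream: bytes, second_stream: bytes) -> bytes:
        """Concatenate the RGB stream data and the transparency stream data.

        Each stream is prefixed with a 4-byte big-endian length so the two
        streams can be recovered from the combined segment when decoding.
        """
        return (struct.pack(">I", len(first_stream)) + first_stream +
                struct.pack(">I", len(second_stream)) + second_stream)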
It should be noted that, step 102 and step 103 are not limited to a particular order during execution.
It should be noted that, in this embodiment of this application, the RGBA data input before encoding may be obtained by decoding image files in various formats. A format of the image file may be any one of formats such as JPEG, Bitmap (BMP), PNG, Animated Portable Network Graphics (APNG), and GIF. A format of the image file before encoding is not limited in this embodiment of this application.
It should be noted that, the first image in this embodiment of this application is the RGBA data including the RGB data and the transparency data. However, when the first image includes only the RGB data, after obtaining the RGB data corresponding to the first image, the encoding apparatus may perform step 102 for the RGB data, to generate the first stream data, and determine the first stream data as complete stream data corresponding to the first image. In this way, the first image including only the RGB data can still be encoded by using the video encoding mode, to compress the first image.
In this embodiment of this application, when the first image is the RGBA data, the encoding apparatus obtains the RGBA data corresponding to the first image in the image file, and separates the RGBA data to obtain the RGB data and the transparency data of the first image; encodes the RGB data of the first image according to the first video encoding mode, to generate the first stream data; encodes the transparency data of the first image according to the second video encoding mode, to generate the second stream data; and writes the first stream data and the second stream data into the stream data segment. In this way, through encoding by using the video encoding modes, a compression ratio of the image file can be improved, and a size of the image file can be reduced, so that a picture loading speed can be increased, and network transmission bandwidth and storage costs can be reduced. In addition, the RGB data and the transparency data in the image file are encoded separately, so that the video encoding modes can be used while the transparency data in the image file is preserved, thereby ensuring quality of the image file.
Step 201: Obtain RGBA data corresponding to a first image corresponding to a kth frame in an image file in a dynamic format, and separate the RGBA data to obtain RGB data and transparency data of the first image.
Specifically, an encoding apparatus running on a terminal device obtains the to-be-encoded image file in the dynamic format. The image file in the dynamic format includes at least two frames of images. The encoding apparatus obtains the first image corresponding to the kth frame in the image file in the dynamic format. The kth frame may be any one of the at least two frames of images, where k is a positive integer.
According to some embodiments of this application, the encoding apparatus may perform encoding in an order of images corresponding to all frames in the image file in the dynamic format, that is, may first obtain an image corresponding to the first frame in the image file in the dynamic format. An order in which the encoding apparatus obtains an image included in the image file in the dynamic format is not limited in this embodiment of this application.
Further, if data corresponding to the first image is the RGBA data, the RGBA data is a color space representing Red, Green, Blue, and Alpha. The RGBA data corresponding to the first image is separated into the RGB data and the transparency data. Specifically, because the first image is formed by many pixels, and each pixel corresponds to one piece of RGBA data, the first image formed by N pixels includes N pieces of RGBA data. A form of the RGBA data is as follows:
RGBA RGBA RGBA RGBA RGBA RGBA . . . RGBA
Therefore, the encoding apparatus needs to separate the RGBA data of the first image, to obtain the RGB data and the transparency data of the first image, for example, perform a separation operation on the foregoing first image formed by the N pixels, to obtain the RGB data and the transparency data of each of the N pixels. The forms of the RGB data and the transparency data are as follows:
RGB RGB RGB RGB RGB RGB . . . RGB
A A A A A A . . . A
Further, after the RGB data and the transparency data of the first image are obtained, step 202 and step 203 are performed respectively.
Step 202: Encode the RGB data of the first image according to a first video encoding mode, to generate first stream data.
Specifically, the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode, to generate the first stream data. The RGB data is color data obtained by separating the RGBA data corresponding to the first image.
Step 203: Encode the transparency data of the first image according to a second video encoding mode, to generate second stream data.
Specifically, the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode, to generate the second stream data. The transparency data is obtained by separating the RGBA data corresponding to the first image.
It should be noted that, step 202 and step 203 are not limited to a particular order during execution.
Step 204: Write the first stream data and the second stream data into a stream data segment of the image file.
Specifically, the encoding apparatus writes, into the stream data segment of the image file, the first stream data generated from the RGB data of the first image, and the second stream data generated from the transparency data of the first image. The first stream data and the second stream data are complete stream data corresponding to the first image, that is, the RGBA data of the first image can be obtained by decoding the first stream data and the second stream data.
Step 205: Determine whether the kth frame is the last frame in the image file in the dynamic format.
Specifically, the encoding apparatus determines whether the kth frame is the last frame in the image file in the dynamic format. If the kth frame is the last frame, it indicates that encoding of the image file in the dynamic format is completed, and then step 207 is performed. If the kth frame is not the last frame, it indicates that there is an image that is not encoded in the image file in the dynamic format, and then step 206 is performed.
Step 206: Update k if the kth frame is not the last frame in the image file in the dynamic format, and trigger execution of the operation of obtaining RGBA data corresponding to a first image corresponding to a kth frame in an image file in a dynamic format, and separating the RGBA data to obtain RGB data and transparency data of the first image.
Specifically, if determining that the kth frame is not the last frame in the image file in the dynamic format, the encoding apparatus encodes an image corresponding to a next frame, that is, updates k by using a value of (k+1), and after updating k, triggers execution of the operation of obtaining RGBA data corresponding to a first image corresponding to a kth frame in an image file in a dynamic format, and separating the RGBA data to obtain RGB data and transparency data of the first image.
It may be understood that the image obtained by using the updated k and the image obtained before k is updated do not correspond to the same frame. For ease of description, the image corresponding to the kth frame before k is updated is referred to as the first image, and the image corresponding to the kth frame after k is updated is referred to as a second image, to facilitate distinguishing.
In some embodiments of this application, when step 202 to step 204 are performed for the second image, RGBA data corresponding to the second image includes RGB data and transparency data. The encoding apparatus encodes the RGB data of the second image according to a third video encoding mode, to generate third stream data; encodes the transparency data of the second image according to a fourth video encoding mode, to generate fourth stream data; and writes the third stream data and the fourth stream data into a stream data segment of the image file.
For step 202 and step 203, the first video encoding mode, the second video encoding mode, the third video encoding mode, or the fourth video encoding mode above may include, but is not limited to, an I-frame encoding mode and a P-frame encoding mode. An I frame is a key frame: when I-frame data is decoded, only the data of the current frame is required to reconstruct a complete image. A P frame reconstructs a complete image with reference to a previously encoded frame. A video encoding mode used for RGB data and transparency data in each frame of image in the image file in the dynamic format is not limited in this embodiment of this application. For example, RGB data and transparency data in the same frame of image may be encoded according to different video encoding modes, or according to the same video encoding mode. RGB data in different frames of images may be encoded according to different video encoding modes, or according to the same video encoding mode. Transparency data in different frames of images may likewise be encoded according to different video encoding modes, or according to the same video encoding mode.
It should be further noted that, the image file in the dynamic format includes a plurality of stream data segments. In some embodiments of this application, one frame of image corresponds to one stream data segment. Alternatively, in some other embodiments of this application, one piece of stream data corresponds to one stream data segment. Therefore, the stream data segment into which the first stream data and the second stream data are written is different from the stream data segment into which the third stream data and the fourth stream data are written.
For example, refer to the accompanying drawing, which shows an optional encoding solution for the image file in the dynamic format.
It should be noted that, the foregoing is merely an optional encoding solution for the image file in the dynamic format; alternatively, the encoding apparatus may encode each of the first frame, the second frame, the third frame, the fourth frame, and the like by using the I-frame encoding mode.
Step 207: Complete, if the kth frame is the last frame in the image file in the dynamic format, encoding of the image file in the dynamic format.
Specifically, if the encoding apparatus determines that the kth frame is the last frame in the image file in the dynamic format, it indicates that encoding of the image file in the dynamic format is completed.
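The control flow of steps 201 to 207 may be summarized as follows (reusing the separation and segment sketches above; encode_rgb and encode_alpha stand in for the I-frame/P-frame video encoder and are assumptions):

    def encode_dynamic_image(frames, out_segments):
        """Encode every frame of an image file in a dynamic format.

        frames: sequence of interleaved RGBA frames.
        out_segments: list receiving one stream data segment per frame.
        """
        k = 0
        while True:
            rgb, alpha = separate_rgba(frames[k])             # step 201
            first_stream = encode_rgb(rgb, k)                 # step 202
            second_stream = encode_alpha(alpha, k)            # step 203
            out_segments.append(
                build_stream_data_segment(first_stream, second_stream))  # step 204
            if k == len(frames) - 1:                          # step 205
                break                                         # step 207: encoding done
            k = k + 1                                         # step 206: update k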
In some embodiments of this application, the encoding apparatus may generate frame header information for stream data generated from an image corresponding to each frame, and generate image header information for the image file in the dynamic format. In this way, whether the image file includes the transparency data may be determined by using the image header information, and then whether to obtain only the first stream data generated from the RGB data or obtain the first stream data generated from the RGB data and the second stream data generated from the transparency data in a decoding process may be determined.
It should be noted that, the image corresponding to each frame in the image file in the dynamic format in this embodiment of this application is RGBA data including RGB data and transparency data. However, when the image corresponding to each frame in the image file in the dynamic format includes only RGB data, the encoding apparatus may perform step 202 for the RGB data of each frame of image, to generate the first stream data, write the first stream data into the stream data segment of the image file, and determine the first stream data as the complete stream data corresponding to that frame of image. In this way, an image including only the RGB data can still be encoded by using a video encoding mode, to compress the image.
It should be further noted that, in this embodiment of this application, the RGBA data input before encoding may be obtained by decoding image files in various dynamic formats. The dynamic format of the image file may be any one of formats such as APNG and GIF. The dynamic format of the image file before encoding is not limited in this embodiment of this application.
In this embodiment of this application, when the first image in the image file in the dynamic format is the RGBA data, the encoding apparatus obtains the RGBA data corresponding to the first image in the image file, and separates the RGBA data to obtain the RGB data and the transparency data of the first image; encodes the RGB data of the first image according to the first video encoding mode, to generate the first stream data; encodes the transparency data of the first image according to the second video encoding mode, to generate the second stream data; and writes the first stream data and the second stream data into the stream data segment. In addition, the image corresponding to each frame in the image file in the dynamic format can be encoded in the same manner as the first image. In this way, through encoding by using the video encoding modes, a compression ratio of the image file can be improved, and a size of the image file can be reduced, so that a picture loading speed can be increased, and network transmission bandwidth and storage costs can be reduced. In addition, the RGB data and the transparency data in the image file are encoded separately, so that the video encoding modes can be used while the transparency data in the image file is preserved, thereby ensuring quality of the image file.
Step 301: Obtain RGBA data corresponding to a first image in an image file, and separate the RGBA data to obtain RGB data and transparency data of the first image.
Specifically, an encoding apparatus running on a terminal device obtains the RGBA data corresponding to the first image in the image file, and separates the RGBA data to obtain the RGB data and the transparency data of the first image. The data corresponding to the first image is the RGBA data. RGBA is a color space representing Red, Green, Blue, and Alpha. The RGBA data corresponding to the first image is separated into the RGB data and the transparency data: the RGB data is the color data included in the RGBA data, and the transparency data is the Alpha data included in the RGBA data.
For example, if the data corresponding to the first image is the RGBA data, because the first image is formed by many pixels, and each pixel corresponds to one piece of RGBA data, the first image formed by N pixels includes N pieces of RGBA data. A form of the RGBA data is as follows:
RGBA RGBA RGBA RGBA RGBA RGBA . . . RGBA
Therefore, according to this embodiment of this application, the encoding apparatus needs to separate the RGBA data of the first image, to obtain the RGB data and the transparency data of the first image, for example, perform a separation operation on the foregoing first image formed by the N pixels, to obtain the RGB data and the transparency data of each of the N pixels. The forms of the RGB data and the transparency data are as follows:
RGB RGB RGB RGB RGB RGB . . . RGB
A A A A A A . . . A
Further, after the RGB data and the transparency data of the first image are obtained, step 302 and step 303 are performed respectively.
Step 302: Encode the RGB data of the first image according to a first video encoding mode, to generate first stream data.
Specifically, the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode, to generate the first stream data. The first image may be a frame of image included in an image file in a static format; or the first image may be any one of a plurality of frames of images included in an image file in a dynamic format.
In some embodiments of this application, a specific process in which the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode and generates the first stream data is: converting the RGB data of the first image into first YUV data; and encoding the first YUV data according to the first video encoding mode, to generate the first stream data. In some embodiments of this application, the encoding apparatus may convert the RGB data into the first YUV data according to a preset YUV color space format. For example, the preset YUV color space format may include, but is not limited to, YUV420, YUV422, and YUV444.
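A minimal sketch of such a conversion, assuming the full-range BT.601 matrix and YUV444 sampling (the embodiments fix neither; YUV420 or YUV422 would additionally subsample the U and V components):

    import numpy as np

    def rgb_to_yuv444(rgb: np.ndarray) -> np.ndarray:
        """Convert (H, W, 3) RGB data to full-range BT.601 YUV444 data."""
        r = rgb[..., 0].astype(np.float32)
        g = rgb[..., 1].astype(np.float32)
        b = rgb[..., 2].astype(np.float32)
        y = 0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
        v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
        return np.clip(np.stack([y, u, v], axis=-1), 0.0, 255.0).astype(np.uint8)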
Step 303: Encode the transparency data of the first image according to a second video encoding mode, to generate second stream data.
Specifically, the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode, to generate the second stream data.
The first video encoding mode in step 302 or the second video encoding mode in step 303 may include, but is not limited to, an I-frame encoding mode and a P-frame encoding mode. An I frame is a key frame: when I-frame data is decoded, only the data of the current frame is required to reconstruct a complete image. A P frame reconstructs a complete image with reference to a previously encoded frame. A video encoding mode used for each frame of image in the image file in the static format or the image file in the dynamic format is not limited in this embodiment of this application.
For example, for the image file in the static format, because the image file in the static format includes only one frame of image, namely, the first image in this embodiment of this application, I-frame encoding is performed on the RGB data and the transparency data of the first image. For another example, for the image file in the dynamic format, the image file in the dynamic format includes at least two frames of images. Therefore, in this embodiment of this application, I-frame encoding is performed on RGB data and transparency data of the first frame of image in the image file in the dynamic format; and I-frame encoding or P-frame encoding may be performed on RGB data and transparency data of a non-first frame of image.
In some embodiments of this application, a specific process in which the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode and generates the second stream data is: converting the transparency data of the first image into second YUV data; and encoding the second YUV data according to the second video encoding mode, to generate the second stream data.
The encoding apparatus converts the transparency data of the first image into the second YUV data as follows: in some embodiments of this application, the encoding apparatus sets the transparency data of the first image as a Y component in the second YUV data, and skips setting a U component and a V component in the second YUV data; or in some other embodiments of this application, the encoding apparatus sets the transparency data of the first image as a Y component in the second YUV data, and sets a U component and a V component in the second YUV data as preset data. In this embodiment of this application, the encoding apparatus may convert the transparency data into the second YUV data according to a preset YUV color space format. For example, the preset YUV color space format may include, but is not limited to, YUV400, YUV420, YUV422, and YUV444, and the U component and the V component may be set according to the YUV color space format.
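A sketch of this second conversion follows; the preset value 128 for the U and V components is an assumption (any constant agreed on by encoder and decoder would serve), and chrominance subsampling for YUV420/YUV422 is omitted for brevity:

    import numpy as np

    def alpha_to_yuv(alpha: np.ndarray, fmt: str = "YUV400"):
        """Carry transparency data in the Y component of the second YUV data.

        For YUV400, no chrominance planes are produced (U and V not set);
        for formats with chrominance, U and V are filled with preset data.
        """
        y = alpha.astype(np.uint8)                # Y component = alpha values
        if fmt == "YUV400":
            return y, None, None                  # skip setting U and V
        u = np.full(alpha.shape, 128, np.uint8)   # preset U data
        v = np.full(alpha.shape, 128, np.uint8)   # preset V data
        return y, u, v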
Further, if data corresponding to the first image is the RGBA data, the encoding apparatus obtains the RGB data and the transparency data of the first image by separating the RGBA data of the first image. The following example describes conversion of the RGB data of the first image into the first YUV data and conversion of the transparency data of the first image into the second YUV data, using a first image that includes four pixels. The RGB data of the first image is the RGB data of the four pixels, and the transparency data of the first image is the transparency data of the four pixels; the specific conversion process is as follows.
If the YUV color space format is YUV400, the U and V components are not set, and the Y components of the four pixels are determined as the second YUV data of the first image. If the YUV color space format is a format other than YUV400 in which U and V components exist, the U and V components are set as preset data.
It should be noted that, step 302 and step 303 are not limited to a particular order during execution.
Step 304: Write the first stream data and the second stream data into a stream data segment of the image file.
Specifically, the encoding apparatus writes, into the stream data segment of the image file, the first stream data generated from the RGB data of the first image, and the second stream data generated from the transparency data of the first image. The first stream data and the second stream data are complete stream data corresponding to the first image, that is, the RGBA data of the first image can be obtained by decoding the first stream data and the second stream data.
Step 305: Generate image header information and frame header information that correspond to the image file.
Specifically, the encoding apparatus generates the image header information and the frame header information that correspond to the image file. The image file may be an image file in a static format, that is, includes only the first image; or the image file is an image file in a dynamic format, that is, includes the first image and another image. Regardless of whether the image file is the image file in the static format or the image file in the dynamic format, the encoding apparatus needs to generate the image header information corresponding to the image file. The image header information includes image feature information indicating whether there is transparency data in the image file, so that a decoding apparatus determines, by using the image feature information, whether the image file includes the transparency data, to determine how to obtain stream data and whether the obtained stream data includes the second stream data generated from the transparency data.
Further, the frame header information is used to indicate the stream data segment of the image file, so that the decoding apparatus determines, by using the frame header information, the stream data segment from which the stream data can be obtained, thereby decoding the stream data.
It should be noted that, in this embodiment of this application, the execution order of step 305 of generating the image header information and the frame header information that correspond to the image file relative to step 302, step 303, and step 304 is not limited.
Step 306: Write the image header information into an image header information data segment of the image file.
Specifically, the encoding apparatus writes the image header information into an image header information data segment of the image file. The image header information includes an image file identifier, a decoder identifier, a version number, and the image feature information; the image file identifier is used to indicate a type of the image file; the decoder identifier is used to indicate an identifier of an encoding/decoding standard used for the image file; and the version number is used to indicate a profile of the encoding/decoding standard used for the image file.
In some embodiments of this application, the image header information may further include a user defined information data segment. The user defined information data segment includes a user defined information start code, a length of the user defined information data segment, and user defined information. The user defined information may include Exchangeable Image File (EXIF) information, for example, an aperture, a shutter speed, a white balance, an ISO speed, a focal length, a date, and a time during photographing, a photographing condition, a camera brand and model, color encoding, sound recorded during photographing, Global Positioning System data, a thumbnail, and the like. The user defined information may include any information that can be defined and set by a user; this is not limited in this embodiment of this application.
The image feature information further includes an image feature information start code, an image feature information data segment length, a flag indicating whether the image file is an image file in a static format or an image file in the dynamic format, a flag indicating whether the image file is losslessly encoded, a YUV color space value domain used for the image file, a width of the image file, a height of the image file, and, if the image file is the image file in the dynamic format, a frame quantity. In some embodiments of this application, the image feature information may further include a YUV color space format used for the image file.
For example, the image header information may be organized as an image sequence header data segment.
The image sequence header data segment includes an image file identifier, a decoder identifier, a version number, and the image feature information.
The image file identifier (image_identifier) is used to indicate a type of the image file, and may be indicated by a preset identifier. For example, the image file identifier occupies four bytes. For example, the image file identifier is a bit string ‘AVSP’, used to indicate that this is an AVS image file.
The decoder identifier is used to indicate an identifier of an encoding/decoding standard used to compress the current image file, and is, for example, indicated by using four bytes, or may be explained as indicating a model of a decoder kernel used for current picture decoding. When an AVS2 kernel is used, the decoder identifier code_id is ‘AVS2’.
The version number is used to indicate a profile of an encoding/decoding standard indicated by a compression standard identifier. For example, profiles may include a baseline profile, a main profile, and an extended profile. For example, an 8-bit unsigned number identifier is used. Table 1 provides the types of the version number.
The image feature information data segment includes the following fields.
The image feature information start code is a field used to indicate a start location of the image feature information data segment of the image file, and is, for example, indicated by using one byte, and a field D0 is used.
The image feature information data segment length indicates a quantity of bytes occupied by the image feature information data segment, and is, for example, indicated by using two bytes.
The image transparency flag is used to indicate whether an image in the image file carries transparency data, and is, for example, indicated by using one bit. 0 indicates that the image in the image file carries no transparency data, and 1 indicates that the image in the image file carries transparency data. It may be understood that, whether there is an alpha channel and whether transparency data is included represent the same meaning.
The dynamic image flag is used to indicate whether the image file is the image file in the dynamic format or the image file in the static format, and is, for example, indicated by using one bit. 0 indicates that the image file is the image file in the static format, and 1 indicates that the image file is the image file in the dynamic format.
The YUV color space format is used to indicate a chrominance component format used to convert the RGB data of the image file into the YUV data, and is, for example, indicated by using two bits, as shown in the following Table 2.
The lossless mode flag is used to indicate whether lossless encoding or lossy compression is used, and is, for example, indicated by using one bit. 0 indicates lossy encoding, and 1 indicates lossless encoding. If the RGB data of the image file is directly encoded by using a video encoding mode, it indicates that lossless encoding is used; and if the RGB data of the image file is first converted into YUV data, and then the YUV data is encoded, it indicates that lossy encoding is used.
The YUV color space value domain flag is used to indicate whether the YUV color space value domain range conforms to the ITU-R BT.601 standard, and is, for example, indicated by using one bit. 1 indicates that the value domain range of the Y component is [16, 235] and the value domain range of the U and V components is [16, 240]; and 0 indicates that the value domain range of the Y component and the U and V components is [0, 255].
The reserved bits form a 10-bit unsigned integer: redundant bits in a byte are set as reserved bits.
The image width is used to indicate a width of each image in the image file, and is, for example, indicated by using two bytes, supporting an image width ranging from 0 to 65535.
The image height is used to indicate a height of each image in the image file, and is, for example, indicated by using two bytes, supporting an image height ranging from 0 to 65535.
The image frame quantity exists only in a case of the image file in the dynamic format, is used to indicate a total quantity of frames included in the image file, and is, for example, indicated by using three bytes.
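Packing these fields can be sketched as follows; the bit order within the flag bytes and the convention that the segment length covers the start code and the length field are assumptions, since the embodiments fix only the field widths:

    import struct

    def pack_image_feature_info(has_alpha, is_dynamic, yuv_fmt_code, lossless,
                                bt601_range, width, height, frame_count=0) -> bytes:
        """Pack the image feature information data segment.

        Flag layout (6 bits used + 10 reserved bits = 2 bytes): image
        transparency flag, dynamic image flag, 2-bit YUV color space
        format, lossless mode flag, YUV color space value domain flag.
        """
        flags = ((has_alpha & 1) << 15) | ((is_dynamic & 1) << 14) | \
                ((yuv_fmt_code & 3) << 12) | ((lossless & 1) << 11) | \
                ((bt601_range & 1) << 10)           # low 10 bits are reserved
        body = struct.pack(">HHH", flags, width, height)
        if is_dynamic:
            body += frame_count.to_bytes(3, "big")  # 3-byte image frame quantity
        seg_len = 1 + 2 + len(body)                 # start code + length field + body
        return b"\xD0" + struct.pack(">H", seg_len) + body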
The user defined information data segment is described as follows.
The user defined information start code is a field used to indicate a start location of the user defined information, and is, for example, indicated by using one byte. For example, a bit string ‘0x000001BC’ identifies the beginning of the user defined information.
A user defined information data segment length indicates a data length of current user defined information, and is, for example, indicated by using two bytes.
The user defined information is used to write data that a user needs to import, for example, information such as EXIF, and a quantity of occupied bytes may be determined according to a length of the user defined information.
It should be noted that, the foregoing is merely exemplary description, and a name of each piece of information included in the image header information, a location of each piece of information in the image header information, and a quantity of bits occupied for indicating each piece of information are not limited in this embodiment of this application.
Step 307: Write the frame header information into a frame header information data segment of the image file.
Specifically, the encoding apparatus writes the frame header information into the frame header information data segment of the image file.
In some embodiments of this application, one frame of image in the image file corresponds to one piece of frame header information. Specifically, when the image file is the image file in the static format, the image file in the static format includes one frame of image, namely, the first image, and therefore, the image file in the static format includes one piece of frame header information. When the image file is the image file in the dynamic format, the image file in the dynamic format usually includes at least two frames of images, and one piece of frame header information is added to each of the at least two frames of images.
In some other embodiments of this application, one piece of stream data in one frame of image in the image file corresponds to one piece of frame header information. Specifically, in a case of the image file in the static format, the image file in the static format includes one frame of image, namely, the first image, and the first image including the transparency data corresponds to two pieces of stream data that are respectively the first stream data and the second stream data. Therefore, the first stream data in the image file in the static format corresponds to one piece of frame header information, and the second stream data corresponds to the other piece of frame header information. In a case of the image file in the dynamic format, the image file in the dynamic format includes at least two frames of images, each frame of image including transparency data corresponds to two pieces of stream data that are respectively the first stream data and the second stream data, and one piece of frame header information is added to each of the first stream data and the second stream data of each frame of image.
In some embodiments of this application, when one piece of stream data in one frame of image in the image file corresponds to one piece of frame header information, the encoding apparatus may arrange, in a preset order, the frame header information data segment and the first stream data segment that correspond to the first stream data, and the frame header information data segment and the second stream data segment that correspond to the second stream data. For example, for one frame of image, the data segments may be arranged in the following order: the frame header information data segment corresponding to the first stream data, the first stream data segment, the frame header information data segment corresponding to the second stream data, and the second stream data segment. In this way, in a decoding process, the decoding apparatus can determine, from the two pieces of frame header information that indicate the frame of image, the stream data segment from which the first stream data can be obtained and the stream data segment from which the second stream data can be obtained. It may be understood that, herein, the first stream data is stream data generated from the RGB data, and the second stream data is stream data generated from the transparency data.
Further, the frame header information includes a frame header information start code and, if the image file is the image file in the dynamic format, delay time information. In some embodiments of this application, the frame header information further includes at least one of a frame header information data segment length and a stream data segment length of a stream data segment indicated by the frame header information. Further, in some embodiments of this application, the frame header information further includes information specific to the frame of image for distinguishing it from another frame of image, for example, encoding area information, transparency information, and a color table. This is not limited in this embodiment of this application.
When the first stream data and the second stream data that are obtained by encoding one frame of image correspond to one piece of frame header information, the frame header information includes the following fields.
The frame header information start code is a field used to indicate a start location of the frame header information, and is, for example, indicated by using one byte.
The frame header information data segment length indicates a length of the frame header information, and is, for example, indicated by using one byte. The information is optional information.
The stream data segment length indicates a stream length of a stream data segment indicated by the frame header information. If the first stream data and the second stream data correspond to one piece of frame header information, the stream length herein is a sum of a length of the first stream data and a length of the second stream data. The information is optional information.
The delay time information exists only when the image file is an image file in a dynamic format, indicates a difference between a time at which an image corresponding to a current frame is displayed and a time at which an image corresponding to a next frame is displayed, and is, for example, indicated by using one byte.
It should be noted that, the foregoing is merely exemplary description, and a name of each piece of information included in the frame header information, a location of each piece of information in the frame header information, and a quantity of bits occupied for indicating each piece of information are not limited in this embodiment of this application.
When each of the first stream data and the second stream data corresponds to one piece of frame header information, the frame header information is divided into image frame header information and transparent channel frame header information, described as follows.
The image frame header information start code is a field used to indicate a start location of the image frame header information, and is, for example, indicated by using one byte, for example, a bit string ‘0x000001BA’.
The image frame header information data segment length indicates a length of the image frame header information, and is, for example, indicated by using one byte. The information is optional information.
The first stream data segment length indicates a stream length of the first stream data segment indicated by the image frame header information. The information is optional information.
The delay time information exists only when the image file is an image file in a dynamic format, indicates a difference between a time at which an image corresponding to a current frame is displayed and a time at which an image corresponding to a next frame is displayed, and is, for example, indicated by using one byte.
The transparent channel frame header information start code is a field used to indicate a start location of the transparent channel frame header information, and is, for example, indicated by using one byte, for example, a bit string ‘0x000001BB’.
The transparent channel frame header information data segment length indicates a length of the transparent channel frame header information, and is, for example, indicated by using one byte. The information is optional information.
The second stream data segment length indicates a stream length of the second stream data segment indicated by the transparent channel frame header information. The information is optional information.
The delay time information exists only when the image file is an image file in a dynamic format, indicates a difference between a time at which an image corresponding to a current frame is displayed and a time at which an image corresponding to a next frame is displayed, and is, for example, indicated by using one byte. The information is optional information. When the transparent channel frame header information includes no delay time information, refer to the delay time information in the image frame header information.
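Emitting the two headers and their stream data segments for one frame can be sketched as follows (the 4-byte stream length fields and the handling of the optional information are assumptions):

    import struct

    def write_frame(buf: bytearray, first_stream: bytes, second_stream: bytes,
                    delay_time=None):
        """Append image frame header + first stream data segment, then
        transparent channel frame header + second stream data segment."""
        buf += b"\x00\x00\x01\xBA"                    # image frame header start code
        buf += struct.pack(">I", len(first_stream))   # first stream data segment length
        if delay_time is not None:                    # dynamic format only
            buf += bytes([delay_time])                # delay time information
        buf += first_stream
        buf += b"\x00\x00\x01\xBB"                    # transparent channel start code
        buf += struct.pack(">I", len(second_stream))  # second stream data segment length
        buf += second_stream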
In this embodiment of this application, terms such as the image file, the image, the first stream data, the second stream data, the image header information, the frame header information, each piece of information included in the image header information, and each piece of information included in the frame header information may appear under other names. For example, the image file may be described by using a "picture"; as long as the function of a term is similar to that in this application, the term falls within the protection scope of the claims of this application and an equivalent technology thereof.
It should be further noted that, in this embodiment of this application, the RGBA data input before encoding may be obtained by decoding image files in various formats. A format of an image file may be any one of formats such as JPEG, BMP, PNG, APNG, and GIF. A format of the image file before encoding is not limited in this embodiment of this application.
It should be noted that, a form of each start code in this embodiment of this application is unique in the entire compressed image data, to achieve a function of uniquely identifying each data segment. The image file in this embodiment of this application is a complete image file that may include one or more images, and an image is one frame of picture. Video frame data in this embodiment of this application is stream data obtained after video encoding is performed on each frame of image in the image file. For example, the first stream data obtained after the RGB data is encoded may be considered as one piece of video frame data, and the second stream data obtained after the transparency data is encoded may also be considered as one piece of video frame data.
In this embodiment of this application, when the first image is the RGBA data, the encoding apparatus obtains the RGBA data corresponding to the first image in the image file, and separates the RGBA data to obtain the RGB data and the transparency data of the first image; encodes the RGB data of the first image according to the first video encoding mode, to generate the first stream data; encodes the transparency data of the first image according to the second video encoding mode, to generate the second stream data; generates the image header information and the frame header information that correspond to the image file including the first image; and finally writes the first stream data and the second stream data into the stream data segment, writes the image header information into the image header information data segment, and writes the frame header information into the frame header information data segment. In this way, through encoding by using the video encoding modes, a compression ratio of the image file can be improved, and a size of the image file can be reduced, so that a picture loading speed can be increased, and network transmission bandwidth and storage costs can be reduced. In addition, the RGB data and the transparency data in the image file are encoded separately, so that the video encoding modes can be used while the transparency data in the image file is preserved, thereby ensuring quality of the image file.
Step 401: Obtain, from a stream data segment of an image file, first stream data and second stream data that are generated from a first image in the image file.
Specifically, a decoding apparatus running on a terminal device obtains, from the stream data segment of the image file, the first stream data and the second stream data that are generated from the first image in the image file.
Step 402: Decode the first stream data according to a first video decoding mode, to generate RGB data of the first image.
Specifically, the decoding apparatus decodes the first stream data according to the first video decoding mode. The first stream data and the second stream data are data that is generated from the first image and that is read from the stream data segment when the decoding apparatus parses the image file, so that the stream data related to the first image is obtained. The first image is an image included in the image file. When the image file includes transparency data, the decoding apparatus obtains the first stream data and the second stream data that indicate the first image. The first image may be a frame of image included in an image file in a static format; or the first image may be any one of a plurality of frames of images included in an image file in a dynamic format.
In some embodiments of this application, when the image file includes the RGB data and the transparency data, the image file has information used to indicate a stream data segment, and for an image file in a dynamic format, the image file has information used to indicate stream data segments corresponding to different frames of images, so that the decoding apparatus can obtain the first stream data generated from the RGB data of the first image and the second stream data generated from the transparency data of the first image.
Further, the decoding apparatus decodes the first stream data, to generate the RGB data of the first image.
Step 403: Decode the second stream data according to a second video decoding mode, to generate transparency data of the first image.
Specifically, the decoding apparatus decodes the second stream data according to the second video decoding mode, to generate the transparency data of the first image. The second stream data is also read in the same manner as that of reading the first stream data in step 402. Details are not described herein again.
For step 402 and step 403, the first video decoding mode or the second video decoding mode may be determined based on the video encoding mode used to generate the first stream data or the second stream data. The first stream data is used as an example for description: if I-frame encoding is used for the first stream data, the first video decoding mode generates the RGB data from the current stream data alone; or if P-frame encoding is used for the first stream data, the first video decoding mode generates the RGB data of the current frame with reference to previously decoded data. For the second video decoding mode, refer to the descriptions of the first video decoding mode. Details are not described herein again.
It should be noted that, step 402 and step 403 are not limited to a particular order during execution.
Step 404: Generate, according to the RGB data and the transparency data of the first image, RGBA data corresponding to the first image.
Specifically, the decoding apparatus generates, according to the RGB data and the transparency data of the first image, the RGBA data corresponding to the first image. The RGBA data is a color space representing Red, Green, Blue, and Alpha. The RGB data and the transparency data can be combined into the RGBA data. In this way, corresponding RGBA data can be generated, by using a corresponding video decoding mode, from stream data obtained by performing encoding according to a video encoding mode, so that the video encoding/decoding modes can be used while the transparency data in the image file is preserved, thereby ensuring quality and a display effect of the image file.
For example, the forms of the RGB data and the transparency data of the first image that are obtained through decoding by the decoding apparatus are as follows:
RGB RGB RGB RGB RGB RGB . . . RGB
A A A A A A . . . A
Therefore, the decoding apparatus combines the corresponding RGB data and transparency data, to obtain the RGBA data of the first image. A form of the RGBA data is as follows:
RGBA RGBA RGBA RGBA RGBA RGBA . . . RGBA
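This combination is the exact inverse of the separation performed during encoding; a minimal sketch (NumPy array shapes as in the earlier separation sketch):

    import numpy as np

    def combine_rgba(rgb: np.ndarray, alpha: np.ndarray) -> np.ndarray:
        """Interleave decoded RGB data and transparency data into RGBA.

        rgb: (H, W, 3); alpha: (H, W). Returns (H, W, 4) RGBA pixels.
        """
        return np.dstack([rgb, alpha]).astype(np.uint8)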
It should be noted that, the image file in this embodiment of this application includes the RGB data and the transparency data, and therefore the first stream data from which the RGB data can be generated and the second stream data from which the transparency data can be generated can be read by parsing the image file, and step 402 and step 403 are performed respectively. However, when the image file includes only the RGB data, the first stream data from which the RGB data can be generated can be read by parsing the image file, and step 402 is performed, to generate the RGB data, that is, decoding of the first stream data is completed.
In this embodiment of this application, the decoding apparatus decodes the first stream data according to the first video decoding mode, to generate the RGB data of the first image; decodes the second stream data according to the second video decoding mode, to generate the transparency data of the first image; and generates, according to the RGB data and the transparency data of the first image, the RGBA data corresponding to the first image. The first stream data and the second stream data in the image file are decoded respectively, to obtain the RGBA data, to use video encoding/decoding modes and preserve the transparency data in the image file, thereby ensuring quality of the image file.
Step 501: Obtain first stream data and second stream data that are generated from a first image corresponding to a kth frame in an image file in a dynamic format.
Specifically, a decoding apparatus running on the terminal device parses the image file in the dynamic format, to obtain, from a stream data segment of the image file, the first stream data and the second stream data that are generated from the first image corresponding to the kth frame. When the image file includes transparency data, the decoding apparatus obtains the first stream data and the second stream data that indicate the first image. The image file in the dynamic format includes at least two frames of images, and the kth frame may be any one of the at least two frames of images, where k is a positive integer greater than 0.
In some embodiments of this application, when the image file in the dynamic format includes RGB data and transparency data, the image file has information used to indicate stream data segments corresponding to different frames of images, so that the decoding apparatus can obtain the first stream data generated from the RGB data of the first image and the second stream data generated from the transparency data of the first image.
In some embodiments of this application, the decoding apparatus may perform decoding in an order of stream data corresponding to all frames in the image file in the dynamic format, that is, may first obtain and decode stream data corresponding to the first frame in the image file in the dynamic format. An order in which the decoding apparatus obtains the stream data, indicating all frames of images, of the image file in the dynamic format is not limited in this embodiment of this application.
In some embodiments of this application, the decoding apparatus may determine, by using image header information and frame header information of the image file, the stream data indicating an image corresponding to each frame. Refer to detailed descriptions of the image header information and the frame header information in a next embodiment.
Step 502: Decode the first stream data according to a first video decoding mode, to generate RGB data of the first image.
Specifically, the decoding apparatus decodes the first stream data according to the first video decoding mode, to generate the RGB data of the first image. In some embodiments of this application, the decoding apparatus decodes the first stream data according to the first video decoding mode, to generate first YUV data of the first image; and converts the first YUV data into the RGB data of the first image.
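The conversion from the first YUV data to the RGB data might look as follows, assuming 8-bit full-range YUV and BT.601 coefficients; the actual conversion matrix used by a given video decoding mode may differ.

import numpy as np

def yuv_to_rgb(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0   # center the chroma components
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)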
Step 503: Decode the second stream data according to a second video decoding mode, to generate transparency data of the first image.
Specifically, the decoding apparatus decodes the second stream data according to the second video decoding mode, to generate the transparency data of the first image. In some embodiments of this application, the decoding apparatus decodes the second stream data according to the second video decoding mode, to generate second YUV data of the first image; and converts the second YUV data into the transparency data of the first image. In some embodiments of this application, the decoding apparatus sets a Y component in the second YUV data as the transparency data of the first image, and discards a U component and a V component in the second YUV data.
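A minimal sketch of recovering the transparency data from the second YUV data, assuming each component is an (H, W) array: only the Y component is kept, and the U and V components are discarded, as described above.

import numpy as np

def yuv_to_alpha(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    # u and v carry no transparency information (the encoder left them
    # unset or filled them with preset data), so they are discarded.
    del u, v
    return y.copy()  # the Y component is the transparency data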
It should be noted that, step 502 and step 503 are not limited to a particular order during execution.
Step 504: Generate, according to the RGB data and the transparency data of the first image, RGBA data corresponding to the first image.
Specifically, the decoding apparatus generates, according to the RGB data and the transparency data of the first image, the RGBA data corresponding to the first image. The RGBA data is a color space representing Red, Green, Blue, and Alpha. The RGB data and the transparency data can be combined into the RGBA data. In this way, corresponding RGBA data can be generated, by using a corresponding video decoding mode, from stream data obtained by performing encoding according to a video encoding mode, to use the video encoding/decoding modes and preserve the transparency data in the image file, thereby ensuring quality and a display effect of the image file.
For example, forms of the RGB data and the transparency data of the first image that are obtained through decoding by the decoding apparatus are as follows:
RGB RGB RGB RGB RGB RGB . . . RGB
A A A A A A . . . A
Therefore, the decoding apparatus combines the corresponding RGB data and transparency data, to obtain the RGBA data of the first image. A form of the RGBA data is as follows:
RGBA RGBA RGBA RGBA RGBA RGBA . . . RGBA
Step 505: Determine whether the kth frame is the last frame in the image file in the dynamic format.
Specifically, the decoding apparatus determines whether the kth frame is the last frame in the image file in the dynamic format. In some embodiments of this application, whether decoding of the image file is completed may be determined by detecting a quantity of frames included in the image header information. If the kth frame is the last frame in the image file in the dynamic format, it indicates that decoding of the image file in the dynamic format is completed, and step 507 is performed. If the kth frame is not the last frame in the image file in the dynamic format, step 506 is performed.
Step 506: Update k if the kth frame is not the last frame in the image file in the dynamic format, and trigger execution of the operation of obtaining first stream data and second stream data of a first image corresponding to a kth frame in an image file in a dynamic format.
Specifically, if determining that the kth frame is not the last frame in the image file in the dynamic format, the decoding apparatus decodes stream data of an image corresponding to a next frame, that is, updates k by using a value of (k+1), and after updating k, triggers execution of the operation of obtaining first stream data and second stream data of a first image corresponding to a kth frame in an image file in a dynamic format.
It may be understood that the image obtained by using the updated k and the image obtained before k is updated do not correspond to the same frame. For ease of description, the image corresponding to the kth frame before k is updated is referred to as the first image, and the image corresponding to the kth frame after k is updated is referred to as the second image, to facilitate distinguishing between them.
When step 502 to step 504 are performed for the second image, in some embodiments of this application, stream data indicating the second image is third stream data and fourth stream data; the third stream data is decoded according to a third video decoding mode, to generate RGB data of the second image; the fourth stream data is decoded according to a fourth video decoding mode, to generate transparency data of the second image, where the third stream data is generated according to the RGB data of the second image, and the fourth stream data is generated according to the transparency data of the second image; and RGBA data corresponding to the second image is generated according to the RGB data and the transparency data of the second image.
For step 502 and step 503, the first video decoding mode, the second video decoding mode, the third video decoding mode, or the fourth video decoding mode is determined based on the video encoding mode used to generate the corresponding stream data. The first stream data is used as an example for description. If I-frame encoding is used for the first stream data, the first video decoding mode generates the RGB data from the current stream data alone; if P-frame encoding is used for the first stream data, the first video decoding mode generates the RGB data of the current frame from previously decoded data. For the other video decoding modes, refer to the descriptions of the first video decoding mode. Details are not described herein again.
It should be further noted that, the image file in the dynamic format includes a plurality of stream data segments. In some embodiments of this application, one frame of image corresponds to one stream data segment. Alternatively, in some other embodiments of this application, one piece of stream data corresponds to one stream data segment. Therefore, the stream data segment from which the first stream data and the second stream data are read is different from the stream data segment from which the third stream data and the fourth stream data are read.
Step 507: If the kth frame is the last frame in the image file in the dynamic format, complete decoding of the image file in the dynamic format.
Specifically, if the decoding apparatus determines that the kth frame is the last frame in the image file in the dynamic format, it indicates that decoding of the image file in the dynamic format is completed.
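The per-frame loop of steps 501 to 507 can be summarized by the following sketch. The four callables are hypothetical stand-ins for the operations described above (reading the two streams of the kth frame, decoding them, and combining the results), and frame_count would come from the image header information.

def decode_dynamic_image(read_streams, decode_rgb, decode_alpha, combine,
                         frame_count):
    frames = []
    for k in range(1, frame_count + 1):
        first, second = read_streams(k)      # step 501
        rgb = decode_rgb(first)              # step 502
        alpha = decode_alpha(second)         # step 503
        frames.append(combine(rgb, alpha))   # step 504
        # steps 505/506: the loop bound plays the role of the
        # last-frame check and of updating k to k + 1
    return frames                            # step 507: decoding completed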
In some embodiments of this application, the decoding apparatus may parse the image file, to obtain image header information and frame header information of the image file in the dynamic format. In this way, whether the image file includes the transparency data may be determined by using the image header information, and then whether to obtain only the first stream data generated from the RGB data or obtain the first stream data generated from the RGB data and the second stream data generated from the transparency data in a decoding process may be determined.
It should be noted that, the image corresponding to each frame in the image file in the dynamic format in this embodiment of this application is represented by RGBA data including RGB data and transparency data. However, when the image corresponding to each frame includes only RGB data, the stream data indicating each frame of image is only the first stream data, and therefore the decoding apparatus may perform step 502 on the first stream data indicating each frame of image, to generate the RGB data. In this way, stream data including only the RGB data can still be decoded by using a video decoding mode.
In this embodiment of this application, when determining that the image file in the dynamic format includes the RGB data and the transparency data, the decoding apparatus decodes, according to the first video decoding mode, the first stream data indicating each frame of image, to generate the RGB data of the first image; decodes, according to the second video decoding mode, the second stream data indicating each frame of image, to generate the transparency data of the first image; and generates, according to the RGB data and the transparency data of the first image, the RGBA data corresponding to the first image. The first stream data and the second stream data in the image file are decoded respectively, to obtain the RGBA data, to use video encoding/decoding modes and preserve the transparency data in the image file, thereby ensuring quality of the image file.
Step 601: Parse an image file, to obtain image header information and frame header information of the image file.
Specifically, a decoding apparatus running on the terminal device parses the image file, to obtain the image header information and the frame header information of the image file. The image header information includes image feature information indicating whether there is transparency data in the image file; whether the image file includes the transparency data may be determined by using the image feature information, to determine how to obtain stream data and whether the obtained stream data includes second stream data generated from the transparency data. The frame header information is used to indicate a stream data segment of the image file, and the stream data segment from which the stream data can be obtained may be determined by using the frame header information, thereby decoding the stream data. For example, the frame header information includes a frame header information start code, and the stream data segment can be determined by identifying the frame header information start code.
In some embodiments of this application, the parsing, by the decoding apparatus, the image file, to obtain the image header information of the image file may be specifically: reading the image header information of the image file from an image header information data segment of the image file.
In some embodiments of this application, the parsing, by the decoding apparatus, the image file, to obtain the frame header information of the image file may be specifically: reading the frame header information of the image file from a frame header information data segment of the image file.
It should be noted that, for the image header information and the frame header information in this embodiment of this application, refer to the exemplary descriptions in the accompanying drawings. Details are not described herein again.
Step 602: Read stream data from a stream data segment indicated by the frame header information of the image file.
Specifically, if determining, by using the image feature information, that the image file includes the transparency data, the decoding apparatus reads the stream data from the stream data segment indicated by the frame header information of the image file. The stream data includes first stream data and second stream data.
In some embodiments of this application, one frame of image in the image file corresponds to one piece of frame header information, that is, the frame header information may be used to indicate the stream data segment including the first stream data and the second stream data. Specifically, when the image file is the image file in the static format, the image file in the static format includes one frame of image, namely, the first image, and therefore, the image file in the static format includes one piece of frame header information. When the image file is the image file in the dynamic format, the image file in the dynamic format usually includes at least two frames of images, and each of the at least two frames of images has one piece of frame header information. If determining that the image file includes the transparency data, the decoding apparatus reads the first stream data and the second stream data according to the stream data segment indicated by the frame header information.
In some other embodiments of this application, one piece of stream data in one frame of image in the image file corresponds to one piece of frame header information, that is, a stream data segment indicated by one piece of frame header information includes one piece of stream data. Specifically, in a case of the image file in the static format, the image file in the static format includes one frame of image, namely, the first image, and the first image including the transparency data corresponds to two pieces of stream data that are respectively the first stream data and the second stream data. Therefore, the first stream data in the image file in the static format corresponds to one piece of frame header information, and the second stream data corresponds to the other piece of frame header information. In a case of the image file in the dynamic format, the image file in the dynamic format includes at least two frames of images, each frame of image including transparency data corresponds to two pieces of stream data that are respectively the first stream data and the second stream data, and one piece of frame header information is added to each of the first stream data and the second stream data of each frame of image. Therefore, if determining that the image file includes the transparency data, the decoding apparatus respectively obtains the first stream data and the second stream data according to two stream data segments respectively indicated by two pieces of frame header information.
It should be noted that, when one piece of stream data in one frame of image in the image file corresponds to one piece of frame header information, an encoding apparatus may arrange, in a preset order, the frame header information data segment and the first stream data segment that correspond to the first stream data, and the frame header information data segment and the second stream data segment that correspond to the second stream data, and the decoding apparatus may determine the arrangement order used by the encoding apparatus. For example, for one frame of image, the segments may be arranged in the following order: the frame header information data segment corresponding to the first stream data, the first stream data segment, the frame header information data segment corresponding to the second stream data, and the second stream data segment. In this way, in a decoding process, the decoding apparatus can determine, among the stream data segments indicated by the two pieces of frame header information that indicate the frame of image, the stream data segment from which the first stream data can be obtained and the stream data segment from which the second stream data can be obtained. It may be understood that, herein, the first stream data is stream data generated from the RGB data, and the second stream data is stream data generated from the transparency data.
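The following sketch reads the two streams of one frame under the arrangement just described (frame header, first stream, frame header, second stream). The 4-byte start code value and the 4-byte big-endian length field after it are assumptions chosen for illustration, not the layout defined by this application.

import struct

FRAME_HEADER_START_CODE = b"\x00\x00\x01\xF0"  # hypothetical start code

def read_frame_streams(buf: bytes, offset: int):
    """Return (first_stream, second_stream, new_offset) for one frame."""
    streams = []
    for _ in range(2):  # first the RGB stream, then the transparency stream
        if buf[offset:offset + 4] != FRAME_HEADER_START_CODE:
            raise ValueError("frame header start code not found")
        (length,) = struct.unpack_from(">I", buf, offset + 4)  # stream length
        start = offset + 8
        streams.append(buf[start:start + length])
        offset = start + length
    return streams[0], streams[1], offset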
Step 603: Decode the first stream data according to a first video decoding mode, to generate RGB data of the first image.
Step 604: Decode the second stream data according to a second video decoding mode, to generate transparency data of the first image.
Step 605: Generate, according to the RGB data and the transparency data of the first image, RGBA data corresponding to the first image.
For step 603 to step 605, refer to the detailed descriptions of the corresponding steps in the foregoing embodiments. Details are not described herein again.
In this embodiment of this application, when the image file includes the RGB data and the transparency data, the decoding apparatus parses the image file, to obtain the image header information and the frame header information of the image file, and reads the stream data in the stream data segment indicated by the frame header information of the image file; decodes, according to the first video decoding mode, the first stream data indicating each frame of image, to generate the RGB data of the first image; decodes, according to the second video decoding mode, the second stream data indicating each frame of image, to generate the transparency data of the first image; and generates, according to the RGB data and the transparency data of the first image, the RGBA data corresponding to the first image. The first stream data and the second stream data in the image file are decoded respectively, to obtain the RGBA data, to use video encoding/decoding modes and preserve the transparency data in the image file, thereby ensuring quality of the image file.
Step 701: Generate image header information and frame header information that correspond to an image file.
Specifically, an image file processing apparatus running on the terminal device generates the image header information and the frame header information that correspond to the image file. The image file may be an image file in a static format, that is, includes only the first image; or the image file is an image file in a dynamic format, that is, includes the first image and another image. Regardless of whether the image file is the image file in the static format or the image file in the dynamic format, the image file processing apparatus needs to generate the image header information corresponding to the image file. The image header information includes image feature information indicating whether there is transparency data in the image file, so that a decoding apparatus determines, by using the image feature information, whether the image file includes the transparency data, to determine how to obtain stream data and whether the obtained stream data includes the second stream data generated from the transparency data.
Further, the frame header information is used to indicate a stream data segment of the image file, so that the decoding apparatus determines, by using the frame header information, the stream data segment from which the stream data can be obtained, thereby decoding the stream data. For example, the frame header information includes a frame header information start code, and the stream data segment can be determined by identifying the frame header information start code.
Step 702: Write the image header information into an image header information data segment of the image file.
Specifically, the image file processing apparatus writes the image header information into the image header information data segment of the image file.
Step 703: Write the frame header information into a frame header information data segment of the image file.
Specifically, the image file processing apparatus writes the frame header information into the frame header information data segment of the image file.
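A minimal sketch of steps 701 to 703, using the same hypothetical layout as the parsing sketch earlier: the image header carries a transparency flag and a frame count, and each frame header is a start code followed by the length of its stream data segment. All field layouts here are assumptions, not those defined by this application.

import struct

HAS_ALPHA = 0x01  # hypothetical image feature flag: transparency present

def build_image_header(frame_count: int, has_alpha: bool) -> bytes:
    flags = HAS_ALPHA if has_alpha else 0
    # hypothetical layout: 4-byte magic, 1-byte flags, 4-byte frame count
    return b"IMGX" + struct.pack(">BI", flags, frame_count)

def build_frame_header(stream_length: int) -> bytes:
    # hypothetical layout: 4-byte start code, 4-byte stream data length
    return b"\x00\x00\x01\xF0" + struct.pack(">I", stream_length)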
Step 704: If it is determined, according to image feature information included in the image header information, that the image file includes transparency data, encode, according to a first video encoding mode, RGB data included in RGBA data corresponding to the first image, to generate first stream data, and encode, according to a second video encoding mode, transparency data included in the RGBA data corresponding to the first image, to generate second stream data.
Specifically, if determining that the first image in the image file includes the transparency data, the image file processing apparatus encodes, according to the first video encoding mode, the RGB data included in the RGBA data corresponding to the first image, to generate the first stream data, and encodes, according to the second video encoding mode, the transparency data included in the RGBA data corresponding to the first image, to generate the second stream data.
In some embodiments of this application, after obtaining the RGBA data corresponding to the first image in the image file, the image file processing apparatus separates the RGBA data to obtain the RGB data and the transparency data of the first image.
The RGB data is color data included in the RGBA data, and the transparency data is transparency data included in the RGBA data. Further, the RGB data and the transparency data are encoded respectively. For a specific encoding process, refer to the detailed descriptions of the foregoing embodiments. Details are not described herein again.
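The separation step itself is straightforward; a sketch with NumPy, assuming interleaved (H, W, 4) RGBA input:

import numpy as np

def separate_rgba(rgba: np.ndarray):
    """(H, W, 4) uint8 RGBA -> ((H, W, 3) RGB data, (H, W) transparency data)."""
    rgb = rgba[..., :3].copy()   # color data comprised in the RGBA data
    alpha = rgba[..., 3].copy()  # transparency data comprised in the RGBA data
    return rgb, alpha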
Step 705: Write the first stream data and the second stream data into a stream data segment indicated by frame header information corresponding to the first image.
Specifically, the image file processing apparatus writes the first stream data and the second stream data into the stream data segment indicated by the frame header information corresponding to the first image.
It should be noted that, for the image header information and the frame header information in this embodiment of this application, refer to the exemplary descriptions in the accompanying drawings. Details are not described herein again.
It should be further noted that, in this embodiment of this application, the RGBA data input before encoding may be obtained by decoding image files in various formats. A format of an image file may be any one of formats such as JPEG, BMP, PNG, APNG, and GIF. A format of the image file before encoding is not limited in this embodiment of this application.
In this embodiment of this application, the image file processing apparatus generates the image header information and the frame header information that correspond to the image file. The decoding apparatus can determine, by using the image feature information that is included in the image header information and that indicates whether there is transparency data in the image file, how to obtain stream data and whether the obtained stream data includes the second stream data generated from the transparency data. The decoding apparatus can obtain the stream data in the stream data segment by using the stream data segment of the image file that is indicated by the frame header information, thereby decoding the stream data.
Step 801: Parse an image file, to obtain image header information and frame header information of the image file.
Specifically, an image file processing apparatus running on the terminal device parses the image file, to obtain the image header information and the frame header information of the image file. The image header information includes image feature information indicating whether there is transparency data in the image file; whether the image file includes the transparency data may be determined by using the image feature information, to determine how to obtain stream data and whether the obtained stream data includes second stream data generated from the transparency data. The frame header information is used to indicate a stream data segment of the image file, and the stream data segment from which the stream data can be obtained may be determined by using the frame header information, thereby decoding the stream data. For example, the frame header information includes a frame header information start code, and the stream data segment can be determined by identifying the frame header information start code.
In some embodiments of this application, the parsing, by the image file processing apparatus, the image file, to obtain the image header information of the image file may be specifically: reading the image header information of the image file from an image header information data segment of the image file.
In some embodiments of this application, the parsing, by the image file processing apparatus, the image file, to obtain the frame header information of the image file may be specifically: reading the frame header information of the image file from a frame header information data segment of the image file.
It should be noted that, for the image header information and the frame header information in this embodiment of this application, refer to the exemplary descriptions in the accompanying drawings. Details are not described herein again.
Step 802: Read, if it is determined, by using image feature information, that the image file includes transparency data, stream data from a stream data segment indicated by the frame header information of the image file, where the stream data includes first stream data and second stream data.
Specifically, if determining, by using the image feature information, that the image file includes the transparency data, the image file processing apparatus reads the stream data from the stream data segment indicated by the frame header information of the image file. The stream data includes the first stream data and the second stream data.
In some embodiments of this application, one frame of image in the image file corresponds to one piece of frame header information, that is, the frame header information may be used to indicate the stream data segment including the first stream data and the second stream data. Specifically, when the image file is the image file in the static format, the image file in the static format includes one frame of image, namely, the first image, and therefore, the image file in the static format includes one piece of frame header information. When the image file is the image file in the dynamic format, the image file in the dynamic format usually includes at least two frames of images, and one piece of frame header information is added to each of the at least two frames of images. If determining that the image file includes the transparency data, the image file processing apparatus reads the first stream data and the second stream data according to the stream data segment indicated by the frame header information.
In some other embodiments of this application, one piece of stream data in one frame of image in the image file corresponds to one piece of frame header information, that is, a stream data segment indicated by one piece of frame header information includes one piece of stream data. Specifically, in a case of the image file in the static format, the image file in the static format includes one frame of image, namely, the first image, and the first image including the transparency data corresponds to two pieces of stream data that are respectively the first stream data and the second stream data. Therefore, the first stream data in the image file in the static format corresponds to one piece of frame header information, and the second stream data corresponds to the other piece of frame header information. In a case of the image file in the dynamic format, the image file in the dynamic format includes at least two frames of images, each frame of image including transparency data corresponds to two pieces of stream data that are respectively the first stream data and the second stream data, and one piece of frame header information is added to each of the first stream data and the second stream data of each frame of image. Therefore, if determining that the image file includes the transparency data, the image file processing apparatus respectively obtains the first stream data and the second stream data according to two stream data segments respectively indicated by two pieces of frame header information.
It should be noted that, when one piece of stream data in one frame of image in the image file corresponds to one piece of frame header information, an encoding apparatus may arrange, in a preset order, the frame header information data segment and the first stream data segment that correspond to the first stream data, and the frame header information data segment and the second stream data segment that correspond to the second stream data, and the image file processing apparatus may determine the arrangement order used by the encoding apparatus. For example, for one frame of image, the segments may be arranged in the following order: the frame header information data segment corresponding to the first stream data, the first stream data segment, the frame header information data segment corresponding to the second stream data, and the second stream data segment. In this way, in a decoding process, the image file processing apparatus can determine, among the stream data segments indicated by the two pieces of frame header information that indicate the frame of image, the stream data segment from which the first stream data can be obtained and the stream data segment from which the second stream data can be obtained. It may be understood that, herein, the first stream data is stream data generated from the RGB data, and the second stream data is stream data generated from the transparency data.
Step 803: Decode the first stream data and the second stream data respectively.
Specifically, after the image file processing apparatus obtains the first stream data and the second stream data from the stream data segment, the image file processing apparatus decodes the first stream data and the second stream data respectively.
It should be noted that, the image file processing apparatus may decode the first stream data and the second stream data with reference to the execution process of the decoding apparatus in the foregoing embodiments. Details are not described herein again.
In this embodiment of this application, the image file processing apparatus may parse the image file, to obtain the image header information and the frame header information, and can determine, by using the image feature information that is included in the image header information and that indicates whether there is transparency data in the image file, how to obtain the stream data and whether the obtained stream data includes the second stream data generated from the transparency data; and obtains the stream data in the stream data segment by using the stream data segment of the image file that is indicated by the frame header information, thereby decoding the stream data.
The data obtaining module 11 is configured to: obtain RGBA data corresponding to a first image in an image file, and separate the RGBA data to obtain RGB data and transparency data of the first image, where the RGB data is color data included in the RGBA data, and the transparency data is transparency data included in the RGBA data.
The first encoding module 12 is configured to encode the RGB data of the first image according to a first video encoding mode, to generate first stream data.
The second encoding module 13 is configured to encode the transparency data of the first image according to a second video encoding mode, to generate second stream data.
The data writing module 14 is configured to write the first stream data and the second stream data into a stream data segment of the image file, where the first image is an image included in the image file.
In some embodiments of this application, as shown in the accompanying drawings, the first encoding module 12 includes a first data conversion unit 121 and a first stream generation unit 122.
The first data conversion unit 121 is configured to convert the RGB data of the first image into first YUV data.
The first stream generation unit 122 is configured to encode the first YUV data according to the first video encoding mode, to generate the first stream data.
In some embodiments of this application, as shown in the accompanying drawings, the second encoding module 13 includes a second data conversion unit 131 and a second stream generation unit 132.
The second data conversion unit 131 is configured to convert the transparency data of the first image into second YUV data.
The second stream generation unit 132 is configured to encode the second YUV data according to the second video encoding mode, to generate the second stream data.
In some embodiments of this application, the second data conversion unit 131 is configured to: set the transparency data of the first image as a Y component in the second YUV data, and skip setting a U component and a V component in the second YUV data. Alternatively, the second data conversion unit 131 is configured to: set the transparency data of the first image as a Y component in the second YUV data, and set a U component and a V component in the second YUV data as preset data.
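A sketch of the second variant of the second data conversion unit 131, assuming single-plane (H, W) data and 128 (the chroma midpoint) as the preset value; the preset value is an illustrative assumption, and a real converter might also subsample the U and V planes.

import numpy as np

def alpha_to_yuv(alpha: np.ndarray, preset: int = 128):
    y = alpha.copy()                                   # alpha as the Y component
    u = np.full(alpha.shape, preset, dtype=np.uint8)   # U set to preset data
    v = np.full(alpha.shape, preset, dtype=np.uint8)   # V set to preset data
    return y, u, v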
In some embodiments of this application, the data obtaining module 11 is configured to: determine, if the image file is an image file in a dynamic format and the first image is an image corresponding to a kth frame in the image file, whether the kth frame is the last frame in the image file, where k is a positive integer greater than 0; and obtain, if the kth frame is not the last frame in the image file, RGBA data corresponding to a second image corresponding to a (k+1)th frame in the image file, and separate the RGBA data corresponding to the second image to obtain RGB data and transparency data of the second image.
The first encoding module 12 is further configured to encode the RGB data of the second image according to a third video encoding mode, to generate third stream data.
The second encoding module 13 is further configured to encode the transparency data of the second image according to a fourth video encoding mode, to generate fourth stream data.
The data writing module 14 is further configured to write the third stream data and the fourth stream data into a stream data segment of the image file.
In some embodiments of this application, as shown in the accompanying drawings, the encoding apparatus 1 further includes:
an information generation module 15, configured to generate image header information and frame header information that correspond to the image file, where the image header information includes image feature information indicating whether there is transparency data in the image file, and the frame header information is used to indicate the stream data segment of the image file.
In some embodiments of this application, the data writing module 14 is further configured to write the image header information generated by the information generation module 15 into an image header information data segment of the image file.
In some embodiments of this application, the data writing module 14 is further configured to write the frame header information generated by the information generation module 15 into a frame header information data segment of the image file.
It should be noted that, the operations performed by the modules and units of the encoding apparatus 1 described in this embodiment of this application, and the beneficial effects brought by them, may be specifically implemented according to the methods in the foregoing method embodiments. Details are not described herein again.
In the encoding apparatus 1000 shown in the accompanying drawings, the processor 1001 performs the following steps:
obtaining RGBA data corresponding to a first image in an image file, and separating the RGBA data to obtain RGB data and transparency data of the first image, where the RGB data is color data included in the RGBA data, and the transparency data is transparency data included in the RGBA data;
encoding the RGB data of the first image according to a first video encoding mode, to generate first stream data;
encoding the transparency data of the first image according to a second video encoding mode, to generate second stream data; and
writing the first stream data and the second stream data into a stream data segment of the image file.
In an embodiment, when encoding the RGB data of the first image according to the first video encoding mode, to generate the first stream data, the processor 1001 specifically performs the following operations:
converting the RGB data of the first image into first YUV data; and encoding the first YUV data according to the first video encoding mode, to generate the first stream data.
In an embodiment, when encoding the transparency data of the first image according to the second video encoding mode, to generate the second stream data, the processor 1001 specifically performs the following operations:
converting the transparency data of the first image into second YUV data; and
encoding the second YUV data according to the second video encoding mode, to generate the second stream data.
In an embodiment, when converting the transparency data of the first image into the second YUV data, the processor 1001 specifically performs the following operations:
setting the transparency data of the first image as a Y component in the second YUV data, and skipping setting a U component and a V component in the second YUV data;
or setting the transparency data of the first image as a Y component in the second YUV data, and setting a U component and a V component in the second YUV data as preset data.
In an embodiment, the processor 1001 further performs the following steps:
determining, if the image file is an image file in a dynamic format and the first image is an image corresponding to a kth frame in the image file, whether the kth frame is the last frame in the image file, where k is a positive integer greater than 0; and obtaining, if the kth frame is not the last frame in the image file, RGBA data corresponding to a second image corresponding to a (k+1)th frame in the image file, and separating the RGBA data corresponding to the second image to obtain RGB data and transparency data of the second image;
encoding the RGB data of the second image according to a third video encoding mode, to generate third stream data;
encoding the transparency data of the second image according to a fourth video encoding mode, to generate fourth stream data; and
writing the third stream data and the fourth stream data into a stream data segment of the image file.
In an embodiment, the processor 1001 further performs the following step:
generating image header information and frame header information that correspond to the image file, where the image header information includes image feature information indicating whether there is transparency data in the image file, and the frame header information is used to indicate the stream data segment of the image file.
In an embodiment, the processor 1001 further performs the following step:
writing the image header information into an image header information data segment of the image file.
In an embodiment, the processor 1001 further performs the following step:
writing the frame header information into a frame header information data segment of the image file.
It should be noted that, the steps performed by the processor 1001 described in this embodiment of this application, and the beneficial effects brought by them, may be specifically implemented according to the methods in the foregoing method embodiments. Details are not described herein again.
The first data obtaining module 26 is configured to obtain, from a stream data segment of an image file, first stream data and second stream data that are generated from a first image in the image file.
The first decoding module 21 is configured to decode the first stream data according to a first video decoding mode, to generate RGB data of the first image.
The second decoding module 22 is configured to decode the second stream data according to a second video decoding mode, to generate transparency data of the first image.
The data generation module 23 is configured to generate, according to the RGB data and the transparency data of the first image, RGBA data corresponding to the first image.
In some embodiments of this application, as shown in the accompanying drawings, the first decoding module 21 includes a first data generation unit 211 and a first data conversion unit 212.
The first data generation unit 211 is configured to decode the first stream data according to the first video decoding mode, to generate first YUV data of the first image.
The first data conversion unit 212 is configured to convert the first YUV data into the RGB data of the first image.
In some embodiments of this application, as shown in the accompanying drawings, the second decoding module 22 includes a second data generation unit 221 and a second data conversion unit 222.
The second data generation unit 221 is configured to decode the second stream data according to the second video decoding mode, to generate second YUV data of the first image.
The second data conversion unit 222 is configured to convert the second YUV data into the transparency data of the first image.
In some embodiments of this application, the second data conversion unit 222 is specifically configured to: set a Y component in the second YUV data as the transparency data of the first image, and discard a U component and a V component in the second YUV data.
In some embodiments of this application, as shown in the accompanying drawings, the decoding apparatus 2 further includes:
a second data obtaining module 24, configured to: determine, if the image file is an image file in a dynamic format and the first image is an image corresponding to a kth frame in the image file in the dynamic format, whether the kth frame is the last frame in the image file, where k is a positive integer greater than 0; and obtain, if the kth frame is not the last frame in the image file, from the stream data segment of the image file, third stream data and fourth stream data that are generated from a second image corresponding to a (k+1)th frame in the image file.
The first decoding module 21 is further configured to decode the third stream data according to a third video decoding mode, to generate RGB data of the second image.
The second decoding module 22 is further configured to decode the fourth stream data according to a fourth video decoding mode, to generate transparency data of the second image.
The data generation module 23 is further configured to generate, according to the RGB data and the transparency data of the second image, RGBA data corresponding to the second image.
In some embodiments of this application, as shown in the accompanying drawings, the decoding apparatus 2 further includes a file parsing module 25.
The file parsing module 25 is configured to parse the image file, to obtain image header information and frame header information of the image file, where the image header information includes image feature information indicating whether there is transparency data in the image file, and the frame header information is used to indicate the stream data segment of the image file.
In some embodiments of this application, the file parsing module 25 is specifically configured to read the image header information of the image file from an image header information data segment of the image file.
In some embodiments of this application, the file parsing module 25 is specifically configured to read the frame header information of the image file from a frame header information data segment of the image file.
In some embodiments of this application, the first data obtaining module 26 is configured to read, if it is determined, by using the image feature information, that the image file includes transparency data, stream data from a stream data segment indicated by the frame header information of the image file, where the stream data includes first stream data and second stream data.
It should be noted that, the operations performed by the modules and units of the decoding apparatus 2 described in this embodiment of this application, and the beneficial effects brought by them, may be specifically implemented according to the methods in the foregoing method embodiments. Details are not described herein again.
In the decoding apparatus 2000 shown in the accompanying drawings, the processor 2001 performs the following steps:
obtaining, from a stream data segment of an image file, first stream data and second stream data that are generated from a first image in the image file;
decoding the first stream data according to a first video decoding mode, to generate RGB data of the first image;
decoding the second stream data according to a second video decoding mode, to generate transparency data of the first image; and
generating, according to the RGB data and the transparency data of the first image, RGBA data corresponding to the first image, where the first stream data and the second stream data are data that is generated from the first image and that is read from a stream data segment of the image file.
In an embodiment, when decoding the first stream data according to the first video decoding mode, to generate the RGB data of the first image, the processor 2001 specifically performs the following operations:
decoding the first stream data according to the first video decoding mode, to generate first YUV data of the first image; and converting the first YUV data into the RGB data of the first image.
In an embodiment, when decoding the second stream data according to the second video decoding mode, to generate the transparency data of the first image, the processor 2001 specifically performs the following operations:
decoding the second stream data according to the second video decoding mode, to generate second YUV data of the first image; and converting the second YUV data into the transparency data of the first image.
In an embodiment, when converting the second YUV data into the transparency data of the first image, the processor 2001 specifically performs the following operation:
setting a Y component in the second YUV data as the transparency data of the first image, and discarding a U component and a V component in the second YUV data.
In an embodiment, the processor 2001 further performs the following steps:
determining, if the image file is an image file in a dynamic format and the first image is an image corresponding to a kth frame in the image file in the dynamic format, whether the kth frame is the last frame in the image file, where k is a positive integer greater than 0; and obtaining, if the kth frame is not the last frame in the image file, from the stream data segment of the image file, third stream data and fourth stream data that are generated from a second image corresponding to a (k+1)th frame in the image file;
decoding the third stream data according to a third video decoding mode, to generate RGB data of the second image;
decoding the fourth stream data according to a fourth video decoding mode, to generate transparency data of the second image; and
generating, according to the RGB data and the transparency data of the second image, RGBA data corresponding to the second image.
In an embodiment, before decoding the first stream data according to the first video decoding mode, to generate the RGB data of the first image, the processor 2001 further performs the following step:
parsing the image file, to obtain image header information and frame header information of the image file, where the image header information includes image feature information indicating whether there is transparency data in the image file, and the frame header information is used to indicate the stream data segment of the image file.
In an embodiment, when parsing the image file, to obtain the image header information of the image file, the processor 2001 specifically performs the following operation:
reading the image header information of the image file from an image header information data segment of the image file.
In an embodiment, when parsing the image file, to obtain the frame header information of the image file, the processor 2001 specifically performs the following operation:
reading the frame header information of the image file from a frame header information data segment of the image file.
In an embodiment, the processor 2001 further performs the following step:
reading, if it is determined, by using the image feature information, that the image file includes transparency data, stream data from a stream data segment indicated by the frame header information of the image file, where the stream data includes first stream data and second stream data.
It should be noted that, the steps performed by the processor 2001 described in this embodiment of this application, and the beneficial effects brought by them, may be specifically implemented according to the methods in the foregoing method embodiments. Details are not described herein again.
The information generation module 31 is configured to generate image header information and frame header information that correspond to an image file, where the image header information includes image feature information indicating whether there is transparency data in the image file, and the frame header information is used to indicate a stream data segment of the image file.
In some embodiments of this application, the image file processing apparatus 3 further includes:
a first information writing module 32, configured to write the image header information into an image header information data segment of the image file.
The image file processing apparatus 3 further includes a second information writing module 33.
The second information writing module 33 is configured to write the frame header information into a frame header information data segment of the image file.
The image file processing apparatus 3 further includes a data encoding module 34 and a data writing module 35.
The data encoding module 34 is configured to: if it is determined, according to the image feature information, that the image file includes the transparency data, encode RGB data included in RGBA data corresponding to a first image included in the image file, to generate first stream data, and encode transparency data included in the RGBA data, to generate second stream data.
The data writing module 35 is configured to write the first stream data and the second stream data into a stream data segment indicated by frame header information corresponding to the first image.
It should be noted that, the operations performed by the modules of the image file processing apparatus 3 described in this embodiment of this application, and the beneficial effects brought by them, may be specifically implemented according to the method in the foregoing method embodiment. Details are not described herein again.
In some embodiments of this application, the image file processing apparatus 3000 includes a user interface 3003. The user interface 3003 may include a display screen (Display) 30031 and a keyboard 30032, as shown in the accompanying drawings.
In the image file processing apparatus 3000 shown in the accompanying drawings, the processor 3001 performs the following step:
generating image header information and frame header information that correspond to an image file, where the image header information includes image feature information indicating whether there is transparency data in the image file, and the frame header information is used to indicate a stream data segment of the image file.
In an embodiment, the processor 3001 further performs the following step:
writing the image header information into an image header information data segment of the image file.
In an embodiment, the processor 3001 further performs the following step:
writing the frame header information into a frame header information data segment of the image file.
In an embodiment, the processor 3001 further performs the following steps:
encoding, if it is determined, according to the image feature information, that the image file includes the transparency data, RGB data included in RGBA data corresponding to a first image included in the image file, to generate first stream data, and encoding the included transparency data, to generate second stream data; and writing the first stream data and the second stream data into a stream data segment indicated by frame header information corresponding to the first image.
It should be noted that, the steps performed by the processor 3001 described in this embodiment of this application, and the beneficial effects brought by them, may be specifically implemented according to the method in the foregoing method embodiment. Details are not described herein again.
The file parsing module 41 is configured to parse an image file, to obtain image header information and frame header information of the image file, where the image header information includes image feature information indicating whether there is transparency data in the image file, and the frame header information is used to indicate a stream data segment of the image file.
In some embodiments of this application, the file parsing module 41 is specifically configured to read the image header information of the image file from an image header information data segment of the image file.
In some embodiments of this application, the file parsing module 41 is specifically configured to read the frame header information of the image file from a frame header information data segment of the image file.
In some embodiments of this application, the image file processing apparatus 4 further includes a data reading module 42 and a data decoding module 43.
The data reading module 42 is configured to read, if it is determined, by using the image feature information, that the image file includes transparency data, stream data from a stream data segment indicated by the frame header information of the image file, where the stream data includes first stream data and second stream data.
The data decoding module 43 is configured to decode the first stream data and the second stream data respectively.
It should be noted that, the operations performed by the modules of the image file processing apparatus 4 described in this embodiment of this application, and the beneficial effects brought by them, may be specifically implemented according to the method in the foregoing method embodiment. Details are not described herein again.
In the image file processing apparatus 4000 shown in the accompanying drawings, the processor 4001 performs the following step:
parsing an image file, to obtain image header information and frame header information of the image file, where the image header information includes image feature information indicating whether there is transparency data in the image file, and the frame header information is used to indicate a stream data segment of the image file.
In an embodiment, when parsing the image file, to obtain the image header information of the image file, the processor 4001 specifically performs the following operation:
reading the image header information of the image file from an image header information data segment of the image file.
In an embodiment, when parsing the image file, to obtain the frame header information of the image file, the processor 4001 specifically performs the following operation:
reading the frame header information of the image file from a frame header information data segment of the image file.
In an embodiment, the processor 4001 further performs the following steps:
reading, if it is determined, by using the image feature information, that the image file includes the transparency data, stream data from a stream data segment indicated by the frame header information of the image file, where the stream data includes first stream data and second stream data; and decoding the first stream data and the second stream data respectively.
It should be noted that, the steps performed by the processor 4001 described in this embodiment of this application, and the beneficial effects brought by them, may be specifically implemented according to the method in the foregoing method embodiment. Details are not described herein again.
In some embodiments of this application, the encoding device 5001 may be the encoding apparatus described in the foregoing embodiments.
In some other embodiments of this application, the encoding device 5001 may be the image file processing apparatus described in the foregoing embodiments.
The encoding apparatus, the decoding apparatus, the image file processing apparatus, and the terminal device in the embodiments of this application may include devices such as a tablet computer, a mobile phone, an electronic reader, a PC, a notebook computer, an in-vehicle device, a network television, and a wearable device. This is not limited in the embodiments of this application.
Further, the encoding device 5001 and the decoding device 5002 in the embodiments of this application are described in detail with reference to the accompanying drawings.
During specific implementation, for an image file in a static format, first, the encoding module 6000 receives input RGBA data of the image file, and divides the RGBA data into RGB data and transparency data by using the RGB data and transparency data separation submodule 6001; then the first video encoding mode submodule 6002 encodes the RGB data according to a first video encoding mode, to generate first stream data; next, the second video encoding mode submodule 6003 encodes the transparency data according to a second video encoding mode, to generate second stream data; and subsequently, the image header information and frame header information encapsulation submodule 6004 generates image header information and frame header information of the image file, writes the first stream data, the second stream data, the frame header information, and the image header information into corresponding data segments, and then generates compressed image data corresponding to the RGBA data.
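As a rough illustration of this static-format flow, the sketch below separates the RGBA channels with numpy and delegates to placeholder encoders. encode_rgb and encode_alpha stand in for the first and second video encoding modes, and the header layout, including the b"IMGX" magic value, is a hypothetical assumption consistent with the earlier parsing sketch.

```python
import struct
import numpy as np

def encode_static(rgba: np.ndarray, encode_rgb, encode_alpha) -> bytes:
    """Static-format encoding flow (sketch with placeholder encoders).

    rgba is an (H, W, 4) uint8 array; encode_rgb / encode_alpha stand in
    for the first and second video encoding modes.
    """
    # Separation: RGB data is the color part, transparency data the A plane.
    rgb_data = rgba[:, :, :3]
    transparency_data = rgba[:, :, 3]

    first_stream = encode_rgb(rgb_data)              # first video encoding mode
    second_stream = encode_alpha(transparency_data)  # second video encoding mode

    # Encapsulation: image header, frame header, then both streams.
    feature_flags = 0x01  # image feature information: transparency present
    image_header = struct.pack(">4sBI", b"IMGX", feature_flags, 1)
    frame_header = struct.pack(">II", len(first_stream), len(second_stream))
    return image_header + frame_header + first_stream + second_stream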
For an image file in a dynamic format, first, the encoding module 6000 determines a quantity of included frames, and then divides each frame of RGBA data into RGB data and transparency data by using the RGB data and transparency data separation submodule 6001; the first video encoding mode submodule 6002 encodes the RGB data according to a first video encoding mode, to generate first stream data; the second video encoding mode submodule 6003 encodes the transparency data according to a second video encoding mode, to generate second stream data; the image header information and frame header information encapsulation submodule 6004 generates frame header information corresponding to each frame, and writes each piece of stream data and frame header information into corresponding data segments; and finally, the image header information and frame header information encapsulation submodule 6004 generates image header information of the image file, writes the image header information into a corresponding data segment, and then generates compressed image data corresponding to the RGBA data.
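The dynamic-format variant differs only in looping over the frames and emitting one frame header and one stream pair per frame, as in this sketch (same hypothetical layout and placeholder encoders as above; frames is assumed to be a list of (H, W, 4) uint8 arrays).

```python
import struct

def encode_animated(frames, encode_rgb, encode_alpha) -> bytes:
    """Dynamic-format encoding flow (sketch): a frame header and a stream
    pair per frame, with one image header written for the whole file."""
    body = b""
    for rgba in frames:  # each frame: (H, W, 4) uint8 RGBA data
        s1 = encode_rgb(rgba[:, :, :3])   # first video encoding mode
        s2 = encode_alpha(rgba[:, :, 3])  # second video encoding mode
        # Frame header information generated for each frame.
        body += struct.pack(">II", len(s1), len(s2)) + s1 + s2
    image_header = struct.pack(">4sBI", b"IMGX", 0x01, len(frames))
    return image_header + body
```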
In some embodiments of this application, the compressed image data may alternatively be described by using a name such as a compressed stream or an image sequence. This is not limited in this embodiment of this application.
Also refer to the accompanying drawings.
During specific implementation, for an image file in a static format, first, the decoding module 7000 parses compressed image data of the image file by using the image header information and frame header information parsing submodule 7001, to obtain image header information and frame header information of the image file, and obtains, if it is determined according to the image header information that there is transparency data in the image file, first stream data and second stream data from a stream data segment indicated by the frame header information; then, the first video decoding mode submodule 7002 decodes the first stream data according to a first video decoding mode, to generate RGB data; next, the second video decoding mode submodule 7003 decodes the second stream data according to a second video decoding mode, to generate transparency data; and finally, the RGB data and transparency data combination submodule 7004 combines the RGB data and the transparency data, to generate RGBA data, and outputs the RGBA data.
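The recombination performed by the RGB data and transparency data combination submodule 7004 amounts to stacking the decoded planes back into a four-channel image. A minimal numpy sketch, assuming the decoders return uint8 arrays:

```python
import numpy as np

def combine_rgb_alpha(rgb: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Combine decoded RGB data (H, W, 3) and transparency data (H, W)
    back into RGBA data (H, W, 4)."""
    assert rgb.shape[:2] == alpha.shape, "planes must share dimensions"
    return np.dstack([rgb, alpha])
```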
For an image file in a dynamic format, first, the decoding module 7000 parses compressed image data of the image file by using the image header information and frame header information parsing submodule 7001, to obtain image header information and frame header information of the image file, and determines the quantity of frames included in the image file; then, if it is determined according to the image header information that there is transparency data in the image file, the decoding module 7000 obtains first stream data and second stream data from the stream data segment indicated by the frame header information of each frame of image; the first video decoding mode submodule 7002 decodes, according to a first video decoding mode, the first stream data corresponding to each frame of image, to generate RGB data; the second video decoding mode submodule 7003 decodes, according to a second video decoding mode, the second stream data corresponding to each frame of image, to generate transparency data; and finally, the RGB data and transparency data combination submodule 7004 combines the RGB data and the transparency data of each frame of image, to generate RGBA data, and outputs the RGBA data of all frames included in the compressed image data.
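For the dynamic case, the same per-frame recombination applies. In the sketch below, parsed_frames is assumed to yield the (first stream, second stream) pair taken from each frame's frame header, and decode_rgb / decode_alpha remain placeholder decoders for the two video decoding modes.

```python
import numpy as np

def decode_animated(parsed_frames, decode_rgb, decode_alpha):
    """Decode every frame's stream pair and collect the RGBA output.

    parsed_frames yields (first_stream, second_stream) pairs taken from
    the stream data segment indicated by each frame's frame header.
    """
    rgba_frames = []
    for s1, s2 in parsed_frames:
        rgb = decode_rgb(s1)      # first video decoding mode
        alpha = decode_alpha(s2)  # second video decoding mode
        rgba_frames.append(np.dstack([rgb, alpha]))
    return rgba_frames
```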
For the image file processing system shown in the accompanying drawings, reference may be made to the foregoing descriptions of the encoding device 5001 and the decoding device 5002, and details are not described herein again.
A person of ordinary skill in the art may understand that all or some of the processes of the methods in the embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium. When the program is run by a processor, the processes of the methods in the embodiments are performed. The foregoing storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing disclosure is merely exemplary embodiments of this application, and certainly is not intended to limit the protection scope of this application. Therefore, equivalent variations made in accordance with the claims of this application shall fall within the scope of this application.
Foreign Application Priority Data

| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 201710225910.3 | Apr 2017 | CN | national |
This application is a continuation application of PCT/CN2018/079113, entitled “IMAGE FILE PROCESSING METHOD AND APPARATUS, AND STORAGE MEDIUM” filed on Mar. 15, 2018, which claims priority to Chinese Patent Application No. 201710225910.3, entitled “IMAGE FILE PROCESSING METHOD” filed with the Chinese Patent Office on Apr. 8, 2017, all of which are incorporated by reference in their entirety.
Related U.S. Application Data

| Relationship | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/CN2018/079113 | Mar 2018 | US |
| Child | 16595008 | | US |