This application relates to the field of image processing technologies, and in particular, to an image scaling technology.
An image scaling technology is widely applied to various scenarios, including fields such as extended reality (XR), autonomous driving, face recognition, and gesture recognition, and is intended to adjust an image to a proper size for subsequent processing.
In a conventional image scaling method, a bilinear interpolation algorithm is widely used. Specifically, a scaling coefficient is obtained based on a size of a destination image and a size of a source image, then coordinates of several source pixel points on the source image corresponding to a to-be-processed pixel point on the destination image are determined based on the scaling coefficient, and an interpolation coefficient is determined at the same time. A pixel value of the to-be-processed pixel point is finally obtained based on the pixel values of the several source pixel points and the interpolation coefficient.
In the bilinear interpolation algorithm provided in the foregoing related technology, a floating-point division method is used during calculation of the scaling coefficient, and a subsequent calculation process also includes a large number of complex floating-point operations, resulting in a relatively large operation amount and relatively low operation efficiency.
Aspects described herein provide an image scaling method and apparatus, a device, and a storage medium. The technical solutions provided in the aspects described herein include the following.
According to an aspect described herein, an image scaling method is provided, the method being performed by a computer device, and including:
According to an aspect described herein, an image scaling apparatus is provided, the apparatus being deployed on a computer device, and including:
According to an aspect described herein, a computer device is provided, including a processor and a memory, the memory having a computer program stored therein, the computer program being loaded and executed by the processor to implement the foregoing image scaling method.
According to an aspect described herein, a computer-readable storage medium is provided, having a computer program stored therein, the computer program being loaded and executed by a processor to implement the foregoing image scaling method.
According to an aspect described herein, a computer program product is provided, including a computer program, the computer program being stored in a computer-readable storage medium, and a processor being configured to read the computer program from the computer-readable storage medium and execute the computer program to implement the foregoing image scaling method.
The technical solutions provided in the aspects described herein include at least the following beneficial effects.
When the source image needs to be scaled to obtain the destination image that meets a size requirement, the transverse scaling coefficient and the longitudinal scaling coefficient as integers may be obtained through the fixed-point operation based on the size of the source image, the size of the destination image, and a preset displacement value as an integer. Then for each to-be-processed pixel point in the destination image, the coordinates of the source pixel point in the source image corresponding to the to-be-processed pixel point, and the fixed-point interpolation coefficient as an integer are obtained through the fixed-point operation based on the coordinates of the to-be-processed pixel point, the transverse scaling coefficient, the longitudinal scaling coefficient, and the displacement value. Next, the pixel value of the to-be-processed pixel point is obtained through the fixed-point operation based on the pixel value of the source pixel point, the fixed-point interpolation coefficient, and the displacement value. In the foregoing process, a scaling coefficient as an integer is obtained through operation, then the fixed-point interpolation coefficient as an integer is calculated based on the scaling coefficient as an integer, the pixel value of the to-be-processed pixel point is calculated based on the fixed-point interpolation coefficient as an integer, and then the destination image after the source image is scaled is obtained based on the pixel value of each to-be-processed pixel point in the destination image, so that data as an integer is used in the whole image scaling process to perform the fixed-point operation, thereby avoiding a floating-point operation, reducing an operation amount, and improving operation efficiency.
To make objectives, technical solutions, and advantages described herein clearer, implementations described herein are to be further described in detail below with reference to the accompanying drawings.
In computer image processing and computer graphics, image scaling refers to a process of adjusting a size of a digital image, i.e., changing the size of the image by adding or removing pixels. Due to a trade-off between processing efficiency and image quality (such as smoothness and clarity), image scaling is not a simple process. When an image is upscaled, the number of pixels forming the image increases, and the image looks “soft”. When the image is downscaled, the image becomes smoother and clearer.
Bilinear interpolation is a commonly used algorithm in the field of image scaling, which takes both the processing efficiency and the image quality into consideration. In mathematics, bilinear interpolation is an extension of linear interpolation to a two-dimensional rectangular grid, and is used to interpolate a bivariate function (for example, of x and y). A core idea thereof is to perform linear interpolation in two directions separately.
The image scaling method based on bilinear interpolation of a floating-point operation provided in the related art is described below. An example in which the size includes a width size and a height size is used, and a process thereof is as follows:
1. A transverse scaling coefficient and a longitudinal scaling coefficient are calculated based on a width size and a height size of a source image and a width size and a height size of a destination image.
The source image is downscaled or upscaled to obtain the destination image. Illustratively, as shown in
The width size and the height size of the source image include a width value and a height value of the source image. The width value of the source image is denoted as Widthsrc herein, and the height value of the source image is denoted as Heightsrc. The width size and the height size of the destination image include a width value and a height value of the destination image. The width value of the destination image is denoted as Widthdst herein, and the height value of the destination image is denoted as Heightdst. The width value and the height value of the source image may be obtained by measuring a to-be-scaled source image. The width value and the height value of the destination image may be set in advance based on an actual scaling requirement.
The transverse scaling coefficient may be a scaling coefficient in a width direction, and the transverse scaling coefficient is denoted as Wratio herein. The transverse scaling coefficient is equal to the width value of the source image divided by the width value of the destination image, i.e., Wratio=Widthsrc/Widthdst.
The longitudinal scaling coefficient may refer to a scaling coefficient in a height direction, and the longitudinal scaling coefficient is denoted as Hratio herein. The longitudinal scaling coefficient is equal to the height value of the source image divided by the height value of the destination image, i.e., Hratio=Heightsrc/Heightdst.
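The two floating-point scaling coefficients described above can be sketched as follows. This is a minimal illustration, not the source's literal code; the variable names mirror the notation above, and the 500*600 → 200*200 sizes are illustrative example values:

```python
# Illustrative sizes: a 500*600 source image scaled to a 200*200 destination.
Widthsrc, Heightsrc = 500, 600
Widthdst, Heightdst = 200, 200

# Floating-point division, as in the related art.
Wratio = Widthsrc / Widthdst    # transverse scaling coefficient: 2.5
Hratio = Heightsrc / Heightdst  # longitudinal scaling coefficient: 3.0
```

Each coefficient is a floating-point number whenever the source and destination sizes are not exact multiples, which is why the subsequent steps of the related art also involve floating-point operations.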
2. For a to-be-processed pixel point in the destination image, coordinates of at least one source pixel point in the source image corresponding to the to-be-processed pixel point, and an interpolation coefficient are calculated based on coordinates of the to-be-processed pixel point, the transverse scaling coefficient, and the longitudinal scaling coefficient.
The to-be-processed pixel point may be any pixel point in the destination image. The coordinates of the to-be-processed pixel point refer to coordinates of the to-be-processed pixel point in the destination image. Illustratively, a two-dimensional rectangular coordinate system may be established, an upper left vertex of the destination image is used as an origin, a width direction is used as an x-axis, and a height direction is used as a y-axis. In this way, coordinates of each pixel point in the destination image may be determined. Similarly, a two-dimensional rectangular coordinate system may be established, an upper left vertex of the source image is used as an origin, a width direction is used as an x-axis, and a height direction is used as a y-axis. In this way, coordinates of each pixel point in the source image may be determined.
As shown in
The interpolation coefficient includes a transverse interpolation coefficient and a longitudinal interpolation coefficient. The transverse interpolation coefficient refers to an interpolation coefficient in a width direction. The transverse interpolation coefficient is denoted as u herein, and the transverse interpolation coefficient u=fraction (Xdst*Wratio). The longitudinal interpolation coefficient refers to an interpolation coefficient in a height direction. The longitudinal interpolation coefficient is denoted as v herein, and the longitudinal interpolation coefficient v=fraction (Ydst*Hratio), where fraction denotes taking the fractional part.
3. The pixel value of the to-be-processed pixel point is calculated based on the pixel value of the foregoing at least one source pixel point and the interpolation coefficient.
A pixel value Fdst of the to-be-processed pixel point is equal to (1−u)*(1−v)*F00+(1−u)*v*F01+u*(1−v)*F10+u*v*F11, F00, F10, F01, and F11 being respectively the pixel values of the first source pixel point, the second source pixel point, the third source pixel point, and the fourth source pixel point.
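Operations 2-3 of the related-art floating-point method can be sketched together as follows. This is a minimal illustration under stated assumptions: the image is a 2D array indexed as src[y][x], and the coordinates of the second to fourth source pixel points are clamped to the last valid index so they stay inside the image; the function name is illustrative:

```python
def bilinear_pixel(src, Xdst, Ydst, Wratio, Hratio):
    """Floating-point bilinear interpolation as in the related art."""
    H, W = len(src), len(src[0])
    fx = Xdst * Wratio
    fy = Ydst * Hratio
    x0, y0 = int(fx), int(fy)    # first source pixel point
    u = fx - x0                  # transverse interpolation coefficient
    v = fy - y0                  # longitudinal interpolation coefficient
    x1 = min(x0 + 1, W - 1)      # clamp at the right border
    y1 = min(y0 + 1, H - 1)      # clamp at the bottom border
    F00, F10 = src[y0][x0], src[y0][x1]
    F01, F11 = src[y1][x0], src[y1][x1]
    return ((1 - u) * (1 - v) * F00 + (1 - u) * v * F01
            + u * (1 - v) * F10 + u * v * F11)
```

Note that computing fx, fy, u, and v all requires floating-point multiplication and fractional-part extraction, which is the operation load the fixed-point method below removes.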
4. The destination image after the source image is scaled is obtained based on the pixel value of each to-be-processed pixel point in the destination image.
For each to-be-processed pixel point in the destination image, the pixel value of the to-be-processed pixel point may be calculated in the manners provided in the foregoing operations 2-3. After the pixel value of each to-be-processed pixel point in the destination image is calculated, the destination image after the source image is scaled is obtained.
It may be learned from the foregoing process that a floating-point division method is used during calculation of the scaling coefficient, and a subsequent calculation process also includes a large number of complex floating-point operations, resulting in a relatively large operation amount and relatively low operation efficiency.
The terminal device 10 may be an electronic device such as a mobile phone, a tablet computer, a personal computer (PC), an on-board terminal, a smart home appliance, or a multimedia playback device. A client running an application such as a social application, an image processing application, or a gaming application may be installed in the terminal device 10, which is not limited in this application.
The server 20 may be one server, or may be a server cluster including a plurality of servers, or a cloud computing service center. The server 20 may be a background server of the foregoing application, and is configured to provide background services for the client of the application.
The terminal device 10 may communicate with the server 20 through a network, such as a wireless or wired network.
According to the image scaling method provided in the aspects described herein, each operation may be performed by a computer device. The computer device may be any electronic device having computing and storage capabilities. Illustratively, the computer device may be the terminal device 10 in the solution implementation environment shown in
Illustratively, the foregoing computer device may further be an embedded device. The embedded device is also referred to as an embedded system, which may include hardware and software and is a device that can operate independently. The software content thereof includes a software operating environment and an operating system. The hardware content includes components such as a signal processor, a memory, and a communication module. The embedded system differs greatly from a general-purpose computer processing system: it typically cannot implement a mass storage function, since no mass storage medium matching the embedded system exists, and the storage media used are mostly an erasable programmable read-only memory (EPROM), an electrically erasable PROM (EEPROM), and the like. For the software part, an application programming interface (API) is used as a core of the development platform. Illustratively, the embedded device includes, but is not limited to, the following types: an embedded microprocessor, an embedded microcontroller, an embedded digital signal processor, an embedded system on a chip, and the like.
The method provided in the aspects described herein may relate to an artificial intelligence (AI) technology, and in particular, to a computer vision (CV) technology in the AI technology.
With the research and progress of artificial intelligence (AI) technologies, the AI technology has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like. It is believed that with the development of technologies, the AI technology will be applied in more fields and play an increasingly important role.
For example, with gradual popularization of AI applications, some conventional image processing algorithms are gradually replaced by deep learning. Through an AI model constructed based on a neural network, different functions such as target detection, face recognition, and gesture recognition may be implemented on an input image. A size of the input image of the AI model constructed based on the neural network is usually fixed (for example, fixed at a size of 224*224), and a size of a source image is arbitrary. Therefore, the source image of any size needs to be scaled and adjusted to a destination image of a fixed size, and then the destination image is inputted into the AI model constructed based on the neural network for processing.
Furthermore, with the application of the AI application to the embedded device, after obtaining a source image, the embedded device needs to perform an image scaling method to scale the source image to the destination image of the fixed size, and then input the destination image into the AI model constructed based on the neural network for processing. A detection or recognition result of the destination image is outputted by the AI model.
Gesture recognition in the field of XR is used as an example. An inputted hand image is recognized through the AI model constructed based on the neural network, to obtain a gesture recognition result of the image. As shown in
The method provided in the aspects described herein may be applied to a business scenario of a game. For example, in a virtual reality (VR) game, a hand image can be efficiently scaled for subsequent gesture recognition through an AI model, thereby enhancing processing efficiency and improving game experience.
Operation 410: Obtain a transverse scaling coefficient and a longitudinal scaling coefficient as integers through a fixed-point operation based on a size of a source image, a size of a destination image, and a displacement value as an integer.
The source image may refer to a to-be-scaled image. The source image may be a picture, or may be an image frame in a video, which is not limited in this application. The source image is downscaled or upscaled to obtain the destination image.
In this aspect described herein, the size may refer to a width size and a height size. The width size and the height size of the source image may include a width value and a height value of the source image. The width value of the source image is denoted as Widthsrc herein, and the height value of the source image is denoted as Heightsrc. The width size and the height size of the destination image include a width value and a height value of the destination image. The width value of the destination image is denoted as Widthdst herein, and the height value of the destination image is denoted as Heightdst. The width value and the height value of the source image may be obtained by measuring a to-be-scaled source image. The width value and the height value of the destination image may be set in advance based on an actual scaling requirement.
In this aspect described herein, a magnitude relationship between the width value Widthsrc of the source image and the width value Widthdst of the destination image is not limited. For example, Widthsrc is less than Widthdst, or Widthsrc is greater than Widthdst, or Widthsrc is equal to Widthdst. A magnitude relationship between the height value Heightsrc of the source image and the height value Heightdst of the destination image is not limited. For example, Heightsrc is less than Heightdst, or Heightsrc is greater than Heightdst, or Heightsrc is equal to Heightdst.
In addition, the image scaling in this aspect described herein may refer to adjusting a size of the source image, to obtain a destination image of a certain fixed size. The size of the source image is different from the fixed size (i.e., the size of the destination image). At least one of a width value and a height value of the source image is not equal to that of the destination image. For example, the width value of the source image is not equal to the width value of the destination image, and the height value of the source image is not equal to the height value of the destination image either. Alternatively, the width value of the source image is equal to the width value of the destination image, and the height value of the source image is not equal to the height value of the destination image. Alternatively, the width value of the source image is not equal to the width value of the destination image, and the height value of the source image is equal to the height value of the destination image. The image scaling in this aspect described herein may refer to downscaling (for example, decreasing the width value and the height value of the source image to obtain a destination image), or may refer to upscaling (for example, increasing both the width value and the height value of the source image to obtain a destination image), but may not be limited thereto. For example, the image scaling may further be decreasing the width value of the source image and increasing the height value of the source image, to obtain a destination image, or increasing the width value of the source image and decreasing the height value of the source image, to obtain a destination image.
In an example, it is assumed that the width size and the height size of the source image are 500*600, which needs to be decreased to a destination size of 200*200. The width value Widthsrc of the source image is 500, representing that a quantity of pixels of the source image in a width direction is 500. The height value Heightsrc of the source image is 600, representing that a quantity of pixels of the source image in a height direction is 600. The width value Widthdst of the destination image is 200, representing that the quantity of pixels of the destination image in the width direction is 200. The height value Heightdst of the destination image is also 200, representing that the quantity of pixels of the destination image in the height direction is also 200.
In addition, in this aspect described herein, a displacement value as an integer is set in advance. For example, the displacement value may be 7 or 8. For specific description of the displacement value, reference may be made to the following aspect.
The transverse scaling coefficient may refer to a scaling coefficient in a horizontal direction (for example, the width direction), and the transverse scaling coefficient is denoted as Wratio herein. The longitudinal scaling coefficient may refer to a scaling coefficient in a vertical direction (for example, the height direction), and the longitudinal scaling coefficient is denoted as Hratio herein. In this aspect described herein, the transverse scaling coefficient and the longitudinal scaling coefficient are numerical values as integers obtained through the fixed-point operation. The fixed-point operation refers to an operation in which decimal point positions of all numbers are fixed during the operation.
In some aspects, the fixed-point operation includes an integer division manner (e.g., using the integer portion of the quotient and ignoring the remainder), and the size includes a width size and a height size. The width size and the height size include a width value and a height value. In this case, the transverse scaling coefficient and the longitudinal scaling coefficient may be determined by dividing, by the width value of the destination image in the integer division manner, the width value of the source image shifted to the left by a first quantity of bits, to obtain a quotient as the transverse scaling coefficient, and dividing, by the height value of the destination image in the integer division manner, the height value of the source image shifted to the left by the first quantity of bits, to obtain a quotient as the longitudinal scaling coefficient. The first quantity of bits may be determined based on the displacement value. A numerical value relationship between the first quantity of bits and the displacement value is not limited in this aspect described herein. Illustratively, the first quantity of bits may be twice the displacement value.
The transverse scaling coefficient Wratio is equal to (Widthsrc<<(2×SHIFT_BIT))/Widthdst; and the longitudinal scaling coefficient Hratio is equal to (Heightsrc<<(2×SHIFT_BIT))/Heightdst, where SHIFT_BIT represents the displacement value and the division is performed in the integer division manner.
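Operation 410 can be sketched as follows, assuming an illustrative displacement value of 8 (so the first quantity of bits is 16) and the 500*600 → 200*200 example sizes; Python's `//` stands in for the integer division manner that keeps the quotient and discards the remainder:

```python
SHIFT_BIT = 8  # preset displacement value as an integer (illustrative)

Widthsrc, Heightsrc = 500, 600
Widthdst, Heightdst = 200, 200

# Shift left by the first quantity of bits (2 * SHIFT_BIT), then divide
# in the integer division manner: the quotient is the scaling coefficient.
Wratio = (Widthsrc << (2 * SHIFT_BIT)) // Widthdst
Hratio = (Heightsrc << (2 * SHIFT_BIT)) // Heightdst
# Wratio encodes 2.5 scaled by 2^16; Hratio encodes 3.0 scaled by 2^16.
```

Both coefficients are plain integers, so all later steps can stay in integer arithmetic.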
Operation 420: Obtain, for each to-be-processed pixel point in the destination image, coordinates of a source pixel point in the source image corresponding to the to-be-processed pixel point, and a fixed-point interpolation coefficient as an integer through the fixed-point operation based on the coordinates of the to-be-processed pixel point, the transverse scaling coefficient, the longitudinal scaling coefficient, and the displacement value.
The to-be-processed pixel point may be any pixel point in the destination image. The coordinates of the to-be-processed pixel point may refer to coordinates of the to-be-processed pixel point in the destination image. Illustratively, a two-dimensional rectangular coordinate system may be established, an upper left vertex of the destination image is used as an origin, a horizontal direction is used as an x-axis, and a vertical direction is used as a y-axis. In this way, coordinates of each pixel point in the destination image may be determined. Similarly, a two-dimensional rectangular coordinate system may be established, an upper left vertex of the source image is used as an origin, a horizontal direction is used as an x-axis, and a vertical direction is used as a y-axis. In this way, coordinates of each pixel point in the source image may be determined.
The source pixel point in the source image corresponding to the to-be-processed pixel point may include one or more source pixel points. In a possible implementation, a quantity of source pixel points in the source image corresponding to the to-be-processed pixel point may be 4. For example, the source pixel point in the source image corresponding to the to-be-processed pixel point includes a first source pixel point, a second source pixel point, a third source pixel point, and a fourth source pixel point.
In this case, coordinates of a plurality of source pixel points have the following relationship. An abscissa of the second source pixel point is a smaller value of the width value of the source image and a first numerical value, an ordinate of the second source pixel point is the same as an ordinate of the first source pixel point, and the first numerical value is an abscissa of the first source pixel point plus 1. An abscissa of the third source pixel point is the same as the abscissa of the first source pixel point, an ordinate of the third source pixel point is a smaller value of the height value of the source image and a second numerical value, and the second numerical value is the ordinate of the first source pixel point plus 1. An abscissa of the fourth source pixel point is the same as the abscissa of the second source pixel point, and an ordinate of the fourth source pixel point is the same as the ordinate of the third source pixel point.
Assuming that coordinates of a to-be-processed pixel point are (Xdst, Ydst), Xdst represents an abscissa of the to-be-processed pixel point, and Ydst represents an ordinate of the to-be-processed pixel point, then:
In some aspects, a manner of obtaining the coordinates of the source pixel point in the source image corresponding to the to-be-processed pixel point through the fixed-point operation based on the coordinates of the to-be-processed pixel point, the transverse scaling coefficient, the longitudinal scaling coefficient, and the displacement value may be shifting a product of the abscissa of the to-be-processed pixel point and the transverse scaling coefficient to the right by the first quantity of bits, to obtain an abscissa of the first source pixel point in the source image corresponding to the to-be-processed pixel point, and shifting a product of the ordinate of the to-be-processed pixel point and the longitudinal scaling coefficient to the right by the first quantity of bits, to obtain an ordinate of the first source pixel point, the first quantity of bits being determined based on the displacement value. Illustratively, the first quantity of bits is twice the displacement value. After the abscissa and the ordinate of the first source pixel point are obtained, the coordinates of the source pixel point in the source image corresponding to the to-be-processed pixel point may be obtained based on the abscissa and the ordinate of the first source pixel point and the size of the source image through the method described above.
When the source pixel point in the source image corresponding to the to-be-processed pixel point includes the first source pixel point, the second source pixel point, the third source pixel point, and the fourth source pixel point, based on the relationship between the coordinates of the four source pixel points described above, a manner of obtaining the coordinates of the source pixel point in the source image corresponding to the to-be-processed pixel point based on the abscissa and the ordinate of the first source pixel point and the size of the source image may be determining a smaller value of the width value of the source image and the first numerical value as an abscissa of the second source pixel point, and determining the ordinate of the first source pixel point as an ordinate of the second source pixel point, the first numerical value being the abscissa of the first source pixel point plus 1; determining the abscissa of the first source pixel point as an abscissa of the third source pixel point, and determining a smaller value of the height value of the source image and a second numerical value as an ordinate of the third source pixel point, the second numerical value being the ordinate of the first source pixel point plus 1; determining the abscissa of the second source pixel point as an abscissa of the fourth source pixel point, and determining the ordinate of the third source pixel point as an ordinate of the fourth source pixel point; and forming the coordinates of the source pixel point corresponding to the to-be-processed pixel point in the source image based on the abscissa and the ordinate of the first source pixel point, the abscissa and the ordinate of the second source pixel point, the abscissa and the ordinate of the third source pixel point, and the abscissa and the ordinate of the fourth source pixel point.
In other words,
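The coordinate computation above can be sketched as follows, again with an illustrative displacement value of 8. One interpretation note: the description clamps by the width value and the height value of the source image, while this sketch clamps to Widthsrc − 1 and Heightsrc − 1 so that the coordinates remain valid zero-based indexes; the function name is illustrative:

```python
SHIFT_BIT = 8  # preset displacement value (illustrative)

def source_points(Xdst, Ydst, Wratio, Hratio, Widthsrc, Heightsrc):
    # First source pixel point: shift the products right by 2 * SHIFT_BIT.
    Xsrc = (Xdst * Wratio) >> (2 * SHIFT_BIT)
    Ysrc = (Ydst * Hratio) >> (2 * SHIFT_BIT)
    # Second/third points: neighbor coordinates clamped inside the image
    # (clamping to the last valid zero-based index is this sketch's choice).
    X2 = min(Widthsrc - 1, Xsrc + 1)
    Y3 = min(Heightsrc - 1, Ysrc + 1)
    # First, second, third, and fourth source pixel points.
    return (Xsrc, Ysrc), (X2, Ysrc), (Xsrc, Y3), (X2, Y3)
```

The right shift by 2×SHIFT_BIT undoes the left shift applied when the scaling coefficient was computed, so Xsrc and Ysrc are the integer parts of Xdst×(Widthsrc/Widthdst) and Ydst×(Heightsrc/Heightdst) without any floating-point step.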
In this aspect described herein, the interpolation coefficient is represented in a form of a fixed-point number, and is therefore referred to as a fixed-point interpolation coefficient. The fixed-point interpolation coefficient includes a transverse fixed-point interpolation coefficient and a longitudinal fixed-point interpolation coefficient. The transverse fixed-point interpolation coefficient is an interpolation coefficient in a width direction, and the transverse fixed-point interpolation coefficient is denoted as u herein. The longitudinal fixed-point interpolation coefficient is an interpolation coefficient in a height direction, and the longitudinal fixed-point interpolation coefficient is denoted as v herein.
In some aspects, after the abscissa and the ordinate of the first source pixel point are obtained, a manner of determining the fixed-point interpolation coefficient as an integer may be obtaining the fixed-point interpolation coefficient based on the abscissa and the ordinate of the first source pixel point, the abscissa and the ordinate of the to-be-processed pixel point, the transverse scaling coefficient, the longitudinal scaling coefficient, and the displacement value.
In some aspects, the fixed-point interpolation coefficient may include a transverse fixed-point interpolation coefficient and a longitudinal fixed-point interpolation coefficient. In this case, a manner of determining the fixed-point interpolation coefficient as an integer may be: subtracting, from a value obtained by multiplying the abscissa of the to-be-processed pixel point by the transverse scaling coefficient, a value obtained by the abscissa of the first source pixel point shifted to the left by the first quantity of bits, to obtain a first difference, and shifting the first difference to the right by a second quantity of bits, to obtain the transverse fixed-point interpolation coefficient; and subtracting, from a value obtained by multiplying the ordinate of the to-be-processed pixel point by the longitudinal scaling coefficient, a value obtained by the ordinate of the first source pixel point shifted to the left by the first quantity of bits, to obtain a second difference, and shifting the second difference to the right by the second quantity of bits, to obtain the longitudinal fixed-point interpolation coefficient, the first quantity of bits and the second quantity of bits being determined based on the displacement value. Illustratively, the first quantity of bits is twice the displacement value, and the second quantity of bits is equal to the displacement value.
In other words,
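The fixed-point interpolation coefficients can be sketched as follows, with an illustrative displacement value of 8 (first quantity of bits 16, second quantity of bits 8); the function name is illustrative:

```python
SHIFT_BIT = 8  # preset displacement value (illustrative)

def fixed_point_coeffs(Xdst, Ydst, Wratio, Hratio):
    # Coordinates of the first source pixel point.
    Xsrc = (Xdst * Wratio) >> (2 * SHIFT_BIT)
    Ysrc = (Ydst * Hratio) >> (2 * SHIFT_BIT)
    # First/second differences, then shift right by the second quantity
    # of bits (SHIFT_BIT) to obtain integer interpolation coefficients.
    u = (Xdst * Wratio - (Xsrc << (2 * SHIFT_BIT))) >> SHIFT_BIT
    v = (Ydst * Hratio - (Ysrc << (2 * SHIFT_BIT))) >> SHIFT_BIT
    return u, v  # each an integer in [0, 1 << SHIFT_BIT)
```

Here u and v are the fractional parts of the floating-point version scaled by 2^SHIFT_BIT; for example, a fractional part of 0.5 becomes the integer 128 when SHIFT_BIT is 8.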
In the foregoing aspects, a scaling coefficient is used in the process of calculating the coordinates of at least one source pixel point and the fixed-point interpolation coefficient. The scaling coefficient is determined by the width sizes and the height sizes of the source image and the destination image, which are not fixed. However, in the technical solution provided in the foregoing aspect, a unified operation method is used for different scaling coefficients, and project maintainability is high.
Operation 430: Obtain a pixel value of the to-be-processed pixel point through the fixed-point operation based on a pixel value of the source pixel point, the fixed-point interpolation coefficient, and the displacement value.
In some aspects, this operation includes the following sub-operations:
1. Multiply a difference between 1 shifted to the left by a second quantity of bits and the transverse fixed-point interpolation coefficient by a first saturation operation result, to obtain a first intermediate value, the first saturation operation result being obtained by performing a saturation operation on a first calculated value, the first calculated value being obtained by multiplying a difference between 1 shifted to the left by the second quantity of bits and the longitudinal fixed-point interpolation coefficient by a pixel value of the first source pixel point shifted to the right by the second quantity of bits, and
The first intermediate value is equal to ((1<<SHIFT_BIT)−u)×SAT{(((1<<SHIFT_BIT)−v)×F00)>>SHIFT_BIT}, where SAT represents a saturation operation, that is, when an operation result is greater than an upper limit or less than a lower limit, the result is clamped to the upper limit or the lower limit, respectively; F00 represents the pixel value of the first source pixel point; u and v represent the transverse fixed-point interpolation coefficient and the longitudinal fixed-point interpolation coefficient; and SHIFT_BIT represents the displacement value.
2. Multiply the transverse fixed-point interpolation coefficient by a second saturation operation result, to obtain a second intermediate value, the second saturation operation result being obtained by performing a saturation operation on a second calculated value, and the second calculated value being obtained by multiplying the difference between 1 shifted to the left by the second quantity of bits and the longitudinal fixed-point interpolation coefficient by a pixel value of the second source pixel point, and shifting the product to the right by the second quantity of bits, and
3. Multiply the difference between 1 shifted to the left by the second quantity of bits and the transverse fixed-point interpolation coefficient by a third saturation operation result, to obtain a third intermediate value, the third saturation operation result being obtained by performing a saturation operation on a third calculated value, the third calculated value being obtained by shifting a product of the longitudinal fixed-point interpolation coefficient and a pixel value of the third source pixel point to the right by the second quantity of bits, and
4. Multiply the transverse fixed-point interpolation coefficient by a fourth saturation operation result, to obtain a fourth intermediate value, the fourth saturation operation result being obtained by performing a saturation operation on a fourth calculated value, the fourth calculated value being obtained by shifting a product of the longitudinal fixed-point interpolation coefficient and a pixel value of the fourth source pixel point to the right by the second quantity of bits, and
5. Obtain the pixel value of the to-be-processed pixel point based on the first intermediate value, the second intermediate value, the third intermediate value, and the fourth intermediate value.
Illustratively, the first intermediate value, the second intermediate value, the third intermediate value, and the fourth intermediate value are added to obtain a summation result, and a saturation operation is performed on a numerical value obtained by shifting the summation result to the right by the second quantity of bits, to obtain the pixel value of the to-be-processed pixel point.
In other words, the summation result Ftmp is equal to ((1<<SHIFT_BIT)−u)×SAT{(((1<<SHIFT_BIT)−v)×F00)>>SHIFT_BIT}+((1<<SHIFT_BIT)−u)×SAT{(v×F01)>>SHIFT_BIT}+u×SAT{(((1<<SHIFT_BIT)−v)×F10)>>SHIFT_BIT}+u×SAT{(v×F11)>>SHIFT_BIT}, where F10, F01, and F11 represent the pixel values of the second, third, and fourth source pixel points, respectively.
The pixel value Fdst of the to-be-processed pixel point is equal to SAT{Ftmp>>SHIFT_BIT}.
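Under the same assumptions (a displacement value of 7 and therefore an 8-bit saturation operation; the function names are illustrative), sub-operations 1-5 may be sketched as:

```python
def sat(value, lo=0, hi=255):
    """8-bit saturation operation, used when the displacement value is 7:
    clamp the operation result to [0, 255]."""
    return max(lo, min(hi, value))

def bilinear_pixel(F00, F10, F01, F11, u, v, shift=7):
    """Combined transverse + longitudinal fixed-point interpolation.
    F00/F10/F01/F11 are the pixel values of the first/second/third/fourth
    source pixel points; u and v are the fixed-point coefficients."""
    one = 1 << shift
    t1 = (one - u) * sat(((one - v) * F00) >> shift)  # first intermediate value
    t2 = u * sat(((one - v) * F10) >> shift)          # second intermediate value
    t3 = (one - u) * sat((v * F01) >> shift)          # third intermediate value
    t4 = u * sat((v * F11) >> shift)                  # fourth intermediate value
    return sat((t1 + t2 + t3 + t4) >> shift)          # Fdst

# Coefficients u == v == 0 simply select the first source pixel:
# bilinear_pixel(42, 0, 0, 0, 0, 0) == 42.
```

Every step is a multiply, a shift, a saturation, or an addition, which is what makes the method amenable to SIMD acceleration as described below.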
According to the image scaling method provided in the foregoing aspect, the whole process mainly adopts operations such as a multiply-accumulate operation, a shifting operation, and the saturation operation. These operations can be conveniently accelerated in parallel through single instruction multiple data (SIMD), thereby reducing device power consumption while improving operation efficiency.
A quantity of bits of the foregoing saturation operation is related to the displacement value. In some aspects, the displacement value is an integer in the interval [7, 11]. For example, when the displacement value is 7, an 8-bit saturation operation is used, that is, the upper limit of the saturation operation result is 255 and the lower limit is 0; when the displacement value is greater than 7, a 16-bit saturation operation is used, that is, the upper limit of the saturation operation result is 65535 and the lower limit is 0.
In addition, for selection of the displacement value, requirements for the operation efficiency and precision may be comprehensively considered. A larger displacement value indicates higher precision, but the operation efficiency is reduced. A smaller displacement value indicates higher operation efficiency, but the precision is reduced. When the displacement value is 7, the operation process in the foregoing image scaling method may be implemented through an 8×8 multiply-accumulator (MAC). When the displacement value is a numerical value between 8 and 11, the operation process in the foregoing image scaling method needs to be implemented through a 16×16 MAC.
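The precision side of this trade-off can be illustrated with a small assumed example: the fixed-point coefficient has a step of 1/2**shift, so a larger displacement value tracks the ideal fractional position more closely (the function name and the 3-to-9 width upscale below are hypothetical):

```python
def transverse_coeff(x, src_w, dst_w, shift):
    """Transverse fixed-point interpolation coefficient of destination
    abscissa x, for an assumed displacement value (shift)."""
    scale = (src_w << (2 * shift)) // dst_w   # integer scaling coefficient
    x0 = (x * scale) >> (2 * shift)           # abscissa of first source pixel
    return (x * scale - (x0 << (2 * shift))) >> shift

# Upscaling a width of 3 to a width of 9: the ideal fractional position of
# destination abscissa 1 is 1/3.
u7 = transverse_coeff(1, 3, 9, 7)     # 42/128   ~= 0.3281
u11 = transverse_coeff(1, 3, 9, 11)   # 682/2048 ~= 0.3330
```

The displacement value of 11 approximates the ideal 1/3 more closely than 7 does, at the cost of the wider (16×16 rather than 8×8) multiply-accumulate hardware noted above.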
Operation 440: Obtain, based on the pixel value of each to-be-processed pixel point in the destination image, the destination image after the source image is scaled.
For each to-be-processed pixel point in the destination image, the pixel value of the to-be-processed pixel point may be calculated in the manners provided in the foregoing operations 420-430. After the pixel value of each to-be-processed pixel point in the destination image is calculated, the destination image after the source image is scaled is obtained.
According to the technical solutions provided in the aspects described herein, when the source image needs to be scaled to obtain the destination image that meets a size requirement, the transverse scaling coefficient and the longitudinal scaling coefficient as integers may be obtained through the fixed-point operation based on the size of the source image, the size of the destination image, and a preset displacement value as an integer. Then for each to-be-processed pixel point in the destination image, the coordinates of the source pixel point in the source image corresponding to the to-be-processed pixel point, and the fixed-point interpolation coefficient as an integer are obtained through the fixed-point operation based on the coordinates of the to-be-processed pixel point, the transverse scaling coefficient, the longitudinal scaling coefficient, and the displacement value. Next, the pixel value of the to-be-processed pixel point is obtained through the fixed-point operation based on the pixel value of the source pixel point, the fixed-point interpolation coefficient, and the displacement value. In the foregoing process, a scaling coefficient as an integer is obtained through operation, then the fixed-point interpolation coefficient as an integer is calculated based on the scaling coefficient as an integer, the pixel value of the to-be-processed pixel point is calculated based on the fixed-point interpolation coefficient as an integer, and then the destination image after the source image is scaled is obtained based on the pixel value of each to-be-processed pixel point in the destination image, so that data as an integer is used in the whole image scaling process to perform the fixed-point operation, thereby avoiding a floating-point operation, reducing an operation amount, and improving operation efficiency.
Therefore, this solution may be applied to an embedded device that has a strict power consumption requirement and does not support the floating-point operation, and the solution has high software portability.
In the foregoing aspect, the transverse interpolation and the longitudinal interpolation are both performed on 4 source pixel points in one pass (namely, an interpolation operation is performed on the 4 source pixel points together), to obtain the pixel value of the to-be-processed pixel point. In some other aspects, the interpolation operation may alternatively be performed in two stages: interpolation calculation is first performed on the source pixel points in one direction to obtain pixel values of intermediate pixel points, and then interpolation calculation is performed on the intermediate pixel points in the other direction to obtain the pixel value of the to-be-processed pixel point.
In some aspects, an example in which the transverse interpolation is first performed and then the longitudinal interpolation is performed is used. The pixel value of the to-be-processed pixel point is obtained through the following method:
1. A sum of a fifth saturation operation result and a sixth saturation operation result is determined as a pixel value of a first intermediate pixel point, the fifth saturation operation result being obtained by performing a saturation operation on a fifth calculated value, the fifth calculated value being obtained by multiplying the difference between 1 shifted to the left by a second quantity of bits and the transverse fixed-point interpolation coefficient by a pixel value of the first source pixel point, and shifting the product to the right by the second quantity of bits, the sixth saturation operation result being obtained by performing a saturation operation on a sixth calculated value, and the sixth calculated value being obtained by multiplying the transverse fixed-point interpolation coefficient by a pixel value of the second source pixel point, and shifting the product to the right by the second quantity of bits.
In other words, the pixel value R0 of the first intermediate pixel point is equal to SAT{(((1<<SHIFT_BIT)−u)×F00)>>SHIFT_BIT}+SAT{(u×F10)>>SHIFT_BIT}, and the first intermediate pixel point is a pixel point obtained after the transverse interpolation is performed on the first source pixel point and the second source pixel point.
2. A sum of a seventh saturation operation result and an eighth saturation operation result is determined as a pixel value of a second intermediate pixel point, the seventh saturation operation result being obtained by performing a saturation operation on a seventh calculated value, the seventh calculated value being obtained by multiplying the difference between 1 shifted to the left by the second quantity of bits and the transverse fixed-point interpolation coefficient by a pixel value of the third source pixel point, and shifting the product to the right by the second quantity of bits, the eighth saturation operation result being obtained by performing a saturation operation on an eighth calculated value, and the eighth calculated value being obtained by multiplying the transverse fixed-point interpolation coefficient by a pixel value of the fourth source pixel point, and shifting the product to the right by the second quantity of bits.
In other words, the pixel value R1 of the second intermediate pixel point is equal to SAT{(((1<<SHIFT_BIT)−u)×F01)>>SHIFT_BIT}+SAT{(u×F11)>>SHIFT_BIT}, and the second intermediate pixel point is a pixel point obtained after the transverse interpolation is performed on the third source pixel point and the fourth source pixel point.
3. A saturation operation is performed on a numerical value obtained by shifting a sum of a first product result and a second product result to the right by the second quantity of bits, to obtain the pixel value of the to-be-processed pixel point, the first product result being obtained by multiplying, by the pixel value of the first intermediate pixel point, a difference between 1 shifted to the left by the second quantity of bits and the longitudinal fixed-point interpolation coefficient, and the second product result being obtained by multiplying the longitudinal fixed-point interpolation coefficient by the pixel value of the second intermediate pixel point.
In other words, a sum Ftmp of the first product result and the second product result is equal to ((1<<SHIFT_BIT)−v)×R0+v×R1.
The pixel value Fdst of the to-be-processed pixel point is equal to SAT{Ftmp>>SHIFT_BIT}.
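A sketch of this transverse-first variant, again under the assumed displacement value of 7 and with hypothetical function names:

```python
def sat(value, lo=0, hi=255):
    """8-bit saturation operation (displacement value of 7)."""
    return max(lo, min(hi, value))

def bilinear_two_pass(F00, F10, F01, F11, u, v, shift=7):
    """Transverse interpolation first, then longitudinal interpolation."""
    one = 1 << shift
    # First intermediate pixel: transverse interpolation of the first and
    # second source pixel points.
    R0 = sat(((one - u) * F00) >> shift) + sat((u * F10) >> shift)
    # Second intermediate pixel: transverse interpolation of the third and
    # fourth source pixel points.
    R1 = sat(((one - u) * F01) >> shift) + sat((u * F11) >> shift)
    # Longitudinal interpolation of the two intermediate pixel values.
    return sat(((one - v) * R0 + v * R1) >> shift)
```

Storing R0 and R1 allows the transverse results to be reused across destination rows that map to the same pair of source rows, which is the motivation for caching the intermediate pixel values.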
In this aspect, the transverse interpolation is performed first and then the longitudinal interpolation is performed to obtain the pixel value of the to-be-processed pixel point.
In some aspects, an example in which the longitudinal interpolation is first performed and then the transverse interpolation is performed is used. The pixel value of the to-be-processed pixel point is obtained through the following method:
1. A sum of a ninth saturation operation result and a tenth saturation operation result is determined as a pixel value of a third intermediate pixel point, the ninth saturation operation result being obtained by performing a saturation operation on a ninth calculated value, the ninth calculated value being obtained by multiplying a difference between 1 shifted to the left by a second quantity of bits and the longitudinal fixed-point interpolation coefficient by a pixel value of the first source pixel point, and shifting the product to the right by the second quantity of bits, the tenth saturation operation result being obtained by performing a saturation operation on a tenth calculated value, and the tenth calculated value being obtained by multiplying the longitudinal fixed-point interpolation coefficient by a pixel value of the third source pixel point, and shifting the product to the right by the second quantity of bits.
In other words, the pixel value L0 of the third intermediate pixel point is equal to SAT{(((1<<SHIFT_BIT)−v)×F00)>>SHIFT_BIT}+SAT{(v×F01)>>SHIFT_BIT}, and the third intermediate pixel point is a pixel point obtained after the longitudinal interpolation is performed on the first source pixel point and the third source pixel point.
2. A sum of an eleventh saturation operation result and a twelfth saturation operation result is determined as a pixel value of a fourth intermediate pixel point, the eleventh saturation operation result being obtained by performing a saturation operation on an eleventh calculated value, the eleventh calculated value being obtained by multiplying the difference between 1 shifted to the left by the second quantity of bits and the longitudinal fixed-point interpolation coefficient by a pixel value of the second source pixel point, and shifting the product to the right by the second quantity of bits, the twelfth saturation operation result being obtained by performing a saturation operation on a twelfth calculated value, and the twelfth calculated value being obtained by multiplying the longitudinal fixed-point interpolation coefficient by a pixel value of the fourth source pixel point, and shifting the product to the right by the second quantity of bits.
In other words, the pixel value L1 of the fourth intermediate pixel point is equal to SAT{(((1<<SHIFT_BIT)−v)×F10)>>SHIFT_BIT}+SAT{(v×F11)>>SHIFT_BIT}, and the fourth intermediate pixel point is a pixel point obtained after the longitudinal interpolation is performed on the second source pixel point and the fourth source pixel point.
3. A saturation operation is performed on a numerical value obtained by shifting a sum of a third product result and a fourth product result to the right by the second quantity of bits, to obtain the pixel value of the to-be-processed pixel point, the third product result being obtained by multiplying, by the pixel value of the third intermediate pixel point, a difference between 1 shifted to the left by the second quantity of bits and the transverse fixed-point interpolation coefficient, and the fourth product result being obtained by multiplying the transverse fixed-point interpolation coefficient by the pixel value of the fourth intermediate pixel point.
In other words, the sum Ftmp of the third product result and the fourth product result is equal to ((1<<SHIFT_BIT)−u)×L0+u×L1.
The pixel value Fdst of the to-be-processed pixel point is equal to SAT{Ftmp>>SHIFT_BIT}.
In this aspect, the longitudinal interpolation is performed first and then the transverse interpolation is performed to obtain the pixel value of the to-be-processed pixel point.
For the solution of transverse interpolation first and then longitudinal interpolation, after transverse interpolation calculation is performed, a transverse interpolation result (namely, pixel values of the first intermediate pixel point and the second intermediate pixel point) may be stored first, and then the pixel value of the to-be-processed pixel point is obtained through longitudinal interpolation calculation. In this way, repeated calculation of the transverse interpolation can be avoided, and the operation amount can be reduced.
Similarly, for the solution of longitudinal interpolation first and then transverse interpolation, after longitudinal interpolation calculation is performed, a longitudinal interpolation result (namely, pixel values of the third intermediate pixel point and the fourth intermediate pixel point) may be stored first, and then the pixel value of the to-be-processed pixel point is obtained through transverse interpolation calculation. In this way, repeated calculation of the longitudinal interpolation can be avoided, and the operation amount can be reduced.
In actual application, the solution of simultaneously performing the transverse interpolation and the longitudinal interpolation, the solution of transverse interpolation first and then longitudinal interpolation, or the solution of longitudinal interpolation first and then transverse interpolation may be selected based on an actual situation. An advantage of the solution of simultaneously performing the transverse interpolation and the longitudinal interpolation is that no additional memory needs to be allocated, which avoids the delay and power consumption caused by read/write operations on the memory. An advantage of the solution of transverse interpolation first and then longitudinal interpolation, or of longitudinal interpolation first and then transverse interpolation, is that some repeated interpolation operations can be avoided, but additional memory needs to be allocated to store an interpolation result. Therefore, in actual application, an appropriate solution may be selected to perform image scaling after the pros and cons are weighed. For example, when the size of the source image is greater than the size of the destination image, the solution of simultaneously performing the transverse interpolation and the longitudinal interpolation is preferably selected. When the size of the source image is less than the size of the destination image, the solution of transverse interpolation first and then longitudinal interpolation is preferably selected to avoid repeated calculation of transverse linear interpolation, or the solution of longitudinal interpolation first and then transverse interpolation is preferably selected to avoid repeated calculation of longitudinal linear interpolation.
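A minimal dispatch sketch of the selection heuristic above; the function name, the scheme labels, and the use of total pixel count as the size comparison are assumptions:

```python
def choose_interpolation_scheme(src_w, src_h, dst_w, dst_h):
    """Pick an interpolation order: downscaling favors the combined scheme
    (no extra memory, no memory read/write latency), while upscaling favors
    a two-pass scheme that caches intermediate results to avoid repeated
    interpolation work."""
    if src_w * src_h > dst_w * dst_h:
        return "combined"   # transverse and longitudinal together
    return "two_pass"       # e.g., transverse first, then longitudinal
```

In practice the comparison criterion and the exact crossover point would be tuned to the target device's memory bandwidth and power budget.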
An apparatus aspect described herein is described below, which may be configured for performing the method aspect described herein. For details not disclosed in the apparatus aspect described herein, reference is made to the method aspect described herein.
The scaling coefficient determination module 510 is configured to obtain a transverse scaling coefficient and a longitudinal scaling coefficient as integers through a fixed-point operation based on a size of a source image, a size of a destination image, and a displacement value as an integer.
The interpolation coefficient determination module 520 is configured to obtain, for each to-be-processed pixel point in the destination image, coordinates of a source pixel point in the source image corresponding to the to-be-processed pixel point, and a fixed-point interpolation coefficient as an integer through the fixed-point operation based on coordinates of the to-be-processed pixel point, the transverse scaling coefficient, the longitudinal scaling coefficient, and the displacement value.
The pixel value determination module 530 is configured to obtain a pixel value of the to-be-processed pixel point through the fixed-point operation based on a pixel value of the source pixel point, the fixed-point interpolation coefficient, and the displacement value.
The image generation module 540 is configured to obtain, based on the pixel value of each to-be-processed pixel point in the destination image, the destination image after the source image is scaled.
In some aspects, the fixed-point operation includes an integer division method, the size including a width size and a height size, the width size and the height size including a width value and a height value. The scaling coefficient determination module 510 includes a transverse coefficient determination sub-module and a longitudinal coefficient determination sub-module.
The transverse coefficient determination sub-module is configured to divide, by the width value of the destination image through the integer division method, the width value of the source image shifted to the left by a first quantity of bits, and use the obtained quotient as the transverse scaling coefficient, the first quantity of bits being determined based on the displacement value.
The longitudinal coefficient determination sub-module is configured to divide, by the height value of the destination image through the integer division method, the height value of the source image shifted to the left by the first quantity of bits, and use the obtained quotient as the longitudinal scaling coefficient.
In some aspects, the interpolation coefficient determination module 520 includes an abscissa determination sub-module, an ordinate determination sub-module, a coordinate determination sub-module, and an interpolation coefficient determination sub-module.
The abscissa determination sub-module is configured to shift a product of an abscissa of the to-be-processed pixel point and the transverse scaling coefficient to the right by a first quantity of bits, to obtain an abscissa of a first source pixel point in the source image corresponding to the to-be-processed pixel point, the first quantity of bits being determined based on the displacement value.
The ordinate determination sub-module is configured to shift a product of an ordinate of the to-be-processed pixel point and the longitudinal scaling coefficient to the right by the first quantity of bits, to obtain an ordinate of the first source pixel point.
The coordinate determination sub-module is configured to obtain the coordinates of the source pixel point in the source image corresponding to the to-be-processed pixel point based on the abscissa and the ordinate of the first source pixel point and the size of the source image.
The interpolation coefficient determination sub-module is configured to obtain the fixed-point interpolation coefficient based on the abscissa and the ordinate of the first source pixel point, the abscissa and the ordinate of the to-be-processed pixel point, the transverse scaling coefficient, the longitudinal scaling coefficient, and the displacement value.
In some aspects, the size includes a width size and a height size, the width size and the height size including a width value and a height value, and the source pixel point in the source image corresponding to the to-be-processed pixel point including the first source pixel point, a second source pixel point, a third source pixel point, and a fourth source pixel point.
The coordinate determination sub-module is specifically configured to:
In some aspects, the fixed-point interpolation coefficient includes a transverse fixed-point interpolation coefficient and a longitudinal fixed-point interpolation coefficient. The interpolation coefficient determination sub-module is configured to: subtract, from a value obtained by multiplying the abscissa of the to-be-processed pixel point by the transverse scaling coefficient, a value obtained by shifting the abscissa of the first source pixel point to the left by the first quantity of bits, to obtain a first difference, and shift the first difference to the right by a second quantity of bits, to obtain the transverse fixed-point interpolation coefficient; and subtract, from a value obtained by multiplying the ordinate of the to-be-processed pixel point by the longitudinal scaling coefficient, a value obtained by shifting the ordinate of the first source pixel point to the left by the first quantity of bits, to obtain a second difference, and shift the second difference to the right by the second quantity of bits, to obtain the longitudinal fixed-point interpolation coefficient, the first quantity of bits and the second quantity of bits being determined based on the displacement value.
In some aspects, the pixel value determination module 530 includes a first intermediate value determination sub-module, a second intermediate value determination sub-module, a third intermediate value determination sub-module, a fourth intermediate value determination sub-module, and a pixel value determination sub-module.
The first intermediate value determination sub-module is configured to multiply a difference between 1 shifted to the left by a second quantity of bits and the transverse fixed-point interpolation coefficient by a first saturation operation result, to obtain a first intermediate value, the first saturation operation result being obtained by performing a saturation operation on a first calculated value, the first calculated value being obtained by multiplying a difference between 1 shifted to the left by the second quantity of bits and the longitudinal fixed-point interpolation coefficient by a pixel value of the first source pixel point and shifting the product to the right by the second quantity of bits, and the second quantity of bits being determined based on the displacement value.
The second intermediate value determination sub-module is configured to multiply the transverse fixed-point interpolation coefficient by a second saturation operation result, to obtain a second intermediate value, the second saturation operation result being obtained by performing a saturation operation on a second calculated value, and the second calculated value being obtained by multiplying the difference between 1 shifted to the left by the second quantity of bits and the longitudinal fixed-point interpolation coefficient by a pixel value of the second source pixel point and shifting the product to the right by the second quantity of bits.
The third intermediate value determination sub-module is configured to multiply the difference between 1 shifted to the left by the second quantity of bits and the transverse fixed-point interpolation coefficient by a third saturation operation result, to obtain a third intermediate value, the third saturation operation result being obtained by performing a saturation operation on a third calculated value, and the third calculated value being obtained by shifting a product of the longitudinal fixed-point interpolation coefficient and a pixel value of the third source pixel point to the right by the second quantity of bits.
The fourth intermediate value determination sub-module is configured to multiply the transverse fixed-point interpolation coefficient by a fourth saturation operation result, to obtain a fourth intermediate value, the fourth saturation operation result being obtained by performing a saturation operation on a fourth calculated value, and the fourth calculated value being obtained by shifting a product of the longitudinal fixed-point interpolation coefficient and a pixel value of the fourth source pixel point to the right by the second quantity of bits.
The pixel value determination sub-module is configured to obtain the pixel value of the to-be-processed pixel point based on the first intermediate value, the second intermediate value, the third intermediate value, and the fourth intermediate value.
In some aspects, the pixel value determination sub-module is configured to add the first intermediate value, the second intermediate value, the third intermediate value, and the fourth intermediate value to obtain a summation result, and perform a saturation operation on a numerical value obtained by shifting the summation result to the right by the second quantity of bits, to obtain the pixel value of the to-be-processed pixel point.
In some aspects, the pixel value determination module 530 includes a first determination sub-module, a second determination sub-module, and a saturation operation sub-module.
The first determination sub-module is configured to determine a sum of a fifth saturation operation result and a sixth saturation operation result as a pixel value of a first intermediate pixel point, the fifth saturation operation result being obtained by performing a saturation operation on a fifth calculated value, the fifth calculated value being obtained by multiplying the difference between 1 shifted to the left by a second quantity of bits and the transverse fixed-point interpolation coefficient by a pixel value of the first source pixel point and shifting the product to the right by the second quantity of bits, the sixth saturation operation result being obtained by performing a saturation operation on a sixth calculated value, and the sixth calculated value being obtained by multiplying the transverse fixed-point interpolation coefficient by a pixel value of the second source pixel point and shifting the product to the right by the second quantity of bits.
The second determination sub-module is configured to determine a sum of a seventh saturation operation result and an eighth saturation operation result as a pixel value of a second intermediate pixel point, the seventh saturation operation result being obtained by performing a saturation operation on a seventh calculated value, the seventh calculated value being obtained by multiplying the difference between 1 shifted to the left by the second quantity of bits and the transverse fixed-point interpolation coefficient by a pixel value of the third source pixel point and shifting the product to the right by the second quantity of bits, the eighth saturation operation result being obtained by performing a saturation operation on an eighth calculated value, and the eighth calculated value being obtained by multiplying the transverse fixed-point interpolation coefficient by a pixel value of the fourth source pixel point and shifting the product to the right by the second quantity of bits.
The saturation operation sub-module is configured to perform a saturation operation on a numerical value obtained by shifting a sum of a first product result and a second product result to the right by the second quantity of bits, to obtain the pixel value of the to-be-processed pixel point, the first product result being obtained by multiplying, by the pixel value of the first intermediate pixel point, a difference between 1 shifted to the left by the second quantity of bits and the longitudinal fixed-point interpolation coefficient, and the second product result being obtained by multiplying the longitudinal fixed-point interpolation coefficient by the pixel value of the second intermediate pixel point.
In some aspects, the pixel value determination module 530 includes a third determination sub-module, a fourth determination sub-module, and a saturation operation sub-module.
The third determination sub-module is configured to determine a sum of a ninth saturation operation result and a tenth saturation operation result as a pixel value of a third intermediate pixel point, the ninth saturation operation result being obtained by performing a saturation operation on a ninth calculated value, the ninth calculated value being obtained by multiplying a difference between 1 shifted to the left by a second quantity of bits and the longitudinal fixed-point interpolation coefficient by a pixel value of the first source pixel point and shifting the product to the right by the second quantity of bits, the tenth saturation operation result being obtained by performing a saturation operation on a tenth calculated value, and the tenth calculated value being obtained by multiplying the longitudinal fixed-point interpolation coefficient by a pixel value of the third source pixel point and shifting the product to the right by the second quantity of bits.
The fourth determination sub-module is configured to determine a sum of an eleventh saturation operation result and a twelfth saturation operation result as a pixel value of a fourth intermediate pixel point, the eleventh saturation operation result being obtained by performing a saturation operation on an eleventh calculated value, the eleventh calculated value being obtained by multiplying a difference between 1 shifted to the left by the second quantity of bits and the longitudinal fixed-point interpolation coefficient by a pixel value of the second source pixel point and shifting the product to the right by the second quantity of bits, the twelfth saturation operation result being obtained by performing a saturation operation on a twelfth calculated value, and the twelfth calculated value being obtained by multiplying the longitudinal fixed-point interpolation coefficient by a pixel value of the fourth source pixel point and shifting the product to the right by the second quantity of bits.
The saturation operation sub-module is configured to perform a saturation operation on a numerical value obtained by shifting a sum of a third product result and a fourth product result to the right by the second quantity of bits, to obtain the pixel value of the to-be-processed pixel point, the third product result being obtained by multiplying, by the pixel value of the third intermediate pixel point, a difference between 1 shifted to the left by the second quantity of bits and the transverse fixed-point interpolation coefficient, and the fourth product result being obtained by multiplying the transverse fixed-point interpolation coefficient by the pixel value of the fourth intermediate pixel point.
In some aspects, the displacement value is an integer in an interval of [7, 11].
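The fixed-point blend performed by the third determination sub-module, the fourth determination sub-module, and the saturation operation sub-module can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the names `sat` and `blend_pixel`, the 8-bit pixel range assumed by the saturation operation, and the choice of displacement value q = 10 (an integer in [7, 11], per the foregoing aspect) are assumptions for illustration.

```python
def sat(x, lo=0, hi=255):
    """Saturation operation: clamp to an assumed 8-bit pixel range."""
    return max(lo, min(hi, x))

def blend_pixel(p1, p2, p3, p4, h_coef, v_coef, q=10):
    """Fixed-point bilinear blend of four source pixel points.

    p1..p4 are the pixel values of the first to fourth source pixel
    points; h_coef and v_coef are the transverse and longitudinal
    fixed-point interpolation coefficients in [0, 1 << q]; q is the
    displacement value (the "second quantity of bits").
    """
    one = 1 << q  # 1 shifted to the left by the second quantity of bits
    # Third intermediate pixel point: sum of the ninth and tenth
    # saturation operation results (vertical blend of p1 and p3).
    m3 = sat(((one - v_coef) * p1) >> q) + sat((v_coef * p3) >> q)
    # Fourth intermediate pixel point: sum of the eleventh and twelfth
    # saturation operation results (vertical blend of p2 and p4).
    m4 = sat(((one - v_coef) * p2) >> q) + sat((v_coef * p4) >> q)
    # Horizontal blend of the two intermediate pixel points, shifted
    # right by q and saturated, giving the to-be-processed pixel value.
    return sat(((one - h_coef) * m3 + h_coef * m4) >> q)
```

Because every step uses only integer multiplication, addition, and bit shifts, the blend avoids the floating-point operations of the conventional bilinear method; for example, with both coefficients at zero the result reduces to `p1`, and with both at `1 << q` it reduces to `p4`.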
When the apparatus provided in the foregoing aspect implements its functions, the division into the foregoing functional modules is merely used as an example for description. In a practical application, the functions may be allocated to different functional modules as required; that is, an internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus provided in the foregoing aspect is based on the same concept as the method aspect. For details of the specific implementation process, reference is made to the method aspect. Details are not described herein again.
Generally, the computer device 600 includes a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 601 may be implemented in at least one hardware form of digital signal processing (DSP), a field programmable gate array (FPGA), and a programmable logic array (PLA). The processor 601 may also include a main processor and a coprocessor. The main processor is a processor configured to process data in an awake state, and is also referred to as a central processing unit (CPU). The coprocessor is a low-power processor configured to process data in a standby state. In some aspects, the processor 601 may be integrated with a graphics processing unit (GPU). The GPU is configured to render and draw content that needs to be displayed on a display screen. In some aspects, the processor 601 may further include an AI processor. The AI processor is configured to process computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media. The computer-readable storage medium may be non-transitory. The memory 602 may further include a high-speed random-access memory (RAM) and a nonvolatile memory, for example, one or more disk storage devices or flash storage devices. In some aspects, the non-transitory computer-readable storage medium in the memory 602 is configured to store a computer program. The computer program is configured to be executed by one or more processors to implement the foregoing image scaling method.
A person skilled in the art may understand that the structure shown in
In some aspects, a computer-readable storage medium is further provided, having a computer program stored therein, the computer program being loaded and executed by a processor to implement the foregoing image scaling method.
In some aspects, the computer-readable storage medium may include a ROM, a RAM, a solid state drive (SSD), an optical disc, or the like. The RAM may include a resistive RAM (ReRAM) and a dynamic RAM (DRAM).
In some aspects, a computer program product is further provided, including a computer program, the computer program being stored in a computer-readable storage medium, and a processor reading the computer program from the computer-readable storage medium and executing the computer program, to implement the foregoing image scaling method.
For a face (or another biological feature) recognition technology involved in this application, when the foregoing aspects are applied to a specific product or technology, the collection, use, and processing of related data need to comply with requirements of national laws and regulations. Before face (or another biological feature) information is collected, the information processing rule needs to be notified to, and individual consent (or another legal basis) needs to be obtained from, the target object; the face (or another biological feature) information is processed in strict accordance with the requirements of laws and regulations and the personal information processing rule; and technical measures are taken to ensure the security of relevant data.
“A plurality of” mentioned herein means two or more. The term “and/or” describes an association relationship between associated objects and indicates that three relationships may exist. For example, A and/or B may represent the following three cases: only A exists, both A and B exist, and only B exists. The character “/” generally indicates an “or” relationship between the associated objects before and after it. In addition, the operation numbers described in this specification merely show, as an example, a possible execution sequence of the operations. In some other aspects, the foregoing operations may not be performed in the numbered sequence. For example, two operations with different numbers may be performed simultaneously, or in a sequence contrary to that shown in the figure. This is not limited in the aspects described herein.
The foregoing descriptions are merely illustrative aspects described herein, and are not intended to limit this application. Any modification, equivalent replacement, or improvement made within the spirit and principle described herein shall fall within the protection scope described herein.
| Number | Date | Country | Kind |
|---|---|---|---|
| 202310440006.X | Apr 2023 | CN | national |
This application is a continuation application of PCT Application PCT/CN2024/073743, filed Jan. 24, 2024, which claims priority to Chinese Patent Application No. 202310440006.X, filed on Apr. 12, 2023, each entitled “IMAGE SCALING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM”, and each of which is incorporated herein by reference in its entirety.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/CN2024/073743 | Jan 2024 | WO |
| Child | 19079633 | | US |