This application claims priority to Chinese Patent Application No. 202010345814.4, filed with the China National Intellectual Property Administration on Apr. 27, 2020 and entitled “SKIN COLOR DETECTION METHOD AND APPARATUS, TERMINAL, AND STORAGE MEDIUM”, which is incorporated herein by reference in its entirety.
This application relates to the field of image processing technologies, and in particular, to a skin color detection method and apparatus, a terminal, and a storage medium.
With the intelligent development of terminals such as mobile phones, a terminal can implement an increasing number of functions. For example, skin health detection may be implemented by using a mobile phone, to assist with skin care. Currently, a skin detection manner implemented by using a mobile phone determines a skin color based on processing of a shot image of the skin. However, there is a relatively large difference between the skin color detected by a terminal and the real skin color. Consequently, a detection result is inaccurate.
Embodiments of this application provide a skin color detection method and apparatus, a terminal, and a storage medium, to improve accuracy of skin color detection.
According to a first aspect, an embodiment of this application provides a skin color detection method, including: obtaining a face image; determining a face key point in the face image; determining a skin color estimation region of interest and an illumination estimation region of interest in the face image based on the face key point; obtaining a detected skin color value corresponding to the skin color estimation region of interest; obtaining a detected illumination color value corresponding to the illumination estimation region of interest; and using the detected skin color value and the detected illumination color value as feature input of a skin color estimation model, and obtaining a corrected skin color value output by the skin color estimation model.
Optionally, the obtaining a detected skin color value corresponding to the skin color estimation region of interest includes: obtaining, through screening by using an elliptic skin color model, a pixel that is in the skin color estimation region of interest and that meets a preset condition; and calculating an average value of detected skin color values of the skin color estimation region of interest based on the pixel that is in the skin color estimation region of interest and that meets the preset condition, and using the average value as the detected skin color value corresponding to the skin color estimation region of interest.
Optionally, the obtaining a detected illumination color value corresponding to the illumination estimation region of interest is: obtaining the detected illumination color value corresponding to the illumination estimation region of interest by using Gaussian filtering and a shades of gray algorithm.
Optionally, the corrected skin color value output by the skin color estimation model includes an RGB color value or an LAB color value.
Optionally, the obtaining a corrected skin color value output by the skin color estimation model includes: obtaining an RGB color value output by the skin color estimation model. The skin color detection method further includes: converting the RGB color value output by the skin color estimation model into an HSV color value, and determining a skin color tone based on the HSV color value; and converting the RGB color value output by the skin color estimation model into an LAB color value, obtaining a corresponding individual typology angle ITA based on the LAB color value, and determining a skin color code based on the ITA.
Optionally, the obtaining a corrected skin color value output by the skin color estimation model includes: obtaining an RGB color value and an individual typology angle ITA that are output by the skin color estimation model. The skin color detection method further includes: converting the RGB color value output by the skin color estimation model into an HSV color value, and determining a skin color tone based on the HSV color value; and determining a skin color code based on the ITA output by the skin color estimation model.
Optionally, the skin color detection method further includes: generating a skin detection result interface, where the skin detection result interface includes a skin color bar, and the skin color bar includes the skin color code; and generating a skin color details interface in response to a selection instruction of the skin color bar, where the skin color details interface includes the skin color code, the skin color tone, and a skin care suggestion.
Optionally, before the obtaining a face image, the skin color detection method further includes: generating a skin detection interface, where the skin detection interface includes skin detection history information and a skin detection icon; and in response to a selection instruction of the skin detection icon, entering a process of obtaining the face image.
According to a second aspect, an embodiment of this application provides a skin color detection apparatus, including: an image obtaining module, configured to obtain a face image; a key point determining module, configured to determine a face key point in the face image; a region determining module, configured to determine a skin color estimation region of interest and an illumination estimation region of interest in the face image based on the face key point; a skin color detection module, configured to obtain a detected skin color value corresponding to the skin color estimation region of interest; an illumination detection module, configured to obtain a detected illumination color value corresponding to the illumination estimation region of interest; and a skin color correction module, configured to use the detected skin color value and the detected illumination color value as feature input of a skin color estimation model, and obtain a corrected skin color value output by the skin color estimation model.
According to a third aspect, an embodiment of this application provides a skin color detection apparatus, including a processor and a memory. The memory is configured to store at least one instruction, and when the instruction is loaded and executed by the processor, the foregoing skin color detection method is implemented.
According to a fourth aspect, an embodiment of this application provides a terminal, including the foregoing skin color detection apparatus.
According to a fifth aspect, an embodiment of this application provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is run on a computer, the computer is enabled to perform the foregoing skin color detection method.
According to the skin color detection method and apparatus, the terminal, and the storage medium in the embodiments of this application, the skin color estimation region of interest and the illumination estimation region of interest are determined based on the face key point, and detection is separately performed on the two regions to obtain the detected skin color value and the detected illumination color value. The two color values are used as the feature input of the skin color estimation model, to obtain the corrected skin color value output by the skin color estimation model. The skin color estimation model may be established in advance based on training data of different ambient light sources and different terminals. Therefore, the corrected skin color value output by the skin color estimation model is closer to a real skin color, impact on the skin color caused by different characteristics of camera photosensitive elements in the terminals and different ambient illumination is reduced, and accuracy of skin color detection is improved.
Terms used in embodiments of this application are only used to explain specific embodiments of this application, but are not intended to limit this application.
Before the embodiments of this application are described, a problem of a conventional technology is briefly described first. For a terminal such as a mobile phone, different models of cameras have different imaging quality, and the photosensitive elements of different mobile phones respond differently to light at different frequencies. When the external illumination environment changes, different white balance algorithms are triggered. As a result, photos shot by different mobile phones and in environments with different light intensities and color temperatures usually have a relatively large color cast. Consequently, accuracy of skin color detection in a current terminal is relatively poor. The following describes the embodiments of this application.
As shown in
Step 101: Obtain a face image.
Step 102: Determine a face key point in the face image.
Specifically, as shown in
Step 103: Determine a skin color estimation region of interest (Region Of Interest, ROI) and an illumination estimation region of interest ROI in the face image based on the face key point.
For selection of the skin color estimation ROI: after the face key points are determined, spatial location information of the face and the five sense organs can be obtained relatively accurately. To more accurately determine the real skin color of the person in the image, an appropriate region needs to be selected for analysis. This is because not all regions of the face are skin; eyes, eyebrows, lips, and the like do not fall within the skin color analysis range. In addition, depending on the shooting angle, a large shaded area may appear on the outer part of the cheeks and interfere with subsequent skin color analysis. Therefore, in this embodiment of this application, a flat region under the eyes is selected as the ROI for subsequent skin color estimation, to avoid shaded regions, reflective regions, and regions that do not fall within the skin color analysis range. Because the face is left-right symmetrical, two symmetrical regions on the left and right sides are selected during ROI selection. The following uses the left region (from the perspective of an observer) as an example to explain how to select an ROI region. As shown in
For selection of the illumination estimation ROI: first, the face image includes a background region in addition to the face region. A skin color estimation model is used in a subsequent step, and the skin color estimation model is obtained in advance through training based on training data. The collection scenario of the training data is a photo studio, where the background is relatively simple. If the model is trained on the basis of global illumination estimation, it is easily interfered with by the background when applied to an actual scenario. Therefore, illumination estimation is only considered in the face region. An approximate rectangular face region may be first determined by using a face detection algorithm, and that region may be used as the illumination estimation ROI. For example, in this embodiment of this application, as shown in
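As a purely illustrative sketch of the under-eye ROI selection described above: the key-point indices, the 0.15 offset factor, and the rectangle construction below are all hypothetical assumptions for illustration, since the actual region depends on the key points shown in the accompanying figures.

```python
import numpy as np

def cheek_roi(landmarks, image_shape):
    """Pick a flat under-eye cheek rectangle from face landmarks.

    `landmarks` is an (N, 2) array of (x, y) key points. Which indices
    correspond to the eye corner and the nose wing depends on the key
    point detector used, so the indices below are placeholders.
    """
    eye_outer = landmarks[0]   # placeholder: outer corner of the left eye
    nose_wing = landmarks[1]   # placeholder: left wing of the nose
    # start slightly below the eye to avoid lashes and shadow
    top = int(eye_outer[1] + 0.15 * (nose_wing[1] - eye_outer[1]))
    bottom = int(nose_wing[1])
    left = int(eye_outer[0])
    right = int(nose_wing[0])
    h, w = image_shape[:2]
    # clamp the rectangle to the image bounds
    return max(0, top), min(h, bottom), max(0, left), min(w, right)
```

For example, with placeholder landmarks at (40, 60) and (80, 120) in a 200 x 200 image, this sketch returns the rectangle (69, 120, 40, 80).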
Step 104: Obtain a detected skin color value corresponding to the skin color estimation region of interest.
Step 105: Obtain a detected illumination color value corresponding to the illumination estimation region of interest.
It should be noted that an execution sequence between step 104 and step 105 is not limited in this embodiment of this application.
Step 106: Use the detected skin color value and the detected illumination color value as feature input of the skin color estimation model, and obtain a corrected skin color value output by the skin color estimation model.
Specifically, the skin color estimation model is a regression model, which may be obtained in advance through training based on training data. For example, a face image is collected in advance, and a real color value of a skin color of a person is detected by using a skin color detector; for example, color values of a total of 12 points on the left and right cheeks of a photographed person are detected. The color value may include an RGB color value and an LAB color value. An individual typology angle (Individual Typology Angle, ITA) may be further detected. The RGB color value includes R, G, and B, where R represents a red color value, G represents a green color value, and B represents a blue color value. The LAB color value represents coordinates in the LAB color space. L represents lightness, which is mainly affected by melanin in skin color measurement. A value of a represents red-green chromaticity, where a positive value represents red and a negative value represents green; the red-green chromaticity is mainly affected by the oxygenation degree of oxygenated hemoglobin in skin color measurement. A value of b represents yellow-blue chromaticity, where a positive value represents yellow and a negative value represents blue; the yellow-blue chromaticity can reflect the yellow degree of a skin. The ITA can reflect a degree of pigmentation in the skin: a higher ITA indicates a lighter skin color, and a lower ITA indicates a darker skin color. An average value of the detected color values corresponding to the 12 points may be used as the real value of the skin color. For example, as shown in Table 1, for each person, a plurality of mobile phones are used to shoot face images in the 18 light source environments shown in Table 1, with the camera 20 to 40 cm away from the face and the flash turned on.
Table 1 shows 18 light source environments, and each light source environment has a corresponding illumination color value. For example, a mobile phone T1 is used to shoot a face image of a tester P1 under a light source type 1, to obtain a face image K1. A real color value X1 of a skin color is obtained by using the skin color detector to measure the tester P1. The mobile phone T1 is used to perform skin color detection on the face image K1, to obtain a detected skin color value Y1. A skin color detection process herein may be the same as the manner in step 104. The mobile phone T1 is used to perform illumination detection on the face image K1, to obtain a detected illumination color value Z1. An illumination detection process herein may be the same as the manner in step 105. A first set of training data is obtained, including X1, Y1, and Z1. The mobile phone T1 is used to shoot a face image of the tester P1 under a light source type 2, to obtain a face image K2. The real color value of the skin color of the tester P1 is X1. The mobile phone T1 is used to perform skin color detection on the face image K2, to obtain a detected skin color value Y2. The mobile phone T1 is used to perform illumination detection on the face image K2, to obtain a detected illumination color value Z2. A second set of training data is obtained, including X1, Y2, and Z2. By analogy, a plurality of sets of training data are obtained by using different mobile phones and in different light source environments, and the obtained plurality of sets of training data are used to perform training, to establish a skin color estimation model. A training process is performed in advance. In the skin color detection method in this embodiment of this application, in step 106, the detected skin color value and the detected illumination color value that are currently detected are used as the feature input of the skin color estimation model. 
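The assembly of training tuples described above may be sketched as follows. This is a hypothetical illustration: the function names, callables, and record structure are assumptions standing in for the capture pipeline, not the application's actual data format.

```python
# Hypothetical sketch of assembling training records as described above.
# Each record pairs the measured real skin color X with the detected skin
# color Y and the detected illumination Z for one (phone, light) shot.

def build_training_data(phones, testers, light_sources, measure_real,
                        detect_skin, detect_illum, shoot):
    """All arguments are placeholder lists/callables: `shoot(phone, tester,
    light)` returns a face image, `measure_real(tester)` returns the skin
    detector ground truth, and `detect_skin`/`detect_illum` stand in for
    steps 104 and 105."""
    data = []
    for tester in testers:
        x = measure_real(tester)            # real value, fixed per tester
        for phone in phones:
            for light in light_sources:
                image = shoot(phone, tester, light)
                y = detect_skin(image)      # step 104: detected skin color
                z = detect_illum(image)     # step 105: detected illumination
                data.append((x, y, z))
    return data
```

One pass over all testers, phones, and the 18 light sources yields the plurality of (X, Y, Z) training sets used to fit the model.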
Because the skin color estimation model is established based on training data of different light source environments and different terminals during shooting, the corrected skin color value obtained based on the skin color estimation model is closer to the real skin color.
According to the skin color detection method in this embodiment of this application, the skin color estimation region of interest and the illumination estimation region of interest are determined based on the face key point, and detection is separately performed on the two regions to obtain the detected skin color value and the detected illumination color value. The two color values are used as the feature input of the skin color estimation model, to obtain the corrected skin color value output by the skin color estimation model. The skin color estimation model may be established in advance based on training data of different ambient light sources and different terminals. Therefore, the corrected skin color value output by the skin color estimation model is closer to the real skin color, impact on the skin color caused by different characteristics of camera photosensitive elements in the terminals and different ambient illumination is reduced, and accuracy of skin color detection is improved.
Optionally, step 104 of obtaining a detected skin color value corresponding to the skin color estimation region of interest includes: obtaining, through screening by using an elliptic skin color model, a pixel that is in the skin color estimation region of interest and that meets a preset condition; and calculating an average value of detected skin color values of the skin color estimation region of interest based on the pixel that is in the skin color estimation region of interest and that meets the preset condition, and using the average value as the detected skin color value corresponding to the skin color estimation region of interest.
Specifically, the skin condition varies from person to person: some people have a smooth face, while others have many pimples, spots, moles, or other non-skin-color regions, and these regions tend to interfere with skin color estimation to different degrees. Therefore, in step 104, these non-skin-color pixels need to be excluded, only pure skin pixels are retained, and skin color detection is performed based on the pure skin pixels. A skin color is mainly determined by blood and melanin: blood gives the skin a yellow-red tone, and melanin gives it a chocolate-brown tone. The saturation of the skin is limited, and illumination changes in a daily living environment are also limited. Under these limited conditions, skin colors converge to a specific bounded region in a color space. The elliptic skin color model uses the YCrCb color space. It is assumed that the color of a pixel is (R, G, B). After conversion into the YCrCb color space, the color is as follows:
Because the skin color is greatly affected by the value of Y, Y needs to be adjusted. That is, piecewise nonlinear transformation is performed on each YCbCr color value. After the transformation, the color value is expressed as follows:
Herein, Wcb=46.95, WLcb=23, WHcb=14, Wcr=38.76, WLcr=20, WHcr=10, Kl=125, Kh=188, Ymin=16, and Ymax=235. Finally, the distribution may be expressed by using an elliptic equation:
Herein, cx=109.38, cy=152.02, θ=2.53, ecx=1.60, ecy=2.41, a=25.39, and b=14.03.
As shown in
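For illustration only, the elliptic screening and averaging of step 104 may be sketched as follows. This sketch uses the standard BT.601 RGB-to-YCbCr conversion and omits the piecewise nonlinear luma compensation described above, so it is a simplification rather than the exact transformation of this embodiment; the ellipse constants are the commonly published Hsu et al.-style values.

```python
import numpy as np

# Elliptic skin color model constants (Hsu et al.-style values)
CX, CY = 109.38, 152.02   # ellipse center in the (Cb, Cr) plane
THETA = 2.53              # rotation angle of the ellipse, in radians
ECX, ECY = 1.60, 2.41     # center offset after rotation
A, B = 25.39, 14.03       # ellipse semi-axes

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 conversion from RGB (0..255) to YCbCr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb):
    """Boolean mask of pixels falling inside the skin-color ellipse."""
    ycbcr = rgb_to_ycbcr(np.asarray(rgb, dtype=np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    # rotate each (Cb, Cr) point into the ellipse's principal axes
    c, s = np.cos(THETA), np.sin(THETA)
    x = c * (cb - CX) + s * (cr - CY)
    y = -s * (cb - CX) + c * (cr - CY)
    return (x - ECX) ** 2 / A ** 2 + (y - ECY) ** 2 / B ** 2 <= 1.0

def mean_skin_color(rgb):
    """Average RGB over pixels that pass the elliptic screening."""
    mask = skin_mask(rgb)
    if not mask.any():
        return None
    return np.asarray(rgb, dtype=np.float64)[mask].mean(axis=0)
```

A typical skin tone such as RGB (220, 180, 150) falls inside the ellipse, while a saturated green pixel is screened out, so non-skin pixels do not pull the average.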
Optionally, step 105 of obtaining a detected illumination color value corresponding to the illumination estimation region of interest is: obtaining the detected illumination color value corresponding to the illumination estimation region of interest by using Gaussian filtering and a shades of gray algorithm.
Specifically, besides the characteristics of a camera photosensitive element of a terminal, the illumination condition is another important factor that affects an image color. Under a standard light source, imaging of an object is considered to be closest to its real color. When the color temperature or intensity of illumination on the surface of the object changes, the imaging color of the object also changes. Objects that have different colors, under different illumination, may even present the same color. The shades of gray algorithm is an unsupervised algorithm in which the color of a scene light source is detected based on an image feature. An advantage of the algorithm is that no training needs to be performed in advance. The shades of gray algorithm improves on the gray world algorithm by using the Minkowski norm; it is based on the assumption that the image remains achromatic on average after a nonlinear reversible transformation, that is, that the average color of the image is the illumination color of the image. A formal description is as follows:
Herein, x represents a pixel value, and p represents the order of the Minkowski norm. Therefore, the illumination chromaticity may be calculated:
Further, it has been shown that Gaussian pre-filtering of the image can further improve the illumination estimation effect. Therefore, the final illumination estimation form is as follows, where Gσ represents a Gaussian filter.
An illumination estimation result obtained by using the foregoing algorithm is the detected illumination color value corresponding to the illumination estimation region of interest.
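The Gaussian-filtered shades of gray estimation above may be sketched as follows. The Minkowski order p=6 and the filter width sigma are assumed hyperparameters for illustration, not values specified by this application.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shades_of_gray(image, p=6, sigma=2.0):
    """Estimate the illumination color of an H x W x 3 image using the
    shades of gray algorithm with Gaussian pre-filtering.

    Returns a unit-norm RGB illuminant estimate. `p` is the order of the
    Minkowski norm; `sigma` is the Gaussian filter width (both assumed).
    """
    img = np.asarray(image, dtype=np.float64)
    # Gaussian pre-filtering, applied independently per channel
    smoothed = np.stack(
        [gaussian_filter(img[..., c], sigma=sigma) for c in range(3)],
        axis=-1,
    )
    # Minkowski p-norm mean per channel (p -> infinity gives max-RGB,
    # p = 1 gives the plain gray world average)
    e = np.power(np.power(smoothed, p).mean(axis=(0, 1)), 1.0 / p)
    return e / np.linalg.norm(e)  # normalized illuminant chromaticity
```

On a uniformly gray patch the estimate is the achromatic direction (1, 1, 1)/sqrt(3); a red color cast raises the red component of the estimate accordingly.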
Optionally, in step 106, the corrected skin color value output by the skin color estimation model includes an RGB color value or an LAB color value.
Specifically, for example, the detected skin color value obtained in step 104 is an RGB color value (R1, G1, B1), the detected illumination color value obtained in step 105 is an RGB color value (R2, G2, B2), and the corrected skin color value output by the skin color estimation model is an RGB color value (R3, G3, B3). The skin color estimation model may be a support vector regression (support vector regression, SVR) model, and the SVR may be considered as a regression version of a support vector machine (support vector machine, SVM). The SVM is mainly used for classification problems. The SVM maps an input feature to a high-dimensional space by using a kernel function, and finds a maximized support vector interval, to determine a hyperplane that separates features of different samples. The SVM aims to maximize the distance from the nearest sample point to the hyperplane, and the SVR aims to minimize the distance from the farthest sample point to the hyperplane. The skin color estimation model may include three models SVR1, SVR2, and SVR3, which are respectively used to implement the mapping relationships for the color values of the three channels.
R3 = SVR1(R1, G1, B1, R2, G2, B2)
G3 = SVR2(R1, G1, B1, R2, G2, B2)
B3 = SVR3(R1, G1, B1, R2, G2, B2)
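A minimal sketch of this three-channel SVR mapping, using scikit-learn's SVR with synthetic stand-in training data. The data, kernel, and C value are assumptions for illustration only; the application's actual model is trained on the photo-studio data described earlier.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in training data: each row is the feature input
# (R1, G1, B1, R2, G2, B2) = detected skin color + detected illumination.
X_train = rng.uniform(0, 255, size=(200, 6))
# Stand-in "real" skin colors; an arbitrary relation, NOT measured data.
y_train = X_train[:, :3] * 0.8 + X_train[:, 3:] * 0.1

# One SVR per output channel: R3, G3, B3.
models = [
    SVR(kernel="rbf", C=100.0).fit(X_train, y_train[:, c])
    for c in range(3)
]

def corrected_skin_color(detected_skin, detected_illum):
    """Map the six-value feature input to a corrected (R3, G3, B3)."""
    feat = np.concatenate([detected_skin, detected_illum]).reshape(1, -1)
    return np.array([m.predict(feat)[0] for m in models])
```

At inference time, the currently detected skin color value and illumination color value are concatenated into the six-dimensional feature vector and each channel model produces one component of the corrected color.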
In another feasible implementation, input and output of the skin color estimation model may be LAB color values, that is, LAB coordinates are used as the input and the output of the model. In addition, in another feasible implementation, the skin color estimation model may alternatively be another type of model.
Optionally, in step 106, the obtaining a corrected skin color value output by the skin color estimation model includes: obtaining an RGB color value output by the skin color estimation model. The skin color detection method further includes:
converting the RGB color value output by the skin color estimation model into an HSV color value, and determining a skin color tone based on the HSV color value; and
converting the RGB color value output by the skin color estimation model into an LAB color value, obtaining a corresponding individual typology angle (Individual Typology Angle, ITA) based on the LAB color value, and determining a skin color code based on the ITA.
Specifically, the HSV color value is a coordinate value in the hue, saturation, and value (Hue, Saturation, Value) color space, and the skin color tone needs to be determined based on the HSV color value. Therefore, the RGB color value needs to be first converted into the HSV color value. The ITA can reflect a degree of pigmentation in the skin: a higher ITA indicates a lighter skin color, and a lower ITA indicates a darker skin color. The skin color code is determined by the ITA, and the ITA needs to be calculated by using the LAB color value. Therefore, the RGB color value needs to be first converted into the LAB color value, then the ITA is calculated based on the LAB color value, and finally the skin color code is determined based on the ITA. The skin color tone and the skin color code together may be used as a basis for skin care. The process of calculating the ITA based on the LAB color value may specifically be
ITA = arctan((L − 50)/b) × 180/π
where L represents lightness in the LAB color space, and b represents yellow-blue chromaticity in the LAB color space.
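The ITA computation, together with an ITA-based skin color code lookup, can be sketched as follows. The six-class thresholds are the commonly used Chardon-style classification and the class names are placeholders; the source does not specify the exact codes used by this application.

```python
import math

def ita_from_lab(L, b):
    """Individual Typology Angle in degrees from LAB lightness L and
    yellow-blue chromaticity b: ITA = arctan((L - 50) / b) * 180 / pi.
    atan2 is used so b = 0 does not divide by zero."""
    return math.degrees(math.atan2(L - 50.0, b))

def skin_code_from_ita(ita):
    """Map ITA to a skin color class (Chardon-style thresholds;
    placeholder class names, not this application's actual codes)."""
    if ita > 55:
        return "very light"
    elif ita > 41:
        return "light"
    elif ita > 28:
        return "intermediate"
    elif ita > 10:
        return "tan"
    elif ita > -30:
        return "brown"
    return "dark"
```

For instance, L = 70 and b = 15 give an ITA of about 53.1 degrees, a relatively light skin color; lower ITA values map to progressively darker classes.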
Optionally, in step 106, the obtaining a corrected skin color value output by the skin color estimation model includes: obtaining an RGB color value and an individual typology angle ITA that are output by the skin color estimation model. The skin color detection method further includes:
converting the RGB color value output by the skin color estimation model into an HSV color value, and determining a skin color tone based on the HSV color value; and
determining a skin color code based on the ITA output by the skin color estimation model.
Specifically, the skin color estimation model may further include a fourth model SVR4, and ITA=SVR4(R1, G1, B1, R2, G2, B2). The SVR4 is used to indicate a mapping relationship between an ITA and a color value. That is, in this embodiment, the ITA does not need to be calculated by converting the RGB color value, but the ITA is directly output by the skin color estimation model. The skin color code is determined by using the ITA. For the skin color tone, the RGB color value may be first converted into the HSV color value in a similar manner, and then the skin color tone is determined based on the HSV color value.
For determining the skin color tone, the technical solution in this embodiment of this application uses the cold, neutral, and warm tone division of Pantone skin colors as a standard: statistics on the distribution of cold, neutral, and warm Pantone skin colors in the H channel are collected, and tone thresholds are set. The detected skin color is converted to the HSV space, and the tone level is determined based on the H value. For example, if H>12, the tone is determined to be warm; if 12≥H>10, the tone is determined to be neutral; and if H≤10, the tone is determined to be cold.
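The H-channel thresholding described above can be sketched as follows; Python's standard colorsys module returns hue in [0, 1), so it is scaled to degrees before comparing against the thresholds given in the text.

```python
import colorsys

def skin_tone_from_rgb(r, g, b):
    """Classify the tone from the H channel after RGB -> HSV conversion,
    using the thresholds given in the text: H > 12 is warm,
    10 < H <= 12 is neutral, and H <= 10 is cold (H in degrees)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_deg = h * 360.0
    if h_deg > 12:
        return "warm"
    elif h_deg > 10:
        return "neutral"
    return "cold"
```

For example, RGB (255, 200, 180) has a hue of 16 degrees and is classified as warm, while (255, 220, 215) has a hue of 7.5 degrees and is classified as cold.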
In addition, it should be noted that, in this embodiment of this application, only the RGB color value is used as an example for description. In this embodiment of this application, a color value representation form of a skin color or illumination is not limited, and may be an RGB color value, or may be a color value represented by coordinates in another color space.
Optionally, as shown in
Specifically, for example, on the skin detection result interface shown in
Optionally, as shown in
Specifically, for example, rectangular boxes in
The following provides a comparison between a result detected in this embodiment of this application and a real value detected by a dedicated detector, to further describe an effect of this embodiment of this application, as shown in Table 2.
Table 2 shows average errors at a plurality of terminals, including average errors between RGB color values detected in this embodiment of this application and real RGB color values detected by a dedicated measuring instrument at three terminals, and average errors between ITAs detected in this embodiment of this application and real ITAs detected by the dedicated measuring instrument at the three terminals. It may be learned from the table that an error between a result detected in this embodiment of this application and a real result is relatively small.
As shown in
An embodiment of this application further provides a skin color detection apparatus, including: an image obtaining module, configured to obtain a face image; a key point determining module, configured to determine a face key point in the face image; a region determining module, configured to determine a skin color estimation region of interest and an illumination estimation region of interest in the face image based on the face key point; a skin color detection module, configured to obtain a detected skin color value corresponding to the skin color estimation region of interest; an illumination detection module, configured to obtain a detected illumination color value corresponding to the illumination estimation region of interest; and a skin color correction module, configured to use the detected skin color value and the detected illumination color value as feature input of a skin color estimation model, and obtain a corrected skin color value output by the skin color estimation model.
The skin color detection apparatus may use the skin color detection method in the foregoing embodiment. A specific process and principle are the same as the process and the principle in the foregoing embodiment, and details are not described herein. It should be understood that division of the modules of the skin color detection apparatus is merely division of logical functions, and during actual implementation, the modules may be all or partially integrated into one physical entity, or may be physically separated. In addition, all of these modules may be implemented in a form of software to be invoked by a processing element, or may be implemented in a form of hardware. Alternatively, some modules may be implemented in a form of software to be invoked by a processing element, and some modules may be implemented in a form of hardware. For example, the skin color correction module may be a separately disposed processing element, or may be integrated into, for example, a chip of a terminal for implementation. In addition, the skin color correction module may alternatively be stored in a memory of the terminal in a form of a program, and a processing element of the terminal invokes and executes functions of the foregoing modules. Implementation of another module is similar. In addition, all or some of these modules may be integrated together, or may be independently implemented. The processing element may be an integrated circuit chip and has a signal processing capability. In an implementation process, steps in the foregoing methods or the foregoing modules can be implemented by using a hardware integrated logical circuit in the processing element, or by using instructions in a form of software.
For example, the foregoing modules may be one or more integrated circuits configured to perform the foregoing method, such as one or more application-specific integrated circuits (ASIC), one or more digital signal processors (digital signal processor, DSP), or one or more field programmable gate arrays (FPGA). For another example, when one of the foregoing modules is implemented by a processing element by scheduling a program, the processing element may be a general purpose processor, for example, a central processing unit (CPU) or another processor that may invoke a program. For another example, the modules may be integrated together and implemented in a form of a system-on-a-chip (SOC).
An embodiment of this application further provides a skin color detection apparatus, including a processor and a memory. The memory is configured to store at least one instruction, and when the instruction is loaded and executed by the processor, the skin color detection method in the foregoing embodiment is implemented.
A specific process and principle of the skin color detection method are the same as the process and the principle in the foregoing embodiment, and details are not described herein. There may be one or more processors, and the processor and the memory may be connected by using a bus or in another manner. As a non-transient computer-readable storage medium, the memory may be configured to store a non-transient software program and a non-transient computer-executable program and module. The processor runs the non-transient software program, instruction, and module stored in the memory, to execute various functional applications and data processing, that is, implement the method in any of the foregoing method embodiments. The memory may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function, necessary data, and the like. In addition, the memory may include a high-speed random access memory, and may further include a non-transient memory, for example, at least one magnetic disk storage device, a flash device, or another non-transient solid-state storage device.
An embodiment of this application further provides a terminal, including the foregoing skin color detection apparatus. The terminal may specifically be an intelligent terminal device such as a mobile phone or a tablet computer, and the terminal further includes a display apparatus. The terminal may communicate with a server. A skin color estimation model may be established in advance through training based on data stored on the server. When performing skin color detection, the terminal may invoke the model trained on the server, obtain a result output by the skin color estimation model, and display the result.
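The terminal-side flow above (feed the detected skin color value and the detected illumination color value to the skin color estimation model, then obtain the corrected skin color value it outputs) can be sketched as follows. The feature layout and the `invoke_model` callable are illustrative assumptions standing in for the actual remote invocation of the server-trained model.

```python
def corrected_skin_color(detected_skin, detected_illum, invoke_model):
    # The feature input to the skin color estimation model is the
    # detected skin color value together with the detected illumination
    # color value; `invoke_model` stands in for the call to the model
    # trained on the server and returns the corrected skin color value.
    features = list(detected_skin) + list(detected_illum)
    return invoke_model(features)
```

In a real deployment `invoke_model` would wrap a network request to the server; in a local test it can simply be any callable that maps the six-element feature vector to a corrected color.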
An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is run on a computer, the computer is enabled to perform the foregoing skin color detection method.
All or some of the foregoing embodiments may be implemented by using software, hardware, firmware, or any combination thereof. When software is used to implement the embodiments, all or a part of the embodiments may be implemented in a form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or some of the procedures or functions in this application are generated. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or may be transmitted from a computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, a coaxial cable, an optical fiber, or a digital subscriber line) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by the computer, or a data storage device, for example, a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
In this application, at least one means one or more, and a plurality of means two or more. The term “and/or” describes an association relationship for describing associated objects and represents that three relationships may exist. For example, A and/or B may represent the following three cases: Only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. The character “/” usually indicates an “or” relationship between associated objects. At least one of the following items (pieces) or a similar expression thereof indicates any combination of these items, including a single item (piece) or any combination of a plurality of items (pieces). For example, at least one of a, b, or c may indicate a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c may be singular or plural.
The foregoing descriptions are merely preferred embodiments of this application, but are not intended to limit this application. A person skilled in the art may make various modifications and variations to this application. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of this application shall fall within the protection scope of this application.
Number | Date | Country | Kind
---|---|---|---
202010345814.4 | Apr 2020 | CN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/CN2021/088275 | 4/20/2021 | WO |