This application claims priority to Chinese Patent Application No. 202110978065.3, filed on Aug. 24, 2021, the entire contents of which are incorporated herein by reference.
The present disclosure relates to the field of audio processing technology, and in particular to the field of speech synthesis technology.
An electroacoustic effect may be used as a sound filter to adjust and beautify a sound, and has a wide range of applications in scenarios such as karaoke works or short video works. A high-quality electroacoustic effect can improve the sound quality of a work. If an application product can provide a high-quality electroacoustic effect, it may enhance the competitiveness of the product, enrich the gameplay of the product, and increase the interest of users.
The present disclosure provides a method and an apparatus of processing audio data, a device, a storage medium, and a program product.
According to an aspect of the present disclosure, a method of processing audio data is provided, including: decomposing original audio data to obtain voice audio data and background audio data; performing electroacoustic processing on the voice audio data to obtain electroacoustic voice data; and combining the electroacoustic voice data and the background audio data to obtain target audio data.
According to another aspect of the present disclosure, an apparatus of processing audio data is provided, including: a decomposition module configured to decompose original audio data to obtain voice audio data and background audio data; an electroacoustic processing module configured to perform electroacoustic processing on the voice audio data to obtain electroacoustic voice data; and a synthesis module configured to combine the electroacoustic voice data and the background audio data to obtain target audio data.
According to another aspect of the present disclosure, an electronic device is provided, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to implement the method in embodiments of the present disclosure.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium having computer instructions therein is provided, wherein the computer instructions are configured to cause a computer to implement the method in embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product containing a computer program or instructions is provided, wherein the computer program or instructions, when executed by a processor, cause the processor to implement the method in embodiments of the present disclosure.
It should be understood that content described in this section is not intended to identify key or important features in embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will be easily understood through the following description.
The accompanying drawings are used to understand the present disclosure better and do not constitute a limitation to the present disclosure.
Exemplary embodiments of the present disclosure will be described below with reference to the accompanying drawings, which include various details of embodiments of the present disclosure to facilitate understanding and should be considered as merely exemplary. Therefore, those of ordinary skill in the art should realize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, for clarity and conciseness, descriptions of well-known functions and structures are omitted in the following description.
A method of processing audio data according to an embodiment of the present disclosure will be described below with reference to FIG. 1.
As shown in FIG. 1, the method of processing audio data includes operations S110 to S130.
In operation S110, original audio data is decomposed to obtain voice audio data and background audio data.
In operation S120, electroacoustic processing is performed on the voice audio data to obtain electroacoustic voice data.
In operation S130, the electroacoustic voice data and the background audio data are combined to obtain target audio data.
According to an embodiment of the present disclosure, the original audio data may include, for example, voice information and background sound information, where the voice may be, for example, singing, and the background sound may be, for example, accompaniment music. In this embodiment, the voice information in the original audio data and the background sound information in the original audio data may be separated by using, for example, a sound source separation algorithm, so as to obtain voice audio data including the voice information and background audio data including the background sound information.
According to an embodiment of the present disclosure, by separating the voice information and the background sound information in the original audio data, performing electroacoustic processing on the voice information, and combining the electroacoustic voice information and the background sound information, it is possible to electro-acousticize audio data having both background sound information and voice information.
According to an embodiment of the present disclosure, a neural network may be used to implement the sound source separation algorithm to decompose the original audio data. An input of the neural network may be the audio data having the background sound information and the voice information. An output of the neural network may include the voice audio data containing the voice information and the background audio data containing the background sound information.
According to an embodiment of the present disclosure, a music file and a voice file may be acquired in advance. The music file and the voice file may be cut into segments of equal length to obtain a plurality of music segments X and a plurality of voice segments Y. Each music segment X and a corresponding voice segment Y may be combined to obtain original audio data Z. The neural network is trained with each original audio data Z as the input of the neural network, and the music segment X and the voice segment Y corresponding to the original audio data Z as a desired output. In addition, in order to improve the training effect and speed up the network convergence, the music segment X, the voice segment Y and the original audio data Z may be pre-processed into Mel-spectrograms, so that the output of the neural network is also in the form of Mel-spectrograms. Corresponding audio data may then be synthesized from an output result in the form of a Mel-spectrogram by using an algorithm such as the Griffin-Lim algorithm.
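By way of illustration, the following is a minimal sketch of how such training pairs might be prepared, assuming librosa is used for loading audio and computing Mel-spectrograms; the file names, sampling rate, segment length and Mel parameters are illustrative assumptions rather than values from the present disclosure.

```python
import librosa

SR = 22050           # assumed sampling rate
SEG_LEN = SR * 3     # 3-second segments (illustrative)

def cut(wave, seg_len):
    """Cut a waveform into equal-length segments, dropping any remainder."""
    count = len(wave) // seg_len
    return [wave[i * seg_len:(i + 1) * seg_len] for i in range(count)]

def mel(wave):
    """Mel-spectrogram used as the network input/output representation."""
    return librosa.feature.melspectrogram(y=wave, sr=SR, n_mels=80)

music, _ = librosa.load("music.wav", sr=SR)   # hypothetical files
voice, _ = librosa.load("voice.wav", sr=SR)

training_pairs = []
for x, y in zip(cut(music, SEG_LEN), cut(voice, SEG_LEN)):
    z = x + y  # original audio data Z = music segment X + voice segment Y
    training_pairs.append((mel(z), (mel(x), mel(y))))  # input, desired output
```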
Based on this, a method of decomposing original audio data shown above will be further described below with reference to
As shown in FIG. 2, operation S110 of decomposing the original audio data may include operations S211 to S213.
In operation S211, original Mel-spectrogram data corresponding to the original audio data is determined.
In operation S212, background Mel-spectrogram data corresponding to the original Mel-spectrogram data and voice Mel-spectrogram data corresponding to the original Mel-spectrogram data are determined by using a neural network.
According to an embodiment of the present disclosure, the background Mel-spectrogram data may include background sound information in the original Mel-spectrogram data. The voice Mel-spectrogram data may include voice information in the original Mel-spectrogram data.
In operation S213, the background audio data is generated according to the background Mel-spectrogram data, and the voice audio data is generated according to the voice Mel-spectrogram data.
According to an embodiment of the present disclosure, the background audio data may be generated according to the background Mel-spectrogram data, and the voice audio data may be generated according to the voice Mel-spectrogram data, by using an algorithm such as the Griffin-Lim algorithm.
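By way of illustration, a minimal sketch of this waveform reconstruction step, assuming librosa is used; librosa.feature.inverse.mel_to_audio applies Griffin-Lim phase estimation internally, and the sampling rate and iteration count are assumptions.

```python
import librosa

def mel_to_wave(mel_spec, sr=22050):
    # Inverts the Mel filter bank and runs Griffin-Lim phase estimation.
    return librosa.feature.inverse.mel_to_audio(mel_spec, sr=sr, n_iter=32)

# background_audio = mel_to_wave(background_mel)  # from background Mel-spectrogram
# voice_audio      = mel_to_wave(voice_mel)       # from voice Mel-spectrogram
```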
According to an embodiment of the present disclosure, the electroacoustic processing on the voice audio data may be achieved by performing quantization processing on a fundamental frequency of the voice data. For example, the fundamental frequency, a spectral envelope and an aperiodic parameter of the voice data may be determined. The fundamental frequency represents the vibration frequency of the vocal cords during pronunciation, and is perceived in the audio as pitch. Next, the fundamental frequency may be quantized. The voice data may then be re-synthesized according to the quantized fundamental frequency, the spectral envelope and the aperiodic parameter, so as to perform the electroacoustic processing on the voice audio data. The re-synthesized voice data is electroacoustic voice data containing voice information with an electroacoustic effect.
A method of performing the electroacoustic processing on the voice audio data shown above will be further described below with reference to FIG. 3.
As shown in FIG. 3, operation S120 of performing the electroacoustic processing on the voice audio data may include operations S321 to S325.
In operation S321, an original fundamental frequency of the voice audio data is extracted.
According to an embodiment of the present disclosure, for example, the original fundamental frequency may be extracted from the voice audio data according to an algorithm such as DIO, Harvest, etc.
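By way of illustration, a minimal sketch of this extraction step, assuming the pyworld package (a Python binding of the WORLD vocoder, which provides the DIO and Harvest algorithms) is used; the 10 ms frame period is chosen to match the preset unit length discussed below and is an assumption.

```python
import numpy as np
import pyworld

def extract_f0(voice, fs, frame_period=10.0):
    """Extract a coarse fundamental frequency with DIO, then refine it."""
    voice = voice.astype(np.float64)
    f0, t = pyworld.dio(voice, fs, frame_period=frame_period)
    f0 = pyworld.stonemask(voice, f0, t, fs)  # refinement step
    return f0, t
```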
In operation S322, the original fundamental frequency is corrected to obtain a first fundamental frequency.
According to an embodiment of the present disclosure, the electroacoustic effect may be improved by correcting the fundamental frequency. For example, in this embodiment, the voice audio data may be divided into a plurality of audio segments. Then, for each audio segment of the plurality of audio segments, an energy of the audio segment and a zero-crossing rate of the audio segment are determined, and whether the audio segment is a voiced audio segment or not is determined according to the energy and the zero-crossing rate. A fundamental frequency of the voiced audio segment is then corrected by using a linear interpolation algorithm.
According to an embodiment of the present disclosure, the voice audio data may be divided into the plurality of audio segments based on a preset unit length. A length of each audio segment is one preset unit length. The preset unit length may be set as desired in practice. For example, in this embodiment, the preset unit length may be any value in a range from 10 ms to 40 ms.
According to an embodiment of the present disclosure, the audio segment includes a plurality of sampling points. The energy of the audio segment may be determined according to a value of each sampling point in the audio segment. For example, the energy of the audio segment may be calculated according to the following equation:

E = x1² + x2² + … + xn²

wherein E is the energy of the audio segment, xi represents a value of the ith sampling point, and n is the number of sampling points.
According to an embodiment of the present disclosure, the number n of sampling points may be determined according to the length of the audio segment and a sampling rate for the audio segment. Taking a preset unit length of 10 ms as an example, the number n of sampling points may be calculated according to:

n = fs × 0.01

wherein fs is the sampling rate. For example, at a sampling rate of 16000 Hz, n = 160.
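By way of illustration, a minimal sketch of the energy computation under the sum-of-squares reading above; the 16000 Hz sampling rate is an assumption.

```python
import numpy as np

def segment_energy(segment):
    """Energy of an audio segment: the sum of squared sample values."""
    x = segment.astype(np.float64)
    return float(np.sum(x ** 2))

fs = 16000               # assumed sampling rate
n = int(fs * 0.010)      # 10 ms preset unit length -> n = 160 sampling points
```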
According to an embodiment of the present disclosure, for every set of two adjacent sampling points in the audio segment, it is determined whether one of the two adjacent sampling points has a positive value and the other one of the two adjacent sampling points has a negative value. Then, a ratio of a number of sets of two adjacent sampling points having a positive value and a negative value respectively to a total number of the sampling points in the audio segment is determined as the zero-crossing rate.
According to an embodiment of the present disclosure, the zero-crossing rate of the audio segment may be calculated according to the following equation:

ZCR = (1/n) × Σ_{i=2..n} 1(xi × xi−1 < 0)

wherein ZCR is the zero-crossing rate of the audio segment, n is the number of sampling points in the audio segment, xi represents the value of the ith sampling point in the audio segment, xi−1 represents a value of the (i−1)th sampling point in the audio segment, and 1(·) equals 1 when its condition holds and 0 otherwise.
According to an embodiment of the present disclosure, the number n of sampling points may be determined according to the length of the audio segment and the sampling rate for the audio segment. Taking the preset unit length of 10 ms as an example, the number n of sampling points may be calculated according to n = fs × 0.01, as described above.
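By way of illustration, a minimal sketch of the zero-crossing rate as defined above (the fraction of adjacent sampling-point pairs whose values have opposite signs):

```python
import numpy as np

def zero_crossing_rate(segment):
    """Ratio of sign-changing adjacent sample pairs to the total sample count."""
    x = segment.astype(np.float64)
    sign_changes = np.sum(x[1:] * x[:-1] < 0)  # adjacent pair changes sign
    return float(sign_changes) / len(x)
```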
When a person vocalizes, the vocal cords do not vibrate for unvoiced sounds, so the corresponding fundamental frequency is zero; the vocal cords vibrate for voiced sounds, so the corresponding fundamental frequency is non-zero. Based on this, in this embodiment, the fundamental frequency may be corrected by using the above characteristics.
For example, for each audio segment, if an energy E of the audio segment is less than a threshold e_min and a zero-crossing rate ZCR of the audio segment is greater than a threshold zcr_max, then the audio segment is determined to be an unvoiced audio segment having a fundamental frequency of zero. Otherwise, the audio segment is determined to be a voiced audio segment having a non-zero fundamental frequency. The thresholds e_min and zcr_max may be set as desired in practice.
For each unvoiced audio segment, a fundamental frequency of the unvoiced audio segment may be set to zero. For each voiced audio segment, a fundamental frequency may be extracted according to an algorithm such as DIO, Harvest, etc., and it is then determined whether the fundamental frequency value of the voiced audio segment is zero or not. For a voiced audio segment having a fundamental frequency value of zero, linear interpolation may be performed, based on a linear interpolation algorithm, by using the fundamental frequency values of nearby voiced audio segments, so as to obtain a non-zero fundamental frequency value as the fundamental frequency value of that voiced audio segment.
For example, suppose there are six voiced audio segments whose fundamental frequency values are 100, 100, 0, 0, 160, and 100, respectively. As the fundamental frequency values of the third and fourth voiced audio segments are zero, linear interpolation may be performed according to the nearby non-zero fundamental frequency values, that is, according to the second fundamental frequency value of 100 and the fifth fundamental frequency value of 160. In this way, the obtained fundamental frequency values of the third and fourth voiced audio segments are 120 and 140, respectively. That is, the corrected six fundamental frequency values are 100, 100, 120, 140, 160, and 100, respectively.
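By way of illustration, a minimal sketch of this correction step; the threshold values are application-specific assumptions, and np.interp reproduces the six-segment example above.

```python
import numpy as np

def correct_f0(f0, energies, zcrs, e_min=1e-4, zcr_max=0.3):
    """Zero out unvoiced segments, then fill zero F0 values in voiced
    segments by linear interpolation between nearby voiced values."""
    f0 = f0.astype(np.float64).copy()
    unvoiced = (energies < e_min) & (zcrs > zcr_max)
    f0[unvoiced] = 0.0
    voiced_f0 = f0[~unvoiced]
    nonzero = voiced_f0 > 0
    voiced_f0[~nonzero] = np.interp(
        np.flatnonzero(~nonzero), np.flatnonzero(nonzero), voiced_f0[nonzero]
    )
    f0[~unvoiced] = voiced_f0
    return f0

# The example above: [100, 100, 0, 0, 160, 100] -> [100, 100, 120, 140, 160, 100]
```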
Then, in operation S323, the first fundamental frequency is adjusted according to a pre-determined electroacoustic parameter to obtain a second fundamental frequency.
According to an embodiment of the present disclosure, the pre-determined electroacoustic parameter may include, for example, an electroacoustic degree parameter and/or an electroacoustic tone parameter. The electroacoustic degree parameter may be used to control the electroacoustic degree. The electroacoustic tone parameter may be used to control the tone. For example, in this embodiment, the electroacoustic degree parameter may include, for example, 1, 1.2, and 1.4, and the greater the electroacoustic degree parameter is, the stronger the electroacoustic effect is. The electroacoustic tone parameter may include, for example, −3, −2, −1, +1, +2, and +3, where −1, −2, and −3 represent one tone down, two tones down, and three tones down, respectively, and +1, +2, and +3 represent one tone up, two tones up, and three tones up, respectively.
In the related art, parameters for the electroacoustic effect cannot be adjusted, and thus the effect is fixed and monotonous. According to an embodiment of the present disclosure, based on the electroacoustic characteristics, the electroacoustic degree parameter and the electroacoustic tone parameter are provided to control the electroacoustic effect, so that different user desires may be satisfied.
According to an embodiment of the present disclosure, a fundamental frequency variance and/or a fundamental frequency mean value may be determined according to the fundamental frequency of the voiced audio segment. A corrected fundamental frequency variance is determined according to the electroacoustic degree parameter and the fundamental frequency variance, and/or a corrected fundamental frequency mean value is determined according to the electroacoustic degree parameter and the fundamental frequency mean value. Then, the first fundamental frequency is adjusted according to the corrected fundamental frequency variance and/or the corrected fundamental frequency mean value, to obtain the second fundamental frequency.
For example, in this embodiment, the variance of the fundamental frequencies of all voiced audio segments may be calculated as the fundamental frequency variance. The mean value of the fundamental frequencies of all the voiced audio segments may be calculated as the fundamental frequency mean value.
Then, the corrected fundamental frequency variance may be calculated, for example, according to:

σ′² = a × σ²

The corrected fundamental frequency mean value may be calculated, for example, according to:

μ′ = μ × 2^(b/12)

Then, the second fundamental frequency may be calculated according to:

F0′ = (F0 − μ) × (σ′/σ) + μ′

wherein F0′ is the second fundamental frequency, F0 is the first fundamental frequency, σ² and μ are the fundamental frequency variance and the fundamental frequency mean value, σ′² and μ′ are the corrected fundamental frequency variance and the corrected fundamental frequency mean value, a is the electroacoustic degree parameter, and b is the electroacoustic tone parameter.
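By way of illustration, a minimal sketch of this adjustment under the assumptions just stated (the degree parameter a scales the spread of F0 around its mean, and the tone parameter b shifts the mean on a semitone scale); the exact formulas are reconstructions rather than values confirmed by the present disclosure.

```python
import numpy as np

def adjust_f0(f0, a=1.2, b=0):
    """Scale the F0 spread by degree parameter a; shift the mean by b semitones."""
    out = f0.copy()
    voiced = f0 > 0
    mu = f0[voiced].mean()
    sigma = f0[voiced].std()
    mu2 = mu * 2.0 ** (b / 12.0)                    # corrected mean value
    ratio = np.sqrt(a) if sigma > 0 else 1.0        # sigma'/sigma with sigma'^2 = a*sigma^2
    out[voiced] = (f0[voiced] - mu) * ratio + mu2
    return out
```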
In operation S324, quantization processing is performed on the second fundamental frequency to obtain a third fundamental frequency.
In natural audio, the pitch of the sound rises and falls and changes continuously, while in electroacoustic audio, the pitch is quantized to specific scale notes, so that the pitch no longer changes continuously, which is similar to a tone produced by an electronic musical instrument. Based on this, according to an embodiment of the present disclosure, the fundamental frequency of the voice data may be quantized by taking each key frequency of a piano as a target frequency.
For example, in this embodiment, a frequency range may be determined according to:

scale = round(12 × log2(F0′/440))

wherein scale is the frequency range (the index of the nearest equal-temperament semitone relative to a reference frequency of 440 Hz), and F0′ is the second fundamental frequency.

Then, based on the frequency range, the third fundamental frequency may be determined according to:

F0″ = 440 × 2^(scale/12)

wherein F0″ is the third fundamental frequency.
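By way of illustration, a minimal sketch of this quantization under the equal-temperament reading above; the A4 = 440 Hz reference frequency is an assumption.

```python
import numpy as np

def quantize_f0(f0):
    """Snap each voiced F0 value to the nearest piano-key frequency."""
    out = f0.copy()
    voiced = f0 > 0
    scale = np.round(12.0 * np.log2(f0[voiced] / 440.0))  # nearest semitone
    out[voiced] = 440.0 * 2.0 ** (scale / 12.0)
    return out
```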
In operation S325, the electroacoustic voice data is determined according to the third fundamental frequency.
According to an embodiment of the present disclosure, a spectral envelope and an aperiodic parameter may be determined according to the voice audio data and the first fundamental frequency. Then, the electroacoustic voice data may be determined according to the third fundamental frequency, the spectral envelope and the aperiodic parameter.
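By way of illustration, a minimal sketch of this synthesis step, assuming the pyworld package is used (CheapTrick for the spectral envelope, D4C for the aperiodic parameter, and the WORLD synthesizer); whether the present disclosure uses these particular estimators is an assumption.

```python
import numpy as np
import pyworld

def synthesize_electroacoustic(voice, f0_first, f0_third, t, fs, frame_period=10.0):
    voice = voice.astype(np.float64)
    sp = pyworld.cheaptrick(voice, f0_first, t, fs)  # spectral envelope
    ap = pyworld.d4c(voice, f0_first, t, fs)         # aperiodic parameter
    # Re-synthesize with the quantized third fundamental frequency.
    return pyworld.synthesize(f0_third, sp, ap, fs, frame_period)
```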
The method of processing audio data described above will be further described below with reference to FIG. 4.
As shown in FIG. 4, the method of processing audio data may include operations S401 to S411.
In operation S401, it is determined whether the audio data (audio for short) contains accompaniment music (accompaniment for short) or not. If the accompaniment is contained, operation S402 is performed. If only voice is contained and no accompaniment is contained, then operation S403 is performed.
In operation S402, the voice is separated from the accompaniment by using the sound source separation algorithm. Then, operation S403 is performed for the separated voice.
In operation S403, a zero-crossing rate, a fundamental frequency f0 and an energy are extracted for the voice.
In operation S404, the fundamental frequency is corrected based on the zero-crossing rate and the energy to obtain F0.
In operation S405, a spectral envelope SP and an aperiodic parameter AP are calculated by using the voice and the corrected fundamental frequency F0.
In operation S406, the fundamental frequency is adjusted to obtain F0′ according to an electroacoustic degree parameter a and an electroacoustic tone parameter b set by a user.
In operation S407, the fundamental frequency F0′ is quantized to obtain F0″.
In operation S408, a voice with electroacoustic effect is synthesized by using the fundamental frequency F0″, the spectral envelope SP and the aperiodic parameter AP.
In operation S409, if the audio contains the accompaniment, operation S410 is performed. Otherwise, operation S411 is performed.
In operation S410, the accompaniment is combined with the electroacoustic voice to generate a final audio with the electroacoustic effect.
In operation S411, the audio with the electroacoustic effect is output.
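By way of illustration, the following sketch composes the helper functions sketched above into the flow of operations S401 to S411; separate_sources stands in for the neural-network separation step and is hypothetical, and frame alignment details are simplified.

```python
import numpy as np

def frame_features(voice, fs, unit=0.010):
    """Per-frame energy and zero-crossing rate over 10 ms segments."""
    n = int(fs * unit)
    frames = [voice[i:i + n] for i in range(0, len(voice) - n + 1, n)]
    e = np.array([segment_energy(f) for f in frames])
    z = np.array([zero_crossing_rate(f) for f in frames])
    return e, z

def process(audio, fs, has_accompaniment, a=1.2, b=0):
    if has_accompaniment:
        voice, accompaniment = separate_sources(audio, fs)  # S402 (hypothetical)
    else:
        voice, accompaniment = audio, None
    f0, t = extract_f0(voice, fs)                           # S403
    e, z = frame_features(voice, fs)                        # S403
    m = min(len(f0), len(e))
    f0 = correct_f0(f0[:m], e[:m], z[:m])                   # S404
    f0_adj = adjust_f0(f0, a=a, b=b)                        # S406
    f0_q = quantize_f0(f0_adj)                              # S407
    electro = synthesize_electroacoustic(voice, f0, f0_q, t[:m], fs)  # S405, S408
    if accompaniment is not None:                           # S409-S410
        length = min(len(electro), len(accompaniment))
        electro = electro[:length] + accompaniment[:length]
    return electro                                          # S411
```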
According to the method of processing audio data of the embodiments of the present disclosure, it is possible to flexibly and efficiently add the electroacoustic effect to audio data, improving entertainment and interest for users.
As shown in FIG. 5, the apparatus 500 of processing audio data includes a decomposition module 510, an electroacoustic processing module 520 and a synthesis module 530.
The decomposition module 510 is used to decompose original audio data to obtain voice audio data and background audio data.
The electroacoustic processing module 520 is used to perform electroacoustic processing on the voice audio data to obtain electroacoustic voice data.
The synthesis module 530 is used to combine the electroacoustic voice data and the background audio data to obtain target audio data.
According to an embodiment of the present disclosure, the decomposition module may include a Mel-spectrogram determination sub-module, a decomposition sub-module and a generation sub-module. The Mel-spectrogram determination sub-module is used to determine original Mel-spectrogram data corresponding to the original audio data. The decomposition sub-module is used to determine, by using a neural network, background Mel-spectrogram data corresponding to the original Mel-spectrogram data and voice Mel-spectrogram data corresponding to the original Mel-spectrogram data. The generation sub-module is used to generate the background audio data according to the background Mel-spectrogram data, and generate the voice audio data according to the voice Mel-spectrogram data.
According to an embodiment of the present disclosure, the electroacoustic processing module may include an extraction sub-module, a correction sub-module, an adjustment sub-module, a quantization sub-module and an electroacoustic determination sub-module. The extraction sub-module is used to extract an original fundamental frequency of the voice audio data. The correction sub-module is used to correct the original fundamental frequency to obtain a first fundamental frequency. The adjustment sub-module is used to adjust, according to a pre-determined electroacoustic parameter, the first fundamental frequency to obtain a second fundamental frequency. The quantization sub-module is used to perform quantization processing on the second fundamental frequency to obtain a third fundamental frequency. The electroacoustic determination sub-module is used to determine the electroacoustic voice data according to the third fundamental frequency.
According to an embodiment of the present disclosure, the correction sub-module may include a segmentation unit, an energy determination unit, a zero-crossing rate determination unit, a voiced determination unit and a correction unit. The segmentation unit is used to divide the voice audio data into a plurality of audio segments. The energy determination unit is used to determine, for each audio segment of the plurality of audio segments, an energy of the audio segment. The zero-crossing rate determination unit is used to determine, for each audio segment of the plurality of audio segments, a zero-crossing rate of the audio segment. The voiced determination unit is used to determine, according to the energy of the audio segment and the zero-crossing rate of the audio segment, whether the audio segment is a voiced audio segment or not. The correction unit is used to correct a fundamental frequency of the voiced audio segment by using a linear interpolation algorithm.
According to an embodiment of the present disclosure, the audio segment includes a plurality of sampling points. The energy determination unit is further used to determine the energy of the audio segment according to a value of each sampling point in the audio segment.
According to an embodiment of the present disclosure, the zero-crossing rate determination unit is further used to determine, for every set of two adjacent sampling points in the audio segment, whether a value of one of the two adjacent sampling points has a sign opposite to a sign of a value of the other one of the two adjacent sampling points, and then determine a ratio of a number of sets of two adjacent sampling points having values of opposite signs to a total number of the sampling points in the audio segment as the zero-crossing rate.
According to an embodiment of the present disclosure, the pre-determined electroacoustic parameter may include the electroacoustic degree parameter and/or the electroacoustic tone parameter. The adjustment sub-module may include a first determination unit, a second determination unit and an adjustment unit. The first determination unit is used to determine, according to the fundamental frequency of the voiced audio segment, a fundamental frequency variance and/or a fundamental frequency mean value. The second determination unit is used to determine a corrected fundamental frequency variance according to the electroacoustic degree parameter and the fundamental frequency variance, and/or determine a corrected fundamental frequency mean value according to the electroacoustic degree parameter and the fundamental frequency mean value. The adjustment unit is used to adjust, according to the corrected fundamental frequency variance and/or the corrected fundamental frequency mean value, the first fundamental frequency to obtain the second fundamental frequency.
According to an embodiment of the present disclosure, the quantization sub-module may include a frequency range determination unit and a third fundamental frequency determination unit.
The frequency range determination unit is used to determine a frequency range according to:

scale = round(12 × log2(F0′/440))

wherein scale is the frequency range, and F0′ is the second fundamental frequency.

The third fundamental frequency determination unit is used to determine, based on the frequency range, the third fundamental frequency according to:

F0″ = 440 × 2^(scale/12)

wherein F0″ is the third fundamental frequency.
According to an embodiment of the present disclosure, the above-mentioned apparatus of processing audio data may further include a determination module, which is used to determine a spectral envelope and an aperiodic parameter according to the voice audio data and the first fundamental frequency.
According to an embodiment of the present disclosure, the electroacoustic determination sub-module is further used to determine the electroacoustic voice data according to the third fundamental frequency, the spectral envelope and the aperiodic parameter.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device, a readable storage medium, and a computer program product.
As shown in FIG. 6, the electronic device 600 includes a computing unit 601, which may perform various appropriate actions and processing according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the device 600 may also be stored in the RAM 603. The computing unit 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard or a mouse; an output unit 607, such as various types of displays and speakers; the storage unit 608, such as a magnetic disk or an optical disk; and a communication unit 609, such as a network card, a modem, or a wireless communication transceiver. The communication unit 609 allows the device 600 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 601 may be various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 601 include but are not limited to a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, and so on. The computing unit 601 may perform the various methods and processes described above, such as the method of processing audio data. For example, in some embodiments, the method of processing audio data may be implemented as a computer software program that is tangibly contained on a machine-readable medium, such as a storage unit 608. In some embodiments, part or all of a computer program may be loaded and/or installed on the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method of processing audio data described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method of processing audio data in any other appropriate way (for example, by means of firmware).
Various embodiments of the systems and technologies described herein may be implemented in a digital electronic circuit system, an integrated circuit system, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), an application specific standard product (ASSP), a system on chip (SOC), a complex programmable logic device (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented by one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor. The programmable processor may be a dedicated or general-purpose programmable processor, which may receive data and instructions from the storage system, the at least one input device and the at least one output device, and may transmit the data and instructions to the storage system, the at least one input device, and the at least one output device.
Program codes for implementing the method of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or a controller of a general-purpose computer, a special-purpose computer, or other programmable data processing devices, so that when the program codes are executed by the processor or the controller, the functions/operations specified in the flowchart and/or block diagram may be implemented. The program codes may be executed completely on the machine, partly on the machine, partly on the machine and partly on the remote machine as an independent software package, or completely on the remote machine or the server.
In the context of the present disclosure, the machine-readable medium may be a tangible medium that may contain or store programs for use by or in combination with an instruction execution system, device or apparatus. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared or semiconductor systems, devices or apparatuses, or any suitable combination of the above. More specific examples of the machine-readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
In order to provide interaction with users, the systems and techniques described here may be implemented on a computer including a display device (for example, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user may provide input to the computer. Other types of devices may also be used to provide interaction with users. For example, a feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and the input from the user may be received in any form (including acoustic input, voice input or tactile input).
The systems and technologies described herein may be implemented in a computing system including back-end components (for example, a data server), or a computing system including middleware components (for example, an application server), or a computing system including front-end components (for example, a user computer having a graphical user interface or web browser through which the user may interact with the implementation of the systems and technologies described herein), or a computing system including any combination of such back-end components, middleware components or front-end components. The components of the system may be connected to each other by digital data communication (for example, a communication network) in any form or through any medium. Examples of the communication network include a local area network (LAN), a wide area network (WAN), and the Internet.
The computer system may include a client and a server. The client and the server are generally far away from each other and usually interact through a communication network. The relationship between the client and the server is generated through computer programs running on the corresponding computers and having a client-server relationship with each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that steps of the processes illustrated above may be reordered, added or deleted in various manners. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as a desired result of the technical solution of the present disclosure may be achieved. This is not limited in the present disclosure.
The above-mentioned specific embodiments do not constitute a limitation on the scope of protection of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions may be made according to design requirements and other factors. Any modifications, equivalent replacements and improvements made within the spirit and principles of the present disclosure shall be contained in the scope of protection of the present disclosure.
Priority application: Chinese Patent Application No. 202110978065.3, filed Aug. 2021, CN (national).
PCT filing: PCT/CN2022/082305, filed Mar. 22, 2022 (WO).