This application claims priority to Chinese Application No. 202410030700.9 filed Jan. 8, 2024, the disclosure of which is incorporated herein by reference in its entirety.
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and apparatus, an electronic device, and a storage medium for audio processing.
With continuous development of science and technology, there are increasingly diversified audio signal processing means, such as the use of a reverberation technique to process an audio signal to simulate sound reflection and attenuation effects of the audio signal in different environments, to provide an immersive experience.
The present disclosure provides a method and apparatus, an electronic device, and a storage medium for audio processing, to achieve a more vivid reverberation effect of an audio signal and improve the sense of space and layering of a sound, so that the sound is richer and more three-dimensional.
According to a first aspect, an embodiment of the present disclosure provides an audio processing method. The method includes:
According to a second aspect, an embodiment of the present disclosure further provides an apparatus for audio processing. The apparatus includes:
According to a third aspect, an embodiment of the present disclosure further provides an electronic device. The electronic device includes:
According to a fourth aspect, an embodiment of the present disclosure further provides a computer-readable medium storing computer instructions that, when executed by a processor, cause the audio processing method according to any of the above embodiments to be implemented.
In embodiments of the present disclosure, when the first audio signal is received, the target impulse response used to perform reverberation on the first audio signal is determined. The target impulse response is a stereo impulse response for simulating the sound reflection and attenuation effect of the first audio signal in the acoustic space. The reverberated second audio signal is obtained by convolving the first audio signal with the target impulse response. Then, the third audio signal may be output by performing audio mixing on the first audio signal and the second audio signal.
It should be understood that the content described in this section is not intended to identify critical or important features of the embodiments of the present disclosure, and is not used to limit the scope of the present disclosure. Other features of the present disclosure will be easily understood through the following description.
The foregoing and other features, advantages, and aspects of embodiments of the present disclosure become more apparent with reference to the following specific implementations and in conjunction with the accompanying drawings. Throughout the accompanying drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the accompanying drawings are schematic and that parts and elements are not necessarily drawn to scale.
The embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the accompanying drawings and the embodiments of the present disclosure are only for exemplary purposes, and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the various steps described in the method implementations of the present disclosure may be performed in different orders, and/or performed in parallel. Furthermore, additional steps may be included and/or the execution of the illustrated steps may be omitted in the method implementations. The scope of the present disclosure is not limited in this respect.
The term “include/comprise” used herein and the variations thereof denote an open-ended inclusion, namely, “include/comprise but not limited to”. The term “based on” means “at least partially based on”. The term “an embodiment” means “at least one embodiment”. The term “another embodiment” means “at least one other embodiment”. The term “some embodiments” means “at least some embodiments”. Related definitions of the other terms will be given in the description below.
It should be noted that concepts such as “first” and “second” mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the sequence of functions performed by these apparatuses, modules, or units or interdependence.
It should be noted that the modifiers “one” and “a plurality of” mentioned in the present disclosure are illustrative and not restrictive, and those skilled in the art should understand that unless the context clearly indicates otherwise, the modifiers should be understood as “one or more”.
The names of messages or information exchanged between a plurality of apparatuses in the implementations of the present disclosure are used for illustrative purposes only, and are not used to limit the scope of these messages or information.
Mono impulse response data generated using a simulator is not realistic enough, resulting in a poor reverberation effect and a less natural and realistic sound, thus affecting the reverberation quality and fidelity, which in turn prevents the use of reverberation processing for sound effect generation from being widely popularized and applied.
As shown in
S110: determining a target impulse response for a first audio signal, where the target impulse response is a stereo impulse response for simulating a sound reflection and attenuation effect of the first audio signal in an acoustic space.
With reference to
Reverberation describes the auditory impression formed after a sound is reflected and attenuated a plurality of times in the acoustic space. The repeated reflection, scattering, and attenuation of the “lingering sound” that follows a direct sound enhance the sense of space, the depth of the sound, and articulation in terms of subjective auditory perception. With reference to
The stereo impulse response may describe a reflection and attenuation status of an input audio signal. The stereo impulse response includes an acoustic characteristic and spatial information. By analyzing the stereo impulse response, the responses at different frequencies and the propagation and reflection status of the sound in the space may be obtained. In audio processing, the stereo impulse response is used to simulate different acoustic effects in a space, such as reverberation, reflection, and attenuation in a room. The target impulse response is applied to audio signal processing to simulate different acoustic reverberation effects in a space.
S120: Obtaining a second audio signal by convolving the first audio signal with the target impulse response.
For modeling-based implementation of a reverberation effect, the reverberation effect is usually implemented using a digital filter, such as a Schroeder algorithm or a Moorer algorithm. This type of algorithm is easy to implement and has low calculation complexity. However, such reverberation tends to lack realism and naturalness. Therefore, after an appropriate stereo impulse response is generated for the first audio signal, the first audio signal may be convolved with the target impulse response to apply the target impulse response to audio signal processing to simulate different reverberation effects in a space. In this process, as stereo impulse response data, the target impulse response may simulate a reflection, scattering, and attenuation process of the sound in the space. Appropriate reverberation is performed on the first audio signal, so that the first audio signal may be more vivid, natural, three-dimensional, mellow, and clear.
Alternatively, during convolution of the first audio signal and the target impulse response, the target impulse response indicates a reflection and attenuation status of the sound of the first audio signal in the acoustic space. The target impulse response may be considered as a model function, and describes a status obtained after the sound is emitted from the sound source and reflected and attenuated a plurality of times in the acoustic space. Propagation and reflection of the sound in the acoustic space may be simulated by convolving the first audio signal with the target impulse response, to obtain the second audio signal with a reverberation effect.
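The convolution described above can be sketched in Python as follows. This is a minimal illustration only, not the disclosed implementation; the function and variable names, the toy impulse response, and the normalization step are all assumptions introduced here for clarity.

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_reverb(dry, ir):
    """Convolve an audio signal with an impulse response to obtain
    the reverberated ("wet") signal, trimmed to the input length."""
    wet = fftconvolve(dry, ir, mode="full")[: len(dry)]
    # Normalize to avoid clipping after convolution (illustrative choice).
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# A toy impulse response: a direct spike followed by decaying echoes.
ir = np.zeros(100)
ir[0] = 1.0    # direct sound
ir[30] = 0.5   # first reflection, attenuated
ir[70] = 0.25  # later reflection, further attenuated

np.random.seed(0)
dry = np.random.randn(1000)  # stand-in for the first audio signal
wet = apply_reverb(dry, ir)  # stand-in for the second audio signal
```

Each nonzero tap of the impulse response acts as one delayed, attenuated copy of the source, which is how the convolution models reflection and attenuation in the acoustic space.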
In an alternative but nonrestrictive implementation, the obtaining a second audio signal by convolving the first audio signal with the target impulse response includes the following steps A1 and A2.
Step A1: convolving left- and right-channel audio signals of the first audio signal with the target impulse response respectively to obtain a left-channel audio processing result and a right-channel audio processing result corresponding to the first audio signal.
Step A2: performing left-right channel mixing on the left-channel audio processing result and the right-channel audio processing result corresponding to the first audio signal, outputting the mixed results to the left and right channels respectively, and generating the second audio signal with a stereo effect based on the audio signals output to the left and right channels.
With reference to
With reference to
In an alternative but nonrestrictive implementation, convolving left- and right-channel audio signals of the first audio signal with the target impulse response respectively includes the following steps B1 to B3.
Step B1: performing framing and windowing on the left- and right-channel audio signals of the first audio signal respectively, and performing Fourier transform on each windowed left/right-channel audio frame segment respectively to obtain frequency domain results respectively corresponding to the left- and right-channel audio signals of the first audio signal.
Step B2: performing windowing on the target impulse response, and performing Fourier transform on the windowed target impulse response to obtain a frequency domain result of the target impulse response.
Step B3: performing frequency domain multiplication on the frequency domain results respectively corresponding to the left- and right-channel audio signals of the first audio signal and the frequency domain result of the target impulse response, and then performing inverse Fourier transform respectively.
With reference to
With reference to
Alternatively, transforming the multiplication results of the frequency domain results respectively corresponding to the left- and right-channel audio signals of the first audio signal and the frequency domain result of the target impulse response to time domain means that processing previously performed in frequency domain or another domain is transformed back to time domain. This may be implemented through inverse Fourier transform (IFFT) or another appropriate time domain transform method.
Alternatively, the overlap-add method or the overlap-save method may be used to splice data; both are common data splicing methods in signal processing, used to combine data processed in segments into one continuous output signal. Overlap-add method: adjacent audio frame segments overlap in time, and during splicing, the overlapping part of each processed segment is added to the start of the next segment's result to achieve a smooth transition. Overlap-save method: the input segments overlap in time, each segment is processed by circular convolution, and the initial samples of each output segment, which are corrupted by circular wrap-around, are discarded; only the valid samples are saved and concatenated into the output.
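Steps B1 to B3 amount to FFT-based block convolution. The sketch below shows the overlap-add variant in Python; it is an illustrative simplification, not the disclosed implementation. Windowing from Step B1 is omitted for brevity, and the function name and block size are assumptions.

```python
import numpy as np

def overlap_add_convolve(signal, ir, block=256):
    """Convolve `signal` with `ir` block by block in the frequency
    domain, summing the overlapping tails of adjacent blocks."""
    # FFT length: next power of two covering one block plus the IR tail.
    n_fft = 1
    while n_fft < block + len(ir) - 1:
        n_fft *= 2
    ir_f = np.fft.rfft(ir, n_fft)  # transform the impulse response once
    out = np.zeros(len(signal) + len(ir) - 1)
    for start in range(0, len(signal), block):
        seg = signal[start:start + block]
        seg_f = np.fft.rfft(seg, n_fft)           # frequency domain result of the block
        conv = np.fft.irfft(seg_f * ir_f, n_fft)  # multiply, then inverse transform
        # Overlap-add: the tail of each block's result spills into the
        # region of the next block and is summed there.
        n = min(n_fft, len(out) - start)
        out[start:start + n] += conv[:n]
    return out

np.random.seed(1)
sig = np.random.randn(1000)
ir = np.random.randn(64)
res = overlap_add_convolve(sig, ir)
ref = np.convolve(sig, ir)  # direct convolution, for comparison
```

Transforming the impulse response once and reusing it for every block is what makes the frequency-domain approach cheaper than direct time-domain convolution for long impulse responses.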
In an alternative but nonrestrictive implementation, performing left-right channel mixing on the left-channel audio processing result and the right-channel audio processing result corresponding to the first audio signal and outputting the mixed results to the left and right channels respectively includes the following steps C1 and C2.
Step C1: mixing the left-channel audio processing result corresponding to the first audio signal with the right-channel audio processing result corresponding to the first audio signal for output to the left channel based on a preset left-right channel mixing ratio.
Step C2: mixing the right-channel audio processing result corresponding to the first audio signal with the left-channel audio processing result corresponding to the first audio signal for output to the right channel based on a preset left-right channel mixing ratio to generate the second audio signal with the stereo effect.
The stereo effect obtained by mixing the left-channel audio processing result wet_L corresponding to the first audio signal and the right-channel audio processing result wet_R corresponding to the first audio signal is S. Mixing may be performed in the following process. For the left-channel audio processing result wet_L corresponding to the first audio signal and the right-channel audio processing result wet_R corresponding to the first audio signal, mixed left- and right-channel audio signals may be respectively calculated based on a mixing weight (for example, p) as follows:
The left-channel audio signal L and the right-channel audio signal R are respectively input to the left and right channels, to generate the stereo effect S. The value range of the mixing weight p is 0 to 1, and p is used to control the balance between the left and right channels. The value of the mixing weight may be set in response to a user operation.
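The mixing equations themselves are not reproduced in the text above. One plausible form, assumed here purely for illustration, lets each output channel keep a fraction p of its own wet signal and take (1 − p) from the opposite channel:

```python
import numpy as np

def mix_channels(wet_l, wet_r, p=0.8):
    """Cross-mix left and right wet signals with weight p in [0, 1].
    p = 1 keeps the channels separate; p = 0.5 collapses them to mono.
    (Assumed formula for illustration, not the disclosed equations.)"""
    left = p * wet_l + (1.0 - p) * wet_r
    right = p * wet_r + (1.0 - p) * wet_l
    return left, right

wet_l = np.array([1.0, 0.2, 0.0])
wet_r = np.array([0.0, 0.2, 1.0])
left, right = mix_channels(wet_l, wet_r, p=0.8)
```

Under this form, p directly expresses the left-right balance: values near 1 preserve channel separation, while values near 0.5 narrow the stereo image.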
S130: Outputting a third audio signal by performing audio mixing on the first audio signal and the second audio signal.
Alternatively, the first audio signal (dry sound) is mixed with the second audio signal (wet sound) according to a specified ratio, to obtain the third audio signal with a richer and more natural audio effect. For example, the first audio signal (dry sound) and the second audio signal (wet sound) are obtained, a dry-wet sound mixing ratio for the two signals is determined, and the first audio signal (dry sound) is mixed with the second audio signal (wet sound) according to the determined dry-wet sound mixing ratio. This may be implemented through arithmetic operations such as addition and multiplication, for example, using an adder or a multiplier. The mixed third audio signal includes the first audio signal and a component of the processed second audio signal, to achieve a richer and more natural audio effect.
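The dry-wet mixing described above can be sketched as a simple weighted sum; the function name and the ratio parameter below are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def dry_wet_mix(dry, wet, wet_ratio=0.3):
    """Blend the dry (first) signal with the wet (second) signal.
    wet_ratio in [0, 1] is the wet share: 0 keeps only the dry sound,
    1 keeps only the reverberated sound."""
    return (1.0 - wet_ratio) * dry + wet_ratio * wet

dry = np.array([1.0, 1.0, 1.0])
wet = np.array([0.0, 0.5, 1.0])
mixed = dry_wet_mix(dry, wet, wet_ratio=0.5)  # the third audio signal
```

A multiply-and-add of this kind is exactly the adder/multiplier implementation the text mentions, applied per sample.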
According to the technical solutions in this embodiment of the present disclosure, when the first audio signal is received, the target impulse response used to perform reverberation on the first audio signal is determined. The target impulse response is a stereo impulse response for simulating the sound reflection and attenuation effect of the first audio signal in the acoustic space. The reverberated second audio signal is obtained by convolving the first audio signal with the target impulse response. Then, the third audio signal may be output by performing audio mixing on the first audio signal and the second audio signal. This solution may resolve a problem that a reverberation effect is poor because mono impulse response data generated by a simulator is not realistic enough. Stereo impulse response data that may simulate a reflection and attenuation effect of a sound in a space is synchronously determined when reverberation is performed on an audio signal. Reverberation is performed on the audio signal by using the stereo impulse response data, so that it sounds as if the sound is generated in different space environments. In this way, a reverberation effect of the audio signal is more vivid, and the sense of space and layering of the sound are improved, so that the sound is richer and more three-dimensional.
As shown in
S310: determining a reference impulse response for the first audio signal, where the reference impulse response is a mono impulse response that represents the sound reflection and attenuation effect of the first audio signal in the acoustic space and is obtained by measuring or synthesizing a sound response in the acoustic space.
Alternatively, the reference impulse response is the mono impulse response that is collected in a real environment or that is obtained through physical modeling or psychoacoustic modeling.
During reverberation, a professional recording device is usually used to collect multi-channel impulse response data in a real environment, and then an input audio signal is processed by using the actually collected impulse response data to generate a sense of space of the sound. Collecting multi-channel impulse response data in a real environment with a professional recording device yields a better reverberation effect. However, such a device is usually expensive and requires recording in a specific environment, which may incur additional costs. Consequently, sound effect generation performed through reverberation cannot be widely popularized and applied. Therefore, the mono impulse response may instead be obtained through physical modeling or psychoacoustic modeling.
S320: Delaying and performing gain processing on the reference impulse response for the first audio signal, to obtain the target impulse response for the first audio signal, the target impulse response being able to simulate a time difference and an intensity difference generated when a sound of the first audio signal arrives at a first sound pickup position and a second sound pickup position in the space.
The target impulse response is a stereo impulse response for simulating a sound reflection and attenuation effect of the first audio signal in an acoustic space.
With reference to
With reference to
In an alternative but nonrestrictive implementation, delaying and performing gain processing on the reference impulse response for the first audio signal, to obtain the target impulse response for the first audio signal includes the following steps D1 to D3.
Step D1: Determining, as a target time difference, a difference between the times at which the sound of the first audio signal arrives at the first sound pickup position and the second sound pickup position, respectively.
With reference to
Step D2: Determining, as a target intensity difference, a sound pressure level difference generated due to the different intensities with which the sound of the first audio signal arrives at the first sound pickup position and the second sound pickup position, respectively.
With reference to
Step D3: Delaying the reference impulse response for the first audio signal based on the target time difference, and after the delay, performing gain processing on the delayed reference impulse response based on the target intensity difference, to obtain the target impulse response for the first audio signal.
With reference to
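Steps D1 to D3 can be sketched as follows: a mono impulse response is turned into a two-channel one by delaying the far-ear channel by the target time difference and attenuating it by the target intensity difference. The function name and the specific ITD/ILD values below are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def mono_to_stereo_ir(mono_ir, itd_samples=20, ild_db=3.0):
    """Derive a two-channel impulse response from a mono one.
    itd_samples: interaural time difference, in samples (assumed value).
    ild_db:      interaural level difference, in dB (assumed value)."""
    gain = 10.0 ** (-ild_db / 20.0)  # dB-to-linear attenuation of the far ear
    # Near-ear channel: unchanged, padded so both channels share one length.
    near = np.concatenate([mono_ir, np.zeros(itd_samples)])
    # Far-ear channel: delayed by the time difference and attenuated.
    far = np.concatenate([np.zeros(itd_samples), mono_ir * gain])
    return np.stack([near, far])  # shape (2, len(mono_ir) + itd_samples)

stereo_ir = mono_to_stereo_ir(np.array([1.0, 0.5]), itd_samples=3, ild_db=6.0)
```

Convolving each input channel with the corresponding row of this two-channel response reproduces the time and intensity differences a listener's two ears would perceive, which is what gives the reverberation its stereo placement.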
S330: Obtaining a second audio signal by convolving the first audio signal with the target impulse response.
S340: Outputting a third audio signal by performing audio mixing on the first audio signal and the second audio signal.
According to the technical solutions in this embodiment of the present disclosure, when the first audio signal is received, the target impulse response used to perform reverberation on the first audio signal is determined. The target impulse response is a stereo impulse response for simulating the sound reflection and attenuation effect of the first audio signal in the acoustic space. The reverberated second audio signal is obtained by convolving the first audio signal with the target impulse response. Then, the third audio signal may be output by performing audio mixing on the first audio signal and the second audio signal. This solution may resolve a problem that a reverberation effect is poor because mono impulse response data generated by a simulator is not realistic enough. Stereo impulse response data that can simulate a reflection and attenuation effect of a sound in a space is synchronously determined when reverberation is performed on an audio signal. Reverberation is performed on the audio signal by using the stereo impulse response data, so that it sounds as if the sound is generated in different space environments. In this way, a reverberation effect of the audio signal is more vivid, and the sense of space and layering of the sound are improved, so that the sound is richer and more three-dimensional.
As shown in
The determination module 510 is configured to determine a target impulse response for a first audio signal, and the target impulse response is a stereo impulse response for simulating a sound reflection and attenuation effect of the first audio signal in an acoustic space.
The reverberation processing module 520 is configured to obtain a second audio signal by convolving the first audio signal with the target impulse response.
The audio output module 530 is configured to output a third audio signal by performing audio mixing on the first audio signal and the second audio signal.
Based on the technical solutions in the above embodiments, alternatively, determining a target impulse response for a first audio signal includes:
Based on the technical solutions in the above embodiments, alternatively, the reference impulse response is the mono impulse response that is collected in a real environment or that is obtained through physical modeling or psychoacoustic modeling.
Based on the technical solutions in the above embodiments, alternatively, delaying and performing gain processing on the reference impulse response for the first audio signal, to obtain the target impulse response for the first audio signal includes:
Based on the technical solutions in the above embodiments, alternatively, obtaining a second audio signal by convolving the first audio signal with the target impulse response includes:
Based on the technical solutions in the above embodiments, alternatively, convolving left- and right-channel audio signals of the first audio signal with the target impulse response respectively includes:
Based on the technical solutions in the above embodiments, alternatively, performing left-right channel mixing on the left-channel audio processing result and the right-channel audio processing result corresponding to the first audio signal respectively to output to left and right channels includes:
According to the technical solutions in this embodiment of the present disclosure, when the first audio signal is received, the target impulse response used to perform reverberation on the first audio signal is determined. The target impulse response is a stereo impulse response for simulating the sound reflection and attenuation effect of the first audio signal in the acoustic space. The reverberated second audio signal is obtained by convolving the first audio signal with the target impulse response. Then, the third audio signal may be output by performing audio mixing on the first audio signal and the second audio signal. This solution can resolve a problem that a reverberation effect is poor because mono impulse response data generated by a simulator is not realistic enough. Stereo impulse response data that can simulate a reflection and attenuation effect of a sound in a space is synchronously determined when reverberation is performed on an audio signal. Reverberation is performed on the audio signal by using the stereo impulse response data, so that it sounds as if the sound is generated in different space environments. In this way, a reverberation effect of the audio signal is more vivid, and the sense of space and layering of the sound are improved, so that the sound is richer and more three-dimensional.
The audio processing apparatus provided in this embodiment of the present disclosure may perform the audio processing method provided in any of the embodiments of the present disclosure, and has corresponding functional modules and beneficial effects for performing the audio processing method.
It is worth noting that the units and modules included in the above apparatus are obtained through division merely according to functional logic, but are not limited to the above division, as long as corresponding functions can be implemented. In addition, specific names of the functional units are merely used for mutual distinguishing, and are not used to limit the protection scope of the embodiments of the present disclosure.
As shown in
Generally, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touchscreen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; the storage apparatus 608 including, for example, a magnetic tape and a hard disk; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data. Although
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, this embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium. The computer program includes program code for performing the audio processing method shown in the flowchart. In such an embodiment, the computer program may be downloaded from a network through the communication apparatus 609 and installed, installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the audio processing method of the embodiment of the present disclosure are performed.
The electronic device provided in this embodiment of the present disclosure and the audio processing method provided in the above embodiment belong to the same inventive concept. For the technical details not described in detail in this embodiment, reference can be made to the above embodiment, and this embodiment and the above embodiment have the same beneficial effects.
An embodiment of the present disclosure provides a computer storage medium storing a computer program thereon. When executed by a processor, the program implements the audio processing method provided in the above embodiment.
It should be noted that the above computer-readable medium described in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination thereof. The computer-readable storage medium may be, for example but not limited to, electric, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. A more specific example of the computer-readable storage medium may include, but is not limited to: an electrical connection having one or more wires, a portable computer magnetic disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) (or a flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof. In the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program which may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier, the data signal carrying computer-readable program code. The propagated data signal may be in various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium. The computer-readable signal medium can send, propagate, or transmit a program used by or in combination with an instruction execution system, apparatus, or device. 
The program code contained in the computer-readable medium may be transmitted by any suitable medium, including but not limited to: electric wires, optical cables, radio frequency (RF), or the like, or any suitable combination thereof.
In some implementations, a client and a server may communicate using any currently known or future-developed network protocol such as the Hypertext Transfer Protocol (HTTP), and may be connected to digital data communication (for example, a communication network) in any form or medium. Examples of the communication network include a local area network (“LAN”), a wide area network (“WAN”), an internetwork (for example, the Internet), a peer-to-peer network (for example, an ad hoc peer-to-peer network), and any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device. Alternatively, the computer-readable medium may exist independently, without being assembled into the electronic device.
The above computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: determine a target impulse response for a first audio signal, the target impulse response being a stereo impulse response for simulating a sound reflection and attenuation effect of the first audio signal in an acoustic space; obtain a second audio signal by convolving the first audio signal with the target impulse response; and output a third audio signal by performing audio mixing on the first audio signal and the second audio signal.
Computer program code for performing operations of the present disclosure can be written in one or more programming languages or a combination thereof, where the programming languages include but are not limited to object-oriented programming languages, such as Java, Smalltalk, and C++, and further include conventional procedural programming languages, such as “C” language or similar programming languages. The program code may be completely executed on a computer of a user, partially executed on a computer of a user, executed as an independent software package, partially executed on a computer of a user and partially executed on a remote computer, or completely executed on a remote computer or server. In the case of the remote computer, the remote computer may be connected to the computer of the user through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, connected through the Internet with the aid of an Internet service provider).
The flowchart and block diagram in the accompanying drawings illustrate the possibly implemented architecture, functions, and operations of the system, method, and computer program product according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two blocks shown in succession can actually be performed substantially in parallel, or they can sometimes be performed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagram and/or the flowchart, and a combination of the blocks in the block diagram and/or the flowchart may be implemented by a dedicated hardware-based system that executes specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
The related units described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the names of the units do not constitute a limitation on the units themselves. For example, a first obtaining unit may alternatively be described as “a unit for obtaining at least two Internet Protocol addresses”.
The functions described herein above may be performed at least partially by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system-on-chip (SOC), a complex programmable logic device (CPLD), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More specific examples of the machine-readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
The foregoing descriptions are merely preferred embodiments of the present disclosure and explanations of the applied technical principles. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to the technical solutions formed by the specific combinations of the foregoing technical features, and shall also cover other technical solutions formed by any combination of the foregoing technical features or their equivalent features without departing from the foregoing disclosed concept. For example, a technical solution formed by replacing the foregoing features with technical features having similar functions disclosed in the present disclosure (but not limited thereto) also falls within the scope of the present disclosure.
In addition, although the various operations are depicted in a specific order, this should not be construed as requiring that these operations be performed in the specific order shown or in a sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Similarly, although several specific implementation details are included in the foregoing discussions, these details should not be construed as limiting the scope of the present disclosure. Some features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may alternatively be implemented in a plurality of embodiments individually or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or logical actions of the method, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely exemplary forms of implementing the claims.
Number | Date | Country | Kind
---|---|---|---
202410030700.9 | Jan. 8, 2024 | CN | national