A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
1. This disclosure relates to the field of acoustic technology, particularly to a signal processing method and an acoustic system.
Some acoustic systems comprise both a speaker and a sound sensor. In these systems, the ambient sound collected by the sound sensor may comprise sound emitted from the speaker, which is detrimental to the operation of the acoustic system. For example, in a hearing aid system, the sound sensor collects ambient sound during operation, applies gain to amplify the ambient sound, and then plays it through the speaker to compensate for the wearer's hearing loss. When the sound emitted by the speaker is recaptured by the sound sensor, a closed loop is formed in the acoustic system, causing the sound emitted by the speaker to be continuously amplified in the loop. This leads to acoustic feedback (howling), which results in discomfort for the wearer. Additionally, in a telephone system or a conference system, voice signals from a remote user are played through the local speaker, collected by the local sound sensor along with the voice of the local user, and transmitted back to the remote end. As a result, the remote user may experience interference from echo.
This disclosure provides a signal processing method and an acoustic system, which can reduce the signal component derived from a speaker module in a composite signal, thereby suppressing howling or eliminating echo.
In a first aspect, this disclosure provides an acoustic system, comprising: a speaker module, configured to receive an input signal and output a target sound during operation; a sound sensor module, comprising at least: a first sound sensor and a second sound sensor, wherein the first sound sensor collects an ambient sound and generates a first signal during operation, the second sound sensor collects the ambient sound and generates a second signal during operation, and the ambient sound comprises at least the target sound; and a signal processing circuit, wherein the signal processing circuit is connected to the sound sensor module and configured to execute during operation: obtaining a first signal, obtaining a second signal, performing a first target operation on the first signal and the second signal to generate a composite signal, wherein the composite signal is a synthesized signal of a signal in a first frequency band and a signal in a second frequency band, the signal in the first frequency band is derived from a sound pickup result signal of the sound sensor module in a target sound pickup state, the target sound pickup state corresponds to a zero sound pickup direction of the sound sensor module pointing toward the speaker module, and performing a second target operation on the composite signal.
In a second aspect, this disclosure provides a signal processing method, comprising: obtaining a first signal, wherein the first signal is obtained by a first sound sensor in a sound sensor module during operation to capture an ambient sound, the ambient sound at least comprises a target sound, and the target sound is a sound output by a speaker module during operation; obtaining a second signal, wherein the second signal is obtained by a second sound sensor in the sound sensor module during operation to capture the ambient sound; performing a first target operation on the first signal and the second signal to generate a composite signal, wherein the composite signal is a synthesized signal of a signal in a first frequency band and a signal in a second frequency band, the signal in the first frequency band is derived from a sound pickup result signal of the sound sensor module in a target sound pickup state, the target sound pickup state corresponds to a zero sound pickup direction of the sound sensor module pointing toward the speaker module; and performing a second target operation on the composite signal.
From the above technical solutions, it can be known that this disclosure provides a signal processing method and an acoustic system, the method comprising: obtaining a first signal and a second signal, where the first signal is obtained by a first sound sensor in a sound sensor module collecting ambient sound during operation, and the second signal is obtained by a second sound sensor in the sound sensor module collecting the ambient sound during operation, the ambient sound at least including a target sound output by a speaker module during operation; performing a first target operation on the first signal and the second signal to generate a composite signal; and performing a second target operation on the composite signal, where the composite signal is a synthesized signal of a signal in a first frequency band and a signal in a second frequency band, the signal in the first frequency band coming from a sound pickup result signal of the sound sensor module in a target sound pickup state, the target sound pickup state corresponding to a zero sound pickup direction of the sound sensor module pointing toward the speaker module. In the above solution, since the sound pickup result signal is obtained when the zero sound pickup direction of the sound sensor module points toward the speaker module, the sound pickup result signal contains no, or fewer, signal components from the speaker module. Furthermore, since the first frequency band in the composite signal comes from the sound pickup result signal, the signal components from the speaker module in the composite signal are reduced, thereby reducing the sound sensor module's pickup of the target sound emitted by the speaker module and enabling the acoustic system to suppress howling or eliminate echo.
Other functions of the acoustic system provided by this disclosure and the signal processing method applied to the acoustic system will be partially listed in the following description. The inventive aspects of the acoustic system provided by this disclosure and the signal processing method applied to the acoustic system can be fully explained through practice or use of the methods, devices, and combinations described in the detailed examples below.
To more clearly illustrate the technical solutions in the embodiments of this disclosure, the drawings required for the description of the embodiments will be briefly introduced below. Obviously, the drawings described below are merely some exemplary embodiments of this disclosure. For a person of ordinary skill in the art, other drawings can also be obtained based on these drawings without any creative effort.
The following description provides specific application scenarios and requirements of this disclosure, with the aim of enabling a person skilled in the art to make and use the content of this disclosure. Various local modifications to the disclosed embodiments will be apparent to a person skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of this disclosure. Therefore, this disclosure is not limited to the embodiments shown, but rather conforms to the broadest scope consistent with the claims.
The terms used herein are for the purpose of describing specific example embodiments and are not meant to be restrictive. For example, unless otherwise explicitly stated in the context, the singular forms “a,” “an,” and “the” may also include the plural forms. When used in this disclosure, the terms “include,” “comprise,” and/or “contain” mean that the associated integer, step, operation, element, and/or component is present but do not exclude the presence of one or more other features, integers, steps, operations, elements, components, and/or groups, or the possibility of adding other features, integers, steps, operations, elements, components, and/or groups to the system/method.
These features and other features of this disclosure, the operation and functionality of the related elements of the structure, and the combination and manufacturability of the parts will become more apparent upon consideration of the following description. The accompanying drawings, which form part of this disclosure, are referenced for illustration. However, it should be clearly understood that the drawings are for illustration and description purposes only and are not intended to limit the scope of this disclosure. It should also be understood that the drawings are not drawn to scale.
The flowcharts used in this disclosure illustrate the operations of the system implementation according to some exemplary embodiments of this disclosure. It should be clearly understood that the operations in the flowcharts may not be implemented in a specific order. Instead, the operations may be performed in reverse order or concurrently. Additionally, one or more other operations may be added to the flowcharts, or one or more operations may be removed from them.
22. For the convenience of description, the terms appearing in this disclosure are first explained below.
23. Howling: Howling is a phenomenon that frequently occurs in acoustic systems. The process of howling generation is explained below with reference to
24. Echo: Echo is also a phenomenon that frequently occurs in acoustic systems. The process of echo generation is explained below with reference to
25. Continuing with reference to
26. Background noise: Refers to the surrounding ambient sound other than the sound source being measured. In this disclosure, any sound that is unwanted by the user, undesired by the user, or interferes with the user's hearing can be called noise.
27. Sound pickup direction pattern: Refers to a pattern used to represent the sensitivity of the sound sensor/sound sensor module to sounds from different directions. In simple terms, the sound pickup direction pattern can represent the ability of the sound sensor/sound sensor module to pick up sounds from different directions. Typically, the sound pickup direction pattern can be omnidirectional, cardioid (heart-shaped), figure-8 (bidirectional), supercardioid, etc.
28. Zero sound pickup direction: Theoretically, if the sound sensor/sound sensor module has a sensitivity of 0, or close to 0, to sound from a certain direction, that direction is referred to as the zero sound pickup direction. It should be understood that when the sound source is located in the zero sound pickup direction, the sound sensor/sound sensor module will theoretically not capture any sound emitted by the sound source. In practice, due to manufacturing errors of the sound sensor/sound sensor module and the fact that sound sources in reality are not necessarily ideal point sources, the sound sensor/sound sensor module may still capture a small amount of sound from the zero sound pickup direction. It should be noted that in this disclosure, the zero sound pickup direction can refer to a specific direction or to a range that comprises multiple directions.
29. Far-field sound source: Refers to a sound source that is relatively far from the sound sensor/sound sensor module. Generally speaking, when the distance between the sound source to be measured and the sound sensor/sound sensor module is greater than N times the physical size of the sound sensor/sound sensor module, the sound source can be approximated as a far-field sound source. It should be noted that in different application scenarios, the value of N can vary. For example, in the case of headphones, the physical size of the sound sensor/sound sensor module may be less than or equal to 0.01 m, and in this case the value of N can be greater than or equal to 10. This means that a sound source located at a distance greater than or equal to 0.1 m from the sound sensor/sound sensor module can be considered a far-field sound source. Compared to a near-field sound source, the sound waves from a far-field sound source are approximately planar, and the amplitude of the sound waves decreases less as they propagate.
30. Near-field sound source: Refers to a sound source that is relatively close to the sound sensor/sound sensor module. Generally speaking, when the distance between the sound source to be measured and the sound sensor/sound sensor module is less than 2 to 3 times the physical size of the sound sensor/sound sensor module, the sound source can be approximated as a near-field sound source. For example, in the case of headphones, a sound source at a distance less than 0.1 m can be considered a near-field sound source. Compared to the aforementioned far-field sound source, the sound waves from the near-field sound source are closer to spherical, and the amplitude of the sound waves decreases more significantly as they propagate.
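To make the near-field/far-field criterion above concrete, the following sketch classifies a sound source by its distance from the sensor module. The function name is hypothetical; the 0.01 m module size and the multipliers 10 and 3 are taken from the examples above, and real systems would pick values appropriate to their geometry.

```python
def classify_sound_source(distance_m, module_size_m=0.01, far_field_n=10):
    """Classify a sound source relative to a sensor module.

    A source farther than far_field_n * module_size_m is treated as
    far-field; one closer than about 3 * module_size_m as near-field.
    These thresholds follow the headphone example in the text above.
    """
    if distance_m >= far_field_n * module_size_m:
        return "far-field"
    if distance_m < 3 * module_size_m:
        return "near-field"
    return "intermediate"

# Example usage for a headphone-sized (0.01 m) module:
print(classify_sound_source(0.5))   # a source half a meter away
print(classify_sound_source(0.02))  # a source two centimeters away
```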
31. Before describing the specific embodiments of this disclosure, the application scenarios of this disclosure are introduced as follows: The signal processing method and acoustic system provided in this disclosure can be applied to scenarios that require howling suppression (such as the scenario shown in
32. It should be noted that the howling suppression scenario and the echo cancellation scenario are just some of the various application scenarios provided in this disclosure. The signal processing method and acoustic system provided in this disclosure can also be applied to other similar scenarios. A person skilled in the art should understand that applying the signal processing method and acoustic system provided in this disclosure to other usage scenarios also falls within the scope of protection of this disclosure.
34. It should be noted that in the acoustic system 003 shown in
35. In some exemplary embodiments, the acoustic system 003 is a hearing system, and the acoustic system 003 may also comprise a housing, with the speaker module 110, the sound sensor module 120, and the signal processing circuit 150 being disposed inside the housing. The housing serves to protect the internal components and makes it convenient for the user to handle and wear. The acoustic system 003 can be worn on the user's head. For example, the acoustic system 003 can be worn in the ear or over the ear at the user's ear region. When the acoustic system 003 is worn on the user's head, the sound-emitting end of the speaker module 110 is oriented towards the user's head, for example, towards the user's ear canal opening or near the ear canal opening. The pickup end of at least one sound sensor in the sound sensor module 120 is located on the side of the housing away from the user's head. In this way, on one hand, it facilitates the pickup of ambient sound, and on the other hand, it can minimize the pickup of sound emitted by the speaker module 110.
36. The first sound sensor 120-1 and the second sound sensor 120-2 can be the same sound sensor or different sound sensors. For ease of description, in this disclosure, the sound sensor in the sound sensor module 120 that is closer to the speaker module 110 is referred to as the second sound sensor 120-2, while the sound sensor farther from the speaker module 110 is referred to as the first sound sensor 120-1. That is to say, the second sound sensor 120-2 is closer to the speaker module 110 compared to the first sound sensor 120-1. In this way, the sound emitted by the speaker module 110 will first be captured by the second sound sensor 120-2 and then by the first sound sensor 120-1. In this case, the phase of the signal component corresponding to the target sound in the second signal is earlier than the phase of the signal component corresponding to the target sound in the first signal. When the distances between the two sound sensors and the speaker module 110 are equal, either one can be referred to as the first sound sensor 120-1, and the other as the second sound sensor 120-2. In this case, the phase of the signal component corresponding to the target sound in the second signal is equal to the phase of the signal component corresponding to the target sound in the first signal. Additionally, the speaker module 110, the first sound sensor 120-1, and the second sound sensor 120-2 can be integrated into the same electronic device or can be independent of each other, and this disclosure does not impose any limitations on this.
37. The speaker module 110 may comprise one speaker or multiple speakers. In the following description, unless otherwise specified, the speaker module 110 is illustrated by taking one speaker as an example. When the speaker module 110 comprises multiple speakers, the multiple speakers can be arranged in an array, for example, a linear array, a planar array, a spherical array, or another array. A speaker, which can also be called an electro-acoustic transducer, is a device used to convert electrical signals into sound signals. For example, the speaker can be a loudspeaker.
38. When operating, the speaker module 110 receives an input signal and converts it into audio for playback. Here, the aforementioned input signal refers to an electrical signal carrying sound information, and the aforementioned audio refers to the sound played through the speaker module 110. In some exemplary embodiments, the input signal received by the speaker module 110 may come from the sound sensor module 120. This situation may correspond to the acoustic scenario shown in
39. The sound sensor (the first sound sensor 120-1 and/or the second sound sensor 120-2) can also be referred to as an acoustic-electric transducer or a sound pickup device, which is a device used to capture sound and convert it into an electrical signal. For example, a sound sensor can be a microphone (MIC). When operating, the sound sensor captures ambient sound and converts it into an electrical signal carrying sound information. The sound sensor can be an omnidirectional sound sensor, in which case it can capture ambient sound from all directions. The sound sensor can also be a directional sound sensor, in which case it captures ambient sound from one or more specific directions.
40. Continuing to refer to
41. The signal processing circuit 150 is connected to the sound sensor module 120, as shown in
42. Continuing to refer to
43. In some exemplary embodiments, when the speaker module 110, the sound sensor module 120, and the signal processing circuit 150 are deployed in a first acoustic device, the first acoustic device can be communicatively connected to a second acoustic device. In this case, the second target operation 50 may comprise: sending the composite signal to the second acoustic device to reduce echo in the second acoustic device. The above solution can be applied to the echo cancellation scenario as shown in
44. The signal processing circuit 150 can be configured to execute the signal processing method described in this disclosure. In some exemplary embodiments, the signal processing circuit 150 may comprise multiple interconnected hardware circuits, each comprising one or more electrical components, which, during operation, implement one or more steps of the signal processing method described in this disclosure. These hardware circuits cooperate with each other during operation to realize the signal processing method described in this disclosure. In some exemplary embodiments, the signal processing circuit 150 may comprise a hardware device with data processing capabilities and the programs required to drive the operation of this hardware device. The hardware device executes these programs to implement the signal processing method described in this disclosure. The signal processing method will be described in detail in the following sections.
46. Continuing to refer to
47. The storage medium 210 may comprise a data storage device. The data storage device can be a non-transitory storage medium or a transitory storage medium. For example, the data storage device may comprise one or more of a magnetic disk 2101, a read-only memory (ROM) 2102, or a random access memory (RAM) 2103. The storage medium 210 also comprises at least one instruction set stored in the data storage device. The instruction set comprises instructions in the form of computer program code, which may comprise programs, routines, objects, components, data structures, procedures, modules, etc., for executing the signal processing method provided in this disclosure.
48. The at least one processor 220 is configured to execute the aforementioned at least one instruction set. When the acoustic system 003 is running, the at least one processor 220 reads the at least one instruction set and, according to the instructions of the at least one instruction set, executes the signal processing method provided in this disclosure. The processor 220 can perform all or part of the steps included in the aforementioned signal processing method. The processor 220 may be in the form of one or more processors. In some exemplary embodiments, the processor 220 may comprise one or more hardware processors, such as a microcontroller, microprocessor, reduced instruction set computer (RISC), application-specific integrated circuit (ASIC), application-specific instruction set processor (ASIP), central processing unit (CPU), graphics processing unit (GPU), physics processing unit (PPU), microcontroller unit, digital signal processor (DSP), field-programmable gate array (FPGA), advanced RISC machine (ARM), programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or any combination thereof. For illustrative purposes only, the acoustic system 003 shown in
50. S310: Obtain a first signal, where the first signal is obtained by a first sound sensor in a sound sensor module collecting an ambient sound during operation, the ambient sound comprises at least a target sound, and the target sound is a sound output by a speaker module during operation.
51. S320: Obtain a second signal, where the second signal is obtained by a second sound sensor in the sound sensor module collecting the ambient sound during operation.
52. The signal processing circuit 150 can obtain the first signal from the first sound sensor 120-1 and the second signal from the second sound sensor 120-2. As mentioned earlier, since the ambient sound comprises both the sound emitted by the target sound source 160 and the target sound emitted by the speaker module 110, the first signal and the second signal obtained by the signal processing circuit 150 both contain signal components from the target sound source 160 as well as signal components from the speaker module 110.
53. It should be noted that this disclosure does not impose any restrictions on the execution order of S310 and S320. The order of execution of the two can be interchangeable, or they can also be executed simultaneously.
54. S330: Perform a first target operation on the first signal and the second signal to generate a composite signal, where the composite signal is a synthesized signal of signals from a first frequency band and a second frequency band, and the signal of the first frequency band comes from the sound pickup result signal of the sound sensor module in a target sound pickup state, the target sound pickup state corresponds to a zero sound pickup direction of the sound sensor module pointing towards the speaker module.
55. The signal processing circuit 150 can generate a composite signal by performing a first target operation 40 on the first signal and the second signal. The purpose of the first target operation 40 is to reduce the pickup of the target sound by the sound sensor module 120, thereby reducing the signal components from the speaker module 110 in the composite signal (i.e., reducing the signal components of the target sound in the composite signal). Here, “reducing the pickup of the target sound by the sound sensor module 120” means that, compared to the pickup of the target sound by the sound sensor module 120 without performing the first target operation, the pickup of the target sound by the sound sensor module 120 is reduced when the first target operation is performed.
56. The composite signal is a signal synthesized by frequency bands; specifically, the composite signal can be a synthesized signal of the signals from the first frequency band and the second frequency band. The signal of the first frequency band comes from the sound pickup result signal of the sound sensor module 120 in a target sound pickup state, where the target sound pickup state corresponds to the zero sound pickup direction of the sound sensor module 120 pointing towards the speaker module 110. The signal of the second frequency band may not be derived from the sound pickup result signal but is obtained through other means. In other words, a portion of the frequency band signals in the composite signal comes from the sound pickup result signal, while another portion of the frequency band signals does not come from the sound pickup result signal.
57. It should be noted that the aforementioned “zero sound pickup direction pointing towards the speaker module 110” should be understood as the zero sound pickup direction generally pointing towards the speaker module 110. For example, the zero sound pickup direction may point to the center point of the speaker module 110. As another example, the zero sound pickup direction may point to any point on the sound output surface of the speaker module 110. As yet another example, the zero sound pickup direction may point to a preset area on the sound output surface of the speaker module 110. As a further example, assuming the direction angle corresponding to the center point of the speaker module 110 is θ, the direction angle corresponding to the zero sound pickup direction may fall within the range [θ−Δφ, θ+Δφ].
58. The sound pickup result signal refers to a single-channel signal obtained by merging/superimposing the pickup signals of the first sound sensor 120-1 and the second sound sensor 120-2 when the zero sound pickup direction of the sound sensor module 120 points towards the speaker module 110. It should be understood that when the zero sound pickup direction of the sound sensor module 120 points towards the speaker module 110 (i.e., in the target sound pickup state), the sound emitted by the speaker module 110 is not captured, or is captured to a lesser extent, by the sound sensor module 120. Therefore, the sound pickup result signal does not contain signal components from the speaker module 110, or contains relatively few signal components from the speaker module 110.
59. It can be understood that since the sound pickup result signal does not contain, or contains relatively few, signal components from the speaker module 110, when the first frequency band in the composite signal is derived from the sound pickup result signal, the composite signal also does not contain, or contains relatively few, signal components from the speaker module 110. Consequently, compared to the first signal and the second signal, the composite signal can reduce the signal components from the speaker module 110.
60. In some exemplary embodiments, the signal of the second frequency band may come from the first signal or the second signal. Since both the first signal and the second signal are raw signals collected by the sound sensor, compared to the sound pickup result signal, the first signal and the second signal can more accurately reflect certain characteristics of the real ambient sound (e.g., background noise features). Therefore, when the signal of the second frequency band comes from the first signal or the second signal, the composite signal retains components of the original signal, allowing the composite signal to more accurately reflect the characteristics of the real ambient sound. It can be understood that when the signal of the first frequency band in the composite signal comes from the sound pickup result signal and the signal of the second frequency band comes from the first signal or the second signal, the composite signal both preserves the components of the original signal collected by the sound sensor and reduces the signal components from the speaker module 110. As a result, the composite signal can reduce the signal components from the speaker module 110 while striving to accurately reflect the real ambient sound, thereby improving the accuracy of the composite signal.
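As a sketch of the band-wise synthesis described above, the following hypothetical Python function builds the composite signal by taking one set of FFT bins from the sound pickup result signal and the remaining bins from the first signal. The single crossover frequency, and the assumption that the "first" frequency band is the lower one, are illustrative choices only; this excerpt does not fix the band layout.

```python
import numpy as np

def synthesize_composite(pickup_result, first_signal, fs, split_hz):
    """Combine two equal-length signals by frequency band.

    Bins below split_hz (assumed here to be the "first" band) are taken
    from the pickup result signal, which carries few speaker components;
    the remaining bins come from the first (raw) signal, preserving the
    characteristics of the real ambient sound there.
    """
    n = len(first_signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec_pickup = np.fft.rfft(pickup_result)
    spec_first = np.fft.rfft(first_signal)
    # Select each bin from one source signal or the other.
    composite_spec = np.where(freqs < split_hz, spec_pickup, spec_first)
    return np.fft.irfft(composite_spec, n=n)
```

A practical system would more likely use an analysis/synthesis filter bank with smooth crossovers rather than hard bin switching, but the bin-masking form keeps the band-composition idea visible.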
61. For ease of description, the following descriptions will take the example where the signal of the second frequency band comes from the first signal. It should be understood that when the signal of the second frequency band comes from the second signal, the implementation is similar, and this will not be elaborated again herein.
63. In some exemplary embodiments, the aforementioned zero differential operation 41 and/or signal synthesis operation 42 may be implemented by the processor 220 in the signal processing circuit 150, meaning that the processor 220 executes a set of instructions and performs the zero differential operation 41 and/or signal synthesis operation 42 according to the instructions. In some exemplary embodiments, the signal processing circuit 150 may comprise a zero differential circuit, and the aforementioned zero differential operation 41 may be implemented through this zero differential circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a signal synthesis circuit, and the aforementioned signal synthesis operation 42 may be implemented through this signal synthesis circuit.
64. Several implementation methods of the zero differential operation 41 are exemplified below in conjunction with
66. The zero differential operation 41a shown in
67. In some exemplary embodiments, when performing the first delay operation 411, the signal processing circuit 150 can determine the delay duration T corresponding to the second signal based on the following formula (1):

T = d/c   (1)

68. Where d is the distance between the first sound sensor 120-1 and the second sound sensor 120-2, and c is the speed of sound.
69. After performing the aforementioned first delay operation 411, since the phase of the signal components from the speaker module 110 in the second delayed signal has been aligned with the phase of the signal components from the speaker module 110 in the first signal, the signal processing circuit 150 can perform the first differential operation 413 (i.e., subtracting the second delayed signal from the first signal). This allows the signal components from the speaker module 110 in the first signal to cancel out the signal components from the speaker module 110 in the second delayed signal, resulting in the first differential signal exhibiting a zero pickup characteristic in the direction of the speaker module 110.
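The delay-and-subtract stage just described can be sketched as follows. This hypothetical implementation rounds the delay T = d/c to an integer number of samples; a practical system would typically use a fractional-delay filter, so treat this as a minimal illustration rather than the disclosed implementation.

```python
import numpy as np

def zero_differential_single_delay(first, second, d, c=343.0, fs=48000):
    """First delay operation 411 followed by first differential operation 413.

    Delays the second (rear) signal by T = d / c so that its
    speaker-derived component aligns in phase with the first signal,
    then subtracts, leaving a pickup null toward the speaker module.
    """
    k = int(round(d / c * fs))  # integer-sample approximation of T
    second_delayed = np.concatenate([np.zeros(k), second[:len(second) - k]])
    return first - second_delayed
```

For a sound arriving from the speaker direction, the second sensor hears it k samples before the first, so the delayed second signal matches the first signal and the difference cancels to zero.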
71. Further, the inventors analyzed and experimented on the zero differential operation 41a shown in
72. Therefore, in some exemplary embodiments, with continued reference to
73. As previously mentioned, the zero differential operation 41a shown in
75. The first delay operation 411 is configured to delay the second signal to obtain a second delayed signal. In some exemplary embodiments, the delay duration of the second signal can be determined based on the aforementioned formula (1), which will not be elaborated herein. The second delay operation 412 is configured to delay the first signal to obtain a first delayed signal. In some exemplary embodiments, the delay duration of the first signal can be determined based on the aforementioned formula (1), which will not be elaborated herein. The first differential operation 413 is configured to perform a differential operation between the first signal and the second delayed signal (i.e., subtract the second delayed signal from the first signal) to obtain a first differential signal. The second differential operation 414 is configured to perform a differential operation between the second signal and the first delayed signal (i.e., subtract the first delayed signal from the second signal) to obtain a second differential signal. The third differential operation 415 is configured to perform a differential operation between the first differential signal and the second differential signal (i.e., subtract the second differential signal from the first differential signal) to obtain a third differential signal.
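Under the same integer-sample delay assumption used earlier, the three differential operations 413 to 415 described above can be sketched as a single function. The function and parameter names are hypothetical; a real implementation would use fractional delays determined from formula (1).

```python
import numpy as np

def zero_differential_double_delay(first, second, k):
    """Double-delay zero differential operation (operations 411-415).

    k is the delay in samples (T = d / c at the sampling rate). Two
    mirrored delay-and-subtract stages feed a third difference. Sounds
    arriving at both sensors simultaneously (broadside, i.e., the
    90/270-degree directions) cancel completely.
    """
    def delay(x):
        return np.concatenate([np.zeros(k), x[:len(x) - k]])
    first_diff = first - delay(second)    # first differential operation 413
    second_diff = second - delay(first)   # second differential operation 414
    return first_diff - second_diff       # third differential operation 415
```

Algebraically the result equals (first + delayed first) minus (second + delayed second), which is zero whenever the two sensor signals are identical, matching the figure-8 nulls described below.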
76. It should be understood that the principle of sound pickup direction adjustment for the zero differential operation 41b shown in
77. From the above-mentioned figure-8 sound pickup direction pattern (i.e., pattern III), it can be seen that the sound sensor module 120 exhibits a zero sound pickup characteristic in the 90-degree and 270-degree directions. When the speaker module 110 is located in or near the 90-degree/270-degree direction, the sound sensor module 120 collects little or none of the sound emitted by the speaker module 110. Accordingly, it can be seen that the double-delay-based zero differential operation 41b shown in
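The null positions of the figure-8 pattern can be checked numerically. Under an idealized far-field plane-wave model (an assumption made here for illustration, with the inter-sensor travel time taken equal to the applied delay), the third differential response factors as (e^(-jwt·cos(theta)) - 1)(1 + e^(-jwt)), which vanishes exactly at 90 and 270 degrees:

```python
import numpy as np

def third_diff_response(theta, w, tau):
    """Frequency response of the double-delay third differential for a
    plane wave arriving from angle theta (0 deg = toward the speaker)."""
    a = np.exp(-1j * w * tau * np.cos(theta))  # inter-sensor phase shift
    b = np.exp(-1j * w * tau)                  # applied delay
    first = a - b        # first differential (null toward 0 deg)
    second = 1 - a * b   # second differential (null toward 180 deg)
    return first - second

w, tau = 2 * np.pi * 1000.0, 1e-4   # 1 kHz tone, 0.1 ms inter-sensor delay
# nulls at 90 and 270 degrees; non-negligible response at 0 degrees
assert abs(third_diff_response(np.deg2rad(90), w, tau)) < 1e-12
assert abs(third_diff_response(np.deg2rad(270), w, tau)) < 1e-12
assert abs(third_diff_response(0.0, w, tau)) > 1e-3
```

The assumed frequency and delay values are illustrative; the null directions themselves do not depend on them in this idealized model.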
78.
79. Furthermore, referring again to
80. In the case of using the zero differential operation 41b shown in
81.
82. It should be understood that the second differential signal corresponds to pattern II in
83. It should be noted that the various zero differential operations 41 involved in this disclosure (e.g., zero differential operation 41a shown in
84. In some exemplary embodiments, any one or more of the aforementioned first delay operation 411, second delay operation 412, first differential operation 413, second differential operation 414, third differential operation 415, gain compensation operation 416, multiplication operation 417, and target parameter generation operation 418 may be implemented by the processor 220 in the signal processing circuit 150. That is, the processor 220 executes an instruction set and performs one or more of the above operations according to the instructions in the set. In some exemplary embodiments, the signal processing circuit 150 may comprise a first delay circuit, and the aforementioned first delay operation 411 may be implemented by the first delay circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a second delay circuit, and the aforementioned second delay operation 412 may be implemented by the second delay circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a first differential circuit, and the aforementioned first differential operation 413 may be implemented by the first differential circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a second differential circuit, and the aforementioned second differential operation 414 may be implemented by the second differential circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a third differential circuit, and the aforementioned third differential operation 415 may be implemented by the third differential circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a gain compensation circuit, and the aforementioned gain compensation operation 416 may be implemented by the gain compensation circuit. 
In some exemplary embodiments, the signal processing circuit 150 may comprise a multiplication circuit, and the aforementioned multiplication operation 417 may be implemented by the multiplication circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a target parameter generation circuit, and the aforementioned target parameter generation operation 418 may be implemented by the target parameter generation circuit.
85. As described earlier, the zero differential operation 41 (such as zero differential operation 41a shown in
86. Therefore, in some exemplary embodiments, the second frequency band may comprise the frequency band corresponding to the background noise of the current environment, while the first frequency band comprises all frequency bands except the second frequency band. In this way, the components of the second frequency band (i.e., the frequency band corresponding to the background noise) in the composite signal come from the first signal, and the components of the first frequency band (i.e., the frequency bands excluding the background noise) come from the sound pickup result signal. Since the first signal has not undergone the zero differential operation 41, it can accurately reflect the background noise features of the current environment, and the problem of elevated background noise components is thus avoided.
87. In some exemplary embodiments, the signal processing method P100 may also comprise: determining the background noise feature of the current environment based on the first and second signals, and then determining the frequency range corresponding to the first frequency band and the frequency range corresponding to the second frequency band based on the background noise feature. For example, the background noise feature may comprise the frequency band corresponding to the background noise, or it may comprise other characteristics that indicate the frequency band of the background noise. In this way, the signal processing circuit 150, based on the background noise feature, can identify the frequency band corresponding to the background noise and then define this frequency band as the second frequency band, with the remaining frequency bands being defined as the first frequency band. In this approach, the signal processing circuit 150 can adaptively adjust the frequency ranges of the first and second frequency bands based on the current environment's background noise feature, thus reducing the signal components from the speaker module 110 in the composite signal without elevating the background noise in any scenario.
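One simple way such an adaptive band split could be realized is sketched below. This is a heuristic sketch, not anything prescribed by this disclosure: the single-frame noise estimate, the 95% power-ratio threshold, and all names are assumptions made for illustration.

```python
import numpy as np

def pick_crossover(noise_frame, fs, power_ratio=0.95):
    """Heuristic: the smallest frequency f_c such that [0, f_c] holds
    power_ratio of the estimated background-noise power. The second
    frequency band is then [0, f_c]; the first band is (f_c, fs/2]."""
    spec = np.abs(np.fft.rfft(noise_frame)) ** 2
    freqs = np.fft.rfftfreq(len(noise_frame), 1.0 / fs)
    cum = np.cumsum(spec) / np.sum(spec)
    return freqs[np.searchsorted(cum, power_ratio)]

# low-frequency background noise -> crossover lands near its band edge,
# so the noise band is served by the unprocessed first signal
fs = 8000
t = np.arange(fs) / fs
noise = np.sin(2 * np.pi * 100 * t)
f_c = pick_crossover(noise, fs)
assert 50 <= f_c <= 200
```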
88. In practical applications, considering that the background noise in the environment is typically low-frequency noise, in some exemplary embodiments, the frequency in the first frequency band may be higher than the frequency in the second frequency band. For example, the first frequency band could be a high-frequency band, and the second frequency band could be a low-frequency band. Alternatively, the first frequency band could be a mid-to-high frequency band, and the second frequency band could be a low-frequency band. Another example could be that the first frequency band is a high-frequency band, and the second frequency band is a mid-to-low frequency band. Yet another example could be that the first frequency band is a mid-frequency band, and the second frequency band is a low-frequency band, and so on. In other words, the lower frequency range in the sound pickup frequency band of the sound sensor module 120 is designated as the second frequency band, and the components of the second frequency band in the composite signal are derived from the first signal, thereby avoiding the problem of elevating the background noise components.
89. The low-frequency band mentioned above refers to a frequency band generally below 1 kHz, the mid-frequency band refers to a frequency band generally between 1 kHz and 4 kHz, the high-frequency band refers to a frequency band above 4 kHz, the mid-low frequency band refers to a frequency band generally below 4 kHz, and the mid-high frequency band refers to a frequency band generally above 1 kHz. One skilled in the art should understand that the division of these frequency bands is merely given as an example with approximate ranges. The definition of these frequency bands can change depending on different industries, applications, and classification standards. For example, in some application scenarios, the low-frequency band may refer to a frequency band roughly between 20 Hz and 150 Hz, the mid-frequency band may refer to a frequency band roughly between 150 Hz and 5 kHz, the high-frequency band may refer to a frequency band roughly between 5 kHz and 20 kHz, the mid-low frequency band may refer to a frequency band roughly between 150 Hz and 500 Hz, and the mid-high frequency band may refer to a frequency band roughly between 500 Hz and 5 kHz. In other application scenarios, the low-frequency band may refer to a frequency band roughly between 20 Hz and 80 Hz, the mid-low frequency band may refer to a frequency band roughly between 80 Hz and 160 Hz, the mid-frequency band may refer to a frequency band roughly between 160 Hz and 1280 Hz, the mid-high frequency band may refer to a frequency band roughly between 1280 Hz and 2560 Hz, and the high-frequency band may refer to a frequency band roughly between 2560 Hz and 20 kHz.
90.
91. As shown in
92. Thus, the signal processing circuit 150 extracts the component corresponding to the second frequency band from the first signal by performing the first filtering operation 421, and extracts the component corresponding to the first frequency band from the sound pickup result signal by performing the second filtering operation 422. Then, by performing the synthesis operation 424, the components of the two frequency bands are synthesized to generate the composite signal, such that the component of the first frequency band in the composite signal comes from the sound pickup result signal and the component of the second frequency band comes from the first signal.
93. In some exemplary embodiments, the first filtering and second filtering can be complementary filters, meaning that the transfer function of the first filtering and the transfer function of the second filtering together equal 1. For example, if the transfer function corresponding to the first filtering is expressed by the following formula (2), the transfer function corresponding to the second filtering can be expressed by the following formula (3).
94. As can be seen from formula (2) and formula (3), the denominator expressions corresponding to the transfer functions of the two filtering operations are the same, both being A(z). The numerator expressions corresponding to the transfer functions of the two filtering operations are B(z) and A(z)-B(z), respectively. This design ensures that the sum of the transfer functions of the two filtering operations equals 1, thereby presenting an all-pass characteristic when the two filtering operations work together. For ease of understanding,
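The all-pass property described above can be checked numerically. The sketch below picks an arbitrary first-order A(z) and B(z) (illustrative coefficient values only; formulas (2) and (3) themselves are not reproduced here) and verifies that B(z)/A(z) and (A(z)-B(z))/A(z) sum to 1 on the unit circle:

```python
import numpy as np

# a hypothetical first-order low-pass B(z)/A(z) standing in for formula (2)
a = np.array([1.0, -0.9])   # A(z) = 1 - 0.9 z^-1
b = np.array([0.1, 0.0])    # B(z) = 0.1 (unit gain at DC)

def freqz(num, den, w):
    """Evaluate a rational transfer function in z^-1 on the unit circle."""
    z = np.exp(1j * w)
    N = sum(c * z ** -k for k, c in enumerate(num))
    D = sum(c * z ** -k for k, c in enumerate(den))
    return N / D

w = np.linspace(0, np.pi, 512)
h1 = freqz(b, a, w)          # first filtering, shaped like formula (2)
h2 = freqz(a - b, a, w)      # second filtering, shaped like formula (3)
# complementary pair: together the two filters are all-pass
assert np.allclose(h1 + h2, 1.0)
# and the first filter is indeed low-pass with these coefficients
assert abs(h1[0]) > abs(h1[-1])
```

The all-pass identity holds by construction for any A(z) and B(z) sharing the denominator A(z), since B(z)/A(z) + (A(z)-B(z))/A(z) = A(z)/A(z) = 1.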
95. It should be noted that the complementary filtering operations illustrated in formulas (2) and (3) are merely one possible example. One skilled in the art will understand that any filter group capable of performing frequency division and synthesis is feasible. For instance, in some exemplary embodiments, the above-mentioned first filtering and second filtering can also be implemented using first and second filters with the same cutoff frequency. Specifically, the cutoff frequency of the first filter is w1, the cutoff frequency of the second filter is w2, and the cutoff frequencies of both filters are the same: w1=w2. Additionally, the amplitude responses of both filters at the cutoff frequency may satisfy |Filter1(w
96.
97. Continuing with reference to
98. In some exemplary embodiments, in a scenario where the speaker module 110 comprises multiple speakers, when the sound emission frequency bands corresponding to different speakers are different, different zero differential schemes can be adopted for different frequency bands, so that the zero sound pickup direction of the sound sensor module 120 in different frequency bands points to the speaker corresponding to that frequency band, thereby minimizing the signal components from each speaker in the composite signal as much as possible. Take the example where the speaker module 110 comprises a first speaker 110-1 and a second speaker 110-2 for illustration. Assume that the sound emission frequency band of the first speaker 110-1 comprises the first sub-frequency band, and the sound emission frequency band of the second speaker 110-2 comprises the second sub-frequency band. In this case, the signal of the first sub-frequency band in the composite signal comes from the first sound pickup result signal obtained by the sound sensor module 120 in the first sound pickup state, where the first sound pickup state corresponds to the zero sound pickup direction of the sound sensor module 120 pointing to the first speaker 110-1. The signal of the second sub-frequency band in the composite signal comes from the second sound pickup result signal obtained by the sound sensor module 120 in the second sound pickup state, where the second sound pickup state corresponds to the zero sound pickup direction of the sound sensor module 120 pointing to the second speaker 110-2, and the sound pickup direction patterns corresponding to the first sound pickup state and the second sound pickup state are different.
That is to say, when the signal processing circuit 150 adjusts the zero sound pickup direction, it can adopt different zero differential operations 41 for the first sub-frequency band and the second sub-frequency band, respectively, so that the sound pickup direction pattern corresponding to the first sub-frequency band is different from the sound pickup direction pattern corresponding to the second sub-frequency band. In practical applications, considering that each zero differential scheme may have different zero attenuation effects in different frequency bands (i.e., the sound intensity attenuation in the zero sound pickup direction), it is possible to refer to the zero attenuation performance of each zero differential operation 41 in different frequency bands and adopt the zero differential operation 41 with better zero attenuation effects for the first sub-frequency band and the second sub-frequency band respectively, thereby improving the overall zero attenuation effect across the entire frequency band.
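The per-band scheme selection described above can be sketched as a simple lookup over measured null attenuation. The attenuation figures and names below are entirely hypothetical, invented only to illustrate the selection logic:

```python
# hypothetical measured null attenuation (dB) of each zero differential
# scheme in each sub-frequency band; larger is a deeper null
atten = {
    "scheme_41a": {"sub_band_1": 25.0, "sub_band_2": 12.0},
    "scheme_41b": {"sub_band_1": 18.0, "sub_band_2": 30.0},
}

def pick_scheme(band):
    """Choose the zero differential operation with the deeper null
    (better zero attenuation effect) for the given sub-band."""
    return max(atten, key=lambda s: atten[s][band])

# each sub-band gets the scheme that attenuates its speaker best
assert pick_scheme("sub_band_1") == "scheme_41a"
assert pick_scheme("sub_band_2") == "scheme_41b"
```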
99.
100. Continuing to refer to
101. Based on the comparison of zero attenuation effects shown in
102.
103. Continuing to refer to
104. It should be noted that, in the scenario where the speaker module 110 comprises the first speaker 110-1 and the second speaker 110-2, the signal processing process shown in
105. Similar to what is shown in
106. In some exemplary embodiments, any one or more of the aforementioned first filtering operation 421, second filtering operation 422, third filtering operation 423, and synthesis operation 424 may be implemented by the processor 220 in the signal processing circuit 150. That is, the processor 220 executes an instruction set and performs one or more of the above operations according to the instructions in the instruction set. In some exemplary embodiments, the signal processing circuit 150 may comprise a first filtering circuit, and the aforementioned first filtering operation 421 may be implemented by the first filtering circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a second filtering circuit, and the aforementioned second filtering operation 422 may be implemented by the second filtering circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a third filtering circuit, and the aforementioned third filtering operation 423 may be implemented by the third filtering circuit. In some exemplary embodiments, the signal processing circuit 150 may comprise a synthesis circuit, and the aforementioned synthesis operation 424 may be implemented by the synthesis circuit.
107. In the aforementioned embodiments, when generating the composite signal, the signal processing circuit 150 first performs a zero differential operation 41 on the first signal and the second signal to obtain a sound pickup result signal, and then filters the sound pickup result signal and the first signal to extract the component of the first frequency band from the sound pickup result signal and the component of the second frequency band from the first signal, subsequently synthesizing the components of the two frequency bands to obtain the composite signal. In some exemplary embodiments, the signal processing circuit 150 may also swap the order of the zero differential operation 41 and the filtering. Specifically, taking the two-frequency-band synthesis scheme shown in
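Why the order can be swapped: both the zero differential operation and the filtering are linear time-invariant, so they commute (away from edge samples introduced by finite signal lengths). A small numerical check, with an invented FIR standing in for the band-extraction filtering:

```python
import numpy as np

def delay(x, d):
    """Integer-sample delay: prepend d zeros and drop the tail."""
    return np.concatenate([np.zeros(d), x[:len(x) - d]])

def first_diff(x1, x2, d):
    # first differential: x1 minus the delayed x2
    return x1 - delay(x2, d)

fir = np.array([0.25, 0.5, 0.25])   # hypothetical band-extraction FIR
rng = np.random.default_rng(1)
x1 = rng.standard_normal(256)
x2 = rng.standard_normal(256)
d = 4

# order A: zero differential first, then filtering
ya = np.convolve(first_diff(x1, x2, d), fir)
# order B: filter each sensor signal first, then zero differential
yb = first_diff(np.convolve(x1, fir), np.convolve(x2, fir), d)
# identical except for a few trailing edge samples
assert np.allclose(ya[:250], yb[:250])
```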
108. S340: Perform a second target operation on the composite signal.
109. After obtaining the composite signal, the signal processing circuit 150 can perform a second target operation 50 on the composite signal based on the requirements of the application scenario. In some exemplary embodiments, continuing to refer to
110. In some exemplary embodiments, the speaker module 110, the sound sensor module 120, and the signal processing circuit 150 are integrated into a first acoustic device, and the first acoustic device is communicatively connected to a second acoustic device. In this case, the second target operation 50 may comprise: sending the composite signal to the second acoustic device to reduce the echo of the second acoustic device. The above scheme can be applied to the echo cancellation scenario as shown in
111. In summary, the present disclosure provides a signal processing method and an acoustic system. The method comprises: obtaining a first signal and a second signal, where the first signal is obtained by a first sound sensor in the sound sensor module collecting ambient sound while operating, and the second signal is obtained by a second sound sensor in the sound sensor module collecting ambient sound while operating, with the ambient sound including at least the target sound output by the speaker during operation; performing a first target operation on the first signal and the second signal to generate a composite signal; and performing a second target operation on the composite signal. The composite signal is a synthesized signal of a first frequency band signal and a second frequency band signal, where the first frequency band signal comes from a sound pickup result signal of the sound sensor module in a target sound pickup state, and the target sound pickup state corresponds to the zero sound pickup direction of the sound sensor module pointing to the speaker. In the above scheme, since the sound pickup result signal is obtained by the sound sensor module picking up sound when its zero sound pickup direction points to the speaker, the sound pickup result signal contains little or no signal components from the speaker. Furthermore, since the first frequency band in the composite signal comes from the sound pickup result signal, the signal components from the speaker in the composite signal are reduced. Therefore, the acoustic system can achieve the effect of suppressing howling or eliminating echo.
112. Another aspect of the present disclosure provides a non-transitory storage medium storing at least one set of executable instructions for performing signal processing. When the executable instructions are executed by a processor, the executable instructions guide the processor to implement the steps of the signal processing method P100 described in the present disclosure. In some possible implementations, various aspects of the present disclosure may also be realized in the form of a program product, which comprises program code. When the program product runs on an acoustic system 003, the program code is used to cause the acoustic system 003 to execute the steps of the signal processing method P100 described in the present disclosure. The program product for implementing the above method may use a portable compact disc read-only memory (CD-ROM) that comprises program code and can run on the acoustic system 003. However, the program product of the present disclosure is not limited to this. In the present disclosure, the readable storage medium can be any tangible medium that contains or stores a program, which can be used by or in combination with an instruction execution system. The program product may adopt any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of readable storage media comprise: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination thereof. 
A readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such propagated data signals may take various forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination thereof. The readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the readable medium may be transmitted using any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination thereof. The program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages—such as Java, C++, etc.—and conventional procedural programming languages—such as the “C” language or similar programming languages. The program code may execute entirely on the acoustic system 003, partially on the acoustic system 003, as a standalone software package, partially on the acoustic system 003 and partially on a remote computing device, or entirely on a remote computing device.
113. The above description pertains to specific embodiments of the present disclosure. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps described in the claims can be performed in a sequence different from the one in the embodiments and still achieve the desired result. Additionally, the processes depicted in the drawings do not necessarily require a specific order or continuous sequence to achieve the desired outcome. In certain embodiments, multitasking and parallel processing are also possible or may be beneficial.
114. In summary, after reading this detailed disclosure, a person skilled in the art can understand that the aforementioned detailed disclosure is presented only by way of example and is not intended to be limiting. Although not explicitly stated here, a person skilled in the art will appreciate that the disclosure encompasses various reasonable alterations, improvements, and modifications to the embodiments. These alterations, improvements, and modifications are intended to be within the spirit and scope of the exemplary embodiments presented in this disclosure.
115. In addition, certain terms in this disclosure have been used to describe the embodiments of the disclosure. For example, the terms “an embodiment,” “embodiment,” and/or “some exemplary embodiments” mean that specific features, structures, or characteristics described in connection with that embodiment may be included in at least one embodiment of the disclosure. Therefore, it should be emphasized and understood that references to “embodiment,” “an embodiment,” or “alternative embodiment” in various parts of this disclosure do not necessarily refer to the same embodiment. Additionally, specific features, structures, or characteristics may be appropriately combined in one or more embodiments of the disclosure.
116. It should be understood that in the foregoing description of the embodiments of the disclosure, various features are sometimes combined in a single embodiment, drawing, or description in order to aid understanding of a feature and simplify the presentation. However, this does not mean that the combination of these features is required. A person skilled in the art, upon reading this disclosure, may well treat a subset of those features as a separate embodiment. In other words, the embodiments in this disclosure can also be understood as the integration of multiple sub-embodiments, and each sub-embodiment remains valid even when it comprises fewer features than a single full embodiment disclosed above.
117. Each patent, patent application, publication of a patent application, and other materials, such as articles, books, specifications, publications, documents, etc., cited herein are hereby incorporated by reference for all purposes, except for any prosecution file history associated therewith, any portion that is inconsistent with or in conflict with this document, and any portion that may have a limiting effect on the broadest scope of the claims now or hereafter associated with this document. Furthermore, in the event of any inconsistency or conflict between the description, definition, and/or use of a term associated with any incorporated material and that of this document, the term used in this document shall prevail.
118. Finally, it should be understood that the embodiments of the disclosure disclosed herein are illustrative of the principles of the embodiments of this disclosure. Other modified embodiments are also within the scope of this disclosure. Therefore, the embodiments disclosed in this disclosure are merely examples and not limitations. A person skilled in the art can adopt alternative configurations based on the embodiments in this disclosure to implement the disclosure. Thus, the embodiments of this disclosure are not limited to the embodiments described in the disclosure in precise detail.
This application is a continuation application of PCT application No. PCT/CN2023/094375, filed on May 15, 2023, and the content of which is incorporated herein by reference in its entirety.
Number | Date | Country
---|---|---
Parent PCT/CN2023/094375 | May 2023 | WO
Child 19082302 | | US