Aspects disclosed herein generally relate to an omnidirectional adaptive loudspeaker assembly. This aspect and others will be discussed in more detail below.
Conventional loudspeakers are designed to be directional based on their transducer radiation patterns and positioning. A loudspeaker has no prior knowledge of how many listeners will be listening or of their respective positions in the space. In recent years, due to the advancement of voice assistants, smart homes, and working from home, loudspeakers are shifting from the corners of the room toward portable, omnidirectional usage. Hence, the industry has started to see a new form factor of 360-degree audio loudspeaker emerging. This form factor may deliver 360-degree sound for consistent, uniform coverage; namely, by placing the loudspeaker in the middle of a room, every listener may perceive a remarkably similar sound experience. Furthermore, in some configurations, this form factor may also be able to simulate 3D sound and provide better sound effects than a conventional Bluetooth stereo speaker.
In at least one embodiment, a system for providing an adaptive loudspeaker assembly is provided. The system includes a loudspeaker array, a microphone array, and at least one controller. The loudspeaker array transmits an audio output signal in an omnidirectional sound mode in a room having a plurality of walls. The microphone array is coupled to the loudspeaker array to capture the audio output signal in the room. The at least one controller is programmed to receive the captured audio output signal and to determine that at least one first wall of the plurality of walls is closest to the loudspeaker array based on the captured audio output signal. The at least one controller is further programmed to change a sound mode of the loudspeaker array from transmitting the audio output signal in the omnidirectional mode into a beamforming sound mode to transmit the audio output signal away from the at least one first wall of the plurality of walls.
In at least one embodiment, a method for providing an adaptive loudspeaker assembly is provided. The method includes transmitting, via a loudspeaker array, an audio output signal in an omnidirectional sound mode in a room having a plurality of walls and capturing, via a microphone array, the audio output signal in the room. The method further includes determining, via at least one controller, that at least one first wall of the plurality of walls is closest to the loudspeaker array based on the captured audio output signal and changing a sound mode of the loudspeaker array from transmitting the audio output signal in the omnidirectional mode into a beamforming sound mode to transmit the audio output signal away from the at least one first wall of the plurality of walls.
In at least one embodiment, a system for providing an adaptive loudspeaker assembly is provided. The system includes a circular loudspeaker array, a circular microphone array, and at least one controller. The circular loudspeaker array transmits an audio output signal in an omnidirectional sound mode in a room having a plurality of walls. The circular microphone array is coupled to the circular loudspeaker array to capture the audio output signal in the room. The at least one controller is programmed to receive the captured audio output signal indicating a plurality of sound reflections from the plurality of walls and to determine that at least one first wall of the plurality of walls is closest to the circular loudspeaker array based on a first sound reflection from the at least one first wall being the strongest reflection out of the plurality of sound reflections. The at least one controller is further programmed to change a sound mode of the circular loudspeaker array from transmitting the audio output signal in the omnidirectional mode into a beamforming sound mode to transmit the audio output signal away from the at least one first wall of the plurality of walls.
The embodiments of the present disclosure are pointed out with particularity in the appended claims. However, other features of the various embodiments will become more apparent and will be best understood by referring to the following detailed description in conjunction with the accompanying drawings in which:
As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.
It is recognized that the controllers as disclosed herein may include various microprocessors, integrated circuits, memory devices (e.g., FLASH, random access memory (RAM), read only memory (ROM), electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), or other suitable variants thereof), and software which co-act with one another to perform the operation(s) disclosed herein. In addition, such controllers as disclosed utilize one or more microprocessors to execute a computer program that is embodied in a non-transitory computer readable medium that is programmed to perform any number of the functions as disclosed. Further, the controller(s) as provided herein include a housing and any number of microprocessors, integrated circuits, and memory devices (e.g., FLASH, RAM, ROM, EPROM, EEPROM) positioned within the housing. The controller(s) as disclosed also include hardware-based inputs and outputs for receiving and transmitting data, respectively, from and to other hardware-based devices as discussed herein.
In general, there may be two types of structures of loudspeaker products that can be claimed as a 360-degree loudspeaker. One is an upward-firing loudspeaker and the other is a downward-firing loudspeaker with a waveguide design such as a reflector. While the mechanical design may be able to achieve an omnidirectional radiation pattern, when the loudspeaker is placed close to a wall or other obstacles, it may sound unnatural or colored. This may be due to near-field interactions around the loudspeaker, such as the reflected sound interfering with the direct sound, thus leading to frequency response alterations.
Another configuration positions multiple transducers around a circle in the horizontal plane, such as by distributing full-range drivers uniformly around the circle. This configuration enables different transducers to run different processing based on the environment and hence alleviates the coloration problem. However, current market solutions are either controlled manually or fixed at the factory, which causes the form factor to lose its flexibility and is inconvenient for end users.
In general, the microphone array 106 may detect audio that is being output by the loudspeaker array 102 and transmit the detected audio back to the controller 104. In turn, the controller 104 (e.g., the DSP 109) may then determine the distance (e.g., location) of the closest wall 110 to the loudspeaker array 102 and then control the sound mode of the loudspeaker array 102. This may entail changing the transmission of the processed audio output signal from the omnidirectional mode to the beamforming mode. In general, the controller 104 determines the strongest reflection of audio from the wall 110 (i.e., the closest wall) and then either deactivates one or more loudspeakers in the array 102 that are closest to the wall 110 or applies beamforming to direct the audio output in a desired direction.
It is recognized that the loudspeaker array 102 may be implemented as a circular array of m loudspeakers that are uniformly distributed on a horizontal plane. It is also recognized that the microphone array 106 may also be implemented as a circular array of n microphones. The microphone array 106 may be positioned parallel with the loudspeaker array 102.
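By way of non-limiting illustration, the following Python sketch computes the positions of elements uniformly distributed on such circular arrays; the element counts and radii are arbitrary example values and are not specified by this disclosure.

```python
import numpy as np

def circular_array_positions(num_elements, radius_m):
    """Return (x, y) coordinates of elements uniformly spaced on a horizontal circle."""
    angles = 2.0 * np.pi * np.arange(num_elements) / num_elements
    return np.stack([radius_m * np.cos(angles), radius_m * np.sin(angles)], axis=1)

# Example: m = 6 loudspeakers and n = 4 microphones on concentric circles
# (the counts and radii are illustrative assumptions only).
speaker_xy = circular_array_positions(6, radius_m=0.10)
mic_xy = circular_array_positions(4, radius_m=0.05)
print(speaker_xy.round(3))
print(mic_xy.round(3))
```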
Referring back to
The loudspeaker array 102 may include any number, M, of loudspeakers 120, where M is greater than or equal to two. Similarly, the microphone array 106 may include any number, N, of microphones 130, where N is greater than or equal to two. Thus, the combination of M loudspeakers 120 and N microphones 130 will be able to form K directions of microphone beams, where K is greater than one. For the example illustrated in
The equalization/limiter block 206 receives the incoming audio signal and equalizes the same to generate a reference signal that is provided to the loudspeaker beamforming block 208 and the first processing stage 202. The first processing stage 202 also receives an output signal from the microphone array 106 (i.e., received signal) which corresponds to the captured audio output in the room 108. In general, the first processing stage 202 may extract acoustic impulse responses from the reference signal and the received signal as provided by the loudspeaker array 102.
Equation (1) as set forth below includes the reference signal as defined by r(n) (or the loudspeaker playback signal as provided by the equalization/limiter block 206), and a jth microphone input signal mj(n) containing a background signal v(n) (as received from the microphone array 106 via the received signal). Thus, the first processing stage 202 (e.g., the AEC block) may compute the jth unknown impulse response hj(n) based on the following,
mj(n)=r(n)*hj(n)+v(n) (1)
where * is the convolution operator. Since the background signal and the reference signal are usually uncorrelated, it is possible to reduce the background signal while obtaining the impulse responses hj(n) by using an adaptive algorithm, such as, for example, a Normalized Least-Mean-Square (NLMS) algorithm as expressed as,

ej(n)=mj(n)−ĥjT(n)r(n) (2)

ĥj(n+1)=ĥj(n)+μNLMS ej(n)r(n)/(rT(n)r(n)+δNLMS) (3)

where ej(n), ĥj(n), μNLMS and δNLMS are the instantaneous estimation error, the NLMS adaptively estimated impulse response, the step size with a range of 0 to 2, and a small positive constant used to avoid division by zero, respectively, and where r(n) denotes the vector of the most recent reference samples.
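By way of non-limiting illustration, the following Python sketch simulates the signal model of equation (1) with a synthetic impulse response and then estimates that response with an NLMS update; the filter length, step size, and noise level are illustrative assumptions only and do not reflect values required by this disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic signals per equation (1): m_j(n) = r(n) * h_j(n) + v(n)
L = 32                                    # assumed impulse-response length (taps)
r = rng.standard_normal(4000)             # reference (playback) signal r(n)
h_true = rng.standard_normal(L) * np.exp(-np.arange(L) / 8.0)  # synthetic h_j(n)
v = 0.01 * rng.standard_normal(r.size)    # background signal v(n)
m = np.convolve(r, h_true)[: r.size] + v  # jth microphone input signal m_j(n)

# NLMS adaptive estimation of h_j(n)
mu, delta = 0.5, 1e-6                     # step size (0 < mu < 2) and small constant
h_hat = np.zeros(L)
for n in range(L - 1, r.size):
    r_vec = r[n - L + 1 : n + 1][::-1]    # most recent L reference samples
    e = m[n] - h_hat @ r_vec              # instantaneous estimation error e_j(n)
    h_hat = h_hat + mu * e * r_vec / (r_vec @ r_vec + delta)

print("relative estimation error:",
      np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))
```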
The first processing stage 202 may then transmit the impulse responses ĥj(n) to the second processing stage 204. As noted above, the second processing stage 204 may employ MVDR that is provided by,
wopt=Rhh−1f(fHRhh−1f)−1 (4)
where Rhh is an autocorrelation matrix of the impulse responses, and f is a desired response vector, which is determined by the detected angles of the sound in 360 degrees. The second processing stage 204 is generally configured to minimize a variance of the received signal. When the controller 104 is programmed or set to a target detection angle, the MVDR block (or the second processing stage 204) may maximize the signal received from the programmed direction while minimizing the signal from other directions. If there is a wall 110 in this direction with respect to the microphone array 106 (or the loudspeaker array 102, since the microphone array 106 is attached thereto), the sound reflection may be stronger, and the second processing stage 204 (or the MVDR block) may detect and distinguish this reflection signal. Therefore, the controller 104 can determine in which direction the wall 110 is most likely located. Speaker beamforming may be bypassed at this point until the location (e.g., distance, angle, etc.) of the wall 110 relative to the loudspeaker array 102 is known. The target detection angle may also be known as the microphone beamforming angle, which is determined by the performance of the DSP 109 and/or other criteria. The target detection angle is pre-defined and different from the desired response vector f as set forth in equation (4) above. In general, microphone beamforming may act like a probe that requires an instruction as to which direction to detect and analyze.
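By way of non-limiting illustration, the following Python sketch evaluates the MVDR weights of equation (4) over a set of candidate target detection angles and selects the angle with the greatest output power. The narrowband steering vectors, array geometry, probe frequency, and the synthetically constructed autocorrelation matrix (which here mimics a dominant reflection arriving from 0 degrees) are illustrative assumptions; in practice, Rhh would be computed from the impulse responses provided by the first processing stage 202.

```python
import numpy as np

def steering_vector(mic_xy, angle_rad, freq_hz, c=343.0):
    """Narrowband plane-wave steering vector for a horizontal microphone array."""
    direction = np.array([np.cos(angle_rad), np.sin(angle_rad)])
    delays = mic_xy @ direction / c
    return np.exp(-2j * np.pi * freq_hz * delays)

def mvdr_weights(R, f):
    """MVDR weights per equation (4): w = R^-1 f (f^H R^-1 f)^-1."""
    Rinv_f = np.linalg.solve(R, f)
    return Rinv_f / (f.conj() @ Rinv_f)

# Illustrative setup: n = 4 microphones on a 5 cm circle, probed at 1 kHz.
mic_angles = 2 * np.pi * np.arange(4) / 4
mic_xy = 0.05 * np.stack([np.cos(mic_angles), np.sin(mic_angles)], axis=1)
freq_hz = 1000.0

# Synthetic autocorrelation matrix mimicking a dominant reflection from 0 degrees.
wall_dir = steering_vector(mic_xy, np.deg2rad(0.0), freq_hz)
Rhh = np.outer(wall_dir, wall_dir.conj()) + 0.1 * np.eye(4)

# Scan candidate target detection angles; the largest beamformer output power
# indicates the most likely wall direction.
candidate_deg = np.arange(0, 360, 10)
powers = []
for a in candidate_deg:
    f = steering_vector(mic_xy, np.deg2rad(a), freq_hz)
    w = mvdr_weights(Rhh, f)
    powers.append(np.real(w.conj() @ Rhh @ w))
print("estimated wall direction:", candidate_deg[int(np.argmax(powers))], "degrees")
```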
After the second processing stage 204 detects wall directions (e.g., distance, angle) relative to the 360-degree circular array of loudspeakers (or the loudspeaker array 102), the controller 104 ceases to perform wall detection and waits for a next detection trigger event before performing wall detection again, in the event this operation is requested again by a user. After wall detection, the controller 104 activates the loudspeaker beamforming block 208 to set a beamforming target angle according to the direction of the wall 110 that is closest to the loudspeaker array 102. For example, the loudspeaker beamformer block 208 may execute a speaker beamforming algorithm and utilize a weighted delay-and-sum approach, which is given by,
y(n)=Σi=0N−1 wi x(n−τi) (5)
where N, wi, x, y and τi are the number of loudspeakers, the weight of the ith loudspeaker, the input signal, the output signal, and the delay applied to the ith loudspeaker, respectively.
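By way of non-limiting illustration, the following Python sketch applies the weighted delay-and-sum of equation (5) to generate per-driver feeds that steer the radiated sound toward a beamforming target angle; the integer-sample delays, uniform weights, sampling rate, and driver geometry are simplifying assumptions for illustration only.

```python
import numpy as np

def delay_and_sum_feeds(x, speaker_xy, target_angle_deg, fs_hz, c=343.0):
    """Weighted delay-and-sum per equation (5), realized as per-driver feeds.

    Each driver i receives w_i * x(n - tau_i) so that the radiated wavefronts
    add coherently toward target_angle_deg. Integer-sample delays and uniform
    weights are simplifying assumptions for illustration only.
    """
    direction = np.array([np.cos(np.deg2rad(target_angle_deg)),
                          np.sin(np.deg2rad(target_angle_deg))])
    tau = speaker_xy @ direction / c  # relative propagation advance per driver
    tau = tau - tau.min()             # non-negative delays; forward-most driver delayed most
    weights = np.full(len(speaker_xy), 1.0 / len(speaker_xy))
    feeds = np.zeros((len(speaker_xy), x.size))
    for i, (w_i, t_i) in enumerate(zip(weights, tau)):
        d = int(round(t_i * fs_hz))   # tau_i expressed in samples
        feeds[i, d:] = w_i * x[: x.size - d]
    return feeds

# Example: six drivers on a 10 cm circle, beam steered toward 180 degrees.
angles = 2 * np.pi * np.arange(6) / 6
speaker_xy = 0.10 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
fs_hz = 48000.0
x = np.sin(2 * np.pi * 500.0 * np.arange(4800) / fs_hz)  # 500 Hz test tone
feeds = delay_and_sum_feeds(x, speaker_xy, target_angle_deg=180.0, fs_hz=fs_hz)
print(feeds.shape)  # (6, 4800)
```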
Hence, if the controller 104 detects the wall 110 or other obstacle at 0 degrees, the controller 104 may select the beamforming target angle at 180 degrees to avoid reflections that cause sound coloration. On the other hand, if the controller 104 detects the wall 110 or other obstacle at a far distance from the microphone array 106 (or from the loudspeaker array 102), the controller 104 may bypass the beamforming mode and control the audio output from the loudspeaker array 102 to remain in the omnidirectional sound mode, as a 360-degree loudspeaker. In one example, a distance of less than one meter to the wall 110 may be adequate to transition the sound mode of the system 100 from the omnidirectional mode into the beamforming mode. Otherwise, the system 100 remains in the omnidirectional mode.
For the sake of clarification, it is recognized that the controller 104 may determine the location of any one or more walls 110 with respect to the loudspeaker array 102 and enter into the beamforming mode to transmit the audio away from any number of the walls 110 that are closest to the loudspeaker array 102. Assuming, for example, that the controller 104 determines that both a first wall 110a and a second wall 110b are positioned within a predetermined distance (e.g., one meter) of the loudspeaker array 102, the controller 104 enters into the beamforming mode and transmits the audio output signal away from each of the first wall 110a and the second wall 110b. In this case, the controller 104 provides a first beamforming pattern to direct the audio output signal away from the first wall 110a and also provides a second beamforming pattern to direct the audio output signal away from the second wall 110b.
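By way of non-limiting illustration, the following Python sketch combines the above into a simple mode-selection routine; the wall angles and distances are assumed to be supplied by the reflection-based detection described above, and the one-meter threshold corresponds to the example distance mentioned earlier.

```python
WALL_DISTANCE_THRESHOLD_M = 1.0  # example threshold from the discussion above

def select_sound_mode(wall_angles_deg, wall_distances_m,
                      threshold_m=WALL_DISTANCE_THRESHOLD_M):
    """Return the sound mode and, in the beamforming mode, one target angle per
    nearby wall, each beam steered directly away from its wall."""
    nearby = [(angle, dist) for angle, dist in zip(wall_angles_deg, wall_distances_m)
              if dist < threshold_m]
    if not nearby:
        return "omnidirectional", []
    targets = [(angle + 180.0) % 360.0 for angle, _ in nearby]
    return "beamforming", targets

# A wall at 0 degrees and 0.6 m triggers a beam toward 180 degrees, while a
# second wall at 90 degrees but 2.5 m away does not alter the mode by itself.
print(select_sound_mode([0.0, 90.0], [0.6, 2.5]))  # ('beamforming', [180.0])
```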
While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention. Additionally, the features of various implementing embodiments may be combined to form further embodiments of the invention.