Surround sound systems typically consist of several speakers distributed around a room. For example, in a living room in a home, speakers may be placed near a television and behind or near seating areas, such that viewers of the television can experience immersive sound. Such systems are typically expensive and difficult to install.
Distributed audio systems can also be created using other types of audio devices, such as by positioning several mobile phones around a room. However, such systems lack a clear way to determine where the user is at fine-grained resolution. A global positioning system (GPS) is commonly used to identify a user's absolute position, but it does not operate well indoors and therefore cannot provide an accurate indoor location.
The present disclosure provides a mechanism to synchronously drive distributed speakers around a user based on localization outputs of ultra wideband (UWB) communication chips already existing in devices. Distances may be determined between a user device, such as a phone or wearable, and a plurality of distributed speakers or other devices. Based on an intersection point of such distances, the user's location can be identified. Such location can be used to modify how audio is played on each of the plurality of distributed speakers.
One aspect of the disclosure provides a user device configured to be worn or carried by a user, the user device comprising an ultra wideband sensor, a communication interface, and one or more processors in communication with the ultra wideband sensor and the communication interface. The one or more processors may be configured to detect, using the ultra wideband sensor, a distance between the user device and each of a plurality of audio playback devices, determine, based on the detected distances, a location of the user device, and communicate, using the communication interface, information to one or more of the plurality of audio playback devices for playing spatialized audio based on the determined location.
According to some examples, the determined location is a relative location with respect to the plurality of audio playback devices. The information communicated to the one or more of the plurality of audio playback devices may include the determined location of the user device, instructions for playing the spatialized audio, and/or other information.
In determining the location of the user device, the one or more processors may be configured to determine a point at which relative distances between the user device and each of the plurality of audio playback devices intersect.
In determining the location of the user device, the one or more processors may be configured to compute a maximum likelihood estimation based on locations of each audio playback device.
In detecting the distance between the user device and each of the plurality of audio playback devices, the one or more processors may be further configured to transmit one or more signals across a wide frequency spectrum to each of the plurality of audio playback devices, receive a response from each of the plurality of audio playback devices, and compute, for each response received, based on a time of the transmitting and a time of the receiving, the distance between the user device and the audio playback device.
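The two-way ranging computation described above can be sketched as follows. This is an illustrative model only: the fixed reply-turnaround parameter and the function name `tof_distance_m` are assumptions for illustration, not details taken from the disclosure.

```python
# Illustrative sketch of two-way UWB ranging: the distance is the speed of
# light times half the round-trip time, after subtracting the (assumed
# fixed) time the responder spends preparing its reply.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(t_transmit_s, t_receive_s, reply_turnaround_s=0.0):
    """Estimate the one-way distance from a round-trip time measurement."""
    round_trip_s = t_receive_s - t_transmit_s - reply_turnaround_s
    return SPEED_OF_LIGHT_M_PER_S * (round_trip_s / 2.0)

# Example: a 20 ns round trip corresponds to roughly 3 m one way.
d = tof_distance_m(0.0, 20e-9)
```

In practice the responder's turnaround time would be reported in the reply or agreed in advance, since it is typically much larger than the propagation time itself.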
Another aspect of the disclosure provides a method, comprising detecting, using an ultra wideband sensor, a distance between a user device and each of a plurality of audio playback devices, determining, with one or more processors based on the detected distances, a location of the user device, and communicating information to one or more of the plurality of audio playback devices for playing spatialized audio based on the determined location. The determined location may be a relative location with respect to the plurality of audio playback devices. The information communicated to the one or more of the plurality of audio playback devices may include the determined location of the user device. Communicating information to the one or more of the plurality of audio playback devices may include sending instructions for playing the spatialized audio.
According to some examples, determining the location of the user device may include determining, with one or more processors, a point at which relative distances between the user device and each of the plurality of audio playback devices intersect. According to some examples, determining the location of the user device includes computing, with one or more processors, a maximum likelihood estimation based on locations of each audio playback device. Detecting the distance between the user device and each of the plurality of audio playback devices may include transmitting one or more signals across a wide frequency spectrum to each of the plurality of audio playback devices, receiving a response from each of the plurality of audio playback devices, and computing, for each response received, based on a time of the transmitting and a time of the receiving, the distance between the user device and the audio playback device.
Yet another aspect of the disclosure provides a non-transitory computer-readable medium storing instructions executable by one or more processors for performing a method of localization of a user device for audio spatialization, the method comprising detecting, using an ultra wideband sensor, a distance between a user device and each of a plurality of audio playback devices, determining, based on the detected distances, a location of the user device, and communicating information to one or more of the plurality of audio playback devices for playing spatialized audio based on the determined location.
The determined location may be a relative location with respect to the plurality of audio playback devices. The information communicated to the one or more of the plurality of audio playback devices may include the determined location of the user device. Communicating information to the one or more of the plurality of audio playback devices may include sending instructions for playing the spatialized audio.
According to some examples, determining the location of the user device may include determining a point at which relative distances between the user device and each of the plurality of audio playback devices intersect. According to some examples, determining the location of the user device includes computing a maximum likelihood estimation based on locations of each audio playback device. Detecting the distance between the user device and each of the plurality of audio playback devices may include transmitting one or more signals across a wide frequency spectrum to each of the plurality of audio playback devices, receiving a response from each of the plurality of audio playback devices, and computing, for each response received, based on a time of the transmitting and a time of the receiving, the distance between the user device and the audio playback device.
The present disclosure provides a mechanism to synchronously drive distributed speakers around a user based on localization outputs of ultra wideband (UWB) communication chips already existing in devices. UWB chips have a small footprint and low cost, and can therefore easily fit in a set of speakers. A wearable device, such as a watch, earbud, etc., worn by the user, or a pseudo-wearable device, such as a phone carried by the user, may also have UWB capabilities. By applying robust maximum likelihood estimation (MLE) based inference to the pairwise distances that the UWB channels measure, the location of the user can be determined to within a few centimeters of accuracy. This information is broadcast back to the distributed speaker set, which may then modulate and isolate different parts of the audio such that the user at the particular location has a perceptually pleasing experience. Based on the user's detected location, sounds may be played back differently at each speaker, such as by adjusting volumes, adjusting the content that is played, etc. As one example, a first speaker may output dialogue or speech while a second speaker may output background music.
The audio playback devices 110, 120, 130 may each store data indicating a relative speaker topology. By way of example, the relative topology can be learned by each audio playback device 110, 120, 130, such as by using UWB. For example, a first device, such as the audio playback device 110, may be set as an origin point (0, 0) and locations of the other audio playback devices 120, 130 may be established relative to the origin point. For example, the first audio playback device 110 may send UWB pulses to each of the other audio playback devices 120, 130 and receive responses that can be used to determine relative distances between the audio playback devices 110, 120 and between the audio playback devices 110, 130. The other devices may do the same. The audio playback device 110 may further receive information from one of second or third audio playback devices 120, 130 indicating a relative distance between those two audio playback devices 120, 130. Using all of this information, the audio playback device 110 may be able to determine its position relative to each of the other audio playback devices 120, 130.
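The topology-learning step above can be sketched for three devices as follows: the first device is set as the origin (0, 0), the second is placed on the x-axis, and the third is resolved from the remaining two pairwise distances. With only three devices the sign of the y coordinate is ambiguous (mirror solutions); the function name `relative_topology` is an assumption for illustration.

```python
import math

# Hypothetical sketch of learning a relative speaker topology from pairwise
# UWB distances: device A is the origin, device B is fixed on the positive
# x-axis, and device C is placed consistently with d(A,C) and d(B,C).

def relative_topology(d_ab, d_ac, d_bc):
    """Return coordinates for devices A, B, C consistent with the distances."""
    a = (0.0, 0.0)                  # device A set as the origin point
    b = (d_ab, 0.0)                 # device B fixed on the positive x-axis
    # From |C - A|^2 = d_ac^2 and |C - B|^2 = d_bc^2, the x coordinate of C:
    x = (d_ac**2 + d_ab**2 - d_bc**2) / (2.0 * d_ab)
    y = math.sqrt(max(d_ac**2 - x**2, 0.0))   # mirror ambiguity: y >= 0 chosen
    return a, b, (x, y)

# Example: speakers forming a 3-4-5 right triangle.
a, b, c = relative_topology(3.0, 4.0, 5.0)
```

With four or more devices, the extra pairwise distances resolve the mirror ambiguity and allow the noisy measurements to be reconciled by least squares.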
According to some examples, an orientation of the user, such as a direction the user is facing, may also be determined from the information. By way of example, if one audio playback device 110, 120, 130 includes a display depicting visual content related to the audio content being played, such as a movie corresponding to sound output through the audio devices, it may be inferred that the user is facing the display. According to some examples, the distance detection and location determination may be periodically or continuously updated to account for movement of the user.
Where the location is determined by a given device, such as the user device 105, the determined location may be communicated from that device to the other devices, such as the audio playback devices 110, 120, 130. Such communication may be performed over any type of short range wireless pairing, local area network, or other connection. By way of example only, the user device 105 may broadcast the determined location to the audio playback devices 110, 120, 130 over a wireless pairing connection, such as Bluetooth. In other examples, the location may be sent to a controller device that relays it to further devices. In further examples, determination of the location may be performed by any of the audio playback devices 110, 120, 130 or by a separate controller device, or by any combination of the devices operating in parallel.
The determined location may be used to determine how each of the audio playback devices 110, 120, 130 should output sound to provide a spatialized audio experience for the user. For example, each speaker may play sound that is amplitude-modulated in a spatially-dependent manner, such that the audio appears to the user to stream ambiently throughout the space rather than from a specific point. Such spatially-dependent amplitude-modulated sound changes an amount of modulation per sound channel based on the determined location of the user. The output may similarly be phase modulated. According to some examples, the devices may output audio at different relative volumes, output different content, etc. By way of example only, devices that are closer to the user may output audio at a different volume than playback devices that are further away. As another example, devices that are behind the user may output different content, such as background music, while devices in front of the user output speech or dialogue. In this example, the user's orientation may be determined using one or more sensors in the user device 105. For example, a magnetometer or compass may supply information that can be matched with information from the audio playback devices to infer a relative orientation of the user. As another example, speaker-to-user UWB angles may be analyzed, such as described further below in connection with
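As one concrete illustration of location-dependent amplitude weighting, the sketch below assigns each speaker a normalized gain proportional to its distance from the user, roughly compensating the 1/d propagation loss so that no single speaker dominates. This particular weighting scheme and the function name `proximity_gains` are assumptions for illustration, not details from the disclosure.

```python
import math

# Illustrative spatially-dependent amplitude weighting: each speaker's gain
# grows with its distance to the user (compensating 1/d propagation loss)
# and the gains are normalized to sum to 1.

def proximity_gains(user_xy, speaker_xys):
    """Per-speaker gains proportional to user-to-speaker distance, normalized."""
    dists = [math.dist(user_xy, s) for s in speaker_xys]
    total = sum(dists)
    return [d / total for d in dists]

# Example: the speaker three times farther away receives three times the gain.
gains = proximity_gains((0.0, 0.0), [(1.0, 0.0), (3.0, 0.0)])
```

A real system would likely also apply per-channel delays and content selection (e.g., dialogue in front, ambience behind), as described in the text above.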
The user device 105 may be a wearable device, such as a watch, earbuds, smartglasses, pendant, smart clothing, augmented reality or virtual reality headset, or any other type of electronic device adapted to be worn by a user. According to other examples, the user device may be a semi-wearable device, such as a mobile phone, laptop, portable gaming console, or other device that may be held by the user, carried in the user's pocket, or the like. According to further examples, the user device 105 may be a collection of devices in communication with one another. By way of example only, a phone plus a wearable device may operate together as the user device 105.
The audio playback devices 110, 120, 130 may be, for example, speakers, smart home devices such as hubs, assistants, or the like, or any other device capable of emitting sound. Each of the audio playback devices 110, 120, 130 may include UWB capabilities, such that the user device 105 can detect the audio playback devices 110, 120, 130 and/or the audio playback devices 110, 120, 130 can detect the user device 105. While three audio playback devices 110, 120, 130 are illustrated in the present example, it should be understood that any number of playback devices may be used. For example, two, four, or any number of additional devices may be used.
The distances between the user device 205 and each audio playback device 210, 220, 230, 240 are represented by circles 212, 222, 232, 242 having respective radii 214, 224, 234, 244. The radius 214, 224, 234, 244 of each circle 212, 222, 232, 242 corresponds to the measured distance between the audio playback device and the user device 205. Because UWB may measure distance but not direction, a distance can be measured in any direction around each playback device 210, 220, 230, 240 and therefore the circles 212, 222, 232, 242 represent all possible positions of the user device 205 with respect to each audio playback device 210, 220, 230, 240. The actual location of the user device 205 is at an intersection 260 of all the circles 212, 222, 232, 242.
In practice, distance measurements may be noisy. For example, noise may be caused by objects in a path between the user device 205 and playback device, other nearby electronics, or the like. In such instances where measurements are noisy, a common intersection point may not exist. To find the most likely common intersection point given noise, a maximum-likelihood estimation (MLE) approach may be used. This assumes that the time-of-flight (ToF) values for signals transmitted between the user device and the audio devices across a wide frequency spectrum are corrupted with additive Gaussian noise.
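One consistent form of the resulting estimator, reconstructed here from the variable definitions given below rather than reproduced from the original figure, is:

$$\hat{v} \;=\; \arg\min_{v} \sum_{k=1}^{N} \frac{1}{2\sigma^{2}} \left( \frac{c\, t_k}{2} \;-\; \lVert v - v_k \rVert \right)^{2}$$

Under additive Gaussian ToF noise, maximizing the likelihood is equivalent to minimizing this weighted sum of squared range residuals.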
In this equation, v represents the coordinates to be determined of the user device, vk represents the coordinates of a given audio playback device k, c is the speed of light, and tk is an estimated time between sending a UWB signal from the user device 105 to the given audio playback device k and receiving a response at the user device 105 from the given audio playback device k. Variable k is an index over the audio playback devices. For example, k=1, 2, 3, . . . N for a system having N audio playback devices. σ is a tunable parameter that describes noise levels of the time delay observations. For example, σ can be an inverse of received signal strength indication (RSSI) values associated with the UWB measurements.
The recovery algorithm solves an optimization problem that maximizes the likelihood function over the coordinate parameters given the pairwise distance data. Once a reasonable localization result for the user is determined, the result may be broadcast back to the plurality of speakers, which perform their respective modulation. According to some examples, such broadcasting can be done over local networks, such as Bluetooth low energy (BLE), an IP-based mesh network, Wi-Fi, etc.
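The recovery step can be sketched as follows: choose user coordinates v that best explain the measured ranges (c·tk/2), weighting each speaker's squared range residual by 1/σ². Plain gradient descent is used purely for illustration; the disclosure does not specify a particular solver, and the function name `locate_user` is assumed.

```python
import math

# Hypothetical sketch of the MLE recovery: minimize
#   sum_k ((|v - v_k| - d_k)^2 / (2 * sigma_k^2))
# over v = (x, y) by gradient descent, starting from the speaker centroid.

def locate_user(speakers, measured_dists, sigmas=None, steps=2000, lr=0.05):
    if sigmas is None:
        sigmas = [1.0] * len(speakers)
    x = sum(s[0] for s in speakers) / len(speakers)
    y = sum(s[1] for s in speakers) / len(speakers)
    for _ in range(steps):
        gx = gy = 0.0
        for (sx, sy), d_meas, sig in zip(speakers, measured_dists, sigmas):
            dx, dy = x - sx, y - sy
            d = math.hypot(dx, dy) or 1e-9        # avoid division by zero
            r = (d - d_meas) / (sig * sig)        # weighted range residual
            gx += r * dx / d
            gy += r * dy / d
        x -= lr * gx
        y -= lr * gy
    return x, y

# Example: three speakers with ranges consistent with a user at (1, 1).
est = locate_user([(0, 0), (4, 0), (0, 4)], [2**0.5, 10**0.5, 10**0.5])
```

The σ weights let noisy observations (e.g., those with weak RSSI) contribute less to the solution, matching the tunable-noise description above.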
User device 305, shown here as a watch worn by the user, can detect relative distances d1, d2, d3, d4 between the user device and each audio playback device 310, 320, 330, 340 using UWB. These relative distances may be used to calculate the position of the user. According to some examples, it may be inferred that if the user has a relative location that is generally inside a perimeter set by the audio playback devices 310, 320, 330, 340, then the user is facing the display 350. According to other examples, additional sensors may be used to determine the user's orientation.
The wearable device 505 includes various components, such as a processor 291, memory 292 including data and instructions, transceiver 294, sensors 295, and other components typically present in wearable wireless computing devices. The wearable device 505 may have all of the components normally used in connection with a wearable computing device such as a processor, memory (e.g., RAM and internal hard drives) storing data and instructions, user input, and output.
The wearable device 505 may also be equipped with short range wireless pairing technology, such as a Bluetooth transceiver, allowing for wireless coupling with other devices. For example, transceiver 294 may include an antenna, transmitter, and receiver that allows for wireless coupling with another device. The wireless coupling may be established using any of a variety of techniques, such as Bluetooth, Bluetooth low energy (BLE), ultra wide band (UWB), etc.
The sensors 295 may be capable of detecting relative proximity of the wearable device 505 to the audio playback devices 510, etc. The sensors may include, for example, UWB sensor 299. According to some examples, the sensors 295 may further include other types of sensors, such as for detecting movement of the wearable device 505. Such additional sensors may include IMU sensors 297, such as an accelerometer, gyroscope, etc. For example, a gyroscope may detect an inertial position of the wearable device 505, while an accelerometer detects linear movements of the wearable device 505. Such sensors may detect direction, speed, and/or other parameters of the movements. The sensors may additionally or alternatively include any other type of sensor capable of detecting changes in received data, where such changes may be correlated with user movements. For example, the sensors may include a barometer, motion sensor, temperature sensor, a magnetometer, a pedometer, a global positioning system (GPS), proximity sensor, strain gauge, camera 298, microphone 296, etc. The one or more sensors of each device may operate independently or in concert.
The UWB sensor 299 or other proximity sensor may be used to determine a relative position, such as angle and/or distance, between two or more devices. Such information may be used to detect a relative position of devices, and therefore detect a relative position of the user with respect to the audio playback devices 510.
The audio playback devices 510 may include components similar to those described above with respect to the wearable device. For example, the audio playback devices 510 may each include a processor 271, memory 272, transceiver 264, and sensors 265. Such sensors may include, without limitation, UWB sensor 269, one or more cameras 268 or other image capture devices, such as thermal imaging devices, etc.
The sensors 265 may be used to determine relative position of the audio playback devices 510 with the wearable device 505. For example, using UWB sensor 269, a relative distance between the devices may be determined. When the relative distances of each of the audio playback devices 510 is analyzed together, the location of the wearable device 505 may be determined.
According to some examples, additional sensors may be used to further improve an accuracy of the determined location of the wearable device 505. By way of example, the camera 268 may capture images of the user, provided that the user has enabled the camera and configured the camera to receive input for use in association with other devices in detecting location.
Input 276 and output 275 may be used to receive information from a user and provide information to the user. The input may include, for example, one or more touch sensitive inputs, a microphone, sensors, etc. Moreover, the input 276 may include an interface for receiving data from the wearable device 505 and/or other wearable devices or other audio playback devices. The output 275 may include, for example, a speaker, display, haptic feedback, etc.
The one or more processors 271 may be any conventional processors, such as commercially available microprocessors. Alternatively, the one or more processors may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor. Although
Memory 272 may store information that is accessible by the processors 271, including instructions 273 that may be executed by the processors 271, and data 274. The memory 272 may be of a type of memory operative to store information accessible by the processors 271, including a non-transitory computer-readable medium, or other medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, memory card, read-only memory (“ROM”), random access memory (“RAM”), optical disks, as well as other write-capable and read-only memories. The subject matter disclosed herein may include different combinations of the foregoing, whereby different portions of the instructions 273 and data 274 are stored on different types of media.
Data 274 may be retrieved, stored or modified by processors 271 in accordance with the instructions 273. For instance, although the present disclosure is not limited by a particular data structure, the data 274 may be stored in computer registers, in a relational database as a table having a plurality of different fields and records, XML documents, or flat files. The data 274 may also be formatted in a computer-readable format such as, but not limited to, binary values, ASCII or Unicode. By further way of example only, the data 274 may be stored as bitmaps comprised of pixels stored in compressed or uncompressed form, in various image formats (e.g., JPEG), vector-based formats (e.g., SVG), or as computer instructions for drawing graphics. Moreover, the data 274 may comprise information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories (including other network locations) or information that is used by a function to calculate the relevant data.
While the processor 271 and memory 272 of the audio playback devices 510 are described in detail, it should be understood that the processor 291 and memory 292 of the wearable device 505 may include similar structure, features, and functions. In addition, the instructions of the wearable device may be executed by the processor 291 to detect the relative location of the wearable device 505 relative to the audio playback devices 510. Such location information may be communicated by the wearable device 505 to the audio playback devices 510, such as over a short-range wireless pairing, wireless local area network, or other type of connection.
In block 610, distances between the user device and each of the audio playback devices are determined using UWB. For example, a transmitter in the user device sends one or more signals to the audio playback devices across a wide frequency spectrum and receives a reply from each audio playback device. Each reply may be accompanied by information identifying the audio playback device that sent the reply, such as an identifier, location coordinates of the audio playback device, etc. Based on a time between transmission of the one or more signals and receipt of the reply, the user device may determine a relative distance between the user device and the audio playback device. According to some examples, the user device may further determine an angle between the user device and the audio playback device based on the UWB signals.
In block 620, a location of the user device is determined based on the detected distances between the user and each audio playback device. According to one example, the location may be determined by finding an intersection point of a plurality of circles, each circle having a center corresponding to a location of an audio playback device and a radius corresponding to the detected distance between the user device and that audio playback device. The intersection point may correspond to the location of the user device. According to another example, the location of the user device may be determined using a maximum likelihood estimation. According to this approach, the location of the user device may be computed using coordinates of the audio playback devices, the detected distances between the devices, the speed of light, and an estimated time delay.
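The circle-intersection approach in block 620 can be sketched as follows: subtracting the first speaker's circle equation from each of the others cancels the quadratic terms and yields a linear system in the user coordinates, solved here via 2x2 normal equations. This linearized trilateration, and the function name `intersect_circles`, are illustrative assumptions rather than the disclosure's stated method.

```python
# Hypothetical sketch: least-squares intersection point of circles whose
# centers are speaker positions and whose radii are measured UWB distances.
# Subtracting circle equations pairwise gives a linear system A v = b.

def intersect_circles(centers, radii):
    """Least-squares common intersection point of 2D circles."""
    (x1, y1), r1 = centers[0], radii[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xk, yk), rk in zip(centers[1:], radii[1:]):
        ax = 2.0 * (xk - x1)
        ay = 2.0 * (yk - y1)
        rhs = (r1**2 - rk**2) + (xk**2 + yk**2) - (x1**2 + y1**2)
        a11 += ax * ax
        a12 += ax * ay
        a22 += ay * ay
        b1 += ax * rhs
        b2 += ay * rhs
    det = a11 * a22 - a12 * a12     # nonzero for non-collinear centers
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Example: ranges consistent with a user at (1, 1).
p = intersect_circles([(0, 0), (4, 0), (0, 4)], [2**0.5, 10**0.5, 10**0.5])
```

At least three non-collinear speakers are required for a unique 2D solution; with noisy radii, the least-squares formulation returns the point that best reconciles all circles.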
In block 630, the determined location information is communicated to each of the plurality of audio playback devices for use in generating spatialized audio output. For example, the location information may be broadcast from the user device to the plurality of audio playback devices, such as over a short-range wireless pairing connection, local area network connection, or other connection. In other examples, the location information may be communicated to one device and relayed to the other devices.
According to some examples, the user device may also send instructions to each audio playback device for outputting spatialized audio. For example, the user device may receive a selection from the user of a type of spatialized audio output. Such types of spatialized audio may include, for example, uniform equalization or proximity equalization, as discussed above in connection with
In other examples, the audio playback devices may use the received location information to determine how to output spatialized audio. For example, each audio playback device may be programmed to determine, based on the received location information, what content to output and at what volume. In some examples, one of the audio playback devices may operate as a controller having knowledge of each audio playback device's position and relative location to the user device, and providing instructions to the other audio playback devices. In further examples, each audio playback device may have knowledge of the other audio playback devices' relative locations to the user. For example, the relative topology of the audio playback devices can be learned by each audio playback device, such as by using UWB. A first device may be set as an origin point (0, 0) and locations of the other audio playback devices may be established relative to the origin point. For example, the first audio playback device may send UWB pulses to each of the other audio playback devices and receive responses that can be used to determine relative distances between the audio playback devices. The other devices may do the same. Using such information, the relative location of each audio playback device may be derived.
The location detection may be updated, for example, periodically or continually. Accordingly, as the user moves about, the spatialized audio output may be adjusted to accommodate the user's updated location.
The foregoing techniques are advantageous in that they provide for an improved listening experience without costly dedicated devices, cumbersome setup, or the like.
Unless otherwise stated, the foregoing alternative examples are not mutually exclusive, but may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. In addition, the provision of the examples described herein, as well as clauses phrased as “such as,” “including” and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.
This application is a continuation of and claims priority to U.S. Non-Provisional patent application Ser. No. 17/702,472, filed on Mar. 23, 2022, the disclosure of which is incorporated by reference herein in its entirety.
 | Number | Date | Country
---|---|---|---
Parent | 17702472 | Mar 2022 | US
Child | 18800811 | | US