Aspects and implementations of the present disclosure are generally directed to audio systems, for example, audio systems which include a peripheral device and a wearable audio device.
Audio systems, for example, augmented reality audio systems, may utilize a technique referred to as sound externalization to render audio signals to a listener such that the listener's mind is led to believe it is perceiving sound from physical locations within an environment. Specifically, when listening to audio, particularly audio through stereo headphones, many listeners perceive the sound as coming from “inside their head”. Sound externalization refers to the process of simulating and rendering sounds such that they are perceived by the user as though they are coming from the surrounding environment, i.e., the sounds are “external” to the listener.
Because these augmented reality audio systems are capable of being executed on mobile devices, simulating or externalizing sound sources at fixed, predetermined positions within the environment may not be desirable to some users.
The present disclosure relates to audio systems, methods, and computer program products which include a wearable audio device and a peripheral device. The wearable audio device and the peripheral device are capable of determining their respective positions and/or orientations within an environment as well as their respective positions and/or orientations with respect to each other. Once the relative positions and orientations between, e.g., the wearable audio device and the peripheral device are known, virtual sound sources may be generated at fixed positions and orientations relative to the peripheral device such that any change in position and/or orientation of the peripheral device produces a proportional change in the position and/or orientation of the virtual sound sources. Additionally, one or more orders of reflected audio paths may be simulated for each virtual sound source to increase the sense of realism of the simulated sources. For instance, each sound path, e.g., direct sound paths, as well as the first order and second order reflected sound paths, can be produced by modifying the original audio signal using a plurality of left head-related transfer functions (HRTFs) and a plurality of right HRTFs to simulate audio as though it were perceived by the user's left and right ears, respectively, coming from each virtual sound source.
Thus, the disclosure includes audio systems, methods, and computer program products to produce spatialized and externalized audio that is “pinned” to the peripheral device. The systems, methods, and computer program products can utilize: 1) a means of tracking the user's head location and/or orientation; 2) a means of tracking the location and/or orientation of the peripheral device; and 3) a means of rendering spatialized audio signals where the locations of the virtual sound sources are anchored or pinned in some way to the peripheral device. This could include placing virtual sound sources to the virtual left and virtual right of the peripheral device for left and right channel audio signals. It can also include a discrete, extracted, or phantom center virtual sound source for center channel audio. The concepts disclosed herein also scale to additional channels, e.g., could include additional channels for implementation of virtual surround sound systems (e.g., virtual 5.1 or 7.1). The concepts can also include object-oriented rendering like, for example, the object-oriented rendering provided by Dolby Atmos systems, which can add virtual height channels to the virtual surround sound system (e.g., virtual 5.1.2 or 5.1.4).
In one example, a computer program product for simulating audio signals is provided, the computer program product including a set of non-transitory computer-readable instructions stored in a memory, the set of non-transitory computer-readable instructions executable on a processor and configured to: obtain or receive an orientation of a wearable audio device relative to a peripheral device within an environment; generate a first modified audio signal, wherein the first modified audio signal is modified using a first head-related transfer function (HRTF) based at least in part on the orientation of the wearable audio device relative to the peripheral device; generate a second modified audio signal, wherein the second modified audio signal is modified using a second head-related transfer function (HRTF) based at least in part on the orientation of the wearable audio device relative to the peripheral device; and send the first modified audio signal and the second modified audio signal to the wearable audio device, wherein the first modified audio signal is configured to be rendered using a first speaker of the wearable audio device and the second modified audio signal is configured to be rendered using a second speaker of the wearable audio device.
In one aspect, the set of non-transitory computer-readable instructions is further configured to: obtain or receive a position of the wearable audio device relative to a position of the peripheral device within the environment, and wherein modifying the first modified audio signal and modifying the second modified audio signal include attenuation based at least in part on a calculated distance between the position of the wearable audio device and the position of the peripheral device.
In one aspect, the set of non-transitory computer-readable instructions is further configured to: obtain or receive an orientation of the peripheral device relative to the wearable audio device, wherein the first HRTF and the second HRTF are based in part on the orientation of the peripheral device relative to the wearable audio device.
In one aspect, the first modified audio signal and the second modified audio signal are configured to simulate a first direct sound originating from a first virtual sound source proximate a center of the peripheral device.
In one aspect, generating the first modified audio signal and generating the second modified audio signal include simulating a first direct sound originating from a first virtual sound source proximate a position of the peripheral device within the environment and simulating a second direct sound originating from a second virtual sound source proximate the position of the peripheral device.
In one aspect, generating the first modified audio signal and generating the second modified audio signal include simulating surround sound.
In one aspect, generating the first modified audio signal and generating the second modified audio signal include using the first HRTF and the second HRTF, respectively, for only a subset of all available audio frequencies and/or channels.
In one aspect, the first HRTF and the second HRTF are further configured to utilize localization data from a localization module within the environment corresponding to locations of a plurality of acoustically reflective surfaces within the environment.
In one aspect, generating the first modified audio signal includes simulating a first direct sound originating from a first virtual sound source proximate the peripheral device and simulating a primary reflected sound corresponding to a simulated reflection of the first direct sound off of a first acoustically reflective surface of the plurality of acoustically reflective surfaces.
In one aspect, generating the first modified audio signal includes simulating a secondary reflected sound corresponding to a simulated reflection of the primary reflected sound off of a second acoustically reflective surface of the plurality of acoustically reflective surfaces.
In one aspect, the first modified audio signal and the second modified audio signal correspond to video content displayed on the peripheral device.
In one aspect, the orientation of the wearable audio device relative to the peripheral device is determined using at least one sensor, wherein the at least one sensor is located on, in, or in proximity to the wearable audio device or the peripheral device, and the at least one sensor is selected from: a gyroscope, an accelerometer, a magnetometer, a global positioning sensor (GPS), a proximity sensor, a microphone, a lidar sensor, or a camera.
In another example, a method of simulating audio signals is provided, the method including: receiving, via a wearable audio device from a peripheral device, a first modified audio signal, wherein the first modified audio signal is modified using a first head-related transfer function (HRTF) based at least in part on an orientation of the wearable audio device relative to the peripheral device; receiving, via the wearable audio device from the peripheral device, a second modified audio signal, wherein the second modified audio signal is modified using a second head-related transfer function (HRTF) based at least in part on the orientation of the wearable audio device relative to the peripheral device; rendering the first modified audio signal using a first speaker of the wearable audio device; and rendering the second modified audio signal using a second speaker of the wearable audio device.
In an aspect, the method further includes: obtaining a position of a wearable audio device relative to the peripheral device within an environment and wherein modifying the first modified audio signal and modifying the second modified audio signal are based at least in part on a calculated distance between the position of the wearable audio device and a position of the peripheral device.
In an aspect, the method further includes obtaining an orientation of the peripheral device relative to the wearable audio device, wherein the first HRTF and the second HRTF are based in part on the orientation of the peripheral device.
In an aspect, the first modified audio signal and the second modified audio signal are configured to simulate a first direct sound originating from a first virtual sound source proximate a center of the peripheral device.
In an aspect, rendering the first modified audio signal and rendering the second modified audio signal include simulating a first direct sound originating from a first virtual sound source proximate a position of the peripheral device within the environment and simulating a second direct sound originating from a second virtual sound source proximate the position of the peripheral device.
In an aspect, generating the first modified audio signal and generating the second modified audio signal include simulating surround sound.
In an aspect, generating the first modified audio signal and generating the second modified audio signal include using the first HRTF and the second HRTF, respectively, for only a subset of all available audio frequencies and/or channels.
In an aspect, the method further includes receiving localization data from a localization module within the environment; and determining locations of a plurality of acoustically reflective surfaces within the environment based on the localization data.
In an aspect, rendering the first modified audio signal includes simulating a first direct sound originating from a first virtual sound source proximate the peripheral device and simulating a primary reflected sound corresponding to a simulated reflection of the first direct sound off of a first acoustically reflective surface of the plurality of acoustically reflective surfaces.
In an aspect, rendering the first modified audio signal includes simulating a secondary reflected sound corresponding to a simulated reflection of the primary reflected sound off of a second acoustically reflective surface of the plurality of acoustically reflective surfaces.
In an aspect, the peripheral device includes a display configured to display video content associated with the first modified audio signal and second modified audio signal.
In an aspect, the orientation of the wearable audio device relative to the peripheral device is determined using at least one sensor, wherein the at least one sensor is located on, in, or in proximity to the wearable audio device or the peripheral device, and the at least one sensor is selected from: a gyroscope, an accelerometer, a magnetometer, a global positioning sensor (GPS), a proximity sensor, a microphone, a lidar sensor, or a camera.
In a further example, an audio system for simulating audio is provided, the system including a peripheral device configured to obtain or receive an orientation of a wearable audio device relative to the peripheral device within an environment, the peripheral device further configured to generate a first modified audio signal using a first head-related transfer function (HRTF) based on the orientation of the wearable audio device with respect to the peripheral device and generate a second modified audio signal using a second head-related transfer function (HRTF) based on the orientation of the wearable audio device with respect to the peripheral device; and, the wearable audio device. The wearable audio device includes a processor configured to receive the first modified audio signal and receive the second modified audio signal; a first speaker configured to render the first modified audio signal using the first speaker; and a second speaker configured to render the second modified audio signal using the second speaker.
These and other aspects of the various embodiments will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the various embodiments.
The present disclosure relates to audio systems, methods, and computer program products which include a wearable audio device (e.g., headphones or earbuds) and a peripheral device, such as a mobile peripheral device (e.g., a smartphone or tablet computer). The wearable audio device and the peripheral device are capable of determining their respective positions and/or orientations within an environment as well as their respective positions and/or orientations with respect to each other. Once the relative positions and orientations between, e.g., the wearable audio device and the peripheral device are known, virtual sound sources may be generated at fixed positions and orientations relative to the peripheral device such that any change in position and/or orientation of the peripheral device produces a proportional change in the position and/or orientation of the virtual sound sources. Additionally, one or more orders of reflected audio paths (e.g., first order, and optionally also second order) may be simulated for each virtual sound source to increase the sense of realism of the simulated sources. Each sound path, e.g., direct sound paths, as well as the orders of reflected sound paths (e.g., the first order, and optionally the second order), can be produced by modifying the original audio signal using a plurality of left head-related transfer functions (HRTFs) and a plurality of right HRTFs to simulate audio as though it were perceived by the user's left and right ears, respectively, coming from each virtual sound source.
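To make the “pinning” behavior described above concrete, the transform from device-relative source offsets to world coordinates can be sketched as follows. This is a minimal Python illustration, not the disclosed implementation: all names are hypothetical, and only yaw rotation of the peripheral device is modeled for brevity (a full implementation would use the complete yaw/pitch/roll orientation).

```python
import math

def pin_sources_to_device(device_pos, device_yaw_rad, local_offsets):
    """Transform virtual-source offsets, fixed in the peripheral device's
    frame, into world coordinates so the sources move with the device."""
    cos_y, sin_y = math.cos(device_yaw_rad), math.sin(device_yaw_rad)
    world = []
    for (x, y, z) in local_offsets:
        # Rotate the offset about the vertical (Z) axis, then translate
        # by the device position: any change in device pose moves sources.
        wx = device_pos[0] + x * cos_y - y * sin_y
        wy = device_pos[1] + x * sin_y + y * cos_y
        world.append((wx, wy, device_pos[2] + z))
    return world
```

Because the offsets are expressed in the device frame, re-running this transform whenever the device pose updates yields the proportional change in virtual-source position described above.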
The term “wearable audio device”, as used in this application, in addition to its ordinary meaning to those with skill in the art, is intended to mean a device that fits around, on, in, or near an ear (including open-ear audio devices worn on the head or shoulders of a user) and that radiates acoustic energy into or towards the ear. Wearable audio devices are sometimes referred to as headphones, earphones, earpieces, headsets, earbuds, or sport headphones, and can be wired or wireless. A wearable audio device includes an acoustic driver to transduce audio signals to acoustic energy, which could utilize air conduction and/or bone conduction techniques. The acoustic driver may be housed in an earcup. While some of the figures and descriptions following may show a single wearable audio device having a pair of earcups (each including an acoustic driver), it should be appreciated that a wearable audio device may be a single stand-alone unit having only one earcup. Each earcup of the wearable audio device may be connected mechanically to another earcup or headphone, for example by a headband and/or by leads that conduct audio signals to an acoustic driver in the earcup or headphone. A wearable audio device may include components for wirelessly receiving audio signals. A wearable audio device may include components of an active noise reduction (ANR) system. Wearable audio devices may also include other functionality, such as a microphone, so that they can function as a headset.
The term “head-related transfer function” or acronym “HRTF” as used herein, in addition to its ordinary meaning to those with skill in the art, is intended to broadly reflect any manner of calculating, determining, or approximating the binaural sound that a human ear perceives such that the listener can approximate the sound's position of origin in space. For example, an HRTF may be a mathematical formula or collection of mathematical formulas that can be applied to or convolved with an audio signal such that a user listening to the modified audio signal can perceive the sound as originating at a particular point in space. These HRTFs, as referred to herein, may be generated specific to each user, e.g., taking into account that user's unique physiology (e.g., size and shape of the head, ears, nasal cavity, oral cavity, etc.). Alternatively, it should be appreciated that a generalized HRTF may be generated that is applied to all users, or a plurality of generalized HRTFs may be generated that are applied to subsets of users (e.g., based on certain physiological characteristics that are at least loosely indicative of that user's unique head-related transfer function, such as age, gender, head size, ear size, or other parameters). In one example, certain aspects of the HRTFs may be accurately determined while other aspects are roughly approximated (e.g., the inter-aural delays are accurately determined, but the magnitude response is only coarsely approximated).
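The per-ear application of an HRTF can be illustrated as a convolution of a mono source signal with a left and a right head-related impulse response (HRIR), the time-domain counterpart of an HRTF. The following is a minimal sketch under that assumption; the names are hypothetical, and a practical renderer would use fast (FFT-based) convolution rather than this direct form.

```python
def convolve(signal, impulse_response):
    """Direct-form FIR convolution of a signal with an impulse response."""
    n, m = len(signal), len(impulse_response)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def binauralize(mono, hrir_left, hrir_right):
    """Produce a left/right signal pair from one mono virtual source by
    convolving it with per-ear impulse responses."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)
```

Each virtual sound source would be processed this way with the HRIR pair selected for its direction relative to the listener, and the per-source results summed per ear.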
The following description should be read in view of
As illustrated in
First circuitry 106 also includes at least one sensor, i.e., first sensor 118. First sensor 118 can be located on, in, or in communication with wearable audio device 102. First sensor 118 is selected from at least one of: a gyroscope, an accelerometer, a magnetometer, a global positioning sensor (GPS), a proximity sensor, a microphone or plurality of microphones, a camera or plurality of cameras (e.g., front and rear mounted cameras), or any other sensor device capable of obtaining at least one of: a first position P1 of wearable audio device 102 within environment E; a first position P1 relative to peripheral device 104; a first orientation O1 of the wearable audio device 102 relative to environment E; a first orientation O1 of the wearable audio device 102 relative to peripheral device 104; or the distance between wearable audio device 102 and peripheral device 104. First position P1 and first orientation O1 will be discussed below in further detail. Furthermore, first circuitry 106 can also include at least one speaker 120. In one example, first sensor 118 is a camera or plurality of cameras, e.g., front and rear-mounted cameras, capable of obtaining image data of the environment E and/or the relative location and orientation of peripheral device 104, as will be discussed below. In one example, first circuitry 106 includes a plurality of speakers 120A-120B configured to receive an audio signal, e.g., modified audio signals 146A-146B (discussed below), and generate an audio playback APB to produce audible acoustic energy associated with the audio signal proximate a user's ear.
As illustrated in
Second circuitry 122 can also include at least one sensor, i.e., second sensor 134. Second sensor 134 can be located on, in, or in communication with peripheral device 104. Second sensor 134 is selected from at least one of: a gyroscope, an accelerometer, a magnetometer, a global positioning sensor (GPS), a proximity sensor, a microphone, a camera or plurality of cameras (e.g., front and rear cameras), or any other sensor device capable of obtaining at least one of: a second position P2 of peripheral device 104 within environment E, a second position P2 relative to wearable audio device 102; a second orientation O2 of the peripheral device 104 relative to environment E; a second orientation O2 of the peripheral device 104 relative to wearable audio device 102; or the distance between wearable audio device 102 and peripheral device 104. Second position P2 and second orientation O2 will be discussed below in further detail. In one example, second sensor 134 is a camera or plurality of cameras, e.g., front and rear-mounted cameras, that are capable of obtaining image data of the environment E and/or the relative location and orientation of wearable audio device 102 as will be discussed below.
Furthermore, second circuitry 122 can also include at least one device speaker 136, and a display 138. In one example, at least one device speaker 136 is configured to receive an audio signal or a portion of an audio signal, e.g., modified audio signals 146A-146B (discussed below) and generate an audio playback APB to produce audible acoustic energy associated with the audio signal at the second position P2 of the peripheral device 104 at a fixed distance from the wearable audio device 102. Display 138 is intended to be a screen capable of displaying video content 140. In one example, display 138 is a Liquid-Crystal Display (LCD) and may also include touch-screen functionality, e.g., is capable of utilizing resistive or capacitive sensing to determine contact with, and position of, a user's finger against the screen surface. It should also be appreciated that display 138 can be selected from at least one of: a Light-Emitting Diode (LED) screen, an Organic Light-Emitting Diode (OLED) screen, a plasma screen, or any other display technology capable of presenting pictures or video, e.g., video content 140, to a viewer or user.
As mentioned above, wearable audio device 102 and/or peripheral device 104 are configured to obtain their respective positions and orientations within environment E and/or relative to each other using first sensor 118 and second sensor 134, respectively. In one example, environment E is a room, e.g., a space defined by a floor surrounded by at least one wall and capped by a ceiling or roof, within which single positions can be modeled and defined by a three-dimensional Cartesian coordinate system as having X, Y, and Z positions within the defined space associated with a length dimension, a width dimension, and a height dimension, respectively. Therefore, obtaining first position P1 of wearable audio device 102 can be absolute within environment E, e.g., defined purely by its Cartesian coordinates within the room, or can be relative to the position of the other device, i.e., peripheral device 104.
Similarly, each device can obtain its own orientation defined by a respective yaw, pitch, and roll within a spherical coordinate system with an origin point at the center of each device, where yaw includes rotation about a vertical axis through the device and orthogonal to the floor beneath the device, pitch includes rotation about a first horizontal axis orthogonal to the vertical axis and extending from the at least one wall of the room, and roll includes rotation about a second horizontal axis orthogonal to the vertical axis and the first horizontal axis. In one example, where first orientation O1 of wearable audio device 102 and second orientation O2 of peripheral device 104 are defined relative to each other, each device may determine a vector representative of a relative elevation between each device and a relative azimuth angle, which are based in part on the yaw, pitch, and roll of each device. It should also be appreciated that first orientation O1 and second orientation O2 can also be obtained absolutely within environment E, e.g., with respect to a predetermined and/or fixed position within environment E.
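The relative azimuth and elevation described above can be sketched as follows. This is a simplified Python illustration with hypothetical names: it assumes only the wearer's yaw affects the head frame (a complete implementation would account for pitch and roll as well).

```python
import math

def azimuth_elevation(head_pos, head_yaw_rad, source_pos):
    """Azimuth and elevation (radians) of a source relative to the
    wearable audio device, accounting for the wearer's yaw."""
    dx = source_pos[0] - head_pos[0]
    dy = source_pos[1] - head_pos[1]
    dz = source_pos[2] - head_pos[2]
    # Rotate the world-frame bearing into the head frame via the yaw.
    azimuth = math.atan2(dy, dx) - head_yaw_rad
    horiz = math.hypot(dx, dy)          # horizontal range to the source
    elevation = math.atan2(dz, horiz)   # angle above/below the horizon
    return azimuth, elevation
```

The resulting angle pair is the kind of quantity an HRTF lookup would be indexed by for each virtual sound source.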
As mentioned above, the respective circuitries of the devices of audio system 100, e.g., first circuitry 106 of wearable audio device 102 and second circuitry 122 of peripheral device 104, are capable of establishing, and sending and/or receiving wired or wireless data over, a data connection 142. For example, first antenna 116 of first communication module 114 is configured to establish data connection 142 with second antenna 132 of second communications module 130. Data connection 142 can utilize one or more wired or wireless data protocols selected from at least one of: Bluetooth, Bluetooth Low-Energy (BLE) or LE Audio, Radio Frequency Identification (RFID) communications, Low-Power Radio frequency transmission (LP-RF), Near-Field Communications (NFC), or any other protocol or communication standard capable of establishing a permanent or semi-permanent connection, also referred to as paired connection, between first circuitry 106 and second circuitry 122. It should be appreciated that data connection 142 can be utilized by first circuitry 106 of wearable audio device 102 and second circuitry 122 of peripheral device 104 to send and/or receive data relating to the respective positions and orientations of each device as discussed above, e.g., first position P1, second position P2, first orientation O1, second orientation O2, and the distance between devices, such that each device can be aware of the position and orientation of itself and/or the other devices within audio system 100. Additionally, as mentioned above, data connection 142 can also be used to send and/or receive audio data, e.g., modified audio signals 146A-146B (discussed below) between the devices of audio system 100.
In addition to the ability to obtain respective positions and orientations of each device of audio system 100, audio system 100 is also configured to render externalized sound to the user within environment E, using, for example, modified audio signals 146A-146B (discussed below) that have been filtered or modified using at least one head-related transfer function (HRTF) (also discussed below). In one example of audio system 100, sound externalization for use in augmented reality audio systems and programs is achieved by modeling an environment E, creating virtual sound sources at various positions within environment E, e.g., virtual sound sources 144A-144G (collectively referred to as “plurality of virtual sound sources 144” or “virtual sound sources 144”), and modeling or simulating sound waves and their respective paths from the virtual sound sources 144 (shown in
In some examples, the positions of each virtual sound source of the plurality of virtual sound sources 144 with respect to the position of the wearable audio device 102 can be utilized to calculate and simulate a respective plurality of direct sound paths 148A-148G (collectively referred to as “plurality of direct sound paths 148” or “direct sound paths 148”), i.e., at least one direct sound path 148 from each virtual sound source 144 directly to the user's ears. Each sound path can be associated with a calculated distance (e.g., calculated distance D1 shown in
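The per-path distance cues implied above (attenuation and propagation delay for each direct sound path) can be sketched as follows. The inverse-distance gain law, the near-field clamp, and all names are simplifying assumptions for illustration, not the disclosed implementation.

```python
def distance_cues(source_pos, listener_pos, speed_of_sound=343.0):
    """Per-path gain and propagation delay from the calculated distance
    between a virtual sound source and the wearable audio device."""
    d = sum((s - l) ** 2 for s, l in zip(source_pos, listener_pos)) ** 0.5
    # Inverse-distance (1/r) attenuation, clamped to avoid blow-up
    # when the source is within 1 m of the listener.
    gain = 1.0 / max(d, 1.0)
    delay_s = d / speed_of_sound  # time of flight at ~343 m/s in air
    return gain, delay_s
```

Applying this gain and delay per path, before the HRTF stage, is one way the calculated distance can feed into the modified audio signals.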
In one example, illustrated in
Similarly to virtual sound source 144A associated with a center channel audio signal, left channel and right channel audio signals may be simulated through additional virtual sound sources, e.g., 144B and 144C, as illustrated in
Additionally, other virtual sound source configurations are possible. For example,
Alternatively, and although not illustrated, it should be appreciated that one or more virtual sound sources 144 within any of the foregoing exemplary configurations may be replaced by a real sound source, e.g., a real tangible speaker placed within environment E at the approximate location of the virtual sound source that it is intended to replace. For example, the center channel audio signal, rendered at the locations indicated for virtual sound source 144A, could be replaced, i.e., not generated virtually at that position, and the at least one device speaker 136 can render audio playback APB at the location of peripheral device 104, where the audio playback APB only includes center channel audio. Similarly, as it may be difficult to simulate the directionality of audio corresponding to a bass audio channel, a real subwoofer can be placed within environment E to replace an equivalent virtual bass sound source. In addition to, or in the alternative to, the foregoing, it should be appreciated that one or more virtual sound sources 144 within any of the foregoing exemplary configurations can be rendered by wearable audio device 102 without being virtualized or spatialized as discussed herein. For example, in a configuration that utilizes left, right, and center audio channels, as discussed above, audio system 100 can choose to virtualize or spatialize any of those channels by generating a virtual sound source 144 within the environment E that simulates one or more of those channels. However, audio system 100 can, in addition to, or in the alternative to, spatializing one or more of those channels, render audio at the speakers of the wearable audio device 102 that is unspatialized, e.g., one or more of those channels may be rendered to audible sound by the wearable audio device 102 and perceived by the user as though it were coming from inside the user's head.
In addition, in some implementations, the techniques described herein to spatially pin audio to a given location (such as the center of the display of the peripheral device) could separate the audio to be spatially pinned by frequency and/or channel, such that some portions of the audio are spatially pinned and other portions are not. For instance, the portions of the audio that relate to low frequencies, such as those for a subwoofer channel, could be excluded from being spatialized using the techniques variously described herein, as those low frequencies are relatively spatially/directionally agnostic compared to other frequencies. In other words, in the case of low frequencies and/or a subwoofer channel, there is little information a user's brain can use to localize the source of the low frequencies and/or subwoofer channel, and so including those frequencies and/or that channel when transforming the audio to be spatially pinned would add computational cost with little to no psychoacoustic benefit (as the user would not be able to tell where those low frequencies and/or that subwoofer channel were coming from, anyway). This is why subwoofers in audio systems can generally be placed anywhere in a room. In some such implementations, the techniques include separating out the frequency, channel, and/or portion (e.g., low frequencies and/or the subwoofer channel) prior to performing the spatial pinning as variously described herein, performing the spatial pinning for the remainder of the frequencies, channels, and/or portions, and then combining the non-spatially-pinned aspect (e.g., low frequencies and/or the subwoofer channel) with the spatially pinned aspect (e.g., all other frequencies and/or all other channels).
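The separate-spatialize-recombine flow described above can be sketched as follows. This is a minimal Python illustration: the one-pole low-pass crossover, the pass-through treatment of the low band, and all names are simplifying assumptions rather than the disclosed implementation.

```python
def one_pole_lowpass(signal, alpha):
    """Crude one-pole low-pass filter; isolates the low band that will
    be left unspatialized (e.g., subwoofer-range content)."""
    out, prev = [], 0.0
    for x in signal:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out

def split_and_recombine(signal, alpha, spatialize):
    """Spatialize only the high band; pass the low band through.
    `spatialize` maps a mono signal to a (left, right) pair."""
    low = one_pole_lowpass(signal, alpha)
    high = [x - l for x, l in zip(signal, low)]
    left, right = spatialize(high)
    # Low frequencies are direction-agnostic: mix them equally into
    # both ears instead of spending HRTF processing on them.
    return ([l + lo for l, lo in zip(left, low)],
            [r + lo for r, lo in zip(right, low)])
```

With an identity `spatialize`, the low and high bands sum back to the original signal, which shows the split itself loses nothing.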
In the following examples, corresponding to
During operation, as illustrated in
In another example, audio system 100 may utilize localization data to further increase the simulated realism of the externalized and/or virtualized sound sources 144. As mentioned above, in addition to simulating direct sound paths from each virtual sound source 144, one way to increase the realism of the simulated sound is to add additional virtual sound sources 144 which simulate the primary and secondary reflections that real audio sources produce when propagating sound signals reflect off of acoustically reflective surfaces and back to the user. In other words, real sound sources create spherical waves, not just directional waves, which reflect off of, e.g., acoustically reflective surfaces 154A-154D (collectively referred to as “acoustically reflective surfaces 154” or “surfaces 154”), which can include but are not limited to walls, floors, ceilings, and other acoustically reflective surfaces such as furniture. Therefore, localization refers to the process of obtaining data of the immediate or proximate area or environment E surrounding the user, e.g., surrounding the wearable audio device 102 and/or the peripheral device 104, which would indicate the locations, orientations, and/or acoustically reflective properties of the objects within the user's environment E. Once located, reflective paths may be calculated between each virtual sound source 144 and each surface 154. The points where the paths contact each surface 154, herein referred to as contact points CP, can be utilized to generate a new virtual sound source which, when simulated, produces sound that simulates an acoustic reflection of the original virtual sound source 144. One way to generate these new virtual sound sources is to create mirrored virtual sound sources for each virtual sound source, where the mirrored virtual sound sources are mirrored about the acoustically reflective surface 154, as will be described with respect to
Once localization data is obtained using, e.g., localization module 156, and in addition to direct sound paths 148A and 148B discussed above, paths between each virtual sound source 144 and each acoustically reflective surface 154 can be determined. At the junction between each determined path and each acoustically reflective surface 154, there is a contact point CP. In one example, as illustrated in
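The mirroring construction described above can be sketched numerically. The function names below are illustrative, and each surface 154 is modeled as an infinite plane given by a point and a normal, an assumption not specified in the disclosure:

```python
import numpy as np

def mirror_source(source, plane_point, plane_normal):
    """Mirror a virtual sound source about an acoustically reflective
    surface, modeled here as an infinite plane."""
    n = np.asarray(plane_normal, float) / np.linalg.norm(plane_normal)
    s = np.asarray(source, float)
    return s - 2.0 * np.dot(s - plane_point, n) * n

def contact_point(mirrored, listener, plane_point, plane_normal):
    """Contact point CP: where the straight path from the mirrored
    source to the listener crosses the reflective surface."""
    n = np.asarray(plane_normal, float) / np.linalg.norm(plane_normal)
    m = np.asarray(mirrored, float)
    d = np.asarray(listener, float) - m
    t = np.dot(np.asarray(plane_point, float) - m, n) / np.dot(d, n)
    return m + t * d

# Virtual source 1 m in front of a wall at x = 0; listener nearby.
src = [1.0, 0.0, 0.0]
wall_pt, wall_n = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
img = mirror_source(src, wall_pt, wall_n)                  # [-1, 0, 0]
cp = contact_point(img, [1.0, 2.0, 0.0], wall_pt, wall_n)  # [0, 1, 0]
```

Rendering audio from the mirrored position `img` reproduces, at the listener, a first order reflection of the original source whose bounce point on the wall is CP.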
Similarly, audio system 100 can generate secondary mirrored virtual sound sources 162A-162B (collectively referred to as “secondary mirrored virtual sound sources 162” or “secondary mirrored sources 162”). Each secondary mirrored virtual sound source 162 is a new virtual sound source generated at a position equivalent to the position of the original virtual sound source 144 and mirrored about a different acoustically reflective surface 154. For example, as illustrated, a two-part path (shown by two dashed lines in
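The same mirroring can be applied recursively: a secondary mirrored source is the mirror image of a primary mirrored source about a different surface. The sketch below uses the same illustrative infinite-plane model, and the rule that a surface is never used twice in a row mirrors the "different acoustically reflective surface" requirement above:

```python
import numpy as np

def mirror(point, plane_point, plane_normal):
    """Reflect a point about an infinite plane."""
    n = np.asarray(plane_normal, float) / np.linalg.norm(plane_normal)
    p = np.asarray(point, float)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def image_sources(source, surfaces, order):
    """Generate mirrored sources up to the given reflection order.
    Each surface is a (point_on_plane, normal) pair; a surface is never
    used twice in a row, since a second order reflection bounces off a
    different surface than the first."""
    images = []
    def recurse(pos, last, depth):
        if depth == order:
            return
        for i, (pp, pn) in enumerate(surfaces):
            if i == last:
                continue
            img = mirror(pos, pp, pn)
            images.append((img, depth + 1))
            recurse(img, i, depth + 1)
    recurse(np.asarray(source, float), None, 0)
    return images

# Two perpendicular walls: x = 0 and y = 0.
walls = [(np.zeros(3), np.array([1.0, 0.0, 0.0])),
         (np.zeros(3), np.array([0.0, 1.0, 0.0]))]
imgs = image_sources([1.0, 2.0, 0.0], walls, order=2)
# 2 first order images + 2 second order images = 4 mirrored sources
```

For this corner geometry both second order images coincide at (-1, -2, 0), the source mirrored through the corner line.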
Similarly to the example described above with respect to
It should be appreciated that primary reflected sound paths 160 and secondary reflected sound paths 164 can be simulated using primary mirrored virtual sound sources 158 and secondary mirrored virtual sound sources 162 for every virtual sound source configuration discussed above, e.g., 5.1, 7.1, and 9.1 surround sound configurations as well as configurations which include at least one virtual subwoofer associated with bass channel audio signals. Additionally, the present disclosure is not limited to primary and secondary reflections. For example, higher order reflections, e.g., third order reflections, fourth order reflections, fifth order reflections, etc., are possible; however, as the reflection order, and therefore the number of virtual sound sources simulated, increases, the required computational processing power and processing time scale exponentially. In one example, audio system 100 is configured to simulate six virtual sound sources 144, e.g., corresponding to a 5.1 surround sound configuration. For each virtual sound source 144, a direct sound path 148 is calculated. For each virtual sound source 144 there are six first order or primary reflected sound paths 160, corresponding to a first order reflection off of each of the four walls, the ceiling, and the floor (e.g., acoustically reflective surfaces 154). Each first order reflected path may again reflect off of the five remaining surfaces 154, producing an exponential number of virtual sources and reflected sound paths. It should be appreciated that, in some example implementations of audio system 100, the number of second order reflections 164 is dependent on the geometry of the environment E, e.g., the shape of the room with respect to the position of the wearable audio device 102 and the virtual sound sources 144.
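The growth described above can be made concrete with a simple count. The helper below assumes each bounce may use any surface except the one just used; pruning geometrically invalid paths, as discussed for rectangular rooms, would reduce these figures:

```python
def path_count(sources, surfaces, max_order):
    """Number of simulated sound paths per reflection order, assuming
    each bounce may hit any surface except the one just used."""
    counts = {0: sources}          # order 0: one direct path per source
    per_source = 1
    for order in range(1, max_order + 1):
        per_source *= surfaces if order == 1 else surfaces - 1
        counts[order] = sources * per_source
    return counts

# Six virtual sources (5.1 layout) in a six-surface rectangular room:
# 6 direct paths, 36 first order paths, 180 second order paths.
print(path_count(6, 6, 2))   # {0: 6, 1: 36, 2: 180}
```

With the rectangular-room pruning (three rather than five valid second bounces per first order path), the second order total would drop from 180 to 108 paths (6 sources × 6 first bounces × 3 valid second bounces).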
For example, in a rectangular room geometry, once a first order or primary reflected sound path 160 is selected, certain second order reflections 164 may not be physically possible, e.g., where the contact points CP would need to be positioned outside of the room to obtain a valid second order reflection path. Thus, in an example with a rectangular room geometry, it should be appreciated that rather than simulating five secondary reflected sound paths 164 for each first order reflected sound path 160, only three secondary reflected sound paths 164 may be simulated, to account for invalid second order reflections 164 caused by the particular room geometry. For example, rather than simulating six first order reflections 160 and thirty second order reflections 164 (e.g., where each of the six first order sound paths 160 is reflected off of each of the five remaining walls), audio system 100 can simulate six first order reflections 160 and only eighteen secondary reflected sound paths 164 (e.g., each of the six first order reflections 160 off of three of the five remaining walls). It should also be appreciated that audio system 100 can be configured to perform a validity test across all simulated paths to ensure that the path from each simulated source to, e.g., the wearable audio device 102 is a valid path, i.e., is physically realizable given the geometry of the environment E.
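One standard way to implement such a validity test is to trace each image-source path backward from the listener and confirm that every contact point CP lies on its surface inside the room. The backtracking routine below is an illustrative sketch under an axis-aligned (shoebox) room assumption, not the disclosure's specific algorithm:

```python
import numpy as np

def mirror(point, plane_point, plane_normal):
    """Reflect a point about an infinite plane."""
    n = np.asarray(plane_normal, float) / np.linalg.norm(plane_normal)
    p = np.asarray(point, float)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def valid_path(listener, source, planes, lo, hi):
    """Return True if the path source -> planes[0] -> ... -> listener
    is physically realizable: traced backward from the listener, every
    contact point CP must fall inside the room box [lo, hi] with an
    intersection parameter 0 < t < 1."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    images = [np.asarray(source, float)]
    for pp, n in planes:                      # one image source per bounce
        images.append(mirror(images[-1], pp, n))
    target = np.asarray(listener, float)
    for (pp, n), img in zip(reversed(planes), reversed(images[1:])):
        n = np.asarray(n, float) / np.linalg.norm(n)
        d = target - img
        denom = np.dot(d, n)
        if abs(denom) < 1e-12:
            return False                      # segment parallel to wall
        t = np.dot(np.asarray(pp, float) - img, n) / denom
        if not 0.0 < t < 1.0:
            return False                      # wall not between endpoints
        cp = img + t * d
        if np.any(cp < lo - 1e-9) or np.any(cp > hi + 1e-9):
            return False                      # CP outside the room
        target = cp                           # continue toward the source
    return True
```

For example, in a 4 m x 3 m x 3 m room, a second order bounce off the wall at x = 0 and then the wall at y = 0 passes the test, while a path that reuses the same wall twice fails it.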
Additionally, due to the potential processing power required to generate these first order and second order reflections in real-time, in one example, audio system 100 utilizes the processing capacity of second circuitry 122 of peripheral device 104, e.g., using second processor 124, second memory 126 and/or second set of non-transitory computer-readable instructions 128. However, it should be appreciated that, in some example implementations of audio system 100, audio system 100 can utilize the processing capacity of first circuitry 106 of wearable audio device 102 to simulate the first and second order reflected sound sources discussed herein, e.g., using first processor 108, first memory 110, and/or first set of non-transitory computer-readable instructions 112. Furthermore, it should be appreciated that audio system 100 can split the processing load between first circuitry 106 and second circuitry 122 in any conceivable combination.
During operation, as illustrated in
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified.
As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.”
As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively.
The above-described examples of the described subject matter can be implemented in any of numerous ways. For example, some aspects may be implemented using hardware, software or a combination thereof. When any aspect is implemented at least in part in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single device or computer or distributed among multiple devices/computers.
The present disclosure may be implemented as a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some examples, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to examples of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
The computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various examples of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Other implementations are within the scope of the following claims and other claims to which the applicant may be entitled.
While various examples have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the examples described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific examples described herein. It is, therefore, to be understood that the foregoing examples are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, examples may be practiced otherwise than as specifically described and claimed. Examples of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
This application is a continuation of U.S. patent application Ser. No. 16/904,087, filed Jun. 17, 2020, the entire contents of which are hereby incorporated by reference.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 16904087 | Jun 2020 | US |
| Child | 17713147 | | US |