The disclosure pertains to systems and methods for rendering audio for playback by some or all speakers (for example, each activated speaker) of a set of speakers.
Audio devices, including but not limited to smart audio devices, have been widely deployed and are becoming common features of many homes. Although existing systems and methods for controlling audio devices provide benefits, improved systems and methods would be desirable.
Throughout this disclosure, including in the claims, the terms “speaker,” “loudspeaker” and “audio reproduction transducer” are used synonymously to denote any sound-emitting transducer (or set of transducers). A typical set of headphones includes two speakers. A speaker may be implemented to include multiple transducers (e.g., a woofer and a tweeter), which may be driven by a single, common speaker feed or multiple speaker feeds. In some examples, the speaker feed(s) may undergo different processing in different circuitry branches coupled to the different transducers.
Throughout this disclosure, including in the claims, the expression performing an operation “on” a signal or data (e.g., filtering, scaling, transforming, or applying gain to, the signal or data) is used in a broad sense to denote performing the operation directly on the signal or data, or on a processed version of the signal or data (e.g., on a version of the signal that has undergone preliminary filtering or pre-processing prior to performance of the operation thereon).
Throughout this disclosure including in the claims, the expression “system” is used in a broad sense to denote a device, system, or subsystem. For example, a subsystem that implements a decoder may be referred to as a decoder system, and a system including such a subsystem (e.g., a system that generates X output signals in response to multiple inputs, in which the subsystem generates M of the inputs and the other X−M inputs are received from an external source) may also be referred to as a decoder system.
Throughout this disclosure including in the claims, the term “processor” is used in a broad sense to denote a system or device programmable or otherwise configurable (e.g., with software or firmware) to perform operations on data (e.g., audio, or video or other image data). Examples of processors include a field-programmable gate array (or other configurable integrated circuit or chip set), a digital signal processor programmed and/or otherwise configured to perform pipelined processing on audio or other sound data, a programmable general purpose processor or computer, and a programmable microprocessor chip or chip set.
Throughout this disclosure including in the claims, the term “couples” or “coupled” is used to mean either a direct or indirect connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.
As used herein, a “smart device” is an electronic device, generally configured for communication with one or more other devices (or networks) via various wireless protocols such as Bluetooth, Zigbee, near-field communication, Wi-Fi, light fidelity (Li-Fi), 3G, 4G, 5G, etc., that can operate to some extent interactively and/or autonomously. Several notable types of smart devices are smartphones, smart cars, smart thermostats, smart doorbells, smart locks, smart refrigerators, phablets and tablets, smartwatches, smart bands, smart key chains and smart audio devices. The term “smart device” may also refer to a device that exhibits some properties of ubiquitous computing, such as artificial intelligence.
Herein, we use the expression “smart audio device” to denote a smart device which is either a single-purpose audio device or a multi-purpose audio device (e.g., an audio device that implements at least some aspects of virtual assistant functionality). A single-purpose audio device is a device (e.g., a television (TV)) including or coupled to at least one microphone (and optionally also including or coupled to at least one speaker and/or at least one camera), and which is designed largely or primarily to achieve a single purpose. For example, although a TV typically can play (and is thought of as being capable of playing) audio from program material, in most instances a modern TV runs some operating system on which applications run locally, including the application of watching television. In this sense, a single-purpose audio device having speaker(s) and microphone(s) is often configured to run a local application and/or service to use the speaker(s) and microphone(s) directly. Some single-purpose audio devices may be configured to group together to achieve playing of audio over a zone or user configured area.
One common type of multi-purpose audio device is an audio device that implements at least some aspects of virtual assistant functionality, although other aspects of virtual assistant functionality may be implemented by one or more other devices, such as one or more servers with which the multi-purpose audio device is configured for communication. Such a multi-purpose audio device may be referred to herein as a “virtual assistant.” A virtual assistant is a device (e.g., a smart speaker or voice assistant integrated device) including or coupled to at least one microphone (and optionally also including or coupled to at least one speaker and/or at least one camera). In some examples, a virtual assistant may provide an ability to utilize multiple devices (distinct from the virtual assistant) for applications that are in a sense cloud-enabled or otherwise not completely implemented in or on the virtual assistant itself. In other words, at least some aspects of virtual assistant functionality, e.g., speech recognition functionality, may be implemented (at least in part) by one or more servers or other devices with which a virtual assistant may communicate via a network, such as the Internet. Virtual assistants may sometimes work together, e.g., in a discrete and conditionally defined way. For example, two or more virtual assistants may work together in the sense that one of them, e.g., the one which is most confident that it has heard a wakeword, responds to the wakeword. The connected virtual assistants may, in some implementations, form a sort of constellation, which may be managed by one main application which may be (or implement) a virtual assistant.
Herein, “wakeword” is used in a broad sense to denote any sound (e.g., a word uttered by a human, or some other sound), where a smart audio device is configured to awake in response to detection of (“hearing”) the sound (using at least one microphone included in or coupled to the smart audio device, or at least one other microphone). In this context, to “awake” denotes that the device enters a state in which it awaits (in other words, is listening for) a sound command. In some instances, what may be referred to herein as a “wakeword” may include more than one word, e.g., a phrase.
Herein, the expression “wakeword detector” denotes a device configured (or software that includes instructions for configuring a device) to search continuously for alignment between real-time sound (e.g., speech) features and a trained model. Typically, a wakeword event is triggered whenever it is determined by a wakeword detector that the probability that a wakeword has been detected exceeds a predefined threshold. For example, the threshold may be a predetermined threshold which is tuned to give a reasonable compromise between rates of false acceptance and false rejection. Following a wakeword event, a device might enter a state (which may be referred to as an “awakened” state or a state of “attentiveness”) in which it listens for a command and passes on a received command to a larger, more computationally-intensive recognizer.
As used herein, the terms “program stream” and “content stream” refer to a collection of one or more audio signals, and in some instances video signals, at least portions of which are meant to be heard together. Examples include a selection of music, a movie soundtrack, a movie, a television program, the audio portion of a television program, a podcast, a live voice call, a synthesized voice response from a smart assistant, etc. In some instances, the content stream may include multiple versions of at least a portion of the audio signals, e.g., the same dialogue in more than one language. In such instances, only one version of the audio data or portion thereof (e.g., a version corresponding to a single language) is intended to be reproduced at one time.
At least some aspects of the present disclosure may be implemented via methods. Some such methods may involve audio processing. For example, some methods may involve receiving, by a control system and via an interface system, audio data. The audio data may include one or more audio signals and associated spatial data. The spatial data may indicate an intended perceived spatial position corresponding to an audio signal. In some examples, the spatial data may be, or may include, positional metadata. According to some examples, the spatial data may be, may include or may correspond with channels of a channel-based audio format.
In some examples, the method may involve rendering, by the control system, the audio data for reproduction via a set of loudspeakers of an environment, to produce first rendered audio signals. In some such examples, rendering the audio data for reproduction may involve determining a first relative activation of a set of loudspeakers in the environment according to a first rendering configuration. The first rendering configuration may correspond to a first set of speaker activations. In some examples, the method may involve providing, via the interface system, the first rendered audio signals to at least some loudspeakers of the set of loudspeakers of the environment.
According to some examples, the method may involve receiving, by the control system and via the interface system, a first rendering transition indication. The first rendering transition indication may, for example, indicate a transition from the first rendering configuration to a second rendering configuration.
In some examples, the method may involve determining, by the control system, a second set of speaker activations. According to this example, the second set of speaker activations corresponds to a simplified version of the second rendering configuration. However, in other examples the second set of speaker activations may correspond to a complete, full-fidelity version of the second rendering configuration.
According to some examples, the method may involve performing, by the control system, a first transition from the first set of speaker activations to the second set of speaker activations. In some examples, the method may involve determining, by the control system, a third set of speaker activations. According to this example, the third set of speaker activations corresponds to a complete version of the second rendering configuration. In some examples, the method may involve performing, by the control system, a second transition to the third set of speaker activations without requiring completion of the first transition. In some examples, a single renderer instance may render the audio data for reproduction.
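The preceding paragraphs describe transitions between sets of speaker activations that may be preempted before completion. The following Python sketch illustrates one way such preemptible transitions could be managed; it is only an illustration of the idea, not the disclosed implementation, and the class name, the linear interpolation and the block-based stepping are all assumptions.

```python
# Minimal sketch (not the disclosed implementation) of speaker-activation
# transitions that can be preempted before completion. Activation matrices
# are assumed to be (num_speakers x num_objects) gain matrices.
import numpy as np

class ActivationTransitioner:
    def __init__(self, initial_activations: np.ndarray):
        self.current = initial_activations.astype(float)  # activations currently applied
        self.start = self.current.copy()
        self.target = self.current.copy()
        self.step_index = 0
        self.num_steps = 1

    def begin_transition(self, target_activations: np.ndarray, num_steps: int) -> None:
        # Preemption: a new transition starts from the *currently applied*
        # (possibly mid-transition) activations, so it does not require
        # completion of any earlier transition.
        self.start = self.current.copy()
        self.target = target_activations.astype(float)
        self.step_index = 0
        self.num_steps = max(1, num_steps)

    def step(self) -> np.ndarray:
        # Advance one processing block and return the activations to apply.
        self.step_index = min(self.step_index + 1, self.num_steps)
        alpha = self.step_index / self.num_steps
        self.current = (1.0 - alpha) * self.start + alpha * self.target
        return self.current

# Usage: begin moving toward a quickly computed, simplified version of the new
# rendering configuration, then preempt with the complete version when ready.
transitioner = ActivationTransitioner(np.eye(4, 3))
transitioner.begin_transition(np.full((4, 3), 0.25), num_steps=100)  # simplified configuration
for _ in range(30):
    transitioner.step()
transitioner.begin_transition(np.full((4, 3), 0.50), num_steps=100)  # complete configuration, preempting
```

Because each new transition snapshots the currently applied activations as its starting point, the second transition (toward the complete version of the second rendering configuration) can begin at any time without waiting for the first transition to finish.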
In some examples, the first set of speaker activations, the second set of speaker activations and the third set of speaker activations may be frequency-dependent speaker activations. According to some such examples, the frequency-dependent speaker activations may correspond with and/or be produced by applying, in at least a first frequency band, a model of perceived spatial position that produces a binaural response corresponding to an audio object position at the left and right ears of a listener.
In some examples, the frequency-dependent speaker activations may correspond with and/or be produced by applying, in at least a second frequency band, a model of perceived spatial position that places a perceived spatial position of an audio signal playing from a set of loudspeakers at a center of mass of the set of loudspeakers' positions weighted by the loudspeakers' associated activating gains.
According to some examples, the first set of speaker activations, the second set of speaker activations and/or the third set of speaker activations may be based, at least in part, on a cost function. In some such examples, the first set of speaker activations, the second set of speaker activations and/or the third set of speaker activations may be a result of optimizing a cost that is a function of the following: a model of perceived spatial position of the audio signal played when played back over the set of loudspeakers in the environment; a measure of proximity of the intended perceived spatial position of the audio signal to a position of each loudspeaker of the set of loudspeakers; and/or one or more additional dynamically configurable functions. In some such examples, the one or more additional dynamically configurable functions may be based on one or more of the following: the proximity of loudspeakers to one or more listeners; the proximity of loudspeakers to an attracting force position (wherein an attracting force may be a factor that favors relatively higher activation of loudspeakers in closer proximity to the attracting force position); the proximity of loudspeakers to a repelling force position (wherein a repelling force may be a factor that favors relatively lower activation of loudspeakers in closer proximity to the repelling force position); the capabilities of each loudspeaker relative to other loudspeakers in the environment; synchronization of the loudspeakers with respect to other loudspeakers; wakeword performance; and/or echo canceller performance.
According to some examples, the method may involve receiving, by the control system and via the interface system, a second rendering transition indication. According to some such examples, the second rendering transition indication may indicate a transition to a third rendering configuration. In some such examples, the method may involve determining, by the control system, a fourth set of speaker activations corresponding to the third rendering configuration. In some such examples, the method may involve performing, by the control system, a third transition to the fourth set of speaker activations without requiring completion of the first transition or the second transition. In some examples, the method may involve receiving, by the control system and via the interface system, a third rendering transition indication. In some such examples, the third rendering transition indication may indicate a transition to a fourth rendering configuration. In some such examples, the method may involve determining, by the control system, a fifth set of speaker activations corresponding to the fourth rendering configuration. In some such examples, the method may involve performing, by the control system, a fourth transition to the fifth set of speaker activations without requiring completion of the first transition, the second transition or the third transition.
In some examples, the method may involve receiving, by the control system and via the interface system and sequentially, second through (N)th rendering transition indications. In some such examples, the method may involve determining, by the control system, fourth through (N+2)th sets of speaker activations corresponding to the second through (N)th rendering transition indications. In some such examples, the method may involve performing, by the control system and sequentially, third through (N)th transitions from the fourth set of speaker activations to a (N+1)th set of speaker activations. In some such examples, the method may involve performing, by the control system, an (N+1)th transition to the (N+2)th set of speaker activations without requiring completion of any of the first through (N)th transitions.
According to some examples, the method may involve receiving, by the control system and via the interface system, a second rendering transition indication. In some instances, the second rendering transition indication may indicate a transition to a third rendering configuration. In some such examples, the method may involve determining, by the control system, a fourth set of speaker activations corresponding to a simplified version of the third rendering configuration. In some such examples, the method may involve performing, by the control system, a third transition from the third set of speaker activations to the fourth set of speaker activations. In some such examples, the method may involve determining, by the control system, a fifth set of speaker activations corresponding to a complete version of the third rendering configuration. In some such examples, the method may involve performing, by the control system, a fourth transition to the fifth set of speaker activations without requiring completion of the first transition, the second transition or the third transition.
In some examples, the method may involve receiving, by the control system and via the interface system and sequentially, second through (N)th rendering transition indications. In some such examples, the method may involve determining, by the control system, a first set of speaker activations and a second set of speaker activations for each of the second through (N)th rendering transition indications. In some such examples, the first set of speaker activations may correspond to a simplified version of a rendering configuration and the second set of speaker activations may correspond to a complete version of a rendering configuration for each of the second through (N)th rendering transition indications. In some such examples, the method may involve performing, by the control system and sequentially, third through (2N−1)th transitions from a fourth set of speaker activations to a (2N)th set of speaker activations. In some such examples, the method may involve performing, by the control system, a (2N)th transition to a (2N+1)th set of speaker activations without requiring completion of any of the first through (2N)th transitions.
According to some examples, rendering the audio data for reproduction may involve determining a single set of interpolated activations from the rendering configurations and applying the single set of interpolated activations to produce a single set of rendered audio signals. In some such examples, the single set of rendered audio signals may be fed into a set of loudspeaker delay lines. In some such examples, the set of loudspeaker delay lines may include one loudspeaker delay line for each loudspeaker of a plurality of loudspeakers.
In some examples, rendering of the audio data for reproduction may be performed in the frequency domain. In some such examples, rendering the audio data for reproduction may involve determining and implementing loudspeaker delays in the frequency domain. In some such examples, determining and implementing speaker delays in the frequency domain may involve determining and implementing a combination of transform block delays and sub-block delays applied by frequency domain filter coefficients. In some such examples, the sub-block delays may be residual phase terms that allow for delays that are not exact multiples of a frequency domain transform block size. In some examples, rendering the audio data for reproduction may involve implementing a set of transform block delay lines with separate read offsets.
In some examples, rendering the audio data for reproduction may involve implementing sub-block delay filtering. In some such examples, implementing the sub-block delay filtering may involve implementing multi-tap filters across blocks of the frequency domain transform.
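To make the split between transform block delays and sub-block delays concrete, the Python sketch below divides a delay into a whole number of transform blocks plus a residual applied as a frequency-domain phase term. It is a simplified, single-tap approximation under assumed block and FFT sizes; the multi-tap filtering across transform blocks mentioned above would refine this, and none of the numeric values are taken from the disclosure.

```python
# Minimal sketch of splitting a per-loudspeaker delay into a transform block
# delay (a delay line read offset) plus a sub-block residual applied by
# frequency-domain coefficients. Block and FFT sizes are assumptions.
import numpy as np

def split_delay(delay_samples: int, block_size: int):
    block_delay = delay_samples // block_size  # whole transform blocks
    residual = delay_samples % block_size      # remainder, not a multiple of the block size
    return block_delay, residual

def sub_block_delay_coeffs(residual_samples: int, fft_size: int) -> np.ndarray:
    # Residual phase term: multiplying bin k by exp(-j*2*pi*k*d/N) corresponds
    # to a (circular) delay of d samples within the transform block; a
    # multi-tap filter across blocks would avoid the circular wrap-around.
    k = np.arange(fft_size // 2 + 1)
    return np.exp(-2j * np.pi * k * residual_samples / fft_size)

block_size = 256
fft_size = 512  # e.g., zero-padded blocks
block_delay, residual = split_delay(700, block_size)   # -> 2 blocks + 188 samples
phase = sub_block_delay_coeffs(residual, fft_size)
# A renderer would read this speaker's spectrum `block_delay` blocks back in
# its transform-block delay line and multiply it by `phase`.
```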
According to some examples, rendering the audio data for reproduction may involve determining and applying interpolated speaker activations and crossfade windows for each rendering configuration. In some such examples, rendering the audio data for reproduction may involve implementing a set of transform block delay lines with separate delay line read offsets. In some such examples, crossfade window selection may be based, at least in part, on the delay line read offsets. In some such examples, the crossfade windows may be designed to have a unity power sum if the delay line read offsets are not identical.
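As one illustration of crossfade windows having a unity power sum, the sketch below uses sine/cosine fades, for which the squared windows sum to one at every sample. The specific window shape is an assumption chosen for illustration, not a requirement of the disclosure.

```python
# Minimal sketch of equal-power crossfade windows whose squared sum is unity,
# as might be used when the two delay line read offsets are not identical (so
# the crossfaded signals are approximately uncorrelated).
import numpy as np

def equal_power_crossfade(num_samples: int):
    t = np.linspace(0.0, 1.0, num_samples)
    fade_out = np.cos(0.5 * np.pi * t)
    fade_in = np.sin(0.5 * np.pi * t)
    return fade_out, fade_in

fade_out, fade_in = equal_power_crossfade(1024)
assert np.allclose(fade_out**2 + fade_in**2, 1.0)  # unity power sum
```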
In some examples, the first set of speaker activations may be for each of a corresponding plurality of positions in a three-dimensional space. However, according to some examples, the first set of speaker activations may correspond to a channel-based audio format. In some such examples, the intended perceived spatial position may correspond with a channel of the channel-based audio format.
Some or all of the operations, functions and/or methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on one or more non-transitory media. Such non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc. Accordingly, some innovative aspects of the subject matter described in this disclosure can be implemented in one or more non-transitory media having software stored thereon.
At least some aspects of the present disclosure may be implemented via apparatus. For example, one or more devices may be capable of performing, at least in part, the methods disclosed herein. In some implementations, an apparatus may include an interface system and a control system. The control system may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof. In some examples, the apparatus may be one of the above-referenced audio devices. However, in some implementations the apparatus may be another type of device, such as a mobile device, a laptop, a server, etc.
Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale.
Flexible rendering is a technique for rendering spatial audio over an arbitrary number of arbitrarily placed speakers. With the widespread deployment of smart audio devices (e.g., smart speakers) in the home, there is need for realizing flexible rendering technology which allows consumers to perform flexible rendering of audio, and playback of the so-rendered audio, using smart audio devices.
Several technologies have been developed to implement flexible rendering, including: Center of Mass Amplitude Panning (CMAP), and Flexible Virtualization (FV). Both of these technologies cast the rendering problem as one of cost function minimization, where the cost function consists of two terms: a first term that models the desired spatial impression that the renderer is trying to achieve, and a second term that assigns a cost to activating speakers. To date this second term has focused on creating a sparse solution where only speakers in close proximity to the desired spatial position of the audio being rendered are activated.
Some embodiments of the present disclosure are methods for managing playback of multiple streams of audio by at least one (e.g., all or some) of the smart audio devices of a set of smart audio devices (or by at least one (e.g., all or some) of the speakers of another set of speakers).
A class of embodiments involves methods for managing playback by at least one (e.g., all or some) of a plurality of coordinated (orchestrated) smart audio devices. For example, a set of smart audio devices present (in a system) in a user's home may be orchestrated to handle a variety of simultaneous use cases, including flexible rendering of audio for playback by all or some (i.e., by speaker(s) of all or some) of the smart audio devices.
Orchestrating smart audio devices (e.g., in the home to handle a variety of simultaneous use cases) may involve the simultaneous playback of one or more audio program streams over an interconnected set of speakers. For example, a user might be listening to a cinematic Atmos soundtrack (or other object-based audio program) over the set of speakers, but then the user may utter a command to an associated smart assistant (or other smart audio device). In this case, the audio playback by the system may be modified (in accordance with some embodiments) to warp the spatial presentation of the Atmos mix away from the location of the talker (the talking user) and away from the nearest smart audio device, while simultaneously warping the playback of the smart audio device's (voice assistant's) corresponding response towards the location of the talker. This may provide important benefits in comparison to merely reducing volume of playback of the audio program content in response to detection of the command (or a corresponding wakeword). Similarly, a user might want to use the speakers to get cooking tips in the kitchen while the same Atmos soundtrack is playing in an adjacent open living space. In this case, in accordance with some examples, the Atmos soundtrack can be warped away from the kitchen and/or the loudness of one or more rendered signals of the Atmos soundtrack can be modified in response to the loudness of one or more rendered signals of the cooking tips sound track. Additionally, in some implementations the cooking tips playing in the kitchen can be dynamically adjusted to be heard by a person in the kitchen above any of the Atmos soundtrack that might be bleeding in from the living space.
Some embodiments involve multi-stream rendering systems configured to implement the example use cases set forth above as well as numerous others being contemplated. In a class of embodiments, an audio rendering system may be configured to play simultaneously a plurality of audio program streams over a plurality of arbitrarily placed loudspeakers, wherein at least one of said program streams is a spatial mix and the rendering of said spatial mix is dynamically modified in response to (or in connection with) the simultaneous playback of one or more additional program streams.
In some embodiments, a multi-stream renderer may be configured for implementing the scenario laid out above as well as numerous other cases where the simultaneous playback of multiple audio program streams must be managed. Some implementations of the multi-stream rendering system may be configured to perform one or more of the following operations:
In this example, the apparatus 100 includes an interface system 105 and a control system 110. The interface system 105 may, in some implementations, be configured for communication with one or more devices that are executing, or configured for executing, software applications. Such software applications may sometimes be referred to herein as “applications” or simply “apps.” The interface system 105 may, in some implementations, be configured for exchanging control information and associated data pertaining to the applications. The interface system 105 may, in some implementations, be configured for communication with one or more other devices of an audio environment. The audio environment may, in some examples, be a home audio environment. The interface system 105 may, in some implementations, be configured for exchanging control information and associated data with audio devices of the audio environment. The control information and associated data may, in some examples, pertain to one or more applications with which the apparatus 100 is configured for communication.
The interface system 105 may, in some implementations, be configured for receiving audio program streams. The audio program streams may include audio signals that are scheduled to be reproduced by at least some speakers of the environment. The audio program streams may include spatial data, such as channel data and/or spatial metadata. The interface system 105 may, in some implementations, be configured for receiving input from one or more microphones in an environment.
The interface system 105 may include one or more network interfaces and/or one or more external device interfaces (such as one or more universal serial bus (USB) interfaces). According to some implementations, the interface system 105 may include one or more wireless interfaces. The interface system 105 may include one or more devices for implementing a user interface, such as one or more microphones, one or more speakers, a display system, a touch sensor system and/or a gesture sensor system. In some examples, the interface system 105 may include one or more interfaces between the control system 110 and a memory system, such as the optional memory system 115 shown in
The control system 110 may, for example, include a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, and/or discrete hardware components.
In some implementations, the control system 110 may reside in more than one device. For example, a portion of the control system 110 may reside in a device within one of the environments depicted herein and another portion of the control system 110 may reside in a device that is outside the environment, such as a server, a mobile device (e.g., a smartphone or a tablet computer), etc. In other examples, a portion of the control system 110 may reside in a device within one of the environments depicted herein and another portion of the control system 110 may reside in one or more other devices of the environment. For example, control system functionality may be distributed across multiple smart audio devices of an environment, or may be shared by an orchestrating device (such as what may be referred to herein as a smart home hub) and one or more other devices of the environment. The interface system 105 also may, in some such examples, reside in more than one device.
In some implementations, the control system 110 may be configured for performing, at least in part, the methods disclosed herein. According to some examples, the control system 110 may be configured for implementing methods of managing playback of multiple streams of audio over multiple speakers.
Some or all of the methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on one or more non-transitory media. Such non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc. The one or more non-transitory media may, for example, reside in the optional memory system 115 shown in
In some examples, the apparatus 100 may include the optional microphone system 120 shown in
According to some implementations, the apparatus 100 may include the optional loudspeaker system 125 shown in
In some implementations, the apparatus 100 may include the optional sensor system 129 shown in
In some implementations, the apparatus 100 may include the optional display system 135 shown in
According to some such examples the apparatus 100 may be, or may include, a smart audio device. In some such implementations the apparatus 100 may be, or may include, a wakeword detector. For example, the apparatus 100 may be, or may include, a virtual assistant.
Examples of information derived from the microphone inputs and subsequently used to dynamically modify any of the N renderers include but are not limited to:
In this implementation, block 205 involves receiving, via an interface system, a first audio program stream. In this example, the first audio program stream includes first audio signals that are scheduled to be reproduced by at least some speakers of the environment. Here, the first audio program stream includes first spatial data. According to this example, the first spatial data includes channel data and/or spatial metadata. In some examples, block 205 involves a first rendering module of a control system receiving, via an interface system, the first audio program stream.
According to this example, block 210 involves rendering the first audio signals for reproduction via the speakers of the environment, to produce first rendered audio signals. Some examples of the method 200 involve receiving loudspeaker layout information, e.g., as noted above. Some examples of the method 200 involve receiving loudspeaker specification information, e.g., as noted above. In some examples, the first rendering module may produce the first rendered audio signals based, at least in part, on the loudspeaker layout information and/or the loudspeaker specification information.
In this example, block 215 involves receiving, via the interface system, a second audio program stream. In this implementation, the second audio program stream includes second audio signals that are scheduled to be reproduced by at least some speakers of the environment. According to this example, the second audio program stream includes second spatial data. The second spatial data includes channel data and/or spatial metadata. In some examples, block 215 involves a second rendering module of a control system receiving, via the interface system, the second audio program stream.
According to this implementation, block 220 involves rendering the second audio signals for reproduction via the speakers of the environment, to produce second rendered audio signals. In some examples, the second rendering module may produce the second rendered audio signals based, at least in part, on received loudspeaker layout information and/or received loudspeaker specification information.
In some instances, some or all speakers of the environment may be arbitrarily located. For example, at least some speakers of the environment may be placed in locations that do not correspond to any standard prescribed speaker layout, such as Dolby 5.1, Dolby 7.1, Hamasaki 22.2, etc. In some such examples, at least some speakers of the environment may be placed in locations that are convenient with respect to the furniture, walls, etc., of the environment (e.g., in locations where there is space to accommodate the speakers), but not in any standard prescribed speaker layout.
Accordingly, in some implementations, block 210 or block 220 may involve flexible rendering to arbitrarily located speakers. Some such implementations may involve Center of Mass Amplitude Panning (CMAP), Flexible Virtualization (FV) or a combination of both. From a high level, both of these techniques render a set of one or more audio signals, each with an associated desired perceived spatial position, for playback over a set of two or more speakers, where the relative activation of speakers of the set is a function of a model of perceived spatial position of said audio signals played back over the speakers and a proximity of the desired perceived spatial position of the audio signals to the positions of the speakers. The model ensures that the audio signal is heard by the listener near its intended spatial position, and the proximity term controls which speakers are used to achieve this spatial impression. In particular, the proximity term favors the activation of speakers that are near the desired perceived spatial position of the audio signal. For both CMAP and FV, this functional relationship may be conveniently derived from a cost function written as the sum of two terms, one for the spatial aspect and one for proximity:
$$C(\mathbf{g}) = C_{\mathrm{spatial}}(\mathbf{g},\vec{o},\{\vec{s}_i\}) + C_{\mathrm{proximity}}(\mathbf{g},\vec{o},\{\vec{s}_i\}) \tag{1}$$
Here, the set {{right arrow over (s)}i} denotes the positions of a set of M loudspeakers, {right arrow over (o)} denotes the desired perceived spatial position of the audio signal, and g denotes an M dimensional vector of speaker activations. For CMAP, each speaker activation in the vector represents a gain per speaker, while for FV each speaker activation represents a filter (in this second case g can equivalently be considered a vector of complex values at a particular frequency and a different g is computed across a plurality of frequencies to form the filter). The optimal vector of speaker activations may be found by minimizing the cost function across activations:
$$\mathbf{g}_{\mathrm{opt}} = \min_{\mathbf{g}} C(\mathbf{g},\vec{o},\{\vec{s}_i\}) \tag{2a}$$
With certain definitions of the cost function, it can be difficult to control the absolute level of the optimal activations resulting from the above minimization, though the relative level between the components of $\mathbf{g}_{\mathrm{opt}}$ is appropriate. To deal with this problem, a subsequent normalization of $\mathbf{g}_{\mathrm{opt}}$ may be performed so that the absolute level of the activations is controlled. For example, normalization of the vector to have unit length may be desirable, which is in line with commonly used constant power panning rules:

$$\bar{\mathbf{g}}_{\mathrm{opt}} = \frac{\mathbf{g}_{\mathrm{opt}}}{\|\mathbf{g}_{\mathrm{opt}}\|} \tag{2b}$$
In some examples, the exact behavior of the flexible rendering algorithm may be dictated by the particular construction of the two terms of the cost function, Cspatial and Cproximity. For CMAP, Cspatial can be derived from a model that places the perceived spatial position of an audio signal playing from a set of loudspeakers at the center of mass of those loudspeakers' positions weighted by their associated activating gains $g_i$ (elements of the vector $\mathbf{g}$):

$$\vec{o} = \frac{\sum_{i=1}^{M} g_i \vec{s}_i}{\sum_{i=1}^{M} g_i} \tag{3}$$
Equation 3 may then be manipulated into a spatial cost representing the squared error between the desired audio position and that produced by the activated loudspeakers, e.g., as follows:
$$C_{\mathrm{spatial}}(\mathbf{g},\vec{o},\{\vec{s}_i\}) = \left\|\left(\sum_{i=1}^{M} g_i\right)\vec{o} - \sum_{i=1}^{M} g_i\vec{s}_i\right\|^2 = \left\|\sum_{i=1}^{M} g_i(\vec{o}-\vec{s}_i)\right\|^2 \tag{4}$$
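The following Python sketch verifies numerically that the CMAP spatial cost of Equation 4 is a matrix quadratic of the form of Equation 8 with $\mathbf{A}[i,k] = (\vec{o}-\vec{s}_i)\cdot(\vec{o}-\vec{s}_k)$ (and, for this term alone, B = 0 and C = 0). The loudspeaker positions and gains are made-up values used only for the check.

```python
# Check that ||sum_i g_i (o - s_i)||^2 equals g^T A g with
# A[i, k] = (o - s_i) . (o - s_k). Positions and gains are illustrative.
import numpy as np

speaker_positions = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # s_i
object_position = np.array([0.5, 0.5])                                            # o
g = np.array([0.6, 0.6, 0.1, 0.1])                                                # activations

diff = object_position - speaker_positions   # rows are (o - s_i)
A = diff @ diff.T                            # Gram matrix of the difference vectors
cost_quadratic = g @ A @ g
cost_direct = np.sum((diff.T @ g) ** 2)      # || sum_i g_i (o - s_i) ||^2
assert np.isclose(cost_quadratic, cost_direct)
```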
With FV, the spatial term of the cost function is defined differently. There the goal is to produce a binaural response $\mathbf{b}$ corresponding to the audio object position $\vec{o}$ at the left and right ears of the listener. Conceptually, $\mathbf{b}$ is a 2×1 vector of filters (one filter for each ear) but is more conveniently treated as a 2×1 vector of complex values at a particular frequency. Proceeding with this representation at a particular frequency, the desired binaural response may be retrieved from a set of HRTFs indexed by object position:
$$\mathbf{b} = \mathrm{HRTF}\{\vec{o}\} \tag{5}$$
At the same time, the 2×1 binaural response e produced at the listener's ears by the loudspeakers may be modelled as a 2×M acoustic transmission matrix H multiplied with the M×1 vector g of complex speaker activation values:
$$\mathbf{e} = \mathbf{H}\mathbf{g} \tag{6}$$
The acoustic transmission matrix H may be modelled based on the set of loudspeaker positions {{right arrow over (s)}i} with respect to the listener position. Finally, the spatial component of the cost function can be defined as the squared error between the desired binaural response (Equation 5) and that produced by the loudspeakers (Equation 6):
$$C_{\mathrm{spatial}}(\mathbf{g},\vec{o},\{\vec{s}_i\}) = (\mathbf{b}-\mathbf{H}\mathbf{g})^{*}(\mathbf{b}-\mathbf{H}\mathbf{g}) \tag{7}$$
Conveniently, the spatial term of the cost function for CMAP and FV defined in Equations 4 and 7 can both be rearranged into a matrix quadratic as a function of speaker activations g:
$$C_{\mathrm{spatial}}(\mathbf{g},\vec{o},\{\vec{s}_i\}) = \mathbf{g}^{*}\mathbf{A}\mathbf{g} + \mathbf{B}\mathbf{g} + C \tag{8}$$
The second term of the cost function, Cproximity, penalizes significant activation of speakers that are far from the desired perceived spatial position of the audio signal, yielding a sparse solution in which only speakers in close proximity to that position are activated. To this end, Cproximity may be defined as a distance-weighted sum of the absolute values squared of speaker activations. This can be represented compactly in matrix form as:
$$C_{\mathrm{proximity}}(\mathbf{g},\vec{o},\{\vec{s}_i\}) = \mathbf{g}^{*}\mathbf{D}\mathbf{g} \tag{9a}$$
The distance penalty function can take on many forms, but the following is a useful parameterization:
Combining the two terms of the cost function defined in Equations 8 and 9a yields the overall cost function:
$$C(\mathbf{g}) = \mathbf{g}^{*}\mathbf{A}\mathbf{g} + \mathbf{B}\mathbf{g} + C + \mathbf{g}^{*}\mathbf{D}\mathbf{g} = \mathbf{g}^{*}(\mathbf{A}+\mathbf{D})\mathbf{g} + \mathbf{B}\mathbf{g} + C \tag{10}$$
Setting the derivative of this cost function with respect to g equal to zero and solving for g yields the optimal speaker activation solution for this example:
$$\mathbf{g}_{\mathrm{opt}} = \tfrac{1}{2}(\mathbf{A}+\mathbf{D})^{-1}\mathbf{B} \tag{11}$$
In general, the optimal solution in Equation 11 may yield speaker activations that are negative in value. For the CMAP construction of the flexible renderer, such negative activations may not be desirable, and thus in some examples Equation (11) may be minimized subject to all activations remaining positive.
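As a concrete illustration of the FV branch of this derivation, the Python sketch below solves, at a single frequency, the regularized least-squares problem obtained by combining Equations 7 and 9a; up to how A, B and C are bookkept in Equation 8, its closed-form solution plays the role of Equation 11. The acoustic transmission matrix, binaural target and distance penalties are synthetic placeholders, and the non-negativity handling mentioned for CMAP is only noted in a comment.

```python
# Minimal single-frequency sketch: minimize (b - H g)^*(b - H g) + g^* D g,
# whose closed-form solution is g = (H^*H + D)^{-1} H^* b.
import numpy as np

rng = np.random.default_rng(0)
num_speakers = 5
H = rng.standard_normal((2, num_speakers)) + 1j * rng.standard_normal((2, num_speakers))
b = rng.standard_normal(2) + 1j * rng.standard_normal(2)      # desired binaural response (Eq. 5)
distances = rng.uniform(0.5, 3.0, num_speakers)               # object-to-speaker distances
D = np.diag(distances**2)                                     # a simple distance-weighted penalty (Eq. 9a)

g_opt = np.linalg.solve(H.conj().T @ H + D, H.conj().T @ b)   # complex activations at this frequency
g_opt /= np.linalg.norm(g_opt)                                # optional unit-length normalization (Eq. 2b)
# For the CMAP (real-gain) construction, negative activations may additionally
# be disallowed, e.g., via constrained optimization rather than this closed form.
```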
A class of embodiments involves methods for rendering audio for playback by at least one (e.g., all or some) of a plurality of coordinated (orchestrated) smart audio devices. For example, a set of smart audio devices present (in a system) in a user's home may be orchestrated to handle a variety of simultaneous use cases, including flexible rendering (in accordance with an embodiment) of audio for playback by all or some (i.e., by speaker(s) of all or some) of the smart audio devices. Many interactions with the system are contemplated which require dynamic modifications to the rendering. Such modifications may be, but are not necessarily, focused on spatial fidelity.
Some embodiments are methods for rendering of audio for playback by at least one (e.g., all or some) of the smart audio devices of a set of smart audio devices (or for playback by at least one (e.g., all or some) of the speakers of another set of speakers). The rendering may include minimization of a cost function, where the cost function includes at least one dynamic speaker activation term. Examples of such a dynamic speaker activation term may include (but are not limited to):
The dynamic speaker activation term(s) may enable at least one of a variety of behaviors, including warping the spatial presentation of the audio away from a particular smart audio device so that its microphone can better hear a talker or so that a secondary audio stream may be better heard from speaker(s) of the smart audio device.
Some embodiments implement rendering for playback by speaker(s) of a plurality of smart audio devices that are coordinated (orchestrated). Other embodiments implement rendering for playback by speaker(s) of another set of speakers.
Pairing flexible rendering methods (implemented in accordance with some embodiments) with a set of wireless smart speakers (or other smart audio devices) can yield an extremely capable and easy-to-use spatial audio rendering system. In contemplating interactions with such a system it becomes evident that dynamic modifications to the spatial rendering may be desirable in order to optimize for other objectives that may arise during the system's use. To achieve this goal, a class of embodiments augment existing flexible rendering algorithms (in which speaker activation is a function of the previously disclosed spatial and proximity terms), with one or more additional dynamically configurable functions dependent on one or more properties of the audio signals being rendered, the set of speakers, and/or other external inputs. In accordance with some embodiments, the cost function of the existing flexible rendering given in Equation 1 may be augmented with these one or more additional dependencies, e.g., according to the following equation:
$$C(\mathbf{g}) = C_{\mathrm{spatial}}(\mathbf{g},\vec{o},\{\vec{s}_i\}) + C_{\mathrm{proximity}}(\mathbf{g},\vec{o},\{\vec{s}_i\}) + \sum_{j} C_j(\mathbf{g},\{\{\hat{o}\},\{\hat{s}_i\},\{\hat{e}\}\}_j) \tag{12}$$

In Equation 12, the terms $C_j(\mathbf{g},\{\{\hat{o}\},\{\hat{s}_i\},\{\hat{e}\}\}_j)$ represent additional cost terms, with {ô} representing a set of one or more properties of the audio signals (e.g., of an object-based audio program) being rendered, {ŝi} representing a set of one or more properties of the speakers over which the audio is being rendered, and {ê} representing one or more additional external inputs. Each term $C_j$ returns a cost as a function of activations $\mathbf{g}$ in relation to a combination of one or more properties of the audio signals, speakers, and/or external inputs, represented generically by the set $\{\{\hat{o}\},\{\hat{s}_i\},\{\hat{e}\}\}_j$. It should be appreciated that in this example the set $\{\{\hat{o}\},\{\hat{s}_i\},\{\hat{e}\}\}_j$ contains at a minimum only one element from any of {ô}, {ŝi}, or {ê}.
Examples of {ô} include but are not limited to:
Examples of {ŝi} include but are not limited to:
Examples of {ê} include but are not limited to:
With the new cost function defined in Equation 12, an optimal set of activations may be found through minimization with respect to g and possible post-normalization as previously specified in Equations 2a and 2b.
In this implementation, block 285 involves receiving, by a control system and via an interface system, audio data. In this example, the audio data includes one or more audio signals and associated spatial data. According to this implementation, the spatial data indicates an intended perceived spatial position corresponding to an audio signal. In some instances, the intended perceived spatial position may be explicit, e.g., as indicated by positional metadata such as Dolby Atmos positional metadata. In other instances, the intended perceived spatial position may be implicit, e.g., the intended perceived spatial position may be an assumed location associated with a channel according to Dolby 5.1, Dolby 7.1, or another channel-based audio format. In some examples, block 285 involves a rendering module of a control system receiving, via an interface system, the audio data.
According to this example, block 290 involves rendering, by the control system, the audio data for reproduction via a set of loudspeakers of an environment, to produce rendered audio signals. In this example, rendering each of the one or more audio signals included in the audio data involves determining relative activation of a set of loudspeakers in an environment by optimizing a cost function. According to this example, the cost is a function of a model of perceived spatial position of the audio signal when played back over the set of loudspeakers in the environment. In this example, the cost is also a function of a measure of proximity of the intended perceived spatial position of the audio signal to a position of each loudspeaker of the set of loudspeakers. In this implementation, the cost is also a function of one or more additional dynamically configurable functions. In this example, the dynamically configurable functions are based on one or more of the following: proximity of loudspeakers to one or more listeners; proximity of loudspeakers to an attracting force position, wherein an attracting force is a factor that favors relatively higher loudspeaker activation in closer proximity to the attracting force position; proximity of loudspeakers to a repelling force position, wherein a repelling force is a factor that favors relatively lower loudspeaker activation in closer proximity to the repelling force position; capabilities of each loudspeaker relative to other loudspeakers in the environment; synchronization of the loudspeakers with respect to other loudspeakers; wakeword performance; or echo canceller performance.
In this example, block 295 involves providing, via the interface system, the rendered audio signals to at least some loudspeakers of the set of loudspeakers of the environment.
According to some examples, the model of perceived spatial position may produce a binaural response corresponding to an audio object position at the left and right ears of a listener. Alternatively, or additionally, the model of perceived spatial position may place the perceived spatial position of an audio signal playing from a set of loudspeakers at a center of mass of the set of loudspeakers' positions weighted by the loudspeakers' associated activating gains.
In some examples, the one or more additional dynamically configurable functions may be based, at least in part, on a level of the one or more audio signals. In some instances, the one or more additional dynamically configurable functions may be based, at least in part, on a spectrum of the one or more audio signals.
Some examples of the method 280 involve receiving loudspeaker layout information. In some examples, the one or more additional dynamically configurable functions may be based, at least in part, on a location of each of the loudspeakers in the environment.
Some examples of the method 280 involve receiving loudspeaker specification information. In some examples, the one or more additional dynamically configurable functions may be based, at least in part, on the capabilities of each loudspeaker, which may include one or more of frequency response, playback level limits or parameters of one or more loudspeaker dynamics processing algorithms.
According to some examples, the one or more additional dynamically configurable functions may be based, at least in part, on a measurement or estimate of acoustic transmission from each loudspeaker to the other loudspeakers. Alternatively, or additionally, the one or more additional dynamically configurable functions may be based, at least in part, on a listener or speaker location of one or more people in the environment. Alternatively, or additionally, the one or more additional dynamically configurable functions may be based, at least in part, on a measurement or estimate of acoustic transmission from each loudspeaker to the listener or speaker location. An estimate of acoustic transmission may, for example be based at least in part on walls, furniture or other objects that may reside between each loudspeaker and the listener or speaker location.
Alternatively, or additionally, the one or more additional dynamically configurable functions may be based, at least in part, on an object location of one or more non-loudspeaker objects or landmarks in the environment. In some such implementations, the one or more additional dynamically configurable functions may be based, at least in part, on a measurement or estimate of acoustic transmission from each loudspeaker to the object location or landmark location.
Numerous new and useful behaviors may be achieved by employing one or more appropriately defined additional cost terms to implement flexible rendering. All example behaviors listed below are cast in terms of penalizing certain loudspeakers under certain conditions deemed undesirable. The end result is that these loudspeakers are activated less in the spatial rendering of the set of audio signals. In many of these cases, one might contemplate simply turning down the undesirable loudspeakers independently of any modification to the spatial rendering, but such a strategy may significantly degrade the overall balance of the audio content. Certain components of the mix may become completely inaudible, for example. With the disclosed embodiments, on the other hand, integration of these penalizations into the core optimization of the rendering allows the rendering to adapt and perform the best possible spatial rendering with the remaining less-penalized speakers.
This is a much more elegant, adaptable, and effective solution.
Example use cases include, but are not limited to:
In some such examples, loudspeakers with a lower degree of synchronization may be penalized more and therefore excluded from rendering. Additionally, tight synchronization may not be required for certain types of audio signals, for example components of the audio mix intended to be diffuse or non-directional. In some implementations, components may be tagged as such with metadata and a synchronization cost term may be modified such that the penalization is reduced.
We next describe examples of embodiments.
Similar to the proximity cost defined in Equations 9a and 9b, it is also convenient to express each of the new cost function terms $C_j(\mathbf{g},\{\{\hat{o}\},\{\hat{s}_i\},\{\hat{e}\}\}_j)$ as a weighted sum of the absolute values squared of speaker activations:

$$C_j(\mathbf{g},\{\{\hat{o}\},\{\hat{s}_i\},\{\hat{e}\}\}_j) = \mathbf{g}^{*}\,\mathbf{W}_j(\{\{\hat{o}\},\{\hat{s}_i\},\{\hat{e}\}\}_j)\,\mathbf{g}, \tag{13a}$$

where $\mathbf{W}_j$ may be written as a diagonal matrix whose diagonal elements are the per-speaker weights $w_{ij}$:

$$\mathbf{W}_j = \mathrm{diag}(w_{1j}, \ldots, w_{Mj}) \tag{13b}$$
Combining Equations 13a and b with the matrix quadratic version of the CMAP and FV cost functions given in Equation 10 yields a potentially beneficial implementation of the general expanded cost function (of some embodiments) given in Equation 12:
$$C(\mathbf{g}) = \mathbf{g}^{*}\mathbf{A}\mathbf{g} + \mathbf{B}\mathbf{g} + C + \mathbf{g}^{*}\mathbf{D}\mathbf{g} + \sum_j \mathbf{g}^{*}\mathbf{W}_j\mathbf{g} = \mathbf{g}^{*}\Big(\mathbf{A}+\mathbf{D}+\sum_j\mathbf{W}_j\Big)\mathbf{g} + \mathbf{B}\mathbf{g} + C \tag{14}$$
With this definition of the new cost function terms, the overall cost function remains a matrix quadratic, and the optimal set of activations $\mathbf{g}_{\mathrm{opt}}$ can be found through differentiation of Equation 14 to yield

$$\mathbf{g}_{\mathrm{opt}} = \tfrac{1}{2}\Big(\mathbf{A}+\mathbf{D}+\sum_j \mathbf{W}_j\Big)^{-1}\mathbf{B} \tag{15}$$
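Continuing the single-frequency sketch given earlier, the following Python fragment adds diagonal weight matrices $\mathbf{W}_j$ to the regularized solve, mirroring Equations 13a through 15. The particular penalty values standing in for listener proximity and speaker capability are invented for illustration only.

```python
# Minimal sketch of the augmented solve of Equation 15: larger diagonal
# entries in the summed weight matrix push the corresponding activations
# toward zero, steering rendering toward the less-penalized loudspeakers.
import numpy as np

rng = np.random.default_rng(1)
num_speakers = 5
H = rng.standard_normal((2, num_speakers)) + 1j * rng.standard_normal((2, num_speakers))
b = rng.standard_normal(2) + 1j * rng.standard_normal(2)
D = np.diag(rng.uniform(0.25, 9.0, num_speakers))      # proximity penalty (Eq. 9a)

W_listener = np.diag([0.0, 0.2, 0.0, 1.5, 0.4])        # e.g., penalize speakers far from a listener
W_capability = np.diag([0.0, 0.0, 2.0, 0.0, 0.0])      # e.g., penalize a capability-limited speaker
W_sum = W_listener + W_capability                      # corresponds to the sum over j of W_j

g_opt = np.linalg.solve(H.conj().T @ H + D + W_sum, H.conj().T @ b)
```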
It is useful to consider each of the weight terms $w_{ij}$ as a function of a given continuous penalty value $p_{ij} = p_{ij}(\{\{\hat{o}\},\{\hat{s}_i\},\{\hat{e}\}\}_j)$ for each one of the loudspeakers. In one example embodiment, this penalty value is the distance from the object (to be rendered) to the loudspeaker considered. In another example embodiment, this penalty value represents the inability of the given loudspeaker to reproduce some frequencies. Based on this penalty value, the weight terms $w_{ij}$ can be parametrized as:
In case all loudspeakers are penalized, it is often convenient to subtract the minimum penalty from all weight terms in post-processing so that at least one of the speakers is not penalized:
$$w_{ij} \rightarrow w_{ij}' = w_{ij} - \min_i(w_{ij}) \tag{18}$$
As stated above, there are many possible use cases that can be realized using the new cost function terms described herein (and similar new cost function terms employed in accordance with other embodiments). Next, we describe more concrete details with three examples: moving audio towards a listener or talker, moving audio away from a listener or talker, and moving audio away from a landmark.
In the first example, what will be referred to herein as an “attracting force” is used to pull audio towards a position, which in some examples may be the position of a listener or a talker, a landmark position, a furniture position, etc. The position may be referred to herein as an “attracting force position” or an “attractor location.” As used herein, an “attracting force” is a factor that favors relatively higher loudspeaker activation in closer proximity to an attracting force position. According to this example, the weight $w_{ij}$ takes the form of Equation 17 with the continuous penalty value $p_{ij}$ given by the distance of the ith speaker from a fixed attractor location $\vec{l}_j$ and the threshold value $\tau_j$ given by the maximum of these distances across all speakers:
$$p_{ij} = \|\vec{l}_j - \vec{s}_i\|, \quad \text{and} \tag{19a}$$

$$\tau_j = \max_i \|\vec{l}_j - \vec{s}_i\| \tag{19b}$$
To illustrate the use case of “pulling” audio towards a listener or talker, we specifically set $\alpha_j=20$, $\beta_j=3$, and $\vec{l}_j$ to a vector corresponding to a listener/talker position of 180 degrees. These values of $\alpha_j$, $\beta_j$, and $\vec{l}_j$ are merely examples. In other implementations, $\alpha_j$ may be in the range of 1 to 100 and $\beta_j$ may be in the range of 1 to 25.
In the second and third examples, a “repelling force” is used to “push” audio away from a position, which may be a listener position, a talker position or another position, such as a landmark position, a furniture position, etc. In some examples, a repelling force may be used to push audio away from an area or zone of a listening environment, such as an office area, a reading area, a bed or bedroom area (e.g., a baby's bed or bedroom), etc. According to some such examples, a particular position may be used as representative of a zone or area. For example, a position that represents a baby's bed may be an estimated position of the baby's head, an estimated sound source location corresponding to the baby, etc. The position may be referred to herein as a “repelling force position” or a “repelling location.” As used herein, a “repelling force” is a factor that favors relatively lower loudspeaker activation in closer proximity to the repelling force position. According to this example, we define $p_{ij}$ and $\tau_j$ with respect to a fixed repelling location $\vec{l}_j$ similarly to the attracting force in Equations 19a and 19b:
$$p_{ij} = \max_i \|\vec{l}_j - \vec{s}_i\| - \|\vec{l}_j - \vec{s}_i\|, \quad \text{and} \tag{19c}$$

$$\tau_j = \max_i \|\vec{l}_j - \vec{s}_i\| \tag{19d}$$
To illustrate the use case of pushing audio away from a listener or talker, we specifically set $\alpha_j=5$, $\beta_j=2$, and $\vec{l}_j$ to a vector corresponding to a listener/talker position of 180 degrees. These values of $\alpha_j$, $\beta_j$, and $\vec{l}_j$ are merely examples. As noted above, in some examples $\alpha_j$ may be in the range of 1 to 100 and $\beta_j$ may be in the range of 1 to 25.
The third example use case is “pushing” audio away from a landmark which is acoustically sensitive, such as a door to a sleeping baby's room. Similarly to the last example, we set {right arrow over (l)}j to a vector corresponding to a door position of 180 degrees (bottom, center of the plot). To achieve a stronger repelling force and skew the soundfield entirely into the front part of the primary listening space, we set αj=20, βj=5.
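A corresponding sketch for the repelling-force penalty of Equations 19c and 19d, under the same illustrative assumptions about speaker coordinates, is:

```python
import numpy as np

def repelling_force_penalty(speakers: np.ndarray, repeller: np.ndarray):
    """Penalties p_ij and threshold tau_j for one repelling location l_j
    (Equations 19c and 19d): speakers closest to the repelling location
    receive the largest penalty."""
    d = np.linalg.norm(speakers - repeller, axis=1)    # ||l_j - s_i||
    tau = d.max()                                      # tau_j, Equation 19d
    p = tau - d                                        # p_ij, Equation 19c
    return p, tau
```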
Returning now to
According to this example, block 230 involves modifying a rendering process for the second audio signals based at least in part on at least one of the first audio signals, the first rendered audio signals or characteristics thereof, to produce modified second rendered audio signals. In some examples, block 230 may be performed by the second rendering module.
In some implementations, modifying the rendering process for the first audio signals may involve warping the rendering of first audio signals away from a rendering location of the second rendered audio signals and/or modifying the loudness of one or more of the first rendered audio signals in response to a loudness of one or more of the second audio signals or the second rendered audio signals. Alternatively, or additionally, modifying the rendering process for the second audio signals may involve warping the rendering of second audio signals away from a rendering location of the first rendered audio signals and/or modifying the loudness of one or more of the second rendered audio signals in response to a loudness of one or more of the first audio signals or the first rendered audio signals. Some examples are provided below with reference to
However, other types of rendering process modifications are within the scope of the present disclosure. For example, in some instances modifying the rendering process for the first audio signals or the second audio signals may involve performing spectral modification, audibility-based modification or dynamic range modification. These modifications may or may not be related to a loudness-based rendering modification, depending on the particular example. For example, in the aforementioned case of a primary spatial stream being rendered in an open plan living area and a secondary stream comprised of cooking tips being rendered in an adjacent kitchen, it may be desirable to ensure the cooking tips remain audible in the kitchen. This can be accomplished by estimating what the loudness would be for the rendered cooking tips stream in the kitchen without the interfering first signal, then estimating the loudness in the presence of the first signal in the kitchen, and finally dynamically modifying the loudness and dynamic range of both streams across a plurality of frequencies, to ensure audibility of the second signal, in the kitchen.
In the example shown in
According to this example, block 240 involves providing the mixed audio signals to at least some speakers of the environment. Some examples of the method 200 involve playback of the mixed audio signals by the speakers.
As shown in
As described above with reference to
As noted above with reference to
In some such implementations, the control system may be configured for determining whether the first microphone signals correspond to environmental noise. Some such implementations may involve modifying the rendering process for at least one of the first audio signals or the second audio signals based, at least in part, on whether the first microphone signals correspond to environmental noise. For example, if the control system determines that the first microphone signals correspond to environmental noise, modifying the rendering process for the first audio signals or the second audio signals may involve increasing the level of the rendered audio signals so that the perceived loudness of the signals in the presence of the noise at an intended listening position is substantially equal to the perceived loudness of the signals in the absence of the noise.
In some examples, the control system may be configured for determining whether the first microphone signals correspond to a human voice. Some such implementations may involve modifying the rendering process for at least one of the first audio signals or the second audio signals based, at least in part, on whether the first microphone signals correspond to a human voice. For example, if the control system determines that the first microphone signals correspond to a human voice, such as a wakeword, modifying the rendering process for the first audio signals or the second audio signals may involve decreasing the loudness of the rendered audio signals reproduced by speakers near the first sound source position, as compared to the loudness of the rendered audio signals reproduced by speakers farther from the first sound source position. Modifying the rendering process for the first audio signals or the second audio signals may, alternatively or additionally, involve warping the intended positions of the associated program stream's constituent signals away from the first sound source position and/or penalizing the use of speakers near the first sound source position in comparison to speakers farther from the first sound source position.
In some implementations, if the control system determines that the first microphone signals correspond to a human voice, the control system may be configured for reproducing the first microphone signals in one or more speakers near a location of the environment that is different from the first sound source position. In some such examples, the control system may be configured for determining whether the first microphone signals correspond to a child's cry. According to some such implementations, the control system may be configured for reproducing the first microphone signals in one or more speakers near a location of the environment that corresponds to an estimated location of a caregiver, such as a parent, a relative, a guardian, a child care service provider, a teacher, a nurse, etc. In some examples, the process of estimating the caregiver's location may be triggered by a voice command, such as “<wakeword>, don't wake the baby”. The control system would be able to estimate the location of the talker (the caregiver) according to the location of the nearest smart audio device that is implementing a virtual assistant, by triangulation based on direction of arrival (DOA) information provided by three or more local microphones, etc. According to some implementations, the control system would have a priori knowledge of the baby's room location (and/or of listening devices therein) and would then be able to perform the appropriate processing.
According to some such examples, the control system may be configured for determining whether the first microphone signals correspond to a command. If the control system determines that the first microphone signals correspond to a command, in some instances the control system may be configured for determining a reply to the command and controlling at least one speaker near the first sound source location to reproduce the reply. In some such examples, the control system may be configured for reverting to an unmodified rendering process for the first audio signals or the second audio signals after controlling at least one speaker near the first sound source location to reproduce the reply.
In some implementations, the control system may be configured for executing the command. For example, the control system may be, or may include, a virtual assistant that is configured to control an audio device, a television, a home appliance, etc., according to the command.
With this definition of the minimal and more capable multi-stream rendering systems shown in
We first examine the previously-discussed example involving the simultaneous playback of a spatial movie sound track in a living room and cooking tips in a connected kitchen. The spatial movie sound track is an example of the “first audio program stream” referenced above and the cooking tips audio is an example of the “second audio program stream” referenced above.
In
Many spatial audio mixes include a plurality of constituent audio signals designed to be played back at a particular location in the listening space. For example, Dolby 5.1 and 7.1 surround sound mixes consist of 6 and 8 signals, respectively, meant to be played back on speakers in prescribed canonical locations around the listener. Object-based audio formats, e.g., Dolby Atmos, consist of constituent audio signals with associated metadata describing the possibly time-varying 3D position in the listening space where the audio is meant to be rendered. With the assumption that the renderer of the spatial movie soundtrack is capable of rendering an individual audio signal at any location with respect to the arbitrary set of loudspeakers, the dynamic shift to the rendering depicted in
A second method for achieving the dynamic shift to the spatial rendering may be realized by using a flexible rendering system. In some such implementations, the flexible rendering system may be CMAP, FV or a hybrid of both, as described above. Some such flexible rendering systems attempt to reproduce a spatial mix with all its constituent signals perceived as coming from their intended locations. While doing so for each signal of the mix, in some examples, preference is given to the activation of loudspeakers in close proximity to the desired position of that signal. In some implementations, additional terms may be dynamically added to the optimization of the rendering, which penalize the use of certain loudspeakers based on other criteria. For the example at hand, what may be referred to as a “repelling force” may be dynamically placed at the location of the kitchen to highly penalize the use of loudspeakers near this location and effectively push the rendering of the spatial movie soundtrack away. As used herein, the term “repelling force” may refer to a factor that corresponds with relatively lower speaker activation in a particular location or area of a listening environment. In other words, the phrase “repelling force” may refer to a factor that favors the activation of speakers that are relatively farther from a particular position or area that corresponds with the “repelling force.” However, according to some such implementations the renderer may still attempt to reproduce the intended spatial balance of the mix with the remaining, less penalized speakers. As such, this technique may be considered a superior method for achieving the dynamic shift of the rendering in comparison to that of simply warping the intended positions of the mix's constituent signals.
The described scenario of shifting the rendering of the spatial movie soundtrack away from the cooking tips in the kitchen may be achieved with the minimal version of the multi-stream renderer depicted in
As can be seen, this example use case of the disclosed multi-stream renderer employs numerous, interconnected modifications to the two program streams in order to optimize their simultaneous playback. In summary, these modifications to the streams can be listed as:
A second example use case of the disclosed multi-stream renderer involves the simultaneous playback of a spatial program stream, such as music, with the response of a smart voice assistant to some inquiry by the user. With existing smart speakers, where playback has generally been constrained to monophonic or stereo playback over a single device, an interaction with the voice assistant typically consists of the following stages:
In addition to optimizing the simultaneous playback of the spatial music mix and voice assistant response, the shifting of the spatial music mix may also improve the ability of the set of speakers to understand the listener in step 5. This is because music has been shifted out of the speakers near the listener, thereby improving the voice to other ratio of the associated microphones.
Similar to what was described for the previous scenario with the spatial movie mix and cooking tips, the current scenario may be further optimized beyond what is afforded by shifting the rendering of the spatial mix as a function of the voice assistant response. On its own, shifting the spatial mix may not be enough to make the voice assistant response completely intelligible to the user. A simple solution is to also turn the spatial mix down by a fixed amount, though less than is required with the current state of affairs. Alternatively, the loudness of the voice assistant response program stream may be dynamically boosted as a function of the loudness of the spatial music mix program stream in order to maintain the audibility of the response. As an extension, the loudness of the spatial music mix may also be dynamically cut if this boosting process on the response stream grows too large.
We next describe examples of how some of the noted embodiments may be implemented.
In
First, if the rendering is done in this hierarchical arrangement and each of the single-stream renderer instances is configured to operate in the frequency/transform domain (e.g. QMF), then the mixing of the streams can also happen in the frequency/transform domain and the inverse transform only needs to be run once, for M channels. This is a significant efficiency improvement over running N×M inverse transforms and mixing in the time domain.
Another benefit of a hierarchical approach in the frequency domain is in the calculation of the perceived loudness of each audio stream and the use of this information in dynamically modifying one or more of the other audio streams. To illustrate this embodiment, we consider the previously mentioned example that is described above with reference to
After each audio stream s has been individually rendered and each microphone i captured and transformed to the frequency domain, a source excitation signal Es or Ei can be calculated, which serves as a time-varying estimate of the perceived loudness of each audio stream s or microphone signal i. In this example, these source excitation signals are computed from the rendered streams or captured microphone signals via transform coefficients Xs for audio streams or Xi for microphone signals, for b frequency bands across time t for c loudspeakers, and smoothed with frequency-dependent time constants λb:
$E_s(b,t,c) = \lambda_b E_s(b,t-1,c) + (1-\lambda_b)\,\lvert X_s(b,t,c)\rvert^2$ (20a)
$E_i(b,t,c) = \lambda_b E_i(b,t-1,c) + (1-\lambda_b)\,\lvert X_i(b,t,c)\rvert^2$ (20b)
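For illustration, the one-pole smoothing of Equations 20a and 20b may be written as a single update step; the array names and shapes below are assumptions:

```python
import numpy as np

def update_excitation(E_prev: np.ndarray, X: np.ndarray,
                      lam: np.ndarray) -> np.ndarray:
    """One step of Equation 20a or 20b.

    E_prev: excitation at time t-1, shape (bands, channels).
    X:      complex transform coefficients at time t, shape (bands, channels).
    lam:    frequency-dependent smoothing constants lambda_b, shape (bands, 1).
    """
    return lam * E_prev + (1.0 - lam) * np.abs(X) ** 2
```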
In this example, the raw source excitations are an estimate of the perceived loudness of each stream at a specific position. For the spatial stream, that position is in the middle of the cloud 335b in
The raw source excitations must be translated to the listening position of the audio stream(s) that will be modified by them, to estimate how perceptible they will be as noise at the listening position of each target audio stream. For example, if audio stream 1 is the movie soundtrack and audio stream 2 is the cooking tips, Ê12 would be the translated (noise) excitation. That translation is calculated by applying an audibility scale factor Axs from a source audio stream s to a target audio stream x or Axi from microphone i to a target audio stream x, as a function of each loudspeaker c for each frequency band b. Values for Axs and Axi may be determined by using distance ratios or estimates of actual audibility, which may vary over time.
$\hat{E}_{xs}(b,t,c) = A_{xs}(b,t,c)\,E_s(b,t,c)$ (21a)
$\hat{E}_{xi}(b,t,c) = A_{xi}(b,t,c)\,E_i(b,t,c)$ (21b)
In Equation 21a, Êxs represents raw noise excitations computed for source audio streams, without reference to microphone input. In Equation 21b, Êxi represents raw noise excitations computed with reference to microphone input. According to this example, the raw noise excitations Êxs or Êxi are then summed across streams 1 to N, microphones 1 to K, and output channels 1 to M to get a total noise estimate Êx for a target stream x:
$\hat{E}_x(b,t) = \sum_{c=1}^{M}\Bigl(\sum_{s=1}^{N}\hat{E}_{xs}(b,t,c) + \sum_{i=1}^{K}\hat{E}_{xi}(b,t,c)\Bigr)$ (22)
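A sketch of Equations 21a, 21b and 22 for a single target stream x follows; the stacked array layout and the availability of precomputed audibility scale factors are assumptions:

```python
import numpy as np

def total_noise_excitation(E_streams, A_streams, E_mics, A_mics):
    """Equations 21a, 21b and 22 for one target stream x.

    E_streams: (N, bands, channels) source excitations E_s.
    A_streams: (N, bands, channels) audibility scale factors A_xs.
    E_mics:    (K, bands, channels) microphone excitations E_i.
    A_mics:    (K, bands, channels) audibility scale factors A_xi.
    Returns the total noise estimate E_hat_x with shape (bands,).
    """
    E_hat_xs = A_streams * E_streams          # Equation 21a
    E_hat_xi = A_mics * E_mics                # Equation 21b
    # Sum over streams/microphones, then over output channels (Equation 22).
    return (E_hat_xs.sum(axis=0) + E_hat_xi.sum(axis=0)).sum(axis=-1)
```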
According to some alternative implementations, a total noise estimate may be obtained without reference to microphone input by omitting the term $\sum_{i=1}^{K}\hat{E}_{xi}(b,t,c)$ in Equation 22.
In this example, the total raw noise estimate is smoothed to avoid perceptible artifacts that could be caused by modifying the target streams too rapidly. According to this implementation, the smoothing is based on the concept of using a fast attack and a slow release, similar to an audio compressor. The smoothed noise estimate Ēx for a target stream x is calculated in this example as:
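The smoothing equation itself is not reproduced in this excerpt. For illustration only, the sketch below shows a conventional fast-attack/slow-release one-pole smoother of the kind described; the coefficient values and function names are assumptions:

```python
import numpy as np

def smooth_noise(E_bar_prev: np.ndarray, E_hat: np.ndarray,
                 attack: float = 0.1, release: float = 0.99) -> np.ndarray:
    """Fast-attack / slow-release smoothing of the raw noise estimate.

    The smoothed estimate rises quickly when the raw estimate exceeds it
    and decays slowly otherwise. Coefficients are illustrative only.
    """
    coeff = np.where(E_hat > E_bar_prev, attack, release)
    return coeff * E_bar_prev + (1.0 - coeff) * E_hat
```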
Once we have a complete noise estimate Ēx(b,t) for stream x, we can reuse the previously calculated source excitation signal Ex(b,t,c) to determine a set of time-varying gains Gx(b,t,c) to apply to the target audio stream x to ensure that it remains audible over the noise. These gains can be calculated using any of a variety of techniques.
In one embodiment, a loudness function L{⋅} can be applied to the excitations to model various non-linearities in a human's perception of loudness and to calculate specific loudness signals which describe the time-varying distribution of the perceived loudness across frequency. Applying L{⋅} to the excitations for the noise estimate and rendered audio stream x gives an estimate for the specific loudness of each signal:
$L_{xn}(b,t) = L\{\bar{E}_x(b,t)\}$ (25a)
$L_x(b,t,c) = L\{E_x(b,t,c)\}$ (25b)
In Equation 25a, Lxn represents an estimate for the specific loudness of the noise and in Equation 25b, Lx represents an estimate for the specific loudness of the rendered audio stream x. These specific loudness signals represent the perceived loudness when the signals are heard in isolation. However, if the two signals are mixed, masking may occur. For example, if the noise signal is much louder than the stream x signal, it will mask the stream x signal, thereby decreasing the perceived loudness of that signal relative to the perceived loudness of that signal heard in isolation. This phenomenon may be modeled with a partial loudness function PL{⋅,⋅} which takes two inputs. The first input is the excitation of the signal of interest, and the second input is the excitation of the competing (noise) signal. The function returns a partial specific loudness signal PL representing the perceived loudness of the signal of interest in the presence of the competing signal. The partial specific loudness of the stream x signal in the presence of the noise signal may then be computed directly from the excitation signals, across frequency bands b, time t, and loudspeaker c:
$PL_x(b,t,c) = PL\{E_x(b,t,c),\ \bar{E}_x(b,t)\}$ (26)
To maintain audibility of the audio stream x signal in the presence of the noise, we can calculate gains Gx(b,t,c) to apply to audio stream x to boost the loudness until it is audible above the noise, as shown in Equations 27a and 27b. Alternatively, if the noise is from another audio stream s, we can calculate two sets of gains. In one such example, the first, Gx(b,t,c), is to be applied to audio stream x to boost its loudness and the second, Gs(b,t), is to be applied to competing audio stream s to reduce its loudness, such that the combination of the gains ensures audibility of audio stream x, as shown in Equations 28a and 28b. In both sets of equations:
$\widehat{PL}_x(b,t,c) = PL\{G_x(b,t,c)^2\,E_x(b,t,c),\ \bar{E}_x(b,t)\}$ (27a)
such that
$L_x(b,t,c) - \widehat{PL}_x(b,t,c) = 0$ (27b)
$\widehat{PL}_x(b,t,c) = PL\{G_x(b,t,c)^2\,E_x(b,t,c),\ G_s(b,t)^2\,\bar{E}_x(b,t)\}$ (28a)
again, such that
$L_x(b,t,c) - \widehat{PL}_x(b,t,c) = 0$ (28b)
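As an illustration of solving the constraint of Equations 27a and 27b, the sketch below finds Gx for a single band and channel by bisection. The compressive power-law stand-ins for L{⋅} and PL{⋅,⋅} are toy models chosen only for this sketch, not the loudness model of this disclosure:

```python
def specific_loudness(E: float, alpha: float = 0.3) -> float:
    """Toy stand-in for L{E}: a compressive power law."""
    return E ** alpha

def partial_loudness(E: float, E_noise: float, alpha: float = 0.3) -> float:
    """Toy stand-in for PL{E, E_noise}: loudness of signal plus noise
    minus loudness of the noise alone."""
    return (E + E_noise) ** alpha - E_noise ** alpha

def solve_gain(E_x: float, E_noise: float, g_max: float = 100.0,
               iters: int = 60) -> float:
    """Bisection for G_x such that PL{G_x^2 E_x, noise} equals L{E_x}
    (Equations 27a and 27b), i.e. the boosted stream regains the loudness
    it would have had in isolation."""
    target = specific_loudness(E_x)
    lo, hi = 1.0, g_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if partial_loudness(mid ** 2 * E_x, E_noise) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```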
In some examples, the raw gains may be further smoothed across frequency using a smoothing function S{⋅} before being applied to an audio stream, again to avoid audible artifacts.
$\bar{G}_x(b,t,c) = S\{G_x(b,t,c)\}$ (29a)
$\bar{G}_s(b,t) = S\{G_s(b,t)\}$ (29b)
In one embodiment these gains may be applied directly to all rendered output channels of an audio stream. In another embodiment they may instead be applied to an audio stream's objects before they are rendered, e.g., using the methods described in US Patent Application Publication No. 2019/0037333A1, which is hereby incorporated by reference. These methods involve calculating, based on spatial metadata of the audio objects, a panning coefficient for each of the audio objects in relation to each of a plurality of predefined channel coverage zones. The audio signal may be converted into submixes in relation to the predefined channel coverage zones based on the calculated panning coefficients and the audio objects. Each of the submixes may indicate a sum of components of the plurality of the audio objects in relation to one of the predefined channel coverage zones. A submix gain may be generated by applying audio processing to each of the submixes and may control an object gain applied to each of the audio objects. The object gain may be a function of the panning coefficients for each of the audio objects and the submix gains in relation to each of the predefined channel coverage zones. Applying the gains to the objects has some advantages, especially when combined with other processing of the streams.
In this implementation, time-domain microphone signals from the microphone system 120c are also provided to a quadrature mirror filterbank, so that the loudness estimation module 805c receives microphone signals in the frequency domain. In this implementation, loudness estimation module 805c calculates a loudness estimate for the microphone signals, e.g., as described above with reference to Equations 20b-25a. In this example, the loudness processing module 810 is configured for implementing loudness processing, e.g., as described in Equations 26-29b, and compensation gain application for each single-stream rendering module. In this implementation, the loudness processing module 810 is configured for altering audio signals of program stream 1 and audio signals of program stream 2 in order to preserve their perceived loudness in the presence of one or more interfering signals. In some instances, the control system may determine that the microphone signals correspond to environmental noise above which a program stream should be raised. However, in some examples the control system may determine that the microphone signals correspond to a wakeword, a command, a child's cry, or other such audio that may need to be heard by a smart audio device and/or one or more listeners. In some such implementations, the loudness processing module 810 may be configured for altering the microphone signals in order to preserve their perceived loudness in the presence of interfering audio signals of program stream 1 and/or audio signals of program stream 2. Here, the loudness processing module 810 is configured to provide appropriate gains to the rendering modules 1 and 2.
After the mixer 630c mixes the outputs of the rendering modules 1 through N, an inverse filterbank 635c converts the mix to the time domain and provides mixed speaker feed signals in the time domain to the loudspeakers 1 through M. In this example, the quadrature mirror filterbanks, the rendering modules 1 through N, the mixer 630c and the inverse filterbank 635c are components of the control system 110e.
In this example, a QMF is applied to program stream 1 before the program stream is received by rendering modules 1a and 1b. Similarly, a QMF is applied to program stream 2 before the program stream is received by rendering modules 2a and 2b. In some instances, the output of rendering module 1a may correspond with a desired reproduction of the program stream 1 prior to the detection of a wakeword, whereas the output of rendering module 1b may correspond with a desired reproduction of the program stream 1 after the detection of the wakeword. Similarly, the output of rendering module 2a may correspond with a desired reproduction of the program stream 2 prior to the detection of a wakeword, whereas the output of rendering module 2b may correspond with a desired reproduction of the program stream 2 after the detection of the wakeword. In this implementation, the output of rendering modules 1a and 1b is provided to crossfade module 910a and the output of rendering modules 2a and 2b is provided to crossfade module 910b. The crossfade time may, for example, be in the range of hundreds of milliseconds to several seconds.
After the mixer 630d mixes the outputs of the crossfade modules 910a and 910b, an inverse filterbank 635d converts the mix to the time domain and provides mixed speaker feed signals in the time domain to the loudspeakers 1 through M. In this example, the quadrature mirror filterbanks, the rendering modules, the crossfade modules, the mixer 630d and the inverse filterbank 635d are components of the control system 110f.
In some embodiments it may be possible to precompute the rendering configurations used in each of the single stream renderers 1a, 1b, 2a, and 2b. This is especially convenient and efficient for use cases like the smart voice assistant, as the spatial configurations are often known a priori and have no dependency on other dynamic aspects of the system. In other embodiments it may not be possible or desirable to precompute the rendering configurations, in which case the complete configurations for each single-stream renderer must be calculated dynamically while the system is running.
One of the practical considerations in implementing dynamic cost flexible rendering (in accordance with some embodiments) is complexity. In some cases it may not be feasible to solve the unique cost functions for each frequency band for each audio object in real time, given that object positions (the positions of each audio object to be rendered, which may be indicated by metadata) may change many times per second. An alternative approach, which reduces complexity at the expense of memory, is to use a look-up table that samples the three-dimensional space of all possible object positions. The sampling need not be the same in all dimensions.
A desired rendering location will not necessarily correspond with the location for which a speaker activation has been calculated. At runtime, to determine the actual activations for each speaker, some form of interpolation may be implemented. In some such examples, tri-linear interpolation between the speaker activations of the nearest 8 points to a desired rendering location may be used.
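A sketch of tri-linear interpolation over a regular grid of precomputed speaker activations follows; the grid layout, array names and coordinate conventions are assumptions:

```python
import numpy as np

def trilinear_activations(table: np.ndarray, grid_min, grid_max, pos):
    """Interpolate M speaker activations from the 8 grid points nearest to pos.

    table:    (Nx, Ny, Nz, M) precomputed activations, with at least two
              grid points along each axis.
    grid_min: (3,) minimum coordinate of the sampled space.
    grid_max: (3,) maximum coordinate of the sampled space.
    pos:      (3,) desired rendering location.
    """
    grid_min = np.asarray(grid_min, dtype=float)
    grid_max = np.asarray(grid_max, dtype=float)
    dims = np.array(table.shape[:3])
    # Continuous index of pos within the table; positions outside the
    # sampled space are clamped to the nearest grid cell.
    u = (np.asarray(pos, dtype=float) - grid_min) / (grid_max - grid_min)
    u = np.clip(u * (dims - 1), 0, dims - 1 - 1e-9)
    i0 = np.floor(u).astype(int)
    f = u - i0                       # fractional part along each axis
    out = np.zeros(table.shape[3])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * table[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```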
As noted above with reference to
In the example shown in
According to the example shown in
In the examples shown in
Another desirable property of a dynamic flexible renderer is the ability to smoothly transition to a new rendering configuration, regardless of the current state of the system. For example, if a third configuration change is requested when the system is in the midst of a transition between a lower-complexity version of a rendering configuration and a higher-quality version of the rendering configuration, some disclosed implementations are configured to start a transition to a new rendering configuration without waiting for the first rendering configuration transition to complete. The present disclosure includes methods for smoothly handling such transitions.
The disclosed examples are generally applicable to any renderer that applies a set of speaker activations (e.g., a set of M speaker activations) to a collection of audio signals to generate outputs (e.g., M outputs), whether the renderer operates in the time domain or the frequency domain. By ensuring that the rendering configuration transition between a current set of speaker activations and a new set of speaker activations is smooth and continuous, but also arbitrarily interruptible, the timing of transitions between rendering configurations can effectively be decoupled from the time it takes to calculate the speaker activations for the new rendering configuration. Some such implementations enable the progressive transition to a new, lower-quality/complexity rendering configuration and then to a corresponding higher-quality/complexity rendering configuration, the latter of which may be computed asynchronously and/or by a device other than the one performing the rendering. Some implementations that are capable of supporting a smooth, continuous, and arbitrarily interruptible transition also have the desirable property of allowing a set of new target rendering activations to be updated dynamically at any time, regardless of any previous transitions that may be in progress.
Rendering Configuration Transitions
Various methods are disclosed herein for implementing smooth, dynamic, and arbitrarily interruptible rendering configuration transitions (which also may be referred to herein as renderer configuration transitions or simply as “transitions”).
According to this example, at time A, a rendering transition indication is received. In some examples, the rendering transition indication may be received by an orchestrating device, such as a smart speaker or a “smart home hub,” that is configured to coordinate or orchestrate the rendering of audio devices in the audio environment. In this example, the rendering transition indication is an indication that the current rendering configuration should transition to Rendering Configuration 2. The rendering transition indication may, for example, correspond to (e.g., may be in response to) a detected event in the audio system, such as a wake word utterance, an incoming telephone call, an indication that a second content stream (such as a content stream corresponding to the “cooking tips” example that is described above with reference to
Various mechanisms for managing transitions between rendering configurations are disclosed herein. In the example described above with reference to
According to this implementation, the renderer 1315 maintains active data structures 1312a and 1312b, which are look-up tables A and B in this example. Each of the data structures 1312a and 1312b corresponds to a rendering configuration, or a version of a rendering configuration (such as a simplified version or a complete version). Table A corresponds to the current rendering configuration and Table B corresponds to a target rendering configuration. The data structures 1312a and 1312b may, for example, correspond to a set of speaker activations such as those shown in
In this implementation, when the renderer 1315 is rendering an audio object, the renderer 1315 computes two sets of speaker activations for the audio object's position: here, one set of speaker activations for the audio object's position is based on Table A and the other set of speaker activations for the audio object's position is based on Table B. According to this example, tri-linear interpolation modules 1313a and 1313b are configured to determine the actual activations for each speaker. In some such examples, the tri-linear interpolation modules 1313a and 1313b are configured to determine the actual activations for each speaker according to a tri-linear interpolation process between the speaker activations of the nearest 8 points, as described above with reference to
In this example, module 1314 of the control system 110g is configured to determine a magnitude-normalized interpolation between the two sets of speaker activations, based at least in part on the crossfade time 1311. According to this example, module 1314 of the control system 110g is configured to determine a single table of speaker activations based on the interpolated values for Table A, received from the tri-linear interpolation module 1313a, and based on the interpolated values for Table B, received from the tri-linear interpolation module 1313b. In some examples, the rendering transition indication also indicates the crossfade time 1311, which corresponds to the transition times described above with reference to
In this implementation, module 1316 of the control system 110g is configured to compute a final set of speaker activations for each audio object in the frequency domain according to the magnitude-normalized interpolation. In this example, an inverse filterbank 1335 converts the final set of speaker activations to the time domain and provides speaker feed signals in the time domain to the loudspeakers 1 through M. In alternative implementations, the renderer 1315 may operate in the time domain.
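The exact form of the magnitude-normalized interpolation performed by module 1314 is not reproduced in this excerpt. The sketch below blends the two interpolated activation sets linearly and then rescales the result toward an interpolated magnitude, which is offered only as one plausible reading of the operation and is an assumption of this sketch:

```python
import numpy as np

def magnitude_normalized_mix(act_a: np.ndarray, act_b: np.ndarray,
                             x: float) -> np.ndarray:
    """Blend activations from Table A and Table B at crossfade position x
    in [0, 1], then rescale so the blended vector keeps an interpolated
    magnitude rather than dipping when A and B partially cancel."""
    blend = (1.0 - x) * act_a + x * act_b
    target_mag = (1.0 - x) * np.linalg.norm(act_a) + x * np.linalg.norm(act_b)
    mag = np.linalg.norm(blend)
    return blend if mag == 0.0 else blend * (target_mag / mag)
```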
In some instances, e.g., as described above with reference to
According to some examples, the table combination module 1405 may be configured to process interruptions of rendering configuration transitions. In some such examples, if a rendering configuration transition is interrupted by another rendering transition indication, the table combination module 1405 may be configured to combine the above-described data structures 1312a and 1312b (the previous two look-up tables A and B), to create a new current look-up table A′ (data structure 1412a). In some such implementations, the control system 110g is configured to replace the data structure 1312b with a new target look-up table B′ (the data structure 1412b). The new target look-up table B′ may, for example, correspond with a third rendering configuration.
In some examples, block 1414 of the table combination module 1405 may be configured to implement the combination operation by applying the same magnitude-normalized interpolation mechanism at each point of the A and B tables, following an interpolation process: in this implementation, the interpolation process involves separate tri-linear interpolation processes performed on the contents of Tables A and B by block 1413a and 1413b, respectively. The interpolation process may be based, at least in part, on the time at which the previous rendering configuration transition was interrupted (as indicated by previous crossfade interruption time 1411 of
According to some implementations, the new current look-up table A′ (data structure 1412a) may optionally have reduced dimensions compared to one or more of the previous tables (A and/or B). The choice of dimensions for A′ allows for a trade-off between the complexity of the one-time calculation vs. quality during the transition to the new rendering configuration associated with the new target look-up table B′. After the target table B is replaced by the new target table B′, in some instances the table combination module 1405 may be configured to continue live interpolation between the tables until the rendering configuration transition is completed. In some implementations, the table combination module 1405 (e.g., in combination with other disclosed features) may be configured to process multiple rendering configuration transition interruptions with no discontinuities.
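For illustration, the table-combination step may be sketched as a point-wise blend of the two tables at the interruption time; a simple linear blend is used here in place of the disclosure's magnitude-normalized interpolation:

```python
import numpy as np

def combine_tables(table_a: np.ndarray, table_b: np.ndarray,
                   x_interrupt: float) -> np.ndarray:
    """Freeze an interrupted transition into a new current table A'.

    table_a, table_b: activation tables of the same shape for the old
    current and old target configurations; x_interrupt in [0, 1] is the
    crossfade position at which the transition was interrupted. The
    disclosure applies its magnitude-normalized interpolation at each
    table point; a plain linear blend is shown here for brevity.
    """
    return (1.0 - x_interrupt) * table_a + x_interrupt * table_b
```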
Progressive calculation and application of rendering configurations is a way of meeting the low-latency objectives desirable for a good user experience without sacrificing the quality of a flexible rendering configuration, e.g., a flexible rendering configuration that optimizes playback for a heterogeneous set of arbitrarily-placed audio devices. Given a set of constraints associated with a particular rendering configuration, some disclosed methods involve calculating a lower-complexity solution (e.g., a lower-complexity version of a rendering configuration) in parallel with a higher-complexity solution (e.g., a higher-complexity version of the rendering configuration). The dimensions of the lower-complexity version of the rendering configuration may, for example, be chosen such that the lower-complexity version of the rendering configuration may be computed with minimal latency. The higher-complexity solution may, in some instances, be computed in parallel. According to some such examples, as soon as the lower-complexity version of the rendering configuration is available, the rendering configuration transition may begin, e.g., using the methods described above. At a later time, when the higher-complexity version of the rendering configuration becomes available, the transition to the lower-complexity version of the rendering configuration can be interrupted (if the rendering configuration transition is not already complete) with a transition to the higher-complexity version of the rendering configuration, e.g., as described above.
Table 1, below, provides some example dimensions for high- and low-complexity rendering configuration look-up tables, to illustrate the order of magnitude of the difference in their calculation.
Additional Examples of Frequency Domain Implementations
In this example, the frequency-domain renderer 1515 is configured to apply a set of speaker activations to the program stream audio data 1505a to generate M outputs, one for each of loudspeakers 1 through M. In addition to speaker activations, in this example the frequency-domain renderer 1515 is configured to apply varying delays to the M outputs, for example to time-align the arrival of sound from each loudspeaker to a listening position. These delays may be implemented as any combination of sample and group delay, e.g., in the case that the M speaker activations are represented by time-domain filter coefficients. In some examples wherein rendering is implemented in the frequency domain, the M speaker activations may be represented by N frequency domain filter coefficients, and the delays may be represented by a combination of transform block delays (implemented via transform block delay lines module 1518 in the example of
In order to support rendering configuration transitions that can be continuously and arbitrarily interrupted, without discontinuities, the aforementioned approach to interpolating the speaker activations may be used, as described above with reference to
For example, module 1614 of the control system 110g may be configured to determine a magnitude-normalized interpolation between the two sets of speaker activations, based at least in part on the crossfade time 1611. According to this example, module 1614 of the control system 110 is configured to determine a single table of speaker activations based on the interpolated values for Table A, received from the tri-linear interpolation module 1613a, and based on the interpolated values for Table B, received from the tri-linear interpolation module 1613b. In this implementation, module 1616 of the control system 110g is configured to compute a final set of speaker activations for each audio object in the frequency domain according to the table of speaker activations determined by module 1614. Here, module 1616 outputs speaker feeds, one for each of the loudspeakers 1 through M, to the transform block delay lines module 1618.
According to this example, the transform block delay lines module 1618 applies a set of delay lines, one delay line for each speaker feed. As in the example described above with reference to
According to this example, each active rendering configuration also has its own corresponding delays and read offsets. Here, the read offset A is for the rendering configuration (or rendering configuration version) corresponding to table A and the read offset B is for the rendering configuration (or rendering configuration version) corresponding to table B. According to some examples, “read offset A” corresponds to a set of M read offsets associated with rendering configuration A, with one read offset for each of M channels. In such examples, “read offset B” corresponds to a set of M read offsets associated with rendering configuration B. In such implementations, the comparison of the delays and choice of using a unity power sum or a unity amplitude sum may be made on a per-channel basis. As described above, according to some examples, after reading from each delay line an additional filtering stage is used to implement the sub-block delays associated with the rendering configurations corresponding to tables A and B. In this example, the sub-block delays for the active rendering configuration corresponding to look-up table A are implemented by the sub-block delay module 1620a and the sub-block delays for the active rendering configuration corresponding to look-up table B are implemented by the sub-block delay module 1620b.
In this example, the multiple delayed sets of speaker feeds for each configuration (the outputs of the sub-block delay module 1620a and the sub-block delay module 1620b) are crossfaded, by the crossfade module 1625, to produce a single set of M output speaker feeds. In some examples, the crossfade module 1625 may be configured to apply crossfade windows for each rendering configuration. According to some implementations, the crossfade module 1625 may be configured to select crossfade windows based, at least in part, on the delay line read offsets A and B.
There are many possible symmetric crossfade window pairs that may be used. Accordingly, the crossfade module 1625 may be configured to select crossfade window pairs in different ways, depending on the particular implementation. In some implementations, the crossfade module 1625 may be configured to select the crossfade windows to have a unity power sum if the delay line read offsets A and B are not identical, so far as can be determined at the granularity of the transform block size. Practically speaking, the read offsets A and B will appear to be identical if the total delays for rendering configurations A and B are within one transform block of each other. For example, if the transform block includes 64 samples, the corresponding time interval would be approximately 1.333 milliseconds at a 48 kHz sampling rate. According to some examples, this condition may be expressed as follows:
$w(i)^2 + w(1-i)^2 = 1$ (30)
In Equation 30, i represents a block index that correlates to time, but is in the frequency domain. One example of w(i) that meets the criteria of Equation 30 is:
However, if the read offsets A and B, as shown in
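The window pairs of Equations 31 through 33 are not reproduced in this excerpt. For illustration, the selection logic described above may be sketched with a standard equal-power (sine/cosine) pair when the read offsets differ and a linear, unity-sum pair when they match; these particular window functions are common choices and are assumptions here:

```python
import numpy as np

def crossfade_windows(num_blocks: int, offsets_differ: bool):
    """Return (w_out, w_in) crossfade windows of length num_blocks.

    If the delay-line read offsets differ, the two delayed feeds are
    effectively uncorrelated, so an equal-power pair (w_out^2 + w_in^2 = 1)
    is used; otherwise a unity-sum linear pair avoids a loudness bump.
    """
    i = np.linspace(0.0, 1.0, num_blocks)
    if offsets_differ:
        w_in = np.sin(0.5 * np.pi * i)      # equal-power fade-in
        w_out = np.cos(0.5 * np.pi * i)     # equal-power fade-out
    else:
        w_in = i                            # linear, unity-sum fade-in
        w_out = 1.0 - i
    return w_out, w_in
```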
The previous crossfade window examples are straightforward. However, a more generalized approach to window design may be needed for the crossfade module 1625 of
The L sets of speaker activations may, in some examples, correspond to L rendering configurations. However, as noted elsewhere herein, some implementations may involve multiple sets of speaker activations corresponding to multiple versions of a single rendering configuration. For example, a first set of speaker activations may be for a simplified version of a rendering configuration and a second set of speaker activations may be for a complete version of the rendering configuration. In some such examples, a single rendering transition indication may result in a first transition to the simplified version of the rendering configuration and a second transition to the complete version of the rendering configuration.
Therefore, in the examples described with reference to
For the sake of simplicity, the following discussion will assume that the L sets of speaker activations of
In the example shown in
However, in the example shown in
According to the example shown in
In some implementations, the following approach may be used to design an L-part crossfade window set.
$p(i) = \sum_{j=1}^{K} \bigl(k_j(i)\bigr)^2$ (34)
$\bar{l}_{1 \ldots L}(i) = n(i)\,\bigl(l_{1 \ldots L}(i)\bigr)$. (36)
In some implementations, the more generalized formulation of the power-sum constraint shown in equation 36 supersedes that of equation 31. Using this approach means that the choice of w(i) will impact the shape of a simple power-preserving crossfade window pair used in a direct, non-interrupted transition.
One approach to choosing w(i) is to combine equations 32 and 33:
By selecting an appropriate value for α (such as α=0.75), and using equation 37 as the base window function w(i) in the L-part crossfade design algorithm described in the previous paragraph, one may arrive at a power-preserving crossfade window pair that resembles equation 31 and a linear crossfade window pair that resembles equation 33.
In these examples, during a time interval corresponding with 0≤i<i1, a first rendering configuration transition from rendering configuration A to rendering configuration B was taking place. The first rendering configuration transition may, for example, have been responsive to a first rendering transition indication. According to these examples, at or near a time corresponding with i1, the first rendering configuration transition is interrupted by the receipt of a second rendering transition indication, indicating a transition to rendering configuration C.
In these examples, during a time interval corresponding with i1≤i<i2 and without requiring the transition to rendering configuration B to be completed, a second rendering configuration transition, to rendering configuration C, takes place.
According to these examples, at or near a time corresponding with i2, the second rendering configuration transition is interrupted by the receipt of a third rendering transition indication, indicating a transition to rendering configuration D. In these examples, during a time interval corresponding with i2≤i<1 and without requiring the transition to rendering configuration C to be completed, a third rendering configuration transition, to rendering configuration D, takes place. In these examples, the third rendering configuration transition is completed at a time corresponding with i=1.
In the example presented in
According to the example presented in
In the example presented in
According to the example presented in
In the example presented in
Allowing an arbitrary number of interrupts with a frequency domain renderer as shown in
In this implementation, block 3105 involves receiving, by a control system and via an interface system, audio data. Here, the audio data includes one or more audio signals and associated spatial data. In this instance, the spatial data indicates an intended perceived spatial position corresponding to an audio signal. According to some examples (e.g., for audio object implementations such as Dolby Atmos™), the spatial data may be, or may include, positional metadata. However, in some instances the first set of speaker activations may correspond to a channel-based audio format. In some such instances, the intended perceived spatial position may correspond to a channel of the channel-based audio format (e.g., may correspond to a left channel, a right channel, a center channel, etc.).
In this example, block 3110 involves rendering, by the control system, the audio data for reproduction via a set of loudspeakers of an environment, to produce first rendered audio signals. In this implementation, rendering the audio data for reproduction involves determining a first relative activation of a set of loudspeakers in the environment according to a first rendering configuration. In this example, the first rendering configuration corresponds to a first set of speaker activations. According to some examples, the first set of speaker activations may be for each of a corresponding plurality of positions in a three-dimensional space. In this implementation, block 3115 involves providing, via the interface system, the first rendered audio signals to at least some loudspeakers of the set of loudspeakers of the environment.
In this example, block 3120 involves receiving, by the control system and via the interface system, a first rendering transition indication. In this implementation, the first rendering transition indication indicates a transition from the first rendering configuration to a second rendering configuration.
In this implementation, block 3125 involves determining, by the control system, a second set of speaker activations corresponding to a simplified version of the second rendering configuration. According to this example, block 3130 involves performing, by the control system, a first transition from the first set of speaker activations to the second set of speaker activations.
In this implementation, block 3135 involves determining, by the control system, a third set of speaker activations corresponding to a complete version of the second rendering configuration. In some instances, block 3135 may be performed concurrently with block 3125 and/or block 3130. In this example, block 3140 involves performing, by the control system, a second transition to the third set of speaker activations without requiring completion of the first transition.
According to some examples, method 3100 may involve receiving, by the control system and via the interface system, a second rendering transition indication. In some such examples, the second rendering transition indication indicates a transition to a third rendering configuration. In some examples, method 3100 may involve determining, by the control system, a fourth set of speaker activations corresponding to a simplified version of the third rendering configuration. In some examples, method 3100 may involve performing, by the control system, a third transition from the third set of speaker activations to the fourth set of speaker activations. In some examples, method 3100 may involve determining, by the control system, a fifth set of speaker activations corresponding to a complete version of the third rendering configuration and performing, by the control system, a fourth transition to the fifth set of speaker activations without requiring completion of the first transition, the second transition or the third transition.
In some examples, method 3100 may involve receiving, by the control system and via the interface system and sequentially, second through (N)th rendering transition indications. Some such methods may involve determining, by the control system, a first set of speaker activations and a second set of speaker activations for each of the second through (N)th rendering transition indications. The first set of speaker activations may correspond to a simplified version of a rendering configuration and the second set of speaker activations may correspond to a complete version of a rendering configuration for each of the second through (N)th rendering transition indications. In some examples, method 3100 may involve performing, by the control system and sequentially, third through (2N−1)th transitions from a fourth set of speaker activations to a (2N)th set of speaker activations. In some examples, method 3100 may involve performing, by the control system, a (2N)th transition to a (2N+1)th set of speaker activations without requiring completion of any of the first through (2N)th transitions. In some implementations, a single renderer instance may render the audio data for reproduction.
However, it is not necessarily the case that all rendering transition indications will involve a simplified-to-complete transition responsive to a received rendering transition indication. If, as in the example above, there will be a simplified-to-complete rendering transition responsive to a received rendering transition indication, two sets of speaker activations may be determined for the rendering transition indication and there may be two transitions corresponding to the rendering transition indication. However, if there will be no simplified-to-complete transition responsive to a rendering transition indication, one set of speaker activations may be determined for the rendering transition indication and there may be only one transition corresponding to the rendering transition indication.
In some examples, method 3100 may involve receiving, by the control system and via the interface system, a second rendering transition indication. The second rendering transition indication may indicate a transition to a third rendering configuration. In some such examples, method 3100 may involve determining, by the control system, a fourth set of speaker activations corresponding to the third rendering configuration and performing, by the control system, a third transition to the fourth set of speaker activations without requiring completion of the first transition or the second transition.
According to some examples, method 3100 may involve receiving, by the control system and via the interface system, a third rendering transition indication. The third rendering transition indication may indicate a transition to a fourth rendering configuration. In some instances, method 3100 may involve determining, by the control system, a fifth set of speaker activations corresponding to the fourth rendering configuration and performing, by the control system, a fourth transition to the fifth set of speaker activations without requiring completion of the first transition, the second transition or the third transition.
In some examples, method 3100 may involve receiving, by the control system and via the interface system and sequentially, second through (N)th rendering transition indications and determining, by the control system, fourth through (N+2)th sets of speaker activations corresponding to the second through (N)th rendering transition indications. According to some examples, method 3100 may involve performing, by the control system and sequentially, third through (N)th transitions from the fourth set of speaker activations to a (N+1)th set of speaker activations and performing, by the control system, an (N+1)th transition to the (N+2)th set of speaker activations without requiring completion of any of the first through (N)th transitions.
According to some implementations, the first set of speaker activations, the second set of speaker activations and the third set of speaker activations are frequency-dependent speaker activations. In some such examples, applying the frequency-dependent speaker activations may involve applying, in a first frequency band, a model of perceived spatial position that produces a binaural response corresponding to an audio object position at the left and right ears of a listener. Alternatively, or additionally, applying the frequency-dependent speaker activations may involve applying, in at least a second frequency band, a model of perceived spatial position that places a perceived spatial position of an audio signal playing from a set of loudspeakers at a center of mass of the set of loudspeakers' positions weighted by the loudspeakers' associated activating gains.
In some examples, at least one of the first set of speaker activations, the second set of speaker activations or the third set of speaker activations may be a result of optimizing a cost that is a function of a model of perceived spatial position of the audio signal when played back over the set of loudspeakers in the environment. In some instances, the cost may be a function of a measure of a proximity of the intended perceived spatial position of the audio signal to a position of each loudspeaker of the set of loudspeakers. Alternatively, or additionally, the cost may be a function of a measure of one or more additional dynamically configurable functions based on one or more of: proximity of loudspeakers to one or more listeners; proximity of loudspeakers to an attracting force position, wherein an attracting force is a factor that favors relatively higher activation of loudspeakers in closer proximity to the attracting force position; proximity of loudspeakers to a repelling force position, wherein a repelling force is a factor that favors relatively lower activation of loudspeakers in closer proximity to the repelling force position; capabilities of each loudspeaker relative to other loudspeakers in the environment; synchronization of the loudspeakers with respect to other loudspeakers; wakeword performance; and/or echo canceller performance.
According to some implementations, rendering the audio data for reproduction may involve determining a single set of interpolated activations from the rendering configurations and applying the single set of interpolated activations to produce a single set of rendered audio signals. In some such examples, the single set of rendered audio signals may be fed into a set of loudspeaker delay lines. The set of loudspeaker delay lines may include one loudspeaker delay line for each loudspeaker of a plurality of loudspeakers.
In some examples, rendering of the audio data for reproduction may be performed in the frequency domain. Accordingly, in some instances rendering the audio data for reproduction may involve determining and implementing loudspeaker delays in the frequency domain. According to some such examples, determining and implementing speaker delays in the frequency domain may involve determining and implementing a combination of transform block delays and sub-block delays applied by frequency domain filter coefficients.
In some examples, the sub-block delays may be residual phase terms that allow for delays that are not exact multiples of a frequency domain transform block size. Accordingly, in some examples, rendering the audio data for reproduction may involve implementing sub-block delay filtering. In some implementations, rendering the audio data for reproduction may involve implementing a set of block delay lines with separate read offsets.
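As an illustration of a sub-block delay realized as a residual phase term on frequency-domain filter coefficients, ignoring overlap-related details of any particular filterbank (the transform conventions below are assumptions):

```python
import numpy as np

def sub_block_delay_coeffs(delay_samples: float, block_size: int) -> np.ndarray:
    """Frequency-domain coefficients applying the residual delay left over
    after whole transform-block delays have been handled by the delay lines.

    delay_samples: residual delay, with 0 <= delay_samples < block_size.
    block_size:    number of samples per transform block.
    Returns complex coefficients for the positive-frequency bins of an
    FFT of length block_size (rfft convention).
    """
    k = np.arange(block_size // 2 + 1)                   # rfft bin indices
    return np.exp(-2j * np.pi * k * delay_samples / block_size)
```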
In some examples, rendering the audio data for reproduction may involve determining and applying interpolated speaker activations and crossfade windows for each rendering configuration. According to some such examples, rendering the audio data for reproduction may involve implementing a set of block delay lines with separate delay line read offsets. In some such examples, crossfade window selection may be based, at least in part, on the delay line read offsets. In some instances, the crossfade windows may be designed to have a unity power sum if the delay line read offsets differ by more than a threshold amount. According to some examples, the crossfade windows may be designed to have a unity sum if the delay line read offsets are identical or differ by less than a threshold amount.
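A simple sketch of this window-selection idea follows; the threshold value, window shapes and function signature are illustrative assumptions. The motivation is standard: when the read offsets differ, the two paths are relatively delayed and therefore less correlated, so equal-power windows (squares summing to one) keep the combined level approximately constant, whereas when the offsets match the paths add coherently, so complementary amplitude windows (summing to one) avoid a level bump.

```python
# Illustrative window selection: equal-power windows when the old/new delay-line read
# offsets differ, complementary amplitude windows when they match.
import numpy as np

def crossfade_windows(n_samples, offset_old, offset_new, threshold=0):
    t = np.linspace(0.0, 1.0, n_samples)
    if abs(offset_old - offset_new) > threshold:
        # Paths are relatively delayed (less correlated): squares sum to one (unity power sum).
        fade_out, fade_in = np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)
    else:
        # Paths are coherent: windows sum to one (unity sum).
        fade_out, fade_in = 1.0 - t, t
    return fade_out, fade_in
```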
As noted above, in some implementations a single renderer instance may render the audio data for reproduction.
In this implementation, block 3205 involves receiving, by a control system and via an interface system, audio data. In this example, the audio data includes one or more audio signals and associated spatial data. Here, the spatial data indicates an intended perceived spatial position corresponding to an audio signal.
In this example, block 3210 involves rendering, by the control system, the audio data for reproduction via a set of loudspeakers of an environment, to produce first rendered audio signals. According to this example, rendering the audio data for reproduction involves determining a first relative activation of a set of loudspeakers in an environment according to a first rendering configuration.
In this implementation, the first rendering configuration corresponds to a first set of speaker activations. In some instances, the first set of speaker activations may be for each of a corresponding plurality of positions in a three-dimensional space. In some examples, the spatial data may be, or may include, positional metadata. However, in some examples the first set of speaker activations may correspond to a channel-based audio format. In some such examples, the intended perceived spatial position comprises a channel of the channel-based audio format.
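Purely as a hypothetical illustration of channel-based speaker activations (the channel labels and gain values below are not taken from this disclosure), each channel of a channel-based format can be associated with a fixed activation vector over the loudspeakers:

```python
# Hypothetical channel-based activations: each channel serves as the intended perceived
# spatial position and maps to a fixed activation vector over three loudspeakers.
import numpy as np

CHANNEL_ACTIVATIONS = {
    "L": np.array([1.0, 0.0, 0.0]),
    "R": np.array([0.0, 1.0, 0.0]),
    "C": np.array([0.5, 0.5, 0.7]),   # values are illustrative only
}

def render_channel_bed(channel_blocks):
    """channel_blocks: dict mapping channel name -> (n_samples,) signal array."""
    return sum(CHANNEL_ACTIVATIONS[ch][:, None] * sig[None, :]
               for ch, sig in channel_blocks.items())
```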
According to this example, block 3220 involves receiving, by the control system and via the interface system and sequentially, first through (L−1)th rendering transition indications. In this instance, each of the first through (L−1)th rendering transition indications indicates a transition from a current rendering configuration to a new rendering configuration.
In this implementation, block 3225 involves determining, by the control system, second through (L)th sets of speaker activations corresponding to the first through (L−1)th rendering transition indications. According to this example, block 3230 involves performing, by the control system and sequentially, first through (L−2)th transitions from the first set of speaker activations to the (L−1)th set of speaker activations. In this implementation, block 3235 involves performing, by the control system, an (L−1)th transition to the (L)th set of speaker activations without requiring completion of any of the first through (L−2)th transitions.
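The following Python sketch illustrates, under assumed names and a simple linear crossfade, how a renderer can begin a new transition from its current interpolated activations without waiting for earlier transitions to finish; it is an illustrative sketch, not the claimed method.

```python
# Illustrative sketch: each new rendering transition starts from the renderer's *current*
# interpolated activations, so completion of earlier transitions is never required.
import numpy as np

class TransitioningRenderer:
    def __init__(self, initial_activations, fade_blocks=16):
        self.current = np.asarray(initial_activations, dtype=float)  # (n_speakers, n_objects)
        self.start = self.current.copy()
        self.target = self.current.copy()
        self.fade_blocks = fade_blocks
        self.step = fade_blocks            # no transition in progress initially

    def set_rendering_configuration(self, new_activations):
        # A new transition indication restarts the fade from wherever we are now.
        self.start = self.current.copy()
        self.target = np.asarray(new_activations, dtype=float)
        self.step = 0

    def render(self, audio_block):
        """audio_block: (n_objects, n_samples) -> rendered signals (n_speakers, n_samples)."""
        if self.step < self.fade_blocks:
            alpha = (self.step + 1) / self.fade_blocks
            self.current = (1.0 - alpha) * self.start + alpha * self.target
            self.step += 1
        return self.current @ audio_block
```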
In some implementations, a single renderer instance may render the audio data for reproduction. In some instances, rendering of the audio data for reproduction may be performed in the frequency domain. According to some examples, rendering the audio data for reproduction may involve determining a single set of interpolated activations from the rendering configurations and applying the single set of interpolated activations to produce a single set of rendered audio signals.
In some such examples, the single set of rendered audio signals may be fed into a set of loudspeaker delay lines. The set of loudspeaker delay lines may, for example, include one loudspeaker delay line for each loudspeaker of a plurality of loudspeakers.
Some aspects of the present disclosure include a system or device configured (e.g., programmed) to perform one or more examples of the disclosed methods, and a tangible computer readable medium (e.g., a disc) which stores code for implementing one or more examples of the disclosed methods or steps thereof. For example, some disclosed systems can be or include a programmable general purpose processor, digital signal processor, or microprocessor, programmed with software or firmware and/or otherwise configured to perform any of a variety of operations on data, including an embodiment of disclosed methods or steps thereof. Such a general purpose processor may be or include a computer system including an input device, a memory, and a processing subsystem that is programmed (and/or otherwise configured) to perform one or more examples of the disclosed methods (or steps thereof) in response to data asserted thereto.
Some embodiments may be implemented as a configurable (e.g., programmable) digital signal processor (DSP) that is configured (e.g., programmed and otherwise configured) to perform required processing on audio signal(s), including performance of one or more examples of the disclosed methods. Alternatively, embodiments of the disclosed systems (or elements thereof) may be implemented as a general purpose processor (e.g., a personal computer (PC) or other computer system or microprocessor, which may include an input device and a memory) which is programmed with software or firmware and/or otherwise configured to perform any of a variety of operations including one or more examples of the disclosed methods. Alternatively, elements of some embodiments of the inventive system are implemented as a general purpose processor or DSP configured (e.g., programmed) to perform one or more examples of the disclosed methods, and the system also includes other elements (e.g., one or more loudspeakers and/or one or more microphones). A general purpose processor configured to perform one or more examples of the disclosed methods may be coupled to an input device (e.g., a mouse and/or a keyboard), a memory, and a display device.
Another aspect of the present disclosure is a computer readable medium (for example, a disc or other tangible storage medium) which stores code for controlling one or more devices to perform one or more examples of the disclosed methods or steps thereof.
Various features and aspects will be appreciated from the following enumerated example embodiments (“EEEs”):
EEE1. An audio processing method, comprising:
receiving, by a control system and via an interface system, audio data, the audio data including one or more audio signals and associated spatial data, the spatial data indicating an intended perceived spatial position corresponding to an audio signal;
rendering, by the control system, the audio data for reproduction via a set of loudspeakers of an environment, to produce first rendered audio signals, wherein rendering the audio data for reproduction involves determining a first relative activation of the set of loudspeakers according to a first rendering configuration, the first rendering configuration corresponding to a first set of speaker activations;
receiving, by the control system and via the interface system and sequentially, first through (L−1)th rendering transition indications, each of the first through (L−1)th rendering transition indications indicating a transition from a current rendering configuration to a new rendering configuration;
determining, by the control system, second through (L)th sets of speaker activations corresponding to the first through (L−1)th rendering transition indications;
performing, by the control system and sequentially, first through (L−2)th transitions from the first set of speaker activations to the (L−1)th set of speaker activations; and
performing, by the control system, an (L−1)th transition to the (L)th set of speaker activations without requiring completion of any of the first through (L−2)th transitions.
EEE2. The method of claim EEE1, wherein a single renderer instance renders the audio data for reproduction.
EEE3. The method of claim EEE1 or claim EEE2, wherein rendering the audio data for reproduction comprises determining a single set of interpolated activations from the rendering configurations and applying the single set of interpolated activations to produce a single set of rendered audio signals.
EEE4. The method of claim EEE3, wherein the single set of rendered audio signals is fed into a set of loudspeaker delay lines, the set of loudspeaker delay lines including one loudspeaker delay line for each loudspeaker of a plurality of loudspeakers.
EEE5. The method of any one of claims EEE1-EEE4, wherein the rendering of the audio data for reproduction is performed in a frequency domain.
EEE6. The method of any one of claims EEE1-EEE5, wherein the first set of speaker activations are for each of a corresponding plurality of positions in a three-dimensional space.
EEE7. The method of any one of claims EEE1-EEE6, wherein the spatial data comprises positional metadata.
EEE8. The method of any one of claims EEE1-EEE5, wherein the first set of speaker activations correspond to a channel-based audio format.
EEE9. The method of claim EEE8, wherein the intended perceived spatial position comprises a channel of the channel-based audio format.
EEE10. An apparatus configured to perform the method of any one of claims EEE1-EEE9.
EEE11. A system configured to perform the method of any one of claims EEE1-EEE9.
EEE12. One or more non-transitory media having software stored thereon, the software including instructions for controlling one or more devices to perform the method of any one of claims EEE1-EEE9.
While specific embodiments and applications have been described herein, it will be apparent to those of ordinary skill in the art that many variations on the embodiments and applications described herein are possible without departing from the scope described and claimed herein. It should be understood that while certain forms have been shown and described, the scope of the present disclosure is not to be limited to the specific embodiments described and shown or the specific methods described.
This application claims priority of the following applications: U.S. provisional application 63/121,108, filed 3 Dec. 2020 and U.S. provisional application 63/202,003, filed 21 May 2021, each of which is incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2021/061669 | 12/2/2021 | WO |

Number | Date | Country
---|---|---
63202003 | May 2021 | US
63121108 | Dec 2020 | US