This disclosure relates generally to audio signal processing.
In some audio processing applications, multiple audio channels are transported over a two-channel audio connection for the purpose of representing a spatial audio scene at a client audio device (e.g., a surround sound system) with, for example, height elements. This is achieved by downmixing an M-channel audio signal into a two-channel stereo signal, and sending the two-channel downmix signal and (optional) spatial parameters to a decoder, where the spatial audio scene is reconstructed using the downmix signal and the spatial parameters.
Implementations are disclosed for a matrix coded stereo signal with periphonic elements. A mixing matrix, suitable for processing a multi-channel audio input signal, is constructed so that the resulting multi-channel output signal contains the same audio elements from the input signal, wherein the spatial relationships between audio elements, as defined by panning rules associated with the input signal format, are preserved in the output signal, as defined by matrix encoding rules associated with the output signal format. The choice of the coefficients of the mixing matrix is governed by a phase-preference rule that is used to determine the appropriate phase to apply to each input signal channel.
In an embodiment, a method of transforming a number of input audio channels to a number of output audio channels comprises: receiving an input audio signal comprising a number of input audio channels; generating a number of output audio channels by: associating each input audio channel with complex gain elements that are defined by a panning function, the panning function configured to adjust a panning behavior in a region around a first location on a unit sphere at an elevation angle and to shift a discontinuity in the region to a second location on the unit sphere at the elevation angle and opposite the first location on the unit sphere.
In an embodiment, the number of output channels is two.
In an embodiment, the elevation angle is 90 degrees.
In an embodiment, the elevation angle is a function of the unit vector.
In an embodiment, the panning function is computed as a product of a complex phase correction coefficient and the complex mixing gains.
In an embodiment, the encoding rules define a magnitude of two gain elements and a relative phase between the two gain elements.
In an embodiment, the encoding rules are defined to include a constraint on the panning function given by a product of a three-dimensional (3D) panning function and a Hermitian transpose of the 3D panning function.
In an embodiment, the panning function has a dominant direction at a first panning position and a discontinuity at a second panning position.
In an embodiment, a method of transforming a number of input audio channels to a number of output audio channels comprises: receiving an input audio signal comprising a number of input audio channels; generating a number of output audio channels by: associating each input audio channel with a set of mixing gains, where each set of mixing gains contains a number of complex gain elements; associating each of the complex gain elements with a respective one of the output audio channels; associating each of the input channels with a respective unit vector; defining an amplitude and relative phase between the complex gain elements in each of the sets of mixing gains as a function of the respective unit vector for the respective input audio channel according to encoding rules; and determining an absolute phase of the complex gain elements in each of the sets of mixing gains according to a phase-preference rule, wherein the phase-preference rule is chosen to provide a degree of phase compatibility between different sets of mixing gains.
In an embodiment, a method of transforming a number of input audio channels to a number of output audio channels, the method comprising: for each input channel: determining a unit vector associated with the input channel; determining a prototype gain vector for the input channel based on the unit vector; determining a dominant elevation angle for the input channel based on a function of the unit vector; determining a phase correction for the input channel based on the prototype gain vector and the dominant elevation angle; determining mixing gains for the input channel by applying the phase correction to the prototype gain vector; and generating an output audio channel by applying the mixing gains to the input channel.
Other implementations disclosed herein are directed to a system, apparatus and computer-readable medium. The details of the disclosed implementations are set forth in the accompanying drawings and the description below. Other features, objects and advantages are apparent from the description, drawings and claims.
Particular implementations disclosed herein provide one or more of the following advantages. An N-channel audio input signal representing a spatial audio scene is downmixed to a two-channel downmix signal suitable for transporting the spatial audio scene, according to a continuous panning function that adheres to certain matrix-encoding rules, wherein the effect of a discontinuity in the panning function is minimized based on assumptions regarding the typical distribution of object locations.
In the drawings, specific arrangements or orderings of schematic elements, such as those representing devices, units, instruction blocks and data elements, are shown for ease of description. However, it should be understood by those skilled in the art that the specific ordering or arrangement of the schematic elements in the drawings is not meant to imply that a particular order or sequence of processing, or separation of processes, is required. Further, the inclusion of a schematic element in a drawing is not meant to imply that such element is required in all embodiments or that the features represented by such element may not be included in or combined with other elements in some implementations.
Further, in the drawings, where connecting elements, such as solid or dashed lines or arrows, are used to illustrate a connection, relationship, or association between or among two or more other schematic elements, the absence of any such connecting elements is not meant to imply that no connection, relationship, or association can exist. In other words, some connections, relationships, or associations between elements are not shown in the drawings so as not to obscure the disclosure. In addition, for ease of illustration, a single connecting element is used to represent multiple connections, relationships or associations between elements. For example, where a connecting element represents a communication of signals, data, or instructions, it should be understood by those skilled in the art that such element represents one or multiple signal paths, as may be needed, to effect the communication.
The same reference symbol used in various drawings indicates like elements.
In the following detailed description, numerous specific details are set forth to provide a thorough understanding of the various described embodiments. It will be apparent to one of ordinary skill in the art that the various described implementations may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Several features are described hereafter that can each be used independently of one another or with any combination of other features.
As used herein, the term “includes” and its variants are to be read as open-ended terms that mean “includes, but is not limited to.” The term “or” is to be read as “and/or” unless the context clearly indicates otherwise. The term “based on” is to be read as “based at least in part on.” The terms “one example implementation” and “an example implementation” are to be read as “at least one example implementation.” The term “another implementation” is to be read as “at least one other implementation.” The terms “determined,” “determines,” or “determining” are to be read as obtaining, receiving, computing, calculating, estimating, predicting or deriving. In addition, in the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
A spatial audio scene may be represented in the form of N audio objects, where each audio object consists of an audio signal and location information that indicates the position of the audio object within the scene. It is common to simplify a complex audio scene by mixing the audio signals for each respective object into a smaller set of M audio signals (a multi-channel audio scene representation), where commonly M<N. This mixing may be adapted, for each audio object, based on the location information for the audio object. The location information may be fixed or dynamic. The M-channel audio signal is commonly referred to as a downmix signal.
The location of each object may be associated with an (x, y, z) unit vector, indicative of a direction in a three-dimensional (3D) vector space. It is therefore desirable to be able to define the M mixing gains that are used to mix the audio object's audio signal to the M-channel output signal according to the object location, which can be expressed in the form of a panning function p: ℝ³ → ℂ^M:
The use of complex mixing gains allows the mixing process to include both magnitude and phase modifications of the input audio signals. Mixing of N input signals (In1, In2, . . . , InN) to form the M-channel output (Out1, Out2, . . . , OutM) may be written as:
where xn, yn and zn represent the (x, y, z) location associated with object n. In the description that follows, the x-axis points in the forward direction, the y-axis points to the left, and the z-axis points directly upwards, following a right-hand rule.
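The mixing of Equations [2] and [3] amounts to a complex matrix-vector product per sample. A minimal numpy sketch follows; the sizes and values are arbitrary illustrations, and complex-valued signals are used for brevity, whereas a real-signal realization is discussed below in connection with mixer 100:

```python
import numpy as np

# Hypothetical example sizes: N = 3 input channels, M = 2 output channels,
# T = 64 samples per channel.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((3, 64))          # In: one row per input channel
mix = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))  # complex mixing matrix

# Equation [3]: Out = M x In -- each output channel is a complex-weighted
# sum of all input channels.
outputs = mix @ inputs
```

Each column of the mixing matrix holds the set of complex gains for one input channel, matching the column-wise construction of Equation [4].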
Equation [2] may also be written in matrix form as:
Out=M×In, [3]
where:
and
wherein column n of the matrix M in Equation [4] is formed according to:
The M-channel downmix signal may be optionally re-mixed by a subsequent upmixing process, to enable reconstruction of the original spatial audio scene. Upmixers may make use of data derived from the covariance of the M-channel downmix signal.
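As an illustration of the covariance data mentioned above, the following sketch shows how an upmixer might estimate the covariance of an M-channel downmix block; the function name is an illustrative assumption, not an element of this disclosure:

```python
import numpy as np

def downmix_covariance(downmix):
    """Covariance of an (M, T) downmix block, as an upmixer might estimate it."""
    _, num_samples = downmix.shape
    # Hermitian (M, M) matrix; off-diagonal terms capture inter-channel
    # correlation that an upmixer can use to steer the reconstruction.
    return downmix @ downmix.conj().T / num_samples
```

For two orthogonal channels the estimate is diagonal, indicating no inter-channel correlation.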
Mixer 100 implements the imaginary part of the complex mixing gains by applying 90-degree phase shifting operations using 90-degree phase-shifter 106. In the example shown, the input signal In1 101 is multiplied 102 by the real part of the gain g1,L to produce the real-scaled signal 110. The input signal In1 101 is multiplied 103 by the imaginary part of the gain g1,L to produce the imaginary-scaled signal 111. Summing node 104 combines the real-scaled signals, e.g., 110, and summing node 105 combines the imaginary-scaled signals, e.g., 111. The combined imaginary-scaled signal is processed by 90-degree phase-shifter 106, and the phase-shifted signal is summed 107 with the combined real-scaled signal to produce the left channel output OutL 108. The right channel output OutR is generated in a similar manner as the left channel output.
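The arrangement of mixer 100 (a real-scaled path summed with a 90-degree phase-shifted imaginary-scaled path) can be sketched as follows. The FFT-based phase shifter and the function names are illustrative assumptions, not the disclosure's implementation:

```python
import numpy as np

def shift_90(x):
    # FFT-based +90-degree phase shifter: cos(wt) -> -sin(wt).
    # Exact for periodic blocks; an approximation for arbitrary signals.
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(np.fft.fft(x) * h)   # x + j * Hilbert(x)
    return -np.imag(analytic)                   # -Hilbert(x) = +90-degree shift

def mix_to_output(inputs, gains):
    # inputs: (N, T) real signals; gains: N complex gains for one output channel.
    # Mirrors mixer 100: scale by real parts, scale by imaginary parts,
    # phase-shift the combined imaginary-scaled signal, then sum.
    real_sum = sum(g.real * x for g, x in zip(gains, inputs))
    imag_sum = sum(g.imag * x for g, x in zip(gains, inputs))
    return real_sum + shift_90(imag_sum)
```

For a pure tone and a purely real gain the output is simply the scaled input; a purely imaginary gain advances the tone by 90 degrees, as multiplication by the complex gain requires.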
Column n of the matrix M contains the complex gains,
that specify how the input signal Inn contributes to the output signals, OutL and OutR. In some embodiments, the gain elements in column n are defined by a panning function:
wherein input channel n is associated with a unit-vector (xn, yn, zn).
The example arrangement of speakers in Table I includes 7 speakers in the horizontal plane (z=0) and 4 speakers at an elevation of 45° (z=0.71). The azimuth angles shown in Table I correspond to the positions where these speakers may be intended for placement in a listening environment to re-create the spatial audio scene accurately. The column labeled “Warped Az” indicates a modified azimuth angle that is used to adjust the relative spacing of the (x, y, z) unit vectors associated with each channel. This warped spacing of the unit vectors associated with each channel may be applied so as to improve the ability of an upmixer to regenerate the spatial audio scene from the downmix signals. It will be appreciated that many variants of the above warping process may be applied, and in general, any warping process may be used.
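The conversion from a channel's azimuth and elevation angles to its (x, y, z) unit vector can be sketched as below. The exact angle convention is an assumption for illustration; it follows the axis convention stated earlier (x forward, y left, z up), with azimuth measured toward the left from the front:

```python
import math

def unit_vector(azimuth_deg, elevation_deg):
    # Axis convention from this disclosure: x forward, y left, z up.
    # Azimuth is assumed measured counter-clockwise (toward the left) from front.
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))
```

For a height channel at 45° elevation this gives z = sin 45° ≈ 0.71, matching the z value quoted for the elevated speakers of Table I.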
When the number of downmix channels in this example is M=2, and the audio objects are associated with direction vectors that lie in the horizontal ring 201, so that the direction vectors are of the form (x, y, 0), the panning function p(x, y, z) can be adapted to provide a downmix signal that adheres to the matrix encoding rules (hereinafter, also referred to as “encoding rules”), and the spatial audio scene can be reconstructed using an upmixer.
The 2D matrix encoding rules may be defined to include the following constraint on the panning function p(x, y, z) (for the special case where z=0):
where the (·)^H operator indicates the Hermitian transpose of a matrix or vector.
The matrix-encoding rule of Equation [8] assumes that the unit vectors all lie in a 2D plane (the xy plane containing horizontal ring 201 shown in
Note that the matrix encoding rule in Equation [9] is an example of a 3D matrix-encoding rule. In alternative embodiments, the rule may be adapted to include the conjugate of the matrix on the right-hand side of Equation [9] (a “conjugate matrix encoding rule”). The following discussion is based on the rule shown in Equation [9], but it will be appreciated by those skilled in the art that the methods described herein may be adapted to also apply to the conjugate matrix encoding rule. Henceforth, a reference to “matrix-encoding rules” is a reference to the 3D matrix encoding rule of Equation [9]. Note that the matrix-encoding rules are formulated mathematically as matrices and vectors. In practice, however, matrix and vector multiplications may be implemented using any desired data structure, including 1D and 2D arrays of values.
There are multiple functions that will satisfy the matrix-encoding rules. If the panning function p(x, y, z) satisfies the matrix encoding rules (i.e., satisfies Equation [9]), then the phase-shifted panning function p′(x, y, z) = λ(x, y, z) × p(x, y, z), where the phase-shift function satisfies λ(x, y, z) ∈ ℂ and |λ(x, y, z)| = 1, will also satisfy Equation [9].
It will be appreciated by those skilled in the art that, for a given unit vector (x, y, z), the matrix encoding rules define the magnitude of the two gain elements, and the relative phase between the two gain elements. However, the matrix encoding rules do not constrain the absolute phase of the two gain elements. Hence, the phase-shift function, defined by λ(x, y, z), may be applied in the creation of a panning function to suit the characteristics of the soundfield being represented.
In one particular example, two input channels (without loss of generality, assume they are channels 1 and 2) are associated with unit-vectors (x1, y1, z1) and (x2, y2, z2). If these two unit vectors are close together, we may wish to ensure that the panning gains
are close together. This equates to a desire for the panning function to be a continuous function of the (x, y, z) unit vector. However, since no continuous panning function exists, it becomes necessary to make a choice regarding the location (in terms of the (x, y, z) unit vector) where a discontinuity is to exist in the panning function (hereinafter, a “phase-preference” rule). This discontinuity may be moved to an arbitrary location on the unit sphere by choosing the appropriate phase-shift function λ(x, y, z).
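The invariance argument above can be checked numerically. The gain vector below is an arbitrary illustrative value, not a panning function from this disclosure; the point is only that multiplying by any unit-modulus phase shift leaves the outer product p p^H, and hence the matrix-encoding constraint of Equation [9], unchanged:

```python
import numpy as np

p = np.array([[0.6 + 0.3j],
              [0.5 - 0.55j]])          # arbitrary 2x1 complex gain vector
lam = np.exp(1j * 1.234)               # any phase-shift value with |lam| = 1
p_shifted = lam * p

# (lam p)(lam p)^H = |lam|^2 p p^H = p p^H, since |lam| = 1
assert np.allclose(p_shifted @ p_shifted.conj().T, p @ p.conj().T)
```

This is why the matrix-encoding rules fix only magnitudes and relative phase, leaving the absolute phase free for the phase-preference rule to choose.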
Based on the foregoing, in some embodiments a method of transforming an N-channel input audio signal to an M-channel output audio signal comprises: receiving N input audio channels; and generating M output audio channels by: associating each input audio channel with a respective column of a matrix of mixing gains, where each column of the matrix of mixing gains contains a number of complex gain elements; associating each of the complex gain elements with a respective one of the output audio channels; associating each of the input channels with a respective unit vector; defining an amplitude and relative phase between the complex gain elements in each of the columns as a function of the respective unit vector for the respective input audio channel according to matrix encoding rules; and determining an absolute phase of the complex gain elements in each of the columns according to a phase-preference rule, wherein the phase-preference rule is chosen to provide a degree of phase compatibility between different columns of the matrix of mixing gains.
In some embodiments, the following terminology is defined for panning functions:
The p_up(x, y, z) function of Equation [10] is defined for all real x, y, z with x² + y² + z² = 1, except for z = −1. To ensure that the function is defined over all unit vectors (x, y, z), a value of the function is chosen at (x, y, z) = (0, 0, −1). In some embodiments, the value chosen is:
The p_up(x, y, z) function of Equation [11] satisfies the matrix encoding rules, and hence it would be acceptable to use this function to define the mixing matrix M_up by setting p(x, y, z) = p_up(x, y, z). Equation [12] shows the resulting mixing coefficients (note that this equation shows the matrix in transposed form):
In some embodiments, the choice of the elevation angle α may vary as a function α = f_α(x, y, z), and the resulting panning function is referred to as p_fα.
Given a unit vector (x, y, z), compute the [2×1] column vector of gains:
where G′ is also referred to herein as a “prototype” gain vector.
Choose a value for the dominant elevation angle α, typically in the range 0 ≤ α ≤ 90°, according to:
α=fα(x,y,z) [14]
Compute a complex phase-correction coefficient, λ:
where sign( ) is the signum function:
The value of p_fα(x, y, z) is then determined by applying the phase-correction coefficient λ to the prototype gain vector: p_fα(x, y, z) = λG′.
In some embodiments, the dominant elevation angle, α, is determined as a function of the unit vector (x, y, z) so that, for unit vectors in the upper forward quarter of the sphere (where x > 0 and z > 0), f_α(x, y, z) = 0, and for unit vectors in the horizontal plane at the back (where x < −0.7 and z = 0), f_α(x, y, z) = 90°.
In some embodiments, the dominant elevation angle, α, is determined as a function of the unit vector (x, y, z) as follows:
α = f_α(x, y, z) = 270·t(x, y, z)² − 180·t(x, y, z)³, [17]
where t(x, y, z) = max(0, min(1, −√2·x − z/2)). [18]
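The dominant-elevation mapping of Equations [17] and [18] can be implemented directly. A minimal Python sketch (the function name is illustrative):

```python
import math

def dominant_elevation_deg(x, y, z):
    # Equation [18]: clamp a linear blend of x and z to the range [0, 1]
    t = max(0.0, min(1.0, -math.sqrt(2.0) * x - 0.5 * z))
    # Equation [17]: smooth cubic ramp from 0 to 90 degrees
    return 270.0 * t ** 2 - 180.0 * t ** 3
```

Consistent with the behavior described above, directions in the upper forward quarter of the sphere give 0°, while rear horizontal directions (x < −0.7, z = 0) give 90°.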
In some embodiments, the p_fα panning function is used to define the mixing matrix by setting p(x, y, z) = p_fα(x, y, z).
Process 600 receives an input audio signal comprising a number of input audio channels (601) and generates a number of output audio channels by associating each input audio channel with complex gain elements that are defined by a panning function, where the panning function is configured to adjust a panning behavior in a region around a first location on a unit sphere at an elevation angle and to shift a discontinuity in the region to a second location on the unit sphere at the elevation angle and opposite the first location on the unit sphere (602).
Process 603 receives an input audio signal comprising a number of input audio channels and generates a number of output audio channels by associating each input audio channel with a set of mixing gains, where each set of mixing gains contains a number of complex gain elements, associating each of the complex gain elements with a respective one of the output audio channels, associating each of the input audio channels with a respective unit vector, defining an amplitude and relative phase between the complex gain elements in each set of mixing gains as a function of the respective unit vector for the respective input audio channel according to encoding rules, and determining an absolute phase of the complex gain elements in each set of mixing gains according to a phase-preference rule, where the phase-preference rule is chosen to provide a degree of phase compatibility between different sets of mixing gains (604).
Process 605 determines a unit vector associated with an input audio channel (606), determines a prototype gain vector for the input audio channel based on the unit vector (607), determines a dominant elevation angle for the input audio channel based on a function of the unit vector (608), determines a phase correction for the input audio channel based on the prototype gain vector and the dominant elevation angle (609), determines mixing gains for the input audio channel by applying the phase correction to the prototype gain vector (610), and generates an output audio channel by applying the mixing gains to the input audio channel (611).
Each of the steps recited in processes 600, 603 and 605 is described in further detail in reference to
As shown, the system 700 includes a central processing unit (CPU) 701 which is capable of performing various processes in accordance with a program stored in, for example, a read only memory (ROM) 702 or a program loaded from, for example, a storage unit 708 to a random access memory (RAM) 703. In the RAM 703, the data required when the CPU 701 performs the various processes is also stored, as required. The CPU 701, the ROM 702 and the RAM 703 are connected to one another via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input unit 706, that may include a keyboard, a mouse, or the like; an output unit 707 that may include a display such as a liquid crystal display (LCD) and one or more speakers; the storage unit 708 including a hard disk, or another suitable storage device; and a communication unit 709 including a network interface card such as a network card (e.g., wired or wireless).
In some implementations, the input unit 706 includes one or more microphones in different positions (depending on the host device) enabling capture of audio signals in various formats (e.g., mono, stereo, spatial, immersive, and other suitable formats).
In some implementations, the output unit 707 includes systems with various numbers of speakers. As illustrated in
The communication unit 709 is configured to communicate with other devices (e.g., via a network). A drive 710 is also connected to the I/O interface 705, as required. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, a flash drive or another suitable removable medium is mounted on the drive 710, so that a computer program read therefrom is installed into the storage unit 708, as required. A person skilled in the art would understand that although the system 700 is described as including the above-described components, in real applications it is possible to add, remove, and/or replace some of these components, and all such modifications or alterations fall within the scope of the present disclosure.
In accordance with example embodiments of the present disclosure, the processes described above may be implemented as computer software programs or on a computer-readable storage medium. For example, embodiments of the present disclosure include a computer program product including a computer program tangibly embodied on a machine readable medium, the computer program including program code for performing methods. In such embodiments, the computer program may be downloaded and mounted from the network via the communication unit 709, and/or installed from the removable medium 711, as shown in
Generally, various example embodiments of the present disclosure may be implemented in hardware or special purpose circuits (e.g., control circuitry), software, logic or any combination thereof. For example, the units discussed above can be executed by control circuitry (e.g., a CPU in combination with other components of
While various aspects of the example embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it will be appreciated that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Additionally, various blocks shown in the flowcharts may be viewed as method steps, and/or as operations that result from operation of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated function(s). For example, embodiments of the present disclosure include a computer program product including a computer program tangibly embodied on a machine readable medium, the computer program containing program codes configured to carry out the methods as described above.
In the context of the disclosure, a machine readable medium may be any tangible medium that may contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. A machine readable medium may be non-transitory and may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Computer program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These computer program codes may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus that has control circuitry, such that the program codes, when executed by the processor of the computer or other programmable data processing apparatus, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server or distributed over one or more remote computers and/or servers.
While this document contains many specific implementation details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can, in some cases, be excised from the combination, and the claimed combination may be directed to a sub combination or variation of a sub combination. Logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
Various aspects of the present invention may be appreciated from the following enumerated example embodiments (EEEs):
EEE1. A method of transforming a number of input audio channels to a number of output audio channels, the method comprising:
EEE10. A method of transforming a number of input audio channels to a number of output audio channels, the method comprising:
EEE18. A method of transforming a number of input audio channels to a number of output audio channels, the method comprising:
Number | Date | Country | Kind
---|---|---|---
PCT/CN2020/111808 | Aug 2020 | WO | international
21150688.6 | Jan 2021 | EP | regional
This application claims priority to International Patent Application No. PCT/CN2020/111808, filed 27 Aug. 2020; U.S. Provisional Patent Application No. 63/081,937, filed 23 Sep. 2020; U.S. Provisional Patent Application No. 63/127,919, filed 18 Dec. 2020; and European Patent Application No. 21150688.6, filed 8 Jan. 2021, all of which are incorporated herein by reference in their entirety.
Filing Document | Filing Date | Country
---|---|---
PCT/US2021/047817 | 8/26/2021 | WO
Number | Date | Country
---|---|---
63081937 | Sep 2020 | US
63127919 | Dec 2020 | US