The present disclosure relates to providing an apparatus, system and method for Six Degrees of Freedom (6DoF) audio rendering, in particular in connection with data representations and bitstream structures for 6DoF audio rendering.
There is presently a lack of an adequate solution for rendering audio in combination with Six Degrees of Freedom (6DoF) movement of a user. While there are solutions for rendering channel-, object-, and First/Higher Order Ambisonics (HOA) signals in combination with Three Degrees of Freedom (3DoF) movement (yaw, pitch, roll), there is a lack of support for handling such signals in combination with Six Degrees of Freedom (6DoF) movement of the user (yaw, pitch, roll and translational movement).
In general, 3DoF audio rendering provides a sound field in which one or more audio sources are rendered at angular positions surrounding a pre-determined listener position, referred to as 3DoF position. One example of 3DoF audio rendering is included in the MPEG-H 3D Audio standard (abbreviated as MPEG-H 3DA).
While MPEG-H 3DA was developed to support channel, object, and HOA signals for 3DoF, it is not yet able to handle true 6DoF audio. The envisioned MPEG-I 3D audio implementation is desired to extend the 3DoF (and 3DoF+) functionality towards 6DoF 3D audio appliances in an efficient manner (preferably including efficient signal generation, encoding, decoding and/or rendering), while preferably providing 3DoF rendering backwards compatibility.
In view of the above, it is an object of the present disclosure to provide methods, apparatus and data representations and/or bitstream structures for 3D audio encoding and/or 3D audio rendering, which allow efficient 6DoF audio encoding and/or rendering, preferably with backwards compatibility for 3DoF audio rendering, e.g., according to the MPEG-H 3DA standard.
It may be another object of the present disclosure to provide data representations and/or bitstream structures for 3D audio encoding and/or 3D audio rendering, which allow efficient 6DoF audio encoding and/or rendering, preferably with backwards compatibility for 3DoF audio rendering, e.g. according to the MPEG-H 3DA standard, and encoding and/or rendering apparatus for efficient 6DoF audio encoding and/or rendering, preferably with backwards compatibility for 3DoF audio rendering, e.g. according to the MPEG-H 3DA standard.
According to exemplary aspects, there may be provided a method for encoding an audio signal into a bitstream, in particular at an encoder, the method comprising: encoding and/or including audio signal data associated with 3DoF audio rendering into one or more first bitstream parts of the bitstream; and/or encoding and/or including metadata associated with 6DoF audio rendering into one or more second bitstream parts of the bitstream.
According to exemplary aspects, the audio signal data associated with 3DoF audio rendering includes audio signal data of one or more audio objects.
According to exemplary aspects, the one or more audio objects are positioned on one or more spheres surrounding a default 3DoF listener position.
According to exemplary aspects, the audio signal data associated with 3DoF audio rendering includes directional data of one or more audio objects and/or distance data of one or more audio objects.
According to exemplary aspects, the metadata associated with 6DoF audio rendering is indicative of one or more default 3DoF listener positions.
According to exemplary aspects, the metadata associated with 6DoF audio rendering includes or is indicative of at least one of: a description of 6DoF space, optionally including object coordinates; audio object directions of one or more audio objects; a virtual reality (VR) environment; and/or parameters relating to distance attenuation, occlusion, and/or reverberations.
According to exemplary aspects, the method may further include: receiving audio signals from one or more audio sources; and/or generating the audio signal data associated with 3DoF audio rendering based on the audio signals from the one or more audio sources and a transform function.
According to exemplary aspects, the audio signal data associated with 3DoF audio rendering is generated by transforming the audio signals from the one or more audio sources into 3DoF audio signals using the transform function.
According to exemplary aspects, the transform function maps or projects the audio signals of the one or more audio sources onto respective audio objects positioned on one or more spheres surrounding a default 3DoF listener position.
According to exemplary aspects, the method may further include: determining a parametrization of the transform function based on environmental characteristics and/or parameters relating to distance attenuation, occlusion, and/or reverberations.
According to exemplary aspects, the bitstream is an MPEG-H 3D Audio bitstream or a bitstream using MPEG-H 3D Audio syntax.
According to exemplary aspects, the one or more first bitstream parts of the bitstream represent a payload of the bitstream, and/or the one or more second bitstream parts represent one or more extension containers of the bitstream.
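By way of a non-limiting illustration only, the above bitstream layout may be sketched as follows in Python, assuming a simplified, hypothetical packet framing (a one-byte type and a four-byte length prefix) rather than the actual MPEG-H 3DA/MHAS syntax:

```python
import json
import struct

# Hypothetical packet-type IDs for illustration only; the real MPEG-H 3DA
# syntax uses MHAS packet types and extension-element mechanisms instead.
PACTYP_PAYLOAD = 0      # first bitstream part: 3DoF audio signal data
PACTYP_EXTENSION = 1    # second bitstream part: 6DoF metadata container

def write_packet(ptype: int, payload: bytes) -> bytes:
    """Serialize one packet as [type:u8][length:u32][payload]."""
    return struct.pack(">BI", ptype, len(payload)) + payload

def build_bitstream(audio_3dof: bytes, metadata_6dof: dict) -> bytes:
    """Place the 3DoF payload first and the 6DoF metadata in an extension container."""
    meta = json.dumps(metadata_6dof).encode()
    return write_packet(PACTYP_PAYLOAD, audio_3dof) + write_packet(PACTYP_EXTENSION, meta)

def parse_packets(bs: bytes):
    """Yield (type, payload) pairs; a legacy 3DoF parser may simply skip type 1."""
    pos = 0
    while pos < len(bs):
        ptype, length = struct.unpack_from(">BI", bs, pos)
        pos += 5
        yield ptype, bs[pos:pos + length]
        pos += length
```

A legacy parser that recognizes only type-0 packets can skip the extension container unparsed, which is the basis of the backwards compatibility discussed herein.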
According to yet another exemplary aspect, there may be provided a method for decoding and/or audio rendering, in particular at a decoder or audio renderer, the method comprising: receiving a bitstream which includes audio signal data associated with 3DoF audio rendering in one or more first bitstream parts of the bitstream and further including metadata associated with 6DoF audio rendering in one or more second bitstream parts of the bitstream, and/or performing at least one of 3DoF audio rendering and 6DoF audio rendering based on the received bitstream.
According to exemplary aspects, when performing 3DoF audio rendering, the 3DoF audio rendering is performed based on the audio signal data associated with 3DoF audio rendering in the one or more first bitstream parts of the bitstream, while discarding the metadata associated with 6DoF audio rendering in the one or more second bitstream parts of the bitstream.
According to exemplary aspects, when performing 6DoF audio rendering, the 6DoF audio rendering is performed based on the audio signal data associated with 3DoF audio rendering in the one or more first bitstream parts of the bitstream and the metadata associated with 6DoF audio rendering in the one or more second bitstream parts of the bitstream.
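By way of a non-limiting illustration, the decoder-side behaviour may be sketched as follows, again assuming the simplified hypothetical packet framing (one-byte type, four-byte length) rather than any standardized syntax:

```python
import json
import struct

PACTYP_PAYLOAD, PACTYP_EXTENSION = 0, 1   # hypothetical packet-type IDs

def decode(bs: bytes, supports_6dof: bool):
    """Parse [type:u8][length:u32][payload] packets; a legacy 3DoF decoder
    discards the extension container, while a 6DoF decoder consumes both parts."""
    audio_3dof, metadata_6dof = None, None
    pos = 0
    while pos < len(bs):
        ptype, length = struct.unpack_from(">BI", bs, pos)
        pos += 5
        payload = bs[pos:pos + length]
        pos += length
        if ptype == PACTYP_PAYLOAD:
            audio_3dof = payload              # used by both rendering modes
        elif ptype == PACTYP_EXTENSION and supports_6dof:
            metadata_6dof = json.loads(payload)
        # any other (unknown) packet types are skipped for compatibility
    return audio_3dof, metadata_6dof
```

With `supports_6dof=False` the same bitstream yields only the 3DoF audio data, illustrating that the extension metadata can be discarded without affecting 3DoF rendering.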
According to exemplary aspects, the audio signal data associated with 3DoF audio rendering includes audio signal data of one or more audio objects.
According to exemplary aspects, the one or more audio objects are positioned on one or more spheres surrounding a default 3DoF listener position.
According to exemplary aspects, the audio signal data associated with 3DoF audio rendering includes directional data of one or more audio objects and/or distance data of one or more audio objects.
According to exemplary aspects, the metadata associated with 6DoF audio rendering is indicative of one or more default 3DoF listener positions.
According to exemplary aspects, the metadata associated with 6DoF audio rendering includes or is indicative of at least one of: a description of 6DoF space, optionally including object coordinates; audio object directions of one or more audio objects; a virtual reality (VR) environment; and/or parameters relating to distance attenuation, occlusion, and/or reverberations.
According to exemplary aspects, the audio signal data associated with 3DoF audio rendering are generated based on the audio signals from the one or more audio sources and a transform function.
According to exemplary aspects, the audio signal data associated with 3DoF audio rendering is generated by transforming the audio signals from the one or more audio sources into 3DoF audio signals using the transform function.
According to exemplary aspects, the transform function maps or projects the audio signals of the one or more audio sources onto respective audio objects positioned on one or more spheres surrounding a default 3DoF listener position.
According to exemplary aspects, the bitstream is an MPEG-H 3D Audio bitstream or a bitstream using MPEG-H 3D Audio syntax.
According to exemplary aspects, the one or more first bitstream parts of the bitstream represent a payload of the bitstream, and/or the one or more second bitstream parts represent one or more extension containers of the bitstream.
According to exemplary aspects, performing 6DoF audio rendering, being based on the audio signal data associated with 3DoF audio rendering in the one or more first bitstream parts of the bitstream and the metadata associated with 6DoF audio rendering in the one or more second bitstream parts of the bitstream, includes generating audio signal data associated with 6DoF audio rendering based on the audio signal data associated with 3DoF audio rendering and an inverse transform function.
According to exemplary aspects, the audio signal data associated with 6DoF audio rendering is generated by transforming the audio signal data associated with 3DoF audio rendering using the inverse transform function and the metadata associated with 6DoF audio rendering.
According to exemplary aspects, the inverse transform function is an inverse function of a transform function which maps or projects audio signals of the one or more audio sources onto respective audio objects positioned on one or more spheres surrounding a default 3DoF listener position.
According to exemplary aspects, performing 3DoF audio rendering based on the audio signal data associated with 3DoF audio rendering in the one or more first bitstream parts of the bitstream results in the same generated sound field as performing 6DoF audio rendering, at a default 3DoF listener position, based on the audio signal data associated with 3DoF audio rendering in the one or more first bitstream parts of the bitstream and the metadata associated with 6DoF audio rendering in one or more second bitstream parts of the bitstream.
According to yet another exemplary aspect, there may be provided a bitstream for audio rendering, the bitstream including audio signal data associated with 3DoF audio rendering in one or more first bitstream parts of the bitstream and further including metadata associated with 6DoF audio rendering in one or more second bitstream parts of the bitstream. This aspect may be combined with any one or more of the above exemplary aspects.
According to yet another exemplary aspect, there may be provided an apparatus, in particular encoder, including a processor configured to: encode and/or include audio signal data associated with 3DoF audio rendering into one or more first bitstream parts of the bitstream; encode and/or include metadata associated with 6DoF audio rendering into one or more second bitstream parts of the bitstream; and/or output the encoded bitstream. This aspect may be combined with any one or more of the above exemplary aspects.
According to yet another exemplary aspect, there may be provided an apparatus, in particular decoder or audio renderer, including a processor configured to: receive a bitstream which includes audio signal data associated with 3DoF audio rendering in one or more first bitstream parts of the bitstream and further including metadata associated with 6DoF audio rendering in one or more second bitstream parts of the bitstream, and/or perform at least one of 3DoF audio rendering and 6DoF audio rendering based on the received bitstream. This aspect may be combined with any one or more of the above exemplary aspects.
According to exemplary aspects, when performing 3DoF audio rendering, the processor is configured to perform the 3DoF audio rendering based on the audio signal data associated with 3DoF audio rendering in the one or more first bitstream parts of the bitstream, while discarding the metadata associated with 6DoF audio rendering in the one or more second bitstream parts of the bitstream.
According to exemplary aspects, when performing 6DoF audio rendering, the processor is configured to perform the 6DoF audio rendering based on the audio signal data associated with 3DoF audio rendering in the one or more first bitstream parts of the bitstream and the metadata associated with 6DoF audio rendering in the one or more second bitstream parts of the bitstream.
According to yet another exemplary aspect, there may be provided a non-transitory computer program product including instructions that, when executed by a processor, cause the processor to execute a method for encoding an audio signal into a bitstream, in particular at an encoder, the method comprising: encoding or including audio signal data associated with 3DoF audio rendering into one or more first bitstream parts of the bitstream; and/or encoding or including metadata associated with 6DoF audio rendering into one or more second bitstream parts of the bitstream. This aspect may be combined with any one or more of the above exemplary aspects.
According to yet another exemplary aspect, there may be provided a non-transitory computer program product including instructions that, when executed by a processor, cause the processor to execute a method for decoding and/or audio rendering, in particular at a decoder or audio renderer, the method comprising: receiving a bitstream which includes audio signal data associated with 3DoF audio rendering in one or more first bitstream parts of the bitstream and further including metadata associated with 6DoF audio rendering in one or more second bitstream parts of the bitstream, and/or performing at least one of 3DoF audio rendering and 6DoF audio rendering based on the received bitstream. This aspect may be combined with any one or more of the above exemplary aspects.
Further aspects of the disclosure relate to corresponding computer programs and computer-readable storing media.
It will be appreciated that method steps and apparatus features may be interchanged in many ways. In particular, the details of the disclosed method can be implemented as an apparatus adapted to execute some or all of the steps of the method, and vice versa, as the skilled person will appreciate. In particular, it is understood that respective statements made with regard to the methods likewise apply to the corresponding apparatus, and vice versa.
Example embodiments of the disclosure are explained below with reference to the accompanying drawings, wherein like reference numbers may indicate like or similar elements, and wherein:
In the following, preferred exemplary aspects will be described in more detail with reference to the accompanying figures. Same or similar features in different drawings and embodiments may be referred to by similar reference numerals. It is to be understood that the detailed description below relating to various preferred exemplary aspects is not to be meant as limiting the scope of the present invention.
As used herein, “MPEG-H 3D Audio” shall refer to the specification as standardized in ISO/IEC 23008-3 and/or any past and/or future amendments, editions or other versions of the ISO/IEC 23008-3 standard.
As envisioned herein, the MPEG-I 3D audio implementation is intended to extend the 3DoF (and 3DoF+) functionality towards 6DoF 3D audio, while preferably providing 3DoF rendering backwards compatibility.
As used herein, 3DoF typically denotes a system that can correctly handle a user's head movement, in particular head rotation, specified with three parameters (e.g., yaw, pitch, roll). Such functionality is often available in various gaming and Virtual Reality (VR)/Augmented Reality (AR)/Mixed Reality (MR) systems, or other such acoustic environments.
As used herein, 6DoF typically denotes a system that can correctly handle, in addition to such 3DoF movement, translational movement of the user.
Exemplary aspects of the present disclosure relate to an audio system (e.g., an audio system that is compatible with the MPEG-I audio standard), where the audio renderer extends functionality towards 6DoF by converting related metadata to a 3DoF format, such as an audio renderer input format that is compatible with an MPEG standard (e.g., the MPEG-H 3DA standard).
In a method of 3D audio rendering with 3DoF, only angles (e.g. yaw angle y, pitch angle p, roll angle r) of a user's angular orientation at a pre-determined 3DoF position may be input to the 3DoF audio renderer 105. With extended 6DoF functionality, the user's location coordinates (e.g. x, y and z) may additionally be input to the 6DoF audio renderer (extension renderer).
An advantage of the present disclosure includes bit rate improvements for the bitstream transmitted between the encoder and the decoder. The bit stream may be encoded and/or decoded in compliance with a standard, e.g., the MPEG-I Audio standard and/or the MPEG-H 3D Audio standard, or at least backwards compatible with a standard such as with the MPEG-H 3D Audio standard.
In some examples, exemplary aspects of the present disclosure are directed to processing of a single bitstream (e.g., an MPEG-H 3D Audio (3DA) bitstream (BS) or a bitstream that uses syntax of an MPEG-H 3DA BS) that is compatible with a plurality of systems.
For example, in some exemplary aspects, the audio bitstream may be compatible with two or more different renderers, e.g., a 3DoF audio renderer that may be compatible with one standard (e.g., the MPEG-H 3D Audio standard) and a newly defined 6DoF audio renderer or renderer extension that may be compatible with a second, different standard (e.g., the MPEG-I Audio standard).
Exemplary aspects of the present disclosure are directed to different decoders configured to perform decoding and rendering of the same audio bitstream, preferably in order to produce the same audio output.
For example, exemplary aspects of the present disclosure relate to a 3DoF decoder and/or 3DoF renderer and/or a 6DoF decoder and/or 6DoF renderer configured to produce the same output for the same bitstream (e.g., a 3DA BS or a bitstream using the 3DA BS syntax). Exemplarily, the bitstream may include information regarding defined positions of a listener in VR/AR/MR (virtual reality/augmented reality/mixed reality) space, e.g., as part of 6DoF metadata.
The present disclosure exemplarily further relates to encoders and/or decoders configured to encode and/or decode, respectively, 6DoF information (e.g., compatible with an MPEG-I Audio environment), wherein such encoders and/or decoders of the present disclosure provide one or more of the following advantages:
In order to preferably avoid competition between 3DoF- and 6DoF- solutions and to provide a smooth transition between present and future technologies, backwards compatibility is highly beneficial.
For example, backwards compatibility between a 3DoF audio system and a 6DoF audio system may be highly beneficial, such as providing, in a 6DoF audio system such as MPEG-I Audio, backwards compatibility to a 3DoF audio system such as MPEG-H 3D Audio.
According to exemplary aspects of the present disclosure, this can be realized by providing backward compatibility, e.g., on a bitstream level, for 6DoF-related systems consisting of:
Exemplary aspects of the present disclosure relate to a standard 3DoF bitstream syntax, such as a first type of audio bitstream (e.g., MPEG-H 3DA BS) syntax, that encapsulates 6DoF bitstream elements, such as MPEG-I Audio bitstream elements, e.g. in one or more extension containers of the first type of audio bitstream (e.g., MPEG-H 3DA BS).
In order to provide a system that ensures backwards compatibility on a performance level, the following systems and/or structures may be relevant and may occur:
(3a) The 6DoF system (e.g., the MPEG-I Audio system) shall be able to process both the 3DoF-related and 6DoF-related parts of an audio bitstream and produce audio output that matches the audio output of the 3DoF system (e.g., of MPEG-H 3DA systems) at pre-defined backwards compatible 3DoF position(s) in VR/AR/MR space, i.e. the 6DoF system (decoder/renderer) may preferably be configured to render, at the default 3DoF position(s), the sound field / audio output that matches the 3DoF rendered sound field / audio output; and
(4a) The 6DoF system (e.g., the MPEG-I Audio system) shall provide a smooth change (transition) of the audio output around the pre-defined backwards compatible 3DoF position(s) (i.e., providing a continuous soundfield in a 6DoF space), i.e. the 6DoF system (decoder/renderer) may preferably be configured to render, in the surroundings of the default 3DoF position(s), the sound field / audio output that smoothly transitions, at the default 3DoF position(s), into the 3DoF rendered sound field/audio output.
In some examples, the present disclosure relates to providing a 6DoF audio renderer (e.g., an MPEG-I Audio renderer) that produces the same audio output as a 3DoF audio renderer (e.g., an MPEG-H 3D Audio renderer) in one or more 3DoF position(s).
Presently, there are drawbacks when transporting 3DoF-related audio signals and metadata directly to a 6DoF audio system, which include:
Exemplary aspects of the present disclosure are directed to efficiently generating, encoding, decoding and rendering such signal(s) in order to fulfil these goals and to provide 6DoF rendering functionality.
In particular,
Exemplary aspects of the present disclosure relate to recreating the sound field, when using a 6DoF audio renderer (e.g., an MPEG-I Audio renderer), in a “3DoF position” in a way that corresponds to a 3DoF audio renderer (e.g., an MPEG-H Audio renderer) output signal (which may or may not be consistent with the physical laws of sound propagation). This sound field should preferably be based on the original “audio sources” and reflect the influence of the complex geometries of the corresponding VR/AR/MR environment (e.g., the effect of “walls”, structures, sound reflections, reverberations, and/or occlusions, etc.).
Exemplary aspects of the present disclosure relate to parametrization by an encoder of all relevant information describing this scenario in a way to ensure fulfilment of one, more, or preferably all corresponding requirements (1a)-(4a) described above.
If two audio rendering modes (i.e., 3DoF and 6DoF) are run in parallel and an interpolation algorithm is applied to the corresponding outputs in 6DoF space, such an approach would be sub-optimal because it would require:
Exemplary aspects of the present disclosure avoid the drawbacks of the above, in that preferably only a single audio rendering mode is executed (e.g. instead of parallel execution of two audio rendering modes) and/or 3DoF audio data is preferably used for the 6DoF audio rendering with additional metadata for restoring and/or approximating the original sound source(s) signal(s) (e.g. instead of transmitting the 3DoF Audio data and the original sound source(s) data).
Exemplary aspects of the present disclosure relate to (1) a single 6DoF Audio rendering algorithm (e.g., compatible with MPEG-I Audio) that preferably produces exactly the same output as a 3DoF Audio rendering algorithm (e.g., compatible with MPEG-H 3DA) at specific position(s) and/or (2) representing the audio (e.g. 3DoF audio data) and 6DoF related audio metadata so as to minimize redundancy in 3DoF- and VR/AR/MR-related parts of 6DoF Audio bitstream data (e.g., MPEG-I Audio bitstream data).
Exemplary aspects of the present disclosure relate to using a first standardized format bitstream (e.g., MPEG-H 3DA BS) syntax to encapsulate a second standardized format bitstream (e.g., a future standard such as MPEG-I) or parts thereof and 6DoF related metadata to:
An aspect of the present disclosure relates to a determination of desired “3DoF position(s)” and 3DoF audio system (e.g. MPEG-H 3DA system) compatible signals at an encoder side.
For example, as shown relative to
The inverse function A−1 should, in some exemplary aspects, preferably “un-wet” these signals (i.e., remove the effects of the VR environment) well enough to approximate the original “dry” signals x (which are free from the effects of the VR environment).
The audio signal(s) for 3DoF rendering ((x3DA)) may preferably be defined in order to provide the same/similar output for both 3DoF and 6DoF audio renderings e.g., based on:
F3DoF(x3DA)→F6DoF(x) for 3DoF Equation No. (1)
The audio objects may be contained in a standardized bit stream. This bit stream may be encoded in compliance with a variety of standards, such as MPEG-H 3DA and/or MPEG-I.
The BS may include information regarding object signals, object directions, and object distances.
There may be an approximation of the desired audio rendering included, based on:
F6DoF(x*)≈F6DoF(x) for 6DoF Equation No. (2)
The approximation may be based on the VR environment, wherein environment characteristics may be included in the extension container metadata.
Additionally or optionally, smoothness for a 6DoF audio renderer (e.g. MPEG-I Audio renderer) output may be provided, preferably based on:
F6DoF ⊂ G^(i≥0) for 3DoF+, where G^(i≥0) denotes a geometric continuity class Equation No. (3)
Exemplary aspects of the present disclosure are directed to defining 3DoF audio objects (e.g. MPEG-H 3DA objects) on the encoder side, preferably based on:
x3DA:=A(x), ∥F3DoF(x3DA)−F6DoF(x) for 3DoF∥→min Equation No. (4)
An aspect of the present disclosure relates to recovering the original objects at the decoder based on:
x*:=A−1(x3DA) Equation No. (5)
wherein x denotes sound source/object signals; x* denotes an approximation of the sound source/object signals; F(x) for 3DoF / for 6DoF denotes an audio rendering function for 3DoF / 6DoF listener position(s); 3DoF denotes given reference compatibility position(s) ∈ 6DoF space; and 6DoF denotes arbitrary allowed position(s) ∈ VR scene.
The approximated sound sources/object signals are preferably recreated using a 6DoF audio renderer in a “3DoF position” in a way that corresponds to a 3DoF audio renderer output signal.
The sound sources/object signals are preferably approximated based on a sound field that is based on the original “audio sources” and reflects the influence of the complex geometries of the corresponding VR/AR/MR environment (e.g., “walls”, structures, reverberations, occlusions, etc.).
That is, the virtual 3DA object signals x3DA for 3DA preferably produce, in a specific 3DoF position, the same sound field, i.e. a sound field (based on the signals x3DA) that contains the effects of the VR environment for the specific 3DoF position(s).
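By way of a non-limiting numeric illustration of Equations (1), (4) and (5), the following Python sketch models each source signal as a single amplitude and the transform A as a set of assumed per-source environment gains (all values are illustrative only):

```python
# Toy model: each source signal is a single amplitude; the transform A bakes
# the (assumed) environment gains at the default 3DoF position into x_3DA,
# and A^-1 restores the "dry" signals.
x = [1.0, 0.5]                # original source signals (assumed values)
env_gain = [0.8, 0.25]        # assumed environment gains at the 3DoF position

def A(x):                     # x_3DA = A(x), cf. Equation (4)
    return [s * g for s, g in zip(x, env_gain)]

def A_inv(x3da):              # x* = A^-1(x_3DA), cf. Equation (5)
    return [s / g for s, g in zip(x3da, env_gain)]

def F_3dof(x3da):             # 3DoF renderer: plays sphere objects as-is
    return sum(x3da)

def F_6dof(sources, gains):   # 6DoF renderer: re-applies environment gains
    return sum(s * g for s, g in zip(sources, gains))

x_3da = A(x)                  # virtual 3DA object signals
x_star = A_inv(x_3da)         # restored "dry" signals, x* == x in this toy case
# Cf. Equation (1): both renderings coincide at the default 3DoF position.
assert abs(F_3dof(x_3da) - F_6dof(x_star, env_gain)) < 1e-9
```

The sketch only demonstrates the algebraic relationship between A, A−1 and the two rendering functions; a real transform A would operate on time/frequency-domain signals and a full environment model.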
The following may be available on the rendering side (e.g., to a decoder that is compliant with a standard such as the MPEG-H or MPEG-I standards):
For 6DoF Audio rendering, additionally there may be 6DoF metadata available at the rendering side for the 6DoF Audio rendering functionality (e.g. to approximate/restore the audio signals x of the one or more audio sources, e.g. based on the 3DoF audio signals x3DA and the 6DoF metadata).
Exemplary aspects of the present disclosure relate to (i) definition of the 3DoF audio objects (e.g. MPEG-H 3DA objects) and/or (ii) recovery (approximation) of the original audio objects.
The audio objects may exemplarily be contained in a 3DoF audio bitstream (such as MPEG-H 3DA BS).
The bitstream may include information regarding object audio signals, object directions, and/or object distances.
An extension container (e.g. of the bitstream such as the MPEG-H 3DA BS) may include at least one of the following metadata: (i) 3DoF (default) position parameters; (ii) 6DoF space description parameters (object coordinates); (iii) (optional) object directionality parameters; (iv) (optional) VR/AR/MR environment parameters; and/or (v) (optional) distance attenuation parameters, occlusion parameters, reverberation parameters, etc.
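By way of a non-limiting illustration, the extension-container metadata items (i)-(v) above may be grouped, e.g., as in the following Python sketch, where all field names are illustrative and not part of any standardized syntax:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SixDoFExtensionMetadata:
    """Hypothetical container grouping the metadata fields (i)-(v);
    field names are illustrative, not part of any standardized syntax."""
    default_3dof_positions: list                  # (i) default 3DoF position(s)
    space_description: dict                       # (ii) 6DoF space / object coordinates
    object_directivity: Optional[list] = None     # (iii) optional directionality
    vr_environment: Optional[dict] = None         # (iv) optional VR/AR/MR environment
    acoustic_params: dict = field(default_factory=dict)  # (v) attenuation, occlusion, reverberation
```

The optional fields default to empty values, mirroring the "(optional)" qualifiers in the enumeration above.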
The present disclosure may provide the following advantages:
Exemplary aspects of the present disclosure may relate to the following signaling in a format compatible with an MPEG standard (e.g. the MPEG-I standard) bitstream:
A 6DoF Audio renderer may specify how to recover the original audio object signals e.g., in an MPEG compatible system (e.g., MPEG-I Audio system).
This proposed concept:
The bitstream BS exemplarily includes a first bitstream part 302 which includes 3DoF encoded audio data (e.g. in a main part or core part of the bitstream). Preferably, the bitstream syntax of the bitstream BS is compatible or compliant with a BS syntax of 3DoF audio rendering, such as e.g. an MPEG-H 3DA bitstream syntax. The 3DoF encoded audio data may be included as payload in one or more packets of the bitstream BS.
As previously described e.g. in connection with
Exemplarily, the BS includes a second bitstream part 303 which includes 6DoF metadata for 6DoF audio encoding (e.g. in a metadata part or extension part of the bitstream). Preferably, the bitstream syntax of the bitstream BS is compatible or compliant with a BS syntax of 3DoF audio rendering, such as e.g. an MPEG-H 3DA bitstream syntax. The 6DoF metadata may be included as extension metadata in one or more packets of the bitstream BS (e.g. in one or more extension containers, which are e.g. already provided by the MPEG-H 3DA bitstream structure).
As previously described e.g. in connection with
Specifically, it is exemplarily illustrated in
Specifically, it is exemplarily illustrated in
Accordingly, without or at least with reduced redundancy in the bitstream, the same bitstream can be used by legacy 3DoF audio renderers for 3DoF audio rendering, which allows for simple and beneficial backwards compatibility, and by novel 6DoF audio renderers for 6DoF audio rendering.
Exemplarily, similar to
For 3DoF audio rendering purposes, the audio signals x of the plural audio sources 207 are transformed so as to obtain 3DoF audio signals (audio objects) on a sphere S around a default 3DoF position 206 (e.g. a listener position in a 3DoF sound field). As above, the 3DoF audio signals are referred to as x3DA and may be obtained by using the transformation function A such that:
x3DA=A(x) Equation No. (6)
In the above expression, x denotes the sound source(s)/object signal(s), x3DA denotes the corresponding virtual 3DA object signals for 3DA producing the same sound field in the default 3DoF position 206, and A denotes the transformation function which generates the audio signals x3DA based on the audio signals x. The inverse transformation function A−1 may be used to restore/approximate the sound source signals for 6DoF audio rendering, as discussed above and further below. Note that A∘A−1 = 1 and A−1∘A = 1, or at least A∘A−1 ≈ 1 and A−1∘A ≈ 1.
In a general way, the transformation function A may be regarded as a mapping/projection function that projects or at least maps the audio signals x onto the sphere S surrounding the default 3DoF position 206 in some exemplary aspects of the present disclosure.
It is to be further noted that 3DoF audio rendering is not aware of a VR environment (such as existing walls 203, or the like, or other structures, which may lead to attenuation, reverberations, occlusion effects, or the like). Accordingly, the transformation function A may preferably include effects based on such VR environmental characteristics.
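By way of a non-limiting illustration, the mapping onto the sphere S and an assumed environment model for the transformation function A (1/r distance attenuation plus a simplistic occlusion gain) may be sketched as follows; the concrete gain values and the occlusion factor are assumptions for illustration only:

```python
import math

def project_to_sphere(src_pos, default_pos, radius=1.0):
    """Map a source position onto the sphere S around the default 3DoF
    position, preserving its direction; returns (object position, distance)."""
    d = [s - c for s, c in zip(src_pos, default_pos)]
    dist = math.sqrt(sum(v * v for v in d))
    unit = [v / dist for v in d]
    return [c + radius * u for c, u in zip(default_pos, unit)], dist

def transform_A(sample, dist, occluded, ref_dist=1.0):
    """Assumed environment model for A: 1/r distance attenuation plus a
    fixed occlusion gain when a wall lies between source and 3DoF position."""
    gain = ref_dist / max(dist, ref_dist)
    if occluded:
        gain *= 0.3   # assumed, simplistic occlusion attenuation
    return sample * gain
```

A source four units away in the +y direction is thus placed one unit away on the sphere in the same direction, with its environment-dependent gain baked into the 3DoF object signal.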
By using the inverse transformation function A−1 and the approximated 3DoF audio signals x3DA obtained as in
x*=A−1(x3DA). Equation No. (7)
Accordingly, the audio signals x* of the audio objects 320 in
The audio signals x* of the audio objects 320 in
When the listener position of the listener is assumed to be at the position 206 (same position as default 3DoF position), the 6DoF audio rendering renders the same sound field as the 3DoF audio rendering based on the audio signals x3DA.
Accordingly, the 6DoF rendering F6DoF(x*) at the default 3DoF position being the assumed listener position is equal (or at least approximately equal) to the 3DoF rendering F3DoF(x3DA).
Furthermore, if the listener position is shifted, e.g. to position 206′ in
As another example, a third listener position 206″ may be assumed and the sound field generated in the 6DoF audio rendering becomes different specifically for the upper left audio signal, which is not obstructed by wall 203 for the third listener position 206″. Preferably, this becomes possible, because the inverse function A−1 restores the original sound source (without environmental effects such as VR environment characteristics).
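By way of a non-limiting illustration, re-rendering a restored “dry” source signal x* at an arbitrary listener position may be sketched as follows, assuming the same simplistic 1/r attenuation and occlusion model as on the encoder side and a hypothetical geometry predicate `wall_blocks` supplied by the VR scene description:

```python
import math

def render_6dof(dry_sample, src_pos, listener_pos, wall_blocks):
    """Re-render a restored ("dry") source x* at an arbitrary listener
    position; wall_blocks is an assumed geometry predicate from the scene."""
    dist = math.dist(src_pos, listener_pos)
    gain = 1.0 / max(dist, 1.0)
    if wall_blocks(src_pos, listener_pos):
        gain *= 0.3   # assumed occlusion attenuation, as on the encoder side
    return dry_sample * gain

# Example: a wall occludes the source only when the listener is at the origin.
wall = lambda s, l: l == (0.0, 0.0, 0.0)
at_default = render_6dof(1.0, (0.0, 4.0, 0.0), (0.0, 0.0, 0.0), wall)  # occluded
at_shifted = render_6dof(1.0, (0.0, 4.0, 0.0), (0.0, 3.0, 0.0), wall)  # not occluded
```

Moving the listener past the occluding geometry increases the rendered gain, which is only possible because A−1 restored the source signal free of the environment effects baked in at the default 3DoF position.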
In step S801, the method (e.g. at a decoder side) receives original audio signal(s) x of one or more audio sources.
In step S802, the method (optionally) determines environment characteristics (such as room shape, walls, wall sound reflection characteristics, objects, obstacles, etc.) and/or determines parameters (parametrizing effects such as attenuation, gain, occlusion, reverberations, etc.).
In step S803, the method (optionally) determines a parametrization of a transformation function A, e.g. based on the results of step S802. Preferably, step S803 provides a parametrized or pre-set transformation function A.
In step S804, the method transforms the original audio signal(s) x of one or more audio sources into corresponding one or more approximated 3DoF audio signal(s) x3DA based on the transformation function A.
In step S805, the method determines 6DoF metadata (which may include one or more 3DoF positions, VR environmental information, and/or parameters and parametrization of environmental effects such as attenuation, gain, occlusion, reverberations, etc.).
In step S806, the method includes (embeds) the 3DoF audio signal(s) x3DA into a first bitstream part (or multiple first bitstream parts).
In step S807, the method includes (embeds) the 6DoF metadata into a second bitstream part (or multiple second bitstream parts).
Then, in step S808, the method continues to encode the bitstream based on the first and second bitstream parts to provide the encoded bitstream that includes the 3DoF audio signal(s) x3DA in the first bitstream part (or multiple first bitstream parts) and the 6DoF metadata in the second bitstream part (or multiple second bitstream parts).
The encoded bitstream can then be provided to a 3DoF decoder/renderer for 3DoF audio rendering based on the 3DoF audio signal(s) x3DA in the first bitstream part (or multiple first bitstream parts) only, or to a 6DoF decoder/renderer for 6DoF audio rendering based on the 3DoF audio signal(s) x3DA in the first bitstream part (or multiple first bitstream parts) and the 6DoF metadata in the second bitstream part (or multiple second bitstream parts).
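The encoding flow of steps S801-S808 can be sketched as follows. The container format (JSON here), the helper names, and the scalar-gain parametrization of A are purely illustrative assumptions standing in for the actual bitstream syntax, which is not reproduced in this sketch.

```python
import json

def transform_A(x, gain):                     # S804: x -> x3DA (toy model of A)
    return [gain * sample for sample in x]

def encode_bitstream(x, env):
    """S801: the original audio signal(s) x are received as input."""
    gain = env["attenuation"]                 # S802/S803: parametrize A from
                                              # the VR environment characteristics
    x3da = transform_A(x, gain)               # S804: approximated 3DoF signals
    metadata_6dof = {                         # S805: 6DoF metadata
        "default_3dof_position": env["default_3dof_position"],
        "attenuation": gain,
    }
    return json.dumps({                       # S808: encode the bitstream from
        "first": {"x3da": x3da},              # S806: first bitstream part(s)
        "second": {"6dof": metadata_6dof},    # S807: second bitstream part(s)
    })

bitstream = encode_bitstream(
    x=[0.2, -0.4],
    env={"attenuation": 0.5, "default_3dof_position": [0.0, 0.0]},
)
decoded = json.loads(bitstream)
assert decoded["first"]["x3da"] == [0.1, -0.2]
assert decoded["second"]["6dof"]["attenuation"] == 0.5
```

Keeping the x3DA signals and the 6DoF metadata in separate parts is what lets a legacy decoder consume the first part while ignoring the second.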
In step S901, the encoded bitstream that includes the 3DoF audio signal(s) x3DA in the first bitstream part (or multiple first bitstream parts) and the 6DoF metadata in the second bitstream part (or multiple second bitstream parts) is received.
In step S902, the 3DoF audio signal(s) x3DA is/are obtained from the first bitstream part (or multiple first bitstream parts). This can be done by the 3DoF decoder/renderer and also the 6DoF decoder/renderer.
Then, if the decoder/renderer is a legacy apparatus for 3DoF audio rendering purposes (or a new 3DoF/6DoF decoder/renderer switched to a 3DoF audio rendering mode), the method proceeds with step S903, in which the 6DoF metadata is discarded/neglected, and then proceeds to the 3DoF audio rendering operation to render the 3DoF audio based on the 3DoF audio signal(s) x3DA obtained from the first bitstream part (or multiple first bitstream parts).
That is, backwards compatibility is advantageously guaranteed.
On the other hand, if the decoder/renderer is for 6DoF audio rendering purposes (such as a new 6DoF decoder/renderer or a 3DoF/6DoF decoder/renderer switched to a 6DoF audio rendering mode), then the method proceeds with step S905 to obtain the 6DoF metadata from the second bitstream part(s).
In step S906, the method approximates/restores the audio signals x* of the audio objects/sources from the 3DoF audio signal(s) x3DA obtained from the first bitstream part (or multiple first bitstream parts) based on the 6DoF metadata obtained from the second bitstream part (or multiple second bitstream parts) and the inverse transformation function A−1.
Then, in step S907, the method proceeds to perform the 6DoF audio rendering based on the approximated/restored audio signals x* of the audio objects/sources and based on the listener position (which may be variable within the VR environment).
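The decoding flow of steps S901-S907, including the backwards-compatible branch for a legacy 3DoF apparatus, may be sketched as below. The bitstream layout and the scalar-gain stand-ins for A−1 and for the rendering function are illustrative assumptions, not the standardized syntax.

```python
def decode_and_render(bitstream, mode, listener_gain=None):
    x3da = bitstream["first"]["x3da"]                 # S902: obtain x3DA
    if mode == "3dof":
        # S903: a legacy 3DoF apparatus discards/neglects the 6DoF metadata
        # and renders directly from the first bitstream part(s)
        return x3da
    meta = bitstream["second"]["6dof"]                # S905: obtain 6DoF metadata
    x_star = [s / meta["attenuation"] for s in x3da]  # S906: x* = A^-1(x3DA)
    # S907: 6DoF rendering at the (variable) listener position; a single
    # listener-dependent gain stands in for the full rendering function
    return [listener_gain * s for s in x_star]

bitstream = {                                         # S901: received bitstream
    "first": {"x3da": [0.1, -0.2]},
    "second": {"6dof": {"attenuation": 0.5}},
}
# Legacy 3DoF rendering uses only the first bitstream part:
assert decode_and_render(bitstream, "3dof") == [0.1, -0.2]
# 6DoF rendering with the listener at the default 3DoF position (the
# listener gain then equals the encoder-side attenuation) reproduces it:
assert decode_and_render(bitstream, "6dof", listener_gain=0.5) == [0.1, -0.2]
```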
In the exemplary aspects above, there can be provided efficient and reliable methods, apparatus and data representations and/or bitstream structures for 3D audio encoding and/or 3D audio rendering, which allow efficient 6DoF audio encoding and/or rendering, beneficially with backwards compatibility for 3DoF audio rendering, e.g. according to the MPEG-H 3DA standard. Specifically, it is possible to provide such data representations and/or bitstream structures, as well as corresponding encoding and/or rendering apparatus, for efficient 6DoF audio encoding and/or rendering with backwards compatibility for 3DoF audio rendering, e.g. according to the MPEG-H 3DA standard.
The methods and systems described herein may be implemented as software, firmware and/or hardware. Certain components may be implemented as software running on a digital signal processor or microprocessor. Other components may be implemented as hardware and/or as application specific integrated circuits. The signals encountered in the described methods and systems may be stored on media such as random access memory or optical storage media. They may be transferred via networks, such as radio networks, satellite networks, wireless networks or wireline networks, e.g. the Internet. Typical devices making use of the methods and systems described herein are portable electronic devices or other consumer equipment which are used to store and/or render audio signals.
Example implementations of methods and apparatus according to the present disclosure will become apparent from the following enumerated example embodiments (EEEs), which are not claims.
EEE1 exemplarily relates to a method for encoding audio comprising audio source signals, 3DoF related data and 6DoF related data comprising: encoding, e.g. by an audio source apparatus, such as in particular an encoder, the audio source signals that approximate a desired sound field in 3DoF position(s) to determine 3DoF data; and/or encoding, e.g. by the audio source apparatus, such as in particular the encoder, the 6DoF related data to determine 6DoF metadata, wherein the metadata may be used to approximate original audio source signals for 6DoF rendering.
EEE2 exemplarily relates to the method of EEE1, wherein the 3DoF data relates to at least one of object audio signals, object directions, and object distances.
EEE3 exemplarily relates to the method of EEE1 or EEE2, wherein the 6DoF data relates to at least one of the following: 3DoF (default) position parameters, 6DoF space description (object coordinates) parameters, object directionality parameters, VR environment parameters, distance attenuation parameters, occlusion parameters, and reverberation parameters.
EEE4 exemplarily relates to a method for transporting data, in particular 3DoF and 6DoF renderable audio data, the method comprising: transporting, e.g. in an audio bitstream syntax, audio source signals that may preferably approximate a desired sound field in 3DoF position(s), e.g. when decoded by a 3DoF audio system; and/or transporting, e.g. in an extension part of an audio bitstream syntax, 6DoF related metadata for approximating and/or restoring original audio source signals for 6DoF rendering; wherein the 6DoF related metadata may be parametric data and/or signal data.
EEE5 exemplarily relates to the method of EEE4, wherein the audio bitstream syntax, e.g. including the 3DoF metadata and/or the 6DoF metadata, is compliant with at least a version of the MPEG-H Audio standard.
EEE6 exemplarily relates to a method for generating a bitstream, the method comprising: determining 3DoF metadata that is based on audio source signals that approximate a desired sound field in 3DoF position(s); determining 6DoF related metadata, wherein the metadata may be used to approximate original audio source signals for 6DoF rendering; and/or inserting the audio source signal and the 6DoF related metadata into the bitstream.
EEE7 exemplarily relates to a method for audio rendering, said method comprising: preprocessing of 6DoF metadata of approximated audio signals x* of original audio signals x in 3DoF position(s), wherein the 6DoF rendering may provide the same output as 3DoF rendering of transported audio source signals x3DA that approximate a desired sound field in 3DoF position(s).
EEE8 exemplarily relates to the method of EEE7, wherein the audio rendering is determined based on:
F6DoF(x*)≈F3DoF(x3DA)→F6DoF(x) for 3DoF
wherein F6DoF(x*) relates to an audio rendering function for 6DoF listener position(s), F3DoF(x3DA) relates to audio rendering functions for 3DoF listener position(s), x3DA are audio signals that contain the effects of the VR environment for specific 3DoF position(s), and x* relates to approximated audio signals.
EEE9 exemplarily relates to the method of EEE8, wherein the approximated audio signals x* of original audio signals x are based on:
x*:=A−1(x3DA)
wherein A−1 relates to an inverse of an approximation function A.
EEE10 exemplarily relates to the method of EEE8 or EEE9, wherein metadata used to obtain the approximated audio signals x* of the original audio source signals x using the approximation method A is defined based on:
x3DA:=A(x), ∥F3DoF(x3DA)−F6DoF(x) for 3DoF∥→min
wherein the amount of the metadata is smaller than the amount of audio data needed for transporting the original audio source signals x.
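Under strong simplifying assumptions (A reduces to a single scalar gain per source, and the rendering functions are identities), the minimization condition of EEE10 becomes an ordinary least-squares fit, which the following sketch illustrates. This is an illustration of the defining condition only, not a standardized procedure.

```python
# Minimize ||g*x - y||^2 over a scalar gain g; the closed-form least-squares
# solution is g = <x, y> / <x, x>.

def fit_gain(x, y):
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

x = [1.0, 2.0, -1.0]        # original audio source signal x
y = [0.5, 1.0, -0.5]        # reference rendering at the 3DoF position
g = fit_gain(x, y)          # metadata: one gain instead of a full signal,
                            # i.e. far less data than transporting x itself
x3da = [g * s for s in x]   # x3DA := A(x)

assert abs(g - 0.5) < 1e-12
assert all(abs(a - b) < 1e-12 for a, b in zip(x3da, y))
```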
Exemplary aspects and embodiments of the present disclosure may be implemented in hardware, firmware, or software, or a combination thereof (e.g., as a programmable logic array). Unless otherwise specified, the algorithms or processes included as part of the disclosure are not inherently related to any particular computer or other apparatus. In particular, various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus (e.g., integrated circuits) to perform the required method steps. Thus, the disclosure may be implemented in one or more computer programs executing on one or more programmable computer systems (e.g., an implementation of any of the elements of the figures) each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.
Each such program may be implemented in any desired computer language (including machine, assembly, or high level procedural, logical, or object oriented programming languages) to communicate with a computer system. In any case, the language may be a compiled or interpreted language.
For example, when implemented by computer software instruction sequences, various functions and steps of embodiments of the disclosure may be implemented by multithreaded software instruction sequences running in suitable digital signal processing hardware, in which case the various devices, steps, and functions of the embodiments may correspond to portions of the software instructions.
Each such computer program is preferably stored on or downloaded to a storage media or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer system to perform the procedures described herein. The inventive system may also be implemented as a computer-readable storage medium, configured with (i.e., storing) a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.
A number of exemplary aspects and exemplary embodiments of the invention of the present disclosure have been described above. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention of the present disclosure. Numerous modifications and variations of the present invention are possible in light of the above teachings. It is to be understood that within the scope of the appended claims, the invention of the present disclosure may be practiced otherwise than as specifically described herein.
This application claims the benefit of U.S. provisional application Ser. No. 62/655,990 filed on 11 Apr. 2018, which application is incorporated herein by reference in its entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2019/058955 | 4/9/2019 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2019/197404 | 10/17/2019 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
9477307 | Chizeck | Oct 2016 | B2 |
9847088 | Peters | Dec 2017 | B2 |
9860669 | De Bruijn | Jan 2018 | B2 |
9875745 | Peters | Jan 2018 | B2 |
10650590 | Topiwala | May 2020 | B1 |
11232643 | Stevens | Jan 2022 | B1 |
20150149187 | Kastner | May 2015 | A1 |
20150213807 | Breebaart | Jul 2015 | A1 |
20160104494 | Kim | Apr 2016 | A1 |
20170011750 | Liu | Jan 2017 | A1 |
20170110140 | Peters | Apr 2017 | A1 |
20170289720 | Tsukagoshi | Oct 2017 | A1 |
20170366914 | Stein | Dec 2017 | A1 |
20180068664 | Seo | Mar 2018 | A1 |
20180075659 | Browy | Mar 2018 | A1 |
20190235729 | Day | Aug 2019 | A1 |
20190237044 | Day | Aug 2019 | A1 |
20200228780 | Kim | Jul 2020 | A1 |
20210112287 | Lee | Apr 2021 | A1 |
20210168550 | Terentiv | Jun 2021 | A1 |
Number | Date | Country |
---|---|---|
2020527746 | Sep 2020 | JP |
2016204581 | Dec 2016 | WO |
2017134214 | Aug 2017 | WO |
Entry |
---|
"Draft MPEG-I Architecture and Requirements" 122nd MPEG Meeting; Apr. 16, 2018-Apr. 20, 2018; San Diego; (Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG11), No. N17647, Apr. 21, 2018, XP030024274; p. 1-p. 2, figure 1; points '1.10 Interoperability between 3DoF and 6DoF platforms' and '1.11 Media Profiles'; p. 5-p. 6, section '3 What is standardized in MPEG-I audio'; p. 6-p. 7, point 7 of p. 3. |
Bleidt, R. et al. "Development of the MPEG-H TV Audio System for ATSC 3.0" IEEE Transactions on Broadcasting, vol. 63, No. 1, Mar. 1, 2017, pp. 202-236. |
Domanski, M. et al. "Immersive Visual Media, MPEG-I: 360 Video, Virtual Navigation and Beyond" IEEE, May 22-24, 2017. |
Herre, J. et al. "Thoughts on MPEG-I, AR/VR Audio Evaluation" MPEG Meeting Jul. 2017, Motion Picture Expert Group or ISO/IEC JTC1/SC29/WG11. |
Lafruit, G. et al. "Requirements on 6DoF (v1)" Jul. 2017. |
Murtaza, A. et al. "ISO/MPEG-H 3D Audio: SAOC 3D Decoding and Rendering" AES Convention, Oct. 23, 2015. |
Yip, Eric "MPEG-I Immersive Media: Towards an Immersive Media Era" Nov. 29, 2017. |
ISO/IEC 23008-3:2015 (MPEG-H 3D Audio Specification, First Edition). |
Number | Date | Country | |
---|---|---|---|
20210168550 A1 | Jun 2021 | US |
Number | Date | Country | |
---|---|---|---|
62655990 | Apr 2018 | US |