Method for describing the composition of audio signals

Information

  • Patent Grant
  • Patent Number
    9,002,716
  • Date Filed
    Friday, November 28, 2003
  • Date Issued
    Tuesday, April 7, 2015
Abstract
Method for describing the composition of audio signals, which are encoded as separate audio objects. The arrangement and the processing of the audio objects in a sound scene is described by nodes arranged hierarchically in a scene description. A node specified only for spatialization on a 2D screen using a 2D vector describes a 3D position of an audio object using said 2D vector and a 1D value describing the depth of said audio object. In a further embodiment a mapping of the coordinates is performed, which enables the movement of a graphical object in the screen plane to be mapped to a movement of an audio object in the depth perpendicular to said screen plane.
Description

This application claims the benefit, under 35 U.S.C. §365 of International Application PCT/EP03/13394, filed Nov. 28, 2003, which was published in accordance with PCT Article 21(2) on Jun. 17, 2004 in English and which claims the benefit of European patent application No. 02026770.4, filed Dec. 2, 2002 and European patent application No. 03016029.5, filed Jul. 15, 2003.


The invention relates to a method and to an apparatus for coding and decoding a presentation description of audio signals, especially for the spatialization of MPEG-4 encoded audio signals in a 3D domain.


BACKGROUND

The MPEG-4 Audio standard (ISO/IEC 14496-3:2001), together with the MPEG-4 Systems standard (ISO/IEC 14496-1:2001), facilitates a wide variety of applications by supporting the representation of audio objects. For the combination of the audio objects, additional information, the so-called scene description, determines the placement in space and time and is transmitted together with the coded audio objects.


For playback the audio objects are decoded separately and composed using the scene description in order to prepare a single soundtrack, which is then played to the listener.


For efficiency, the MPEG-4 Systems standard ISO/IEC 14496-1:2001 defines a way to encode the scene description in a binary representation, the so-called Binary Format for Scene Description (BIFS). Correspondingly, audio scenes are described using so-called AudioBIFS.


A scene description is structured hierarchically and can be represented as a graph, wherein the leaf nodes of the graph form the separate objects and the other nodes describe the processing, e.g. positioning, scaling or effects. The appearance and behavior of the separate objects can be controlled using parameters within the scene description nodes.







INVENTION

The invention is based on the recognition of the following fact. The above-mentioned version of the MPEG-4 Audio standard defines a node named “Sound” which allows spatialization of audio signals in a 3D domain. A further node, named “Sound2D”, only allows spatialization on a 2D screen. The use of the “Sound” node in a 2D graphical player is not specified, due to different implementations of the properties in 2D and 3D players. However, from games, cinema and TV applications it is known that it makes sense to provide the end user with a fully spatialized “3D-Sound” presentation, even if the video presentation is limited to a small flat screen in front. This is not possible with the defined “Sound” and “Sound2D” nodes.


In principle, the inventive coding method comprises the generation of a parametric description of a sound source including information which allows spatialization in a 2D coordinate system. The parametric description of the sound source is linked with the audio signals of said sound source. An additional 1D value is added to said parametric description which, in a 2D visual context, allows a spatialization of said sound source in a 3D domain.


Separate sound sources may be coded as separate audio objects and the arrangement of the sound sources in a sound scene may be described by a scene description having first nodes corresponding to the separate audio objects and second nodes describing the presentation of the audio objects. A field of a second node may define the 3D spatialization of a sound source.


Advantageously, the 2D coordinate system corresponds to the screen plane and the 1D value corresponds to depth information perpendicular to said screen plane.


Furthermore, a transformation of said 2D coordinate system values to said 3-dimensional positions may enable the movement of a graphical object in the screen plane to be mapped to a movement of an audio object in the depth perpendicular to said screen plane.


The inventive decoding method comprises, in principle, the reception of an audio signal corresponding to a sound source linked with a parametric description of the sound source. The parametric description includes information which allows spatialization in a 2D coordinate system. An additional 1D value is separated from said parametric description. The sound source is spatialized, in a 2D visual context, in a 3D domain using said additional 1D value.


Audio objects representing separate sound sources may be separately decoded and a single soundtrack may be composed from the decoded audio objects using a scene description having first nodes corresponding to the separate audio objects and second nodes describing the processing of the audio objects. A field of a second node may define the 3D spatialization of a sound source.


Advantageously, the 2D coordinate system corresponds to the screen plane and said 1D value corresponds to depth information perpendicular to said screen plane.


Furthermore, a transformation of said 2D coordinate system values to said 3-dimensional positions may enable the movement of a graphical object in the screen plane to be mapped to a movement of an audio object in the depth perpendicular to said screen plane.


EXEMPLARY EMBODIMENTS

The Sound2D node is defined as follows:

Sound2D {
    exposedField  SFFloat  intensity   1.0
    exposedField  SFVec2f  location    0, 0
    exposedField  SFNode   source      NULL
    field         SFBool   spatialize  TRUE
}
and the Sound node, which is a 3D node, is defined as follows:

Sound {
    exposedField  SFVec3f  direction   0, 0, 1
    exposedField  SFFloat  intensity   1.0
    exposedField  SFVec3f  location    0, 0, 0
    exposedField  SFFloat  maxBack     10.0
    exposedField  SFFloat  maxFront    10.0
    exposedField  SFFloat  minBack     1.0
    exposedField  SFFloat  minFront    1.0
    exposedField  SFFloat  priority    0.0
    exposedField  SFNode   source      NULL
    field         SFBool   spatialize  TRUE
}
In the following, the general term for all sound nodes (Sound2D, Sound and DirectiveSound) will be written in lower case, e.g. ‘sound nodes’.


In the simplest case the Sound or Sound2D node is connected via an AudioSource node to the decoder output. The sound nodes contain the intensity and the location information.


From the audio point of view a sound node is the final node before the loudspeaker mapping. In the case of several sound nodes, the output will be summed up. From the systems point of view the sound nodes can be seen as an entry point for the audio sub graph. A sound node can be grouped with non-audio nodes into a Transform node that will set its original location.
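The summation of the outputs of several sound nodes described above can be sketched as follows. This is a minimal illustration only; the function name and the list-based signal representation are assumptions, not part of the MPEG-4 standard:

```python
def mix_sound_nodes(node_outputs):
    """Sum the outputs of several sound nodes sample by sample.

    node_outputs is a list of equal-length sample sequences, one per
    sound node; the representation is purely illustrative.
    """
    return [sum(samples) for samples in zip(*node_outputs)]

# Two sound nodes contributing to the final soundtrack:
mixed = mix_sound_nodes([[1, 2, 3], [0, -2, 1]])  # -> [1, 0, 4]
```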


With the phaseGroup field of the AudioSource node, it is possible to mark channels that contain important phase relations, as in the case of a “stereo pair”, “multichannel”, etc. A mixed operation of phase-related and non-phase-related channels is allowed. A spatialize field in the sound nodes specifies whether the sound shall be spatialized or not. This applies only to channels which are not members of a phase group.


The Sound2D node can spatialize the sound on the 2D screen. The standard states that the sound should be spatialized on a scene of size 2 m×1.5 m at a distance of one meter. This specification is of limited effect, because the value of the location field is not restricted and the sound can therefore also be positioned outside the screen area.


The Sound and DirectiveSound node can set the location everywhere in the 3D space. The mapping to the existing loudspeaker placement can be done using simple amplitude panning or more sophisticated techniques.
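The simple amplitude panning mentioned above can be sketched for a stereo loudspeaker pair as follows. This is a hedged illustration: the function name, the azimuth parameterization and the two-speaker setup are assumptions for the example, not prescribed by the patent or the standard:

```python
import math

def pan_stereo(sample, azimuth):
    """Constant-power amplitude panning to a stereo loudspeaker pair.

    azimuth ranges from -1.0 (full left) to +1.0 (full right);
    the parameterization is an illustrative assumption.
    """
    # Map azimuth to an angle between 0 and pi/2, so that the squared
    # left and right gains always sum to 1 (constant power).
    theta = (azimuth + 1.0) * math.pi / 4.0
    return math.cos(theta) * sample, math.sin(theta) * sample

# A centred source (azimuth 0) feeds both speakers with equal gain.
left, right = pan_stereo(1.0, 0.0)
```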


Both Sound and Sound2D can handle multichannel inputs and basically have the same functionalities, but the Sound2D node cannot spatialize a sound other than to the front.


A possibility is to add Sound and Sound2D to all scene graph profiles, i.e. add the Sound node to the SF2DNode group.


However, one reason for not including the “3D” sound nodes in the 2D scene graph profiles is that a typical 2D player is not capable of handling 3D vectors (SFVec3f type), as would be required for the direction and location fields of the Sound node.


Another reason is that the Sound node is specially designed for virtual-reality scenes with moving listening points and attenuation attributes for distant sound objects. For this purpose, the ListeningPoint node and the maxBack, maxFront, minBack and minFront fields of the Sound node are defined.


According to one embodiment of the invention, the existing Sound2D node is extended, or a new Sound2Ddepth node is defined. The Sound2Ddepth node could be similar to the Sound2D node, but with an additional depth field:

Sound2Ddepth {
    exposedField  SFFloat  intensity   1.0
    exposedField  SFVec2f  location    0, 0
    exposedField  SFFloat  depth       0.0
    exposedField  SFNode   source      NULL
    field         SFBool   spatialize  TRUE
}
The intensity field adjusts the loudness of the sound. Its value ranges from 0.0 to 1.0 and specifies a factor that is applied during playback of the sound.


The location field specifies the location of the sound in the 2D scene.


The depth field specifies the depth of the sound in the 2D scene using the same coordinate system as the location field. The default value is 0.0 and it refers to the screen position.


The spatialize field specifies whether the sound shall be spatialized. If this flag is set, the sound shall be spatialized with the maximum sophistication possible.


The same rules for multichannel audio spatialization apply to the Sound2Ddepth node as to the Sound (3D) node.


Using the Sound2D node in a 2D scene allows surround sound to be presented as the author recorded it. It is not possible to spatialize a sound other than to the front. Spatializing means moving the location of a monophonic signal due to user interaction or scene updates.


With the Sound2Ddepth node it is possible to spatialize a sound also in the back, at the side or above the listener, if an audio presentation system has the capability to present such features.


The invention is not restricted to the above embodiment where the additional depth field is introduced into the Sound2D node. Also, the additional depth field could be inserted into a node hierarchically arranged above the Sound2D node.


According to a further embodiment, a mapping of the coordinates is performed. An additional field dimensionMapping in the Sound2Ddepth node defines a transformation, e.g. as a 2-row by 3-column vector, used to map the 2D context coordinate system (ccs) from the ancestor's transform hierarchy to the origin of the node.


The node's coordinate system (ncs) will be calculated as follows:

ncs = ccs × dimensionMapping


The location of the node is a 3-dimensional position, merged from the 2D input vector location and the depth value, i.e. {location.x, location.y, depth}, with regard to the ncs.


Example: the node's coordinate-system context is (xi, yi) and dimensionMapping is (1, 0, 0, 0, 0, 1). This leads to ncs=(xi, 0, yi), which enables the movement of an object in the y-dimension to be mapped to a movement of the audio object in depth.


The field ‘dimensionMapping’ may be defined as MFFloat. The same functionality could also be achieved by using the field data type ‘SFRotation’, which is another MPEG-4 data type.
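The coordinate mapping ncs = ccs × dimensionMapping can be sketched in a few lines. The helper name and the row-major layout of the six matrix entries are assumptions for illustration; the example matrix (1, 0, 0, 0, 0, 1) is the one given in the text:

```python
def map_ccs_to_ncs(x, y, dimension_mapping):
    """Map a 2D context-coordinate-system point through the 2x3
    dimensionMapping matrix: ncs = ccs x dimensionMapping.

    dimension_mapping holds the 2x3 matrix row-major as six floats.
    """
    m = dimension_mapping
    # Row vector (x, y) times a 2x3 matrix yields a 3D position.
    return (x * m[0] + y * m[3],
            x * m[1] + y * m[4],
            x * m[2] + y * m[5])

# With (1, 0, 0, 0, 0, 1) a 2D point (xi, yi) maps to (xi, 0, yi),
# so graphical movement in y becomes audio movement in depth.
ncs = map_ccs_to_ncs(2.0, 5.0, (1, 0, 0, 0, 0, 1))  # -> (2.0, 0.0, 5.0)
```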


The invention allows the spatialization of the audio signal in a 3D domain, even if the playback device is restricted to 2D graphics.

Claims
  • 1. A method using an audio processing apparatus for spatialization of a sound object, the sound object having associated a first parameter, 2D location information and depth information, wherein the first parameter defines whether or not the sound object is to be spatialized, the 2D location information comprises second and third parameters that define the 2D location of the sound object in terms of height and width respectively on a 2D plane, and the depth information comprises a fourth parameter, the method comprising steps of using an audio processing apparatus to determine from the first parameter that the sound object is to be spatialized; transforming the 2D location information and the depth information of the sound object to a 3D coordinate system, wherein said second parameter defining the height of the 2D location is mapped to audio depth information perpendicular to said 2D plane, said third parameter defining the width of the 2D location is mapped to the width information in the 3D coordinate system, and said fourth parameter is mapped to the height in the 3D coordinate system; and spatializing the sound according to the resulting 3D location information.
  • 2. Method according to claim 1, wherein the spatialization is performed according to a scene description containing a parametric description of sound sources corresponding to the audio signals, wherein the parametric description has a hierarchical graph structure with nodes, wherein a first node comprises said 2D location information and a second node comprises at least said defining depth information, the second node being hierarchically arranged above said first node.
  • 3. Method according to claim 2, wherein the second node comprises further data defining said step of transforming.
  • 4. Method according to claim 2, wherein the first node further comprises an intensity parameter for adjusting the loudness of a sound, and a source parameter.
  • 5. Method according to claim 2, wherein a soundtrack is composed from a plurality of sound objects, and wherein each of the sound objects is decoded separately.
  • 6. Method according to claim 1, wherein said 2D plane in which the sound object is located corresponds to the screen plane of a video related to the sound object.
  • 7. Method according to claim 6, wherein said transforming enables mapping of a vertical movement of a graphical object in the screen plane to a movement of a corresponding audio object in the depth, perpendicular to said screen plane.
  • 8. Method according to claim 1, wherein the mapping is performed according to a 2×3 matrix or corresponding rotation.
Priority Claims (2)
Number Date Country Kind
02026770 Dec 2002 EP regional
03016029 Jul 2003 EP regional
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/EP03/13394 11/28/2003 WO 00 5/27/2005
Publishing Document Publishing Date Country Kind
WO2004/051624 6/17/2004 WO A
US Referenced Citations (26)
Number Name Date Kind
5208860 Lowe et al. May 1993 A
5714997 Anderson Feb 1998 A
5943427 Massie et al. Aug 1999 A
6009394 Bargar et al. Dec 1999 A
6694033 Rimell et al. Feb 2004 B1
6829017 Phillips Dec 2004 B2
6829018 Lin et al. Dec 2004 B2
6983251 Umemoto et al. Jan 2006 B1
7113610 Chrysanthakopoulos Sep 2006 B1
7116789 Layton et al. Oct 2006 B2
7190794 Hinde Mar 2007 B2
7266207 Wilcock et al. Sep 2007 B2
7356465 Tsingos et al. Apr 2008 B2
7533346 McGrath et al. May 2009 B2
7894610 Schmidt et al. Feb 2011 B2
8020050 DeCusatis et al. Sep 2011 B2
8437868 Spille et al. May 2013 B2
20020103553 Phillips Aug 2002 A1
20030053680 Lin et al. Mar 2003 A1
20030095669 Belrose et al. May 2003 A1
20040141622 Squibbs Jul 2004 A1
20050114121 Tsingos et al. May 2005 A1
20060165238 Spille et al. Jul 2006 A1
20060174267 Schmidt Aug 2006 A1
20070140501 Schmidt et al. Jun 2007 A1
20140037117 Tsingos et al. Feb 2014 A1
Foreign Referenced Citations (1)
Number Date Country
2001-169309 Jun 2001 JP
Non-Patent Literature Citations (5)
Entry
Potard et al., “Using XML Schemas to Create and Encode Interactive 3-D Audio Scenes for Multimedia and Virtual Reality Applications”, Distributed Communities on the Web Lecture Notes in Computer Science, vol. 2468, 2002, pp. 193 to 203.
E.D. Scheirer et al.; “Audiobifs: Describing Audio Scenes With the MPEG-4 Multimedia Standard” IEEE Transactions on Multimedia, IEEE Service Center, Piscataway, NJ US, vol. 1, No. 3. Sep. 1999, pp. 237-250.
Search Report Dated May 14, 2004.
The MPEG-4 Book, edited by Fernando Pereira and Touradj Ebrahimi. IMSC Press Multimedia Series/Andrew Tescher, Series Editor (total pp. 16) (pp. 103-109, 112-117 and 565), (2002).
Information technology—Coding of audio-visual objects—Part 1: Systems (pp. 852) International Standard, Aug. 2001, ISO/IEC 14496-1:2001#38.
Related Publications (1)
Number Date Country
20060167695 A1 Jul 2006 US