EARLY REFLECTION CONCEPT FOR AURALIZATION

Abstract
The present application concerns early reflection processing concepts for auralization. Embodiments relate to apparatuses and methods for sound rendering considering early reflections and to apparatuses and methods for determining an early reflection pattern.
Description

The present application is concerned with early reflection processing concepts for auralization.


BACKGROUND OF THE INVENTION

A room impulse response (RIR) describes the relationship between a sound source in an acoustic environment (a room) and the receiver (i.e. the listener). It specifies the room's response to a unit impulse in time domain and corresponds to the room transfer function in frequency domain. It consists of the direct sound path, the early reflections (ERs) and the diffuse late reverberation.


In binaural (or loudspeaker) rendering for virtual and augmented reality (VR/AR) applications, the room impulse response from a particular source and listener location may change considerably. In 6-Degrees-of-Freedom (6DOF) VR/AR applications, the listener can usually move freely within the entire scene, resulting in a permanently changing room impulse response. Consequently, a tremendous amount of computation has to be spent to determine each reflection from the source to the listener, taking into consideration the geometry of walls, occluding objects and other effects to compute a physically accurate reflection pattern.


It is the observation of this invention that the exact acoustic reproduction of the early reflection (ER) pattern in a room is not required to make a perceptually convincing rendering and that this can be done in a way that largely abstracts from the exact geometric details of the room. In this way, a lot of computation can be saved. In case the reflection pattern has to be transmitted from an encoder to a renderer, a considerable part of the side information associated with efficiently computing reflections depending on the listener position can be saved as compared to the state of the art in regular geometry-based rendering.


The document [1] concerns a replacement of exactly calculated “real” ERs by a more general Simple ER pattern. The idea was to find, describe and simulate the perceptually orthogonal parameters describing small or large sound sources (e.g. an orchestra) on a stage of a large room (e.g. a concert hall), [2, 3], and to play them back over a loudspeaker setup (e.g. stereo) or binaurally over headphones. A composer or sound engineer was able to use these parameters (like source presence, source warmth, source brilliance, room presence, running reverberation, envelopment and reverberance) to set up a scene. The SPAT software has been used for a long time for such productions [4]. The approach was also adopted in the ISO MPEG-4 standardization [5].


In a dynamic 6DOF environment, the acoustic description of rooms (dimensions, RT60, . . . ) can vary considerably. The source and receiver positions are fully free and are calculated in real-time for auralization. Perceptual parameters, which are highly dependent on these changing physical setups, cannot be defined as constants and are therefore not appropriate for this task.


The invention takes the new approach of using just a few basic physical parameters of the environment to select and adjust a simple basic ER pattern. This has the following advantages: No specific sound engineering background is needed to define the parameters; they come directly from the physical model. The used Simple ER pattern adapts to different room sizes and different RT60 values. Even for outdoor environments, Simple ER patterns are defined, which was not the case in SPAT. The perceptual degradation with this approach relative to a full physically correct simulation is limited because the human auditory system is not able to analyze the fine structure of the early reflections, e.g. [6].


In the following, newly invented Simple ER patterns are presented; room acoustic parameters are used, like RT60, predelay time, room volume or room dimensions, and the frequency dependency of RT60. The ER pattern is specifically defined to produce a smooth transition between the direct sound and the late reverb. It should be frequency neutral and independent of the proximity of the source and receiver to walls and openings.


The idea is to produce a plausible and convincing perception for the listener, fitting the overall room acoustical parameters. This is enough for most cases, because the listener has no possibility of a direct comparison with the “real” physically exact ERs.


The computationally expensive exact geometrical calculation of ERs, especially with visibility checks, can be avoided, especially in applications like real-time auditory virtual environments and augmented reality. The exact calculation of “real” ERs is also sometimes difficult and prone to artifacts from appearing and disappearing ERs, depending on the exact (and time-varying) location of the source and the listener. This can be avoided by using a constant ER pattern, which has been computed once when entering the scene or when moving from one acoustic environment to another environment, defined by different acoustic parameters.


The invention takes advantage of an encoder-bitstream-renderer scenario. In one case (a), a default Simple ER pattern can be calculated with the room acoustical parameters available in the renderer alone. These parameters are adjusted in real-time by the source-listener distance and the azimuth angle between them. In case (b), the geometry of the scene is pre-analyzed in a more advanced way in the encoder. Then the Simple ER pattern of a few ERs is pre-calculated in the encoder and transmitted to the renderer in a bitstream. There it is adjusted in the same way as in case (a) by the listener distance and angle (or other information that is available at the time of rendering). These two cases give full flexibility for an open, future-proof approach, in which further analysis knowledge can be incorporated into the encoder later.


Motivation

A room impulse response (RIR) describes the relationship between a sound source in an acoustic environment (a room) and the receiver (the listener) and specifies the room's response to a unit impulse, see e.g. FIG. 21. It consists of the direct sound path, the early reflections (ERs) and the diffuse late sound part. FIG. 21 shows an example for a monophonic RIR with 2nd order ERs, generated with the acoustical room simulation program RAVEN [7].


Especially in complex physical environments/rooms, defined by many surfaces, the calculation of the geometrically correct ERs with the needed visibility checks (“is this source in direct line-of-sight to the listener?”) is very time consuming. On the other hand, it is known that human auditory perception suppresses a lot of details about the ERs with regard to the direct sound (law of the first wave front, precedence effect, scene analysis, [8, 9]) and that therefore a precise modeling of the ER part of the impulse response is in many cases not necessary to achieve a convincing rendering quality, e.g. [6]. The auditory system uses the ERs to determine or refine several perceptual attributes. Among them are:

    • Position of the source relative to the receiver
    • Source-receiver distance
    • Auditory source width (ASW)
    • Level and frequency dependent absorption of boundaries
    • Proximity to close boundaries


BACKGROUND OF THE INVENTION

There are several approaches known to simplify the ER calculation. The first one is to avoid the calculation of the ERs completely, i.e. to render only direct sound and late reverb without simulated ERs, see FIG. 22. The late reverb starts at the so-called predelay time. FIG. 22 shows a RIR with direct sound and late reverb starting at predelay time 0.13 s, no ER.


The next possibility is to calculate only geometrically exact 1st order reflections, see FIG. 23. In a shoebox-shaped room this reduces the number of ERs from about 27 to 6. FIG. 23 shows a RIR with 1st order reflections and late reverb (left), top view (right). The square (red) is the sound source, the circle (blue) is the receiver, the line (red) connecting the circle and the square is the direct sound, further lines (blue) coming out of the circle are the reflections; the length is proportional to the logarithmic level.


The next possibility is just two ERs side by side with the direct sound, see FIG. 24. The influence of side reflections on ASW is known from concert hall acoustics, [11]. Note that this is very simple to compute compared to a true geometric simulation. FIG. 24 shows a RIR with two reflections side by side to the direct sound (left), top view (right).


In the next pattern, the two side reflections are replaced by four fixed, source-position-independent reflection sequences at ±45° and ±135° around the direct sound, each consisting of 4 reflections, see FIG. 25. This pattern is inspired by the SPAT algorithm [1, 5], but it does not implement all details, especially not the effect of all the input parameters. The parameters for this pattern are defined specifically to produce perceptual receiver attributes like ASW. No room acoustic properties, besides RT60, are used for it. FIG. 25 shows a RIR with “SPAT” pattern (left), top view (right). The crosses (green and blue) are ERs.


The previously described approach is designed such that the input parameters, which define the ER pattern, are perceptual parameters. They are meant to describe the listener's perception caused by the ERs. The shortcoming is that the approach only vaguely adapts to room-related parameters. Sound engineering knowledge and experience are needed to set the perceptually defined parameters, like source presence, source warmth, source brilliance, room presence, running reverberation, envelopment and reverberance. This is a clear disadvantage for designers who define the physical properties of a real-time VR/AR system and have no perceptual sound engineering experience. Especially for VR applications, the geometry of the virtual physical space is often known quite well as a by-product of the visualization process. Also, no ER pattern for outdoor environments is known for the SPAT algorithm.


The object of the invention is to avoid the shortcomings of the state of the art by explicitly using room acoustical and physical parameters to define the ER pattern. Furthermore, different patterns are defined depending on the room properties, and are even suitable for outdoor environments (where a precise description of the geometry is difficult). The patterns have different numbers of ERs dependent on room size or other physical parameters.


The new ER patterns feature

    • perceptually plausible rendering compared to “real” ERs
    • reduced computational complexity compared to a “real” ER calculation
    • adaptation of the ER pattern dependent on the physical room properties
    • no need for specific sound engineering skill and experience to set the needed parameters
    • distinct ER patterns for indoor and outdoor environments
    • no additional side information needed (for an encoder/bitstream/renderer scenario including transmission of a bitstream), in the case that the predefined patterns are calculated within the renderer
    • very little additional side information needed (for an encoder/bitstream/renderer scenario including transmission of a bitstream), in the case that the predefined patterns are calculated in the encoder from the scene geometry


This is achieved by using parameterizable but fixed spatial ER patterns that do not depend on the exact geometry of the room. In an embodiment of the invention, the pattern also does not depend on the listener position in the room. Instead, only one (or a few) global characteristic parameters are used to configure the ER pattern. In this way, the pattern can be rendered extremely efficiently.


In the newly invented ER patterns described in the following, room acoustic parameters such as RT60, predelay time, room dimensions or room volume, and the frequency dependency of RT60 are used for pattern configuration. The ER pattern is defined in a way to produce a (temporally) smooth transition between the direct sound and the late reverb. It should be of neutral timbre. It is dependent on room volume and surface. It is not dependent on the position of the source and receiver in the room.


It is the objective of the invention to produce a plausible and convincing perception by the listener, fitting to the overall room acoustical parameters. This is sufficient for most use cases, especially since the listener has no possibility for a direct comparison with a rendering of the “real” physically correct ER.


SUMMARY

One embodiment relates to an apparatus for sound rendering, configured to receive information on a listener position and a sound source position; render an audio signal of the sound source using a room impulse response whose early reflection portion is exclusively determined by an early reflection pattern, which is indicative of a constellation of early reflection positions, and which is positioned at the listener position in a manner so that the early reflection positions are located around the listener position and at angular directions from the listener position which are invariant with respect to changes in a listener head orientation.


Another embodiment relates to a bitstream for being subject to inventive sound rendition.


Another embodiment relates to a digital storage medium storing an inventive bitstream for being subject to sound rendition.


According to another embodiment, a method for sound rendering may have the steps of: receiving information on a listener position and a sound source position; rendering an audio signal of the sound source using a room impulse response whose early reflection portion is exclusively determined by an early reflection pattern, which is indicative of a constellation of early reflection positions, and which is positioned at the listener position in a manner so that the early reflection positions are located around the listener position and at angular directions from the listener position which are invariant with respect to changes in a listener head orientation.


Another embodiment relates to a non-transitory digital storage medium having a computer program stored thereon to perform the inventive method when said computer program is run by a computer.


In accordance with a first aspect of the present invention, the inventors of the present application realized that one problem encountered when trying to use early reflection (ER) rendering of an audio signal stems from the fact that the early reflections depend on a relationship between a source position and a listener position. The inventors found that it is possible to consider a source position independent ER pattern without, e.g., floor reflection, so that ER rendering gets easier while the rendering result is still pretty good. The early reflection portion of the room impulse response used for the rendering is exclusively determined by an early reflection pattern. A spatial relationship between a sound source and the listener is not considered for the early reflection portion of the room impulse response. Further, the early reflection positions in the early reflection pattern are invariant with respect to changes in a listener head orientation. This is based on the finding that the same ER pattern can be used for determining the early reflection portion of the room impulse response independently of whether the listener looks towards the sound source or in any other direction.


Accordingly, in accordance with a first aspect of the present application, an apparatus for sound rendering is configured to receive information on a listener position and a sound source position. The apparatus is configured to render an audio signal of the sound source using a room impulse response whose early reflection portion is exclusively determined by an early reflection pattern. The early reflection pattern is indicative of a constellation, e.g. constellation shall denote a set of positions along with defining their mutual placement in terms of the angles between the lines connecting the positions; a synonymous term shall be “pattern”, of early reflection positions. The early reflection pattern is positioned at the listener position in a manner so that the early reflection positions are located around the listener position and at angular directions from the listener position which are invariant with respect to changes in a listener head orientation, i.e. the constellation is translatorily placed at the listener position.


In accordance with a second aspect of the present invention, the inventors of the present application realized that one problem encountered when trying to use early reflection (ER) rendering of an audio signal stems from the fact that the early reflection patterns for outdoor environments are highly individual and dependent on the physical setup of the scene. The inventors found that an ER pattern generated using moderate analysis of an environment can result in an acoustically convincing, but computationally moderate ER rendering result.


Accordingly, in accordance with a second aspect of the present application, an apparatus for determining an early reflection pattern for sound rendition is configured to perform a geometric analysis of an acoustic environment by, at each of one or more analysis positions, determining a function indicative of, for each of different distances from the respective analysis position, a value representative of an early reflection contribution; and by inspecting the function or a further function derived therefrom with respect to one or more maxima to derive one or more control parameters. Additionally, the apparatus is configured to determine an early reflection pattern, which is indicative of a constellation of early reflection positions, by placing the early reflection positions using the one or more control parameters.


In accordance with a third aspect of the present invention, the inventors of the present application realized that one problem encountered when trying to use early reflection (ER) rendering of an audio signal stems from the fact that a transmission of early reflection patterns of the audio scenes for the rendering may result in high signaling costs. The inventors found that an ER pattern can be generated by use of bitstream hints, resulting in an acoustically convincing, but computationally moderate ER rendering result. By using only hints in the bitstream, the signaling costs can be reduced, since it is not necessary to transmit the complete ER pattern.


Accordingly, in accordance with a third aspect of the present application, an apparatus for sound rendering is configured to receive first information on a listener position and a sound source position. The apparatus is configured to receive a bitstream comprising, and e.g. read therefrom, a representation of an audio signal of a sound source positioned at the sound source position and one or more early reflection pattern parameters. For example, the bitstream is an audio bitstream with the early reflection parameter inside a header or metadata field of the bitstream, or a file format stream with the early reflection parameter inside a packet of the file format stream and a track of the file format stream comprising an audio bitstream representing the audio signal. Additionally, the apparatus is configured to determine an early reflection pattern, which is indicative of a constellation of early reflection positions, depending on the one or more early reflection pattern parameters. Further, the apparatus is configured to render the audio signal of the sound source using a room impulse response whose early reflection portion is determined by an early reflection pattern. The early reflection pattern is indicative of a constellation, e.g. constellation shall denote a set of positions along with defining their mutual placement in terms of the angles between the lines connecting the positions; a synonymous term shall be “pattern”, of early reflection positions. The early reflection pattern is positioned at the listener position in a manner so that the early reflection positions are located around the listener position and at angular directions from the listener position which are invariant with respect to changes in listener head orientation, i.e. the constellation is translatorily placed at the listener position.


In accordance with a fourth aspect of the present invention, the inventors of the present application realized that one problem encountered when trying to use early reflection (ER) rendering of an audio signal stems from the fact that a tremendous amount of computation has to be spent to determine each reflection from the source to the listener, taking into consideration the geometry of walls, occluding objects and other effects to compute a physically accurate reflection pattern. The inventors found that simple room acoustical parameters, like room dimension, room volume or predelay, can be used to determine the number of early reflection positions within an early reflection pattern. It is not necessary to analyze the real early reflections of the scene, since the early reflections can be approximated dependent on a room acoustical parameter. The inventors found that ER pattern generation, with the ER number depending on a room acoustical parameter, results in an acoustically convincing, but computationally moderate ER rendering result.


Accordingly, in accordance with a fourth aspect of the present application, an apparatus for determining an early reflection pattern for sound rendition is configured to receive at least one room acoustical parameter which is representative of an acoustical characteristic of an acoustic environment. The apparatus is configured to determine an early reflection pattern, which is indicative of a constellation of early reflection positions, in a manner so that a number of the early reflection positions depends on the at least one room acoustical parameter.


In accordance with a fifth aspect of the present invention, the inventors of the present application realized that one problem encountered when trying to use early reflection (ER) rendering of an audio signal stems from the fact that each source is associated with a different early reflection pattern. The inventors found that it is not necessary to use different ER patterns for signals of different sources. This is based on the idea that the signals can be weighted and summed dependent on a source-listener relationship, so that only the weighted sum of the audio signals is rendered based on the ER pattern. The inventors found that ER rendition by use of one ER pattern for more than one sound source results in an acoustically convincing, but computationally moderate ER rendering result.


Accordingly, in accordance with a fifth aspect of the present application, an apparatus for sound rendering is configured to receive information on a listener position, a first sound source position and a second sound source position. The apparatus is configured to render the audio signals of the two sound sources using a room impulse response whose early reflection portion is determined by an early reflection pattern. The early reflection pattern is indicative of a constellation, e.g. constellation shall denote a set of positions along with defining their mutual placement in terms of the angles between the lines connecting the positions; a synonymous term shall be “pattern”, of early reflection positions. The early reflection pattern is positioned at the listener position in a manner so that the early reflection positions are located around the listener position and at angular directions from the listener position which are invariant with respect to changes in listener head orientation, i.e. the constellation is translatorily placed at the listener position. The apparatus is configured to render the audio signals of the two sound sources by forming a weighted sum of a first audio signal of a first sound source positioned at the first sound source position and a second audio signal of a second sound source positioned at the second sound source position. The weighted sum weights the first audio signal more than the second audio signal if a first distance between the first sound source position and the listener position is smaller than a second distance between the second sound source position and the listener position, and weights the second audio signal more than the first audio signal if the first distance is larger than the second distance. Additionally, the apparatus is configured to render the audio signals of the two sound sources by generating early reflection contribution loudspeaker signals relating to the early reflection portion of the room impulse response by rendering the weighted sum from the early reflection positions.


In accordance with a sixth aspect of the present invention, the inventors of the present application realized that one problem encountered when trying to use early reflection (ER) rendering of an audio signal stems from the fact that a tremendous amount of computation has to be spent to determine each reflection from the source to the listener, taking into consideration the geometry of walls, occluding objects and other effects to compute a physically accurate reflection pattern. The inventors found that simple room acoustical parameters, like room dimension, room volume or predelay, can be used to parametrize a function defining the positions of the early reflections. It is not necessary to analyze the real early reflections of the scene, since the early reflections can be approximated dependent on the room acoustical parameter. Further, it was found that spiral functions provide a good distribution of the early reflection positions. The inventors found that ER pattern generation using one or more spiral functions results in a perceptually convincing, but computationally moderate ER rendering result.


Accordingly, in accordance with a sixth aspect of the present application, an apparatus for determining an early reflection pattern for sound rendition is configured to receive at least one room acoustical parameter which is representative of an acoustical characteristic of an acoustic environment and determine an early reflection pattern, which is indicative of a constellation of early reflection positions, by parameterizing one or more spiral functions centered at the listener position, and place the early reflection positions using the one or more spiral functions.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:



FIG. 1 shows an embodiment of an early reflection pattern;



FIG. 2 shows an embodiment of an early reflection pattern determined using spiral functions;



FIG. 3A shows an embodiment of an early reflection pattern over time;



FIG. 3B shows an embodiment of an early reflection pattern in a spatial top view;



FIG. 3C shows an embodiment of an early reflection pattern with respect to a frequency dependency;



FIG. 4 shows a level relation between listener, direct source and reflections;



FIG. 5 shows an implementation of simple ER algorithm in encoder/decoder/renderer;



FIG. 6 shows an apparatus for determining an early reflection pattern by analyzing an environment;



FIG. 7 shows a spatial top view of an embodiment of an ER pattern with four early reflection positions;



FIG. 8A shows a geometrical outdoor scene analysis in a top view;



FIG. 8B shows a geometrical outdoor scene analysis in a side view;



FIG. 9A shows a mesh of analysis points in a top view;



FIG. 9B shows a mesh of analysis points in a side view;



FIG. 10 shows a distribution of reflection surface area over distance, averaged over several analysis points;



FIG. 11A shows a first embodiment of an outdoor ER pattern;



FIG. 11B shows a second embodiment of an outdoor ER pattern;



FIG. 12 shows an amplitude reduction over distance of a point source for different distAlpha values;



FIG. 13 shows a block diagram illustrating a summation of different audio sources into one source signal with distance weighting;



FIG. 14 shows a level relation between the listener, two direct sources and the summed up reflections;



FIG. 15 illustrates the overall rendering process exemplarily;



FIG. 16 shows an embodiment of an apparatus for sound rendering;



FIG. 17 shows an embodiment of an apparatus for sound rendering using ER pattern parameter;



FIG. 18 shows an embodiment of an apparatus for determining an ER pattern dependent on a room acoustical parameter;



FIG. 19 shows an embodiment of an apparatus for rendering a weighted sum of two or more source signals;



FIG. 20 shows an embodiment of an apparatus for determining an ER pattern using spiral functions;



FIG. 21 shows an example for a monophonic 2nd order RIR generated with the acoustical room simulation program RAVEN;



FIG. 22 shows a RIR with direct sound and late reverb starting at predelay time 0.13 s, no ER;



FIG. 23 shows a RIR with 1st order reflections and late reverb (left), top view (right);



FIG. 24 shows a RIR with two reflections side by side to the direct sound (left), top view (right); and



FIG. 25 shows a RIR with “SPAT” pattern (left), top view (right).





DETAILED DESCRIPTION OF THE INVENTION

Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals even if occurring in different figures.


In the following description, a plurality of details is set forth to provide a more thorough explanation of embodiments of the present invention. However, it will be apparent to those skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring embodiments of the present invention. In addition, features of the different embodiments described hereinafter may be combined with each other, unless specifically noted otherwise.


In the following, various examples are described which may assist in achieving a reduced audio rendering complexity when using early reflection processing concepts. The herein discussed simplified early reflection processing concepts may be added to other early reflection processing concepts heuristically designed, for instance, or may be provided exclusively.


In order to ease the understanding of the following embodiments of the present application, the description starts with a general presentation of an early reflection pattern 1, according to an embodiment of the invention. The features described with regard to the early reflection pattern 1 in FIG. 1 can also apply to any other herein described early reflection pattern 1.


An early reflection pattern 1 is indicative of a constellation of early reflection positions ERP, see ERP1 and ERP2. For example, the constellation shall denote a set of positions ERP along with defining their mutual placement, e.g., in terms of the angles a between the lines connecting the positions with the center 2 of the pattern 1. A synonymous term for constellation shall be “pattern”.


The early reflection positions ERP, i.e. positions of early reflections, may indicate or identify positions in an environment 5, e.g., an indoor room or an outdoor area, at which early reflections of an audio signal may occur. For example, a listener positioned at the center 2 of the early reflection pattern 1 may perceive early reflections coming from the early reflection positions ERP. In other words, the early reflection positions ERP may indicate positions from which a listener positioned at the center of the early reflection pattern 1 receives early reflections.


The early reflection pattern 1, for example, is positioned at a listener position 10 in a manner so that the early reflection positions ERP are located around the listener position 10 and at angular directions from the listener position 10 which are invariant with respect to changes in a listener head orientation, i.e. the constellation is translatorily placed at the listener position 10. For example, the early reflection positions ERP may be determined, so that same are in a substantially uniform manner angularly distributed around the listener position 10.


According to an embodiment, the early reflection pattern 1, i.e. the early reflection positions ERP, may be determined so that connection lines, see 7 and 8 in FIG. 1, between the respective early reflection position ERP1/ERP2 and the listener position 10 do not mutually overlap, i.e. are mutually distinct. This allows an even distribution and prevents accumulation of early reflection positions in the environment 5.


As shown in FIG. 1, the center 2 of the early reflection pattern 1 may be positioned at the listener position 10. The center 2 of the early reflection pattern 1 may be linked to the listener position 10, and the early reflection pattern 1 may move translationally together with the listener. However, a rotational movement of the listener will not change the early reflection positions ERP, i.e. the early reflection pattern 1 will not follow a rotational motion of the listener.


According to an embodiment, the early reflection positions ERP lie in a horizontal plane along with the listener position 10.


According to an embodiment, an apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to determine the early reflection positions ERP by adjusting an azimuthal rotation of the constellation according to a pattern azimuth parameter in a bitstream comprising a representation of an audio signal to be rendered. In other words, the complete early reflection pattern 1 may be rotated to better approximate real early reflections, e.g. in a certain environment 5. This azimuthal rotation is not performed in reaction to movements, e.g., a rotational movement of the listener. This adjustment of the azimuthal rotation of the constellation may be performed at an initial determination of the early reflection pattern 1. Once the early reflection pattern 1 is determined, all early reflection positions ERP can solely undergo an identical translational movement in reaction to a translational movement of the listener position 10. The arrangement of the early reflection positions ERP relative to the center 2 of the pattern 1 may be determined using the adjustment of the azimuthal rotation of the constellation. Once the pattern 1 is determined, it may not be adjusted anymore, i.e. a movement of a listener position does not change the relative arrangement between the early reflection positions ERP and the center 2 of the pattern 1.
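The following is a minimal Python sketch (not the normative implementation) of the placement behavior described above: the constellation may be rotated once by a pattern azimuth parameter (here the hypothetical argument pattern_azimuth_rad) and afterwards only follows the listener translationally, ignoring head rotation.

import math

def init_pattern(local_offsets_xy, pattern_azimuth_rad=0.0):
    # One-time azimuthal rotation of the whole constellation (not repeated later).
    c, s = math.cos(pattern_azimuth_rad), math.sin(pattern_azimuth_rad)
    return [(c * x - s * y, s * x + c * y) for (x, y) in local_offsets_xy]

def place_pattern(rotated_offsets_xy, listener_pos_xy):
    # Pure translation to the listener position; head orientation is ignored.
    lx, ly = listener_pos_xy
    return [(lx + x, ly + y) for (x, y) in rotated_offsets_xy]

# Usage: the world-space ER positions move with the listener but never rotate with the head.
offsets = init_pattern([(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0), (0.0, -2.0)], math.radians(15))
er_world = place_pattern(offsets, listener_pos_xy=(4.0, 1.5))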


According to an embodiment, at least one room acoustical parameter which is representative of an acoustical characteristic of an acoustic environment may be considered at a determination of the early reflection pattern. The at least one room acoustical parameter comprises one or more of room dimensions, room volume, and predelay time to the late reverberation. Advantageously, the at least one room acoustical parameter comprises only one of these acoustical characteristics of the acoustic environment. The at least one room acoustical parameter can be received or read from a bitstream, e.g., from the bitstream comprising a representation of an audio signal to be rendered using the early reflection pattern 1.


According to an embodiment, the early reflection pattern 1 can be determined in a manner so that a number of the early reflection positions depends on the at least one room acoustical parameter and/or so that a mutual spacing of the early reflection positions is varied/adapted dependent on the at least one room acoustical parameter. For example, the mutual spacing of the early reflection positions is varied by central expansion centered at the listener position.


According to an embodiment, the number of early reflection positions ERP of the pattern 1 can be determined so that the number and/or a farthest early reflection position from the listener position is larger

    • the larger the room dimensions are, or
    • the larger the room volume is, or
    • the larger the predelay time to the late reverberation is.


Under “a farthest early reflection position from the listener position”, a “distance of a maximally distanced position among the early reflection positions to the listener position” is to be understood. According to an embodiment, early reflection positions ERP are placed near the center 2 of the pattern 1, and the more early reflection positions ERP are comprised by the pattern 1, the farther away the farthest early reflection position is from the center 2.


According to an embodiment, mutual spacing of the early reflection positions ERP can be varied/adapted dependent on the at least one room acoustical parameter by uniformly increasing a distance of each early reflection positions ERP to the center 2 with increasing room dimensions, room volume, or predelay time to the late reverberation. Optionally, the mutual spacing of the early reflection positions ERP can be varied/adapted dependent on the at least one room acoustical parameter, so that a distance of a maximally distanced position among the early reflection positions ERP to the listener position 10 is larger the larger the room dimensions are, or the larger the room volume is, or the larger the predelay time to the late reverberation is with the distance being smaller than the predelay time. This allows an even distribution of the early reflection positions ERP and thus an acoustically convincing ER rendering result. It may be advantageous, if the distance of the maximally distanced position among the early reflection positions ERP to the listener position 10 is increased more than a distance of the nearest distanced position among the early reflection positions ERP to the listener position 10 with increasing room dimensions, room volume, or predelay time to the late reverberation.
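As an illustration of such a central expansion (the concrete scaling rule below is an assumption of this sketch, not a normative formula), the mutual spacing of the early reflection positions may be scaled so that the farthest position stays just below the distance corresponding to the predelay:

def expand_pattern(offsets_xy, predelay_ms, c=343.0):
    # Scale all offsets uniformly (central expansion around the listener) so that the
    # farthest ER position remains slightly below the predelay distance.
    max_dist = max((x * x + y * y) ** 0.5 for (x, y) in offsets_xy)
    predelay_dist = c * predelay_ms / 1000.0   # distance sound travels within the predelay
    scale = 0.9 * predelay_dist / max_dist     # 0.9: illustrative margin below the predelay
    return [(scale * x, scale * y) for (x, y) in offsets_xy]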



FIG. 2 shows an embodiment of an early reflection pattern 1 usable for early reflection processing of an audio signal. The early reflection pattern 1 comprises early reflection positions ERP, see ERP11 to ERP15 (ERP1) and ERP21 to ERP25 (ERP2) in FIG. 2. FIG. 2 shows exemplarily 10 early reflection positions ERP. However, it is clear that the early reflection pattern 1 can comprise a different number of early reflection positions ERP. The early reflection pattern 1 may comprise two or more early reflection positions ERP, e.g., only the early reflection position ERP11 and ERP21.


As shown in FIG. 2, two spiral functions 3 and 4 centered at a listener position, i.e. the center 2, can define positions of the early reflections, i.e. the early reflection positions ERP, e.g., within an environment 5. However, it is clear that the positions of the early reflections can alternatively be defined by only one spiral function 3 or 4 or by more than two spiral functions. An apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to place the early reflection positions ERP using the one or more spiral functions 3, 4 to determine the early reflection pattern 1 in the environment 5. For example, the respective apparatus may be configured to place a first set of early reflection positions ERP1, see ERP11 to ERP15, using the first spiral function 3 and a second set of early reflection positions ERP2, see ERP21 to ERP25, using the second spiral function 4.


Each of the first set of early reflection positions ERP1 is associated with a corresponding early reflection position of the second set of early reflection positions ERP2. For example, the early reflection position ERP11 may be associated with the corresponding early reflection position ERP21, the early reflection position ERP12 may be associated with the corresponding early reflection position ERP22, the early reflection position ERP13 may be associated with the corresponding early reflection position ERP23, the early reflection position ERP14 may be associated with the corresponding early reflection position ERP24 and the early reflection position ERP15 may be associated with the corresponding early reflection position ERP25. For each of the first set of early reflection positions ERP1, the respective early reflection position ERP1 is positioned on an opposite side of a line perpendicularly crossing a connecting line between the respective early reflection position ERP1 and the corresponding early reflection position ERP2 of the second set of early reflection positions ERP2. This ensures that the listener receives early reflections from different directions and prevents an accumulation of early reflection positions in one area. This positioning using the spiral functions enables a uniform distribution of early reflection positions in the environment 5, resulting in an acoustically convincing, but computationally moderate early reflection rendering result of an audio signal.



FIG. 2 shows an example at which, for each of the first set of early reflection positions ERP1, the corresponding early reflection position ERP2 of the second set of early reflection positions ERP2 is angularly offset relative to the connecting line into an angular direction which is common for all early reflection positions ERP1 of the first set of early reflection positions ERP1.


According to an embodiment, the apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to place the early reflection positions ERP1 and ERP2 using the two spiral functions 3 and 4,

    • so that each of the first set of early reflection positions ERP1 is associated with a corresponding early reflection position of the second set of early reflections ERP2, and
    • so that, for each of the first set of early reflection positions ERP1, the respective early-reflection position ERP1 is positioned on a side of a respective line perpendicularly crossing at the pattern center 2 an axis running through the pattern center 2 and the respective early reflection position ERP1 of the first set of early reflection positions ERP1 and so that the respective corresponding early reflection position ERP2 of the second set of early reflections ERP2 is positioned on an opposite side of the respective line, and
    • so that the respective corresponding early reflection position ERP2 of the second set of early reflection positions ERP2 is angularly offset (see y for the corresponding early reflection positions ERP11 and ERP21) relative to the respective axis into an angular direction which is common for all early reflection positions ERP1 of the first set of early reflection positions ERP1 and/or which is common for all early reflection positions ERP2 of the second set of early reflection positions ERP2.


The one or more spiral functions 3, 4 may define the early reflection positions ERP in polar coordinates (r, β), see (r1(1 to 5), β1(1 to 5)) for defining the early reflection positions ERP1 of the first set of early reflection positions ERP1 and (r2(1 to 5), β2(1 to 5)) for defining the early reflection positions ERP2 of the second set of early reflection positions ERP2.


As will be described in the following in more detail, see especially section 1 “Indoor ER Parameter Calculation”, the one or more spiral functions 3, 4 can be parameterized depending on at least one room acoustical parameter, i.e. the respective spiral function 3, 4 defines the respective early reflection positions ERP dependent on the at least one room acoustical parameter. The at least one room acoustical parameter comprises one or more of room dimensions, room volume and predelay time to late reverberation. The at least one room acoustical parameter may be representative of an acoustical characteristic of an acoustic environment 5.


For example, the one or more spiral functions 3, 4 can be parameterized depending on the at least one room acoustical parameter,

    • so that a number of the early reflection positions ERP is larger the larger the room dimensions are, or larger the larger the room volume is, or larger the larger the predelay time to the late reverberation is; and/or
    • so that, for each of the early reflection positions ERP, a distance of the respective early reflection position ERP to the center 2 of the early reflection pattern 1 is larger the larger the room dimensions are, or larger the larger the room volume is, or larger the larger the predelay time to the late reverberation is.


According to an embodiment, the apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to parametrize the one or more spiral functions and determine a number of early reflection positions ERP so that a distance of a maximally distanced position among the early reflection positions to the listener position is larger the larger the room dimensions are, or the larger the room volume is, or the larger the predelay time to the late reverberation is with the distance being smaller than the predelay time.


According to an embodiment, the apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to support different determinations of the early reflection pattern. The apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to choose the type of determination dependent on the environment 5. For example, the determination, e.g., a first determination, of the early reflection pattern 1 using one or more spiral functions 3, 4 and/or the determination, e.g., a first determination, of the early reflection pattern 1 in a manner so that the number of the early reflection positions depends on the at least one room acoustical parameter may be associated with an indoor environment, like a room, see especially section 1 “Indoor ER Parameter Calculation”. Such a determination, e.g., a first determination, may be selected in case of the acoustic environment 5 being an indoor environment or in case of a pattern type index in a bitstream comprising a representation of an audio signal to be rendered assuming a predetermined state. An alternative determination, e.g., a second determination, is described in more detail in section 3 “Outdoor ER Pattern”.


As already described above, one of the newly invented ER patterns 1 for indoor environments consists of two spirals, see FIG. 3. This pattern 1 has the advantage of covering all directions around the listener 10 while providing an even distribution over time without clustering. The number of early reflections (ERs) can be adapted to the size of the room, which can also be derived from the predelay for the late reverb. The frequency dependency of RT60 may also define the frequency dependency of the ERs. RT60, or the average absorption factor, defines an additional amplification on top of the normal distance influence. From the frequency dependency of RT60, a simple shelving filter is calculated to adapt the frequency response of the early reflections to the overall absorption behavior described by RT60. FIG. 3 shows the new ER pattern 1 over a) time, b) spatial top view, c) frequency dependency.
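One way to picture this frequency dependency is the following hedged sketch: per-band gains for a reflection arriving at time t are derived from the standard -60 dB per RT60 decay relation, and a shelving filter matching the resulting low/high gain ratio can then be designed with any standard method. The exact shelving-filter design of the embodiment is not reproduced here; the band labels and RT60 values are illustrative assumptions.

def er_band_gains(t_ms, rt60_by_band_s):
    # Linear gain per frequency band for a reflection at delay t_ms, following the
    # -60 dB per RT60 decay (i.e. -60 * t / RT60 dB at time t).
    return {band: 10.0 ** (-3.0 * (t_ms / 1000.0) / rt60)
            for band, rt60 in rt60_by_band_s.items()}

# Example: high frequencies decay faster, so later ERs receive a stronger high cut.
gains = er_band_gains(40.0, {"low": 1.2, "mid": 1.0, "high": 0.6})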


1 Indoor ER Parameter Calculation

The following description of the indoor ER parameter calculation refers to FIG. 2 and FIG. 3.


The variable parameters for the spiral pattern, i.e. for the first spiral function 3 and for the second spiral function 4, are mainly set by the predelay time. For example, the predelay time to the late reverb is used, e.g.







predelay = 1000 · max(roomdim) / c  [ms],   c = 343 m/s






The parameters are set dependent on the predelay of the room, which defines the start of the late reverb, and are calculated with Eq. 1.










NumER/2 = floor(3 * ln(predelay) / 2)    (Eq. 1)

ampFac = [1..4], often 2 or 3

distFac = [1..4], often 2 or 3

initDelay = [10..50], often 15 or 30

alpha = [0..1], often 0.1 or 0.3







    • NumER represents the number of early reflection positions.
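A minimal numerical sketch of the predelay estimate and Eq. 1 above (function and parameter names are illustrative):

import math

def predelay_ms(room_dims_m, c=343.0):
    # Predelay in ms: travel time of sound over the largest room dimension.
    return 1000.0 * max(room_dims_m) / c

def num_er_half(predelay):
    # Eq. 1: number of ERs per spiral; the total NumER is twice this value.
    return math.floor(3.0 * math.log(predelay) / 2.0)

# Example: a 10 m x 7 m x 3 m room gives a predelay of about 29 ms and 5 ERs per spiral.
pd = predelay_ms((10.0, 7.0, 3.0))
half = num_er_half(pd)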





The first spiral function 3 and the second spiral function 4 can be used so that the first set of early reflection positions ERP1 is determined in polar coordinates as (r1; β1) and the second set of early reflection positions ERP2 is determined in polar coordinates as (r2; β2). Azimuth and radius calculation of the ER positions with the two-spiral pattern:










β1 = n · π/3,   n = [1 : NumER/2]    (Eq. 2)

β2 = π/8 + n · π/3,   n = [1 : NumER/2]    (Eq. 3)

r1 = distfactor * base^(2*β1/π),   base = 1.85    (Eq. 4)

r2 = -distfactor * base^(2*β2/π),   base = 1.85    (Eq. 5)

amp1 = (1 - alpha) / r1    (1)

amp2 = (1 - alpha - 0.01) * abs(1/r2)    (2)







The constant distfactor may correspond to the above mentioned constant distFac. According to an embodiment, the distfactor can be determined based on the at least one room acoustical parameter, e.g., the distfactor can be determined such that same is the larger the larger the predelay time to the late reverb is.
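A compact Python sketch of Eq. 2 to Eq. 5 and the amplitude terms (1) and (2), assuming the parameter values stated above (a negative radius denotes a position in the direction opposite to the reference direction, as described in the following paragraph):

import math

def spiral_pattern(num_er_half, distfactor=2.0, alpha=0.1, base=1.85):
    positions = []  # list of (radius, azimuth, amplitude) per early reflection
    for n in range(1, num_er_half + 1):
        beta1 = n * math.pi / 3.0                              # Eq. 2
        beta2 = math.pi / 8.0 + n * math.pi / 3.0              # Eq. 3
        r1 = distfactor * base ** (2.0 * beta1 / math.pi)      # Eq. 4
        r2 = -distfactor * base ** (2.0 * beta2 / math.pi)     # Eq. 5
        amp1 = (1.0 - alpha) / r1                              # (1)
        amp2 = (1.0 - alpha - 0.01) * abs(1.0 / r2)            # (2)
        positions.append((r1, beta1, amp1))
        positions.append((r2, beta2, amp2))
    return positions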


As can be seen in FIG. 2, a polar axis 6 runs through the center 2 of the early reflection pattern 1. The origin, i.e. the center 2, of the early reflection pattern 1 represents a pole. A ray runs from the pole in a reference direction, i.e. representing the polar axis 6, so that the azimuth β1(1 to 5), defining the angular coordinates of the early reflection positions ERP1(1 to 5) of the first set of early reflection positions ERP1, and the azimuth β2(1 to 5), defining the angular coordinates of the early reflection positions ERP2(1 to 5) of the second set of early reflection positions ERP2, represent angles from the polar axis 6. The radius coordinates of the early reflection positions ERP1 are directed into the reference direction and the radius coordinates of the early reflection positions ERP2 are directed into a direction opposite to the reference direction, see FIG. 2 and Eq. 4 and Eq. 5.


An apparatus for sound rendering can be configured to generate early reflection contribution loudspeaker signals relating to an early reflection portion of a room impulse response by performing a rendition of an audio signal of one or more sound sources from the early reflection positions ERP, e.g., in a manner level adjusted according to a distance of the respective early reflection position to the listener position, e.g., see the determination of amp1 and amp2 above. For example, for each of the first set of early reflection positions ERP1, the audio signal of the sound source is rendered from the respective early reflection position ERP1 at the level amp1 and, for each of the second set of early reflection positions ERP2, the audio signal of the sound source is rendered from the respective early reflection position ERP2 at the level amp2.


The amplitude of the reflections is dependent on several influencing parameters:

    • a) Standard distance law (factor 2 reduction per distance doubling)
    • b) Correction by









ampCorrection = ampFac · (1 - absorption) / slDistance    (Eq. 6)









    • with slDistance representing a source listener distance. The terms ampFac and absorption represent constants.





As seen in FIG. 4, the level relation between the reflections and the direct source level is fixed. The levels of the five sources shown here (one direct source and four early reflections) go up and down in relation to the source-listener distance (sl distance). FIG. 4 shows a level relation between listener, direct source and reflections.


The rendering of the audio signal of the sound source from each early reflection position in a manner level adjusted according to a distance of the respective early reflection position to the listener position, may be performed by

    • offsetting 20 a level at which the audio signal of the sound source is rendered from the respective early reflection position, using a level offset, or amplifying same with a level factor, which offset or factor is common for all early reflection positions, and setting the level offset or level factor according to an amplitude correction factor (see Eq. 6).


For example, for each of the first set of early reflection positions ERP1, the level amp1 at which the audio signal of the sound source is rendered from the respective early reflection position ERP1 is offset by ampCorrection (see Eq. 6) and, for each of the second set of early reflection positions ERP2, the level amp2 at which the audio signal of the sound source is rendered from the respective early reflection position ERP2 is offset by ampCorrection (see Eq. 6). The amplitude correction factor, i.e. ampCorrection of Eq. 6, may be contained in a bitstream comprising a representation of the audio signal. According to an embodiment, the amplitude correction factor is contained in one or more early reflection pattern parameters.


According to an embodiment, the rendering of the audio signal of the sound source from each early reflection position in a manner level adjusted according to a distance of the respective early reflection position to the listener position may be performed by modifying the level adjustment according to the distance of the respective early reflection position to the listener position relative to a level adjustment used by the apparatus for rendering of the audio signal from the sound source position according to a distance attenuation (amp1 and amp2). The distance attenuation may be contained in a bitstream comprising a representation of the audio signal. According to an embodiment, the attenuation is contained in one or more early reflection pattern parameters.


As can be seen in FIG. 4, at the rendering the level at which the audio signal of the sound source is rendered from the respective early reflection position is offset 20, wherein the same offset applies for all early reflection positions ERP of the early reflection pattern 1. Additionally, at the rendering the level at which the audio signal of the sound source is rendered from the respective early reflection position may be attenuated dependent on a distance between the respective early reflection position and the listener, e.g., using a corrected distance law.
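A small sketch of this level handling, assuming the correction of Eq. 6 is applied as a common multiplicative factor (it may equally be applied as a dB offset, as described above); the function names are illustrative:

def amp_correction(amp_fac, absorption, sl_distance):
    # Eq. 6: common amplitude correction for all ERs of the pattern.
    return amp_fac * (1.0 - absorption) / sl_distance

def er_render_level(base_amp, amp_fac, absorption, sl_distance):
    # Per-ER base amplitude (amp1/amp2) scaled by the common, source-listener-distance
    # dependent correction, so the whole pattern moves up and down with the direct sound.
    return base_amp * amp_correction(amp_fac, absorption, sl_distance)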


As described above for an audio signal of a single sound source, it is also possible to apply this rendering technique to two or more audio signals of two or more sound sources, wherein this rendering is applied to a weighted sum of the two or more audio signals. The calculation of the weighted sum is described in more detail in Section 5.
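As a hedged illustration of such a weighted sum (the exact weighting rule of Section 5 is not reproduced here; inverse-distance weights are an assumption of this sketch), closer sources receive larger weights before the common ER rendering:

def weighted_sum(signals, distances):
    # Mix the source signals sample-wise with weights inversely proportional to the
    # respective source-listener distance, normalized to sum to one.
    weights = [1.0 / max(d, 1e-3) for d in distances]
    total = sum(weights)
    weights = [w / total for w in weights]
    n = min(len(s) for s in signals)
    return [sum(w * s[i] for w, s in zip(weights, signals)) for i in range(n)]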


2 Implementation in a VR System


FIG. 5 presents a structogram diagram of the Simple ER software algorithm in an encoder/decoder environment. FIG. 5 shows an implementation of the simple ER algorithm in encoder and decoder/renderer. First, it is decided whether a predefined ER pattern is used or not. The next decision is for an indoor or outdoor ER pattern. For an indoor pattern no further parameters have to be transmitted; the ER pattern is calculated from the acoustical scene parameters already existing. For an outdoor pattern the geometry of the scene is analyzed, these parameters are transmitted and the ER outdoor pattern is calculated in the decoder. For more details see Section 3. For the transition from one acoustical environment to the next, see Section 4. For the handling of several audio sources in one scene see Section 5.
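The decision flow of FIG. 5 can be summarized by the following rough sketch; the function and field names are illustrative assumptions, not the actual bitstream syntax:

def make_indoor_pattern(predelay_ms, rt60_s):
    # Indoor: derived in the renderer from already available acoustical scene parameters.
    return {"type": "indoor", "predelay_ms": predelay_ms, "rt60_s": rt60_s}

def make_outdoor_pattern(outdoor_params):
    # Outdoor: built from encoder-side geometry analysis parameters read from the bitstream.
    return {"type": "outdoor", **outdoor_params}

def configure_er(scene, bitstream):
    if not bitstream.get("use_predefined_er", True):
        return None  # fall back to some other ER processing
    if scene["is_indoor"]:
        return make_indoor_pattern(scene["predelay_ms"], scene["rt60_s"])
    return make_outdoor_pattern(bitstream["outdoor_er_params"])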


3 Outdoor ER Pattern

An embodiment shown in FIG. 6 relates to an apparatus 100 for determining an early reflection pattern 1 for sound rendition, configured to perform a geometric analysis 110 of an acoustic environment 5 by, at each of one or more analysis positions 50, see 501 to 505, determining a function 112 indicative of, for each of different distances 114 from the respective analysis position 50, a value representative of an early reflection contribution 116. The function 112 or a further function derived therefrom is analyzed with respect to one or more maxima 118 to derive one or more control parameters 120. Additionally, the apparatus 100 is configured to determine an early reflection pattern 1, which is indicative of a constellation of early reflection positions ERP, see ERP1 to ERP4, by placing the early reflection positions using the one or more control parameters. The features of the apparatus 100 are described in the following in more detail.


Specifically for outdoor scenes, but not limited thereto, a new pattern 1 with four roughly cross-positioned ERs is designed, see FIG. 7. FIG. 7 shows a spatial top view of a new ER pattern 1 with four early reflection positions ERP1 to ERP4. The different distances, i.e. the respective distance between the respective early reflection position and the center 2, may be defined here by a predelay time and a compression factor, which are derived from geometry analysis 110 of the scene, i.e. the environment 5.


The usage of ER patterns for outdoor environments is highly individual and dependent on the physical setup of the scene. The geometrical analysis 110 described hereafter captures perceptually important characteristics of the outdoor scene, i.e. the environment 5, which are relevant to the perception of ERs:



FIG. 8 shows a geometrical outdoor scene analysis: A) top view of rings around an analysis point; B) side view around an analysis point with rings of increasing height. From a central listening point, e.g., an analysis point 50, concentric rings are positioned. The area of the rings, defined by radius and height, represents the maximum possible reflection energy at this distance, see FIG. 8. There is a spacing d between the rings (e.g. 3 m). Rays with an angular spacing α (e.g. 6°) are sent out from the analysis point 50. The first surfaces that are hit are counted towards the reflection surface existing at this distance and summed up over the ring. With this approach it is possible to determine the function 112 indicative of, for each of the different distances from the respective analysis position 50, a value representative of an early reflection contribution. This function may be determined for each of the analysis points 50.
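For illustration, the ring/ray analysis may be sketched as below; this is a minimal Python sketch, assuming a hypothetical first_hit_distance(azimuth, elevation) callback that ray-casts into the scene geometry and a simple 1/distance weighting, neither of which is specified in this exact form in the text.

import math
from typing import Callable, List, Optional

def reflection_distribution(
    first_hit_distance: Callable[[float, float], Optional[float]],  # assumed ray-cast helper
    ring_spacing_m: float = 3.0,   # spacing d between the rings, e.g. 3 m
    angle_step_deg: float = 6.0,   # angular spacing alpha of the rays, e.g. 6 degrees
    max_distance_m: float = 150.0,
) -> List[float]:
    """Sketch of the function 112 for one analysis point: send rays from the
    point, bucket the first reflective hit of each ray into concentric
    distance rings, and weight contributions down with increasing distance."""
    n_rings = int(max_distance_m / ring_spacing_m)
    contribution = [0.0] * n_rings
    az = 0.0
    while az < 360.0:
        el = 0.0
        while el < 90.0:  # rings of increasing height (upper hemisphere)
            hit = first_hit_distance(math.radians(az), math.radians(el))
            if hit is not None and hit < max_distance_m:
                ring = int(hit / ring_spacing_m)
                # assumed weighting: contribution decreases with distance
                contribution[ring] += 1.0 / max(hit, ring_spacing_m)
            el += angle_step_deg
        az += angle_step_deg
    return contribution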


In other words, the acoustic environment 5 is radially sampled with respect to a nearest reflective surface distance to obtain a radial sampling result. Additionally, a radial integration over the radial sampling result and a weighting of the radial sampling result may be performed so as to obtain the function 112. The weighting may be performed according to radial distance so as to decrease the early reflection contribution with increasing distance.



FIG. 9 shows a mesh of analysis points 50 in top a) and side b) view. The dot-dashed line indicates the user reachable area of a scene, i.e. the environment 5. There are a number of analysis points (e.g. 9) positioned in the inner part of a user reachable area, see FIG. 9. It is a 3D mesh, because some of the points are inside the geometrical mesh of the scene and have to be deselected.


As an alternative to analyzing the respective function 112 for each analysis point, it is advantageous in terms of efficiency to subject the functions 112 determined at the one or more analysis positions to a summation, e.g. averaging, to yield the further function 112′ shown in FIG. 10. The data over all mesh points may be averaged and the distribution can be analyzed. It represents the reflective outdoor energy over space and distance, see FIG. 10. FIG. 10 shows a distribution of reflection surface area over distance, averaged over several analysis points 50.


As can be seen in FIG. 10, the further function 112′ derived from the functions associated with the individual analysis points is inspected with respect to its two largest maxima to derive, as the one or more control parameters 120, a first amplitude a1 and a first distance p1 for a nearest of the two largest maxima 1181, and a second amplitude a2 and a second distance p2 for a farthest of the two largest maxima 1182. Alternatively, it is possible to derive the one or more control parameters 120 from each of the functions associated with the individual analysis points.
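A minimal sketch of this peak picking is shown below, assuming the averaged distribution is given as two parallel lists of values and distances; how ties or plateaus are resolved is not specified in the text.

def two_largest_maxima(values, distances):
    """Return ((a1, p1), (a2, p2)): amplitudes and distances of the two
    largest local maxima of the averaged distribution, ordered so that
    p1 < p2 (nearest peak first)."""
    peaks = [
        (values[i], distances[i])
        for i in range(1, len(values) - 1)
        if values[i] >= values[i - 1] and values[i] >= values[i + 1]
    ]
    top_two = sorted(peaks, reverse=True)[:2]                   # two largest amplitudes
    (a1, p1), (a2, p2) = sorted(top_two, key=lambda ap: ap[1])  # nearest peak first
    return (a1, p1), (a2, p2)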


The amplitudes a1 and a2—together with their distances p1 and p2—are, for example, the input values to calculate the outdoor ER pattern 1. The outdoor ER pattern 1 comprises four ERs, see FIG. 11A.


According to an embodiment shown in FIG. 11A, the ER pattern 1 is determined by

    • setting distances of the first ERP1 and the third ERP3 early reflection positions from the listener position 10 depending on p2, and
    • setting a ratio, see compFactor, between the distances of the first ERP1 and the third ERP3 early reflection positions from the listener position 10 on the one hand and distances of the second ERP2 and fourth ERP4 early reflection positions from the listener position 10 on the other hand based on a quotient or difference between a first term depending on a1 and a second term depending on a2.



FIG. 11A shows an outdoor ER pattern 1 of four reflections, see the circles (blue) around the listener, see the cross (red). The distance p2 to the second distribution maximum 1182 defines the distance to the two more distant reflections, see the early reflection positions ERP1 and ERP3. A compression factor compFactor may define the distance between the two more close reflections, see the early reflection positions ERP2 and ERP4. The relation between the amplitudes can define the compression factor, e.g.






compFactor = log10(a1) / log10(a2) - 0.05

The four early reflection positions ERPi can be placed so that same are positioned at polar coordinates (r(i); β(i)) with i=1 . . . 4.


The angle coordinates may be β(1) ≈ 5°-15°, β(2) ≈ 90°-110°, β(3) ≈ 180°-200°, β(4) ≈ 270°-290°. According to an embodiment, β ≈ [10°, 100°, 190°, 280°].


The radius coordinates may be determined according to equations 7 and 8, wherein a deviation of up to 40% from the calculated radius value may be allowable:









preDelay = p2 / c    (3)

r(i) = (0.7 + (i - 1) · rstep) · slDistance^distAlpha + 0.001 · preDelay · c    (Eq. 7)

    • with i = [1 . . . 4], slDistance [m] represents the source-listener distance, preDelay [ms] the time to the second distribution peak (a2), and c = 343 m/s the speed of sound













r(i) = compFac · r(i)   with i = [2, 4]    (Eq. 8)


As can be seen, the radius coordinate of the early reflection positions ERP1 and ERP3 is determined with equation 7 and for early reflection positions ERP2 and ERP4 equation 7 is modified to become equation 8.
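Putting equations 7 and 8 together, the four outdoor positions can be sketched as follows; the conversion of preDelay to milliseconds, the default value of rstep (which is introduced in an earlier section), and the fixed example azimuths are illustrative assumptions.

import math

C_SOUND = 343.0  # speed of sound [m/s]

def outdoor_er_positions(a1, a2, p2, sl_distance, dist_alpha, rstep=0.2):
    """Sketch of Eqs. 7 and 8: polar coordinates (radius [m], azimuth [deg])
    of the four outdoor early reflection positions around the listener."""
    pre_delay_ms = 1000.0 * p2 / C_SOUND                   # preDelay [ms] from p2
    comp_factor = math.log10(a1) / math.log10(a2) - 0.05   # compFactor from a1, a2
    azimuths_deg = [10.0, 100.0, 190.0, 280.0]             # example beta(i)

    positions = []
    for i in range(1, 5):                                  # Eq. 7, i = 1..4
        r = (0.7 + (i - 1) * rstep) * sl_distance ** dist_alpha \
            + 0.001 * pre_delay_ms * C_SOUND
        if i in (2, 4):                                    # Eq. 8: compress ERP2 and ERP4
            r *= comp_factor
        positions.append((r, azimuths_deg[i - 1]))
    return positions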


According to the embodiment shown in FIG. 11B, the four early reflection positions ERP1 to ERP4 may be placed so that first ERP1 and second ERP2 early reflection positions are arranged at opposite sides of a first line 1000 crossing the listener position 10 and third ERP3 and fourth ERP4 early reflection positions are arranged at opposite sides of a second line 2000, perpendicular to the first line 1000 and crossing the listener position 10. According to an embodiment, the ER pattern 1 is determined by

    • setting distances of the first ERP1 and second ERP2 early reflection positions from the listener position 10 depending on p2, and
    • setting a ratio between the distances of the first ERP1 and second ERP2 early reflection positions from the listener position 10 on the one hand and distances of the third ERP3 and fourth ERP4 early reflection positions from the listener position 10 on the other hand based on a quotient or difference between a first term depending on a1 and a second term depending on a2.


The level reduction of an acoustical point source in free-field conditions follows a 1/r law, corresponding to an amplitude reduction by a factor of 2 for every doubling of distance, [13]. When the influence of different reflective areas is summarized in a few ERs, this reduction over distance should be reduced by an exponential factor.







ampRefl(r) = 1 / r^distAlpha





The distAlpha values [0.5 . . . 1] can be estimated from the area distribution by e.g.






distAlpha = (log10(a2 / p2) - log10(a1 / p1)) / 4.5



A deviation of about 20% from the calculated distAlpha values may be allowable.


According to an embodiment, distAlpha may be clamped to the range [0.5, 1.0]: if distAlpha < 0.5, set distAlpha = 0.5; if distAlpha > 1.0, set distAlpha = 1.0.



FIG. 12 shows an amplitude reduction over distance of a point source for different distAlpha values.
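A minimal sketch of the distAlpha estimate, its clamping to [0.5, 1.0], and the corrected distance law is given below; the function names are illustrative only.

import math

def estimate_dist_alpha(a1, p1, a2, p2):
    """distAlpha estimated from the two distribution peaks (a1, p1) and
    (a2, p2), clamped to the range [0.5, 1.0] as described above."""
    dist_alpha = (math.log10(a2 / p2) - math.log10(a1 / p1)) / 4.5
    return min(max(dist_alpha, 0.5), 1.0)

def amp_refl(r, dist_alpha):
    """Corrected distance law for a summarized reflection: 1 / r**distAlpha."""
    return 1.0 / (r ** dist_alpha)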


When the geometrical analysis is carried out in the encoder, then only the algorithmic parameters predelay, compFactor and distAlpha have to be transferred to the renderer.


In the case that a more detailed geometrical analysis results in an ER pattern, which cannot be derived by the above defined equations, all single reflection positions and relative amplitudes can be transmitted independently to represent the desired pattern.


Example values from the geometrical analysis for different outdoor scenarios to calculate the ER pattern:

    • [preDelay, compFac, ampFac, distAlpha]
    • Outdoor field surrounded by rocks: [144, 0.47, 2.2, 1]
    • Town street: [109, 0.44, 1, 0.65]
    • Park in town: [57, 0.58, 1, 0.58]


As already described above with regard to FIG. 2, according to an embodiment, the apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to support different determinations of the early reflection pattern. The apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to choose the type of determination dependent on the environment 5. According to an embodiment, the first determination may be performed as described in this section involving the placing of the early reflection positions ERP using the one or more control parameters 120. The first determination may be selected in case of the acoustic environment being an outdoor environment or in case of a pattern type index in a bitstream comprising a representation of an audio signal to be rendered assuming a predetermined state. Optionally, the second determination may be performed using one or more spiral functions, as described above. But it is clear that also other types of determination could be available for selection.


4 Behavior at Portals

A portal describes the border between one acoustic environment and the next, from one room to the next or from a room to a free-field environment. To make the transition through such portals smooth, a cross-fade processing between the associated simple ER patterns is beneficial. Within a region of e.g. d=5 m, the level of the contribution from one acoustic environment is faded out.
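A minimal sketch of such a fade is given below; the linear shape is an assumption, the text only asks for a smooth fade within a region of e.g. d = 5 m around the portal.

def er_crossfade_gain(distance_to_portal_m, fade_region_m=5.0):
    """Sketch of a cross-fade of the ER pattern contribution near a portal:
    full level when the listener is fade_region_m or farther away from the
    portal inside the current environment, zero level at the portal itself."""
    x = max(0.0, min(distance_to_portal_m, fade_region_m))
    return x / fade_region_m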


According to an embodiment, an apparatus for rendering may be configured to support a first manner of determination of the early reflection pattern 1 and a second manner of determination of the early reflection pattern 1, wherein the first manner of determination is different from the second manner of determination, e.g., see section 1 and the description of FIG. 2 for a first manner of determination and section 3 for a second manner of determination. The apparatus may be configured to use the first manner of determination or the second manner of determination in the determining the early reflection pattern 1 depending on a pattern type index. This index may be contained in the one or more early reflection pattern parameters.


5 Summation of Several Audio Sources Into one ER Pattern

In a real environment, every audio source has its individual ER pattern, which is dependent on the source and receiver position. In the simplified simulation, every audio source in one environment has the same ER pattern, which is positioned around the listener. When the source or listener moves, the source-listener distance changes and therefore the important level relation to the direct sound changes. This level relation has to be preserved.


In an embodiment of the invention this can be accommodated in a computationally efficient way as described in FIG. 13. FIG. 13 shows a block diagram illustrating a summation of different audio sources (AS1, AS2, . . . ) into one source signal with distance weighting. First, the level relations between the different sources AS are considered based on the distance values between source and listener. Then the different audio sources AS can be summed up into a single source signal with the appropriate distance weighting. Thus, only one ER pattern 1 has to be auralized covering all audio sources AS in the simulated environment. This pattern 1 follows the lateral movements of the listener (i.e. the translation in x,y,z direction but not the listener's head orientation). Specifically, when the listener moves into a certain direction, the locations ERP of the ERs in the ER patterns 1 move with the listener. They remain, however, in a constant predefined spatial orientation regardless of the listener's head orientation.


According to an embodiment, an apparatus for audio rendering or for generating an early reflection pattern 1 may be configured to render an audio signal of two or more sound sources using a room impulse response whose early reflection portion is determined by an early reflection pattern by forming a weighted sum of a first audio signal of a first sound source positioned at the first sound source position and a second audio signal of a second sound source positioned at the second sound source position and by generating early reflection contribution loudspeaker signals relating to the early reflection portion of the room impulse response by rendering the weighted sum from the early reflection positions. The weighted sum, for example, weights the first audio signal more than the second audio signal if a first distance between the first sound source position and the listener position is smaller than a second distance between the second sound source position and the listener position, and weights the second audio signal more than the first audio signal if the first distance is larger than the second distance.
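A minimal sketch of such a distance-weighted summation of several sources into one ER input signal (cf. FIG. 13) is given below; the 1/d weighting is an assumption for illustration, the text only requires that the level relation between the sources is preserved.

def distance_weighted_mix(signals, distances):
    """Sum several source signals into a single ER input signal, weighting
    each source according to its source-listener distance so that closer
    sources contribute more."""
    n = len(signals[0])
    mixed = [0.0] * n
    for sig, dist in zip(signals, distances):
        w = 1.0 / max(dist, 1e-3)   # closer source -> larger weight
        for k in range(n):
            mixed[k] += w * sig[k]
    return mixed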


According to an embodiment, the early reflection contribution loudspeaker signals relating to the early reflection portion of the room impulse response may be generated by rendering the weighted sum from each early reflection position in a manner level adjusted according to a distance of the respective early reflection position to the listener position.


In FIG. 14 the level relation between the listener, two direct sources and their reflections is visualized. The level of each direct source is dependent on its individual source listener distance. These can vary individually. The common level of the direct sources is calculated by summing up the individual levels. From this level the related reflections are calculated by their distances.



FIG. 14 shows a level relation between the listener, two direct sources and the summed up reflections.


The reduction caused by the source listener distance is individual per source. There is an additional ampCorrection for the complete ER pattern









ampCorrection = ampFac · (1 - absorption)    (Eq. 6)






6 Brief Summary
6.1 Rendering Aspects

A renderer that is equipped to render early reflection patterns in a virtual auditory environment which:

    • do not depend on a detailed room geometry description, e.g., only room dimensions and/or room volume and/or predelay to the late reverberation may be considered.
    • do not depend on the individual source and listener location (the same ER pattern is shared by every audio source in one environment), only on the source-listener distance.
    • are rendered at fixed locations, e.g., at the early reflection positions ERP, relative to the user (rather than at locations in space depending on the source and listener location).
      • In an embodiment, the locations of the pattern's ERs, i.e. the early reflection positions ERP, follow the lateral movements of the listener (i.e. the translation in x,y,z direction but not the listener's head orientation). Specifically, when the listener moves into a certain direction, the locations of the ERs in the ER patterns move with the listener. They remain, however, in a constant predefined spatial orientation regardless of the listener's head orientation.



FIG. 15 illustrates the overall rendering process exemplarily. One or more of the features described with regard to FIG. 15 may be comprised by a herein described apparatus for sound rendering.



FIG. 15 shows an apparatus 200 for sound rendering. The apparatus 200 is configured to render one or more audio signals 2121/2122 of one or more sound sources 2101/2102. An audio signal 212, see 2121 and 2122, can be rendered by considering direct sound, see 2201 and 2202, early reflections, see 230, and/or late reverberation, see 240.


At the direct path 2201/2202 the one or more audio signals 2121/2122 may be rendered to obtain for each of the one or more audio signals 2121/2122 a direct sound contribution loudspeaker signal 2221/2222. For example, for each of the audio signals 2121 and 2122 to be rendered, a distance d1/d2 between the respective associated sound source 2101/2102 and a listener position 10 as well as an angle α1/α2 between the respective sound source 2101/2102 and an orientation of the listener may be considered to determine the respective direct sound contribution loudspeaker signal 2221/2222. The direct sound contribution loudspeaker signals 2221/2222 relate to a direct sound source portion of a room impulse response.


According to an embodiment, the apparatus 200 may be configured to mix 260 the one or more audio signals 2121/2122 of the one or more sound sources 2101/2102 to obtain a mixed audio signal 262. At the mixing 260, the signals 2121/2122 may be panned dependent on the position of the respective associated sound source 2101/2102. For example, for each of the audio signals 2121/2122, a distance d1/d2 between the respective associated sound source 2101/2102 and the listener position 10 is considered at the panning/mixing 260. Alternatively, or additionally, the mixing may be performed as described in section 5.


The apparatus 200 is configured to render an audio signal, e.g., the mixed audio signal 262, e.g., a weighted sum of the audio signals 2121 and 2122, of the one or more sound sources 2101/2102 using the room impulse response whose early reflection portion is determined by an early reflection pattern 1, e.g., at the ER paths 230, e.g., to obtain early reflection contribution loudspeaker signals 232 relating to the early reflection portion of the room impulse response. The early reflection contribution loudspeaker signals 232 may be generated by performing a rendition of the audio signal from the early reflection positions ERP, see ERP1 to ERP6.


Optionally, the apparatus 200 may comprise an ER pattern determiner 270, e.g., an apparatus for generating an early reflection pattern 1. The determination of the early reflection pattern 1 may be performed as described in one of the above mentioned embodiments, e.g., see FIG. 2 and sections 1, 3 and 5. The ER pattern determiner 270 may obtain ER pattern information 310 for generating the early reflection pattern 1. The ER pattern information 310 may comprise one or more of an ER pattern type (indoor/outdoor); a predelay, a compfactor and/or distAlpha (e.g., for outdoor); and room dimensions, room volume and/or predelay time (e.g., for indoor). For example, depending on the determination to be used by the ER pattern determiner 270, the ER pattern determiner 270 receives or reads from a bitstream 300 an environmental description 310, e.g. one or more room acoustical parameters or one or more control parameters, or a bitstream hint 320, e.g., one or more early reflection pattern parameters.


The bitstream 300 may comprise a representation 2141 of the audio signal 2121 associated with the first sound source 2101 and a representation 2142 of the audio signal 2122 associated with the second sound source 2102.


According to an embodiment, the bitstream 300 may contain/comprise one or more of the herein mentioned parameters. The bitstream 300 may comprise a representation of an audio signal 2141/2142 of a sound source 2101/2102 positioned at a sound source position, and one or more early reflection pattern parameters. For example, the bitstream 300 is an audio bitstream with the early reflection parameter inside a header or metadata field of the bitstream, or a file format stream with the early reflection parameter inside a packet of the file format stream and a track of the file format stream comprising an audio bitstream representing the audio signal. The one or more early reflection pattern parameters comprise one or more of a pattern type index, a predelay time to late reverberation, a compression factor, an amplitude correction factor, a distance attenuation exponent, a pattern azimuth parameter, and one or more frequency response parameters.
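For illustration, these parameters could be collected in a structure such as the following sketch; field names and types are assumptions, not a normative bitstream syntax.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ErPatternParameters:
    """Illustrative container for the early reflection pattern parameters
    listed above."""
    pattern_type_index: int                            # e.g. indoor/outdoor/pre-supplied pattern
    predelay_ms: Optional[float] = None                # predelay time to late reverberation
    compression_factor: Optional[float] = None         # compFactor
    amplitude_correction: Optional[float] = None       # ampCorrection / ampFac
    distance_attenuation_exp: Optional[float] = None   # distAlpha
    pattern_azimuth_deg: Optional[float] = None
    frequency_response: Optional[List[float]] = None   # e.g. per-band gains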


At the ER path 230, i.e. at the generation of the early reflection contribution loudspeaker signals 232, the apparatus 200 is optionally configured to render the audio signal of the one or more sound sources 2101/2102 from each early reflection position ERP in a manner spectrally shaped according to one or more frequency response parameters (see FIG. 3C). In FIG. 3C the circles (blue) show the frequency dependency of RT60. The same frequency dependency can be applied on all early reflections. Another frequency dependency can be applied by a bass boost for wall proximity (<2 m) of source or receiver. The one or more frequency response parameters can be contained in a bitstream, which can also comprise a representation of the audio signal or of the individual signals 2121 and 2122 of the sound sources 2101/2102. The one or more frequency response parameters may be contained in one or more early reflection pattern parameters.


The apparatus 200 may be configured to, in performing the rendition of the audio signal of the one or more sound sources 2101/2102 from the early reflection positions ERP, use HRTFs specific for a listener head orientation. HRTF stands for head-related transfer function.


At the optional diffuse path 240 the one or more audio signals 2121/2122 may be rendered to obtain diffuse late reverberation loudspeaker signals 242. The apparatus 200 may be configured to generate a diffuse late reverberation portion of the room impulse response and, for example, use this room impulse response to render the one or more audio signals 2121/2122 in the diffuse path 240. The diffuse late reverberation loudspeaker signals 242 relate to the diffuse late reverberation portion of the room impulse response.


The apparatus 200 may be configured to, in rendering the one or more audio signals 2121/2122, generate a set of loudspeaker signals 252 by forming a summation 250 over direct sound contribution loudspeaker signals 2221/2222 relating to a direct sound source portion of the room impulse response and early reflection contribution loudspeaker signals 232 relating to the early reflection portion of the room impulse response and, optionally, diffuse late reverberation loudspeaker signals 242 relating to the diffuse late reverberation portion of the room impulse response.
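A minimal sketch of this summation per loudspeaker channel is given below, assuming the three contribution sets are available as per-channel sample buffers of equal length.

def sum_loudspeaker_signals(direct, early, late=None):
    """Sketch of the summation 250: direct sound contributions plus early
    reflection contributions plus, optionally, the diffuse late
    reverberation contributions, per loudspeaker channel."""
    out = [d + e for d, e in zip(direct, early)]
    if late is not None:
        out = [o + l for o, l in zip(out, late)]
    return out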


Indoor Rendering





    • a) ER patterns, which cover the gap between direct sound and the start of the late reverb

    • b) ER patterns, which are distributed in the horizontal plane.

    • c) ER patterns, which are controlled by room acoustical parameters like room dimensions, room volume, predelay time to the late reverb, RT60 to set the number of them, their spacing, their amplitude behavior over distance.

    • d) ER patterns, which can have between 2 and 20 ERs.

    • e) ER, for which the positions are determined by spirals.

    • f) ER, for which the positions are determined by two spiral arms.

    • g) ER, for which the positions are determined by










β1 = n · π/3,   β2 = π/8 + n · π/3,   n = [1 : nER/2]   with nER = number of ER

base = 1.85

r1 = distfactor · base^(2·β1/π)

r2 = distfactor · base^(2·β2/π)

(see also the code sketch after this list)

    • h) ER, for which the positions are randomly spread over azimuth up to the predelay time.

    • i) The ER pattern is kept constant independent of source and receiver positions in the room. Note that the form of the pattern stays constant, but it moves with the listener, and the amplitude of the reflections depends on the source-listener distance.

    • j) Use a reduced floor reflection to create a specific sound character.
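A minimal sketch of the two-spiral-arm placement from item g) above follows; distfactor is derived from the room acoustical parameters and is treated here as a plain input, and the returned angles are in radians.

import math

def indoor_spiral_positions(n_er, distfactor):
    """Place n_er early reflections on two spiral arms around the listener
    and return their polar coordinates (radius, angle [rad])."""
    base = 1.85
    positions = []
    for n in range(1, n_er // 2 + 1):
        beta1 = n * math.pi / 3.0
        beta2 = math.pi / 8.0 + n * math.pi / 3.0
        positions.append((distfactor * base ** (2.0 * beta1 / math.pi), beta1))
        positions.append((distfactor * base ** (2.0 * beta2 / math.pi), beta2))
    return positions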





Outdoor Rendering





    • k) Sparse ER patterns, specifically for outdoor scenes, with e.g. 2-6 reflections.

    • l) Use a geometrical analysis of the reflective surfaces of a whole scene to derive the levels and predelays for the ER outdoor patterns.

    • m) Use the summarized distribution over distance to derive the ER pattern parameters.

    • n) Do this analysis over a mesh of possible listening positions in the user reachable area.

    • o) Use the first two peaks of such a distribution, together with the corresponding distances.

    • p) Calculate the predelay, the compression factor and distAlpha from these distribution values.





General





    • q) Apply a fade-in and fade-out of the ER pattern level when changing from one acoustic scene and/or room to another.





6.2 Transmission, Bitstream and Signaling Aspects





    • a) The indoor scenes can be calculated entirely in the decoder/renderer with the room acoustical parameters given by the scene.

    • b) Specifically, outdoor scenes can benefit from a geometrical analysis in the encoder. Only the control parameters of the pattern have to be transmitted. In an embodiment, the parameters include: (algorithm/pattern number, predelay to late reverb, compression factor for pattern compared to predelay, amplitude correction factor, distance attenuation exponent, pattern azimuth parameter, frequency response description)

    • c) For the case that new ER patterns should be used, these can be calculated completely in the encoder and can then be transmitted to the decoder. They are defined by temporal position and relative level of the reflections (regarding the normal distance attenuation) (number of ER, for each: azimuth, elevation, radius, amplitude correction factor, distance attenuation exponent, frequency response description).

    • d) Decoders/renderers can be pre-equipped with a number of ER patterns. In this case, the bitstream signaling includes a field indicating which pre-supplied ER pattern should be used. Furthermore, the parameters for this pattern are signaled, as described in b).





7 Application Fields

The time-consuming exact geometrical calculation of ERs can especially be avoided in applications like

    • Real-time auditory virtual environment
    • Real-time augmented reality


8 Further Embodiments


FIG. 16 shows an embodiment of an apparatus 200 for sound rendering, configured to receive information on a listener position 10 and a sound source position poss. This information may be used to determine a distance d between the listener and the sound source. Optionally, the apparatus 200 may be configured to use the distance as described with regard to the apparatus 200 in FIG. 15. The apparatus 200 is configured to render 202 an audio signal 212 of the sound source using a room impulse response 400 whose early reflection portion 410 is exclusively determined by an early reflection pattern 1. The early reflection pattern 1 is indicative of a constellation of early reflection positions ERP, see ERP1 to ERP4, and is positioned at the listener position 10 in a manner so that the early reflection positions ERP are located around the listener position 10 and at angular directions from the listener position 10 which are invariant with respect to changes in a listener head orientation.


The apparatus 200 can comprise any of the features described above. For example, the apparatus 200 can comprise the apparatus 100 of FIG. 6, FIG. 18 or of FIG. 20 for determining the early reflection pattern for sound rendition. Alternatively, the apparatus 200 can comprise a different apparatus for determining the early reflection pattern for sound rendition, e.g., an apparatus configured to perform the determination as described with regard to FIG. 2 and/or as described in sections 1, 3 and 5.



FIG. 17 shows an embodiment of an apparatus 200 for sound rendering, configured to receive first information on a listener position 10 and a sound source position poss. This information may be used to determine a distance d between the listener and the sound source. Optionally, the apparatus 200 may be configured to use the distance as described with regard to the apparatus 200 in FIG. 15. The apparatus 200 is configured to receive a bitstream 300 comprising, and e.g. read therefrom, a representation 214 of an audio signal of a sound source positioned at the sound source position poss and one or more early reflection pattern parameters 310. The bitstream 300, for example, is an audio bitstream with the early reflection parameter 310 inside a header or metadata field of the bitstream 300, or a file format stream with the early reflection parameter 310 inside a packet of the file format stream and a track of the file format stream comprising an audio bitstream representing the audio signal.


The one or more early reflection pattern parameters 310 may comprise one or more of a pattern type index, a predelay time to late reverberation, a compression factor, an amplitude correction factor, a distance attenuation exponent, a pattern azimuth parameter, and one or more frequency response parameters.


Additionally, the apparatus 200 is configured to determine 270 an early reflection pattern 1 depending on the one or more early reflection pattern parameters 310, e.g., as described with regard to FIG. 2 and/or as described in sections 1, 3 and 5. The early reflection pattern 1 is indicative of a constellation of early reflection positions ERP, see ERP1 to ERP4. For example, the apparatus 200 may be configured to perform the determining 270 of the early reflection pattern 1 so that the number of the early reflection positions ERP is larger the larger a predelay time to the late reverberation is. Additionally, or alternatively, the apparatus 200 is configured to perform the determining 270 of the early reflection pattern 1 so that a distance of a farthest early reflection position ERP from the listener position 10 is larger the larger a predelay time to the late reverberation is. The distance may be smaller than the predelay time.


Further, the apparatus 200 is configured to render 202 the audio signal of the sound source using a room impulse response 400 whose early reflection portion 410 is determined by an early reflection pattern 1. The early reflection pattern 1 is indicative of a constellation of early reflection positions ERP, see ERP1 to ERP4, and is positioned at the listener position 10 in a manner so that the early reflection positions ERP are located around the listener position 10 and at angular directions from the listener position 10 which are invariant with respect to changes in listener head orientation.


According to an embodiment, the apparatus 200 is configured to, if a pattern type index indicates an encoder-parametrized manner of determination, e.g., as described in section 1, read from the bitstream 300 as part of the one or more early reflection pattern parameters 310 one or more of: a number of the early reflections of the early reflection pattern; for each early reflection, an azimuth, an elevation, and a radius, e.g., a distance to the listener position; for each early reflection, an amplitude correction factor; for each early reflection, a distance attenuation exponent; and for each early reflection, a frequency response description.


The apparatus 200 can comprise any of the features described above.



FIG. 18 shows an embodiment of an apparatus 100 for determining an early reflection pattern 1 for sound rendition, configured to receive at least one room acoustical parameter 310 which is representative of an acoustical characteristic of an acoustic environment 5. The apparatus 100 is configured to determine 270 the early reflection pattern 1 in a manner so that a number 272 of the early reflection positions ERP, see ERP1 to ERP6 depends on the at least one room acoustical parameter 310. The early reflection pattern 1 is indicative of a constellation of early reflection positions. The apparatus 100 can comprise especially the features described above with regard to FIG. 2 and sections 1 and 5.



FIG. 19 shows an embodiment of an apparatus 200 for sound rendering, configured to receive information on a listener position 10, a first sound source position posS1 and a second sound source position posS2. The apparatus 200 is configured to render 202 audio signals 2121 and 2122 of the two sound sources 2101 and 2102 using a room impulse response 400 whose early reflection portion 410 is determined by an early reflection pattern 1. The early reflection pattern 1 is indicative of a constellation of early reflection positions ERP, see ERP1 to ERP4, and is positioned at the listener position 10 in a manner so that the early reflection positions ERP are located around the listener position 10 and at angular directions from the listener position 10 which are invariant with respect to changes in listener head orientation. The rendering 202 is further performed by forming a weighted sum 204 of a first audio signal 2121 of a first sound source 2101 positioned at the first sound source position posS1 and a second audio signal 2122 of a second sound source 2102 positioned at the second sound source position posS2. The weighted sum 204 weights w1 the first audio signal 2121 more than the second audio signal 2122 if a first distance d1 between the first sound source position posS1 and the listener position 10 is smaller than a second distance d2 between the second sound source position posS2 and the listener position 10, and weights w2 the second audio signal 2122 more than the first audio signal 2121 if the first distance d1 is larger than the second distance d2. Additionally, the rendering is performed by generating early reflection contribution loudspeaker signals 232 relating to the early reflection portion 410 of the room impulse response 400 by rendering the weighted sum 204 from the early reflection positions ERP. The apparatus 200 can especially comprise features described in section 5. However, it is clear that the apparatus 200 can also comprise an apparatus for determining the ER pattern 1 as described in any of the embodiments above.



FIG. 20 shows an embodiment of an apparatus 100 for determining 270 an early reflection pattern 1 for sound rendition, configured to receive at least one room acoustical parameter 310 which is representative of an acoustical characteristic of an acoustic environment 5. The apparatus 100 is configured to determine 270 the early reflection pattern 1 by parameterizing one or more spiral functions 3 and 4 centered at the listener position 10, and by placing the early reflection positions ERP, see ERP11 to ERP14 and ERP21 to ERP24, using the one or more spiral functions 3 and 4. The early reflection pattern 1 is indicative of a constellation of the early reflection positions ERP. The apparatus 100 can especially comprise features as described with regard to FIG. 2 and section 1, but it is clear that the apparatus can also comprise other herein described features.


9 Implementation Alternatives

Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.


The inventive rendered audio signal or the inventive early reflection pattern information can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.


Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.


Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.


Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.


Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.


In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.


A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.


A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.


A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.


A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.


In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus.


While this invention has been described in terms of several advantageous embodiments, there are alterations, permutations, and equivalents, which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.


10 LITERATURE





    • [1] Jot, J.-M., Real-time spatial processing of sounds for music, multimedia and interactive human-computer interfaces. Audio and Multimedia, 1997 (ACM Multimedia Systems Journal, February 1997). Available from: http://citeseerx.ist.psu.edu/viewdoc/download?dol=10.11.546319&rep=rep1&type=p

    • [2] Jullien, J. P., E. Kahle, S. Winsberg, and O. Warusfel, Some Results on the Objective Characterisation of Room Acoustical Quality in Both Laboratory and Real Environments, 1992, IRCAM, France. Available from: https://kahle.be/articles/IRCAM Room Acoustical Quality 1992.pdf.

    • [3] Jot, J.-M., O. Warusfel, E. Kahle, and M. Mein. Binaural Concert Hall Simulation in Real Time. IEEE 93. 1993. Mohonk (USA).

    • [4] Carpentier, T., A New Implementation of Spat in Max. 15th Sound and Music Computing Conference (SMC2018), 2018, Limassol, Cyprus. https://hal.archives-ouvertes.fr/hal-02094499/document.

    • [5] Väänänen, R. and J. Huopaniemi, Advanced AudioBIFS: Virtual Acoustics Modeling in MPEG-4 Scene Description. IEEE Transactions on Multimedia, 2004. 6(5): p. 661-675.

    • [6] Brinkmann, F., H. Gamper, N. Raghuvanshi, and I. Tashev. Towards Encoding Perceptually Salient Early Reflections for Parametric Spatial Audio Rendering. 148th AES Convention. 2020. Vienna, Austria.

    • [7] Brinkmann, F., et al., A Round Robin on Room Acoustical Simulation and Auralization. J. Acoust. Soc. Am., 2019. 145(4): p. 2746-2760. DOI: https://doi.org/10.1121/1.5096178.

    • [8] Bregman, A. S., Auditory Scene Analysis (The Perceptual Organization of Sound). 1990, MIT Press. ISBN: 9780262022972.

    • [9] Blauert, J., Spatial Hearing: The Psychophysics of Human Sound Localization. 2nd ed. 1997, Cambridge, Massachusetts: MIT Press. ISBN: 0-262-02413-6.

    • [10] Angus, J. A. S., The Effects of Specular Versus Diffuse Reflections on the Frequency Response at the Listener. J. Audio Eng. Soc., 2001. 49(3): p. 125-133.

    • [11] Barron, M. and A. H. Marshall, Spatial Impression due to Early Lateral Reflections in Concert Halls: The Derivation of a Physical Measure. Journal of Sound and Vibration, 1981. 77(2): p. 211-232.

    • [12] Bech, S. Perception of Reproduced Sound: Audibility of Individual Reflections in a Complete Sound Field. 96th AES Convention. 1994. Amsterdam, The Netherlands.

    • [13] Kuttruff, H., Room Acoustics (fourth edition). 2000: Spon Press. ISBN: 0-419-24580-4.




Claims
  • 1. Apparatus for sound rendering, configured to receive information on a listener position and a sound source position; render an audio signal of the sound source using a room impulse response whose early reflection portion is exclusively determined by an early reflection pattern which is indicative of a constellation of early reflection positions, and which is positioned at the listener position in a manner so that the early reflection positions are located around the listener position and at angular directions from the listener position which are invariant with respect to changes in a listener head orientation.
  • 2. Apparatus of claim 1, configured to receive at least one room acoustical parameter which is representative of an acoustical characteristic of an acoustic environment; determine the early reflection pattern in a manner so that a number of the early reflection positions depends on the at least one room acoustical parameter.
  • 3. Apparatus of claim 2, wherein the at least one room acoustical parameter comprises one or more of room dimensions, room volume, and predelay time to the late reverberation.
  • 4. Apparatus of claim 2, wherein the at least one room acoustical parameter comprises merely one parameter selected out of room dimensions, room volume, and predelay time to the late reverberation.
  • 5. Apparatus of claim 2, configured to, depending on the at least one room acoustical parameter, vary a mutual spacing of the early reflection positions and the number of early reflection positions.
  • 6. Apparatus of claim 2, configured to, depending on the at least one room acoustical parameter, parameterize one or more spiral functions centered at the listener position, and place the early reflection positions using the one or more spiral functions.
  • 7. Apparatus of claim 2, configured to read the at least one room acoustical parameter, from a bitstream comprising a representation of an audio signal to be rendered using the early reflection pattern.
  • 8. Apparatus of claim 2, configured to determine the number of early reflection positions so that the number is larger the larger the room dimensions are, or the number is larger the larger the room volume is, or the number is larger the larger the predelay time to the late reverberation is.
  • 9. Apparatus of claim 2, configured to determine the number of early reflection positions so that a farthest early reflection position from the listener position is larger the larger the room dimensions are, or a farthest early reflection position from the listener position is larger the larger the room volume is, or a farthest early reflection position from the listener position is larger the larger the predelay time to the late reverberation is.
  • 10. Apparatus of claim 1, configured to determine the early reflection positions so that same are in a substantially uniform manner angularly distributed around the listener position.
  • 11. Apparatus of claim 1, wherein the early reflection positions lie in a horizontal plane along with the listener position.
  • 12. Apparatus of claim 1, configured to receive at least one room acoustical parameter which is representative of an acoustical characteristic of an acoustic environment; determine the early reflection pattern by parameterizing one or more spiral functions centered at the listener position, and place the early reflection positions using the one or more spiral functions.
  • 13. Apparatus of claim 12, wherein the one or more spiral functions comprise a first spiral function and a second spiral function wherein the apparatus is configured to place a first set of early reflection positions using the first spiral function and a second set of early reflection positions using the second spiral function so that each of the first set of early reflection positions is associated with a corresponding early reflection position of the second set of early reflection and is positioned on an opposite side of a line perpendicularly crossing a connecting line between the respective early reflection position and the corresponding early reflection position.
  • 14. Apparatus of claim 13, wherein for each of first set of early reflection positions, the corresponding early reflection position of the second set of early reflection is angularly offset relative to the connecting line into an angular direction which is common for all early reflection positions of the first set of early reflection positions.
  • 15. Apparatus of claim 12, wherein the one or more spiral functions comprise a first spiral function and a second spiral function wherein the apparatus is configured to place a first set of early reflection positions using the first spiral function and a second set of early reflection positions using the second spiral function so that the first set of early reflection positions is determined in polar coordinates as (r1; β1) and the second set of early reflection positions is determined in polar coordinates as (r2; β2) with
  • 16. Apparatus of claim 12, configured to read the at least one room acoustical parameter, from a bitstream comprising a representation of an audio signal to be rendered using the early reflection pattern.
  • 17. Apparatus of claim 1, further configured to generate a diffuse late reverberation portion of the room impulse response.
  • 18. Apparatus of claim 1, further configured to, in rendering the audio signal, generate a set of loudspeaker signals by forming a summation over direct sound contribution loudspeaker signals relating to a direct sound source portion of the room impulse response and early reflection contribution loudspeaker signals relating to the early reflection portion of the room impulse response.
  • 19. Apparatus of claim 1, further configured to generate early reflection contribution loudspeaker signals relating to the early reflection portion of the room impulse response by performing a rendition of the audio signal of the sound source from the early reflection positions.
  • 20. Apparatus of claim 19, further configured to, in generating the early reflection contribution loudspeaker signals relating to the early reflection portion of the room impulse response by performing a rendition of the audio signal of the sound source from the early reflection positions, render the audio signal of the sound source from each early reflection position in a manner level adjusted according to a distance of the respective early reflection position to the listener position.
  • 21. Apparatus of claim 20, further configured to, in rendering the audio signal of the sound source from each early reflection position in a manner level adjusted according to a distance of the respective early reflection position to the listener position, offset a level at which the audio signal of the sound source is rendered from the respective early reflection position, using a level offset, or amplify same with a level factor, which offset or factor is common for all early reflection positions, and set the level offset or level factor according to an amplitude correction factor.
  • 22. Apparatus of claim 20, further configured to, in rendering the audio signal of the sound source from each early reflection position in a manner level adjusted according to the distance of the respective early reflection position to the listener position, modify the level adjustment according to the distance of the respective early reflection position to the listener position relative to a level adjustment used by the apparatus for rendering of the audio signal from the sound source position according to a distance attenuation exponent.
  • 23. Apparatus of claim 19, further configured to, in generating the early reflection contribution loudspeaker signals relating to the early reflection portion of the room impulse response by performing a rendition of the audio signal of the sound source from the early reflection positions, render the audio signal of the sound source from each early reflection position in a manner spectrally shaped according to one or more frequency response parameters.
  • 24. Apparatus of claim 1, further configured to, in performing the rendition of an audio signal of the sound source from the early reflection positions, use HRTFs specific for a listener head orientation.
  • 25. Bitstream for being subject to sound rendition according to claim 1.
  • 26. Digital storage medium storing a bitstream for being subject to sound rendition according to claim 25.
  • 27. Method for sound rendering, comprising receiving information on a listener position and a sound source position; rendering an audio signal of the sound source using a room impulse response whose early reflection portion is exclusively determined by an early reflection pattern which is indicative of a constellation of early reflection positions, and which is positioned at the listener position in a manner so that the early reflection positions are located around the listener position and at angular directions from the listener position which are invariant with respect to changes in a listener head orientation.
  • 28. A non-transitory digital storage medium having a computer program stored thereon to perform the method for sound rendering, the method comprising receiving information on a listener position and a sound source position; rendering an audio signal of the sound source using a room impulse response whose early reflection portion is exclusively determined by an early reflection pattern which is indicative of a constellation of early reflection positions, and which is positioned at the listener position in a manner so that the early reflection positions are located around the listener position and at angular directions from the listener position which are invariant with respect to changes in a listener head orientation, when said computer program is run by a computer.
Priority Claims (1)
Number Date Country Kind
21207272.2 Nov 2021 EP regional
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of copending International Application No. PCT/EP2022/081089, filed Nov. 8, 2022, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No EP 21207272.2, filed Nov. 9, 2021, which is also incorporated herein by reference in its entirety.

Continuations (1)
Number Date Country
Parent PCT/EP2022/081089 Nov 2022 WO
Child 18655897 US