The disclosure herein generally relates to coding of an audio scene comprising audio objects. In particular, it relates to methods, systems, computer program products and data formats for representing spatial audio, and an associated encoder, decoder and renderer for encoding, decoding and rendering spatial audio.
The introduction of 4G/5G high-speed wireless access to telecommunications networks, combined with the availability of increasingly powerful hardware platforms, have provided a foundation for advanced communications and multimedia services to be deployed more quickly and easily than ever before.
The Third Generation Partnership Project (3GPP) Enhanced Voice Services (EVS) codec has delivered a highly significant improvement in user experience with the introduction of super-wideband (SWB) and full-band (FB) speech and audio coding, together with improved packet loss resiliency. However, extended audio bandwidth is just one of the dimensions required for a truly immersive experience. Support beyond the mono and multi-mono currently offered by EVS is ideally required to immerse the user in a convincing virtual world in a resource-efficient manner.
In addition, the currently specified audio codecs in 3GPP provide suitable quality and compression for stereo content but lack the conversational features (e.g. sufficiently low latency) needed for conversational voice and teleconferencing. These coders also lack multi-channel functionality that is necessary for immersive services, such as live streaming, virtual reality (VR) and immersive teleconferencing.
An extension to the EVS codec has been proposed for Immersive Voice and Audio Services (IVAS) to fill this technology gap and to address the increasing demand for rich multimedia services. In addition, teleconferencing applications over 4G/5G will benefit from an IVAS codec used as an improved conversational coder supporting multi-stream coding (e.g. channel, object and scene-based audio). Use cases for this next generation codec include, but are not limited to, conversational voice, multi-stream teleconferencing, VR conversational and user generated live and non-live content streaming.
While the goal is to develop a single codec with attractive features and performance (e.g. excellent audio quality, low delay, spatial audio coding support, appropriate range of bit rates, high-quality error resiliency, practical implementation complexity), there is currently no finalized agreement on the audio input format of the IVAS codec. Metadata Assisted Spatial Audio Format (MASA) has been proposed as one possible audio input format. However, conventional MASA parameters make certain idealistic assumptions, such as audio capture being done at a single point. In a real world scenario, where a mobile phone or tablet is used as an audio capturing device, such an assumption of sound capture at a single point may not hold. Rather, depending on the form factor of the particular device, the various microphones of the device may be located some distance apart, and the different captured microphone signals may not be fully time-aligned. This is particularly true when consideration is also given to how the source of the audio may move around in space.
Another underlying assumption of the MASA format is that all microphone channels are provided at equal level and that there are no differences in frequency and phase response among them. Again, in a real world scenario, microphone channels may have different direction-dependent frequency and phase characteristics, which may also be time-variant. One could assume, for example, that the audio capturing device is temporarily held such that one of the microphones is occluded or that there is some object in the vicinity of the phone that causes reflections or diffractions of the arriving sound waves. Thus, there are many additional factors to take into account when determining what audio format would be suitable in conjunction with a codec such as the IVAS codec.
Example embodiments will now be described with reference to the accompanying drawings, on which:
All the figures are schematic and generally only show parts which are necessary in order to elucidate the disclosure, whereas other parts may be omitted or merely suggested. Unless otherwise indicated, like reference numerals refer to like parts in different figures.
In view of the above it is thus an object to provide methods, systems and computer program products and a data format for improved representation of spatial audio. An encoder, a decoder and a renderer for spatial audio are also provided.
According to a first aspect, there is provided a method, a system, a computer program product and a data format for representing spatial audio.
According to exemplary embodiments there is provided a method for representing spatial audio, the spatial audio being a combination of directional sound and diffuse sound, comprising:
With the above arrangement, an improved representation of the spatial audio may be achieved, taking into account different properties and/or spatial positions of the plurality of microphones. Moreover, using the metadata in the subsequent processing stages of encoding, decoding or rendering may contribute to faithfully representing and reconstructing the captured audio while representing the audio in a bit rate efficient coded form.
According to exemplary embodiments, combining the created downmix audio signal and the first metadata parameters into a representation of the spatial audio may further comprise including second metadata parameters in the representation of the spatial audio, the second metadata parameters being indicative of a downmix configuration for the input audio signals.
This is advantageous in that it allows for reconstructing (e.g., through an upmixing operation) the input audio signals at a decoder. Moreover, by providing the second metadata, further downmixing may be performed by a separate unit before encoding the representation of the spatial audio to a bit stream.
According to exemplary embodiments the first metadata parameters may be determined for one or more frequency bands of the microphone input audio signals.
This is advantageous in that it allows for individually adapted delay, gain and/or phase adjustment parameters, e.g., considering the different frequency responses for different frequency bands of the microphone signals.
According to exemplary embodiments the downmixing to create a single- or multi-channel downmix audio signal x may be described by:
x = D · m,
where m denotes the vector of input audio signals from the plurality of microphones and D denotes the downmix matrix.
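As a minimal illustration of this operation (a sketch only, using numpy and hypothetical signal dimensions that are not part of the proposed format), the downmix is simply a matrix product applied to the stacked microphone signals:

```python
import numpy as np

# One 20 ms frame at 48 kHz captured by three microphones (illustrative values).
n_mics, n_samples = 3, 960
m = np.random.randn(n_mics, n_samples)   # stacked input audio signals

# Mono downmix: a single row of downmix coefficients.
D = np.array([[0.5, 0.3, 0.2]])

x = D @ m                                # downmix audio signal, shape (1, 960)
```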
According to exemplary embodiments the downmix coefficients may be chosen to select the input audio signal of the microphone currently having the best signal to noise ratio with respect to the directional sound, and to discard the input audio signals from any other microphones.
This is advantageous in that it allows for achieving a good quality representation of the spatial audio with reduced computational complexity at the audio capture unit. In this embodiment, only one input audio signal is chosen to represent the spatial audio in a specific audio frame and/or time frequency tile. Consequently, the computational complexity of the downmixing operation is reduced.
According to exemplary embodiments the selection may be determined on a per Time-Frequency (TF) tile basis.
This is advantageous in that it allows for an improved downmixing operation, e.g. considering the different frequency responses for different frequency bands of the microphone signals.
According to exemplary embodiments the selection may be made for a particular audio frame.
Advantageously, this allows for adaptations with regards to time varying microphone capture signals, and in turn to improved audio quality.
According to exemplary embodiments the downmix coefficients may be chosen to maximize the signal to noise ratio with respect to the directional sound, when combining the input audio signals from the different microphones.
This is advantageous in that it allows for an improved quality of the downmix due to attenuation of unwanted signal components that do not stem from the directional sources.
According to exemplary embodiments the maximizing may be done for a particular frequency band.
According to exemplary embodiments the maximizing may be done for a particular audio frame.
According to exemplary embodiments determining first metadata parameters may include analyzing one or more of: delay, gain and phase characteristics of the input audio signals from the plurality of microphones.
According to exemplary embodiments the first metadata parameters may be determined on a per Time-Frequency (TF) tile basis.
According to exemplary embodiments at least a portion of the downmixing may occur in the audio capture unit.
According to exemplary embodiments at least a portion of the downmixing may occur in an encoder.
According to exemplary embodiments, when detecting more than one source of directional sound, first metadata may be determined for each source.
According to exemplary embodiments the representation of the spatial audio may include at least one of the following parameters: a direction index; a direct-to-total energy ratio; a spread coherence; an arrival time, gain and phase for each microphone; a diffuse-to-total energy ratio; a surround coherence; a remainder-to-total energy ratio; and a distance.
According to exemplary embodiments a metadata parameter of the second or first metadata parameters may indicate whether the created downmix audio signal is generated from: left right stereo signals, planar First Order Ambisonics (FOA) signals, or FOA component signals.
According to exemplary embodiments the representation of the spatial audio may contain metadata parameters organized into a definition field and a selector field, wherein the definition field specifies at least one delay compensation parameter set associated with the plurality of microphones, and the selector field specifies the selection of a delay compensation parameter set.
According to exemplary embodiments the selector field may specify what delay compensation parameter set applies to any given Time-Frequency tile.
According to exemplary embodiments the relative time delay value may be approximately in the interval of [−2.0 ms, 2.0 ms].
According to exemplary embodiments the metadata parameters in the representation of the spatial audio may further include a field specifying the applied gain adjustment and a field specifying the phase adjustment.
According to exemplary embodiments the gain adjustment may be approximately in the interval of [+10 dB, −30 dB].
According to exemplary embodiments at least parts of the first and/or second metadata elements are determined at the audio capturing device using stored lookup-tables.
According to exemplary embodiments at least parts of the first and/or second metadata elements are determined at a remote device connected to the audio capturing device.
According to a second aspect, there is provided a system for representing spatial audio.
According to exemplary embodiments there is provided a system for representing spatial audio, comprising:
According to a third aspect, there is provided a data format for representing spatial audio. The data format may advantageously be used in conjunction with physical components relating to spatial audio, such as audio capturing devices, encoders, decoders and renderers, and with various types of computer program products and other equipment used to transmit spatial audio between devices and/or locations.
According to example embodiments, the data format comprises:
According to one example, the data format is stored in a non-transitory memory.
According to a fourth aspect, there is provided an encoder for encoding a representation of spatial audio.
According to exemplary embodiments there is provided an encoder configured to:
According to a fifth aspect, there is provided a decoder for decoding a representation of spatial audio.
According to exemplary embodiments there is provided a decoder configured to:
According to a sixth aspect, there is provided a renderer for rendering a representation of spatial audio.
According to exemplary embodiments there is provided a renderer configured to:
The second to sixth aspects may generally have the same features and advantages as the first aspect.
Other objectives, features and advantages of the present invention will appear from the following detailed disclosure, from the attached dependent claims as well as from the drawings.
The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
As described above, capturing and representing spatial audio presents a specific set of challenges when the captured audio is to be faithfully reproduced at the receiving end. The various embodiments of the present invention described herein address these issues by including various metadata parameters together with the downmix audio signal when the downmix audio signal is transmitted.
The invention will be described by way of example, and with reference to the MASA audio format. However, it is important to realize that the general principles of the invention are applicable to a wide range of formats that may be used to represent audio, and the description herein is not limited to MASA.
Further, it should be realized that the metadata parameters that are described below are not a complete list of metadata parameters, but that there may be additional metadata parameters (or a smaller subset of metadata parameters) that can be used to convey data about the downmix audio signal to the various devices used in encoding, decoding and rendering the audio.
Also, while the examples herein will be described in the context of an IVAS encoder, it should be noted that this is merely one type of encoder in which the general principles of the invention can be applied, and that there may be many other types of encoders, decoders, and renderers that may be used in conjunction with the various embodiments described herein.
Lastly, it should be noted that while the terms “upmixing” and “downmixing” are used throughout this document, they may not necessarily imply increasing and reducing, respectively, the number of channels. While this may often be the case, it should be realized that either term can refer to either reducing or increasing the number of channels. Thus, both terms fall under the more general concept of “mixing.” Similarly, the term “downmix audio signal” will be used throughout the specification, but it should be realized that occasionally other terms may be used, such as “MASA channel,” “transport channel,” or “downmix channel,” all of which have essentially the same meaning as “downmix audio signal.”
Turning now to
The directional sound is incident from a direction of arrival (DOA) represented by azimuth and elevation angles. The diffuse ambient sound is assumed to be omnidirectional, i.e., spatially invariant or spatially uniform. Also considered in the subsequent discussion is the potential occurrence of a second directional sound source, which is not shown in
Next, the signals from the microphones are downmixed to create a single- or multi-channel downmix audio signal, step 104. There are many reasons to propagate only a mono downmix audio signal. For example, there may be bit rate limitations, or the intent to make a high-quality mono downmix audio signal available after certain proprietary enhancements have been made, such as beamforming and equalization or noise suppression. In other embodiments, the downmix results in a multi-channel downmix audio signal. Generally, the number of channels in the downmix audio signal is lower than the number of input audio signals. In some cases, however, the number of channels in the downmix audio signal may be equal to the number of input audio signals, and the purpose of the downmix is rather to achieve an increased SNR or to reduce the amount of data in the resulting downmix audio signal compared to the input audio signals. This is further elaborated on below.
Propagating the relevant parameters used during the downmix to the IVAS codec as part of the MASA metadata may make it possible to recover the stereo signal and/or a spatial downmix audio signal at the best possible fidelity.
In this scenario, a single MASA channel is obtained by the following downmix operation:
The signals m and x may, during the various processing stages, not necessarily be represented as full-band time signals but possibly also as component signals of various sub-bands in the time or frequency domain (TF tiles). In that case, they would eventually be recombined and potentially be transformed to the time domain before being propagated to the IVAS codec.
Audio encoding/decoding systems typically divide the time-frequency space into time/frequency tiles, e.g., by applying suitable filter banks to the input audio signals. By a time/frequency tile is generally meant a portion of the time-frequency space corresponding to a time interval and a frequency band. The time interval may typically correspond to the duration of a time frame used in the audio encoding/decoding system. The frequency band is a part of the entire frequency range of the audio signal/object that is being encoded or decoded. The frequency band may typically correspond to one or several neighboring frequency bands defined by a filter bank used in the encoding/decoding system. In the case the frequency band corresponds to several neighboring frequency bands defined by the filter bank, this allows for having non-uniform frequency bands in the decoding process of the downmix audio signal, for example, wider frequency bands for higher frequencies of the downmix audio signal.
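For concreteness, the sketch below divides one microphone signal into time/frequency tiles using short FFTs and non-uniform band grouping. It assumes the 20 ms frames, 5 ms subframes and roughly 24 bands used as examples later in this description; an actual codec would use its own filter bank.

```python
import numpy as np

fs = 48000
subframe = fs // 200                              # 5 ms = 240 samples
sig = np.random.randn(fs)                         # 1 s of one microphone signal

# Short FFTs over consecutive subframes give the time dimension of the tiles.
n_sub = len(sig) // subframe
spectra = np.stack([np.fft.rfft(sig[i * subframe:(i + 1) * subframe])
                    for i in range(n_sub)])       # shape (n_sub, n_bins)

# Group FFT bins into non-uniform bands (wider at higher frequencies);
# duplicate edges at the low end are merged, so the count is roughly 24.
edges = np.unique(np.round(np.geomspace(1, spectra.shape[1], 25)).astype(int))
tiles = [spectra[:, lo:hi] for lo, hi in zip(edges[:-1], edges[1:])]
```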
In an implementation using a single MASA channel, there are at least two choices as to how the downmix matrix D can be defined. One choice is to pick the microphone signal having the best signal-to-noise ratio (SNR) with respect to the directional sound. In the configuration shown in
D = (1 0 0).
As the sound source moves relative to the audio capturing device, another, more suitable microphone could be selected, so that either signal m2 or m3 is used as the resulting MASA channel.
When switching between the microphone signals, it is important to make sure that the MASA channel signal x does not suffer from any potential discontinuities. Discontinuities could occur due to different arrival times of the directional sound source at the different mics, or due to different gain or phase characteristics of the acoustic path from the source to the mics. Consequently, the individual delay, gain and phase characteristics of the different microphone inputs must be analyzed and compensated for. The actual microphone signals may therefore undergo a certain delay adjustment and filtering operation before the MASA downmix.
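A simplified sketch of this microphone selection with compensation is given below. It works in the time domain with integer-sample delays and uses frame energy as a crude stand-in for SNR; a real implementation would operate per TF tile with fractional delays and phase compensation. All names are illustrative.

```python
import numpy as np

def select_and_align(mics, delays, gains, frame_len=960):
    """Per frame, pick the delay- and gain-compensated microphone signal
    with the highest energy (a crude proxy for the best SNR).

    mics:   (n_mics, n_samples) captured signals
    delays: per-microphone arrival delay in samples relative to a reference
    gains:  per-microphone linear gain corrections
    """
    aligned = np.stack([g * np.roll(sig, -d)
                        for sig, d, g in zip(mics, delays, gains)])
    out = np.zeros(mics.shape[1])
    for start in range(0, mics.shape[1] - frame_len + 1, frame_len):
        frame = aligned[:, start:start + frame_len]
        best = np.argmax(np.sum(frame ** 2, axis=1))  # most energetic mic
        out[start:start + frame_len] = frame[best]
    return out
```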
In another embodiment, the coefficients of the downmix matrix are set such that the SNR of the MASA channel with respect to the directional source is maximized. This can be achieved, for example, by adding the different microphone signals with properly adjusted weights κ1,1, κ1,2, κ1,3. To make this work effectively, the individual delay, gain and phase characteristics of the different microphone inputs must again be analyzed and compensated for, which can also be understood as acoustic beamforming towards the directional source.
The gain/phase adjustments may be understood as a frequency-selective filtering operation. As such, the corresponding adjustments may also be optimized to accomplish acoustic noise reduction or enhancement of the directional sound signals, for instance following a Wiener approach.
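One simple way to realize such a delay-compensated, SNR-maximizing combination is frequency-domain delay-and-sum beamforming toward the estimated direction of arrival, sketched below under the assumption of known microphone positions and DOA. The Wiener-style refinement of the per-frequency weights mentioned above is not included; the function and parameter names are hypothetical.

```python
import numpy as np

C = 343.0  # approximate speed of sound, m/s

def delay_and_sum(mics, mic_pos, doa_unit, fs):
    """Frequency-domain delay-and-sum downmix toward a known DOA.

    mics:     (n_mics, n_samples) captured signals
    mic_pos:  (n_mics, 3) microphone positions in metres, relative to a reference point
    doa_unit: unit vector from the reference point toward the directional source
    """
    n = mics.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(mics, axis=1)

    # Relative arrival times of a plane wave from the DOA at each microphone.
    tau = -(mic_pos @ doa_unit) / C                          # seconds
    steering = np.exp(2j * np.pi * freqs[None, :] * tau[:, None])

    # Phase-align toward the source and average: the directional component
    # adds coherently while uncorrelated noise is attenuated.
    x_spec = np.mean(spectra * steering, axis=0)
    return np.fft.irfft(x_spec, n=n)
```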
As a further variation, consider an example with three MASA channels. In that case, the downmix matrix D can be defined by the following 3-by-3 matrix:
Consequently, there are now three signals x1, x2, x3 (instead of one in the first example) that can be coded with the IVAS codec.
The first MASA channel may be generated as described in the first example. The second MASA channel can be used to carry a second directional sound, if there is one. The downmix matrix coefficients can then be selected according to similar principles as for the first MASA channel, however, such that the SNR of the second directional sound is maximized. The downmix matrix coefficients κ3,1, κ3,2, κ3,3 for the third MASA channel may be adapted to extract the diffuse sound component while minimizing the directional sounds.
Typically, stereo capture of dominant directional sources in the presence of some ambient sound may be performed, as shown in
In one embodiment, three main metadata parameters are associated with each captured audio signal: a relative time delay value, a gain value and a phase value. In accordance with a general approach, the MASA channel is obtained according to the following operations:
The delay adjustment term τi in the above expression can be interpreted as an arrival time of a plane sound wave from the direction of the directional source, and as such, it is also conveniently expressed as an arrival time relative to the time of arrival of the sound wave at a reference point τref, such as the geometric center of the audio capturing device 202, although any reference point could be used. For example, when two microphones are used, the delay adjustment can be formulated as the difference between τ1 and τ2, which is equivalent to moving the reference point to the position of the second microphone. In one embodiment, the arrival time parameter allows modelling relative arrival times in an interval of [−2.0 ms, 2.0 ms], which corresponds to a maximum displacement of a microphone relative to the origin of about 68 cm.
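As a quick check of these figures: the relative arrival time of a plane wave at a microphone displaced d metres from the reference point is at most d/c, so a range of ±2.0 ms corresponds to roughly 0.69 m at c ≈ 343 m/s, consistent with the stated 68 cm. A minimal sketch with a hypothetical helper:

```python
import numpy as np

C = 343.0  # approximate speed of sound in air, m/s

def relative_arrival_time(mic_pos, doa_unit):
    """Arrival time of a plane wave at mic_pos (metres, relative to the
    reference point) compared to its arrival at the reference point itself;
    negative values mean the microphone receives the wave earlier."""
    return -float(np.dot(mic_pos, doa_unit)) / C

# Maximum microphone displacement representable by +/- 2.0 ms:
print(2.0e-3 * C)   # ~0.686 m, i.e. about 68 cm
```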
As to the gain and phase adjustments, in one embodiment they are parameterized for each TF tile, such that gain changes can be modelled in the range [+10 dB, −30 dB], while phase changes can be represented in the range [−Pi, +Pi].
In the fundamental case with only a single dominant directional source, such as source 206 shown in
In a more complex case, where there may be multiple sources 206 of directional sound, one source from a first direction could be dominant in a certain frequency band, while a different source from another direction may be dominant in another frequency band. In such a scenario, the delay adjustment is instead advantageously carried out for each frequency band.
In one embodiment, this can be done by delay compensating microphone signals in a given Time-Frequency (TF) tile with respect to the sound direction that is found dominant. If no dominant sound direction is detected in the TF tile, no delay compensation is carried out.
In a different embodiment, the microphone signals in a given TF tile can be delay compensated with the goal of maximizing a signal-to-noise ratio (SNR) with respect to the directional sound, as captured by all the microphones.
In one embodiment, a suitable limit on the number of different sources for which delay compensation can be done is three. This offers the possibility to perform delay compensation in a TF tile either with respect to one out of three dominant sources, or not at all. The corresponding set of delay compensation values (a set applies to all microphone signals) can thus be signaled by only two bits per TF tile. This covers most practically relevant capture scenarios and has the advantage that the amount of metadata, and thus its bit rate, remains low.
Another possible scenario is where First Order Ambisonics (FOA) signals rather than stereo signals are captured and downmixed into e.g. a single MASA channel. The concept of FOA is well known to those having ordinary skill in the art, but can be briefly described as a method for recording, mixing and playing back three-dimensional 360-degree audio. The basic approach of Ambisonics is to treat an audio scene as a full 360-degree sphere of sound coming from different directions around a center point where the microphone is placed while recording, or where the listener's ‘sweet spot’ is located while playing back.
Planar FOA and FOA capture with downmix to a single MASA channel are relatively straightforward extensions of the stereo capture case described above. The planar FOA case is characterized by a microphone triple, such as the one shown in
The delay compensation, amplitude and phase adjustment parameters can be used to recover the three or, respectively, four original capture signals and to allow a more faithful spatial render using the MASA metadata than would be possible just based on the mono downmix signal. Alternatively, the delay compensation, amplitude and phase adjustment parameters can be used to generate a more accurate (planar) FOA representation that comes closer to the one that would have been captured with a regular microphone grid.
In yet another scenario, planar FOA or FOA may be captured and downmixed into two or more MASA channels. This case is an extension of the previous case with the difference that the captured three or four microphone signals are downmixed to two rather than only a single MASA channel. The same principles apply, where the purpose of providing delay compensation, amplitude and phase adjustment parameters is to enable best possible reconstruction of the original signals prior to the downmix.
As the skilled reader realizes, in order to accommodate all these use scenarios, the representation of the spatial audio will need to include metadata about not only the delay, gain and phase, but also parameters that are indicative of the downmix configuration for the downmix audio signal.
Returning now to
To support the above described use cases with downmix to a single or multiple MASA channels, two metadata elements are used. One metadata element is signal independent configuration metadata that is indicative of the downmix. This metadata element is described below in conjunction with
Table 1A, shown in
Table 1B, shown in
In the case where the downmix metadata is signal dependent, some further details are needed, as will now be described. As indicated in Table 1B for the specific case when the transport signal is a mono signal obtained through downmix of multi-microphone signals, these details are provided in a signal dependent metadata field. The information provided in that metadata field describes the applied delay adjustment (with the possible purpose of acoustical beamforming towards directional sources) and filtering of the microphone signals (with the possible purpose of equalization/noise suppression) prior to the downmix. This offers additional information that can benefit encoding, decoding, and/or rendering.
In one embodiment, the downmix metadata comprises four fields, a definition and selector field for signaling the applied delay compensation, followed by two fields signaling the applied gain and phase adjustments, respectively.
The number of downmixed microphone signals n is signaled by the ‘Bit value’ field of Table 1B, i.e., n=2 for stereo downmix (‘Bit value=01’), n=3 for planar FOA downmix (‘Bit value=10’) and n=4 for FOA downmix (‘Bit value=11’).
Up to three different sets of delay compensation values for the up to n microphone signals can be defined and signaled per TF tile. Each set is respective of the direction of a directional source. The definition of the sets of delay compensation values and the signaling of which set applies to which TF tile are done with two separate (definition and selector) fields.
In one embodiment, the definition field is an n×3 matrix with 8-bit elements Bi,j encoding the applied delay compensation Δτi,j. These parameters are respective of the set to which they belong, i.e. respective of the direction of a directional source (j=1 . . . 3). The elements Bi,j are further respective of the capturing microphone (or the associated capture signal) (i=1 . . . n, n≤4). This is schematically illustrated in Table 2, shown in
The delay compensation parameter represents a relative arrival time of an assumed plane sound wave from the direction of a source compared to the wave's arrival at an (arbitrary) geometric center point of the audio capturing device 202. The coding of that parameter with the 8-bit integer code word B is done according to the following equation:
This quantizes the relative delay parameter linearly in an interval of [−2.0 ms, 2.0 ms], which corresponds to a maximum displacement of a microphone relative to the origin of about 68 cm. This is, of course, merely one example and other quantization characteristics and resolutions may also be considered.
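The coding equation itself is not reproduced in this excerpt; a linear 8-bit mapping consistent with the description above (quantizing [−2.0 ms, 2.0 ms] to code words 0..255) could look as follows. The exact formula of the specification, e.g. its rounding and end-point handling, may differ.

```python
import numpy as np

T_MAX = 2.0e-3   # +/- 2.0 ms range stated above

def encode_delay(tau_seconds):
    """8-bit code word for a relative delay, linear over [-2 ms, +2 ms]."""
    tau = np.clip(tau_seconds, -T_MAX, T_MAX)
    return int(round((tau + T_MAX) / (2.0 * T_MAX) * 255))

def decode_delay(code):
    """Inverse mapping back to seconds."""
    return code / 255.0 * 2.0 * T_MAX - T_MAX

print(encode_delay(0.5e-3), decode_delay(encode_delay(0.5e-3)))  # 159, ~0.494 ms
```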
The signaling of which set of delay compensation values applies to which TF tile is done using a selector field representing the 4*24 TF tiles in a 20 ms frame, which assumes 4 subframes in a 20 ms frame and 24 frequency bands. Each field element contains a 2-bit entry encoding set 1 . . . 3 of delay compensation values with the respective codes ‘01’, ‘10’, and ‘11’. A ‘00’ entry is used if no delay compensation applies for the TF tile. This is schematically illustrated in Table 3, shown in
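As a rough illustration of the amount of metadata involved, the 4*24 = 96 two-bit selector entries fit into 24 bytes per 20 ms frame. A hypothetical packing routine is sketched below; the actual bitstream layout may of course differ.

```python
import numpy as np

N_SUBFRAMES, N_BANDS = 4, 24   # 4*24 TF tiles per 20 ms frame

def pack_selector(sel):
    """Pack a (4, 24) array of 2-bit entries (0 = no delay compensation,
    1..3 = delay-compensation set) into bytes, four entries per byte."""
    flat = np.asarray(sel, dtype=np.uint8).reshape(-1)
    assert flat.size == N_SUBFRAMES * N_BANDS and flat.max() <= 3
    packed = bytearray()
    for i in range(0, flat.size, 4):
        b = 0
        for v in flat[i:i + 4]:
            b = (b << 2) | int(v)
        packed.append(b)
    return bytes(packed)

sel = np.zeros((N_SUBFRAMES, N_BANDS), dtype=np.uint8)
sel[0, 3] = 1                        # set 1 applies to subframe 0, band 3
print(len(pack_selector(sel)))       # 24 bytes = 192 bits per frame
```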
The gain adjustment is signaled in 2-4 metadata fields, one for each microphone. Each field is a matrix of 8-bit gain adjustment codes Bα, respective for the 4*24 TF tiles in a 20 ms frame. The coding of the gain adjustment parameters with the integer code word Bα is done according to the following equation:
The 2-4 metadata fields for each microphone are organized as shown in Table 4, shown in
Phase adjustment is signaled analogously to the gain adjustments, in 2-4 metadata fields, one for each microphone. Each field is a matrix of 8-bit phase adjustment codes Bφ, respective for the 4*24 TF tiles in a 20 ms frame. The coding of the phase adjustment parameters with the integer code word Bφ is done according to the following equation:
The 2-4 metadata fields for each microphone are organized as shown in Table 4, with the only difference that the field elements are the phase adjustment code words Bφ.
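The exact coding equations for the gain and phase adjustments are likewise not reproduced in this excerpt. Assuming linear 8-bit mappings over the stated ranges of [+10 dB, −30 dB] and [−π, π], a sketch could look as follows:

```python
import numpy as np

GAIN_MAX_DB, GAIN_MIN_DB = 10.0, -30.0   # range stated above
PHASE_MIN, PHASE_MAX = -np.pi, np.pi

def encode_gain_db(g_db):
    """8-bit code for a gain adjustment in dB (assumed linear mapping)."""
    g_db = np.clip(g_db, GAIN_MIN_DB, GAIN_MAX_DB)
    return int(round((g_db - GAIN_MIN_DB) / (GAIN_MAX_DB - GAIN_MIN_DB) * 255))

def encode_phase(phi):
    """8-bit code for a phase adjustment in radians (assumed linear mapping)."""
    phi = np.clip(phi, PHASE_MIN, PHASE_MAX)
    return int(round((phi - PHASE_MIN) / (PHASE_MAX - PHASE_MIN) * 255))

print(encode_gain_db(0.0), encode_phase(0.0))   # 191, 128 (mid-range codes)
```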
This representation of MASA signals, which includes the associated metadata, can then be used by encoders, decoders, renderers and other types of audio equipment to transmit, receive and faithfully restore the recorded spatial sound environment. The techniques for doing this are well known to those having ordinary skill in the art and can easily be adapted to fit the representation of spatial audio described herein. Therefore, no further discussion about these specific devices is deemed necessary in this context.
As understood by the skilled person, the metadata elements described above may reside or be determined in different ways. For example, the metadata may be determined locally on a device (such as an audio capturing device, an encoder device, etc.), may be derived from other data (e.g. from a cloud or otherwise remote service), or may be stored in a table of predetermined values. For example, based on the delay adjustment between microphones, the delay compensation value (
The encoder 704 receives the representation of spatial audio from the audio capturing device 202. That is, the encoder 704 receives a data format comprising a single- or multi-channel downmix audio signal resulting from a downmix of input audio signals from a plurality of microphones in an audio capture unit capturing the spatial audio, and first metadata parameters indicative of a downmix configuration for the input audio signals, a relative time delay value, a gain value, and/or a phase value associated with each input audio signal. It should be noted that the data format may be stored in a non-transitory memory before/after being received by the encoder. The encoder 704 then encodes the single- or multi-channel downmix audio signal into a bitstream using the first metadata. In some embodiments, the encoder 704 can be an IVAS encoder, as described above, but as the skilled person realizes, other types of encoders may have similar capabilities and may also be used.
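Purely as an illustration of what such a data format might bundle together, the sketch below collects the downmix signal and the metadata fields discussed above into one container. All field names and shapes are hypothetical and not taken from any specification.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SpatialAudioRepresentation:
    """Hypothetical container for the representation of spatial audio."""
    downmix: np.ndarray           # (n_channels, n_samples) downmix audio signal
    downmix_config: int           # second metadata, e.g. the 'Bit value' of Table 1B
    delay_definition: np.ndarray  # (n_mics, 3) 8-bit delay-compensation codes
    delay_selector: np.ndarray    # (4, 24) 2-bit set selection per TF tile
    gain_codes: np.ndarray        # (n_mics, 4, 24) 8-bit gain-adjustment codes
    phase_codes: np.ndarray       # (n_mics, 4, 24) 8-bit phase-adjustment codes
```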
The encoded bitstream, which is indicative of the coded representation of the spatial audio, is then received by the decoder 706. The decoder 706 decodes the bitstream into an approximation of the spatial audio, by using the metadata parameters that are included in the bitstream from the encoder 704. Finally, the renderer 708 receives the decoded representation of the spatial audio and renders the spatial audio using the metadata, to create a faithful reproduction of the spatial audio at the receiving end, for example by means of one or more speakers.
Further embodiments of the present disclosure will become apparent to a person skilled in the art after studying the description above. Even though the present description and drawings disclose embodiments and examples, the disclosure is not restricted to these specific examples. Numerous modifications and variations can be made without departing from the scope of the present disclosure, which is defined by the accompanying claims. Any reference signs appearing in the claims are not to be understood as limiting their scope.
Additionally, variations to the disclosed embodiments can be understood and effected by the skilled person in practicing the disclosure, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The systems and methods disclosed hereinabove may be implemented as software, firmware, hardware or a combination thereof. In a hardware implementation, the division of tasks between functional units referred to in the above description does not necessarily correspond to the division into physical units; to the contrary, one physical component may have multiple functionalities, and one task may be carried out by several physical components in cooperation. Certain components or all components may be implemented as software executed by a digital signal processor or microprocessor, or be implemented as hardware or as an application-specific integrated circuit. Such software may be distributed on computer readable media, which may comprise computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to a person skilled in the art, the term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Further, it is well known to the skilled person that communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
This application claims priority to U.S. patent application Ser. No. 17/293,463 filed May 12, 2021, which is a U.S. National Stage application under 35 U.S.C. § 371 of International Application No. PCT/US2019/060862 filed Nov. 12, 2019, which claims priority to U.S. Provisional Patent Application No. 62/760,262 filed Nov. 13, 2018; U.S. Provisional Patent Application No. 62/795,248 filed Jan. 22, 2019; U.S. Provisional Patent Application No. 62/828,038 filed Apr. 2, 2019; and U.S. Provisional Patent Application No. 62/926,719 filed Oct. 28, 2019, each of which is hereby incorporated by reference.