The proposed technology generally relates to sound reproduction, audio processing, and more particularly to a method and system for processing frequency response functions such as Head-Related Transfer Functions (HRTFs) for audio applications, a method and system for processing a set including HRTFs for at least two different directions of sound incidence, a system for generating an audio filter, a database comprising a set of processed HRTFs, an audio filter design procedure, and an audio filter or corresponding audio processing system as well as a corresponding overall audio system and computer program and computer-program product.
Head-Related Transfer Functions (HRTFs) describe how the sound pressure at the ears is affected by sound waves arriving from different directions. For a specific discrete direction of sound incidence to the head, a left-ear HRTF describes the transfer function from a sound source (at some specified distance in that direction) to the left ear, and a right ear HRTF describes the corresponding transfer function to the right ear.
A database of HRTFs may include HRTFs for a large number of directions. By way of example, HRTF databases can be used for headphone binaural signal generation in multiple different applications [1].
One application relying on a database of HRTFs is digital audio filter design for a microphone array which is used to estimate the binaural ear signals which would be present at the ears of a listener in a sound field. Such a microphone array is known as a Virtual Artificial Head (VAH) in the literature [2]. A related application is the audio filter design for producing a binaural signal from an Ambisonics signal [3]. Binaural signal generation using a VAH or Ambisonics has multiple further applications, like streaming of binaural audio from a concert, or from a teleconference.
It is a general object to provide improvements with respect to processing of frequency response functions such as HRTF representations for audio applications.
It is a specific object to provide a method and system for processing frequency response functions such as Head-Related Transfer Functions (HRTFs) for audio applications.
It is a particular object to provide a method for processing a set including Head-Related Transfer Functions, HRTFs, for at least two different directions of sound incidence.
It is also an object to provide a system for processing a set including Head-Related Transfer Functions, HRTFs, for at least two different directions of sound incidence.
Another object is to provide a system for generating an audio filter.
Yet another object is to provide a database comprising a set of processed Head-Related Transfer Functions, HRTFs.
Still another object is to provide an audio filter design procedure based on such a method for processing a set of HRTFs.
It is another object to provide an audio filter or corresponding audio processing system as well as a corresponding overall audio system.
It is also an object to provide a computer program and computer-program product.
These and other objects are met by embodiments of the proposed technology.
According to a first aspect, there is provided a method for processing a set including Head-Related Transfer Functions, HRTFs, for at least two different directions of sound incidence to enable improved generation, modeling or simulation of ear signals corresponding to a diffuse sound field. For each direction of sound incidence to the head, a left-ear HRTF represents the transfer function from a sound source to the left ear, and a right-ear HRTF represents the corresponding transfer function to the right ear. The method comprises:
According to a second aspect, there is provided a system for processing a set including Head-Related Transfer Functions, HRTFs, for at least two different directions of sound incidence to improve generating, modeling or simulating ear signals corresponding to a diffuse sound field.
According to a third aspect, there is provided a system for generating an audio filter comprising such a system for processing a set including Head-Related Transfer Functions, HRTFs, for at least two different directions of sound incidence.
According to a fourth aspect, there is provided a database comprising a set of Head-Related Transfer Functions, HRTFs, processed according to the method of the first aspect.
According to a fifth aspect, there is provided an audio filter design procedure comprising the method for processing a set of Head-Related Transfer Functions, HRTFs, for at least two different directions of incident sound according to the first aspect, and the further step of generating an audio filter based on the processed set of HRTFs.
According to a sixth aspect, there is provided an audio filter generated according to such an audio filter design procedure.
According to a seventh aspect, there is provided an audio system comprising such an audio filter.
According to an eighth aspect, there is provided a computer program (and corresponding computer-program product) comprising instructions, which when executed by a computer, cause the computer to perform the method of the first aspect.
By way of example, the proposed technology relates to a method and/or a corresponding system for processing at least one, or normally at least two, Head-Related Transfer Function (HRTF) representation(s), or similar representation of a frequency response function, for an audio system/application, characterized by applying a phase adjustment to said at least one, or normally at least two, HRTF representation(s) to reduce Interaural Time Differences (ITD) above a threshold frequency or in a frequency band above the threshold frequency. Each so-called HRTF representation, for a given direction, normally corresponds to a left-ear HRTF and right-ear HRTF.
The step of applying a phase adjustment may be performed while adding a direction-dependent Interaural Phase Difference (IPD) for reducing or lowering Interaural Coherence at high frequencies (above a threshold frequency or in a frequency band above the threshold frequency) in diffuse sound fields.
Optionally, the ITD may be gradually reduced with increasing frequency, above the threshold frequency. In other words, applying a phase adjustment to said at least one HRTF representation may be performed such that said ITD is gradually reduced with increasing frequency, above the threshold frequency.
Expressed slightly differently, the proposed technology relates to pre-processing of one or generally more HRTF representations, sometimes referred to as HRTFs, or equivalent frequency response functions. By way of example, the invention may be based on applying a phase adjustment to each of the concerned HRTFs, uniquely for each ear and direction, in order to enable a reduction of the Interaural Time Differences (ITD) at high frequencies that occurs between the left and right ear HRTFs for sound incidence from any given direction.
An illustrative purpose is a perceptually transparent simplification of the HRTF responses which improves performance in for example the above-mentioned applications as well as other applications.
In a sense, the proposed technology may be regarded as pre-processing of an HRTF database or a selected part thereof such as one or more HRTF representations, or equivalent frequency response functions, to enable decreased ITD above a threshold frequency, while adding a direction-dependent Interaural Phase Difference (IPD) at high frequencies such that when modeling a diffuse sound field, Interaural Coherence becomes similar to that of unprocessed HRTFs.
Other advantages will be appreciated when reading the following detailed description of non-limiting embodiments of the invention.
The embodiments, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:
Throughout the drawings, the same reference designations are used for similar or corresponding elements.
It may be useful to start with an audio system overview with reference to
A Head Related Impulse Response (HRIR) is the impulse response of a sound source in a (preferably) anechoic environment to the ears [1]. The frequency domain equivalent is called a Head-Related Transfer Function (HRTF).
For example, HRIRs can be measured by putting microphones in or at the ears of a person or artificial head and measuring the impulse responses to the ears from a sound source in a specific direction. Taking HRIR measurements in a number of directions produces a set of HRIRs (or corresponding HRTFs) which is usually denoted as an HRTF database. One example is found in [4]. HRTFs can also be acquired by for example computer modelling and/or numerical simulation and are sometimes simplified in some regard compared to the features of HRTFs from a real person, e.g. using a spherical head model.
According to a first aspect, there is provided a method for processing a set including Head-Related Transfer Functions, HRTFs, for at least two different directions of sound incidence to enable improved generation, modeling or simulation of ear signals corresponding to a diffuse sound field. For each direction of sound incidence to the head, a left-ear HRTF represents the transfer function from a sound source to the left ear, and a right-ear HRTF represents the corresponding transfer function to the right ear.
Basically, the method comprises:
In a particular example, the set of HRTFs, before applying the phase adjustment, are referred to as unprocessed HRTFs, and the step of adding a direction-dependent IPD is performed such that when modeling or simulating a diffuse sound field, the Interaural Coherence becomes perceptually the same or similar to that of the unprocessed HRTFs.
For example, the direction-dependent IPD may be set to be substantially equal to an IPD of the unprocessed HRTFs at a reference frequency where diffuse field Interaural Coherence is below a threshold value.
In a particular example, the direction-dependent IPD may be substantially constant as a function of frequency above the threshold frequency.
As a non-limiting example, the step of applying a phase adjustment and the step of adding a direction-dependent IPD may provide a modified HRTFmod(ω) for each direction and ear given by:

HRTFmod(ω) = HRTF(ω), for ω < ωlim
HRTFmod(ω) = |HRTF(ω)| e^(−i(ωτm − a(ω)θref)), for ω ≥ ωlim

where ωlim is the threshold frequency in rad/s, τm is a time delay parameter, θref is a phase angle, and a(ω) is a frequency-dependent real number for controlling the amount of phase modification applied at different frequencies. In the equation above, e^( ) is the exponential function and i is the imaginary unit. More information on context and configuration will be given later on.
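As a concrete sketch, the piecewise modification above can be applied to a sampled HRTF spectrum as follows (Python/NumPy; the function name and the discrete frequency grid are illustrative assumptions, not part of the proposed method itself):

```python
import numpy as np

def modify_hrtf(hrtf, freqs, f_lim, tau_m, theta_ref, a=1.0):
    """Apply the piecewise phase adjustment to one ear/direction.

    hrtf      -- complex HRTF spectrum sampled on the frequency grid freqs (Hz)
    f_lim     -- threshold frequency in Hz (omega_lim / (2*pi))
    tau_m     -- common time-delay parameter in seconds (same for all
                 directions/ears, so that high-frequency ITD is removed)
    theta_ref -- direction-dependent phase angle in radians (introduces IPD)
    a         -- amount of phase modification, here a constant a(omega)
    """
    omega = 2.0 * np.pi * freqs
    modified = hrtf.copy()
    above = freqs >= f_lim
    # Below f_lim the HRTF is left untouched; above it the magnitude is
    # kept and the phase is replaced by -(omega*tau_m - a*theta_ref).
    modified[above] = np.abs(hrtf[above]) * np.exp(
        -1j * (omega[above] * tau_m - a * theta_ref)
    )
    return modified
```

Because τm is shared by all directions and ears, the linear-phase term carries no interaural difference above f_lim, while the direction-dependent θref reinstates an IPD.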
By way of example, the method may be performed for a set of HRTFs covering multiple directions, and the diffuse sound field (or the ear signals corresponding to a diffuse sound field) may be modelled or simulated based on processed HRTFs for multiple directions.
For example, the diffuse sound field may be modelled or simulated based on processed HRTFs in combination with an additional set of HRTFs, in total constituting an overall set of HRTFs for multiple directions.
In a particular example, the step of applying a phase adjustment to each HRTF may be performed such that the ITD is gradually reduced with increasing frequency, above the threshold frequency.
Optionally, the step of adding a direction-dependent Interaural Phase Difference may also be applied gradually with increasing frequency, above the threshold frequency.
By way of example, the method may be performed for processing an HRTF database or a selected subset of the HRTFs of the HRTF database.
According to a second aspect, there is provided a system 10; 20 for processing a set including Head-Related Transfer Functions, HRTFs, for at least two different directions of sound incidence to improve generating, modeling or simulating ear signals corresponding to a diffuse sound field.
For each direction of sound incidence to the head, a left-ear HRTF represents the transfer function from a sound source to the left ear, and a right-ear HRTF represents the corresponding transfer function to the right ear.
The system is configured to apply a phase adjustment to each HRTF, for each ear and direction, for reducing Interaural Time Differences, ITD, above a threshold frequency or in a frequency band above the threshold frequency. The system is also configured to add a direction-dependent Interaural Phase Difference, IPD, to each HRTF, for each ear and direction, for reducing Interaural Coherence when modelling or simulating the diffuse sound field.
In a particular example, the set of HRTFs, before applying the phase adjustment, may be referred to as unprocessed HRTFs, and the system may be configured to add the direction-dependent IPD such that when modeling or simulating a diffuse sound field, the Interaural Coherence becomes perceptually the same or similar to that of the unprocessed HRTFs.
For example, the system may be configured to set the direction-dependent IPD to be substantially equal to an IPD of the unprocessed HRTFs at a reference frequency where diffuse field Interaural Coherence is below a threshold value.
In a particular example, the system is configured to set the direction-dependent IPD to be substantially constant as a function of frequency above the threshold frequency.
As a non-limiting example, the system may be configured to provide a modified HRTFmod(ω) for each direction and ear given by:

HRTFmod(ω) = HRTF(ω), for ω < ωlim
HRTFmod(ω) = |HRTF(ω)| e^(−i(ωτm − a(ω)θref)), for ω ≥ ωlim

where ωlim is the threshold frequency in rad/s, τm is a time delay parameter, θref is a phase angle, and a(ω) is a frequency-dependent real number for controlling the amount of phase modification applied at different frequencies. More information on context and configuration will be given later on.
By way of example, the system may be configured to perform the processing for a set of HRTFs covering multiple directions, and to model or simulate the diffuse sound field based on processed HRTFs for multiple directions.
For example, the system may be configured to model or simulate the diffuse sound field based on processed HRTFs in combination with an additional set of (unprocessed or differently processed) HRTFs, in total constituting an overall set of HRTFs for multiple directions.
In a particular example, the system may be configured to process at least a subset of the HRTFs of an HRTF database.
There is also provided a system for generating an audio filter comprising a system for processing a set including Head-Related Transfer Functions, HRTFs, for at least two different directions of sound incidence.
In addition, there is provided a database comprising a set of Head-Related Transfer Functions, HRTFs, processed according to the method described herein.
Furthermore, the proposed technology also provides an overall audio filter design procedure comprising the above-described method for processing a set of Head-Related Transfer Functions, HRTFs, for at least two different directions of incident sound, and the further step S3 of generating an audio filter based on the processed set of HRTFs.
In a particular example, the audio filter design procedure is adapted for binaural signal generation.
For example, the audio filter design procedure may be adapted to provide an audio filter for generating a binaural signal for a virtual sound source, or for estimating a binaural signal from an Ambisonics signal, or for providing an audio filter design for a microphone array such as a Virtual Artificial Head.
According to yet another aspect, there is provided an audio filter generated according to such an audio filter design procedure.
There is also provided an audio system comprising such an audio filter.
For a better understanding of the invention, reference will now be given to the following non-limiting examples of embodiments and overall contextual scenarios and applications.
In this example, the audio processing system 200 may be based on an audio filter 210, which performs HRTF-based filtering, i.e. HRTF filtering or processing.
The filter design procedure, including HRTF pre-processing, may be performed at least partially in an HRTF database 10, from which one or more suitable pre-processed HRTF representations is/are selected and integrated and/or merged into the (digital) audio filter.
Alternatively, or as a complement, the HRTF pre-processing may optionally be performed by a filter design/configuration module 20 based on one or more HRTF representations extracted from the HRTF database 10. The filter design/configuration module 20 may thus correspond to a system for processing a set of HRTFs as described above.
The pre-processed HRTF representation(s) may then form the basis of the audio filter 210, which is thus configured to filter and/or process an (audio) input signal and output the filtered and/or processed audio signal.
For example, the input signal could be a microphone signal, Ambisonics signal, mono signal or other suitable audio input signal.
By way of example, the output signal could be forwarded i) to a sound generating system, and/or ii) to a memory for storage, e.g. for subsequent streaming or transfer to a sound generating system, and/or iii) for analysis.
By convolving a left-ear HRIR/HRTF and a right-ear HRIR/HRTF with a monophonic audio signal, a binaural signal is produced; listening to it in headphones gives the impression of hearing a virtual sound source in the direction corresponding to the HRIR/HRTF, as schematically illustrated in
Thus, an HRTF database can be used to generate virtual sound sources in all or at least multiple directions for which the database contains HRTFs, keeping in mind that an HRTF is the frequency domain equivalent of an HRIR.
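The convolution step described above can be sketched as follows (Python/NumPy; the array-based HRIR representation and the function name are illustrative assumptions):

```python
import numpy as np

def binaural_from_mono(mono, hrir_left, hrir_right):
    """Produce a two-channel binaural signal for one virtual source
    direction by convolving a mono signal with the left- and right-ear
    HRIRs for that direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Stack into shape [2, samples]: row 0 = left ear, row 1 = right ear.
    return np.stack([left, right])
```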
The left- and right-ear HRIRs (and corresponding HRTFs) have a direction-dependent difference in delay, known as inter-aural time difference (ITD), and also an inter-aural level difference (ILD). By the well-known duplex theory, ITD is used by the hearing system for localization primarily at low frequencies below about 1.5 kHz and above this frequency ILD is a dominant localization cue [5]. It is possible to apply significant modifications, by means of digital signal processing methods, to the ITD of a left/right HRIR pair above about 1.5 kHz without major effects to the perceived localization of a single virtual sound source.
As schematically illustrated in
Interaural Coherence is a metric which describes the frequency-dependent correlation between the left and right ear signals and is known to be related to spatial impression for sound fields containing several uncorrelated sound sources. For each frequency, the Interaural Coherence by definition has a value between 0 and 1. A mathematical definition (of the equivalent term coherency spectrum) can be found in reference [7]. For a binaural signal corresponding to a diffuse sound field and modelled using unprocessed HRTFs, Interaural Coherence is considerably lower at high frequencies (above about 1.5 kHz) than at low frequencies (below about 1.5 kHz), on average over frequency. Above about 1.5 kHz the Interaural Coherence is in general below 0.1.
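As a hedged illustration, one common way to estimate diffuse-field Interaural Coherence from an HRTF set is to model the diffuse field as N uncorrelated, equal-power sources, one per measured direction (Python/NumPy sketch; this is one possible estimator and not the exact definition of reference [7]):

```python
import numpy as np

def diffuse_field_coherence(hrtf_left, hrtf_right):
    """Estimate diffuse-field Interaural Coherence per frequency bin,
    assuming N uncorrelated sources of equal power, one per direction.

    hrtf_left, hrtf_right -- complex arrays of shape [N_directions, N_bins].
    Returns a real array with values in [0, 1] for each frequency bin.
    """
    # Cross-spectrum summed over directions, normalized by the
    # direction-summed auto-spectra of the left and right ears.
    cross = np.abs(np.sum(hrtf_left * np.conj(hrtf_right), axis=0))
    auto_l = np.sum(np.abs(hrtf_left) ** 2, axis=0)
    auto_r = np.sum(np.abs(hrtf_right) ** 2, axis=0)
    return cross / np.sqrt(auto_l * auto_r)
```

By the Cauchy-Schwarz inequality, the returned value cannot exceed 1; identical left and right spectra give exactly 1, while phase-decorrelated spectra give low values.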
The inventor has recognized that Interaural Coherence can be affected by modifications to ITD. For example, processing an HRIR database to remove all ITD at high frequencies will unfortunately audibly increase Interaural Coherence at high frequencies when modeling a diffuse sound field using HRTFs for multiple directions. A low value of Interaural Coherence corresponds to a subjective sensation of spaciousness with the sound being perceived as originating from all directions, whereas a high value of Interaural Coherence gives a sensation of the sound originating from inside the head.
The proposed technology may be regarded, e.g., as pre-processing of an HRTF database or a selected part thereof, such as one or more HRTF representations or equivalent frequency response functions, to decrease ITD above a threshold frequency, while adding a direction-dependent interaural phase difference at high frequencies such that when modeling a diffuse sound field, interaural coherence becomes perceptually similar to that of unprocessed HRTFs. Perceptually similar implies that interaural coherence should preferably be low enough to not give a perceptually significant difference to unprocessed HRTFs, and small deviations of interaural coherence may be inaudible. The inventor has also realized that the proposed technology may be beneficial even when HRTFs for relatively few (two or more) directions are considered when emulating uncorrelated sound sources for diffuse sound field simulation, i.e. when modified HRTFs for two or more directions are used for modeling or simulating a diffuse sound field.
By way of example, the invention may be based on applying a phase adjustment to each of the concerned HRTFs, uniquely for each ear and direction, in order to reduce the interaural time differences at high frequencies that occurs between the left and right ear HRTFs for sound incidence from any given direction. The purpose is a perceptually transparent simplification of the HRTF responses which improves performance in for example the above-mentioned applications.
In the prior art, binaural signals have indeed been produced from an Ambisonics signal in the context of digital filter optimization.
However, the inventor has more specifically recognized the importance of fully understanding and analyzing how ITDs can be minimized above a threshold frequency while obtaining correct modelling of high-frequency interaural coherence.
A binaural signal is understood to consist of two signals, one each for the left and right ears. The filter thus has M≥2 inputs and 2 outputs and can be classified as a multiple-input multiple-output (MIMO) filter.
If the M input signals represent an Ambisonics signal, the digital filter in
It should be understood that HRTFs can be used, not only for headphone applications, but also for reproducing binaural sound via ordinary loudspeakers, e.g. with the support of cross-talk cancellation.
The M input signals may also come directly from a microphone array; this application is called a Virtual Artificial Head in the literature [2].
In both cases, the design of the digital filter for binaural signal generation as in
The application of producing a binaural signal from an Ambisonics signal is discussed in for example reference [6], which can be considered to present a state-of-the-art filter design method for this purpose. A set of HRIRs/HRTFs is required for the design. It is suggested in reference [6] that HRIRs can be pre-processed by minimizing time differences between the ears (ITD) at high frequencies (above about 1.5 kHz), and that this reduces the HRIR complexity by reducing the energy of high orders in the spherical harmonic spectrum of an HRIR set. It is shown that an HRIR set modified in such a way can lead to an improved quality of the binaural signal produced with the presented filter design method.
It is acknowledged in reference [6] that while removing ITD from the HRTFs has limited perceptual consequences when listening to a single sound source, it is also important to consider the interaural coherence between the left and right ear signals in a diffuse sound field. An additional optimization criterion is therefore added to the filter design which restricts the binaural decoder to give the correct diffuse field coherence, even though the pre-processed HRTF data does not model a correct diffuse field inter-aural coherence. This forced restriction in reference [6] makes the filter design much more complex.
In contrast, the proposed technology discloses how HRTFs/HRIRs or equivalent frequency response functions can be pre-processed to both minimize ITD above a threshold frequency and at the same time achieve correct modelling of diffuse field coherence in a highly robust and efficient manner.
For a better understanding of the invention, it may be useful to provide a brief overview of general filter design theory.
A generalized filter design procedure for a filter configured to produce a binaural signal from M microphone signals is described in the following, to illustrate the utility of the proposed HRTF pre-processing in this application.
A filter design problem formulation can be exemplified by first assuming the existence of N sound sources, which are typically evenly distributed on a spherical surface around a microphone array with M output signals. The response of each microphone signal to each of the N sound sources can be modelled by a complex frequency response matrix (also called a matrix of steering vectors) B(ω) of dimensions [M×N], where ω denotes frequency.
The effect of the filter can similarly be described by its complex frequency response matrix F(ω) of dimensions [2×M]. The effect of each of the N sound sources on the binaural output signal produced by the filter, in the filter design problem formulation, is thus modelled as the product of matrices F(ω) and B(ω): F(ω)B(ω).
The goal in the filter design problem formulation is to design a filter where the effect of each of the N sound sources on the binaural output signal is equal to a known HRTF response for that direction. An optimization criterion for the filter may therefore be formulated as:

minimize norm(F(ω)B(ω) − HRTF(ω))

where HRTF(ω) is a [2×N] matrix of desired HRTF responses for the left and right ears for the N source positions and norm( ) denotes some error norm, for example the mean square of the error magnitude.
This optimization problem is usually extended with different constraints on the filter F(ω) but serves here to illustrate the effect of the choice of the HRTF responses on the residual error.
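For illustration only, the unconstrained mean-square criterion can be solved per frequency in closed form. In the Python/NumPy sketch below, the small Tikhonov regularization term is an added assumption for numerical robustness and not part of the criterion itself:

```python
import numpy as np

def ls_filter(B, HRTF, reg=1e-6):
    """Unconstrained least-squares filter at a single frequency.

    B    -- [M, N] steering matrix (M microphones, N sources)
    HRTF -- [2, N] desired left/right ear responses
    reg  -- small regularization constant (illustrative assumption)
    Returns F of shape [2, M] minimizing ||F B - HRTF||^2 (+ reg term).
    """
    M = B.shape[0]
    # Closed-form solution: F = HRTF B^H (B B^H + reg*I)^-1.
    gram = B @ B.conj().T + reg * np.eye(M)
    return HRTF @ B.conj().T @ np.linalg.inv(gram)
```

When M = N and B(ω) is well conditioned, the residual error vanishes; with N > M sources, the filter is a least-squares compromise over all directions, which is where the choice of HRTF responses affects the residual error.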
In the following, non-limiting examples will be given:
In a particular example, the proposed HRIR/HRTF modification is to transform the HRIRs/HRTFs to linear phase at high frequencies above a threshold frequency flim, thus eliminating high frequency ITD, while adding a direction dependent, and possibly frequency independent interaural phase difference (IPD) which has the effect of lowering interaural coherence at high frequencies in diffuse sound fields.
More generally, any appropriate interaural phase difference (IPD), which has the effect of reducing interaural coherence at high frequencies, i.e. above a threshold frequency, may be used.
The threshold frequency flim can for example be around 1.5 kHz, being selected higher or lower as needed by the application.
A convenient choice of high-frequency IPD is to set it equal to the IPD of the original HRTF data at a specific reference frequency fref where the diffuse field interaural coherence has a desired value. For example, fref can be selected to be around 1200 Hz, where the interaural coherence is low, e.g. below a threshold value. Depending on the requirements of the application, fref may be chosen higher or lower. The choice of fref involves a trade-off, since a lower value makes the filter design residual error lower in some applications, while a higher value may correspond to a lower and more desirable interaural coherence. fref is typically chosen to be lower than flim.
Expressing the proposed algorithm in the continuous-time Fourier domain, the modified HRTFmod(ω) (with frequency ω expressed in rad/s) for each individual direction and ear is given by:

HRTFmod(ω) = HRTF(ω), for ω < ωlim
HRTFmod(ω) = |HRTF(ω)| e^(−i(ωτm − a(ω)θref)), for ω ≥ ωlim

where ωlim is the threshold frequency in rad/s and τm can be selected to be close to the average broad-band delay in seconds of the HRTF data set over all directions.
Basically, the term ωτm corresponds to a time delay that is set to be the same for all directions/ears to reduce or eliminate ITD. The phase angle θref is a constant that is direction dependent and thereby introduces IPD.
For example, the phase angle θref can be selected as:
θref = ∠HRTF(ωref) − ωrefτm   (3)
where ωref is the frequency in rad/s selected as reference for high frequency IPD, and the “angle” operator is an operator for complex numbers that returns a real value that represents the phase angle of a complex number (here the HRTF at ωref). There are other ways of setting the phase angle θref to provide the desired result of low high frequency coherence. For example, by giving phase angle θref a small frequency dependency, or a small change in directional dependency, or by geometrically rotating the HRTF coordinate system before applying the phase adjustment.
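A direct translation of equation (3) to sampled HRTF data might look as follows (Python/NumPy sketch; the nearest-bin lookup and the function name are illustrative implementation choices):

```python
import numpy as np

def phase_angle_theta_ref(hrtf, freqs, f_ref, tau_m):
    """Compute theta_ref per equation (3) for one ear/direction.

    hrtf  -- complex HRTF spectrum sampled on the frequency grid freqs (Hz)
    f_ref -- reference frequency in Hz (omega_ref / (2*pi))
    tau_m -- common time-delay parameter in seconds
    Picks the bin closest to f_ref; interpolation could be used instead.
    """
    k = int(np.argmin(np.abs(freqs - f_ref)))
    omega_ref = 2.0 * np.pi * freqs[k]
    # np.angle implements the "angle" operator: the phase, in radians,
    # of the complex HRTF value at the reference frequency.
    return np.angle(hrtf[k]) - omega_ref * tau_m
```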
The frequency dependent real number a(ω) may be set to a substantially constant value and can be used to control the amount of phase modification applied at different frequencies. In a basic implementation it may be chosen as a(ω)=1.
The time delay parameter τm could also be made frequency dependent without affecting the desired properties of the processed HRTFs, as long as it is direction independent, or varies insignificantly with direction.
Phase discontinuities around the transition frequency could be mitigated, e.g. by applying the phase modification gradually over a transition band.
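One hypothetical way to apply the modification gradually is to use a smooth 0-to-1 weight over a transition band, e.g. as the factor a(ω) or as a blending weight between original and modified phase (Python/NumPy sketch; the raised-cosine shape, bandwidth parameter and names are illustrative choices, not mandated by the method):

```python
import numpy as np

def transition_ramp(freqs, f_lim, bandwidth):
    """Smooth weight rising from 0 at f_lim to 1 at f_lim + bandwidth.

    Applying the phase modification scaled by this weight avoids an
    abrupt phase discontinuity at the threshold frequency.
    """
    x = np.clip((freqs - f_lim) / bandwidth, 0.0, 1.0)
    # Raised-cosine (half-Hann) ramp: 0 at x=0, 0.5 at x=0.5, 1 at x=1.
    return 0.5 - 0.5 * np.cos(np.pi * x)
```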
For example, the HRTF modification may be applied several times to a HRTF data set, using different values for the parameters of the algorithm.
It should also be understood that the HRTF modification may be applied to any band of frequencies and is not limited to being applied above a frequency limit flim. By way of example, it is thus possible to reduce the Interaural Time Differences (ITD) in a frequency band or range above a first, lower threshold frequency while below another, higher threshold frequency.
There are no restrictions on the HRTF set the algorithm is applied to and it should be understood that the invention may be applied to any frequency response function.
It includes two phase curves: the original phase response and the adjusted phase response. The average HRTF delay τm in this example is assumed to be zero, so that the remaining delay is defined as the acoustic propagation delay to the ear relative to the delay to a point in the center of the head. The illustrated original HRTF phase is representative of a direction with a positive delay to the ear, giving a negative phase shift which grows linearly with frequency. Choosing flim=1.5 kHz and fref=1.2 kHz, the modified phase curve is flat above flim=1.5 kHz, indicating zero delay, and has a phase offset corresponding to the original HRTF phase at fref=1.2 kHz.
By way of example, a practical result that may be attained by the method is to reduce the interaural time differences (ITD) at high frequencies that occurs between the left and right ear HRTFs for sound incidence from any given direction. This can be done without large perceptual impact, since the hearing system depends mostly on interaural level differences at high frequencies.
However, simply removing ITD at high frequencies has negative perceptual impact when the HRTFs are used with a diffuse sound field because zero ITD corresponds to a high interaural coherence, i.e. signal correlation between the ears. This is accounted for by further adjusting the interaural phase difference for the set of HRTFs so that interaural coherence becomes low at high frequencies, as is the case for the unmodified HRTFs.
The application of a VAH relies on a filter design which uses a HRTF database, and the VAH performance is increased considerably by this HRTF modification. This can e.g. allow the use of simpler microphone arrays with fewer capsules.
Another application is a filter design for estimating a binaural signal from a so-called Ambisonic signal, which also relies on a HRTF set or database and filter design is considerably simplified by such a HRTF modification. For the Ambisonics case, there are several non-patent references documenting a similar effect, but using a different implementation. A benefit of the implementation described here is its simplicity which also reduces the risk of introducing unwanted artifacts.
A HRTF set or database can also be used to generate a binaural signal for a virtual sound source, e.g. for headphone listening. If head-tracking is used, the active HRTF filter may be switched in real time so that the virtual sound source stays in the same spot when the listener turns the head. There may be an audible click as the filter is switched. With the proposed HRTF modification, this click is significantly reduced, since the time delay difference between different directions is reduced to a very low value at high frequencies.
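As an illustration of the switching scenario, the transition between the outgoing and incoming head-related impulse responses (HRIRs) is often smoothed with a short crossfade. The sketch below is illustrative only and is not taken from the source; the function name, the linear fade, and the fade length are assumptions. With phase-aligned HRTFs the two filtered signals are nearly time-aligned at high frequencies, so the crossfade produces fewer audible artifacts.

```python
import numpy as np

def switch_hrir(signal, hrir_old, hrir_new, fade_len=256):
    """Illustrative filter switch with a short linear crossfade.

    Convolves the input with both the outgoing and incoming HRIRs and
    crossfades between the two filtered signals over fade_len samples.
    """
    y_old = np.convolve(signal, hrir_old)
    y_new = np.convolve(signal, hrir_new)
    n = len(y_old)
    fade = np.clip(np.arange(n) / fade_len, 0.0, 1.0)  # ramp from 0 to 1
    return (1.0 - fade) * y_old + fade * y_new
```

When the two HRIRs carry significantly different delays, even a crossfaded switch can produce audible comb-filter-like artifacts; reducing the inter-direction delay differences, as proposed, mitigates this.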
It will be appreciated that the methods and arrangements described herein can be implemented, combined and re-arranged in a variety of ways.
By way of example, there is provided a system configured to perform the method as described herein.
For example, embodiments may be implemented in hardware, or in software for execution by suitable processing circuitry, or a combination thereof. The steps, functions, procedures, modules and/or blocks described herein may be implemented in hardware using any conventional technology, such as discrete circuit or integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry.
For example, the described method may be translated into a discrete-time implementation for digital signal processing.
Alternatively, or as a complement, at least some of the steps, functions, procedures, modules and/or blocks described herein may be implemented in software such as a computer program for execution by suitable processing circuitry such as one or more processors or processing units.
Examples of processing circuitry include, but are not limited to, one or more microprocessors, one or more Digital Signal Processors (DSPs), one or more Central Processing Units (CPUs), video acceleration hardware, and/or any suitable programmable logic circuitry such as one or more Field Programmable Gate Arrays (FPGAs), or one or more Programmable Logic Controllers (PLCs).
It should also be understood that it may be possible to re-use the general processing capabilities of any conventional device or unit in which the proposed technology is implemented. It may also be possible to re-use existing software, e.g. by reprogramming of the existing software or by adding new software components.
It is also possible to provide a solution based on a combination of hardware and software. The actual hardware-software partitioning can be decided by a system designer based on a number of factors including processing speed, cost of implementation and other requirements.
The term ‘processor’ should be interpreted in a general sense as any system or device capable of executing program code or computer program instructions to perform a particular processing, determining or computing task.
The processing circuitry including one or more processors 410 is thus configured to perform, when executing the computer program 425, well-defined processing tasks such as those described herein.
The processing circuitry does not have to be dedicated to only execute the above-described steps, functions, procedure and/or blocks, but may also execute other tasks.
In a particular embodiment, the computer program 425; 435 comprises instructions, which when executed by the processor 410, cause the processor 410 to perform the tasks described herein.
The proposed technology also provides a carrier comprising the computer program, wherein the carrier is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
By way of example, the software or computer program 425; 435 may be realized as a computer program product, which is normally carried or stored on a non-transitory computer-readable medium 420; 430, in particular a non-volatile medium. The computer-readable medium may include one or more removable or non-removable memory devices including, but not limited to a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc (CD), a Digital Versatile Disc (DVD), a Blu-ray disc, a Universal Serial Bus (USB) memory, a Hard Disk Drive (HDD) storage device, a flash memory, a magnetic tape, or any other conventional memory device. The computer program may thus be loaded into the operating memory of a computer or equivalent processing device for execution by the processing circuitry thereof.
The procedural flows presented herein may be regarded as computer flows when performed by one or more processors. A corresponding apparatus may be defined as a group of function modules, where each step performed by the processor corresponds to a function module. In this case, the function modules are implemented as a computer program running on the processor.
The computer program residing in memory may thus be organized as appropriate function modules configured to perform, when executed by the processor, at least part of the steps and/or tasks described herein.
Alternatively, it is possible to realize the function modules predominantly, or entirely, by hardware modules, with suitable interconnections between relevant modules. Particular examples include one or more suitably configured digital signal processors and other known electronic circuits, e.g. discrete logic gates interconnected to perform a specialized function, and/or Application Specific Integrated Circuits (ASICs) as previously mentioned. Other examples of usable hardware include input/output (I/O) circuitry and/or circuitry for receiving and/or sending signals. The extent of software versus hardware is purely an implementation choice.
The embodiments described above are merely given as examples, and it should be understood that the proposed technology is not limited thereto. It will be understood by those skilled in the art that various modifications, combinations and changes may be made to the embodiments without departing from the present scope as defined by the appended claims. In particular, different part solutions in the different embodiments can be combined in other configurations, where technically possible.
[1] H. Møller, “Fundamentals of binaural technology,” Applied Acoustics, vol. 36, pp. 171-218, December 1992.
[2] E. Rasumow, “Synthetic reproduction of head-related transfer functions by using microphone arrays”, Ph.D. thesis, School of Medicine and Health Sciences, University of Oldenburg, 2015.
[3] B. Bernschütz, “Microphone arrays and sound field decomposition for dynamic binaural recording”, Ph.D. thesis, Technical University of Berlin, 2016.
[4] B. Bernschütz, “A Spherical Far Field HRIR/HRTF Compilation of the Neumann KU 100” in Proceedings of the AIA-DAGA 2013 Conference on Acoustics, Merano, Italy, 2013, pp. 592-595.
[5] J. Blauert, Spatial Hearing: The Psychophysics of Human Sound Localization, MIT Press, Cambridge, MA, USA, 2001.
[6] M. Zaunschirm, C. Schörkhuber, and R. Höldrich, “Binaural rendering of Ambisonic signals by head-related impulse response time alignment and a diffuseness constraint,” The Journal of the Acoustical Society of America, vol. 143, no. 6, pp. 3616-3627, June 2018.
[7] P. Stoica and R. Moses, Spectral Analysis of Signals, Prentice Hall, 2005.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/SE2021/050974 | 10/4/2021 | WO |

Number | Date | Country
---|---|---
63088129 | Oct 2020 | US