This specification relates to an apparatus for generating a multichannel signal, and to a method of generating a multichannel signal.
It is known to record a stereo audio signal on a medium such as a hard drive by recording each channel of the stereo signal using a separate microphone. The stereo signal may be later used to generate a stereo sound using a configuration of loudspeakers, or a pair of headphones.
This specification provides an apparatus comprising a processor configured to receive a first audio signal and first location data, the first location data relating to a location of a source of the first audio signal, receive a second audio signal and second location data, the second location data relating to a location of a source of the second audio signal, receive selected location data relating to a selected location and generate a multichannel signal in dependence on the first and second audio signals, the first and second location data and the selected location data.
This specification also provides a method comprising receiving a first audio signal and first location data, the first location data relating to a location of a source of the first audio signal, receiving a second audio signal and second location data, the second location data relating to a location of a source of the second audio signal, receiving selected location data relating to a selected location; and generating a multichannel signal in dependence on the first and second audio signals, the first and second location data and the selected location data.
Embodiments will now be described, by way of example only, with reference to the accompanying drawings in which:
As shown in
Referring to
Server 60 is configured to generate a multichannel signal, in the form of a stereo signal, in dependence on the received audio signals, audio signal source location data and selected location data and to transmit the generated stereo signal to the user terminal 80. The stereo signal may be an encoded stereo signal. The stereo signal may be encoded by the server 60 and decoded by the user terminal after the user terminal receives the encoded signal. The user may listen to the stereo sound corresponding to the stereo signal on a pair of headphones 85 connected to the user terminal 80. Thus, the user can be provided with a stereo sound obtained from a plurality of audio signal sources located at different positions 21, 22, 23 within the audio space and may therefore experience a representation of the audio experience at the selected location 70 in the area 10.
As shown in
Referring to
As shown in
Although network 90 and network 130 are shown as separate networks in
Referring to
When the user has selected a location in the audio space, selected location data corresponding to the selected location is sent by the terminal 80 to server 60. Server 60 is configured to generate a stereo signal in dependence on the audio signals, the audio signal source location data and the selected location data and to transmit the generated audio signal to the terminal 80. The user may then listen to the stereo sound corresponding to the stereo signal on the headphones 85.
The user may also select an orientation in the area 10 at the terminal 80. Orientation data, corresponding to the selected orientation, may be sent by the terminal 80 to server 60. Server 60 may be configured to generate the stereo signal in dependence on the audio signals, the audio signal source location data, the selected location data and the orientation data and to transmit the generated stereo audio signal to the terminal 80.
As shown in
Referring to
In step F2, terminal 80 transmits selected location data corresponding to the selected location to server 60.
In step F3, server 60 receives the selected location data. Optionally, server 60 may transmit request data to the mobile terminals 20 when the selected location data is received. The request data may comprise a request to transmit audio signals and audio signal source location data from the terminals 20 to server 60. The mobile terminals 20 may be configured to transmit the audio signals and the audio signal source location data to server 60 in response to receiving the request data. Alternatively, server 60 may receive audio signals and audio signal source location data from the user terminals 20 continuously, or periodically throughout a predetermined period. For example, the audio space may comprise a concert venue and a concert may be held in the concert venue during a scheduled period. The user terminals 20 in the concert venue may be configured to transmit audio signals and audio signal source location data to server 60 throughout the scheduled period of the concert.
In step F4, the processor 110 of server 60 generates a stereo signal in dependence on the selected location data, the audio signal source location data and the audio signals received from the mobile terminals 20 by server 60.
In step F5, server 60 streams or otherwise transmits the stereo signal to the user terminal 80.
In step A1, processor 110 receives a plurality of audio signals. The audio signals are represented by data streams. The data streams may be packetized. Alternatively, the data streams may be provided in a circuit-switched manner. The data streams may represent audio signals that have been reconstructed from coded audio signals by a decoder. The source of each audio signal may have a different location within the area 10. In step A1, the processor also receives location data relating to the locations of the sources of the audio signals. The audio signals may be received by the processor 110 from the communication unit 100 of server 60. The location data may be generated by the positioning module 40 of the mobile terminals 20, and may be received by the processor 110 from the communication unit 100 of server 60, which may be configured to receive location data from the mobile terminals 20 via the network 90.
In step A2, each audio signal is divided into overlapping frames, windowed and Fourier transformed using a discrete Fourier transform (DFT), thereby generating a plurality of signals in the frequency domain. A 50% overlap may, for example, be used. The window function may be defined as:
Where K is the length of a frame. Thus, the frequency representation of the audio signals may be obtained according to the formula:
Where m denotes the mth signal, t denotes the frame number, x is the time domain input frame and DFT is the transformation operator. The “bar” notation used in
Although each audio signal is described above as being transformed using a Fourier transform such as a discrete Fourier transform, any suitable representation could be used, for example any complex valued representation, or any one of, or any combination of: a discrete cosine transform, a modified sine transform or a complex valued quadrature mirror filterbank.
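By way of a non-limiting sketch of step A2, the framing, windowing and transformation may be implemented as follows. The sine window and the frame length K = 512 used here are assumptions for illustration; the specification defines its own window function, which is not reproduced in this sketch.

```python
import numpy as np

def frame_and_transform(x, K=512):
    """Split x into 50%-overlapping frames of length K, window each frame,
    and take its DFT.

    The sine window used here is an assumption; the specification defines
    its own window function, which is not reproduced in this sketch.
    """
    hop = K // 2  # 50% overlap
    window = np.sin(np.pi * (np.arange(K) + 0.5) / K)
    n_frames = 1 + (len(x) - K) // hop
    spectra = np.empty((n_frames, K), dtype=complex)
    for t in range(n_frames):
        frame = x[t * hop : t * hop + K]
        spectra[t] = np.fft.fft(window * frame)  # frequency-domain frame
    return spectra

# Example: two seconds of a 440 Hz tone sampled at 8 kHz
fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(2 * fs) / fs)
X = frame_and_transform(x, K=512)
```

With a 50% overlap the hop size is K/2 samples, so each input sample (away from the edges) contributes to two successive frames.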
In step A3, the N audio signals are grouped into left-side and right-side signals. Step A3 comprises determining coordinates for each audio signal source relative to the user-selected location 70. The coordinates of the audio signal sources are determined relative to the axes of a coordinate system, which may be predetermined axes or user-specified axes determined in dependence on orientation information received by server 60.
The coordinate system may be a polar coordinate system having a polar axis along a predetermined direction in the audio space. The memory 120 of server 60 or the memory 34 of the terminal 20 may comprise data relating to the polar axis. Alternatively, if selected orientation data relating to a selected orientation is received from terminal 80, the polar axis may be determined from the selected orientation data.
Next, a radial coordinate and an angular coordinate are determined for each mobile communication terminal 20 in dependence on the selected location data and the audio signal source location data. The radial coordinate describes the distance of a mobile communication terminal 20 from the selected location 70 and the angular coordinate describes the angular direction of the audio signal source with respect to the selected location. The audio signals are then grouped into left-side and right-side signals according to the determined coordinates. The left-side signal group is formed by the group of audio signals which have audio signal source angular coordinates for which 90° ≤ θm < 270°. The right-side signal group is formed by the other signals, i.e., the signals which have audio signal source angular coordinates for which θm < 90° or for which θm ≥ 270°.
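The grouping of step A3 may be sketched as follows, assuming a two-dimensional audio space with the polar axis along the positive x-direction (the actual polar axis may be predetermined or derived from selected orientation data, as described above):

```python
import math

def group_left_right(selected, sources):
    """Group audio-signal sources into left-side and right-side sets.

    `selected` and each source are (x, y) positions; angles are measured
    from the positive x-axis, which stands in for the polar axis here.
    Sources with 90 <= theta < 270 degrees go left; the rest go right.
    """
    left, right = [], []
    for idx, (sx, sy) in enumerate(sources):
        dx, dy = sx - selected[0], sy - selected[1]
        r = math.hypot(dx, dy)                             # radial coordinate
        theta = math.degrees(math.atan2(dy, dx)) % 360.0   # angular coordinate
        (left if 90.0 <= theta < 270.0 else right).append((idx, r, theta))
    return left, right

left, right = group_left_right((0.0, 0.0), [(1.0, 1.0), (-1.0, 0.5), (0.0, -2.0)])
```

Note that a source at exactly 270° falls in the right-side group, matching the boundary conditions stated above.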
In step A4, each signal is scaled. It has been found that scaling the signals results in an improved stereo experience for the user. In one example, each signal is scaled to equalize the radial position with respect to the selected location. That is, the signals may be scaled so that they appear to be recorded from the same distance. The scaling may, for example, be an attenuating linear scaling. The attenuating linear scaling may take the form:
where dm is the radial position of the mth signal and where D is the maximum distance from the selected location, determined according to D=max (d).
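One attenuating linear scaling consistent with step A4 is to multiply the mth signal by dm/D, so that all sources appear to be recorded from the common distance D; the specific factor dm/D is an assumption, since the specification's formula is not reproduced here:

```python
def equalize_distance(signals, distances):
    """Scale each signal by d_m / D so all sources appear to be recorded
    from the common distance D = max(d).

    The specific form d_m / D is an assumption; the text only states that
    an attenuating linear scaling equalizes the radial positions.
    """
    D = max(distances)
    return [[s * (d / D) for s in sig] for sig, d in zip(signals, distances)]

# A close source (d = 1) is attenuated more than the farthest one (d = 4)
scaled = equalize_distance([[1.0, 1.0], [1.0, 1.0]], [1.0, 4.0])
```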
In step A5, direction vectors are calculated for the left-side and right-side groups of signals. That is, a first direction vector is calculated for the left-side group of signals and a second direction vector is calculated for the right-side signals.
In step B1,
Thus, NL is the number of signals in the left-side group and NR is the number of signals in the right-side group. angleL is a vector of indexes for the left-side signals and angleR is a vector of indexes for the right-side signals. Accordingly, the size of the vector angleL is equal to the number of signals in the left-side group, and the size of the vector angleR is equal to the number of signals in the right-side group. sbOffset describes the nonuniform frequency band boundaries. |T| is the size of the time-frequency tile, which is the number of successive frames which are combined in the grouping. T may, for example, be {t, t+1, t+2, t+3}. Successive frames may be grouped to avoid excessive changes, since perceived sound events may change over ~100 ms. The sub-band index m may vary between 0 and M, where M is the number of subbands defined for the frame. The invention is not intended to be limited to the grouping described above and many other kinds of grouping could be used, for example a grouping in which the size of a group is the size of a spectral bin.
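The grouping into time-frequency tiles may be sketched as follows; computing the tile energies is an illustrative choice here, since the specification uses the tiles for direction analysis rather than prescribing a particular tile statistic:

```python
import numpy as np

def tile_energy(spectra, sb_offsets, T):
    """Combine successive frames T and nonuniform subbands into
    time-frequency tiles, returning the energy of each tile.

    `spectra` is a (frames x bins) complex array; `sb_offsets` lists the
    subband boundaries (sbOffset in the text); `T` is the set of frame
    indices grouped into one tile, e.g. {t, t+1, t+2, t+3}.
    """
    energies = []
    for m in range(len(sb_offsets) - 1):
        lo, hi = sb_offsets[m], sb_offsets[m + 1]
        tile = spectra[np.ix_(sorted(T), range(lo, hi))]
        energies.append(float(np.sum(np.abs(tile) ** 2)))
    return energies

# 8 frames of 16 bins, unit magnitude; nonuniform bands of 2, 4 and 10 bins
spectra = np.ones((8, 16), dtype=complex)
e = tile_energy(spectra, [0, 2, 6, 16], T={0, 1, 2, 3})
```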
In step B2, the perceived direction of each source is determined for each subband. This determination may comprise defining Gerzon vectors according to:
Theory relating to Gerzon vectors is discussed in Gerzon, Michael A., "General Metatheory of Auditory Localisation", AES 92nd Convention, March 1992, Preprint 3306.
The radial position and direction angle of the sound events for the left-side and right-side signals may then be determined from the Gerzon vectors as follows:
rL = √(dVecre,L² + dVecim,L²), θL = atan2(dVecim,L, dVecre,L)

rR = √(dVecre,R² + dVecim,R²), θR = atan2(dVecim,R, dVecre,R)
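Since dVecre = r·cos(θ) and dVecim = r·sin(θ), the radial position and direction angle of a sound event may be recovered from the components of a Gerzon vector as follows:

```python
import math

def vector_to_polar(d_re, d_im):
    """Radial position r and direction angle theta (degrees, 0-360) of a
    sound event, inverting dVec_re = r*cos(theta), dVec_im = r*sin(theta).
    """
    r = math.hypot(d_re, d_im)
    theta = math.degrees(math.atan2(d_im, d_re)) % 360.0
    return r, theta

# A vector pointing straight along the positive imaginary axis
r, theta = vector_to_polar(0.0, 2.0)
```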
In this example, the eventual stereo signal generated by the processor has only two channels, and therefore cannot produce front, left, right and rear signals simultaneously. In step B3, rear scenes are folded into frontal scenes by, for example, modifying the direction angles as follows:
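One folding rule consistent with this description mirrors rear direction angles about the left-right axis, which preserves the left/right side of each sound event; the specific rule θ → 360° − θ is an assumption, as the specification's exact formula is not reproduced here:

```python
def fold_to_front(theta):
    """Fold a rear direction angle (degrees) into the frontal half-plane
    by mirroring about the left-right axis.

    With the front centered on 90 degrees (front left at 120, front right
    at 60, as in the channel angles used below), rear angles lie between
    180 and 360 degrees; theta -> 360 - theta maps them to frontal angles
    on the same left/right side.  This mirroring rule is an assumption.
    """
    theta %= 360.0
    return 360.0 - theta if theta > 180.0 else theta
```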
In step B4, the direction angles are smoothed over time to filter out any sudden changes, for example by modifying the direction angles as follows:
θL,t = a·θL,t + (1 − a)·θL,t−1

where θL,t−1 is the smoothed direction angle of the previous frame and a is a smoothing coefficient, which may, for example, lie between 0 and 1. A corresponding smoothing may be applied to the right-side direction angles.
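The smoothing of step B4 may, for example, be implemented as a first-order recursive filter; the smoothing coefficient used in this sketch is an assumed value, and angle wraparound (e.g. 359° followed by 1°) is not handled:

```python
def smooth_angles(angles, a=0.8):
    """First-order recursive smoothing of direction angles over time:
    smoothed[t] = a * angles[t] + (1 - a) * smoothed[t - 1].

    The coefficient a is an assumed value; angle wraparound at 0/360
    degrees is deliberately ignored in this sketch.
    """
    out = [angles[0]]
    for theta in angles[1:]:
        out.append(a * theta + (1.0 - a) * out[-1])
    return out

# A sudden jump from 100 to 180 degrees is attenuated
s = smooth_angles([100.0, 100.0, 180.0], a=0.5)
```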
In step B5, a correction is applied. The correction will only be described in relation to the left-side signals. A corresponding correction may be applied to the right-side signals.
As shown in
where dVecre=r·cos(θ), dVecim=r·sin(θ) and α and β are microphone signal angles adjacent to θ, as shown in
Gains may also be scaled to unit-length vectors. For example, gain values may be modified according to:
In step B6, a first direction vector is calculated for the left side signals in dependence on the gain values. The direction vector for the left side signal may, for example, be calculated according to the formula:
dVecout,L = g1·(cos α + i·sin α) + g2·(cos β + i·sin β)

where g1 and g2 are the scaled gain values for the adjacent angles α and β.
A second direction vector may be calculated in a corresponding manner for the right side signals.
Referring to
Amplitude panning gains may first be calculated using the VBAP technique. The VBAP technique is known per se and is described in Pulkki, Ville, "Virtual Sound Source Positioning Using Vector Base Amplitude Panning", JAES Vol. 45, Issue 6, pp. 456-466, June 1997. The gains for the front left and front center channels may be determined according to:
where χ and σ are channel angles for the front left and center channels. These may, for example, be set to 120° and 90° respectively. The gains may also be scaled depending on the frequency range.
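For reference, the two-channel VBAP gain computation described by Pulkki may be sketched as follows: the panning direction is expressed in the basis of the two channel unit vectors, and the resulting gains are scaled to unit length:

```python
import math

def vbap_gains_2d(theta, chi=120.0, sigma=90.0):
    """2-D amplitude panning gains for two channels at angles chi and
    sigma (degrees), after Pulkki's VBAP: solve
    p(theta) = g1*u(chi) + g2*u(sigma) and normalize the gains.
    """
    def unit(a):
        return (math.cos(math.radians(a)), math.sin(math.radians(a)))
    (x1, y1), (x2, y2) = unit(chi), unit(sigma)
    px, py = unit(theta)
    det = x1 * y2 - x2 * y1            # basis determinant (nonzero here)
    g1 = (px * y2 - py * x2) / det
    g2 = (x1 * py - y1 * px) / det
    norm = math.hypot(g1, g2)          # scale gains to a unit-length vector
    return g1 / norm, g2 / norm

# A source straight at the center channel gets all of its gain there
g1, g2 = vbap_gains_2d(90.0)
```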
The front left and left center signals may now be determined as:
Front left and left center signals may thus be determined for each m between 0 and M and for each n ∈ T.
In step A7,
where φ is the channel angle for the front right channel. For example, this may be set to 60°. The gains may also be scaled depending on the frequency range, as described above in relation to the front left and left center channels. The front right and right center signals may then be determined as:
Front right and right center signals may thus be determined for each m between 0 and M and for each n ∈ T.
In step A8, first and second ambience signals are calculated in dependence on the left center and right center signals. Preferably, the first and second ambience signals are calculated in dependence on the difference between the left center and the right center signals. The first ambience signal may be determined according to:

The second ambience signal may be determined according to:
In step A9, the ambience signals are added to the front left and front right signals. The addition of ambience signals improves the feeling of spaciousness for the user.
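Steps A8 and A9 may be sketched as follows, under the assumption that each ambience signal is a scaled difference of the left center and right center signals, with an assumed gain of 0.5 and the second component being the negation of the first (neither value is specified above):

```python
def ambience(left_center, right_center, gain=0.5):
    """First and second ambience components from the difference of the
    left-center and right-center signals (step A8).

    The 0.5 gain and the sign convention (second component negated)
    are assumptions.
    """
    first = [gain * (l - r) for l, r in zip(left_center, right_center)]
    second = [-s for s in first]
    return first, second

def add_ambience(front, amb):
    """Step A9: add an ambience component to a front-channel signal."""
    return [f + a for f, a in zip(front, amb)]

amb_l, amb_r = ambience([1.0, 2.0], [0.5, 1.0])
fl = add_ambience([0.5, 0.5], amb_l)
```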
The ambience signals may, for example, be added to the front left and front right signals according to the formulas:
In step A10, once the ambience signals have been added to the front left and front right signals, signals for the first and second channels of the stereo signal are determined from the front left and front right signals. The signal for the first channel of the stereo signal may be obtained from
The signal for the second channel of the stereo signal is determined from
The procedure illustrated in
In step C1,
In step C6, the first reverberation component is multiplied by a weighting factor and added to the signal for the first output channel. Similarly, in step D6 the second reverberation component is multiplied by a weighting factor and added to the signal for the second output channel. That is, the signals for the first and second output channels may be modified according to the equations:
Lt,n = Lout,t,n + c·Lamb,t,n

Rt,n = Rout,t,n + c·Ramb,t,n
The weighting factor c may be a value in the range 0.5 to 1.5, for example 0.75.
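The weighting and addition of steps C6 and D6 may be sketched as:

```python
def add_reverberation(dry, reverb, c=0.75):
    """Weight the reverberation component by c (typically 0.5-1.5,
    e.g. 0.75) and add it to the output-channel signal, sample by sample.
    The same operation is applied to each output channel.
    """
    return [d + c * r for d, r in zip(dry, reverb)]

out = add_reverberation([1.0, 0.0], [0.4, 0.4])
```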
Although the processor has been described above as generating a stereo (2-channel) signal in dependence on the audio signals, the audio signal source location data and the selected location data, in other embodiments the processor is configured to generate a different multichannel signal, for example a signal having any number of channels in the range 3-12. The generated multichannel signal may be encoded and transmitted from the server to a terminal, where it may be decoded and used to generate a surround sound experience for a user. For example, each channel of the multichannel signal may be used to generate sound on a separate loudspeaker. The loudspeakers may be arranged in a symmetric configuration. In this way, a high quality, immersive sound experience may be provided to the user, which the user may vary by selecting different locations in the area 10.
An embodiment incorporating a modification of the method of operation of the processor shown in
In this embodiment, signals for the front left and front right channels of the 5-channel signal may be generated in a similar manner to the manner in which the signals for the left and right channels are generated in the case of a stereo signal (as is described above in relation to
A signal for the center channel of the 5-channel signal may be generated by a process comprising taking the average of
Signals for the rear left and rear right channels of the 5-channel signal may also be generated in a similar manner to the manner in which the signals for the left and right channels are generated in the case of a stereo signal (as is described above in relation to
Although the mobile terminals are described to transmit their location, as determined by their positioning module, the locations of the mobile terminals may instead be determined in some other way. For instance, a network, such as the network 90, may determine the locations of the mobile terminals. This may occur utilising triangulation based on signals received at a number of receiver or transceiver stations located within range of the mobile terminals. In embodiments in which the mobile terminals do not calculate their locations, the location information may pass directly from the network, or other location determining entity, to server 60 without first being provided to the mobile terminals.
Although the audio signal sources have been described above as forming part of mobile terminals, the audio signal sources could alternatively be fixed in position within the area 10. The area 10 may have plural sources 15, 16 of audio energy, and also plural audio signal sources in the form of microphones positioned in different locations in the audio space. This may be of particular interest in a conference environment in which a number of potential sources of audio energy (i.e. people) are co-located with microphones distributed in fixed locations around an area. This may be of particular interest because the stereo signals experienced at different locations within such an environment will necessarily vary more than would be the case in a corresponding environment including only one source 15 of audio energy.
Furthermore, any type of microphone could be used, for example omnidirectional, unidirectional or bidirectional microphones.
Moreover, the area 10 may be of any size, and may for example span meters or tens of meters. In the case of large areas or audio scenes, signals from microphones further than a predetermined distance from the selected location may be disregarded when generating the stereo signal. For example, signals from microphones further than 4 meters, or another number in the range 3-5 meters, from the selected location may be disregarded when generating the stereo signal.
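Disregarding distant microphones may be sketched as follows, assuming two-dimensional positions in metres and the example threshold of 4 metres:

```python
import math

def nearby_sources(selected, sources, max_distance=4.0):
    """Indices of audio-signal sources within max_distance metres of the
    selected location; signals from farther microphones are disregarded
    when generating the stereo signal.  Positions are assumed to be 2-D
    coordinates in metres.
    """
    sx, sy = selected
    return [i for i, (x, y) in enumerate(sources)
            if math.hypot(x - sx, y - sy) <= max_distance]

# The microphone 5 metres away is dropped; the two nearer ones are kept
keep = nearby_sources((0.0, 0.0), [(1.0, 1.0), (3.0, 4.0), (2.0, 2.0)])
```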
Moreover, although
Furthermore, although the user terminal may be a mobile user terminal, as described above, the user terminal could alternatively be a desktop or laptop computer, for example. The user may interact with a commercially available operating system or with a web service running on the user terminal in order to specify the selected location and download the stereo signal.
It should be realized that the foregoing examples should not be construed as limiting. Other variations and modifications will be apparent to persons skilled in the art upon reading the present application. Such variations and modifications extend to features already known in the field, which are suitable for replacing the features described herein, and all functionally equivalent features thereof. Moreover, the disclosure of the present application should be understood to include any novel features or any novel combination of features either explicitly or implicitly disclosed herein or any generalisation thereof and during the prosecution of the present application or of any application derived therefrom, new claims may be formulated to cover any such features and/or combination of such features.
Number | Name | Date | Kind |
---|---|---|---|
5852800 | Modeste et al. | Dec 1998 | A |
7277692 | Jones et al. | Oct 2007 | B1 |
20040096066 | Metcalf | May 2004 | A1 |
20040156512 | Parker | Aug 2004 | A1 |
Number | Date | Country |
---|---|---|
0544232 | Jun 1993 | EP |
WO-2007060443 | May 2007 | WO |
WO-2008046531 | Apr 2008 | WO |
WO-2008069597 | Jun 2008 | WO |
Entry |
---|
Gerzon, Michael A., "General Metatheory of Auditory Localisation", AES 92nd Convention, Mar. 1992, Preprint 3306, 63 pgs. |
Pulkki, Ville, "Virtual Sound Source Positioning Using Vector Base Amplitude Panning", JAES Vol. 45, Issue 6, Jun. 1997, pp. 456-466. |
T. Sporer, B. Klehs, et al., "Perceptual evaluation of 5.1 downmix algorithms", AES 119th Convention, New York, New York, Oct. 7-10, 2005, Convention Paper 6543, 10 pgs. |
J. Herre, C. Faller, et al., "MP3 Surround: efficient and compatible coding of multi-channel audio", AES 116th Convention, May 8-11, 2004, Preprint 6049, 14 pgs. |
J. Szczerba, F. de Bont, et al., "Matrixed multi-channel extension for AAC codec", AES 114th Convention, Mar. 22-25, 2003, Preprint 5796, 9 pgs. |
C. Faller, "Coding of spatial audio compatible with different playback formats", AES 117th Convention, Oct. 28-31, 2004, Preprint 6187, 12 pgs. |
A. Seefeldt, M. S. Vinton, C. Q. Robinson, "New techniques in spatial audio coding", AES 119th Convention, Oct. 7-10, 2005, Preprint 6587, 13 pgs. |
Samsudin, et al., "A stereo to mono downmixing scheme for MPEG-4 parametric stereo encoder", ICASSP 2006, pp. 529-532. |
C. Dubey, R. Annadana, et al., "New enhancements to immersive sound field rendition (ISR) system", AES 122nd Convention, May 5-8, 2007, Preprint 7080, 8 pgs. |
F. Baumgarte, C. Faller, P. Kroon, "Audio coder enhancement using scalable binaural coding with equalized mixing", AES 116th Convention, May 8-11, 2004, Preprint 6060, 9 pgs. |
ITU-R Recommendation BS.775-2, "Multichannel stereophonic sound system with and without accompanying picture", International Telecommunication Union, Geneva, Switzerland, 1992-1994-2006, 11 pgs. |
Number | Date | Country | |
---|---|---|---|
20100119072 A1 | May 2010 | US |