Typically, audio scenes are captured using a set of microphones, each of which outputs a microphone signal. For an orchestra audio scene, for example, 25 microphones are used. A sound engineer then mixes the 25 microphone output signals into a standardized format such as a stereo format or a 5.1, 7.1 or 7.2 format. In a stereo format, the sound engineer or an automatic mixing process generates two stereo channels. For a 5.1 format, the mixing results in five channels and a subwoofer channel. Analogously, for a 7.2 format, the mixing results in seven channels and two subwoofer channels. When the audio scene is to be rendered in a reproduction environment, the mixing result is applied to electro-dynamic loudspeakers. In a stereo reproduction set-up, two loudspeakers exist; the first loudspeaker receives the first stereo channel and the second loudspeaker receives the second stereo channel. In a 7.2 reproduction set-up, seven loudspeakers and two subwoofers exist at predetermined locations. The seven channels are applied to the corresponding loudspeakers and the two subwoofer channels are applied to the corresponding subwoofers.
The usage of a single microphone arrangement on the capturing side and a single loudspeaker arrangement on the reproduction side typically neglects the true nature of the sound sources.
For example, acoustic music instruments and the human voice can be distinguished with respect to the way in which the sound is generated, and they can also be distinguished with respect to their emission characteristics.
Trumpets, trombones, horns or bugles, for example, have a powerful, strongly directed sound emission. Stated differently, these instruments emit in a preferred direction and, therefore, have a high directivity.
Violins, cellos, contrabasses, guitars, grand pianos, upright pianos, gongs and similar acoustic musical instruments, for example, have a comparatively small directivity or a correspondingly small emission quality factor Q. These instruments rely on so-called acoustic short-circuits when generating sound. The acoustic short-circuit arises from a communication between the front side and the back side of the corresponding vibrating area or surface.
Regarding the human voice, a medium emission quality factor exists. The air connection between mouth and nose causes an acoustic short-circuit.
String or bowed instruments, xylophones, cymbals and triangles, for example, generate sound energy in a frequency range up to 100 kHz and, additionally, have a low emission directivity or a low emission quality factor. Specifically, the sounds of a xylophone and a triangle are clearly identifiable despite their low sound energy and their low quality factor, even within a loud orchestra.
Hence, it becomes clear that the generation of sound by acoustic instruments or other instruments and by the human voice differs strongly from instrument to instrument.
When sound energy is generated, air molecules, for example diatomic and triatomic gas molecules, are stimulated. Three different mechanisms are responsible for the stimulation; reference is made to German Patent DE 198 19 452 C1. These are summarized in
Hence, the sound energy generated by acoustic musical instruments and by the human voice is composed of an individual mixing ratio of translation, rotation and vibration.
In conventional electroacoustics, the definition of the sound intensity vector reflects only the translation. Unfortunately, however, a complete description of the sound energy, in which rotation and vibration are additionally acknowledged, is missing in conventional electroacoustics.
The complete sound intensity, however, is defined as the sum of the intensities stemming from translation, from rotation and from vibration.
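Written as a formula, this is a notational sketch only; the symbols for the three intensity contributions are introduced here for illustration and do not appear in the cited reference:

```latex
I_{\mathrm{complete}} = I_{\mathrm{translation}} + I_{\mathrm{rotation}} + I_{\mathrm{vibration}}
```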
Furthermore, different sound sources have different sound emission characteristics. The sound emitted by musical instruments and voices generates a sound field, and this field reaches the listener in two ways. The first way is the direct sound; the direct sound portion of the sound field allows a precise localization of the sound source. The second component is the room-like emission. Sound energy emitted in all spatial directions generates the specific sound of an instrument or a group of instruments, since this room emission interacts with the room by reflections, attenuations, etc. A characteristic of all acoustic musical instruments and of the human voice is a certain relation between the direct sound portion and the room-like emitted sound portion.
According to an embodiment, a method of capturing an audio scene may have the steps of: acquiring sound having a first directivity to achieve a first acquisition signal; acquiring sound having a second directivity to achieve a second acquisition signal, wherein the first directivity is higher than the second directivity, wherein the steps of acquiring are performed simultaneously, and wherein both acquisition signals together represent the audio scene; separately storing the first and the second acquisition signals; or mixing individual channels in the first acquisition signal to achieve a first mixed signal, mixing individual channels in the second acquisition signal to achieve a second mixed signal and separately storing the first and the second mixed signals; or transmitting the first and the second mixed signals or the first and the second acquisition signals to a loudspeaker setup; or rendering the first mixed signal or the first acquisition signal using a loudspeaker arrangement having a first directivity and simultaneously rendering the second mixed signal or the second acquisition signal using a loudspeaker arrangement having a second directivity, wherein the second loudspeaker directivity is lower than the first loudspeaker directivity.
According to another embodiment, a method of rendering an audio scene may have the steps of: providing a first acquisition signal related to sound having a first directivity or a first mixed signal related to sound having the first directivity; providing a second acquisition signal related to sound having a second directivity or a second mixed signal related to sound having the second directivity, wherein the second directivity is lower than the first directivity; generating a sound signal from the first acquisition signal or the first mixed signal using a loudspeaker arrangement having a first loudspeaker directivity; generating a second sound signal from the second acquisition signal or the second mixed signal by a second loudspeaker arrangement having a second loudspeaker directivity, wherein the steps of generating are performed simultaneously, and wherein the second loudspeaker directivity is lower than the first loudspeaker directivity.
According to another embodiment, an apparatus for capturing an audio scene may have: a first device for acquiring sound having a first directivity to achieve a first acquisition signal; a second device for acquiring sound having a second directivity to achieve a second acquisition signal, wherein the first directivity is higher than the second directivity, wherein the devices for acquiring are configured to operate simultaneously, and wherein both acquisition signals together represent the audio scene; a storage for separately storing the first and the second acquisition signals; or a mixer for mixing individual channels in the first acquisition signal to achieve a first mixed signal, mixing individual channels in the second acquisition signal to achieve a second mixed signal and separately storing the first and the second mixed signals; or a transmitter for transmitting the first and the second mixed signals or the first and the second acquisition signals to a loudspeaker setup; or a renderer for rendering the first mixed signal or the first acquisition signal using a loudspeaker arrangement having a first directivity and simultaneously rendering the second mixed signal or the second acquisition signal using a loudspeaker arrangement having a second directivity, wherein the second loudspeaker directivity is lower than the first loudspeaker directivity.
According to another embodiment, an apparatus for rendering an audio scene may have: a device for providing a first acquisition signal related to sound having a first directivity or a first mixed signal related to sound having the first directivity and for providing a second acquisition signal related to sound having a second directivity or a second mixed signal related to sound having the second directivity, wherein the second directivity is lower than the first directivity; and a generator for generating a sound signal from the first acquisition signal or the first mixed signal using a loudspeaker arrangement having a first loudspeaker directivity and for simultaneously generating a second sound signal from the second acquisition signal or the second mixed signal by a second loudspeaker arrangement having a second loudspeaker directivity, wherein the second loudspeaker directivity is lower than the first loudspeaker directivity.
Another embodiment may have a computer program for performing, when running on a computer, the method of capturing an audio scene of claim 1.
Another embodiment may have a computer program for performing, when running on a computer, the method of rendering an audio scene of claim 9.
According to another embodiment, a storage medium may have stored thereon: a first acquisition signal related to sound having a first directivity or a first mixed signal related to sound having the first directivity; and a second acquisition signal related to sound having a second directivity or a second mixed signal related to sound having the second directivity, wherein the second directivity is lower than the first directivity.
The present invention is based on the finding that, for obtaining a very good sound by loudspeakers in a reproduction environment, which is comparable to and in most instances even not discernible from the original sound scene, where the sound is emitted not by loudspeakers but by musical instruments or human voices, the different ways in which the sound intensity is generated, i.e., translation, rotation and vibration, have to be considered, or the different ways in which the sound is emitted, i.e., whether the sound is emitted as direct sound or as a room-like emission, have to be accounted for when capturing and rendering an audio scene. When capturing the audio scene, sound having a first or high directivity is acquired to obtain a first acquisition signal and, simultaneously, sound having a second directivity is acquired to obtain a second acquisition signal, where the directivity of the second acquisition signal, i.e., the directivity of the sound actually captured by the second acquisition signal, is lower than the first directivity.
Thus, an audio scene is not described by a single set of microphones but is described by two different sets of microphone signals. These different sets of microphone signals are never mixed with each other. Instead, a mixing can be performed with the individual signals within the first acquisition signal to obtain a first mixed signal and, additionally, the individual signals contained in the second acquisition signal can also be mixed among themselves to obtain a second mixed signal. However, individual signals from the first acquisition signal are not combined with individual signals of the second acquisition signal in order to maintain the sound signals with the different directivities. These acquisition signals or mixed signals can be separately stored. Furthermore, when mixing is not performed, the acquisition signals are separately stored. Alternatively or additionally, the two acquisition signals or the two mixed signals are transmitted into a reproduction environment and rendered by individual loudspeaker arrangements. Hence, the first acquisition signal or the first mixed signal is rendered by a first loudspeaker arrangement having loudspeakers emitting with a higher directivity and the second acquisition signal or the second mixed signal is rendered by a second separate loudspeaker arrangement having a more omnidirectional emission characteristic, i.e., having a less directed emission characteristic.
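A minimal sketch of this separation is given below in Python, assuming the microphone signals are available as sample arrays; the function name mix_within_set, the channel counts and the gains are illustrative assumptions and not part of the described system:

```python
import numpy as np

def mix_within_set(channels: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Mix the individual channels of ONE acquisition signal into one mixed signal.

    channels: shape (num_mics, num_samples) -- microphone signals of one set
    gains:    shape (num_mics,)             -- per-microphone mixing gains
    """
    return np.einsum("m,ms->s", gains, channels)

# First acquisition signal: microphones capturing the directed ("high Q") sound.
high_q_channels = np.random.randn(4, 48000)   # placeholder data
# Second acquisition signal: microphones capturing the room-like ("low Q") sound.
low_q_channels = np.random.randn(6, 48000)    # placeholder data

# Mixing happens only within each set; high-Q and low-Q channels are never combined.
first_mixed = mix_within_set(high_q_channels, np.full(4, 0.25))
second_mixed = mix_within_set(low_q_channels, np.full(6, 1.0 / 6.0))

# The two mixed signals are kept and stored separately (e.g., two files or streams).
```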
Hence, a sound scene is represented not only by one acquisition signal or one mixed signal, but is represented by two acquisition signals or two mixed signals which are simultaneously acquired on the one hand or are simultaneously rendered on the other hand. The present invention ensures that different emission characteristics are additionally recorded from the audio scene and are rendered in the reproduction set-up.
Loudspeakers for reproducing the omnidirectional characteristic comprise, in an embodiment, a longitudinal enclosure comprising at least one subwoofer speaker for emitting lower sound frequencies. Furthermore, a carrier portion is provided on top of the longitudinal enclosure, and a speaker arrangement comprises individual speakers for emitting higher sound frequencies that are arranged in different directions with respect to the longitudinal enclosure. The speaker arrangement is fixed to the carrier portion and is not surrounded by the longitudinal enclosure. In an embodiment, the longitudinal enclosure additionally comprises one or more individual speakers emitting with a high directivity. This can be done by placing these individual speakers within the longitudinal enclosure in a line array, where the loudspeaker is arranged with respect to the listeners so that the directly emitting speakers face the listeners. Furthermore, it is advantageous that the carrier portion is a cone-like or frustum-like element having a small cross-sectional area on top, where the speaker arrangement is placed. This ensures that the loudspeaker has improved characteristics with respect to the perceived sound, due to the fact that the coupling between the longitudinal enclosure, in which the subwoofer is arranged, and the speaker arrangement for generating the omnidirectional sound is restricted to a comparatively small area. Furthermore, it is advantageous that the speaker arrangement is made up of a ball-like element having equally distributed loudspeakers, where the individual loudspeakers, however, are not enclosed in a casing but are freely vibratable membranes supported by a supporting structure. This ensures that the omnidirectional emission characteristic is additionally supported by a good rotational portion of the sound, since such individual speakers, which are not enclosed in a casing, additionally generate a significant amount of rotational energy.
Additionally, the capturing of the sound scene can be enhanced by using specific microphones comprising a first electret microphone portion and a second electret microphone portion which are arranged in a back-to-back arrangement. Both electret microphone portions comprise a free space so that a sound acquisition membrane or foil is movable. A vent channel is provided for venting the first free space or the second free space to the ambient pressure so that both microphone portions, although arranged in the back-to-back arrangement, have superior sound acquisition characteristics. Furthermore, first contacts for deriving an electrical signal are arranged at the first microphone portion and second contacts for deriving an electrical signal are arranged at the second microphone portion. Due to the back-to-back arrangement, it is advantageous that the ground contact, i.e., the counter-electrode contact of both microphone portions, is connected or implemented as a single contact so that the microphone comprises three output contacts for deriving two different voltages as electrical signals. Advantageously, each microphone portion comprises a metallized foil as a first electrode, which is movable in response to sound energy impinging on the microphone, a spacer and a counter electrode carrying an electret foil on its top. Each counter electrode additionally comprises venting channel portions which are arranged vertically with respect to the microphone. Furthermore, the venting channel comprises a horizontal venting channel portion communicating with the vertical venting channel portions, and the vertical and horizontal venting channel portions are applied to the first and second microphone portions in such a way that both free spaces of the microphone portions, defined by the corresponding spacers, are vented to the ambient pressure and are, therefore, at ambient pressure. Additionally, this ensures that the sound acquisition electrode can move freely with respect to the corresponding counter electrode, since the venting ensures that the free space does not build up an additional counter-pressure in addition to the ambient pressure.
Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
In an embodiment, the step of acquiring the sound having a first directivity comprises placing microphones 100 illustrated in
Furthermore, the step 202 of
As indicated in
The sound acquisition concept illustrated in
Furthermore, these different signals generated by the different microphone arrays are then separately processed and separately rendered.
When an orchestra is considered, it has been found that the sound energy which is emitted directly in the front direction to the listener is composed mainly of instruments having a high directivity such as trumpets or trombones and, additionally, comes from the singers or vocalists. This “high Q” sound portion is detected by microphones 100 of
Instruments having a high directivity but which do not directly emit sound in the front direction, such as a tuba, different horns or flugelhorns and several woodwind instruments, and, additionally, instruments having a low directivity, such as string instruments, percussion, gong or triangle, generate a room-like or less directed sound emission. This “low Q” sound portion is detected with a microphone set placed laterally and/or above the instruments or the sound scene. If microphones having a certain directivity are used, it is advantageous that these microphones are directed towards the individual sound sources such as tuba, horns, woodwind instruments, strings, percussion, gong and triangle.
These individual “high Q” and “low Q” microphone signals, i.e., the first and second acquisition signals, are recorded independently from each other and further processed, e.g., mixed, stored, transmitted or otherwise manipulated. Hence, separate high Q and low Q mixes can be generated to obtain the first and second mixed signals, and these mixed signals can be stored within the storage 108 or can be rendered via separate high Q and low Q speakers.
Dual Q loudspeaker systems illustrated in
Furthermore, as indicated at 115 in
Advantageously, the dual Q technology is combined with the icon technology which is described in the context of
Subsequently,
Furthermore, instead of or in addition to placing the microphones 102 above or lateral to the sound scene and placing the microphones 100 in front of the sound scene, microphones can also be placed selectively in close proximity to the corresponding instruments.
When the audio scene, for example, comprises an orchestra having a first set of instruments emitting with a higher directivity and a second set of instruments emitting sound with a lower directivity, then the step of acquiring comprises placing the first set of microphones closer to the instruments of the first set of instruments than to the instruments of the second set of instruments to obtain the first acquisition signal and placing the second set of microphones closer to the instruments of the second set of instruments, i.e., the low directivity emitting instruments, than to the first set of instruments to obtain the second acquisition signal.
Depending on the implementation, the directivity, as defined by a directivity factor related to a sound source, is the ratio of the radiated sound intensity at a remote point on the principal axis of the sound source to the average intensity of the sound transmitted through a sphere passing through the remote point and concentric with the sound source. Advantageously, the frequency is stated so that the directivity factor is obtained for individual subbands.
Regarding sound acquisition by microphones, the directivity factor is the ratio of the square of the voltage produced by sound waves arriving parallel to the principal axis of a microphone or other receiving transducer to the mean square of the voltage that would be produced if sound waves having the same frequency and mean square pressure were arriving simultaneously from all directions with random phase. Advantageously, the frequency is stated in order to have a directivity factor for each individual subband.
Regarding sound emitters such as speakers, the directivity factor is the ratio of the radiated sound intensity at a remote point on the principal axis of a loudspeaker or other transducer to the average intensity of the sound transmitted through a sphere passing through the remote point and concentric with the transducer. Advantageously, the frequency is given as well in this case.
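The definitions of the preceding paragraphs can be summarized in formulas; this is a sketch with symbols chosen here only for illustration, where r denotes the distance of the remote point and f the stated frequency or subband:

```latex
% Sound source / loudspeaker: on-axis intensity at the remote point (distance r)
% divided by the average intensity over the concentric sphere through that point
Q_{\mathrm{src}}(f) = \frac{I_{\mathrm{axis}}(r, f)}{\bar{I}_{\mathrm{sphere}}(r, f)}

% Microphone: squared on-axis output voltage divided by the mean-square voltage
% for diffuse incidence of the same frequency and mean-square pressure
Q_{\mathrm{mic}}(f) = \frac{U_{\mathrm{axis}}^{2}(f)}{\overline{U^{2}}_{\mathrm{diffuse}}(f)}
```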
However, other definitions of the directivity factor exist as well, which all have the same character but result in different quantitative values. For example, for a sound emitter, the directivity factor is a number indicating the factor by which the radiated power would have to be increased if the directed emitter were replaced by an isotropic radiator, assuming the same field intensity for the actual sound source and the isotropic radiator.
For the receiving case, i.e., for a microphone, the directivity factor is a number indicating the factor by which the input power of the receiver/microphone for the direction of maximum reception exceeds the mean power obtained by averaging the power received from all directions of reception if the field intensity at the microphone location is equal for any direction of wave incidence.
The directivity factor is a quantitative characterization of the capacity of a sound source to concentrate the radiated energy in a given direction or the capacity of a microphone to select signals incident from a given direction.
When the measure of the directivity factor ranges from 0 to 1, the directivity factor related to the first acquisition signal is advantageously greater than 0.6 and the directivity factor related to the second acquisition signal is advantageously lower than 0.4. Stated differently, it is advantageous to place the two different sets of microphones so that a value above 0.6 is obtained for the first acquisition signal and a value below 0.4 for the second acquisition signal. Naturally, it will practically not be possible to have a first acquisition signal containing only directed sound and no omnidirectional sound. On the other hand, it will not be possible to have a second acquisition signal containing only omnidirectionally emitted sound and no directionally emitted sound. However, the microphones are manufactured and placed in such a way that the directionally emitted sound dominates the omnidirectionally emitted sound in the first acquisition signal and that the omnidirectionally emitted sound dominates over the directionally emitted sound in the second acquisition signal.
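As a small sketch of how a capture setup could be checked against these advantageous thresholds, assuming per-subband directivity factors on the 0-to-1 scale mentioned above (the function name and data layout are illustrative assumptions):

```python
def placement_ok(q_first: list[float], q_second: list[float]) -> bool:
    """Check the advantageous per-subband criterion on a 0-to-1 directivity scale.

    q_first:  directivity factors of the first ("high Q") acquisition signal
    q_second: directivity factors of the second ("low Q") acquisition signal
    """
    return all(q > 0.6 for q in q_first) and all(q < 0.4 for q in q_second)

# Example: directivity factors estimated for three subbands of each signal
print(placement_ok([0.72, 0.68, 0.81], [0.25, 0.31, 0.18]))  # True
```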
A method of rendering an audio scene comprises a step of providing a first acquisition signal related to sound having a first directivity or providing a first mixed signal related to sound having the first directivity. The method of rendering additionally comprises providing a second acquisition signal related to sound having a second directivity or providing a second mixed signal related to sound having a second directivity, where the first directivity is higher than the second directivity. The steps of providing can be actually implemented by receiving, in the sound rendering portion of
Furthermore, the method of rendering comprises a step of generating (210, 212) a first sound signal from the first acquisition signal or the first mixed signal and a step of generating a second sound signal from the second acquisition signal or the second mixed signal. For generating the first sound signal, a directional speaker arrangement 118 is used, and for generating the second sound signal, an omnidirectional speaker arrangement 120 is used. Advantageously, the directivity of the directional speaker arrangement 118 is higher than the directivity of the omnidirectional speaker arrangement 120. It is clear that an ideal omnidirectional emission characteristic can hardly be generated by existing loudspeaker systems, although the loudspeaker of
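A minimal routing sketch of this rendering step, assuming the two provided signals arrive as separate arrays; play_on is a placeholder callback standing in for whatever audio output API is actually used, not a real library call:

```python
import numpy as np

def render_scene(first_signal: np.ndarray, second_signal: np.ndarray, play_on) -> None:
    """Route the two signals simultaneously to the two loudspeaker arrangements.

    first_signal:  first acquisition/mixed signal (directed, "high Q" sound)
    second_signal: second acquisition/mixed signal (room-like, "low Q" sound)
    play_on:       callback play_on(signal, output=...) standing in for an audio API
    """
    # The two signals are never summed into one feed; each keeps its own speakers.
    play_on(first_signal, output="directional_array_118")       # high loudspeaker directivity
    play_on(second_signal, output="omnidirectional_array_120")  # low loudspeaker directivity
```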
Advantageously, the emission characteristic of the omnidirectional speakers is close to the ideal omnidirectional characteristic within a tolerance of 30%.
Subsequently, reference is made to
For example, brass instruments are instruments with a mainly translational sound generation. The human voice generates a translational and a rotational motion of the air molecules. For the transmission of the translation, existing microphones and speakers with piston-like operating membranes and a back enclosure are available.
The rotation is generated mainly by playing bowed instruments, guitars, gongs or pianos, due to the acoustic short-circuit of the corresponding instrument. The acoustic short-circuit takes place, for example, via the F-holes of a violin, the sound hole of a guitar, between the upper and lower surface of the sounding board of a grand or upright piano, or between the front and the back of a gong. When generating the human voice, the rotation is excited between mouth and nose. The rotational movement is typically limited to the medium sound frequencies and can advantageously be acquired by microphones having a figure-of-eight characteristic, since these microphones additionally have an acoustic short-circuit. The reproduction is realized by mid-frequency speakers with freely vibratable membranes without a back enclosure.
The vibration is generated by violins and particularly strongly by xylophones, cymbals and triangles. The vibration of the atoms within a molecule generates sound energy up to the ultrasonic region above 60 kHz and even up to 100 kHz.
Although this frequency range is typically not perceivable by the human hearing mechanism, level- and frequency-dependent demodulation effects and other effects nevertheless take place, which become perceivable since they occur within the hearing range extending between 20 Hz and 20 kHz. An authentic transmission of the vibration is obtained by extending the frequency range above the hearing limit of about 20 kHz up to more than 60 kHz or even 100 kHz.
The detection of the directional sound portion for a correct localization of sound sources involves directional microphoning and speakers with a high emission quality factor or directivity, in order to direct the sound, as far as possible, only to the ears of the listeners. For the directional sound, a separate mix is generated and reproduced via separate speakers.
The detection of the room-like energy is realized by a microphone setup placed above or laterally with respect to the sound sources. For the transmission of the room-like portion, a separate mix is generated and reproduced in a separate manner by speakers having a low emission quality factor (sphere emitters).
Subsequently, an advantageous loudspeaker is described with respect to
Furthermore, the carrier 312 comprises a tip portion having a cross-sectional area which is less than 20% of a cross-sectional area of the base portion, where the speaker arrangement 314 is fixed to the tip portion. Advantageously, as illustrated in
The speaker arrangement 314 has a sphere-like carrier structure 316, which is also illustrated in
Advantageously, the speaker arrangement comprises at least six individual speakers and particularly even twelve individual speakers arranged in twelve different directions, where, in this embodiment, the speaker arrangement 314 comprises a pentagonal dodecahedron (i.e., a body with twelve equally distributed faces) having twelve individual areas, wherein each individual area is provided with an individual speaker membrane. Importantly, the speaker arrangement 314 does not comprise a loudspeaker enclosure, and the individual speakers are held by the supporting structure 316 so that the membranes of the individual speakers are freely suspended.
Furthermore, as illustrated in
Alternatively, however, the loudspeaker in
The enclosure furthermore comprises a further speaker 604 which is suspended at an upper portion of the enclosure and which has a freely suspended membrane. This speaker is a low/mid speaker for a low/mid frequency range between 80 and 300 Hz and advantageously between 100 and 300 Hz. This additional speaker is advantageous, since—due to the freely suspended membrane—the speaker generates rotation stimulation/energy in the low/mid frequency range. This rotation enhances the rotation generated by the speakers 314 at low/mid frequencies. This speaker 604 receives the low/mid frequency portion of the signal provided to the speakers at 314, e.g., the second acquisition signal or the second mixed signal.
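As a sketch of how the low/mid portion for speaker 604 could be split off from the signal feeding the speakers 314, the following uses a Butterworth band-pass from SciPy; the 100 to 300 Hz corner frequencies follow the advantageous range given above, while the filter order and sample rate are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lowmid_feed(second_signal: np.ndarray, fs: float = 48000.0) -> np.ndarray:
    """Extract the 100-300 Hz band of the second (omnidirectional) signal as the
    feed for the freely suspended low/mid speaker 604."""
    sos = butter(4, [100.0, 300.0], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, second_signal)

# The full-range second signal still drives the dodecahedral arrangement 314;
# speaker 604 merely reinforces the rotational portion in the low/mid range.
```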
In an advantageous embodiment with a single subwoofer, the subwoofer is a twelve-inch subwoofer in the closed longitudinal enclosure 300, and the speaker arrangement 314 is a pentagonal dodecahedron medium/high frequency speaker arrangement with freely vibratable medium-frequency membranes.
Additionally, a method of manufacturing a loudspeaker comprises the production and/or provision of the enclosure, the carrier portion and the speaker arrangement, where the carrier portion is placed on top of the longitudinal enclosure and the speaker arrangement with the individual speakers is placed on top of the carrier portion or alternatively the speaker arrangement without the individual speakers is placed on top of the carrier portion and then the individual speakers are mounted.
Subsequently, reference is made to
The microphone comprises a first electret microphone portion 801 having a first free space and a second electret microphone portion 802 having a second free space. The first and the second microphone portions 801, 802 are arranged in a back-to-back arrangement. Furthermore, a vent channel 804 is provided for venting the first free space and/or the second free space. Furthermore, first contacts 806a, 806b for deriving an electrical signal 806c are arranged at the first microphone portion 801, and second contacts 808a and 806b for deriving a second electrical signal 808b are arranged at the second microphone portion 802. Hence,
The second electret microphone portion 802 is advantageously constructed in the same manner and comprises, from bottom to top, a metallization 820, a foil 821, a spacer 822 defining a second vented free space 823. On the spacer 822 an electret foil 824 is placed and above the electret foil 824 a counter electrode 826 is placed which forms the back plate of the second microphone portion. Hence, elements 820 to 826 represent the second electret microphone portion 802 of the
Advantageously, the first and the second microphone portions have a plurality of vertical vent portions 804b, 804c, as illustrated in
Advantageously, the microphone in accordance with the present invention is a back-electret double microphone with a symmetrical construction. The metallized foils 811, 821 are moved or excited by the kinetic energy of the air molecules (sound), and therefore the capacitance of the capacitor formed by the back electrode 816, 826 and the metallization 810, 820 is changed. Due to the persistent charge on the electret foils 814, 824, voltages U1 and U2 are generated according to the equation Q=C·U, which means that U is equal to Q/C. The voltage U1 is proportional to the movement of the electrode 810, 811, and the voltage U2 is proportional to the movement of the electrode 820, 821. Two individual electret microphones are arranged in a back-to-back arrangement. The vertical vent portions 804b, 804c are useful in order to avoid closing off the free spaces 813, 823 at the back. In order to maintain this functionality when the microphone portions are arranged in the back-to-back arrangement, the horizontal vent portions 804a are provided, which communicate with the vertical vent portions 804b, 804c. Hence, even in the back-to-back arrangement, a closure of the vented free spaces 813, 823 is avoided.
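In formula form, using the plate-capacitor approximation as a sketch (the plate area A, permittivity ε and instantaneous gaps d1(t), d2(t) are symbols introduced here only for illustration):

```latex
Q_{\mathrm{electret}} = C(t)\,U(t)
\quad\Rightarrow\quad
U_{1,2}(t) = \frac{Q_{\mathrm{electret}}}{C_{1,2}(t)},
\qquad
C_{1,2}(t) \approx \frac{\varepsilon A}{d_{1,2}(t)}
\quad\Rightarrow\quad
U_{1,2}(t) \propto d_{1,2}(t)
```

Hence each output voltage follows the instantaneous gap, i.e., the movement of the corresponding metallized foil 811 or 821.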
Naturally, an actually provided signal combiner does not necessarily have to have the controllability feature. Instead, the in-phase, out-of-phase or weighted addition functionality of the combiner can be correspondingly hardwired so that each microphone has a certain output signal characteristic for the combined output signal C, but this microphone cannot be reconfigured. However, when the controllable combiner has the switching functionality illustrated in
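A small sketch of the combiner functionality, assuming the two electrical signals are available as sample arrays; the mode names are illustrative, and the characterizations in the comments reflect the typical behaviour of back-to-back pressure capsules rather than a statement from the source:

```python
import numpy as np

def combine(u1: np.ndarray, u2: np.ndarray, mode: str = "in_phase",
            w1: float = 1.0, w2: float = 1.0) -> np.ndarray:
    """Combine the two microphone-portion signals into one output signal C.

    "in_phase":     u1 + u2          (pressure-like, more omnidirectional pickup)
    "out_of_phase": u1 - u2          (gradient-like, figure-of-eight-style pickup)
    "weighted":     w1*u1 + w2*u2    (intermediate characteristics)
    """
    if mode == "in_phase":
        return u1 + u2
    if mode == "out_of_phase":
        return u1 - u2
    if mode == "weighted":
        return w1 * u1 + w2 * u2
    raise ValueError(f"unknown mode: {mode}")
```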
Advantageously, the inventive electret microphone is miniaturized and only has dimensions as are set forth in
Furthermore, in order to fully detect the vibration energy, the icon microphone should have an audio bandwidth of 60 kHz and advantageously up to 100 kHz. To this end, the foils 811, 821 have to be attached to the spacer in a correspondingly stiff manner. The microphone illustrated in
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
Some embodiments according to the invention comprise a non-transitory data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed or having stored thereon the first or second acquisition signals or first or second mixed signals.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus.
While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.
This application is a continuation of U.S. patent application Ser. No. 14/040,549, filed Sep. 27, 2013, which is a continuation of copending International Application No. PCT/EP2012/055697, filed Mar. 29, 2012, and additionally claims priority from U.S. Application No. 61/469,436, filed Mar. 30, 2011, all of which are incorporated herein by reference in their entirety. The present invention is related to electroacoustics and, particularly, to concepts of acquiring and rendering sound, loudspeakers and microphones.
Provisional application data:

Number | Date | Country
---|---|---
61/469,436 | Mar 2011 | US

Parent and child application data:

Relation | Number | Date | Country
---|---|---|---
Parent | 14/040,549 | Sep 2013 | US
Child | 16/665,853 | | US
Parent | PCT/EP2012/055697 | Mar 2012 | US
Child | 14/040,549 | | US