1. Technical Field
This application relates to a system for determining the position and/or direction of a sound source relative to a microphone.
2. Related Art
A microphone may measure audio or acoustic signals from a source. When recording sound events from a sound source, such as a music recording, several microphones may be used. The signals produced from each microphone may be combined into a signal that represents a recording.
It may be useful to locate the source at a pre-determined position to ensure an optimal recording. A microphone may be more sensitive to sound arriving from one direction, which suggests that the microphone should be oriented to receive sound from that direction. Therefore, a need exists for accurately determining the location of a sound source.
A system may determine the position of a source in a fixed coordinate system. A microphone may include capsules that receive audio signals. The audio signals are analyzed and processed to determine the position of the sound source relative to the microphone. The audio signals may be used to adjust the microphone or capsule direction based on the position of the sound source. The direction of the microphone may be adjusted during or after the audio signals are received. The receiving direction may be identified through an optical source or laser. A light beam or laser beam may be used to identify the position.
Directional adjustments of the microphone may be based on a fixed coordinate system. When the microphone is placed within the fixed coordinate system it has known coordinates. Those coordinates may be used to identify relative coordinates of the sound source. Based on the position of the sound source, the direction of an optical source beam may be adjusted with reference to the fixed coordinate system.
Other systems, methods, features, and advantages will be, or will become, apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the following claims.
The system may be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
Audio signals may be used to determine the position of individual sound sources in a fixed coordinate system. The directivity characteristics of a microphone may be adjusted based on the received audio and the sound source distribution. The microphone may include capsules that may have a changeable directional characteristic. The capsules may receive aural signals that are converted into an audio signal representative of the audio at each capsule. The audio signals may be used to determine the locations of the sound sources. The system may include an optical source or another identifier that marks a direction of the microphone or of certain capsules. The optical source may be a laser that may pass through a lens and/or an aperture. The direction of the visible or invisible light beam relative to the fixed coordinate system may be determined and adjusted based on the identified location of the sound sources. The light may be detected by an optical or light sensitive device.
The sound source 102 may be positioned so that its sound or audio can be measured. Testing may occur during performances, such as an orchestra concert. The testing may position microphones within or near the audience to measure the sound at different locations. The orchestra or audio speakers may generate the sound. Alternatively, acoustic signals or vibrations may be detected when the signals lie in an aural range. The signals may be characterized by wave properties, such as frequency, wavelength, period, amplitude, speed, and direction. These sound signals may be detected by the microphone 104 or an electrical or optical transducer.
The microphone 104 may be a device or instrument for measuring sound. The microphone 104 may be a transducer or sensor that converts sound/audio into an operating signal that is representative of the sound/audio at the microphone. The operating signal may be an analog or digital signal that may be sent to a second device, such as an amplifier, a recorder, a broadcast transmitter, or the sound analyzer 108. The microphone 104 may have directional characteristics that may be changed, for example, by rotating the microphone 104. The changes may be achieved through a mechanical link that may rotate or swivel, or the adjustment may occur automatically. Based on the directional characteristic of the microphone, it may be necessary to know the relative position of the sound source with respect to the location of the microphone 104 to produce a high quality recording. The microphone 104 with a directional characteristic may be a soundfield microphone or an array microphone.
An exemplary directivity pattern of capsule signals is shown in the figures.
Alternative directivity patterns may include supercardioid, hypercardioid, omnidirectional, and figure-eight. A cardioid may have a high sensitivity near the front of a receiver or microphone and good sensitivity near its sides. The cardioid pattern is “heart-shaped.” Supercardioid and hypercardioid are similar to the cardioid pattern, except they may also have some sensitivity behind the microphone. Omnidirectional patterns may receive sound almost equally from all directions relative to a receiver or microphone. A figure-eight may be almost equally sensitive to sound at the front and the back of the microphone, but may not be sensitive to sound received near the sides of the microphone.
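These first-order patterns are commonly modeled by the polar equation sensitivity(θ) = (1−k) + k·cos(θ), where θ is the angle off the main axis. The short sketch below, which uses textbook values of k rather than values specified by this system, prints the front, side, and rear sensitivity of each pattern.

```python
import numpy as np

# Textbook ratio factors k for common first-order patterns (illustrative values only):
# sensitivity(theta) = (1 - k) + k * cos(theta)
PATTERNS = {
    "omnidirectional": 0.0,
    "cardioid": 0.5,
    "supercardioid": 0.63,
    "hypercardioid": 0.75,
    "figure-eight": 1.0,
}

def sensitivity(k, theta_deg):
    """Relative sensitivity of a first-order pattern at theta degrees off-axis."""
    return (1.0 - k) + k * np.cos(np.radians(theta_deg))

if __name__ == "__main__":
    for name, k in PATTERNS.items():
        front, side, rear = (sensitivity(k, t) for t in (0.0, 90.0, 180.0))
        print(f"{name:15s} front={front:+.2f}  side={side:+.2f}  rear={rear:+.2f}")
```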
A directivity pattern may be obtained or modeled by combining capsule signals. For example, an omnidirectional and a figure-eight signal may be combined to produce a cardioid. In this combination, the amplitude of both signals may be equally large. By weighting the omnidirectional and figure-eight signal patterns, the resulting directivity pattern may be adjusted between an omnidirectional and a figure-eight pattern, for example, from a cardioid to a hypercardioid pattern. The frequency response of the omnidirectional and figure-eight signals may be adjusted separately before the signals are combined. An exemplary microphone and its modeling are described in commonly owned U.S. application Ser. No. 11/472,801, U.S. Pub. No. 2007/0009115, filed Jun. 21, 2006, entitled “MODELING OF A MICROPHONE,” which is incorporated by reference.
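A minimal sketch of that weighting, assuming the omnidirectional and figure-eight components are already available as equal-length sample arrays (the signal names and example data below are illustrative only):

```python
import numpy as np

def blend(omni, fig8, k):
    """Weighted mix of an omnidirectional and a figure-eight signal.

    k = 0 keeps the omnidirectional pattern, k = 0.5 yields a cardioid,
    and k = 1 yields a figure-eight. Each input could be equalized
    separately before this step, as described above.
    """
    omni = np.asarray(omni, dtype=float)
    fig8 = np.asarray(fig8, dtype=float)
    return (1.0 - k) * omni + k * fig8

# Illustrative data: two placeholder capsule-derived signals of 1,024 samples.
rng = np.random.default_rng(seed=0)
omni_sig = rng.standard_normal(1024)
fig8_sig = rng.standard_normal(1024)
cardioid_sig = blend(omni_sig, fig8_sig, k=0.5)
```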
In the sound field microphone 304, each of the individual capsules may yield a signal A, B, C, and D. Each one of the pressure gradient receivers may present a directional characteristic that deviates from an omnidirectional characteristic, which may be approximated in the form (1−k)+k·cos(θ), in which θ denotes the azimuth under which the capsule is exposed to sound and the ratio factor k designates the amount by which the signal deviates from an omnidirectional signal. For example, k=0 corresponds to a sphere and k=1 to a figure-eight. The cylindrical axis of the directional characteristic of each individual capsule may be substantially perpendicular to the membrane or to the corresponding face of the tetrahedron. The individual capsules may have directional characteristics in different directions.
According to one calculation, the four capsule signals A, B, C, and D may be converted to the B format (W, X, Y, Z) as follows:
W=½(A+B+C+D);
X=½(A+B−C−D);
Y=½(A−B+C−D); and
Z=½(A−B−C+D).
The signals produced may correspond to an omnidirectional characteristic or sphere (W) and figure-of-eight patterns (X, Y, Z), which may be substantially orthogonal with respect to each other and extend along the x, y, and z directions, respectively.
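A minimal sketch of this A-to-B conversion, assuming the four capsule signals are available as equal-length sample arrays labeled as in the equations above:

```python
import numpy as np

def a_to_b_format(a, b, c, d):
    """Convert four tetrahedral capsule signals (A format) to B format.

    W approximates an omnidirectional (sphere) component; X, Y, and Z
    approximate three mutually orthogonal figure-eight components along
    the x, y, and z directions, following the equations above.
    """
    a, b, c, d = (np.asarray(s, dtype=float) for s in (a, b, c, d))
    w = 0.5 * (a + b + c + d)
    x = 0.5 * (a + b - c - d)
    y = 0.5 * (a - b + c - d)
    z = 0.5 * (a - b - c + d)
    return w, x, y, z
```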
Some systems may combine B format signals to modify desired characteristics of the microphone. By combining the signal that presents an omnidirectional characteristic with a signal that presents a figure-eight characteristic, a cardioid-shaped pattern may be obtained. Signal weighting may be used to obtain a desired directional characteristic with a desired preferential orientation for the overall signal. A combination of the individual capsule signals received through the B format may be known as “synthesizing an overall microphone.” A desired directional characteristic may be adjusted or set after the sound event has occurred, by appropriate mixing of the individual B format signals.
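As one illustration of such a synthesis (a generic first-order beamformer offered as a sketch, not the specific weighting used by the system), a virtual microphone may be steered toward a chosen azimuth and elevation by weighting W against a directional combination of X, Y, and Z:

```python
import numpy as np

def virtual_mic(w, x, y, z, azimuth_deg, elevation_deg, k=0.5):
    """Synthesize a steerable first-order virtual microphone from B format signals.

    The azimuth/elevation pair sets the preferential orientation and k sets the
    pattern (0 = omnidirectional, 0.5 = cardioid, 1 = figure-eight). Because the
    mix operates on stored B format signals, the orientation and pattern may be
    chosen after the sound event.
    """
    w, x, y, z = (np.asarray(s, dtype=float) for s in (w, x, y, z))
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    ux = np.cos(el) * np.cos(az)   # look-direction unit vector, x component
    uy = np.cos(el) * np.sin(az)   # y component
    uz = np.sin(el)                # z component
    return (1.0 - k) * w + k * (ux * x + uy * y + uz * z)
```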
The desired directional characteristic of a microphone may depend on the sound source or sound sources to be recorded. A microphone orientation may depend on the position of the sound source relative to the microphone. For example, a solo instrument within an orchestra may be an identified sound source. In this instance, the microphone may be oriented to maximize the sound from that solo instrument. The relative position of the sound source with respect to a principal direction of the microphone may be used to position or orient the microphone. The principal direction of the microphone may be manually or automatically positioned to a desired direction. In a soundfield microphone, there may be four equivalent principal directions (each substantially perpendicular to a membrane). A preferential direction may exist at the time of the synthesizing of the overall signal from the individual capsule signals. This preferential direction may be rotated using signal processing techniques.
A mechanical principal direction may be utilized in the determination of the position of sound sources. The mechanical principal direction may be chosen in many ways. In some processes, the relative orientation of the arrangement of the individual capsules with respect to the principal direction should be identified. Establishing the principal direction may establish how the individual microphone capsules are oriented in space. With soundfield microphones, such a principal direction may be implemented by a marking or other identifier, such as an optical or light source in the form of a laser or light emitting diode (LED). The principal direction may establish a coordinate system with the microphone located within the coordinate system. In one system, the microphone may be located near the center of the coordinate system.
The audio processing may identify individual sound sources. The principal direction of the microphone and the orientation of the capsule arrangement may be used with the processed audio to influence the behavior of the microphone. For example, the directional characteristic and/or orientation in space may be adjusted relative to the mechanical principal direction.
The microphone 104 is not limited to soundfield microphones. Microphones with two or more capsules, whose signals may be processed and combined by signal processing techniques, may also be used. The microphones may have a changeable directional characteristic, which may be set and optimized after the recording. The position of sound sources may be identified by processing and analyzing the capsule signals, which may comprise different data that identify a directional function. An array microphone is another example of the microphone 104.
The light beam may vary based on the system. The light may have a relatively constant-diameter beam over the range of sensitivity of the directional microphone 104. Due to spherical spreading, the diameter of the light beam may increase with range. The use of lens configurations, as discussed below, may make a beam more easily visible near the maximum usable range of the microphone 104. The light source may direct a light beam in a direction aligned with an axis of sensitivity of the microphone 104. The light beam may identify an axis of increased sensitivity of the microphone.
The light beam may be directed toward the sound source (or the position to be assumed by the sound source during the sound event). The angle with respect to the predefined mechanical principal direction may be determined. For example, before recording the music of an orchestra, the light beam may be directed toward the chair of each individual orchestra member, and the angle (azimuth and elevation) with respect to the principal direction may be determined. Such a cartographically described orchestra landscape may be used during the mixing to emphasize certain spatial areas and to filter out interfering noises or mistakes (improperly executed notes) from a certain direction. These processes may occur as a function of time, for example, as the solo parts move within an orchestra concert.
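A small worked example of that angle measurement, assuming the microphone and chair positions are known in the fixed coordinate system and that the principal direction lies along the x axis (both assumptions are illustrative):

```python
import numpy as np

def source_angles(mic_pos, source_pos):
    """Azimuth and elevation (degrees) of a source relative to the microphone.

    Azimuth is measured in the x-y plane from the x axis (taken here as the
    principal direction); elevation is measured upward from that plane.
    """
    dx, dy, dz = np.subtract(source_pos, mic_pos)
    azimuth = np.degrees(np.arctan2(dy, dx))
    elevation = np.degrees(np.arctan2(dz, np.hypot(dx, dy)))
    return azimuth, elevation

# Example: a chair 3 m ahead, 2 m to the left, and 1 m above the microphone.
print(source_angles((0.0, 0.0, 0.0), (3.0, 2.0, 1.0)))  # roughly (33.7, 15.5)
```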
The sound analyzer 108 may communicate with the microphone 104 and/or the identification generator 106. In some systems, the microphone 104, the identification generator 106, the sound analyzer 108, and/or the user device 118 may comprise a unitary component or may be multiple components. For example, the microphone 104 may include the identification generator 106 and the sound analyzer 108. The sound analyzer 108 may be a computing device that receives signals representative of acoustics and analyzes those signals. The acoustics or audio may originate from one or more sound sources, such as the sound source 102.
The sound analyzer 108 may process audio signals based on information regarding the orientation and direction of the microphone. The spatial arrangement of the capsules of the microphone with respect to the position of the sound source may be considered during signal processing. Further information may also be used, such as the location of at least one sound source (soloists and/or individual orchestra members), the direction of a spatial barycenter of several sound sources (e.g., of the violinists or wind musicians of an orchestra), and the direction from which the best recording may be expected. For example, the resulting microphone signal may be rotated based on the location information. In addition, interfering signals may be expected, such as noise from the audience of a concert hall. Any of this information may be used to combine and weight the individual audio signals and may be included in the process of signal processing, in order to adjust the directivity characteristics of the resulting microphone and its capsules to achieve better results and improve sound quality.
The sound analyzer 108 may include a processor 110, memory 112, software 114 and an interface 116. The interface 116 may include a user interface that allows a user to interact with any of the components of the sound analyzer 108. For example, a user of the user device 118 may modify the data or parameters that are used by the sound analyzer 108 to analyze the sound source 102.
The processor 110 in the sound analyzer 108 may include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP) or other type of processing device. The processor 110 may be a component in any one of a variety of systems. For example, the processor 110 may be part of a standard personal computer or a workstation. The processor 110 may be one or more general processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data. The processor 110 may operate in conjunction with a software program, such as code generated manually (i.e., programmed).
The processor 110 may communicate with a local memory 112 or a remote memory 112. The interface 116 and/or the software 114 may be stored in the memory 112. The memory 112 may include computer readable storage media such as various types of volatile and non-volatile storage media, including random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media, and the like. In one embodiment, the memory 112 includes a random access memory for the processor 110. In alternative embodiments, the memory 112 is separate from the processor 110, such as a cache memory of a processor, the system memory, or other memory. The memory 112 may be an external storage device or database for storing recorded data. Examples include a hard drive, compact disc (“CD”), digital video disc (“DVD”), memory card, memory stick, floppy disc, universal serial bus (“USB”) memory device, or any other device operative to store data. The memory 112 is operable to store instructions executable by the processor 110.
The functions, acts or tasks illustrated in the figures or described herein may be processed by the processor executing the instructions stored in the memory 112. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination. Processing strategies may include multiprocessing, multitasking, or parallel processing. The processor 110 may execute the software 114 that includes instructions that analyze signals.
The interface 116 may be a user input device or a display. The interface 116 may include a keyboard, keypad or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control or any other device operative to interact with the sound analyzer 108. The interface 116 may include a display that communicates with the processor 110 and is configured to display an output from the processor 110. The display may be a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information. The display may act as an interface for the user to see the functioning of the processor 110, or as an interface with the software 114 for providing input parameters. In particular, the interface 116 may allow a user to interact with the sound analyzer 108 to determine a position of the sound source 102 based on the data from the microphone 104.
The sound recorder 702 may receive the sounds or audio signals that are obtained by the microphone 104. The audio signals may be analog signals that are converted to digital signals by an analog-to-digital converter. The sound recorder 702 may store the received audio signals for future processing or may pass the signals to the processor 110 for real-time processing. The stored audio signals may be analyzed after an event (such as a concert) or may be used during the event to adjust the microphone 104 or to identify a particular sound source, such as the sound source 102.
The location calculator 704 may analyze the audio signals that are received or stored by the sound recorder 702. The location calculator 704 may include the processor 110. The location calculator 704 may determine a location or position of the sound source 102 based on the audio signals received by the microphone 104. The microphone 104 may have capsules, each of which provides an audio signal that is analyzed by the location calculator 704. Each audio signal may be analyzed to determine a signal strength or a strength of the audio at that capsule. That information, along with the directivity components of the microphone 104 and its capsules, may be used by the location calculator 704 to identify the location or position of sound sources, such as the sound source 102. The B format signals from a soundfield microphone may be used for determining directional characteristics of a soundfield microphone. The location of the sound source 102 and/or the microphone 104 may be identified relative to a fixed coordinate system.
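One common way to estimate such a direction from B format signals is the time-averaged pseudo-intensity vector. The sketch below shows that approach as one possibility; it is not necessarily the method used by the location calculator 704.

```python
import numpy as np

def estimate_direction(w, x, y, z):
    """Estimate the dominant direction of arrival from B format signals.

    Uses the time-averaged pseudo-intensity vector (<w*x>, <w*y>, <w*z>) and
    returns (azimuth, elevation) in degrees. This works best when a single
    source dominates the analysis frame.
    """
    w, x, y, z = (np.asarray(s, dtype=float) for s in (w, x, y, z))
    ix, iy, iz = np.mean(w * x), np.mean(w * y), np.mean(w * z)
    azimuth = np.degrees(np.arctan2(iy, ix))
    elevation = np.degrees(np.arctan2(iz, np.hypot(ix, iy)))
    return azimuth, elevation
```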
The direction modifier 706 may be in communication with the identification generator 106 to adjust the identifier. When the identifier is a light beam, the direction modifier 706 may adjust the direction that the light beam is marking. The direction modifier 706 may point the light beam along the principal direction of the microphone 104. The light beam may be adjusted to point towards the sound source 102 as determined by the location calculator 704.
The user device 118 may be a computing device for a user to interact with the microphone 104, the identification generator 106, or the sound analyzer 108. A user device may include a personal computer, personal digital assistant (“PDA”), wireless phone, or other electronic device. The user device 118 may include a keyboard, keypad or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control or any other device that allows a user to adjust the position of the microphone 104 or the direction of the identification generator 106. In one system, the user device 118 may be a remote control that can remotely adjust the microphone 104 and the identification generator 106.
The laser 804 may be shifted radially along a guide rail 805 with respect to the shaft 802. The rail 805 may be arranged so that it can be rotated about the shaft 802. A rotationally symmetric curved mirror 806 deflects a laser beam 807 as a function of the radial separation of the laser 804 from the middle of the shaft 802. The laser beam 807, which is directed toward the sound source 102, may pass through an axis 808 of the microphone shaft 802. The offset between the mirror 806 and the capsule arrangement in the spherical area 803 may have little or no effect on the evaluation because it may be negligibly small in comparison to the separation of the overall microphone 801 from the sound source 102 to be recorded.
A measuring stick 809, which may be arranged on the guide rail 805, may show an instantaneous elevation. Likewise, a measuring stick on the circumference of the shaft 802 (not shown) may show an instantaneous azimuth. Using these two angles, the direction of the sound source 102 may be determined. In one system, the axis 808 of the microphone shaft 802 may be the above-defined principal direction of the microphone 801. However, any direction may be used as the principal direction, and the relative positions in the fixed coordinate system may be determined based on the principal direction. The position of the sound source 102 or sound sources may be calculated from the corresponding angles with respect to the principal direction. In one system, the mirror 806 may be replaced with another optical deflection device, such as lenses, prisms or similar parts.
The sound analyzer 108 and/or identification generator 106 may be located directly on the microphone, or may be coupled to a microphone stand, a microphone tripod, or a microphone suspension, on or in the area of the microphone holder. In one system, the distance to the capsules is minimized to reduce errors that may be caused by the travel time of the audio signals. The light source or identification generator 106 may be located in the proximity of the location of the microphone. In this system, the device may be used in the vicinity of the intended location of the microphone for the determination of the position of sound sources. Also, the microphone may be attached at that location after the measurement of the sound sources. If the information concerning the position of the sound sources becomes available at the time of the subsequent mixing or analysis, the determination of the position may also be possible after the recording. The location of the microphone during recording with respect to the fixed coordinate system may be used along with the arrangement and orientation of the individual capsules for the analysis. Once defined, the fixed coordinate system may be determined by the spatial arrangement of the individual capsules.
In block 1104, an identification of the principal direction of the microphone is established. In one system, a light source or laser may generate a light beam or laser beam that identifies the principal direction of the microphone. In block 1106, audio is measured from the sound source with the capsules of a microphone. There may be multiple microphones, and each microphone may include one or more adjustable capsules. The direction of the capsules may be determined by the principal direction of the microphone. Each capsule may measure audio and generate an audio signal based on that audio as in block 1108. The microphone may generate multiple audio signals from its capsules.
The audio signals from the microphone may be processed in block 1110. The processing of the audio signals may include recording the signals and analyzing them with the sound analyzer to identify a location of the sound source. In block 1112, the direction of the microphone may be adjusted based on the processed audio signals. The analysis of the audio signals may reveal that the principal direction of the microphone is not directed towards the sound source. The microphone may be adjusted manually or automatically with a motor and remote control. The adjustment may occur after recording the audio signals or may occur in near real-time while the audio signals are being recorded. In addition, the direction identification of the microphone may be adjusted in block 1114. In some systems, a light beam that identifies the principal direction of the microphone may be adjusted based on the audio signals that were recorded by the microphone. The adjustment of the direction identification may result in the identifier pointing towards the sound source in block 1116.
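The sequence of blocks 1106 through 1116 may be summarized by the following control-flow sketch; the callables passed in are placeholders for the operations described above, not interfaces of the actual system.

```python
def run_measurement_cycle(capture_capsules, locate_source, steer_microphone, point_identifier):
    """One pass through the measurement and adjustment steps described above."""
    signals = capture_capsules()        # blocks 1106/1108: one audio signal per capsule
    location = locate_source(signals)   # block 1110: process the signals, estimate the source position
    steer_microphone(location)          # block 1112: adjust the microphone direction or pattern
    point_identifier(location)          # blocks 1114/1116: point the identifier (e.g., light beam) at the source
    return location

# Dummy stand-ins, for illustration only.
run_measurement_cycle(
    capture_capsules=lambda: [[0.0] * 8 for _ in range(4)],
    locate_source=lambda signals: (3.0, 2.0, 1.0),
    steer_microphone=lambda position: None,
    point_identifier=lambda position: None,
)
```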
In one system, the recording of an orchestra may be analyzed. For the recording, a microphone may be placed in the proximity of the orchestra. After the principal direction has been established, a light beam may be successively directed on the different (still empty) chairs of the orchestra members and the angle with respect to the principal direction may be measured. One may take into account the fact that, after the measurement of the sound sources, the position and orientation of the microphone may no longer be changed. During the mixing of the recording, the directional effect of the microphone may be directed towards each orchestra member, using the angle that was measured previously.
The system and process described may be encoded in a signal bearing medium, a computer readable medium such as a memory, programmed within a device such as one or more integrated circuits, one or more processors, or processed by a controller or a computer. If the methods are performed by software, the software may reside in a memory resident to or interfaced to a storage device, synchronizer, a communication interface, or non-volatile or volatile memory in communication with a transmitter, which may be a circuit or electronic device designed to send data to another location. The memory may include an ordered listing of executable instructions for implementing logical functions. A logical function or any system element described may be implemented through optic circuitry, digital circuitry, through source code, through analog circuitry, through an analog source such as an analog electrical, audio, or video signal, or a combination. The software may be embodied in any computer-readable or signal-bearing medium, for use by, or in connection with, an instruction executable system, apparatus, or device. Such a system may include a computer-based system, a processor-containing system, or another system that may selectively fetch instructions from an instruction executable system, apparatus, or device that may also execute instructions.
A “computer-readable medium,” “machine readable medium,” “propagated-signal” medium, and/or “signal-bearing medium” may comprise any device that includes, stores, communicates, propagates, or transports software for use by or in connection with an instruction executable system, apparatus, or device. The machine-readable medium may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. A non-exhaustive list of examples of a machine-readable medium would include: an electrical connection (“electronic”) having one or more wires, a portable magnetic or optical disk, a volatile memory such as a Random Access Memory (“RAM”), a Read-Only Memory (“ROM”), an Erasable Programmable Read-Only Memory (EPROM or Flash memory), or an optical fiber. A machine-readable medium may also include a tangible medium upon which software is printed, as the software may be electronically stored as an image or in another format (e.g., through an optical scan), then compiled, and/or interpreted or otherwise processed. The processed medium may then be stored in a computer and/or machine memory.
While various embodiments of the invention have been described, it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the invention. Accordingly, the invention is not to be restricted except in light of the attached claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---|
05450113 | Jun 2005 | EP | regional |
This application is a continuation-in-part of International PCT application No. PCT/EP2006/006012 (Pub. No. WO 2006/136410 A1), filed Jun. 22, 2006, as allowed under 35 U.S.C. 365(c), which claims priority to EP Application No. 05450113.5, filed Jun. 23, 2005, each of which is incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
4042779 | Craven et al. | Aug 1977 | A |
5073936 | Görike et al. | Dec 1991 | A |
6522761 | Ruffa | Feb 2003 | B1 |
6727935 | Allen et al. | Apr 2004 | B1 |
7835531 | Nell | Nov 2010 | B2 |
20030184645 | Biegelsen et al. | Oct 2003 | A1 |
20050111674 | Hsu | May 2005 | A1 |
20060222187 | Jarrett et al. | Oct 2006 | A1 |
20070009115 | Reining et al. | Jan 2007 | A1 |
20070009116 | Reining et al. | Jan 2007 | A1 |
20070071249 | Reining et al. | Mar 2007 | A1 |
Number | Date | Country | |
---|---|---|---|
20080144876 A1 | Jun 2008 | US |
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/EP2006/006012 | Jun 2006 | US
Child | 11961354 | | US