Aspects described herein generally relate to an ambisonic microphone, and/or hardware and/or software related thereto. More specifically, one or more aspects described herein provide for an array of microphone capsules for capturing ambisonic audio.
Ambisonic audio may refer to a form of full-sphere periphony that may be used in many virtual reality and/or other immersive applications. Ambisonic audio may be encoded according to Ambisonics B-Format, where four A-format signals from four microphone capsules of an ambisonic microphone are encoded into four separate channels labeled W, X, Y, and Z. The W channel corresponds to the mono output from an omnidirectional microphone while the X, Y, and Z channels correspond to directional components of the sound signal. With the rising popularity of various services and applications utilizing ambisonic audio, there is an increasing demand for improvements in ambisonic microphones that can be achieved with relatively simple processes and with relatively low-cost equipment.
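For example, the relationship between a mono source signal and the four B-format channels may be sketched as follows. This is an illustrative model only, assuming an ideal plane wave with a known arrival direction and FuMa-style W-channel weighting; the function name is hypothetical and not part of the apparatus described herein.

```python
import math

def encode_bformat(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order B-format channels for an
    ideal plane wave arriving from the given direction.

    Illustrative sketch only; a real ambisonic microphone derives these
    channels from capsule (A-format) signals, not from a known direction.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = [s / math.sqrt(2) for s in signal]                  # omnidirectional, ~-3 dB
    x = [s * math.cos(az) * math.cos(el) for s in signal]   # front-back figure-8
    y = [s * math.sin(az) * math.cos(el) for s in signal]   # left-right figure-8
    z = [s * math.sin(el) for s in signal]                  # up-down figure-8
    return w, x, y, z
```

A source directly in front (azimuth 0, elevation 0) thus appears at full level in the X channel and not at all in Y or Z.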
The following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention. The following summary merely presents some concepts of the disclosure in a simplified form as a prelude to the more detailed description provided below.
Capturing ambisonic audio often requires extensive capital, cabling, external companion equipment, and advanced knowledge of A-format to B-format conversion techniques to attain a high-quality ambisonic audio signal. Additionally, the increasing accessibility and portability of equipment, such as used for podcasting, live streaming, and/or other recording, may allow a user to perform in various acoustic environments. However, depending on the application, a user might not have sufficient time, knowledge, and/or equipment to properly capture ambisonic audio to attain a desired audio quality.
As described in more detail herein, this application sets forth apparatuses and methods for capturing ambisonic audio with an array architecture of microphone capsules that exhibits stable high-frequency polar consistency and reduced acoustic shading. These apparatuses and methods may be helpful in enabling a consumer to quickly and easily capture high-quality ambisonic audio, convert the ambisonic audio to a desired format, and/or utilize the ambisonic audio in one or more of a number of applications, such as immersive musical recordings, surround sound encoding, podcasting, video game audio design, stereoscopically tracked virtual reality/augmented reality experiences, and multichannel mixing, and/or one or more other applications.
An example ambisonic microphone may comprise a plurality of microphone capsules geometrically arranged to reduce an acoustic shading effect from a structural interference and compactly nested to reduce a phase-related error. The plurality of microphone capsules may comprise a first microphone capsule oriented substantially toward a first vertex of a notional tetrahedron, a second microphone capsule oriented substantially toward a second vertex of the notional tetrahedron, a third microphone capsule oriented substantially toward a third vertex of the notional tetrahedron, and a fourth microphone capsule oriented substantially toward a fourth vertex of the notional tetrahedron.
An example method may comprise arranging a first microphone capsule on a first face of a notional tetrahedron and orienting a first face of the first microphone capsule substantially orthogonally to the first face of the notional tetrahedron. The method may further comprise arranging a second microphone capsule on a second face of the notional tetrahedron, orienting a second face of the second microphone capsule substantially orthogonally to the second face of the notional tetrahedron, and nesting the first microphone capsule with the second microphone capsule such that a first axis of minimum sensitivity of the first microphone capsule and a second axis of minimum sensitivity of the second microphone capsule intersect at a first coincident point.
These as well as other novel advantages, details, examples, features, and objects of the present disclosure will be apparent to those skilled in the art from the following detailed description, the attached claims, and the accompanying drawings, listed herein, which are useful in explaining the concepts discussed herein.
Some features are shown by way of example, and not by limitation, in the accompanying drawings. In the drawings, like numerals reference similar elements.
In the following description of the various examples, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various examples in which aspects may be practiced. References to “embodiment,” “example,” and the like indicate that the embodiment(s) or example(s) of the invention so described may include particular features, structures, or characteristics, but not every embodiment or example necessarily includes the particular features, structures, or characteristics. Further, it is contemplated that certain embodiments or examples may have some, all, or none of the features described for other examples. And it is to be understood that other embodiments and examples may be utilized and structural and functional modifications may be made without departing from the scope of the present disclosure.
Unless otherwise specified, serial adjectives such as “first,” “second,” “third,” and the like that are used to describe components are used only to indicate different components, which may be similar components. The use of such serial adjectives is not intended to imply that the components must be provided in a given order, whether temporally, spatially, in ranking, or in any other way.
Also, while the terms “front,” “back,” “side,” and the like may be used in this specification to describe various example features and elements, these terms are used herein as a matter of convenience, for example, based on the example orientations shown in the figures and/or the orientations in typical use. Nothing in this specification should be construed as requiring a specific three dimensional or spatial orientation of structures in order to fall within the scope of the claims.
Microphone 200 may include yoke 202. Yoke 202 may be constructed according to one or more shapes and/or geometries. Yoke 202 may include a protruding member 206. Protruding member 206 may be constructed according to one or more shapes or geometries. Member 206 may be substantially columnar. One or more of the microphone capsules may be coupled to yoke 202 along protruding member 206. One or more of the microphone capsules may be electrically connected to yoke 202 and/or protruding member 206. Protruding member 206 may define a substantially vertical axis (further described with respect to
Microphone 200 may include a handle 204. Handle 204 may include a neck 208. Yoke 202 may be coupled to handle 204 at neck 208. Yoke 202 may be electrically connected to handle 204 and/or neck 208. Yoke 202 may be integrally molded to handle 204. Yoke 202 may be detachably coupled to handle 204. Yoke 202 may include legs 202a and 202b. Legs 202a and 202b may be integrally molded to handle 204 and may be electrically connected to handle 204. Legs 202a and 202b may be detachably coupled to handle 204. Legs 202a and/or 202b may be rotatably and/or pivotably coupled to handle 204, which may allow a user to rotate and/or pivot the orientation of one or more of the microphone capsules about the neck 208 of handle 204. Yoke 202 may be configured to swivel on the neck 208 of handle 204. Yoke 202 and/or member 206 may house some or all of the electronic components described and discussed herein.
Handle 204 and/or neck 208 may be constructed according to any number of shapes or geometries. Handle 204 may be adapted for handheld use and may be constructed according to a number of ergonomic geometries. Handle 204 and/or neck 208 may house some or all of the electronic components described and discussed herein (e.g., conversion module 500). Handle 204 may be adapted as a mounting fixture compatible with one or more cameras or stands, including tripod stands (discussed in greater detail with respect to
Microphone capsule 200a may be positioned in a direction indicated by line 200a′. Microphone capsule 200b may be positioned in a direction indicated by line 200b′. Microphone capsule 200c may be positioned in a direction indicated by line 200c′. Microphone capsule 200d may be positioned in a direction indicated by line 200d′. Lines 200a′, 200b′, 200c′, and 200d′ may represent an axis of maximum sensitivity (i.e., an axis through the center of the microphone capsule projecting infinitely in the positive direction) and minimum sensitivity (i.e., said axis projecting infinitely in the negative, or opposite, direction) for microphone capsules 200a, 200b, 200c, and 200d, respectively. The axes of minimum sensitivity of microphone capsules 200a and 200d (i.e., lines 200a′ and 200d′, respectively) may or might not intersect at a point in space (i.e., lines 200a′ and 200d′ may share at least one coincident point of intersection). The axes of minimum sensitivity of microphone capsules 200b and 200c (i.e., lines 200b′ and 200c′, respectively) may or might not intersect at a point in space (i.e., lines 200b′ and 200c′ may share at least one coincident point of intersection).
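Whether two sensitivity axes share a coincident point may be checked computationally. The following sketch (a hypothetical helper, not part of the disclosure) represents each axis as a point and a direction in three-dimensional space and tests whether the two lines are coplanar and non-parallel, which is the condition for intersecting at exactly one point:

```python
def axes_intersect(p1, d1, p2, d2, tol=1e-9):
    """Return True if two lines (each given by a point p and a direction
    d) share a coincident point in 3D, i.e., they are coplanar and not
    parallel."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    n = cross(d1, d2)
    if dot(n, n) < tol:      # parallel axes never meet at a single point
        return False
    # The lines are coplanar iff (p2 - p1) . (d1 x d2) == 0.
    return abs(dot(sub(p2, p1), n)) < tol
```

Two axes through a common origin with distinct directions intersect, while offset skew axes do not.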
Microphone capsules 200a-200d may be geometrically arranged and compactly nested relative to one another such that the microphone capsules may exhibit a consistent and/or stable polar response at high frequencies. The microphone capsules may be compactly nested together to help minimize phase-related errors and/or to help provide higher spatial/localization accuracy. The microphone capsules may be geometrically oriented according to aspects described herein to reduce the acoustic shading due to structural interference introduced by one or more adjacent microphone capsules. That is, the geometric orientation of the microphone capsules may reduce acoustic shading by reducing the cross-section(s) of the obstruction caused by adjacent microphone capsules, which may help improve high frequency response.
As shown in
Microphone capsule 200b may or might not be disposed at the centroid of face 306. Microphone capsule 200b may be disposed at any number of points on face 306. Face 306 may be defined by vertices 300a′, 300b′, and 300d′. Microphone capsule 200c may or might not be disposed at the centroid of face 308. Capsule 200c may be disposed at any number of points on face 308. Face 308 may be defined by vertices 300a′, 300c′, and 300d′. Capsule 200d may or might not be disposed at the centroid of face 310. Capsule 200d may be disposed at any number of points on face 310. Face 310 may be defined by vertices 300b′, 300c′, and 300d′. Capsules 200b-200d may include respective edges that are tangent to faces 306, 308, and 310, respectively, of notional tetrahedron 300. Microphone capsule 200b may be disposed on face 306 of notional tetrahedron 300 such that the plane defined by face 306 intersects one or more points of capsule 200b (i.e., edge of capsule 200b might not be tangent to face 306). Microphone capsule 200c may be disposed on face 308 of notional tetrahedron 300 such that the plane defined by face 308 may intersect one or more points of capsule 200c (i.e., edge of capsule 200c might not be tangent to face 308). Microphone capsule 200d may be disposed on face 310 of notional tetrahedron 300 such that the plane defined by face 310 may intersect one or more points of capsule 200d (i.e., edge of capsule 200d might not be tangent to face 310).
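If specific coordinates are assigned to the vertices of notional tetrahedron 300 (the coordinate values below are hypothetical, chosen for illustration), the centroid of a face may be computed as the average of its three vertices:

```python
def face_centroid(v1, v2, v3):
    """Centroid of a triangular face of the notional tetrahedron, given
    its three vertices as (x, y, z) tuples."""
    return tuple((a + b + c) / 3.0 for a, b, c in zip(v1, v2, v3))

# Hypothetical vertex coordinates: a regular tetrahedron inscribed in a cube.
vertices = {
    "300a'": (1, 1, 1),
    "300b'": (1, -1, -1),
    "300c'": (-1, 1, -1),
    "300d'": (-1, -1, 1),
}

# Face 306 is defined by vertices 300a', 300b', and 300d'.
c306 = face_centroid(vertices["300a'"], vertices["300b'"], vertices["300d'"])
```

A capsule disposed at the centroid sits equidistant from the face's three vertices; disposing it elsewhere on the face simply uses different weights.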
Microphone capsule 200b may be oriented in a direction substantially towards vertex 300b′ (represented by line 200b′). Capsule 200c may be oriented in a direction substantially towards vertex 300c′ (represented by line 200c′). Capsule 200d may be oriented in a direction substantially towards vertex 300d′ (represented by line 200d′). The faces (i.e., the side of the microphone capsules corresponding to maximum acoustic sensitivity) of microphone capsules 200b, 200c, and 200d may each define a plane that is substantially (e.g., ±10 degrees) orthogonal (i.e., perpendicular) to the plane defined by the corresponding face of notional tetrahedron 300 (i.e., face 304 for capsule 200a, face 306 for capsule 200b, face 308 for capsule 200c, and face 310 for capsule 200d). In an example, the faces of microphone capsules 200b, 200c, and/or 200d might not be oriented orthogonally relative to faces 306, 308, and/or 310, respectively. The faces of microphone capsules 200b, 200c, and/or 200d may be oriented parallel to faces 306, 308, and/or 310, respectively. The faces of microphone capsules 200b, 200c, and/or 200d may be oriented parallel and substantially tangent to faces 306, 308, and/or 310, respectively.
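The "substantially orthogonal" relationship may be verified numerically. The sketch below (a hypothetical helper, not part of the disclosure) compares the normal vectors of the two planes and applies a ±10 degree tolerance of the kind noted above:

```python
import math

def planes_substantially_orthogonal(n1, n2, tol_deg=10.0):
    """Check whether two planes, each given by a normal vector, are
    orthogonal to within +/- tol_deg degrees. Two planes are orthogonal
    exactly when the angle between their normals is 90 degrees."""
    dot = sum(a * b for a, b in zip(n1, n2))
    mag = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    cos_angle = max(-1.0, min(1.0, dot / mag))   # clamp for acos safety
    angle = math.degrees(math.acos(cos_angle))
    return abs(angle - 90.0) <= tol_deg
```

Perpendicular planes pass the check, coplanar ones fail, and a plane tilted a few degrees off perpendicular still falls within the tolerance.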
As has been discussed, microphone capsules 200a-200d may be compactly nested with respect to one another which may help ensure a consistent polar response of the microphone capsules at high frequencies and may help reduce phase-related errors. Microphone capsules 200a-200d may be geometrically arranged to help reduce acoustic shading that any one microphone capsule is subjected to from the other microphone capsules. While the microphone capsules may be generally oriented or arranged as described above, the distances between any two given microphone capsules may vary (e.g., may vary widely).
For example, as shown in
Microphone capsules 200a and 200b may be offset from microphone capsules 200c and 200d by a distance d along a z-axis of member 206 (as shown in
As has been discussed, notional tetrahedron 300 may assume the shape of an irregular tetrahedron (i.e., a tetrahedron that does not have four equilateral faces). In an example, microphone capsules 200a, 200b, 200c, and 200d may be generally disposed on the faces of notional tetrahedron 300 at a constant radius from a central point such that the array of microphone capsules is radially symmetric when projected onto a plane. The faces of capsules 200a, 200b, 200c, and 200d may be equally spaced relative to one another and may form an angle of about 90 degrees relative to adjacent capsules. Capsules 200a, 200b, 200c, and 200d may be arranged into first and second vertical planes, such that each vertical plane contains two microphone capsules that share an intersecting axis. Each pair of microphone capsules may be rotated about the respective shared axis such that the respective axes of maximum sensitivity are orthogonal. The second vertical plane may be rotated about 90 degrees and mirrored about its axis of rotation. As a result, microphone capsules 200a, 200b, 200c, and 200d may be substantially outward-facing. The respective axes of maximum sensitivity of microphone capsules 200a, 200b, 200c, and 200d might not overlap. The upper pair of microphone capsules may be largely upward-facing and the lower pair of microphone capsules may be largely downward-facing.
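One illustrative way to generate such an arrangement is sketched below. The capsule names, azimuth assignments, and the ±45 degree elevation are hypothetical choices for this example; the ±45 degree value is assumed so that the axes of maximum sensitivity within each vertical-plane pair come out orthogonal, as described above.

```python
import math

def capsule_directions(elevation_deg=45.0):
    """Illustrative direction vectors for a four-capsule array: azimuths
    spaced 90 degrees apart (radially symmetric when projected onto a
    horizontal plane), with the upper pair tilted up and the lower pair
    tilted down. Each pair lies in one vertical plane; the two planes
    are 90 degrees apart."""
    dirs = {}
    for name, az_deg, el_sign in [("200a", 45, +1), ("200d", 225, +1),
                                  ("200b", 135, -1), ("200c", 315, -1)]:
        az = math.radians(az_deg)
        el = math.radians(el_sign * elevation_deg)
        dirs[name] = (math.cos(el) * math.cos(az),
                      math.cos(el) * math.sin(az),
                      math.sin(el))
    return dirs
```

With the assumed ±45 degree elevation, the axes of maximum sensitivity of the upper pair (200a, 200d) are mutually orthogonal, as are those of the lower pair (200b, 200c).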
With respect to
One or more aspects may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein, such as, for example, microphone 200. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting language such as (but not limited to) Python, Perl, PHP, Ruby, JavaScript, and the like. The computer executable instructions may be stored on a computer readable medium such as a nonvolatile storage device. Any suitable computer readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, solid state storage devices, and/or any combination thereof. In addition, various transmission (non-storage) media representing data or events as described herein may be transferred between a source and a destination in the form of electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, and/or wireless transmission media (e.g., air and/or space). Various aspects described herein may be embodied as a method, a data processing system, or a computer program product. Therefore, various functionalities may be embodied in whole or in part in software, firmware, and/or hardware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects described herein, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.
With further reference to
Microphone capsules 200a-200d may be configured to receive acoustic signals emanating from various directions in an acoustic environment. The microphone capsules may capture a set of audio signals in A-format. The set of audio signals may vary widely in duration (e.g., from less than one second to more than 1,000 seconds). The microphone capsules may provide the set of A-format audio signals to an external device. Onboard processing of microphone 200 (e.g., conversion module 500, processor 504) may encode the set of A-format audio signals to B-format, C-format (or Ambisonic UHJ, such as nested multi-channel output formats), D-format (such as 3.1, 5.1, 5.1.n, 7.1, 7.1.n, and/or other surround sound formats, including custom speaker array formats and other formats with pre-encoded channels), G-format, mono, stereo, and/or to a binaural audio format for headphone listening (described further below with respect to
As shown in
In operation, device controller 501 may receive a set of A-format audio signals captured with microphone capsules 200a-200n. Device controller 501 may route the set of A-format audio signals to A/D converter 502, which may provide a set of digital A-format audio signals to processor 504 for further processing. Processor 504 may provide the set of digital A-format audio signals to D/A converter 511 for output via output port 512 to an output device 514. The number of n channels of D/A converter 511 may correspond to the number of n channels of A/D converter 502 and/or to the number of n microphone capsules (i.e., the number of channels in D/A converter 511, represented as the integer n, may be the same as the number of channels in A/D converter 502 and/or the same as the number of microphone capsules). Output device 514 may be any of devices 102, 104, and/or other devices such as a mixing console, recording console, headphones, earphones, etc.
Converter module 500 may include an encoder/decoder 510. Encoder/decoder 510 may be configured to encode (or convert) the set of digital A-format audio signals to a set of B-format audio signals. Encoder/decoder 510 may be configured to decode (or render) the set of B-format audio signals and provide the result, via D/A converter 511 and port 512, for use in output device 514. As has been discussed, the orientation and arrangement of microphone capsules 200a-200d according to aspects described herein may minimize A-format to B-format conversion errors and localization inaccuracies that might typically result from the non-coincidence of the microphone capsules in an ambisonic microphone. As a result, the B-format stability (or bi-directional collapse point) of the microphone capsules may be improved at higher frequencies. That is to say that the directivity patterns and frequency response patterns of the microphone capsules may remain stable while capturing audio signals with frequencies in the range of about 4-20 kHz. Encoder 510 and/or processor 504 may employ any number of purely time-domain processing techniques when performing A-format to B-format encoding of the set of audio signals. That is, encoder 510 and/or processor 504 might not perform a Fast Fourier Transform on the set of A-format audio signals before encoding to B-format. Rather, encoder 510 and/or processor 504 may analyze one or more waveforms of the set of audio signals directly; encoder 510 and/or processor 504 might not convert the set of audio signals into spectral components and might not analyze spectral components of the set of audio signals. In one or more examples, frequency response correction filters, equalization filters, and/or other corrective measures might be unnecessary.
Encoder 510 and/or processor 504 may encode the set of A-format audio signals to B-format audio signals by employing the convention:

W = FLU + FRD + BLD + BRU
X = FLU + FRD - BLD - BRU
Y = FLU - FRD + BLD - BRU
Z = FLU - FRD - BLD + BRU
where W represents an omnidirectional microphone channel and X, Y, and Z represent bi-directional (or figure-of-eight) microphone channels; and where FLU may represent the signal captured by microphone capsule 200a, FRD may represent the signal captured by microphone capsule 200c, BLD may represent the signal captured by microphone capsule 200d, and BRU may represent the signal captured by microphone capsule 200b. The W channel may be attenuated by about 3 dB (i.e., by a factor of √2, to about 0.707 of its original amplitude). As a result of employing time-domain processing, processing latency may be reduced and microphone 200 may provide real-time A-format or decoded B-format audio signals for latency-critical applications such as livestreaming. Encoder/decoder 510 may support any number of B-format export formats, including, for example, FuMa, AmbiX, and the like.
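The time-domain character of this encoding may be illustrated with a short sketch (the function name is hypothetical; signals are modeled as per-sample sequences). Each B-format output sample is a simple sum or difference of the four A-format samples at the same instant, so no Fourier transformation is involved:

```python
import math

def encode_a_to_b(flu, frd, bld, bru):
    """Sample-wise (time-domain) A-format to B-format encoding using a
    conventional tetrahedral sum/difference matrix. Each output sample
    depends only on the four input samples at the same instant; no FFT
    or other spectral analysis is performed."""
    g = 1.0 / math.sqrt(2)  # ~3 dB attenuation applied to the W channel
    w = [g * (a + b + c + d) for a, b, c, d in zip(flu, frd, bld, bru)]
    x = [(a + b - c - d) for a, b, c, d in zip(flu, frd, bld, bru)]
    y = [(a - b + c - d) for a, b, c, d in zip(flu, frd, bld, bru)]
    z = [(a - b - c + d) for a, b, c, d in zip(flu, frd, bld, bru)]
    return w, x, y, z
```

Because the matrix is applied per sample, the encoder adds essentially no algorithmic latency, consistent with the real-time applications noted above.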
Encoder/decoder 510 may be configured to decode the set of B-format audio signals to a set of D-format audio signals. Converter module 500 may include an interface controller 505 communicatively connected to a user interface 515. The interface controller 505 may facilitate communication between a user interface 515 and the converter module 500. For example, the interface controller 505 may receive user indications and/or queries from user interface 515 and provide the indications and/or queries to the converter module for further actions described herein. The user interface 515 may comprise, for example, a capacitive-touch interface that a user may control via touch, or a graphical user interface. A companion software application (not shown) installed on the device 102 and/or device 104 may provide the user interface 515 and may perform some or all of the processing and decoding of the audio signals described herein.
The interface 515 may function in concert with some or all of the hardware and/or software components described herein to help simplify the setup and workflow of capturing spatial audio with microphone 200 and providing it to a consumer. The user interface 515 may present a user with several audio capture and conversion options. For example, interface 515 may provide the user with options to output captured audio signals in mono, stereo, binaural, A-format, B-format, C-format, D-format, and/or G-format audio standards to an external device. Interface 515 may provide the user with other pre- and/or post-recording processing options, such as filtering, equalization, compression, and steerable virtual microphones with independent position/localization and gain adjustments. Interface 515 may provide the user with a graphical representation of an acoustic sound field and may allow the user to create any number of virtual microphones and manipulate the polar patterns of said virtual microphones. Interface 515 may include a video feed window to allow a user to monitor the synchronization of incoming audio signals to either live or pre-recorded video data.
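A steerable virtual microphone of the kind such an interface may expose can be sketched as follows. This is an illustrative rendering from B-format channels that assumes the conventional -3 dB W-channel weighting; the function name and the pattern parameterization are assumptions for the sake of the example.

```python
import math

def virtual_mic(w, x, y, z, azimuth_deg, elevation_deg, pattern=0.5):
    """Render a steerable virtual microphone from B-format channels.

    pattern: 1.0 = omnidirectional, 0.5 = cardioid, 0.0 = figure-8.
    The W channel is assumed to carry the conventional -3 dB gain,
    hence the sqrt(2) restoration factor."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    ux, uy, uz = (math.cos(az) * math.cos(el),
                  math.sin(az) * math.cos(el),
                  math.sin(el))
    return [pattern * math.sqrt(2) * wi
            + (1 - pattern) * (ux * xi + uy * yi + uz * zi)
            for wi, xi, yi, zi in zip(w, x, y, z)]
```

For a frontal plane wave, a cardioid virtual microphone steered to the front passes the signal at full level, while the same cardioid steered to the rear nulls it, which is the behavior a user would expect when repositioning virtual microphones in the sound field.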
Any of the circuitry in
In operation, one or more microphone capsules may be arranged or oriented in a first direction relative to a notional tetrahedron according to aspects described herein. For example, a first microphone capsule may be arranged on a first face of the notional tetrahedron in a direction substantially toward a first vertex of the notional tetrahedron (Step 602). The face of the microphone capsule may be oriented relative to (or with respect to) the face of the notional tetrahedron (e.g., orthogonally, substantially orthogonally, parallel, substantially parallel) (Step 604). The first microphone capsule may be oriented relative to a second microphone capsule such that the axis of minimum sensitivity of the first capsule may share a coincident point with the axis of minimum sensitivity of the second capsule (i.e., the axes may intersect at a point in space). The axes of maximum sensitivity of the first and second microphone capsules might not share a coincident point with one another. The third and fourth microphone capsules may be oriented relative to one another such that the axis of minimum sensitivity of the third capsule may share a coincident point with the axis of minimum sensitivity of the fourth capsule (i.e., the axes may intersect at a point in space). The first, second, third, and/or fourth microphone capsules may be oriented with respect to one another to reduce structural interference and acoustic shading that may be caused by adjacent microphone capsules.
The first microphone capsule may be nested with one or more other microphone capsules in accordance with aspects described herein to help reduce phase-related errors. (Step 606). The microphone capsules may be configured to capture audio signals (Step 608). The user may wish to convert the set of captured audio signals (e.g., A-format audio signals) to any number of different audio standards (e.g., B-format, C-format, D-format, G-format, mono, stereo, binaural, etc.). The user may indicate, via an interface (such as interface 515) and/or microphone 200, that the user wishes for such conversion to occur and may specify the desired format. Based on receiving a conversion indication (Step 610: YES), the conversion module 500 may employ time-domain processing techniques as described herein to convert the A-format audio signals to the desired format (Step 612) for further processing and/or output to, for example, output device 514 (Step 614). In one or more examples, the output audio signals may be synced to a video feed, including a livestream, broadcast, etc. The user may wish to output the raw A-format audio signals to an external device. Based on receiving an indication to output the raw A-format audio signals (Step 610: NO), the conversion module 500 may provide the A-format audio signals to output port 512 for conversion and/or further processing by, for example, output device 514 (Step 614). The microphone capsules may automatically continue to capture audio signals indefinitely (Step 616: YES). Conversion module 500 may receive an indication to stop capturing audio signals (Step 616: NO), upon which method 600 may terminate.
The aspects described herein may be performed by a number of device configurations. For example, a user may connect, for example, microphones 100 and 200 to devices 102, 104, and/or other devices operating a software application capable of performing the operations described herein. In another example, the aspects described herein can be performed by a smartphone, desktop computer, laptop computer, and/or other devices having an internal microphone and a software application capable of performing the operations described herein. No other audio equipment might be necessary to perform the operations described herein.
An ambisonic microphone may comprise a plurality of microphone capsules. The plurality of microphone capsules may be geometrically arranged to reduce an acoustic shading effect from a structural interference. The plurality of microphone capsules may be compactly nested to reduce a phase-related error. The plurality of microphone capsules may comprise a first microphone capsule oriented in a first direction, a second microphone capsule oriented in a second direction, a third microphone capsule, and a fourth microphone capsule. The first direction may be substantially toward a first vertex of a notional tetrahedron and the second direction may be substantially toward a second vertex of the notional tetrahedron. The third microphone capsule may be oriented in a third direction. The third direction may be substantially toward a third vertex of the notional tetrahedron. The fourth microphone capsule may be oriented in a fourth direction. The fourth direction may be substantially toward a fourth vertex of the notional tetrahedron. The first microphone capsule may comprise a first capsule face that is arranged in a first orientation relative to the first direction; the second microphone capsule may comprise a second capsule face that is arranged in a second orientation relative to the second direction; the third microphone capsule may comprise a third capsule face that is arranged in a third orientation relative to the third direction; and the fourth microphone capsule may comprise a fourth capsule face that is arranged in a fourth orientation relative to the fourth direction. The first orientation, second orientation, third orientation, and/or fourth orientation may be at least one of the group consisting of substantially orthogonal or substantially parallel. The first microphone capsule may be disposed on a first face of the notional tetrahedron. The second microphone capsule may be disposed on a second face of the notional tetrahedron.
The third microphone capsule may be disposed on a third face of the notional tetrahedron. The fourth microphone capsule may be disposed on a fourth face of the notional tetrahedron. The ambisonic microphone may comprise one or more processors and memory storing instructions that, when executed by the one or more processors, cause the microphone to encode a set of audio signals generated by the plurality of microphone capsules to at least one of an A-format audio standard, a B-format audio standard, a C-format audio standard, a D-format audio standard, or a G-format audio standard. The instructions, when executed by the one or more processors, may further cause the microphone to encode the set of audio signals using time-domain processing. The ambisonic microphone may comprise an output port to provide a set of audio signals formatted according to at least one of an A-format audio standard, a B-format audio standard, a C-format audio standard, a D-format audio standard, or a G-format audio standard to an external device. The ambisonic microphone may further comprise a mounting fixture configured to removably couple to at least one camera. The mounting fixture may be disposed above or beneath the plurality of microphone capsules.
An apparatus may comprise one or more processors and memory storing instructions that, when executed by the one or more processors, may cause the apparatus to receive, from an ambisonic microphone, a set of audio signals and encode the set of audio signals using time-domain processing. The apparatus may comprise a first microphone capsule disposed on a first face of a notional tetrahedron. The first microphone capsule may comprise a first microphone capsule face arranged in a first orientation relative to the first face of the notional tetrahedron. The apparatus may comprise a second microphone capsule disposed on a second face of the notional tetrahedron. The second microphone capsule may comprise a second microphone capsule face arranged in a second orientation relative to the second face of the notional tetrahedron. The apparatus may comprise a third microphone capsule disposed on a third face of the notional tetrahedron. The third microphone capsule may comprise a third microphone capsule face arranged in a third orientation relative to the third face of the notional tetrahedron. The apparatus may comprise a fourth microphone capsule disposed on a fourth face of the notional tetrahedron. The fourth microphone capsule may comprise a fourth microphone capsule face arranged in a fourth orientation relative to the fourth face of the notional tetrahedron. The first orientation, second orientation, third orientation, and/or fourth orientation may be at least one of the group consisting of substantially orthogonal or substantially parallel. The set of audio signals may be coded according to at least one of an A-format audio standard, a B-format audio standard, a C-format audio standard, a D-format audio standard, or a G-format audio standard. The apparatus may receive, from the ambisonic microphone, the set of audio signals via a wireless transmission.
The memory may store instructions that, when executed by the one or more processors, cause the apparatus to convert the set of audio signals to at least one of a B-format audio standard, a C-format audio standard, a D-format audio standard, or a G-format audio standard. The ambisonic microphone may comprise the one or more processors and memory. The apparatus may further comprise a mounting fixture configured to removably couple to at least one camera.
A method for capturing audio may comprise arranging a first microphone capsule at a first face of a notional tetrahedron and orienting a first face of the first microphone capsule substantially orthogonally to the first face of the notional tetrahedron. The method may further comprise: arranging a second microphone capsule at a second face of the notional tetrahedron, orienting a second face of the second microphone capsule substantially orthogonally to the second face of the notional tetrahedron, and nesting the first microphone capsule with the second microphone capsule such that a first axis of minimum sensitivity of the first microphone capsule and a second axis of minimum sensitivity of the second microphone capsule intersect at a first coincident point. The method may further comprise capturing a first set of audio signals, encoding the first set of audio signals using time-domain processing, and providing a second set of audio signals to an output device, wherein the second set of audio signals is encoded according to at least one of an A-format audio standard, a B-format audio standard, a C-format audio standard, a D-format audio standard, or a G-format audio standard. The method may further comprise obtaining a set of audio signals and wirelessly transmitting the set of audio signals to an output device. The set of audio signals may be encoded according to at least one of an A-format audio standard, a B-format audio standard, a C-format audio standard, a D-format audio standard, or a G-format audio standard.
In the foregoing specification, the present disclosure has been described with reference to specific exemplary examples thereof. Although the invention has been described in terms of a preferred example, those skilled in the art will recognize that various modifications, examples, or variations of the invention can be practiced within the spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, therefore, to be regarded in an illustrative rather than restrictive sense. Accordingly, it is not intended that the invention be limited except as may be necessary in view of the appended claims.
This application claims priority to U.S. Provisional Patent Application No. 63/576,446, filed on Apr. 28, 2023, which is hereby incorporated by reference in its entirety.