The present invention relates to systems, methods, and apparatuses for panning audio in virtual environments at least partially in response to movement of a user.
Human beings have just two ears, but can locate sounds in three dimensions, in both distance and direction. This is possible because the brain, the inner ears, and the external ears (pinnae) work together to make inferences about the location of a sound. The location of a sound is estimated by taking cues derived from one ear (monaural cues), as well as by comparing the difference between the cues received at both ears (binaural cues).
Binaural cues relate to the differences in arrival time and intensity of the sound between the two ears, which assist with the relative localization of a sound source. Monaural cues relate to the interaction between the sound source and the human anatomy, in which the original sound is modified by the external ear before it enters the ear canal for processing by the auditory system. The modifications encode the source location relative to the ear location and are known as head-related transfer functions (HRTFs).
In other words, HRTFs describe the filtering of a sound source before it is perceived at the left and right ear drums, and characterize how a particular ear receives sound from a particular point in space. These modifications arise from the shape of the listener's ear, the shape of the listener's head and body, the acoustical characteristics of the space in which the sound is played, and so forth. All of these characteristics together influence how accurately a listener can tell what direction a sound is coming from. Thus, a pair of HRTFs accounting for all these characteristics, one for each ear, can be used to synthesize a binaural sound that is accurately perceived as originating from a particular point in space.
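By way of illustration, using a formulation that is standard in the art (and not specific to any one embodiment herein), if s(t) denotes the source signal, and hL(t) and hR(t) denote the head-related impulse responses, i.e., the time-domain counterparts of the left and right HRTFs, then the signals arriving at the two ear drums may be written as convolutions:

    xL(t) = (hL * s)(t)
    xR(t) = (hR * s)(t)

Reproducing xL and xR at the respective ears, for example over headphones, reproduces the binaural and monaural cues that a real source at the corresponding point in space would have produced.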
HRTFs have wide-ranging applications, from virtual surround sound in media and gaming, to hearing protection in loud noise environments, and hearing assistance for the hearing impaired. Particularly in the fields of hearing protection and hearing assistance, the ability to record and reconstruct a particular user's HRTF presents several challenges, as it must occur in real time. In the case of an application for hearing protection in high noise environments, heavy hearing protection hardware must be worn over the ears in the form of bulky headphones. Thus, if microphones are placed on the outside of the headphones, the user will hear the outside world but will not receive accurate positional data, because the user's HRTF is not being reconstructed. Similarly, in the case of hearing assistance for the hearing impaired, the microphone is mounted external to the hearing aid, and any hearing aid device that fully blocks a user's ear canal will not accurately reproduce that user's HRTF.
Thus, there is a need for an apparatus and system for reconstructing a user's HRTF in accordance with the user's physical characteristics, in order to accurately relay positional sound information to the user in real time.
The present invention meets the existing needs described above by providing an apparatus, system, and method for generating a head related audio transfer function. The present invention also provides the ability to enhance audio in real time, tailoring the enhancement to the physical characteristics of a user and the acoustic characteristics of the external environment.
Accordingly, in initially broad terms, an apparatus directed to the present invention, also known as an HRTF generator, comprises an external manifold and internal manifold. The external manifold is exposed at least partially to an external environment, while the internal manifold is disposed substantially within an interior of the apparatus and/or a larger device or system housing said apparatus.
The external manifold comprises an antihelix structure, a tragus structure, and an opening. The opening is in direct air flow communication with the outside environment, and is structured to receive acoustic waves. The tragus structure is disposed to partially enclose the opening, such that the tragus structure will partially impede and/or affect the characteristics of the incoming acoustic waves going into the opening. The antihelix structure is disposed to further partially enclose the tragus structure as well as the opening, such that the antihelix structure will partially impede and/or affect the characteristics of the incoming acoustic waves flowing onto the tragus structure and into the opening. The antihelix and tragus structures may comprise semi-domes or any variation of partial-domes comprising a closed side and an open side. In a preferred embodiment, the open side of the antihelix structure and the open side of the tragus structure are disposed in confronting relation to one another.
The opening of the external manifold is connected to and in air flow communication with an opening canal inside the external manifold. The opening canal may be disposed in a substantially perpendicular orientation relative to the desired listening direction of the user. The opening canal is in further air flow communication with an auditory canal, which is formed within the internal manifold but may also be formed partially within the external manifold.
The internal manifold comprises the auditory canal and a microphone housing. The microphone housing is attached or connected to the end of the auditory canal opposite its connection with the opening canal. The auditory canal, or at least a portion thereof, may be disposed in a substantially parallel orientation relative to the desired listening direction of the user. The microphone housing may further comprise a microphone mounted against the end of the auditory canal. The microphone housing may further comprise an air cavity behind the microphone, on an end opposite its connection to the auditory canal, which may be sealed with a cap.
In at least one embodiment, the apparatus or HRTF generator may form a part of a larger system. Accordingly, the system may comprise a left HRTF generator, a right HRTF generator, a left preamplifier, a right preamplifier, an audio processor, a left playback module, and a right playback module.
As such, the left HRTF generator may be structured to pick up and filter sounds to the left of a user. Similarly, the right HRTF generator may be structured to pick up and filter sounds to the right of the user. A left preamplifier may be structured and configured to increase the gain of the filtered sound of the left HRTF generator. A right preamplifier may be structured and configured to increase the gain of the filtered sound of the right HRTF generator. The audio processor may be structured and configured to process and enhance the audio signal received from the left and right preamplifiers, and then transmit the respective processed signals to each of the left and right playback modules. The left and right playback modules or transducers are structured and configured to convert the electrical signals into sound for the user, such that the user can then perceive the filtered and enhanced sound from the user's environment, which includes audio data that allows the user to localize the source of the originating sound.
In at least one embodiment, the system of the present invention may comprise a wearable device such as a headset or headphones having the HRTF generator embedded therein. The wearable device may further comprise the preamplifiers, audio processor, and playback modules, as well as other appropriate circuitry and components.
In a further embodiment, a method for generating a head related audio transfer function may be used in accordance with the present invention. As such, external sound is first filtered through an exterior of an HRTF generator which may comprise a tragus structure and an antihelix structure. The filtered sound is then passed to the interior of the HRTF generator, such as through the opening canal and auditory canal described above to create an input sound. The input sound is received at a microphone embedded within the HRTF generator adjacent to and connected to the auditory canal in order to create an input signal. The input signal is amplified with a preamplifier in order to create an amplified signal. The amplified signal is then processed with an audio processor, in order to create a processed signal. Finally, the processed signal is transmitted to the playback module in order to relay audio and/or locational audio data to a user.
In certain embodiments, the audio processor may receive the amplified signal and first filter the amplified signal with a high pass filter. The high pass filter, in at least one embodiment, is configured to remove ultra-low frequency content from the amplified signal, resulting in the generation of a high pass signal.
The high pass signal from the high pass filter is then filtered through a first filter module to create a first filtered signal. The first filter module is configured to selectively boost and/or attenuate the gain of select frequency ranges in an audio signal, such as the high pass signal. In at least one embodiment, the first filter module boosts frequencies above a first frequency, and attenuates frequencies below the first frequency.
The first filtered signal from the first filter module is then modulated with a first compressor to create a modulated signal. The first compressor is configured for the dynamic range compression of a signal, such as the first filtered signal. Because the first filter module boosted higher frequencies and attenuated lower frequencies, the first compressor may, in at least one embodiment, be configured to trigger on and adjust the higher frequency material, while remaining relatively insensitive to lower frequency material.
The modulated signal from the first compressor is then filtered through a second filter module to create a second filtered signal. The second filter module is configured to selectively boost and/or attenuate the gain of select frequency ranges in an audio signal, such as the modulated signal. In at least one embodiment, the second filter module is configured to be of at least partially inverse relation relative to the first filter module. For example, if the first filter module boosted content above a first frequency by +X dB and attenuated content below the first frequency by −Y dB, the second filter module may then attenuate the content above the first frequency by −X dB, and boost the content below the first frequency by +Y dB. In other words, the purpose of the second filter module in one embodiment may be to “undo” the gain adjustment that was applied by the first filter module.
The second filtered signal from the second filter module is then processed with a first processing module to create a processed signal. In at least one embodiment, the first processing module may comprise a peak/dip module. In other embodiments, the first processing module may comprise both a peak/dip module and a first gain element. The first gain element may be configured to adjust the gain of the signal, such as the second filtered signal. The peak/dip module may be configured to shape the signal, such as to increase or decrease overshoots or undershoots in the signal.
The processed signal from the first processing module is then split with a band splitter into a low band signal, a mid band signal, and a high band signal. In at least one embodiment, each band may comprise the output of a fourth-order section, which may be realized as the cascade of second-order biquad filters.
The low band signal is modulated with a low band compressor to create a modulated low band signal, and the high band signal is modulated with a high band compressor to create a modulated high band signal. The low band compressor and high band compressor are each configured to dynamically adjust the gain of a signal. Each of the low band compressor and high band compressor may be computationally identical to and/or configured identically to the first compressor.
The modulated low band signal, the mid band signal, and the modulated high band signal are then processed with a second processing module. The second processing module may comprise a summing module configured to combine the signals. The summing module in at least one embodiment may individually alter the gain of each of the modulated low band, mid band, and modulated high band signals. The second processing module may further comprise a second gain element. The second gain element may adjust the gain of the combined signal in order to create a processed signal that is transmitted to the playback module.
In additional embodiments, different signal filter and processing systems may be used to additionally provide head tracking and audio panning within virtual audio spaces. Accordingly, processors may also be used to adjust the level of each HRTF input channel pair according to a predefined table of angles and corresponding decibel outputs. In further embodiments, the system comprises a signal filter bank, preferably a finite impulse response (“FIR”) filter bank, a signal processor, preferably an upmixer, and a panning function or algorithm configured to detect and subsequently modify angles corresponding to the motion of a user's head, and is further configured to “pan” audio sources in response thereto. Further, the present invention includes methodology for calibration through HRTF coefficient selections, gain tables, and subjective listening tests to provide maximum flexibility for user experience.
By way of analogy, the present invention operates on the principle of a virtual sphere of speakers rotationally affixed to a user's head. The effect of the virtual sphere is accomplished by the FIR filter bank, and may be effectuated even if the output signal is only directed to left and right speakers or headphones. Each speaker within the virtual sphere is identified by a coordinate system and the volume of each speaker is controlled by an upmixer. If the user rotates her head, the sound coming from each speaker must be translated to maintain the directionality of the sound. In effect, virtual speakers aligned with the original angle of a particular sound are not attenuated (or are attenuated the least), while the remaining speakers within the virtual sphere are attenuated by predetermined amounts.
According to one embodiment, the system may include a one-to-many upmixer for each channel of input signal, which is used to determine the level of output signal sent to each one of the virtual speakers. Each input signal includes information corresponding to an original angle, which determines the initial directionality (without modification by panning) on a virtual sphere of speakers surrounding the user. When a user moves her head, a panning function of the present invention determines an appropriate adjustment of the directionality on the virtual sphere of speakers.
In a preferred embodiment, the output of the one-to-many upmixer is fed to a plurality of FIR filter pairs within the FIR filter bank. The FIR filter pairs are arranged into two virtual speaker hemispheres to form complete spherical coverage. Each FIR filter pair includes a left and right channel input, but the outputs of the FIR filter pairs are configured in a mid-side orientation, and are further configured to create the virtual speaker sphere. A signal may be processed by the upmixer, which is used for each channel of input to determine the level of signal sent to each filter. Each input contains information on an “origin angle” which determines its original point on the virtual speaker sphere. The final decibel output sent to each FIR filter pair is determined for each origin angle contained in the input. Accordingly, the system also includes an array of predetermined relationships between the angle of the input and decibel outputs relative to the original signal level. The system may then interpolate or select an output to send through the FIR filter pair, allowing a user to determine the directionality of sound through the differences in level provided by each speaker.
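By way of a simplified sketch only, in which the function name, array shapes, and per-speaker gains are hypothetical, and in which the summation is shown in a plain left/right orientation rather than the mid-side orientation described above, the combination of the one-to-many upmixer and the FIR filter bank may be understood as follows (Python is used purely for illustration):

    import numpy as np

    def upmix_and_filter(x, gains, firs_left, firs_right):
        # x: samples of one input channel; gains: one upmixer level per
        # virtual speaker, derived from the origin angle and gain table;
        # firs_left, firs_right: 2-D arrays holding one FIR per speaker.
        n_out = len(x) + firs_left.shape[1] - 1
        out_left = np.zeros(n_out)
        out_right = np.zeros(n_out)
        for i, g in enumerate(gains):
            out_left += g * np.convolve(x, firs_left[i])
            out_right += g * np.convolve(x, firs_right[i])
        return out_left, out_right

In this sketch, a virtual speaker whose gain is zero contributes nothing to the output, while the speaker aligned with the origin angle contributes at full level, which is the mixing behavior described above.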
However, it is envisioned that the users may be moving or in different positions while in the virtual speaker space. Accordingly, the system also includes a panning function configured to detect the motion of a user's head and correspondingly modify the origin angle before selecting an output to send through the FIR filter pairs, enabling the translation of origin angles of each signal input to new angles based on panning inputs.
By way of non-limiting example, the systems and methodologies of the present embodiment may find use in connection with virtual environments, such as those experienced with a headset unit and earphones. The present embodiment may be utilized to “pan” the directionality of audio sources within the virtual environment in response to input changes from the user and/or the user's head.
The method described herein may be configured to capture and transmit locational audio data to a user in real time, such that it can be utilized as a hearing aid, or in loud noise environments to filter out loud noises. The present invention may also be utilized to transmit directional audio sources from outside a virtual environment, such that a user may be apprised of sounds and their direction outside of the user's virtual environment.
These and other objects, features and advantages of the present invention will become clearer when the drawings as well as the detailed description are taken into consideration.
For a fuller understanding of the nature of the present invention, reference should be had to the following detailed description taken in connection with the accompanying drawings in which:
Like reference numerals refer to like parts throughout the several views of the drawings.
As illustrated by the accompanying drawings, the present invention is directed to an apparatus, system, and method for generating a head related audio transfer function for a user. Specifically, some embodiments relate to capturing surrounding sound in the external environment in real time, filtering that sound through unique structures formed on the apparatus in order to generate audio positional data, and then processing that sound to enhance and relay the positional audio data to a user, such that the user can determine the origination of the sound in three dimensional space.
As schematically represented,
The external manifold 110 may comprise a hexahedron shape having six faces. In at least one embodiment, the external manifold 110 is substantially cuboid. The external manifold 110 may comprise at least one surface that is concave or convex, such as an exterior surface exposed to the external environment. The internal manifold 120 may comprise a substantially cylindrical shape, which may be at least partially hollow. The external manifold 110 and internal manifold 120 may comprise sound dampening or sound proof materials, such as various foams, plastics, and glass known to those skilled in the art.
Drawing attention to
In at least one embodiment, the antihelix structure 101 comprises a semi-dome structure having a closed side 105 and an open side 106. In a preferred embodiment, the open side 106 faces the preferred listening direction 104, and the closed side 105 faces away from the preferred listening direction 104. The tragus structure 102 may also comprise a semi-dome structure having a closed side 107 and an open side 108. In a preferred embodiment, the open side 108 faces away from the preferred listening direction 104, while the closed side 107 faces towards the preferred listening direction 104. In other embodiments, the open side 106 of the antihelix structure 101 may be in direct confronting relation to the open side 108 of the tragus structure 102, regardless of the preferred listening direction 104.
Semi-dome as defined for the purposes of this document may comprise a half-dome structure or any combination of partial-dome structures. For instance, the antihelix structure 101 of
In at least one embodiment, the antihelix structure 101 and tragus structure 102 may be modular, such that different sizes or shapes (variations of different semi-domes or partial-domes) may be swapped out based on a user's preference for particular acoustic characteristics.
Drawing attention now to
As previously discussed, the internal manifold 120 is formed wholly or substantially within an interior of the apparatus, such that it is not exposed directly to the outside air and will not be substantially affected by the external environment. In at least one embodiment, the auditory canal 121, formed within at least a portion of the internal manifold 120, will be disposed in a substantially parallel orientation relative to the desired listening direction 104 of the user. In a preferred embodiment, the auditory canal comprises a length that is greater than two times its diameter.
A microphone housing 122 is attached to an end of the auditory canal 121. Within the microphone housing 122, a microphone, generally at 123 (not shown), is mounted against the end of the auditory canal 121. In at least one embodiment, the microphone 123 is mounted flush against the auditory canal 121, such that the connection may be substantially air tight to avoid interference sounds. In a preferred embodiment, an air cavity, generally at 124, is created behind the microphone and at the end of the internal manifold 120. This may be accomplished by inserting the microphone 123 into the microphone housing 122, and then sealing the end of the microphone housing, generally at 124, with a cap. The cap may be substantially air tight in at least one embodiment. Different gasses having different acoustic characteristics may be used within the air cavity.
In at least one embodiment, apparatus 100 may form a part of a larger system 300 as illustrated in
The left and right HRTF generators 100 and 100′ may comprise the apparatus 100 described above, each having unique structures such as the antihelix structure 101 and tragus structure 102. Accordingly, the HRTF generators 100/100′ may be structured to generate a head related audio transfer function for a user, such that the sound received by the HRTF generators 100/100′ may be relayed to the user to accurately communicate position data of the sound. In other words, the HRTF generators 100/100′ may replicate and replace the function of the user's own left and right ears, in that the HRTF generators collect sound and apply respective spectral transformations or filtering processes to the incoming sounds to enable the process of vertical localization to take place.
A left preamplifier 210 and right preamplifier 210′ may then be used to enhance the filtered sound coming from the HRTF generators, in order to enhance certain acoustic characteristics to improve locational accuracy, or to filter out unwanted noise. The preamplifiers 210/210′ may comprise an electronic amplifier, such as a voltage amplifier, current amplifier, transconductance amplifier, transresistance amplifier, and/or any combination of circuits known to those skilled in the art for increasing or decreasing the gain of a sound or input signal. In at least one embodiment, the preamplifier comprises a microphone preamplifier configured to prepare a microphone signal to be processed by other processing modules. As is known in the art, microphone signals are sometimes too weak to be transmitted to other units, such as recording or playback devices, with adequate quality. A microphone preamplifier thus increases a microphone signal to line level by providing stable gain while preventing induced noise that might otherwise distort the signal.
Audio processor 220 may comprise a digital signal processor and amplifier, and may further comprise a volume control. Audio processor 220 may comprise a processor and combination of circuits structured to further enhance the audio quality of the signal coming from the microphone preamplifier, such as, but not limited to, shelf filters, equalizers, and modulators. For example, in at least one embodiment the audio processor 220 may comprise a processor that performs the steps for processing a signal as taught by the present inventor's U.S. Pat. No. 8,160,274, the entire disclosure of which is incorporated herein by reference. Audio processor 220 may incorporate various acoustic profiles customized for a user and/or for an environment, such as those described in the present inventor's U.S. Pat. No. 8,565,449, the entire disclosure of which is incorporated herein by reference. Audio processor 220 may additionally incorporate processing suitable for high noise environments, such as those described in the present inventor's U.S. Pat. No. 8,462,963, the entire disclosure of which is incorporated herein by reference. Parameters of the audio processor 220 may be controlled and modified by a user via any means known to one skilled in the art, such as by a direct interface or a wireless communication interface.
The left playback module 230 and right playback module 230′ may comprise headphones, earphones, speakers, or any other transducer known to one skilled in the art. The purpose of the left and right playback modules 230/230′ is to convert the electrical audio signal from the audio processor 220 back into perceptible sound for the user. As such, a moving-coil transducer, electrostatic transducer, electret transducer, or other transducer technologies known to one skilled in the art may be utilized.
In at least one embodiment, the present system 200 comprises a device 200 as generally illustrated at
In a further embodiment as illustrated in
In a preferred embodiment of the present invention, the method of
In at least one embodiment, the method of
With regard to
The input device 1010 is at least partially structured or configured to transmit an input audio signal 2010, such as an amplified signal from a left or right preamplifier 210, 210′, into the system 1000 of the present invention, and in at least one embodiment into the high pass filter 1110.
The high pass filter 1110 is configured to pass through high frequencies of an audio signal, such as the input signal 2010, while attenuating lower frequencies, based on a predetermined frequency. In other words, the frequencies above the predetermined frequency may be transmitted to the first filter module 3010 in accordance with the present invention. In at least one embodiment, ultra-low frequency content is removed from the input audio signal, where the predetermined frequency may be selected from a range between 300 Hz and 3 kHz. The predetermined frequency, however, may vary depending on the source signal, and in other embodiments may comprise any frequency selected from the full audible range of 20 Hz to 20 kHz. The predetermined frequency may be tunable by a user, or alternatively be statically set. The high pass filter 1110 may further comprise any circuits or combinations thereof structured to pass through high frequencies above the predetermined frequency, and attenuate or filter out the lower frequencies.
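Purely as an illustrative sketch, using SciPy's standard filter design routines, and with the sample rate and cutoff as assumptions (the cutoff being chosen from the 300 Hz to 3 kHz range noted above), such a high pass filter might be realized as:

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 48000                              # sample rate (assumed)
    predetermined_freq = 300.0              # assumed, user-tunable cutoff
    # fourth-order Butterworth high pass as cascaded second-order sections
    sos = butter(4, predetermined_freq, btype='highpass', fs=fs, output='sos')
    amplified_signal = np.random.randn(fs)  # stand-in for input signal 2010
    high_pass_signal = sosfilt(sos, amplified_signal)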
The first filter module 3010 is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the high pass signal 2110. For example, and in at least one embodiment, frequencies below a first frequency may be adjusted by ±X dB, while frequencies above the first frequency may be adjusted by ±Y dB. In other embodiments, a plurality of frequencies may be used to selectively adjust the gain of various frequency ranges within an audio signal. In at least one embodiment, the first filter module 3010 may be implemented with a first low shelf filter 1120 and a first high shelf filter 1130, as illustrated in
The first compressor 1140 is configured to modulate a signal, such as the first filtered signal 4010. The first compressor 1140 may comprise an automatic gain controller. The first compressor 1140 may comprise standard dynamic range compression controls such as threshold, ratio, attack and release. Threshold allows the first compressor 1140 to reduce the level of the first filtered signal 4010 if its amplitude exceeds a certain threshold. Ratio allows the first compressor 1140 to reduce the gain as determined by a ratio. Attack and release determine how quickly the first compressor 1140 acts. The attack phase is the period when the first compressor 1140 is decreasing gain to reach the level that is determined by the threshold. The release phase is the period when the first compressor 1140 is increasing gain to the level determined by the ratio. The first compressor 1140 may also feature soft and hard knees to control the bend in the response curve of the output or modulated signal 2140, and other dynamic range compression controls appropriate for the dynamic compression of an audio signal. The first compressor 1140 may further comprise any device or combination of circuits that is structured and configured for dynamic range compression.
The second filter module 3020 is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the modulated signal 2140. In at least one embodiment, the second filter module 3020 is of the same configuration as the first filter module 3010. Specifically, the second filter module 3020 may comprise a second low shelf filter 1150 and a second high shelf filter 1160. In certain embodiments, the second low shelf filter 1150 may be configured to filter signals between 100 Hz and 3000 Hz, with a boost of between +5 dB and +20 dB. In certain embodiments, the second high shelf filter 1160 may be configured to filter signals between 100 Hz and 3000 Hz, with an attenuation of between −5 dB and −20 dB.
The second filter module 3020 may be configured in at least a partially inverse configuration to the first filter module 3010. For instance, the second filter module may use the same frequency, for instance the first frequency, as the first filter module. Further, the second filter module may adjust the gain of content above the first frequency inversely to the gain or attenuation applied by the first filter module. Similarly, the second filter module may adjust the gain of content below the first frequency inversely to the gain or attenuation applied by the first filter module. In other words, the purpose of the second filter module in one embodiment may be to “undo” the gain adjustment that was applied by the first filter module.
The first processing module 3030 is configured to process a signal, such as the second filtered signal 4020. In at least one embodiment, the first processing module 3030 may comprise a peak/dip module, such as 1180 represented in
The band splitter 1190 is configured to split a signal, such as the processed signal 4030. In at least one embodiment, the signal is split into a low band signal 2200, a mid band signal 2210, and a high band signal 2220. Each band may be the output of a fourth-order section, which may be further realized as the cascade of second-order biquad filters. In other embodiments, the band splitter may comprise any combination of circuits appropriate for splitting a signal into three frequency bands. The low, mid, and high bands may be predetermined ranges, or may be dynamically determined based on the frequency content itself, e.g., a signal may be split into three even frequency bands, or by percentage. The different bands may further be defined or configured by a user and/or control mechanism.
A low band compressor 1300 is configured to modulate the low band signal 2200, and a high band compressor 1310 is configured to modulate the high band signal 2220. In at least one embodiment, each of the low band compressor 1300 and high band compressor 1310 may be the same as the first compressor 1140. Accordingly, each of the low band compressor 1300 and high band compressor 1310 may each be configured to modulate a signal. Each of the compressors 1300, 1310 may comprise an automatic gain controller, or any combination of circuits appropriate for the dynamic range compression of an audio signal.
A second processing module 3040 is configured to process at least one signal, such as the modulated low band signal 2300, the mid band signal 2210, and the modulated high band signal 2310. Accordingly, the second processing module 3040 may comprise a summing module 1320 configured to combine a plurality of signals. The summing module 1320 may comprise a mixer structured to combine two or more signals into a composite signal. The summing module 1320 may comprise any circuits or combination thereof structured or configured to combine two or more signals. In at least one embodiment, the summing module 1320 comprises individual gain controls for each of the incoming signals, such as the modulated low band signal 2300, the mid band signal 2210, and the modulated high band signal 2310. In at least one embodiment, the second processing module 3040 may further comprise a second gain element 1330. The second gain element 1330, in at least one embodiment, may be the same as the first gain element 1170. The second gain element 1330 may thus comprise an amplifier or multiplier circuit to adjust the signal, such as the combined signal, by a predetermined amount.
The output device 1020 may comprise the left playback module 230 and/or right playback module 230′.
As diagrammatically represented,
Accordingly, an input audio signal, such as the amplified signal, is first filtered, as in 5010, with a high pass filter to create a high pass signal. The high pass filter is configured to pass through high frequencies of a signal, such as the input signal, while attenuating lower frequencies. In at least one embodiment, ultra-low frequency content is removed by the high pass filter. In at least one embodiment, the high pass filter may comprise a fourth-order filter realized as the cascade of two second-order biquad sections. The reason for using a fourth-order filter broken into two second-order sections is that it allows the filter to retain numerical precision in the presence of finite word length effects, which can happen in both fixed and floating point implementations. An example implementation of such an embodiment may assume a form similar to the following:
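(The original listing is not reproduced here. The following Python sketch shows a conventional transposed direct form II realization of one second-order biquad section, consistent with the operation counts discussed below; the coefficient names b0, b1, b2, a1, and a2 are generic.)

    def biquad(x, b0, b1, b2, a1, a2):
        # Transposed direct form II second-order (biquad) section:
        # five multiplies and four adds per sample, with two state
        # memories (z1, z2) per section.
        y = []
        z1 = z2 = 0.0
        for xn in x:
            yn = b0 * xn + z1
            z1 = b1 * xn - a1 * yn + z2
            z2 = b2 * xn - a2 * yn
            y.append(yn)
        return y

A fourth-order high pass filter is then the cascade of two such sections, e.g. biquad(biquad(x, ...), ...).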
The above computation, comprising five multiplies and four adds, is appropriate for a single channel of a second-order biquad section. Accordingly, because the fourth-order high pass filter is realized as a cascade of two second-order biquad sections, a single channel of the fourth-order input high pass filter would require ten multiplies, four memory locations, and eight adds.
The high pass signal from the high pass filter is then filtered, as in 5020, with a first filter module to create a first filtered signal. The first filter module is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the high pass signal. Accordingly, the first filter module may comprise a second-order low shelf filter and a second-order high shelf filter in at least one embodiment. In at least one embodiment, the first filter module boosts the content above a first frequency by a certain amount, and attenuates the content below the first frequency by a certain amount, before presenting the signal to a compressor or dynamic range controller. This allows the dynamic range controller to trigger on and adjust higher frequency material, while remaining relatively insensitive to lower frequency material.
The first filtered signal from the first filter module is then modulated, as in 5030, with a first compressor. The first compressor may comprise an automatic or dynamic gain controller, or any circuits appropriate for the dynamic compression of an audio signal. Accordingly, the compressor may comprise standard dynamic range compression controls such as threshold, ratio, attack and release. An example implementation of the first compressor may assume a form similar to the following:
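(The original listing is likewise not reproduced here. The following Python sketch is an assumed reconstruction of a basic hard-knee, feed-forward compressor exhibiting the controls named above, namely threshold, ratio, attack, and release; all parameter values are hypothetical, and soft-knee behavior is omitted for brevity.)

    import numpy as np

    def compress(x, fs, threshold_db=-20.0, ratio=4.0,
                 attack_ms=5.0, release_ms=50.0):
        # Assumed feed-forward compressor: a smoothed envelope follows
        # the input level (fast during attack, slower during release),
        # and level above the threshold is reduced according to the ratio.
        atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
        env = 0.0
        y = np.zeros(len(x))
        for n, xn in enumerate(x):
            level = abs(xn)
            coeff = atk if level > env else rel
            env = coeff * env + (1.0 - coeff) * level
            env_db = 20.0 * np.log10(max(env, 1e-9))
            over_db = env_db - threshold_db
            gain_db = -over_db * (1.0 - 1.0 / ratio) if over_db > 0.0 else 0.0
            y[n] = xn * 10.0 ** (gain_db / 20.0)
        return y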
The modulated signal from the first compressor is then filtered, as in 5040, with a second filter module to create a second filtered signal. The second filter module is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the modulated signal. Accordingly, the second filter module may comprise a second order low shelf filter and a second order high shelf filter in at least one embodiment. In at least one embodiment, the second filter module boosts the content above a second frequency by a certain amount, and attenuates the content below a second frequency by a certain amount. In at least one embodiment, the second filter module adjusts the content below the first specified frequency by a fixed amount, inverse to the amount that was removed by the first filter module. By way of example, if the first filter module boosted content above a first frequency by +X dB and attenuated content below a first frequency by −Y dB, the second filter module may then attenuate the content above the first frequency by −X dB, and boost the content below the first frequency by +Y dB. In other words, the purpose of the second filter module in one embodiment may be to “undo” the filtering that was applied by the first filter module.
The second filtered signal from the second filter module is then processed, as in 5050, with a first processing module to create a processed signal. The processing module may comprise a gain element configured to adjust the level of the signal. This adjustment, for instance, may be necessary because the peak-to-average ratio was modified by the first compressor. The processing module may comprise a peak/dip module. The peak/dip module may comprise ten cascaded second-order filters in at least one embodiment. The peak/dip module may be used to shape the desired output spectrum of the signal. In at least one embodiment, the first processing module comprises only the peak/dip module. In other embodiments, the first processing module comprises a gain element followed by a peak/dip module.
The processed signal from the first processing module is then split, as in 5060, with a band splitter into a low band signal, a mid band signal, and a high band signal. The band splitter may comprise any circuit or combination of circuits appropriate for splitting a signal into a plurality of signals of different frequency ranges. In at least one embodiment, the band splitter comprises a fourth-order band-splitting bank. In this embodiment, each of the low band, mid band, and high band is yielded as the output of a fourth-order section, realized as the cascade of second-order biquad filters.
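As an assumed illustration using standard Butterworth designs (the sample rate and crossover frequencies below are hypothetical), the three bands might be obtained as follows; note that for the bandpass branch, butter() doubles the requested order, so N=2 yields the fourth-order section described above:

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 48000                      # sample rate (assumed)
    f1, f2 = 200.0, 2000.0          # hypothetical crossover frequencies
    x = np.random.randn(fs)         # stand-in for the processed signal

    # each branch is a fourth-order section expressed as cascaded
    # second-order (biquad) stages via the 'sos' output format
    sos_low = butter(4, f1, btype='lowpass', fs=fs, output='sos')
    sos_mid = butter(2, [f1, f2], btype='bandpass', fs=fs, output='sos')
    sos_high = butter(4, f2, btype='highpass', fs=fs, output='sos')

    low_band = sosfilt(sos_low, x)
    mid_band = sosfilt(sos_mid, x)
    high_band = sosfilt(sos_high, x)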
The low band signal is modulated, as in 5070, with a low band compressor to create a modulated low band signal. The low band compressor may be configured and/or computationally identical to the first compressor in at least one embodiment. The high band signal is modulated, as in 5080, with a high band compressor to create a modulated high band signal. The high band compressor may be configured and/or computationally identical to the first compressor in at least one embodiment.
The modulated low band signal, mid band signal, and modulated high band signal are then processed, as in 5090, with a second processing module. The second processing module comprises at least a summing module. The summing module is configured to combine a plurality of signals into one composite signal. In at least one embodiment, the summing module may further comprise individual gain controls for each of the incoming signals, such as the modulated low band signal, the mid band signal, and the modulated high band signal. By way of example, an output of the summing module may be calculated by:
out=w0*low+w1*mid+w2*high
The coefficients w0, w1, and w2 represent different gain adjustments. The second processing module may further comprise a second gain element. The second gain element may be the same as the first gain element in at least one embodiment. The second gain element may provide a final gain adjustment. Finally, the second processed signal is transmitted as the output signal.
As diagrammatically represented,
Accordingly, an input audio signal is first filtered, as in 5010, with a high pass filter. The high pass signal from the high pass filter is then filtered, as in 6010, with a first low shelf filter. The signal from the first low shelf filter is then filtered with a first high shelf filter, as in 6020. The first filtered signal from the first high shelf filter is then modulated with a first compressor, as in 5030. The modulated signal from the first compressor is filtered with a second low shelf filter, as in 6110. The signal from the second low shelf filter is then filtered with a second high shelf filter, as in 6120. The second filtered signal from the second high shelf filter is then gain-adjusted with a first gain element, as in 6210. The signal from the first gain element is further processed with a peak/dip module, as in 6220. The processed signal from the peak/dip module is then split into a low band signal, a mid band signal, and a high band signal, as in 5060. The low band signal is modulated with a low band compressor, as in 5070. The high band signal is modulated with a high band compressor, as in 5080. The modulated low band signal, mid band signal, and modulated high band signal are then combined with a summing module, as in 6310. The combined signal is then gain-adjusted with a second gain element in order to create the output signal, as in 6320.
With reference to
It is envisioned that users may be in motion or in different positions while the system or method determines an origin angle 901. For instance, if a user hears a sound within a virtual environment with a directionality indicating the source of the sound is or should be behind the user, and the user turns right while the sound continues to play, the outputs must be adjusted accordingly. As such, in at least one embodiment, the system or method of generating an HRTF may be additionally configured to incorporate a panning function 902, wherein the system or method 900 may account for motion of a user's head about all axes X, Y, and Z. The panning function 902 is configured to translate the origin angles 901 of each input into new angles based on a user's panning input. The panning input may be provided by a head tracking system or by panning controls using principal axes. By way of non-limiting example, the X-axis may refer to the transverse axis "pitch," or any vertical rotation of a user's head typically exemplified by a nodding motion. The Y-axis may refer to the vertical axis "yaw," or any side-to-side rotation of a user's head typically exemplified by shaking one's head to say no. The Z-axis may refer to the longitudinal axis "roll," or any head-rolling motion exemplified by pointing one ear downward while pointing the opposite ear upward. Accordingly, the system or method will also include, but is not limited to, a gyroscope, an accelerometer, and/or a magnetometer, as well as any software or program to interpret the data produced therefrom.
Accordingly, in at least one additional embodiment, any panning in Y 9021, panning in X 9022, or panning in Z 9023 will correspondingly modify the calculation of the output by changing the origin angle 901 to reflect such panning. By way of non-limiting example, various panning logic rules within the panning function 902 may be implemented to automatically account for any change of axes such that the origin angle 901 must be modified. An example of the base panning logic may begin with calculation of the Y-axis angle, assuming a form similar to (Y-axis origin − Y-axis panning). When the Y-axis angle is at its starting point, defined as 0 degrees, X-axis panning and Z-axis panning are calculated as normal, without either the X or Z axes modifying each other. When the Y-axis angle pans to 90 degrees, defined as turning left, the X-axis panning is modified to 0%, and the Z-axis panning modifies X-panning at 100%. When the Y-axis angle pans to 180 degrees, which faces opposite the aforementioned 0-degree starting point, X-axis panning becomes its opposite, at −100% in relation to the starting point. By way of demonstrative example, when at an initial starting point of Y-axis angle 0 degrees, a 10 degree change in the X-axis is equivalent to a −10 degree change in the X-axis when the Y-axis angle is set at 180 degrees. Additionally, when the Y-axis angle pans to 270 degrees, X-axis panning is modified to 0% and Z-axis panning modifies X-panning at −100%. In this specific ruleset, the X-axis need only be concerned with angles from 0 to 90 degrees and from 270 to 360 degrees, since the remaining angles from 90 to 270 degrees are handled by changes in the Y-axis.
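Purely as an assumed sketch (the function name, the tuple layout, and the use of cosine and sine weights as a smooth interpolation of the 0/90/180/270-degree rules above are all hypothetical), the base panning logic might be expressed as:

    import math

    def pan_origin(origin, pan):
        # origin and pan are (x, y, z) angle tuples in degrees.
        # The Y-axis (yaw) angle is calculated first, per the ruleset above.
        y_angle = (origin[1] - pan[1]) % 360.0
        rad = math.radians(y_angle)
        # cos(y) scales X-panning: 100% at 0 degrees, 0% at 90 and 270
        # degrees, -100% at 180 degrees; sin(y) lets Z-panning modify
        # X-panning: 100% at 90 degrees, -100% at 270 degrees.
        x_angle = (origin[0]
                   - (math.cos(rad) * pan[0] + math.sin(rad) * pan[2])) % 360.0
        z_angle = (origin[2] - pan[2]) % 360.0
        return (x_angle, y_angle, z_angle)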
By way of non-limiting example, and with reference to
FL 9101=(45, 0, 0)
FR 9102=(315, 0, 0)
RL 9104=(145, 0, 0)
RR 9103=(215, 0, 0)
Turning to
FL 9101=(40, 10, 0)
FR 9102=(310, 10, 0)
RL 9104=(140, 190, 0)
RR 9103=(210, 190, 0)
It is envisioned that any form of such panning logic may be used as the panning function 902, such as initially calculating the X-axis panning 9022 and using Y-axis panning 9021 to modify the Z-axis panning 9023. However, because rotation about the Y-axis is usually the most common movement of a user's head, the preferred embodiment will initially calculate the Y-axis angle and modify the X-axis and Z-axis accordingly. In yet another embodiment, pre-made or commercial software may be used as the panning function 902 for modifications to the origin angle 901. It is additionally envisioned that users will desire subjective calibration, flexibility, and management of the outputs. Accordingly, any aforementioned rules or logic may be changed or modified to reflect user preference.
In at least one embodiment, arrays 903 may be used to translate a sound input signal passed through an audio processor 220, specifically but not limited to an upmixer, into an origin angle 901, and subsequently into an output, specifically but not limited to a decibel value for each corresponding individual left 230 and right 230′ playback module or speaker. The array 903 may include, but is not limited to, a Y angle index corresponding to every X angle. Accordingly, the array 903 may contain every X/Y combination of angles within the desired points on the combination of two symmetric hemispheres, and may be modified accordingly to increase precision in relation to the number of output points of the system or method. Further, each X/Y combination may correspond with a decibel output. In at least one embodiment, the array 903 may be used as a reference for any number of input channels 907, where each channel has a unique origin angle 901. By way of non-limiting example, each X/Y combination corresponding to a decibel value may have a default minimum value of −80 dB with reference to the original signal. It is envisioned that this minimum value may be changed within an allowable range of −20 dB to −100 dB for personalized testing. Additionally, in at least one additional embodiment, the minimum dB value represents a mute level and is essential for the interpolation 904 calculation.
In another embodiment, the arrays 903 may be modified in any way, including but not limited to modification of the outputs 905 based on combinations of X/Y, or the addition or subtraction of X/Y combinations to yield a more precise table. Accordingly, in at least one additional embodiment, the values in the array may be empirically created and modified by careful subjective calibration based on the perceived location of the audio source. This approach serves to decouple the discrete speaker locations from the perceived result of mixing signals between pairs of filters.
If the origin angle of an input channel is known, the position relative to the origin can be interpolated by looking up the closest values in the array. It is thus additionally envisioned that the system or method of generating an HRTF may not produce input values in the exact quantities listed in an array. Accordingly, in an additional embodiment of the present invention, the system or method may use interpolation to either find the nearest possible values or calculate an empirically derived relationship. By way of non-limiting example, upon receiving an input that does not perfectly align with the quantities listed in the array, software in the system or method may select the closest two rows for X and the closest two rows for Y for use in linear interpolation to output a decibel value. Specifically, in at least one embodiment for Y and X-axis location and interpolation after panning, given a desired location for the sound to come from, determined by modifying the current angle of the head, the system or method may look up the closest entries in the array or lookup table to (1) find the Y angle index that is larger than the Y target with smaller X, (2) find the Y angle index that is smaller than the Y target with smaller X, (3) find the Y angle index that is larger than the Y target with larger X, and (4) find the Y angle index that is smaller than the Y target with larger X. Upon locating the aforementioned four rows, the system or method may then calculate a Y-ratio modifier and an X-ratio modifier, which may assume a form similar to the following:
mod_y = (smallYTableAngle − currentYAngle)/(largeYTableAngle − currentYAngle)

mod_x = (smallXTableAngle − currentXAngle)/(largeXTableAngle − currentXAngle)
whereupon the system or method may then loop through the four selected rows to calculate a new Y-to-small-X array and a Y-to-large-X array. Subsequently, using any pre-determined or empirical formula allows for interpolation of the final output level array. A gain table may be used to translate the final coordinate angles to the volume level of the correct speaker.
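As a hedged illustration of such a lookup (the table structure, the variable names, and the use of plain bilinear interpolation are assumptions of this sketch; the disclosure contemplates other empirically derived relationships), a decibel output for a panned angle pair might be interpolated as:

    def lookup_db(table, x_angle, y_angle):
        # table: hypothetical mapping {(x_index, y_index): dB output}
        # over the tabulated grid, with untabulated combinations held
        # at the -80 dB mute floor; the target angles are assumed to
        # lie within the grid.
        xs = sorted({x for x, _ in table})
        ys = sorted({y for _, y in table})
        x0 = max(v for v in xs if v <= x_angle)
        x1 = min(v for v in xs if v >= x_angle)
        y0 = max(v for v in ys if v <= y_angle)
        y1 = min(v for v in ys if v >= y_angle)
        tx = 0.0 if x1 == x0 else (x_angle - x0) / (x1 - x0)
        ty = 0.0 if y1 == y0 else (y_angle - y0) / (y1 - y0)
        # interpolate along Y within the smaller-X and larger-X rows,
        # then along X between the two results
        at_x0 = (1 - ty) * table[(x0, y0)] + ty * table[(x0, y1)]
        at_x1 = (1 - ty) * table[(x1, y0)] + ty * table[(x1, y1)]
        return (1 - tx) * at_x0 + tx * at_x1

For instance, with a 10-degree grid, lookup_db(table, 43.0, 12.5) would blend the four surrounding entries into a single output level.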
It should be understood that the above steps may be conducted exclusively or nonexclusively and in any order. Further, the physical devices recited in the methods may comprise any apparatus and/or systems described within this document or known to those skilled in the art.
Since many modifications, variations and changes in detail can be made to the described preferred embodiment of the invention, it is intended that all matters in the foregoing description and shown in the accompanying drawings be interpreted as illustrative and not in a limiting sense. Thus, the scope of the invention should be determined by the appended claims and their legal equivalents.
The present non-provisional patent application claims priority pursuant to 35 U.S.C. Section 119(e), and prior filed, provisional application, namely that having Ser. No. 62/713,793 filed on Aug. 2, 2018, the disclosure of which is incorporated herein by reference, in its entirety. In addition, the present non-provisional patent application also claims priority pursuant to 35 U.S.C. Section 119(e), and prior filed, provisional application, namely that having Ser. No. 62/721,914 filed on Aug. 23, 2018, the disclosure of which is incorporated herein by reference, in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
2643729 | McCracken | Jun 1953 | A |
3430007 | Thielen | Feb 1969 | A |
3795876 | Takashi et al. | Mar 1974 | A |
3813687 | Geil | May 1974 | A |
4162462 | Endoh et al. | Jul 1979 | A |
4184047 | Langford | Jan 1980 | A |
4218950 | Uetrecht | Aug 1980 | A |
4226533 | Snowman | Oct 1980 | A |
4257325 | Bertagni | Mar 1981 | A |
4353035 | Schröder | Oct 1982 | A |
4356558 | Owen et al. | Oct 1982 | A |
4363007 | Haramoto et al. | Dec 1982 | A |
4392027 | Bock | Jul 1983 | A |
4399474 | Coleman, Jr. | Aug 1983 | A |
4412100 | Orban | Oct 1983 | A |
4458362 | Berkovitz et al. | Jul 1984 | A |
4489280 | Bennett, Jr. et al. | Dec 1984 | A |
4517415 | Laurence | May 1985 | A |
4538297 | Waller | Aug 1985 | A |
4549289 | Schwartz et al. | Oct 1985 | A |
4584700 | Scholz | Apr 1986 | A |
4602381 | Cugnini et al. | Jul 1986 | A |
4612665 | Inami et al. | Sep 1986 | A |
4641361 | Rosback | Feb 1987 | A |
4677645 | Kaniwa et al. | Jun 1987 | A |
4696044 | Waller, Jr. | Sep 1987 | A |
4701953 | White | Oct 1987 | A |
4704726 | Gibson | Nov 1987 | A |
4715559 | Fuller | Dec 1987 | A |
4739514 | Short et al. | Apr 1988 | A |
4815142 | Imreh | Mar 1989 | A |
4856068 | Quatieri, Jr. et al. | Aug 1989 | A |
4887299 | Cummins et al. | Dec 1989 | A |
4997058 | Bertagni | Mar 1991 | A |
5007707 | Bertagni | Apr 1991 | A |
5073936 | Gurike et al. | Dec 1991 | A |
5133015 | Scholz | Jul 1992 | A |
5195141 | Jang | Mar 1993 | A |
5210704 | Husseiny | May 1993 | A |
5210806 | Kihara et al. | May 1993 | A |
5355417 | Burdisso et al. | Oct 1994 | A |
5361381 | Short | Nov 1994 | A |
5384856 | Kyouno et al. | Jan 1995 | A |
5420929 | Geddes et al. | May 1995 | A |
5425107 | Bertagni et al. | Jun 1995 | A |
5463695 | Werrbach | Oct 1995 | A |
5465421 | McCormick et al. | Nov 1995 | A |
5467775 | Callahan et al. | Nov 1995 | A |
5473214 | Hildebrand | Dec 1995 | A |
5511129 | Craven et al. | Apr 1996 | A |
5515444 | Burdisso et al. | May 1996 | A |
5539835 | Bertagni et al. | Jul 1996 | A |
5541866 | Sato et al. | Jul 1996 | A |
5572443 | Emoto et al. | Nov 1996 | A |
5615275 | Bertagni | Mar 1997 | A |
5617480 | Ballard et al. | Apr 1997 | A |
5638456 | Conley et al. | Jun 1997 | A |
5640685 | Komoda | Jun 1997 | A |
5671287 | Gerzon | Sep 1997 | A |
5693917 | Bertagni et al. | Dec 1997 | A |
5699438 | Smith et al. | Dec 1997 | A |
5727074 | Hildebrand | Mar 1998 | A |
5737432 | Werrbach | Apr 1998 | A |
5812684 | Mark | Sep 1998 | A |
5828768 | Eatwell et al. | Oct 1998 | A |
5832097 | Armstrong et al. | Nov 1998 | A |
5838805 | Warnaka et al. | Nov 1998 | A |
5848164 | Levine | Dec 1998 | A |
5861686 | Lee | Jan 1999 | A |
5862461 | Yoshizawa et al. | Jan 1999 | A |
5872852 | Dougherty | Feb 1999 | A |
5901231 | Parrella et al. | May 1999 | A |
5990955 | Koz | Nov 1999 | A |
6058196 | Heron | May 2000 | A |
6078670 | Beyer | Jun 2000 | A |
6093144 | Jaeger et al. | Jul 2000 | A |
6108431 | Bachler | Aug 2000 | A |
6195438 | Yumoto et al. | Feb 2001 | B1 |
6201873 | Dal Farra | Mar 2001 | B1 |
6202601 | Ouellette et al. | Mar 2001 | B1 |
6208237 | Saiki et al. | Mar 2001 | B1 |
6244376 | Granzotto | Jun 2001 | B1 |
6263354 | Gandhi | Jul 2001 | B1 |
6285767 | Klayman | Sep 2001 | B1 |
6292511 | Goldston et al. | Sep 2001 | B1 |
6317117 | Goff | Nov 2001 | B1 |
6318797 | Böhm et al. | Nov 2001 | B1 |
6332029 | Azima et al. | Dec 2001 | B1 |
6343127 | Billoud | Jan 2002 | B1 |
6518852 | Derrick | Feb 2003 | B1 |
6529611 | Kobayashi et al. | Mar 2003 | B2 |
6535846 | Shashoua | Mar 2003 | B1 |
6570993 | Fukuyama | May 2003 | B1 |
6587564 | Cusson | Jul 2003 | B1 |
6618487 | Azima et al. | Sep 2003 | B1 |
6661897 | Smith | Dec 2003 | B2 |
6661900 | Allred et al. | Dec 2003 | B1 |
6760451 | Craven et al. | Jul 2004 | B1 |
6772114 | Sluijter et al. | Aug 2004 | B1 |
6839438 | Riegelsberger et al. | Jan 2005 | B1 |
6847258 | Ishida et al. | Jan 2005 | B2 |
6871525 | Withnall et al. | Mar 2005 | B2 |
6907391 | Bellora et al. | Jun 2005 | B2 |
6999826 | Zhou et al. | Feb 2006 | B1 |
7006653 | Guenther | Feb 2006 | B2 |
7016746 | Wiser et al. | Mar 2006 | B2 |
7024001 | Nakada | Apr 2006 | B1 |
7058463 | Ruha et al. | Jun 2006 | B1 |
7123728 | King et al. | Oct 2006 | B2 |
7236602 | Gustavsson | Jun 2007 | B2 |
7254243 | Bongiovi | Aug 2007 | B2 |
7266205 | Miller | Sep 2007 | B2 |
7269234 | Klingenbrunn et al. | Sep 2007 | B2 |
7274795 | Bongiovi | Sep 2007 | B2 |
7519189 | Bongiovi | Apr 2009 | B2 |
7577263 | Tourwe | Aug 2009 | B2 |
7613314 | Camp, Jr. | Nov 2009 | B2 |
7676048 | Tsutsui | Mar 2010 | B2 |
7711129 | Lindahl | May 2010 | B2 |
7711442 | Ryle et al. | May 2010 | B2 |
7747447 | Christensen et al. | Jun 2010 | B2 |
7764802 | Oliver | Jul 2010 | B2 |
7778718 | Janke et al. | Aug 2010 | B2 |
7916876 | Helsloot | Mar 2011 | B1 |
8068621 | Okabayashi et al. | Nov 2011 | B2 |
8144902 | Johnston | Mar 2012 | B2 |
8160274 | Bongiovi | Apr 2012 | B2 |
8175287 | Ueno et al. | May 2012 | B2 |
8218789 | Bharitkar et al. | Jul 2012 | B2 |
8229136 | Bongiovi | Jul 2012 | B2 |
8284955 | Bongiovi et al. | Oct 2012 | B2 |
8385864 | Dickson et al. | Feb 2013 | B2 |
8462963 | Bongiovi | Jun 2013 | B2 |
8472642 | Bongiovi | Jun 2013 | B2 |
8503701 | Miles et al. | Aug 2013 | B2 |
8565449 | Bongiovi | Oct 2013 | B2 |
8577676 | Muesch | Nov 2013 | B2 |
8619998 | Walsh et al. | Dec 2013 | B2 |
8705765 | Bongiovi | Apr 2014 | B2 |
8750538 | Avendano et al. | Jun 2014 | B2 |
8811630 | Burlingame | Aug 2014 | B2 |
8879743 | Mitra | Nov 2014 | B1 |
9195433 | Bongiovi et al. | Nov 2015 | B2 |
9264004 | Bongiovi et al. | Feb 2016 | B2 |
9276542 | Bongiovi et al. | Mar 2016 | B2 |
9281794 | Bongiovi et al. | Mar 2016 | B1 |
9344828 | Bongiovi et al. | May 2016 | B2 |
9348904 | Bongiovi et al. | May 2016 | B2 |
9350309 | Bongiovi et al. | May 2016 | B2 |
9397629 | Bongiovi et al. | Jul 2016 | B2 |
9398394 | Bongiovi et al. | Jul 2016 | B2 |
9413321 | Bongiovi et al. | Aug 2016 | B2 |
9564146 | Bongiovi et al. | Feb 2017 | B2 |
9615189 | Copt et al. | Apr 2017 | B2 |
9621994 | Bongiovi et al. | Apr 2017 | B1 |
9638672 | Butera, III et al. | May 2017 | B2 |
9741355 | Bongiovi et al. | Aug 2017 | B2 |
9793872 | Bongiovi et al. | Oct 2017 | B2 |
9883318 | Bongiovi et al. | Jan 2018 | B2 |
9906858 | Bongiovi et al. | Feb 2018 | B2 |
9906867 | Bongiovi et al. | Feb 2018 | B2 |
9998832 | Bongiovi et al. | Jun 2018 | B2 |
10069471 | Bongiovi et al. | Sep 2018 | B2 |
10158337 | Bongiovi et al. | Dec 2018 | B2 |
10666216 | Bongiovi et al. | May 2020 | B2 |
10701505 | Copt et al. | Jun 2020 | B2 |
20010008535 | Lanigan | Jul 2001 | A1 |
20010043704 | Schwartz | Nov 2001 | A1 |
20010046304 | Rast | Nov 2001 | A1 |
20020057808 | Goldstein | May 2002 | A1 |
20020071481 | Goodings | Jun 2002 | A1 |
20020094096 | Paritsky et al. | Jul 2002 | A1 |
20030016838 | Paritsky et al. | Jan 2003 | A1 |
20030023429 | Claesson et al. | Jan 2003 | A1 |
20030035555 | King et al. | Feb 2003 | A1 |
20030043940 | Janky et al. | Mar 2003 | A1 |
20030112088 | Bizjak | Jun 2003 | A1 |
20030138117 | Goff | Jul 2003 | A1 |
20030142841 | Wiegand | Jul 2003 | A1 |
20030164546 | Giger | Sep 2003 | A1 |
20030179891 | Rabinowitz et al. | Sep 2003 | A1 |
20030216907 | Thomas | Nov 2003 | A1 |
20040003805 | Ono et al. | Jan 2004 | A1 |
20040005063 | Klayman | Jan 2004 | A1 |
20040008851 | Hagiwara | Jan 2004 | A1 |
20040022400 | Magrath | Feb 2004 | A1 |
20040042625 | Brown | Mar 2004 | A1 |
20040044804 | Mac Farlane | Mar 2004 | A1 |
20040086144 | Kallen | May 2004 | A1 |
20040103588 | Allaei | Jun 2004 | A1 |
20040138769 | Akiho | Jul 2004 | A1 |
20040146170 | Zint | Jul 2004 | A1 |
20040189264 | Matsuura et al. | Sep 2004 | A1 |
20040208646 | Choudhary et al. | Oct 2004 | A1 |
20050013453 | Cheung | Jan 2005 | A1 |
20050090295 | Ali et al. | Apr 2005 | A1 |
20050117771 | Vosburgh et al. | Jun 2005 | A1 |
20050129248 | Kraemer et al. | Jun 2005 | A1 |
20050175185 | Korner | Aug 2005 | A1 |
20050201572 | Lindahl et al. | Sep 2005 | A1 |
20050249272 | Kirkeby et al. | Nov 2005 | A1 |
20050254564 | Tsutsui | Nov 2005 | A1 |
20060034467 | Sleboda et al. | Feb 2006 | A1 |
20060045294 | Smyth | Mar 2006 | A1 |
20060064301 | Aguilar et al. | Mar 2006 | A1 |
20060098827 | Paddock et al. | May 2006 | A1 |
20060115107 | Vincent et al. | Jun 2006 | A1 |
20060126851 | Yuen et al. | Jun 2006 | A1 |
20060126865 | Blamey et al. | Jun 2006 | A1 |
20060138285 | Oleski et al. | Jun 2006 | A1 |
20060140319 | Eldredge et al. | Jun 2006 | A1 |
20060153281 | Karlsson | Jul 2006 | A1 |
20060189841 | Pluvinage | Aug 2006 | A1 |
20060291670 | King et al. | Dec 2006 | A1 |
20070010132 | Nelson | Jan 2007 | A1 |
20070030994 | Ando et al. | Feb 2007 | A1 |
20070056376 | King | Mar 2007 | A1 |
20070119421 | Lewis et al. | May 2007 | A1 |
20070150267 | Honma et al. | Jun 2007 | A1 |
20070173990 | Smith et al. | Jul 2007 | A1 |
20070177459 | Behn | Aug 2007 | A1 |
20070206643 | Egan | Sep 2007 | A1 |
20070223713 | Gunness | Sep 2007 | A1 |
20070223717 | Boersma | Sep 2007 | A1 |
20070253577 | Yen et al. | Nov 2007 | A1 |
20080031462 | Walsh et al. | Feb 2008 | A1 |
20080040116 | Cronin | Feb 2008 | A1 |
20080049948 | Christoph | Feb 2008 | A1 |
20080069385 | Revit | Mar 2008 | A1 |
20080123870 | Stark | May 2008 | A1 |
20080123873 | Bjorn-Josefsen et al. | May 2008 | A1 |
20080165989 | Seil et al. | Jul 2008 | A1 |
20080181424 | Schulein et al. | Jul 2008 | A1 |
20080212798 | Zartarian | Sep 2008 | A1 |
20080255855 | Lee et al. | Oct 2008 | A1 |
20090022328 | Neugebauer et al. | Jan 2009 | A1 |
20090054109 | Hunt | Feb 2009 | A1 |
20090080675 | Smirnov et al. | Mar 2009 | A1 |
20090086996 | Bongiovi et al. | Apr 2009 | A1 |
20090116652 | Kirkeby et al. | May 2009 | A1 |
20090282810 | Leone et al. | Nov 2009 | A1 |
20090290725 | Huang | Nov 2009 | A1 |
20090296959 | Bongiovi | Dec 2009 | A1 |
20100045374 | Wu et al. | Feb 2010 | A1 |
20100246832 | Villemoes et al. | Sep 2010 | A1 |
20100256843 | Bergstein et al. | Oct 2010 | A1 |
20100278364 | Berg | Nov 2010 | A1 |
20100303278 | Sahyoun | Dec 2010 | A1 |
20110002467 | Nielsen | Jan 2011 | A1 |
20110013736 | Tsukamoto et al. | Jan 2011 | A1 |
20110065408 | Kenington et al. | Mar 2011 | A1 |
20110087346 | Larsen et al. | Apr 2011 | A1 |
20110096936 | Gass | Apr 2011 | A1 |
20110194712 | Potard | Aug 2011 | A1 |
20110230137 | Hicks et al. | Sep 2011 | A1 |
20110257833 | Trush et al. | Oct 2011 | A1 |
20110280411 | Cheah et al. | Nov 2011 | A1 |
20120008798 | Ong | Jan 2012 | A1 |
20120014553 | Bonanno | Jan 2012 | A1 |
20120020502 | Adams | Jan 2012 | A1 |
20120022842 | Amadu | Jan 2012 | A1 |
20120063611 | Kimura | Mar 2012 | A1 |
20120099741 | Gotoh et al. | Apr 2012 | A1 |
20120170759 | Yuen et al. | Jul 2012 | A1 |
20120170795 | Sancisi et al. | Jul 2012 | A1 |
20120189131 | Ueno et al. | Jul 2012 | A1 |
20120213034 | Imran | Aug 2012 | A1 |
20120213375 | Mahabub et al. | Aug 2012 | A1 |
20120300949 | Rauhala | Nov 2012 | A1 |
20120302920 | Bridger et al. | Nov 2012 | A1 |
20130083958 | Katz et al. | Apr 2013 | A1 |
20130129106 | Sapiejewski | May 2013 | A1 |
20130162908 | Son et al. | Jun 2013 | A1 |
20130163767 | Gauger, Jr. et al. | Jun 2013 | A1 |
20130163783 | Burlingame | Jun 2013 | A1 |
20130169779 | Pedersen | Jul 2013 | A1 |
20130220274 | Deshpande et al. | Aug 2013 | A1 |
20130227631 | Sharma et al. | Aug 2013 | A1 |
20130242191 | Leyendecker | Sep 2013 | A1 |
20130251175 | Bongiovi et al. | Sep 2013 | A1 |
20130288596 | Suzuki et al. | Oct 2013 | A1 |
20130338504 | Demos et al. | Dec 2013 | A1 |
20130343564 | Darlington | Dec 2013 | A1 |
20140067236 | Henry et al. | Mar 2014 | A1 |
20140119583 | Valentine et al. | May 2014 | A1 |
20140126734 | Gauger, Jr. et al. | May 2014 | A1 |
20140261301 | Leone | Sep 2014 | A1 |
20140379355 | Hosokawa | Dec 2014 | A1 |
20150039250 | Rank | Feb 2015 | A1 |
20150194158 | Oh et al. | Jul 2015 | A1 |
20150208163 | Hallberg et al. | Jul 2015 | A1 |
20150215720 | Carroll | Jul 2015 | A1 |
20160209831 | Pal | Jul 2016 | A1 |
20170072305 | Watanabe | Mar 2017 | A1 |
20170188989 | Copt et al. | Jul 2017 | A1 |
20170193980 | Bongiovi et al. | Jul 2017 | A1 |
20170272887 | Copt et al. | Sep 2017 | A1 |
20170345408 | Hong et al. | Nov 2017 | A1 |
20180091109 | Bongiovi et al. | Mar 2018 | A1 |
20180102133 | Bongiovi et al. | Apr 2018 | A1 |
20180139565 | Norris et al. | May 2018 | A1 |
20190020950 | Bongiovi et al. | Jan 2019 | A1 |
20190069114 | Tai | Feb 2019 | A1 |
20190318719 | Copt et al. | Oct 2019 | A1 |
20190387340 | Audfray | Dec 2019 | A1 |
20200007983 | Bongiovi et al. | Jan 2020 | A1 |
20200053503 | Butera, III | Feb 2020 | A1 |
20200404441 | Copt et al. | Dec 2020 | A1 |
Foreign Patent Documents

Number | Date | Country
---|---|---
9611417 | Feb 1999 | BR |
96113723 | Jul 1999 | BR |
2533221 | Jun 1995 | CA |
2161412 | Apr 2000 | CA |
2854086 | Dec 2018 | CA |
1139842 | Jan 1997 | CN |
1173268 | Feb 1998 | CN |
1221528 | Jun 1999 | CN |
1357136 | Jul 2002 | CN |
1391780 | Jan 2003 | CN |
1682567 | Oct 2005 | CN |
1879449 | Dec 2006 | CN |
1910816 | Feb 2007 | CN |
101163354 | Apr 2008 | CN |
101277331 | Oct 2008 | CN |
101518083 | Aug 2009 | CN |
101536541 | Sep 2009 | CN |
101720557 | Jun 2010 | CN |
101946526 | Jan 2011 | CN |
101964189 | Feb 2011 | CN |
102171755 | Aug 2011 | CN |
102265641 | Nov 2011 | CN |
102361506 | Feb 2012 | CN |
102652337 | Aug 2012 | CN |
102754151 | Oct 2012 | CN |
102822891 | Dec 2012 | CN |
102855882 | Jan 2013 | CN |
103004237 | Mar 2013 | CN |
203057339 | Jul 2013 | CN |
103247297 | Aug 2013 | CN |
103250209 | Aug 2013 | CN |
103262577 | Aug 2013 | CN |
103348697 | Oct 2013 | CN |
103455824 | Dec 2013 | CN |
1672325 | Sep 2015 | CN |
19826171 | Oct 1999 | DE |
10116166 | Oct 2002 | DE |
0206746 | Aug 1992 | EP |
0541646 | Jan 1995 | EP |
0580579 | Jun 1998 | EP |
0698298 | Feb 2000 | EP |
0932523 | Jun 2000 | EP |
0666012 | Nov 2002 | EP |
2509069 | Oct 2012 | EP |
2814267 | Oct 2016 | EP |
2218599 | Oct 1998 | ES |
2249788 | Oct 1998 | ES |
2219949 | Aug 1999 | ES |
2003707 | Mar 1979 | GB |
2089986 | Jun 1982 | GB |
2320393 | Dec 1996 | GB |
3150910 | Jun 1991 | JP |
7106876 | Apr 1995 | JP |
2005500768 | Jan 2005 | JP |
2011059714 | Mar 2011 | JP |
1020040022442 | Mar 2004 | KR |
1319288 | Jun 1987 | SU |
401713 | Aug 2000 | TW |
WO 9219080 | Oct 1992 | WO |
WO 1993011637 | Jun 1993 | WO |
WO 9321743 | Oct 1993 | WO |
WO 9427331 | Nov 1994 | WO |
WO 9514296 | May 1995 | WO |
WO 9531805 | Nov 1995 | WO |
WO 9535628 | Dec 1995 | WO
WO 9601547 | Jan 1996 | WO |
WO 9611465 | Apr 1996 | WO |
WO 9708847 | Mar 1997 | WO |
WO 9709698 | Mar 1997 | WO |
WO 9709840 | Mar 1997 | WO |
WO 9709841 | Mar 1997 | WO |
WO 9709842 | Mar 1997 | WO |
WO 9709843 | Mar 1997 | WO |
WO 9709844 | Mar 1997 | WO |
WO 9709845 | Mar 1997 | WO |
WO 9709846 | Mar 1997 | WO |
WO 9709848 | Mar 1997 | WO |
WO 9709849 | Mar 1997 | WO |
WO 9709852 | Mar 1997 | WO |
WO 9709853 | Mar 1997 | WO |
WO 9709854 | Mar 1997 | WO |
WO 9709855 | Mar 1997 | WO |
WO 9709856 | Mar 1997 | WO |
WO 9709857 | Mar 1997 | WO |
WO 9709858 | Mar 1997 | WO |
WO 9709859 | Mar 1997 | WO |
WO 9709861 | Mar 1997 | WO |
WO 9709862 | Mar 1997 | WO |
WO 9717818 | May 1997 | WO |
WO 9717820 | May 1997 | WO |
WO 9813942 | Apr 1998 | WO |
WO 9816409 | Apr 1998 | WO |
WO 9828942 | Jul 1998 | WO |
WO 9831188 | Jul 1998 | WO |
WO 9834320 | Aug 1998 | WO |
WO 9839947 | Sep 1998 | WO |
WO 9842536 | Oct 1998 | WO |
WO 9843464 | Oct 1998 | WO |
WO 9852381 | Nov 1998 | WO |
WO 9852383 | Nov 1998 | WO |
WO 9853638 | Nov 1998 | WO |
WO 9902012 | Jan 1999 | WO |
WO 9908479 | Feb 1999 | WO |
WO 9911490 | Mar 1999 | WO |
WO 9912387 | Mar 1999 | WO |
WO 9913684 | Mar 1999 | WO |
WO 9921397 | Apr 1999 | WO |
WO 9935636 | Jul 1999 | WO |
WO 9935883 | Jul 1999 | WO |
WO 9937121 | Jul 1999 | WO |
WO 9938155 | Jul 1999 | WO |
WO 9941939 | Aug 1999 | WO |
WO 9952322 | Oct 1999 | WO |
WO 9952324 | Oct 1999 | WO |
WO 9956497 | Nov 1999 | WO |
WO 9962294 | Dec 1999 | WO |
WO 9965274 | Dec 1999 | WO |
WO 0001264 | Jan 2000 | WO |
WO 0002417 | Jan 2000 | WO |
WO 0007408 | Feb 2000 | WO |
WO 0007409 | Feb 2000 | WO |
WO 0013464 | Mar 2000 | WO |
WO 0015003 | Mar 2000 | WO |
WO 0033612 | Jun 2000 | WO |
WO 0033613 | Jun 2000 | WO |
WO 03104924 | Dec 2003 | WO |
WO 2006020427 | Feb 2006 | WO |
WO 2007092420 | Aug 2007 | WO |
WO 2008067454 | Jun 2008 | WO |
WO 2009070797 | Jun 2009 | WO |
WO 2009102750 | Aug 2009 | WO |
WO 2009114746 | Sep 2009 | WO |
WO 2009155057 | Dec 2009 | WO
WO 2010027705 | Mar 2010 | WO |
WO 2010051354 | May 2010 | WO
WO 2010138311 | Dec 2010 | WO
WO 2011081965 | Jul 2011 | WO |
WO 2012134399 | Oct 2012 | WO |
WO 2012154823 | Nov 2012 | WO
WO 2013055394 | Apr 2013 | WO |
WO 2013076223 | May 2013 | WO |
WO 2014201103 | Dec 2014 | WO |
WO 2015061393 | Apr 2015 | WO |
WO 2015077681 | May 2015 | WO |
WO 2016019263 | Feb 2016 | WO |
WO 2016022422 | Feb 2016 | WO |
WO 2016144861 | Sep 2016 | WO |
WO 2019051075 | Mar 2019 | WO |
WO 2019200119 | Oct 2019 | WO
WO 2020028833 | Feb 2020 | WO |
WO 2020132060 | Jun 2020 | WO
Other Publications

Entry |
---|
NovaSound Int., http://www.novasoundint.com/new_page_t.htm, 2004. |
Stephan Peus et al., "Natürliches Hören mit künstlichem Kopf" ("Natural Hearing with an Artificial Head"), Funkschau—Zeitschrift für elektronische Kommunikation, Dec. 31, 1983, pp. 1-4, XP055451269. Web: https://www.neumann.com/?lang=en&id=hist_microphones&cid=ku80_publications. |
Related Publications

Number | Date | Country
---|---|---
20200053503 A1 | Feb 2020 | US |
Provisional Applications

Number | Date | Country
---|---|---
62721914 | Aug 2018 | US
62713798 | Aug 2018 | US |