System, method, and apparatus for generating and digitally processing a head related audio transfer function

Information

  • Patent Grant
  • Patent Number
    10,701,505
  • Date Filed
    Monday, January 8, 2018
  • Date Issued
    Tuesday, June 30, 2020
Abstract
The present invention provides for an apparatus, system, and method for generating a head related audio transfer function in real time. Specifically, the present invention utilizes unique structural components including a tragus structure and an antihelix structure in connection with a microphone in order to communicate the location of a sound in three dimensional space to a user. The invention also utilizes an audio processor to digitally process the head related audio transfer function.
Description
FIELD OF THE INVENTION

The present invention provides for a system and apparatus for generating a real time head related audio transfer function. Specifically, unique structural components are utilized in connection with a microphone to reproduce certain acoustic characteristics of the human pinna in order to facilitate the communication of the location of a sound in three dimensional space to a user. The invention may further utilize an audio processor to digitally process the head related audio transfer function.


BACKGROUND OF THE INVENTION

Human beings have just two ears, but can locate sounds in three dimensions, in distance and in direction. This is possible because the brain, the inner ears, and the external ears (pinna) work together to make inferences about the location of a sound. The location of a sound is estimated by taking cues derived from one ear (monaural cues), as well as by comparing the difference between the cues received in both ears (binaural cues).


Binaural cues relate to the differences in time of arrival and intensity of the sound between the two ears, which assist with the relative localization of a sound source. Monaural cues relate to the interaction between the sound source and the human anatomy, in which the original sound is modified by the external ear before it enters the ear canal for processing by the auditory system. The modifications encode the source location relative to the ear location and are known as head-related transfer functions (HRTF).


In other words, HRTFs describe the filtering of a sound source before it is perceived at the left and right ear drums, in order to characterize how a particular ear receives sound from a particular point in space. These modifications depend on the shape of the listener's ear, the shape of the listener's head and body, the acoustical characteristics of the space in which the sound is played, and so forth. All of these characteristics together influence how accurately a listener can tell what direction a sound is coming from. Thus, a pair of HRTFs accounting for all of these characteristics, one for each ear, can be used to synthesize a binaural sound that is accurately perceived as originating from a particular point in space.


HRTFs have wide ranging applications, from virtual surround sound in media and gaming, to hearing protection in loud noise environments, and hearing assistance for the hearing impaired. Particularly, in the fields of hearing protection and hearing assistance, the ability to record and reconstruct a particular user's HRTF presents several challenges, as it must occur in real time. In the case of an application for hearing protection in high noise environments, heavy hearing protection hardware must be worn over the ears in the form of bulky headphones. If microphones are placed on the outside of the headphones, the user will hear the outside world but will not receive accurate positional data, because the HRTF is not being reconstructed. Similarly, in the case of hearing assistance for the hearing impaired, the microphone is mounted external to the hearing aid, and any hearing aid device that fully blocks a user's ear canal will not accurately reproduce that user's HRTF.


Thus, there is a need for an apparatus and system for reconstructing a user's HRTF in accordance with the user's physical characteristics, in order to accurately relay positional sound information to the user in real time.


SUMMARY OF THE INVENTION

The present invention meets the existing needs described above by providing for an apparatus, system, and method for generating a head related audio transfer function. The present invention also provides the ability to enhance audio in real time and to tailor that enhancement to the physical characteristics of a user and the acoustic characteristics of the external environment.


Accordingly, in initially broad terms, an apparatus directed to the present invention, also known as an HRTF generator, comprises an external manifold and internal manifold. The external manifold is exposed at least partially to an external environment, while the internal manifold is disposed substantially within an interior of the apparatus and/or a larger device or system housing said apparatus.


The external manifold comprises an antihelix structure, a tragus structure, and an opening. The opening is in direct air flow communication with the outside environment, and is structured to receive acoustic waves. The tragus structure is disposed to partially enclose the opening, such that the tragus structure will partially impede and/or affect the characteristics of the incoming acoustic waves going into the opening. The antihelix structure is disposed to further partially enclose the tragus structure as well as the opening, such that the antihelix structure will partially impede and/or affect the characteristics of the incoming acoustic waves flowing onto the tragus structure and into the opening. The antihelix and tragus structures may comprise semi-domes or any variation of partial-domes comprising a closed side and an open side. In a preferred embodiment, the open side of the antihelix structure and the open side of the tragus structure are disposed in confronting relation to one another.


The opening of the external manifold is connected to and in air flow communication with an opening canal inside the external manifold. The opening canal may be disposed in a substantially perpendicular orientation relative to the desired orientation of the user. The opening canal is in further air flow communication with an auditory canal, which is formed within the internal manifold but may also be formed partially in the external manifold.


The internal manifold comprises the auditory canal and a microphone housing. The microphone housing is attached or connected to the end of the auditory canal opposite its connection with the opening canal. The auditory canal, or at least a portion of the auditory canal, may be disposed in a substantially parallel orientation relative to the desired listening direction of the user. The microphone housing may further comprise a microphone mounted against the end of the auditory canal. The microphone housing may further comprise an air cavity behind the microphone on an end opposite its connection to the auditory canal, which may be sealed with a cap.


In at least one embodiment, the apparatus or HRTF generator may form a part of a larger system. Accordingly, the system may comprise a left HRTF generator, a right HRTF generator, a left preamplifier, a right preamplifier, an audio processor, a left playback module, and a right playback module.


As such, the left HRTF generator may be structured to pick up and filter sounds to the left of a user. Similarly, the right HRTF generator may be structured to pick up and filter sounds to the right of the user. A left preamplifier may be structured and configured to increase the gain of the filtered sound of the left HRTF generator. A right preamplifier may be structured and configured to increase the gain of the filtered sound of the right HRTF generator. The audio processor may be structured and configured to process and enhance the audio signal received from the left and right preamplifiers, and then transmit the respective processed signals to each of the left and right playback modules. The left and right playback modules or transducers are structured and configured to convert the electrical signals into sound to the user, such that the user can then perceive the filtered and enhanced sound from the user's environment, which includes audio data that allows the user to localize the source of the originating sound.


In at least one embodiment, the system of the present invention may comprise a wearable device such as a headset or headphones having the HRTF generator embedded therein. The wearable device may further comprise the preamplifiers, audio processor, and playback modules, as well as other appropriate circuitry and components.


In a further embodiment, a method for generating a head related audio transfer function may be used in accordance with the present invention. As such, external sound is first filtered through an exterior of an HRTF generator which may comprise a tragus structure and an antihelix structure. The filtered sound is then passed to the interior of the HRTF generator, such as through the opening canal and auditory canal described above to create an input sound. The input sound is received at a microphone embedded within the HRTF generator adjacent to and connected to the auditory canal in order to create an input signal. The input signal is amplified with a preamplifier in order to create an amplified signal. The amplified signal is then processed with an audio processor, in order to create a processed signal. Finally, the processed signal is transmitted to the playback module in order to relay audio and/or locational audio data to a user.


In certain embodiments, the audio processor may receive the amplified signal and first filter the amplified signal with a high pass filter. The high pass filter, in at least one embodiment, is configured to remove ultra-low frequency content from the amplified signal resulting in the generation of a high pass signal.


The high pass signal from the high pass filter is then filtered through a first filter module to create a first filtered signal. The first filter module is configured to selectively boost and/or attenuate the gain of select frequency ranges in an audio signal, such as the high pass signal. In at least one embodiment, the first filter module boosts frequencies above a first frequency, and attenuates frequencies below the first frequency.


The first filtered signal from the first filter module is then modulated with a first compressor to create a modulated signal. The first compressor is configured for the dynamic range compression of a signal, such as the first filtered signal. Because the first filter module boosted higher frequencies and attenuated lower frequencies, the first compressor may, in at least one embodiment, be configured to trigger and adjust the higher frequency material, while remaining relatively insensitive to lower frequency material.


The modulated signal from the first compressor is then filtered through a second filter module to create a second filtered signal. The second filter module is configured to selectively boost and/or attenuate the gain of select frequency ranges in an audio signal, such as the modulated signal. In at least one embodiment, the second filter module is configured to be of at least partially inverse relation relative to the first filter module. For example, if the first filter module boosted content above a first frequency by +X dB and attenuated content below the first frequency by −Y dB, the second filter module may then attenuate the content above the first frequency by −X dB, and boost the content below the first frequency by +Y dB. In other words, the purpose of the second filter module in one embodiment may be to “undo” the gain adjustment that was applied by the first filter module.


The second filtered signal from the second filter module is then processed with a first processing module to create a processed signal. In at least one embodiment, the first processing module may comprise a peak/dip module. In other embodiments, the first processing module may comprise both a peak/dip module and a first gain element. The first gain element may be configured to adjust the gain of the signal, such as the second filtered signal. The peak/dip module may be configured to shape the signal, such as to increase or decrease overshoots or undershoots in the signal.


The processed signal from the first processing module is then split with a band splitter into a low band signal, a mid band signal and a high band signal. In at least one embodiment, each band may comprise the output of a fourth order section, which may be realized as the cascade of second order biquad filters.


The low band signal is modulated with a low band compressor to create a modulated low band signal, and the high band signal is modulated with a high band compressor to create a modulated high band signal. The low band compressor and high band compressor are each configured to dynamically adjust the gain of a signal. Each of the low band compressor and the high band compressor may be computationally and/or configurationally identical to the first compressor.


The modulated low band signal, the mid band signal, and the modulated high band signal are then processed with a second processing module. The second processing module may comprise a summing module configured to combine the signals. The summing module in at least one embodiment may individually alter the gain of each of the modulated low band, mid band, and modulated high band signals. The second processing module may further comprise a second gain element. The second gain element may adjust the gain of the combined signal in order to create a processed signal that is transmitted to the playback module.


The method described herein may be configured to capture and transmit locational audio data to a user in real time, such that it can be utilized as a hearing aid, or in loud noise environments to filter out loud noises.


These and other objects, features and advantages of the present invention will become clearer when the drawings as well as the detailed description are taken into consideration.





BRIEF DESCRIPTION OF THE DRAWINGS

For a fuller understanding of the nature of the present invention, reference should be had to the following detailed description taken in connection with the accompanying drawings in which:



FIG. 1 is a perspective external view of an apparatus for generating a head related audio transfer function.



FIG. 2 is a perspective internal view of an apparatus for generating a head related audio transfer function.



FIG. 3 is a block diagram directed to a system for generating a head related audio transfer function.



FIG. 4A illustrates a side profile view of a wearable device comprising an apparatus for generating a head related audio transfer function.



FIG. 4B illustrates a front profile view of a wearable device comprising an apparatus for generating a head related audio transfer function.



FIG. 5 illustrates a flowchart directed to a method for generating a head related audio transfer function.



FIG. 6 illustrates a schematic of one embodiment of an audio processor according to one embodiment of the present invention.



FIG. 7 illustrates a schematic of another embodiment of an audio processor according to one embodiment of the present invention.



FIG. 8 illustrates a block diagram of one method for processing an audio signal with an audio processor according to one embodiment of the present invention.



FIG. 9 illustrates a block diagram of another method for processing an audio signal with an audio processor according to another embodiment of the present invention.





Like reference numerals refer to like parts throughout the several views of the drawings.


DETAILED DESCRIPTION OF THE EMBODIMENT

As illustrated by the accompanying drawings, the present invention is directed to an apparatus, system, and method for generating a head related audio transfer function for a user. Specifically, some embodiments relate to capturing surrounding sound in the external environment in real time, filtering that sound through unique structures formed on the apparatus in order to generate audio positional data, and then processing that sound to enhance and relay the positional audio data to a user, such that the user can determine the origination of the sound in three dimensional space.


As schematically represented, FIGS. 1 and 2 illustrate at least one preferred embodiment of an apparatus 100 for generating a head related audio transfer function for a user, or “HRTF generator”. Accordingly, apparatus 100 comprises an external manifold 110 and an internal manifold 120. The external manifold 110 will be disposed at least partially on an exterior of the apparatus 100. The internal manifold 120, on the other hand, will be disposed along an interior of the apparatus 100. For further clarification, the exterior of the apparatus 100 faces the external environment, such that the exterior is directly exposed to the air of the surrounding environment. The interior of the apparatus 100 comprises an at least partially sealed off environment that partially or fully obstructs the direct flow of acoustic waves.


The external manifold 110 may comprise a hexahedron shape having six faces. In at least one embodiment, the external manifold 110 is substantially cuboid. The external manifold 110 may comprise at least one surface that is concave or convex, such as an exterior surface exposed to the external environment. The internal manifold 120 may comprise a substantially cylindrical shape, which may be at least partially hollow. The external manifold 110 and internal manifold 120 may comprise sound dampening or sound proof materials, such as various foams, plastics, and glass known to those skilled in the art.


Drawing attention to FIG. 1, the external manifold 110 comprises an antihelix structure 101, a tragus structure 102, and an opening 103 that are externally visible. The opening 103 is in direct air flow communication with the surrounding environment, and as such will receive a flow of acoustic waves or vibrations in the air that passes through the opening 103. The tragus structure 102 is disposed to partially enclose the opening 103, and the antihelix structure 101 is disposed to partially enclose both the tragus structure 102 and the opening 103.


In at least one embodiment, the antihelix structure 101 comprises a semi-dome structure having a closed side 105 and an open side 106. In a preferred embodiment, the open side 106 faces the preferred listening direction 104, and the closed side 105 faces away from the preferred listening direction 104. The tragus structure 102 may also comprise a semi-dome structure having a closed side 107 and an open side 108. In a preferred embodiment, the open side 108 faces away from the preferred listening direction 104, while the closed side 107 faces towards the preferred listening direction 104. In other embodiments, the open side 106 of the antihelix structure 101 may be in direct confronting relation to the open side 108 of the tragus structure 102, regardless of the preferred listening direction 104.


Semi-dome as defined for the purposes of this document may comprise a half-dome structure or any combination of partial-dome structures. For instance, the anti-helix structure 101 of FIG. 1 comprises a half-dome, while the tragus structure 102 comprises a partial-dome wherein the base portion may be less than that of a half-dome, but the top portion may extend to or beyond the halfway point of a half-dome to provide increased coverage or enclosure of the opening 103 and other structures. Of course, in other variations, the top portion and bottom portion of the semi-dome may vary in respective dimensions to form varying portions of a full dome structure, in order to create varying coverage of the opening 103. This allows the apparatus to produce different or enhanced acoustic input for calculating direction and distance of the source sound relative to the user.


In at least one embodiment, the antihelix structure 101 and tragus structure 102 may be modular, such that different sizes or shapes (variations of different semi-domes or partial-domes) may be swapped out based on a user's preference for particular acoustic characteristics.


Drawing attention now to FIG. 2, the opening 103 is connected to, and in air flow communication with, an opening canal 111 inside the external manifold 110. In at least one embodiment, the opening canal 111 is disposed in a substantially perpendicular orientation relative to the desired listening direction 104 of the user. The opening canal 111 is further connected in air flow communication with an auditory canal 121. A portion of the auditory canal 121 may be formed in the external manifold 110. In various embodiments, the opening canal 111 and auditory canal 121 may be of a single piece construction. In other embodiments, a canal connector (not shown) may be used to connect the two segments. At least a portion of the auditory canal 121 may also be formed within the internal manifold 120.


As previously discussed, the internal manifold 120 is formed wholly or substantially within an interior of the apparatus, such that it is not exposed directly to the outside air and will not be substantially affected by the external environment. In at least one embodiment, the auditory canal 121, formed within at least a portion of the internal manifold 120, will be disposed in a substantially parallel orientation relative to the desired listening direction 104 of the user. In a preferred embodiment, the auditory canal comprises a length that is greater than two times its diameter.


A microphone housing 122 is attached to an end of the auditory canal 121. Within the microphone housing 122, a microphone, generally at 123 (not shown), is mounted against the end of the auditory canal 121. In at least one embodiment, the microphone 123 is mounted flush against the auditory canal 121, such that the connection may be substantially air tight to avoid interference sounds. In a preferred embodiment, an air cavity, generally at 124, is created behind the microphone and at the end of the internal manifold 120. This may be accomplished by inserting the microphone 123 into the microphone housing 122, and then sealing the end of the microphone housing, generally at 124, with a cap. The cap may be substantially air tight in at least one embodiment. Different gasses having different acoustic characteristics may be used within the air cavity.


In at least one embodiment, apparatus 100 may form a part of a larger system 300 as illustrated in FIG. 3. Accordingly, a system 300 may comprise a left HRTF generator 100, a right HRTF generator 100′, a left preamplifier 210, a right preamplifier 210′, an audio processor 220, a left playback module 230, and a right playback module 230′.


The left and right HRTF generators 100 and 100′ may comprise the apparatus 100 described above, each having unique structures such as the antihelix structure 101 and tragus structure 102. Accordingly, the HRTF generators 100/100′ may be structured to generate a head related audio transfer function for a user, such that the sound received by the HRTF generators 100/100′ may be relayed to the user to accurately communicate position data of the sound. In other words, the HRTF generators 100/100′ may replicate and replace the function of the user's own left and right ears, where the HRTF generators would collect sound, and perform respective spectral transformations or a filtering process to the incoming sounds to enable the process of vertical localization to take place.


A left preamplifier 210 and right preamplifier 210′ may then be used to enhance the filtered sound coming from the HRTF generators, in order to enhance certain acoustic characteristics to improve locational accuracy, or to filter out unwanted noise. The preamplifiers 210/210′ may comprise an electronic amplifier, such as a voltage amplifier, current amplifier, transconductance amplifier, transresistance amplifier, and/or any combination of circuits known to those skilled in the art for increasing or decreasing the gain of a sound or input signal. In at least one embodiment, the preamplifier comprises a microphone preamplifier configured to prepare a microphone signal to be processed by other processing modules. As is known in the art, microphone signals are sometimes too weak to be transmitted with adequate quality to other units, such as recording or playback devices. A microphone preamplifier thus increases a microphone signal to the line level by providing stable gain while preventing induced noise that might otherwise distort the signal.


Audio processor 220 may comprise a digital signal processor and amplifier, and may further comprise a volume control. Audio processor 220 may comprise a processor and combination of circuits structured to further enhance the audio quality of the signal coming from the microphone preamplifier, such as, but not limited to, shelf filters, equalizers, and modulators. For example, in at least one embodiment the audio processor 220 may comprise a processor that performs the steps for processing a signal as taught by the present inventor's U.S. Pat. No. 8,160,274, the entire disclosure of which is incorporated herein by reference. Audio processor 220 may incorporate various acoustic profiles customized for a user and/or for an environment, such as those described in the present inventor's U.S. Pat. No. 8,565,449, the entire disclosure of which is incorporated herein by reference. Audio processor 220 may additionally incorporate processing suitable for high noise environments, such as those described in the present inventor's U.S. Pat. No. 8,462,963, the entire disclosure of which is incorporated herein by reference. Parameters of the audio processor 220 may be controlled and modified by a user via any means known to one skilled in the art, such as by a direct interface or a wireless communication interface.


The left playback module 230 and right playback module 230′ may comprise headphones, earphones, speakers, or any other transducer known to one skilled in the art. The purpose of the left and right playback modules 230/230′ is to convert the electrical audio signal from the audio processor 220 back into perceptible sound for the user. As such, a moving-coil transducer, electrostatic transducer, electret transducer, or other transducer technologies known to one skilled in the art may be utilized.


In at least one embodiment, the present system 300 comprises a device 200 as generally illustrated at FIGS. 4A and 4B, which may be a wearable headset 200 having the apparatus 100 embedded therein, as well as various amplifiers including but not limited to 210/210′, processors such as 220, playback modules such as 230/230′, and other appropriate circuits or combinations thereof for receiving, transmitting, enhancing, and reproducing sound.


In a further embodiment as illustrated in FIG. 5, a method for generating a head related audio transfer function is shown. Accordingly, external sound is first filtered through at least a tragus structure and an antihelix structure formed along an exterior of an HRTF generator, as in 201, in order to create a filtered sound. Next, the filtered sound is passed through an opening and auditory canal along an interior of the HRTF generator, as in 202, in order to create an input sound. The input sound is received at a microphone embedded within the HRTF generator, as in 203, in order to create an input signal. The input signal is then amplified with a preamplifier, as in 204, in order to create an amplified signal. The amplified signal is processed with an audio processor, as in 205, in order to create a processed signal. Finally, the processed signal is transmitted to a playback module, as in 206, in order to relay the audio and/or locational audio data to the user.


In a preferred embodiment of the present invention, the method of FIG. 5 may perform the locational audio capture and transmission to a user in real time. This facilitates usage in a hearing assistance situation, such as a hearing aid for a user with impaired hearing. This also facilitates usage in a high noise environment, such as to filter out loud noises and/or enhance human speech.


In at least one embodiment, the method of FIG. 5 may further comprise a calibration process, such that each user can replicate his or her unique HRTF in order to provide for accurate localization of a sound in three dimensional space. The calibration may comprise adjusting the antihelix and tragus structures as described above, which may be formed of modular and/or moveable components. Thus, the antihelix and/or tragus structure may be repositioned, and/or differently shaped and/or sized structures may be used. In further embodiments, the audio processor 220 described above may be further calibrated to adjust the acoustic enhancement of certain sound waves relative to other sound waves and/or signals.


With regard to FIG. 6, one embodiment of an audio processor 220 is represented schematically as a system 1000. As schematically represented, FIG. 6 illustrates at least one preferred embodiment of a system 1000, and FIG. 7 provides examples of several subcomponents and combinations of subcomponents of the modules of FIG. 6. Accordingly, and in these embodiments, the systems 1000 and 3000 generally comprise an input device 1010 (such as the left preamplifier 210 and/or right preamplifier 210′), a high pass filter 1110, a first filter module 3010, a first compressor 1140, a second filter module 3020, a first processing module 3030, a band splitter 1190, a low band compressor 1300, a high band compressor 1310, a second processing module 3040, and an output device 1020.


The input device 1010 is at least partially structured or configured to transmit an input audio signal 2010, such as an amplified signal from a left or right preamplifier 210, 210′, into the system 1000 of the present invention, and in at least one embodiment into the high pass filter 1110.


The high pass filter 1110 is configured to pass through high frequencies of an audio signal, such as the input signal 2010, while attenuating lower frequencies, based on a predetermined frequency. In other words, the frequencies above the predetermined frequency may be transmitted to the first filter module 3010 in accordance with the present invention. In at least one embodiment, ultra-low frequency content is removed from the input audio signal, where the predetermined frequency may be selected from a range between 300 Hz and 3 kHz. The predetermined frequency, however, may vary depending on the source signal, and may in other embodiments comprise any frequency selected from the full audible range between 20 Hz and 20 kHz. The predetermined frequency may be tunable by a user, or alternatively be statically set. The high pass filter 1110 may further comprise any circuits or combinations thereof structured to pass through high frequencies above a predetermined frequency, and attenuate or filter out the lower frequencies.
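

By way of a non-limiting sketch only (the patent does not prescribe a particular filter design or sample rate), a high pass of this kind can be prototyped in Python with SciPy, which factors a fourth-order filter into the two cascaded second-order (biquad) sections discussed later in connection with FIG. 8; the 500 Hz cutoff and 48 kHz sample rate are assumed example values:

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 48000.0       # assumed sample rate in Hz
    cutoff = 500.0     # assumed predetermined frequency (text suggests 300 Hz to 3 kHz)

    # Fourth-order Butterworth high pass, returned as two cascaded
    # second-order sections (biquads).
    sos = butter(4, cutoff, btype='highpass', fs=fs, output='sos')

    def high_pass(block):
        # Remove ultra-low frequency content from an input block of samples.
        return sosfilt(sos, block)

    # Example: filter one second of white noise.
    high_pass_signal = high_pass(np.random.randn(int(fs)))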


The first filter module 3010 is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the high pass signal 2110. For example, and in at least one embodiment, frequencies below a first frequency may be adjusted by ±X dB, while frequencies above the first frequency may be adjusted by ±Y dB. In other embodiments, a plurality of frequencies may be used to selectively adjust the gain of various frequency ranges within an audio signal. In at least one embodiment, the first filter module 3010 may be implemented with a first low shelf filter 1120 and a first high shelf filter 1130, as illustrated in FIG. 6. The first low shelf filter 1120 and first high shelf filter 1130 may both be second-order filters. In at least one embodiment, the first low shelf filter 1120 attenuates content below a first frequency, and the first high shelf filter 1130 boosts content above the first frequency. In other embodiments, the frequency used for the first low shelf filter 1120 and the first high shelf filter 1130 may comprise two different frequencies. The frequencies may be static or adjustable. Similarly, the gain adjustment (boost or attenuation) may be static or adjustable.
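

The patent does not state how the shelf filter coefficients are derived; the following hedged Python sketch uses the familiar Audio EQ Cookbook second-order shelf designs, and the 48 kHz sample rate, 1 kHz first frequency, and ±10 dB gains are placeholder values chosen only for illustration:

    import numpy as np

    def shelf(fs, f0, gain_db, high=False, S=1.0):
        # Second-order low or high shelf biquad (Audio EQ Cookbook form),
        # returned as normalized coefficients (b0, b1, b2, a1, a2).
        A = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0 / fs
        cosw, sinw = np.cos(w0), np.sin(w0)
        alpha = sinw / 2.0 * np.sqrt((A + 1.0 / A) * (1.0 / S - 1.0) + 2.0)
        sqA2a = 2.0 * np.sqrt(A) * alpha
        if not high:   # low shelf
            b0 = A * ((A + 1) - (A - 1) * cosw + sqA2a)
            b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
            b2 = A * ((A + 1) - (A - 1) * cosw - sqA2a)
            a0 = (A + 1) + (A - 1) * cosw + sqA2a
            a1 = -2 * ((A - 1) + (A + 1) * cosw)
            a2 = (A + 1) + (A - 1) * cosw - sqA2a
        else:          # high shelf
            b0 = A * ((A + 1) + (A - 1) * cosw + sqA2a)
            b1 = -2 * A * ((A - 1) + (A + 1) * cosw)
            b2 = A * ((A + 1) + (A - 1) * cosw - sqA2a)
            a0 = (A + 1) - (A - 1) * cosw + sqA2a
            a1 = 2 * ((A - 1) - (A + 1) * cosw)
            a2 = (A + 1) - (A - 1) * cosw - sqA2a
        return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

    # Illustrative first filter module 3010: -10 dB below 1 kHz (low shelf)
    # and +10 dB above 1 kHz (high shelf); all of these values are assumptions.
    first_low_shelf = shelf(48000.0, 1000.0, -10.0, high=False)
    first_high_shelf = shelf(48000.0, 1000.0, +10.0, high=True)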


The first compressor 1140 is configured to modulate a signal, such as the first filtered signal 4010. The first compressor 1140 may comprise an automatic gain controller, and may comprise standard dynamic range compression controls such as threshold, ratio, attack and release. Threshold allows the first compressor 1140 to reduce the level of the first filtered signal 4010 if its amplitude exceeds a certain threshold. Ratio allows the first compressor 1140 to reduce the gain as determined by a ratio. Attack and release determine how quickly the first compressor 1140 acts. The attack phase is the period when the first compressor 1140 is decreasing gain to reach the level that is determined by the threshold. The release phase is the period when the first compressor 1140 is increasing gain to the level determined by the ratio. The first compressor 1140 may also feature soft and hard knees to control the bend in the response curve of the output or modulated signal 2140, and other dynamic range compression controls appropriate for the dynamic compression of an audio signal. The first compressor 1140 may further comprise any device or combination of circuits that is structured and configured for dynamic range compression.
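

As a small assumed illustration (these formulas are not given in the patent), attack and release times are commonly converted into per-sample smoothing coefficients of the kind used by the level detector in the FIG. 8 pseudocode further below; the 5 ms and 100 ms times are placeholders:

    import math

    def smoothing_coeff(time_seconds, fs):
        # One-pole smoothing coefficient for a given time constant; values
        # near 1.0 respond slowly, values near 0.0 respond quickly.
        return math.exp(-1.0 / (time_seconds * fs))

    fs = 48000.0
    att = smoothing_coeff(0.005, fs)   # 5 ms attack (placeholder)
    rel = smoothing_coeff(0.100, fs)   # 100 ms release (placeholder)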


The second filter module 3020 is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the modulated signal 2140. In at least one embodiment, the second filter module 3020 is of the same configuration as the first filter module 3010. Specifically, the second filter module 3020 may comprise a second low shelf filter 1150 and a second high shelf filter 1160. In certain embodiments, the second low shelf filter 1150 may be configured to filter signals between 100 Hz and 3000 Hz, with an attenuation of between −5 dB to −20 dB. In certain embodiments the second high shelf filter 1160 may be configured to filter signals between 100 Hz and 3000 Hz, with a boost of between +5 dB to +20 dB.


The second filter module 3020 may be configured in at least a partially inverse configuration to the first filter module 3010. For instance, the second filter module may use the same frequency, for instance the first frequency, as the first filter module. Further, the second filter module may adjust the gain of content above the first frequency inversely to the gain or attenuation applied by the first filter module. Similarly, the second filter module may adjust the gain of content below the first frequency inversely to the gain or attenuation applied by the first filter module. In other words, the purpose of the second filter module in one embodiment may be to “undo” the gain adjustment that was applied by the first filter module.


The first processing module 3030 is configured to process a signal, such as the second filtered signal 4020. In at least one embodiment, the first processing module 3030 may comprise a peak/dip module, such as 1180 represented in FIG. 7. In other embodiments, the first processing module 3030 may comprise a first gain element 1170. In various embodiments, the first processing module 3030 may comprise both a first gain element 1170 and a peak/dip module 1180 for the processing of a signal. The first gain element 1170, in at least one embodiment, may be configured to adjust the level of a signal by a static amount. The first gain element 1170 may comprise an amplifier or a multiplier circuit. In other embodiments, dynamic gain elements may be used. The peak/dip module 1180 is configured to shape the desired output spectrum, such as to increase or decrease overshoots or undershoots in the signal. In some embodiments, the peak/dip module may further be configured to adjust the slope of a signal, for instance a gradual slope that gives a smoother response, or alternatively a steeper slope for more sudden sounds. In at least one embodiment, the peak/dip module 1180 comprises a bank of ten cascaded peaking/dipping filters. The bank of ten cascaded peaking/dipping filters may further comprise second-order filters. In at least one embodiment, the peak/dip module 1180 may comprise an equalizer, such as a parametric or graphic equalizer.
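

A hedged Python sketch of such a bank of ten cascaded second-order peaking/dipping filters follows, again using Audio EQ Cookbook coefficients; the center frequencies and gains are placeholders rather than values disclosed in the patent:

    import numpy as np
    from scipy.signal import sosfilt

    def peaking_sos(fs, f0, gain_db, Q=1.0):
        # Second-order peaking/dipping biquad (Audio EQ Cookbook form),
        # returned as one SciPy second-order section [b0, b1, b2, 1, a1, a2].
        A = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2.0 * Q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return np.hstack([b / a[0], a / a[0]])

    # Bank of ten cascaded peaking/dipping filters; the center frequencies
    # and gains below are placeholders only.
    fs = 48000.0
    centers = [125, 250, 500, 1000, 2000, 3000, 4000, 6000, 8000, 12000]
    gains_db = [0, 2, -1, 3, 1, -2, 4, 2, -1, 0]
    peak_dip_bank = np.vstack([peaking_sos(fs, f, g) for f, g in zip(centers, gains_db)])

    def peak_dip(block):
        # Shape the output spectrum with the cascaded peak/dip bank.
        return sosfilt(peak_dip_bank, block)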


The band splitter 1190 is configured to split a signal, such as the processed signal 4030. In at least one embodiment, the signal is split into a low band signal 2200, a mid band signal 2210, and a high band signal 2220. Each band may be the output of a fourth order section, which may be further realized as the cascade of second order biquad filters. In other embodiments, the band splitter may comprise any combination of circuits appropriate for splitting a signal into three frequency bands. The low, mid, and high bands may be predetermined ranges, or may be dynamically determined based on the frequency itself, i.e. a signal may be split into three even frequency bands, or by percentage. The different bands may further be defined or configured by a user and/or control mechanism.
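

For illustration only, a three-way split in which each band is the output of a fourth-order section realized as cascaded biquads might be sketched as follows; the 400 Hz and 4 kHz crossover points are assumed values, not taken from the patent:

    from scipy.signal import butter, sosfilt

    fs = 48000.0
    low_edge, high_edge = 400.0, 4000.0   # assumed crossover points

    # Each band is the output of a fourth-order section realized as a cascade
    # of second-order biquad filters (SciPy's 'sos' form). Note that a
    # bandpass design of order 2 yields a fourth-order filter overall.
    low_sos = butter(4, low_edge, btype='lowpass', fs=fs, output='sos')
    mid_sos = butter(2, [low_edge, high_edge], btype='bandpass', fs=fs, output='sos')
    high_sos = butter(4, high_edge, btype='highpass', fs=fs, output='sos')

    def band_split(block):
        # Split a processed signal into low, mid, and high band signals.
        return sosfilt(low_sos, block), sosfilt(mid_sos, block), sosfilt(high_sos, block)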


A low band compressor 1300 is configured to modulate the low band signal 2200, and a high band compressor 1310 is configured to modulate the high band signal 2220. In at least one embodiment, each of the low band compressor 1300 and high band compressor 1310 may be the same as the first compressor 1140. Accordingly, each of the low band compressor 1300 and high band compressor 1310 may each be configured to modulate a signal. Each of the compressors 1300, 1310 may comprise an automatic gain controller, or any combination of circuits appropriate for the dynamic range compression of an audio signal.


A second processing module 3040 is configured to process at least one signal, such as the modulated low band signal 2300, the mid band signal 2210, and the modulated high band signal 2310. Accordingly, the second processing module 3040 may comprise a summing module 1320 configured to combine a plurality of signals. The summing module 1320 may comprise a mixer structured to combine two or more signals into a composite signal. The summing module 1320 may comprise any circuits or combination thereof structured or configured to combine two or more signals. In at least one embodiment, the summing module 1320 comprises individual gain controls for each of the incoming signals, such as the modulated low band signal 2300, the mid band signal 2210, and the modulated high band signal 2310. In at least one embodiment, the second processing module 3040 may further comprise a second gain element 1330. The second gain element 1330, in at least one embodiment, may be the same as the first gain element 1170. The second gain element 1330 may thus comprise an amplifier or multiplier circuit to adjust the signal, such as the combined signal, by a predetermined amount.


The output device 1020 may comprise the left playback module 230 and/or right playback module 230′.


As diagrammatically represented, FIG. 8 illustrates a block diagram of one method for processing an audio signal with an audio processor 220, which may in at least one embodiment incorporate the components or combinations thereof from the systems 1000 and/or 3000 referenced above. Each step of the method in FIG. 8 as detailed below may also be in the form of a code segment stored on a non-transitory computer readable medium for execution by the audio processor 220.


Accordingly, an input audio signal, such as the amplified signal, is first filtered, as in 5010, with a high pass filter to create a high pass signal. The high pass filter is configured to pass through high frequencies of a signal, such as the input signal, while attenuating lower frequencies. In at least one embodiment, ultra-low frequency content is removed by the high-pass filter. In at least one embodiment, the high pass filter may comprise a fourth-order filter realized as the cascade of two second-order biquad sections. The reason for using a fourth order filter broken into two second order sections is that it allows the filter to retain numerical precision in the presence of finite word length effects, which can happen in both fixed and floating point implementations. An example implementation of such an embodiment may assume a form similar to the following:

    • Two memory locations are allocated, designated as d(k−1) and d(k−2), with each holding a quantity known as a state variable. For each input sample x(k), a quantity d(k) is calculated using the coefficients a1 and a2:

      d(k) = x(k) − a1*d(k−1) − a2*d(k−2)
    • The output y(k) is then computed, based on coefficients b0, b1, and b2, according to:

      y(k) = b0*d(k) + b1*d(k−1) + b2*d(k−2)


The above computation, comprising five multiplies and four adds, is appropriate for a single channel of a second-order biquad section. Accordingly, because the fourth-order high pass filter is realized as a cascade of two second-order biquad sections, a single channel of the fourth order input high pass filter would require ten multiplies, four memory locations, and eight adds.
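

A minimal Python rendering of this computation is shown below. It follows the state-variable equations above directly; the coefficient values themselves are not given here and would come from whatever filter design is chosen:

    class Biquad:
        # Direct Form II second-order section: five multiplies, four adds,
        # and two state variables per input sample, as described above.
        def __init__(self, b0, b1, b2, a1, a2):
            self.b0, self.b1, self.b2 = b0, b1, b2
            self.a1, self.a2 = a1, a2
            self.d1 = 0.0   # d(k-1)
            self.d2 = 0.0   # d(k-2)

        def process(self, x):
            d = x - self.a1 * self.d1 - self.a2 * self.d2       # d(k)
            y = self.b0 * d + self.b1 * self.d1 + self.b2 * self.d2
            self.d2, self.d1 = self.d1, d                        # shift the state
            return y

    def fourth_order_high_pass(samples, section1, section2):
        # Cascade of two second-order sections: ten multiplies, eight adds,
        # and four memory locations per input sample.
        return [section2.process(section1.process(x)) for x in samples]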


The high pass signal from the high pass filter is then filtered, as in 5020, with a first filter module to create a first filtered signal. The first filter module is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the high pass signal. Accordingly, the first filter module may comprise a second order low shelf filter and a second order high shelf filter in at least one embodiment. In at least one embodiment, the first filter module boosts the content above a first frequency by a certain amount, and attenuates the content below a first frequency by a certain amount, before presenting the signal to a compressor or dynamic range controller. This allows the dynamic range controller to trigger and adjust higher frequency material, whereas it is relatively insensitive to lower frequency material.


The first filtered signal from the first filter module is then modulated, as in 5030, with a first compressor. The first compressor may comprise an automatic or dynamic gain controller, or any circuits appropriate for the dynamic compression of an audio signal. Accordingly, the compressor may comprise standard dynamic range compression controls such as threshold, ratio, attack and release. An example implementation of the first compressor may assume a form similar to the following:

    • The compressor first computes an approximation of the signal level, where att represents attack time; rel represents release time; and invThr represents a precomputed inverse of the threshold:

      temp = abs(x(k))
      if temp > level(k−1)
          level(k) = att * (level(k−1) − temp) + temp
      else
          level(k) = rel * (level(k−1) − temp) + temp

    • This level computation is done for each input sample. The ratio of the signal's level to the threshold (the product level * invThr) then determines the next step. If the ratio is less than one, the signal is passed through unaltered. If the ratio exceeds one, a table in the memory may provide a constant that is a function of both invThr and level:

      if (level * invThr < 1)
          output(k) = x(k)
      else
          index = floor(level * invThr)
          if (index > 99)
              index = 99
          gainReduction = table[index]
          output(k) = gainReduction * x(k)
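

Tying the level detector and the table lookup together, a hedged Python sketch of the compressor described above might read as follows; the attack and release coefficients, threshold, and table contents are placeholders rather than values disclosed in the patent:

    import math

    class Compressor:
        # Sketch of the compressor described above: a per-sample level
        # detector with attack/release smoothing, followed by a 100-entry
        # gain-reduction table. All parameter values are placeholders.
        def __init__(self, att, rel, inv_thr, table):
            self.att = att              # attack smoothing coefficient
            self.rel = rel              # release smoothing coefficient
            self.inv_thr = inv_thr      # precomputed inverse of the threshold
            self.table = table          # 100-entry gain reduction table
            self.level = 0.0            # level(k-1)

        def process(self, x):
            temp = abs(x)
            # Approximate the signal level, as in the pseudocode above.
            if temp > self.level:
                self.level = self.att * (self.level - temp) + temp
            else:
                self.level = self.rel * (self.level - temp) + temp
            # Below threshold: pass the sample through unaltered.
            if self.level * self.inv_thr < 1.0:
                return x
            # Above threshold: look up a gain reduction constant.
            index = min(int(math.floor(self.level * self.inv_thr)), 99)
            return self.table[index] * x

    # Illustrative instantiation: 48 kHz sample rate, 5 ms attack, 100 ms
    # release, a -20 dB threshold, and a coarse table that roughly pins the
    # detected level to the threshold (an aggressive, limiter-like ratio).
    fs = 48000.0
    att = math.exp(-1.0 / (0.005 * fs))
    rel = math.exp(-1.0 / (0.100 * fs))
    threshold = 10.0 ** (-20.0 / 20.0)
    table = [1.0 / (i + 1.0) for i in range(100)]
    comp = Compressor(att, rel, 1.0 / threshold, table)

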
The modulated signal from the first compressor is then filtered, as in 5040, with a second filter module to create a second filtered signal. The second filter module is configured to selectively boost or attenuate the gain of select frequency ranges within an audio signal, such as the modulated signal. Accordingly, the second filter module may comprise a second order low shelf filter and a second order high shelf filter in at least one embodiment. In at least one embodiment, the second filter module adjusts the content above a second frequency by a certain amount, and adjusts the content below the second frequency by a certain amount. In at least one embodiment, the second filter module adjusts the content above and below the first specified frequency by fixed amounts, inverse to the amounts that were applied by the first filter module. By way of example, if the first filter module boosted content above a first frequency by +X dB and attenuated content below the first frequency by −Y dB, the second filter module may then attenuate the content above the first frequency by −X dB, and boost the content below the first frequency by +Y dB. In other words, the purpose of the second filter module in one embodiment may be to “undo” the filtering that was applied by the first filter module.


The second filtered signal from the second filter module is then processed, as in 5050, with a first processing module to create a processed signal. The processing module may comprise a gain element configured to adjust the level of the signal. This adjustment, for instance, may be necessary because the peak-to-average ratio was modified by the first compressor. The processing module may comprise a peak/dip module. The peak/dip module may comprise ten cascaded second-order filters in at least one embodiment. The peak/dip module may be used to shape the desired output spectrum of the signal. In at least one embodiment, the first processing module comprises only the peak/dip module. In other embodiments, the first processing module comprises a gain element followed by a peak/dip module.


The processed signal from the first processing module is then split, as in 5060, with a band splitter into a low band signal, a mid band signal, and a high band signal. The band splitter may comprise any circuit or combination of circuits appropriate for splitting a signal into a plurality of signals of different frequency ranges. In at least one embodiment, the band splitter comprises a fourth-order band-splitting bank. In this embodiment, each of the low band, mid band, and high band are yielded as the output of a fourth-order section, realized as the cascade of second-order biquad filters.


The low band signal is modulated, as in 5070, with a low band compressor to create a modulated low band signal. The low band compressor may be configured and/or computationally identical to the first compressor in at least one embodiment. The high band signal is modulated, as in 5080, with a high band compressor to create a modulated high band signal. The high band compressor may be configured and/or computationally identical to the first compressor in at least one embodiment.


The modulated low band signal, mid band signal, and modulated high band signal are then processed, as in 5090, with a second processing module. The second processing module comprises at least a summing module. The summing module is configured to combine a plurality of signals into one composite signal. In at least one embodiment, the summing module may further comprise individual gain controls for each of the incoming signals, such as the modulated low band signal, the mid band signal, and the modulated high band signal. By way of example, an output of the summing module may be calculated by:

out = w0*low + w1*mid + w2*high

The coefficients w0, w1, and w2 represent different gain adjustments. The second processing module may further comprise a second gain element. The second gain element may be the same as the first gain element in at least one embodiment. The second gain element may provide a final gain adjustment. Finally, the second processed signal is transmitted as the output signal.
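

A brief illustrative sketch of the second processing module, with placeholder gain values, follows:

    def second_processing_module(low, mid, high, w0=1.0, w1=1.0, w2=1.0, output_gain=1.0):
        # Summing module with individual band gains, followed by a second
        # gain element; all gain values here are placeholders.
        combined = [w0 * l + w1 * m + w2 * h for l, m, h in zip(low, mid, high)]
        return [output_gain * s for s in combined]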


As diagrammatically represented, FIG. 9 illustrates a block diagram of one method for processing an audio signal with an audio processor 220, which may in at least one embodiment incorporate the components or combinations thereof from the systems 1000 and/or 3000 referenced above. Because the individual components of FIG. 9 have been discussed in detail above, they will not be discussed here. Further, each step of the method in FIG. 9 as detailed below may also be in the form of a code segment directed to at least one embodiment of the present invention, which is stored on a non-transitory computer readable medium, for execution by the audio processor 220 of the present invention.


Accordingly, an input audio signal is first filtered, as in 5010, with a high pass filter. The high pass signal from the high pass filter is then filtered, as in 6010, with a first low shelf filter. The signal from the first low shelf filter is then filtered with a first high shelf filter, as in 6020. The first filtered signal from the first high shelf filter is then modulated with a first compressor, as in 5030. The modulated signal from the first compressor is filtered with a second low shelf filter, as in 6110. The signal from the second low shelf filter is then filtered with a second high shelf filter, as in 6120. The second filtered signal from the second high shelf filter is then gain-adjusted with a first gain element, as in 6210. The signal from the first gain element is further processed with a peak/dip module, as in 6220. The processed signal from the peak/dip module is then split into a low band signal, a mid band signal, and a high band signal, as in 5060. The low band signal is modulated with a low band compressor, as in 5070. The high band signal is modulated with a high band compressor, as in 5080. The modulated low band signal, mid band signal, and modulated high band signal are then combined with a summing module, as in 6310. The combined signal is then gain adjusted with a second gain element in order to create the output signal, as in 6320.


It should be understood that the above steps may be conducted exclusively or nonexclusively and in any order. Further, the physical devices recited in the methods may comprise any apparatus and/or systems described within this document or known to those skilled in the art.


Since many modifications, variations and changes in detail can be made to the described preferred embodiment of the invention, it is intended that all matters in the foregoing description and shown in the accompanying drawings be interpreted as illustrative and not in a limiting sense. Thus, the scope of the invention should be determined by the appended claims and their legal equivalents.


Now that the invention has been described,

Claims
  • 1. An apparatus for generating a head related audio transfer function for a user, said apparatus comprising: an external manifold disposed at least partially on an exterior of said apparatus, said external manifold comprising: an opening disposed along an exterior of said external manifold, said opening in air flow communication with the external environment, a tragus structure disposed to partially enclose said opening, an antihelix structure disposed to partially enclose said tragus structure and said opening, an opening canal in air flow communication with said opening, an internal manifold disposed along an interior of said apparatus, said internal manifold comprising: an auditory canal in air flow communication with said opening canal, a microphone housing attached to an end of said auditory canal, said microphone housing comprising a microphone, an air cavity in air flow communication with said auditory canal; left and right preamplifiers configured to receive an audio signal, an audio processor configured to receive an amplified signal, and a playback module configured to receive a processed signal; said audio processor including at least a high pass filter, a first low shelf filter, a first high shelf filter, a first compressor, a second low shelf filter, a second high shelf filter, a first processing module, a band splitter, a low band compressor, a high band compressor, and a second processing module; said high pass filter configured to filter an amplified signal to create a high pass signal; said first low shelf filter configured to filter said high pass signal to create a first low shelf signal; said first high shelf filter configured to filter said first low shelf signal to create a first filtered signal; said first compressor configured to compress said first filtered signal to create a modulated signal; said second low shelf filter configured to filter said modulated signal to create a second low shelf signal; said second high shelf filter configured to filter said second low shelf signal to create a second filtered signal; said first processing module configured to process said second filtered signal to create a processed signal; said band splitter configured to split said processed signal into a low band signal, a mid band signal and a high band signal; said low band compressor configured to compress said low band signal to create a modulated low band signal, said high band compressor configured to compress said high band signal to create a modulated high band signal; and said second processing module configured to process said modulated low band signal, said mid band signal and said modulated high band signal to create a processed signal.
  • 2. A system for generating a head related audio transfer function (HRTF) for a user, said system comprising: a left HRTF generator structured and disposed to pick up sound signals to the left side of the user; a right HRTF generator structured and disposed to pick up sound signals to the right side of the user; at least one audio processor including at least a high pass filter, a first low shelf filter, a first high shelf filter, a first compressor, a second low shelf filter, a second high shelf filter, a first processing module, a band splitter, a low band compressor, a high band compressor, and a second processing module; said high pass filter configured to filter an amplified signal to create a high pass signal; said first low shelf filter configured to filter said high pass signal to create a first low shelf signal; said first high shelf filter configured to filter said first low shelf signal to create a first filtered signal; said first compressor configured to compress said first filtered signal to create a modulated signal; said second low shelf filter configured to filter said modulated signal to create a second low shelf signal; said second high shelf filter configured to filter said second low shelf signal to create a second filtered signal; said first processing module configured to process said second filtered signal to create a first processed signal; said band splitter configured to split said processed signal into a low band signal, a mid band signal and a high band signal; said low band compressor configured to compress said low band signal to create a modulated low band signal, said high band compressor configured to compress said high band signal to create a modulated high band signal; said second processing module configured to process said modulated low band signal, said mid band signal and said modulated high band signal to create a second processed signal; a left playback module structured and configured to relay positional audio data to the user's left ear; and a right playback module structured and configured to relay positional audio data to the user's right ear.
  • 3. The system as recited in claim 2 wherein each of said left and right HRTF generators comprise the apparatus of claim 1.
  • 4. The system as recited in claim 2 further comprising a left preamplifier structured to enhance the sound signals of the left HRTF generator, creating an amplified signal.
  • 5. The system as recited in claim 4 further comprising a right preamplifier structured to enhance the sound signals of the right HRTF generator, creating an amplified signal.
  • 6. The system as recited in claim 2 wherein said at least one audio processor further comprises a volume control for adjusting an input volume picked up from each of the left and right HRTF generators.
  • 7. The system as recited in claim 2 wherein said at least one audio processor further comprises a post-amplifier for adjusting an output volume from said at least one audio processor.
  • 8. A system as recited in claim 2 wherein said second low shelf filter is configured to filter signals between 100 Hz and 3000 Hz, with an attenuation of between −5 dB to −20 dB.
  • 9. A system as recited in claim 2 wherein said second high shelf filter is configured to filter signals between 100 Hz and 3000 Hz, with a boost of between +5 dB to +20 dB.
  • 10. A system as recited in claim 2 wherein said first processing module comprises a peak/dip module configured to process said second filtered signal to create said first processed signal.
  • 11. A system as recited in claim 2 wherein said first processing module comprises:
    a first gain element configured to adjust a gain level of said second filtered signal to create a first gain signal,
    a peak/dip module configured to process said first gain signal to create said first processed signal.
  • 12. A system as recited in claim 2 wherein said second processing module comprises a summing module configured to combine said modulated low band signal, said mid band signal, and said modulated high band signal to create an output signal.
  • 13. A system as recited in claim 2 wherein said second processing module comprises:
    a summing module configured to combine said modulated low band signal, said mid band signal, and said modulated high band signal to create a combined signal,
    a second gain element configured to adjust a gain level of the combined signal to create an output signal.
  • 14. A system as recited in claim 2 wherein said high pass filter comprises a fourth order high pass filter.
  • 15. A system as recited in claim 2 wherein said first low shelf filter comprises a second order low shelf filter.
  • 16. A system as recited in claim 2 wherein said first high shelf filter comprises a second order high shelf filter.
  • 17. A system as recited in claim 2 wherein said second low shelf filter comprises a second order low shelf filter.
  • 18. A system as recited in claim 2 wherein said second high shelf filter comprises a second order high shelf filter.
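
The second-order low shelf, high shelf, and peak/dip elements recited in claims 10, 11, and 15 through 18 can be illustrated with standard audio-EQ biquad designs. The sketch below is a minimal Python/SciPy illustration, assuming Robert Bristow-Johnson cookbook coefficient formulas with a shelf slope of 1; the corner frequencies and gains in the example are illustrative values chosen from inside the 100 Hz to 3000 Hz and 5 dB to 20 dB ranges of claims 8 and 9, not parameters taken from the specification.

import numpy as np
from scipy import signal

def low_shelf(f0, gain_db, fs):
    # Second-order low shelf biquad (RBJ cookbook form, shelf slope S = 1).
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2.0 * np.sqrt(2.0)
    cosw = np.cos(w0)
    b = [A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
         2 * A * ((A - 1) - (A + 1) * cosw),
         A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)]
    a = [(A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
         -2 * ((A - 1) + (A + 1) * cosw),
         (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha]
    return np.array(b) / a[0], np.array(a) / a[0]

def high_shelf(f0, gain_db, fs):
    # Second-order high shelf biquad (RBJ cookbook form, shelf slope S = 1).
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2.0 * np.sqrt(2.0)
    cosw = np.cos(w0)
    b = [A * ((A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
         -2 * A * ((A - 1) + (A + 1) * cosw),
         A * ((A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha)]
    a = [(A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
         2 * ((A - 1) - (A + 1) * cosw),
         (A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha]
    return np.array(b) / a[0], np.array(a) / a[0]

def peaking(f0, gain_db, q, fs):
    # Peaking biquad: one common way to realize a peak/dip module.
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    cosw = np.cos(w0)
    b = [1 + alpha * A, -2 * cosw, 1 - alpha * A]
    a = [1 + alpha / A, -2 * cosw, 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

# Example settings inside the ranges of claims 8 and 9 (the exact values are assumptions):
fs = 48000
b_ls, a_ls = low_shelf(300.0, -10.0, fs)    # second low shelf: cut between -5 dB and -20 dB
b_hs, a_hs = high_shelf(2500.0, 10.0, fs)   # second high shelf: boost between +5 dB and +20 dB

x = np.random.randn(fs)                     # one second of placeholder audio
y = signal.lfilter(b_hs, a_hs, signal.lfilter(b_ls, a_ls, x))

Here lfilter applies each biquad in direct form; a fixed-point or block-based realization of the same transfer functions would be equally consistent with the claim language.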
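
The serial front end of claims 1 and 2 (fourth-order high pass filter, first low shelf, first high shelf, first compressor, second low shelf, second high shelf, first processing module) can likewise be sketched as a straightforward filter cascade. The following sketch assumes a Butterworth response for the fourth-order high pass of claim 14, reuses the low_shelf, high_shelf, and peaking helpers from the preceding sketch, and uses hypothetical corner frequencies, threshold, ratio, and time constants; none of those specific values appear in the claims.

import numpy as np
from scipy import signal

fs = 48000

# Fourth-order high pass filter (claim 14); the Butterworth response and 60 Hz corner are assumptions.
sos_hp = signal.butter(4, 60.0, btype='highpass', fs=fs, output='sos')

def simple_compressor(x, fs, threshold_db=-20.0, ratio=4.0, attack_ms=5.0, release_ms=50.0):
    # Feed-forward compressor with a one-pole envelope follower (illustrative only).
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel           # fast attack, slower release
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = 0.0 if over <= 0.0 else -over * (1.0 - 1.0 / ratio)
        out[n] = s * 10.0 ** (gain_db / 20.0)
    return out

def front_end(x):
    # Serial chain of claim 1 through the first processing module (parameter values are examples).
    y = signal.sosfilt(sos_hp, x)                           # high pass signal
    y = signal.lfilter(*low_shelf(250.0, -6.0, fs), y)      # first low shelf signal
    y = signal.lfilter(*high_shelf(6000.0, 6.0, fs), y)     # first filtered signal
    y = simple_compressor(y, fs)                            # modulated signal (first compressor)
    y = signal.lfilter(*low_shelf(300.0, -10.0, fs), y)     # second low shelf signal
    y = signal.lfilter(*high_shelf(2500.0, 10.0, fs), y)    # second filtered signal
    y = signal.lfilter(*peaking(1200.0, 4.0, 1.0, fs), y)   # first processing module (peak/dip)
    return y

In a real-time device these stages would run block by block on the amplified microphone signal before being handed to the band splitter stage sketched next.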
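
The band splitter, the low band and high band compressors, and the summing module and second gain element of the second processing module (claims 12 and 13) can be sketched as a three-way split-and-recombine stage. This sketch assumes Butterworth crossovers at 200 Hz and 4 kHz and reuses the simple_compressor helper from the preceding sketch; the crossover points and compressor settings are assumptions, not values recited in the claims.

from scipy import signal

fs = 48000
f_lo, f_hi = 200.0, 4000.0   # assumed crossover frequencies

sos_low = signal.butter(4, f_lo, btype='lowpass', fs=fs, output='sos')
sos_mid = signal.butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
sos_high = signal.butter(4, f_hi, btype='highpass', fs=fs, output='sos')

def band_split_and_recombine(x, output_gain_db=0.0):
    # Band splitter, low/high band compressors, summing module, and second gain element.
    low = signal.sosfilt(sos_low, x)      # low band signal
    mid = signal.sosfilt(sos_mid, x)      # mid band signal (passed through uncompressed)
    high = signal.sosfilt(sos_high, x)    # high band signal
    low = simple_compressor(low, fs, threshold_db=-25.0, ratio=3.0)    # modulated low band signal
    high = simple_compressor(high, fs, threshold_db=-30.0, ratio=4.0)  # modulated high band signal
    combined = low + mid + high                         # summing module (claim 12)
    return combined * 10.0 ** (output_gain_db / 20.0)   # second gain element (claim 13)

# Chaining the two stages, band_split_and_recombine(front_end(x)), approximates the full
# processing path from the amplified signal to the processed signal delivered to a playback module.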
CLAIM OF PRIORITY

The present application is a continuation-in-part of a previously filed, now pending application having Ser. No. 15/478,696 and a filing date of Apr. 4, 2017, which is a continuation of a previously filed application having Ser. No. 14/485,145 and a filing date of Sep. 12, 2014, which matured into U.S. Pat. No. 9,615,189, and which is based on, and claims priority under 35 U.S.C. Section 119(e) to, a provisional patent application having Ser. No. 62/035,025 and a filing date of Aug. 8, 2014, all of which are explicitly incorporated herein by reference in their entireties. The present application is also a continuation-in-part of a previously filed, now pending application having Ser. No. 15/163,353 and a filing date of May 24, 2016, which is a continuation-in-part of Ser. No. 14/059,948, which matured into U.S. Pat. No. 9,348,904, and which is a continuation-in-part of Ser. No. 12/648,007, filed on Dec. 28, 2009, which matured into U.S. Pat. No. 8,565,449, and which is a continuation-in-part of Ser. No. 11/947,301, filed Nov. 29, 2007, which matured into U.S. Pat. No. 8,160,274, and which claims priority to U.S. Provisional Application No. 60/861,711, filed Nov. 30, 2006, each of which is explicitly incorporated herein by reference in its entirety. Further, Ser. No. 11/947,301 is a continuation-in-part of Ser. No. 11/703,216, filed Feb. 7, 2007, which claims priority to U.S. Provisional Application No. 60/765,722, filed Feb. 7, 2006, each of which is explicitly incorporated herein by reference in its entirety.

US Referenced Citations (361)
Number Name Date Kind
2643729 McCracken Jun 1953 A
2755336 Zener et al. Jul 1956 A
3396241 Anderson Aug 1968 A
3430007 Thielen Feb 1969 A
3662076 Gordon et al. May 1972 A
3795876 Takashi et al. Mar 1974 A
3813687 Geil May 1974 A
4162462 Endoh et al. Jul 1979 A
4184047 Langford Jan 1980 A
4215583 Botsco et al. Aug 1980 A
4218950 Uetrecht Aug 1980 A
4226533 Snowman Oct 1980 A
4257325 Bertagni Mar 1981 A
4277367 Madsen et al. Jul 1981 A
4286455 Ophir et al. Sep 1981 A
4331021 Lopez et al. May 1982 A
4353035 Schröder Oct 1982 A
4356558 Owen et al. Oct 1982 A
4363007 Haramoto et al. Dec 1982 A
4392027 Bock Jul 1983 A
4399474 Coleman, Jr. Aug 1983 A
4412100 Orban Oct 1983 A
4458362 Berkovitz et al. Jul 1984 A
4489280 Bennett, Jr. et al. Dec 1984 A
4517415 Laurence May 1985 A
4538297 Waller Aug 1985 A
4549289 Schwartz et al. Oct 1985 A
4584700 Scholz Apr 1986 A
4602381 Cugnini et al. Jul 1986 A
4612665 Inami et al. Sep 1986 A
4641361 Rosback Feb 1987 A
4677645 Kaniwa et al. Jun 1987 A
4696044 Waller, Jr. Sep 1987 A
4701953 White Oct 1987 A
4704726 Gibson Nov 1987 A
4715559 Fuller Dec 1987 A
4739514 Short et al. Apr 1988 A
4815142 Imreh Mar 1989 A
4856068 Quatieri, Jr. et al. Aug 1989 A
4887299 Cummins et al. Dec 1989 A
4997058 Bertagni Mar 1991 A
5007707 Bertagni Apr 1991 A
5073936 Gurike et al. Dec 1991 A
5133015 Scholz Jul 1992 A
5195141 Jang Mar 1993 A
5210704 Husseiny May 1993 A
5210806 Kihara et al. May 1993 A
5226076 Baumhauer, Jr. et al. Jul 1993 A
5239997 Guarino et al. Aug 1993 A
5355417 Burdisso et al. Oct 1994 A
5361381 Short Nov 1994 A
5384856 Kyouno et al. Jan 1995 A
5420929 Geddes et al. May 1995 A
5425107 Bertagni et al. Jun 1995 A
5463695 Werrbach Oct 1995 A
5465421 McCormick et al. Nov 1995 A
5467775 Callahan et al. Nov 1995 A
5473214 Hildebrand Dec 1995 A
5515444 Burdisso et al. May 1996 A
5539835 Bertagni et al. Jul 1996 A
5541866 Sato et al. Jul 1996 A
5572443 Emoto et al. Nov 1996 A
5615275 Bertagni Mar 1997 A
5617480 Ballard et al. Apr 1997 A
5638456 Conley et al. Jun 1997 A
5640685 Komoda Jun 1997 A
5671287 Gerzon Sep 1997 A
5693917 Bertagni et al. Dec 1997 A
5699438 Smith et al. Dec 1997 A
5727074 Hildebrand Mar 1998 A
5737432 Werrbach Apr 1998 A
5812684 Mark Sep 1998 A
5828768 Eatwell et al. Oct 1998 A
5832097 Armstrong et al. Nov 1998 A
5838805 Warnaka et al. Nov 1998 A
5848164 Levine Dec 1998 A
5861686 Lee Jan 1999 A
5862461 Yoshizawa et al. Jan 1999 A
5872852 Dougherty Feb 1999 A
5883339 Greenberger Mar 1999 A
5901231 Parrella et al. May 1999 A
5990955 Koz Nov 1999 A
6002777 Grasfield et al. Dec 1999 A
6058196 Heron May 2000 A
6078670 Beyer Jun 2000 A
6093144 Jaeger et al. Jul 2000 A
6108431 Bachler Aug 2000 A
6195438 Yumoto et al. Feb 2001 B1
6201873 Dal Farra Mar 2001 B1
6202601 Ouellette et al. Mar 2001 B1
6208237 Saiki et al. Mar 2001 B1
6220866 Amend et al. Apr 2001 B1
6244376 Granzotto Jun 2001 B1
6263354 Gandhi Jul 2001 B1
6285767 Klayman Sep 2001 B1
6292511 Goldston et al. Sep 2001 B1
6317117 Goff Nov 2001 B1
6318797 Böhm et al. Nov 2001 B1
6332029 Azima et al. Dec 2001 B1
6343127 Billoud Jan 2002 B1
6518852 Derrick Feb 2003 B1
6529611 Kobayashi et al. Mar 2003 B2
6535846 Shashoua Mar 2003 B1
6570993 Fukuyama May 2003 B1
6587564 Cusson Jul 2003 B1
6618487 Azima et al. Sep 2003 B1
6661897 Smith Dec 2003 B2
6661900 Allred et al. Dec 2003 B1
6772114 Sluijter et al. Aug 2004 B1
6839438 Riegelsberger et al. Jan 2005 B1
6847258 Ishida et al. Jan 2005 B2
6871525 Withnall et al. Mar 2005 B2
6907391 Bellora et al. Jun 2005 B2
6999826 Zhou et al. Feb 2006 B1
7006653 Guenther Feb 2006 B2
7016746 Wiser et al. Mar 2006 B2
7024001 Nakada Apr 2006 B1
7058463 Ruha et al. Jun 2006 B1
7123728 King et al. Oct 2006 B2
7236602 Gustavsson Jun 2007 B2
7254243 Bongiovi Aug 2007 B2
7266205 Miller Sep 2007 B2
7269234 Klingenbrunn et al. Sep 2007 B2
7274795 Bongiovi Sep 2007 B2
7430300 Vosburgh et al. Sep 2008 B2
7519189 Bongiovi Apr 2009 B2
7577263 Tourwe Aug 2009 B2
7613314 Camp, Jr. Nov 2009 B2
7676048 Tsutsui Mar 2010 B2
7711129 Lindahl May 2010 B2
7711442 Ryle et al. May 2010 B2
7747447 Christensen et al. Jun 2010 B2
7764802 Oliver Jul 2010 B2
7778718 Janke et al. Aug 2010 B2
7916876 Helsloot Mar 2011 B1
8068621 Okabayashi et al. Nov 2011 B2
8144902 Johnston Mar 2012 B2
8160274 Bongiovi Apr 2012 B2
8175287 Ueno et al. May 2012 B2
8218789 Bharitkar et al. Jul 2012 B2
8229136 Bongiovi Jul 2012 B2
8284955 Bongiovi et al. Oct 2012 B2
8385864 Dickson et al. Feb 2013 B2
8462963 Bongiovi Jun 2013 B2
8472642 Bongiovi Jun 2013 B2
8503701 Miles et al. Aug 2013 B2
8565449 Bongiovi Oct 2013 B2
8577676 Muesch Nov 2013 B2
8619998 Walsh et al. Dec 2013 B2
8705765 Bongiovi Apr 2014 B2
8750538 Avendano et al. Jun 2014 B2
8811630 Burlingame Aug 2014 B2
8879743 Mitra Nov 2014 B1
9195433 Bongiovi et al. Nov 2015 B2
9264004 Bongiovi et al. Feb 2016 B2
9275556 East et al. Mar 2016 B1
9276542 Bongiovi et al. Mar 2016 B2
9281794 Bongiovi et al. Mar 2016 B1
9344828 Bongiovi et al. May 2016 B2
9348904 Bongiovi et al. May 2016 B2
9350309 Bongiovi et al. May 2016 B2
9397629 Bongiovi et al. Jul 2016 B2
9398394 Bongiovi et al. Jul 2016 B2
9413321 Bongiovi et al. Aug 2016 B2
9564146 Bongiovi et al. Feb 2017 B2
9615189 Copt et al. Apr 2017 B2
9615813 Copt et al. Apr 2017 B2
9621994 Bongiovi et al. Apr 2017 B1
9638672 Butera, III et al. May 2017 B2
9741355 Bongiovi et al. Aug 2017 B2
9793872 Bongiovi et al. Oct 2017 B2
9883318 Bongiovi et al. Jan 2018 B2
9906858 Bongiovi et al. Feb 2018 B2
9906867 Bongiovi et al. Feb 2018 B2
9998832 Bongiovi et al. Jun 2018 B2
1006947 Bongiovi et al. Sep 2018 A1
1015833 Bongiovi et al. Dec 2018 A1
10158337 Bongiovi et al. Dec 2018 B2
20010008535 Lanigan Jul 2001 A1
20010043704 Schwartz Nov 2001 A1
20010046304 Rast Nov 2001 A1
20020057808 Goldstein May 2002 A1
20020071481 Goodings Jun 2002 A1
20020094096 Paritsky et al. Jul 2002 A1
20020170339 Passi et al. Nov 2002 A1
20030016838 Paritsky et al. Jan 2003 A1
20030023429 Claesson et al. Jan 2003 A1
20030035555 King et al. Feb 2003 A1
20030043940 Janky et al. Mar 2003 A1
20030112088 Bizjak Jun 2003 A1
20030138117 Goff Jul 2003 A1
20030142841 Wiegand Jul 2003 A1
20030164546 Giger Sep 2003 A1
20030179891 Rabinowitz et al. Sep 2003 A1
20030216907 Thomas Nov 2003 A1
20040003805 Ono et al. Jan 2004 A1
20040005063 Klayman Jan 2004 A1
20040008851 Hagiwara Jan 2004 A1
20040022400 Magrath Feb 2004 A1
20040042625 Brown Mar 2004 A1
20040044804 Mac Farlane Mar 2004 A1
20040086144 Kallen May 2004 A1
20040103588 Allaei Jun 2004 A1
20040105556 Grove Jun 2004 A1
20040138769 Akiho Jul 2004 A1
20040146170 Zint Jul 2004 A1
20040189264 Matsuura et al. Sep 2004 A1
20040208646 Choudhary et al. Oct 2004 A1
20050013453 Cheung Jan 2005 A1
20050090295 Ali et al. Apr 2005 A1
20050117771 Vosburgh Jun 2005 A1
20050129248 Kraemer et al. Jun 2005 A1
20050175185 Korner Aug 2005 A1
20050201572 Lindahl Sep 2005 A1
20050249272 Kirkeby et al. Nov 2005 A1
20050254564 Tsutsui Nov 2005 A1
20060034467 Sleboda et al. Feb 2006 A1
20060045294 Smyth Mar 2006 A1
20060064301 Aguilar et al. Mar 2006 A1
20060098827 Paddock et al. May 2006 A1
20060115107 Vincent et al. Jun 2006 A1
20060126851 Yuen et al. Jun 2006 A1
20060126865 Blarney et al. Jun 2006 A1
20060138285 Oleski et al. Jun 2006 A1
20060140319 Eldredge et al. Jun 2006 A1
20060153281 Karlsson Jul 2006 A1
20060189841 Pluvinage Aug 2006 A1
20060285696 Houtsma Dec 2006 A1
20060291670 King et al. Dec 2006 A1
20070010132 Nelson Jan 2007 A1
20070030994 Ando et al. Feb 2007 A1
20070056376 King Mar 2007 A1
20070106179 Bagha et al. May 2007 A1
20070119421 Lewis et al. May 2007 A1
20070150267 Honma et al. Jun 2007 A1
20070165872 Bridger et al. Jul 2007 A1
20070173990 Smith et al. Jul 2007 A1
20070177459 Behn Aug 2007 A1
20070206643 Egan Sep 2007 A1
20070223713 Gunness Sep 2007 A1
20070223717 Boersma Sep 2007 A1
20070253577 Yen et al. Nov 2007 A1
20080031462 Walsh et al. Feb 2008 A1
20080040116 Cronin Feb 2008 A1
20080049948 Christoph Feb 2008 A1
20080069385 Revit Mar 2008 A1
20080093157 Drummond et al. Apr 2008 A1
20080112576 Bongiovi May 2008 A1
20080123870 Stark May 2008 A1
20080123873 Bjorn-Josefsen et al. May 2008 A1
20080137876 Kassan et al. Jun 2008 A1
20080137881 Bongiovi Jun 2008 A1
20080165989 Seil et al. Jul 2008 A1
20080181424 Schulein et al. Jul 2008 A1
20080212798 Zartarian Sep 2008 A1
20080219459 Bongiovi et al. Sep 2008 A1
20080255855 Lee et al. Oct 2008 A1
20090022328 Neugebauer et al. Jan 2009 A1
20090054109 Hunt Feb 2009 A1
20090062946 Bongiovi et al. Mar 2009 A1
20090080675 Smirnov et al. Mar 2009 A1
20090086996 Bongiovi et al. Apr 2009 A1
20090116652 Kirkeby et al. May 2009 A1
20090211838 Bilan Aug 2009 A1
20090282810 Leone et al. Nov 2009 A1
20090290725 Huang Nov 2009 A1
20090296959 Bongiovi Dec 2009 A1
20100045374 Wu et al. Feb 2010 A1
20100166222 Bongiovi Jul 2010 A1
20100246832 Villemoes et al. Sep 2010 A1
20100256843 Bergstein et al. Oct 2010 A1
20100278364 Berg Nov 2010 A1
20100303278 Sahyoun Dec 2010 A1
20110002467 Nielsen Jan 2011 A1
20110007907 Park et al. Jan 2011 A1
20110013736 Tsukamoto et al. Jan 2011 A1
20110065408 Kenington et al. Mar 2011 A1
20110087346 Larsen et al. Apr 2011 A1
20110096936 Gass Apr 2011 A1
20110125063 Shalon et al. May 2011 A1
20110194712 Potard Aug 2011 A1
20110230137 Hicks et al. Sep 2011 A1
20110257833 Trush et al. Oct 2011 A1
20110280411 Cheah et al. Nov 2011 A1
20120008798 Ong Jan 2012 A1
20120014553 Bonanno Jan 2012 A1
20120020502 Adams Jan 2012 A1
20120022842 Amadu Jan 2012 A1
20120063611 Kimura Mar 2012 A1
20120089045 Seidl et al. Apr 2012 A1
20120099741 Gotoh et al. Apr 2012 A1
20120170759 Yuen et al. Jul 2012 A1
20120170795 Sancisi et al. Jul 2012 A1
20120189131 Ueno et al. Jul 2012 A1
20120213034 Imran Aug 2012 A1
20120213375 Mahabub et al. Aug 2012 A1
20120300949 Rauhala Nov 2012 A1
20120302920 Bridger et al. Nov 2012 A1
20120329904 Suita et al. Dec 2012 A1
20130083958 Katz et al. Apr 2013 A1
20130121507 Bongiovi et al. May 2013 A1
20130129106 Sapiejewski May 2013 A1
20130162908 Son et al. Jun 2013 A1
20130163767 Gauger, Jr. et al. Jun 2013 A1
20130163783 Burlingame Jun 2013 A1
20130169779 Pedersen Jul 2013 A1
20130220274 Deshpande et al. Aug 2013 A1
20130227631 Sharma et al. Aug 2013 A1
20130242191 Leyendecker Sep 2013 A1
20130251175 Bongiovi et al. Sep 2013 A1
20130288596 Suzuki et al. Oct 2013 A1
20130338504 Demos et al. Dec 2013 A1
20140067236 Henry et al. Mar 2014 A1
20140100682 Bongiovi Apr 2014 A1
20140112497 Bongiovi Apr 2014 A1
20140119583 Valentine et al. May 2014 A1
20140126734 Gauger, Jr. et al. May 2014 A1
20140153730 Habboushe et al. Jun 2014 A1
20140153765 Gan et al. Jun 2014 A1
20140185829 Bongiovi Jul 2014 A1
20140261301 Leone Sep 2014 A1
20140369504 Bongiovi Dec 2014 A1
20140369521 Bongiovi et al. Dec 2014 A1
20140379355 Hosokawsa Dec 2014 A1
20150039250 Rank Feb 2015 A1
20150194158 Oh et al. Jul 2015 A1
20150201272 Wong Jul 2015 A1
20150208163 Hallberg et al. Jul 2015 A1
20150215720 Carroll Jul 2015 A1
20150297169 Copt et al. Oct 2015 A1
20150297170 Copt et al. Oct 2015 A1
20150339954 East et al. Nov 2015 A1
20160036402 Bongiovi et al. Feb 2016 A1
20160044436 Copt et al. Feb 2016 A1
20160209831 Pal Jul 2016 A1
20160225288 East et al. Aug 2016 A1
20160240208 Bongiovi et al. Aug 2016 A1
20160258907 Butera, III et al. Sep 2016 A1
20160344361 Bongiovi et al. Nov 2016 A1
20160370285 Jang et al. Dec 2016 A1
20170020491 Ogawa Jan 2017 A1
20170033755 Bongiovi et al. Feb 2017 A1
20170041732 Bongiovi et al. Feb 2017 A1
20170122915 Vogt et al. May 2017 A1
20170188989 Copt et al. Jul 2017 A1
20170193980 Bongiovi et al. Jul 2017 A1
20170263158 East et al. Sep 2017 A1
20170272887 Copt et al. Sep 2017 A1
20170289695 Bongiovi et al. Oct 2017 A1
20170345408 Hong et al. Nov 2017 A1
20180077482 Yuan Mar 2018 A1
20180091109 Bongiovi et al. Mar 2018 A1
20180102133 Bongiovi et al. Apr 2018 A1
20180139565 Norris et al. May 2018 A1
20180226064 Seagriff et al. Aug 2018 A1
20190020950 Bongiovi et al. Jan 2019 A1
20190069114 Tai et al. Feb 2019 A1
20190075388 Schrader et al. Mar 2019 A1
20190318719 Bongiovi et al. Oct 2019 A1
20190387340 Audfray et al. Dec 2019 A1
20200053503 Butera, III et al. Feb 2020 A1
Foreign Referenced Citations (146)
Number Date Country
2005274099 Oct 2010 AU
20070325096 Apr 2012 AU
2012202127 Jul 2014 AU
2533221 Jun 1995 CA
2161412 Apr 2000 CA
2854086 Dec 2018 CA
1139842 Jan 1997 CN
1173268 Feb 1998 CN
1221528 Jun 1999 CN
1357136 Jul 2002 CN
1391780 Jan 2003 CN
1879449 Dec 2006 CN
1910816 Feb 2007 CN
101163354 Apr 2008 CN
101277331 Oct 2008 CN
101518083 Aug 2009 CN
101536541 Sep 2009 CN
101720557 Jun 2010 CN
101946526 Jan 2011 CN
101964189 Feb 2011 CN
102652337 Aug 2012 CN
102754151 Oct 2012 CN
102822891 Dec 2012 CN
102855882 Jan 2013 CN
103004237 Mar 2013 CN
203057339 Jul 2013 CN
103247297 Aug 2013 CN
103262577 Aug 2013 CN
103348697 Oct 2013 CN
103455824 Dec 2013 CN
0206746 Aug 1992 EP
0541646 Jan 1995 EP
0580579 Jun 1998 EP
0698298 Feb 2000 EP
0932523 Jun 2000 EP
0666012 Nov 2002 EP
2814267 Oct 2016 EP
2249788 Oct 1998 ES
2219949 Aug 1999 ES
2003707 Mar 1979 GB
2089986 Jun 1982 GB
2320393 Dec 1996 GB
P0031074 Jun 2012 ID
198914 Jul 2014 IS
7106876 Apr 1995 JP
4787255 Dec 1995 JP
2005500768 Jan 2005 JP
2011059714 Mar 2011 JP
4787255 Jul 2011 JP
1020040022442 Mar 2004 KR
101503541 Mar 2015 KR
553744 Jan 2009 NZ
574141 Apr 2010 NZ
557201 May 2012 NZ
2483363 May 2013 RU
1319288 Jun 1987 SU
401713 Aug 2000 TW
WO 9219080 Oct 1992 WO
WO 1993011637 Jun 1993 WO
WO 9321743 Oct 1993 WO
WO 9427331 Nov 1994 WO
WO 9514296 May 1995 WO
WO 9531805 Nov 1995 WO
WO 9535628 Dec 1995 WO
WO 9601547 Jan 1996 WO
WO 9611465 Apr 1996 WO
WO 9708847 Mar 1997 WO
WO 9709698 Mar 1997 WO
WO 9709840 Mar 1997 WO
WO 9709841 Mar 1997 WO
WO 9709842 Mar 1997 WO
WO 9709843 Mar 1997 WO
WO 9709844 Mar 1997 WO
WO 9709845 Mar 1997 WO
WO 9709846 Mar 1997 WO
WO 9709848 Mar 1997 WO
WO 9709849 Mar 1997 WO
WO 9709852 Mar 1997 WO
WO 9709853 Mar 1997 WO
WO 9709854 Mar 1997 WO
WO 9709855 Mar 1997 WO
WO 9709856 Mar 1997 WO
WO 9709857 Mar 1997 WO
WO 9709858 Mar 1997 WO
WO 9709859 Mar 1997 WO
WO 9709861 Mar 1997 WO
WO 9709862 Mar 1997 WO
WO 9717818 May 1997 WO
WO 9717820 May 1997 WO
WO 9813942 Apr 1998 WO
WO 9816409 Apr 1998 WO
WO 9828942 Jul 1998 WO
WO 9831188 Jul 1998 WO
WO 9834320 Aug 1998 WO
WO 9839947 Sep 1998 WO
WO 9842536 Oct 1998 WO
WO 9843464 Oct 1998 WO
WO 9852381 Nov 1998 WO
WO 9852383 Nov 1998 WO
WO 9853638 Nov 1998 WO
WO 9902012 Jan 1999 WO
WO 9908479 Feb 1999 WO
WO 9911490 Mar 1999 WO
WO 9912387 Mar 1999 WO
WO 9913684 Mar 1999 WO
WO 9921397 Apr 1999 WO
WO 9935636 Jul 1999 WO
WO 9935883 Jul 1999 WO
WO 9937121 Jul 1999 WO
WO 9938155 Jul 1999 WO
WO 9941939 Aug 1999 WO
WO 9952322 Oct 1999 WO
WO 9952324 Oct 1999 WO
WO 9956497 Nov 1999 WO
WO 9962294 Dec 1999 WO
WO 9965274 Dec 1999 WO
WO 0001264 Jan 2000 WO
WO 0002417 Jan 2000 WO
WO 0007408 Feb 2000 WO
WO 0007409 Feb 2000 WO
WO 0013464 Mar 2000 WO
WO 0015003 Mar 2000 WO
WO 0033612 Jun 2000 WO
WO 0033613 Jun 2000 WO
WO 03104924 Dec 2003 WO
WO 2006020427 Feb 2006 WO
WO 2007092420 Aug 2007 WO
WO 2008067454 Jun 2008 WO
WO 2009070797 Jun 2009 WO
WO 2009102750 Aug 2009 WO
WO 2009114746 Sep 2009 WO
WO 2009155057 Dec 2009 WO
WO 2010027705 Mar 2010 WO
WO 2010051354 May 2010 WO
WO 2011081965 Jul 2011 WO
WO 2012134399 Oct 2012 WO
WO 2013055394 Apr 2013 WO
WO 2013076223 May 2013 WO
WO 2014201103 Dec 2014 WO
WO 2015061393 Apr 2015 WO
WO 2015077681 May 2015 WO
WO 2015161034 Oct 2015 WO
WO 2016019263 Feb 2016 WO
WO 2016022422 Feb 2016 WO
WO 2016144861 Sep 2016 WO
2020028833 Feb 2020 WO
Non-Patent Literature Citations (2)
Entry
NovaSound Int., http://www.novasoundint.com/new_page_t.htm, 2004.
Sepe, Michael. "Density & Molecular Weight in Polyethylene." Plastics Technology. Gardner Business Media, Inc., May 29, 2012. Web. http://ptonline.com/columns/density-molecular-weight-in-polyethylene.
Peus, Stephan, et al. "Natürliches Hören mit künstlichem Kopf", Funkschau — Zeitschrift für.
Related Publications (1)
Number Date Country
20180213343 A1 Jul 2018 US
Provisional Applications (3)
Number Date Country
62035025 Aug 2014 US
60861711 Nov 2006 US
60765722 Feb 2006 US
Continuations (2)
Number Date Country
Parent 14485145 Sep 2014 US
Child 15478696 US
Parent 15864190 Jan 2018 US
Child 15478696 US
Continuation in Parts (6)
Number Date Country
Parent 15478696 Apr 2017 US
Child 15864190 US
Parent 15163353 May 2016 US
Child 15864190 US
Parent 14059948 Oct 2013 US
Child 15163353 US
Parent 12648007 Dec 2009 US
Child 14059948 US
Parent 11947301 Nov 2007 US
Child 12648007 US
Parent 11703216 Feb 2007 US
Child 11947301 US