This document relates generally to hearing device systems and more particularly to spatially differentiated noise reduction for hearing device applications.
Examples of hearing devices, also referred to herein as hearing assistance devices or hearing instruments, include both prescriptive devices and non-prescriptive devices. Specific examples of hearing devices include, but are not limited to, hearing aids, headphones, assisted listening devices, and earbuds.
Hearing aids are used to assist patients suffering hearing loss by transmitting amplified sounds to ear canals. In one example, a hearing aid is worn in and/or around a patient's ear. Hearing aids may include processors and electronics that improve the listening experience for a specific wearer or in a specific acoustic environment.
Hearing and understanding speech in a noisy environment can be challenging, especially for a hearing-impaired person. Improved methods of noise reduction for hearing devices are needed.
Disclosed herein, among other things, are systems and methods for spatially differentiated noise reduction for hearing device applications. A method includes sensing sound signals with a hearing device. A front-facing directional beam and a rear-facing directional beam are generated using the sensed sound signals, and the front-facing directional beam and the rear-facing directional beam are combined using a directionality algorithm to obtain an output directional beam. The front-facing directional beam is compared to the rear-facing directional beam to determine a front-rear differential. Responsive to a determination that the front-rear differential indicates that the front-facing directional beam is dominant, an amount of noise reduction of the output directional beam is reduced. Responsive to a determination that the front-rear differential indicates that the rear-facing directional beam is dominant, the amount of noise reduction of the output directional beam is increased.
Various aspects include a method for spatially differentiated noise reduction. The method includes sensing sound signals with a hearing device. A front-facing directional beam and a rear-facing directional beam are generated using the sensed sound signals, and the front-facing directional beam and the rear-facing directional beam are combined using a directionality algorithm to obtain an output directional beam. The output directional beam is compared to the rear-facing directional beam to determine an output-rear differential. Responsive to a determination that the output-rear differential indicates that the output directional beam is dominant, an amount of noise reduction of the output directional beam is reduced. Responsive to a determination that the output-rear differential indicates that the rear-facing directional beam is dominant, the amount of noise reduction of the output directional beam is increased.
Various aspects of the present subject matter include a hearing device including two or more microphones configured to sense sound signals, and one or more processors. The one or more processors are programmed to generate a front-facing directional beam and a rear-facing directional beam using outputs of the two or more microphones, and combine the front-facing directional beam and the rear-facing directional beam using a directionality algorithm to obtain an output directional beam. The front-facing directional beam or the output directional beam is compared to the rear-facing directional beam to determine a front-rear differential. Responsive to a determination that the front-rear differential indicates that the rear-facing directional beam is dominant, an amount of noise reduction of the output directional beam is increased. Responsive to a determination that the front-rear differential indicates that the rear-facing directional beam is not dominant, the amount of noise reduction of the output directional beam is reduced.
This Summary is an overview of some of the teachings of the present application and not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details about the present subject matter are found in the detailed description and appended claims.
Various embodiments are illustrated by way of example in the figures of the accompanying drawings. Such embodiments are demonstrative and not intended to be exhaustive or exclusive embodiments of the present subject matter.
The following detailed description of the present subject matter refers to subject matter in the accompanying drawings which show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter. References to “an”, “one”, or “various” embodiments in this disclosure are not necessarily to the same embodiment, and such references contemplate more than one embodiment, including combinations of such embodiments. The following detailed description is demonstrative and not to be taken in a limiting sense. The scope of the present subject matter is defined by the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
The present detailed description discusses hearing devices generally, including earbuds, headsets, headphones, and hearing assistance devices, using the example of hearing aids. Hearing devices other than those expressly listed in this document may also embody the present subject matter. It is understood that their use in the description is intended to demonstrate the present subject matter, but not in a limiting, exclusive, or exhaustive sense.
Hearing and understanding in a noisy environment is challenging for anyone, but especially for hearing-impaired patients. Speech understanding in a noisy environment is a common complaint for hearing aid wearers. Often, the source of the speech is in front of the hearing aid wearer. Directionality has been shown to be beneficial for hearing speech in noise, while current noise reduction (NR) algorithms provide comfort without significantly improving intelligibility.
Previously, directionality algorithms and noise reduction algorithms have been applied separately and consecutively to clean up a received audio signal. Directionality algorithms may employ adaptive null-steering in multiple bands to minimize the power from the rear, while not degrading the signal at 0 degrees azimuth (directly in front of the listener). These directionality algorithms can produce up to 6 dB of signal-to-noise ratio (SNR) improvement in noisy environments, with good sound quality.
Noise reduction algorithms can further improve the SNR by 2-3 dB, depending on the number of bands and acceptance of sound artifacts. Especially when the environmental SNR is near 0 dB, it is exceedingly difficult for any algorithm to differentiate between speech and noise. There is a balancing act between reduction of speech, reduction of noise, and willingness to accept audio artifacts due to the fast processing of the signal in multiple independent frequency bands. It is possible to use the rear-facing beam, e.g., the rear-facing cardioid beam, as input to the noise estimator of the NR algorithm, and the front-facing beam, e.g., the front-facing cardioid beam, as the input to the speech estimator of the NR algorithm. This can help to improve the instantaneous SNR estimate that is a part of any NR algorithm, and thereby reduce artifacts.
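For illustration only, the following sketch shows how a per-band noise-reduction gain could use the front-facing beam as the speech estimate and the rear-facing beam as the noise estimate. It is a minimal Wiener-style example; the function names, parameters, and attenuation floor are assumptions rather than the disclosed implementation.

```python
# Minimal sketch (not the patented implementation): a per-band noise-reduction
# gain in which the rear-facing cardioid drives the noise estimate and the
# front-facing cardioid drives the speech estimate. All names are illustrative.
import numpy as np

def nr_gain_per_band(front_power, rear_power, floor_db=-12.0):
    """Wiener-like gain per frequency band from front/rear beam powers.

    front_power, rear_power: arrays of smoothed per-band powers (linear).
    floor_db: maximum attenuation, so speech is never fully suppressed.
    """
    eps = 1e-12
    # Instantaneous SNR estimate: front beam ~ speech + noise, rear beam ~ noise.
    snr = np.maximum(front_power - rear_power, 0.0) / (rear_power + eps)
    gain = snr / (1.0 + snr)              # Wiener gain in [0, 1)
    floor = 10.0 ** (floor_db / 20.0)     # limit the attenuation
    return np.maximum(gain, floor)
```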
Most directional beamformers in hearing aids employ two omnidirectional (omni) microphones. The outputs of the two microphones are combined to form a front-facing cardioid directivity pattern (or directional beam) and a rear-facing cardioid directivity pattern (or directional beam). From these two opposing cardioid patterns, a combined pattern with a variable null angle can be formed using the Elko-Yong algorithm, allowing adaptive null steering to maximally cancel noise in the rear hemisphere. In order for this adapted-null beamformer to create an optimal beam, the two microphones must be well matched. Any signal processing applied differentially before beamforming, such as noise reduction, will destroy the beam integrity. Consequently, it is currently not possible to integrate noise reduction directly with directionality.
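As a non-limiting sketch of such a two-microphone beamformer, in the spirit of the Elko adaptive differential microphone cited below, the following example forms opposing cardioids by delay-and-subtract and adapts a single coefficient to steer the rear null. The integer-sample delay, step size, and names are illustrative assumptions.

```python
# Sketch of a first-order adaptive differential beamformer (cf. Elko 1995):
# two matched omni microphones form opposing cardioids, and the rear cardioid
# is scaled by an adaptive coefficient beta to steer a null in the rear
# hemisphere. Assumes the mic spacing equals an integer-sample acoustic delay.
import numpy as np

def adaptive_null_beamformer(x_front, x_rear, delay=1, mu=0.01):
    x_front = np.asarray(x_front, dtype=float)
    x_rear = np.asarray(x_rear, dtype=float)
    n = len(x_front)
    y = np.zeros(n)
    beta = 0.5
    for i in range(delay, n):
        c_f = x_front[i] - x_rear[i - delay]   # front-facing cardioid
        c_b = x_rear[i] - x_front[i - delay]   # rear-facing cardioid
        y[i] = c_f - beta * c_b                # combined, variable-null pattern
        # NLMS update: minimize output power by adapting the null angle.
        beta += mu * y[i] * c_b / (c_b * c_b + 1e-12)
        beta = min(max(beta, 0.0), 1.0)        # constrain null to rear hemisphere
    return y, beta
```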
According to various embodiments of the present subject matter, the present systems and methods provide for improved hearing in noisy environments, by making use of spatial information, or directionality, in combination with noise reduction. The present subject matter applies noise reduction differentially depending on whether the instantaneous signal is more likely to be originating in front of the listener (hearing device wearer) or behind the listener.
Additionally or alternatively, the two opposing directional beams (front, rear), such as fixed-pattern cardioids, can be compared to each other, and/or to the adapted-null beam (the output of the directionality algorithm). If the momentary comparison between the fixed cardioids is stronger to the rear, the present subject matter may apply more noise reduction to the adapted-null beam output. If the comparison shows that the front-facing cardioid is dominant, the present subject matter may apply less noise reduction to the adapted-null beam output.
The spatial analysis 150 may include smoothing of the power of the front-facing directional beam, the rear-facing directional beam, and a directional beamformer output, in various embodiments. Optionally, the spatial analysis 150 calculates a difference as rear-facing directional beam power minus directional beamformer output power. Additionally or alternatively, the spatial analysis 150 calculates a difference as rear-facing directional beam power minus front-facing directional beam power. In either case, the difference results in a weighting value per frequency band. The per-band weighting values may be combined across bands to produce a smaller number of frequency band weighting values, in various examples. Additionally or alternatively, the weighting values may be smoothed before being incorporated into a noise reduction calculation.
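One possible realization of this spatial analysis is sketched below. The exponential smoothing, the smoothing constant, and the class structure are assumptions for illustration, not the actual implementation of spatial analysis 150.

```python
# Illustrative sketch: smooth per-band beam powers, take the rear-minus-output
# (or rear-minus-front) difference as a per-band weighting value, then smooth
# the weights before they feed the noise reduction. Names are assumptions.
import numpy as np

class SpatialAnalysis:
    def __init__(self, num_bands, alpha=0.9):
        self.alpha = alpha                    # smoothing time constant
        self.p_front = np.zeros(num_bands)    # smoothed front-beam power
        self.p_rear = np.zeros(num_bands)     # smoothed rear-beam power
        self.p_out = np.zeros(num_bands)      # smoothed beamformer-output power
        self.weights = np.zeros(num_bands)

    def update(self, front_bands, rear_bands, out_bands, use_output=True):
        a = self.alpha
        self.p_front = a * self.p_front + (1 - a) * np.abs(front_bands) ** 2
        self.p_rear = a * self.p_rear + (1 - a) * np.abs(rear_bands) ** 2
        self.p_out = a * self.p_out + (1 - a) * np.abs(out_bands) ** 2
        ref = self.p_out if use_output else self.p_front
        diff = self.p_rear - ref              # > 0 when the rear dominates
        # Per-band weights may also be pooled into fewer, broader bands here.
        self.weights = a * self.weights + (1 - a) * diff
        return self.weights
```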
Noise reduction can have two aspects: an underlying noise reduction algorithm that calculates instantaneous values of gain reduction per frequency band, and a slow-moving limit to the maximum gain reduction that can be applied. The noise reduction 140 may be performed using weighting values calculated by the spatial analysis 150. The weighting of the noise reduction can be accomplished in different ways in different embodiments. In various examples, the weighting value can be applied to either the noise reduction limit (i.e., maximum noise reduction) or to the noise reduction itself. In some additional or alternative examples, the weighting value can be used as an additive factor, such that the difference between the rear directional beam and the front directional beam (or directional beamformer output) can be added to the limit (e.g., modified_NR_limit=NR_limit+weighting value). In other examples, the weighting value can be used as a multiplicative factor, such that the difference between the rear directional beam and the front directional beam (or directional beamformer output) can form a multiplier on the limit or the NR itself (e.g., modified_NR_limit=NR_limit*weighting value*c, where c is a scaling factor).
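The two weighting schemes named above may be sketched as follows, mirroring the formulas in the preceding paragraph. Expressing the limits and weights in decibels of gain reduction, and the value of the scaling factor c, are assumptions.

```python
# Hedged sketch of the additive and multiplicative weighting schemes. Limits
# and weights are taken to be in dB of gain reduction; c is a tuning assumption.
def modified_nr_limit_additive(nr_limit_db, weight_db):
    # Rear-dominant frames (positive weight) allow deeper gain reduction.
    return nr_limit_db + weight_db

def modified_nr_limit_multiplicative(nr_limit_db, weight, c=0.5):
    # The weight (scaled by c) forms a multiplier on the limit itself.
    return nr_limit_db * weight * c
```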
According to various embodiments, processing may be done on a subband basis, to provide for subband noise reduction to be applied with spatial information. Thus, in the present subject matter signals from the front are minimally disrupted, while signals from the rear can be maximally noise reduced, without corrupting the target speech signal in front of the listener. Optionally, the spatially differentiated noise reduction can be applied without disrupting the beamformer. The combination of spatial information and noise reduction may be accomplished in one of a plurality of methods. In one example the front-rear differential could serve as a logical switch, whereby if front sound is dominating, the noise reduction is limited to a maximum value x, and if rear sound is dominating, noise reduction is limited to a maximum value y. This method may be extended to a plurality of front-rear differentials, in various embodiments. In another alternative or additional example, the front-rear differential could be a continuous function adding to or subtracting from the maximum noise reduction. In a further alternative or additional example, the front-rear differential may form a multiplier on the maximum noise reduction. In other additional or alternative examples, the front-rear differential may be applied to the underlying noise reduction, rather than the maximum noise reduction.
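A minimal sketch of the logical-switch variant described above appears below; the sign convention, threshold, and the example limits x and y are illustrative values only.

```python
# Logical-switch sketch: cap noise reduction at x dB when the front dominates
# and at y dB when the rear dominates. Values are illustrative assumptions.
def nr_limit_switch(front_rear_diff_db, x_db=6.0, y_db=14.0):
    """front_rear_diff_db > 0 means the front-facing beam is dominant."""
    return x_db if front_rear_diff_db > 0.0 else y_db

def apply_nr(gain_reduction_db, limit_db):
    # Apply the slow-moving limit to the instantaneous per-band gain reduction.
    return min(gain_reduction_db, limit_db)
```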
Using this additional fourth input, the spatial analysis block can perform a left-right (or inter-device) comparison in addition to the front-back comparison of the single monaural aid. In one example, the input can be used to further emphasize the front ipsilateral signal by increasing the amount of noise reduction when the contralateral noise dominates the signal. In an additional or alternative example, the ipsilateral and contralateral signals are compared to each other to generate separate medial and lateral energy measures (one or more inter-device comparisons). The medial and lateral energy measures can be used by the noise reduction block 140 to provide more aggressive noise reduction for lateral signals, and less aggressive noise reduction for medial (or common) signals, in an example. In various embodiments, either or both of the left-right (inter-device) or medial-lateral refinements to noise reduction described herein are performed in addition to the front-back noise reduction refinements described above.
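One assumed realization of the medial and lateral energy measures treats the sum of the ipsilateral and contralateral band signals as the medial (common) component and their difference as the lateral component. This decomposition is an illustrative assumption, not the disclosed method.

```python
# Assumed sum/difference decomposition: signals common to both ears are medial,
# signals differing between ears are lateral; the lateral component can then
# receive more aggressive noise reduction.
import numpy as np

def medial_lateral_energy(ipsi_bands, contra_bands):
    medial = 0.5 * (ipsi_bands + contra_bands)    # common (midline) component
    lateral = 0.5 * (ipsi_bands - contra_bands)   # side-dominated component
    return np.abs(medial) ** 2, np.abs(lateral) ** 2
```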
The present subject matter can perform a three-way comparison using the front ipsilateral signal (or beamformed ipsilateral signal), the rear ipsilateral signal and the beamformed contralateral signal, in an example, to obtain an evaluation of the spatial audio scene for adjusting noise reduction. Thus, the device of the present system may include one or more processors programmed to receive a wireless signal indicative of a second output directional beam from a second hearing device, compare the received second output directional beam to the front-facing directional beam or the output directional beam, and/or to the rear-facing directional beam, to perform an inter-device comparison, and increase or decrease an amount of noise reduction of the output directional beam based on the inter-device comparison.
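A hedged sketch of such a three-way comparison follows; the decision logic and step size are assumptions showing how the comparison could adjust a noise-reduction limit.

```python
# Illustrative three-way comparison of ipsilateral front, ipsilateral rear, and
# wirelessly received contralateral beam powers; the rules are assumed examples.
def three_way_nr_adjust(p_front_ipsi, p_rear_ipsi, p_contra, step_db=2.0):
    adjust = 0.0
    if p_rear_ipsi > p_front_ipsi:
        adjust += step_db        # rear-dominated scene: allow more NR
    if p_contra > p_front_ipsi:
        adjust += step_db        # contralateral noise dominates: more NR
    return adjust                # dB added to the noise-reduction limit
```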
In yet another alternative or additional embodiment, both the front- and rear-facing information can be transmitted to the contralateral side (or separate device processor) and used to generate a four-quadrant spatial map, including left-front, left-rear, right-front, and right-rear components. In various examples, the spatial analysis block can perform comparisons between these four quadrants in multiple simultaneous frequency bands to provide for sophisticated spatial steering of noise reduction, as well as isolation of signals of interest at angles anywhere in the azimuthal plane.
Thus, the device of the present system may include one or more processors programmed to receive wireless signals indicative of a second front-facing directional beam and a second rear-facing directional beam from a second hearing device, generate a four-quadrant spatial map using the second front-facing directional beam, the second rear-facing directional beam, the front-facing directional beam, and the rear-facing directional beam, and perform spatial steering of noise reduction using the four-quadrant spatial map. The one or more processors may be further programmed to isolate signals of interest from the sensed sound signals using the four-quadrant spatial map, in one example.
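For illustration, a four-quadrant spatial map could be represented as per-band quadrant powers, with the loudest quadrant per band steering noise reduction; the data layout and the dominance rule are assumptions, not the disclosed implementation.

```python
# Hedged sketch of a four-quadrant spatial map built from the local and the
# wirelessly received front/rear beams, evaluated per frequency band.
import numpy as np

def four_quadrant_map(front_l, rear_l, front_r, rear_r):
    """Per-band powers for the left/right x front/rear quadrants."""
    return {
        "left_front": np.abs(front_l) ** 2,
        "left_rear": np.abs(rear_l) ** 2,
        "right_front": np.abs(front_r) ** 2,
        "right_rear": np.abs(rear_r) ** 2,
    }

def dominant_quadrant(qmap):
    # Per band, the loudest quadrant can steer noise reduction or isolate
    # signals of interest at angles in the azimuthal plane.
    keys = list(qmap)
    stacked = np.stack([qmap[k] for k in keys])
    return [keys[i] for i in np.argmax(stacked, axis=0)]
```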
According to various embodiments, comparing the front-facing directional beam to the rear-facing directional beam includes performing a momentary comparison. A spatial analysis may be used to calculate a front-facing power, a rear-facing power, and a directional power using the front-facing directional beam, the rear-facing directional beam, and the output directional beam. Comparing the front-facing directional beam to the rear-facing directional beam includes subtracting the front-facing power from the rear-facing power, in various examples. In some additional or alternative examples, the subtraction is performed on a subband frequency basis to determine a weighting value per subband. The weighting value is applied to a noise reduction limit or maximum per subband to increase or decrease noise reduction, in some embodiments. In other examples, the weighting value is applied to a noise reduction calculation per subband to increase or decrease noise reduction. For example, the weighting value can be applied as a multiplier in the noise reduction calculation, or the weighting value can be applied as an addition or subtraction in the noise reduction calculation, or in some combination of the two.
In various embodiments, a spatial analysis is used to calculate a front-facing power, a rear-facing power, and a directional power using the front-facing directional beam, the rear-facing directional beam and the output directional beam. Comparing the output directional beam to the rear-facing directional beam includes subtracting the directional power from the rear-facing power, in various examples. In some additional or alternative examples, the subtraction is performed on a subband frequency basis to determine a weighting value per subband.
Various aspects of the present subject matter include a hearing device including two or more microphones configured to sense sound signals, and one or more processors. The one or more processors are programmed to generate a front-facing directional beam and a rear-facing directional beam using outputs of the two or more microphones, and combine the front-facing directional beam and the rear-facing directional beam using a directionality algorithm to obtain an output directional beam. The front-facing directional beam or the output directional beam is compared to the rear-facing directional beam to determine a front-rear differential. Responsive to a determination that the front-rear differential indicates that the rear-facing directional beam is dominant, an amount of noise reduction of the output directional beam is increased. Responsive to a determination that the front-rear differential indicates that the rear-facing directional beam is not dominant, the amount of noise reduction of the output directional beam is reduced.
According to various embodiments, the two or more microphones include an omnidirectional microphone. Other types of microphones can be used additionally or alternatively. In some embodiments, the two or more microphones include a first microphone and a second microphone. The first microphone includes a front microphone, and the second microphone includes a rear microphone, in various embodiments. In some additional or alternative embodiments, the hearing device is a hearing aid. Optionally, the hearing device is an earbud. In various additional or alternative examples, the present subject matter processes a front beamformer and a rear beamformer separately to determine if either or both are predominately speech or predominately noise, and then uses the result to change a noise reduction calculation. Optionally, each individual hearing device performs the spatially differentiated noise reduction. In other additional or alternative examples, spatially differentiated noise reduction is performed using data from each of a left and right hearing device.
The present subject matter provides for improved hearing in noisy environments, by making use of spatial information in combination with noise reduction. For example, the present subject matter provides for more aggressive noise reduction when the sensed sound is from behind a listener (such that artifacts from aggressive noise reduction may be tolerated), and provides for less aggressive noise reduction when the sensed sound is from in front of a listener where maximum speech intelligibility is desired.
Examples, as described herein, may include, or may operate by, logic or a number of components, or mechanisms. Circuit sets are a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuit set membership may be flexible over time and underlying hardware variability. Circuit sets include members that may, alone or in combination, perform specified operations when operating. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuit set. For example, under operation, execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
Machine (e.g., computer system) 400 may include a hardware processor 402 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 404 and a static memory 406, some or all of which may communicate with each other via an interlink (e.g., bus) 408. The machine 400 may further include a display unit 410, an alphanumeric input device 412 (e.g., a keyboard), and a user interface (UI) navigation device 414 (e.g., a mouse). In an example, the display unit 410, input device 412 and UI navigation device 414 may be a touch screen display. The machine 400 may additionally include a storage device (e.g., drive unit) 416, one or more input audio signal transducers 418 (e.g., microphone), a network interface device 420, and one or more output audio signal transducers 421 (e.g., speaker). The machine 400 may include an output controller 432, such as a serial (e.g., universal serial bus (USB), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The storage device 416 may include a machine readable medium 422 on which is stored one or more sets of data structures or instructions 424 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 424 may also reside, completely or at least partially, within the main memory 404, within static memory 406, or within the hardware processor 402 during execution thereof by the machine 400. In an example, one or any combination of the hardware processor 402, the main memory 404, the static memory 406, or the storage device 416 may constitute machine readable media.
While the machine readable medium 422 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 424.
The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 400 and that cause the machine 400 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine-readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 424 may further be transmitted or received over a communications network 426 using a transmission medium via the network interface device 420 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 420 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 426. In an example, the network interface device 420 may include a plurality of antennas to communicate wirelessly using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 400, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Various embodiments of the present subject matter support wireless communications with a hearing device. In various embodiments the wireless communications may include standard or nonstandard communications. Some examples of standard wireless communications include link protocols including, but not limited to, Bluetooth™, Bluetooth™ Low Energy (BLE), IEEE 802.11 (wireless LANs), 802.15 (WPANs), 802.16 (WiMAX), cellular protocols including, but not limited to, CDMA and GSM, ZigBee, and ultra-wideband (UWB) technologies. Such protocols support radio frequency communications, some support infrared communications, and others support NFMI (near-field magnetic induction). Although the present system is demonstrated as a radio system, it is possible that other forms of wireless communications may be used such as ultrasonic, optical, infrared, and others. It is understood that the standards which may be used include past and present standards. It is also contemplated that future versions of these standards and new future standards may be employed without departing from the scope of the present subject matter.
The wireless communications support a connection from other devices. Such connections include, but are not limited to, one or more mono or stereo connections or digital connections having link protocols including, but not limited to 802.3 (Ethernet), 802.4, 802.5, USB, SPI, PCM, ATM, Fibre-channel, Firewire or 1394, InfiniBand, or a native streaming interface. In various embodiments, such connections include all past and present link protocols. It is also contemplated that future versions of these protocols and new future standards may be employed without departing from the scope of the present subject matter.
Hearing assistance devices typically include at least one enclosure or housing, a microphone, hearing assistance device electronics including processing electronics, and a speaker or “receiver.” Hearing assistance devices may include a power source, such as a battery. In various embodiments, the battery is rechargeable. In various embodiments multiple energy sources are employed. It is understood that in various embodiments the microphone is optional. It is understood that in various embodiments the receiver is optional. It is understood that variations in communications protocols, antenna configurations, and combinations of components may be employed without departing from the scope of the present subject matter. Antenna configurations may vary and may be included within an enclosure for the electronics or be external to an enclosure for the electronics. Thus, the examples set forth herein are intended to be demonstrative and not a limiting or exhaustive depiction of variations.
It is understood that digital hearing assistance devices include a processor. In digital hearing assistance devices with a processor, programmable gains may be employed to adjust the hearing assistance device output to a wearer's particular hearing impairment. The processor may be a digital signal processor (DSP), microprocessor, microcontroller, other digital logic, or combinations thereof. The processing may be done by a single processor, or may be distributed over different devices. The processing of signals referenced in this application may be performed using the processor or over different devices. Processing may be done in the digital domain, the analog domain, or combinations thereof. Processing may be done using subband processing techniques. Processing may be done using frequency domain or time domain approaches. Some processing may involve both frequency and time domain aspects. For brevity, in some examples drawings may omit certain blocks that perform frequency synthesis, frequency analysis, analog-to-digital conversion, digital-to-analog conversion, amplification, buffering, and certain types of filtering and processing. In various embodiments of the present subject matter the processor is adapted to perform instructions stored in one or more memories, which may or may not be explicitly shown. Various types of memory may be used, including volatile and nonvolatile forms of memory. In various embodiments, the processor or other processing devices execute instructions to perform a number of signal processing tasks. Such embodiments may include analog components in communication with the processor to perform signal processing tasks, such as sound reception by a microphone, or playing of sound using a receiver (i.e., in applications where such transducers are used). In various embodiments of the present subject matter, different realizations of the block diagrams, circuits, and processes set forth herein may be created by one of skill in the art without departing from the scope of the present subject matter.
It is further understood that different hearing devices may embody the present subject matter without departing from the scope of the present disclosure. The devices depicted in the figures are intended to demonstrate the subject matter, but not necessarily in a limited, exhaustive, or exclusive sense. It is also understood that the present subject matter may be used with a device designed for use in the right ear or the left ear or both ears of the wearer.
The present subject matter is demonstrated for hearing devices, including hearing assistance devices, including but not limited to, behind-the-ear (BTE), in-the-ear (ITE), in-the-canal (ITC), receiver-in-canal (RIC), invisible-in-canal (IIC) or completely-in-the-canal (CIC) type hearing assistance devices. It is understood that behind-the-ear type hearing assistance devices may include devices that reside substantially behind the ear or over the ear. Such devices may include hearing assistance devices with receivers associated with the electronics portion of the behind-the-ear device, or hearing assistance devices of the type having receivers in the ear canal of the user, including but not limited to receiver-in-canal (RIC) or receiver-in-the-ear (RITE) designs. The present subject matter may also be used in hearing assistance devices generally, such as cochlear implant type hearing devices. The present subject matter may also be used in deep insertion devices having a transducer, such as a receiver or microphone. The present subject matter may be used in bone conduction hearing devices, in some embodiments. The present subject matter may be used in devices whether such devices are standard or custom fit and whether they provide an open or an occlusive design. It is understood that other hearing devices not expressly stated herein may be used in conjunction with the present subject matter.
The present subject matter can be described further with respect to the following consistory clauses:
1. A method, comprising:
sensing sound signals with a hearing device;
generating a front-facing directional beam and a rear-facing directional beam using the sensed sound signals;
combining the front-facing directional beam and the rear-facing directional beam using a directionality algorithm to obtain an output directional beam;
comparing the front-facing directional beam to the rear-facing directional beam to determine a front-rear differential;
responsive to a determination that the front-rear differential indicates that the front-facing directional beam is dominant, reducing an amount of noise reduction of the output directional beam; and
responsive to a determination that the front-rear differential indicates that the rear-facing directional beam is dominant, increasing the amount of noise reduction of the output directional beam.
2. The method of clause 1, wherein comparing the front-facing directional beam to the rear-facing directional beam includes performing a momentary comparison.
3. The method of clause 1, comprising using a spatial analysis to calculate a front-facing power, a rear-facing power, and a directional power using the front-facing directional beam, the rear-facing directional beam and the output directional beam.
4. The method of clause 3, wherein comparing the front-facing directional beam to the rear-facing directional beam includes subtracting the front-facing power from the rear-facing power.
5. The method of clause 4, wherein the subtraction is performed on a subband frequency basis to determine a weighting value per subband.
6. The method of clause 5, wherein the weighting value is applied to a noise reduction limit or maximum per subband to increase or decrease noise reduction.
7. The method of clause 6, wherein the weighting value is applied as a multiplier.
8. The method of clause 6, wherein the weighting value is applied as an addition or subtraction.
9. The method of clause 5, wherein the weighting value is applied to a noise reduction calculation per subband to increase or decrease noise reduction.
10. The method of clause 9, wherein the weighting value is applied as a multiplier in the noise reduction calculation.
11. The method of clause 9, wherein the weighting value is applied as an addition or subtraction in the noise reduction calculation.
12. A method, comprising:
sensing sound signals with a hearing device;
generating a front-facing directional beam and a rear-facing directional beam using the sensed sound signals;
combining the front-facing directional beam and the rear-facing directional beam using a directionality algorithm to obtain an output directional beam;
comparing the output directional beam to the rear-facing directional beam to determine an output-rear differential;
responsive to a determination that the output-rear differential indicates that the output directional beam is dominant, reducing an amount of noise reduction of the output directional beam; and
responsive to a determination that the output-rear differential indicates that the rear-facing directional beam is dominant, increasing the amount of noise reduction of the output directional beam.
13. The method of clause 12, comprising using a spatial analysis to calculate a front-facing power, a rear-facing power, and a directional power using the front-facing directional beam, the rear-facing directional beam and the output directional beam.
14. The method of clause 13, wherein comparing the output directional beam to the rear-facing directional beam includes subtracting the directional power from the rear-facing power.
15. The method of clause 14, wherein the subtraction is performed on a subband frequency basis to determine a weighting value per subband.
16. A hearing device, comprising:
two or more microphones configured to sense sound signals; and
one or more processors programmed to:
generate a front-facing directional beam and a rear-facing directional beam using outputs of the two or more microphones;
combine the front-facing directional beam and the rear-facing directional beam using a directionality algorithm to obtain an output directional beam;
compare the front-facing directional beam or the output directional beam to the rear-facing directional beam to determine a front-rear differential;
responsive to a determination that the front-rear differential indicates that the rear-facing directional beam is dominant, increase an amount of noise reduction of the output directional beam; and
responsive to a determination that the front-rear differential indicates that the rear-facing directional beam is not dominant, reduce the amount of noise reduction of the output directional beam.
17. The hearing device of clause 16, wherein the two or more microphones include an omnidirectional microphone.
18. The hearing device of clause 16, wherein the one or more processors are further programmed to:
receive a wireless signal indicative of a second output directional beam from a second hearing device;
compare the received second output directional beam to the front-facing directional beam, the output directional beam, or the rear-facing directional beam to perform an inter-device comparison; and
increase or decrease the amount of noise reduction of the output directional beam based on the inter-device comparison.
19. The hearing device of clause 16, wherein the one or more processors are further programmed to:
receive wireless signals indicative of a second front-facing directional beam and a second rear-facing directional beam from a second hearing device;
generate a four-quadrant spatial map using the second front-facing directional beam, the second rear-facing directional beam, the front-facing directional beam, and the rear-facing directional beam; and
perform spatial steering of noise reduction using the four-quadrant spatial map.
20. The hearing device of clause 19, wherein the one or more processors are further programmed to:
isolate signals of interest from the sensed sound signals using the four-quadrant spatial map.
This application is intended to cover adaptations or variations of the present subject matter. It is to be understood that the above description is intended to be illustrative, and not restrictive. The scope of the present subject matter should be determined with reference to the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
This patent application claims the benefit of U.S. Provisional Patent Application Nos. 63/203,797, filed Jul. 30, 2021 and 63/267,006, filed Jan. 21, 2022, each of which is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
8396234 | Derleth et al. | Mar 2013 | B2 |
8942387 | Elko et al. | Jan 2015 | B2 |
9143857 | Every et al. | Sep 2015 | B2 |
9301049 | Elko et al. | Mar 2016 | B2 |
9473850 | Konchitsky | Oct 2016 | B2 |
9491543 | Konchitsky | Nov 2016 | B1 |
9799330 | Nemala et al. | Oct 2017 | B2 |
10015589 | Ebenezer | Jul 2018 | B1 |
10176823 | Dusan et al. | Jan 2019 | B2 |
10244333 | Mustiere et al. | Mar 2019 | B2 |
10347269 | Van Hoesel et al. | Jul 2019 | B2 |
20080260175 | Elko | Oct 2008 | A1 |
20090175466 | Elko et al. | Jul 2009 | A1 |
20170164102 | Ivanov | Jun 2017 | A1 |
Number | Date | Country |
---|---|---|
0652686 | Aug 2002 | EP |
1278395 | Nov 2009 | EP |
2973556 | Jul 2018 | EP |
3668123 | Jun 2020 | EP |
4125276 | Feb 2023 | EP |
2561408 | Oct 2018 | GB |
0197558 | Dec 2001 | WO |
Entry |
---|
“European Application Serial No. 22187717.8, Partial European Search Report dated Dec. 21, 2022”, 13 pgs. |
Elko, Gary, “A simple adaptive first-order differential microphone”, IEEE Workshop on Application of Signal Processing to Audio and Acoustics, (Oct. 15, 1995), 4 pgs. |
Number | Date | Country | |
---|---|---|---|
20230034525 A1 | Feb 2023 | US |
Number | Date | Country | |
---|---|---|---|
63267006 | Jan 2022 | US | |
63203797 | Jul 2021 | US |