This invention relates generally to the fields of digital signal processing, audio engineering, and audiology, and more specifically to systems and methods for a hearing assistive device, for example one in which a user's hearing test parameterizes a sound enhancement algorithm that processes ambient sound through a mobile device.
Hearing aids, although effective for improving speech comprehension for listeners, are still extremely expensive and inaccessible for the vast majority of hearing-impaired (HI) individuals. Furthermore, the use of hearing aids has been subject to social stigmatization, despite the prevalence of hearing loss across all age groups. Cheaper hearing assistive devices, such as over-the-counter sound enhancement earbuds, provide a solution to this problem, but fall short due to limitations in processing capacity, as well as inadequate testing methodologies and ineffective signal processing techniques.
The most common technique employed by hearing assistive devices is a simple increase in wide-spectrum gain (i.e., volume enhancement). Less commonly, simple equalization (EQ) handset applications have been used. These applications apply gain to frequencies at which a listener exhibits raised thresholds (as determined through an audiogram). Both techniques may enable a listener to better perceive conversation; however, the listener may simultaneously or subsequently experience loudness discomfort. This is because listeners with sensorineural hearing loss have similar, or even reduced, discomfort thresholds when compared to normal hearing listeners, despite the hearing thresholds of such HI listeners being raised relative to normal hearing listeners. To this extent, the dynamic range of HI listeners is narrower, and simply adding EQ or wide-spectrum gain would be detrimental to the long-term hearing health of these HI listeners.
Although hearing loss typically begins at higher frequencies, listeners who are aware that they have hearing loss do not typically complain about the absence of high frequency sounds. Instead, they report difficulties listening in a noisy environment and in hearing out the details in a complex mixture of sounds, such as in a normal conversation at a restaurant or coffee shop. In essence, off-frequency sounds more readily mask information with energy in other frequencies for HI individuals; conversations that were once clear become muddled by background noise, i.e., background noises mask the sound of interest. As hearing deteriorates, the signal-conditioning capabilities of the ear begin to break down, and thus HI listeners need to expend more mental effort to make sense of sounds of interest in complex acoustic scenes (or miss the information entirely). A raised threshold in an audiogram is not merely a reduction in aural sensitivity, but a result of the malfunction of deeper processes within the auditory system that have implications beyond the detection of faint sounds. To this extent, using suprathreshold data, such as masked threshold (MT) data, to parameterize a sound enhancement DSP for a hearing assistive device would better capture the increased masking that occurs as hearing deteriorates.
As the majority of individuals (by one estimate, 45% of the world) have access to a smartphone with substantial processing capability and a set of headphones and/or ear pods, this presents a global opportunity to provide greater accessibility to hearing technology, with improved hearing test methodologies that will help HI individuals.
Accordingly, it is an aspect of the present disclosure to provide systems and methods for a hearing assistive device, for example having a user's hearing test parameterize a sound enhancement algorithm that can process ambient sounds through a mobile device.
According to an aspect of the present disclosure, a method for ambient sound enhancement on a mobile device comprises: generating a user hearing profile; calculating at least one set of ambient sound enhancement digital signal processing (DSP) parameters for each of one or more sound enhancement algorithms, the calculation of the ambient sound enhancement DSP parameters based at least in part on the user hearing profile; in response to a user initiating an ambient sound enhancement function on a mobile computing device, retrieving the at least one set of calculated ambient sound enhancement DSP parameters; capturing ambient sound with at least one microphone of the mobile computing device; processing the captured ambient sound with an ambient sound enhancement DSP to generate a DSP enhanced processed audio signal, wherein the ambient sound enhancement DSP is parameterized with the retrieved set of calculated ambient sound enhancement DSP parameters; and outputting the DSP enhanced processed audio signal to a transducer of the mobile device.
In a further aspect of the present disclosure, capturing the ambient sound is performed in substantially real time with one or more of: processing the captured ambient sound with the ambient sound enhancement DSP to generate the DSP enhanced processed audio signal; and outputting the DSP enhanced processed audio signal to the transducer of the mobile device.
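By way of illustration only, the following minimal sketch shows such a capture, process, and output loop in substantially real time. It assumes the third-party Python sounddevice package for duplex audio I/O; the GainDSP class is a hypothetical placeholder for the parameterized enhancement DSP, which the disclosure does not limit to a simple gain.

```python
import numpy as np
import sounddevice as sd  # third-party duplex audio I/O (assumption)

class GainDSP:
    """Hypothetical stand-in for the parameterized enhancement DSP."""
    def __init__(self, gain_db: float):
        self.gain = 10.0 ** (gain_db / 20.0)

    def process(self, frame: np.ndarray) -> np.ndarray:
        return np.clip(frame * self.gain, -1.0, 1.0)

dsp = GainDSP(gain_db=6.0)  # would be built from the retrieved DSP parameters

def callback(indata, outdata, frames, time, status):
    # Capture, process, and output each block in substantially real time.
    outdata[:] = dsp.process(indata)

with sd.Stream(samplerate=48_000, channels=1, callback=callback):
    sd.sleep(10_000)  # run the enhancement loop for ten seconds
```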
In a further aspect of the present disclosure, the retrieved set of calculated ambient sound enhancement DSP parameters corresponds to a sound enhancement algorithm associated with the user's mobile computing device or indicated by a user input to the mobile computing device.
In a further aspect of the present disclosure, the sound enhancement algorithm associated with the user's mobile computing device is selected from a plurality of available sound enhancement algorithms configured in a local storage of the user's mobile computing device.
In a further aspect of the present disclosure, the selection of the sound enhancement algorithm from the plurality of sound enhancement algorithms is based at least in part on an analysis of the ambient sound captured by the user's mobile computing device.
In a further aspect of the present disclosure, the method further comprises one or more of: attenuating sound not originating in front of the user, or in front of the microphone of the user's mobile computing device that captured the ambient sound, by applying a directional processing algorithm to the captured ambient sound or to the DSP enhanced processed audio signal; and attenuating sounds that have typical characteristics of noise, regardless of the direction of arrival, by applying one or more digital noise reduction algorithms.
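As one hedged illustration of the noise reduction branch, the sketch below applies a simple spectral-subtraction-style gate. The noise magnitude spectrum is assumed to have been estimated from a noise-only segment; practical systems track noise statistics adaptively, and the disclosure is not limited to this particular technique.

```python
import numpy as np

def spectral_gate(frame: np.ndarray, noise_mag: np.ndarray,
                  max_reduction_db: float = 12.0) -> np.ndarray:
    """Attenuate noise-like energy via crude spectral subtraction.

    noise_mag must match the rfft bin count of the windowed frame.
    """
    windowed = frame * np.hanning(len(frame))
    spec = np.fft.rfft(windowed)
    mag, phase = np.abs(spec), np.angle(spec)
    # Per-bin gain: subtract the noise estimate, floor the attenuation
    # so musical-noise artifacts stay bounded.
    floor = 10.0 ** (-max_reduction_db / 20.0)
    gain = np.clip((mag - noise_mag) / np.maximum(mag, 1e-12), floor, 1.0)
    return np.fft.irfft(mag * gain * np.exp(1j * phase), n=len(frame))
```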
In a further aspect of the present disclosure, the user hearing profile is generated by conducting at least one hearing test on a mobile computing device.
In a further aspect of the present disclosure, the mobile computing device is the mobile computing device associated with the user.
In a further aspect of the present disclosure, the hearing test is one or more of a masked threshold test (MT test), a pure tone threshold test (PTT test), a psychophysical tuning curve test (PTC test), or a cross frequency simultaneous masking test (xF-SM test).
In a further aspect of the present disclosure, the user hearing profile is generated at least in part by analyzing a user input of demographic information to thereby interpolate a representative hearing profile.
In a further aspect of the present disclosure, the user input of demographic information includes an age of the user.
In a further aspect of the present disclosure, the sound enhancement algorithm is a multiband dynamic processor; and the at least one set of calculated ambient sound enhancement DSP parameters includes one or more ratio values and gain values.
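For context, the static input/output curve of one band of such a multiband dynamic processor can be written as follows. This is the standard downward-compressor formulation, offered as an illustration rather than a specific parameterization mandated by the disclosure.

```python
def compress_db(level_db: float, threshold_db: float,
                ratio: float, gain_db: float = 0.0) -> float:
    # Below the threshold the band is linear; above it the slope is
    # 1/ratio. The per-band gain value is applied as make-up gain.
    if level_db <= threshold_db:
        return level_db + gain_db
    return threshold_db + (level_db - threshold_db) / ratio + gain_db
```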
In a further aspect of the present disclosure, the at least one set of calculated ambient sound enhancement DSP parameters is stored on a remote server; and retrieving the at least one set of calculated ambient sound enhancement DSP parameters comprises receiving a requested set of calculated ambient sound enhancement DSP parameters at the mobile computing device from the remote server.
In a further aspect of the present disclosure, the at least one set of calculated ambient sound enhancement DSP parameters is stored locally on the mobile computing device; and retrieving the at least one set of calculated ambient sound enhancement DSP parameters comprises accessing a local storage of the mobile computing device.
In a further aspect of the present disclosure, calculating the at least one set of ambient sound enhancement DSP parameters is performed on a remote server.
In a further aspect of the present disclosure, calculating the at least one set of ambient sound enhancement DSP parameters is performed by a processor of the user's mobile computing device.
In a further aspect of the present disclosure, the hearing test measures masking threshold curves within a range of frequencies from 250 Hz to 12 kHz.
In a further aspect of the present disclosure, the at least one set of calculated ambient sound enhancement DSP parameters is determined via one or more of: a best fit of the user hearing profile with previously inputted hearing data within a database; or a fitted mathematical function derived from plotted hearing and DSP parameter data.
In a further aspect of the present disclosure, the parameters associated with the best fit between the user hearing profile and the previously inputted hearing data are selected as the user's parameters.
In a further aspect of the present disclosure, the best fit is determined by one or more of average Euclidean distance and root mean square difference.
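A minimal sketch of this best-fit selection follows, assuming each database entry pairs an identifier with a hearing curve sampled at the same frequencies as the user's curve. Under that assumption, average Euclidean distance and root mean square difference rank candidates identically, since both are monotonic in the same sum of squared differences.

```python
import numpy as np

def best_matching_profile(user_curve, database):
    """Return the (identifier, curve) entry whose curve best fits the user's.

    database: iterable of (identifier, curve) pairs, with every curve
    sampled at the same frequencies as user_curve (an assumption).
    """
    user = np.asarray(user_curve, dtype=float)

    def rms_difference(entry):
        _, curve = entry
        return np.sqrt(np.mean((user - np.asarray(curve, dtype=float)) ** 2))

    return min(database, key=rms_difference)
```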
Unless otherwise defined, all technical terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this technology belongs.
The term “sound enhancement algorithm”, as used herein, is defined as any digital signal processing (DSP) algorithm that processes an audio signal to enhance the clarity of the signal to a listener. The DSP algorithm may be, for example: an equalizer, an audio processing function that works on the subband level of an audio signal, a multiband compressive system, or a non-linear audio processing algorithm.
The term “hearing test”, as used herein, is any test that evaluates a user's hearing health, more specifically a hearing test administered using any transducer that outputs a sound wave. The test may be a threshold test or a suprathreshold test, including, but not limited to, a psychophysical tuning curve (PTC) test, a masked threshold (MT) test, a temporal fine structure (TFS) test, a temporal masking curve test, and a speech-in-noise test.
The term “server”, as used herein, generally refers to a computer program or device that provides functionalities for other programs or devices. The term “headphone” or “earphone”, as used herein, is any earpiece bearing a transducer that outputs soundwaves into the ear. The earphone may be a wireless hearable, a corded or wireless headphone, a hearable device, or any pair of earbuds.
The above aspects disclosed for the proposed method may be applied in a similar way to an apparatus or system having at least one processor and at least one memory to store programming instructions or computer program code and data, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform the above functions. Alternatively, the above apparatus may be implemented by circuitry.
According to another broad aspect, a computer program comprising instructions for causing an apparatus to perform any of the above methods is disclosed. Furthermore, a computer readable medium comprising program instructions for causing an apparatus to perform any of the above methods is disclosed.
Furthermore, a non-transitory computer readable medium is disclosed, comprising program instructions stored thereon for performing the above functions.
Implementations of the disclosed apparatus may include using, but are not limited to, one or more processors, one or more application specific integrated circuits (ASICs) and/or one or more field programmable gate arrays (FPGAs). Implementations of the apparatus may also include using other conventional and/or customized hardware, such as software programmable processors.
It will be appreciated that method steps and apparatus features may be interchanged in many ways. In particular, the details of the disclosed apparatus can be implemented as a method, as the skilled person will appreciate.
Other and further embodiments of the present disclosure will become apparent during the course of the following discussion and by reference to the accompanying drawings.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. Understand that these drawings depict only example embodiments of the disclosure and are therefore not to be considered limiting of its scope; the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting the scope of the embodiments described herein. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to “one embodiment” or “an embodiment” in the present disclosure can be references to the same embodiment or to any embodiment, and such references mean at least one of the embodiments.
Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification.
Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims or can be learned by the practice of the principles set forth herein.
It is an aspect of the present disclosure to provide systems and methods for a hearing assistive device.
To this extent, in some embodiments, other suprathreshold testing may be used; for example, a cross frequency masked threshold test is illustrated in the accompanying drawings.
A PRI (perceptually relevant information) optimization approach may also be employed, an example implementation of which is illustrated in the accompanying drawings.
PRI can be calculated according to a variety of methods. One such method, also called perceptual entropy, generally comprises: transforming a sampled window of the audio signal into the frequency domain, obtaining masking thresholds using psychoacoustic rules by performing critical band analysis, determining noise-like or tone-like regions of the audio signal, applying thresholding rules for the signal, and then accounting for absolute hearing thresholds. Following this, the number of bits required to quantize the spectrum without introducing perceptible quantization error is determined. For instance, Painter & Spanias disclose a formulation for perceptual entropy in units of bits/s, which is closely related to ISO/IEC MPEG-1 psychoacoustic model 2 [see, e.g., Painter & Spanias, Perceptual Coding of Digital Audio, Proc. of the IEEE, Vol. 88, No. 4 (2000); see also generally Moving Picture Experts Group standards, https://mpeg.chiariglione.org/standards; both documents incorporated by reference].
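The sketch below walks through those steps in heavily simplified form: it derives a masking threshold as a fixed offset below each critical band's mean energy (a stand-in for the full tonality and spreading analysis of the cited psychoacoustic models) and counts the bits needed to quantize each bin without exceeding that threshold. It illustrates the structure of the computation and is not the Painter & Spanias formulation itself.

```python
import numpy as np

def perceptual_entropy_estimate(frame: np.ndarray, fs: float,
                                offset_db: float = 15.0) -> float:
    """Crude per-frame perceptual-entropy estimate, in bits."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    # Zwicker-style Bark mapping approximates the critical bands.
    bark = 13.0 * np.arctan(7.6e-4 * freqs) \
        + 3.5 * np.arctan((freqs / 7500.0) ** 2)
    bits = 0.0
    for band in range(int(bark.max()) + 1):
        in_band = (bark >= band) & (bark < band + 1)
        if not in_band.any():
            continue
        # Simplified masking threshold: fixed offset below band energy,
        # with a tiny floor to stay safe on silent frames.
        threshold = power[in_band].mean() * 10.0 ** (-offset_db / 10.0) + 1e-20
        # Bits needed so quantization noise stays below the threshold.
        bits += np.sum(np.log2(1.0 + np.sqrt(power[in_band] / threshold)))
    return bits
```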
Various optimization methods are possible to maximize the PRI of audio samples, depending on the type of the applied audio processing function, such as the above-mentioned multiband dynamics processor. For example, a subband dynamic compressor may be parameterized by compression threshold, attack time, gain and compression ratio for each subband, and these parameters may be determined by the optimization process. In some cases, the effect of the multiband dynamics processor on the audio signal is nonlinear and an appropriate optimization technique such as gradient descent is required. The number of parameters that need to be determined may become large, e.g., if the audio signal is processed in many subbands and a plurality of parameters needs to be determined for each subband. In such cases, it may not be practicable to optimize all parameters simultaneously and a sequential approach for parameter optimization may be applied. Although sequential optimization procedures do not necessarily result in the optimum parameters, the obtained parameter values result in increased PRI over the unprocessed audio sample, thereby improving the listener's listening experience.
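One hypothetical form of such a sequential approach is coordinate-wise search, sketched below: each parameter is optimized over a candidate grid while the others are held fixed. The objective is any scoring function of the processed audio, for example the PRI of the output; the parameter names and grids are illustrative assumptions.

```python
def sequential_optimize(params: dict, grids: dict, score) -> dict:
    """One sequential (coordinate-wise) parameter optimization pass.

    params: initial values, e.g. per-band threshold, attack time,
            gain, and compression ratio.
    grids:  candidate values to try for each parameter name.
    score:  callable mapping a parameter dict to a figure of merit,
            such as the PRI of the processed audio sample.
    """
    for name, grid in grids.items():
        best_value, best_score = params[name], score(params)
        for candidate in grid:
            trial = {**params, name: candidate}
            trial_score = score(trial)
            if trial_score > best_score:
                best_value, best_score = candidate, trial_score
        params[name] = best_value  # fix this parameter before the next one
    return params
```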
Other parameterization processes commonly known in the art may be used to calculate parameters based on user-generated threshold and suprathreshold information. For instance, common prescription techniques for linear and non-linear DSP may be employed. Well known procedures for linear hearing aid algorithms include POGO, NAL, and DSL. See, e.g., H. Dillon, Hearing Aids, 2nd Edition, Boomerang Press, 2012.
Fine tuning of any of the above-mentioned techniques may be estimated from manual fitting data. For instance, it is common in the art to fit a multiband dynamic processor according to a series of subjective tests 704 given to a patient, in which parameters are adjusted according to the patient's responses, e.g., a series of A/B tests, decision tree paradigms, or a 2D exploratory interface, in which the patient is asked which set of parameters subjectively sounds better. This testing ultimately guides the optimal parameterization of the DSP.
The parameters of the multi-band compression system in a frequency band are threshold 1111 and ratio 1112. These two parameters are determined from the user masking contour curve 1106 for the listener and the target masking contour curve 1107. The threshold 1111 and ratio 1112 must satisfy the condition that the signal-to-noise ratio (SNR) 1121 of the user masking contour curve 1106 at a given frequency 1109 is greater than the SNR 1122 of the target masking contour curve 1107 at the same given frequency 1109. Note that the SNR is herein defined as the level of the signal tone compared to the level of the masker noise. The broader the curve, the greater the SNR. The given frequency 1109 at which the SNRs 1121 and 1122 are calculated may be arbitrarily chosen, for example, to be beyond a minimum distance from the probe tone frequency 1108.
The sound level 1130 (in dB) of the target masking contour curve 1107 at a given frequency corresponds (see bent arrow 1131) to an input sound level 1141 entering the compression system. The objective is that the sound level 1142 outputted by the compression system will match the user masking contour curve 1106, i.e., that this sound level 1142 is substantially equal to the sound level (in dB) of the user masking contour curve 1106 at the given frequency 1109. This condition allows the derivation of the threshold 1111 (which has to be below the input sound level 1141) and the ratio 1112. In other words, input sound level 1141 and output sound level 1142 determine a reference point of the compression curve. As noted above, threshold 1111 must be selected to be lower than input sound level 1141 (if it is not, there will be no change, as the system is linear below the threshold of the compressor). Once the threshold 1111 is selected, the ratio 1112 can be determined from the threshold and the reference point of the compression curve.
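Written out, this derivation is a one-line computation: above its threshold a downward compressor follows out = T + (in − T)/R, so fixing the reference point (input sound level 1141 mapping to output sound level 1142) and choosing a threshold T below the input level yields the ratio directly. The sketch below assumes all levels are expressed in dB.

```python
def ratio_from_reference(input_db: float, output_db: float,
                         threshold_db: float) -> float:
    # Above threshold: out = T + (in - T) / R, so the reference point
    # (input_db -> output_db) gives R = (in - T) / (out - T).
    if threshold_db >= input_db:
        raise ValueError("threshold must lie below the input sound level")
    return (input_db - threshold_db) / (output_db - threshold_db)

# Example: an input of 70 dB should map to 60 dB with a 50 dB
# threshold, giving a compression ratio of (70-50)/(60-50) = 2:1.
assert ratio_from_reference(70.0, 60.0, 50.0) == 2.0
```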
In the context of the present disclosure, a masking contour curve is obtained from a user hearing test. A target masking contour curve 1107 is interpolated from at least the user masking contour curve 1106 and a reference masking contour curve representing the curve of a normal hearing individual. In some embodiments, the target masking contour curve 1107 is preferred over a reference curve because fitting an audio signal to a reference curve is not necessarily optimal. Depending on the initial hearing ability of the listener, fitting the processing according to a reference curve may cause an excess of processing that spoils the quality of the signal. The objective is to process the signal in order to obtain a good balance between an objective benefit and good sound quality.
The given frequency 1109 is then chosen. It may be chosen arbitrarily, e.g., at a certain distance from the tone frequency 1108. The corresponding sound levels of the listener and target masking contour curves are determined at this given frequency 1109. The value of these sound levels may be determined graphically on the y-axis 1102.
In some embodiments, content-specific DSP parameter sets may be calculated indirectly from a user hearing test based on preexisting entries or anchor points in a server database. An anchor point comprises a typical hearing profile constructed based at least in part on demographic information, such as age and sex, for which DSP parameter sets are calculated and stored on the server to serve as reference markers. Indirect calculation of DSP parameter sets bypasses direct parameter set calculation by finding the closest matching hearing profile(s) and importing (or interpolating) those values for the user.
$$\sqrt{(d_{5a}-d_{1a})^2+(d_{6b}-d_{2b})^2+\cdots} \;<\; \sqrt{(d_{5a}-d_{9a})^2+(d_{6b}-d_{10b})^2+\cdots}$$

$$\sqrt{(y_1-x_1)^2+(y_2-x_2)^2+\cdots} \;<\; \sqrt{(y_1-z_1)^2+(y_2-z_2)^2+\cdots}$$
As would be appreciated by one of ordinary skill in the art, other methods may be used to quantify similarity amongst user hearing profile graphs, including, but not limited to, Euclidean distance measurements such as those illustrated above, or other statistical methods known in the art. For indirect DSP parameter set calculation, the closest matching hearing profile(s) between a user and other preexisting database entries or anchor points can then be used.
DSP parameter sets may be interpolated linearly, e.g., a DRC ratio value of 0.7 for user 5 (u_id)5 and 0.8 for user 3 (u_id)3 would be interpolated as 0.75 for user 200 (u_id)200 in the illustrated example.
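A sketch of this interpolation follows. It weights the two nearest anchor profiles by the user's distance to each (the distance metric is assumed to be the same one used for best-fit matching), so equidistant anchors with ratio values 0.7 and 0.8 yield 0.75, matching the example above.

```python
def interpolate_param(dist_a: float, param_a: float,
                      dist_b: float, param_b: float) -> float:
    # Inverse-distance weighting between the two closest anchors:
    # the nearer anchor contributes more to the interpolated value.
    w_b = dist_a / (dist_a + dist_b)
    return (1.0 - w_b) * param_a + w_b * param_b

# Equidistant anchors reproduce the document's example: 0.75.
assert interpolate_param(1.0, 0.7, 1.0, 0.8) == 0.75
```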
In some embodiments computing system 1600 is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple datacenters, a peer network, etc. In some embodiments, one or more of the described system components represents many such components each performing some or all of the function for which the component is described. In some embodiments, the components can be physical or virtual devices.
Example system 1600 includes at least one processing unit (CPU or processor) 1610 and connection 1605 that couples various system components including system memory 1615, such as read only memory (ROM) 1620 and random access memory (RAM) 1625 to processor 1610. Computing system 1600 can include a cache of high-speed memory 1612 connected directly with, in close proximity to, or integrated as part of processor 1610.
Processor 1610 can include any general-purpose processor and a hardware service or software service, such as services 1632, 1634, and 1636 stored in storage device 1630, configured to control processor 1610 as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor 1610 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
To enable user interaction, computing system 1600 includes an input device 1645, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system 1600 can also include output device 1635, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system 1600. Computing system 1600 can include communications interface 1640, which can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 1630 can be a non-volatile memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), and/or some combination of these devices.
The storage device 1630 can include software services, servers, services, etc.; when the code that defines such software is executed by the processor 1610, it causes the system to perform a function. In some embodiments, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 1610, connection 1605, output device 1635, etc., to carry out the function.
It should be further noted that the description and drawings merely illustrate the principles of the proposed device. Those skilled in the art will be able to implement various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Furthermore, all examples and embodiments outlined in the present document are principally intended expressly to be only for explanatory purposes, to help the reader understand the principles of the proposed device. Furthermore, all statements herein providing principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general-purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
This application is a continuation-in-part of U.S. Ser. No. 16/851,048, entitled “SYSTEMS AND METHODS FOR PROVIDING CONTENT-SPECIFIC, PERSONALIZED AUDIO REPLAY ON CUSTOMER DEVICES,” filed Apr. 16, 2020, which is incorporated by reference herein in its entirety.
Related application data: Parent application U.S. Ser. No. 16/851,048, filed Apr. 2020 (US); child application U.S. Ser. No. 16/992,407 (US).