Hearing loss (HL) affects hundreds of millions of people worldwide. Impaired communication resulting from hearing loss may lead to withdrawal from social interactions, loneliness, and accelerated cognitive decline. Hearing Aids (HAs) may be considered sound processing devices that modify sound signals and render the modified signals to improve intelligibility and/or acoustic comfort. Many hearing aids are available Over The Counter (OTC). Over The Counter Hearing Aids (OTC-HAs) may be configured to address some hearing limitations. Other hearing limitations are often addressed by an audiologist providing a hearing aid fitting. However, many people suffer from hearing impairments that are not correctable with existing hearing aids. For example, many veterans suffer from hearing impairments that are not recognized by conventional approaches.
Conventional approaches to hearing aid fitting may be based on using prescriptive programs such as, for example, National Acoustics Lab Non-Linear 2 (NAL-NL2), Desired Sensation Level (DSL), and/or Cambridge Fitting protocol (CAM2). Conventional approaches to hearing aid fitting may employ open-loop systems. Conventional approaches to hearing aid fitting may require measured audiograms.
Problems may arise in conventional approaches when hearing-impaired patients continue to suffer from hearing loss and/or acoustic discomfort. Problems may arise in using conventional OTC hearing aids when hearing-impaired patients with some hearing limitations need to self-treat instead of working with an audiologist. Working with an audiologist for some hearing limitations may increase the costs and/or time required to address those limitations. Problems may arise in conventional approaches when hearing-impaired patients need adjustments to hearing aid parameters for different environments. Problems may arise in conventional approaches when audiologists cannot efficiently address patient needs. Problems may arise in conventional approaches when hearing-impaired patients are unable to travel to audiologists.
This Background is provided to introduce a brief context for the Detailed Description that follows. This Background is not intended to be an aid in determining the scope of the claimed subject matter nor be viewed as limiting the claimed subject matter to implementations that solve any or all of the shortcomings or problems presented above.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments and, together with the description, serve to explain the disclosed principles. In the drawings:
Consistent with disclosed embodiments, systems and methods for hearing aid fitting are disclosed. Disclosed systems and methods may provide Over The Counter (OTC) Hearing Aids (HAs) to users with hearing impairments who would otherwise require an audiologist for hearing aid fitting using conventional approaches. Providing effective OTC-HAs may reduce costs for hearing-impaired patients, health care providers, and/or health insurance providers. Disclosed systems and methods may also provide clinically dispensed Hearing Aids (HAs) to users with hearing impairments who work with an audiologist for hearing aid fitting. Providing effective HA fitting may reduce costs for hearing-impaired patients, health care providers, and/or health insurance providers. The fitting protocols may depend on the specific sound processing capabilities of the HA and may adjust one or more sound processing parameters to improve HA function. Disclosed systems and methods may provide improved HA sound quality in both clinical dispensing and OTC dispensing. Disclosed systems and methods may provide a higher Quality of Fit (QOF) than conventional approaches. Disclosed systems and methods may provide a shorter Time to Fit (TTF) than conventional approaches. For example, using disclosed embodiments, a TTF may be on the order of 10 minutes. Disclosed systems and methods may enable audiologists to work with hearing-impaired patients remotely. Hearing aids may be fitted by skilled professionals using disclosed embodiments, and/or self-fitted by hearing-impaired patients or users of hearing aids using disclosed embodiments. Hearing aids configured for interoperability with the disclosed embodiments may be provided OTC. These OTC-HAs may be fitted through use of the disclosed embodiments.
As used herein, a set of hearing loss characteristics is the diagnosis of a hearing loss disease state.
As used herein, an audiogram is an example of hearing loss characteristics.
As used herein, a Pure Tone Audiogram (PTA) is a conventional measured audiogram generated as a result of pure tone audiometry.
As used herein, a hearing aid fitting is equivalent to a hearing aid prescription.
As used herein, a hearing aid is a sound processing device configured to modify an input signal in real time and render the modified signal through an audio rendering transducer.
As used herein, an intervention may comprise a set of hearing aid parameter settings.
As used herein, Measured Speech Intelligibility (MSI) comprises one or more measurements of intelligibility for a given set of stimuli S.
As used herein, Quality of Fit (QOF) may be based on Word Recognition Scores (WRS), global phonetic confusion matrices based on WRS, and/or phonetic confusion matrices based on broad phonetic features.
As used herein, Time to Fit (TTF) may be based on an aggregate of fitting and outcomes assessment. TTF may be based on the times a patient or user takes to provide answers to stimuli during a fitting.
Embodiments consistent with the present disclosure may be configured to provide a diagnosis and an optimal intervention based on patient preferences for quality and intelligibility. The diagnosis and the optimal intervention may be provided at the termination of one or more searches.
Embodiments consistent with the present disclosure may include Closed-loop Language-based Fitting (CLBFit). CLBFit may comprise a closed loop between presenting stimuli to a hearing-impaired patient or a user of a hearing aid, recording a perception of hearing aid quality by the patient or user through selection of a preferred stimulus, and modifying hearing aid parameters to improve intelligibility and/or acoustic comfort. CLBFit may comprise a hearing loss level fit. CLBFit may comprise a hearing loss shape fit. Selecting a hearing loss shape may result in selecting a spectral shape to process sound to compensate for different hearing loss thresholds at different audiometric frequencies. CLBFit may comprise a hearing loss fine fit. Hearing aid parameters may include one or more parameter settings. Parameter settings may include gain settings. A parameter setting may be specific to a specific auditory frequency band or a plurality of auditory frequency bands. CLBFit may comprise optimizing hearing aid parameters after the patient or user responds to each stimulus with a preferred intervention. CLBFit may comprise determining a current state of hearing loss after each response from the patient or user.
Embodiments consistent with the present disclosure may include a database. The database may comprise a set of hearing loss characteristics. For example, the hearing loss characteristics may be represented by an audiogram. The set of hearing loss characteristics may be quantized through employment of Vector Quantization (VQ). Quantized hearing loss characteristics may be organized for efficient searching. For example, codebook values from VQ may be organized according to hearing loss level. For example, codebook values from VQ may be organized according to hearing loss shape. For example, codebook values from VQ may be organized according to hearing loss shape for each hearing loss level. A hearing loss shape may be associated with a hearing loss shape type. Quantized hearing loss characteristics may be clustered by hearing loss shape type. Hearing loss levels may be organized in 1 dB increments. For example, quantized hearing loss characteristics may be organized by hearing loss level and hearing loss shape. The set of hearing loss characteristics may be transformed into another domain. For example, a domain may comprise a uniformly spaced frequency axis. For example, a domain may comprise line spectral pairs (LSPs). For example, a domain may comprise Mel-frequency Cepstral Coefficients. Each set of vector quantized hearing loss characteristics may be associated with one set of sound processing parameters.
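For illustration only, the following Python sketch shows one way a set of audiograms might be vector quantized and organized by hearing loss level and hearing loss shape. The training data, codebook size, frequency grid, and helper names are hypothetical assumptions, not values taken from this disclosure.

```python
# Minimal sketch (assumptions: 10 audiometric frequencies, small synthetic
# training set, codebook size chosen arbitrarily for illustration).
import numpy as np
from scipy.cluster.vq import kmeans2  # standard k-means implementation

rng = np.random.default_rng(0)

# Synthetic "training set" of audiograms: hearing loss thresholds (dB HL)
# at 10 audiometric frequencies for 500 hypothetical patients.
audiograms = np.clip(rng.normal(loc=40, scale=20, size=(500, 10)), 0, 120)

# Hearing loss *level*: scalar average loss per audiogram, quantized in 1 dB steps.
levels_db = np.round(audiograms.mean(axis=1)).astype(int)

# Hearing loss *shape*: residual after removing the level, vector quantized
# with k-means into S shape codewords (S = 16 here, an arbitrary choice).
shapes = audiograms - levels_db[:, None]
S = 16
shape_codebook, shape_index = kmeans2(shapes, S, minit="++", seed=0)

# Organize codewords for efficient searching: for each level (1 dB increments),
# record the shape codewords observed at that level.
quaps = {}
for level, shape_id in zip(levels_db, shape_index):
    quaps.setdefault(level, set()).add(int(shape_id))

# A quantized hearing loss characteristic is then the pair (level, shape codeword).
example_level = sorted(quaps)[0]
example_shape = shape_codebook[next(iter(quaps[example_level]))]
print("level (dB):", example_level, "shape codeword:", np.round(example_shape, 1))
```

In this toy organization, a quantized set of hearing loss characteristics is the pair (level, shape codeword), which mirrors the level-then-shape ordering described above.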
Embodiments consistent with the present disclosure may include a sound database. The sound database may comprise a plurality of sound files. The sound files may comprise spoken phrases and words in a given language. Each sound file may comprise a digital representation of a spoken language passage. A spoken language passage may comprise a spoken word or phrase. A set of spoken words may be confusing to a user experiencing hearing loss. A sound file may comprise a digital representation of music. A sound file may comprise a digital representation of spoken language, mixed content, and music.
Some embodiments may include a sound processing device. A sound processing device may comprise a hearing aid. The sound processing device may be configured to generate processed stimuli for playing to a user. The processed stimuli may be based on one or more sound files. The processed stimuli may be based on one or more sound processing parameters. The processed stimuli may be based on one or more sound processing parameter settings. The sound processing parameters may be based on one of a plurality of vector quantized hearing loss characteristics.
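For illustration only, the sketch below generates a processed stimulus by applying per-band gains to a sound signal. Real hearing aid processing is typically compressive and time-varying; the static frequency-domain gains, band edges, and function names here are simplifying assumptions.

```python
# Minimal sketch: produce a "processed stimulus" by applying per-band gains
# (in dB) to a mono signal. Band edges and gains are illustrative only.
import numpy as np

def process_stimulus(signal, fs, band_edges_hz, band_gains_db):
    """Apply static per-band gains in the frequency domain (a simplification of
    real hearing aid processing, which is typically compressive and time-varying)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gains = np.ones_like(freqs)
    for (lo, hi), g_db in zip(band_edges_hz, band_gains_db):
        gains[(freqs >= lo) & (freqs < hi)] = 10 ** (g_db / 20.0)
    return np.fft.irfft(spectrum * gains, n=len(signal))

# Usage: a 1 s synthetic "sound file" and 4 illustrative bands.
fs = 16000
t = np.arange(fs) / fs
sound_file = 0.1 * np.sin(2 * np.pi * 440 * t) + 0.05 * np.sin(2 * np.pi * 3000 * t)
bands = [(0, 500), (500, 1500), (1500, 4000), (4000, 8000)]
gains_db = [5, 10, 20, 25]          # one hypothetical parameter setting
stimulus_a = process_stimulus(sound_file, fs, bands, gains_db)
```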
As used herein, the term “space” refers to a mathematical construct where each point in the N-dimensional space is exactly represented by N variables (x1, x2, . . . , xN). This is the space of all possible vectors in the N-dimensional Euclidean space, denoted by ℝ^N. For example, the space of all possible audiograms is ℝ^10, wherein audiograms are measured at 10 audiometric frequencies. The space of audiograms corresponds to the “diagnosis space” in this disclosure. For a given hearing aid sound processing based on Wide Dynamic Range Compression (WDRC) with, for example, 11 bands, there may be 66 parameters (11 bands × 6 parameters per band), specified by NAL-NL2. These parameters may correspond to the gain for inputs at 65 dB SPL (g65), Compression Ratio (CR), Attack Time (AT), Release Time (RT), Knee_low (the point where compressive amplification starts), and the maximum power output in each band (MPO_per_band). The space represented by ℝ^66 may be referred to as the “intervention space.” The intervention space is a set of hearing aid sound processing parameter settings, with a one-to-one correspondence with the elements of the diagnosis space.
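For illustration only, the following sketch lays out an 11-band, 66-parameter intervention vector of the kind described above. The numeric values are placeholders and are not NAL-NL2 prescription values.

```python
# Bookkeeping sketch: an 11-band WDRC intervention with 6 parameters per band
# (g65, CR, AT, RT, Knee_low, MPO_per_band) flattened to a 66-element vector.
# All numeric values are placeholders, not prescription values.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BandParams:
    g65: float        # gain (dB) for 65 dB SPL input
    cr: float         # compression ratio
    at_ms: float      # attack time (ms)
    rt_ms: float      # release time (ms)
    knee_low: float   # level (dB SPL) where compressive amplification starts
    mpo: float        # maximum power output (dB SPL) in the band

@dataclass
class Intervention:
    bands: List[BandParams] = field(default_factory=list)

    def as_vector(self) -> List[float]:
        v = []
        for b in self.bands:
            v.extend([b.g65, b.cr, b.at_ms, b.rt_ms, b.knee_low, b.mpo])
        return v  # 11 bands x 6 parameters = 66 elements

intervention = Intervention([BandParams(20.0, 2.0, 5.0, 50.0, 45.0, 110.0)
                             for _ in range(11)])
assert len(intervention.as_vector()) == 66
```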
In pure tone audiometry, the resolution for hearing loss thresholds (HLTs) at a given frequency may be limited. For example, HLTs may be represented with 8-bit integers. In this example, there are 256 unique points in each dimension of the ℝ^10 diagnosis space, resulting in more than 1.2e+24 points in the diagnosis space. For example, the intervention space may comprise 2^12=4096 unique points in each dimension. In this example, ℝ^66 will result in a total of approximately 2.6e+238 points. In this example, the diagnosis space and the intervention space may be too large to search efficiently in real time.
In some embodiments, the diagnosis space may be vector quantized to a codebook of size L for hearing loss level and a codebook of size S for a hearing loss shape. The values for L and S may be in the range 14 to 60 and 8 to 32, respectively. The vector quantizing may provide an efficient search space. The quantized diagnosis space may be referred to as quantized auditory perceptual space (QuAPS).
In some embodiments, hearing aid parameters may be determined based on a given prescription formulae for each point in QuAPS offline, resulting in a 1-to-1 correspondence between the diagnosis space and the intervention space. The QuAPS and the corresponding intervention space may be stored in a database configured for efficient searching. Searching in the intervention space serves as a proxy to searching for a hearing loss diagnosis in the diagnosis space. The process of searching an intervention space as a proxy to searching the diagnosis space may be referred to as Diagnosis by Intervention (DBI).
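For illustration only, the sketch below shows Diagnosis by Intervention as an offline table build followed by an online proxy lookup. The prescribe() function is a hypothetical stand-in for a prescription formula such as NAL-NL2, whose actual formulas are not reproduced here.

```python
# Minimal DBI sketch: build a 1-to-1 table from quantized diagnoses (QuAPS points)
# to interventions offline, then search interventions online as a proxy for the
# diagnosis. prescribe() is a stand-in for a real prescription formula (e.g. NAL-NL2).
import numpy as np

def prescribe(quantized_audiogram):
    # Placeholder "prescription": roughly a half-gain rule per frequency band.
    return 0.5 * np.asarray(quantized_audiogram)

# Offline: QuAPS points (quantized audiograms) and their paired interventions.
quaps_points = [np.full(10, level, dtype=float) for level in range(10, 90, 5)]
table = [(phi, prescribe(phi)) for phi in quaps_points]  # 1-to-1 correspondence

# Online: a patient's preference-driven search selects an intervention index;
# the paired QuAPS point is the "selected audiogram" byproduct of the search.
def diagnosis_for(selected_intervention_index):
    phi, _ = table[selected_intervention_index]
    return phi

print(diagnosis_for(3))  # quantized audiogram implied by the chosen intervention
```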
Some embodiments may comprise multiple searches. During each search, a hearing-impaired patient or a user of a hearing aid may be prompted to listen to a stimulus sj and provide a response Ψ. The response may be a preferred stimulus selected by the patient or user. A search may comprise a hearing loss level search. A search may comprise a hearing loss shape search. A search may comprise a hearing loss fine fit search. CLBFit may not need an audiogram to fit HAs. CLBFit may be based on a set of interventions. The interventions may comprise hearing aid parameter settings identified by prescription formulae such as, for example, NAL-NL2. The hearing aid prescriptions may be constructed for each quantized set of hearing loss characteristics. Therefore, searching a set of quantized hearing loss characteristics with feedback from a patient or user may be equivalent to searching a set of hearing aid prescriptions. CLBFit may result in an effective set of sound processing parameter settings selected by the patient or user. The selected prescription may correspond to a unique element in the QuAPS. This unique vector quantized audiogram may be referred to as selected audiogram and is a byproduct of the search process.
In some embodiments, quantized hearing loss characteristics may be represented through quantized audiograms.
Some embodiments may include a Learning Machine (LM). An LM may be configured to generate Actionable Information (AI). An LM may be configured to access a plurality of quantized hearing loss characteristics. The quantized hearing loss characteristics may be organized in ascending order of average hearing loss. A quantized set of hearing loss characteristics may have hearing loss level quantization. A quantized set of hearing loss characteristics may have hearing loss shape quantization. A quantized set of hearing loss characteristics may have hearing loss fine fit quantization. An LM may be configured to conduct unsupervised clustering. For example, the LM may be configured to perform unsupervised k-means clustering. The unsupervised clustering may be based on a distance metric for determining similarity between two sets of hearing loss characteristics. Examples of distance metrics include Euclidean distance, cosine similarity, and Mahalanobis distance. An LM may be trained through employment of a training set of hearing loss characteristics. The training may be based on an objective criterion. For example, an objective criterion may comprise minimizing d(ϕm, ϕj) over all ϕj, i.e., min_j d(ϕm, ϕj), where ϕm and ϕj are sets of hearing loss characteristics from the training set and d is the distance metric. An LM may be configured to select stimuli to be presented to a hearing-impaired patient or a user of a hearing aid. An LM may be configured to estimate word level accuracy. An LM may be configured to construct a phonetic confusion matrix. An LM may be configured to construct broad phonetic confusion matrices. An LM may be configured to control a stimulus presentation. An LM may be configured to guide the patient or user to choose optimal hearing parameters. An LM may be configured to control sound processing for a given intervention. An LM may be configured to receive responses from the patient or the user on the perceived sound. An LM may be configured to store responses to each stimulus and/or the sound processing used in a local database. An LM may be configured to estimate reaction times for a patient or user to select a preferred stimulus. An LM may be configured to upload patient/user responses to one or more servers. An LM may be configured to upload patient/user responses in different environments to one or more servers. An LM may be based on one or more local quantized hearing loss characteristics. An LM may be based on one or more global quantized hearing loss characteristics. An LM may be configured to comply with one or more HIPAA regulations.
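For illustration only, the sketch below implements the pluggable distance metrics mentioned above and a nearest-codeword selection corresponding to min_j d(ϕm, ϕj). The synthetic data, toy codebook, and covariance estimate are assumptions made solely for the example.

```python
# Sketch: distance metrics between hearing loss characteristic vectors and
# nearest-codeword selection under min_j d(phi_m, phi_j). Illustrative only.
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mahalanobis(a, b, cov_inv):
    d = a - b
    return float(np.sqrt(d @ cov_inv @ d))

def nearest_codeword(phi_m, codebook, metric):
    dists = [metric(phi_m, phi_j) for phi_j in codebook]
    j = int(np.argmin(dists))          # argmin_j d(phi_m, phi_j)
    return j, dists[j]

# Usage on synthetic audiograms (10 audiometric frequencies).
rng = np.random.default_rng(1)
training = rng.normal(40, 15, size=(200, 10))
codebook = training[rng.choice(200, size=12, replace=False)]   # toy codebook
cov_inv = np.linalg.inv(np.cov(training, rowvar=False))
phi_m = rng.normal(40, 15, size=10)

print(nearest_codeword(phi_m, codebook, euclidean))
print(nearest_codeword(phi_m, codebook, lambda a, b: mahalanobis(a, b, cov_inv)))
```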
Embodiments consistent with the present disclosure may include a system for configuring a hearing aid device. The system may comprise at least one memory. The at least one memory may be configured to store instructions. The system may comprise at least one processor. The at least one processor may be configured to execute instructions to perform operations. The operations may comprise automatically selecting a sound file from a sound database. The operations may comprise automatically processing the sound file with a first set of sound processing parameter settings. The first set of sound processing parameter settings may be employed to generate two distinct processed level stimuli. The two distinct processed level stimuli may be generated through employment of a sound processing device. The operations may comprise automatically playing the two distinct processed level stimuli to a patient or a user of a hearing aid. The operations may comprise automatically recording a preferred coarse setting by the patient or the user. The operations may comprise automatically processing the sound file with a second set of sound processing parameter settings to generate two distinct processed shape stimuli through employment of the sound processing device. The operations may comprise automatically playing the two distinct processed shape stimuli to the patient or the user. The operations may comprise automatically recording a preferred shape setting by the patient or the user. The operations may comprise automatically processing the sound file with a third set of sound processing parameter settings to generate two distinct processed fine resolution stimuli with the hearing loss shape through employment of the sound processing device. The operations may comprise automatically playing the two distinct processed fine resolution stimuli with the hearing loss shape to the patient or the user. The operations may comprise automatically recording a preferred fine resolution setting for the hearing loss shape by the patient or the user. The operations may comprise automatically determining hearing aid parameter settings for the patient or the user based on the fine resolution setting.
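For illustration only, the sketch below strings the three preference stages (coarse level, shape, fine resolution) into a single flow. The process(), play(), and choose() helpers are hypothetical placeholders for the sound processing, playback, and preference-recording operations recited above.

```python
# High-level sketch of the three-stage fitting flow (coarse level -> shape ->
# fine resolution). Each helper is a hypothetical placeholder, not a real API.
import random

def process(sound_file, settings):               # placeholder sound processing
    return f"{sound_file} @ {settings}"

def play(stimulus):                              # placeholder playback
    print("playing:", stimulus)

def choose(settings_a, settings_b, sound_file):  # one A/B preference trial
    play(process(sound_file, settings_a))
    play(process(sound_file, settings_b))
    return random.choice([settings_a, settings_b])   # stands in for the user's selection

def fit(sound_file):
    level = choose({"level": "softer"}, {"level": "louder"}, sound_file)                   # coarse level stage
    shape = choose({**level, "shape": "flat"}, {**level, "shape": "sloping"}, sound_file)  # shape stage
    fine = choose({**shape, "fine_db": -2}, {**shape, "fine_db": +2}, sound_file)          # fine resolution stage
    return fine    # basis for the final hearing aid parameter settings

print(fit("calibrated_passage.wav"))
```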
In some embodiments, quantized hearing loss characteristics may be organized according to coarse hearing loss level, hearing loss shape for each coarse hearing loss level, and/or fine hearing loss level for each hearing loss shape.
In some embodiments, operations may comprise automatically updating a first set of sound processing parameter settings based on a preferred coarse setting. The operations may comprise automatically processing a sound file with the updated first set of sound processing parameter settings to generate two updated distinct processed coarse stimuli through employment of a sound processing device. The operations may comprise automatically playing the two updated distinct processed coarse stimuli to a user. The operations may comprise automatically updating the preferred coarse setting for the user based on a selected preference by the user.
In some embodiments, operations may comprise automatically updating a second set of sound processing parameter settings based on a preferred hearing loss shape setting. The operations may comprise automatically processing a sound file with the updated second set of sound processing parameter settings to generate two updated distinct processed hearing loss shape stimuli through employment of a sound processing device. The operations may comprise automatically playing the two updated distinct processed hearing loss shape stimuli to a user. The operations may comprise automatically updating a preferred hearing loss shape setting for the user based on a selected preference by the user.
In some embodiments, operations may comprise automatically updating a third set of sound processing parameter settings based on a preferred fine resolution setting. The operations may comprise automatically processing a sound file with the updated third set of sound processing parameter settings to generate two updated distinct processed fine resolution stimuli with a hearing loss shape through employment of a sound processing device. The operations may comprise automatically playing the two updated distinct processed fine resolution stimuli with the hearing loss shape to a user. The operations may comprise automatically updating a preferred fine resolution setting for the hearing loss shape for the user based on a selected preference by the user.
In some embodiments, a second set of sound processing parameter settings may be based on hearing loss shape characteristics and an overall gain setting determined from a preferred shape setting.
In some embodiments, operations may comprise automatically communicating the hearing aid parameters to a hearing aid device.
Embodiments consistent with the present disclosure may include a method for configuring a hearing aid device. The method may comprise automatically selecting a sound file from a sound database. The method may comprise automatically processing the sound file with a first set of sound processing parameter settings. The first set of sound processing parameter settings may be employed to generate two distinct processed level stimuli. The two distinct processed level stimuli may be generated through employment of a sound processing device. The method may comprise automatically playing the two distinct processed level stimuli to a patient or a user of a hearing aid. The method may comprise automatically recording a preferred coarse setting by the patient or the user. The method may comprise automatically processing the sound file with a second set of sound processing parameter settings to generate two distinct processed shape stimuli through employment of the sound processing device. The method may comprise automatically playing the two distinct processed shape stimuli to the patient or the user. The method may comprise automatically recording a preferred shape setting by the patient or the user. The method may comprise automatically processing the sound file with a third set of sound processing parameter settings to generate two distinct processed fine resolution stimuli with the hearing loss shape through employment of the sound processing device. The method may comprise automatically playing the two distinct processed fine resolution stimuli with the hearing loss shape to the patient or the user. The method may comprise automatically recording a preferred fine resolution setting for the hearing loss shape by the patient or the user. The method may comprise automatically determining hearing aid parameter settings for the patient or the user based on the fine resolution setting.
In some embodiments, a method may comprise automatically updating a first set of sound processing parameter settings based on a preferred coarse setting. The method may comprise automatically processing a sound file with the updated first set of sound processing parameter settings to generate two updated distinct processed level stimuli through employment of a sound processing device. The method may comprise automatically playing the two updated distinct processed level stimuli to a user. The method may comprise automatically updating the preferred coarse setting for the user based on a selected preference by the user.
In some embodiments, a method may comprise automatically updating a second set of sound processing parameter settings based on a preferred hearing loss shape setting. The method may comprise automatically processing a sound file with the updated second set of sound processing parameter settings to generate two updated distinct processed hearing loss shape stimuli through employment of a sound processing device. The method may comprise automatically playing the two updated distinct processed hearing loss shape stimuli to a user. The method may comprise automatically updating a preferred hearing loss shape setting for the user based on a selected preference by the user.
In some embodiments, a method may comprise automatically updating a third set of sound processing parameter settings based on a preferred fine resolution setting. The method may comprise automatically processing a sound file with the updated third set of sound processing parameter settings to generate two updated distinct processed fine resolution stimuli with a hearing loss shape through employment of a sound processing device. The method may comprise automatically playing the two updated distinct processed fine resolution stimuli with the hearing loss shape to a user. The method may comprise automatically updating a preferred fine resolution setting for the hearing loss shape for the user based on a selected preference by the user.
In some embodiments, a method may comprise automatically communicating the hearing aid parameters to a hearing aid device.
Embodiments consistent with the present disclosure may include Hearing aids Objective Outcomes (HO2).
Some embodiments may include narrow and broad phonetic confusion matrices. Phonetic confusion matrices may be organized by phone, by place of articulation, by manner of articulation, and/or the like. Examples of place of articulation include but are not limited to bilabials, alveolars, and velars. Examples of manner of articulation include but are not limited to stops, fricatives, affricates, glides, and vowels.
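For illustration only, the sketch below builds a narrow (phone-level) confusion matrix from presented/perceived phone pairs and collapses it into a broad matrix over manner of articulation. The response data and the phone-to-manner mapping are toy assumptions.

```python
# Sketch: build a narrow (phone-level) confusion matrix from (presented,
# perceived) phone pairs, then collapse it into a broad matrix over manner
# of articulation. Data and the phone-to-manner map are toy assumptions.
from collections import Counter

MANNER = {"p": "stop", "b": "stop", "t": "stop", "d": "stop",
          "s": "fricative", "z": "fricative", "f": "fricative",
          "w": "glide", "j": "glide",
          "a": "vowel", "i": "vowel", "u": "vowel"}

responses = [("p", "b"), ("p", "p"), ("s", "f"), ("s", "s"),
             ("t", "d"), ("a", "a"), ("i", "i"), ("f", "s")]

narrow = Counter(responses)                                      # phone-level confusions
broad = Counter((MANNER[p], MANNER[q]) for p, q in responses)    # manner-level confusions

print("narrow:", dict(narrow))
print("broad :", dict(broad))
# Off-diagonal mass in either matrix would contribute to a QOF assessment.
```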
Some embodiments may include searching for a hearing aid prescription. A hearing aid prescription may be searched in the intervention space with a hearing-impaired patient or a user of a hearing aid participating in the search. Systems and methods configured to search for a hearing aid prescription in the intervention space may be configured to search in the quantized auditory perceptual space ϕ′ with a finite number of audiograms. Systems and methods configured for hearing aid solution searching may be configured to create the intervention space for a given hearing aid sound processing a and a given hearing aid fitting protocol k. Systems and methods configured for hearing aid solution searching may be configured to provide a one-to-one correspondence between a quantized auditory perceptual space and an intervention space. This one-to-one correspondence may provide a more optimal hearing parameter search for the patient or the user than conventional systems and methods.
Some embodiments may include TTF analytics. TTF analytics may comprise response times to Ψp(Ay,By) as a distance between ϕm and ϕn changes in equation 16 (see
Some embodiments may be employed to evaluate outcomes for conventional audiogram-based fitting and compare them to outcomes of disclosed systems and methods.
Some embodiments may be configured for multi-lingual support. Hearing aid parameters may be optimized for a dominant language over another language. Hearing aid parameters may be optimized for more than one specific language, for example, a language in which the patient or user has less competence than in their dominant language. Hearing aid parameters may be based on one or more multi-lingual preferences of a hearing-impaired patient or a user of a hearing aid. Some embodiments may be employed for real-time language translation.
Reference will now be made in detail to example embodiments, examples of which are illustrated in the accompanying drawings and disclosed herein. The disclosed embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosed embodiments. It is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the disclosed embodiments. Thus, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
In some embodiments, a process for hearing aid fitting may comprise an initial condition. A process for hearing aid fitting may be based on a cost function. A process for hearing aid fitting may comprise a terminating condition. A process for hearing aid fitting may comprise multiple trials. Each trial may comprise a single traversal from LM/AI 170, to HO2 110, to Intervention 120, to Patient 140, and back to LM/AI 170. A result of the process for hearing aid fitting may comprise a patient-selected audiogram or PTA 180. A result of the process for hearing aid fitting may comprise corresponding outcomes 160. Corresponding outcomes may comprise MSI, QOF, TTF, and/or TTF analytics. A result of the process for hearing aid fitting may comprise patient-selected hearing aid parameters 150.
In some embodiments, LM/AI 170 may be configured to access and/or receive MSI information for a given set of stimuli. LM/AI 170 may be configured to access and/or receive QOF information.
In some embodiments, HO2 Stimuli 110 may comprise multiple stimuli sj in a set of stimuli S. A stimulus may comprise audio information. A stimulus may comprise visual information. A stimulus may comprise an audio transcription. A stimulus may comprise “Closed Captions”. A stimulus sj in the set of stimuli S may be selected based on one or more previous responses from a hearing-aid patient or a user of a hearing aid. A stimulus sj in the set of stimuli S may be selected at random.
In some embodiments, interventions 120 may comprise instructions configured to modify one or more hearing aid parameters. The hearing aid parameters may span a plurality of auditory frequency bands. Additional stimuli may be presented to a hearing-impaired patient or a user of a hearing aid with modified hearing aid parameters applied.
In some embodiments, during a process for hearing aid fitting, a hearing-impaired patient or a user of a hearing aid may be presented with choices for selection. Their selections may be based on a perceived hearing loss level. The choices may be based on a hearing loss level. The hearing loss level stimuli may be within 10 to 15 dB of hearing loss from the perceived hearing loss level. A hearing loss shape may be based on a correction in gain in each of a plurality of frequency bands. Hearing loss shapes may be classified into shape types. The choices may be based on a fine fit. The fine fit may be within 15 to 20 dB from the perceived hearing loss level and/or the hearing loss shape.
The process for coarse hearing loss level searching may set up sound processing parameters for the next iteration at 520. When no prior information is available about a user, the process for coarse hearing loss level searching may perform 522. For subsequent iterations, 521 may be performed. The process for coarse hearing loss level searching may initialize stimuli A and B at 522. The initial stimuli may be based on processing a sound file using parameters A0P and B0P, respectively. An iteration i may be set to 0 at 522. In one embodiment, A0P may be based on a normal hearing loudness level. For example, a normal hearing loudness level may comprise 10 dB hearing loss thresholds. In one embodiment, B0P=A0P+Xdiff. This may make B0P louder than A0P by Xdiff, for example 10 dB. The process for coarse hearing loss level searching may receive AiP and BiP from 571 at 521. In one embodiment, AiP and BiP are based on Ai−1P and Bi−1P. In one embodiment, Xdiff may be adaptive based on one or more past responses and a time taken by a user for a response, i.e., Xdiff=f(r, t) where r refers to past responses and t refers to the corresponding response times. In one embodiment, Xup may be adaptive based on one or more past responses and a time taken by a user for a response, i.e., Xup=f(r, t). In one embodiment, Xdown may be adaptive based on one or more past responses and a time taken by a user for a response, i.e., Xdown=f(r, t). Ai+1P and Bi+1P from 571 may be modified based on the adapted Xdiff, Xup, and Xdown.
The process for coarse hearing loss level searching may automatically process a sound file to generate stimuli A and B using parameters AiP and BiP, respectively at 530. The process for coarse hearing loss level searching may automatically play the stimuli A and B. In one embodiment, a user is provided Play|Pause|Rewind|Fast Forward controls for both stimuli A and B. In one embodiment, labels for stimuli A and B may be randomized. For consistency, stimuli A and B may be assigned the roles of softer and louder stimuli, respectively.
The process for coarse hearing loss level searching may automatically record the preferred loudness level by the user and the response time t at 540. The preferred loudness level may be correlated with the hearing loss level of the user. The response time t may be correlated with the difficulty in determining A versus B preference. In one embodiment, the selected level may be based on A versus B choice, typically referred to as 2 Alternate Forced Choice (2AFC). In one embodiment, the selected level may be based on an N-point Likert scale reflecting the magnitude of preference of one stimulus over the other. In one embodiment, the selected level may be based on a bounded, continuous function such as between ±1.0, reflecting the magnitude of preference of one stimulus over the other. In one embodiment, a response time t may be recorded and associated with the corresponding preference selection.
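For illustration only, the sketch below records a preference response in any of the three formats mentioned above (2AFC, N-point Likert, or a bounded continuous value) together with its response time t. The field and function names are hypothetical.

```python
# Sketch: one record type covering 2AFC, Likert, and bounded-continuous
# preference responses, each paired with its response time. Names are hypothetical.
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class PreferenceResponse:
    chosen: str                         # "A" or "B" (2AFC)
    likert: Optional[int] = None        # e.g. -3..+3 magnitude of preference
    continuous: Optional[float] = None  # bounded, e.g. -1.0..+1.0
    response_time_s: float = 0.0

def record_response(prompt_started_at: float, chosen: str,
                    likert: Optional[int] = None,
                    continuous: Optional[float] = None) -> PreferenceResponse:
    return PreferenceResponse(chosen, likert, continuous,
                              response_time_s=time.monotonic() - prompt_started_at)

t0 = time.monotonic()
resp = record_response(t0, chosen="B", likert=2)   # "B somewhat preferred"
print(resp)
```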
The process for coarse hearing loss level searching may evaluate the user response at 550. Evaluation may be based on IF the user selected the softer stimulus A, THEN perform 551. Evaluation may be based on IF the user selected the louder stimulus B, THEN perform 552.
The process for coarse hearing loss level searching may decrease loudness levels by Xdown. Ai+1P=AiP−Xdown; and Bi+1P=BiP−Xdown at 551.
The process for coarse hearing loss level searching may increase the loudness levels by Xup. Bi+1P=BiP+Xup and Ai+1P=AiP+Xup at 552.
The process for coarse hearing loss level searching may check if there was a change in the preference direction at 560. IF the user selected louder stimulus B in iteration i−1 AND softer stimulus A in iteration i, THEN perform 561. IF the user selected softer stimulus A in iteration i−1 AND louder stimulus B in iteration i, THEN perform 562.
The process for coarse hearing loss level searching may record maximum inflection point at 561.
The process for coarse hearing loss level searching may record minimum inflection point at 562.
The process for coarse hearing loss level searching may check for termination of the Coarse Level Selection at 570. Termination may be based on IF at least k sets of Maximum and Minimum inflection points have been automatically recorded, THEN perform 580. Termination may be based on IF less than k sets of Maximum and Minimum inflection points have been recorded, THEN perform 571. In one embodiment, k=1.
The process for coarse hearing loss level searching may set iteration i=i+1 at 571. The process for coarse hearing loss level searching may communicate AiP and BiP to 521.
The process for coarse hearing loss level searching may exit at 580. The process for coarse hearing loss level searching may compute the preferred coarse setting as the average of k maximum inflection and k minimum inflection values at 580.
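For illustration only, the sketch below combines steps 520 through 580 into one coarse level search: the (A, B) pair is stepped up or down according to the user's choices, inflection points are recorded when the preference direction reverses, and the search terminates after k maximum/minimum pairs, returning their average. The simulated listener, starting levels, and step sizes are assumptions; adaptive Xdiff, Xup, and Xdown as functions of past responses and response times are omitted for brevity.

```python
# Hedged sketch of the coarse hearing loss level search (steps 520-580):
# step the softer/louder pair (A, B) up or down with the user's choices,
# record inflection points on direction changes, stop after k max/min pairs,
# and return the average of the recorded inflection values.
def coarse_level_search(prefers_louder, a0=10.0, x_diff=10.0,
                        x_up=5.0, x_down=5.0, k=1, max_iters=50):
    a, b = a0, a0 + x_diff            # 522: initial levels (dB), B louder by Xdiff
    maxima, minima, prev_choice = [], [], None
    for _ in range(max_iters):
        choice = "B" if prefers_louder(a, b) else "A"   # 530/540: play pair, record choice
        if choice == "A":             # 551: softer preferred -> step both down
            a, b = a - x_down, b - x_down
        else:                         # 552: louder preferred -> step both up
            a, b = a + x_up, b + x_up
        if prev_choice == "B" and choice == "A":        # 561: maximum inflection point
            maxima.append(b)
        elif prev_choice == "A" and choice == "B":      # 562: minimum inflection point
            minima.append(a)
        prev_choice = choice
        if len(maxima) >= k and len(minima) >= k:       # 570: termination check
            break
    values = maxima[:k] + minima[:k]
    return sum(values) / len(values)                    # 580: preferred coarse setting

# Usage: a simulated listener whose true comfortable level is 42 dB prefers
# whichever stimulus lies closer to that level (a stand-in for real responses).
preferred = coarse_level_search(lambda a, b: abs(b - 42) < abs(a - 42))
print(round(preferred, 1))
```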
The World Health Organization (WHO) has classified the degree of hearing loss or hearing impairment as shown in the chart below, using 4fPTA, computed as shown in Equation 4 (see
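Equation 4 is not reproduced here; conventionally, the four-frequency pure tone average (4fPTA) is the mean of the hearing thresholds at 500, 1000, 2000, and 4000 Hz. For illustration only, the sketch below computes that average and applies grading cut-offs that are assumptions standing in for the WHO chart, which is likewise not reproduced.

```python
# Sketch: four-frequency pure tone average (4fPTA) as the mean of the hearing
# thresholds at 500, 1000, 2000, and 4000 Hz. The grading cut-offs below are
# illustrative stand-ins for the WHO chart, which is not reproduced here.
def four_freq_pta(thresholds_db_hl):
    """thresholds_db_hl: dict mapping frequency (Hz) to hearing threshold (dB HL)."""
    return sum(thresholds_db_hl[f] for f in (500, 1000, 2000, 4000)) / 4.0

def grade(pta_db):
    # Illustrative bands only (assumption), ordered from least to most severe.
    bands = [(20, "normal"), (35, "mild"), (50, "moderate"),
             (65, "moderately severe"), (80, "severe"), (95, "profound")]
    for upper, label in bands:
        if pta_db < upper:
            return label
    return "complete / total"

audiogram = {250: 20, 500: 30, 1000: 40, 2000: 50, 4000: 60, 8000: 70}
pta = four_freq_pta(audiogram)
print(pta, grade(pta))   # 45.0 -> "moderate" under these illustrative cut-offs
```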
Equation 6 (see
A conventional master hearing aid H, with a specific sound processing algorithm a, may be expressed as Ha. The master hearing aid may comprise one or more core signal processing modules similar to those included in a digital hearing aid. The master hearing aid may comprise a simulated hearing aid. Equations 8, 9, and 10 (see
Equation 13 (see
Equation 14 (see
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
In this specification, “a” and “an” and similar phrases are to be interpreted as “at least one” and “one or more.” References to “a”, “an”, and “one” are not to be interpreted as “only one”. In this specification, the term “may” is to be interpreted as “may, for example.” In other words, the term “may” is indicative that the phrase following the term “may” is an example of one of a multitude of suitable possibilities that may, or may not, be employed to one or more of the various embodiments. In this specification, the phrase “based on” is indicative that the phrase following the term “based on” is an example of one of a multitude of suitable possibilities that may, or may not, be employed to one or more of the various embodiments. References to “an” embodiment in this disclosure are not necessarily to the same embodiment.
Many of the elements described in the disclosed embodiments may be implemented as modules. A module is defined here as an isolatable element that performs a defined function and has a defined interface to other elements. The modules described in this disclosure may be implemented in hardware, a combination of hardware and software, firmware, wetware (i.e. hardware with a biological element), or a combination thereof, all of which are behaviorally equivalent. For example, modules may be implemented using computer hardware in combination with software routine(s) written in a computer language (e.g., Java, HTML, XML, PHP, Python, ActionScript, JavaScript, Ruby, Prolog, SQL, VBScript, Visual Basic, Perl, C, C++, Objective-C, or the like). Additionally, it may be possible to implement modules using physical hardware that incorporates discrete or programmable analog, digital and/or quantum hardware. Examples of programmable hardware include: computers, microcontrollers, microprocessors, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and complex programmable logic devices (CPLDs). Computers, microcontrollers, and microprocessors are programmed using languages such as assembly, C, C++, or the like. FPGAs, ASICs, and CPLDs are often programmed using hardware description languages (HDL) such as VHSIC hardware description language (VHDL) or Verilog that configure connections between internal hardware modules with lesser functionality on a programmable device. Finally, it needs to be emphasized that the above mentioned technologies may be used in combination to achieve the result of a functional module.
Some embodiments may employ processing hardware. Processing hardware may include one or more processors, computer equipment, embedded system, machines, and/or the like. The processing hardware may be configured to execute instructions. The instructions may be stored on a machine-readable medium. According to some embodiments, the machine-readable medium (e.g. automated data medium) may be a medium configured to store data in a machine-readable format that may be accessed by an automated sensing device. Examples of machine-readable media include: flash memory, memory cards, electrically erasable programmable read-only memory (EEPROM), solid state drives, optical disks, barcodes, magnetic ink characters, and/or the like.
While various embodiments have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein without departing from the spirit and scope. In fact, after reading the above description, it will be apparent to one skilled in the relevant art(s) how to implement alternative embodiments. Thus, the present embodiments should not be limited by any of the above described example embodiments. In particular, it should be noted that, for example purposes, hearing aid fitting systems may include a server and a mobile device. However, one skilled in the art will recognize that the server and mobile device may vary from a traditional server/device relationship over a network such as the internet. For example, a server may be collective based: portable equipment, broadcast equipment, virtual, application(s) distributed over a broad combination of computing sources, part of a cloud, and/or the like. Similarly, for example, a mobile device may be a user based client, portable equipment, broadcast equipment, virtual, application(s) distributed over a broad combination of computing sources, part of a cloud, and/or the like. Additionally, it should be noted that, for example purposes, several of the various embodiments were described as comprising operations. However, one skilled in the art will recognize that many various languages and frameworks may be employed to build and use embodiments of the present invention.
In this specification, various embodiments are disclosed. Limitations, features, and/or elements from the disclosed example embodiments may be combined to create further embodiments within the scope of the disclosure. Moreover, the scope includes any and all embodiments having equivalent elements, modifications, omissions, adaptations, or alterations based on the present disclosure. Further, aspects of the disclosed methods can be modified in any manner, including by reordering aspects, or inserting or deleting aspects.
In addition, it should be understood that any figures that highlight any functionality and/or advantages, are presented for example purposes only. The disclosed architecture is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown. For example, the blocks presented in any flowchart may be re-ordered or only optionally used in some embodiments.
Furthermore, many features presented above are described as being optional through the use of “may” or the use of parentheses. For the sake of brevity and legibility, the present disclosure does not explicitly recite each and every permutation that may be obtained by choosing from the set of optional features. However, the present disclosure is to be interpreted as explicitly disclosing all such permutations. For example, a system described as having three optional features may be embodied in seven different ways, namely with just one of the three possible features, with any two of the three possible features, or with all three of the three possible features.
Further, the purpose of the Abstract of the Disclosure is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract of the Disclosure is not intended to be limiting as to the scope in any way.
Finally, it is the applicant's intent that only claims that include the express language “means for” or “step for” be interpreted under 35 U.S.C. 112. Claims that do not expressly include the phrase “means for” or “step for” are not to be interpreted under 35 U.S.C. 112.
Number | Date | Country
---|---|---
63544127 | Oct 2023 | US