System for aiding hearing and method for use of same

Information

  • Patent Grant
  • Patent Number
    12,108,220
  • Date Filed
    Friday, April 12, 2024
  • Date Issued
    Tuesday, October 1, 2024
  • Inventors
  • Examiners
    • Dabney; Phylesha
  • Agents
    • Griggs; Scott
    • Griggs Bergen LLP
Abstract
A system for aiding hearing and a method for use of the same are disclosed. In one embodiment, a hearing aid device equipped with sound processing capabilities, including a microphone, a speaker, and an electronic signal processor, wirelessly communicates with a smart device. The electronic signal processor is adaptable based on a custom audiogram stored at the hearing aid device, which can be dynamically adjusted through a smart device application. Patients can directly influence their hearing experience by modifying the audiogram via the app, adjusting settings like directional microphone activation, noise cancellation levels, and amplification for specific frequency ranges. This method allows users to tailor their hearing aid settings to their immediate environment and personal hearing needs, ensuring an optimized auditory experience. The system emphasizes user control and adaptability, offering a significant advancement in hearing aid technology.
Description
TECHNICAL FIELD OF THE INVENTION

This invention relates, in general, to hearing tests and systems for aiding hearing and, in particular, to systems for aiding hearing, hearing aids, and methods for use of the same that provide hearing testing as well as signal processing and feature sets to enhance speech and sound intelligibility.


BACKGROUND OF THE INVENTION

Traditionally, the management of hearing loss has been anchored in a process that confines the crucial step of audiogram assessment and fitting to specialized test facilities. This conventional approach necessitates that individuals seeking hearing aid adjustments must physically visit these facilities to undergo testing, followed by the fitting of the hearing aid according to the newly assessed audiogram. In the event of any changes in the patient's hearing capabilities or dissatisfaction with the hearing aid's performance, the cycle necessitates a return to the test facility for reassessment. This process not only imposes significant logistical challenges but also delays the optimization of hearing aid settings to accommodate evolving patient needs. Hence, there is a burgeoning need for innovative hearing aids and methodologies that transcend these traditional constraints, offering patients the flexibility to tailor their hearing experience directly, without the repeated need to revert to test facilities for adjustments.


SUMMARY OF THE INVENTION

This application introduces a transformative approach to existing in-situ hearing aid technology, fundamentally redefining the audiogram's role and the concept of hearing testing. Unlike traditional hearing care systems, where audiograms are generated and stored at test facilities, requiring patients to visit for testing, fitting, and subsequent adjustments, this embodiment embeds the audiogram directly within the hearing aid itself and enables real-time, user-driven audiogram adjustments via a smart device, for example, effectively making the journey to test facilities for adjustments obsolete. "Testing" is reinterpreted to mean generating an up-to-the-moment audiogram through the user's smart device, allowing for immediate customization of the hearing aid's settings based on current environmental needs and personal hearing preferences. For example, a user troubled by excessive high-frequency sounds can instantly adjust the frequency settings through their smart device and upload the new audiogram directly to their hearing aid, bypassing traditional processes and devices.


Moreover, the innovation extends to the smart device conducting actual hearing tests by playing harmonics, for example, integrating seamlessly with the hearing aid to refine the audiogram. This direct integration challenges existing in-situ hearing aids, which rely on external test facilities and on real-time communication between smart devices and hearing aids that is hampered by time delays. Further, systems and methodology are presented for dissecting the frequency range into distinct frequency segments and managing each as an independent entity. This approach enhances the precision and customization of the hearing aid.


In one embodiment, a hearing aid device equipped with sound processing capabilities, including a microphone, a speaker, and an electronic signal processor, wirelessly communicates with a smart device. The electronic signal processor is adaptable based on a custom audiogram, which can be dynamically adjusted through a smart device application and which may be stored on the hearing aid device. Patients can directly influence their hearing experience by modifying the audiogram via the app, adjusting settings like directional microphone activation, noise cancellation levels, and amplification for specific frequency ranges. This method allows users to tailor their hearing aid settings to their immediate environment and personal hearing needs, ensuring an optimized auditory experience. The system emphasizes user control and adaptability, offering a significant advancement in hearing aid technology. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the features and advantages of the present invention, reference is now made to the detailed description of the invention along with the accompanying figures in which corresponding numerals in the different figures refer to corresponding parts and in which:



FIG. 1A is a front perspective schematic diagram depicting one embodiment of a hearing aid device being utilized according to the teachings presented herein;



FIG. 1B is a top plan view depicting the hearing aid device of FIG. 1A being utilized according to the teachings presented herein;



FIG. 2 is a front perspective view of another embodiment of a hearing aid device according to the teachings presented herein;



FIG. 3 is a functional block diagram depicting one embodiment of the hearing aid device shown herein;



FIG. 4 is a functional block diagram depicting one embodiment of a smart device shown in FIG. 1A, which may form a pairing with the hearing aid device;



FIG. 5 is a functional block diagram depicting one embodiment of a system for aiding hearing according to the teachings presented herein;



FIG. 6 is a first audiogram being utilized by the system presented herein;



FIG. 7 is a second audiogram being utilized by the system presented herein;



FIG. 8 is a third audiogram being utilized by the system presented herein;



FIG. 9 is a fourth audiogram being utilized by the system presented herein;



FIG. 10 is a modal diagram depicting different operational modes of the system presented herein;



FIG. 11 is a flow chart depicting one embodiment of a method for aiding hearing being utilized according to the teachings presented herein; and



FIG. 12 is a flow chart depicting another embodiment of a method for aiding hearing being utilized according to the teachings presented herein.





DETAILED DESCRIPTION OF THE INVENTION

While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts, which can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention, and do not delimit the scope of the present invention.


Referring initially to FIG. 1A and FIG. 1B, therein is depicted one embodiment of a system for aiding hearing, which is schematically illustrated and designated 10. As shown, a user U, who may be considered a patient requiring a hearing aid, is wearing a hearing aid device 12 and sitting at a table T at a restaurant or café, for example, and engaged in a conversation with an individual I1 and an individual I2. As part of a conversation at the table T, the user U is speaking sound S1, the individual I1 is speaking sound S2, and the individual I2 is speaking sound S3. Nearby, in the background, a bystander B1 is engaged in a conversation with a bystander B2. The bystander B1 is speaking sound S4 and the bystander B2 is speaking sound S5. An ambulance A is driving by the table T and emitting sound S6. The sounds S1, S2, and S3 may be described as the immediate, or foreground, sounds. The sounds S4, S5, and S6 may be described as the background sounds. The sound S6 may be described as the dominant sound as it is the loudest sound at the table T. By way of example, the ambulance A and the sound S6 are originating on the left side of the user U, and the sound is appropriately distributed at the hearing aid device 12 to reflect this occurrence, as indicated by an arrow L.


In some embodiments, the hearing aid device 12 seamlessly integrates with a proximate smart device 14—such as a smartphone, smartwatch, tablet computer, or wearable. This integration is facilitated through a user-friendly interface displayed on the smart device 14, which hosts a range of intuitive controls including volume adjustments, operational mode selections, and real-time audiogram customization features. The user can effortlessly transmit control signals wirelessly from the smart device 14 to the hearing aid device 12, enabling immediate changes to volume, operational modes such as directional sound focus, noise cancellation levels, and audiogram adjustments, for example. This direct interaction heralds a significant shift towards user-empowered hearing aid management, allowing for on-the-spot modifications tailored to the user's specific auditory environment and personal preferences.


Central to this system is a programming interface that establishes a dynamic communication channel between the hearing aid device 12 and the smart device 14. This bidirectional interface supports the direct adjustment and real-time customization of the hearing aid's settings via an application on the smart device 14. The application empowers users to actively manage and fine-tune their hearing experience, from adjusting an audiogram 20 that may be stored on the hearing aid device 12 and displayed on the smart device 14, to selecting specific sound processing features. This level of customization and control directly from the user's smart device is unprecedented, moving beyond the traditional confines of hearing aid management and setting a new standard for personal auditory assistance. Further, this programming interface is an extensible architecture configured to integrate additional operational modes and functionalities beyond those described, wherein the architecture enables the seamless addition of features and enhancements derived from future scientific achievements, thereby allowing for continuous improvement and expansion of the system's capabilities in response to evolving user needs and technological advances.


Furthermore, this system addresses the concept of auditory testing and audiogram customization. Utilizing the smart device application, users can conduct on-demand auditory tests, thereby transforming any location into a potential test environment. This capability enables users to create and adjust their audiograms in real time, based on immediate hearing assessments and environmental conditions. This approach stands in stark contrast to conventional methods reliant on static, infrequently updated audiograms and limited adaptability. By placing the power of audiogram customization and auditory testing directly in the hands of the user, the system offers unparalleled flexibility and personalization in hearing aid technology, marking a significant leap forward from existing practices.


Referring to FIG. 2, therein is depicted an embodiment of the hearing aid device 12. As shown, in the illustrated embodiment, the hearing aid device 12 includes a body 62 having a microphone stand 64 extending from the body 62 and an ear mold 66 on the other side of the body 62. The body 62 and the ear mold 66 may each at least partially conform to the contours of the external ear and be sized to engage therewith. By way of example, the body 62 may be sized to engage with the contours of the ear. The ear mold 66 may be sized to be fitted for the physical shape of a patient's ear. The microphone stand 64 holds a microphone 68, which gathers sound and converts the gathered sound into an electrical signal. An opening 70 within the ear mold 66 permits sound to exit into the patient's ear. An internal compartment 72 provides space for housing electronics, which will be discussed in further detail hereinbelow. Various controls 74 on the body 62 provide a patient interface with the hearing aid device 12.


As alluded, the hearing aid device 12 may be a vivo adaptare device, adapting to the living within the living, that incorporates the functionality to not only assist hearing but also to conduct auditory tests directly within the user's ear, where the hearing aid device 12 is situated. This means the hearing aid can generate, adjust, and apply audiograms, which are personalized hearing profiles based on the user's specific hearing capabilities and environmental conditions, without the need to remove the device or visit a professional audiologist for testing in a separate facility.


With this arrangement, the systems and methods presented herein allow the hearing aid to assess hearing capabilities in the actual environment—dynamically—where the user listens, providing more accurate and personalized results than traditional, clinic-based audiograms. Unlike traditional hearing aids, which are programmed using audiograms obtained from clinical tests, this vivo adaptare hearing aid embodiment can generate and update audiograms automatically. This process may involve playing a series of harmonic tones directly into the ear through the hearing aid and measuring the user's responses to these sounds, effectively mapping out the user's hearing profile in real-time. Once the audiogram is generated or updated, the hearing aid device 12 can immediately adjust its settings to match the user's current hearing needs. This dynamic approach allows users to have their hearing aids adjusted for optimal performance across different listening environments, such as moving from a quiet room to a noisy outdoor setting.
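By way of a non-limiting illustration, the in-situ test loop described above may be sketched in software as follows. The ascending-level threshold search, the particular test frequencies, and the user_heard callback, which stands in for playing a harmonic tone through the speaker and capturing the user's response on the smart device, are assumptions of this sketch rather than details drawn from the embodiments herein.

```python
# Illustrative sketch only: the threshold search and the user_heard
# callback are hypothetical stand-ins for tone playback and response capture.

TEST_FREQUENCIES_HZ = [250, 500, 1000, 2000, 4000, 8000]

def run_hearing_test(user_heard, levels_db=range(0, 85, 5)):
    """Find, per test tone, the quietest level the user reports hearing.

    Returns an audiogram mapping frequency (Hz) to threshold (dB HL),
    or None for a frequency never heard within the tested range.
    """
    audiogram = {}
    for freq in TEST_FREQUENCIES_HZ:
        threshold = None
        for level in levels_db:          # ascending presentation levels
            if user_heard(freq, level):  # user confirms hearing the tone
                threshold = level
                break
        audiogram[freq] = threshold
    return audiogram

# Simulated user with mild high-frequency loss (hypothetical thresholds).
simulated = {250: 10, 500: 10, 1000: 15, 2000: 25, 4000: 40, 8000: 55}
result = run_hearing_test(lambda f, db: db >= simulated[f])
```

Once such an audiogram is produced, the hearing aid device 12 can apply it immediately, as described above.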


Referring now to FIG. 3, an illustrative embodiment of the internal components of the hearing aid device 12 is depicted. In one embodiment, within the internal compartment 72, an electronic signal processor 130 may be housed. The hearing aid device 12 may include an electronic signal processor 130 for each ear, or the electronic signal processors 130 for each ear may be at least partially integrated or fully integrated. With respect to FIG. 2, within the internal compartment 72 of the body 62, the electronic signal processor 130 is housed, which, as indicated, may include non-transitory memory. In order to measure, filter, compress, and generate, for example, continuous real-world analog signals in the form of sounds, the electronic signal processor 130 may include an analog-to-digital converter (ADC) 132, a digital signal processor (DSP) 134, and a digital-to-analog converter (DAC) 136. In some embodiments, the electronic signal processor 130, including the digital signal processor embodiment, has memory accessible to a processor. One or more microphone inputs 138 corresponding to one or more respective microphones, a speaker output 140, various controls, such as hearing aid controls 142, 144, an induction coil 146, a battery 148, and a transceiver 150 are also housed within the hearing aid device 12.
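The ADC-DSP-DAC chain described above may be sketched as follows. The 16-bit quantization depth and the simple gain stage standing in for the DSP 134 are illustrative assumptions of this sketch, not specifics of the embodiment.

```python
# Minimal sketch of the ADC -> DSP -> DAC signal chain; the 16-bit
# quantization and the simple gain stage are illustrative assumptions.

def adc(analog_samples, bits=16):
    """Quantize analog samples (floats in [-1, 1]) to signed integers."""
    full_scale = 2 ** (bits - 1) - 1
    return [round(max(-1.0, min(1.0, s)) * full_scale) for s in analog_samples]

def dsp(digital_samples, gain=2.0, bits=16):
    """A gain stage with clipping, standing in for audiogram-driven processing."""
    full_scale = 2 ** (bits - 1) - 1
    return [max(-full_scale, min(full_scale, round(s * gain))) for s in digital_samples]

def dac(digital_samples, bits=16):
    """Convert processed integers back to analog-range floats for the speaker."""
    full_scale = 2 ** (bits - 1) - 1
    return [s / full_scale for s in digital_samples]

mic_signal = [0.0, 0.1, -0.2, 0.4]   # samples arriving from a microphone input
speaker_signal = dac(dsp(adc(mic_signal)))
```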


As shown, a signaling architecture communicatively interconnects the microphone inputs 138 to the electronic signal processor 130 and the electronic signal processor 130 to the speaker output 140. The various hearing aid controls 142, 144, the induction coil 146, the battery 148, and the transceiver 150 are also communicatively interconnected to the electronic signal processor 130 by the signaling architecture. The speaker output 140 sends the sound output to a speaker or speakers to project sound, in particular acoustic signals in the audio frequency band, as processed by the hearing aid device 12. The hearing aid controls 142, 144 may include an ON/OFF switch as well as volume controls, for example. It should be appreciated, however, that in some embodiments, all control is manifested through the adjustment of the vivo adaptare audiogram. The induction coil 146 may receive magnetic field signals in the audio frequency band from a telephone receiver or a transmitting induction loop, for example, to provide a telecoil functionality. The induction coil 146 may also be utilized to receive remote control signals encoded on a transmitted or radiated electromagnetic carrier with a frequency above the audio band. Various programming signals from a transmitter may also be received via the induction coil 146 or via the transceiver 150, as will be discussed. The battery 148 provides power to the hearing aid device 12 and may be rechargeable or accessed through a battery compartment door (not shown), for example. The transceiver 150 may be internal, external, or a combination thereof to the housing. Further, the transceiver 150 may be a transmitter/receiver, a receiver, or an antenna, for example. Communication between various smart devices and the hearing aid device 12 may be enabled by a variety of wireless methodologies employed by the transceiver 150, including 802.11, 3G, 4G, Edge, Wi-Fi, ZigBee, near field communications (NFC), Bluetooth low energy, and Bluetooth, for example.


The various controls and inputs and outputs presented above are exemplary, and it should be appreciated that other types of controls may be incorporated in the hearing aid device 12. Moreover, the electronics and form of the hearing aid device 12 may vary. The hearing aid device 12 and associated electronics may include any type of headphone configuration, a behind-the-ear configuration, or an in-the-ear configuration, for example.


Referencing FIG. 3, the electronic signal processor 130 within the hearing aid device 12 is engineered to work with a dynamically customizable audiogram, allowing for personalization in hearing aid technology. This innovative approach permits the user U to adjust the audiogram in real time via the smart device 14 to suit their unique hearing preferences and the specific demands of their auditory environment. The electronic signal processor 130 within the hearing aid device 12 allows for a range of adjustments to suit the user's individual hearing preferences and environmental needs. Furthermore, the system supports on-demand auditory testing via the smart device, a feature that empowers users to continuously refine their audiogram settings. By assessing their hearing capabilities and environmental conditions in real time, users can achieve a more personalized and effective hearing aid performance, thereby significantly enhancing their quality of life. Further, in one embodiment, with respect to FIG. 3, the various controls 74 may include digital noise reduction, impulse noise reduction, and wind noise reduction, for example. As alluded to, system compatibility features, such as FM compatibility and Bluetooth compatibility, may be included in the hearing aid device 12.


Inside the hearing aid, the electronic signal processor 130 operates as a sophisticated computational unit, processing complex instructions stored within its memory. This memory, which can be either volatile for temporary data storage or non-volatile for long-term data retention, is crucial for the adaptive functionalities of the hearing aid. Upon execution of these instructions, the processor transforms input analog signals from the microphone into digital signals for advanced processing. This process includes an innovative step where the digital signal is adjusted based on a user-specific audiogram, incorporating a subjective assessment of sound quality. This unique feature allows the electronic signal processor 130 to personalize the audio output, tailoring it to the user's hearing profile by adjusting the signal to match the preferred hearing range identified in the audiogram. Consequently, the processor refines the digital signal into an optimized analog output, ready to be delivered to the user through the hearing aid's speaker. This end-to-end processing not only customizes sound based on individual preferences but also dynamically adapts to changing environmental conditions, significantly enhancing the auditory experience.
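The audiogram-based adjustment step described above can be illustrated with a short sketch. The half-gain fitting heuristic used here is a common audiological rule of thumb adopted as an assumption for this example; the embodiments above do not prescribe a particular fitting formula.

```python
# Sketch of mapping a stored audiogram to per-band amplification. The
# half-gain heuristic is an assumption of this example, not a detail
# of the embodiments described herein.

def gains_from_audiogram(audiogram_db_hl):
    """Derive a per-frequency gain (dB) as roughly half the measured loss."""
    return {freq: round(loss / 2.0, 1) for freq, loss in audiogram_db_hl.items()}

def apply_band_gain(sample, freq_hz, gains_db):
    """Scale one band-limited sample by its audiogram-derived linear gain."""
    gain_db = gains_db.get(freq_hz, 0.0)
    return sample * 10 ** (gain_db / 20.0)

# Hypothetical thresholds showing greater loss at higher frequencies.
gains = gains_from_audiogram({500: 20, 2000: 30, 4000: 50})
boosted = apply_band_gain(0.1, 4000, gains)  # 25 dB of gain at 4 kHz
```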


Further, the memory's processor-executable instructions extend the hearing aid's capabilities, enabling it to respond to various control signals for volume adjustment and mode selection. By way of example, these instructions facilitate the activation of specialized operational modes, such as directional sound focus, noise cancellation, and frequency amplification, allowing adjustments on a per-ear basis to suit different listening environments. Integration with a smart device is achieved through a wireless connection, enabled by the hearing aid's transceiver 150, which allows for real-time customization of settings via the smart device. This seamless interaction is made possible by a programming interface that supports the exchange of audiogram settings and control commands between the hearing aid and the smart device. This advanced communication empowers users to directly and effortlessly adjust their hearing aid settings, offering an unparalleled level of control and personalization. By enabling these functionalities, the hearing aid system evolves into a highly adaptable and user-centered device, capable of delivering a bespoke auditory experience that meets the unique needs of each individual.
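The exchange of control commands between the smart device and the hearing aid may be sketched as follows. The message fields, mode names, and JSON framing are our own illustration, as the embodiments above do not specify a wire format for the programming interface.

```python
# Hedged sketch of the app-to-aid control exchange; the message fields
# and JSON framing are illustrative assumptions, not a specified format.
import json

def make_control_message(mode, per_ear_settings):
    """Smart-device side: build a message selecting an operational mode per ear."""
    allowed = {"directional", "noise_cancellation", "frequency_amplification"}
    if mode not in allowed:
        raise ValueError(f"unknown mode: {mode}")
    return json.dumps({"type": "set_mode", "mode": mode, "ears": per_ear_settings})

def handle_control_message(raw, state):
    """Hearing-aid side: apply a received control message to device state."""
    msg = json.loads(raw)
    if msg["type"] == "set_mode":
        state["mode"] = msg["mode"]
        state["ears"].update(msg["ears"])
    return state

state = {"mode": "default", "ears": {"left": {}, "right": {}}}
raw = make_control_message("noise_cancellation", {"left": {"level": 3}})
state = handle_control_message(raw, state)
```

The per-ear settings dictionary reflects the per-ear adjustability noted above; in practice such a message would travel over the wireless link provided by the transceiver 150.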


Referring now to FIG. 4, the proximate smart device 14 may be a wireless communication device of the type including various fixed, mobile, and/or portable devices. To expand rather than limit the discussion of the proximate smart device 14, such devices may include, but are not limited to, cellular or mobile phones, smart tablet computers, smartwatches, wearables, and so forth. The proximate smart device 14 may include a processor 180, memory 182, storage 184, a transceiver 186, and a cellular antenna 188 interconnected by a busing architecture 190 that also supports the display 16, I/O panel 192, and a camera 194. It should be appreciated that although a particular architecture is explained, other designs and layouts are within the teachings presented herein.


In operation, the teachings presented herein permit the proximate smart device 14, such as a smartphone, to form a pairing with the hearing aid device 12 and operate the hearing aid device 12. As shown, the proximate smart device 14 includes the memory 182 accessible to the processor 180, and the memory 182 includes processor-executable instructions that, when executed, cause the processor 180 to provide an interface for an operator that includes an interactive application for viewing the status of the hearing aid device 12. The processor 180 is caused to present a menu for controlling the hearing aid device 12. The processor 180 is then caused to receive an interactive instruction from the user and forward a control signal via the transceiver 186, for example, to implement the instruction at the hearing aid device 12. The processor 180 may also be caused to generate various reports about the operation of the hearing aid device 12. The processor 180 may also be caused to translate or access a translation service for the audio.


In a further embodiment of processor-executable instructions, the processor-executable instructions cause the processor 180 to create a pairing via the transceiver 186 with the hearing aid device 12. Then, the processor-executable instructions may cause the processor 180 to transform, through compression with distributed computing between the processor 180 and the hearing aid device 12, the digital signal into a processed digital signal having the qualified sound range, which includes the preferred hearing range as well as the subjective assessment of sound quality, as represented by the dynamically customizable audiogram. It should be appreciated, however, that in some embodiments the distributed computing is not necessary and all functionality may reside with the hearing aid device 12. The dynamically customizable audiogram may include a range or ranges of sound corresponding to the highest hearing capacity of an ear of a patient, modified with a subjective assessment of sound quality according to the patient. The dynamically customizable audiogram may include a completed assessment of a degree of annoyance caused to the user by an impairment of wanted sound. The dynamically customizable audiogram may also include a completed assessment of a degree of pleasantness caused to the patient by an enablement of wanted sound. That is, the subjective assessment according to the user may include a completed assessment to determine the best sound quality to the user.


Significantly, the processor-executable instructions extend beyond basic device operation, enabling users to actively participate in their auditory experience. Users can select operational modes, such as directional sound mode, amplification mode, and background noise reduction mode, tailored to their immediate environmental needs and hearing preferences. This functionality is emblematic of the system's dynamic architecture, where adjustments to the hearing aid's settings are not just reactionary but predictive and personalized, fostering an auditory environment that is both adaptive and immersive.


Moreover, the integration of distributed computing between the smart device 14 and the hearing aid device 12 facilitates the transformation of digital signals into a processed digital format, reflecting the nuanced preferences captured in the dynamically customizable audiogram. That is, the processor-executable instructions receive, through the user interface, patient inputs for adjusting a dynamically customizable audiogram that represents preferred hearing settings of the patient, including adjustments to one or more frequency segments. Each of the frequency segments is a divided portion of the dynamically customizable audiogram. The processor-executable instructions then process the patient inputs to adjust the dynamically customizable audiogram, thereby enabling adaptation to varying auditory environments as perceived by the patient. The processor-executable instructions then cause the transmission of the adjusted dynamically customizable audiogram to the hearing aid device for immediate application. This dynamically customizable audiogram, adjustable via the smart device, encapsulates a spectrum of auditory capabilities and preferences, including subjective assessments of sound quality. Such assessments enable the identification and enhancement of sounds, ensuring clarity and reducing discomfort, thereby exemplifying the system's commitment to providing a tailored and enriched auditory experience for users. Through this innovative approach, the hearing aid system 10 not only adapts to the auditory landscape but also reshapes it, making it conducive to the unique needs and preferences of the user, marking a paradigm shift in personalized hearing care.
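The segment-level adjustment step described above may be sketched as follows. The representation of segments as (low, high) frequency pairs and the clamping range applied to adjusted levels are assumptions of this sketch.

```python
# Illustration of applying patient inputs to segment-level adjustments
# before transmitting the updated audiogram; the segment keys and the
# clamping range are assumptions of this sketch.

def adjust_segments(audiogram, patient_inputs, min_db=-10, max_db=80):
    """Apply per-segment dB offsets chosen by the patient in the app.

    audiogram maps (low_hz, high_hz) segments to levels in dB;
    patient_inputs uses the same keys with signed dB offsets.
    """
    updated = dict(audiogram)
    for segment, delta_db in patient_inputs.items():
        if segment in updated:
            updated[segment] = max(min_db, min(max_db, updated[segment] + delta_db))
    return updated

audiogram = {(100, 400): 20, (400, 700): 25, (700, 1000): 30}
# The patient finds the upper segment harsh and turns it down 10 dB.
adjusted = adjust_segments(audiogram, {(700, 1000): -10})
```

The adjusted dictionary would then be transmitted to the hearing aid device for immediate application.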



FIG. 5 illustrates a comprehensive embodiment of the hearing aid system 10, showcasing a structured arrangement of its core components in ascending order to enhance user auditory experience. At the foundation of this system is the vivo adaptare audiogram module 194, an element that enables the hearing aid to perform real-time hearing assessments directly within the user's ear. This module is designed to dynamically generate and adjust audiograms based on the immediate acoustic environment, thereby allowing for personalized hearing aid calibration without the need for external audiometric testing.


Positioned above the vivo adaptare audiogram module is the subjective assessment module 196. This module integrates the user's personal preferences and perceptions of sound quality into the hearing aid's processing algorithms. By assessing and incorporating feedback on sound clarity, volume, and tone, the subjective assessment module ensures that the audio output is finely tuned to the user's specific auditory preferences, enhancing the overall satisfaction with the hearing aid's performance. Further enhancing the system's functionality are several function modules, labeled 198, 200, and 202, each designed to perform distinct sound processing tasks. Function module 198 serves as an advanced equalizer, offering precise control over frequency response to shape the audio signal according to the user's customized audiogram and subjective preferences. This allows for the attenuation or amplification of specific frequency bands, ensuring that the sound delivered to the user is both clear and comfortable. Adjacent to the equalizer, additional function modules 200 provide various specialized processing capabilities, such as noise reduction, feedback suppression, frequency transition, dead zone analysis, and spatial awareness, further refining the sound quality and intelligibility for the user. The series culminates with function module 202, acting as a sophisticated amplifier. This module is responsible for adjusting the overall volume of the audio signal to the optimal listening level as determined by the vivo adaptare audiogram, subjective assessments, and user-controlled settings. The amplifier ensures that the sound is delivered at a consistent, comfortable level, accommodating both the quiet and loud acoustic environments the user may encounter. Together, these modules within the system 10 depict a holistic approach to hearing aid design, emphasizing personalization, adaptability, and user control. By integrating these advanced modules, the system offers a tailored auditory experience, significantly surpassing the capabilities of traditional hearing aids.


Delving into FIG. 6, a dynamically customizable audiogram 210, which is the vivo adaptare audiogram, is depicted as the system's inaugural amplification chart, conceived through an initial auditory evaluation facilitated by both the hearing aid device 12 and the smart device 14. This foundational assessment leverages the resonant qualities of an organ sound, denoted by element 212, setting the stage for a deeply personalized auditory calibration process. The system extends an extensive suite of customization capabilities to users, delineated into a series of discrete frequency segments, exemplified by frequency segment 214. Each frequency segment encapsulates a distinct portion of the user's auditory spectrum, offering flexibility to fine-tune sound equalization with unprecedented granularity. This innovative approach allows users to sculpt their sound environment with precision, amplifying desired frequencies while attenuating or silencing others according to personal preference. The array of frequency segments spans a selection range from 100 to 500 Hz, with intermediary ranges such as 200 to 400 Hz and 250 to 350 Hz, although in this specific illustration a uniform increment of 300 Hz is adopted for each frequency segment. It should be appreciated, however, that each of the frequency segments is an adjustable frequency increment with its frequency range adjustable. Accompanying this, the original testing sound level at 216 is also depicted, laying the groundwork for subsequent auditory enhancements.
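The division of the frequency range into uniform segments can be sketched as follows; the overall start and stop bounds chosen here are assumptions, while the 300 Hz increment mirrors the illustration above.

```python
# Sketch of dissecting a frequency range into uniform, independently
# managed segments, mirroring the 300 Hz increments of the illustration;
# the overall bounds chosen here are assumptions.

def make_segments(start_hz, stop_hz, width_hz):
    """Divide [start_hz, stop_hz) into segments of width_hz."""
    segments = []
    low = start_hz
    while low < stop_hz:
        segments.append((low, min(low + width_hz, stop_hz)))
        low += width_hz
    return segments

segments = make_segments(100, 1600, 300)
# -> [(100, 400), (400, 700), (700, 1000), (1000, 1300), (1300, 1600)]
```

Each resulting pair can then be managed as an independent entity, consistent with the segment-by-segment customization described above.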


Transitioning to FIG. 7, the evolution of the audiogram into a second amplification chart 220 is observed, wherein the sound level 222 has been adeptly adjusted by the user, employing dynamic interventions to refine their hearing experience. This progression signifies the active role of users in shaping their auditory perception, responding to real-time listening environments and personal hearing needs. Further exploration in FIG. 8 reveals another iteration of the audiogram, designated as 230, where the sound level 232 emerges, infused with the dual benefits of amplification 234 and dynamic background noise suppression 236. This configuration underscores the system's capacity to adapt and respond to complex auditory environments, offering nuanced control over the hearing landscape.
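The combination of amplification 234 with dynamic background noise suppression 236 can be illustrated, for exposition only, with a simple spectral gate. The threshold and gain values are placeholders and do not represent the patented suppression method.

```python
import numpy as np

def suppress_and_amplify(frame, noise_floor_db=-40.0, gain_db=6.0):
    """Illustrative sketch: gate low-level spectral bins (treated as
    background noise), then amplify what remains (elements 236 and 234)."""
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    peak = magnitude.max() or 1.0
    # Bins far below the frame's spectral peak are suppressed as noise.
    threshold = peak * 10 ** (noise_floor_db / 20.0)
    spectrum[magnitude < threshold] = 0.0
    # The surviving content is amplified toward the user's chosen level.
    return np.fft.irfft(spectrum, n=len(frame)) * 10 ** (gain_db / 20.0)
```

A real device would adapt both parameters continuously from the user's settings and the sensed environment, as the figures describe.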


In an illustrative leap to FIG. 9, the audiogram 240 showcases an advanced modification of sound level 242, where selective sound suppression is strategically applied to navigate around the challenges of tinnitus, resulting in a tailored sound output 244. This adaptation illustrates the system's sensitivity to user-specific auditory conditions, highlighting its ability to not only enhance general hearing experiences but also to provide comfort and relief in scenarios dominated by tinnitus. Through these sequential refinements captured across FIGS. 6 to 9, the system's prowess in delivering a highly customized, user-centric auditory enhancement journey is vividly demonstrated, blending sophisticated technology with the nuanced demands of individual hearing profiles.


Turning attention to FIG. 10, an expansive view into the user interfaces presented on the smart device 14 is offered, revealing a sophisticated and intuitive platform for user interaction and auditory customization. Within this interface, a variety of controls are laid out, thoughtfully designed to cater to the nuanced needs of the user in managing their auditory experience. Specifically, the interface is segmented into several control groups, each with its designated function and utility. The first set of controls, identified as controls 260, encompasses a trio of fundamental auditory adjustments: volume, compression, and background noise suppression. Each control is engineered to offer precise manipulation over the hearing aid's output, allowing users to fine-tune the auditory input to match their personal preferences and environmental requirements. Volume control provides users with the capability to adjust the loudness of the sound, ensuring that audio is neither too faint nor overwhelmingly loud. Compression controls offer a way to manage the dynamic range of sounds, making softer sounds more audible without increasing the volume of louder sounds, thus preserving auditory comfort. Background noise suppression controls are designed to minimize distracting ambient sounds, enabling users to focus on the primary audio source, whether it be a conversation, music, or other sounds of interest.
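The three controls 260 can be sketched as successive per-block operations; the parameter names and the simple hard-knee compressor below are illustrative assumptions, not the device's actual implementation.

```python
import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    """Dynamic range compression: samples above the threshold grow
    1/ratio as fast, so softer sounds stay audible while louder
    sounds are held back rather than boosted."""
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def apply_controls(x, volume=1.0, threshold=0.5, ratio=4.0, gate=0.02):
    """Apply the trio of controls 260 to one block of samples."""
    x = np.where(np.abs(x) < gate, 0.0, x)  # background noise suppression
    x = compress(x, threshold, ratio)        # compression
    return x * volume                        # volume
```

For example, with the defaults above, a 0.9-amplitude sample is reduced to 0.5 + (0.9 − 0.5)/4 = 0.6, while a 0.4-amplitude sample passes unchanged and near-silent samples are gated out.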


Expanding upon the customization options, controls 262 introduce the user to the capability of setting, storing, and uploading changes to a series of dynamically customizable audiograms, labeled as 270, 272, and 274. These audiograms represent specific auditory profiles tailored to different listening environments and preferences. The dynamically customizable audiogram 270 illustrates an adjustment in volume, allowing users to modify the overall loudness of the hearing aid output to achieve the desired auditory balance. The dynamically customizable audiogram 272 is dedicated to background noise suppression, enabling users to create a hearing profile that effectively reduces ambient noise, thus enhancing the clarity and intelligibility of foreground sounds. Lastly, the dynamically customizable audiogram 274 focuses on compression adjustments, providing users the ability to set a hearing profile that optimizes the dynamic range of sounds, ensuring that all sounds, regardless of their original volume, are heard comfortably and clearly.
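The setting, storing, and selecting of multiple audiogram profiles via controls 262 may be sketched, under assumed names, as a small profile store on the smart device; the profile contents shown are hypothetical examples.

```python
class AudiogramStore:
    """Sketch of controls 262: named audiogram profiles (e.g. 270, 272,
    274), one of which is active and uploadable to the hearing aid."""

    def __init__(self):
        self._profiles = {}
        self.active = None

    def store(self, name, settings):
        """Save a named profile, e.g. one per listening environment."""
        self._profiles[name] = dict(settings)

    def select(self, name):
        """Activate a stored profile for upload to the hearing aid."""
        self.active = self._profiles[name]
        return self.active

# Hypothetical usage: distinct profiles for distinct environments.
store = AudiogramStore()
store.store("quiet-room", {"volume_db": 3, "suppression": "low"})
store.store("restaurant", {"volume_db": 6, "suppression": "high"})
```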


These interfaces and controls underscore the smart device's commitment to delivering a highly personalized and adaptable hearing aid experience. By empowering users with the ability to precisely adjust and save multiple audiogram settings, the system acknowledges the dynamic nature of human hearing and the diverse auditory environments encountered in daily life. This approach not only enhances the user's autonomy over their hearing experience but also fosters a sense of engagement and satisfaction with the hearing aid system, marking a significant advancement in the integration of technology and personalized care in the realm of auditory assistance.


Referring now to FIG. 11, in one embodiment of a methodology, at block 300, a wireless communication link is established between a hearing aid device and a smart device via a transceiver. At block 302, a user interface of the smart device receives patient inputs for adjusting a dynamically customizable audiogram that represents preferred hearing settings of the patient. The adjustments may include modifications to one or more of the frequency segments within the audiogram. As previously discussed, each frequency segment represents a divided portion of the hearing range. At block 304, the patient inputs are processed and at block 306, the dynamically customizable audiogram is modified in accordance with the processing, thereby allowing for adaptation to varying auditory environments as perceived by the patient. At block 308, the modified dynamically customizable audiogram is transmitted from the smart device to the hearing aid device for immediate implementation in sound signal processing.
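The flow of blocks 300 through 308 can be summarized in a minimal sketch; the dictionary-based link standing in for the transceiver connection is an assumption for illustration, not the actual wireless transport.

```python
def adjust_hearing_aid(link, ui_inputs, audiogram):
    """Sketch of the FIG. 11 methodology on the smart device side."""
    assert link.get("connected")              # block 300: link established
    adjustments = ui_inputs                   # block 302: patient inputs
    for segment, gain_db in adjustments.items():
        audiogram[segment] = gain_db          # blocks 304-306: process and
                                              #   modify the audiogram
    link["outbox"] = dict(audiogram)          # block 308: transmit to the
    return audiogram                          #   hearing aid device

# Hypothetical usage: boost the 400-700 Hz segment by 5 dB.
link = {"connected": True, "outbox": None}
audiogram = {(100, 400): 0.0, (400, 700): 0.0}
adjust_hearing_aid(link, {(400, 700): 5.0}, audiogram)
```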


Referring now to FIG. 12, in one embodiment of the methodology tailored for this advanced hearing aid invention, the process unfolds as follows. At block 320, a seamless wireless communication link is established between the hearing aid device and a smart device via a transceiver. This connection enables the exchange of data and control commands necessary for the dynamic customization and operation of the hearing aid. At block 322, the smart device's user interface collects patient inputs for real-time adjustments to the dynamically customizable audiogram. These adjustments are intricately designed to modify specific frequency segments within the audiogram, where each segment pertains to a distinct portion of the hearing range, allowing for granular control over the hearing experience.


Proceeding to block 324, the smart device processes the patient inputs. This step involves analyzing the adjustments to ensure they align with the user's hearing preferences and the acoustic characteristics of the current environment. At block 326, the audiogram within the hearing aid is dynamically updated to reflect the processed adjustments. This crucial step enables the hearing aid to adapt its audio processing in real-time to suit the varying auditory environments experienced by the user, ensuring an optimal listening experience under diverse conditions. Finally, at block 328, the updated, customized audiogram is utilized by the hearing aid device in its sound signal processing.


The order of execution or performance of the methods and data flows illustrated and described herein is not essential, unless otherwise specified. That is, elements of the methods and data flows may be performed in any order, unless otherwise specified, and the methods may include more or fewer elements than those disclosed herein. For example, it is contemplated that executing or performing a particular element before, contemporaneously with, or after another element are all possible sequences of execution.


While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is, therefore, intended that the appended claims encompass any such modifications or embodiments.

Claims
  • 1. A hearing aid system for a patient, the hearing aid system comprising: a programming interface configured to facilitate bidirectional communication between a hearing aid device and a smart device, the hearing aid device having integrated sound processing capabilities, including a microphone, a speaker, and an electronic signal processor capable of receiving, processing, and outputting audio signals, the smart device including a housing securing a processor, non-transitory memory, a user interface, a transceiver and storage therein, the smart device including a busing architecture communicatively interconnecting the speaker, the user interface, the processor, the transceiver, the memory, and the storage; an integrated, dynamically customizable audiogram stored within the hearing aid device; and the non-transitory memory accessible to the processor, the non-transitory memory including processor-executable instructions that, when executed by the processor, cause the system to: establish a wireless communication link with the hearing aid device via the transceiver, receive, through the user interface, patient inputs for adjusting the dynamically customizable audiogram that represents hearing settings of the patient, including adjustments to one or more of a plurality of frequency segments, each of the frequency segments being a divided portion of the dynamically customizable audiogram, process the patient inputs to adjust the dynamically customizable audiogram, thereby enabling adaptation to varying auditory environments as perceived by the patient, and transmit the adjusted dynamically customizable audiogram to the hearing aid device for immediate application.
  • 2. The hearing aid system as recited in claim 1, wherein the processor-executable instructions further comprise instructions that, when executed by the processor, cause the system to: facilitate on-demand auditory testing through the smart device, allowing the user to continuously refine the audiogram settings based on self-assessed hearing capabilities and environmental conditions, further enhancing the personalized hearing aid performance.
  • 3. The hearing aid system as recited in claim 1, wherein each of the plurality of frequency segments is an increment selected from the group consisting of 100 to 500 Hz, 200 to 400 Hz, and 250 to 350 Hz.
  • 4. The hearing aid system as recited in claim 1, wherein each of the plurality of frequency segments is an increment of 300 Hz.
  • 5. The hearing aid system as recited in claim 1, wherein each of the plurality of frequency segments is an adjustable frequency increment.
  • 6. The hearing aid system as recited in claim 1, wherein adjusting the dynamically customizable audiogram further comprises activating directional microphone functionality to focus on sounds coming from specific directions.
  • 7. The hearing aid system as recited in claim 1, wherein adjusting the dynamically customizable audiogram further comprises selecting from multiple background noise cancellation profiles based on the current environment.
  • 8. The hearing aid system as recited in claim 1, wherein adjusting the dynamically customizable audiogram further comprises adjusting amplification settings to enhance sound volume and clarity for specific frequency ranges, providing a tailored auditory experience for the user based on individual hearing preferences and situational requirements.
  • 9. The hearing aid system as recited in claim 1, wherein adjusting the dynamically customizable audiogram further comprises muting specific frequency ranges.
  • 10. The hearing aid system as recited in claim 1, wherein the smart device comprises a device selected from the group consisting of smartphones, tablet computers, smartwatches, and wearable devices.
  • 11. The hearing aid system as recited in claim 1, wherein the programming interface further comprises an extensible architecture configured to integrate operational modes and functionalities.
  • 12. A hearing aid system for a patient, the hearing aid system comprising: a programming interface configured to facilitate bidirectional communication between a hearing aid device and a smart device, the hearing aid device having integrated sound processing capabilities, including a microphone, a speaker, and an electronic signal processor capable of receiving, processing, and outputting audio signals, the smart device; a wireless communication module enabling bidirectional communication between the hearing aid device and the smart device; and the electronic signal processor causes the system to: receive a dynamically customizable audiogram from the smart device, the audiogram adjusted based on patient inputs received via the smart device and reflecting the patient's preferred hearing settings, including adjustments within a plurality of frequency segments, each representing a portion of the hearing range, apply the received audiogram to adjust processing parameters of the sound signals in real-time, thereby enabling adaptive sound processing based on the patient's current auditory environment and self-assessed hearing needs.
  • 13. The hearing aid system as recited in claim 12, wherein each of the plurality of frequency segments is an increment selected from the group consisting of 100 to 500 Hz, 200 to 400 Hz, and 250 to 350 Hz.
  • 14. The hearing aid system as recited in claim 12, wherein each of the plurality of frequency segments is an increment of 300 Hz.
  • 15. The hearing aid system as recited in claim 12, wherein each of the plurality of frequency segments is an adjustable frequency increment.
  • 16. The hearing aid system as recited in claim 12, wherein adjusting the dynamically customizable audiogram further comprises activating directional microphone functionality to focus on sounds coming from specific directions.
  • 17. The hearing aid system as recited in claim 12, wherein adjusting the dynamically customizable audiogram further comprises selecting from multiple background noise cancellation profiles based on the current environment.
  • 18. The hearing aid system as recited in claim 12, wherein adjusting the dynamically customizable audiogram further comprises adjusting amplification settings to enhance sound volume and clarity for specific frequency ranges, providing a tailored auditory experience for the user based on individual hearing preferences and situational requirements.
  • 19. The hearing aid system as recited in claim 12, wherein adjusting the dynamically customizable audiogram further comprises muting specific frequency ranges.
  • 20. The hearing aid system as recited in claim 12, wherein the smart device comprises a device selected from the group consisting of smartphones, tablet computers, smartwatches, and wearable devices.
  • 21. A method for adjusting a hearing aid system for a patient, the method comprising: establishing a wireless communication link between a hearing aid device and a smart device via a transceiver; receiving, through a user interface of the smart device, patient inputs for adjusting a dynamically customizable audiogram that represents preferred hearing settings of the patient, the adjustments including modifications to one or more of a plurality of frequency segments within the audiogram, each frequency segment representing a divided portion of the hearing range; processing the patient inputs to modify the dynamically customizable audiogram, thereby allowing for adaptation to varying auditory environments as perceived by the patient; and transmitting the modified dynamically customizable audiogram from the smart device to the hearing aid device for immediate implementation in sound signal processing.
  • 22. The method of claim 21, further including facilitating on-demand auditory testing via the smart device, enabling the patient to refine the audiogram settings continuously based on self-assessed hearing capabilities and environmental conditions, thus enhancing personalized hearing aid performance.
  • 23. The method of claim 21, wherein the modifications to the dynamically customizable audiogram include activating directional microphone functionality on the hearing aid device to prioritize sounds originating from specific directions as specified by the patient through the smart device.
  • 24. The method of claim 21, wherein the modifications to the dynamically customizable audiogram include selecting from multiple background noise cancellation profiles on the hearing aid device to minimize ambient noise in the patient's current environment, as directed through adjustments made via the smart device.
  • 25. The method of claim 21, wherein the modifications to the dynamically customizable audiogram include adjusting amplification settings for specific frequency ranges within the hearing aid device, enhancing sound volume and clarity to provide a tailored auditory experience based on the patient's individual hearing preferences and situational requirements, as adjusted through the smart device.
PRIORITY STATEMENT & CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority from U.S. Provisional Patent Application Ser. No. 63/564,110 entitled “System for Aiding Hearing and Method for Use of Same” filed on Mar. 12, 2024 in the name of Laslo Olah, and U.S. Provisional patent application Ser. No. 63/632,371 entitled “System for Aiding Hearing and Method for Use of Same” filed on Apr. 10, 2024 in the name of Laslo Olah; both of which are hereby incorporated by reference, in entirety, for all purposes.

US Referenced Citations (20)
Number Name Date Kind
5987147 Nishimoto Nov 1999 A
7113589 Mitchler Sep 2006 B2
8565460 Takagi et al. Oct 2013 B2
8761421 Apfel Jun 2014 B2
9232322 Fang Jan 2016 B2
9344814 Rasmussen May 2016 B2
9712928 Pedersen et al. Jul 2017 B2
10181328 Jensen et al. Jan 2019 B2
20050004691 Edwards Jan 2005 A1
20050245221 Beyer Nov 2005 A1
20080004691 Weber Jan 2008 A1
20120106762 Kornagel May 2012 A1
20120121102 Jang May 2012 A1
20130142369 Zhang et al. Jun 2013 A1
20130223661 Uzuanis Aug 2013 A1
20160338622 Chen Nov 2016 A1
20180035216 Van Hasselt et al. Feb 2018 A1
20180207167 Dyhrfjeld-Johnsen Jul 2018 A1
20200268260 Tran Aug 2020 A1
20210006909 Olah et al. Jan 2021 A1
Foreign Referenced Citations (3)
Number Date Country
20170026786 Mar 2017 KR
2016188270 Dec 2016 WO
2019136382 Jul 2019 WO
Non-Patent Literature Citations (3)
Entry
International Preliminary Report on Patentability dated Jul. 7, 2020 regarding International Application No. PCT/US2019/012550; 8 pp.
International Search Report and Written Opinion dated Mar. 22, 2019 regarding International Application No. PCT/US2019/012550; 10 pp.
International Search Report and Written Opinion dated Aug. 5, 2021 concerning International Application No. PCT/US2021/029414; 8 pp.
Provisional Applications (2)
Number Date Country
63632371 Apr 2024 US
63564110 Mar 2024 US