The present disclosure is generally related to processing of audio information, and more particularly related to adjusting audio parameters for a user.
The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also correspond to implementations of the claimed technology.
Hearing loss is one of the most prevalent chronic health conditions. Typically, hearing loss is mitigated through the use of hearing aids. However, not every user uses hearing aids, for various reasons such as, but not limited to, cost, physical discomfort, lack of effectiveness in certain listening situations, societal perception, and unawareness of the hearing loss. Further, hearing aids may not work with various headphone devices. Also, hearing aids may not be able to modify the audio heard by every user who suffers from impaired hearing.
Currently, hearing loss is diagnosed by a medical specialist by performing a hearing test. The hearing test comprises playing an audio, including various audio frequencies, on a user device for a short listening test, and capturing an auditory response of the user towards the audio and the various audio frequencies. The auditory response is converted into a score and a chart for determining whether the hearing of the user is good or poor for each ear. However, the current method of the hearing test does not provide any appropriate solution to the user for overcoming hearing problems.
Further, hearing loss is diagnosed by the medical specialist using a tool such as an audiometer in a noise-free environment. The noise-free environment is an environment in which impediments to hearing are absent. However, the user is exposed to many environments in which acoustic noise is prevalent, such as a moving automobile or a crowded location, and thus hearing performance may decrease dramatically in the presence of noise.
Thus, the current state of the art is costly and lacks an efficient mechanism for overcoming the hearing problems of users. Therefore, there is a need for an improved method and system that may be cost effective and efficient.
The accompanying drawings illustrate various embodiments of systems, methods, and embodiments of various other aspects of the disclosure. Any person with ordinary skill in the art will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. It may be that in some examples one element may be designed as multiple elements or that multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another, and vice versa. Furthermore, elements may not be drawn to scale. Non-limiting and non-exhaustive descriptions are provided with reference to the following drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating principles.
Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words “comprising,” “having,” “containing,” and “including,” and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
It must also be noted that, as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the preferred systems and methods are now described.
Embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings in which like numerals represent like elements throughout the several figures, and in which example embodiments are shown. Embodiments of the claims may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. The examples set forth herein are non-limiting examples and are merely examples among other possible examples.
The communication network 104 may be a wired and/or a wireless network. The communication network 104, if wireless, may be implemented using communication techniques such as Visible Light Communication (VLC), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE), Wireless Local Area Network (WLAN), Infrared (IR) communication, Public Switched Telephone Network (PSTN), Radio waves, and other communication techniques known in the art.
The user device 106 may refer to a computing device used by the user to perform one or more operations. In one case, an operation may correspond to selecting a particular band of frequencies. In another case, an operation may correspond to defining playback amplitudes of an audio. The audio may be a sample tone, music, or spoken words. The user device 106 may be realized through a variety of computing devices, such as a desktop, a computer server, a laptop, a personal digital assistant (PDA), a tablet computer, and the like.
The database 108 may be configured to store an auditory response of the user towards the audio. In one case, the database 108 may store one or more results of a hearing test of the user. The one or more results may correspond to a hearing ability of the user. In an embodiment, the database 108 may store a hearing profile of the user. As an example, the hearing profile may correspond to a hearing adjustment profile. The hearing profile may include a spectrum of the audio divided into a plurality of audio frequency bands.
In an embodiment, the database 108 may store user-defined playback amplitudes of the audio. Further, the database 108 may store historical data related to the hearing ability of the user. The historical data may include user preferences towards the audio. A single database 108 is used in the present case; however, different databases may also be used for storing the data.
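By way of a non-limiting illustration, a hearing profile of the kind stored in the database 108 may be represented as a simple per-band record. The following sketch is hypothetical Python; the band edges, field names, and gain values are assumptions chosen for illustration and are not mandated by the present disclosure.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    # Hypothetical frequency bands (Hz) covering the audible spectrum.
    BANDS: Tuple[Tuple[int, int], ...] = (
        (20, 250), (250, 500), (500, 1000), (1000, 2000),
        (2000, 4000), (4000, 8000), (8000, 16000),
    )

    @dataclass
    class HearingProfile:
        """Hearing adjustment profile: user-defined playback amplitude per band and per ear."""
        user_id: str
        # Mapping of band -> (left-ear gain, right-ear gain) as linear amplitude factors.
        band_gains: Dict[Tuple[int, int], Tuple[float, float]] = field(
            default_factory=lambda: {band: (1.0, 1.0) for band in BANDS}
        )

        def set_gain(self, band: Tuple[int, int], left: float, right: float) -> None:
            self.band_gains[band] = (left, right)

    # Example: the user asked for extra amplitude in the left ear around 2-4 kHz.
    profile = HearingProfile(user_id="user-001")
    profile.set_gain((2000, 4000), left=1.8, right=1.0)

Such a record may be stored in and retrieved from the database 108 alongside the one or more results of the hearing test.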
In one embodiment, the system 102 may include one or more interface(s) 202, a memory 204, and a processor 206.
The interface(s) 202 may be used by the user to program the system 102. The interface(s) 202 of the system 102 may either accept an input from the user or provide an output to the user, or may perform both actions. The interface(s) 202 may be a Command Line Interface (CLI), a Graphical User Interface (GUI), or a voice interface.
The memory 204 may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, Random Access Memories (RAMs), Programmable Read-Only Memories (PROMs), Erasable PROMs (EPROMs), Electrically Erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions.
The processor 206 may execute an algorithm stored in the memory 204 for adjusting the audio parameters for the user. The processor 206 may also be configured to decode and execute any instructions received from one or more other electronic devices or server(s). The processor 206 may include one or more general purpose processors (e.g., INTEL® or Advanced Micro Devices® (AMD) microprocessors) and/or one or more special purpose processors (e.g., digital signal processors or Xilinx® System On Chip (SOC) Field Programmable Gate Array (FPGA) processor). The processor 206 may be configured to execute one or more computer-readable program instructions, such as program instructions to carry out any of the functions described in this description.
In an embodiment, the processor 206 may be configured to perform various steps for adjusting the audio parameters for the user. At first, the processor 206 may perform a hearing test of the user. The hearing test may be performed by playing an audio. The audio, including various audio frequencies, may be played on an audio device. In one case, the audio may be played on the user device 106. The audio may be a sample tone, music, or spoken words.
In one embodiment, the user may listen to the audio. While listening to the audio, the user may provide an auditory response towards the audio. In one case, the auditory response may be provided by the user using the user device 106. The auditory response may include information, such as increased or reduced hearing in a left ear.
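As a purely illustrative, non-limiting sketch of such a listening test, the Python snippet below plays a pure tone per frequency to one ear at a time and records whether the user heard it clearly. The numpy and sounddevice packages, the specific test frequencies, the tone duration and amplitude, and the keyboard prompt are all assumptions made for illustration only.

    import numpy as np
    import sounddevice as sd  # assumed audio-output package; any playback API could be used

    SAMPLE_RATE = 44100
    TEST_FREQS = [250, 500, 1000, 2000, 4000, 8000]  # Hz, roughly one tone per band

    def pure_tone(freq_hz, seconds=1.0, amplitude=0.3):
        t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
        return amplitude * np.sin(2 * np.pi * freq_hz * t)

    responses = {}
    for ear, channel in (("left", 0), ("right", 1)):
        for freq in TEST_FREQS:
            stereo = np.zeros((int(SAMPLE_RATE * 1.0), 2))
            stereo[:, channel] = pure_tone(freq)   # route the tone to a single ear
            sd.play(stereo, SAMPLE_RATE)
            sd.wait()
            answer = input(f"{ear} ear, {freq} Hz - heard clearly? (y/n): ")
            responses[(ear, freq)] = answer.strip().lower() == "y"

The resulting responses dictionary is one possible form in which the captured auditory response could be recorded.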
In one embodiment, the processor 206 may generate a hearing profile of the user. The hearing profile may be generated based on one or more results of the hearing test. The one or more results may correspond to a hearing ability of the user. It should be noted that the results of the hearing test may be utilized to regulate the audio parameters for both ears of the user. For example, the one or more results may indicate that the user is not able to hear properly from the left ear, and that the user may require balancing of the volume or frequency of the audio for both ears.
Further, the hearing profile may be defined as a hearing adjustment profile that may include a spectrum of the audio divided into a plurality of audio frequency bands. Each frequency band of the audio may be associated with the user-defined playback amplitudes of the audio. It should be noted that the playback amplitudes may be defined by the user while listening to the audio. For example, the user may require low amplitude in a right ear and/or the user may require high volume in a left ear. In one case, the processor 206 may display the hearing profile of the user on the user device 106.
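One non-limiting way to derive such per-band playback amplitudes from the captured auditory response is sketched below; the boost value and the rule of raising only the bands that were not heard clearly are assumptions for illustration, not a required mapping.

    # Hypothetical rule: boost bands the user reported as hard to hear, keep the rest flat.
    BOOST = 1.8   # assumed linear gain for bands that were not heard clearly
    FLAT = 1.0

    def build_profile(responses, bands):
        """responses: {(ear, centre_freq_hz): bool}; returns {(low, high): (left_gain, right_gain)}."""
        gains = {}
        for low, high in bands:
            centre = (low + high) // 2
            left = FLAT if responses.get(("left", centre), True) else BOOST
            right = FLAT if responses.get(("right", centre), True) else BOOST
            gains[(low, high)] = (left, right)
        return gains

The returned mapping corresponds to the spectrum of the audio divided into frequency bands, with a playback amplitude per band and per ear.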
Subsequent to generating the hearing profile, the processor 206 may adjust a playing speed of the audio. The playing speed of the audio may be adjusted based on the hearing profile. In one case, the processor 206 may adjust various other audio parameters such as, but not limited to, amplitude of the audio, frequency of the audio, and/or volume of the audio. For example, the user may have impaired hearing and may want to understand the audio properly. In such a case, the processor 206 may increase the volume of the audio and decrease the playing speed of the audio, so that the user may hear the audio properly. In some cases, the processor 206 may instead increase the playing speed of the audio, and in certain cases the processor 206 may modulate the audio by increasing or decreasing the playing speed of the audio.
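A minimal sketch of one way such a speed and volume adjustment could be implemented is shown below, assuming the audio is available as a one-dimensional numpy array of samples in the range -1.0 to 1.0 and that the scipy package is available. Plain resampling is used for brevity, which also shifts pitch; a pitch-preserving time stretch (for example, a phase vocoder) could be substituted without changing the surrounding flow.

    import numpy as np
    from scipy.signal import resample  # assumed available

    def adjust_speed_and_volume(audio, speed=0.85, volume=1.5):
        """Slow down (speed < 1) or speed up (speed > 1) the audio and scale its volume."""
        stretched = resample(audio, int(len(audio) / speed))  # more samples -> slower playback
        louder = np.clip(stretched * volume, -1.0, 1.0)       # scale amplitude, avoid clipping
        return louder

For example, speed=0.85 and volume=1.5 correspond to the scenario above in which the playing speed is decreased and the volume is increased so that the user may hear the audio properly.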
In another scenario, if the hearing profile states that the user needs additional volume in the left ear, the processor 206 may adjust the volume of the audio accordingly. Similarly, if the hearing profile of the user states that the user needs a frequency adjustment (i.e., less bass or more bass) for the audio in the right ear, the processor 206 may adjust the frequency for the right ear accordingly. In another example, if the hearing profile of the user states that the user needs volume or frequency balance between the ears, then the processor 206 may adjust the audio parameters accordingly for the user.
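As a further non-limiting sketch, the per-ear, per-band gains of the hearing profile could be applied to a stereo signal with simple band-pass filtering, as shown below. The use of scipy's Butterworth filters, the filter order, and the band layout are assumptions for illustration; any equalization technique could be used instead.

    import numpy as np
    from scipy.signal import butter, sosfilt  # assumed available

    def apply_profile(stereo, band_gains, fs=44100):
        """stereo: (n, 2) array; band_gains: {(low, high): (left_gain, right_gain)}."""
        out = np.zeros_like(stereo)
        for (low, high), (g_left, g_right) in band_gains.items():
            high = min(high, fs / 2 - 1)              # keep the band below the Nyquist frequency
            sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos, stereo, axis=0)       # isolate this band in both channels
            out[:, 0] += g_left * band[:, 0]          # left-ear gain, e.g. extra volume
            out[:, 1] += g_right * band[:, 1]         # right-ear gain, e.g. less or more bass
        return np.clip(out, -1.0, 1.0)

Summing the scaled bands yields an approximate reconstruction of the audio that reflects the requested left/right volume and frequency balance.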
In one embodiment, a device may be configured to adjust the audio parameters for the user. The device may perform a hearing test of the user. The hearing test may be performed by playing an audio and capturing an auditory response of the user towards the audio. Based on results of the hearing test, a hearing profile of the user may be generated. Thereafter, the device may adjust a playing speed of the audio based on the hearing profile, thereby adjusting the audio parameters for the user. In an embodiment, the device may adjust various audio parameters such as amplitude of the audio, frequency of the audio, and volume of the audio, based on the hearing profile. In one case, the device may refer to the user device 106 or a separate audio device.
The flowchart 400 illustrates a method for adjusting the audio parameters for the user, in accordance with an embodiment of the present disclosure.
At step 402, a hearing test of the user may be performed by the processor 206. The user may be suffering from impaired hearing. The hearing test may include playing an audio for the user. An auditory response of the user towards the audio may be received. The processor 206 may capture the auditory response.
At step 404, a hearing profile of the user may be generated. The hearing profile may be generated based at least on one or more results of the hearing test. The one or more results of the hearing test may correspond to a hearing ability of the user. Further, the hearing profile may include a spectrum of the audio divided into a plurality of audio frequency bands, with each frequency band being associated with user-defined playback amplitudes of the audio.
At step 406, a playing speed of the audio may be adjusted. The playing speed may be adjusted based on the hearing profile, thereby adjusting the audio parameters for the user. Based on the hearing profile, the processor 206 may further adjust the audio parameters such as volume of the audio, frequency of the audio, and amplitude of the audio, in an embodiment.
The flowchart 500 illustrates another method for adjusting the audio parameters for the user, in accordance with an embodiment of the present disclosure.
At step 502, a hearing test of the user may be performed by the processor 206. The user may be suffering from impaired hearing. The hearing test may include playing an audio for the user. An auditory response of the user towards the audio may be received. The processor 206 may capture the auditory response.
At step 504, a hearing profile of the user may be generated. The hearing profile may be generated based at least on one or more results of the hearing test. The one or more results of the hearing test may correspond to a hearing ability of the user. Further, the hearing profile may include a spectrum of the audio divided into a plurality of audio frequency bands, with each frequency band being associated with user-defined playback amplitudes of the audio.
At step 506, amplitude and frequency of the audio may be adjusted. The amplitude of the audio and the frequency of the audio may be adjusted based on the hearing profile. Based on the hearing profile, the processor 206 may further adjust the audio parameters such as volume of the audio, in an embodiment.
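A further non-limiting illustration of adjusting amplitude per frequency band is sketched below using a straightforward FFT-based approach on a single channel; the example signal, band edges, and gain values are assumptions chosen only to show the idea.

    import numpy as np

    def adjust_band_amplitudes(signal, fs, band_gains):
        """Scale the amplitude of each frequency band of a single-channel signal."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        for (low, high), gain in band_gains.items():
            mask = (freqs >= low) & (freqs < high)
            spectrum[mask] *= gain                 # raise or lower this band's amplitude
        return np.fft.irfft(spectrum, n=len(signal))

    # Example: reduce bass (20-250 Hz) and boost 2-4 kHz for one ear's channel.
    fs = 44100
    t = np.arange(fs) / fs
    channel = 0.3 * np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)
    adjusted = adjust_band_amplitudes(channel, fs, {(20, 250): 0.5, (2000, 4000): 1.6})

Applying such a function separately to the left and right channels with different gains corresponds to the per-ear frequency adjustment described above.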
Embodiments of the present disclosure may be provided as a computer program product, which may include a computer-readable medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The computer-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, Compact Disc Read-Only Memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, Random Access Memories (RAMs), Programmable Read-Only Memories (PROMs), Erasable PROMs (EPROMs), Electrically Erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). Moreover, embodiments of the present disclosure may also be downloaded as one or more computer program products, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
This patent application claims the benefit of U.S. Provisional Application No. 62/541,801, filed on Aug. 7, 2017.