The present disclosure relates to a method for providing a self-fitting hearing test, a set of hearing devices configured to carry out the method, a computer program comprising instructions for carrying out the method, and a computer-readable medium comprising instructions which, when executed by a computer, carry out the method.
Hearing devices adapted to be worn at the ears of the user are becoming increasingly personalized. For hearing devices such as hearing aids this has long been the case; these have included frequency-dependent gain settings based on the user's personal hearing profile, so that the hearing devices could provide customized amplification in different frequency bands to compensate for the user's hearing deficit. This has recently also become a feature in hearing devices for the consumer segment.
However, for hearing aids, which are medical devices, setting up the hearing devices requires a consultation with a hearing care professional, which is a lengthy and, for some consumers, expensive process. For consumer devices, such as ear buds and headsets, it is up to the consumers themselves to customize their settings, which often results in a customization that is less than optimal.
At the heart of optimizing a hearing device or a set of hearing devices for a specific user is the process of obtaining correct audiogram data for the user, which represents the user's hearing ability in a number of frequency bands, as such audiogram data is necessary for calculating the amount of gain the hearing device(s) should apply in those frequency bands to produce an acoustic output signal customized for the user. However, it may be difficult to obtain a correct audiogram without consulting a hearing care professional.
Accordingly, there is a need to provide a method which assists the user in performing a self-administered hearing test to obtain correct audiogram data for self-fitting a hearing device to the user.
According to a first aspect, there is provided a method for providing a self-fitting hearing test performed via a hearing device adapted to be worn at an ear of a user, the hearing device comprising an output transducer, a wireless communication interface, a processing unit, and a memory unit, the method comprising the steps of performing a test-cycle comprising an emission timeslot, where a test sound is emitted by the hearing device, the test sound having a test volume and a test frequency, a response timeslot, where a user response indicating whether the user heard the test sound is detected, wherein the response timeslot comprises a grace period following the emission timeslot, and a test-cycle end step, where cycle data indicative of the test-cycle's test volume, test frequency, and user response is added to a test data set, calculating audiogram data indicative of the user's hearing ability based on the test data set, and outputting the audiogram data.
According to a second aspect, there is provided a set of hearing devices comprising a first hearing device adapted to be worn at a first ear of a user and a second hearing device adapted to be worn at a second ear of the user, each of the first and second hearing devices comprising an output transducer, a wireless communication interface, a processing unit, and a memory unit, wherein at least one of the processing units of the first and second hearing device is configured to perform the steps of the method according to the first aspect.
According to a third aspect there is provided a computer program comprising instructions to cause the set of hearing devices according to the second aspect to carry out the steps of the method according to the first aspect.
According to a fourth aspect there is provided a computer-readable medium having stored thereon the computer program of the third aspect.
It is noted that the emission timeslot and the response timeslot overlap partially, so that the response timeslot starts before the emission timeslot ends. Following the end of the emission timeslot, the response timeslot then continues for the grace period, during which user responses are still detected.
It is further noted that an omission of a response by the user in the response timeslot can be detected as an indication that they did not hear the test sound. Hence the term detection should be understood to cover not only the detection of an active action by the user but also a passive action, i.e., the absence of an active action.
For sets of hearing devices comprising a first and a second hearing device, the method of the first aspect may be repeated and performed on the user's second, i.e., other, ear, in order to provide audiogram data for both the first and the second ear. In this disclosure audiogram data may be denoted as first or second audiogram data, as a reference to which of the user's ears the audiogram data is associated with.
For self-administered hearing tests it is an advantage to provide a grace period in the response timeslot, wherein the grace period follows the emission timeslot, because, as the inventors realized, many people tend to hesitate due to uncertainty as to whether they heard the emitted sound or not. This uncertainty led many users to pause when hearing the test sound to decide whether they were hearing the test sound or merely imagining that they were hearing it, only to miss the response timeslot. The inventors thus found that including a grace period in the response timeslot led to fewer false negatives, and thus improved the quality of the audiogram data in the end.
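Purely by way of illustration, the following sketch (in Python) shows how a single response timeslot spanning the emission timeslot and the grace period could be evaluated; the function name and the tap-detection callable are hypothetical and not part of the disclosure.

```python
import time

def detect_user_response(tap_detected, emission_duration_s, grace_period_s):
    """Poll for a user tap during the emission timeslot and the grace period.

    tap_detected is a hypothetical zero-argument callable that returns True once
    the user has tapped, e.g., the touch screen of a paired smartphone. A tap
    anywhere in the emission timeslot or the grace period counts as a positive
    response; the absence of a tap counts as a negative response.
    """
    deadline = time.monotonic() + emission_duration_s + grace_period_s
    while time.monotonic() < deadline:
        if tap_detected():
            return True       # positive response, possibly during the grace period
        time.sleep(0.01)      # poll roughly every 10 ms
    return False              # no tap detected: treated as "not heard"
```

A tap arriving after the test sound has stopped but before the grace period expires is still returned as a positive response, which is exactly the hesitation case the grace period is meant to capture.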
The above and other features and advantages of the present disclosure will become readily apparent to those skilled in the art by the following detailed description of exemplary embodiments thereof with reference to the attached drawings, in which:
Various exemplary embodiments and details are described hereinafter, with reference to the figures when relevant. It should be noted that the figures may or may not be drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the invention. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated, or if not so explicitly described.
The hearing device(s) may be of the behind-the-ear (BTE) type, which comprises a BTE component for placement behind the ear, the BTE component having a speaker, an in-the-ear (ITE) component for placement in the ear, and a hollow sound tube connecting the BTE and ITE components for transferring sound from the speaker to the ITE component. The hearing devices may be of the receiver-in-canal (RIC) type, also known as receiver-in-the-ear (RITE) or receiver-in-ear (RIE), which comprises a BTE component for placement behind the ear, an ITE component for placement in the ear, the ITE component having a speaker, and a connector with one or more conducting wires connecting the BTE and ITE components for transferring electrical signals between the BTE and ITE components. The hearing devices may be of the in-the-ear (ITE) type, in-the-canal (ITC) type, or completely-in-canal (CIC) type, or a set of earbuds, each of which comprises an ITE component for placement at least partially in the ear canal, the ITE component having a speaker.
One or both of the hearing devices may be configured for wireless communication with one or more other devices, such as with one or more accessory devices, such as a smartphone and/or a smart watch. Each of the hearing devices comprises a primary antenna and/or a secondary antenna. Any of the antennas may be configured to convert one or more wireless input signals, e.g., a primary wireless input signal and/or a secondary wireless input signal, to antenna output signal(s). The wireless input signal(s) may originate from external source(s), such as spouse microphone device(s), wireless TV audio transmitter, and/or a distributed microphone array associated with a wireless transmitter. The wireless input signal(s) may originate from the other hearing device, and/or from one or more accessory devices.
Each of the hearing devices comprises a radio transceiver coupled to the respective hearing device's primary antenna and/or secondary antenna. The radio transceiver may be configured to convert the antenna output signal(s) to a transceiver input signal. Wireless signals from different external sources may be multiplexed in the radio transceiver to a transceiver input signal or provided as separate transceiver input signals on separate transceiver output terminals of the radio transceiver. Each hearing device may comprise a plurality of antennas and/or an antenna may be configured to operate in one or a plurality of antenna modes. The transceiver input signal optionally comprises a first transceiver input signal representative of the first wireless signal from a first external source.
Each of the hearing devices may comprise a set of microphones. The set of microphones may comprise one or more microphones. The set of microphones comprises a primary microphone for provision of a primary microphone input signal and/or optionally a secondary microphone for provision of a secondary microphone input signal. The set of microphones may comprise N microphones for provision of N microphone input signal(s), wherein N is an integer in the range from 1 to 10. In one or more exemplary hearing devices, the number N of microphones is two, three, four, five or more. The set of microphones may comprise a tertiary microphone for provision of a tertiary microphone input signal. The set of microphones may comprise other vibration sensors, such as bone vibration sensors.
Each of the hearing devices comprises a processing unit for processing input signal(s), such as transceiver input signal(s) and/or microphone input signal(s). The processing unit is optionally configured to compensate for hearing loss of a user of the hearing device. The compensation may comprise amplifying sound signals in the transceiver input signal(s) and/or microphone input signal(s) by applying gain in one or more frequency bands according to the user's frequency dependent hearing loss. The processing unit may be connected to the radio transceiver for processing the transceiver input signal. The processing unit may be connected to the primary microphone for processing the primary microphone input signal. The processing unit may be connected to the secondary microphone, if present, for processing the secondary microphone input signal. The processing unit may comprise one or more A/D-converters for converting analog microphone input signal(s) to digital pre-processed microphone input signal(s).
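As a rough, non-limiting illustration of such frequency-dependent amplification, the sketch below applies per-band gains to a signal via the FFT; the band edges and gain values are invented examples, and a real hearing device would use a real-time filter bank rather than an offline transform.

```python
import numpy as np

def apply_band_gains(signal, sample_rate_hz, band_gains_db):
    """Apply frequency-dependent gain to a 1-D signal via the FFT.

    band_gains_db maps (low_hz, high_hz) tuples to a gain in dB. This only
    illustrates per-band amplification; it is not the disclosure's signal chain.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    gain = np.ones_like(freqs)
    for (low_hz, high_hz), gain_db in band_gains_db.items():
        in_band = (freqs >= low_hz) & (freqs < high_hz)
        gain[in_band] = 10.0 ** (gain_db / 20.0)  # dB to linear amplitude
    return np.fft.irfft(spectrum * gain, n=len(signal))

# Invented example: stronger amplification at high frequencies, as is typical
# for a sloping high-frequency hearing loss.
example_band_gains_db = {(0, 1000): 5.0, (1000, 3000): 15.0, (3000, 8000): 25.0}
```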
The processing unit provides an electrical output signal based on the input signals to the processing unit. Input terminal(s) of the processing unit are optionally connected to respective output terminals of a pre-processing unit. For example, a transceiver input terminal of the processing unit may be connected to a transceiver output terminal of the transceiver. One or more microphone input terminals of the processing unit may be connected to respective one or more microphone output terminals of the one or more microphones.
The primary antenna may comprise a coil part coiled along a first antenna axis. The primary antenna may be configured for magnetic induction communication, such as in a range of frequencies between 3 MHz and 100 MHz. The primary antenna may be configured for communication in a frequency band in the GHz range, such as at 5 GHz, such as at 2.4 GHz. The primary antenna may be configured for providing a binaural wireless link between hearing devices located at opposite ears of the user. Optionally the primary antenna may also be configured for providing a wireless link with an external device, i.e., an electronic device other than the first and second hearing devices, such as a computer device, such as a smart device, such as a smart phone or smart watch.
The secondary antenna may be configured for communication in a frequency band in the GHz range, such as at 5 GHz, such as at 2.4 GHz, e.g., at Bluetooth frequencies. The secondary antenna may be configured for providing a wireless link with an external device, i.e., an electronic device other than the first and second hearing devices, such as a computer device, such as a smart device, such as a smart phone or smart watch, whilst the primary antenna provides the binaural wireless link between hearing devices.
When outputted, the audiogram data can be included in setting data for the hearing device, and the setting data is stored in the memory unit. This may allow the processing unit to calculate the appropriate gains based on the audiogram data for processing input signals to compensate for the user's hearing loss. Alternatively, gain data may be obtained based on the audiogram data, and the gain data can be included in the setting data for the hearing device, and the setting data is stored in the memory unit.
The emission timeslot may comprise a lock period preceding the response timeslot, wherein user responses submitted in the lock period are discarded. The inventors found that many positive user responses, i.e., responses indicating that the user heard the test sound, which came quickly, i.e., early in the emission timeslot, were actually false positives. The advantage of including a lock period is that the number of false positives is reduced.
The duration of the emission timeslot may be chosen at random from an emission duration distribution ranging from a lower emission duration to an upper emission duration. The lower emission duration may be in the range 1 second to 3 seconds and the upper emission duration may be in the range 2 seconds to 4 seconds. The grace period may have a fixed duration ranging from 250 ms to 750 ms.
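By way of example only, the timing parameters above could be handled as in the following sketch; the concrete values merely restate the example ranges given here, a uniform distribution is assumed for the emission duration, and the lock period duration is a hypothetical choice since the disclosure does not specify one.

```python
import random

# Example timing values taken from the ranges given above; the lock period
# duration is hypothetical, as the disclosure does not specify one.
LOWER_EMISSION_S = 1.5
UPPER_EMISSION_S = 3.0
GRACE_PERIOD_S = 0.5
LOCK_PERIOD_S = 0.2

def draw_emission_duration():
    """Draw the emission duration at random; a uniform distribution is assumed."""
    return random.uniform(LOWER_EMISSION_S, UPPER_EMISSION_S)

def classify_response(response_time_s, emission_duration_s):
    """Classify a tap time measured from the start of the emission timeslot.

    Taps inside the lock period are discarded as likely false positives; taps
    elsewhere in the emission timeslot or the grace period are positive; no tap
    (None) or a tap after the response timeslot closes is negative.
    """
    if response_time_s is None:
        return "negative"
    if response_time_s < LOCK_PERIOD_S:
        return "discarded"
    if response_time_s <= emission_duration_s + GRACE_PERIOD_S:
        return "positive"
    return "negative"
```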
The step of performing a test-cycle may be repeated for a different test volume and/or a different test frequency. It is advantageous to repeat the test-cycle at multiple test volumes for a frequency as this increases the precision with which the user's hearing ability at that frequency can be calculated for the audiogram data. As hearing loss/hearing ability usually varies through the audible spectrum, it is advantageous to repeat the test-cycle at multiple test frequencies as this allows the hearing ability to be tested in multiple frequency sub-bands of the audible frequency band. The step of performing a test-cycle may comprise pausing for a pause duration before proceeding to the next test-cycle if a positive user response is detected.
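To illustrate how the test-cycle could be repeated over multiple test frequencies and test volumes, a minimal sketch follows; run_test_cycle is a hypothetical stand-in for the emission and response steps described above, and the frequency grid, volume grid, and pause duration are arbitrary examples.

```python
import time

TEST_FREQUENCIES_HZ = [500, 1000, 2000, 4000]   # arbitrary example frequencies
TEST_VOLUMES_DB = [60, 40, 20, 10]              # arbitrary example levels, loud to soft
PAUSE_AFTER_POSITIVE_S = 1.0                    # hypothetical pause duration

def run_hearing_test(run_test_cycle):
    """Sweep the test frequencies and volumes, collecting one entry per test-cycle.

    run_test_cycle(frequency_hz, volume_db) is assumed to emit the test sound,
    observe the response timeslot including the grace period, and return True
    for a positive user response and False otherwise.
    """
    test_data_set = []
    for frequency_hz in TEST_FREQUENCIES_HZ:
        for volume_db in TEST_VOLUMES_DB:
            heard = run_test_cycle(frequency_hz, volume_db)
            test_data_set.append(
                {"frequency_hz": frequency_hz, "volume_db": volume_db, "heard": heard}
            )
            if heard:
                time.sleep(PAUSE_AFTER_POSITIVE_S)  # pause before the next test-cycle
    return test_data_set
```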
The method may be performed using a hearing device or a set of hearing devices in combination with an external device, such as a smart phone or smart watch. The user response may be detected using a user interface of the external device, e.g., a touch screen. A processor of the external device may perform the calculation of the audiogram data indicative of the user's hearing ability based on the test data set.
The method may be performed for a set of hearing devices comprising a first hearing device adapted to be worn at the user's first ear and a second hearing device adapted to be worn at the user's second ear, i.e., the method is carried out for the first ear using the first hearing device, yielding first audiogram data, and for the second ear using the second hearing device, yielding second audiogram data. The first audiogram data may be outputted to the first hearing device which may include it in first setting data and store it in the memory unit of the first hearing device.
The second audiogram data may be outputted to the second hearing device which may include it in second setting data and store it in the memory unit of the second hearing device.
The first setting data and backup data may comprise first audiogram data or first gain data, and/or the second setting data and backup data may comprise second audiogram data or second gain data. The first audiogram data and/or second audiogram data may represent results from a user's hearing test of the first ear and the second ear, respectively. The first audiogram data and/or second audiogram data may represent results from a user's computer assisted self-administered hearing test of the first ear and the second ear, respectively. The first gain data and/or second gain data may comprise one or more gain setting(s) derived from audiogram data.
The first audiogram data may comprise level data for two or more frequency values, preferably five or more frequency values, and/or the second audiogram data and/or second gain data comprises level data for two or more frequency values, preferably five or more frequency values. The two or more frequency values may be within the frequency range 20 Hz to 8000 Hz. Preferably, the first audiogram data and/or the second audiogram data comprise level data for two to four frequency values substantially equally distributed in the frequency range 300 Hz to 1200 Hz, e.g. level data for 500 Hz, 750 Hz, and 1000 Hz. Preferably, the first audiogram data and/or the second audiogram data comprise level data for two to four frequency values substantially equally distributed in the frequency range 800 Hz to 2200 Hz, e.g. level data for 1000 Hz, 1500 Hz, and 2000 Hz. Preferably, the first audiogram data and/or the second audiogram data comprise level data for two to six frequency values substantially equally distributed in the frequency range 1500 Hz to 6500 Hz, e.g. level data for 2000 Hz, 4000 Hz, 5000 Hz, and 6000 Hz. The level data represents the user's level of hearing at the associated frequency value. The first and/or second hearing device(s) may be configured for calculating gain settings for frequency ranges corresponding to the frequency values, e.g., based on the NAL-NL2 prescription procedure, so that the hearing devices can compensate for the user's hearing loss.
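NAL-NL2 is a proprietary prescription procedure, so the sketch below substitutes a much simpler half-gain-style rule purely to illustrate the idea of turning per-frequency level data into gain settings; the rule and the audiogram values are illustrative only and not the disclosure's actual prescription.

```python
def prescribe_gains(audiogram_db_hl):
    """Derive per-frequency gain settings from audiogram level data.

    audiogram_db_hl maps frequency values in Hz to hearing levels in dB HL. The
    half-gain rule used here (gain = 0.5 * hearing level) is a classical
    simplification standing in for a real prescription such as NAL-NL2.
    """
    return {freq_hz: 0.5 * level_db for freq_hz, level_db in audiogram_db_hl.items()}

# Invented audiogram levels at commonly used test frequencies.
example_audiogram_db_hl = {500: 20, 750: 25, 1000: 30, 1500: 35, 2000: 40, 4000: 55, 6000: 60}
example_gains_db = prescribe_gains(example_audiogram_db_hl)
```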
The first audiogram data may be based on a first hearing test performed on the user's first ear using the first hearing device. The second audiogram data may be based on a second hearing test performed on the user's second ear using the second hearing device. The first and second hearing tests may be self-administered hearing tests performed by the user assisted by a computing device, e.g., a smartphone. The first and second hearing tests may be Bayesian Pure-Tone Audiometry, BPTA, tests.
The first setting data and backup data may comprise first filter data, and/or the second setting data and backup data may comprise second filter data. The filter data may comprise user defined gain modification data representing the user's preferences for gain modification in one or more frequency ranges, e.g., in the bass, mid, and treble ranges. In some instances, the user may prefer higher or lower gain in some frequency ranges, e.g., some users may prefer additional bass. The gain settings may be calculated based on both the audiogram data and the gain modification data, so as to both compensate for the user's hearing loss and meet their personal preferences.
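A minimal sketch of combining prescribed gains with such user-defined gain modification data follows; the bass, mid, and treble band edges and the offset values are hypothetical.

```python
# Hypothetical gain modification data: per-range offsets in dB chosen by a user
# who prefers a little extra bass and slightly softer treble.
example_gain_modification_db = {(0, 500): 3.0, (500, 2000): 0.0, (2000, 8000): -2.0}

def apply_gain_modification(prescribed_gains_db, gain_modification_db):
    """Add the user's per-range gain offsets on top of the prescribed gains."""
    modified = {}
    for freq_hz, gain_db in prescribed_gains_db.items():
        offset_db = 0.0
        for (low_hz, high_hz), range_offset_db in gain_modification_db.items():
            if low_hz <= freq_hz < high_hz:
                offset_db = range_offset_db
                break
        modified[freq_hz] = gain_db + offset_db
    return modified
```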
The first setting data and backup data may comprise time stamp data, and/or the second setting data and backup data may comprise time stamp data. By including time stamp data, the first and/or second hearing device(s) may determine whether the setting data and the setting backup data are synchronized, or which of the setting data or the setting backup data is the more recent. If the setting data and the setting backup data are not synchronized, i.e., if their time stamps do not match, the set of hearing devices may initiate an update of whichever of the setting data or the setting backup data is outdated, either automatically or by asking the user for permission to update.
The first audiogram data and/or first gain data may be of the dictionary data type, and/or the second audiogram data and/or second gain data may be of the dictionary data type. Dictionary type data comprises key and value pairs, e.g., a frequency value and a corresponding audiogram level, preferably 8 or more key and value pairs including 8 different frequency values or ranges and corresponding audiogram levels or gain values, for example as illustrated below.
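A hypothetical example of such a dictionary, written here in Python notation with invented level values purely for illustration:

```python
# Hypothetical first audiogram data of the dictionary data type: keys are
# frequency values in Hz, values are audiogram levels in dB HL (invented).
first_audiogram_data = {
    250: 15,
    500: 20,
    750: 25,
    1000: 30,
    1500: 35,
    2000: 40,
    4000: 55,
    6000: 60,
}
```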
In such a dictionary, the key is a frequency in Hz, and the corresponding value represents an audiogram level corresponding to the user's hearing ability around the key frequency value. The key may be the centre frequency of a frequency sub-band. The key may be the starting frequency of a frequency sub-band, i.e., a frequency sub-band may extend from the frequency value of one key to the frequency value of the subsequent key, e.g., from 500 Hz to 750 Hz.
The processing unit of the first hearing device may be configured for validating first setting backup data received via the wireless interface of the first hearing device, and/or the processing unit of the second hearing device may be configured for validating second setting backup data received via the wireless interface of the second hearing device. Validating the received backup data may comprise one or more of the following steps: checking the size of the transferred backup data, checking the data type of the transferred backup data, or checking values of level settings in the transferred backup data against pre-set acceptable level thresholds.
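A minimal sketch of such validation, assuming the backup data decodes to a frequency-to-level dictionary, is given below; the size limit, expected data type, and level thresholds are hypothetical values.

```python
MAX_BACKUP_SIZE_BYTES = 1024                 # hypothetical size limit
MIN_LEVEL_DB, MAX_LEVEL_DB = -10.0, 120.0    # hypothetical acceptable level range

def validate_backup_data(raw_bytes, decoded):
    """Validate transferred setting backup data before accepting it.

    Checks the size of the transferred data, its data type (a dictionary of
    numeric frequency/level pairs), and that every level lies within the
    pre-set acceptable thresholds. Returns True only if all checks pass.
    """
    if len(raw_bytes) > MAX_BACKUP_SIZE_BYTES:
        return False
    if not isinstance(decoded, dict):
        return False
    for freq_hz, level_db in decoded.items():
        if not isinstance(freq_hz, (int, float)) or not isinstance(level_db, (int, float)):
            return False
        if not MIN_LEVEL_DB <= level_db <= MAX_LEVEL_DB:
            return False
    return True
```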
The hearing device 200 may comprise an ear canal part and another part towards the opening of the ear. In one or more exemplary hearing devices, the secondary microphone 10 is positioned in the ear canal part of the hearing device 200 (i.e., the part facing the ear drum or tympanic membrane), and the primary microphone 8 is positioned on the part of the hearing device 200 that faces the opening of the ear (capturing the environment).
The hearing device 200 comprises a casing 210 adapted to be worn at an ear of the user. The casing 210 may house one or more electrical and/or mechanical components of the hearing device 200, such as an output transducer 16, an input transducer 8, 10, a wireless interface, a processing unit 14, and a memory unit 4. A microphone inlet of the primary microphone 8 may be arranged at an outer surface of the casing 210. The microphone inlet of the primary microphone 8 may be configured to receive sounds from the environment outside the hearing device 200, e.g., by the inlet of the primary microphone 8 being arranged at an outer surface of the casing 210 and facing towards the environment outside the hearing device 200. A microphone inlet of the secondary microphone 10 may be arranged at an outer surface of the casing 210. The microphone inlet of the secondary microphone 10 may be configured to receive sounds from the ear canal of the user wearing the hearing device 200, e.g., by the inlet of the secondary microphone 10 being arranged at an outer surface of the casing 210 and facing towards the ear canal of the user wearing the hearing device 200.
The hearing device 200 may comprise a wireless interface comprising a primary antenna 6a and optionally a secondary antenna 6b for converting received wireless signal(s) to antenna output signal(s). The primary antenna 6a is configured for establishing a wireless binaural link 5 with another hearing device at the user's other ear. The secondary antenna 6b is configured for establishing a wireless link with an external device other than the other hearing device, such as a smart device or computing device. In the shown embodiment the primary antenna 6a is a magnetic induction coil while the secondary antenna 6b is a radio frequency antenna for communication at 2.45 GHz. The wireless interface may also be denoted as a wireless communication interface.
The hearing device 200 comprises a transceiver 7 coupled to the wireless interface for provision of a transceiver input signal 7A based on one or more antenna output signal(s). The transceiver 7 may be split into sub-units, wherein a respective sub-unit is coupled with one of the plurality of antennas.
The hearing device 200 comprises a processing unit 14 for processing input signal(s) and providing an output signal 15. The processing unit 14 is connected to the transceiver 7 for receiving and processing transceiver input signal(s). The processing unit 14 is connected to the microphones 8, 10 for receiving and processing the microphone input signal(s). The processing unit 14 is configured to compensate for a hearing loss of a user and to provide an output signal 15.
The processing of the transceiver and/or microphone input signal(s) is for some operations, e.g., compensation of hearing loss, performed according to custom settings stored in a memory unit 4 as setting data. The processing unit 14 is coupled to the memory unit 4 so that the processing unit 14 has access to the setting data.
The setting data comprises audiogram data or gain data, wherein the audiogram data comprises information relating to the user's hearing ability, and thus also the user's hearing loss, and the gain data comprises information about the amount of gain that the processing unit 14 should apply during processing in order to compensate for the user's hearing loss. The processing unit 14 may be configured to derive gain settings based on audiogram data.
The memory unit 4 of the hearing device 200 may be a short-term/volatile memory such as a random access memory, and/or a cache memory. The memory unit 4 of the hearing device 200 may be a long-term/non-volatile memory such as a read-only memory, and/or a flash memory.
The hearing device 200 comprises an output transducer in the form of a receiver 16, i.e., hearing aid terminology for a miniature loudspeaker, operatively coupled to the processing unit 14 so the receiver may receive the output signal 15 and convert the output signal 15 to an acoustic output signal 11. In some embodiments the output transducer may be an antenna or an implantable device, e.g., a cochlear implant.
The transceiver 7 may also receive signals from the processing unit 14 that are to be transmitted to the other hearing device or to an external device.
According to some embodiments, the memory unit 4 of the first hearing device 201 comprises second setting backup data, wherein the second setting backup data is a backup of the second setting data. Likewise, the memory unit 4 of the second hearing device 202 comprises first setting backup data, wherein the first setting backup data is a backup of the first setting data.
Thereby, the set of hearing devices 201, 202 is provided with improved redundancy, as each of the first and second setting data is backed up on the opposite hearing device. Should the first or second setting data be corrupted, the corrupted setting data may be restored by initiating a transfer of setting backup data from the other hearing device, preferably using the binaural wireless link 5. Similarly, should one of the first or second hearing devices 201, 202 be lost or damaged, the user may save the time needed to re-customize the replacement hearing device, as the replacement may receive the backup data stored in the other hearing device.
Once the smart phone 20 and the hearing device 200 are connected, the user may initiate the self-fitting hearing test using the interface of the smart phone 20, i.e., the touch screen 25. This will cause the smart phone to carry out a test-cycle as illustrated in
Then, during a response timeslot which partially overlaps with the emission timeslot, a user response indicating whether the user heard the test sound is detected. In practice, this is done by having the user perform an action if they hear the test sound through the hearing device 200. The action could be to tap a virtual button 27 on the touch screen 25 of the smart phone 20 or simply tap the touch screen. On the other hand, if the user does not hear the test sound, then the user response that is detected will be an absence of the action, i.e., the lack of a tap. The response timeslot continues for a grace period after the emission timeslot is over, during which grace period user responses are still detected. The grace period allows the user's indication that they heard the test sound to be detected even if they hesitate and fail to respond while the test sound is still playing.
At the end of the test-cycle, the detected response, i.e., positive or negative depending on whether the user heard the test sound or not, as well as the test volume and test frequency of the completed test-cycle, is added to a test data set. Another test-cycle may then be performed for another test volume and/or test frequency, and this process is repeated until sufficient data has been gathered. The user response, test volume, and test frequency for each test-cycle are thus added to the test data set.
Audiogram data is now calculated based on the test data set. This calculation is either done by a processor of the smart phone 20 or by the processing unit 14 of the hearing device 200. Finally, the audiogram data is outputted and added to setting data stored in the memory unit 4 of the hearing device 200. If there are two hearing devices 201, 202, i.e., a set of hearing devices, then the setting data is also stored in the memory unit 4 of the other hearing device as setting backup data. If there are two hearing devices 201, 202, the self-fitting hearing test may be repeated as described above for the other hearing device.
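The disclosure does not fix a particular estimation procedure; one minimal sketch, assuming the test data set holds one entry per test-cycle as in the sweep example above, is to take the lowest test volume with a positive response at each frequency as that frequency's level.

```python
def calculate_audiogram(test_data_set):
    """Estimate a hearing level per test frequency from the collected cycle data.

    Each entry is assumed to be a dict with keys "frequency_hz", "volume_db",
    and "heard". The lowest volume heard at a frequency is taken as that
    frequency's level; frequencies with no positive response are omitted. A real
    implementation, e.g., Bayesian pure-tone audiometry, would refine this.
    """
    audiogram = {}
    for entry in test_data_set:
        if not entry["heard"]:
            continue
        freq_hz = entry["frequency_hz"]
        level_db = entry["volume_db"]
        audiogram[freq_hz] = min(level_db, audiogram.get(freq_hz, level_db))
    return audiogram
```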
Before the first test-cycle, there may be an initial countdown, e.g., in the form of an acoustic signal played to the user via the receiver 16, to prepare the user and inform them that the test is about to begin. Each test-cycle may comprise a pause duration to separate the emission timeslot of one test-cycle from that of the subsequent test-cycle.
It is noted that the method may be carried out aided by a server device 30 connected to the external device via a network 40, e.g., the internet. Optionally, the server may assist or perform the step of calculating the audiogram data.
The first and second setting data and the first and second setting backup data optionally comprise time stamp data 320 which indicates when the data was generated. This allows the set of hearing devices to compare the setting data and the setting backup data to check whether the two are up to date with each other and, if not, which of the two is the more current. As an example, the set of hearing devices may check the time stamp of the first setting backup data in the memory unit of the second hearing device against the time stamp of the first setting data in the memory unit of the first hearing device. If the time stamps show that the first setting backup data is outdated, the set of hearing devices may initiate a backup by transferring a copy of the first setting data to the second hearing device to replace the outdated first setting backup data. If the time stamps show that the first setting data is older than the first setting backup data, e.g., due to rollbacks, the set of hearing devices may initiate an update by transferring a copy of the first setting backup data from the second hearing device to the first hearing device to replace the outdated first setting data. Finally, if the time stamps are identical, then both the setting data and the setting backup data are current, and no update/backup is needed.
The update of the first setting data of the first hearing device 201 is initiated by the processing unit 14 of the first hearing device 201 checking whether first setting data is present in the memory unit 4 of the first hearing device 201. If not, the first hearing device 201 will send a query for first setting backup data to the second hearing device 202. If, on the other hand, first setting data is present in the memory unit 4 of the first hearing device 201, the first hearing device 201 will send a query for first time stamp data of the first setting backup data to the second hearing device 202.
Upon receiving the time stamp data of the first setting backup data, the processing unit 14 of the first hearing device 201 will compare it with the time stamp data of the first setting data to check if the first setting data is current. If the comparison shows that the time stamps are identical, both copies are current and the update process may end. In the case where the time stamp comparison indicates that the first setting data is newer than the first setting backup data, the set of hearing devices may initiate a backup by transferring the first setting data from the first hearing device 201 to the second hearing device 202, e.g., via the wireless link 5, after which the second hearing device 202 will store it in the memory unit 4 of the second hearing device 202 as first setting backup data to replace the outdated first setting backup data. If the comparison of the time stamp data shows that the first setting data is older than the first setting backup data, the first hearing device 201 will send a query for the first setting backup data to the second hearing device 202.
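The decision logic of this and the preceding paragraphs could be sketched as follows; the memory layout and function name are hypothetical, and transfers over the binaural wireless link 5 are modelled as plain dictionary copies.

```python
def synchronize_first_setting_data(first_memory, second_memory):
    """Back up, restore, or leave the first setting data as-is based on time stamps.

    first_memory and second_memory are dictionaries standing in for the memory
    units 4 of the first and second hearing devices; the optional entries
    "first_setting_data" and "first_setting_backup_data" each carry a
    "time_stamp" field. Transfers over the binaural link are modelled as copies.
    """
    setting = first_memory.get("first_setting_data")
    backup = second_memory.get("first_setting_backup_data")

    if setting is None and backup is not None:
        # No local setting data: restore from the backup held by the second device.
        first_memory["first_setting_data"] = dict(backup)
    elif setting is not None and (backup is None or setting["time_stamp"] > backup["time_stamp"]):
        # Local setting data is newer (or no backup exists): refresh the backup.
        second_memory["first_setting_backup_data"] = dict(setting)
    elif setting is not None and backup is not None and setting["time_stamp"] < backup["time_stamp"]:
        # Backup is newer: replace the outdated local setting data.
        first_memory["first_setting_data"] = dict(backup)
    # Identical time stamps: both copies are current and nothing needs to be done.
```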
Upon receiving the query for the first setting backup data, the second hearing device 202 will transfer a copy of the first setting backup data to the first hearing device 201. When receiving the copy of the first setting backup data, the first hearing device 201 performs a validation of the received data. The validation may comprise one or more of the following steps: checking the size of the transferred backup data, checking the data type of the transferred backup data, and/or checking values of level settings in the transferred backup data against pre-set acceptable level thresholds.
If the first hearing device 201 cannot validate the received first setting backup data, the first hearing device 201 will delete it and send a new query for the first setting backup data to the second hearing device, whereby a new copy of the first setting backup data can be transmitted by the second hearing device 202, and the validation process repeated.
Once the first hearing device 201 receives first setting backup data which it can validate, it will proceed by storing it in the memory unit 4 as first setting data, erasing any outdated first setting data if present, whereby the first hearing device 201 is updated/restored with the most current first setting data.
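Finally, the query, validate, and store flow of the last paragraphs could be sketched as below; request_backup_copy stands in for the wireless query to the second hearing device 202, validate_backup_data for checks such as those listed earlier (here taking only the decoded copy), and the retry limit is an added safeguard not specified in the disclosure.

```python
def restore_first_setting_data(memory, request_backup_copy, validate_backup_data, max_attempts=3):
    """Restore the first setting data from the backup held by the second hearing device.

    request_backup_copy() is assumed to return a fresh copy of the first setting
    backup data over the binaural link; invalid copies are discarded and a new
    query is sent, here limited to max_attempts tries.
    """
    for _ in range(max_attempts):
        backup_copy = request_backup_copy()
        if validate_backup_data(backup_copy):
            memory.pop("first_setting_data", None)      # erase outdated setting data, if present
            memory["first_setting_data"] = backup_copy  # store the validated copy
            return True
        # Validation failed: discard the copy and query the second device again.
    return False
```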
The set of hearing devices may be configured for performing a user self-administered hearing test in order to obtain the first and/or second setting data, the first and/or second audiogram data, and/or the first and/or second gain data.
The use of the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. does not imply any particular order, but is included to identify individual elements. Moreover, the use of the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. does not denote any order or importance, but rather the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. are used to distinguish one element from another. Note that the words “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. are used here and elsewhere for labelling purposes only and are not intended to denote any specific spatial or temporal ordering.
Furthermore, the labelling of a first element does not imply the presence of a second element and vice versa.
It may be appreciated that the figures comprise some modules or operations which are illustrated with a solid line and some modules or operations which are illustrated with a dashed line. The modules or operations which are comprised in a solid line are modules or operations which are comprised in the broadest example embodiment. The modules or operations which are comprised in a dashed line are example embodiments which may be comprised in, or a part of, or are further modules or operations which may be taken in addition to the modules or operations of the solid line example embodiments. It should be appreciated that these operations need not be performed in the order presented. Furthermore, it should be appreciated that not all of the operations need to be performed. The exemplary operations may be performed in any order and in any combination.
It is to be noted that the word “comprising” does not necessarily exclude the presence of other elements or steps than those listed.
It is to be noted that the words “a” or “an” preceding an element do not exclude the presence of a plurality of such elements.
It should further be noted that any reference signs do not limit the scope of the claims, that the exemplary embodiments may be implemented at least in part by means of both hardware and software, and that several “means”, “units” or “devices” may be represented by the same item of hardware.
The various exemplary methods, devices, and systems described herein are described in the general context of method steps or processes, which may be implemented in one aspect by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVD), etc. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform specified tasks or implement specific abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes.
Although features have been shown and described, it will be understood that they are not intended to limit the claimed invention, and it will be made obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the claimed invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. The claimed invention is intended to cover all alternatives, modifications, and equivalents.