This application claims priority from Taiwan Patent Application No. 108112773, filed on Apr. 11, 2019, the entire contents of which are incorporated herein by reference for all purposes.
The present invention relates to an adjustment method, and more particularly to an adjustment method of a hearing auxiliary device.
Hearing is a highly personal sensation, and each person's auditory responses and perceptions differ. In general, the various hearing auxiliary devices commonly used on the market, such as hearing aids, require professionals to adjust and set the device according to their own experience and the problems described by the user. However, as mentioned above, hearing is a personal sensation, it is difficult for the user to describe it completely in words, and the communication between the user and the professional takes a lot of time.
Most present hearing auxiliary devices are selected with the assistance of professionals. When the user needs to adjust the hearing auxiliary device, the user has to return to the store and ask the professionals for help. However, it is difficult for a user to identify a problem and give feedback immediately after the hearing auxiliary device is adjusted. The user must also spend time and energy learning how to adjust the device in order to find a setting suitable for his or her own hearing. This is time consuming and rarely achieves the best result. Even if some parameters, such as the equalizer and the volume, can be adjusted through an application installed on a computer or a smart phone, the user still needs to spend a lot of time learning what changes the parameters bring and finding the direction of adjustment. It is even more likely that the user feels something is wrong but does not know how to adjust it, which leads to frustration and even a loss of confidence in the hearing auxiliary device.
Therefore, there is a need to provide an adjustment method of a hearing auxiliary device distinct from the prior art in order to overcome the above drawbacks.
Some embodiments of the present invention are to provide an adjustment method of a hearing auxiliary device in order to overcome at least one of the above-mentioned drawbacks encountered in the prior art.
The present invention provides an adjustment method of a hearing auxiliary device. Since the sound adjustment is performed and the user response is determined by the context awareness platform according to the activity and emotion information and the scene information, the hearing auxiliary device can be appropriately adjusted to meet the demands of the user, such that the hearing auxiliary device can be correctly and effectively adjusted without the assistance of a professional.
The present invention also provides an adjustment method of a hearing auxiliary device. By collecting information about the environment in which the user is located and the auditory response of the user, a suitable auditory setting can be determined according to the correlation between the current environment and the auditory response of the user, such that the discomfort and inconvenience of using the hearing auxiliary device can be reduced.
In accordance with an aspect of the present invention, there is provided an adjustment method of a hearing auxiliary device. The adjustment method includes steps of (a) providing a context awareness platform and a hearing auxiliary device, (b) acquiring activity and emotion information and inputting the activity and emotion information to the context awareness platform, (c) acquiring scene information and inputting the scene information to the context awareness platform, (d) obtaining a sound adjustment suggestion according to the activity and emotion information and the scene information, (e) determining whether a response of a user to the sound adjustment suggestion meets an expectation, and (f) when the judgment result of the step (e) is TRUE, transmitting the sound adjustment suggestion to the hearing auxiliary device and adjusting the hearing auxiliary device according to the sound adjustment suggestion.
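Steps (a)-(f) above can be sketched as a simple control flow. The sketch below is illustrative only: the function and class names, the rule used to produce a suggestion, and the suggestion fields are all hypothetical, not part of the claimed method.

```python
# A minimal sketch of steps (b)-(f); all names and rules here are
# hypothetical placeholders, not the actual claimed implementation.

def obtain_suggestion(activity_emotion, scene):
    """Step (d): derive a sound adjustment suggestion (placeholder rule)."""
    if scene == "restaurant" and activity_emotion == "conversing":
        return {"noise_reduction": "high", "speech_boost_db": 6}
    return {"noise_reduction": "low", "speech_boost_db": 0}

def adjust_hearing_device(context_platform, device):
    # Steps (b)-(c): input activity/emotion and scene information
    # to the context awareness platform.
    activity_emotion = context_platform.get_activity_emotion()
    scene = context_platform.get_scene()
    # Step (d): obtain a sound adjustment suggestion.
    suggestion = obtain_suggestion(activity_emotion, scene)
    # Step (e): determine whether the user's response meets the expectation.
    if context_platform.response_meets_expectation(suggestion):
        # Step (f): transmit the suggestion and adjust the device.
        device.apply(suggestion)
        return True
    return False
```

The `context_platform` and `device` objects stand in for the context awareness platform and the hearing auxiliary device; any concrete implementation would supply its own versions of these interfaces.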
The above contents of the present invention will become more readily apparent to those ordinarily skilled in the art after reviewing the following detailed description and accompanying drawings, in which:
The present invention will now be described more specifically with reference to the following embodiments. It is to be noted that the following descriptions of preferred embodiments of this invention are presented herein for purpose of illustration and description only. It is not intended to be exhaustive or to be limited to the precise form disclosed.
Please refer to
In some embodiments, the context awareness platform can be stored in and operated on a wearable electronic device 2 or an electronic device with computing functions, in which the former can be a smart watch, a smart wristband or smart eyeglasses, and the latter can be a personal computer, a tablet PC or a smart phone, but not limited thereto. In an embodiment, the wearable electronic device 2 is taken as an example for illustration. The wearable electronic device 2 includes a control unit 20, a storage unit 21, a sensing unit hub 22, a communication unit 23, an input/output unit hub 24 and a display unit 25. The control unit 20 is configured to operate the context awareness platform. The storage unit 21 is connected with the control unit 20, and the context awareness platform can be stored in the storage unit 21. The storage unit 21 may include a non-volatile storage unit such as a solid-state drive or a flash memory, and may include a volatile storage unit such as a DRAM or the like, but not limited thereto. The sensing unit hub 22 is connected with the control unit 20. The sensing unit hub 22 can be utilized merely as a hub connected with a plurality of sensors, or be integrated with the sensors as well as a sensor fusion platform and/or an environment analysis and scene detection platform. For example, the sensor fusion platform and/or the environment analysis and scene detection platform can be implemented as hardware chips or software applications, but not limited thereto.
In some embodiments, the sensors connected with the sensing unit hub 22 include a biometric sensing unit 31, a motion sensing unit 32 and an environment sensing unit 33, but not limited thereto. The biometric sensing unit 31, the motion sensing unit 32 and the environment sensing unit 33 can be independent from the wearable electronic device 2, installed in another device, or integrated with the wearable electronic device 2.
In addition, the communication unit 23 is connected with the control unit 20. The communication unit 23 communicates with a wireless communication element 11 of the hearing auxiliary device 1. The input/output (I/O) unit hub 24 is connected with the control unit 20, and the I/O unit hub 24 can be connected with or integrated with an input unit 41 and an output unit 42, in which the input unit 41 can be a microphone, and the output unit 42 can be a speaker, but not limited thereto. The display unit 25 is connected with the control unit 20 to display the content needed for the wearable electronic device 2 itself. In some embodiments, the step S200 of the adjustment method of the hearing auxiliary device is preferably implemented through the control unit 20 and the sensing unit hub 22. The step S300 and the step S500 are preferably implemented through the control unit 20, the sensing unit hub 22 and the I/O unit hub 24. The step S400 is preferably implemented through the control unit 20. The step S600 is preferably implemented through the control unit 20 and the communication unit 23.
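The step-to-unit mapping described above can be expressed as a simple lookup table. The dictionary keys and the unit names simply follow the step numbers and reference numerals given in the text; the table itself is only a restatement, not an additional component of the invention.

```python
# The step-to-unit mapping from the preceding paragraph, restated as
# a lookup table. Names follow the reference numerals in the text.

STEP_UNITS = {
    "S200": ("control_unit_20", "sensing_unit_hub_22"),
    "S300": ("control_unit_20", "sensing_unit_hub_22", "io_unit_hub_24"),
    "S400": ("control_unit_20",),
    "S500": ("control_unit_20", "sensing_unit_hub_22", "io_unit_hub_24"),
    "S600": ("control_unit_20", "communication_unit_23"),
}
```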
Please refer to
For example, the correct physiological response during speech should lie between the first quadrant and the second quadrant of the two-dimensional scale shown in
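The quadrant check implied above can be sketched as follows. The axis semantics and the sign convention are assumptions (the figure's labels are not reproduced here); only the standard numbering of quadrants is used.

```python
# Hypothetical sketch of locating a reading on the two-dimensional
# scale and checking whether it falls in the expected region.
# The meaning of the x and y axes is an assumption, not taken from
# the figure; quadrants follow the standard mathematical convention.

def quadrant(x: float, y: float) -> int:
    """Return the quadrant number (1-4) of a point on the scale."""
    if x >= 0 and y >= 0:
        return 1
    if x < 0 and y >= 0:
        return 2
    if x < 0 and y < 0:
        return 3
    return 4

def response_in_expected_region(x: float, y: float) -> bool:
    # Per the passage, the expected response during speech lies
    # between the first and second quadrants, i.e. the upper half.
    return quadrant(x, y) in (1, 2)
```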
In some embodiments, the sensors include two of a six-axis motion sensor, a gyroscope sensor, a global positioning system sensor, an altimeter sensor, a heartbeat sensor, a barometric sensor, and a blood-flow sensor. The plurality of sensing data are obtained through the plurality of the sensors. The sensing data include two of motion data, displacement data, global positioning system data, height data, heartbeat data, barometric data and blood-flow data. The sensors can be connected with the sensing unit hub 22.
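Collecting the sensing data through the sensing unit hub 22 can be sketched as below. The `Sensor` and `SensingUnitHub` classes are stand-ins for real sensor drivers and the hub hardware; the sample readings are illustrative values only.

```python
# A minimal sketch of gathering sensing data from several attached
# sensors through a hub (cf. sensing unit hub 22). The classes and
# readings here are illustrative stand-ins, not real drivers.

class Sensor:
    """Wraps one physical sensor behind a uniform read() interface."""
    def __init__(self, name, read_fn):
        self.name = name
        self._read = read_fn
    def read(self):
        return self._read()

class SensingUnitHub:
    """Collects one reading from every attached sensor."""
    def __init__(self):
        self.sensors = []
    def attach(self, sensor):
        self.sensors.append(sensor)
    def collect(self):
        # Produce a combined record of sensing data, keyed by sensor name.
        return {s.name: s.read() for s in self.sensors}

hub = SensingUnitHub()
hub.attach(Sensor("heartbeat_bpm", lambda: 72))
hub.attach(Sensor("motion_g", lambda: (0.01, 0.02, 0.98)))
sample = hub.collect()
```

A real implementation would attach whichever of the listed sensors (six-axis motion, gyroscope, GPS, altimeter, heartbeat, barometric, blood-flow) the device actually carries.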
Please refer to
In some embodiments, the environment data source mentioned in the step S310 includes one of a global positioning system sensor, an optical sensor, a microphone, a camera and a communication unit. Moreover, it is worth noting that the sub-step S320 to the sub-step S330 can be implemented by providing the environment data to the environment analysis and scene detection platform for analysis and determination, but not limited thereto.
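A scene decision of the kind performed by the environment analysis and scene detection platform can be sketched as a simple rule over the environment data. The inputs, thresholds and scene labels below are all illustrative assumptions, not the platform's actual analysis.

```python
# Hypothetical sketch of turning environment data (e.g. ambient sound
# level from a microphone plus a coarse location hint) into scene
# information. Thresholds and labels are illustrative only.

def detect_scene(ambient_db: float, location_hint: str) -> str:
    """Map environment data to a coarse scene label."""
    if location_hint == "outdoor" and ambient_db > 70:
        return "traffic"
    if ambient_db > 65:
        return "restaurant"
    if ambient_db < 40:
        return "quiet_room"
    return "office"
```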
Please refer to
Additionally, the sub-step S260, which is described in the above-mentioned embodiments, of deciding the activity and emotion information according to the classification value, can be executed through an activity and emotion identifier 50. The activity and emotion identifier 50 can be an application or an algorithm. Likewise, the sub-step S340, which is described in the above-mentioned embodiments, of deciding the scene information according to the result of the scene detection, can be executed through a scene classifier 60. The scene classifier 60 can be an application or an algorithm. Similarly, the steps S400-S600 of the adjustment method of the present invention can be executed through a context awareness platform 7 and a sound profile recommender 70. The context awareness platform 7 can be implemented as a hardware chip or a software application, and the sound profile recommender 70 can be an application or an algorithm.
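The cooperation between the identifier 50, the classifier 60 and the recommender 70 can be sketched as three chained functions. The classification rules and the profile fields below are placeholders: the patent does not disclose the actual algorithms, so everything here is an assumption for illustration.

```python
# Sketch of wiring the activity and emotion identifier (50), the
# scene classifier (60) and the sound profile recommender (70)
# together. All rules and values are illustrative placeholders.

def activity_emotion_identifier(classification_value: float) -> str:
    # Sub-step S260: decide activity/emotion from a classification value.
    return "stressed" if classification_value > 0.7 else "calm"

def scene_classifier(detection_result: str) -> str:
    # Sub-step S340: decide scene information from the detection result.
    return {"loud_voices": "restaurant", "engine_noise": "street"}.get(
        detection_result, "indoor")

def sound_profile_recommender(activity_emotion: str, scene: str) -> dict:
    # Steps S400-S600: recommend a sound profile for the device.
    gain = 5 if scene == "restaurant" else 0
    if activity_emotion == "stressed":
        gain -= 2  # soften the output when the user appears stressed
    return {"speech_gain_db": gain}
```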
It should be noted that the sensor fusion platform 5, the environment analysis and scene detection platform 6, the context awareness platform 7, the activity and emotion identifier 50, the scene classifier 60 and the sound profile recommender 70 can all exist in, for example, the wearable electronic device 2 as shown in
Please refer to
From the above description, the present invention provides an adjustment method of a hearing auxiliary device. Since the sound adjustment is performed and the user response is determined by the context awareness platform according to the activity and emotion information and the scene information, the hearing auxiliary device can be appropriately adjusted to meet the demands of the user, such that the hearing auxiliary device can be correctly and effectively adjusted without the assistance of a professional. Meanwhile, by collecting information about the environment in which the user is located and the auditory response of the user, a suitable auditory setting can be determined according to the correlation between the current environment and the auditory response of the user, such that the discomfort and inconvenience of using the hearing auxiliary device can be reduced.
While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention need not be limited to the disclosed embodiment. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, which are to be accorded the broadest interpretation so as to encompass all such modifications and similar structures.
Number | Date | Country | Kind |
---|---|---|---|
108112773 A | Apr 2019 | TW | national |
Number | Name | Date | Kind |
---|---|---|---|
9824698 | Jerauld | Nov 2017 | B2 |
9934697 | O'Dowd | Apr 2018 | B2 |
10108984 | Baldwin | Oct 2018 | B2 |
20060031288 | Ter Horst | Feb 2006 | A1 |
20100228696 | Sim | Sep 2010 | A1 |
20110295843 | Ingrassia, Jr. | Dec 2011 | A1 |
20120308971 | Shin | Dec 2012 | A1 |
20130095460 | Bishop | Apr 2013 | A1 |
20130243227 | Kinsbergen | Sep 2013 | A1 |
20150162000 | Di Censo | Jun 2015 | A1 |
20150177939 | Anderson | Jun 2015 | A1 |
20150195641 | Di Censo | Jul 2015 | A1 |
20170347205 | Aschoff | Nov 2017 | A1 |
Number | Date | Country |
---|---|---|
105432096 | Mar 2016 | CN |
105580389 | May 2016 | CN |
M510020 | Oct 2015 | TW |
201615036 | Apr 2016 | TW |
201703025 | Jan 2017 | TW |