Aspects related to media systems having audio capabilities are disclosed. More particularly, aspects related to media systems used to play audio content to a user are disclosed.
Audio-capable devices, such as laptop computers, tablet computers, or other mobile devices, can deliver audio content to a user. For example, the user may use the audio-capable device to listen to audio content. The audio content can be pre-stored audio content, such as a music file, a podcast, a virtual assistant message, etc., which is played to the user by a speaker. Alternatively, the reproduced audio content can be real-time audio content, such as audio content from a phone call, a videoconference, etc.
Noise exposure, ageing, or other factors can cause an individual to experience hearing loss. Hearing loss profiles vary widely among individuals, and may even be attributed to people who are not diagnosed as having hearing impairment. That is, every individual can have some frequency-dependent loudness perceptions that differ from a norm. Such differences can vary widely across a human population, and correspond to a spectrum of hearing loss profiles of the human population. Given that each individual hears differently, audio content that is reproduced in the same way to several individuals may be experienced differently by each. For example, a person with substantial hearing loss at a particular frequency may experience playback of audio content containing substantial components at that frequency as being muffled. By contrast, a person without hearing loss at the particular frequency may experience playback of the same audio content as being clear.
An individual can adjust audio-capable devices to modify playback of audio content in order to enhance the user's experience. For example, the person that has substantial hearing loss at the particular frequency can adjust an overall level of the audio signal volume to increase a loudness of the reproduced audio. Such adjustments can be made in hopes that the modified playback will compensate for the hearing loss of the person.
Volume adjustment to modify playback as described above can fail to compensate for hearing loss in a personalized manner. For example, increasing an overall level of the audio signal can increase loudness; however, the loudness is increased across a range of audible frequencies regardless of whether the user experiences hearing loss across the entire range. The result of such broad-scale level adjustments can be an uncomfortably loud and disturbing listening experience for the user.
A media system and a method of using the media system to accommodate hearing loss of a user, are described. In an aspect, the media system performs the method by selecting an audio filter, e.g., a level-and-frequency-dependent audio filter, from several audio filters, e.g., several level-and-frequency-dependent audio filters, and applying the audio filter to an audio input signal to generate an audio output signal that can be played back to a user. The audio filter can be a personal audio filter, e.g., a personal level-and-frequency dependent audio filter that corresponds to a hearing loss profile of the user.
The selection of the personal level-and-frequency dependent audio filter can be made by the media system from level-and-frequency-dependent audio filters that correspond to respective preset hearing loss profiles. The level-and-frequency-dependent audio filters compensate for the preset hearing loss profiles because the level-and-frequency-dependent audio filters have respective average gain levels and respective gain contours that correspond to average loss levels and loss contours of the hearing loss profiles. The personal level-and-frequency dependent audio filter can amplify the audio input signal based on an input level and an input frequency of the audio input signal, and thus, the user can experience sound from the reproduced audio output signal normally (rather than muffled as would be the case if the uncorrected audio input signal were played).
Selection of the personal level-and-frequency dependent audio filter can be made through a brief and straightforward enrollment process. In an aspect, a first audio signal is output during a first stage of the enrollment process using one or more predetermined gain levels or using a first group of level-and-frequency-dependent audio filters having different average gain levels. The first audio signal can be played back to a user that experiences the audio content, e.g., speech, at different loudnesses. The user can select the loudness that is audible or preferable. More particularly, the media system receives, in response to outputting the first audio signal using the one or more predetermined gain levels or the one or more level-and-frequency-dependent audio filters of the first group, a selection of a personal average gain level. The selection of the personal average gain level can indicate that the first audio signal, e.g., a speech signal, is output at a level that is audible to the user. The selection of the personal average gain level can indicate that the first audio signal is output at a preferred loudness. The media system can select the personal level-and-frequency-dependent audio filter based in part on the personal level-and-frequency-dependent audio filter having the personal average gain level. For example, the respective average gain level of the personal level-and-frequency-dependent audio filter can be equal to the personal average gain level.
In an aspect, a second audio signal is output during a second stage of the enrollment process using a second group of level-and-frequency-dependent audio filters having different gain contours. The second group of level-and-frequency-dependent audio filters may be selected for exploration based on the user selection made during the first stage of the enrollment process. For example, each level-and-frequency-dependent audio filter in the second group can have the personal average gain level corresponding to the audibility selection made during the first stage. The second audio signal can be played back to the user that experiences the audio content, e.g., music, at different timbre or tonal settings and selects the timbre or tonal setting that is preferable. More particularly, the media system receives, in response to outputting the second audio signal, a selection of a personal gain contour. The media system can select the personal level-and-frequency-dependent audio filter based in part on the personal level-and-frequency-dependent audio filter having the personal gain contour. For example, the respective gain contour of the personal level-and-frequency-dependent audio filter can be equal to the personal gain contour.
In an aspect, the enrollment process can modify the first and second audio signals for playback using level-and-frequency-dependent audio filters that correspond to preset hearing loss profiles. For example, audio filters corresponding to the most common hearing loss profiles in a human population can be used. The audio filters can alternatively correspond to hearing loss profiles from the human population that relate closely to an audiogram of the user. For example, the media system can receive a personal audiogram of the user, and based on the personal audiogram, several preset hearing loss profiles can be determined that encompass the hearing loss profile of the user as represented by the audiogram. The media system can then determine the level-and-frequency-dependent audio filters that correspond to the determined hearing loss profiles, and use those audio filters during the presentation of audio in the first stage or the second stage of the enrollment process.
The media system may select the personal level-and-frequency dependent audio filter based directly on an audiogram of the user without utilizing the enrollment process. For example, the media system can receive a personal audiogram of the user, and based on the personal audiogram, a preset personal hearing loss profile can be selected that most closely matches the hearing loss profile of the user as represented by the audiogram. For example, the personal audiogram may indicate that the user has an average hearing loss level and a loss contour, and the media system can select a preset hearing loss profile that fits the audiogram. The media system can then determine the level-and-frequency-dependent audio filter that corresponds to the personal hearing loss profile. For example, the media system can determine the level-and-frequency-dependent audio filter having an average gain level corresponding to the average hearing loss level of the audiogram and/or having a gain contour corresponding to the loss contour. The media system can use the audio filter as the personal level-and-frequency dependent audio filter to enhance the audio input signal and compensate for the hearing loss of the user when playing back audio content.
The above summary does not include an exhaustive list of all aspects of the present invention. It is contemplated that the invention includes all systems and methods that can be practiced from all suitable combinations of the various aspects summarized above, as well as those disclosed in the Detailed Description below and particularly pointed out in the claims filed with the application. Such combinations have particular advantages not specifically recited in the above summary.
Aspects describe a media system and a method of using the media system to accommodate hearing loss of a user. The media system can include a mobile device, such as a smartphone, and an audio output device, such as an earphone. The mobile device, however, can be another device for rendering audio to the user, such as a desktop computer, a laptop computer, a tablet computer, a smartwatch, etc., and the audio output device can include other types of devices, such as headphones, a headset, a computer speaker, etc., to name only a few possible applications.
In various aspects, description is made with reference to the figures. In the following description, numerous specific details are set forth, such as specific configurations, dimensions, and processes, in order to provide a thorough understanding of the aspects. Certain aspects, however, may be practiced without one or more of these specific details, or in combination with other known methods and configurations. In other instances, well-known processes and manufacturing techniques have not been described in particular detail in order to not unnecessarily obscure the description. Reference throughout this specification to “one aspect,” “an aspect,” or the like, means that a particular feature, structure, configuration, or characteristic described is included in at least one aspect. Thus, the appearances of the phrase “one aspect,” “an aspect,” or the like, in various places throughout this specification are not necessarily referring to the same aspect. Furthermore, the particular features, structures, configurations, or characteristics may be combined in any suitable manner in one or more aspects.
The use of relative terms throughout the description may denote a relative position or direction. For example, “in front of” may indicate a first direction away from a reference point. Similarly, “behind” may indicate a location in a second direction away from the reference point and opposite to the first direction. Such terms are provided to establish relative frames of reference, however, and are not intended to limit the use or orientation of a media system to a specific configuration described in the various aspects below.
In an aspect, a media system is used to accommodate hearing loss of a user. The media system can compensate for a hearing loss profile, whether mild or moderate, of the user. Furthermore, the compensation can be personalized, meaning that it adjusts an audio input signal in a level-dependent and frequency-dependent manner based on the unique hearing preferences of the individual, rather than adjusting only a balance or an overall level of the audio input signal. The media system can personalize the audio tuning based on selections made during a brief and straightforward enrollment process. During the enrollment process the user can experience sounds from several audio signals filtered in different manners, and the user can make binary choices based on subjective evaluations or comparisons of the experiences to select personal audio settings. The personal audio settings include an average gain level and a gain contour of a preferred audio filter. When the user has selected the personal audio settings, the media system can generate an audio output signal by applying a personal level-and-frequency dependent audio filter having the personal audio settings to amplify an audio input signal based on an input level and an input frequency of the audio input signal. Playback of the audio output signal can deliver speech or music to the user that is clear to the user despite the user's hearing loss profile.
Referring to
Referring to
The hearing preferences and/or hearing abilities of a user are frequency-dependent and level-dependent. Individuals that have hearing impairment require a higher sound pressure level in their ears to reach a same perceived loudness as individuals that have less hearing loss. The graph shows loudness level curves 200, which describe perceived loudness (PHON) as a function of sound pressure level (SPL) for several individuals at a particular frequency, e.g., 1 kHz. Curve 202 has a 1:1 slope and an origin at zero because a loudness unit, e.g., 50 PHON, is defined as the perceived loudness of a 1 kHz tone of the corresponding SPL, e.g., 50 dB SPL, by a normal hearing listener. By contrast, an individual having impaired hearing 204 has no perceived loudness until the sound pressure level reaches a threshold level. For example, when the individual has 60 dB hearing loss, the individual will not perceive loudness until the sound pressure level reaches 60 dB.
Referring to
The amount of amplification required to compensate for the hearing loss of the individual decreases as sound pressure level increases. More particularly, the amount of amplification required to compensate for the hearing loss depends on both frequency and input signal level. That is, when the input signal level of the audio input signal produces a higher sound pressure level for a given frequency, less amplification is required to compensate for the hearing loss at the frequency. Similarly, hearing loss of individuals is frequency-dependent, and thus, the loudness level curves and gain curves may differ at another frequency, e.g., 2 kHz. By way of example, if the gain curves shift upward for the individual having impaired hearing (more hearing loss at 2 kHz than 1 kHz), more amplification is required to perceive sound normally at that frequency. Accordingly, when the audio input signal has components at the particular frequency (2 kHz), more amplification is required to compensate for the hearing loss at that frequency. The method of adjusting the audio input signal to amplify the audio input signal based on an input level and an input frequency of the audio input signal may be referred to herein as multiband upward compression.
Multiband upward compression can achieve the desired enhancement of audio content by bringing sounds that are either not perceived or perceived as being too quiet into an audible range, without adjusting sounds that are already perceived as being adequately or normally loud. In other words, multiband upward compression can boost the audio input signal in a level-dependent and frequency-dependent manner to cause a hearing impaired individual to perceive sounds normally. The normalization of the loudness level curve of the hearing impaired individual can avoid over- or under-amplification at certain levels or frequencies, which avoids problems associated with simply turning up volume and amplifying the audio input signal across an entire audible frequency range.
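By way of illustration only, the following sketch shows one way multiband upward compression could be realized. It is not the implementation of media system 100; the band edges, the 80 dB "loud enough" reference, the compression ratio, and the level estimate are hypothetical values chosen for the example.

```python
import numpy as np

# Hypothetical band edges (Hz) for a three-band example; a real tuning would
# come from the selected level-and-frequency-dependent audio filter.
BAND_EDGES_HZ = [(125, 500), (500, 2000), (2000, 8000)]

def band_gain_db(band_level_db, loss_db, compression_ratio=2.0):
    # Upward compression: quiet bands receive more gain than loud bands.
    # A band at or above the assumed "loud enough" level (80 dB) gets no boost;
    # quieter bands get up to `loss_db` of gain, tapering with level.
    headroom_db = max(0.0, 80.0 - band_level_db)
    return max(0.0, min(loss_db, headroom_db / compression_ratio))

def multiband_upward_compression(block, fs, loss_per_band_db):
    # Apply level- and frequency-dependent gain to one block of audio samples.
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), 1.0 / fs)
    for (lo, hi), loss_db in zip(BAND_EDGES_HZ, loss_per_band_db):
        in_band = (freqs >= lo) & (freqs < hi)
        if not np.any(in_band):
            continue
        # Crude per-band level estimate (a stand-in for a calibrated SPL measure).
        band_rms = np.sqrt(np.mean(np.abs(spectrum[in_band]) ** 2)) / len(block)
        band_level_db = 20.0 * np.log10(band_rms + 1e-12) + 100.0
        gain_lin = 10.0 ** (band_gain_db(band_level_db, loss_db) / 20.0)
        spectrum[in_band] *= gain_lin
    return np.fft.irfft(spectrum, n=len(block))
```

In this sketch, a band that is already loud receives little or no gain, while a quiet band at a frequency with more assumed loss receives a larger boost, mirroring the behavior described above.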
Referring to
Referring to
In an aspect, first group 602 can include hearing loss profiles having different contour parameters. The contour parameters can include a flat loss contour 606, a notched loss contour 608, and a sloped loss contour 610. The different shapes can have pronounced hearing loss at respective frequencies. For example, flat loss contour 606 can have more hearing loss at a low band frequency, e.g., at 500 Hz, than notched loss contour 608 or sloped loss contour 610. By contrast, notched loss contour 608 can have more hearing loss at an intermediate band frequency, e.g., at 4 kHz, than flat loss contour 606 or sloped loss contour 610. Sloped loss contour 610 can have more hearing loss at a high band frequency, e.g., at 8 kHz, than flat loss contour 606 or notched loss contour 608.
The hearing loss profile shapes can have other generalized distinctions. For example, flat loss contour 606 can have a smallest variation in hearing loss as compared to notched loss contour 608 and sloped loss contour 610. That is, flat loss contour 606 exhibits more consistent hearing loss at each frequency. Additionally, notched loss contour 608 can have more hearing loss at the intermediate band frequency than at other frequencies for the same curve.
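The three contour shapes can be pictured as simple frequency-by-loss tables. The values in the following sketch are hypothetical placeholders rather than actual preset hearing loss profiles; they only mirror the generalized distinctions noted above (flat has the smallest variation, notched peaks in the intermediate band, sloped rises toward high frequencies).

```python
# Hypothetical loss contours (dB HL per audiometric frequency); the numbers are
# illustrative placeholders, not actual preset hearing loss profiles.
FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]

LOSS_CONTOURS_DB = {
    "flat":    [30, 30, 32, 33, 34, 35],   # smallest variation across frequency
    "notched": [20, 22, 28, 40, 48, 30],   # most loss near the 4 kHz band
    "sloped":  [15, 18, 25, 35, 45, 55],   # loss rising toward 8 kHz
}

def average_loss_db(contour_db):
    # Average hearing loss level, used to group profiles into level groups.
    return sum(contour_db) / len(contour_db)
```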
The hearing loss profiles shown in
The comparison between audiograms and hearing loss profiles as described above is introduced by way of example, and will be referenced again below with respect to
Referring to
In an aspect, personal level-and-frequency dependent audio filter 402 can be a multiband compression gain table. The multiband compression gain table can be a user-specific prescription to compensate for the hearing loss of an individual and thereby provide personalized media enhancement. In an aspect, personal level-and-frequency dependent audio filter 402 is used to amplify audio input signal 404 based on an input level 902 and an input frequency 904. Input level 902 of audio input signal 404 can be determined within a range spanning from low sound pressure levels to high sound pressure levels. By way of example, audio input signal 404 can have the sound pressure level shown at the left of the gain table, which may be 20 dB, for example. Input frequency 904 of audio input signal 404 can be determined within an audible frequency range. By way of example, audio input signal 404 can have a frequency at the top of the gain table, which may be 8 kHz, for example. Based on input level 902 and input frequency 904 of audio input signal 404, media system 100 can determine that a particular gain level, e.g., 30 dB, is to be applied to audio input signal 404 to generate audio output signal 406. It will be appreciated that this example is consistent with the hearing loss and gain curves of
The gain table example of
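By way of illustration, a multiband compression gain table of the kind described above could be queried as follows. The table entries, band centers, and level breakpoints in this sketch are hypothetical; an actual personal level-and-frequency dependent audio filter would carry its own calibrated values.

```python
import numpy as np

# Hypothetical multiband compression gain table: rows are input levels (dB SPL),
# columns are frequency bands (Hz), entries are gains in dB.
LEVELS_DB = np.array([20, 40, 60, 80])
BANDS_HZ = np.array([250, 1000, 4000, 8000])
GAIN_TABLE_DB = np.array([
    #  250  1k   4k   8k
    [  10,  15,  25,  30],   # 20 dB input: most boost (quiet sounds)
    [   8,  12,  18,  22],   # 40 dB input
    [   4,   6,  10,  12],   # 60 dB input
    [   0,   2,   4,   5],   # 80 dB input: little or no boost (already loud)
])

def lookup_gain_db(input_level_db, input_freq_hz):
    # Bilinear interpolation of the table at (input level, input frequency):
    # first along the frequency axis for each tabulated level, then along the
    # level axis.
    per_level = [np.interp(input_freq_hz, BANDS_HZ, row) for row in GAIN_TABLE_DB]
    return float(np.interp(input_level_db, LEVELS_DB, per_level))

# Consistent with the example above: a quiet (20 dB), high-frequency (8 kHz)
# component receives a large boost.
print(lookup_gain_db(20, 8000))   # -> 30.0 dB
```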
Referring to
In an aspect, a convenient and noise-robust enrollment procedure can be used to drive the selection of a personal level-and-frequency dependent audio filter that accommodates the hearing preferences of the user. The enrollment procedure can play back one or more audio signals altered by one or more predetermined gain levels and/or one or more level-and-frequency-dependent audio filters that correspond to the most common hearing loss profiles of a predetermined demographic. The user can make selections during the enrollment procedures, e.g., of one or more of the level-and-frequency-dependent audio filters, and through the user selections, media system 100 can determine and/or select an appropriate personal level-and-frequency dependent audio filter to apply to an audio input signal for the user. Several embodiments of enrollment procedures are described below. The enrollment procedures can incorporate several stages, and one or more of the stages of the embodiments can differ. For example,
Referring to
During the first stage, audio input signal 404 can be reproduced for the user with a first predetermined gain level. For example, the speech signal may be output at a low level, e.g., 40 dB or less. The first predetermined gain level can correspond to one of the different average hearing loss levels, e.g., levels 604, 704, or 804. For example, the 40 dB or less level may be expected to be heard by the demographic having average hearing loss level 604, but possibly not by the demographics having average hearing loss levels 704 and 804.
During playback of the first audio signal at the first level of amplification, the user can select an audibility selection element 1102 or an inaudibility selection element 1104 of a graphical user interface displayed on audio signal device 102 of media system 100. More particularly, after listening to the first setting, the user can make a selection indicating whether the output audio signal has a loudness that is audible to the user. The user can select the audibility selection element 1102 to indicate that the output level is audible. By contrast, the user can select the inaudibility selection element 1104 to indicate that the output level is inaudible.
After making the selection of the audibility selection element 1102 or the inaudibility selection element 1104, the user may select the selection element 1106 to provide the selection to the system. When the system receives the selection of the audibility selection element 1102, the system can determine, based on the selection indicating whether the output audio signal is audible to the user, a personal average gain level of the user. For example, when the system receives the selection of the audibility selection element 1102 during a first phase of the first stage, the system can determine that the personal average gain level for the user corresponds to average hearing loss level 604 of the mild hearing loss profile group. This hearing loss profile group may be used as a basis for further exploration of level-and-frequency-dependent audio filters in a second stage of the enrollment procedure. By contrast, selection of the inaudibility selection element 1104 during the first phase can cause the enrollment procedure to progress to a second phase of the first stage of the enrollment procedure.
In the second phase of the first stage, the first audio signal may be played at a second level of amplification. For example, the speech signal may be output at a higher level, e.g., 55 dB. After listening to the second setting, the user can select the audibility selection element 1102 or the inaudibility selection element 1104 to indicate whether the speech signal is audible.
After making the selection of the audibility selection element 1102 or the inaudibility selection element 1104, the user may select the selection element 1106 to provide the selection to the system. The system can determine, based on the selection indicating whether the output audio signal is audible to the user, the personal average gain level. For example, when the system receives the selection of the audibility selection element 1102 during the second phase of the first stage, the system can determine that the personal average gain level for the user corresponds to average hearing loss level 704 of the mild to moderate hearing loss profile group. This hearing loss profile group may be used as a basis for further exploration of level-and-frequency-dependent audio filters in the second stage of the enrollment procedure. By contrast, when the system receives the selection of the inaudibility selection element 1104 during the second phase, the system can determine that the personal average gain level for the user corresponds to average hearing loss level 804 of the moderate hearing loss profile group. This hearing loss profile group may be used as a basis for further exploration of level-and-frequency-dependent audio filters in the second stage of the enrollment procedure.
The first audio signal can be generated and/or output during the first stage using the one or more predetermined gain levels in an order of increasing gain. For example, as described above, the first audio signal can be output at 40 dB during the first phase and then at 55 dB during the second phase as the user progresses through the first stage of the enrollment procedure. Playback of the speech signal using the increasing predetermined gain levels can continue until the personal average gain level is determined. Determination of the personal average gain level can be made through selection of the audibility selection element 1102 or selection of the inaudibility selection element 1104. For example, if the user selects the audibility selection element 1102 when the speech signal is output at 55 dB, the personal average gain level corresponding to the mild to moderate hearing loss profile is determined. By contrast, if the user selects the inaudibility selection element 1104 after outputting the speech signal at 55 dB, the personal average gain level corresponding to the moderate hearing loss profile is determined.
The first audio signal may be set at a calibrated level, and thus, volume adjustment during the first stage of the enrollment process may be disallowed. More particularly, one or more processors of the media system 100 can disable volume adjustment of the media system 100 during output of the first audio signal. By locking out the volume controls of media system 100 during the first stage of the enrollment process, the gain levels that compensate for hearing loss can be set to the predetermined gain levels that correspond to the common hearing loss profiles that are being tested for. Accordingly, the levels can be explored using the speech stimulus at predetermined levels that are fixed during the evaluation.
Referring to
When the speech signal is presented at a first level, e.g., 40 dB, during the first phase of the first stage of the enrollment procedure, the user makes a selection to indicate whether the output audio signal is audible. Selection of the audibility selection element 1102 indicates that the first level is audible, and may be termed a first phase audibility selection 1200. The system can determine, based on the first phase audibility selection 1200, that a zero gain audio filter and/or a first group of level-and-frequency-dependent audio filters (1F, 1N, and 1S) have respective average gain levels equal to a personal average gain level of the user. More particularly, the system can determine, in response to first phase audibility selection 1200, that the personal average gain level of the user is one of the average gain levels of the zero gain audio filter or the first group of level-and-frequency-dependent audio filters (1F, 1N, and 1S). For example, the zero gain audio filter can have an average gain level of zero, and the first group of filters can have an average gain level corresponding to the first group 602 of hearing loss profiles. One or more of the audio filters can be explored during the second stage of the enrollment procedure to further narrow the determination, as described below.
When the speech signal is presented at a second level, e.g., 55 dB, during the second phase of the first stage of the enrollment procedure, the user makes a selection to indicate whether the output audio signal is audible. Selection of the audibility selection element 1102 indicates that the second level is audible, and may be termed a second phase audibility selection 1204. The system can determine, based on the second phase audibility selection 1204, that a second group of level-and-frequency-dependent audio filters (2F, 2N, and 2S) has an average gain level equal to a personal average gain level of the user. More particularly, the personal average gain level of the user can be determined to be the average gain level of the second group. For example, the second group of filters can have an average gain level corresponding to the second group 702 of hearing loss profiles. One or more of the audio filters of the second group can be explored during the second stage of the enrollment procedure, as described below.
Selection of the inaudibility selection element 1104 during presentation of the speech signal at the second level indicates that the second level is inaudible, and may be termed a second phase inaudibility selection 1206. The system can determine, based on the second phase inaudibility selection 1206, that a third group of level-and-frequency-dependent audio filters (3F, 3N, and 3S) has an average gain level equal to a personal average gain level of the user. More particularly, the personal average gain level of the user can be determined to be the average gain level of the third group. For example, the third group of filters can have an average gain level corresponding to the third group 802 of hearing loss profiles. One or more of the audio filters of the third group can be explored during the second stage of the enrollment procedure, as described below.
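The branching of this first stage can be summarized as a small decision function. The sketch below is illustrative only; the group labels follow the (1F, 1N, 1S), (2F, 2N, 2S), and (3F, 3N, 3S) designations above, and the 40 dB and 55 dB presentations are the example values used in the description.

```python
def first_stage_filter_groups(heard_at_40_db, heard_at_55_db=None):
    # Map the audibility selections to the audio filter group(s) that will be
    # explored during the second stage of the enrollment procedure.
    if heard_at_40_db:
        # First phase audibility selection: zero gain plus the low-gain group.
        return ["zero", "1F", "1N", "1S"]
    if heard_at_55_db:
        # Second phase audibility selection: the mid-gain group.
        return ["2F", "2N", "2S"]
    # Second phase inaudibility selection: the high-gain group.
    return ["3F", "3N", "3S"]
```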
In the second stage of the enrollment process, the user can explore the determined group(s) of level-and-frequency-dependent audio filters to select a personal gain contour. The personal gain contour can correspond to the user-preferred gain contour (flat, notched, or sloped) that adjusts audio input signal tonal characteristics to the liking of the user.
Referring to
During the second stage, audio input signal 404 can be sequentially reproduced for the user with different tonal enhancement settings. More particularly, the group(s) of level-and-frequency-dependent audio filters determined in response to the first phase audibility selection 1200, the second phase audibility selection 1204, or the second phase inaudibility selection 1206 are used to output the second audio signal. Each of the members of the groups can have different gain contours. For example, each group (other than the zero gain audio filter) can include a flat audio filter corresponding to a flat loss contour of a common hearing loss profile, a notched audio filter corresponding to a notched loss contour of a common hearing loss profile, and a sloped audio filter corresponding to a sloped loss contour of a common hearing loss profile. It will be appreciated, with reference to the loss contours above and the inverse relationship between the loss contours and the respective gain contours, that the gain contour of the flat audio filter has a highest gain at a low frequency band, the gain contour of the notched audio filter has a highest gain at an intermediate frequency band, and the gain contour of the sloped audio filter has a highest gain at a high frequency band. The audio filters are applied to the second audio signal to play back the audio signal such that different frequencies are pronounced corresponding to different hearing loss contours.
The user can select current tuning element 1304 to play the second audio signal with a first playback setting. For example, when the first phase audibility selection 1200 was made in
Referring to
In the illustrated example, the second phase audibility selection 1204 was made in
The second stage of the enrollment process may require presentation of all gain contour settings in the vertical direction across the grid of
Referring to
In contrast to the first stage of the enrollment process, volume adjustment of media system 100 can be enabled during output of the second audio signal. Allowing volume adjustment can help distinguish between tonal characteristics of the different audio signal adjustments. More particularly, allowing the user to adjust the volume of media system 100 using a volume control 1302 (
A sequence of presentation of filtered audio signals allows the user to step through the enrollment process to first determine a personal average gain level and then determine a personal gain contour. More particularly, the user can first select the personal average gain level by selecting a setting at which the first audio signal is audible, and then select personal gain contour 1402 by stepping through the grid in the vertical direction along a shape axis. Each square of the grid represents a level-and-frequency-dependent audio filter having a respective average gain level and gain contour, and thus, the illustrated example (3×3 grid) assumes that personal level-and-frequency dependent audio filter 402 that results from the enrollment process will be one of 9 level-and-frequency-dependent audio filters corresponding to 9 common hearing loss profiles. This level of granularity, e.g., three level groups and three contour groups, has been shown to consistently lead users to select the preset that they preferred, whether or not the selected preset precisely matched their hearing loss profile. It will be appreciated, however, that the number of presets used in the enrollment process can vary. For example, the first stage of the enrollment process could allow the users to step through four or more predetermined gain levels to drive the selection of audio filter groups having the personal average gain level. Similarly, more or fewer gain contours may be represented across the shape axis of the grid to allow the user to assess different tonal enhancements.
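The grid itself can be viewed as a small lookup keyed by level column and contour row. The following sketch assumes the 3×3 layout discussed above; the string keys are illustrative labels only.

```python
# Each cell of the 3x3 grid names one level-and-frequency-dependent audio
# filter: columns are average gain levels, rows are gain contours.
PRESET_GRID = {
    ("low", "flat"): "1F",    ("mid", "flat"): "2F",    ("high", "flat"): "3F",
    ("low", "notched"): "1N", ("mid", "notched"): "2N", ("high", "notched"): "3N",
    ("low", "sloped"): "1S",  ("mid", "sloped"): "2S",  ("high", "sloped"): "3S",
}

def select_personal_filter(personal_level, personal_contour):
    # Stage one of the enrollment process fixes the column (level); stage two
    # fixes the row (contour); the intersection is the personal filter.
    return PRESET_GRID[(personal_level, personal_contour)]
```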
Referring to
As described above, the enrollment process allows the user to first explore levels to determine a correct column within the audio filter grid for further exploration of contours. At operation 1502, in the first stage of the enrollment process, the user listens to an audio signal at a predetermined level, e.g., a 40 dB level. The predetermined level is a presentation level resulting from a predetermined gain level being applied to the speech audio signal. At operation 1504, media system 100 determines whether the user can hear the current presentation level. For example, if the user can hear the 40 dB level resulting from the predetermined gain level audio filter, the user selects the audibility selection element 1102 to identify the current level as corresponding to the personal average gain level. In such case, the system determines that the personal average gain level is the average gain level of the zero gain filter or the (1F, 1N, 1S) audio filter group. If, however, the user selects the inaudibility selection element 1104, at operation 1506 the first decision sequence iterates to a next predetermined level, e.g., a 55 dB level. The next predetermined level is a presentation level resulting from a next predetermined gain level being applied to the speech audio signal. The audio signal can be presented at the next predetermined level at operation 1502. At operation 1504, media system 100 determines whether the user can hear the current level. If the user can hear the current level, the user selects the audibility selection element 1102 to identify the current level as corresponding to the personal average gain level. In such case, the system determines that the personal average gain level is the average gain level of the (2F, 2N, 2S) audio filter group. If the user selects the inaudibility selection element 1104, however, the system determines that the personal average gain level is the average gain level of the (3F, 3N, 3S) audio filter group. Whichever level the user selects as being audible during the iterations can be used to drive the determination of the personal average gain level. When the user selects the audible level, the system can determine the audio filter groups for further exploration which have average gain levels corresponding to the selected predetermined gain level. More particularly, the personal average gain level can be determined from the audibility selections and the enrollment process can continue to the second stage.
As described above, the enrollment process allows the user to explore gain contours within the selected audio filter groups to determine a correct row within the audio filter grid, and thus, arrive at the square within the grid that represents personal level-and-frequency dependent audio filter 402. At operation 1508, in the second stage of the enrollment process, the user compares several shape audio signals.
In a special case, the user makes first phase audibility selection 1200 and the system determines that the zero gain audio filter or the (1F, 1N, 1S) audio filter group correspond to the personal average gain level of the user. In such case, the music file is played at the decision sequence 1508. At decision sequence 1508, a comparison can be made between the zero gain audio filter (or no filter) applied to the music audio signal and the low-gain flat audio filter (1F) applied to the music audio signal. If the zero gain audio filter is again selected, e.g., via the current tuning element 1304, the process can iterate to compare the zero gain audio filter to the low-gain notched audio filter (1N). If the zero gain audio filter is again selected, e.g., via the current tuning element 1304, the enrollment process can end and no audio filter is applied to audio input signal 404. More particularly, when the flowchart advances through the sequence with the user selecting the zero gain audio filter over the several level-and-frequency-dependent audio filters corresponding to the hearing loss profiles, media system 100 determines that the user has normal hearing and no adjustments are made to the default audio settings of the system. This may also be framed as the personal level-and-frequency-dependent audio filter having a personal average gain level of zero and a personal gain contour of non-adjustment.
In the event that the user selects a non-zero personal average gain level, however, e.g., the second phase audibility selection 1204 or the second phase inaudibility selection 1206 is selected during the first stage, or the (1F) or (1N) audio filters are selected at the initial operation 1508 of the second stage, the shape audio signal comparison at operation 1508 is between the non-zero gain audio filters applied to the music audio signal. For example, if the second phase audibility selection 1204 drove the selection of the (2F, 2N, 2S) audio filter group for further exploration, then at operation 1508 the (2F) audio filter can be applied to the music audio signal as the current tuning and the mid-gain notched audio filter (2N) can be applied to the music audio signal as the altered tuning. The filtered audio signals can be presented to the user as respective shape audio signals. At operation 1510, media system 100 determines whether the user has selected a personal gain contour 1402. The personal gain contour 1402 is selected after the user has listened to all shape audio signals and selected a preferred shape audio signal. For example, if the user selects the (2F) audio filter over the (2N) audio filter at operation 1508, the (2F) audio filter is a candidate for the personal gain contour 1402. At operation 1512, the second stage iterates to a next shape audio signal comparison. For example, the (2F) audio filter selected during a previous iteration can be applied to the music audio signal and the mid-gain sloped audio filter (2S) can be applied to the music audio signal. The filtered audio signals can be presented to the user as respective shape audio signals at operation 1508, and the user can select the preferred shape audio signal. At operation 1510, media system 100 determines whether the user has selected personal gain contour 1402. For example, if the user selects the (2S) audio filter, media system 100 identifies the selection as personal gain contour 1402 given that the user selected the audio filter and all shape audio signals have been presented to the user for selection.
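The second-stage comparisons amount to a keep-the-winner loop over the contour candidates in the determined group. The sketch below abstracts the listening comparison behind a callback; the callback name and the candidate labels are hypothetical.

```python
def select_personal_gain_contour(candidate_filters, user_prefers_challenger):
    # Keep-the-winner loop over contour candidates, e.g. ["2F", "2N", "2S"].
    # `user_prefers_challenger(current, challenger)` is a hypothetical callback
    # that plays the music signal through both filters (current tuning versus
    # altered tuning) and returns True if the user selects the challenger.
    current = candidate_filters[0]
    for challenger in candidate_filters[1:]:
        if user_prefers_challenger(current, challenger):
            current = challenger
    # Once every contour has been auditioned, the surviving filter carries the
    # personal gain contour.
    return current
```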
After the level and contour settings are explored, at operation 1002, media system 100 selects personal level-and-frequency dependent audio filter 402. More particularly, the user's selections identify a particular square in the grid, e.g., based in part on personal level-and-frequency dependent audio filter 402 having the personal average gain level determined from the first stage, and based in part on personal level-and-frequency dependent audio filter 402 having personal gain contour 1402 determined from the second stage. The selected filter having the personal average gain level and personal gain contour 1402 can be used by the process in a verification operation. At the verification operation, an audio signal, e.g., a music audio signal, can be output and played back by media system 100 using personal level-and-frequency dependent audio filter 402 that was identified during the enrollment process. The verification operation allows the user to adjust between the selected preset and normal play (no adjustment) so that the user can confirm that the adjustment is in fact an improvement. When the user agrees that the personal level-and-frequency dependent audio filter improves a listening experience, the user can select an element, e.g., “done,” to complete the enrollment process.
At the conclusion of the enrollment process, personal level-and-frequency dependent audio filter 402 is identified as the audio filter having the preferred personal average gain level and/or personal gain contour 1402 of the user. Accordingly, at operation 1002, media system 100 can select personal level-and-frequency dependent audio filter 402 based in part on personal level-and-frequency dependent audio filter 402 having the personal average gain level, and based in part on personal level-and-frequency dependent audio filter 402 having personal gain contour 1402, as determined by the enrollment process.
In an alternative embodiment, the enrollment procedure can differ from the process described above with respect to
The user can select a current tuning element 1602 of a graphical user interface displayed on audio signal device 102 of media system 100 to play the first audio signal with a first level of amplification. After listening to the first setting, the user can select an altered tuning element 1604 of the graphical user interface to play the first audio signal with a second level of amplification, which is higher than the first level of amplification. When the user has identified the preferred setting, e.g., the tuning that allows the user to better hear the speech of the first audio signal, the user can select a selection element 1606 of the graphical user interface. Alternatively, the user can make a selection through a physical switch, such as by tapping a button on audio signal device 102 or audio output device 104. If the user selects selection element 1606 while current tuning element 1602 is enabled, the selection can be a personal average gain level 1702. More particularly, the personal average gain level 1702 can be the average gain level applied to the first audio signal when the user decides to continue the enrollment process using the current tuning. Alternatively, the user may choose to continue the enrollment with the altered tuning element 1604 enabled. In such case, the selection causes the enrollment process to progress to a next operation in the first stage. At the next operation, the first audio signal can be reproduced by another pair of level-and-frequency-dependent audio filters.
Referring to
It will be appreciated that, should the user prefer the altered tuning in
In an aspect, the first audio signal is output to the user using level-and-frequency-dependent audio filters of the first group in an order of increasing average gain levels. For example, in
In an aspect, the first audio signal can have some noise embedded to provide realism to the listening experience. By way of example, the first audio signal can include a speech signal representing speech, and a noise signal representing noise. The speech signal and the noise signal can be embedded at a particular ratio such that an increase in level of the first audio signal brings up the level of both the speech and the noise audio content in the speech file. For example, a ratio of the speech signal to the noise signal can be in a range of 10 to 30 dB, e.g., 15 dB. The ratio may be high enough that noise does not overpower the speech. Progressive amplification of the noise with each increase in average gain level, however, may deter the user from selecting a level-and-frequency-dependent audio filter that unnecessarily boosts the volume of the audio signal. More particularly, the embedded noise provides realism to help the user select an amplification level that compensates, but does not overcompensate, for the user's hearing loss.
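Mixing the speech stimulus with noise at a fixed speech-to-noise ratio could be done roughly as follows; this is a generic SNR-mixing sketch using the 15 dB example ratio above, not actual stimulus-generation code of media system 100.

```python
import numpy as np

def mix_speech_and_noise(speech, noise, snr_db=15.0):
    # Scale the noise so the speech-to-noise ratio equals `snr_db`, then mix.
    # At 15 dB (within the 10 to 30 dB range noted above) the noise adds
    # realism without overpowering the speech, and any later gain applied to
    # the mix raises speech and noise together.
    speech_rms = np.sqrt(np.mean(np.square(speech)))
    noise_rms = np.sqrt(np.mean(np.square(noise))) + 1e-12
    target_noise_rms = speech_rms / (10.0 ** (snr_db / 20.0))
    return speech + noise * (target_noise_rms / noise_rms)
```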
The first audio signal may be set at a calibrated level, and thus, volume adjustment during the first stage of the enrollment process may be disallowed. More particularly, one or more processors of the media system 100 can disable volume adjustment of the media system 100 during output of the first audio signal. By locking out the volume controls of media system 100 during the first stage of the enrollment process, the gain levels that compensate for hearing loss can be set to the average gain levels of the level-and-frequency-dependent audio filters that correspond to the common hearing loss profiles that are being tested for. Accordingly, the levels can be explored using a speech stimulus at a fixed level.
In addition to allowing a selection of the personal average gain level 1702 during the first stage, the enrollment process can include a second stage to select a personal gain contour. The personal gain contour can correspond to the user-preferred gain contour (flat, notched, or sloped) that adjusts audio input signal tonal characteristics to the liking of the user.
Referring to
During the second stage, audio input signal 404 can be sequentially reproduced for the user with different tonal enhancement settings. More particularly, the second group of level-and-frequency-dependent audio filters used to output the second audio signal can have different gain contours. The second group can include a flat audio filter corresponding to a flat loss contour of a common hearing loss profile, a notched audio filter corresponding to a notched loss contour of a common hearing loss profile, and a sloped audio filter corresponding to a sloped loss contour of a common hearing loss profile. It will be appreciated, with reference to the loss contours above and the inverse relationship between the loss contours and the respective gain contours, that the gain contour of the flat audio filter has a highest gain at a low frequency band, the gain contour of the notched audio filter has a highest gain at an intermediate frequency band, and the gain contour of the sloped audio filter has a highest gain at a high frequency band. The audio filters are applied to the second audio signal to play back the audio signal such that different frequencies are pronounced corresponding to different hearing loss contours.
The user can select current tuning element 1602 to play the second audio signal with a first audio filter having a respective gain contour. After listening to the first setting, the user can select altered tuning element 1604 to play the second audio signal with a second audio filter having a respective gain contour, which is different than the gain contour of the first audio filter. When the user has identified the preferred setting, e.g., the tuning that allows the user to better hear the music of the second audio signal, the user can select selection element 1606. Alternatively, the user can make a selection through a physical switch, such as by tapping a button on audio signal device 102 or audio output device 104.
Referring to
Whereas the first stage of the enrollment process did not require presentation of all average gain level settings as represented in the horizontal direction across the grid of
Referring to
In contrast to the first stage of the enrollment process, volume adjustment of media system 100 can be enabled during output of the second audio signal. Allowing volume adjustment can help distinguish between tonal characteristics of the different audio signal adjustments. More particularly, allowing the user to adjust the volume of media system 100 using a volume control 2302 (
A sequence of presentation of filtered audio signals allows the user to step through the grid in the horizontal direction during the first stage and in the vertical direction during the second stage. More particularly, the user can first select personal average gain level 1702 by stepping through the grid in the horizontal direction along a level axis, and then select personal gain contour 1902 by stepping through the grid in the vertical direction along a shape axis. Each square of the grid represents a level-and-frequency-dependent audio filter having a respective average gain level and gain contour, and thus, the illustrated example (3×3 grid) assumes that personal level-and-frequency dependent audio filter 402 that results from the enrollment process will be one of 9 level-and-frequency-dependent audio filters corresponding to 9 common hearing loss profiles. This level of granularity, e.g., three level groups and three contour groups, has been shown to consistently lead users to select the preset that they preferred, whether or not the selected preset precisely matched their hearing loss profile. It will be appreciated, however, that the number of presets used in the enrollment process can vary. For example, the first stage of the enrollment process could allow the users to step through four or more average gain levels across a grid having more columns. Similarly, more or fewer gain contours may be represented across the shape axis of the grid to allow the user to assess different tonal enhancements.
Referring to
As described above, the enrollment process allows the user to first explore levels to determine a correct column within the audio filter grid. At operation 2002, in the first stage of the enrollment process, the user compares several level audio signals, e.g., a current gain level and a next gain level. For example, the zero gain audio filter (no gain, or “off”) can be applied to the speech audio signal as a current gain level and the low-gain flat audio filter (1F) can be applied to the speech audio signal as a next gain level. The filtered audio signals can be presented to the user as respective level audio signals. At operation 2004, media system 100 determines whether the user is satisfied with the current level. For example, if the user is satisfied with the zero gain audio filter, the user selects the zero gain audio filter as personal average gain level 1702. If, however, the user selects the next audio level, e.g., the (1F) level-and-frequency-dependent audio filter, at operation 2006 the first decision sequence iterates to a next level audio signal comparison. For example, the (1F) filter can be applied to the speech audio signal as the current gain level and the mid-gain flat audio filter (2F) can be applied to the speech audio signal as the next gain level. The filtered audio signals can be presented to the user as respective level audio signals at operation 2002, and the user can select the preferred level audio signal. At operation 2004, media system 100 determines whether the user is satisfied with the current level. If the user is satisfied with the current level, the user selects the current level, which the system determines as personal average gain level 1702. If the user is more satisfied with the next level, the user selects the next gain level and the system iterates to allow a comparison of a next group of level audio signals. For example, the sequence advances to allow the user to also compare the mid-gain flat audio filter (2F) and the high-gain flat audio filter (3F). Whichever current level the user selects during the iterations can be determined to be personal average gain level 1702. More particularly, when the user selects the zero gain audio filter, the (1F) filter, the (2F) filter, or the (3F) filter at the point in the process when the selected filter is the current (as compared to the next) audio filter, the selected audio filter can be determined to have personal average gain level 1702 and the enrollment process can continue to the second stage.
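The first-stage comparisons of this embodiment follow an escalate-until-satisfied pattern over the level candidates. As with the earlier sketches, the callback and candidate labels below are hypothetical.

```python
def select_personal_average_gain_level(level_candidates, user_prefers_next):
    # Escalate-until-satisfied loop over level candidates, e.g.
    # ["zero", "1F", "2F", "3F"].
    # `user_prefers_next(current, louder)` is a hypothetical callback that
    # plays the speech signal through the current filter and the next (higher
    # average gain) filter and returns True if the user selects the altered,
    # louder tuning.
    current = level_candidates[0]
    for louder in level_candidates[1:]:
        if not user_prefers_next(current, louder):
            break         # the user is satisfied with the current level
        current = louder  # otherwise step up to the next average gain level
    return current
```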
As described above, the enrollment process allows the user to explore gain contours within the selected gain level to determine a correct row within the audio filter grid, and thus, arrive at the square within the grid that represents personal level-and-frequency dependent audio filter 402. At operation 2008, in the second stage of the enrollment process, the user compares several shape audio signals.
In a special case, the user selects the zero gain audio filter as the personal gain level during the first stage. In such case the speech file is played at the decision sequence 2008. Similar to decision sequence 2002, at decision sequence 2008 a comparison can be made between the zero gain audio filter applied to the speech audio signal and the low-gain notched audio filter (1N) applied to the speech audio signal. If the zero gain audio filter is again selected, the process can iterate to compare the zero gain audio filter to the low-gain sloped audio filter (1S). If the zero gain audio filter is again selected, the enrollment process can end and no audio filter is applied to audio input signal 404. More particularly, when the flowchart advances through the sequence with the user selecting the zero gain audio filter over the several level-and-frequency-dependent audio filters corresponding to the hearing loss profiles, media system 100 determines that the user has normal hearing and no adjustments are made to the default audio settings of the system.
In the event that the user selects a non-zero personal gain level during the first stage, the shape audio signal comparison at operation 2008 is between the non-zero gain audio filters applied to the music audio signal. For example, if the (1F) audio filter was selected as the personal gain level at operation 2004, then at operation 2008 the (1F) audio filter can be applied to the music audio signal and the low-level notched audio filter (1N) can be applied to the music audio signal. The filtered audio signals can be presented to the user as respective shape audio signals. At operation 2010, media system 100 determines whether the user has selected a personal gain contour 1902. The personal gain contour 1902 is selected after the user has listened to all shape audio signals and selected a preferred shape audio signal. For example, if the user selects the (1F) audio filter over the (1N) audio filter at operation 2008, the (1F) audio filter is a candidate for the personal gain contour 1902. At operation 2012, the second stage iterates to a next shape audio signal comparison. For example, the (1F) audio filter selected during a previous iteration can be applied to the music audio signal and the low-level sloped audio filter (1S) can be applied to the music audio signal. The filtered audio signals can be presented to the user as respective shape audio signals at operation 2008, and the user can select the preferred shape audio signal. At operation 2010, media system 100 determines whether the user has selected personal gain contour 1902. For example, if the user selects the (1S) audio filter, media system 100 identifies the selection as personal gain contour 1902 given that the user selected the audio filter and all shape audio signals have been presented to the user for selection.
After the level and contour settings are explored, at operation 1002, media system 100 selects personal level-and-frequency dependent audio filter 402. More particularly, the user's selections identify a particular square in the grid, e.g., based in part on personal level-and-frequency dependent audio filter 402 having personal average gain level 1702, and based in part on personal level-and-frequency dependent audio filter 402 having personal gain contour 1902. The selected filter having personal average gain level 1702 and personal gain contour 1902 can be used by the process in a verification operation. At the verification operation, an audio signal, e.g., a music audio signal, can be output and played back by media system 100 using personal level-and-frequency dependent audio filter 402 that was identified during the enrollment process. The verification operation allows the user to adjust between the selected preset and normal play (no adjustment) so that the user can confirm that the adjustment is in fact an improvement. When the user agrees that the personal level-and-frequency dependent audio filter improves a listening experience, the user can select an element, e.g., “done,” to complete the enrollment process.
At the conclusion of the enrollment process, personal level-and-frequency dependent audio filter 402 is identified as the audio filter having the preferred personal average gain level 1702 and personal gain contour 1902 of the user. Accordingly, at operation 1002, media system 100 can select personal level-and-frequency dependent audio filter 402 based in part on personal level-and-frequency dependent audio filter 402 having personal average gain level 1702, and based in part on personal level-and-frequency dependent audio filter 402 having personal gain contour 1902, as determined by the enrollment process.
The enrollment processes described above drive media system 100 toward the selection of personal level-and-frequency dependent audio filter 402 based on the assumption that the actual hearing loss of the user will be similar to the common hearing loss profile presets that are stored by the system. No knowledge of the user's personal audiogram 500 is necessary to complete the enrollment process. When personal audiogram 500 is available, however, using it may lead to outcomes that are as good as, or better than, those of the selection process described above.
Referring to
In an aspect, the use of personal audiogram 500 to drive the presets available for selection during the enrollment process can be especially helpful for a user that has an uncommon hearing loss profile. Media system 100 can receive personal audiogram 500 at operation 2102. At operation 2104, media system 100 can determine several hearing loss profiles 2110 based on personal audiogram 500. Similarly, at operation 2106, media system 100 can determine level-and-frequency-dependent audio filters that correspond to the user-specific hearing loss profile presets. The determined hearing loss profiles and/or level-and-frequency-dependent audio filters can be user-specific presets that are personalized to the user to ensure a good listening experience. For example, an average hearing loss 504 of the user may be determined from personal audiogram 500, and the several user-specific presets that are determined may include hearing loss profiles that each have an average hearing loss value similar to the average hearing loss value of personal audiogram 500. In an aspect, the average hearing loss value of each of the user-specific presets is within a predetermined difference, e.g., +/−10 dB, of the average hearing loss value of personal audiogram 500. As shown in
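By way of illustration only, the determination of user-specific presets from the average hearing loss of the personal audiogram could be sketched as follows. The audiogram representation (a mapping of audiometric frequency to hearing loss in dB), the profile data structure, and the function name are assumptions; only the +/−10 dB tolerance follows the example above.

```python
import statistics

# Hypothetical sketch: select user-specific presets whose average hearing loss
# is within a predetermined difference (e.g., +/-10 dB) of the average hearing
# loss of the personal audiogram. An audiogram is represented here as a dict
# of {frequency_hz: hearing_loss_db}.

def user_specific_presets(personal_audiogram, stored_profiles, tolerance_db=10.0):
    personal_avg = statistics.mean(personal_audiogram.values())   # cf. average hearing loss 504
    presets = []
    for profile in stored_profiles:                                # preset hearing loss profiles
        profile_avg = statistics.mean(profile["audiogram"].values())
        if abs(profile_avg - personal_avg) <= tolerance_db:        # within +/-10 dB of the user
            presets.append(profile)
    return presets                                                 # cf. hearing loss profiles 2110
```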
In an aspect, the determined level-and-frequency-dependent audio filters corresponding to the user-specific presets are applied to the speech and/or music audio signals. More particularly, the audio filters can be assessed in a decision tree such as the sequence described with respect to
Referring to
In an aspect, at operation 2202, media system 100 can receive personal audiogram 500. At operation 2204, media system 100 can determine and/or select a personal hearing loss profile 2205 based on personal audiogram 500. For example, personal hearing loss profile 2205 can be selected from several hearing loss profiles that are stored by or available to media system 100. Selection of personal hearing loss profile 2205 may be driven by an algorithm for fitting personal audiogram 500 to the known hearing loss profiles. More particularly, media system 100 can select personal hearing loss profile 2205 having an average hearing loss and a hearing loss contour that most closely match those of personal audiogram 500. When the closest match is found, media system 100 can select personal hearing loss profile 2205 and determine the level-and-frequency-dependent audio filter that corresponds to personal hearing loss profile 2205. More particularly, at operation 2206, media system 100 can select or determine personal level-and-frequency dependent audio filter 402 corresponding to personal hearing loss profile 2205, which can be used to compensate for hearing loss of the user.
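One way to realize the fitting algorithm mentioned above is a nearest-profile search over average hearing loss and hearing loss contour, as sketched below. The distance metric, data structures, and function names are assumptions for illustration; the exact fitting algorithm is not limited to this form.

```python
import statistics

# Hypothetical sketch of fitting personal audiogram 500 to stored hearing loss
# profiles. The score combines the difference in average hearing loss with a
# per-frequency (contour) difference; a simple sum of the two is assumed.
# Profiles are assumed to be sampled at the same audiometric frequencies as
# the personal audiogram.

def closest_hearing_loss_profile(personal_audiogram, stored_profiles):
    personal_avg = statistics.mean(personal_audiogram.values())

    def score(profile):
        prof_audiogram = profile["audiogram"]
        avg_diff = abs(statistics.mean(prof_audiogram.values()) - personal_avg)
        contour_diff = statistics.mean(
            abs(prof_audiogram[f] - personal_audiogram[f])
            for f in personal_audiogram
        )
        return avg_diff + contour_diff

    return min(stored_profiles, key=score)   # cf. personal hearing loss profile 2205
```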
At operation 1004 (
At operation 1006 (
Referring to
Audio output device 104 can include an earphone processor 2320 and an earphone memory 2322. Earphone processor 2320 and earphone memory 2322 can perform functions similar to the functions performed by device processor 2302 and device memory 2304 described above. For example, audio signal device 102 can transmit one or more of audio input signal 404, hearing loss profiles, or level-and-frequency-dependent audio filters to earphone processor 2320, and audio output device 104 can use the received data in an enrollment process and/or an audio rendering process to generate audio output signal 406 using personal level-and-frequency dependent audio filter 402. More particularly, earphone processor 2320 may be configured to generate audio output signal 406 and present the signal for audio playback via the earphone speaker. Media system 100 may include several earphone components, although only a single earphone is shown in
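For illustration only, the rendering step that earphone processor 2320 (or device processor 2302) could perform, i.e., generating an output signal by applying level-and-frequency-dependent gains to an input block, might resemble the sketch below. The block-based FFT approach, the level estimate, and the gain_db lookup are assumptions made for the sketch and do not represent the disclosed filter design.

```python
import numpy as np

# Hypothetical sketch of applying a level-and-frequency-dependent audio filter
# to an input block to produce an output block (cf. audio input signal 404 and
# audio output signal 406). gain_db(freq_hz, level_db) stands in for the
# selected personal filter: it returns more gain where the hearing loss
# profile indicates more loss, and typically less gain at higher input levels.

def render_block(block, sample_rate, gain_db, eps=1e-12):
    block = np.asarray(block, dtype=float)
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
    # Estimate the input level of the block (dBFS) to index the gain lookup.
    level_db = 20.0 * np.log10(np.sqrt(np.mean(block ** 2)) + eps)
    # Apply a frequency-dependent gain chosen for this input level.
    gains = np.array([10.0 ** (gain_db(f, level_db) / 20.0) for f in freqs])
    return np.fft.irfft(spectrum * gains, n=len(block))
```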
As described above, one aspect of the present technology is the gathering and use of data available from various sources to perform personalized media enhancement. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to contact or locate a specific person. Such personal information data can include demographic data, location-based data, telephone numbers, email addresses, TWITTER ID's, home addresses, data or records relating to a user's health or level of fitness (e.g., audiograms, vital signs measurements, medication information, exercise information), date of birth, or any other identifying or personal information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to perform personalized media enhancement. Accordingly, use of such personal information data enables users to have an improved audio listening experience. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used to provide insights into a user's general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. Such policies should be easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection/sharing should occur after receiving the informed consent of the users. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Hence different privacy practices should be maintained for different personal data types in each country.
Despite the foregoing, the present disclosure also contemplates aspects in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of personalized media enhancement, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing specific identifiers (e.g., date of birth, etc.), controlling the amount or specificity of data stored (e.g., collecting location data at a city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed aspects, the present disclosure also contemplates that the various aspects can also be implemented without the need for accessing such personal information data. That is, the various aspects of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, the enrollment process can be performed based on non-personal information data or a bare minimum amount of personal information, such as an approximate age of the user, other non-personal information available to the device processors, or publicly available information.
To aid the Patent Office and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants wish to note that they do not intend any of the appended claims or claim elements to invoke 35 U.S.C. 112(f) unless the words “means for” or “step for” are explicitly used in the particular claim.
In the foregoing specification, the invention has been described with reference to specific exemplary aspects thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
This application is a continuation of co-pending U.S. patent application Ser. No. 16/872,040, filed May 11, 2020, which claims the benefit of priority of U.S. Provisional Patent Application No. 62/855,951, filed Jun. 1, 2019, and incorporates herein by reference these applications.