AUDITORY HEALTH MONITORING SYSTEM WITH AURICULAR DEVICE AND PHYSIOLOGICAL SENSOR

Information

  • Patent Application
  • Publication Number: 20240404549
  • Date Filed: May 29, 2024
  • Date Published: December 05, 2024
Abstract
A system for monitoring auditory health of a user can include an auricular device that can comprise a microphone configured to generate OAE audio data responsive to detecting one or more otoacoustic emissions originating from the inner ear of the user. One or more hardware processors associated with the auricular device can access the OAE audio data from the microphone; access physiological data of the user originating from a physiological sensor; determine a feature of the physiological data in a time-domain, the feature comprising a value of the physiological data exceeding a threshold; adjust a portion of the OAE audio data based on the feature of the physiological data, the portion of the OAE audio data corresponding to the feature of the physiological data in the time-domain; and determine one or more physiological characteristics of the user based on the OAE audio data.
Description
CROSS REFERENCE TO RELATED APPLICATIONS

Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57 for all purposes and for all that they contain.


TECHNICAL FIELD

The present disclosure relates to the field of physiological monitoring.


BACKGROUND

Auricular devices, such as earbuds, can be worn in a user's ear and can emit audio into the user's ear. Such audio emitted from an auricular device can include audio playback, such as music, for the user's enjoyment and can also include audio that is part of an auditory diagnostic assessment. Otoacoustic emissions emanating from a user's inner ear can be detected to analyze the user's auditory health. The quality of otoacoustic emissions can be corrupted by noise including noise originating from physiological processes occurring within the user's body such as cardiac activity, respiratory activity, and digestive tract activity. An inner ear's ability to generate consistent otoacoustic emissions can be affected by physiological changes such as changes in blood oxygen saturation, changes in body temperature, and changes in intracranial pressure.


SUMMARY

Disclosed herein is a system for monitoring auditory health of a user. The system can comprise an auricular device and one or more hardware processors associated with the auricular device. The auricular device can comprise a microphone oriented toward an inner ear of the user when the auricular device is worn by the user and configured to generate OAE audio data responsive to detecting one or more otoacoustic emissions originating from the inner ear of the user. The one or more hardware processors can be configured to: access the OAE audio data originating from the microphone; access physiological data of the user originating from a physiological sensor coupled to the user; detect a feature of the physiological data in a time-domain based on a value of the physiological data exceeding a threshold, the feature of the physiological data corresponding to a physiological event generating noise during the one or more otoacoustic emissions, the noise having a noise frequency within a threshold of an OAE frequency of the one or more otoacoustic emissions; adjust a portion of the OAE audio data responsive to detecting the feature of the physiological data to reduce interference of the noise from the physiological event on the OAE audio data, the portion of the OAE audio data corresponding to the feature of the physiological data in the time-domain; and determine one or more physiological characteristics of the user based on the OAE audio data.


In some implementations, the physiological data includes one or more of PPG data originating from an optical sensor or ECG data originating from an ECG sensor comprising an electrode.


In some implementations, the one or more hardware processors are configured to: generate one or more waveforms from the physiological data including an ECG waveform, a respiration waveform, or a pulse waveform; and determine the feature of the physiological data from the one or more waveforms.


In some implementations, the auricular device further comprises a speaker configured to emit an audio stimulus toward the inner ear of the user, the audio stimulus configured to evoke the one or more otoacoustic emissions, wherein the one or more hardware processors are configured to adjust an operation of the speaker based on the physiological data to control the one or more otoacoustic emissions.


In some implementations, adjusting the operation of the speaker includes adjusting a time the speaker emits the audio stimulus to avoid evoking the one or more otoacoustic emissions during the physiological event.


In some implementations, adjusting the operation of the speaker includes adjusting a stimulus frequency of the audio stimulus to adjust the OAE frequency of the one or more otoacoustic emissions to avoid interference with the noise from the physiological event.


In some implementations, adjusting the operation of the speaker includes adjusting a stimulus amplitude of the audio stimulus to evoke the one or more otoacoustic emissions with an OAE amplitude being greater than a noise amplitude of the noise from the physiological event.


In some implementations, the one or more hardware processors are configured to: estimate a future physiological event based on the physiological data; and adjust the operation of the speaker based on the estimated future physiological event.


In some implementations, adjusting the portion of the OAE audio data includes discarding one or more values of the OAE audio data.


In some implementations, adjusting the portion of the OAE audio data includes assigning one or more weights to the OAE audio data.


In some implementations, adjusting the portion of the OAE audio data includes binning the OAE audio data.


In some implementations, the physiological sensor is integrated with the auricular device.


In some implementations, the one or more physiological characteristics of the user includes a hearing sensitivity of the user.


In some implementations, the one or more physiological characteristics of the user includes an oxygenation of the user.


In some implementations, the one or more physiological characteristics of the user includes an intracranial pressure of the user.


In some implementations, the one or more hardware processors are configured to generate a hearing transfer function from the OAE audio data to implement user-specific audio playback.


Disclosed herein is a system for monitoring auditory health of a user. The system can comprise an auricular device comprising: an internal microphone oriented toward an inner ear of the user when the auricular device is worn by the user and configured to generate OAE audio data responsive to detecting one or more otoacoustic emissions originating from the inner ear of the user; an external microphone oriented away from the user when the auricular device is worn by the user and configured to generate external audio data responsive to detecting external audio originating from outside the user's body; and an inertial sensor configured to generate internal audio data responsive to detecting internal audio conducted through the user's body. The system can further comprise one or more hardware processors associated with the auricular device configured to: access the OAE audio data originating from the internal microphone; access the external audio data originating from the external microphone; access the internal audio data originating from the inertial sensor; determine an adaptive filter based on the external audio data and the internal audio data; suppress noise from the OAE audio data based on the adaptive filter, the noise comprising the internal audio or the external audio; and determine one or more physiological characteristics of the user based on the OAE audio data.


In some implementations, the one or more hardware processors are configured to suppress the noise from the OAE audio data by applying a filter to the OAE audio data.


In some implementations, the filter includes a high pass filter.


In some implementations, the inertial sensor includes a vibration sensor.


In some implementations, the inertial sensor includes a bone conduction microphone.


In some implementations, the one or more physiological characteristics of the user includes a hearing sensitivity of the user.


In some implementations, the one or more physiological characteristics of the user includes an oxygenation of the user.


In some implementations, the one or more physiological characteristics of the user includes an intracranial pressure of the user.


In some implementations, the one or more hardware processors are configured to generate a hearing transfer function from the OAE audio data to implement user-specific audio playback.


In some implementations, the one or more hardware processors are configured to determine the adaptive filter based on determining one or more of a transfer function, a frequency response, and/or an impulse response.


In some implementations, the one or more hardware processors are configured to select the adaptive filter from predetermined filters based on a decibel level of the internal audio data and a decibel level of the external audio data.


Various combinations of the above and below recited features, embodiments, implementations, and aspects are also disclosed and contemplated by the present disclosure.


Additional implementations of the disclosure are described below in reference to the appended claims, which may serve as an additional summary of the disclosure.


In various implementations, systems and/or computer systems are disclosed that comprise a computer-readable storage medium having program instructions embodied therewith, and one or more processors configured to execute the program instructions to cause the systems and/or computer systems to perform operations comprising one or more aspects of the above-and/or below-described implementations (including one or more aspects of the appended claims).


In various implementations, computer-implemented methods are disclosed in which, by one or more processors executing program instructions, one or more aspects of the above-and/or below-described implementations (including one or more aspects of the appended claims) are implemented and/or performed.


In various implementations, computer program products comprising a computer-readable storage medium are disclosed, wherein the computer-readable storage medium has program instructions embodied therewith, the program instructions executable by one or more processors to cause the one or more processors to perform operations comprising one or more aspects of the above-and/or below-described implementations (including one or more aspects of the appended claims).





BRIEF DESCRIPTION OF THE DRAWINGS

Various implementations will be described hereinafter with reference to the accompanying drawings. These implementations are illustrated and described by example only, and are not intended to limit the scope of the disclosure. In the drawings, similar elements may have similar reference numerals.



FIG. 1 illustrates an example auricular device worn by a user.



FIG. 2 illustrates an example implementation of an auricular device in communication with various other devices in a network.



FIG. 3 is a block diagram illustrating an example implementation of a physiological monitoring system with an auricular device.



FIG. 4 illustrates various portions of an inner ear of a subject.



FIG. 5A is a diagram illustrating example audiometry data from an OAE test.



FIG. 5B is a diagram illustrating an example DP-gram.



FIG. 5C is a diagram illustrating an example DP-gram.



FIG. 6 is a diagram illustrating an example adaptive noise suppressor.



FIG. 7 is a flowchart illustrating an example process of adaptively filtering OAE audio data.



FIG. 8 is a flowchart illustrating an example process of adjusting OAE audio data based on physiological data.





DETAILED DESCRIPTION

The present disclosure will now be described with reference to the accompanying figures, wherein like numerals may refer to like elements throughout. The following description is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. It should be understood that steps within a method may be executed in different order without altering the principles of the present disclosure. Furthermore, the devices, systems, and/or methods disclosed herein can include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the devices, systems, and/or methods disclosed herein. Additionally, the structures, systems, and/or devices described herein may be embodied as integrated components or as separate components.



FIG. 1 illustrates an example auricular device 100 secured to an ear 102 of a user 101 (which may also be referred to as a “subject,” “wearer,” or “patient”). The auricular device 100 can be secured to any of a number of portions and/or locations relative to the ear 102. For example, the auricular device 100 can be secured to, placed adjacent, and/or positioned to be in contact with a pinna, a concha, an ear canal, a tragus, an antitragus, a helix, an antihelix, and/or another portion of the ear 102. In some implementations, the auricular device 100 may be an over-the-ear earcup or headphone. In some implementations, the auricular device 100 may be a hearing aid. In some implementations, the auricular device 100 may be an earbud. In some implementations, two auricular devices 100 can be secured to the two ears of the user 101 (one per ear). The user 101 may be a newborn baby. The auricular device 100 may be used during a diagnostic assessment of acoustic function of the user 101.


Any of the auricular devices described herein and/or components and/or features of the auricular devices described herein can be integrated into a wearable device that secures to another portion of a user's body. For example, any of the components and/or features of the auricular devices described herein can be integrated into a wearable device that can be secured to a head, chest, neck, leg, ankle, wrist, or another portion of the body. As another example, any of the components and/or features of the auricular devices described herein can be integrated into glasses and/or sunglasses that a user can wear. As another example, any of the components and/or features of the auricular devices described herein can be integrated into a device (for example, a band) that a user can wear around their neck.



FIG. 2 illustrates an example implementation of the auricular device 100 in communication with various other devices and/or systems over network 210. The network 210 can include one or more communications networks. The network 210 can include a plurality of computing devices configured to communicate with one another. The network 210 can include the Internet. The network 210 can include a cellular network. The network 210 can include any combination of a body area network (e.g., implementing human body communication with capacitive coupling via the tissue of a user's body), a local area network (“LAN”) and/or a wide area network (“WAN”), or the like. Accordingly, various computing devices can communicate with one another directly or indirectly via any appropriate communications links and/or networks, such as network 210 (e.g., one or more communications links, one or more computer networks, one or more wired or wireless connections, the Internet, any combination of the foregoing, and/or the like).


Communication over the network 210 can include a variety of communication protocols, including wired communication, wireless communication, wire-like communication, near-field communication (such as inductive coupling between coils of wire or capacitive coupling between conductive electrodes), and far-field communication (such as transferring energy via electromagnetic radiation (e.g., radio waves)). Example communication protocols can include Wi-Fi, Bluetooth®, ZigBee®, Z-wave®, cellular telephony, such as long-term evolution (LTE) and/or 1G, 2G, 3G, 4G, 5G, etc., infrared, radio frequency identification (RFID), satellite transmission, inductive coupling, capacitive coupling, proprietary protocols, combinations of the same, and the like.


The auricular device 100 can communicate with a server 201. The server 201 can comprise one or more computing devices including one or more hardware processors. The server 201 can comprise program instructions configured to cause the server 201 to perform one or more operations when executed by the hardware processors. The server 201 can include, and/or have access to (e.g., be in communication with and/or host) a storage device, database, or system which can include any computer readable storage medium and/or device (or collection of data storage mediums and/or devices), including, but not limited to, one or more memory devices that store data, including without limitation, dynamic and/or static random-access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like. In some implementations, the server 201 can include and/or be in communication with a hosted storage environment that includes a collection of physical data storage devices that may be remotely accessible and may be rapidly provisioned as needed (commonly referred to as “cloud” storage). Data stored in and/or accessible by the server 201 can include physiological data and/or audio data. In some implementations, the server 201 can comprise and/or be in communication with an electronic medical records (EMR) system. An EMR can comprise a proprietary EMR. An EMR can comprise an EMR associated with a hospital. An EMR can store data including medical records.


The auricular device 100 can communicate with an electronic device 203. The electronic device 203 may be a smartphone or other mobile device such as a tablet, a PDA, a computer, a laptop, or the like.


The auricular device 100 can communicate with one or more physiological sensor devices such as a finger-worn sensor device 205A, a chest-worn sensor device 205B, a wrist-worn sensor device 205C which may be a smartwatch, and a foot-worn sensor device 205D. Any of the physiological sensor devices 205A-205D can include one or more physiological sensors configured to generate physiological data of physiological parameters. Such sensors can include acoustic sensors, optical sensors, inertial sensors, temperature sensors, electrical sensors, voltage sensors, impedance sensors, etc. Such sensors can include an oximeter. Such sensors can include a photoplethysmography (PPG) sensor configured to measure volumetric variations in blood circulation and derive one or more parameters therefrom, such as pulse rate, blood pressure, respiration rate, cardiac output, perfusion index, pleth variability index, PPG waveform data, blood oxygen saturation, etc. Such sensors can include one or more optical emitters configured to emit optical radiation of a plurality of wavelengths, which can include visible light. Such sensors can include one or more optical detectors configured to detect optical radiation attenuated by the tissue of a subject (which may have been emitted by optical emitters) and generate data relating to the pulsatile characteristics of the subject, including blood oxygen saturation, hydration, hemoglobin content, etc. Such sensors can include electrocardiogram (ECG) sensors, including one or more electrodes, configured to measure electrical activity of the subject, such as cardiac signals. Such sensors can include electroencephalography (EEG) sensors. Such sensors can measure and/or generate data relating to respiration rate, blood oxygen saturation (e.g., SpO2), heart rate, pulse rate, skin temperature, core body temperature, spatial orientation, or the like. Such sensors can include inertial sensors (e.g., accelerometer and/or gyroscope) configured to measure linear and/or angular acceleration indicative of a motion and/or posture of a subject. Such sensors can include temperature sensors (e.g., thermistor, infrared temperature sensor) configured to measure temperature of a subject such as skin surface temperature and/or core body temperature.


The auricular device 100 can be configured to transmit and/or receive data over the network 210, such as physiological data of a user, audio playback data, audiometry data, or the like. For example, the auricular device 100 can receive physiological data from any of the physiological sensor devices 205A-205D. As another example, the auricular device 100 can receive instructions or information associated with employing an audio playback mode or hearing aid mode or in utilizing a particular hearing transfer function from the server 201, electronic device 203, and/or the wrist-worn sensor device 205C.



FIG. 3 illustrates a block diagram of an example auricular device 300 implemented with a user's ear. Various emissions can be generated within a user in response to an audio stimulus 310. For example, the user's auditory system can generate an otoacoustic emission (OAE) 320 and/or electrical signal 330 responsive to audio stimulus 310 being introduced to the user's ear. The cochlea can generate the OAE 320. In some implementations, the cochlea can generate the OAE 320 spontaneously without an audio stimulus. The OAE 320 can include one or more of a distortion product otoacoustic emission (DP-OAE), a spontaneous otoacoustic emission (S-OAE), and/or a transient evoked otoacoustic emission (TE-OAE).


The electrical signal 330 can include an electrical voltage originating from the user's inner ear and/or brain (e.g., the auditory nerve) in response to audio stimulus 310. The electrical signal 330 can be an auditory evoked potential (AEP). The electrical signal 330 can be measured with one or more physiological sensors such as electrodes to generate audiometry data, such as auditory brainstem response (ABR) data, mid-latency response data, cortical response data, acoustic change complex data, auditory steady state response (ASSR) data, complex auditory brainstem response data, electrocochleography (ECoG) data, cochlear microphonic data, cochlear neurophonic AEPs data, electroencephalography (EEG) data, or the like. The electrical signal 330 can be measured by one or more sensors placed on the scalp or skin of the user.


The auricular device 300 can be an earpiece such as a hearing aid, a wired earbud, a wireless earbud, an earpiece integrated into an earcup, an osseointegrated auditory prosthesis, a cochlear implant, etc. The auricular device 300 can include a hardware processor 301, a communication component 303, storage 305, a power source 307, one or more external microphones 309, an internal microphone 311, one or more speakers 313, one or more physiological sensors 315, and an inertial sensor 317. In some implementations, the auricular device 300 may include less than all of the components shown in FIG. 3. For example, the auricular device 300 may not include an external microphone 309. In some implementations, the auricular device 300 can include multiple components of a single component shown. For example, the auricular device 300 can include a plurality of speakers 313 and/or internal microphones 311. The auricular device 300 can comprise one or more components as a single integrated unit, such as within a housing of the auricular device 300. In some implementations, one or more components may be remote from the auricular device 300. For example, one or more of the physiological sensors 315 can be embodied in a separate device remote from the auricular device 300, such as any of the example physiological sensor devices 205A-205D shown and/or described herein.


The hardware processor 301 can comprise one or more integrated circuits. The hardware processor 301 can comprise and/or have access to memory. The hardware processor 301 can comprise and/or be embodied as one or more chips, controllers such as microcontrollers (MCUs), and/or microprocessors (MPUs). The hardware processor 301 can comprise a central processing unit (CPU). In some implementations, the hardware processor 301 can be embodied as a system-on-a-chip (SoC). The hardware processor 301 can be configured to implement an operating system which can allow multiple processes to execute simultaneously. The hardware processor 301 can be configured to execute program instructions to cause the auricular device 300 to perform one or more operations. The hardware processor 301 can be configured, among other things, to process data, execute instructions to perform one or more functions, and/or control the operation of the auricular device 300 or components thereof. For example, the hardware processor 301 can process physiological data obtained from physiological sensors and can execute instructions to perform functions related to storing and/or transmitting such physiological data. In some implementations, the hardware processor 301 can be remote to the auricular device 300. The hardware processor 301 can receive and process data that was collected by the external microphone 309, the internal microphone 311, the inertial sensor 317 and/or the physiological sensor 315. The hardware processor 301 can access data as it is generated in real-time and/or can access data stored in storage 305, such as historical data previously generated.


The hardware processor 301 can execute one or more processes to monitor an auditory health of a user. The hardware processor 301 can generate audiometry data of a user, such as by playing an input audio signal comprising varying amplitudes at a single frequency. The input audio signal can include a test audio signal, and/or a content audio signal comprising music, speech, environment sounds, animal sounds, etc. For example, the input audio signal can include the content audio signal with an embedded test audio signal. The hardware processor 301 can continue to refine audiometry data associated with a user (e.g., auditory profile), as the user continues to listen to audio.
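By way of illustration only, one way such a varying-amplitude, single-frequency assessment could be structured is a descending staircase. The following Python sketch is not part of the disclosure; the plays_tone callback (which would emit the tone and report whether a response was detected) and all parameter values are hypothetical assumptions:

```python
def staircase_threshold(plays_tone, freq_hz: float, start_db: float = 60.0,
                        step_db: float = 5.0, floor_db: float = -10.0):
    """Descending staircase at a single frequency: lower the tone level
    until no response is detected, then report the quietest level heard.

    plays_tone(freq_hz, level_db) -> bool is a hypothetical callback that
    emits the tone and reports whether a response (e.g., an OAE above the
    noise floor) was detected.
    """
    level = start_db
    last_heard = None  # stays None if no response even at start_db
    while level > floor_db and plays_tone(freq_hz, level):
        last_heard = level
        level -= step_db
    return last_heard
```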


The hardware processor 301 can implement one or more audiometry tests to assess the operation of an inner ear of a user, which may indicate an auditory health of the user. The hardware processor 301 can implement an otoacoustic emissions (OAE) test, such as a stimulus frequency OAE test, swept-tone OAE test, transient evoked OAE test, distortion product OAE test, or pulsed OAE test. The hardware processor 301 can access audio data originating from microphones (e.g., internal microphone 311) to measure otoacoustic emissions (OAE) from the user's ear (e.g., in the external ear canal). The hardware processor 301 can process the OAE audio data from the microphones (e.g., internal microphone 311 during an audiometric test) to generate audiometry data (e.g., processed OAE audio data and/or stimulus audio data). As an example, the hardware processor 301 can determine and/or generate any of the audiometry data represented in the graphs of FIGS. 5A-5C.


The hardware processor 301 can determine a user's hearing transfer function based on audiometry data. For example, the hardware processor 301 can compare the measured OAEs with response ranges from normal-hearing subjects and impaired-hearing subjects to develop the frequency dependent hearing transfer function for each ear of the user. A hearing transfer function can correlate an actual amplitude or intensity of sound produced by an audio signal to a user-perceived amplitude or intensity. A hearing transfer function can correlate actual amplitudes or intensities to user-perceived amplitudes or intensities for given frequencies of an audio signal. As an example, a hearing transfer function can indicate that an input audio signal that produces a sound at 1000 Hz at 70 dB is perceived by the user as being at 25 dB. A hearing transfer function can include data to facilitate achieving a certain audio output for a given audio input. For example, the hearing transfer function can include data relating to filters, gain, suppression, phase shift, latency, etc. to apply to an audio signal.
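For illustration, the frequency-dependent correction implied by such a hearing transfer function could be applied as per-band gains in the frequency domain, as in the sketch below. The band centers, gain values, and names are hypothetical assumptions, not data from the disclosure:

```python
import numpy as np

# Hypothetical hearing transfer function: gain (dB) per frequency band that
# would make the user perceive each band at its intended level, e.g. derived
# by comparing measured OAEs with normal-hearing reference ranges.
hearing_transfer_db = {250: 0.0, 500: 2.0, 1000: 5.0, 2000: 8.0, 4000: 12.0}

def apply_transfer_function(samples: np.ndarray, rate: int) -> np.ndarray:
    """Scale each FFT bin by the gain of its nearest band center."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    bands = np.array(sorted(hearing_transfer_db))
    nearest = bands[np.abs(freqs[:, None] - bands[None, :]).argmin(axis=1)]
    gains_db = np.array([hearing_transfer_db[int(b)] for b in nearest])
    spectrum *= 10.0 ** (gains_db / 20.0)  # convert dB to linear amplitude
    return np.fft.irfft(spectrum, n=len(samples))

# Example: a 1 kHz tone, boosted by the +5 dB the hypothetical table
# prescribes at 1 kHz.
rate = 16_000
t = np.arange(rate) / rate
corrected = apply_transfer_function(0.1 * np.sin(2 * np.pi * 1000 * t), rate)
```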


The hardware processor 301 can access and process physiological data originating from the one or more physiological sensors 315. For example, the hardware processor 301 can implement electrocardiography processing techniques on voltage data originating from electrodes to analyze the user's cardiac activity. For example, the hardware processor 301 can generate ECG waveform data from electrode voltage data. An ECG waveform can include a trend line in the time domain having peaks and valleys with various voltage amplitudes representing cardiac electrical activity. The hardware processor 301 can analyze features of the ECG waveform to determine cardiac characteristics, such as heart rate and/or arrhythmias. ECG waveform characteristics can include peak amplitudes, valley amplitudes, RR intervals, PR intervals, QRS intervals, QT intervals, or the like.
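A minimal sketch of this kind of threshold-based, time-domain feature detection follows; the threshold and refractory window are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def detect_r_peaks(ecg: np.ndarray, rate: int, threshold: float = 0.6,
                   refractory_s: float = 0.25) -> np.ndarray:
    """Crude R-peak detector: flag samples whose amplitude exceeds a
    threshold, keeping at most one peak per refractory window."""
    candidates = np.flatnonzero(ecg > threshold)
    peaks, last = [], -np.inf
    for i in candidates:
        if i - last >= refractory_s * rate:  # skip samples from the same beat
            peaks.append(i)
            last = i
    return np.asarray(peaks, dtype=int)

def heart_rate_bpm(peaks: np.ndarray, rate: int) -> float:
    """Mean heart rate from RR intervals (differences between R-peaks)."""
    rr_s = np.diff(peaks) / rate
    return 60.0 / rr_s.mean()
```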


The hardware processor 301 can access and process impedance data originating from electrodes on the user. The hardware processor 301 can generate impedance waveform data (in the time domain and/or frequency domain) from the impedance data. The hardware processor 301 can determine one or more physiological characteristics from the impedance data such as respiration rate, respiration pressure, and respiration volume.


The hardware processor 301 can access and process photoplethysmography (PPG) data originating from a PPG sensor such as an oximeter. The hardware processor 301 can determine characteristics of pulsatile blood flow from the PPG data such as pulse rate, blood oxygen saturation, blood pressure, respiration rate, respiration volume, etc.


The hardware processor 301 can access and process other physiological data such as temperature data originating from a thermistor and/or infrared sensor, and inertial data originating from an inertial sensor which can indicate user motion, posture, orientation, etc.


The communication component 303 can facilitate communication (via wireless, wired, and/or wire-like connection) between the auricular device 300 (and/or components thereof) and separate devices, such as separate monitoring hubs, monitoring devices, sensors, systems, servers, or the like. For example, the communication component 303 can be configured to allow the auricular device 300 to wirelessly communicate with other devices, systems, and/or networks over any of a variety of communication protocols, including near-field communication protocols and far-field communication protocols. Near-field communication protocols, which may also be referred to as non-radiative communication, can implement inductive coupling between coils of wire to transfer energy via magnetic fields (e.g., NFMI). Near-field communication protocols can implement capacitive coupling between conductive electrodes to transfer energy via electric fields. Far-field communication protocols, which may also be referred to as radiative communication, can transfer energy via electromagnetic radiation (e.g., radio waves). The communication component 303 can communicate via any variety of communication protocols such as Wi-Fi, Bluetooth®, ZigBee®, Z-wave®, cellular telephony, such as long-term evolution (LTE) and/or 1G, 2G, 3G, 4G, 5G, etc., infrared, radio frequency identification (RFID), satellite transmission, inductive coupling, capacitive coupling, proprietary protocols, combinations of the same, and the like. In some implementations, communication component 303 can implement human body communication (HBC) which can include capacitively coupling a transmitter and receiver via an electric field propagating through the human body. The communication component 303 can allow data and/or instructions to be transmitted and/or received to and/or from the auricular device 300 and separate computing devices. The communication component 303 can be configured to transmit and/or receive (for example, wirelessly) processed and/or unprocessed physiological data with separate computing devices including physiological sensors, other monitoring hubs, remote servers, or the like. As another example, the communication component 303 can be configured to transmit and/or receive (for example, wirelessly) physiological data, audiometry data, and/or playback audio data, with separate computing devices including physiological sensors, other auricular devices, remote servers, or the like. In some implementations, communication component 303 can transfer power required for operation of a computing device. The communication component 303 can be embodied in one or more components that are in communication with each other. The communication component 303 can include one or more of: transceivers, antennas, transponders, radios, emitters, detectors, coils of wire (e.g., for inductive coupling), and/or electrodes (e.g., for capacitive coupling). The communication component 303 can include one or more integrated circuits, chips, controllers, processors, or the like, such as a Wi-Fi chip and/or a Bluetooth chip.


The storage 305 can include any computer readable storage medium and/or device (or collection of data storage mediums and/or devices), including, but not limited to, one or more memory devices that store data, including without limitation, dynamic and/or static random-access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like. The storage 305 can store data including processed and/or unprocessed physiological data obtained from physiological sensors, and/or audio data originating from microphones, for example. The storage 305 can store program instructions that when executed by the hardware processor 301 cause the auricular device 300 to perform one or more operations.


The power source 307 can provide power for components of the auricular device 300. The power source 307 can include a battery. In some implementations, the power source 307 can be external to the auricular device 300. For example, the auricular device 300 can include or can be configured to connect to a cable which can itself connect to an external power source to provide power to the auricular device 300.


The external microphone 309 can be embodied as part of the auricular device 300. The external microphone 309 can be located within the auricular device 300. The external microphone 309 can be oriented to capture acoustic signals that are external to an ear of a user, such as when the external microphone 309 (or the auricular device 300) is donned by the user. The external microphone 309 can be oriented away from a user such as away from an ear of a user. The hardware processor 301 can receive audio data generated by the external microphone 309. The hardware processor 301 can use the audio data to perform noise suppression, such as active noise suppression or adaptive noise suppression.


The internal microphone 311 can be embodied as part of the auricular device 300. The internal microphone 311 can generate audio data (which may be referred to as OAE data) responsive to detecting audio originating from an ear of a user, such as OAE 320 originating from an inner ear and travelling along an ear canal of the user. The internal microphone 311 can be oriented to capture audio within an ear of a user, such as within an ear canal, such as when the internal microphone 311 (or the auricular device 300) is donned by the user. The internal microphone 311 can be oriented toward an ear canal of a user. Audio captured by the internal microphone 311 can include audio resulting from activity of hair cells, such as outer hair cells in the cochlea of the user. The audio can include audio resulting from movement of the tectorial membrane of the user. The internal microphone 311 can detect evoked responses such as an acoustic response signal evoked in response to an acoustic stimulus signal. The hardware processor 301 can access audio data (e.g., OAE data) generated by the internal microphone 311 and can determine one or more physiological characteristics of the user from the audio data. For example, the hardware processor 301 can determine one or more inner ear characteristics of the user and/or audiometry data of the user.


The speakers 313 can be embodied as part of the auricular device 300. The speakers 313 can emit audio to an ear of a user. For example, the speakers 313 can emit audio stimulus 310 to evoke an acoustic response from the inner ear such as an OAE 320. The audio stimulus 310 can comprise a plurality of tones, such as two tones. The speakers 313 can emit sound based on an audio playback signal, such as music or media. The speakers 313 can emit a noise cancelling signal. The hardware processor 301 can generate instructions to control an operation of the speakers 313. The hardware processor 301 can transmit audio data to the speakers 313 for the speakers 313 to emit as audio. The speakers 313 can be configured to emit a range of audio frequencies. The speakers 313 can emit multiple frequencies or tones simultaneously. The speakers 313 can include a tweeter and/or a woofer.


The auricular device 300 can optionally include one or more sensors 315. The sensor 315 can generate physiological data of physiological parameters. The sensor 315 can include acoustic sensors, optical sensors, inertial sensors, temperature sensors, electrical sensors, voltage sensors, impedance sensors, etc. The sensor 315 can include an oximeter. The sensor 315 can include a photoplethysmography (PPG) sensor configured to measure volumetric variation in blood circulation and derive one or more parameters therefrom, such as pulse rate, blood oxygen saturation, respiration rate, etc. The sensor 315 can include one or more optical emitters configured to emit optical radiation of a plurality of wavelengths, which may include visible light. The sensor 315 can include one or more optical detectors configured to detect optical radiation attenuated by the tissue of a subject (which may have been emitted by optical emitters) and generate data relating to the pulsatile characteristics of the subject. The sensor 315 can include electrocardiogram (ECG) sensors, including one or more electrodes, configured to measure electrical activity of the subject, such as cardiac signals. The sensor 315 can include electroencephalography (EEG) sensors. The sensors 315 can detect an evoked response such as electrical signal 330 evoked in response to audio stimulus 310. As an example, the sensors 315 may detect electrical activity of the auditory nerve of a subject such as in response to emitting an acoustic signal into the ear of the subject to stimulate activity of the inner ear. In some implementations, the sensors 315 can collect one or more of electroencephalography (EEG) data, auditory brain stem response (ABR) data, electrocochleography (ECoG) data, evoked cortical response data, and/or auditory steady-state response (ASSR) data. The sensor 315 can measure and/or generate data relating to respiration rate, blood oxygen saturation (e.g., SpO2), heart rate, pulse rate, skin temperature, core body temperature, spatial orientation, or the like. The sensor 315 can include inertial sensors (e.g., accelerometer and/or gyroscope) configured to measure linear and/or angular acceleration indicative of a motion and/or posture of a subject. The sensor 315 can include temperature sensors (e.g., thermistor, infrared sensor) configured to measure temperature of a subject such as skin surface temperature and/or core body temperature.


The inertial sensor 317 can include one or more of a motion sensor, vibration sensor, accelerometer, gyroscope, force sensor, and/or a bone conduction microphone. The inertial sensor 317 can include a transducer configured to convert a signal in one form of energy into another form of energy. The inertial sensor 317 can generate inertial data comprising an electrical signal responsive to detecting internal audio 340 conducted through and/or originating from within the user's body. Internal audio 340 can include kinetic energy such as mechanical vibrations and/or a pressure wave conducted through the user's body tissues. Internal audio 340 can originate from activity of a vocal cord of the subject, activity of respiratory airways of the subject, air movement through respiratory airways of the subject, speaking, breathing, coughing, chewing, swallowing, head movement of the subject, jaw movement of the subject, and/or the like. Inertial data generated by the inertial sensor 317 can indicate one or more frequencies of internal audio 340 including low frequencies, such as frequencies of less than 250 Hz, less than 500 Hz, less than 750 Hz, less than 1,000 Hz, less than 1,250 Hz, less than 1,500 Hz, less than 1,750 Hz, less than 2,000 Hz, etc. The processor 301 can determine frequencies of internal audio 340 from the inertial data generated by inertial sensor 317.
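One plausible way the processor could quantify how much of the inertial data's energy falls within these low-frequency ranges is a simple spectral calculation, sketched below under our own assumptions (the cutoff value is illustrative):

```python
import numpy as np

def low_band_fraction(inertial: np.ndarray, rate: int,
                      cutoff_hz: float = 2000.0) -> float:
    """Fraction of the inertial signal's spectral energy below cutoff_hz.

    A value near 1.0 suggests the detected internal audio (chewing,
    swallowing, breathing, etc.) is concentrated at low frequencies.
    """
    spectrum = np.abs(np.fft.rfft(inertial)) ** 2
    freqs = np.fft.rfftfreq(len(inertial), d=1.0 / rate)
    total = spectrum.sum()
    return float(spectrum[freqs < cutoff_hz].sum() / total) if total else 0.0
```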



FIG. 4 is a perspective cutaway view of example portions of an inner ear 400 of a subject such as a cochlea. The portions of the inner ear shown in FIG. 4 may include portions of a cochlea. The portions of the inner ear shown in FIG. 4 may include portions of an organ of Corti. As shown, the inner ear can include an auditory nerve 401, one or more inner hair cells 403, one or more outer hair cells 405, and a tectorial membrane 407. The tectorial membrane 407 may be connected to the inner hair cells 403 and the outer hair cells 405. The auditory nerve 401 may conduct electro-chemical signals between the brain and portions of the inner ear 400 such as the inner hair cells 403 and the outer hair cells 405. The auditory nerve 401 may conduct electro-chemical signals that may include a voltage differential that may be detected by a physiological sensor. The auditory nerve 401 may include efferent axons that conduct signals from the brain to the outer hair cells 405. The auditory nerve 401 may include afferent axons that conduct signals from the inner hair cells 403 to the brain.


The auditory nerve 401 may communicate signals to the outer hair cells 405 to cause the outer hair cells 405 to relax or contract. Movement of the outer hair cells 405, such as contraction or relaxation, may cause the tectorial membrane 407 to move. Movement of the tectorial membrane 407 may cause pressure differentials in the surrounding air which may cause an acoustic wave or acoustic signal. Acoustic signals generated in response to movement of the tectorial membrane 407 may include OAEs, such as DP-OAE, TE-OAE, S-OAE, etc. Movement of the tectorial membrane 407 may cause the inner hair cells 403 to relax or contract. Movement of the inner hair cells 403, such as contraction or relaxation, may cause the inner hair cells 403 to generate a signal to be conducted by the auditory nerve 401 to the brain.


As described herein, a sensor can detect inner ear physiological data, such as acoustic signals generated in response to movement of the tectorial membrane 407 and/or electrical signals generated in response to signals conducted along the auditory nerve 401. Inner ear physiological data may be used to determine an operational status of various components of the inner ear. For example, detecting and analyzing an acoustic signal generated in response to movement of the tectorial membrane 407, such as an OAE, may indicate whether the outer hair cells 405 are relaxing and/or contracting properly, which may indicate whether the inner ear 400 of the subject is properly amplifying certain frequencies, which may indicate any hearing loss of the subject. Detecting and analyzing electrical signals of the auditory nerve 401 may likewise indicate similar information.


The inner ear 400 may operate differently based on various circumstances or physiological factors. As an example, pressure changes within the inner ear 400 may change an operation of the inner ear 400, such as operation of the outer hair cells 405, which may affect movement of the tectorial membrane 407. Accordingly, one or more characteristics of an OAE, such as phase, may change as pressure changes within the inner ear 400.


Pressure within the inner ear may correspond to intracranial pressure. Intracranial pressure may change based on various factors such as position of the subject, cardiac activity of the subject, respiratory activity of the subject, physical exertion of the subject, acceleration of the subject, temperature of the subject, or the like. For example, as a subject stands up from lying down, the subject's intracranial pressure may change, which may affect operation of the inner ear 400, which may affect an OAE generated by the inner ear 400. As another example, pressure within the blood vessels of the subject will change during a cardiac cycle of the subject, which can affect intracranial pressure, which can in turn affect an OAE. As another example, intracranial pressure may change during a respiratory cycle of the subject, which can affect an OAE. Accordingly, measuring and analyzing inner ear data, such as an OAE, alone or in combination with other physiological data, may provide information relating to the intracranial pressure of the subject.


As another example of the inner ear 400 operating differently based on various circumstances, changes in blood oxygen saturation within the inner ear may affect an operation of the inner ear 400, such as operation of the outer hair cells 405, which may affect movement of the tectorial membrane 407. Accordingly, one or more characteristics of an OAE, such as phase, may change as blood oxygen concentration changes within the inner ear 400. The inner ear 400 may be sensitive to changes in blood oxygen concentration. For example, slight changes in blood oxygen levels within the inner ear 400 may greatly affect operation of the inner ear 400, which can affect an OAE generated by the inner ear 400. As an example, the auditory nerve 401 and/or outer hair cells 405 may operate differently during states of hypoxia, normoxia, and/or hyperoxia, which may result in changes in an OAE generated by the inner ear 400. Accordingly, measuring and analyzing inner ear data, such as an OAE, alone or in combination with other physiological data, may provide information relating to an oxygenation status of the subject, such as an intracranial oxygenation status. Using inner ear data to monitor oxygenation of the subject may help prevent the subject's oxygenation from reaching harmful levels, such as excessive hyperoxia, which may result in blindness in newborns, or excessive hypoxia, which may lead to tissue damage such as in the brain.



FIGS. 5A-5C illustrate graphs comprising example audiometry data. FIG. 5A illustrates graph 500 which shows example audiometry data from OAE measurements. The graph 500 shows data along an X-axis corresponding to frequency (kHz) and along a Y-axis corresponding to decibel (dB) also referred to as sound pressure level (SPL). Data point 501 corresponds to a first stimulus tone which may also be referred to as a first input frequency or f1. In this example, as shown by data point 501, the first input frequency (f1) has a frequency of about 1,642 Hz and a dB SPL of between 60 dB and 70 dB or about 65 dB. Data point 502 corresponds to a second stimulus tone which may also be referred to as a second input frequency or f2. In this example, as shown by data point 502, the second input frequency (f2) has a frequency of about 2,002 Hz and a dB SPL of between 50 dB and 60 dB or about 55 dB. In some implementations, a ratio of f2 to f1 (e.g., f2/f1) can be between about 1.10 and about 1.30, between about 1.20 and about 1.25, about 1.21, or about 1.22. In some implementations, the frequencies of f1 and/or f2 may change so long as the ratio of f2/f1 remains about 1.22, as described. f1 and f2 may have amplitudes (“L1”) and (“L2”), respectively. L1 may be separated from L2 by about a 10- to 15-dB level difference. A speaker, such as in an auricular device, can emit first and second frequencies (f1 and f2) into an ear canal of a user (e.g., simultaneously) to evoke an OAE from the user's cochlea. Data point 505 corresponds to an OAE (e.g., a distortion product OAE) which may have been evoked by input frequencies f1 and f2. In this example, as shown by data point 505, the OAE has a frequency of about 1,281 Hz and a dB SPL of between 10 dB and 20 dB or about 14.7 dB. The OAE can be a DP-OAE and can be referred to as the cubic difference tone. The OAE may have a frequency equal to about 2*f1−f2. For example, if f1=1000 Hz and f2=1200 Hz, then 2*f1−f2=800 Hz. A microphone, such as an internal microphone in an auricular device, can detect the OAE emanating from the user's inner ear.
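The stimulus-frequency relationships described above reduce to simple arithmetic, as in the sketch below (the helper name is hypothetical):

```python
def dp_oae_frequencies(f1_hz: float, ratio: float = 1.22):
    """Given stimulus tone f1 and the f2/f1 ratio (typically about 1.22),
    return the second tone f2 and the expected cubic difference tone
    2*f1 - f2 at which a DP-OAE is sought."""
    f2_hz = f1_hz * ratio
    return f2_hz, 2 * f1_hz - f2_hz

# Example from the text: f1 = 1000 Hz, f2 = 1200 Hz -> DP-OAE near 800 Hz.
f2, dp = dp_oae_frequencies(1000.0, ratio=1.2)
assert abs(dp - 800.0) < 1e-9

# And for graph 500: f1 ~ 1642 Hz, f2 ~ 2002 Hz -> DP-OAE near 1282 Hz,
# consistent with the ~1,281 Hz emission at data point 505.
f2, dp = dp_oae_frequencies(1642.0, ratio=2002.0 / 1642.0)
```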


Any of the computing devices shown and/or described herein, such as an auricular device, can generate the data shown in graph 500. For example, a hardware processor can generate the data in graph 500 responsive to a microphone in an auricular device detecting audio (either stimulus audio input into a user's ear or audio emanating from a user's inner ear). For example, the auricular device can measure OAE from a user's inner ear (e.g., cochlea). The data in graph 500 may correspond to a plurality of measurements. For example, the data in graph 500 may correspond to eight measurement samples over a period of time such as 1.5 seconds.


As used herein, “audiometry data” can include processed audio data originating from a microphone representative of an OAE and/or audio data corresponding to input stimulus audio introduced to an ear from a speaker. For example, audiometry data can include any of the data, or associated characteristics, shown and/or described in graph 500, such as the frequency, amplitude (dB SPL), latency (defined as the time interval between stimulus and OAE detection), and/or phase of any of the stimulus tones (f1, f2) or the measured OAEs. Audiometry data can include a noise floor or hearing threshold.



FIG. 5B illustrates a graph 550 representing an example DP-gram having data corresponding to a plurality of OAE measurements including the OAE measurement shown in graph 500 of FIG. 5A. Specifically, data point 551 corresponds to the graph 500 of FIG. 5A (e.g., the frequency of data point 551 corresponds to the frequency of data point 502 and the amplitude of data point 551 corresponds to the amplitude of data point 505). The DP-gram shown in graph 550 comprises data points 551-554, a noise floor 557, and a minimum OAE level 559. A frequency of data points 551-554 represents a frequency used for an input audio stimulus (e.g., f2) to evoke an OAE. For example, data point 551 has a frequency of about 2,002 Hz corresponding to the f2 shown by data point 502 in graph 500. A dB SPL of data points 551-554 represents a dB level of an OAE. For example, data point 551 has a dB SPL of about 14.7 dB corresponding to the dB SPL of data point 505 in graph 500. One or more measurement samples may be used to generate data points 551-554. Different numbers of measurement samples, and/or different lengths of measurement time, may be used to generate data points 551-554.


In this example, the minimum OAE level 559 is about −10 dB. OAEs with dB SPL below the minimum OAE level 559 may be considered to be insignificant. The noise floor 557 indicates that an OAE test should end because residual noise is so low that extending test time will not produce significant results.


As used herein, “audiometry data” may refer to any of the data, or associated characteristics, shown and/or described in graph 550. For example, audiometry data may refer to the frequencies or dB SPL of any of data points 551-554. Audiometry data can comprise an auditory profile of a user including information corresponding to a plurality of OAE measurements such as shown in the graph of FIG. 5B. In some cases, a DP-gram, or characteristics of a DP-gram, can be referred to as an auditory profile of a user.



FIG. 5C illustrates a graph 570 representing an additional example DP-gram comprising a plurality of OAE data points along line 571. The OAE data points along line 571 may be generated from two separate sites within the cochlea, a primary and a secondary site; the signals from each site constructively and destructively interfere with each other, creating crests and troughs in the response. The pattern of the specific locations (in the frequency domain) of the crests and troughs is called fine structure and is unique to each ear. The graph 570 comprises a plurality of data points along line 572 representing a noise floor.



FIG. 6 is a block diagram illustrating an example adaptive noise suppressor 600. The adaptive noise suppressor 600 can be implemented by one or more hardware processors, such as hardware processor 301 shown and/or described herein. The adaptive noise suppressor 600 can receive a signal 601. For example, the signal 601 may be detected by an acoustic sensor such as a microphone, such as internal microphone 311 shown and/or described herein. In some implementations, the signal 601 may correspond to inner ear physiological data as discussed herein, such as an OAE. The noise 603 can correspond to an acoustic or non-acoustic signal. For example, the noise 603 may be an electrical signal or a mechanical signal, such as vibrations. In some implementations, the noise 603 can originate from one or more sources such as internal to the user's body (represented by noise 603A) and external to the user's body (represented by noise 603B). The noise 603 may represent an undesired signal. The noise 603 may originate from a different source than the signal 601. As shown, the noise 603 may mix with the signal 601 such that the adaptive noise suppressor 600 may receive the signal 601 mixed with the noise 603.


The adaptive noise suppressor 600 can receive the noise 603. The adaptive noise suppressor 600 can receive the noise 603 separately from the signal 601. The noise 603 may be detected at an acoustic sensor, such as a microphone, such as external microphone 309 shown and/or described herein. The noise 603 may be detected at an inertial sensor, such as an accelerometer or bone conduction microphone.


The adaptive noise suppressor 600 can apply a filter 605 to the noise 603. The filter 605 can be an analogue filter. The filter 605 can be a digital filter. The filter 605 can be an adaptive filter. The adaptive noise suppressor 600 may adjust a transfer function associated with the filter 605. The adaptive noise suppressor 600 may adjust a frequency response associated with the filter 605. The adaptive noise suppressor 600 may adjust an impulse response associated with the filter 605. The adaptive noise suppressor 600 may adjust one or more filter weights associated with the filter 605. The adaptive noise suppressor 600 may adjust one or more characteristics of the filter 605 based on an adaptive algorithm. The adaptive noise suppressor 600 may adjust one or more characteristics of the filter 605 based on a feedback loop, such as based on an output of the adaptive noise suppressor 600.


The filter 605 may output a noise estimate (Nest). Nest may approximate the noise 603. The adaptive noise suppressor 600 may subtract the Nest from the signal 601 which may be corrupted by the presence of the noise 603. The adaptive noise suppressor 600 may design the filter 605 to approximate the noise 603 as closely as possible to avoid further distorting the signal 601 when subtracting Nest from the signal 601. The adaptive noise suppressor 600 may subtract the Nest from the signal 601 by applying a filter to the signal 601. The adaptive noise suppressor 600 may apply a high pass filter to the signal 601 to remove low frequencies from the signal 601. The adaptive noise suppressor 600 may determine a cutoff frequency to filter the signal 601 based on the Nest.


The adaptive noise suppressor 600 may output a signal estimate (Sest). The Sest may approximate the signal 601. The adaptive noise suppressor 600 may adjust one or more characteristics of the filter 605 based on the output Sest. The adaptive noise suppressor 600 may control the filtering and subtraction by an adaptive process because the characteristics of the channels transmitting the noise 603 from the noise source to the adaptive noise suppressor 600 may not be known and may be unpredictable. Accordingly, the filter 605 may be an adaptive filter, and the adaptive noise suppressor 600 may adjust one or more characteristics of the filter 605, such as transfer function, frequency response, and/or impulse response, to more closely approximate Nest to the noise 603 and to minimize the risk of distorting the signal 601.
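The following is a minimal, illustrative sketch of the FIG. 6 arrangement, assuming a least-mean-squares (LMS) update as the adaptive algorithm; the disclosure does not mandate LMS, and the tap count and step size (num_taps, mu) are hypothetical parameters chosen for illustration.

```python
import numpy as np

def lms_noise_suppressor(primary, reference, num_taps=32, mu=0.01):
    """primary: numpy array of signal 601 mixed with noise 603 (e.g., internal microphone).
    reference: numpy array of separately sensed noise 603 (e.g., external microphone).
    Returns s_est, the signal estimate after subtracting the noise estimate."""
    w = np.zeros(num_taps)                   # weights of adaptive filter 605
    s_est = np.zeros(len(primary))
    for k in range(num_taps, len(primary)):
        x = reference[k - num_taps:k][::-1]  # most recent reference samples first
        n_est = w @ x                        # noise estimate, Nest
        s_est[k] = primary[k] - n_est        # subtract Nest from signal 601
        w += 2 * mu * s_est[k] * x           # feedback loop adapts filter 605
    return s_est
```

Because the output Sest drives the weight update, the filter adapts toward whatever transfer characteristic the (unknown) noise channel happens to have, consistent with the adaptive process described above.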



FIG. 7 is a flowchart illustrating an example process 700 of adaptively filtering OAE audio data for subsequent processing. This process, in full or in part, can be executed by one or more hardware processors, whether associated with a single computing device or with multiple computing devices, including devices in remote or wireless communication with one another. By way of example, the one or more hardware processors executing process 700 can be associated with auricular device 100/300, a wearable device such as a watch, a mobile device such as a phone, a laptop, a tablet, and/or any of the electronic computing devices shown and/or described herein. The implementations of this process may vary and can involve modifications such as omitting blocks, adding blocks, and/or rearranging the order of execution of the blocks. Process 700 serves as an example and is not intended to restrict the present disclosure.


As shown in process 700, analyzing OAE data in combination with other audio data, such as internal audio data, can improve the determination of physiological characteristics of a subject. For example, internal audio from swallowing, breathing, talking, chewing, head movements, and jaw movements can be noise that corrupts the quality of an OAE. The process 700 may accordingly account for internal audio (or other noise sources such as external audio) when using OAE data to assess physiological characteristics of a user.


At block 701, a computing device (e.g., one or more hardware processors of a computing device executing program instructions) can access audio data generated by an internal microphone responsive to detecting one or more OAEs originating from an inner ear of a user and conducted through an ear canal via air. Such audio data can be referred to as OAE audio data. The OAE audio data can be in the form of an electrical signal and can originate from a microphone responsive to the microphone detecting the OAE. The OAE audio data can include analogue and/or digital data. The computing device can access the OAE audio data from storage. For example, the OAE audio data can include historical OAE audio data that was previously generated by a microphone prior to being accessed for processing. In some implementations, a processor can receive the OAE audio data from the microphone directly for processing without the OAE audio data being stored in storage. For example, the processor can access the OAE audio data in substantially real-time as the OAE audio data is generated by the microphone. The computing device can access the OAE audio data continuously. The OAE audio data can be a continuous stream of data. In some implementations, the OAE audio data can be generated by one or more physiological sensors, which can include electrodes, responsive to detecting electrical activity of the auditory system of the subject. Such electrical activity can relate to one or more of auditory brainstem response (ABR), mid-latency response, cortical response, acoustic change complex, auditory steady state response (ASSR), complex auditory brainstem response, electrocochleography (ECoG), cochlear microphonic, cochlear neurophonic AEPs, electroencephalography (EEG), or the like.


At block 703, the computing device can access audio data generated by an inertial sensor responsive to detecting internal audio conducted through the body of the user. Such audio data can be referred to as internal audio data. The internal audio data can correspond to audio generated from activity of a vocal cord of the subject (e.g., due to talking), activity of respiratory airways of the subject (e.g., such as air movement due to speaking, breathing, coughing, etc.), activity of an esophagus of a subject (e.g., due to chewing, swallowing, etc.), or other internal sounds which may arise from head movement of the subject, jaw movement of the subject, and/or the like. The internal audio data can include low frequencies, such as frequencies between 50 Hz and 250 Hz, between 200 Hz and 500 Hz, between 400 Hz and 750 Hz, between 600 Hz and 1000 Hz, between 750 Hz and 1250 Hz, between 1250 Hz and 1500 Hz, between 1500 Hz and 1750 Hz, or between 1750 Hz and 2000 Hz, etc. The internal audio data can be in the form of an electrical signal and can originate from an inertial sensor responsive to the inertial sensor detecting the internal audio. The inertial sensor can be a vibration sensor such as a bone conduction microphone. The internal audio data can include analogue and/or digital data. The computing device can access the internal audio data from storage. For example, the internal audio data can include historical internal audio data that was previously generated by an inertial sensor prior to being accessed for processing. In some implementations, a processor can receive the internal audio data from the inertial sensor directly for processing without the internal audio data being stored in storage. For example, the processor can access the internal audio data in substantially real-time as the internal audio data is generated by the inertial sensor. The computing device can access the internal audio data continuously. The internal audio data can be a continuous stream of data.


At block 705, the computing device can optionally access audio data generated by an external microphone responsive to detecting external audio originating from outside of the user's body and conducted through air. Such audio data can be referred to as external audio data. The external audio data can be in the form of an electrical signal and can originate from an external microphone responsive to the microphone detecting the external audio. The external audio data can include analogue and/or digital data. The computing device can access the external audio data from storage. For example, the external audio data can include historical external audio data that was previously generated by a microphone prior to being accessed for processing. In some implementations, a processor can receive the external audio data from the microphone directly for processing without the external audio data being stored in storage. For example, the processor can access the external audio data in substantially real-time as the external audio data is generated by the microphone. The computing device can access the external audio data continuously. The external audio data can be a continuous stream of data.


In some implementations, the computing device can optionally adjust an operation of a speaker of an auricular device based on any of the audio data described at any of blocks 701, 703, or 705. The speaker can emit an acoustic stimulus signal into the ear of a subject to evoke a response such as an OAE. Adjusting the operation of the speaker can include adjusting a time at which the speaker emits an acoustic signal. As an example, the processor can control a time at which the speaker emits an acoustic signal to control a time at which an OAE occurs based on acoustic data (e.g., internal acoustic data from a bone conduction microphone), which may optimize the quality of the OAE. Adjusting the operation of the speaker can include adjusting an amplitude of an acoustic signal emitted from the speaker. Adjusting the operation of the speaker can include adjusting a frequency of an acoustic signal emitted from the speaker.


At block 707, the computing device can optionally determine an adaptive filter for suppressing noise from the OAE audio data. The internal audio and/or the external audio can be noise that corrupts the OAE audio. The computing device can determine the adaptive filter based on comparing one or more characteristics of the internal audio data with one or more characteristics of the external audio data. Such characteristics can include an amplitude (decibel level) of the internal audio data and/or the external audio data. For example, the computing device can select a first adaptive filter if a primary component of the noise is from the internal audio (which can be indicated by the amplitude of the internal audio data surpassing a threshold relative to the external audio data) and can select a second adaptive filter if a primary component of the noise is from the external audio (which can be indicated by the amplitude of the external audio data surpassing a threshold relative to the internal audio data). Selecting a filter for noise suppression based on the source of the noise (whether internal or external) can improve the noise suppression because the filter will be specific to the source of the noise. Determining the adaptive filter can include determining one or more characteristics of the filter such as a transfer function, a frequency response, and/or an impulse response. Determining the adaptive filter can include selecting one of a pre-determined set of adaptive filters, or characteristics thereof. Accordingly, adaptively selecting one or more filters, or characteristics of a filter, based on the source of the noise can improve noise suppression by more closely tailoring the filter to the characteristics of the noise.
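A hedged sketch of this source-based selection at block 707 follows. The RMS decibel comparison, the 6 dB margin, and the filter names are assumptions made for illustration; the disclosure only requires that the selection depend on which source dominates.

```python
import numpy as np

def select_adaptive_filter(internal_audio, external_audio, margin_db=6.0):
    """Pick a filter configuration based on the dominant noise source."""
    def rms_db(x):
        return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

    internal_db = rms_db(internal_audio)
    external_db = rms_db(external_audio)
    if internal_db > external_db + margin_db:
        return "internal_noise_filter"   # e.g., tuned to body-conducted noise
    if external_db > internal_db + margin_db:
        return "external_noise_filter"   # e.g., tuned to ambient noise
    return "mixed_noise_filter"          # neither source clearly dominates
```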


At block 709, the computing device can suppress noise (e.g., external and/or internal audio data) from the OAE audio data based on the adaptive filter. Suppressing noise from the OAE audio data can include subtracting internal audio data and/or external audio data (or filtered versions thereof) from the OAE audio data. For example, the computing device can apply a high pass filter to the OAE audio data (which can remove low frequency noise corresponding to internal audio). The high pass filter can have a cutoff frequency between 250 Hz and 500 Hz, between 500 Hz and 750 Hz, between 750 Hz and 1000 Hz, between 1000 Hz and 1250 Hz, between 1250 Hz and 1500 Hz, between 1500 Hz and 1750 Hz, or between 1750 Hz and 2000 Hz, for example.
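A minimal sketch of the high-pass stage at block 709 is shown below, assuming a Butterworth design, a 48 kHz sample rate, and a 750 Hz cutoff chosen from the ranges listed above; all three are illustrative assumptions rather than requirements of the disclosure.

```python
from scipy.signal import butter, sosfiltfilt

def highpass_oae(oae_audio, fs=48_000, cutoff_hz=750.0, order=4):
    """Remove low-frequency internal noise (e.g., cardiac or respiratory sounds)
    from OAE audio data sampled at fs Hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, oae_audio)  # zero-phase filtering preserves OAE phase
```

Zero-phase filtering is used here because, as discussed elsewhere in this disclosure, OAE phase can itself carry physiological information.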


In some implementations, the computing device can suppress noise from the OAE audio data by discarding one or more values or portions of the OAE audio data. Discarding OAE audio data can include rejecting, ignoring, deleting, erasing from memory, preventing storage in memory, setting a value to zero, or the like. In some cases, simply discarding OAE audio data that has become too corrupted by noise can more efficiently improve the quality of the overall OAE audio data than trying to suppress the noise from that data. As an example, the computing device can discard portions of the OAE audio data that correspond (e.g., in time) to portions of the internal and/or external audio data that exceed a threshold amplitude.
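The following sketch illustrates this amplitude-gated rejection. Corrupted OAE segments are marked NaN here, though the disclosure also contemplates deletion, zeroing, or exclusion from storage; the frame length and threshold are illustrative assumptions.

```python
import numpy as np

def discard_noisy_segments(oae_audio, noise_audio, threshold, frame=256):
    """Discard OAE frames whose time-aligned noise amplitude exceeds threshold."""
    cleaned = oae_audio.astype(float).copy()
    for start in range(0, len(oae_audio) - frame + 1, frame):
        segment = noise_audio[start:start + frame]
        if np.max(np.abs(segment)) > threshold:    # noise amplitude feature
            cleaned[start:start + frame] = np.nan  # discard corresponding OAE frame
    return cleaned
```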


At block 711, the computing device can determine one or more physiological characteristics of a subject based on OAE audio data which can include adjusted (e.g., noise-suppressed) OAE audio data. In some implementations, the computing device can further process the OAE audio data such as to generate a user hearing profile or any other audiometry data described herein which can be used to determine the physiological characteristics.


The physiological characteristics can include auditory characteristics of the subject. The physiological characteristics can include inner ear characteristics. The physiological characteristics may indicate an operational capability of one or more of a cochlea, an organ of Corti, hair cells, outer hair cells, inner hair cells, tectorial membrane, and/or auditory nerve of the subject. The physiological characteristics can include an indication of hearing sensitivity, hearing loss, hearing loss at certain frequencies, or the like. For example, physiological characteristics may indicate whether outer hair cells of the subject are operating to amplify certain frequencies by moving a tectorial membrane of the subject. In some implementations, the processor can determine the physiological characteristics as part of a screening or diagnostic assessment of acoustic function. For example, the physiological characteristics may be a part of a screening regimen to determine the auditory health of newborn babies. As another example, the processor can monitor ototoxicity damage to hearing health (e.g., caused by ototoxic drugs used to treat cancer) based on monitoring auditory function. In some implementations, the processor can determine physiological characteristics based on one or more of a phase, amplitude, latency, and/or frequency of the OAE audio data.


The physiological characteristics can include oxygenation of the subject, including intracranial oxygenation. The computing device can determine oxygenation based on at least the OAE audio data, alone or in combination with the physiological data, such as blood oxygen saturation obtained from a pulse oximeter. OAE audio data can change based on oxygenation of the inner ear. The computing device can analyze a phase change of the OAE audio data, and how the phase change corresponds to blood oxygen saturation, to determine an indication of intracranial oxygenation of the subject. The inner ear is sensitive to changes in oxygenation. For example, under abnormal oxygenation, the auditory nerve and/or hair cells may not function properly to generate OAEs. By using inner ear physiological data, such as OAE data, the processor can indicate changes in oxygenation to prevent harmful physiological consequences such as excessive oxygen saturation (e.g., hyperoxia), which can lead to blindness in newborns, or insufficient oxygen saturation (e.g., hypoxia), which can lead to permanent brain damage.


The physiological characteristics can include intracranial pressure (ICP). Changes in ICP can cause changes in OAE (such as phase changes, amplitude changes, etc.). The computing device can monitor changes in OAE to determine changes in ICP. In turn, the computing device can determine cardiac activity, respiratory activity, and/or orientation of the subject based on the ICP. Changes in cardiac activity, such as during a heartbeat cycle, can change blood pressure, which can change ICP. Changes in respiratory activity, such as during a respiration cycle, can change blood pressure, which can change ICP. Changes in orientation of the subject (e.g., raising or lowering the head) can change ICP. The processor can analyze OAE audio data alone or in combination with cardiac data to determine an indication of intracranial pressure. For example, the processor can analyze a phase change of the OAE data, and how the phase change corresponds to heartbeat cycles, to determine an indication of intracranial pressure of the subject.
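One way such a phase series could be extracted, shown purely as an assumed sketch (the disclosure does not specify a phase-estimation method), is a lock-in style estimate of the OAE phase at the distortion-product frequency over short windows, which could then be compared against beat timing:

```python
import numpy as np

def oae_phase_series(oae_audio, fs, f_dp, window=2048, hop=1024):
    """Phase of the OAE at f_dp (Hz) for each analysis window, in radians."""
    t = np.arange(window) / fs
    ref = np.exp(-2j * np.pi * f_dp * t)   # complex reference tone at f_dp
    phases = []
    for start in range(0, len(oae_audio) - window + 1, hop):
        seg = oae_audio[start:start + window]
        phases.append(np.angle(np.sum(seg * ref)))  # lock-in phase estimate
    return np.array(phases)
```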


At block 713, the computing device can optionally generate a hearing transfer function for the user based on the OAE audio data. A computing device, such as an auricular device, can implement a hearing transfer function for user-specific audio playback. For example, an auricular device can adjust audio playback according to a hearing transfer function to improve the auditory listening experience of the user. As described herein, a hearing transfer function can indicate which frequencies to adjust and by how much to adjust their amplitudes so that a user perceives those frequencies at a desired decibel level (e.g., to recreate “normal” hearing).
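As a rough sketch only, a hearing transfer function could be applied as per-band gains via a simple filterbank. The band edges and gain values below are hypothetical, and the band summation ignores adjacent-band overlap for brevity; a real profile would be derived from the user's OAE-based audiometry data.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def apply_hearing_transfer_function(audio, fs, band_gains_db):
    """band_gains_db: list of ((low_hz, high_hz), gain_db) tuples."""
    out = np.zeros_like(audio, dtype=float)
    for (low, high), gain_db in band_gains_db:
        sos = butter(2, [low, high], btype="bandpass", fs=fs, output="sos")
        out += sosfilt(sos, audio) * 10 ** (gain_db / 20)  # boost or cut the band
    return out

# e.g., boost a 4-8 kHz region where a DPgram indicated reduced sensitivity:
# apply_hearing_transfer_function(audio, 48_000,
#                                 [((250, 4000), 0.0), ((4000, 8000), 6.0)])
```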



FIG. 8 is a flowchart illustrating an example process 800 of adjusting OAE audio data based on physiological data for subsequent processing. This process, in full or in part, can be executed by one or more hardware processors, whether associated with a single computing device or with multiple computing devices, including devices in remote or wireless communication with one another. By way of example, the one or more hardware processors executing process 800 can be associated with auricular device 100/300, a wearable device such as a watch, a mobile device such as a phone, a laptop, a tablet, and/or any of the electronic computing devices shown and/or described herein. The implementations of this process may vary and can involve modifications such as omitting blocks, adding blocks, and/or rearranging the order of execution of the blocks. Process 800 serves as an example and is not intended to restrict the present disclosure.


As shown in process 800, analyzing OAE data in combination with physiological data can improve the determination of physiological characteristics of a subject. Physiological data can correspond to physiological processes, such as cardiac cycles or respiratory cycles, that generate noise that can corrupt an OAE. Physiological data can correspond to physiological processes that alter an OAE such as by changing intracranial pressure (ICP) and/or changing body temperature. The process 800 may accordingly account for such physiological processes and how they can affect an OAE when using OAE data to assess physiological characteristics of a user. Accordingly, OAE quality can be preserved for improved processing. Moreover, OAE can be accurately measured across a broader range of frequencies (such as low frequencies of less than 400 Hz) corresponding to physiological events (e.g., a heart contraction generating noise at less than 400 Hz) that might have otherwise excessively corrupted the OAE.


At block 801, a computing device (e.g., one or more hardware processors of a computing device executing program instructions) can access audio data generated by an internal microphone responsive to detecting one or more OAEs originating from an inner ear of a user and conducted through an ear canal via air. Such audio data can be referred to as OAE audio data. The OAE audio data can be in the form of an electrical signal and can originate from a microphone responsive to the microphone detecting the OAE. The OAE audio data can include analogue and/or digital data. The computing device can access the OAE audio data from storage. For example, the OAE audio data can include historical OAE audio data that was previously generated by a microphone prior to being accessed for processing. In some implementations, a processor can receive the OAE audio data from the microphone directly for processing without the OAE audio data being stored in storage. For example, the processor can access the OAE audio data in substantially real-time as the OAE audio data is generated by the microphone. The computing device can access the OAE audio data continuously. The OAE audio data can be a continuous stream of data. In some implementations, the OAE audio data can be generated by one or more physiological sensors, which can include electrodes, responsive to detecting electrical activity of the auditory system of the subject. Such electrical activity can relate to one or more of auditory brainstem response (ABR), mid-latency response, cortical response, acoustic change complex, auditory steady state response (ASSR), complex auditory brainstem response, electrocochleography (ECoG), cochlear microphonic, cochlear neurophonic AEPs, electroencephalography (EEG), or the like.


At block 803, the computing device can access physiological data originating from one or more physiological sensors. The physiological data can include voltage data, impedance data, decibel data, cardiac data, electrocardiography (ECG) data, electromyography (EMG) data, electrooculography (EOG) data, respiration data, pressure data, position data, motion data, PPG data, inertial data, vibration data, acoustic data, and/or temperature data. The physiological data can include processed physiological data. The physiological data can include a waveform, such as an ECG waveform, a respiratory waveform, or a pulse waveform. A respiratory waveform can be an impedance-based waveform (from ECG data) and/or a PPG-based waveform. The physiological data may correspond to a physiological event that generates noise that can interfere with otoacoustic emissions such as if the noise from the physiological event and the otoacoustic emissions occur at similar frequencies (e.g., within a threshold of each other). For example, a heart contraction can generate noise that has a frequency similar to an OAE which can corrupt the quality of the OAE. As discussed, process 800 can advantageously account for such noise to preserve the quality of an OAE and to allow for detecting OAEs across a broader range of frequencies which may have otherwise been too corrupted at certain frequencies to be useful due to noise from physiological events at those same frequencies. The physiological data can correspond to physiological processes that generate audio such as cardiac activity and respiratory activity which may corrupt an OAE. The physiological data can correspond to physiological processes and/or parameters, such as SpO2, temperature, and/or intracranial pressure (e.g., blood pressure), that do not generate audio that corrupts an OAE but which may however impact an inner ear's ability to generate normal OAEs.


The physiological data can include analogue and/or digital data. The computing device can access the physiological data from storage. For example, the physiological data can include historical physiological data that was previously generated by a physiological sensor prior to being accessed for processing. In some implementations, a processor can receive the physiological data from the sensor directly for processing without the physiological data being stored in storage. For example, the processor can access the physiological data in substantially real-time as the physiological data is generated by the sensor. The computing device can access the physiological data continuously. The physiological data can be a continuous stream of data.


At block 805, the computing device can optionally adjust an operation of a speaker of an auricular device based on the physiological data. The speaker can emit an audio stimulus into the ear of a subject to evoke a response such as an OAE. The audio stimulus can comprise a plurality of tones, such as two tones. Adjusting the operation of the speaker can include adjusting a time at which the speaker emits an audio stimulus. As an example, the computing device can control a time at which the speaker emits an audio stimulus based on the occurrence of physiological processes determined from the physiological data to control a time at which an OAE occurs. Adjusting the timing of OAE measurements to account for (e.g., avoid) physiological processes such as cardiac cycles or respiratory cycles can optimize the quality of the OAE by reducing the presence of noise that may corrupt the OAE quality. Adjusting the operation of the speaker can include preventing the speaker from emitting an audio stimulus signal, such as for a predetermined length of time, such as during a certain portion of a cardiac or respiratory cycle. Adjusting the operation of the speaker can include adjusting a volume of an audio stimulus signal emitted from the speaker. Adjusting audio stimulus amplitude can cause proportional changes in amplitude of the OAE evoked responsive to the audio stimulus. For example, an audio stimulus having an amplitude of “L” can evoke an OAE having an amplitude of X*L, where X is a scalar factor of less than 1.0 in some cases. Adjusting the operation of the speaker to increase an amplitude of the OAE can cause the OAE amplitude to be greater than, or at least more easily distinguishable from, noise occurring from physiological events. Adjusting the operation of the speaker can include adjusting a frequency of an audio stimulus emitted from the speaker. Adjusting audio stimulus frequency can cause proportional changes in frequency of the OAE evoked responsive to the audio stimulus. For example, an audio stimulus having frequencies at f1 and f2 can evoke an OAE having a frequency of 2f1−f2. Adjusting the operation of the speaker to change a frequency of the OAE can cause the OAE frequency to be different from frequencies of noise resulting from physiological events, which can reduce the interference of the noise with the OAE. For example, the speaker can emit frequencies to evoke a high frequency OAE when physiological processes are occurring that generate low frequencies, and vice versa when physiological processes are occurring that generate high frequencies, to avoid signal interference. The computing device can analyze historical physiological data to determine expected future physiological events and when they will occur. For example, the computing device can analyze historical cardiac data to estimate the future occurrence of cardiac activity such as Q, R, S, T events on an ECG waveform. The computing device can control operation of the speaker based on estimated future physiological events, such as to avoid evoking an OAE during the estimated future event.
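A hedged sketch of this timing control follows: the next R-peak is predicted naively from the mean of recent beat intervals, and the two-tone stimulus (with expected distortion-product OAE at 2f1−f2) is scheduled away from it. The quiet-window margin and the prediction method are illustrative assumptions, not the claimed estimator.

```python
import numpy as np

def schedule_stimulus(r_peak_times_s, now_s, margin_s=0.15):
    """Return a stimulus onset time that avoids the predicted next heartbeat.
    Assumes at least two prior R-peak timestamps (seconds) are available."""
    intervals = np.diff(r_peak_times_s)
    next_peak = r_peak_times_s[-1] + np.mean(intervals)  # naive prediction
    if next_peak - now_s > margin_s:
        return now_s                  # enough quiet time before the next beat
    return next_peak + margin_s       # otherwise wait until just after the beat

def dpoae_frequency(f1_hz, f2_hz):
    return 2 * f1_hz - f2_hz          # expected distortion-product OAE frequency
```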


At block 807, the computing device can adjust the OAE audio data based on the physiological data. The computing device can analyze features of the physiological data such as amplitude, peaks, valleys, maximum values, minimum values, frequencies, or the like. As discussed, physiological data can include ECG data, which can include voltage values (such as on an ECG waveform) or impedance values (such as on a respiration waveform), PPG data (from which volumetric changes, blood pressure, and a pulse waveform can be derived), decibel data, inertial data, temperature data, etc. In some implementations, the computing device can determine that a feature occurs if the physiological data is greater than a threshold, is less than a threshold, and/or is within threshold limits. In some implementations, the computing device can determine that a feature occurs if the physiological data exceeds a threshold for longer than a threshold time. Physiological data features can correspond to physiological events, which may be cyclical, such as cardiac cycles, respiration cycles, or pulse cycles, or which may be non-cyclical, such as changes in intracranial pressure or body temperature. For example, a voltage peak on an ECG waveform can correspond to a heart contraction, at which time the heart may be generating the most noise. As another example, a peak on a respiration waveform (which can be derived from impedance data and/or PPG data) can correspond to a time between inhalation and exhalation when air is substantially motionless in the airways of the subject and thus generating the least noise during the respiration cycle. As another example, a peak on a pulse waveform (which can be derived from PPG data) can correspond to a heart contraction, at which time the heart may be generating the most noise. As another example, inertial sensor data exceeding a threshold can be a feature that corresponds to the user being in a certain orientation (e.g., indicating head elevation), which may affect intracranial pressure (such as if blood rushes to, or drains from, the head), which may in turn affect inner ear operation (indicating that OAE measurements at that time may be affected). As another example, body temperature exceeding a threshold may be a feature in the physiological data that can correspond to atypical functioning of the inner ear. The computing device can analyze the features of the physiological data with respect to time (e.g., in the time domain). The computing device can determine a time at which features of the physiological data occur. The computing device can compare the physiological data with the OAE audio data based on time. The computing device can map the physiological data to the OAE audio data based on time to determine when physiological events occur (as indicated by features in the physiological data) relative to when OAE audio data occurs.
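A hedged sketch of this time-domain feature detection and mapping follows, assuming the "exceeds a threshold for longer than a threshold time" criterion described above; the sampling rates, threshold, minimum duration, and function names are illustrative.

```python
import numpy as np

def detect_features(phys, fs_phys, threshold, min_duration_s=0.05):
    """Return (start_s, end_s) spans where phys stays above threshold for at
    least min_duration_s seconds."""
    above = phys > threshold
    spans, start = [], None
    for i, flag in enumerate(np.append(above, False)):  # sentinel closes last span
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs_phys >= min_duration_s:
                spans.append((start / fs_phys, i / fs_phys))
            start = None
    return spans

def map_to_oae_indices(spans_s, fs_oae):
    """Map each feature span (seconds) to the corresponding OAE sample range."""
    return [(int(t0 * fs_oae), int(t1 * fs_oae)) for t0, t1 in spans_s]
```

The mapping step implements the time-domain correspondence described above: each detected physiological feature is translated into the range of OAE samples recorded at the same moments.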


Adjusting OAE audio data can include discarding one or more values or portions of the OAE audio data. Discarding can include rejecting, ignoring, deleting, erasing from memory, preventing storage in memory, setting a value to zero, or the like. As an example, the computing device can discard portions of the OAE audio data that correspond (e.g., in time) to the occurrence of features in the physiological data (which can indicate occurrence of physiological events that might introduce noise to the OAE audio data). Accordingly, subsequent processing/analysis with the OAE audio data may not include portions of the OAE audio data that have been corrupted by noise from physiological processes which can advantageously improve the accuracy of the processing/analysis.


Adjusting OAE audio data can include assigning weights (or adjusting previously assigned weights) to values or portions of the OAE audio data. Weighting the OAE audio data can correspond to a degree to which the OAE audio data may be corrupted by noise from physiological processes. As an example, the computing device can assign certain weights to portions of the OAE audio data that correspond (e.g., in time) to the occurrence of features in the physiological data (which can indicate occurrence of physiological events that might introduce noise to the OAE audio data) and can assign other weights to other portions of the OAE audio data that correspond (e.g., in time) to the occurrence of other features in the physiological data (which can indicate occurrence of physiological events that might not introduce noise to the OAE audio data). Assigning weights to OAE audio data can include multiplying the OAE audio data by a scalar factor. Assigning a range of weights to OAE audio data can account for a range of physiological events that may affect OAE differently. Weighting the OAE audio data can improve accuracy of subsequent processing of the weighted OAE audio data, such as when determining physiological characteristics of the subject based on the weighted OAE audio data. For example, the effect of certain weighted OAE audio data in subsequent processing/analysis may be minimized to correspond to a degree to which the OAE audio data may be corrupted by noise from physiological processes.
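The following sketch illustrates such weighting, using index spans like those produced by the hypothetical map_to_oae_indices helper in the earlier sketch; the specific weight value is an assumption for illustration.

```python
import numpy as np

def weight_oae(oae_audio, feature_index_spans, noisy_weight=0.2):
    """De-emphasize OAE samples that coincide with noise-generating events."""
    weights = np.ones(len(oae_audio))
    for start, end in feature_index_spans:
        weights[start:end] = noisy_weight   # reduced weight for corrupted spans
    return oae_audio * weights, weights     # weighted data plus weights for later use
```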


Adjusting OAE audio data can include binning values or portions of the OAE audio data. As an example, the computing device can assign OAE audio data to different bins based on whether the OAE audio data corresponds (e.g., occurs at a same time) to certain physiological data features. Accordingly, the computing device can bin the OAE audio data based on how it corresponds in time to physiological processes of the subject such as cardiac cycles, respiratory cycles, body temperature, position/orientation, or the like. As an example, the computing device can bin OAE audio data to a first bin if it occurs during peak electrical activity during a cardiac cycle and can bin OAE audio data to a second bin if it occurs during minimal cardiac activity of a cardiac cycle. Accordingly, the computing device can process different portions of OAE audio data differently based on the bin to which it is assigned. For example, the computing device can apply a first transfer function to OAE audio data in a first bin and can apply a second transfer function to OAE audio data in a second bin, such as when determining physiological characteristics of the subject from the OAE audio data. Binning OAE audio data can include assigning metadata to the OAE audio data indicating to which bin the OAE audio data is assigned.
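A sketch of such binning is below, tagging each OAE segment with a bin based on where its midpoint falls in the cardiac cycle; the two bin labels and the 0.3 phase fraction are assumptions chosen for illustration.

```python
def bin_by_cardiac_phase(segment_times_s, r_peak_times_s):
    """Assign each OAE segment midpoint (seconds) to a cardiac-phase bin."""
    bins = {}
    for t in segment_times_s:
        prior = [p for p in r_peak_times_s if p <= t]
        later = [p for p in r_peak_times_s if p > t]
        if not prior or not later:
            bins[t] = "unbinned"            # outside the recorded cycles
            continue
        phase = (t - prior[-1]) / (later[0] - prior[-1])  # 0 at one beat, 1 at next
        bins[t] = "peak_activity" if phase < 0.3 else "quiet"
    return bins                              # metadata indicating assigned bin
```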


Adjusting OAE audio data can include adjusting a transfer function associated with the OAE audio data, for example a transfer function that can be applied to the OAE audio data to assess physiological characteristics of a subject. Adjusting the transfer function can include selecting a transfer function from a set of predetermined transfer functions. As an example, the processor can select one transfer function to apply to portions of the OAE audio data that correspond (e.g., in time) to features in the physiological data indicating physiological events that might introduce noise to the OAE audio data, and a different transfer function for portions corresponding to features indicating events that might not introduce noise. Adjusting the transfer function can include adjusting a mapping of the transfer function, such as a mapping of OAE audio data inputs to physiological characteristic outputs.


At block 809, the computing device can determine one or more physiological characteristics of a subject based on OAE audio data which can include adjusted OAE audio data. In some implementations, the computing device can further process the OAE audio data such as to generate a user hearing profile or any other audiometry data described herein which can be used to determine the physiological characteristics.


The physiological characteristics can include auditory characteristics of the subject. The physiological characteristics can include inner ear characteristics. The physiological characteristics may indicate an operational capability of one or more of a cochlea, an organ of Corti, hair cells, outer hair cells, inner hair cells, tectorial membrane, and/or auditory nerve of the subject. The physiological characteristics can include an indication of hearing sensitivity, hearing loss, hearing loss at certain frequencies, or the like. For example, physiological characteristics may indicate whether outer hair cells of the subject are operating to amplify certain frequencies by moving a tectorial membrane of the subject. In some implementations, the processor may determine the physiological characteristics as part of a screening or diagnostic assessment of acoustic function. For example, the physiological characteristics may be a part of a screening regimen to determine the auditory health of newborn babies. As another example, the processor can monitor ototoxicity damage to hearing health (e.g., caused by ototoxic drugs used to treat cancer) based on monitoring auditory function. In some implementations, the processor can determine physiological characteristics based on one or more of a phase, amplitude, latency, and/or frequency of the OAE audio data (or changes thereof).


The physiological characteristics can include oxygenation of the subject, including intracranial oxygenation. The computing device can determine oxygenation based on at least the OAE audio data, alone or in combination with the physiological data, such as blood oxygen saturation obtained from a pulse oximeter. OAE audio data can change based on oxygenation of the inner ear. The computing device can analyze a phase change of the OAE audio data, and how the phase change corresponds to blood oxygen saturation, to determine an indication of intracranial oxygenation of the subject. The inner ear is sensitive to changes in oxygenation. For example, under abnormal oxygenation, the auditory nerve and/or hair cells may not function properly to generate OAEs. By using inner ear physiological data, such as OAE data, the processor can indicate changes in oxygenation to prevent harmful physiological consequences such as excessive oxygen saturation (e.g., hyperoxia), which can lead to blindness in newborns, or insufficient oxygen saturation (e.g., hypoxia), which can lead to permanent brain damage.


The physiological characteristics can include intracranial pressure (ICP). Changes in ICP can cause changes in OAE (such as phase changes, amplitude changes, etc.). The computing device can monitor changes in OAE to determine changes in ICP. In turn, the computing device can determine cardiac activity, respiratory activity, and/or orientation of the subject based on the ICP. Changes in cardiac activity, such as during a heartbeat cycle, can change blood pressure, which can change ICP. Changes in respiratory activity, such as during a respiration cycle, can change blood pressure, which can change ICP. Changes in orientation of the subject (e.g., raising or lowering the head) can change ICP. The processor can analyze OAE audio data alone or in combination with cardiac data to determine an indication of intracranial pressure. For example, the processor can analyze a phase change of the OAE data, and how the phase change corresponds to heartbeat cycles, to determine an indication of intracranial pressure of the subject.


At block 811, the computing device can optionally generate a hearing transfer function for the user based on the OAE audio data. A computing device, such as an auricular device, can implement a hearing transfer function for user-specific audio playback. For example, an auricular device can adjust audio playback according to a hearing transfer function to improve the auditory listening experience of the user. As described herein, a hearing transfer function can indicate which frequencies to adjust and by how much to adjust their amplitudes so that a user perceives those frequencies at a desired decibel level (e.g., to recreate “normal” hearing).


Additional Considerations

Certain terms for categories of persons, such as caregivers, clinicians, doctors, nurses, and friends and family of a user, may be used interchangeably to describe a person providing care to the user. Furthermore, “patient” and “user” are used herein interchangeably to refer to a person who is wearing a sensor, is connected to a sensor, or whose measurements are used to determine a physiological parameter or a condition. Parameters may be, be associated with, and/or be represented by measured values, display icons, alphanumeric characters, graphs, gauges, power bars, trends, or combinations thereof. Real-time data may correspond to active monitoring of a user; however, such real-time data may not be synchronous with an actual physiological state at a particular moment. A measurement value of a parameter and the parameter itself, such as SpO2, RR, PaO2, and the like, unless specifically stated otherwise or otherwise understood from the context as used, are generally intended to convey a measurement or determination that is responsive to the physiological parameter.


Although certain implementations and examples have been described herein, it will be understood by those skilled in the art that many aspects of the systems and devices shown and described in the present disclosure may be differently combined and/or modified to form still further implementations or acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. A wide variety of designs and approaches are possible. No feature, structure, or step disclosed herein is essential or indispensable. The various features and processes described herein may be used independently of one another, or may be combined in various ways. For example, elements may be added to, removed from, or rearranged compared to the disclosed example implementations. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure.


Any methods and processes described herein are not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state, or certain method or process blocks may be omitted, or certain blocks or states may be performed in a reverse order from what is shown and/or described. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example implementations.


The methods disclosed herein may include certain actions taken by a practitioner; however, they can also include any third-party instruction of those actions, either expressly or by implication.


The methods and tasks described herein may be performed and fully automated by a computer system. The computer system may, in some cases, include multiple distinct computers or computing devices (e.g., physical servers, workstations, storage arrays, cloud computing resources, etc.) that communicate and interoperate over a network to perform the described functions. Each such computing device typically includes a processor (or multiple processors) that executes program instructions or modules stored in a memory or other non-transitory computer-readable storage medium or device (e.g., solid state storage devices, disk drives, etc.). The various functions disclosed herein may be embodied in such program instructions, and/or may be implemented in application-specific circuitry (e.g., ASICs or FPGAs) of the computer system. Where the computer system includes multiple computing devices, these devices may, but need not, be co-located. The results of the disclosed methods and tasks may be persistently stored by transforming physical storage devices, such as solid state memory chips and/or magnetic disks, into a different state. The computer system may be a cloud-based computing system whose processing resources are shared by multiple distinct entities or other users. The systems and modules may also be transmitted as generated data signals (for example, as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and may take a variety of forms (for example, as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames).


Many other variations than those described herein will be apparent from this disclosure. For example, depending on the implementation, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain implementations, acts or events can be performed concurrently, for example, through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.


Various illustrative logical blocks, modules, routines, and algorithm steps that may be described in connection with the disclosure herein can be implemented as electronic hardware (e.g., ASICs or FPGA devices), computer software that runs on computer hardware, or combinations of both. Various illustrative components, blocks, and steps may be described herein generally in terms of their functionality. Whether such functionality is implemented as specialized hardware versus software running on general-purpose hardware depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.


Moreover, various illustrative logical blocks and modules that may be described in connection with the implementations disclosed herein can be implemented or performed by a machine, such as a general purpose processor, a digital signal processor (“DSP”), an application specific integrated circuit (“ASIC”), a field programmable gate array (“FPGA”) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. A processor can include an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some, or all, of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.


The elements of any method, process, routine, or algorithm described in connection with the disclosure herein can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The storage medium can be volatile or nonvolatile. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor and the storage medium can reside as discrete components in a user terminal.


Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain features, elements, and/or steps are optional. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements, and/or steps are included or are to be always performed. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Further, the term “each,” as used herein, in addition to having its ordinary meaning, can mean any subset of a set of elements to which the term “each” is applied.


Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require the presence of at least one of X, at least one of Y, and at least one of Z.


Language of degree used herein, such as the terms “approximately,” “about,” “generally,” and “substantially” as used herein represent a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. For example, the terms “approximately”, “about”, “generally,” and “substantially” may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and within less than 0.01% of the stated amount. As another example, in certain embodiments, the terms “generally parallel” and “substantially parallel” refer to a value, amount, or characteristic that departs from exactly parallel by less than or equal to 10 degrees, 5 degrees, 3 degrees, or 1 degree. As another example, in certain embodiments, the terms “generally perpendicular” and “substantially perpendicular” refer to a value, amount, or characteristic that departs from exactly perpendicular by less than or equal to 10 degrees, 5 degrees, 3 degrees, or 1 degree.


As used herein, “real-time” or “substantial real-time” may refer to events (e.g., receiving, processing, transmitting, displaying etc.) that occur at a same time as each other, during a same time as each other, or overlap in time with each other. “Real-time” may refer to events that occur at distinct or non-overlapping times the difference between which is imperceptible and/or inconsequential to humans such as delays arising from electrical conduction or transmission. A human may perceive real-time events as occurring simultaneously, regardless of whether the real-time events occur at an exact same time. As a non-limiting example, “real-time” may refer to events that occur within a time frame of each other that is on the order of milliseconds, seconds, tens of seconds, or minutes. For example, “real-time” may refer to events that occur within a time frame of less than 1 minute, less than 30 seconds, less than 10 seconds, less than 1 second, less than 0.05 seconds, less than 0.01 seconds, less than 0.005 seconds, less than 0.001 seconds, etc.


Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.


As used herein, “system,” “instrument,” “apparatus,” and “device” generally encompass both the hardware (for example, mechanical and electronic) and, in some implementations, associated software (for example, specialized computer programs for operational control) components.


It should be emphasized that many variations and modifications may be made to the herein-described implementations, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. Any section headings used herein are merely provided to enhance readability and are not intended to limit the scope of the implementations disclosed in a particular section to the features or elements disclosed in that section. The foregoing description details certain implementations. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the systems and methods can be practiced in many ways. As is also stated herein, it should be noted that the use of particular terminology when describing certain features or aspects of the systems and methods should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the systems and methods with which that terminology is associated.


Those of skill in the art would understand that information, messages, and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.


While the above detailed description has shown, described, and pointed out novel features, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As can be recognized, certain portions of the description herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of certain embodiments disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims
  • 1. A system for monitoring auditory health of a user comprising: an auricular device comprising: a microphone oriented toward an inner ear of the user when the auricular device is worn by the user and configured to generate OAE audio data responsive to detecting one or more otoacoustic emissions originating from the inner ear of the user; and one or more hardware processors associated with the auricular device configured to: access the OAE audio data originating from the microphone; access physiological data of the user originating from a physiological sensor coupled to the user; detect a feature of the physiological data in a time-domain based on a value of the physiological data exceeding a threshold, the feature of the physiological data corresponding to a physiological event generating noise during the one or more otoacoustic emissions, the noise having a noise frequency within a threshold of an OAE frequency of the one or more otoacoustic emissions; adjust a portion of the OAE audio data responsive to detecting the feature of the physiological data to reduce interference of the noise from the physiological event on the OAE audio data, the portion of the OAE audio data corresponding to the feature of the physiological data in the time-domain; and determine one or more physiological characteristics of the user based on the OAE audio data.
  • 2. The system of claim 1 wherein the physiological data includes one or more of PPG data originating from an optical sensor or ECG data originating from an ECG sensor comprising an electrode.
  • 3. The system of claim 1 wherein the one or more hardware processors are configured to: generate one or more waveforms from the physiological data including an ECG waveform, a respiration waveform, or a pulse waveform; and determine the feature of the physiological data from the one or more waveforms.
  • 4. The system of claim 1 wherein the auricular device further comprises a speaker configured to emit an audio stimulus toward the inner ear of the user, the audio stimulus configured to evoke the one or more otoacoustic emissions, wherein the one or more hardware processors are configured to adjust an operation of the speaker based on the physiological data to control the one or more otoacoustic emissions.
  • 5. The system of claim 4 wherein adjusting the operation of the speaker includes adjusting a time the speaker emits the audio stimulus to avoid evoking the one or more otoacoustic emissions during the physiological event.
  • 6. The system of claim 4 wherein adjusting the operation of the speaker includes adjusting a stimulus frequency of the audio stimulus to adjust the OAE frequency of the one or more otoacoustic emissions to avoid interference with the noise from the physiological event.
  • 7. The system of claim 4 wherein adjusting the operation of the speaker includes adjusting a stimulus amplitude of the audio stimulus to evoke the one or more otoacoustic emissions with an OAE amplitude being greater than a noise amplitude of the noise from the physiological event.
  • 8. The system of claim 4 wherein the one or more hardware processors are configured to: estimate a future physiological event based on the physiological data; and adjust the operation of the speaker based on the estimated future physiological event.
  • 9. The system of claim 1 wherein adjusting the portion of the OAE audio data includes discarding one or more values of the OAE audio data.
  • 10. The system of claim 1 wherein adjusting the portion of the OAE audio data includes assigning one or more weights to the OAE audio data.
  • 11. The system of claim 1 wherein adjusting the portion of the OAE audio data includes binning the OAE audio data.
  • 12. The system of claim 1 wherein the physiological sensor is integrated with the auricular device.
  • 13. The system of claim 1 wherein the one or more physiological characteristics of the user includes a hearing sensitivity of the user.
  • 14. The system of claim 1 wherein the one or more physiological characteristics of the user includes an oxygenation of the user.
  • 15. The system of claim 1 wherein the one or more physiological characteristics of the user includes an intracranial pressure of the user.
  • 16. The system of claim 1 wherein the one or more hardware processors are configured to generate a hearing transfer function from the OAE audio data to implement user-specific audio playback.
  • 17. A method for monitoring auditory health of a user comprising: detecting one or more otoacoustic emissions with a microphone of an auricular device worn by the user, the one or more otoacoustic emissions originating from an inner ear of the user; generating OAE audio data with the microphone responsive to detecting the one or more otoacoustic emissions; and under control of one or more hardware processors: accessing the OAE audio data originating from the microphone; accessing physiological data of the user originating from a physiological sensor coupled to the user; detecting a feature of the physiological data in a time-domain based on a value of the physiological data exceeding a threshold, the feature of the physiological data corresponding to a physiological event generating noise during the one or more otoacoustic emissions, the noise having a noise frequency within a threshold of an OAE frequency of the one or more otoacoustic emissions; adjusting a portion of the OAE audio data responsive to detecting the feature of the physiological data to reduce interference of the noise from the physiological event on the OAE audio data, the portion of the OAE audio data corresponding to the feature of the physiological data in the time-domain; and determining one or more physiological characteristics of the user based on the OAE audio data.
  • 18. The method of claim 17 further comprising: generating one or more waveforms from the physiological data including an ECG waveform, a respiration waveform, or a pulse waveform; and determining the feature of the physiological data from the one or more waveforms.
  • 19. Non-transitory computer-readable media including computer-executable instructions that, when executed by a computing system, cause the computing system to perform operations comprising: accessing OAE audio data generated by a microphone of an auricular device responsive to detecting one or more otoacoustic emissions emanating from an inner ear of a user; accessing physiological data of the user originating from a physiological sensor coupled to the user; detecting a feature of the physiological data in a time-domain based on a value of the physiological data exceeding a threshold, the feature of the physiological data corresponding to a physiological event generating noise during the one or more otoacoustic emissions, the noise having a noise frequency within a threshold of an OAE frequency of the one or more otoacoustic emissions; adjusting a portion of the OAE audio data responsive to detecting the feature of the physiological data to reduce interference of the noise from the physiological event on the OAE audio data, the portion of the OAE audio data corresponding to the feature of the physiological data in the time-domain; and determining one or more physiological characteristics of the user based on the OAE audio data.
  • 20. The non-transitory computer-readable media of claim 19 wherein the computer-executable instructions, when executed by the computing system, cause the computing system to perform operations comprising: generating one or more waveforms from the physiological data including an ECG waveform, a respiration waveform, or a pulse waveform; and determining the feature of the physiological data from the one or more waveforms.
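The signal flow recited in claims 1, 17, and 19, together with the discard and weighting adjustments of claims 9 and 10, can be illustrated with a brief sketch. This is a minimal illustration under stated assumptions, not the claimed implementation: the function names, the threshold value, the 50 ms event window, and the sample rates are all hypothetical, and the physiological signal and OAE audio are assumed to share a common time origin.

    import numpy as np

    def detect_events(phys, threshold):
        """Indices where the time-domain physiological signal exceeds a
        threshold (e.g., an ECG R-peak or a PPG systolic peak)."""
        return np.flatnonzero(np.abs(phys) > threshold)

    def adjust_oae(oae, event_idx, phys_rate, oae_rate, window_s=0.05, weight=0.0):
        """Down-weight (or, with weight=0.0, effectively discard) the OAE
        samples that coincide in time with each detected event."""
        oae = oae.astype(float)
        half = int(window_s * oae_rate)
        for i in event_idx:
            center = int(i * oae_rate / phys_rate)  # map event onto the OAE time base
            lo, hi = max(0, center - half), min(len(oae), center + half)
            oae[lo:hi] *= weight
        return oae

    # Hypothetical stand-in data: 10 s of signal on each time base.
    phys = np.random.randn(1000)    # physiological samples at 100 Hz
    oae = np.random.randn(48000)    # OAE microphone audio at 4800 Hz
    cleaned = adjust_oae(oae, detect_events(phys, 2.5), phys_rate=100, oae_rate=4800)

Setting weight to an intermediate value such as 0.25 would correspond to the weighting of claim 10, while weight=0.0 corresponds to discarding values per claim 9.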
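Claims 4 through 8 recite adjusting the speaker's operation, in timing, frequency, or amplitude, based on the physiological data, including based on an estimated future event. The sketch below shows one way such scheduling could look; the scheduling margin, guard band, level cap, and the use of mean inter-beat intervals as the predictor are assumptions made purely for illustration.

    import numpy as np

    def predict_next_event(event_times):
        """Estimate the next physiological event (cf. claim 8) from the
        mean interval of recent events, e.g., ECG R-peak times."""
        return event_times[-1] + float(np.mean(np.diff(event_times)))

    def schedule_stimulus(now, event_times, margin_s=0.08):
        """Delay stimulus emission so the evoked OAE avoids the predicted
        event (cf. claim 5); returns the emission time in seconds."""
        nxt = predict_next_event(event_times)
        return nxt + margin_s if now > nxt - margin_s else now

    def pick_stimulus_frequency(candidates_hz, noise_hz, guard_hz=100.0):
        """Choose a stimulus frequency whose evoked OAE sits at least
        guard_hz away from the dominant noise frequency (cf. claim 6)."""
        clear = [f for f in candidates_hz if abs(f - noise_hz) >= guard_hz]
        return clear[0] if clear else max(candidates_hz, key=lambda f: abs(f - noise_hz))

    def pick_stimulus_level(noise_db, headroom_db=6.0, max_db=65.0):
        """Raise the stimulus level so the evoked OAE amplitude can exceed
        the measured noise amplitude (cf. claim 7), capped for safety."""
        return min(noise_db + headroom_db, max_db)

The relation between stimulus level and evoked OAE amplitude is nonlinear in practice; the fixed headroom here is a placeholder for whatever mapping an implementation would calibrate.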
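One plausible reading of the binning adjustment of claim 11, offered here only as an assumption, is to group repeated OAE measurements by their phase within the cardiac cycle and average within each bin, so that sweeps landing in quieter phases of the cycle can be favored. The function name and the four-bin granularity are hypothetical.

    import numpy as np

    def bin_by_cardiac_phase(sweeps, sweep_times, r_peak_times, n_bins=4):
        """Group OAE sweeps by cardiac phase (0..1 between consecutive
        R-peaks) and average within each bin."""
        r = np.asarray(r_peak_times)
        bins = [[] for _ in range(n_bins)]
        for sweep, t in zip(sweeps, sweep_times):
            prev, nxt = r[r <= t], r[r > t]
            if prev.size == 0 or nxt.size == 0:
                continue  # sweep falls outside the observed beats
            phase = (t - prev[-1]) / (nxt[0] - prev[-1])
            bins[min(int(phase * n_bins), n_bins - 1)].append(sweep)
        return [np.mean(b, axis=0) if b else None for b in bins]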
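Claim 16 recites generating a hearing transfer function from the OAE audio data to implement user-specific playback. As a hedged sketch, one could map measured OAE levels at a handful of probe frequencies to per-band playback gains, boosting bands where emissions, and by inference cochlear response, are weaker; the reference level and gain cap below are invented for illustration.

    import numpy as np

    def hearing_transfer_function(freqs_hz, oae_levels_db, ref_db=0.0, max_gain_db=12.0):
        """Per-band playback gains: bands with weaker measured emissions
        receive more gain, clipped to a safe maximum."""
        gains = np.clip(ref_db - np.asarray(oae_levels_db), 0.0, max_gain_db)
        return dict(zip(freqs_hz, gains))

    # Hypothetical measurement: the weak 4 kHz emission gets the largest boost.
    eq_curve = hearing_transfer_function([1000, 2000, 4000], [-2.0, -6.0, -11.0])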
Provisional Applications (4)
Number Date Country
63602230 Nov 2023 US
63602274 Nov 2023 US
63505971 Jun 2023 US
63505942 Jun 2023 US