Some embodiments described herein generally relate to validation, compliance, and/or intervention with an ear device.
Unless otherwise indicated herein, the materials described herein are not prior art to the claims in the present application and are not admitted to be prior art by inclusion in this section.
Sound-related behaviors such as sneezing, coughing, vomiting, and/or shouting (e.g., tied to mood or rage) may be useful to measure in health-related research. For example, measuring sneezing, coughing, vomiting, and/or shouting may be useful in researching the intended effects and/or side effects of a given medication. Such behaviors have been self-reported in the past, but self-reporting may be cumbersome to subjects, may be inefficient, and/or may be inaccurate.
The subject matter claimed herein is not limited to implementations that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one example technology area where some implementations described herein may be practiced.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential characteristics of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Some example implementations described herein generally relate to validation, compliance, and/or intervention with an ear device.
An example validation method may include generating, at an ear of a user, a signal indicative of at least one of a behavior of the user, a biometric of the user, or an environmental condition of an environment of the user. The method may also include determining, based on the signal, at least one of the behavior of the user, the biometric of the user, or the environmental condition of the environment of the user.
An example compliance method may include outputting, through an audio output device positioned at least partially in, on, or proximate to an ear of a user, a compliance message to evoke a target behavior in the user. The method may also include monitoring behavior of the user, through a sensor positioned in, on, or proximate to the ear of the user. The method may also include determining, based on the monitoring, compliance of the user with the target behavior.
An example intervention method may include determining a state of a user. The method may include determining whether the state of the user warrants an intervention or treatment. The method may include in response to determining that the state of the user warrants an intervention or treatment, determining a specific intervention or treatment to administer to the user. The method may include administering the specific intervention or treatment to the user. The state of the user may be determined based on a signal generated by a sensor positioned in, on, or proximate to the user's ear and/or the specific intervention or treatment may be administered at least in part by an output device positioned in, on, or proximate to the user's ear.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims or may be learned by the practice of the invention as set forth hereinafter.
To further clarify the above and other advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail through the use of the accompanying drawings, all arranged in accordance with at least one embodiment described herein.
Some embodiments described herein generally relate to validation, compliance, and/or intervention with an ear device, such as a hearing aid or headphone. The '242 application discloses methods, systems, and/or devices related to sensor fusion to validate and/or measure sound-producing behaviors of a subject. Such sound-producing behaviors can include sneezing, coughing, vomiting, shouting, or other sound-producing behaviors. The embodiments described in the '242 application may detect sound-producing behaviors in general and/or may categorize each of the sound-producing behaviors, e.g., as a sneeze, cough, vomiting, wheezing, shortness of breath, chewing, swallowing, masturbation, sex, a shout, or other particular type of sound-producing behavior.
Sensors implemented in the '242 application may be included in a wearable electronic device worn on a user's wrist, included in a user's smartphone (often carried in a user's pocket), or applied to a user's body, e.g., in the form of a sensor sticker. Such devices are often at least partially covered by a user's clothing some or all of the time during use. The presence of clothing may interfere with sensor detection, introducing noise and/or otherwise reducing measurement accuracy.
In comparison, hearing aids, headphones, and other ear-mountable devices may be less likely to be even partially covered by clothing than wrist-wearable devices, smartphones, sensor stickers, and/or other wearable electronic devices. For example, many users when clothed keep their heads completely uncovered such that any ear-mountable device worn by the user may remain uncovered. Further, many head-wearable accessories, such as baseball caps and bandanas, may interfere little or not at all with an ear-mountable device.
Some embodiments described herein relate to ear-mountable devices with one or more sensors and both input and output capabilities. Ear-mountable devices may be advantageously mounted (e.g., worn on or otherwise attached to) to a user's ears on the user's head where it is unlikely to be covered by clothing or other objects that may interfere with sensing functions of the devices. In addition, ear-mountable devices may include one or more sensors in contact with or proximate to the user's ear canal, which may have solid vibration and sound conduction through the user's skull, such that the ear-mountable devices may sense solid vibrations and/or sounds from the user's ear canal. Further, the proximity to the user's head may permit ear-mountable devices to sense brain waves and/or electroencephalography (EEG) waves.
Due to the location of ear-mountable devices when used, e.g., on the user's head, they may be better situated than other personal wearable electronic devices to detect with less noise and/or better accuracy one or more of the following parameters: core body temperature, ambient light exposure, ambient ultraviolet (UV) light exposure, ambient temperature, head orientation, head impact, coughing, sneezing, and/or vomiting.
In some embodiments, an ear-mountable device may include an output device, such as a speaker, that outputs information in an audio format to be heard by a user. Alternatively or additionally, an ear-mountable device may include an input device, such as a microphone or an accelerometer, through which a user may provide input. Accordingly, embodiments described herein may use an ear-mountable device for passive and/or active validation of a behavior, an environmental condition, and/or a biometric of the user; for compliance; and/or for intervention.
Each ear-mountable device may be implemented as a hearing aid, a headphone, or other device configured to be mounted to a user's ear. Hearing aid users often wear and use their hearing aids for lengths of time that may be longer than lengths of times for which headphones may typically be used. Even so, embodiments described herein may be implemented in either or both hearing aids and headphones, or in other ear-mountable devices, with or without regard to an expected or typical period of use of such devices.
Reference will now be made to the drawings to describe various aspects of some example embodiments of the disclosure. The drawings are diagrammatic and schematic representations of such example embodiments, and are not limiting of the present disclosure, nor are they necessarily drawn to scale.
The network 112 may include one or more wide area networks (WANs) and/or local area networks (LANs) that enable the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, the cloud 108, the remote server 110, the sensor devices 116, and/or the user devices 114 to communicate with each other. In some embodiments, the network 112 includes the Internet, including a global internetwork formed by logical and physical connections between multiple WANs and/or LANs. Alternately or additionally, the network 112 may include one or more cellular RF networks and/or one or more wired and/or wireless networks such as 802.xx networks, Bluetooth access points, wireless access points, IP-based networks, or other suitable networks. The network 112 may also include servers that enable one type of network to interface with another type of network.
One or more of the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 may include a sensor configured to generate data signals that measure parameters that may be indicative of behaviors, environmental conditions, and/or biometric responses of the subject 102. The measured parameters may include, for example: sound near the subject 102; acceleration or angular velocity of the subject 102 or of a head, chest, hand, wrist, or other part of the subject 102; skin temperature of the subject 102; core body temperature of the subject 102; blood oxygenation of the subject 102; blood flow of the subject 102; electrical activity of the heart of the subject 102; electrodermal activity (EDA) of the subject 102; sound, vibration, or another parameter indicative of the subject 102 swallowing, grinding teeth, or chewing; an intoxication state of the subject 102; a dizziness level of the subject 102; EEG brain waves of the subject 102; one or more parameters indicative of volatile organic compounds in the subject's sweat or sweat vapor; an environmental or ambient temperature, light level, or UV light level of an environment of the subject 102; or other parameters, one or more of which may be indicative of certain sound-producing behaviors of the subject 102, such as sneezing, coughing, wheezing, vomiting, or shouting. The ear-mountable device 103, the wearable electronic device 104, the smartphone 106, and/or the remote server 110 may be configured to determine or extract one or more features from the data signals and/or from data derived therefrom to validate behaviors, environmental conditions, or biometrics of the subject 102 and/or to implement compliance and/or interventions for the subject 102.
In some embodiments, one or both of the ear-mountable devices 103 may include a sensor and/or input device that may be positioned at any desired location in, on, or proximate to the ear. Example locations for each sensor and/or input device of each of the ear-mountable devices 103 include in the user's ear canal, in or near the user's tympanic membrane, in the user's ear-hole (e.g., the opening of the ear canal), behind the user's ear, on the user's ear lobe, or other suitable location(s) in, on, or proximate to the user's ear. For example, a sensor to acquire core body temperature, heart rate via photoplethysmograph (PPG), sweat vapor, signals relating to the tympanic membrane, and/or UV/light levels may be positioned inside the user's ear canal. Alternatively or additionally, a sensor to acquire environmental/ambient temperature/light levels/sound may be positioned behind the user's ear.
All of the sensors may be included in a single device, such as the ear-mountable device 103, the sensor device 116, the wearable electronic device 104, and/or the smartphone 106. Alternately or additionally, the sensors may be distributed between two or more devices. For instance, one or each of the ear-mountable device 103, the sensor devices 116, the wearable electronic device 104 or the smartphone 106 may include a sensor. Alternately or additionally, the one or more sensors may be provided as separate sensors that are separate from either of the ear-mountable device 103, the wearable electronic device 104, or the smartphone 106. For example, the sensor devices 116 may be provided as separate sensors. In particular, the sensor devices 116 are separate from the ear-mountable device 103, the wearable electronic device 104, and the smartphone 106.
Each sensor, such as each sensor included in the ear-mountable device 103, may include any of a discrete microphone, an accelerometer, a gyro sensor, a thermometer, an oxygen saturation sensor, a PPG sensor, an electrocardiogram (ECG) sensor, an EDA sensor, or other sensor. In some embodiments, each of the ear-mountable devices 103 may include multiple sensors. Alternatively or additionally, a first sensor device 116a may be positioned along a sternum of the subject 102, a second sensor device 116b may be positioned over the left breast to be over the heart, and/or a third sensor device 116c may be positioned beneath the left arm of the subject 102. In these and other embodiments, the different sensors included in, e.g., two or more of the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 at different locations may be beneficial for a more robust set of data to analyze the subject 102. For example, different locations of the sensors may identify different features based on their respective locations proximate different parts of the anatomy of the subject 102.
In some embodiments, the sensor(s) included in one or more of the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 may include a discrete or integrated sensor attached to or otherwise borne on the body of the subject 102. Various non-limiting examples of sensors that may be attached to the body of the subject 102 or otherwise implemented according to the embodiments described herein and that may be implemented as the sensor(s) included in the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 include microphones, PPG sensors, accelerometers, gyro sensors, heart rate sensors (e.g., pulse oximeters), ECG sensors, EDA sensors, or other suitable sensors. Each sensor may be configured to generate data signals, e.g., of sounds, vibrations, acceleration, angular velocity, blood flow, electrical activity of the heart, EDA, temperature, light level, UV light level, or of other parameters of or near the subject 102.
In an example implementation, at least one ear-mountable device 103 is provided with at least one sensor in the form of a microphone. Alternatively or additionally, the ear-mountable device 103 may include an output device, such as a speaker, which may be used both for a normal output function of a hearing aid (e.g., to amplify sounds for a user) or headphone (e.g., to output audio from a music player or other device) and to output messages to a user for active validation, compliance, and/or intervention.
Each of the ear-mountable devices 103, the wearable electronic device 104, and/or the sensor devices 116 may be embodied as a portable electronic device and may be borne by the subject 102 throughout the day and/or at other times. As used herein, “borne by” means carried by and/or attached to. One or more of the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 may be configured to, among other things, analyze signals collected by one or more sensors within the environment 100 to validate behaviors and/or to implement compliance and/or interventions. Each of the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 may analyze and process sensor signals individually, or one or more of the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 may collect sensor signals from some or all of the other devices to analyze and/or process multiple sensor signals.
The ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 may be used by the subject 102 to perform journaling, including providing subjective annotations to confirm or deny the occurrence of one or more behaviors, biometrics, and/or environmental conditions. Additional details regarding example implementations of journaling using a wearable electronic device or other device are disclosed in U.S. Pat. No. 10,362,002 issued on Jul. 23, 2019, which is incorporated herein by reference. The subject 102 may provide annotations any time desired by the subject 102, such as after exhibiting a behavior or biometric or after occurrence of an environmental condition and without being prompted by any of the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, or the sensor devices 116. Alternatively or additionally, the subject 102 may provide annotations regarding a behavior, biometric, or environmental condition responsive to prompts from any of the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116. For instance, in response to detecting a behavior based on data signals generated by one or more sensors, one of the ear-mountable devices 103 or the wearable electronic device 104 may provide an output to the subject 102 to query whether the detected behavior actually occurred. The subject 102 may then provide an annotation or other input that confirms or denies occurrence of the detected behavior. The annotations may be provided to the cloud 108 and in particular to the remote server 110.
The remote server 110 may include a collection of computing resources available in the cloud 108. The remote server 110 may be configured to receive annotations and/or data derived from data signals collected by one or more sensors or other devices, such as the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 within the environment 100. Alternatively or additionally, the remote server 110 may be configured to receive from the sensors relatively small portions of the data signals, or even larger portions or all of the data signals. The remote server 110 may apply processing to the data signals, portions thereof, or data derived from the data signals and sent to the remote server 110, to extract features and/or determine behaviors, biometrics, and/or environmental conditions of the subject 102.
In some embodiments, one or more of the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 may transmit the data signals to the remote server 110 such that the remote server 110 may detect the behavior, biometric, and/or environmental condition. Additionally or alternatively, one or more of the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 may detect the behavior, biometric, and/or environmental condition from the data signals locally at one or more of the ear-mountable devices 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116.
In these and other embodiments, a determination of whether to perform the detection of the behavior, biometric, and/or environmental condition locally or remotely may be based on capabilities of the processor of the local device, power capabilities of the local device, remaining power of the local device, communication channels available to transmit data to the remote server 110 (e.g., Wi-Fi, Bluetooth, etc.), payload size (e.g., how much data is being communicated), cost for transmitting data (e.g., a cellular connection vs. a Wi-Fi connection), or other criteria. For example, if the ear-mountable device 103 includes a battery as a power source that is not rechargeable, the ear-mountable device 103 may perform only simple behavior, biometric, or environmental condition detection locally, and otherwise may send the data signals to the remote server 110 for processing. As another example, if the ear-mountable device 103 includes a rechargeable battery, the ear-mountable device 103 may perform the detection locally when the battery is full or close to full and may perform the detection remotely when the battery has less charge.
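By way of illustration only, the following Python sketch shows one way such a local-versus-remote decision could be expressed; the thresholds, link types, and device attributes are hypothetical and not part of any particular embodiment.

```python
# Illustrative sketch only: deciding whether to run detection locally or
# offload data to a remote server. Thresholds and attributes are hypothetical.

def choose_detection_site(battery_level, battery_rechargeable,
                          payload_bytes, link, cpu_capable):
    """Return 'local' or 'remote' for where detection should run.

    battery_level: remaining charge as a fraction (0.0-1.0)
    battery_rechargeable: True if the battery can be recharged
    payload_bytes: size of the data that would be transmitted
    link: 'wifi', 'bluetooth', or 'cellular'
    cpu_capable: True if the local processor can run the detection
    """
    if not cpu_capable:
        return "remote"
    # A non-rechargeable battery favors offloading all but simple detection.
    if not battery_rechargeable and battery_level < 0.5:
        return "remote"
    # A nearly full rechargeable battery favors local detection.
    if battery_rechargeable and battery_level > 0.8:
        return "local"
    # Large payloads over a costly cellular link favor local detection.
    if link == "cellular" and payload_bytes > 1_000_000:
        return "local"
    return "remote"
```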
As described in the present disclosure, the detection of the behavior, biometric, and/or environmental condition may include one or more steps, such as feature extraction, identification, and/or classification. In these and other embodiments, any of these steps or processes may be performed at any combination of devices such as at the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, the sensor device 116, and/or the remote server 110. For example, the ear-mountable device 103 may collect data and perform some processing on the data (e.g., collecting audio data and performing a power spectral density process on the data), provide the processed data to the smartphone 106, and the smartphone 106 may extract one or more features in the processed data, and may communicate the extracted features to the remote server 110 to classify the features into one or more behaviors.
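By way of illustration only, the following sketch shows a split of this kind, with a power spectral density computed on the ear-mountable device, summary spectral features extracted on a phone, and classification performed on a server; the feature set and the server-side model are hypothetical stand-ins.

```python
# Illustrative sketch of splitting detection across devices. Uses NumPy/SciPy;
# the classifier is a hypothetical stand-in for the server-side model.
import numpy as np
from scipy.signal import welch

def ear_device_stage(audio, fs=8000):
    """On-device: reduce raw audio to a power spectral density estimate."""
    freqs, psd = welch(audio, fs=fs, nperseg=256)
    return freqs, psd

def phone_stage(freqs, psd):
    """On-phone: extract a few summary spectral features from the PSD."""
    total = np.sum(psd) + 1e-12
    centroid = np.sum(freqs * psd) / total
    bandwidth = np.sqrt(np.sum(((freqs - centroid) ** 2) * psd) / total)
    return np.array([centroid, bandwidth, np.log(total)])

def server_stage(features, model):
    """On-server: classify the feature vector, e.g., as 'cough' or 'other'."""
    return model.predict(features.reshape(1, -1))[0]
```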
In some embodiments, an intermediate device may act as a hub to collect data from the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, and/or the sensor device 116. For example, the hub may collect data over a local communication scheme (Wi-Fi, Bluetooth, near-field communications (NFC), etc.) and may transmit the data to the remote server 110. In some embodiments, the hub may act to collect the data and periodically provide the data to the remote server 110, such as once per week. An example hub and associated methods and devices are disclosed in U.S. application Ser. No. 16/395,052 filed Apr. 25, 2019, which is incorporated herein by reference.
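By way of illustration only, a hub of this kind might buffer readings received over a local link and periodically upload them in a batch; the endpoint URL, payload format, and upload period below are hypothetical.

```python
# Illustrative hub sketch: buffer readings collected over a local link and
# periodically upload them to a remote server. Endpoint and format are hypothetical.
import json
import time
import urllib.request

class Hub:
    def __init__(self, upload_url, period_s=7 * 24 * 3600):
        self.upload_url = upload_url          # hypothetical server endpoint
        self.period_s = period_s              # e.g., once per week
        self.buffer = []
        self.last_upload = time.time()

    def on_reading(self, device_id, reading):
        """Called whenever a device pushes a reading over Wi-Fi, Bluetooth, NFC, etc."""
        self.buffer.append({"device": device_id, "t": time.time(), "value": reading})
        if time.time() - self.last_upload >= self.period_s:
            self.flush()

    def flush(self):
        """Send the batched data to the remote server and reset the buffer."""
        data = json.dumps(self.buffer).encode("utf-8")
        req = urllib.request.Request(self.upload_url, data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
        self.buffer.clear()
        self.last_upload = time.time()
```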
The remote server 110 may maintain one or more of the algorithms and/or state machines used in the detection of behaviors, biometrics, and/or environmental conditions by the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, and/or the sensor device 116. In some embodiments, annotations or other information collected by, e.g., the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, the sensor device 116, and/or the user devices 114, for multiple subjects may be fed back to the cloud 108 to update the algorithms and/or state machines. This may lead to significant network effects, e.g., as more information is collected from more subjects, the algorithms and/or state machines used to detect behaviors, biometrics, and/or environmental conditions may be updated to become increasingly accurate and/or efficient. The updated algorithms and/or state machines may be downloaded from the remote server 110 to the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, the sensor device 116, and/or the user devices 114 to, e.g., improve detection.
Each of the processors 202 may include an arithmetic logic unit, a microprocessor, a general-purpose controller, or some other processor or array of processors, to perform or control performance of operations as described herein. The processors 202 may be configured to process data signals and may include various computing architectures including a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although each of the ear-mountable device 103 and the remote server 110 is depicted with a single processor 202, each may include multiple processors or an array of processors.
Each of the communication interfaces 204 may be configured to transmit and receive data to and from other devices and/or servers through a network bus, such as an I2C serial computer bus, a universal asynchronous receiver/transmitter (UART) based bus, or any other suitable bus. In some implementations, each of the communication interfaces 204 may include a wireless transceiver for exchanging data with other devices or other communication channels using one or more wireless communication methods, including IEEE 802.11, IEEE 802.16, BLUETOOTH®, Wi-Fi, Zigbee, near field communication (NFC), or another suitable wireless communication method.
The storage 206 may include a non-transitory storage medium that stores instructions or data that may be executed or operated on by a corresponding one of the processors 202. The instructions or data may include programming code that may be executed by a corresponding one of the processors 202 to perform or control performance of the operations described herein. The storage 206 may include a non-volatile memory or similar permanent storage media including a flash memory device, an electrically erasable and programmable read only memory (EEPROM), a magnetic memory device, an optical memory device, or some other mass storage for storing information on a more permanent basis. In some embodiments, the storage 206 may also include volatile memory, such as a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, or other suitable volatile memory device.
The ear-mountable device 103 may additionally include one or more sensors 208, an output device 209, an intervention module 211, an input device 213, a compliance module 218, and/or a validation module 219, each of which is described in more detail below.
The sensor 208 may include one or more of a microphone, an accelerometer, a gyro sensor, a PPG sensor, an ECG sensor, an EDA sensor, a vibration sensor, a light sensor, a UV light sensor, a body temperature sensor, an environmental temperature sensor, or other suitable sensor. While only a single sensor 208 is illustrated, the ear-mountable device 103 may include more than one sensor 208.
In some embodiments, the ear-mountable device 103 may include multiple sensors 208, with a trigger from one sensor 208 causing another sensor 208 to receive power and start capturing data. For example, an accelerometer, gyro sensor, ECG sensor, or other relatively low-power sensor may trigger a microphone to begin receiving power to capture audio data.
The output device 209 may include a speaker or other device to output audio signals to a subject or user. For example, when the ear-mountable device 103 is implemented as a hearing aid, the output device 209 may include a speaker to output sound representative of sound in an environment of the user that has been amplified and/or processed to, e.g., improve speech intelligibility and/or reduce noise. Alternatively or additionally, when the ear-mountable device 103 is implemented as a headphone, the output device 209 may include a speaker to output sound from, e.g., a portable music player, a radio, a computer, or other signal source. In some embodiments, the output device 209 may also be used to output messages, such as compliance messages, queries to provide annotations, or other messages, to the subject.
The input device 213 may include a microphone, accelerometer, or other device to receive input from a subject or user. For example, the user, in response to a query received via the output device 209, may respond to the query by speaking a response aloud, tapping the ear-mountable device 103 with a predetermined number and/or pattern of taps, or providing other input suitable for a given implementation of the input device 213. Although the input device 213 is illustrated as being separate from the sensor 208, alternatively a given one of the sensors 208 may also function as the input device 213.
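By way of illustration only, the following sketch shows how taps on the device might be recognized from an accelerometer signal and mapped to an answer; the thresholds and the one-tap/two-tap mapping are hypothetical.

```python
# Illustrative sketch of interpreting taps as user input: taps are detected as
# short spikes in accelerometer magnitude, and the number of taps within a
# response window is mapped to an answer. Thresholds are hypothetical.
import numpy as np

def count_taps(accel_magnitude, fs, threshold=2.5, refractory_s=0.15):
    """Count spikes in an accelerometer magnitude signal (in g) sampled at fs Hz."""
    taps = 0
    last_tap = -np.inf
    for i, a in enumerate(accel_magnitude):
        t = i / fs
        if a > threshold and (t - last_tap) > refractory_s:
            taps += 1
            last_tap = t
    return taps

def interpret_response(accel_magnitude, fs):
    """Example mapping: one tap = 'yes', two taps = 'no', otherwise no answer."""
    taps = count_taps(accel_magnitude, fs)
    return {1: "yes", 2: "no"}.get(taps, "no response")
```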
One or more of the intervention module 211, the compliance module 218, and the validation module 219 may each include code such as computer-readable instructions that may be executable by a processor, such as the processor 202A of the ear-mountable device 103 and/or the processor 202B of the remote server 110, to perform or control performance of one or more methods or operations as described herein. For instance, the intervention module 211 may include code executable to perform or control performance of one or more of the operations of the intervention method 500 described below, and the compliance module 218 and the validation module 219 may include code executable to perform or control performance of one or more of the operations of the compliance method 400 and the validation method 300, respectively, described below.
The raw data 216 may include some or all of each data signal generated by each sensor 208. In an example embodiment, portions of each data signal may be stored temporarily in the storage 206A for processing (e.g., feature extraction as described in the '242 application) and may be discarded after processing, to be replaced by another newly collected portion of the data signal. Alternatively or additionally, one or more portions of one or more data signals may be retained in the storage 206A even after being processed. In some embodiments, certain sensors may continuously gather data, while others may intermittently capture data. For example, the data 216 may include continuous data from an accelerometer but only a few windows of data from a microphone.
In some embodiments, the size of the data 216 stored may be based on the capacity of the storage 206A. For example, if the storage 206A includes large amounts of storage, longer windows of time of the data 216 may be stored, while if the storage 206A includes limited amounts of storage, shorter windows of time of the data 216 may be stored. As another example, if the storage 206A includes large amounts of storage, multiple short windows of time of the data 216 may be stored, while if the storage 206A includes limited amounts of storage, a single window of time of the data 216 may be stored.
The detected parameters 220 may include behaviors, biometrics, and/or environmental conditions determined from the signals generated by the sensors 208. Each of the detected parameters 220 may include, e.g., a classification of the parameter, a time at which the parameter occurred, and/or other information.
In some embodiments, the sensors 208 may include a microphone (and/or the input device 213 may include a microphone) and at least one other sensor. The processor 202A may continually monitor the raw data 216 from the sensor other than the microphone (e.g., an accelerometer). The data 216 from the other sensor may be continuously gathered and discarded along a running window (e.g., storing a window of 10 seconds and discarding the oldest time sample as a new one is obtained). In these and other embodiments, when a feature for waking up the microphone is identified in the raw data 216 from the other sensor (e.g., a rapid acceleration potentially indicative of a sneeze), the microphone may be powered on and the raw data 216 may then also include a window of audio data from the microphone. The processor 202A may analyze both the raw data 216 from the other sensor and the raw data 216 from the microphone to extract one or more features 218.
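By way of illustration only, the following sketch shows a running window of low-power sensor data that wakes a microphone on a candidate event and extracts simple features from both signals; the thresholds, window lengths, and microphone interface are hypothetical.

```python
# Illustrative sketch of the running-window scheme described above: a low-power
# sensor (e.g., an accelerometer) is buffered continuously, and a candidate
# event powers the microphone to capture a short audio window. All thresholds
# and the microphone API are hypothetical.
from collections import deque
import numpy as np

class RunningMonitor:
    def __init__(self, fs_accel, window_s=10.0, spike_threshold=3.0):
        self.fs = fs_accel
        self.buffer = deque(maxlen=int(window_s * fs_accel))  # ~10 s rolling window
        self.spike_threshold = spike_threshold

    def on_accel_sample(self, sample, mic):
        """Add a sample; the deque automatically drops the oldest sample."""
        self.buffer.append(sample)
        if abs(sample) > self.spike_threshold:          # e.g., rapid acceleration
            audio = mic.record(seconds=3.0)             # hypothetical microphone API
            return self.extract_features(np.array(self.buffer), np.asarray(audio))
        return None

    @staticmethod
    def extract_features(accel_window, audio):
        """Combine simple features from both signals for later classification."""
        return {
            "accel_peak": float(np.max(np.abs(accel_window))),
            "accel_rms": float(np.sqrt(np.mean(accel_window ** 2))),
            "audio_energy": float(np.mean(np.square(audio))),
        }
```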
Referring to the remote server 110, it may additionally include a feature extractor 210B, a classifier 212B, and/or a machine learning (ML) module 222. The storage 206B of the remote server 110 may include one or more of subject data 224 and/or detection algorithms 226. The subject data 224 may include snippets of data, extracted features, detected parameters (e.g., behaviors, biometrics, environmental conditions), and/or annotations received from ear-mountable devices, wearable electronic devices, smartphones, and/or sensor devices used by subjects, such as the ear-mountable device 103, the wearable electronic device 104, the smartphone 106, and/or the sensor devices 116 described above.
The feature extractor 210B, the classifier 212B, and the ML module 222 may each include code such as computer-readable instructions that may be executable by a processor, such as the processor 202B of the remote server 110, to perform or control performance of one or more methods or operations as described herein. For instance, the feature extractor 210B and the classifier 212B may in some embodiments perform processing of snippets of data signals, extracted features, and/or other data received from the ear-mountable device 103. The ML module 222 may evaluate some or all of the subject data 224 to generate and/or update the detection algorithms 226. For instance, annotations together with extracted features, detected behaviors, detected biometrics, and/or detected environmental conditions or other subject data 224 may be used as training data by the ML module 222 to generate and/or update the detection algorithms 226. Updated detection algorithms 226 used in feature extraction, classification, or other aspects of behavior, biometric, and/or environmental condition detection may then update one or more of the feature extractors 210A, 210B and/or classifiers 212A, 212B or other modules in one or both of the remote server 110 and ear-mountable device 103.
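By way of illustration only, the following sketch shows how annotated subject data might be used to train and persist an updated detection model for later download; scikit-learn is used here only as a stand-in for the ML module 222, and the feature layout and file path are hypothetical.

```python
# Illustrative sketch of updating a detection model from annotated subject data
# on the server and persisting it for download by edge devices. The model type,
# feature layout, and storage path are hypothetical.
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def update_detection_model(features, annotations, model_path="detector.joblib"):
    """features: (n_samples, n_features) array of extracted features.
    annotations: array of confirmed labels, e.g., 'sneeze', 'cough', 'other'.
    """
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(np.asarray(features), np.asarray(annotations))
    joblib.dump(model, model_path)   # persisted for download by edge devices
    return model
```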
As illustrated, an example ear-mountable device may be implemented as a hearing aid 250 configured to be worn at an ear 252 of a user, the hearing aid 250 including an ear canal insertion portion 254, a main body 256, and an ear hook 258.
In general, the main body 256 may include a microphone to convert a voice signal into an electrical signal, a hearing aid processing circuit to amplify the output signal of the microphone and/or perform other such hearing aid processing, an earphone circuit to convert the output of the hearing aid processing circuit into a voice signal, a battery to power the hearing aid 250, and/or other circuits, components, or portions. The ear canal insertion portion 254 may include a speaker to convert the voice signal into sound. The ear hook 258 may provide a mechanical connection and/or an electrical connection between the main body 256 and the ear canal insertion portion 254. The microphone of the hearing aid 250 may include or correspond to the sensor 208 and/or the input device 213 described above.
Alternatively or additionally, the hearing aid 250 may include one or more other sensors, such as one or more of a temperature sensor, a PPG sensor, a sweat vapor sensor, a tympanic membrane sensor, an EEG sensor, a UV light sensor, a light sensor, and/or other sensors. The additional sensor(s) may be located in or on the main body 256, the ear hook 258, and/or the ear canal insertion portion 254, depending on the sensor signal that is desired to be acquired. For example, if it is desired to acquire core body temperature, heart rate via PPG, sweat vapor, and/or UV/light levels, the additional sensor may be located in or on the ear canal insertion portion 254 so that the additional sensor is positioned inside the user's ear canal during use. Alternatively or additionally, if it is desired to acquire environmental/ambient temperature/light levels/sound, the additional sensor may be located in or on the main body 256 and/or the ear hook 258 so that the additional sensor is positioned outside the user's ear 252 during use.
Optionally, the main body 256 may be attached behind the user's ear 252, e.g., directly to the skull or directly to the back of the ear 252, using an adhesive to ensure and/or improve conduction of audio waves and/or bone conduction to a sensor included in or on the main body 256.
Optionally, the hearing aid 250 and/or other ear-mountable devices described herein may be communicatively linked to other devices (e.g., the wearable electronic device 104, the smartphone 106, one or more of the sensor devices 116, or other devices). With such a communication link, the hearing aid 250 and/or other ear-mountable devices may receive updates or alerts from the other devices and may output audio updates or alerts to the user. For example, when one of the other devices has a low battery, poor signal quality, or needs to be synchronized to a base station or hub, the other device may send a corresponding update or alert to the hearing aid 250 and/or other ear-mountable device, which may then output an audio update or alert to the user so that the user can take appropriate action.
As illustrated, another example ear-mountable device may be implemented as headphones 262 including two headphone units 264 coupled to each other by a headband 266.
The headphones 262 may additionally include one or more input devices, such as the input device 213 described above.
Alternatively or additionally, the headphones 262 may include one or more other sensors, such as one or more of a temperature sensor, a PPG sensor, a sweat vapor sensor, a tympanic membrane sensor, an EEG sensor, a UV light sensor, a light sensor, a sound sensor, and/or other sensors. The additional sensor(s) may be located in or on either or both of the headphone units 264 or the headband 266, depending on the sensor signal that is desired to be acquired. For example, if it is desired to acquire EEG waves, the sensor may be located in or on the headband 266.
At block 302, a signal indicative of at least one of a behavior of a user, a biometric of a user, or an environmental condition of an environment of the user may be generated at an ear of the user. For example, such a signal may be generated by the ear-mountable device 103 described above.
Generating the signal at block 302 may include generating, at the ear of the user, at least one of an audio signal, a bone conduction signal, a vibrational sound signal, an accelerometer signal, a sweat vapor (or component thereof) signal, a light signal, a UV light signal, or a temperature signal. In this and other embodiments, the signal may specifically be indicative of at least one of: the user swallowing; the user grinding the user's teeth; the user chewing; the user coughing; the user vomiting; the user wheezing; the user sneezing; an intoxication state of the user; a dizziness level of the user; the user's heart rate; the user's EEG brain waves; the user's body temperature; the user's sweat vapor to sense volatile organic compounds to determine if the user has consumed a particular substance such as alcohol, ethanol, a medication, or other substance emitted through sweat; an ambient temperature in the environment of the user; an ambient light level in the environment of the user; an ambient UV light level in the environment of the user; ambient music, which may then be analyzed to determine artist, song, genre, or other information to correlate with mood/depression of the user.
Additional details regarding the detection of markers, e.g., of alcohol, medications, or other substances, in sweat vapor are disclosed in co-pending U.S. application Ser. No. 15/353,738 (hereinafter the '738 application) filed Nov. 17, 2016 which is incorporated herein by reference.
Block 302 may be followed by block 304. At block 304, at least one of the behavior of the user, the biometric of the user, or the environmental condition of the environment of the user may be determined based on the signal. In some embodiments, the at least one of the behavior of the user, the biometric of the user, or the environmental condition of the environment of the user may be determined exclusively based on the signal, e.g., based on a single signal. In other embodiments, the determination may be based on two or more signals, e.g., generated by two or more sensors.
The method 300 may be used for passive validation and/or active validation of the behavior, the biometric, and/or the environmental condition of the user.
In an example active validation implementation, the method 300 may further include making a preliminary determination of at least one of the behavior of the user, the biometric of the user, or the environmental condition of the environment of the user, e.g., based on the signal. The method 300 may also include outputting, through an audio output device positioned at least partially in, on, or proximate to the ear of the user, a query regarding the behavior of the user, the biometric of the user, or the environmental condition of the environment of the user. For example, outputting the query may include outputting a query regarding at least one of whether the user performed or exhibited a particular behavior, whether the user is or has been subject to a particular environmental condition, or whether the user is or has been experiencing a particular symptom associated with a particular biometric reading. Various example queries may ask the user whether the user chewed food, swallowed water and/or a medication, ground the user's teeth, vomited, sneezed, coughed, is intoxicated, is dizzy, is nauseous, is or has been subject to a particular environmental condition (e.g., inside a dark room) for at least a predetermined amount of time, and/or is or has been wheezing or has shortness of breath (e.g., which may occur if the user's heartbeat or breathing is racing without any indication that the user is exercising).
In some embodiments, the audio output device may include the output device 209 described above.
The response to the query may be received through an input device, such as the input device 213 described above.
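By way of illustration only, the following sketch shows one possible active-validation exchange in which a preliminary detection triggers a spoken query and the user's response confirms or denies it; the speaker and input-device interfaces are hypothetical stand-ins for the output device 209 and the input device 213.

```python
# Illustrative sketch of an active-validation exchange. The speaker and
# input_device objects and their methods are hypothetical stand-ins.
def active_validation(preliminary_behavior, speaker, input_device, timeout_s=15):
    speaker.say(f"Did you just {preliminary_behavior}?")   # e.g., "sneeze"
    response = input_device.wait_for_response(timeout_s)   # voice or tap input
    if response == "yes":
        return {"behavior": preliminary_behavior, "validated": True}
    if response == "no":
        return {"behavior": preliminary_behavior, "validated": False}
    return {"behavior": preliminary_behavior, "validated": None}  # no answer given
```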
In some embodiments, determining the at least one of the behavior of the user, the biometric of the user, or the environmental condition of the environment of the user may include determining that the behavior of the user is not compliant with a target behavior of the user. In this and other embodiments, the method may further include outputting, through the audio output device which is positioned at least partially in, on, or proximate to the ear of the user, a compliance message to evoke the target behavior in the user. For example, the user may have a prescribed medication and the ear-mountable device may monitor the user to determine whether the user takes the prescribed medication according to a prescribed schedule (e.g., one or more times daily). In response to determining that the user has not complied with the prescribed schedule, one or both of the ear-mountable devices 103 may output a message, e.g., through a corresponding output device 209, to take the prescribed medication. Various example behaviors that may be monitored for compliance may include medication adherence, physical exercise, and physical rehabilitation.
One skilled in the art will appreciate that, for this and other processes and methods disclosed herein, the functions performed in the processes and methods may be implemented in differing order. Furthermore, the outlined steps and operations are only provided as examples, and some of the steps and operations may be optional, combined into fewer steps and operations, or expanded into additional steps and operations without detracting from the essence of the disclosed embodiments.
At block 402, a compliance message to evoke a target behavior in a user may be output through an audio output device positioned at least partially in, on, or proximate to the user's ear. In general, the compliance message may ask or instruct the user to perform a particular behavior, such as taking or applying a medication, performing one or more exercises, performing one or more physical rehabilitation exercises, or following some other protocol. As a specific example, a compliance message may ask or instruct the user to take a first dose (or only dose) of a prescribed medication, e.g., at or by a specified time each day, or may ask or instruct the user to do one or more physical rehabilitation exercises, e.g., at or by a specified time each day. Block 402 may be followed by block 404.
At block 404, behavior of the user may be monitored through a sensor positioned in, on, or proximate to the ear of the user. Monitoring the behavior of the user may include generating one or more sensor signals indicative of the behavior of the user, e.g., as described elsewhere herein, including in connection with block 302 of the method 300. Block 404 may be followed by block 406.
At block 406, compliance of the user with the target behavior may be determined based on the monitoring. For example, determining compliance of the user with the target behavior based on the monitoring may include comparing one or more features of the signal indicative of the behavior of the user to one or more target features of a signal indicative of the target behavior and determining that the user's behavior includes the target behavior if the one or more features of the signal indicative of the behavior of the user match the one or more target features of the signal indicative of the target behavior.
Alternatively or additionally, determining compliance of the user with the target behavior based on the monitoring may include determining that the user does not comply with the target behavior within a predetermined period of time from the outputting of the compliance message, or within a predetermined period of time specified in the compliance message. For example, it may be determined that the user does not comply with the target behavior within 30 minutes or some other period of time after the compliance message is output to the user, or within 30 minutes of a time specified in the compliance message. In this and other embodiments, the method 400 may further include outputting a reminder compliance message through the audio output device positioned at least partially in, on, or proximate to the ear of the user. The reminder compliance message may remind the user to perform the particular behavior originally specified in the initial compliance message.
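By way of illustration only, the following sketch shows one way blocks 404 and 406 might be realized, comparing monitored features against target features and issuing a reminder if no match is observed within a deadline; the tolerances, feature names, and device interfaces are hypothetical.

```python
# Illustrative sketch of compliance monitoring: monitored features are compared
# against target features for the requested behavior (e.g., swallowing a pill),
# and a reminder is issued if no match occurs before the deadline. The sensor
# and speaker interfaces, tolerance, and deadline are hypothetical.
import time

def monitor_compliance(target_features, sensor, speaker,
                       deadline_s=30 * 60, tolerance=0.2):
    start = time.time()
    while time.time() - start < deadline_s:
        observed = sensor.latest_features()              # hypothetical sensor API
        if observed and all(
            k in observed and abs(observed[k] - v) <= tolerance * max(abs(v), 1e-9)
            for k, v in target_features.items()
        ):
            return True                                   # target behavior detected
        time.sleep(5)                                     # poll periodically
    speaker.say("Reminder: please take your medication now.")
    return False
```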
Alternatively or additionally, the method 400 may be combined with one or more steps or operations of one or more of the other methods described herein, such as the method 300 and/or the method 500.
At block 502, a state of a user may be determined. The state of the user may be determined from one or more sensor signals generated by one or more sensors included in, e.g., one or both of the ear-mountable devices and/or one or more of the other devices described herein. Block 502 may be followed by block 504.
At block 504, it may be determined whether the state of the user warrants an intervention or treatment. Some mental and/or physical states may not warrant any intervention or treatment (e.g., happy, excited, normal or baseline, tired), while other mental and/or physical states may warrant an intervention (e.g., depressed, fallen down, head impact). Guidelines for determining whether a state warrants an intervention or treatment may be based on guidelines for a general population and/or may be customized based on the specific user. For example, a young, healthy user in a fallen state, e.g., from a slip and fall on an icy walkway in winter, who relatively quickly stands back up and does not remain in the fallen state for very long may not warrant an intervention or treatment, whereas an older user with arthritis in a fallen state, e.g., due to a loss of balance while walking in the user's own home, who remains in the fallen state for more than a predetermined period of time may warrant an intervention or treatment. Block 504 may be followed by block 506.
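By way of illustration only, the following sketch shows one way the determination at block 504 might be customized per user, e.g., with a different fall-duration threshold for a user with a higher fall risk; the states, thresholds, and user-profile fields are hypothetical.

```python
# Illustrative sketch of block 504: deciding whether a detected state warrants
# an intervention, with per-user customization. States and thresholds are hypothetical.
def warrants_intervention(state, seconds_in_state, user_profile):
    benign = {"happy", "excited", "baseline", "tired"}
    if state in benign:
        return False
    if state in {"depressed", "head_impact"}:
        return True
    if state == "fallen":
        # e.g., a shorter allowance for a user at higher fall risk than for a
        # young, healthy user who typically stands back up quickly.
        threshold_s = 30 if user_profile.get("high_fall_risk") else 120
        return seconds_in_state > threshold_s
    return False
```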
At block 506, and in response to determining that the state of the user warrants an intervention or treatment, a specific intervention or treatment to administer to the user may be determined. The specific intervention or treatment to administer may depend on the specific state of the user. Block 506 may be followed by block 508.
At block 508, the specific intervention or treatment may be administered to the user. According to the method 500, the state of the user may be determined based on a signal generated by a sensor positioned in, on, or proximate to the user's ear, and/or the specific intervention or treatment may be administered at least in part by an output device positioned in, on, or proximate to the user's ear.
In some embodiments, administering the specific intervention or treatment to the user at block 508 may include at least one of: administering a somatosensory evoked potential (SSEP) evaluation of the user; contacting an emergency response service to notify the emergency response service that the user is in need of assistance; administering a treatment to the user to alter at least one of EEG brain waves, a heart rate, or a breathing rate or pattern of the user; administering neuro-stimulation to an ear canal or ear lobe of the user; or applying a magnetic field to at least a portion of the user's body.
A specific example implementation of the method 500 may include determining at block 502 that a user has fallen and/or the user's head has impacted or been impacted by an object based on a signal generated by a sensor positioned in, on, or proximate to the user's ear. A message may be output to the user, e.g., through the output device 209 positioned in, on, or proximate to the user's ear, to ask if the user is okay. If the user answers in the negative and/or does not answer at all, e.g., within a predetermined period of time, it may be determined at block 504 that the state of the user warrants an intervention or treatment. At block 506, it may be determined to contact an emergency response service to provide assistance to the user as the specific intervention or treatment to administer to the user. At block 508, the emergency response service may be contacted and informed that the user is in need of assistance. Alternatively or additionally, the ear-mountable device may generate, at the ear of the user, a signal indicative of a biometric of the user, such as the user's heart rate, temperature, respiration rate, blood pressure, or other vital sign(s). The user's biometric(s) may be provided to the emergency response service, e.g., in advance of the emergency response service reaching the user. Alternatively or additionally, if the state determined at block 502 includes an impact to the user's head, the emergency response service may be informed, e.g., in advance of reaching the user, that the user may have head trauma.
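By way of illustration only, the following sketch strings together the fall/head-impact example above, from querying the user to contacting an emergency response service and forwarding vitals; all device and service interfaces are hypothetical stand-ins.

```python
# Illustrative end-to-end sketch of the fall/head-impact example: ask the user
# if they are okay and, if they answer "no" or do not answer, contact an
# emergency response service and forward vitals. All interfaces are hypothetical.
def handle_fall(speaker, input_device, vitals_sensor, emergency_service,
                head_impact_detected, timeout_s=30):
    speaker.say("A fall was detected. Are you okay?")
    answer = input_device.wait_for_response(timeout_s)    # voice or tap response
    if answer == "yes":
        return "no intervention"
    vitals = vitals_sensor.read()                         # e.g., heart rate, temperature
    emergency_service.notify(
        message="User may need assistance after a fall",
        possible_head_trauma=head_impact_detected,
        vitals=vitals,
    )
    return "emergency response contacted"
```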
The methods 300, 400, 500 and/or one or more discrete steps or operations thereof may be combined in any combination.
Alternatively or additionally, embodiments described herein may include a hub or smartphone (such as the smartphone 106 described herein) in communication with one or more of the ear-mountable devices, e.g., to collect data from the ear-mountable devices and/or to relay the data to the remote server.
Alternatively or additionally, environmental/ambient sound and/or environmental/ambient music may be monitored and/or sensed by ear-mounted devices and/or other devices described herein in connection with the user's mental health. The sound and/or music may be broken down, e.g., by type, as done for, e.g., the Music Genome Project. Embodiments described herein may more generally form correlations and/or causal links between music, behavior, and environment to objectively monitor and diagnose depression and general anxiety disorder.
Embodiments of the ear-mountable device or devices described herein may include, implement, or provide one or more of the following features and/or advantages:
Unique aspects of a hearing aid or other ear-mountable device mounted to the ear area of a user:
1→It's situated on the head
2→It is generally uncovered
3→The ear canal has solid vibration and sound conduction through skull
4→Brain waves and EEG
One or more of the following may have unique benefits from being sensed in the user's ear:
Core Body Temperature
Ambient Light & UV exposure (can often be sensed outdoors even when wearing a hat; the wrist and chest are often covered by clothing, while the ear typically is not)
Ambient temperature→Also rarely covered up
Head Orientation
Impact on the Head (Falling and hitting head, any head impact)
Coughing, Sneezing, Vomiting . . . each of these actions produces unique head motions
Sensing one or more signals at the ear may accomplish validation, compliance, and/or intervention better than other locations of the body:
Passive Validation:
Behaviors: Chewing Food, Swallowing water or taking a pill, Grinding Teeth (Depression/anxiety), Intoxication from alcohol or substances (the head wobbles much more while drunk).
Dizziness, Vertigo . . .
Some embodiments may break down the sound and music the user listens to for correlating mental health independent of knowing what song/album/artist is actually playing. This can correlate with mood, depression, and other states.
Environment: Ambient temperature and light sensing at an ear-mountable device is much better than on wrist/chest which is often covered by clothing.
Biometrics: HR, Coughing/Vomiting/Wheezing, EEG brain waves to assess mood, stress, etc.
Active Validation:
May include having the user use their voice to journal inputs or acknowledge things (e.g., I took medicine, I feel better today, my knee hurts, I have phlegm in my cough).
May include having the ear-mountable device prompt and then user can use voice, tap a sticker sensor several times, or use a smartwatch/smartphone touch screen to respond.
Compliance:
May include hearing aids, headphones, or other ear-mountable devices for medication reminders, rehab reminders for exercise or instructions to follow a protocol. Embodiments herein may measure whether compliance occurs for a user, and then if it is determined that compliance has not occurred, some embodiments may remind the user again. For example, if swallowing or drinking water (e.g., to take a medication) is not detected, some embodiments may remind the user again to take the medication or ask the user for an explicit confirmation that the user took the medication.
Interventions & Treatment:
Some embodiments may involve SSEP, e.g., as described at: http://ormonitoring.com/what-is-ssep-somatosensory-evoked-potentials/. SSEP may evaluate nerve pathways responsible for feeling touch and pressure. When you touch something hot or step on something sharp, a signal is sent to your brain to react. SSEPs evaluate this signal as it travels to your brain and provide information about the various functions that are important to your sensory system. Understanding sensory function during surgery plays a critical role in detecting and avoiding unintended complications that could leave a patient with short or long term impairment.
SSEP testing involves the stimulation of specific nerves and the recording of their activity as they travel to the brain. Stimulating electrodes are placed over specific nerves, typically at the ankle and/or wrist, while recording electrodes are placed on the scalp over the sensory area of the brain. Function of the sensory pathway is evaluated by measuring the commute time between the nerve and the brain, as well as the strength of the sensory response. If the commute time is slower than expected or if the sensory response is weak, this may indicate abnormalities that are interfering with the pathway.
SSEPs are useful for a variety of reasons, from the evaluation of spinal cord integrity after injury to the assessment of vascular flow to the brain. Due to its ease of application and multi-functional use, SSEPs are often combined with other intraoperative neurophysiologic tests that focus on motor or movement function, such as Electromyography (EMG) or Transcranial Motor Evoked Potentials (TceMEP). SSEP testing is standard practice for intraoperative neuromonitoring during cervical, thoracic, vascular, and brain surgeries, among others.
The SSEP test is a non-invasive way to assess the somatosensory system. While there is always a small risk of infection any time a needle is involved, risks are almost nonexistent otherwise.
Accordingly, some embodiments described herein may send an electrical signal from an ear-mountable device into the ear or skull and measure the evoked response at the base of the spine or other location with, e.g., a sticker sensor.
Some embodiments may involve personal emergency response: Example embodiments may detect a fall, and potentially a head impact. The user may be asked if they are ok through the ear-mountable device, and an emergency response service may be called and dispatchers may be informed that there may be head trauma. Alternatively or additionally, vitals may be determined, e.g., from the ear-mountable device or other devices, and may be given to the dispatchers ahead of time before emergency response service personnel arrive.
Some embodiments may involve monitoring EEG, breathing, and/or heart rate in connection with music, activity, etc.
Some embodiments may send neurostimulation to the ear canal or ear lobes for mental priming.
Some embodiments may apply magnetic fields as treatments.
The present disclosure is not to be limited in terms of the particular embodiments described herein, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is to be understood that the present disclosure is not limited to particular methods, reagents, compounds, compositions, or biological systems, which can, of course, vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
The present disclosure may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the disclosure is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application claims the benefit of and priority to U.S. Provisional Application No. 62/732,922, filed Sep. 18, 2018, which is incorporated herein by reference. This application is related to co-pending U.S. application Ser. No. 16/118,242 (hereinafter the '242 application), filed Aug. 30, 2018. The '242 application is incorporated herein by reference.