Provided herein are medical sensors, including mechano-acoustic sensing electronics, coupled with an on-board microphone and feedback stimuli, including but not limited to a vibration motor, speaker, or LED indicator. Systems and methods are provided for mechano-acoustic electrophysiological sensing electronics that capture signals derived from the body using a 3-axis high frequency accelerometer. The devices are referred to herein as soft, flexible, and wearable, with advanced power conservation functions and wireless communication capabilities, including compatibility with Bluetooth® enabled systems. Within the system, there are signal processing, signal analysis, and machine learning functionalities that provide a platform for multi-modal sensing of a wide range of physiological and environmental signals that include, but are not limited to: speech, talk time, respiration rate, heart rate, lung volumes, swallowing function, physical activity, sleep quality, movement, and eating behaviors. The systems and methods are compatible with the use of additional sensors, including one or more of an onboard microphone, pulse oximeter, ECG, and EMG (amongst others).
Mechano-acoustic signals are known to contain essential information for clinical diagnosis and healthcare applications. Specifically, mechanical waves that propagate through the tissues and fluids of the body as a result of natural physiological activity reveal characteristic signatures of individual events, such as the closure of heart valves, the contraction of skeletal muscles, the vibration of the vocal folds, the cycle of respiration, the movement and sound of scratching, and movement in the gastrointestinal tract.
Frequencies of these signals can range from a fraction of 1 Hz (for example, respiratory rate) to 2000 Hz (for example, speech), often with amplitudes below the hearing threshold. Physiological auscultation typically occurs with analog or digital stethoscopes, in individual procedures conducted during clinical examinations.
An alternative approach relies on accelerometers in conventional rigid electronic packages, typically strapped physically to the body to provide the necessary mechanical coupling. Research demonstrations include recording of phonocardiography (PCG; sound from the heart), seismocardiography (SCG; vibrations of the chest induced by the beating of the heart), ballistocardiography (BCG; recoil motions associated with reactions to cardiovascular pressure), and sounds associated with respiration.
In the context of cardiovascular health, these measurements yield important insights that complement those inferred from electrocardiography (ECG). For example, structural defects in heart valves manifest as mechano-acoustic responses and do not appear directly in ECG traces.
Previously reported digital measurement methods are useful for laboratory and clinical studies but suffer the following disadvantages: (i) their form factors (rigid designs and large size, for example, 150 mm×70 mm×25 mm) limit the choices in mounting locations and prohibit their practical utility as wearables; (ii) their bulk construction involves physical masses that suppress, through inertial effects, subtle motions associated with important physiological events; (iii) their mass densities and moduli are dissimilar from those of the skin, thereby leading to acoustic impedance mismatches with the skin; (iv) they offer only a single mode of operation, without the ability, for example, to simultaneously capture ECG and PCG/SCG/BCG signals; (v) their communication with the user interface and data transmission occur via wires tethering the device to the user interface machine; and (vi) their power management requires a wired connection. The devices and methods provided herein address these limitations in the art.
Provided herein are methods and devices that provide a telemedicine-type platform, wherein a medical sensor on or implanted in a user provides useful information that can be acted on by a caregiver, such as a medical professional, friend or family member. Not only are the devices and methods useful in diagnostic or therapeutic applications, but they can also be used for training and rehabilitation. This is reflected in the devices and systems having two-way communication, so that information may be sent externally to a caregiver for action and commands may be received by the medical sensor, including commands that indicate to a user to take an appropriate action, such as swallowing, inhalation, exhalation and the like.
The devices and systems can provide real-time output, such as information useful for novel clinical metrics, novel clinical markers, and beneficial endpoints, thereby improving a user's overall health and well-being. The devices and systems are particularly amenable to utilizing off-site cloud storage and analytics that conveniently, reliably and readily can lead to clinician or caregiver action.
The special configuration of hardware, software, bidirectional information flow, and remote storage and analysis represents a fundamentally improved platform for medical well-being in a relatively unobtrusive and mobile manner, untethered to conventional clinical settings (confined to hospitals or controlled environments, for example). In particular, the software, which may be embedded in a chip or processor, either on-board or remote from the devices described herein, provides much improved sensor performance and clinically-actionable information. Machine learning algorithms are particularly useful for further improving device performance.
Specifically included herein are the appended claims and any other portions of the specification and drawings.
In an aspect, provided is a medical sensor comprising: a) an electronic device having a sensor comprising an accelerometer; and b) a bidirectional wireless communication system electronically connected to the electronic device for sending an output signal from the sensor to an external device and receiving commands from an external controller to the electronic device.
The medical sensor may be wearable, tissue mounted or implantable, or in mechanical communication or direct mechanical communication with tissue of a subject. The medical sensor may comprise a wireless power system for powering the electronic device. The medical sensor may comprise a processor to provide a real-time metric. The processor may be on-board with the electronic device or may be positioned in an external device that is located at a distance from the medical sensor and in wireless communication with the wireless communication system. The processor may be part of a portable smart device.
The medical sensor may continuously monitor and generate a real-time metric, for example, a social metric or a clinical metric. For example, the clinical metric may be selected from the group consisting of a swallowing parameter, a respiration parameter, an aspiration parameter, a coughing parameter, a sneezing parameter, a temperature, a heart rate, a sleep parameter, pulse oximetry, a snoring parameter, body movement, a scratching parameter, a bowel movement parameter, a neonate subject diagnostic parameter, a cerebral palsy diagnostic parameter, and any combination thereof. For example, the social metric may be selected from the group consisting of: talking time, number of words, phonatory parameter, linguistic discourse parameter, conversation parameter, sleep quality, eating behavior, physical activity parameter, and any combination thereof.
The medical sensor may comprise a processor configured to analyze the output signal. The processor may utilize machine learning to customize the analysis to each individual user of the medical sensor. The machine learning may comprise one or more supervised learning algorithms and/or unsupervised learning algorithms customizable to the user. The machine learning may improve a sensor performance parameter used for diagnostic sensing or a therapeutic application and/or a personalized user performance parameter.
The processors described herein may be configured to filter and analyze a measured output from the electronic device to improve a sensor performance parameter. The medical sensor may comprise a wireless power system for wirelessly powering the electronic device. The accelerometer may be a 3-axis high frequency accelerometer.
The electronic devices described herein may comprise a stretchable electrical interconnect, a microprocessor, an accelerometer, a stimulator, a resistor and a capacitor in electronic communication to provide sensing of vibration or motion by the accelerometer and a stimulus to a user with the stimulator. The sensor may sense multiple or single physiological signals from a subject; wherein a threshold is used to provide a trigger for a corrective, stimulatory, biofeedback, or reinforcing signal back to the subject.
The electronic devices described herein may comprise a network comprising a plurality of sensors, for example, one sensor may be for sensing said physiological signals from said subject and one sensor may be for providing a feedback signal to said subject.
The threshold may be personalized for said subject. The stimulator may comprise one or more of a vibratory motor, an electrode, a light emitter, a thermal actuator or an audio notification.
The medical sensors described herein may further comprise a flexible encapsulating layer that surrounds the flexible substrate and electronic device. The encapsulating layer may comprise a bottom encapsulating layer, a top encapsulating layer, and a strain isolation layer, wherein the strain isolation layer is supported by the bottom encapsulating layer, and the flexible substrate is supported by the strain isolation layer. There may be an air pocket between the electronic device and the top encapsulating layer. The medical sensor may be configured such that an air pocket does not exist between the electronic device and a bottom layer of the device proximate to or in contact with a tissue surface of a subject.
The medical sensor may have a device mass less than 1 g, less than 500 mg, less than 400 mg, or optionally less than 200 mg and a device thickness less than 10 cm, less than 6 mm, less than 5 mm, or optionally, less than 3 mm.
The medical sensors described herein may be configured for a therapeutic swallow application; a social interaction meter; a stroke rehabilitation device; or a respiratory therapeutic device. The medical sensors may be configured to be worn by a user and for use in a therapeutic swallow application, wherein the output signal is for one or more swallow parameters selected from the group consisting of swallow frequency, swallow count, and swallow energy. The medical sensor may further comprise a stimulator that provides a haptic signal to a user to engage in a safe swallow. The safe swallow may be determined by sensing onset of inspiration and expiration of a user respiratory cycle. One or more machine learning algorithms may be used in a feedback loop for optimization of the haptic signal timing.
The medical sensors described herein may be configured to be worn by a user and for use as a social interaction meter, wherein the output signal is for one or more social parameters selected from the group consisting of: talking time, number of words (fluency rate), phonatory parameter, linguistic discourse parameter or conversation parameter. The medical sensor may be configured for mounting to a suprasternal notch of the user. The medical sensor may be for use with one or more additional user well-being parameters selected from the group consisting of sleep quality, eating behavior and physical activity, wherein the medical sensor social parameters and well-being parameters are combined to provide a social interaction metric.
The medical sensor may comprise a stimulator that provides a haptic signal to a user to engage in a social interaction event. The medical sensor may be configured to be worn by a user and for use in a stroke rehabilitation device, wherein the output signal is for a social parameter and/or a swallow parameter. The medical sensor may be for use with one or more additional stroke rehabilitation parameters selected from the group consisting of: gait, falls and physical activity. The medical sensor may comprise a stimulator that provides a haptic signal to a user to engage in a safe swallowing event.
The medical device may be configured to be worn by a user and for use in a respiratory therapeutic device, wherein the output signal is for respiratory inspiration and/or expiration effort, duration, or airflow through the throat. The medical device may comprise a stimulator that provides a haptic signal to a user to engage in respiratory training.
The medical devices described herein may comprise an external sensor operably connected to the electronic device. The external sensor may comprise: a microphone and/or a mouthpiece.
The medical sensors described herein may be capable of reproducing an avatar or video representation of body position and movement of a subject across time.
In an aspect, provided is a method of measuring a real-time personal metric using any of the medical sensors described herein.
In an aspect, provided is a method of measuring a real-time personal metric, the method comprising the steps of: a) mounting any of the devices described herein to a skin surface of a user or implanting the device subdermally; b) detecting a signal generated by the user with the sensor; c) analyzing the detected signal to thereby classify the signal; and d) providing a real-time metric to the user or a third party based on the classified signal.
The described methods may comprise a step of filtering the detected signal before the analyzing step. The providing step may comprise one or more of: providing a haptic stimulus to the user; storing or displaying a clinical metric; and/or storing or displaying a social metric. The providing step may further comprise storing the real time metric on a remote server for subsequent analysis to generate a clinician or caregiver action. The action may comprise sending a command to the medical sensor.
The real-time metric may be a mental, physical or social metric related to health. The analyzing step may comprise use of a machine learning algorithm. The machine learning algorithm may comprise an independent supervised learning algorithm, wherein each algorithm is independently trained to provide a personalized real-time metric specific for an individual user.
The personalized real-time metric may be for a therapeutic or diagnostic application. The therapeutic or diagnostic application may be selected from the group consisting of: safe swallowing; respiratory therapy; cerebral palsy diagnosis or therapy; and a neonate diagnosis or therapy.
The real-time personal metric may be for a medical application selected from the group consisting of: sleep medicine; dermatology; pulmonary medicine; social interaction evaluation; speech therapy; dysphagia; stroke rehabilitation; nutrition; obesity treatment; fetal monitoring; neonate monitoring; cerebral palsy diagnosis; maternal monitoring; bowel function; diagnosis or treatment of a sleeping disorder; sleep therapy; injury; injury prevention, including falls or overextension of joints or limbs; injury prevention during sleep; firearm/ballistic-related injuries; and cardiac output monitoring.
In an aspect, provided is a medical sensor comprising an electronic device having a sensor comprising an accelerometer; and a wireless communication system electronically connected to the electronic device.
The wireless communication system may be a bidirectional wireless communication system. The wireless communication system may be for sending an output signal from the sensor to an external device. The wireless communication system may be for receiving commands from an external controller to the electronic device.
The medical sensors described herein may be wearable or implantable. The medical sensors may comprise a wireless power system for powering the electronic device. The medical sensors may comprise a processor to provide a real-time metric. The processor may be on-board with the electronic device or may be positioned in an external device that is located at a distance from the medical sensor and in wireless communication with the wireless communication system. The processor may be part of a portable smart device.
The medical sensors described herein may continuously monitor and generate a real-time metric. The real-time metric may be a social metric or a clinical metric. The clinical metric may be selected from the group consisting of a swallowing parameter, a respiration parameter, an aspiration parameter, a coughing parameter, a sneezing parameter, a temperature, a heart rate, a sleep parameter, pulse oximetry, a snoring parameter, body movement, scratching parameter, bowel movement parameter, and any combination thereof.
The social metric may be selected from the group consisting of: talking time, number of words, phonatory parameter, linguistic discourse parameter, conversation parameter, sleep quality, eating behavior, physical activity parameter, and any combination thereof.
The medical sensors described herein may comprise a processor configured to analyze the output signal. The processor may utilize machine learning to customize the analysis to each individual user of the medical sensor. The machine learning may comprise one or more supervised learning algorithms and/or unsupervised learning algorithms customizable to the user. The machine learning may improve a sensor performance parameter used for diagnostic sensing or a therapeutic application and/or a personalized user performance parameter.
The described sensors may be provided on or proximate to a suprasternal notch of a subject. The described sensors may be provided on or proximate to a mastoid process of a subject. The described sensors may be provided on or proximate to a neck of a subject. The described sensors may be provided on or proximate to a lateral neck of a subject. The described sensors may be provided on or proximate to the area under the chin of a subject. The described sensors may be provided on or proximate to the jaw line of a subject. The described sensors may be provided on or proximate to the clavicle of a subject. The described sensors may be provided on or proximate to a bony prominence of a subject. The described sensors may be provided behind the ear of a subject.
The described electronic devices may comprise one or more three-axis high frequency accelerometers. The described electronic devices may comprise a mechano-acoustic sensor. The described electronic devices may comprise one or more of an onboard microphone, ECG, pulse oximeter, vibratory motors, flow sensor, and pressure sensor.
The described electronic devices may be a flexible device and/or a stretchable device. The described electronic devices may have a multilayer floating device architecture. The described electronic devices may be at least partially supported by an elastomer substrate, superstrate or both. The described electronic devices may be at least partially supported by a silicone elastomer providing for strain isolation.
The described electronic devices may be at least partially encapsulated by a moisture resistant enclosure. The described electronic devices may further comprise an air pocket.
The wireless communication systems described herein may be a Bluetooth communication module. The wireless communication systems described herein may be powered by a wireless re-chargeable system. The wireless re-chargeable system may comprise one or more of a rechargeable battery, an inductive coil, a full wave rectifier, a regulator, a charging IC and PNP transistor.
The medical sensors described herein may comprise a gyroscope, for example, a 3-axis gyroscope. The medical sensors described herein may comprise a magnetometer, for example, for measuring the magnetic field variations associated with a patient's respiration. The medical sensors described herein may be mounted proximate to a suprasternal notch of a patient.
In an aspect, provided is a device comprising: an electronic device having a sensor comprising an accelerometer; a bidirectional wireless communication system electronically connected to the electronic device for sending an output signal from the sensor to an external device and receiving commands from an external controller to the electronic device; wherein the sensor senses multiple or single physiological signals from a subject that provide the basis of one or more corrective, stimulatory, biofeedback, or reinforcing signals provided to the subject.
The corrective, stimulatory, biofeedback, or reinforcing signals may be provided by one or more actuators. The one or more actuators may be thermal, optical, electrotactile, auditory, visual, haptic or chemical actuators operationally connected to said subject. The device may comprise a processor for providing feedback control of said one or more corrective, stimulatory, biofeedback, or reinforcing signals provided to the subject.
The multiple or single physiological signals may provide input for said feedback control. The feedback control may include a thresholding step for triggering said one or more corrective, stimulatory, biofeedback, or reinforcing signals provided to the subject. The thresholding step may be achieved by dynamic thresholding.
In an aspect, provided is a device comprising: an electronic device having a multi-modal sensor system comprising a plurality of sensors; wherein said sensors comprise an accelerometer and at least one sensor that is not an accelerometer; and a bidirectional wireless communication system electronically connected to the electronic device for sending an output signal from the sensor to an external device and receiving commands from an external controller to the electronic device.
The sensor system may comprise one or more sensors selected from the group consisting of an optical sensor, an electronic sensor, a thermal sensor, a magnetic sensor, a chemical sensor, an electrochemical sensor, a fluidic sensor or any combination of these. The sensor system may comprise one or more sensors selected from the group consisting of a pressure sensor, an electrophysiological sensor, a thermocouple, a heart rate sensor, a pulse oximetry sensor, an ultrasound sensor, or any combination of these.
In an aspect, provided is a device comprising: an electronic device having a sensor comprising an accelerometer; and one or more actuators operationally connected to said sensor; wherein the sensor senses multiple or single physiological signals from a subject that provide the basis of one or more corrective, stimulatory, biofeedback, or reinforcing signals provided to the subject by said one or more actuators.
The one or more corrective, stimulatory, biofeedback, or reinforcing signals may be one or more optical signals, electronic signals, thermal signals, magnetic signals, chemical signals, electrochemical signals, fluidic signals, visual signals, mechanical signals or any combination of these.
The one or more actuators may be selected from the group consisting of a thermal actuator, optical actuator, electrotactile actuator, auditory actuator, visual actuator, haptic actuator, mechanical actuator, or chemical actuator operationally connected to said subject. The one or more actuators may be one or more stimulators. The one or more actuators may be a heater, a light emitter, a vibrating element, a piezoelectric element, a sound generating element, a haptic element or any combination of these.
A processor may be operationally connected to said electronic device and said one or more actuators; wherein said processor provides for feedback control of said one or more corrective, stimulatory, biofeedback, or reinforcing signals provided to the subject. The multiple or single physiological signals may provide input for said feedback control.
The feedback control may include a thresholding step for triggering said one or more corrective, stimulatory, biofeedback, or reinforcing signals provided to the subject. The thresholding step may be achieved by dynamic thresholding.
The described devices may comprise a bidirectional wireless communication system electronically connected to the electronic device for sending an output signal from the sensor to an external device and receiving commands from an external controller to the electronic device. The corrective, stimulatory, biofeedback, or reinforcing signals may be provided to the subject for training or therapy. The training or therapy may be for respiratory or swallowing training.
The described devices may continuously monitor and generate a real-time metric. The real-time metric may be a social or clinical metric. The clinical metric may be selected from the group consisting of a swallowing parameter, a respiration parameter, an aspiration parameter, a coughing parameter, a sneezing parameter, a temperature, a heart rate, a sleep parameter, pulse oximetry, a snoring parameter, body movement, a scratching parameter, a bowel movement parameter, a neonate subject diagnostic parameter, a cerebral palsy diagnostic parameter, and any combination thereof. The social metric may be selected from the group consisting of: talking time, number of words, phonatory parameter, linguistic discourse parameter, conversation parameter, sleep quality, eating behavior, physical activity parameter, and any combination thereof.
The described devices may comprise a gyroscope, for example, a 3-axis gyroscope. The described devices may comprise a magnetometer.
In an aspect, provided is a method of diagnosis using any of the devices or sensors described herein.
In an aspect, provided is a method of training a subject using any of the devices or sensors described herein.
Additionally, the configuration of sensors provided may be used in conjunction to provide more precise measurements or metrics. For example, an accelerometer may be used in conjunction with a mechano-acoustic sensor for measuring a user's scratching. Scratching motion can be detected by the accelerometer, but other common motions (e.g. waving, typing) can be difficult to distinguish from scratching. The incorporation of an acoustic sensor proximate to the skin allows for secondary classification and improves data collection.
Differential measurement of separate areas of a patient's body may also be useful in improving data collection and accuracy. In some cases a single device may measure two different areas by being positioned on a biological boundary; in other cases, multiple devices may be used. For example, placement of a device on the suprasternal notch allows for accelerometric measurement of both the chest and neck. During respiration, there is a high degree of motion in the chest while the neck remains relatively static. This leads to more robust measurement and assessment using the devices described herein.
Without wishing to be bound by any particular theory, there may be discussion herein of beliefs or understandings of underlying principles relating to the devices and methods disclosed herein. It is recognized that regardless of the ultimate correctness of any mechanistic explanation or hypothesis, an embodiment of the invention can nonetheless be operative and useful.
In general, the terms and phrases used herein have their art-recognized meaning, which can be found by reference to standard texts, journal references and contexts known to those skilled in the art. The following definitions are provided to clarify their specific use in the context of the invention.
“Mechano-acoustic” refers to any sound, vibration or movement by the user that is detectable by an accelerometer. Accordingly, accelerometers are preferably high frequency, three-axis accelerometers, capable of detecting a wide range of mechano-acoustic signals. Examples include respiration, swallowing, organ (lung, heart) movement, motion (scratching, exercise, movement), talking, bowel activity, coughing, sneezing, and the like.
“Bidirectional wireless communication system” refers to onboard components of the sensor that provide the capability of receiving and sending signals. In this manner, an output may be provided to an external device, including a cloud-based device, personal portable device, or a caregiver's computer system. Similarly, a command may be sent to the sensor, such as by an external controller, which may or may not correspond to the external device. Machine learning algorithms may be employed to improve signal analysis and, in turn, the command signals sent to the medical sensor, including to a stimulator of the medical sensor for providing a haptic signal to a user of the medical device useful in a therapy. More generally, these systems may be incorporated into a processor, such as a microprocessor located on-board or physically remote from the electronic device of the medical sensor.
“Real-time metric” is used broadly herein to refer to any output that is useful in medical well-being. It may refer to a social metric useful in understanding a user's social well-being. It may refer to a clinical metric useful in understanding or training a biological function, such as breathing and/or swallowing.
“Customized machine learning” refers to analysis of the output from the sensor that is tailored to the individual user. Such a system recognizes the person-to-person variability between users, including by medical condition (stroke versus dementia), weight, baseline fluency, resting respiratory rate, baseline heart rate, etc. By specifically tailoring the analysis to individual users, great improvement is achieved in the sensor output and in what is done downstream by a caregiver. This is referred to herein generally as an improvement in a “sensor performance parameter”. Exemplary parameters include accuracy, repeatability, fidelity, and classification accuracy.
“Proximate to” refers to a position that is nearby another element and/or location of a subject such as a human subject. In an embodiment, for example, proximate is within 10 cm, optionally for some applications within 5 cm, optionally for some applications within 1 cm, of another element and/or location on a subject.
In some embodiments, the sensor systems of the invention are wearable, tissue mounted or implantable, or in mechanical communication or direct mechanical communication with tissue of a subject. As used herein, mechanical communication refers to the ability of the present sensors to interface directly or indirectly with the skin or other tissue in a conformable, flexible, and direct manner (e.g., there is no air gap), which in some embodiments allows for deeper insights and better sensing with less motion artifact compared to accelerometers strapped to the body (wrists or chest).
Various embodiments of the present technology generally relate to sensing and a physical feedback interface, including “mechano-acoustic” sensing. More specifically, some embodiments of the present technology relate to systems and methods for mechano-acoustic sensing electronics configured for use in respiratory diagnostics, digestive diagnostics, social interaction diagnostics, skin irritation diagnostics, cardiovascular diagnostics and human-machine interfaces (HMIs).
Physiological mechano-acoustic signals, often with frequencies and intensities that are beyond those associated with the audible range, can provide information of great clinical utility. Stethoscopes and digital accelerometers in conventional packages can capture some relevant data, but neither is suitable for use in a continuous, wearable mode in typical non-stationary environments, and both have shortcomings associated with mechanical transduction of signals through the skin.
Various embodiments of the present technology include a soft, conformal, stretchable class of device configured specifically for mechano-acoustic recording from the skin, capable of being used on nearly any part of the body, in forms that maximize detectable signals and allow for multimodal operation, such as electrophysiological recording, and neurocognitive interaction.
Experimental and computational studies highlight the key roles of low effective modulus and low areal mass density for effective operation in this type of measurement mode on the skin. Demonstrations involving seismocardiography and heart murmur detection in a series of cardiac patients illustrate utility in advanced clinical diagnostics. Monitoring of pump thrombosis in ventricular assist devices provides an example of characterization of mechanical implants. Tracking of normal swallowing trends relative to the breathing cycle presents new understanding of natural physical behaviors. Measuring the movement, and listening to the sound, of the respiratory, circulatory, and digestive systems, as well as everyday movements such as scratching, simultaneously with a single device provides an entirely new dimension of pathological diagnostics. Speech recognition and human-machine interfaces represent additional demonstrated applications. These and other possibilities suggest broad-ranging uses for soft, skin-integrated digital technology that can capture human body acoustics. A physical feedback system integrated with the sensor adds therapeutic functionality to the device.
In the following description, for the purpose of explanation, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present technology. It will be apparent, however, to one skilled in the art that embodiments of the present technology may be practiced without some of these specific details. While, for convenience, embodiments of the present technology are described with reference to cardiovascular diagnostics, respiration and swallowing correlation, and scratching intensity detection, the present technology provides many other uses in a wide variety of potential technical fields.
The techniques introduced here can be embodied as special purpose hardware (e.g., circuitry), as programmable circuitry appropriately programmed with software and/or firmware, or as a combination of special-purpose and programmable circuitry. Hence, embodiments may include a machine-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, ROMs, random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing electronic instructions.
The phrases “in some embodiments,” “according to some embodiments,” “in the embodiments shown,” “in other embodiments,” and the like generally mean that the particular feature, structure, or characteristic following the phrase is included in at least one implementation of the present technology, and may be included in more than one implementation. In addition, such phrases do not necessarily refer to the same embodiments or to different embodiments.
In this example embodiment, an epidermal mechano-acoustic-electrophysiological measurement device comprises: a lower elastomeric shell 20, silicone strain isolation layer 30, stretchable interconnects 40, electronic devices 50 such as microprocessor, accelerometers, vibration motor, resistors, capacitors, and the like, and an upper elastomeric shell 60.
The present technology provides a different type of mechano-acoustic-electrophysiological sensing platform that exploits the most advanced concepts in flexible and stretchable electronics to allow soft, conformal integration with the skin without any requirement of a wired connection to the device. The technology allows precision recordings of vital physiological signals in ways that bypass many of the limitations of conventional technologies (e.g. heavy mass and bulky packages) with freedom in the application environment. The mechano-acoustic modality includes miniaturized, low-power accelerometers with high sensitivity (16384 LSB/g) and large frequency bandwidth (1600 Hz), with the possibility of augmenting their functional limits. Soft, strain-isolating packaging assemblies, together with electronics for electrophysiological recording and an active feedback system, represent other example features of these stretchable systems. Example embodiments of the present technology have a mass of 300 mg (or less than 600 mg, or between 100 mg and 500 mg), a thickness of 4 mm (or between about 3 mm and 5 mm), and effective moduli of 100 kPa (in both the x and y directions) (or between about 50 kPa and 200 kPa), which correspond to values that are orders of magnitude lower than those previously reported. In this manner, any of the medical devices provided herein may be described as conformable, including conformable to the skin of a user. Such physical device parameters ensure the device is not unduly uncomfortable and can be worn for long periods of time.
Example embodiments of the present technology provide qualitative improvements in measurement capabilities and wearability, in formats that can interface with nearly any region of the body, including curvilinear parts of the neck, to capture signals associated with respiration, swallowing, and vocal utterances, in a completely wireless form factor that can transfer data, communicate, and be powered wirelessly. The following description and figures illustrate properties of this technology and demonstrate its utility in wide-ranging examples, from human studies on patients to personal health monitoring/training devices with customizable applications.
Specific data show simultaneous recording of gait, respiration, heart activity, breathing cycle, and swallowing. Also, vibrational acoustics of ventricular assist devices (VADs) (that is, devices used to augment failing myocardial function, though often complicated by intradevice thrombus formation) can be captured and used to detect pump thrombosis or device malfunction.
In addition, applications exist in speech recognition and classification for human-machine interfaces, in modes that capture vibrations of the larynx without interference from noise in the ambient environment. Baseline studies on the biocompatibility of the skin interface and on the mechanical properties and fundamental aspects of the interface coupling provide additional insights into the operation of the present technology.
Also, the device's functionality in interacting with patients through stimuli integrated in the sensor allows it to serve as a therapeutic device. With the device in a wireless form factor and in personal as well as clinical use, large volumes of data are collected. With machine learning, the devices not only deliver stimuli as outputs at scheduled moments, but also use them as inputs for the study of the mechano-acoustic signals associated with the physiological responses.
Referring to
The fabrication process involves five parts: (i) production of the flexible PCB (fPCB) device platform; (ii) chip-bonding onto the fPCB device platform; (iii) casting the top and bottom elastomeric shells from molds; (iv) layering the Silbione gel; and (v) bonding the top and bottom elastomeric shells.
The following describes the fabrication process in more detail: (i) A photolithography and metal etching process, or a laser cutting process, defines a pattern of interconnects in the copper. A spin-coating and curing process yields a uniform layer of PI on the resulting pattern. Photolithography and reactive ion etching (RIE, Nordson MARCH) define the top, middle, and bottom layers of PI in geometries matching those of the interconnects. (ii) A chip-bonding process assembles the electronic components necessary for the device to operate. (iii) Pairs of recessed and protruded molds for each of the top and bottom elastomeric shells define the shape of the outer structure of the device. (iv) A recessed region in the bottom shell contains the layer of Silbione gel, which serves both bonding and strain-isolation purposes for the device platform. (v) Bonding the curved, thin top elastomeric membrane shells to the flat bottom elastomeric shells packages the electronic components along with the air pocket.
The sensing circuit comprises a mechano-acoustic sensor (BMI160, Bosch), a coin cell motor and a Bluetooth-capable microcontroller (nRF52, Nordic Semiconductor). The sensor has a frequency bandwidth (1600 Hz) that spans the range of the targeted respiration, heart, scratching, and vocal fold movements and sounds. Additional sensors within the platform may include but are not limited to the following: onboard microphone, ECG, pulse oximeter, vibratory motors, flow sensor, pressure sensor.
The wireless charging circuit comprises an inductive coil, full wave rectifier (HSMS-2818, Broadcom), regulator (LP2985-N, Texas Instruments), charging IC (BQ2057, Texas Instruments), and PNP transistor (BF550, SIEMENS).
The device can also couple with an external component, such as an external mouthpiece, to measure lung volume. The mouthpiece contains a diaphragm whose deflection is associated with a specific pressure. The amount of deflection of the membrane, measured using the device, defines the volume of air transferred during the period of expiration.
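By way of illustration only, the following is a minimal sketch, in Python, of converting a measured diaphragm deflection to an expired air volume through a monotonic calibration table. The calibration pairs shown are hypothetical placeholders; in practice the table would come from bench calibration of the specific mouthpiece.

```python
import numpy as np

# Hypothetical calibration pairs: diaphragm deflection (mm) vs. expired volume (L).
CAL_DEFLECTION_MM = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
CAL_VOLUME_L = np.array([0.0, 0.4, 0.9, 1.5, 2.2])

def deflection_to_volume(deflection_mm):
    """Interpolate a measured deflection onto the calibration curve."""
    return float(np.interp(deflection_mm, CAL_DEFLECTION_MM, CAL_VOLUME_L))
```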
For healthy adults, the first sound (S1) and the second sound (S2) of the heart have acoustic frequencies of 10 to 180 Hz and 50 to 250 Hz, respectively. Vibration frequencies of vocal folds in humans range from 90 to 2000 Hz, with an average fundamental frequency during conversation of ˜116 Hz (male; mean age, 19.5), ˜217 Hz (female; mean age, 19.5), and ˜226 Hz (child; age 9 to 11). To enable sensing of cardiac operation and speech, the cutoff frequency of the low-pass filter is 500 Hz. The high-pass filter (cutoff frequency, 15 Hz) removes motion artifacts.
The low-frequency respiration cycle (0.1-0.5 Hz), cardiac cycle (0.5-3 Hz), and snoring signal (3-500 Hz) each occupy their own specific frequency band. By passing the specific frequency band for each of these biomarkers, the filter removes high-frequency noise and low-frequency motion artifacts.
Beyond these frequency ranges, the raw data yield many other mechanical and acoustic biosignals (e.g. scratching movement (1-10 Hz), scratching sound (15-150 Hz)).
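By way of illustration, the following is a minimal sketch, in Python, of separating a single raw acceleration trace into the biomarker-specific frequency bands listed above using zero-phase band-pass filters. The sampling rate (1600 Hz) and the filter order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1600  # Hz; assumed accelerometer output data rate

# Band edges (Hz) follow the ranges described above.
BANDS = {
    "respiration": (0.1, 0.5),
    "cardiac": (0.5, 3.0),
    "scratching_motion": (1.0, 10.0),
    "scratching_sound": (15.0, 150.0),
    "snoring": (3.0, 500.0),
}

def split_into_bands(raw_accel, fs=FS, order=4):
    """Return zero-phase band-pass filtered copies of one raw acceleration trace."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, raw_accel)  # forward-backward filtering, no phase lag
    return out
```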
Signal processing algorithms, including but not limited to Shannon energy conversion, moving average smoothing, Savitzky-Golay smoothing, and automatic threshold setting, enable faster analysis of the large volume of data.
The general signal processing involves seven parts: (i) collection of raw data; (ii) filtering of the data; (iii) normalization of the filtered data; (iv) energy conversion of the data; (v) smoothing of the data and production of the envelope; (vi) threshold setting; and (vii) masking of the data.
The following describes the signal processing in more detail: (i) Capturing the raw acceleration signal without an analog filter provides multiple signals superposed onto each other. (ii) Filtering of the data in various bands of the frequency spectrum dissects the raw signal into multiple layers of signals specific to different biomarkers. (iii) Normalization of each filtered data stream allows reasonable comparisons of each signal. (iv) Converting the normalized, filtered signal to energy simplifies the signal to all positive values; for signals above the DC frequency regime, the signal fluctuates across the zero baseline, so for information related to, but not limited to, the duration of talking, coughing, or swallowing, a true measurement is possible with an energy interpretation of the signal. (v) Smoothing the data produces an envelope that contains the normalized, filtered signal and represents the measured signal in a simpler way. (vi) Using a histogram, or an automatic threshold-setting algorithm, certain activities can be determined and classified. (vii) Using the selected threshold value, a mask defines the number of samples associated with the activity.
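By way of illustration, the following is a minimal sketch, in Python, of steps (iii) through (vii) above: normalization, Shannon energy conversion, Savitzky-Golay envelope smoothing, automatic threshold setting, and event masking. The specific threshold rule (mean plus two standard deviations) is an illustrative assumption rather than a required choice.

```python
import numpy as np
from scipy.signal import savgol_filter

def event_mask(filtered, fs, smooth_window_s=0.2, polyorder=3):
    """Return (envelope, threshold, boolean mask) for one band-passed signal."""
    x = filtered / (np.max(np.abs(filtered)) + 1e-12)       # (iii) normalization
    energy = -x**2 * np.log(x**2 + 1e-12)                    # (iv) Shannon energy
    win = max(polyorder + 2, int(smooth_window_s * fs) | 1)  # odd smoothing window length
    envelope = savgol_filter(energy, win, polyorder)         # (v) smoothed envelope
    threshold = envelope.mean() + 2.0 * envelope.std()       # (vi) automatic threshold
    mask = envelope > threshold                              # (vii) samples belonging to the event
    return envelope, threshold, mask
```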
The wavelet transform method extracts the signals related to certain activities, such as talking, laughing, coughing, or swallowing. Using the scale and time information from the transformation, it classifies specific characteristics of swallowing for specific types of food content, as well as the type of communication and interaction.
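By way of illustration, the following is a minimal sketch, in Python, of a continuous wavelet transform applied to a filtered segment, yielding the scale (frequency) and time information referenced above for downstream classification. The PyWavelets library and the Morlet wavelet are assumptions; no particular wavelet family is required.

```python
import numpy as np
import pywt

def wavelet_features(segment, fs, n_scales=64):
    """Summarize the time-scale energy distribution of one activity segment."""
    scales = np.arange(1, n_scales + 1)
    coefs, freqs = pywt.cwt(segment, scales, "morl", sampling_period=1.0 / fs)
    power = np.abs(coefs) ** 2
    return {
        "mean_power_per_scale": power.mean(axis=1),                 # scale information
        "peak_time_s": float(np.argmax(power.sum(axis=0)) / fs),    # time information
        "frequencies_hz": freqs,
    }
```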
Supervised machine learning of labeled signals involves two parts: (i) labeling the signal with the activity by time-stamping the data at the time the event occurs; and (ii) multi-class classification methods including but not limited to the Random Forest method.
Such classification identifies specific incidents of breathing patterns (inspiration, expiration), swallowing of specific types of food (fluid, solid), and human-machine interface events based on vocal fold vibration recognition.
The following describes the human interface in more detail. After learning the trend of the respiration cycle and swallowing incidents of normal subjects, the coin cell motor activates to cue the appropriate swallowing time, based on the respiration cycle, for people who have difficulty swallowing. The device also measures the movement and frequency of the vocal folds and learns the letters and words associated with specific vibrations.
Subjective studies, including the social meter, utilize unsupervised learning. This includes dimension-reduction methods, such as Latent Dirichlet allocation, for obtaining predictors. Clustering methods, including but not limited to k-modes and DBSCAN, then categorize specific groups of people who share similar signal behavior.
Reinforcement learning correlates the clinical results of therapy delivered through the device's user interface. The implementation of reinforcement learning occurs toward the end of classification and a set of pilot studies.
The system may employ any of a range of bidirectional communication systems, including those that correspond to the Bluetooth® standard, to connect to any standard smartphone (
On-board memory provides maximum freedom in the wireless environment, even without the user interface machines that are linked to the device for data streaming and storage.
Beyond the use of traditional adhesives, we propose a novel skin-device interface that incorporates adhesives that last up to 2 weeks of continuous wear. Rather than requiring the user to adhere and remove the sensor completely from the skin, particularly the fragile skin of the neck, our device can detach and reattach with the use of magnets. Other coupling mechanisms can involve buttons, clasps, and hook-and-loop connections. The adhesive attached to the skin can be varied by type (e.g. acrylic, hydrogel, etc.) and optimized to the desired length of skin adherence (
Communications to a user interface machine that displays, stores, and analyzes the data are generally known. Here, in contrast, we present a sensor technology that has an onboard processor and data storage, and that also communicates to the user interface via a wireless protocol, such as BLUETOOTH®, or in some embodiments ultra-wide band or narrow band communication protocols, optionally capable of providing for secure transmission. This way, one can utilize the device in a naturalistic setting without the requirement for an external power source.
The device is powered by inductive coupling and can also communicate and/or transfer data via the near field communication (NFC) protocol. When the user is utilizing the device within a confined environment, such as a bed during sleep, or in a hospital setting, the power and data transmission can be done via an inductive coil that resonates at 13.56 MHz. This allows continuous measurement without the need for an onboard battery or external power source.
A wireless battery charging platform enables a completely encapsulated device that separates the electronics from the surroundings, preventing exposure to substances that would otherwise damage the sensor. The encapsulation layer is made out of a thin membrane of a polymer or elastomer, such as a silicone elastomer (Silbione RTV 4420). Such an encapsulation layer is even less permeable than the polydimethylsiloxane and Ecoflex described in the prior art.
Digital filtering: both finite impulse response (FIR) and infinite impulse response (IIR) digital filters are used as appropriate. With a specific time window automatically selected in the region that has a high signal-to-noise ratio, a specific frequency band is selected to reduce the effect of artifacts and noise and to maximize the signal of interest.
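By way of illustration, the following is a minimal sketch, in Python, of an FIR band-pass filter applied within a time window selected for high signal-to-noise ratio. Selecting the window by short-time RMS energy, and the specific tap count, band edges, and sampling rate, are illustrative assumptions.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def fir_bandpass_in_window(x, fs=1600, band=(15.0, 500.0), numtaps=255, win_s=1.0):
    """Pick the highest-RMS window of length win_s and band-pass it with a linear-phase FIR."""
    n = int(win_s * fs)
    rms = np.sqrt(np.convolve(x**2, np.ones(n) / n, mode="valid"))  # short-time RMS as an SNR proxy
    start = int(np.argmax(rms))
    segment = x[start:start + n]
    taps = firwin(numtaps, band, pass_zero=False, fs=fs)            # FIR band-pass design
    return filtfilt(taps, [1.0], segment), start / fs               # filtered window, window start (s)
```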
Algorithms for signal-specific analysis: one method involves the processing of the filtered signal in the time domain. When the signal of interest is filtered with the appropriate band of frequencies, the specific event of interest (e.g. talking vs. coughing vs. scratching) is better elucidated from the mechano-acoustic sensor's raw output. Using the energy information generated from the acceleration measured by the sensor, information such as the duration of a discrete event, or the number or frequency of events, is better calculated. Another processing technique our system uses is power frequency spectrum analysis, where the power distribution of each frequency component is assessed. This allows the derivation of additional information from the raw signal (e.g. pitch from audio).
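By way of illustration, the following is a minimal sketch, in Python, of power frequency spectrum analysis with Welch's method, including a simple pitch estimate taken as the dominant spectral peak within an assumed fundamental-frequency band (the 90-300 Hz limits are illustrative).

```python
import numpy as np
from scipy.signal import welch

def spectral_pitch(segment, fs, fmin=90.0, fmax=300.0):
    """Estimate pitch as the frequency of the largest power peak in [fmin, fmax]."""
    f, pxx = welch(segment, fs=fs, nperseg=min(len(segment), 2048))
    band = (f >= fmin) & (f <= fmax)
    return float(f[band][np.argmax(pxx[band])])
```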
Supervised Learning: supervised machine learning of labeled signals involves two parts: (i) labeling the signal with the activity by time-stamping the data at the time the event occurs; and (ii) multi-class classification methods including but not limited to the Random Forest method. Such classification identifies specific incidents of breathing patterns (inspiration, expiration), swallowing of specific types of food (fluid, solid), and human-machine interface events based on vocal fold vibration recognition.
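By way of illustration, the following is a minimal sketch, in Python, of this two-part supervised pipeline: time-stamped labels paired with signal windows, followed by multi-class Random Forest classification. The band-energy features shown are an illustrative assumption; any windowed feature set may be substituted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def band_energy_features(windows, fs):
    """windows: array of shape (n_windows, n_samples) of filtered acceleration."""
    spectra = np.abs(np.fft.rfft(windows, axis=1)) ** 2
    freqs = np.fft.rfftfreq(windows.shape[1], d=1.0 / fs)
    edges = [(0.1, 1.0), (1.0, 10.0), (10.0, 100.0), (100.0, 500.0)]
    return np.column_stack(
        [spectra[:, (freqs >= lo) & (freqs < hi)].sum(axis=1) for lo, hi in edges]
    )

def train_activity_classifier(windows, labels, fs):
    """labels: time-stamp-derived classes, e.g. inspiration / expiration / swallow-fluid / swallow-solid."""
    X = band_energy_features(windows, fs)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)
```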
Using the scale and time information from the transformation, as an example, we can classify the specific characteristics of swallowing that relate to the food content eaten (e.g. thin liquid like water, thick liquid, soft foods, or regular foods) through supervised machine learning. This process does not suffer from the time-frequency ambiguity of the fast Fourier transform to the same degree.
The following describes the human interface in more detail. After learning the trend of the respiration cycle and swallowing incidents of normal subjects, the coin cell motor activates to cue the appropriate swallowing time, based on the respiration cycle, for people who have lost the ability to time swallowing with breathing. The sensor also measures the movement and frequency of the vocal folds and learns the letters and words associated with each specific signal.
Unsupervised Learning: this is accomplished without labeled signal inputs. In the case of a wearable social interaction meter, we employ unsupervised learning. It includes dimension-reduction methods such as Latent Dirichlet allocation for obtaining features relevant to quantifying social interaction. These include features of voice (tone, pitch), physical activity, sleep quality, and talk time. Clustering methods (e.g. k-modes and DBSCAN) then categorize specific groups of signals into categories.
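By way of illustration, the following is a minimal sketch, in Python, of this unsupervised pipeline: Latent Dirichlet allocation reduces daily counts of discretized behavior codes (e.g. binned talk time, activity, and sleep-quality values) to topic weights, and DBSCAN then clusters records with similar behavior. The discretization into a nonnegative count matrix, and the clustering parameters, are illustrative assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import DBSCAN

def cluster_behavior(daily_counts, n_topics=5):
    """daily_counts: (n_days, n_behavior_codes) nonnegative count matrix."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    topic_weights = lda.fit_transform(daily_counts)          # dimension reduction
    labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(topic_weights)
    return topic_weights, labels                              # label -1 marks outlier days
```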
Reinforcement Learning: this involves the sensor system learning the effect of haptic stimulation on swallowing and then measuring the actual swallowing event along with respiration. This enables the system to auto-adjust and calibrate to ensure that the measured swallowing event corresponds to the ideal timing within the respiratory cycle.
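By way of illustration, the following is a minimal sketch, in Python, of the auto-calibration idea: the phase at which each measured swallow lands within the expiratory period is used to nudge the cue timing toward a target phase. The simple proportional update rule is an illustrative stand-in for a learned policy.

```python
def update_cue_offset(cue_offset_s, swallow_phase, target_phase=0.75, gain=0.05):
    """swallow_phase and target_phase are fractions (0-1) of the expiratory period."""
    error = target_phase - swallow_phase
    return cue_offset_s + gain * error  # shift the cue later if swallows arrive too early, and vice versa
```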
The coupling of high-fidelity sensing, signal processing, and machine learning enable the creation of novel metrics that can serve as physical biomarkers of health and well-being. For instance, the ability to quantify spontaneous swallowing during the day has been shown previously to be an independent measure of swallowing dysfunction. Thus, the sensors provided herein can be used to calculate, in a patient's naturalistic environment, scores of swallowing function that are sensitive to small but clinically meaningful changes.
The timing of swallowing in relationship to the respiration cycle (inspiration, expiration) is important to avoid problems such as aspiration, which can lead to choking or pneumonia. The ability to time swallowing is largely under involuntary control, leading to a coordinated effort between respiration and swallowing. However, in conditions such as stroke, or head/neck cancer where radiation is delivered, this coordination is lost. Our sensor can then quantify swallowing events in the context of the respiratory cycle and provide a measure of “safe swallows.” Social interaction scores can also be created via signal processing and machine learning to create aggregate scores of social activity. These can be used as a threshold to engage caregivers or loved ones to increase daily social interaction when a baseline threshold is not met. These are illustrative examples of how novel metrics can be derived from this sensor system to enable patient behavior change, or clinician and caregiver intervention.
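By way of illustration, the following is a minimal sketch, in Python, of one possible “safe swallow” score: the fraction of detected swallow events whose timestamps fall within detected expiratory intervals. The score definition is illustrative; the disclosure does not fix a particular formula.

```python
def safe_swallow_score(swallow_times, expiration_intervals):
    """expiration_intervals: list of (start_s, end_s) tuples from the respiration band."""
    safe = sum(
        any(start <= t <= end for start, end in expiration_intervals)
        for t in swallow_times
    )
    return safe / max(len(swallow_times), 1)
```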
The present disclosure presents advanced functionalities for the sensor system that serve a therapeutic purpose. Prior work has focused solely on diagnostic uses.
Examples of two therapeutic uses are described herein. First, the timing of safe swallowing enables prevention of dangerous events such as aspiration, which can lead to choking, pneumonia, or even death. Our sensor can be converted into a therapeutic swallow primer that triggers user swallowing based on sensing the onset of inspiration and expiration of the respiratory cycle. This enables the sensor to trigger swallowing during a safer part of the respiratory cycle (typically mid to late end-expiration). Further, machine learning algorithms can be used to optimize the timing of the trigger in a feedback loop. For instance, the sensor can track both respiratory rate and swallowing behavior. A trigger is delivered that is timed to lead to a swallow event within an ideal respiratory timing window. In this embodiment, to trigger a swallow, we propose a vibratory motor that provides direct haptic feedback. Other trigger mechanisms may include a visual notification (e.g. a light emitting diode), an electrical impulse (e.g., electrodes), or a temperature notification (e.g., thermistors). In some embodiments, for example, the system is configured to provide a sensor that detects one or more parameters which are used as the basis of input for a feedback loop involving a signaling device component that provides one or more signals to a subject (e.g., patient), such as a vibrational signal (e.g. electromechanical motor), an electrical signal, a thermal signal (e.g. heater), a visual signal (either an LED or a full graphical user interface), an audio signal (e.g., audible sounds) and/or a chemical signal (elution of a skin-perceptible compound such as menthol or capsaicin). In such embodiments, the feedback loop is carried out for a specified time interval on the basis of measurements by the sensor, wherein one or more signals are provided to the subject periodically or repeatedly on the basis of the sensed parameter(s). The feedback approach may be implemented using machine learning, for example, to provide an individualized response based on measured parameters specific to a given subject.
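By way of illustration, the following is a minimal sketch, in Python, of the swallow-primer feedback loop: when the respiration signal indicates mid-to-late expiration and no recent swallow or cue has occurred, a haptic cue is fired. The functions read_respiratory_phase, read_last_swallow_time, and drive_vibration_motor are hypothetical hardware/firmware hooks introduced for illustration only.

```python
import time

def swallow_primer_loop(read_respiratory_phase, read_last_swallow_time,
                        drive_vibration_motor, min_cue_interval_s=10.0):
    """Fire a haptic swallow cue during mid-to-late expiration (hypothetical hooks)."""
    last_cue = -float("inf")
    while True:
        phase = read_respiratory_phase()      # 0-1 fraction of the expiratory period, or None
        now = time.monotonic()
        if (phase is not None and 0.5 <= phase <= 0.9            # mid-to-late expiration window
                and now - read_last_swallow_time() > min_cue_interval_s
                and now - last_cue > min_cue_interval_s):
            drive_vibration_motor(duration_s=0.3)                 # hypothetical haptic cue
            last_cue = now
        time.sleep(0.05)
```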
In an embodiment, on-body sensing is achieved with an enclosed sensing/stimulating circuit enabled through real-time processing, wherein the feedback loop can be haptic, electrotactile, thermal, visual, audio, chemical, etc. In an embodiment, the sensors are also able to work in a network, and anatomically separate sensing allows for more information: one sensor could measure at one location (e.g. on the suprasternal notch) but trigger feedback in a sensor somewhere else that is more hidden (e.g. on the chest).
A second therapeutic modality is for the sensor to act as a wearable respiratory therapy system. In conditions such as chronic obstructive pulmonary disorder (COPD), dyspnea or shortness of breath is a common symptom that greatly impacts quality of life. Respiratory therapy is a commonly deployed method that trains a subject to control their breathing (both timing and respiratory effort) to increase lung aeration and improve respiratory muscle recruitment. Our sensor can be used to track respiratory inspiration and expiration efforts and duration. Based on these measurements, haptic feedback (or visual feedback via an LED) can potentially train users to extend or shorten inspiration or expiration to maximize airflow. Respiratory inhalation effort can also be triggered. For instance, if a certain inspiratory effort is achieved, a threshold is passed, triggering a haptic vibration. This haptic feedback can also be triggered after a certain length of time is reached for an inspiratory effort. Thus, the sensor can track airflow through the throat and use this as a way to deliver on-body respiratory training. In another embodiment, the sensor itself can be outfitted with an external mouthpiece (
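By way of illustration, the following is a minimal sketch, in Python, of the respiratory-training trigger: a cue condition is met when an inspiratory-effort proxy (integrated magnitude of the respiration-band signal) crosses a threshold, or when the inspiration has been sustained for a target duration. The effort proxy and the threshold values are illustrative assumptions.

```python
import numpy as np

def inspiration_trigger(resp_band_window, fs, effort_threshold=1.0, target_duration_s=3.0):
    """resp_band_window: band-passed respiration samples covering the current inspiration."""
    effort = float(np.sum(np.abs(resp_band_window)) / fs)  # crude integrated-effort proxy
    duration_s = len(resp_band_window) / fs                # length of the inspiration so far
    return effort >= effort_threshold or duration_s >= target_duration_s

# Example use: if inspiration_trigger(window, fs=1600): fire the haptic (or LED) cue.
```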
Another therapeutic modality involves use of the present sensor systems to assess, and optionally treat a patient regarding, positioning of the body of a subject, or portion thereof, to prevent injury and/or support a given therapeutic outcome. Body injury can occur with motion and movement of limbs to points of significant deformation. This can occur, for example, where a limb (e.g., shoulder) is injured and must be kept relatively immobile or limited to a safe range of motion, for example, to support healing or therapy. During sleep or daily activity, the subject may inadvertently position this limb into a deformation that would cause injury. In these embodiments, the present sensors are used as a sentinel system to assess the position of the limb in space and lead to a notification (haptic, sound, visual, thermal, chemical, electrical, etc.) to alert the user and/or a caregiver.
Sleep Medicine: wireless sleep tracker with the ability to measure: time until sleep, wake time after sleep onset, sleep duration, respiration rate, heart rate, pulse oximetry, inspiration time, expiration time, snoring time, respiratory effort, and body movement. Intimate skin coupling on the suprasternal notch enables capture of respiration and heart rate given the proximity to the carotid arteries and trachea. Sleep medicine applications can extend beyond simply measuring vital signs during sleep or providing sleep quality metrics; the present sensor systems also support applications to improve sleep. Examples of applications for this aspect include the following:
In an embodiment, the sensors can evaluate position in space for specific limbs or body locations that are prone to injury (e.g., a post-surgical rotator cuff); if a dangerous range of motion or position is sensed, this triggers a biofeedback signal that warns the user or causes the user to alter their position, for example to avoid sleeping on an injured arm. The present sensor systems are also useful for monitoring and therapy in connection with snoring, for example, wherein sensing of snoring leads to vibratory biofeedback to trigger a positional change.
In an embodiment, the sensors are used to recapitulate a video and/or visual representation of a subject's position in space. Benefits of this aspect of the invention include mitigation of privacy concerns and reduced data storage requirements.
Dermatology: ability to capture scratching behavior and distinguish this from other limb movements through coupling mechanical and acoustic signal processing.
Pulmonary Medicine: chronic obstructive pulmonary disease (COPD) is a chronic condition characterized by relapsing pulmonary symptoms. Our sensor would be able to quantify important markers indicative of COPD exacerbation including: cough, throat clearing, wheezing, altered air volume with forced lung expiration, respiratory rate, heart rate, and pulse oximetry. Asthma and idiopathic pulmonary fibrosis can similarly be assessed with the same measures.
Social Interaction Metrics, Quantification of Acoustic and Linguistic Features of Single-Speaker and Multiple-Speaker Tasks: measurement of spoken discourse and speech signals as components of social interaction is complex, requiring a sensor capable of capturing a wide range of acoustic and linguistic parameters, as well as acoustic features of the speaking environment. The sensor can quantify key parameters of social interaction related to the inbound acoustic signal, including talking time and number of words. The recorded signal can be used to extract additional data including phonatory features (e.g., F0, spectral peak, voice onset time, temporal features of speech) as well as linguistic discourse markers (e.g., pausing, verbal disfluencies). When worn by individual interlocutors, the sensor is able to capture linguistic features across multiple interlocutors from the separately recorded signals, facilitating analysis of conversational social interactions. The coupling to skin along the suprasternal notch enables precise quantification of true user talk time regardless of ambient conditions. Furthermore, social interaction is a complex, multi-factorial construct; the present disclosure enables quantification of important physical parameters (e.g., sleep quality, eating behavior, physical activity) that can potentially be combined into a novel metric for social interaction.
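As one hedged illustration of phonatory feature extraction, the sketch below estimates the fundamental frequency (F0) of a short voiced frame by autocorrelation. The frame segmentation, sampling rate fs, and pitch search range are assumptions, and a practical pipeline would add voicing detection and cross-frame smoothing.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate F0 (Hz) of a voiced frame via autocorrelation (illustrative sketch)."""
    frame = frame - np.mean(frame)
    # One-sided autocorrelation of the frame.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search only lags corresponding to the assumed pitch range.
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / lag
```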
The present sensor systems are also useful for creating and monitoring social interaction scores and metrics, for example, using approaches based on sensor signals, feedback analysis and/or signaling to a subject.
The ability to monitor a broad range of acoustic and linguistic features in ecologically valid settings is key in identifying individuals at increased risk for mood disorders, identifying those at risk for social isolation that may lead to increased risk of cognitive decline, and identifying those at risk for other disorders marked by early changes in speech, voice, and language quantity/quality (e.g., early language changes in dementia of the Alzheimer's type, prodromal Huntington's disease, fluency changes in multiple sclerosis, and Parkinson's disease, among others).
The devices and methods are also applicable to acquired neurocognitive and neuro-linguistic disorders (e.g., aphasia, cognitive-communication impairments associated with neurodegenerative disorders with/without dementia, traumatic brain injury, right brain injury), acquired motor speech and fluency disorders, neurodevelopmental disorders, and child language disorders. The device can also be used in clinical applications for recording conversation quantity and quality in hearing loss treatment/aural rehabilitation applications. The device can also be used to monitor vocal use patterns in professional voice users and those with vocal pathologies.
The present sensor systems and methods are also useful for treatment of diseases associated with loss of muscular or neurological function, such as amyotrophic lateral sclerosis, Lambert-Eaton myasthenic syndrome, myasthenia gravis, and Duchenne's muscular dystrophy. The sensor can be used to assess functional performance of the subject, for example, by assessing physical activity, breathing performance, or swallowing performance in these conditions.
As mentioned above, the ability to quantify speech recovery in a wearable format impervious to ambient noise conditions would hold high value in evaluating the nature of and treatment outcomes for numerous disorders associated with voice, speech, language, pragmatic, and cognitive-communication disorders. Further applications include quantifying stuttering frequency and severity in individuals with fluency and fluency related disorders. The coupling to skin along the suprasternal notch enables this functionality, with minimal stigma associated with wearing the device. Recording large volumes of data from ecologically valid environments is key for advancing clinical assessment, monitoring, and intervention options for a number of disorders.
Dysphagia and Swallowing Problems: difficulty swallowing (dysphagia) remains a problem across a host of conditions including, but not limited to: head/neck cancer, stroke, scleroderma, and dementia. Prior work has indicated that the frequency of spontaneous swallowing is an independent marker of dysphagia severity. Furthermore, in hospitalized patients, the ability to determine the safety and efficiency of swallowing function is critical for identifying patients at risk for aspiration, for making diet modifications that optimize nutrition and prevent aspiration, for facilitating timely hospital discharge, and for averting readmission related to aspiration pneumonia. This sensor could potentially operate as a screening tool that detects abnormal movements associated with dysphagia and/or potentially guide dietary recommendations. The improvement of dysphagia with therapeutic intervention can also be tracked with this sensor. This application could be applied across a wide range of age groups, from neonates to elderly adults.
Stroke Rehabilitation: as mentioned, the sensor provides the unique ability to assess speaking and swallowing function. Both are key parameters in stroke recovery. Beyond this, the sensor can also measure gait, falls, and physical activity as a comprehensive stroke rehabilitation sensor.
Nutrition/Obesity: the preferred deployment of the sensor is via intimate skin coupling to the suprasternal notch. This enables quantification of swallowing events and swallow count. The passage of food leads to a unique sensor signature that enables prediction of mealtime and feeding behaviors. The mechanics of swallowing differ based on the density of the food or liquid bolus being ingested. Thus, our sensor can detect the ingestion of liquids versus solids. Furthermore, our sensor can assess swallowing signals that distinguish between the ingestion of solid foods, denser semi-liquid foods (e.g., peanut butter), and thin liquids (e.g., water). This may hold utility for food ingestion tracking for weight loss. Other uses include assessing food intake in individuals with eating disorders (e.g., anorexia or bulimia). Further uses include assessing mealtime behavior in individuals who have undergone gastric bypass; the sensor can provide a warning in instances where too much food or liquid is ingested post-operatively.
Maternal/Fetal Monitoring: currently, ECHO Doppler is the most common modality to capture fetal heart rate in pregnant women. However, this modality is limited in that fetal heart rate can be difficult to capture in obese patients. Furthermore, the Doppler signal is frequently lost as the fetus descends during labor. Prior work has demonstrated the potential value of mechano-acoustic sensing for fetal heart rate monitoring. Our wearable sensor system would be well-suited for this application.
Post-operative Surgery Monitoring of Bowel Function: The stethoscope is used commonly to assess return of bowel function after abdominal surgery. Bowel obstruction, or failure of bowel function return is a common cause of hospitalization or delayed discharge. A sensor capable of quantifying return of bowel function through acoustic signal measurement would have utility in this context.
Cardiology: the stethoscope is standard of care for diagnosis and disease monitoring. The sensor presented here provides the ability to continuously capture the data and information otherwise derived from the stethoscope. This includes the continuous evaluation of abnormal murmurs. In certain instances, such as congenital heart defects, detection of a murmur is critical to the subject's health. The present sensor systems may provide a continuous acoustic measurement of heart function. Abnormal sounds are also reflective of heart valve disease. Accordingly, the sensors described here may be used to track the stability or worsening of valve disease such as aortic stenosis, mitral valve stenosis, mitral valve regurgitation, tricuspid stenosis or regurgitation, or pulmonary stenosis or regurgitation.
Specific to cardiology, non-invasive ways to assess cardiac output and left ventricular function remain elusive. Cardiac echocardiography is non-invasive, but requires specialized training and is not conducive to continuous wearable use. A non-invasive method to continuously track cardiac output is of high clinical value for numerous conditions including congestive heart failure. Embodiments of the present sensor systems are able to provide a measure of both heart rate and stroke volume (the volume of blood pumped per beat). Cardiac output is the product of heart rate and stroke volume. Heart rate may be determined, for example, by assessing the time delay between successive mechano-acoustic peaks, while the amplitude of the accelerometer signal reflects the intensity of each heartbeat by measuring the displacement of the skin with each beat.
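As a simple numerical illustration of this relation, the sketch below computes heart rate from the inter-peak delays and multiplies it by a stroke volume estimate to obtain cardiac output; the beat-peak indices and the amplitude-to-stroke-volume calibration are assumed inputs rather than part of the disclosed hardware.

```python
import numpy as np

def cardiac_output_l_per_min(beat_peaks, fs, stroke_volume_ml):
    """Cardiac output = heart rate x stroke volume (illustrative sketch)."""
    ibi_s = np.diff(beat_peaks) / fs           # inter-beat intervals (s) from peak delays
    heart_rate_bpm = 60.0 / np.mean(ibi_s)     # beats per minute
    return heart_rate_bpm * stroke_volume_ml / 1000.0  # liters per minute
```

For example, a mean inter-beat interval of 0.8 s gives 75 beats per minute; with an assumed stroke volume of 70 mL, the computed cardiac output is 75 x 70 / 1000 = 5.25 L/min.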
Another embodiment is in military applications: injury from a firearm or explosion leads to propagation of mechanical waves from the point of impact. The sensor can be used to assess the severity of such an impact as a way to non-invasively assess a bullet's impact or the proximity of the user to a blast. The sensor can also be used to assess the likelihood of damage to a vital organ (e.g., via placement over the heart or lungs). The sensor may be deployed directly on the user (e.g., police officer, soldier), in clothing, or in body armor.
Any of the medical devices provided herein may have one or more external modifications, including to provide access to new diagnostic and therapeutic capabilities. For instance, the addition of an external mouthpiece enables a controlled release of airflow from a user that can then be measured by the sensing elements within the sensor system (e.g., accelerometer or microphone). This enables the quantification of airflow (volume over time) without the need for expensive equipment such as a spirometer. Critical parameters such as forced expiratory volume in 1 second (FEV1) could then be collected at home, with the data transmitted and stored wirelessly. Changes in airflow parameters such as FEV1 could then be coupled to other parameters such as wheeze sounds, cough frequency, or throat clearing to create novel metrics of disease that can serve as an early warning system of deterioration.
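For illustration, a minimal sketch of the FEV1 computation is shown below, assuming an airflow trace in liters per second (flow_lps) that has already been calibrated from the mouthpiece-coupled sensor signal and aligned to the onset of forced expiration; the calibration step itself is not shown.

```python
import numpy as np

def fev1_liters(flow_lps, fs):
    """Forced expiratory volume in 1 second: integrate flow over the first second."""
    n = int(fs)  # samples in the first second of forced expiration
    return float(np.trapz(flow_lps[:n], dx=1.0 / fs))
```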
Therapeutic Applications: In respiratory diseases such as chronic obstructive pulmonary disease (COPD) or asthma, respiratory training is a key component of reducing shortness of breath (dyspnea). This includes teaching breathing techniques such as pursed lip breathing (PLB), which involves exhaling through tightly pressed lips and inhaling through the nose with the mouth closed. The lengths of inspiration and expiration are also adjusted to meet the patient's unique respiratory status and can be adjusted depending on user comfort. The sensor can then be deployed in a therapeutic manner to distinguish mouth breathing from nose breathing by variations in throat vibration or airflow. The sensor can also time the length of inspiration and expiration. For instance, a respiratory therapist could set an ideal time length, and the sensor can provide haptic feedback to the patient/user when that inspiratory or expiratory time length is reached. Overall, the sensor can act as a ‘wearable’ respiratory therapist that reinforces effective breathing patterns and techniques to improve breathing and patient symptoms, and prevent exacerbations of respiratory diseases. Further work could couple this with continuous pulse oximetry.
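A minimal sketch of such timed haptic coaching is shown below. It assumes a per-sample phase stream (+1 for inspiration, -1 for expiration) derived from the respiration-band signal, a hypothetical trigger_haptic call, and therapist-set target durations; the values shown are placeholders.

```python
def breathing_trainer(phase_stream, fs, target_insp_s=2.0, target_exp_s=4.0,
                      trigger_haptic=print):
    """Cue the user when the current breath phase reaches its target duration (sketch)."""
    current, count = None, 0
    for phase in phase_stream:
        if phase == current:
            count += 1
        else:
            current, count = phase, 1  # a new breath phase has started
        target_s = target_insp_s if current == 1 else target_exp_s
        if count == int(target_s * fs):
            trigger_haptic("inspiration" if current == 1 else "expiration")
```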
Alzheimer's dementia (AD) affects 5.4 million Americans, costs $236 billion in yearly spending, and requires a collective 18.1 billion hours of care from loved ones. Reduced social interaction or loneliness is a key accelerator of cognitive decline, and directly increases the risk of depression in patients with AD. Second, quality social interaction is associated with reduced risk of dementia later in life, offering a non-pharmacological strategy to reduce the morbidity and mortality of AD. Third, social interaction and conversation changes represent a potential biomarker for early identification of AD and disease progression. A major barrier in advancing the use of social interaction in AD patients has been the lack of tools capable of comprehensively assessing the amount and quality of social interaction in real-world settings. Social interaction rating scales (self-report/proxy report) are subject to reporting bias and lack sensitivity. Smartphones have limited sensing accuracy, exhibit variability in sensor performance between manufacturers, lack the ability to measure key parameters (e.g., mealtime behavior), and suffer poor audio fidelity in noisy ambient settings. While devices to measure social interaction have been reported in the literature, those systems are bulky and heavy, precluding continuous use, and lack the comprehensive sensing capabilities necessary to adequately capture the entire spectrum of parameters in social interaction. Furthermore, these systems have not been validated rigorously in the elderly population, where technical literacy is low.
To advance the care of patients with AD, there is a need for wearer-accepted, non-invasive, remote monitoring technology capable of tracking the broad range of parameters relevant to social interaction across mental, social, and physical health domains. To address this, we propose the development of the first integrated wearable sensor capable of continuous measurement of critical parameters of social interaction in a networked environment that minimizes user stigma through an optimized wearable form factor. The current prototype incorporates a high-frequency 3-axis accelerometer capable of measuring speech, physiological parameters (e.g., heart rate, heart rate variability), sleep quality, mealtime activity, and physical activity (e.g., step count) in ecologically valid environments through additional signal analytics. The sensor is completely enclosed in medical-grade silicone that is less than 4 mm thick, with bending and modulus parameters orders of magnitude lower than previously reported technologies. The sensor, adhered to the suprasternal notch with hypo-allergenic adhesives, enables unobtrusive, intimate skin connection, allowing our technology to collect mechano-acoustic signals invisible to wrist-band based sensors and smartphones. This includes the ability to measure respiration rate, heart rate, swallowing rate, and talk time with accuracy unachievable by other technologies. We propose the development of a fully-integrated social interaction sensor with additional functionality, designed rationally with input from AD patients and their caregivers and validated against clinically standard equipment, with more advanced signal processing. The estimated cost of each sensor is <$25 USD, with a total addressable market of $288 million USD yearly. Aim 1 will add an integrated microphone to our existing wearable, flexible sensor platform that already includes a high-frequency 3-axis accelerometer capable of continuous communication via Bluetooth®. The success criteria will be successful bench testing showing high-fidelity audio capture across the full range of inputs from 38 dB (whispers) to 128 dB (concert), and successful wireless data transfer to a HIPAA-secure database. A user interface is provided for researchers to enable more advanced analytics. Additional parameters may be extracted: pitch, tone, speech paucity, overtalk time, and conversation turn-taking count.
The development of the first truly wearable social interaction sensor capable of continuous, multimodal, and real-world sensing represents an important innovation, including for the AD research community as an observational tool, and for patients and their caregivers as an interventional tool. By accurately, reliably, and discreetly capturing the numerous parameters relevant to social interaction, we hope our sensor can detect social isolation in individuals with AD and provide subtle feedback that encourages more engagement and reduces loneliness.
Alzheimer's dementia (AD) affects 5.4 million Americans, represents the 6th most common cause of death (increasing 71% from 2000 to 2013), costs $236 billion in yearly spending, and requires a collective 18.1 billion hours of care from loved ones yearly. There are limited therapies (behavioral and pharmaceutical) for AD, with numerous candidates failing in late-stage clinical trials. Advancing the next generation of AD therapies depends on high-quality clinical measurement tools for detecting novel, ecologically valid, and sensitive biophysical markers of cognitive decline. As the search for new therapies continues, there is an urgent need for alternative strategies that bend the disease trajectory by addressing social interaction contributors and consequences associated with AD. Central to these strategies is the recognition that loneliness and social isolation pose serious threats to the health of older adults, leading to self-harm, self-neglect, cognitive disability, physical disability, and increased mortality. Addressing modifiable risk factors, specifically social isolation, is a major policy goal of public health institutions and governments to mitigate the tremendous burden of AD. A large body of rigorous research supports the protective effects of high-quality social interaction in mitigating the deleterious effects of AD and in optimizing healthy aging (mental, physical, and social). Increased conversation difficulties, such as breakdowns in message exchanges between interlocutors or increased time required to convey and to understand messages, manifest early in AD, result in increased social isolation (which accelerates cognitive decline), and add significantly to caregiver burden in AD. Additionally, because the natural course of AD is marked by periods of disease stability punctuated with periods of rapid decline, measuring social interaction changes longitudinally would facilitate a deeper understanding of the natural progression of AD. Conversation and social interaction behaviors extracted from real-world communication contexts are promising next-generation biophysical markers of cognitive change and treatment outcome measures. Despite their significant clinical importance, changes to conversation abilities and social interaction in real-world contexts are not easily evaluated during clinical visits. Clinicians must rely on patient and proxy reports that are subject to inaccuracies and reporting biases. Developing a reliable, non-invasive, user-accepted wearable technology for collecting conversation and social interaction data would be an invaluable tool for the field. Currently, there is no commercially available technology capable of measuring the wide range of parameters relevant to social interaction in a form factor that enables long-term, real-world use in individuals with AD. Accordingly, any of the devices and methods provided herein may be used in AD evaluation, diagnosis, and therapy.
Parameters of Importance for Social Interaction (Physical, Mental, and Social): Social interaction is a complex construct. Prior research links social interaction to cognitive function, mental health, sleep quality, physical activity, social activity, eating behaviors, and language use in dementia. Thus, assessment of social interactions requires tools capable of collecting numerous behaviors within a naturalistic environment.
Assessing social interaction in adults typically involves self-report and proxy-report psychometric surveys (e.g., Friendship Scale, Yale Physical Activity Scale, SF-36). However, this method of data collection is prone to bias, lacks sensitivity, and is frequently inaccessible to individuals with cognitive and language impairments. Moreover, psychometric survey tools, in isolation, do not reflect the changes in conversation abilities that frequently underlie social interaction changes in aging and dementia. Consequently, survey tools are best considered in conjunction with objective measures of conversation changes in real-world environments. Smartphones with custom mobile apps have been explored previously for this purpose. Smartphones offer some advantages, including wide availability, onboard sensors (e.g., accelerometer, microphone), and wireless communication capabilities; however, elderly individuals are the least likely to use smartphones and exhibit lower technical literacy. While previous studies have shown that smartphone-collected data (text message and phone use) correlates with traditional psychometric mood assessments, the overall accuracy of these smartphone-based approaches remains poor (<66%). While there is compelling evidence that voice, conversation, and linguistic features are sensitive markers of mood, cognitive-linguistic, and social interaction changes, smartphone audio recordings are of insufficient quality for clinical monitoring of these behaviors, particularly in the context of real-world situations with high ambient noise. Furthermore, there remain accuracy concerns regarding smartphone-based accelerometers for monitoring physical activity and sleep. The large number of available smartphone platforms, with their distinct hardware specifications, precludes the ability to normalize data inputs. Commercially available systems attached to the wrist (e.g., FitBit®) are largely limited to tracking step counts and thus do not capture mobility sphere data. While remote data recording systems such as LENA offer more advanced signal processing, they have been tested only in parent-child social interactions, are limited to speech collection only, and have not demonstrated the ability to capture important speech features in AD. For instance, measuring ‘overtalk’ time, a primary source of conversation breakdowns and a negative behavior evinced by healthy conversation partners, is important in the context of AD. The most advanced system reported in the literature for social interaction includes both an accelerometer and microphone in a strap-on device. However, the system is bulky, making daily wear infeasible, raises the concern of user stigma, and requires quiet ambient conditions to operate. Furthermore, these systems are not able to collect physiological parameters relevant to social interaction (e.g., heart rate, heart rate variability, respiration rate). Since mealtime behaviors are associated with alterations in mental health and social interaction, a number of groups have reported wrist-based and neck-based sensors to measure hand movements and chewing/swallowing behaviors, but with only modest accuracy. These eating behavior sensors lack the ability to collect other relevant parameters such as speech, physical activity, or physiological metrics. Currently, there is a critical need for a technology that is capable of providing objective, comprehensive, and unobtrusive measurements that capture the wide range of parameters important to social interaction for individuals with AD.
Recent advances in materials science and mechanics principles have enabled a new class of stretchable, bendable, and soft electronics. These systems match the modulus of skin, enabling mechanically invisible use for up to 2 weeks with coupling to any curvilinear surface of the body. The intimate coupling with skin, similar to a temporary tattoo, enables physiological measurements with data fidelity comparable to FDA-approved medical devices. Specifically, mechano-acoustic signals are of high clinical relevance. The propagation of mechanical waves through the body, measurable through the skin, reflects a range of physiological processes including: opening/closing of heart valves on the chest, vibrations of the vocal cords on the neck, and swallowing. Thus, a wearable sensor intimately connected to the skin is key to sensing these bio-signals and enabling a broad range of sensing possibilities. This is in contrast to external accelerometers embedded in smartphones and wrist-based traditional “wearables”, which are limited to measuring only basic physical activity metrics (e.g., step count). Described herein is the use of high-frequency accelerometers coupled to the skin to sense a wide range of parameters relevant to assessing social interaction.
We present a novel mechano-acoustic sensing platform (
This platform provides a system that employs a high-frequency accelerometer intimately mated to the skin, enabled by low-modulus construction and robust adhesion, and capable of multimodal operation. The system may use Bluetooth® to communicate with a smartphone, although the smartphone largely serves as a visual display and additional data storage unit. The current system can also engage, in an additive fashion, with a smartphone's sensors, including the microphone, if desired.
Software and Signal Analytics for Novel Data Collection Relevant to Social Interaction: Provided is a suite of signal processing capabilities that involves bandpass filtering of the raw acousto-mechanic signal in selective ranges within the accelerometer's bandwidth, enabling multimodal sensing for numerous biomarkers, from step counts and respiration (low band of the spectrum), to swallowing (mid band of the spectrum), and speech (high band of the spectrum). The intimate skin coupling enables highly sensitive measurement with high signal-to-noise ratio. This allows the sensor to measure both subtle mechanical activities and acoustic bio-signals that are below the audible threshold for conventional microphones. We demonstrate the ability to use our acousto-mechanic sensor to detect the words left, right, up, and down by differentiating the time-frequency characteristics of the vocal cord vibrations associated with the creation of each word. This ability can then be used by the sensor to control a computer game (e.g., Pacman). In the case of talktime calculations, the raw mechano-acoustic signal is filtered with an eighth-order Butterworth filter. The filtered signal is then passed through a root-mean-square value threshold. The energy of the signal is then interrogated with a 50-ms window, enabling the determination of talktime and word count. A short-time Fourier transformation defines the spectrogram of the data. The results are averaged and reduced in dimensionality using principal components analysis to form a feature vector. Finally, the feature vector is classified using linear discriminant analysis. We demonstrate the system's ability to identify specific interlocutors and quantify talktime in a group of 3 stroke survivors with aphasia and one speech language pathologist (
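A hedged sketch of this talk-time and interlocutor-classification pipeline is given below. The sampling rate, speech band edges, RMS threshold, window lengths, and labeled training features are placeholders to be tuned against real recordings, and the PCA/LDA stages are shown only schematically.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def talk_time_seconds(raw, fs, band=(80.0, 800.0), rms_threshold=0.01, win_s=0.05):
    """Eighth-order Butterworth bandpass, RMS threshold, 50-ms energy windows (sketch)."""
    sos = butter(8, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, raw)
    win = int(win_s * fs)
    n = len(filtered) // win
    frames = filtered[: n * win].reshape(n, win)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return float(np.count_nonzero(rms > rms_threshold) * win_s)

def spectrogram_features(raw, fs, n_components=10):
    """Short-time Fourier transform followed by PCA dimensionality reduction (sketch)."""
    _, _, Z = stft(raw, fs=fs, nperseg=256)
    spectrogram = np.abs(Z).T               # time frames x frequency bins
    return PCA(n_components=n_components).fit_transform(spectrogram)

# Interlocutor classification (schematic): fit LDA on labeled feature vectors, then
# predict speaker identity for new frames.
# clf = LinearDiscriminantAnalysis().fit(train_features, train_labels)
# predicted = clf.predict(spectrogram_features(raw, fs))
```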
Another key advantage is the ability to couple acoustic and mechanical signal collection in synchrony, allowing for the capture of talktime specific to a wearer in both noisy and quiet ambient conditions. We demonstrate the minimal performance differences of our sensor between quiet and noisy conditions in comparison to a smartphone microphone (iPhone 6, Apple, Cupertino). This overcomes a fundamental limitation of other technologies that struggle to capture true user talk time in noisy ambient conditions. Also, the unique IDs applied to each sensor allow us to discern the number of conversation partners.
Beyond acoustic signals, the sensor has the capability of leveraging additional analytics to measure other parameters relevant to social interaction through its intimate skin connection. As reported previously in studies employing signal processing strategies for electrocardiograms and acoustic signals derived from stethoscopes, we employ Shannon energy calculations to increase the contrast of the pronounced mechano-acoustic signature in the time domain relative to signal noise. Savitzky-Golay smoothing functions are then applied to form an envelope over the transient energy data. Examples of the advantages of this system include measurement of respiration rate transmitted through the neck and the pulsation of arterial blood through the external carotid arteries; measures such as heart rate, heart rate variability, and respiratory rate are relevant in assessing sleep quality (
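An illustrative sketch of this Shannon-energy and Savitzky-Golay envelope step, applied to beat detection, is shown below; the band-passed cardiac signal, smoothing window, and minimum beat spacing are assumptions rather than the disclosed parameter values.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def heart_rate_bpm(sig, fs):
    """Heart rate from a band-passed mechano-acoustic cardiac signal (sketch)."""
    x = sig / (np.max(np.abs(sig)) + 1e-12)        # normalize to [-1, 1]
    shannon = -(x ** 2) * np.log(x ** 2 + 1e-12)   # per-sample Shannon energy
    win = max(int(0.05 * fs) | 1, 5)               # odd smoothing window (~50 ms)
    envelope = savgol_filter(shannon, window_length=win, polyorder=3)
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs))  # >=0.4 s between beats
    return 60.0 * fs / float(np.mean(np.diff(peaks)))
```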
Form Factor-Reducing Caregiver and Wearer Burden and Stigma: The sensor's flexible platform maximizes user comfort with neck movement, talking, and swallowing. Highly visible neck-based sensors (necklaces and circumferential neck sensors) are another limitation of other published solutions; 79% of respondents expressed significant reluctance and concern regarding wearing a neck-based sensor daily. Thus, a highly wearable sensor capable of capturing the necessary parameters must minimize potential stigma for the person with AD and their interlocutors. Prior qualitative studies of user acceptance of wearables in AD highlight the importance of low device maintenance, data security, and discreteness in wear. The deployment of the sensor on the suprasternal notch with a medical-grade adhesive is a key advantage in user acceptability, in that it enables capture of the relevant signals transmitted from the speech production system while being largely covered by a collared shirt. The sensor is also encapsulated with silicone that can be matched to the user's skin tone. Finally, the sensor accommodates full wireless charging and waterproof use, enabling bathing with the device in place. In regard to adhesive choice to maximize wearer comfort, we have extensive experience identifying the optimal adhesive, which can be adjusted based on the desired length of use (1 day to 2 weeks). Given the heightened fragility of mature skin, we currently employ a gentle acrylic polymer matrix adhesive (STRATGEL®, Nitto Denko) that operates without causing significant skin irritation or redness with prolonged daily use (>2 weeks) in healthy adults. In summary, the key advantages of the wearable acousto-mechanic sensor for social interaction compared to existing systems and prior reported research include:
Multimodal Functionality: the sensors have already demonstrated the ability to collect the largest number of parameters of value for assessing social interaction in one technology platform, enabled through intimate skin coupling. Parameters include: talktime, # of conversation partners, swallow count, respiration rate, heart rate, sleep quality, and physical activity. Additional parameters are compatible with the devices and methods provided herein.
Real-World Continuous Sensing: the sensor can measure sound only when mechanical vibrations are sensed on the user's throat enabling highly specific recording of true user talktime regardless of noisy or quiet ambient environments. This enables real-world deployment outside of controlled clinical settings.
Low Burden, Unobtrusive Form Factor: the sensor passively collects data without the need for user adjustment. Wireless charging limits user burden facilitating adherence. Deployment on the suprasternal notch enables high fidelity signal capture without the stigma of a highly visible neck-deployed system.
Advanced Signal Analytics: various signal processing techniques may be employed to derive additional metrics meaningful to social interaction.
Hardware may be employed within flexible wearable platforms. Currently, the central microprocessor has up to 8 analog channel inputs with a 2.4 GHz, 32-bit CPU and 64 KB RAM. Off-the-shelf microphones may be used to determine ideal specifications. Specifically, the MP23AB01DH (STMicroelectronics) series offers a thin-profile MEMS microphone (3.6 mm×2.5 mm×1 mm) that will not add any additional bulk to the wearable form factor. Furthermore, the system is low-power (250 µA) and exhibits a high signal-to-noise ratio (65 dB) with a sensitivity as low as 38 dB. The microphone can operate in synchrony with the 3-axis accelerometer to collect external audio signals. The current lithium-ion battery has 12 mAh capacity; thus, we do not expect the addition of an external microphone to significantly affect battery life. To determine success, the microphone's performance and auditory clarity is tested with a standardized block of audio text (60 s) at increasing decibel levels (10 dB increments) from 38 dB (whisper) to 128 dB (concert).
Software and Signal Analysis Augmentation: Bluetooth® may be used to connect to any standard smartphone, tablet, or laptop. The user interface may display the raw signal and provide data storage. The sensor may also be used as an observational tool for social interaction, including by use of a secure, researcher-focused user interface. This includes a software protocol that enables HIPAA-compliant data transfer and cloud storage; we have previously used Box® as a HIPAA-compliant storage platform for our wireless sensors. While signal processing (Savitzky-Golay filtering, Butterworth filtering, and Shannon energy envelope techniques) has enabled the derivation of numerous important metrics of social interaction, additional signal processing functionality will derive additional, more advanced metrics. For instance, paralinguistic features such as a user's pitch, tone, and verbal response time in a conversation have all been correlated to depression, including within the dementia population. Turn-taking and overtalk are additional metrics of interest. We propose a multi-pronged approach that includes employing a hidden Markov model approach, open-access speech processing algorithms (e.g., COVAREP), and wavelet analysis. Specifically, we believe wavelet analysis is the most promising strategy given the well-established theory of prior work: a mother wavelet for any specific metric of interest will be classified from the raw input acousto-mechanic signals. The user interface allows researchers considerable freedom to manipulate the raw data and deploy various signal processing strategies and toolboxes of interest. Further signal analysis would enable classification of other relevant behaviors for individuals with AD such as personal hygiene (brushing teeth), chores, or driving.
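As one hedged example of the proposed wavelet strategy, the sketch below decomposes an acousto-mechanic segment with PyWavelets and summarizes the energy in each decomposition level; the choice of 'db4' as mother wavelet and the decomposition depth are illustrative assumptions, and the resulting feature vector would be passed to a downstream classifier for the paralinguistic metric of interest.

```python
import numpy as np
import pywt  # PyWavelets (third-party wavelet library)

def wavelet_band_energies(sig, wavelet="db4", level=5):
    """Per-level wavelet energies as candidate paralinguistic features (sketch)."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])
```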
While the global wearable medical device market is >$3 billion USD with 20% growth expected over the next decade, the elderly population is highly underserved despite greater needs. The platform provided herein is applicable to a wide range of dementia indications, as well as additional sensing applications (e.g., as a sleep or dysphagia sensor). Dementia, including AD, is a devastating condition. Increasing meaningful social interaction represents an immediate strategy to reduce cognitive decline and morbidity for AD while simultaneously providing a potential prophylactic strategy in the elderly. The wearable medical sensors provided herein have the opportunity to become a critical clinical outcomes tool for AD researchers by providing the first technology capable of comprehensively assessing social interaction in naturalistic environments. Furthermore, this sensor can directly help individuals and their caregivers: on days when a person with AD has not been spoken to or engaged with meaningfully, the sensors provided herein can notify the appropriate person and reduce loneliness for that day.
Exemplary devices employing mechano-acoustic sensing and actuation were fabricated and tested with respect to overall functionality and mechanical properties.
The present example demonstrates the usefulness of flexible wearable sensor devices of the invention for diagnostic applications, including early triage of high-risk neonate subjects for cerebral palsy (CP). Predicting eventual neurological function in at-risk neonates is challenging, and research demonstrates that the absence of fidgety movements is predictive of the development of CP (see, e.g., BMJ 2018;360:k207). Assessment of CP in neonate subjects is typically performed by the General Movement Assessment (GMA), for example, corresponding to a 5 min video assessment of a supine infant with a standardized rubric.
In some embodiments, networked sensors provide additional value. The ability to assess limb movement in time synchrony through a network of on-body sensors would allow for deeper insights into abnormal movements. Analogous to the sleep application, this would allow for visual reproduction of movements that could provide GMA-like video data for future analysis. The advantages here include reduced data storage requirements, anonymization of the subject, and the ability to operate in low-light conditions (e.g., night time or sleep).
While GMA is the current gold standard with the best available evidence of positive and negative predictive value, conducting GMA requires specialized training that is not always feasible for broader screening. 3-D computer vision and motion trackers are also potentially useful for GMA, but have the drawbacks of being highly expensive, requiring enormous computational power, and requiring large training sets.
The present sensors provide an alternative approach capable of accurately monitoring and analyzing the movement of neonate subjects in real time and, therefore, support applications to provide clinically relevant predictive information for diagnosis of CP.
This application claims the benefit of and priority to U.S. Provisional Patent Application No. 62/710,324 filed Feb. 16, 2018, 62/631,692 filed Feb. 17, 2018, and 62/753,203 filed Oct. 31, 2018, each of which is specifically incorporated by reference to the extent not inconsistent herewith.
Provisional Applications:

Number | Date | Country
---|---|---
62631692 | Feb 2018 | US
62710324 | Feb 2018 | US
62753203 | Oct 2018 | US
Continuation Data:

Relation | Number | Date | Country
---|---|---|---
Parent | 16970023 | Aug 2020 | US
Child | 18397618 | | US