Many of the embodiments described herein are compatible with embodiments described in the above related applications. Moreover, some or all of the features described herein can be used or otherwise combined with many of the features described in the applications listed above.
The “piezoelectric effect” is the appearance of an electric potential and current across certain faces of a crystal when it is subjected to mechanical stresses. Due to their capacity to convert mechanical deformation into an electric voltage, piezoelectric crystals have been broadly used in devices such as transducers, strain gauges and microphones. However, before the crystals can be used in many of these applications, they must be rendered into a form that suits the requirements of the application. In many applications, especially those involving the conversion of acoustic waves into a corresponding electric signal, piezoelectric membranes have been used.
Piezoelectric membranes are typically manufactured from polyvinylidene fluoride plastic film. The film is endowed with piezoelectric properties by stretching the plastic while it is placed under a high poling voltage. Stretching polarizes the film and aligns the molecular structure of the plastic. A thin layer of conductive metal (typically nickel-copper) is deposited on each side of the film to form electrode coatings to which connectors can be attached.
Piezoelectric membranes have a number of attributes that make them attractive for use in sound detection, including: a wide frequency range, from 0.001 Hz to 1 GHz; a low acoustical impedance close to that of water and human tissue; a high dielectric strength; good mechanical strength; and resistance to moisture and inertness to many chemicals.
Due in large part to the above attributes, piezoelectric membranes are particularly suited for the capture of acoustic waves and the conversion thereof into electric signals and, accordingly, have found application in the detection of body sounds. However, there is still a need for a reliable acoustic sensor, particularly one suited for measuring bodily sounds in noisy environments.
An aspect of an acoustic monitoring system has an acoustic front-end, a first signal path from the acoustic front-end directly to an audio transducer and a second signal path from the acoustic front-end to an acoustic data processor via an analog-to-digital converter. The acoustic front-end receives an acoustic sensor signal responsive to body sounds in a person. The audio transducer provides continuous audio of the body sounds. The acoustic data processor provides audio of the body sounds upon user demand.
In various embodiments, a second acoustic front-end receives a second acoustic sensor signal responsive to body sounds in a person. A third signal path is from the second acoustic front-end to a parameter processor via the analog-to-digital converter. The parameter processor derives a physiological measurement responsive to the body sounds. A fourth signal path is from the second acoustic front-end to the acoustic data processor via the analog-to-digital converter. The acoustic data processor provides a stereo audio output of the body sounds.
In other embodiments, the acoustic monitoring system further comprises a communications link to a remote site via a network. A trigger is responsive to the physiological measurement. A notification is transmitted over the communications link according to the trigger. The notification alerts an individual at the remote site of the physiological measurement and allows the individual to request that the body sounds be downloaded via the communications link. An optical front-end receives an optical signal responsive to pulsatile blood flow at a tissue site on the person. The parameter processor derives a second physiological measurement responsive to the pulsatile blood flow. The trigger is responsive to a combination of the physiological measurement and the second physiological measurement.
Further embodiments comprise acoustic filters implemented in the acoustic data processor. The filters define a series of audio bandwidths. Controls are in communication with the acoustic data processor. The controls are configured to adjust the audio bandwidths. The stereo audio output is adjusted by the controls so as to emphasize at least one of the audio bandwidths and de-emphasize at least one other of the audio bandwidths so that a user of the controls can focus on a particular aspect of the stereo body sound output. The acoustic monitoring system further comprises a display that is responsive in real-time to the stereo audio output.
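For illustration only, the following is a minimal sketch of how per-band emphasis controls of the kind described above could be realized in software; the band names, band edges, sample rate and the use of scipy are assumptions rather than the disclosed implementation.

```python
# Illustrative sketch only: per-band gain control over a digitized acoustic signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # assumed sample rate (Hz) of the digitized acoustic sensor signal

# Hypothetical audio bandwidths, e.g. heart sounds vs. breath sounds.
BANDS = {
    "heart":  (20.0, 150.0),
    "breath": (150.0, 1200.0),
}

def emphasize(signal, gains, fs=FS):
    """Split the signal into the defined bands, scale each band by a
    user-controlled gain, and sum the result. gains maps band name -> gain."""
    out = np.zeros_like(signal, dtype=float)
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out += gains.get(name, 1.0) * sosfiltfilt(sos, signal)
    return out

if __name__ == "__main__":
    # Example: emphasize breath sounds, de-emphasize heart sounds.
    t = np.arange(0, 2.0, 1.0 / FS)
    sig = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)
    focused = emphasize(sig, {"heart": 0.2, "breath": 2.0})
```

A user-facing control would simply adjust the gain values passed to emphasize() for each bandwidth.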
Another aspect of an acoustic monitoring system inputs a sensor signal responsive to body sounds of a living being. The sensor signal routes to an audio output device so as to enable a first user to listen to the body sounds. The sensor signal is digitized as acoustic data, and the acoustic data is transmitted to a remote device over a network. The acoustic data is reproduced on the remote device as audio so as to enable a second user to listen to the body sounds.
In various embodiments, the acoustic data is transmitted when a request is received from the remote device to transmit the body sounds. The request is generated in response to the second user actuating a listen-on-demand button on the remote device. Further, a physiological event is detected in the sensor signal and a notification is sent to the second user in response to the detected event. The acoustic data transmission comprises an envelope extracted from the acoustic data and a breath tag sent to the remote device that is representative of the envelope.
In other embodiments, the reproduced acoustic data comprises the envelope synthesized at the remote device in response to the breath tag, where the envelope is filled with white noise. The reproduced acoustic data may also comprise the envelope modified with a physiological feature derived from the breath tag. The acoustic data may be stored on a mass storage device as a virtual tape. A searchable feature of the contents of the virtual tape is logged in a database and the virtual tape is retrieved from the mass storage device according to the searchable feature.
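As a hedged sketch of the envelope and breath tag concept described above (the tag format, rates and filter settings are illustrative assumptions, not the disclosed design):

```python
# Minimal sketch: extract a breath envelope, compress it into a compact "breath
# tag" for transmission, and re-synthesize audio at the remote device by filling
# the envelope with white noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000          # assumed acoustic sample rate (Hz)
TAG_RATE = 50      # assumed envelope samples per second in the breath tag

def extract_envelope(signal, fs=FS):
    """Rectify and low-pass the sensor signal to obtain a breath envelope."""
    sos = butter(2, 10.0, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, np.abs(signal))

def make_breath_tag(envelope, fs=FS, tag_rate=TAG_RATE):
    """Decimate the envelope into a compact tag suitable for transmission."""
    step = int(fs // tag_rate)
    return envelope[::step].astype(np.float32)

def synthesize_from_tag(tag, fs=FS, tag_rate=TAG_RATE):
    """At the remote device, re-expand the tag and fill it with white noise."""
    n = int(len(tag) * fs / tag_rate)
    env = np.interp(np.linspace(0, len(tag) - 1, n), np.arange(len(tag)), tag)
    return env * np.random.randn(n)
```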
A further aspect of an acoustic monitoring system has an acoustic sensor, a sensor interface and a wireless communications device comprising a first monitor section. The acoustic sensor has a piezoelectric assembly, an attachment assembly and a sensor cable. The attachment assembly retains the piezoelectric assembly and one end of the sensor cable. The attachment assembly has an adhesive so as to removably attach the piezoelectric assembly to a tissue site. The other end of the sensor cable is in communication with the sensor interface so as to activate the piezoelectric assembly to be responsive to body sounds transmitted via the tissue site. The wireless communications device is responsive to the sensor interface so as to transmit the body sounds remotely.
The acoustic monitor has a second wireless communications device and an audio output device comprising a second monitor section. The second wireless communications device is responsive to the wireless communications device so as to receive the body sounds. The audio output device is responsive to the second wireless communications device so as to audibly and continuously reproduce the body sounds.
The first monitor section is located near a living person and the second wireless communications device is located remote from the living person and attended to by a user. The sensor is adhesively attached to the living person so that the user hears the body sounds from the living person via the sensor and a continuous audio output. In an embodiment, the first monitor section is located proximate to an infant, the sensor is adhesively attached to the infant, the second monitor section is located remote from the infant and proximate an adult, and the continuous audio output allows the adult to monitor the infant so as to avoid sudden infant death syndrome (SIDS).
The processors 130 include an audio processor 132 that outputs audio waveforms 142, a parameter processor 134 that derives physiological parameters 144 from sensor signals 112 and an acoustic data processor 136 that stores, retrieves and communicates acoustic data 146. Parameters include, as examples, respiration rate, heart rate and pulse rate. Audio waveforms include body sounds from the heart, lungs, gastrointestinal system and other organs. These body sounds may include tracheal air flow, heartbeats and pulsatile blood flow, to name a few. Displays allow parameters 144 and acoustic data 146 to be visually presented to a user in various forms such as numbers, waveforms and graphs, as examples. Audio 152 allows audio waveforms to be reproduced through speakers, headphones or similar transducers. Raw audio 122 allows acoustic sensor signals 112 to be continuously reproduced through speakers, headphones or similar transducers, bypassing A/D conversion 120 and digital signal processing 130.
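For example, one possible way (not necessarily the disclosed method) for the parameter processor 134 to derive a respiration rate from an acoustic sensor signal is to smooth the rectified signal into a breath envelope and count its peaks; the cutoff, peak spacing and threshold below are assumptions:

```python
# Hedged sketch of a respiration-rate estimate from a raw acoustic sensor signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

def respiration_rate_bpm(sensor_signal, fs):
    """Estimate breaths per minute from a raw acoustic sensor signal."""
    sos = butter(2, 5.0, btype="lowpass", fs=fs, output="sos")
    envelope = sosfiltfilt(sos, np.abs(sensor_signal))
    # Assume breaths are at least ~1.5 s apart and peaks exceed 30% of the maximum.
    peaks, _ = find_peaks(envelope, distance=int(1.5 * fs),
                          height=0.3 * float(np.max(envelope)))
    minutes = len(sensor_signal) / fs / 60.0
    return len(peaks) / minutes if minutes > 0 else 0.0
```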
Storage media 160 allows acoustic data 146 to be recorded, organized, searched, retrieved and played back via the processors 130, communications 156 and audio output 152. Communications 156 transmit or receive acoustic data or audio waveforms via local area or wide area data networks or cellular networks 176. Controls 158 may cause the audio processor 132 to amplify, filter, shape or otherwise process audio waveforms 142 so as to emphasize, isolate, deemphasize or otherwise modify various features of an audio waveform or spectrum. In addition, controls 158 include buttons and switches 178, such as a “push to play” button that initiates local audio output 152 or remote transmission 176 of live or recorded acoustic waveforms.
In an embodiment, sensor sounds 142 may be continuously “piped” to a remote device/listener or a central monitor or both. Listening devices may variously include pagers, cell phones, PDAs, electronic pads or tablets, and laptops or other computers, to name a few. Medical staff or other remote listeners are notified by the acoustic monitoring system according to flexible pre-programmed protocols and respond to the notification so as to hear breathing sounds, voice, heart sounds or other body sounds.
In various embodiments, the monitor 500 may be one or more processor boards installed within and communicating with a host instrument. Generally, a processor board incorporates the front-end, drivers, converters and DSP. Accordingly, the processor board derives physiological parameters and communicates values for those parameters to the host instrument. Correspondingly, the host instrument incorporates the instrument manager and I/O devices. A processor board may also have one or more microcontrollers (not shown) for board management, including, for example, communications of calculated parameter data and the like to the host instrument.
Communications 569 may transmit or receive acoustic data or audio waveforms via local area or wide area data networks or cellular networks. Controls may cause the audio processor to amplify, filter, shape or otherwise process audio waveforms so as to emphasize, isolate, deemphasize or otherwise modify various features of the audio waveform or spectrum. In addition, switches, such as a “push to play” button can initiate audio output of live or recorded acoustic data. Controls may also initiate or direct communications.
The network server 622 in certain embodiments provides logic and management tools to maintain connectivity between physiological monitors, clinician notification devices and external systems, such as EMRs. The network server 622 also provides a web-based interface that allows installing (provisioning) software related to the physiological monitoring system, adding new devices to the system, assigning notifiers to individual clinicians for alarm notification, configuring escalation algorithms for cases where a primary caregiver does not respond to an alarm, generating management reports on alarm occurrences, and journaling internal system performance metrics such as overall system uptime. The network server 622 in certain embodiments also provides a platform for advanced rules engines and signal processing algorithms that provide early alerts in anticipation of a clinical alarm.
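As a hedged sketch of the kind of escalation algorithm mentioned above, where an unacknowledged alarm is passed to the next clinician on a list (the function names, callbacks and timeout are hypothetical, not the disclosed logic):

```python
# Illustrative escalation loop: notify caregivers in order until one acknowledges.
import time

def notify_with_escalation(alarm, caregivers, send_notification,
                           acknowledged, timeout_s=60):
    """caregivers: ordered list of clinician notifier IDs.
    send_notification(clinician, alarm): pushes the alarm to that clinician's notifier.
    acknowledged(alarm): returns True once any clinician has responded."""
    for clinician in caregivers:
        send_notification(clinician, alarm)
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            if acknowledged(alarm):
                return clinician          # responder found, stop escalating
            time.sleep(1)
    return None                           # no one responded; caller may broadcast
```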
Once the sound processing module characterizes a particular type of sound, the acoustic monitoring system can, depending on the identified sound, use the characterization to generate an appropriate response. For example, the system may alert the appropriate medical personnel to modify treatment. In one embodiment, medical personnel may be alerted via an audio alarm, mobile phone call or text message, or other appropriate means. In one example scenario, the breathing of the patient can become stressed or the patient may begin to choke due to saliva, mucosal secretions, or other buildup around an endotracheal tube. In an embodiment, the sound processing module can identify the stressed breathing sounds indicative of such a situation and alert medical personnel to the situation so that a muscle relaxant medication can be given to alleviate the stressed breathing or choking.
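By way of illustration only, a crude proxy for such a characterization is a band-energy ratio that flags segments where high-frequency energy dominates; the bands, threshold and alert hook below are assumptions and do not represent the disclosed sound processing module:

```python
# Illustrative sketch: flag possible stressed breathing from a spectral energy ratio.
import numpy as np

def band_energy(signal, fs, lo, hi):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs < hi)].sum()

def stressed_breathing(segment, fs, threshold=2.0):
    """Flag a segment when high-frequency energy dominates low-frequency energy."""
    hi = band_energy(segment, fs, 600, 1500)
    lo = band_energy(segment, fs, 100, 600)
    return hi / (lo + 1e-12) > threshold

def check_and_alert(segment, fs, alert):
    if stressed_breathing(segment, fs):
        alert("Possible stressed breathing detected")  # e.g. page, call or text
```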
According to some embodiments, acoustic sensors described herein can be used in a variety of other beneficial applications. For example, an auscultation firmware module may process a signal received by the acoustic sensor and provide an audio output indicative of internal body sounds of the patient, such as heart sounds, breathing sounds, gastrointestinal sounds, and the like. Medical personnel may listen to the audio output, such as by using a headset or speakers. In some embodiments, the auscultation module allows medical personnel to listen remotely for patient diagnosis, communication, etc. For example, medical personnel may listen to the audio output in a hospital room other than the patient's room, in another building, etc. The audio output may be transmitted wirelessly (e.g., via Bluetooth, IEEE 802.11, over the Internet, etc.) in some embodiments such that medical personnel may listen to the audio output from generally any location.
In various other embodiments, acoustic breathing waveforms are detected by an acoustic sensor, processed, transmitted and played on a local or remote speaker or other audio output device from actual (raw) data, synthetic data or artificial data. Actual data may be compressed, but is a nearly complete or totally complete reproduction of the actual acoustic sounds at the sensor. Synthetic data may be a synthetic version of the breathing sound, with the option for the remote listener to request additional resolution. Artificial data may simulate an acoustic sensor sound with minimal data rate or bandwidth, but is not as clinically useful as synthetic or actual data. Artificial data may, for example, be white noise bursts generated in sync with sensed respiration. Synthetic data is something between actual data and artificial data, such as the acoustic envelope process described above, which incorporates some information from the actual sensor signal. In an embodiment, breath sounds are artificially hi/lo frequency shifted or hi/lo volume amplified to distinguish inhalation from exhalation. In an embodiment, dual acoustic sensors placed along the neck are responsive to the relative time of arrival of tracheal sounds so as to distinguish inhalation and exhalation in order to appropriately generate the hi/lo frequency shifts.
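A minimal sketch of the artificial-data approach described above, generating white-noise bursts in sync with sensed respiration and distinguishing inhalation from exhalation with a hi/lo frequency shift (durations, filter corners and sample rate are assumptions, not the disclosed implementation):

```python
# Illustrative sketch: artificial breath audio built from shaped white-noise bursts.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # assumed playback sample rate (Hz)

def noise_burst(duration_s, lowpass_hz, fs=FS):
    """White-noise burst shaped by a low-pass filter."""
    noise = np.random.randn(int(duration_s * fs))
    sos = butter(4, lowpass_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, noise)

def artificial_breath_audio(breath_times, fs=FS):
    """breath_times: list of (inhale_s, exhale_s) durations from the monitor.
    Inhalation gets a higher-frequency burst than exhalation so the listener
    can tell the two phases apart."""
    pieces = []
    for inhale_s, exhale_s in breath_times:
        pieces.append(noise_burst(inhale_s, 1200, fs))   # "hi" burst: inhalation
        pieces.append(noise_burst(exhale_s, 400, fs))    # "lo" burst: exhalation
    return np.concatenate(pieces) if pieces else np.zeros(0)
```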
A physiological acoustic monitoring system has been disclosed in detail in connection with various embodiments. These embodiments are disclosed by way of example only and are not to limit the scope of the claims that follow. One of ordinary skill in the art will appreciate many variations and modifications.
This application claims the benefit of priority under 35 U.S.C. §119(e) of U.S. Provisional Application No. 61/252,099, filed Oct. 15, 2009, and U.S. Provisional Application No. 61/391,098, filed Oct. 8, 2010, the disclosures of each of which are incorporated in their entirety by reference herein. Additionally, this application relates to the following U.S. patent applications, the disclosures of which are incorporated in their entirety by reference herein:

App. No. | Filing Date | Title | Attorney Docket
---|---|---|---
60/893853 | Mar. 08, 2007 | MULTI-PARAMETER PHYSIOLOGICAL MONITOR | MCAN.014PR
60/893850 | Mar. 08, 2007 | BACKWARD COMPATIBLE PHYSIOLOGICAL SENSOR WITH INFORMATION ELEMENT | MCAN.015PR
60/893858 | Mar. 08, 2007 | MULTI-PARAMETER SENSOR FOR PHYSIOLOGICAL MONITORING | MCAN.016PR
60/893856 | Mar. 08, 2007 | PHYSIOLOGICAL MONITOR WITH FAST GAIN ADJUST DATA ACQUISITION | MCAN.017PR
12/044883 | Mar. 08, 2008 | SYSTEMS AND METHODS FOR DETERMINING A PHYSIOLOGICAL CONDITION USING AN ACOUSTIC MONITOR | MCAN.014A
61/252083 | Oct. 15, 2009 | DISPLAYING PHYSIOLOGICAL INFORMATION | MCAN.019PR
##/###### | Oct. 14, 2010 | BIDIRECTIONAL PHYSIOLOGICAL INFORMATION DISPLAY | MCAN.019A1
##/###### | Oct. 14, 2010 | BIDIRECTIONAL PHYSIOLOGICAL INFORMATION DISPLAY | MCAN.019A2
61/141584 | Dec. 30, 2008 | ACOUSTIC SENSOR ASSEMBLY | MCAN.030PR
61/252076 | Oct. 15, 2009 | ACOUSTIC SENSOR ASSEMBLY | MCAN.030PR2
12/643939 | Dec. 21, 2009 | ACOUSTIC SENSOR ASSEMBLY | MCAN.030A
61/313645 | Mar. 12, 2010 | ACOUSTIC RESPIRATORY MONITORING SENSOR HAVING MULTIPLE SENSING ELEMENTS | MCAN.033PR2
##/###### | Oct. 14, 2010 | ACOUSTIC RESPIRATORY MONITORING SENSOR HAVING MULTIPLE SENSING ELEMENTS | MCAN.033A
##/###### | Oct. 14, 2010 | ACOUSTIC RESPIRATORY MONITORING SENSOR HAVING MULTIPLE SENSING ELEMENTS | MCAN.033A2
##/###### | Oct. 14, 2010 | ACOUSTIC RESPIRATORY MONITORING SENSOR HAVING MULTIPLE SENSING ELEMENTS | MCAN.033A3
##/###### | Oct. 14, 2010 | ACOUSTIC PATIENT SENSOR | MCAN.033A4
##/###### | Oct. 14, 2010 | ACOUSTIC RESPIRATORY MONITORING SYSTEMS AND METHODS | MCAN.034A
61/252062 | Oct. 15, 2009 | PULSE OXIMETRY SYSTEM WITH LOW NOISE CABLE HUB | MCAN.035PR
61/265730 | Dec. 01, 2009 | PULSE OXIMETRY SYSTEM WITH ACOUSTIC SENSOR | MCAN.035PR3
##/###### | Oct. 14, 2010 | PULSE OXIMETRY SYSTEM WITH LOW NOISE CABLE HUB | MCAN.035A
##/###### | Oct. 14, 2010 | PHYSIOLOGICAL ACOUSTIC MONITORING SYSTEM | MCAN.046A
61/331087 | May 04, 2010 | ACOUSTIC RESPIRATION DISPLAY | MASIMO.800PR2
Number | Date | Country
---|---|---
61252099 | Oct 2009 | US
61391098 | Oct 2010 | US
61252083 | Oct 2009 | US
61252076 | Oct 2009 | US
61313645 | Mar 2010 | US
61252062 | Oct 2009 | US
61265730 | Dec 2009 | US
61331087 | May 2010 | US