The present invention relates to devices, systems, and methods for auscultation of a body, and particularly as may be utilized to analyze Eustachian tube function and intra-aural sounds during swallowing.
Auscultation, the practice of listening to the internal sounds of a body, is of great importance to many disciplines, such as the medical field. For example, auscultation of a body, such as the body of a patient, assists a medical professional in the diagnosis of ailments that may affect the patient. Auscultation is traditionally performed with a stethoscope, which may use a wide bell and/or a diaphragm to listen to a narrow range of low frequency acoustic signals, such as those associated with a patient's heartbeat. However, such approaches are fundamentally inadequate for many other diagnostic purposes, such as receiving higher frequency acoustic signals. Also, medical professionals are increasingly in need of more sensitive and more precise means of diagnosing patients. Current approaches, while effective, are subject to inherent limits based on past technologies.
One area of the art in need of improved systems and methods for auscultation is in the diagnosis of dysphagia and other issues affecting the throat, nasal cavity, and surrounding areas. Typical diagnostics for dysphagia include the use of fluoroscopy and a test known as the Modified Barium Swallow. A clinician monitors the patient's ability to swallow various boluses with differing volume and consistency while using a fluoroscope to view the action of the throat area, including the soft palate area known as the velum. The transit time of a given bolus can be one indication of whether a patient is suffering from dysphagia.
However, there is a need in the art for more precise methods of diagnosing dysphagia than visual observation alone. Auscultation techniques, such as those disclosed herein, can enhance diagnostic abilities.
Some or all of the above needs and/or problems may be addressed by certain embodiments of the disclosure. Certain embodiments may include systems, devices, and methods for auscultation of a body. According to one embodiment of the disclosure, there is disclosed a system. The system can include an internal auscultation device for use within a cavity of the body. The system can also include an external auscultation device for use outside the body. The system can further include an external computing device in communication with the internal auscultation device and the external auscultation device.
According to another embodiment of the disclosure, there is disclosed a method. The method can include configuring an internal auscultation device to be disposed within a cavity of a body. The method can include configuring the device with one end structured to receive an acoustic signal. The method can also include configuring the device to include a transducer capable of converting an acoustic signal into an electrical signal.
According to another embodiment of the disclosure, there is disclosed a device. The device can include an exterior molding configured to fit within a cavity of a body, such as the auditory canal of the ear, and the exterior molding can include an open end and a closed end. The exterior molding can include a chamber structured to receive an acoustic signal. The device can also include a transducer capable of converting an acoustic signal into an electrical signal.
In connection with the analysis and diagnosis of Eustachian tube function, intra-aural sounds, and bolus transit times, one or more features of the present invention may be deployed according to one or more inventive methodologies. Such a system can facilitate auscultation of a patient, as well as provide methods for diagnosing dysphagia.
According to one such methodology, a patient may be provided with at least an internal auscultation device which is disposed within the external auditory canal of the patient. In order to isolate intra-aural sounds, a relatively snug fit within the external auditory canal should be achieved so as to block external noise from reaching the internal auscultation device. The patient may be directed to swallow (with or without food, liquid, or other bolus) and the resulting audio signal may be observed and/or recorded. If the patient's Eustachian tube is functioning normally during swallowing, an initial intra-aural event should be detected which is approximately 30-80 milliseconds in duration, includes frequency content between approximately 10 kHz and 12 kHz, and precedes the bolus transit. This initial intra-aural event is indicative of the opening of the Eustachian tube that naturally occurs to equalize pressure just prior to swallowing. Once a bolus transit is complete, a secondary intra-aural event with the same or similar characteristics may be detected which is indicative of the closing of the Eustachian tube. Accordingly, a system and method of the present invention may be used to determine that a patient's Eustachian tube is functioning normally during swallowing.
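Purely by way of illustration and not limitation, the following sketch shows one way such an intra-aural event might be detected automatically in a digitized ear-canal recording. The sample rate, filter order, threshold factor, and function names are assumptions made for this example only and are not prescribed by this disclosure.

```python
# Hypothetical sketch: locate candidate intra-aural events (approximately
# 30-80 ms bursts of 10-12 kHz content) in an ear-canal recording.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_intra_aural_events(x, fs, band=(10_000.0, 12_000.0),
                              min_ms=30.0, max_ms=80.0, k=4.0):
    """Return (start_s, end_s) pairs of candidate intra-aural events.

    Assumes fs is well above 24 kHz (e.g., 44.1 kHz) so the 10-12 kHz
    band of interest lies below the Nyquist frequency.
    """
    # Band-limit to the approximately 10-12 kHz range described above.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(sosfiltfilt(sos, x))
    # Crude adaptive threshold: k times the median band-limited envelope.
    active = env > k * np.median(env)
    padded = np.concatenate(([False], active, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    events = []
    for s, e in zip(edges[0::2], edges[1::2]):
        dur_ms = 1000.0 * (e - s) / fs
        if min_ms <= dur_ms <= max_ms:
            events.append((s / fs, e / fs))
    return events

# Example (hypothetical): events = detect_intra_aural_events(ear_signal, fs=44_100)
```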
The patient may also be provided with an external auscultation device disposed against the patient's throat. When a patient swallows, the resulting velopharyngeal event can be monitored and/or recorded by the external auscultation device. The velopharyngeal event can comprise the closure of the velum of the patient against the posterior pharyngeal wall of the patient, as well as the transit of a bolus through the pharynx of the patient.
Accordingly, when a patient is confirmed to have normal Eustachian tube function, the timing of the bolus transit may be more accurately determined by marking the beginning of the bolus transit to coincide with the first intra-aural event, i.e., the opening of the Eustachian tube. Such a method is preferable as it provides a clear marker for the timing of a bolus transit as compared to, e.g., visual observation with a fluoroscope. However, the present invention may be utilized in conjunction with a fluoroscope to achieve synergies in diagnosis as well.
In yet further embodiments, the use of computerized pattern recognition engines may aid in diagnoses. Accordingly, one or more artificial neural nets may be trained with respect to the various audio events determined by the internal and external auscultation devices. An intra-aural event artificial neural net may be trained with respect to audio signals corresponding to the opening and closing of the Eustachian tubes during swallowing. Accordingly, the intra-aural event artificial neural net can be configured to determine whether an audio signal, or portion thereof, corresponds to an intra-aural event, and can be further configured to report such a determination to a desired user interface, such as a computer program or visual representation (e.g., waveform or spectrogram). A velopharyngeal event artificial neural net can likewise be trained with respect to an audio signal's inclusion of a bolus transit, and is accordingly configured to determine whether an audio signal, or portion thereof, corresponds to a bolus transit, and can be further configured to report such a determination to a desired user interface.
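As a hypothetical sketch only (the disclosure does not mandate any particular toolkit, feature set, or network topology), one such event network could be trained on labeled audio clips roughly as follows; scikit-learn's MLPClassifier stands in for the artificial neural net, and spectral_features is an illustrative helper invented for this example.

```python
# Illustrative training sketch for an intra-aural event classifier.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier

def spectral_features(clip, fs, nperseg=512):
    """Average log-power spectrum of one audio clip (a crude feature vector)."""
    f, t, S = spectrogram(clip, fs=fs, nperseg=nperseg)
    return np.log(S.mean(axis=1) + 1e-12)

def train_event_ann(clips, labels, fs):
    """clips: list of 1-D arrays; labels: 1 = intra-aural event, 0 = other."""
    X = np.vstack([spectral_features(c, fs) for c in clips])
    ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000)
    ann.fit(X, np.asarray(labels))
    return ann
```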
These and other objects, features and advantages of the present invention will become clearer when the drawings as well as the detailed description are taken into consideration.
For a fuller understanding of the nature of the present invention, reference should be had to the following detailed description taken in connection with the accompanying drawings in which:
Like reference numerals refer to like parts throughout the several views of the drawings.
Illustrative embodiments of the disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the disclosure are shown. The disclosure can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements.
Certain embodiments disclosed herein relate to systems for auscultation of a body. Referring to
In some embodiments, device 110 or device 120 or both can include an external layer that dampens ambient noise levels. This noise-dampening layer can be useful in reducing the unwanted acoustic information received by transducers 117 and/or 127. In this way, the acoustic signals ultimately processed by transducers 117 and/or 127 can more accurately represent the desired source of acoustic signals. In some embodiments, system 100 can include an audio output to one or more headphone devices 150. System 100 can include a time and frequency analysis 160, which can be displayed on a screen 180, analyzed for pattern recognition 170, or both.
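By way of a hedged illustration of the time and frequency analysis 160 and its display on screen 180 (matplotlib and the specific window parameters are assumptions, not requirements of this disclosure), a combined waveform and spectrogram view might be produced as follows.

```python
# Illustrative time/frequency view of a captured auscultation signal.
import numpy as np
import matplotlib.pyplot as plt

def show_time_frequency(x, fs):
    """Plot the waveform (time analysis) and spectrogram (frequency analysis)."""
    t = np.arange(len(x)) / fs
    fig, (ax_wave, ax_spec) = plt.subplots(2, 1, sharex=True)
    ax_wave.plot(t, x)
    ax_wave.set_ylabel("Amplitude")
    ax_spec.specgram(x, NFFT=1024, Fs=fs, noverlap=512)
    ax_spec.set_ylabel("Frequency (Hz)")
    ax_spec.set_xlabel("Time (s)")
    plt.show()
```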
In some embodiments, device 110 and/or device 120 can be configured with a bell structure, for example, with a wider opening at the proximal end. The bell structure can be vented into the primary opening 115 and/or 125 via a small hole within one of the internal chambers. The diameter of the hole is selected to allow a desired amount of low frequency content into the high frequency primary opening. When the device 110 or 120 is placed against a body, high frequencies can be captured in the primary opening 115 or 125. Lower frequency content can be simultaneously, and separately, captured in the larger concentric bell. Because lower frequencies can be captured more efficiently, they are attenuated before being "mixed" with the high frequency content. The attenuation and mixing can be accomplished by allowing low frequency content to pass into the high frequency opening through a small diameter hole. The composite sound can be captured by a single microphone in a microphone chamber, and this chamber can be sealed with a cap.
In some embodiments, devices 110 and/or 120 can include exterior moldings with layering of multiple, dissimilar materials. This layering can, among other things, create an impedance barrier to vibrational energy and dampen resonant characteristics of denser materials. In a basic form, the layering can comprise three materials layered on one another, and the layering can be expanded to include more layers, which can increase performance if, for example, the material for each layer is of a different density than its neighboring material(s). In one embodiment, the layering can include a first material that constitutes the outer body of device 110 and/or 120. This outer body can be a rigid material of moderate to high density such as, but not limited to, aluminum, steel, stainless steel, or any number of high density plastics. The outer body of device 110 and/or 120 can then be shrouded in a layer of a second material on all faces except for the proximal end oriented toward the source of an acoustic signal. The second material can be pliable, such as one composed of putty, gel rubber, or foam. This second material can impede the transmission of vibrational energy and serve to dampen resonant characteristics of the first material. This second layer can, in turn, be shrouded by a third material on all faces except the proximal end. The third material can be similar to the first material in that it is rigid and of moderate to high density. In some embodiments, the first and third materials are different from each other even though they may share some qualities and properties. Configuring the materials to be dissimilar can increase performance of device 110 and/or 120. However, device 110 and/or 120 still performs as desired if the first and third materials are the same but, in some scenarios, that performance may be decreased. The usage of multiple, dissimilar layers works to create multiple impedance barriers which can significantly reduce the amount of vibrational energy transmitted through device 110 and/or 120. The layers also can serve to dampen resonant characteristics of the rigid materials.
In some embodiments, for example to satisfy sanitization, acoustic, and/or functional requirements of some stethoscope applications, a diaphragm can be attached, such as temporarily, to the outermost layer of device 110 and/or 120 at the proximal end. The diaphragm can be molded from a single piece of plastic or it can be constructed using multiple materials, depending on the desired acoustic or other requirements. Some benefits of a disposable diaphragm include: economical production as a single piece of plastic; construction with varying thicknesses or materials to provide alternative acoustic characteristics; a sanitary barrier for the microphone and interior elements of a stethoscope; hands-free attachment packaging (similar to otoscope ends) so a new diaphragm can be attached without a user touching the diaphragm before use; facilitation of auscultation over clothing for cases where that scenario is required; and mechanical isolation of the microphone housing from the body. The outer ring of the diaphragm can be configured to fit over the outer shell of device 110 and/or 120 at the proximal end. This ring can be rigid and can include a locking mechanism to prevent the diaphragm from falling off during use. A seal can be created with the same or different materials to provide an acoustic (air-tight) closure over the proximal end. The seal can also provide a mechanical stand-off so the diaphragm does not come in contact with the primary inner bell structure, as the seal can be made with the outer shell only. The diaphragm can be between 0.1 millimeters and 0.75 millimeters thick to provide good isolation while allowing vibrations from the body to pass through with minimal impedance. The entire diaphragm assembly need not touch any part of the inner microphone housing, thereby providing, among other things, mechanical isolation from environmental and other unwanted noise sources.
In one embodiment, system 100 can be used for obtaining acoustic information relating to the physiology of a person's swallowing. System 100 can be used to monitor the person's swallowing over a period of time or during fluoroscopy. In one embodiment of the swallowing monitoring, an external auscultation device 120 can be attached to a person via straps and/or an adhesive. This external auscultation device 120 can be placed at the midline of the neck, for example, inferior to the thyroid cartilage and superior to the jugular notch. In this embodiment, an internal auscultation device 110 can be placed in an ear canal of the person and held in place by the foam or elastic material comprising the exterior molding of the device 110. This placement could also be within a nasal cavity. In this and other embodiments, the signals from auscultation devices 110 and 120 can be amplified by microphone preamplifier 130 and processed by DSP 140, analyzer 160, and recognition device 170 before the data is displayed to a clinician via display 180. The information received by separate auscultation devices, for example by devices 110 and 120, can be carried by separate cables and/or the separate signals can be carried by a single multi-channel cable. In one embodiment, some or all of the components such as the microphone preamplifier, DSP, time/frequency analysis, pattern recognition system, and display can be contained within a single device, such as a handheld device. The handheld device can include one or more light-emitting diodes (LEDs) to denote the presence or absence of some information, or to convey other information to a clinician, for example, and can be included in the display 180. In this or other embodiments, the display 180 can include a screen, such as a touchscreen, to both convey information to a user and receive input from a user.
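For illustration only, a minimal sketch of the multi-channel capture described above might look as follows; the file name, channel assignment, and 16-bit PCM scaling are assumptions made for this example, and the downstream helpers referenced in the comments are the hypothetical sketches given elsewhere in this description.

```python
# Hypothetical sketch: one two-channel recording carrying both devices
# (channel 0: internal device 110 in the ear canal; channel 1: external
# device 120 at the throat).
from scipy.io import wavfile

fs, data = wavfile.read("swallow_session.wav")      # hypothetical recording
internal = data[:, 0].astype(float) / 32768.0       # ear-canal signal, assuming 16-bit PCM
external = data[:, 1].astype(float) / 32768.0       # throat signal, assuming 16-bit PCM

# Downstream stages (see the illustrative sketches elsewhere in this description):
# events = detect_intra_aural_events(internal, fs)  # intra-aural event detection
# show_time_frequency(external, fs)                 # on-screen time/frequency analysis
```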
According to another embodiment of the invention, and with reference to
Method 200 may optionally end following block 240.
The operations described and shown in method 200 of
According to another embodiment of the invention, and with reference to
In some embodiments, device 300 can include an external computing device. The external computing device can receive communication from transducer 330, for example, through wireless network communication. In one embodiment, transducer 330 receives a body's acoustic signal via opening 320. Transducer 330 can then convert that acoustic signal into an electrical signal for, among other reasons, more efficient transmission of the acoustic signal to a remote location. In some embodiments the electrical signal originating at the transducer 330 can be received by a microphone preamplifier. The microphone preamplifier can boost the electrical signal for continued transmission. In some embodiments, device 300 can include a DSP. The DSP can receive the signal from the microphone preamplifier, or from the transducer 330, or both. The DSP can include processing that incorporates audio frequency dynamic range control and/or equalization. The audio processing can also include frequency filtering. Device 300 can perform time and frequency analysis on the audio signal. In some embodiments, a time and frequency analysis can be used to perform a pattern recognition evaluation of the frequency, intensity, and/or time. In some embodiments, device 300 includes a display. The display can output, for example, the pattern recognition evaluation, the time and frequency analysis, and/or other information pertaining to the auscultation. The display can include one or more light-emitting diodes (LEDs) for displaying information. The display can also include a screen for displaying information. In some embodiments, the display can include an interactive touch screen.
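A minimal sketch of the kind of processing such a DSP stage might perform is given below, combining simple frequency filtering with a crude dynamic range control. The cutoff frequency, limiter level, and structure are illustrative assumptions and not a definitive implementation of the DSP described above.

```python
# Hypothetical DSP pass: frequency filtering followed by dynamic range control.
import numpy as np
from scipy.signal import butter, sosfilt

def dsp_stage(x, fs, highpass_hz=100.0, limit=0.5):
    # Equalization/filtering: remove low-frequency rumble below ~100 Hz (assumed cutoff).
    sos = butter(2, highpass_hz, btype="highpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    # Dynamic range control: smooth soft limiter keeps peaks near +/-limit.
    return limit * np.tanh(y / limit)
```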
In some embodiments, device 300 can include multiple assemblies containing a molding 310 with a transducer 330. Some or all of the assemblies can transmit their respective acoustic information to a DSP. Some of the multiple assemblies can be designed for internal (e.g., within a cavity) placement, and some of the assemblies can be designed for external (e.g., outside a cavity) placement. In some embodiments, device 300 can include one or more headphone outputs to enable listening to the signals that have been captured. The headphone outputs can be connected to the DSP. The headphone outputs can be standard headphones, or the headphone outputs can be purpose-built to work with device 300 for auscultation of a body.
One example of the headphone outputs can provide hearing protection in high-noise environments while simultaneously providing high quality, electronic sound with situational/directional integrity of the sound. These headphone output embodiments can also be utilized in other applications, such as in extremely loud ambient noise scenarios, in addition to use with the present disclosure. The outputs can include a circumaural muff designed to reduce ambient sound by at least 30 decibels. The headphone outputs can include one or more in-ear "buds." The buds can use foam eartips and a fully sealed system to provide additional ambient noise rejection of 20 decibels and higher. The buds can include a speaker for audio playback. The outputs can also include an electronic voice communication input, for example, a wired audio connection or a wireless audio receiver, such as Bluetooth, 2.4 GHz, etc. The outputs can also include a situational awareness microphone input. One embodiment of the situational awareness microphone input can include at least one microphone mounted on the outside of each circumaural earcup, and each microphone can be positioned to face forward relative to the wearer's face. Each microphone may be contained within a manifold designed to mimic the mechanical filtering of a human ear, and the output of each microphone can feed a pre-amplifier. The headphone outputs can also include DSP and amplification. The DSP can receive the electronic sound or voice communication and the preamplified situational awareness microphone signal. The DSP is programmed to provide increased speech intelligibility for voice communications and to create a natural, realistic recreation of the directional and situational (e.g., outside world) sound in the microphone signal. The output signal from the DSP can be fed into an amplifier which drives the speakers in the in-ear buds. The in-ear buds can be tethered to the interior of the earcups such that no external wires need exit the earcups and compromise the seal of the muffs against the wearer's head. The tether wire (which can carry the signal to speakers in the buds) can be governed by a spring-loaded or ratcheting take-up reel. Inclusion of the take-up reel could allow an unusually long tether wire to be used. A longer tether wire can allow for easier placement of the buds into the user's ears. Any excess length of wire could then be automatically (or at the press of a button) coiled back when the muffs are positioned on the user's head. This could eliminate the need to bunch up the excess wire inside the earcup, which makes the system easier to put on. Eliminating excess wire bunch can also provide superior comfort to the user. The DSP can be programmed in multiple ways, and the signal could be affected as described, for example, in U.S. Pat. Nos. 8,160,274 and/or 9,264,004, for the purpose of, among other things, effectively and measurably increasing the intelligibility of the incoming sound signal, including human speech. Some benefits of using this type of method can include: superior frequency response control allowing for natural and realistic representation of real-world acoustic environments; the ability to limit extremely loud transient sounds to safe levels without any loss of or artifacting of other environmental sounds (e.g., if a person is speaking and a gunshot occurs nearby, the gunshot can be limited to a safe level while the person's voice would be perceived to remain at a consistent level); and, if coupled with the aforementioned microphone manifold, a perfect recreation of the directionality of environmental sounds on all axes.

Output signals from the DSP can be combined in multiple, different ways for different embodiments of the system. For example, the DSP can provide a user with a level control by which the mix between voice communication and situational awareness can be continually adjusted. Also, voice communication may always be enabled with the situational awareness muted. The situational awareness can then be turned on by use of a momentary switch, for example, located on an external portion of one or both ear muffs. This could allow a push-to-talk type of feature for communicating with persons within the environment. Additionally, the DSP can, by default, have both voice communication and situational awareness turned on, while being programmed with a threshold for automatic muting and unmuting of the situational awareness microphones. Yet another example is that the voice communication input can be combined with or replaced by an additional wired or wireless audio input designed to carry entertainment, e.g., music, etc. If the two are combined into a single channel, the DSP can be programmed for multiple modes in order to, among other things, provide superior speech intelligibility for voice and digital audio enhancement for entertainment. If the two remain in separate channels, they could be processed separately by the DSP for their respective purposes.
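As a hypothetical block-by-block sketch of the mixing behavior described above (the mix law, threshold logic, and names are assumptions, not the DSP program of this disclosure), the two inputs might be combined like this.

```python
# Illustrative per-block mix of voice communication and situational awareness audio.
import numpy as np

def mix_block(voice, ambient, mix=0.5, mute_threshold=None):
    """Combine one block of voice-communication audio with one block of
    situational-awareness (ambient microphone) audio.

    mix: 0.0 = voice only, 1.0 = situational awareness only (the user level control).
    mute_threshold: if set, ambient audio is muted unless its RMS exceeds this
    value (a crude stand-in for the automatic mute/unmute behavior described above).
    """
    if mute_threshold is not None:
        rms = np.sqrt(np.mean(np.square(ambient)))
        if rms < mute_threshold:
            ambient = np.zeros_like(ambient)
    return (1.0 - mix) * voice + mix * ambient
```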
Turning to
The internal auscultation device 110′ and external auscultation device 120′ may also be disposed in communication with an audio processing module 510 such that the audio signal captured by either of the auscultation devices 110′, 120′ is processed by the audio processing module 510. According to various embodiments, the audio processing module 510 may include a microphone preamp 511, an analog-to-digital converter 512, an audio pre-processing module 513 (such as to adjust the gain of certain frequencies of the audio signal), and an analysis buffer 514. The analysis buffer 514 can be configured to hold the processed audio signal in a memory and transmit the audio signal to a pattern recognition engine 170′, 170″ in predetermined packets and/or predetermined time intervals.
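One hypothetical way the analysis buffer 514 might hold processed audio and hand it off in fixed-duration packets is sketched below; the packet length and interface are assumptions for illustration only.

```python
# Illustrative analysis buffer: accumulate processed audio, emit fixed-size packets.
import numpy as np

class AnalysisBuffer:
    def __init__(self, fs, packet_seconds=0.25):
        self.packet_len = int(fs * packet_seconds)      # assumed packet duration
        self._pending = np.empty(0, dtype=np.float32)

    def push(self, samples):
        """Append new samples; return a list of complete packets, if any."""
        self._pending = np.concatenate(
            [self._pending, np.asarray(samples, dtype=np.float32)])
        packets = []
        while len(self._pending) >= self.packet_len:
            packets.append(self._pending[:self.packet_len])
            self._pending = self._pending[self.packet_len:]
        return packets
```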
According to various embodiments, the pattern recognition engine 170′, 170″ may comprise an ensemble of artificial neural networks (“ANN”) 550, including an intra-aural event ANN 530 and a velopharyngeal event ANN 540. The intra-aural event ANN 530 is trained with audio signals comprising the opening and closing of Eustachian tubes and is therefore configured to determine whether an audio signal includes either the opening or closing of a Eustachian tube. Likewise, the velopharyngeal event ANN 540 is trained with audio signals comprising bolus transits, and is therefore configured to determine whether an audio signal includes a bolus transit. The ensemble ANN 550 may comprise both of the intra-aural event ANN 530 and velopharyngeal event ANN 540 utilized in conjunction, and each receiving a signal from one of the internal auscultation device 110′ or the external auscultation device 120′. The pattern recognition engine 170′ may be further configured to produce one or more output reports 570, 580 to a desired user interface. The output reports 570, 580 may include data pertaining to determinations made by the pattern recognition engine 170′, such as whether an intra-aural event occurred, whether a velopharyngeal event occurred, and a time duration of the velopharyngeal event. The pattern recognition engine 170′ may also communicate data to an audio visualization 400.
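As a hedged sketch of how such an ensemble might combine the two networks and produce an output report, the following example runs one classifier per channel and assembles a simple report dictionary. It assumes the hypothetical spectral_features helper and the scikit-learn classifiers from the training sketch given earlier; the report fields and the 0.5 decision threshold are likewise assumptions.

```python
# Illustrative ensemble inference: one packet per device, one report per swallow.
def classify_swallow(internal_packet, external_packet, fs,
                     intra_aural_ann, velopharyngeal_ann):
    """Run both networks on their respective channels and build a report."""
    intra_prob = intra_aural_ann.predict_proba(
        [spectral_features(internal_packet, fs)])[0, 1]
    velo_prob = velopharyngeal_ann.predict_proba(
        [spectral_features(external_packet, fs)])[0, 1]
    return {
        "intra_aural_event": bool(intra_prob > 0.5),
        "velopharyngeal_event": bool(velo_prob > 0.5),
        "intra_aural_confidence": float(intra_prob),
        "velopharyngeal_confidence": float(velo_prob),
    }
```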
The system 500, 500′ may also include a data storage unit 520, particularly for storage of pulse code modulated audio signals produced by the analog-to-digital converter 512. The data storage unit 520 may further communicate data to an audio visualization 400.
Turning to
As might be erroneously noted upon examining only the audio signal 411 captured via the external auscultation device, the bolus transit 450 appears to be approximately one-half of a second in duration. However, when corresponding audio data 412 from the same swallow, which was captured via the internal auscultation device, is also examined, it can be seen that the initial intra-aural event begins just before the bolus transit 450 appears on the waveform 410. Accordingly, more accurate bolus transit times can be ascertained if the intra-aural event is observed in conjunction with the bolus transit, and the beginning of the bolus transit is marked with respect to the initial intra-aural event.
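Purely as an illustration of the timing rule described above (the event-detection inputs and names are assumptions carried over from the earlier sketches), the bolus transit duration could be computed by anchoring its start to the onset of the initial intra-aural event.

```python
# Illustrative timing calculation anchored to the initial intra-aural event.
def bolus_transit_time(intra_aural_events, velopharyngeal_end_s):
    """Duration in seconds from the onset of the first intra-aural event
    (Eustachian tube opening) to the end of the velopharyngeal event.

    intra_aural_events: list of (start_s, end_s) pairs, e.g. from the
    hypothetical detect_intra_aural_events sketch above.
    """
    if not intra_aural_events:
        return None  # cannot anchor the start without an intra-aural event
    start_s = intra_aural_events[0][0]
    return velopharyngeal_end_s - start_s
```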
Since many modifications, variations and changes in detail can be made to the described preferred embodiment of the invention, it is intended that all matters in the foregoing description and shown in the accompanying drawings be interpreted as illustrative and not in a limiting sense. Thus, the scope of the invention should be determined by the appended claims and their legal equivalents.
The present non-provisional patent application claims priority pursuant to 35 U.S.C. Section 119(e) to a currently pending, and prior-filed, provisional patent application, namely that having Ser. No. 62/713,793 filed on Aug. 2, 2018; the present non-provisional patent application is also a continuation-in-part application of currently pending, and prior-filed, non-provisional patent application having Ser. No. 16/116,334 filed on Aug. 29, 2018, which itself claims priority pursuant to 35 U.S.C. Section 119(e) to provisional patent application having Ser. No. 62/554,668 filed on Sep. 6, 2017; the contents of each of the foregoing are expressly incorporated herein by reference in their entireties.
Provisional Applications:

Number | Date | Country
---|---|---
62/713,793 | Aug. 2, 2018 | US
62/554,668 | Sep. 6, 2017 | US

Parent Case Data:

Relation | Number | Date | Country
---|---|---|---
Parent | 16/116,334 | Aug. 29, 2018 | US
Child | 16/530,195 | | US