SYSTEMS AND METHODS FOR EUSTACHIAN TUBE FUNCTION, INTRA-AURAL, AND BOLUS TRANSIT SOUND ANALYSIS

Information

  • Patent Application
  • Publication Number
    20200029886
  • Date Filed
    August 02, 2019
  • Date Published
    January 30, 2020
Abstract
Acoustic listening methods, devices, and systems herein relate to auscultation of a body. An auscultation device as disclosed herein can be operable to function within a cavity of the body, and can operate in conjunction with other auscultation devices, including with external auscultation devices. Individual devices and grouped devices can operate with the addition of a computing device. Systems and methods are disclosed which facilitate analysis and diagnosis of Eustachian function, swallow sounds, and bolus transit.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present invention relates to devices, systems, and methods for auscultation of a body, and particularly as may be utilized to analyze Eustachian tube function and intra-aural sounds during swallowing.


Description of the Related Art

Auscultation, the practice of listening to the internal sounds of a body, is of great importance to many disciplines, such as the medical fields. For example, auscultation of a body, such as the body of a patient, assists a medical professional in the diagnosis of ailments that may affect the patient. Such auscultation may be traditionally achieved with a stethoscope, which may use a wide bell and/or a diaphragm to listen to a narrow range of low frequency acoustic signals, such as those associated with a patient's heartbeat. However, such approaches are fundamentally inadequate for many other diagnostic purposes, such as receiving higher frequency acoustic signals. Also, medical professionals are increasingly in need of more sensitive and more precise means of diagnosing patients. Current approaches, while effective, are subject to inherent limits based on past technologies.


One area of the art in need of improved systems and methods for auscultation is in the diagnosis of dysphagia and other issues affecting the throat, nasal cavity, and surrounding areas. Typical diagnostics for dysphagia include the use of fluoroscopy and a test known as the Modified Barium Swallow. A clinician monitors the patient's ability to swallow various boluses with differing volume and consistency while using a fluoroscope to view the action of the throat area, including the soft palate area known as the velum. The transit time of a given bolus can be one indication that a patient is suffering from dysphagia.


However, there is a need in the art for more precise methods of diagnosing dysphagia other than with visual observation. Auscultation techniques, such as those disclosed herein, can enhance diagnostic abilities.


SUMMARY OF THE INVENTION

Some or all of the above needs and/or problems may be addressed by certain embodiments of the disclosure. Certain embodiments may include systems, devices, and methods for auscultation of a body. According to one embodiment of the disclosure, there is disclosed a system. The system can include an internal auscultation device for use within a cavity of the body. The system can also include an external auscultation device for use outside the body. And the system can also include an external computing device in communication with the internal auscultation device and the external auscultation device.


According to another embodiment of the disclosure, there is disclosed a method. The method can include configuring an internal auscultation device to be disposed within a cavity of a body. The method can include configuring the device with one end structured to receive an acoustic signal. The method can also include configuring the device to include a transducer capable of converting an acoustic signal into an electrical signal.


According to another embodiment of the disclosure, there is disclosed a device. The device can include an exterior molding configured to fit within a cavity of a body, such as the auditory canal of the ear, and the exterior molding can include an open end and a closed end. The exterior molding can include a chamber structured to receive an acoustic signal. The device can also include a transducer capable of converting an acoustic signal into an electrical signal.


In connection with the analysis and diagnosis of Eustachian tube function, intra-aural sounds, and bolus transit times, one or more features of the present invention may be deployed to accomplish the same in connection with one or more inventive methodologies. Such a system can facilitate auscultation of a patient, as well as provide methods for diagnosing dysphagia.


According to one such methodology, a patient may be provided with at least an internal auscultation device which is disposed within the external auditory canal of the patient. In order to isolate intra-aural sounds, a relatively snug fit within the external auditory canal should be accomplished in order to block external noise from reaching the internal auscultation device. The patient may be directed to swallow (with or without food, liquid, or other bolus) and the resulting audio signal may be observed and/or recorded. For normal function of the patient's Eustachian tube during swallowing, an initial intra-aural event should be detected which is approximately 30-80 milliseconds in duration, which includes frequency content between approximately 10 kHz and 12 kHz, and which precedes the bolus transit. This initial intra-aural event is indicative of the opening of the Eustachian tube that naturally occurs to equalize pressure just prior to swallowing. Once a bolus transit is complete, a secondary intra-aural event with the same or similar characteristics may be detected which is indicative of the closing of the Eustachian tube. Accordingly, a system and method of the present invention may be used to determine that a patient's Eustachian tube is functioning normally during swallowing.
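By way of illustration, the intra-aural event detection described above can be sketched as a simple envelope detector: band-limit the audio to the approximately 10-12 kHz range, smooth the rectified signal, and flag excursions above a noise-floor threshold lasting roughly 30-80 milliseconds. This is a minimal sketch only; the 4x-median threshold, the filter order, and the 5 millisecond smoothing window are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_intra_aural_events(signal, fs, band=(10e3, 12e3),
                              min_dur=0.030, max_dur=0.080, thresh_ratio=4.0):
    """Return (start_time, end_time) pairs, in seconds, for segments whose
    10-12 kHz energy envelope exceeds a noise-floor threshold for roughly
    30-80 ms, matching the event profile described above."""
    # Band-limit to the stated frequency content of the event.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    # Smooth the rectified signal into an amplitude envelope (~5 ms window).
    win = int(0.005 * fs)
    env = np.convolve(np.abs(filtered), np.ones(win) / win, mode="same")
    # Threshold relative to the median envelope (a crude noise-floor estimate).
    above = env > thresh_ratio * np.median(env)
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            dur = (i - start) / fs
            if min_dur <= dur <= max_dur:
                events.append((start / fs, i / fs))
            start = None
    return events
```

A detected event preceding the bolus transit would then be taken as the Eustachian tube opening, per the methodology above.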


The patient may also be provided with an external auscultation device disposed against the patient's throat. When a patient swallows, the resulting velopharyngeal event can be monitored and/or recorded by the external auscultation device. The velopharyngeal event can comprise the closure of the velum of the patient against the posterior pharyngeal wall of the patient, as well as the transit of a bolus through the pharynx of the patient.


Accordingly, when a patient is confirmed to have normal Eustachian tube function, the timing of the bolus transit may be more accurately determined by marking the beginning of the bolus transit to coincide with the first intra-aural event, i.e., the opening of the Eustachian tube. Such a method is preferable as it provides a clear marker for the timing of a bolus transit as compared to, e.g., visual observation with a fluoroscope. However, the present invention may be utilized in conjunction with a fluoroscope to achieve synergies in diagnosis as well.
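Under the timing convention just described, the bolus transit measurement reduces to a difference between event times. The helper below is a hypothetical illustration, assuming the intra-aural event times and the end of the velopharyngeal event have already been extracted from the recordings.

```python
def bolus_transit_time(intra_aural_events, velopharyngeal_end):
    """Mark the bolus transit as beginning at the first intra-aural event
    (the Eustachian tube opening) and ending when the velopharyngeal event
    concludes. Events are (start, end) times in seconds."""
    if not intra_aural_events:
        raise ValueError("no intra-aural event detected; "
                         "Eustachian function not confirmed")
    opening_start, _ = intra_aural_events[0]
    return velopharyngeal_end - opening_start
```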


In yet further embodiments, the use of computerized pattern recognition engines may aid in diagnoses. Accordingly, one or more artificial neural nets may be trained with respect to the various audio events determined by the internal and external auscultation devices. An intra-aural event artificial neural net may be trained with respect to audio signals corresponding to the opening and closing of the Eustachian tubes during swallowing. Accordingly, the intra-aural event artificial neural net can be configured to determine whether an audio signal, or portion thereof, corresponds to an intra-aural event, and can be further configured to report such a determination to a desired user interface, such as a computer program or visual representation (e.g., waveform or spectrogram). A velopharyngeal event artificial neural net can likewise be trained with respect to an audio signal's inclusion of a bolus transit, and is accordingly configured to determine whether an audio signal, or portion thereof, corresponds to a bolus transit, and can be further configured to report such a determination to a desired user interface.
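As a rough sketch of such a pattern recognition engine, the following defines a small feed-forward net operating on log band-energy features of an audio frame. The architecture, the feature extraction, and the randomly initialized weights are all illustrative assumptions; an actual embodiment would train the weights on labeled intra-aural or velopharyngeal recordings.

```python
import numpy as np

def band_energy_features(frame, n_bands=8):
    """Summarize a short audio frame as log energies in n_bands equal
    frequency bands -- a simple input representation for an event net."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.sum() for b in bands]))

class EventNet:
    """Minimal two-layer net: features -> hidden -> P(event). Weights are
    randomly initialized here; training on labeled event data is assumed."""
    def __init__(self, n_in, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.standard_normal(n_hidden) * 0.1
        self.b2 = 0.0

    def predict(self, features):
        # Forward pass producing a probability that the frame is an event.
        h = np.tanh(features @ self.w1 + self.b1)
        logit = h @ self.w2 + self.b2
        return 1.0 / (1.0 + np.exp(-logit))
```

Separate instances, trained on their respective event classes, would play the roles of the intra-aural and velopharyngeal event nets described above.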


These and other objects, features and advantages of the present invention will become clearer when the drawings as well as the detailed description are taken into consideration.





BRIEF DESCRIPTION OF THE DRAWINGS

For a fuller understanding of the nature of the present invention, reference should be had to the following detailed description taken in connection with the accompanying drawings in which:



FIG. 1 illustrates an example system of auscultation of a body according to one embodiment of the disclosure.



FIG. 2 is a flow diagram of an example method for auscultation of a body.



FIG. 3A illustrates a cross-section view of an example device for auscultation of a body, according to one embodiment of the disclosure.



FIG. 3B illustrates a view into an open end of an example device for auscultation of a body, according to one embodiment of the disclosure.



FIG. 4 is a schematic depiction of an analysis system in accordance with one embodiment of the present invention.



FIG. 5 is a schematic depiction of an analysis system in accordance with another embodiment of the present invention.



FIG. 6 is an exemplary depiction of an audio visualization of audio signals captured by at least one embodiment of the present invention.





Like reference numerals refer to like parts throughout the several views of the drawings.


DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Illustrative embodiments of the disclosure will now be described more fully hereinafter with reference to the accompanying drawings in which some, but not all, embodiments of the disclosure are shown. The disclosure can be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements.


Certain embodiments disclosed herein relate to systems for auscultation of a body. Referring to FIG. 1, depicted is an example system 100 for auscultation of a body. System 100 can be used in conjunction with method 200 and device 300. System 100 can capture acoustic signals and process the acoustic signals for analysis and diagnosis by a human or by a pattern recognition engine 170, or both. The system 100 can detect higher frequency signals that are not normally used in clinical diagnosis. One goal of system 100 is to provide bedside screening for dysphagia or other disorders related to the head and neck region. The system 100 can use a microphone enclosure for capturing sounds 110 from cavities, such as ear and nasal cavities; and the system 100 can also use a tuned resonant structure enclosing a microphone 120. The signal can be fed to an adaptive audio signal processing system capable of normalizing audio level and frequency content for the desired sounds. The audio processing can include dynamic range control and frequency filtering. System 100 can be used for biological screening and analysis, among other applications. System 100 can be used by a bedside clinician or caretaker to administer a screening test or to do more in-depth analysis. System 100 can include an acoustic capture device 110 for use within a cavity of a body. In some embodiments, the cavity can be an ear or nasal cavity. System 100 can include an internal auscultation device 110. Internal auscultation device 110 can include an exterior molding, and the exterior molding can include a proximal and a distal end. The proximal end can include an opening 115 that is dimensioned and configured for acoustic engagement within a cavity of the body. 
Acoustic engagement within a particular cavity can depend on the cavity, for example, one operative orientation for device 110 is when opening 115 is pointed toward the inside of the cavity while the distal end of device 110 is pointed toward outside the cavity. Internal auscultation device 110 can also include one or more chambers configured within the exterior molding, and the chambers can be collectively structured to receive an acoustic signal, for example, when opening 115 is pointed toward a source of the acoustic signal. Device 110 can include one or more transducers 117 disposed within the chambers and configured to receive the acoustic signals. Transducer 117 can then convert the acoustic signals to electrical signals, and the electrical signals can be used by other components of system 100. System 100 can include another auscultation device 120. Device 120 can be configured and disposed for use outside a body, and can be structured to receive an acoustic signal. The acoustic signal received by external auscultation device 120 can be different from the signal received by device 110, or the signals can be the same, or some components of the signals can be the same while other components of the signals can be different. Device 120 can include a proximal end and a distal end, and the proximal end can include opening 125 for orientation toward a source of an acoustic signal. Device 120 can include an exterior molding configured to dampen ambient noise. Device 120 can also include multiple chambers collectively structured to receive the acoustic signal. Device 120 can also include at least one transducer 127 operatively situated within the exterior molding of device 120 and configured to convert acoustic signals to electrical signals. The electrical signals can then be used by other components of system 100. The system 100 can also include an external computing device for processing, analysis, and/or display of the information associated with acoustic signals. 
In some embodiments, the external computing device can include a microphone preamplifier 130. Microphone preamplifier 130 can receive electrical signals from devices 110 and 120 and can boost those signals for more efficient processing of the signal information by the other components of system 100. Microphone preamplifier 130 can be communicatively coupled to a digital signal processor (DSP) 140. DSP 140 can be operable to process electrical signals received from microphone preamplifier 130 as well as from all auscultation devices included in system 100, for example, one or more internal devices 110 and one or more external devices 120. In some embodiments, signals may be received by the DSP 140 directly from devices 110 and/or 120 without the signals first being boosted by microphone preamplifier 130.
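The gain stage of microphone preamplifier 130 and the dynamic range control performed by DSP 140 can be sketched in a few lines. The 30 dB default gain, the threshold, and the compression ratio below are illustrative assumptions only.

```python
import numpy as np

def preamp(signal, gain_db=30.0):
    """Boost a microphone-level signal, as microphone preamplifier 130 does,
    by a fixed decibel gain."""
    return signal * 10 ** (gain_db / 20)

def compress(signal, threshold=0.5, ratio=4.0):
    """Simple per-sample dynamic range control: the portion of each sample's
    magnitude above the threshold is reduced by the given ratio."""
    out = np.asarray(signal, dtype=float).copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (threshold
                                      + (np.abs(out[over]) - threshold) / ratio)
    return out
```

In a deployed chain, frequency filtering (e.g., the band limiting sketched elsewhere herein) would sit alongside these stages.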


In some embodiments, device 110 or device 120 or both can include an external layer that dampens ambient noise levels. This noise-dampening layer can be useful in reducing the unwanted acoustic information received by transducers 117 and/or 127. In this way, the acoustic signals ultimately processed by transducers 117 and/or 127 can then be more accurately directed from the desired source of acoustic signals. In some embodiments, system 100 can include an audio output to one or more headphone devices 150. System 100 can include a time and frequency analysis 160 which can be displayed on a screen 180 or analyzed for pattern recognition 170, or both.


In some embodiments, device 110 and/or device 120 can be configured with a bell structure, for example, with a wider opening at the proximal end. The bell structure can be vented into the primary opening 115 and/or 125 via a small hole within one of the internal chambers. The diameter of the hole is constructed to allow a desired amount of low frequency content into a high frequency primary opening. When the device 110 or 120 is placed against a body, high frequencies can be captured in the primary opening 115 or 125. Lower frequency content can be simultaneously, and separately, captured in the larger concentric bell. Lower frequencies can be captured more efficiently, so they are attenuated before being “mixed” with high frequency content. The attenuation and mixing can be accomplished by allowing low frequency content to pass into the high frequency opening through a small diameter hole. The composite sound can be captured by a single microphone in a microphone chamber, and this chamber can be sealed with a cap.
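The two-path capture described above, with high frequencies entering the primary opening and attenuated low frequencies vented in from the concentric bell, can be modeled as a crossover mix. The crossover frequency, filter order, and 12 dB attenuation below are illustrative assumptions standing in for the acoustic behavior of the vent hole, not parameters stated in this disclosure.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rms(x):
    """Root-mean-square level, used to check the mix below."""
    return float(np.sqrt(np.mean(np.asarray(x) ** 2)))

def composite_capture(body_signal, fs, crossover=1000.0, low_atten_db=12.0):
    """Model the composite sound at the single microphone: the high-frequency
    path (primary opening) passes directly; the low-frequency path (bell)
    is attenuated, as by the small vent hole, before mixing."""
    b_hi, a_hi = butter(2, crossover / (fs / 2), btype="high")
    b_lo, a_lo = butter(2, crossover / (fs / 2), btype="low")
    high = filtfilt(b_hi, a_hi, body_signal)
    low = filtfilt(b_lo, a_lo, body_signal) * 10 ** (-low_atten_db / 20)
    return high + low  # composite sound reaching the microphone chamber
```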


In some embodiments, devices 110 and/or 120 can include exterior moldings with layering of multiple, dissimilar materials. This layering can, among other things, create an impedance barrier to vibrational energy and dampen resonant characteristics of denser materials. In a basic form, layering could require three materials layered with each other, and the layering can be expanded to include more layers which can increase performance if, for example, the material for each layer is of a different density than its neighboring material(s). In one embodiment, the layering can include a first material that constitutes the outer body of device 110 and/or 120. This outer body can be a rigid material of moderate to high density such as, but not limited to, aluminum, steel, stainless steel, or any number of high density plastics. The outer body of device 110 and/or 120 can then be shrouded in a layer of a second material on all faces except for the proximal end oriented toward the source of an acoustic signal. The second material can be pliable such as composed of putty, gel rubber, or foam. This second material can impede the transmission of vibrational energy and serve to dampen resonant characteristics of the first material. This second layer can, in turn, be shrouded by a third material on all faces except the proximal end. The third material can be similar to the first material in a property of rigidness and of moderate to high density. In some embodiments, the first and third materials are different from each other even though they may share some qualities and properties. Configuring the materials to be dissimilar can increase performance of device 110 and/or 120. However, device 110 and/or 120 still performs as desired if the first and third materials are the same but, in some scenarios, that performance may be decreased. 
The usage of multiple, dissimilar layers works to create multiple impedance barriers which can significantly reduce the amount of vibrational energy transmitted through device 110 and/or 120. The layers also can serve to dampen resonant characteristics of the rigid materials.


In some embodiments, for example for purposes of sanitization, acoustic, and/or functional requirements of some stethoscope applications, a diaphragm can be attached, such as temporarily, to the outermost layer of device 110 and/or 120 at the proximal end. The diaphragm can be molded from a single piece of plastic or constructed using multiple materials, depending on the desired acoustic or other requirements. Benefits of a disposable diaphragm include economical production as a single piece of plastic; construction with varying thicknesses or materials to provide alternative acoustic characteristics; a sanitary barrier protecting the microphone and interior elements of a stethoscope; hands-free attachment packaging (similar to otoscope ends) so a new diaphragm can be attached without a user touching the diaphragm before use; facilitation of auscultation over clothing for cases where that scenario is required; and mechanical isolation of the microphone housing from the body. The outer ring of the diaphragm can be configured to fit over the outer shell of device 110 and/or 120 at the proximal end. This ring can be rigid and can include a locking mechanism to prevent the diaphragm from falling off during use. A seal can be created with the same or different materials to provide an acoustic (air-tight) closure over the proximal end. The seal can also provide a mechanical stand-off so the diaphragm does not come in contact with the primary inner bell structure, as the seal can be made with the outer shell only. The diaphragm can be between 0.1 and 0.75 millimeters thick to provide good isolation while allowing vibrations from the body to pass through with minimal impedance. The entire diaphragm assembly need not touch any part of the inner microphone housing, thereby providing, among other things, mechanical isolation from environmental and other unwanted noise sources.


In one embodiment, system 100 can be used for obtaining acoustic information relating to the physiology of a person's swallowing. System 100 can be used to monitor the person's swallowing over a period of time or during fluoroscopy. In one embodiment of the swallowing monitoring, an external auscultation device 120 can be attached to a person via straps and/or an adhesive. This external auscultation device 120 can be placed at the midline of the neck, for example, inferior to the thyroid cartilage and superior to the jugular notch. In this embodiment, an internal auscultation device 110 can be placed in an ear canal of the person and held in place by the foam or elastic material comprising the exterior molding of device 110. This placement could also be within a nasal cavity. In this and other embodiments, auscultation devices 110 and 120 can be amplified by microphone preamplifier 130 and processed by DSP 140 and analyzer 160 and recognition device 170 before the data is displayed to a clinician via display 180. The information received by separate auscultation devices, for example by devices 110 and 120, can be carried by separate cables and/or the separate signals can be carried by a single multi-channel cable. In one embodiment, some or all of the components such as the microphone preamplifier, DSP, time/frequency analysis, pattern recognition system, and display can be contained within a single device, such as a handheld device. The handheld device can include one or more light-emitting diodes (LEDs) to denote the presence or absence of some information, or to convey other information to a clinician, for example, and can be included in the display 180. In this or other embodiments, the display 180 can include a screen, such as a touchscreen, to both convey information to a user as well as receive input from a user.


According to another embodiment of the invention, and with reference to FIG. 2, depicted is a flow diagram of an example method 200 for auscultation of a body. The method 200 can be utilized in association with various systems and devices, such as system 100 and device 300. The method 200 can begin at block 210. At block 210, a device can be assembled to be used for auscultation within a cavity of the body, for example, within an ear or nasal cavity. The assembly can be configured to fit within the cavity. In one embodiment, an elastic material can be used in the assembly, and the elastic material can compress to fit within differently sized cavities and still create a seal between the cavity and the ambient air. In one embodiment, the elastic assembly can include a foam or foam-like material. At block 220, the auscultation assembly can be configured for acoustic engagement of the body. For example, the assembly can include one end, in some cases called a proximal end, that includes an airway opening. In some embodiments, the assembly can include multiple openings. The assembly can be configured so an opening of the assembly, for example the proximal end, can point toward the interior of the cavity. The opening can receive an acoustic signal originating from within the body. For example, the acoustic signal can travel from its origin within the body, through the cavity, and then through the opening of the assembly. At block 230, the assembly can be configured with a chamber to receive the acoustic signal. The chamber can be designed to allow the acoustic signal to travel the entire distance of the chamber, through and then away from the proximal end and toward the distal end, while preserving the integrity of the acoustic signal. In one embodiment, the chamber can have parallel “walls” where the chamber is essentially cylindrically shaped with virtually the same distance between the walls at the proximal end as there is at the distal end.
In another embodiment, the chamber can be bell-shaped such that the distance between the walls is greater at the base of the opening (proximal end) than at the other end of the chamber (distal end). The chamber can also include subchambers, some of which are bell-shaped and some of which are cylindrical, or the subchambers can be all of a similar shape. At block 240, the assembly can be configured with a transducer. An interior chamber of the assembly can be designed to receive and hold a transducer, and the interior chamber can be configured to enable and promote operation of the transducer. The transducer can reside at the end of the assembly opening such that the transducer is the first object encountered by an acoustic signal after entering the opening at the proximal end. The transducer can convert the acoustic signal into an electrical signal, and the electrical signal can then be used in support, analysis, and/or storage of information for the assembly.


Method 200 may optionally end following block 240.


The operations described and shown in method 200 of FIG. 2 may be carried out or performed in any suitable order as desired in various embodiments of the disclosure, and method 200 may repeat any number of times. Additionally, in certain embodiments, at least a portion of the operations may be carried out in parallel. Furthermore, in certain embodiments, fewer or more operations as illustrated in FIG. 2 may be performed.


According to another embodiment of the invention, and with reference to FIGS. 3A and 3B, disclosed is a device 300 for auscultation of a body. Device 300 can include an exterior molding 310. The exterior molding 310 can be formed as a single piece, for example, through casting, forming, or 3-D printing. The exterior molding 310 can be of a shape to dampen acoustic signals, and the material composing the molding 310 can be an acoustic dampening material. In this way, device 300 can reduce the reception of unwanted acoustic signals and channel the desired internal acoustic signals from inside the body through opening 320 and to transducer 330. The material used in the molding 310 can include a degree of elasticity so that device 300 can be compressed, if necessary, to fit inside the cavity, for example an ear or nasal cavity, and then naturally expand by retaking its original shape, creating a seal against the walls of the cavity. The shape of the molding 310 can be customized to fit within the particular cavity, while still allocating space within molding 310 for a transducer 330 and acoustic opening 320. In one embodiment, molding 310 can be cylindrically shaped with rounded edges. Molding 310 can include a proximal end with an opening 320 for acoustic engagement within the cavity. The opening 320 can be oriented toward the inside of the cavity and directly receive the acoustic signal originating from inside the body. Opening 320 can be shaped to receive the acoustic signal. Opening 320 can also convey the acoustic signal from the proximal end through to the distal end of molding 310. The passageway beginning at opening 320 can be cylindrical in shape, among other possible shapes. In one embodiment, the passageway can be bell-shaped having, for example, a wider diameter at opening 320. In other embodiments, the passageway can be divided into chambers to facilitate the passage of the acoustic signal, and/or to accommodate other components of device 300.
Device 300 can also include one or more transducers 330. The transducer(s) 330 can convert the acoustic signals into electrical signals.


In some embodiments, device 300 can include an external computing device. The external computing device can receive communication from transducer 330, for example, through wireless network communication. In one embodiment, transducer 330 receives a body's acoustic signal via opening 320. Transducer 330 can then convert that acoustic signal into an electrical signal for, among other reasons, more efficient transmission of the acoustic signal to a remote location. In some embodiments the electrical signal originating at the transducer 330 can be received by a microphone preamplifier. The microphone preamplifier can boost the electrical signal for continued transmission. In some embodiments, device 300 can include a DSP. The DSP can receive the signal from the microphone preamplifier, or from the transducer 330, or both. The DSP can include processing that incorporates audio frequency dynamic range control and/or equalization. The audio processing can also include frequency filtering. Device 300 can perform time and frequency analysis on the audio signal. In some embodiments, a time and frequency analysis can be used to perform a pattern recognition evaluation of the frequency, intensity, and/or time. In some embodiments, device 300 includes a display. The display can output, for example, the pattern recognition evaluation, the time and frequency analysis, and/or other information pertaining to the auscultation. The display can include one or more light-emitting diodes (LEDs) for displaying information. The display can also include a screen for displaying information. In some embodiments, the display can include an interactive touch screen.
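The time and frequency analysis mentioned above can be illustrated with a basic short-time Fourier transform. The frame length and hop size below are arbitrary illustrative choices, not values specified in this disclosure.

```python
import numpy as np

def spectrogram(signal, fs, frame_len=1024, hop=512):
    """Short-time Fourier magnitudes: returns (freqs, spec), where each row
    of spec is one time frame and each column one frequency bin -- the
    time/frequency representation a display or pattern recognition stage
    can consume."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    freqs = np.fft.rfftfreq(frame_len, d=1 / fs)
    return freqs, np.abs(np.fft.rfft(frames, axis=1))
```

A pattern recognition evaluation of frequency, intensity, and time, as described above, would then operate on this array.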


In some embodiments, device 300 can include multiple assemblies containing a molding 310 with a transducer 330. Some or all of the assemblies can transmit their respective acoustic information to a DSP. Some of the multiple assemblies can be designed for internal (e.g. within a cavity) placement, and some of the assemblies can be designed for external (e.g. outside a cavity) placement. In some embodiments, device 300 can include one or more headphone outputs to enable listening to the signals that have been captured. The headphone outputs can be connected to the DSP. The headphone outputs can drive standard headphones, or the headphones can be purpose-built to work with device 300 for auscultation of a body.


One example of headphone outputs can provide hearing protection in high-noise environments while simultaneously providing high quality, electronic sound with situational/directional integrity of the sound. These headphone output embodiments can also be utilized in other applications, such as in extremely loud ambient noise scenarios, in addition to use in the immediate disclosure. The outputs can include a circumaural muff designed to reduce ambient sound by at least 30 decibels. The headphone outputs can include one or more in-ear “buds.” The buds can use foam eartips and a fully sealed system to provide additional ambient noise rejection of 20 decibels and higher. The buds can include a speaker for audio playback. The outputs can also include electronic voice communication input, for example, a wired audio connection or wireless audio receiver, such as Bluetooth, 2.4 GHz, etc. The outputs can also include situational awareness microphone input. One embodiment of the situational awareness microphone input can include at least one microphone mounted on the outside of each circumaural earcup, and each microphone can be positioned to face forward relative to the wearer's face. Each microphone may be contained within a manifold designed to mimic mechanical filtering of a human ear, and the output of each microphone can feed a pre-amplifier. The headphone outputs can also include DSP and amplification. The DSP can receive the electronic sound or voice communication and the preamplified situational awareness signal. The DSP can be programmed to provide increased speech intelligibility for voice communications and to create a natural, realistic recreation of directional and situational (e.g., outside-world) sound in the microphone signal. The output signal from the DSP can be fed into an amplifier which drives the speakers in the in-ear buds.
The in-ear buds can be tethered to the interior of the earcups such that no external wires need exit the earcups and compromise the seal of the muffs against the wearer's head. The tether wire (which can carry the signal to the speakers in the buds) can be governed by a spring-loaded or ratcheting take-up reel. Inclusion of the take-up reel could allow an unusually long tether wire to be used, which in turn allows for easier placement of the buds into the user's ears. Any excess length of wire could then be coiled back automatically (or at the press of a button) when the muffs are positioned on the user's head. This could eliminate the need to bunch up the excess wire inside the earcup, making the system easier to put on and providing superior comfort to the user. The DSP can be programmed in multiple ways, and the signal could be affected as described, for example, in U.S. Pat. Nos. 8,160,274 and/or 9,264,004, for the purpose of, among other things, effectively and measurably increasing the intelligibility of the incoming sound signal, including human speech. Some benefits of using this type of method can include: superior frequency response control allowing for natural and realistic representation of real-world acoustic environments; the ability to limit extremely loud transient sounds to safe levels without any loss of, or artifacting of, other environmental sounds (e.g. if a person is speaking and a gunshot occurs nearby, the gunshot can be limited to a safe level while the person's voice would be perceived to remain at a consistent level); and, if coupled with the aforementioned microphone manifold, a perfect recreation of the directionality of environmental sounds on all axes. Output signals from the DSP can be combined in multiple, different ways for different embodiments of the system.
For example, the DSP can provide a user with a level control by which the mix between voice communication and situational awareness can be continually adjusted. Alternatively, voice communication may always be enabled with the situational awareness muted; the situational awareness can then be turned on by use of a momentary switch, for example, located on an external portion of one or both ear muffs. This could allow a push-to-talk type of feature for communicating with persons within the environment. Additionally, the DSP can, by default, have both voice communication and situational awareness turned on, while being programmed with a threshold for automatic muting and unmuting of the situational awareness microphones. In yet another example, the voice communication input can be combined with or replaced by an additional wired or wireless audio input designed to carry entertainment, e.g. music, etc. If the two are combined into a single channel, the DSP can be programmed for multiple modes in order to, among other things, provide superior speech intelligibility for voice and digital audio enhancement for entertainment. If the two remain in separate channels, they could be processed separately by the DSP for their respective purposes.
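The mixing and automatic-muting behavior described above can be sketched as follows. This is a minimal illustration, not the disclosed DSP firmware: it assumes floating-point sample buffers, the names `mix_channels`, `mix_level`, and `awareness_threshold` are hypothetical, and it reads the automatic mute as passing the situational awareness channel only when its peak level exceeds a threshold.

```python
def mix_channels(voice, awareness, mix_level=0.5, awareness_threshold=None):
    """Blend voice-communication and situational-awareness sample buffers.

    mix_level: 0.0 = voice only, 1.0 = situational awareness only.
    awareness_threshold: if set, the awareness samples are muted unless
    their peak amplitude exceeds this level (one reading of the automatic
    muting/unmuting behavior described above).
    """
    if awareness_threshold is not None:
        peak = max(abs(s) for s in awareness)
        if peak < awareness_threshold:
            # Environment is quiet: mute situational awareness entirely.
            awareness = [0.0] * len(awareness)
    # Per-sample linear crossfade between the two channels.
    return [(1.0 - mix_level) * v + mix_level * a
            for v, a in zip(voice, awareness)]
```

A momentary push-to-talk switch, as described above, could simply toggle `mix_level` between 0.0 and a nonzero value while held.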


Turning to FIGS. 4 and 5, various embodiments of an analysis system 500, 500′ are depicted therein. According to the depicted embodiments, the systems 500, 500′ may include at least an internal auscultation device 110′ and in additional embodiments, the internal auscultation device 110′ may be coupled with an external auscultation device 120′. By way of example, the internal auscultation device 110′ may comprise a transducer or microphone dimensioned and configured to be disposed within an external auditory canal of a patient. In a preferred embodiment, the internal device 110′ may be constructed in accordance with device 300 disclosed herein. Additionally, the external auscultation device 120′ may comprise a transducer or microphone dimensioned and configured to be disposed against a patient's throat. In a preferred embodiment, external device 120′ may be constructed in accordance with the external device 120 disclosed herein.


The internal auscultation device 110′ and external auscultation device 120′ may also be disposed in communication with an audio processing module 510 such that the audio signal captured by either of the auscultation devices 110′, 120′ is processed by the audio processing module 510. According to various embodiments, the audio processing module 510 may include a microphone preamp 511, an analog-to-digital converter 512, an audio pre-processing module 513 (such as to adjust the gain of certain frequencies of the audio signal), and an analysis buffer 514. The analysis buffer 514 can be configured to hold the processed audio signal in a memory and transmit the audio signal to a pattern recognition engine 170′, 170″ in predetermined packets and/or at predetermined time intervals.
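The packetizing role of the analysis buffer 514 can be illustrated with a short sketch. The class below is hypothetical (the disclosure does not specify an implementation); it simply accumulates processed samples and releases fixed-size packets suitable for handoff to a pattern recognition engine.

```python
class AnalysisBuffer:
    """Minimal sketch of an analysis buffer: accumulate processed audio
    samples in memory and release them in fixed-size packets."""

    def __init__(self, packet_size):
        self.packet_size = packet_size
        self._samples = []

    def push(self, samples):
        """Append new samples; return a list of any complete packets."""
        self._samples.extend(samples)
        packets = []
        while len(self._samples) >= self.packet_size:
            packets.append(self._samples[:self.packet_size])
            self._samples = self._samples[self.packet_size:]
        return packets
```

In a time-interval embodiment, a timer could instead flush whatever samples have accumulated at each tick, zero-padding short packets if the downstream engine expects a fixed length.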


According to various embodiments, the pattern recognition engine 170′, 170″ may comprise an ensemble of artificial neural networks (“ANN”) 550, including an intra-aural event ANN 530 and a velopharyngeal event ANN 540. The intra-aural event ANN 530 is trained with audio signals comprising the opening and closing of Eustachian tubes and is therefore configured to determine whether an audio signal includes either the opening or closing of a Eustachian tube. Likewise, the velopharyngeal event ANN 540 is trained with audio signals comprising bolus transits, and is therefore configured to determine whether an audio signal includes a bolus transit. The ensemble ANN 550 may comprise both the intra-aural event ANN 530 and the velopharyngeal event ANN 540 utilized in conjunction, each receiving a signal from one of the internal auscultation device 110′ or the external auscultation device 120′. The pattern recognition engine 170′ may be further configured to produce one or more output reports 570, 580 to a desired user interface. The output reports 570, 580 may include data pertaining to determinations made by the pattern recognition engine 170′, such as whether an intra-aural event occurred, whether a velopharyngeal event occurred, and a time duration of the velopharyngeal event. The pattern recognition engine 170′ may also communicate data to an audio visualization 400.
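One way the ensemble routing described above could be organized is sketched below. The two detectors stand in for the trained ANNs 530 and 540 (any callable returning a probability will do for illustration); the class name, the `threshold` parameter, and the report keys are illustrative, not from the disclosure.

```python
class EnsembleRecognizer:
    """Sketch of an ensemble pattern recognition engine: route the
    internal-device signal to an intra-aural detector and the
    external-device signal to a velopharyngeal detector."""

    def __init__(self, intra_aural_ann, velopharyngeal_ann, threshold=0.5):
        # Each detector maps an audio packet to a probability in [0, 1].
        self.intra_aural_ann = intra_aural_ann
        self.velopharyngeal_ann = velopharyngeal_ann
        self.threshold = threshold

    def analyze(self, internal_packet, external_packet):
        """Return a simple report of which events were detected."""
        return {
            "intra_aural_event":
                self.intra_aural_ann(internal_packet) >= self.threshold,
            "velopharyngeal_event":
                self.velopharyngeal_ann(external_packet) >= self.threshold,
        }
```

In a real embodiment each callable would be a trained network; keeping the detectors as plain callables makes the routing independent of any particular ANN framework.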


The system 500, 500′ may also include a data storage unit 520, particularly for storage of pulse code modulated audio signals produced by the analog to digital converter 512. The data storage unit 520 may further communicate data to an audio visualization 400.
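As an illustration of storing pulse-code-modulated audio such as the data storage unit 520 might hold, the sketch below writes 16-bit mono PCM samples to a standard WAV container using Python's `wave` module. The function name and parameters are hypothetical; the disclosure does not specify a storage format.

```python
import struct
import wave

def store_pcm(path, samples, sample_rate=44100):
    """Write 16-bit mono PCM samples (ints in [-32768, 32767]) to a WAV
    file, as a stand-in for a data storage unit holding ADC output."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)          # mono
        wf.setsampwidth(2)          # 16-bit samples
        wf.setframerate(sample_rate)
        # Pack the samples as little-endian signed 16-bit integers.
        wf.writeframes(struct.pack("<%dh" % len(samples), *samples))
```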


Turning to FIG. 6, an exemplary audio visualization 400 according to one embodiment of a user interface is depicted therein. Such an audio visualization 400 may include a spectrogram 420, waveform amplitude graph 410, or any other visual representation of audio data as may be desired. The audio signal 411 captured by the external auscultation device may be represented on a channel separate from the audio signal 412 captured by the internal auscultation device, to facilitate visual analysis. The depicted audio visualization 400 represents a recorded swallow of a patient utilizing an internal auscultation device placed within the patient's external auditory canal, as well as an external auscultation device placed against the patient's throat. The initial intra-aural event 430 is detected by the internal auscultation device and corresponds to an opening of the patient's Eustachian tube prior to a bolus transit 450. A second intra-aural event 440 occurs after the bolus transit 450, and in the depicted embodiment after a breath 460. The second intra-aural event 440 corresponds to a closing of the patient's Eustachian tube.
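A spectrogram such as view 420 can be produced from one channel of audio by framing the signal and taking the FFT magnitude of each frame. The sketch below is a generic short-time Fourier analysis, assuming NumPy; the frame size and hop length shown are illustrative defaults, not values from the disclosure.

```python
import numpy as np

def spectrogram(samples, frame_size=256, hop=128):
    """Magnitude spectrogram of one audio channel: overlapping Hann-windowed
    frames, each transformed by a real FFT. Returns an array of shape
    (num_frames, frame_size // 2 + 1)."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, hop)]
    window = np.hanning(frame_size)
    return np.array([np.abs(np.fft.rfft(f * window)) for f in frames])
```

Rendering the internal and external signals as two such spectrograms on separate channels, as in FIG. 6, is then a matter of plotting the two arrays one above the other on a shared time axis.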


Upon examining only the audio signal 411 captured via the external auscultation device, the bolus transit 450 erroneously appears to be approximately one-half of a second in duration. However, when the corresponding audio data 412 from the same swallow, captured via the internal auscultation device, is also examined, it can be seen that the initial intra-aural event begins just before the bolus transit 450 appears on the waveform 410. Accordingly, more accurate bolus transit times can be ascertained if the intra-aural event is observed in conjunction with the bolus, and the beginning of the bolus transit is marked with respect to the initial intra-aural event.
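The timing correction described above reduces to simple arithmetic on event timestamps. In the hypothetical sketch below, `intra_aural_onset` marks the initial intra-aural event 430, `bolus_onset_external` marks where the bolus transit 450 first appears on the external channel, and `bolus_end` marks its completion; all times are in seconds and the function name is illustrative.

```python
def transit_durations(intra_aural_onset, bolus_onset_external, bolus_end):
    """Compare the apparent bolus transit time (external signal only) with
    the corrected time measured from the initial intra-aural event."""
    apparent = bolus_end - bolus_onset_external   # what signal 411 alone suggests
    corrected = bolus_end - intra_aural_onset     # marked from event 430
    return apparent, corrected
```

For instance, with an intra-aural onset at 1.0 s, an externally apparent onset at 1.2 s, and completion at 1.7 s, the apparent transit is 0.5 s while the corrected transit is 0.7 s.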


Since many modifications, variations and changes in detail can be made to the described preferred embodiment of the invention, it is intended that all matters in the foregoing description and shown in the accompanying drawings be interpreted as illustrative and not in a limiting sense. Thus, the scope of the invention should be determined by the appended claims and their legal equivalents.

Claims
  • 1. A system for auscultation of a patient, said system comprising: at least an internal auscultation device; said internal auscultation device configured to generate an audio signal and further disposed to communicate said audio signal to at least a pattern recognition engine; said pattern recognition engine operatively configured to determine whether said audio signal includes at least an intra-aural event; and said pattern recognition engine further configured to report said intra-aural event.
  • 2. The system as recited in claim 1 wherein determining whether said audio signal includes an intra-aural event includes at least determining whether said audio signal includes data corresponding to a sound lasting approximately 30 to 80 milliseconds in duration.
  • 3. The system as recited in claim 2 wherein determining whether said audio signal includes an intra-aural event includes at least determining whether said audio signal includes data corresponding to a sound having frequency content between approximately 10 kHz and 12 kHz.
  • 4. The system as recited in claim 1 wherein said internal auscultation device is disposed within an external auditory canal of the patient.
  • 5. The system as recited in claim 1 further comprising an external auscultation device; said external auscultation device configured to generate an audio signal and further disposed to communicate said audio signal to at least said pattern recognition engine.
  • 6. The system as recited in claim 5 wherein said external auscultation device is disposed against the throat of the patient.
  • 7. The system as recited in claim 6 wherein said pattern recognition engine is further configured to determine whether said audio signal includes at least a velopharyngeal event.
  • 8. The system as recited in claim 7 wherein said velopharyngeal event includes at least closure of the velum of the patient against the posterior pharyngeal wall of the patient.
  • 9. The system as recited in claim 7 wherein said velopharyngeal event includes a transit of a bolus through the pharynx of the patient.
  • 10. The system as recited in claim 9 wherein said pattern recognition engine comprises at least one artificial neural net.
  • 11. The system as recited in claim 9 wherein said pattern recognition engine comprises at least two artificial neural nets combined in an ensemble.
  • 12. A method of auscultating a patient, the method comprising: providing at least an internal auscultation device to the external auditory cavity of the patient; providing an external auscultation device to the throat of the patient; utilizing the internal auscultation device and the external auscultation device concurrently to generate an audio signal.
  • 13. The method as recited in claim 12 wherein said internal auscultation device and said external auscultation device each include a microphone.
  • 14. The method as recited in claim 12 further comprising utilizing the internal auscultation device to determine whether an intra-aural audio event has occurred.
  • 15. The method as recited in claim 14 further comprising utilizing the external auscultation device to determine whether a velopharyngeal audio event has occurred which at least partially corresponds to the intra-aural audio event.
  • 16. A method of diagnosing dysphagia in a patient, the method comprising: utilizing an internal auscultation device to determine whether an intra-aural event has occurred; utilizing an external auscultation device to determine whether a bolus transit has occurred; and measuring the elapsed time between the intra-aural event and completion of the bolus transit.
  • 17. The method as recited in claim 16 wherein said intra-aural event corresponds to an opening of a Eustachian tube of the patient.
  • 18. The method as recited in claim 16 wherein determining whether said audio signal includes an intra-aural event includes at least determining whether said audio signal includes data corresponding to a sound having frequency content between approximately 10 kHz and 12 kHz.
  • 19. The method as recited in claim 16 wherein determining whether said audio signal includes an intra-aural event includes at least determining whether said audio signal includes data corresponding to a sound lasting approximately 30 to 80 milliseconds in duration.
CLAIM OF PRIORITY

The present non-provisional patent application claims priority pursuant to 35 U.S.C. Section 119(e) to a currently pending, and prior-filed, provisional patent application, namely that having Ser. No. 62/713,793 filed on Aug. 2, 2018; the present non-provisional patent application is also a continuation-in-part application of currently pending, and prior-filed, non-provisional patent application having Ser. No. 16/116,334 filed on Aug. 29, 2018, which itself claims priority pursuant to 35 U.S.C. Section 119(e) to provisional patent application having Ser. No. 62/554,668 filed on Sep. 6, 2017; the contents of each of the foregoing are expressly incorporated herein by reference in their entireties.

Provisional Applications (2)
Number Date Country
62713793 Aug 2018 US
62554668 Sep 2017 US
Continuation in Parts (1)
Number Date Country
Parent 16116334 Aug 2018 US
Child 16530195 US