Ultrasound imaging is a useful medical imaging modality. For example, internal structures of a patient's body may be imaged before, during, or after a therapeutic intervention. A healthcare professional typically holds a portable ultrasound probe, sometimes called a "transducer," in proximity to the patient and moves the transducer as appropriate to visualize one or more target structures in a region of interest in the patient. A transducer may be placed on the surface of the body or, in some procedures, inserted inside the patient's body. The healthcare professional coordinates the movement of the transducer so as to obtain a desired representation on a screen, such as a two-dimensional cross-section of a three-dimensional volume.
Particular views of an organ or other tissue or body feature (such as fluids, bones, joints or the like) can be clinically significant. Such views may be prescribed by clinical standards as views that should be captured by the ultrasound operator, depending on the target organ, diagnostic purpose or the like.
Clinical decision trees have been developed that, given a set of anatomical features (signs) shown in radiological studies such as ultrasound images, return likely diagnoses. Diagnostic protocols based on decision trees are often used by physicians to quickly rule out or confirm specific diagnoses and influence therapeutic decisions.
Among other medical specializations, clinical protocols have been developed for critical care ultrasound imaging of the lungs. For example, the BLUE-protocol (Bedside Lung Ultrasound in Emergency) provides immediate diagnosis of acute respiratory failure, and defines profiles for pneumonia, congestive heart failure, COPD, asthma, pulmonary embolism, and pneumothorax. The FALLS-protocol (Fluid Administration Limited by Lung Sonography) supports decisions in the management of acute circulatory failure, and allows a physician to sequentially rule out obstructive, cardiogenic, hypovolemic, and septic shock. The BLUE- and FALLS-protocols are described in greater detail in the following, each of which is hereby incorporated by reference in its entirety: Lichtenstein DA, BLUE-protocol and FALLS-protocol: two applications of lung ultrasound in the critically ill, Chest. 2015;147(6):1659-1670, doi:10.1378/chest.14-1313, available at pubmed.ncbi.nlm.nih.gov/26033127; Lichtenstein DA, Lung ultrasound in the critically ill, Ann. Intensive Care. 2014;4:1, available at doi.org/10.1186/2110-5820-4-1; Smith T, Taylor T, Meer J, Focused Ultrasound for Respiratory Distress: The BLUE Protocol, Emergency Medicine. 2018;50(1):38-40, doi:10.12788/emed.2018.0077, available at www.mdedge.com/emergencymedicine/article/156882/imaging/emergency-ultrasound-focused-ultrasound-respiratory; and Lichtenstein D, FALLS-protocol: lung ultrasound in hemodynamic assessment of shock, Heart Lung Vessel. 2013;5(3):142-147, available at www.ncbi.nlm.nih.gov/pmc/articles/PMC3848672.
It is common for diagnostic protocols designed for ultrasound imaging to rely on the capture and evaluation of two different types of ultrasound images: Brightness Mode ("B-Mode") and Motion Mode ("M-Mode") images. A B-Mode image is a two-dimensional ultrasound image composed of bright dots representing ultrasound echoes, where the brightness of each dot is determined by the amplitude of the returned echo signal. This allows for the visualization and quantification of anatomical structures, as well as for the visualization of diagnostic and therapeutic procedures. In M-Mode ultrasound, pulses are emitted in quick succession, and an A-Mode or B-Mode image is captured each time; over time, this is analogous to recording a video in ultrasound. Because the organ boundaries that produce reflections move relative to the probe, M-Mode imaging can be used to determine the velocity of specific organ structures. The typical process for capturing an M-Mode image involves manually specifying an M-Mode line with respect to a captured B-Mode image.
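For illustration only, the following minimal sketch shows how an M-Mode trace relates to a sequence of B-Mode frames: a fixed scan line is sampled across successive frames and the samples are stacked so that horizontal position encodes time. The array layout and all names are assumptions, not the method of any particular device:

```python
import numpy as np

def extract_m_mode(b_mode_frames: np.ndarray, line_column: int) -> np.ndarray:
    """Assemble an M-Mode image by sampling one scan line over time.

    b_mode_frames: array of shape (num_frames, depth, width) holding a
        time-ordered sequence of B-Mode frames (hypothetical layout).
    line_column: index of the vertical M-Mode line within each frame.

    Returns an array of shape (depth, num_frames), in which each column
    is the sampled line from one frame.
    """
    return b_mode_frames[:, :, line_column].T

# Example: 200 frames, 512 samples deep, 256 scan lines wide.
frames = np.zeros((200, 512, 256))
m_mode_image = extract_m_mode(frames, line_column=128)  # shape (512, 200)
```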
The inventors have recognized that conventional approaches to using ultrasound to identify all of the signs necessary to make a clinical decision with a decision tree can be time-consuming, as the physician must typically manually examine multiple B-Mode images to assess the presence of certain structures or other signs, and in some cases obtain M-Mode images based on features in the B-Mode images. Collecting M-Mode images based on B-Mode images is particularly burdensome, as the physician must manually specify an M-Mode line in the region of a B-Mode image for which an M-Mode image is to be captured.
In particular, many lung ultrasound diagnostic protocols rely on determining the presence or absence of ten different signs. Six of these ten signs are identified in B-Mode images: 1) pleural line (bat sign), 2) A-Lines, 3) quad sign, 4) tissue sign, 5) fractal/shred sign, and 6) B-Lines/lung rockets. The other four of these signs are identified in M-Mode images: 7) seashore sign, 8) sinusoidal sign, 9) stratosphere/barcode sign, and 10) lung point. Conventionally, performing these protocols requires the physician to manually identify and keep track of the various signs while switching back and forth between B-Mode and M-Mode on the ultrasound device. The inventors have recognized that this can become burdensome when different regions of the anatomy must be examined, or when the physician must collect different windows of the same region.
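For illustration only, the ten signs and the imaging mode in which each is identified can be represented as a simple lookup structure; all identifiers below are hypothetical and not part of any clinical protocol:

```python
from enum import Enum

class Mode(Enum):
    B_MODE = "B-Mode"
    M_MODE = "M-Mode"

# The ten lung ultrasound signs, keyed to the imaging mode in which each
# is identified (hypothetical representation).
LUNG_SIGNS = {
    "pleural_line_bat_sign": Mode.B_MODE,
    "a_lines": Mode.B_MODE,
    "quad_sign": Mode.B_MODE,
    "tissue_sign": Mode.B_MODE,
    "fractal_shred_sign": Mode.B_MODE,
    "b_lines_lung_rockets": Mode.B_MODE,
    "seashore_sign": Mode.M_MODE,
    "sinusoidal_sign": Mode.M_MODE,
    "stratosphere_barcode_sign": Mode.M_MODE,
    "lung_point": Mode.M_MODE,
}
```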
In response to recognizing these disadvantages, the inventors have conceived and reduced to practice a software and/or hardware facility that provides automatic assistance for the process of evaluating a diagnostic protocol with respect to ultrasound images ("the facility"). In some embodiments, the facility uses neural networks or machine learning models of other types to automatically identify signs in B-Mode and M-Mode images so that the physician does not have to manually search for them.
Further, if M-Mode collection is required to confirm the presence/absence of a particular sign (for example, seashore sign), in some embodiments the facility uses a neural network or a machine learning model of another type to identify the region or regions of a B-Mode image at which the M-Mode line is to be placed. For the four M-Mode signs listed above, the M-Mode line must be placed at one or more points across the pleural line. The facility uses object detection or segmentation performed by the neural network to locate the boundaries of the pleural line in the B-Mode image, which enables the facility to automatically place the M-Mode line in the proper location. Once the M-Mode line is identified, in some embodiments the facility automatically collects M-Mode images without requiring user input. The facility then uses a neural network to identify signs in the collected M-Mode images.
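A minimal sketch of the placement step, assuming a segmentation model that returns a binary pleural-line mask for a B-Mode image (the mask layout, the centering heuristic, and all names below are hypothetical):

```python
from typing import Optional

import numpy as np

def place_m_mode_line(pleural_mask: np.ndarray) -> Optional[int]:
    """Choose a column for the M-Mode line that crosses the pleural line.

    pleural_mask: binary array of shape (depth, width) in which nonzero
        pixels mark the pleural line, as produced by a segmentation model.

    Returns the column index at the horizontal center of the detected
    pleural line, or None if the mask contains no pleural line.
    """
    columns = np.flatnonzero(pleural_mask.any(axis=0))
    if columns.size == 0:
        return None  # no pleural line detected; fall back to manual placement
    # Place the line midway across the detected extent of the pleural
    # line so that the M-Mode line crosses it.
    return int((columns[0] + columns[-1]) // 2)
```

With a column chosen this way, M-Mode acquisition can be driven along the corresponding scan line and the resulting image handed to the M-Mode sign classifier.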
In some embodiments, once the facility confirms the presence or absence of all of the signs needed by the protocol, it applies the protocol's clinical decision tree to automatically obtain a diagnosis.
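One way to sketch the completeness check that precedes tree evaluation (hypothetical names, reusing the sign identifiers above; a sketch of the BLUE-protocol tree itself appears after its description below):

```python
def ready_for_diagnosis(findings: dict, required: set) -> bool:
    """True once every sign the protocol needs has been resolved to
    present (True) or absent (False)."""
    return required.issubset(findings)

# Example: one required sign is still unresolved, so the decision tree
# is not yet applied.
findings = {"a_lines": True, "seashore_sign": True}
assert not ready_for_diagnosis(findings, {"a_lines", "seashore_sign", "lung_point"})
```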
In some embodiments, throughout this process, the facility displays the results of sign identification, M-Mode collection, and diagnosis to the user. In some embodiments, this includes displaying text output of sign identification, drawing the M-Mode line on the B-Mode image, and highlighting the returned path of the clinical decision tree.
By operating in some or all of the ways described above, the facility speeds up the process of identifying clinically relevant signs in ultrasound images and making clinical diagnoses. Its automatic identification of signs in B-Mode and M-Mode images saves the physician the time of manually searching for and keeping track of signs. The facility's automatic placement of M-Mode lines based on the features detected in the B-Mode image eases the burden of having to manually select and record M-Mode images from the ultrasound interface. The facility's evaluation of clinical decision trees given the identified signs provides a faster and more transparent way of suggesting clinical diagnoses. For critical care ultrasound, procedures such as lung evaluation provide urgent diagnoses with immediate therapeutic interventions, so automating the process can lead to significantly more efficient and effective patient care.
Additionally, the facility improves the functioning of computer or other hardware, such as by reducing the dynamic display area, processing, storage, and/or data transmission resources needed to perform a certain task, thereby enabling the task to be performed by less capable, capacious, and/or expensive hardware devices, and/or be performed with lesser latency, and/or preserving more of the conserved resources for use in performing other tasks. For example, by maximizing the usability of ultrasound images by more frequently identifying all structures visualized therein, the facility avoids many cases in which re-imaging is required. By reducing the need to re-image, the facility consumes, overall, less memory and fewer processing resources to capture additional images and perform additional rounds of automatic structure identification. Also, by reducing the amount of time needed to successfully complete a single diagnostic session, the facility permits an organization performing ultrasound imaging to purchase fewer copies of an ultrasound apparatus to serve the same number of patients, or to operate an unreduced number of copies at a lower utilization rate, which can extend their useful lifespans, improve their operational status throughout their lifespans, reduce the need for intra-lifespan servicing and calibration, etc.
The probe 12 is configured to transmit an ultrasound signal toward a target structure and to receive echo signals returning from the target structure in response to transmission of the ultrasound signal. The probe 12 includes an ultrasound sensor 20 that, in various embodiments, may include an array of transducer elements (e.g., a transducer array) capable of transmitting an ultrasound signal and receiving subsequent echo signals.
The device 10 further includes processing circuitry and driving circuitry. In part, the processing circuitry controls the transmission of the ultrasound signal from the ultrasound sensor 20. The driving circuitry is operatively coupled to the ultrasound sensor 20 for driving the transmission of the ultrasound signal, e.g., in response to a control signal received from the processing circuitry. The driving circuitry and processing circuitry may be included in one or both of the probe 12 and the handheld computing device 14. The device 10 also includes a power supply that provides power to the driving circuitry for transmission of the ultrasound signal, for example, in a pulsed wave or a continuous wave mode of operation.
The ultrasound sensor 20 of the probe 12 may include one or more transmit transducer elements that transmit the ultrasound signal and one or more receive transducer elements that receive echo signals returning from a target structure in response to transmission of the ultrasound signal. In some embodiments, some or all of the transducer elements of the ultrasound sensor 20 may act as transmit transducer elements during a first period of time and as receive transducer elements during a second period of time that is different than the first period of time (i.e., the same transducer elements may be usable to transmit the ultrasound signal and to receive echo signals at different times).
The computing device 14 includes a display screen 22 and a user interface 24.
In some embodiments, the display screen 22 may be a touch screen capable of receiving input from a user that touches the screen. In such embodiments, the user interface 24 may include a portion or the entire display screen 22, which is capable of receiving user input via touch. In some embodiments, the user interface 24 may include one or more buttons, knobs, switches, and the like, capable of receiving input from a user of the ultrasound device 10. In some embodiments, the user interface 24 may include a microphone 30 capable of receiving audible input, such as voice commands.
The computing device 14 may further include one or more audio speakers 28 that may be used to output acquired or conditioned auscultation signals, or audible representations of echo signals, blood flow during Doppler ultrasound imaging, or other features derived from operation of the device 10.
The probe 12 includes a housing, which forms an external portion of the probe 12. The housing includes a sensor portion located near a distal end of the housing, and a handle portion located between a proximal end and the distal end of the housing. The handle portion is proximally located with respect to the sensor portion.
The handle portion is a portion of the housing that is gripped by a user to hold, control, and manipulate the probe 12 during use. The handle portion may include gripping features, such as one or more detents, and in some embodiments, the handle portion may have a same general shape as portions of the housing that are distal to, or proximal to, the handle portion.
The housing surrounds internal electronic components and/or circuitry of the probe 12, including, for example, electronics such as driving circuitry, processing circuitry, oscillators, beamforming circuitry, filtering circuitry, and the like. The housing may be formed to surround or at least partially surround externally located portions of the probe 12, such as a sensing surface. The housing may be a sealed housing, such that moisture, liquid or other fluids are prevented from entering the housing. The housing may be formed of any suitable materials, and in some embodiments, the housing is formed of a plastic material. The housing may be formed of a single piece (e.g., a single material that is molded surrounding the internal components) or may be formed of two or more pieces (e.g., upper and lower halves) which are bonded or otherwise attached to one another.
In some embodiments, the probe 12 includes a motion sensor. The motion sensor is operable to sense a motion of the probe 12. The motion sensor is included in or on the probe 12 and may include, for example, one or more accelerometers, magnetometers, or gyroscopes for sensing motion of the probe 12. For example, the motion sensor may be or include any of a piezoelectric, piezoresistive, or capacitive accelerometer capable of sensing motion of the probe 12. In some embodiments, the motion sensor is a tri-axial motion sensor capable of sensing motion about any of three axes. In some embodiments, more than one motion sensor is included in or on the probe 12. In some embodiments, the motion sensor includes at least one accelerometer and at least one gyroscope.
The motion sensor may be housed at least partially within the housing of the probe 12. In some embodiments, the motion sensor is positioned at or near the sensing surface of the probe 12. In some embodiments, the sensing surface is a surface which is operably brought into contact with a patient during an examination, such as for ultrasound imaging or auscultation sensing. The ultrasound sensor 20 and one or more auscultation sensors are positioned on, at, or near the sensing surface.
In some embodiments, the transducer array of the ultrasound sensor 20 is a one-dimensional (1D) array or a two-dimensional (2D) array of transducer elements. The transducer array may include piezoelectric ceramics, such as lead zirconate titanate (PZT), or may be based on microelectromechanical systems (MEMS). For example, in various embodiments, the ultrasound sensor 20 may include piezoelectric micromachined ultrasonic transducers (PMUT), which are MEMS-based piezoelectric ultrasonic transducers, or the ultrasound sensor 20 may include capacitive micromachined ultrasonic transducers (CMUT), in which the energy transduction is provided by a change in capacitance.
The ultrasound sensor 20 may further include an ultrasound focusing lens, which may be positioned over the transducer array, and which may form a part of the sensing surface. The focusing lens may be any lens operable to focus a transmitted ultrasound beam from the transducer array toward a patient and/or to focus a reflected ultrasound beam from the patient to the transducer array. The ultrasound focusing lens may have a curved surface shape in some embodiments. The ultrasound focusing lens may have different shapes, depending on a desired application, e.g., a desired operating frequency, or the like. The ultrasound focusing lens may be formed of any suitable material, and in some embodiments, the ultrasound focusing lens is formed of a room-temperature-vulcanizing (RTV) rubber material.
In some embodiments, first and second membranes are positioned adjacent to opposite sides of the ultrasound sensor 20 and form a part of the sensing surface. The membranes may be formed of any suitable material, and in some embodiments, the membranes are formed of a room-temperature-vulcanizing (RTV) rubber material. In some embodiments, the membranes are formed of a same material as the ultrasound focusing lens.
Turning to the clinical decision tree that the facility applies for the BLUE protocol, the facility traverses the tree's nodes as follows.
At node 402, where the Lung Sliding sign is present, the facility branches based upon whether a B-Profile sign is present or an A-Profile sign is present. If the B-Profile sign is present, then the facility traverses node 405 to leaf node 410 specifying a Pulmonary Edema diagnosis; the B-Profile sign is present if the Lung Sliding sign is present, and at least 3 B-Lines are present in at least one point of each lung. If the A-Profile sign is present, the facility traverses node 406 to leaf node 411 specifying that Sequential Venous Analysis is required for diagnosis; the A-Profile sign is present if the Lung Sliding sign is present, and A-Lines are present and fewer than 3 B-Lines are present at each point of each lung.
At node 404, where the Lung Sliding sign is not present, the facility branches based upon whether a B′-Profile sign is present or an A′-Profile sign is present. If the B′-Profile sign is present, then the facility traverses node 408 to leaf node 413 specifying a Pneumonia diagnosis; the B′-Profile sign is present if the Lung Sliding sign is not present, and at least 3 B-Lines are present in at least one point of each lung. If the A′-Profile sign is present, then the facility traverses node 409; the A′-Profile sign is present if the Lung Sliding sign is not present, and A-Lines are present and fewer than 3 B-Lines are present at each point of each lung. At node 409, if a Lung Point sign is present, then the facility traverses node 414 to leaf node 416, specifying a Pneumothorax diagnosis. At node 409, if a Lung Point sign is not present, then the facility traverses node 415 to leaf node 417, indicating a failure of the protocol to make a diagnosis.
At node 403, where the Lung Sliding sign may or may not be present, if an A/B Profile sign is present or a C Profile sign is present, then the facility proceeds to leaf node 412 specifying a Pneumonia diagnosis. The A/B Profile sign is present if fewer than 3 B-Lines are present at each point of one lung, and at least 3 B-Lines are present in at least one point of the other lung, while the C Profile sign is present if a Tissue Sign or a Fractal/Shred Sign is present in at least one point of either lung.
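A compact sketch of this traversal, following the node numbering and profile definitions above (the function and field names are hypothetical, not the facility's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Profiles:
    lung_sliding: bool
    b_profile: bool        # sliding + >= 3 B-Lines at a point of each lung
    a_profile: bool        # sliding + A-Lines, < 3 B-Lines everywhere
    b_prime_profile: bool  # no sliding + >= 3 B-Lines at a point of each lung
    a_prime_profile: bool  # no sliding + A-Lines, < 3 B-Lines everywhere
    ab_profile: bool       # >= 3 B-Lines on one lung only
    c_profile: bool        # tissue or fractal/shred sign at any point
    lung_point: bool

def blue_protocol(p: Profiles) -> str:
    # Node 403 -> leaf 412: A/B or C profile means pneumonia, with or
    # without lung sliding.
    if p.ab_profile or p.c_profile:
        return "Pneumonia"
    if p.lung_sliding:                      # node 402
        if p.b_profile:                     # node 405 -> leaf 410
            return "Pulmonary Edema"
        if p.a_profile:                     # node 406 -> leaf 411
            return "Sequential Venous Analysis required"
    else:                                   # node 404
        if p.b_prime_profile:               # node 408 -> leaf 413
            return "Pneumonia"
        if p.a_prime_profile:               # node 409
            if p.lung_point:                # node 414 -> leaf 416
                return "Pneumothorax"
            return "No diagnosis"           # node 415 -> leaf 417
    return "No diagnosis"
```

Encoding the tree this way also makes the traversed path straightforward to record and highlight alongside the suggested diagnosis.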
Additional information about the BLUE protocol is as follows. Lung Sliding is evaluated through the M-Mode image of an M-Mode line collected across the Pleural Line. The M-Mode signs listed in the table are mutually exclusive; each M-Mode image of an M-Mode line placed across the Pleural Line will be classified as exactly one of the four possible M-Mode signs listed in the table. The Seashore Sign indicates Lung Sliding. The Stratosphere/Barcode Sign indicates no Lung Sliding. The Lung Point is a mix of the Seashore Sign and the Stratosphere/Barcode Sign; it indicates a point at which part of the lung is sliding and part of the lung is not sliding (i.e., a collapsed or partially collapsed lung). The Sinusoidal Sign indicates Pleural Effusion, which is orthogonal to the concept of lung sliding, and is not explicitly included in the diagram of the BLUE protocol.
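This mutually exclusive classification, and what each class implies about Lung Sliding, can be sketched as a simple mapping (labels hypothetical, reusing the sign identifiers above):

```python
# What each mutually exclusive M-Mode class implies about lung sliding:
# True (sliding), False (no sliding), or "partial" at a lung point. The
# sinusoidal sign bears on pleural effusion rather than lung sliding.
M_MODE_SIGN_TO_LUNG_SLIDING = {
    "seashore_sign": True,
    "stratosphere_barcode_sign": False,
    "lung_point": "partial",
    "sinusoidal_sign": None,  # indicates pleural effusion instead
}
```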
Those skilled in the art will appreciate that the acts shown in the flow diagrams discussed above may be altered in a variety of ways. For example, the order of the acts may be rearranged; some acts may be performed in parallel; shown acts may be omitted, or other acts may be included; a shown act may be divided into subacts, or multiple shown acts may be combined into a single act; etc.
In various embodiments, the facility employs various other types of machine learning models to recognize signs. In particular, in various embodiments, the facility uses U-Net neural networks; convolutional neural networks of other types; neural networks of other types; or machine learning models of other types.
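As one illustrative possibility only, the following is a minimal sketch of a U-Net-style segmentation network of the kind that could produce pleural-line masks; all layer sizes and names are assumptions, and this is not the facility's actual trained model:

```python
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-Net sketch: encode, downsample, decode with a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = double_conv(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = double_conv(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = double_conv(32, 16)
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # per-pixel mask logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        # Concatenate the skip connection with the upsampled features.
        return self.head(self.dec(torch.cat([u, e], dim=1)))

# Example: a 1-channel 256x256 B-Mode image in, a 256x256 logit map out.
logits = TinyUNet()(torch.zeros(1, 1, 256, 256))  # shape (1, 1, 256, 256)
```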
The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
This application claims the benefit of U.S. Provisional Patent Application No. 63/022,987, filed May 11, 2020 and entitled "AUTOMATICALLY IDENTIFYING CLINICALLY IMPORTANT SIGNS IN ULTRASOUND B-MODE AND M-MODE IMAGES FOR PREDICTION OF DIAGNOSES," which is hereby incorporated by reference in its entirety. In cases where the present application conflicts with a document incorporated by reference, the present application controls.