Ultrasound imaging is a useful medical imaging modality. For example, internal structures of a patient's body may be imaged before, during or after a therapeutic intervention. Also, qualitative and quantitative observations in an ultrasound image can be a basis for diagnosis. For example, ventricular volume determined via ultrasound is a basis for diagnosing conditions such as ventricular systolic dysfunction and diastolic heart failure.
A healthcare professional typically holds a portable ultrasound probe, sometimes called a “transducer,” in proximity to the patient and moves the transducer as appropriate to visualize one or more target structures in a region of interest in the patient. A transducer may be placed on the surface of the body or, in some procedures, a transducer is inserted inside the patient's body. The healthcare professional coordinates the movement of the transducer so as to obtain a desired presentation on a screen, such as a two-dimensional cross-section of a three-dimensional volume.
Particular views of an organ or other tissue or body feature (such as fluids, bones, joints or the like) can be clinically significant. Such views may be prescribed by clinical standards as views that should be captured by the ultrasound operator, depending on the target organ, diagnostic purpose or the like.
It is common for ultrasound machines to operate in accordance with values set for one or more user settings of the machine. Typical machines permit users to set values for settings such as one or more of the following: Depth, Gain, Time-Gain-Compensation (“TGC”), Body Type, and Imaging Scenario. Depth specifies the distance into the patient's body that the ultrasound image should reach. Gain specifies the degree to which received echo signals are amplified, affecting the overall brightness of the image. Time-Gain-Compensation specifies the degree to which received signal intensity should be increased with depth to reduce non-uniformity of image intensity resulting from tissue attenuation of the ultrasound signal. Body Type indicates the relative size of the patient's body. And Imaging Scenario specifies a region or region type of the body to be imaged, such as Heart, Lungs, Abdomen, or Musculoskeletal. In some embodiments, the facility uses the value specified for the Imaging Scenario setting as a basis for automatically specifying values for other common constituent settings, such as transmit waveform, transmit voltage, bandpass filter, apodization, compression, gain, persistence, smoothing/spatial filter, and speckle reduction.
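By way of illustration, the expansion of an Imaging Scenario preset into constituent settings can be sketched as a simple lookup. All setting names and values below are hypothetical stand-ins chosen for the sketch; none are taken from the specification.

```python
# Illustrative only: hypothetical constituent setting values that an
# Imaging Scenario preset might expand into. The numbers are arbitrary
# stand-ins; only the lookup pattern is the point of the sketch.
SCENARIO_PRESETS = {
    "heart": {"transmit_voltage": 40, "persistence": 2, "speckle_reduction": "low"},
    "lungs": {"transmit_voltage": 30, "persistence": 1, "speckle_reduction": "off"},
    "abdomen": {"transmit_voltage": 50, "persistence": 3, "speckle_reduction": "high"},
    "musculoskeletal": {"transmit_voltage": 35, "persistence": 1, "speckle_reduction": "medium"},
}

def constituent_settings(scenario: str) -> dict:
    """Return the constituent setting values implied by a scenario preset."""
    return dict(SCENARIO_PRESETS[scenario])
```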
The inventors have recognized that it is burdensome to require the operator of an ultrasound machine to adjust the values of user settings for each patient, and often for each of multiple imaging studies for the same patient. Further, it typically takes time for a new operator to learn how to choose the correct values for these settings; until they do, many studies may need to be repeated in order to obtain good-quality results.
In response, the inventors have conceived and reduced to practice a software and/or hardware facility that automatically establishes values for ultrasound settings for an ultrasound study (“the facility”).
The facility acquires an initial image using a set of initial setting values. In various embodiments, these initial setting values are default setting values that are the same for every study; user-specified setting values; and/or setting values automatically determined based upon inputs about the patient, such as body type setting values determined using photographic images, electronic medical record fields for the patient specifying body type, body mass index, or weight, etc.
In some “setting improvement” embodiments, the facility uses a setting value evaluation machine learning model to discern from the initial image whether the initial value of each setting was optimal. The facility automatically adjusts the setting values that it determines were not optimal to be more optimal for reimaging of the patient. In some embodiments, the facility uses the same setting value evaluation model to evaluate one or more subsequent images for setting value optimality, and continues to adjust those settings whose values are still not optimal.
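The evaluate-and-adjust loop of the setting improvement embodiments can be sketched as follows. This is a minimal sketch under stated assumptions: `evaluate_settings` is a stub standing in for the trained setting value evaluation model, and the setting names, thresholds, and step sizes are hypothetical.

```python
# Sketch of the iterative "setting improvement" loop. `evaluate_settings`
# stands in for the setting value evaluation model; here it is a stub
# with arbitrary thresholds, whereas in practice it would be a trained
# neural network applied to the captured image.
def evaluate_settings(image, settings):
    """Stub: per setting, return 'too_low', 'optimal', or 'too_high'."""
    verdicts = {}
    verdicts["gain"] = "too_low" if settings["gain"] < 50 else "optimal"
    verdicts["depth_cm"] = "too_high" if settings["depth_cm"] > 12 else "optimal"
    return verdicts

# Hypothetical per-setting adjustment increments.
ADJUST_STEP = {"gain": 10, "depth_cm": 2}

def improve_settings(capture, settings, max_rounds=5):
    """Reimage and nudge each non-optimal setting until all are optimal."""
    for _ in range(max_rounds):
        image = capture(settings)          # reimage the patient
        verdicts = evaluate_settings(image, settings)
        if all(v == "optimal" for v in verdicts.values()):
            break
        for name, verdict in verdicts.items():
            if verdict == "too_low":
                settings[name] += ADJUST_STEP[name]
            elif verdict == "too_high":
                settings[name] -= ADJUST_STEP[name]
    return settings
```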
In some embodiments, the facility trains the setting value evaluation model using training observations generated based on sets of images captured from each training subject in a group of training subjects. The group of training subjects is constituted in a way designed to cover the range of possible values of any patient-specific settings, such as body type. For each subject, the facility captures a number of ultrasound images using sets of setting values selected from the n-dimensional volume in which each dimension represents the range of possible values for a different setting. In some cases, the images captured for each patient collectively cover a set of organs, or imaging sites or scenarios of other types. For each subject, for each imaging site, the facility solicits a human expert to select the one of these images that is of the highest quality. The facility uses the selection of the highest-quality image for each combination of subject and imaging site to construct a training observation for each image that it uses to train the setting value evaluation model, where the independent variable is the image, and the dependent variables are the setting values of the image of the same subject and site that was selected as highest-quality.
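The construction of these training observations can be sketched as below. The dictionary shapes and key names are assumptions made for the sketch; the logic simply attaches, to each captured image, the setting values of the expert-selected best image for the same subject and imaging site.

```python
def build_observations(captures, best_image_ids):
    """Construct training observations: for each captured image, the
    independent variable is the image and the dependent variables are the
    setting values of the image selected as highest-quality for the same
    (subject, site) combination.

    captures: list of dicts with keys 'id', 'subject', 'site', 'image',
              'settings' (key names are hypothetical).
    best_image_ids: maps (subject, site) -> id of the expert-selected image.
    """
    # First pass: record the setting values of each selected best image.
    best_settings = {}
    for cap in captures:
        key = (cap["subject"], cap["site"])
        if best_image_ids[key] == cap["id"]:
            best_settings[key] = cap["settings"]
    # Second pass: label every image with its group's best setting values.
    return [{"x": cap["image"],
             "y": best_settings[(cap["subject"], cap["site"])]}
            for cap in captures]
```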
In some “generative” embodiments, the facility applies a generative model to transform the initial image into an improved image whose level of quality is higher. In some embodiments, the generative model is a conditional generative adversarial network, or “cGAN.” In some embodiments, the facility trains the generative network using training observations it constructs from the images captured and selected as described above. In particular, the training observation generated by the facility for each image has as its independent variable the captured image, and has as its dependent variable the image selected as highest-quality for the same combination of subject and site.
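The pairing of training observations for the generative embodiments can be sketched in the same style; again the data shapes are assumptions, and the function only expresses the image-to-best-image pairing described above.

```python
def build_generative_pairs(captures, best_image_ids):
    """Pair each captured image (independent variable) with the image
    selected as highest-quality for the same subject and site (dependent
    variable), e.g., for training a cGAN image-to-image model.

    captures: list of dicts with keys 'id', 'subject', 'site', 'image'
              (key names are hypothetical).
    best_image_ids: maps (subject, site) -> id of the expert-selected image.
    """
    best_image = {}
    for cap in captures:
        key = (cap["subject"], cap["site"])
        if best_image_ids[key] == cap["id"]:
            best_image[key] = cap["image"]
    return [(cap["image"], best_image[(cap["subject"], cap["site"])])
            for cap in captures]
```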
By performing in some or all of these ways, the facility reduces the level of operator skill and experience needed for ultrasound studies, as well as the time they take and the inaccuracy they incur.
Additionally, the facility improves the functioning of computer or other hardware, such as by reducing the dynamic display area, processing, storage, and/or data transmission resources needed to perform a certain task, thereby enabling the task to be permitted by less capable, capacious, and/or expensive hardware devices, and/or be performed with lesser latency, and/or preserving more of the conserved resources for use in performing other tasks. For example, by reducing the amount of time for which the ultrasound machine is used for a particular study, the ultrasound machine can be used for a greater number of studies during its lifetime, or a version of the machine manufactured at lower cost can be used for the same number of studies. Also, by reducing the number of unsuccessful studies that must be repeated, the facility increases the availability of ultrasound machines for additional original studies.
The probe 12 is configured to transmit an ultrasound signal toward a target structure and to receive echo signals returning from the target structure in response to transmission of the ultrasound signal. The probe 12 includes an ultrasound sensor 20 that, in various embodiments, may include an array of transducer elements (e.g., a transducer array) capable of transmitting an ultrasound signal and receiving subsequent echo signals.
The device 10 further includes processing circuitry and driving circuitry. In part, the processing circuitry controls the transmission of the ultrasound signal from the ultrasound sensor 20. The driving circuitry is operatively coupled to the ultrasound sensor 20 for driving the transmission of the ultrasound signal, e.g., in response to a control signal received from the processing circuitry. The driving circuitry and processing circuitry may be included in one or both of the probe 12 and the handheld computing device 14. The device 10 also includes a power supply that provides power to the driving circuitry for transmission of the ultrasound signal, for example, in a pulsed wave or a continuous wave mode of operation.
The ultrasound sensor 20 of the probe 12 may include one or more transmit transducer elements that transmit the ultrasound signal and one or more receive transducer elements that receive echo signals returning from a target structure in response to transmission of the ultrasound signal. In some embodiments, some or all of the transducer elements of the ultrasound sensor 20 may act as transmit transducer elements during a first period of time and as receive transducer elements during a second period of time that is different than the first period of time (i.e., the same transducer elements may be usable to transmit the ultrasound signal and to receive echo signals at different times).
The computing device 14 shown in
In some embodiments, the display screen 22 may be a touch screen capable of receiving input from an operator that touches the screen. In such embodiments, the user interface 24 may include a portion or the entire display screen 22, which is capable of receiving operator input via touch. In some embodiments, the user interface 24 may include one or more buttons, knobs, switches, and the like, capable of receiving input from an operator of the ultrasound device 10. In some embodiments, the user interface 24 may include a microphone 30 capable of receiving audible input, such as voice commands.
The computing device 14 may further include one or more audio speakers 28 that may be used to output acquired or conditioned auscultation signals, or audible representations of echo signals, blood flow during Doppler ultrasound imaging, or other features derived from operation of the device 10.
The probe 12 includes a housing, which forms an external portion of the probe 12. The housing includes a sensor portion located near a distal end of the housing, and a handle portion located between a proximal end and the distal end of the housing. The handle portion is proximally located with respect to the sensor portion.
The handle portion is a portion of the housing that is gripped by an operator to hold, control, and manipulate the probe 12 during use. The handle portion may include gripping features, such as one or more detents, and in some embodiments, the handle portion may have a same general shape as portions of the housing that are distal to, or proximal to, the handle portion.
The housing surrounds internal electronic components and/or circuitry of the probe 12, including, for example, electronics such as driving circuitry, processing circuitry, oscillators, beamforming circuitry, filtering circuitry, and the like. The housing may be formed to surround or at least partially surround externally located portions of the probe 12, such as a sensing surface. The housing may be a sealed housing, such that moisture, liquid or other fluids are prevented from entering the housing. The housing may be formed of any suitable materials, and in some embodiments, the housing is formed of a plastic material. The housing may be formed of a single piece (e.g., a single material that is molded surrounding the internal components) or may be formed of two or more pieces (e.g., upper and lower halves) which are bonded or otherwise attached to one another.
In some embodiments, the probe 12 includes a motion sensor. The motion sensor is operable to sense a motion of the probe 12. The motion sensor is included in or on the probe 12 and may include, for example, one or more accelerometers, magnetometers, or gyroscopes for sensing motion of the probe 12. For example, the motion sensor may be or include any of a piezoelectric, piezoresistive, or capacitive accelerometer capable of sensing motion of the probe 12. In some embodiments, the motion sensor is a tri-axial motion sensor capable of sensing motion about any of three axes. In some embodiments, more than one motion sensor 16 is included in or on the probe 12. In some embodiments, the motion sensor includes at least one accelerometer and at least one gyroscope.
The motion sensor may be housed at least partially within the housing of the probe 12. In some embodiments, the motion sensor is positioned at or near the sensing surface of the probe 12. In some embodiments, the sensing surface is a surface which is operably brought into contact with a patient during an examination, such as for ultrasound imaging or auscultation sensing. The ultrasound sensor 20 and one or more auscultation sensors are positioned on, at, or near the sensing surface.
In some embodiments, the transducer array of the ultrasound sensor 20 is a one-dimensional (1D) array or a two-dimensional (2D) array of transducer elements. The transducer array may include piezoelectric ceramics, such as lead zirconate titanate (PZT), or may be based on microelectromechanical systems (MEMS). For example, in various embodiments, the ultrasound sensor 20 may include piezoelectric micromachined ultrasonic transducers (PMUT), which are microelectromechanical systems (MEMS)-based piezoelectric ultrasonic transducers, or the ultrasound sensor 20 may include capacitive micromachined ultrasound transducers (CMUT) in which the energy transduction is provided due to a change in capacitance.
The ultrasound sensor 20 may further include an ultrasound focusing lens, which may be positioned over the transducer array, and which may form a part of the sensing surface. The focusing lens may be any lens operable to focus a transmitted ultrasound beam from the transducer array toward a patient and/or to focus a reflected ultrasound beam from the patient to the transducer array. The ultrasound focusing lens may have a curved surface shape in some embodiments. The ultrasound focusing lens may have different shapes, depending on a desired application, e.g., a desired operating frequency, or the like. The ultrasound focusing lens may be formed of any suitable material, and in some embodiments, the ultrasound focusing lens is formed of a room-temperature-vulcanizing (RTV) rubber material.
In some embodiments, first and second membranes are positioned adjacent to opposite sides of the ultrasound sensor 20 and form a part of the sensing surface. The membranes may be formed of any suitable material, and in some embodiments, the membranes are formed of a room-temperature-vulcanizing (RTV) rubber material. In some embodiments, the membranes are formed of a same material as the ultrasound focusing lens.
Those skilled in the art will appreciate that the acts shown in
The network takes as its input an ultrasound image 701, such as a 1×224×224 grey scale ultrasound image. The network produces four outputs: a gain output 702 that predicts whether the gain setting value used to capture image 701 was too low, optimal, or too high; a depth output 703 that predicts whether the depth setting value used to capture image 701 was too shallow, optimal, or too deep; a body type output 704 that predicts whether the patient's body type is small, medium, or large; and a preset output 705 that predicts the region or region type of the body that was imaged, or other imaging scenario, such as heart, lungs, abdomen, or musculoskeletal. The output of branch 710 issuing from ConvBlock 715 is shared by branch 720 to produce the gain output, branch 730 to produce the depth output, branch 740 to produce the body type output, and branch 750 to produce the preset output.
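The shared-trunk, four-branch topology described above can be sketched minimally in numpy. This is a structural sketch only: a single random projection stands in for the convolutional blocks of branch 710, and the weights are untrained; only the shared-feature/four-head shape follows the description.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in for the shared convolutional trunk (through ConvBlock 715).
# A real implementation would use convolution layers; a single random
# projection is used here only to show the topology.
W_trunk = rng.normal(size=(224 * 224, 64))
heads = {
    "gain": rng.normal(size=(64, 3)),       # too low / optimal / too high
    "depth": rng.normal(size=(64, 3)),      # too shallow / optimal / too deep
    "body_type": rng.normal(size=(64, 3)),  # small / medium / large
    "preset": rng.normal(size=(64, 4)),     # heart / lungs / abdomen / musculoskeletal
}

def forward(image):
    """image: 224x224 grayscale array -> per-head class probabilities."""
    features = np.tanh(image.reshape(-1) @ W_trunk)  # shared representation
    return {name: softmax(features @ W) for name, W in heads.items()}
```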
Those skilled in the art will appreciate that a variety of neural network types and particular architectures may be straightforwardly substituted for the architecture shown in
For the setting value evaluation model, the facility generates a training observation in act 1104 as follows: for each setting, the facility compares the setting value used to capture the image to the setting value used to capture the image identified as the highest-quality image produced for the subject. The facility then establishes a training observation for the image in which the independent variable is the image, and the dependent variables are, for each setting, the result of the comparison of the value used for that setting to capture the image to the value used for that setting to capture the highest-quality image produced for the subject. For example, if the value of the depth setting used to capture the image was 9 cm and the value of the depth setting used to capture the highest-quality image produced for the subject was 11 cm, then the facility would use a “depth too shallow” value for one of the dependent variables in this observation. In some embodiments, for some settings, the facility simply uses the value of the setting used to capture the highest-quality image produced for the subject, without comparison to the corresponding value of the setting used to capture the image; for example, in such embodiments, where the value “large” is used for a body type setting to capture the highest-quality image produced for the subject, the facility uses this “large” setting value as a dependent variable for each of the observations produced from the images captured from the same subject.
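The per-setting comparison described above can be sketched as a simple three-way classification; the function name and label strings are hypothetical, with the labels following the depth example in the text.

```python
def compare_setting(value, best_value, low_label, high_label):
    """Classify a setting value against the value used to capture the
    highest-quality image produced for the same subject."""
    if value < best_value:
        return low_label
    if value > best_value:
        return high_label
    return "optimal"
```

For instance, comparing a captured depth of 9 cm against a best-image depth of 11 cm yields the “depth too shallow” dependent variable from the example.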
For the generative model, the facility generates a training observation for each image in act 1104 as follows: the facility uses the image as the independent variable, and the highest-quality image produced for the same subject as the dependent variable.
The various embodiments described above can be combined to provide further embodiments. All of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary to employ concepts of the various patents, applications and publications to provide yet further embodiments.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
This application claims the benefit of provisional U.S. Application No. 63/333,953, filed Apr. 22, 2022 and entitled “OPTIMIZING ULTRASOUND SETTINGS,” which is hereby incorporated by reference in its entirety. In cases where the present application conflicts with a document incorporated by reference, the present application controls.