This application relates to contrast enhanced imaging. More specifically, this application relates to generating contrast accumulation images.
In contrast-enhanced imaging, a contrast agent is provided to an area or volume to be imaged in order to provide a higher signal strength from the area or volume, or selectively enhance signals from areas or volumes with high contrast concentration. For example, in contrast-enhanced ultrasound (CEUS), microbubbles may be injected into a subject's bloodstream and ultrasound images may be acquired of the subject's vasculature. Without the microbubbles, little to no signal may be provided by the blood vessels. In contrast accumulation imaging (CAI), multiple contrast-enhanced images (e.g., multiple image frames) are acquired and combined to form the final image, which can be used to map contrast agent progression and enhance vessel topology and conspicuity. This temporal accumulation imaging of CEUS has been commercialized and widely used for vascularity visualization.
Disclosed herein are systems and methods for an adaptive contrast-enhanced ultrasound technique that replaces the microbubble identification and localization steps of contrast accumulation image processing with an adaptive point spread function (PSF) thinning/skeletonization technique (e.g., a thinning technique). The PSF size may be adaptive both spatially (e.g., to blood vessel size) and temporally (e.g., to different perfusion times) based, at least in part, on adapting an aggressiveness parameter. This may provide enhanced vascular imaging performance for both large branches and microvessels at different contrast perfusion phases with reduced processing requirements.
In some examples, a contrast-infused tissue image loop may be acquired. The adaptive PSF thinning technique may be applied to each frame of the contrast loop, in which the size of the PSF may be adaptive based on spatial regions within the image and/or adaptive over time. After the adaptive PSF thinning technique is performed, a temporal accumulation of the contrast image frames may be performed to achieve high resolution. In some examples, the temporal accumulation may show the maximum intensity across frames at each pixel or the average intensity at each pixel.
In accordance with at least one example disclosed herein, an ultrasound system may include an ultrasound probe for receiving ultrasound signals for a plurality of transmit/receive events, and at least one processor in communication with the ultrasound probe, the at least one processor configured to perform an adaptive thinning technique on the ultrasound signals for the plurality of transmit/receive events, wherein the adaptive thinning technique is based, at least in part, on an aggressiveness parameter that is adapted in at least one of a temporal domain or a spatial domain, and to temporally accumulate the ultrasound signals for the plurality of transmit/receive events on which the adaptive thinning technique is performed to generate an adaptive contrast accumulation image.
In accordance with at least one example disclosed herein, a method may include receiving a plurality of contrast enhanced ultrasound images, performing an adaptive thinning technique on individual ones of the plurality of contrast enhanced ultrasound images, wherein the adaptive thinning technique is based, at least in part, on an aggressiveness parameter that is adapted in at least one of a temporal domain or a spatial domain, and temporally accumulating at least two of the individual ones of the plurality of ultrasound images to provide an adaptive contrast accumulation image.
In accordance with at least one example disclosed herein, a non-transitory computer readable medium may include instructions that when executed cause an ultrasound imaging system to receive a plurality of contrast enhanced ultrasound images, perform an adaptive thinning technique on individual ones of the plurality of contrast enhanced ultrasound images, wherein the adaptive thinning technique is based, at least in part, on an aggressiveness parameter that is adapted in at least one of a temporal domain or a spatial domain, and temporally accumulate at least two of the individual ones of the plurality of ultrasound images to provide an adaptive contrast accumulation image.
The following description of certain exemplary examples is merely exemplary in nature and is in no way intended to limit the invention or its applications or uses. In the following detailed description of examples of the present systems and methods, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific examples in which the described systems and methods may be practiced. These examples are described in sufficient detail to enable those skilled in the art to practice the presently disclosed systems and methods, and it is to be understood that other examples may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the present system. Moreover, for the purpose of clarity, detailed descriptions of certain features will not be discussed when they would be apparent to those with skill in the art so as not to obscure the description of the present system. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present system is defined only by the appended claims.
Accumulation CEUS may have limited spatial resolution due to the large size of the point spread function (PSF) in contrast mode. The PSF is a measure of the blurring or spreading of a point source by an imaging system. To improve the spatial resolution, image processing techniques, such as super resolution imaging (SRI), may be performed. In SRI, individual microbubbles are identified and represented as single pixels (e.g., localized). However, the number of images that must be accumulated to generate an SRI image may be significant. Furthermore, the processing time and/or power necessary for the identification and localization of the microbubbles may also be prohibitive, especially if real-time or near real-time visualization is desired.
Another issue in CEUS imaging is that there may be a wide distribution of vessel size in an image. While traditional accumulation CEUS may be sufficient for larger vessels, visualization of smaller vessels may benefit more from other techniques such as SRI. Similarly, in a series of images, there may be a wide distribution in contrast agent concentration over time. Applying a single image processing technique may lead to enhanced visualization of only one vessel type and/or enhanced visualization only for a particular time period (e.g., early perfusion phase, late accumulation phase) of the contrast imaging scan.
The present disclosure is directed to systems and methods for performing image processing techniques that are spatially and temporally adaptive. The techniques described herein may be referred to as “adaptive CAI.” These techniques may adapt an aggressiveness parameter of a PSF thinning/skeletonization technique to provide better visualization of both large and small vessels and over all phases of a contrast enhanced imaging scan.
In some examples, the transducer array 214 may be coupled to a microbeamformer 216, which may be located in the ultrasound probe 212, and which may control the transmission and reception of signals by the transducer elements in the array 214. In some examples, the microbeamformer 216 may control the transmission and reception of signals by active elements in the array 214 (e.g., an active subset of elements of the array that define the active aperture at any given time).
In some examples, the microbeamformer 216 may be coupled, e.g., by a probe cable or wirelessly, to a transmit/receive (T/R) switch 218, which switches between transmission and reception and protects the main beamformer 222 from high energy transmit signals. In some examples, for example in portable ultrasound systems, the T/R switch 218 and other elements in the system can be included in the ultrasound probe 212 rather than in the ultrasound system base, which may house the image processing electronics. An ultrasound system base typically includes software and hardware components including circuitry for signal processing and image data generation as well as executable instructions for providing a user interface.
The transmission of ultrasonic signals from the transducer array 214 under control of the microbeamformer 216 is directed by the transmit controller 220, which may be coupled to the T/R switch 218 and a main beamformer 222. The transmit controller 220 may control the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array 214, or at different angles for a wider field of view. The transmit controller 220 may also be coupled to a user interface 224 and receive input from the user's operation of a user control. The user interface 224 may include one or more input devices such as a control panel 252, which may include one or more mechanical controls (e.g., buttons, encoders, etc.), touch sensitive controls (e.g., a trackpad, a touchscreen, or the like), and/or other known input devices.
In some examples, the partially beamformed signals produced by the microbeamformer 216 may be coupled to a main beamformer 222 where partially beamformed signals from individual patches of transducer elements may be combined into a fully beamformed signal. In some examples, microbeamformer 216 is omitted, and the transducer array 214 is under the control of the beamformer 222 and beamformer 222 performs all beamforming of signals. In examples with and without the microbeamformer 216, the beamformed signals of beamformer 222 are coupled to processing circuitry 250, which may include one or more processors (e.g., a signal processor 226, a B-mode processor 228, a Doppler processor 260, and one or more image generation and processing components 268) configured to produce an ultrasound image from the beamformed signals (i.e., beamformed RF data).
The signal processor 226 may be configured to process the received beamformed RF data in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation. The signal processor 226 may also perform additional signal enhancement such as speckle reduction, signal compounding, and electronic noise elimination. The processed signals (also referred to as I and Q components or IQ signals) may be coupled to additional downstream signal processing circuits for image generation. The IQ signals may be coupled to a plurality of signal paths within the system, each of which may be associated with a specific arrangement of signal processing components suitable for generating different types of image data (e.g., B-mode image data, Doppler image data). For example, the system may include a B-mode signal path 258 which couples the signals from the signal processor 226 to a B-mode processor 228 for producing B-mode image data.
The B-mode processor 228 can employ amplitude detection for the imaging of structures in the body. According to principles of the present disclosure, the B-mode processor 228 may generate signals for tissue images and/or contrast images. In some embodiments, signals from the microbubbles may be extracted from the B-mode signal for forming a separate contrast image. Similarly, the tissue signals may be separated from the microbubble signals for generating a tissue image. The signals produced by the B-mode processor 228 may be coupled to a scan converter 230 and/or a multiplanar reformatter 232. The scan converter 230 may be configured to arrange the echo signals from the spatial relationship in which they were received to a desired image format. For instance, the scan converter 230 may arrange the echo signal into a two dimensional (2D) sector-shaped format, or a pyramidal or otherwise shaped three dimensional (3D) format. In another example of the present disclosure, the scan converter 230 may arrange the echo signals into side-by-side contrast enhanced and tissue images.
The multiplanar reformatter 232 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image (e.g., a B-mode image) of that plane, for example as described in U.S. Pat. No. 6,443,896 (Detmer). The scan converter 230 and multiplanar reformatter 232 may be implemented as one or more processors in some examples.
A volume renderer 234 may generate an image (also referred to as a projection, render, or rendering) of the 3D dataset as viewed from a given reference point, e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.). The volume renderer 234 may be implemented as one or more processors in some examples. The volume renderer 234 may generate a render, such as a positive render or a negative render, by any known or future known technique such as surface rendering and maximum intensity rendering.
In some examples, the system may include a Doppler signal path 262 which couples the output from the signal processor 226 to a Doppler processor 260. The Doppler processor 260 may be configured to estimate the Doppler shift and generate Doppler image data. The Doppler image data may include color data which is then overlaid with B-mode (i.e. grayscale) image data for display. The Doppler processor 260 may be configured to filter out unwanted signals (i.e., noise or clutter associated with non-moving tissue), for example using a wall filter. The Doppler processor 260 may be further configured to estimate velocity and power in accordance with known techniques. For example, the Doppler processor may include a Doppler estimator such as an auto-correlator, in which velocity (Doppler frequency) estimation is based on the argument of the lag-one autocorrelation function and Doppler power estimation is based on the magnitude of the lag-zero autocorrelation function. Motion can also be estimated by known phase-domain (for example, parametric frequency estimators such as MUSIC, ESPRIT, etc.) or time-domain (for example, cross-correlation) signal processing techniques. Other estimators related to the temporal or spatial distributions of velocity such as estimators of acceleration or temporal and/or spatial velocity derivatives can be used instead of or in addition to velocity estimators. In some examples, the velocity and power estimates may undergo further threshold detection to further reduce noise, as well as segmentation and post-processing such as filling and smoothing. The velocity and power estimates may then be mapped to a desired range of display colors in accordance with a color map. The color data, also referred to as Doppler image data, may then be coupled to the scan converter 230, where the Doppler image data may be converted to the desired image format and overlaid on the B-mode image of the tissue structure to form a color Doppler or a power Doppler image. For example, Doppler image data may be overlaid on a B-mode image of the tissue structure.
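For illustrative purposes only, a minimal MATLAB sketch of the lag-one autocorrelation (Kasai) estimator described above is shown below; the IQ ensemble iq and the constants c (sound speed), PRF (pulse repetition frequency), and f0 (center frequency) are assumed names, not taken from the source:

    % Sketch of a lag-one autocorrelation (Kasai) Doppler estimator.
    % iq is an assumed [Nz, Nx, Nens] complex IQ ensemble for one frame.
    R1 = mean(iq(:, :, 2:end) .* conj(iq(:, :, 1:end-1)), 3); % lag-one autocorrelation
    R0 = mean(abs(iq).^2, 3);                                 % lag-zero autocorrelation
    v  = (c * PRF / (4 * pi * f0)) * angle(R1); % velocity from the argument of R1
    pw = abs(R0);                               % Doppler power from the magnitude of R0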
Output (e.g., B-mode images, Doppler images) from the scan converter 230, the multiplanar reformatter 232, and/or the volume renderer 234 may be coupled to an image processor 236 for further enhancement, buffering and temporary storage before being displayed on an image display 238. Optionally, in some embodiments, the image processor 236 may receive I/Q data from the signal processor 226 and/or RF data from the beamformer 222 for enhancement, buffering and temporary storage before being displayed.
According to principles of the present disclosure, in some examples, the image processor 236 may receive imaging data corresponding to image frames of a sequence (e.g., multi-frame loop, cineloop) of contrast enhanced images. Each image frame in the sequence may have been acquired at a different time (e.g., the image frames may be temporally spaced). In some examples, the image processor 236 may perform an adaptive point spread function (PSF) thinning/skeletonization technique (also referred to herein as simply an adaptive thinning technique) on each frame in the sequence. The adaptive thinning technique may be performed on each pixel of each image frame (e.g., the imaging data corresponding to each pixel), including both separable microbubbles and microbubble clusters in some examples. The technique may reshape and/or resize the PSF of the system 100. The size of the adapted PSF may be based, at least in part, on a value of an aggressiveness parameter. The greater the value of the aggressiveness parameter, the smaller the size of the adapted PSF. The lower the value of the aggressiveness parameter, the closer the adapted PSF is to the original PSF of the system 100. The aggressiveness parameter may be adapted in the spatial domain and/or the temporal domain. As will be explained in more detail with reference to
After performing the adaptive thinning technique on all of the images of the sequence, the image processor 236 may perform temporal accumulation in some examples. In other words, the image processor 236 may combine multiple image frames to create the final images of the adaptive CAI image sequence (e.g., high resolution loop), which may include one or more image frames. A variety of techniques may be used. For example, the temporal accumulation step may be performed for the entire sequence (e.g., infinite temporal window) or for a moving window in the temporal domain (e.g., finite temporal window). Techniques for temporal accumulation are described in more detail with reference to
A graphics processor 240 may generate graphic overlays for display with the images. These graphic overlays can contain, e.g., standard identifying information such as patient name, date and time of the image, imaging parameters, and the like. For these purposes the graphics processor may be configured to receive input from the user interface 224, such as a typed patient name or other annotations. The user interface 224 can also be coupled to the multiplanar reformatter 232 for selection and control of a display of multiple multiplanar reformatted (MPR) images.
The system 200 may include local memory 242. Local memory 242 may be implemented as any suitable non-transitory computer readable medium (e.g., flash drive, disk drive). Local memory 242 may store data generated by the system 200 including B-mode images, contrast images, executable instructions, inputs provided by a user via the user interface 224, or any other information necessary for the operation of the system 200.
As mentioned previously, system 200 includes user interface 224. User interface 224 may include display 238 and control panel 252. The display 238 may include a display device implemented using a variety of known display technologies, such as LCD, LED, OLED, or plasma display technology. In some examples, display 238 may comprise multiple displays. The control panel 252 may be configured to receive user inputs (e.g., exam type, time of contrast agent injection). The control panel 252 may include one or more hard controls (e.g., buttons, knobs, dials, encoders, mouse, trackball or others). In some examples, the control panel 252 may additionally or alternatively include soft controls (e.g., GUI control elements or simply, GUI controls) provided on a touch sensitive display. In some examples, display 238 may be a touch sensitive display that includes one or more soft controls of the control panel 252.
According to principles of the present disclosure, in some examples, a user may select an adaptive thinning technique and/or set an aggressiveness parameter to be used for generating the adaptive CAI images via the user interface 224. Adjusting (e.g., varying) the aggressiveness parameter may adjust a final spatial resolution of an adaptive CAI image. In some examples, the user may indicate different aggressiveness parameters for different regions in an image frame and/or indicate how the aggressiveness parameter should change over time. In some examples, the user may select an average or starting aggressiveness parameter to be used and the system 100 adjusts the aggressiveness parameter used spatially over an image frame and/or temporally over multiple image frames. In some examples, the adaptive thinning technique may be pre-set based on exam type, contrast agent type, and/or properties of the image. In some examples, the aggressiveness parameter and/or how it is adjusted spatially and/or temporally may be based on analysis of individual image frames and/or all of the image frames of a sequence.
In some examples, various components shown in
The processor 300 may include one or more cores 302. The core 302 may include one or more arithmetic logic units (ALU) 304. In some examples, the core 302 may include a floating point logic unit (FPLU) 306 and/or a digital signal processing unit (DSPU) 308 in addition to or instead of the ALU 304.
The processor 300 may include one or more registers 312 communicatively coupled to the core 302. The registers 312 may be implemented using dedicated logic gate circuits (e.g., flip-flops) and/or any memory technology. In some examples the registers 312 may be implemented using static memory. The registers 312 may provide data, instructions and addresses to the core 302.
In some examples, processor 300 may include one or more levels of cache memory 310 communicatively coupled to the core 302. The cache memory 310 may provide computer-readable instructions to the core 302 for execution. The cache memory 310 may provide data for processing by the core 302. In some examples, the computer-readable instructions may have been provided to the cache memory 310 by a local memory, for example, local memory attached to the external bus 316. The cache memory 310 may be implemented with any suitable cache memory type, for example, metal-oxide semiconductor (MOS) memory such as static random access memory (SRAM), dynamic random access memory (DRAM), and/or any other suitable memory technology.
The processor 300 may include a controller 314, which may control input to the processor 300 from other processors and/or components included in a system (e.g., control panel 252 and scan converter 230 shown in
The registers 312 and the cache 310 may communicate with controller 314 and core 302 via internal connections 320A, 320B, 320C and 320D. Internal connections may be implemented as a bus, multiplexor, crossbar switch, and/or any other suitable connection technology.
Inputs and outputs for the processor 300 may be provided via a bus 316, which may include one or more conductive lines. The bus 316 may be communicatively coupled to one or more components of processor 300, for example the controller 314, cache 310, and/or register 312. The bus 316 may be coupled to one or more components of the system, such as display 238 and control panel 252 mentioned previously.
The bus 316 may be coupled to one or more external memories. The external memories may include Read Only Memory (ROM) 332. ROM 332 may be a masked ROM, Electronically Programmable Read Only Memory (EPROM) or any other suitable technology. The external memory may include Random Access Memory (RAM) 333. RAM 333 may be a static RAM, battery backed up static RAM, Dynamic RAM (DRAM) or any other suitable technology. The external memory may include Electrically Erasable Programmable Read Only Memory (EEPROM) 335. The external memory may include Flash memory 334. The external memory may include a magnetic storage device such as disc 336. In some examples, the external memories may be included in a system, such as ultrasound imaging system 200 shown in
In some examples, a multi-frame loop (e.g., image sequence) of conventional side-by-side contrast and tissue images may be used as inputs to the signal processor as indicated by block 402. The image format may be DICOM, AVI, WMV, JPEG, and/or another format. The image-domain-based processing may be implemented as an off-line processing feature in some examples. In some applications, the images may be log-compressed with a limited dynamic range; thus, the image-domain implementation of adaptive CAI may have limited performance. In some examples, adaptive CAI may also be implemented in the IQ domain (the input is IQ data) or the RF domain (the input is RF data) rather than on a multi-frame loop as shown in
At block 404, the image processor may perform image formatting. In some examples, the multi-frame loops are processed to separate the tissue images and contrast images so that they can be processed independently as indicated by blocks 406 and 408. The tissue and contrast images may be properly formatted for the following processing blocks. For example, red-green-blue (RGB) images may be converted to gray-scale images (or indexed images) with a desired dynamic range (e.g., normalized from 0 to 1). In some examples, the image processor may receive the tissue images and contrast images from separate imaging streams that do not need to be separated. In examples where adaptive CAI is performed on RF data and/or IQ data rather than a multi-frame loop, image formatting may include separating signals resultant from the contrast agent and signals resultant from the tissue.
At block 410, motion estimation may be performed on the tissue images. Frame-to-frame displacements for each image pixel may be estimated by motion estimation methods, for example, speckle tracking and/or optical flow. Both rigid motion (e.g., translation and rotation) and non-rigid motion (e.g., deformation) may be estimated. Spatial and/or temporal smoothing methods may be applied to the estimated displacements for the tissue images.
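As an illustrative sketch only, one block-matching form of speckle tracking between two tissue frames is shown below (assuming the MATLAB Image Processing Toolbox; the frame names, block location, and 16-pixel search margin are assumptions):

    % Estimate the displacement of one 32x32 block between two tissue frames.
    blk    = frame1(101:132, 201:232);   % kernel block from the first frame
    search = frame2(85:148, 185:248);    % search region with a 16-pixel margin
    cc = normxcorr2(blk, search);        % normalized 2D cross-correlation
    [~, idx] = max(cc(:));
    [pkZ, pkX] = ind2sub(size(cc), idx); % location of the correlation peak
    topZ = pkZ - size(blk, 1) + 1;       % block top-left corner within the search region
    topX = pkX - size(blk, 2) + 1;
    dz = topZ - 17;                      % axial displacement relative to zero motion
    dx = topX - 17;                      % lateral displacement (repeat per block for a field)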
At block 412, motion compensation is performed based, at least in part, on the motion estimation performed on the tissue images at block 410. In some examples, tissue images may not be used for motion estimation and the motion estimation techniques discussed above in reference to block 410 may be performed directly on the contrast images. However, in these examples motion compensation may not be as robust.
Optionally, at block 414, clutter rejection filtering may be performed on the contrast images. This may reduce the effect of stationary echoes (especially in the near field), reverberations, etc. Clutter rejection filters can be implemented as finite impulse response (FIR)- or infinite impulse response (IIR)-based high-pass filters with sufficient numbers of coefficient delay pairs (e.g., taps), a polynomial least-squares curve-fitting filter, and/or a singular value decomposition (SVD)-based high-pass filter. Filter parameters may be optimized to suppress most of the residual clutter but preserve most of the contrast signals. In some examples, removal of tissue clutter prior to accumulation may be performed using an adaptive algorithm such as the “TissueSuppress” feature in VM6.0 distributed by Philips, where nonlinearity differences between tissue and microbubbles are used to mask and then suppress pixels containing tissue on a per-frame basis. In some examples, block 414 may be omitted.
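As an illustrative sketch of the SVD-based option listed above (the variable names and the number of rejected components are assumptions):

    % SVD-based clutter rejection on a contrast loop of size [Nz, Nx, Nt].
    [Nz, Nx, Nt] = size(contrastLoop);
    C = reshape(contrastLoop, Nz * Nx, Nt);  % Casorati matrix: pixels x slow time
    [U, S, V] = svd(C, 'econ');
    nClutter = 2;                            % assumed number of tissue/clutter components
    S(1:nClutter, 1:nClutter) = 0;           % zeroing the largest singular values (high-pass)
    contrastFilt = reshape(U * S * V', Nz, Nx, Nt);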
After motion compensation (and optionally clutter rejection), an adaptive PSF thinning technique may be performed at block 416.
At block 508, the images from blocks 502 and 504 may be segmented by a suitable image segmenting technique. In some examples, the contrast images alone may be used to define different spatial zones with different contrast concentrations. In these examples, the tissue images may not be segmented. Additionally, temporal (slow-time) filtering can be performed on the contrast images to segment out different spatial zones of flow velocities (i.e., blood vessel sizes) based on the slow-time frequencies. Different aggressiveness parameter values can be assigned to different zones based on the segmentation, as explained below.
At block 510, the segmented images may be analyzed to generate a spatially adaptive aggressiveness parameter. For example, the aggressiveness parameter may be based, at least in part, on blood vessel size.
In another example for spatially adapting the aggressiveness parameter, the aggressiveness may be based, at least in part, on contrast agent concentration. For example, large vessels may be perfused much earlier than smaller ones. Hence, the concentration of microbubbles will be much higher in the large vessels at early times. A signal intensity threshold value may be set to visualize microvessels with a certain concentration of microbubbles so that large vessels are either masked or their intensity is lowered to enhance smaller vessels. In other words, regions with high signal intensity may be assigned a high value aggressiveness parameter and regions with low signal intensity may be assigned a low value aggressiveness parameter. In other examples, instead of a single threshold value, a function may define how the aggressiveness parameter changes with signal intensity.
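A minimal sketch of such an intensity-based mapping is shown below; the linear mapping and the aggressiveness limits are illustrative assumptions:

    % Map per-pixel contrast intensity to an aggressiveness value so that
    % bright (large-vessel) regions are thinned more aggressively than dim regions.
    aggMin = 1; aggMax = 9;                            % assumed aggressiveness range
    I = contrastFrame / (max(contrastFrame(:)) + eps); % normalize intensity to [0, 1]
    aggMap = round(aggMin + (aggMax - aggMin) * I);    % high intensity -> high aggressiveness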
Returning to
Although the examples described herein use both spatially and temporally adaptive aggressiveness parameters, in other examples, the aggressiveness parameter may be adaptive in only one of the spatial domain or temporal domain.
Returning to
In a first example thinning technique, a morphological operation using image erosion may be used. Image erosion may be performed on each frame of the contrast loop to erode away the boundaries of regions and leave shrunken areas of contrast agent signals. Aggressiveness in this example may be the size of a structuring element (e.g., the aggressiveness parameter defines the size of the structuring element). The structuring element may be a shape used to apply functions to the image frame. The shape of the structuring element can be a simple square or rectangle in some examples, but may be a more complex shape in other examples. In this technique, a high aggressiveness parameter (e.g., an aggressiveness parameter having a high value) corresponds to a large structuring element. The larger the structuring element, the more boundaries are eroded away, leaving smaller areas of remaining contrast agent signals. A low aggressiveness parameter (e.g., an aggressiveness parameter having a low value) corresponds to a smaller structuring element, and fewer boundaries are eroded away, leaving larger areas of remaining contrast agent signals. The size of the structuring element may be adapted spatially and/or temporally.
In some examples, the image erosion may be a grayscale erosion algorithm applied to each image frame based on the constructed structuring element. In this technique, the erosion of an image pixel is the minimum of the image pixels in its neighborhood, with that neighborhood defined by the structuring element. The output of the image erosion operation is the output of block 514.
In some examples, the output of the image erosion operation may be referred to as a mask (Mask). The mask may be normalized to values between 0 and 1. The normalized mask may then be raised to the power of an exponent. The final output of block 514 (Output) may be the product of the original input (Input) and the normalized mask raised to the exponent, as shown in the equations below:
Mask = Erosion(Input)   Equation (1)

Output = Input × (Normalized Mask)^Exponent   Equation (2)
The exponent (Exponent) may be used to control the aggressiveness in addition to the structuring element. The aggressiveness increases as the exponent increases (e.g., to values greater than 1).
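A minimal MATLAB sketch of this erosion-based thinning, assuming the Image Processing Toolbox, is shown below; the structuring element size and the exponent value are illustrative choices, not values from the source:

    % Erosion-based PSF thinning per Equations (1) and (2).
    se       = strel('square', 5);          % element size acts as the aggressiveness parameter
    mask     = imerode(inputFrame, se);     % Equation (1): grayscale erosion (local minimum)
    normMask = mask / (max(mask(:)) + eps); % normalize the mask to [0, 1]
    expnt    = 2;                           % exponent > 1 increases the aggressiveness
    outFrame = inputFrame .* normMask .^ expnt; % Equation (2)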
In a second example thinning technique, a morphological operation using image dilation may be used. In this example, a structuring element, such as the one described in reference to the first example, is also used. Image dilation is provided by the formula:

Output = Input / dilation(Input)   Equation (3)
The Output is the output (e.g., thinned) image and the Input is the input image (e.g., the contrast image from block 502). The output image is equal to the input image divided by the input image after a dilation operation. Aggressiveness in this example may again be the size of the structuring element of the image dilation. With high aggressiveness (a large structuring element), more boundaries are enlarged, leaving smaller regions of remaining contrast agent signals. With low aggressiveness (a smaller structuring element), fewer boundaries are enlarged, leaving larger regions of remaining contrast agent signals. The size of the structuring element may be adapted spatially and/or temporally. The dilation technique may be applied to each frame.
In some examples, the image dilation may be a grayscale dilation algorithm. In this algorithm, the dilation of an image pixel is the maximum of the image pixels in its neighborhood, with that neighborhood defined by the structuring element. After the dilation operation, the PSF expands to a larger size depending on the aggressiveness parameter (the size of the structuring element). The output of this step (each image frame) is referred to as dilation(Input) in Equation 3. The input image may then be scaled based on the dilation output. According to Equation 3, the input image is scaled with the output of the dilation step. Specifically, each pixel of the output image, output(x, y), is the product of the image pixel, input(x, y), and the corresponding scaling factor, 1/[dilation(input)(x, y)]. Due to the scaling factor, the output PSF shrinks to a smaller size depending on the aggressiveness parameter. Optionally, a normalization step may be applied to the output to remove outliers (e.g., infinite elements) and normalize the output dynamic range to certain limits.
Examples of image dilation-based thinning with different aggressiveness parameters (e.g., different sized structuring elements) in accordance with examples of the present disclosure are shown in
Similar to the first example describing image erosion, the Output of Equation 3 may be referred to as a mask, which may be normalized and raised to an exponent that may be used to control the aggressiveness as described in Equation 2. The final output of block 514 may then be the original image multiplied by the normalized mask raised to the exponent, as described in Equation 2.
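A corresponding sketch of the dilation-based thinning of Equation (3) is shown below, under the same assumptions (Image Processing Toolbox; an illustrative element size and a small guard against division by zero):

    % Dilation-based PSF thinning per Equation (3).
    se   = strel('square', 5);           % element size acts as the aggressiveness parameter
    den  = imdilate(inputFrame, se);     % grayscale dilation (local maximum)
    mask = inputFrame ./ max(den, eps);  % Equation (3): scale by 1/dilation(Input)
    mask = min(mask, 1);                 % remove outliers; limit the dynamic range
    outFrame = inputFrame .* mask .^ 2;  % optional exponent as in Equation (2)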
In a third example, a spatial smoothing thinning technique may be used. The spatial smoothing may be described by the equation:

Output = Input / smoothing(Input)   Equation (4)
The Output is the output (e.g., thinned) image and the Input is the input image (e.g., the contrast image from block 502). The output image is equal to the input image divided by the input image after the spatial smoothing operation. Aggressiveness in this example may be the size of the smoothing kernel used for spatial smoothing. A high aggressiveness parameter corresponds to a large smoothing kernel size (e.g., 8×8), so more boundaries are smoothed, leaving smaller sizes of remaining contrast signal regions. A low aggressiveness parameter corresponds to a smaller smoothing kernel size (e.g., 3×3), so fewer boundaries are smoothed, leaving larger sizes of remaining contrast signal regions. Generating a smoothing kernel may be similar to generating the aforementioned structuring element. The shape of the smoothing kernel may be a simple square or rectangle in some examples. The size of the smoothing kernel may be adapted temporally and/or spatially.
The output of the spatial smoothing operation, smoothing(Input) of Equation 4, may be the result of a 2D spatial convolution between the input image and the smoothing kernel. If the filter coefficients of the smoothing kernel are all 1 (e.g., all elements have a value of 1), the spatial smoothing can be simplified to a 2D moving average in some examples. If the filter coefficients of the smoothing kernel are based on a certain distribution (e.g., a 2D Gaussian distribution), the spatial smoothing may be a weighted moving average. After the smoothing operation, the PSF expands to a larger size depending on the aggressiveness. The output of this step (each image frame) is referred to as smoothing(Input) in Equation 4. The input image may then be scaled based on the smoothing output. According to Equation 4, the input image is scaled with the output of the smoothing step. Specifically, each pixel of the output image, output(x, y), is the product of the image pixel, input(x, y), and the corresponding scaling factor, 1/[smoothing(input)(x, y)]. Due to the scaling factor, the output PSF shrinks to a smaller size depending on the aggressiveness parameter. Optionally, a normalization step may be applied to the output to remove outliers (e.g., infinite elements) and normalize the output dynamic range to certain limits.
Similar to the examples describing morphological operations, the Output of Equation 4 may be referred to as a mask, which may be normalized and raised to an exponent that may be used to control the aggressiveness as described in Equation 2. The final output of block 514 may then be the original image multiplied by the normalized mask raised to the exponent, as described in Equation 2.
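A sketch of the smoothing-based thinning of Equation (4) follows; the boxcar kernel realizes the 2D moving average described above, and the kernel size is illustrative:

    % Smoothing-based PSF thinning per Equation (4).
    k    = ones(8) / 64;                 % 8x8 boxcar kernel (high aggressiveness)
    den  = conv2(inputFrame, k, 'same'); % smoothing(Input): 2D moving average
    mask = inputFrame ./ max(den, eps);  % Equation (4): scale by 1/smoothing(Input)
    outFrame = inputFrame .* min(mask, 1) .^ 2; % clip outliers, then apply the exponent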
In a fourth example, a low-pass filter (LPF) thinning technique may be used. The LPF technique may be described by the equation:

Output = Input / LPF(Input)   Equation (5)
The Output is the output (e.g., thinned) image and the Input is the input image (e.g., the contrast image from block 502). The output image is equal to the input image divided by the input image after the spatial low-pass filtering operation. Aggressiveness in this example may be the cut-off spatial frequency of the spatial LPF. A high aggressiveness parameter corresponds to a lower cut-off frequency, so more boundaries are filtered, leaving smaller sizes of remaining contrast signal regions. A low aggressiveness parameter corresponds to a higher cut-off frequency, so fewer boundaries are filtered, leaving larger sizes of remaining contrast signal regions. The cut-off frequency may be adapted spatially and/or temporally.
A 2D spatial Fast Fourier Transform (FFT) is performed on the input image. The LPF is then applied to the output of the FFT. This step removes and/or suppresses the high spatial frequency components (e.g., the spatial frequency components above the cut-off frequency) of the input image. An inverse FFT is then performed on the filtered image, which brings the image from the frequency domain back to the image (e.g., spatial) domain. Because the high frequency components have been removed and/or suppressed, the PSF expands to a larger size depending on the aggressiveness. The output of this step is referred to as LPF(Input) in Equation 5. The input image may then be scaled based on the LPF output. According to Equation 5, the input image is scaled with the output of the LPF step. Specifically, each pixel of the output image, output(x, y), is the product of the image pixel, input(x, y), and the corresponding scaling factor, 1/[LPF(input)(x, y)]. Due to the scaling factor, the output PSF shrinks to a smaller size depending on the aggressiveness parameter. Optionally, a normalization step may be applied to the output to remove outliers (e.g., infinite elements) and normalize the output dynamic range to certain limits.
Similar to the examples describing morphological operations, the Output of Equation 5 may be referred to as a mask, which may be normalized and raised to an exponent that may be used to control the aggressiveness as described in Equation 2. The final output of block 514 may then be the original image multiplied by the normalized mask raised to the exponent, as described in Equation 2.
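A sketch of the LPF-based thinning of Equation (5) is shown below; the ideal circular passband and the cut-off fraction are assumptions:

    % LPF-based PSF thinning per Equation (5).
    [Nz, Nx] = size(inputFrame);
    [fx, fz] = meshgrid(linspace(-0.5, 0.5, Nx), linspace(-0.5, 0.5, Nz));
    passband = double(hypot(fx, fz) < 0.1);  % lower cut-off -> higher aggressiveness
    S   = fftshift(fft2(inputFrame));        % 2D spatial FFT
    den = real(ifft2(ifftshift(S .* passband)));        % LPF(Input), back in the image domain
    mask = min(max(inputFrame ./ max(den, eps), 0), 1); % Equation (5), clamped to [0, 1]
    outFrame = inputFrame .* mask .^ 2;      % optional exponent as in Equation (2)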
The examples provided herein are for illustrative purposes only and other adaptive thinning techniques may be used. For example, multi-resolution pyramid decomposition, blob-detection, and/or tube-detection image processing techniques may be used. With all of the thinning techniques, the aggressiveness parameters used may vary spatially and/or temporally.
Returning to
Any appropriate temporal accumulation method may be used. Two examples are provided herein for illustrative purposes, but the principles of the present disclosure are not limited to the examples provided. In some examples, a peak-hold or maximum intensity projection method may be used for temporal accumulation. In this method, the final CAI image frame shows only the maximum intensity among all previous input frames at each image pixel. An example MATLAB algorithm is provided below for illustration:
Input and output loops have the same dimension of [Nz, Nx, Nt], where Nz is the axial dimension, Nx is the lateral dimension, and Nt is the temporal (time) dimension.
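A plausible form of such a peak-hold algorithm, sketched with assumed variable names inputLoop and outputLoop, is:

    % Peak-hold temporal accumulation: each output frame holds the per-pixel
    % maximum over all input frames up to that time.
    [Nz, Nx, Nt] = size(inputLoop);
    outputLoop = zeros(Nz, Nx, Nt);
    outputLoop(:, :, 1) = inputLoop(:, :, 1);
    for t = 2:Nt
        outputLoop(:, :, t) = max(outputLoop(:, :, t - 1), inputLoop(:, :, t));
    end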
In some examples, averaging with exponential correction or average intensity projection may be used. The final CAI image frame shows the temporal average intensity (with exponential correction) among all previous input frames at each image pixel. An example MATLAB algorithm is provided below for illustration:
The expCoef is the exponential coefficient to correct the dynamic range of the final sequence of adaptive CAI images (e.g., the output loop), which is typically set to 0.5, but may be adjusted based on the properties of the ultrasound imaging system (e.g., imaging system 100) and/or exam type.
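A plausible sketch of the averaging algorithm with exponential correction is shown below; the running-mean formulation and the variable names other than expCoef are assumptions:

    % Average-intensity temporal accumulation with exponential correction.
    expCoef = 0.5;                        % exponential correction coefficient
    [Nz, Nx, Nt] = size(inputLoop);
    outputLoop = zeros(Nz, Nx, Nt);
    runningSum = zeros(Nz, Nx);
    for t = 1:Nt
        runningSum = runningSum + inputLoop(:, :, t);
        % Per-pixel mean of all frames so far, raised to expCoef to correct
        % the dynamic range of the accumulated loop.
        outputLoop(:, :, t) = (runningSum / t) .^ expCoef;
    end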
As noted in reference to
In some examples, users may manually control and/or overwrite the aggressiveness parameter of the adaptive PSF thinning/skeletonization step. For example, after processing is complete (e.g., the blocks shown in
In some examples, at the adaptive PSF thinning/skeletonization block 416, multiple levels of adapted aggressiveness parameters may be applied and the results may be stored (e.g., in local memory 242) for the same downstream processing (e.g., block 418). The user may review the results with the different levels of aggressiveness without re-processing the datasets. In other examples, instead of calculating different versions of the output loops at multiple aggressiveness levels at block 416, a blending algorithm may be performed between the regular CAI images (e.g., images at block 502) and highly thinned (e.g., high aggressiveness parameter) adaptive CAI images at each pixel and frame. Users can adjust the blending ratio between the two results to achieve the best blended images. In some examples, the original image may be combined with a thinned version of itself based on one or more weights. In some examples, the weights may vary as a function of time. For example, a same thinning operation may be applied to all of the image frames of a sequence (e.g., loop), and the images may contribute to the temporal accumulation process by summing a weight (e.g., a percentage) of the original image with a weight of the thinned image, where the weight may be high (e.g., 90-100%) for a first set of frames and gradually decrease (e.g., down to 10-0%) for subsequent frames.
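As an illustrative sketch of such time-varying blending (the linear decay of the weight and the variable names are assumptions):

    % Blend original contrast frames with highly thinned frames, weighting the
    % original heavily at first and the thinned result later in the loop.
    [Nz, Nx, Nt] = size(originalLoop);
    blended = zeros(Nz, Nx, Nt);
    w = linspace(1.0, 0.1, Nt);          % assumed weight ramp on the original frames
    for t = 1:Nt
        blended(:, :, t) = w(t) * originalLoop(:, :, t) ...
                         + (1 - w(t)) * thinnedLoop(:, :, t);
    end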
Systems and methods are described herein for performing adaptive CAI techniques. The adaptive CAI techniques may adapt (e.g., adjust, vary) an aggressiveness parameter of a PSF thinning/skeletonization technique. The aggressiveness parameter may be adapted spatially and/or temporally. Adapting the aggressiveness parameter may allow the adaptive CAI technique to provide improved visualization. The adaptive thinning techniques disclosed herein may allow for improved visualization of both high and low intensity signals within CAI image frames (e.g., areas of high and low perfusion, large and small vessels) in some applications.
In various examples where components, systems and/or methods are implemented using a programmable device, such as a computer-based system or programmable logic, it should be appreciated that the above-described systems and methods can be implemented using any of various known or later developed programming languages, such as “C”, “C++”, “FORTRAN”, “Pascal”, “VHDL” and the like. Accordingly, various storage media, such as magnetic computer disks, optical disks, electronic memories and the like, can be prepared that can contain information that can direct a device, such as a computer, to implement the above-described systems and/or methods. Once an appropriate device has access to the information and programs contained on the storage media, the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein. For example, if a computer disk containing appropriate materials, such as a source file, an object file, an executable file or the like, were provided to a computer, the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
In view of this disclosure it is noted that the various methods and devices described herein can be implemented in hardware, software, and/or firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention. The functionality of one or more of the processors described herein may be incorporated into a fewer number or a single processing unit (e.g., a CPU) and may be implemented using application specific integrated circuits (ASICs) or general purpose processing circuits which are programmed responsive to executable instructions to perform the functions described herein.
Although the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system. Certain additional advantages and features of this disclosure may be apparent to those skilled in the art upon studying the disclosure, or may be experienced by persons employing the novel system and method of the present disclosure. Another advantage of the present systems and method may be that conventional medical image systems can be easily upgraded to incorporate the features and advantages of the present systems, devices, and methods.
Of course, it is to be appreciated that any one of the examples or processes described herein may be combined with one or more other examples and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods.
Finally, the above-discussion is intended to be merely illustrative of the present systems and methods and should not be construed as limiting the appended claims to any particular example or group of examples. Thus, while the present system has been described in particular detail with reference to exemplary examples, it should also be appreciated that numerous modifications and alternative examples may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present systems and methods as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2020/082512 | 11/18/2020 | WO |

Number | Date | Country
---|---|---
62938554 | Nov 2019 | US