The present application claims priority to Japanese Patent Application No. 2023-040594, filed on Mar. 15, 2023. The entire contents of the above-listed application are incorporated herein by reference.
The present invention relates to an ultrasonic diagnostic device that creates a data set representing a temporal change in a feature in a region of interest, an operating method for the ultrasonic diagnostic device, and a program for the ultrasonic diagnostic device.
An ultrasound examination may be performed using contrast-enhanced ultrasound, in which a contrast agent is injected into a subject to create a contrast image of the subject. In contrast-enhanced ultrasound, a region of interest (ROI) may be assigned to a target (for example, a tumor) in a contrast image and a time-intensity curve may be generated for the region of interest. The time-intensity curve is also referred to as a TIC or a time-luminance curve. The time-intensity curve can be used for the differential diagnosis of tumors, etc.
In contrast-enhanced ultrasound, new diagnostic information can be obtained by conducting observations over a relatively long time period, for example, 5 minutes or 10 minutes after administration of a contrast agent. For example, it is said that the complexity of the vascular structure of a tumor, vascular resistance, etc. can be estimated by observing a gradual decrease in signal intensity from 2 minutes to 5 minutes after the administration of a contrast agent. In addition, hepatic function may potentially be quantified by comparing the amount of contrast agent accumulated in the hepatic parenchyma immediately after administration, 5 minutes after administration, and 10 minutes after administration.
In order to obtain such diagnostic information, it is necessary to collect data over a relatively long period of time after the administration of the contrast agent. However, a longer examination time places a greater burden on the examiner and the subject.
When data is acquired intermittently, for example when a change in signal intensity from the contrast agent is to be observed over a period of 10 minutes, ultrasonic scanning and image recording are not performed continuously for the full 10 minutes. Instead, the first minute is captured continuously, after which a short video of, for example, approximately 5 seconds is captured every time a predetermined amount of time (for example, 1 minute) elapses. Since scanning is performed continuously for the first minute, it is necessary to acquire an image group including a large number of ultrasonic images. However, once the first minute has elapsed, an image group corresponding to a short length of time, for example, approximately 5 seconds, may be acquired every time a predetermined time (for example, 1 minute) elapses. Therefore, once the images for the first minute have been acquired, it is only necessary to concentrate on image acquisition for short periods of time, reducing the burden on the examiner and the subject. After acquiring these image groups, the examiner creates a time-intensity curve for each image group, arranges the created time-intensity curves in a time series, and integrates them to display a single time-intensity curve. In this way, the examiner can obtain various types of diagnostic information by referring to the single time-intensity curve obtained by integration.
When creating a single integrated time-intensity curve, the user first selects an image group from the plurality of image groups obtained during the examination. The user then assigns a region of interest to the selected image group and adjusts the position, size, and/or shape of the region of interest while checking the data set in the assigned region of interest in order to determine the region of interest. By determining a region of interest for the selected image group in this way, a data set corresponding to the selected image group can be obtained. Once the data set has been obtained, the user selects the next image group from among the plurality of image groups, determines a region of interest for it, and obtains its data set. By selecting image groups from the plurality of image groups in order, data sets can be obtained for all image groups, and a single integrated time-intensity curve can then be created based on the data set obtained for each image group.

If a desired data set can be obtained for every image group, the single integrated time-intensity curve is considered suitable for obtaining information useful for diagnosis. However, when the desired data set cannot be obtained for one or more of the image groups obtained during an examination, the single integrated time-intensity curve may not be suitable for obtaining such information. Therefore, in order to obtain a desired data set for all image groups, the user carefully examines the optimum position for the region of interest in a selected image group and selects the next image group only once it has been determined that the desired data set has been obtained for the selected image group. However, it is not always easy for the user to determine whether a desired data set has been obtained for the selected image group. Consequently, while performing the work of obtaining a data set for a certain image group, the user may wish to correct the data set of an image group for which this work has already been completed. If a single integrated time-intensity curve is created without performing the necessary correction work on the data sets, information useful for diagnosis may not be obtained.

One possible way of addressing this problem would be for the user to take sufficient time to carefully determine the optimum region of interest for each image group so that correction work does not have to be performed later. However, this increases the burden on the user, and it is in practice difficult for the user to determine a region of interest in such a way that no later correction of the data sets is needed.
Therefore, there is a demand for the development of an ultrasonic diagnostic device that allows a user to easily perform correction work on data sets.
According to an aspect, an ultrasonic diagnostic device may include: a user interface; a processor; a storage device configured to store a plurality of image groups, each image group including a plurality of ultrasound images arranged in a time series; and a display, wherein the processor is configured to: (i) accept, through the user interface, an input from a user selecting one image group from among the plurality of image groups, (ii) responsive to the input selecting an image group, display the ultrasound images included in the selected image group on the display, (iii) accept, through the user interface, an input for assigning a region of interest to an ultrasound image displayed on the display, (iv) responsive to the input assigning the region of interest, create a data set representing a temporal change in a feature in the region of interest based on the ultrasound images included in the selected image group, and (v) arrange and integrate a plurality of data sets obtained by repeatedly executing operations (i) to (iv) in a time series, then create a data series representing the temporal change in the feature in the region of interest across the plurality of data sets, wherein the user input to select one image group includes a first input from the user to select a first image group of the plurality of image groups and a second input from the user to select a second image group of the plurality of image groups after accepting the first input, and wherein the user interface is configured to accept user input to reselect the first image group after the second input has been accepted, so that the user can perform correction work on the data set representing the temporal change in the feature in the first image group selected by the first input that was accepted before the second input.
According to another aspect, a method is provided for operating an ultrasonic diagnostic device including a user interface, a processor, a storage device that stores a plurality of image groups, each image group including a plurality of ultrasound images arranged in a time series, and a display. The method may include: (i) accepting an input from a user selecting one image group from among the plurality of image groups; (ii) responsive to the user interface accepting the input from the user selecting the one image group, displaying the ultrasound images included in the selected image group on the display; (iii) accepting an input from the user performing an operation of assigning a region of interest to an ultrasound image displayed on the display; (iv) responsive to the user interface accepting the input from the user assigning the region of interest, creating a data set representing a temporal change in a feature in the region of interest based on the ultrasound images included in the selected image group; and (v) integrating a plurality of data sets obtained by repeatedly executing (i) to (iv) in a time series, then creating a data series representing the temporal change in the feature in the region of interest across the plurality of data sets. The user input to select one image group includes a first input from the user to select a first image group from the plurality of image groups and a second input from the user to select a second image group from the plurality of image groups after accepting the first input. The method may further include accepting, after the user interface has accepted the second input, input from the user to reselect the first image group so that the user can perform correction work on the data set representing the temporal change of the feature in the first image group selected by the first input that was accepted before the second input.
According to yet another aspect, a non-transitory computer-readable medium may store instructions configured to be executed by a computer to: (i) obtain an input from a user through a user interface selecting one image group from among a plurality of image groups; (ii) responsive to the user interface accepting the input from the user selecting the one image group, display the ultrasound images included in the selected image group on a display; (iii) receive an input from the user performing an operation of assigning a region of interest to an ultrasound image displayed on the display; (iv) responsive to the user interface accepting the input from the user assigning the region of interest, create a data set representing a temporal change in a feature in the region of interest based on the ultrasound images included in the selected image group; (v) integrate a plurality of data sets obtained by repeatedly executing (i) to (iv) in a time series, then create a data series representing the temporal change in the feature in the region of interest across the plurality of data sets; and (vi) accept, after the user interface has accepted a second input from the user to select a second image group following a first input from the user to select a first image group, input from the user to reselect the first image group so that the user can perform correction work on the data set representing the temporal change of the feature in the first image group selected by the first input that was accepted before the second input.
Embodiments for carrying out the invention will be described below; however, the present invention is not limited to the following embodiments.
The ultrasonic diagnostic device 1 has an ultrasonic probe 2, a transmission beamformer 3, a transmitter 4, a receiver 5, a reception beamformer 6, a processor 7, a display 8, a memory 9, and a user interface 10.
The ultrasonic probe 2 has a plurality of vibrating elements 2a arranged in an array. The transmission beamformer 3 and the transmitter 4 drive the plurality of vibrating elements 2a, which are arrayed within the ultrasonic probe 2, and ultrasonic waves are transmitted from the vibrating elements 2a. The ultrasonic waves transmitted from the vibrating elements 2a are reflected inside the subject 57 (see the corresponding figure), and the resulting echoes are received by the vibrating elements 2a.
The reception beamformer 6 may be a hardware beamformer or a software beamformer. If the reception beamformer 6 is a software beamformer, the reception beamformer 6 may include one or more processors, including one or more of: i) a graphics processing unit (GPU); ii) a microprocessor; iii) a central processing unit (CPU); iv) a digital signal processor (DSP); or v) another type of processor capable of executing logical operations. The processor constituting the reception beamformer 6 may be a processor different from the processor 7 or may be the processor 7 itself.
The ultrasonic probe 2 may include an electrical circuit for performing all or a portion of transmission beamforming and/or reception beamforming. For example, all or a portion of the transmission beamformer 3, the transmitter 4, the receiver 5, and the reception beamformer 6 may be provided in the ultrasonic probe 2.
The processor 7 controls the transmission beamformer 3, the transmitter 4, the receiver 5, and the reception beamformer 6. Furthermore, the processor 7 is in electronic communication with the ultrasonic probe 2. The processor 7 controls which of the vibrating elements 2a are active and the shape of the ultrasonic beams transmitted from the ultrasonic probe 2. The processor 7 is also in electronic communication with the display 8. The processor 7 can process echo data to generate an ultrasonic image. The term “electronic communication” may be defined to include both wired and wireless communications. According to one embodiment, the processor 7 may include a central processing unit (CPU). According to another embodiment, the processor 7 may include another electronic component capable of performing a processing function, such as a digital signal processor, a field programmable gate array (FPGA), a graphics processing unit (GPU), or another type of processor. According to yet another embodiment, the processor 7 may include a plurality of electronic components capable of executing a processing function. For example, the processor 7 may include two or more electronic components selected from a list of electronic components including a central processing unit, a digital signal processor, a field programmable gate array, and a graphics processing unit.
The processor 7 may also include a complex demodulator (not illustrated in the drawings) that demodulates RF data. In another embodiment, demodulation may be executed in an earlier step in the processing chain.
Moreover, the processor 7 may generate various ultrasonic images (for example, a B-mode image, color Doppler image, M-mode image, color M-mode image, spectral Doppler image, elastography image, TVI image, strain image, and strain rate image) based on data obtained by processing via the reception beamformer 6. In addition, one or more modules can generate these ultrasonic images.
An image beam and/or an image frame may be saved, and timing information indicating when the data was retrieved may be recorded in the memory. The one or more modules may include, for example, a scan conversion module that performs a scan conversion operation to convert an image frame from beam space coordinates to display space coordinates. A video processor module may also be provided for reading an image frame from the memory while a procedure is being performed on the subject and displaying the image frame in real time. The video processor module may save the image frame in an image memory, from which ultrasound images are read and displayed on the display 8.
In the present Specification, the term “image” can broadly indicate both a visual image and data representing a visual image. Furthermore, the term “data” can include raw data, which is ultrasound data before a scan conversion operation, and image data, which is data after the scan conversion operation.
Note that the processing tasks described above handled by the processor 7 may be executed by a plurality of processors.
Furthermore, when the reception beamformer 6 is a software beamformer, the process executed by the beamformer may be executed by a single processor or distributed across a plurality of processors.
The display 8 can be an LED (light emitting diode) display, an LCD (liquid crystal display), an organic EL (electro-luminescence) display, or the like. The display 8 displays an ultrasonic image. In the first embodiment, the display 8 includes a display monitor 18 and a touch panel 28, as illustrated in the corresponding figure.
The memory 9 is any known data storage medium. In one example, the ultrasonic image display system includes a non-transitory storage medium and a transitory storage medium. The ultrasonic image display system may also include a plurality of memories. The non-transitory storage medium can be, for example, a non-volatile storage medium such as a hard disk drive (HDD), a read only memory (ROM), etc. The non-transitory storage medium may include a portable storage medium such as a CD (compact disc) or a DVD (digital versatile disc). A program executed by the processor 7 is stored in the non-transitory storage medium. The transitory storage medium is a volatile storage medium such as a random access memory (RAM).
The memory 9 stores one or more commands that can be executed by the processor 7. The one or more commands cause the processor 7 to execute predetermined operations.
Note that the processor 7 may also be configured so as to be able to connect to an external storage device 15 by a wired connection or a wireless connection. In this case, the commands executed by the processor 7 can be distributed between the memory 9 and the external storage device 15 for storage.
The user interface 10 may accept input from an operator 56. For example, the user interface 10 accepts commands and information input by the operator 56. The user interface 10 includes the touch panel 28 and an operation panel 38. The operation panel 38 can be configured so as to include a keyboard, hard keys, a trackball, a rotary control, soft keys, etc. The touch panel 28 can also display soft keys, buttons, etc.
The ultrasonic diagnostic device 1 is configured as described above.
Next, the operation of the ultrasonic diagnostic device 1 will be described. In the present embodiment, an example will be described in which the ultrasonic diagnostic device 1 operates so as to acquire contrast image data on a subject and create a time-intensity curve based on the acquired contrast image data.
First, a method for acquiring data on contrast images will be described. Specifically, the contrast image data is acquired as follows.
The operator 56 injects a contrast agent into the subject 57 and acquires a contrast image. A contrast agent containing microbubbles as an active ingredient can be used. The operator 56 operates the ultrasonic probe 2 to start acquisition of contrast images of the examination site from time point t11 (the time point at which the contrast agent is injected, or a time point close to it).
The processor 7 controls the ultrasonic probe 2 to transmit ultrasonic waves to the subject 57. The ultrasonic probe 2 receives ultrasonic waves (an echo) reflected by the contrast agent. The processor 7 performs predetermined processing on the echo signals and creates data indicating the signal intensity of the ultrasonic waves reflected by the contrast agent. Data indicating the signal intensity of the ultrasonic waves reflected by the contrast agent can be obtained as contrast image data.
The operator 56 acquires contrast image data intermittently over a predetermined time period from time point t11. For example, the operator 56 acquires contrast images intermittently over the period from time point t11 to time point tn2. The length of time from time point t11 to time point tn2 can be set to, for example, approximately 10 minutes.
During the period from time point t11 to time point tn2, the operator 56 does not continuously perform ultrasonic scanning and image recording, but rather performs data collection at predetermined times. In the present embodiment, the acquisition SC1 of data on the subject 57 is performed continuously from time point t11 to time point t12, after which the acquisitions SC2 to SCn of data are each performed for several seconds every time a certain time period elapses. In the present embodiment, the subject 57 is scanned in accordance with the time line illustrated in the corresponding figure.
First, the operator 56 administers a contrast agent, continuously performs scanning between time points t11 and t12, and acquires data of an image group 21 including a plurality of ultrasonic images. The time length between time points t11 and t12 is represented by “TS1”, with the time length TS1 being, for example, 60 seconds. Between time points t11 and t12, the processor 7 controls the ultrasonic probe 2 to transmit ultrasonic waves to the subject 57. The ultrasonic probe 2 receives the ultrasonic waves (echoes) reflected by the contrast agent. The processor 7 performs predetermined processing on the echo signals and creates an ultrasonic image indicating the signal intensity of the ultrasonic waves reflected by the contrast agent. The ultrasonic image can be, for example, a cross-sectional image of the liver of the subject 57. The processor 7 stores the acquired data of the image group 21 in the memory 9. In the present embodiment, two types of ultrasonic images are created: one is a contrast image in which nonlinear signals included in the echo signals reflected by the contrast agent are enhanced, while the other is a B-mode image. Therefore, the image group 21 contains an image group 211 including a plurality of contrast images arranged in a time series and an image group 212 including a plurality of B-mode images arranged in a time series.
After acquiring the data from time points t11 to t12, the operator 56 interrupts the acquisition of data and stands by until time point t21 at which the next data acquisition is started following injection of the contrast agent. Then, the operator 56 acquires data between time points t21 and t22 (time length TS2). Time length TS2 can be, for example, 5 seconds. The processor 7 performs predetermined processing on the echo signals to create image group 22. Image group 22 contains an image group 221 including a plurality of contrast images arranged in a time series and an image group 222 including a plurality of B-mode images arranged in a time series. The data for image group 22 is stored in the memory 9.
After acquiring data from time points t21 to t22, the operator 56 interrupts the data acquisition and stands by until time point t31 at which the next data acquisition is started following injection of the contrast agent. Then, the operator 56 acquires data between time points t31 and t32 (time length TS3). Time length TS3 can be, for example, 5 seconds. The processor 7 performs predetermined processing on the echo signals to create image group 23. Image group 23 contains an image group 231 including a plurality of contrast images arranged in a time series, along with an image group 232 including a plurality of B-mode images arranged in a time series. The data for image group 23 is stored in the memory 9.
Subsequently, the step of acquiring data after standing by for a certain period of time is repeatedly executed.
Finally, the operator 56 acquires data during the period from time point tn1 to time point tn2 (time length TSn). The processor 7 performs predetermined processing on the echo signals to create image group 2n. Image group 2n contains an image group 2n1 including a plurality of contrast images arranged in a time series and an image group 2n2 including a plurality of B-mode images arranged in a time series. The data for image group 2n is stored in the memory 9.
Examination data on the subject 57 is acquired in this manner.
Data obtained by scanning the subject 57 is stored in the memory 9.
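The embodiment does not prescribe any particular data layout, but the intermittent acquisition described above can be pictured with a small sketch. The following Python snippet is a minimal illustration only; all names (ImageGroup, start_time_s, etc.) are hypothetical and not taken from the embodiment:

```python
import numpy as np

# Hypothetical container for one intermittently acquired image group.
# Each group pairs contrast frames with B-mode frames and records the
# absolute start time of its acquisition window (measured from contrast
# agent administration), mirroring acquisitions SC1 to SCn.
class ImageGroup:
    def __init__(self, start_time_s, frame_interval_s, contrast_frames, bmode_frames):
        self.start_time_s = start_time_s          # e.g., 0.0 for acquisition SC1
        self.frame_interval_s = frame_interval_s  # seconds between frames
        self.contrast_frames = contrast_frames    # sequence of 2-D pixel arrays
        self.bmode_frames = bmode_frames          # sequence of 2-D pixel arrays

    def frame_times(self):
        # Absolute time stamp of every frame in this group.
        n = len(self.contrast_frames)
        return self.start_time_s + np.arange(n) * self.frame_interval_s
```

Recording the absolute start time per group is what later allows the per-group data sets to be arranged in a time series and integrated.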
Next, the operator 56 creates a time-intensity curve based on the data acquired from the subject 57. This method of creating a time-intensity curve will now be described with reference to the flowchart in the corresponding figure.
In step ST1, the user (for example, the operator 56, another operator, and/or a doctor) operates the user interface 10 to display an initial screen for creating a time-intensity curve on the monitor 18 (see the corresponding figure).
A thumbnail region 20 is shown on the screen of the monitor 18. In the thumbnail region 20, thumbnails corresponding to the n image groups 21 to 2n acquired by scanning the subject 57 are displayed.
In step ST2, the user operates the user interface 10 to input a command to select the image group to be used for creating a time-intensity curve from among the plurality of image groups 21 to 2n. The user can select the image group by operating, for example, a trackball or various buttons on the operation panel 38. For example, when selecting image group 21, the user operates the operation panel 38 to input a command to select image group 21. When this command is input, the user interface 10 accepts the input from the user to select image group 21 from among the plurality of image groups 21 to 2n. When the user interface 10 accepts the input, image group 21 is selected and the next screen is displayed on the monitor 18 (see the corresponding figure).
An image display region 30 is displayed to the right of the thumbnail region 20 on the screen of the monitor 18. When the user selects image group 21, the processor reproduces images from the selected image group 21 in the image display region 30. Image group 21 contains an image group 211 of contrast images and an image group 212 of B-mode images, with a contrast image 311 from image group 211 displayed in the right half of the image display region 30 and a B-mode image 312 from image group 212 displayed in the left half. Image groups 211 and 212 are reproduced as video having time length TS1 (see the corresponding figure).
In step ST3, the processor determines whether the selected image group is an image group that has been previously selected or an image group that is being selected for the first time (that is, newly selected). If the selected image group has been previously selected, the process proceeds to step ST8, whereas if the selected image group is being selected for the first time (newly selected), the process proceeds to step ST4. Here, image group 21 is not an image group that has been previously selected but rather an image group that is being selected for the first time (newly selected). Therefore, the process proceeds to step ST4.
In step ST4, the processor assigns an identifier to the image group selected by the user. Any identifier can be used as the identifier as long as it can identify a newly selected image group. In the present embodiment, an example is described in which an image number is used as an identifier.
In step ST4, the processor increments parameter i representing the image number of the newly selected image group. Here, it is assumed that parameter i is set to an initial value of i=0. Consequently, the processor increments parameter i from the initial value i=0 to i=1. In this way, image number i=1 is assigned to the selected image group 21, as illustrated in the corresponding figure.
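Steps ST3 and ST4 amount to simple bookkeeping: a group receives an image number only on its first selection, and a reselected group keeps its existing number so that its stored analysis information can be found again. A minimal sketch of this logic, using hypothetical names, might look as follows:

```python
class SelectionTracker:
    """Tracks which image groups have been selected before (step ST3) and
    assigns an incrementing image number on first selection (step ST4)."""

    def __init__(self):
        self.image_numbers = {}  # image-group key -> assigned image number
        self.i = 0               # parameter i, starting from the initial value 0

    def select(self, group_key):
        if group_key in self.image_numbers:
            # Previously selected: keep the existing number (proceed to step ST8).
            return self.image_numbers[group_key], False
        self.i += 1              # newly selected: increment parameter i
        self.image_numbers[group_key] = self.i
        return self.i, True
```

With this sketch, selecting "group21" for the first time would return (1, True), and reselecting it later would return (1, False), signalling that stored analysis information should be restored instead of creating a new entry.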
In step ST5, the user operates the user interface to input a command to display an analysis screen for analyzing the time-intensity curve of selected image group 21 on the monitor 18. This command can be input, for example, by operating the touch panel 28, as illustrated in the corresponding figure.
A thumbnail region 20, an image display region 40, and a TIC region 60 are displayed on the monitor 18. In the thumbnail region 20, image groups 21 to 2n are displayed. In the image display region 40, image group 211 (contrast image 311) and image group 212 (B-mode image 312) included in selected image group 21 are displayed. The TIC region 60 is a region in which a time-intensity curve is displayed. After the analysis screen has been displayed on the monitor 18, the process proceeds to step ST6.
In step ST6, the user performs analysis work. The specific details of this analysis work are described below.
First, the user assigns a region of interest to the contrast image 311 (and the B-mode image 312) displayed on the monitor 18.
While referring to the contrast image 311 (and the B-mode image 312), the user confirms the position of a target (for example, a tumor or a lesion) in the examination region. Then, the user operates the user interface to input a command for assigning a region of interest to the location of the target in the contrast image 311 (and the B-mode image 312). When the user interface receives the input from the user to assign a region of interest, the processor 7 displays figures representing the regions of interest 41 and 42 on the contrast image 311 and the B-mode image 312, respectively.
Once a region of interest has been assigned, the processor 7 calculates a feature in the region of interest 41 with respect to each contrast image 311 (each frame) of the image group 211. The feature in the region of interest 41 can be calculated as, for example, a value representing luminance information in the region of interest 41. In one example, the feature in the region of interest 41 can be calculated as an average value, a median value, or a standard deviation of the pixel values in the region of interest 41. In this way, a feature in the region of interest 41 can be calculated for each contrast image 311 in image group 211.
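As a concrete illustration of this calculation, the sketch below computes a per-frame feature over a boolean ROI mask; the function name and the mask representation are assumptions made for illustration, not details of the embodiment:

```python
import numpy as np

def roi_feature_per_frame(frames, roi_mask, statistic="mean"):
    """Compute the feature in the region of interest for every frame.

    frames:    array of shape (n_frames, height, width) with pixel values
    roi_mask:  boolean array of shape (height, width); True inside the ROI
    statistic: "mean", "median", or "std", as described in the embodiment
    """
    reducers = {"mean": np.mean, "median": np.median, "std": np.std}
    reduce = reducers[statistic]
    # Boolean indexing selects only the pixels inside the ROI of each frame.
    return np.array([reduce(frame[roi_mask]) for frame in frames])
```

Pairing these per-frame values with the frame time stamps yields the data set that is plotted in the TIC region.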
Once the feature in the region of interest has been calculated, the processor displays the feature in the region of interest 41 calculated for each contrast image 311 in the image group 211 as a data set 61 in the TIC region 60 (see the corresponding figure).
The user can visually recognize the temporal change in the feature in the region of interest 41 by referring to the data set 61 displayed in the TIC region 60. In the TIC region 60, a time-intensity curve 63 based on the data set 61 is also displayed.
As described above, by referring to the time-intensity curve 63 displayed in the TIC region 60, the user can visually recognize how the feature in the region of interest 41 has changed over time.
The user can change the conditions for creating the time-intensity curve as necessary. For example, the user can change the size and shape of the region of interest in accordance with the size of the target, the shape of the target, and the like. If the target has, for example, an elliptical shape, the user can adjust the size and shape of the region of interest so as to correspond to the elliptical shape of the target.
The user can also adjust the degree of smoothing for the time-intensity curve. For example, the user can adjust the degree of smoothing while referring to the time-intensity curve 63 (see the corresponding figure).
By adjusting the degree of smoothing for the time-intensity curve, the user can easily visually recognize the trend in the temporal change of the feature in the regions of interest. The smoothing processing can use, for example, the moving average method, which calculates the average value of m data points adjacent to each other in the time axis direction. By increasing or decreasing the value of m, the degree of smoothing for the time-intensity curve can be adjusted. In this way, the user can adjust the smoothness of the time-intensity curve to his or her own liking.
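A moving average over m adjacent samples is straightforward to implement; this sketch (with a hypothetical function name) shows one way to realize the adjustable smoothing described above:

```python
import numpy as np

def smooth_tic(intensities, m=5):
    """Moving-average smoothing of a time-intensity curve.

    Averages m data points adjacent to each other along the time axis;
    a larger m gives a smoother curve, and m=1 leaves the data unchanged.
    """
    kernel = np.ones(m) / m
    # mode="same" keeps the smoothed curve the same length as the input.
    return np.convolve(intensities, kernel, mode="same")
```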
When examining, for example, a pulsating site in the subject or a region that moves due to the breathing motion of the subject, the user assigns a region of interest so as to coincide with a target (for example, a tumor or a lesion); however, the target moves due to the movement of the examination region, so the position of the target may become displaced from the position of the region of interest. If the positional deviation between the target and the region of interest becomes too large, it may not be possible to obtain a time-intensity curve that properly reflects the temporal change in the feature in the target. Therefore, when a region of interest is assigned to a moving site such as the heart or the abdomen, motion compensation for the region of interest may be performed so that the region of interest can follow the motion of the target. When the user operates the user interface to input a command for performing motion compensation on the region of interest, and the user interface receives that input, the processor executes motion compensation on the region of interest so that the region of interest moves with the motion of the target. Because the region of interest then remains positioned over the target even when the examination region moves, it is possible to obtain a time-intensity curve in which the temporal change in the feature of the target is correctly reflected.
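The embodiment does not specify how motion compensation is implemented; one common approach is block matching, in which the ROI patch from the previous frame is searched for within a small window of the next frame. The sketch below illustrates that idea under those assumptions; all names are hypothetical:

```python
import numpy as np

def track_roi(frames, row, col, roi_h, roi_w, search=8):
    """Follow a target by matching the previous frame's ROI patch inside a
    small search window of the next frame (sum of squared differences).
    Returns the (row, col) position of the ROI for every frame."""
    positions = [(row, col)]
    for prev, cur in zip(frames, frames[1:]):
        r, c = positions[-1]
        template = prev[r:r + roi_h, c:c + roi_w].astype(float)
        best_ssd, best_rc = np.inf, (r, c)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                rr, cc = r + dr, c + dc
                if rr < 0 or cc < 0 or rr + roi_h > cur.shape[0] or cc + roi_w > cur.shape[1]:
                    continue  # candidate window falls outside the image
                patch = cur[rr:rr + roi_h, cc:cc + roi_w].astype(float)
                ssd = np.sum((patch - template) ** 2)
                if ssd < best_ssd:
                    best_ssd, best_rc = ssd, (rr, cc)
        positions.append(best_rc)
    return positions
```

The per-frame ROI positions returned here could then be fed to the feature calculation sketched earlier, so that the feature is always computed over the target.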
Should the user wish to adjust the position of the region of interest, the user can move the position of the region of interest to the desired position by operating the user interface.
As described above, the user sets the region of interest on the image with reference to the location of the target, adjusting the position of the region of interest with reference to the time-intensity curve. In addition, the user adjusts the size and shape of the region of interest or adjusts the degree of smoothing for the time-intensity curve as necessary. In this way, the user analyzes image group 21. When the analysis information for image group 21 with respect to the time-intensity curve (the position, size, shape, and color of the region of interest, the degree of smoothing, whether motion compensation is applied to the region of interest, etc.) has been determined, the user stores the determined analysis information in the memory. For example, the user operates the user interface to input a command to store the analysis information in the memory. When the user interface receives the input from the user to store the analysis information, the processor stores the analysis information in the memory.
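The analysis information listed above is naturally stored as a record keyed by the image number assigned in step ST4, so that it can be restored when the group is reselected (step ST8, described later). A minimal sketch with hypothetical field names:

```python
# Hypothetical store of analysis information, keyed by image number.
analysis_store = {}

def save_analysis(image_number, roi_position, roi_size, roi_shape,
                  roi_color, smoothing_m, motion_compensation):
    """Store the analysis information determined for one image group."""
    analysis_store[image_number] = {
        "roi_position": roi_position,                # e.g., (row, col)
        "roi_size": roi_size,                        # e.g., (height, width)
        "roi_shape": roi_shape,                      # e.g., "ellipse"
        "roi_color": roi_color,
        "smoothing_m": smoothing_m,                  # degree of smoothing
        "motion_compensation": motion_compensation,  # True/False
    }

def restore_analysis(image_number):
    """Read back the stored analysis information (as in step ST8)."""
    return analysis_store[image_number]
```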
The user may also assign other regions of interest as necessary. For example, there may be a case in which the user wishes to create not only a time-intensity curve of a target (for example, a tumor or a lesion) but also a time-intensity curve of tissue to be compared with the target in order to diagnose an examination region. In such a case, the user may assign other regions of interest. Here, a time-intensity curve is created for the tissue to be compared with the target. Thus, the user assigns a region of interest to the tissue to be compared.
When the user assigns another region of interest, the region of interest is assigned according to the above-described method. Once all necessary regions of interest have been assigned, the process proceeds to step ST7. Here, it is assumed that all necessary regions of interest have been assigned. Therefore, the process proceeds to step ST7.
In step ST7, the user determines whether to select another image group. If an image group is to be selected, the process returns to step ST2, while if an image group is not to be selected, the process proceeds to step ST11. Here, the user decides to select another image group. Therefore, the process returns to step ST2.
In step ST2, the user selects the next image group (here, image group 22) for analyzing the time-intensity curve.
In step ST3, the processor determines whether the selected image group 22 is an image group that has been previously selected or an image group that is being selected for the first time (that is, newly selected). Image group 22 is not an image group that has been previously selected but rather an image group that is being selected for the first time (newly selected). Therefore, the process proceeds to step ST4.
In step ST4, the processor increments parameter i representing the image number of the newly selected image group. Since the current value of parameter i is i=1, the processor increments parameter i from i=1 to i=2. Then, image number i=2 is assigned to the selected image group 22, as illustrated in the corresponding figure.
In step ST5, the processor 7 reads the selected image group 22 and displays an analysis screen for analyzing the time-intensity curve of the selected image group 22 on the monitor 18 (see the corresponding figure).
Image group 22 includes an image group 221 of contrast images and an image group 222 of B-mode images. Contrast image 321 in image group 221 is displayed in the upper part of the image display region 40, while B-mode image 322 in image group 222 is displayed in the lower part of the image display region 40. Image groups 221 and 222 are reproduced as video with time length TS2 (see the corresponding figure).
After displaying contrast image 321 and B-mode image 322 of image group 22, the process proceeds to step ST6.
In step ST6, the user performs an analysis operation on the selected image group 22 (see the corresponding figure).
While referring to the contrast image 321 (and B-mode image 322) displayed on the monitor 18, the user confirms the position of a target (for example, a tumor or a lesion) in the examination region. Then, the user operates the user interface to input a command for assigning a region of interest to the location of the target in the contrast image 321 (and B-mode image 322). For example, the user can assign a region of interest on the contrast image and B-mode image by operating a trackball or various buttons of the operation panel 38. When the user interface receives the input from the user to assign a region of interest, the processor 7 displays graphic figures representing regions of interest 43 and 44 on the contrast image 321 and the B-mode image 322, respectively. The position of the region of interest 44 in the B-mode image 322 corresponds to the position of the region of interest 43 in the contrast image 321. The user can simultaneously move the regions of interest 43 and 44 by manipulating the user interface.
The processor 7 calculates a feature in the region of interest 43 for each contrast image 321 (each frame) of the image group 221. The feature may be, for example, an average value, a median value, or a standard deviation of the pixel values included in the region of interest 43. Then, the calculated features are displayed as a time-intensity curve 68 in the TIC region 60 (see the corresponding figure).
The user adjusts the position of the region of interest with reference to the time-intensity curve, etc. In addition, the user adjusts the size and shape of the region of interest or adjusts the degree of smoothing for the time-intensity curve as necessary.
In this way, the user determines the analysis information for image group 22 with respect to the time-intensity curve 68.
The user also assigns regions of interest 53 and 54 to the target tissue to be compared and creates a time-intensity curve 69. Then, analysis information for image group 22 relating to the time-intensity curve 69 is also determined.
When the user determines the analysis information for image group 22 with respect to the time-intensity curves 68 and 69, the user operates the user interface to input a command to store the analysis information in the memory. When the user interface receives input from the user to store the analysis information in the memory, the processor stores the analysis information in the memory.
In step ST7, the user determines whether to select another image group. For example, when the user wants to select image group 23, the user can select the next image group 23 by operating the user interface.
During the work of analyzing the time-intensity curve for each image group, the user may determine that he or she wants to check the analysis information for a previously selected image group (for example, image group 21) again and perform correction work on the data set of the previously selected image group (for example, the data set 61 for image group 21). For example, based on the position of region of interest 43 in image group 22 (see the corresponding figure), the user may determine that the position of the region of interest in previously selected image group 21 should be corrected.
However, at this point, the screen displayed on the monitor 18 is the analysis screen for image group 22 (see the corresponding figure), so the data set for image group 21 cannot be corrected directly.
Therefore, in the present embodiment, the user interface is configured so as to allow the user to perform correction work on the data set (time-intensity curve) of a previously selected image group (for example, image group 21) as necessary. Specifically, the user interface is configured so as to accept input from the user to reselect previously selected image group 21 even after accepting input from the user to select image group 22.
Here, the flow will be described for the case in which the user in step ST7 wants to reselect the previously selected image group 21.
In step ST7, if the user wants to reselect previously selected image group 21, the process returns to step ST2.
In step ST2, the user operates the touch panel 28 to reselect previously selected image group 21 (see the corresponding figure).
When the user wants to modify the data set for previously selected image group 21, the user operates the user interface to input a command to reselect previously selected image group 21. In the present embodiment, in order to select previously selected image group 21, the user first touches the button 102 on the touch panel 28 with a finger 80, as illustrated in the corresponding figure, and then touches the selection button corresponding to image group 21.
In step ST3, the processor determines whether the selected image group 21 is an image group that has been previously selected or an image group that is being selected for the first time (that is, newly selected). Image group 21 is not a newly selected image group but rather a previously selected image group. Therefore, the process proceeds to step ST8.
In step ST8, the processor reads and restores the analysis information mapped to image group 21 (the analysis information in the data number D1 column of the table illustrated in the corresponding figure).
In step ST9, the processor displays a correction screen for image group 21 on the monitor 18 (see the corresponding figure).
Contrast image 311 and B-mode image 312 from the reselected image group 21 are displayed in the image display region 40 of the monitor 18. Regions of interest 41 and 51, assigned when the user first selected image group 21, are displayed in contrast image 311, while regions of interest 42 and 52, likewise assigned at that time, are displayed in B-mode image 312. In the TIC region 60, the time-intensity curve 66 for region of interest 41 and the time-intensity curve 67 for region of interest 51 in the contrast image 311 are also displayed. After the correction screen has been displayed, processing proceeds to step ST10.
In step ST10, the user checks the positions of the regions of interest and the waveforms of the time-intensity curves displayed on the screen, then determines whether or not to perform correction work on image group 21 (the time-intensity curves 66 and 67). If the user decides to perform correction work, the user operates the user interface to input a correction command. For example, when the user determines that the current position of region of interest 41 deviates from the position of the target, the user operates the user interface to input a command for correcting the position of the region of interest. When the user interface receives the input from the user to correct the position of the region of interest, the processor corrects the position of the region of interest based on the user input (see the corresponding figure).
In this way, the user can check the data set (time-intensity curve 661) of the region of interest after correction. The user confirms the data set (time-intensity curve) of region of interest 41 while further correcting the position of region of interest 41 as necessary. In this way, the user can correct the position of region of interest 41.
The user can also perform other correction work as necessary. Other correction work includes, for example, adding a new region of interest, changing the size, shape, or color of a region of interest, changing the degree of smoothing, and changing whether motion compensation is applied to a region of interest.
In this way, the user can perform correction work on the data set representing the temporal change of the feature in a region of interest. After performing correction work, the user operates the user interface to input a command to update the analysis information stored in the memory to the corrected analysis information. When this command has been input, the processor stores the corrected analysis information in the memory, and as a result, the analysis information is updated. Once the correction work has been completed, the process returns to step ST7.
In step ST7, the user determines whether to select another image group. If an image group is to be selected, the process returns to step ST2, while if an image group is not to be selected, the process proceeds to step ST11. Here, the user decides to select another image group. Therefore, the process returns to step ST2.
In step ST2, the user selects the next image group (here, image group 23) for analyzing the time-intensity curve.
In step ST3, the processor determines whether the selected image group 23 is an image group that has been previously selected or an image group that is being selected for the first time (that is, newly selected). Image group 23 is not an image group that has been previously selected but rather an image group that is being selected for the first time (newly selected). Therefore, the process proceeds to step ST4.
In step ST4, the processor increments the parameter i representing the image number of newly selected image group 23. Since the current value of parameter i is i=2, the processor increments parameter i from i=2 to i=3. Then, image number i=3 is assigned to image group 23, as illustrated in the corresponding figure.
In step ST5, the processor 7 reads the selected image group 23 and displays an analysis screen for analyzing the time-intensity curve of the selected image group 23 on the monitor 18 (see the corresponding figure).
Image group 23 includes an image group 231 of contrast images and an image group 232 of B-mode images, with contrast image 331 from image group 231 displayed in the upper part of the image display region 40 and B-mode image 332 from image group 232 displayed in the lower part. Image groups 231 and 232 are reproduced as video with time length TS3 (see the corresponding figure).
After displaying contrast image 331 and B-mode image 332 from image group 23, the process proceeds to step ST6.
In step ST6, the user performs analysis work to analyze the time-intensity curve for the selected image group 23. The analysis work is performed in the same manner as described above, so further description has been omitted. Upon completion of the analysis work, the process proceeds to step ST7, where the user determines whether or not to select another image group. When the user decides that another image group is to be selected, the process returns to step ST2.
Returning to step ST2, when the user wants to modify a previously selected image group, for example, when the user wants to select image group 22, the user touches button 102 on the touch panel 28 with a finger 80, as illustrated in the corresponding figure, and then touches the selection button corresponding to image group 22.
In a similar manner, steps ST2 to ST10 are repeated until further selection of image groups is deemed unnecessary by the user in step ST7. While steps ST2 to ST10 are being repeated, if the user wants to execute correction work on a previously selected image group, the user can reselect that image group and perform the correction work, as illustrated in the corresponding figure.
In step ST7, if the user determines that it is not necessary to select an image group, the process proceeds to step ST11. For example, in the event the user has selected all the images necessary for the diagnosis of a patient and thus determines that subsequent images do not have to be selected, the process proceeds to step ST11.
In step ST11, the user operates the user interface to input a command to display a plurality of data sets obtained by executing steps ST1 to ST10 as a single integrated data series. In the present embodiment, the command is input, for example, by operating the user interface as illustrated in the corresponding figure.
A merge graph region 70 is displayed on the monitor 18. Data series 91 and 92 are displayed in the merge graph region 70.
The data series 91 is indicated by a thick line, while the data series 92 is indicated by a thin line.
The data series 91 represents the temporal change in a feature in a region of interest assigned to a target (for example, a tumor) in image groups 21, 22, and 23. The data series 91 includes a data set (time-intensity curve 71) obtained based on image group 21, a data set (time-intensity curve 72) obtained based on image group 22, and a data set (time-intensity curve 73) obtained based on image group 23.
Meanwhile, data series 92 represents the temporal change in a feature in a region of interest assigned to the tissue to be compared with the target. Data series 92 includes a data set (time-intensity curve 74) obtained based on image group 21, a data set (time-intensity curve 75) obtained based on image group 22, and a data set (time-intensity curve 76) obtained based on image group 23.
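Because each data set carries absolute time stamps measured from contrast agent administration, integrating the data sets into a single data series reduces to concatenating them in chronological order, with the gaps between intermittent acquisitions preserved. A minimal sketch under that assumption, with hypothetical names:

```python
import numpy as np

def merge_data_sets(data_sets):
    """Integrate per-group data sets into one data series.

    data_sets: list of (times, intensities) pairs, one pair per image
    group, with times measured from contrast agent administration.
    """
    times = np.concatenate([t for t, _ in data_sets])
    intensities = np.concatenate([v for _, v in data_sets])
    order = np.argsort(times)  # arrange the data sets in a time series
    return times[order], intensities[order]
```

Applied once to the data sets for regions of interest on the target and once to those for the tissue to be compared, this would yield the two merged series corresponding to data series 91 and 92.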
By referring to data series 91 and 92, the user can easily and visually recognize how a feature in a region of interest changes over time, not only in the data set for each image group but also across the plurality of image groups 21, 22, and 23. This makes it possible to provide the user with more meaningful information for diagnosing a patient.
In this way, the flow shown in the flowchart is completed.
In the present embodiment, when the user wants to modify the data set of a previously selected image group after having selected another image group, the user can display a correction screen for the previously selected image group simply by touching the button for the image group to be reselected on the touch panel 28. In this way, the user can easily correct the data set of a previously selected image group.
In the present embodiment, the user reselects a previously selected image group by touching a selection button (for example, selection button 104) displayed on the touch panel 28. However, the selection button may be displayed on the monitor 18 instead of the touch panel 28, and the operation panel 38 may be configured so as to receive input from the user to reselect the image group corresponding to the image number of a selection button displayed on the monitor 18.
In the present embodiment, the user selects a new image group by operating the operation panel 38. However, a new image group may be selected by operating the touch panel 28.
In the present embodiment, when the image group selected by the user in step ST3 is a newly selected image group, the process proceeds to step ST4 and an image number is assigned to the image group selected by the user. However, as long as the newly selected image group can be identified, an identifier different from an image number may be assigned to the image group selected by the user. For example, the elapsed time from the start of administration of the contrast agent to the start of data acquisition may be assigned as an identifier to the image group selected by the user. When elapsed time is used as the identifier, a selection button including the elapsed time instead of an image number can be displayed on the touch panel 28. In this case, the touch panel 28 may be configured such that, when the user touches a selection button displayed on the touch panel 28, the touch panel 28 receives input from the user to reselect the image group corresponding to that elapsed time. Alternatively, a selection button including the elapsed time may be displayed on the monitor 18 instead of the touch panel 28, so that when the user operates the operation panel 38 to input a command to select a selection button, the operation panel 38 receives input from the user to reselect the image group corresponding to the elapsed time on the selection button displayed on the monitor 18. By displaying selection buttons including the elapsed time as the identifier, the user can easily and visually understand to which of the data acquisitions SC1 to SCn (see the corresponding figure) each image group corresponds.