The present application claims priority under 35 U.S.C. § 119 to Japanese Patent Application No. 2022-100084, filed on Jun. 22, 2022. The above application is hereby expressly incorporated by reference, in its entirety, into the present application.
The present invention relates to an image display apparatus and a control method of the image display apparatus which display a plurality of ultrasound images in which a lesion part of a subject is imaged.
In the related art, in the medical field, an ultrasound diagnostic apparatus using an ultrasound image has been put to practical use. In general, an ultrasound diagnostic apparatus includes an ultrasound probe with a built-in transducer array, and an apparatus main body connected to the ultrasound probe, and the ultrasound diagnostic apparatus causes the ultrasound probe to transmit an ultrasound beam toward a subject, receives an ultrasound echo from the subject by the ultrasound probe, and electrically processes a reception signal thereof to generate an ultrasound image.
In a case where the ultrasonography is performed on the lesion part of the subject using the ultrasound diagnostic apparatus, a plurality of ultrasound images are usually acquired, but a wide variety of images may be included in the acquired ultrasound images. For example, ultrasound images of different image types such as a brightness (B) mode image, an image superimposed with Doppler information, and an image superimposed with elasticity information may be included. Even B-mode images alone may include images in which the positions of the lesion part are different, and images in which the sizes of the imaging range are different.
After the ultrasonography of the subject is ended, a radiologist checks and interprets the ultrasound images displayed on a viewer and creates findings. It is therefore important to display the plurality of acquired ultrasound images in a proper order to improve the interpretation efficiency.
For example, JP2015-100661A discloses a medical image system that classifies a plurality of ultrasound images stored in a storage unit into a B-mode image represented in grayscale, and a color image, and then sorts the ultrasound images in time series to decide a display order of the plurality of ultrasound images.
In the medical image system disclosed in JP2015-100661A, since the ultrasound images are classified into the grayscale image and the color image, it is possible to separately display the B-mode image, the image superimposed with Doppler information, and the image superimposed with elasticity information.
However, various images may be stored even for the B-mode images alone, and in the medical image system of JP2015-100661A, it is difficult to select images in which the same lesion part is imaged, from the plurality of stored ultrasound images, and to decide the display order according to the position and imaging range of the imaged lesion part. Therefore, there is a possibility that the interpretation efficiency may be lowered.
The present invention has been made in order to solve such a problem in the related art, and an object of the present invention is to provide an image display apparatus and a control method of the image display apparatus which can improve the interpretation efficiency of the plurality of ultrasound images.
According to the following configuration, the above object can be achieved.
According to the present invention, the probe positional information of the ultrasound probe, and the imaging feature information regarding the position of the lesion part and the imaging range in each of the plurality of ultrasound images are acquired, the ultrasound image conforming to the display layout selected by the user is extracted from the plurality of ultrasound images by referring to the probe positional information and the imaging feature information, and the extracted ultrasound image is displayed on the monitor according to the display layout. Therefore, it is possible to improve the interpretation efficiency of the plurality of ultrasound images.
Hereinafter, embodiments of the invention will be described with reference to the accompanying drawings.
The description of configuration requirements described below is given on the basis of the representative embodiment of the present invention, but the present invention is not limited to such an embodiment.
In the present specification, a numerical range represented using “to” means a range including the numerical values before and after “to” as a lower limit value and an upper limit value.
In the present specification, the terms “same” and “identical” include an error range generally allowed in the technical field.
The ultrasound probe 10 includes a transducer array 11, and a transmission and reception circuit 12 is connected to the transducer array 11.
The apparatus main body 20 has an image generation unit 21 connected to the transmission and reception circuit 12 of the ultrasound probe 10, a display controller 22 and a monitor 23 are sequentially connected to the image generation unit 21, and an image memory 24 is connected to the image generation unit 21.
An analysis unit 25 is connected to the image memory 24, an information addition unit 26 and an image extraction unit 27 are sequentially connected to the analysis unit 25, and the display controller 22 is connected to the image extraction unit 27. The image memory 24 is connected to the information addition unit 26, and the image extraction unit 27 is connected to the image memory 24. Further, a display layout setting unit 28 is connected to the image extraction unit 27 and the display controller 22.
A main body controller 29 is connected to the image generation unit 21, the display controller 22, the image memory 24, the analysis unit 25, the information addition unit 26, the image extraction unit 27, and the display layout setting unit 28, and an input device 30 is connected to the main body controller 29. The transmission and reception circuit 12 of the ultrasound probe 10 is connected to the main body controller 29.
The image generation unit 21, the display controller 22, the analysis unit 25, the information addition unit 26, the image extraction unit 27, the display layout setting unit 28, and the main body controller 29 constitute a processor 31.
The transducer array 11 of the ultrasound probe 10 has a plurality of ultrasonic transducers arranged in a one-dimensional or two-dimensional manner. According to a drive signal supplied from the transmission and reception circuit 12, each of the transducers transmits an ultrasonic wave and receives a reflected wave from the subject to output an analog reception signal. For example, each transducer is configured by forming electrodes at both ends of a piezoelectric body consisting of piezoelectric ceramic represented by lead zirconate titanate (PZT), a polymer piezoelectric element represented by polyvinylidene difluoride (PVDF), piezoelectric single crystal represented by lead magnesium niobate-lead titanate (PMN-PT), or the like.
The transmission and reception circuit 12 causes the transducer array 11 to transmit the ultrasonic wave and generates a sound ray signal on the basis of a reception signal acquired by the transducer array 11, under the control of the main body controller 29.
As illustrated in
The pulser 13 includes, for example, a plurality of pulse generators, and the pulser 13 adjusts the amount of delay of each drive signal so that ultrasonic waves transmitted from the plurality of transducers of the transducer array 11 form an ultrasound beam on the basis of a transmission delay pattern selected according to the control signal from the main body controller 29, and supplies the obtained signals to the plurality of transducers. Thus, in a case where a pulsed or continuous-wave voltage is applied to the electrodes of the transducers of the transducer array 11, the piezoelectric body expands and contracts to generate pulsed or continuous-wave ultrasonic waves from each transducer. From the combined wave of these ultrasonic waves, an ultrasound beam is formed.
The transmitted ultrasound beam is reflected by a target, for example, a site of the subject, and the ultrasound echo propagates toward the transducer array 11 of the ultrasound probe 10. The ultrasound echo propagating toward the transducer array 11 in this manner is received by each transducer constituting the transducer array 11. In this case, each transducer constituting the transducer array 11 expands and contracts by receiving the propagating ultrasound echo to generate a reception signal that is an electric signal, and outputs the reception signal to the amplification unit 14.
The amplification unit 14 amplifies the signals input from each transducer constituting the transducer array 11, and transmits the amplified signals to the AD conversion unit 15. The AD conversion unit 15 converts the signal transmitted from the amplification unit 14 into digital reception data, and transmits the reception data to the beam former 16. The beam former 16 performs so-called reception focusing processing in which addition is performed by giving delays to respective pieces of the reception data converted by the AD conversion unit 15 according to a sound speed distribution or a sound speed set on the basis of a reception delay pattern selected according to the control signal from the main body controller 29. Through the reception focusing processing, a sound ray signal in which each piece of the reception data converted by the AD conversion unit 15 is phased and added and the focus of the ultrasound echo is narrowed is acquired.
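The reception focusing processing described above can be sketched as a simple delay-and-sum over per-element reception data. The array shapes, the function name, and the use of integer-sample delays are illustrative assumptions, not the apparatus's actual implementation:

```python
import numpy as np

def delay_and_sum(rx_data, delays):
    """Phase-align and sum per-element reception data (reception focusing).

    rx_data : (n_elements, n_samples) array of digitized echo data.
    delays  : per-element delays in samples (hypothetical values that would
              be derived from the selected reception delay pattern and the
              set sound speed).
    Returns the summed sound ray signal of length n_samples.
    """
    n_elements, n_samples = rx_data.shape
    out = np.zeros(n_samples)
    for ch, d in enumerate(delays):
        # Shift each channel by its delay so echoes from the focal point
        # line up across channels, then accumulate (phasing addition).
        shifted = np.roll(rx_data[ch], -int(d))
        shifted[n_samples - int(d):] = 0.0  # discard wrapped-around samples
        out += shifted
    return out
```

With matching delays, echoes from the focus add coherently while off-focus echoes largely cancel, which is how the focus of the ultrasound echo is narrowed.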
As illustrated in
The signal processing unit 41 generates an ultrasound image signal (B-mode image signal), which is tomographic image information regarding tissues inside the subject, by performing, on the sound ray signal sent from the transmission and reception circuit 12 of the ultrasound probe 10, correction of the attenuation due to the distance according to the depth of the reflection position of the ultrasonic wave and then performing envelope detection processing.
The DSC 42 converts (raster conversion) the ultrasound image signal generated by the signal processing unit 41 into an image signal according to a normal television signal scanning method.
The image processing unit 43 performs various kinds of necessary image processing such as gradation processing on the ultrasound image signal input from the DSC 42, and then outputs the signal representing the ultrasound image to the display controller 22 and the image memory 24. The signal representing the ultrasound image generated by the image generation unit 21 in this manner is simply referred to as an ultrasound image.
The transmission and reception circuit 12 of the ultrasound probe 10 and the image generation unit 21 of the apparatus main body 20 form an image acquisition unit 32 that acquires a plurality of ultrasound images in which the lesion part of the subject is imaged.
The image memory 24 is a memory that stores the ultrasound image generated by the image acquisition unit 32 under the control of the main body controller 29. For example, the image memory 24 can hold the ultrasound images of a plurality of frames, which are generated by the image acquisition unit 32, corresponding to the diagnosis for the breast of the subject.
Here, as the image memory 24, recording media such as a flash memory, a hard disc drive (HDD), a solid state drive (SSD), a flexible disc (FD), a magneto-optical disc (MO disc), a magnetic tape (MT), a random access memory (RAM), a compact disc (CD), a digital versatile disc (DVD), a secure digital card (SD card), and a universal serial bus memory (USB memory), or the like can be used.
The analysis unit 25 analyzes the plurality of ultrasound images that are generated by the image acquisition unit 32 and stored in the image memory 24 to acquire probe positional information indicating the position of the ultrasound probe 10, image type information indicating whether the ultrasound image is a B-mode image or another image, and imaging feature information regarding the position of the imaged lesion part and the imaging range, for each of the plurality of ultrasound images.
The analysis unit 25 can acquire the probe positional information on the basis of the orientation and the probe position of a body mark added to each of the plurality of ultrasound images.
A body mark schematically representing the left breast is obtained by horizontally reversing the body mark 51 illustrated in
As illustrated in
The probe mark 54 has a rectangular shape extending in an elongated manner in an arrangement direction of the transducer array 11 of the ultrasound probe 10, and it is possible to understand the position and orientation of the ultrasound probe 10 at the time of imaging, by checking the probe mark 54 plotted on the body mark 51. For example, in a case where the body mark 51 illustrated in
Further, in a case where the body mark 51 illustrated in
Furthermore, in a case where the body mark 51 illustrated in
Therefore, the analysis unit 25 can perform the image analysis on the ultrasound image to recognize the probe mark 54 of the body mark 51 added to the ultrasound image, and acquire the probe positional information regarding the position and orientation of the ultrasound probe 10 at the time of imaging the ultrasound image.
In the image memory 24 in which the ultrasound image is stored, tag information associated with the image is stored as accessory information of the ultrasound image, and in a case of being transferred from the apparatus main body 20 to a picture archiving and communication system (PACS), a workstation, or the like, the ultrasound image is converted into the image data in a Digital Imaging and Communications in Medicine (DICOM) format including the tag information.
The tag information includes, for example, information indicating that the ultrasound image is monochrome. In a case where the ultrasound image is monochrome, it can be determined that the ultrasound image is a B-mode image. Thus, the analysis unit 25 can acquire the image type information indicating whether the ultrasound image is a B-mode image or another image on the basis of the tag information associated with each of the plurality of ultrasound images.
The analysis unit 25 can perform the image analysis on the ultrasound image to recognize RGB signal values of the ultrasound image, and acquire the image type information of the ultrasound image. In a case where it is determined that the ultrasound image is a grayscale image on the basis of the RGB signal values in each pixel of the ultrasound image, it can be determined that the ultrasound image is a B-mode image.
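As a minimal sketch of the grayscale determination described above, an image can be judged to be a B-mode image when every pixel has equal R, G, and B signal values. The function name, the pixel data shape, and the tolerance parameter are assumptions for illustration:

```python
def is_b_mode(rgb_pixels, tolerance=0):
    """Judge whether an image is grayscale (and hence, under the scheme
    above, a B-mode image) by checking that the R, G, and B signal values
    are equal in every pixel, within an optional tolerance.

    rgb_pixels : iterable of (r, g, b) tuples.
    """
    for r, g, b in rgb_pixels:
        if max(r, g, b) - min(r, g, b) > tolerance:
            # A chromatic pixel suggests a Doppler or elasticity overlay.
            return False
    return True
```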
Further, the analysis unit 25 may acquire the image type information of the ultrasound image on the basis of the operation mode of the ultrasound device at the time of imaging the ultrasound image. For example, in a case where the ultrasound device is in an elasticity mode during freezing, it can be determined that the ultrasound image is an image superimposed with the elasticity information.
For example, since breast cancer arises from the epithelium of the breast duct, the continuity of the breast ducts extending from the mammary gland to the nipple is observed, and a site where the continuity is interrupted is often extracted as the lesion part. Accordingly, the user such as an examiner may want to leave an image in which not only the lesion part but also the breast duct leading to the lesion part is imaged.
In this case, by setting a lesion part F at a position biased toward the end portion of the screen as illustrated in
Therefore, the analysis unit 25 performs the image analysis on each of the plurality of ultrasound images to detect the lesion part from the ultrasound image and acquire the imaging feature information regarding the position of the lesion part in the ultrasound image. For example, as illustrated in
As the position of the lesion part in the ultrasound image, for example, three regions are assumed: near the center of the ultrasound image in the horizontal direction (orientation direction), near the end portion of the ultrasound image in the horizontal direction, and near the middle between the center and the end portion in the horizontal direction. It is determined in which of these regions the lesion part F is positioned on the basis of the calculated distance L. The analysis unit 25 can acquire the imaging feature information indicating the region where the lesion part is positioned for each of the plurality of ultrasound images.
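The three-region determination described above can be sketched as follows. The one-third boundary values are illustrative assumptions, not values taken from the specification:

```python
def classify_lesion_region(lesion_x, image_width):
    """Classify the horizontal position of a detected lesion into one of
    the three assumed regions, based on the distance L between the lesion
    and the image center in the horizontal (orientation) direction.
    """
    half_width = image_width / 2.0
    distance = abs(lesion_x - half_width)  # the distance L
    if distance < half_width / 3.0:
        return "center"   # near the center in the horizontal direction
    elif distance < 2.0 * half_width / 3.0:
        return "middle"   # near the middle between center and end portion
    return "end"          # near the end portion
```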
The analysis unit 25 acquires the imaging feature information regarding the imaging range of the inside of the subject shown in the ultrasound image on the basis of the distance between pixels in each of the plurality of ultrasound images. As the distance between pixels is greater, the imaging covers a wider range, and for example, it can be seen that not only the lesion part but also the peripheral tissues of the lesion part are imaged. Since the distance between pixels is stored in a tag attached to the image data in the DICOM format, it is possible to acquire the distance between pixels in the ultrasound image as the imaging feature information by checking the tag.
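Given the distance between pixels (for example, the DICOM Pixel Spacing attribute (0028,0030), which stores row and column spacing in millimeters) and the image dimensions, the physical imaging range follows directly. The function name is hypothetical:

```python
def imaging_range_mm(pixel_spacing_mm, width_px, height_px):
    """Estimate the physical imaging range shown in an ultrasound image
    from the distance between pixels and the image dimensions.

    pixel_spacing_mm : (row_spacing, column_spacing) in mm per pixel,
                       ordered as in the DICOM Pixel Spacing attribute.
    Returns (width_mm, depth_mm).
    """
    row_spacing, col_spacing = pixel_spacing_mm
    return (width_px * col_spacing, height_px * row_spacing)
```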
The analysis of the ultrasound image by the analysis unit 25 can be executed by using at least one of template matching, an image analysis technique using feature amounts such as Adaptive Boosting (Adaboost), support vector machines (SVM) or scale-invariant feature transform (SIFT), and a determination model trained using a machine learning technique such as deep learning.
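Of the techniques listed above, template matching admits the shortest sketch: a brute-force normalized cross-correlation search, shown here as a hypothetical illustration rather than the apparatus's implementation (a practical system would use an optimized library routine or a trained determination model):

```python
import numpy as np

def match_template(image, template):
    """Find the top-left position where a small template (e.g. the probe
    mark on a body mark) best matches an image, using normalized
    cross-correlation over every candidate position."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            # Correlation is 1.0 for a perfect match, 0 for no relation.
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos
```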
The information addition unit 26 adds the probe positional information, the image type information, and the imaging feature information acquired by the analysis unit 25 to each of the plurality of ultrasound images. The plurality of ultrasound images to which the probe positional information, the image type information, and the imaging feature information are added are sent from the information addition unit 26 to the image memory 24 to be stored in the image memory 24, and are sent from the information addition unit 26 to the image extraction unit 27. The addition of the probe positional information, the image type information, and the imaging feature information can be performed on the plurality of ultrasound images stored in the image memory 24, or can be performed on the DICOM files of the plurality of ultrasound images.
The image extraction unit 27 extracts the ultrasound image that conforms to the display layout set by the display layout setting unit 28, from the plurality of ultrasound images by referring to the probe positional information, the image type information, and the imaging feature information acquired by the analysis unit 25.
The image extraction unit 27 can first extract only the B-mode images from the plurality of ultrasound images on the basis of the image type information acquired by the analysis unit 25, and select and extract an image conforming to the display layout from the B-mode images.
In a case where the display layout set by the display layout setting unit 28 is a layout including not only the B-mode image but also an image superimposed with the elasticity information or Doppler information, the image extraction unit 27 can extract an image conforming to the display layout including the B-mode image and the image superimposed with the elasticity information or Doppler information, from the plurality of ultrasound images.
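The extraction performed by the image extraction unit 27 can be sketched as matching per-image information records against the requirements of each display region. The dictionaries below are stand-ins for the added probe positional, image type, and imaging feature information; the keys and values are assumptions for illustration:

```python
def extract_for_layout(images, wanted):
    """For each display region, pick the first stored image whose added
    information matches the region's requirements.

    images : list of dicts holding the information added to each image.
    wanted : mapping of region name -> required key/value pairs.
    Returns a mapping of region name -> selected image (regions with no
    conforming image are simply omitted).
    """
    layout = {}
    for region, req in wanted.items():
        for img in images:
            if all(img.get(k) == v for k, v in req.items()):
                layout[region] = img
                break
    return layout
```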
The display layout setting unit 28 sets the display layout in a case of displaying the plurality of ultrasound images on the monitor 23, on the basis of the user's input operation via the input device 30. The display layout is set by the user such that the plurality of ultrasound images of the types and imaging features according to the preference of each user are displayed on the monitor 23 at the same time, and the number of screen divisions of the monitor 23, the display order corresponding to the image types and imaging features, the display magnification, and the like can be freely specified.
For example, images captured by the ultrasound probe 10 in the orientation along the horizontal direction are displayed in the upper display regions U1 to U3, and images captured by the ultrasound probe 10 in the orientation along the vertical direction are displayed in the lower display regions L1 to L3.
Images in which the lesion part is shown near the center in the horizontal direction are displayed in the upper and lower left display regions U1 and L1, images in which the lesion part is shown near the middle of the center and the end portion in the horizontal direction are displayed in the upper and lower middle display regions U2 and L2, and images in which the lesion part is shown near the end portion in the horizontal direction are displayed in the upper and lower right display regions U3 and L3.
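The assignment of images to the six display regions described above reduces to a lookup on probe orientation and lesion position. A minimal sketch, assuming the string labels used here as stand-ins for the added information:

```python
def region_for(orientation, position):
    """Map an image's probe orientation and lesion position to a display
    region of the 2x3 layout above: upper row U1-U3 for the horizontal
    orientation, lower row L1-L3 for the vertical orientation, with
    columns ordered center / middle / end from left to right.
    """
    row = "U" if orientation == "horizontal" else "L"
    col = {"center": 1, "middle": 2, "end": 3}[position]
    return f"{row}{col}"
```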
The display controller 22 performs predetermined processing on the ultrasound image sent from the image generation unit 21, and displays the ultrasound image on the monitor 23, under the control of the main body controller 29.
The display controller 22 displays the ultrasound image extracted by the image extraction unit 27 on the monitor 23 according to the display layout set by the display layout setting unit 28.
The monitor 23 is for displaying the ultrasound image under the control of the display controller 22, and includes a display device such as a liquid crystal display (LCD), or an organic electroluminescence (EL) display.
The main body controller 29 controls each unit of the apparatus main body 20 and the transmission and reception circuit 12 of the ultrasound probe 10 on the basis of a control program and the like stored in advance.
Although not illustrated, a main body-side storage unit is connected to the main body controller 29. The main body-side storage unit stores a control program and the like. As the main body-side storage unit, for example, a flash memory, a RAM, an SD card, an SSD, and the like can be used.
The input device 30 is for a user to perform an input operation, and is configured by, for example, a device such as a keyboard, a mouse, a trackball, a touchpad, and a touch sensor superimposed on the monitor 23.
The processor 31 having the image generation unit 21, the display controller 22, the analysis unit 25, the information addition unit 26, the image extraction unit 27, the display layout setting unit 28, and the main body controller 29 is configured by a central processing unit (CPU) and a control program for causing the CPU to execute various kinds of processing, but the processor 31 may be configured by using a field programmable gate array (FPGA), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a graphics processing unit (GPU), or other integrated circuits (IC) or may be configured by a combination thereof.
Further, the image generation unit 21, the display controller 22, the analysis unit 25, the information addition unit 26, the image extraction unit 27, the display layout setting unit 28, and the main body controller 29 of the processor 31 can also be configured by being integrated partially or entirely into one CPU or the like.
Next, the operation of the image display apparatus according to the first embodiment will be described with reference to the flowchart illustrated in
First, in Step S1, the display layout is set by the display layout setting unit 28 on the basis of the user's input operation via the input device 30. The user can set the display layout as illustrated in
The display layout is sent from the display layout setting unit 28 to the image extraction unit 27 and the display controller 22.
Next, in Step S2, the ultrasound image in which the lesion part of the subject is imaged is acquired by the image acquisition unit 32. In this case, under the control of the main body controller 29, the transmission and reception of ultrasonic waves from the plurality of transducers of the transducer array 11 are started according to the drive signal from the pulser 13 of the transmission and reception circuit 12 of the ultrasound probe 10, the ultrasound echo from the subject is received by the plurality of transducers of the transducer array 11, and the reception signal as the analog signal is output to the amplification unit 14 to be amplified, and then is subjected to the AD conversion by the AD conversion unit 15 to acquire the reception data.
By performing the reception focusing processing on the reception data by the beam former 16, the sound ray signal generated in this manner is sent to the image generation unit 21 of the apparatus main body 20, and the ultrasound image representing the tomographic image information of the lesion part of the subject is generated by the image generation unit 21. In this case, the signal processing unit 41 of the image generation unit 21 performs the correction of the attenuation according to the depth of the reflection position of the ultrasonic wave and the envelope detection processing on the sound ray signal, the DSC 42 performs the conversion into the image signal according to a normal television signal scanning method, and the image processing unit 43 performs various kinds of necessary image processing such as gradation processing.
In a case where the ultrasound image is acquired by the image acquisition unit 32, in Step S3, the probe mark is plotted on the body mark automatically or manually by the user, and then in Step S4, the ultrasound image is frozen and stored in the image memory 24. In this case, the tag information associated with the ultrasound image as the accessory information of the ultrasound image is also stored in the image memory 24.
Until it is determined in Step S5 that the acquisition of the series of ultrasound images in the ultrasonography of the subject is completed, Step S2 to Step S5 are repeated, and the plurality of acquired ultrasound images are stored in the image memory 24.
In a case where it is determined in Step S5 that the acquisition of the series of ultrasound images is completed, the processing proceeds to Step S6, and the analysis is performed on the plurality of ultrasound images stored in the image memory 24 by the analysis unit 25. Thus, the probe positional information indicating the position of the ultrasound probe 10 at the time of imaging, the image type information indicating whether the ultrasound image is a B-mode image or another image, and the imaging feature information regarding the position of the imaged lesion part and the imaging range are acquired for each of the plurality of ultrasound images.
In this case, for example, the analysis unit 25 acquires the probe positional information from the tag information that is stored in Step S4 in the image memory 24 as the accessory information of the ultrasound image.
Further, the analysis unit 25 acquires the image type information on the basis of the tag information stored in association with the ultrasound image in the image memory 24 or by recognizing the RGB signal values of the ultrasound image. Furthermore, the analysis unit 25 may acquire the image type information of the ultrasound image on the basis of the operation mode of the ultrasound device at the time of imaging the ultrasound image. For example, in a case where the ultrasound device is in the elasticity mode during freezing, the image type information indicating an image superimposed with the elasticity information can be acquired.
The analysis unit 25 performs the image analysis on the ultrasound image to detect the lesion part, acquire the imaging feature information regarding the position of the lesion part in the ultrasound image, and acquire the imaging feature information regarding the imaging range on the basis of the distance between pixels in the ultrasound image.
The probe positional information, the image type information, and the imaging feature information acquired by the analysis unit 25 are added to the corresponding ultrasound image by the information addition unit 26. The plurality of ultrasound images to which the probe positional information, the image type information, and the imaging feature information are added are sent from the information addition unit 26 to the image extraction unit 27, and are sent to and stored in the image memory 24.
In subsequent Step S7, the ultrasound image conforming to the display layout set by the display layout setting unit 28 is extracted from the plurality of ultrasound images by the image extraction unit 27.
In this case, the image extraction unit 27 extracts the ultrasound image by referring to the probe positional information, the image type information, and the imaging feature information acquired by the analysis unit 25. For example, the ultrasound image with the probe positional information indicating that the image is captured by the ultrasound probe 10 in an orientation along the horizontal direction at the position of the lesion part of interest, the image type information indicating the B-mode image, and the imaging feature information indicating that the lesion part is shown near the center in the horizontal direction is extracted corresponding to the display region U1 of the display layout illustrated in
Similarly, the ultrasound image conforming to each of the plurality of display regions of the display layout set by the display layout setting unit 28 is extracted by the image extraction unit 27.
Further, in Step S8, the plurality of ultrasound images extracted in Step S7 are displayed on the monitor 23 according to the display layout by the display controller 22.
The lesion part F is shown near the center in the horizontal direction in the ultrasound images G11 and G21 displayed in the upper and lower left display regions U1 and L1, the lesion part F is shown near the middle between the center and the end portion in the horizontal direction in the ultrasound image G12 displayed in the upper center display region U2, and the lesion part F is shown near the end portion in the horizontal direction in the ultrasound image G13 displayed in the upper right display region U3.
In
One of the image captured by the ultrasound probe 10 in an orientation along the horizontal direction, which is displayed in the upper display regions U1 to U3, and the image captured by the ultrasound probe 10 in an orientation along the vertical direction, which is displayed in the lower display regions L1 to L3, can be generated from a video imaged at the time of acquiring the other image.
In this manner, the probe positional information of the ultrasound probe 10, the image type information indicating whether the image is the B-mode image or another image, and the imaging feature information regarding the position of the lesion part and the imaging range are acquired by the analysis unit 25 for each of the plurality of ultrasound images, the ultrasound image conforming to the display layout set by the display layout setting unit 28 is extracted from the plurality of ultrasound images by the image extraction unit 27 on the basis of the probe positional information, the image type information, and the imaging feature information, and the extracted ultrasound image is displayed on the monitor 23 according to the display layout by the display controller 22. Therefore, it is possible for the user to efficiently perform the interpretation by checking the ultrasound image displayed on the monitor 23.
Since the image extraction unit 27 refers to the probe positional information indicating the position and orientation of the ultrasound probe 10 at the time of imaging, and the image type information, the image extraction unit 27 can extract a tomographic B-mode image of the same lesion part cut in the same cross-sectional direction.
Further, since the image extraction unit 27 refers to the imaging feature information regarding the position of the lesion part in the ultrasound image, the image extraction unit 27 can extract the ultrasound image in which not only the lesion part but also the peripheral portion leading to the lesion part are imaged.
Since the analysis unit 25 acquires the imaging feature information regarding the imaging range of the inside of the subject shown in the ultrasound image on the basis of the distance between pixels in each of the plurality of ultrasound images, the display layout for displaying the plurality of ultrasound images, in which the same lesion part is imaged but which have different imaging ranges, can be set by the display layout setting unit 28.
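The imaging range can be estimated from the distance between pixels by multiplying that distance by the pixel dimensions of the image; the DICOM Pixel Spacing attribute is one common source of such a distance. The function below is an illustrative sketch, not a procedure given in the specification.

```python
def imaging_range_mm(rows, cols, row_spacing_mm, col_spacing_mm):
    """Physical extent of the imaged region, computed from the distance
    between adjacent pixels and the number of pixels in each direction."""
    return rows * row_spacing_mm, cols * col_spacing_mm

# A 600 x 800 pixel image with 0.05 mm between pixels
# covers a 30 mm (depth) x 40 mm (width) field.
depth_mm, width_mm = imaging_range_mm(600, 800, 0.05, 0.05)
```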
In a case where the display layout including not only the B-mode image but also the image superimposed with the elasticity information or Doppler information is set by the display layout setting unit 28, the image superimposed with the elasticity information or the image superimposed with the Doppler information can be displayed on the monitor 23 together with the B-mode image at the same time.
The position sensor 17 of the ultrasound probe 10A is connected to the analysis unit 25A of the apparatus main body 20A, and the analysis unit 25A is connected to the image memory 24 and the information addition unit 26.
A main body controller 29A is connected to the image generation unit 21, the display controller 22, the image memory 24, the analysis unit 25A, the information addition unit 26, the image extraction unit 27, and the display layout setting unit 28, and the input device 30 is connected to the main body controller 29A. The transmission and reception circuit 12 of the ultrasound probe 10A is connected to the main body controller 29A.
The image generation unit 21, the display controller 22, the analysis unit 25A, the information addition unit 26, the image extraction unit 27, the display layout setting unit 28, and the main body controller 29A constitute a processor 31A.
The position sensor 17 of the ultrasound probe 10A detects the position and orientation of the ultrasound probe 10A, and sends the position and orientation of the ultrasound probe 10A to the analysis unit 25A of the apparatus main body 20A. For example, as the position sensor 17, a magnetic sensor, an optical position sensor, an acceleration sensor, a gyro sensor, or a global positioning system (GPS) sensor can be used.
The analysis unit 25A acquires the probe positional information on the basis of the position and orientation of the ultrasound probe 10A detected by the position sensor 17. Therefore, for example, even in a case where the body mark 51 is not added to the ultrasound image, the probe positional information regarding the position and orientation of the ultrasound probe 10 can be acquired.
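As a rough sketch of how a sensor reading might be reduced to the coarse orientation used for extraction, a rotation angle reported by the position sensor can be thresholded into a label. The function name, thresholds, and labels below are illustrative assumptions and do not appear in the specification.

```python
def classify_probe_orientation(yaw_degrees):
    """Map a sensor-reported rotation of the probe about the body-surface
    normal to a coarse orientation label (illustrative thresholds)."""
    yaw = yaw_degrees % 180.0  # orientation is the same after a half turn
    if yaw < 45.0 or yaw > 135.0:
        return "horizontal"
    return "vertical"
```

In practice the position sensor also yields translation, so the same idea extends to recording where on the body surface each image was taken.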
The analysis unit 25A acquires the image type information indicating whether the ultrasound image is the B-mode image or another image, and the imaging feature information regarding the region where the lesion part is positioned and the imaging range, in the same manner as the analysis unit 25 in the first embodiment.
The probe positional information, the image type information, and the imaging feature information acquired by the analysis unit 25A are added to the ultrasound image by the information addition unit 26, and the image extraction unit 27 extracts the ultrasound image by referring to the probe positional information, the image type information, and the imaging feature information. Therefore, as in the first embodiment, the ultrasound image extracted by the image extraction unit 27 is displayed on the monitor 23 according to the display layout, and it is possible for the user to efficiently perform the interpretation by checking the ultrasound image displayed on the monitor 23.
The communication unit 33 is connected to the image memory 24, and is connected to a server 70 via a network 60.
The main body controller 29B is connected to the display controller 22, the image memory 24, the analysis unit 25, the information addition unit 26, the image extraction unit 27, the display layout setting unit 28, and the communication unit 33, and the input device 30 is connected to the main body controller 29B.
The display controller 22, the analysis unit 25, the information addition unit 26, the image extraction unit 27, the display layout setting unit 28, the main body controller 29B, and the communication unit 33 constitute a processor 31B.
For example, the server 70 is installed in a facility such as a hospital, and is connected to the communication unit 33 of the apparatus main body 20B via the network 60. The server 70 manages image data of ultrasound images and the like acquired by ultrasonography of the subject, and can be used as a server in a picture archiving and communication system (PACS) or as a workstation. The ultrasound image stored in the server 70 is acquired by an ultrasound device and, when transferred from the ultrasound device, is converted into image data in the Digital Imaging and Communications in Medicine (DICOM) format including the tag information.
The communication unit 33 is configured by a circuit including an antenna for transmitting and receiving radio waves, and a circuit or the like for performing local area network (LAN) connection, and performs communication with the server 70 via the network 60 under the control of the main body controller 29B. The communication unit 33 can receive the ultrasound image from the server 70 via the network 60.
The operation of the image display apparatus according to the third embodiment will be described with reference to the flowchart illustrated in
As in the first embodiment, first, in Step S1, the display layout is set by the display layout setting unit 28 on the basis of the user's input operation via the input device 30.
Next, in Step S9, on the basis of the user's input operation via the input device 30, the ultrasound image of the subject stored in the server 70 is transmitted to the communication unit 33 via the network 60, and is stored in the image memory 24.
Subsequent Step S6 to Step S8 are the same as those in the first embodiment. That is, in Step S6, the analysis unit 25 reads out the plurality of ultrasound images of the subject from the image memory 24, and acquires the probe positional information indicating the position of the ultrasound probe 10 at the time of imaging, the image type information indicating whether the ultrasound image is a B-mode image or another image, and the imaging feature information regarding the position of the imaged lesion part and the imaging range, for each ultrasound image. In this case, the analysis unit 25 can acquire the probe positional information by recognizing the probe mark of the body mark added to the ultrasound image. Since the ultrasound image transmitted to the image memory 24 from the server 70 via the network 60 has already been converted into the image data in the DICOM format including the tag information, the analysis unit 25 can acquire the image type information of the ultrasound image on the basis of the tag information included in the DICOM file.
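A tag-based classification of this kind can be sketched as follows. The dataset is modeled here as a plain dictionary keyed by (group, element) tag; with a real DICOM library such as pydicom, the same tags would be read from the file. Photometric Interpretation (0028,0004) and Ultrasound Color Data Present (0028,0014) are real DICOM tags, but the classification rule itself is an illustrative assumption, not the rule used by the analysis unit 25.

```python
def classify_image_type(dataset):
    """Coarse B-mode vs. color classification from DICOM tag values.
    `dataset` is modeled as a dict keyed by (group, element) tuples."""
    photometric = dataset.get((0x0028, 0x0004), "")   # Photometric Interpretation
    color_present = dataset.get((0x0028, 0x0014), 0)  # Ultrasound Color Data Present
    if color_present == 1:
        return "color"    # Doppler or elasticity overlay present
    if photometric.startswith("MONOCHROME"):
        return "B-mode"   # grayscale tomographic image
    return "other"

# Hypothetical tag values for a grayscale image and a color-overlaid image.
b_mode = {(0x0028, 0x0004): "MONOCHROME2", (0x0028, 0x0014): 0}
doppler = {(0x0028, 0x0004): "RGB", (0x0028, 0x0014): 1}
```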
The acquired probe positional information, image type information, and imaging feature information are added to the ultrasound image by the information addition unit 26, and in Step S7, the image extraction unit 27 extracts the ultrasound image by referring to the probe positional information, the image type information, and the imaging feature information. Thereafter, in Step S8, the plurality of ultrasound images extracted in Step S7 are displayed on the monitor 23 according to the display layout.
Therefore, as in the first and second embodiments, the ultrasound image extracted by the image extraction unit 27 is displayed on the monitor 23 according to the display layout, and it is possible for the user to efficiently perform the interpretation by checking the ultrasound image displayed on the monitor 23.
Therefore, for example, a plurality of ultrasound images that were acquired in a past examination and are stored in the server 70 can be read out and displayed according to the display layout.
As described above, since the analysis unit 25 acquires the image type information of the ultrasound image on the basis of the tag information included in the DICOM file, the image display apparatus according to the third embodiment can be used, for example, as a viewer on the PACS.
In the first to third embodiments described above, in a case where there are a plurality of lesion parts F, the display layout as illustrated in
The connection method of the ultrasound probes 10 and 10A with the apparatus main bodies 20 and 20A in the first and second embodiments described above is not particularly limited, and may be wired connection or wireless connection.
In the first and second embodiments described above, the ultrasound probes 10 and 10A have the transmission and reception circuit 12, but the apparatus main bodies 20 and 20A can include the transmission and reception circuit 12. Further, the apparatus main bodies 20 and 20A have the image generation unit 21, but the ultrasound probes 10 and 10A may have the image generation unit 21. Further, among the signal processing unit 41, the DSC 42, and the image processing unit 43 constituting the image generation unit 21 illustrated in
As the apparatus main bodies 20, 20A, and 20B in the first to third embodiments, a stationary type apparatus main body can be used, and a portable or handheld type compact apparatus main body can also be used.
Number | Date | Country | Kind
---|---|---|---
2022-100084 | Jun 2022 | JP | national