Embodiments described herein relate to an image diagnostic system, an image diagnostic method, and a storage medium.
A tomographic image including an ultrasonic tomographic image of a blood vessel is generated by an intravascular ultrasound (IVUS) method using a catheter, and an ultrasonic inspection of the blood vessel is performed based on the generated tomographic image. For the purpose of assisting a doctor in making a diagnosis, technologies for adding information to such a tomographic image by image processing or machine learning have been developed. With conventional techniques, individual features of a blood vessel, such as the luminal wall and a stent, can be extracted from the blood vessel image.
When a blood vessel is observed using a tomographic image, a main trunk and a side branch branching from the main trunk may appear. Since the long axis of the catheter placed in the main trunk is not parallel to the long axis of the side branch, the side branch shape is drawn in various forms depending on the observation position. In particular, it is necessary to virtually define the region of the branch portion where the main trunk and the side branch are connected, but doing so is challenging.
In one aspect, there are provided an image diagnostic system, an image diagnostic method, and a storage medium capable of automatically visualizing a branch portion of a blood vessel where a main trunk and a side branch are connected to each other.
In one embodiment, an image diagnostic system comprises: a catheter insertable into a blood vessel; a memory that stores a program; a display; and a processor configured to execute the program to: control the catheter to acquire a tomographic image of a blood vessel; determine a first region corresponding to a main trunk of the blood vessel and a second region corresponding to a side branch of the blood vessel in the acquired image; determine whether the first and second regions are connected to each other; upon determining that the first and second regions are connected to each other, determine a first contour line of the first region in the acquired image, and determine a second contour line of the second region in the acquired image based on the first contour line; and control the display to display the first and second contour lines on the acquired image using different display modes.
In one aspect, it is possible to automatically visualize a branch portion of a blood vessel where a main trunk and a side branch are connected to each other.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
The image diagnosis system 100 includes an intravascular inspection apparatus 101, an angiography apparatus 102, an image processing apparatus 3, a display apparatus 4, and an input apparatus 5. The intravascular inspection apparatus 101 includes an image diagnosis catheter 1 and a motor drive unit (MDU) 2. The image diagnosis catheter 1 is connected to the image processing apparatus 3 via the MDU 2. The display apparatus 4 and the input apparatus 5 are connected to the image processing apparatus 3. The display apparatus 4 is, for example, a liquid crystal display, an organic EL display, or the like, and the input apparatus 5 is, for example, a keyboard, a mouse, a touch panel, a microphone, or the like. The input apparatus 5 and the image processing apparatus 3 may be integrated into a single apparatus. Furthermore, the input apparatus 5 may be a sensor that receives a gesture input, a line-of-sight input, or the like.
The angiography apparatus 102 is connected to the image processing apparatus 3. The angiography apparatus 102 images a blood vessel from outside a living body of a patient using X-rays while injecting a contrast agent into the blood vessel of the patient to obtain an angiographic image that is a fluoroscopic image of the blood vessel. The angiography apparatus 102 includes an X-ray source and an X-ray sensor, and images an X-ray fluoroscopic image of the patient by the X-ray sensor receiving X-rays emitted from the X-ray source. Note that the image diagnosis catheter 1 has a marker that does not transmit X-rays, and the position of the image diagnosis catheter 1 (i.e., the marker) is visualized in the angiographic image. The angiography apparatus 102 outputs the angiographic image obtained by imaging to the image processing apparatus 3, and causes the display apparatus 4 to display the angiographic image via the image processing apparatus 3. Note that the display apparatus 4 displays the angiographic image and the tomographic image imaged using the image diagnosis catheter 1.
Note that, in the present embodiment, the image processing apparatus 3 is connected to the angiography apparatus 102 that images two-dimensional angiographic images. However, the present invention is not limited to the angiography apparatus 102 as long as it is an apparatus that images a luminal organ of a patient and the image diagnosis catheter 1 from a plurality of directions outside the living body.
The sensor unit 12 includes a housing 12d, and a distal end side of the housing 12d is formed in a hemispherical shape in order to suppress friction and catching with an inner surface of the catheter sheath 11a. In the housing 12d, an ultrasound transmitter and receiver 12a (hereinafter referred to as an IVUS sensor 12a) that transmits ultrasonic waves into a blood vessel and receives reflected waves from the blood vessel and an optical transmitter and receiver 12b (hereinafter referred to as an OCT sensor 12b) that transmits near-infrared light into the blood vessel and receives reflected light from the inside of the blood vessel are disposed. In the example illustrated in
An electric signal cable (not illustrated) connected to the IVUS sensor 12a and an optical fiber cable (not illustrated) connected to the OCT sensor 12b are inserted into the shaft 13. The probe 11 is inserted into the blood vessel from the distal end side. The sensor unit 12 and the shaft 13 can move forward or rearward inside the catheter sheath 11a and can rotate in a circumferential direction. The sensor unit 12 and the shaft 13 rotate about the central axis of the shaft 13 as a rotation axis. In the image diagnosis system 100, by using an imaging core including the sensor unit 12 and the shaft 13, a state inside the blood vessel is measured by an ultrasonic tomographic image (IVUS image) captured from the inside of the blood vessel or an optical coherence tomographic image (i.e., an OCT image) captured from the inside of the blood vessel.
The MDU 2 is a drive apparatus to which the probe 11 is detachably attached by the connector portion 15, and controls the operation of the image diagnosis catheter 1 inserted into the blood vessel by driving a built-in motor according to an operation of a medical worker. For example, the MDU 2 performs a pull-back operation of rotating the sensor unit 12 and the shaft 13 inserted into the probe 11 in the circumferential direction while pulling the sensor unit 12 and the shaft 13 toward the MDU 2 side at a constant speed. The sensor unit 12 continuously scans the inside of the blood vessel at predetermined time intervals while moving and rotating from the distal end side to the proximal end side by the pull-back operation and continuously captures a plurality of transverse tomographic images substantially perpendicular to the probe 11 at predetermined intervals. The MDU 2 outputs reflected wave data of an ultrasonic wave received by the IVUS sensor 12a and reflected light data received by the OCT sensor 12b to the image processing apparatus 3.
The image processing apparatus 3 acquires a signal data set which is the reflected wave data of the ultrasonic wave received by the IVUS sensor 12a and a signal data set which is reflected light data received by the OCT sensor 12b via the MDU 2. The image processing apparatus 3 generates ultrasonic line data from a signal data set of the ultrasonic waves, and constructs an ultrasonic tomographic image (i.e., an IVUS image) obtained by imaging a transverse section of the blood vessel based on the generated ultrasonic line data. In addition, the image processing apparatus 3 generates optical line data from the signal data set of the reflected light, and constructs an optical coherence tomographic image (i.e., an OCT image) obtained by imaging a transverse section of the blood vessel based on the generated optical line data. Here, the signal data set acquired by the IVUS sensor 12a and the OCT sensor 12b and the tomographic image constructed from the signal data set will be described.
Similarly, the OCT sensor 12b also transmits and receives the measurement light at each rotation angle. Since the OCT sensor 12b also transmits and receives the measurement light 512 times while rotating 360 degrees in the blood vessel, it is possible to obtain 512 pieces of optical line data radially extending from the rotation center during one rotation. Moreover, for the optical line data, the image processing apparatus 3 can generate a two-dimensional optical coherence tomographic image (i.e., an OCT image) similar to the IVUS image illustrated in
The two-dimensional tomographic image generated from the 512 pieces of line data in this manner is referred to as an IVUS image or an OCT image of one frame. Note that, since the sensor unit 12 scans while moving in the blood vessel, an IVUS image or an OCT image of one frame is acquired at each position at which the sensor unit 12 completes one rotation within the movement range. That is, since the IVUS image or the OCT image of one frame is acquired at each position from the distal end side to the proximal end side of the probe 11 in the movement range, as illustrated in
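As a non-limiting illustration of how one such frame can be assembled from the radially acquired line data, the following is a minimal sketch assuming the 512 lines of already envelope-detected amplitude values are available as a NumPy array; the function name, the nearest-neighbor polar-to-Cartesian mapping, and the output size are assumptions made for illustration rather than the actual processing of the image processing apparatus 3.

```python
import numpy as np

def lines_to_frame(line_data: np.ndarray, out_size: int = 512) -> np.ndarray:
    """Map radial line data (num_lines x samples_per_line) onto a Cartesian
    image whose center corresponds to the rotation center of the sensor.

    line_data[i, r] is assumed to hold the amplitude at rotation angle index
    i and depth sample r (already envelope-detected and log-compressed).
    """
    num_lines, samples = line_data.shape            # e.g. 512 lines per rotation
    center = (out_size - 1) / 2.0
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx, dy = xs - center, ys - center
    radius = np.sqrt(dx * dx + dy * dy)             # distance from rotation center
    angle = np.mod(np.arctan2(dy, dx), 2 * np.pi)   # 0 .. 2*pi

    # nearest-neighbor lookup into the polar line data
    line_idx = np.round(angle / (2 * np.pi) * num_lines).astype(int) % num_lines
    depth_idx = np.round(radius / center * (samples - 1)).astype(int)
    frame = np.where(depth_idx < samples,
                     line_data[line_idx, np.clip(depth_idx, 0, samples - 1)],
                     0.0)                           # outside the scanned depth: 0
    return frame
```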
The image diagnosis catheter 1 has a marker that does not transmit X-rays in order to confirm a positional relationship between the IVUS image obtained by the IVUS sensor 12a or the OCT image obtained by the OCT sensor 12b and the angiographic image obtained by the angiography apparatus 102. In the example illustrated in
The longitudinal cross-sectional view illustrated in
When acquiring a tomographic image of a blood vessel, the image processing apparatus 3 according to the present embodiment extracts or determines a main trunk region from the acquired tomographic image. In addition, when the acquired tomographic image includes a side branch region in addition to the main trunk region, the image processing apparatus 3 extracts or determines both the main trunk region and the side branch region. In a case where the main trunk region and the side branch region are connected, the image processing apparatus 3 defines boundary lines thereof and presents the boundary lines to the user as individual regions.
The control unit 31 includes one or more processors such as central processing units (CPU), micro processing units (MPU), graphics processing units (GPU), general purpose computing on graphics processing units (GPGPU), and tensor processing units (TPU). The control unit 31 is connected to each hardware unit constituting the image processing apparatus 3 via a bus.
The main storage unit 32, which is a temporary memory area such as a static random access memory (SRAM), a dynamic random access memory (DRAM), or a flash memory, temporarily stores data necessary for the control unit 31 to execute arithmetic processing.
The input/output unit 33 includes an interface circuit that connects external apparatuses such as the intravascular inspection apparatus 101, the angiography apparatus 102, the display apparatus 4, and the input apparatus 5. The control unit 31 acquires an IVUS image and an OCT image from the intravascular inspection apparatus 101 via the input/output unit 33, and acquires an angiographic image from the angiography apparatus 102. In addition, the control unit 31 outputs a medical image signal of an IVUS image, an OCT image, or an angiographic image to the display apparatus 4 via the input/output unit 33, thereby displaying the medical image on the display apparatus 4. Furthermore, the control unit 31 receives information input to the input apparatus 5 via the input/output unit 33.
The communication unit 34 includes, for example, a communication interface circuit conforming to a communication standard such as 4G, 5G, or WiFi. The image processing apparatus 3 communicates with an external server such as a cloud server connected to an external network such as the Internet via the communication unit 34. The control unit 31 may access an external server via the communication unit 34 and refer to various data stored in a storage of the external server. Furthermore, the control unit 31 may cooperatively perform the processing in the present embodiment by performing, for example, inter-process communication with the external server.
The auxiliary storage unit 35 is a storage device such as a hard disk drive (HDD) or a solid state drive (SSD). The auxiliary storage unit 35 stores a computer program executed by the control unit 31 and various data necessary for processing of the control unit 31. Note that the auxiliary storage unit 35 may be an external storage device connected to the image processing apparatus 3. The computer program executed by the control unit 31 may be written in the auxiliary storage unit 35 at the manufacturing stage of the image processing apparatus 3, or the computer program distributed by a remote server apparatus may be acquired by the image processing apparatus 3 through communication and stored in the auxiliary storage unit 35. The computer program may be readably recorded in a recording medium RM such as a magnetic disk, an optical disk, or a semiconductor memory, or may be read from the recording medium RM by the reading unit 36 and stored in the auxiliary storage unit 35. An example of the computer program stored in the auxiliary storage unit 35 is a contour determination program PG for causing a computer to execute processing of determining contour lines of the main trunk region and the side branch region from a tomographic image of a blood vessel and displaying the determined contour lines.
In addition, the auxiliary storage unit 35 stores a learning model MD used for processing of recognizing the main trunk region and the side branch region from the tomographic image of the blood vessel.
In the present embodiment, the input image to the learning model MD is a tomographic image (i.e., an IVUS image or an OCT image) of a blood vessel obtained by the intravascular inspection apparatus 101. The learning model MD is learned so as to output information indicating a recognition result of the main trunk region and the side branch region included in the tomographic image in response to the input of the tomographic image. The learning model MD is described by its definition information. The definition information includes information on layers constituting the learning model MD, information on nodes constituting each layer, and internal parameters such as a weight coefficient and bias between nodes. The internal parameters are learned by a predetermined learning algorithm. The auxiliary storage unit 35 stores definition information of the learning model MD including trained internal parameters.
The learning model MD in the present embodiment includes, for example, an encoder EN, a decoder DE, and a softmax layer SM. The encoder EN is configured by alternately arranging convolution layers and pooling layers. The convolution layers may be stacked two or three deep.
In the convolution layer, a convolution operation is performed between the input data and a filter of a predetermined size (for example, 3×3 or 5×5). That is, the input value at the position corresponding to each element of the filter is multiplied by the weight coefficient set in advance for that element, and the linear sum of the products over all elements is calculated. The output of the convolution layer is obtained by adding a preset bias to the calculated linear sum. Note that the result of the convolution operation may further be converted by an activation function; for example, a rectified linear unit (ReLU) can be used as the activation function. The output of the convolution layer represents a feature map in which the features of the input data are extracted.
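The operation described above can be sketched, for a single-channel input and a single filter, roughly as follows; the stride of 1, the absence of padding, and the unconditional ReLU are simplifying assumptions for illustration.

```python
import numpy as np

def conv2d_single(x: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """Convolution as described above: for each output position, multiply the
    input values under the filter by the filter weights element by element,
    take the linear sum, add the bias, then apply ReLU."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = x[i:i + kh, j:j + kw]
            out[i, j] = np.sum(window * w) + b   # linear sum of products + bias
    return np.maximum(out, 0.0)                  # ReLU activation
```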
In the pooling layer, local statistical values of the feature map output from the convolution layer which is an upper layer connected to the input side are calculated. Specifically, a window having a predetermined size (for example, 2×2, 3×3) corresponding to the position of the upper layer is set, and the local statistical values are calculated from the input value in the window. As the statistical values, for example, a maximum value can be adopted. The size of the feature map output from the pooling layer is reduced (i.e., downsampled) according to the size of the window. The example of
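A corresponding sketch of max pooling with a non-overlapping window is given below; the window size and the handling of edge rows and columns are illustrative assumptions.

```python
import numpy as np

def max_pool2d(x: np.ndarray, window: int = 2) -> np.ndarray:
    """Max pooling as described above: take the maximum value inside each
    non-overlapping window, which downsamples the feature map."""
    h, w = x.shape
    h, w = h - h % window, w - w % window        # drop edge rows/cols if needed
    pooled = x[:h, :w].reshape(h // window, window, w // window, window)
    return pooled.max(axis=(1, 3))
```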
The output of the encoder EN is input to the decoder DE. The decoder DE is configured by alternately arranging deconvolution layers and unpooling layers. The deconvolution layers may be stacked two or three deep.
In the deconvolution layer, a deconvolution operation is performed on the input feature map. The deconvolution operation restores the feature map as it was before a convolution operation, on the assumption that the input feature map is the result of a convolution operation with a specific filter. In this computation, when the specific filter is expressed as a matrix, the product of the transpose of that matrix and the input feature map is calculated to generate the output feature map. Note that the computation result of the deconvolution layer may also be converted by an activation function such as the ReLU described above.
The unpooling layers included in the decoder DE are individually associated with the pooling layers included in the encoder EN on a one-to-one basis, and the associated pairs have substantially the same size. The unpooling layer again enlarges (i.e., upsamples) the size of the downsampled feature map in the pooling layer of the encoder EN. The example of
The output of the decoder DE is input to the softmax layer SM. The softmax layer SM applies the softmax function to the input value from the deconvolution layer connected to the input side, thereby outputting the probability of the label for identifying the site at each position or pixel. In the present embodiment, labels for identifying the main trunk and the side branches are set. The control unit 31 of the image processing apparatus 3 refers to the probability of the label output from the softmax layer SM, and recognizes the region belonging to the main trunk (i.e., the main trunk region) and the region belonging to the side branch (i.e., the side branch region) from the tomographic image. The control unit 31 may recognize a region that does not belong to the main trunk region or the side branch region as the background region.
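Putting the pieces together, one possible way to realize such an encoder-decoder-softmax structure is sketched below using PyTorch; PyTorch itself, the number of stages, the channel widths, the single-channel input, and the three-label assignment (background, main trunk, side branch) are assumptions made for illustration and are not specified by the embodiment.

```python
import torch
import torch.nn as nn

class TrunkBranchSegNet(nn.Module):
    """Encoder-decoder segmentation network in the spirit of the learning
    model MD: convolution + pooling on the encoder side, deconvolution +
    unpooling on the decoder side, and a per-pixel softmax over the labels
    background / main trunk / side branch."""

    def __init__(self, in_ch: int = 1, num_labels: int = 3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.pool1 = nn.MaxPool2d(2, return_indices=True)
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.pool2 = nn.MaxPool2d(2, return_indices=True)

        self.unpool2 = nn.MaxUnpool2d(2)
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 3, padding=1), nn.ReLU(),
                                  nn.ConvTranspose2d(32, 32, 3, padding=1), nn.ReLU())
        self.unpool1 = nn.MaxUnpool2d(2)
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(32, 32, 3, padding=1), nn.ReLU(),
                                  nn.ConvTranspose2d(32, num_labels, 3, padding=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)
        p1, idx1 = self.pool1(e1)
        e2 = self.enc2(p1)
        p2, idx2 = self.pool2(e2)
        d2 = self.dec2(self.unpool2(p2, idx2))   # unpooling paired with pool2
        d1 = self.dec1(self.unpool1(d2, idx1))   # unpooling paired with pool1
        return torch.softmax(d1, dim=1)          # per-pixel label probabilities (softmax layer SM)
```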
Hereinafter, the operation of the image processing apparatus 3 will be described.
The control unit 31 recognizes a region based on a computation result by the learning model MD (S104). Specifically, the control unit 31 determines whether each pixel belongs to a main trunk or a side branch based on the probability of the label for each pixel output from the softmax layer SM of the learning model MD. The control unit 31 recognizes a set of pixels determined to belong to a main trunk as a main trunk region, and recognizes a set of pixels determined to belong to a side branch as a side branch region. The control unit 31 recognizes a set of pixels that do not belong to either a main trunk or a side branch as a background region.
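For illustration, the per-pixel recognition of S104 could be written as follows, assuming the softmax output is available as a `(num_labels, H, W)` probability array and that label index 1 denotes the main trunk and index 2 the side branch; these label indices are assumptions, not values defined by the embodiment.

```python
import numpy as np

def recognize_regions(prob: np.ndarray):
    """prob: (num_labels, H, W) per-pixel probabilities from the softmax layer.
    Assign each pixel to its most probable label and return the main trunk,
    side branch, and background masks."""
    label_map = np.argmax(prob, axis=0)           # most probable label per pixel
    trunk_mask = (label_map == 1)                 # pixels determined to belong to the main trunk
    branch_mask = (label_map == 2)                # pixels determined to belong to a side branch
    background_mask = ~(trunk_mask | branch_mask)
    return trunk_mask, branch_mask, background_mask
```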
The control unit 31 determines whether a side branch region is included in the regions recognized in S104 (S105). When it is determined that a side branch region is not included (S105: NO), the control unit 31 independently determines the contour line of the main trunk region included in the tomographic image (S106). For example, the control unit 31 determines the contour line of the main trunk region by selecting a plurality of points on the peripheral edge of the main trunk region and obtaining an approximate curve, such as a spline curve, passing through the selected points (or the vicinity of the points).
When it is determined that a main trunk region and a side branch region are included (S105: YES), the control unit 31 determines whether the main trunk region and the side branch region are in contact with each other (S107). When it is determined that the main trunk region and the side branch region are in contact with each other (S107: YES), the control unit 31 determines the contour line of the main trunk region included in the tomographic image, and determines the contour line of the side branch region with reference to the determined contour line of the main trunk region (S108). A procedure for determining the contour line of the side branch region with reference to the contour line of the main trunk region will be described in detail with reference to a flowchart illustrated in
When it is determined that a main trunk region and a side branch region are not in contact with each other (S107: NO), the control unit 31 independently determines contour lines of a main trunk region and a side branch region included in the tomographic image (S109). For example, the control unit 31 determines the contour line of the main trunk region by selecting a plurality of points on the peripheral edge of the main trunk region and obtaining an approximate curve such as a spline curve passing through the selected plurality of points (or the vicinity of the points). In addition, the control unit 31 determines the contour line of the side branch region by selecting a plurality of points on the peripheral edge of the side branch region and obtaining an approximate curve such as a spline curve passing through the selected plurality of points (or the vicinity of the points).
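As one way of realizing the "approximate curve such as a spline curve" used in S106 and S109, the following sketch fits a closed approximating spline with SciPy; the smoothing factor, the number of output samples, and the requirement that the peripheral points be ordered along the edge (for example, by angle around the region centroid) are assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_closed_spline(points: np.ndarray, smoothing: float = 5.0, n_out: int = 200):
    """Fit a closed approximating spline through (or near) the selected
    peripheral points and return densely sampled contour coordinates.

    points: (N, 2) array of (x, y) contour points ordered along the edge."""
    x, y = points[:, 0], points[:, 1]
    tck, _ = splprep([x, y], s=smoothing, per=True)   # periodic -> closed curve
    u = np.linspace(0.0, 1.0, n_out)
    cx, cy = splev(u, tck)
    return np.stack([cx, cy], axis=1)
```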
The control unit 31 displays the contour line determined in S106, S108, or S109 (S110).
The control unit 31 selects a plurality of points on the peripheral edge of the main trunk region (S122). In the present embodiment, a set of pixels determined to belong to the main trunk is recognized as the main trunk region. Among the pixels belonging to the main trunk, in a case where all eight pixels adjacent to the pixel of interest (the four pixels above, below, to the left, and to the right, plus the four diagonal pixels) belong to the main trunk, the pixel of interest is recognized as a pixel inside the main trunk region, and in a case where one or more of the eight pixels adjacent to the pixel of interest belong to a side branch or the background, the pixel of interest is recognized as a pixel on the peripheral edge. The control unit 31 selects a predetermined number of pixels from among the pixels recognized as pixels on the peripheral edge of the main trunk region. The number of pixels to be selected may be set in advance or may be set according to the size of the main trunk region. The pixel selected in S122 is referred to as a contour point of the main trunk region.
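The eight-neighborhood test described above can be sketched as follows for a Boolean region mask; treating pixels at the image border as peripheral (by padding with background) is an assumption made here, and it matches the border handling described later for the second embodiment.

```python
import numpy as np

def peripheral_pixels(mask: np.ndarray) -> np.ndarray:
    """Return a Boolean map that is True for pixels on the peripheral edge of
    the region given by `mask`: a region pixel is interior only if all eight
    neighbors (up, down, left, right, and the four diagonals) also belong to
    the region; otherwise it is a peripheral-edge pixel."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + mask.shape[0],
                             1 + dx:1 + dx + mask.shape[1]]
            interior &= shifted                   # every neighbor must also be in the region
    return mask & ~interior
```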
The control unit 31 determines the contour line of the main trunk region based on the plurality of contour points selected in S122 (S123). The control unit 31 can determine the contour line of the main trunk region by obtaining an approximate curve such as a spline curve passing through the contour points (or the vicinity of the contour points) of the main trunk region.
The control unit 31 corrects the boundary portion between the side branch region and the main trunk region with reference to the contour line of the main trunk region determined in S123 (S124).
The control unit 31 selects a plurality of points on the peripheral edge from the side branch region whose boundary portion has been corrected (S125). In a case where all the eight pixels adjacent to the pixel of interest among the pixels belonging to the side branch belong to the side branch, the pixel of interest is recognized as a pixel inside the region, and in a case where one or more of the eight pixels adjacent to the pixel of interest belong to the main trunk or the background, the pixel of interest is recognized as a pixel on the peripheral edge. The control unit 31 selects a predetermined number of pixels from among the pixels recognized as pixels on the peripheral edge among the pixels belonging to the side branch region. The number of pixels to be selected may be set in advance or may be set according to the size of the side branch region. The pixel selected in S125 is referred to as a contour point of the side branch region.
The control unit 31 removes contour points in contact with the main trunk region from among the contour points of the side branch region selected in S125 (S126), and determines the contour line of the side branch region based on the remaining contour points (S127). That is, the control unit 31 determines the contour line of the side branch region in the non-boundary portion by obtaining an approximate curve such as a spline curve passing through the contour points (or the vicinity of the contour points) of the non-boundary portion among the contour points of the side branch region.
The control unit 31 determines the contour line of the side branch region at the boundary portion from the contour points included in the boundary portion (S128). That is, the control unit 31 determines the contour line of the side branch region at the boundary portion by obtaining an approximate curve such as a spline curve passing through the contour points (or the vicinity of the contour points) of the boundary portion among the contour points of the side branch region. The control unit 31 determines the contour line of the entire side branch region based on the contour line of the non-boundary portion determined in S127 and the contour line of the boundary portion determined in S128 (S129).
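A rough sketch of the flow of S125 to S129 is given below; the adjacency test against the main trunk mask, the use of two open (non-periodic) splines for the boundary and non-boundary portions, the assumption that each portion contains enough ordered points, and the simple concatenation of the two partial contours are all simplifying assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def touches_trunk(point, trunk_mask) -> bool:
    """True if any of the eight neighbors of the contour point (y, x) lies in
    the main trunk region, i.e., the point belongs to the boundary portion."""
    y, x = point
    h, w = trunk_mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w:
                if trunk_mask[y + dy, x + dx]:
                    return True
    return False

def fit_open_spline(points, n_out: int = 100, smoothing: float = 5.0):
    """Approximate curve through ordered (y, x) points of one partial contour."""
    pts = np.asarray(points, dtype=float)
    tck, _ = splprep([pts[:, 1], pts[:, 0]], s=smoothing,
                     k=min(3, len(pts) - 1), per=False)
    cx, cy = splev(np.linspace(0.0, 1.0, n_out), tck)
    return np.stack([cx, cy], axis=1)

def side_branch_contour(contour_points, trunk_mask):
    """Split the side-branch contour points into the boundary portion (in
    contact with the main trunk) and the non-boundary portion, determine a
    contour line for each, and join them into the contour of the whole region."""
    boundary = [p for p in contour_points if touches_trunk(p, trunk_mask)]
    free = [p for p in contour_points if not touches_trunk(p, trunk_mask)]
    return np.vstack([fit_open_spline(free), fit_open_spline(boundary)])
```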
As described above, in the first embodiment, since the contour line of the side branch region is determined with reference to the contour line of the main trunk region, the region (i.e., the boundary line) at the branch portion where the main trunk and the side branch are connected can be defined and presented to the user.
In the present embodiment, the contour lines of the main trunk region and the side branch region are determined and displayed in different display modes, but the control unit 31 may calculate the areas and diameters of the main trunk region and the side branch region and display the calculated values of the areas and diameters. The control unit 31 can count the number of pixels in each region and calculate the area of each region based on the count value. In addition, since the contour line of each region is represented by a mathematical formula such as a spline curve, the control unit 31 can obtain, for example, a linear distance from coordinates indicating the rotation center of the IVUS sensor 12a to the contour line of the main trunk region or the side branch region by computation, and can determine the obtained linear distance as the diameter of each region. Note that the diameter calculated by the control unit 31 may be any of the minimum diameter, the average diameter, and the maximum diameter.
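These calculations could be sketched as follows; the pixel-to-area conversion factor is an assumed calibration value, and the contour line is assumed to be available as densely sampled (x, y) coordinates together with the rotation-center coordinates of the IVUS sensor 12a.

```python
import numpy as np

PIXEL_AREA_MM2 = 0.01   # assumed area represented by one pixel (illustrative value)

def region_area(mask: np.ndarray) -> float:
    """Area of a region obtained by counting its pixels, as described above."""
    return int(mask.sum()) * PIXEL_AREA_MM2

def region_diameter(contour_xy: np.ndarray, center_xy, mode: str = "max") -> float:
    """Linear distance from the rotation-center coordinates to the contour
    line, used as the diameter of the region as described above; `mode`
    selects the minimum, average, or maximum value."""
    r = np.linalg.norm(contour_xy - np.asarray(center_xy, dtype=float), axis=1)
    return {"min": r.min(), "average": r.mean(), "max": r.max()}[mode]
```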
The control unit 31 can determine the contour lines of the main trunk region and the side branch region from the tomographic image of each frame. The control unit 31 may generate a three-dimensional image of the main trunk and side branches based on the contour line determined from each frame. The control unit 31 can display the main trunk and the side branches as wire frames (i.e., three-dimensional images) by displaying the contour lines determined from the respective frames side by side in the long axis direction of the blood vessel. In addition, the control unit 31 may calculate the centroid of the contour line determined from each frame and render the contour line along the trajectory of the calculated centroid to display the main trunk and side branches as a surface model of a three-dimensional image.
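A minimal sketch of the wire-frame style display, assuming one contour line per frame and an assumed frame pitch along the long axis, is given below; matplotlib and the pitch value are illustrative choices, and the surface-model rendering along the centroid trajectory is not shown.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_wireframe(contours_per_frame, frame_pitch_mm: float = 0.1) -> None:
    """Display the contour line of each frame side by side along the long axis
    of the blood vessel (z axis), producing a simple wire-frame view.
    contours_per_frame: list of (N, 2) arrays of contour coordinates."""
    ax = plt.figure().add_subplot(projection="3d")
    for i, c in enumerate(contours_per_frame):
        z = np.full(len(c), i * frame_pitch_mm)   # stack frames along the vessel axis
        ax.plot(c[:, 0], c[:, 1], z)
    plt.show()
```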
Furthermore, since the control unit 31 detects the main trunk region and the side branch region from the tomographic image of each frame, when displaying the frame in which the side branch region is detected on the display apparatus 4, character information of “with side branch”, an icon indicating the presence of the side branch, and the like may be displayed together.
Furthermore, when a side branch is detected in a plurality of consecutive frames but there is an intermediate frame in which no side branch is detected, the control unit 31 may display, on the display apparatus 4, character information or an icon indicating that a detection omission has occurred.
In the second embodiment, a method for determining a contour line in a case where a side branch comes into contact with a visual field boundary will be described.
Since the overall configuration of the image diagnosis system 100, the internal configuration of the image processing apparatus 3, and the like in the second embodiment are similar to those of the first embodiment, the description thereof will be omitted.
The control unit 31 determines whether the side branch region is in contact with the visual field boundary (S201). Here, the visual field boundary is the boundary of the visual field at the time of display, and the display depth (i.e., the maximum diameter displayed from the catheter center) can be changed according to a setting by the user. Note that the data of the tomographic image may include data at depths deeper than the display depth. Furthermore, the depth of the data used for three-dimensional display does not necessarily match the depth used for two-dimensional display; from the viewpoint of grasping the entire image of the blood vessel, it is preferable to use data of a deeper depth. When it is determined in S201 that the side branch region is not in contact with the visual field boundary (S201: NO), the control unit 31 ends the processing according to this flowchart.
When determining that the side branch region is in contact with the visual field boundary (S201: YES), the control unit 31 determines the contour line of the side branch region with reference to the boundary line of the visual field region (S202). A procedure for determining the contour line of the side branch region with reference to the boundary line of the visual field region will be described in detail with reference to a flowchart illustrated in
The control unit 31 selects a plurality of points on the peripheral edge of the side branch region (S222). In the present embodiment, a set of pixels determined to belong to a side branch is recognized as a side branch region. In a case where all the eight pixels adjacent to the pixel of interest among the pixels belonging to the side branch belong to the side branch, the pixel of interest is recognized as a pixel inside the side branch region, and in a case where one or more of the eight pixels adjacent to the pixel of interest belong to the main trunk or the background, or in a case where the number of pixels adjacent to the pixel of interest is less than eight (in a case where there is no peripheral pixel), the pixel of interest is recognized as a pixel on the peripheral edge. The control unit 31 selects a predetermined number of pixels from among the pixels recognized as pixels on the peripheral edge among the pixels belonging to the side branch region. The number of pixels to be selected may be set in advance or may be set according to the size of the side branch region. The pixel selected in S222 is referred to as a contour point of the side branch region.
The control unit 31 removes contour points in contact with the visual field boundary from among the plurality of points selected in S222 (S223), and determines the contour line of the side branch region based on the remaining contour points (S224). That is, the control unit 31 determines the contour line of the side branch region in the non-boundary portion by obtaining an approximate curve such as a spline curve passing through the contour points (or the vicinity of the contour points) of the non-boundary portion among the contour points of the side branch region.
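Assuming the visual field boundary can be modeled as a circle of the display radius around the catheter center, the separation of contour points performed in S223 could be sketched as follows; the circular boundary model, the pixel-unit radius, and the tolerance are illustrative assumptions.

```python
import numpy as np

def split_at_view_boundary(contour_pts, center_xy, view_radius_px: float, tol: float = 1.5):
    """Split side-branch contour points into those in contact with the
    circular visual-field boundary (distance from the catheter center close
    to the display radius) and the remaining non-boundary points."""
    pts = np.asarray(contour_pts, dtype=float)
    r = np.linalg.norm(pts - np.asarray(center_xy, dtype=float), axis=1)
    on_boundary = r >= view_radius_px - tol
    return pts[~on_boundary], pts[on_boundary]    # (non-boundary points, boundary points)
```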
The control unit 31 determines the contour line of the side branch region at the boundary portion from the contour points included in the boundary portion (S225). That is, the control unit 31 determines the contour line of the side branch region at the boundary portion by obtaining an approximate curve such as a spline curve passing through the contour points (or the vicinity of the contour points) of the boundary portion among the contour points of the side branch region. The control unit 31 determines the contour line of the entire side branch region based on the contour line of the non-boundary portion determined in S224 and the contour line of the boundary portion determined in S225 (S226).
In the flowchart of
Furthermore, when determining the contour line of the non-boundary portion in S224, the control unit 31 may extrapolate the contour line to a region outside the visual field region, estimate the contour line extending outside the visual field region, and display the estimated contour line outside the visual field region as a virtual line.
As described above, in the second embodiment, since the contour line of the side branch region is determined with reference to the visual field boundary, the region (i.e., the boundary line) at the boundary portion where the side branch and the visual field boundary are in contact can be defined and presented to the user.
In the third embodiment, an annotation tool used at the time of training the learning model MD will be described.
Since the overall configuration of the image diagnosis system 100, the internal configuration of the image processing apparatus 3, and the like in the third embodiment are similar to those of the first embodiment, the description thereof will be omitted.
In the present embodiment, the annotation for a large number of tomographic images is performed in the training phase before the recognition processing by the learning model MD is started. Specifically, in the image processing apparatus 3, the annotation tool AT (see
When the annotation tool AT is activated, the image processing apparatus 3 displays a work screen 300 as illustrated in
The file selection tool 301 is a tool for receiving selection operations of various files, and includes software buttons for reading a tomographic image, storing annotation data, reading annotation data, and outputting an analysis result. When the tomographic image is read by the file selection tool 301, the read tomographic image is displayed in the image display field 302. The tomographic image generally includes a plurality of frames. The frame designation tool 303 includes an input box and a slider for designating a frame, and is configured to be able to designate a frame of a tomographic image to be displayed in the image display field 302. The example of
The region designation tool 304 is a tool for receiving designation of a region for the tomographic image displayed in the image display field 302, and includes software buttons corresponding to the respective labels. In the example of
When designating the main trunk region using the region designation tool 304, the user selects the software button labeled “main trunk” and plots a plurality of points so as to surround the main trunk region on the image display field 302. The same applies to the case of designating other regions. The control unit 31 of the image processing apparatus 3 derives a closed curve passing through a plurality of points (or the vicinity of the points) plotted by the user, and draws the derived closed curve in the image display field 302. The interior of the closed curve is drawn in a preset color (or a color set by the user).
The example of
The segment display field 305 displays information of a region drawn in the image display field 302. The example of
The editing tool 306 is a tool for accepting editing of a region drawn in the image display field 302, and includes a selection button, an edit button, an erase button, an end button, and a color setting field. By using the editing tool 306, with respect to the region already drawn in the image display field 302, it is possible to move, add, and erase points that define the region, and change the color of the region.
When saving of the annotation is selected by the file selection tool 301 after the designation or editing of the region is completed, the control unit 31 stores a data set (i.e., annotation data) including the data of the tomographic image and the data of the label attached to each region in the auxiliary storage unit 35.
In the present embodiment, the annotation is performed by manual work of the user. However, if learning of the learning model MD progresses, the annotation can be performed using the recognition result of the learning model MD. In addition, identification information such as presence/absence of frame-out and presence/absence of side branch may be stored in the auxiliary storage unit 35 for learning in units of frames.
Specifically, the control unit 31 displays the acquired tomographic image in the image display field 302, performs region recognition using the learning model MD in the background, calculates a plurality of points passing through the contour of the recognized region, and plots the points in the image display field 302. When the main trunk region and the side branch region are in contact with each other, the control unit 31 may determine the contour line of the side branch region with reference to the contour line of the main trunk region and plot points on each contour line by a procedure similar to that in the first embodiment. In addition, when the side branch region and the visual field boundary are in contact with each other, the control unit 31 may determine a contour line of the side branch region with reference to the visual field boundary and plot points on each contour line by a procedure similar to that of the second embodiment.
Since the image processing apparatus 3 grasps the type of the recognized region, it is possible to automatically assign a label to the region. The image processing apparatus 3 accepts editing of the points plotted in the image display field 302 as necessary, and stores data of a region surrounded by the finally determined points and a label of the region in the auxiliary storage unit 35 as annotation data.
As described above, in the third embodiment, the training data can be easily generated using the annotation tool.
It should be understood that the embodiments disclosed herein are illustrative in all respects and are not restrictive. The scope of the present invention is defined not by the meanings described above but by the claims, and is intended to include meanings equivalent to the claims and all modifications within the scope.
Number | Date | Country | Kind |
---|---|---|---|
2022-156624 | Sep 2022 | JP | national |
This application is a continuation of International Patent Application No. PCT/JP2023/035281 filed Sep. 27, 2023, which is based upon and claims the benefit of priority from Japanese Patent Application No. 2022-156624, filed Sep. 29, 2022, the entire contents of which are incorporated herein by reference.
 | Number | Date | Country
---|---|---|---
Parent | PCT/JP2023/035281 | Sep 2023 | WO
Child | 19094743 | | US