The present invention relates to a diagnostic support system and a diagnostic support method for displaying the basis of a determination result in computer-aided image diagnostic support.
A medical imaging system using CT, MRI, ultrasound, or the like does not require surgery in which a living body is directly cut open and observed. Medical imaging systems have therefore been widely used in the medical field as a technique for imaging internal information of a subject.
A doctor who uses an acquired image to determine the presence or absence of a tumor or the like needs substantial experience to interpret the image accurately. Meanwhile, advances in imaging techniques have increased the number of images per subject. As a result, a user has to efficiently pick out, from a large number of images, the images that call for a determination, which increases the burden of image interpretation. For example, in breast cancer screening, the probability that a subject in the screened group is a cancer patient is approximately 0.5%. An extremely small number of images suggesting breast cancer therefore have to be found carefully among a large number of images, which significantly increases the burden of image interpretation.
A diagnostic support device has been developed as a device for supporting the interpretation of the images. The diagnostic support device acquires an examination image from the medical imaging system, detects an abnormal region, such as a tumor, in the image through image processing or machine learning, and presents the detected abnormal region to a doctor or the like to support the image diagnosis.
For example, a case image retrieval device including a finding information output unit (see Patent Document 1), a similar image retrieval device including a feature value calculation unit, a probability calculation unit, and a degree of similarity calculation unit (see Patent Document 2), and the like have been proposed. The finding information output unit associates finding information with a similar image retrieved by a retrieval unit and outputs the finding information to a specified output device. The finding information corresponds to a feature value that contributes to retrieval of the similar image. The feature value calculation unit calculates the feature value that corresponds to a pre-registered lesion pattern. The probability calculation unit calculates a probability of existence of the lesion pattern in the retrieved image on the basis of the feature value. The degree of similarity calculation unit calculates a degree of similarity.
However, while an image diagnostic support display for the doctor using machine learning presents the calculated probability of cancer as well as a similar image, the basis of the displayed content is not presented.
In the case of supporting a doctor's diagnosis, even when the probability of cancer is calculated and displayed, it is difficult for the doctor to make a diagnosis without the basis for that probability. In addition, when the acquired probability is presented without the basis for the numerical value, the diagnostic result relies on the determination by the doctor, and the numerical value itself becomes meaningless.
Furthermore, in the case where a similar case is displayed without the basis for its detection, the doctor cannot judge the accuracy of the similar case itself. As a result, the similar case contributes little to the diagnosis.
Patent Document 1: JP 2011-118543 A
Patent Document 2: JP 2016-45662 A
The present invention has been made in view of the above points and provides a diagnostic support system and a diagnostic support method that enhance the accuracy of a diagnosis by displaying the information used as the basis for a determination by a diagnostic support device.
To achieve the above object, a diagnostic support system 1S according to the first aspect of the present invention includes, as shown in
With this configuration, the display unit can display the similarity between the examination image and each of the plural comparison images. Thus, compared to a case where only a probability of cancer or a similar case is displayed, additional beneficial information that can support the doctor in making a diagnosis can be provided.
As for the diagnostic support system 1S according to the second aspect of the present invention, as shown in
With this configuration, the similarity can be expressed by a numerical value. Thus, the similarity can reliably be identified.
As for the diagnostic support system 1S according to the third aspect of the present invention, as shown in
With this configuration, as a method for displaying the similarity between the examination image and each of the plural comparison images, it is possible to display the virtual space image in which these images are plotted on the space. Thus, the similarity therebetween can be understood at first glance.
As for the diagnostic support system 1S according to the fourth aspect of the present invention, as shown in
With this configuration, as the method for displaying the similarity between the examination image and each of the plural comparison images, it is possible to display the virtual space image in which these images are plotted on the space. Thus, the similarity therebetween can be understood at first glance.
As for the diagnostic support system 1S according to the fifth aspect of the present invention, as shown in
With this configuration, the similarity can be expressed by the numerical value. Thus, the similarity can easily be understood.
As for the diagnostic support system 1S according to the sixth aspect of the present invention, as shown in
With this configuration, the comparison image with the high degree of similarity can be selected and then displayed. As a result, the doctor can make the diagnosis even more easily.
As for the diagnostic support system 1S according to the seventh aspect of the present invention, as shown in
With this configuration, the label information of each of the comparison images can be displayed by adding the specified visual effect. Thus, the doctor can understand a diagnosis result of the comparison images and the like at first glance when making the diagnosis.
As for the diagnostic support system 1S according to the eighth aspect of the present invention, as shown in
With this configuration, it is possible to display whether specified biological information, for example, a malignant tumor or the like, is included in the examination image. Thus, the doctor can easily make the diagnostic determination.
As for the diagnostic support system 1S according to the ninth aspect of the present invention, as shown in
With this configuration, even in the case where the image data of the examination image is in such a data format that it is difficult for the calculation unit to calculate the feature value information as is, the data format can be adjusted. As a result, it is possible to provide the diagnostic support system that can provide the diagnostic support regardless of a type of the examination image.
A diagnostic support method using a computer according to the tenth aspect of the present invention includes, as shown in
With this configuration, the similarity between the examination image and each of the plural comparison images is displayed. Thus, compared to the case where only the probability of cancer or the similar case is displayed, beneficial information for supporting the doctor in making the diagnosis can additionally be provided.
The diagnostic support system according to the present invention can enhance accuracy of the diagnosis by displaying the information used as the basis for the determination by the diagnostic support device and can reduce a burden of image interpretation.
This application is based on Patent Application No. 2018-144549 filed on Jul. 31, 2018 in Japan, the contents of which are hereby incorporated by reference in their entirety into the present application as part thereof.
The present invention will become more fully understood from the detailed description given hereinbelow. The further scope of applicability of the present invention will also become clearer from the detailed description given hereinbelow. However, the detailed description and the specific embodiments are illustrative of desired embodiments of the present invention and are described only for the purpose of explanation. Various changes and modifications will be apparent to those ordinarily skilled in the art on the basis of the detailed description.
The applicant has no intention to dedicate any disclosed embodiment to the public. Among the disclosed changes and modifications, those which may not literally fall within the scope of the patent claims therefore constitute a part of the present invention in the sense of the doctrine of equivalents.
First, a description will be made on an outline of a classification method using machine learning. In order to describe classification using machine learning, a description will first be made on, as a comparison target, a method for classifying data on a space spanned by an N-dimensional orthonormal basis E = (e_1, e_2, ..., e_N). In the case where E is the orthonormal basis, data d can be expressed using the basis as in the following mathematical formula (i).
d = \sum_{i=1}^{N} w_i e_i   (i)
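To make formula (i) concrete, the following is a minimal numerical sketch (in Python, as an illustration only; the patent itself specifies no code). It assumes an arbitrary orthonormal basis and shows that the weights w_i are recovered as inner products with the basis vectors, exactly because the basis is orthonormal.

```python
import numpy as np

# Illustrative sketch of formula (i): data d = sum_i w_i e_i for an
# orthonormal basis E = (e_1, ..., e_N).
rng = np.random.default_rng(0)
N = 8
E, _ = np.linalg.qr(rng.normal(size=(N, N)))  # columns of E form an orthonormal basis
d = rng.normal(size=N)                        # arbitrary data vector

w = E.T @ d                                   # w_i = <e_i, d> by orthonormality
assert np.allclose(E @ w, d)                  # reconstructs d = sum_i w_i e_i

# The weight distribution w plays the role of a spectrum; data can be
# classified by comparing such weight distributions.
```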
For example, in Fourier expansion of a waveform signal using trigonometric functions as the orthonormal basis, a waveform can be decomposed on the orthogonal basis E and classified using the distribution (spectrum) of the weight values w_i. Besides the Fourier expansion, methods for computing the eigenvectors that best separate the data have been studied for a long time. However, due to the limited expressiveness of an eigenspace, it is difficult for a machine to classify images that a person can distinguish. In recent years, the image recognition field has drawn significant attention because classification accuracy has improved thanks to the diverse representations acquired by machine learning such as deep learning. In the case where the data is projected, via a so-called convolutional network, onto a multidimensional space in which classes separate well, and a boundary between plural sets can be defined in this multidimensional space, the data can be classified. However, there are cases where it is difficult to set a boundary line, such as classification of a lesion part from a normal area and classification of one lesion from plural lesions. A characteristic of the present invention is to visually display the vague boundary line on a multidimensional space in which setting the boundary line is difficult, and thereby make the diagnostic reading doctor acknowledge that his/her own determination is necessary.
Referring to
When the similarity displayed by this diagnostic support system (particularly, a distance between the comparison image and the examination image in the feature value space or the virtual space) can be checked, existing diagnostic criteria recognizable by a person are combined with the feature value information calculated by machine learning. Thus, this diagnostic support system can support a user who has little experience in interpreting the images, improving the efficiency and accuracy of a diagnosis. In addition, in the case where the diagnostic support system also makes determinations on the presence or absence of a lesion and its benignancy/malignancy, the information on the similarity serves as the basis. In this way, it is possible to check the adequacy of a determination result by the diagnostic support system. The determination result described herein means display of the comparison image as the similar image, a probability of cancer (a malignant tumor) in the examination image based on these types of information, and the like.
A detailed description will be made on the first embodiment as a preferred aspect of the present invention.
The imaging device 4 captures a medical image of a subject to acquire an internal information image. For example, an ultrasonic diagnostic device (as disclosed in WO 2017/051903) is applied to the imaging device 4 exemplified in this embodiment.
This imaging device 4 is primarily used for examination of a tumor of breast cancer and can acquire internal information of the breast as the subject as a 3D image. The breast as the subject is inserted in a subject insertion section of a probe 31. An ultrasonic array arranged around the subject vertically scans the subject while transmitting/receiving ultrasound, so as to generate an examination image of the internal information of the subject. In addition to the probe 31, the imaging device 4 includes units such as a transceiving control unit 32, an image generation unit 33, a device control unit 34, and a storage unit 35.
The transceiving control unit 32 controls transmission and reception of an ultrasound signal from the probe 31. The device control unit 34 controls operation of the imaging device 4, including scanning of the probe 31, and the like. The image generation unit 33 reconstructs the ultrasound signal received by the transceiving control unit 32 and generates the examination image. Here, the generated examination image may be an image showing the entire breast as the subject or may be an image showing part of the subject, for example, only a lesion estimated area.
The storage unit 35 stores and accumulates the acquired received signal, subject information, captured image, and the like in a manner that can be called up as needed. The storage unit 35 is a known storage device such as an HDD or an SSD. The storage unit 35 can be incorporated in the imaging device 4 as shown in the drawing, or can be substituted by an external server (not shown) connected to the imaging device 4.
In this embodiment, a description will hereinafter be made on ultrasonic image diagnostic support for breast cancer in a three-dimensional medical image of the breast acquired by the imaging device 4. It is needless to say that the target of the present invention is not limited to a diagnosis of breast cancer by this device. For example, the internal information of a target area may be that of a head, a body, a limb, or the like. In addition, the diagnosis is not limited to an ultrasound diagnosis. The ultrasound diagnosis may also be combined with two-dimensional or three-dimensional CT, MRI, or another imaging technology.
As described above, the diagnostic support system 1S in the embodiment includes the diagnostic support device 2 and the pre-acquired data storage unit 3. The diagnostic support device 2 includes, at least, a communication control unit 11, a control unit 12, a calculation unit 13, a virtual space data storage unit 14, a display control unit 15, a display unit 16, and an input unit 17, and performs the image diagnostic support by using the examination image acquired by the imaging device 4.
The pre-acquired data storage unit 3 stores a group of plural comparison images as comparison targets at the time of diagnosing the examination image. Each of the images is an internal biological information image acquired in advance, and can be two- or three-dimensional image data or image data that includes a case image composed of radiofrequency (high-frequency) data before being converted into image data. The plural comparison images stored herein further include feature value information with which a degree of similarity between the plural comparison images can be identified. However, the comparison images in the pre-acquired data storage unit 3 need not directly hold this feature value information (in detail, the N-dimensional parameter information, which will be described below). This feature value information only needs to be derivable from the data on the comparison images, for example by using the calculation unit 13, which will also be described below. Furthermore, as this comparison image, in addition to the case image, a focal case simulation image acquired by computer calculation, intermediate data on the focal case simulation image, an image of a finding or a diagnostic criterion, an image of a normal tissue, or the like can be adopted. Moreover, the comparison image may be a captured image of the entire breast as the subject or a captured image of part of the subject, for example, only the lesion estimated area. In this embodiment, it is assumed that ultrasound images are compared. However, the comparison image is not limited to such an ultrasound image, and a medical image acquired by another modality such as X-ray CT may be used as the comparison image.
In addition, the pre-acquired data storage unit 3 stores information indicating biological tissue information and shape information (of a biological tissue) of each of the comparison images, in detail, label information including the lesion feature information of each of the comparison images, and this information is linked to each of the comparison images. The label information including the lesion feature information is used for diagnostic support of the subject, and is also read to indicate an attribute of the comparison image at the time of displaying the comparison image as an image showing the basis for a diagnostic support determination result. The label information including the lesion feature information includes diagnostic information by a doctor, biological information of the subject, and the like, such as the finding or a diagnosis result determined comprehensively on the basis of one or plural diagnostic criteria, a pathological diagnosis by needle biopsy or the like, a temporal change in the subject, and a history of treatment. This label information has to be linked to each of the plural comparison images in the pre-acquired data storage unit 3. However, the label information does not always have to be linked to all the comparison images in the pre-acquired data storage unit 3.
In addition, since the label information including the image lesion feature information is linked to each of these comparison images, the comparison images constitute, as tagged supervised data, a learning data set of a learned model for the calculation unit 13, which will be described below.
In this embodiment, the pre-acquired data storage unit 3 is arranged in the server that is connected to the outside of the diagnostic support device 2, or the like. However, the pre-acquired data storage unit 3 may be incorporated in the diagnostic support device 2 (not shown). In addition, the plural comparison images in the pre-acquired data storage unit 3 may be provided to the pre-acquired data storage unit 3 via a network or a portable recording medium.
The diagnostic support device 2 includes a CPU and a GPU in the control unit 12, a main memory, other LSIs, ROM, RAM, and the like, and operates with a diagnostic support program loaded into the main memory or the like. That is, the diagnostic support device 2 can be realized by using any of various computers (calculation resources) such as a personal computer (PC), a mainframe, a workstation, and a cloud computing system.
In the case where each function unit of the diagnostic support device 2 is realized by software, the diagnostic support device 2 is realized by executing the commands of a program as software for implementing each function. As a recording medium storing this program, a "non-transitory tangible medium" such as a CD, a DVD, semiconductor memory, or a programmable logic circuit can be used. In addition, this program can be supplied to the computer in the diagnostic support device 2 via a specified transmission medium (a communication network, a broadcast wave, or the like) capable of transmitting the program.
The communication control unit 11 is an interface for controlling the transmission and reception of data between the imaging device 4 and the pre-acquired data storage unit 3. The communication control unit 11 primarily acquires the examination image, the group of comparison images, the label information including the lesion feature information of the comparison images, and the like.
The control unit 12 includes at least processors such as the CPU and the GPU, and controls all the function units in the diagnostic support device 2. In particular, in this embodiment, this control unit 12 has a function of identifying the similarity between the examination image and each of the plural comparison images. Such a function will be described below.
The calculation unit 13 calculates and acquires the feature value information of the examination image, which is received via the communication control unit 11, and of the plural comparison images as needed. This calculation unit 13 constitutes a so-called classifier and has the specified learned model therein. This learned model is generated by a well-known machine learning method, for example, through supervised learning using a neural network (preferably including a convolutional neural network (CNN)) model. In addition, this learned model is a learned model that has learned (trained) to output the feature value information to a neuron in an output layer by inputting the data on the examination image and the plural comparison images into a neuron in an input layer thereof. The machine learning technique for the learned model is not limited to the above. Any of techniques such as a support vector machine (SVM), a model tree, a decision tree, multiple linear regression, locally weighted regression, and an established search method can be used alternatively, or the methods can appropriately be combined and used.
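As a concrete illustration of the kind of classifier the calculation unit 13 could contain, the following is a minimal sketch of a convolutional feature extractor that maps an image to an N-dimensional feature vector. The choice of PyTorch, the layer sizes, and the architecture are illustrative assumptions, not the patent's specification; the text only requires a learned model (preferably including a CNN) that outputs feature value information.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Maps a grayscale image to an N-dimensional feature vector."""
    def __init__(self, n_features: int = 10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),    # tolerates varying image sizes
            nn.Flatten(),
            nn.Linear(32, n_features),  # the N-dimensional parameter
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(x))

# Usage: an 8-bit grayscale slice scaled to [0, 1], shape (batch, 1, H, W).
model = FeatureExtractor(n_features=10)
features = model(torch.rand(1, 1, 128, 128))  # -> shape (1, 10)
```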
This learned model for the calculation unit 13 is acquired by learning some or all of the plural comparison images, which are stored in the pre-acquired data storage unit 3 and include the mutually-linked label information, as the learning data set. Accordingly, it should particularly be noted that the data input to the neuron in the input layer of this learned model has to be in the same format as the data on the plural comparison images. For example, in the case where the comparison image includes three-dimensional image data, the data input to each of the neurons in the input layer can be the (eight-bit) gray scale value of each voxel that constitutes this three-dimensional image data. Similarly, in the case where the comparison image includes two-dimensional image data, the data input to each of the neurons in the input layer can be the gray scale value of each pixel that constitutes this two-dimensional image data. The data input to each of the neurons in the input layer is not limited thereto, and can appropriately be changed according to the format of the data constituting the comparison image, the presence or absence of additional information, or the like.
In this calculation unit 13, the feature value information output by the learned model to the output layer includes information with which a feature of the image can be identified in a machine learning network, but the format and number of the information are not limited. In this embodiment, the feature value information is information including a multidimensional, for example, N-dimensional (N being a natural number equal to or larger than 2) parameter that is the feature value identified at the learning stage. As described above, this learned model is generated through machine learning of the learning data set that includes the comparison images and the label information linked thereto. The label information can be classified by a value based on diagnostic criteria recognizable by the user, for example, a value indicating the presence or absence of a whitened area of specified size or larger in the image, the size and location of the whitened area, the thickness of a peripheral vein, or the like.
The virtual space data storage unit 14 is used to generate a specified virtual space (a first virtual space) from the N-dimensional parameter output by the learned model provided in the calculation unit 13. This virtual space data storage unit 14 stores various types of data used to plot the plural comparison images and the examination image at specified coordinate positions on this virtual space. The various types of data described herein include a calculation formula and the like used to plot the images on a displayable virtual space of low dimensional number (for example, one to three dimensions) by adjusting the values of the N-dimensional parameter calculated by the calculation unit 13. A specific example of this calculation formula follows. In the case where the calculation unit 13 outputs 10-dimensional parameter information as the feature value information of the plural comparison images and the plural comparison images are to be plotted on a two-dimensional space as the virtual space, a calculation formula that specifies a two-dimensional value by dividing the 10-dimensional parameter information into two halves, multiplying each by a preset weight value when needed, and summing each half, or a calculation formula that specifies the two-dimensional value by applying a well-known multivariate analysis technique to the 10-dimensional parameter information, may be adopted. Needless to say, in the case where the plural comparison images are plotted on such a virtual space, the coordinate of each of the comparison images needs to have a specified correlation with the relevant information (that is, the diagnosis result) linked to the respective comparison image. Accordingly, for example, the parameter information output from the calculation unit 13, the weight values included in the calculation formula stored in this virtual space data storage unit 14, and the like are adjusted on the basis of the diagnosis results of the comparison images. Needless to say, in the case where the dimensional number of the N-dimensional parameter output from the calculation unit 13 is small (for example, three dimensions), the comparison images and the examination image can be plotted on the virtual space (of the same dimensional number) without an additional calculation. As a result, in this case, the virtual space and the feature value space constitute the same space.
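The two projection styles mentioned above can be sketched as follows, assuming 10-dimensional parameters and a two-dimensional virtual space as in the text's example; the weight values and the use of PCA as the "well-known multivariate analysis technique" are illustrative assumptions.

```python
import numpy as np

def split_sum_projection(params: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Fold a (M, 10) parameter array into (M, 2) coordinates by splitting
    the 10 components into two halves and taking weighted sums."""
    weighted = params * weights          # per-component preset weight values
    x = weighted[:, :5].sum(axis=1)      # first half  -> horizontal axis
    y = weighted[:, 5:].sum(axis=1)      # second half -> vertical axis
    return np.stack([x, y], axis=1)

def pca_projection(params: np.ndarray) -> np.ndarray:
    """Alternative: project onto the top two principal components."""
    centered = params - params.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T
```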
In addition, in this embodiment, it is exemplified that the calculation unit 13 calculates the N-dimensional parameter of each of the plural comparison images on each occasion. However, since the comparison images are not frequently added or changed, the plural comparison images and their N-dimensional parameters may be stored in this virtual space data storage unit 14. In this way, it is possible to reduce the calculation amount of the calculation unit 13 at the time of generating the virtual space and thus to reduce the burdens on the calculation unit 13 and the control unit 12.
Here, a brief description will be made on an example of a similarity identification technique by the control unit 12. The control unit 12 calls the N-dimensional parameters as the feature value information of the plural comparison images calculated by the calculation unit 13 and the N-dimensional parameter as the feature value information of the examination image, also calculated by the calculation unit 13, and plots them on the feature value space configured as the N-dimensional space with the N-dimensional parameters as the coordinates. Then, the labeled diagnostic criteria and the label information of the diagnosis result are read into the comparison images arranged on this feature value space. In this way, for example, a boundary line between benignancy and malignancy is drawn on the feature value space. Here, the feature value space has N axes. Accordingly, in the case where the coordinate of the examination image on the feature value space is a vector X = (x_1, x_2, ..., x_N), and the coordinate of the comparison image to which a distance is to be calculated is a vector Y = (y_1, y_2, ..., y_N), a distance L is calculated by the mathematical formula (ii) expressed as follows. Furthermore, the mathematical formula (iii), which weights each component on the feature value space, may be used.
L = \sqrt{\sum_{i=1}^{N} (x_i - y_i)^2}   (ii)

L = \sqrt{\sum_{i=1}^{N} w_i (x_i - y_i)^2}   (iii)
The distance L identified herein is a value representing the similarity between the examination image and each of the plural comparison images. In addition, a distance between the examination image and a set such as benign tumors (tumor masses) or malignant tumors is calculated as the sum of the distances to the comparison images belonging to the set. Needless to say, the set distance need not be calculated as the sum of the distances to all the comparison images that belong to the set. Instead, the top M comparison images in the set located nearest the examination image may be picked, and the sum of the distances to the picked comparison images may be calculated. Alternatively, the distance to a boundary line may be calculated. Thus, the calculations of the set and the distance are not limited. The distance L is identified as the distance on the feature value space. However, the space in which the distance is measured is not limited to this feature value space. For example, each of the comparison images and the examination image may be plotted on an m-dimensional (1 ≤ m < N) virtual space, acquired with reference to the virtual space data storage unit 14, by using the calculated N-dimensional parameter. Then, the distance on this virtual space may be calculated by using a mathematical formula similar to the above-described mathematical formula (ii) or (iii). The adoption of the distance on the virtual space as the value indicative of the similarity between the examination image and each of the comparison images is especially advantageous in a case where a virtual space image 45, which will be described below, is displayed. In detail, in this case, the distance between the examination image and each of the comparison images displayed in the virtual space image matches the distance as the value indicative of the similarity between the examination image and each of the plural comparison images. Thus, the similarity can accurately be comprehended simply by looking at the virtual space image 45.
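Formulas (ii) and (iii), together with the set-distance and top-M variants described above, can be expressed directly as a short sketch; this is an illustrative Python rendering, not code from the patent.

```python
import numpy as np

def distance(x, y, w=None):
    """Distance L on the feature value space: formula (ii) when w is None,
    weighted formula (iii) otherwise."""
    sq = (np.asarray(x) - np.asarray(y)) ** 2
    return float(np.sqrt(np.sum(sq if w is None else np.asarray(w) * sq)))

def set_distance(exam, comparisons, top_m=None):
    """Distance between the examination image and a set (e.g. benign cases):
    the sum of distances to its members, optionally restricted to the top M
    members nearest the examination image, as described in the text."""
    dists = np.sort([distance(exam, c) for c in comparisons])
    return float(np.sum(dists[:top_m]) if top_m else np.sum(dists))
```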
The accuracy of the feature value space or the virtual space depends on the comparison images used as the learning data set to generate the learned model for the calculation unit 13, and on the quality and amount of the label information linked to these comparison images. In the case where the examination image is arranged at a position on the feature value space or the virtual space whose distance from the set of benign comparison images equals its distance from the set of malignant comparison images, increasing the number of comparison images changes the attribute distribution map including benignancy, malignancy, and the like. As a result, an examination image located on a boundary between different attributes can be determined more easily. Meanwhile, as the types and number of the comparison images increase, portions in which the sets of different attributes overlap each other may appear on the feature value space or the virtual space. In the case where the determination on the feature value space or the virtual space is difficult, just as described, the degree of vagueness of the diagnostic determination increases, and the numerical value of the probability of the determination and the like loses its meaning. In such a case, the diagnostic determination is not made by the machine; only the virtual space image is displayed, and the determination by the doctor himself/herself is added. In this way, it is possible to make a more accurate diagnostic determination. As understood from the above description, the number of the comparison images affects the output of the calculation unit. Accordingly, in order to prevent a change in the output of the diagnostic support device 2 as it is used, when this learned model is generated, only batch learning using the learning data set prepared in advance is adopted, and so-called online learning, in which the learned model is updated by using the examination image and the like, does not have to be adopted.
The display control unit 15 generates the image used as the basis for the diagnostic determination result by the control unit 12. The image used as the basis may be only the numerical value representing the similarity identified by the control unit 12, may adopt a display layout based on the similarity, or may be the virtual space image generated when the control unit 12 arranges the plural comparison images and the examination image on the feature value space or the virtual space by taking the virtual space data storage unit 14 into consideration, for example. In addition to the above, the image used as the basis may be a display example that displays a correlation of a Euclidean distance between the examination image and each of the comparison images on the feature value space or the virtual space, or may be a display example that displays a function of the coordinate of each of the comparison images. However, the image used as the basis is not limited thereto. In addition, this display control unit 15 can add a specified visual effect to a point indicative of the comparison image plotted on the virtual space. For this visual effect, for example, the label information, such as the lesion feature information, linked to each comparison image is taken into consideration. Then, a point of a comparison image (a malignant case image) whose label information includes the "malignant tumor" can be shown in red, a point of a comparison image (a benign case image) whose label information includes the "benign tumor" can be shown in blue, and a point of a comparison image (a normal tissue image) whose label information includes "normal" can be shown in black. The visual effect is not limited to any of the above-described effects, and any of various other visual effects can be adopted. In this way, when the similarity between the examination image and the comparison image is displayed in a format that allows the doctor or the like to comprehend the similarity visually, the relative similarity of each of the plural comparison images to the examination image is clarified. With this as the basis for the diagnostic determination by the machine, the user determines the reliability of the diagnostic determination by the diagnostic support system 1S, and can thereby effectively derive the diagnosis result of the image interpretation.
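A minimal sketch of such a virtual space display, using the red/blue/black color scheme from the text, might look as follows; the use of matplotlib and the marker choices are illustrative assumptions.

```python
import matplotlib.pyplot as plt
import numpy as np

# Color scheme from the text: malignant red, benign blue, normal black.
LABEL_COLORS = {"malignant": "red", "benign": "blue", "normal": "black"}

def plot_virtual_space(coords, labels, exam_coord):
    """coords: (M, 2) comparison-image coordinates; labels: M label strings;
    exam_coord: 2-element coordinate of the examination image."""
    coords = np.asarray(coords)
    for label, color in LABEL_COLORS.items():
        idx = [i for i, l in enumerate(labels) if l == label]
        pts = coords[idx]
        plt.scatter(pts[:, 0], pts[:, 1], c=color, s=20, label=label)
    # Highlight the examination image so its position relative to the
    # benign/malignant sets is visible at a glance.
    plt.scatter(exam_coord[0], exam_coord[1], c="green", marker="*",
                s=200, label="examination")
    plt.legend()
    plt.show()
```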
The display unit 16 is a display device such as a display. This display unit 16 displays the examination image acquired by the communication control unit 11, the determination result acquired by the control unit 12, and the information on the similarity (for example, the virtual space image) that is generated by the display control unit 15 and serves as the basis for the determination result. That is, the display unit 16 not only displays the determination result acquired from the comparison (for example, the specified number of the similar comparison images or a speculation result by a speculation unit 18, which will be described below) but also displays the value of the distance on the feature value space, which serves as the basis for the derivation of the determination result, the virtual space image, and the like. The display unit 16 also synthesizes the images or the like in order to display required information for the operation of the diagnostic support system 1S, and the like. A detailed description on the display will be made below with reference to schematic views. In this embodiment, the display unit 16 is incorporated into the diagnostic support device 2. However, the display unit 16 may be connected to a display unit of an external PC terminal, a mobile terminal, or the like via the Internet.
The input unit 17 is a keyboard, a touchscreen, a mouse, or the like for the operation. The input unit 17 can be used to make input for the operation of the diagnostic support device 2, designate an examination area in the examination image, select a display pattern, enter a finding comment to a workstation, and the like.
The diagnostic support device 2 according to this embodiment may further include the speculation unit 18 and a pre-processing unit 19.
The speculation unit 18 speculates whether specified preset biological information, in detail, the malignant tumor or the like, is included in the examination image. Similar to the calculation unit 13, this speculation unit 18 includes a learned model. Similar to the learned model for the calculation unit 13, this learned model in the speculation unit 18 is generated by a well-known machine learning method, for example, through supervised learning using a neural network model. In addition, this learned model can be generated by performing machine learning using a learning data set in which the plural comparison images stored in the pre-acquired data storage unit 3 and the label information linked to each of the comparison images, in particular, the presence or absence of the biological information as the diagnosis result, are provided as a set, for example. The thus-generated learned model is a learned model that has learned to output, to the neuron of the output layer, whether the specified biological information is included or a probability that the specified biological information is included (also referred to as a confidence value) by inputting (the image data of) the examination image to the neuron of the input layer. Instead of the image data of the examination image, the data input to the neuron in the input layer may be the N-dimensional parameter information of the examination image calculated by the calculation unit 13 or the RF data of the examination image. The machine learning method for this learned model is not limited to the above. Any of methods such as the SVM, the model tree, the decision tree, the multiple linear regression, the locally weighted regression, and the established search method can be used alternatively, or the methods can appropriately be combined and used. In addition, the biological information described herein is not limited to the malignant tumor but also includes the benign tumor and an artifact. The learned model in the speculation unit 18 may cover any of these types of biological information and may output the probability that any of them is included in the examination image. By adopting such a speculation unit 18, the diagnostic support device 2 can provide the user with the probability that the specified biological information is included in the examination image as the determination result of the diagnostic support system 1S, in addition to the similarity between the examination image and each of the plural comparison images. As a result, it is possible to further improve the diagnostic efficiency of the doctor.
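The confidence-value output of such a speculation unit can be sketched as follows, assuming a binary model with a single-logit output; the architecture and the use of a sigmoid are illustrative assumptions, since the text does not fix them.

```python
import torch

@torch.no_grad()
def malignancy_probability(speculation_model: torch.nn.Module,
                           exam_image: torch.Tensor) -> float:
    """Confidence value that specified biological information (for example,
    a malignant tumor) is included in the examination image. Assumes a model
    trained with a single-logit binary output."""
    logit = speculation_model(exam_image.unsqueeze(0))  # add batch dimension
    return torch.sigmoid(logit).item()                  # probability in [0, 1]
```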
The pre-processing unit 19 adjusts the data format of the examination image received by the communication control unit 11, before the calculation by the calculation unit 13, so that the calculation unit 13 can calculate the feature value information. This pre-processing includes various types of processing in addition to the processing normally executed in the technical field of a machine learning device (for example, noise filtering, data volume adjustment, FFT, or the like). Specific examples are as follows. In the case where the learned model in the calculation unit 13 has learned to output the feature value information from two-dimensional image data input and the examination image received by the communication control unit 11 is three-dimensional image data, the pre-processing generates one or more pieces of two-dimensional slice data (automatically or via an operation by the user) from the three-dimensional image data. In the case where the learned model has learned on two-dimensional image data input and the received examination image is RF data, the pre-processing generates the two-dimensional image data from the RF data. In the case where the learned model has learned to output the feature value information from linguistic expression information input, which will be described below, and the received examination image is two-dimensional or three-dimensional image data, the pre-processing generates the linguistic expression information from the image data. The input/output information of the learned model is specified according to the data configuration of the learning data set at the time of machine learning. Accordingly, by adopting such a pre-processing unit 19, plural learned models no longer have to be prepared according to the data format of the examination image.
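The first of these examples, slicing a three-dimensional volume into two-dimensional inputs, can be sketched as follows; the slicing axis and stride are illustrative choices, not values specified by the text.

```python
import numpy as np

def slices_from_volume(volume: np.ndarray, axis: int = 0,
                       step: int = 4) -> list:
    """Generate two-dimensional slice data from a three-dimensional image
    (a (D, H, W) array of gray scale voxels) so that a learned model trained
    on two-dimensional inputs can process it."""
    return [np.take(volume, i, axis=axis)
            for i in range(0, volume.shape[axis], step)]
```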
First, as advance preparation for the diagnostic support, in the comparison image storage step (S1000), the plural comparison images collected in advance are stored in the pre-acquired data storage unit 3. Each of the plural comparison images collected herein is primarily composed of the three-dimensional image data, for example. Next, in the comparison image multidimensional parameter generation step (S1100), the multidimensional (N-dimensional) parameter as the feature value information of each of the comparison images is generated by the calculation unit 13 and stored in the virtual space data storage unit 14. Here, it can also be configured to generate the multidimensional parameters of the plural comparison images in advance and to store each of the comparison images and its multidimensional parameter as a set in the pre-acquired data storage unit 3. In such a case, this comparison image multidimensional parameter generation step (S1100) can be omitted. In addition, the above-described comparison image storage step (S1000) and comparison image multidimensional parameter generation step (S1100) may be executed for each examination. However, since the content of the comparison images does not change frequently, these steps may be executed only when the content of the comparison images is updated.
Next, in the examination image acquisition step (S1200), the examination image captured by the imaging device 4, which serves as the examination target, is acquired via the communication control unit 11. This examination image is also composed of the three-dimensional image data, for example. Furthermore, in the comparison image and multidimensional parameter acquisition step (S1300), the set of the plural comparison images and the multidimensional parameters corresponding thereto is acquired from the pre-acquired data storage unit 3 or the virtual space data storage unit 14. Then, the calculation unit 13 calculates the multidimensional parameter as the feature value information of the acquired examination image (the examination image multidimensional parameter calculation step (S1400)). Furthermore, in the virtual space image formation display step (S1500), the control unit 12 and the display control unit 15 generate a virtual space image to be displayed on the basis of the various types of data in the virtual space data storage unit 14, and the display unit 16 displays the virtual space image. In this embodiment, the comparison images and the examination image are plotted at particular coordinate positions on the virtual space image, just as described. In this way, the similarity therebetween is displayed. Furthermore, by selecting the coordinate of a comparison image on the virtual space image, the user can preferably display the comparison image and the label information such as the lesion feature information (the comparison image display step (S1600)).
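The overall flow of steps S1300 to S1600 can be summarized in the following sketch; `store`, `extractor`, `projector`, and `display` are hypothetical stand-ins for the pre-acquired data storage unit 3, the calculation unit 13, the virtual space data storage unit 14, and the display unit 16, and none of these interfaces is defined by the text.

```python
import numpy as np

def diagnostic_support_flow(exam_image, store, extractor, projector, display):
    """Sketch of steps S1300-S1600 under the assumptions stated above."""
    comp_params, labels = store.load()                     # S1300
    exam_params = extractor(exam_image)                    # S1400
    coords = projector(np.vstack([comp_params,
                                  exam_params[None, :]]))  # S1500
    display(coords[:-1], labels, coords[-1])               # S1500/S1600
```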
As it has been described so far, in the diagnostic support system according to this embodiment, when the similar image, which is determined to be similar to the examination image, of the comparison images is displayed as the determination result, the information on the similarity between the images, such as the virtual space images, is displayed as the basis for the selection. In this way, it is possible for the user to check adequacy of the determination by this system, resulting in an improvement of diagnosis efficiency.
The examination image acquisition step (S1200) and the comparison image and multidimensional parameter acquisition step (S1300) may be executed in parallel or sequentially. In addition, instead of the virtual space image formation display step (S1500), the distance on the feature value space can be identified from the multidimensional parameters of the examination image and the plural comparison images, and the display unit 16 can display such a distance as a value representing the similarity.
Furthermore, in the case where the diagnostic support device is used for an educational purpose, or the like, the user can select a mode, in which the diagnostic determination by the machine is not displayed, in advance.
The virtual space data storage unit 14 that uses the N-dimensional parameter as the feature value information to generate the first virtual space has been exemplified. However, the virtual space data storage unit 14 according to this embodiment is not limited thereto. As another aspect, the virtual space data storage unit 14 may adopt, as the feature value information, linguistic expression information corresponding to each of the images and may generate a linguistic space including this linguistic expression information as a second virtual space. In this case, the display unit 16 displays the similarity calculated with reference to the distance between the examination image and each of the plural comparison images on the linguistic space.
For example, the linguistic space can be based on linguistic expressions such as image interpretation report expressions linked to the comparison images. First, the control unit 12 uses the learned model for the calculation unit 13 to convert the comparison image into the image interpretation language included in the comparison image, and reads the image interpretation language as language information. Similarly, the control unit 12 also converts the examination image into the image interpretation language and reads it. For example, the image interpretation language is the language information or the finding determined from an image, such as an indication that progress of the tumor is "2", or is language data converted into natural language data or the like. Here, the finding may include information similar to the above-described lesion feature information, that is, the position of the tumor (its position in the breast, its position in the mammary gland, the distance to the skin), whether it is solid or cystic, the presence or absence of structure in the tumor, the presence or absence of a posterior shadow, the aspect ratio (the ratio among the lengths of the a-axis, the b-axis, and the c-axis in the case of approximation as a spheroid), the property of the boundary (whether the echo in the boundary part is high or low, and whether the shape of the boundary is smooth or not), the presence or absence of architectural distortion of the surrounding normal tissues, the presence or absence of plural tumor masses such as a daughter nodule, and the presence or absence of calcification. Thereafter, the control unit 12 replaces the examination image and the comparison images with indexes on the linguistic space (the second virtual space), and the indexes are displayed on the linguistic space so that the similarity between the examination image and the comparison images can be recognized visually. In addition, the similarity between the examination image and the comparison image can be identified as a numerical value by measuring the distance distribution of the examination image and the comparison images.
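One simple way such linguistic indexes could be realized is a bag-of-findings vector, sketched below; the finding vocabulary is hypothetical, and a real system would derive its indexes from the image interpretation report expressions linked to the comparison images rather than from a fixed list.

```python
import numpy as np

# Hypothetical finding vocabulary (illustration only).
FINDINGS = ["whitened_area", "posterior_shadow", "smooth_boundary",
            "architectural_distortion", "calcification"]

def findings_vector(findings) -> np.ndarray:
    """Index an image on the linguistic space as a bag-of-findings vector."""
    return np.array([1.0 if f in findings else 0.0 for f in FINDINGS])

# The distance between indexes yields a numerical similarity, analogous to
# formulas (ii)/(iii) on the feature value space.
exam = findings_vector({"whitened_area", "posterior_shadow"})
comp = findings_vector({"whitened_area", "calcification"})
similarity_distance = float(np.linalg.norm(exam - comp))
```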
In this example, the learned model can verbalize and extract the feature value that corresponds to the existing diagnostic criteria for the lesion estimated area in the examination image, by performing machine learning using the learning data set that includes the comparison images and the label information including the lesion feature information. As a result, it is possible to verbalize new diagnostic criteria that are not obvious to human eyes.
Next, a specific description will be made on a display content of the display unit 16 in this embodiment.
As described above, the virtual space image 45 is displayed as the image showing the basis for the determination result. Thus, the doctor or the like can determine the vagueness of the determination result and can use the virtual space image 45 to confirm the diagnosis. In addition, since some of the comparison images corresponding to the similar case are also displayed with the determination result 43 in the display unit 16, it is possible to improve the diagnostic efficiency.
In addition to the various images, for example, a patient information display section 47, an imaging condition display section 48, and the like are also displayed on the display of the display unit 16. The types, the arrangement, and the like of the display information are only illustrative and thus are not limited thereto. In the case where the examination image and the comparison image are compared in the format of the three-dimensional image, the displayed comparison image may be three-dimensional.
In addition, the various types of data do not have to be displayed in a single window. The examination image 40, the virtual space image 45, and the like may appear in another window or on a tab display according to an input command.
Next, referring to
It can be understood that, in the histogram illustrated in
As it has been described so far, the virtual space image is generated only on the basis of the information such as the feature value space 50 illustrated in
Preferably, in order to facilitate understanding of the relevance between these virtual spaces, the information in the virtual spaces is associated. As the association described herein, any type of processing can be adopted as long as the processing identifies a relationship between the information displayed in the different virtual spaces. An example of such processing is to add the same visual effect, such as common shapes and colors, to the points of the same comparison images plotted on the two virtual spaces. In this way, it is possible to supplement the diagnostic determination by the user with yet another virtual space proposed for a feature that cannot be separated in one of the virtual spaces.
Meanwhile,
As shown in
Just as described, images of two or more types, such as those with benign (normal) and malignant properties, are displayed in alignment as the comparison images. In this way, the doctor or the like can check the adequacy of the diagnosis by the machine. Instead of the user designating the specified area in the virtual space image 45, the control unit 12 may select the comparison images calculated in advance and display them juxtaposed. In this case, the distance on the feature value space can be identified from the feature value information alone. Thus, the virtual space image 45 does not always have to be displayed. Alternatively, the display mode shown in
In the first embodiment that has been described so far, the description has been made on the example in which the comparison images constructed of the three-dimensional image data are displayed when the examination image and each of the comparison images are compared to derive the diagnostic determination. As another display example, when the comparison images are displayed, a drawing, such as a heat map, that has a visual effect to further prominently show the feature per pixel may be added to the lesion estimated area in each of the displayed images, and may be shown in each of the images. At this time, the image and the drawing such as the heat map may be superposed or juxtaposed with each other.
In regard to the comparison image, the location of the tumor and the lesion estimated area in the three-dimensional image are designated in advance, and information thereon is linked as the label information to the comparison image. Thus, the drawing such as the heat map can additionally be shown with reference to this label information. Meanwhile, the area to be checked is not identified in the examination image that is checked by the user. According to this embodiment, the user can efficiently identify the lesion estimated area to be checked from the three-dimensional examination image with reference to the lesion estimated area that is estimated by the machine learning technique, for example. Thus, it is possible to reduce a burden of the interpretation of the image.
In the case where the comparison image is of normal tissues or the like, no lesion area is shown; instead, an area estimated as a false lesion in the process of machine learning is shown. Thus, it is possible to reduce the chance of making a false-positive determination. This embodiment corresponds to an example in which the present invention is applied to computer-aided detection (CADe).
In the first embodiment and the second embodiment, the lesion estimated area is estimated by using the machine learning technique. Thereafter, the user determines the adequacy of the determination. In another embodiment, the user may first identify the lesion estimated area in the examination image. Thereafter, the comparison image having a similar area to the identified area may be presented.
According to this embodiment, the user can identify the area, the examination of which is desired, and extracts a similar case to the lesion area. In this way, it is possible to improve efficiency of such differential work by the user that identifies the type of the identified lesion. This embodiment corresponds to a case where the present invention is applied to a so-called computer-aided diagnosis (CADx).
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
The use of the terms “a” and “an” and “the” and similar referents in the context of describing the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
Preferred embodiments of this invention are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.
According to the diagnostic support system and the diagnostic support method of the present invention, as the basis for the display of the similar image as the determination result and the display of the probability that the biological information is included, the similarity between the examination image and each of the plural comparison images is provided to the user, which contributes to improvement in diagnosis efficiency.
This application is the United States national phase of International Application No. PCT/JP2019/030091 filed Jul. 31, 2019, and claims priority to Japanese Patent Application No. 2018-144549 filed Jul. 31, 2018, the disclosures of which are hereby incorporated by reference in their entirety.