The present invention relates to processing of medical images and, more particularly, to a method and device for detecting medical indices in medical images.
A disease is a condition that causes a disorder and thus impedes the normal function of the human mind or body; depending on its seriousness, people undergo suffering and may even lose their lives. Accordingly, over the course of human history, a variety of social systems and technologies have been developed to diagnose, treat, and even prevent diseases. For the diagnosis and treatment of diseases, various tools and methods have been devised alongside impressive technical advances, but the final judgment still depends on doctors.
Meanwhile, the recent advancement of artificial intelligence (AI) technology is remarkable enough to draw attention from various fields. In particular, the massive accumulation of medical data and the image-centered diagnostic data environment encourage various attempts and studies to graft AI algorithms onto medicine. Specifically, various studies use AI algorithms to provide solutions for the diagnosis and prediction of diseases and other tasks that still depend on clinical judgment. In addition, various studies are being conducted on processing and analyzing medical data as an intermediate step of AI-based diagnosis.
An object of the present invention is to provide a method and device for effectively obtaining information from a medical image using an artificial intelligence (AI) algorithm.
An object of the present invention is to provide a method and device for automatically detecting medical indices from a medical image.
An object of the present invention is to provide a method and device for detecting medical indices based on segmentation of a medical image.
It is to be understood that the technical objects to be achieved by the present invention are not limited to the aforementioned technical objects, and other technical objects not mentioned herein will be apparent to those of ordinary skill in the art to which the present invention pertains from the following description.
A method of obtaining information from a medical image according to an embodiment of the present invention may include performing segmentation on regions of a heart in the medical image, generating at least one reference line based on the segmentation and determining at least one medical index based on the at least one reference line.
According to an embodiment of the present invention, the at least one medical index may include at least one of a length of the at least one reference line or a value determined based on the length of the at least one reference line.
According to an embodiment of the present invention, the method may further include generating at least one reference point based on the segmentation.
According to an embodiment of the present invention, the at least one reference point may be generated based on at least one center point of the regions.
According to an embodiment of the present invention, the regions may include at least one of a first region, a second region, a third region, a fourth region, a fifth region or a sixth region.
According to an embodiment of the present invention, the first region may be located between the second region and the third region, the second region may be located between the fourth region and the first region, the fourth region may be located at the top, the fifth region may be adjacent to a right side of the first region, or the sixth region may be adjacent to the right side of the first region and be located below the fifth region.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a transversal line passing through a reference point of the first region and a center of a left boundary line of the first region and crossing the first region, identifying a first orthogonal line passing through the reference point and orthogonal to the transversal line, and generating a first reference line to include at least a part of the first orthogonal line.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a second orthogonal line passing through a point of contact between the first reference line and a segmentation contour of the second region and orthogonal to a long axis of the second region and generating a second reference line to include at least a part of the second orthogonal line.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a third orthogonal line passing through a point of contact between the first reference line and a segmentation contour of the third region and orthogonal to a long axis of the third region and generating a third reference line to include at least a part of the third orthogonal line.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a center line of a short axis of a region corresponding to the second region, identifying a first vertical line perpendicular to a boundary line of the second region at a point where the center line and the boundary line of the region meet, and generating a first reference line to include at least a part of the first vertical line, and the first reference line may be used to measure a diameter of the first region.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a second vertical line perpendicular to a boundary line of the third region at a point where the first reference line and the boundary line of the third region meet and generating a second reference line to include at least a part of the second vertical line, and the second reference line may be used to measure a thickness of the third region.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a closest point closest to the fourth region among points on a boundary line of the fifth region, identifying a parallel line including the closest point and parallel to a junction of the first region and the fifth region, and generating a third reference line to include at least a part of the parallel line.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a sinus point on a boundary line of the fifth region, identifying a vertical line including a point closest to the sinus point on a boundary line of a region corresponding to the sixth region, parallel to a vertical axis of the medical image and penetrating the inside of a region corresponding to the sixth region and generating a fourth reference line to include at least a part of the vertical line.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a sinus point on a boundary line of the fifth region, identifying a vertical line including a point closest to the sinus point on a boundary line of a region corresponding to the sixth region, perpendicular to a long axis of a region corresponding to the sixth region and penetrating the inside of the region corresponding to the sixth region, and determining a fourth reference line to include at least a part of the vertical line.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a vertical line including one point on a junction of the fifth region and the first region and perpendicular to a center line of a long axis of the fourth region and generating a fifth reference line to include at least a part of the vertical line.
According to an embodiment of the present invention, the generating the at least one reference line may include identifying a vertical line including one point on a junction of the fifth region and the first region and perpendicular to a vertical axis of the medical image and generating a fifth reference line to include at least a part of the vertical line.
According to an embodiment of the present invention, the regions may include at least one of right ventricle (RV), interventricular septum (IVS), aorta, left ventricle (LV), LV posterior wall (LVPW) or left atrium (LA).
A device for obtaining information from a medical image according to an embodiment of the present invention may include a storage unit configured to store a set of commands for operation of the device and at least one processor connected to the storage unit. The at least one processor may perform segmentation on regions of a heart in the medical image, generate at least one reference line based on the segmentation, and determine at least one medical index based on the at least one reference line.
A program stored on a medium according to an embodiment of the present invention may implement the above-described method when operated by a processor.
The features briefly summarized above for the present invention are only illustrative aspects of the detailed description of the invention that follows and do not limit the scope of the present invention.
According to the present invention, it is possible to efficiently obtain information from medical images using an artificial intelligence (AI) algorithm.
It is to be understood that effects to be obtained by the present invention are not limited to the aforementioned effects, and other effects not mentioned herein will be apparent to those of ordinary skill in the art to which the present invention pertains from the following description.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art may easily implement them. However, the present invention may be embodied in many different forms and is not limited to the embodiments described herein.
In the following description of the exemplary embodiments of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear. In addition, parts not related to the description of the present invention in the drawings are omitted, and like parts are denoted by similar reference numerals.
Referring to
The service server 110 provides a service based on an artificial intelligence model. That is, the service server 110 performs a learning and prediction operation by using the artificial intelligence model. The service server 110 may perform communication with the data server 120 or the at least one client device 130 via a network. For example, the service server 110 may receive learning data for training the artificial intelligence model from the data server 120 and perform training. The service server 110 may receive data necessary for a learning and prediction operation from the at least one client device 130. In addition, the service server 110 may transmit information on a prediction result to the at least one client device 130.
The data server 120 provides learning data for training of an artificial intelligence model stored in the service server 110. According to various embodiments, the data server 120 may provide public data accessible to anyone or data requiring permission. When necessary, learning data may be preprocessed by the data server 120 or the service server 110. According to another embodiment, the data server 120 may be omitted. In this case, the service server 110 may use an artificial intelligence model that is externally trained, or learning data may be provided offline to the service server 110.
The at least one client device 130 transmits and receives data associated with the artificial intelligence model managed by the service server 110 to and from the service server 110. The at least one client device 130 is equipment used by a user; it transmits information input by the user to the service server 110 and stores or provides (e.g., displays) information received from the service server 110 to the user. Depending on the situation, a prediction operation may be performed based on data transmitted from one client device, and information on the prediction result may be provided to another client device. The at least one client device 130 may be a computing device of various forms, such as a desktop computer, a laptop computer, a smartphone, a tablet PC, or a wearable device.
Although not illustrated in
As described with reference to
Referring to
The communication unit 210 accesses a network and performs a function for communicating with another device. The communication unit 210 supports at least one of wired communication and wireless communication. For communication, the communication unit 210 may include at least one of a radio frequency (RF) processing circuit and a digital data processing circuit. In some cases, the communication unit 210 may be understood as a component including a terminal for connecting a cable. Since the communication unit 210 is a component for transmitting and receiving data and signals, the communication unit 210 may be referred to as a ‘transceiver’.
The storage unit 220 stores the data, programs, microcode, instruction sets, and applications necessary to operate the device. The storage unit 220 may be embodied as a transitory or non-transitory storage medium. In addition, the storage unit 220 may be embodied in a form fixed in the device or in a separable form. For example, the storage unit 220 may be embodied as at least one of a NAND flash memory, such as a compact flash (CF) card, a secure digital (SD) card, a memory stick, a solid-state drive (SSD), or a micro SD card, or a magnetic computer memory device such as a hard disk drive (HDD).
The controller 230 controls an overall operation of a device. To this end, the controller 230 may include at least one processor and at least one microprocessor. The controller 230 may execute a program stored in the storage unit 220 and access a network via the communication unit 210. Particularly, the controller 230 may execute algorithms according to various embodiments described below and control a device to operate according to the embodiments described below.
Based on a structure described with reference to
Modeled after the neurons of living organisms, a perceptron has a structure that outputs a single signal from a plurality of input signals.
As illustrated in
When prediction is performed, input data provided to each node of the input layer 402 is forward-propagated to the output layer 406 through the input layer 402, weight application by the perceptrons constituting the hidden layers 404a and 404b, a transfer function operation, and an activation function operation. On the other hand, when training is performed, an error may be calculated through backward propagation from the output layer 406 to the input layer 402, and the weights defined in each perceptron may be updated according to the calculated error.
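The forward and backward propagation described above may be sketched with a minimal fully connected network in NumPy; the layer sizes, learning rate, and toy data below are illustrative assumptions, not part of the invention:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 2 inputs -> 3 hidden units -> 1 output (one hidden layer).
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)   # hidden-layer perceptrons: weighted sum + activation
    y = sigmoid(h @ W2 + b2)   # output layer
    return h, y

x = np.array([[1.0, 0.0]])     # a single toy input
t = np.array([[1.0]])          # its target output

h, y = forward(x)
# Backward propagation: the output error is propagated toward the input
# layer through the sigmoid derivatives, and each weight is updated.
d2 = (y - t) * y * (1 - y)
d1 = (d2 @ W2.T) * h * (1 - h)
lr = 0.5
W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
W1 -= lr * x.T @ d1; b1 -= lr * d1.sum(axis=0)

_, y_new = forward(x)          # the squared error should now be smaller
```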
The present invention proposes a technique for detecting medical indices from medical images. Specifically, the present invention proposes a technique for deriving clinically important indices by more robustly segmenting multiple structures (e.g., left atrium, left ventricle, right atrium, right ventricle, inner wall, etc.) in a 2D ultrasonic image used to evaluate or diagnose a patient's heart shape, structure, or function, and by detecting major landmarks (e.g., apex, annulus, etc.).
Echocardiography is a primary and essential examination that is used daily; it is non-invasive and may be performed at low cost. Echocardiography allows structural and functional evaluation of the heart, and secondary examinations may be performed subsequently if additional examinations are required. Due to these features, constructing data from echocardiography images is relatively easier than with other medical equipment-based methods. However, since the manpower and time costs of the medical experts involved in the analysis process are high, echocardiography images are data whose usefulness has the potential to increase further.
In the case of echocardiographic data, a subdivided name is assigned based on the location where the heart was imaged, according to the recommended standards of the American Heart Association, and the echocardiography analysis method also differs according to that name. Therefore, an expert in echocardiography analysis is required to quantify echocardiographic data. Recently, in order to solve this problem, studies on automating echocardiography analysis have been conducted, and studies on segmenting the structural location of the left ventricle using artificial intelligence techniques have been reported.
Although a number of techniques for segmenting specific cardiac structures using artificial intelligence have been introduced, techniques for detecting clinically meaningful landmarks are far fewer than segmentation techniques, and most of them operate only on specific image views (e.g., A2CH, A3CH, A4CH, PSAX, PLAX (parasternal long axis), etc.). For this reason, views that depict the same structure but look different from each other cannot be handled, which may cause performance degradation.
Even if the same heart structure is included in an image, the image has different characteristics depending on the view. For example, A2CH includes only the left atrium and left ventricle, whereas A4CH includes not only the left atrium and left ventricle but also the right atrium and right ventricle. As such, A2CH and A4CH represent the same heart; however, for more robust landmark detection from each of the views representing different sets of organs, as shown in
Referring to
Referring to
Guidelines for measuring various medical indices in echocardiography images are defined by the American Society of Echocardiography (ASE). In order to measure medical indices according to the guidelines of the ASE, the clinically specific landmarks defined by the ASE are needed. At this time, in order to detect specific landmarks in a patient's echocardiography images that need to be analyzed, human intervention is unavoidable, which takes time and money. Accordingly, the present invention proposes a technique for measuring medical indices without human intervention. Through the technology proposed in the present invention, specific landmarks may be automatically detected in a patient's echocardiography images that need to be analyzed, and it is expected that the required time and cost will be greatly reduced compared to the conventional method.
Referring to
In step S703, the device generates at least one reference line based on the segmentation result. The reference line may be generated based on the regions obtained by segmentation. For example, the reference line may be determined based on a relative position between the regions, a point based on a positional relationship between the regions, a boundary line of each region, and the like.
In step S705, the device determines at least one medical index based on the at least one reference line. For example, the at least one medical index may include a length of at least one generated reference line, a value determined based on the at least one reference line, and the like.
Although not shown in
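As a non-limiting illustration, the three steps S701 to S705 may be organized as follows; `segment_regions` is a stub standing in for a trained segmentation model, and all function names and the `mm_per_pixel` scale are hypothetical:

```python
import numpy as np

# Hypothetical stand-ins for steps S701/S703/S705; a real system would use
# a trained segmentation model instead of the stub below.
def segment_regions(image):
    """Step S701: return a label map (0 = background, 1..N = heart regions)."""
    labels = np.zeros(image.shape, dtype=int)
    labels[8:24, 8:24] = 1      # stub: pretend one region (e.g., the LV) was found
    return labels

def generate_reference_lines(labels):
    """Step S703: derive reference lines from the segmented regions."""
    ys, xs = np.nonzero(labels == 1)
    row = int(ys.mean())        # a horizontal line through the region's centre
    return [((row, xs.min()), (row, xs.max()))]

def measure_indices(lines, mm_per_pixel=1.0):
    """Step S705: here the medical index is simply a reference-line length."""
    (y0, x0), (y1, x1) = lines[0]
    return {"line_0_length_mm": float(np.hypot(y1 - y0, x1 - x0)) * mm_per_pixel}

image = np.zeros((32, 32))
indices = measure_indices(generate_reference_lines(segment_regions(image)))
```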
Referring to
In step S903, the device detects a vertical line for the IVS. The device detects a line perpendicular to the boundary line of the IVS at a point where the center line of the IVS and the boundary line of the IVS meet. In other words, the device detects a line perpendicular to the boundary line of the IVS at the point where the center line detected in step S901 and the boundary line of the region corresponding to the IVS meet. Here, the vertical line may penetrate the inside of the region corresponding to the LV. For example, the vertical line for IVS is line 1316 in
In step S905, the device detects a diameter of the LV. That is, the device may detect the diameter of the LV by measuring the length of the vertical line detected in step S903. That is, the vertical line detected in step S903 is one of the medical indices and may be used to measure the diameter of the LV.
In step S907, the device detects a vertical line in the LVPW. That is, the device detects a line perpendicular to the boundary line at the point where the vertical line detected in step S903 and the boundary line of the LVPW meet. The vertical line may penetrate the inside of the region corresponding to the LVPW. For example, the vertical line within the LVPW is line 1314b in
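The perpendicular-line constructions of steps S903 and S907 rest on simple plane geometry; a hedged sketch, using a toy rectangular LV mask in place of a real segmentation result (the function names and the marching scheme are illustrative), might look like this:

```python
import numpy as np

def perpendicular_through(direction):
    """Unit vector perpendicular to `direction` (rotated by -90 degrees);
    in general the sign is chosen so that the normal points into the region."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    return np.array([d[1], -d[0]])

def chord_length(mask, start, normal, step=0.5):
    """March from `start` along `normal` and accumulate the in-mask chord,
    i.e. the diameter of the region along that perpendicular line."""
    p = np.asarray(start, dtype=float)
    length = 0.0
    while True:
        p = p + step * normal
        y, x = int(round(p[0])), int(round(p[1]))
        if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]) or not mask[y, x]:
            return length
        length += step

# Toy LV mask: a filled band of 20 rows; the IVS boundary runs along row 9,
# left to right, so its direction is (0, 1) in (y, x) coordinates.
lv = np.zeros((40, 40), dtype=bool)
lv[10:30, :] = True
normal = perpendicular_through((0, 1))                     # points into the band
diameter = chord_length(lv, start=(9, 20), normal=normal)  # the LV diameter
```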
Referring to
In step S1003, the device detects a parallel line for a junction of the aorta and the LV. The junction of the aorta and the LV refers to a line or at least one point where the boundary line of the aorta and the boundary line of the LV obtained by segmentation overlap. The parallel line is an overlapping line, that is, a line parallel to the line corresponding to the junction and including the closest point detected in step S1001. That is, the parallel line starts from the closest point and passes through the inside of the region corresponding to the aorta. For example, the parallel line may be line 1318 in
In step S1005, the device determines an aortic line. Here, the aortic line means the parallel line detected in step S1003. The length of the aortic line may be one of the medical indices to be measured.
In step S1101, the device detects a sinus point from the aorta. That is, the device detects a sinus point on the boundary line of the aorta. The sinus point refers to a point having predefined features among the points on the boundary line of the aorta. For example, in an echocardiography image of the PLAX view, the sinus point may be the highest point of a part of the aortic boundary line that bulges upward from its upper end, or the lowest point of a part that bulges downward from its lower end.
In step S1103, the device determines a vertical line passing through the LA. One end of the vertical line coincides with the sinus point detected in step S1101. According to an embodiment, the vertical line may be a line perpendicular to the LA at the sinus point and penetrating the inside of a region corresponding to the LA. According to another embodiment, the vertical line may be a line including a point closest to the sinus point on the boundary line of the region corresponding to the LA, perpendicular to the center line of the long axis of the LA, and penetrating the inside of the region corresponding to the LA. According to another embodiment, the vertical line may be a line including a point closest to the sinus point on the boundary line of the region corresponding to the LA, parallel to the vertical axis of the medical image, and penetrating the inside of the region corresponding to the LA. For example, the vertical line may be line 1328 in
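The "point closest to the sinus point on the boundary line" used above may be found by a straightforward nearest-point search; the boundary samples and sinus coordinates below are made up for illustration:

```python
import numpy as np

def closest_contour_point(contour, point):
    """Return the contour sample nearest to `point` (Euclidean distance)."""
    contour = np.asarray(contour, dtype=float)
    d = np.linalg.norm(contour - np.asarray(point, dtype=float), axis=1)
    return contour[np.argmin(d)]

# Toy LA boundary samples in (y, x) order and a hypothetical sinus point.
la_boundary = [(20, 10), (22, 14), (25, 18), (28, 22)]
sinus = (21, 13)
start = closest_contour_point(la_boundary, sinus)   # one end of the reference line
# The reference line then runs from `start` parallel to the image's
# vertical axis, i.e. along direction (1, 0) in (y, x) coordinates.
```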
In step S1201, the device detects a junction of the IVS and the aorta. The junction of the IVS and the aorta refers to a line or at least one point where the boundary line of the IVS and the boundary line of the aorta obtained by segmentation overlap.
In step S1203, the device determines a vertical line passing through the RV. One end of the vertical line coincides with a point on the junction detected in step S1201. According to an embodiment, the vertical line may be a line parallel to the vertical axis of the image. According to another embodiment, the vertical line may be a line perpendicular to the center line of the long axis of the RV. For example, the vertical line may be line 1312 in
In the embodiment described with reference to
According to various embodiments described above, at least one medical index may be determined based on the segmentation result of a medical image. For example, examples of medical indices that may be measured from echocardiography of the PLAX view are shown in [Table 1] below.
The medical indices shown in [Table 1] are examples of measurement values that may be directly obtained from the length of the generated at least one reference line. Additionally, at least one of the medical indices shown in [Table 2] below may be further obtained based on measurement values that may be obtained from the length of at least one reference line.
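Since the tables are not reproduced here, one hedged example of an index derived from directly measured reference-line lengths is left-ventricular fractional shortening, computed from the end-diastolic and end-systolic LV internal diameters (the input values below are illustrative):

```python
def fractional_shortening(lvid_d_mm, lvid_s_mm):
    """LV fractional shortening (%) derived from two directly measured
    reference-line lengths: the end-diastolic and end-systolic LV
    internal diameters (LVIDd, LVIDs)."""
    return (lvid_d_mm - lvid_s_mm) / lvid_d_mm * 100.0

fs = fractional_shortening(lvid_d_mm=48.0, lvid_s_mm=30.0)  # 37.5 %
```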
As described above, at least one medical index may be determined from an echocardiography image of the heart. Embodiments for determining medical indices based on the above-described segmentation result, that is, segmentation contours of regions, may also be applied to other types of images. In other words, if segmentation may be performed on a given image, a medical index may be derived based on at least one reference point and at least one reference line. An example of at least one reference point and at least one reference line is shown in
Point a 1411 is the center point of the first region 1410. Here, the center point means the centroid computed from the image moments of the corresponding region. Point b 1412 is a point that bisects the left boundary line of the first region 1410. Here, one end of the left boundary line is a second left point 1413 where the boundary between the first region 1410 and the second region 1420 separates at the left side of the first region 1410, and the other end of the left boundary line is a first left point 1414 where the boundary between the first region 1410 and the third region 1430 separates. A transversal line 1415 passing through point b 1412 and crossing the first region 1410 may be drawn, and an orthogonal line 1416 orthogonal to the transversal line 1415 may be drawn. Here, at least a part of the orthogonal line 1416 may be used as a reference line.
Point c 1421 is a point of contact between the orthogonal line 1416 and the segmentation contour of the second region 1420. An orthogonal line 1422 passing through point c 1421 and orthogonal to the long axis 1423 of the second region 1420 may be drawn, and at least a part of the orthogonal line 1422 may be used as a reference line. That is, a reference line within a corresponding region (e.g., the second region 1420) may be determined in a manner extending from a reference line generated in an adjacent region (e.g., the first region 1410).
Point d 1431 is a point of contact between the orthogonal line 1416 and the segmentation contour of the third region 1430. An orthogonal line 1432 passing through point d 1431 and orthogonal to the long axis 1433 of the third region 1430 may be drawn, and at least a part of the orthogonal line 1432 may be used as a reference line. That is, a reference line within a corresponding region (e.g., the third region 1430) may be determined in a manner extending from a reference line generated in an adjacent region (e.g., the first region 1410).
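The center point of a region (such as point a 1411) may be computed from the image moments of its segmentation mask; a minimal sketch, assuming a binary mask as input:

```python
import numpy as np

def region_center(mask):
    """Centre point of a binary region: the centroid (m01/m00, m10/m00)
    obtained from the image moments of the mask, in (y, x) order."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:9] = True          # rows 2..5, columns 3..8
cy, cx = region_center(mask)   # centroid of the rectangle
```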
According to various embodiments described above, medical indices may be automatically extracted and provided based on the segmentation result of the medical image. To this end, it is required to perform segmentation on a given medical image. Segmentation of the image is a type of image analysis task, and means an operation of detecting or extracting a region (e.g., a set of pixels or voxels) representing a target of interest among objects expressed in an image. In general, since an image obtained through photographing expresses other objects in addition to the object of interest, it is required to distinguish the object of interest from other objects through image segmentation. Image segmentation may be performed on a single image (e.g., using a model including LSTM and U-net as shown in
First, as an artificial intelligence model that may be used for segmentation, the structure of a long short-term memory (LSTM) network is as follows. A recurrent neural network (RNN) is an artificial neural network with a structure that determines the current state by using past input information; it keeps using information obtained in previous steps through an iterative structure. The LSTM network, a type of RNN, was proposed to control long-term dependency and likewise has an iterative structure. The LSTM network has a structure as in
Referring to
Referring to
The sigmoid network 1512a functions as a forget gate. The sigmoid network 1512a applies a sigmoid function to a weighted sum of a hidden state value ht−1 of a hidden layer of a previous time and input xt of a current time and then provides a result value to the multiplication operator 1516a. The multiplication operator 1516a multiplies the result value of the sigmoid function by a cell memory value Ct−1 of the previous time. Thus, the LSTM network may determine whether or not to forget a memory value of the previous time. That is, the output value of the sigmoid network 1512a indicates how much of the cell memory value Ct−1 of the previous time is to be maintained.
The sigmoid network 1512b and the tanh network 1514 function as an input gate. The sigmoid network 1512b applies a sigmoid function to a weighted sum of a hidden state value ht−1 of a previous time t−1 and input xt of a current time t and then provides a result value it to the multiplication operator 1516b. The tanh network 1514 applies a tanh function to a weighted sum of a hidden state value ht−1 of a previous time t−1 and input xt of a current time t and then provides a result value {tilde over (C)}t to the multiplication operator 1516b. The result value it of the sigmoid network 1512b and the result value {tilde over (C)}t of the tanh network 1514 are multiplied by the multiplication operator 1516b and then provided to the addition operator 1510. Thus, the LSTM network may determine how much of the input xt of the current time is to be reflected in the cell memory value Ct of the current time and perform scaling according to that determination. The cell memory value Ct−1 of the previous time, multiplied by the forget coefficient, and it·{tilde over (C)}t are added up by the addition operator 1510. Thus, the LSTM network may determine the cell memory value Ct of the current time.
The sigmoid network 1512c, the tanh network 1514b, and the multiplication operator 1516c function as an output gate. The output gate outputs a filtered value based on the cell state of the current time. The sigmoid network 1512c applies a sigmoid function to a weighted sum of a hidden state value ht−1 of a previous time t−1 and input xt of a current time t and then provides a result value ot to the multiplication operator 1516c. The tanh network 1514b applies a tanh function to the cell memory value Ct of the current time t and then provides a result value to the multiplication operator 1516c. The multiplication operator 1516c generates a hidden state value ht of the current time t by multiplying the result value of the tanh network 1514b and the result value of the sigmoid network 1512c. Thus, the LSTM network may determine how much of the cell memory value of the current time is to be maintained in the hidden layer.
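The gate computations described above may be summarized in a single LSTM time step; the following sketch mirrors the forget/input/output gate structure, with illustrative dimensions and randomly initialized weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step with the gate structure described above: each gate
    applies its activation to a weighted sum of [h_prev, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(p["Wf"] @ z + p["bf"])    # forget gate: keep part of C_{t-1}
    i = sigmoid(p["Wi"] @ z + p["bi"])    # input gate: scale the candidate
    c_tilde = np.tanh(p["Wc"] @ z + p["bc"])
    c_t = f * c_prev + i * c_tilde        # new cell memory C_t
    o = sigmoid(p["Wo"] @ z + p["bo"])    # output gate
    h_t = o * np.tanh(c_t)                # new hidden state h_t
    return h_t, c_t

rng = np.random.default_rng(0)
H, X = 4, 3                               # illustrative dimensions
p = {k: rng.normal(scale=0.1, size=(H, H + X)) for k in ("Wf", "Wi", "Wc", "Wo")}
p.update({k: np.zeros(H) for k in ("bf", "bi", "bc", "bo")})
h, c = np.zeros(H), np.zeros(H)
for x_t in rng.normal(size=(5, X)):       # run the cell over five time steps
    h, c = lstm_step(x_t, h, c, p)
```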
The LSTM model has various variants.
Referring to
As another example, the device may obtain an ultrasonic image and label of the lung. The ultrasonic image of the lung is part of a time-series of images and may have the periodicity of expiration and inspiration. The label may indicate at least one boundary in the ultrasonic image of the lung. The present invention may be applied to segmentation of time-series images having periodicity and is not limited to the above-described embodiment. Lung ultrasound is a diagnostic method with high sensitivity. For example, through lung ultrasound, the device may distinguish pneumonia, atelectasis, tumor, diaphragmatic elevation, and pleural effusion, unlike with a chest X-ray taken at a patient's bedside. The chest X-ray is routinely performed to evaluate the occurrence of pneumothorax or hydrothorax after procedures such as central vein catheterization. However, the sensitivity of chest X-ray for detecting the small amounts of pneumothorax or hydrothorax that occur after such a procedure is very low. In contrast, lung ultrasound may enable diagnosis of very small amounts of pneumothorax, consolidation, pulmonary edema, atelectasis, pneumonia, and pleural effusion.
In step S1603, the device may obtain a propagated heart image and a propagated label based on the obtained heart image. The device may obtain the propagated heart image and the propagated label using the heart image obtained in step S1601 and an image having a time-series relationship with that heart image. Specifically, the device may estimate a motion vector field between the heart image obtained in step S1601 and the image having the time-series relationship with the heart image. For example, the device may estimate the motion vector field based on an artificial intelligence model. Here, the device may estimate the motion vector field using a CNN. Alternatively, the device may measure motion based on optical flow. As an example, the device may measure motion based on LiteFlowNet. The device may obtain the propagated heart image and the propagated label based on the motion vector field. That is, the device may augment the heart images. Accordingly, based on a reference image, which is one of the images constituting a cardiac cycle, the device may augment the images included in that cycle. In this way, the device may obtain a dataset sufficient for time-series data modeling.
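As an illustrative sketch of the propagation step (assuming a dense motion vector field in a backward-warping convention; the function and variable names are hypothetical, and nearest-neighbor sampling is used so that label values stay discrete):

```python
import numpy as np

def propagate_label(label, flow):
    """Warp a segmentation label with a motion vector field (backward warping).

    label: (H, W) integer mask at the reference time.
    flow:  (H, W, 2) motion vectors (dy, dx) mapping each target pixel back
           to its source pixel in the reference frame.
    """
    H, W = label.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Round to the nearest source pixel and clamp to the image bounds.
    src_y = np.clip(np.rint(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xs + flow[..., 1]).astype(int), 0, W - 1)
    return label[src_y, src_x]
```

The same warping may be applied to the reference image itself to obtain the propagated heart image.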
In step S1605, the device may train an artificial intelligence model including time-series data modeling. Specifically, the device may train the artificial intelligence model based on the obtained heart image and label, and on the propagated heart image and propagated label. For example, the artificial intelligence model may be based on LSTM.
According to the embodiment described with reference to
According to the embodiments described with reference to
Meanwhile, the pseudo image and the pseudo GT generated based on the motion estimation value of the image may have a shape different from that of the real heart. Accordingly, weights may be applied so that the pseudo data contribute less than the real images and real GTs when the network parameters are updated during learning. The generated pseudo image and pseudo GT may become more distorted as the motion between the real image and the real GT increases. That is, the larger the time difference within one cardiac cycle, the more difficult it is to estimate the cardiac motion. Accordingly, the weight may decrease as the value of the time difference r from the image serving as the reference for pseudo image creation increases. That is, the weight may decrease as the distance from the reference image increases. Here, the weight may be determined as an experimental value. In addition, in the cardiac cycle, the pulse rate per minute may be, for example, 60 to 100 in the case of a normal person, 60 or less in the case of a bradycardia patient with a slow heart rate, and 100 or more in the case of a tachycardia patient with a fast heart rate.
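The decreasing weight may, for example, be sketched as an exponential decay in the time difference from the reference image. The exponential form and the decay constant below are assumptions for illustration; the description above only requires that the weight decrease as the time difference increases and that the constant be chosen experimentally:

```python
import math

def propagation_weight(tau, decay=0.5):
    """Weight for a pseudo image/GT generated tau frames from the reference.

    Decreases monotonically as |tau| grows, reflecting that motion
    estimation (and hence the pseudo pair) degrades with larger time
    differences within a cardiac cycle. `decay` is an experimental value.
    """
    return math.exp(-decay * abs(tau))
```

During training, such a weight may scale the loss contribution of each pseudo image/GT pair, while real pairs keep a weight of 1.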
Referring to
In step S1803, the device may perform segmentation on the obtained heart image using an artificial intelligence model including time-series data modeling. According to an embodiment, the device may use time-series images (e.g., n images from a time t to a time t+n−1) as input data and generate a segmentation result for an image at one of the time t to the time t+n−1, a time after the time t+n−1, or a time before the time t. Here, the artificial intelligence model may be in a trained state based on the propagated heart image, the propagated label, and the weight, as in the procedure described with reference to
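The windowed time-series input described above may be sketched as follows (a minimal illustration; the function name is hypothetical):

```python
def sliding_windows(frames, n):
    """Group a time-series of frames into overlapping windows of length n.

    Each window (frames t .. t+n-1) forms one input sequence for the
    segmentation model, which emits a mask for one time inside (or just
    outside) that window.
    """
    return [frames[t:t + n] for t in range(len(frames) - n + 1)]
```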
Heart images are 2D images that change over time. To efficiently augment heart images and annotated masks, the device may measure a cardiac motion vector field and propagate reference data. Here, the term 'propagate' refers to an image conversion method and is distinct from propagation in a neural network. For example, the device may measure the cardiac motion vector field based on a convolutional neural network (CNN).
The device may generate pairwise data composed of a label and an image. For example, the device may generate pairwise data composed of an image without a ground-truth label and a propagated label. The device may augment this data. An image without a ground-truth label at a time point separated by m frames from the reference image may be expressed as Ii+m. The m-th augmented propagated label may be expressed as LPi+m.
The boundary between an image without an augmented ground-truth label and an augmented propagated label may not match. For example, the boundaries may not match due to errors in the motion vector field. In addition, as the number of propagation steps increases, the boundary error between the image without the augmented label and the augmented propagated label may increase. Referring to
The device may reduce the error of the motion vector field by propagating the image as well as the label. In addition, the device may augment accurate data by making the propagation step of the label equal to the propagation step of the image.
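The equal-step propagation of the image and the label may be sketched as follows (both are warped through the same chain of motion vector fields so that they undergo an identical number of propagation steps; nearest-neighbor sampling is used for both for simplicity, though in practice the image may be warped with bilinear interpolation):

```python
import numpy as np

def propagate_pair(image, label, flows):
    """Propagate an image and its label through the same chain of motion
    vector fields, one step per field.

    flows: list of (H, W, 2) fields, each mapping target pixels back to
    the previous frame (backward-warping convention). Applying identical
    warps keeps the image/label pair mutually consistent.
    """
    H, W = label.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    for flow in flows:
        src_y = np.clip(np.rint(ys + flow[..., 0]).astype(int), 0, H - 1)
        src_x = np.clip(np.rint(xs + flow[..., 1]).astype(int), 0, W - 1)
        image = image[src_y, src_x]
        label = label[src_y, src_x]
    return image, label
```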
Referring to
The exemplary methods of the present invention are represented as a series of operations for clarity of description, but this is not intended to limit the order in which the steps are performed, and each step may be performed simultaneously or in a different order, if necessary. To implement a method according to the present invention, further steps may be added to the illustrated steps, some of the illustrated steps may be omitted, or additional steps may be included while some of the illustrated steps are omitted.
Various embodiments of the present invention are not intended to enumerate all possible combinations, but to describe a representative aspect of the present invention, and the matters described in the various embodiments may be applied independently or in combination of two or more.
In addition, various embodiments of the present invention may be realized by hardware, firmware, software, or a combination thereof. In the case of hardware realization, the embodiments may be realized by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general processors, controllers, microcontrollers, microprocessors, etc.
The scope of the present invention includes software or machine-executable commands (e.g., operating systems, applications, firmware, programs, etc.) that allow an operation according to a method of various embodiments to be performed on a device or computer, and a non-transitory computer-readable medium in which such software or commands are stored and executed on the device or computer.
Number | Date | Country | Kind |
---|---|---|---|
10-2021-0026361 | Feb 2021 | KR | national |
10-2021-0139303 | Oct 2021 | KR | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/KR2022/002740 | 2/24/2022 | WO |