METHOD FOR GENERATING TRAINING DATA AND FOR TRAINING A DEEP LEARNING ALGORITHM FOR DETECTION OF A DISEASE INFORMATION, METHOD, SYSTEM AND COMPUTER PROGRAM FOR DETECTION OF A DISEASE INFORMATION

Information

  • Patent Application
  • Publication Number: 20230267714
  • Date Filed: February 21, 2023
  • Date Published: August 24, 2023
Abstract
A method for generating training data for training a deep learning algorithm, comprising: receiving medical imaging data of an examination area including a first part and a second part of a symmetric organ; splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset, wherein the first dataset includes the medical imaging data of the first part and the second dataset includes the medical imaging data of the second part; mirroring the second dataset along the symmetry plane or the symmetry axis; generating the training data by stacking the first dataset and the mirrored second dataset; and providing the training data.
Description
CROSS-REFERENCE TO RELATED APPLICATION(S)

The present application claims priority under 35 U.S.C. § 119 to European Patent Application No. 22158204.2, filed Feb. 23, 2022, the entire contents of which are incorporated herein by reference.


FIELD

Embodiments of the present invention generally relate to the generation of training data for a deep learning algorithm to detect disease information, especially for a classification, localization and/or characterization of the disease information. The disease information is for example cerebral disease information for a cerebral disease, e.g., a large vessel occlusion. Embodiments of the present invention further generally relate to a method for training a deep learning algorithm to detect disease information with training data, to a method to detect disease information and to a computer program.


Embodiments of the present invention generally concern the field of medical imaging and medical image analysis. Medical imaging can be performed with a variety of imaging modalities such as X-ray, ultrasound, magnetic resonance imaging (MRI), computed tomography (CT), PET, SPECT etc., as are known in the art. The imaging is performed with a respective imaging device, also referred to as a scanner. In particular, embodiments of the present invention are in the field of the detection of cerebrovascular diseases by medical imaging. Embodiments of the present invention are especially suited for detecting and/or providing information on cerebrovascular diseases like a stroke and/or a large vessel occlusion.


BACKGROUND

Stroke remains one of the leading causes of death in developed countries. Nearly nine out of ten strokes are ischemic, i.e., they are caused by a thrombus (blood clot) occluding a vessel, thereby preventing part of the brain from being supplied with oxygenated blood. Mechanical thrombectomy (MT), the physical removal of thrombi, has become one of the key treatments for stroke patients with acute cerebral ischemia caused by large vessel occlusions (LVO).


Typically, stroke patients first undergo a non-contrast computed tomography (CT) head scan for initial assessment. In case of an ischemic stroke, it is essential to quickly identify which parts of the brain are affected in order to choose the most promising treatment. However, this is a challenging task, as the early signs of ischemia in non-contrast scans are often subtle. A CT angiography (CTA) scan is usually performed to reliably determine the affected regions, assess the collateral status and localize the thrombus. Supporting experts in their diagnosis with automated software-based detection and localization can save time and thereby help achieve earlier recanalization.


In particular, a distinction is made between the classification and the localization of disease information, especially of cerebrovascular diseases and/or LVOs. Classification and localization are often summarized as detection. Particularly, detection, classification and/or localization of the disease information means detection, classification and/or localization of the disease, wherein the disease information is for example an anatomic, physical and/or tissue anomaly caused by the disease. In other words, the disease information is configured as, results in and/or comprises a radiological finding. Classification is the task of determining whether a patient is suffering from a disease, especially a cerebral disease. This leads to a two-class problem (e.g., LVO positive, LVO negative). It can be extended to a three-class problem in which the side is predicted as well (left LVO, right LVO, no LVO). An advancement of the classification is the localization. Here the task is to determine the coordinates of the disease information and/or of the LVO in the patient's coordinate system. The localization is a regression task. The localization is much more desirable, as it yields the final piece of information needed to proceed with the treatment of the patient. Furthermore, after localization the radiologist can easily confirm the classification results.


SUMMARY

The detection of disease information, especially of LVOs, faces several challenges. The anatomy of organs, especially of the cerebrovascular vessel tree, is highly individual. Consequently, approaches based on trainable methods, for instance deep learning based techniques, require a large amount of data to achieve an acceptable level of generalization. In addition, the varying amount of injected contrast agent is another degree of freedom in the scans. Furthermore, comparing the left and right hemisphere is of crucial importance in the detection of cerebral disease information, especially of LVOs. Yet it is difficult to integrate such prior knowledge into trainable architectures for the detection of cerebral disease information.


An underlying technical problem of one or more embodiments of the present invention is to improve the automated detection, classification and/or localization of disease information based on a deep learning algorithm. Furthermore, a technical problem of embodiments of the present invention is to provide the trained deep learning algorithm and, especially, to provide training data for the training. The problems are solved by the subject matter of the embodiments and/or the independent claims. The dependent claims are related to further aspects of embodiments of the present invention.


In the following, the solution according to embodiments of the present invention is described with respect to the claimed methods as well as with respect to the claimed computer program and system. Features, advantages or alternative embodiments herein can be assigned to the other claimed objects and vice versa. In other words, claims for the method can be improved with features described or claimed in the context of the computer program and system. In this case, the functional features of the method are embodied by objective units of the system. Furthermore, features of the different methods (generating training data, training of the deep learning algorithm and detection of the cerebral disease information) can be interpreted based on the features and description of any of the methods. Especially, any of the different methods can be improved with features of any other of the methods.


In one aspect, an embodiment of the present invention relates to a method for generating training data for training a deep learning algorithm, comprising:

    • Receiving medical imaging data of an examination area of a patient, the examination area of the patient comprising a first and a second part of a symmetric organ,
    • Splitting of the medical imaging data along a symmetry plane or symmetry axis into a first dataset and a second dataset, wherein the first dataset comprises the medical imaging data of the first part of the organ and the second dataset comprises the medical imaging data of the second part of the organ,
    • Mirroring of the second dataset along the symmetry plane or symmetry axis,
    • Generating of the training data by stacking of the first dataset and the mirrored second dataset,
    • Providing the training data.


The medical imaging data are for example tomography imaging data. The medical imaging data can comprise computed tomography imaging data and/or magnetic resonance imaging data. Preferably, the medical imaging data comprise non-contrast computed tomography imaging data and/or computed tomography angiography (CTA) data, in particular digital subtraction computed tomography angiography data. The CTA data can be related to a single phase or to multiple phases. The medical imaging data can further comprise computed tomography perfusion data and/or spectral computed tomography imaging data. Spectral computed tomography imaging data can be based, for example, on dual-energy, in particular dual-source, computed tomography imaging and/or photon-counting computed tomography imaging.


The examination area comprises at least a body part comprising the symmetric organ. The symmetric organ is for example a brain, the kidneys, a lung, a female breast or a male breast. The symmetric organ comprises a first part and a second part, wherein the first and the second part are preferably two halves of the organ. In particular, the first and the second part of the organ are symmetric with respect to the symmetry plane. The symmetric organ is for example a brain, wherein the first part and the second part are the two hemispheres of the brain. The symmetric organ can be the two kidneys of a person, wherein the first part is one kidney and the second part is the other kidney. For a lung as the symmetric organ, each of the two parts is a lobe of the lung. The symmetry plane is especially a sagittal plane. The symmetry plane can be a hyperplane, for example a curved or displaced plane. The symmetry plane can be chosen based on an atlas of the organ. Alternatively, the symmetry plane can be chosen freely or independently of an atlas. Particularly, the first part is a left part and/or a left cerebral hemisphere, and the second part is a right part and/or a right cerebral hemisphere. Alternatively, the first part is the right part and the second part is the left part. Data, images and/or models of the second part can be transferred into data, images and/or models appearing as, optically looking like and/or modelling a first part, e.g., by applying a transformation, especially a geometry-based transformation. Preferably, the transformation is based on mirroring, flipping, rotating, elastic deformation, augmentation and/or test time augmentation. For example, the data or images of the second cerebral hemisphere, e.g., the right cerebral hemisphere, can be mirrored on the sagittal plane, whereby the mirrored data and/or images appear like data of a first (left) cerebral hemisphere.


A symmetric organ and three-dimensional image data of such a symmetric organ have a high degree of symmetry with respect to a certain plane, the symmetry plane. Certain two-dimensional images or slices of the symmetric organ have a high degree of symmetry with respect to a symmetry axis. The degree of symmetry may differ for different two-dimensional images or slices of the symmetric organ. The symmetry plane may be defined by the plane which results in the highest degree of symmetry of the two parts of the symmetric organ or three-dimensional image data of such symmetric organ. The symmetry axis may be defined by the axis which results in the highest degree of symmetry of the two parts of the symmetric organ or the two-dimensional images or slices of such symmetric organ. Especially, paired organs are understood as symmetric organs. Symmetric organs are for example parotid gland, tonsils, nasal cavity, lungs, pleura, bones of extremities, ribs and clavicles, coccyx and pubis, pelvic bones, skin of eyelid, including canthus, skin of ear and external auditory canal, including shoulder and hip, breast, ovary, fallopian tubes, testis, epididymis, kidneys, renal pelvis, ureters, eyes, adrenals.
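The search for the plane with the highest degree of symmetry can be illustrated with a minimal Python sketch. The sketch assumes the first axis of the volume runs left-right; the correlation measure and the candidate window of ±10 voxels around the centre are assumptions for illustration only:

```python
import numpy as np

def degree_of_symmetry(volume: np.ndarray, x: int) -> float:
    """Correlation between the left half and the mirrored right half
    when the volume is split at sagittal index x."""
    w = min(x, volume.shape[0] - x)  # widest comparable slab on both sides
    if w == 0:
        return -1.0
    left = volume[x - w:x]
    right_mirrored = volume[x:x + w][::-1]  # flip along the split axis
    return float(np.corrcoef(left.ravel(), right_mirrored.ravel())[0, 1])

def find_symmetry_plane(volume: np.ndarray) -> int:
    """Pick the sagittal index with the highest degree of symmetry,
    searched in a small window around the volume centre."""
    centre = volume.shape[0] // 2
    candidates = range(centre - 10, centre + 11)
    return max(candidates, key=lambda x: degree_of_symmetry(volume, x))
```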


Preferably, the method comprises receiving organ atlas data. The organ atlas data are for example brain atlas data. The method may also comprise generating registered imaging data based on the medical imaging data and the organ atlas data. The organ atlas data are used for registration. The organ atlas data can be probabilistic brain atlas data. The generating of the registered imaging data can comprise a segmenting of a plurality of organ regions, especially brain regions, based on the medical imaging data, thereby generating, for each region of the plurality of regions, a representation of that region. The generating of the registered imaging data can comprise a scaling of the medical imaging data and/or a rigid pre-registration of the medical imaging data. For embodiments comprising receiving at least one channel (e.g., from different scanning principles) and/or comprising the calculation of a channel based on the medical imaging data, the channel data comprised by the channel are preferably also registered based on the organ atlas data. The registration is especially used for registering the medical imaging data and/or the channel data into a reference coordinate system and for standardizing the imaging data to a canonical pose.
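A minimal sketch of such an atlas registration, here using the SimpleITK library with a rigid Euler transform and a mutual-information metric (the specific metric, optimizer and parameter values are illustrative assumptions, not part of the disclosure):

```python
import SimpleITK as sitk

def register_to_atlas(image: sitk.Image, atlas: sitk.Image) -> sitk.Image:
    """Rigidly register a scan to an organ atlas and resample it into
    the atlas (reference) coordinate system, i.e. a canonical pose."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    initial = sitk.CenteredTransformInitializer(
        atlas, image, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(sitk.Cast(atlas, sitk.sitkFloat32),
                            sitk.Cast(image, sitk.sitkFloat32))
    # Resample the scan into the atlas coordinate system
    return sitk.Resample(image, atlas, transform, sitk.sitkLinear, 0.0)
```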


The medical imaging data are split into a first dataset and a second dataset. In a preferred embodiment, the registered imaging data are used for splitting the medical imaging data. The medical imaging data may comprise a three-dimensional image and/or model, wherein the three-dimensional image and/or model is preferably split into two three-dimensional images and/or models. Especially, the medical imaging data, in particular the registered imaging data, are split and/or separated along a sagittal plane and/or a symmetry plane. Preferably, the medical imaging data, in particular the registered imaging data, are split into data of the first and of the second part of the organ, e.g., data of the first and the second cerebral hemisphere. The first dataset comprises the medical imaging data, in particular the registered imaging data, of the first part of the organ and/or information generated based thereon, wherein the second dataset comprises the medical imaging data, in particular the registered imaging data, of the second part of the organ and/or information generated based thereon.


The second dataset is mirrored into the mirrored second dataset. The mirroring is especially a left-right mirroring. The second dataset is based on the second part of the organ and/or represents the second part of the organ. The second dataset is preferably mirrored in such a way that the mirrored second dataset represents and/or appears as a dataset of a first part of the organ. Representing and/or appearing as a first part of the organ especially means optically and/or geometrically looking like a first part of the organ. For example, the second dataset comprises and/or describes a three-dimensional model and/or images of the second cerebral hemisphere, wherein the mirrored second dataset is generated by mirroring the three-dimensional model and/or images on the sagittal plane, and wherein the mirrored three-dimensional model and/or images represent and/or appear as a three-dimensional model and/or images of a first cerebral hemisphere. The mirroring of the second dataset can comprise transformations like flipping, rotating, elastic deformation, augmentation and/or test time augmentation.


The training data are generated by stacking the first dataset and the mirrored second dataset together. Preferably, the training data are configured as a data package comprising the first dataset and the mirrored second dataset. The training data may have a fixed and/or standardized structure, layout and/or setup, e.g., having a stack direction. The stack direction is for example a read-out direction for the training data. Preferably, the training data are configured as a set, vector or tensor. The training data are especially configured as a tensor D of the form D={X1, X2*}, comprising the first dataset X1 and the mirrored second dataset X2*. In other words, the training data are configured as a set of data appearing as and/or representing only first parts of the organ, e.g., first cerebral hemispheres. The training data are examples used to train the deep learning algorithm. In particular, in the generation of the training data, the mirrored second dataset can be used as a first dataset in the training data and/or the first dataset can be used as a mirrored second dataset in the training data. For example, the training data can be generated as the tensor D={X2*, X1}.
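A minimal sketch of the splitting, mirroring and stacking into the tensor D={X1, X2*}, assuming a registered volume whose first axis runs left-right across the symmetry plane (the array layout and helper name are illustrative assumptions):

```python
import numpy as np

def make_stacked_sample(volume: np.ndarray, split_index: int) -> np.ndarray:
    """Build the training tensor D = {X1, X2*}: the first-part data stacked
    with the mirrored second-part data, so both entries appear as first parts.
    Assumes axis 0 of `volume` runs left-right across the symmetry plane."""
    x1 = volume[:split_index]      # first part, e.g. left hemisphere
    x2 = volume[split_index:]      # second part, e.g. right hemisphere
    x2_mirrored = x2[::-1]         # mirror along the symmetry plane
    w = min(x1.shape[0], x2_mirrored.shape[0])  # common width for stacking
    return np.stack([x1[:w], x2_mirrored[:w]], axis=0)  # shape (2, w, H, D)
```

Swapping the two entries of the stack yields the alternative tensor D={X2*, X1} mentioned above.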


Regarding the deep learning algorithm, it preferably is or comprises a deep convolutional neural network (DCNN). The term deep learning algorithm is, according to embodiments of the present invention, also understood as any machine learning algorithm, trained algorithm and/or algorithm based on transfer learning. The deep learning algorithm generally uses input data, especially subject patient image data, to detect the disease information, especially to determine output information comprising the determination and/or detection of the disease information. The output information may comprise a classification, location and/or additional information of the disease and/or of the disease information. The additional information can comprise a size, shape, type of disease, or an anatomic, tissue and/or physical property. For example, in the case of LVOs, the output information may comprise a position (e.g., left, right) and/or size of the LVO. The deep learning algorithm may be an algorithm for computer aided/assisted diagnosis (CAD). The output information, thus, may only serve as decision support and not be a diagnosis itself.


The training data are provided. Providing is for example implemented as outputting the data via a dedicated interface, a display, a printer, a memory, a network or in any other fashion suitable for inspection by a user, storage or further processing.


In summary, embodiments of the present invention provide training data which are based on medical imaging data. Hence, embodiments of the present invention make it possible to provide training data for a deep learning algorithm for the detection of disease information. In other words, embodiments of the present invention provide training data based on medical imaging data of two parts of a symmetric organ, especially of two cerebral hemispheres, wherein in the training data all data and/or datasets appear as, optically look like and/or describe parts of the organ, especially cerebral hemispheres, of one kind (side). Thus, medical imaging data of a second hemisphere can be mirrored into the mirrored second dataset, and such a mirrored second dataset can be used in the training data as a first dataset or as an actual mirrored second dataset. Hence, embodiments of the present invention allow datasets of first and second parts of a symmetric organ, especially of two cerebral hemispheres, of different samples and/or patients to be mixed and/or combined. This allows the generation of more training data based on a given set of medical imaging data.


Particularly, at least one channel comprising channel data is received and/or at least one channel comprising channel data is computed based on the medical imaging data. As a channel, a different scan, for example a multispectral CT scan, a CTA scan or an MRI scan, can be received. Furthermore, based on the medical imaging data, a segmentation of a structure or information regarding anatomic, organic and/or physical features can be computed as the channel.


Particularly, at least one channel is based on windowing, especially windowing applied to and/or based on the medical imaging data. For the channel, particularly, the channel data are registered based on the organ atlas data. In case of a plurality of channels, the channel data of the channels are preferably registered channel-wise. The unregistered channel data or registered channel data of the channel are split into a first channel dataset and a second channel dataset. In case of a plurality of channels, a plurality of first channel datasets and a plurality of second channel datasets are generated. The splitting of the channel data is especially a splitting, e.g., separation and/or partitioning, along the symmetry plane, especially the sagittal plane, or the symmetry axis. Hence, the unregistered or registered channel data are partitioned according to the parts of the symmetric organ, especially according to the cerebral hemispheres. The first channel dataset comprises the unregistered or registered channel data of the first part of the organ, wherein the second channel dataset comprises the unregistered or registered channel data of the second part of the organ. The second channel dataset is mirrored along the symmetry plane, e.g., the sagittal plane, or along the symmetry axis. The mirroring is especially adapted and/or configured as described for the mirroring of the second dataset. The mirrored second channel datasets appear as, optically look like and/or describe channel data of a first part of the organ. The first dataset, the first channel datasets, the mirrored second dataset and the mirrored second channel datasets are stacked together as the training data. Hence, the training data comprise the first dataset, the first channel datasets, the mirrored second dataset and the mirrored second channel datasets. This embodiment generates training data including datasets for at least one additional channel, wherein all the datasets appear as datasets for the first part of the organ.


In particular, a vessel channel comprising vessel data is computed or received, wherein the vessel data are based on a segmentation of a cerebrovascular vessel tree in the medical imaging data. The vessel channel is for example computed using the method described by Thamm et al.: VirtualDSA++: Automated Segmentation, Vessel Labeling, Occlusion Detection and Graph Search on CT-Angiography Data (In VCBM (pp. 151-155)). Additionally and/or alternatively, a bone channel comprising bone data is computed or received, wherein the bone data are for example computed using the method described by Chen et al.: Deep learning based bone removal in computed tomography angiography (US2018116620A1). Optionally, other channels might be used as well, like a non-contrast CT scan.


The training data are preferably configured as an ordered set. The ordered set can be implemented as a package, a vector or a tensor. The ordered set has for example an ordering direction, e.g., the order in which the datasets will be read out by the deep learning algorithm. In the training data, the datasets are ordered and/or arranged such that the first dataset comes before the first channel dataset, especially such that the first channel dataset comes before the mirrored second dataset and such that the mirrored second dataset comes before the mirrored second channel datasets. When describing the structure of the training data and/or the position of the datasets in the training data, the phrases first (channel) dataset and/or mirrored second (channel) dataset do not necessarily imply from which part of the organ, e.g., from which hemisphere, the data originate. E.g., in the training data, the ordering place or tensor entry of a first (channel) dataset may contain, as the first (channel) dataset, an actual mirrored second (channel) dataset.


In case the datasets comprise a collection of cross section images, it is preferred that the arrangement in the training data is sectioned plane wise, e.g. {E1(z0), Ec1(z0), E2*(z0), Ec2*(z0), E1(z1), Ec1(z1), E2*(z1), Ec2*(z1), . . . } wherein Eab(z) denotes the section image at the depth z of the dataset b, wherein a denotes the channel and * denotes mirrored.


In particular, the first dataset, the first channel datasets, the mirrored second dataset and/or the mirrored second channel datasets comprise a three-dimensional imaging dataset of the related part of the organ, e.g. the cerebral hemisphere. In case the datasets comprise the three-dimensional image datasets, it is preferred that the arrangement in the training data is dataset wise, e.g. {V1, Vc1, V2*, Vc2*} wherein Vab denotes the three-dimensional image dataset b, wherein a denotes the channel and * denotes mirrored.
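A minimal sketch of this dataset-wise arrangement {V1, Vc1, V2*, Vc2*}, assuming all datasets are three-dimensional arrays of equal shape (the names are illustrative):

```python
import numpy as np

def stack_with_channels(v1, vc1_list, v2_mirrored, vc2_mirrored_list):
    """Dataset-wise arrangement {V1, Vc1..., V2*, Vc2*...}: the first-part
    image, its channel datasets, then the mirrored second-part image and
    its channel datasets, stacked along a new channel axis."""
    ordered = [v1, *vc1_list, v2_mirrored, *vc2_mirrored_list]
    return np.stack(ordered, axis=0)  # e.g. (6, W, H, D) for two channels
```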


To increase the number of achievable training data based on a limited number of medical imaging data, it is preferred to stack first datasets and mirrored second datasets of different medical imaging data, especially of different patients. Alternatively or additionally, the training data can be a stack of a first dataset and a mirrored second dataset of the same medical imaging data, especially of the same patient. Since all datasets appear as datasets for first parts of the organ, datasets can be mixed and recombined. To increase the achievable number of training data, the idea is to mix first and second parts of the organ from different patients. Thus, for N medical imaging data, N×N combinations of first and second parts become available for training. For example, to generate training data the hemispheres are randomly stacked together, preferably considering whether they exhibit the cerebral disease, i.e., the cerebral disease information, especially an LVO, or not. By recombination, a vast amount of datasets can be constructed.


For example, medical imaging data of three patients are received. Patient A is healthy. Patient B has a right LVO. Patient C has a left LVO. "Left" is abbreviated as "L." and "Right" as "R.". In terms of the hemispheres that means:

    • Patient A (No LVO): L. Hemisphere healthy; R. Hemisphere healthy
    • Patient B (R. LVO): L. Hemisphere healthy; R. Hemisphere LVO
    • Patient C (L. LVO): L. Hemisphere LVO; R. Hemisphere healthy


These six hemispheres can now be recombined into training data. The stacking of two hemispheres is indicated with a "+" sign. The hemisphere on the left of the "+" sign represents the left hemisphere in the stack.

    • L. Hemisphere of A (healthy)+L. Hemisphere of B (healthy)→No LVO
    • R. Hemisphere of A (healthy)+R. Hemisphere of B (LVO)→Right LVO
    • R. Hemisphere of B (LVO)+R. Hemisphere of C (healthy)→Left LVO


By recombination all desirable LVO cases can be constructed. Even if, hypothetically, the given medical imaging data consist only of right LVOs, with the recombination method it is possible to construct only left LVOs, only no-LVO cases, or, as preferred, a balanced distribution of all three. It must be mentioned that the combination of two LVO positive hemispheres does not lead to a meaningful recombination.


To demonstrate the numbers emerging from recombination, assume medical imaging data of 100 patients and suppose 50 of those patients suffer an LVO. That means 150 healthy (LVO negative) hemispheres and 50 sick (LVO positive) hemispheres are available. By recombination it would be possible to combine the 50 LVO positive hemispheres with the 150 LVO negative hemispheres (50×150=7500), and that twice (7500×2=15000), as left and right can be swapped. The LVO negative hemispheres can be recombined into in total 22350 (150×149=22350) combinations. All in all, given 100 patients, 37350 datasets can be constructed by the recombination trick. That is an increase in data of 37250%.
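These counts can be reproduced with a short enumeration; the hemisphere identifiers below are placeholders, and combinations of two LVO positive hemispheres are excluded as noted above:

```python
from itertools import permutations

healthy = [f"H{i}" for i in range(150)]  # 150 LVO negative hemispheres
lvo = [f"S{i}" for i in range(50)]       # 50 LVO positive hemispheres

# LVO positive stacks: one sick and one healthy hemisphere, in both orders
positive = [(a, b, "left LVO") for a in lvo for b in healthy]
positive += [(b, a, "right LVO") for a in lvo for b in healthy]

# LVO negative stacks: ordered pairs of distinct healthy hemispheres
negative = [(a, b, "no LVO") for a, b in permutations(healthy, 2)]

print(len(positive))                   # 15000 = 50 * 150 * 2
print(len(negative))                   # 22350 = 150 * 149
print(len(positive) + len(negative))   # 37350 datasets from 100 patients
```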


In particular, the first dataset and the mirrored second dataset for stacking as the training data are chosen based on boundary conditions. In particular, the first channel dataset and the first dataset, and/or the mirrored second channel dataset and the mirrored second dataset, originate from the same patient, scan and/or cerebral hemisphere. In particular, for the generation of training data, an actual mirrored second dataset and/or mirrored second channel dataset can be chosen as a first dataset and/or as a first channel dataset. Likewise, an actual first dataset and/or first channel dataset can be chosen as a mirrored second dataset and/or as a mirrored second channel dataset. The choosing can be implemented by a computer, a processing unit or preferably a machine learned algorithm. The boundary condition is for example chosen to generate training data with a preferred characteristic. Particularly, the boundary conditions select the datasets based on a comparable amount of contrast agent, patient information data, e.g., age, sex, demography, and/or characteristics of the cerebral disease.


In a second aspect, embodiments of the present invention also concern a method for training a deep learning algorithm to detect a disease information, in particular to detect a cerebral disease information, a cerebrovascular disease or an LVO. The method comprises the step of training the deep learning algorithm with training data comprising a first dataset and a mirrored second dataset. The training data are especially generated by and/or the product of applying the method for generating training data for training a deep learning algorithm to medical imaging data. In this manner, a huge amount of training data can be generated from a small amount of available acquired image data, resulting in better training and a higher quality of the output information. All features and remarks regarding the method for generating the training data also apply here. To train the deep learning algorithm, output training data are received. The output training data belong and/or are related to the training data. The output training data comprise for example information regarding the disease, e.g., disease yes/no, or no LVO, LVO in the first hemisphere or LVO in the second hemisphere (mirrored second datasets). The training data {V1, Vc1, V2*, Vc2*} can be combined with the output training data k into a training dataset, e.g., the training dataset {{V1, Vc1, V2*, Vc2*}, k}.


The deep learning algorithm for detection of the disease information can be based on one or more of the following architectures: convolutional neural network, deep belief network, random forest, deep residual learning, deep reinforcement learning, recurrent neural network, Siamese network, generative adversarial network or autoencoder. The deep learning algorithm is trained based on the training data and the output training data, especially based on the training dataset. The training can be done based on federated learning, wherein the medical imaging data or the training data are provided by a first device, e.g., a scanner, and wherein the training of the deep learning algorithm is performed at a second device, e.g., a cloud. The trained deep learning algorithm is provided, especially provided for a computer-implementation or for storage on a storage medium.


In particular, the training of the deep learning algorithm exploits a symmetry of the training data and the related output training data, wherein the symmetry describes the influence of switching the first dataset and the mirrored second dataset in the training data on the related output training data. Particularly, the stacking of both hemispheres of one or more patients enables new regularization methods exploiting the symmetry of a prediction. To describe the regularization method designed, some notation is introduced. The presence of cerebral disease information in the first (left) or second (right) hemisphere, or its absence, in this example LVO or no LVO, is indicated by y∈{−1,0,1}, whereas −1 represents left LVOs, 1 stands for right LVOs and 0 for no LVO. −1 and 1 are referred to as LVO positive and 0 as LVO negative. The two hemispheres of a patient are represented by hl=h1 for the left hemisphere and hr=h2 for the right hemisphere, respectively. A dataset d={s,y} therefore consists of the stack of the hemispheres s={hl,hr} and its label y. For every dataset, there exists a mirrored representation d*={s*,−y} with swapped hemispheres s*={hr,hl} and negated label. With the labels defined above, a negation flips the label to left or right but has no effect if no LVO is present. A classifier f(s)=ŷ takes s and returns the prediction ŷ. Ideally, it flips the prediction (in case of an LVO; it stays at 0 if there is no LVO) if s* is given instead. This symmetrical behavior can be reinforced and integrated into the training. For instance, latent space feature representations inside a neural network f(g(s))=ŷ, with g(s)=k, must be mirrored to their counterpart g(s*)=k* depending on the label. Contrastive losses for three classes, e.g., utilizing the cosine similarity, would provide a good foundation for the reinforcement of the desired feature space.
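A minimal PyTorch sketch of such a symmetry-exploiting training loss. The assumption that the model returns both class logits and latent features g(s), the per-hemisphere grouping of the channel stack, and the loss weighting are illustrative choices:

```python
import torch
import torch.nn.functional as F

def symmetry_regularized_loss(model, s, y, weight=0.1):
    """Exploit the mirror symmetry: swapping the hemisphere blocks of the
    stack s flips the label y in {-1, 0, 1} to -y.
    Assumes `model(x)` returns (class logits, latent features) and that the
    channel stack is grouped per hemisphere, e.g. {V1, Vc1..., V2*, Vc2*...}."""
    half = s.shape[1] // 2
    s_star = torch.cat([s[:, half:], s[:, :half]], dim=1)  # swapped stack s*
    logits, feats = model(s)
    logits_star, feats_star = model(s_star)
    ce = F.cross_entropy(logits, y + 1)             # map {-1,0,1} -> {0,1,2}
    ce_star = F.cross_entropy(logits_star, -y + 1)  # mirrored label -y
    # Contrastive term on g(s) and g(s*): pull the latent features together
    # for LVO negative cases and push them apart otherwise
    cos = F.cosine_similarity(feats, feats_star, dim=1)
    contrastive = torch.where(y == 0, 1.0 - cos, 1.0 + cos).mean()
    return ce + ce_star + weight * contrastive
```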


A further aspect of embodiments of the present invention is directed to a method for detecting a disease information, especially detecting a cerebral disease information. Particularly, the disease manifests in anomalies in the tissue, wherein the anomalies can be detected as the disease information. The manifestation of the disease information used for detection based on the medical imaging data is also called a radiological finding. The method is applicable for detecting diseases with an asymmetric expression in the examination area, particularly diseases manifesting in one of the two parts of the organ, e.g., in one of the two hemispheres. The method is especially configured for detecting large vessel occlusions, infarct nuclei, penumbras and/or transitions between dead and intact tissue.


Medical imaging data of an examination area of a patient are received, e.g., from a scanner, a data storage or a cloud. The examination area of the patient comprises a first and a second part of a symmetric organ, e.g. a first and a second cerebral hemisphere. The brain atlas data are received. Based on the brain atlas data the medical imaging data are registered into registered imaging data. Channel data of one or more channels can be computed and/or provided for the medical imaging data. In case channel data are provided or computed, it is preferred to register the channel data based on the brain atlas data into registered channel data. The unregistered or registered imaging data, and in particular the unregistered or registered channel data, are split along a symmetry plane or symmetry axis into a first dataset and a second dataset, and in particular a first channel dataset and a second channel dataset. The first dataset, and in particular the first channel dataset, comprises the unregistered or registered imaging data, and in particular the unregistered or registered first channel data of the first part of the organ. The second dataset, in particular the second channel dataset, comprises the unregistered or registered imaging data, and in particular the unregistered or registered channel data, of the second part of the organ. The second dataset, and in particular the second channel dataset, is mirrored along the symmetry plane or symmetry axis. The first dataset and the mirrored second dataset are stacked together to generate subject patient image data. The subject patient image data comprise the stack of the first dataset and the mirrored second dataset. Especially, to generate the subject patient image data the first dataset, the first channel dataset, the mirrored second dataset and the mirrored second channel dataset are stacked. The subject patient image data are preferably constructed, structured and/or assembled as the training data according to the method disclosed above. In particular, the medical imaging data, the registering, the splitting and the mirroring are implemented and/or configured as outlined for the method to generate the training data.


The subject patient image data are analyzed using the trained deep learning algorithm. The subject patient image data are the input data for the trained deep learning algorithm. The deep learning algorithm is configured to detect, especially to classify and/or localize, the disease, especially the disease information.
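For illustration, applying the trained algorithm could look as follows, reusing the stacking helper from the earlier training-data sketch and assuming a classifier that returns logits over the three classes (all names here are illustrative placeholders):

```python
import torch

stack = make_stacked_sample(registered_volume, split_index)  # numpy array
x = torch.from_numpy(stack).float().unsqueeze(0)             # add batch axis
model.eval()
with torch.no_grad():
    logits = model(x)
probs = torch.softmax(logits, dim=1)[0]  # e.g. left LVO, no LVO, right LVO
```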


Preferably, the method comprises the determination of a preferred view information, wherein the preferred view information relates to a radiological finding. The preferred view information comprises and/or defines a projection and/or a preferred projection angle to show and/or display the radiological finding, especially a region of the radiological finding, based on the medical imaging data. For brevity, in the following the term preferred view is used synonymously for the best projection and/or best projection angle. Preferred view means especially the view, image, projection and/or projection angle able to display or show the radiological finding to a doctor or other person, e.g., for setting a medical diagnosis. The radiological finding is preferably based on the cerebral disease information or the manifestation of the disease in the medical imaging data. The radiological finding is received. The radiological finding is for example made by a doctor or by applying a computer-implemented method to the medical imaging data. The radiological finding may be provided by a user or a system, e.g., the computer-implemented method. The preferred view information is determined by applying a view determining algorithm to the medical image data or to the subject patient image data.


It is noted that this embodiment is also advantageous independently from using the subject patient image data and/or using the trained deep learning algorithm. Thus, a method and system for detecting the cerebral disease information and determining the preferred view information are conceivable. The method would comprise the following steps:

    • Receiving medical imaging data of an examination region of a subject patient,
    • Receiving a radiological finding for the medical imaging data,
    • Analyzing the medical imaging data using a view determining algorithm,
    • Determining a preferred view information by applying a view determining algorithm on the medical imaging data, wherein the preferred view information comprises a preferred projection and/or a preferred projection angle to show a region of the radiological finding based on the medical imaging data.


This embodiment can be integrated into the visualization software of a system, especially a tomography system, as a tool which recommends or automatically chooses the ideal 2D view of the volumetric (3D) segmentation and raw data, such that in said view the pathology of concern is visible as well as possible. This would ease the diagnosis and shorten the time needed for it. The main challenge lies in how to obtain suitable labels for what constitutes a good view in order to build a trainable recommender system. In case a trainable system is required for the localization of a disease information, these 3D positions would typically need to be labeled as well, which is a tedious process. The method is especially configured to be implemented in the diagnosis of cerebrovascular diseases using a segmentation of the vessel tree, where it is tedious to provide proper labels. The method is in particular configured to solve the problem of finding the camera position in space which best displays the key indicator for the identification of some disease, and the location of said key characteristic position responsible for the disease.


The method to determine the preferred view information is based on a multi-view projection approach classifying the disease on 2D views of the segmentation. Preferably, the method uses saliency maps to evaluate the importance of each view.


Related to this problem is the computation of a saliency map on 3D meshes. Song and Liu ["Mesh Saliency via Weakly Supervised Classification-for-Saliency CNN", IEEE Transactions on Visualization and Computer Graphics, 2019] presented a method which can compute the saliency map on 3D meshes. Their method uses a multi-view approach and maps the saliency map of these views onto the original 3D surface to visualize the importance of each vertex of the mesh of interest. However, their method is restricted to the mesh itself. If the pathologic sign of interest is the absence of parts of the segmented structure, as is the case for various vascular diseases like LVOs, indicating saliency on the structure itself is insufficient; hence a different approach is suggested.


The goal of this embodiment is to predict the preferred view information T*, especially the best possible view, which essentially defines the extrinsic "camera" parameters to be chosen for the visualization of a disease. According to an embodiment of the present invention, a recommender system is applied, here driven by a neural network, which receives a 3D representation and returns T*. Several steps are required to realize this, as labels for T* are hard to assess, especially since several ideal views might exist for one case. Instead, the trainable method proposed here is weakly supervised, wherein labels are used for the case-level classification of the disease to detect, e.g., whether a patient suffers an AVM (arteriovenous malformation) or not.


Preferably, the method to determine the preferred view information comprises two phases followed by an optional third phase:


Phase 1 is a multi-view classification, wherein a network is trained on the classification task at hand, i.e., classifying whether a patient suffers from a particular disease or not. This is required for the view proposal, as the network must first learn to differentiate between positive and negative cases. It is also required as the network needs to learn to extract the descriptive features that identify the mentioned diseases. However, the network does not receive the volume as a plain 3D representation; instead it receives arbitrarily many projections N of the volume, defined by a set of transformations T={T1, T2, . . . , TN}, where each transformation has different Euler angles (orientation in space) and/or translation vectors (position in space). The projections are fixed across all cases, so the network learns which of them are relevant and which are not. The network receives all these projections, for instance in the form of input channels. The multi-view classification can be based on the method published by Su et al. ["Multi-view Convolutional Neural Networks for 3D Shape Recognition", Proceedings of ICCV 2015. http://arxiv.org/abs/1505.00880]. However, the classification on the multiple views can precede the method for determining the preferred view information, and is just one step out of the three required to provide the trained view determining algorithm.
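A minimal sketch of generating such a fixed set of projections, here as maximum-intensity projections of the rotated volume (the Euler angles, projection type and view count are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import rotate

def multi_view_projections(volume: np.ndarray, angles_deg) -> np.ndarray:
    """One maximum-intensity projection per transformation in T.
    `angles_deg` lists (yaw, pitch) Euler angles; the set is fixed across
    all cases so the network can learn which views are relevant."""
    views = []
    for yaw, pitch in angles_deg:
        v = rotate(volume, yaw, axes=(0, 1), reshape=False, order=1)
        v = rotate(v, pitch, axes=(0, 2), reshape=False, order=1)
        views.append(v.max(axis=0))       # project along the viewing axis
    return np.stack(views, axis=0)        # (N, H, W): one channel per view

T = [(angle, 0) for angle in range(0, 360, 30)]  # e.g. N = 12 views
```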


Phase 2 is for determining the preferred view information. Given a trained multi-view classifier, the goal of phase 2 is to identify the most relevant projection by analysis of the net saliency or relevance of each view. For this, several methods can be chosen which visualize and quantify the relevance of each input channel. Notable methods include Grad-CAM, saliency maps and LRP, just to name a few. Once the most relevant channel has been found, the preferred view information is implicitly determined as the best possible transformation out of the set T={T1, T2, . . . , TN}. With this approach, only weak labels are required. Weak labels in this context means that only labels for the first phase are required to train the network for the classification task at hand. Hence, the effort in preparing such a method to determine the preferred view information is manageable. Crucially, the definition of what constitutes an optimal view follows directly from the objective ability (of the trained phase 1 model) to classify the pathology on this view, i.e., it is inherently optimal for the specific clinical decision. In contrast, manual labeling of views, apart from being tedious, would likely be much more subjective.
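A minimal sketch of the per-view relevance analysis using plain gradient saliency (Grad-CAM or LRP could be substituted; the model interface is an assumption):

```python
import torch

def most_relevant_view(model, views: torch.Tensor) -> int:
    """Rank the N input views (channels) of a trained multi-view classifier
    by gradient saliency; the returned index implicitly selects T* out of
    T = {T1, ..., TN}."""
    x = views.unsqueeze(0).clone().requires_grad_(True)  # (1, N, H, W)
    score = model(x).max()        # score of the highest-scoring class
    score.backward()
    saliency = x.grad.abs().sum(dim=(0, 2, 3))  # one relevance value per view
    return int(saliency.argmax())
```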


An optional phase 3 is directed to a coarse localization. In the third phase, the projection can furthermore be used to determine the region of interest in 3D, essentially yielding a coarse localization based solely on a global weak label.


To this end, the projections are used for a back-projection, with the accumulated attention values forming a peak in overlapping positions. This is done using all projections in 3D. Alternatively, the peaks can be identified as 2D points in the projection views that define lines (projection rays) in 3D, and the 3D point with the least total distance to all such lines can be marked as the location of interest in the original volume.
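The point with the least total distance to the projection rays has a closed-form least-squares solution; a minimal sketch follows, where the ray origins and unit directions are assumed to be derived from the 2D attention peaks and the known projection geometry:

```python
import numpy as np

def closest_point_to_rays(origins: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """3D point minimizing the total squared distance to a set of lines;
    `origins` is (N, 3) and `directions` is (N, 3) with unit-length rows."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)  # solves (sum P_i) p = sum P_i o_i
```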


The benefits of using the method for determining the preferred view information are manifold. The characteristic view can be used either as a thumbnail in the series overview to give the reader a quick grasp of the pathology of a patient, or it can be visualized alongside the other standard views when loading the patient. Furthermore, findings are easier to make as a characteristic image has already been found and determined. All of these aspects may help to accelerate the diagnosis. A further benefit lies in the third step, especially in the back-projection, which is performed spatially in terms of image coordinates and allows regions to be identified which are not part of the mesh. This is crucial, for example, in the diagnosis of LVOs, as their key indicator is the occlusion and the consequent absence of important subbranches of the vessel tree like the cerebri media.


In one aspect, embodiments of the present invention relate to a system for determining a disease information, comprising: an interface to receive subject patient image data; an analyzing unit to analyze the received subject patient image data using a trained deep learning algorithm for determining the disease information, wherein the trained deep learning algorithm has been trained with training data to detect the disease information; a determining unit to detect the disease information, especially to determine output information regarding the disease information, based on the analysis of the analyzing unit; and an interface to output the detected disease information, especially to output the determined output information.


In another aspect, embodiments of the present invention relate to a computer program product comprising program elements which induce a data processing system to carry out the steps of the method according to one or more of the disclosed aspects, when the program elements are loaded into a memory of the data processing system.


In another aspect, embodiments of the present invention relate to a computer-readable medium on which program elements are stored that can be read and executed by the system, in order to perform the steps of the method according to one or more of the disclosed aspects, when the program elements are executed by the system.


Any of the components of the system mentioned herein or any interface between the components of the system can be embodied in form of hardware and/or software. In particular, an interface can be embodied in form of at least one of a PCI-Bus, a USB or a Firewire. In particular, a unit can comprise hardware elements and/or software elements, for example a microprocessor, a field programmable gate array (an acronym is “FPGA”) or an application specific integrated circuit (an acronym is “ASIC”).


The system can, for example, comprise at least one of a cloud-computing system, a distributed computing system, a computer network, a computer, a tablet computer, a smartphone or the like. The system can comprise hardware and/or software. The hardware can be, for example, a processor system, a memory system and combinations thereof. The hardware can be configurable by the software and/or be operable by the software. Calculations for performing a step of a method and/or for training a machine learning model may be carried out in a processor.


Data, in particular, the medical imaging data, the brain atlas data and the plurality of datasets, can be received, for example, by receiving a signal that carries the data and/or by reading the data from a computer-readable medium. Data, in particular, the cerebral disease information, can be provided, for example, by transmitting a signal that carries the data and/or by writing the data into a computer-readable medium and/or by displaying the data on a display.


The computer program is for example a computer program product comprising another element apart from the computer program. This other element can be hardware, for example a memory device, on which the computer program is stored, a hardware key for using the computer program and the like, and/or software, for example, a documentation or a software key for using the computer program. A computer-readable medium can be embodied as non-permanent main memory (e.g. random-access memory) or as permanent mass storage (e.g. hard disk, USB stick, SD card, solid state disk).


Wherever not already described explicitly, individual embodiments, or their individual aspects and features, can be combined or exchanged with one another without limiting or widening the scope of the described invention, whenever such a combination or exchange is meaningful and in the sense of this invention. Advantages which are described with respect to one embodiment of the present invention are, wherever applicable, also advantageous of other embodiments of the present invention.


In the context of the present invention, the expression “based on” can in particular be understood as meaning “using, inter alia”. In particular, wording according to which a first feature is calculated (or generated, determined etc.) based on a second feature does not preclude the possibility of the first feature being calculated (or generated, determined etc.) based on a third feature.


Reference is made to the fact that the described methods and the described units are merely preferred example embodiments of the present invention and that the present invention can be varied by a person skilled in the art, without departing from the scope of the present invention as it is specified by the claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be illustrated below with reference to the accompanying figures using example embodiments. The illustration in the figures is schematic and highly simplified and not necessarily to scale.



FIG. 1 shows a diagram illustrating methods for generating training data, for training of a deep learning algorithm and for detecting a disease information,



FIG. 2 shows a diagram illustrating a splitting of registered imaging data,



FIG. 3 shows a diagram illustrating applying of a deep learning algorithm to stacked data, and



FIG. 4 shows a diagram illustrating a system for detecting the disease information.





DETAILED DESCRIPTION


FIG. 1 shows a diagram illustrating the combination of three methods I, II and III, wherein method I is for generating training data to train a deep learning algorithm. Method II trains the deep learning algorithm based on the training data. Method III is for determining a disease information based on the trained deep learning algorithm, wherein the disease information according to this example is a cerebral disease information. Furthermore, the example is based on a brain as the symmetric organ, wherein the first part of the organ is a first hemisphere of the brain and the second part is a second hemisphere of the brain.


Method I is configured for generating the training data and comprises:

    • Receiving RT1 medical imaging data of an examination area of a patient, the examination area of the patient comprising a first and a second cerebral hemisphere,
    • Receiving RA1 brain atlas data,
    • Generating GR1 registered imaging data based on the medical imaging data and the brain atlas data,
    • Splitting SP1 of the registered imaging data along a symmetry plane or symmetry axis into a first dataset and a second dataset, wherein the first dataset comprises the registered imaging data of the first cerebral hemisphere and the second dataset comprises the registered imaging data of the second cerebral hemisphere,
    • Mirroring MD1 of the second dataset along the symmetry plane or symmetry axis,
    • Generating GT1 of the training data by stacking of the first dataset and the mirrored second dataset,
    • Providing PT the training data.


Method II trains the deep learning algorithm and comprises:

    • Receiving RT2 training data, wherein the training data are provided by executing method I,
    • Receiving RO output training data, wherein the output training data are related to the training data,
    • Training TD the deep learning algorithm based on the training data and the output training data,
    • Providing PD the deep learning algorithm.


Method III utilizes the trained deep learning algorithm to detect a cerebral disease information and comprises:

    • Receiving RT3 medical imaging data of an examination area of a patient, the examination area of the patient comprising a first and a second cerebral hemisphere,
    • Receiving RA2 brain atlas data,
    • Generating GR2 registered imaging data based on the medical imaging data and the brain atlas data,
    • Splitting SP2 of the registered imaging data along a symmetry plane or symmetry axis into a first dataset and a second dataset, wherein the first dataset comprises the registered imaging data of the first cerebral hemisphere and the second dataset comprises the registered imaging data of the second cerebral hemisphere,
    • Mirroring MD2 of the second dataset along the symmetry plane or symmetry axis,
    • Generating GP of subject patient image data by stacking of the first dataset and the mirrored second dataset,
    • Analyzing AS the subject patient image data using a trained deep learning algorithm,
    • Detecting DO the cerebral disease information, wherein the trained deep learning algorithm has been trained with training data.



FIG. 2 shows an example for generating training data 1 according to an embodiment of the present invention. Given is a tomographic scan 2 comprising a CTA dataset of a patient as medical imaging data 3. The medical imaging data 3 comprise imaging data of a left cerebral hemisphere as first cerebral hemisphere and imaging data of a right cerebral hemisphere as second cerebral hemisphere. In a first step, additional channels 4, 5 are computed next to the original CTA scan. The channels 4, 5 comprise channel data of the left hemisphere and channel data of the right hemisphere. As channel 4, a vessel channel is computed based on segmenting the cerebrovascular vessel tree. As channel 5, a bone channel is computed based on a bone mask. Optionally, other channels might be beneficial as well, like a non-contrast CT scan.


In a second step the medical imaging data 3 and the channel data of the channels 4, 5 are registered into a reference coordinate system by registering the volume to a probabilistic atlas of the brain. This standardizes the medical imaging data 3 and the channels 4, 5 to a canonical pose and prepares the dataset for the third step.


In the third step, the medical imaging data 3 and the channel data of the channels 4, 5 are split sagittally, such that both hemispheres are separated from each other. One side is flipped to its opposite, including all its channels 4, 5. In other words, the registered and split medical imaging data of the second cerebral hemisphere are mirrored into the mirrored second dataset 3b*, which visually looks like medical imaging data of a first cerebral hemisphere. The same is done for the channels 4, 5, which results in the mirrored second channel datasets 4b*, 5b*. As a result, visually there are either only left sides or only right sides. The first dataset 3a, the first channel datasets 4a, 5a, the mirrored second dataset 3b* and the mirrored second channel datasets 4b*, 5b* are then stacked channel-wise, resulting in a 6-channeled volume, wherein the first three channels represent the left hemisphere and the last three channels the right one. Stacking the data results in the training data 1 configured as a 4D tensor (3D+6 channels). According to this method, subject patient image data can likewise be generated based on medical imaging data of the patient.



FIG. 3 shows an example of applying a deep learning algorithm 6 for determining a cerebral disease information 7 to the training data 1. The deep learning algorithm 6 is configured as a CNN and receives the 6-channeled training data 1 and detects 8 the existence of an LVO and/or localizes 9 it, e.g., by determining the coordinates of the LVO.



FIG. 4 shows an example of a system U for detecting a cerebral disease information. The system U comprises an interface I1-U to receive subject patient image data, an analyzing unit A-U to analyze the received subject patient image data using a provided PD trained deep learning algorithm, wherein the trained deep learning algorithm has been trained with provided PT training data to detect the cerebral disease information, and a determining unit D-U to determine the output information regarding the cerebral disease information based on the analysis of the analyzing unit A-U. Furthermore, the system U comprises a second interface I2-U to output the detected cerebral disease information.
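For illustration only, the cooperation of the units of the system U could be skeletonized as follows; the class SystemU and its method names are hypothetical and merely mirror the reference signs of FIG. 4.

```python
class SystemU:
    """Hypothetical skeleton of the system U for detecting a cerebral
    disease information."""

    def __init__(self, trained_model):
        # Provided PD trained deep learning algorithm.
        self.trained_model = trained_model

    def receive(self, subject_patient_image_data):
        # First interface I1-U: receive the subject patient image data.
        self.data = subject_patient_image_data

    def analyze_and_output(self):
        # Analyzing unit A-U: apply the trained deep learning algorithm.
        probability, location = self.trained_model(self.data)
        # Determining unit D-U: derive the output information.
        output_information = {"lvo_probability": float(probability),
                              "location": location}
        # Second interface I2-U: output the detected disease information.
        return output_information
```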


It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections, should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or,” includes any and all combinations of one or more of the associated listed items. The phrase “at least one of” has the same meaning as “and/or”.


Spatially relative terms, such as “beneath,” “below,” “lower,” “under,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below,” “beneath,” or “under,” other elements or features would then be oriented “above” the other elements or features. Thus, the example terms “below” and “under” may encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. In addition, when an element is referred to as being “between” two elements, the element may be the only element between the two elements, or one or more other intervening elements may be present.


Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “on,” “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the disclosure, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being “directly” on, connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between,” versus “directly between,” “adjacent,” versus “directly adjacent,” etc.).


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Also, the term “example” is intended to refer to an example or illustration.


It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.


Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.


It is noted that some example embodiments may be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented in conjunction with units and/or devices discussed above. Although discussed in a particular manner, a function or operation specified in a specific block may be performed differently from the flow specified in a flowchart, flow diagram, etc. For example, functions or operations illustrated as being performed serially in two consecutive blocks may actually be performed simultaneously, or in some cases be performed in reverse order. Although the flowcharts describe the operations as sequential processes, many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, subprograms, etc.


Specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. The present invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.


In addition, or alternative, to that discussed above, units and/or devices according to one or more example embodiments may be implemented using hardware, software, and/or a combination thereof. For example, hardware devices may be implemented using processing circuitry such as, but not limited to, a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a System-on-Chip (SoC), a programmable logic unit, a microprocessor, or any other device capable of responding to and executing instructions in a defined manner. Portions of the example embodiments and corresponding detailed description may be presented in terms of software, or algorithms and symbolic representations of operation on data bits within a computer memory. These descriptions and representations are the ones by which those of ordinary skill in the art effectively convey the substance of their work to others of ordinary skill in the art. An algorithm, as the term is used here, and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.


In this application, including the definitions below, the term ‘module’ or the term ‘controller’ may be replaced with the term ‘circuit.’ The term ‘module’ may refer to, be part of, or include processor hardware (shared, dedicated, or group) that executes code and memory hardware (shared, dedicated, or group) that stores code executed by the processor hardware.


The module may include one or more interface circuits. In some examples, the interface circuits may include wired or wireless interfaces that are connected to a local area network (LAN), the Internet, a wide area network (WAN), or combinations thereof. The functionality of any given module of the present disclosure may be distributed among multiple modules that are connected via interface circuits. For example, multiple modules may allow load balancing. In a further example, a server (also known as remote, or cloud) module may accomplish some functionality on behalf of a client module.


Software may include a computer program, program code, instructions, or some combination thereof, for independently or collectively instructing or configuring a hardware device to operate as desired. The computer program and/or program code may include program or computer-readable instructions, software components, software modules, data files, data structures, and/or the like, capable of being implemented by one or more hardware devices, such as one or more of the hardware devices mentioned above. Examples of program code include both machine code produced by a compiler and higher level program code that is executed using an interpreter.


For example, when a hardware device is a computer processing device (e.g., a processor, Central Processing Unit (CPU), a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a microprocessor, etc.), the computer processing device may be configured to carry out program code by performing arithmetical, logical, and input/output operations, according to the program code. Once the program code is loaded into a computer processing device, the computer processing device may be programmed to perform the program code, thereby transforming the computer processing device into a special purpose computer processing device. In a more specific example, when the program code is loaded into a processor, the processor becomes programmed to perform the program code and operations corresponding thereto, thereby transforming the processor into a special purpose processor.


Software and/or data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, or computer storage medium or device, capable of providing instructions or data to, or being interpreted by, a hardware device. The software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion. In particular, for example, software and data may be stored by one or more computer readable recording mediums, including the tangible or non-transitory computer-readable storage media discussed herein.


Even further, any of the disclosed methods may be embodied in the form of a program or software. The program or software may be stored on a non-transitory computer readable medium and is adapted to perform any one of the aforementioned methods when run on a computer device (a device including a processor). Thus, the non-transitory, tangible computer readable medium, is adapted to store information and is adapted to interact with a data processing facility or computer device to execute the program of any of the above mentioned embodiments and/or to perform the method of any of the above mentioned embodiments.


Example embodiments may be described with reference to acts and symbolic representations of operations (e.g., in the form of flow charts, flow diagrams, data flow diagrams, structure diagrams, block diagrams, etc.) that may be implemented in conjunction with units and/or devices discussed in more detail below. Although discussed in a particular manner, a function or operation specified in a specific block may be performed differently from the flow specified in a flowchart, flow diagram, etc. For example, functions or operations illustrated as being performed serially in two consecutive blocks may actually be performed simultaneously, or in some cases be performed in reverse order.


According to one or more example embodiments, computer processing devices may be described as including various functional units that perform various operations and/or functions to increase the clarity of the description. However, computer processing devices are not intended to be limited to these functional units. For example, in one or more example embodiments, the various operations and/or functions of the functional units may be performed by other ones of the functional units. Further, the computer processing devices may perform the operations and/or functions of the various functional units without sub-dividing the operations and/or functions of the computer processing units into these various functional units.


Units and/or devices according to one or more example embodiments may also include one or more storage devices. The one or more storage devices may be tangible or non-transitory computer-readable storage media, such as random access memory (RAM), read only memory (ROM), a permanent mass storage device (such as a disk drive), solid state (e.g., NAND flash) device, and/or any other like data storage mechanism capable of storing and recording data. The one or more storage devices may be configured to store computer programs, program code, instructions, or some combination thereof, for one or more operating systems and/or for implementing the example embodiments described herein. The computer programs, program code, instructions, or some combination thereof, may also be loaded from a separate computer readable storage medium into the one or more storage devices and/or one or more computer processing devices using a drive mechanism. Such separate computer readable storage medium may include a Universal Serial Bus (USB) flash drive, a memory stick, a Blu-ray/DVD/CD-ROM drive, a memory card, and/or other like computer readable storage media. The computer programs, program code, instructions, or some combination thereof, may be loaded into the one or more storage devices and/or the one or more computer processing devices from a remote data storage device via a network interface, rather than via a local computer readable storage medium. Additionally, the computer programs, program code, instructions, or some combination thereof, may be loaded into the one or more storage devices and/or the one or more processors from a remote computing system that is configured to transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, over a network. The remote computing system may transfer and/or distribute the computer programs, program code, instructions, or some combination thereof, via a wired interface, an air interface, and/or any other like medium.


The one or more hardware devices, the one or more storage devices, and/or the computer programs, program code, instructions, or some combination thereof, may be specially designed and constructed for the purposes of the example embodiments, or they may be known devices that are altered and/or modified for the purposes of example embodiments.


A hardware device, such as a computer processing device, may run an operating system (OS) and one or more software applications that run on the OS. The computer processing device also may access, store, manipulate, process, and create data in response to execution of the software. For simplicity, one or more example embodiments may be exemplified as a computer processing device or processor; however, one skilled in the art will appreciate that a hardware device may include multiple processing elements or processors and multiple types of processing elements or processors. For example, a hardware device may include multiple processors or a processor and a controller. In addition, other processing configurations are possible, such as parallel processors.


The computer programs include processor-executable instructions that are stored on at least one non-transitory computer-readable medium (memory). The computer programs may also include or rely on stored data. The computer programs may encompass a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, etc. As such, the one or more processors may be configured to execute the processor executable instructions.


The computer programs may include: (i) descriptive text to be parsed, such as HTML (hypertext markup language) or XML (extensible markup language), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler, etc. As examples only, source code may be written using syntax from languages including C, C++, C#, Objective-C, Haskell, Go, SQL, R, Lisp, Java®, Fortran, Perl, Pascal, Curl, OCaml, Javascript®, HTML5, Ada, ASP (active server pages), PHP, Scala, Eiffel, Smalltalk, Erlang, Ruby, Flash®, Visual Basic®, Lua, and Python®.


Further, at least one example embodiment relates to the non-transitory computer-readable storage medium including electronically readable control information (processor executable instructions) stored thereon, configured such that, when the storage medium is used in a controller of a device, at least one embodiment of the method may be carried out.


The computer readable medium or storage medium may be a built-in medium installed inside a computer device main body or a removable medium arranged so that it can be separated from the computer device main body. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium is therefore considered tangible and non-transitory. Non-limiting examples of the non-transitory computer-readable medium include, but are not limited to, rewriteable non-volatile memory devices (including, for example flash memory devices, erasable programmable read-only memory devices, or a mask read-only memory devices); volatile memory devices (including, for example static random access memory devices or a dynamic random access memory devices); magnetic storage media (including, for example an analog or digital magnetic tape or a hard disk drive); and optical storage media (including, for example a CD, a DVD, or a Blu-ray Disc). Examples of the media with a built-in rewriteable non-volatile memory, include but are not limited to memory cards; and media with a built-in ROM, including but not limited to ROM cassettes; etc. Furthermore, various information regarding stored images, for example, property information, may be stored in any other form, or it may be provided in other ways.


The term code, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. Shared processor hardware encompasses a single microprocessor that executes some or all code from multiple modules. Group processor hardware encompasses a microprocessor that, in combination with additional microprocessors, executes some or all code from one or more modules. References to multiple microprocessors encompass multiple microprocessors on discrete dies, multiple microprocessors on a single die, multiple cores of a single microprocessor, multiple threads of a single microprocessor, or a combination of the above.


Shared memory hardware encompasses a single memory device that stores some or all code from multiple modules. Group memory hardware encompasses a memory device that, in combination with other memory devices, stores some or all code from one or more modules.


The term memory hardware is a subset of the term computer-readable medium. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium is therefore considered tangible and non-transitory. Non-limiting examples of the non-transitory computer-readable medium include, but are not limited to, rewriteable non-volatile memory devices (including, for example flash memory devices, erasable programmable read-only memory devices, or a mask read-only memory devices); volatile memory devices (including, for example static random access memory devices or a dynamic random access memory devices); magnetic storage media (including, for example an analog or digital magnetic tape or a hard disk drive); and optical storage media (including, for example a CD, a DVD, or a Blu-ray Disc). Examples of the media with a built-in rewriteable non-volatile memory, include but are not limited to memory cards; and media with a built-in ROM, including but not limited to ROM cassettes; etc. Furthermore, various information regarding stored images, for example, property information, may be stored in any other form, or it may be provided in other ways.


The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks and flowchart elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer.


Although described with reference to specific examples and drawings, modifications, additions and substitutions of example embodiments may be variously made according to the description by those of ordinary skill in the art. For example, the described techniques may be performed in an order different from that of the methods described, and/or components such as the described system, architecture, devices, circuit, and the like, may be connected or combined to be different from the above-described methods, or results may be appropriately achieved by other components or equivalents.


Although the present invention has been shown and described with respect to certain example embodiments, equivalents and modifications will occur to others skilled in the art upon the reading and understanding of the specification. The present invention includes all such equivalents and modifications and is limited only by the scope of the appended claims.

Claims
  • 1. A method for generating training data for training a deep learning algorithm, the method comprising: receiving medical imaging data of an examination area of a patient, the examination area of the patient including a first part of a symmetric organ and a second part of the symmetric organ; splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset, wherein the first dataset includes the medical imaging data of the first part of the symmetric organ and the second dataset includes the medical imaging data of the second part of the symmetric organ; mirroring the second dataset along the symmetry plane or the symmetry axis; generating the training data by stacking the first dataset and the mirrored second dataset; and providing the training data.
  • 2. The method according to claim 1, further comprising: receiving organ atlas data; generating registered imaging data based on the medical imaging data and the organ atlas data; and using the registered imaging data as the medical imaging data for the splitting of the medical imaging data into the first dataset and the second dataset, wherein the first dataset includes the registered imaging data of the first part of the symmetric organ and the second dataset includes the registered imaging data of the second part of the symmetric organ.
  • 3. The method according to claim 2, wherein the organ atlas data are brain atlas data and the symmetric organ is a brain, wherein the first part of the symmetric organ is a first cerebral hemisphere and the second part of the symmetric organ is a second cerebral hemisphere.
  • 4. The method according to claim 1, further comprising: receiving at least one channel or calculating the at least one channel based on the medical imaging data, wherein the at least one channel includes channel data; generating registered channel data based on the channel data and organ atlas data; splitting the registered channel data along the symmetry plane or the symmetry axis into at least a first channel dataset and at least a second channel dataset, wherein the first channel dataset includes the registered channel data of the first part of the symmetric organ and the second channel dataset includes the registered channel data of the second part of the symmetric organ; mirroring the second channel dataset along the symmetry plane or the symmetry axis; and generating the training data by stacking the first dataset, the first channel dataset, the mirrored second dataset and the mirrored second channel dataset.
  • 5. The method according to claim 4, wherein at least one of (i) the at least one channel is a vessel channel including vessel data or (ii) the at least one channel is a bone channel including bone data, the vessel data are based on a segmentation of a cerebrovascular vessel tree in the medical imaging data, and the bone data are based on at least one of a bone mask or a segmentation of bones in the medical imaging data.
  • 6. The method according to claim 4, wherein the training data are configured as an ordered set, the first dataset is prior to the first channel dataset, the first channel dataset is prior to the mirrored second dataset, and the mirrored second dataset is prior to the mirrored second channel dataset.
  • 7. The method according to claim 1, further comprising: generating the training data by stacking a first dataset and a mirrored second dataset based on medical imaging data of different patients.
  • 8. The method according to claim 7, further comprising: choosing the first dataset and the mirrored second dataset for generating the training data based on boundary conditions, wherein the boundary conditions concern patient information data.
  • 9. A method for training a deep learning algorithm to detect disease information in medical imaging data of an examination area of a patient, the method comprising: receiving training data, wherein the training data are generated based on the method according to claim 1; receiving output training data, wherein the output training data include disease information of the training data; training the deep learning algorithm based on the training data and the output training data; and providing the trained deep learning algorithm.
  • 10. The method according to claim 9, wherein the training of the deep learning algorithm exploits a symmetry of the training data and the output training data, wherein the symmetry describes an influence of switching the first dataset and the mirrored second dataset in the training data on disease information in the output training data.
  • 11. A method for detecting disease information, the method comprising: receiving medical imaging data of an examination area of a patient, the examination area of the patient including a first part of a symmetric organ and a second part of the symmetric organ; splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset, wherein the first dataset includes the medical imaging data of the first part of the symmetric organ and the second dataset includes the medical imaging data of the second part of the symmetric organ; mirroring the second dataset along the symmetry plane or the symmetry axis; generating subject patient image data by stacking the first dataset and the mirrored second dataset; analyzing the subject patient image data by applying a trained deep learning algorithm to the subject patient image data; and detecting the disease information based on the analyzing, wherein the trained deep learning algorithm has been trained with training data.
  • 12. The method according to claim 11, further comprising: receiving a radiological finding based on the medical imaging data or the subject patient image data; and determining view information by applying a view determining algorithm to the medical imaging data or to the subject patient image data, wherein the view information includes at least one of a projection or a projection angle to show a region of the radiological finding based on the medical imaging data or based on the subject patient image data.
  • 13. The method according to claim 12, wherein the medical imaging data includes 2D-projections from different views onto the region of the radiological finding, and the method further comprises: determining a coarse location of the region of the radiological finding by applying a back-projection on the 2D-projections.
  • 14. A system for detecting disease information, the system comprising: a first interface configured to receive medical imaging data of an examination area of a patient, wherein the examination area of the patient includes a first part of a symmetric organ and a second part of the symmetric organ; a processing unit configured to split the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset, wherein the first dataset includes the medical imaging data of the first part of the symmetric organ and the second dataset includes the medical imaging data of the second part of the symmetric organ, mirror the second dataset along the symmetry plane or the symmetry axis, and generate subject patient image data by stacking the first dataset and the mirrored second dataset; an analyzing unit configured to analyze the subject patient image data using a trained deep learning algorithm for determining the disease information; a determining unit configured to determine output information regarding the disease information based on analysis of the subject patient image data by the analyzing unit; and a second interface configured to output the output information.
  • 15. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a computer, cause the computer to carry out the method according to claim 1.
  • 16. The method according to claim 2, further comprising: receiving at least one channel or calculating the at least one channel based on the medical imaging data, wherein the at least one channel includes channel data; generating registered channel data based on the channel data and the organ atlas data; splitting the registered channel data along the symmetry plane or the symmetry axis into at least a first channel dataset and at least a second channel dataset, wherein the first channel dataset includes the registered channel data of the first part of the symmetric organ and the second channel dataset includes the registered channel data of the second part of the symmetric organ; mirroring the second channel dataset along the symmetry plane or the symmetry axis; and generating the training data by stacking the first dataset, the first channel dataset, the mirrored second dataset and the mirrored second channel dataset.
  • 17. The method according to claim 3, further comprising: receiving at least one channel or calculating the at least one channel based on the medical imaging data, wherein the at least one channel includes channel data; generating registered channel data based on the channel data and the organ atlas data; splitting the registered channel data along the symmetry plane or the symmetry axis into at least a first channel dataset and at least a second channel dataset, wherein the first channel dataset includes the registered channel data of the first part of the symmetric organ and the second channel dataset includes the registered channel data of the second part of the symmetric organ; mirroring the second channel dataset along the symmetry plane or the symmetry axis; and generating the training data by stacking the first dataset, the first channel dataset, the mirrored second dataset and the mirrored second channel dataset.
  • 18. The method according to claim 17, wherein at least one of (i) the at least one channel is a vessel channel including vessel data or (ii) the at least one channel is a bone channel including bone data, the vessel data are based on a segmentation of a cerebrovascular vessel tree in the medical imaging data, and the bone data are based on at least one of a bone mask or a segmentation of bones in the medical imaging data.
  • 19. The method according to claim 5, wherein the training data are configured as an ordered set, the first dataset is prior to the first channel dataset, the first channel dataset is prior to the mirrored second dataset, and the mirrored second dataset is prior to the mirrored second channel dataset.
  • 20. A method for detecting disease information, the method comprising: receiving medical imaging data of an examination area of a patient, the examination area of the patient including a first part of a symmetric organ and a second part of the symmetric organ; splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset, wherein the first dataset includes the medical imaging data of the first part of the symmetric organ and the second dataset includes the medical imaging data of the second part of the symmetric organ; mirroring the second dataset along the symmetry plane or the symmetry axis; generating subject patient image data by stacking the first dataset and the mirrored second dataset; analyzing the subject patient image data by applying a trained deep learning algorithm to the subject patient image data; and detecting the disease information based on the analyzing, wherein the trained deep learning algorithm has been trained with training data generated according to the method of claim 1.
Priority Claims (1)
Number Date Country Kind
22158204.2 Feb 2022 EP regional