The present disclosure relates generally to image segmentation, and more specifically, to exemplary embodiments of an exemplary system, method, and computer-accessible medium for multi-class segmentations from single-class datasets.
One area of significant interest in the field of image processing is segmentation, a process in which objects of interest in an image are automatically identified. Progress has been made in deep learning for semantic segmentation of images, and one of the major factors of such advances is the public availability of large-scale multi-class datasets, such as ImageNet (see, e.g., Reference 7), COCO (see, e.g., Reference 24), PASCAL VOC (see, e.g., Reference 12), and others. Such a variety of available datasets not only provides the means to train and evaluate different segmentation models, but also exposes the models to diverse labels. However, in contrast to natural images, there are certain domains where, despite the critical importance of segmentation research, the generation of ground truth annotations and labeling is extremely costly and remains a bottleneck in advancing research.
Biomedical imaging is one such domain, in which the accurate segmentation of various structures is a fundamental problem, especially in clinical research. In traditional clinical practice, segmentation is often omitted during the diagnostic process. However, manual analysis of biomedical images, including measurements, is subject to large variability, as it depends on different factors, including the structure of interest, image quality, and the clinician's experience. Moreover, segmentation is an essential component in various medical systems that support computer-aided diagnosis (“CAD”) (see, e.g., References 9 and 14), and surgery and treatment planning. Furthermore, early cancer detection and staging often depend on the results of segmentation.
Remarkable progress has been made in the segmentation of radiological images, such as magnetic resonance imaging (“MRI”) and computed tomography (“CT”) 3D scans. Radiological images exhibit various objects, such as abdominal organs (see e.g.,
However, while single-class datasets often contain the same objects within a single image, the ground truth annotations are provided for only a particular class of objects in the form of binary masks, and the sets of images from different datasets do not overlap. Thus, it is problematic to simply combine the datasets to train a single model for multi-class segmentation. Classically, single-class datasets have been used to develop highly tailored solutions for the segmentation of particular classes.
Conditioning has been widely used in image synthesis. Conditioning can include providing some identifiable information regarding the image (e.g., an identification of one or more targets within an image, an identification of the origin of the image, etc.). Work has been performed on generating images conditioned on certain attributes, such as category or labels. (See, e.g., References 23, 38, 42 and 43). Additionally, a framework has been proposed for person image synthesis based on arbitrary poses. (See, e.g., Reference 26). A distribution of potential results of the image-to-image translation has been modelled. (See, e.g., Reference 49). The synthesis of images given the desired content and its location within the image has also been demonstrated. (See, e.g., Reference 32). However, the area of conditional convolutional neural networks (“convnets”) for semantic segmentation of images has not been explored.
Additionally, it can be difficult to collect large-scale, carefully annotated datasets for semantic segmentation. (See, e.g., References 37, 39 and 46). Various approaches have been proposed for learning to perform segmentation using weakly labeled data. Weak annotations, in the form of image labels (see, e.g., Reference 22), points and scribbles (see, e.g., References 1 and 18), bounding boxes (see, e.g., Reference 6), and their combinations (see, e.g., References 30 and 41), have been explored for learning image segmentation models. Weakly-supervised segmentation still assumes the availability of annotations of every object from a collection of pre-defined target classes if one is present in an image. With regard to CT images, each slice would require a set of annotations for every target organ present on a slice, be it seeds, bounding boxes or labels. However, single-class datasets do not come with such annotations, and provide details for only one particular class.
Segmentation of anatomical structures, especially abdominal organs, can be a difficult problem, as the organs may demonstrate a high variability in size, position, and shape. (See e.g.,
Several studies have been proposed for the simultaneous multi-class, or multi-organ, segmentation of anatomical structures in medical images. The majority of these utilize probabilistic atlases (see, e.g., References 4, 29 and 40) and statistical shape models. (See, e.g., Reference 28). These methods require all volumetric images in the training dataset to be registered. This pre-processing step can be computationally expensive and often imperfect due to the considerable variations in size, shape, and location of abdominal organs between patients. Recently, a few convnet-based solutions (see, e.g., Reference 35) were proposed for simultaneous multi-organ segmentation. However, all such methods were developed and evaluated on publicly unavailable multi-class segmentation datasets. Moreover, the multi-class datasets that were used were acquired by a single institution, exhibit uniform image quality, and lack chronic abnormalities. In contrast, diverse single-class datasets can be leveraged, and conditioning of a convnet can be used to develop a multi-class segmentation model of high generalization ability.
Thus, it may be beneficial to provide an exemplary system, method, and computer-accessible medium for multi-class segmentations from single-class datasets which can overcome at least some of the deficiencies described herein above.
The exemplary system, method, and computer-accessible medium for generating a multiclass image segmentation model(s) can include receiving multiple single-class image datasets, receiving a target mask for each of the single-class image datasets, receiving a condition of an object associated with each of the single-class image datasets, and generating the multiclass image segmentation model(s) based on the single-class image datasets, the target masks, and the identification of the target objects. The single-class image datasets can include computed tomography images of abdominal organs. The single-class image datasets can be non-overlapping single-class image datasets. The single-class image datasets can include medical imaging datasets or cityscape datasets. The condition can include (i) an identification of a target object associated with each image in each single-class image dataset, (ii) a classification of each image associated with each single-class image dataset or (iii) an identifiable detail regarding each image in each single-class image dataset.
In some exemplary embodiments of the present disclosure, the target mask can be a segmentation mask. The multiclass image segmentation model(s) can be generated using a convolutional neural network(s) (“CNN”). The multiclass image segmentation model(s) can be generated using the condition as an input into the CNN(s). The multiclass image segmentation model(s) can be generated using the condition in an encoder stage of the CNN(s). The encoder can include (i) a convolutional layer(s) and (ii) at least six DenseBlock+MaxPooling layers. A number of feature channels in each of the DenseBlock+MaxPooling layers can be proportional to a depth of each of the DenseBlock+MaxPooling layers. The multiclass image segmentation model(s) can be generated using the condition in a decoder stage of the CNN(s). The decoder can include (i) at least two convolutional layers and (ii) at least six Transposed Convolutions+DenseBlock layers. The Transposed Convolutions can include strides as upsampling layers.
In certain exemplary embodiments of the present disclosure, the CNN(s) can include a skip connection(s). The multiclass image segmentation model(s) can be generated by training the multiclass image segmentation model(s) separately on (i) each class of the single-class image datasets, (ii) the target mask associated with each of the single-class image datasets, and (iii) the condition associated with each of the single-class image datasets. The condition can be obtained from a lookup table containing an entry for each of the single-class image datasets. A further single-class image dataset can be received, a further target mask for the further single-class image dataset can be received, a further condition associated with the further single-class image dataset can be received, and the multiclass image segmentation model(s) can be updated based on the further single-class image dataset, the further target mask, and the further condition.
These and other objects, features and advantages of the exemplary embodiments of the present disclosure will become apparent upon reading the following detailed description of the exemplary embodiments of the present disclosure, when taken in conjunction with the appended claims.
Further objects, features and advantages of the present disclosure will become apparent from the following detailed description taken in conjunction with the accompanying Figures showing illustrative embodiments of the present disclosure, in which:
Throughout the drawings, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the present disclosure will now be described in detail with reference to the figures, it is done so in connection with the illustrative embodiments and is not limited by the particular embodiments illustrated in the figures and the appended claims.
The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can utilize a single convolutional network (“convnet”) for multi-class segmentation using non-overlapping single-class datasets for training by conditioning the neural network. Conditioning can include an identification of the target object to be segmented. For example, when training a model to identify a liver, a spleen, and a pancreas, the model can be trained on multiple single-class images (e.g., images having only one of the objects identified) for each target object, along with an identification of the specific target object present in a particular image. The exemplary system, method and computer-accessible medium can implicitly share all of its parameters across all target classes being modeled. This can drive the exemplary system, method and computer-accessible medium to effectively learn the spatial connections between objects of different classes and improve its generalization ability.
A system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can condition a convnet for segmentation and can produce multi-class segmentations using a single model trained on non-overlapping single-class datasets. The exemplary conditioning facilitates an efficient multi-class segmentation with a single model trained on single-class datasets, drastically reducing the training complexity and the total number of parameters in comparison to separate class-specific models. The exemplary system, method and computer-accessible medium can also improve on state-of-the-art results, by up to 2.7%, on publicly available datasets for the segmentation of liver, spleen, and pancreas, with significantly reduced computational cost. Additionally, the present system, method and computer-accessible medium can be applied to segment various image types and applications, such as natural images, and can be used to evaluate, for example, Cityscapes datasets. (See, e.g., Reference 5).
Exemplary Method
As opposed to generating and overlaying separate models for each object in single-class datasets, the present system, method and computer-accessible medium for multi-class image segmentation can simultaneously learn multi-class knowledge given a set of single-class datasets. Consider a set of single-class datasets {D1, . . . , DK}, where each dataset Dk={(Xk, Yk,ck)} can contain a set of input images Xk and the corresponding binary segmentation masks Yk,ck of a single class ck from a set of target classes C.
Exemplary Model
As shown in
Specifically, each densely connected block can compute xl=Fl([x0, x1, . . . , xl-1]), where [ . . . ] can be a concatenation operation of the feature maps from previous layers. In the exemplary experiments, Fl(⋅) can be defined as a leaky rectified linear unit (“LReLU”) (see, e.g., Reference 27), with α=0.3, followed by a 3×3×3 convolution. Encoder 310 can include a convolutional layer, followed by six densely connected convolutional blocks, sequentially connected via 2×2×2 maxpooling layers. The number of feature channels in each dense block can be proportional to its depth. The number of feature channels can be an arbitrarily determined number. Alternatively, the number of feature channels can double with each new layer in the neural network. Decoder 315 can utilize transposed convolutions with strides as upsampling layers and can be topologically symmetric to encoder 310. The last convolutional layer can end with a sigmoid function, which can be used to generate a probability map.
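By way of a non-limiting illustration, one possible composition of such a densely connected block is sketched below, assuming the Keras functional API with a TensorFlow backend; the block depth (num_layers) and channel count (growth_rate) are illustrative assumptions rather than the exact configuration described herein.

```python
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=16, alpha=0.3):
    # Each step applies F_l (an LReLU with alpha=0.3 followed by a 3x3x3
    # convolution) to the concatenation of all preceding feature maps.
    features = [x]
    for _ in range(num_layers):
        h = layers.Concatenate()(features) if len(features) > 1 else features[0]
        h = layers.LeakyReLU(alpha=alpha)(h)
        h = layers.Conv3D(growth_rate, kernel_size=3, padding='same')(h)
        features.append(h)
    return layers.Concatenate()(features)
```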
It will be appreciated that there are numerous machine learning models available to those skilled in the art and that the exemplary convnet model 205 described herein is but one acceptable model. Other models may be suitable for use in practicing the present segmentation systems and methods in accordance with the present teachings.
Exemplary Conditioning
Unlike classic approaches of training separate models for each class cm∈C, the exemplary system, method and computer-accessible medium can infer (e.g., determine or predict) the segmentations and the relationships of multiple classes from single-class datasets and learn to generate segmentations for all classes cm with a single model. The segmentations can be generated for all classes in any order, which can be randomly selected. To introduce such an ability into the exemplary model, the base convolutional model can be conditioned on the target class cm that needs to be segmented.
An exemplary goal was to keep the base model fully-convolutional, simple, and efficient in order to avoid additional overhead that could negatively affect the performance. To achieve this, the conditional information can be incorporated as a part of the intermediate activation signal after performing convolutional operations and before applying nonlinearities. While some examples of conditioned generative adversarial nets (“GANs”) (see, e.g., Reference 32) suggest learning the conditional function, a more computationally efficient approach can be used for the task of segmentation. Specifically, the following exemplary function can be used:
φ(cm, Hl, Wl, Dl)=ĉm⊙OHl×Wl×Dl, where ⊙ can be an element-wise multiplication, OHl×Wl×Dl can be a tensor of size Hl×Wl×Dl with all of its elements set to 1, ĉm can be a scalar value uniquely associated with the target class cm (e.g., obtained from the lookup table described herein), and Hl, Wl and Dl can be the spatial dimensions of the feature maps at layer l. The conditional information can then be incorporated into the activations of a layer l, e.g., as x̂l=fl(xl-1)⊙φ(cm, Hl, Wl, Dl), where xl-1 can be the output of the previous layer and fl(⋅) can be the convolutional operation of layer l, with the nonlinearity subsequently applied to the conditioned activation x̂l. (See e.g.,
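A minimal sketch of such a conditioning operation is provided below; the scalar encoding of the target class is an illustrative assumption, and in practice the value can be drawn from the lookup table described herein.

```python
import tensorflow as tf

def condition(x, class_value):
    # Broadcast the class-specific scalar over a tensor of ones matching the
    # spatial dimensions of the activation x, then multiply element-wise.
    ones = tf.ones_like(x[..., :1])  # O_{Hl x Wl x Dl}
    return x * (tf.cast(class_value, tf.float32) * ones)
```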
During training time, the network can be trained on pairs (xik, yik,ck), where xik can be an image from the single-class dataset Dk, yik,ck can be the corresponding binary segmentation mask of the class ck, and ck can be the conditioning class associated with the dataset Dk (e.g., obtained from a lookup table containing an entry for each of the single-class datasets).
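For example, the lookup table and the sampling of conditioned training pairs can be sketched as follows; the dataset names and class entries below are illustrative assumptions.

```python
import random

# Hypothetical lookup table: one entry per single-class dataset.
CLASS_LOOKUP = {'sliver07': 'liver', 'liver_spleen': 'spleen', 'nih': 'pancreas'}

def sample_training_pair(datasets):
    # Sample a dataset with equal probability, then an (image, mask) pair
    # together with the conditioning class c_k associated with that dataset.
    name = random.choice(list(datasets))
    image, mask = random.choice(datasets[name])
    return image, mask, CLASS_LOOKUP[name]
```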
Conditioning can be performed at various locations in the model.
Exemplary Experiments
The exemplary system, method and computer-accessible medium was tested with different kinds of loss functions and various methods of conditioning, and the results were compared to solutions that were individually customized for each single-class dataset or designed for multi-class datasets. The conditioned multi-class segmentation, as described herein, outperformed state-of-the-art single-class segmentation approaches for biomedical images.
Exemplary Datasets: To evaluate the exemplary system, method and computer-accessible medium, three datasets of abdominal CT volumes were utilized. In particular, volumes of the publicly available Sliver07 dataset (see, e.g., Reference 15) of liver segmentations, 82 volumes of the publicly available NIH Pancreas dataset (see, e.g., Reference 16) of pancreas segmentations, and 74 volumes from an additional dataset of liver and spleen segmentations were used. In the exemplary experiments, cm∈C={liver, spleen, pancreas}. The segmentation masks in the latter dataset have been binarized and stored as separate single-class files. Examples of the CT images and the corresponding ground-truth segmentation masks are illustrated in
The input images were minimally preprocessed: each dataset was sampled with an equal probability, and subvolumes of size 256×256×32 were extracted and normalized to create the input images. Additionally, all training examples were augmented with small random rotations, zooms, and shifts.
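A minimal sketch of such an augmentation, assuming a 3D subvolume without a channel dimension, is shown below; the parameter ranges are illustrative assumptions (a random zoom would additionally require cropping or padding back to 256×256×32 and is omitted here).

```python
import numpy as np
import scipy.ndimage as ndi

def augment(subvolume):
    # Small random in-plane rotation; the range (degrees) is an assumption.
    angle = np.random.uniform(-5, 5)
    subvolume = ndi.rotate(subvolume, angle, axes=(0, 1), reshape=False, order=1)
    # Small random shift along all three axes; the range (voxels) is an assumption.
    shift = np.random.uniform(-4, 4, size=3)
    return ndi.shift(subvolume, shift, order=1)
```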
Exemplary Training: the exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, was trained on examples from all used single-class datasets. It was optimized with the following objective: L=Σi αiLi(Yci, Ŷci), where Li(Yci, Ŷci) can be the loss computed for class ci between the ground truth segmentation mask Yci and the predicted mask Ŷci, and αi can be a class-specific weighting coefficient.
Exemplary Inference: During the inference time (e.g., the time when the segmentation is determined or predicted), the target segmentation class ci can be manually specified. However, to simplify the use of the exemplary system, method and computer-accessible medium during the inference time, the process of specifying the target segmentation class can be automated by iteratively going through all the entities in the lookup table. Alternatively, specifically for segmentation of abdominal organs, a set of presets can be defined, such as liver and gallbladder, which can be gathered by clinicians.
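A minimal sketch of such automated inference is shown below, assuming a trained model that accepts an image volume and a conditioning value as its two inputs; the class set and the integer encoding are illustrative assumptions.

```python
import numpy as np

def segment_all_classes(model, volume, classes=('liver', 'spleen', 'pancreas')):
    # Iterate through all entries of the lookup table, conditioning the model
    # on one target class at a time and thresholding each probability map.
    masks = {}
    for class_id, name in enumerate(classes, start=1):
        prob = model.predict([volume[np.newaxis], np.array([class_id])])[0]
        masks[name] = prob > 0.5
    return masks
```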
Exemplary Implementation: the exemplary system, method and computer-accessible medium was implemented using the Keras library with a TensorFlow backend. The exemplary network was trained from scratch using the Adam optimizer (see, e.g., Reference 21), with an initial learning rate of 0.00005, and β1=0.9, β2=0.999, with a batch size of 2 for 25K iterations.
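For illustration, a corresponding training configuration can be sketched as follows; `model`, `train_generator` (yielding conditioned image/mask pairs with a batch size of 2), and the DSC-based loss `dsc_loss` (sketched below) are assumed to be defined elsewhere.

```python
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=5e-5, beta_1=0.9, beta_2=0.999),
              loss=dsc_loss)
# 25K iterations at a batch size of 2, e.g., 25 epochs of 1,000 steps each.
model.fit(train_generator, steps_per_epoch=1000, epochs=25)
```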
Exemplary Ablation Experiments
The probability map can be binarized by thresholding it at 0.5 to generate the segmentation mask. Other suitable thresholding values (e.g., above or below 0.5) can be used based on the desired sensitivity or specificity. To measure the similarity between binary segmentation masks Y and Ŷ, the common Dice Similarity Coefficient (“DSC”) metric was used, which can be defined as DSC(Y, Ŷ)=2|Y∩Ŷ|/(|Y|+|Ŷ|).
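For example, the binarization and the DSC computation can be implemented, e.g., as follows:

```python
import numpy as np

def dsc(y_true, y_prob, threshold=0.5):
    # Binarize the probability map at the given threshold, then compute the
    # Dice Similarity Coefficient between the two binary masks.
    y_pred = y_prob > threshold
    y_true = y_true.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    return 2.0 * intersection / (y_true.sum() + y_pred.sum())
```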
The results were compared against the current state-of-the-art segmentation methods, which were proposed specifically for single-class segmentation, and can be tailored for a particular class only. In particular, the results were compared against a two-step coarse-to-fine convnet-based solution for pancreas segmentation, which yielded 82.4% DSC on the NIH Pancreas dataset. (See, e.g., References 16 and 48). The exemplary system, method and computer-accessible medium was also compared to another convnet-based segmentation (see, e.g., Reference 44), which showed 95% DSC on a private dataset of 1,000 CT images of the liver. Additionally, the results were compared against a two-stage coarse-to-fine multi-organ convnet-based solution (see, e.g., Reference 35), which was evaluated on a private multi-class dataset and resulted in 95.4%, 92.8%, and 82.2% DSC for liver, spleen, and pancreas, respectively.
In all experiments described below, αi=1 and the following DSC-based loss function was used: L(Yci, Ŷci)=1−DSC(Yci, Ŷci), e.g., one minus a differentiable (soft) DSC computed on the predicted probability maps.
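A minimal sketch of such a DSC-based loss, assuming a soft (differentiable) formulation with a small smoothing constant, is provided below:

```python
import tensorflow as tf

def dsc_loss(y_true, y_pred, eps=1e-6):
    # One minus a soft DSC computed on the predicted probabilities; eps
    # guards against division by zero for empty masks.
    intersection = tf.reduce_sum(y_true * y_pred)
    denom = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```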
A binary cross-entropy loss function was also tested, but showed significantly worse performance. Experiments were performed by analyzing the performance of an exemplary model trained separately for each class cm without the use of conditioning. This can be referred to as indivs. Channels for each class cm (e.g., liver, spleen, and pancreas) are illustrated in the graphs shown in
A naive approach of training a single model on single-class datasets to produce reasonable multi-class segmentation results was also examined, by predicting a volume of the same dimensions but with three additional channels for liver, spleen, and pancreas. This can be referred to as no cond, and the learning curves are illustrated in the graphs shown in
The next experiments describe the results of the conditioned model. In the experiment cond-2nd, conditioning a model by providing the conditional information as the second channel of the input volume was examined. (See e.g., graphs shown in
Further, conditioning the decoder part of the exemplary model was examined. This can be referred to as cond-dec. (See e.g., graphs shown in
It was observed that the exemplary model accurately delineates all the target objects even in a difficult case illustrated in
Importance of spatial connections between classes: To evaluate the effect of the spatial correlation between classes on the model's performance, the cond-dec model was evaluated on corrupted images. For CT images in particular, the baseline performance, as shown in Table 1, was compared to the performance on images where different classes were corrupted by randomly replacing 70% of the corresponding voxels with intensity values common for the fatty tissue between organs. An example of a corrupted image for spleen is illustrated in
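The corruption procedure can be sketched, e.g., as follows; the Hounsfield-unit range used for fatty tissue is an illustrative assumption.

```python
import numpy as np

def corrupt_class(volume, class_mask, fraction=0.7, fat_hu=(-100, -50)):
    # Randomly replace a fraction of the voxels of one class with intensity
    # values typical of the fatty tissue found between abdominal organs.
    corrupted = volume.copy()
    idx = np.flatnonzero(class_mask)
    chosen = np.random.choice(idx, size=int(fraction * idx.size), replace=False)
    corrupted.flat[chosen] = np.random.uniform(*fat_hu, size=chosen.size)
    return corrupted
```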
Applicability to natural images: The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used on a wide range of image types, and is not limited to medical imaging applications. To demonstrate the applicability of the exemplary system, method and computer-accessible medium to other domains, an exemplary model for semantic segmentation of natural images was trained. While datasets of natural images are generally multi-class (e.g., multiple objects in an image can be annotated), the validation of the exemplary system, method and computer-accessible medium on natural image datasets can be valuable. Thus, as shown in
The dataset used to develop the segmentation of
Exemplary Model
Table 3 below shows details on the architecture of the exemplary model.
As shown in Table 3, the exemplary model can include at least one convolutional layer, and at least six DenseBlock+MaxPooling layers. The combination of the convolutional and DenseBlock+MaxPooling layers can form an encoder used by the exemplary system, method and computer-accessible medium. As also shown in Table 3, the exemplary model can also include at least six TransConv+DenseBlock layers, and two convolutional layers.
Exemplary Discussion
The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used to learn multi-class segmentations of 2D or 3D images from single-class datasets by conditioning a convnet for the purpose of multi-class segmentation. Experimental evaluations of the various ways of conditioning the model were performed, which determined that providing each layer in the decoder direct access to the conditional information can yield the most accurate segmentation results. The exemplary system, method and computer-accessible medium was evaluated on the task of segmentation of medical images, where the problem of single-class datasets naturally arises, but it can be broadly applied to other multi-class segmentation applications. While being significantly more computationally efficient, the method outperforms current state-of-the-art solutions, which were specifically tailored for each single-class dataset. Additionally, the exemplary system, method and computer-accessible medium can be applied to the semantic segmentation of any images, such as natural images, using any exemplary dataset.
The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used in various domains. In particular, the exemplary system, method and computer-accessible medium can be applied for the detection of cancer metastases in pathology images. Pathology datasets show similar fragmentation—a unified database of pathology images of various biological tissues, such as brain or breast, currently does not exist and research focuses on separate subproblems. Similarly to the exemplary experiments, a convnet can be conditioned on the target type of metastasized cancer cells in different tissue samples. The exemplary system, method and computer-accessible medium can be used for instance-level segmentation, where each instance can be conditioned on certain attributes, such as size, color, etc., or something more sophisticated, such as species or kind. Furthermore, a method of learning data representations in multiple visual domains for the purpose of classification has been described. (See, e.g., Reference 31). The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can be used to augment such works for the purpose of segmentation.
The exemplary system, method and computer-accessible medium, according to an exemplary embodiment of the present disclosure, can incorporate images from different domains. For example, multimodal radiological images (e.g., CT images, MRI images, etc.) can be used for training, and classes can be transferred between imaging modalities.
As shown in
Further, the exemplary processing arrangement 1005 can be provided with or include an input/output arrangement 1035 (e.g., input/output ports), which can include, for example a wired network, a wireless network, the internet, an intranet, a data collection probe, a sensor, etc. Input/output arrangement 1035 can be used to communicate with one or more remote storage arrangements where the single-class datasets, the target masks, and the lookup tables can be stored. As shown in
The foregoing merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements, and procedures which, although not explicitly shown or described herein, embody the principles of the disclosure and can be thus within the spirit and scope of the disclosure. Various different exemplary embodiments can be used together with one another, as well as interchangeably therewith, as should be understood by those having ordinary skill in the art. In addition, certain terms used in the present disclosure, including the specification, drawings and claims thereof, can be used synonymously in certain instances, including, but not limited to, for example, data and information. It should be understood that, while these words, and/or other words that can be synonymous to one another, can be used synonymously herein, that there can be instances when such words can be intended to not be used synonymously. Further, to the extent that the prior art knowledge has not been explicitly incorporated by reference herein above, it is explicitly incorporated herein in its entirety. All publications referenced are incorporated herein by reference in their entireties.
The following references are hereby incorporated by reference in their entireties.
This application relates to and claims priority from U.S. Patent Application No. 62/854,660, filed on May 30, 2019, the entire disclosure of which is incorporated herein by reference.
This invention was made with government support under Grant No. HL 127522, awarded by the National Institutes of Health. The government has certain rights in the invention.