This disclosure relates generally to medical devices. More particularly, this disclosure relates to automated methods for segmenting corneal nerve fiber images.
Peripheral neuropathy is a frequent neurological complication occurring in a variety of pathologies, including diabetes, human immunodeficiency virus (HIV) infection, Parkinson's disease, multiple sclerosis, and a number of systemic illnesses. In HIV, for example, it affects more than one-third of infected persons, with the typical clinical presentation known as distal sensory polyneuropathy, a neuropathy characterized by bilateral aching, painful numbness, or burning, particularly in the lower extremities. This debilitating disorder greatly compromises patient quality of life.
Conventionally, monitoring patients for peripheral neuropathies is performed by skin biopsy. The skin biopsy is used to measure loss of small, unmyelinated C fibers in the epidermis, one of the earliest detectable signs of damage to the peripheral nervous system. However, skin biopsy is a painful and invasive procedure, and longitudinal assessment requires repeated surgical biopsies. The development and implementation of non-invasive approaches is therefore paramount.
One promising non-invasive approach for detecting peripheral neuropathies is corneal nerve assessment. Such assessments may be made by analyzing images of nerve fibers. Unfortunately, existing methods are poorly developed and lack the accuracy needed to diagnose and monitor patients.
This disclosure relates to systems and methods for assessing corneal nerve fibers from images captured by non-invasive imaging techniques to generate data for detecting neural pathologies. In particular, methods of the invention take images of corneal nerve fibers, pre-process those images, apply a deep learning based segmentation to the data, and report nerve fiber parameters, such as length, density, and tortuosity. Such metrics are clinically useful for diagnosing and staging a variety of neuropathies that attack the peripheral nervous system. The image data may come from different modalities, including in vivo confocal microscopy, optical coherence tomography, and other sensing techniques that create images in which the nerves are visible in two- or three-dimensional data.
In one aspect, the invention provides a method that includes obtaining imaging data comprising images of nerve fibers; pre-processing the imaging data; training a classifier to recognize nerve fiber locations in the images using pre-processed images and labels, e.g., hand-drawn labels; applying the trained classifier to assign a score to each of a plurality of image pixels of an input image, wherein the score comprises a probability, or represents a likelihood, that each of the plurality of image pixels represents a nerve fiber; and post-processing the input image to create a new image (e.g., a binary image) that indicates locations of pixels that represent nerves from the input image.
In some embodiments, pre-processing the imaging data comprises equalizing contrast and correcting non-uniform illumination specific to each of the images of nerve fibers. Equalizing may be performed using at least one of a top-hat filter, low-pass filtering and subtraction, or flat-fielding based on a calibration step.
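For illustration, the low-pass filtering and subtraction option can be sketched as follows. This is a minimal sketch, not the disclosure's implementation: the separable box blur, the window size, and the rescale to [0, 1] are all illustrative assumptions.

```python
import numpy as np

def lowpass_subtract(img, size=15):
    """Flatten non-uniform illumination by estimating the slowly varying
    background with a separable box blur and subtracting it."""
    kernel = np.ones(size) / size
    # blur down the columns, then across the rows (separable low-pass)
    blur = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    blur = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, blur)
    corrected = img - blur
    corrected -= corrected.min()        # rescale result to [0, 1]
    peak = corrected.max()
    return corrected / peak if peak > 0 else corrected
```

A smooth illumination gradient is almost entirely captured by the blur, so subtracting it leaves a nearly flat interior.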
In some embodiments, the imaging data comprises images of nerve fibers that are taken with a microscope. For example, the imaging data may comprise images taken with a confocal microscope, e.g., by in vivo corneal confocal microscopy. In other embodiments, the imaging data comprises optical coherence tomography data. In such instances where optical coherence tomography data is used, contrast equalization by methods of the invention may be based on limiting the integration range, or based on one of a minimum, a maximum, an average, a sum, or a median in the depth direction.
Some steps of the method are preferably performed offline. For example, training the classifier may be performed offline. Preferably, applying the trained classifier to assign scores to image pixels is performed online.
In some embodiments, methods of the invention employ one of a classifier, a detector, or a segmenter that comprises one of a deep neural network or a deep convolutional neural network. The deep neural network may comprise an encoding and decoding path, as in an auto-encoder architecture, for example, a SegNet or U-Net architecture.
In some embodiments, post-processing comprises thresholding the input image and a skeletonization of the thresholded image. Post-processing may comprise a classifier trained to take a probability image and return a binary image. Alternatively, post-processing may comprise a thresholding of the input image and a center-line extraction of the thresholded image. The binary image may be useful for diagnosing neuropathies. The binary image may be useful for monitoring a patient response to a treatment, e.g., a chemotherapy treatment.
In other aspects, the present disclosure relates to a non-transitory computer-readable medium storing software code representing instructions that when executed by a computing system cause the computing system to perform a method of identifying the nerve fibers in an image. The method comprises obtaining an imaging data set containing an image of nerve fibers; preprocessing the data to equalize the contrast and correct non-uniform illumination specific to each of the images; training, preferably offline, a segmenter or classifier to recognize nerve locations in an image using the preprocessed images as input and hand-drawn labels as truth; applying, preferably online, the trained classifier to assign a probability of representing a nerve to each of the image pixels of an input image; and post-processing the probability image to create a binary image indicating the locations of all of the pixels in the input image representing nerve fibers.
Preferably, the contrast equalization step comprises one of a top-hat filter, a low-pass filtering and subtraction step, or flat-fielding based on a calibration step. The image data may comprise images from a microscope, such as a confocal microscope. Alternatively, the image data comprise optical coherence tomography data. In embodiments where the image data comprises optical coherence tomography data, the contrast equalization step may be based on limiting the integration range, or based on one of a minimum, a maximum, an average, a sum, or a median in the depth direction.
In preferred embodiments, the classifier used by methods of the invention comprises one of a deep neural network or a deep convolutional neural network. The deep convolutional neural network may comprise an auto-encoder architecture. The auto-encoder architecture may follow a SegNet architecture or a U-Net architecture.
In certain embodiments, post-processing comprises thresholding of the image and a skeletonization of the thresholded image. In some embodiments, post-processing comprises a classifier trained to take a probability image and return a binary image. In other embodiments, post-processing involves thresholding of an image and a center-line extraction of the thresholded image.
This disclosure provides systems and methods for robust, repeatable quantification of corneal nerve fibers from image data. The cornea is the most densely innervated tissue in the body, and analysis of corneal nerves is sensitive for detecting small sensory nerve fiber damage. Segmentation of the nerve fibers in these images is a necessary first step to quantifying how the corneal nerve fibers may have changed as a result of disease or some other abnormality. The procedure, at a high level, is detailed in
Acquired image data may be messy or may come from different sources. To feed them into machine learning systems or neural networks according to methods of the invention, the data may need to be standardized and/or cleaned up. Preprocessing may be used to reduce training complexity, i.e., by narrowing the learning space, and/or to increase the accuracy of applied algorithms, e.g., algorithms involved in image segmentation. Data preprocessing techniques according to aspects of the invention may comprise one of an intensity adjustment step or a contrast equalization step. Additionally, pre-processing may include converting color images to grayscale to reduce computational complexity. Grayscale is generally sufficient for recognizing certain objects.
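The grayscale conversion mentioned above can be sketched with standard luminance weights. The BT.601 weights used here are a conventional choice, not one mandated by the disclosure.

```python
import numpy as np

def to_grayscale(rgb):
    """Collapse an H x W x 3 color image to a single channel using the
    conventional BT.601 luminance weights (0.299 R + 0.587 G + 0.114 B)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights
```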
Pre-processing may involve standardizing images. One important constraint that may exist in some machine learning algorithms, such as convolutional neural networks, is the need to resize the images in the image dataset to a unified dimension. For example, the images may be scaled to a specific width and height before being fed to the learning algorithm.
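Such resizing to a unified dimension might, for example, use nearest-neighbour index mapping. This is a deliberately simple sketch; production pipelines would typically use an interpolating resampler.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Resize a 2-D image to (out_h, out_w) by nearest-neighbour index
    mapping (no interpolation)."""
    in_h, in_w = img.shape
    # map each output row/column back to its nearest source index
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows[:, None], cols]
```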
Pre-processing may involve techniques for augmenting the existing dataset with perturbed versions of the existing images. Scaling, rotations, and other affine transformations may be involved. This may be performed to enlarge a dataset and expose a neural network to a wide variety of variations of images. Data augmentation may be used to increase the probability that the system recognizes objects when they appear in any form and shape. Many preprocessing techniques may be used to prepare images for training a machine learning model. In some instances, it may be desirable to remove variant background intensities from images to create a more uniform appearance and contrast. In other instances, it may be desirable to brighten or darken the images. Preferably, pre-processing comprises an intensity adjustment step or a contrast equalization step.
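The flip-and-rotation portion of such augmentation can be sketched as follows; general affine transforms (arbitrary scaling and rotation) are omitted for brevity.

```python
import numpy as np

def augment(img):
    """Return 8 perturbed copies of an image: the 4 right-angle rotations
    and their left-right mirror images (the dihedral group of the square)."""
    views = []
    for k in range(4):
        rot = np.rot90(img, k)
        views.append(rot)
        views.append(np.fliplr(rot))
    return views
```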
In some embodiments, an explicit calibration step may be used in instances where the inhomogeneity results mostly from the optics of the system. This is often referred to as flat fielding, and involves imaging a uniform target—such as a white, flat, surface—to directly measure how intensity falls off at the periphery. The correction is then applied based on this calibration image.
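Assuming such a calibration ("flat") image of a uniform target is available, the correction might be applied as below; the mean-preserving normalization is an illustrative choice.

```python
import numpy as np

def flat_field_correct(raw, calib, eps=1e-6):
    """Correct vignetting by dividing out the relative illumination pattern
    measured from a calibration image of a uniform target."""
    calib = np.asarray(calib, dtype=float)
    gain = calib / calib.mean()          # relative fall-off at each pixel
    return np.asarray(raw, dtype=float) / np.maximum(gain, eps)
```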
In other embodiments, a simple histogram equalization or adaptive histogram equalization step may be employed. A person of skill in the art will recognize that any technique that can more evenly distribute the intensities of the foreground and background pixels may be useful for the pre-processing step. This of course may depend on the modality. For example, in optical coherence tomography data, the process might involve restricting the integration range of the data used to create a 2-dimensional image from a 3-dimensional volume, as optical coherence tomography data is depth resolved. In such cases, a 3-dimensional volume, acquired at the cornea, may be converted to 2-dimensional via integration of the data along the axial direction. Alternatively, the 2-dimensional image may be produced by taking the maximum, minimum, median, or average value through the axial dimension. Furthermore, the choice of axial range could be limited based on structural landmarks. Once the pre-processing is complete, the equalized images may then be provided to a segmentation module.
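The depth-collapse options enumerated above might be sketched as follows for a volume stored as a (Z, Y, X) array; the array layout and the function name are assumptions for illustration.

```python
import numpy as np

def en_face_projection(volume, mode="max", z_range=None):
    """Collapse a (Z, Y, X) OCT volume to a 2-D en face image.

    mode: one of 'max', 'min', 'mean', 'sum', 'median'
    z_range: optional (start, stop) pair limiting the axial integration
             range, e.g. chosen from structural landmarks
    """
    if z_range is not None:
        volume = volume[z_range[0]:z_range[1]]
    ops = {"max": np.max, "min": np.min, "mean": np.mean,
           "sum": np.sum, "median": np.median}
    return ops[mode](volume, axis=0)     # reduce along the depth axis
```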
Methods of the invention provide for the automated segmentation of nerve fibers. Such methods provide a more accurate and repeatable measure of nerve fiber density and permit calculation of higher-order features from the segmentations, such as tortuosity, curvature statistics, branch points, and bifurcations. This ability to automatically and accurately quantify nerve fibers from image data is useful for diagnosing neuropathies secondary to a very large number of pathologies, including diabetes and HIV. It can also detect and monitor neuropathies stemming from chemotherapy and other potentially damaging treatment protocols. An exemplary segmentation pipeline is depicted in
Segmentation, according to aspects of the invention, may rely on a classifier. The classifier offers a supervised learning approach in which a computer program learns from input data, e.g., images with hand-labeled nerves, and then uses this learning to classify new observations, e.g., locations of nerves in unlabeled images. The classifier may comprise any known algorithm used in the art. For example, the classifier may comprise a linear classifier, logistic regression, a naive Bayes classifier, nearest neighbor, support vector machines, decision trees, boosted trees, random forests, or a neural network algorithm. Preferably, the classifier uses a deep convolutional neural network, for example, as described in Ronneberger, 2015, U-Net: Convolutional Networks for Biomedical Image Segmentation, incorporated by reference. Alternative architectures may include an auto-encoder, such as the auto-encoder described in Badrinarayanan, 2015, SegNet: A Deep Convolutional Encoder-Decoder Architecture for Robust Semantic Pixel-Wise Labelling, incorporated by reference. Preferably, the segmentation is performed using a deep convolutional neural network U-Net as a classifier for associating each input pixel with a probability of being a nerve pixel. Alternative embodiments include, but are not limited to, any supervised learning based classifier, including: a support vector machine, a random forest, a deep convolutional neural network auto-encoder architecture, a deep convolutional V-Net architecture (3-D U-Net), or a logistic regression. The model is trained using input images which have had the nerves hand-labeled to serve as a ground truth (See,
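Since logistic regression is named above as one admissible classifier, the supervised train-then-predict pattern can be illustrated with a minimal per-pixel logistic-regression segmenter. This is a stand-in for the preferred deep networks, not a substitute for them; the two-feature design (intensity plus a 3x3 local mean) and all function names are hypothetical.

```python
import numpy as np

def extract_features(img):
    """Per-pixel features: raw intensity plus a 3x3 local mean."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    local_mean = sum(padded[dr:dr + h, dc:dc + w]
                     for dr in range(3) for dc in range(3)) / 9.0
    return np.stack([img.ravel(), local_mean.ravel()], axis=1)

def train_pixel_classifier(img, labels, lr=1.0, epochs=2000):
    """Fit per-pixel logistic regression by gradient descent.

    labels: binary image of hand-labeled nerve pixels (the ground truth).
    """
    X, y = extract_features(img), labels.ravel().astype(float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -50, 50)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict_proba(img, w, b):
    """Assign each pixel a probability of representing a nerve."""
    z = extract_features(img) @ w + b
    return (1.0 / (1.0 + np.exp(-np.clip(z, -50, 50)))).reshape(img.shape)
```

After training on a labeled image, bright nerve-like pixels receive higher probabilities than background pixels, mirroring the probability map the deep networks produce.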
Post-processing may involve two steps: thresholding and skeletonization. For example, first the probability map may be thresholded to separate the foreground (nerve pixels) from the background. Preferably this is performed using a method referred to as Otsu's method. Otsu's method, named after Nobuyuki Otsu, performs automatic image thresholding. In its simplest form, the algorithm returns a single intensity threshold that separates pixels into two classes, foreground and background. This threshold is determined by minimizing intra-class intensity variance, or equivalently, by maximizing inter-class variance. Otsu's method is a one-dimensional discrete analog of Fisher's discriminant analysis, is related to the Jenks optimization method, and is equivalent to a globally optimal k-means performed on the intensity histogram. The extension to multi-level thresholding was described in the original paper, and computationally efficient implementations have since been proposed. For example, as described in Nobuyuki Otsu (1979), A threshold selection method from gray-level histograms, IEEE Trans Sys Man Cyber, 9(1):62-66, incorporated by reference. Alternatively, a number of other methods may be used, including: non-maximum suppression followed by hysteresis thresholding, k-means clustering, spectral clustering, graph cuts or graph traversal, or level sets.
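Otsu's method as described above can be computed directly from the intensity histogram; the sketch below maximizes the between-class variance over every candidate split (the bin count is an illustrative choice).

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist, edges = np.histogram(np.ravel(values), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                       # class-0 weight per split
    w1 = np.cumsum(hist[::-1])[::-1]           # class-1 weight per split
    # class means below and above each candidate split
    m0 = np.cumsum(hist * centers) / np.maximum(w0, 1)
    m1 = (np.cumsum((hist * centers)[::-1])
          / np.maximum(w1[::-1], 1))[::-1]
    # between-class variance for every split between adjacent bins
    var_between = w0[:-1] * w1[1:] * (m0[:-1] - m1[1:]) ** 2
    return centers[np.argmax(var_between)]
```

On a bimodal probability map, the returned threshold falls in the gap between the two modes, cleanly separating foreground from background.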
Optionally, a skeletonization step may be applied. Thresholding provides a good estimate of the number of nerve pixels. What may be desired, however, is a count of the number of nerves and their lengths. If one simply counted the number of pixels in the thresholded image, one might overcount images with thicker nerves and score lengths incorrectly. Skeletonization is also an important step ahead of deriving higher-order features, such as curvature and tortuosity, that are useful clinically. Thus it may be preferable to use a “skeletonization” algorithm to reduce the width of the thresholded nerves to 1 pixel. For example, as described in Shapiro, 1992, Computer and Robot Vision, Volume I, Boston: Addison-Wesley. Other methods may include: a center-line extraction, which finds the shortest path between two extremal points; a medial axis transform; ridge detection; or a grassfire transform. Skeletonization, according to methods of the invention, is optional, as one might also want to measure nerve fiber width as a clinical end point. Accordingly, it may be desirable not to skeletonize the data if, for example, nerve fiber width is an important parameter. The output of post-processing is a binary image where each “on” pixel represents a segmented nerve.
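The reduction of thresholded nerves to 1-pixel-wide traces can be illustrated with a classical thinning algorithm; the sketch below assumes Zhang-Suen thinning, a common choice that the disclosure does not specifically prescribe.

```python
import numpy as np

def zhang_suen_thin(binary):
    """Iteratively thin a binary image until every stroke is 1 pixel wide."""
    img = (np.asarray(binary) > 0).astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            padded = np.pad(img, 1)
            to_delete = []
            for r, c in zip(*np.nonzero(img)):
                pr, pc = r + 1, c + 1
                # neighbours p2..p9, clockwise starting from north
                p = [padded[pr - 1, pc], padded[pr - 1, pc + 1],
                     padded[pr, pc + 1], padded[pr + 1, pc + 1],
                     padded[pr + 1, pc], padded[pr + 1, pc - 1],
                     padded[pr, pc - 1], padded[pr - 1, pc - 1]]
                b = sum(p)                      # count of nonzero neighbours
                if not 2 <= b <= 6:
                    continue
                # number of 0 -> 1 transitions walking once around the ring
                a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                if a != 1:
                    continue
                north, east, south, west = p[0], p[2], p[4], p[6]
                if step == 0:
                    ok = north * east * south == 0 and east * south * west == 0
                else:
                    ok = north * east * west == 0 and north * south * west == 0
                if ok:
                    to_delete.append((r, c))
            for r, c in to_delete:
                img[r, c] = 0
            changed = changed or bool(to_delete)
    return img
```

A thick bar thins to a single-pixel line whose length, rather than area, can then be measured; because thinning only removes pixels, the skeleton is always a subset of the thresholded mask.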
The binary image may be used for analyzing and quantifying nerve fibers, for example, as described in: Al-Fandawi, 2016, A fully automatic nerve segmentation and morphometric parameter quantification system for early diagnosis of diabetic neuropathy in corneal images, Comput Methods Programs Biomed, 135:151-166; Annunziata, 2016, A fully automated tortuosity quantification system with application to corneal nerve fibres in confocal microscopy images, Medical Image Analysis, 32:216-232; Chen X, 2017, An Automatic Tool for Quantification of Nerve Fibers in Corneal Confocal Microscopy Images, IEEE Trans Biomed Eng, 64:786-794; Dorsey J L, 2015, Persistent Peripheral Nervous System Damage in Simian Immunodeficiency Virus-Infected Macaques Receiving Antiretroviral Therapy, Journal of Neuropathology and Experimental Neurology, 74:1053-1060; Dorsey, 2014, Loss of corneal sensory nerve fibers in SIV-infected macaques: an alternate approach to investigate HIV-induced PNS damage, The American Journal of Pathology, 184:1652-1659; Dabbah, 2010, Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images, Medical Image Computing and Computer-Assisted Intervention (MICCAI), 300-307; Oakley, 2018, Automated Analysis of In Vivo Confocal Microscopy Corneal Images Using Deep Learning, ARVO Meeting Abstracts; Laast V A, 2007, Pathogenesis of simian immunodeficiency virus-induced alterations in macaque trigeminal ganglia, Journal of Neuropathology and Experimental Neurology, 66:26-34; Laast V A, 2011, Macrophage-mediated dorsal root ganglion damage precedes altered nerve conduction in SIV-infected macaques, The American Journal of Pathology, 179:2337-2345; Mangus L M, Unraveling the pathogenesis of HIV peripheral neuropathy: insights from a simian immunodeficiency virus macaque model, ILAR, 54:296-303; each of which is incorporated herein by reference.
This application claims priority to U.S. Provisional Application No. 62/849356, filed on May 17, 2019, the contents of which are incorporated by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2020/033425 | 5/18/2020 | WO | 00
Number | Date | Country
---|---|---
62849356 | May 2019 | US