Traditionally, only histopathological sections have been used to visualize cellular changes in the skin. However, this gold standard method is invasive and not favored by patients with cosmetic concerns. In recent decades, an increasing number of non-invasive imaging tools, such as optical coherence tomography (OCT), reflectance confocal microscopy (RCM), and multiphoton microscopy, have become available to detect cellular changes in the skin with novel findings that might influence physicians' treatment decisions.
Non-invasive techniques, as described above, already detect pigmentary changes at a cellular level of resolution. The recently developed cellular resolution full-field optical coherence tomography (FF-OCT) device also allows real-time, non-invasive imaging of the superficial layers of the skin and provides an effective way to perform a digital skin biopsy of superficial skin diseases. Nevertheless, studies with quantitative measurements of the amount and intensity of pigment and analysis of its distribution in different skin layers remain scarce.
The present invention relates to a method of segmenting features from an optical image of a skin, which provides a novel way to label features in non-invasive optical images.
In one aspect, the present invention provides a method of processing an optical image of a skin comprising:
In another aspect, the present invention provides a computer-aided system for skin condition diagnosis comprising an optical imager configured to provide an optical image of a skin; a processor coupled to the imager; a display coupled to the processor; and a storage coupled to the processor, the storage carrying program instructions which, when executed on the processor, cause it to carry out the method disclosed herein.
In yet another aspect, the present invention provides a method of identifying a pigment disorder comprising:
All publications, patents and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated by reference.
A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description, which sets forth illustrative embodiments in which the principles of the invention are used, and the accompanying drawings, of which:
Skin is the largest organ of the body. Skin contains three layers: the epidermis, the outermost layer of the skin; the dermis, beneath the epidermis, containing hair follicles and sweat glands; and the deeper subcutaneous tissue, which is made of fat and connective tissue. The skin's color is created by melanocytes, which produce melanin pigment and are located in the epidermis. Melanocytes have dendrites that deliver melanosomes to the keratinocytes within the unit.
Skin pigmentation is accomplished by the production of melanin in specialized membrane-bound organelles termed melanosomes and by the transfer of these organelles from melanocytes to surrounding keratinocytes. Pigmentation disorders (or skin pigment disorders) are disturbances of human skin color, either loss or reduction, which may be related to loss of melanocytes or the inability of melanocytes to produce melanin or transport melanosomes correctly. Most pigmentation disorders involve the underproduction or overproduction of melanin. In some embodiments, a skin pigment disorder is albinism, melasma, or vitiligo.
In some pigment disorder diseases, such as melasma, the activated melanocyte has dendritic morphology; therefore, the activated melanocyte is also called the “dendritic cell”.
Dark skin-type individuals are prone to pigmentary disorders, such as melasma, solar lentigo, and freckles, among which melasma is especially refractory to treatment and often recurs. The melanin amount in the skin is commonly used to monitor treatment response and classify patients with melasma. Existing melanin measurement tools are limited to detection at the skin surface and cannot observe the distribution of melanin in the actual tissue structure.
In order to evaluate melanin-related parameters of the skin more precisely and provide more specific treatments for each type of skin, directly detecting and identifying actual melanin features (such as content, density, area, or distribution) is needed.
Non-invasive techniques, including optical coherence tomography (OCT), reflectance confocal microscopy (RCM), and confocal optical coherence tomography, can be used to detect tissue changes (e.g., pigmentary changes) in the superficial layers of the skin at cellular resolution to perform a digital skin biopsy of superficial skin diseases. In some embodiments, the tissue optical image is provided by an optical coherence tomography (OCT) device, a reflectance confocal microscopy (RCM) device, a two-photon confocal microscopy device, an ultrasound imager, or the like. In certain embodiments, the optical image is provided through an OCT device or an RCM device. In certain embodiments, the tissue optical image is provided by an OCT device. With non-invasive devices, such as an FF-OCT device, three-dimensional skin imaging can provide remarkable capabilities to visualize skin tissue structure and identify critical features in skin layers, which can be used to assist in the diagnosis of skin diseases and disorders. In some embodiments, the tissue optical image comprises epidermis slicing images. In some embodiments, the tissue optical image comprises a three-dimensional image (3D image), a cross-sectional image (B-scan), or a vertical sectional image (E-scan). In certain embodiments, the tissue optical image is a B-scan image.
In some embodiments, the present invention provides a method of processing an optical image of a skin, and applications therefrom enabling the detection (or identification) of skin diseases and/or disorders (such as a pigment disorder). The invention methods can be employed in a computer-aided system, which comprises an optical imager configured to provide an optical image of a skin; a processor coupled to the imager; a display coupled to the processor; and a storage coupled to (i.e., in communication with) the processor, the storage carrying program instructions which, when executed on the processor, cause it to carry out the method disclosed herein.
With a two-dimensional image sensor for parallel detection and a low-spatial-coherence light source for illumination, a non-invasive device such as FF-OCT can acquire a three-dimensional volumetric image with only one-dimensional mechanical scanning along the axial direction. However, the quality of cellular-resolution cross-sectional biological images may suffer from speckle noise because of the nature of coherent detection, even with a low-spatial-coherence light source. Spatial compounding is a technique that reduces the speckle contrast significantly, without much loss of resolution, by averaging adjacent B-scans. For the above-mentioned cross-sectional imaging mode in FF-OCT devices, in which two-dimensional data are acquired simultaneously and the B-scans are inherently aligned, the optional image-denoising step based on spatial compounding can be realized without a pre-processing image-registration step. In some embodiments, the step comprises averaging the demodulated data over a thickness of approximately 5 μm, to approximate the typical thickness of an H&E section. Since the sample structures in neighboring B-scans share some degree of correlation, the signal-to-noise ratio (SNR) can be improved by averaging, and the resultant image shows the average sample structure within a finite thickness.
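The averaging described above can be sketched as follows (a minimal numpy illustration; the function name and group-averaging scheme are assumptions for this sketch, since the B-scans in the cross-sectional FF-OCT mode are already aligned and need no registration):

```python
import numpy as np

def spatial_compound(bscans: np.ndarray, n_avg: int) -> np.ndarray:
    """Average n_avg adjacent B-scans along the slow (thickness) axis.

    bscans: array of shape (n_slices, height, width) of inherently
    aligned B-scans. Returns one compounded frame per group of n_avg
    slices; speckle decorrelates between slices while the sample
    structure stays correlated, so averaging raises the SNR.
    """
    n_keep = bscans.shape[0] - bscans.shape[0] % n_avg  # drop remainder
    groups = bscans[:n_keep].reshape(-1, n_avg, *bscans.shape[1:])
    return groups.mean(axis=1)
```

With a ~1 μm slice spacing, `n_avg=5` would approximate the 5 μm thickness mentioned above.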
Although common image filters (e.g., Gaussian or median filters) can be used to suppress speckle noise, the drawback is a loss of image detail, especially when the image features of interest are of a dimension similar to that of the speckle grains.
In some embodiments, the denoising step comprises using a denoising neural network, such as spatial compounding-based denoising convolutional neural network (SC-DnCNN), which is trained with the compounded image data and can distinguish noises from signals while preserving the image details.
Spatial compounding (SC) is a commonly used technique to mitigate the speckle and Gaussian noise. The principle of SC is to induce changes in the speckle pattern between repeated measurements through the tiny position change of the subject. Then, these partially decorrelated multiple images measured from the sample are averaged to obtain low speckle images.
To train the denoising model, the noise maps are defined as the difference between the images before and after averaging within a specific thickness. Through the powerful learning ability of a deep convolutional neural network, which can automatically extract multiple features as representations from the data, the trained SC-DnCNN model improves the image quality by predicting the noise in a single B-scan. In addition, the sampling thickness required to achieve spatial compounding can be reduced to increase the imaging speed.
The SC-DnCNN is a pixel-wise noise prediction method that, in some embodiments, is used to distinguish the noise from the signal, thereby improving the image quality. It follows the advantages of a denoising convolutional neural network (DnCNN), using residual learning and batch normalization (BN) to speed up the training process and improve the denoising performance. As exemplarily illustrated in
In model training, the residual learning concept of the deep residual network (ResNet) is applied to simplify the optimization process. The difference is that DnCNN does not add shortcut connections between layers, but directly defines the output of the network as a residual image. This means that the optimization goal of DnCNN is not the mean square error (MSE) between the real clean image and the network output, but the MSE between the real residual image and the network output. The residual image, i.e., the noise map, can be obtained by subtracting the clean image from the noisy image. Conventionally, noise is randomly added to a clean image to simulate a noisy image. For OCT images, however, the noise is mainly speckle noise, which is multiplicative with the structure signal. Therefore, the ground truth is generated by using real OCT images rather than simulated ones.
Not limited to the exemplary method disclosed herein, SC-DnCNN is trained on a database containing noisy images and clean images, wherein the clean image is acquired by averaging N adjacent optical images, and the noisy image is acquired by averaging M adjacent optical images, where N is greater than M. For example, N is 2 to 20, especially 5 to 15, especially 7 to 12.
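Under the N/M averaging scheme just described, one training triple for the residual-learning network might be constructed as follows (a minimal numpy sketch; the function name, window centering, and the defaults N=10, M=2 are illustrative assumptions):

```python
import numpy as np

def make_training_pair(bscans, center, n_clean=10, m_noisy=2):
    """Build one (noisy, clean, noise_map) training triple.

    The clean target averages N adjacent B-scans around `center`; the
    noisy input averages a smaller number M (N > M), so speckle is only
    partially suppressed.  The residual-learning target is the noise
    map: noisy minus clean.
    """
    assert n_clean > m_noisy
    half_n, half_m = n_clean // 2, m_noisy // 2
    clean = bscans[center - half_n:center - half_n + n_clean].mean(axis=0)
    noisy = bscans[center - half_m:center - half_m + m_noisy].mean(axis=0)
    noise_map = noisy - clean  # residual image the network learns to predict
    return noisy, clean, noise_map
```

At inference time, the network's predicted noise map would simply be subtracted from a single B-scan.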
To produce optical images more suitable for identifying the features of an object (i.e., a target of interest in the skin) and performing further image analysis, in some embodiments, post-processing methods based on scanning depth and image brightness are used. First, image correction is performed to compensate for the depth-dependent signal decay. The weights of image pixels can be set based on the distance from the skin surface to adjust for the influence of the device's (e.g., OCT) diffraction limit on the imaging depth in tissues.
In some embodiments, a contrast enhancement is applied, for example, by sharpening or brightening an image, to highlight key features. For instance, contrast-limited adaptive histogram equalization may be applied. Different from ordinary histogram equalization, the advantage of contrast-limited adaptive histogram equalization is that it improves local contrast and enhances the sharpness of edges in each area of the image. Rather than applying a contrast transform function to the entire image, this adaptive method computes several histograms over small regions of the image to redistribute its lightness values. The neighboring areas are then combined using bilinear interpolation to eliminate artificially induced boundaries.
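The contrast-limiting step that distinguishes this method from ordinary histogram equalization can be illustrated on a single tile's histogram (a simplified numpy sketch of the clip-and-redistribute step only; full CLAHE would also map each tile through its equalization function and blend tiles bilinearly):

```python
import numpy as np

def clip_and_redistribute(hist, clip_limit):
    """Contrast-limiting step of CLAHE on one tile's histogram.

    Bin counts above clip_limit are clipped, and the clipped excess is
    spread uniformly across all bins, keeping the total count (and thus
    the cumulative-distribution normalization) unchanged.  This caps the
    slope of the resulting mapping and so limits noise amplification.
    """
    excess = np.sum(np.maximum(hist - clip_limit, 0))
    clipped = np.minimum(hist, clip_limit).astype(float)
    return clipped + excess / hist.size
```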
Object segmentation is the process of partitioning an optical image into multiple image segments, also known as image regions or image objects (sets of pixels). For example, to extract melanin (an object) related features from background tissues in an OCT image, a binary image is created by segmenting the image into two parts (foreground and background) at a given brightness level b. By intensity thresholding, all pixels in the grayscale image with brightness greater than level b are replaced with the value 1, and all other pixels are replaced with the value 0. The object segmentation process, in some embodiments, is handled by an algorithm for thresholding, clustering, and/or region growing that analyzes intensity, gradient, or texture to produce a set of object regions.
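The thresholding just described reduces to a one-line operation (a minimal numpy sketch; the function name is illustrative):

```python
import numpy as np

def segment_by_threshold(gray, level_b):
    """Binary segmentation by intensity thresholding: pixels brighter
    than level b become 1 (foreground, e.g. melanin), all others 0."""
    return (gray > level_b).astype(np.uint8)
```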
In some embodiments, the object of the object segmentation step is melanin, melanosomes, melanocytes, melanophages, activated melanocytes (dendritic cells), or a combination thereof. In certain embodiments, the non-limiting feature is selected from the group consisting of a number, a distribution inside the skin, an occupied area in the skin, a size, a density, a brightness, a specific shape, and other optical signal features.
The E-scan OCT images are provided herein as an example to illustrate the process of segmenting an object (e.g., pigment related object) from the optical image of the skin of the present invention. As shown in
In accordance with the practice of the present invention, the denoising step is optional when the need arises.
Feature quantification provides an effective way for physicians to monitor skin diseases or disorders (e.g., the pigment disorders).
By way of illustration, the features of melanin are quantified. In certain embodiments, the melanin-related parameters (features) are listed in Table 1, upon which the feature quantification is based. Images acquired from E-scan are used as an example to describe the complete image processing flow of melanin feature quantification. For B-scan and C-scan, the methods and steps of image processing and analysis can be adjusted reasonably and flexibly based on the data under the same concept.
In an example of feature quantification processing provided herein, 96 lesion images and 48 perilesional skin images that contained three layers — the en-face stratum spinosum, the dermal-epidermal junction (DEJ), and the papillary dermis — were used. Melanin is segmented as described herein. As to the melanin feature quantification, the quantitative features extracted from the segmented melanin are classified into two groups, grain and confetti melanin, in accordance with the practice of the present invention. Per Table 1, the area-based features separately count the total area of all grain melanin and confetti melanin segmented from an optical image. The distribution-based feature of all grain melanin, G_density, uses the total area of the tissue in the image to calculate the proportion of melanin area, where the tissue is defined as the signal whose grayscale value is greater than 38 in the enhanced image. The distribution-based features of all confetti melanin are related to their distances in two-dimensional space. C_distance_mean and C_distance_SD, respectively, use the centroid of each confetti melanin to compute the average and standard deviation of the distances between centroids. In addition, the features based on shape and brightness, respectively, provide statistical information to determine the size and intensity of all melanin in the image. To extract the C_roundness feature, a simple metric indicating the roundness of confetti melanin is defined as
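The distribution-based features above (G_density, C_distance_mean, C_distance_SD) can be sketched as follows (a minimal numpy illustration; the function name and the assumption that the tissue mask and confetti centroids are precomputed are ours):

```python
import numpy as np
from itertools import combinations

def melanin_distribution_features(grain_mask, confetti_centroids, tissue_mask):
    """Quantify the distribution-based melanin features.

    G_density: grain-melanin area as a fraction of tissue area (tissue
    pixels being those above the brightness cut, precomputed here).
    C_distance_mean / C_distance_SD: mean and standard deviation of the
    pairwise distances between confetti-melanin centroids.
    """
    g_density = grain_mask.sum() / tissue_mask.sum()
    dists = [np.linalg.norm(np.subtract(a, b))
             for a, b in combinations(confetti_centroids, 2)]
    return g_density, float(np.mean(dists)), float(np.std(dists))
```

A smaller C_distance_mean then corresponds to confetti melanin clustered in a local area, as observed in the lesion images discussed below.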
To explore the correlation between melasma and melanin, the potential of the quantitative features to distinguish lesion images from perilesional skin images was evaluated by several statistical hypothesis tests. For comparison, all data before and after image denoising were also tested to observe the effect of the SC-DnCNN model under the method of the present invention. Whether a feature followed a normal distribution was determined by the Kolmogorov-Smirnov test. Subsequently, the difference in each feature between the lesion and perilesional skin cases was evaluated with the mean±SD in a normal distribution and the median in a non-normal distribution by using Student's t-test and the Mann-Whitney U-test, respectively. In the significance analysis, a p-value of less than 0.05 indicated that the difference was significant.
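The non-parametric comparison above relies on the Mann-Whitney U statistic; a minimal numpy sketch (ignoring tied values and the p-value computation, which in practice would come from a statistics library such as SciPy) is:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic (no tie handling) for comparing a
    feature between two groups, e.g. lesion vs. perilesional skin,
    when the feature is not normally distributed."""
    combined = np.concatenate([x, y])
    ranks = np.argsort(np.argsort(combined)) + 1   # ranks starting at 1
    r_x = ranks[:len(x)].sum()                     # rank sum of group x
    u_x = r_x - len(x) * (len(x) + 1) / 2
    u_y = len(x) * len(y) - u_x
    return min(u_x, u_y)  # smaller U is compared against a critical value
```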
Tables 2 to 3 list the performance difference of the method disclosed herein when performed with and without SC-DnCNN. The p-values and mean±SD of all distinct features generated before and after image denoising were extracted and analyzed. Table 2 shows that the C_distance_mean, a feature representing the average distance between the centroids of all confetti melanin, differs markedly between lesions and perilesional skin (p=0.0402) with denoising. The average distances of confetti melanin in the perilesional skin and lesion images were 200 μm and 193.5 μm, respectively, while they were 206.1 μm and 200.3 μm, respectively, for the method without SC-DnCNN. The value of the C_distance_mean in the lesion image tended to be smaller than that of the perilesional skin image. However, the difference was not statistically significant (p=0.0502) when image denoising was not performed.
In addition, the dataset was divided into three subsets according to skin layer (stratum spinosum, DEJ, and papillary dermis), and the melanin features that could distinguish lesions were evaluated in each subset. The p-values and mean±SD of the different features generated before and after image denoising for each subset are also summarized in Table 3. In the stratum spinosum, both significant features describe the distribution of the confetti melanin: the larger the C_distance_mean is, the more dispersed the melanin; and the smaller the C_distance_SD is, the more evenly the melanin is distributed across the entire image. This means that, compared with the perilesional skin, the distribution of confetti melanin in the lesion is more clustered in a local area of the image. The p-values of C_distance_mean and C_distance_SD were 0.0036 and 0.0202, respectively, before image denoising, and 0.0032 and 0.0312, respectively, after image denoising. Without the image denoising step, none of the quantitative features of the DEJ and papillary dermis were significantly different between the lesion and the perilesional skin. Through SC-DnCNN, the p-value of all_density in the DEJ was reduced from 0.1393 to 0.0426, indicating that, in the lesion images, the grain melanin density tends to be higher than in the perilesional skin images.
Based on the above results, it is applicable to quantitatively evaluate and compare some melanin characteristics belonging to the lesion, including its appearance in different skin layers. When observing the OCT images within a lesion, the confetti melanin appears dense and concentrated in the stratum spinosum, while the grain melanin has a higher density in the DEJ. Different skin layers produce different forms of melanin, and their appearance on OCT images is also different.
In certain embodiments provides a method of identifying a pigment disorder of a skin comprising receiving an optical image of a suspected pigment disorder skin; optionally performing a noise reduction to reduce the noise of the optical image; contrast-enhancing the feature's signals of an object from the background signals wherein said object is melanin, melanosomes, melanocyte, melanophage, activated melanocyte, or combinations thereof; segmenting the object in the enhanced optical image through at least one threshold value of the feature; categorizing the segmented object; quantifying the feature of said object from the optical image of the skin; and identifying the suspected pigment disorder skin through the quantified value.
Some embodiments provide yet another example of object segmentation, wherein the object is activated melanocytes. As reported previously, UV exposure or laser treatment activates melanocytes, which form dendrites to secrete melanin into the epidermis layer to protect the skin from damage. For this reason, melanocytes with dendritic morphology are called "dendritic cells". The general steps for segmenting the dendritic cells are the same as shown in
In accordance with the practice of the present invention, quantifications of the segmented dendritic cells are also realized based on the features listed in Table 4.
Some embodiments provide a method of processing an optical image of a skin comprising: a. receiving an optical image of a skin that contains a feature of an object; b. optionally performing a noise reduction to reduce the noise of the optical image; c. contrast-enhancing the feature's signals of the object from the background signals; d. segmenting the object in the enhanced optical image through at least one threshold value of the feature; e. optionally categorizing the segmented object; and f. quantifying the feature of said object from the optical image of the skin. In some embodiments, the method further comprises a computer-aided diagnosis step after the step of feature quantification. In some embodiments, the optical image is an optical coherence tomography (OCT) image, a reflectance confocal microscopy (RCM) image, or a confocal optical coherence tomography image. In some embodiments, the optional noise reduction step reduces the noise of the optical image through a spatial compounding-based denoising convolutional neural network (SC-DnCNN). In certain embodiments, the SC-DnCNN is trained to distinguish the noise of the optical image. In certain embodiments, the SC-DnCNN is trained on a database containing noisy images and clean images. In certain embodiments, the clean image is acquired by averaging N adjacent optical images, the noisy image is acquired by averaging M adjacent optical images, and N is greater than M. In some embodiments, the object is melanin, melanosomes, melanocytes, melanophages, activated melanocytes, or combinations thereof. In certain embodiments, the object is melanin, melanocytes, or activated melanocytes. In certain embodiments, the object is melanin. In certain embodiments, the feature is brightness, particle area, particle size, particle shape, or distribution position in the skin. In certain embodiments, the feature is brightness and/or particle shape (e.g., an elongated structure).
In some embodiments, the optical image is acquired by averaging at least two adjacent optical images. In certain embodiments, Step e comprises categorizing the object to grain melanin, or confetti melanin. In some embodiments, the contrast enhancement step is applied by sharpening or brightening the optical image to highlight a feature of said object. In some embodiments, the object segmentation step is handled by an algorithm for thresholding, clustering, and/or region growing, that analyzes intensity, gradient, or texture to produce a set of object regions.
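Steps a–f above can be composed into a single processing pipeline, sketched below (an illustrative Python outline; the callables stand in for the SC-DnCNN, contrast enhancement, and categorization stages described herein, and the area-fraction quantification is one example of step f):

```python
import numpy as np

def process_skin_image(image, denoise=None, enhance=None,
                       threshold=0.5, categorize=None):
    """Sketch of steps a-f: receive an image, optionally denoise (b),
    contrast-enhance (c), segment by threshold (d), optionally
    categorize (e), and quantify the segmented object (f)."""
    if denoise is not None:            # step b (optional)
        image = denoise(image)
    if enhance is not None:            # step c
        image = enhance(image)
    mask = image > threshold           # step d: threshold segmentation
    categories = categorize(mask) if categorize else None  # step e
    quantity = mask.sum() / mask.size  # step f: e.g. occupied-area fraction
    return mask, categories, quantity
```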
Some embodiments provide a computer-aided system for skin condition (e.g., skin diseases or disorders such as a skin pigment disorder) diagnosis, which comprises an optical imager configured to provide an optical image of a skin; a processor (such as a computer) coupled to the imager; a display coupled to the processor; and a storage coupled to the processor, the storage carrying program instructions which, when executed on the processor, cause it to carry out the invention method disclosed herein in the computer program (e.g., See
In some embodiments, the system, network, method, and media disclosed herein include at least one computer program, or use of the same. A computer program includes a sequence of instructions, executable in the digital processing device's CPU, written to perform a specified task which may refer to any suitable algorithm. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. In light of the disclosure provided herein, those of skill in the art will recognize that a computer program may be written in various versions of various languages.
In some embodiments, computer systems or cloud computing services are connected to the cloud through network links and network adapters. In an embodiment, the computer systems are implemented as various computing devices, for example servers, desktops, laptops, tablet, smartphones, Internet of Things (IoT) devices, and consumer electronics. In an embodiment, the computer systems are implemented in or as a part of other systems.
Owing to its capability of achieving real-time and stable detection results, together with its objectivity and precision in describing melanin features, this method represents an attractive tool for addressing pigment classification problems with such requirements.
Although preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein can be employed in practicing the invention. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/039218 | 8/2/2022 | WO |
Number | Date | Country
---|---|---
63228580 | Aug 2021 | US