This application is a U.S. national stage application under 35 U.S.C. 371 of co-pending International Application No. PCT/IB2020/055763, filed Jun. 19, 2020, entitled PROCESSING OPTICAL COHERENCE TOMOGRAPHY SCANS, which in turn claims priority to Great Britain Patent Application No. GB1908766.7, filed Jun. 19, 2019, the entire contents of which are incorporated by reference herein for all purposes.
This invention relates to processing optical coherence tomography (OCT) scans.
Optical Coherence Tomography (OCT) was invented in 1991 at MIT in the USA and is commonly used for imaging human tissue of various organs, in particular the eye, and also skin (J. Welzel, “Optical coherence tomography in dermatology: a review,” Skin Research and Technology, vol. 7, pp. 1-9, 2001).
Those skilled in the art will be familiar with the VivoSight OCT device, manufactured and marketed by Michelson Diagnostics Ltd of Maidstone, Kent, United Kingdom, which is designed for use by professional dermatologists in the assessment of skin lesions of patients. The VivoSight OCT device scans the skin and presents to the user images of the skin subsurface structure, in a plane perpendicular to the skin surface. This is known as a ‘B-scan’.
Also, VivoSight can acquire scans at multiple locations across the skin surface in order to build up a series of B-scans across a lesion of interest. This is known as a multi-slice ‘stack’ and can be viewed by the user in a variety of ways to elicit tissue features of medical interest such as nests of cancer cells. For example, the user can view the stack of B-scans in rapid succession to ‘fly through’ the lesion area.
However, it is desirable to aid the users of such apparatus in determining features within the structure of the skin, rather than simply displaying the scans to the user and allowing them to interpret the images as shown to them. This would shorten the ‘learning curve’ for clinicians making use of the apparatus, make the procedure faster and more convenient, and help the clinician find the features of most interest.
According to a first aspect of the invention, there is provided a method of processing optical coherence tomography (OCT) scans through a subject's skin, the method comprising:
As such, this invention provides for an automatic determination of structures found in a subject's skin, and makes use of the three-dimensional nature of an OCT scan of a volume of a subject's skin.
Typically, the method could comprise generating a confidence with which the structure can be assigned to the class of structures. The method could comprise generating a confidence with which the structure could be assigned to a plurality, or all, of the plurality of classes of structures.
The method may comprise segmenting the subject's skin within the OCT scan or scans; the segmentation may be of the volume and so be carried out in three dimensions. The segmentation may comprise the segmentation of the skin into different components (for example, epidermis, dermis, hair follicles and so on). The segmentation may determine the position of the structure relative to the components of the subject's skin. The step of classifying may use the position of the structure relative to the components of the subject's skin to classify the structure as belonging to a class.
The step of classifying the structure, and optionally the step of detecting the structure, may comprise using a machine learning algorithm. Typically, the machine learning algorithm will previously have been trained to classify structures in a subject's skin; alternatively, the method may comprise training the machine learning algorithm. Machine learning algorithms are typically trained by providing them with known inputs—in this invention most likely OCT scans, or structures that have been identified in OCT scans—and an indication of what the correct classification of the known input is.
Example machine learning algorithms that may be employed include a convolutional neural net or a random forest algorithm.
The OCT scans may comprise dynamic OCT scans, in that they comprise time-varying OCT scans through the subject's skin. Such scans allow for the identification of, in particular, blood vessels in the subject's skin. The method may therefore comprise determining the position of blood vessels in dynamic OCT scans, and typically using the position of the blood vessels relative to the structure in the step of classifying the structure.
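One common way of extracting vessels from time-varying OCT data (offered here only as an illustrative sketch, not necessarily the method used by the apparatus) is to compute the temporal variance of repeated B-scans taken at the same position: static tissue yields low variance, flowing blood high variance.

```python
import numpy as np

def speckle_variance(bscans):
    """Highlight blood vessels by computing the temporal variance across
    repeated B-scans at the same position.

    bscans: array-like of shape (n_repeats, depth, width) of OCT intensities.
    Returns a (depth, width) variance map; bright pixels suggest flow."""
    stack = np.asarray(bscans, dtype=float)
    return stack.var(axis=0)

# Toy example: four repeats of a 2x2 B-scan in which one pixel fluctuates
# between frames (as blood flow would) while the rest stay constant.
repeats = np.array([
    [[1.0, 5.0], [1.0, 1.0]],
    [[1.0, 9.0], [1.0, 1.0]],
    [[1.0, 5.0], [1.0, 1.0]],
    [[1.0, 9.0], [1.0, 1.0]],
])
vessel_map = speckle_variance(repeats)  # non-zero only at the fluctuating pixel
```

The variance map can then be thresholded to obtain vessel positions for use in the classification step.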
The method may comprise capturing at least one image of the surface of the subject's skin, typically using a camera, and using each image to classify the structure. The images may be captured using visible light and/or infrared light. For example, the method may comprise determining a pigment of the skin, and using that to classify the structure.
The method may comprise the determination of at least one numerical parameter of the skin. The at least one numerical parameter may comprise at least one parameter selected from the group comprising: optical attenuation coefficient, surface topography/roughness, depth of blood vessel plexus, blood vessel density. Each numerical parameter may be used in classifying the structure.
The method may comprise:
The method may comprise using the compensated intensity to classify the structure. A problem with using uncompensated intensity to classify the structure is that the intensity associated with a given structure depends upon the depth of the structure within the tissue due to the scattering and absorption of the tissue above it; this dependence of intensity on depth makes the classification process more difficult; whereas structures in images with compensated intensity exhibit depth-dependence to a much lesser degree and so may be more easily classified.
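A minimal sketch of depth compensation, assuming a single average attenuation coefficient `mu` (the patent's own compensation method, disclosed in WO2017/182816, is more sophisticated): each A-scan sample is simply rescaled by the exponential loss expected at its depth, so that a structure's intensity no longer falls off with how deep it lies.

```python
import numpy as np

def compensate_attenuation(ascan, mu=2.0, dz=0.01):
    """Roughly undo depth-dependent signal loss in a single A-scan.

    ascan: 1-D array of OCT intensities, shallowest sample first.
    mu:    assumed average attenuation coefficient (per mm) -- an
           illustrative placeholder, not a value from the text.
    dz:    depth per pixel in mm.
    Multiplies each sample by exp(mu * depth) to counter exp(-mu * depth)
    attenuation."""
    ascan = np.asarray(ascan, dtype=float)
    depths = np.arange(ascan.size) * dz
    return ascan * np.exp(mu * depths)
```

For example, a uniformly scattering region whose raw signal decays as exp(-mu·z) comes back flat after compensation, which is exactly the property that makes classification less depth-dependent.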
The plurality of classes of structures may be selected from the group comprising:
The method may comprise allowing a user to select which structures are in the plurality of classes. This will assist a user if they are unsure which of a selection of structures a particular structure might be. Alternatively, the user may simply indicate that the plurality of classes may be any from a set that the processor is programmed to recognise.
The step of outputting may comprise outputting any features of the OCT scans which contributed as part of the classification of the structure as belonging to the class. As such, a user can then see why a particular structure was so classified. The method may comprise allowing a user to reject some of the features, in which case the method may then repeat the step of classifying the structure without using the rejected features.
In accordance with a second aspect of the invention, there is provided an optical coherence tomography (OCT) image processing apparatus, comprising a processor, a display coupled to the processor and storage coupled to the processor, the storage carrying program instructions which, when executed on the processor, cause it to carry out the method of the first aspect of the invention.
The image processing apparatus may comprise an OCT apparatus by means of which the OCT scans are captured. As such, the image processing apparatus may comprise an OCT probe arranged to generate interferograms, and the processor may be arranged to generate the images from the interferograms. As such, the image processor may be arranged to process the images as they are captured.
Alternatively, the image processing apparatus may be separate from any OCT apparatus and may be arranged to process the images subsequent to their capture. As such the image processing apparatus may comprise data reception means (such as a network connection or media drive) arranged to receive the OCT scans for processing.
There now follows, by way of example only, a description of an embodiment of the invention, described with reference to the accompanying drawings, in which:
An optical coherence tomography (OCT) apparatus in accordance with an embodiment of the invention is shown in
The apparatus further comprises an OCT interferometer 5 and associated probe 6. The interferometer 5 interferes light reflected from sample 7 (here, a subject's skin) through probe 6 with light passed along a reference path to generate interferograms. These are detected in the interferometer 5; the measured signal is then passed to the computer 1 for processing. Example embodiments of suitable OCT apparatus can be found in the PCT patent application published as WO2006/054116 or in the VivoSight® apparatus available from Michelson Diagnostics of Maidstone, Kent, United Kingdom.
Such OCT apparatus typically generate multiple B-scans: that is, scans taken perpendicularly through the skin 7. The result of analysis of each interferogram is a bitmap in which the width of the image corresponds to a direction generally parallel to the skin surface and the height corresponds to the depth from the sensor into the skin. By taking many parallel scans, a three-dimensional stack of bitmaps can be built up.
The processor can then be used to process the OCT scans taken. The probe is used to capture scans of the subject's skin and the data is transmitted to the image processor unit in the form of a ‘stack’ of B-scans. Each B-scan is an image of the skin at a small perpendicular displacement from the adjacent B-scan. Typically and preferably, the stack comprises 120 B-scan images, each of which is 6 mm wide by 2 mm deep, with a displacement between B-scans of 50 microns.
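The stack geometry described above can be captured in a few constants and the B-scans assembled into a single three-dimensional array; this is a sketch in which the per-image pixel counts are illustrative assumptions (only the physical extents and slice spacing come from the text).

```python
import numpy as np

# Geometry from the description: 120 B-scans, each 6 mm wide by 2 mm deep,
# with 50 microns between adjacent B-scans.  The pixel counts are assumed
# for illustration; they are not stated in the text.
N_BSCANS, DEPTH_PX, WIDTH_PX = 120, 400, 1200
PIXEL_DEPTH_MM = 2.0 / DEPTH_PX          # depth covered by one pixel
PIXEL_WIDTH_MM = 6.0 / WIDTH_PX          # width covered by one pixel
SLICE_SPACING_MM = 0.05                  # 50 microns between B-scans
VOXEL_VOLUME_MM3 = PIXEL_DEPTH_MM * PIXEL_WIDTH_MM * SLICE_SPACING_MM

def build_volume(bscans):
    """Stack individual B-scan bitmaps into one (slice, depth, width)
    array so that later steps can segment in three dimensions."""
    return np.stack(bscans, axis=0)

volume = build_volume([np.zeros((DEPTH_PX, WIDTH_PX))] * N_BSCANS)
```

The voxel volume constant lets object sizes measured in voxels be converted to physical volumes later in the pipeline.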
In
These raw images are first processed to compensate for the attenuation of the OCT signal with depth, and then filtered to reduce noise and image clutter. The optical attenuation compensation method is disclosed in the PCT application published as WO2017/182816. The filtering is a median filter, which is well known to those skilled in the art. The resulting processed images are shown in
The next step carried out is a binary threshold operation and further denoising of the image by performing an ‘open’ and ‘close’ binary morphology operation on the image stack. The results are shown in
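The thresholding and morphological denoising step can be sketched with SciPy's standard binary morphology operations, applied to the whole three-dimensional stack at once (the threshold value here is an illustrative parameter, not one from the text).

```python
import numpy as np
from scipy import ndimage

def threshold_and_denoise(volume, threshold):
    """Binarise the compensated, filtered stack, then clean it up:
    a binary 'open' removes isolated speckle voxels, and a binary
    'close' fills small holes inside genuine objects."""
    binary = volume > threshold
    binary = ndimage.binary_opening(binary)   # erosion then dilation
    binary = ndimage.binary_closing(binary)   # dilation then erosion
    return binary

# Toy 3-D stack: one isolated speckle voxel plus one solid 3x3x3 object.
vol = np.zeros((7, 7, 7))
vol[1, 1, 1] = 1.0          # speckle -- should be removed by the 'open'
vol[3:6, 3:6, 3:6] = 1.0    # genuine object -- should survive
clean = threshold_and_denoise(vol, threshold=0.5)
```

Performing the operations in three dimensions rather than slice-by-slice means a voxel is only kept speckle-free support in adjacent slices is also considered.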
This image stack is then processed to segment and extract all of the individual 3-dimensional objects present in the stack that form contiguous interconnected regions. The features of each such object are extracted, namely the horizontal length and width, vertical size, total volume, surface area, depth of the object centroid, ellipticity and density. This is not an exhaustive list and other features may also be found to be useful.
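The segmentation of contiguous 3-D objects and extraction of per-object features can be sketched with a connected-component labelling pass; only two of the named features (total volume and centroid depth) are computed here, the remainder being derived similarly from each labelled region.

```python
import numpy as np
from scipy import ndimage

def extract_objects(binary_volume, voxel_volume_mm3=1.0):
    """Label contiguous 3-D regions of the binary stack and compute a
    subset of the per-object features named in the text.  Axis order is
    assumed to be (slice, depth, width), so centroid depth is axis 1."""
    labels, n_objects = ndimage.label(binary_volume)
    features = []
    for i in range(1, n_objects + 1):
        mask = labels == i
        centroid = ndimage.center_of_mass(mask)
        features.append({
            "volume_mm3": float(mask.sum()) * voxel_volume_mm3,
            "centroid_depth_px": centroid[1],
        })
    return features

# Toy stack with two separate objects: a lone voxel and a 2x2x2 block.
vol = np.zeros((6, 6, 6), dtype=bool)
vol[0, 0, 0] = True
vol[3:5, 3:5, 3:5] = True
objects = extract_objects(vol, voxel_volume_mm3=2.0)
```

Each resulting feature dictionary then forms one row of the feature table passed to the classifier.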
These object features are fed into a Random Forest machine learning algorithm, or alternatively a neural network, which has been taught with many such examples of scans of skin with benign or malignant growths. It will be appreciated that the vessel in this example has completely different feature results from the tumour and so it is easy to unambiguously distinguish the vessel from the tumour. The vessel object is elongated in 3-dimensional space and located deep in the dermis, whereas the tumour is irregularly shaped in three dimensions and located at a shallow depth in the dermis. By placing the vessel object into a class of deep, highly elongated objects, and the tumour object into a class of shallow, irregular objects, the classifier is able to correctly identify the former as a benign vessel and the latter as a malignant basal cell carcinoma.
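The classification step above can be sketched with scikit-learn's off-the-shelf random forest; the feature values, labels and the two-feature representation (elongation, centroid depth) are invented for illustration and stand in for the full feature vector described in the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [elongation, centroid depth in mm].  Deep, highly elongated
# objects are labelled as vessels; shallow, irregular ones as tumours,
# mirroring the two classes discussed above.  Values are illustrative.
X_train = np.array([
    [8.0, 1.5], [9.0, 1.4], [7.5, 1.6],   # vessel-like training examples
    [1.2, 0.3], [1.5, 0.4], [1.1, 0.2],   # tumour-like training examples
])
y_train = ["vessel", "vessel", "vessel", "tumour", "tumour", "tumour"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# A deep elongated object and a shallow irregular object from a new scan.
predictions = clf.predict(np.array([[8.5, 1.5], [1.3, 0.3]]))
```

Because the two classes occupy well-separated regions of feature space, as in the vessel/tumour example of the text, the forest distinguishes them unambiguously.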
Number | Date | Country | Kind |
---|---|---|---|
1908766 | Jun 2019 | GB | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/IB2020/055763 | 6/19/2020 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/255048 | 12/24/2020 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
7907772 | Wang | Mar 2011 | B2 |
20090306520 | Schmitt et al. | Dec 2009 | A1 |
20120274898 | Sadda et al. | Nov 2012 | A1 |
20190110739 | Tey | Apr 2019 | A1 |
Number | Date | Country |
---|---|---|
107993221 | May 2018 | CN |
3065086 | Sep 2016 | EP |
2515761 | Jan 2015 | GB |
2549515 | Oct 2017 | GB |
2007131190 | Aug 2007 | RU |
2017171643 | Oct 2017 | WO |
2019104217 | May 2019 | WO |
2019104221 | May 2019 | WO |
Entry |
---|
International Search Report and Written Opinion for Application No. PCT/IB2020/055763 dated Sep. 17, 2020. |
Search Report for Application No. GB1908766.7 dated Nov. 22, 2019. |
Welzel, “Optical coherence tomography in dermatology: a review,” Skin Research and Technology, 2001; 7:1-9. |
Number | Date | Country | |
---|---|---|---|
20220304620 A1 | Sep 2022 | US |