Optical coherence tomography (OCT) is a technique for in-vivo imaging and analysis of various biological tissues (for example, as two-dimensional slices and/or three-dimensional volumes). Images created from three-dimensional (3D) volumetric OCT data show different appearances/brightness for different components of the imaged tissue. Based on this difference, those components can be segmented out from the images for further analysis and/or visualization. For example, choroidal vasculature has a darker appearance than choroidal stroma in OCT images. Therefore, the choroidal vasculature in OCT images can be segmented out by applying an intensity threshold. However, due to inherent properties of OCT imaging, artifacts will emerge in the vessel segmentation if the thresholding is applied directly to the images. Other techniques have thus been developed to segment components of OCT data, but these too suffer from various deficiencies and limitations.
For example, when determining luminal and stromal areas of the choroid by a local binarization method, a special image acquisition protocol and averaged line scans are needed to achieve sufficient quality at the depth being analyzed, and to avoid noisy results depending on the type of OCT system utilized. Further, the final threshold is applied manually. Using a choroidal vessel density measurement in 2D projection images lacks depth resolution and can suffer from shadow artifacts. Similarly, automated detection of vessel boundaries (even with machine learning) can be affected by shadow artifacts, and is additionally limited to application in two-dimensional (2D) B-scans only and for larger vessels. Further, the segmented vessel continuity may be poor because the segmentation is repeated for each B-scan in a volume, rather than applied to the volume as a whole. This can thus require each segmented B-scan to be spliced or otherwise pieced together to generate a segmented volume. Other segmentation techniques are only applicable to normal (non-diseased) eyes and suffer errors when retinal structure changes due to disease. Further, some segmentations are subject to inaccuracies related to the application of noise reduction filters on the underlying data.
In short, without noise reduction, averaging of repeated B-scans or along a depth direction is needed to produce data from which the choroidal vasculature can be properly segmented. As a result, the segmentation can be limited in dimension and location. And still further, when applied to 3D data, computation time can be so long as to limit the data that can be analyzed.
Because of these limitations, it has not been practical and/or even possible to present many clinically valuable visualizations and quantifications of choroidal vasculature. For instance, even though a quantitative analysis may be performed on 3D volumetric data or resulting images, the resulting metrics compress the 3D information into a single value. This greatly diminishes the value of, and does not fully utilize, the data. In other instances, the quantifications are taken from OCT data that remains too noisy to perform an accurate analysis; utilize averages taken from many volumes, which can still suffer from noise and also require increased scanning times (for each iterative volume from which the average is taken); or are limited to relatively small regions of interest (e.g., 1.5 mm under the fovea in a single B-scan). Accordingly, medical practitioners have not been able to fully appreciate clinically pertinent information available in 3D volumetric OCT data.
According to the present disclosure, a three-dimensional (3D) quantification method comprises: acquiring 3D optical coherence tomography (OCT) volumetric data of an object of a subject, the volumetric data being from one scan of the object; pre-processing the volumetric data, thereby producing pre-processed data; segmenting a physiological component of the object from the pre-processed data, thereby producing 3D segmented data; determining a two-dimensional metric of the volumetric data by analyzing the segmented data; and generating a visualization of the two-dimensional metric.
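By way of a non-limiting illustration, the following Python sketch walks through these steps end-to-end for a single-scan volume. The function name, the (B-scans, depth, A-lines) array layout, and the voxel_volume_mm3 parameter are assumptions made for the example, and a simple Gaussian de-noising step and a single global Otsu threshold merely stand in for the pre-processing and segmentation techniques detailed later in this disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu

def quantify_choroidal_vasculature(volume, voxel_volume_mm3=1.0):
    """volume: single-scan OCT data shaped (n_bscans, depth, n_alines)."""
    # Pre-processing: a simple 3D Gaussian de-noising stands in for the
    # de-noising/contrast-enhancement steps described in the disclosure.
    pre = gaussian_filter(volume.astype(np.float32), sigma=1.0)

    # Segmentation: choroidal vessels appear darker than the surrounding
    # stroma, so voxels below a global Otsu threshold are labeled as vessel.
    mask = pre < threshold_otsu(pre)

    # Two-dimensional metric: vessel volume per A-line, obtained by summing
    # the segmented voxels along the depth axis and scaling by voxel size.
    vessel_volume_map = mask.sum(axis=1) * voxel_volume_mm3

    # The 2D map can then be rendered (e.g., as an intensity image) as the
    # visualization of the metric.
    return mask, vessel_volume_map
```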
In various embodiments of the above example, segmenting the physiological component comprises: performing a first segmentation technique on the pre-processed data, thereby producing first segmented data, the first segmentation technique being configured to segment the physiological component from the pre-processed data; performing a second segmentation technique on the pre-processed data, thereby producing second segmented data, the second segmentation technique being configured to segment the physiological component from the pre-processed data; and producing the 3D segmented data by combining the first segmented data and second segmented data, wherein the first segmentation technique is different than the second segmentation technique; the pre-processing includes de-noising the volumetric data; the object is a retina, and the physiological component is choroidal vasculature; the metric is a spatial volume, diameter, length, or volumetric ratio, of the vasculature within the object; the visualization is a two-dimensional map of the metric in which a pixel intensity of the map indicates a value of the metric at the location of the object corresponding to the pixel; a pixel color of the map indicates a trend of the metric value at the location of the object corresponding to the pixel; the trend is between the value of the metric of the acquired volumetric data and a corresponding value of the metric from an earlier scan of the object of the subject; the trend is between the value of the metric of the acquired volumetric data and a corresponding value of the metric from the object of a different subject; determining the trend comprises: registering the acquired volumetric data to comparison data; and determining a change between the value of the metric of the acquired volumetric data and a corresponding value of the metric of the comparison data; portions of the acquired volumetric data and the comparison data used for registration are different than portions of the acquired volumetric data and the comparison data used for determining the metrics; the object is a retina, and the physiological component is choroidal vasculature, and the metric is a spatial volume of the vasculature within the object; pre-processing the volumetric data comprises: performing a first pre-processing on the volumetric data, thereby producing first pre-processed data; and performing a second pre-processing on the volumetric data, thereby producing second pre-processed data, and segmenting the physiological component comprises: performing a first segmentation technique on the first pre-processed data, thereby producing first segmented data, performing a second segmentation technique on the second pre-processed data, thereby producing second segmented data; and producing the 3D segmented data by combining the first segmented data and the second segmented data; the first segmentation technique and the second segmentation technique are the same; the first segmentation technique and the second segmentation technique are different; pre-processing the volumetric data comprises: performing a first pre-processing on a first portion of the volumetric data, thereby producing first pre-processed data; and performing a second pre-processing on a second portion of the volumetric data, thereby producing second pre-processed data, segmenting the physiological component comprises: segmenting the physiological component from the first pre-processed data, thereby producing first segmented data; segmenting the physiological component 
from the second pre-processed data, thereby producing second segmented data; and producing the 3D segmented data by combining the first segmented data and the second segmented data, and the first portion and the second portion do not fully overlap; segmenting the physiological component comprises applying a 3D segmentation technique to the pre-processed data; the pre-processing comprises applying a local Laplacian filter to the volumetric data that corresponds to a desired depth range and region of interest; the pre-processing comprises applying a shadow reduction technique to the volumetric data; the method further comprises aggregating the metric within a region of interest, wherein the visualization is a graph of the aggregated metric; and/or the method further comprises generating a visualization of the 3D segmented data.
The present disclosure relates to clinically valuable analyses and visualizations of three-dimensional (3D) volumetric OCT data that were not previously practical and/or possible with known technologies. Such analyses and visualizations may improve a medical practitioner's ability to diagnose disease and to monitor and manage treatment. Briefly, the analysis is performed on, and the visualizations are created by, segmenting OCT data for a component of interest (e.g., choroidal vasculature) in three dimensions following a series of pre-processing techniques. The segmentation can be applied to the data following pre-processing, and the results then combined to produce a final full 3D segmentation of the desired component. Post-processing, such as a smoothing technique, may then be applied to the segmented component. While choroidal vasculature of OCT data is particularly discussed herein, the disclosure is not to be so limited.
An example method for producing clinically valuable analyses and visualizations according to the present disclosure is illustrated in
Intensity attenuation along the depth dimension may be addressed by applying intensity compensation and contrast enhancement techniques. Such techniques may be locally applied, for example, as a local Laplacian filter at desired depths and regions of interest (in either 2D or 3D). Additionally or alternatively, a contrast-limited adaptive histogram equalization (CLAHE) technique may be applied to enhance contrast. Of course, other contrast enhancement techniques (applied locally or globally), and/or other pre-processing techniques may be applied.
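As one hedged example of such contrast enhancement, the snippet below applies CLAHE slice-by-slice to an OCT volume using OpenCV's cv2.createCLAHE. The array layout, the assumption of 8-bit intensities, and the idea of first cropping the volume to a desired depth range and region of interest are choices made for this sketch; a local Laplacian filter could be substituted where such an implementation is available.

```python
import cv2
import numpy as np

def enhance_volume_contrast(volume, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to each B-scan of an OCT (sub-)volume shaped
    (n_bscans, depth, n_alines), with intensities already scaled to 0-255.
    Crop the volume to the desired depth range/region of interest first
    to keep the enhancement local."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    enhanced = np.empty(volume.shape, dtype=np.uint8)
    for i, bscan in enumerate(volume.astype(np.uint8)):
        enhanced[i] = clahe.apply(bscan)  # adaptive, contrast-limited equalization
    return enhanced
```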
The pre-processing 102 may be applied to entire images or volumes, or only selected regions of interest. As a result, for each raw image or volume input to the pre-processing 102, multiple pre-processed images may be produced. Put another way, individual B-scans or C-scans taken from raw volumetric OCT data may be subject to different pre-processing techniques to produce multiple pre-processed images. Following pre-processing 102, the pre-processed images (or data underlying the images) are segmented 104 for a desired component in the images/data, such as choroidal vasculature. The segmentation process 104 may utilize one or more different techniques, where each applied segmentation technique may individually be relatively simple and fast to perform, and have different strengths and weaknesses.
For example, some segmentation techniques may utilize different thresholding levels, and/or may be based on analysis from different views (e.g., a B-scan or C-scan). More particularly, performing segmentation on C-scans can improve continuity of vessels relative to segmentation performed on B-scans because each C-scan image contains information from the entire field of view of the volume. This further allows for segmentation of smaller vessels relative to segmentation on B-scans, and makes manual validation of the segmentation easier for a user. However, segmentation on C-scans may be dependent on the accuracy of a preceding Bruch's membrane segmentation used to flatten the volumetric data.
In view of the above, the different segmentation techniques can be selectively applied to one or more of the pre-processed images. Further, as suggested above, global segmentation on an entire OCT volume has not previously been practical due to noise and attenuation (e.g., causing artifacts). However, following application of the above-described pre-processing, the segmentation techniques may also be applied to entire OCT volumes, rather than to individual B-scans or C-scans from the volumes. In any case, each of the segmentation techniques segments the desired component in the pre-processed images/data. Segmentation applied to entire volumes can further improve connectivity of the segmentation, since individual segmentations need not be pieced together. Such volume-wide segmentations may be less sensitive to local areas of the volume with relatively low contrast, but this can be mitigated by the depth compensation and contrast enhancement techniques described above.
In one example embodiment, each segmentation technique may be applied to images/data having been separately pre-processed. In another embodiment, segmentation techniques may be selectively applied to images/data corresponding to different regions of interest. For example, a first two pre-processed images may be segmented according to a first segmentation technique, while a second two pre-processed images may be segmented according to a second segmentation technique. In another embodiment, after 3D volumetric OCT data has been pre-processed according to any number of techniques, a local thresholding segmentation technique is applied on B-scan images taken from the pre-processed 3D volumetric OCT data to generate a first determination of choroidal vasculature, a local thresholding technique is applied on C-scan images taken from the pre-processed 3D volumetric OCT data to generate a second determination of choroidal vasculature, and a global thresholding technique is applied to the entirety of the pre-processed 3D volumetric data to generate a third determination of choroidal vasculature.
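A minimal sketch of this three-way segmentation might look as follows; it assumes flattened, pre-processed data in a (B-scans, depth, A-lines) array and uses skimage's local (adaptive) and Otsu thresholding merely as stand-ins for the particular local and global thresholding techniques an implementation might choose.

```python
import numpy as np
from skimage.filters import threshold_local, threshold_otsu

def segment_three_ways(volume, block_size=51):
    """volume: pre-processed, flattened OCT data (n_bscans, depth, n_alines).
    Vessels are darker than stroma, so voxels *below* a threshold are labeled."""
    # 1) Local thresholding applied B-scan by B-scan.
    seg_b = np.stack([b < threshold_local(b, block_size) for b in volume])

    # 2) Local thresholding applied C-scan by C-scan (one slice per depth).
    seg_c = np.stack([c < threshold_local(c, block_size)
                      for c in volume.transpose(1, 0, 2)]).transpose(1, 0, 2)

    # 3) A single global threshold over the entire volume.
    seg_g = volume < threshold_otsu(volume)
    return seg_b, seg_c, seg_g
```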
Regardless of the number of pre-processing and segmentation techniques applied, the segmentations are then combined to produce a composite segmented image or data, which is free from artifacts and of sufficient quality both for processing to determine different quantitative metrics as part of an analysis 108, and for visualization of the segmentation and/or the metrics 110. The composite image may thus reflect the results of all of the applied pre-processing and segmentation techniques, and may be combined according to any method such as union, intersection, weighting, voting, and the like. Following segmentation 104, the segmented image or data may also be further post-processed, for example, for smoothing.
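For illustration only, one possible combination rule is a simple majority vote over the individual segmentations, followed by a 3D morphological closing as the post-processing/smoothing step; the vote count and structuring element below are arbitrary choices for the sketch rather than prescribed values.

```python
import numpy as np
from scipy.ndimage import binary_closing

def combine_segmentations(*masks, min_votes=2):
    """Majority-vote combination of equally shaped binary masks, followed
    by a simple 3D morphological closing for smoothing."""
    votes = np.sum(np.stack(masks).astype(np.uint8), axis=0)
    composite = votes >= min_votes  # keep voxels selected by enough techniques
    return binary_closing(composite, structure=np.ones((3, 3, 3)))
```

The three determinations from the earlier sketch could then be combined as, for example, `composite = combine_segmentations(seg_b, seg_c, seg_g)`; intersection, union, or weighting rules could be substituted for the vote.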
The above combination of pre-processing and segmentation is illustrated schematically with respect to
As noted above, utilizing the plurality of pre-processing and segmentation techniques to produce a composite result, rather than performing a single complex pre-processing and segmentation, reduces the total pre-processing and segmentation time and computational power. Nevertheless, the same quality may be achieved, and the segmentation can be applied to entire 3D volumes. The resulting segmentation can thus be free from noise and shadow artifacts and be of sufficient quality for visualization and quantification (discussed below). An example composite image according to the above is illustrated in
Referring back to
For example, within a 3D volume, the spatial volume (and, relatedly, the density, being the proportion of a given region of the volume that is vasculature or a like segmented component), diameter, length, volumetric ratio (also referred to as an index), and the like, of vasculature can be identified by comparing data segmented out in the composite segmented image relative to the un-segmented data. For example, counting the number of pixels segmented out may provide an indication of the amount of vasculature (e.g., volume or density) within a region of interest. By projecting those metrics along one dimension (e.g., taking a maximum, minimum, mean, sum, or the like), such as depth, a volume map, diameter map, index map, and the like can be generated. Such a map can visually show the quantified value of the metric for each location on the retina. Further, it is possible to identify the total volume, representative index, or the like by aggregating those metrics in a single dimension (e.g., over the entire map). Quantifying such metrics over large areas and from a single OCT volume permits previously unavailable comparison of volumetric OCT data between subjects, or of an individual subject over time.
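A hedged sketch of such maps is given below. It assumes a 3D binary vessel mask in a (B-scans, depth, A-lines) grid and, as an additional assumed input, a binary mask of the full choroid, from which a per-A-line vessel index (vessel-to-choroid ratio) is computed alongside the volume map and its aggregate.

```python
import numpy as np

def vasculature_maps(vessel_mask, choroid_mask, voxel_volume_mm3):
    """vessel_mask: 3D binary vessel segmentation (n_bscans, depth, n_alines).
    choroid_mask: 3D binary mask of the full choroid on the same grid."""
    depth_axis = 1
    vessel_counts = vessel_mask.sum(axis=depth_axis)    # vessel voxels per A-line
    choroid_counts = choroid_mask.sum(axis=depth_axis)  # choroid voxels per A-line

    volume_map = vessel_counts * voxel_volume_mm3        # 2D vessel volume map
    with np.errstate(divide="ignore", invalid="ignore"):
        # Choroidal vessel index: proportion of the choroid that is vessel.
        index_map = np.where(choroid_counts > 0,
                             vessel_counts / choroid_counts, 0.0)

    total_volume = volume_map.sum()  # aggregate metric over the entire map
    return volume_map, index_map, total_volume
```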
The metrics may also be comparative. For example, a comparative metric may be based on metrics of OCT volumes obtained from a single subject at different times, from different eyes (e.g., right and left eyes of a single individual), from multiple subjects (e.g., between an individual and a collection of individuals representative of a population), or from different regions of interest of the same eye (e.g., different layers). These comparisons may be made by determining the metric for each element of the comparison and then performing any statistical comparison technique. For example, the comparative metric may be a ratio of the comparative data, a difference between the comparative data, an average of the comparative data, a sum of the comparative data, and/or the like. The comparisons may be made for the volumetric data as a whole or on a location-by-location basis (e.g., at each pixel location of a comparative map).
When comparing metrics from common regions of interest, the compared elements (different data sets, images, volumes, metrics, and the like) are preferably registered to each other so that like comparisons can be made. In other words, the registration permits corresponding portions of each element to be compared. In some instances, for example when comparing changes in choroidal vasculature, the registration may not be made based on the vasculature itself because the vasculature is not necessarily the same in each element (e.g., due to treatments over the time periods being compared). Put more generally, registration is preferably not performed based on information that may be different between the elements or that is used in the metrics being compared. In view of this, in some embodiments registration may be performed based on en face images generated from raw (e.g., not pre-processed) OCT volumes of each compared element. These en face images may be generated by summation, averaging, or the like of intensities along each A-line in the region being used for registration. En face images are helpful in registration because retinal vessels cast shadows; the resulting darker retinal vasculature, which stays relatively stable, can therefore serve as a landmark on OCT en face images. Further, by nature, any metrics, choroidal vasculature images, or like images generated from an OCT volume are co-registered with the en face image because they come from the same volume. For example, superficial vessels in a first volume may be registered to superficial vessels in a second volume, and choroidal vessels (or metrics of the choroidal vessels) in the first volume may be compared to choroidal vessels in the second volume.
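The sketch below illustrates this strategy under the simplifying assumption of a purely translational misalignment: en face projections are formed by averaging along each A-line, skimage's phase cross-correlation estimates the offset from the retinal-vessel shadows, and the same offset is applied to a metric map derived from the second volume (which is co-registered with its own en face image).

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_metric_map(volume_a, volume_b, metric_map_b):
    """Align a metric map computed from volume_b to volume_a's coordinate
    frame using en face projections of the raw volumes for registration."""
    enface_a = volume_a.mean(axis=1)  # mean intensity along each A-line
    enface_b = volume_b.mean(axis=1)

    # Retinal-vessel shadows act as stable landmarks in the en face images.
    offset, _, _ = phase_cross_correlation(enface_a, enface_b)

    # Any map computed from volume_b shares enface_b's coordinates, so the
    # same translation brings it into alignment with volume_a.
    return nd_shift(metric_map_b, offset, order=1, mode="nearest")
```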
Visualizations of these metrics may then be produced and displayed 110 or stored for later viewing. That is, the techniques described herein are capable of producing not only visualizations of the segmented components of volumetric OCT data (e.g., choroidal vasculature) but also visualizations (e.g., maps and graphs) of quantified metrics related to that segmented component. Visualization of these quantified metrics further simplifies the above-noted comparisons. Such visualizations may be 2D representations of the metrics representing 3D volumetric information, and/or representations of the comparative metrics representing changes and/or differences between two or more OCT volumes. Considering the above-mentioned metrics, the visualizations may be, for example, a choroidal vessel index map, a choroidal thickness map, or a vessel volume map, and/or comparisons of each.
Information may be encoded in the visualizations in various forms. For example, an intensity of each pixel of the visualization may indicate a value of the metric at the location corresponding to the pixel, while color may indicate a trend of the value (or utilize intensity for the trend and color for the value). Still other embodiments may use different color channels to identify different metric information (e.g., a different color for each metric, with intensity representing a trend or value for that metric). Still other embodiments may utilize various forms of hue, saturation, and value (HSV) and/or hue, saturation, and light (HSL) encoding. Still other embodiments may utilize transparency to encode additional information. Example visualizations are illustrated in
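As one hypothetical encoding along these lines, the sketch below places the metric value in the HSV value (brightness) channel and the trend in the hue and saturation channels; the particular hue assignments and normalization are arbitrary choices for illustration.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def encode_metric_map(value_map, trend_map):
    """Render a 2D metric map as RGB: brightness encodes the metric value,
    hue encodes the sign of the trend (green = increase, red = decrease),
    and saturation encodes the trend magnitude."""
    v = value_map / (value_map.max() + 1e-9)   # metric value, normalized to [0, 1]
    t = np.clip(trend_map, -1.0, 1.0)          # trend, assumed scaled to [-1, 1]
    hue = np.where(t >= 0, 0.33, 0.0)          # green for increase, red for decrease
    sat = np.abs(t)                            # stronger trend -> more saturated
    return hsv_to_rgb(np.stack([hue, sat, v], axis=-1))
```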
As suggested above, metrics used to generate the 2D visualization maps may be further aggregated over regions of interest for additional analysis. For example, the metric values and/or pixel intensities may be aggregated for regions corresponding to the fovea (having a 1 mm radius), parafovea (superior, nasal, inferior, temporal) (having a 1-3 mm radius from the fovea center), perifovea (superior, nasal, inferior, temporal) (having a 3-5 mm radius from the fovea center), and/or the like. The aggregation may be determined by any statistical calculation, such as a summation, standard deviation, and the like. If the aggregated numbers are collected at different points in time, a trend analysis can be performed and a corresponding trend visualization generated. The aggregated numbers can also be compared between patients or to a normative value(s).
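A minimal sketch of such regional aggregation is shown below; it assumes isotropic en face pixel spacing, uses the ring radii stated above, omits the superior/nasal/inferior/temporal sub-sectors for brevity, and aggregates by summation as one example statistic.

```python
import numpy as np

def aggregate_regions(metric_map, fovea_center, mm_per_pixel):
    """Sum a 2D metric map over fovea (<1 mm), parafovea (1-3 mm), and
    perifovea (3-5 mm) regions centered on the fovea.
    fovea_center: (row, col) of the fovea in map coordinates."""
    rows, cols = np.indices(metric_map.shape)
    r_mm = np.hypot(rows - fovea_center[0], cols - fovea_center[1]) * mm_per_pixel
    regions = {
        "fovea": r_mm < 1.0,
        "parafovea": (r_mm >= 1.0) & (r_mm < 3.0),
        "perifovea": (r_mm >= 3.0) & (r_mm < 5.0),
    }
    return {name: float(metric_map[m].sum()) for name, m in regions.items()}
```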
An example visualization of a choroidal volume trend for the fovea and perifovea nasal is illustrated in
Another example visualization is illustrated in
Of course, similar 2D map and trend visualizations may be generated for different metrics. For example, a vessel thickness map and trend visualization may be generated by determining a total number of choroidal vasculature pixels for each A-line of a 3D volumetric data set; or a non-vessel index map and trend visualization may be generated by determining a total number of non-vessel pixels within a region (such as the choroid).
The above-described aspects are envisioned to be implemented via hardware and/or software by a processor. A “processor” may be any, or part of any, electrical circuit comprised of any number of electrical components, including, for example, resistors, transistors, capacitors, inductors, and the like. The circuit may be of any form, including, for example, an integrated circuit, a set of integrated circuits, a microcontroller, a microprocessor, a collection of discrete electronic components on a printed circuit board (PCB) or the like. The processor may be able to execute software instructions stored in some form of memory, either volatile or non-volatile, such as random access memories, flash memories, digital hard disks, and the like. The processor may be integrated with that of an OCT or like imaging system but may also stand alone or be part of a computer used for operations other than processing image data.