The present invention is generally directed to the field of visual field testing. More specifically, it is directed to a system and method for optimizing a field test for improved accuracy, improved repeatability, reduced overall test time, and for suggesting/identifying new locations in the visual field to test.
Glaucoma is one of the leading causes of blindness in the world, with 44.7 million people affected by open-angle glaucoma worldwide, a number projected to reach 58.6 million by 2020. While the use of optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA) is becoming more common in the management of glaucoma, the analysis of visual fields (VFs) remains the clinical gold standard for diagnosing and staging glaucoma, as well as for monitoring functional vision loss over time.
A visual field test is a method of measuring an individual's entire scope of vision, e.g., their central and peripheral (side) vision. Visual field testing maps the visual field of each eye individually and can detect blind spots (scotomas) as well as more subtle areas of dim vision.
A campimeter, or “perimeter,” is a dedicated machine/device/system that applies a visual field test to a patient. There are different types of perimeters and different types of visual field tests, but all visual field tests are subjective examinations. A patient must therefore be able to understand the testing instructions, fully cooperate, and complete the entire test while alert in order to provide useful information. Complicating this is the reality that a visual field test can take a relatively long time, which may fatigue a patient and compromise test results.
A common visual field test type, or algorithm, is the standard automated perimetry (SAP) test, which determines how dim a light can be and still be perceived (e.g., the threshold) at various points in an individual eye's visual field. Various algorithms have been developed to determine this threshold for different, individual test points in a single visual field. The Swedish interactive thresholding algorithm (SITA) may be combined with the SAP test to determine visual fields more efficiently, for example, when used with a Humphrey Field Analyzer (HFA) from ZEISS®. The SITA algorithm optimizes the determination of perimetry thresholds by continuously estimating the expected threshold based on the patient's age and neighboring thresholds. For example, depending on a patient's response to a first stimulus, the intensity of each subsequent stimulus presentation is modified. This iterative procedure is repeated until the likely threshold measurement error is reduced below a predetermined level, with one or more reversals typically occurring at every test location. In this manner, it can reduce the time necessary to acquire a visual field, decrease patient fatigue, and thereby increase reliability. Improvements to SITA have resulted in the SITA Fast and SITA Faster algorithms, which can reduce test times even further. Similar to the SITA test strategy for the HFA, the tendency-oriented perimetry (TOP) algorithm was developed for use with the Octopus™ perimeter as an alternative to its lengthy staircase threshold procedures. Nonetheless, visual field tests typically still take several minutes to perform for each eye, even with state-of-the-art test strategies, such as the various versions of SITA. Test times also tend to increase with more damaged or glaucomatous visual fields.
Overall, test strategies with shorter test times may help increase the frequency of visual field testing in glaucoma management, bringing clinical glaucoma care more in line with current recommendations of professional organizations. Shorter test times are generally preferred by patients, minimize the effects of patient fatigue leading to more reliable test results, and reduce the cost of testing.
It is an object of the present invention to reduce the overall test time of a visual field test.
It is another object of the present invention to reduce thresholding visual field test durations (e.g., the time needed for a patient to reach his/her minimum visible light threshold for an individual test point) with minimal or no loss of clinical accuracy.
It is a further object of the present invention to provide a system and method for improved predictions of a patient's expected threshold for individual test points.
It is still another object of the present invention to make use of structural and/or functional characteristics of a patient's eye, obtained by use of a different ophthalmic examination modality, to aid in the reducing of thresholding visual field test durations.
The above objects are met in a method/system for customizing visual field tests. The method/system may have multiple elements, including a data system for selecting a visual field test for a patient, where the selected visual field test has one or more test points of definable light intensity. A biometric (e.g., structural or functional) measurement of a retina of the patient is obtained, or otherwise accessed, such as from an electronic medical record (EMR). The biometric measurement may be collected by use of an optical coherence tomography (OCT) system, OCT angiography system, fundus imager, or other ophthalmic examination system modality for collecting physical/empirical ophthalmic data. For example, the biometric measurement may be based, at least in part, on an image of the retina, which may include 3D, or depth-resolved, data. A computing system or network, such as one embodying a machine learning architecture (e.g., an artificial intelligence system and/or neural network system), may be used to predict a respective threshold sensitivity value for one or more select test points of the selected visual field test based at least in part on the obtained biometric measurement(s). Each predicted threshold sensitivity value may include a light intensity measure that the patient is expected to discern with a predefined success rate (e.g., a 50% success rate), and/or an area measure (e.g., an illuminated point/shape/region of specific size) that the patient is expected to discern at a given brightness level, and/or a combination of both. A visual test system may use the predicted threshold sensitivity values as “priors,” e.g., inputs to the selected visual field test (which may use the priors to optimize the patient's VF test), and/or as starting intensity/area values for the one or more select test points when applying the selected visual field test to the patient. By using starting intensity values close to the patient's final test results, the patient can reach his/her threshold values more quickly, resulting in an overall shorter test duration.
Additionally or alternatively, predicted threshold sensitivity values may be used as synthesized VF priors in place of, or in addition to, true VF priors in a VF forecast system. The VF forecast system may use the synthesized VF priors (and optionally any available true VF priors) to forecast a future visual field for the patient.
Other objects and attainments together with a fuller understanding of the invention will become apparent and appreciated by referring to the following description and claims taken in conjunction with the accompanying drawings.
Several publications may be cited or referred to herein to facilitate the understanding of the present invention. All publications cited or referred to herein are hereby incorporated herein in their entirety by reference.
The embodiments disclosed herein are only examples, and the scope of this disclosure is not limited to them. Any embodiment feature mentioned in one claim category, e.g. system, can be claimed in another claim category, e.g. method, as well. The dependencies or references back in the attached claims are chosen for formal reasons only. However, any subject matter resulting from a deliberate reference back to any previous claims can be claimed as well, so that any combination of claims and the features thereof are disclosed and can be claimed regardless of the dependencies chosen in the attached claims.
In the drawings wherein like reference symbols/characters refer to like parts:
In a typical visual field (VF) test, a patient is presented with a number of test points distributed (e.g., sequentially) over a visual field, and asked to discern the appearance of individual test points. The size and/or light intensity of individual test points may be adjusted until the patient is able to identify the appearance of an individual test point with a predefined success rate, such as 50%. This final size and/or intensity of a test point defines that test point's threshold value, which may be the basis for a visual sensitivity measure incorporated into the visual field test's results. If the initial size and/or intensity of a test point is far from its final threshold value, many adjustment iterations may be needed before “thresholding” (e.g., reaching the patient's threshold value for that specific test point), leading to a longer test time. Thus, a goal of efficient thresholding strategies is to select initial size and/or intensity values for individual test points that are close to their final threshold values for a specific patient, and thereby lead to shortened visual field test times.
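For illustration, the following is a minimal sketch of a 4-2 dB staircase with an idealized, deterministic responder. It is a simplification (real strategies such as SITA are far more sophisticated), but it demonstrates the point above: a starting value near the true threshold reaches a reversal-based stopping criterion in fewer stimulus presentations than a distant one.

```python
def patient_sees(stimulus_db, true_threshold_db):
    # Idealized responder: in perimetry, a higher dB value is a dimmer
    # stimulus, so the patient sees any stimulus at or below threshold dB.
    return stimulus_db <= true_threshold_db

def staircase(start_db, true_threshold_db):
    """4-2 dB staircase: 4 dB steps until the first reversal, then 2 dB
    steps; stop at the second reversal and return (level, presentations)."""
    level, step, presentations, last_seen = start_db, 4.0, 0, None
    while True:
        seen = patient_sees(level, true_threshold_db)
        presentations += 1
        if last_seen is not None and seen != last_seen:  # a reversal
            if step == 2.0:
                return level, presentations
            step = 2.0
        last_seen = seen
        level += step if seen else -step  # dimmer if seen, brighter if not

print(staircase(start_db=30.0, true_threshold_db=10.0))  # far start: more presentations
print(staircase(start_db=12.0, true_threshold_db=10.0))  # near start: fewer presentations
```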
Efficient thresholding strategies have been pushing the limits of threshold testing. One approach toward improving thresholding is using visual field “priors,” or prior information (e.g., historical data or statistical models derived from historical data) used to estimate a patient's future VF test performance. Bayesian strategies by default incorporate the idea of prior information or data that are updated with each stimulus presentation (e.g., test point) and response. The Swedish Interactive Thresholding Algorithm (SITA) and the Zippy Estimation by Sequential Testing (ZEST) perimetric algorithms are examples of strategies that use Bayesian prior techniques. A discussion of SITA may be found in “SITA Fast, A New Rapid Perimetric Threshold Test, Description of Methods and Evaluation in Patients with Manifest and Suspect Glaucoma,” by Boel Bengtsson et al., Acta Ophthalmologica Scandinavica, 1998; 76: 431-437, and in “A New SITA Perimetric Threshold Testing Algorithm: Construction and a Multicenter Clinical Study,” by Anders Heijl et al., American Journal of Ophthalmology, Vol. 198, February 2019, Pages 154-165. Similarly, a discussion of ZEST may be found in “Targeted Spatial Sampling Using GOANNA Improves Detection of Visual Field Progression,” by Chong et al., Ophthalmic Physiol Opt, 2015, March, 35(2):155-69. All of these references are herein incorporated in their entirety by reference. The priors (e.g., previously collected data and/or population-derived data) are often based on uniform values (often supra-threshold/bright), related to age-matched data, or even derived from previous visual fields of the same patient. One limitation of using these priors is that uniform or age-matched data are not individualized to a given patient, meaning extra stimuli at a given location might be required. Visual field priors of the same patient are possible, but may often be unavailable (e.g., at a patient's first visit) or be out of date because VF tests are not administered as often as other tests. That is, because VF tests are subjective and take more time than other more typical ophthalmic tests, such as structural/imaging tests, VF tests might not be administered at their recommended intervals.
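As a hedged illustration of the Bayesian idea behind ZEST-like strategies (a sketch, not the published algorithm), the following maintains a probability density over candidate thresholds, multiplies it by the likelihood of each seen/not-seen response, and presents the next stimulus at the density's mean. The flat prior here is a placeholder; the present disclosure is concerned with replacing it with a better-informed, biometric-derived one.

```python
import numpy as np
from scipy.stats import norm

levels = np.arange(0, 41, dtype=float)         # candidate thresholds, 0-40 dB

def zest_update(pdf, stimulus_db, seen, slope=2.0):
    # Likelihood of a "seen" response for each candidate threshold, using a
    # cumulative-Gaussian frequency-of-seeing curve (slope is illustrative).
    p_seen = norm.cdf((levels - stimulus_db) / slope)
    pdf = pdf * (p_seen if seen else 1.0 - p_seen)
    return pdf / pdf.sum()

pdf = np.full(levels.size, 1.0 / levels.size)  # flat prior (placeholder)
for _ in range(4):                             # a few presentations
    stim = float(levels @ pdf)                 # present at the pdf mean
    seen = stim <= 17.0                        # toy deterministic responder
    pdf = zest_update(pdf, stim, seen)

print(levels @ pdf)                            # running threshold estimate (dB)
```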
A newer approach toward facilitating perimetry is to construct structure-derived visual fields, which may include one or more of derived visual fields, derived visual sensitivity measures, and derived priors based on one or more sources of quantifiable data, such as ophthalmic images, patient-specific physiological characteristics/measures, medical condition(s), medical treatment(s), (visual) evoked potential tests, and/or other vision-related testing. Structural imaging, such as optical coherence tomography (OCT) imaging, has been used to estimate (e.g., derive) visual fields, which have typically been positioned as “replacement” fields for functional VF testing (e.g., for use in place of standard/functional visual field testing). Because structural data are often more reproducible than functional VF data, they may provide the benefit of more reproducible, derived visual fields. A limitation of previous structure-derived visual fields is that they are typically generated with custom mathematical models that are often tied to a specific instrument, as is discussed in “Relationships of Retinal Structure and Humphrey 24-2 Visual Field Thresholds in Patients with Glaucoma,” by Bogunovic et al., Invest. Ophthalmol. Vis. Sci., 2015; 56(1): 259-271, herein incorporated in its entirety by reference. This use of custom mathematical models tied to specific instruments limits the utility of structure-derived visual fields. Another obstacle to previous structure-derived visual fields is that standard (functional) visual fields are still considered the gold standard for evaluating visual function. Consequently, functional visual fields may be more trusted by the general clinician than estimated visual fields derived from structural priors.
In evoked potential (EP) tests, or evoked response (ER) tests, electrodes are used to record an electrical potential response from a specific part of a patient's nervous system, typically the brain, following presentation of a stimulus (sensory stimulation), such as through light, sound, or touch. For example, an evoked potential test may measure the time it takes for the brain to respond to a sensory stimulation. In a visual evoked potential (VEP) test, electrodes may be placed on the patient's scalp while the patient sits in front of a screen and watches a changing light pattern (e.g., first with one eye, and then with the other). A VEP test may record each eye's response to the changing pattern. For example, the patient may be asked to gaze at a checkerboard pattern on the screen while the colors of the squares alternate at a predefined frequency and/or in a predefined pattern, and the VEP test records which changes the patient was able to perceive based on the patient's evoked potential response.
However, it is believed that structural priors have not been used to facilitate the construction/administering of standard, functional visual fields. Herein is proposed a method, system, and/or workflow that generates accurate (true/functional) visual fields with reduced test times in a novel way.
The present invention combines the use of ophthalmic imaging/examining systems (and/or their output results) with a visual field testing system (e.g., perimeter) to optimize a functional visual field test (e.g., optimize its starting points, e.g., the initial light intensity and/or size of test points of the functional visual field test). For example, the optimized starting points may be estimated/predicted to be close to a patient's expected thresholding (e.g., final) value for a given test point. In this manner, the number of (intensity and/or size) iterative adjustments for a given test point to reach the patient's threshold value is reduced, leading to a reduced overall test time. A discussion of a typical visual test system and typical (functional) visual field test, in general, is provided below in section “Visual Field Test System”.
Various types of ophthalmic imaging/examining systems are known in the art, such as fundus imagers, OCT systems, and OCT angiography (OCTA) systems. Fundus imagers may take two-dimensional (2D) images of the surface of the retina, or other parts of the eye. Various structural measurements/observations may be made from fundus images. OCT and OCTA enable noninvasive, depth-resolved (e.g., A-scan), volumetric (e.g., C-scan) and 2D (e.g., en face or cross-sectional/B-scan) visualization of retinal vasculature. OCT may provide structural images of vasculature whereas OCTA may provide functional images (e.g., blood flow) of vasculature. For example, OCTA may image vascular flow by using the motion of flowing blood as an intrinsic contrast. These types of ophthalmic imaging systems are discussed below in section “Fundus Imaging System” and in section “Optical Coherence Tomography (OCT) Imaging System.” Unless otherwise stated, aspects of the present invention(s) may apply to any, or all, such ophthalmic imaging systems. For example, the methods/systems presented herein for optimizing thresholding (e.g., optimizing the starting values of test points in, and/or providing synthesized “priors” for, a functional visual field test) may incorporate structural and/or functional (e.g., motion) ophthalmic information (biometrics measurements) extracted from an eye, and this ophthalmic information (biometric measurements) may be obtained by use of a fundus imager, OCT system, and/or OCTA system.
Some embodiments of the present invention leverage existing Bayesian-type strategies and add synthesized/derived “priors” (e.g., synthesized visual fields) that are derived from structural and/or functional ophthalmic data/imaging (e.g., fundus image, OCT scan/image, OCTA scan/image, patient-specific physiological characteristics/measures, medical condition(s), medical treatment(s), (visual) evoked potential tests, and/or other vision-related testing) in place of true VF priors (e.g., prior functional VF test results taken by use of a perimeter). These synthesized prior visual fields may be derived using machine learning (ML) techniques, such as deep learning (DL) and/or artificial intelligence (AI) methods. That is, unlike prior approaches that use true VF priors to attempt to accelerate functional VF testing, the present approach proposes using a synthesized VF prior that is determined from structural (such as OCT and fundus imaging) and/or functional (such as OCTA imaging) ophthalmic information, herein collectively referred to as “biometric” and/or “physical characteristic” (measure/measurement) derived priors. An advantage of this approach is that biometric derived priors may be more repeatable (e.g., have less variability) and may often be less onerous to obtain for a subject (e.g., they could be derived quickly at the same clinic visit, before the patient receives a traditional functional visual field test) than generating multiple (true) prior visual fields to establish a VF history for the subject. Additionally, the biometric derived priors may be created using methods of Artificial Intelligence (AI), Machine Learning (ML), and/or Deep Learning (DL).
It is to be understood that there are various different types of machine learning models known in the art, such as supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, etc. Although aspects of the present discussion provide examples using specific machine learning models, such as DL and AI, it is to be understood that other types of machine learning models, singularly or in combination, may be used with the present invention. For example, one or more of Nearest Neighbor, Naive Bayes, Decision Trees, Linear Regression, Support Vector Machines (SVM), and Neural Networks may be used to implement a supervised learning model in accord with the present invention.
By using machine learning techniques to derive biometric derived priors, the present invention not only has the potential to generate more robust and reproducible input (synthesized/derived) visual fields (VFs), but also takes advantage of some features intrinsic to those methods that may help provide a better understanding of an ophthalmic biometric in relation to a VF function model (e.g., identify relationships between observed biometric measurements and VF tests). By using biometric (e.g., image) data that is usually collected as part of a standard clinical workflow to create structural priors (e.g., biometric derived priors) in lieu of a true visual field prior (which may not be available from previous visits or is less reproducible) as input to a fast VF testing strategy (which may have no other available VF prior data), such as SITA, it is estimated that the present approach can potentially reduce current threshold VF test time by up to 30% in glaucomatous eyes. That is, the present invention pushes the limits of threshold testing beyond what can be achieved using modern thresholding strategy types alone, such as SITA and its variants, which may be reaching their optimization limits. For example, the present approach may extend these limits by introducing biometric measurements as additional sources of prior information for optimizing functional VF testing.
The present system obtains one or more biometric measurements (e.g., physical characteristic measures), such as of the retina of the patient (optionally including prior functional tests of the patient) to whom the visual test is to be administered, as indicated by block 12, to construct structure-derived visual fields. The biometric measurement may be based on an image of the retina obtained using any of multiple imaging modalities and/or images (e.g., photocopies, bitmap/raster/vector or other digital images, print-outs, etc.) of previous patient tests. For example, the imaging modality may be grayscale, color, infrared, retinal layer thickness map, fundus photography, optical coherence tomography (OCT), Doppler OCT, OCT angiography, and/or fluorescein angiography. The biometric measurement 12 may be extracted from (e.g., be based on) or include the entirety (or a portion) of one or more OCT/OCTA images 12A, prior visual field test results 12B (or the main sensitivity values of the prior visual field test), fundus images 12C, fluorescein angiography (FA) image(s) 12D, VEP 12E, or other imaging modality or retinal/vision measuring technique/device. The biometric measurement may be obtained by use of an ophthalmic test system (e.g., an OCT system or fundus imager, not shown) directly on the patient at the time of the patient's visit to a clinic, or may be accessed from a data store of the patient's medical records, such as from an electronic medical record (EMR). Examples of the biometric measure may include one or more A-scans, B-scans, C-scans, or en face images obtained by use of an OCT/OCTA system. The biometric measure may include the shape, size, color, and/or relative position of individual ophthalmic structures, such as the optic nerve head (ONH) and fovea, as well as retinal thickness and thickness measures of individual retinal layer(s). Other examples of biometric measures may include blood flow measures and/or tissue motion measures at specific regions of the retina, regions of discoloration from an expected norm, regions of vascular conversion (e.g., their size, locations, and/or number), exudate formation (e.g., their size, locations, and/or number), large vessel count, small vessel count, and identification of specific structures, some of which may be indicative of (e.g., associated with) pathology. For example, exudate-associated derangements are lesions that have been associated with certain types of “wet” age-related macular degeneration (AMD). The biometric measure may further include a comparison of the relative measures of different physiological features, such as the distance(s) between (and/or relative orientations/positionings of) specific structures and/or comparative size ratio(s) of specific structures.
The obtained biometric measure(s) may be submitted to a machine learning model 15, which may be embodied within one or more computing systems (e.g., electronic processors). It is to be understood that individual retinal images (e.g., OCT/OCTA, fundus, and/or FA images) may be submitted to machine model 15 as one or more biometric measures, and machine model 15 may extract individual biometric sub-measures from the submitted image(s), as needed. Optionally, the machine model 15 may also receive as input information regarding the specific VF test algorithm selected to be administered to the patient. For example, machine model 15 may be informed of the type of VF test that is to be administered to the patient, which may enable it to better tailor its construction of a suitable biometric derived prior. Machine learning model 15 may determine (e.g., predict/synthesize/derive) a respective threshold value (e.g., visual sensitivity value) for one or more select test points of the selected VF test type based at least in part on its received biometric measurement(s). Each threshold sensitivity value may be based on a light intensity measure and/or point size measure for an individual VF test point that the patient is expected to discern with a predefined success rate (e.g., a 50% success rate). That is, machine learning model 15 outputs synthesized VF thresholds (e.g., VFTh_out), which may constitute one or more VF priors, e.g., a collection of numerical data (illustratively shown as a derived VF test output 10), and which may be used in conjunction with the selected functional VF test administered to the patient, as indicated by block 13. Consequently, the present system results in an accelerated functional VF exam 17 (e.g., a VF exam of shortened time duration).
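A hypothetical sketch of machine learning model 15 as a multi-output regressor follows. The file names and feature layout are illustrative placeholders (not an actual instrument API), and a random forest stands in for whatever trained model is ultimately used.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: per-eye biometric feature vectors paired with
# measured 24-2 visual fields (54 threshold values, in dB).
X_train = np.load("biometric_features.npy")   # shape: (n_eyes, n_features)
y_train = np.load("vf_thresholds_db.npy")     # shape: (n_eyes, 54)

model_15 = RandomForestRegressor(n_estimators=200).fit(X_train, y_train)

# Inference: derive synthesized VF priors (VFTh_out) for a new patient and
# hand them to the perimeter as starting/prior values for the selected test.
x_patient = np.load("patient_features.npy").reshape(1, -1)
vfth_out = model_15.predict(x_patient)[0]     # 54 predicted thresholds (dB)
```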
Optionally, the individual threshold sensitivity value(s) VFTh_out may be further based on additional patient-related data beyond structural or image data, such as may be accessed from an EMR, as indicated by block 14. For example, determination of the threshold sensitivity value for the one or more select test points of the selected visual field test may be further based on patient-age-specific normative data associated with the specific imaging device(s) (e.g., OCT and/or fundus imager) from which one or more of the biometric measure(s) were obtained. The prediction of the threshold sensitivity value(s) may also be based on non-structural patient-specific data (e.g., physiological data not extracted from the input retinal image(s) of block 12), such as one or more of the patient's age, ethnic group, and medical history. The determination of the threshold sensitivity values may also be based on prior patient-specific functional tests, such as prior VF test results and/or prior (visual) evoked potential test data.
To reiterate, the thus determined (e.g., predicted/derived) visual sensitivity value(s) VFTh_out may be submitted to the perimeter 13, which may use them as its starting VF test point values (e.g., intensity and/or size input priors) for the corresponding one or more select test point(s), or otherwise use them to optimize its VF test, when applying the selected functional VF test to the patient. That is, the derived sensitivities VFTh_out may be modified in the construction of priors. For example, the chosen VF test may start using input priors having an offset (e.g., higher or lower intensity) from the derived sensitivities VFTh_out.
Alternatively, or in addition, the determined, or estimated, threshold sensitivity value(s) may be used as VF priors and/or be used to determine a prediction of the patient's visual field that may be used for diagnostic/clinical interpretation or structure-function analyses. For example, the patient's predicted visual field may be used as part of a clinical decision support (CDS) system, which provides clinicians, staff, patients, or other individuals with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health care. The present system may be incorporated as an additional tool in a CDS system to enhance decision-making in the clinical workflow. For example, the present system may provide computerized alerts and reminders to care providers and patients and provide clinical guidelines, condition-specific order sets (e.g., recommendation for a visual field test or other medical test), focused patient data reports and summaries, documentation templates, diagnostic support, and contextually relevant reference information. For example, the current derived sensitivities VFTh_out may be compared with one or more previous derived sensitivity results and/or true visual field test results (e.g., from prior doctor visits), and a warning flag/message may be issued when the current derived sensitivities VFTh_out indicate that the patient's visual field may be changing beyond a predefined range and/or a predefined area and/or a predefined rate of change. The warning flag/message may indicate that the patient should be scheduled for a true visual field test.
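The following is a minimal sketch of such a change-detection check. The 3 dB per-point cutoff and minimum point count are arbitrary example values, not clinically validated criteria.

```python
import numpy as np

def vf_change_flag(current_db, baseline_db, point_delta_db=3.0, min_points=5):
    """Flag a possible VF change when enough points move beyond the allowed
    per-point range (both cutoffs are arbitrary example values)."""
    delta = np.abs(np.asarray(current_db) - np.asarray(baseline_db))
    return int((delta > point_delta_db).sum()) >= min_points

current = np.array([28.0, 25.0, 20.0, 14.0])    # toy derived sensitivities
baseline = np.array([29.0, 29.5, 26.0, 20.0])   # toy prior result
if vf_change_flag(current, baseline, min_points=2):
    print("Flag: schedule the patient for a true visual field test.")
```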
Machine learning model 15 may be based on one or more of linear regression, logistic regression, decision tree, support vector machine, naive Bayes, k-nearest neighbors, k-means, random forest, dimensionality reduction, gradient boosting, and neural network. Generally, a machine learning model is a computing system that can be trained to perform a specific function or functions, and selection of a specific model may depend on the type of problem being addressed. For example, a support vector machine (SVM) is a machine learning, linear model for classification and regression problems, and may be used to solve linear and non-linear problems. The idea of an SVM is to create a line or hyperplane that separates data into classes. More formally, an SVM defines one or more hyperplanes in a multi-dimensional space, where the hyperplanes are used for classification, regression, outlier detection, etc. Essentially, an SVM model is a representation of labeled training examples as points in multi-dimensional space, mapped so that the labeled training examples of different categories are divided by hyperplanes, which may be thought of as decision boundaries separating the different categories. When a new test input sample is submitted to the SVM model, the test input is mapped into the same space and a prediction is made regarding what category it belongs to based on which side of a decision boundary (hyperplane) the test input lies.
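A toy scikit-learn example of the SVM behavior just described (with made-up data): labeled points are fit with a linear separating hyperplane, and a new sample is classified by which side of the decision boundary it falls on.

```python
from sklearn import svm

X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]  # labeled training points
y = [0, 0, 1, 1]                                      # two categories
clf = svm.SVC(kernel="linear").fit(X, y)              # fit separating hyperplane

print(clf.predict([[0.8, 0.9]]))    # [1]: falls on the class-1 side
print(clf.coef_, clf.intercept_)    # hyperplane parameters w, b (w.x + b = 0)
```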
In a preferred implementation of the present invention, however, machine learning model 15 is realized, at least in part, within a computing system that includes/embodies a trained neural network, which may be based on deep learning. Various examples of neural networks are discussed below.
It is noted that taking into consideration previous visual field test results may be helpful in identifying trends in a patient's changing visual field, which may lead to more accurate predictions. However, because heretofore visual field tests have been time-consuming and not always administered at prescribed (e.g., regular) intervals, there may be gaps in the visual field test results of a patient. Consequently, there may not be enough data to determine a trend or tendency in the patient's changing visual field. The present system addresses this issue by providing the synthesized/derived visual field tests to fill in such gaps. For example, although a patient might have skipped taking a visual field test at a particular clinic visit (or particular month/time), the patient may have had a retinal image taken (e.g., OCT, OCTA, fundus image, FA, etc.) at that clinic visit (or within a predefined time frame, e.g., a month or other set number of weeks/days). In this case, that retinal image may be used to extract a derived visual field. This derived visual field may then be used in place of a true functional visual field in a VF-related analysis. For example, such derived visual fields may be used to create additional training sets (e.g., used as a VF target output VFTRi in a particular training pair TPi).
A preliminary proof-of-concept study was conducted to evaluate the performance of using structure-derived visual field priors (S-priors) for simulated visual fields (VFs). Qualified (e.g., retrospective) data from 1399 subjects (single eyes) from a Singapore population study were used in this study. Data from the Humphrey Field Analyzer (HFA2i)® (ZEISS, Dublin, Calif.) SITA Standard 24-2 VFs and the CIRRUS® HD-OCT (ZEISS, Dublin, Calif.), including Optic cubes, were collected at the study visit. Seventy percent of the eyes were used to train regressors (e.g., a random forest regressor) to predict a 54-point VF. A random forest (RF) using the 256-point circumpapillary retinal nerve fiber layer (RNFL) data and age was constructed. A simplified mixed-scale dense convolutional neural network (CNN) using the RNFL thickness map was also constructed; see, for example, Pelt et al., “A Mixed-Scale Dense Convolutional Neural Network for Image Analysis,” PNAS, 2018, 115(2), 254-259, herein incorporated in its entirety by reference. The remaining 30% of the eyes were used to predict S-priors and to provide input fields to a VF simulator.
The VF simulator implemented a Bayesian ZEST using a bi-modal starting probability distribution (SPD) with no prior (ZEST), as described in “Targeted Spatial Sampling Using GOANNA Improves Detection of Visual Field Progression” (Chong et al., Ophthalmic Physiol Opt, 2015, March; 35(2):155-69), except the normal mode was instead centered on age-normal values determined from a normal cohort of 118 eyes, as described in “Exploring the Structure-Function Relationship for Perimetry Stimulus Sizes III, V and VI and OCT in Early Glaucoma,” Flanagan et al., ARVO (Association for Research in Vision and Ophthalmology) Abstract, Investigative Ophthalmology & Visual Science (IOVS), September 2016, Volume 57, 376, herein incorporated in its entirety by reference.
ZESTs using a uni-modal SPD designed for custom priors centered on both types of S-priors were also simulated (e.g., ZEST-RF, ZEST-CNN). Slopes of frequency-of-seeing responses were modeled, as described in “Response Variability in the Visual Field: Comparison of Optic Neuritis, Glaucoma, Ocular Hypertension, and Normal Eyes” (Henson et al., IOVS, February 2000, Vol. 41, 417-421), herein incorporated in its entirety by reference. False answer rates were set to 0%, 5%, and 20% to model three types of responders. Performance between simulated (e.g., synthesized) and true VFs was evaluated by observing the mean absolute error (MAE) between simulated and true VFs and the total number of questions. The two locations nearest the blind spot were excluded from the analyses. Significance testing (two one-sided, paired t-tests, α=0.05) for inter-strategy equivalence versus ZEST was performed using limits of agreement of ±5% dB for MAE and ±5% for total questions.
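A sketch of such a simulated responder is shown below, assuming a cumulative-Gaussian frequency-of-seeing curve with a fixed slope (a placeholder rather than the fitted variability model of Henson et al.) mixed with a fixed false-answer rate.

```python
import numpy as np
from scipy.stats import norm

def p_seen(stimulus_db, threshold_db, false_rate=0.05, slope_db=2.0):
    """Probability of a 'seen' response: a cumulative-Gaussian frequency-of-
    seeing curve mixed with a fixed false-answer rate (0%, 5%, or 20% in the
    study above; the fixed slope here is an illustrative placeholder)."""
    p = norm.cdf((threshold_db - stimulus_db) / slope_db)
    return (1.0 - false_rate) * p + false_rate * (1.0 - p)

rng = np.random.default_rng(0)
seen = rng.random() < p_seen(stimulus_db=18.0, threshold_db=20.0, false_rate=0.20)
```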
The results show that mean VF mean deviation (MD) was −1.8±2.4 dB for the training set and −2.7±2.7 dB for the test set (p<0.001).
Hereinafter is provided a description of various hardware and architectures suitable for the present invention.
Visual Field Test System
The improvements described herein may be used in conjunction with any type of visual field tester/system, e.g., perimeter. One such system is a “bowl” visual field tester VF0.
A projector, or other imaging device, VF4 under control of a processor VF5 displays a series of test stimuli (e.g., test points of any shape) VF6 onto the screen VF2. The subject VF1 indicates that he/she sees a stimulus VF6 by actuating a user input VF7 (e.g., depressing an input button). This subject response may be recorded by processor VF5, which may function to evaluate the visual field of an eye based on the subject's responses, e.g., determine the size, position, and/or intensity of a test stimulus VF6 at which it can no longer be seen by the subject VF1, and thereby determine the (visible) threshold of the test stimulus VF6. A camera VF8 may be used to capture the gaze (e.g., gaze direction) of the patient throughout the test. Gaze direction may be used for patient alignment and/or to ascertain the patient's adherence to proper test procedures. In the present example, the camera VF8 is located on the Z-axis relative to the patient's eye (e.g., relative to trial lens holder VF9) and behind the bowl (of screen VF2) for capturing live image(s) or video of the patient's eye. In other embodiments, this camera may be located off this Z-axis. The images from the gaze camera VF8 can optionally be displayed on a second display VF10 to a clinician (who may also be interchangeably referred to herein as a technician) to aid in patient alignment or test verification. The camera VF8 may record and store one or more images of the eye during each stimulus presentation. This may lead to a collection of anywhere from tens to hundreds of images per visual field test, depending on the testing conditions. Alternatively, the camera VF8 may record and store a full-length movie during the test and provide time stamps indicating when each stimulus is presented. Additionally, images may also be collected between stimulus presentations to provide details on the subject's overall attention throughout the VF test's duration.
Trial lens holder VF9 may be positioned in front of the patient's eye to correct for any refractive error in the eye. Optionally, the lens holder VF9 may carry or hold a liquid trial lens (see for example U.S. Pat. No. 8,668,338, the contents of which are hereby incorporated in their entirety by reference), which may be utilized to provide variable refractive correction for the patient VF1. However, it should be noted that the present invention is not limited to using a liquid trial lens for refraction correction and other conventional/standard trial lenses known in the art may also be used.
In some embodiments, one or more light sources (not shown) may be positioned in front of the eye of the subject VF1, which create reflections from ocular surfaces such as the cornea. In one variation, the light sources may be light-emitting diodes (LEDs).
Visual field tester VF0 may incorporate an instrument-control system (e.g. running an algorithm, which may be software, code, and/or routine) that uses hardware signals and a motorized positioning system to automatically position the patient's eye at a desired position, e.g., the center of a refraction correction lens at lens holder VF9. For example, stepper motors may move chin rest VF12 and the forehead rest VF14 under software control. A rocker switch may be provided to enable the attending technician to adjust the patient's head position by causing the chin rest and forehead stepper motors to operate. A manually moveable refraction lens may also be placed in front of the patient's eye on lens holder VF9 as close to the patient's eye as possible without adversely affecting the patient's comfort. Optionally, the instrument control algorithm may pause perimetry test execution while chin rest and/or forehead motor movements are under way if such movements would disrupt test execution.
Fundus Imaging System
Two categories of imaging systems used to image the fundus are flood illumination imaging systems (or flood illumination imagers) and scan illumination imaging systems (or scan imagers). Flood illumination imagers flood with light an entire field of view (FOV) of interest of a specimen at the same time, such as by use of a flash lamp, and capture a full-frame image of the specimen (e.g., the fundus) with a full-frame camera (e.g., a camera having a two-dimensional (2D) photo sensor array of sufficient size to capture the desired FOV, as a whole). For example, a flood illumination fundus imager would flood the fundus of an eye with light, and capture a full-frame image of the fundus in a single image capture sequence of the camera. A scan imager provides a scan beam that is scanned across a subject, e.g., an eye, and the scan beam is imaged at different scan positions as it is scanned across the subject, creating a series of image-segments that may be reconstructed, e.g., montaged, to create a composite image of the desired FOV. The scan beam could be a point, a line, or a two-dimensional area such as a slit or broad line.
From the scanner LnScn, the illumination beam passes through one or more optics, in this case a scanning lens SL and an ophthalmic or ocular lens OL, that allow for the pupil of the eye E to be imaged to an image pupil of the system. Generally, the scan lens SL receives a scanning illumination beam from the scanner LnScn at any of multiple scan angles (incident angles), and produces scanning line beam SB with a substantially flat surface focal plane (e.g., a collimated light path). Ophthalmic lens OL may focus the scanning line beam SB onto the fundus F (or retina) of eye E and image the fundus. In this manner, scanning line beam SB creates a traversing scan line that travels across the fundus F. One possible configuration for these optics is a Kepler type telescope wherein the distance between the two lenses is selected to create an approximately telecentric intermediate fundus image (4-f configuration). The ophthalmic lens OL could be a single lens, an achromatic lens, or an arrangement of different lenses. All lenses could be refractive, diffractive, reflective, or hybrid, as known to one skilled in the art. The focal length(s) of the ophthalmic lens OL and scan lens SL and the size and/or form of the pupil splitting mirror SM and scanner LnScn could be different depending on the desired field of view (FOV), and so an arrangement in which multiple components can be switched in and out of the beam path, for example by using a flip-in optic, a motorized wheel, or a detachable optical element, depending on the field of view can be envisioned. Since the field of view change results in a different beam size on the pupil, the pupil splitting can also be changed in conjunction with the change to the FOV. For example, a 45° to 60° field of view is a typical, or standard, FOV for fundus cameras. Higher fields of view, e.g., a widefield FOV of 60°-120° or more, may also be feasible. A widefield FOV may be desired for a combination of the Broad-Line Fundus Imager (BLFI) with other imaging modalities, such as optical coherence tomography (OCT). The upper limit for the field of view may be determined by the accessible working distance in combination with the physiological conditions around the human eye. Because a typical human retina has a FOV of 140° horizontal and 80°-100° vertical, it may be desirable to have an asymmetrical field of view for the highest possible FOV on the system.
The scanning line beam SB passes through the pupil Ppl of the eye E and is directed towards the retinal, or fundus, surface F. The scanner LnScn1 adjusts the location of the light on the retina, or fundus, F such that a range of transverse locations on the eye E are illuminated. Reflected or scattered light (or emitted light in the case of fluorescence imaging) is directed back along a similar path as the illumination to define a collection beam CB on a detection path to camera Cmr.
In the “scan-descan” configuration of the present, exemplary slit scanning ophthalmic system SLO-1, light returning from the eye E is “descanned” by scanner LnScn on its way to pupil splitting mirror SM. That is, scanner LnScn scans the illumination beam from pupil splitting mirror SM to define the scanning illumination beam SB across eye E, but since scanner LnScn also receives returning light from eye E at the same scan position, scanner LnScn has the effect of descanning the returning light (e.g., cancelling the scanning action) to define a non-scanning (e.g., steady or stationary) collection beam from scanner LnScn to pupil splitting mirror SM, which folds the collection beam toward camera Cmr. At the pupil splitting mirror SM, the reflected light (or emitted light in the case of fluorescence imaging) is separated from the illumination light onto the detection path directed towards camera Cmr, which may be a digital camera having a photo sensor to capture an image. An imaging (e.g., objective) lens ImgL may be positioned in the detection path to image the fundus to the camera Cmr. As is the case for objective lens ObjL, imaging lens ImgL may be any type of lens known in the art (e.g., refractive, diffractive, reflective or hybrid lens). Additional operational details, in particular, ways to reduce artifacts in images, are described in PCT Publication No. WO2016/124644, the contents of which are herein incorporated in their entirety by reference. The camera Cmr captures the received image, e.g., it creates an image file, which can be further processed by one or more (electronic) processors or computing devices (e.g., the computer system shown in
In the present example, the camera Cmr is connected to a processor (e.g., processing module) Proc and a display (e.g., displaying module, computer screen, electronic screen, etc.) Dspl, both of which can be part of the imaging system itself, or may be part of separate, dedicated processing and/or displaying unit(s), such as a computer system wherein data is passed from the camera Cmr to the computer system over a cable or computer network, including wireless networks. The display and processor can be an all-in-one unit. The display can be a traditional electronic display/screen or of the touch screen type and can include a user interface for displaying information to and receiving information from an instrument operator, or user. The user can interact with the display using any type of user input device as known in the art including, but not limited to, mouse, knobs, buttons, pointer, and touch screen.
It may be desirable for a patient's gaze to remain fixed while imaging is carried out. One way to achieve this is to provide a fixation target that the patient can be directed to stare at. Fixation targets can be internal or external to the instrument, depending on what area of the eye is to be imaged.
Slit-scanning ophthalmoscope systems are capable of operating in different imaging modes depending on the light source and wavelength selective filtering elements employed. True color reflectance imaging (imaging similar to that observed by the clinician when examining the eye using a hand-held or slit lamp ophthalmoscope) can be achieved when imaging the eye with a sequence of colored LEDs (red, blue, and green). Images of each color can be built up in steps with each LED turned on at each scanning position, or each color image can be taken in its entirety separately. The three color images can be combined to display the true color image, or they can be displayed individually to highlight different features of the retina. The red channel best highlights the choroid, the green channel highlights the retina, and the blue channel highlights the anterior retinal layers. Additionally, light at specific frequencies (e.g., individual colored LEDs or lasers) can be used to excite different fluorophores in the eye (e.g., autofluorescence), and the resulting fluorescence can be detected by filtering out the excitation wavelength.
The fundus imaging system can also provide an infrared reflectance image, such as by using an infrared laser (or other infrared light source). The infrared (IR) mode is advantageous in that the eye is not sensitive to the IR wavelengths. This may permit a user to continuously take images without disturbing the eye (e.g., in a preview/alignment mode) to aid the user during alignment of the instrument. Also, the IR wavelengths have increased penetration through tissue and may provide improved visualization of choroidal structures. In addition, fluorescein angiography (FA) and indocyanine green (ICG) angiography imaging can be accomplished by collecting images after a fluorescent dye has been injected into the subject's bloodstream. For example, in FA (and/or ICG) a series of time-lapse images may be captured after injecting a light-reactive dye (e.g., fluorescent dye) into a subject's bloodstream. It is noted that care must be taken since the fluorescent dye may lead to a life-threatening allergic reaction in a portion of the population. High contrast, greyscale images are captured using specific light frequencies selected to excite the dye. As the dye flows through the eye, various portions of the eye are made to glow brightly (e.g., fluoresce), making it possible to discern the progress of the dye, and hence the blood flow, through the eye.
Optical Coherence Tomography Imaging System
In addition to fundus photography, fundus auto-fluorescence (FAF), and fluorescein angiography (FA), ophthalmic images may also be created by other imaging modalities, such as optical coherence tomography (OCT), OCT angiography (OCTA), and/or ocular ultrasonography. The present invention, or at least portions of the present invention with minor modification(s) as would be understood in the art, may be applied to these other ophthalmic imaging modalities. More specifically, the present invention may also be applied to ophthalmic images produced by an OCT/OCTA system producing OCT and/or OCTA images. For instance, the present invention may be applied to en face OCT/OCTA images. Examples of fundus imagers are provided in U.S. Pat. Nos. 8,967,806 and 8,998,411, examples of OCT systems are provided in U.S. Pat. Nos. 6,741,359 and 9,706,915, and examples of an OCTA imaging system may be found in U.S. Pat. Nos. 9,700,206 and 9,759,544, all of which are herein incorporated in their entirety by reference. For the sake of completeness, an exemplary OCT/OCTA system is provided herein.
The sample and reference arms in the interferometer could consist of bulk-optics, fiber-optics, or hybrid bulk-optic systems and could have different architectures such as Michelson, Mach-Zehnder, or common-path based designs, as would be known by those skilled in the art. Light beam as used herein should be interpreted as any carefully directed light path. Instead of mechanically scanning the beam, a field of light can illuminate a one- or two-dimensional area of the retina to generate the OCT data (see for example, U.S. Pat. No. 9,332,902; D. Hillmann et al., “Holoscopy—Holographic Optical Coherence Tomography,” Optics Letters, 36(13):2390, 2011; Y. Nakamura et al., “High-Speed Three Dimensional Human Retinal Imaging by Line Field Spectral Domain Optical Coherence Tomography,” Optics Express, 15(12):7103, 2007; Blazkiewicz et al., “Signal-To-Noise Ratio Study of Full-Field Fourier-Domain Optical Coherence Tomography,” Applied Optics, 44(36):7722, 2005). In time-domain systems, the reference arm needs to have a tunable optical delay to generate interference. Balanced detection systems are typically used in TD-OCT and SS-OCT systems, while spectrometers are used at the detection port for SD-OCT systems. The invention described herein could be applied to any type of OCT system. Various aspects of the invention could apply to any type of OCT system or other types of ophthalmic diagnostic systems and/or multiple ophthalmic diagnostic systems, including but not limited to fundus imaging systems, visual field test devices, and scanning laser polarimeters.
In Fourier Domain optical coherence tomography (FD-OCT), each measurement is the real-valued spectral interferogram (Sj(k)). The real-valued spectral data typically goes through several post-processing steps including background subtraction, dispersion correction, etc. The Fourier transform of the processed interferogram results in a complex-valued OCT signal output Aj(z) = |Aj|e^(iφj). The absolute value of this complex OCT signal, |Aj|, reveals the profile of scattering intensities at different path lengths, and therefore scattering as a function of depth (z-direction) in the sample. Similarly, the phase, φj, can also be extracted from the complex-valued OCT signal. The profile of scattering as a function of depth is called an axial scan (A-scan). A set of A-scans measured at neighboring locations in the sample produces a cross-sectional image (tomogram or B-scan) of the sample. A collection of B-scans collected at different transverse locations on the sample makes up a data volume or cube. For a particular volume of data, the term fast axis refers to the scan direction along a single B-scan, whereas slow axis refers to the axis along which multiple B-scans are collected. The term “cluster scan” may refer to a single unit or block of data generated by repeated acquisitions at the same (or substantially the same) location (or region) for the purposes of analyzing motion contrast, which may be used to identify blood flow. A cluster scan can consist of multiple A-scans or B-scans collected with relatively short time separations at approximately the same location(s) on the sample. Since the scans in a cluster scan are of the same region, static structures remain relatively unchanged from scan to scan within the cluster scan, whereas motion contrast between the scans that meets predefined criteria may be identified as blood flow. A variety of ways to create B-scans are known in the art, including but not limited to: along the horizontal or x-direction, along the vertical or y-direction, along the diagonal of x and y, or in a circular or spiral pattern. B-scans may be in the x-z dimensions but may be any cross-sectional image that includes the z-dimension.
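As a schematic illustration of these post-processing steps (a sketch, not any particular instrument's pipeline), the code below background-subtracts a spectral interferogram and Fourier-transforms it into a complex A-scan, from which intensity and phase are read out.

```python
import numpy as np

def process_ascan(spectrum, background):
    """Schematic FD-OCT post-processing: background subtraction, then a
    Fourier transform of the spectral interferogram Sj(k) into the complex
    signal Aj(z). Dispersion correction and other steps are omitted."""
    a = np.fft.fft(spectrum - background)
    half = len(spectrum) // 2                      # keep positive depths only
    return np.abs(a[:half]), np.angle(a[:half])    # |Aj| (intensity), phase

# Toy usage: a cosine fringe transforms to a peak at the matching depth bin.
k = np.arange(1024)
spec = 1.0 + 0.5 * np.cos(2 * np.pi * 40 * k / k.size)   # fringe at "depth" 40
intensity, phase = process_ascan(spec, background=np.ones_like(spec))
print(int(np.argmax(intensity)))                          # -> 40
```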
In OCT Angiography, or Functional OCT, analysis algorithms may be applied to OCT data collected at the same, or approximately the same, sample locations on a sample at different times (e.g., a cluster scan) to analyze motion or flow (see for example US Patent Publication Nos. 2005/0171438, 2012/0307014, 2010/0027857, 2012/0277579 and U.S. Pat. No. 6,549,801, all of which are herein incorporated in their entirety by reference). An OCT system may use any one of a number of OCT angiography processing algorithms (e.g., motion contrast algorithms) to identify blood flow. For example, motion contrast algorithms can be applied to the intensity information derived from the image data (intensity-based algorithm), the phase information from the image data (phase-based algorithm), or the complex image data (complex-based algorithm). An en face image is a 2D projection of 3D OCT data (e.g., by averaging the intensity of each individual A-scan, such that each A-scan defines a pixel in the 2D projection). Similarly, an en face vasculature image is an image displaying motion contrast signal in which the data dimension corresponding to depth (e.g., z-direction along an A-scan) is displayed as a single representative value (e.g., a pixel in a 2D projection image), typically by summing or integrating all or an isolated portion of the data (see for example U.S. Pat. No. 7,301,644 herein incorporated in its entirety by reference). OCT systems that provide an angiography imaging functionality may be termed OCT angiography (OCTA) systems.
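A minimal sketch of an intensity-based motion-contrast computation on a cluster scan, together with an en face projection, is given below. Real OCTA algorithms (intensity-, phase-, or complex-based) are considerably more involved; the array shapes are illustrative assumptions.

```python
import numpy as np

def motion_contrast(cluster):
    """Intensity-based motion contrast over a cluster scan.
    cluster: (n_repeats, z, x) B-scan intensities at ~the same location;
    static tissue yields low frame-to-frame change, flowing blood yields high."""
    return np.abs(np.diff(cluster, axis=0)).mean(axis=0)

def en_face(volume):
    """2D en face projection of a (z, x, y) cube: each A-scan (a column
    along z) is collapsed to a single representative pixel value."""
    return volume.mean(axis=0)
```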
Neural Networks
As discussed above, the present invention may use a neural network (NN) machine learning (ML) model. For the sake of completeness, a general discussion of neural networks is provided herein. The present invention may use any, singularly or in combination, of the below described neural network architecture(s). A neural network, or neural net, is a (nodal) network of interconnected neurons, where each neuron represents a node in the network. Groups of neurons may be arranged in layers, with the outputs of one layer feeding forward to a next layer in a multilayer perceptron (MLP) arrangement. MLP may be understood to be a feedforward neural network model that maps a set of input data onto a set of output data.
Typically, each neuron (or node) produces a single output that is fed forward to neurons in the layer immediately following it. But each neuron in a hidden layer may receive multiple inputs, either from the input layer or from the outputs of neurons in an immediately preceding hidden layer. In general, each node may apply a function to its inputs to produce an output for that node. Nodes in hidden layers (e.g., learning layers) may apply the same function to their respective input(s) to produce their respective output(s). Some nodes, however, such as the nodes in the input layer InL, receive only one input and may be passive, meaning that they simply relay the values of their single input to their output(s), e.g., they provide a copy of their input to their output(s), as illustratively shown by dotted arrows within the nodes of input layer InL.
The neural net learns (e.g., is trained to determine) appropriate weight values to achieve a desired output for a given input during a training, or learning, stage. Before the neural net is trained, each weight may be individually assigned an initial (e.g., random and optionally non-zero) value, e.g., based on a random-number seed. Various methods of assigning initial weights are known in the art. The weights are then trained (optimized) so that for a given training vector input, the neural network produces an output close to a desired (predetermined) training vector output. For example, the weights may be incrementally adjusted in thousands of iterative cycles by a technique termed back-propagation. In each cycle of back-propagation, a training input (e.g., vector input or training input image/sample) is fed forward through the neural network to determine its actual output (e.g., vector output). An error for each output neuron, or output node, is then calculated based on the actual neuron output and a target training output for that neuron (e.g., a training output image/sample corresponding to the present training input image/sample). One then propagates back through the neural network (in a direction from the output layer back to the input layer) updating the weights based on how much effect each weight has on the overall error so that the output of the neural network moves closer to the desired training output. This cycle is then repeated until the actual output of the neural network is within an acceptable error range of the desired training output for the given training input. As would be understood, each training input may require many back-propagation iterations before achieving a desired error range. Typically, an epoch refers to one back-propagation iteration (e.g., one forward pass and one backward pass) of all the training samples, such that training a neural network may require many epochs. Generally, the larger the training set, the better the performance of the trained ML model, so various data augmentation methods may be used to increase the size of the training set. For example, when the training set includes pairs of corresponding training input images and training output images, the training images may be divided into multiple corresponding image segments (or patches). Corresponding patches from a training input image and training output image may be paired to define multiple training patch pairs from one input/output image pair, which enlarges the training set. Training on large training sets, however, places high demands on computing resources, e.g., memory and data processing resources. Computing demands may be reduced by dividing a large training set into multiple mini-batches, where the mini-batch size defines the number of training samples in one forward/backward pass. In this case, one epoch may include multiple mini-batches. Another issue is the possibility of a NN overfitting a training set such that its capacity to generalize from a specific input to a different input is reduced. Issues of overfitting may be mitigated by creating an ensemble of neural networks or by randomly dropping out nodes within a neural network during training, which effectively removes the dropped nodes from the neural network. Various dropout regularization methods, such as inverse dropout, are known in the art.
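The training loop described above can be illustrated with a small numpy example: a one-hidden-layer network on toy data, trained by repeated forward passes, error computation against the target outputs, and back-propagated weight updates. Sizes, learning rate, and cycle count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((8, 3))           # training vector inputs
T = rng.random((8, 1))           # desired (target) training outputs
W1 = rng.normal(size=(3, 4))     # weights, randomly initialized
W2 = rng.normal(size=(4, 1))

lr = 0.05
for cycle in range(2000):              # many iterative back-propagation cycles
    H = 1.0 / (1.0 + np.exp(-X @ W1))  # forward pass: hidden layer (sigmoid)
    Y = H @ W2                         # forward pass: actual output
    err = Y - T                        # error vs. the target training output
    # Backward pass: adjust each weight according to its effect on the error.
    dW2 = H.T @ err
    dW1 = X.T @ ((err @ W2.T) * H * (1.0 - H))
    W2 -= lr * dW2
    W1 -= lr * dW1

print(float((err ** 2).mean()))        # mean squared error shrinks with training
```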
It is noted that the operation of a trained NN machine model is not a straightforward algorithm of operational/analyzing steps. Indeed, when a trained NN machine model receives an input, the input is not analyzed in the traditional sense. Rather, irrespective of the subject or nature of the input (e.g., a vector defining a live image/scan or a vector defining some other entity, such as a demographic description or a record of activity), the input is subjected to the same predefined architectural construct of the trained neural network (e.g., the same nodal/layer arrangement, trained weight and bias values, predefined convolution/deconvolution operations, activation functions, pooling operations, etc.), and it may not be clear how the trained network's architectural construct produces its output. Furthermore, the values of the trained weights and biases are not deterministic and depend upon many factors, such as the amount of time the neural network is given for training (e.g., the number of epochs in training), the random starting values of the weights before training starts, the computer architecture of the machine on which the NN is trained, the selection of training samples, the distribution of the training samples among multiple mini-batches, the choice of activation function(s), the choice of error function(s) that modify the weights, and even whether training is interrupted on one machine (e.g., having a first computer architecture) and completed on another machine (e.g., having a different computer architecture). The point is that the reasons why a trained ML model reaches certain outputs are not clear, and much research is currently ongoing to determine the factors on which a ML model bases its outputs. Therefore, the processing of a neural network on live data cannot be reduced to a simple algorithm of steps. Rather, its operation is dependent upon its training architecture, training sample sets, training sequence, and various circumstances in the training of the ML model.
In summary, construction of a NN machine learning model may include a learning (or training) stage and a classification (or operational) stage. In the learning stage, the neural network may be trained for a specific purpose and may be provided with a set of training examples, including training (sample) inputs and training (sample) outputs, and optionally including a set of validation examples to test the progress of the training. During this learning process, various weights associated with nodes and node-interconnections in the neural network are incrementally adjusted in order to reduce an error between an actual output of the neural network and the desired training output. In this manner, a multi-layer feed-forward neural network (such as discussed above) may be made capable of approximating any measurable function to any desired degree of accuracy. The result of the learning stage is a (neural network) machine learning (ML) model that has been learned (e.g., trained). In the operational stage, a set of test inputs (or live inputs) may be submitted to the learned (trained) ML model, which may apply what it has learned to produce an output prediction based on the test inputs.
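Continuing the hypothetical PyTorch sketch from above, the operational (classification) stage may look as follows; `model` refers to the illustrative network trained in the earlier sketch (this snippet therefore relies on that block's imports and is not standalone), and the live input is a placeholder.

```python
# Operational (inference) stage for the hypothetical model trained above:
# the learned weights are applied to a live input to produce a prediction.
model.eval()                          # disable dropout outside of training
with torch.no_grad():                 # no gradient bookkeeping needed at inference
    live_input = torch.randn(1, 10)   # placeholder for a real live sample
    prediction = model(live_input)
print(prediction)
```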
Like the regular neural networks discussed above, a convolutional neural network (CNN) is made up of neurons having learnable weights and biases, but a CNN applies convolution operations that make it well suited to image inputs. Convolutional neural networks have been successfully applied to many computer vision problems. As explained above, training a CNN generally requires a large training dataset. The U-Net architecture is based on CNNs and generally can be trained on a smaller training dataset than conventional CNNs.
The contracting path is similar to an encoder, and generally captures context (or feature) information by the use of feature maps. In the present example, each encoding module in the contracting path may include two or more convolutional layers (illustratively indicated by an asterisk symbol "*"), which may be followed by a max pooling layer (e.g., a DownSampling layer). For example, input image U-in is illustratively shown to undergo two convolution layers, each with 32 feature maps. As would be understood, each convolution kernel produces a feature map (e.g., the output from a convolution operation with a given kernel is an image typically termed a "feature map"). For example, input U-in undergoes a first convolution that applies 32 convolution kernels (not shown) to produce an output consisting of 32 respective feature maps. However, as is known in the art, the number of feature maps produced by a convolution operation may be adjusted (up or down). For example, the number of feature maps may be reduced by averaging groups of feature maps, dropping some feature maps, or another known method of feature map reduction. In the present example, this first convolution is followed by a second convolution whose output is limited to 32 feature maps. Another way to envision feature maps is to think of the output of a convolution layer as a 3D image whose 2D dimension is given by the listed X-Y planar pixel dimension (e.g., 128×128 pixels), and whose depth is given by the number of feature maps (e.g., 32 planar images deep). Following this analogy, the output of the second convolution (e.g., the output of the first encoding module in the contracting path) may be described as a 128×128×32 image. The output from the second convolution then undergoes a pooling operation, which reduces the 2D dimension of each feature map (e.g., the X and Y dimensions may each be reduced by half). The pooling operation may be embodied within the DownSampling operation, as indicated by a downward arrow. Several pooling methods, such as max pooling, are known in the art, and the specific pooling method is not critical to the present invention. The number of feature maps may double at each pooling, starting with 32 feature maps in the first encoding module (or block), 64 in the second encoding module, and so on. The contracting path thus forms a convolutional network consisting of multiple encoding modules (or stages or blocks). As is typical of convolutional networks, each encoding module may provide at least one convolution stage followed by an activation function (e.g., a rectified linear unit (ReLU) or sigmoid layer), not shown, and a max pooling operation. Generally, an activation function introduces non-linearity into a layer (e.g., to help avoid overfitting issues); it receives the results of a layer and determines whether to "activate" the output (e.g., determines whether the value of a given node meets predefined criteria to have an output forwarded to the next layer/node). In summary, the contracting path generally reduces spatial information while increasing feature information.
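For illustration only, one possible PyTorch sketch of a single encoding module of the contracting path is shown below; the kernel sizes, padding, class name, and channel counts mirror the 128×128, 32-feature-map example above but are otherwise hypothetical.

```python
# Hypothetical sketch of one encoding module of the contracting path:
# two 3x3 convolutions with ReLU activations, followed by 2x2 max pooling
# that halves each spatial (X-Y) dimension.
import torch
import torch.nn as nn

class EncodingModule(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.pool = nn.MaxPool2d(kernel_size=2)    # halves X and Y dimensions

    def forward(self, x):
        features = self.convs(x)                   # e.g., 128x128x32 in the first module
        return features, self.pool(features)       # keep features for the skip connection

# E.g., a 1-channel 128x128 input yields a 32-feature-map 128x128 output
# (saved for the expanding path) and a pooled 32x64x64 output that feeds
# the next (64-feature-map) encoding module.
skip, down = EncodingModule(1, 32)(torch.randn(1, 1, 128, 128))
```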
The expanding path is similar to a decoder and, among other things, may provide localization and spatial information for the results of the contracting path, despite the down-sampling and any max-pooling performed in the contracting stage. The expanding path includes multiple decoding modules, where each decoding module concatenates its current up-converted input with the output of a corresponding encoding module. In this manner, feature and spatial information are combined in the expanding path through a sequence of up-convolutions (e.g., UpSampling or transpose convolutions or deconvolutions) and concatenations with high-resolution features from the contracting path (e.g., via CC1 to CC4). Thus, the output of a deconvolution layer is concatenated with the corresponding (optionally cropped) feature map from the contracting path, followed by two convolutional layers and an activation function (with optional batch normalization). The output from the last expanding module in the expanding path may be fed to another processing/training block or layer, such as a classifier block, that may be trained along with the U-Net architecture.
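Similarly, for illustration only, the following hypothetical PyTorch sketch shows one possible decoding module of the expanding path: a transpose convolution up-converts the input, the result is concatenated with the corresponding skip-connection feature map from the contracting path, and two convolution/activation stages follow. All names and sizes are illustrative.

```python
# Hypothetical sketch of one decoding module of the expanding path.
import torch
import torch.nn as nn

class DecodingModule(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Transpose convolution (deconvolution) doubles the X and Y dimensions.
        self.up = nn.ConvTranspose2d(in_channels, out_channels,
                                     kernel_size=2, stride=2)
        self.convs = nn.Sequential(
            # Input channels double because of the concatenated skip features.
            nn.Conv2d(out_channels * 2, out_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x, skip):
        x = self.up(x)                      # up-convert the current input
        x = torch.cat([x, skip], dim=1)     # concatenate along the feature-map axis
        return self.convs(x)

# E.g., a 64-feature-map 64x64 input up-converts to 32x128x128, concatenates
# with a 32x128x128 skip tensor from the contracting path, and yields a
# 32-feature-map 128x128 output.
out = DecodingModule(64, 32)(torch.randn(1, 64, 64, 64),
                             torch.randn(1, 32, 128, 128))
```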
Computing Device/System
In some embodiments, the computer system may include a processor Cpnt1, memory Cpnt2, storage Cpnt3, an input/output (I/O) interface Cpnt4, a communication interface Cpnt5, and a bus Cpnt6. The computer system may optionally also include a display Cpnt7, such as a computer monitor or screen.
Processor Cpnt1 includes hardware for executing instructions, such as those making up a computer program. For example, processor Cpnt1 may be a central processing unit (CPU) or a general-purpose computing on graphics processing unit (GPGPU). Processor Cpnt1 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory Cpnt2, or storage Cpnt3; decode and execute the instructions; and write one or more results to an internal register, an internal cache, memory Cpnt2, or storage Cpnt3. In particular embodiments, processor Cpnt1 may include one or more internal caches for data, instructions, or addresses. For example, processor Cpnt1 may include one or more instruction caches and one or more data caches, e.g., to hold data tables. Instructions in the instruction caches may be copies of instructions in memory Cpnt2 or storage Cpnt3, and the instruction caches may speed up retrieval of those instructions by processor Cpnt1. Processor Cpnt1 may include any suitable number of internal registers and may include one or more arithmetic logic units (ALUs). Processor Cpnt1 may be a multi-core processor or may include one or more processors. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
Memory Cpnt2 may include main memory for storing instructions for processor Cpnt1 to execute or for holding interim data during processing. For example, the computer system may load instructions or data (e.g., data tables) from storage Cpnt3 or from another source (such as another computer system) to memory Cpnt2. Processor Cpnt1 may load the instructions and data from memory Cpnt2 to one or more internal registers or internal caches. To execute the instructions, processor Cpnt1 may retrieve and decode the instructions from the internal register or internal cache. During or after execution of the instructions, processor Cpnt1 may write one or more results (which may be intermediate or final results) to the internal register, internal cache, memory Cpnt2, or storage Cpnt3. Bus Cpnt6 may include one or more memory buses (which may each include an address bus and a data bus) and may couple processor Cpnt1 to memory Cpnt2 and/or storage Cpnt3. Optionally, one or more memory management units (MMUs) may facilitate data transfers between processor Cpnt1 and memory Cpnt2. Memory Cpnt2 (which may be fast, volatile memory) may include random access memory (RAM), such as dynamic RAM (DRAM) or static RAM (SRAM). Storage Cpnt3 may include long-term or mass storage for data or instructions. Storage Cpnt3 may be internal or external to the computer system, and may include one or more of a disk drive (e.g., a hard-disk drive (HDD) or solid-state drive (SSD)), flash memory, ROM, EPROM, optical disc, magneto-optical disc, magnetic tape, Universal Serial Bus (USB)-accessible drive, or other type of non-volatile memory.
I/O interface Cpnt4 may be software, hardware, or a combination of both, and may include one or more interfaces (e.g., serial or parallel communication ports) for communication with I/O devices, which may enable communication with a person (e.g., a user). For example, I/O devices may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device, or a combination of two or more of these.
Communication interface Cpnt5 may provide network interfaces for communication with other systems or networks. Communication interface Cpnt5 may include a Bluetooth interface or another type of packet-based communication interface. For example, communication interface Cpnt5 may include a network interface controller (NIC) and/or a wireless NIC or a wireless adapter for communicating with a wireless network. Communication interface Cpnt5 may provide communication with a WI-FI network, an ad hoc network, a personal area network (PAN), a wireless PAN (e.g., a Bluetooth WPAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), the Internet, or a combination of two or more of these.
Bus Cpnt6 may provide a communication link between the above-mentioned components of the computing system. For example, bus Cpnt6 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand bus, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or other suitable bus or a combination of two or more of these.
Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
While the invention has been described in conjunction with several specific embodiments, it is evident to those skilled in the art that many further alternatives, modifications, and variations will be apparent in light of the foregoing description. Thus, the invention described herein is intended to embrace all such alternatives, modifications, applications and variations as may fall within the spirit and scope of the appended claims.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/EP2020/074766 | 9/4/2020 | WO | |
| Number | Date | Country |
|---|---|---|
| 62897025 | Sep 2019 | US |