This disclosure relates to the field of pathology and more particularly to an improved microscope system and method for assisting a pathologist in classifying biological samples such as blood or tissue, e.g., as containing cancer cells or containing a pathological agent such as plasmodium protozoa or tuberculosis bacteria.
In order to characterize or classify a biological sample such as tissue, the sample is placed on a microscope slide and a pathologist views it under magnification with a microscope. The sample may be stained with agents such as hematoxylin and eosin (H&E) to make features of potential interest in the sample more readily seen. Alternatively, the sample may be stained and scanned with a high resolution digital scanner, and the pathologist views magnified images of the sample on a screen of a workstation or computer.
For example, the assessment of lymph nodes for metastasis is central to the staging of many types of solid tumors, including breast cancer. The process requires highly skilled pathologists and is fairly time-consuming and error-prone, especially for nodes that are negative for cancer or contain only small foci of cancer. The current standard of care involves examination of digital slides of node biopsies that have been stained with hematoxylin and eosin. However, there are several limitations inherent in manual reads, including reader fatigue and intra- and inter-grader variability, that negatively impact the sensitivity of the process. Accurate review and assessment of lymph node biopsy slides is important because the presence of tumor cells in the lymph node tissue may warrant new or more aggressive treatment for the cancer and improve the patient's chances of survival.
The prior art includes descriptions of the adaptation of deep learning techniques and trained neural networks to the context of digital tissue images in order to improve cancer diagnosis, characterization and/or staging. Pertinent background art includes the following articles: G. Litjens, et al., Deep learning as a tool for increasing accuracy and efficiency of histopathological diagnosis, www.nature.com/scientificreports 6:26286 (May 2016); D. Wang et al., Deep Learning for Identifying Metastatic Breast Cancer, arXiv:1606.05718v1 (June 2016); A. Madabhushi et al., Image analysis and machine learning in digital pathology: Challenges and opportunities, Medical Image Analysis 33, p. 170-175 (2016); A. Schaumberg, et al., H&E-stained Whole Slide Deep Learning Predicts SPOP Mutation State in Prostate Cancer, bioRxiv preprint http://www.biorxiv.org/content/early/2016/07/17/064279. Additional prior art of interest includes Quinn et al., Deep Convolutional Neural Networks for Microscopy-based Point of Care Diagnostics, Proceedings of International Conference on Machine Learning for Health Care 2016.
The art has described several examples of augmenting the field of view of a microscope to aid in surgery. See U.S. patent application publication 2016/0183779 and published PCT application WO 2016/130424A1. See also Watson et al., Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images, Journal of Biomedical Optics, vol. 20 (10) October 2015.
A method is disclosed for assisting a user in review of a slide containing a biological sample with a microscope having an eyepiece. The method includes steps of (a) capturing, with a camera, a digital image of a view of the sample as seen through the eyepiece of the microscope, (b) using a first machine learning pattern recognizer to identify one or more areas of interest in the sample from the image captured by the camera, and a second machine learning pattern recognizer trained to identify individual cells, and (c) superimposing an enhancement to the view of the sample as seen through the eyepiece of the microscope as an overlay, wherein the enhancement is based upon the identified areas of interest in the sample and further comprises quantitative data associated with the areas of interest. The method includes step (d), wherein when the sample is moved relative to the microscope optics or when a magnification or focus of the microscope changes, a new digital image of a new view of the sample is captured by the camera and supplied to the machine learning pattern recognizer, and a new enhancement is superimposed onto the new view of the sample as seen through the eyepiece in substantial real time, whereby the enhancement assists the user in classifying or characterizing the biological sample.
In one embodiment the one or more areas of interest comprise cells positive for expression of a protein and wherein the quantitative data takes the form of the percentage of cells in the view that are positive for such protein expression. Examples of the protein are Ki-67, P53, and Progesterone Receptor (PR). As another example, the one or more areas of interest can take the form of individual microorganism cells and the quantitative data comprises a count of the number of microorganism cells in the view. As another example, the one or more areas of interest take the form of individual cells undergoing mitosis and wherein the quantitative data is a count of the number of cells in the view undergoing mitosis. As another example, the areas of interest are tumor cells and the quantitative data is an area measurement of the tumor cells, either absolute or relative area within a defined region in the sample.
In one possible embodiment, the quantitative data comprises a measurement, e.g., a distance measurement. As another example the measurement is an area measurement. In one specific example, the areas of interest are prostate tissue with specific Gleason grades and the quantitative measurement is relative or absolute area measurements of tumor regions having specific Gleason grades, e.g., Grade 3, Grade 4 etc.
As another example, the quantitative data can take the form of a count of the number of areas of interest in the view. For example, the machine learning model identifies individual microorganism cells in the view and displays a count of the number of such cells.
In another aspect of this disclosure, a system is disclosed for assisting a user in review of a slide containing a biological sample. The system includes a microscope having a stage for holding a slide containing a biological sample, at least one objective lens, and an eyepiece, a digital camera configured to capture digital images of a view of the sample as seen through the eyepiece of the microscope, and a compute unit comprising a machine learning pattern recognizer configured to receive the digital images from the digital camera, wherein the pattern recognizer is trained to identify regions of interest in biological samples of the type currently placed on the stage, and wherein the pattern recognizer recognizes regions of interest on a digital image captured by the camera and wherein the compute unit generates data representing an enhancement to the view of the sample as seen through the eyepiece of the microscope, wherein the enhancement is based upon the regions of interest in the sample. The system further includes one or more optical components coupled to the eyepiece for superimposing the enhancement on the field of view.
In one configuration the compute unit implements a first machine learning pattern recognizer trained to identify individual cells within the view and a second machine learning pattern recognizer trained to identify individual cells within the view which are positive for expression of a protein. The enhancement takes the form of a display of quantitative data relating to the cells which are positive for the expression of the protein. The protein can be, for example, Ki-67, P53, or Progesterone Receptor (PR).
In another configuration, the system includes a workstation associated with the microscope having a display providing tools for a user of the workstation to draw an annotation on an image of the view, and wherein the annotation is saved along with the image of the view in a computer memory. The workstation may further include a graphical display providing access to tools to customize the presentation of the enhancement on the field of view.
In another configuration, the compute unit implements a machine learning pattern recognizer trained to identify individual cells which are undergoing mitosis. The enhancement includes a display of quantitative data relating to the cells which are undergoing mitosis.
In another configuration, the compute unit implements one or more machine learning pattern recognizers trained to identify individual tumor cells or areas of tumor cells which are classified in accordance with a specific Gleason grade (e.g., Grade 3, Grade 4, etc.). The enhancement takes the form of a display of quantitative area data relating to the tumor cells or areas of tumor cells which are classified in accordance with the specific Gleason grades.
As used in this document, the term “biological sample” is intended to be defined broadly to encompass blood or blood components, tissue or fragments thereof from plants or animals, sputum, stool, urine or other bodily substances, as well as water, soil or food samples potentially containing pathogens.
The microscope includes an optics module 120 which incorporates a component, such as a semitransparent mirror 122 or beam combiner/splitter for overlaying an enhancement onto the field of view through the eyepiece. The optics module 120 allows the pathologist to see the field of view of the microscope as he would in a conventional microscope, and, on demand or automatically, see an enhancement (heat map, boundary or outline, annotations, etc.) as an overlay on the field of view which is projected into the field of view by an augmented reality (AR) display generation unit 128 and lens 130. The image generated by the display unit 128 is combined with the microscope field of view by the semitransparent mirror 122. As an alternative to the semitransparent mirror, a liquid crystal display (LCD) could be placed in the optical path that uses a transmissive negative image to project the enhancement into the optical path.
The optics module 120 can take a variety of different forms, and various nomenclature is used in the art to describe such a module. For example, it is referred to as a “projection unit”, “image injection module” or “optical see-through display technology.” Literature describing such units include US patent application publication 2016/0183779 (see description of
The semi-transparent mirror 122 directs the field of view of the microscope to both the eyepiece 104 and also to a digital camera 124. A lens for the camera is not shown but is conventional. The camera may take the form of a high resolution (e.g., 16 megapixel) video camera operating at say 10 or 30 frames per second. The digital camera captures magnified images of the sample as seen through the eyepiece of the microscope. Digital images captured by the camera are supplied to a compute unit 126. The compute unit 126 will be described in more detail in
Briefly, the compute unit 126 includes a machine learning pattern recognizer which receives the images from the camera. The machine learning pattern recognizer may take the form of a deep convolutional neural network which is trained on a set of microscope slide images of the same type as the biological specimen under examination. Additionally, the pattern recognizer will preferably take the form of an ensemble of pattern recognizers, each trained on a set of slides at a different level of magnification, e.g., 5×, 10×, 20×, 40×. The pattern recognizer is trained to identify regions of interest in an image (e.g., cancerous cells or tissue, pathogens such as viruses or bacteria, eggs from parasites, etc.) in biological samples of the type currently placed on the stage. The pattern recognizer recognizes regions of interest on the image captured by the camera 124. The compute unit 126 generates data representing an enhancement to the view of the sample as seen by the user, which is generated and projected by the AR display unit 128 and combined with the eyepiece field of view by the semitransparent mirror 122.
The essentially continuous capture of images by the camera 124, rapid performance of inference on the images by the pattern recognizer, and generation and projection of enhancements as overlays onto the field of view, enable the system 100 of
By “substantial real time,” we mean that an enhancement or overlay is projected onto the field of view within 10 seconds of changing magnification, changing depth of focus, or navigating and then stopping at a new location on the slide. In practice, as explained below, with the optional use of inference accelerators, we expect that in most cases the new overlay can be generated and projected onto the field of view within a matter of a second or two or even a fraction of a second of a change in focus, change in magnification, or change in slide position.
In summary then, a method is disclosed of assisting a user (e.g., pathologist) in review of a slide 114 containing a biological sample with a microscope 102 having an eyepiece 104. The method includes a step of capturing with a camera 124 a digital image of the sample as seen by the user through the eyepiece of the microscope, using a machine learning pattern recognizer (200,
In one possible configuration, the microscope 102 includes a capability to identify which microscope objective lens is currently in position to image the sample, e.g., with a switch or by user instruction to microscope electronics controlling the operation of the turret containing the lenses, and such identification is passed to the compute unit 126 using simple electronics so that the correct machine learning pattern recognition module in an ensemble of pattern recognizers (see
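By way of illustration only, the selection logic can be as simple as a lookup from the reported turret position to the pattern recognizer trained at the matching magnification. The sketch below uses hypothetical model names and turret positions; the disclosure only requires that the objective identification reach the compute unit by simple electronics.

```python
# Illustrative sketch (hypothetical names): pick the member of the ensemble of
# pattern recognizers that was trained at the magnification of the objective
# currently in position, as reported by the microscope electronics.

# Turret position reported by the microscope -> objective magnification.
TURRET_POSITION_TO_MAGNIFICATION = {1: 5, 2: 10, 3: 20, 4: 40}

# Magnification -> pattern recognizer trained on slides at that magnification.
# (Strings stand in for loaded neural network models.)
MODELS_BY_MAGNIFICATION = {5: "model_5x", 10: "model_10x", 20: "model_20x", 40: "model_40x"}

def select_model(turret_position: int):
    """Return the pattern recognizer matching the active objective lens."""
    magnification = TURRET_POSITION_TO_MAGNIFICATION[turret_position]
    return MODELS_BY_MAGNIFICATION[magnification]
```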
Another possible enhancement is a confidence score that the cells of the sample are cancerous. For example, the enhancement could take the form of a probability or confidence score, such as 85% confidence that the cells in the outline are Gleason Grade 3, and 15% confidence that the cells in the outline are Gleason Grade 4. Additionally, the measurement (0.12 μm) could be the diameter of the whole outlined region.
The superimposing of the outline and annotations
Table 1 below lists optical characteristics of a typical microscope for pathology and the digital resolution of a camera 124 which could be used in
In another possible configuration, the compute unit 126 could take the form of a general purpose computer (e.g., PC) augmented with the pattern recognizer(s) and accelerator, and graphics processing modules as shown in
In use, assuming multiple different pattern recognizers are loaded into the compute unit, an automatic specimen type detector or manual selector switches between the specimen dependent pattern recognition models (e.g., prostate cancer vs. breast cancer vs. malaria detection), and based on that selection the proper machine learning pattern recognizer or model is chosen. Movement of the slide to a new location (e.g., by use of a motor 116 driving the stage) or switching to another microscope objective 108 (i.e., magnification) triggers an update of the enhancement, as explained previously. Optionally, if only the magnification is changed, an ensemble of different models operating at different magnification levels (see
Deep convolutional neural network pattern recognizers, of the type used in the compute unit of
Additional literature describing deep neural network pattern recognizers includes the following: G. Litjens, et al., Deep learning as a tool for increasing accuracy and efficiency of histopathological diagnosis, www.nature.com/scientificreports 6:26286 (May 2016); D. Wang et al., Deep Learning for Identifying Metastatic Breast Cancer, arXiv:1606.05718v1 (June 2016); A. Madabhushi et al., Image analysis and machine learning in digital pathology: Challenges and opportunities, Medical Image Analysis 33, p. 170-175 (2016); A. Schaumberg, et al., H&E-stained Whole Slide Deep Learning Predicts SPOP Mutation State in Prostate Cancer, bioRxiv preprint http://www.biorxiv.org/content/early/2016/07/17/064279.
Training slides for the deep neural network pattern recognizer 200 can be generated from scratch by whole slide scanning of a set of slides of the type of samples of interest. For example, slide images for training can be obtained from the Naval Medical Center in San Diego, Calif. (NMCSD) and publicly available sources such as the CAMELYON16 challenge and The Cancer Genome Atlas (TCGA). Alternatively, they could be generated from a set of images of different slides captured by the camera of
Digital whole slide scanners and systems for staining slides are known in the art. Such devices and related systems are available from Aperio Technologies, Hamamatsu Photonics, Philips, Ventana Medical Systems, Inc., and others. The digital whole slide image can be obtained at a first magnification level (e.g. 40×), which is customary. The image can be upsampled or downsampled to obtain training images at other magnifications. Alternatively, the training slides can be scanned multiple times at different magnifications, for example at each magnification level offered by conventional manually-operated microscopes.
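As a rough sketch of the downsampling approach (one possible implementation, not a prescribed pipeline), a patch taken from a 40× whole slide scan can be rescaled to approximate lower magnifications; this example assumes the scikit-image library is available.

```python
# Sketch: derive approximate 20x, 10x and 5x training patches from a 40x RGB patch
# by downsampling (an alternative to scanning the slide multiple times).
import numpy as np
from skimage.transform import rescale

def derive_magnifications(patch_40x: np.ndarray):
    """Return a dict of approximate lower-magnification versions of a 40x patch."""
    out = {40: patch_40x}
    for magnification, factor in [(20, 0.5), (10, 0.25), (5, 0.125)]:
        out[magnification] = rescale(
            patch_40x, factor, channel_axis=-1, anti_aliasing=True, preserve_range=True
        ).astype(patch_40x.dtype)
    return out
```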
Inference Speed
In some implementations it may be possible to perform inference on a digital image that is the entire field of view of the microscope. In other situations, it may be desirable to perform inference on only a portion of the image, such as several 299×299 rectangular patches of pixels located about the center of the field of view, or on some larger portion of the field of view.
Using an Inception v3-based model with 299×299 pixel input size and a 16 MP camera, dense coverage of a circular area of the optical FoV (2700 pixels diameter) requires ~120 patch inferences. If inference is run only for the center third of each patch (increasing inference granularity, and using the other two thirds as context), it will require ~1200 inference calls. Additional inference calls might be required if one adds rotations and flips, or ensembling.
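The sketch below illustrates one way to enumerate candidate 299×299 patch locations over a circular field of view, at a full-patch stride and at a one-third stride. The counts it produces depend on how overlap and edge patches are handled, so it will not exactly reproduce the approximate figures quoted above; it is meant only to show the tiling idea.

```python
# Illustrative patch tiling over a circular field of view (2700 pixel diameter).
# Exact call counts depend on overlap and boundary conventions not modeled here.
import math

def count_patches(fov_diameter=2700, patch=299, stride=None):
    stride = stride or patch
    radius = fov_diameter / 2
    count = 0
    for x in range(0, fov_diameter, stride):
        for y in range(0, fov_diameter, stride):
            # Keep a patch if its center lies inside the circular field of view.
            cx, cy = x + patch / 2, y + patch / 2
            if math.hypot(cx - radius, cy - radius) <= radius:
                count += 1
    return count

dense = count_patches()                 # non-overlapping tiling of the FoV
fine = count_patches(stride=299 // 3)   # stride of one third of the patch size
```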
Table 2 lists the number of inference calls and inference times using conventional state of the art graphics processing units and inference accelerators.
Assuming the camera 124 operates at 30 frames per second (fps) to give a seamless, substantially real-time experience, dense coverage with a reasonable combination of rotations, flips, and ensembling is possible.
Inference Accelerator (214,
Inference accelerators, also known as artificial intelligence (AI) accelerators, are an emerging class of microprocessors or coprocessors which are designed to speed up the process of performing inference on input data sets for pattern recognition. These systems currently take the form of a combination of custom application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), graphics processing units (GPUs) and general purpose computing units. In some applications of the system of
In a simple implementation, the system of
Generation of Enhancement
The generation of the enhancement to project onto the field of view can be performed as follows:
1) the machine learning pattern recognizer 200 in the compute unit 126 runs model inference on the field of view, to create tumor probability per region (using cancer detection as an example here).
2a) heatmap: the tumor probability for each image patch in the field of view is translated into a color value (e.g. RGB), and those color values are stitched together to create a heatmap. This task can be performed by the graphics card 206.
2b) polygon outline: the tumor probabilities are thresholded at a certain score (e.g., probability > 50%), and the boundary of the remaining region (or regions, if there are several disconnected regions) forms the polygon outline. Again this task can be performed by the graphics card 206. (A code sketch of steps 2a and 2b follows step 3 below.)
3) the digital image data from step 2a or 2b is translated into an image by the AR display unit 128, which is then projected into the optical path by lens 130 and the semi-transparent mirror 122.
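A minimal sketch of steps 2a and 2b follows, assuming the per-patch tumor probabilities are available as a numpy array and using scikit-image for resizing and contour extraction. It is illustrative rather than the exact routine executed on the graphics card 206.

```python
# Sketch: convert per-patch tumor probabilities into (2a) an RGB heatmap stitched
# to the size of the field of view and (2b) polygon outlines of thresholded regions.
import numpy as np
from skimage import measure, transform

def make_heatmap(patch_probs, fov_shape):
    """Map per-patch probabilities (0..1) to colors and upsample to the field of view."""
    # Simple violet-to-red ramp: low scores violet, high scores dark red.
    r = (255 * patch_probs).astype(np.uint8)
    g = np.zeros_like(r)
    b = (255 * (1.0 - patch_probs)).astype(np.uint8)
    heatmap = np.stack([r, g, b], axis=-1)
    # Stitch/upsample the patch grid to the pixel dimensions of the field of view.
    return transform.resize(heatmap, fov_shape + (3,), order=0,
                            preserve_range=True).astype(np.uint8)

def make_outlines(patch_probs, threshold=0.5):
    """Threshold the probability grid and return boundary polygons of remaining regions."""
    mask = patch_probs > threshold
    # find_contours returns one polygon (list of row, col points) per connected boundary.
    return measure.find_contours(mask.astype(float), 0.5)

# Example: a 10 x 10 grid of patch probabilities covering a 2700 x 2700 pixel FoV.
probs = np.random.rand(10, 10)
overlay = make_heatmap(probs, (2700, 2700))
outlines = make_outlines(probs, threshold=0.5)
```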
Additionally, the graphics card 206, either alone or with outputs from the machine learning pattern recognizer, can generate Gleason score grading, annotations, etc. for inclusion in the digital enhancement data and provide such additional enhancements to the AR display module 128.
Communication of the microscope with a computer about the location on the slide.
In practice, in some situations it may be useful to perform a whole slide scan of the specimen slide in addition to pathologist use of the microscope system of
1. highlighting of the microscope's current field of view (FoV) on the whole slide image (e.g., for teaching purposes). Localization of the FoV could be done either via image registration of the microscope image onto the whole slide image, or by use of the motor 116 driving the microscope stage 110 with the motor coordinates mapped onto the whole slide image coordinates.
2. automatic navigation of the microscope FoV to a designated area on the slide. For example, the microscope could operate in a “pre-scan” mode in which the motor 116 drives the microscope slide to a series of X-Y positions and obtains low magnification images with the camera at each position. The images are passed to the machine learning pattern recognizer in the compute unit 126 and the pattern recognizer identifies those images from respective positions that contain areas of interest (e.g., cells likely to be cancerous). Then, during use by the pathologist, the motor 116 could be operated to drive the slide to those positions and the operator prompted to investigate the field of view at each position and the field of view augmented with suitable enhancements (heat maps, outlines, etc.). In this embodiment, the compute unit may operate in conjunction with a user interface for the microscope to aid the pathologist work flow. Such user interface could be incorporated in the microscope per se or be presented in the display 142 of the workstation 140. For example, in
3. transfer of labels and annotations from the whole slide image to the microscope image
A whole slide image of the specimen slide obtained by a whole slide scanner can be provided with labels or annotations for various objects of interest in the image. Because it is possible to obtain registry between the whole slide image and the slide on the motorized stage 110 (e.g., from a mapping of motor 116 positions to whole slide image coordinates), it may be possible to transfer the labels and annotations to the microscope image seen through the eyepiece. This is possible by providing the labels and annotations to the graphics card 206 in the compute unit, and then providing the digital data of such labels and annotations to the AR display unit 128 when the motor drives the slide to the coordinates where such labels and annotations exist.
The method of obtaining registration between the whole slide image and the slide on the microscope could be implemented as an algorithmic solution, or by using computer vision approaches, such as image registration, to locate the region of the whole slide image that corresponds to the camera image.
4. Output of the field of view along with the prediction to local storage, for use in, e.g., a pathology report.
In practice, it may be desirable for the pathologist to make records of their work in characterizing or classifying the sample. Such records could take the form of digital images of the field of view (with or without enhancements) which can be generated and stored (e.g., in the memory 212 of the compute unit) and then transmitted via interface 216 to the attached pathology workstation 140. The workstation software will typically include workflow software that the pathologist follows in performing a classification or characterization task on a sample and generating a report. Such software includes a tool, e.g., an icon or prompts, which permits the pathologist to insert into the report the stored digital images of the field of view and relevant annotations or enhancements which are stored in the memory 212.
Further optional features may be included in the system.
A. Output port for displaying field of view on a monitor
The compute unit includes an interface or port 216 for connecting the compute unit to the attached peripheral pathologist workstation 140. This interface allows the field of view captured by the camera and any enhancement generated by the graphics card to be transmitted to the monitor 142 of the workstation 140.
B. On demand a connected monitor displays image regions that are similar to the one in the current field of view, with annotations etc.
In one possible configuration, the monitor 142 of the workstation 140 displays image regions from other slides (e.g., from other patients) that are “similar” to the one in the current field of view, along with any enhancements or annotations which may exist for the other slide(s). In particular, the workstation 140 may include a memory loaded with digital image data of a set of other slides from other patients, and potentially hundreds or thousands of such slides. The workstation may include a pattern recognizer which performs pattern recognition of the field of view of the slide on the microscope on all of such other digital slide images and selects the ones that are closest to the field of view. Fields of view (i.e., portions of the selected digital slides stored in memory) can be presented on the display 142 of the workstation 140 alongside the current field of view through the microscope 100. Each of the slides stored in memory on the workstation is associated with metadata such as the patient diagnosis, date, treatment, outcome or survival data after treatment, age, smoker status, etc. The display of the fields of view of the selected digital slides can be augmented with the display of the metadata.
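The disclosure does not prescribe how similarity is computed; one plausible implementation, sketched below with hypothetical names, embeds the current field of view (for example with an intermediate layer of the pattern recognizer) and performs a nearest-neighbor search over embeddings precomputed for the stored slides.

```python
# Hypothetical sketch of the "similar regions" lookup: compare an embedding of the
# current field of view against embeddings of stored regions from other slides and
# return the closest matches together with their metadata.
import numpy as np

def find_similar_regions(fov_embedding: np.ndarray,
                         library_embeddings: np.ndarray,
                         metadata: list,
                         top_k: int = 3):
    """Return (metadata, score) for the top_k stored regions nearest to the field of view."""
    # Cosine similarity between the field-of-view embedding and every stored region.
    lib = library_embeddings / np.linalg.norm(library_embeddings, axis=1, keepdims=True)
    query = fov_embedding / np.linalg.norm(fov_embedding)
    scores = lib @ query
    best = np.argsort(scores)[::-1][:top_k]
    return [(metadata[i], float(scores[i])) for i in best]
```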
1. heat map
In one embodiment, the scores for small groups of pixels (“patches”) in the digital slide image captured by the camera 124 range from 0.0 to 1.0. The areas of the heatmap 20 with the highest scores are shown as dark red, whereas the areas with the lowest scores are either left alone (not enhanced) or shown in another contrasting color, such as violet. The code 22 of
Further details on the generation and calculation of heatmaps and tumor probability scores are described in the pending PCT application “Method and System for Assisting Pathologist Identification of Tumor Cells in Magnified Tissue Images”, serial no. PCT/US17/019051 filed Feb. 23, 2017, which is incorporated by reference.
2. outlines of regions of interest and annotations
3. rectangles identifying objects
4. Quantitative data
See the section below.
Workflow
At step 306 an image of the field of view is captured by the digital camera 124 and sent to the compute unit 126. If the operator moves the slide (e.g., by operation of the stage motor 116 in a panning mode) a new image of the field of view is captured by the camera. Similarly, if the operator changes the objective lens 108 (e.g., to zoom in or out) a new image is captured. The new images are sent to the compute unit 126. (In practice, the camera 124 could be operated at a continuous frame rate of say 10 or 30 frames per second and the updating of the field of view in the compute unit could be essentially continuous and not merely when either the stage position or objective lens is changed.)
At step 312 the image of the field of view is provided as input to the relevant machine learning pattern recognizer 200 in the compute unit 126 (
At step 314 the graphics card or GPU 206 in the compute unit 126 generates digital image data corresponding to the enhancement or augmentation relevant to the sample type and this digital image data is provided to the AR display unit 128 for projection onto the field of view for viewing by the pathologist in the eyepiece 104.
The compute unit may include controls (e.g., via the attached workstation) by which the user can specify the type of annotations or enhancements they wish to see projected onto the field of view, thereby giving the user control as to how they wish the microscope to operate in augmented reality mode. For example, the user could specify enhancements in the form of a heat map only. As another example, if the specimen is a blood sample, the user could specify enhancements in the form of rectangles identifying plasmodium present in the sample. In a prostate sample, the user can specify boundaries or outlines surrounding cells with a Gleason score of 3 or more, as well as annotations such as shown and described previously in
Ensemble of machine learning pattern recognizers
It will be noted that the system of
Pattern recognizer 406D is trained on 5× magnification slide images. Ideally, each of the magnification levels at which the pattern recognizers are trained corresponds to a magnification level available on the microscope of
In operation, a patch (i.e., a portion of the microscope FoV, such as a 299×299 rectangular patch of pixels) 402A, 402B, 402C or 402D is provided as an input 404A, 404B, 404C or 404D to the relevant pattern recognizer 406A, 406B, 406C, 406D depending on the current objective lens being used on the microscope. In a heat map application, a score between 0 and 1 for the patch of pixels is generated by the last layer of the neural network pattern recognizers 406A, 406B, 406C, 406D, in the form of a multinomial logistic regression, which generates a prediction, in the form of a probability of between 0 and 1, of which of the classes (here, healthy vs. tumor) the input data (patch) belongs to. Multinomial logistic regression is known in the art of supervised learning and optimization, and is sometimes referred to as “Softmax Regression.” A tutorial found on the web, http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/, provides further details and is incorporated by reference herein. The output 408A, 408B, 408C, 408D is thus the score for the patch of pixels.
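For reference, the two-class softmax (multinomial logistic regression) used for the healthy-versus-tumor prediction can be written in the standard form

$$
P(\text{tumor}\mid x) \;=\; \frac{e^{z_{\text{tumor}}}}{e^{z_{\text{tumor}}}+e^{z_{\text{healthy}}}},
$$

where $z_{\text{tumor}}$ and $z_{\text{healthy}}$ are the outputs (logits) of the final layer of the network for the input patch $x$.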
In one configuration, the process of generating a score for a patch of pixels is performed for all of the patches forming the field of view of the microscope. The outputs 408A, 408B, 408C, 408D are provided to the graphics card (GPU) 206 in the compute unit to generate data representing the augmentation, in this example the heat map. If the stage remains stationary but the user changes magnification, two of the members of the ensemble shown in
It will also be appreciated that the compute unit preferably includes an ensemble of pattern recognizers trained on a set of microscope slide images at different magnification levels for each of the pathology applications the microscope is used for (e.g., breast cancer tissue, lymph node tissue, prostate tissue, malaria, etc.), as indicated in
Portable media with machine learning pattern recognizers
In one embodiment, the compute unit 126 of
While SD cards are illustrated in
Specific Applications
While several specific applications of the microscope for pathology review have been described, including breast cancer detection, prostate cancer detection, identification of pathogens (e.g., plasmodium, tuberculosis, malaria parasites, eggs of parasites) etc., it will be appreciated that other applications in the field of pathology are of course possible. Additionally, the principles of the system of
Stand-Alone System
The microscope system of
Networked Configuration
In another configuration, the system of
Motor-driven Stage 110/116
The incorporation of a motor driven stage 110 (which is common in pathology microscopes) allows additional functions to be performed to further assist the pathologist. For example, the motor 116 could drive the slide to a sequence of positions to capture low magnification images with the camera of the entire slide. The low magnification images are then supplied to a machine learning pattern recognizer in the compute unit trained at low magnification levels to provide preliminary detection of suspicious regions (e.g., regions likely containing cancer cells or likely to contain tuberculosis mycobacteria). Then, the microscope stage could be driven automatically in a series of steps to those fields containing potentially relevant areas. The incremental positioning of the slide could be executed upon command of the user, e.g., via controls for the microscope or via the user interface of the attached workstation.
An exhaustive search of the whole slide at 40× for areas of interest in a short amount of time is not feasible with current technology. However, using a low magnification model to detect suspicious regions and then zooming in only on demand is feasible using the system of
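A pre-scan loop of the kind described above might look like the following sketch; the stage, camera and model interfaces are placeholders rather than an actual API.

```python
# Illustrative pre-scan loop (placeholder interfaces, not an actual API): drive the
# motorized stage over a coarse grid, run a low-magnification model at each position,
# and remember the positions whose images look suspicious so the stage can later be
# stepped through them for review at higher power.
def prescan(stage, camera, low_mag_model, grid_positions, threshold=0.5):
    """Return the list of stage (x, y) positions flagged as containing areas of interest."""
    flagged = []
    for (x, y) in grid_positions:
        stage.move_to(x, y)                    # motor 116 drives the slide to the next position
        image = camera.capture()               # low magnification image of this field
        score = low_mag_model.predict(image)   # probability that the field is suspicious
        if score > threshold:
            flagged.append((x, y))
    return flagged
```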
Model Training
Images obtained from the camera 124 may, in some implementations, be different in terms of optical quality or resolution than images from whole slide scanners on which the machine learning pattern recognizers are trained. The quality of the digital camera 124 and associated optical components largely determines this, and ideally the quality of the digital camera and associated optics is the same as, or nearly the same as, the quality of the optical components and camera used for capturing the training slide images. While the image resolution should be comparable, the images from the microscope camera 124 are likely to have some artifacts, such as geometric distortion, that are absent or less frequently present in the whole slide scanner training images. Collecting microscope-specific training images for training new models is in theory possible, but it is not a particularly scalable solution. A more practical solution is to make sure the whole slide image-based pattern recognition models generalize to the microscope images captured by the camera 124. If generalization with the default models is not acceptable, it should be possible to generate artificial training data from whole slide image scans that “look like” their corresponding microscope camera images. Such artificial training data can be generated by introducing parametric deformations to the whole slide scan images and using the deformed images for training. Examples of such parametric deformations include warping, adding noise, lowering resolution, blurring, and contrast adjustment.
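The following sketch shows one possible way, using scikit-image, to apply several of the named deformations (blur, noise, resolution loss, contrast adjustment) to whole slide scan patches; it is an illustration of the idea, not the authors' training pipeline, and the parameter ranges are arbitrary.

```python
# Sketch: random parametric deformations applied to a whole slide scan patch so that
# it better resembles a microscope camera image (values assumed to be RGB in [0, 1]).
import numpy as np
from skimage import exposure, filters, transform

def deform_patch(patch, rng=np.random.default_rng()):
    """Apply random blur, noise, resolution loss and contrast change to an RGB patch."""
    out = filters.gaussian(patch, sigma=rng.uniform(0.0, 1.5), channel_axis=-1)   # blur
    out = out + rng.normal(0.0, 0.01, out.shape)                                  # sensor-like noise
    scale = rng.uniform(0.7, 1.0)                                                 # lower the resolution
    small = transform.rescale(out, scale, channel_axis=-1, anti_aliasing=True)
    out = transform.resize(small, patch.shape, anti_aliasing=True)
    out = exposure.adjust_gamma(np.clip(out, 0, 1), gamma=rng.uniform(0.8, 1.2))  # contrast adjustment
    return out
```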
An alternative is to use the camera of a microscope to generate a large number of training images from a multitude of slides, and then use such images to train the models instead of images obtained from a whole slide scanner.
Another alternative is training a generative adversarial network (GAN) to produce the images for training the machine learning pattern recognizers.
Further Considerations
The image quality of the camera 124 of
One particular challenge is that the optical resolution of the human eye is much higher than that of current digital cameras. For instance, in order to detect a tiny metastasis, a machine learning model might require zooming in further (switching to higher power objectives) than a human might need to for the same metastasis. One way of addressing this is prompting the user to switch to high (or higher) magnification levels when they are viewing areas of potential interest and then generating new enhancements at the higher power. Another approach is to use an ultra-high resolution camera such as the Canon 250 megapixel CMOS sensor.
As noted above, the optical component 120 including the semi-transparent mirror 122 should be placed in the optical path so that it renders the best visual experience. In one possible configuration the microscope may take the form of a stereoscopic microscope with two eyepieces and it may be possible to project the enhancement into the field of view of one or both of the eyepieces.
Another consideration is making sure the eye sees the enhancement or overlay on the field of view with the same registration as the camera. This could be performed using fiducial markers which are present in the field of view and the image captured by the camera.
It is also noted that labels which may be present on whole slide images of the slide under examination can be transferred to the camera images and projected into the field of view, e.g., using image registration techniques, as described previously.
Changes to the optics by the user (e.g., focusing, diopter correction) will affect the image quality of the camera image and the displayed image. The camera images need to remain sharp and high quality so that inference can be performed. In one possible configuration, the compute unit includes an image quality detector module that assesses whether the image is good enough to perform inference. If the image is not of sufficient quality, the user could be prompted to make an appropriate correction, such as adjusting the focus or making other optical adjustments to the microscope.
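The disclosure leaves the choice of quality measure open; one commonly used sharpness measure is the variance of the Laplacian, sketched below with an arbitrary placeholder threshold.

```python
# Sketch of an image quality gate of the kind described above. The disclosure does not
# fix an algorithm; variance of the Laplacian is a common sharpness measure and the
# threshold here is an arbitrary placeholder to be tuned for the actual camera.
import numpy as np
from scipy import ndimage

def sharp_enough(gray_image: np.ndarray, threshold: float = 100.0) -> bool:
    """Return True if the image appears in focus enough to run inference."""
    return float(ndimage.laplace(gray_image.astype(float)).var()) > threshold
```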
It was also noted previously that the augmented reality microscope of this disclosure is suitable for other uses, such as inspection or quality control, e.g., in manufacturing of electronic components or other products where the inspection occurs via a microscope. Thus, as an additional aspect of this disclosure, a method for assisting a user in review of an object (e.g., a manufactured object) with a microscope having an eyepiece has been disclosed, including the steps of (a) capturing, with a camera, a digital image of the object as seen by a user through the eyepiece of the microscope, (b) using a machine learning pattern recognizer to identify areas of interest (e.g., defects) in the object from the image captured by the camera, and (c) superimposing an enhancement to the view of the object as seen by the user through the eyepiece of the microscope as an overlay. As the user moves the sample relative to the microscope optics and then stops or changes magnification or focus of the microscope, a new digital image is captured by the camera and supplied to the machine learning pattern recognizer, and a new enhancement is superimposed onto the new view of the object as seen through the eyepiece in substantial real time, whereby the enhancement assists the user in classifying or characterizing the object. The features of the appended claims are deemed to be applicable to this variation wherein instead of a biological sample on a slide an object (e.g., manufactured object, computer chip, small part, etc.) is viewed by the microscope and the camera captures images of the object as seen in the microscope field of view.
An aspect may also provide a system assisting a user in review of a slide containing a biological sample, comprising, in combination: a microscope having a stage for holding a slide containing a biological sample, at least one objective lens, and an eyepiece, a digital camera capturing magnified digital images of the sample as seen through the eyepiece of the microscope, a compute unit comprising a machine learning pattern recognizer which receives the digital images from the digital camera, wherein the pattern recognizer is trained to identify regions of interest in biological samples of the type currently placed on the stage, and wherein the pattern recognizer recognizes regions of interest on the digital image captured by the camera and wherein the compute unit generates data representing an enhancement to the field of view of the sample as seen by the user through the eyepiece; and one or more optical components coupled to the eyepiece for superimposing the enhancement on the field of view; wherein the camera, compute unit and one or more optical components operate such that as the user moves the sample relative to the microscope optics and then stops or changes magnification or focus of the microscope, a new digital image is captured by the camera and supplied to the machine learning pattern recognizer, and a new enhancement is superimposed onto the new field of view of the sample as seen through the eyepiece in substantial real time.
The augmented microscope, shown in
Component design and selection was driven by final performance requirements. Camera and display devices were chosen for effective cell and gland level feature representation. The camera 124 (Adimec S25A80) included a 5120×5120 pixel color sensor with high sensitivity and global shutter capable of running up to 80 frames/sec. Camera images were captured by an industrial frame-grabber board (Cyton CXP-4) with PCI-E interface to the workstation. The microdisplay (eMagin SXGA096, 1292×1036 pixels) was mounted on the side of the microscope and imaged with an achromatic condenser (Nikon MBL71305) at a location tuned to minimize parallax and ensure that the specimen and display image are simultaneously in focus. The microdisplay includes an HDMI interface for receiving images from the workstation. Due to the limited brightness of this display, BS2 was chosen to transmit 90% of the light from the display and 10% from the sample, which resulted in adequate contrast between PI and SI when operating the microscope light source near half of its maximum intensity.
Software and Hardware System
The application driving the entire system runs on a standard off-the-shelf PC with a BitFlow frame grabber connected to a camera 124 (
The primary pipeline consists of a set of threads that continuously grab an image frame from the camera, debayer it (i.e., convert the raw sensor output into a color image), prepare the data, run the algorithm, process the results, and finally display the output. Other preprocessing steps such as flat-field correction and white balancing can be done in this thread as well for cameras which cannot do them directly on-chip. To reduce the overall latency, these steps run in parallel for a sequence of successive frames, i.e., the display of frame ‘N’, generation of the heatmap for frame ‘N+1’, and running the algorithm on frame ‘N+2’ all happen in parallel.
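A minimal sketch of this pipelining is shown below using Python threads and queues; the capture, inference and display functions are placeholder stubs standing in for the frame grabber, the deep learning algorithm and the microdisplay output, not the actual system code.

```python
# Minimal sketch of the pipelined processing described above: three stages run in
# their own threads and hand frames to each other through queues, so displaying one
# frame overlaps with running the algorithm on the next and capturing the one after.
import queue
import threading
import time

def grab_frame():             # placeholder for the frame grabber read
    time.sleep(0.03)
    return "frame"

def run_inference(frame):     # placeholder for debayering, preprocessing and model inference
    return {"frame": frame, "heatmap": None}

def display_overlay(result):  # placeholder for rendering the overlay on the microdisplay
    pass

raw_frames = queue.Queue(maxsize=2)
results = queue.Queue(maxsize=2)

def capture_loop():
    while True:
        raw_frames.put(grab_frame())

def inference_loop():
    while True:
        results.put(run_inference(raw_frames.get()))

def display_loop():
    while True:
        display_overlay(results.get())

for loop in (capture_loop, inference_loop, display_loop):
    threading.Thread(target=loop, daemon=True).start()
```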
In addition to this primary pipeline, the system also runs a background control thread. One purpose of this thread is to determine whether the camera image is sufficiently in focus to yield accurate deep learning algorithm results. The system uses an out-of-focus detection algorithm to assess focus quality. A second purpose of this thread is to determine the currently used microscope objective, so that the deep learning algorithm tuned for the respective magnification is used. Additionally, settings for white balance and exposure time on camera 124 can be set to optimal profiles for the respective lens.
Enabling AI algorithms on an analog microscope requires three novel technologies working in unison. First, state-of-the-art convolutional neural networks for high accuracy detection and classification are needed. High accuracy neural networks have been shown possible in the literature on digitally scanned images alone or on images from a microscope alone. A contribution of this work is the demonstration of successful generalization of deep learning algorithms from digitally scanned images to microscope images. The feasibility of overcoming the differences in image modality allows us to use digitally scanned images for deep learning algorithm development for the ARM. Second, the ability to run these algorithms in real time to provide an interactive user experience is needed. This is achieved by a tightly integrated software, hardware and AI algorithm system with real-time performance for live algorithm predictions. Third, a parallax-free head-up display in the microscope to project high-resolution heatmaps, contours, or textual information onto the sample is needed. This is made possible by a novel configuration of optical components that are generally available.
The deep learning image analysis workflow includes two phases: algorithm development and algorithm application, illustrated in
For algorithm development,
For algorithm application,
Algorithm Evaluation
We evaluate the algorithm performance of tumor detection within the field of view with the following metrics: receiver operating characteristic (ROC) curves (the true positive rate against the false positive rate), area under the ROC curve (AUC), accuracy, precision, and recall (TP: true positive; FP: false positive; FN: false negative; TN: true negative):
Precision=TP/(TP+FP),
Recall/True Positive Rate=TP/(TP+FN),
False Positive Rate=FP/(FP+TN).
The following performance metrics were observed:
Receiver operating characteristic (ROC) plots are provided in the appendix of the U.S. Provisional application cited at the beginning of this document.
Results
Using modern deep learning algorithms with an off-the-shelf graphics card for accelerated computation, the system achieved a total latency of about 100 ms (10 frames per second), which is fast enough for most workflows. The projection of the deep learning predictions into the optics had high enough contrast to be clearly visible on top of the tissue sample using common background illumination levels and was parallax-free. Operating the ARM was seamless for first-time users (pathologists that were not part of the study) who tested it, with almost no learning curve.
User Interface for set-up and configuration
In one possible configuration the workstation 140 (
The pane 1520 on the left side of the display 1500 provides a drawing interface for the user to manually create annotations on the displayed image, for example by means of a finger stroke on a touch sensitive display or by mouse action on a conventional display. The image 1522 displayed is the current field of view of the microscope and the user is provided with the ability to draw an outline over regions of interest (as shown). The outline is preserved along with the image of the field of view. The region 1524 of the display provides drawing tools to control the user annotations, including a scroll bar 1526 to change line thickness and a scroll bar 1528 to change the brightness of the line.
It is further contemplated that configuration of the annotations or the AR display superimposed on the field of view can be done by voice command, e.g., using the workstation 140 associated with the AR microscope as in
Quantitative Analysis and Enhancements
Quantitative measurements of human epidermal growth factor receptor 2 (HER2), Estrogen Receptor (ER), and Progesterone Receptor (PR) immunohistochemistry (IHC) biomarker expression are critical for breast cancer therapy selection. These advances in cancer classification, treatment and associated companion diagnostics have driven increasing complexity and reporting requirements for the pathologist, in some cases stretching human capability for efficient assessment.
Even though this study details two clinical diagnostic applications, the AR microscope of this disclosure is application agnostic and can be used for any kind of image analysis task for which a deep learning algorithm has been trained. We envision the usage of this device for many other clinical and research applications, including quantitative analysis, such as mitotic rate estimation, IHC quantification and positive margin detection in frozen sections.
All measurements that are made as quantitative annotations can be saved as structured metadata (e.g., .xml files) along with the field of view image captured by the AR microscope camera of
All measurements that are taken per field of view can be aggregated (e.g., summed or averaged) across multiple fields of view, for instance to estimate the overall density of tumor cells or other features. This could also include tracking the features in the field of view, and thereby determining whether the current field of view has been analyzed already. This tracking enables several other possible features: 1) the ability to show cached results that were previously computed; 2) avoid double counting the area in the field of view, and 3) combine or “stitch” the measurements/metrics across the slide to obtain for example a density map that is larger than single fields of view. In this regard, a low power image of the slide may be obtained initially and used to create essentially a map of the slide; this map (and the pixel or stage coordinates associated with the map) can be used to keep track of where in the map the current field of view is. Feature tracking techniques may also be used to know where in the overall slide the current field of view is positioned. Some microscopes include motorized stages and a hardware solution for keeping track of the position of the microscope objective/field of view relative to the overall slide and this solution, or a lower cost measuring version available on less complex microscopes can be used for creating the map and tracking the position of the field of view.
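One simple way to implement such aggregation, sketched with hypothetical names below, is to cache per-field measurements keyed by stage (or map) coordinates so that revisited fields of view are not counted twice and totals or averages can be computed across the slide.

```python
# Sketch (hypothetical names): aggregate per-field-of-view measurements across a slide
# while avoiding double counting; results are cached by stage coordinates, so a field
# of view that has already been analyzed is not counted again.
class SlideAggregator:
    def __init__(self):
        self.results_by_position = {}  # (x, y) stage coordinates -> measurement dict

    def add(self, position, measurement):
        """Record a measurement (e.g. mitotic count, tumor area) unless this field was already analyzed."""
        if position not in self.results_by_position:
            self.results_by_position[position] = measurement

    def total(self, key):
        """Sum a metric (e.g. 'tumor_cells') over all analyzed fields of view."""
        return sum(m[key] for m in self.results_by_position.values())

    def average(self, key):
        vals = [m[key] for m in self.results_by_position.values()]
        return sum(vals) / len(vals) if vals else 0.0
```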
In quantitative analysis, some measurements do not require sampling or counting the entire slide, but rather employ a technique where only a certain number of fields of view need to be measured. For example, some measurements require, say, 5 or 10 fields of view at high power, and the results are aggregated or averaged over such fields of view.
While presently preferred embodiments are described with particularity, all questions concerning scope of the invention are to be answered by reference to the appended claims interpreted in light of the foregoing.
This application claims priority benefits of U.S. Provisional application Ser. No. 62/656,557 filed Apr. 12, 2018.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2019/020570 | 3/4/2019 | WO | 00 |
Number | Date | Country | |
---|---|---|---|
62656557 | Apr 2018 | US |