1. Technical Field
The present invention relates to digital pathology and, more particularly, to image analysis performed with whole-slide imaging.
2. Description of the Related Art
Pathologists and medical doctors analyze very large digital images of whole histopathology slides using whole-slide imaging browsers. Such browsers form a kind of virtual microscope running on a computer, allowing a user to manipulate the image in a user-friendly fashion, e.g., by panning and zooming, and implement computer-based image analysis on the slide. Image analysis normally operates on a portion of the image, called the region of interest (ROI), but the size of the ROI is often constrained by the complexity of the analysis and the available computing resources. Analytics on histopathological images, including image processing, image analysis, and machine learning, is frequently computationally intensive and cannot be performed in an interactive way.
Existing systems are desktop or web-browser based and perform viewing and limited analysis. These systems do not have the capability of executing full analysis of tissues. As a result, analysis on such systems is not scalable, and demanding functions take too long for interactive execution.
Furthermore, existing distributed computing systems are inadequate to address the needs of digital pathology, because the computation and communication demands may overwhelm even powerful distributed systems. Images of histology slides can be, for example, several gigabytes in size, such that it is usually infeasible to transfer such images back and forth between client and server. Moreover, the computations involved in modern analytics can be very intensive, particularly if performed on the entire image. As such, existing cloud servers are not optimized to handle digital pathology services.
A method for electronic pathology analysis includes scanning received slides that include a pathology sample to produce a sample image in a shared memory; analyzing the sample image using one or more execution nodes, each including one or more processors, according to one or more analysis types to produce intermediate results; transmitting some or all of the sample image to a client device; further analyzing the sample image responsive to a request from the client device to produce a final analysis based on the intermediate results; and transmitting the final analysis to the client device.
A system for electronic pathology analysis includes a shared memory configured to store a scanned image from a received slide; one or more execution nodes, each including one or more processors, configured to analyze the scanned image according to one or more analysis types to produce intermediate results and to further analyze the scanned image responsive to a request from a client device to produce a final analysis based on the intermediate results; and a network transceiver configured to transmit some or all of the scanned image to the client device and to transmit the final analysis to the client device responsive to the request from the client device.
These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein:
The present principles provide a multi-layer system that allows for distributed processing of analytical tasks, allowing users to perform analysis in real time with a high degree of responsiveness. Processing is separated into a user interface layer that permits a user to interactively view and direct analysis, and an interpretation layer that distributes computation-heavy analysis to a back-end server or servers that have greater computational power than the interface layer. By focusing computational tasks at a place other than the end-user's terminal, the whole-slide imaging (WSI) browser may be implemented on a much smaller device, e.g., a tablet or laptop.
Referring now in detail to the figures in which like numerals represent the same or similar elements and initially to
The interpretation layer 104 may include one or more execution nodes 112. The execution nodes 112 may represent a single computer system with a single processor, having one or more processing cores, or with multiple processors. The execution nodes 112 may also represent multiple distinct computer systems that have been networked in, e.g., a cloud arrangement. The execution nodes 112 have access to a shared memory 114, which stores the image information being used by the interface layer 102. This image information may be communicated by the interface layer 102 or may be stored in advance to minimize communication times and improve responsiveness. An execution controller 116 accepts analysis requests from the interface layer 102 and divides large analyses into multiple sub-jobs, which the execution controller 116 then distributes to execution nodes 112 for analysis. The execution controller 116 includes a scheduler configured to prioritize sub-tasks in such a way as to provide low-latency feedback to the user, optimizing for interactive use. Scheduling may include ordering tasks within a single execution node 112, may include distributing the tasks between execution nodes 112 for parallel execution, or may represent a combination of the two. Different types of run-time schedulers, such as Hadoop®, may be used in execution controller 116 and may be implemented transparently to the user. Upon completion of the analysis by the execution nodes 112, the execution controller 116 assembles the results of the sub-jobs into a single analysis or result and communicates that result back to interface layer 102, where it may be stored and/or displayed to the user.
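The split/distribute/assemble cycle performed by the execution controller can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the function names are hypothetical, worker threads stand in for execution nodes 112, and the "analysis" is a placeholder that simply measures the area of each assigned strip.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_subjobs(region, n_subjobs):
    """Divide a (width, height) region into n_subjobs horizontal strips."""
    width, height = region
    strip = height // n_subjobs
    return [(0, i * strip, width, strip) for i in range(n_subjobs)]

def analyze_subjob(rect):
    """Placeholder per-node analysis: report the area of the assigned strip."""
    x, y, w, h = rect
    return w * h

def run_analysis(region, n_nodes=4):
    """Split a large request, run sub-jobs in parallel, assemble the result."""
    subjobs = split_into_subjobs(region, n_nodes)
    # Threads model the execution nodes; a real deployment would dispatch
    # to networked machines sharing the image in shared memory 114.
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        partials = list(pool.map(analyze_subjob, subjobs))
    return sum(partials)  # assembly step: combine sub-results into one answer
```

The key property mirrored here is that the caller sees a single request and a single result; the splitting and parallel execution are transparent.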
To obtain good performance, as measured by the subjective delay that the user experiences with the interface layer 102 when requesting analyses, both computational loads and communication loads between the layers are optimized. In one example, a user may use a pointer in user interface 106 to select a portion of an image. The interface layer 102 communicates the selection to the interpretation layer 104, which automatically transforms the freeform region selected by the user into a set of rectangular sections in the image. The number of sections, as well as their size, is optimized to maximize processing speed on the available execution nodes 112 while preserving algorithmic constraints of the selected analytics, such as a minimal resolution needed for proper detection of particular pathological indicators or features. The results for each rectangular section are then filtered, combined, and integrated into a single report at the execution controller 116 and sent to interface layer 102, where the results are displayed as an overlay graphic on the image. The whole process of selecting optimal resolution and of splitting the analysis into sub-jobs takes place transparently, without explicit user direction.
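One simple way to turn a freeform selection into rectangular sections is to snap each selected pixel to a fixed tile grid and keep the distinct tiles. This minimal sketch is an assumption for illustration (the patent's optimizer also tunes section count and size); the representation of the selection as a set of pixel coordinates is hypothetical.

```python
def cover_with_tiles(points, tile_size):
    """Return the sorted set of (x0, y0, w, h) tiles covering the points."""
    tiles = set()
    for (x, y) in points:
        # Snap each selected pixel down to its tile's top-left corner.
        tx = (x // tile_size) * tile_size
        ty = (y // tile_size) * tile_size
        tiles.add((tx, ty, tile_size, tile_size))
    return sorted(tiles)
```

Each resulting rectangle can then be dispatched to an execution node independently, and the per-tile results merged into one report.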
Referring now to
Block 206 transfers the image sub-sections to execution nodes 112 for processing according to a schedule generated by execution controller 116. One exemplary schedule may assign sub-sections to execution nodes 112 as they become available in a round-robin fashion. The execution nodes 112 access image data from shared memory 114 and perform their assigned analyses. The results from each of the execution nodes 112 are collected by block 208 at execution controller 116 to obtain a final result, which is then transmitted to interface layer 102. The result is then stored in local memory 110 and displayed on the user interface 106 as, e.g., an overlay on the original image. This process may be repeated for as many different selections as a user chooses, and for as many types of analysis as are available.
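The round-robin schedule mentioned above can be sketched in a few lines. The function name and the node representation are hypothetical; the point is only that sub-sections are dealt out to nodes in rotation.

```python
from itertools import cycle

def round_robin_schedule(subsections, nodes):
    """Assign each sub-section to an execution node in round-robin order."""
    assignment = {node: [] for node in nodes}
    node_cycle = cycle(nodes)  # endlessly rotates through the nodes
    for sub in subsections:
        assignment[next(node_cycle)].append(sub)
    return assignment
```

More sophisticated schedulers could weight assignments by node load or by sub-section size, but round-robin already balances counts evenly.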
Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements. In a preferred embodiment, the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers.
Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
In the same vein as above, the user experience can be further optimized by pre-processing images. For example, pre-processing may include applying the most time-consuming types of analysis to an entire slide. In situations where computational resources for pre-processing are limited, the system may prioritize those portions of an image that are more likely to be of interest to the user, determined using a trained classifier that is trained using a large set of training images which include exemplary selections of regions of interest for particular types of analysis. Alternatively, parts of an image to be pre-processed may be selected by a human technician. Each type of analysis may have its own region of interest, as each image analysis will be looking for different things within the image. The results of pre-processing may be stored in shared memory 114 or transferred to the local memory 110 with the image itself.
When a user requests analysis from user interface 106, cached intermediate results are recalled and only the final analysis, which may involve parameters chosen by the final user, is performed in real time. Because the most computationally intensive steps have been performed in pre-processing, the final computation is usually much less complex and completes with low latency. The user interface 106 may indicate regions of the image that are pre-processed by, e.g., overlaying a semi-transparent colored grid on the image. In this way, the user may place a request knowing whether the operation will complete quickly (if the selection is in a pre-processed region) or will take additional time to compute (if some parts of the selection fall outside the pre-processed region). It should be recognized that the “grid” need not be a square or rectangular grid. The grid may instead have corners that do not align, sections that are of varying size and shape, and sections that overlap.
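The caching behavior described above can be modeled with a store keyed by (tile, analysis type): pre-processing fills the cache, and at request time only tiles outside the pre-processed region incur real computation. This is a minimal sketch with hypothetical names, not the system's actual storage layer.

```python
class IntermediateResultCache:
    """Toy model of shared memory 114 holding intermediate results."""

    def __init__(self):
        self._cache = {}   # (tile, analysis) -> result
        self.misses = 0    # counts tiles that had to be computed at request time

    def preprocess(self, tiles, analysis, compute):
        """Fill the cache ahead of time for the given tiles and analysis type."""
        for tile in tiles:
            self._cache[(tile, analysis)] = compute(tile)

    def request(self, tiles, analysis, compute):
        """Serve a user request, computing only what pre-processing missed."""
        results = {}
        for tile in tiles:
            key = (tile, analysis)
            if key not in self._cache:
                self.misses += 1                 # only this part runs live
                self._cache[key] = compute(tile)
            results[tile] = self._cache[key]
        return results
```

The `misses` counter corresponds to the parts of a selection falling outside the pre-processed region, which is exactly the information the semi-transparent grid conveys to the user.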
Referring now to
Execution controller 116 determines potential regions of interest at block 306. This may include segmenting the image into discrete sections and may further include performing an initial analysis on the image to locate portions of the image that may be of particular interest when performing one or more of the available types of analysis. For example, the execution controller may predict a region of interest by noting changes of color, texture, or brightness within the image that would signify changes of tissue type. In one embodiment, potential regions of interest may be selected offline by a technician.
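As one hypothetical illustration of the brightness-change cue (not the patent's specific detector), tiles whose pixel intensities vary more than a threshold can be flagged as likely tissue transitions:

```python
def tile_variance(tile):
    """Population variance of a flat list of pixel intensities."""
    mean = sum(tile) / len(tile)
    return sum((p - mean) ** 2 for p in tile) / len(tile)

def predict_rois(tiles, threshold):
    """Return indices of tiles whose intensity variance exceeds the threshold.
    High variance is taken as a crude proxy for a change of tissue type."""
    return [i for i, tile in enumerate(tiles) if tile_variance(tile) > threshold]
```

A production detector would use color and texture statistics as well, but the structure is the same: score each section, keep those above a threshold as candidate regions of interest.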
Having determined potential regions of interest, the execution controller 116 may pass information regarding said regions on to execution nodes 112. The execution nodes 112 apply one or more forms of analysis to the potential region(s) of interest at block 308. Because computational resources may still be limited, priority can be assigned by the execution controller 116 to regions of interest that are particularly noteworthy (judged by, e.g., a likelihood score) and to forms of analysis that are most commonly used. Once the execution nodes 112 produce a result, the result is collected by execution controller 116 and stored in shared memory 114 as an intermediate result. In one example, tumor edges may be located automatically or by a technician by determining an area in an image that shows a highest density of dye. This edge region may subsequently be used for analyses such as performing a mitotic count. In this example, regions close to the edge of the tumor would have the highest priority. If there is additional time, regions of interest within the tumor could be further processed using available resources.
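The priority ordering described above (most noteworthy regions first, within a limited computation budget) is naturally expressed with a max-heap over likelihood scores. The names below are hypothetical illustrations:

```python
import heapq

def process_by_priority(scored_regions, budget):
    """Process up to `budget` regions, highest likelihood score first.

    scored_regions: iterable of (region, score) pairs.
    """
    # heapq is a min-heap, so negate scores to pop the highest score first.
    heap = [(-score, region) for region, score in scored_regions]
    heapq.heapify(heap)
    processed = []
    while heap and len(processed) < budget:
        _, region = heapq.heappop(heap)
        processed.append(region)
    return processed
```

In the tumor-edge example, edge-adjacent regions would carry the highest scores and so be popped first; interior regions are processed only if budget remains.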
Referring now to
Having received the user request, block 408 completes the analysis. Toward this end, execution controller 116 accesses intermediate results stored in shared memory 114 and determines what further analysis is needed to meet the user's request. The remaining work is assigned to execution nodes 112 to produce a final analysis. Block 410 transfers the final analysis result from shared memory 114 to the interface layer 102, where block 412 displays the results using the user interface 106. The analysis results may be sent all at once, or may be provided to the interface layer 102 in a progressive fashion, with some results being provided immediately to provide a higher degree of responsiveness to the user.
It should also be noted that state information on the analyses may be preserved in broader contexts. For example, after performing an analysis, the user may request an additional area to be analyzed, or may merely increase or reduce the size of the current area. The state information of any previous analysis may be stored as intermediate results in shared memory 114 and used as a basis for subsequent analyses. Without maintaining a state on both the interface layer 102 and interpretation layer 104, the system would have to recompute initial analyses together with the additional requests. Instead, the present principles maintain the state of analysis and quickly compute only the missing parts, merging them with the current results.
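The state-reuse idea (recompute only the missing parts when a selection grows or shrinks, then merge with prior results) can be sketched as a set difference over analyzed tiles. This is an illustrative model with hypothetical names, assuming per-tile results are independent:

```python
def incremental_analyze(prev_state, new_tiles, compute):
    """Reuse prev_state (a tile -> result map) for a changed selection.

    Returns the merged results for new_tiles and the list of tiles that
    actually had to be recomputed.
    """
    missing = [t for t in new_tiles if t not in prev_state]
    fresh = {t: compute(t) for t in missing}     # only the missing parts
    merged = {t: prev_state.get(t, fresh.get(t)) for t in new_tiles}
    return merged, missing
```

Without the stored state, every tile in the enlarged selection would be recomputed; with it, only the set difference is.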
Implementing the present principles greatly increases the speed and responsiveness of WSI browser applications. Not only does the WSI browser gain the speed advantages of parallelism, but pre-processing and communication optimization allow the client to receive analytical information in real time, providing a comfortable level of responsiveness to the user. Furthermore, by offloading analysis to a dedicated offsite service, the up-front cost of analysis is reduced, allowing smaller and less expensive client terminals to be used.
Referring now to
Block 508 determines an optimal number of sub-sections based on, e.g., the number of available execution nodes 112. This number may be, for example, an integer multiple of the number of nodes 112, such that computational resources are used to their fullest. Block 510 performs an initial tiling of the image to produce a rectangular grid that includes, e.g., a user's freeform selection. Block 512 iteratively resizes the grid elements to conform to the freeform region while meeting the above-determined constraints. Resizing grid elements may include removing elements entirely as well as lengthening/widening or shrinking particular grid elements to more closely approximate the selected freeform shape. There is no constraint on the proportion of element width to element height. Having divided the selected region of interest into sub-sections, the execution controller 116 assigns one or more sub-sections to each execution node 112 for analysis.
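The steps of blocks 508-512 can be sketched as follows: pick a tile count that is an integer multiple of the node count, grid the selection's bounding box, and keep only grid elements that actually intersect the freeform region. This is a simplified assumption-laden sketch (the selection is modeled as a set of pixel coordinates, and the iterative resizing of block 512 is reduced to dropping empty cells); the function name is hypothetical.

```python
import math

def tile_selection(points, n_nodes, multiple=2):
    """Tile the bounding box of a freeform selection into roughly
    n_nodes * multiple cells, keeping only cells containing a selected point."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = max(xs) - x0 + 1
    h = max(ys) - y0 + 1
    n_tiles = n_nodes * multiple  # integer multiple of the node count
    # Choose a grid whose cell aspect roughly matches the bounding box.
    cols = max(1, round(math.sqrt(n_tiles * w / h)))
    rows = max(1, math.ceil(n_tiles / cols))
    tw, th = math.ceil(w / cols), math.ceil(h / rows)
    kept = set()
    for (x, y) in points:
        # Keep only grid cells that contain at least one selected point.
        kept.add((x0 + (x - x0) // tw * tw, y0 + (y - y0) // th * th, tw, th))
    return sorted(kept)
```

The resulting tiles can then be handed to execution nodes 112 one or more per node, so that the parallel hardware is fully occupied.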
Having described preferred embodiments of a system and method for cloud-based digital pathology (which are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons skilled in the art in light of the above teachings. It is therefore to be understood that changes may be made in the particular embodiments disclosed which are within the scope of the invention as outlined by the appended claims. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims.
This application claims priority to provisional application Ser. No. 61/514,143, filed on Aug. 2, 2011, to provisional application Ser. No. 61/514,144, filed Aug. 2, 2011, and to provisional application Ser. No. 61/514,146, filed Aug. 2, 2011, each incorporated herein by reference. This application is related to application Ser. No. 13/564,418, “INTERACTIVE ANALYTICS OF DIGITAL HISTOLOGY SLIDES,” filed concurrently herewith and incorporated herein by reference. This application is related to application Ser. No. 13/564,437, entitled, “DIGITAL PATHOLOGY WITH LOW-LATENCY ANALYTICS,” filed concurrently herewith and incorporated herein by reference.
Number | Date | Country
---|---|---
20130034279 A1 | Feb 2013 | US

Number | Date | Country
---|---|---
61514143 | Aug 2011 | US
61514144 | Aug 2011 | US
61514146 | Aug 2011 | US