The present invention is directed to a system, method, and apparatus for navigation of virtual images.
Navigation of very high resolution Whole Slide Images (WSIs) in digital pathology is challenging when attempted over extended sessions or time periods. The frequency and scale of mouse movements needed, the unfamiliarity of keyboard controls for navigating, and the gross-scale arm movements needed to use a touch screen at a resolution sufficient for a diagnostically relevant image are all barriers to effective WSI navigation.
In contrast, reviewing slides using a microscope is a quick process: a pathologist uses very fine finger movements to scan a slide at an overview level, then rapidly realigns and jumps to significantly higher magnifications. Microscope stages have fine controls for moving the slide in the horizontal and vertical (X & Y) directions, but many pathologists do not even use these controls, instead opting to manipulate the slide directly with their fingertips.
Embodiments of the present invention are directed to a Virtual Slide Stage (VSS) method and system, which provide a solution that allows for a pathologist to effectively review a complete set of digital Whole Slide Images (WSIs) for a given case. This review of WSIs occurs in the place of a review of the corresponding physical slides. The usability and ergonomics of the VSS closely emulate the process by which a user reviews a set of physical slides with a microscope. The VSS facilitates the user virtually reviewing WSIs for the given case by: (a) enabling switching from one slide to the next via a quick function, (b) enabling navigation within a slide via fine motor movements (physically similar to the movements that pathologists are accustomed to using for reviewing physical slides), (c) quickly changing magnifications via a quick function, (d) flagging key slides for later reference via a quick function, and (e) navigating between focal planes (Z-stacks) on images with multiple captured focal planes. This virtual review facilitated by the VSS has direct correlation to the steps in a physical review of slides, which include: (a) placing one of a series of slides on the stage, (b) navigating within a slide by physically moving the slide, (c) changing magnifications by switching objectives with the microscope turret, (d) flagging key slides with a permanent marker, and (e) adjusting the focal plane by adjusting the distance between an objective and the target.
Embodiments of the present invention provide slide navigation systems, methods, and devices that address challenges in digital pathology of navigating very high resolution slide images. These systems, methods, and devices enable virtually navigating and viewing Whole Slide Images (WSI), such that a pathologist may review a set of slides without physically handling them, utilizing skills analogous to physically reviewing slides. The systems include a virtual slide stage (VSS) having at least one sensor detecting user movement of a target placed on the VSS. The systems also include an input component, coupled to the VSS, which provides additional application movement control of the target via quick functions. The systems further include a connector component connecting the VSS to a user device. The connector component is configured to transmit output from the at least one sensor and the input component to the user device. The systems also include a computer processor, in communication with the VSS, which computationally processes the detected movement of the target relative to the VSS (transmitted output) using a computational model (movement profile) to generate processed movement data representing the desired movement of a WSI image in the viewing application. The computer processor executes a software component, which causes the generated processed movement data to be relayed via the viewing application executing on the user device.
In some embodiments of the systems, the VSS is further configured with a surface on which the target rests, and either: (i) the at least one sensor is configured within the surface, or (ii) a camera is mounted to the surface. In these embodiments of the systems, the at least one sensor of the VSS possesses sensitivity levels that enable detecting changes in position of the target relative to the VSS, including detecting slight changes in the position. In these embodiments of the systems, the computer processor may process the change in horizontal and vertical position of the target relative to the surface of the VSS, and may further process rotation of the target relative to the surface of the VSS.
In some embodiments of the systems, the VSS is further coupled to an artificial light source, the target is an opaque slide, and the at least one sensor includes at least one optical sensor that detects movement of the opaque slide by sensing light from the artificial light source. In some embodiments of the systems, the target is a translucent slide, and the at least one sensor includes at least one optical sensor that detects movement of the translucent slide by sensing ambient light. In some embodiments of the systems, the target is a blank slide, and the at least one sensor includes at least one infrared distance sensor that detects movement of the blank slide by sensing physical positioning of the blank slide.
In some embodiments of the systems, the target is an opaque slide, and the at least one sensor detects movement of the opaque slide via a camera, wherein at least one of coloration and marking is applied to facilitate the camera tracking and distinguishing the opaque slide from the VSS. In these embodiments of the systems, the camera may capture a new image of the target, and the computer processor calculates the movement of the target by comparing the captured new image to reference data stored in computer memory communicatively coupled to the computer processor. In some embodiments of the systems, the target is magnetized, and the at least one sensor further includes a set of magnetometers positioned below the surface of the VSS. The set of magnetometers detects orientation and position of the magnetized target. In some embodiments of the systems, the at least one sensor detects movement via a touchscreen or touchpad, rather than movement of a target.
In some embodiments of the systems, the quick functions may enable one or more keys or buttons coupled to the input component that is mounted or otherwise connected to the VSS, the one or more keys or buttons being at least one of physically-enabled or digitally-enabled components. In some embodiments of the systems, the quick functions may enable a dial or potentiometer, and accept at least one of: (i) digital inputs that enable one or more fixed settings and (ii) analog inputs that enable continuous scrolling between settings. In some embodiments of the systems, the quick functions may enable at least one of: gestures, camera detection, a touchscreen, or a touchpad. In some embodiments of the systems, the quick functions are in communication with the user device. In each of these embodiments, the quick functions enable: (i) switching between slides, (ii) navigating within a slide via fine motor movements, (iii) changing magnifications or resolutions, (iv) flagging key slides, and (v) switching between focal planes that have been captured and stored in the WSI.
In some embodiments of the systems, the connector component couples the VSS to the user device via at least one of: a USB connection, a wired network, a WiFi network, and Bluetooth. In some embodiments of the systems, the processing of the detected movement is performed by the computer processor positioned either: (i) within the VSS or (ii) within the user device. In some embodiments of the systems, the processing is performed by the connected computer's processor (e.g., within the user device). In other embodiments, the processing is performed by a processor contained within the VSS. The processing transforms the detected data from the sensors and quick functions according to a user or system configuration that implements a movement profile, i.e., a model mapping movement of the target relative to the VSS to movement of the WSI image in the viewing application. The movement profile may be implemented to (i) remain linear to the movement of the target independent of magnification, (ii) geometrically scale to increase or decrease the responsiveness of the viewing application relative to the movement, or (iii) be user-configured, non-linear, and non-geometric. Once transformed, the movement and quick function data is transferred to the viewing application via at least one of (i) native software or a library, including a device driver, and (ii) a networking component, including a web server or websockets.
The methods of these embodiments detect, by at least one sensor coupled to a virtual slide stage (VSS), user movement of a target placed on the VSS. The methods further receive additional movement control input from the user via quick functions, and transmit output from the at least one sensor and the quick functions to the user device. The methods process the output using a model to generate processed movement data representing movement profiles of the target, and relay the output, translated based on the movement profiles, via a viewing application configured on the user device.
The devices of these embodiments include a virtual slide stage (VSS) having at least one sensor detecting user movement of a target placed on the VSS. The devices further include an input, coupled to the VSS, which provides quick function movement control of the target via quick functions. The devices further include a communication connection which connects the VSS to a user device and transmits output from the at least one sensor and the input component to the user device. The devices also include a computer processor, in communication with the VSS, which computationally processes the transmitted VSS output using a computational model to generate processed movement data representing movement profiles of the target. The computer processor executes software, which causes the transmitted VSS output translated based on the movement profiles, to be relayed via a viewing software application executing on the user device.
The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments.
A description of example embodiments follows.
The virtual slide stage (VSS) system and method of the present invention serve the primary purpose of quickly and efficiently viewing and navigating one or more Whole Slide Images (WSIs) representing a pathology case. Since WSIs are digital representations of physical slides, the viewing location is independent of the slide location.
A VSS system/device includes several basic components. The VSS system includes one or more sensors that detect the user's motion/movement of a target. The VSS system further includes an input component for providing navigation control and functionality related to the target to the user (quick functions). The VSS system also includes a connection component that connects the VSS to a computer system of the user and transmits navigation outputs to that computer system (connectivity). The VSS system further includes a processor configured to process navigation outputs from the navigation component (the one or more sensors) and the input component. The processor further interprets the VSS system's outputs using a preset or user-configured motion/movement model, which is used to implement in software a movement profile that translates (transforms) the motion/movement of the target for viewing via an application configured at the user's computer system.
Multiple embodiments of the VSS system will be described, but a preferred embodiment utilizes an opaque target (slide) of roughly the same dimensions as a typical glass slide. The opaque target is placed on a Virtual Slide Stage (VSS) and one or more sensors of different types detect fine (sub-millimeter or micron) movements. Additionally, the VSS can optionally receive additional quick function inputs for navigating the target, including allowing for navigation between resolutions, different WSIs, or other types of user controls. The VSS system, coupled with the processing for the various inputs, is then paired/coupled with a user's computer and software configured on the user's computer to navigate the target image.
In a preferred embodiment, the VSS system contains one or more optical sensors (similar to an optical mouse) to detect movement of the target, and a series of buttons to implement quick functions to further control movement/navigation of the target. These buttons are connected to the VSS (either directly or via a cable). In other embodiments, the buttons may be replaced with any other input controls without limitation. A processor within the VSS system translates the input (movement of the target) via a model into a movement profile and transmits the translated input (movement profile) to the user's computer via a USB connection.
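By way of illustration, the following is a minimal sketch of such a processing loop, written in Python for readability (actual firmware would likely be lower-level). The helper functions read_sensor_delta, read_buttons, and send_to_host, as well as the sensor resolution and packet format, are hypothetical placeholders rather than part of the described embodiment.

```python
import time

# Hypothetical low-level helpers; the actual VSS firmware would supply these.
def read_sensor_delta():
    """Return (dx, dy) raw counts from the optical sensor since the last poll."""
    raise NotImplementedError

def read_buttons():
    """Return the set of quick-function buttons currently pressed."""
    raise NotImplementedError

def send_to_host(packet):
    """Transmit a packet to the user's computer over the USB connection."""
    raise NotImplementedError

COUNTS_PER_MM = 400.0  # assumed optical sensor resolution

def poll_loop(scale=1.0, hz=100):
    """Poll the sensors, translate raw counts into millimeters of target
    movement via a simple linear profile, and forward the result along
    with any quick-function presses to the host."""
    while True:
        dx, dy = read_sensor_delta()
        send_to_host({
            "dx_mm": dx / COUNTS_PER_MM * scale,
            "dy_mm": dy / COUNTS_PER_MM * scale,
            "buttons": sorted(read_buttons()),
        })
        time.sleep(1.0 / hz)
```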
Sensors
In another embodiment including optical sensors, in place of the combination of light sources (or lasers) and an opaque target, a semi-translucent target slide can be used, allowing ambient light to reach the one or more optical sensors. The use of ambient light allows these embodiments to forego the use of a light source (or laser).
In another embodiment of the sensors, infrared distance sensors can be used in place of optical sensors. The distance sensors are placed to sense the target (blank slide), determining the X/Y position and rotation of the target, thereby replacing the optical sensors. In this embodiment, a thicker target (blank slide) is used. In an embodiment using infrared distance sensors, a pair of sensors mounted in parallel would capture/sense one vertical face of the blank slide. A second pair of parallel sensors would be mounted perpendicular to the first set, capturing/sensing an additional face of the blank slide. This capturing of the vertical faces allows the calculation of the X/Y position of the blank slide, as well as rotation of the blank slide.
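One way this geometry could be computed is sketched below. The function name is assumed, and the simplification of deriving rotation from a single pair's reading difference (ignoring the second-order effect of rotation on the averaged distances) is for illustration only.

```python
import math

def pose_from_distance_pairs(x1, x2, y1, y2, baseline):
    """Estimate the blank slide's pose from two perpendicular pairs of
    parallel distance sensors.

    x1, x2 -- readings from the pair facing one vertical face (X axis)
    y1, y2 -- readings from the perpendicular pair (Y axis)
    baseline -- spacing between the two sensors within a pair

    Returns (x, y, theta_degrees)."""
    # Each pair's average approximates the distance to the sensed face.
    x = (x1 + x2) / 2.0
    y = (y1 + y2) / 2.0
    # A difference between the two parallel readings indicates that the
    # sensed face is rotated relative to the sensor axis.
    theta = math.degrees(math.atan2(x2 - x1, baseline))
    return x, y, theta
```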
In another embodiment of the sensors, sonar or lidar sensors are used in place of the optical sensors. These sensors use the physical positioning described in the infrared embodiment to calculate the X/Y position of the blank slide.
In another embodiment, the VSS (110) may use a highly sensitive touch screen (230) (such as those found on tablets or smartphones) or touch sensor (similar to a touch pad from a laptop) in place of the blank slide and associated sensors. This implementation may determine X and Y position from a user's finger interfacing with the touch screen. Two-finger use would allow the VSS (110) to capture X/Y position and rotation from the user without use of a blank slide.
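In the two-finger case, position and rotation might be recovered from the touch points roughly as follows. This is a sketch with assumed names: the midpoint of the two touches gives the X/Y position, and the angle of the finger axis gives the rotation.

```python
import math

def pose_from_two_touches(p1, p2):
    """Derive X/Y position and rotation from two touch points, mirroring
    how a two-finger grip on a physical slide conveys both translation
    and rotation."""
    (x1, y1), (x2, y2) = p1, p2
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0           # midpoint -> position
    theta = math.degrees(math.atan2(y2 - y1, x2 - x1))  # finger axis -> rotation
    return cx, cy, theta
```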
In another embodiment of the sensors, the VSS (110) may use a camera (240) to capture the relative movement of a target (slide). The target may be an opaque slide that may be marked with specific icons. In one processing method, the VSS camera captures a new image and calculates the movement of the target (opaque slide) by comparing this new image to reference data in memory communicatively coupled to the VSS. This reference data may be one or more video frames recorded from an earlier point in time. In a second processing method, the movement of the target is relative to either the absolute field of view of a camera image, or is relative to some demarcation (e.g., a box drawn on the surface of the VSS) physically marked on the VSS. If the target is marked with icons, those icons' positions in the image may be used for processing in either processing methodology. Any new images taken may also be stored as future reference data in memory communicatively coupled to the VSS.
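No particular image-comparison algorithm is prescribed above; one plausible realization of the first processing method is phase correlation over grayscale frames, sketched here with OpenCV (assuming BGR frames as captured by a typical camera pipeline).

```python
import cv2
import numpy as np

def estimate_shift(reference_frame, new_frame):
    """Estimate the (dx, dy) translation of the target between a stored
    reference frame and a newly captured frame using phase correlation."""
    ref = np.float32(cv2.cvtColor(reference_frame, cv2.COLOR_BGR2GRAY))
    new = np.float32(cv2.cvtColor(new_frame, cv2.COLOR_BGR2GRAY))
    (dx, dy), response = cv2.phaseCorrelate(ref, new)
    return dx, dy, response  # response is a rough confidence of the match
```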
In another embodiment of the sensors, a set of magnetometers (220) is laid out in some configuration below the surface of the VSS, and the target is magnetized or contains small magnets. The orientation and position of the magnetized target can then be determined by the magnetometers.
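As a crude illustration only: a first approximation of the target's position could weight the known sensor locations by measured field strength. Everything in this sketch is an assumption; a production implementation would more likely fit a magnetic dipole model, which would also recover orientation.

```python
import numpy as np

def estimate_position(sensor_xy, field_magnitudes):
    """First-order position estimate for a magnetized target: the
    centroid of the known magnetometer locations weighted by measured
    field strength. Illustrative only; a dipole-model fit would be
    needed to also recover the target's orientation."""
    xy = np.asarray(sensor_xy, dtype=float)        # (N, 2) sensor layout
    w = np.asarray(field_magnitudes, dtype=float)  # (N,) field readings
    return tuple((xy * w[:, None]).sum(axis=0) / w.sum())
```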
Quick Functions
In another embodiment, the Quick Functions may be wholly or partially implemented by alternative input devices such as dials, potentiometers, or wheels (320) to control the zoom, either digitally (at preset magnifications) or in a more analog manner (scrolling non-preset amounts).
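A sketch of how such a dial or potentiometer reading might map to magnification in the two modes; the preset steps and the continuous range shown are illustrative assumptions.

```python
PRESET_MAGNIFICATIONS = [1, 2, 5, 10, 20, 40]  # assumed objective-like steps

def zoom_from_dial(raw, digital=True, analog_range=(1.0, 40.0)):
    """Map a dial/potentiometer reading in [0.0, 1.0] to a magnification.
    Digital mode snaps to preset steps; analog mode scrolls continuously."""
    raw = min(max(raw, 0.0), 1.0)
    if digital:
        index = round(raw * (len(PRESET_MAGNIFICATIONS) - 1))
        return PRESET_MAGNIFICATIONS[index]
    low, high = analog_range
    return low + raw * (high - low)
```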
In another embodiment, one may use voice commands in combination with, or as replacements for, all Quick Functions.
In embodiments using a camera or a touch screen (330), such embodiments may include the use of gestures as a replacement for one or more of the Quick Functions.
In another embodiment, the Quick Functions are implemented as a mixture of other embodiment implementations (such as a wheel for magnification and buttons for other Quick Functions).
In another embodiment, there are no Quick Functions implemented.
Digital Processing Environment
Client computer(s)/devices 50 can also be linked through communications network 70 (e.g., via interface 107) to other computing devices, including other client devices/processes 50 and server computer(s) 60. Communications network 70 can be part of a remote access network, a global network (e.g., the Internet), cloud computing servers or services, a worldwide collection of computers, local area or wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth, etc.) to communicate with one another. Other electronic device/computer network architectures are suitable.
Client computers/devices 50 may include a Virtual Slide Stage (VSS).
Attached to system bus 79 is I/O device interface 82 for connecting various input and output devices (e.g., keyboard, mouse, wheels, buttons, touch screens, displays, printers, speakers, voice controls, VSS, Quick Functions, etc.) to the computer 50, 60. Network interface 86 allows the computer to connect to various other devices attached to a network (e.g., network 70).
In an example mobile implementation, a mobile agent implementation of the invention may be provided. A client-server environment can be used to enable mobile configuration of the capturing of the navigation of slide images. It can use, for example, the XMPP protocol to tether a client device 50 to the VSS. The server 60 can then issue commands via the mobile phone on request. The mobile user interface framework to access certain components of the slide navigation system may be based on XHP, Javelin, and WURFL. In another example mobile implementation for OS X, iOS, and Android operating systems and their respective APIs, Cocoa and Cocoa Touch may be used to implement the client side components 115 using Objective-C or any other high-level programming language that adds Smalltalk-style messaging to the C programming language.
Disk storage 95 provides non-volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the slide navigation system. The system may include disk storage accessible to the server computer 60. The server computer (e.g., user computing device) or client computer (e.g., sensors) may store information, such as logs, regarding the slide navigation, including the position and orientation of one or more slides. Central processor unit 84 is also attached to system bus 79 and provides for the execution of computer instructions.
In one embodiment, the processor routines 92 and data 94 are a computer program product (generally referenced 92), including a computer readable medium (e.g., a removable storage medium such as one or more DVD-ROMs, CD-ROMs, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the slide navigation system. Executing instances of respective software components of the slide navigation system may be implemented as computer program products 92, and can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable, communication and/or wireless connection, via, for example, a browser SSL session or through an app (whether executed from a mobile or other computing device). In other embodiments, the invention programs are a computer program propagated signal product 107 embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)). Such carrier medium or signals provide at least a portion of the software instructions for the routines/program 92 of the slide navigation system.
In alternate embodiments, the propagated signal is an analog carrier wave or digital signal carried on the propagated medium. For example, the propagated signal may be a digitized signal propagated over a global network (e.g., the Internet), a telecommunications network, or other network. In one embodiment, the propagated signal is a signal that is transmitted over the propagation medium over a period of time, such as the instructions for a software application sent in packets over a network over a period of milliseconds, seconds, minutes, or longer. In another embodiment, the computer readable medium of computer program product 92 is a propagation medium that the computer system 50 may receive and read, such as by receiving the propagation medium and identifying a propagated signal embodied in the propagation medium, as described above for computer program propagated signal product.
Generally speaking, the term “carrier medium” or transient carrier encompasses the foregoing transient signals, propagated signals, propagated medium, storage medium and the like.
In other embodiments, the program product 92 may be implemented as a so-called Software as a Service (SaaS), or other installation or communication arrangement supporting end-users.
Connectivity
In a preferred embodiment for connectivity, the VSS will have a USB connector to connect with the user's computer.
In another embodiment of connectivity, the USB connector that connects the VSS to the user's computer can be replaced by an Ethernet network connector.
In another embodiment of connectivity, the USB connector that connects the VSS to the user's computer can be replaced by a Bluetooth network connection.
In another embodiment of connectivity, the USB connector that connects the VSS to the user's computer can be replaced by a WiFi network connection.
VSS Signal Processing
In a preferred embodiment, the processing of the inputs is performed by a processor contained within the VSS itself.
In another embodiment, the VSS may use USB- or network-addressable sensors, allowing a consuming or viewing software application to access data directly from said sensors. In this embodiment, the VSS does not have a local processor. Instead, the processing of the inputs is performed by the processor in the user's computer.
In either processing embodiment, inputs measured by the VSS sensors of the target are translated into relative movements (relative to the VSS) that are transferred to the user's computer. The user's computer may have a software component installed and running to relay information from the VSS to the consuming or viewing software application, also running on the user's computer. In one embodiment, the software component is a device driver. In another embodiment, the software component is a server component that performs internal networking or data transfer.
In one embodiment, the component will implement Web Sockets, which allows for one or more software components to connect to a Web Socket server, and every connected application (e.g., consuming and viewing applications) on the user's computer may receive the data packets sent by other connected applications. In this manner, the processor sends both key press and positional information to the connected applications via the server.
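A minimal sketch of such a relay server, assuming the third-party Python websockets package (version 11 or later, where a handler receives a single connection argument); the names and the port are illustrative.

```python
import asyncio
import websockets

CLIENTS = set()

async def relay(websocket):
    """Re-broadcast every packet (key press or positional data) from one
    connected application to all other connected applications."""
    CLIENTS.add(websocket)
    try:
        async for message in websocket:
            for client in CLIENTS:
                if client is not websocket:
                    await client.send(message)
    finally:
        CLIENTS.discard(websocket)

async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```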
Signal Transformation and Modeling
The movements of the system corresponding to the target movements can be modeled in multiple ways (and formatted in a movement profile of the target). First, the modeled movements may remain linear to the target slide's movement (independent of magnification). Second, the modeled movements can be scaled geometrically to increase or decrease the responsiveness of the application relative to the movement of the target. Third, a custom movement profile may be created that has user-defined scaling, where the scaling can vary for different ranges of magnification (and thus is neither linear nor geometric).
In the first instance (linear), the modeled movement of the entire range of the WSI image is controlled by the movement of the target (slide) within an approximately 1 inch target region. This means that at low resolutions (e.g., 1×), moving the target ½ inch may translate to about 500 pixels worth of movement on the screen, but at higher resolutions (e.g., 20×), the same movement would correspond to about 10,000 pixels. Regardless of the magnification, the same movement should navigate the user to the same location on the slide.
In the second instance (scaled), the modeled movement of the target slide can be modified to produce finer movements at higher resolutions. The ½ inch movement described above may still be the equivalent of 500 pixels at 1×, but at 20× magnification, the movement could remain at 500 pixels or be scaled by some other arbitrary factor to produce a finer slide navigation (e.g., 1000 pixels).
In an example embodiment, magnification mappings could be configured on a per-user basis. The scaling can be applied as a user or system level configuration in the VSS or in the consuming or viewing application, as can the movement/motion models.
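The three profile families might be expressed as follows. The function names, the base sensitivity of 1,000 pixels of slide image per inch of target travel at 1x, and the damping exponent are illustrative assumptions, chosen so that the linear case reproduces the 500-pixel and 10,000-pixel figures above.

```python
def linear_profile(delta_inches, magnification, base_px_per_inch=1000):
    """Linear: screen movement tracks slide pixels, so 1/2 inch of target
    travel is ~500 px at 1x and ~10,000 px at 20x -- the same physical
    motion always lands on the same location on the slide."""
    return delta_inches * base_px_per_inch * magnification

def geometric_profile(delta_inches, magnification, base_px_per_inch=1000,
                      damping=0.5):
    """Geometrically scaled: responsiveness grows more slowly than the
    magnification (with damping=0.5, the same 1/2 inch gives ~2,200 px
    at 20x instead of 10,000 px), yielding finer high-power navigation."""
    return delta_inches * base_px_per_inch * magnification ** damping

def custom_profile(delta_inches, magnification, table):
    """User-defined: 'table' maps magnification thresholds to px-per-inch
    factors, so scaling can vary per magnification range and need be
    neither linear nor geometric."""
    for threshold, px_per_inch in sorted(table.items()):
        if magnification <= threshold:
            return delta_inches * px_per_inch
    # Above the largest threshold, fall back to that threshold's factor.
    return delta_inches * sorted(table.items())[-1][1]
```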
Navigation Analytics
Information related to additional context verification tests/factors used in determining the performance of navigating a slide at the VSS (e.g., the user moving the slide or using quick functions to navigate the slide), including information regarding which tests/factors are successfully applied versus those that were processed but not successfully applied, can be used to improve the quality of the VSS. For example, an analytics tool (such as a web analytics tool or BI tool) may produce various metrics, such as measures of verification factor/test success based on combinations of other criteria (e.g., variables associated with the level of the user's movement of the slide), and filter these results by time of day, time period, or location. The analytics tools may further distinguish results of the user physically moving a slide from results of the user using quick functions to move the slide. The analytics tools may further distinguish switching between slides from movement within a slide, changing magnifications, navigating between focal planes, and the like. Such measures can be viewed per test/factor to help improve the VSS and viewing application, because the results may be aggregated across a multitude of devices and users.
An analytics tool offers the possibility of associating other quantitative data, besides frequency data, with a successful test/factor application. For instance, the results of high performance in interactively navigating a target at the VSS could be joined against the metrics derived by an analytics system (such as a web analytics solution or a business intelligence solution).
Furthermore, analytics data for interactive communication within online content for a user can be aggregated per type of user. For example, it could be of interest to know which types of tests/factors are most or least conducive to high performance in the interactive communication or, on the contrary, are associated with low performance in the interactive communication.
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
While example embodiments have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the embodiments encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 62/442,403, filed on Jan. 4, 2017. The entire teachings of the above application are incorporated herein by reference.