The embodiments described herein relate to methods and apparatus for visualization of imaging data of tissue. More specifically, systems, methods, and apparatuses described herein provide a visual user interface for viewing medical imaging data.
Some common medical imaging applications involve generation of large amounts of imaging data that is difficult, tedious, and/or time consuming to review or curate manually. Thus, a need exists for improved systems, apparatuses, and methods for efficient review of imaging data.
In some embodiments, an apparatus includes a memory and a processor operatively coupled to the memory. The processor can be configured to receive imaging data obtained from an imaging device imaging tissue. In some embodiments, the imaging data can include two-dimensional scans of tissue, such as, for example, optical coherence tomography (OCT) B-scans. The processor can be configured to analyze the imaging data, and to identify imaging data associated with diagnostically relevant areas. The processor can be configured to generate a visual interface that displays the imaging data associated with the diagnostically relevant areas in a two-dimensional stitched view. In some embodiments, the view can be a stitched panoramic view. The processor, via the visual interface, can enable a user to navigate to locations in the original imaging data that correspond to imaging data associated with diagnostically relevant areas that the user determines to be positive for a characteristic (e.g., cancer).
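By way of illustration only, the flow described above can be sketched in Python as follows; the function names (`analyze`, `stitch`, `navigate`) and the brightness-threshold stand-in for the analysis step are hypothetical and non-limiting:

```python
# Hypothetical orchestration of the described flow; names are illustrative.
from dataclasses import dataclass
import numpy as np

@dataclass
class Patch:
    b_scan: int       # index of the source B-scan in the volume
    x: int            # lateral offset of the patch within that B-scan
    pixels: np.ndarray

def analyze(volume: np.ndarray) -> list[Patch]:
    """CAD step (stand-in): flag bright fixed-size tiles as 'suspicious'."""
    return [Patch(b, x, volume[b, :64, x:x + 64])
            for b in range(volume.shape[0])
            for x in range(0, volume.shape[2] - 63, 64)
            if volume[b, :64, x:x + 64].mean() > 127]

def stitch(patches: list[Patch]) -> np.ndarray:
    """Combine flagged patches into a single 2D stitched view."""
    return np.hstack([p.pixels for p in patches])

def navigate(patches: list[Patch], i: int) -> int:
    """Map a patch selected in the stitched view back to its B-scan."""
    return patches[i].b_scan

volume = np.random.randint(0, 256, (500, 256, 1024), dtype=np.uint8)
patches = analyze(volume)
print(stitch(patches).shape, navigate(patches, 0))
```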
Systems, methods, and apparatuses described herein relate to visualization of imaging data of tissue. In embodiments, systems, methods, and apparatuses described herein provide a visual user interface for viewing imaging data of tissue.
Wide-field imaging techniques enable scanning of surgically excised tissue. For example, imaging systems as described in International Patent Application No. PCT/CA2018/050874, published as International Patent Application Publication No. WO 2019/014767 (“the '767 Publication”), filed Jul. 18, 2018, titled “Sample container for stabilizing and aligning excised biological tissue samples for ex vivo analysis,” incorporated herein by reference, enable scanning of large areas of surgically excised tissue at high resolution. Such imaging techniques and systems provide opportunities for intraoperative assessment of surgical margins in a variety of clinical applications, e.g., in breast-conserving surgical treatment for breast cancer in which patients may undergo multiple surgeries due to imprecise intraoperative tumor margin assessment.
Wide-field imaging techniques, however, typically generate large datasets of images, which makes their interactive visual review in a clinical setting challenging and time-consuming. Systems, methods, and apparatuses described herein provide a visual tool for facilitating efficient review and analysis of large datasets. Such review and analysis of imaging data can be used in several applications, including, for example, diagnosis, research, technology development, etc.
Biological tissue is frequently imaged in one, two, and/or three dimensions to evaluate properties of the tissue and thereby characterize the tissue. In some instances, the characterization of the tissue is used to diagnose health conditions of the origin of the tissue. OCT, one example imaging technique, is a non-invasive imaging technique that renders cross-sectional views of a three-dimensional tissue portion. Wide-field OCT imaging enables efficient scanning of large areas of surgically excised tissue (e.g., breast tissue) with high resolution to make a medical diagnosis, for example, a diagnosis of ductal carcinoma in situ (DCIS).
When using such imaging techniques for diagnostic purposes, e.g., in detecting areas suspicious for DCIS, it can be important to have visualization systems that enable high-speed and accurate review of imaging data. Since imaging techniques such as wide-field OCT provide new opportunities for intraoperative assessment of tissue in a variety of clinical applications, it can be important that such assessment is accurate and does not lead to unnecessary treatment. For example, to treat primary breast cancer, a patient may opt for a breast-conserving surgical treatment to remove malignant tissue. However, if the tumor margin is imprecisely and/or inaccurately diagnosed, the patient may be required to undergo repeat surgeries to treat the breast cancer.
Imaging techniques, such as wide-field OCT, typically generate large datasets. For example, a dataset acquired by OCT of a volume of tissue can include several B-scans, or cross-sectional slices, of the tissue. In some instances, the number of B-scans can lie in the range of 400-600, which makes their visual review challenging. Existing methods of reviewing such OCT data include visual assessment of each B-scan of an acquired tissue volume with the purpose of detecting certain visual cues that have known associations with malignant tissue via pathological evaluation studies. Computer-aided detection (CAD) algorithms can assist in this review.
For example, a CAD algorithm can be programmed to detect suspicious areas in imaging data, e.g., in the B-scans. One method of presenting the results of a CAD algorithm can be to highlight the suspicious areas in individual B-scans of the image data when a user scrolls through the B-scans during visual review. For instance, the suspicious areas can be marked by color, boxes, arrows, transparent overlays, etc. Such a method, while drawing attention to suspicious areas, can still be rather inefficient, especially when a CAD algorithm produces false positive detections.
Systems, methods, and apparatuses disclosed herein provide a visual user interface that is configured to reduce the time required for review of imaging data and/or scanned OCT volumes. Instead of highlighting individual areas detected by a CAD algorithm as being suspicious in individual B-scans, such systems, methods, and apparatuses can generate a consolidated or combined view of portions of the image data (e.g., patches of fixed size) that have been identified as suspicious by the CAD algorithm. In some embodiments, this consolidated view can be a stitched view, e.g., a stitched panoramic view. The consolidated view can reduce the amount of image data that needs to be reviewed to a few pages of combined data. Further details of such a view are described below.
The compute device 101 can be a hardware-based computing device and/or a multimedia device, such as, for example, a server, a desktop compute device, a smartphone, a tablet, a wearable device, a laptop and/or the like. The compute device 101 includes a processor 111, a memory 112 (e.g., including data storage), and an input/output interface 113.
The processor 111 can be, for example, a hardware-based integrated circuit (IC) or any other suitable processing device configured to run and/or execute a set of instructions or code associated with presenting image data. For example, the processor 111 can be a general-purpose processor, a central processing unit (CPU), an accelerated processing unit (APU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic array (PLA), a complex programmable logic device (CPLD), a programmable logic controller (PLC) and/or the like. The processor 111 can be operatively coupled to the memory 112 through a system bus (for example, an address bus, data bus, and/or control bus). The processor 111 can include and/or operate as a data analyzer 115 and a data visualizer 116, as described in further detail herein.
The memory 112 of the compute device 101 can be, for example, a random access memory (RAM), a memory buffer, a hard drive, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), and/or the like. The memory 112 can store, for example, one or more software modules and/or code that can include instructions to cause the processor 111 to perform one or more processes, functions, and/or the like (e.g., relating to a CAD algorithm and/or presenting image data). In some embodiments, the memory 112 can include extendable storage units that can be added and used incrementally. In some implementations, the memory 112 can be a portable memory (for example, a flash drive, a portable hard disk, and/or the like) that can be operatively coupled to the processor 111. In other instances, the memory 112 can be remotely situated and coupled to the processor 111 of the compute device 101. For example, a remote database server can serve as the memory 112 and be operatively coupled to the processor 111 of the compute device 101.
In some embodiments, the memory 112 can be configured to store imaging dataset(s) 117 including B-scans 117A-B. In some embodiments, the dataset(s) 117 can represent volumes of biological tissue portions imaged using a wide-field OCT imaging system, e.g., as described in the '767 Publication. In some embodiments, the memory 112 can be configured to store consolidated views or visual representations 119 (e.g., stitched views) of patches of the imaging data associated with anomalous or suspicious areas of tissue.
The input/output interface 113 can be operatively coupled to the processor 111 and memory 112. The input/output interface 113 can include, for example, a network interface card (NIC), a Wi-Fi™ module, a Bluetooth® module and/or any other suitable wired and/or wireless communication device. Furthermore, the input/output interface 113 can include a switch, a router, a hub and/or any other network device. The input/output interface 113 can be configured to connect the compute device 101 to a communication network such as, for example, the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a worldwide interoperability for microwave access network (WiMAX®), an optical fiber (or fiber optic)-based network, a Bluetooth® network, a virtual network, and/or any combination thereof.
In some instances, the input/output interface 113 can facilitate receiving and/or transmitting data (e.g., raw imaging data sets, analyzed imaging data sets, image patches or stitched views of image patches associated with suspicious areas, etc.) through a communication network, e.g., to an external compute device (e.g., a mobile device such as a smart phone, a local computer, and/or a remote server). In some instances, received data can be processed by the processor 111 and/or stored in the memory 112 as described in further detail herein. In some instances, the input/output interface 113 can be configured to send data analyzed by processor 111 to an external compute device such that the external compute device can further provide visualization of the image data and/or analyze such data. In some embodiments, the input/output interface 113 can be configured to periodically connect, e.g., 10 times per day, to an external device to log data stored in the onboard memory. In some embodiments, the input/output interface 113 can be activated on demand by a user to send and/or receive data from an external compute device.
In some embodiments, the input/output interface 113 can include a user interface that can be configured to receive inputs and/or send outputs to a user operating the compute device 101. The user interface can include, for example, a display device (e.g., a display, a touch screen, etc.), an audio device (e.g., a microphone, a speaker), and optionally one or more additional input/output device(s) configured for receiving an input and/or generating an output to a user.
The processor 111, operating as data analyzer 115, can be configured to receive imaging data (e.g., image dataset(s) 117) associated with tissue (e.g., a tissue sample such as, for example, a core of tissue), and process and/or analyze that imaging data to detect anomalous or suspicious areas in the imaged tissue. For example, the data analyzer 115 can be configured to receive raw imaging data and to parse that imaging data into B-scans or patches (e.g., portions of a B-scan). In some embodiments, the data analyzer 115 can be configured to process (e.g., filter, transform, etc.) the imaging data. In some embodiments, the imaging data provided to data analyzer 115 can come pre-processed, such that the data analyzer 115 does not need to further process the imaging data before analysis.
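A minimal sketch of such parsing, assuming the imaging data is available as a 3D array of B-scans and using hypothetical `patch_size` and `stride` parameters with a simple per-scan intensity normalization as the pre-processing step:

```python
# Illustrative sketch of the parsing step; parameter names are assumptions.
import numpy as np
from typing import Iterator

def iter_patches(volume: np.ndarray, patch_size: int = 64,
                 stride: int = 32) -> Iterator[tuple[int, int, int, np.ndarray]]:
    """Yield (b_scan_index, row, col, patch) over every B-scan in the volume,
    after a simple per-scan normalization to [0, 1]."""
    for b, b_scan in enumerate(volume.astype(np.float32)):
        rng = b_scan.max() - b_scan.min()
        b_scan = (b_scan - b_scan.min()) / (rng + 1e-8)
        rows, cols = b_scan.shape
        for r in range(0, rows - patch_size + 1, stride):
            for c in range(0, cols - patch_size + 1, stride):
                yield b, r, c, b_scan[r:r + patch_size, c:c + patch_size]

volume = np.random.randint(0, 256, (4, 256, 512), dtype=np.uint8)
print(sum(1 for _ in iter_patches(volume)))  # 420 patches for this toy volume
```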
The data analyzer 115 can be configured to analyze portions of the imaging data (e.g., pixels, lines, slices, voxels, etc.) to detect anomalous or suspicious areas in the imaging data. In some implementations, the data analyzer 115 can use CAD algorithms to identify areas (e.g., patches of a fixed size) of scanned image data that capture anomalous or suspicious features. For example, the CAD algorithms can be algorithms that have been trained or calibrated using image datasets of benign and malignant tissue such that the algorithms are capable of identifying suspicious areas in the tissue that include features similar to those of previously identified malignant tissue. In some embodiments, the CAD algorithms can be configured to use one or more analytical tools, such as, for example, a convolutional neural network model, a statistical model, machine learning techniques, or any other suitable tools, to perform the detection of suspicious or anomalous areas. For example, in some implementations, a computer algorithm, based on a convolutional neural network, can be trained to differentiate between malignant and benign patches. The patches that are identified by the algorithm as being suspicious can be passed to the data visualizer 116, e.g., for generating a stitched view as further described below.
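As one hedged illustration of such a convolutional model, the following sketch uses PyTorch with an assumed 64x64 patch size and an arbitrary small architecture; it is a stand-in, not the trained CAD algorithm itself:

```python
# Minimal binary benign/malignant patch classifier sketch (PyTorch).
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Classify fixed-size (64x64) grayscale patches; sizes are assumed."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.head = nn.Linear(64 * 8 * 8, 1)  # logit: > 0 means "suspicious"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = PatchClassifier().eval()
patches = torch.rand(37, 1, 64, 64)            # batch of candidate patches
with torch.no_grad():
    scores = torch.sigmoid(model(patches)).squeeze(1)
suspicious = patches[scores > 0.5]             # passed on to the data visualizer
print(scores.shape, suspicious.shape)
```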
In some embodiments, the data analyzer 115 can identify in the imaging data where areas of anomalous or suspicious data have been detected. For example, the data analyzer 115 can use a suitable annotation tool (e.g., colored marking, arrows, outline, etc.) to flag or mark portions of the imaging data detected to include suspicious features. In some embodiments, the data analyzer 115 can generate a dataset including patches of imaged areas having a fixed size that include suspicious areas, and pass this dataset onto the data visualizer 116. In some embodiments, the data analyzer 115 can mark in each patch the suspicious features (e.g., a portion of a lesion) that led to that patch being identified as suspicious.
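One simple way such a marking could be rendered is sketched below, painting a rectangular outline directly into a patch's pixel array; the region coordinates are hypothetical:

```python
# Hedged illustration of the marking step: outline a detected region.
import numpy as np

def outline(image: np.ndarray, top: int, left: int,
            height: int, width: int, value: int = 255) -> np.ndarray:
    """Return a copy of a grayscale image with a bright rectangular border
    around the region (top, left, height, width)."""
    marked = image.copy()
    marked[top, left:left + width] = value                 # top edge
    marked[top + height - 1, left:left + width] = value    # bottom edge
    marked[top:top + height, left] = value                 # left edge
    marked[top:top + height, left + width - 1] = value     # right edge
    return marked

patch = np.zeros((64, 64), dtype=np.uint8)
print(outline(patch, 10, 10, 20, 30).sum())  # nonzero only on the border
```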
The processor 111, operating as data visualizer 116, can be configured to receive the analyzed data from the data analyzer 115 and generate one or more consolidated view(s) (e.g., stitched view(s) 119) of those portions of the image data (e.g., patches) that include detected suspicious areas. In some implementations, the data visualizer 116 can generate a stitched view (e.g., a stitched panoramic view) of the patches including the suspicious areas, as further described below.
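A sketch of one possible stitching strategy, assuming equal-size patches laid out in fixed-length rows with blank padding tiles; the layout parameters are illustrative:

```python
# One way to build a stitched (panoramic) view from flagged patches.
import numpy as np

def stitch_panorama(patches: list[np.ndarray], per_row: int = 8) -> np.ndarray:
    """Stitch equal-size 2D patches into a single page-like 2D image,
    padding the last row with blank tiles."""
    h, w = patches[0].shape
    blank = np.zeros((h, w), dtype=patches[0].dtype)
    padded = patches + [blank] * (-len(patches) % per_row)
    rows = [np.hstack(padded[i:i + per_row])
            for i in range(0, len(padded), per_row)]
    return np.vstack(rows)

patches = [np.full((64, 64), i, dtype=np.uint8) for i in range(37)]
print(stitch_panorama(patches).shape)  # (320, 512): 5 rows of 8 tiles
```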
In some embodiments, the consolidated view can be visually presented on a user interface (e.g., a user interface of input/output interface 113) that allows a user to further interact with the consolidated view and/or B-scan portions corresponding to one or more patches in the consolidated view. In some implementations, the user interface can include a set of controls to view and manipulate the imaging data, analyzed data, and/or the one or more consolidated view(s) (e.g., stitched views of patches including suspicious areas detected by the data analyzer 115). The data visualizer 116, based on inputs received from the user via the set of controls, can be configured to perform any suitable image manipulation, image segmentation, and/or image processing function, to aid in visual review of the imaging data. For example, a user can provide inputs via the set of controls to perform manipulations such as zooming, panning, overlaying, etc. In some embodiments, the set of controls can provide options to the user to perform advanced processing functions, such as, for example, contour detection, foreground/background detection, distance or length measurements, etc. In some implementations, the data visualizer 116 can be configured to link portions of the consolidated view (e.g., stitched view) to portions of the entire imaging dataset (e.g., B-scans) to enable the user to jump between viewing the consolidated view and the B-scan including a patch that was included in the consolidated view. For example, the user interface can present a first screen with a stitched panoramic view, and upon user selection of a particular patch in the stitched panoramic view, present a second screen with the B-scan that included the selected patch. This second screen with the B-scan can enable a user to confirm a diagnosis of the tissue based on viewing the patch.
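The linking can be illustrated by keeping, for each tile in the stitched view, a record of its origin in the imaging dataset; the `tile_origins` entries below are fabricated for illustration:

```python
# Sketch of the first-screen/second-screen linking described above.
import numpy as np

volume = np.random.randint(0, 256, (500, 256, 1024), dtype=np.uint8)
# (b_scan_index, row, col) for each tile, in stitched-view order (hypothetical)
tile_origins = [(12, 0, 64), (12, 0, 512), (87, 64, 128)]

def open_source_scan(tile_index: int) -> np.ndarray:
    """Second-screen behavior: return the complete B-scan that contained
    the selected tile, for confirmation of the diagnosis."""
    b, _, _ = tile_origins[tile_index]
    return volume[b]

print(open_source_scan(1).shape)  # (256, 1024): full B-scan for tile 1
```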
At 371, a compute device receives image data, e.g., including cross-sectional slices obtained from an imaging system such as a wide field OCT system. The image data can be obtained from imaging a tissue portion (e.g., biological tissue sample). In some embodiments, the image data can be obtained using an imaging system, as described in the '767 Publication. For example, such a system can include a sample container configured to support a biological tissue sample during imaging and an imaging device for generating optical images of the biological tissue sample. The sample container can be configured to interface with the imaging device such that the imaging device can image the biological tissue sample through an imaging window of the sample container, where the imaging window is partially transparent to light emitted by the imaging device. The imaging device can include a light source that emits light towards the sample and a detector configured to receive and detect light reflected from the sample. In some embodiments, the imaging device can be an OCT system, and the image data can include B-scans (i.e., cross-sectional views) of the tissue sample.
At 372, the compute device (e.g., via the processor 111 that operates as data analyzer 115) analyzes the image data to identify portions of the image data (e.g., patches of the slices) that include one or more suspicious areas. In some implementations, the compute device can implement an image processing algorithm (e.g., machine learning tool or statistical tool) to perform computer-assisted detection of the suspicious areas. For example, the compute device can be configured with a set of image features that have been validated as being indicative of the presence of tumorous tissue (e.g., through histopathological studies). The compute device can use these image features as reference to identify a set of patches that include areas suspected of being tumorous. In some implementations, the compute device can use a CAD algorithm that has been trained to classify patches of image data into different classes or categories, e.g., benign and malignant. In some implementations, the compute device can receive input from a user that further aids in its analysis and/or classification (e.g., including identification of classes, identification of areas of interest, etc.).
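As a hedged sketch of this reference-feature approach, the following compares crude per-patch statistics against a placeholder reference vector; the features, reference values, and threshold are stand-ins, not validated tissue markers:

```python
# Illustrative reference-feature comparison; values are placeholders.
import numpy as np

def features(patch: np.ndarray) -> np.ndarray:
    g = patch.astype(np.float32)
    grad = np.abs(np.diff(g, axis=0)).mean()     # crude texture measure
    return np.array([g.mean(), g.std(), grad])

reference = np.array([140.0, 40.0, 12.0])  # stand-in for validated features
threshold = 25.0                           # assumed tolerance

patches = [np.random.randint(0, 256, (64, 64), dtype=np.uint8)
           for _ in range(10)]
# Random patches will typically not match the reference; real patches would
# be compared against features validated through histopathological studies.
suspicious = [p for p in patches
              if np.linalg.norm(features(p) - reference) < threshold]
print(len(suspicious))
```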
In some embodiments, the compute device can be configured to employ a convolutional neural network (CNN) to identify abnormal tissue in the image data, such as that described in International Patent Application No. PCT/CA2019/051532, published as International Patent Application Publication No. WO 2020/087164 (“the '164 Publication”), filed Oct. 29, 2019, and titled “Methods and Systems for Medical Image Processing Using a Convolutional Neural Network (CNN),” incorporated herein by reference. For example, the compute device can use a CNN that has a symmetric neural network architecture including (1) a first half of layers for extracting image features and reducing the feature map size, and (2) a second half of layers for retrieving the original image resolution and identifying the likely regions of interest (ROIs) in the image data that are associated with potential anomalies.
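A minimal symmetric encoder-decoder in this spirit can be sketched as follows; this is an illustrative toy architecture, not the network of the '164 Publication:

```python
# Toy symmetric encoder-decoder producing a per-pixel ROI map (PyTorch).
import torch
import torch.nn as nn

class RoiNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(      # first half: extract + downsample
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(      # second half: upsample + localize
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.Conv2d(8, 1, 1),            # per-pixel ROI logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = RoiNet().eval()
b_scan = torch.rand(1, 1, 256, 1024)       # one B-scan, assumed size
with torch.no_grad():
    roi_map = torch.sigmoid(model(b_scan)) # same spatial size as the input
print(roi_map.shape)  # torch.Size([1, 1, 256, 1024])
```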
At 373, the compute device (e.g., via the processor 111 that operates as data visualizer 116) generates a consolidated view (e.g., a stitched view of 2D patches of images or a stitched panoramic view, such as stitched panoramic view 119) of the portions of the image data (e.g., patches of slices) detected to include suspicious area(s) or feature(s). In some implementations, the compute device can generate a user interface that facilitates review of the suspicious patches and scans. At 374, the compute device presents, e.g., via a visual display (e.g., a display of an external device operatively coupled to the compute device 101, or a display forming part of the input/output interface 113), the consolidated view (e.g., stitched panoramic view). In some embodiments, as further described below, the compute device can then receive an input from a user interacting with the consolidated view.
At 382, the compute device can determine whether the input is a navigation input. For example, the input can request that certain views of the tissue sample be displayed and/or a larger volume of patches from the image data be displayed. In some embodiments, the input can indicate a selection of a set of one or more patches presented in the consolidated view of diagnostically relevant portions of image data (e.g., patches having features associated with abnormal or suspicious tissue, or patches identified as having a ROI). At 383, in response to receiving a selection of a set of one or more patches from the consolidated view, the compute device can display a larger volume of the image data, e.g., a larger volume of patches from the original image data that includes patches surrounding the selected patches (i.e., patches showing portions of tissue that are spatially close to the selected patches) or a view of a larger portion of the tissue. A user, for example, can navigate between the consolidated view presented at 374 or 390 and the larger volume of image data presented at 383, e.g., by selecting various patches and/or providing inputs into the user interface (e.g., selecting or clicking on an icon or patch, selecting or clicking a region of a screen, swiping, etc.). Further details of such navigation are described below.
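One way such a surrounding view could be assembled is sketched below, assuming the original volume is available as a 3D array; the neighborhood margins are assumed values for illustration:

```python
# Sketch of the navigation step: pull a larger surrounding region
# (neighboring B-scans, wider crop) around a selected patch.
import numpy as np

def surrounding_view(volume: np.ndarray, b: int, r: int, c: int,
                     patch: int = 64, scans: int = 2,
                     margin: int = 64) -> np.ndarray:
    """Return a sub-volume centered on the selected patch: +/- `scans`
    neighboring B-scans, with `margin` extra pixels around the patch."""
    b0, b1 = max(0, b - scans), min(volume.shape[0], b + scans + 1)
    r0, r1 = max(0, r - margin), min(volume.shape[1], r + patch + margin)
    c0, c1 = max(0, c - margin), min(volume.shape[2], c + patch + margin)
    return volume[b0:b1, r0:r1, c0:c1]

volume = np.random.randint(0, 256, (500, 256, 1024), dtype=np.uint8)
print(surrounding_view(volume, b=87, r=64, c=128).shape)  # (5, 192, 192)
```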
The compute device can optionally display other visual representations of the tissue sample, including different 2D and/or 3D views of the tissue sample, at 384. For example, as further described below, such visual representations can include a perspective view of the tissue sample that indicates the spatial locations associated with sets of patches.
As depicted, the stitched view 419 includes 37 patches of tissue that were stitched (e.g., combined) together. While 37 patches with a specific fixed size are depicted, it can be appreciated that any number of patches and/or sizes of patches can be used to generate a stitched view. Below each patch of image data, the spatial location of that patch can be provided. Alternatively, this information can be displayed at other locations relative to each patch (e.g., above, adjacent to, etc.). In some embodiments, other information, such as malignancy scores (e.g., DCIS score), confidence values, spatial location (e.g., location in B-scan), etc., can also be displayed near each patch to further assist a user in reviewing the image data for potential malignancy.
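A sketch of such a captioned layout using matplotlib; the patch contents and location strings are fabricated placeholders:

```python
# Illustrative captioned grid of flagged patches.
import numpy as np
import matplotlib.pyplot as plt

patches = [np.random.rand(64, 64) for _ in range(37)]
captions = [f"B-scan {np.random.randint(400)}, x={np.random.randint(900)}"
            for _ in patches]

cols = 8
rows = -(-len(patches) // cols)                 # ceiling division
fig, axes = plt.subplots(rows, cols, figsize=(16, 2.2 * rows))
for ax in axes.flat:
    ax.axis("off")                              # hide unused tiles too
for ax, patch, caption in zip(axes.flat, patches, captions):
    ax.imshow(patch, cmap="gray")
    # Shown above each patch here; the disclosure also contemplates placing
    # the location (and, e.g., a malignancy score) below or adjacent to it.
    ax.set_title(caption, fontsize=6)
plt.tight_layout()
plt.show()
```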
With the spatial location of any particular patch in the image data being known (e.g., which B-scan or portion of a B-scan), a user can easily navigate to these locations in the image data (e.g., to a complete B-scan), e.g., using the user interface and/or manually, to confirm the diagnosis. Since a diagnostic decision on the margin can be made based on a single detection that is deemed positive, the user interface can enable faster identification of a positive or negative tissue portion and substantially decrease the time required for image review. For example, when presenting the image data, a window in the display can be populated by patches identified as suspicious (e.g., by a CAD algorithm) after acquiring each B-scan, and a user reviewing the image data can stop the scanning and analysis process once a definitive diagnostic decision has been made.
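The incremental workflow might be sketched as follows, where `acquire_next_b_scan` and `user_decision` are hypothetical stand-ins for the scanner interface and the reviewer's input:

```python
# Sketch of incremental review with early stopping; logic is a stand-in.
import numpy as np

def acquire_next_b_scan() -> np.ndarray:           # stand-in for the scanner
    return np.random.randint(0, 256, (256, 1024), dtype=np.uint8)

def user_decision(n_flagged: int) -> str:          # stand-in for reviewer input
    return "positive" if n_flagged >= 5 else "undecided"

flagged: list[np.ndarray] = []
for scan_index in range(500):                      # up to a full volume
    b_scan = acquire_next_b_scan()
    tile = b_scan[:64, :64]
    if tile.mean() > 127:                          # stand-in CAD check
        flagged.append(tile)                       # populate review window
    if user_decision(len(flagged)) != "undecided":
        print(f"stopped after {scan_index + 1} B-scans")
        break
```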
In some embodiments, systems, devices, and methods disclosed herein can include provisions for navigating between a stitched view of patches of diagnostically relevant portions of image data (e.g., patches including pathological tissue markers and/or flagged as being suspicious) and a larger volume of the image data (e.g., image dataset 117 including the original scanned volume of image data). For example, such systems, devices, and methods can provide a user interface that displays patches flagged as being diagnostically relevant with features for navigating between those patches and larger volumes of image data, e.g., to facilitate further visual review.
While different colors are used to associate the various sets of patches with one another (e.g., first set of patches 502, 504, 506, and second set of patches 512, 514, 516, 518), it can be appreciated that other characteristics and/or types of markings can be used to associate sets of patches with one another, and with their spatial location(s) in a tissue sample. For example, a letter (e.g., “A” or “B”) displayed proximate to one or more patches can be used to associate those patches with one another, while that same letter can be displayed in a perspective view of the tissue sample at the general spatial location corresponding to those patches. Other examples of suitable markings include symbols, characters, line patterns, highlighting, etc.
In some embodiments, the patches in the stitched view 601 can be interactive. For example, a user can select a particular patch 612 (or any other patch) and, in response to receiving such selection, the user interface 600 can change to show a view of the two-dimensional scan that includes the selected patch 612. More specifically, a processor (e.g., processor 111) controlling user interface 600 can be configured to, in response to receiving the selection by a user of a patch 612, determine the two-dimensional scan (e.g., B-scan) from the larger three-dimensional stack or volume of imaging data that includes the patch 612 and cause the user interface 600 to display at least a portion of that two-dimensional scan.
For example, by selecting one patch 612, e.g., indicated by a cursor 660, a user can navigate to the two-dimensional scan including that patch 612 for further visual inspection (e.g., inspection of adjacent areas, etc.) to, for example, further assess if the flagged patch depicts an area that is potentially a true positive, e.g., for one or more suspicious markers, or is a negative and can be omitted from further analysis.
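A hedged sketch of this interaction using matplotlib button-press events; the tile origins, layout, and data are illustrative, and a production interface would differ:

```python
# Click a tile in the stitched view to switch to its full B-scan.
import numpy as np
import matplotlib.pyplot as plt

volume = np.random.randint(0, 256, (100, 256, 1024), dtype=np.uint8)
tiles = [(5, 0, 64), (42, 64, 512), (87, 128, 256)]  # (b_scan, row, col)

fig, axes = plt.subplots(1, len(tiles))
for ax, (b, r, c) in zip(axes, tiles):
    ax.imshow(volume[b, r:r + 64, c:c + 64], cmap="gray")
    ax.set_title(f"B-scan {b}")
    ax.axis("off")

def on_click(event):
    """Replace the stitched view with the full B-scan for the clicked tile."""
    if event.inaxes is None or event.inaxes not in list(axes):
        return
    b, _, _ = tiles[list(axes).index(event.inaxes)]
    fig.clf()
    ax = fig.add_subplot(111)
    ax.imshow(volume[b], cmap="gray")
    ax.set_title(f"Full B-scan {b} (second screen)")
    fig.canvas.draw_idle()

fig.canvas.mpl_connect("button_press_event", on_click)
plt.show()
```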
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Where methods and/or schematics described above indicate certain events and/or flow patterns occurring in certain order, the ordering of certain events and/or flow patterns may be modified. While the embodiments have been particularly shown and described, it will be understood that various changes in form and details may be made.
Although various embodiments have been described as having particular features and/or combinations of components, other embodiments are possible having a combination of any features and/or components from any of embodiments as discussed above.
Some embodiments described herein relate to a computer storage product with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations. The computer-readable medium (or processor-readable medium) is non-transitory in the sense that it does not include transitory propagating signals per se (e.g., a propagating electromagnetic wave carrying information on a transmission medium such as space or a cable). The media and computer code (also can be referred to as code) may be those designed and constructed for the specific purpose or purposes. Examples of non-transitory computer-readable media include, but are not limited to, magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Read-Only Memory (ROM) and Random-Access Memory (RAM) devices. Other embodiments described herein relate to a computer program product, which can include, for example, the instructions and/or computer code discussed herein.
In this disclosure, references to items in the singular should be understood to include items in the plural, and vice versa, unless explicitly stated otherwise or clear from the context. Grammatical conjunctions are intended to express any and all disjunctive and conjunctive combinations of conjoined clauses, sentences, words, and the like, unless otherwise stated or clear from the context. Thus, the term “or” should generally be understood to mean “and/or” and so forth. The use of any and all examples, or exemplary language (“e.g.,” “such as,” “including,” or the like) provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the embodiments or the claims.
Some embodiments and/or methods described herein can be performed by software (executed on hardware), hardware, or a combination thereof. Hardware modules may include, for example, a general-purpose processor, a field programmable gate array (FPGA), and/or an application specific integrated circuit (ASIC). Software modules (executed on hardware) can be expressed in a variety of software languages (e.g., computer code), including C, C++, Java™, Ruby, Visual Basic™, and/or other object-oriented, procedural, or other programming language and development tools. Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using imperative programming languages (e.g., C, Fortran, etc.), functional programming languages (Haskell, Erlang, etc.), logical programming languages (e.g., Prolog), object-oriented programming languages (e.g., Java, C++, etc.) or other suitable programming languages and/or development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
This application claims priority to and the benefit of U.S. Provisional Patent Application No. 62/881,579, filed Aug. 1, 2019, titled “SYSTEMS, METHODS AND APPARATUSES FOR VISUALIZATION OF IMAGING DATA,” and International Patent Application No. PCT/CA2020/051057, published as International Patent Application Publication No. WO 2021/016721, filed Jul. 31, 2020, titled “SYSTEMS, METHODS AND APPARATUSES FOR VISUALIZATION OF IMAGING DATA,” the disclosures of which are incorporated by reference herein.