Surgical optics can be used for remote and/or magnified viewing of a field of view of a patient or subject. The surgical optics can take various forms such as microscopes, endoscopes, laparoscopes, loupes, goggles, etc. A surgeon or other operator may benefit from information beyond the view provided directly by the surgical optics. For many surgical procedures, however, it may be inconvenient or risky for the surgeon to look away from the view of the optics, or from the field of view. Also, histopathologists often prefer direct sample observation but would like machine assistance when examining multiple sections at the large fields of view afforded by modern optics. The same can be said of ophthalmologists examining the eye fundus with specialized cameras during routine checkups and while diagnosing adverse events.
The accompanying drawings are not intended to be drawn to scale. Like reference numbers and designations in the various drawings indicate like elements. For purposes of clarity, not every component may be labeled in every drawing. Subsystems are normally surrounded by dashed lines, and some of them may be optional. Travel of visual information or optical signals is indicated by dash-dotted arrows. Optional procedures and data flows are indicated with dashed arrows or elements in the flowchart illustrations of methods. In the drawings:
The following detailed description of the present subject matter refers to the accompanying drawings that show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter. The subject technology can assume various embodiments that are suitable to its specific applications.
The present disclosure relates generally to systems and methods for presentation of static or dynamic information within an image projection subsystem, in various types of telescopic, macroscopic, microscopic, laparoscopic, and endoscopic imaging applications.
This disclosure pertains to providing the user of an imaging system, for example a microscope or a telescope, the ability to view augmented information pertaining to the field of view. This disclosure describes systems and methods to optically overlay recently or concurrently acquired and processed or stored data on to a field of view of an imaging system. An embodiment of such a system may be described as a digital eyepiece, that is, an eyepiece with an inbuilt display module receiving electronic data that can provide visualization of the microscope's field of view augmented with a static or dynamic textual, numerical, graphical, or image rendition of the electronic data. An embodiment of such a system may be useful as a diagnostic, pre-operative and especially intraoperative tool during surgery, where the digital eyepiece receives calculated blood flow data from the surgical field of view in real-time and with minimal delay, and overlays a pseudo-color rendition of blood flow index onto the original visual field, thereby permitting the surgeon or histopathologist to instantaneously visualize this critical information without looking away from the eyepiece of the operation microscope. The system and method may be useful to visualize data obtained using other imaging modalities in other professional and recreational activities, too, and due to its real-time nature, can be classified as a type of Augmented Reality (AR).
In our personal and professional lives we sometimes encounter situations where our direct visual perception, however aided or enhanced by means of an optical system, be it a “magnifier” or an “intensifier”, needs additional augmentation with information not readily available to, or detected by, our natural senses or neural circuitries. Such relevant information, however, may be retrieved from external storage or derived in real time by means of artificial signal or image registration and processing systems, for example, machine vision devices. When presented in a timely fashion and in the context of the current situation, such information delivers a substantially more enhanced or informed version of reality to the user's consciousness. Recently termed Augmented Reality (AR), such an aid to our senses and brain processing capacity promises tremendous improvement in a person's situational awareness, decision-making process, and reliability of predicting the final outcome in a dynamic situation.
One example of such an area of activity, often in need of additional critical information delivered to a person's field of perception, is medical surgery, where procedure success or failure usually depends on the real-time response of a practitioner to the actively developing state of the patient and the situation. Not normally perceptible, but routinely critical to the patient, are characteristics of blood flow (pressure, speed, direction, and their dynamics) within his or her cardiovascular system. The cardiovascular system is the fundamental mechanism by which human and animal organisms provide nutrient supply to, and remove waste products from, tissues, organs, and organ systems to maintain their homeostasis, viability, integrity, and functionality. Anatomical characteristics of blood vessels (e.g., size, structure, orientation, location, quantity, distribution, and type) are specific to each biological system, tissue, organ, or organ system. Many pathologies manifest as changes in these anatomical characteristics and are also accompanied by changes in vascular physiology (e.g., velocity or flow rates of blood within an individual vessel, group of vessels, or network of vessels; distribution of blood flow within a group or network of connected or independent vessels, including regions of vascular perfusion and non-perfusion; and vessel compliance, contractility, and proliferation). For example, diabetic retinopathy (DR) is a vision-threatening complication of diabetes that manifests as progressive narrowing of arteriolar caliber until occlusion occurs. Vessel occlusion is typically followed by vessel proliferation (i.e., angiogenesis), which results in increased vision loss and progression toward blindness. Numerous other diseases and conditions involve pathologies or symptoms that manifest in blood vessel anatomy or physiology. As another example, various dermatological diseases and conditions, including melanoma, diabetic foot ulcers, skin lesions, wounds, and burns, involve injury to or pathophysiology of the vasculature. In neurosurgery and tissue or organ reconstruction, temporarily disrupting and then reestablishing normal blood flow to a particular area is a critical step in the success of procedures, often performed under an operation microscope.
These anatomical and physiological characteristics are important for the quality application of existing diagnostics and procedures, and for the development of novel ones, to advance the standard of care for patients (human and animal). By evaluating the anatomical and physiological characteristics of the vasculature (directly or indirectly, quantitatively or qualitatively), a scientist, clinician, or veterinarian can begin to understand the viability, integrity, and functionality of the biological system, tissue, organ, or whole organism being studied. Depending on the specific condition being studied, important markers may manifest as acute or long-term alterations in blood flow, temperature, pressure, or other anatomical and physiological characteristics of the vasculature. For example, anatomical and physiological information, in either absolute terms or as relative changes, may be used as a mechanism for evaluating the grade and character of brain aneurysms and may inform potential blood flow management and treatment options, among other things. Likewise, almost all types of tumors are accompanied by vascular changes to the cancerous tissue; tumor angiogenesis and increased blood flow are often observed in cancerous tissue due to the increased metabolic demand of the tumor cells. Similar vascular changes are associated with the healing of injuries, including wounds and burns, where self-regulated angiogenesis serves a critical role in the healing process. Hence, anatomical and physiological information may assist a clinician or veterinarian in the monitoring and assessment of healing after a severe burn, recovery of an incision site, or the effect of a therapeutic agent or other type of therapy (e.g., skin graft or negative pressure therapy) in the treatment of a wound or diabetic foot ulcer.
As already mentioned above, monitoring and assessment of anatomical and physiological information can be critically important for surgical procedures. The imaging of blood vessels, for example, can serve as a basis for establishing landmarks during surgery. During brain surgery, when a craniotomy is performed, the brain often moves within the intracranial cavity due to the release of intracranial pressure, making it difficult for surgeons to use preoperatively obtained images of the brain for anatomical landmarks. In such situations, anatomical and physiological information may be used by the surgeon as vascular markers for orientation and navigation purposes. Anatomical and physiological information also provides a surgeon with a preoperative, intraoperative, and postoperative mechanism for monitoring and assessment of the target tissue, organ, or an individual blood vessel within the surgical field.
The ability to quantify, visualize, and assess anatomical and physiological information in real-time or near-real-time can provide a surgeon or researcher with feedback to support diagnosis, treatment, and disease management decisions. An example of a case where real-time feedback regarding anatomical and physiological information is important is that of intraoperative monitoring during neurosurgery, or more specifically, cerebrovascular surgery. The availability of real-time blood flow assessment in the operating room (OR) allows the operating neurosurgeon to guide surgical procedures and receive immediate feedback on the effect of the specific intervention performed. In cerebrovascular neurosurgery, real-time blood flow assessment can be useful during aneurysm surgery to assess decreased perfusion in the feeder vessels as well as other proximal and distal vessels throughout the surgical procedure.
Likewise, rapid examination of vascular anatomy and physiology has significant utility in other clinical, veterinary, and research environments. For example, blood flow is often commensurate with the level of activity of a tissue and related organ or organ system. Hence, vascular imaging techniques that can provide rapid assessment of blood flow can be used for functional mapping of a tissue, organ, or organ system to, for example, evaluate a specific disease, activity, stimulus, or therapy in a clinical, veterinary, or research setting. To illustrate, when the somatosensory region of the brain is more active because of a stimulus to the hand, the blood flow to the somatosensory cortex increases and, at the micro-scale, the blood flow in the region of the most active neurons increases commensurately. As such, a scientist or clinician may employ one or more vascular imaging techniques to evaluate the physiological changes in the somatosensory cortex associated with the stimulus to the hand.
In addition, a number of imaging approaches exist to evaluate anatomical and physiological information about the tissue vasculature, its level of metabolism, or its stress condition. Magnetic resonance imaging (MRI), X-ray or computerized tomography (CT), ultrasonography, acousto-optical or opto-acoustic imaging, Doppler optical coherence tomography (D-OCT), laser speckle contrast imaging (LSCI), laser Doppler spectroscopy (LDS), polarization spectroscopy, multispectral reflectance spectroscopy, coherent Raman spectroscopy (CRS), spontaneous Raman spectroscopy, fluorescence angiography, fluorescence and phosphorescence lifetime imaging (FLIM and PLIM), and positron emission tomography (PET) are among a number of imaging techniques that offer quantitative and qualitative information about the vascular anatomy and tissue or organ physiology. Each technique offers unique features, not accessible to normal human faculties, that may be more relevant to the real-time evaluation of a particular biological system, tissue, organ, or organ system, or of a specific disease or medical condition.
LSCI has particular relevance in the rapid, intraoperative examination of vascular anatomy and physiology. LSCI is an optical imaging technique that uses interference patterns (called speckles), which are formed when a camera captures photographs of a rough surface illuminated with coherent light (e.g., a laser), to estimate and map the flow of various particulates in different types of enclosed spaces. If the rough surface includes moving particles, then the speckles corresponding to the moving particles cause a blurring effect during the exposure time over which the photograph is acquired under properly defined imaging conditions. The blurring can be mathematically quantified through the estimation of a quantity called laser speckle contrast (K), which is defined as the ratio of the standard deviation to the mean of pixel intensities in a given neighborhood of pixels. The neighborhood of pixels may be adjacent in the spatial (i.e., within the same photograph) or temporal (i.e., across sequentially acquired photographs) domains, or a combination thereof. In the context of vascular imaging, LSCI quantifies the blurring of speckles caused by moving blood cells and other particulates, such as lipid droplets, within the blood vessels of the illuminated region of interest (ROI) and can be used to analyze detailed anatomical information (which includes but is not limited to one or more of vessel diameter, vessel tortuosity, vessel density in the ROI or a sub-region of the ROI, depth of a vessel in the tissue, length of a vessel, and type of blood vessel, e.g., its classification as artery or vein) and physiological information (which includes but is not limited to one or more of blood flow and changes thereof in the ROI or a sub-region of the ROI, blood flow in an individual blood vessel or group of individual blood vessels, and fractional distribution of blood flow in a network of connected or disconnected blood vessels).
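For illustration only, the following is a minimal sketch of the speckle contrast computation described above, assuming each photograph is available as a NumPy array; the function names, window size, and library choices are assumptions of this sketch rather than a prescribed implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def spatial_speckle_contrast(frame: np.ndarray, window: int = 7) -> np.ndarray:
    """Local speckle contrast K = sigma / mu over a sliding square pixel neighborhood."""
    frame = frame.astype(np.float64)
    mean = uniform_filter(frame, size=window)          # local mean (mu)
    mean_sq = uniform_filter(frame ** 2, size=window)  # local mean of squares
    var = np.clip(mean_sq - mean ** 2, 0.0, None)      # local variance, clipped at zero
    return np.sqrt(var) / np.maximum(mean, 1e-12)      # K = sigma / mu


def temporal_speckle_contrast(stack: np.ndarray) -> np.ndarray:
    """Per-pixel speckle contrast across a stack of sequentially acquired frames (T, H, W)."""
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0)
    return sigma / np.maximum(mu, 1e-12)
```

Either the spatial or the temporal form of the contrast map may then be converted to a flow-related index and rendered in pseudo-color for display, as described later in this disclosure.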
While non-LSCI methods of intraoperative real-time blood flow assessment are currently used, no single method is considered adequate in all scenarios. For example, in the context of cerebrovascular surgery such as aneurysm surgery, imaging of small yet important vessels called perforators necessitates a high-resolution imaging technique for monitoring anatomical and physiological information, which is currently unavailable in the neurosurgical OR. The use of Indocyanine Green (ICG) Video angiography has been assessed for this purpose, but challenges still remain because of the potential for dye leakage from damaged blood vessels into surgical ROIs. Intraoperative (X-ray) angiography is currently considered the gold standard to assess vessel patency following a number of cerebrovascular procedures (e.g., aneurysm clipping and arteriovenous malformation, or AVM, obliteration). However, angiography does not provide real-time assessment during the actual performance of surgery. Furthermore, given the invasive nature of this technique, and despite advancements, the risk of complications is not eliminated, especially due to adverse side effects of contrast agents used. In AVM surgery, real-time blood flow assessment helps the surgeon better understand whether particular feeding vessels carry high flow or low flow, which could ultimately impact the manner in which those vessels are disconnected from the AVM (i.e., bipolar cautery versus clip ligation). Finally, in a disease such as Moyamoya, which may require direct vascular bypass, real-time flow assessment can be useful in identifying the preferred recipient vessels for the bypass as well as assessing the flow in that bypass and surrounding cortex once the anastomosis is completed.
The real-time assessment of blood flow may be helpful in other surgery fields that rely on vascular anastomoses as well, specifically plastic surgery, vascular surgery, and cardiothoracic surgery. Currently, technology such as the use of Doppler ultrasonography is used to confirm the patency of an anastomosis. However, real-time, quantitative imaging can add a tremendous benefit in assessing the adequacy of a bypass, revealing problems to the surgeon in real time to facilitate clinical decision making during surgery rather than postoperatively when either it is too late or the patient requires a repeat surgery.
LSCI has been used as a blood flow monitoring technique in the OR. LSCI has been considered for functional mapping in awake craniotomies to prevent damage to eloquent regions of the brain, to assess the surgical shunting of the superior temporal artery (STA) and the middle cerebral artery (MCA), and for intraoperative monitoring during neurosurgery. Until recently, these approaches had limitations of spatio-temporal resolution and availability of anatomical and physiological information on a real-time or near-real-time basis.
Of particular importance is the method for presentation and appearance of visual information obtained from LSCI and other modalities in the context of an OR environment. While projection of the images and video sequences obtained, whether in monochromatic or pseudo-color format, to built-in or external displays is helpful in principle, in actual practice taking eye focus away from the operating field can amount to a disruptive distraction or inconvenience for a surgeon or his or her attendant. Many surgical manipulations, such as arterial occlusion or vessel cauterization, require constant monitoring of the course of the procedure; as a workaround, their progress and eventual success or failure are routinely reported orally or otherwise by an attendant or by a specialized device, such as a heartbeat pacer or monitor. Reliance on such an approach is naturally prone to delays in response timing, potential obscuration by background noise, misinterpretation, and other interfering factors of both human and purely technical nature.
Consequently, there is a need for more timely, precise, and reliable delivery of physiological information, especially visual information, to a person actively engaged in conducting procedures or making observations under, for example, a surgical microscope. One possible approach to address this problem is an Augmented Reality (AR) approach to the presentation of visual and other contextual information. In this modality, relevant pictorial, video, and/or any other qualitative or quantitative data are merged with the visual, auditory, or other perceptual field of view of an observer. Such an overlay can be done with some, preferably minimal, delay, so that registered changes in conditions and parameters are perceived as occurring in real time and responding to currently observed changes in the sensory field, hence a “Reality”.
This disclosure presents a solution as it pertains to situations in an actual or remotely controlled operating room. In either scenario, it helps a surgeon or attendant to be more responsive and efficient by preventing disruption of their attention caused by turning the head and eyes away from a pair of microscope binoculars when observing the surgical field, or away from a remote display. All of the pertinent visual and textual data, including data that confirm the validity of the projected information, can be overlaid within the total visual field observed through the imaging system in near-real time or with minimal delay. Thus, a system and method to effectively accomplish such a solution in a variety of commonly employed imaging modalities are disclosed below.
The system and method described in this disclosure could be embodied in multiple ways, potentially depending on the application. For example, to assist in open cerebrovascular micro-surgeries, the augmented reality display module may be integrated into a custom-designed eyepiece that replaces a traditional eyepiece of the surgical microscope. Therefore, during procedures such as placing a clip around an aneurysm, the surgeon may benefit from real-time information about blood flow in the field of view presented within the eye-piece itself. In other applications such as open cardiac surgeries that use surgical loupes rather than surgical microscopes, the augmented reality module may be integrated into the surgical loupes such that complementary information is presented overlaid on the view through the surgical loupes. In such a case, the imagery overlaid by the augmented reality module on the field of view is routed through optical waveguides and combined by the image combiner to form the same overlaid imagery on the retina of the viewer. Embodiments of the disclosure may also be embodied such that the surgical microscope is an endoscope, and the augmented reality module overlays additional information on the endoscopic field of view.
An example embodiment of the disclosure includes an augmented reality eyepiece system for a surgical microscope. The system according to the example embodiment includes a processor configured to generate a signal based on image data pertaining to a field of view of the surgical microscope. The system according to the example embodiment includes an eyepiece configured to integrate into the surgical microscope. The eyepiece includes an image generation module configured to generate an image based on the signal received from the processor. The eyepiece includes an image combiner configured to combine the image generated by the image generation module with light received from the field of view. The eyepiece includes visualization optics configured to combine the light received from the field of view and the image generated by the image generation module, and present a combined image to an eye of a user.
In some implementations, the eyepiece according to the example embodiment includes an image splitter configured to split the light received from the field of view into a first portion and a second portion. The image splitter directs the first portion to the visualization optics. The eyepiece according to the example embodiment includes a camera module configured to receive the second portion of the light from the image splitter and generate the image data. The processor is configured to receive the image data and generate the signal with a latency less than or comparable to a persistence of vision.
In some implementations, the system according to the example embodiment includes an illumination source and illumination optics configured to deliver light from the illumination source to the field of view. The eyepiece according to the example embodiment includes an image splitter configured to split the light received from the field of view into a first portion and a second portion, the image splitter directing the first portion to the visualization optics. The eyepiece includes a camera module configured to receive the second portion of the light from the image splitter and generate the image data. The processor is configured to receive the image data and generate the signal with a latency lower than a persistence of vision, and the processor is configured to generate a laser speckle contrast imaging (LSCI) image based on the second portion of light received from the image splitter, and provide the LSCI image to the image generation module via the signal. The image generation module is configured to generate the image based on the signal.
In some implementations, the system according to the example embodiment includes a memory storing information relating to the field of view. The processor is configured to generate the signal according to the stored information, and provide one or more of a textual, numerical, graphical, or image rendering for display by the image generation module. In some implementations, the memory and provision of one or more of a textual, numerical, graphical, or image rendering for display by the image generation module can be implemented in combination with the features of the preceding two paragraphs.
The Augmented Reality (AR) Module 130 includes an arrangement of one or more light manipulation components, which includes but is not limited to lenses, mirrors, dichroic mirrors, apertures, filters, beam splitters, beam shapers, polarizers, wave retarders, diffractive and adaptive optical elements, fixed and variable phase and/or amplitude masks, analog and/or digital light processors (DLP), microdisplays, visible light sources, micro electro-mechanical system (MEMS) devices, and fiber optics, that serve the purpose of delivering optical imaging content from the Augmented Reality Projection (ARP) Module 131 to the Visualization Optics 133, such as a microscope eyepiece or a Head-Mounted Display (HMD). The various embodiments of the AR Module 130 include components that manipulate the light in a manner that is useful for the imaging modality of interest based on the specific application. In some embodiments, the optical assembly for the Image Combiner module 132 includes polarizers, depolarizers, neutral density filters, waveplate retarders, and/or polarizing beam splitters in the imaging paths from the ARP Module 131 and the Imaging optics 122 that polarize or depolarize the light in a manner that is optimally combined with the light coming back from the target object 190 and directed towards the Visualization Optics 133.
The illumination module 110 includes two sub-modules: 1) illumination source 111 and 2) illumination optics 112.
The illumination source 111 includes one or more light sources such that at least one of the sources produces coherent light (e.g., a laser) for speckle production and LSCI. In some embodiments, the illumination source 111 includes additional light sources that produce coherent, incoherent, or partially coherent light. In an example embodiment, the wavelength of the light emitted by the one or more light sources lies in the 0.1 nm (X-ray) to 10,000 nm (infrared) range. In some embodiments, one or more wide-band light sources are used to produce light with more than one wavelength. In some embodiments, the one or more wide-band light sources are fitted with one or more filters to narrow the band for specific applications. Typically, incoherent illumination sources are useful for reflectance- or absorption-based photography. In some embodiments, direct visualization and focusing of the AR system 100 on the target object 190 is achieved under incoherent illumination. In some embodiments, the illumination source 111 incorporates mechanisms to control one or more of the power, intensity, irradiance, timing, polarization, or duration of illumination. Such a control mechanism may be electronic (examples include a timing circuit, an on/off switching circuit, a variable resistance circuit for dimming the intensity, or a capacitor-based circuit to provide a flash of light) or mechanical, where one or more optical elements (examples include an aperture, a shutter, a filter, or the source itself) may be moved in, along, or out of the path of illumination. In various embodiments, the light sources included in the illumination source 111 may be pulsatile or continuous, and polarized partially, linearly, circularly, or randomly (non-polarized). They can be based on any laser, plasma discharge (flash), luminescence phenomenon, incandescent, halogen, or metal vapor (e.g., mercury) lamp, light-emitting diode (LED or SLED (super-luminescent LED)), X-ray, gamma-ray, charged particle (e.g., electron) accelerator, variable electromagnetic field (such as those used in magnetic resonance imaging or spectroscopy), or other ionizing or non-ionizing radiation source and technology.
The second sub-module of the illumination module 110, the illumination optics 112, includes an arrangement of one or more light manipulation components, which includes but is not limited to lenses, mirrors, apertures, filters, beam splitters, beam shapers, polarizers, wave retarders, and fiber optics, that serve the purpose of delivering light from the illumination module 110 to the desired ROI in the target object 190. In some embodiments, additional light manipulation elements, such as an optical diffraction element, can be used to project a pattern onto the target; for example, the optical diffraction element can be configured to project a light pattern onto the target object 190. The illumination optics 112 for the various embodiments include components that manipulate the light in a manner that is useful for imaging the tissue of interest based on the specific application. In some embodiments, the illumination optics 112 include a polarizer in the path of illumination that polarizes the light in a manner that significantly attenuates the light except when reflected or scattered (depending on the operator's preference) by the target object 190.
The image acquisition (IA) module 120 consists of two sub-modules: 1) camera module 121 and 2) imaging optics 122, designed to undertake desired imaging schemes such as LSCI, ICG (video) angiography, and other kinds of fluorescence microscopy or imaging modalities.
The camera module 121 includes at least one camera sensor or image acquisition device that is capable of transducing incident light into a digital representation (called image data). In some embodiments, the camera module 121 includes at least two such camera sensors or image acquisition devices, where one is used to capture the live visible FOV, or ROI of the FOV, of the target object 190 while the rest of the acquisition devices are dedicated to capturing data from the FOV, or ROI of the FOV, of the target tissue illuminated with one or more coherent light sources. The camera module 121 is configured to direct the image data for further processing, display, or storage. In some embodiments, the camera module 121 includes mechanisms that control image acquisition parameters, including exposure time (i.e., the time for which a camera sensor pixel integrates photons prior to a readout), pixel sensitivity (i.e., the gain of each pixel), binning (i.e., reading multiple pixels as if they were one compound pixel), and active area (i.e., when the entire pixel array is not read out but a subset of it is), among others. In the various embodiments, the at least one camera sensor used in the camera module 121 is a charge-coupled device (CCD), complementary metal oxide semiconductor (CMOS), metal oxide semiconductor (MOS), array of (avalanche or hybrid) photodiodes, photo-tubes, photo- and electron multipliers, light or image intensifiers, position-sensitive devices, thermal imagers, photo-acoustic and ultrasound array detectors, raster- or line-(confocal) scanners, Nipkow-disc or confocal spinning-disc devices, streak cameras, or another similar technology designed to excite, detect, and capture imaging data.
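As an illustration only, the acquisition parameters named above could be grouped in software roughly as follows; the class name, field names, defaults, and validation rules are assumptions of this sketch and do not correspond to any specific camera API.

```python
from dataclasses import dataclass


@dataclass
class AcquisitionSettings:
    """Hypothetical grouping of the acquisition parameters described above."""
    exposure_ms: float = 5.0                   # photon integration time per readout
    gain: float = 1.0                          # per-pixel sensitivity multiplier
    binning: int = 1                           # 1 = none, 2 = 2x2 compound pixels, ...
    active_area: tuple = (0, 0, 2048, 2048)    # (x, y, width, height) readout subset

    def validate(self) -> None:
        """Basic sanity checks before the settings are pushed to a camera driver."""
        if self.exposure_ms <= 0:
            raise ValueError("exposure time must be positive")
        if self.binning < 1:
            raise ValueError("binning factor must be >= 1")
        _, _, width, height = self.active_area
        if width <= 0 or height <= 0:
            raise ValueError("active area must have positive width and height")
```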
The imaging optics 122 includes an arrangement of one or more light manipulation components that serve the purpose of focusing the ROI of the FOV of the target object 190 onto the at least one camera sensor of the camera module 121. In some embodiments, the imaging optics 122 comprise a means to form more than one image of the ROI or sub-regions of the ROI of the target object 190. In some embodiments, the more than one image projects onto the one or more camera sensors or onto a retina of an observer 191 through an eyepiece. In the various embodiments, the imaging optics 122 determine the imaging magnification, the field of view (FOV), the size of the speckle (approximated by the diameter of the Airy disc pattern), and the spot size at various locations within the FOV. In some embodiments, the imaging optics 122 include light manipulation components that, in conjunction with components of the illumination optics 112, reduce the undesired glare resulting from various optical surfaces. In some embodiments, the imaging optics 122 include additional opto-mechanical components for light manipulation control: aperture control for manipulation of the speckle size of the image data (e.g., a manual or electronic iris), adjustment of the depth of focus (e.g., a focusing lens), switching of filter sets (e.g., including but not limited to electronic sliders, filters, polarizers, and lenses), and alignment of the focused light plane orthogonal to the optical path (e.g., 45° mirrors); some or all of these are connected by wired or wireless means and controlled using the user interface module 150 through the information processor module 140.
The information processor (IP) module 140 includes one or more processing elements configured to calculate, estimate, or determine, in real-time or near-real-time, one or more items of anatomical, physiological, and functional information and/or related parameters derived from the imaging and other sensor data, and to generate a data signal based on image data pertaining to a field of view of the surgical microscope for presentation to the user in the context of other available information. The IP module 140 includes one or more processing elements designed and configured to implement control functions for the AR system 100, including: control of the operation and configuration parameters of the acquisition module 120 and its sub-modules, 1) the camera module 121 (e.g., exposure time, gain, acquisition timing) and 2) the imaging optics 122 (e.g., iris control, focus control, switching filter sets); control of the illumination module 110 and its sub-modules, 1) the illumination source 111 (e.g., irradiation power, timing, duration, and synchrony of illumination) and 2) the illumination optics 112 (e.g., focus control); control of the transmission of image data or derivatives thereof to and from the display module 180, the user interface module 150 (preview of image data), and/or the storage module 170 and the data transmission module 160; control of which anatomical, physiological, and functional information and/or related parameters should be calculated, estimated, or determined by the processor module 140; control of the display or projection of the same by the display module 180 and the AR module 130 and its sub-modules; and control of the overall safety criteria, sensors, interlocks, and operational procedures of the AR system 100. In some embodiments, the information processor module includes subroutines for machine learning (supervised (task-driven), unsupervised (data-driven), and in some cases reinforcement learning (the algorithm reacts to an environment/event)), which leverage access to information, such as image data, from one or more other sub-modules such as 110, 120, 130, 160, and 170.
In various embodiments, the information processor module 140 is configured to calculate, estimate, or determine one or more items of anatomical and physiological information, or equivalent parameters calculated from the image data, in one or more of the following modes (an illustrative software sketch of these modes appears after the list):
Real-time video mode—In the real-time video mode, the information processor module 140 is configured to calculate, estimate, or determine one or more items of anatomical and physiological information, or equivalent parameters calculated from the image data, based on a certain predetermined set of parameters and in synchrony or near-synchrony with the image acquisition. In the real-time video mode, the frame rate of the video presented by the display module 180 is greater than 16 frames per second (fps), allowing the surgeon to perceive uninterrupted video (based on the persistence of vision being 1/16th of a second).
Real-time vessel mode—In the real-time vessel mode, the AR system 100 is configured to allow the surgeon to select, using automatic or semi-automatic means, one or more vessels and to emphasize the anatomical and physiological information in the selected vessels over other vessels in the field of view (FOV). In some embodiments, the AR system 100 is configured to allow the surgeon to select all arteries or all veins, extracted automatically, in the entire FOV, or in a region of interest (ROI) of the FOV. In such embodiments, the extraction may be achieved by either (a) computing the anatomical or physiological information in the entire field but displaying only the anatomical or physiological information in the selected vessels, or (b) computing the anatomical or physiological information only in the selected vessels and displaying the anatomical or physiological information accordingly, or (c) computing the anatomical or physiological information in the entire field and enhancing the display of the selected vessels through an alternate color scheme or by highlighting the pre-selected vessels' centerlines or edges.
Real-time relative mode—In the real-time relative mode, the processor module 140 includes the baseline values of anatomical and physiological information in its computation of instantaneous values of anatomical or physiological information. The real-time relative mode may be implemented as a difference of the instantaneous values of anatomical or physiological information from the baseline values, or as a ratio of the anatomical or physiological information with respect to the baseline values.
Snapshot mode—In the snapshot mode, the processor module 140 generates a single image of the anatomical or physiological information in the surgical FOV. In this embodiment, the processor module 140 may utilize a greater number of frames for computing the anatomical or physiological information than it utilizes during the real-time modes, since the temporal constraints are somewhat relaxed. In the snapshot mode, all the functionalities of the real-time modes are also possible (e.g., display of the change of blood flow instead of blood flow, or enhanced display of a set of selected vessels).
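For illustration only, the following sketch summarizes the four modes in simplified software form; the function names, the 1.5x emphasis factor, and the use of a simple mean in snapshot mode are assumptions of this sketch, standing in for the actual anatomical or physiological computations.

```python
import numpy as np

PERSISTENCE_OF_VISION_FPS = 16.0   # uninterrupted video above ~16 fps (1/16 s)


def meets_realtime_budget(processing_ms: float,
                          target_fps: float = PERSISTENCE_OF_VISION_FPS) -> bool:
    """Real-time video mode: per-frame work must fit within one frame period."""
    return processing_ms < 1000.0 / target_fps           # ~62.5 ms at 16 fps


def vessel_mode(flow_map: np.ndarray, vessel_mask: np.ndarray,
                strategy: str = "a") -> np.ndarray:
    """Real-time vessel mode: display strategies (a)-(c) from the description above."""
    if strategy == "a":
        # (a) compute everywhere, display only the selected vessels
        return np.where(vessel_mask, flow_map, 0.0)
    if strategy == "b":
        # (b) compute only inside the selected vessels (masking stands in for that here)
        out = np.zeros_like(flow_map)
        out[vessel_mask] = flow_map[vessel_mask]
        return out
    # (c) compute everywhere, emphasize the selected vessels (illustrative 1.5x boost)
    return np.where(vessel_mask, flow_map * 1.5, flow_map)


def relative_mode(current: np.ndarray, baseline: np.ndarray,
                  as_ratio: bool = False) -> np.ndarray:
    """Real-time relative mode: difference from, or ratio to, stored baseline values."""
    if as_ratio:
        return current / np.maximum(baseline, 1e-12)      # guard against division by zero
    return current - baseline


def snapshot_mode(frames: np.ndarray) -> np.ndarray:
    """Snapshot mode: one map from a longer (T, H, W) stack, temporal constraints relaxed."""
    return frames.mean(axis=0)   # stands in for the full anatomical/physiological computation
```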
The display module 180 comprises one or more display screens or projectors configured to present the estimated anatomical and physiological information or equivalent parameters calculated from the image data by the information processor module 140; the augmented overlaid image data of the AR module 130, which contains processed image data projected using the AR projection module 131 and overlaid onto the unobstructed FOV, or ROI of the FOV, of the target object 190 (from one arm of the image splitter 134); or the raw data acquired by the acquisition module 120. In some embodiments, the one or more display screens are physically located in close proximity to the remaining elements of the AR system 100. In some embodiments, the one or more display screens are physically located remotely from the remaining elements of the AR system 100. In the various embodiments, the one or more display screens are connected by wired or wireless means to the processor module 140. In some embodiments, the display module 180 is configured to provide the observer with a visualization of the ROI and the estimated anatomical and physiological information or equivalent parameters calculated from the image data by the information processor module 140. In the various embodiments, the display module 180 is configured for real-time visualization, near-real-time visualization, or retrospective visualization of imaged data or of the estimated anatomical and physiological information or equivalent parameters calculated from the image data and stored in the storage module 170. Various aspects of the anatomical and physiological information, or equivalent parameters and other outputs of the processor, may be presented in the form of monochrome, color, or pseudo-color images, videos, graphs, plots, or alphanumeric values.
The storage module 170 includes one or more mechanisms for storing electronic data, including the estimated anatomical and physiological information or equivalent parameters calculated from the image data by the processor module 140, the overlaid eyepiece image data from the image combiner module 132 of the AR module 130, or the raw data acquired by the acquisition module 120. In various embodiments, the storage module 170 is configured to store data for temporary use in a primary storage module 171, and for long-term use the data can be transferred to a data library module 172. In various embodiments, the storage module 170 and/or the data library module 172 can include random access memory (RAM) units, flash-based memory units, magnetic disks, optical media, flash disks, memory cards, or an external server or system of servers (e.g., a cloud-based system) that may be accessed through wired or wireless means. The storage module 170 can be configured to store data based on a variety of user options set through the user interface module 150, including storing all or part of the estimated anatomical and physiological information or equivalent parameters calculated from the image data by the processor module 140, the raw data acquired by the acquisition module 120, or system information such as the settings or parameters used while recording/capturing the raw or processed image data.
The storage module 170 includes two sub-modules: 1) the primary storage module 171 and 2) the data library module 172. The primary storage module 171 can embody all of the functionality discussed for the storage module 170 above when the image data and the system data information are captured/stored for temporary use during operation of the AR system 100. The data and/or system information and parameters that are useful for future AR system 100 operations (e.g., adjustments of system parameters, optical or mechanical corrections) can be transferred to and stored in the data library module 172. In various embodiments, the transfer of data can be accomplished using one or more storage mechanisms, including random access memory (RAM) units, flash-based memory units, magnetic disks, optical media, flash disks, memory cards, or an external server or system of servers (e.g., a cloud-based system) that may be accessed through wired or wireless means.
The user interface module 150 includes one or more user input mechanisms to permit the user to control the operation and settings of the various modules 110, 120, 130, 140, 160, 170, 180 and their sub-modules of the system 100. In various embodiments, the one or more user input mechanisms include a touch-screen, keyboard, mouse or equivalent navigation and selection device, and virtual or electronic switches controlled by hand, foot, one or both eyes, mouth, head, or voice. In some embodiments, the one or more user input mechanisms are the same as the one or more display screens of the display module 180.
The Augmented Reality (AR) module 130 includes three sub-modules: 1) an Augmented Reality Projection (ARP) module 131, 2) an Image Combiner (IC) module 132, and 3) Visualization Optics 133.
The Augmented Reality Projection (ARP) module 131 includes one or more miniaturized projection displays or screens configured to present the estimated anatomical and physiological information or equivalent parameters calculated from the image data by the information processor module 140 and provided to the ARP module 131 in the form of a data signal. The data signal can include data pertaining to one or more of image sequences, graphical representations, numerical values, or text-based representations. In some embodiments, the miniaturized projection display (ARP-display unit) includes one of many micro-displays made from a liquid crystal display (LCD) or its derivatives, an organic light-emitting diode (OLED) display or its derivatives, or digital light processing (DLP). In some embodiments, the processed image data is communicated/transferred to the ARP-display unit of the ARP module using optical elements (e.g., an optical fiber bundle) or electronic means (e.g., wired or wireless). Various aspects of the anatomical and physiological information, or equivalent parameters and other outputs of the processor, may be presented in the form of monochrome, color, or pseudo-color images, videos, graphs, plots, or alphanumeric values on the ARP-display unit of the ARP module. In some embodiments, the ARP module 131 incorporates mechanisms to control one or more of the brightness, alignment, timing, frame rate, or duration of the image data. Such a control mechanism may be electronic (examples include a timing circuit, an on/off switching circuit, a variable resistance circuit for dimming the brightness, a dedicated microcontroller/microprocessor (evaluation board), or a capacitor-based circuit) or mechanical, where one or more optical elements (examples include an aperture, a shutter, a filter, or the ARP-display unit itself) may be moved in or out of the path of projection onto the Image Combiner (IC) module 132.
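For illustration only, the content of such a data signal might be grouped in software roughly as follows; the class and field names are hypothetical and merely mirror the categories of data (image sequences, graphics, numeric values, text) listed above.

```python
from dataclasses import dataclass, field
from typing import Optional

import numpy as np


@dataclass
class ARPDataSignal:
    """Hypothetical container mirroring the data-signal categories listed above."""
    image_sequence: Optional[np.ndarray] = None   # e.g., (T, H, W, 3) pseudo-color frames
    graphics: Optional[np.ndarray] = None         # a rendered graph or plot as an image
    numeric_values: dict = field(default_factory=dict)   # e.g., {"flow_index": 3.2}
    text: str = ""                                # status or annotation string
```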
The IC module 132 includes an arrangement of one or more light manipulation components or relay optics, which includes but is not limited to lenses, mirrors, apertures, filters, beam splitters, beam shapers, polarizers, wave retarders, neutral density filters, and fiber optics, that serve the purpose of refining and delivering the light containing the processed image data from the ARP-display unit of the ARP module 131 onto the visualization optics 133 (e.g., eyepiece optics) to be observed in real-time or near real-time by the observer (e.g., surgeon or technician). Some embodiments may include an arrangement of one or more opto-mechanical components, such as an electronic or manual iris (to control the system aperture), lenses, 3D or 2D optical translation stages, optical rotary stages, etc. The purpose of the opto-mechanical components is to finely tune/adjust the magnification and the alignment along the rotational, orthogonal, and depth planes with respect to the light coming from the ARP module 131. The purpose of the IC module 132 is to combine (or overlay) the processed image data onto the unobstructed FOV, or ROI of the FOV, of the target object 190, thus creating a combined image. Thus, the estimated anatomical and physiological information or equivalent parameters calculated from the image data are presented over and along with the unobstructed FOV, or ROI of the FOV, of the target object 190 to the observer through the visualization optics 133. In some embodiments, an additional arrangement of the above-mentioned optical and opto-mechanical components can be used to relay the overlaid (combined) image data to an eyepiece camera.
Visualization optics 133 includes an arrangement of one or more light manipulation components or relay optics, which includes but is not limited to lenses, mirrors, apertures, filters, beam splitters, beam shapers, polarizers, wave retarders, neutral density filters, and fiber optics, that serve the purpose of delivering the combined image from the image combiner (IC) module 132 to the retina of the observer (e.g., surgeon or technician). The purpose of the visualization optics 133 is to match the magnification and depth of perception of the FOV, or ROI of the FOV, of the combined image data relayed from the IC module 132.
The data transmission module 160 includes one or more input/output data transmission mechanisms (wired or wireless) to permit the user to transmit and receive information between the system 100 and a remote location or other system. The information includes system parameters, raw and/or processed image data, etc.
The image splitter (IS) module 134 can include imaging optics leveraged from the optical assembly of a surgical microscope (for example, Zeiss OPMI series, Leica Microsystems M- and MS-series, and similar microscopes) or of a physically integrated surgical microscope. In some embodiments, one or both optical paths within the surgical microscope can be integrated with the augmented reality (AR) module 130, thus achieving mono- or stereo-AR eyepiece capabilities. This integrated system would have the ability to estimate particulate flow within a FOV, the size of which is determined by the magnification settings of the surgical microscope. The system 100 can estimate the particulate flow within the depth of focus as set by the surgical microscope. When used in human surgical environments, the FOV has a diameter that ranges from approximately 10 mm to 50 mm. When used in veterinary environments, the FOV has a diameter that ranges from approximately 5 mm to 50 mm.
The implementation depicted presents an AR module 210 comprising three sub-modules, 1) an Augmented Reality Projection (ARP) module, 2) an Image Combiner module, and 3) a Visualization Optics (VO) module, which are all integrated into one subsystem intended to replace the regular eyepiece 262 of a surgical microscope 260 with minimal alteration of the light path upstream of it.
The ARP module includes a miniaturized projection display or screen 223 configured to present the estimated anatomical and physiological information or equivalent parameters calculated from the image data by the information processor module, combined with the visual field by means of a beam splitting element 221, and projected into one or both eyes of the observer 280 using optical elements such as projection lenses 222.
The Image Combiner (IC) module, represented by the beam splitting element 221, includes an arrangement of one or more light manipulation components or relay optics, which includes but is not limited to lenses, mirrors, apertures, filters, beam splitters, beam shapers, polarizers, wave retarders, neutral density filters, and fiber optics, that serve the purpose of delivering the light containing the processed image data from the ARP-display unit or units onto the visualization optics 225 (e.g., eyepiece optics including an ocular lens) to be observed in real-time or near real-time by the observer (e.g., surgeon or technician). The beam splitting element 221 can combine the processed image data with light from a field of view of the surgical microscope 260 received via the objective lens optics 224 of the AR module 210. Some embodiments include an arrangement of one or more opto-mechanical components, such as an electronic or manual iris (e.g., to control the aperture), lenses, 3D or 2D optical translation stages, optical rotary stages, etc. The purpose of the opto-mechanical components is to finely tune/adjust the magnification and the alignment along the rotational, orthogonal, and depth planes with respect to the light coming from the projection display 223.
The Visualization Optics (VO) module, represented here by visualization optics 225, includes an arrangement of one or more light manipulation components or relay optics, which includes but is not limited to lenses, mirrors, apertures, filters, beam splitters, beam shapers, polarizers, wave retarders, neutral density filters, and fiber optics, that serve the purpose of delivering the combined image from the beam splitting element 221 to the retina of the observer 280 (e.g., surgeon or technician). The purpose of the VO module is to match the magnification and depth of perception of the FOV, or ROI of the FOV, of the combined image data relayed from the IC module or modules mentioned above.
The embodiment in
Once the image frames are acquired by the acquisition module, the information processor module employs a rolling FIFO acquisition algorithm, at 605, and then acquires the specified number of frames M at 606. The next step is to check whether the number of acquired frames M is equal to a predetermined number of frames N. If M is less than N, the system waits for the collection or acquisition of M frames at 606, followed by generation of the N frame stack at 610 and preparation of the stack for processing within the selected region of interest at 609. In either case, an N frame stack is generated at step 610. Raw data from the last M frames are also optionally saved to the library 622 and optionally displayed on a diagnostic display 608. Next, this loop restarts and, while awaiting the arrival of the next M frames, the system employs a subroutine, at 611-618, to apply one or more image processing algorithms, such as image enhancement, registration, segmentation, etc., to the pixels of interest in the field of view, using the newly arrived data in the stack of M frames of acquired fluorescence intensity data in the buffer, estimating the contrast agent quantity within the region of interest at 611, generating a monochromatic brightness Image Result 1 at 612, and also potentially, if an overlay is desired at 613, computing the particulate velocity, perfusion rate or flow, or any other quantity (index), which may be a linear or non-linear function of them. At 614, the system converts Image Result 1 to its pseudo-color representation Image Result 2, according to a manually user-specified or computer-generated color and brightness table (palette), providing for intuitive and effective visualization of perfusion, flow information, or actively perfused vasculature and related characteristics (angiogenesis, hemorrhaging, occluded vessels, etc.), and potentially overlays it with other imaging modalities, which may necessitate additional processing steps, including but not limited to background and offset subtraction, outlier rejection, denoising, bandpass filtering, smoothing, and artifact elimination. Process step 616 optionally implements a subroutine to convert and transform the generated image result from 614 and other data (from the data library 615, comprising data from other modalities, vital monitoring data, etc.) into compound image data, with different data represented in different formats, thereby presenting data from more than one imaging modality in Image Result 3. The system forwards Image Result 2 or Image Result 3, with aligning/rescaling at 617 as appropriate, depending on the user-selected or preset display setting, to the AR module 618. In some implementations, the system can utilize techniques to accelerate processing, including parallel computing, analog processing, machine vision, deep learning, and processing techniques based on estimation rather than exact computation of the quantities of interest. In such implementations, post-processing for the stack can be finished before another set of frames is integrated on the imaging device and transferred to computer memory at 606. Based on the parameter settings at 607, the fluorescence excitation and detection process 600 continues with image acquisition and processing cycles, providing real-time examination of the target object.
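For illustration only, a rolling FIFO frame buffer and the conversion of a monochrome result (Image Result 1) into a palette-mapped pseudo-color image (Image Result 2) might look roughly like the following; the class, function, and variable names are assumptions of this sketch and are not tied to the reference numerals above.

```python
import numpy as np
from collections import deque


class RollingFrameBuffer:
    """Rolling FIFO of the most recent N frames, updated in blocks of M frames."""

    def __init__(self, n_frames: int):
        self.frames = deque(maxlen=n_frames)

    def push(self, new_frames) -> None:
        """Append the latest block of M frames; the oldest frames fall out automatically."""
        self.frames.extend(new_frames)

    def full(self) -> bool:
        return len(self.frames) == self.frames.maxlen

    def stack(self) -> np.ndarray:
        """Return the current N-frame stack for processing, shaped (T, H, W)."""
        return np.stack(self.frames, axis=0)


def to_pseudocolor(image_result_1: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Map a monochrome result to pseudo-color (Image Result 2) via a 256x3 lookup table."""
    lo, hi = float(image_result_1.min()), float(image_result_1.max())
    norm = (image_result_1 - lo) / max(hi - lo, 1e-12)   # normalize to [0, 1]
    idx = (norm * 255).astype(np.uint8)                  # index into the palette
    return palette[idx]                                  # (H, W, 3) pseudo-color image
```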
In parallel with this process, all the raw data arriving from steps 606, 612, 616, and 623 can be saved to computer random-access memory, or streamed to a permanent storage solution or a remote destination for archival and, potentially, at 622, more detailed processing and analysis. Based on the parameter settings at 607 or the user input at 625, the imaging process checks whether to enable or disable, at 617, the AR module 618. Next, at 619, if ‘YES’, the process 600 continues with the image acquisition and processing cycle, providing real-time examination of the chosen ROI and checking whether the current imaging parameters should be adjusted at 620. If ‘NO’, the process 600 terminates at 630. During termination of the imaging process, 624 switches off the illumination source. In some embodiments, either or both of Image Result 2 and Image Result 3 can be sent to be displayed on external user display devices (TV, monitors, 3D-OLED, etc.) at 621.
Using the newly arrived data in the stack of N frames residing in the buffer, the information processor module generates a monochromatic brightness image, at 714, and also potentially, if an overlay is desired at 716, computes such quantities as the average oxygen saturation level or any other quantity (index) which may be a linear or non-linear function of the data acquired, generating a monochrome Image Result 1 at 715. At 717, the system converts Image Result 1 to its pseudo-color representation Image Result 2, according to a user-specified or computer-generated color and brightness table (palette), to provide intuitive and effective visualization of, for example, an oxygen saturation map in the vasculature and related characteristics. This visualization can be combined with other imaging modalities, which may necessitate additional processing steps, including but not limited to background and offset subtraction, outlier rejection, denoising, bandpass filtering, smoothing, and artifact elimination applied to the oxygen saturation map. Process step 718 also implements a subroutine to convert and transform the generated image result from 717 and other data (from the data library 727, comprising data from other modalities, vital monitoring data, etc.) into compound image data, with different data represented in different formats, thereby presenting data from more than one imaging modality in Image Result 3. The system forwards, at 719, Image Result 2 or Image Result 3, aligning and rescaling as appropriate, depending on the user-selected or preset display setting, to the AR module 726. In some implementations, the system can utilize techniques to accelerate processing, including parallel computing, analog processing, machine vision, deep learning, and processing techniques based on estimation rather than exact computation of the quantities of interest. In such implementations, post-processing for the stack can be finished before another set of frames is integrated on the imaging device and transferred to computer memory at 708. Based on the parameter settings at 703, the illumination and detection process 700 continues with image acquisition and processing cycles, providing real-time examination of the target object. In parallel with this process, all the raw data arriving from steps 708, 715, 718, and 723 can be saved to computer random-access memory, or streamed to a permanent storage solution or a remote destination for archival and, potentially, at 724, more detailed processing and analysis. Based on the parameter settings at 703 or the user input at 720, the imaging process checks whether to enable or disable, at 719, the ARP module 726. Next, at 722, if ‘YES’, the process 700 continues with the image acquisition and processing cycle, providing real-time examination of the chosen ROI. If ‘NO’, the process 700 terminates at 730. During termination of the imaging process, 725 switches off the illumination source. In some embodiments, either or both of Image Result 2 and Image Result 3 can be sent to be displayed on external user display devices (TV, monitors, 3D-OLED, etc.) at 721.
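For illustration only, the aligning/rescaling and compounding step that merges a pseudo-color result with other image data before it is forwarded to the AR projection path might be sketched as a simple digital alpha blend; the function name, blend factor, and the assumption that both inputs are already co-registered (H, W, 3) arrays are illustrative only.

```python
import numpy as np


def compose_for_arp(fov_rgb: np.ndarray, pseudo_color: np.ndarray,
                    alpha: float = 0.4) -> np.ndarray:
    """Blend a pseudo-color result over a field-of-view image to form a compound image.

    Both inputs are assumed to be co-registered (H, W, 3) arrays on the same grid;
    in practice the pseudo-color map would first be aligned and rescaled to the FOV.
    """
    blended = ((1.0 - alpha) * fov_rgb.astype(np.float64)
               + alpha * pseudo_color.astype(np.float64))
    return np.clip(blended, 0, 255).astype(np.uint8)
```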
Having now described some illustrative embodiments, it is apparent that the foregoing is illustrative and not limiting, having been presented by way of example. In particular, although many of the examples presented herein involve specific combinations of method acts or system elements, those acts and those elements may be combined in other ways to accomplish the same objectives. Acts, elements, and features discussed in connection with one embodiment are not intended to be excluded from a similar role in other embodiments or implementations.
The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” “characterized by,” “characterized in that,” and variations thereof herein is meant to encompass the items listed thereafter, equivalents thereof, and additional items, as well as alternate embodiments consisting of the items listed thereafter exclusively. In one embodiment, the systems and methods described herein consist of one, each combination of more than one, or all of the described elements, acts, or components.
Any references to embodiments or elements or acts of the systems and methods herein referred to in the singular may also embrace embodiments including a plurality of these elements, and any references in plural to any embodiment or element or act herein may also embrace embodiments including only a single element. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements to single or plural configurations. References to any act or element being based on any information, act or element may include embodiments where the act or element is based at least in part on any information, act, or element.
Any embodiment disclosed herein may be combined with any other embodiment or implementation, and references to “an embodiment,” “some embodiments,” “an alternate embodiment,” “various embodiments,” “one embodiment” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one implementation or embodiment. Such terms as used herein are not necessarily all referring to the same embodiment. Any embodiment may be combined with any other embodiment, inclusively or exclusively, in any manner consistent with the aspects and embodiments disclosed herein.
References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms.
Where technical features in the drawings, detailed description or any claim are followed by reference signs, the reference signs have been included to increase the intelligibility of the drawings, detailed description, and claims. Accordingly, neither the reference signs nor their absence have any limiting effect on the scope of any claim elements.
The systems and methods described herein may be embodied in other specific forms without departing from the characteristics thereof. The foregoing embodiments are illustrative rather than limiting of the described systems and methods. Scope of the systems and methods described herein is thus indicated by the appended claims, rather than the foregoing description, and changes that come within the meaning and range of equivalency of the claims are embraced therein.
This application claims the benefit of U.S. Provisional Application No. 62/800,920, filed Feb. 4, 2019, the entirety of which is incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2020/016312 | 2/3/2020 | WO | 00

Number | Date | Country
---|---|---
62800920 | Feb 2019 | US