The present invention relates to registration of laparoscopic or endoscopic image data to 3D volumetric image data, and more particularly, to registering intra-operative 2D/2.5D laparoscopic or endoscopic image data to pre-operative 3D volumetric image data in order to overlay information from the pre-operative 3D volumetric image data on the intra-operative laparoscopic or endoscopic image data.
During minimally invasive surgical procedures, sequences of laparoscopic or endoscopic images are acquired to guide the surgical procedures. Multiple 2D images can be acquired and stitched together to reconstruct a 3D intra-operative model of an observed organ of interest. This reconstructed intra-operative model may then be fused with pre-operative or intra-operative volumetric image data, such as magnetic resonance (MR), computed tomography (CT), or positron emission tomography (PET), to provide additional guidance to a clinician performing the surgical procedure. However, registration is challenging due to a large parameter space and a lack of constraints on the registration problem. One strategy for performing this registration is to attach the intra-operative camera to an optical or electromagnetic external tracking system in order to establish the absolute pose of the camera with respect to the patient. Such a tracker-based approach does help establish an initial registration between the intra-operative image stream (video) and the volumetric image data, but introduces the burden of additional hardware components to the clinical workflow.
The present invention provides a method and system for registration of intra-operative images, such as laparoscopic or endoscopic images, with pre-operative volumetric image data. Embodiments of the present invention register a 3D volume to 2D/2.5D intra-operative images by simulating virtual projection images from the 3D volume according to a viewpoint and direction of a virtual camera, and then calculating registration parameters to match the simulated projection images to the real intra-operative images while constraining the registration using relative orientation measurements associated with the intra-operative images from orientation sensors, such as gyroscopes or accelerometers, attached to the intra-operative camera. Embodiments of the present invention further constrain the registration based on a priori information of a surgical plan.
In one embodiment of the present invention, a plurality of 2D/2.5D intra-operative images of a target organ and corresponding relative orientation measurements for the intra-operative images are received. A 3D medical image volume of the target organ is registered to the plurality of 2D/2.5D intra-operative images by calculating pose parameters to match simulated projection images of the 3D medical image volume to the plurality of 2D/2.5D intra-operative images, wherein the registration is constrained by the relative orientation measurements for the intra-operative images.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
The present invention relates to a method and system for registering intra-operative images, such as laparoscopic or endoscopic images, to 3D volumetric medical images. Embodiments of the present invention are described herein to give a visual understanding of the registration method. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
The fusion of 3D medical image data with an intra-operative image (e.g., frame of endoscopic or laparoscopic video) can be performed by first performing an initial rigid alignment and then a more refined non-rigid alignment. Embodiments of the present invention provide the rigid registration between the 3D volumetric medical image data and the intra-operative image data using sparse relative orientation data from an accelerometer or gyroscope attached to the intra-operative camera, as well as surgical planning information, to constrain an optimization for registration parameters which best align the observed intra-operative image data with simulated projections of a 3D pre-operative medical image volume. Embodiments of the present invention further provide an advantageous surgical planning workflow in which surgical planning information can be used in a biomechanical model to predict motion of tissue in a surgical plan, which is used to provide feedback to the user with respect to a predicted registration quality and guidance on what changes can be made to the surgical plan in order to improve the registration.
Embodiments of the present invention perform co-registration of a 3D pre-operative medical image volume and 2D intra-operative images, such as laparoscopic or endoscopic images, having corresponding 2.5D depth information associated with each image. It is to be understood that the terms “laparoscopic image” and “endoscopic image” are used interchangeably herein and the term “intra-operative image” refers to any medical image data acquired during a surgical procedure or intervention, including laparoscopic images and endoscopic images.
The pre-operative 3D medical image volume includes a target anatomical object, such as a target organ. In an advantageous implementation, the target organ can be the liver. The pre-operative volumetric imaging data can provide a more detailed view of the target anatomical object, as compared to intra-operative images, such as laparoscopic and endoscopic images. The target anatomical object and other anatomical objects can be segmented in the pre-operative 3D medical image volume. Surface targets (e.g., the liver), critical structures (e.g., the portal vein, hepatic system, and biliary tract), and other targets (e.g., primary and metastatic tumors) may be segmented from the pre-operative imaging data using any segmentation algorithm. For example, the segmentation algorithm may be a machine learning based segmentation algorithm. In one embodiment, a marginal space learning (MSL) based framework may be employed, e.g., using the method described in U.S. Pat. No. 7,916,919, entitled “System and Method for Segmenting Chambers of a Heart in a Three Dimensional Image,” which is incorporated herein by reference in its entirety. In another embodiment, a semi-automatic segmentation technique, such as, e.g., graph cut or random walker segmentation can be used.
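As a concrete illustration of the semi-automatic option, the following is a minimal sketch of random walker segmentation of the target organ from the pre-operative volume using scikit-image. The helper function and seed handling are illustrative assumptions, not part of the described system; in practice the seeds would come from user clicks or an automatic detector.

```python
# Minimal sketch: semi-automatic random walker segmentation of a target organ
# from a pre-operative 3D volume. Seed handling is a hypothetical illustration.
import numpy as np
from skimage.segmentation import random_walker

def segment_organ(volume, organ_seeds, background_seeds, beta=130):
    """volume: 3D array of intensities; seeds: lists of (z, y, x) voxel indices."""
    labels = np.zeros(volume.shape, dtype=np.uint8)  # 0 = unlabeled voxels
    for z, y, x in organ_seeds:
        labels[z, y, x] = 1  # label 1 = target organ (e.g., liver)
    for z, y, x in background_seeds:
        labels[z, y, x] = 2  # label 2 = background tissue
    segmentation = random_walker(volume.astype(float), labels, beta=beta)
    return segmentation == 1  # binary mask of the target organ
```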
At step 104, a sequence of intra-operative images is received along with corresponding relative orientation measurements. The sequence of intra-operative images can also be referred to as a video, with each intra-operative image being a frame of the video. For example, the intra-operative image sequence can be a laparoscopic image sequence acquired via a laparoscope or an endoscopic image sequence acquired via an endoscope. According to an advantageous embodiment, each frame of the intra-operative image sequence is a 2D/2.5D image. That is, each frame of the intra-operative image sequence includes a 2D image channel that provides typical 2D image appearance information for each of a plurality of pixels and a 2.5D depth channel that provides depth information corresponding to each of the plurality of pixels in the 2D image channel. For example, each frame of the intra-operative image sequence can include RGB-D (Red, Green, Blue+Depth) image data, which includes an RGB image, in which each pixel has an RGB value, and a depth image (depth map), in which the value of each pixel corresponds to a depth or distance of that pixel from the camera center of the image acquisition device (e.g., laparoscope or endoscope). The intra-operative image acquisition device (e.g., laparoscope or endoscope) used to acquire the intra-operative images can be equipped with a camera or video camera to acquire the RGB image for each time frame, as well as a time-of-flight or structured light sensor to acquire the depth information for each time frame. The intra-operative image acquisition device can also be equipped with an orientation sensor, such as an accelerometer or a gyroscope, which provides a relative orientation measurement for each of the frames. The frames of the intra-operative image sequence may be received directly from the image acquisition device. For example, in an advantageous embodiment, the frames of the intra-operative image sequence can be received in real-time as they are acquired by the image acquisition device. Alternatively, the frames of the intra-operative image sequence can be received by loading previously acquired intra-operative images stored on a memory or storage of a computer system.
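One possible way to organize such a 2D/2.5D frame and its orientation measurement in software is sketched below; the field names and layout are illustrative assumptions rather than a prescribed data format.

```python
# Sketch of a container for one 2D/2.5D intra-operative frame with its
# relative orientation measurement; field names are illustrative only.
from dataclasses import dataclass
import numpy as np

@dataclass
class IntraOpFrame:
    rgb: np.ndarray          # (H, W, 3) color image from the laparoscope/endoscope camera
    depth: np.ndarray        # (H, W) depth map (e.g., in meters) from a time-of-flight
                             # or structured light sensor; NaN where no reading exists
    orientation: np.ndarray  # (3, 3) rotation from the accelerometer/gyroscope,
                             # relative to a reference frame (e.g., the first frame)
    timestamp: float         # acquisition time of the frame
```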
According to an embodiment of the present invention, the sequence of intra-operative images can be acquired by a user (e.g., doctor, clinician, etc.) performing a complete scan of the target organ using the image acquisition device (e.g., laparoscope or endoscope). In this case, the user moves the image acquisition device while the image acquisition device continually acquires images (frames), so that the frames of the intra-operative image sequence cover the complete surface of the target organ. This may be performed at the beginning of a surgical procedure to obtain a full picture of the target organ at its current deformation. A 3D stitching procedure may be performed to stitch together the intra-operative images to form an intra-operative 3D model of the target organ, such as the liver.
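A rough sketch of such a stitching step, under the simplifying assumptions that pinhole camera intrinsics are known and that a rigid camera-to-world pose is available for each frame (e.g., from the orientation measurements and frame-to-frame tracking), might look as follows; it reuses the hypothetical IntraOpFrame container sketched above.

```python
# Rough sketch: fuse 2.5D depth frames into a single intra-operative point
# cloud, assuming known intrinsics and per-frame rigid poses.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift an (H, W) depth map to an (N, 3) point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    return points[np.isfinite(z) & (z > 0)]  # keep only valid depth readings

def stitch(frames, poses, intrinsics):
    """Transform each frame's points into a common world frame and concatenate.

    poses: list of (R, t), with R a (3, 3) rotation and t a (3,) translation
    mapping camera coordinates to world coordinates.
    """
    fx, fy, cx, cy = intrinsics
    clouds = []
    for frame, (R, t) in zip(frames, poses):
        pts = backproject(frame.depth, fx, fy, cx, cy)
        clouds.append(pts @ R.T + t)  # rigid camera-to-world transform
    return np.concatenate(clouds, axis=0)
```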
At step 106, the pre-operative 3D medical image volume is registered to the 2D/2.5D intra-operative images using the relative orientation measurements of the intra-operative images to constrain the registration. According to an embodiment of the present invention, this registration is performed by simulating camera projections from the pre-operative 3D volume using a parameter space defining the position and orientation of a virtual camera (e.g., virtual endoscope/laparoscope). The simulation of the projection images from the pre-operative 3D volume can include photorealistic rendering. The position and orientation parameters determine the appearance as well as the geometry of the simulated 2D/2.5D projection images from the 3D medical image volume, which are directly compared to the observed 2D/2.5D intra-operative images via a similarity metric.
An optimization framework is used to select the pose parameters for the virtual camera that maximize the similarity (or minimize the difference) between the simulated projection images and the received intra-operative images. That is, the optimization problem calculates position and orientation parameters that maximize a total similarity (or minimize a total difference) between each 2D/2.5D intra-operative image and a corresponding simulated 2D/2.5D projection image from the pre-operative 3D volume over all of the intra-operative images. According to an embodiment of the present invention, the similarity metric is calculated for the target organ in the intra-operative images and the corresponding simulated projection images. This optimization can be performed using any similarity or difference metric and can be solved using any optimization algorithm. For example, the similarity metric can be cross correlation, mutual information, normalized mutual information, etc., and the similarity metric may be combined with a geometry fitting term for fitting the simulated 2.5D depth data to the observed 2.5D depth data based on the geometry of the target organ. As described above, the orientation sensors mounted on the intra-operative image acquisition device (e.g., endoscope/laparoscope) provide relative orientations of the intra-operative images with respect to each other. These relative orientations are used to constrain the optimization problem. In particular, the relative orientations of the intra-operative images constrain the set of orientation parameters calculated for the corresponding simulated projection images. Additionally, the scaling is known due to metric 2.5D sensing, resulting in an optimization for pose refinement on the unit sphere. The optimization may be further constrained based on other a priori information from a known surgical plan used in the acquisition of the intra-operative images, such as a position of the operating room table, position of the patient on the operating room table, and a range of possible camera orientations.
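A simplified sketch of this constrained optimization is given below. The renderer simulate_projection is a hypothetical stand-in for the photorealistic rendering of a virtual camera view from the pre-operative volume, not an existing API. The relative poses of the frames are assumed known (rotations from the orientation sensor, translations from the metric 2.5D data), so only a single base rotation and translation, six parameters in total, are optimized.

```python
# Simplified sketch of the constrained pose optimization. `simulate_projection`
# is a hypothetical stand-in for rendering a virtual camera view from the
# pre-operative 3D volume; it is assumed to return (rgb, depth) images.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def ncc(a, b):
    """Normalized cross correlation between two equally shaped images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return (a * b).mean()

def registration_cost(params, volume, frames, rel_poses, simulate_projection):
    # Base pose (to be optimized): maps volume coordinates into the first camera.
    R0 = Rotation.from_rotvec(params[:3]).as_matrix()
    t0 = params[3:6]
    cost = 0.0
    for frame, (R_k, t_k) in zip(frames, rel_poses):
        # Compose the known relative pose of frame k with the base pose, so the
        # sensor-measured relative orientations constrain every frame's view.
        R = R_k @ R0
        t = R_k @ t0 + t_k
        sim_rgb, sim_depth = simulate_projection(volume, R, t)
        cost -= ncc(sim_rgb, frame.rgb)                     # appearance term
        cost += np.nanmean((sim_depth - frame.depth) ** 2)  # geometry term
    return cost

# Hypothetical usage, starting from an initial guess of zero rotation/translation:
# result = minimize(registration_cost, x0=np.zeros(6),
#                   args=(volume, frames, rel_poses, simulate_projection),
#                   method='Nelder-Mead')
```

The derivative-free Nelder-Mead optimizer appears in the usage comment only because the rendering-based cost has no closed-form gradient; consistent with the description above, any optimization algorithm could be substituted.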
At step 302, a surgical plan for the surgical procedure is received. The surgical plan can specify conditions of the procedure, such as the patient orientation, the location of the laparoscope entry point, and a designated portion of the target organ to be viewed.
At step 304, deformation of the target organ is simulated using a biomechanical model of the segmented organ. In particular, a 3D mesh of the target organ can be generated from the segmented target organ in the pre-operative 3D medical image volume, and a biomechanical model can be used to deform the 3D mesh in order to simulate expected tissue motion of the target organ given the conditions defined in the surgical plan. The biomechanical model calculates displacements at various points of the 3D mesh based on mechanical properties of the organ tissue and forces applied to the target organ due to the conditions of the surgical plan. For example, one such force may be a force due to gas insufflation of the abdomen in the surgical procedure. In a possible implementation, the biomechanical model models the target organ as a homogeneous linear elastic solid whose motion is governed by the elastodynamics equation. The biomechanical model may be implemented as described in International Patent Application No. PCT/US2015/28120, entitled “System and Method for Guidance of Laparoscopic Surgical Procedures through Anatomical Model Augmentation”, filed Apr. 29, 2015, or International Publication No. WO 2014/127321 A2, entitled “Biomechanically Driven Registration of Pre-Operative Image to Intra-Operative 3D Images for Laparoscopic Surgery”, the disclosures of which are incorporated herein by reference in their entirety.
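For reference, the governing equation for a homogeneous, isotropic linear elastic solid can be written in its standard continuum-mechanics form as follows; this is the textbook formulation, not a reproduction of the formulation in the applications cited above:

```latex
% Standard elastodynamics equation for a homogeneous linear elastic solid
\rho \, \frac{\partial^2 \mathbf{u}}{\partial t^2}
  = \nabla \cdot \boldsymbol{\sigma}(\mathbf{u}) + \mathbf{f},
\qquad
\boldsymbol{\sigma} = \lambda \, \operatorname{tr}(\boldsymbol{\varepsilon}) \, \mathbf{I}
  + 2 \mu \, \boldsymbol{\varepsilon},
\qquad
\boldsymbol{\varepsilon} = \tfrac{1}{2} \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right)
```

Here u is the tissue displacement field, ρ the tissue density, σ the stress tensor, ε the linearized strain tensor, λ and μ the Lamé parameters encoding the mechanical properties of the organ tissue, and f the applied body force (e.g., due to gas insufflation).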
At step 306, simulated intra-operative images for the surgical plan are generated using the simulated deformed target organ. The simulated intra-operative images are generated by extracting a plurality of virtual projection images of the simulated deformed target organ based on the conditions of the surgical plan, such as the designated portion of the organ to view, a range of possible orientations of the intra-operative camera, and the location of the laparoscope entry point. At step 308, rigid registration of the pre-operative 3D medical image volume to the simulated intra-operative images is performed. In particular, the registration method described above in connection with step 106 can be used to register the pre-operative 3D medical image volume to the simulated intra-operative images.
At step 310, a predicted registration quality measurement is calculated. In a possible implementation, the predicted registration quality measurement can be a surface error for the predicted registration. In particular, a total surface error between the simulated projection images of the pre-operative 3D volume and the simulated intra-operative images extracted from the simulated deformed target organ can be calculated. In addition, other metrics measuring the extent and quality of organ structure features within the intra-operative camera field of view for the current surgical plan can also be calculated. At step 312, it is determined if the predicted registration quality is sufficient. If it is determined that the predicted registration quality is not satisfactory, the method proceeds to step 314. If it is determined that the predicted registration quality is satisfactory, the method proceeds to step 316. In a possible implementation, it can be automatically determined if the predicted registration quality is sufficient, for example by comparing the predicted registration quality measurement (e.g., surface error) to a threshold value. In another possible implementation, the surgical planning module can present the results to the user and the user can decide whether the predicted registration quality is sufficient. For example, the predicted registration quality measurement or multiple predicted registration quality measurements, as well as the deformed target organ resulting from the biomechanical simulation, can be displayed on a display device. In addition to presenting the results of the biomechanical simulation and corresponding registration to the user to help guide the planning process, the surgical planning module may also provide suggestions regarding parameters of the surgical plan, such as port placement and patient orientation, to improve the registration results.
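One possible concrete form of the surface error in step 310 is a symmetric mean surface distance between the two surfaces, sketched below using SciPy's KD-tree for nearest-neighbor queries; the threshold in the usage comment is an illustrative assumption, not a value from the description above.

```python
# Sketch: symmetric mean surface distance between the registered pre-operative
# organ surface and the surface from the simulated intra-operative images.
import numpy as np
from scipy.spatial import cKDTree

def symmetric_surface_error(points_a, points_b):
    """points_a: (N, 3), points_b: (M, 3) sampled surface points (e.g., in mm)."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # each point in A to nearest in B
    d_ba, _ = cKDTree(points_a).query(points_b)  # each point in B to nearest in A
    return 0.5 * (d_ab.mean() + d_ba.mean())

# Hypothetical automatic check for step 312, with an assumed 5 mm threshold:
# needs_refinement = symmetric_surface_error(pre_op_pts, sim_intra_op_pts) > 5.0
```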
At step 314, if it is determined that the predicted registration quality is not satisfactory, the surgical plan is refined. For example, the surgical plan can be refined by automatically adjusting parameters, such as port placement and patient orientation, to improve the registration results, or by the user manually changing parameters of the surgical plan via user input to the surgical planning module. It is possible that the user manually changes the parameters of the surgical plan to incorporate suggested changes provided to the user by the surgical planning module. The method then returns to step 304 and repeats steps 304-312 to simulate the deformation of the organ and predict the registration quality for the refined surgical plan.
At step 316, when it is determined that the predicted registration quality for the surgical plan is sufficient, constrained rigid registration is performed using the surgical plan. The registration method described above can then be performed during the surgical procedure, with parameters of the surgical plan, such as the position of the patient and the range of possible camera orientations, used as a priori constraints on the registration.
The above-described methods for registering 3D volumetric image data to intra-operative images and for surgical planning to improve such a registration may be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high-level block diagram of such a computer is illustrated in the accompanying drawings.
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2015/030080 | 5/11/2015 | WO | 00