The present invention relates to an augmented reality method, more particularly to an augmented reality method and system for an endoscope.
Conventionally, during endoscopic surgery, surgical anatomy is visualized as a 2D image on a screen produced by a camera and an optical system passed through small incisions or natural orifices in a patient's body. Special surgical equipment is further introduced into the body through small incisions to perform the operation. Ideally, endoscopic surgery may cause less tissue injury than open surgery, thereby helping patients recuperate rapidly with less pain after surgery. However, when performing endoscopic surgery, a surgeon can view the anatomy only within a narrow visual field. Moreover, the 2D image of a conventional endoscope may not provide depth perception of the visual field. An inadvertent injury may easily occur during surgery if the surgeon is not well experienced.
Augmented reality (AR) is a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view. Various methods of applying AR to endoscopic visualization have been carried out to enhance the anatomical structures displayed in the endoscope video. However, these methods are still immature in the aspects of anatomical model building, alignment, and tracking.
Hence, there is still a need for a method capable of combining the 3D information of a virtual model with an endoscopic image to help the surgeon easily view the structure of the posterior surface of an organ.
The present invention aims to provide an augmented reality method and system that may combine the image of a virtual 3D model of the patient with the endoscopic image in real time, along with a real-time display of the relevant instruments for endoscopic surgery.
One aspect of the present invention provides an augmented reality method for an endoscope, including: obtaining a volume image of a subject and constructing a first virtual three-dimensional model by using the volume image; setting a reference frame of a position tracking device as a global reference frame; obtaining a second virtual three-dimensional model of the subject by using laser scanning and registering the second virtual three-dimensional model to the global reference frame; aligning the first virtual three-dimensional model with the global reference frame, matching the first virtual three-dimensional model with the second virtual three-dimensional model by an iterative closest point (ICP) algorithm in order to calculate a first transformation, and applying the first transformation to the first virtual three-dimensional model to generate a third virtual three-dimensional model on a render window; constructing an endoscopic virtual model based on geometrical parameters of an endoscope mounted with a first tracker, and tracking the first tracker by the position tracking device to provide an endoscopic virtual position; and moving the endoscopic virtual model on the render window to the endoscopic virtual position, imaging a virtual image corresponding to an endoscopic image imaged by the endoscope based on the endoscopic virtual position and the third virtual three-dimensional model, and superimposing the endoscopic image imaged by the endoscope with the virtual image to display a superimposed image.
Preferably, the volume image is an image obtained by computed tomography (CT) or magnetic resonance imaging (MRI).
Preferably, segmentation is performed on a specific area in the volume image, the segmented images of the specific area are stacked to form the first virtual three-dimensional model, and the first virtual three-dimensional model is registered to the global reference frame.
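The segmentation-and-stacking step above can be sketched as follows. This is a minimal illustration only, assuming grayscale slices segmented by a simple intensity threshold and a point-cloud representation of the resulting model; the function names `stack_segmented_slices` and `volume_to_point_cloud` are hypothetical and not part of the invention.

```python
import numpy as np

def stack_segmented_slices(slices, threshold):
    """Threshold each 2-D slice (e.g. a CT slice) to segment the specific
    area, then stack the binary masks into a 3-D volume."""
    masks = [s >= threshold for s in slices]
    return np.stack(masks, axis=0)  # shape (num_slices, H, W)

def volume_to_point_cloud(volume, spacing=(1.0, 1.0, 1.0)):
    """Convert occupied voxels into 3-D points, scaled by the voxel
    spacing, as a simple first virtual three-dimensional model."""
    idx = np.argwhere(volume)            # (N, 3) voxel indices
    return idx * np.asarray(spacing)     # (N, 3) model points
```

In practice the specific area would be segmented by a dedicated algorithm rather than a global threshold, and the stacked volume would typically be converted to a surface mesh for rendering; the point-cloud form above is merely the simplest model that later registration steps can consume.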
Preferably, the relatively static surface of the subject is obtained by laser scanning to construct a second virtual three-dimensional model, and the second virtual three-dimensional model is registered to the global reference frame.
Preferably, before the first virtual three-dimensional model is matched with the second virtual three-dimensional model, a local reference frame is established at a center of the first virtual three-dimensional model, and the local reference frame is aligned with the global reference frame.
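One simple way to realize the alignment above is to take the model's centroid as the origin of its local reference frame and translate the model so that this origin coincides with the global origin. The sketch below assumes a point-set model; the function name `align_local_frame_to_global` is hypothetical.

```python
import numpy as np

def align_local_frame_to_global(points):
    """Establish a local reference frame at the model's center (here its
    centroid) and translate the model so that the local frame origin
    coincides with the global frame origin."""
    centroid = points.mean(axis=0)
    return points - centroid, centroid
```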
Preferably, the method further includes displaying a relative position of the endoscopic virtual model to the third virtual three-dimensional model on the render window based on the endoscopic virtual position and superimposing the endoscopic image with the virtual image to display a superimposed image.
Preferably, the method further includes constructing a surgical instrument virtual model based on geometrical parameters of a surgical instrument mounted with a second tracker, tracking the second tracker by the position tracking device in order to provide a surgical instrument virtual position, and displaying a relative position of the surgical instrument virtual model to the third virtual three-dimensional model on the render window based on the surgical instrument virtual position.
Preferably, the method further includes photographing an endoscopic calibration tool having a plurality of marked points by using the endoscope to image the plurality of marked points, identifying the plurality of marked points by a computer algorithm to calculate an intrinsic parameter of the endoscope, and adjusting parameters of a virtual camera on the render window by using the intrinsic parameter.
Preferably, the endoscopic calibration tool is a hemisphere tool, and the plurality of marked points is marked on a curved surface of the hemisphere tool.
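The intrinsic-parameter calculation above can be illustrated with a deliberately simplified linear estimate. Assuming the 3-D positions of the marked points on the calibration tool are known, the camera frame coincides with the tool frame, and lens distortion is negligible, the pinhole model gives u = fx·X/Z + cx and v = fy·Y/Z + cy, which are linear in the unknowns and solvable by least squares. The function name `estimate_intrinsics` is hypothetical; a practical system would use a full calibration routine that also recovers extrinsics and distortion.

```python
import numpy as np

def estimate_intrinsics(points_3d, points_2d):
    """Estimate pinhole intrinsics (fx, fy, cx, cy) from marked points
    with known 3-D positions and their identified pixel coordinates."""
    X, Y, Z = points_3d.T
    u, v = points_2d.T
    # u = fx * (X/Z) + cx  -> linear in (fx, cx); likewise for v.
    Au = np.column_stack([X / Z, np.ones_like(u)])
    fx, cx = np.linalg.lstsq(Au, u, rcond=None)[0]
    Av = np.column_stack([Y / Z, np.ones_like(v)])
    fy, cy = np.linalg.lstsq(Av, v, rcond=None)[0]
    return fx, fy, cx, cy
```

The estimated parameters would then be applied to the virtual camera of the render window so that the virtual image shares the endoscope's projection geometry.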
When the method of the present invention is applied to the augmented reality of endoscopic surgery, the 3D information from the virtual model of the organ may be used to enhance the endoscopic image. In the view of the endoscopic augmented reality, the surgeon may see the structure of the posterior surface of the organ. This helps the less experienced surgeon avoid damage to the structure of the posterior surface. The virtual model further provides information about adjacent structures that are usually outside the visual field of the endoscope, improving operability when the surgeon operates under augmented reality.
To make the aforementioned purpose, technical features, and advantages of actual implementation more obvious and understandable to a person of ordinary skill in the art, the following description explains the preferred embodiments in more detail with reference to the related drawings.
Please refer to
In step S101, a preoperative volume image of the subject is acquired. The subject may be a human. The preoperative volume image may be obtained from computed tomography, magnetic resonance imaging, or any preoperative volume imaging technique known to a person of ordinary skill in the art. The preoperative volume image of the subject obtained in the aforementioned manner is input to the computer 101 to construct the first virtual three-dimensional model, which is displayed on the render window of the display 102. The first virtual three-dimensional model may be used as a three-dimensional model of the preoperative organ.
In step S103, the global reference frame is created by the position tracking device 103, and the global reference frame is then registered in the computer 101.
In step S105, the relatively static surface of the subject is obtained by laser scanning to construct a second virtual three-dimensional model, and the second virtual three-dimensional model is registered to the global reference frame and displayed on the render window of the display 102. In an embodiment, the second virtual three-dimensional model may be used as a real-time virtual three-dimensional model.
In step S107, the first virtual three-dimensional model is aligned with the global reference frame. The first virtual three-dimensional model is matched with the second virtual three-dimensional model by the iterative closest point algorithm to position the two models in the same frame and thereby calculate the first transformation between the two models. The computer 101 applies the first transformation to the first virtual three-dimensional model to generate the third virtual three-dimensional model on the render window.
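The matching step above can be sketched with a minimal point-to-point ICP: each source point is paired with its nearest target point, a least-squares rigid transform is solved (Kabsch/SVD), and the two steps repeat. This is a bare illustration assuming both models are point sets with a reasonable initial alignment; the function names `best_rigid_transform` and `icp` are hypothetical.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30):
    """Iterative closest point: pair each source point with its nearest
    target point, solve the rigid transform, apply it, and repeat.
    Returns the accumulated (R, t) -- the 'first transformation'."""
    R_total, t_total = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

A production system would use a spatial index (e.g. a k-d tree) for the nearest-neighbor search and a convergence criterion rather than a fixed iteration count; the brute-force distance matrix above is only for clarity.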
In step S201, the endoscopic virtual model is constructed based on the geometrical parameters of the endoscope 105 mounted with the first sensor. Specifically, a virtual model including the endoscope 105 and the first sensor is constructed as a virtual model of the endoscope by the known geometrical parameters of the endoscope 105 and the first sensor, such as length, width, height, and other specific size parameters. The endoscopic virtual model is displayed on the render window of display 102.
In step S203, the surgical instrument virtual model is constructed based on the geometrical parameters of the surgical instrument 107 mounted with the second sensor. Specifically, a virtual model including the surgical instrument 107 and the second sensor is constructed as the surgical instrument virtual model by the known geometrical parameters of the surgical instrument 107 and the second sensor, such as length, width, height, and other specific size parameters. The surgical instrument virtual model is then displayed on the render window of display 102.
In step S205, before the surgery, the third sensor is fixed on the subject to access the real-time movement of the subject.
In step S109, the first sensor mounted on the endoscope 105 and the second sensor mounted on the surgical instrument 107 are tracked by the position tracking device 103 to respectively obtain the endoscopic virtual position and the surgical instrument virtual position. Specifically, since the global reference frame is created based on the position tracking device 103, the endoscopic virtual model and the surgical instrument virtual model are registered to the global reference frame based on the endoscopic virtual position and the surgical instrument virtual position. Thus, the relative position of the endoscopic virtual model and the surgical instrument virtual model to the third virtual three-dimensional model may be displayed on the render window.
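Registering a tracked model to the global frame, as described above, amounts to applying the pose reported by the position tracking device to the model's geometry. A minimal sketch, assuming the device reports each sensor's pose as a rotation matrix and translation vector in the global frame (the function names `pose_to_matrix` and `place_model` are hypothetical):

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a homogeneous 4x4 matrix from a tracked pose (rotation R,
    translation t) expressed in the global reference frame."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def place_model(model_points, pose):
    """Move a virtual model (endoscope or surgical instrument) to its
    tracked position in the global frame for display on the render window."""
    homo = np.hstack([model_points, np.ones((len(model_points), 1))])
    return (homo @ pose.T)[:, :3]
```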
In step S111, the endoscopic image imaged by the endoscope 105 is superimposed with the virtual image corresponding to the endoscopic image on the render window to generate a superimposed image, wherein the virtual image is imaged based on the third virtual three-dimensional model. This enables the surgeon to view both the endoscopic image and the virtual three-dimensional model in the render window.
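The superimposition in step S111 can be sketched as an alpha blend of the rendered virtual image over the endoscopic frame, applied only where the virtual render is non-background. This assumes both images are float RGB arrays of identical shape in [0, 1] and that background pixels of the virtual render are zero; the function name `superimpose` is hypothetical.

```python
import numpy as np

def superimpose(endo_img, virtual_img, alpha=0.4):
    """Overlay the rendered virtual image on the endoscopic frame,
    blending only where the virtual render has content (non-zero)."""
    mask = np.any(virtual_img > 0, axis=-1, keepdims=True)
    blended = (1.0 - alpha) * endo_img + alpha * virtual_img
    return np.where(mask, blended, endo_img)
```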
In step S113, the computer 101 calculates the closest distance between the surgical instrument virtual model and the third virtual three-dimensional model, and the closest distance is shown in the superimposed image of the render window so that the surgeon can determine the relative position of the surgical instrument to the organ in real time.
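With both models as point sets in the global frame, the closest-distance computation of step S113 reduces to a minimum over pairwise distances. A brute-force sketch (the function name `closest_distance` is hypothetical; a real-time system would use a spatial index or signed distance field):

```python
import numpy as np

def closest_distance(instrument_pts, organ_pts):
    """Minimum Euclidean distance between the surgical-instrument virtual
    model and the third virtual three-dimensional model (as point sets)."""
    d = np.linalg.norm(instrument_pts[:, None, :] - organ_pts[None, :, :], axis=2)
    return d.min()
```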
In short, the augmented reality method for an endoscope of the present invention may achieve the purpose of combining endoscopic images with virtual three-dimensional models of organs by introducing preoperative volume images and real-time images into the same frame and constructing an integrated virtual three-dimensional model. Further, the present invention provides a relative position of the surgical instrument to the virtual three-dimensional model of the organ so that the surgeon may obtain the structure of the posterior surface of the organ to avoid damage to the structure of the posterior surface.
Hereafter, the augmented reality method for an endoscope of the present invention is further described by means of specific examples.
Please refer to
Please refer to
The global reference frame: The reference frame of the position tracking device 103 is considered the global reference frame of the system.
The alignment of the reference frame of the laser scanner: Please refer to
The endoscope and the surgical instrument virtual model: Please refer to
Constructing and scanning a real-time virtual three-dimensional model with a laser scanner: Please refer to
Registration of the preoperative organ three-dimensional model and the real-time virtual three-dimensional model:
A. Initial alignment: Please refer to
B. Registration refinement: Please refer to
Registration of the endoscope camera head: Please refer to
Tracking and displaying: Please refer to
The present invention has specifically described the augmented reality method and system for an endoscope in the aforementioned embodiment. However, it is to be understood by a person of ordinary skill in the art that modifications and variations of the embodiment may be made without departing from the spirit and scope of the present invention. Therefore, the scope of the present invention shall be defined by the following claims.