The present disclosure relates to a registration method for registering a patient and/or an image of a patient, in particular 3D image data, to a navigation system. In addition, the disclosure relates to a surgical navigation system for registration and for following/tracking in a surgical intervention on a patient, as well as to a computer-readable storage medium.
Surgical navigation usually involves the use of special instruments that have special marking systems with markers, such as infrared reference points. These instruments are called indicators or pointers and are specially adapted to mark a position in three-dimensional space with their tip, which is detected by a navigation system. In particular, such navigated pointers are used to scan landmarks on a patient's face in order to perform a registration of the patient and/or of an image of the patient with respect to the navigation system to be used.
Another registration alternative is the use of a followed/tracked (video) camera in combination with a navigation camera/tracking camera.
However, these registration methods have the disadvantage that they work very poorly in a prone position of the patient, since the patient's face is facing downward. In this case, the surgeons have to kneel down and get under the patient in order to detect the patient's face for registration, which leads to problems with a field of view or an (optical) line of sight between the manually guided pointer or video camera and the tracking camera/following camera/navigation camera. In order to establish visual contact between the tracking camera and the followed pointer or followed camera, surgeons often have to reposition the navigation camera/tracking camera several times during registration so that visual contact is re-established. This delays an intervention procedure and also poses safety risks for both the patient and the medical staff.
As an alternative to registration in the prone position, the patient can be registered in a supine or lateral position and then turned into the prone position for the intervention. However, this method is very risky and greatly increases the complexity of the surgical procedure.
Manual registration is also a question of individual skill. Since experience and skill differ from person to person, the quality of the registration may vary depending on who carries it out. Manual registration is moreover very time-consuming, in particular if it has to be repeated several times due to a lack of experience in order to obtain a sufficient result.
The objects of the present disclosure are therefore to avoid or at least reduce the disadvantages of the prior art and, in particular, to provide a registration method, a surgical navigation system and a computer-readable storage medium which provides a higher precision of a registration and a navigation and, in particular, allows a continuous and uninterrupted following and preferably an even better detection of an intervention region. A user, in particular a surgeon, is to receive an even better and safer registration and thus navigation modality. In addition, a partial object is to support a surgical workflow even better and to perform it faster. Another partial object can be seen in increasing the available navigation time during an intervention.
The objects of the present disclosure are solved with respect to a generic registration method, with respect to a generic surgical navigation system, and with respect to a computer-readable storage medium.
A basic idea of the present disclosure can thus be seen in the fact that an optical robot camera provided on a robot arm is used for precise registration of a patient (and/or of an image of a patient), which in particular spatially detects and samples/scans a face of the patient. This optical, robot-guided robot camera is in turn detected and followed in its pose by a tracking system, in particular with and via a navigation camera. The tracking system preferably detects the patient, for example via a patient tracker, and the robot camera, for example via a robot-camera tracker. The robot camera can be guided by the robot into different poses with different viewing directions in order to scan relevant body portions of the patient, in particular the face, and can create corresponding images. If the robot camera enters a spatial area in which there is no direct visual contact between the tracking system with the tracking camera and the robot head with the robot camera, in particular with the robot tracker, a robot-kinematics-based tracking system (robot-kinematics tracking system) can be used in addition for following the pose of the robot camera, in order to determine a pose of the robot head or of the robot camera relative to the patient and/or relative to the navigation system, in particular to the tracking camera, even without direct visual contact. Since the robot head has a fixed, known transformation to the robot camera, a pose of the robot head allows conclusions to be drawn about the pose of the robot camera, i.e. each local coordinate system (KOS) can be converted into the other. In particular, a transformation from a robot-camera tracker to the robot camera is known to the navigation system or can be determined.
Such a registration method or navigation method and navigation system offer the advantages of a high efficiency of a registration as well as a significant improvement in the reproducibility of such a registration, since it is carried out automatically and robot-guided. In particular, this reduces the stress factor for surgeons and assistants, and an intervention can be performed even more efficiently and quickly. The robot of the navigation system can preferably also implicitly check the accessibility of a working volume or intervention region during the registration process. As an additional advantage, the registration method and the navigation system can therefore be used to check the accessibility of a surgical working area for the robot during the registration process. In this way, repositioning of the robot, such as a mobile robot with a mobile cart as the robot base, can be avoided during the intervention.
In other words, in particular a registration in the field of robotics and a navigation in surgery, in particular neurosurgery, is disclosed. Here, an automatic and precise registration is performed using a (view) camera (robot camera) attached or mounted to a robot arm, which scans the patient's face in particular for registration of the patient to the navigation system. This (view) camera/robot camera is followed by a combination of using a tracking camera/navigation camera of the navigation system and an (integrated) robot tracking system (robot-kinematics tracking system). The surgical navigation system of the present disclosure thus has two different following modalities available for following the robot camera and can choose the best modality or use a combination for even more precise registration.
The major problem that assistants and surgeons have different skills for performing a registration is eliminated in the present registration method and in the surgical navigation system according to the present disclosure. Likewise, registrations do not have to be performed manually several times or repeated at a later point in time, for example if the surgeon was not satisfied with the work of the assistant; instead, a new automatic registration can be performed at any time, for example at regular time intervals or at certain points during the intervention, in order to improve the quality of a registration as well as its reproducibility and efficiency.
Whereas in the prior art, manual registration with a manually guided pointer or a followed video camera requires repeated repositioning of a tracking camera of the navigation system, the present registration method and navigation system provide for the use of a robot with a robot arm and a robot camera connected to the robot arm in order to overcome line-of-sight problems without having to move the external tracking system, in particular the navigation camera/tracking camera. Thus, the use of a robot or robot arm is provided for automatic registration by scanning the patient and automatically performing the comparison. In the event of line-of-sight problems or an interruption of a line of sight or field of view between the tracking system and the followed robot camera, the robot positioning system/robot tracking system/robot-kinematics tracking system is used as an intermediate coordinate reference in order to determine the pose of the robot camera via the robot-kinematics tracking system. An intervention with navigation procedures also becomes even safer, as the net navigation time available to the surgeon is further increased.
In other words, a registration method for an (automatic) registration of a patient for surgical navigation in a surgical intervention is disclosed, comprising the steps of: preferably detecting a pose of the patient, in particular a head of the patient, by a tracking camera, in particular a 3D-tracking camera, of a navigation system; detecting (a pose of) a robot head with a robot camera (of a medical robot), in particular via a robot-camera tracker, by the tracking camera of the navigation system and following the (position and/or orientation, in particular the pose of the) robot camera; controlling and moving the robot arm such that the robot camera in a first pose is directed to a region of interest of the patient, in particular to a face of the patient, and creating an image by the robot camera; checking whether there is a visual connection between the tracking camera and the robot head with the robot camera, in particular with the robot tracker, in the first pose; detecting, when there is a visual connection, a pose of the robot head with the robot camera via the tracking camera, or detecting, when the visual connection is missing or interrupted or reduced, a pose of the robot head with the robot camera via a robot-kinematics tracking system; and executing a registration of the patient and/or of the image (created by the robot camera) with respect to the navigation system, in particular with respect to the tracking camera. In particular, during registration, the image created by the robot camera is converted into coordinates or into a coordinate system of the tracking camera.
One could also say that the registration method or the navigation system for following (of a pose) primarily uses the tracking camera, but switches to a second mode of following when a visual contact deteriorates, i.e. to following based on the robot-kinematics tracking system.
The term “position” means a geometric position in three-dimensional space, which is specified in particular via coordinates of a Cartesian coordinate system. In particular, the position can be specified by the three coordinates X, Y and Z.
The term “orientation” in turn indicates an alignment in three-dimensional space. It can also be said that the orientation indicates a direction or rotation in three-dimensional space. In particular, the orientation can be indicated via three angles.
The term “pose” includes both a position and an orientation. In particular, the pose can be specified using six coordinates, three position coordinates X, Y and Z and three angular coordinates for the orientation.
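As an illustration of this six-coordinate representation, a pose can be sketched in code as three position coordinates plus three angles, expandable into a homogeneous 4x4 transformation matrix. The class name and the chosen Euler-angle convention (intrinsic Z-Y-X) are illustrative assumptions, not part of the disclosure:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    # Position: three Cartesian coordinates X, Y, Z.
    x: float
    y: float
    z: float
    # Orientation: three angles in radians (Z-Y-X Euler convention assumed).
    rz: float
    ry: float
    rx: float

    def to_matrix(self):
        """Return the 4x4 homogeneous transformation matrix for this pose."""
        cz, sz = math.cos(self.rz), math.sin(self.rz)
        cy, sy = math.cos(self.ry), math.sin(self.ry)
        cx, sx = math.cos(self.rx), math.sin(self.rx)
        # Rotation R = Rz @ Ry @ Rx, written out element by element.
        r = [
            [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
            [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
            [-sy,     cy * sx,                cy * cx],
        ]
        # Append the translation column and the homogeneous bottom row.
        return [r[0] + [self.x], r[1] + [self.y], r[2] + [self.z],
                [0.0, 0.0, 0.0, 1.0]]
```

Such a matrix form makes chaining poses (e.g. robot base to robot head to robot camera) a simple matrix multiplication.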
The term 3D defines that the data or structures are spatial, i.e. three-dimensional. The patient's body or at least a part of the body with a spatial extension can be digitally available as image data in a three-dimensional space with a Cartesian coordinate system (X, Y, Z), for example.
In particular, no tracking camera of the navigation system is required if only the robot-kinematics tracking system of the robot is used. In particular, a transformation between the robot base and the patient is already known or can be determined via the robot camera.
According to a further variant, the registration method may further comprise the steps of: controlling and moving the robot arm such that the robot camera in at least a second pose is directed to a further region of interest of the patient, in particular to a further facial region of the patient, and creating a second image by the robot camera; checking whether in the second pose there is a visual connection between the tracking camera and the robot head with the robot camera, in particular with the robot tracker; detecting, when there is a visual connection, the second pose of the robot head with the robot camera via the tracking camera, or detecting, when the visual connection is missing or interrupted or reduced, a pose of the robot head with the robot camera via the robot-kinematics tracking system. In particular, the registration method can run through several poses of the robot camera in succession and can create a corresponding image in each pose. In particular, a continuous movement with continuous followed poses of the robot camera and continuous images, similar to a video recording, can be created.
Preferably, in the step of detecting the pose, when a visual connection is missing or interrupted or reduced, the detection of the pose of the robot head with the robot camera may be performed via the robot-kinematics tracking system, wherein a transformation between a robot base of the robot and the tracking camera is used in order to determine a transformation between the local coordinate system of the tracking camera and the coordinate system of the robot camera and thus its pose via the transformation from the tracking camera to the robot base and the transformation of the robot-kinematics tracking system from the robot base to the robot head with the robot camera.
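The transformation chain just described can be sketched as a composition of homogeneous matrices. The function names and the pure-Python matrix multiplication are illustrative assumptions; a real system would use a calibrated linear-algebra library:

```python
def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def camera_pose_via_kinematics(T_track_base, T_base_head, T_head_cam):
    """Pose of the robot camera in the tracking-camera KOS without line of sight.

    T_track_base: tracking camera -> robot base (predetermined, or measured
                  once while the robot base is in view)
    T_base_head:  robot base -> robot head (from the robot-kinematics
                  tracking system, i.e. the joint sensors)
    T_head_cam:   robot head -> robot camera (fixed, known calibration)
    """
    return mat_mul(mat_mul(T_track_base, T_base_head), T_head_cam)
```

The key point is that each factor is available without optical contact to the robot head, so the product yields the robot-camera pose even when the line of sight is interrupted.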
According to one embodiment, the registration method may further comprise the step of: detecting a relative pose of the robot head with the robot camera to the pose of the head of the patient via the tracking camera in order to provide the robot with an orientation for automatic registration.
In particular, the registration method may further comprise the step of: using a 3D camera as a robot camera and scanning a face of the patient, in particular a head of the patient, from multiple sides in order to obtain an overall image of the face, in particular of the head.
In particular, the registration method may have the step of detecting a pose of the robot base and thus a reference coordinate system of the robot and the robot-kinematics tracking system by the tracking camera. In particular, the navigation system can also detect a pose of the patient, in particular of the patient's head, via the tracking camera, or the registration method may have this step, so that a transformation between the local coordinate system (KOS) of the robot and the local coordinate system (KOS) of the patient is known, and the robot-kinematics tracking system can act independently of the tracking camera.
According to a further embodiment, the step of registering the patient can only be carried out once the images have been created and the robot camera is back in the field of view of the tracking camera.
With regard to a surgical navigation system for a surgical intervention on a patient, the objects of the present disclosure are solved in particular by comprising: a robot with a robot base as a local connection point, a robot arm connected to the robot base and a robot head connected to the robot arm, in particular mounted, with a robot camera, so that a pose (i.e. a position and orientation) of the robot camera relative to the robot base is adjustable; a tracking system with a tracking camera adapted for following (a pose of) preferably a patient, in particular a head of the patient, preferably via a patient tracker, and for following the robot camera (10), in particular via a robot camera tracker; a robot-kinematics tracking system adapted to determine a relative pose of the robot head with the robot camera with respect to its robot base; and a control unit adapted to control the robot and to perform a registration of the patient, wherein the control unit is configured: in a first pose of the robot camera, to create an image by the robot camera, in particular of a face of the patient; when there is a visual connection between the tracking camera and the robot head with the robot camera, in particular with a robot-camera tracker, to detect the pose of the robot camera by the tracking camera (in particular relative to the latter) or, when the visual connection is missing or interrupted or reduced, to detect the pose of the robot camera (in space) via the robot-kinematics tracking system; and to perform a registration of the patient and/or of the image of the robot camera with respect to the navigation system, in particular with respect to the tracking camera. In particular, during registration, the image created by the robot camera has to be transferred to the coordinates or coordinate system (KOS) of the navigation system, in particular of the tracking camera. 
In particular, a transformation is determined between the coordinate system of the image of the robot camera and the coordinate system of the navigation system, in particular the KOS of the tracking camera.
According to a further embodiment, the surgical navigation system may comprise a robot-guided surgical microscope as a robot and a microscope head of the surgical microscope may function as a robot head with a robot camera. In other words, in particular, the robot camera used for registration may be a surgical microscope or a microscope head of the surgical microscope with an optical system and a sensor for a corresponding image.
Preferably, the surgical navigation system may comprise an infrared-based (optical) tracking system and/or may comprise an electromagnetic (EM) or EM-based tracking system and/or may comprise a machine vision-based tracking system or may be based on machine vision.
With respect to a computer-readable storage medium, the objects are fulfilled in that it comprises instructions which, when executed by a computer, cause the computer to perform the method steps of the registration method according to the present disclosure.
Any disclosure related to the navigation system of the present disclosure applies equally to the registration method of the present disclosure and vice versa.
The present invention is explained in more detail below with reference to the accompanying Figures via preferred configuration examples.
The Figures are schematic in nature and are intended only to aid understanding of the invention. Identical elements are marked with the same reference signs. The features of the various embodiments can be interchanged.
The navigation system 1 has a robot 2 with a robot base 4 as a local connection point. A multi-jointed robot arm 6 with several robot arm segments is connected to the robot base 4 to enable articulation. In turn, a terminal robot head 8, which can be moved to different positions in different orientations in the area of the patient, is attached to the robot arm 6. The robot head 8 in turn has a robot camera 10 at the front, which can create optical images, in this case three-dimensional images, i.e. 3D images with depth information. The robot arm 6 and the robot head 8 can also be used to set a pose, i.e. a position and orientation of the robot camera 10 in relation to the robot base 4 and thus also in relation to the patient P.
For detecting a position and a pose of objects, the navigation system 1 has an infrared-based tracking system 12, which has a tracking camera 14 at such a position in an operating room that it has a good overview of the intervention region, in particular is arranged in an elevated position. In the present case, the tracking camera 14 is a 3D tracking camera in the form of a stereo camera, so that a pose of an object to be tracked can be detected via two cameras spaced apart from each other and corresponding processing of the images, in particular via triangulation.
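The depth determination via triangulation mentioned above can be illustrated with the standard pinhole/disparity relation for a rectified stereo pair. This is a minimal sketch of the principle only; the function and numbers are assumptions, not values from the disclosure:

```python
def stereo_depth(focal_px, baseline_mm, x_left_px, x_right_px):
    """Depth of a point seen by a rectified stereo pair (pinhole sketch).

    With focal length f (in pixels), camera baseline b (in mm) and the
    horizontal image coordinates of the same point in both images, the
    disparity d = x_left - x_right yields the depth Z = f * b / d.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must lie in front of the cameras (d > 0)")
    return focal_px * baseline_mm / disparity
```

The further apart the two cameras are spaced (larger baseline), the larger the disparity for a given depth and hence the better the depth resolution, which is why tracking cameras use a wide stereo base.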
In this embodiment, infrared (IR) markers or IR trackers are arranged on the objects to be tracked. These IR markers have four rigidly spaced IR spheres for position and orientation detection. Specifically, a patient tracker 16 in the form of an IR tracker is arranged on the patient's head, and a robot-camera tracker 18 is arranged on the robot head 8 with the robot camera 10. This enables the (stereo) tracking camera 14 of the navigation system 1 to determine and follow the pose of the patient and the pose of the robot camera 10 via a predefined relationship between the respective tracker and the patient's head or the robot camera.
In addition to the optical, infrared-based tracking system 12, the present navigation system 1 also has a robot-kinematics tracking system 20 of the robot, which is adapted to determine a relative pose of the robot head 8 with the robot camera 10 in relation to its robot base 4. Specifically, sensors are installed in the robot 2 at the joints, which detect a joint position of the robot 2 and provide these settings of the joints in a computer-readable form. A processing unit, here formed by a central control unit 22, then determines the pose of the robot head 8 on the basis of the detected position of the robot and then the relative pose (of the local coordinate system (KOS)) of the robot camera 10 in relation to (the local coordinate system (KOS)) of the robot base 4 via a known transformation from robot head 8 to robot camera 10.
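The determination of the robot-head pose from the detected joint positions is a forward-kinematics computation. A minimal planar two-link sketch illustrates the principle; a real 6- or 7-axis arm would chain full 4x4 transforms, e.g. from Denavit-Hartenberg parameters, and none of the names below come from the disclosure:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward-kinematics sketch: joint sensor readings -> head pose.

    Walks the arm link by link, accumulating the joint rotations and
    advancing the tip position along each link. Returns the 2D position
    of the robot head and its heading angle.
    """
    x = y = 0.0
    theta = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                    # accumulate joint rotation
        x += length * math.cos(theta)     # advance along the rotated link
        y += length * math.sin(theta)
    return x, y, theta
```

Because the joint sensors deliver these angles continuously, the control unit can recompute the robot-head pose, and via the fixed head-to-camera calibration the robot-camera pose, at any time without optical tracking.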
The control unit 22 is further adapted to control the robot 2 and to perform a registration of the patient P, wherein the control unit 22 is configured to control and move the robot camera 10 into a first pose and to create an image by the robot camera 10 in this first pose of a face of the patient P. Thus, a 3D surface image is created by the robot camera 10 in a first pose.
If, as shown in the Figure, there is a direct visual connection between the tracking camera 14 and the robot head 8 with the robot camera 10, in particular the robot-camera tracker 18, the pose of the robot camera 10 is detected directly via the tracking camera 14.
If, on the other hand, such a visual connection is missing or is interrupted or severely impaired/reduced, the present navigation system 1 also has a robot-based or kinematics-based tracking system, i.e. the robot-kinematics tracking system 20, in addition to the optical tracking system 12. This provides a relative pose of the robot camera 10 in relation to the robot base 4. The tracking system 12 can determine a transformation between the local coordinate system of the tracking system 12 or the tracking camera 14 and the local KOS of the robot base 4 via a predetermined transformation between the tracking camera 14 and the robot base 4, or via direct detection of the robot base 4 in visual connection/line of sight, in particular via a robot base tracker (not shown here). This means that even in poses of the robot head 8 and the robot camera 10 in which the line of sight between the tracking camera 14 and the robot camera 10 or the robot-camera tracker 18 is disturbed, the pose of the robot camera 10 can still be determined via the chain from the tracking camera 14 via the robot base 4 and the robot head 8 to the robot camera 10.
In the embodiment according to
In
In
If, as shown in
Once all relevant images of patient P's face have been created and the 3D surface scan has been completed, registration is carried out and completed as soon as the robot-camera tracker 18 is back in the field of view of the tracking camera 14.
This registration process may be repeated at various times before or during the intervention in order to achieve particularly accurate registration.
The registration method for an automatic registration of a patient P for a surgical navigation of a surgical intervention has as a (here first) step S1 a detecting of a pose of the patient P, in particular of a head of the patient P, by a tracking camera 14, in particular a 3D-tracking camera, of a surgical navigation system 1.
In a subsequent step S2, a pose of a robot head 8 with a robot camera 10 of a medical robot 2 is detected via a robot-camera tracker 18 by the tracking camera 14 of the navigation system 1, and the robot camera 10 is followed.
In a step S3, the robot arm is controlled and moved accordingly in such a way that the robot camera 10 is directed in a first pose toward a region of interest of the patient P, in particular toward a face of the patient P.
In a step S4, an image, in this case a 3D surface image, is then created by the 3D-capable robot camera 10.
In a condition B5, the condition is checked as to whether in the first pose there is a visual connection between the tracking camera 14 and the robot head 8 with the robot camera 10, in this case a visual connection to the robot-camera tracker 18.
If the check is positive and there is a visual connection, the registration method proceeds to step S6.1, in which the pose of the robot head 8 with the robot camera 10 is detected via the tracking camera 14, in this case via the robot-camera tracker 18.
If the visual connection is interrupted or severely impaired so that the pose can no longer be determined with sufficient accuracy via the tracking camera 14, the pose of the robot head 8 with the robot camera 10 is determined in step S6.2 via a robot-kinematics tracking system 20. Here, a pose of the robot camera 10 can either be determined by the robot-kinematics tracking system 20 in relation to the patient P completely without the tracking camera 14, or the transformation can be provided implicitly by integrating the tracking camera 14 and a corresponding calculation along the chain robot camera 10 - robot head 8 - robot base 4 - tracking camera 14 - patient P.
Finally, in step S7, when the robot camera 10 is back in the field of view of the tracking camera 14, the patient P is registered with the navigation system 1.
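The acquisition steps S1 to S6.2 just described can be summarized as control flow. All interfaces below (tracking_cam, robot, robot_cam) are hypothetical placeholders, not an API of the disclosed system; the actual surface matching of step S7 then operates on the returned data:

```python
def acquire_registration_data(tracking_cam, robot, robot_cam, targets):
    """Sketch of steps S1-S6.2 of the registration method.

    Returns the detected patient pose and, for each target pose of the
    robot camera, the created image together with the robot-camera pose
    from whichever tracking modality was available.
    """
    patient_pose = tracking_cam.detect_patient()          # S1: patient pose
    tracking_cam.detect_robot_camera()                    # S2: initial detection
    samples = []
    for target in targets:
        robot.move_camera_to(target)                      # S3: move to pose
        image = robot_cam.capture_3d_surface()            # S4: create image
        if tracking_cam.has_line_of_sight():              # B5: visual check
            cam_pose = tracking_cam.robot_camera_pose()   # S6.1: optical
        else:
            cam_pose = robot.kinematics_pose()            # S6.2: kinematics
        samples.append((image, cam_pose))
    return patient_pose, samples                          # input to S7
```

The branch at B5 is the core of the dual-modality following: every image remains usable because its camera pose is always known from one of the two tracking systems.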
Number | Date | Country | Kind |
---|---|---|---|
10 2021 134 553.7 | Dec 2021 | DE | national |
This application is the United States national phase entry of International Application No. PCT/EP2022/085625, filed on Dec. 13, 2022, and claims priority to German Application No. 10 2021 134 553.7, filed on Dec. 23, 2021. The contents of International Application No. PCT/EP2022/085625 and German Application No. 10 2021 134 553.7 are incorporated by reference herein in their entireties.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2022/085625 | 12/13/2022 | WO |