This application is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2022-0144585 filed on Nov. 2, 2022 in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.
The disclosure relates to image registration, and more particularly to an image registration method and a medical operating robot system for performing the same, in which operation planning and navigation information is displayed on a desired image through registration between a pre-operative 3D image and an intra-operative 2D image.
Operative navigation technology based on image registration has been used to assist a doctor in an operation.
Before starting an operation, a doctor makes an operation plan for determining an optimal implant product, a surgical position for an implant, a trajectory of a surgical instrument, or the like based on a 3D computed tomography (CT) image of a surgical site. During the operation, the doctor operates the surgical instrument while comparing and checking the real-time positions of the surgical instrument, the implant, etc. corresponding to operating status with the operation plan to ensure that the operation is proceeding well according to the operation plan.
While the operation plan is made based on a 3D image from a CT scanner mainly used before the operation, the operating status is provided based on a 2D image from an imaging device, e.g., a C-arm mainly used during the operation because the surgical instrument and the C-arm are registered to the same coordinate system during the operation.
Therefore, 3D-2D image registration is needed to provide integrated information about the operation plan and the operating status, and it is required to improve the accuracy of the image registration and shorten the processing time of the image registration for a successful operation.
However, a patient's movement may cause twisting of the spine, left-right asymmetry of the pelvis, etc., resulting in inconsistent images whenever an image is acquired. Therefore, minute changes in pose due to the patient's movement, in particular due to joint rotation rather than position change, should be quickly processed to perform the image registration. However, image registration technology reflecting such a change in pose caused by joint rotation has not been utilized in the field of conventional navigation technology.
(Patent Document 1) Korean Patent No. 2203544
(Patent Document 2) Korean Patent No. 2394901
Accordingly, an aspect of the disclosure is to provide an image registration method capable of quick image registration processing and compensation for movement due to joint rotation, and a medical operating robot system, image registration apparatus, and computer program medium for performing the same.
According to an embodiment of the disclosure, there is provided an image registration method, steps of which are performed by an image registration apparatus including a processor, the method including: acquiring a 3D image of a patient's surgical site from a 3D imaging apparatus before an operation; extracting digitally reconstructed radiograph (DRR) images in an anterior-posterior (AP) direction and a lateral-lateral (LL) direction from the 3D image; acquiring 2D images for an AP image and an LL image of the patient's surgical site from a 2D imaging apparatus during the operation; determining a first rotation angle between a predetermined reference position of the patient's surgical site and a first reference position of the AP image or LL image corresponding to the reference position, based on a first rotation axis passing through a predetermined first origin and parallel to a cross product vector of first normal vectors for planes of the AP image and the LL image, from a geospatial relationship between a source and a detector with respect to the DRR image; determining a second rotation angle between the reference position and a second reference position of the AP image or LL image corresponding to the reference position, based on a second rotation axis passing through a predetermined second origin and parallel to a cross product vector of second normal vectors for planes of the AP image and the LL image, from a geospatial relationship between a source and a detector with respect to the 2D image; and determining a transformation relationship between the 2D image and the DRR image based on the first and second rotation angles, from the geospatial relationships between the sources and the detectors of the DRR and 2D images.
Here, the first reference position and the second reference position may include a center of the AP image or LL image for each of the 2D image and the DRR image, or a line or plane including the center.
The image registration method may further include performing operation planning based on the 3D image by the image registration apparatus, wherein the first origin for the DRR image is determined based on a relative relationship of a trajectory of a surgical instrument for mounting an implant or a mounting position of the implant applied to the operation planning. Further, the reference position for the DRR image or 2D image may be determined based on a user's input.
In the image registration method, the geospatial relationship between the source and the detector for the DRR image may include an orthogonal projection relationship, and the geospatial relationship between the source and the detector for the 2D image may include a perspective projection relationship.
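The distinction between the two projection models can be sketched in code. The following Python fragment is only an illustrative sketch with hypothetical geometry values: the DRR side uses parallel rays along the detector normal (orthogonal projection), while the 2D image side casts diverging rays from a point source (perspective projection).

```python
import numpy as np

def orthographic_project(point, plane_origin, normal):
    """Orthogonal (parallel-ray) projection of a 3D point onto a plane,
    as assumed for the DRR source-detector geometry."""
    n = normal / np.linalg.norm(normal)
    return point - np.dot(point - plane_origin, n) * n

def perspective_project(point, source, plane_origin, normal):
    """Perspective (point-source) projection onto a plane, as assumed
    for the 2D (C-arm) source-detector geometry."""
    n = normal / np.linalg.norm(normal)
    d = point - source                       # ray direction from the source
    t = np.dot(plane_origin - source, n) / np.dot(d, n)
    return source + t * d

# Hypothetical geometry: detector plane through the origin, source above it
p = np.array([10.0, 5.0, 30.0])
det_origin = np.array([0.0, 0.0, 0.0])
det_normal = np.array([0.0, 0.0, 1.0])
src = np.array([0.0, 0.0, 100.0])

print(orthographic_project(p, det_origin, det_normal))   # parallel rays
print(perspective_project(p, src, det_origin, det_normal))  # diverging rays
```

Note how the same 3D point lands at different detector positions under the two models; this difference is why the DRR and 2D sides are treated with separate geospatial relationships.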
In addition, the image registration method may further include: determining a first volume of interest where planes intersect as the plane of the AP image and the plane of the LL image are moved in directions of the first normal vectors, from the geospatial relationship between the source and the detector for the DRR image; and determining a second volume of interest where planes intersect as the AP image and the LL image are moved in directions of the second normal vectors within a perspective projection range, wherein the geospatial relationship between the source and the detector for the 2D image includes a perspective projection relationship. Here, the first origin may include a center of the first volume of interest, and the second origin may include a center of the second volume of interest.
Further, the image registration method may further include: determining a first region of interest for each of the AP image and LL image of the DRR image; and determining a second region of interest corresponding to the first region of interest for each of the AP image and LL image of the 2D image, wherein the first reference position is positioned within the first region of interest, and the second reference position is positioned within the second region of interest. Further, the method may further include: determining a first volume of interest where planes intersect as a region of interest on the AP image and a region of interest on the LL image are moved in directions of the first normal vectors, from the geospatial relationship between the source and the detector for the DRR image; and determining a second volume of interest where planes intersect as a region of interest on the AP image and a region of interest on the LL image are moved in directions of the second normal vectors within a perspective projection range, wherein the geospatial relationship between the source and the detector for the 2D image includes a perspective projection relationship, and wherein the first origin may include a center of the first volume of interest, and the second origin may include a center of the second volume of interest.
Further, in the image registration method, the first origin may include a center between target positions of a patient's spine pedicle screws, and the first rotation angle may include an angle formed between a line segment that connects the first origin and a midpoint between the pedicle screw entry points, and the first normal vector that passes through the center of the first volume of interest, with respect to the first origin.
In addition, each first region of interest for the AP image and LL image of the DRR image may include a rectangle, and regarding the DRR image, the image registration method may include: a first step of calculating first intersection points between an epipolar line on the LL image for the vertices of the region of interest on the AP image and a midline connecting midpoints of an outer circumference or lateral sides of a region of interest on the LL image; a second step of acquiring four reconstructed points by orthogonal projection of the first intersection points to the normal vectors from the vertices of the region of interest on the AP image; a third step of calculating second intersection points between an epipolar line on the AP image for the vertices of the region of interest on the LL image and a midline connecting midpoints of an outer circumference or lateral sides of a region of interest on the AP image; a fourth step of acquiring four reconstructed points by orthogonal projection of the second intersection points to the normal vectors from the vertices of the region of interest on the LL image; and a fifth step of calculating a first volume of interest in a hexahedron formed based on eight reconstructed points obtained through the first to fourth steps.
Further, the determining the second volume of interest may include: regarding the 2D image, a first step of calculating first intersection points between an epipolar line on the LL image for the vertices of the region of interest on the AP image and a midline connecting midpoints of an outer circumference or lateral sides of a region of interest on the LL image; a second step of acquiring four reconstructed points by perspective projection of the first intersection points to perspective projection vector from the vertices of the region of interest on the AP image toward the source; a third step of calculating second intersection points between an epipolar line on the AP image for the vertices of the region of interest on the LL image and a midline connecting midpoints of an outer circumference or lateral sides of a region of interest on the AP image; a fourth step of acquiring four reconstructed points by perspective projection of the second intersection points to the perspective projection vectors from the vertices of the region of interest on the LL image toward the source; and a fifth step of calculating a second volume of interest in a hexahedron based on eight reconstructed points obtained through the first to fourth steps.
Further, according to another embodiment of the disclosure, there is provided an image registration method, steps of which are performed by an image registration apparatus including a processor, the method including: acquiring a 3D image of a patient's surgical site from a 3D imaging apparatus before an operation; extracting DRR images in an AP direction and an LL direction from the 3D image; acquiring 2D images for an AP image and an LL image of the patient's surgical site from a 2D imaging apparatus during an operation; determining a first region of interest for each of the AP image and the LL image of the DRR image; determining a second region of interest corresponding to the first region of interest with respect to each of the AP image and the LL image of the 2D image; determining a first volume of interest formed by intersection of planes upon parallel translation of a region of interest on the AP image and a region of interest on the LL image in a direction of a first normal vector to the planes of the AP image and the LL image, from a geospatial relationship between a source and a detector with respect to the DRR image; determining a second volume of interest formed by intersection of planes upon translation of a region of interest on the AP image and a region of interest on the LL image in a direction of a second normal vector to the AP image and the LL image of the 2D image within a perspective projection range, wherein the geospatial relationship between the source and the detector for the 2D image includes a perspective projection relationship; determining a first displacement between a first reference position within the first volume of interest corresponding to a predetermined first reference position in the first region of interest and a predetermined reference position corresponding to the first reference position; determining a second displacement between the reference position and a second reference position within the second volume of interest for a predetermined second reference position within the second region of interest corresponding to the reference position; and determining a transformation relationship to minimize a Euclidean distance between vertices of the first region of interest and vertices of the second region of interest, as the transformation relationship between the 2D image and the DRR image is determined from geospatial relationships for the source and the detector of each of the DRR image and the 2D image, based on the first displacement and the second displacement.
Here, the determining the first displacement may include determining a first rotation angle based on an angle between the reference position and the first reference position, with respect to a first rotation axis passing through a predetermined first origin and parallel to a cross product vector of the first normal vectors for planes of the AP image and the LL image; and the determining the second displacement may include determining a second rotation angle based on an angle between the reference position and the second reference position, with respect to a second rotation axis passing through a predetermined second origin and parallel to a cross product vector of the second normal vectors for planes of the AP image and the LL image.
In addition, the determining the first and second volumes of interest may include forming a polyhedron by projecting an epipolar line of vertices of the first and second regions of interest to the first and second normal vectors.
Further, according to still another embodiment of the disclosure, there is provided an image registration apparatus including a processor to perform the foregoing image registration method.
Further, according to still another embodiment of the disclosure, there is provided a medical operating robot system including: a 2D imaging apparatus configured to acquire a 2D image of a patient's surgical site during an operation; a robot arm including an end effector to which a surgical instrument is detachably coupled; a position sensor configured to detect a real-time position of the surgical instrument or the end effector; a controller configured to control the robot arm based on predetermined operation planning; a display; and a navigation system configured to display the planning information about the surgical instrument or implant on a 2D image acquired during an operation or display the real-time position of the surgical instrument or implant on the 2D image or a 3D image acquired before the operation, through the display, by performing the foregoing image registration method.
Further, according to still another embodiment of the disclosure, there is provided a computer program medium storing software to perform the foregoing image registration method.
The above and/or other aspects will become apparent and more readily appreciated from the following description of embodiments, taken in conjunction with the accompanying drawings, in which:
Below, embodiments of the disclosure will be described with reference to the accompanying drawings.
Referring to
The memory 11 refers to a computer-readable recording medium, and stores at least one computer program code to be performed by the processor 13. Such a computer program code may be loaded from a floppy drive, a disk, a tape, a digital versatile disc (DVD)/compact disc read only memory (CD-ROM) drive, a memory card, etc., which are separated from the memory 11, into the memory 11. The memory 11 may store software for the image registration, a patient's medical image or data, etc.
The processor 13 executes and processes computer program instructions through basic logic, calculations, operations, etc., and the computer program code stored in the memory 11 is loaded into and executed by the processor 13. The processor 13 may execute an algorithm stored in the memory 11 to perform a series of 2D-3D image registration operations.
As shown in
Below, the image registration method performed by the image registration apparatus 10 (or the processor 13) according to an embodiment of the disclosure will be described.
Referring to
First, a 3D image is acquired by taking a pre-operative computed tomography (CT) image of a patient's surgical site through an imaging apparatus (S1). Here, another imaging apparatus for acquiring the 3D image, e.g., a magnetic resonance imaging (MRI) apparatus, may be used as an alternative to CT, and the disclosure is not limited to a specific imaging apparatus.
A doctor uses operation planning software to make an operation plan for a patient's 3D image (S2). For example, in the case of an operation of inserting and fixing a screw into a pedicle during a spinal operation, the selection of a screw product based on the diameter, length, material, etc. of the screw, a pedicle entry point for the screw, a target where the end of the screw is settled, etc. may be set and displayed on the 3D image. Such planning software is provided by many companies in various operative fields.
Next, the image registration apparatus 10 extracts a volume of interest with respect to the surgical site for which the operation plan is established (S3). Such extraction may be performed automatically by an algorithm with respect to the position of a planning object, e.g., the screw displayed on the 3D image, or may be performed manually as a doctor adjusts a given volume boundary.
Referring to
From such extracted volume of interest, an anterior-posterior (AP) image and a lateral-lateral (LL) image are generated as digitally reconstructed radiograph (DRR) images (S4). In other words, as shown in
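Because the DRR here is based on orthogonal projection, each detector pixel accumulates intensity along parallel rays through the volume, which can be sketched as a simple axis-aligned ray sum. The following Python toy uses a random array standing in for the extracted CT volume of interest; the axis assignments are hypothetical.

```python
import numpy as np

# Hypothetical CT volume of interest: intensities indexed as (x, y, z),
# with x the lateral axis and y the anterior-posterior axis (assumed layout).
rng = np.random.default_rng(0)
ct_volume = rng.random((64, 64, 64))

# Orthogonal-projection DRRs: sum the volume along parallel rays.
drr_ap = ct_volume.sum(axis=1)   # rays along the anterior-posterior axis
drr_ll = ct_volume.sum(axis=0)   # rays along the lateral-lateral axis

print(drr_ap.shape, drr_ll.shape)  # two 2D projection images
```

A production DRR generator would integrate attenuation along arbitrarily oriented rays, but the parallel-ray sum above captures the orthogonal-projection assumption used for the DRR geometry.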
Next, the AP image and the LL image of the surgical site are acquired through the C-arm equipment during the operation (S5), and the C-arm equipment is registered to a spatial coordinate system based on a marker placed on a part of a patient's body (hereinafter referred to as a 'PM' marker) or a marker to be referenced in other operative space (S6). Korean Patent No. 2203544, filed by the present applicant, discloses technology for registering the C-arm equipment to a 3D space or registering a 2D image to the 3D space, and the '544 patent is incorporated by reference into the present disclosure in its entirety. The technology for registering the C-arm equipment to the space has been publicly known in many other documents in addition to the '544 patent, and the disclosure is not limited to specific spatial registration technology.
The image registration apparatus 10 sets a first region of interest (ROI) and a second region of interest (ROI) for the DRR image and the C-arm image, respectively (S7 and S8). Each region of interest may be set in units of vertebral bones in the case of an operation of fixing a pedicle screw. As shown in
Here, it is noted that the first region of interest for the DRR image and the second region of interest for the C-arm image are extracted equivalently to each other. At least the vertices of the rectangle of the region of interest are selected as points having equivalences corresponding to each other. The equivalence does not mean a perfect match, but means, for example, that the first region of interest and the second region of interest are extracted or selected so that a relationship between the image feature of the vertebral bone and the four vertices of the selected region of interest is kept constant.
In this embodiment, for example, with respect to the vertebral bone, the first region of interest and the second region of interest are selected so that the tip of the spinous process can be disposed at the center of the region of interest for the AP image, and the outer margin of the vertebral bone can be uniformly applied to the first and second regions of interest in each of the AP/LL images.
Referring to
Next, the image registration apparatus 10 reconstructs the first region of interest displayed on the AP image and the LL image of the DRR image into a space, i.e., a volume of interest (S9). Intuitively, when the first region of interest in the AP image is moved parallel to a normal vector of the AP image on the assumption that the AP image is positioned at the virtual detector, and at the same time the first region of interest in the LL image is moved parallel to the normal vector of the LL image on the assumption that the LL image is positioned at the virtual detector, the space where the two planes intersect, i.e., the space where the first regions of interest on the two planes intersect each other, will be called a first volume of interest.
The process of calculating the first volume of interest based on a coordinate system will be described with reference to
Referring to
the AP image plane and the LL image plane, when the space formed between them is called the CT volume, the CT volume is expressed as a hexahedron that has six boundary faces and a three-axis CT coordinate system.
Because the DRR image is based on the orthogonal projection, such a hexahedral CT volume is formed, and the normal vector passing through A1 reaches a certain height at A′1 while intersecting the top and bottom boundary faces of the CT volume.
Therefore, A′1 and the points I1 and I2 intersecting the top and bottom boundary faces are obtained as follows.
A′1 = λ1NAP + A1 [Equation 1]

I1 = finter(πTop, A′1, A1) [Equation 2]

I2 = finter(πBot, A′1, A1) [Equation 3]
Where, λ1 is an arbitrary number, NAP is a normal vector to the AP plane, and finter is a function that takes two points and one plane as input variables and obtains the intersection point between the line connecting the two points and the plane.
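The finter function of Equations 2 and 3 can be sketched as a standard line-plane intersection. In this hedged Python illustration, the plane is encoded by a point on it and its normal (an assumed representation, since the disclosure does not fix one), and all coordinates are hypothetical.

```python
import numpy as np

def f_inter(plane_point, plane_normal, a, b):
    """Intersection of the line through points a and b with a plane,
    mirroring the f_inter function of Equations 2 and 3."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = b - a
    denom = np.dot(d, n)
    if abs(denom) < 1e-12:
        raise ValueError("line is parallel to the plane")
    t = np.dot(plane_point - a, n) / denom
    return a + t * d

# A1 on the AP detector plane; A'1 = lambda1 * N_AP + A1 (Equation 1)
A1 = np.array([5.0, -20.0, 3.0])
N_AP = np.array([0.0, 1.0, 0.0])
A1p = 100.0 * N_AP + A1                    # lambda1 = 100 (arbitrary)

# Hypothetical top boundary face of the CT volume: the plane y = 40
I1 = f_inter(np.array([0.0, 40.0, 0.0]), N_AP, A1p, A1)
print(I1)  # point where the normal through A1 pierces the top face
```

I2 would be obtained the same way with the bottom boundary face πBot substituted for πTop.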
Therefore, I1 is obtained as follows.
Where, PTop is πTop, and I2 is obtained in the same way as I1. Referring to
I′1 = λ2NLL + I1 [Equation 5]

I′2 = λ2NLL + I2 [Equation 6]
Where, λ2 is an arbitrary number, and NLL is a normal vector of the LL plane.
Next, referring to
By projecting the intersection point C3 between the region of interest and the epipolar line obtained as above onto the normal A1-A′1 from the vertex A1, P1 is obtained as shown in
By applying the same process as that of A1 to the other three vertices, it will be understood that the plane of the first region of interest on the AP image is reconstructed as transferred to the CT volume. Likewise, it will be understood that the vertices on the LL image are reconstructed as transferred to the CT volume.
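Obtaining P1 from C3 amounts to orthogonally projecting a point onto the line A1-A′1 along the normal. A minimal Python sketch with hypothetical coordinates:

```python
import numpy as np

def project_onto_line(p, a, direction):
    """Orthogonal projection of point p onto the line through a with the
    given direction, as used to obtain P1 from C3 on the normal A1-A'1."""
    d = direction / np.linalg.norm(direction)
    return a + np.dot(p - a, d) * d

A1 = np.array([0.0, 0.0, 0.0])            # vertex of the region of interest
N_AP = np.array([0.0, 1.0, 0.0])          # normal through A1 toward A'1
C3 = np.array([3.0, 25.0, -4.0])          # intersection with the epipolar line

P1 = project_onto_line(C3, A1, N_AP)
print(P1)  # only the component of C3 along the normal survives
```

Repeating this for each vertex of the AP and LL regions of interest yields the eight reconstructed points bounding the first volume of interest.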
Compared to
Therefore, I1, I2, I′1, I′2, C3, and P1 in
Where, SLL is a source position on the LL image, λ2 is an arbitrary number, and C1 and C2 are the centers of the left and right sides in the region of interest on the LL image.
By repeating the same process with respect to the eight vertices A1 to A8, eight points PMP1, PMP2, ..., PMP8 are reconstructed in the C-arm volume as shown in
Referring to
Referring back to the flowchart of
In this embodiment, the reference position is the tip of the spinous process, and the corresponding first reference position corresponds to the normal vector of an AXIAL image defined in the same direction as the cross product vector of the normal vector of the AP image plane and the normal vector of the LL image plane. Meanwhile, the first origin determining the position of the rotation axis should ideally reflect the rotation center of the spine, but this is difficult to define. Therefore, one of two options (to be described later) is selected.
First, referring to
On the other hand, referring to
To obtain the tilted rotation angle θ (hereinafter referred to as a ‘first rotation angle’), a doctor may use a pre-operative planning object as shown in
In this case, it will be understood that the first origin through which the first rotation axis passes is selected as the center Tc between the left and right targets Tl and Tr, and differs in height by 'd' from the center of the first volume of interest. This makes it easy to calculate a rotation value based on a doctor's operation plan, and helps to perform quick registration processing. As discussed for the rotation of the C-arm image, the center of the first volume of interest may be used instead of the target center as the first rotation origin.
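The axis and angle described above can be illustrated with a short Python sketch: the rotation axis direction is the cross product of the AP and LL plane normals, and the first rotation angle is the signed angle, about that axis, between the segment from the target center to the entry-point midpoint and the AP normal. All coordinates below are hypothetical, chosen only for the example.

```python
import numpy as np

def signed_angle(v1, v2, axis):
    """Signed angle from v1 to v2 about the given rotation axis."""
    a = axis / np.linalg.norm(axis)
    # Work with the components perpendicular to the axis
    v1p = v1 - np.dot(v1, a) * a
    v2p = v2 - np.dot(v2, a) * a
    return np.arctan2(np.dot(np.cross(v1p, v2p), a), np.dot(v1p, v2p))

N_AP = np.array([0.0, 1.0, 0.0])          # AP image plane normal (assumed)
N_LL = np.array([1.0, 0.0, 0.0])          # LL image plane normal (assumed)
axis = np.cross(N_AP, N_LL)               # first rotation axis direction

Tc = np.array([0.0, 0.0, 0.0])            # center between left/right targets
entry_mid = np.array([0.2, 1.0, 0.0])     # midpoint of the screw entry points

theta = signed_angle(entry_mid - Tc, N_AP, axis)
print(np.degrees(theta))                  # tilt of the vertebra about the axis
```

The sign convention follows the chosen axis direction; flipping the axis flips the sign of the angle.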
Meanwhile, as shown in
As shown in
Where, P1 to P8 are points to which eight vertices of the second region of interest are reconstructed, and P9 and P10 are a spinous process tip line designated by a user.
It is noted that
When a transformation relationship between the PM coordinate system and the CT coordinate system is applied to the rotated points, i.e., PMP′1 to PMP′8 and CTP′1 to CTP′8, the Euclidean distance therebetween should be 0 ideally. The optimal registration may be performed under the condition that the sum or average of the Euclidean distances between the eight point pairs is the smallest, and the purpose of initial registration is to obtain a transform matrix satisfying this condition (S11).
Where, TPMCT is a transformation matrix from the PM coordinate system to the CT coordinate system.
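One standard way to obtain a transformation matrix minimizing the summed Euclidean distances between corresponding point pairs is the SVD-based rigid fit (the Kabsch algorithm). The disclosure does not name a specific solver, so the sketch below is only an assumed implementation with synthetic point data.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) mapping point set
    src onto dst, minimizing the summed squared Euclidean distances."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Synthetic stand-ins for the eight reconstructed PM points and their
# CT counterparts (a known rotation plus translation, no noise)
rng = np.random.default_rng(1)
pm_pts = rng.random((8, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
ct_pts = pm_pts @ R_true.T + np.array([5.0, -2.0, 1.0])

T = fit_rigid_transform(pm_pts, ct_pts)
residual = ct_pts - (pm_pts @ T[:3, :3].T + T[:3, 3])
print(np.abs(residual).max())               # ideally ~0, as the text notes
```

With noiseless correspondences the residual is near machine precision; with real data the smallest achievable sum of distances serves as the registration quality criterion described above.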
Referring to
Where, P9 = finter(πV, P1, P3) and P10 = finter(πV, P2, P4).
Thus, three axes VX, VY, and VZ of the V-local coordinate system are defined as follows.
In the V-local coordinate system, the position VPi of Pi is defined as follows.
VPi = RCTV CTPi + tCTV [Equation 23]
Where, RCTV is a rotational transformation matrix from the CT coordinate system to the V-local coordinate system, and tCTV is a translation vector between the CT coordinate system and the V-local coordinate system.
In addition, the points in the rotated V-local coordinate system are obtained as follows.
VP′i = Rodrigues(VZ, θDRR) VPi [Equation 24]
Where, the Rodrigues function is defined as a function that rotates an object by an input rotation angle with respect to the input rotation axis.
Thus, the rotated point CTP′i in the CT coordinate system is defined as follows.
CTP′i = (RCTV)−1VP′i − (RCTV)−1tCTV [Equation 25]
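Equations 23 to 25 can be traced in a short Python sketch: a point is mapped into the V-local frame, rotated about VZ with the Rodrigues formula, and mapped back to the CT frame. The frames and rotation angle below are hypothetical, chosen only to make the arithmetic easy to follow.

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix for angle theta about a unit axis (Rodrigues
    formula, matching the Rodrigues function of Equation 24)."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def rotate_in_local_frame(p_ct, R_ct_v, t_ct_v, vz, theta):
    """Equations 23-25: map a CT point into the V-local frame, rotate it
    about VZ, and map the rotated point back to the CT frame."""
    p_v = R_ct_v @ p_ct + t_ct_v                       # Equation 23
    p_v_rot = rodrigues(vz, theta) @ p_v               # Equation 24
    return np.linalg.inv(R_ct_v) @ (p_v_rot - t_ct_v)  # Equation 25

R_ct_v = np.eye(3)                       # assumed CT-to-V rotation
t_ct_v = np.array([1.0, 0.0, 0.0])       # assumed CT-to-V translation
vz = np.array([0.0, 0.0, 1.0])           # V-local rotation axis

p = np.array([2.0, 0.0, 0.0])
print(rotate_in_local_frame(p, R_ct_v, t_ct_v, vz, np.pi / 2))
```

Applying this to each of the eight reconstructed points yields the rotated point set used in the subsequent matching step.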
Thus, the foregoing processes are applied to the eight points, and PMP′i rotated from PMPi in the PM coordinate system is calculated and input to the following equation.
When the initial registration is completed by finding the optimal transformation matrix, the image registration apparatus 10 derives the optimal transformation matrix while adjusting a search range of the DRR image and performs a registration optimization process, thereby completing the image registration (S12). The optimization process is based on a publicly known global search, and thus detailed descriptions thereof will be omitted.
As described above, the image registration method has the advantage of increasing the accuracy of the image registration according to the rotation of a human body, and quickly performing the image registration processing.
The disclosure may be implemented as a computer program recording medium in which a computer program is recorded to perform the image registration method on a computer.
Further, the disclosure may also be implemented by the medical operating robot system based on the foregoing image registration method.
Referring to
The C-arm imaging apparatus 100 is used to acquire the AP image and the LL image of a patient's surgical site during the operation.
The robot arm 203 is secured to the robot main body 201, and includes the end effector 203a, to which a surgical instrument is detachably coupled, at a distal end thereof. The position sensor 300 is implemented as an OTS that tracks the real-time position of the surgical instrument or the end effector 203a by recognizing the marker. The controller 205 is provided in the robot main body 201, and controls the robot arm 203 according to predetermined operation planning and control software.
The navigation system 400 performs the foregoing image registration method to display planning information about a surgical instrument or implant on a C-arm image acquired during an operation or display a real-time position of the surgical instrument or implant on the C-arm image or a 3D image acquired before the operation through a display, thereby assisting a doctor in performing the operation. To this end, the navigation system 400 may further include the display connected thereto so that a doctor can view the real-time position of the surgical instrument or the like as the operation plan and the operating status by his/her naked eyes during the operation. A person having ordinary knowledge in the art may easily understand that other elements than the navigation system 400 of
According to the disclosure, accurate and quick image registration processing is provided, which can compensate for movement due to rotation of a patient's body part.
Although embodiments of the disclosure have been described so far, it will be appreciated by those skilled in the art that modifications or substitutions may be made in all or part of the embodiments of the disclosure without departing from the technical spirit of the disclosure.
Accordingly, the foregoing embodiments are merely examples of the disclosure, and the scope of the disclosure falls into the appended claims and equivalents thereof.