The present invention relates to a surgery assistance device and a surgery assistance program with which navigation during surgery is performed.
In a medical facility, surgery assistance devices that allow surgery to be simulated are employed in order to perform better surgery.
A conventional surgery assistance device comprised, for example, a tomographic image information acquisition section for acquiring tomographic image information, such as an image acquired by PET (positron emission tomography), a nuclear magnetic resonance image (MRI), or an X-ray CT image, a memory connected to the tomographic image information acquisition section, a volume rendering computer connected to the memory, a display for displaying the computation results of the volume rendering computer, and an input section for giving resecting instructions with respect to a displayed object that is being displayed on the display.
For example, Patent Literature 1 discloses an endoscopic surgery assistance device with which the coordinates of a three-dimensional image of the endoscope actually being used and the coordinates of three-dimensional volume image data produced using a tomographic image are integrated, and these are displayed superposed over endoscopic video, which allows an image of the surgical site region to be displayed superposed at this location over an endoscopic image in real time, according to changes in the endoscope or surgical instrument.
However, the following problem was encountered with the conventional surgery assistance device discussed above.
Specifically, with the surgery assistance device disclosed in the above publication, since an image of the surgical site region is displayed superposed at that location over an endoscopic image in real time, the distance between the surgical instrument distal end and a specific region can be calculated. What is disclosed here, however, does not involve navigation during surgery, and is merely a warning and a display of the distance to a blood vessel, organ, or other such site with which the surgical instrument must not come into contact.
It is an object of the present invention to provide a surgery assistance device and a surgery assistance program with which proper navigation can be performed during surgery while the user views the resection site, which is resected using a surgical instrument.
The surgery assistance device pertaining to the first invention is a surgery assistance device for performing navigation while displaying a three-dimensional simulation image produced from tomographic image information during surgery in which a resection-use surgical instrument is used while the user views an endoscopic image, the device comprising a tomographic image information acquisition section, a memory, a volume rendering computer, an endoscope/surgical instrument position sensor, a registration computer, a simulator, a distance calculator, and a navigator. The tomographic image information acquisition section acquires tomographic image information about a patient. The memory is connected to the tomographic image information acquisition section and stores voxel information for the tomographic image information. The volume rendering computer is connected to the memory and samples voxel information in a direction perpendicular to the sight line on the basis of the voxel information. The endoscope/surgical instrument position sensor sequentially senses the three-dimensional positions of the endoscope and the surgical instrument. The registration computer integrates the coordinates of a three-dimensional image produced by the volume rendering computer and the coordinates of the endoscope and the surgical instrument sensed by the endoscope/surgical instrument position sensor. The simulator stores the resection portion scheduled for surgery and virtually resected on the three-dimensional image produced by the volume rendering computer, in the memory after associating it with the voxel information. The distance calculator calculates a distance between the working end of the surgical instrument on the three-dimensional image and the voxel information indicating the resection portion and stored in the memory. The navigator displays the working end of the surgical instrument on the three-dimensional image by using the coordinates of the surgical instrument during surgery, and displays the distance between the working end and the voxel information indicating the resection portion stored in the memory, along with the endoscopic image displayed during surgery.
Here, for example, after a resection simulation is conducted in a state in which the area around a specific bone, blood vessel, organ, or the like is displayed using a three-dimensional image produced from a plurality of X-ray CT images, when surgery is performed using an endoscope, three-dimensional positions of the endoscope or surgical instrument actually used in the surgery are sequentially sensed, and the coordinates of a three-dimensional image formed from a plurality of X-ray CT images and the coordinates of the actual three-dimensional position of the endoscope and the surgical instrument are integrated. Then, the distance to the distal end (the working end) of the actual surgical instrument with respect to the site to be resected in the resection simulation performed using a three-dimensional image is calculated, and this distance is displayed along with the three-dimensional image to advise the surgeon, so that surgical navigation is performed seamlessly from the resection simulation.
Here, the above-mentioned tomographic image includes, for example, two-dimensional images acquired using X-ray CT, MRI, PET, or another such medical device. The above-mentioned surgical instrument includes resection instruments for resecting organs, bones, and so forth. The above-mentioned “working end” means the tooth portion, etc., of the surgical instrument that cuts out the bone, organ, or the like.
Consequently, in surgery for resecting a specific organ by using an endoscope, for example, the surgeon can accurately ascertain how far the distal end of the surgical instrument is from the site that is to be resected, while moving the resection instrument or other surgical instrument toward the resection site. This allows the surgeon to navigate properly while inserting the surgical instrument, without feeling any uncertainty due to not knowing how far apart the surgical instrument distal end and the resection site are.
The surgery assistance device pertaining to the second invention is the surgery assistance device pertaining to the first invention, wherein the simulator senses the depth of the surgical site during pre-surgery resection simulation and computes the degree of change in depth or discontinuity, and stops the resection or does not update the resection data if the degree of change exceeds a specific threshold.
Here, the simulator sets a threshold for virtual resection, and provides a restriction when resection simulation is performed.
Consequently, if the change in depth, etc., exceeds the threshold, the site will not be displayed in a post-resection state on the simulation image. Also, this avoids a situation in which, as the threshold is updated during the resection simulation, the threshold value becomes too small and resection is halted too often.
The surgery assistance device pertaining to the third invention is the surgery assistance device pertaining to the first or second invention, wherein the navigator models the working end of the surgical instrument on the three-dimensional image with a multi-point model.
Here, the multi-point model is a model for sampling a plurality of points on the outer edge of the site where collision is expected to occur.
Consequently, when a sensor for sensing the position, angle, etc., is provided at a specific position on the actual surgical instrument, for example, the surgical instrument will be represented by multiple points in a virtual space, using the position of this sensor as a reference, and the distance to the resection portion can be calculated from these multiple points and displayed.
The surgery assistance device pertaining to the fourth invention is the surgery assistance device pertaining to any of the first to third inventions, wherein the navigator uses, as the vector of the distance, a vector having a component in the direction of the voxel information indicating the portion to be resected by the surgical instrument during surgery.
Consequently, sampling can be performed in the direction in which the surgical instrument approaches the resection site, and the positional relation between the resection site and the surgical instrument distal end can be displayed to the surgeon more effectively, for example by changing the display mode according to the speed, acceleration, and direction at which the multiple points approach.
The surgery assistance device pertaining to the fifth invention is the surgery assistance device pertaining to any of the first to fourth inventions, wherein the navigator changes the display color of the voxels for each equidistance from the resection portion.
Here, ranges of equal distance, centered on the resection portion, are displayed as spheres of different colors on the navigation screen during surgery.
Consequently, in navigation during surgery, the surgeon can easily see the distance from the portion where resection is performed to the surgical instrument distal end, which facilitates navigation.
The surgery assistance device pertaining to the sixth invention is the surgery assistance device pertaining to any of the first to fifth inventions, wherein, after integrating the coordinates of a three-dimensional image and the coordinates of the endoscope and the surgical instrument, the registration computer checks the accuracy of this coordinate integration, and corrects deviation in the coordinate integration if this accuracy falls outside a specific range.
Here, the accuracy of the registration in which the coordinates of the three-dimensional image produced on the basis of a plurality of X-ray CT images, etc., are integrated with the actual coordinates of the endoscope and surgical instrument is checked, and registration is performed again if a specific level of accuracy is not met.
This allows the position of the endoscope or surgical instrument displayed in the three-dimensional image to be displayed more accurately in the three-dimensional image.
The surgery assistance device pertaining to the seventh invention is the surgery assistance device pertaining to any of the first to sixth inventions, wherein the navigator sets and displays a first display area acquired by the endoscope and produced by the volume rendering computer, and a second display area in which the display is restricted by the surgical instrument during actual surgery.
Here, in the three-dimensional image displayed on the monitor screen, etc., during surgery, the display shows the portion of the field of view that is restricted by the surgical instrument into which the endoscope is inserted.
Therefore, the display is in a masked state, for example, so that the portion restricted by the retractor or other such tubular surgical instrument cannot be seen, and this allows a three-dimensional image to be displayed in a state that approximates the actual endoscopic image.
The surgery assistance device pertaining to the eighth invention is the surgery assistance device pertaining to any of the first to seventh inventions, further comprising a display component that displays the three-dimensional image, an image of the distal end of the surgical instrument, and the distance.
The surgery assistance device here comprises a monitor or other such display component.
Therefore, surgery can be assisted while a three-dimensional image that approximates the actual video from an endoscope is displayed on the display component during surgery in which an endoscope is used.
The surgery assistance program pertaining to the ninth invention is a surgery assistance program that performs navigation while displaying a three-dimensional simulation image produced from tomographic image information, during surgery in which a resection-use surgical instrument is used while the user views an endoscopic image, wherein the surgery assistance program causes a computer to execute a surgery assistance method comprising the steps of acquiring tomographic image information about a patient, storing voxel information for the tomographic image information, sampling voxel information in a direction perpendicular to the sight line on the basis of the voxel information, sequentially sensing the three-dimensional positions of the endoscope and surgical instrument, integrating the coordinates of the three-dimensional image and the coordinates of the endoscope and the surgical instrument, calculating the distance between the working end of the surgical instrument and the resection site included in the video acquired by the endoscope, and displaying the working end of the surgical instrument on the three-dimensional image by using the coordinates of the surgical instrument during surgery, and combining and displaying an image indicating the distal end of the surgical instrument and the distance between the resection site and the distal end of the surgical instrument, while navigation is performed during surgery.
Here, for example, after a resection simulation is conducted in a state in which the area around a specific bone, blood vessel, organ, or the like is displayed using a three-dimensional image produced from a plurality of X-ray CT images, when surgery is performed using an endoscope, three-dimensional positions of the endoscope or surgical instrument actually used in the surgery are sequentially sensed, and the coordinates of a three-dimensional image formed from a plurality of X-ray CT images and the coordinates of the actual three-dimensional position of the endoscope and the surgical instrument are integrated. Then, the distance to the distal end of the actual surgical instrument with respect to the site to be resected in the resection simulation performed using a three-dimensional image is calculated, and this distance is displayed along with the three-dimensional image to advise the surgeon, so that surgical navigation is performed seamlessly from the resection simulation.
Here, the above-mentioned tomographic image includes, for example, two-dimensional images acquired using X-ray CT, MRI, PET, or another such medical device. The above-mentioned surgical instrument includes resection instruments for resecting organs, bones, and so forth.
Consequently, in surgery for resecting a specific organ by using an endoscope, for example, the surgeon can accurately ascertain how far the distal end of the surgical instrument is from the site that is to be resected, while moving the resection instrument or other surgical instrument toward the resection site. This allows the surgeon to navigate properly while inserting the surgical instrument, without feeling any uncertainty due to not knowing how far apart the surgical instrument distal end and the resection site are.
The surgery assistance device pertaining to the tenth invention is a surgery assistance device for performing navigation while displaying a three-dimensional simulation image produced from tomographic image information, during surgery in which a resection-use surgical instrument is used while the user views an endoscopic image, the device comprising a simulator and a navigator. The simulator stores the resection portion scheduled for surgery and virtually resected on the three-dimensional image produced by sampling voxel information for the tomographic image information of the patient in a direction perpendicular to the sight line, after associating it with the voxel information. The navigator calculates a distance between the working end of the surgical instrument on the three-dimensional image and the voxel information indicating the resection portion stored in the memory, displays the working end of the surgical instrument on the three-dimensional image by using the coordinates of the surgical instrument during surgery, and displays the distance between the working end and the voxel information indicating the resection portion, along with the endoscopic image displayed during surgery.
The personal computer (surgery assistance device) pertaining to an embodiment of the present invention will now be described through reference to
In this embodiment, a case is described in which navigation is performed in surgery for lumbar spinal stenosis using an endoscope and a resection tool or other such surgical instrument, but the present invention is not limited to this.
The personal computer 1 functions as a surgery assistance device by reading a surgery assistance program that causes a computer to execute the surgery assistance method of this embodiment. The configuration of the personal computer 1 will be discussed in detail below.
The display (display component) 2 displays a three-dimensional image for performing resection simulation or navigation during surgery (discussed below), and also displays a setting screen, etc., for surgical navigation or resection simulation.
Since the display component for displaying navigation during surgery needs to display a navigation screen that is easy for the surgeon to understand during surgery, a large liquid crystal display 102 that is included in the surgery assistance system 100 is used.
The position and angle sensing device 29 is connected to the personal computer 1, the positioning transmitter 34, and the oblique endoscope 32, and the position and attitude of the oblique endoscope 32 or the surgical instrument 33 during actual surgery are sensed on the basis of the sensing results of the three-dimensional sensor 32a attached to the oblique endoscope 32 and the three-dimensional sensor 33b attached to the surgical instrument 33.
The oblique endoscope (endoscope) 32 is inserted from the body surface near the portion undergoing surgery, into a tubular retractor 31 (discussed below), and acquires video of the surgical site. The three-dimensional sensor 32a is attached to the oblique endoscope 32.
The positioning transmitter (magnetic field generator) 34 is disposed near the surgical table on which the patient is lying, and generates a magnetic field. Consequently, the position and attitude of the oblique endoscope 32 and the surgical instrument 33 can be sensed by sensing the magnetic field generated by the positioning transmitter 34 at the three-dimensional sensor 32a or the three-dimensional sensor 33b attached to the oblique endoscope 32 and the surgical instrument 33.
The display 2 displays three-dimensional images of bones, organs, or the like formed from a plurality of tomographic images such as X-ray CT images (an endoscopic image is displayed in the example described here).
The tomographic image information acquisition section 6 is connected via the voxel information extractor 7 to the tomographic image information section 8. That is, the tomographic image information section 8 is supplied with tomographic image information from a device that captures tomographic images, such as CT, MRI, or PET, and this tomographic image information is extracted as voxel information by the voxel information extractor 7.
The memory 9 is provided inside the personal computer 1, and has the voxel information storage section 10, the voxel label storage section 11, the color information storage section 12, the endoscope parameter storage section 22, and the surgical instrument parameter storage section 24. The memory 9 is connected to the volume rendering computer 13 (distance calculator, display controller).
The voxel information storage section 10 stores voxel information received from the voxel information extractor 7 via the tomographic image information acquisition section 6.
The voxel label storage section 11 has a first voxel label storage section, a second voxel label storage section, and a third voxel label storage section. These first to third voxel label storage sections are provided corresponding to a predetermined range of CT values (discussed below), that is, to the organ to be displayed. For instance, the first voxel label storage section corresponds to a range of CT values displaying a liver, the second voxel label storage section corresponds to a range of CT values displaying a blood vessel, and the third voxel label storage section corresponds to a range of CT values displaying a bone.
The color information storage section 12 has a plurality of storage sections in its interior. These storage sections are each provided corresponding to a predetermined range of CT values, that is, to the bone, blood vessel, nerve, organ, or the like to be displayed. For instance, there may be a storage section corresponding to a range of CT values displaying a liver, a storage section corresponding to a range of CT values displaying a blood vessel, and a storage section corresponding to a range of CT values displaying a bone. Here, the various storage sections are set to different color information for each of the bone, blood vessel, nerve, or organ to be displayed. For example, white color information may be stored for the range of CT values corresponding to a bone, and red color information may be stored for the range of CT values corresponding to a blood vessel.
The CT values set for the bone, blood vessel, nerve, or organ to be displayed are the result of digitizing the extent of X-ray absorption in the body, and are expressed as relative values (in units of HU), with water at zero. For instance, the range of CT values in which a bone is displayed is 500 to 1000 HU, the range of CT values in which blood is displayed is 30 to 50 HU, the range of CT values in which a liver is displayed is 60 to 70 HU, and the range of CT values in which a kidney is displayed is 30 to 40 HU.
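Purely as an illustration of how such CT value ranges and color information might be organized, the following Python sketch pairs the ranges given above with tissue labels and colors; the lookup-list structure is an assumption rather than the device's actual storage layout, and the liver and kidney colors are placeholders (the text specifies only white for bone and red for blood vessels).

```python
# Hypothetical lookup pairing CT value (HU) ranges with the tissue to be
# displayed and its color information, using the ranges given in the text.
CT_VALUE_TABLE = [
    # (min HU, max HU, tissue, RGB color)
    (500, 1000, "bone",   (255, 255, 255)),  # bone: white, as in the text
    (60,  70,   "liver",  (150,  75,  75)),  # placeholder color
    (30,  50,   "blood",  (255,   0,   0)),  # blood vessel: red, as in the text
    (30,  40,   "kidney", (160,  90, 120)),  # placeholder color
]

def classify_ct_value(hu):
    """Return every (tissue, color) pair whose CT value range contains hu.
    Ranges can overlap (blood is 30-50 HU, kidney 30-40 HU), so a value
    such as 35 HU matches more than one label."""
    return [(t, c) for lo, hi, t, c in CT_VALUE_TABLE if lo <= hu <= hi]
```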
The endoscope parameter setting section 23 sets the endoscope parameters inputted via the keyboard 3 or the mouse 4, and sends them to the endoscope parameter storage section 22.
The surgical instrument parameter setting section 25 sets surgical instrument parameters for the retractor 31, drill, etc., that are inputted via the keyboard 3 or the mouse 4, and sends them to the surgical instrument parameter storage section 24.
An endoscope/surgical instrument position and attitude acquisition section (endoscope/surgical instrument position sensor) 26 receives via a bus 16 the sensing result from the position and angle sensing device 29, which senses the position and angle of the endoscope or surgical instrument, and sends this result to the volume rendering computer 13 and a registration computer 27.
The volume rendering computer 13 acquires a plurality of sets of slice information at a specific spacing in the Z direction and perpendicular to the sight line, on the basis of the voxel information stored in the voxel information storage section 10, the voxel labels stored in the voxel label storage section 11, and the color information stored in the color information storage section 12. The volume rendering computer 13 then displays this computation result as a three-dimensional image on the display 2.
The volume rendering computer 13 also gives a real-time display that combines the movements of the actual endoscope or surgical instrument into a three-dimensional image on the basis of endoscope information stored in the endoscope parameter storage section 22, surgical instrument information stored in the surgical instrument parameter storage section 24, and the sensing result from the endoscope/surgical instrument position and attitude acquisition section 26.
The volume rendering computer 13 also displays a virtual endoscopic image on the display 2 in a masked state that reflects image information in which the field of view is restricted by the retractor 31, with respect to the image information obtained by the endoscope, on the basis of the above-mentioned endoscope information and surgical instrument information. More specifically, the volume rendering computer 13 sets an endoscopic image display area (first display area) A1 and a restricted display area (second display area) A2.
The endoscopic image display area A1 here is a display area that is displayed on the monitor screen of the display 2 during actual endoscopic surgery. The restricted display area A2 is a display area in which the display acquired by the endoscope is restricted by the inner wall portion, etc., of the surgical instrument, such as a tubular retractor 31, and refers to a region whose display is masked in endoscopic surgery simulation (see
The volume rendering computer 13 is also connected to a depth sensor 15 via the bus 16.
The depth sensor 15 measures the ray casting scanning distance, and is connected to a depth controller 17 and a voxel label setting section 18.
The voxel label setting section 18 is connected to the voxel label storage section 11 and to a resected voxel label calculation display section 19.
In addition to the above-mentioned volume rendering computer 13 and depth sensor 15, the bus 16 is also connected to the endoscope/surgical instrument position and attitude acquisition section 26, a window coordinate acquisition section 20, and the sections of the memory 9 such as the color information storage section 12, and three-dimensional images and so forth are displayed on the display 2 on the basis of what is inputted from the keyboard 3, the mouse 4, the tablet 5, the position and angle sensing device 29, the endoscope video acquisition section 30, and so on.
The window coordinate acquisition section 20 is connected to a color information setting section 21 and the registration computer 27.
The color information setting section 21 is connected to the color information storage section 12 in the memory 9.
As discussed above, the endoscope/surgical instrument position and attitude acquisition section 26 acquires information related to the positions of the oblique endoscope 32 and the surgical instrument 33 by detecting the magnetic field generated by the positioning transmitter 34 at the three-dimensional sensor 32a and the three-dimensional sensor 33b attached to the oblique endoscope 32 and the surgical instrument 33.
The registration computer 27 performs computation to match the three-dimensional image produced by the volume rendering computer 13 with the rotational angle and three-dimensional position of the oblique endoscope 32 and the surgical instrument 33 and with the reference position of the patient during actual surgery. The registration processing (coordinate conversion processing) performed by the registration computer 27 will be discussed in detail below.
A conversion matrix holder 28 is connected to the registration computer 27 and the volume rendering computer 13, and holds a plurality of conversion matrixes used in registration processing (coordinate conversion processing).
As discussed above, the position and angle sensing device 29 is connected to the personal computer 1, the positioning transmitter 34, and the oblique endoscope 32, and senses the position and attitude of the oblique endoscope 32 and the surgical instrument 33 during actual surgery on the basis of the sensing results at the three-dimensional sensors 32a and 33b.
The endoscope video acquisition section 30 acquires video acquired by the oblique endoscope 32. The endoscope video acquired by the endoscope video acquisition section 30 is displayed on the display 2 and a display 102 via the bus 16.
As discussed above, the retractor 31 is a tubular member into which the oblique endoscope 32 or the surgical instrument 33 (such as a drill) is inserted, and in actual surgery it is inserted into and fixed in the body of the patient from the body surface near the surgical site.
The oblique endoscope (endoscope) 32 is inserted along the inner peripheral face of the above-mentioned tubular retractor 31, and acquires video of the surgical site. The three-dimensional sensor 32a is attached to the oblique endoscope 32 in order to sense the three-dimensional position or attitude of the oblique endoscope 32 in real time during surgery.
The surgical instrument 33 in this embodiment is a drill that resects the surgical site. Similar to the oblique endoscope 32, the three-dimensional sensor 33b is attached to the surgical instrument (drill) 33 near the rear end. Consequently, the position of the distal end (working end) of the surgical instrument (drill) 33 doing the resection can also be calculated on the basis of the length and shape of the drill stored in the surgical instrument parameter storage section 24.
The distance from the multiple points of the distal end of the surgical instrument 33 to the resection site planned for the surgery is sampled in the approaching direction, and the display mode is changed according to the speed, acceleration, and direction at which the multiple points approach.
Consequently, the surgeon can ascertain the position of the surgical instrument distal end with respect to the resection site more accurately while looking at the image indicating the virtual space used for navigation.
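A minimal sketch of this multi-point modelling and distance sampling, in Python with NumPy, might look as follows. It is illustrative only: the point layout (a tip center plus a rim of sampled points), the function names, and the parameters are assumptions, not the device's implementation.

```python
import numpy as np

def working_end_points(sensor_pos, sensor_rot, length, radius, n=8):
    """Multi-point model of the drill's working end. sensor_pos and
    sensor_rot (a 3x3 attitude matrix) come from the three-dimensional
    sensor 33b near the rear end; length and radius come from the surgical
    instrument parameters. Samples the tip center plus n points on the
    outer edge of the cutting portion."""
    axis = sensor_rot @ np.array([0.0, 0.0, 1.0])  # instrument axis direction
    tip = sensor_pos + length * axis               # distal end (working end) center
    ref = np.array([1.0, 0.0, 0.0])
    if abs(axis @ ref) > 0.9:                      # avoid a degenerate cross product
        ref = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, ref)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    rim = tip + radius * (np.outer(np.cos(ang), u) + np.outer(np.sin(ang), v))
    return np.vstack([tip, rim])

def min_distance(points, resection_voxels):
    """Minimum distance from any modelled point to any voxel marked as part
    of the planned resection portion (an N x 3 array of coordinates)."""
    d = np.linalg.norm(points[:, None, :] - resection_voxels[None, :, :], axis=2)
    return d.min()
```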
Control Flow Related to this Surgery Assistance Method
The control flow in the surgery assistance method pertaining to the personal computer 1 in this embodiment will now be described through reference to
First, in S1, the tomographic image information acquisition section 6 acquires tomographic image information about the patient, captured by X-ray CT or another such device.
Then, in S2, the voxel information extractor 7 extracts voxel information from the tomographic image information. The extracted voxel information is sent through the tomographic image information acquisition section 6 and stored in the voxel information storage section 10 of the memory 9. Voxel information stored in the voxel information storage section 10 is information about the points made up of I(x,y,z,α), for example. I here is brightness information about these points, while x, y, and z are coordinate points, and α is transparency information.
Then, in S3, the volume rendering computer 13 calculates a plurality of sets of slice information at a specific spacing in the Z direction and perpendicular to the sight line, on the basis of the voxel information stored in the voxel information storage section 10, and acquires a slice information group. This slice information group is at least temporarily stored in the volume rendering computer 13.
The above-mentioned slice information perpendicular to the sight line refers to a plane that is perpendicular to the sight line. For example, in a state in which the display 2 has been erected vertically, when it is viewed in a state in which it and the plane of the user's face are parallel, the slice information is in a plane perpendicular to the sight line.
The plurality of sets of slice information thus obtained include information about the points made up of I(x,y,z,α), as mentioned above. Thus, the slice information is such that a plurality of voxel labels 14 are disposed in the Z direction, for example. The group of voxel labels 14 is stored in the voxel label storage section 11.
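As a rough sketch of how such a view-aligned slice information group could be composited into a rendered image — an illustration under simplifying assumptions, not the actual code of the volume rendering computer 13 — front-to-back alpha compositing in Python with NumPy might look like this:

```python
import numpy as np

def composite_slices(slices):
    """Composite a group of slices perpendicular to the sight line, front
    to back. Each slice is an (H, W, 4) array holding I(x, y, z, alpha):
    RGB brightness plus transparency. Resampling the voxel volume into
    view-aligned slices is assumed to have been done already."""
    h, w, _ = slices[0].shape
    out = np.zeros((h, w, 3))
    remaining = np.ones((h, w, 1))      # transmittance left along each ray
    for s in slices:                    # nearest slice first
        rgb, alpha = s[..., :3], s[..., 3:4]
        out += remaining * alpha * rgb  # add this slice's contribution
        remaining *= (1.0 - alpha)      # attenuate what lies behind it
    return out
```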
Then, in S4, a rendered image is displayed on the display 2. At this point, the mouse 4 or the like is used to designate the range of CT values on the display 2, and the bone, blood vessel, or the like to be resected is selected and displayed.
Then, in S5, it is determined whether or not an instruction to perform registration has been received from the user. If a registration instruction has been received, the flow proceeds to A (S6) in order to perform registration. On the other hand, if a registration instruction has not been received, the flow proceeds to S7 to determine whether or not an instruction to perform navigation has been received.
If a registration instruction has been received in S5, registration is performed according to the following flow.
Specifically, first, in S61, the position that will be the feature point of registration is given. More specifically, a portion of a bone whose position is easy to confirm from the body surface, such as the fifth spinous process and the left and right ilia, is used as the feature point.
Then, in S62, while the surgeon, a nurse, etc., holds the sensor, it is pressed against a position near the feature point from the body surface of the patient lying on the operating table, and the position of the sensor is finely adjusted while looking at the display 102 to acquire sensor position information.
Then, in S63, a conversion matrix for converting the real space coordinates indicating the acquired sensor position into virtual space coordinates is calculated.
First, since Pv0 is the center of gravity of the feature point triangle designated in virtual space, we obtain the following formula (1).

Pv0 = (Pv1 + Pv2 + Pv3)/3   (1)

The orthonormal vectors in virtual space are found by the following procedure from this virtual space origin Pv0 and the three feature points Pv1, Pv2, and Pv3.

A uniaxial vector Vv1 is defined by the following formula (2),

Vv1 = (Pv1 − Pv0)/|Pv1 − Pv0|   (2)

a temporary biaxial vector Vv2Tmp is defined by the following formula (3),

Vv2Tmp = (Pv2 − Pv0)/|Pv2 − Pv0|   (3)

a triaxial vector Vv3 is found by taking the cross product of Vv1 and Vv2Tmp,

Vv3 = Vv1 × Vv2Tmp   (4)

and a biaxial vector Vv2 is found by taking the cross product of Vv3 and Vv1.

Vv2 = Vv3 × Vv1   (5)

By the same procedure, the real space origin Pr0 is found as the center of gravity of the real space feature point triangle,

Pr0 = (Pr1 + Pr2 + Pr3)/3   (6)

and the orthonormal vectors Vr1, Vr2, and Vr3 of real space are found from Pr0 and the three feature points Pr1, Pr2, and Pr3 in the same way as in formulas (2) to (5) (formulas (7) to (10)).

Next, a rotation matrix for each of the spatial coordinate systems is found from the virtual space and real space orthonormal vectors. The rotation matrix Mv in virtual space is

Mv = [Vv1 Vv2 Vv3]^T   (11)

and the rotation matrix Mr in real space is

Mr = [Vr1 Vr2 Vr3]^T   (12)

In order to find a rotation matrix from the real space coordinate system to the virtual space coordinate system, the rotation given by the real space rotation matrix must first be undone; this is done with its inverse matrix, since the required conversion is the inverse of that produced by the rotation matrix of the real space coordinate system. Converting real space coordinates by this inverse matrix and then by the rotation matrix of the virtual space coordinate system gives the rotation matrix Mrotate from the real space coordinate system to the virtual space coordinate system. Expressed as an equation, this gives the following formula (13).

Mrotate = Mv·Mr^−1   (13)

As for the scaling matrix Hscale, the scale of the DICOM data is assumed to be the same as in real space, and the same applies to virtual space, so Hscale is defined as a unit matrix.

The rotation matrix Mrotate thus found, the scaling matrix Hscale, and the parallel movement that takes the real space origin Pr0 to the virtual space origin Pv0 together give the conversion matrix Ht from the real space coordinate system to the virtual space coordinate system (formula (14)).
In this embodiment, this conversion matrix is used to convert the real space coordinates acquired from the three-dimensional sensor 32a into virtual space coordinates.
A plurality of these conversion matrixes H are kept in the conversion matrix holder 28.
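To make the above derivation concrete, here is a minimal Python/NumPy sketch of formulas (1) to (14). It is not the device's actual implementation: the function names are hypothetical, the basis vectors are stored as matrix columns so that the inverse in formula (13) reduces to a transpose, and the cross products are normalized so the basis is exactly orthonormal.

```python
import numpy as np

def feature_basis(p0, p1, p2):
    """Orthonormal basis of formulas (2) to (5), from a centroid p0 and two
    of the feature points."""
    v1 = (p1 - p0) / np.linalg.norm(p1 - p0)      # uniaxial vector (2)
    v2_tmp = (p2 - p0) / np.linalg.norm(p2 - p0)  # temporary biaxial vector (3)
    v3 = np.cross(v1, v2_tmp)                     # triaxial vector (4)
    v3 /= np.linalg.norm(v3)                      # normalized for orthonormality
    v2 = np.cross(v3, v1)                         # biaxial vector (5)
    return np.column_stack([v1, v2, v3])          # basis vectors as columns

def conversion_matrix(pv, pr):
    """4x4 homogeneous conversion matrix Ht from real space coordinates to
    virtual space coordinates, given three virtual space feature points
    pv[0..2] and the corresponding real space feature points pr[0..2]."""
    pv0 = (pv[0] + pv[1] + pv[2]) / 3.0           # virtual space origin, formula (1)
    pr0 = (pr[0] + pr[1] + pr[2]) / 3.0           # real space origin, formula (6)
    mv = feature_basis(pv0, pv[0], pv[1])         # Mv, formula (11)
    mr = feature_basis(pr0, pr[0], pr[1])         # Mr, formula (12)
    m_rotate = mv @ mr.T                          # Mv Mr^-1, formula (13); Mr^-1 = Mr^T
    ht = np.eye(4)                                # Hscale is the unit matrix
    ht[:3, :3] = m_rotate
    ht[:3, 3] = pv0 - m_rotate @ pr0              # parallel movement taking Pr0 to Pv0
    return ht

# A real space point x maps into virtual space as
# (conversion_matrix(pv, pr) @ np.append(x, 1.0))[:3].
```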
Then, in S64, it is determined whether or not the registration is sufficiently accurate. At this point steps S61 to S64 are repeated until it can be confirmed that the registration accuracy is within a predetermined range. Processing is ended at the stage when accuracy has been confirmed to be within a specific range.
That is, in S64, if it is found that the registration accuracy is not within a predetermined range, registration is performed again to correct the first result. This allows the accuracy of the registration processing to be improved.
Registration correction processing will be discussed in detail below.
As discussed above, if a registration instruction has been received in S5, the flow proceeds to S7 after the registration is performed; if no registration instruction has been received, the flow proceeds directly to S7.
Then, in S7, if an instruction to carry out navigation during surgery has been received, the flow proceeds to B (S8). On the other hand, if an instruction to carry out navigation has not been received, the flow returns to the processing of S3.
Specifically, in S81, the endoscope/surgical instrument position and attitude acquisition section 26 acquires the three-dimensional positions of the oblique endoscope 32 and the surgical instrument 33 on the basis of the sensing result of the position and angle sensing device 29.
Then, in S82, the above-mentioned conversion matrix H is used to convert from a real space coordinate system to a virtual space coordinate system on the basis of the three-dimensional positions of the oblique endoscope 32 and the surgical instrument 33.
Then, in S83, the volume rendering computer 13 acquires endoscope parameters from the endoscope parameter storage section 22.
Then, in S84, the volume rendering computer 13 acquires surgical instrument parameters from the surgical instrument parameter storage section 24.
Then, in S85, endoscope video is acquired from the endoscope video acquisition section 30.
Then, in S86, if a plurality of sites are to be resected, it is confirmed whether or not computation of the distance from the distal end of the surgical instrument 33 to all of the resection sites has been completed. If this distance computation has been completed, the flow proceeds to S87.
Then, in S87, the volume rendering computer 13 displays a three-dimensional image (rendered image) on the displays 2 and 102, superposed with the endoscope video.
At this point, the three-dimensional sensor 33b senses the movement of the actual surgical instrument 33, and the movement of the surgical instrument 33 is displayed in real time on the three-dimensional image, which allows the surgeon to manipulate the surgical instrument 33 while checking distance information displayed on the display 102. This allows surgery navigation that is useful to the surgeon to be carried out.
The three-dimensional image displayed on the displays 2 and 102 in S87 will now be described.
In this example, the display screen of the display 102 includes an information display area M1, a navigation image area M2, and a distance display area M3.
More specifically, text information consisting of “Approaching resection site” is displayed in the information display area M1. An image obtained by superposing the surgical instrument image 33a, a retractor image 31a, and the resection sites Z1 to Z3 over a three-dimensional image of the area around the resection site is displayed in the navigation image area M2. The distance from the multiple points for the distal end of the drill (surgical instrument 33) to the various resection sites Z1 to Z3 is displayed in the distance display area M3.
Regarding the superposition of the various images in the navigation image area M2, the transmissivity can be set for each image, and changed so that information that is important to the surgeon will be displayed.
Also, when the speed at which the surgeon moves the surgical instrument 33 toward the resection site Z1 is increased, there is the risk that the approach speed of the surgical instrument 33 will be too high, causing the surgical instrument to pass by the portion to be resected. In view of this, in this embodiment, the display mode is changed according to the speed and acceleration at which the surgical instrument 33 approaches the resection site.
Next, the surgeon moves the surgical instrument 33 toward the resection site Z1 in order to resect the resection site Z1.
Next, the surgical instrument 33 is used to resect the resection site Z1.
As described above, the personal computer 1 of the surgery assistance system 100 in this embodiment converts the actual three-dimensional position (real space coordinates) of the oblique endoscope 32 or the surgical instrument 33 into coordinates (virtual space coordinates) on a three-dimensional image produced by the volume rendering computer 13, and then performs navigation during surgery while combining into the three-dimensional image an image indicating the distal end of the surgical instrument 33 (the surgical instrument image 33a) and the distance from the surgical instrument distal end to the resection site.
This allows the surgeon to manipulate the surgical instrument 33 while confirming the distance from the distal end of the surgical instrument 33 to the resection site Z1, and while looking at the screen of the display 102.
Next, the display of the retractor 31 on the three-dimensional image will be described.
Here, the display on the three-dimensional image is made on the basis of parameters such as the diameter, length, and movement direction (insertion direction) of the retractor, and the result of measuring the position and attitude with the sensor installed in the retractor.
Usually, the oblique endoscope 32 is inserted along the inner wall of the retractor 31, and during surgery it rotates along this inner wall.
First, the rotation matrix RΘ corresponding to a rotation by an angle Θ about the retractor axis is calculated, together with the vector RoEo from the retractor center Ro to the endoscope rear end Eo.
Next, since the vector RoEo′ = RΘ × RoEo, the endoscope distal end position Ec can be calculated as Ec = Eo′ + Rz*de, using the insertion depth de of the endoscope.
This allows the three-dimensional endoscope distal end position to be calculated by two-dimensional mouse operation.
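A sketch of this calculation in Python with NumPy follows. The Rodrigues rotation helper and the function names are illustrative assumptions; Ro is the retractor center, Rz the unit vector of the retractor axis, Eo the endoscope rear end, and Θ the rotation angle obtained from the mouse input.

```python
import numpy as np

def rotation_about_axis(axis, theta):
    """Rodrigues rotation matrix for a rotation by theta about a unit axis."""
    a = np.asarray(axis, dtype=float)
    a /= np.linalg.norm(a)
    kmat = np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * kmat + (1.0 - np.cos(theta)) * (kmat @ kmat)

def endoscope_tip(ro, rz, eo, theta, de):
    """Distal end position Ec after the endoscope rear end Eo rotates by
    theta along the retractor inner wall (center ro, unit axis rz) and the
    endoscope is inserted to depth de: Ec = Eo' + Rz * de."""
    eo_dash = ro + rotation_about_axis(rz, theta) @ (eo - ro)  # RoEo' = R_theta x RoEo
    return eo_dash + rz * de
```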
Next, another example related to mapping from two-dimensional input with the mouse 4 to three-dimensional operation of the endoscope will be described.
A camera head that houses a CCD camera (not shown) is usually connected to the rear end side of an endoscope. The rotation of the display when this camera head is rotated will now be described.
Specifically, in actual endoscopic surgery, if the image displayed on the display screens of the displays 2 and 102 ends up being displayed vertically, just the image is rotated, without changing the field of view, by rotating the camera head in order to align the orientation of the actual patient with the orientation of the display on the displays 2 and 102.
In order to achieve this by two-dimensional input using the mouse 4, first, the rotation angle Θ = 360*Hd/H is calculated from the mouse drag distance Hd and the display height H.
Then, the rotation matrix R2Θ for rotation by the angle Θ is calculated about the axis Ry in the depth direction through the screen center coordinates of the displays 2 and 102.
Then, the image displayed on the displays 2 and 102 can be rotated 90 degrees, without changing the field of view, by using U′=R2Θ*U as the new upward vector for the upward vector U of the field of view.
Consequently, an image displayed on the displays 2 and 102 can be easily adjusted to the same orientation (angle) as the monitor screen in actual endoscopic surgery by two-dimensional input with the mouse 4.
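Reusing the rotation_about_axis helper from the previous sketch, the mapping from a mouse drag to the new upward vector could be written as follows; drag_pixels (Hd) and display_height (H) are the quantities named above, and the function name is an assumption.

```python
import numpy as np

def rotated_up_vector(u, ry, drag_pixels, display_height):
    """New upward vector U' = R2_theta * U for the field of view, where
    theta = 360 * Hd / H is computed from the mouse drag distance Hd and the
    display height H, and ry is the unit axis in the depth direction through
    the screen center."""
    theta = np.radians(360.0 * drag_pixels / display_height)
    return rotation_about_axis(ry, theta) @ u
```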
Next, the method for producing a volume rendering image that reflects any oblique angle of the oblique endoscope 32 will be described through reference to
Specifically, in this embodiment, a rotation matrix is applied to the field vector according to the oblique angle set for each oblique endoscope 32.
More specifically, first the cross product Vc of the vertical vector Vu corresponding to the perspective direction of the oblique endoscope 32 and the endoscope axis vector Vs corresponding to the axial direction of the retractor 31 is calculated.
Then, the rotation matrix Rs for a rotation by the oblique angle Θ around Vc is calculated.
Then, the field vector Ve that reflects the oblique angle is found as Ve = Rs*Vs.
Consequently, even if the oblique angle is different for each oblique endoscope 32, the field of view range can be set for each oblique endoscope 32 used in surgery by calculating the field vector Ve on the basis of the information stored in the endoscope parameter storage section 22, etc.
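Again reusing the rotation_about_axis helper above, the field vector computation could be sketched as follows, with the oblique angle Θ taken from the endoscope parameters; the names are illustrative.

```python
import numpy as np

def oblique_field_vector(vu, vs, theta):
    """Field vector Ve reflecting the oblique angle theta: rotate the
    endoscope axis vector Vs about Vc = Vu x Vs, the cross product of the
    vertical vector Vu and the axis vector Vs, so that Ve = Rs * Vs."""
    vc = np.cross(vu, vs)
    return rotation_about_axis(vc, theta) @ vs
```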
With the personal computer 1 in this embodiment, because of the above configuration, an endoscopic image (the endoscopic image display area A1) is displayed in a state that shows the restricted display area A2 blocked by the retractor 31.
Consequently, a display that approximates the image displayed on the display screen in an actual endoscopic surgery can be displayed by creating a display state that shows the restricted display area A2, which cannot be seen because it is behind the inner wall of the retractor 31 in an actual endoscopic surgery. Therefore, surgery can be assisted more effectively.
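One way to produce such a masked display, assuming for simplicity that the retractor's inner wall projects to a circle on the rendered image (the actual projection depends on the retractor's pose), is sketched below; the names are hypothetical.

```python
import numpy as np

def apply_retractor_mask(image, center, radius):
    """Black out the restricted display area A2, i.e. every pixel outside
    the circle of the retractor inner wall, leaving the endoscopic image
    display area A1 visible. image is (H, W, 3); center is (cx, cy)."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    return np.where(inside[..., None], image, 0)
```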
Further, in order to display a navigation screen that is easy for the surgeon to understand, the endoscopic image and the three-dimensional image can be displayed superposed.
Also, the three-dimensional image that is combined with the endoscopic image is not limited to being an endoscope view.
With the surgery assistance system 100 in this embodiment, as described above, registration, in which the positions are matched between real space coordinates and virtual space coordinates, is performed before surgical navigation is carried out. This registration will now be described in greater detail.
In this embodiment, registration of the real space coordinates and virtual space coordinates (three-dimensional image coordinates) is carried out as follows.
The registration function here finds the positional relation of the oblique endoscope 32, which is the most important element during surgery, so it is a function for positioning between the virtual space coordinates of the three-dimensional image and the real space coordinates indicating position information from the three-dimensional sensor 32a attached on the endoscope 32 side. This registration function makes it possible to acquire the position of the endoscope 32 in virtual space by using a coordinate conversion matrix produced in the course of this registration processing, and to interactively perform volume rendering that reflects the endoscope's fisheye characteristics.
In the positioning of the two coordinate systems, three feature points in real space and the three corresponding feature points in virtual space are defined, the amount of scaling, the amount of parallel movement, and the amount of rotation are calculated from these coordinates, and the final coordinate conversion matrix is created.
The flow of registration will now be described.
First, three feature point coordinates (xv, yv, zv) are defined in virtual space by sampling with the mouse on the three-dimensional image displayed in the view window (the converted coordinate values are in the same mm units as the coordinates acquired by the sensor).
Next, the corresponding feature point coordinates (xr, yr, zr) are pointed to with a magnetic sensor and registered in order, with respect to an object in real space. The feature point position information defined in two spaces is used to calculate the origins, thereby calculating the vector of parallel movement.
Next, the scaling matrix and the rotation matrix are calculated, and the final coordinate conversion matrix is put together and stored.
Also, with an oblique endoscope, it is necessary to sense not only the position of the endoscope distal end, but also the orientation of the endoscope axis, and since the rotation matrix produced during the above-mentioned computation is used in calculating the field of view in virtual space, the rotation matrix is also stored by itself.
Here, after the registration has been performed, if there is more than a specific amount of deviation in the feature point position designation in real space corresponding to the feature points in virtual space, the following processing is carried out to correct this.
Specifically, the personal computer 1 in this embodiment has a correction function for correcting deviation with an interface while confirming the coordinate axes and the deviation in feature points displayed on a volume rendering image in virtual space.
The flow in registration correction using this correction function is as follows.
When the user sets a feature point correction value within the interface, re-registration is executed.
In this re-registration, just as with the registration function, the feature point coordinates defined in two spaces are used to perform recalculation of the rotation matrix and the coordinate conversion matrix.
When this recalculation is finished, the positions where the feature points and the coordinate axes are to be drawn are recalculated, and the volume rendering image is updated.
In this embodiment, the display color of the voxels is changed for each equidistance from the resection portion, so that ranges of equal distance centered on the resection portion are displayed in different colors on the navigation screen.
This makes it easy for the surgeon to tell how far it is from the distal end of the surgical instrument 33 to the resection site.
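The equidistance color display could be organized as in the following sketch; the particular band widths and colors are assumptions for illustration, not values taken from the text.

```python
# Hypothetical color bands: each range of equal distance from the resection
# portion is drawn in its own color, so the surgeon can read the distance
# of the working end at a glance.
DISTANCE_BANDS = [           # (upper bound in mm, RGB color) - assumed values
    (5.0,  (255, 0, 0)),     # within 5 mm: red
    (10.0, (255, 165, 0)),   # within 10 mm: orange
    (20.0, (255, 255, 0)),   # within 20 mm: yellow
]

def band_color(distance_mm):
    """Color for a voxel at the given distance from the resection portion;
    voxels beyond the outermost band keep their normal display color."""
    for upper, color in DISTANCE_BANDS:
        if distance_mm <= upper:
            return color
    return None
```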
In this embodiment, when a resection simulation is performed prior to surgery, the depth controller 17 computes the change in depth or discontinuity around the resection site on the basis of the depth position of the resection site sensed by the depth sensor 15.
If the extent of this change exceeds a specific threshold, the voxel label setting section 18 and the resected voxel label calculation display section 19 perform control so that resection is halted in the virtual space used for simulation, or the resection data is not updated.
More specifically, the following control is performed on the threshold Ti used at each resection point i.
Specifically, when the concept of threshold summing valid points is introduced, if the depth change from the immediately prior threshold summing valid point is below a specific value at a resection point i−1, that point is not treated as a new threshold summing valid point, so even if resection is continued in a flat plane, a restriction can be imposed so that Ti does not contract to zero.
ΔDk: depth change from the immediately prior threshold summing valid point, at threshold summing valid point k
m: resectable point evaluation coefficient (at least 1.0)
k: threshold summing valid point evaluation coefficient (at least 0.0 and less than 1.0)
Using these quantities, if ΔDi-1 < k*Ti-1 is true, the resection point i−1 is not treated as a new threshold summing valid point, and Ti = Ti-1. Otherwise, the point is treated as a threshold summing valid point whose depth change is added into the threshold, as with a conventional method.
Thus, if the resection point moves through a relatively flat portion where the depth change amount ΔDi-1 is less than a specific value (ΔDi-1<kTi), the resection simulation is performed so as not to update Ti.
Consequently, in a portion where the depth position changes greatly, either the resection data is not updated, or resection is halted, which allows the proper resection simulation image to be displayed.
However, if the above-mentioned control of the threshold summing valid points is not performed, ΔDi-1 will be approximately zero in a relatively flat portion, so Ti contracts toward zero, and even a tiny depth change can create a problem that leads to the cancellation of resection.
Thus, in this embodiment, when resection simulation is performed by using the above-mentioned concept of threshold summing valid points, the resulting display will be close to the intended resection simulation image.
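One plausible reading of this control, sketched in Python, is given below. It is heavily hedged: the text does not spell out the "conventional" threshold update, so it is taken here to be a moving average of recent depth changes, and the use of m*T as the halting condition is likewise an assumption; t0, m, k, and alpha are illustrative parameters.

```python
def simulate_resection(depths, t0=2.0, m=1.5, k=0.3, alpha=0.5):
    """Sketch of resection control using threshold summing valid points.
    depths: depth values at successive resection points. t0: initial
    threshold T. m: resectable point evaluation coefficient (>= 1.0).
    k: threshold summing valid point evaluation coefficient (0.0 <= k < 1.0).
    alpha: weight of the assumed moving-average threshold update."""
    t = t0
    last_valid_depth = depths[0]
    resected = [True]                      # first point assumed resectable
    for d in depths[1:]:
        delta = abs(d - last_valid_depth)  # depth change from prior valid point
        if delta > m * t:                  # depth changes too sharply:
            resected.append(False)         # halt resection / don't update data
            continue
        resected.append(True)
        if delta < k * t:
            pass                           # not a new valid point; T unchanged
        else:
            last_valid_depth = d           # new threshold summing valid point
            t = alpha * t + (1.0 - alpha) * delta  # assumed conventional update
    return resected
```

With k = 0 (that is, without the valid-point control), resecting along a flat plane drives T toward zero, reproducing the problem described above; with k > 0, T stays at its initial value in flat regions.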
An embodiment of the present invention was described above, but the present invention is not limited to or by the above embodiment, and various modifications are possible without departing from the gist of the invention.
(A)
In the above embodiment, an example was described in which the present invention was in the form of a surgery assistance device, but the present invention is not limited to this.
For example, the present invention can be in the form of a surgery assistance program that allows a computer to execute the control method described above.
(B)
In the above embodiment, an example was described in which a single three-dimensional sensor 32a, which is a six-axis sensor, was attached to the oblique endoscope 32 in order to sense the three-dimensional position and attitude of the oblique endoscope 32 or the surgical instrument 33, but the present invention is not limited to this.
(C)
In the above embodiment, an example was given in which the six-axis sensor 32a was attached near the rear end of the oblique endoscope 32 in order to sense the three-dimensional position and attitude of the oblique endoscope 32 or the surgical instrument 33, but the present invention is not limited to this.
For example, the position where the three-dimensional sensor is attached is not limited to being near the rear end of the endoscope or surgical instrument, and may instead be near the center or the distal end side.
The surgery assistance device of the present invention has the effect of allowing the proper navigation to be performed during surgery while the user looks at the resection site to be resected with the surgical instrument, and therefore can be widely applied as a surgery assistance device in performing various kinds of surgery.
Priority claim: JP 2012-077119, filed March 2012 (Japan, national).
International filing: PCT/JP2013/002065, filed Mar. 26, 2013 (WO).