This application claims priority of Taiwanese Invention Patent Application No. 107121828, filed on Jun. 26, 2018, the entire teachings and disclosure of which are incorporated herein by reference.
The disclosure relates to a surgical navigation method, and more particularly to a surgical navigation method using augmented reality.
Surgical navigation systems have been applied to neurosurgical operations for years in order to reduce damage to patients' bodies during the operations due to the intricate cranial nerves, narrow operating space, and limited anatomical information. The surgical navigation systems may help a surgeon locate a lesion more precisely and more safely, provide information on relative orientations of bodily structures, and serve as a tool for measuring distances or lengths of bodily structures, thereby aiding in the surgeon's decision-making process during operations.
In addition, the surgical navigation systems may need to precisely align pre-operation data, such as computerized tomography (CT) images, magnetic resonance imaging (MRI) images, etc., with the head of the patient, such that the images are superimposed on the head in the surgeon's visual perception through a display device. Precision of the alignment is an influential factor in the precision of the operation.
Therefore, an object of the disclosure is to provide a surgical navigation method that can superimpose images on an operation target during a surgical operation with high precision.
According to the disclosure, the surgical navigation method includes, before the surgical operation is performed: (A) by a mobile device that is capable of computation and displaying images, storing three-dimensional (3D) imaging information that relates to the operation target therein; and includes, during the surgical operation: (B) by an optical positioning system, acquiring optically-positioned spatial coordinate information relating to the mobile device and the operation target in real time; (C) by the mobile device, obtaining a first optically-positioned relative coordinate set, which is a vector from the operation target to the mobile device, based on the optically-positioned spatial coordinate information acquired in step (B); and (D) by the mobile device, computing an optically-positioned 3D image that corresponds to the first optically-positioned relative coordinate set based on the 3D imaging information and the first optically-positioned relative coordinate set, and displaying the optically-positioned 3D image based on the first optically-positioned relative coordinate set such that visual perception of the operation target through the mobile device has the optically-positioned 3D image superimposed thereon.
Another object of the disclosure is to provide a surgical navigation system that includes a mobile device and an optical positioning system to implement the surgical navigation method of this disclosure.
Other features and advantages of the disclosure will become apparent in the following detailed description of the embodiment(s) with reference to the accompanying drawings, of which:
Before the disclosure is described in greater detail, it should be noted that where considered appropriate, reference numerals or terminal portions of reference numerals have been repeated among the figures to indicate corresponding or analogous elements, which may optionally have similar characteristics.
Referring to
The first embodiment of the surgical navigation method is to be implemented for a surgical operation performed on an operation target 4 which is exemplified as a head (or a brain) of a patient. In step S1, which is performed before the surgical operation, the mobile device 2 stores three-dimensional (3D) imaging information that relates to the operation target 4 in a database (not shown) built in a memory component (e.g., flash memory, a solid-state drive, etc.) thereof. The 3D imaging information may be downloaded from a data source, such as the server 1 or other electronic devices, and originate from Digital Imaging and Communications in Medicine (DICOM) image data, which may be acquired by performing CT, MRI, and/or ultrasound imaging on the operation target 4. The DICOM image data may be native 3D image data or be reconstructed from multiple two-dimensional (2D) sectional images, and relate to blood vessels, nerves, and/or bones. The data source may convert the DICOM image data into files in a 3D image format, such as the OBJ and STL formats, by using software (e.g., Amira, developed by Thermo Fisher Scientific), to form the 3D imaging information.
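By way of a non-limiting illustration only, the preparation of such 3D imaging information may be sketched as follows; the libraries, file names, and iso-surface threshold used here are assumptions of the sketch rather than part of the disclosed embodiment, which names Amira for the conversion.

```python
# Minimal sketch, not the Amira workflow named in the disclosure: load a DICOM series,
# extract an iso-surface, and save it as an STL mesh that a mobile device could store.
# Assumed libraries: pydicom, numpy, scikit-image, numpy-stl. Paths and the threshold
# value are hypothetical.
import glob
import numpy as np
import pydicom
from skimage import measure
from stl import mesh

slices = [pydicom.dcmread(f) for f in glob.glob("ct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))   # order along the scan axis
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)

# Iso-surface at an assumed bone-like intensity threshold (example value only).
verts, faces, _, _ = measure.marching_cubes(volume, level=300)

surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, tri in enumerate(faces):
    surface.vectors[i] = verts[tri]
surface.save("operation_target.stl")   # one example of a file in a 3D image format
```

The resulting STL file is merely one example of a file in a 3D image format that the mobile device 2 could store as the 3D imaging information in step S1.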
Steps S2 to S5 are performed during the surgical operation. In step S2, the optical positioning system 3 acquires optically-positioned spatial coordinate information (P.D(O) for the mobile device 2, P.T(O) for the operation target 4) relating to the mobile device 2 and the operation target 4 in real time. In step S3, the mobile device 2 constantly obtains a first optically-positioned relative coordinate set (V.TD(O)), which is a vector from the operation target 4 to the mobile device 2, based on the optically-positioned spatial coordinate information (P.D(O), P.T(O)) relating to the mobile device 2 and the operation target 4. In practice, the mobile device 2 may obtain the first optically-positioned relative coordinate set (V.TD(O)) by: (i) the optical positioning system 3 providing the optically-positioned spatial coordinate information (P.D(O), P.T(O)) to the mobile device 2 directly or through the server 1 which is connected to the optical positioning system 3 by a wired connection, and the mobile device 2 computing the first optically-positioned relative coordinate set (V.TD(O)) based on the optically-positioned spatial coordinate information (P.D(O), P.T(O)) in real time; or (ii) the optical positioning system 3 providing the optically-positioned spatial coordinate information (P.D(O), P.T(O)) to the server 1 which is connected to the optical positioning system 3 by a wired connection, and the server 1 computing the first optically-positioned relative coordinate set (V.TD(O)) based on the optically-positioned spatial coordinate information (P.D(O), P.T(O)) in real time and transmitting the first optically-positioned relative coordinate set (V.TD(O)) to the mobile device 2.
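As a minimal, non-limiting sketch of the computation in step S3 (the coordinate values are hypothetical and a common positioning frame is assumed), the first optically-positioned relative coordinate set may be obtained by subtracting the position of the operation target 4 from the position of the mobile device 2:

```python
# Hedged sketch: compute V.TD(O) = P.D(O) - P.T(O) in the optical positioning frame.
# The example coordinates are assumptions for illustration only.
import numpy as np

def relative_coordinate_set(p_target: np.ndarray, p_device: np.ndarray) -> np.ndarray:
    """Vector from the operation target to the mobile device."""
    return p_device - p_target

# Hypothetical readings, in millimetres, from the optical positioning system.
p_t_o = np.array([120.0, 85.0, 410.0])    # P.T(O): operation target
p_d_o = np.array([620.0, 300.0, 900.0])   # P.D(O): mobile device
v_td_o = relative_coordinate_set(p_t_o, p_d_o)
print(v_td_o)   # -> [500. 215. 490.]
```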
In step S4, the mobile device 2 computes an optically-positioned 3D image that corresponds to the first optically-positioned relative coordinate set (V.TD(O)) based on the 3D imaging information and the first optically-positioned relative coordinate set (V.TD(O)), such that the optically-positioned 3D image presents an image of, for example, the complete brain of the patient as seen from the location of the mobile device 2. Imaging of the optically-positioned 3D image may be realized by software such as Unity (developed by Unity Technologies). Then, the mobile device 2 displays the optically-positioned 3D image based on the first optically-positioned relative coordinate set (V.TD(O)) such that visual perception of the operation target 4 through the mobile device 2 has the optically-positioned 3D image superimposed thereon. In the field of augmented reality (AR), the image superimposition can be realized by various conventional methods, so details thereof are omitted herein for the sake of brevity. It is noted that the optical positioning system 3 used in this embodiment is developed for medical use, thus having a high precision of about 0.35 millimeters. Positioning systems that are used for ordinary augmented reality applications do not require such high precision, and may have a precision of only about 0.5 meters. Accordingly, the optically-positioned 3D image can be superimposed on the visual perception of the operation target 4 with high precision, so the surgeon and/or the relevant personnel may see a scene where the optically-positioned 3D image is superimposed on the operation target 4 via the mobile device 2. In step S4, the mobile device 2 may further transmit the optically-positioned 3D image to another electronic device (not shown) for displaying the optically-positioned 3D image on another display device 6; or the mobile device 2 may further transmit, to another electronic device, a superimposition 3D image where the optically-positioned 3D image is superimposed on the operation target 4 captured by the camera module of the mobile device 2, so as to display the superimposition 3D image on a display device 6 that is separate from the mobile device 2. Said another electronic device may be the server 1 that is externally coupled to the display device 6, a computer that is externally coupled to the display device 6, or the display device 6 itself. In a case that said another electronic device is the display device 6 itself, the mobile device 2 may use a wireless display technology, such as MiraScreen, to transfer the image to the display device 6 directly.
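A minimal, non-limiting sketch of how the relative coordinate set might be turned into a pose for rendering is given below; the identity orientation and the matrix convention are assumptions of the sketch, and the disclosure names Unity as one example of software that realizes the actual imaging.

```python
# Hedged sketch: derive the pose at which the optically-positioned 3D model could be
# rendered so that it overlays the operation target. Orientation handling is omitted
# (identity rotation assumed); the actual rendering is left to an engine such as Unity.
import numpy as np

def target_pose_in_device_frame(v_td: np.ndarray) -> np.ndarray:
    """4x4 homogeneous transform of the operation target relative to the mobile device.

    v_td is the target-to-device vector, so the target sits at -v_td in the device frame.
    """
    pose = np.eye(4)
    pose[:3, 3] = -v_td
    return pose

v_td_o = np.array([500.0, 215.0, 490.0])   # V.TD(O) from the previous sketch (mm)
print(target_pose_in_device_frame(v_td_o))
```

In an actual embodiment, the orientation of the mobile device 2 would also be tracked, so the translation above would be combined with a rotation before being handed to the rendering engine.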
In one implementation, step S1 further includes that the mobile device 2 stores two-dimensional (2D) imaging information that relates to the operation target 4 (e.g., cross-sectional images of the head or brain of the patient) in the database. The 2D imaging information may be downloaded from the data source (e.g., the server 1 or other electronic devices), and originate from DICOM image data. The data source may convert the DICOM image data into files in a 2D image format, such as the JPG and NIfTI formats, by using DICOM-to-NIfTI converter software (e.g., dcm2nii, an open source program), to form the 2D imaging information. Step S2 further includes that the optical positioning system 3 acquires optically-positioned spatial coordinate information (P.I(O)) relating to a surgical instrument 5 in real time. Step S3 further includes that the mobile device 2 obtains a second optically-positioned relative coordinate set (V.TI(O)), which is a vector from the operation target 4 to the surgical instrument 5, based on the optically-positioned spatial coordinate information (P.I(O), P.T(O)) relating to the surgical instrument 5 and the operation target 4. In practice, the mobile device 2 may obtain the second optically-positioned relative coordinate set (V.TI(O)) by: (i) the optical positioning system 3 providing the optically-positioned spatial coordinate information (P.I(O), P.T(O)) relating to the surgical instrument 5 and the operation target 4 to the mobile device 2 directly or through the server 1 which is connected to the optical positioning system 3 by a wired connection, and the mobile device 2 computing the second optically-positioned relative coordinate set (V.TI(O)) based on the optically-positioned spatial coordinate information (P.I(O), P.T(O)) relating to the surgical instrument 5 and the operation target 4 in real time; or (ii) the optical positioning system 3 providing the optically-positioned spatial coordinate information (P.I(O), P.T(O)) relating to the surgical instrument 5 and the operation target 4 to the server 1 which is connected to the optical positioning system 3 by a wired connection, and the server 1 computing the second optically-positioned relative coordinate set (V.TI(O)) based on the optically-positioned spatial coordinate information (P.I(O), P.T(O)) relating to the surgical instrument 5 and the operation target 4 in real time and transmitting the second optically-positioned relative coordinate set (V.TI(O)) to the mobile device 2. Step S4 further includes that the mobile device 2 obtains at least one optically-positioned 2D image (referred to as “the optically-positioned 2D image” hereinafter) that corresponds to the second optically-positioned relative coordinate set (V.TI(O)) based on the 2D imaging information and the second optically-positioned relative coordinate set (V.TI(O)), and displays the optically-positioned 2D image based on the first and second optically-positioned relative coordinate sets (V.TD(O), V.TI(O)) such that visual perception of the operation target 4 through the mobile device 2 has the optically-positioned 2D image superimposed thereon, as exemplified in
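As a non-limiting sketch of how the optically-positioned 2D image might be selected from the 2D imaging information, the cross-section nearest the surgical instrument 5 could be read out of a NIfTI volume as follows; the nibabel library, the file name, and the coordinate-frame handling are assumptions of this sketch rather than requirements of the disclosure.

```python
# Hedged sketch: pick the axial cross-section of the 2D imaging information (a NIfTI
# volume, as produced by a dcm2nii-style conversion) that corresponds to the surgical
# instrument's position. The instrument vector is assumed to be expressed in the
# volume's world (scanner) coordinates, in millimetres.
import numpy as np
import nibabel as nib
from nibabel.affines import apply_affine

volume = nib.load("operation_target.nii.gz")   # 2D imaging information (hypothetical file)
data = volume.get_fdata()

def axial_slice_for_instrument(v_ti: np.ndarray) -> np.ndarray:
    """Return the axial slice nearest the instrument tip (V.TI as a world-space point)."""
    voxel = apply_affine(np.linalg.inv(volume.affine), v_ti)   # world -> voxel indices
    k = int(round(voxel[2]))
    k = max(0, min(data.shape[2] - 1, k))                      # clamp to the volume
    return data[:, :, k]

slice_img = axial_slice_for_instrument(np.array([12.0, -8.0, 35.0]))   # hypothetical V.TI(O)
print(slice_img.shape)
```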
As a result, the surgeon or the relevant personnel can not only see the superimposition 3D image where the optically-positioned 3D image is superimposed on the operation target 4 via the mobile device 2, but can also see the cross-sectional images (i.e., the optically-positioned 2D image) of the operation target 4 corresponding to a position of the surgical instrument 5 (as exemplified in
It is noted that the 3D imaging information and/or the 2D imaging information may further include information relating to an entry point and a plan (e.g., a surgical route) of the surgical operation for the operation target 4. In such a case, the optically-positioned 3D image and/or the optically-positioned 2D image shows the entry point and the plan of the surgical operation for the operation target 4.
In step S5, the mobile device 2 determines whether an instruction for ending the surgical navigation is received. The flow ends when the determination is affirmative, and goes back to step S2 when otherwise. That is, before receipt of the instruction for ending the surgical navigation, the surgical navigation system 100 continuously repeats steps S2 to S4 to obtain the optically-positioned 3D image and/or the optically-positioned 2D image based on the latest optically-positioned spatial coordinate information (P.D(O), P.T(O), and optionally P.I(O)). As a result, the scenes where the optically-positioned 3D image and/or the optically-positioned 2D image is superimposed on the operation target 4, as seen by the surgeon and/or the relevant personnel through the mobile device 2, are constantly updated in real time in accordance with movement of the surgeon and/or the relevant personnel, and the mobile device 2 can provide information relating to the internal structure of the operation target 4 to the surgeon and/or the relevant personnel in real time, thereby assisting the surgeon and/or the relevant personnel in making decisions during the surgical operation.
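A high-level, non-limiting sketch of the repetition of steps S2 to S5 is given below; every attribute and method used on the placeholder objects stands in for behaviour described above and is not an actual interface of the surgical navigation system 100.

```python
# Non-limiting sketch of the real-time loop of steps S2 to S5. All method names are
# placeholders for behaviour described in the text, not an actual API.
def navigation_loop(optical_system, mobile_device, imaging_info_3d):
    while not mobile_device.end_navigation_requested():                     # step S5
        p_d_o, p_t_o = optical_system.read_positions()                      # step S2
        v_td_o = p_d_o - p_t_o                                               # step S3: V.TD(O)
        image_3d = mobile_device.compute_3d_image(imaging_info_3d, v_td_o)   # step S4
        mobile_device.display_superimposed(image_3d, v_td_o)                 # step S4
```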
Furthermore, in step S4, the mobile device 2 may further transmit the optically-positioned 3D image and/or the optically-positioned 2D image to another electronic device for displaying the optically-positioned 3D image and/or the optically-positioned 2D image on another display device 6; or the mobile device 2 may further transmit, to another electronic device, a superimposition 3D/2D image where the optically-positioned 3D image and/or the optically-positioned 2D image is superimposed on the operation target 4 captured by the camera module of the mobile device 2, so as to display the superimposition 3D/2D image on the display device 6 separate from the mobile device 2. Said another electronic device may be the server 1 that is externally coupled to the display device 6, a computer that is externally coupled to the display device 6, or the display device 6 itself. In a case that said another electronic device is the display device 6 itself, the mobile device 2 may use a wireless display technology to transfer the image(s) to the display device 6 directly. As a result, persons other than the surgeon and the relevant personnel may experience the surgical operation by seeing the images of the surgical operation from the perspective of the surgeon (or the relevant personnel) via the display device 6, which is suitable for education purposes.
Referring to
In step S43, the mobile device 2 computes a non-optically-positioned 3D image that corresponds to the first non-optically-positioned relative coordinate set (V.TD(N)) based on the 3D imaging information and the first non-optically-positioned relative coordinate set (V.TD(N)), and displays the non-optically-positioned 3D image based on the first non-optically-positioned relative coordinate set (V.TD(N)) such that visual perception of the operation target 4 through the mobile device 2 has the non-optically-positioned 3D image superimposed thereon. In step S44, the mobile device 2 determines whether the instruction for ending the surgical navigation is received. The flow ends when the determination is affirmative, and goes back to step S41 when otherwise. Accordingly, when the mobile device 2 is not within the positioning range 30 of the optical positioning system 3 or when the optical positioning system 3 is out of order, the surgeon and/or the relevant personnel can still utilize the surgical navigation. In practice, the non-optical positioning system 7 may also be used alone in the surgical navigation system, although it has lower positioning precision when compared with the optical positioning system 3.
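A non-limiting sketch of the fallback from optical to non-optical positioning is given below; the timeout value and the function names are placeholders for the behaviour described for steps S41 to S43, not actual interfaces of the disclosed system.

```python
# Hedged sketch: if optically-positioned data does not arrive within a predetermined
# time, the non-optical estimate of V.TD is used instead. All called methods are
# placeholders; the timeout value is an illustrative assumption.
import time

def first_relative_coordinate_set(optical_system, non_optical_system, timeout_s: float = 0.5):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:                    # step S41: wait for optical data
        reading = optical_system.try_read_positions()     # placeholder; None if unavailable
        if reading is not None:
            p_d_o, p_t_o = reading
            return p_d_o - p_t_o                          # optically-positioned V.TD(O)
        time.sleep(0.01)
    return non_optical_system.estimate_v_td()             # step S42: non-optical V.TD(N)
```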
In this embodiment, the non-optical positioning system 7 includes both of the image positioning system 71 and the gyroscope positioning system 72, and step S42 includes sub-steps S421-S425 (see
In step S422, the mobile device 2 causes the gyroscope positioning system 72 to acquire gyroscope-positioned spatial coordinate information relating to the operation target 4, and the mobile device 2 computes a second reference relative coordinate set (V.TD(G)), which is a vector from the operation target 4 to the mobile device 2, based on the gyroscope-positioned spatial coordinate information in real time. The non-optically-positioned spatial coordinate information (P.T(N)) includes the image-positioned spatial coordinate information and the gyroscope-positioned spatial coordinate information.
In step S423, the mobile device 2 determines whether a difference between the first and second reference relative coordinate sets (V.TD(I), V.TD(G)) is greater than a first threshold value. The flow goes to step S424 when the determination is affirmative, and goes to step S425 when otherwise.
In step S424, the mobile device 2 takes the first reference relative coordinate set (V.TD(I)) as the first non-optically-positioned relative coordinate set (V.TD(N)). In step S425, the mobile device 2 takes the second reference relative coordinate set (V.TD(G)) as the first non-optically-positioned relative coordinate set (V.TD(N)). Generally, the image positioning system 71 has higher precision than the gyroscope positioning system 72. However, because the gyroscope positioning system 72 acquires the gyroscope-positioned spatial coordinate information faster than the image positioning system 71 acquires the image-positioned spatial coordinate information, the second reference relative coordinate set (V.TD(G)) has higher priority in serving as the first non-optically-positioned relative coordinate set (V.TD(N)), unless the difference between the first and second reference relative coordinate sets (V.TD(I), V.TD(G)) is greater than the first threshold value.
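The selection rule of sub-steps S423 to S425 may be sketched, in a non-limiting way, as follows; the threshold value is an illustrative assumption, and the Euclidean norm is used here as merely one way of measuring the difference between the two reference sets.

```python
# Hedged sketch of steps S423-S425: the gyroscope-based estimate is preferred for its
# update rate unless it deviates too far from the image-based estimate.
import numpy as np

def select_non_optical_estimate(v_td_i: np.ndarray,
                                v_td_g: np.ndarray,
                                threshold_mm: float = 5.0) -> np.ndarray:
    """Return V.TD(N) from the image-based and gyroscope-based reference sets."""
    if np.linalg.norm(v_td_i - v_td_g) > threshold_mm:   # step S423
        return v_td_i    # step S424: fall back to the more precise image-based estimate
    return v_td_g        # step S425: keep the faster gyroscope-based estimate
```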
In the implementation where the mobile device 2 further stores the 2D imaging information in step S1, step S42 further includes that the non-optical positioning system 7 acquires non-optically-positioned spatial coordinate information (P.I(N)) relating to the surgical instrument 5 in real time, and the mobile device 2 obtains a second non-optically-positioned relative coordinate set (V.TI(N)), which is a vector from the operation target 4 to the surgical instrument 5, based on the non-optically-positioned spatial coordinate information (P.I(N), P.T(N)) relating to the surgical instrument 5 and the operation target 4. Step S43 further includes that the mobile device 2 obtains at least one non-optically-positioned 2D image (referred to as “the non-optically-positioned 2D image” hereinafter) corresponding to the second non-optically-positioned relative coordinate set (V.TI(N)) based on the 2D imaging information and the second non-optically-positioned relative coordinate set (V.TI(N)), and displays the non-optically-positioned 2D image based on the first non-optically-positioned relative coordinate set (V.TD(N)) and the second non-optically-positioned relative coordinate set (V.TI(N)) such that visual perception of the operation target 4 through the mobile device 2 has the non-optically-positioned 2D image superimposed thereon. The method for obtaining the non-optically-positioned 2D image is similar to that for obtaining the optically-positioned 2D image, so details thereof are omitted herein for the sake of brevity. Before receipt of the instruction for ending the surgical navigation, the flow goes back to step S41 after step S44. If the mobile device 2 still fails to acquire the optically-positioned spatial coordinate information (P.D(O), P.T(O)) within the predetermined time period in step S41, steps S42 to S44 are repeated, so as to continuously obtain the non-optically-positioned 3D image and/or the non-optically-positioned 2D image based on the latest non-optically-positioned spatial coordinate information (P.T(N), and optionally P.I(N)). As a result, the scenes where the non-optically-positioned 3D image and/or the non-optically-positioned 2D image is superimposed on the operation target 4, as seen by the surgeon and/or the relevant personnel through the mobile device 2, are constantly updated in real time in accordance with movement of the surgeon and/or the relevant personnel, and the mobile device 2 can provide information relating to the internal structure of the operation target 4 to the surgeon and/or the relevant personnel in real time, thereby assisting the surgeon and/or the relevant personnel in making decisions during the surgical operation.
Furthermore, since the non-optical positioning system 7 of this embodiment includes both of the image positioning system 71 and the gyroscope positioning system 72, in the implementation where the mobile device 2 further stores the 2D imaging information in step S1, step S421 further includes that the mobile device 2 causes the image positioning system 71 to acquire image-positioned spatial coordinate information relating to the surgical instrument 5, and the mobile device 2 computes a third reference relative coordinate set (V.TI(I)), which is a vector from the operation target 4 to the surgical instrument 5, based on the image-positioned spatial coordinate information relating to the surgical instrument 5 and the operation target 4 in real time; and step S422 further includes that the mobile device 2 causes the gyroscope positioning system 72 to acquire gyroscope-positioned spatial coordinate information relating to the surgical instrument 5, and the mobile device 2 computes a fourth reference relative coordinate set (V.TI(G)), which is a vector from the operation target 4 to the surgical instrument 5, based on the gyroscope-positioned spatial coordinate information relating to the surgical instrument 5 and the operation target 4 in real time. Then, the mobile device 2 determines whether a difference between the third and fourth reference relative coordinate sets (V.TI(I), V.TI(G)) is greater than a second threshold value. The mobile device 2 takes the third reference relative coordinate set (V.TI(I)) as the second non-optically-positioned relative coordinate set (V.TI(N)) when the determination is affirmative, and takes the fourth reference relative coordinate set (V.TI(G)) as the second non-optically-positioned relative coordinate set (V.TI(N)) when otherwise.
In practice, since the optical positioning system 3 may need to first transmit the optically-positioned spatial coordinate information (P.D(O), P.T(O), and optionally P.I(O)) to the server 1 through a wired connection, and then the server 1 provides the optically-positioned spatial coordinate information (P.D(O), P.T(O), and optionally P.I(O)) or the first optically-positioned relative coordinate set to the mobile device 2, transmission delay may exist. A serious transmission delay may lead to a significant difference between the computed first optically-positioned relative coordinate set and a current relative coordinate set, which is a vector from the operation target 4 to the mobile device 2 at the present moment, so the optically-positioned 3D image may not be accurately superimposed on the operation target 4 in terms of visual perception, causing image jiggling. On the other hand, the non-optical positioning system 7 that is mounted to the mobile device 2 transmits the non-optically-positioned spatial coordinate information to the mobile device 2 directly, so the transmission delay may be significantly reduced, alleviating image jiggling.
Accordingly, the third embodiment of the surgical navigation method according to this disclosure is proposed to be implemented by the surgical navigation system 100′ as shown in
In this embodiment, steps S1-S5 are the same as those of the first embodiment. While the optical positioning system 3 acquires the optically-positioned spatial coordinate information (P.D(O), P.T(O)) relating to the mobile device 2 and the operation target 4 in real time (step S2), the image positioning system 71 or the gyroscope positioning system 72 of the non-optical positioning system 7 also acquires the non-optically-positioned spatial coordinate information (P.T(N)) relating to the operation target 4 (step S51). While the mobile device 2 obtains the first optically-positioned relative coordinate set (V.TD(O)) in real time (step S3), the mobile device 2 also constantly computes the first non-optically-positioned relative coordinate set (V.TD(N)) based on the non-optically-positioned spatial coordinate information (P.T(N)) relating to the operation target 4 in real time (step S52).
In step S53, the mobile device 2 determines whether a difference between the first optically-positioned relative coordinate set (V.TD(O)) and the first non-optically-positioned relative coordinate set (V.TD(N)) is greater than a third threshold value. The flow goes to step S4 when the determination is affirmative, and goes to step S54 when otherwise.
In step S54, the mobile device 2 computes the non-optically-positioned 3D image that corresponds to the first non-optically-positioned relative coordinate set (V.TD(N)) based on the 3D imaging information and the first non-optically-positioned relative coordinate set (V.TD(N)), and displays the non-optically-positioned 3D image based on the first non-optically-positioned relative coordinate set (V.TD(N)) such that visual perception of the operation target 4 through the mobile device 2 has the non-optically-positioned 3D image superimposed thereon.
In the implementation where the mobile device 2 further stores the 2D imaging information in step S1, step S51 further includes that the image positioning system 71 or the gyroscope positioning system 72 of the non-optical positioning system 7 acquires the non-optically-positioned spatial coordinate information (P.I(N)) relating to the surgical instrument 5 in real time; and step S52 further includes that the mobile device 2 computes the second non-optically-positioned relative coordinate set (V.TI(N)) based on the non-optically-positioned spatial coordinate information (P.I(N), P.T(N)) relating to the surgical instrument 5 and the operation target 4 in real time. Then, the mobile device 2 determines whether a difference between the second optically-positioned relative coordinate set (V.TI(O)) and the second non-optically-positioned relative coordinate set (V.TI(N)) is greater than a fourth threshold value. When the determination is affirmative, the mobile device 2 obtains the optically-positioned 2D image based on the second optically-positioned relative coordinate set (V.TI(O)), and displays the optically-positioned 2D image based on the first optically-positioned relative coordinate set (V.TD(O)) (or the first non-optically-positioned relative coordinate set (V.TD(N))) and the second optically-positioned relative coordinate set (V.TI(O)), such that visual perception of the operation target 4 through the mobile device 2 has the optically-positioned 2D image superimposed thereon; otherwise, the mobile device 2 obtains the non-optically-positioned 2D image based on the second non-optically-positioned relative coordinate set (V.TI(N)), and displays the non-optically-positioned 2D image based on the first optically-positioned relative coordinate set (V.TD(O)) (or the first non-optically-positioned relative coordinate set (V.TD(N))) and the second non-optically-positioned relative coordinate set (V.TI(N)), such that visual perception of the operation target 4 through the mobile device 2 has the non-optically-positioned 2D image superimposed thereon. In other words, the third embodiment primarily uses the non-optical positioning system 7 for obtaining the relative coordinate set(s) in order to avoid image jiggling, unless a positioning error of the non-optical positioning system 7 is too large (note that the optical positioning system 3 has higher precision in positioning).
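The switching rule of the third embodiment (steps S53 and S54) may likewise be sketched in a non-limiting way; the threshold value is again an illustrative assumption, and the Euclidean norm is used as one possible measure of the difference.

```python
# Hedged sketch of steps S53/S54: the non-optical estimate is used by default to avoid
# transmission-delay jiggling, and the more precise optical estimate is used only when
# the two estimates disagree by more than a threshold.
import numpy as np

def select_display_estimate(v_td_o: np.ndarray,
                            v_td_n: np.ndarray,
                            threshold_mm: float = 5.0) -> np.ndarray:
    if np.linalg.norm(v_td_o - v_td_n) > threshold_mm:   # step S53
        return v_td_o    # step S4: optical positioning is more precise
    return v_td_n        # step S54: non-optical positioning avoids delay-induced jiggle
```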
In summary, the embodiments of this disclosure include the optical positioning system 3 acquiring the optically-positioned spatial coordinate information (P.D(O), P.T(O), P.I(O)) relating to the mobile device 2, the operation target 4 and the surgical instrument 5, thereby achieving high precision in positioning, so that the mobile device 2 can superimpose the optically-positioned 3D/2D image(s) on the operation target 4 for visual perception with high precision at a level suitable for medical use, improving the accuracy and precision of the surgical operation. In the second embodiment, when the mobile device 2 is not within the positioning range 30 of the optical positioning system 3 or when the optical positioning system 3 is out of order, the mobile device 2 can still cooperate with the non-optical positioning system 7 to obtain the non-optically-positioned 3D/2D image(s) and superimpose the non-optically-positioned 3D/2D image(s) on the operation target 4 for visual perception, so that the surgical navigation is not interrupted. In the third embodiment, by appropriately switching use of information from the optical positioning system 3 and the non-optical positioning system 7, possible image jiggling may be alleviated.
In the description above, for the purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the embodiment(s). It will be apparent, however, to one skilled in the art, that one or more other embodiments may be practiced without some of these specific details. It should also be appreciated that reference throughout this specification to “one embodiment,” “an embodiment,” an embodiment with an indication of an ordinal number and so forth means that a particular feature, structure, or characteristic may be included in the practice of the disclosure. It should be further appreciated that in the description, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of various inventive aspects, and that one or more features or specific details from one embodiment may be practiced together with one or more features or specific details from another embodiment, where appropriate, in the practice of the disclosure.
While the disclosure has been described in connection with what is (are) considered the exemplary embodiment(s), it is understood that this disclosure is not limited to the disclosed embodiment(s) but is intended to cover various arrangements included within the spirit and scope of the broadest interpretation so as to encompass all such modifications and equivalent arrangements.