The technology of the present disclosure relates to a medical support apparatus, an operating method for a medical support apparatus, and an operating program.
JP2018-94020A discloses a technology that uses an external sensor to acquire the position of an ultrasound probe in a surgical field and aligns an ultrasound image with a 3D image based on a tomographic image taken with a tomograph prior to surgery.
The technology according to the present disclosure provides a medical support apparatus, an operating method for a medical support apparatus, and an operating program with which preparation information can be superimposed at an appropriate position within a surgical field image without using an external sensor.
A medical support apparatus according to the technology of the present disclosure includes a processor configured to: acquire a surgical field image obtained by optically taking, with a camera, an image of a surgical field containing a target area inside the body and an ultrasound probe inserted into the body; acquire a first three-dimensional image illustrating an internal structure of the target area, the first three-dimensional image being based on an ultrasound image group taken by the ultrasound probe; derive first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates the position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image; derive second positional relationship information indicating a positional relationship between a position within the first three-dimensional image and a position within a second three-dimensional image by performing image analysis on the first three-dimensional image and the second three-dimensional image, the second three-dimensional image indicating an internal structure of the target area and having been generated on the basis of a tomographic image group taken in advance by a tomograph; generate a composite image with preparation information superimposed onto the surgical field image, the preparation information having been prepared in advance as information related to the target area, including at least the second three-dimensional image or applied information applied to the second three-dimensional image, and having a defined three-dimensional position within the second three-dimensional image, the preparation information being superimposed onto a corresponding position that corresponds to the target area within the surgical field image on the basis of the first positional relationship information and the second positional relationship information; and control displaying of the composite image.
An operating method for a medical support apparatus according to the technology of the present disclosure is an operating method for a medical support apparatus including a processor configured to: acquire a surgical field image obtained by optically taking, with a camera, an image of a surgical field containing a target area inside the body and an ultrasound probe inserted into the body to take images of the inside of the target area; acquire a first three-dimensional image illustrating an internal structure of the target area, the first three-dimensional image having been generated on the basis of an ultrasound image group taken by the ultrasound probe; derive first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates the position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image; derive second positional relationship information indicating a positional relationship between a position within the first three-dimensional image and a position within a second three-dimensional image by performing image analysis on the first three-dimensional image and the second three-dimensional image, the second three-dimensional image indicating an internal structure of the target area and having been generated on the basis of a tomographic image group taken in advance by a tomograph; generate a composite image with preparation information superimposed onto the surgical field image, the preparation information having been prepared in advance as information related to the target area, including at least the second three-dimensional image or applied information applied to the second three-dimensional image, and having a defined three-dimensional position within the second three-dimensional image, the preparation information being superimposed onto a corresponding position that corresponds to the target area within the surgical field image on the basis of the first positional relationship information and the second positional relationship information; and control displaying of the composite image.
An operating program for a medical support apparatus according to the technology of the present disclosure is an operating program for causing a computer to function as a medical support apparatus, the operating program causing the computer to execute processing to: acquire a surgical field image obtained by optically taking, with a camera, an image of a surgical field containing a target area inside the body and an ultrasound probe inserted into the body to take images of the inside of the target area; acquire a first three-dimensional image illustrating an internal structure of the target area, the first three-dimensional image having been generated on the basis of an ultrasound image group taken by the ultrasound probe; derive first positional relationship information indicating a position and orientation of the first three-dimensional image within the surgical field image, on the basis of pose information that indicates the position and orientation of the ultrasound probe within the surgical field as estimated by image analysis of the surgical field image; derive second positional relationship information indicating a positional relationship between a position within the first three-dimensional image and a position within a second three-dimensional image by performing image analysis on the first three-dimensional image and the second three-dimensional image, the second three-dimensional image indicating an internal structure of the target area and having been generated on the basis of a tomographic image group taken in advance by a tomograph; generate a composite image with preparation information superimposed onto the surgical field image, the preparation information having been prepared in advance as information related to the target area, including at least the second three-dimensional image or applied information applied to the second three-dimensional image, and having a defined three-dimensional position within the second three-dimensional image, the preparation information being superimposed onto a corresponding position that corresponds to the target area within the surgical field image on the basis of the first positional relationship information and the second positional relationship information; and control displaying of the composite image.
Exemplary embodiments according to the technology of the present disclosure will be described in detail based on the following figures, wherein:
As illustrated in
The medical support apparatus 11 is communicatively connected to an endoscope 13, an ultrasound probe 14, and a display 16. In endoscopic surgery, a portion including the distal end part of each of the endoscope 13 and the ultrasound probe 14 is inserted into the body via trocars 17. The trocars 17 are insertion tools provided with an insertion hole through which the endoscope 13 or the like is inserted and a valve provided in the insertion hole to prevent gas leakage. In endoscopic surgery, insufflation is performed by injecting carbon dioxide gas into the abdominal cavity, and thus the trocars 17 are used to insert the endoscope 13, the ultrasound probe 14, and the like into the body.
Also, in this example, the target area of surgery is the liver LV, and
The endoscope 13 has an insertion part 13A to be inserted into the body of the patient PT, and the distal end part of the insertion part 13A has a built-in camera 13B and a light source (such as a light-emitting diode (LED)) for illumination. As an example, the endoscope 13 is a rigid endoscope with an inflexible insertion part 13A, also referred to as a laparoscope or the like due to being used often for abdominal cavity observation. The camera 13B includes an image sensor, such as a charge-coupled device (CCD) image sensor or a complementary metal-oxide-semiconductor (CMOS) image sensor, and imaging optics including a lens that forms a subject image on the image pick-up surface of the image sensor. The image sensor is capable of picking up color images, for example.
The endoscope 13 optically takes an image of a surgical field SF including a target area (in this example, the liver LV) inside the body of the patient PT. The endoscope 13 is connected to an endoscopic image processor, not illustrated, and the image processor generates a surgical field image 21 by performing signal processing on the image pick-up signal outputted by the image sensor. Visible light such as white light is used as the illumination light for the endoscope 13. Note that special light such as ultraviolet rays and infrared light may also be used as the illumination light for the endoscope 13. Light restricted to a specific wavelength, such as short-wavelength narrow-band light in which light in a short-wavelength region such as the ultraviolet region is restricted to a narrow band, may be used as the special light. The surgical field image 21 is a video image of the surgical field SF illuminated by visible light, and more specifically is a video image based on reflected light generated by visible light being reflected near the surface of the surgical field SF. Accordingly, although superficial blood vessels that exist near the surface layer of the target area may be represented in some cases, internal structures such as vascular structures are difficult to observe in the surgical field image 21. The surgical field image 21 taken by the endoscope 13 is transmitted to the medical support apparatus 11 in real time via the endoscopic image processor.
The ultrasound probe 14 has an insertion part 14A to be inserted into the body of the patient PT, similarly to the endoscope 13, and the distal end part of the insertion part 14A has a built-in ultrasonic transducer 14B. The ultrasonic transducer 14B transmits ultrasonic waves to the target area and receives reflected waves that have reflected off the target area. The ultrasound probe 14 is connected to an ultrasound probe image processor, not illustrated. The ultrasound probe image processor performs image reconstruction processing on the basis of the signal corresponding to the reflected waves received by the ultrasound probe 14. Through the image reconstruction processing, an ultrasound image 22A (see
As one example, the ultrasound probe 14 is of the convex type that transmits ultrasonic waves radially, and a fan-shaped image with the ultrasonic transducer 14B as a base point is acquired. By causing the ultrasound probe 14 to scan, multiple ultrasound images 22A are taken along the scanning direction. These multiple ultrasound images 22A are collectively referred to as an ultrasound image group 22. The ultrasound image group 22 is transmitted to the medical support apparatus 11 in real time via the ultrasound probe image processor.
The medical support apparatus 11 acquires the surgical field image 21 from the endoscope 13 and acquires the ultrasound image group 22 from the ultrasound probe 14. If the ultrasound probe 14 and a treatment tool 18 are inserted into the surgical field SF, the ultrasound probe 14 and the treatment tool 18 appear in the surgical field image 21. The medical support apparatus 11 outputs the surgical field image 21 in which the ultrasound probe 14 and the treatment tool 18 appear to the display 16. Through the screen of the display 16, a visual field of the inside of the body of the patient PT is provided to the medical staff ST. A composite image 26 with preoperative preparation information 23 superimposed onto the surgical field image 21 is also displayed on the display 16. As described later, the composite image 26 is generated by the medical support apparatus 11.
The preparation information creation apparatus 12 is used to create the preoperative preparation information 23. The preparation information creation apparatus 12 is, for example, a computer in which an image processing program having image processing functions for three-dimensional images is installed. As described later, as one example, the preoperative preparation information 23 is surgical planning information or the like created prior to surgery. Also, as one example, the preoperative preparation information 23 is created by the medical staff ST using the preparation information creation apparatus 12 prior to surgery.
The medical support apparatus 11 acquires the preoperative preparation information 23 from the preparation information creation apparatus 12. The medical support apparatus 11 creates the composite image 26 with the preoperative preparation information 23 superimposed onto the surgical field image 21, and outputs the generated composite image 26 to the display 16. Through the screen of the display 16, the preoperative preparation information 23 is provided to the medical staff ST.
An example of the preoperative preparation information 23 will be described with reference to
On the basis of the tomographic image group 32 obtained by the tomograph 31, the preparation information creation apparatus 12 generates a three-dimensional image 34, which is a set of voxel data 34A, by performing three-dimensional (3D) modeling to numerically describe the three-dimensional shape of the body of the patient PT. The voxel data 34A are the pixel units of a three-dimensional space, each having three-dimensional coordinate information and a pixel value. The three-dimensional image 34 generated by 3D modeling is also referred to as three-dimensional volume data or the like. In the tomographic image group 32, the pixel spacing within each tomographic image 32A and the slice thickness between tomographic images 32A may differ from one another. In this case, for example, in the 3D modeling, interpolation processing between adjacent tomographic images 32A may be performed to generate a three-dimensional image 34 with isotropic voxel data 34A in which the length is equal in each direction of the three dimensions. The three-dimensional image 34 generated on the basis of the tomographic image group 32 is information created prior to surgery, and thus is referred to as the preoperative 3D image 34 for convenience herein. The preoperative 3D image 34 is an example of a "second three-dimensional image" according to the technology of the present disclosure.
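By way of illustration only, the interpolation into isotropic voxel data 34A described above can be sketched as follows in Python. The array shapes, spacings, and the use of scipy are assumptions made for the sketch; the disclosure does not specify an implementation at this level of detail.

```python
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(stack: np.ndarray, pixel_spacing_mm: float,
                 slice_thickness_mm: float, target_mm: float = 1.0) -> np.ndarray:
    """Resample a (slices, rows, cols) tomographic stack to isotropic voxels.

    Interpolation between adjacent tomographic images makes the voxel
    length equal (target_mm) in each direction of the three dimensions.
    """
    factors = (slice_thickness_mm / target_mm,
               pixel_spacing_mm / target_mm,
               pixel_spacing_mm / target_mm)
    return zoom(stack, factors, order=1)  # order=1: trilinear interpolation

# Example: 5 mm slices with 0.7 mm in-plane pixels -> 1 mm isotropic voxels.
ct_stack = np.zeros((30, 128, 128), dtype=np.float32)  # placeholder tomographic image group 32
iso = to_isotropic(ct_stack, pixel_spacing_mm=0.7, slice_thickness_mm=5.0)
```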
The preoperative 3D image 34 is an image that can reproduce the external shape of the body of the patient PT, anatomical areas such as organs inside the body, and internal structures of such areas. The preoperative 3D image 34 illustrated in
Also, the preoperative 3D image 34 in this example is a color image, and a pixel value for each of red (R), green (G), and blue (B) is given as the pixel value of the voxel data 34A. Note that the preoperative 3D image 34 may also be a monochrome image. For example, the pixel value of the voxel data 34A may be represented solely by the luminance (Y) based on a CT value. The value used as the pixel value of the voxel data 34A may also be converted from a CT value using a preset lookup table (LUT) or calculation formula. The pixel value of voxel data 34A may also be set to a color associated with each specific area, such as an organ or lesion identified in the preoperative 3D image 34. Opacity is also set in the voxel data 34A. Opacity is data used in volume rendering. Rendering is processing to convert a portion of the preoperative 3D image 34 into a two-dimensional projected image, and volume rendering is a rendering technique that also projects internal information about an object included in the preoperative 3D image 34 onto the projected image. Setting the opacity for each piece of the voxel data 34A makes it possible to represent internal information in different ways, such as projecting internal information onto the projected image in an opaque manner, a translucent manner, and a transparent manner, when performing volume rendering.
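The volume rendering and opacity described above follow the standard front-to-back compositing formulation, sketched below under the assumption of float arrays and a viewing direction along one voxel axis; this is an illustration, not the disclosed implementation.

```python
import numpy as np

def volume_render(rgb: np.ndarray, opacity: np.ndarray) -> np.ndarray:
    """Composite voxel colors front to back along axis 0 (viewing direction).

    rgb:     (D, H, W, 3) per-voxel color of the voxel data 34A
    opacity: (D, H, W) per-voxel opacity in [0, 1]; 0 projects transparently,
             1 opaquely, and intermediate values translucently.
    Returns an (H, W, 3) projected image.
    """
    color = np.zeros(rgb.shape[1:], dtype=np.float32)
    alpha = np.zeros(rgb.shape[1:3], dtype=np.float32)
    for d in range(rgb.shape[0]):
        weight = (1.0 - alpha) * opacity[d]  # what this slice still contributes
        color += weight[..., None] * rgb[d]
        alpha += weight
    return color
```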
As illustrated in
In this way, the preoperative preparation information 23 is preparation information which is prepared in advance as information related to the target area, which includes at least the preoperative 3D image 34 or applied information applied to the preoperative 3D image 34, and which has a defined three-dimensional position within the preoperative 3D image 34. Also, since the preoperative preparation information 23 is information based on the preoperative 3D image 34, it includes a plurality of pieces of preoperative preparation information 23 at different depths in the depth direction proceeding from the surface layer to the deep layers of the target area in the preoperative 3D image 34. The plurality of pieces of preoperative preparation information 23 may be information pertaining to a plurality of regions that are morphologically separate from each other, or to a plurality of regions in a single area, such as a plurality of regions of the vascular structure 37. Including a plurality of pieces of preoperative preparation information 23 at different depths in this way makes it possible to provide more detailed preoperative preparation information 23 in accordance with the depth of the target area.
The medical support apparatus 11 is operated through the reception device 42 by an operator such as the medical staff ST. The reception device 42 includes a keyboard, mouse, and the like, not illustrated, and accepts instructions from the operator. The reception device 42 may also be a device such as a touch panel that receives touch input, a device such as a microphone that receives voice input, a device such as a camera that receives gesture input, and the like.
The display 16 may be an electro-luminescence (EL) display or a liquid crystal display, for example. Besides the surgical field image 21 and the composite image 26, various information is displayed on the display 16.
The processor 41 is, for example, a central processing unit (CPU); it centrally controls each component of the medical support apparatus 11 by following a control program, and executes various processes by following various application programs.
The storage 44 is a non-volatile storage apparatus storing various programs, various parameters, and the like. The storage 44 may be a hard disk drive (HDD) or a solid state drive (SSD), for example. Also, the storage 44 stores a medical support program 49 for causing the computer to function as the medical support apparatus 11.
The RAM 43 is a memory in which information is stored temporarily, and is used as work memory by the processor 41. The RAM 43 may be dynamic random access memory (DRAM) or static random access memory (SRAM), for example.
The communication I/F 45 is connected to a network (omitted from illustration) such as a local area network (LAN) and/or a wide area network (WAN), and controls transmission by following a communication protocol defined by any of various wired or wireless communication standards.
The external I/F 46 is a Universal Serial Bus (USB) interface, for example, and is used to connect with peripheral equipment such as printers and memory cards.
The processor 41 performs a medical support process by reading out the medical support program 49 from the storage 44 and executing the medical support program 49 in the RAM 43. The medical support process is achieved by the processor 41 operating as an image acquisition unit 41A, a positional relationship information derivation unit 41B, a composite image generation unit 41C, and a display control unit 41D. The medical support program 49 is an example of an "operating program" according to the technology of the present disclosure.
The image acquisition unit 41A acquires the surgical field image 21 and the ultrasound image group 22. For example, the surgical field image 21 and/or the ultrasound image group 22 are acquired via the external I/F 46 or the communication I/F 45 from an apparatus that includes the processor of the endoscope 13 and/or the processor of the ultrasound probe 14. Note that the processor of the endoscope 13 and/or the processor of the ultrasound probe 14 may be included in the medical support apparatus 11, or there may be a database storing the surgical field image 21 and/or the ultrasound image group 22 generated by an apparatus that includes the processor of the endoscope 13 and/or the processor of the ultrasound probe 14, and the surgical field image 21 and/or the ultrasound image group 22 may be acquired from the database. The image acquisition unit 41A also acquires the preoperative preparation information 23 including the preoperative 3D image 34. For example, the preoperative preparation information 23 is acquired from the preparation information creation apparatus 12 via the external I/F 46 or the communication I/F 45. Note that the preparation information creation apparatus 12 and the medical support apparatus 11 may be a unified apparatus, or there may be a database storing the preoperative preparation information 23 generated by the preparation information creation apparatus 12, and the preoperative preparation information 23 may be acquired from the database.
The positional relationship information derivation unit 41B performs image analysis on the surgical field image 21, the preoperative 3D image 34, and the like to thereby derive first positional relationship information 58 (see
The processor 41 superimposes the preoperative preparation information 23 onto a corresponding position that corresponds to the target area in the surgical field image 21. The corresponding position includes the target area and the surroundings thereof. As illustrated in
To make such preoperative preparation information 23 useful support information, it is necessary to align the preoperative preparation information 23 at an appropriate position within the surgical field image 21. For example, the excision line 36 and vascular structure 37 of the liver LV are preferably displayed at a corresponding position that corresponds to the liver LV within the surgical field image 21. In the past, such alignment has been performed by using an external sensor such as a magnetic sensor.
The processor 41 of the medical support apparatus 11 according to the present disclosure aligns the surgical field image 21 and the preoperative preparation information 23 through image analysis, without using an external sensor. The following specifically describes processing to generate the composite image 26 through alignment by image analysis.
The processor 41 acquires an ultrasound three-dimensional (3D) image 51 based on the ultrasound image group 22 taken by the ultrasound probe 14. The processor 41 uses the ultrasound 3D image 51 acquired during surgery for alignment. The ultrasound 3D image 51 is an example of a “first three-dimensional image” according to the technology of the present disclosure.
As illustrated in
The processor 41 generates the ultrasound 3D image 51 by executing processing similar to the 3D modeling described in
As illustrated in
In sub-step S1310, the processor 41 detects the marker 56 from the surgical field image 21. The processor 41 then estimates the position and orientation of the ultrasound probe 14 on the basis of the form of the marker 56, and derives pose information 57. The pose information 57 includes the orientation of the axial direction of the ultrasound probe 14 and the distal end position where the ultrasonic transducer 14B is provided in the ultrasound probe 14.
In this way, the form of the marker 56 appearing in the surgical field image 21 changes depending on the orientation of the ultrasound probe 14. The processor 41 detects the orientation of the ultrasound probe 14 on the basis of such a form of the marker 56 within the surgical field image 21. Also, the relative positional relationship between the marker 56 and the distal end position of the ultrasound probe 14 is known. The processor 41 detects the distal end position of the ultrasound probe 14 within the surgical field SF on the basis of the detected position of the marker 56 in the surgical field image 21 and the known positional relationship between the marker 56 and the distal end position of the ultrasound probe 14.
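As one concrete possibility (an assumption; the disclosure does not name an algorithm), the position and orientation of the marker 56, and hence of the ultrasound probe 14, can be estimated from the known marker geometry and the detected image points with a perspective-n-point solver such as OpenCV's solvePnP. In this sketch the detected 2D points are synthesized so that the example runs end to end.

```python
import cv2
import numpy as np

# Intrinsics of camera 13B (hypothetical values from calibration).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion already corrected

# 3D lattice intersections of marker 56 in probe coordinates, known from
# the probe design (here: a flat 2x3 grid with 5 mm pitch, an assumption).
obj = np.array([[x * 5.0, y * 5.0, 0.0] for y in range(2) for x in range(3)],
               dtype=np.float32)

# 2D positions of the same intersections detected in surgical field image 21
# (synthesized here by projecting with a known pose, for a runnable example).
rvec_true = np.array([[0.1], [0.2], [0.3]])
tvec_true = np.array([[10.0], [-5.0], [120.0]])
img_pts, _ = cv2.projectPoints(obj, rvec_true, tvec_true, K, dist)

# Estimated marker pose in camera coordinates; applying the known rigid
# offset from marker 56 to the distal end then yields pose information 57.
ok, rvec, tvec = cv2.solvePnP(obj, img_pts, K, dist)
```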
The pose information 57 in the example in
Furthermore, as illustrated in
Next, as illustrated in
Presupposing the above relationship, the processor 41 derives the first positional relationship information 58, which indicates the position and orientation of the ultrasound 3D image 51 within the surgical field image 21, on the basis of the pose information 57 that indicates the position and orientation of the ultrasound probe 14 within the surgical field SF. As illustrated in
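The derivation of the first positional relationship information 58 can be viewed as composing rigid transforms: the probe pose in camera coordinates (from pose information 57) with the fixed, known mapping from ultrasound 3D image 51 coordinates to the probe. A minimal sketch, with placeholder values:

```python
import numpy as np

def rigid(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Probe pose in camera coordinates, taken from pose information 57.
R_cam_probe, t_cam_probe = np.eye(3), np.array([10.0, -5.0, 120.0])
# Fixed mapping from ultrasound 3D image 51 coordinates to the probe,
# determined by the transducer geometry and scan settings (assumed known).
R_probe_us, t_probe_us = np.eye(3), np.array([0.0, 0.0, 15.0])

# First positional relationship information 58: position and orientation of
# the ultrasound 3D image 51 within the camera frame of surgical field image 21.
T_cam_us = rigid(R_cam_probe, t_cam_probe) @ rigid(R_probe_us, t_probe_us)
```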
The processor 41 derives second positional relationship information 59, which indicates the positional relationship between a position within the ultrasound 3D image 51 and a position within the preoperative 3D image 34, by performing image analysis on the preoperative 3D image 34 and the ultrasound 3D image 51 that illustrate internal structures of the target area.
As illustrated in
The processor 41 performs semantic segmentation using U-Net on the ultrasound 3D image 51 and extracts the vascular structure 37A within the ultrasound 3D image 51. To compare the bifurcation points of the vascular structures 37 to one another, the processor 41 extracts a topological structure 37T, such as the number of bifurcation points included in the extracted vascular structure 37A. A topological structure refers to a structure whose identity is maintained even when the vascular structure 37A is deformed while preserving continuity. The ultrasound 3D image 51 in this example is generated on the basis of the ultrasound image group 22 acquired in real time during endoscopic surgery. Accordingly, the vascular structure 37 in the ultrasound 3D image 51 is deformed under the influence of the insufflation of the abdominal cavity by carbon dioxide gas, the breathing of the patient PT, and the like. Retrieving a similar portion of the vascular structure 37A on the basis of the topological structure 37T, or on another comparison of the bifurcation points of blood vessels to one another, mitigates the influence of such real-time deformations and enables high-precision retrieval.
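As an illustration of extracting bifurcation points for the topological comparison (the segmentation itself is performed by U-Net, as stated above), one simple approach is to skeletonize the segmented vessel mask and flag skeleton voxels with three or more skeleton neighbors. The sketch assumes a scikit-image version whose skeletonize supports 3D input.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def bifurcation_points(vessel_mask: np.ndarray) -> np.ndarray:
    """Voxel coordinates of bifurcation points in a binary 3D vessel mask.

    A skeleton voxel with three or more skeleton neighbors is treated as a
    bifurcation; the set of such points is one simple stand-in for the
    topological structure 37T compared between the two 3D images.
    """
    skel = skeletonize(vessel_mask.astype(bool)).astype(bool)  # 1-voxel centerlines
    kernel = np.ones((3, 3, 3), dtype=np.uint8)
    kernel[1, 1, 1] = 0                                        # count neighbors only
    neighbors = convolve(skel.astype(np.uint8), kernel, mode="constant")
    return np.argwhere(skel & (neighbors >= 3))
```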
In
As illustrated in
Rendering is processing to project the preoperative preparation information 23 based on the preoperative 3D image 34 onto a two-dimensional projection plane 63 set virtually from the viewpoint position O. In other words, rendering is processing to simulate, through calculation, the image that would be formed on the image pick-up surface of the camera 13B in the case of placing the camera 13B at the viewpoint position O and taking an image of the preoperative preparation information 23.
Typical rendering techniques include perspective projection and parallel projection. Perspective projection can reproduce depth information in such a way that an object closer to the viewpoint position O is projected larger while an object farther away is projected smaller. With perspective projection, as illustrated in
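In formula form, perspective projection of a point (X, Y, Z) in camera coordinates onto the projection plane 63 divides by the depth Z, which is exactly why nearer objects project larger; parallel projection simply drops the division. A minimal sketch with assumed intrinsic parameters:

```python
import numpy as np

def perspective_project(pts: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Project (N, 3) camera-frame points: u = fx*X/Z + cx, v = fy*Y/Z + cy.

    The division by depth Z makes an object closer to the viewpoint
    position O project larger. Parallel projection would instead use
    u = s*X + cx, v = s*Y + cy, discarding depth.
    """
    X, Y, Z = pts[:, 0], pts[:, 1], pts[:, 2]
    return np.stack([fx * X / Z + cx, fy * Y / Z + cy], axis=1)
```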
In the example in
Returning to
The processor 41 superimposes the rendered and size-adjusted preoperative preparation information 23 onto the corresponding position in the surgical field image 21, on the basis of the first positional relationship information 58 and the second positional relationship information 59. Specifically, a composite image 26 is generated in a state such that the excision line 36 and the vascular structure 37 of the liver LV included in the preoperative preparation information 23 are aligned in position and orientation with the liver LV in the surgical field image 21. The processor 41 outputs the generated composite image 26 to the display 16 and controls the displaying of the composite image 26.
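The superimposition itself can be sketched as per-pixel alpha blending of the rendered, aligned preparation information over the camera frame (an illustration under the assumption of float images in [0, 1]):

```python
import numpy as np

def composite(surgical_field: np.ndarray, overlay_rgb: np.ndarray,
              overlay_alpha: np.ndarray) -> np.ndarray:
    """Blend rendered preoperative preparation information 23 onto the frame.

    surgical_field: (H, W, 3) frame from camera 13B
    overlay_rgb:    (H, W, 3) rendered preparation information, already placed
                    at the corresponding position via the first and second
                    positional relationship information
    overlay_alpha:  (H, W) opacity in [0, 1]; 0 keeps the surgical field pixel
    """
    a = overlay_alpha[..., None]
    return (1.0 - a) * surgical_field + a * overlay_rgb
```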
The action and effects of the above configuration will be described using
The medical support process in this example is executed during endoscopic surgery. In performing the medical support process, the preoperative preparation information 23 is created prior to surgery. As one example, the preoperative preparation information 23 is created according to the procedure explained using
In step S1100, the processor 41 acquires the surgical field image 21 from the camera 13B of the endoscope 13 and outputs the surgical field image 21 to the display 16. Accordingly, a visual field of the inside of the body is provided to the medical staff ST. While observing the surgical field image 21 on the display 16, the medical staff ST use the ultrasound probe 14 to scan the surface of the liver LV, which is the target area in this example, as illustrated in
In step S1200, the processor 41 acquires the ultrasound image group 22 taken by the ultrasound probe 14. The processor 41 generates the ultrasound 3D image 51 (corresponding to the first three-dimensional image) on the basis of the ultrasound image group 22 according to the procedure illustrated in
In step S1300, first, the processor 41 estimates the position and orientation of the ultrasound probe 14 within the surgical field SF by performing image analysis on the surgical field image 21 according to the procedure illustrated in
In step S1400, the processor 41 derives the second positional relationship information 59, which indicates the positional relationship between a position within the ultrasound 3D image 51 and a position within the preoperative 3D image 34, by performing image analysis on the ultrasound 3D image 51 and the preoperative 3D image 34 according to the procedure illustrated in
In step S1500, the processor 41 superimposes the preoperative preparation information 23 related to the liver LV onto the corresponding position that corresponds to the liver LV, which is the target area in this example, within the surgical field image 21, on the basis of the first positional relationship information 58 and the second positional relationship information 59, according to the procedure illustrated in
In step S1600, the processor 41 displays the composite image 26 on the display 16.
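Gathering steps S1100 to S1600 into one pass, the flow can be summarized as below. Every interface here is hypothetical; the sketch only restates the order of operations described above.

```python
def medical_support_pass(camera, probe, prep_info, display, build_us_3d,
                         estimate_pose, derive_rel1, derive_rel2, superimpose):
    """One pass of the medical support process; all callables are supplied
    by the caller (hypothetical interfaces, not a disclosed implementation)."""
    field_img = camera.next_frame()                        # S1100: surgical field image 21
    us_3d = build_us_3d(probe.latest_sweep())              # S1200: ultrasound 3D image 51
    pose = estimate_pose(field_img)                        # S1300: pose information 57
    rel1 = derive_rel1(pose)                               #        first positional relationship 58
    rel2 = derive_rel2(us_3d, prep_info.preop_3d)          # S1400: second positional relationship 59
    frame = superimpose(field_img, prep_info, rel1, rel2)  # S1500: composite image 26
    display.show(frame)                                    # S1600: display control
```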
As described above, according to the medical support apparatus 11 of the technology of the present disclosure, the preoperative preparation information 23 can be superimposed onto an appropriate position within the surgical field image 21, without using an external sensor. That is, as illustrated in
Also, displaying the excision line 36 and the vascular structure 37 at appropriate positions within the surgical field image 21 makes it possible to carry out treatment, such as excising a portion of the liver LV, while avoiding the vascular structure 37.
Also, in this example, the optically detectable marker 56 is provided on the outer circumferential surface of the ultrasound probe 14, and when deriving the pose information 57, the processor 41 estimates the position and orientation of the ultrasound probe 14 by detecting the marker 56 from the surgical field image 21. Providing the marker 56 to be used for recognition of position and orientation allows for easier recognition of position and orientation compared to the case with no marker 56.
As for the form of the marker 56, in this example, a lattice marker 56 formed by lines in the axial direction and lines in the circumferential direction of the ultrasound probe 14 is used. Accordingly, the position and orientation of the ultrasound probe 14 are estimated more easily compared to the case where, for example, a single figure such as a simple polygon or circle is provided as a marker.
Obviously, the form of the marker 56 need not be a lattice marker as in this example, and may also be a single figure such as a simple polygon or circle, or a combination of a plurality of such figures. Furthermore, the form of the marker 56 may also be a dot pattern of equally spaced dots arrayed in a matrix.
The estimation of the position and orientation of the ultrasound probe 14 within the surgical field SF may also be performed without using a marker. One conceivable method for estimating the position and orientation of the ultrasound probe 14 without using a marker is, for example, to use a machine learning model to estimate the position and orientation from shape information about the ultrasound probe 14. In this case, combinations of a plurality of surgical field images 21 with various changes in the position and orientation of the ultrasound probe 14 and pose information 57 serving as ground truth data for each are prepared as teaching data. This teaching data is used to train a machine learning model and thereby generate a trained model that outputs pose information 57 in response to an inputted surgical field image 21.
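A minimal sketch of such a trained model, assuming a convolutional backbone that regresses a six-parameter pose (three translation, three rotation); the architecture and pose parameterization are assumptions, as the disclosure only requires that the model output pose information 57 for an inputted surgical field image 21.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ProbePoseNet(nn.Module):
    """Regress pose information 57 directly from a surgical field image."""
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)                  # assumed backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, 6)
        self.net = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (B, 6): tx, ty, tz, rx, ry, rz

# Teaching data: surgical field images paired with ground-truth pose
# information 57, as described above (placeholder tensors here).
model = ProbePoseNet()
pred = model(torch.randn(2, 3, 224, 224))
loss = nn.MSELoss()(pred, torch.randn(2, 6))
loss.backward()
```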
Also, in the above example, the processor 41 derives the second positional relationship information 59 by comparing the morphologies of an internal structure of the target area in each of the ultrasound 3D image 51 and the preoperative 3D image 34. Since the morphologies of an internal structure are compared, the second positional relationship information 59 is derived more easily compared to the case of not performing a morphology comparison of an internal structure. Also, in the above example, the vascular structure 37 is given as one example of an internal structure, but the internal structure need not be the vascular structure 37 insofar as the internal structure is a characteristic internal structure suitable for morphology comparison.
Also, the method for morphology comparison of an internal structure may be a rules-based method such as pattern matching, or a method that uses a machine learning model.
In the above example, the processor 41 derives the second positional relationship information 59 by comparing bifurcation points of blood vessels included in the vascular structure 37 in each of the ultrasound 3D image 51 and the preoperative 3D image 34 to one another. Since the bifurcation points of blood vessels in the vascular structure 37 are characteristic information, the degree of coincidence between vascular structures 37 is determined more easily compared to the case of not comparing bifurcation points.
Also, the above example describes an example of superimposing the preoperative preparation information 23 within the surgical field image 21, but in addition to the preoperative preparation information 23, information acquired during surgery, such as the ultrasound image 22A, may also be superimposed onto the surgical field image 21.
The timing of the superimposing of the preoperative preparation information 23, that is, the timing at which to switch from the surgical field image 21 to the composite image 26, may be the timing at which the scanning by the ultrasound probe 14 is performed. For example, the processor 41 monitors the movement of the ultrasound probe 14 on the basis of the surgical field image 21, and when the amount of movement of the ultrasound probe 14 exceeds a preset threshold value, the processor 41 switches from the surgical field image 21 to the composite image 26. Note that the switching condition is not particularly limited to the above, and the medical support apparatus 11 may also switch between the surgical field image 21 and the composite image 26 when a switching instruction inputted via the reception device 42 is obtained, or the medical support apparatus 11 may have a function for displaying the composite image 26 only.
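A sketch of the movement-based switching condition (the threshold value and pixel units are illustrative assumptions):

```python
import numpy as np

def should_switch_to_composite(prev_tip: np.ndarray, curr_tip: np.ndarray,
                               threshold_px: float = 20.0) -> bool:
    """Switch from surgical field image 21 to composite image 26 once the
    probe tip detected in the image has moved beyond a preset threshold."""
    return float(np.linalg.norm(curr_tip - prev_tip)) > threshold_px

# e.g. tip positions detected in two successive frames
print(should_switch_to_composite(np.array([100.0, 80.0]), np.array([131.0, 80.0])))  # True
```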
When superimposing the preoperative preparation information 23, the display position of the preoperative preparation information 23 may also be adjustable in consideration of cases in which information to be displayed in the background is present. The display position may be adjusted manually, or may be adjusted automatically so as not to overlap a specific region within the surgical field image 21 when such a region is recognized.
The surgical field image 21 may also be corrected according to the optical characteristics of the camera 13B. In particular, if the imaging optics 13D of the camera 13B have a large amount of distortion, the surgical field image 21 will be distorted, and the liver LV or the like will be in a deformed state. If the liver LV is deformed, the alignment precision will be impaired. Distortion can be broadly classified into barrel distortion and pincushion distortion, and it is preferable to correct the surgical field image 21 according to the optical characteristics of the imaging optics 13D.
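For example, with calibrated intrinsics and distortion coefficients for the imaging optics 13D (values below are placeholders), the correction can be performed with OpenCV's undistort:

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # calibrated camera matrix (placeholder)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])  # k1 < 0: barrel distortion

frame = np.zeros((480, 640, 3), np.uint8)    # placeholder surgical field frame
corrected = cv2.undistort(frame, K, dist)    # corrected surgical field image 21
```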
The processor 41 may also acquire deformation information indicating how the internal structure in the ultrasound 3D image 51 is deformed with respect to the internal structure in the preoperative 3D image 34 prepared in advance, deform the preoperative preparation information 23 on the basis of the acquired deformation information, and generate the composite image 26 with the deformed preoperative preparation information 23 superimposed onto the surgical field image 21.
As illustrated in
The target area is particularly susceptible to the influence of insufflation and the like when the target area is soft tissue such as the liver LV and the lung LG described later (see
Also, the ultrasound 3D image 51 has an extremely narrow image-taking range compared to the preoperative 3D image 34. Accordingly, the deformation information that can be acquired on the basis of the ultrasound 3D image 51 is extremely narrow local deformation information on the preoperative 3D image 34. In this case, the processor 41 may also predict deformation outside the image-taking range of the ultrasound 3D image 51 by extrapolation on the basis of the local deformation information acquired from the ultrasound 3D image 51.
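One way to realize such extrapolation (an assumption; the disclosure does not fix a method) is to fit a smooth interpolant, such as a radial basis function, to the local displacement samples and evaluate it outside the ultrasound image-taking range:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Displacements of the internal structure observed within the narrow
# image-taking range of the ultrasound 3D image 51, expressed in
# preoperative 3D image 34 coordinates (placeholder values).
rng = np.random.default_rng(0)
sample_points = rng.uniform(0.0, 20.0, size=(50, 3))  # (N, 3) positions, mm
sample_disp = rng.normal(0.0, 0.5, size=(50, 3))      # (N, 3) displacements, mm

# Smooth displacement field that also extrapolates beyond the samples.
field = RBFInterpolator(sample_points, sample_disp, smoothing=1.0)

# Predicted deformation at preparation-information positions outside the
# ultrasound image-taking range, used to deform preparation information 23.
query = np.array([[40.0, 10.0, 5.0], [55.0, 30.0, 12.0]])
predicted = field(query)
```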
As described in the first embodiment, since the preoperative preparation information 23 is information based on the preoperative 3D image 34, it includes a plurality of pieces of preoperative preparation information 23 at different depths in the depth direction proceeding from the surface layer to the deep layers of the target area in the preoperative 3D image 34. This makes it possible to provide more detailed preoperative preparation information 23 in accordance with the depth of the target area.
The second embodiment illustrated in
Also, as illustrated in
As illustrated in
By raising the visibility nearer the surface layer higher than the deep layers, the medical staff ST can intuitively understand the positional relationship of a plurality of pieces of preoperative preparation information 23. Note that when displaying only the preoperative preparation information 23 near the surface layer, the visibility of the preoperative preparation information 23 at the surface layer may also be raised higher than surrounding regions.
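A sketch of one such display-appearance rule, mapping the depth of each piece of preoperative preparation information 23 to an opacity (the linear falloff and the cutoff depth are assumptions):

```python
import numpy as np

def depth_opacity(depth_mm: np.ndarray, max_depth_mm: float = 30.0) -> np.ndarray:
    """Fully visible at the surface layer, fading toward the deep layers,
    and hidden at or beyond max_depth_mm."""
    return 1.0 - np.clip(depth_mm / max_depth_mm, 0.0, 1.0)

# e.g. an excision line at 2 mm stays prominent; a vessel at 25 mm is faint,
# and anything deeper than 30 mm is not displayed.
print(depth_opacity(np.array([2.0, 25.0, 40.0])))  # [0.933..., 0.166..., 0.0]
```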
Also, as illustrated in
In the examples illustrated in
Also, in
On the other hand, in
First, the depth reference position in
Next, in
Like in
Also, as illustrated in
In contrast, as illustrated in
According to the method illustrated in
As illustrated in
Note that in regard to the condition for switching between display and non-display according to the depth of the preoperative preparation information 23 as illustrated in
The preoperative preparation information 23 is not limited to the excision line 36 and the vascular structure 37, and may also include other information. In the example illustrated in
Also, as illustrated in
By displaying the reference information in addition to the excision line 36 in the composite image 26, effects like the following can be expected. Namely, when the liver LV is deformed under the influence of insufflation or the like, the excision line 36 may be displayed out of alignment with the corresponding position within the surgical field image 21. In this case, if the anatomical regions used as reference information when determining the excision line 36 are displayed in addition to the excision line 36, the medical staff ST may be able to estimate the appropriate position of the excision line 36 by confirming the positions of the anatomical regions, even if the display position of the excision line 36 is misaligned due to deformation of the liver LV. Besides the anatomical regions illustrated in
As illustrated in
In the above example, an ultrasound probe of the convex type is described as an example of the ultrasound probe 14, but an ultrasound probe of the linear type with a flat transmitting/receiving surface for transmitting and receiving the ultrasonic waves of the ultrasonic transducer 14B may also be used. The transmitting/receiving surface may also be of the radial type disposed in the circumferential direction around the axis of the ultrasound probe 14.
Also, as illustrated in
Also, the above describes an example of using the medical support apparatus 11 in endoscopic surgery, but usage is not limited thereto, and the medical support apparatus 11 may also be used in an examination whose main purpose is to observe a target area inside the body. Preparation information that is prepared prior to the examination is used as the preparation information to be superimposed onto the surgical field image 21. Also, in this case the surgical field image 21 means an image taken of the surgical field SF including a target area of examination instead of, or in addition to, an image taken of the surgical field SF including a target area of surgery.
Displaying an internal structure of a target area during an examination has merits like the following. For example, if a tumor internal to the liver LV is represented in a tomographic image taken with a CT apparatus but is difficult to find when taking images with an ultrasound probe, the tumor is displayed as the preoperative preparation information 23 in the liver LV in the surgical field image 21 during an examination performed with the ultrasound probe 14. This allows the ultrasound probe 14 to be guided to the position corresponding to the tumor to be scanned. This also allows a puncture needle to be guided to a puncture position for puncturing the tumor when doing a biopsy of the tumor.
The above describes a rigid endoscope with an inflexible insertion part as the endoscope 13, but the endoscope 13 may also be a flexible endoscope with a flexible insertion part.
Also, the above describes an example of using the medical support apparatus 11 in endoscopic surgery, but the medical support apparatus 11 may also be used in open abdominal surgery. In the case of open abdominal surgery, the camera that takes images of the surgical field is not a camera built into an endoscope, but instead is a camera installed above the operating table, such as a camera installed on the ceiling of the operating room, for example.
The above describes the surgical field SF inside the abdominal cavity as an example of a surgical field inside the body, but the inside of the body is not limited to the inside of the abdominal cavity and may also be a body cavity other than the abdominal cavity, such as the thoracic cavity, and may also be the upper gastrointestinal tract such as the esophagus, the lower gastrointestinal tract such as the intestines, or the inside of a tube such as the bronchial tubes. In the case of applying the technology of the present disclosure to a surgical field inside a tube, a camera built into an endoscope inserted into the bronchial tubes is used as the camera, for example. The medical support apparatus then uses the camera of the endoscope to acquire a surgical field image taken of the surgical field, including a target area inside the tube and an ultrasound probe inserted into the tube through a forceps channel of the endoscope.
The above describes a CT apparatus, MRI apparatus, and the like as an example of the tomograph that takes the tomographic image group to serve as the basis for the preoperative preparation information 23, but the tomograph may also be an ultrasound probe. For example, an ultrasound image group 22 may be acquired in advance prior to surgery, separately from the ultrasound image group 22 to be acquired during surgery, and the preoperative preparation information 23 may be created on the basis of the acquired ultrasound image group 22. Besides an ultrasound probe, an optical coherence tomography (OCT) probe may also be used.
Also, the second embodiment above may be combined with the first embodiment or be independent from the first embodiment. The following technologies can be understood as technologies in which the second embodiment is independent from the first embodiment.
[Appendix 1] A medical support apparatus comprising a processor configured to:
[Appendix 2] The medical support apparatus according to appendix 1, wherein when changing the display appearance, the processor is configured to raise the visibility of the preparation information nearer the surface layer than the deep layers.
[Appendix 3] The medical support apparatus according to appendix 1 or appendix 2, wherein the depth direction is the direction along an image-taking optical axis of the camera that takes the surgical field image.
[Appendix 4] The medical support apparatus according to appendix 3, wherein a reference position for the depth is a viewpoint position of the camera.
[Appendix 5] The medical support apparatus according to appendix 3, wherein a reference position for the depth is a surface position of the target area.
[Appendix 6] The medical support apparatus according to appendix 5, wherein
[Appendix 7] The medical support apparatus according to appendix 5, wherein the reference position for the depth is a surface position of the target area that is closest from the viewpoint position side of the camera.
[Appendix 8] The medical support apparatus according to appendix 1, wherein the depth direction is the direction proceeding from a surface position to the deep layers, with the surface position of the target area in a second three-dimensional image as a reference position.
[Appendix 9] The medical support apparatus according to any one of appendices 1 to 8, wherein
[Appendix 10] The medical support apparatus according to any one of appendices 1 to 9, wherein the preparation information includes at least one from among an excision line where the target area is to be excised, a lesion, a vascular structure which is an internal structure of the target area, and reference information when determining the excision line.
[Appendix 11] The medical support apparatus according to any one of appendices 1 to 10, wherein the target area is the lung or the liver.
[Appendix 12] An operating method for a medical support apparatus comprising a processor, the operating method causing the processor to:
[Appendix 13] An operating program for causing a computer to function as a medical support apparatus, the operating program of the medical support apparatus causing the computer to execute processing to:
Also, in the embodiments above, any of the various types of processors indicated below can be used as the hardware structure of a processing unit that executes various processes, such as those of the image acquisition unit, the positional relationship information derivation unit, the composite image generation unit, and the display control unit. The various types of processors include: a central processing unit (CPU), which is a general-purpose processor that executes software (a program or programs) to function as any of various processing units; a programmable logic device (PLD) whose circuit configuration is modifiable after fabrication, such as a field-programmable gate array (FPGA); and a dedicated electric circuit, which is a processor having a circuit configuration designed for the specific purpose of executing a specific process, such as an application-specific integrated circuit (ASIC).
Also, the various processes above may be executed by one of these various processors, or by a combination of two or more processors of the same or different types (such as multiple FPGAs, or a combination of a CPU and an FPGA, for example). Moreover, multiple processing units may also be configured as a single processor. One example of configuring multiple processing units as a single processor is a mode utilizing a processor in which the functions of an entire system, including the plurality of processing units, are achieved on a single integrated circuit (IC) chip, such as a system on a chip (SoC).
In this way, various types of processing units are configured as a hardware structure by using one or more of the various types of processors indicated above.
More specifically, circuitry combining circuit elements such as semiconductor elements can be used as the hardware structure of these various types of processors.
Additionally, the technology of the present disclosure also extends to an operating program for a medical support apparatus as well as to a computer-readable storage medium (such as USB memory or Digital Versatile Disc Read-Only Memory (DVD-ROM)) that non-transiently stores an operating program for a medical support apparatus.
The descriptions and illustrations given above are detailed descriptions of portions related to the technology of the present disclosure, and are nothing more than examples of the technology of the present disclosure. For example, the above descriptions pertaining to configuration, function, action, and effect are descriptions pertaining to one example of the configuration, function, action, and effect of portions related to the technology of the present disclosure. Needless to say, unnecessary portions may be deleted and new elements may be added or substituted with respect to the descriptions and illustrations given above, insofar as the result does not depart from the gist of the technology of the present disclosure. Also, to avoid confusion and to facilitate understanding of the portions related to the technology of the present disclosure, in the descriptions and illustrations given above, description is omitted in regard to common technical knowledge and the like that does not require particular explanation to enable implementation of the technology of the present disclosure.
In this specification, "A and/or B" is synonymous with "at least one of A or B". That is, "A and/or B" means that: A only is a possibility; B only is a possibility; and a combination of A and B is a possibility. Also, in this specification, the same way of thinking as for "A and/or B" also applies when three or more matters are expressly linked using "and/or".
All documents, patent applications, and technical standards mentioned in this specification are incorporated by reference herein to the same extent that individual documents, patent applications, and technical standards are specifically and individually noted as being incorporated by reference.
Number | Date | Country | Kind |
---|---|---|---|
2022-030452 | Feb 2022 | JP | national |
This application is a continuation application of International Application No. PCT/JP2023/003867, filed Feb. 6, 2023, the disclosure of which is incorporated herein by reference in its entirety. Further, this application claims priority from Japanese Patent Application No. 2022-030452 filed on Feb. 28, 2022, the disclosure of which is incorporated herein by reference in its entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2023/003867 | Feb 2023 | WO
Child | 18809358 | | US