The present technique relates to an information processing apparatus, an information processing method, and an operation microscope apparatus that are used for guiding an operation on an eye.
In recent years, operation guide apparatuses have come into use in operations on eyes. An operation guide apparatus generates guide information serving as an operation guide based on image information of an eye as an operation target and presents it to a user. The user can perform the operation while referencing the guide information, with the result that a lack of experience can be compensated for and operation errors can be prevented. It also helps improve operation accuracy.
One form of operation guide information is a tomographic image obtained by OCT (Optical Coherence Tomography). OCT is a technique of irradiating an operation target eye with infrared light and reconstructing an image from the waves reflected by the tissues of the eye, yielding a tomographic image of the eye for a specific cross-section. For example, Patent Literature 1 discloses an ophthalmological analysis device that presents a tomographic image of an eye obtained by OCT to a user.
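As an illustrative sketch only: in Fourier-domain OCT, each depth profile (A-scan) can be recovered from a detected spectral interferogram by an inverse Fourier transform, and a tomographic image (B-scan) is assembled by stacking the A-scans acquired along the scan line. The array names and the dB display scale below are assumptions for illustration, not part of the present disclosure.

```python
import numpy as np

def a_scan_from_spectrum(spectrum: np.ndarray) -> np.ndarray:
    """Recover a depth profile from one spectral interferogram (Fourier-domain OCT).

    The magnitude of the inverse FFT of the detected spectrum gives the
    reflectivity profile along the beam; only half the output is kept
    because the real-valued input is Hermitian-symmetric in frequency.
    """
    depth_profile = np.abs(np.fft.ifft(spectrum))
    return depth_profile[: len(spectrum) // 2]

def b_scan(spectra: np.ndarray) -> np.ndarray:
    """Stack the A-scans acquired along one scan line into a B-scan image.

    spectra: (num_a_scans, num_wavelengths) array of interferograms.
    Returns a (depth, lateral) tomographic image on a log (dB) scale.
    """
    columns = [a_scan_from_spectrum(s) for s in spectra]
    image = np.stack(columns, axis=1)
    return 20.0 * np.log10(image + 1e-12)  # dB scale for display
```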
When acquiring a tomographic image by OCT, the cross-section needs to be designated. However, it is difficult to readily designate an optimal cross-section as the operation guide information, because the cross-section that an operator wishes to reference changes dynamically, the eyeball moves even during an operation, and so on.
In view of the circumstances as described above, the present technique aims at providing a surgical image processing apparatus, an information processing method, and a surgical microscope system that are capable of presenting appropriate operation guide information in an eye operation.
To attain the object described above, according to an embodiment of the present technique, there is provided a surgical image processing apparatus including circuitry configured to perform image recognition on an intraoperative image of an eye. The circuitry is further configured to determine a cross-section for acquiring a tomographic image based on a result of the image recognition.
With this structure, since the cross-section is determined based on the result of the image recognition of the intraoperative image, the user does not need to designate the cross-section. In addition, since the cross-section is determined according to the content of the intraoperative image (the positions and directions of the eye and the surgical instrument, etc.), the information processing apparatus can generate an appropriate tomographic image.
To attain the object described above, according to an embodiment of the present technique, there is provided an information processing method including performing, by circuitry of a surgical image processing apparatus, image recognition on an intraoperative image of an eye. The method further includes determining, by the circuitry, a cross-section for acquiring a tomographic image based on a result of the image recognition.
To attain the object described above, according to an embodiment of the present technique, there is provided a surgical microscope system including a surgical microscope and circuitry. The surgical microscope is configured to capture an image of an eye. The circuitry is configured to perform image recognition on an intraoperative image of an eye. The circuitry is configured to determine a cross-section for acquiring a tomographic image based on a result of the image recognition. The circuitry is further configured to control the surgical microscope to acquire the tomographic image of the cross-section.
As described above, according to the present technique, it is possible to provide a surgical image processing apparatus, an information processing method, and a surgical microscope system that are capable of presenting appropriate operation guide information in an eye operation. It should be noted that the effects described herein are not necessarily limited and may be any effect described in the present disclosure.
Hereinafter, an operation microscope apparatus according to an embodiment of the present technique will be described.
[Structure of Operation Microscope Apparatus]
The image information acquisition section 101 acquires image information of an operation target eye. The image information acquisition section 101 includes various structures with which image information such as a microscope image, a tomographic image, and volume data can be acquired. The various structures of the image information acquisition section 101 will be described later.
The image recognition section 102 executes image recognition processing on image information acquired by the image information acquisition section 101. Specifically, the image recognition section 102 recognizes an image of a surgical instrument or an eyeball site (pupil, etc.) included in the image information. The image recognition processing may be executed by edge detection, pattern matching, or the like. The image recognition section 102 supplies the recognition result to the controller 104.
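As a concrete illustration of the pattern-matching approach mentioned above, the following sketch locates a preregistered instrument pattern in a frame by normalized cross-correlation with OpenCV; the template image and the score threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def find_instrument(frame_gray: np.ndarray,
                    template_gray: np.ndarray,
                    threshold: float = 0.7):
    """Locate a preregistered instrument pattern by template matching.

    Returns the top-left corner of the best match, or None if the
    normalized cross-correlation score falls below the threshold.
    """
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc if max_score >= threshold else None
```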
The interface section 103 acquires an image of an operation target eye taken before the operation, an operation plan, an instruction input from a user, and the like. The interface section 103 may also acquire a position or orientation of a surgical instrument measured by an optical position measurement apparatus. The interface section 103 supplies the acquired information to the controller 104.
The controller 104 determines a cross-section based on the recognition processing result obtained by the image recognition section 102. Specifically, the controller 104 can determine the cross-section based on the position or angle of the surgical instrument included in the image information, the eyeball site, and the like. The determination of the cross-section will be described later in detail.
The controller 104 also controls the image information acquisition section 101 to acquire a tomographic image of the determined cross-section. The controller 104 is also capable of controlling the respective structures of the operation microscope apparatus 100.
The guide information generation section 105 generates guide information for guiding an operation. The guide information is, for example, a tomographic image of the cross-section determined by the controller 104, an operation target line, or a distance between the surgical instrument and an eyeball site. The guide information generation section 105 generates an image including the guide information and supplies it to the guide information presentation section 106; it may also generate the guide information as audio and supply that instead.
The guide information presentation section 106 presents the guide information to the user. The guide information presentation section 106 may be a display capable of displaying an image including the guide information generated by the guide information generation section 105, or a speaker capable of reproducing audio including that guide information.
[Regarding Image Information Acquisition Section]
The image information acquisition section 101 may include various structures.
Further, the image information acquisition section 101 may be constituted of only the front monocular image acquisition section 1011.
Furthermore, the image information acquisition section 101 may be constituted of only the tomographic information acquisition section 1012.
[Hardware Structure]
The functional structure of the information processing apparatus 120 as described above can be realized by a hardware structure described below.
The CPU (Central Processing Unit) 121 controls the other structures according to a program stored in the memory 122, carries out data processing according to the program, and stores the processing results in the memory 122. The CPU 121 may be a microprocessor.
The memory 122 stores programs to be executed by the CPU 121 and data. The memory 122 may be a RAM (Random Access Memory).
The storage 123 stores programs and data. The storage 123 may be an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
The input/output section 124 accepts an input to the information processing apparatus 120 and externally supplies an output of the information processing apparatus 120. The input/output section 124 includes an input apparatus such as a keyboard and a mouse, an output apparatus such as a display, and a connection interface for a network and the like.
The hardware structure of the information processing apparatus 120 is not limited to that described herein and only needs to be that capable of realizing the functional structure of the information processing apparatus 120. In addition, a part or all of the hardware structure may exist on a network.
[General Outline of Ophthalmic Operation]
A general outline of a cataract operation in which the operation microscope apparatus 100 can be used will be described.
It should be noted that the cataract operation described herein is an example of the ophthalmic operation in which the operation microscope apparatus 100 can be used, and the operation microscope apparatus 100 can be used in various ophthalmic operations.
[Operation of Operation Microscope Apparatus]
An operation of the operation microscope apparatus 100 will be described.
When a start instruction is input by the user, the controller 104 accepts the start instruction via the interface section 103 and starts processing. The controller 104 controls the image information acquisition section 101 to acquire image information of an operation target eye (St101).
The image recognition section 102 executes image recognition processing on the intraoperative image G1 under control of the controller 104 (St102). The image recognition section 102 recognizes the surgical instrument 401 in the intraoperative image G1. The image recognition section 102 is capable of recognizing the surgical instrument 401 by comparing a preregistered pattern of the surgical instrument 401 and the intraoperative image G1, for example. At this time, the image recognition section 102 is capable of extracting a longitudinal direction of the surgical instrument 401 or positional coordinates thereof in the intraoperative image G1 as the image recognition result. The image recognition section 102 supplies the image recognition result to the controller 104.
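One possible way to extract the positional coordinates and longitudinal direction as described is a principal-axis fit; the sketch below assumes a binary mask of instrument pixels is already available (e.g., from the matching result).

```python
import numpy as np

def instrument_axis(mask: np.ndarray):
    """Estimate the longitudinal direction and tip of a segmented instrument.

    mask: boolean image, True on instrument pixels.
    Returns (tip_xy, direction_xy) where direction is a unit vector along
    the principal axis of the instrument's pixel distribution.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    center = pts.mean(axis=0)
    # Principal axis = eigenvector of the covariance with the largest eigenvalue.
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    # Take the instrument pixel farthest along the axis as the tip; which of
    # the two extremes is the true tip would need an extra cue in practice.
    proj = (pts - center) @ direction
    tip = pts[np.argmax(proj)]
    return tip, direction
```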
Subsequently, the controller 104 determines a cross-section using the image recognition result (St103).
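For example, when the cross-section is taken through the instrument tip along its longitudinal direction, the corresponding B-scan line can be expressed by two endpoints in image coordinates; the half-length parameter below is an illustrative assumption.

```python
import numpy as np

def scan_line_through_tip(tip: np.ndarray,
                          direction: np.ndarray,
                          half_length: float = 200.0):
    """Endpoints of a B-scan line passing through the instrument tip,
    parallel to the instrument's longitudinal direction (pixel coordinates)."""
    d = direction / np.linalg.norm(direction)
    return tip - half_length * d, tip + half_length * d
```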
Next, the controller 104 controls the image information acquisition section 101 to acquire a tomographic image of an eye on the surface D (St104).
Subsequently, the guide information generation section 105 generates guide information (St105).
The guide information presentation section 106 presents the guide information supplied from the guide information generation section 105 to the user (St106). After that, the operation microscope apparatus 100 repetitively executes the steps described above until an end instruction is given by the user (St107: Yes). When the position or orientation of the surgical instrument 401 is changed by the user, the cross-section is determined anew according to that change, and a new tomographic image G2 is generated.
The operation microscope apparatus 100 performs the operation as described above. As described above, since a new tomographic image is presented according to the position or orientation of the surgical instrument 401, the user does not need to designate a desired cross-section.
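Putting steps St101 to St107 together, the per-frame flow might look like the following sketch; each object passed in (image_acquirer, recognizer, and so on) is a hypothetical stand-in for the corresponding section of the apparatus, not an interface defined by the present disclosure.

```python
def guidance_loop(image_acquirer, recognizer, controller,
                  guide_generator, presenter, end_requested):
    """One possible realization of the St101-St107 loop."""
    while not end_requested():                                # St107
        frame = image_acquirer.front_image()                  # St101
        result = recognizer.recognize(frame)                  # St102
        plane = controller.determine_cross_section(result)    # St103
        tomo = image_acquirer.tomographic_image(plane)        # St104
        guide = guide_generator.generate(frame, tomo)         # St105
        presenter.present(guide)                              # St106
```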
[Regarding Other Cross-Section Determination Operations]
As described above, the controller 104 determines the cross-section based on the image recognition result obtained by the image recognition section 102. The controller 104 is also capable of determining the cross-section as follows.
The controller 104 can determine, as the cross-section, a surface that passes the tip end position of the surgical instrument 401 recognized by the image recognition section 102 and extends in a direction different from the longitudinal direction of the surgical instrument 401.
The guide information generation section 105 is capable of generating guide information including one of or both the tomographic image G2a and the tomographic image G2b. It should be noted that the controller 104 may determine three or more surfaces as cross-sections and cause tomographic images of those cross-sections to be acquired.
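A surface "different from the longitudinal direction", such as the orthogonal one, follows from the same tip position and direction vector; a minimal sketch reusing the conventions of the scan-line helper above:

```python
import numpy as np

def orthogonal_scan_line(tip, direction, half_length=200.0):
    """Scan line through the tip, perpendicular to the instrument axis
    (one choice of 'a surface different from the longitudinal direction')."""
    d = direction / np.linalg.norm(direction)
    normal = np.array([-d[1], d[0]])  # rotate 90 degrees in the image plane
    return tip - half_length * normal, tip + half_length * normal
```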
The controller 104 is also capable of determining the cross-section based on the incised wound creation position designated in the preoperative plan.
The controller 104 acquires the preoperative image G3, in which the incised wound creation position M is designated, from the image information acquisition section 101 or the interface section 103 before the operation starts and supplies it to the image recognition section 102. When the operation is started and the intraoperative image G1 is taken, the image recognition section 102 compares the intraoperative image G1 with the preoperative image G3. By comparing the locations of eyeball sites (e.g., blood vessels 309) included in the images, the image recognition section 102 can detect a difference in the position or angle of the eye between the images. The image recognition section 102 supplies the difference to the controller 104.
The controller 104 specifies the incised wound creation position M in the intraoperative image G1 based on the difference between the intraoperative image G1 and the preoperative image G3 detected by the image recognition section 102.
It should be noted that the user may designate a cross-section for which the user wishes to reference a tomographic image instead of the incised wound creation position M in the preoperative image G3. The controller 104 is also capable of specifying in the intraoperative image G1, based on the difference between the intraoperative image G1 and the preoperative image G3 as described above, a surface corresponding to the cross-section designated in the preoperative image G3 and determining it as the cross-section.
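Feature-based registration is one way such a comparison and mapping could be realized; the sketch below estimates the in-plane displacement and rotation of the eye from matched features (e.g., around blood vessels) and maps a preoperatively designated position such as M into the intraoperative frame. The ORB features and the partial-affine motion model are illustrative assumptions.

```python
import cv2
import numpy as np

def map_preop_point(preop_gray, intraop_gray, point_xy):
    """Estimate the eye's displacement/rotation between the preoperative and
    intraoperative images and map a preoperatively designated position
    (e.g., the incised wound creation position M) into the intraoperative frame.
    """
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(preop_gray, None)
    kp2, des2 = orb.detectAndCompute(intraop_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Rotation + translation (+ uniform scale) suffices for in-plane eye motion.
    transform, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    p = np.array([point_xy[0], point_xy[1], 1.0])
    return transform @ p  # position M in intraoperative image coordinates
```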
[Regarding Other Guide Information Generation Operations]
As described above, the guide information generation section 105 is capable of generating guide information including a front image and a tomographic image. The guide information generation section 105 may also generate the guide information as follows.
The guide information generation section 105 can generate the guide information by superimposing a target line on the tomographic image acquired as described above. The user can designate an arbitrary cross-section in the preoperative image G3, and the controller 104 controls the image information acquisition section 101 to acquire a tomographic image of the designated cross-section.
As described above, upon start of the operation, the controller 104 compares the intraoperative image G1 with the preoperative image G3 and determines a surface to be the cross-section based on the difference between the images.
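Superimposing the target line is then a drawing step; a sketch assuming the target line has already been transformed into the pixel coordinates of the current tomogram (function and argument names are hypothetical):

```python
import cv2
import numpy as np

def overlay_target_line(tomogram_bgr: np.ndarray,
                        target_line_xy: np.ndarray) -> np.ndarray:
    """Draw the operation target line L onto the tomographic image.

    target_line_xy: (N, 2) array of points along the planned incision,
    already transformed into the coordinates of the current tomogram.
    """
    out = tomogram_bgr.copy()
    pts = target_line_xy.astype(np.int32).reshape(-1, 1, 2)
    cv2.polylines(out, [pts], isClosed=False, color=(0, 255, 0), thickness=2)
    return out
```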
Further, the guide information generation section 105 may dynamically change the target line L along with a progress of the operation.
Furthermore, the guide information generation section 105 may deform the target line L using a distance between the target line L and the corneal epithelium 301b as a reference. In addition, the guide information generation section 105 is capable of deleting the target line L for an incised part. As a result, it becomes possible to display the target line L while reflecting a deformation of the cornea due to the incision.
Further, the guide information generation section 105 may generate guide information including angle information.
The guide information generation section 105 may generate an indicator that expresses the angle information.
Moreover, the guide information generation section 105 may generate guide information including distance information regarding the tip end of the surgical instrument 401 and the eyeball site.
Further, the image recognition section 102 may estimate a distribution of the eyeball site from the comparison between a feature point in the preoperative tomographic image G4 or volume data and a feature point in the intraoperative tomographic image G2 or volume data and estimate the distance between the surgical instrument tip end and the eyeball site. The image recognition section 102 may also acquire the position of the surgical instrument tip end based on the position or orientation of the surgical instrument 401 measured by the optical position measurement apparatus and estimate the distance between the surgical instrument tip end and the eyeball site based on the positional relationship with the feature points of the front stereo image and the like.
It should be noted that the feature points can be set as the position of the corneal ring part 306 in the tomographic image, apexes of the corneal ring part 306 and the cornea 301 in the volume data, and the like.
The eyeball site for which the distance from the surgical instrument tip end is to be acquired is not particularly limited but is favorably the posterior capsule 303a, the corneal endothelium 301c, an eyeball surface, or the like. The distance between the surgical instrument tip end and the posterior capsule 303a is effective for preventing the posterior capsule 303a from being damaged by the aspiration process.
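If a segmented eyeball site is available as a point cloud sampled from the volume data, the tip-to-site distance reduces to a nearest-neighbor query; a minimal sketch, with the segmentation itself assumed to be given:

```python
import numpy as np
from scipy.spatial import cKDTree

def tip_to_site_distance(tip_xyz: np.ndarray,
                         site_points_xyz: np.ndarray) -> float:
    """Shortest distance from the instrument tip to an eyeball site
    (e.g., posterior capsule or corneal endothelium) given as a point
    cloud sampled from the segmented surface in the volume data."""
    distance, _ = cKDTree(site_points_xyz).query(tip_xyz)
    return float(distance)
```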
It should be noted that the guide information generation section 105 may generate audio instead of an image as the guide information. Specifically, the guide information generation section 105 may use as the guide information an alarm sound obtained by varying a frequency or volume according to the distance between the surgical instrument tip end and the eyeball site described above.
Further, the guide information generation section 105 can also use, as the guide information, an alarm sound whose volume is varied according to the deviation amount from the target line, for example, with a high frequency set when the surgical instrument is facing upward, above the target line L.
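A distance- or deviation-dependent alarm could be realized by mapping the measurement onto pitch and loudness before synthesizing the tone; the 0 to 5 mm working range and the frequency span below are illustrative assumptions.

```python
import numpy as np

def alarm_tone(distance_mm: float,
               sample_rate: int = 44100,
               duration_s: float = 0.1) -> np.ndarray:
    """Synthesize a short alarm tone whose frequency rises and volume grows
    as the instrument tip approaches the eyeball site (closer = more urgent).
    Returns float samples in [-1, 1]; the 0-5 mm working range is assumed.
    """
    closeness = np.clip(1.0 - distance_mm / 5.0, 0.0, 1.0)
    frequency = 440.0 + 1000.0 * closeness      # Hz: higher when closer
    volume = 0.1 + 0.9 * closeness              # louder when closer
    t = np.arange(int(sample_rate * duration_s)) / sample_rate
    return volume * np.sin(2.0 * np.pi * frequency * t)
```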
It should be noted that the present technique may also take the following structures.
(1)
A surgical image processing apparatus, including:
circuitry configured to
perform image recognition on an intraoperative image of an eye; and
determine a cross-section for acquiring a tomographic image based on a result of the image recognition.
(2)
The surgical image processing apparatus according to (1), in which the circuitry is configured to
recognize an image of a surgical instrument in the intraoperative image, and determine the cross-section based on the image of the surgical instrument.
(3)
The surgical image processing apparatus according to (2), in which the cross-section passes a position of a tip end of the surgical instrument.
(4)
The surgical image processing apparatus according to (2) or (3), in which the circuitry is configured to
determine the cross-section based on a longitudinal direction of the surgical instrument.
(5)
The surgical image processing apparatus according to any one of (2) to (4),
in which the cross-section passes a position of a tip end of the surgical instrument and is parallel or at a predetermined angle to a longitudinal direction of the surgical instrument.
(6)
The surgical image processing apparatus according to any one of (1) to (5), in which the circuitry is configured to
compare a preoperative image of the eye with the intraoperative image of the eye, and
determine the cross-section based on a result of the comparison.
(7)
The surgical image processing apparatus according to (6), in which the circuitry is configured to
specify, based on the result of the comparison, an incised wound creation position in the intraoperative image that has been designated in the preoperative image, and determine the cross-section based on the incised wound creation position in the intraoperative image.
(8)
The surgical image processing apparatus according to (7),
in which the cross-section passes through the incised wound creation position in the intraoperative image.
(9)
The surgical image processing apparatus according to (7) or (8), in which the circuitry is configured to
recognize a feature of the eye in the intraoperative image, and determine the cross-section based on the incised wound creation position and the feature of the eye in the intraoperative image.
(10)
The surgical image processing apparatus according to (9),
in which the feature of the eye is a pupil, iris, eyelid, or blood vessel of the eye.
(11)
The surgical image processing apparatus according to any one of (1) to (10), in which the circuitry is configured to
control an image sensor that acquires image information of the eye to acquire the tomographic image of the cross-section.
(12)
The surgical image processing apparatus according to any one of (1) to (11), in which the circuitry is configured to
generate guide information for an operation based on the tomographic image of the cross-section.
(13)
The surgical image processing apparatus according to (12),
in which the guide information includes at least one of the tomographic image of the cross-section, operation target position information, or distance information regarding a surgical instrument and a feature of the eye.
(14)
The surgical image processing apparatus according to (13),
in which the distance information indicates the distance between the surgical instrument and the feature of the eye.
(15)
The surgical image processing apparatus according to (13) or (14),
in which the feature of the eye is a posterior capsule of the eye.
(16)
The surgical image processing apparatus according to any one of (12) to (15),
in which the guide information includes distance information that indicates distances between a surgical instrument and a plurality of features of the eye.
(17)
The surgical image processing apparatus according to any one of (13) to (16),
in which the distance information is calculated based on a plurality of images of the eye captured by a stereo camera.
(18)
The surgical image processing apparatus according to any one of (13) to (17), in which the circuitry is configured to
control an image sensor that acquires image information of the eye to acquire a preoperative tomographic image of the eye and an intraoperative tomographic image of the eye corresponding to the cross-section, and
generate the operation target position information in the intraoperative tomographic image based on a preoperatively designated position in the preoperative tomographic image.
(19)
The surgical image processing apparatus according to any one of (13) to (18), further including
at least one of a display or a speaker configured to present an image or audio corresponding to the guide information generated by the circuitry to a user.
(20)
The surgical image processing apparatus according to any one of (1) to (19), in which the circuitry is configured to
dynamically change the cross-section according to changes in a position or orientation of a surgical instrument.
(21)
The surgical image processing apparatus according to any one of (1) to (20), in which the circuitry is configured to
concurrently display a preoperative tomographic image and an intraoperative tomographic image of the eye.
(22)
A surgical image processing method, including:
performing, by circuitry of an image processing apparatus, image recognition on an intraoperative image of an eye; and
determining, by the circuitry, a cross-section for acquiring a tomographic image based on a result of the image recognition.
(23)
A surgical microscope system, including:
a surgical microscope configured to capture an image of an eye; and
circuitry configured to
perform image recognition on an intraoperative image of the eye,
determine a cross-section for acquiring a tomographic image based on a result of the image recognition, and
control the surgical microscope to acquire the tomographic image of the cross-section.
(24)
The surgical microscope system according to (23),
in which the surgical microscope is configured to capture a stereoscopic image.
Foreign Application Priority Data: 2014-205279, Oct 2014, JP (national).
This application is a Divisional of U.S. application Ser. No. 15/504,980, filed Feb. 17, 2017, which is based on PCT filing PCT/JP2015/004693, filed Sep. 15, 2015, and claims priority to Japanese Priority Patent Application JP 2014-205279, filed Oct. 3, 2014, the entire contents of each of which are incorporated herein by reference.
Related U.S. Application Data: Parent: U.S. application Ser. No. 15/504,980, filed Feb. 2017 (US); Child: U.S. application Ser. No. 17/349,926 (US).