The present disclosure relates to a surgical imaging system and a signal processing device of a surgical image.
Conventionally, for example, Patent Document 1 below discloses a configuration in which an Si-based CCD, a CMOS camera, or the like is used as a first imaging means, and an InGaAs camera, a germanium camera, a vidicon camera, or the like is used as a second imaging means, the second imaging means not having sensitivity to the wavelengths of visible light.
However, an image sensor using indium gallium arsenide (InGaAs) generally has lower resolution than a silicon-based image sensor. For this reason, in the technology disclosed in Patent Document 1 described above, for example, in a case of observing a surgical site, it is difficult to obtain a high-resolution image comparable to a visible light image because of the low resolution of the InGaAs camera.
Therefore, it has been desired to improve the resolution of an image in a case of imaging with an image sensor having light receiving sensitivity in a long wavelength region.
The present disclosure provides a surgical imaging system including a first image sensor that has light receiving sensitivity in a wavelength region of visible light and images a surgical site, a second image sensor that has light receiving sensitivity in a wavelength region of visible light and near-infrared light and images the surgical site, and a signal processing device that performs a process for displaying a first image imaged by the first image sensor and a second image imaged by the second image sensor.
Furthermore, the present disclosure provides a signal processing device of a surgical image performing a process for synthesizing to display a first image imaged by a first image sensor that has light receiving sensitivity in a wavelength region of visible light and images a surgical site and a second image imaged by a second image sensor that has light receiving sensitivity in a wavelength region of visible light and near-infrared light and images the surgical site.
According to the present disclosure, it becomes possible to improve the resolution of the image in a case of imaging with the image sensor having the light receiving sensitivity in the long wavelength region.
Note that the effects described above are not necessarily limiting, and it is also possible to obtain any of the effects described in this specification, or another effect which may be grasped from this specification, together with or in place of the effects described above.
A preferred embodiment of the present disclosure is hereinafter described in detail with reference to the accompanying drawings. Note that, in this specification and the drawings, the components having substantially the same functional configuration are assigned with the same reference sign and the description thereof is not repeated.
Note that the description is given in the following order.
1. Outline of the present disclosure
2. Configuration example of system
3. Alignment and synthesis of images obtained from two image sensors
4. Extraction of useful information image
5. Process performed in surgical imaging system according to this embodiment
6. Configuration example of optical system
An imaging device is widely used in order to image the inside of a human body. However, it is actually difficult to correctly determine the state of an organ and the like in the human body only with a normal visible light image. For this reason, it is conceivable to mount an InGaAs image sensor sensitive to the near-infrared wavelength region on a surgical imaging system in order to make it easy to visually recognize the inside of the human body, for example, blood vessels and fat regions in deep sites. However, at present, the InGaAs image sensor has a problem of large pixel size and low resolution as compared with the Si image sensor used for imaging a conventional visible light image.
Therefore, in the present disclosure, in order to compensate for the low resolution of the InGaAs image sensor, a surgical imaging system utilizing two imaging elements, the InGaAs image sensor and the Si image sensor, is devised. The InGaAs image sensor according to the present disclosure is an image sensor sensitive to a continuous wavelength region from the visible light region to the near-infrared wavelength region, and may also obtain a signal in the visible region. That is, in the present disclosure, a maximum value λ1 max of the wavelength region of the Si image sensor and a minimum value λ2 min of the wavelength region of the InGaAs image sensor satisfy the relationship "λ1 max > λ2 min".
Especially, in a surgical imaging system, high resolution is required for an image obtained by imaging. According to the present disclosure, in a case where the InGaAs image sensor sensitive from the visible light region to the near-infrared wavelength region is used, its relatively low resolution may be compensated for by combining it with a high-resolution Si image sensor. Therefore, the blood vessels and fat regions in the deep sites may be easily visually recognized as described above owing to the light receiving sensitivity in the near-infrared wavelength region, while a high-resolution image may be obtained by the Si image sensor. Furthermore, by utilizing correlation between the image information of the visible light image imaged by the Si image sensor and the image information of the visible light image imaged by the InGaAs image sensor, alignment between the images of the two sensors may be performed. Moreover, by synthesizing the image information of the visible light image imaged by the Si image sensor with the visible light image and infrared light image imaged by the InGaAs image sensor, the information obtained from both sensors may be visually recognized efficiently.
The imaging device 100 includes two imaging elements of an Si image sensor 110 and an InGaAs image sensor 120. The Si image sensor 110 and the InGaAs image sensor 120 image the same subject. For this reason, the Si image sensor 110 and the InGaAs image sensor 120 are synchronized by a synchronization signal generated by a synchronization signal generating unit 130. The synchronization signal generating unit 130 may be provided in the imaging device 100. At the time of imaging, simultaneous imaging by the Si image sensor 110 and the InGaAs image sensor 120 or frame sequential imaging by time division is performed. Note that, signal processing and display are normally performed while imaging in real time, but the signal processing and display may also be performed when reproducing recorded image data.
As a light source used when imaging, a light source capable of emitting a wide band from a visible region to the near-infrared wavelength region may be employed. Furthermore, in a case where the near-infrared wavelength region is used for fluorescence observation, a narrow wavelength light source for exciting fluorescence and a light source in the visible region may be combined.
As illustrated in
In a case of imaging a color image with one Si image sensor 110, as illustrated in
The color filter for green applied to the InGaAs image sensor 120 is arranged at the same pixel positions as the color filter for green of the Si image sensor 110. Therefore, the InGaAs image sensor 120 may image light transmitted through the color filter for green. Then, when an image transmitted through the color filter for green of the Si image sensor 110 and an image transmitted through the color filter for green of the InGaAs image sensor 120 are observed, the same object is observed in the same wavelength region. Therefore, correlation between the images imaged by the two image sensors, the Si image sensor 110 and the InGaAs image sensor 120, may be utilized, and the two images may be aligned on the basis of the correlation. This is made possible by the fact that the InGaAs image sensor 120 is also sensitive to the visible light wavelength region as described above. Note that, since the alignment is performed on the basis of the pixel values of the pixels in which the color filter for green is arranged, the resolution is higher than in a case where color filters of other colors are used, so that the alignment may be performed with high accuracy.
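The correlation-based alignment described above can be illustrated with a short sketch. The following estimates the translation between the two green-channel images by phase correlation; the function name `estimate_shift` and the FFT-based approach are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def estimate_shift(green_ref, green_moving):
    """Estimate the integer (dy, dx) translation to apply (with np.roll)
    to green_moving so that it aligns with green_ref, using phase
    correlation between the two green-channel images."""
    f_ref = np.fft.fft2(green_ref)
    f_mov = np.fft.fft2(green_moving)
    cross_power = f_ref * np.conj(f_mov)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

In practice the two sensors may also differ in scale and rotation, in which case a full deformation model rather than a pure translation would be estimated; this sketch only covers the translational component.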
Note that the color filter applied to the InGaAs image sensor 120 may be in red or blue. In this case also, the red or blue color filter is applied to the same pixel position as that of the color filter of the same color in the Si image sensor 110.
Furthermore, since the Si image sensor 110 is also sensitive to the near-infrared region, a transmission filter for near infrared may also be applied to the same pixel position in both the Si image sensor 110 and the InGaAs image sensor 120. Therefore, it is possible to align the images of both the Si image sensor 110 and the InGaAs image sensor 120 on the basis of the pixel values obtained from the pixels under the transmission filter for near infrared.
Note that, in this embodiment, a single-plate Bayer system which images the respective colors of RGB with a single image sensor is assumed as for the Si image sensor 110; however, the sensor is not limited to this configuration. For example, a three-plate system which uses Si image sensors 114, 116, and 118 dedicated to R, G, and B, respectively, in combination with a dichroic mirror 112 may also be employed as illustrated in
Next, a method of extracting a useful information image regarding a living body from the image imaged by the InGaAs image sensor 120 is described.
In
When the InGaAs image sensor 120 is combined with the filter which transmits the wavelength region from 1400 nm to 1500 nm and imaging is performed while emitting wide band light covering the near-infrared region, the signal value obtained through the filter is a signal of the wavelength region from approximately 1400 nm to 1500 nm.
At that time, as illustrated in
On the other hand, useful information may be extracted from the image 510 of the InGaAs image sensor 120 by the method described above, and the fat tissue has the bright signal value because of its low absorbance. Therefore, a fat portion 540 in the near-infrared image obtained from the InGaAs sensor 120 is a white and bright region in the image 510 in
Furthermore, in a case where the filter which transmits the wavelength region from 1400 nm to 1500 nm is used in the InGaAs image sensor 120, the transmissivity of the fat is high in this wavelength region and the light of the fat portion 540 is transmitted, but the light of other tissue is not transmitted. For this reason, in a case where the fat portion 540 overlaps with another tissue, it is possible to observe a state in which the fat portion 540 is made transparent.
On the other hand, in the image 510 obtained by imaging while combining the InGaAs image sensor 120 with the filter which transmits the wavelength region from 1400 to 1500 nm, the transmissivity of light in the fat portion 540 is high and the transmissivity of light in the blood vessel 542 is low, so that the light penetrates the fat portion 540 and the blood vessel 542 is seen through.
Therefore, in the superimposed image 520 obtained by synthesizing the visible light image 500 of the Si image sensor 110 and the image 510 of the InGaAs image sensor 120, the state of the blood vessel 542 through the fat portion 540 may be observed in detail. Furthermore, since the superimposed image 520 includes the visible light image 500, color reproduction is natural, and visibility and recognizability may be improved. Note that, in the superimposed image 520 in
In
For example, if a blood vessel 542 which cannot be visually recognized due to the fat portion 540 is present in a surgical scene, there is a risk that the blood vessel 542 is erroneously excised. In such a case, by using the superimposed image 520 according to this embodiment, the blood vessel 542 may be observed through the fat portion 540, so that a situation in which the blood vessel 542 is erroneously excised during the surgical operation may be reliably suppressed.
When generating the superimposed image 520, it is also possible to convert the IR image into a monochrome image, assign it an arbitrary single color, and alpha-blend it with the visible light image. For the monochromatization, green or blue, colors which hardly exist in the human body, are preferably selected.
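The single-color tinting and alpha blending just described can be sketched as follows; the function name `overlay_ir`, the default alpha of 0.5, and the value ranges are assumptions for illustration.

```python
import numpy as np

def overlay_ir(visible_rgb, ir_mono, alpha=0.5, tint=(0.0, 1.0, 0.0)):
    """Tint a monochrome IR image in a single color (default green, which
    hardly exists in the human body) and alpha-blend it over the visible
    image. visible_rgb: (H, W, 3) floats in [0, 1]; ir_mono: (H, W)."""
    ir_colored = ir_mono[..., None] * np.asarray(tint)  # single-color IR layer
    return (1.0 - alpha) * visible_rgb + alpha * ir_colored
```

A larger alpha emphasizes the IR information at the cost of the natural color reproduction of the visible light image, so the blend ratio would typically be user-adjustable.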
Furthermore, in the above-described example, an example of synthesizing the visible light image 500 of the Si image sensor 110 and the image 510 of the InGaAs image sensor 120 is described; however, the two images may be simultaneously displayed on one display by a side-by-side or picture-in-picture method. Furthermore, the images may be displayed on two displays. Furthermore, not only 2D display but also stereo 3D display may be performed. Moreover, a human-wearable display device such as a head-mounted display may be used as the display device 400.
Next, a process performed by the surgical imaging system 1000 according to this embodiment is described with reference to the block diagram in
At next step S12, the visible light/infrared light image imaged by the InGaAs image sensor 120 is obtained. At next step S14, the separating processor 204 separates the visible light/infrared light image into the IR image and the visible light image for alignment. Here, the IR image is an image including the pixels other than the pixels in which the color filter for green is arranged illustrated in
At next step S16, the deformation parameter generating processor 206 compares the visible light image imaged by the Si image sensor 110 with the visible light image for alignment separated by the separating processor 204. Then, the deformation parameter generating processor 206 generates a deformation parameter for deforming or enlarging the visible light/infrared light image obtained by the InGaAs image sensor 120 in accordance with the visible light image imaged by the Si image sensor 110.
Since the Si image sensor 110 and the InGaAs image sensor 120 are assumed to differ in resolution and angle of view depending on their lens characteristics, the image size is appropriately changed as signal processing before the superimposed display of the visible light image and the visible light/infrared light image is performed. For example, in a case where the Si image sensor 110 has 4K resolution (3840×2160) and the InGaAs image sensor 120 has the lower HD resolution (1920×1080), the resolution of the visible light/infrared light image imaged by the InGaAs image sensor 120 is converted to the resolution corresponding to 4K (up conversion) without changing its aspect ratio. The deformation parameter generating processor 206 generates the deformation parameter for changing the image size in such a manner.
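For the HD-to-4K case above the scale factor is exactly 2 on both axes, so the up conversion can be sketched with nearest-neighbour replication; the function name `upconvert_2x` is an illustrative assumption (a real system would more likely use bilinear or higher-order interpolation).

```python
import numpy as np

def upconvert_2x(image_hd):
    """Nearest-neighbour 2x up-conversion, e.g. HD (1920x1080) to
    4K (3840x2160). The 16:9 aspect ratio is preserved because both
    axes are scaled by the same factor."""
    return np.repeat(np.repeat(image_hd, 2, axis=0), 2, axis=1)
```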
Furthermore, the alignment and distortion correction of the images may be performed as the signal processing before the superimposed display of the visible light image and the visible light/infrared light image is performed. For example, in a case of performing the frame sequential imaging by time division, if the subject or the camera moves, positional displacement might occur between the two images. Furthermore, in a case of simultaneously imaging by the Si image sensor 110 and the InGaAs image sensor 120, the positional displacement according to positions of both the sensors and an optical system occurs. Alternatively, a difference in image size or distortion between the Si image sensor 110 and the InGaAs image sensor 120 might occur due to differences in axial chromatic aberration for each wavelength and in lens characteristic. The deformation parameter generating processor 206 generates the deformation parameter in order to perform the alignment and distortion correction of such images. In a case where the subject or camera moves in the frame sequential imaging by time division, it is possible to compare the visible light image of the Si image sensor 110 with the visible light image for alignment of the InGaAs image sensor 120 and perform block matching, thereby performing the alignment. Furthermore, the positional displacement according to the positions of both the sensors and the optical system, and the difference in axial chromatic aberration for each wavelength and in lens characteristic may be obtained in advance from specifications of the imaging device 100 and both the sensors.
Note that it is also possible to create a depth map by parallax estimation using image data in the same position of the visible light image and the visible light/infrared light image after the alignment is performed.
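The parallax estimation mentioned above can be sketched as block matching along image rows between the two aligned images; `disparity_map`, the block size, and the search range are illustrative assumptions, not the disclosed implementation. Depth is then inversely proportional to the estimated disparity.

```python
import numpy as np

def disparity_map(left, right, block=5, max_disp=8):
    """Per-pixel horizontal disparity by block matching with the sum of
    absolute differences (SAD). Border pixels are left at zero."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [
                np.abs(patch - right[y - r:y + r + 1,
                                     x - d - r:x - d + r + 1]).sum()
                for d in range(max_disp + 1)
            ]
            disp[y, x] = int(np.argmin(costs))  # best-matching shift
    return disp
```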
At next step S18, the filling processor 212 performs a process of filling the pixel value of the visible light image for alignment on the IR image separated by the separating processor 204. Specifically, a process of interpolating the pixel value of the pixel in which the color filter for green is arranged illustrated in
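The filling process at step S18 amounts to interpolating the pixels that sit under the green colour filter from their neighbours, which carry proper IR signal. The sketch below averages the four adjacent pixels; the function name `fill_green_positions` and the 4-neighbour scheme are assumptions for illustration.

```python
import numpy as np

def fill_green_positions(ir, green_mask):
    """Replace the pixel values at green-filter positions (green_mask True)
    with the average of their 4-neighbours. Edge pixels reuse the nearest
    valid value via edge padding."""
    padded = np.pad(ir.astype(float), 1, mode='edge')
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    filled = ir.astype(float).copy()
    filled[green_mask] = neighbours[green_mask] / 4.0
    return filled
```

In a Bayer-like layout the 4-neighbours of a green pixel are non-green pixels, so this simple average uses only genuine IR samples.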
At next step S20, the image quality improving processor 214 performs a process of improving the image quality of the IR image subjected to the filling process by the filling processor 212. The image quality improving processor 214 improves the image quality of the IR image imaged by the InGaAs image sensor 120 by signal processing on the basis of the image information of the visible light image imaged by the Si image sensor 110. For example, the image quality improving processor 214 estimates a blur amount (point spread function (PSF)) between the visible light image and the IR image using the visible light image imaged by the Si image sensor 110 as a guide. Then, by removing the blur of the IR image so that it conforms to the blur amount of the visible light image, the contrast of the IR image is improved and the image quality is improved.
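As a minimal stand-in for the PSF-based deblurring described above, the sketch below applies unsharp masking with a 3x3 box blur to boost IR contrast. This is an assumption for illustration only: the disclosed processor estimates the actual PSF from the visible-light guide image, whereas here the sharpening amount is a fixed parameter.

```python
import numpy as np

def sharpen_ir(ir, amount=1.0):
    """Unsharp masking: subtract a 3x3 box blur from the image and add the
    difference (high-frequency detail) back, scaled by `amount`."""
    pad = np.pad(ir.astype(float), 1, mode='edge')
    h, w = ir.shape
    # 3x3 box blur computed as the mean of the nine shifted windows
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    return ir + amount * (ir - blur)
```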
At next step S22, the useful information image extracting unit 216 extracts the useful information image regarding the living body from the IR image subjected to the image quality improving process. The useful information image is, for example, image information indicating a region of the fat portion 540 in the IR image as illustrated in
At next step S24, the useful information image processor 217 performs an imaging process on the useful information image. Here, for example, the region of the fat portion 540 corresponding to the useful information image is colored in a color (green, blue and the like) which does not exist in the human body. Therefore, the region of the fat portion 540 may be displayed with emphasis after the synthesis with the visible light image.
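The extraction of the bright (high-transmissivity) fat region at step S22 and its colouring at step S24 can be sketched together as a simple threshold followed by painting the region in a colour that does not exist in the human body; `emphasize_fat` and the threshold value are assumptions for illustration.

```python
import numpy as np

def emphasize_fat(visible_rgb, ir, threshold=0.7, color=(0.0, 1.0, 0.0)):
    """Extract the bright fat region from the IR image by thresholding and
    paint it over the visible image in an out-of-body colour (green here)."""
    fat_mask = ir > threshold          # fat appears bright in this band
    out = visible_rgb.copy()
    out[fat_mask] = color              # emphasized region after synthesis
    return out
```

The disclosed system may instead blend the colour with the underlying visible pixels rather than overwrite them, so that tissue texture remains visible inside the emphasized region.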
At next step S26, the image deforming/enlarging processor 218 applies the deformation parameter to the useful information image to perform a deforming/enlarging process of the useful information image. Therefore, the position and size of the visible light image imaged by the Si image sensor 110 conform to those of the useful information image. Furthermore, by applying the deformation parameter, the axial chromatic aberration for each wavelength and the distortion of the lens characteristic are corrected to the same level in the visible light image imaged by the Si image sensor 110 and the useful information image. At next step S28, the synthesizing processor 220 synthesizes the visible light image processed by the white light image processor 202 and the useful information image processed by the IR image processor 210. Information of the synthesized image (superimposed image) generated by the synthesis is transmitted from the signal processing device 200 to the transmitting device 300 and further transmitted to the display device 400.
The selector 222 selects any one of the synthesized image synthesized by the synthesizing processor 220, the visible light image processed by the white light image processor 202, and the useful information image processed by the IR image processor 210, and outputs it to the transmitting device 300. Therefore, any one of the synthesized image, the visible light image, and the useful information image is transmitted from the transmitting device 300 to the display device 400, so that these images may be displayed on the display device 400. Note that the switching of images by the selector 222 is performed when operation information from a user is input to the selector 222. In a case where the synthesized image is displayed, the information obtained from the Si image sensor 110 and the information obtained from the InGaAs image sensor 120 may be visually recognized at once, so that the information may be obtained most efficiently.
At next step S30, the display device 400 displays the image information transmitted from the transmitting device 300. At next step S32, it is determined whether or not to finish the process. In a case where the process is not finished, the procedure returns to step S10 to perform the subsequent process.
Furthermore, as illustrated in
Furthermore,
As described above, according to this embodiment, it is possible to improve the visibility of the blood vessels and fat regions difficult to determine only with the normal visible light image. Furthermore, it becomes possible to improve a sense of resolution of the image imaged by the InGaAs image sensor 120. Moreover, simultaneous observation becomes possible by superimposed display of the images imaged by the InGaAs image sensor 120 and the Si image sensor 110.
Although the preferred embodiment of the present disclosure is described above in detail with reference to the attached drawings, the technical scope of the present disclosure is not limited to such examples. It is clear that one of ordinary skill in the technical field of the present disclosure may conceive of various modifications and corrections within the scope of the technical idea recited in claims and it is understood that they also naturally belong to the technical scope of the present disclosure.
Furthermore, the effects described in this specification are merely illustrative or exemplary, and are not limiting. That is, the technology according to the present disclosure may exhibit other effects obvious to those skilled in the art from the description of this specification together with or in place of the effects described above.
Note that, the following configuration also belongs to the technical scope of the present disclosure.
(1) A surgical imaging system including:
a first image sensor that has light receiving sensitivity in a wavelength region of visible light and images a surgical site;
a second image sensor that has light receiving sensitivity in a wavelength region of visible light and near-infrared light and images the surgical site; and
a signal processing device that performs a process for displaying a first image imaged by the first image sensor and a second image imaged by the second image sensor.
(2) The surgical imaging system according to (1) described above, in which resolution of the first image sensor is higher than resolution of the second image sensor.
(3) The surgical imaging system according to (1) or (2) described above,
in which the first image sensor includes a color filter in a predetermined color arranged for each pixel, and
the second image sensor includes a color filter in the same color as the color of the color filter in a pixel position corresponding to a pixel position of the color filter of the first image sensor.
(4) The surgical imaging system according to (3) described above, in which the predetermined color is green.
(5) The surgical imaging system according to any one of (1) to (4) described above, in which the first image sensor is an image sensor including Si and has resolution of 3840×2160 pixels or more.
(6) The surgical imaging system according to any one of (1) to (5) described above, in which the second image sensor is an image sensor including InGaAs.
(7) The surgical imaging system according to (3) described above, in which the signal processing device includes an image conforming unit that conforms the first image to the second image on the basis of a pixel value obtained through the color filter of the first image sensor and a pixel value obtained through the color filter of the second image sensor.
(8) The surgical imaging system according to (3) described above, in which the signal processing device includes a filling processor that calculates a pixel value in a state in which the color filter is not arranged in the pixel position in which the color filter is provided of the second image sensor.
(9) The surgical imaging system according to any one of (1) to (8) described above, in which the signal processing device includes a synthesizing processor that synthesizes the first image and the second image.
(10) The surgical imaging system according to any one of (1) to (9) described above, in which the signal processing device includes an image quality improving processor that improves an image quality of the second image on the basis of the first image.
(11) The surgical imaging system according to any one of (1) to (10) described above, in which the signal processing device includes an image extracting unit that extracts a specific region from the second image.
(12) The surgical imaging system according to (11) described above,
in which the second image sensor includes a filter that transmits light in a predetermined wavelength region, and
the image extracting unit extracts the specific region on the basis of a pixel value obtained through the filter.
(13) The surgical imaging system according to (12) described above, in which the predetermined wavelength region is a wavelength region not shorter than 1300 nm and not longer than 1400 nm.
(14) The surgical imaging system according to (11) described above, in which the signal processing device includes an image processor that assigns a predetermined color to the specific region.
(15) The surgical imaging system according to (14) described above, in which the predetermined color is green or blue.
(16) The surgical imaging system according to any one of (1) to (15) described above, in which the first image sensor and the second image sensor image fat or a blood vessel in a human body.
(17) A signal processing device of a surgical image, performing a process for synthesizing to display a first image imaged by a first image sensor that has light receiving sensitivity in a wavelength region of visible light and images a surgical site and a second image imaged by a second image sensor that has light receiving sensitivity in a wavelength region of visible light and near-infrared light and images the surgical site.
Priority application: 2017-124074, filed June 2017, JP (national).
International filing: PCT/JP2018/020326, filed May 28, 2018 (WO).