The present disclosure relates to medical imaging devices. More particularly, the disclosure exemplifies endoscopic probes with a built-in optical encoder to measure non-uniform rotational distortion (NURD). Methods and systems for measuring and compensating NURD are also disclosed.
Imaging modalities that use a mechanically rotated optical probe (e.g., endoscope or catheter) to scan a bodily lumen (e.g., an artery) often suffer from image degradation due to non-uniform rotational distortion (NURD). In this regard, successful optical scanning of internal structures of a subject requires overcoming several challenges including motion artifacts associated with patient and/or organ movement, and performing accurate NURD measurement. Several techniques have been proposed to measure and correct NURD, but most conventional techniques fall short in accurately measuring the cause of the distortion. A known method measures the rotational speed of a catheter by determining the statistical variation in the speckle between adjacent A-lines. Other methods use cross correlation or phase information of successive frames but generally require highly correlated A-line data. These methods are data intensive and time consuming; see, e.g., U.S. Pat. Nos. 9,668,638 and 9,237,851, and pre-grant publication US 2013/0107274. In addition, U.S. Pat. No. 8,712,506 discloses an optical encoder integrated into devices with elongate, flexible rotary shafts, and the use of the output of the rotary encoder to improve co-registration of multi-modality images.
More recently, the present applicant has disclosed other methods for addressing NURD. For example, pre-grant publication US 2017/0360398 discloses exemplary probes including longitudinal marker elements arranged parallel to the probe axis for measuring NURD and ring marker elements arranged concentric to the probe axis for measuring non-uniform linear distortion (NULD). In this publication, the light for NURD detection comes from the rotating parts, i.e., the same rotating optics used for side-view imaging. In addition, pre-grant publication US 2018/0064396 proposes arranging electrodes concentrically to rotating and non-rotating elements of an imaging probe. The electrodes are rotationally aligned and form a capacitive sensor that can sense the rotation angle of the inner core relative to a cylindrical tube surrounding the core.
However, these known designs and techniques have significant drawbacks, in particular with respect to spectrally encoded endoscopy (SEE) probes designed for forward viewing applications. Forward viewing (or front viewing) SEE probes are particularly advantageous for endoscopic imaging applications such as ear, nose and throat (ENT) examinations, laparoscopy, orthopedic endoscopic surgery (arthroscopy), and pediatric surgery. However, in a forward-viewing SEE probe, illumination light is projected to the front of the probe such that at least one color wavelength or at least one diffracted order of light propagates substantially parallel to the probe axis. Therefore, in the case of a forward viewing SEE probe, it is difficult to use the same light for illuminating the sample and for detecting the rotation of the probe because the illumination light needs to be projected forward while rotation should be measured radially. In addition, due to the need to miniaturize the size and lower the cost of a SEE probe, the use of capacitive sensors becomes less desirable as they can increase the size and weight of the probe. Furthermore, other methods that use cross correlation or phase information of successive image frames are data intensive and time consuming. Moreover, in ultraminiature SEE endoscopes, space for an encoder is limited. In addition to the above, in ultraminiature SEE endoscopes, the longer the rotational drive shaft, or the smaller the diameter of the drive shaft, the more likely it is that NURD will occur.
Accordingly, it can be beneficial to address and/or overcome at least some of the deficiencies indicated herein above, and thus provide a new SEE probe having forward direction view, and an apparatus to use such a probe with appropriately corrected non-uniform rotational distortion.
According to at least one embodiment of the present disclosure, there is provided an endoscopic optical probe, comprising: a rotatable core for guiding first illumination light from a first light source towards an imaging target, the rotatable core extending from a proximal end to a distal end along an axis and having a plurality of patterns built-in at the distal end thereof; a non-rotatable sheath arranged concentrically around the rotatable core and also extending from the proximal end to the distal end; and one or more optical waveguides disposed on the outer surface of the non-rotatable sheath at the distal end thereof, wherein the one or more waveguides are configured to guide second illumination light from a second light source onto the plurality of patterns and collect light reflected from the plurality of patterns, wherein the plurality of patterns are rotationally aligned with the waveguides such that the waveguides can collect the reflected light as a function of a rotation position of at least first and second patterns on the rotatable core, and wherein the rotation position is determined by intensity variation of the reflected light.
According to one embodiment, the endoscopic optical probe is a forward viewing spectrally encoded endoscopy (SEE) probe. The probe includes a rotatable core for guiding first light from a first light source to an imaging target; a non-rotatable sheath arranged concentrically around the rotatable core; and a plurality of fibers surrounding the distal end of the non-rotatable sheath. The rotatable core includes a plurality of patterns built-in at the distal end thereof. Some of the fibers are used as detecting fibers to detect the first light scattered by the imaging target, and some fibers are used as waveguides to irradiate and collect second light from the plurality of patterns. SEE illumination optics is arranged inside the rotatable core to irradiate the imaging target. The rotatable core has at least first and second patterns different from each other; the patterns differ in reflectivity or in another characteristic. When the second light irradiates the first and second patterns at different times during rotation, the rotation position of the rotatable core is determined by intensity variation of the detected second light. A processor can be configured (programmed) with compensating algorithms to compensate NURD in a SEE image.
These and other objects, features, and advantages of the present disclosure will become apparent upon reading the following detailed description of exemplary embodiments of the present disclosure, when taken in conjunction with the appended drawings, and provided claims.
Further objects, features and advantages of the present disclosure will become apparent from the following detailed description when taken in conjunction with the accompanying figures showing illustrative embodiments of the present disclosure.
Throughout the figures, the same reference numerals and characters, unless otherwise stated, are used to denote like features, elements, components or portions of the illustrated embodiments. Moreover, while the subject disclosure will now be described in detail with reference to the figures, it is done so in connection with the illustrative exemplary embodiments. It is intended that changes and modifications can be made to the described exemplary embodiments without departing from the true scope and spirit of the subject disclosure as defined by the appended claims.
The embodiments disclosed herein are based on an object of addressing one or more of the issues discussed in the Background section. According to the disclosed embodiments, it is advantageous to provide an endoscopic probe having a built-in encoder, and an apparatus to use such a probe with appropriate correction of non-uniform rotational distortion. Exemplary embodiments are described as referring to spectrally encoded endoscopy (SEE). However, the novel arrangements of a built-in encoder and the novel image processing algorithms disclosed herein are not limited to SEE probes. The embodiments disclosed herein can also be applicable to various imaging modalities that use a mechanically rotated optical probe (e.g., endoscope or catheter) to scan a bodily lumen (e.g., an artery) using a fiber optic rotary joint (FORJ) or similar rotary devices.
The spectrometer 220 may include (or can be replaced by) a first detector capable of measuring spectral components of broadband light 116 emitted by the light source 110. The broadband light has sufficient bandwidth to allow for spatial resolution along a spectrally dispersed dimension. In some embodiments, the broadband light 116 emitted by light source 110 is a broadband visible light in the range of about 400-800 nm which includes a blue band of light (including wavelengths λB1 to λBN), a green band of light (including wavelengths λG1 to λGN), and a red band of light (including wavelengths λR1 to λRN). For example, the blue band includes light in a wavelength range of 400-500 nm, the green band includes light in a wavelength range of 500-600 nm, and the red band includes light in a wavelength range of 600-800 nm. In other embodiments, the wavelengths of the broadband light are optimized for identifying specific features such as blood, tissue, etc., and may extend into the near infra-red (near IR) region.
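The exemplary band boundaries above can be summarized in a small helper; the edge values (400/500/600/800 nm) are taken from the example ranges in the text and are design choices rather than fixed requirements:

```python
# Classify a wavelength (nm) into the exemplary bands described above.
def band_of(wavelength_nm):
    if 400 <= wavelength_nm < 500:
        return "blue"
    if 500 <= wavelength_nm < 600:
        return "green"
    if 600 <= wavelength_nm <= 800:
        return "red"
    return "out of band"

print(band_of(450), band_of(550), band_of(700))
```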
The broadband light source 110 may include a plurality of light sources or may be a single light source. The broadband light source 110 may include one or more of a laser, an OLED, an LED, a halogen lamp, an incandescent lamp, a supercontinuum light source pumped by a laser, and/or a fluorescent lamp. The broadband light source 110 may be any light source that provides light which can then be split up into at least three bands in which each band is further dispersed to provide light which is then used for spectral encoding of spatial information. The broadband light source 110 may be coupled by a light guiding component or may be free-space coupled to another component of the SEE probe system 100.
The light guiding component may be a source optical fiber 112 or some other optical waveguide which serves to connect the light source 110 to the probe 120. The source optical fiber 112 may be a single-mode fiber, a double-clad fiber, or a multi-mode fiber. The source optical fiber 112 may include one or more fibers. Preferably, a single fiber is used as the source optical fiber 112. The source optical fiber 112 is connected to an illumination fiber 130 of the optical probe through a fiber optic rotary junction (FORJ) 114. The illumination fiber 130 is arranged inside a drivecable 140. The drivecable 140 delivers torque from its proximal end to its distal end. The drivecable 140 can be flexible or rigid.
The drivecable 140 is inside of an inner sheath 160. The inner sheath 160 does not rotate while the drivecable 140 is rotated. Therefore, a certain tolerance is necessary between an inner diameter of the sheath 160 and an outer diameter of the drive cable 140. In addition, the inner surface of the inner sheath 160 is preferably smooth to reduce friction between the drivecable 140 and the inner sheath 160. For example, the inner surface of the inner sheath 160 can be coated with Teflon® or other similar material. In general, at least one of the inner surface of the inner sheath 160 or the outer surface of the drivecable 140 is coated with a buffer solution and/or hydrophilic coating.
As shown in
The light focusing component 152 may be, for example, a GRIN lens or a ball lens. The spacer 154 may be made of, but is not limited to, glass, heat-curable resin, UV-curable resin, or plastic. The spacer 154 can have a reflective surface or mirror 156 and a diffractive component or grating 158. The mirror 156 may be achieved, for example, by applying a reflective coating (e.g., metal coating), or by using total internal reflection on a part of spacer 154. The grating 158 may be fabricated by techniques such as dry-etching, wet-etching, nano-imprint, and soft lithography. The grating 158 may be formed directly or may be stamped on the spacer 154. For example, the spacer 154 with grating 158 can be fabricated by dicing and angle-polishing etched glass grating. The grating 158 may be, but is not limited to, a binary grating, a blazed grating, or a holographic grating.
The blue band of light (λB1 to λBN), the green band of light (λG1 to λGN), and red band of light (λR1 to λRN) are incident on the grating 158 at substantially the same incident angle θi.
The diffraction angle θd may be determined by a known grating equation such as Equation (1):
ni sin θi + nd sin θd = −mGλ  (1)
where, ni is the refractive index of the material on the incident side of the grating 158; nd is the refractive index of the material on the diffraction side of the grating; m is the diffraction order; G is spatial frequency (slits or lines per unit length) of the grating 158, and λ is wavelength of the light.
In an exemplary embodiment where the material on the diffraction side of the grating is assumed to be air (nd=1), the diffraction conditions of grating 158 may be designed according to the following parameters: ni=1.5037; nd=1; θi=42.81°; and G=860/mm. In another exemplary embodiment, the diffraction conditions may be based on the following parameters: ni=1.4713; nd=1; θi=42.7°; and G=650/mm. In some embodiments, the grating 158 is designed so that the absolute value of the product of the spatial frequency G and the diffraction order for green light mG, |G*mG|, is more than 2000/mm, 2500/mm, or 3000/mm.
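As an illustrative sketch (not part of the claimed subject matter), Equation (1) can be solved numerically for the diffraction angle. The first exemplary parameter set above is used; the green order m = −3 (consistent with |G*mG| > 2500/mm for G = 860/mm) and the 530 nm wavelength are assumptions for illustration:

```python
import math

def diffraction_angle(wavelength_mm, m, G, n_i, theta_i_deg, n_d=1.0):
    # Equation (1): n_i*sin(theta_i) + n_d*sin(theta_d) = -m*G*lambda
    s = (-m * G * wavelength_mm - n_i * math.sin(math.radians(theta_i_deg))) / n_d
    if abs(s) > 1.0:
        raise ValueError("no propagating diffracted order for these parameters")
    return math.degrees(math.asin(s))

# First exemplary parameter set from the text; order and wavelength assumed.
theta_d = diffraction_angle(530e-6, m=-3, G=860, n_i=1.5037, theta_i_deg=42.81)
print(round(theta_d, 2))
```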
It is understood that, while the red, green, and blue light bands are incident on the grating 158 at substantially the same incident angle θi, there may be some differences due to dispersion and/or manufacturing tolerances. Wavelength shift on target due to those differences can be compensated in the image reconstruction process, if necessary. For example, the incidence angle θi of the red light and the blue light may vary by about 1.0° or less. As used herein “substantially the same incident angle” in the context of the incident angle θi means a variation of 1.0° or less than 1.0°.
In order to make the blue band of illumination light, green band of illumination light, and red band of illumination light overlap on the sample 200 to form a line image, the diffraction order for the blue band of light mB, the diffraction order for the green band of light mG, and the diffraction order for the red band of light mR are restricted to satisfy Equations (2) below:
|mB|=|mG|+1
|mR|=|mG|−1
{mR, mG, mB ∈ ℤ | sgn(mR)=sgn(mG)=sgn(mB), (mR, mG, mB)≠0} (2)
In Equation (2), ℤ denotes the set of integers, and the expression “sgn(x)” denotes the sign of “x”, where the sign of “x” is either +, or −, or 0; that is, in the expression sgn(x), “x” may have a positive value or a negative value or a zero value. Equation (2) provides the required condition so that the 3 wavelengths λB, λG, λR in the blue, green, and red channels, respectively, can provide substantially the same value for each diffraction order m when diffracted by a grating defined by Equation (1). That means that light of the 3 wavelengths upon being diffracted by a grating defined by Equation (1) can overlap on the sample.
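A short numerical check of Equation (2): picking a green wavelength and order, the blue and red wavelengths that land on the same diffraction angle follow directly from the order relations |mB| = |mG| + 1 and |mR| = |mG| − 1. The values mG = −3 and λG = 530 nm below are assumptions for illustration:

```python
import math

def diffraction_angle(wavelength_mm, m, G, n_i, theta_i_deg, n_d=1.0):
    # Equation (1): n_i*sin(theta_i) + n_d*sin(theta_d) = -m*G*lambda
    s = (-m * G * wavelength_mm - n_i * math.sin(math.radians(theta_i_deg))) / n_d
    return math.degrees(math.asin(s))

m_G = -3          # assumed green order (|G*m_G| = 2580/mm for G = 860/mm)
lam_G = 530e-6    # 530 nm, expressed in mm
# Wavelengths sent to the same angle by the adjacent orders of Equation (2):
lam_B = abs(m_G) * lam_G / (abs(m_G) + 1)   # blue, order m_B = m_G - 1
lam_R = abs(m_G) * lam_G / (abs(m_G) - 1)   # red, order m_R = m_G + 1

params = dict(G=860, n_i=1.5037, theta_i_deg=42.81)
angles = [diffraction_angle(lam_G, m_G, **params),
          diffraction_angle(lam_B, m_G - 1, **params),
          diffraction_angle(lam_R, m_G + 1, **params)]
print([round(a, 3) for a in angles])  # the three bands overlap at one angle
```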
In an embodiment, as shown in
Table 1 shows exemplary diffraction orders and wavelength ranges of red (R), green (G), and blue (B) diffracted light for various experimental examples where the 3 different wavelengths (λBX, λGX, λRX) were all diffracted at substantially the same angle θd by satisfying Equation (2).
After illumination of the spectrally diffracted light 202 (e.g., red, green, and blue light) on the sample 200 (e.g., a tissue or in-vivo sample), light is reflected, scattered, or emitted as photoluminescence by the sample 200. This light is collected by one or plural detection fibers 210 (shown in
A plurality of detection fibers 210 used to collect the light reflected/emitted from the sample may be attached on or near/adjacent the outer surface of the inner sheath 160. The detection fiber 210 may be a single-mode fiber, a multi-mode fiber, or a double-clad fiber.
The probe 120 can have an outer sheath 180 outside of the inner sheath 160, so that the outer sheath 180 encloses the detection fiber 210. The probe 120 can have a window cover 162 arranged over (at the distal end of) illumination optics 150. The window cover 162 can also cover the detection fiber 210. The window cover 162 is preferably made of transparent material which can be glass or plastic having high transparency and low reflectivity. The window cover 162 is preferably substantially transparent for the illumination light 202. For example, the transmittance of the window cover 162 is preferably at least 50%, and more preferably it is equal to or higher than 90%.
As shown in
The illumination fiber 130, the drivecable 140, and the probe distal optics 150 are rotated by a motor of the rotary junction 114 as indicated by the arrow 115 such that illumination light lines scan the sample in a circular manner. In this manner, the spectrometer 220 can obtain two-dimensional (2D) data (in coordinates of wavelength and time).
After the spectrometer 220 and one or more detectors detect the light collected by fiber 210, an image processor 230 generates three 2D images, one each for red, green, and blue, from the data. In other embodiments, two, four or more 2D images can be formed using a probe with appropriate overlapping orders of dispersed light.
As noted above, the probe 120 is rotated by a motor of the rotary junction 114 to scan the sample in a circular manner to obtain the 2D data. Alternatively, the probe 120 may be oscillated to provide similar 2D data. The motor can be, for example, a Galvano motor, a stepping motor, a piezo-electric motor, or a DC motor. Regardless of the type of actuation, by rotating or oscillating the spectrally encoded lines in the direction of the arrow 115, a substantially circular region can be imaged at a plane located at a working distance from the probe.
The circular region can be located approximately perpendicular to the SEE probe axis, and therefore, the exemplary SEE probe shown in
The first waveguide 212 can be an optical fiber which delivers second illumination light 242 from the second light source 240 to the probe 120. The waveguide 212 can have second illumination optics 216 to direct at least part of the second illumination light 242 toward the pattern 145. The patterns 145 are configured so that throughput from the first waveguide 212 to the second waveguide 214 depends on rotation position of the drivecable 140.
The second waveguide 214 can be an optical fiber and can have second detection optics 218 to receive at least part of second detection light 244 reflected from the patterns 145. The second waveguide 214 delivers the second detection light 244 to a second detector 250 (shown in
The second detector 250 detects the second detection light 244. Rotation information of the distal tip of drivecable 140 can be obtained based on the intensity variation of the detected second detection light 244 due to the rotation of patterns 145. Specifically, as the drivecable 140 rotates, the patterns 145 reflect the second illumination light 242 as the second detection light 244 at predetermined times according to the speed of rotation and the arrangement of patterns 145 with respect to the optics 216 and 218.
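The rotation-information step described above can be sketched as follows, under the assumption (illustrative, not stated in the text) that the patterns 145 are N equally spaced marks, so that each pulse detected by the second detector 250 corresponds to a known angular increment even when the rotation speed varies:

```python
import numpy as np

N_MARKS = 36                       # assumed number of patterns 145
mark_angle = 2 * np.pi / N_MARKS   # angle between adjacent marks (rad)

# Simulated pulse arrival times for one revolution with a speed wobble (NURD):
# each increment deviates from 1.0 time unit by a sinusoidal perturbation.
t = np.cumsum(1.0 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, N_MARKS, endpoint=False)))
pulse_angles = mark_angle * np.arange(N_MARKS)

def angle_at(time, pulse_times=t, angles=pulse_angles):
    """Estimate the rotation angle at an arbitrary time by linear
    interpolation between encoder pulses."""
    return np.interp(time, pulse_times, angles)

# The angle recovered midway between two pulses lies between their angles.
a = angle_at((t[9] + t[10]) / 2)
print(pulse_angles[9] < a < pulse_angles[10])
```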
When 2D data (in coordinates of wavelength and time) obtained by the spectrometer 220 is turned into 2D images by the image processor 230, the rotation information of the distal tip of the drivecable 140 is used to compensate distortion of non-uniformity of rotation speed of the drivecable 140.
In one embodiment, each of the waveguides 212 and 214 can be an optical fiber. One or both of the fibers can be a single mode fiber so that the spatial resolution of the rotation information can be improved. For convenience and ease of assembly, the waveguides 212 and 214 can have the same parameters (e.g., the same diameter) as the detection fibers 210. The waveguides 212 and 214 are arranged concentric to the drivecable 140 and can be located next to each other or at a predetermined distance from each other.
In one embodiment, only the second illumination optics 216 and second detection optics 218 may need to be arranged concentric to the drivecable 140 and located next to each other or at a predetermined distance from each other. Then, the optics 216 and optics 218 can be fused/connected with a single fiber. The one fiber can work as both of the waveguides 212 and 214. In this case, a circulator can be used between the fiber and the second light source 240/the second detector 250 to divide the second illumination light 242 and the second detection light 244.
The second illumination optics 216 and the second detection optics 218 can be the same optics. That is, the second illumination optics 216 and the second detection optics 218 can be one set of integrated optical elements or a single optical element. The second illumination optics 216 and/or the second detection optics 218 can be a prism, a lens, a mirror, or a combination of those. The second illumination optics 216 and/or the second detection optics 218 can be a common path interferometer. In one embodiment, to implement the second illumination optics 216 and/or the second detection optics 218, a focusing lens combined with a mirror or prism can be added to the distal end of a fiber 210 which functions as both of the waveguides 212 and 214. Methods well known in the art including bonding, welding, or soft-lithography (e.g., causing a pre-polymer adhesive composition to polymerize so as to form the diffractive or reflective optics on the distal end of the fiber) can be used. For example, a miniature GRIN or ball lens can be attached to the distal end of a fiber 210, and a transparent spacer can be attached to the lens, and then the spacer can be further cut or polished at an angle, and further coated with a reflective coating. In one embodiment, the second illumination optics 216 and the second detection optics 218 can be formed by one or more ball lenses having a polished surface. In this case, the polished surface serves to direct the light along a desired path, and the ball or spherical surface of the lens serves to focus and/or collimate the light.
In one embodiment, the patterns 145 can have one unique mark and other marks different from the unique mark to enable determination of absolute rotation angle. For example, the patterns 145 include one pattern that is distinguishable from the other patterns. The patterns can be periodic or non-periodic. The one pattern can be used to determine the orientation of the drivecable 140 and to set the orientation of the reconstructed image in the image reconstruction process. The one pattern can also be used to set a zero or home position of the drivecable 140 and, correspondingly, an origin or home position of a reconstructed image in the image reconstruction process.
In one embodiment, the patterns 145 may have a periodic or non-periodic arrangement but not constant reflectivity in the rotation direction when the drivecable 140 rotates. For example, a specific pattern (e.g., the zero or home pattern) can have higher reflectivity than the other patterns. In one embodiment, the patterns 145 may have a periodic or non-periodic arrangement but not constant directivity of reflection when the drivecable 140 rotates. For example, a specific pattern (e.g., the zero or home pattern) can have a directivity normal (perpendicular) to the second detection optics 218, while the other patterns may have an angular directivity with respect to the detection optics 218.
In one embodiment, the patterns 145 may include a fluorescent material to generate periodic or non-periodic but not constant fluorescence intensity under the second illumination light 242 when the drivecable 140 rotates. For example, a specific pattern (e.g., the zero or home pattern) can have a higher concentration of fluorescent material than the other patterns. In one embodiment, the patterns 145 are point reflectors arranged at predetermined angular spacing. In one embodiment, the patterns 145 are grooves on the drivecable 140 and/or grooves on the can 142.
In one embodiment, the patterns 145 are stripe patterns. Stripe patterns could be produced by making the encoder pattern on a planar substrate and then wrapping the substrate around the drivecable 140 or can 142 at or near the distal end of the imaging probe 120. Alternatively, stripe patterns can be formed directly on the outer surface of can 142 and/or drivecable 140 by directly machining or printing such patterns on the outer surface thereof.
In one embodiment, a combination of at least two different patterns may be used. For example, a specific pattern (e.g., a point reflector) can provide a higher intensity of reflected light, while other patterns (e.g., stripes of less reflective material or grooves) can provide reflected light with progressively lower reflectivity. In this embodiment, the specific pattern can be used as the zero, home, or absolute angle marker, and the other patterns can be used to determine angular position, direction, and/or speed of rotation.
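As a sketch of how such a combined pattern could be decoded: the brightest pulse per revolution identifies the home marker (absolute angle), and the spacing of the remaining pulses gives rotational speed. The pulse times and heights below are simulated, hypothetical data:

```python
import numpy as np

# Simulated encoder output: eight marks per revolution, with the fourth mark
# being the unique, more reflective "home" pattern (hypothetical values).
pulse_times = np.array([0.0, 1.0, 2.1, 3.0, 4.2, 5.0, 6.1, 7.0])
pulse_heights = np.array([0.4, 0.5, 0.4, 1.0, 0.45, 0.5, 0.4, 0.5])

home_index = int(np.argmax(pulse_heights))  # unique high-reflectivity mark
intervals = np.diff(pulse_times)            # non-uniform under NURD
mean_speed = 1.0 / intervals.mean()         # marks per unit time
print(home_index, round(mean_speed, 3))
```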
In one embodiment, a portion of the drivecable 140 and/or can 142 where the patterns 145 exist has a polygonal shape. For example, the distal end of the drivecable or the can 142 can have the shape of a polygonal mirror with a predetermined number of sides (facets) so that, when the drivecable rotates, each facet of the polygonal mirror reflects light to generate periodic or non-periodic but not constant intensity.
In one embodiment, it is preferable that the wavelength of the second illumination light 242 used to irradiate the patterns 145 be different from the wavelengths of the dispersed illumination light 202 used to irradiate the sample. This is considered advantageous because using different wavelengths prevents cross-talk between the light used to measure NURD and the light used to irradiate the sample. In addition, this prevents “direct coupling”; that is, it prevents the first illumination light, upon being scattered by the sample, from passing directly through the detection path to the spectrometer used for imaging.
In one embodiment, the inner sheath 160 may have a hole so that the second illumination light 242 and the second detection light 244 can go through the sheath 160. In one embodiment, the inner sheath 160 is transparent for the second illumination light 242 and the second detection light 244. In one embodiment, the inner sheath 160 is not transparent for the broadband light 116, but transparent for the light 242 and the light 244, so that broadband light 116 and the collected light 118 cannot go into the waveguide 214.
During operation of the SEE imaging probe 100, the second illumination source 240 delivers second illumination light 242 to the patterns 145 via the first waveguide 212 and the second illumination optics 216 to make at least part of the second illumination light 242 go to the pattern 145. The second detection optics 218 receive at least part of the second detection light 244 reflected from the patterns 145, and the second waveguide 214 delivers the second detection light 244 to a second detector 250.
The polar coordinate system is defined based on the view angle and distance from the probe axis. One method to obtain Q(λ, a) is to apply linear compensation to P(λ, n). For example, if Q(λ, a) is defined such that it has ΔA spectrum lines between Q(λ, Ai) and Q(λ, Ai+1) for any i, the ΔA spectrum lines of Q(λ, a) between Q(λ, Ai) and Q(λ, Ai+1) are calculated by applying linear compensation to the spectrum lines of P(λ, n) between P(λ, Ni) and P(λ, Ni+1). It should be noted here that Ni+1−Ni depends on i and is not constant when rotation is not uniform. However, by applying linear compensation to P(λ, n), the converted 2D data Q(λ, Ai) is obtained at substantially equal (regular) intervals.
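The linear compensation described above can be sketched with one-dimensional interpolation: P holds one spectrum line per acquisition index n, captured at non-uniform angles, and Q resamples each wavelength row onto a uniform angular grid. Array sizes and names are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_wavelengths = 50, 128
P = rng.random((n_wavelengths, n_lines))              # P(lambda, n), simulated
angles_n = np.sort(rng.uniform(0, 2 * np.pi, n_lines))  # non-uniform angles (encoder)

target_angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)  # regular grid
Q = np.empty((n_wavelengths, target_angles.size))
for k in range(n_wavelengths):
    # Linear compensation: interpolate each wavelength row onto the grid.
    Q[k] = np.interp(target_angles, angles_n, P[k])
print(Q.shape)
```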
The SEE probe system 100 shown in
In some exemplary embodiments, instead of guiding the broadband light from light source 110 into the illumination fiber 112, the light can first be dispersed to predetermined wavelength(s) λ1, λ2, . . . , λN. For example, the light with the wavelength λi (1≤i≤N) can be input into the illumination fiber 112 in a multiplexed manner. The input light is provided through the junction (FORJ 114), illumination fiber 130, and distal optics 150 to the sample 200, and collected via the fibers 210 and guided to the detector/spectrometer 220. Optionally, in the case of imaging with light of individual wavelengths λi, the detector/spectrometer 220 can be or include a simple light intensity detector such as a photodetector because the input light has a wavelength of λi. Then, by changing the index i from 1 to N, it is possible to obtain the one-dimensional line image at different wavelengths, by using a simple intensity photodetector or a line sensor. By mechanically scanning the line, it is possible to acquire the two-dimensional image of the object.
One function of the fiber junction (FORJ 114) is to make the probe 120, including the illumination fiber 130 and distal optics 150, detachable. In some embodiments, the probe 120 is a disposable probe; in these embodiments, the illumination fiber 130, the detection fiber 210, and the waveguides 212/214 may be detachable. With this exemplary function, the probe 120 can be disposable, so that a sterile probe can be provided for human “in vivo” use every time an imaging operation is performed.
The SEE probe system 100 as described and shown herein can diffract the broadband light along the axis of the probe 120, and facilitate forward viewing. The exemplary probe may be held stationary or it may be rotated, where the rotation of the probe is particularly useful for acquiring a two-dimensional front-view image as well as a color image. Since the detection fibers 210 can be attached to the outer surface of the non-rotatable inner sheath 160, continuous rotation of the probe 120 is possible without the illumination fiber 130 and distal optics 150 becoming entangled with other instruments. However, in some exemplary applications, it may be necessary that the probe 120 be rotated in an oscillating manner, e.g., +/− approximately 360 degrees back and forth. In other exemplary situations, the probe 120 can be rotated +/− approximately 180 degrees back and forth. In further exemplary embodiments, other degrees of oscillating rotation can be used, such as, e.g., 90 degrees or 270 degrees of back and forth rotation.
Whether the probe 120 is rotated continuously or in an oscillating manner, the possibility of non-uniform rotational distortion (NURD) cannot be avoided. Therefore, it is important to implement accurate NURD measurement. To that end, at the same time as the imaging operation takes place, the image processor 230 controls the second light source 240 to emit second illumination light 242 which is directly guided or otherwise transported by the first waveguide 212 to the patterns 145. Upon reflection from the patterns 145, the reflected light (second detection light 244) is guided by the second waveguide 214 to the second detector 250. The output of the second detector 250 will then be used by the processor 230 to estimate or measure the rotational motion of the probe 120 and any distortions thereof. In this manner, the second light source 240 and second detector 250, in cooperation with the waveguides 212/214, the optics 216/218, and the patterns 145, form an optical encoder circuit.
Then the normalized 2D data (in wavelength-time coordinates) is converted to 2D data (wavelength—angular coordinate in polar coordinate system) using rotation information from the second detector 250. As explained above, the drivecable 140 or the can 142 at the tip of the drivecable 140 has the multiple patterns 145 which have periodic reflectance in the rotation direction. However, even if the rotation angle of the pattern 145 is set at regular intervals, if there is non-uniform rotation, the image data is collected at times Ti of non-uniform intervals. Therefore, at step 606, the image processor 230 interpolates the normalized data (normalized image data) using rotation information data (rotation data) from the second detector 250.
Next, at step 608, the image processor 230 splits the image data in polar coordinates to extract three (3) 2D data sets, one for each channel (Red, Green, and Blue), from the 2D data calculated in step 606 based on the wavelengths of the three channels. In the case of a single-color image, the image data in polar coordinates may be split into different 2D data sets according to diffraction orders of the single color.
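The channel split of step 608 amounts to partitioning the spectral columns by wavelength range. A short Python sketch is given below; the specific band limits used here are hypothetical placeholders, since the actual split depends on the probe design.

```python
import numpy as np

def split_channels(polar_data, wavelengths, bands):
    """Split wavelength-domain polar data into per-channel 2D data sets.

    polar_data: 2D array of shape (n_angles, n_wavelengths)
    wavelengths: wavelength (nm) associated with each spectral column
    bands: dict of channel name -> (lo, hi) wavelength range in nm;
        the ranges are illustrative, not prescribed by the disclosure
    """
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (wavelengths >= lo) & (wavelengths < hi)
        out[name] = polar_data[:, mask]   # keep only this channel's columns
    return out
```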
At step 610, the image processor 230 converts each 2D data set in the wavelength domain into 2D data in the radial coordinate domain of the polar coordinate system. One method for this conversion is to use the known relation between wavelength and the radial coordinate in the polar coordinate system and apply linear compensation so that the converted image data is set at regular intervals in the radial coordinate domain. To determine the relation between wavelength and radial coordinates, probe design parameters such as grating groove density and incident angle can be used. Conversion from the wavelength (λ) domain to the radial (θd) domain can be done based on Equation (1) by the following steps. First, create a θd array at regular intervals. Second, convert each θd to a wavelength using Equation (1) to make a wavelength array (at irregular intervals). Third, calculate linear compensation coefficients between the wavelength array and the wavelengths of the obtained data (wavelengths corresponding to spectrometer pixels). Fourth, convert the obtained data to linearly compensated data using the coefficients of linear compensation.
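The four steps above can be sketched in Python as follows. Equation (1) is not reproduced in this excerpt, so the sketch assumes a generic grating relation, sin(θd) = G·m·λ − sin(θi), purely as a stand-in; the function name and single-A-line data layout are likewise illustrative assumptions.

```python
import numpy as np

def wavelength_to_radial(data, data_wl, groove_density, m, theta_i, n_out):
    """Resample one spectral A-line onto a uniform diffraction-angle grid.

    Follows the four steps in the text: (1) build a uniform theta_d array,
    (2) map each theta_d back to a wavelength via the assumed grating
    relation sin(theta_d) = groove_density * m * wl - sin(theta_i)
    (a stand-in for Equation (1)), then (3)-(4) linearly interpolate the
    measured spectrum, sampled at data_wl, at those wavelengths.
    """
    wl_min, wl_max = data_wl.min(), data_wl.max()
    td_lo = np.arcsin(groove_density * m * wl_min - np.sin(theta_i))
    td_hi = np.arcsin(groove_density * m * wl_max - np.sin(theta_i))
    theta_d = np.linspace(td_lo, td_hi, n_out)            # step 1: uniform theta_d
    wl = (np.sin(theta_d) + np.sin(theta_i)) / (groove_density * m)  # step 2
    # steps 3-4: linear compensation between spectrometer pixels
    return np.interp(wl, data_wl, data), theta_d
```

In practice this would be applied column-wise to each 2D channel data set; a single A-line is shown to keep the sketch short.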
At step 612, the processor 230 converts the three 2D data sets in the polar coordinate system to three 2D images in the Cartesian coordinate system. At step 614, the processor 230 can apply white balancing and gamma correction to each of the individual 2D image data sets. After applying the necessary corrections, the processor 230 merges the individually corrected 2D image data sets into one RGB color 2D image.
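Steps 612 and 614 may be sketched as follows. The nearest-neighbour polar-to-Cartesian lookup, the simple per-channel gain white balance, and the gamma value are all illustrative simplifications; a practical implementation would likely use proper interpolation and calibrated correction curves.

```python
import numpy as np

def polar_to_cartesian(polar, size):
    """Map a (n_angles, n_radii) polar image onto a size x size Cartesian
    grid by nearest-neighbour lookup (a simple stand-in for step 612)."""
    n_ang, n_rad = polar.shape
    c = (size - 1) / 2.0                       # image center
    y, x = np.mgrid[0:size, 0:size]
    r = np.hypot(x - c, y - c) / c * (n_rad - 1)
    a = (np.arctan2(y - c, x - c) % (2 * np.pi)) / (2 * np.pi) * n_ang
    out = np.zeros((size, size))
    inside = r <= n_rad - 1                    # pixels within the scan radius
    out[inside] = polar[a[inside].astype(int) % n_ang,
                        np.round(r[inside]).astype(int)]
    return out

def merge_rgb(r, g, b, gains=(1.0, 1.0, 1.0), gamma=2.2):
    """White-balance each channel and apply gamma before stacking to RGB
    (a simple stand-in for step 614)."""
    stack = np.stack([r * gains[0], g * gains[1], b * gains[2]], axis=-1)
    return np.clip(stack, 0.0, 1.0) ** (1.0 / gamma)
```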
Certain aspects of the present disclosure can be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
An I/O interface can be used to provide communication interfaces to input and output devices, which may include a keyboard, a display, a mouse, a touch screen, a touchless interface (e.g., a gesture recognition device), a printing device, a light pen, an optical storage device, a scanner, a microphone, a camera, a drive, a communication cable, and a network (either wired or wireless). Also, the function of one or more detectors may be realized by computer executable instructions (e.g., one or more programs) recorded on a Storage/RAM.
The CPU 2301 is comprised of one or more processors (microprocessors) configured to read and perform computer-executable instructions stored in the storage memory 2302. The computer-executable instructions may include program code for the performance of the novel processes, methods and/or calculations disclosed herein.
The computer or image processor 230 can be programmed to apply exemplary image processing such as noise reduction, coordinate distortion correction, contrast enhancement, and so on. After, or even while, the image processing is performed, the data can be transmitted from the image processor to a display (not shown). The display can be, for example, a liquid crystal display (LCD). The display can display, for example, the individual images obtained from a single color or a composite color image according to the present disclosure. The display can also display information other than the image, such as an amount of NURD, the date of observation, what part of the human body is observed, the patient's name, the operator's name, and so on.
The CPU 2301 is configured to read and perform computer-executable instructions stored in the Storage/RAM 2302. The computer-executable instructions may include those for the performance of the methods, measurements, and/or calculations described herein. For example, the CPU 2301 may calculate the angular movement or speed of rotation of the SEE probe, can use that information (rotation speed or angular movement) to calculate an amount of NURD, and can also control the FORJ 114 in view of the amount of NURD. In this manner, the computer or processor 230 can obtain a new set of images whose angular positions are corrected and in which the amount of NURD is reduced or eliminated. Storage memory 2302 includes one or more computer-readable and/or writable media, which may include, for example, a magnetic disc (e.g., a hard disk), an optical disc (e.g., a DVD, a Blu-ray), a magneto-optical disk, semiconductor memory (e.g., a non-volatile memory card, flash memory, a solid-state drive, SRAM, DRAM), an EPROM, an EEPROM, etc. The Storage/RAM 2302 may store computer-readable data and/or computer-executable instructions. The components of the processor 230 may communicate via a data bus 2305.
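One possible way the CPU 2301 could quantify an amount of NURD from the encoder edge timestamps is sketched below. The disclosure does not fix a specific metric, so the peak relative deviation of the instantaneous rotation speed from its mean is used here purely as an illustrative example.

```python
import numpy as np

def nurd_metric(edge_times, counts_per_rev):
    """Estimate instantaneous rotation speed from encoder edge timestamps
    and quantify NURD as the peak relative deviation from the mean speed
    (one possible metric; the text does not prescribe a definition).

    edge_times: monotonically increasing timestamps of encoder edges
    counts_per_rev: encoder pattern periods per revolution
    """
    dtheta = 2 * np.pi / counts_per_rev
    speed = dtheta / np.diff(edge_times)   # rad/s between successive edges
    deviation = np.max(np.abs(speed - speed.mean())) / speed.mean()
    return speed, deviation
```

For perfectly uniform rotation the metric is zero; larger values indicate stronger distortion, which could inform both image correction and control of the FORJ 114.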
The system interface 2304 provides communication interfaces to input and output devices, which may include a keyboard, a display, a mouse, a printing device, a touch screen, a light pen, an optical storage device, a scanner, a microphone, a camera, a drive, a communication cable, and a network (either wired or wireless).
The second detector 250 may include, for example, a photomultiplier tube (PMT), a photodiode, an avalanche photodiode detector (APD), a charge-coupled device (CCD), a multi-pixel photon counter (MPPC), or other like element. Also, the function of the detector 250 may be realized by the processor 230 operating on computer executable instructions (e.g., one or more programs) recorded on storage memory 2302.
In an exemplary operation of the SEE probe system 100, the user/operator can place the exemplary SEE probe 120 into a sheath, and then can insert such arrangement/configuration into a predetermined position of a human body. The sheath alone may be inserted into the human body in advance, and the SEE probe may then be inserted into the sheath after sheath insertion. The exemplary probe can be used to observe the inside of the human body and can function as an endoscope, such as an arthroscope, bronchoscope, sinuscope, vascular endoscope, and so on.
In referring to the description, specific details are set forth in order to provide a thorough understanding of the examples disclosed. In other instances, well-known methods, procedures, components and circuits have not been described in detail as not to unnecessarily lengthen the present disclosure.
It should be understood that if an element or part is referred to herein as being “on”, “against”, “connected to”, or “coupled to” another element or part, then it can be directly on, against, connected, or coupled to the other element or part, or intervening elements or parts may be present. In contrast, if an element is referred to as being “directly on”, “directly connected to”, or “directly coupled to” another element or part, then there are no intervening elements or parts present. When used, the term “and/or” includes any and all combinations of one or more of the associated listed items, if so provided.
Spatially relative terms, such as “under”, “beneath”, “below”, “lower”, “above”, “upper”, “proximal”, “distal”, and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the various figures. It should be understood, however, that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, a relative spatial term such as “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein are to be interpreted accordingly. Similarly, the relative spatial terms “proximal” and “distal” may also be interchangeable, where applicable.
The term “about,” as used herein means, for example, within 10%, within 5%, or less. In some embodiments, the term “about” may mean within measurement error.
The terms first, second, third, etc. may be used herein to describe various elements, components, regions, parts and/or sections. It should be understood that these elements, components, regions, parts and/or sections should not be limited by these terms. These terms have been used only to distinguish one element, component, region, part, or section from another region, part, or section. Thus, a first element, component, region, part, or section discussed below could be termed a second element, component, region, part, or section without departing from the teachings herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used herein, the singular forms “a”, “an”, and “the”, are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms “includes” and/or “including”, when used in the present specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof not explicitly stated.
In describing example embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner.
While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the present disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.