The present disclosure generally relates to automated digital image processing methodologies and related hardware solutions for selectively enhancing digital images of an eye during an ophthalmic surgery.
Modern surgical procedures may employ a surgical microscope to provide a surgeon with a magnified view of target anatomy. Such magnification allows the surgeon to perform delicate surgical procedures on minuscule anatomical features or tissues. During a microscope-assisted procedure or microsurgery, magnified stereoscopic digital images of the target anatomy may be displayed within an operating suite via one or more high-resolution display screens, a heads-up display, or a set of oculars. Presentation of the magnified images in such a manner allows the surgeon to accurately visualize the target anatomy when evaluating its health or when maneuvering a tool in the performance of a surgical task.
Real-time visualization of target anatomy during a microsurgery requires adequate task lighting. Surgical task lighting is often task-specific, with available lighting devices possibly including a microscope-mounted lamp, an overhead lighting array, a surgeon-worn headlight, and/or an endoilluminator. Each lighting device emits light in a particular wavelength range and color temperature. Surgical task illumination may coincide with digital image processing of collected image data to present a useful representation of the target anatomy to the surgeon within the operating suite.
Disclosed herein are automated methods and hardware-based systems for selectively enhancing digital images in a region-specific manner, e.g., during performance of a visualization procedure or a microsurgery. In a representative ophthalmic context, for instance, the microsurgery may include cataract surgery, minimally invasive glaucoma surgery (MIGS), or vitreoretinal surgery, with these and other possible surgeries of the eye or other target anatomy of a human patient benefitting from the present teachings.
In accordance with the disclosure, real-time image segmentation and enhancement are performed by one or more processors of an electronic control unit (ECU), itself possibly constructed as one or more networked processors or computing nodes. The ECU separates a collected digital image into different surgeon-requested and/or ECU-requested regions after first using artificial intelligence (AI) logic to identify the requested region(s) in the digital image. The present strategy is readily customizable and possibly interactive, e.g., by considering the specific clinical needs of the particular surgeon performing the procedure. Real-time digital images processed in the manner described below may help guide the surgeon during the procedure, for instance through presentation of an improved red reflex response.
In particular, a representative computer-based method is disclosed herein for enhancing a digital image of a patient's eye during an ophthalmic surgery or other procedure. An implementation of the method may include illuminating the eye with light from a modulable lighting source, as well as collecting a digital image or images of the eye while the eye is illuminated with light from the modulable lighting source, and while the eye is tracked via motion tracking logic of the above-noted ECU. The method also includes receiving input signals via the ECU during the ophthalmic procedure, the input signals including a request to enhance an area-of-focus of the digital image, and identifying the area-of-focus via AI logic of the ECU in response to the input signals.
Additionally, the method in this particular embodiment includes selectively adjusting respective characteristics of the modulable lighting source and constituent pixels of the digital image located outside of the requested area-of-focus. This occurs via the ECU in response to the input signals. Display control signals are thereafter transmitted to one or more display screens to present an enhanced digital image of the patient's eye.
A system for enhancing a digital image of a patient's eye during an ophthalmic surgery is also disclosed herein. A non-limiting construction of the system includes a modulable lighting source, a digital camera, and an ECU. The modulable lighting source is operable for illuminating the patient's eye with light. The digital camera in turn is operable for collecting the digital image or images of the patient's eye as the eye is illuminated by the light and tracked by motion tracking logic. The ECU, which is in communication with the digital camera and the modulable lighting source, is configured to receive input signals. The input signals include a request to enhance an area-of-focus of the digital image.
Additionally, the ECU is operable for identifying the requested area-of-focus via AI logic in response to the input signals, with the AI logic including image segmentation logic, a neural network, and/or a trained model. As part of the ECU's envisioned construction, the ECU selectively adjusts respective characteristics of the modulable lighting source and the constituent pixels of the digital image located outside of the area-of-focus. This control action occurs in response to the input signals. As noted above, the ECU is also configured to transmit display control signals to one or more display screens to thereby present an enhanced digital image of the patient's eye.
A computer-readable storage medium is also disclosed herein on which is recorded instructions for enhancing a digital image of a patient's eye during an ophthalmic procedure. Execution of the instructions by one or more processors causes the processor(s) to receive a digital image or images of the eye from a digital camera, with the digital camera being in wired and/or wireless communication with the processor(s) as the eye is illuminated by light from a modulable lighting source and tracked via motion tracking logic. The processor in a possible implementation may be caused to receive input signals from a microphone. The input signals may be spoken utterances or phrases from a surgeon performing the ophthalmic procedure, such that the input signals include a request to enhance an area-of-focus of the digital image, e.g., a pupil, iris, sclera, or limbus region of the eye.
In this representative construction, execution of the instructions causes the processor(s) to identify the area-of-focus using AI logic, with this action occurring in response to the input signals. The processor then selectively adjusts respective characteristics of the modulable lighting source and constituent pixels of the digital image located outside of the area-of-focus in response to the input signals, and transmits display control signals to one or more display screens to thereby present an enhanced digital image of the patient's eye.
The above-described and other possible features and advantages of the present disclosure will be apparent from the following detailed description of the best modes for carrying out the disclosure when taken in connection with the accompanying drawings.
The drawings described herein are for illustrative purposes only, are schematic in nature, and are intended to be exemplary rather than to limit the scope of the disclosure.
The above summary is not intended to represent every possible embodiment or every aspect of the subject disclosure. Rather, the foregoing summary is intended to exemplify some of the novel aspects and features disclosed herein. The above features and advantages, and other features and advantages of the subject disclosure, will be readily apparent from the following detailed description of representative embodiments and modes for carrying out the subject disclosure when taken in connection with the accompanying drawings and the appended claims.
Embodiments of the present disclosure are described herein. It is to be understood, however, that the disclosed embodiments are merely examples and other embodiments can take various and alternative forms. The Figures are not necessarily to scale. Some features could be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present disclosure.
Referring now to the drawings, wherein like reference numbers refer to like components throughout the several views, an operating suite 10 is shown in which a surgeon uses a surgical microscope 16 to perform a microsurgery on target anatomy of a patient.
Using associated hardware and software of the surgical microscope 16 and an electronic control unit (ECU) 50C as described below, the surgeon is able to view magnified enhanced digital images 19 of the target anatomy. Visualization may be facilitated via one or more high-resolution display screens 22 and/or 220, one or more of which may include a touch screen 220T, e.g., a capacitive display surface. As shown, the enhanced digital images 19 are of the target eye 30 of the patient.
Also present within the operating suite 10 is an optional cabinet 24 containing the ECU 50C, a processor 52 of which is shown schematically in the accompanying figures.
The ECU 50C of the present disclosure is described in further detail below.
Real-time images of the patient's eye 30 during eye surgery tend to be rich in content, with the collected digital image (arrow 25) often showing a large part of the eye 30, possibly with different illumination patterns. However, at any given time during the course of an eye surgery, the surgeon may choose to focus on a relatively narrow region of the displayed digital image (arrow 25). The automated solutions disclosed herein are thus intended to reduce stress on the surgeon and improve surgical outcomes when using digital images (arrow 25), in particular by providing automated tools that functionally augment the digital image (arrow 25) within a desired area-of-focus.
In particular, the method 50 described herein provides localized image enhancement and image-guided surgical visualization for ophthalmic surgeries and other microsurgeries that combine the following concepts: (i) localization, (ii) image enhancement, and (iii) optional feedback. These three concepts are applied below in describing representative ophthalmic use cases, including cataract surgery, MIGS, and vitreoretinal surgery, without limiting the present teachings to such microsurgeries.
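By way of a non-limiting illustration only, the following Python sketch shows how these three concepts might be composed into a simple per-frame processing loop. The function names and the placeholder circular mask are hypothetical and are not elements of the disclosed system.

```python
import numpy as np

def locate_area_of_focus(image: np.ndarray, request: str) -> np.ndarray:
    """(i) Localization placeholder: return a boolean mask of the requested
    region. A centered circle stands in for a real segmenter here, and the
    request string is ignored by this placeholder."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = min(h, w) // 4
    return (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r * r

def enhance_region(image: np.ndarray, mask: np.ndarray, gain: float = 1.3) -> np.ndarray:
    """(ii) Enhancement placeholder: brighten pixels inside the mask only."""
    out = image.astype(np.float32)
    out[mask] = np.clip(out[mask] * gain, 0, 255)
    return out.astype(np.uint8)

def process_frame(image: np.ndarray, request: str) -> np.ndarray:
    mask = locate_area_of_focus(image, request)
    enhanced = enhance_region(image, mask)
    # (iii) Optional feedback, e.g., microscope zoom or illumination
    # control, would be issued here based on measured image quality.
    return enhanced
```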
The ECU 50C depicted in the accompanying figures is configured to perform these localization, enhancement, and feedback functions in real time during the ophthalmic procedure.
Referring again to the drawings, the modulable lighting source 18 is operable for illuminating the patient's eye 30 with light (arrow LL).
In a possible embodiment, the light (arrow LL) may be a form of white light, e.g., warm white light having a color temperature of less than about 4500° K. The color, color temperature, brightness, and/or other possible characteristics of the light (arrow LL) may be selectable by the surgeon in response to input signals (CC50) to the ECU 50C, e.g., as stated or uttered voice commands 51. For example, the modulable lighting source 18 may be configured as a red, green, blue (RGB) diode array or other lighting system having a variable output, including red, green, and blue light, individually or collectively. The modulable lighting source 18 may output the light (arrow LL) outside of the visible spectrum in some implementations, e.g., as near infrared, ultraviolet, etc.
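As a non-limiting illustration, a command payload for such a variable-output source might resemble the following sketch; the `LightingCommand` structure and its fields are hypothetical rather than part of the disclosed hardware interface.

```python
from dataclasses import dataclass

@dataclass
class LightingCommand:
    """Hypothetical command payload for a modulable RGB lighting source."""
    red: int                 # 0-255 drive level for the red channel
    green: int               # 0-255 drive level for the green channel
    blue: int                # 0-255 drive level for the blue channel
    brightness: float = 1.0  # overall output scale factor, 0.0-1.0

# e.g., a warm white setting dominated by orange/red output
warm_white = LightingCommand(red=255, green=180, blue=110, brightness=0.8)
```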
The system 26 of the present disclosure thus includes the modulable lighting source 18, the digital camera 20, and the ECU 50C, as described above.
Other embodiments may be realized in which instructions embodying the method 50 are recorded on a non-transitory computer-readable storage medium, e.g., in memory 54 of the ECU 50C, and executed by the processor(s) 52 of the ECU 50C as shown, or one or more processors 52 located apart from the ECU 50C in other embodiments. Such structure would allow the ECU 50C to cause disclosed actions of the system 26 to occur. As noted above, the processor(s) 52 in alternative embodiments may be integrated into other hardware, e.g., the surgical microscope 16 and/or the digital camera 20, with inclusion of the processor(s) 52 in the construction of the ECU 50C being non-limiting.
During predetermined stages of the representative eye surgery in which the surgeon desires to test and evaluate the red reflex of the patient's eye 30, with such stages possibly identified by the ECU 50C as an identified stage of the ophthalmic procedure, the ECU 50C causes the modulable lighting source 18 to emit the light (arrow LL). This action may entail simply turning on the modulable lighting source 18 at the onset of the microsurgery. At the same time, the ECU 50C may command the digital camera 20, e.g., via corresponding camera control signals (arrow CC20), to collect the digital images (arrow 25). The collected digital images (arrow 25) may be communicated or transmitted over transfer conductors and/or wirelessly to the processor(s) 52 for execution of the various digital image processing steps embodying the method 50.
When selectively enhancing the digital images (arrow 25), the processor(s) 52 of the ECU 50C execute instructions embodying the method 50, as described in further detail below.
The ECU 50C is depicted schematically in the accompanying figures as including the processor(s) 52, memory 54, and input/output (I/O) circuitry 56.
The memory 54 may take many forms, including but not limited to non-volatile media and volatile media. Instructions embodying the method 50 may be stored in the memory 54 and selectively executed by the processor(s) 52 to perform the various functions described below. The ECU 50C, either as a standalone device or integrated into the digital camera 20 and/or the surgical microscope 16 of
As will be appreciated by those skilled in the art, non-volatile computer readable storage media may include optical and/or magnetic disks or other persistent memory, while volatile media may include dynamic random-access memory (DRAM), static RAM (SRAM), etc., any or all of which may constitute part of the memory 54 of the ECU 50C. The input/output (I/O) circuitry 56 may be used to facilitate connection to and communication with various peripheral devices used during the surgery, inclusive of the digital camera 20, the modulable lighting source 18, and the high-resolution display screen(s) 22 and/or 220. Other hardware not depicted but commonly used in the art may be included as part of the ECU 50C, including but not limited to a local oscillator or high-speed clock, signal buffers, filters, amplifiers, etc.
Still referring to the drawings, localization entails identifying the surgeon's requested area-of-focus within the collected digital image (arrow 25), e.g., via the AI logic 59 and vision-tracking algorithm 58 described herein.
Image enhancement involves the application of various image optimization and enhancement strategies within the identified area-of-focus, e.g., using quantifiable visual image quality metrics. Feedback may be used, e.g., in a closed loop using zoom features of the surgical microscope 16 and/or illumination control of the modulable lighting source 18, to realize the localization and image enhancement functionality. For instance, targeted surgeon voice commands 51 such as “enhance red reflex”, “focus on iris”, “auto-center and auto-zoom on pupil”, or “auto-white on sclera” may trigger corresponding actuation states of a microscope motor and particular digital processing actions when rendering a view of the digital image (arrow 25).
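A minimal sketch of how such utterances could be routed to corresponding actions appears below; the dispatch table mirrors the example commands in the text, while the handler bodies are hypothetical placeholders.

```python
from typing import Callable, Dict

def enhance_red_reflex() -> None:
    print("boost red reflex presentation inside the pupil region")

def focus_on_iris() -> None:
    print("command the microscope focus motor toward the iris plane")

def auto_center_zoom_pupil() -> None:
    print("auto-center the view and zoom on the pupil")

def auto_white_sclera() -> None:
    print("white-balance the image using the sclera as reference")

# Dispatch table keyed by the normalized spoken utterance.
COMMANDS: Dict[str, Callable[[], None]] = {
    "enhance red reflex": enhance_red_reflex,
    "focus on iris": focus_on_iris,
    "auto-center and auto-zoom on pupil": auto_center_zoom_pupil,
    "auto-white on sclera": auto_white_sclera,
}

def dispatch(utterance: str) -> None:
    handler = COMMANDS.get(utterance.strip().lower())
    if handler is not None:
        handler()  # unrecognized utterances are silently ignored here

dispatch("Enhance red reflex")
```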
The ECU 50C may also be configured in one or more embodiments to selectively enhance segmented regions of the patient's eye 30 in the digital image data (arrow 25), e.g., to optimize the red reflex response. As appreciated in the art, a surgeon may wish to detect and evaluate reflective performance of the patient's eye 30 in response to incident light during cataract surgery. The term “red reflex” refers to a detectable reflective phenomenon that normally occurs when light enters the pupil 300 and reflects off of the retina 31 at the posterior of the vitreous cavity 32. Red reflex tests are frequently used by eye surgeons and other clinicians to detect possible abnormalities of the eye's posterior anatomy.
Also detectable via red reflex tests are opacities located along the optical axis 11 of the eye 30, likewise shown in the accompanying figures.
The present automated strategy may be illustrated by way of four representative use cases: (i) enhancing the red reflex within the imaged pupil 300 through digital and/or physical means while maintaining a ‘normal’ view of the sclera 400, (ii) automatically centering and focusing the surgical microscope 16 on a surgeon-selected region of the eye 30, (iii) adjusting illumination color to improve contrast in the region of the iris 350 during MIGS, and (iv) balancing large intensity variations during vitreoretinal surgery, with each use case described in turn below.
Use Case #1: as noted above, the red reflex is the light reflected back from the patient's eye 30 when incident light enters the pupil 300 and reflects off of the retina 31. During cataract surgery, the surgeon may wish to enhance this red reflex response within the imaged pupil 300 while maintaining a ‘normal’ view of the surrounding sclera 400.
The ECU 50C as contemplated herein may proceed as follows: (i) using the AI logic 59, the segmentation algorithm 50 (method 50), and the vision-tracking algorithm 58, segmenting the digital image (arrow 25) into the pupil 300 and its surrounding regions, (ii) digitally enhancing the red reflex response within the segmented pupil 300, e.g., by adjusting pixel intensity and color balance, and (iii) selectively adjusting the modulable lighting source 18 and/or the constituent pixels located outside of the pupil 300 so as to maintain the ‘normal’ view of the sclera 400.
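For illustration only, the following sketch approximates such a pipeline using OpenCV, assuming the pupil can be approximated as the dominant circle in the frame; the Hough-circle pupil estimate and fixed red gain are simplifying assumptions, not the disclosed AI logic 59.

```python
import cv2
import numpy as np

def enhance_red_reflex(frame_bgr: np.ndarray, red_gain: float = 1.4) -> np.ndarray:
    """Illustrative Use Case #1 sketch: boost the red channel inside a
    circular pupil estimate while leaving the surrounding view untouched.
    Expects a 3-channel BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, gray.shape[0],
                               param1=100, param2=30,
                               minRadius=20, maxRadius=gray.shape[0] // 3)
    if circles is None:
        return frame_bgr  # no pupil candidate found; return unmodified view
    x, y, r = np.round(circles[0, 0]).astype(int)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.circle(mask, (x, y), r, 255, thickness=-1)  # filled pupil disk
    out = frame_bgr.astype(np.float32)
    inside = mask.astype(bool)
    out[inside, 2] = np.clip(out[inside, 2] * red_gain, 0, 255)  # BGR: 2 = red
    return out.astype(np.uint8)
```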
Use Case #2: the surgical microscope 16 of the accompanying figures may be automatically centered and focused on a surgeon-selected region of the patient's eye 30, e.g., in response to the targeted voice commands 51 noted above, such as “focus on iris” or “auto-center and auto-zoom on pupil”.
The method 50 proposed herein in non-limiting Use Case #2 may include performing the following process steps via the ECU 50C: (i) segmenting the digital image (arrow 25) of the patient's eye 30 into its constituent anatomical regions, (ii) identifying the surgeon-selected region via the AI logic 59 in response to the input signals (CC50 and/or CC60), and (iii) commanding the microscope motor noted above to auto-center and auto-focus the surgical microscope 16 on the identified region.
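A minimal sketch of a region-restricted autofocus loop is shown below, assuming a contrast-based sharpness metric (variance of the Laplacian) and callable `capture_frame`/`move_focus` interfaces to the camera and microscope motor; all of these interfaces are hypothetical.

```python
import cv2
import numpy as np
from typing import Callable

def focus_score(frame_bgr: np.ndarray, mask: np.ndarray) -> float:
    """Sharpness within the selected region: variance of the Laplacian,
    a common contrast-based focus measure."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    return float(lap[mask.astype(bool)].var())

def autofocus(capture_frame: Callable[[], np.ndarray],
              move_focus: Callable[[int], None],
              mask: np.ndarray, n_iter: int = 20) -> float:
    """Hill-climb the (assumed) focus motor in whichever direction raises
    the in-region focus score; stop at a local optimum."""
    best = focus_score(capture_frame(), mask)
    for _ in range(n_iter):
        improved = False
        for step in (+1, -1):
            move_focus(step)
            score = focus_score(capture_frame(), mask)
            if score > best:
                best, improved = score, True
                break
            move_focus(-step)  # revert the unhelpful move
        if not improved:
            break
    return best
```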
Use Case #3: in minimally invasive glaucoma surgery (MIGS), certain MIGS devices are inserted into the patient's eye 30 in close proximity to the trabecular meshwork, which in turn is located between the cornea 301 and the iris 350 (see the accompanying figures).
The method 50 proposed herein may be implemented by the ECU 50C according to the following steps: (i) segmenting the view through the surgical microscope 16 into different anatomical regions, in particular within the iris 350 and its boundaries in this example, (ii) estimating the color of the iris 350 and its adjacent region, e.g., using machine vision capabilities or inputs from the surgeon, and (iii) adjusting the color of the illumination from the modulable lighting source 18 to increase contrast between the iris 350 and the adjacent region.
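One simple way such a color adjustment could be estimated is sketched below: average the segmented iris color and pick its RGB complement as the illumination tint. The complement-based heuristic is an illustrative assumption; a deployed system could instead operate in a perceptual color space.

```python
import numpy as np

def mean_color(frame_rgb: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Average RGB color within a segmented region, e.g., the iris."""
    return frame_rgb[mask.astype(bool)].mean(axis=0)

def contrasting_light(region_rgb: np.ndarray) -> np.ndarray:
    """Illumination tint opposite the region color: the RGB complement,
    rescaled to full brightness."""
    comp = 255.0 - np.asarray(region_rgb, dtype=np.float32)
    return np.clip(comp * (255.0 / max(float(comp.max()), 1.0)), 0.0, 255.0)

# e.g., a brown iris (~RGB [110, 70, 50]) yields a blue-leaning tint
print(contrasting_light(np.array([110.0, 70.0, 50.0])))
```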
Use Case #4: during vitreoretinal surgery, the posterior chamber of the patient's eye 30 is illuminated by directed light, such as light from a light pipe/endoilluminator. Illumination in this manner creates a large intensity variation between regions within the displayed view. For some procedures such as an air-fluid exchange, which temporarily replaces fluid within the eye 30 with air to maintain its shape, strong specular reflection and diffusive glare are common.
A representative glare region is illustrated in the accompanying figures as a highlight 65 located within an imaged fundus region 370, with a comparatively dark region 360 appearing elsewhere in the displayed view.
The method 50 when performed during representative Use Case #4 may proceed via the following process steps: (i) detecting different intensity regions such as the dark region 360, the highlight 65, and the fundus region 370, and thereafter estimating statistics of a pixel intensity distribution within each of the constituent regions, (ii) using the calculated intensity statistics to define a tone mapping function for the digital image (arrow 25), and (iii) applying the tone mapping function to the digital image (arrow 25) to compress the displayed intensity range, thereby reducing glare from the highlight 65 while preserving detail within the dark region 360.
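The following sketch illustrates one plausible form of such a statistics-driven tone mapping, assuming per-region boolean masks keyed "dark" and "highlight"; the specific gamma heuristic is an assumption for illustration.

```python
import numpy as np

def tone_map(gray: np.ndarray, masks: dict) -> np.ndarray:
    """Build a global gamma curve from per-region intensity statistics so
    a bright glare highlight is compressed while shadow detail survives."""
    stats = {name: float(gray[m.astype(bool)].mean())
             for name, m in masks.items()}
    # Heuristic: the wider the highlight-to-dark spread, the stronger the
    # compressive gamma (gamma < 1 lifts shadows, flattens highlights).
    spread = stats["highlight"] - stats["dark"]
    gamma = float(np.interp(spread, [50.0, 200.0], [0.9, 0.5]))
    norm = gray.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)
```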
Referring now to the flowchart of the accompanying figures, the method 50 may proceed as a sequence of process blocks, as described below.
Beginning with block B52, the method 50 includes illuminating the patient's eye 30 with light from a modulable lighting source, e.g., the light (arrow LL) and modulable lighting source 18 of the accompanying figures.
The method 50 also includes collecting digital image data of the patient's eye 30. Block B52 may entail collecting a digital image or multiple images (arrow 25) via the digital camera 20 while the eye 30 is illuminated by the light (arrow LL) and tracked via the motion tracking logic. The method 50 then proceeds to block B54.
As appreciated in the art, full spectrum white light uses the full wavelength range of human-visible light, conventionally defined as about 380 nanometers (nm) to about 700 nm. In addition to wavelength, visible light/white light is often described in terms of its color temperature using descriptions such as “warm white”, “daylight white”, and “cool white”. Color temperature is generally expressed in degrees Kelvin (° K), with warm white light in particular typically referring to light having a color temperature of less than about 4000° K. Such light falls predominantly within the orange and red ranges of full spectrum light. In contrast to warm white light, cool white light has a higher color temperature of about 5500° K to 7000° K or more, and is often dominated by blue light. Daylight white light falls somewhere between the conventionally defined color temperature limits of warm white light and cool white light. Any or all such varieties of white light may be used to illuminate the eye 30 in one or more of the embodiments contemplated herein.
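These approximate thresholds can be summarized in a trivial classifier, shown below for illustration; the cutoff values simply restate the ranges described above.

```python
def classify_white(color_temp_k: float) -> str:
    """Bucket a color temperature (in kelvin) into the white-light
    categories described above, using the text's approximate limits."""
    if color_temp_k < 4000.0:
        return "warm white"      # orange/red-dominant output
    if color_temp_k < 5500.0:
        return "daylight white"  # between the warm and cool limits
    return "cool white"          # blue-dominant output

print(classify_white(3200.0))  # -> warm white
print(classify_white(6500.0))  # -> cool white
```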
At block B54, the method 50 includes receiving the input signals (CC50 and/or CC60) via the ECU 50C during the ophthalmic procedure, with the input signals including a request to enhance an area-of-focus of the digital image (arrow 25). The method 50 then proceeds to block B55.
Block B55 entails determining, via the ECU 50C, whether the input signals (CC50 and/or CC60) from block B54 correspond to a request for a particular area-of-focus, or a change to a previously requested area-of-focus. The method 50 proceeds to block B56 when the surgeon has requested or changed the area-of-focus, and repeats block B52 in the alternative when the surgeon has not requested or changed the area-of-focus.
At block B56, the ECU 50C identifies and segments corresponding pixels of the area-of-focus in response to the input signals (CC50 and/or CC60) of blocks B54 and B55, e.g., using the artificial intelligence (AI) logic 59 of the ECU 50C and the vision-tracking algorithm 58 of the accompanying figures.
As appreciated by those skilled in the art, the control actions performed by the ECU 50C in block B56 may include using edge detection techniques to identify telltale discontinuities in pixel intensity corresponding to edges of areas of interest in the image, or the use of feature matching via neural networks or other prior-trained models of the patient's eye 30. When the regions of interest have distinctive shapes or contours, the ECU 50C may use an optional Hough transform or other application-suitable technique to detect such features. Likewise, the pixel color, brightness, or other characteristics may be segmented into different regions to enable identification of the area-of-focus desired by the surgeon. The method 50 then continues to block B58.
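As one non-limiting illustration of the pixel-brightness segmentation mentioned here, the sketch below clusters grayscale intensities with k-means; assigning the darkest cluster to the pupil and the brightest to the sclera is a simplifying assumption for a well-lit anterior view.

```python
import cv2
import numpy as np

def segment_by_brightness(frame_bgr: np.ndarray, k: int = 3) -> dict:
    """Sketch of block B56-style segmentation: cluster pixels by brightness
    so dark (pupil), mid (iris), and bright (sclera) regions separate into
    labeled boolean masks."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    samples = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(gray.shape)
    order = np.argsort(centers.ravel())  # darkest -> brightest clusters
    return {"pupil": labels == order[0],
            "iris": labels == order[1],
            "sclera": labels == order[-1]}
```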
Block B58 includes selectively adjusting respective characteristics of the modulable lighting source 18 and/or constituent pixels of the digital image (arrow 25) located outside of the area-of-focus. This action occurs via the ECU 50C in response to the input signals (CC50 and/or CC60). Block B58 may include enhancing the constituent pixels of the area-of-focus from block B56 to differentiate the area-of-focus from the surrounding area and enhance the resulting image presentation. Block B58 may also include transmitting display control signals (arrow CC22) to the one or more display screens 22 and/or 220 to thereby present the enhanced digital image 19 of the patient's eye 30.
In the exemplary red reflex scenario of Use Case #1 described above, for instance, the resulting enhanced digital image 19 presents an optimized red reflex response within the pupil 300 while maintaining the ‘normal’ view of the surrounding sclera 400.
As those of ordinary skill in the art will understand, various features illustrated and described with reference to any one of the figures can be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combinations of features illustrated provide representative embodiments for typical applications. Various combinations and modifications of the features consistent with the teachings of this disclosure, however, could be desired for particular applications or implementations.
Certain terminology may be used in the following description for the purpose of reference only, and thus is not intended to be limiting. For example, terms such as “above” and “below” refer to directions in the drawings to which reference is made. Terms such as “front,” “back,” “fore,” “aft,” “left,” “right,” “rear,” and “side” describe the orientation and/or location of portions of the components or elements within a consistent but arbitrary frame of reference which is made clear by reference to the text and the associated drawings describing the components or elements under discussion. Moreover, terms such as “first,” “second,” “third,” and so on may be used to describe separate components. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.
The detailed description and the drawings are supportive and descriptive of the disclosure, but the scope of the disclosure is defined solely by the claims. While some of the best modes and other embodiments for carrying out the claimed disclosure have been described in detail, various alternative designs and embodiments exist for practicing the disclosure defined in the appended claims. Furthermore, the embodiments shown in the drawings or the characteristics of various embodiments mentioned in the present description are not necessarily to be understood as embodiments independent of each other. Rather, it is possible that each of the characteristics described in one of the examples of an embodiment can be combined with one or a plurality of other desired characteristics from other embodiments, resulting in other embodiments not described in words or by reference to the drawings. Accordingly, such other embodiments fall within the framework of the scope of the appended claims.
The present application claims the benefit of priority to U.S. Provisional Application No. 63/614,706 filed Dec. 26, 2023, which is hereby incorporated by reference in its entirety for all purposes.