The present application claims priority to and the benefit of Swiss Patent Application No. 070746/2021, filed Dec. 20, 2021, which is incorporated herein by reference in its entirety.
The present disclosure relates to an ophthalmological treatment device for determining a rotation angle of an eye of a person.
Treatment of a human eye, for example using laser ablation, depends critically on a correct alignment between the eye and the laser such that the correct areas of the eye are treated. The basis of the treatment is defined by a treatment model, which is used to control the laser during treatment. The treatment model is established based on measurements of the eye taken using a diagnostic device.
While setting up the laser for treatment, and also during treatment, it is important to detect an actual position and an actual direction of gaze of the eye such that the laser performs ablation according to the treatment model. In addition, a rotation of the eye, including, for example, a torsion of the eye about an axis (known as cyclotorsion) is important to account for, in particular for astigmatic eyes.
Some eyes experience varying degrees of cyclotorsion when the visual axis of the eye moves from a horizontal orientation, as is typical when the person is upright, to an inclined orientation, for example when the person is lying down.
Further, a rotation of the head can also lead to a rotation of the eye.
This presents a challenge during eye treatment, as treatment of the eye, for example using laser ablation, must account for the cyclotorsion for best results, in particular for eyes with astigmatism. This is because the treatment model is established based on measurements of the eye taken using a diagnostic device with the person in an upright position, whereas the treatment is usually performed with the person in a supine position.
Known methods for measuring and accounting for a rotation of the eye, in particular a cyclotorsion, include manually marking the eyeball of the person when the person is in the upright position and realigning the treatment model according to the mark.
U.S. Pat. No. 8,708,488 B2 describes a method comprising comparing images recorded before surgery with images recorded during surgery in order to generate a marker which represents a target orientation. An orientation that compensates for a rotation of the eye is determined using image comparison directed to structures of the sclera of the eye, in particular structures of blood vessels.
U.S. Pat. No. 7,331,767 B2 describes a method for aligning diagnostic and therapeutic iris images, via iris pattern recognition, for effecting more accurate laser treatment of the eye. In an example, an iris landmark is identified and tracked across a sequential plurality of diagnostic iris images of varying pupil size. The aligned, constricted-pupil diagnostic image can then be aligned with a constricted-pupil treatment image and the ablation pattern rotated accordingly. Limbal edge detection is used in the diagnostic images to provide pupil center translation information for translational alignment of the laser treatment. Iris pattern recognition is performed by identifying iris patterns using markers (artificial) or landmarks (natural).
U.S. Pat. No. 9,936,866B2 describes a method of receiving a first optical data set of an eye with a pupil having a first pupil size, and receiving a second optical data set of the eye with the pupil having a second pupil size. A pseudo-rotation related to a pupil size change is determined, a measured cyclotorsion is received, an actual cyclotorsion is calculated from the measured cyclotorsion and the pseudo-rotation, and a laser treatment is adjusted according to the actual cyclotorsion.
U.S. Pat. No. 7,044,602 B2 describes methods and systems for tracking a position and torsional orientation of a person's eye, comprising selecting at least one marker on the iris of the eye in a first image. A corresponding marker is located on the iris in a second image. The first image of the eye and the second image of the eye are registered by substantially matching a common reference point in the first and second images and matching the marker on the iris in the first image with the corresponding marker on the iris in the second image.
These and other methods have drawbacks, such as requiring both the diagnostic image(s) and the images recorded during treatment to be taken under known conditions (e.g. lighting conditions) using known equipment, and/or requiring manual intervention such as manually marking the sclera of the eye. Other drawbacks include relying on particular features of the eye being detectable, for example the blood vessels in the sclera, requiring multiple diagnostic images and/or images taken directly before treatment, or requiring a computation involving an iterative optimization to find the cyclotorsion angle.
Disclosed is an ophthalmological treatment device for determining a rotation angle of an eye of a person which overcomes one or more of the disadvantages of the prior art.
In particular, the disclosure provides an ophthalmological treatment device and method for determining a rotation angle of an eye of a person.
According to embodiments of the present disclosure, advantages are achieved by an ophthalmological treatment device comprising a processor and a camera for determining a rotation of an eye of a person. The processor is configured to receive a reference image of the eye of the person. The reference image is an image of the eye of the person, recorded at a prior time by a separate diagnostic device while the person was in an upright position, e.g. sitting on a chair. The processor is configured to record, using the camera, a current image of the eye of the person in a reclined position, e.g. sitting back or lying horizontally. The processor is configured to determine a rotation angle of the eye, using a direct solver, by comparing the reference image to the current image.
In an embodiment, the direct solver includes a software application, an algorithm, and/or a function. The direct solver includes, depending on the implementation, one or more software libraries and/or data-sets. Depending on the embodiment, the direct solver is implemented on the processor of the ophthalmological treatment device and/or on a computer connected to the ophthalmological treatment device, in particular connected via one or more communication networks.
In an embodiment, the processor is further configured to control the ophthalmological treatment device using the rotation angle.
In an embodiment, the processor is further configured to rotate a treatment pattern of a laser by the rotation angle, which treatment pattern is configured for the eye of the person. In a preferred embodiment, the rotation angle includes a cyclotorsion angle.
In an embodiment, the direct solver is configured to determine the rotation angle using a pre-defined number of computational operations, preferably within a pre-defined number of milliseconds.
In an embodiment, the processor, in particular the direct solver, is configured to determine the rotation angle of the eye by: identifying one or more non-local features of the reference image and one or more non-local features of the current image, and matching the one or more identified non-local features of the reference image to the one or more identified non-local features of the current image, respectively. Non-local features refer to features (e.g. patterns or structures) in an image which exist at scales larger than local features. Local features, by way of contrast, include points or edges, or pixels (or small groups thereof) of a particular colour or having a particular contrast gradient. Local features may also refer to a sub-sample of the image having a particular texture, colour, or intensity. Local features, as opposed to non-local features, are distinctive, in that they are either present in a distinctive sub-sample of an image or not present at all. Further, local features are localizable in that they have a unique location in the image associated with the feature. Non-local features, by contrast, do not have a defined location. They may, however, have properties such as an orientation and/or a scale. Examples of non-local features include an iris colour, radial stripes in an iris, and a texture of an eye.
In an embodiment, the processor is configured to determine the rotation angle of the eye by applying a pre-determined sequence of signal processing filters to both the entire reference image and to the entire current image.
In an embodiment, the pre-determined sequence of signal processing filters comprises one or more of: a convolutional operator, an activation function, or a pooling function.
In an embodiment, the pre-determined sequence of signal processing filters is part of a neural network.
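By way of non-limiting illustration, the following Python (PyTorch) sketch shows how such a pre-determined sequence of signal processing filters might be composed and applied identically to both entire images; the layer sizes and image sizes are assumed values:

```python
import torch
import torch.nn as nn

# Illustrative pre-determined sequence of signal processing filters
# (convolutional operator -> activation function -> pooling function);
# layer and image sizes are assumed values.
filter_sequence = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # convolutional operator
    nn.ReLU(),                                    # activation function
    nn.MaxPool2d(kernel_size=2),                  # pooling function
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
)

# The same fixed sequence is applied to the entire reference image and to
# the entire current image (dummy tensors stand in for the images here).
reference_image = torch.rand(1, 1, 256, 256)
current_image = torch.rand(1, 1, 256, 256)
reference_features = filter_sequence(reference_image)
current_features = filter_sequence(current_image)
```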
In an embodiment, the neural network is trained to determine the rotation angle using supervised learning. The neural network is trained using a training dataset. The training dataset comprises a plurality of training reference images, a plurality of corresponding training current images, and a plurality of corresponding pre-defined rotation angles, respectively. Each training reference image has an associated training current image and a pre-defined rotation angle. Thereby, the neural network is trained to generate a rotation angle when given a training reference image and a training current image as an input, the rotation angle generated being within a small error margin of the actual pre-defined rotation angle.
In an embodiment, the ophthalmological treatment device comprises a first neural network and a second neural network both having identical architecture and parameters. The first neural network is configured to receive the reference image as an input and to generate a reference image output vector. The second neural network is configured to receive the current image as an input and to generate a current image output vector. The processor is configured to determine the rotation angle using the reference image output vector, the current image output vector, and a distance metric.
In an embodiment, the processor, in particular the direct solver, is configured to determine the rotation angle by generating a reference image output vector using the reference image and the pre-determined sequence of signal processing filters. The processor is configured to determine the rotation angle by generating a current image output vector using the current image and the pre-determined sequence of signal processing filters. The processor is configured to determine the rotation angle by determining a distance between the reference image output vector and the current image output vector using a distance metric.
In an embodiment, the processor is further configured to pre-process the images. Pre-processing comprises: detecting the iris of the eye in the reference image and/or the current image, in particular detecting an edge between the iris and the pupil of the eye; detecting scleral blood vessels of the eye in the reference image and/or the current image; detecting the retina of the eye in the reference image and/or the current image; identifying a covered zone in the reference image, which covered zone is a part of the eye covered by the eyelid; unrolling the reference image and/or the current image using a polar transformation; rescaling the reference image and/or the current image according to a detected pupil dilation in the reference image and/or the current image; image correcting the reference image and/or the current image, in particular by matching an exposure, a contrast, or a color; and/or resizing the reference image and/or the current image such that the reference image and the current image have a matching size.
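As a non-limiting example, the following sketch shows three of these pre-processing steps (rescaling according to a detected pupil dilation, matching exposure and contrast by histogram matching, and resizing to a matching size) for grayscale images using OpenCV and scikit-image; the function name and the pre-detected pupil radii are illustrative assumptions:

```python
import cv2
import numpy as np
from skimage.exposure import match_histograms

def preprocess_pair(reference_image, current_image,
                    ref_pupil_radius, cur_pupil_radius, size=(512, 512)):
    """Illustrative pre-processing sketch for grayscale images; the pupil
    radii (in pixels) are assumed to have been detected beforehand."""
    # Rescale the current image according to the detected pupil dilation.
    scale = ref_pupil_radius / cur_pupil_radius
    h, w = current_image.shape[:2]
    current_image = cv2.resize(current_image, (int(w * scale), int(h * scale)))

    # Image-correct the current image by matching its histogram
    # (exposure, contrast) to the reference image.
    current_image = match_histograms(current_image, reference_image)
    current_image = current_image.astype(reference_image.dtype)

    # Resize both images so that they have a matching size.
    return cv2.resize(reference_image, size), cv2.resize(current_image, size)
```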
In an embodiment, the processor is configured to receive a color reference image and/or an infrared reference image. The processor is configured to record, using the camera, a color current image and/or an infrared current image. The processor is, in an embodiment, configured to determine the rotation angle using the color reference image and/or the infrared reference image and the color current image and/or the infrared current image, respectively, using the direct solver.
In an embodiment, the processor is further configured to transmit the current image to a further ophthalmological treatment device 1B.
In an embodiment, the processor is configured to determine whether the reference image and the current image are both images of the same eye, in particular of the same eye of the person.
In addition to an ophthalmological treatment device, the present disclosure also relates to a method for determining a rotation of an eye of a person, comprising a processor of an ophthalmological treatment device performing the step of receiving a reference image of the eye of the person, the reference image having been recorded of the eye of the person in an upright position by a separate diagnostic device. The method comprises recording, using a camera of the ophthalmological treatment device, a current image of the eye of the person in a reclined position. The method comprises determining a rotation angle of the eye by comparing the reference image to the current image, using a direct solver.
In an embodiment, the method further comprises rotating a treatment pattern of a laser by the rotation angle, which treatment pattern is configured for the eye of the person.
In an embodiment, determining the rotation angle of the eye comprises applying a pre-determined sequence of signal processing filters to both the entire reference image and the entire current image.
In an embodiment, the pre-determined sequence of signal processing filters comprises one or more of: a convolutional operator, an activation function, or a pooling function.
In an embodiment, the pre-determined sequence of signal processing filters is part of a neural network.
In an embodiment, the neural network is trained to determine the rotation angle using supervised learning and a training dataset, wherein the training dataset comprises a plurality of training reference images, a plurality of corresponding training current images, and a plurality of corresponding pre-defined rotation angles, respectively.
In an embodiment, the method comprises using a first neural network and a second neural network both having an identical architecture and identical parameters. The first neural network is configured to receive the reference image as an input and generate a reference image output vector. The second neural network is configured to receive the current image as an input and to generate a current image output vector. Determining the rotation angle comprises using the reference image output vector, the current image output vector, and a distance metric.
In addition to an ophthalmological device and a method, the present disclosure also relates to a computer program product comprising a non-transitory computer-readable medium having stored thereon computer program code for controlling a processor of an ophthalmological device to receive a reference image of an eye of a person, the reference image having been recorded of the eye of the person in an upright position by a separate diagnostic device. The program code controls the processor to record, using a camera of the ophthalmological device, a current image of the eye of the person in a reclined position. The program code controls the processor to determine a rotation angle of the eye by comparing the reference image to the current image using a direct solver.
The herein described disclosure will be more fully understood from the detailed description given herein below and from the accompanying drawings, which should not be considered limiting to the disclosure described in the appended claims.
Reference will now be made in detail to certain embodiments, examples of which are illustrated in the accompanying drawings, in which some, but not all features are shown. Indeed, embodiments disclosed herein may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Whenever possible, like reference numbers will be used to refer to like components or parts.
As is schematically represented in
The laser source 16 is configured to generate a pulsed laser beam L. The laser source 16 comprises in particular a femtosecond laser for generating femtosecond laser pulses, which typically have pulse widths of 10 fs to 1000 fs (1 fs = 10⁻¹⁵ s). The laser source 16 is arranged in a separate housing or in a common housing with the focussing optics 51.
The scanner system 17 is configured to steer the pulsed laser beam L delivered by the laser source 16, by means of the focussing optics 51, onto treatment points F in the eye tissue 211 along a treatment pattern t (comprising a laser trajectory). In an embodiment, the scanner system 17 comprises a divergence modulator for modulating the focal depth, or the treatment height, in the projection direction along the projection axis p. The scanner system 17 comprises, for example, a galvanoscanner or a piezo-driven scanner. Depending on the embodiment, the scanner system 17 additionally comprises one or more deflecting mirrors, one or more resonant mirrors, or one or more oscillating mirrors, which are for example piezo-driven or MEMS (Micro-Electro-Mechanical System) driven, or the scanner system 17 comprises an AOM (Acousto-Optical Modulator) scanner or an EOM (Electro-Optical Modulator) scanner.
As is schematically represented in
The focussing optics 51 are configured to focus the pulsed laser beam L, or its laser pulses, onto the treatment points F inside the eye tissue 211 for the pointwise tissue disruption. The focussing optics 51 comprise a lens system having one or more optical lenses. Depending on the embodiment, the focussing optics 51 comprise one or more movable lenses and/or a drive for moving the entire focussing optics 51 in order to set and adjust the focal depth, or the treatment height, in the projection direction along the projection axis p. In a further embodiment, a divergence modulator is provided in the beam path between the laser source 16 and the scanner system 17.
For the treatment and incision of incision surfaces C, C′ which have a lateral component in the x/y treatment plane, normal to the projection direction, that is comparatively larger than the depth component in the projection direction along the projection axis p, the scanner system 17 is configured to displace the treatment points F, onto which the laser pulses are focussed, along the treatment pattern t, t′ with a higher scan speed relative to the focus adjustment speed of the focussing optics 51.
Although reference is made to the incision according to an incision surface C, C′, the treatment pattern t, t′ also relates to treatment of the eye by surface ablation, depending on the embodiment.
As is schematically represented in
In an embodiment, the patient interface 52, in particular the contact surface 53, has a flattened central section and the eye 21 is removably attached to the patient interface 52 by applanation, in which the normally curved surface of the cornea 211 is held in a flattened state against the contact surface 53 of the patient interface 52 by the suction element 54.
In an embodiment, the patient interface 52 does not have a contact surface 53 or a suction element 54 and the treatment takes place without fixing the eye 21. Specifically, the patient interface 52 and the eye 21 are separated by an air gap of several centimetres, for example.
As is schematically represented in
The communication interface 15 is further configured for data communication with one or more external devices. Preferably, the communication interface 15 comprises a network communications interface, for example an Ethernet interface, a WLAN interface, and/or a wireless radio network interface for wireless and/or wired data communication using one or more networks, comprising, for example, a local network such as a LAN (local area network), and/or the Internet.
The skilled person is aware that at least some of the steps and/or functions described herein as being performed on the processor 11 of the ophthalmological device 1 may be performed on one or more auxiliary processing devices connected to the processor 11 of the ophthalmological device 1 using the communication interface 15. The auxiliary processing devices can be co-located with the ophthalmological device 1 or located remotely, for example on a remote server computer.
The skilled person is also aware that at least some of the data associated with the program code (application data) or data associated with a particular patient (patient data) and described as being stored in the memory 14 of the ophthalmological device 1 may be stored on one or more auxiliary storage devices connected to the ophthalmological device 1 using the communication interface 15.
The ophthalmological device 1 optionally includes a user interface comprising, for example, one or more user input devices, such as a keyboard, and one or more output devices, such as a display. The user interface is configured to receive user inputs from an eye treatment professional, in particular based on, or in response to, information displayed to the eye treatment professional using the one or more output devices.
As is schematically represented in
The control module, more particularly the processor 11, determines a rotation angle θ of the eye 21 using a direct solver S, in particular a rotation angle θ in relation to the central axis m of the patient interface 52. The direct solver S is stored in the memory 14.
The rotation angle θ is an angle of rotation of the eye 21 about an axis. The axis is, for example, parallel to a central axis m of the patient interface 52.
As described in more detail with reference to
As described herein in more detail, the treatment pattern t is not rotationally symmetric for each eye, owing to some persons 2 having a degree of astigmatism. Therefore, it is important to account for any rotation of the eye 21, in particular by rotating the treatment pattern t according to the rotation angle θ.
The diagnostic device 3 is configured to record and store the reference image 31 and/or reference interferometric data. The reference image 31 and/or reference interferometric data is then provided to the ophthalmological treatment device 1. For example, the reference image 31 and/or reference interferometric data is transmitted to the ophthalmological treatment device 1 using a data communications network, for example the Internet. Alternatively, the reference image 31 and/or reference interferometric data is stored to a portable data carrier which is then connected to the ophthalmological treatment device 1.
Due to the rotation of the eye by the rotation angle θ, the control module is configured to rotate the incision surface C by the rotation angle θ such that a rotated incision surface C′ is incised in the eye.
In an embodiment, in a preparatory step, the eye 21 of the person 2 is fixed using a patient interface 52 as shown in
In a step S1, the control module, in particular the processor 11, is configured to receive a reference image 31. For example, the processor 11 is configured to receive the reference image 31 from the memory 14 or from an auxiliary memory device via the communication interface 15. The reference image 31 is an image of the eye 21 of the person 2 taken prior to eye treatment, in particular by a diagnostic device 3 as illustrated in
In a step S2, the control module, in particular the processor 11, instructs the camera 12 to record a current image 121 of the eye 21. Depending on the embodiment, one or more current images 121 of the eye are recorded.
In a step S3, the processor 11 compares the current image 121 to the reference image 31 to determine a rotation angle θ of the eye 21 in the current image 121 with respect to the reference image 31. In particular, the processor 11 uses a direct solver S for comparing the images as shown in
By determining the rotation angle θ in a predictable time, the processor 11, in an embodiment, determines the rotation angle θ successively using successive current images 121 recorded by the camera 12, in real-time. This ensures that even if the person 2 shifts, rotates, or otherwise moves their head, the treatment pattern t is adjusted accordingly. Advantageously, this allows for treatment without the patient interface 52 being fixed to the surface of the eye 21.
Further, this may allow the processor 11 to determine the rotation angle θ more quickly than iterative functions/algorithms, resulting in a quicker overall treatment time, as the person 2 does not have to lie down for as long. A quicker treatment is safer because the person 2 has less opportunity to move.
Depending on the embodiment, the direct solver S is implemented as a software application, an algorithm, and/or a function.
In an embodiment, the processor 11 is configured to display, on a display of the user interface of the ophthalmological treatment device 1, the reference image 31 and/or the current image 121 and the rotation angle θ. The processor 11 is configured to receive, via the user interface, user input from an eye treatment professional relating to the rotation angle θ. In particular, the user input comprises an indication to alter the determined rotation angle θ.
Depending on the embodiment, the processor 11 is configured to display the reference image 31 and the current image 121 simultaneously next to each other, preferably using a polar representation (as "unrolled" images), as explained below in more detail.
In an embodiment, the processor 11 is configured to display the reference image 31 and the current image 121 next to each other, e.g. one above the other, rendering the reference image 31 and/or the current image 121 such that both the reference image 31 and the current image 121 are visible. In a preferred embodiment, a polar representation of the reference image 31 and a polar representation of the current image 121 are displayed. The polar representation maps a ring-shaped part of the images 31, 121, which relates to an area around the iris, to a rectangular image, preferably of identical size.
The polar representation is generated, for example, by identifying a center point in the reference image 31 and a center point in the current image 121. The center points preferably correspond to the center of the eye, in particular the pupil, in the respective images 31, 121. The polar representation "unrolls" the images 31, 121, preferably by mapping a radial distance from the center point to a y-coordinate of the polar representation and mapping an azimuthal angle about the center point to an x-coordinate. Preferably, the polar representation of the reference image 31 and/or the current image 121 is displaced according to the rotation angle θ (the rotation angle being mapped to a displacement along the x-axis in the polar representation). If the rotation angle θ determined by the processor 11 is accurate, the displaced polar representation of the reference image 31 and/or the displaced polar representation of the current image 121 will align such that, at least in part, features of the reference image 31 and features of the current image 121 line up, i.e. are present at the same horizontal positions.
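Purely by way of example, a Python sketch of such an unrolling and displacement for grayscale images, using OpenCV's linear polar mapping; the bin counts are assumed values, and the center point and maximum radius are assumed to have been detected beforehand:

```python
import cv2
import numpy as np

def unroll(image, center, max_radius, angular_bins=360, radial_bins=64):
    """Map a ring around the detected center point so that the azimuthal
    angle becomes the x-coordinate and the radial distance becomes the
    y-coordinate (grayscale image; bin counts are assumed values)."""
    # cv2.warpPolar places the angle on the y-axis and the radius on the
    # x-axis, so the result is transposed to match the layout above.
    polar = cv2.warpPolar(image, (radial_bins, angular_bins), center,
                          max_radius, cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
    return polar.T

def displace(unrolled, rotation_angle_deg):
    """Shift the unrolled image along the x-axis according to the rotation
    angle (one column per degree with 360 angular bins)."""
    return np.roll(unrolled, int(round(rotation_angle_deg)), axis=1)
```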
Depending on the indication received as part of the user input, the rotation angle θ is manually updated to an updated rotation angle θ. The transformed reference image 31 and/or the transformed current image 121 are rendered according to the updated rotation angle θ. Thereby, the eye treatment professional is able to fine-tune the rotation angle θ determined by the processor 11 in an iterative and guided interaction. Once the rotation angle θ has been fine-tuned, the eye treatment professional can accept the updated rotation angle θ, which is used for rotating the treatment pattern t to determine a rotated treatment pattern t′ as described below.
In an embodiment, the ophthalmological treatment device 1 is controlled using the rotation angle θ. In particular, the processor 11 is configured to rotate the treatment pattern t by the rotation angle θ, thereby resulting in a rotated treatment pattern t′. The ophthalmological treatment device 1 controls the laser source 16 according to the rotated treatment pattern t′ such that the laser beam L is directed onto one or more treatment points F. In an embodiment, the treatment pattern t, t′ comprises a laser trajectory. The laser trajectory includes, for example, one or more continuous laser paths and/or one or more discrete treatment points F. The treatment pattern t, t′ further includes, depending on the embodiment, one or more laser speeds, one or more laser spot sizes, and/or one or more laser powers.
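As a non-limiting example, the x/y coordinates of a treatment trajectory can be rotated by the rotation angle θ with a plain 2D rotation matrix; representing the pattern as an (N, 2) array of treatment points is an assumption of this sketch:

```python
import numpy as np

def rotate_treatment_pattern(points_xy, theta_rad, center_xy=(0.0, 0.0)):
    """Rotate the x/y coordinates of a treatment trajectory by the
    rotation angle theta about the central axis (sketch; the pattern is
    assumed to be an (N, 2) array of treatment points)."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    rotation = np.array([[c, -s], [s, c]])
    center = np.asarray(center_xy, dtype=float)
    return (np.asarray(points_xy, dtype=float) - center) @ rotation.T + center

# Usage: rotated pattern t' from pattern t and a rotation angle in degrees.
# t_rotated = rotate_treatment_pattern(t_points, np.deg2rad(theta_deg))
```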
In an embodiment, the direct solver S, or a specific image pre-processing function, pre-processes the reference image 31 and/or the current image 121.
The neural network N is configured to receive both the reference image 31 and the current image 121 as inputs and to output the rotation angle θ. The neural network N is configured to preprocess the inputs using a sequence of preprocessing steps. The preprocessing steps are designed to process the reference image 31 and the current image 121 such that particular characteristics of the images match. For example, the preprocessing steps comprise image transformations such as a transformation to polar coordinates and/or color adjustments such as histogram matching. The neural network N then contains convolutional layers, activation functions (ReLU), pooling operations, fully-connected layers, and/or skip connections. In particular, the neural network N comprises two final, dense fully-connected layers configured to directly output the rotation angle θ.
In an embodiment, the neural network N includes a ResNet-34 architecture with two fully-connected layers at the end configured to directly output the rotation angle θ.
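By way of non-limiting illustration, a PyTorch sketch of such an architecture; stacking the image pair along the channel axis, the hidden layer width, and the class name are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class RotationAngleNet(nn.Module):
    """Sketch of the network N: a ResNet-34 backbone followed by two
    fully-connected layers that directly output the rotation angle."""
    def __init__(self):
        super().__init__()
        backbone = resnet34(weights="IMAGENET1K_V1")  # pre-trained weights
        # Accept the reference and current image stacked along the channel
        # axis (an assumption; this layer's pre-trained weights are discarded).
        backbone.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()  # expose the 512-dim backbone features
        self.backbone = backbone
        self.head = nn.Sequential(  # two final, dense fully-connected layers
            nn.Linear(512, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # directly outputs the rotation angle
        )

    def forward(self, reference_image, current_image):
        x = torch.cat([reference_image, current_image], dim=1)
        return self.head(self.backbone(x))
```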
The neural network N is configured to determine the rotation angle θ by identifying non-local features in both the reference image 31 and the current image 121. Non-local features identified in both images 31, 121 are matched and a distance between them is determined. This distance is then used to determine the rotation angle θ.
The neural network N is trained to output the rotation angle θ, for example, in the manner shown in
In a preferred embodiment, the processor 11 executes the neural network N on a GPU and/or a TPU for faster execution.
The Siamese neural networks N1, N2 start with a chain of preprocessing steps as described above, which may include image transformations such as a transformation to polar coordinates and color adjustments such as histogram matching. The neural network architecture for the neural networks N1, N2 then contains convolutional layers, activation functions (ReLU), pooling operations, fully connected layers, and/or skip connections. One or two downstream dense and fully-connected layers, preferably connected directly to the output, perform a low-dimensional embedding (preferably n<100) to generate the reference image output vector V1 and the current image output vector V2.
The final dense and fully-connected layers are trained during the training phase such that the distance obtained using the distance metric is minimized for input image pairs where no rotation is present (between the images of the image pair) and maximized for input image pairs having a large rotation (between the images of the image pair). In a preferred embodiment, the neural networks N1, N2 include a ResNet-34 architecture with two downstream fully-connected layers, preferably connected directly to the output. The reference image output vector V1 and the current image output vector V2 are then used to determine the rotation angle θ using the distance metric as described above.
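For illustration only, a PyTorch sketch of one such Siamese branch; the embedding dimension (n = 64 < 100), the hidden layer width, and the Euclidean distance metric are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class EmbeddingNet(nn.Module):
    """Sketch of one branch N1/N2 of the Siamese pair: a ResNet-34
    backbone with two downstream fully-connected layers performing a
    low-dimensional embedding (n = 64 here, an assumed value < 100)."""
    def __init__(self, embedding_dim=64):
        super().__init__()
        backbone = resnet34(weights="IMAGENET1K_V1")  # pre-trained weights
        backbone.fc = nn.Identity()  # expose the 512-dim backbone features
        self.backbone = backbone
        self.embed = nn.Sequential(  # two downstream fully-connected layers
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, embedding_dim),
        )

    def forward(self, image):
        return self.embed(self.backbone(image))

# Both branches have identical architecture and parameters, so a single
# module instance can serve as N1 and N2:
# net = EmbeddingNet()
# v1, v2 = net(reference_image), net(current_image)
# distance = torch.norm(v1 - v2, dim=1)  # assumed Euclidean distance metric
```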
The neural network N is initialized as an untrained neural network N of a particular architecture with random parameters, e.g. random weights and biases. The untrained neural network is trained using a training dataset comprising a large number of training reference images and associated training current images and training rotation angles, preferably at least 300, more preferably in the order of 1000. It is important that the training dataset comprises a wide variety of lighting conditions as well as different eye shapes and iris colors, to avoid introducing any bias into the detector towards or against different ethnic groups. The training rotation angles are obtained from image pairs where the horizontal reference axis of the eye is marked before the patient lies down in a supine position. The training dataset is then used to train the untrained neural network iteratively using supervised learning to generate the trained neural network N. In particular, the training dataset is segregated into a training subset, a test subset, and a validation subset. Data augmentation using rotation and/or mirroring of the input image pairs is used to enlarge the training set.
The neural network N can be successfully trained using, for example, the Adam optimizer with a learning rate of 3·10⁻⁴. Training is helped significantly by injecting readily available pre-trained weights for a ResNet-34 architecture trained on the ImageNet database as a starting point.
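As a non-limiting example, a sketch of such a supervised training loop using the network sketched above; the regression loss, batch size, epoch count, and stand-in data are illustrative assumptions, while the learning rate matches the value given above:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in tensors; in practice these are the training subset of
# training reference images, training current images, and the
# corresponding pre-defined rotation angles.
refs = torch.rand(8, 3, 224, 224)
curs = torch.rand(8, 3, 224, 224)
angles = torch.rand(8) * 20.0 - 10.0  # assumed angle range in degrees
train_loader = DataLoader(TensorDataset(refs, curs, angles), batch_size=4)

model = RotationAngleNet()  # network N as sketched earlier, pre-trained backbone
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)  # rate from the text
loss_fn = torch.nn.MSELoss()  # an assumed regression loss

for epoch in range(10):  # assumed epoch count
    for ref_batch, cur_batch, angle_batch in train_loader:
        optimizer.zero_grad()
        predicted = model(ref_batch, cur_batch).squeeze(1)
        loss = loss_fn(predicted, angle_batch)  # error vs. pre-defined angle
        loss.backward()
        optimizer.step()
```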
The Siamese neural networks N1, N2 can be successfully trained using image pairs and a contrastive loss function.
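Purely by way of example, a sketch of a contrastive loss of the kind described, which minimizes the embedding distance for image pairs without rotation and maximizes it (up to a margin) for pairs with a large rotation; the margin value and the binary rotation label are illustrative assumptions:

```python
import torch

def contrastive_loss(v1, v2, rotation_present, margin=1.0):
    """Contrastive loss sketch for the Siamese pair: v1, v2 are the two
    output vectors; rotation_present is an assumed 0/1 label per pair
    (0 = no rotation, 1 = large rotation); margin is an assumed value."""
    distance = torch.norm(v1 - v2, dim=1)  # Euclidean distance metric
    pull = (1.0 - rotation_present) * distance.pow(2)  # minimize for no rotation
    push = rotation_present * torch.clamp(margin - distance, min=0.0).pow(2)
    return (pull + push).mean()
```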
Training of the neural networks N, N1, N2 takes place prior to treatment and is necessary only once. The trained neural networks N, N1, N2 are then stored in the memory 14 of the ophthalmological treatment device 1.
The above-described embodiments of the disclosure are exemplary and the person skilled in the art knows that at least some of the components and/or steps described in the embodiments above may be rearranged, omitted, or introduced into other embodiments without deviating from the scope of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---
070746/2021 | Dec 2021 | CH | national |