The present invention is in the field of medical applications, and relates to a retinal imaging system and method.
Retinal imaging systems typically utilize a fundus camera, which images the rear of the eye through the pupil and typically uses illumination and imaging optics with a common optical path. During the imaging procedure, the fundus camera is operated by an operator (specialist), being a technician or physician, as the case may be. The operator has to align and focus the fundus camera properly on the patient's pupil. To this end, the patient's head is kept steady in the chinrest and headrest of the fundus camera; the operator first assesses the “field of the eye”, and then moves the camera from side to side to ascertain the width of the pupil and the focusing peculiarities of the particular cornea and lens. The operator inspects the eye through the camera lens, moving the camera back and forth and up and down, looking for fundus details (e.g., retinal blood vessels), and then determines the single best position from which to acquire images of the retina. A working distance, being a distance between the pupil and the fundus camera along the optical axis of the camera, should also be properly adjusted: if the camera is too close to the eye, a bright, crescent-shaped light reflex appears at the edge of the viewing screen or a bright spot appears at its center, and if the camera is too far away, a hazy, poorly contrasted image results.
The procedure of camera location adjustment is time consuming, requiring involvement of a skilled operator, and also requiring the patient's patience while keeping his/her head steady in the chinrest and headrest of the fundus camera.
Various techniques have been developed for semi-automatic or automatic alignment/positioning of fundus camera and/or automatic focus adjustment of the fundus camera, and are described for example in the following patent publications: JP2010035728; US2008089480; CN110215186.
There is a need in the art for a novel approach to professional retinal imaging, enabling the use of a self-operated or at least semi-autonomous imaging system that combines automatic alignment, positioning and focusing with safety control functions to perform efficient retinal imaging. Also, such a system should preferably be configured for self-calibration.
Such a self-operated and fully or partially autonomous retinal imaging system is particularly useful for eye checks of large numbers of people, who are usually attentive. The system provides the results of eye inspection in an almost automatic fashion. These results (image data) can be further processed/analyzed using AI and Deep Learning methodology in order to reduce human (physician) involvement in the process. It should be understood that such an autonomous system is aimed at screening large segments of the population, enabling automatic processing of image data to discern people with various retinal and systemic diseases.
As described above, a retinal imaging system typically includes a fundus camera mounted on a camera support assembly movable along at least vertical and two horizontal axes. As will be described further below, the system may utilize rotation of the fundus camera (or at least the optics therein) about one or more axes. Also, the fundus camera module typically includes a face cradle unit. In the conventional systems of the kind specified, a support plane, on which the fundus camera support and the face cradle unit are mounted, is a horizontal plane, and the face cradle unit includes chinrest and headrest elements to keep the patient's head steady during the imaging session.
The inventors have found that such a conventional configuration is less comfortable for a patient/user, who has to properly place his/her face and keep it at a target position (fixation position) during retinal imaging; moreover, such a configuration is practically unsuitable for autonomous or semi-autonomous system implementation. Thus, in some embodiments of the invention, the fundus camera assembly is configured such that an optical axis (central axis of field of view) of the fundus camera is tilted with respect to the horizontal plane, and a face support surface of a face cradle is appropriately tilted (e.g., is substantially perpendicular to the optical axis of the fundus camera), allowing the user to position his/her face such that it rests freely on the face support surface of the face cradle (avoiding any chinrest element) with the user's eyes pointing generally forward and downwards towards the field of view of the fundus camera.
The invention also preferably provides the use of a face contact frame projecting from the face support surface; this allows making the face contact frame from a suitably elastic and flexible material composition (e.g., rubber, silicone, etc.), making the entire procedure more comfortable for the user. The face contact frame (whether elastic/flexible or not) may be removably mountable on/attachable to the face cradle, thus enabling it to be disposable or replaceable and easy to disinfect.
In some embodiments, the system of the invention provides for automatically adjusting the position of the face cradle unit with respect to the fundus camera. Such adjustment might be needed to adapt the procedure to a specific user/patient. A typical example is that users of different heights might require tuning of the face cradle position.
To this end, the system includes a face cradle unit position controller, and the face cradle unit is associated/equipped with a positioning mechanism (movement mechanism) controllably operable by operational data provided by the controller to automatically adjust the position of the face cradle, e.g., based on estimated user data, such as the user's height.
More specifically, the face cradle position controller is configured and operable to analyze image data of a scene including the region of interest, acquired by an imaging module, to detect the user's face in the image and estimate one or more user parameters/conditions (e.g., height) relative to standard average expected values of the respective parameter/condition, and generate (if needed) position adjustment data for the movement mechanism of the face cradle unit. The latter utilizes this data to automatically adjust the face cradle position, i.e., its height with respect to the camera's field of view.
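By way of a non-limiting illustration only, the logic of such a face cradle position controller may be sketched in Python as follows. The face detector is a standard OpenCV cascade; the scene scale (millimeters per pixel), the nominal eye-level row, the dead band, and the `move_vertical_mm` motor interface are hypothetical assumptions introduced for the sketch, not features of any specific embodiment:

```python
# Minimal sketch of face-cradle height adjustment from scene image data.
# Assumes a calibrated scale (mm per pixel) and a hypothetical motor interface.

import cv2

MM_PER_PIXEL = 0.5          # assumed scene calibration constant
NOMINAL_EYE_Y_PX = 240      # assumed eye-level row for an average-height user

def estimate_eye_level(scene_gray):
    """Detect the user's face and return the approximate eye-level row (pixels)."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(scene_gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    return y + int(0.4 * h)   # eyes lie roughly 40% down the face box

def adjust_cradle(scene_gray, cradle_motor):
    """Generate position adjustment data and drive the cradle movement mechanism."""
    eye_y = estimate_eye_level(scene_gray)
    if eye_y is None:
        return  # no face detected; leave the cradle where it is
    offset_mm = (eye_y - NOMINAL_EYE_Y_PX) * MM_PER_PIXEL
    if abs(offset_mm) > 2.0:                       # dead band to avoid hunting
        cradle_motor.move_vertical_mm(-offset_mm)  # hypothetical interface; sign
                                                   # convention is assumed
```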
It should be understood that for a self-operable retinal imaging system, in which various mechanical parts, or at least the fundus camera itself, are to be automatically moved with respect to the user's face while the face is at a registered position (e.g., resting freely on the face cradle and looking at a so-called “fixation target”), it is important to provide high-degree safety functionality as well as high-degree self-calibration functionality. Thus, according to the invention, the retinal imaging system includes an imaging module configured and operable to generate image data enabling registration of a line of sight (LOS) of the user's eye at the user's eye target position, i.e., a fixation position or registration position enabling the fundus camera to be brought to its alignment position with the user's line of sight; and includes a sensing system configured and operable to monitor the user's face position in the dedicated cradle (and possibly also with respect to a predetermined registration position relative to the face contact frame) and also monitor a distance between the fundus camera and the user's face. The sensing system is associated with (connected to) a safety controller, which is responsive to the sensing data to monitor the degree of safety in the relative position between the user's face and the fundus camera.
The image data generated by the imaging module and the sensing data generated by the sensing system, as well as the sensing data analysis provided by the safety controller, are properly used to operate a positioning and alignment system of the fundus camera to bring and keep the fundus camera at an operative position such that its optical axis substantially coincides with the line of sight of the user's eye and a working distance to the user's face is maintained. When the data analysis result indicates that the alignment of the optical axis and/or the working distance condition(s) appear(s) to be breached (either one of them does not satisfy the predetermined requirement), the system operates to stop the retinal imaging process and avoid any movement within the system.
With regard to the self-calibration requirement, it should be noted that self-calibration is a process requiring reading of sensing data from the sensing system, where such sensing data relates to physical measures, such as distances, motor step size to linear dimension (e.g., millimeters) conversion, pixel to millimeters conversion, etc. Since there are moving parts in the self-operable system of the present invention, self-calibration becomes more important in order to avoid increased positioning error over time.
To achieve the user's eye target position, the imaging system includes a specifically designed fixation target (e.g., an image or pattern) exposed to the user when his/her face is properly positioned on the face cradle. Practically, the system provides instructions to the user (audible and/or visual instructions). It should be understood that the autonomous or semi-autonomous system of the invention is suitable for use by people who are usually attentive.
The imaging module acquires images of the user's face, eyes, and irises, e.g., using IR illumination to detect the eye pupil, and generates corresponding image data indicative of a relative orientation of the line of sight of the user's eye (while at the user's eye target position) with respect to the optical axis of the fundus camera, enabling the fundus camera to be moved to the aligned position at which its optical axis substantially coincides with the line of sight of the user's eye.
Detection of the eye pupil is typically performed by video-based eye-trackers. A camera focuses on one or both eyes and records eye movement as the viewer looks at some kind of stimulus. Some of the known eye-trackers detect the center of the pupil and utilize infrared/near-infrared non-collimated light to create corneal reflections, such that a vector between the pupil center and the corneal reflections can be used to determine a reference point on a surface or the eye gaze direction. To this end, a simple calibration procedure of the user is usually needed before using the eye tracker. Suitable eye-tracking techniques based on infrared/near-infrared illumination include the techniques known as bright-pupil and dark-pupil techniques, differing from one another in the location of the illumination source with respect to the light directing optics: with the illumination source coaxial with the optical path, the eye acts as a retroreflector creating a bright pupil effect (similar to red eye); with the illumination source offset from the optical path, the pupil appears dark. Bright-pupil tracking creates greater iris/pupil contrast, allowing more robust eye-tracking with all iris pigmentations, greatly reduces interference caused by eyelashes and other obscuring features, and allows tracking in lighting conditions ranging from total darkness to very bright. Eye-tracking techniques are generally known, and although the system of the present invention may utilize any of the known suitable eye-tracking techniques, these do not form part of the invention and therefore need not be described in more detail.
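For illustration only, a dark-pupil detection step of the kind mentioned above may be sketched as follows (Python/OpenCV). The threshold and pupil-size limits are assumed values that would be tuned per illumination setup; this is a sketch of the general technique, not of the invention's specific eye tracker:

```python
# Illustrative dark-pupil detection on an IR eye image (OpenCV).
# Threshold and size limits are assumed values, tuned per setup.

import cv2

def find_pupil_center(ir_eye_image):
    """Return (x, y) pupil center in pixels, or None if not found."""
    if ir_eye_image.ndim == 3:
        gray = cv2.cvtColor(ir_eye_image, cv2.COLOR_BGR2GRAY)
    else:
        gray = ir_eye_image
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    # Under off-axis IR illumination the pupil appears dark (dark-pupil technique)
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs of plausible pupil size (assumed area limits)
    candidates = [c for c in contours if 100 < cv2.contourArea(c) < 5000]
    if not candidates:
        return None
    pupil = max(candidates, key=cv2.contourArea)
    (x, y), _radius = cv2.minEnclosingCircle(pupil)
    return (int(x), int(y))
```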
Thus, according to one broad aspect of the present invention, there is provided a self-operable retinal imaging system comprising:
a fundus camera having a focusing mechanism;
an imaging module configured for imaging the user's face and eyes and providing image data indicative of a relative orientation between an optical axis of the fundus camera and a line of sight of the user's eye at the user's eye target position;
a position and alignment system configured and operable to utilize the image data indicative of said relative orientation for positioning the fundus camera at an operative position such that the optical axis substantially coincides with the line of sight of user's eye, to enable focusing the fundus camera on the retina;
a sensing system comprising one or more sensors, configured and operable for monitoring a user's face position with respect to a predetermined registration position and generating corresponding sensing data; and a safety controller configured and operable to be responsive to the sensing data, and upon identifying that the user's face position with respect to the predetermined registration position corresponds to a predetermined risk condition, generating a control signal to the position and alignment system to halt movements of the fundus camera.
It should be noted that the user's eye is to be brought to the fixation target position, corresponding to a predetermined orientation of the user eye's line of sight with respect to at least one predetermined target exposed to the user. In particular, such target position corresponds to intersection of the user eye's line of sight with a predetermined target (e.g., a pattern) presented by the fundus camera.
The system may include a calibration mechanism configured and operable to perform self-calibration of the system. The self-calibration is aimed at detecting relative accommodation of an optical head of the fundus camera with respect to the user's eye, and determining a distance (typically on a millimeter scale) that the optical head is to be moved and a direction of such movement. To this end, calibration targets are used, which are internal system targets such as two-dimensional element(s) and/or color patterns and/or QR codes, used for scene analysis in the vicinity of a region of interest.
Thus, the system utilizes fixation target(s) which are presented to the user in order to modify his line of sight orientation (move his eye to the requested position) so that he will gaze in a specific direction (in order to capture a different part of his retina). The system may further use calibration target(s) of a different type for the scene analysis, i.e., determining whether and how the position of the optical head is to be tuned relative to the user's eye location.
It should be understood that such self-calibration might be needed periodically or prior to each inspection stage, in order to avoid increased positioning error that might occur over time. A calibration controller receives and analyzes the sensing data indicative of physical measures, such as distances, conversion of a motor step size to linear dimension (e.g., millimeters), pixel to millimeters conversion, etc., and identifies whether the target position has changed from a nominal one, in order to take this change into account for proper positioning of the fundus camera. Such self-calibration is more important in the self-operable system, which utilizes moving parts.
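A minimal sketch of such a pixel-to-millimeter self-calibration step is given below, assuming (purely for illustration) that a QR code of known physical side length, here taken as 20 mm, serves as the calibration element:

```python
# Sketch of pixel-to-millimeter self-calibration against a QR-code target
# of known physical size (the 20 mm side length is an assumed value).

import cv2
import numpy as np

QR_SIDE_MM = 20.0  # assumed physical side length of the calibration QR code

def mm_per_pixel(scene_image):
    """Estimate the millimeter-per-pixel scale from the detected QR code."""
    detector = cv2.QRCodeDetector()
    found, corners = detector.detect(scene_image)
    if not found or corners is None:
        return None  # calibration element not visible in the scene
    corners = corners.reshape(-1, 2)  # four corner points
    # Average the lengths of the four sides of the QR code in pixels
    side_px = np.mean([np.linalg.norm(corners[i] - corners[(i + 1) % 4])
                       for i in range(4)])
    return QR_SIDE_MM / side_px
```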
For example, the system may utilize two targeting stages aimed at different purposes, which may be implemented using common or different targets. The first targeting stage is aimed at the system self-calibration by image processing, and the second targeting stage is aimed at tracking the user's eye in order to enable capturing different retinal areas/regions.
The self-calibration is performed by image processing and may, for example, be implemented using the target or physical elements serving as calibration element(s). Such elements may include one or more of the following: QR codes, color patterns, physical 2D or 3D shapes, etc. The calibration element(s) may be arranged within the system beside the face cradle or on the fundus camera, or on a rear panel, or anywhere within the system packaging.
Generally speaking, during the imaging session (by the fundus camera), the user may be asked/instructed to look at a small target presented by the fundus camera in order to capture different retinal areas. The natural eye movement that tracks the target directs the view line to the desired retinal area.
The retinal imaging system is associated with (i.e. includes or is connectable to) a control system which comprises inter alia a position controller configured and operable to be responsive to the image data and the sensing data to generate position and alignment data to said position and alignment system to perform controllable movements of the fundus camera to bring the fundus camera to the operative position; and a movement controller configured and operable to be responsive to the sensing data and to the control signal from the safety controller to operate the position and alignment system to halt the movements of the fundus camera.
The safety controller may be configured and operable to analyze the sensing data from one or more sensors of the sensing system indicative of a distance between the user's face and the fundus camera, to enable generation of said control signal upon identifying a change in said distance corresponding to the risk condition. Preferably, such one or more sensors providing the distance data comprise at least one ultrasound sensor.
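By way of illustration, the distance-based safety check may be sketched as follows; the `read_mm` and `halt_all_motion` interfaces and the numeric thresholds are hypothetical assumptions, not part of any specific embodiment:

```python
# Sketch of the distance-based safety check. The sensor read-out and the
# halt interface are hypothetical; thresholds are illustrative only.

WORKING_DISTANCE_MM = 40.0   # assumed nominal fundus-camera working distance
SAFETY_MARGIN_MM = 10.0      # assumed maximum allowed approach beyond that

def safety_check(ultrasound_sensor, position_system):
    """Halt all camera movement if the face (or an obstacle) is too close."""
    distance_mm = ultrasound_sensor.read_mm()          # hypothetical API
    if distance_mm < WORKING_DISTANCE_MM - SAFETY_MARGIN_MM:
        position_system.halt_all_motion()              # hypothetical API
        return False   # risk condition: obstacle closer than allowed
    return True
```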
The position and alignment system comprises: a first driving mechanism operable in accordance with the alignment data for moving the fundus camera to a vertically aligned position of the optical axis corresponding to a vertical alignment with the user's pupil; a second driving mechanism operable in accordance with the alignment data for moving the fundus camera to a laterally aligned position of the optical axis corresponding to substantial coincidence of the optical axis with the line of sight; and a third driving mechanism operable in accordance with the sensing data and focal data of the fundus camera for moving the fundus camera along the optical axis to position a focal plane of the focusing mechanism at the retina of the user's eye. In some embodiments, the positioning system may be further configured for rotating the fundus camera in at least one plane.
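For illustration, the conversion of a measured pupil offset into per-axis drive commands may be sketched as follows, assuming that the millimeter-per-pixel scale and steps-per-millimeter constants were obtained during self-calibration (the values shown are assumptions):

```python
# Sketch of converting a measured pupil offset (pixels) into per-axis
# motor commands. Scale and steps-per-mm values are assumed calibration data.

MM_PER_PIXEL = 0.05                              # assumed, from self-calibration
STEPS_PER_MM = {"x": 200, "y": 200, "z": 100}    # assumed drive constants

def alignment_steps(dx_px, dy_px, dz_mm):
    """Return motor steps for the lateral (x), vertical (y) and axial (z) drives."""
    dx_mm = dx_px * MM_PER_PIXEL
    dy_mm = dy_px * MM_PER_PIXEL
    return {
        "x": round(dx_mm * STEPS_PER_MM["x"]),   # second driving mechanism
        "y": round(dy_mm * STEPS_PER_MM["y"]),   # first driving mechanism
        "z": round(dz_mm * STEPS_PER_MM["z"]),   # third driving mechanism
    }
```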
The system comprises a registration assembly for registering a position of the user's face with respect to the fundus camera. The registration assembly comprises a support platform defining a general support plane tilted with respect to a horizontal plane and carrying a face cradle defining a face support surface for supporting the user's face at the registered position during imaging, such that the user's eyes look generally forward and downwards towards the fundus camera during retinal imaging. The face cradle preferably comprises a face contact frame projecting from said face support surface. The face contact frame may be made from an elastic and flexible material composition. Alternatively or additionally, the face contact frame may be removably attachable to said face support surface to be disposable or replaceable.
The sensing system may comprise one or more sensors on the face cradle for monitoring a degree of contact of the user's face with the face support surface. Such one or more sensors on said face cradle may include at least one of the following: at least one pressure sensor, at least one proximity sensor, or at least one IR sensor. Generally, one or more pressure sensors may be used to monitor the contact of the user's face with the face support surface. In some examples, at least three sensing elements may be used, located at three spaced-apart locations to monitor the degree of contact at respective at least three contact points with the face cradle.
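A minimal sketch of such three-point contact monitoring, under the assumption of a normalized pressure read-out interface and an illustrative contact threshold, may look like this:

```python
# Sketch of face-contact monitoring with three spaced-apart pressure sensors.
# The sensor interface and the pressure threshold are assumptions.

CONTACT_THRESHOLD = 0.2   # assumed normalized pressure indicating contact

def face_contact_ok(sensors):
    """Return True only if all three contact points register sufficient pressure."""
    readings = [s.read_normalized() for s in sensors]   # hypothetical API
    return len(readings) >= 3 and all(r > CONTACT_THRESHOLD for r in readings)
```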
The imaging module comprises one or more cameras (a pixel matrix detector) and is configured and operable to acquire images of a region of interest, enabling either naïve-approach image processing or direct 3D image acquisition to be performed. Thus, the imaging module may include at least two 2D imagers (cameras) with intersecting fields of view, or a 3D imager, to generate the image data which is indicative of (allows determination of) the user's eye line of sight orientation with respect to the optical axis of the fundus camera. The camera(s) of the imaging module may be standalone unit(s) properly located with respect to the face cradle and the fundus camera and/or may be attached to/integral with the fundus camera.
In some embodiments, a single 2D camera can be used in combination with physical element(s) (e.g., a target or calibration element), and this arrangement is calibrated to extract 3D positioning data of the optical system and the scene. Such physical elements can be QR codes, color patterns, physical 2D or 3D shapes, etc., positioned on the optical head and/or at various positions within the system packaging. In this implementation, there is no need to explicitly extract 3D data from image data; rather, 3D positioning data can be estimated using the size of the physical element(s) and perspective analysis (the elements' positioning and occlusion).
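Such perspective analysis may, for example, rely on the pinhole-camera relation Z = f·W/w, which relates depth to the apparent size of an element of known width. A minimal sketch under assumed focal-length and marker-width values:

```python
# Sketch of estimating depth from the apparent size of a physical element of
# known width, using the pinhole relation Z = f * W / w. Values are assumed.

FOCAL_LENGTH_PX = 800.0    # assumed camera focal length in pixels
MARKER_WIDTH_MM = 30.0     # assumed physical width of the calibration element

def depth_from_marker(marker_width_px):
    """Estimate distance (mm) to the marker from its width in the image."""
    if marker_width_px <= 0:
        return None
    return FOCAL_LENGTH_PX * MARKER_WIDTH_MM / marker_width_px
```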
The retinal imaging system preferably also comprises a user interface utility configured and operable to provide position and target instructions to the user. The position and target instructions correspond to registration of, respectively, the user's face position and orientation of the line of sight, and may comprise at least one of audio and visual instructions.
Preferably, an illumination system is provided, configured and operable to provide diffused (soft) light within a region of interest where the user's face is positioned during imaging by the fundus camera. Also preferably, the diffused (soft) light has a color temperature substantially not exceeding 4500 K. It should be noted that NIR illumination of about 780-940 nm can be used, e.g., for pupil detection. The illumination intensity/power is selected to be sufficient for the 2D imager operation.
In some embodiments, the system comprises a triggering utility configured and operable to be responsive to the alignment data and the distance data from the position controller and movement controller to generate a triggering signal to the fundus camera upon identifying that the alignment data and the distance data satisfy an operational condition.
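By way of illustration, the operational condition evaluated by such a triggering utility may be sketched as a simple predicate over the alignment error and the measured distance (the tolerances and the nominal working distance are assumed values):

```python
# Sketch of the triggering condition: fire the fundus camera only when both
# the alignment error and the working distance are within tolerance.

ALIGN_TOL_MM = 0.5          # assumed lateral/vertical alignment tolerance
DISTANCE_TOL_MM = 1.0       # assumed working-distance tolerance
WORKING_DISTANCE_MM = 40.0  # assumed nominal working distance

def should_trigger(align_err_mm, distance_mm):
    """Return True when the operational condition for image capture is met."""
    return (abs(align_err_mm) <= ALIGN_TOL_MM and
            abs(distance_mm - WORKING_DISTANCE_MM) <= DISTANCE_TOL_MM)
```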
The retinal imaging system may be associated with a control system, which is generally a computerized system including inter alia a data processor and analyzer; it may be a part of the fundus camera or of a separate computer system configured and operable for data communication (e.g., wireless communication) with the imaging module, the sensing system, the positioning and alignment system, and the fundus camera.
The control system may be further configured to apply AI and Deep Learning processing to the image data provided by the fundus camera to identify people with various retinal and systemic diseases, and to generate data indicative of the patient's retinal condition and general health condition. Alternatively, or additionally, the control system may be configured for data communication with a central station to transmit data indicative of the retinal image data obtained by the fundus camera to the central station for recording and further processing using AI and Deep Learning methodology, to determine the patient's retinal condition and health condition based on the image data obtained by the fundus camera. Generally, various functional utilities of the data processing software may be properly distributed between the control system associated with the fundus camera and a remote (central) data processing station. Such a central station may receive image data from multiple retinal imaging systems, configured according to the invention, and analyze such multiple measured data pieces to optimize the AI and Deep Learning algorithms. Typically, the data processor may be associated with (have access to) a database storing various retinal image data pieces in association with corresponding retinal conditions and patient health conditions.
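Purely as a non-limiting sketch of the data flow (not of any specific model or methodology of the invention), inference over a retinal image might look as follows in Python/PyTorch; the model file, label set and input size are hypothetical:

```python
# Minimal inference sketch for retinal image classification. The model file,
# labels, and input size are hypothetical; this only illustrates the data flow.

import torch
from torchvision import transforms
from PIL import Image

LABELS = ["healthy", "diabetic_retinopathy", "glaucoma_suspect"]  # hypothetical

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_retina(image_path, model_path="retina_classifier.pt"):
    """Run a (hypothetical) pretrained classifier over one retinal image."""
    model = torch.jit.load(model_path)   # TorchScript model, trained elsewhere
    model.eval()
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return LABELS[int(logits.argmax(dim=1))]
```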
According to another broad aspect of the invention, there is provided a retinal imaging system comprising a face cradle and a fundus camera, wherein: the fundus camera is configured such that its optical axis is tilted with respect to a horizontal plane; and the face cradle defines a tilted face support surface for supporting a user's face in a freely resting state with the user's eyes looking forward and downwards towards a field of view of the fundus camera.
The fundus camera is associated with a position and alignment system configured as described above, enabling movement of the fundus camera with respect to said face cradle along at least three axes.
The face cradle preferably comprises a face contact frame projecting from said face support surface. The face contact frame may be made from an elastic and flexible material composition. Alternatively, or additionally, the face contact frame may be removably attachable to said face support surface so as to be disposable or replaceable.
In order to better understand the subject matter that is disclosed herein and to exemplify how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Referring to
Data indicative of retinal images are properly stored and can be accessed by a physician for on-line or off-line analysis. For example, the stored data can be transmitted to a central computer station and be accessed from a remote device via a communication network using any known suitable communication techniques and protocols. As described above, the image data can be processed using AI and Deep Learning techniques.
The system 100 includes such main parts as a fundus camera 104, an imaging module 112, a sensing system 116, a position and alignment system 120, a safety controller 144, and a control system 128. The fundus camera 104 is typically positioned in association with a face cradle unit 136.
The configuration may be such that the face cradle is equipped with a movement mechanism which is controllably operable to move the cradle unit enabling automatic adjustment of its position to meet the requirements for a specific user/patient (e.g. take into account user's height difference from an average or nominal value).
Although not shown in this schematic illustration, the fundus camera and the face cradle may be mounted on a common support platform. As will be described further below, the invention also provides a novel configuration for the support platform.
As described above, the invention is aimed at providing a self-operable retinal imaging system which provides user-safety and effective retinal imaging. During the retinal imaging session, the user is requested/instructed to bring his face and eyes to a target position, by positioning his face on the face cradle and pointing his view to a target image presented by the fundus camera.
The imaging module 112 includes at least one imaging unit, which includes one or more imagers configured and operable to acquire images of the user's face, eyes, irises and possibly also pupils (e.g., using an appropriate eye tracking or eye and gaze tracking technique) and generate corresponding image data. As described above, the imaging module 112 may include one or more additional imaging units adapted for imaging the scene including a region of interest outside the fundus camera field of view and generating corresponding “external” image data, which can be used for self-calibration purposes. Hence, the image data ID from the imaging module 112 may also be used for the self-calibration of the system, which may be implemented using the calibration target(s) in the form of QR codes, color patterns, physical 2D or 3D shapes, etc. Further, while at the user's eye target position (as described above), the image data ID indicative of a relative orientation of an optical axis OA of the fundus camera with respect to the line of sight LOS of the user's eye is analyzed. As described above, the targets used at the self-calibration and imaging stages may or may not be the same.
Analysis of the image data ID is used to operate the position and alignment system 120 for positioning the fundus camera 104 at an operative position with proper alignment of the optical axis OA of the fundus camera 104, such that it substantially coincides with the line of sight LOS of the user's eye while at said target position, and, while at the aligned position, to operate a focusing mechanism 108 of the fundus camera 104 to focus the fundus camera on the retina. To this end, the position and alignment system 120 is configured and operable for moving the fundus camera 104 along three axes with respect to the user's eye while at said user's eye target position.
The sensing system 116 is configured and operable for monitoring a relative position between a user's face 150 and the fundus camera 104 and generating corresponding sensing data SD. The sensing data is received and analyzed by a safety controller 144 to properly generate a control/alarm signal. Also, both the sensing data (or results of sensing data analysis) and the image data are used by the control system 128 to initiate (trigger) the retinal imaging session by the fundus camera and monitor the progression of the imaging session.
The control system 128 is a computer system including inter alia data input and output utilities, memory, and a data processor and analyzer. The data processor and analyzer comprises a position controller utility 124 (typically in software) configured and operable to be responsive to the image data ID from the imaging module 112 to generate position and alignment data PAD to the position and alignment system 120 to control the movements of the fundus camera to bring the fundus camera to the operative position. The position controller 124 also includes a calibration utility 125 configured and operable to utilize the image data to generate operational data to the position and alignment system to bring the fundus camera to the operational position.
As mentioned above, the face cradle may be associated with a movement mechanism enabling automatic adjustment of its position. To this end, the same position controller 124, or a separate controller of the control system 128, may be configured and operable to generate movement data to operate the movement mechanism of the face cradle to implement controllable movement of the face cradle to automatically adjust the position of the face cradle.
Such face cradle position controller may be responsive to image data ID from an imager, which may be that of the imaging module 112 or a separate imager (one or more 2D cameras), adapted to image a scene in the vicinity of a region of interest (i.e., the vicinity of the face cradle) to identify the user's face in the image and generate corresponding estimated user data, e.g., the user's height relative to a standard average expected height. Based on this estimate, the controller generates position adjustment data including movement data indicative of a movement required to be performed by the face cradle to automatically bring the face cradle to the proper position in association with a specific user, i.e., adjust the face cradle height with respect to the camera's field of view.
Also, the data processor and analyzer includes a movement controller 132 (typically in software) configured and operable to be responsive to the sensing data SD from the sensing system 116, to properly control the movement of the fundus camera to keep the required and safe working distance, and to be responsive to signals from the safety controller 144. Hence, when the safety controller identifies that a predetermined risk condition exists/appears in the relative position between the user's face and the fundus camera, it generates a corresponding control signal CS to the movement controller 132, which operates the position and alignment system to halt any movement of the fundus camera.
The safety controller 144 may be a separate processing unit or may be part of the control system 128. The safety controller is preprogrammed to determine whether position data, as well as movement data indicative of a predicted change in the position of the fundus camera relative to the user's face, has arrived at or is approaching a critical value corresponding to a risk condition, and to properly generate the control signal CS. It should also be noted that the safety controller may utilize the sensing data to identify a change in the user's face position with respect to the face cradle and generate a corresponding control/alarm signal, which may initiate generation of predetermined instructions to the user, together with or independently of the respective operation of the position and alignment system.
As also exemplified in the figure, the control system 128 includes a data processor 127 configured and operable to receive retinal image data RID from the fundus camera unit 104 and process this data to determine whether it is indicative of a specific anomaly (disease). To this end, the data processor 127 is configured to apply AI and Deep Learning processing to the image data RID and utilize/access a predetermined database storing various retinal image data pieces in association with corresponding retinal conditions (and corresponding individual health conditions). Alternatively, or additionally, the control system 128 may be configured for data communication with a central station 129 to transmit the raw data including the retinal image data RID obtained by the fundus camera to the central station, or to transmit to the central station data indicative of the retinal image data resulting from some preprocessing performed by the data processor 127, for further processing at the central station using AI and Deep Learning techniques. The retinal image data RID and/or results of the processing of such data may be recorded at the control system 128 and/or at the central station 129. As described above, the central station 129 may be configured for communication with a plurality of retinal imaging systems, and may analyze data received from these to optimize the AI and Deep Learning algorithms as well as update the central database.
Referring to
In a next step, the image data and the sensing data, while being continuously provided, are continuously analyzed by a data processor and analyzing utility of the control system (step 208). The image data is initially indicative of the user's face position with respect to the face cradle and also with respect to the fundus camera (i.e., a relative orientation of the line of sight of the user's eye, while pointing to the target, and the optical axis of the fundus camera, i.e., along the x- and y-axes), and possibly also indicative of a distance between the user's face and the fundus camera. The sensing data is indicative of the proper contact between the user's face and the face cradle, and also of a distance between the user's face and the fundus camera. It should be understood that the distance determination may be performed in a double-check mode using both the image data of the imaging module and the sensing data of the sensing system.
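The double-check mode may be illustrated by a simple consistency test between the image-derived distance estimate and the ultrasound reading; the disagreement tolerance shown is an assumed value:

```python
# Sketch of the double-check on working distance: compare the image-derived
# estimate with the ultrasound reading and flag disagreement. Values assumed.

MAX_DISAGREEMENT_MM = 5.0   # assumed tolerance between the two estimates

def distance_double_check(image_distance_mm, ultrasound_distance_mm):
    """Return the fused distance, or None if the two sources disagree."""
    if abs(image_distance_mm - ultrasound_distance_mm) > MAX_DISAGREEMENT_MM:
        return None   # inconsistent readings: treat as a potential risk condition
    return (image_distance_mm + ultrasound_distance_mm) / 2.0
```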
The image data analysis may include generation of position adjustment data for the face cradle unit in association with a specific user/patient, in order to operate a movement mechanism of the face cradle unit to automatically adjust the position of the face cradle unit with respect to the fundus camera (step 225).
The image and sensing data analysis includes navigation/guidance data generation for the position and alignment system, and a risk condition analysis/prediction to identify, while controlling position and movement steps, whether such navigation approaches a risk condition (step 210). With regard to the navigation procedure, it should be noted that the position and alignment data analysis provides for bringing the fundus camera to the proper operational position, i.e., the position of alignment of the optical axis of the fundus camera with the user's eye line of sight, and positioning of the so-aligned fundus camera at a required working distance from the user's eye. When the control system identifies such a proper operational position of the fundus camera, a triggering signal is generated which actuates auto-focus and auto-illumination managed by the fundus camera, using any suitable auto-focusing technique, e.g., that typically used in imaging systems including fundus cameras. However, it should be noted that these auto-focus and auto-illumination processes are triggered (capture triggered) by the control system upon identifying that the fundus camera, while being navigated, approaches the fundus camera working distance. From the point at which the system triggers the fundus camera, all its operations are fully automatic (focus, illumination, image processing, etc.).
If during navigation, or later during the fundus camera operation (imaging session), a risk condition is identified, the control/alarm signal is generated (step 212) and movements (and possibly also operation) of the fundus camera are halted (step 250). Such a risk condition may be associated with exaggerated proximity of the fundus camera to the user's eye, and/or the user's face moving from the registered position, and/or insertion of hands or other objects in between the face cradle and the fundus camera. All such unsafe situations can be properly detected by the sensing system (e.g., ultrasound sensor(s)), which determines the distance between the fundus camera and the face cradle and detects an obstacle at a distance below the working distance. It should also be understood that the imaging module, i.e., the camera(s), can also detect any change towards a risk condition, thus performing together with the sensing system a double check to keep the system operating safely.
As long as safety is maintained, i.e., a risk condition is not identified, the process continues with generating operational data (step 216) and performing the retinal imaging process (step 240). As the retinal imaging session proceeds, respective instructions are provided to the user for directing the user's gaze towards the field of view of the fundus camera (e.g., towards the target) and maintaining the user's face position and gaze (e.g., by instructing the user to keep the eyes open). The method performs the above steps iteratively until the retinal imaging process is completed consecutively for the two eyes.
Reference is made to
As shown in
The image data can thus be used to identify whether the user's face is properly positioned and if not enable generation of instructions to the user; and identify whether the user is looking onto the target, and if not enable generation of instructions to the user. Also, the image data can be used by face cradle position controller 133 to determine whether and how the position of the face cradle 136 is to be adjusted, via movement mechanism 137, to bring the user's face to proper position with respect to the camera field of view and/or registration target.
Further, the image data is used to determine required movements of the fundus camera along x- and y-axes in the plane perpendicular to the optical axis of the fundus camera (and possibly also along the optical axis, or z-axis) to bring the fundus camera to the operative position with respect to the user's eye.
The system 300 further includes a sensing system 116 associated with a safety controller 144, configured and operable as described above with reference to
As described above, and not specifically shown in
Further provided in the retinal imaging system 300 is a position and alignment system 120 including appropriate drive mechanisms performing displacement of the fundus camera with respect to the face cradle. Generally, the drive mechanisms provide movement of the fundus camera along three perpendicular axes, including two axes, x- and y-axes in the plane perpendicular to the optical axis of the camera and the z-axis being the optical axis of the fundus camera. It should be noted that an additional drive mechanism may be provided for rotation or pivotal movement of the fundus camera or at least its optical axis.
It should be noted that in the description the x- and y-axes are at times referred to as, respectively, horizontal and vertical axes. However, as mentioned above and as will be described more specifically further below, the support plane supporting the fundus camera and the face cradle may be tilted with respect to the horizontal plane. In this case the x- and y-axes are respectively parallel and perpendicular to the support plane, and these terms should be interpreted and understood accordingly. Generally, the configuration may be such that the optical axis of the fundus camera, i.e., its field of view, is oriented at a certain angle (tilted) with respect to the horizontal plane, “looking” in a generally forward and upward direction, and the face cradle is configured such that, when the user's face is fixed on the face cradle, the user's field of view is oriented generally forward and downwards towards the field of view of the fundus camera.
The position and alignment system 120 is operated by the operational data provided by the control system for bringing the fundus camera to an operative position (via navigation of its movements based on the analysis of the image and sensing data) such that the optical axis of the fundus camera substantially coincides with the line of sight of the user's eye, while at said target position and at the required working distance from the fundus camera, so as to keep the level of safety and enable focusing the fundus camera on the retina. As shown in the figure, the control system 128 is provided in data communication with the imaging module 112, the safety controller 144, possibly also directly with the sensing system 116, and in data communication with the position and alignment system 120. The control system 128 is configured and operable as described above with reference to
It should be noted, although not specifically shown in the figure, that the retinal imaging system 300 may include or may be used with an illumination system configured and operable to provide diffused (soft) light and/or NIR illumination within a region of interest where the user's face is positioned during imaging by the fundus camera. The diffused (soft) light preferably has an appropriate color temperature, e.g., substantially not exceeding 4500 K, and proper illumination intensity.
It should be understood that, generally, the fundus camera and the face cradle may or may not be mounted on the same physical surface, but the orientations of the user's gaze and the optical axis of the fundus camera are to be considered with respect to a predetermined general plane. Hence, the common support plane 410 may or may not be constituted by a physical surface. In this non-limiting example this is achieved by placing the fundus camera 104 and the face cradle 136 on a tilted surface 410 (defining the general support plane) of a wedge element 414. This configuration allows the face cradle 136 to define a face support surface 136A properly inclined with respect to a vertical plane, such that the user's face can be positioned on said surface 136A, resting freely on the face support surface with the user's eyes pointing generally forward and downwards towards the optical axis of the fundus camera (while looking at the target).
As also schematically illustrated in the example of
Although in this specific non-limiting example of
As shown schematically in
The face support surface has an appropriate optical window 504 (e.g., an opening) allowing imaging of the user's eyes via the optical window. As also shown in the figure, the face cradle 500 may, for example, include a face contact frame 506 located on and projecting from the face support surface 502. The face contact frame 506 may be removably mountable on/attachable to the face cradle 500. Also, the face contact frame 506 may be made from a suitably elastic and flexible material composition (e.g., rubber, silicone, etc.), making the entire procedure more comfortable for the user and providing an ergonomic and more stable position during the imaging session. The face cradle may be equipped with one or more sensing elements; three such sensing elements S1, S2 and S3 are shown in this specific non-limiting example. It should be understood, although not specifically shown, that the imaging module may be integral with/mounted on the fundus camera housing or may be a separate unit appropriately located to acquire the images of the user's face, eye, iris and pupil. Also, the safety controller, as well as the control system, may be integral with the fundus camera housing or may be standalone device(s) connectable to the respective devices/units of the system as described above.
Thus, the present invention provides a novel configuration of the self-operable retinal imaging system, enabling a user to perform retinal imaging without a need for highly-skilled operator, and actually any operator's assistance, owing to the high-degree safety functionality of the system. The retina images may be stored in a memory of the control system, to be accessed by a skilled person for analysis; and/or may be communicated to an external control station. The invention also provides a novel face cradle configuration, as well as a novel configuration of an integral retina imaging system.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IL2021/050021 | 1/6/2020 | WO |
Number | Date | Country
---|---|---
62957484 | Jan 2020 | US