Automated refractors, or auto-refractors, are instruments designed to quickly measure ocular aberrations, or refractive errors, of the eye. Auto-refractors are commonly used by eye care professionals (ECPs) to assist in determining the eyeglass or contact lens correction numbers of their patients. Historically, auto-refractors were not accurate enough to determine lens correction numbers directly but have instead found use among ECPs as a pre-screening tool prior to manual or subjective refraction. Subjective refraction via phoropter remains the tried-and-true method for reaching the final lens correction numbers, but it is time-consuming, requires an ECP with substantial training, and cannot be accomplished repeatably and with high accuracy by all ECPs.
The accuracy of conventional auto-refraction techniques based on retinal reflex measurement (optometer, Scheiner, retinoscopic, or photo-refractive) is generally limited to varying degrees by the following: (a) gaze misalignment with respect to the sensor optics, (b) poor control over the eye's accommodative state during measurement, (c) limited ability to detect high order aberrations in the eye, (d) inability to detect media opacities, and (e) inability to detect eye disease on the anterior or posterior surfaces that affects visual acuity apart from refractive errors. Issues with the retinal reflex techniques are described in more detail below.
Gaze misalignment is particularly problematic when only one camera is used. In a conventional auto-refractor setup with a single camera sensor, the position probed on the retina will likely not coincide with the location of the fovea due to gaze misalignment between the subject's eye and the sensor's camera. A misaligned eye is essentially rotated in the eye socket, pointing at some angle away from the camera axis. The fovea is where central vision occurs, and probing the refractive error outside the fovea, namely in the periphery, can cause measurement errors in excess of 0.5 diopters. Several strategies exist to coax the subject's gaze toward the optical center of the camera, including displaying visual stimuli and guides, playing audio prompts, or guidance by the ECP, but these are not guaranteed to work due to involuntary eye movement, difficulty understanding instructions, or the subject being uncomfortable or stressed by the device interface.
The influence of high-order aberrations (HOAs) on normal vision generally depends on the diameter of the pupil. A large pupil diameter (typical in dark settings, e.g., driving at night) increases the influence of HOAs on normal vision since more of the area of the eye's optics takes part in image formation. People with significant HOAs (e.g., coma, trefoil, spherical aberration) often report “seeing halos” when looking at lights at night. Detecting the full extent of HOAs can be challenging due to changing pupil diameters and the inability to map the entire refractive state of the eye via conventional auto-refraction techniques. Some auto-refraction systems attempt to adjust refraction readings based on pupil diameter and other meta inputs such as age and gender via empirically derived lookup tables; however, this corrects for population averages only.
The eye uses the ciliary muscle to change its optical power to focus on near and far objects, in a process called accommodation. If the eye is not focused on the desired image plane relative to the sensor camera during measurement, the results may become significantly skewed, sometimes in excess of 1.0 diopters. Conventional auto-refractors have no way of determining or validating whether the patient is focused on the correct target distance during measurement. Some auto-refractors utilize the fogging technique and set the focal plane of the target image optically beyond infinity, thereby relaxing accommodation during measurement. Fogging is best accomplished in systems where the auto-refractor optics are relatively close to the subject's eyes, which means that the instrument touches the subject's face.
Recent advances in auto-refraction technology have begun to address some of the above issues. Shack-Hartmann and other wavefront measurement techniques are well suited to determining HOAs and in some cases have been clinically proven to provide more accurate results than subjective refraction by the average ECP with respect to patients' preferred lens correction values. Up to this point, however, no auto-refraction system has been developed that systematically addresses most of the issues outlined above that lead to limited accuracy.
Additionally, from a usability standpoint, auto-refractors were designed to be operated by ECPs in a clinical setting, not by patients themselves. This effectively forces patients to visit an ECP office to obtain lens correction numbers for new corrective eyewear, which is a significant barrier to properly maintained vision care for the average person due to exam cost and time availability.
There remains a need therefore, for systems and methods for capturing retinal reflex information of the eye using more efficient and more economical processes, and there remains further a need for such systems and methods that are more easily and economically accessed by more people.
In accordance with an aspect, the invention provides a system for capturing diagnostic eye information. The system includes at least one energy source for directing electromagnetic energy into an eye of a subject, a plurality of perception units, each perception unit being associated with an associated position in the visual field of the eye, and each perception unit being adapted to capture refractive information from the eye responsive to the electromagnetic energy, and a processing system for determining refractive error information associated with each position of each perception unit in the visual field of the eye, and for determining refractive error composite information regarding the eye responsive to the refractive error information associated with each perception unit and independent of a direction of gaze of the eye.
In accordance with another aspect, the invention provides a system for capturing diagnostic eye information that includes at least one energy source for directing electromagnetic energy into an eye of a subject, a perception system adapted to capture refractive information from the eye responsive to the electromagnetic energy as well as pupil diameter information representative of a pupil diameter of the eye, and a processing system for determining refractive error information of the eye and associating the refractive error information with the pupil diameter of the eye.
In accordance with a further aspect, the invention provides a system for capturing diagnostic eye information that includes at least one energy source for directing electromagnetic energy into an eye of a subject, a perception system adapted to capture refractive information from the eye responsive to the electromagnetic energy, a partially reflective mirror through which the perception system is directed toward the eye, an object image that is visible to the subject through the partially reflective mirror, a mirror control system for rotating the partially reflective mirror to change an apparent distance of the object image between a first distance and a second distance, and a processing system for determining refractive error information of the eye and associating the refractive error information with any of the first distance and the second distance.
In accordance with a further aspect, the invention provides an automated eye examination system for capturing diagnostic eye information. The automated eye examination system includes an alignment system for providing alignment information regarding an alignment of a subject with respect to an alignment camera system, a diagnostics analysis system for determining refractive error information associated with at least one eye of the subject within a field of view of the diagnostics analysis system, and an alignment correction system for adjusting the field of view of the diagnostics analysis system responsive to the alignment information.
In accordance with a further aspect, the invention provides a method of capturing diagnostic eye information. The method includes directing electromagnetic energy into an eye of a subject, capturing, at each of a plurality of perception units, refractive information from the eye responsive to the electromagnetic energy, each perception unit being associated with an associated position in the visual field of the eye, determining refractive error information associated with each position of each perception unit in the visual field of the eye, and determining refractive error composite information regarding the eye responsive to the refractive error information associated with each perception unit and independent of a direction of gaze of the eye.
The following description may be further understood with reference to the accompanying drawings in which:
The drawings are shown for illustrative purposes only.
In accordance with various aspects, the invention provides a new type of refraction technique and eye exam apparatus that remedies the accuracy limitations described above while simultaneously enabling automated self-measurement by laypersons in or outside clinical settings. The refraction techniques outlined herein are rooted in the physics of the eccentric photo-refractive principle, a subcategory of the retinal reflex techniques. The refraction techniques may be combined with sensors in a kiosk such that fully automatic refraction can be done after the subject pushes a button to start. Novel visual acuity testing techniques are outlined for inclusion or combination with the novel refraction techniques described herein in a self-serve eye testing kiosk in accordance with an aspect of the present invention.
In illustrative implementations of this invention, the above-mentioned problems with conventional refraction, and in particular photo-refraction, are remedied. In particular, in accordance with an aspect, the invention solves the problem of gaze misalignment during photo-refraction. In illustrative implementations, a camera cluster with multiple eccentric light sources performs photo-refraction to measure refractive aberrations in a manner that is independent of the subject's gaze direction. In some implementations, a photo-refractor is coupled with a pupil diameter control system that steers the pupil diameter at a desired rate or sets the pupil diameter to a desired value. Performing photo-refractive measurement at multiple pupil diameters enables the detection of high order aberrations, and setting the pupil diameter to a desired reduced value may limit the influence of high order aberrations. In certain cases, the gaze-independent photo-refraction or the pupil diameter control system is combined with a visual acuity test that is self-administered by the subject. Performing photo-refraction during a visual acuity exam enables improved control over the subject's accommodative state, because the subject's attention is engaged and the eye is focused at a real or virtual far point (for example, 20 ft as per standard Snellen chart distance), or because there is more time to acquire multiple photo-refractive measurements, which may enable the determination of the accommodative state of the subject's eye or both eyes simultaneously. In further cases, the visual target of the exam apparatus may be set to different distances during refraction or visual acuity testing to gain measurements at varying eye focus distances, which may provide additional information about refractive or accommodative states of the eye.
The overall system may be embedded into a self-serve or ECP-guided vision examination kiosk, or may be compact enough for other tabletop or handheld configurations.
In illustrative implementations, a photo-refractor camera cluster with multiple cameras and eccentric sources performs photo-refraction to measure refractive aberrations in a manner that is independent of the subject's gaze direction in accordance with an aspect of the present invention. A camera cluster may be employed to measure the retinal reflex at multiple discrete positions on the retina (probe positions). These probe positions may cover parts of the central and peripheral retinal visual field. The measurement results and relative location of each probed position may be spatially mapped in a scatter plot describing the refractive errors or eye information across the eye's visual field. By fitting a 3-dimensional surface to the map, a continuous curve may be found that models the central and peripheral visual field refractive errors. (This is because, in almost all healthy human eyes, the refractive power changes from central to peripheral vision, and the change is monotonic up to about 25 degrees into the periphery. Due to person-to-person differences in the human eye, the refractive power either increases, remains the same, or decreases monotonically regardless of whether the eye is myopic, emmetropic, or hyperopic.) In illustrative implementations of this invention, each of one or more types of refractive error at central vision is calculated by searching for an extremum of a fitted curve. Which type of extremum (e.g., global minimum, global maximum, local minimum or local maximum) is searched for may depend on, among other things, the type of refractive error and the closeness of the fit for the fitted curve. As discussed herein, photo-refraction may be performed independent of the user's gaze direction, in order to measure refractive error of a subject's central and peripheral vision.
A cluster of cameras may be arranged adjacent to each other, each pointing toward the subject's eyes. The cameras may be positioned along a curved geometric surface (e.g., concave or convex) and may be equidistant from the subject's eyes. Alternatively, the cameras may be positioned on a geometric plane. Each camera may be paired with a combination of eccentric light sources. The light sources may be infrared (IR) LEDs. The cluster configuration may form an acceptance cone between the subject's eye and the cameras, and any gaze angle of the subject's eye within the acceptance cone may result in a valid reading. The gaze angle may be the angle between: (a) the eye's visual axis; and (b) the straight line from the eye to the center of the camera cluster (henceforth the “center axis”). The largest gaze angle that results in a reliable photo-refraction reading may occur when the subject's gaze direction is pointing towards the outermost camera of the camera cluster.
Each camera may be surrounded by multiple energy sources (e.g., IR lights). In some cases, the cameras are spaced at equidistant angles from one another. In some cases, the IR lights are oriented along the same geometric surface as the camera cluster surface, or are parallel to or equidistant from that geometric surface. The IR lights may emit IR light that enters and then exits an eye being tested in the eccentric photo-refraction method. With the IR lights positioned at multiple meridians around the camera, the subject's eyes may be probed for spherical and cylindrical (astigmatic) errors. This may be accomplished by turning each IR light on and off sequentially while recording the double-pass retinal reflex present on the subject's pupil. The retinal reflex may be extracted from the IR light pixel intensities of the pupil on the camera images. The refractive state from each IR light on-off cycle may be calculated using one of the common photo-refractor image-to-diopter conversion methods, such as the intensity slope-based method or the crescent shape method. Conversion to refractive error from the pupil pixel intensities may also be done via image classifiers from trained neural networks, or by other artificial intelligence (AI) image-processing-based classifier techniques. In some cases, the refractive error result is determined by an empirically found lookup table that correlates the value extracted from the pupil pixels via the aforementioned methods to a spherical refractive error (SP: defocus power error in diopters), a cylindrical refractive error (CY: astigmatic power error in diopters), and the angle of cylindrical refractive error (AX: angle of astigmatism in degrees or radians) over a specified range (e.g., −7 to +7 diopters SP or CY).
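The lookup-table conversion step may be sketched as follows; the calibration values below are invented placeholders, not real device data (a real table would be empirically derived for a specific instrument over the stated ±7 diopter range):

```python
import numpy as np

# Hypothetical calibration table relating the extracted pupil-intensity
# slope to spherical error (SP). Values are invented placeholders.
slope_points = np.array([-0.8, -0.4, 0.0, 0.4, 0.8])   # normalized slope
sp_points = np.array([-7.0, -3.0, 0.0, 3.0, 7.0])      # diopters

def slope_to_diopters(slope):
    """Convert a pupil-intensity slope to a spherical refractive error by
    linear interpolation in the calibration table (np.interp clamps to
    the table edges outside the tabulated range)."""
    return float(np.interp(slope, slope_points, sp_points))
```

The same interpolation pattern would apply to tables for CY and AX, or to tables conditioned on meta inputs such as age.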
Each camera and its paired IR lights in the cluster may form an independent photo-refractor that probes a specific position (e.g., a very small region) on the retina's visual field. Light from the IR light source may enter the cornea, then pass through the pupil aperture and the lens, then fall on a small region of the retina, then reflect off the retina (the retinal reflex) back through the lens, pupil aperture, and cornea, and finally arrive at the camera. In this approach, the IR light passes through the pupil twice, which is why it is sometimes called the double-pass reflex or retinal reflex. With a cluster of cameras and IR light sources, multiple reflex positions on the retina may be probed simultaneously or within a single measurement session. The pattern of positions probed on the retina may form the same pattern as the configuration of the cameras in the cluster. Each probe position on the retina may give a refractive error measurement result in SP, CY, and AX, or in higher order aberrations such as trefoil or coma. These results may also be combined into their spherical equivalent value SE in diopters, defined as SE = SP + CY/2 (the AX may be discarded).
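The spherical equivalent combination defined above is a one-line computation; a minimal sketch:

```python
def spherical_equivalent(sp, cy):
    """Spherical equivalent SE = SP + CY/2 (diopters); the cylinder axis
    AX is discarded in this combination, per the definition above."""
    return sp + cy / 2.0

# Example: SP = -2.00 D, CY = -0.50 D gives SE = -2.25 D.
```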
Having the individual components of the refractive error for each probe position enables the calculation of separate discrete spatial maps each describing the SP, CY, and AX result only. Alternatively, a spatial map of the SE of each probed position may be calculated. The spatial map, or “map”, may be a 3D scatter plot with each point having coordinates (X,Y,V), where X and Y are the position on the retina and V is the value of a given probed point SP, CY, AX, or SE. The unit of X and Y may describe a distance (e.g. mm) or angle on the retina (e.g. degrees or radians). The unit of V may describe the refractive error in diopters, or the absolute power in diopters, or the difference in refractive error to a given calibration constant in the SP, CY, or SE maps, where the greater the refractive value the greater the value V. The unit of V may describe the angle in degrees or radians for the AX map.
Using the SP, CY, or SE discrete maps, a three-dimensional curve may be fitted to the points on the map. The curve fitting may be performed on the SP, CY, or SE discrete maps separately and independently, resulting in an SP curve, a CY curve, or an SE curve.
Fitting may be done via a non-linear least squares, linear least squares, least absolute residual, bi-square, polynomial regression, or piece-wise linear regression fitting method. The surface function to fit to the map points may be a predefined polynomial, an nth-order polynomial, a 3D spline, or a 3D surface from a lookup table, forming a continuous spatial map of SP, CY, or SE values. The maps may describe the components of the central and peripheral refractive errors of the eye spatially. A curve may be fitted to one of these maps, and an extremum of the fitted curve may provide a refractive error of central vision (at the fovea).
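As one illustration of the fit-and-extremum approach, a low-order quadratic surface can be fitted to a discrete (X, Y, V) map by linear least squares, and its stationary point taken as the central-vision estimate. The quadratic model is an assumption chosen for simplicity; the text permits higher-order polynomials, splines, or lookup-table surfaces:

```python
import numpy as np

def fit_quadratic_map(x, y, v):
    """Least-squares fit of V ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    to a discrete refractive-error map (SP, CY, or SE values)."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coef

def surface_extremum(coef):
    """Location (X, Y) and value V of the fitted surface's stationary
    point, taken as the central-vision (foveal) estimate."""
    a, b, c, d, e, f = coef
    # Set the gradient of the quadratic to zero and solve.
    x0, y0 = np.linalg.solve([[2 * d, e], [e, 2 * f]], [-b, -c])
    v0 = a + b * x0 + c * y0 + d * x0**2 + e * x0 * y0 + f * y0**2
    return x0, y0, v0
```

Whether the stationary point is a minimum or a maximum depends on the sign of the curvature, consistent with the extremum-type discussion elsewhere in this description.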
In most eyes, the refractive error of SP, CY, or SE increases or decreases monotonically from the fovea position up to about 25 degrees into the periphery (gaze angle), regardless of whether the eye is myopic, emmetropic, or hyperopic. In other words, an eye's peripheral vision generally has a different refractive power than at central vision. The gaze angle is the angle between the center axis and the eye's visual axis. The center axis is the straight line between the center of the pupil and the center of the photo-refractor cluster. The visual axis is the straight line between the center of the fovea and the center of the pupil. When the eye focuses on the center of the photo-refractor cluster, the center axis aligns with the center of the fovea and the gaze angle is 0. When the eye rotates with respect to the center axis, the center axis may pass through a point on the retina outside the fovea, in the periphery. The gaze angle into the periphery increases in all directions from the fovea with increased eye rotation. A centroid is a non-limiting example of each “center” of a region (e.g., pupil, fovea or camera cluster) that is referred to herein.
For each type of refractive error, a curve describing average central and peripheral refractive errors may be found from empirical datasets and studies for a given population. This curve may look similar to a 3D conical surface or gaussian surface and may be a 3D function with independent variables that enables scaling in the X,Y, and V direction. The empirically derived surface function of central and peripheral vision may be curve fitted to a refractive error map as described above.
In some implementations of this invention: (a) to find the SP refractive error in diopters at the fovea, the value at an extremum of the SP fitted curve is determined; (b) to find the CY refractive error in diopters at the fovea, the value at an extremum of the CY fitted curve is determined; and/or (c) to find the SE refractive error in diopters at the fovea, the value at an extremum of the SE fitted curve is determined. Again, in illustrative implementations, this approach yields highly accurate calculations of refractive error for most humans, because refractive power in most humans increases or decreases monotonically up to about 25 degrees from central to peripheral vision regardless of whether the central vision refractive error is myopic, emmetropic, or hyperopic. This approach does not require the subject to adjust their gaze direction towards a central camera to find the refractive error at central vision (the fovea).
In some cases, instead of using a predefined polynomial, an nth-order polynomial may be fitted to the refractive error maps. To find a refractive error (e.g., SP, CY, or SE) at the fovea, the position and value (e.g., SP, CY, or SE) at an extremum of the fitted curve may be determined.
In many use scenarios, one or more computers calculate a particular refractive error of central vision by finding a global maximum or a global minimum of the fitted curve. In some other use scenarios, one or more computers calculate a particular refractive error of central vision by finding a local minimum or local maximum of the fitted curve. As a non-limiting example, in some use scenarios, the computer(s) calculate a very high-resolution polynomial fit, and the point on the fitted curve that corresponds to the fovea is at a local minimum or local maximum of the fitted curve.
In some cases, a 3D interpolation may be performed on maps using the nearest-neighbor, cubic, or linear method or via a Voronoi tessellation forming up-sampled discrete point maps of SP, CY, or SE values. From the discrete point maps, the position and refractive error value (SP, CY, or SE) at the fovea may be found by calculating the center of mass of the map, or by finding one or more extrema on the discrete up-sampled point map. As noted above, depending on the particular use scenario, particular patient and particular refractive error: (a) the extremum that is used to calculate the particular refractive error may be a global minimum, global maximum, local minimum or local maximum; and (b) a computer may determine that a value at the calculated extremum is equal to the particular refractive error.
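A minimal sketch of the nearest-neighbor up-sampling and discrete extremum search described above; the grid spacing and the choice of minimum versus maximum are illustrative:

```python
import numpy as np

def upsample_nearest(px, py, pv, grid_x, grid_y):
    """Nearest-neighbor up-sampling of a discrete (X, Y, V) map onto a
    dense grid, one simple form of the 3D interpolation described above."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    probes = np.column_stack([px, py])
    # Squared distance from every grid node to every probe position.
    d2 = ((pts[:, None, :] - probes[None, :, :]) ** 2).sum(-1)
    return pv[np.argmin(d2, axis=1)].reshape(gx.shape), gx, gy

def foveal_estimate(vals, gx, gy, kind="min"):
    """Pick the grid extremum as the central-vision estimate; `kind`
    selects minimum or maximum per the extremum-type discussion."""
    idx = np.argmin(vals) if kind == "min" else np.argmax(vals)
    i, j = np.unravel_index(idx, vals.shape)
    return gx[i, j], gy[i, j], vals[i, j]
```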
In some cases, the steps to determine position and value of refractive errors of central and peripheral vision may be repeated over multiple measurement cycles to create a set of retina probed position values, such that multiple curves are created with respect to each set of probed positions. These curves may be averaged spatially to improve the accuracy of the method.
In the illustrative example in
An illustrative example of a camera cluster configuration is shown in
In some cases, there may be cameras arranged in the pattern illustrated in
The pattern of camera positions (e.g.,
The information (e.g., refractive error values) from each probed position 800 on the retina may be spatially mapped and plotted with coordinates (X,Y,V), where X and Y are the position on the retina, and V is a value of refractive error. The value of V may be spherical power (SP), cylindrical power (CY), or spherical equivalent power (SE) in diopters, or the axis of the cylinder (AX) in degrees or radians.
In some cases, the polynomial surface 801 after fitting to the probed positions p1-p7 may be convex with respect to the axis V, after which the maximum 802 of the curve may be found to determine the refractive error values of central vision.
In some cases, the curved surface 801 that is fit to the probed positions p1-p7 may be a piece-wise linear or piece-wise polynomial curve, or based on a function from a lookup table. In some cases, the surface 801 that is fit to the probed positions p1-p7 may be a plane.
In other cases, the camera cluster and IR light sources may be arranged in a grid as exemplified in
In some implementations of the photo-refractor configurations disclosed in this invention, the light sources may emit near infrared light between 750 nm and 1000 nm in wavelength, or ultraviolet light between 250 nm and 450 nm, or a broadband white light across the visible spectrum, or combinations of specific wavelengths in the visible spectrum.
In some cases, a self-serve vision exam kiosk 100 houses some of the apparatus and may be employed as a platform to deliver vision tests.
In
In typical implementations, the apparatus uses the various cameras and sensors of the kiosk combined with the refraction technique to automate the refraction measurements or visual acuity test. The subject or the ECP simply presses a button on the display 102 to start the exam and the system takes over to complete the measurements automatically. During the exam the subject may be asked to stand still while the kiosk adapts to the subject's position while the apparatus takes refraction readings automatically, or the subject may be asked to follow prompts as in the case of a visual acuity test.
An audio feedback speaker bar 213 enables the kiosk to provide virtual-assistant audio feedback. The virtual-assistant audio tracks welcome the user and instruct the user to perform certain tasks during the tests performed at the kiosk. The kiosk is therefore fully automated, including camera detection systems that capture both refractive information and visual acuity information. The system may be triggered either by a single start button or simply by the presence of a subject standing in front of the system (such as a kiosk). Once initiated, the system will automatically align with the subject and perform diagnostic and visual acuity analyses (potentially even simultaneously), as well as detect any of a variety of health issues regarding a subject based on, for example, the subject's pupils' reactions to changes in visible light.
As noted above, in some implementations, this invention controls the diameter of the pupil of an eye being tested. The system may comprise an adjustable-brightness control light facing the subject, a camera that faces the subject to record the pupil diameter, and a control system that steers the current pupil diameter to the target pupil diameter. The control system may run on a computer or on a microcontroller. In a healthy eye and subject, a high intensity light entering the eye causes the pupil to constrict via a process called miosis, whereas a low intensity light entering the eye, or no light at all, causes the pupil to dilate via a process called mydriasis.
In illustrative implementations of this invention, the pupil diameter may be reduced by increasing the intensity of the control light. The pupil diameter may be increased by lowering the intensity of the control light. The ambient light may be reduced by turning off the room lights, or by shielding the eye from ambient light via a booth or kiosk design that includes side blinders, or via a chamber design with an enclosure that blocks light to the subject. The wavelength of the control light is in the visible range of the human eye so that miosis may occur, and the light may be characterized as a chromatic color such as red, green, or blue, or as an achromatic color such as white or gray. The control light may be diffusely propagating and may enter both eyes simultaneously, or the control light may be focused to shine light into one eye at a time.
The pupil diameter steering method may include a control system that sets the intensity of the control light so that the pupil reaches the target pupil diameter. The control system may include an open loop controller that takes in the target pupil diameter and sets the intensity of the control light from a lookup table that relates light intensity to pupil diameters. The lookup table may take into account age, gender, race, wavelength, light source spatial configuration, and if light is being delivered to one eye or both eyes simultaneously. The open loop controller may wait an estimated amount of time until the desired range of pupil diameter is reached.
In some cases, the control system includes a closed loop feedback controller that takes in a target pupil diameter value and compares to the current pupil diameter value measured by the camera to then actively drive the intensity of the control light. The closed loop feedback controller may be a proportional controller, a proportional-integral-derivative controller (PID), a state-space feedback controller, or a fuzzy logic controller. The feedback controller may also be a multi-loop closed-loop feedback controller. In certain applications, the control system takes in a target pupil diameter rate-of-change value and drives the control system to change the subject's pupil diameter at the target rate of change.
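A closed-loop controller of the kind described may be sketched as a discrete positional PID loop. The gains, units, and the linear toy pupil model in the usage portion are illustrative assumptions, not measured values; a real loop would read the diameter from the subject-facing camera:

```python
class PupilPID:
    """Closed-loop pupil-diameter controller sketch (positional PID).
    Gains, units (mm, normalized intensity), and the sign convention
    (brighter light -> smaller pupil via miosis) are illustrative."""

    def __init__(self, kp=0.1, ki=0.5, kd=0.0, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target_mm, measured_mm):
        # Positive error (pupil larger than target) raises intensity,
        # constricting the pupil toward the target diameter.
        err = measured_mm - target_mm
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(u, 0.0), 1.0)  # clamp to the light's output range


# Usage with a toy linear pupil model (7 mm in the dark, 3 mm at full
# intensity); the real measurement would come from the camera.
pid = PupilPID()
intensity = 0.0
for _ in range(300):
    measured = 7.0 - 4.0 * intensity
    intensity = pid.step(4.0, measured)
```

A rate-of-change target, as mentioned above, could be handled by the same structure with the error defined on the diameter's derivative instead.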
In further applications, the control light comprises multiple diffuse area light sources spaced adjacent from one another as illustrated in
In accordance with certain aspects of the present invention (e.g., with a gaze-independent photo-refractor), pupil diameter steering is employed to detect high order aberrations. In some implementations, higher order aberrations are detected as follows: A computer: (a) may determine whether the cylinder axis angles map shows that the AX values of the retina probe positions are pointing in different directions; and (b) if they are, may determine that there are significant coma or trefoil aberrations. This approach yields accurate results because, in a typical eye with spherical (SP) and cylindrical (CY) refractive error (low order aberrations) but without significant high order aberrations, the cylinder axis angles (AX) of central and peripheral vision tend to point in the same direction.
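The axis-alignment heuristic above can be sketched with circular statistics; because cylinder axes repeat every 180 degrees, angles are doubled before averaging. The 15-degree tolerance is an invented threshold for illustration, not a clinical value:

```python
import numpy as np

def axes_aligned(ax_degrees, tol_deg=15.0):
    """Return True if the cylinder-axis (AX) values of the probed retinal
    positions point in roughly the same direction. Axes repeat every
    180 degrees, so angles are doubled before circular averaging."""
    doubled = np.deg2rad(2.0 * np.asarray(ax_degrees, dtype=float))
    # Mean resultant length R of the doubled angles; R near 1 => aligned.
    r = np.hypot(np.cos(doubled).mean(), np.sin(doubled).mean())
    spread = np.rad2deg(np.arccos(np.clip(r, -1.0, 1.0))) / 2.0
    return bool(spread <= tol_deg)

# Per the heuristic above: aligned axes suggest low-order astigmatism
# only, while scattered axes may indicate significant coma or trefoil.
```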
In some implementations, higher order aberrations are detected by deliberately reducing the pupil diameter of the subject's eye during measurement (e.g., in a series of different steps of diameter of the pupil). Reducing the pupil diameter tends to cause the photo-refractive measurement to be less affected by high order aberrations in the eye. This effect may be used to compare the refractive error between a dilated pupil and a constricted pupil via the pupil diameter steering system. A significant difference in refractive error between a constricted pupil and a dilated pupil may indicate the presence of high order aberrations.
In some implementations, the magnitude of high order aberrations is determined by taking the cylinder refractive error CY from the center of the map or the fitted curve and comparing it to the peripheral CY values. The difference in diopters between central and peripheral CY values may be correlated to a table of magnitudes of trefoil and coma residual aberrations.
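A minimal sketch of the center-versus-periphery comparison; the correlation table itself would be device-specific and is not reproduced here:

```python
import numpy as np

def hoa_indicator_diopters(cy_center, cy_periphery):
    """Absolute difference (diopters) between the central CY value and
    the mean of the peripheral CY values; larger values would map to
    larger trefoil/coma magnitudes in a device-specific table."""
    return abs(cy_center - float(np.mean(cy_periphery)))
```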
In accordance with certain aspects of this invention (e.g., with a gaze-independent photo-refractor), pupil diameter steering is employed to detect symptoms of eye disease or neurological disorders, such as asymmetric pupil diameters between the two eyes (i.e., anisocoria), non-responsive pupils, an abnormal rate of change of pupil diameter, or pulsating pupil diameters.
In some implementations, the user may see a graphic (“virtual object” 306) appearing inside the exam window 300. The virtual object may be a fixed graphic, an animated graphic, or a combination of both. The virtual object may be used to guide the subject's attention, focus position, or eye gaze direction to a specific location when looking into the exam window. To the subject, the virtual object may appear at a given distance behind the exam window 300 as if “hovering” inside. In some implementations of the kiosk, the virtual object may be combined with a photo-refractor, or a gaze-independent-photo-refractor configuration disclosed herein, or a visual acuity system to serve as a viewing target to assist in administering the tests.
In some implementations, the partial reflection mirror can be rotated such that the virtual object appears at an alternative distance.
The partial reflection mirror 302 may be a glass or plastic slab covered on one or both sides with specialty materials, films, or optical coatings. The slab may be covered with an optical filter coating that passes and reflects certain wavelengths of light, or an anti-reflection coating, or an optical absorber coating, or a polarizer film or coating. The partial reflection mirror may be a beam-splitter mirror or a teleprompter mirror with an anti-reflective coating on one or both sides. In some implementations, the slab's reflection-to-transmission ratio for a given wavelength is 50% reflection and 50% transmission, or 40% reflection and 60% transmission, but can be any combination of reflection and transmission ratios.
The optical path compression configuration exemplified in
The optical systems illustrated in
The position of the virtual object display 303, or the parabolic mirror 301 may also be moved relative to the optical axes or other components to create additional virtual object distances and accommodative states of the subject's eyes.
The virtual object size on the display 303 may be coupled to the value received from the positional sensor 308. This enables adjustment of the size of the virtual object 306 according to the position of the subject with respect to the kiosk. That is, the virtual object 306 can be made to appear the same size regardless of the subject's standing position (e.g., for subjects standing close to the exam window 300 the virtual object 306 is reduced in size, and for subjects standing further away from the exam window 300 the virtual object is increased in size).
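A minimal sketch of this constant-apparent-size scaling, assuming a linear small-angle model in which apparent angular size is proportional to rendered size divided by viewing distance (function and parameter names are hypothetical):

```python
def virtual_object_display_size(base_size_px, reference_distance_mm,
                                measured_distance_mm, virtual_depth_mm=0.0):
    """Scale the rendered size of the virtual object so that its apparent
    angular size stays constant as the subject moves.

    base_size_px is the size used at reference_distance_mm;
    measured_distance_mm comes from the positional sensor 308;
    virtual_depth_mm is how far behind the exam window the virtual
    object appears to hover."""
    total_ref = reference_distance_mm + virtual_depth_mm
    total_meas = measured_distance_mm + virtual_depth_mm
    # Closer subjects get a smaller rendered object, farther subjects
    # a larger one, holding size/distance (angular size) constant.
    return base_size_px * (total_meas / total_ref)
```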
The angle of the partial reflection mirror 302 may also be coupled to the value received from the positional sensor 308 such that the virtual object 306 always appears in a specific position inside the exam window 300 regardless of the subject's head height or position relative to the kiosk. For example, for a subject who is short, the partial reflection mirror 302 can rotate such that the virtual object 306 moves up, and for a subject who is tall, the partial reflection mirror 302 can rotate such that the virtual object 306 moves down.
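One way to sketch this coupling uses the mirror-deflection rule that rotating a mirror by an angle theta steers the reflected beam by 2*theta; the function name, the small-angle geometry, and the sign convention are illustrative assumptions:

```python
import math

def mirror_tilt_for_head_height(head_height_mm, nominal_height_mm,
                                subject_distance_mm):
    """Mirror tilt (radians) that recenters the virtual object for a subject
    whose eye height differs from the nominal design height reported by the
    positional sensor."""
    offset_mm = head_height_mm - nominal_height_mm
    # Angular offset of the subject's eye as seen from the exam window.
    angular_offset = math.atan2(offset_mm, subject_distance_mm)
    # The mirror rotates through half the required beam deflection:
    # a taller subject yields a positive tilt (object steered down),
    # a shorter subject a negative tilt (object steered up).
    return angular_offset / 2.0
```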
In some implementations, the above dynamic virtual object positioning methods may be employed during a visual acuity test, or during auto-refraction with a photo-refractor, or with a gaze-independent-photo-refractor. The size and position of the virtual object as it appears inside the exam window is therefore adjustable depending on where the subject is standing or is otherwise positioned. This ensures that all users see the same object, regardless of their height or position near the exam window.
In some implementations, the combination of a virtual object and a photo-refractor may be used to monitor a subject's accommodative state. As the subject performs the test routine, the photo-refractor may continuously measure the refractive state of the subject's eyes and record the results over time. The time series may show relative changes in refractive power as the subject is changing their accommodative state. The visual acuity test may prompt the subject to attempt their best focusing ability on the virtual object seen inside the exam window. Tracking a refractive error time series may be used to estimate the subject's best focusing ability by identifying the maximum or minimum value on the time series graph.
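The time-series analysis described above can be sketched as follows, with a moving-average smoothing step (window length is an illustrative choice) before taking the extrema of the recorded refraction series:

```python
def best_focus_estimate(diopter_series, window=3):
    """Estimate the subject's best focusing range from a continuously
    recorded refraction time series (diopters).  The series is smoothed
    with a moving average to suppress measurement noise, then its minimum
    and maximum are returned as candidate best-focus values."""
    if len(diopter_series) < window:
        smoothed = list(diopter_series)
    else:
        smoothed = [
            sum(diopter_series[i:i + window]) / window
            for i in range(len(diopter_series) - window + 1)
        ]
    return min(smoothed), max(smoothed)
```

Depending on the sign convention and the test design, either the minimum or the maximum of the smoothed series corresponds to the subject's best focusing ability on the virtual object.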
Multiple accommodative states may be monitored during a measurement session by taking photo-refractor measurements at a multitude of virtual object distances and utilizing the aforementioned virtual object positioning methods. For example, the virtual object distance is first set to 3 feet from the subject and refractive measurements are taken. Next, the virtual object distance is set to 20 feet and refractive measurements are taken. If both refractive measurements are the same or similar, there is a high probability that the subject focused at the correct distances and the results are valid. If the refractive measurements differ between the two virtual object distances, that may indicate issues with the subject's accommodation.
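The two-distance validity check can be sketched as a simple tolerance comparison; the 0.25 D tolerance is a hypothetical value chosen for illustration:

```python
def accommodation_check(meas_near_d, meas_far_d, tolerance_d=0.25):
    """Compare refraction (diopters) measured at two virtual object
    distances, e.g., 3 feet and 20 feet.  Agreement within tolerance
    suggests the subject focused correctly at both distances and the
    results are likely valid; a larger difference flags a possible
    accommodation issue."""
    return abs(meas_near_d - meas_far_d) <= tolerance_d
```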
Correctly accommodating to the target distance of a virtual object may also be an issue depending on what type of refractive error the subject has. For farsighted (hyperopic) subjects, a virtual object 3 feet away may appear too blurry to focus on correctly at that distance. The virtual object may instead be set to a further distance, such as the typical Snellen chart distance of 16 ft or 20 ft, so that the subject can more reliably focus on the virtual object and the photo-refractor can therefore obtain a measurement with a higher probability of being valid. For nearsighted (myopic) subjects, the opposite strategy may be employed. A virtual object at a 20 ft distance may be too blurry to focus on properly, so instead the virtual object distance may be set to 3 feet for easier focus and improved measurement reliability.
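This distance-selection strategy can be sketched as below; the distances and the ±0.5 D classification band are illustrative assumptions:

```python
def choose_virtual_object_distance_ft(estimated_sphere_d):
    """Pick a virtual object distance the subject can plausibly focus on,
    given a rough sphere estimate in diopters (negative = myopic,
    positive = hyperopic)."""
    if estimated_sphere_d <= -0.5:
        return 3    # myopic: a near target is easier to focus on
    if estimated_sphere_d >= 0.5:
        return 20   # hyperopic: a far target is easier to focus on
    return 16       # near-emmetropic: a typical Snellen chart distance
```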
A self-administered visual acuity test may be performed by having the subject (kiosk user) follow instructions from the kiosk and input responses back to the kiosk as illustrated in
During the test, the subject may be prompted by the kiosk to perform an action or a series of actions that the kiosk receives as input. This input may then be used to adjust the virtual object 306 in size, shape, or position as seen in the exam window 103.
In one implementation, the subject may be prompted via audio commands from the kiosk console's loudspeaker 213 to rotate an input wheel 205 after observing the virtual object 306. Turning the input wheel or pushing the wheel's button adjusts the letter(s), symbol(s), or graphic(s) of the virtual object 306 to new letter(s), symbol(s), or graphic(s) and/or to a new size or position, or initiates a prompt for a new step in the test.
Instead of providing audio commands to the subject to advance to the next step, the kiosk may prompt the subject to perform a new test action via graphics or text on the kiosk's display or via letters, symbols (e.g. arrows), or graphics displayed by the virtual object 306.
The central computer 214 housed inside the kiosk 200 reacts to the subject's input by advancing the test step and outputs audio commands or a change in the virtual display's state.
In one implementation, instead of using the input wheel 205, the subject may input a response to the test by pressing buttons on the kiosk's touch display, or by performing a “swiping” action on the touch display, or by pressing a touch pad with buttons on the kiosk, or by performing gestures that the cameras or sensors 308 can detect (e.g. head nodding, head turning, hand waving, hand positions, holding up all or some fingers, eye blinking, mouth opening or closing).
In accordance with further aspects, the system may detect any of a wide variety of gestures that indicate (positively or negatively) whether a subject is able to clearly see information in a visual acuity test. Such gestures may include, for example, nodding or shaking their head, or providing a thumbs-up or side-to-side movement of a horizontal hand with the palm facing downward. In accordance with further aspects, the system may detect voice of a subject answering yes/no questions and/or reading text in lines during a visual acuity test.
In the examples of
In some cases, the rotation of the input wheel 205 or buttons on the touch display 103 enables the subject to move letters or symbols shown in window 208 along a circular path, or rotate the orientation of letters or symbols, where for example, the top positioned letter or symbol is the selection made by the subject as illustrated at 208′ in
The subject, for example, may rotate the input wheel 205 or press buttons on the touch display 103 such that the letters or symbols seen through the window 208 adjust in size in response to the input wheel rotation direction as illustrated at 208″ in
In some cases, the cameras or sensors 308 may detect whether the subject is holding their hand in front of their eye at the appropriate point in the test. If the hand is not in the right position, the kiosk may prompt the subject to move the hand back in front of the eye. Further, the cameras or sensors 308 may detect that the subject is not wearing eyeglasses or contact lenses when they are required for the test, in which case the kiosk may prompt the subject to wear the eyeglasses or contact lenses. In some cases, the cameras or sensors 308 may detect that the subject is wearing eyeglasses or contact lenses when they are not supposed to for the test, in which case the kiosk prompts the subject to remove the eyeglasses or contact lenses.
The self-administered visual acuity test may also be combined with the auto-refraction techniques described above. In one example, the subject may follow the visual acuity test while the refraction sensor records refractive data continuously over a period of seconds or minutes. In accordance with further aspects, the system may determine whether a subject is wearing glasses, and confirm whether a subject is blocking one or the other eye when prompted to do so during an automated visual acuity test.
The terms “a” and “an”, when modifying a noun, do not imply that only one of the noun exists. For example, a statement that “an apple is hanging from a branch”: (i) does not imply that only one apple is hanging from the branch; (ii) is true if one apple is hanging from the branch; and (iii) is true if multiple apples are hanging from the branch.
To say that a calculation is “according to” a first equation means that the calculation includes (a) solving the first equation; or (b) solving a second equation, where the second equation is derived from the first equation. Non-limiting examples of “solving” an equation include solving the equation in closed form or by numerical approximation or by optimization.
To compute “based on” specified data means to perform a computation that takes the specified data as an input.
Non-limiting examples of a “camera” include: (a) a digital camera; (b) a digital grayscale camera; (c) a digital color camera; (d) a video camera; (e) a light sensor, imaging sensor, or photodetector; (f) a set or array of light sensors, imaging sensors, or photodetectors; (g) a light field camera or plenoptic camera; (h) a time-of-flight camera; and (i) a depth camera. In some cases, a camera includes any computers or circuits that process data captured by the camera.
The term “comprise” (and grammatical variations thereof) shall be construed as if followed by “without limitation”. If A comprises B, then A includes B and may include other things.
Each of the following is a non-limiting example of a “computer”, as that term is used herein: (a) a digital computer; (b) an analog computer; (c) a computer that performs both analog and digital computations; (d) a microcontroller; (e) a microprocessor; (f) a controller; (g) a tablet computer; (h) a notebook computer; (i) a laptop computer; (j) a personal computer; (k) a mainframe computer; and (l) a quantum computer. However, a human is not a “computer”, as that term is used herein.
“Defined Term” means a term or phrase that is set forth in quotation marks in this Definitions section.
For an event to occur “during” a time period, it is not necessary that the event occur throughout the entire time period. For example, an event that occurs during only a portion of a given time period occurs “during” the given time period.
The term “e.g.” means for example.
The fact that an “example” or multiple examples of something are given does not imply that they are the only instances of that thing. An example (or a group of examples) is merely a non-exhaustive and non-limiting illustration.
“For instance” means for example.
To say a “given” X is simply a way of identifying the X, such that the X may be referred to later with specificity. To say a “given” X does not create any implication regarding X. For example, to say a “given” X does not create any implication that X is a gift, assumption, or known fact.
“Herein” means in this document, including text, specification, claims, abstract, and drawings.
As used herein: (1) “implementation” means an implementation of this invention; (2) “embodiment” means an embodiment of this invention; (3) “case” means an implementation of this invention; and (4) “use scenario” means a use scenario of this invention.
The term “include” (and grammatical variations thereof) shall be construed as if followed by “without limitation”.
Unless the context clearly indicates otherwise, “or” means and/or. For example, A or B is true if A is true, or B is true, or both A and B are true. Also, for example, a calculation of A or B means a calculation of A, or a calculation of B, or a calculation of A and B.
The term “such as” means for example.
Except to the extent that the context clearly requires otherwise, if steps in a method are described herein, then the method includes variations in which: (1) steps in the method occur in any order or sequence, including any order or sequence different than that described herein; (2) any step or steps in the method occur more than once; (3) any two steps occur the same number of times or a different number of times during the method; (4) one or more steps in the method are done in parallel or serially; (5) any step in the method is performed iteratively; (6) a given step in the method is applied to the same thing each time that the given step occurs or is applied to a different thing each time that the given step occurs; (7) one or more steps occur simultaneously; or (8) the method includes other steps, in addition to the steps described herein.
Headings are included herein merely to facilitate a reader's navigation of this document. A heading for a section does not affect the meaning or scope of that section.
This Definitions section shall, in all cases, control over and override any other definition of the Defined Terms. The Applicant or Applicants are acting as his, her, its or their own lexicographer with respect to the Defined Terms. For example, the definitions of Defined Terms set forth in this Definitions section override common usage and any external dictionary. If a given term is explicitly or implicitly defined in this document, then that definition shall be controlling, and shall override any definition of the given term arising from any source (e.g., a dictionary or common usage) that is external to this document. If this document provides clarification regarding the meaning of a particular term, then that clarification shall, to the extent applicable, override any definition of the given term arising from any source (e.g., a dictionary or common usage) that is external to this document. Unless the context clearly indicates otherwise, any definition or clarification herein of a term or phrase applies to any grammatical variation of the term or phrase, taking into account the difference in grammatical form. For example, the grammatical variations include noun, verb, participle, adjective, and possessive forms, and different declensions, and different tenses.
This invention may be implemented in many different ways.
Each description herein of any method, apparatus or system of this invention describes a non-limiting example of this invention. This invention is not limited to those examples, and may be implemented in other ways.
Each description herein of any prototype of this invention describes a non-limiting example of this invention. This invention is not limited to those examples, and may be implemented in other ways.
Each description herein of any implementation, embodiment or case of this invention (or any use scenario for this invention) describes a non-limiting example of this invention. This invention is not limited to those examples, and may be implemented in other ways.
Each Figure, diagram, schematic or drawing herein (or in the Provisional) that illustrates any feature of this invention shows a non-limiting example of this invention. This invention is not limited to those examples, and may be implemented in other ways.
The above description (including without limitation any attached drawings and figures) describes illustrative implementations of the invention. However, the invention may be implemented in other ways. The methods and apparatus which are described herein are merely illustrative applications of the principles of the invention. Other arrangements, methods, modifications, and substitutions by one of ordinary skill in the art are also within the scope of the present invention. Numerous modifications may be made by those skilled in the art without departing from the scope of the invention. Also, this invention includes without limitation each combination and permutation of one or more of the items (including any hardware, hardware components, methods, processes, steps, software, algorithms, features, and technology) that are described herein.
The present application claims priority to U.S. Provisional Patent Application No. 63/270,907 filed Oct. 22, 2021, the disclosure of which is hereby incorporated by reference in its entirety.
Number | Date | Country
---|---|---
63/270,907 | Oct. 2021 | US