The disclosure relates generally to visual assessment, and more particularly, to assessing one or more visual functions using moving contrasting areas.
Visual acuity is clearness of vision, and is defined as an ability to perceive or resolve a linear minimum angle of separation, or “Minimum Angle of Resolution,” between two stationary lines or points. Historical measures of visual acuity are defined and tested using unmoving or static optotypes. The optotypes are generally composed of static, high-contrast black letters or pictograms on a white background, or the converse, and are generally commonly recognized characters or shapes. Illustrative sets of optotypes include Snellen letters, Sloan letters and numbers, Landolt C's, Tumbling E's, and pictograms (typically used with illiterate individuals and children).
The human eye's neuro-biologic function that produces visual sensation fundamentally requires changes in light stimulus on a given retinal area rather than strictly resolving static differentiation of two points. A classical demonstration of this phenomenon has been described by various researchers using an image that was mechanically stabilized, i.e., not moving on the retina, resulting in the image fully fading and no longer being visible within a few seconds. As another example, Troxler's fading also demonstrates the importance of movement of visual stimuli, showing that visual stimuli that do not change position, or move, rapidly cease to be visually perceived. These retinal stabilization experiments demonstrate that the human visual system has evolved to optimally detect changes in stimulation and motion, which is not the primary feature directly tested using historical visual acuity testing methods.
In addition to visual acuity, historical assessment of other visual functions such as contrast sensitivity, color vision, refraction, and perception of distance and depth, commonly utilize static targets in which a subject is requested to correctly identify optotypes (e.g., text, numbers, or pictograms) to arrive at a confusion point defining the perceptual threshold for the corresponding visual function.
A previously described vision test includes at least one animated dynamic optotype image shape for measuring the visual acuity of a subject. The animated dynamic optotype is described as a rotating image with a linear gap width, which can be scaled in size in such a way as to test vision in a manner that compares directly to the distance and acuity scale of previous vision tests.
An advanced testing method in which various aspects of vision performance can be assessed, leveraging the vision system's innate requirement for varying stimulus, is herein described. Aspects of the invention provide a solution for targeting a visual function using varying contrasting areas. An animation including a changing figure can be generated. The changing figure can include contrasting areas having attributes that change substantially continually during the animation. For example, a location of the contrasting areas within the changing figure can be changed to create an appearance of motion of the contrasting areas within the changing figure. The shape attributes can be determined based on a target visual function, a target performance level of the target visual function, and a plurality of display attributes of a display environment for an observer. The animation can be provided for display to the observer, and an indication of whether the observer is able to perceive the changes can be received and used to assess a performance level of the visual function.
A first aspect of the invention provides a method comprising: determining a plurality of shape attributes for a changing figure based on a target visual function, a target performance level of the target visual function, and a plurality of display attributes of a display environment for an observer, wherein the changing figure includes a plurality of contrasting areas; generating an animation including the changing figure, wherein the generating includes varying the plurality of contrasting areas during the animation to create a substantially continuously changing figure; and providing the animation for display to the observer.
A second aspect of the invention provides a system comprising: a computer system including at least one computing device, wherein the computer system performs a process for assessing a target visual function including: determining a plurality of shape attributes for a changing figure based on the target visual function, a target performance level of the target visual function, and a plurality of display attributes of a display environment for an observer, wherein the changing figure includes a plurality of contrasting areas; generating an animation including the changing figure, wherein the generating includes varying the plurality of contrasting areas during the animation to create a substantially continuously changing figure; and providing the animation for display to the observer.
A third aspect of the invention provides a method comprising: determining a plurality of shape attributes for a changing figure based on a target visual function, a target performance level of the target visual function, and a plurality of display attributes of a display environment for an observer; generating an animation including the changing figure, wherein the generating includes creating an appearance of motion within the changing figure, and wherein the changing figure comprises a rectangular shape; and providing the animation for display to the observer.
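The aspects above share a common determine/generate/provide flow. Purely for illustration, that flow can be sketched as follows; all function names, attribute names, and parameter values here are assumptions for the sketch, not terminology or values stated by the disclosure:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ShapeAttributes:
    """Hypothetical parameter set for a changing figure (field names are assumptions)."""
    diameter_arcmin: float         # overall visual size of the figure
    n_contrast_areas: int          # number of alternating contrasting areas
    angular_velocity_deg_s: float  # speed of the simulated motion

def determine_shape_attributes(target_function: str,
                               target_level: float,
                               display: dict) -> ShapeAttributes:
    # Illustrative mapping only; real values would be calibrated empirically
    # against the display attributes (distance, resolution, lighting, etc.).
    if target_function == "acuity":
        # Scale so the figure's features match the targeted acuity level,
        # e.g. target_level = 1.0 corresponding to a 5 arc-minute figure.
        return ShapeAttributes(diameter_arcmin=5.0 / target_level,
                               n_contrast_areas=8,
                               angular_velocity_deg_s=30.0)
    raise ValueError(f"unsupported visual function: {target_function}")

def run_assessment(target_function: str,
                   target_level: float,
                   display: dict,
                   present: Callable[[ShapeAttributes], bool]) -> bool:
    """Determine attributes, present the animation, and return the observer's indication."""
    attrs = determine_shape_attributes(target_function, target_level, display)
    return present(attrs)  # True if the observer perceived the changes
```

The `present` callback stands in for the generating and providing steps, which would render and display the animation in a real system.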
Other aspects of the invention provide methods, systems, program products, and methods of using and generating each, which include and/or implement some or all of the actions described herein. The illustrative aspects of the invention are designed to solve one or more of the problems herein described and/or one or more other problems not discussed.
These and other features of the disclosure will be more readily understood from the following detailed description of the various aspects of the invention taken in conjunction with the accompanying drawings that depict various aspects of the invention.
It is noted that the drawings may not be to scale. The drawings are intended to depict only typical aspects of the invention, and therefore should not be considered as limiting the scope of the invention. In the drawings, like numbering represents like elements between the drawings.
As indicated above, aspects of the invention provide a solution for targeting a visual function using varying contrasting areas. An animation including a changing figure can be generated. The changing figure can include contrasting areas having attributes that change substantially continually during the animation. For example, a location of the contrasting areas within the changing figure can be changed to create an appearance of motion of the contrasting areas within the changing figure. The shape attributes can be determined based on a target visual function, a target performance level of the target visual function, and a plurality of display attributes of a display environment for an observer. The animation can be provided for display to the observer, and an indication of whether the observer is able to perceive the changes can be received and used to assess a performance level of the visual function. As used herein, unless otherwise noted, the term “set” means one or more (i.e., at least one) and the phrase “any solution” means any now known or later developed solution.
Turning to the drawings,
The computer system 20 is shown including a processing component 22 (e.g., one or more processors), a storage component 24 (e.g., a storage hierarchy), an input/output (I/O) component 26 (e.g., one or more I/O interfaces and/or devices), and a communications pathway 28. In general, the processing component 22 executes program code, such as the assessment program 30, which is at least partially fixed in storage component 24. While executing program code, the processing component 22 can process data, which can result in reading and/or writing transformed data from/to the storage component 24 and/or the I/O component 26 for further processing. The pathway 28 provides a communications link between each of the components in the computer system 20. The I/O component 26 can comprise one or more human I/O devices, which enable a human user 12 to interact with the computer system 20 and/or one or more communications devices to enable a system user 12 to communicate with the computer system 20 using any type of communications link. To this extent, the assessment program 30 can manage a set of interfaces (e.g., graphical user interface(s), application program interface, and/or the like) that enable human and/or system users 12 to interact with the assessment program 30. Furthermore, the assessment program 30 can manage (e.g., store, retrieve, create, manipulate, organize, present, etc.) the data, such as assessment data 34, using any solution.
In any event, the computer system 20 can comprise one or more general purpose computing articles of manufacture (e.g., computing devices) capable of executing program code, such as the assessment program 30, installed thereon. As used herein, it is understood that “program code” means any collection of instructions, in any language, code or notation, that cause a computing device having an information processing capability to perform a particular action either directly or after any combination of the following: (a) conversion to another language, code or notation; (b) reproduction in a different material form; and/or (c) decompression. To this extent, the assessment program 30 can be embodied as any combination of system software and/or application software.
Furthermore, the assessment program 30 can be implemented using a set of modules 32. In this case, a module 32 can enable the computer system 20 to perform a set of tasks used by the assessment program 30, and can be separately developed and/or implemented apart from other portions of the assessment program 30. As used herein, the term “component” means any configuration of hardware, with or without software, which implements the functionality described in conjunction therewith using any solution, while the term “module” means program code that enables a computer system 20 to implement the actions described in conjunction therewith using any solution. When fixed in a storage component 24 of a computer system 20 that includes a processing component 22, a module is a substantial portion of a component that implements the actions. Regardless, it is understood that two or more components, modules, and/or systems may share some/all of their respective hardware and/or software. Furthermore, it is understood that some of the functionality discussed herein may not be implemented or additional functionality may be included as part of the computer system 20.
When the computer system 20 comprises multiple computing devices, each computing device can have only a portion of the assessment program 30 fixed thereon (e.g., one or more modules 32). However, it is understood that the computer system 20 and the assessment program 30 are only representative of various possible equivalent computer systems that may perform a process described herein. To this extent, in other embodiments, the functionality provided by the computer system 20 and the assessment program 30 can be at least partially implemented by one or more computing devices that include any combination of general and/or specific purpose hardware with or without program code. In each embodiment, the hardware and program code, if included, can be created using standard engineering and programming techniques, respectively.
Regardless, when the computer system 20 includes multiple computing devices, the computing devices can communicate over any type of communications link. Furthermore, while performing a process described herein, the computer system 20 can communicate with one or more other computer systems using any type of communications link. In either case, the communications link can comprise any combination of various types of optical fiber, wired, and/or wireless links; comprise any combination of one or more types of networks; and/or utilize any combination of various types of transmission techniques and protocols.
To this extent, the user 12 and/or the patient 14 can be a computer system, either of which also can be a general purpose computer system as described herein in conjunction with the computer system 20. When the user 12 and/or the patient 14 is a computer system, the computer system 20 can generate a user interface, such as a graphical user interface, for presentation to an individual utilizing the user 12 and/or the patient 14. Alternatively, the user 12 and/or the patient 14 can be an individual. In this case, the computer system 20 can generate and present the user interface to the user 12 and/or the patient 14. In either case, the user 12 and patient 14 can be different computer systems/individuals or the same computer system/individual. More particular illustrative environments 10 include: a visual assessment system (e.g., including an interface for providing a controlled viewing environment); a desktop/laptop computing device; a tablet computing device; a computing device communicating to a server over a network, such as the Internet, and/or the like.
As described herein, the computer system 20 can assess a performance level of a patient 14 with respect to one or more visual functions. It is understood that the patient 14 can be a human or other animal for which one or more visual functions are to be assessed. The assessment can be performed in a medical environment, such as a physician's office, an optometrist's office, and/or the like, or in any environment selected by the patient 14, such as his/her home, office, and/or the like. In an embodiment, some or all of the assessment is performed by a professional, such as a user 12 who is a medical practitioner (general practice or specialist), an optometrist, and/or the like. In an alternative embodiment, the assessment is self-administered by the patient 14. Regardless, when the user 12, such as a professional, is different from the patient 14, it is understood that the user 12 and patient 14 can be at the same location or remotely located from one another.
As discussed herein, the assessment program 30 enables the computer system 20 to assess a performance level of the patient 14 with respect to one or more visual functions. To this extent, the computer system 20 can generate a targeting interface 36 for presentation to a patient 14. The targeting interface 36 can include one or more figures which are individually and/or collectively configured to target one or more visual functions of the patient 14. The computer system 20 can use the targeting interface 36 to receive an indication corresponding to an ability of the patient 14 to perceive one or more features present in the targeting interface 36. The indication can be used, e.g., by the computer system 20 and/or the user 12, to assess the performance level of the patient 14 with respect to the visual function(s).
The computer system 20 can determine the shape attributes for one or more of the
The computer system 20 can obtain the display attributes of the display environment using any solution. For example, the computer system 20 can use a default set of display attributes of a typical or known display environment. Illustrative display attributes can include one or more of: a distance from a display screen, a viewing angle of the patient 14 to the display screen, ambient lighting in the display environment, a size of the display screen, a resolution of the display screen, and/or the like. Furthermore, the computer system 20 can enable the user 12 and/or the patient 14 to alter one or more of the display attributes using any solution. For example, the computer system 20 can generate a user interface, which when presented to the user 12 and/or the patient 14, enables selection and alteration of one or more of the display attributes. In an embodiment, the computer system 20 can receive video input data of the patient 14 at the location at which the shapes will be viewed from a camera having a known orientation with respect to the display screen. The computer system 20 can determine one or more of the display attributes by processing the video input data using any solution.
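As one concrete illustration of how such display attributes can translate into rendering parameters, the number of pixels subtended by one arc-minute of visual angle can be computed from the viewing distance, screen width, and screen resolution using standard visual-angle geometry. This is a sketch under assumed attribute names; the disclosure leaves the exact computation open:

```python
import math

def pixels_per_arcminute(viewing_distance_m: float,
                         screen_width_m: float,
                         screen_width_px: int) -> float:
    """Pixels subtended by one arc-minute of visual angle at the given distance.

    Uses the visual-angle geometry s = 2 * d * tan(theta / 2), where s is the
    on-screen size corresponding to angle theta at viewing distance d.
    """
    arcmin_rad = math.radians(1.0 / 60.0)
    size_m = 2.0 * viewing_distance_m * math.tan(arcmin_rad / 2.0)
    return size_m * (screen_width_px / screen_width_m)
```

For example, at a 6.1 m (20-foot) viewing distance on a hypothetical 0.6 m wide, 1920-pixel display, one arc-minute spans roughly 5.7 pixels.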
Once the target visual function(s), a target performance level for each of the target visual function(s), and/or display attributes of the display environment are available, the computer system 20 can determine the shape attributes of a figure to be displayed to the patient 14 using any solution. For example, depending on the target visual function(s), the computer system 20 can adjust one or more of: a color of a feature in the figure and/or the background, a contrast between two or more features in the figure and/or between the figure and the background, a size of the figure in the targeting interface, a relative size between features in the figure, and/or the like. The computer system 20 can calculate the appropriate shape attributes using any solution. For example, to evaluate visual acuity of a patient 14, the computer system 20 can calculate shape attributes such that a size of the figure perceived by the patient 14 corresponds to a size utilized for a corresponding visual acuity used in a Snellen chart or other mechanism for evaluating visual acuity. In an embodiment, the computer system 20 can derive one or more of the shape attributes empirically, e.g., by repeatedly generating and observing perceptions using patients 14 with known performance level(s) for the target visual function(s).
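The Snellen-equivalent sizing mentioned above can be sketched as follows, relying on the standard convention that a 20/20 optotype subtends 5 arc-minutes overall and that other acuity lines scale linearly; the function and parameter names are assumptions for illustration:

```python
import math

def figure_size_px(snellen_denominator: float,
                   viewing_distance_m: float,
                   px_per_m: float) -> float:
    """On-screen figure size matching a Snellen acuity line (20/denominator).

    A standard 20/20 optotype subtends 5 arc-minutes overall and scales
    linearly with the Snellen denominator (e.g., 20/40 subtends 10 arc-minutes).
    """
    arcmin = 5.0 * snellen_denominator / 20.0
    angle_rad = math.radians(arcmin / 60.0)
    return 2.0 * viewing_distance_m * math.tan(angle_rad / 2.0) * px_per_m
```

A figure sized this way would present features at the same visual angle as the corresponding line of a Snellen chart viewed at the same distance.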
In an embodiment, one or more of the
In an embodiment, while the computer system 20 varies one or more attributes of the figure in the animation, a location and extent of the figure in the animation can be defined by a static border in the targeting interface 36A, 36B for a predefined period of time. In this case, a location and/or size of the figure within the display region will not be perceived to be changing by the patient 14. In another embodiment, the computer system 20 can vary a size of the figure, e.g., after the predefined period of time, which will cause the border defining the extent of the figure in the targeting interface 36A, 36B to change. For example, the size can be varied in a manner that the figure appears to be stationary, but getting larger as if a distance between the patient 14 and the figure is getting smaller. Alternatively, the computer system 20 can vary the relative size of one or more features of a figure with respect to the overall size of the figure. For example, the computer system 20 can adjust a diameter of the inner opening of the
The computer system 20 can vary the attributes of the figure in a manner that results in changes in light stimulus on a given retinal area of the patient 14. In an illustrative embodiment, the variation can be perceived as motion within the border of the figure by a patient 14 having a sufficient performance level for the corresponding target visual function. In another embodiment, the variation can be perceived as a pulsing within the border of the figure by such a patient. However, it is understood that motion and pulsing are only illustrative of various changes that the computer system 20 can simulate for the figure. Regardless, the shape attributes can include a velocity and/or acceleration of the variation within the border of the figure, as well as a velocity and/or acceleration of variation of a size of the figure in the targeting interface 36A, 36B, and/or the like.
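One possible way to realize motion perceived within a static border is to model the figure as alternating sectors whose phase advances from frame to frame. The sketch below is an assumption about one implementation, not the disclosure's stated method:

```python
def sector_color(angle_deg: float, n_pairs: int, phase_deg: float) -> int:
    """Return which contrasting group (0 or 1) covers a given angle in the figure.

    The figure is modeled as 2 * n_pairs alternating sectors; advancing
    phase_deg frame by frame creates the appearance of rotation within a
    static circular border.
    """
    sector_width = 360.0 / (2 * n_pairs)
    index = int(((angle_deg - phase_deg) % 360.0) // sector_width)
    return index % 2

def frame_phase(t_s: float, angular_velocity_deg_s: float) -> float:
    """Phase of the rotation at time t for a constant angular velocity."""
    return (t_s * angular_velocity_deg_s) % 360.0
```

Because only the phase changes, the border of the figure remains fixed while the contrasting areas appear to rotate; varying the angular velocity over time would simulate acceleration of the motion.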
Additional details regarding varying the attributes of one or more of the
The colors of the groups of contrasting areas 50, 52 can be selected to provide a desired contrast between the groups of contrasting areas 50, 52 and between each group of contrasting areas 50, 52 and the corresponding background color of a targeting interface 36 (
The computer system 20 (
The computer system 20 (
The changes in light stimulus on a given retinal area 14A of the patient 14 are further illustrated in
It is understood that simulated rotational movement in a curvilinear figure is only illustrative of various solutions for changing light stimulus on a given retinal area 14A, which can be implemented by the computer system 20. To this extent,
As illustrated in
As described herein, a figure can include contrasting areas of any number of colors/contrasts. To this extent,
Similarly, it is understood that a figure including contrasting areas with defined visual angles, visual arc-widths, and visual arc-areas is only illustrative of various types of approaches that can be used to create contrasting areas within the figure. For example,
In an embodiment, the computer system 20 (
In this case, as described herein, the computer system 20 can determine the shape attributes for the
It is understood that the use of apparent motion within a figure is only illustrative of numerous types of variations of shape attributes, which the computer system 20 (
As described herein, the computer system 20 can generate a targeting interface, such as the targeting interfaces 36A, 36B shown in
When the targeting interface, such as the targeting interfaces 36A-36E, includes multiple figures, the figures can be of similar configurations or can have one or more attributes that differ. Furthermore, when generating an animation, the computer system 20 (
Returning to
Furthermore, the computer system 20 can receive an indication corresponding to an ability of the patient 14 to perceive the changes in the appearance of the figure(s) in the animation using any solution. For example, the targeting interface 36 can enable the patient 14 and/or user 12 to enter the information using a user interface control (e.g., a button, or the like), which is subsequently received by the computer system 20. Similarly, the computer system 20 can receive and process an indication spoken by the patient 14 and/or the user 12. In an embodiment, the indication includes a perceived direction of movement (e.g., right/left, clockwise/counterclockwise, up/down, and/or the like) in the figure, which the computer system 20 can compare with the actual simulated movement for accuracy. Still further, the computer system 20 can receive video data including the eyes of the patient 14, and process the video data to determine when the patient 14 directs his/her attention to the figure(s) in the targeting interface 36 that are changing and correlate the patient's 14 change in attention with his/her ability to perceive the changes in the figure.
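The comparison of reported against simulated movement can be sketched as a simple scoring step, purely as an illustrative assumption (the disclosure does not prescribe a scoring scheme):

```python
from typing import Iterable, Optional, Tuple

def direction_accuracy(trials: Iterable[Tuple[str, Optional[str]]]) -> float:
    """Fraction of trials in which the reported direction matched the actual one.

    Each trial is an (actual, reported) pair; reported is None when the
    observer perceived no motion, which counts as incorrect here.
    """
    trials = list(trials)
    correct = sum(1 for actual, reported in trials
                  if reported is not None and reported == actual)
    return correct / len(trials) if trials else 0.0
```

Requiring a correct direction report, rather than a bare yes/no, guards against the observer guessing that motion was present.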
In an embodiment, the targeting interface 36 includes multiple figures, each having different feature sizes for use in assessing the target visual function(s) of the patient 14. In this case, the computer system 20 can receive an indication as to which of the figures the patient 14 can perceive the changes, and which of the figures do not appear to be changing. Similarly, in response to an indication that the patient 14 can or cannot perceive the changes in any figure(s) in the targeting interface 36, the computer system 20 can alter a size of the figure(s) and/or visual arc areas of the contrasting areas in the targeting interface 36 until receiving the opposite indication from the patient 14. The computer system 20 can repeat the process to more accurately identify the actual size of the figure(s) and/or visual arc areas of the contrasting areas required for the patient 14 to perceive the changes, which the computer system 20 can correlate with a performance level for the visual function using any solution. For example, the computer system 20 can correlate the visual arc areas of the contrasting areas with a size of the retinal area of the patient 14 that required stimulation in order to perceive the changes.
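The alter-until-opposite-indication procedure described above resembles a classic psychophysical staircase. A minimal sketch, under the assumption of a simple up-down rule with a fixed step factor (neither of which the disclosure specifies):

```python
from typing import Callable

def staircase(present: Callable[[float], bool],
              start_arcmin: float,
              step_factor: float = 0.8,
              max_reversals: int = 6) -> float:
    """Estimate the smallest figure size at which changes are perceived.

    Shrinks the figure while the observer reports perceiving the changes,
    enlarges it when they do not, and returns the mean size at the reversal
    points as the threshold estimate. present(size_arcmin) returns True when
    the observer reports perceiving the motion at that size.
    """
    size = start_arcmin
    last = None
    reversals = []
    while len(reversals) < max_reversals:
        seen = present(size)
        if last is not None and seen != last:
            reversals.append(size)  # response flipped: record a reversal
        last = seen
        size = size * step_factor if seen else size / step_factor
    return sum(reversals) / len(reversals)
```

The resulting threshold size can then be correlated with a performance level for the visual function, e.g., with the retinal area that required stimulation.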
In an embodiment, the computer system 20 determines a minimum arc area in which the variation in the changing figure is perceived by the patient 14 for a color or group of colors, and correlates the minimum arc area with the individual sensitivity of one or more of the various types of retinal photoreceptors (e.g., cone types (red—L, green—M, blue—S), rod types, and/or the like) of the patient 14. In this case, the computer system 20 can assess sensitivity of perception of specific colors or combinations of colors by the patient 14 by measuring a minimum perceived arc area of the acuity end point for the corresponding color or combination of colors. In an embodiment, the computer system 20 can assess the sensitivity of perception for each color and/or contrast combination specific to retinal photoreceptor types or photoreceptor functional conditions of interest (e.g., each of the photoreceptor types). For example, for an image to be accurately perceived by a human patient 14 with 20/20 visual acuity at a viewing distance of approximately 6.1 meters (20 feet), the arc minute angular area required for the image can vary based on the foreground/background colors used to generate the image, which elicit individual and either similar or dissimilar thresholds per photoreceptor type or photoreceptor condition. As an example, differences in the acuity end-points for a green image and its background contrast compared to a red image and its background contrast can provide functional information regarding the relative distribution or relative function of the related photoreceptors of the patient 14.
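The comparison of acuity end-points across colors can be sketched as a simple normalization of per-color thresholds; this is an illustrative post-processing step, not a method the disclosure states:

```python
from typing import Dict

def relative_color_sensitivity(thresholds_arcmin: Dict[str, float]) -> Dict[str, float]:
    """Normalize per-color minimum arc-area thresholds against the best color.

    A larger ratio for a color suggests relatively reduced sensitivity of the
    photoreceptor type(s) driving perception of that color.
    """
    best = min(thresholds_arcmin.values())
    return {color: t / best for color, t in thresholds_arcmin.items()}
```

For example, a green threshold 1.5 times the red threshold could be read as relatively reduced function of the photoreceptors mediating the green stimulus, consistent with the comparison described above.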
Embodiments of the invention can be directed to solve an existing technical problem with assessing visual function(s) of a patient 14. For example, the traditional use of static letters or pictograms in assessing a visual function relies on an evaluation of a statistically meaningful number of errors, misidentifications, or confusions of the static letters or pictograms that are based upon linear, minimum angle of resolution made by the patient 14. In contrast, an embodiment of the invention can identify a perceptual threshold of the visual function(s) of a patient 14 using a binary “on or off” indication, which can be applied to assessing any of various visual functions (e.g., visual acuity, dynamic visual acuity, refraction, distance detection, visual size differentiation, motion, color detection, color sensitivity, contrast sensitivity, and/or the like).
By evaluating an ability to perceive movement of a plurality of calibrated, contrasting areas displayed over a significantly larger total area of retinal receptors, a more precise assessment of the visual function(s) can be made by the computer system 20 than that provided by the prior art. Additional advantages in using a changing shape as described herein as compared to detection of an angular separation of static points used with historical visual acuity testing can include: application to various aspects of visual function; higher precision; higher reproducibility over both time and from subject to subject; higher test time efficiency; less confusion of endpoints; insensitivity to cultural and literacy biases; and/or the like.
As part of an assessment, the computer system 20 can vary the visual arc-areas of a figure by one or more features to elicit details specific to a target visual function. The features can include: area size, velocity of motion (including no motion), acceleration, direction or directions of motion, relative contrast of adjacent areas, colors or relative colors of the adjacent areas, periodic frequency of the adjacent areas, and/or the like. The computer system 20 can control the variables to elicit an indication of whether the patient 14 observed either movement or no movement for the figure. The indication can be used, e.g., by the computer system 20, to assess: visual acuity; a precision of visual detection of velocity or relative velocities of a plurality of objects; refractive characteristics of the eye such as myopia, hyperopia, accommodation, or astigmatism amount or axis; visual contrast sensitivity; depth perception; color vision deficiencies; reduced sensitivity to color ranges; reduced sensitivity to color ranges such that relative deficiencies or strengths can be attenuated or augmented with color filters; preferential viewing; and/or the like.
Aspects of the invention described herein can be used in other applications apart from assessing visual function(s) of a patient in a medical treatment context. For example, an embodiment can be utilized to provide a distance-related warning or other information to an observer (e.g., patient 14). For example, the observer can be an operator of a vehicle, an individual approaching a restricted area, an individual approaching a place of business, and/or the like. In this case, a changing figure described herein can have feature sizes that can be detected by an individual having a minimum performance level for one or more visual functions at a predetermined distance. In operation, the changing figure will be detected by individuals once they are sufficiently close to the display device (e.g., a traffic light, a traffic sign, a warning light, a vehicle running light, a vehicle brake light, an advertising sign, and/or the like). To this extent, a changing figure described herein can be used in an application to: assess or provide feedback on distance between the observer and the changing figure (e.g., including dynamic variation in the characteristics of the figure in conjunction with velocity of the observer and/or device displaying the changing figure); assess nominal threshold detection of distances between the observer and the changing figure; and/or the like.
While primarily shown and described herein as a method and system for assessing visual function(s) of a patient, it is understood that aspects of the invention further provide various alternative embodiments. For example, in one embodiment, the invention provides a computer program fixed in at least one computer-readable medium, which when executed, enables a computer system to assess visual function(s) of a patient using a process described herein. To this extent, the computer-readable medium includes program code, such as the assessment program 30 (
In another embodiment, the invention provides a method of providing a copy of program code, such as the assessment program 30 (
In still another embodiment, the invention provides a method of generating a system for assessing visual function(s) of a patient. In this case, the generating can include configuring a computer system, such as the computer system 20 (
The foregoing description of various aspects of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously, many modifications and variations are possible. Such modifications and variations that may be apparent to an individual skilled in the art are included within the scope of the invention as defined by the accompanying claims.