The invention relates to vision assessments, particularly functional vision assessments using virtual reality.
Assessment of vision in patients with inherited retinal diseases, such as Leber congenital amaurosis (“LCA”), retinitis pigmentosa, or other conditions with very low vision is a significant challenge in the clinical trial setting. LCA is a group of ultra-rare inherited retinal dystrophies characterized by profound vision loss beginning in infancy. LCA10 is a subtype of LCA that accounts for over 20% of all cases and is characterized by mutations in the CEP290 (centrosomal protein 290) gene. Most patients with LCA10 have essentially no rod-based vision but retain a central island of poorly functioning cone photoreceptors. This results in poor peripheral vision, nyctalopia (night blindness), and a wide range of visual acuities ranging from No Light Perception (“NLP”) to approximately 20/50 vision.
Physical navigation courses have been used in, for example, clinical studies to assess functional vision in patients with low vision. For example, the Multi-luminance Mobility Test (“MLMT”) is a physical navigation course designed to assess functional vision at various light levels in patients with a form of LCA caused by a mutation in the RPE65 gene (LCA2). A similar set of four navigation courses (Ora® Mobility Courses) was designed by Ora®, Inc. and used in LCA10 clinical trials. Although physical navigation courses provide a valuable measurement of visual impairment, they require large dedicated spaces, time-consuming illuminance calibration, time and labor to reconfigure the course, and manual (subjective) scoring. Equipment, systems, and methods are thus desired to conduct functional vision assessments for use in, for example, clinical studies while avoiding the disadvantages of these physical navigation courses.
One aspect of the present invention has been developed to avoid disadvantages of the physical navigation courses discussed above using a virtual reality environment. Although this aspect of the present invention has various advantages over the physical navigation courses, the invention is not limited to embodiments of functional vision assessment in patients with low vision disorders discussed in the background. As will be apparent from the following disclosure, the devices, systems, and methods discussed herein encompass many aspects of using a virtual reality environment for the assessment of vision in individuals.
In one aspect, the invention relates to a method of evaluating visual impairment of a user including: generating, using a processor, a virtual navigation course for the user to navigate; displaying portions of the virtual navigation course on a head-mounted display as the user navigates the virtual navigation course, the head-mounted display being communicatively coupled to the processor; and measuring the progress of the user as the user navigates the virtual navigation course using at least one performance metric.
In another aspect, the invention relates to a method of evaluating visual impairment of a user including: generating, using a processor, a virtual reality environment including a virtual object having a directionality; displaying the virtual reality environment including the virtual object on a head-mounted display, the head-mounted display being communicatively coupled to the processor; increasing, using the processor, the size of the virtual object displayed on the head-mounted display; and measuring at least one performance metric when the processor receives an input that a user has indicated the directionality of the virtual object.
In a further aspect, the invention relates to a method of evaluating visual impairment of a user including generating, using a processor, a virtual reality environment including a virtual eye chart located on a virtual wall. The virtual eye chart has a plurality of lines, each of which includes at least one alphanumeric character. The at-least-one alphanumeric character in a first line of the eye chart is a different size than the at-least-one alphanumeric character in a second line of the eye chart. The method further includes: displaying the virtual reality environment including the virtual eye chart and virtual wall on a head-mounted display, the head-mounted display being communicatively coupled to the processor; displaying, on the head-mounted display, an indication in the virtual reality environment to instruct a user to read one line of the eye chart; and measuring the progress of the user as the user reads the at-least-one alphanumeric character of the line of the eye chart using at least one performance metric.
In still another aspect, the invention relates to a method of evaluating visual impairment of a user including: generating, using a processor, a virtual reality environment including a target; displaying the virtual reality environment including the target on a head-mounted display, the head-mounted display being communicatively coupled to the processor and including eye-tracking sensors; tracking the center of the pupil with the eye-tracking sensors to generate eye tracking data as the user stares at the target; and measuring the visual impairment of the user based on the eye tracking data.
In yet another aspect, the invention relates to a method of evaluating visual impairment of a user including: generating, using a processor, a virtual reality environment including a virtual scene having a plurality of virtual objects arranged therein; displaying the virtual reality environment including the virtual scene and the plurality of virtual objects on a head-mounted display, the head-mounted display being communicatively coupled to the processor; and measuring the performance of the user using at least one performance metric when the processor receives an input that a user has selected an object of the plurality of virtual objects.
In still a further aspect, the invention relates to a method of evaluating visual impairment of a user including: generating, using a processor, a virtual driving course for the user to navigate; displaying portions of the virtual driving course on a head-mounted display as the user navigates the virtual driving course, the head-mounted display being communicatively coupled to the processor; and measuring the progress of the user as the user navigates the virtual driving course using at least one performance metric.
Additional aspects of these inventions also include non-transitory computer readable storage media having stored thereon sequences of instructions for a processor to execute the foregoing methods and those discussed further below. Similarly, additional aspects of the invention include systems configured to be used in conjunction with these methods.
These and other aspects of the invention will become apparent from the following disclosure.
In a preferred embodiment of the invention, a functional vision assessment is conducted using a virtual reality system 100 and a virtual reality environment 200 developed for this assessment. In one embodiment, the functional vision assessment is a navigation assessment using a virtual navigation course 202. The virtual navigation course 202 may be used to assess the progression of a patient's disease or the efficacy or benefit of his or her treatment. The patient or user 10 navigates the virtual navigation course 202, and the time to completion and various other performance metrics can be measured to determine the patient's level of visual impairment; those metrics can also be stored and compared across repeated navigations by the patient (user 10).
A virtual navigation course 202 has technical advantages over physical navigation courses. For example, the virtual reality navigation course 202 of this embodiment is readily portable. The virtual navigation course 202 only requires a virtual reality system 100 (including, for example, a head-mounted display 110 and controllers 120) and a physical room 20 of sufficient size to use the virtual reality system 100. In contrast, the physical navigation course requires all the components and objects in the room to be shipped to and stored onsite. The physical room 20 used for the virtual reality navigation course can be a smaller size than the room used for the physical navigation courses. “Installation” or setup of the virtual navigation course 202 is as simple as starting up the virtual reality system 100 and offers the ability for instant, randomized course reconfiguration. In contrast, the physical navigation courses are time- and labor-intensive to install and reconfigure. Additionally, the environment the patient sees in the virtual navigation course can be adjusted in numerous ways that can be used in the visual impairment evaluation, including by varying the illumination and brightness levels, as discussed below, the chromatic range, and other controlled image patterns that would be difficult to precisely change and measure in a non-virtual environment.
Another disadvantage of the physical navigation courses is the time-consuming process required to calibrate the illuminance of the course correctly. When the physical navigation course is established, a lighting calibration is conducted at about one-foot increments along the total length of the path of the physical maze. This calibration is then repeated at the same one-foot increments for every different level of light at which the physical navigation course will be used. In addition, spot verification needs to be performed periodically (such as each day of testing) to confirm that the physical navigation course is properly calibrated and the conditions have not changed. In contrast, the virtual reality environment 200 and virtual reality system 100 offer complete control of lighting conditions without the need for frequent recalibration. The head-mounted display 110 physically prevents light leakage from the surrounding environment, ensuring consistency across clinical trial sites. Luminance levels of varying difficulty are determined mathematically by the virtual reality system 100. The luminance levels can be verified empirically using, for example, a spot photometer (such as the ColorCal MKII Colorimeter by Cambridge Research Systems Ltd. of Kent, United Kingdom). This empirical verification can be performed by placing the spot photometer over the integrated display 112 of the head-mounted display 110 while the virtual reality system 100 systematically renders different lighting conditions within the exact same virtual scene.
Moreover, scoring for the physical navigation course is done by physical observation by two independent graders and is thus a subjective scoring system with inherent uncertainty. In embodiments discussed herein, the scoring is assessed by the virtual reality system 100 and thus provides more objective scoring, resulting in a more precise assessment of a patient's performance and the progress of his or her disease or treatment. A further cumulative benefit of these advantages is a shorter visit for the patient. In the virtual reality system 100, virtual navigation courses 202 can be customized for each patient without the need for physical changes to the room. Moreover, the system may also be used for visual impairment therapy, whereby the course configurations can be gradually changed as the patient makes progress on improving his or her visual impairment. These and other advantages of this preferred embodiment of the invention will become apparent from the following disclosure.
Still a further advantage of the virtual navigation course 202 over a physical navigation course is that the virtual navigation course 202 can be readily used by patients (users 10) who have physical disabilities other than their vision. For example, a user 10 who is in a wheelchair or who uses a walking assist device (e.g., a walker or crutches) can easily use the virtual navigation course 202, but the typical physical navigation course does not allow for such patients.
The vision assessments discussed herein are performed using a virtual reality system 100. Any suitable virtual reality system 100 may be used. For example, Oculus® virtual reality systems, such as the Oculus Quest®, or the Oculus Rift® made by Facebook Technologies of Menlo Park, CA, may be used. In another example, the HTC Vive® virtual reality systems, including the HTC Vive Focus®, HTC Vive Focus Plus®, HTC Vive Pro Eye®, and HTC Vive Cosmos® headsets, made by HTC Corporation of New Taipei City, Taiwan, may be used. Other virtual reality systems and head-mounted displays, such as Windows Mixed Reality systems, may also be used.
The head-mounted display 110 and the user system 130 are described herein as separate components, but the virtual reality system 100 is not so limited. For example, the head-mounted display 110 may incorporate some or all of the functionality associated with the user system 130. In addition, various functionality and components that are shown in this embodiment as part of the head-mounted display 110, the controller 120, and the user system 130 may be separate from these components. For example, sensors 114 are described as being part of the head-mounted display 110 to track and determine the position and movement of the user 10 and, in particular, the head of the user 10, the hands of the user 10, and/or controllers 120. Such tracking is sometimes referred to as inside-out tracking. However, some or all of the functionality of the sensors 114 may be implemented by sensors located on the physical walls 22 of a physical room 20 (see
In this embodiment, the head-mounted display 110 includes a facial interface 116. The facial interface 116 is a facial interface foam that surrounds the eyes of the user 10 and prevents at least some of the ambient light from the physical room 20 from entering a space between the eyes of the user 10 and the integrated display 112. The facial interface 116 of many of the commercial head-mounted displays 110, such as those discussed above, are contoured to fit the face of the user 10 and fit over the nose of the user 10. In some cases, the facial interface 116 is contoured to have a nose hole such that a gap 118 is formed between the nose of the user 10 and the facial interface 116, as can be seen in
The nose insert 140 is shown in
As shown in
In this embodiment, the sensors 114 are located on the head-mounted display 110, but location of the sensors 114 is not so limited and the sensors 114 may be placed in other locations.
As shown schematically in
The user system 130 also includes non-volatile storage 138 connected to the processor 132 and main memory 134 through the bus 136. The non-volatile storage 138 provides non-volatile storage of computer-readable instructions, data structures, program modules, and other data for the user system 130. These instructions, data structures, and program modules include those used in generating the virtual reality environment 200, which will be discussed below, and those used to carry out the vision assessments, also discussed further below. Typically, the data, instructions, and program modules stored in the non-volatile storage 138 are loaded into the main memory 134 for execution by the processor 132. The non-volatile storage 138 may be any suitable non-volatile storage including, for example, solid state memory, magnetic memory, optical memory, and flash memory.
When the user system 130 is co-located with the head-mounted display 110, the integrated display 112 may be directly connected to the processor 132 by the bus 136. Alternatively, the user system 130 may be communicatively coupled to the head-mounted display 110, including the integrated display 112, using any suitable interface. For example, either wired or wireless connections to the user system 130 may be possible. Suitable wired communication interfaces include USB®, HDMI, DVI, VGA, fiber optics, DisplayPort®, Lightning connectors, and Ethernet, for example. Suitable wireless communication interfaces include, for example, Wi-Fi®, Bluetooth®, and radio frequency communication. The head-mounted display 110 and user system 130 shown in
The user system 130 may determine the position, orientation, and movement of the user 10 based on the sensors 114 for the head-mounted display 110 alone, and subsequently adjust what is displayed on the integrated display 112 based on this determination. The user system 130 and processor 132 are communicatively coupled to the sensors 114 and configured to receive data from the sensors 114. The virtual reality system 100 of this embodiment, however, also optionally includes a pair of controllers 120.
The controller 120 of this embodiment includes various features to enable a user to interface with the virtual reality system 100 and virtual reality environment 200. These user interfaces may include a button 122 such as the “X” and “Y” button shown in
In some embodiments discussed herein, the user 10 walks through a physical room 20 as they navigate a virtual room 220 (discussed further below). However, the invention is not so limited, and the user 10 may navigate the virtual room 220 using other methods. In one example, the user 10 may be stationary (either standing or sitting) and navigate the virtual room 220 by using the thumb stick 124 or other controls of the controller 120. In another example, the user 10 may move through the virtual room 220 as they walk on a treadmill.
In one aspect, hardware that performs a particular function includes a software component (e.g., computer-readable instructions, data structures, and program modules) stored in a non-volatile storage 138 in connection with the necessary hardware components, such as the processor 132, main memory 134, bus 136, integrated display 112, sensors 114 for the head-mounted display 110, button 122, thumb stick 124, sensors 126 for the controller 120, and so forth, to carry out the function. In another aspect, the system can use a processor and computer-readable storage medium to store instructions which, when executed by the processor, cause the processor to perform a method or other specific actions. The basic components and appropriate variations are contemplated depending on the type of device, such as whether the user system 130 is implemented on a small hand-held computing device, a standalone headset, a desktop computer, or a computer server.
In a preferred embodiment of the invention, the functional vision assessment is performed using a navigation course developed in a virtual reality environment 200, which may be referred to herein as a virtual navigation course 202. A patient (user 10) navigates the virtual navigation course 202, and the virtual reality system 100 monitors the progress of the user 10 through the virtual navigation course 202. The performance of the user 10 is then determined by using one or more metrics (performance metrics), which will be discussed further below. In this embodiment, these performance metrics are calculated by the virtual reality system 100, and in particular the user system 130 and processor 132, using data received from the sensors 114 and sensors 126. This functional vision assessment may be repeated over time for a user 10 to assess, for example, the progression of his or her eye disease or improvements from a treatment. For such an assessment over time, the performance metrics from each time the user 10 navigates the virtual navigation course 202 are compared against each other.
The virtual navigation course 202 is stored in the non-volatile storage 138, and the processor 132 displays on the integrated display 112 aspects of the virtual navigation course 202 depending upon input received from the sensors 114. Features of the virtual navigation course 202 will be discussed further below. Various features of the virtual reality environment 200 that are rendered by the processor and shown on the integrated display 112 will generally be referred to as “simulated” or “virtual” objects in order to distinguish them from an actual or “physical” object. Likewise, the term “physical” is used herein to describe a non-simulated or non-virtual object. For example, the room of a building in which the user 10 uses the virtual reality system 100 is referred to as a physical room 20 having physical walls 22. In contrast, a room of the virtual reality environment 200 that is rendered by the processor 132 and shown on the integrated display 112 is a simulated room or virtual room 220. In this embodiment, the virtual navigation course 202 approximates an indoor home environment; however, it is not so limited. For example, the virtual reality environment 200 may resemble any suitable environment, including, for example, an outdoor environment such as a crosswalk, parking lot, or street.
For the functional vision assessment, a patient (user 10) navigates a path 210 through the virtual navigation course 202. The path 210 includes a starting location and an ending location. In this embodiment, the path 210 is set in a simulated room 220 with virtual obstacles. Examples of such virtual rooms are shown in the figures, including a first virtual room 220a (
In this embodiment, each virtual room 220 includes simulated walls 222 and a virtual floor 224. Each virtual room 220 also includes a start position 212 and an exit 214. The start position 212 of the first virtual room 220a is the starting location of the path 210, and the exit 214 of the last room used in the assessment, which in this embodiment is the third virtual room 220c, is the ending location.
The path 210 and direction the user 10 should take to navigate the path 210 is designed to be readily apparent to the user 10. In many instances, the user 10 has but one way to go, with boundaries of the path 210 being used to direct the user 10. Audio prompts and directions, however, may be programmed into the virtual navigation course 202 such that when the processor 132 identifies that the user 10 has reached a predetermined position in the path 210, the processor 132 plays an audio instruction on speakers (not shown) integrated into the head-mounted display 110.
Navigation of the virtual navigation course 202 by a user will now be described with reference to
As can be seen in
The path 210, which is shown by the broken line in
As described below, the user 10 will traverse the path 210 by navigating around each column 302 to reach the checkpoint at the exit 214. After the user stands on the green checkpoint at the exit 214, the virtual room 220 automatically re-configures from the first virtual room 220a to the second virtual room 220b. The user 10 is then instructed to turn around and continue navigating the path 210 in the second virtual room 220b. In other words, the exit 214 of the first virtual room 220a is the start position 212 of the second virtual room 220b. This process is repeated for each virtual room 220 in the virtual navigation course 202. This configuration allows the same physical room 20, such as a 24-foot by 14-foot space, to be used for an infinite number of rooms. The second virtual room 220b and third virtual room 220c are 21 feet by 11 feet, in this embodiment.
When the virtual reality environment 200 is initially loaded and displayed on the integrated display 112, the user is placed at the start position 212 in the first virtual room 220a.
One of the performance metrics used to evaluate the patient's vision and efficacy of any treatment is the time it takes for the user 10 to navigate (traverse) the path 210. In this embodiment, the start position 212 of the first virtual room 220a is the starting location of the path 210, and thus the time is recorded by the virtual reality system 100 when the user 10 starts at the start position 212 of the first virtual room 220a. The time is also recorded when the user 10 reaches various other checkpoints (also referred to as waypoints), such as the exit 214 of each virtual room 220, and the ending location of the path 210, which in this embodiment is the exit 214 of the third virtual room 220c. In this embodiment, the first virtual room 220a includes an intermediate checkpoint 216. Although shown here with only one intermediate checkpoint 216, any suitable number of intermediate checkpoints 216 may be used in each virtual room 220. From these times, the virtual reality system 100 can precisely determine the time it takes for a user 10 to navigate the virtual navigation course 202 and traverse the path 210. When time is recorded for other checkpoints, the time for the user 10 to reach these checkpoints may also be similarly determined.
The virtual reality system 100 also tracks the position of the user 10, and thus the distance a user travels in completing the virtual navigation course 202 can be calculated. Although the virtual navigation course 202 is designed to be readily apparent to the user 10 and there is an optimal, shortest way to traverse the path 210, a user 10 may deviate from this optimal route. The user 10 may, for example, fail to recognize a turn and travel farther, such as closer to a virtual wall 222 or other virtual object, before making the turn, thus increasing the distance traveled by the user 10 in navigating the virtual navigation course 202. The total distance traveled and/or the deviation from the optimal route may be another performance metric used to evaluate the performance of a user 10 in navigating the virtual navigation course 202.
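As a simple illustration of how the elapsed-time and distance-traveled metrics could be derived from tracked head positions, the following is a minimal sketch assuming position samples of the form (timestamp, x, y, z); the function name, sampling format, and optimal-route input are illustrative assumptions rather than the actual implementation of the virtual reality system 100.

```python
import math

def navigation_metrics(samples, optimal_route_length):
    """Compute elapsed time, distance traveled, and deviation from the optimal route.

    samples: list of (timestamp_seconds, x, y, z) head-position samples reported
             while the user traverses the path 210.
    optimal_route_length: length (in meters) of the shortest route through the course.
    """
    elapsed_time = samples[-1][0] - samples[0][0]

    distance = 0.0
    for (_, x0, y0, z0), (_, x1, y1, z1) in zip(samples, samples[1:]):
        # Accumulate the straight-line distance between successive head positions.
        distance += math.dist((x0, y0, z0), (x1, y1, z1))

    return {"elapsed_time_s": elapsed_time,
            "distance_m": distance,
            "deviation_m": distance - optimal_route_length}
```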
A further performance metric used to evaluate the performance of a user 10 in navigating the virtual navigation course 202 is the number of times that the user 10 collides with the virtual objects in each virtual room 220. In the first virtual room 220a, the virtual objects with which the user 10 could collide include, for example, the virtual walls 222 and the column 302. In this embodiment, a collision with a virtual object is determined as follows, although any suitable method may be used. The virtual reality system 100 records the precise movement of the head of the user 10 using the sensors 114 for the head-mounted display 110. As discussed above, these sensors 114 report the real-time position of the head of the user 10. From the real-time position of the head of the user 10, the virtual reality system 100 extrapolates the dimensions of the entire body of the user 10 to compute a virtual box around the user 10. When the virtual box contacts or enters a space in the virtual reality environment 200 in which the virtual objects are located, the virtual reality system 100 determines that a collision has occurred and records this occurrence. Additional sensors on (or that detect) other portions of the user 10, such as the feet, shoulders, and hands (e.g., sensors 126 of the controllers 120), may also be used to determine whether a limb or other body part collided with the virtual object. The functional vision assessment of the present embodiment can thus precisely and accurately determine the number of collisions.
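The collision test itself could be implemented as a simple bounding-box overlap check, as in the sketch below; the body dimensions, axis convention, and function name are assumptions for illustration rather than values taken from the embodiment.

```python
def collides(head_position, obstacle_min, obstacle_max,
             body_width=0.5, body_depth=0.3, body_height=1.7):
    """Axis-aligned bounding-box collision test.

    A box approximating the user's body is extrapolated downward from the tracked
    head position; a collision is recorded when that box overlaps the axis-aligned
    bounds (obstacle_min, obstacle_max) of a virtual object.
    """
    hx, hy, hz = head_position  # y is treated as the vertical axis (head height)
    user_min = (hx - body_width / 2, hy - body_height, hz - body_depth / 2)
    user_max = (hx + body_width / 2, hy, hz + body_depth / 2)

    # Two boxes overlap only if their extents overlap on every axis.
    return all(user_min[i] <= obstacle_max[i] and user_max[i] >= obstacle_min[i]
               for i in range(3))
```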
Still another performance metric used to evaluate the performance of a user 10 in navigating the virtual navigation course 202 is the amount of the course completed at each luminance level (discussed further below). As discussed above, the path 210 contains a plurality of checkpoints, including the exits 214 of each virtual room 220 and any intermediate checkpoints, such as the intermediate checkpoint 216 in the first virtual room 220a. When the user 10 reaches a checkpoint, the virtual reality system 100 records the checkpoint reached by the user 10. If the entire virtual navigation course 202 is too difficult for the user 10 to complete (by becoming stuck and unable to find their way through the path 210 or by hitting more than a predetermined number of virtual objects, such as virtual walls 222 and virtual obstacles), the user 10 may complete only portions of the virtual navigation course 202. When comparing successive navigations of the virtual navigation course 202, such as when evaluating a treatment, the user 10 may be able to complete the same portion of the course faster or potentially complete additional portions of the course (e.g., reach additional checkpoints). Thus, an advantage of the embodiments described herein is that a single course can be used for all participants, accommodating the wide range of visual abilities of the patient population, because an individual user 10 does not have to complete the most difficult portions of the course if they are unable to do so. In contrast, separate physical navigation courses, each with a different level of difficulty, would be required to accommodate the wide range of visual abilities of the patient population.
When the user 10 reaches the exit 214 of the first virtual room 220a, the second virtual room 220b is displayed on the display screen with the user 10 being located in the start position 212 of the second virtual room 220b, as shown in
The plurality of virtual furniture in the second virtual room 220b has a plurality of heights and sizes. The bookcase 316, for example, preferably has a height of at least 5 feet. Other virtual furniture has lower heights; for example, the square table 304 and media console 310 each have a height between 18 inches and 36 inches.
In the second virtual room 220b, the virtual navigation course 202 also includes a plurality of virtual obstacles that can be removed (referred to hereinafter as removable virtual obstacles). In this embodiment, the removable virtual obstacles are located in the path 210 and are toys located on a virtual floor 224 of the second virtual room 220b. The removable virtual obstacles are preferably designed to have a lower height than the virtual furniture used to define the boundaries of the path 210. The user 10 is instructed to remove the obstacles as they are encountered along the path. If the user 10 does not remove the removable virtual obstacle, the user 10 may collide with the obstacle, and the collision may be determined as discussed above for collisions with the virtual furniture. The number of collisions with the removable virtual obstacles is another example of a performance metric used to evaluate the performance of the user 10 and may be evaluated separately or together with the number of collisions with the virtual furniture or other boundaries of the path 210.
The removable virtual obstacles are preferably objects that could be found in a walking path in the real world and in this embodiment are preferably toys, but the removable virtual obstacles are not so limited and may include other items such as colored balls, colored squares, and other items commonly found in a household (e.g., vases and the like). Toys may be particularly preferred as potential users 10 include children (pediatric patients) who have toys in their own household. Additionally, many users have children and/or grandchildren and thus are familiar with toys and would reasonably expect toys to be in a walking path. In this embodiment, the removable virtual obstacles include a multicolored toy xylophone 402, a toy truck 404, and a toy train 406. In this embodiment, the removable virtual obstacles are located on the virtual floor 224, but they are not so limited. Instead, for example and without limitation, the removable virtual obstacles may appear to be floating; that is, they are positioned at approximately eye level (about 5 feet for adult users 10 and lower, such as 2.5 feet, for users 10 who are children) within the path 210. The virtual reality system 100 may use the sensors 114 of the head-mounted display 110 to determine the head height of the user 10 and then place the removable virtual obstacles at head height for the user, for example. The removable virtual obstacles also may randomly appear in the path 210.
Any suitable method may be used to remove the virtual obstacles. In this embodiment, the removable virtual obstacles may be removed by the user 10 looking directly at a virtual obstacle. The user 10 may move his or her head so that the virtual obstacle is located approximately in the center of his or her field of view, such as in the center of the integrated display 112, and hold that position (dwell) for a predetermined period of time. The virtual reality system 100 then removes the virtual obstacle from the virtual reality environment 200. When the virtual reality system 100 includes a controller 120, the virtual reality system 100 may remove the virtual obstacle from the virtual reality environment 200 in response to an input received from a user interface on the controller 120. For example, the user 10 can press a button 122 on the controller 120 with the virtual obstacle in the center of his or her field of view, and in response to the input received from the button 122 the virtual reality system 100 removes the virtual obstacle.
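One possible form of the dwell-to-remove logic is sketched below, assuming the system exposes a per-frame forward (gaze) vector and the vector from the user to each obstacle; the 5-degree angular threshold, 2-second dwell time, and class name are illustrative assumptions only.

```python
import math

def angle_between_deg(gaze_direction, to_obstacle):
    """Angle in degrees between the forward/gaze vector and the vector to an obstacle."""
    dot = sum(g * o for g, o in zip(gaze_direction, to_obstacle))
    norm = (math.sqrt(sum(g * g for g in gaze_direction))
            * math.sqrt(sum(o * o for o in to_obstacle)))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

class DwellRemover:
    """Signals removal of an obstacle once it stays near the center of view long enough."""

    def __init__(self, max_angle_deg=5.0, dwell_seconds=2.0):
        self.max_angle_deg = max_angle_deg  # how close to center of view counts as "looking at"
        self.dwell_seconds = dwell_seconds  # required continuous dwell time
        self._dwell_start = None

    def update(self, now, gaze_direction, to_obstacle):
        """Call once per frame; returns True when the obstacle should be removed."""
        if angle_between_deg(gaze_direction, to_obstacle) <= self.max_angle_deg:
            if self._dwell_start is None:
                self._dwell_start = now
            return (now - self._dwell_start) >= self.dwell_seconds
        self._dwell_start = None  # the user looked away, so the dwell timer resets
        return False
```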
When the user 10 reaches the exit 214 of the second virtual room 220b, the third virtual room 220c is displayed on the display screen with the user 10 being located in the start position 212 of the third virtual room 220c, as shown in
In this embodiment, the second virtual room 220b and the third virtual room 220c have different contrasts. The second virtual room 220b is a high-contrast room where the virtual obstacles have a high contrast with their surroundings. In this embodiment, the backgrounds, such as the virtual walls 222 and virtual floor 224, have a light color (light tan, in this embodiment), and the virtual obstacles have dark or vibrant colors. Similarly, the removable virtual obstacles of this embodiment are brightly colored children's toys, which stand out from the light, neutral-colored background. On the other hand, the third virtual room 220c is a low-contrast room in which the virtual obstacles have coloring similar to that of the background. For example, the virtual obstacles may be white or gray in color with the background being a light tan or white. With the low-contrast room located after the high-contrast room, the virtual navigation course 202 of this embodiment is progressively more difficult.
The placement of the virtual objects, as well as their color, light intensity, and other physical attributes, may thus be chosen strategically to test for specific visual functions. With color, for example, the objects in the second virtual room 220b are all dark colored, having high contrast with the white walls, and in the third virtual room 220c, all of the objects are white or gray, having low contrast with the white walls and white floor. This increases the difficulty of the third virtual room 220c for participants that have trouble with contrast sensitivity (a specific visual function). In an example of light intensity, the columns 302 in the third virtual room 220c are glowing to make them possible to see for patients with severe vision loss (e.g., light perception vision).
The functional vision assessment may be performed under a plurality of different environmental conditions. In a preferred embodiment of the invention, a user 10 navigates the virtual navigation course 202 under one environmental condition and then navigates the virtual navigation course 202 at least one other time with a change in the environmental condition. Instead of repeating the virtual navigation course 202 under different environmental conditions, this assessment may also be implemented within a single pass through the virtual navigation course 202, with each virtual room of the virtual navigation course 202 having a changed environmental condition.
One such environmental condition is the luminance of the virtual reality environment 200. In one preferred embodiment, the user 10 may navigate the virtual navigation course 202 a plurality of times in a single evaluation period, and with each navigation of the course, the virtual reality environment 200 has a different luminance. For example, the user 10 may navigate the virtual navigation course 202 the first time with the lowest luminance value of 0.1 cd/m2. The virtual navigation course 202 is then repeated with a brighter luminance value of 0.3 cd/m2, for example. Then, the user 10 navigates the course a third time, with another brighter luminance value of 1 cd/m2, for example. In this embodiment, the user 10 navigates the virtual navigation course 202 multiple times, each at a sequentially brighter luminance value between 0.1 cd/m2 and 100 cd/m2. The luminance values are equally spaced (½ log between each light level), and thus the luminance values are 0.5 cd/m2 (similar to the light level on a clear night with a full moon), 1 cd/m2 (similar to twilight), 2 cd/m2 (similar to minimum security risk lighting), 5 cd/m2 (a typical level for lighting on the side of the road), 10 cd/m2 (similar to sunset), 20 cd/m2 (similar to a very dark, overcast day), 50 cd/m2 (similar to the lighting of a passageway or outside working area), and 100 cd/m2 (similar to the lighting in a kitchen). To navigate at the lowest luminance values, the user 10 undergoes about 20 minutes of dark adaptation before starting the test, so that the eyes of the user 10 can adjust to the dark, giving them the best chance possible of being able to navigate the virtual navigation course 202 at the lowest light level. It is thus advantageous to begin the test at the lowest luminance value and sequentially increase the luminance value. This approach also helps to standardize and effectively compare results between different evaluation periods.
One of the performance metrics used may include the lowest luminance value passed. For example, a user may not be able to complete the virtual navigation course 202 at one level, by becoming stuck and unable to find their way through the path 210 or by hitting too many virtual objects such as virtual walls 222 and virtual obstacles. Completing the virtual navigation course 202 at a certain luminance level or having a number of collisions lower than a predetermined value may be considered passing the luminance value.
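As one way of expressing this metric, the sketch below assumes each navigation is summarized by whether the course was completed and how many collisions occurred; the data layout and the collision threshold are illustrative assumptions, not values specified by the embodiment.

```python
def lowest_luminance_passed(results, max_collisions=3):
    """Return the lowest luminance level (cd/m2) the user passed, or None.

    results: mapping of luminance level (cd/m2) -> {"completed": bool, "collisions": int},
    recorded for each navigation of the virtual navigation course.
    max_collisions: illustrative passing threshold for the number of collisions.
    """
    passed_levels = [level for level, outcome in results.items()
                     if outcome["completed"] and outcome["collisions"] <= max_collisions]
    return min(passed_levels) if passed_levels else None

# Example usage with made-up results for three luminance levels.
example = {0.1: {"completed": False, "collisions": 7},
           1.0: {"completed": True, "collisions": 2},
           10.0: {"completed": True, "collisions": 0}}
assert lowest_luminance_passed(example) == 1.0
```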
The head-mounted display 110 may be equipped with eye tracking (an eye tracking enabled device). The virtual reality system 100 could collect data on the position of the eye, which could be used for further analysis. This eye tracking data may be a further performance metric.
As discussed above, the functional vision assessment discussed herein can be used to assess the progress of a patient's disease or treatment over time. The user 10 navigates the virtual navigation course 202 a first time and then after a period of time, such as days or months, the user 10 navigates the virtual navigation course 202 again. The performance metrics of the first navigation can then be compared to the subsequent navigation as an indication of how the disease or treatment is progressing over time. Additional further navigations of the virtual navigation course 202 can then be used to further assess the disease or treatment over time.
With repeated navigation of the virtual navigation course 202, there is a risk that the user 10 may start to “learn” the course. For example, the user 10 may remember the location of the virtual obstacles, and the virtual navigation course 202 thus loses its effectiveness as an assessment tool. To avoid this, one of a plurality of unique course configurations (16 unique course configurations in this embodiment, for example) is selected at random at the start of the assessment. Across the plurality of unique course configurations, the total length of the path 210 is kept the same, as are the number of left/right turns and the number of virtual obstacles. The position of the virtual obstacles and the order in which they appear may be changed between each of the plurality of unique course configurations. Likewise, the position and orientation of the various virtual furniture also may be changed between each of the plurality of unique course configurations.
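A minimal sketch of this randomization step is shown below, assuming each configuration is described by its path length, turn count, and obstacle count; the dictionary keys and validation helper are illustrative assumptions.

```python
import random

def choose_course_configuration(configurations, rng=random):
    """Pick one of the pre-built unique course configurations at random."""
    return rng.choice(configurations)

def validate_configurations(configurations):
    """Check the randomization invariants: path length, turn count, and obstacle count
    are the same across every configuration, so only the layout the user might
    memorize changes between assessments."""
    reference = configurations[0]
    for config in configurations[1:]:
        assert config["path_length"] == reference["path_length"]
        assert config["num_turns"] == reference["num_turns"]
        assert config["num_obstacles"] == reference["num_obstacles"]
```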
As described above, the environmental conditions, such as luminance and contrast, are static. The luminance level is set at the same level for all three virtual rooms 220. Likewise, the contrast is generally the same within each of the first virtual room 220a, second virtual room 220b, and third virtual room 220c. The invention, however, is not so limited and other approaches could be taken, including, for example, making the environmental conditions dynamic. For example, either one or both of the luminance level and contrast could be dynamic, such that either parameter increases or decreases in a continuous fashion as the user navigates the virtual navigation course 202.
A preferred implementation of the functional vision assessment is described as follows. In this embodiment, the functional vision assessment using the virtual navigation course 202 involves a 20-minute period of dark adaptation before the user 10 attempts to navigate the virtual navigation course 202 at increasing levels of luminance. When the user 10 completes the virtual navigation course 202 (or is unable to continue navigating the virtual navigation course 202), a technician may ensure the participant is correctly aligned before moving on to the next luminance level. With a click of a button, a new course configuration is randomly chosen from the 16 unique course configurations with the same number of turns and/or obstacles.
The base course configuration for the virtual navigation course 202 is, as described in more detail above, designed with a series of three virtual rooms 220 (first virtual room 220a, second virtual room 220b, third virtual room 220c) and four checkpoints (the exit 214 of each virtual room 220 and the intermediate checkpoint 216) that permit the participant (user 10) to complete only a portion of the virtual navigation course 202 if the remainder of the virtual navigation course 202 is too difficult to navigate. The first virtual room 220a, which may be referred to herein as the Glowing Column Hallway, is designed to simulate a hallway with dark virtual walls 222 and virtual floor 224 and four tall columns 302. As the luminance (cd/m2) level increases, the luminance emitted from the columns 302 increases. The Glowing Column Hallway is the easiest of the three virtual rooms 220 to navigate and may be designed for participants with severe vision loss (e.g., Light Perception only or LP vision). The second virtual room 220b, herein referred to as the High Contrast Room, is a 21-foot by 11-foot room with light virtual walls 222 and virtual floor 224 and dark colored virtual furniture (virtual obstacles) that delineates the path 210 the participant (user 10) should traverse. At various points along the path, there are brightly colored virtual toys (removable virtual obstacles) obstructing the path 210 that can be removed if the participant looks directly at the toy and presses a button 122 on the controller 120 in their hand. The third virtual room 220c, herein referred to as the Low Contrast Room, is similar to the High Contrast Room (second virtual room 220b), but has an increased number of turns and an increased overall length, and all of the objects (both virtual furniture and virtual toys) are white and/or grey, providing very low contrast with the virtual walls 222 and virtual floor 224 in the third virtual room 220c.
A study was conducted to assess the reliability and construct validity of the virtual navigation course 202. This study was conducted using 30 healthy volunteers, having approximately 20/20 vision or vision that is corrected to approximately 20/20 vision. The study participants ranged in age from 25 years old to 44 years old. Forty percent of them were female and 57% wore glasses or contacts.
The study was conducted over 3 weeks. Each participant (user 10) was tested five times. In the first and second weeks, the participant (user 10) conducted a test and a retest, and in the third week, the participant (user 10) conducted a single test. Each test or retest comprised the user 10 navigating the path 210 of the virtual navigation course 202 discussed above three different times. The environmental condition of luminance level was changed between each of the three times the user 10 navigated the path 210. The first time the user 10 traversed the path 210, the luminance level was set at 1 cd/m2. The second time the user 10 traversed the path 210, the luminance level was set at 8 cd/m2. The third time the user 10 traversed the path 210, the luminance level was set at 100 cd/m2.
Some of the participants conducted each test under simulated visual impairment conditions.
The performance metrics evaluated in this study included the lowest luminance level passed (measured in cd/m2), the time to complete the virtual navigation course 202, the number of virtual obstacles hit, and the total distance traveled.
The study showed that, after applying the Hochberg multiplicity correction, no significant test-retest differences were detected for any performance metric when considered within week, luminance level, and impairment condition, with two exceptions. There were test-retest differences detected for the two groups with the worst impairment at the middle luminance level (8 cd/m2) for the first week only. As can be seen in
The study also showed that many significant differences were detected between groups with simulated visual impairment for the time to complete the virtual navigation course 202, and most of these differences were detected at the lowest luminance levels (1 cd/m2 and 8 cd/m2), as shown in
The virtual reality system 100 discussed herein may be used for additional vision assessments beyond the functional vision assessment using the virtual navigation course 202. Unless otherwise stated, each of the vision assessments described in the following sections uses the virtual reality system 100 discussed above, and features of one virtual reality environment 200 described herein may be applicable to the other virtual reality environments 200 described herein. Where a feature or a component in the following vision assessments is the same as or similar to those discussed above, the same reference numeral will be used for these features and components and a detailed description will be omitted.
Many visual acuity assessments use a standard eye chart, such as the Early Treatment Diabetic Retinopathy Study (“ETDRS”) chart. However, patients with very low vision, such as patients from No Light Perception (NLP) to 20/800 vision, are unable to read the letters of the ETDRS chart. Existing methods for assessing the visual acuity of these patients have poor granularity. Such methods typically use different letter sizes at discrete intervals. For patients with very low vision, these intervals are large (having, for example, a LogMAR value of 0.2 between letter sizes). There is thus a large unmet need in clinical trials for a low vision visual acuity assessment with more granular scoring than those available on the market. The low vision visual acuity test (low vision visual acuity assessment) of this embodiment uses the virtual reality system 100 and a virtual reality environment 500 that allows for higher resolution scoring of patients with very low vision.
In the virtual reality environment 500 of this embodiment, the user 10 is presented with a virtual object having a high contrast with the background. In this embodiment, the virtual objects are black and the background (such as the virtual walls 222 and/or virtual floor 224 of the virtual room 220) is white or another light color. The black virtual objects of this embodiment change size or change the virtual distance from the user 10. In this embodiment of the low vision visual acuity test, the user 10 is asked to complete two different tasks. The first task is referred to herein as the Letter Orientation Discrimination Task and the second task is referred to herein as the Grating Resolution Task. In some cases, the user 10 may be unable to complete the Grating Resolution Task. In such a case, the user 10 will be asked to complete an alternative second task (a third task), which is referred to herein as the Light Perception Task.
The virtual reality environment 500 for the Letter Orientation Discrimination Task is shown in
The sensors 114 and/or sensors 126 of the virtual reality system 100 identify the direction that the user 10 is pointing, and the virtual reality system 100 records the size of the letter in response to input received from the button 122 of the controller 120 when pressed by the user 10. In this embodiment, the performance metrics for the Letter Orientation Discrimination Task are related to the size of the alphanumeric character 512. Such performance metrics may thus include minimum angle of resolution measurements for the alphanumeric character 512, such as MAR and LogMAR. MAR and LogMAR may be calculated using standard methods such as those described by Kalloniatis, Michael and Luu, Charles in the chapter on “Visual Acuity” from Webvision (Moran Eye Center, Jun. 5, 2007, available at https://webvision.med.utah.edu/book/part-viii-psychophysics-of-vision/visual-acuity/ (last accessed Feb. 20, 2020)), the disclosure of which is incorporated by reference herein in its entirety.
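For reference, one standard way MAR and LogMAR can be derived from the apparent letter size and viewing distance is sketched below; the function name and example values are illustrative, and the virtual reality system 100 may compute these measures differently.

```python
import math

def mar_and_logmar(letter_height_m, viewing_distance_m):
    """Compute MAR (arcminutes) and LogMAR from letter height and viewing distance.

    A standard optotype is drawn on a 5x5 grid, so the minimum angle of
    resolution is one fifth of the angle subtended by the whole letter.
    """
    letter_angle_arcmin = math.degrees(
        2 * math.atan(letter_height_m / (2 * viewing_distance_m))) * 60
    mar_arcmin = letter_angle_arcmin / 5
    return mar_arcmin, math.log10(mar_arcmin)

# Example: a letter about 8.7 mm tall viewed from 6 m subtends roughly 5 arcminutes,
# giving a MAR of about 1 arcminute and a LogMAR of about 0.0 (20/20 vision).
mar, logmar = mar_and_logmar(0.0087, 6.0)
```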
The alphanumeric character 512 may appear in one of a plurality of different directions. In this embodiment, there are four possible directions the alphanumeric character 512 may be facing. These directions are described herein relative to the direction the user 10 would point.
For the low vision visual acuity test of this embodiment, the Letter Orientation Discrimination Task is repeated a plurality of times. Each time the Letter Orientation Discrimination Task is repeated, one alphanumeric character 512 from a plurality of alphanumeric characters 512 is randomly chosen, and the direction the alphanumeric character 512 faces is also randomly chosen from the plurality of directions. In the embodiment described above, the alphanumeric character 512 appears at a fixed distance from the user 10 in the virtual reality environment 500 and gradually and continuously gets larger. In alternative embodiments, the alphanumeric character 512 could appear to get closer to the user 10, either by automatically and continuously moving toward the user 10 or by the user 10 walking/navigating toward the alphanumeric character 512 in the virtual reality environment 500.
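The overall trial flow could take a form like the sketch below; the letter set, the four direction labels, the growth rate, and the render and input callbacks are illustrative placeholders rather than details taken from the embodiment.

```python
import random

DIRECTIONS = ("up", "down", "left", "right")  # assumed labels for the four directions
LETTERS = ("E", "C")                          # illustrative set of optotypes

def letter_orientation_trial(render, read_response,
                             start_size=0.01, growth_per_frame=0.0005):
    """Run one Letter Orientation Discrimination trial.

    render(letter, direction, size) draws the letter at the given size, and
    read_response() returns the direction indicated with the controller (or None
    if no input was given this frame). Both stand in for the virtual reality
    system's own rendering and input handling.
    """
    letter = random.choice(LETTERS)
    direction = random.choice(DIRECTIONS)
    size = start_size
    while True:
        render(letter, direction, size)
        response = read_response()
        if response is not None:
            # The size at the moment of response is the basis for MAR/LogMAR scoring.
            return {"letter": letter, "direction": direction,
                    "size_at_response": size, "correct": response == direction}
        size += growth_per_frame
```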
Next, the user 10 is asked to complete the Grating Resolution Task. The virtual reality environment 500 for the Grating Resolution Task is shown in
The grating 514 appears in the virtual reality environment 500 on the virtual screen 502 with each bar having an initial width. The width of each bar in the grating 514 then increases in size in a continuous manner (as the width increases, the number of bars decreases).
As with the Letter Orientation Discrimination Task, for the low vision visual acuity test of this embodiment, the Grating Resolution Task may be repeated a plurality of times. Each time the Grating Resolution Task is repeated, one grating 514 from a plurality of gratings 514 is randomly chosen and displayed on the virtual screen 502.
If the participant is unable to complete the Grating Resolution Task, a Light Perception Task will be performed. In this task, the integrated display 112 of the head-mounted display 110 will display a completely white light with 100% brightness. The completely white light will be displayed after a predetermined amount of time. The predetermined amount of time will be selected from a plurality of predetermined amounts of time, such as a randomly selected time between 1 and 15 seconds. The participant is instructed to click the button 122 of the controller 120 when they can see the light. In response to an input received from the button 122 of the controller 120, the virtual reality system 100 determines the amount of time between when the light was displayed on the integrated display 112 and when the input is received (the user 10 presses the button 122). In this embodiment, the brightness is 100%, but the invention is not so limited, and in other embodiments the brightness of the light displayed on the integrated display 112 may be varied.
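A minimal sketch of this trial is given below, assuming blocking helper functions for the display and controller; the delay range matches the 1 to 15 second example above, while the polling approach and function names are illustrative assumptions.

```python
import random
import time

def light_perception_trial(show_white_light, button_pressed,
                           min_delay_s=1.0, max_delay_s=15.0):
    """Run one Light Perception trial and return the reaction time in seconds.

    show_white_light() switches the integrated display to a full-brightness white
    screen, and button_pressed() reports whether the controller button is pressed;
    both stand in for the virtual reality system's display and input handling.
    """
    time.sleep(random.uniform(min_delay_s, max_delay_s))  # randomized waiting period
    shown_at = time.monotonic()
    show_white_light()
    while not button_pressed():
        time.sleep(0.001)  # poll the controller until the user responds
    return time.monotonic() - shown_at
```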
Although the three tasks are described as part of the same test, in this embodiment each of the tasks may be used individually or in different combinations to provide a low-vision visual acuity assessment.
The low-vision visual acuity assessment discussed above is designed for patients with very low vision, for whom standard eye charts are not sufficient. Visual acuity assessment for other patients using the Early Treatment Diabetic Retinopathy Study (ETDRS) protocol may also benefit from using the virtual reality system 100 discussed herein. As discussed above, the virtual reality system 100 discussed herein allows standardized lighting conditions for visual assessments at a wide variety of locations, including the home, that would not otherwise be suitable for the assessment. The virtual reality system 100 discussed herein could thus allow for remote assessment of visual acuity, such as at home under standardized lighting conditions.
In the virtual reality environment 520 of this embodiment, the user 10 is presented with a virtual eye chart 522 on a virtual wall 222 of a virtual room 220. The eye chart 522 may be any suitable eye chart, including, for example, an eye chart using the ETDRS protocol. The eye chart 522 is not so limited, however, and any suitable alphanumeric or symbol/image-based eye chart may be utilized. The eye chart includes a plurality of lines of alphanumeric characters, each line having at least one alphanumeric character. The alphanumeric characters in a first line of alphanumeric characters 524 are a different size than the alphanumeric characters in a second line of alphanumeric characters 526. When, for example, symbol/image-based eye charts are used, each line includes at least one character (image or symbol) and characters in a first line are a different size than the characters in a second line.
The virtual reality environment 520 of this embodiment is shown in
The visual acuity assessment could be managed by a technician. When managed by a technician, the technician can toggle between different eye charts using a computer (not shown) communicatively coupled to the user system 130. Any suitable connection may be used, including for example, the internet, where the technician is connected to the user system 130 using a web interface operable on a web browser of the computer. The technician can toggle between the plurality of different eye charts (three in this embodiment), and virtual reality system 100, in response to an input received from the user interface associated with the technician, displays one of the plurality of eye charts as the virtual eye chart 522 on the virtual wall 222. The technician can move an arrow 528 up or down to indicate which line the user 10 should read, and virtual reality system 100, in response to an input received from the user interface associated with the technician, positions the arrow 528 to point to a row of the virtual eye chart 522. The arrow 528 is an example of an indication indicating which line of the virtual eye chart 522 the user 10 should read, and this embodiment is not limited to using an arrow 528 as the indication. Where the technician is located locally with the user 10, the technician could use the controller 120 of the virtual reality system 100 to move the arrow 528.
The process for moving the arrow 528 is not so limited and may, for example, be automated. In this embodiment, for example, the virtual reality system 100 may include a microphone and voice recognition software. The virtual reality system 100 could determine, using the voice recognition software, whether the user 10 says the correct letter as the user 10 reads aloud the letters on the virtual eye chart 522. The virtual reality system 100 then moves the arrow 528, starting at the top line and moving down the chart as correct letters are read.
The performance metrics for visual acuity assessment of this embodiment may be measured in the number of characters (such as the number of alphanumeric characters) correctly identified and the size of those characters. As with the low vision visual acuity assessment, the performance metric related to the size of the character may be calculated as MAR and LogMAR, as discussed above.
The head-mounted display 110 may include the ability to track the user's eye movements using a sensor 114 of the head-mounted display 110 while the user 10 performs tasks. The virtual reality system 100 then generates eye movement data. The eye movement data can be uploaded (automatically, for example) to a server using the virtual reality system 100, and a variety of outcome variables can be calculated that evaluate oculomotor instability. The oculomotor instability assessment of this embodiment may use the virtual reality environment 500 of the low vision visual acuity assessment discussed above. The user 10 stares at a target 504, which may be the virtual screen 502 (left blank) or another object, such as the alphanumeric character 512, for example. The oculomotor instability assessment is not limited to these environments, and other suitable targets for the user 10 to stare at may be used.
As the user 10 stares at the target, the head-mounted display 110 tracks the location of the center of the pupil and generates eye tracking data. The eye tracking data can then be analyzed to calculate performance metrics. One such performance metric may be median gaze offset, which is the median distance from the actual pupil location to normal primary gaze (staring straight ahead at the target). Another performance metric may be variability (2 SD) of the radial distance between the actual pupil location and primary gaze. Other metrics could be the interquartile range (IQR) or the median absolute deviation from the normal primary gaze.
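These fixation-stability metrics could be computed from the recorded samples as in the following sketch, assuming each sample is an (x, y) offset of the pupil from primary gaze; the function name and input format are illustrative assumptions.

```python
import math
import statistics

def gaze_stability_metrics(pupil_offsets):
    """Compute fixation-stability metrics from eye-tracking samples.

    pupil_offsets: list of (dx, dy) offsets of the tracked pupil position from
    normal primary gaze, in whatever unit the eye tracker reports.
    """
    radial = [math.hypot(dx, dy) for dx, dy in pupil_offsets]  # radial distance per sample
    median_offset = statistics.median(radial)
    q1, _, q3 = statistics.quantiles(radial, n=4)
    return {
        "median_gaze_offset": median_offset,
        "variability_2sd": 2 * statistics.stdev(radial),
        "iqr": q3 - q1,
        "median_absolute_deviation": statistics.median(
            abs(r - median_offset) for r in radial),
    }
```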
Geographic atrophy, glaucoma, or any (low vision) ocular condition, including inherited retinal dystrophies, may also be assessed using the virtual reality system 100 discussed herein. One such assessment may include presenting the user 10 with a plurality of scenes (or scenarios) and asking the user 10 to identify one virtual item of a plurality of virtual items within the scene. In such scenarios, the user 10 could virtually grasp or pick up the item, point at the item and click a button 122 of the controller 120, and/or read or say something that will confirm they saw the item. When the head-mounted display 110 is equipped with eye tracking software and devices, the virtual reality system 100 can monitor the eye of the user 10 and, if the user 10 fixated on the intended object, determine that the user 10 saw the requested item. In this embodiment, the virtual reality system 100 and virtual reality environment 550 for this test may include audio prompts to tell the participant what item to identify.
Any suitable scenes or scenarios could be used. As with the virtual navigation course 202 discussed above, each of the scenes of the virtual reality environment 550 could have various different luminance levels to test the user 10 in both well-lit and poorly lit environments. In this embodiment, the luminance level may be chosen in randomized fashion.
Another scenario includes, for example, a plurality of objects arrayed on a table, such as the objects shown in
Further scenarios may include facial recognition tasks. One type of facial recognition task may be an odd-one-out task, where the user 10 identifies the face that is different (the odd one) from the others presented. The odd-one-out task could help eliminate effects of memory as compared to other memory tasks. In the odd-one-out facial recognition task, four virtual people may be located in a virtual room 220, such as a room that simulates a hallway, and walk toward the user 10. Alternatively, the user 10 could walk toward the four virtual people. Each of the four virtual people would have the same height, hair, clothing, and the like, but one of the four virtual people would have slightly different facial features (“the odd virtual person”). The user 10 would be asked to identify the odd virtual person by, for example, pointing at the odd virtual person and pressing a button 122 of the controller 120.
Another functional vision assessment that may be used to assess, for example, geographic atrophy, glaucoma, or other (low vision) ocular conditions includes a driving assessment. As with the virtual navigation course 202 and virtual reality environment 550 discussed above, the virtual reality environment 550 could have tasks with various different luminance levels to test the user 10 in both well-lit and poorly lit environments.
The controller 120 may be used for driving. For example, different buttons 122 of the controller 120 may be used to accelerate and brake, and the controller 120 may be rotated (or the thumb stick 124 used) to steer. As shown in
The performance metrics used in this embodiment may be based on reaction time. For example, the virtual reality system 100 may measure the reaction time of the user 10 by comparing the time the virtual person 564 starts crossing the road 562 with the time the virtual reality system 100 receives input from the pedal assembly 150 that the user 10 has depressed the brake pedal 154. Other suitable performance metrics may also be used, including for example, whether or not the user 10 successfully brakes in time to prevent a collision with the virtual person 564.
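A simple way this reaction-time metric could be computed from logged event times is sketched below; the function name and event representation are illustrative assumptions rather than details of the embodiment.

```python
def braking_reaction_time(pedestrian_start_time, brake_times, collision_time=None):
    """Compute the braking reaction time for the driving assessment.

    pedestrian_start_time: time at which the virtual person 564 starts crossing the road 562.
    brake_times: sorted times at which brake-pedal input (brake pedal 154) was received.
    collision_time: time of a collision with the virtual person 564, or None if none occurred.
    Returns (reaction_time_seconds or None, braked_before_collision).
    """
    braked_at = next((t for t in brake_times if t >= pedestrian_start_time), None)
    reaction_time = None if braked_at is None else braked_at - pedestrian_start_time
    braked_in_time = braked_at is not None and (
        collision_time is None or braked_at < collision_time)
    return reaction_time, braked_in_time
```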
Although this invention has been described with respect to certain specific exemplary embodiments, many additional modifications and variations will be apparent to those skilled in the art in light of this disclosure. It is, therefore, to be understood that this invention may be practiced otherwise than as specifically described. Thus, the exemplary embodiments of the invention should be considered in all respects to be illustrative and not restrictive, and the scope of the invention to be determined by any claims supportable by this application and the equivalents thereof, rather than by the foregoing description.
This application is a continuation of U.S. patent application Ser. No. 17/180,130, filed Feb. 19, 2021, which claims the benefit under 35 USC 119(e) of U.S. Provisional Application No. 62/979,575, filed Feb. 21, 2020, both of which are herein incorporated by reference in their entireties and serve as the basis of a priority and/or benefit claim for the present application.
Provisional application: 62/979,575, filed Feb. 2020, US. Parent application: 17/180,130, filed Feb. 2021, US. Child application: 18/781,786, US.