The present disclosure relates to electronic devices having a graphical display, and in particular, to a vision correction system, method and graphical user interface for implementation on such electronic devices.
The operating systems of current electronic devices having graphical displays offer certain “Accessibility” features built into the software of the device to attempt to provide users with reduced vision the ability to read and view content on the electronic device. Specifically, current accessibility options include the ability to invert images, increase the image size, adjust brightness and contrast settings, bold text, view the device display only in grey, and for those with legal blindness, the use of speech technology.
These techniques focus on the limited ability of software to manipulate display images through conventional image manipulation, with limited success. Other techniques, as reported for example in Fu-Chung Huang, Gordon Wetzstein, Brian A. Barsky, and Ramesh Raskar. “Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays”. ACM Transactions on Graphics, xx:0, August 2014, the entire contents of which are hereby incorporated herein by reference, have resulted either in a low-contrast image, a low-resolution image, or both. In any event, current techniques have thus far failed to provide a reliable solution for electronic device users having reduced visual acuity and who may wish to interact with their device's graphical display without the use of corrective eyewear, for example.
Furthermore, current techniques generally involve device-specific implementations based on device-resident image adjustment controls and parameters requiring direct user configuration.
This background information is provided to reveal information believed by the applicant to be of possible relevance. No admission is necessarily intended, nor should be construed, that any of the preceding information constitutes prior art.
The following presents a simplified summary of the general inventive concept(s) described herein to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention beyond that which is explicitly or implicitly described by the following description and claims.
A need exists for a vision correction system, method and graphical user interface for implementation on electronic devices having a graphical display, that overcome some of the drawbacks of known techniques, or at least, provide a useful alternative thereto. Some aspects of this disclosure provide embodiments of such systems, methods, GUIs and devices.
In accordance with one aspect, there is provided an electronic device for use by a prescribed user having reduced visual acuity, the device comprising: a digital display; a hardware processor; and a computer-readable medium having statements and instructions stored thereon for execution by said hardware processor in correcting an output image to be rendered by said digital display in accordance with a designated image correction function, wherein said image correction function receives as input at least one designated user-specific vision correction parameter selected from a plurality of available correction parameters to correspond with the reduced visual acuity of the user and thereby output a correspondingly corrected output image; wherein output of said correspondingly corrected output image via said digital display at least partially compensates for the user's reduced visual acuity.
In one embodiment, said digital display comprises a light-field display having a digital output display screen and a light-field display optics layered thereon and defined by at least one light-field optics parameter, and wherein said image correction function comprises an image pre-filtering function that receives as input said at least one light-field optics parameter and said at least one designated user-specific vision correction parameter to output said correspondingly corrected output image via said light-field display. In one such embodiment, the light-field display optics comprises a pinhole mask forming a parallax barrier light-field display. In another such embodiment, the light-field display optics comprises a lenslet array.
In one embodiment, said computer-readable medium has further statements and instructions stored thereon for execution by said hardware processor to implement and render an interactive graphical user interface (GUI) on said display, wherein said interactive GUI incorporates a dynamic vision correction scaling function that dynamically adjusts said at least one vision correction parameter in real-time in response to a designated user interaction therewith via said GUI.
In one such embodiment, said dynamic vision correction scaling function comprises a graphically rendered scaling function and wherein said designated user interaction comprises a continuous slide motion operation, and wherein said GUI is configured to capture and translate a user's given continuous slide motion operation to a corresponding adjustment to said vision correction parameter scalable with a degree of said user's given slide motion operation. In one such embodiment, said graphically rendered scaling function comprises a substantially circular graphical scale and wherein said continuous slide motion operation consists of a substantially circular motion on said substantially circular graphical scale. In another such embodiment, said light-field display comprises a touch-sensitive display and wherein said designated user interaction comprises a recognizable touch-activated gesture on said touch-sensitive display.
In one embodiment, the electronic device further comprises a communication interface operable to communicate over a network with a network-accessible vision correction resource having stored in association therewith said plurality of available correction parameters and a user profile associated with the user; wherein said user profile has stored in association therewith said at least one designated vision correction parameter; and wherein identification of said user profile is communicated by the electronic device to said network-accessible resource via said communication interface to access said at least one designated vision correction parameter therefrom. In one such embodiment, the electronic device further comprises statements and instructions that, when executed by said hardware processor, render a user login interface that receives as input user profile credentials and relays said user credentials to said network-accessible vision correction resource to access said at least one designated vision correction parameter therefrom. In another such embodiment, a given user profile is rendered accessible in response to a corresponding user login via two or more distinct electronic devices. In another such embodiment, said at least one designated vision correction parameter is automatically calculated by a hardware processor associated with said network-accessible resource as a function of at least one user visual acuity factor input by the user via the electronic device and communicated to the network-accessible resource via said communication interface for storage against said user profile, wherein said user visual acuity factor comprises at least one of a user demographic and a predefined user vision correction prescription.
In one embodiment, the device consists of a digital vehicle user interface, a digital watch, or a digital reader.
In one embodiment, the device further comprises an onboard or remotely interfaceable digital camera operable to display an image captured by said camera on said digital display such that said captured image is automatically corrected in accordance with said vision correction function for consumption by the user via said digital display.
In one embodiment, said computer-readable medium has further statements and instructions stored thereon for execution by said hardware processor to implement and render an interactive graphical user interface (GUI) on said digital display, wherein said interactive GUI incorporates a vision toggle function that dynamically toggles responsive to user action between distinct predefined vision correction modes. In one such embodiment, said distinct predefined vision correction modes include a non-corrected mode.
In accordance with another aspect, there is provided a computer-readable medium having statements and instructions stored thereon for execution by a hardware processor to implement a vision correction application on an electronic device having a digital display to at least partially compensate for a user's reduced visual acuity, said statements and instructions executable by said hardware processor to: access at least one designated user vision correction parameter selected from a plurality of available correction parameters to correspond with the reduced visual acuity of the user; correct an output image of the electronic device in accordance with a designated image correction function to output a correspondingly corrected output image, wherein said image correction function receives as input said at least one designated user vision correction parameter; and output said correspondingly corrected output image via said digital display so to at least partially compensate for the user's reduced visual acuity.
In accordance with one embodiment, said computer-readable medium has further statements and instructions stored thereon for execution by said hardware processor to implement and render an interactive graphical user interface (GUI) on said digital display, wherein said interactive GUI incorporates a dynamic vision correction scaling function that dynamically adjusts said at least one vision correction parameter in real-time in response to a designated user interaction therewith via said GUI. In one such embodiment, said dynamic vision correction scaling function comprises a graphically rendered scaling function and wherein said designated user interaction comprises a continuous slide motion operation, and wherein said GUI is configured to capture and translate a user's given continuous slide motion operation to a corresponding adjustment to said vision correction parameter scalable with a degree of said user's given slide motion operation. In one such embodiment, said graphically rendered scaling function comprises a substantially circular graphical scale and wherein said continuous slide motion operation consists of a substantially circular motion on said substantially circular graphical scale.
In one embodiment, the computer-readable medium further comprises statements and instructions to implement and render an interactive graphical user interface (GUI) on said digital display, wherein said interactive GUI incorporates a vision toggle function that dynamically toggles responsive to user action between distinct predefined vision correction modes corresponding to distinct vision correction parameters. In one such embodiment, said distinct predefined vision correction modes include a non-corrected mode.
In one embodiment, said computer-readable medium further comprises statements and instructions to process an image captured by an onboard or remotely interfaceable camera such that said captured image is automatically corrected in accordance with said vision correction function for consumption by the user via said digital display.
In one embodiment, the computer-readable medium is operable to access a display distance parameter representative of a distance between the user and the digital display and execute said vision correction function as a function of said distance. In one such embodiment, said display distance parameter is predefined as an average distance of the display screen in operation. In another such embodiment, the computer-readable medium is executable on distinct device types, and wherein said display distance parameter is predefined for each of said distinct device types. In another such embodiment, said average distance is at least partially defined for each given user as a function of a demographic of said given user.
In accordance with another aspect, there is provided a network-enabled vision correction system to implement vision correction on a plurality of electronic devices, each having a digital output display screen, a hardware processor, a computer-readable medium, and a communication interface, the system comprising: a network-accessible vision correction server having stored in association therewith a user profile for each system user, wherein each said user profile has stored in association therewith a respective system user identifier and at least one respective vision correction parameter selected from a plurality of vision correction parameters to at least partially correspond with a reduced visual acuity of said respective system user; a software application executable on each of the devices and comprising statements and instructions executable by the hardware processor thereof in correcting an output image to be rendered by the digital display thereof in accordance with a designated image correction function, wherein said image correction function receives as input said at least one vision correction parameter accessed from a given user profile as selected for a given system user, and thereby outputs a correspondingly corrected output image via said digital display to at least partially compensate for a reduced visual acuity of said given system user.
In one embodiment, the system further comprises a light-field optics to be layered on the digital output display screen of each of the devices, wherein said light-field optics is defined by at least one light-field optics parameter, and wherein said image correction function is configured to account for said light-field optics parameter in correcting said output image.
In one embodiment, said software application further comprises statements and instructions that, when executed by the hardware processor, render a user login, authentication or identification interface that receives as input user profile credentials, authentication or identification metrics, and relays said user credentials or metrics to said server in accessing said at least one vision correction parameter therefrom.
In one embodiment, said at least one vision correction parameter is automatically calculated by a server-accessible hardware processor as a function of at least one user visual acuity factor input by the user via the electronic device and communicated to said server via the communication interface for storage against said user profile, wherein said user visual acuity factor comprises at least one of a user demographic and a predefined user vision correction prescription.
In one embodiment, the system further comprises the plurality of electronic devices.
In one embodiment, said user login interface enables any given user to access its at least one vision correction parameter via respective electronic devices and have any said respective electronic device output said correspondingly corrected output image via said digital display upon successful login therewith.
In one embodiment, said user profile is remotely accessible upon user identification from any of said electronic devices so to execute said correspondingly corrected output image via any of said electronic devices in response to said user identification.
In one embodiment, said electronic devices comprise any one or more of cellular telephones, smartphones, smart watches or other smart devices, onboard vehicle navigation or entertainment systems, network-interfaceable vehicle dashboards and/or controls, and the like.
In accordance with another aspect, there is provided a network-enabled vision correction method to implement vision correction on a plurality of electronic devices, each having a digital output display screen, a hardware processor, a computer-readable medium, and a communication interface, the method comprising: providing access to a vision correction application executable on each of the remote electronic devices to correct an output image to be rendered by the digital display in accordance with a designated image correction function; storing on a remote server a respective user profile for each of a plurality of registered users, and storing in association therewith at least one designated vision correction parameter corresponding with a respective reduced visual acuity for each of said registered user and a respective digital user identifier usable in remotely identifying each of said registered users; receiving at an application server over the network a given digital user identifier from a given registered user operating any given one of the remote electronic devices; the application server: identifying said given registered user against a corresponding stored user profile as a function of said given digital user identifier; retrieving said at least one designated vision correction parameter stored in association therewith; and transmitting said at least one designated vision correction parameter over the network to said given one of the remote electronic devices so to invoke execution of said designated image correction function thereon based at least in part on said at least one designated vision correction parameter and thereby output a correspondingly corrected output image via the digital display to at least partially compensate for a reduced visual acuity of said given registered user.
In one embodiment, the vision correction application is further executable to graphically render a real-time vision correction adjustment interface that dynamically adjusts said at least one designated vision correction parameter in real-time responsive to user interaction with said interface in dynamically adjusting said corrected output image accordingly, and digitally record an adjusted vision correction parameter corresponding to a preferred corrected output image setting selected by said given registered user via said interface, wherein the method further comprises: receiving over the network a vision correction parameter adjustment command at said application server from said given one of the remote electronic devices indicative of said adjusted vision correction parameter; and storing said adjusted vision correction parameter against said given user profile.
As introduced above, and in accordance with some aspects, a method and system are provided for the correction of vision on an electronic device, for instance where a combination of resident software and hardware on a user's electronic device can be dynamically controlled to manipulate the image displayed thereby in order to make the image clearer, at least to some significant level, to users with reduced visual acuity and/or visual impairments, commonly referred to herein as reduced visual acuity. For example, the software and hardware combination may allow for vision corrections similar to those achievable using conventional prescription lenses, adjusting any one or more of a rendered image's hue, contrast, and brightness, for example.
In some embodiments, the system may be configured to invoke a server-based calibration process that not only allows for the centralized management of a user's calibration parameters, which may facilitate, enhance or enable various user-centric account or profile features such as calibration portability between user, public or shared devices, but also allow for the accumulation, tracking and analysis of calibration parameters from multiple users or subscribers. The latter may be used to better predict and deliver more accurate display correction settings to each user based on similarities observed between reported user conditions and selected settings, thus further enabling the provision of visual settings that allow a greater cross section of the population to use their device without the need for corrective lenses.
Other aspects, features and/or advantages will become more apparent upon reading of the following non-restrictive description of specific embodiments thereof, given by way of example only with reference to the accompanying drawings.
Several embodiments of the present disclosure will be provided, by way of examples only, with reference to the appended drawings, wherein:
The systems and methods described herein provide, in accordance with different embodiments, different examples of an electronic device having an adjustable graphical display, and a vision correction system, method and graphical user interface therefor.
Electronic device 100 includes a processing unit 110, a display 120, and internal memory 130. Display 120 can be an LCD screen, a monitor, a plasma display panel, a head-mounted display, or any other type of electronic display. Internal memory 130 can be any form of electronic storage, including a disk drive, optical drive, read-only memory, random-access memory, or flash memory. Memory 130 has stored in it vision correction application 140. Electronic device 100 may optionally include a front-facing camera 150, and an accelerometer 160. Accelerometer 160 is capable of determining the tilt and/or orientation of electronic device 100.
The device of
In the embodiment shown in
where Δx is the pinhole separation and d is the width of spacer 310.
While
In one embodiment, vision correction application 140 runs as a process on processing unit 110 of electronic device 100. As it runs, it pre-filters the output of display 120.
In step 500, a user's vision correction parameters are retrieved from internal memory 130, which may permanently store the user's vision correction parameter(s) or again retrieve them from an external database upon user login and/or client application launch. For instance, in the latter example, the user's current vision correction parameter(s) may be actively stored and accessed from an external database operated within the context of a server-based vision correction subscription system or the like, and/or unlocked for local access via the client application post user authentication with the server-based system.
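By way of illustration only, the retrieval logic of step 500 may be sketched as follows; the function and field names (`local_store`, `fetch_parameters`, `auth_token`) are hypothetical placeholders rather than part of any particular embodiment:

```python
def get_vision_parameters(user_id, local_store, remote_db=None, auth_token=None):
    """Return the user's vision correction parameter(s).

    Prefers the external database once the user has authenticated
    (caching the result locally); otherwise falls back to whatever
    was last stored on the device.
    """
    if remote_db is not None and auth_token is not None:
        # Server-based subscription model: fetch the current parameters
        # and cache them locally for subsequent offline use.
        params = remote_db.fetch_parameters(user_id, auth_token)
        local_store[user_id] = params
        return params
    # No authenticated session: use the locally stored copy, if any.
    return local_store.get(user_id)
```

In this sketch the same call serves both the permanently stored (local) and the server-unlocked (post-authentication) cases described above.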
In optional step 502, on electronic devices that include front-facing camera 150, the distance from the screen to the user is calculated using information retrieved from front-facing camera 150.
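One common way to estimate viewer distance from a front-facing camera image is the pinhole camera relation, distance ≈ focal length (in pixels) × real object width ÷ apparent width (in pixels). The sketch below is illustrative only; the focal length and average face width constants are assumed values, not taken from this disclosure:

```python
def estimate_viewing_distance(face_width_px, focal_length_px=600.0,
                              real_face_width_cm=14.0):
    """Estimate the screen-to-user distance in centimetres from the
    apparent width, in pixels, of the user's face as detected in the
    front-facing camera image, using the pinhole camera model."""
    if face_width_px <= 0:
        raise ValueError("face width must be positive")
    return focal_length_px * real_face_width_cm / face_width_px
```

A face that appears wider in the camera frame yields a proportionally smaller estimated viewing distance.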
In optional step 504, on electronic devices that include accelerometer 160, the tilt and orientation of electronic device 100 are retrieved from accelerometer 160.
In step 506, the vision correction information and, if applicable, the distance from the screen to the user and/or the tilt and orientation of electronic device 100 are used as input to an image pre-filtering function to pre-filter the image.
Several different pre-filtering algorithms may be used for this step, either alone or in combination, including deconvolution algorithms, an iterative Richardson-Lucy algorithm, an all-pass kernel pre-filtering algorithm, and a light field pre-filtering algorithm. Some examples of pre-filtering algorithms are described in Fu-Chung Huang, Gordon Wetzstein, Brian A. Barsky, and Ramesh Raskar. “Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays”. ACM Transactions on Graphics, xx:0, August 2014, the entire contents of which are hereby incorporated herein by reference.
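Of the algorithms listed above, the iterative Richardson-Lucy deconvolution is perhaps the simplest to illustrate. The following is a generic NumPy sketch, assuming a point-spread function the same shape as the image and centred in the array; it is not the specific pre-filter of any embodiment:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=50):
    """Iterative Richardson-Lucy deconvolution using circular (FFT)
    convolution. `psf` must have the same shape as `image` and be
    centred in the array; ifftshift moves it to zero phase so the
    forward and adjoint blur operations do not introduce a shift."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    estimate = np.full_like(image, image.mean())
    for _ in range(iterations):
        # Forward model: blur the current estimate.
        blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * H))
        # Multiplicative correction from the observed/blurred ratio.
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * np.real(
            np.fft.ifft2(np.fft.fft2(ratio) * np.conj(H)))
    return estimate
```

Given a known blur kernel, repeated application of this multiplicative update re-concentrates the image energy that the blur had spread out.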
In step 508, the pre-filtered image is displayed on screen 120 as a corrected output image.
In step 510, the light field emitted from the pre-filtered display passes through pinhole mask 200, and is diffracted by pinhole mask 200.
Through the method illustrated in
In yet another example, the user of a camera-enabled electronic device may use this function, along with the image correction capabilities described herein, to read or view printed or other materials via the electronic device rather than directly. For example, a user of a camera-enabled smartphone may use their smartphone as a visual aid to read a menu at a restaurant or a form at a medical appointment by activating the vision correction application along with a back-facing camera feature of the application (or again simply activating the camera function of the smartphone), and pointing the phone at the hardcopy materials to be viewed. By virtue of the image correction application, while the camera may automatically focus on the image, the rendered image on the screen will be displayed so as to correct for the user's visual acuity and thus, may appear somewhat blurred or out of focus to an individual with perfect vision, but appear perfectly clear to the user as if they were otherwise wearing their glasses.
With reference to
In step 602, a screen, such as that shown for example at
With reference to
With reference to
With reference again to
In step 606, in response to a successful login, the user's information is retrieved from an external database. This information includes preset or current vision correction parameters. This information may also include eye prescription information. The eye prescription information may include the following data: left eye near spherical, right eye near spherical, left eye distant spherical, right eye distant spherical, left eye near cylindrical, right eye near cylindrical, left eye distant cylindrical, right eye distant cylindrical, left eye near axis, right eye near axis, left eye distant axis, right eye distant axis, left eye near prism, right eye near prism, left eye distant prism, right eye distant prism, left eye near base, right eye near base, left eye distant base, and right eye distant base. The eye prescription information may also include the date of the eye exam and the name of the eye doctor that performed the eye exam.
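The prescription record enumerated above may be represented, purely as a hypothetical sketch, by a structured data type; the field names below are illustrative and not mandated by this disclosure:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class EyePrescription:
    """One eye, at one viewing distance (near or distant)."""
    spherical: float = 0.0    # diopters
    cylindrical: float = 0.0  # diopters
    axis: float = 0.0         # degrees
    prism: float = 0.0        # prism diopters
    base: str = ""            # base direction, e.g. "up" or "in"

@dataclass
class PrescriptionRecord:
    """All four eye/distance combinations plus exam metadata."""
    left_near: EyePrescription = field(default_factory=EyePrescription)
    right_near: EyePrescription = field(default_factory=EyePrescription)
    left_distant: EyePrescription = field(default_factory=EyePrescription)
    right_distant: EyePrescription = field(default_factory=EyePrescription)
    exam_date: str = ""   # date of the eye exam
    doctor: str = ""      # eye doctor who performed the exam
```

Grouping the twenty prescription values into four per-eye, per-distance records keeps the external database schema compact while covering every field listed above.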
In step 608, the retrieved information is used to pre-filter the display as described above with respect to the method in
In step 610, the user selects an option to refocus the display, for example via an “edit profile” button rendered on the profile screen.
In step 612, a calibration screen is presented to the user.
In step 614, the user moves input pointer 750 around circular track 720. As input pointer 750 is moved, the vision correction information is updated based on the position of input pointer 750. In addition, the image on display 120 is adjusted based on the updated vision correction information.
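The mapping from the pointer's position on the circular track to an updated vision correction value may be sketched as follows; the diopter range, screen coordinate convention, and function name are assumed for illustration only:

```python
import math

def pointer_to_parameter(x, y, cx, cy, min_diopter=-10.0, max_diopter=10.0):
    """Map an input pointer position on a circular track centred at
    (cx, cy) to a vision correction parameter.

    The angle is measured clockwise from the top of the track
    (12 o'clock) and scaled linearly over one full turn onto
    [min_diopter, max_diopter]. Screen y grows downward.
    """
    # atan2(x - cx, cy - y) is 0 at 12 o'clock and increases clockwise.
    angle = math.atan2(x - cx, cy - y) % (2 * math.pi)
    fraction = angle / (2 * math.pi)
    return min_diopter + fraction * (max_diopter - min_diopter)
```

Because only the angle of the pointer matters, the user can slide anywhere along the track and the corrected image updates continuously with the computed parameter.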
With reference to
The method of inputting and/or calibrating the vision correction information is not limited to the calibration screen shown in
In step 616, the user presses button 740 and the vision correction information is saved in the external database for later retrieval.
In step 618, the user selects an option to input prescription and demographic information.
In step 620, a prescription input screen is presented to the user. This screen includes data entry fields for each of the prescription information settings. The user then inputs the prescription information and the prescription information is saved in internal memory 130.
Alternatively, a left eye prescription input screen may be presented to the user first, followed by a right eye prescription input screen. In this embodiment, the user inputs the corresponding prescription information in each screen and the prescription information is saved in the external database.
In step 622, a demographic information input screen is presented to the user. The screen includes data entry fields for demographic information, e.g. race, sex and age. The user then inputs the demographic information and the demographic information is stored in the external database.
In step 624, the prescription and demographic information is associated with the user's vision correction information in the external database.
In step 626, the user selects an option to enable/disable the vision correcting function of vision correction application 140.
In step 628, if the vision correcting function was enabled, it is disabled. If it was disabled, it is enabled. The state of the vision correcting function, whether enabled or disabled, is stored in internal memory 130 of electronic device 100.
The present disclosure also contemplates a method for recommending vision correction parameters based on a user's prescription and demographic information. This method is described below with respect to
In step 800, the user inputs prescription and demographic information as in steps 620 and 622 of the method of
In step 802, the user's prescription and demographic information are sent to the external database.
In step 804, the external database computes recommended vision correction parameters based on the user's prescription and demographic information. The database computes these parameters using the vision correction parameters of other users with similar prescription and demographic information.
In step 806, the external database sends the recommended vision correction parameters to electronic device 100.
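The recommendation computed in step 804 can be sketched as a simple nearest-neighbour average over other users' stored settings. The feature encoding and the choice of Euclidean distance below are illustrative assumptions, not a description of the actual server logic:

```python
import math

def recommend_parameters(target, others, k=3):
    """Recommend a vision correction parameter for `target` by averaging
    the stored parameters of the k users whose prescription/demographic
    feature vectors are closest in Euclidean distance.

    `target` is a feature vector (e.g. [sphere, cylinder, age]);
    `others` is a list of (feature_vector, correction_parameter) pairs.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    nearest = sorted(others, key=lambda pair: dist(target, pair[0]))[:k]
    if not nearest:
        raise ValueError("no other users to recommend from")
    return sum(p for _, p in nearest) / len(nearest)
```

As more users store their parameters, the pool of neighbours grows and the recommended starting point for a new user's calibration becomes correspondingly more accurate, consistent with the accumulation-and-analysis approach described above.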
While the present disclosure describes various exemplary embodiments, the disclosure is not so limited. To the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the general scope of the present disclosure.
This application is a Continuation of U.S. patent application Ser. No. 15/246,255 filed Aug. 24, 2016, which claims the benefit of priority to Canadian Patent Application No. 2,901,477 filed Aug. 25, 2015, each one of which is incorporated herein by reference in its entirety and for all purposes.
20150262424 | Tabaka et al. | Sep 2015 | A1 |
20150336511 | Ukeda | Nov 2015 | A1 |
20160042501 | Huang et al. | Feb 2016 | A1 |
20160103419 | Callagy et al. | Apr 2016 | A1 |
20160134815 | Ishiguro et al. | May 2016 | A1 |
20160260258 | Lo | Sep 2016 | A1 |
20160306390 | Vertegaal et al. | Oct 2016 | A1 |
20160335749 | Kano | Nov 2016 | A1 |
20170027435 | Boutinon et al. | Feb 2017 | A1 |
20170060399 | Hough et al. | Mar 2017 | A1 |
20170123209 | Spitzer et al. | May 2017 | A1 |
20170212352 | Cobb et al. | Jul 2017 | A1 |
20170227781 | Banerjee et al. | Aug 2017 | A1 |
20170302913 | Tonar et al. | Oct 2017 | A1 |
20170307898 | Vdovin et al. | Oct 2017 | A1 |
20170353717 | Zhou | Dec 2017 | A1 |
20170365101 | Samec et al. | Dec 2017 | A1 |
20170365189 | Halpin et al. | Dec 2017 | A1 |
20180033209 | Akeley | Feb 2018 | A1 |
20180070820 | Fried et al. | Mar 2018 | A1 |
20180084245 | Lapstun | Mar 2018 | A1 |
20180136486 | Macnamara et al. | May 2018 | A1 |
20180203232 | Bouchier et al. | Jul 2018 | A1 |
20180252935 | Vertegaal et al. | Sep 2018 | A1 |
20180290593 | Cho | Oct 2018 | A1 |
20180329485 | Carothers | Nov 2018 | A1 |
20180330652 | Perreault et al. | Nov 2018 | A1 |
20190094552 | Shousha | Mar 2019 | A1 |
20190125179 | Xu et al. | May 2019 | A1 |
20190150729 | Huang et al. | May 2019 | A1 |
20190175011 | Jensen et al. | Jun 2019 | A1 |
20190228586 | Bar-Zeev et al. | Jul 2019 | A1 |
20190246095 | Kishimoto | Aug 2019 | A1 |
20190246889 | Marin et al. | Aug 2019 | A1 |
20190310478 | Marin et al. | Oct 2019 | A1 |
20200012090 | Lapstun | Jan 2020 | A1 |
20200272232 | Lussier et al. | Aug 2020 | A1 |
20210271091 | Xu | Sep 2021 | A1 |
Number | Date | Country |
---|---|---|
2015100739 | Jul 2015 | AU |
9410161 | Dec 1994 | DE |
102004038822 | Mar 2006 | DE |
102016212761 | May 2018 | DE |
102018121742 | Mar 2020 | DE |
102018129600 | May 2020 | DE |
102019102373 | Jul 2020 | DE |
2127949 | Dec 2009 | EP |
1509121 | Sep 2012 | EP |
2589020 | May 2013 | EP |
2678804 | Jan 2014 | EP |
2760329 | Aug 2014 | EP |
2999393 | Mar 2016 | EP |
2547248 | May 2017 | EP |
3262617 | Jan 2018 | EP |
3339943 | Jun 2018 | EP |
3367307 | Dec 2018 | EP |
2828834 | Nov 2019 | EP |
3620846 | Mar 2020 | EP |
3631770 | Apr 2020 | EP |
3657440 | May 2020 | EP |
3659109 | Jun 2020 | EP |
3689225 | Aug 2020 | EP |
3479344 | Dec 2020 | EP |
3059537 | May 2019 | FR |
2003038443 | Feb 2003 | JP |
2011156721 | Dec 2011 | WO |
2013166570 | Nov 2013 | WO |
2014174168 | Oct 2014 | WO |
2014197338 | Dec 2014 | WO |
2015162098 | Oct 2015 | WO |
2017192887 | Nov 2017 | WO |
2017218539 | Dec 2017 | WO |
2018022521 | Feb 2018 | WO |
2018092989 | May 2018 | WO |
2018129310 | Jul 2018 | WO |
2021038430 | Mar 2021 | WO
Entry |
---|
“No Need for Reading Glasses With Vision-Correcting Display”, by Sarah Lewin, taken from ieee.org, pp. 1-3 (Year: 2014). |
Ciuffreda, Kenneth J., et al., Understanding the effects of mild traumatic brain injury on the pupillary light reflex, Concussion (2017) 2(3), CNC36. |
Fielmann Annual Report 2019 (https://www.fielmann.eu/downloads/fielmann_annual_report_2019.pdf). |
Gray, Margot, et al., Female adolescents demonstrate greater oculomotor and vestibular dysfunction than male adolescents following concussion, Physical Therapy in Sport 43 (2020) 68-74. |
Howell, David R., et al., Near Point of Convergence and Gait Deficits in Adolescents After Sport-Related Concussion, Clin J Sport Med, 2017. |
Howell, David R., et al., Receded Near Point of Convergence and Gait are Associated After Concussion, Br J Sports Med, Jun. 2017; 51:e1, p. 9 (Abstract). |
Kawata, K., et al., Effect of Repetitive Sub-concussive Head Impacts on Ocular Near Point of Convergence, Int. J Sports Med 2016; 37: 405-410. |
Murray, Nicholas G., et al., Smooth Pursuit and Saccades after Sport-Related Concussion, Journal of Neurotrauma 36: 1-7 (2019). |
Ventura, Rachel E., et al., Diagnostic Tests for Concussion: Is Vision Part of the Puzzle?, Journal of Neuro-Ophthalmology 2015; 35; 73-81. |
Zahid, Abdullah Bin, et al., Eye Tracking as a Biomarker for Concussion in Children, Clin J Sport Med 2018. |
International Search Report dated Feb. 2, 2021 for International Patent Application No. PCT/US20/58392. 16 pages. |
Huang, F.C., “A Computational Light Field Display for Correcting Visual Aberrations,” Technical Report No. UCB/EECS-2013-206, Electrical Engineering and Computer Sciences University of California at Berkeley, available at http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-206.html, Dec. 15, 2013 (119 pages). |
Huang, F.C. et al., “Eyeglasses-Free Display: Towards Correcting Visual Aberrations With Computational Light Field Displays,” ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2014, vol. 33, Issue 4, Article No. 59, Jul. 2014 (12 pages). |
International Search Report for International Application No. PCT/CA2016/051006 dated Sep. 30, 2016 in 5 pages. |
Written Opinion of the International Searching Authority received in International Application No. PCT/CA2016/051006 dated Sep. 30, 2016 in 9 pages. |
Agus M. et al., “GPU Accelerated Direct Volume Rendering on an Interactive Light Field Display”, Eurographics 2008, vol. 27, No. 2, 2008. |
Burnett T., “FoVI3D Extreme Multi-view Rendering for Light-field Displays”, GTC 2018 (GPU Technology Conference), Silicon Valley, 2018. |
Halle M., “Autostereoscopic displays and computer graphics”, Computer Graphics, ACM SIGGRAPH, 31(2), May 1997, pp. 58-62. |
Masia B. et al., “A survey on computational displays: Pushing the boundaries of optics, computation, and perception”, Computer & Graphics, vol. 37, 2013, pp. 1012-1038. |
Wetzstein, G. et al., “Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting”, https://web.media.mit.edu/~gordonw/TensorDisplays/TensorDisplays.pdf. |
Fattal, D. et al., A Multi-Directional Backlight for a Wide-Angle, Glasses-Free Three-Dimensional Display, Nature, Mar. 21, 2013, pp. 348-351, vol. 495. |
“Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays”, by Huang et al., taken from http://web.media.mit.edu/~gordonw/VisionCorrectingDisplay/, published Aug. 2, 2014, pp. 1-15. |
Maimone, Andrew, et al., “Focus 3D: Compressive accommodation display,” ACM Trans. Graph. 32.5 (2013). |
Pamplona V. F. et al., “Tailored Displays to Compensate for Visual Aberrations,” ACM Transactions on Graphics (TOG), Jul. 2012 Article No. 81, https://doi.org/10.1145/2185520.2185577. |
Pamplona V. F., Thesis (Ph.D.)—Universidade Federal do Rio Grande do Sul. Programa de Pós-Graduação em Computação, Porto Alegre, BR—RS, 2012. Advisor: Manuel Menezes de Oliveira Neto. |
Number | Date | Country | |
---|---|---|---|
20200192561 A1 | Jun 2020 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15246255 | Aug 2016 | US |
Child | 16717023 | US |