Methods and apparatus for contrast sensitivity compensation

Information

  • Patent Grant
  • Patent Number
    11,475,547
  • Date Filed
    Thursday, October 29, 2020
  • Date Issued
    Tuesday, October 18, 2022
Abstract
A system and methods for contrast sensitivity compensation provide for correcting the vision of users whose vision is deficient in discerning high spatial frequencies. The system and methods use measurements of the user's contrast detection as a function of spatial frequency in the image to correct images in real time. The system includes a head-mountable device that includes a camera and a processor that can provide enhanced images at video frame rates.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The present invention generally relates to methods and apparatus to assist users with compromised vision, and more specifically to methods and apparatus for presenting images that are enhanced to compensate for regions of spatial frequency contrast loss.


Discussion of the Background

One characteristic of the visual system is the ability to discern contrast in an image—that is, to recognize variations between light and dark, or between different colors, in a region of the visual field. The visual systems of some individuals are impaired relative to those having “normal” vision in that they have problems discerning objects of a certain size, such as small objects. Another way of considering this impairment is that the visual system has decreased contrast sensitivity for objects having high spatial frequencies in the image.


Thus, for example, patients with macular degeneration lose foveal vision accompanied by a loss of contrast sensitivity at high spatial frequencies.


Currently there are no approved treatments to correct or compensate for a loss of contrast sensitivity, and there are no approved treatments to correct or compensate for a loss of contrast sensitivity that is tailored to the spatial frequency range discernable by a user. There is a need in the art for a method and apparatus that can compensate for a loss of contrast sensitivity.


BRIEF SUMMARY OF THE INVENTION

It is one aspect to provide an apparatus or method to re-establish some, or all, of a user's ability to detect contrast in an image.


It is another aspect to provide an apparatus or method that enhances images based on the discernable spatial frequencies of a user.


It is one aspect to provide a method of providing enhanced video images to a user using a programmable electronic device. The method includes: obtaining input video images comprising a plurality of input images A; computing, in the programmable electronic device, the application of a contrast enhancement function (CEF) to the plurality of input images A to form a plurality of contrast enhanced images comprising contrast enhanced video images C where the contrast enhancement function is user-specific, and where the contrast enhancement function is frequency-dependent; and presenting, on a display of the programmable electronic device and to the user, the contrast enhanced video images C. The method is such that the contrast enhanced video images are preferentially enhanced at spatial frequencies discernable to the user.


It is another aspect to provide a method of providing enhanced images to a user using a programmable electronic device. The method includes: obtaining input video images comprising a plurality of input images A where each image of the plurality of input images includes a luminance image Y(x, y) and chrominance images CB(x,y), CR(x, y), forming a contrast enhancement function CEFp(u), as








$$\mathrm{CEF}_p(u) = \begin{cases} \dfrac{\mathrm{CSF}_n\!\left(u \times \dfrac{c_n}{c_p}\right)}{\mathrm{CSF}_p(u)} & \text{for } u \le c_p,\\[2ex] 1 & \text{for } u > c_p, \end{cases}$$







where CSFp(u) is a contrast sensitivity function for the user p corresponding to minimum discernable contrasts to the user as a function of spatial frequency u; where cp is a spatial frequency cut-off for the user, which is the maximum spatial frequency at which the user can discern contrast; where CSFn(u) is a contrast sensitivity function for persons, n, having normal contrast sensitivity; and where cn is a spatial cut-off frequency for persons, n, which is the maximum spatial frequency at which such persons can discern contrast. The method further includes computing, in the programmable electronic device, the application of the contrast enhancement function to the plurality of input images A to form a plurality of contrast enhanced images comprising contrast enhanced video images C, where the application includes performing a Fourier transform on the luminance image Y(x, y) to obtain a luminance amplitude AY(u) and phase PY(u), enhancing the luminance amplitude by A′Y(u)=AY(u)×CEFp(u), performing an inverse Fourier transform on A′Y(u) and PY(u) to obtain an enhanced luminance image Y′(x, y), and combining the enhanced luminance image with the unaltered chrominance images to form contrast enhanced video images C(x, y)=[Y′(x, y), CB(x, y), CR(x, y)]; and presenting, on a display of the programmable electronic device and to the user, the contrast enhanced video images C(x, y). The method is such that the contrast enhanced video images are preferentially enhanced at spatial frequencies discernable to the user.


It is yet another aspect to provide a contrast sensitivity compensation system wearable by a user. The system includes: a memory including a stored program; a camera mounted on the user aimed to view the scene in front of the user and operable to obtain input video images of the scene comprising a plurality of input images A; a processor programmed to execute the stored program to compute the application of a contrast enhancement function to the plurality of input images A to form a plurality of contrast enhanced images comprising contrast enhanced video images C where the contrast enhancement function is user-specific, and where the contrast enhancement function is frequency-dependent; and present, to the user on a display of the programmable electronic device, the contrast enhanced video images C.


It is another aspect to provide a contrast sensitivity compensation system wearable by a user. The system includes: a memory including a stored program; a camera mounted on the user aimed to view the scene in front of the user and operable to obtain input video images of the scene comprising a plurality of input images A, where each image of the plurality of input images includes a luminance image Y(x, y) and chrominance images CB(x, y), CR(x, y); a processor programmed to execute the stored program to compute the application of a contrast enhancement function to the plurality of input images A to form a plurality of contrast enhanced images comprising contrast enhanced video images C, where the contrast enhancement function is user-specific and is








$$\mathrm{CEF}_p(u) = \begin{cases} \dfrac{\mathrm{CSF}_n\!\left(u \times \dfrac{c_n}{c_p}\right)}{\mathrm{CSF}_p(u)} & \text{for } u \le c_p,\\[2ex] 1 & \text{for } u > c_p, \end{cases}$$







where CSFp(u) is a contrast sensitivity function for the user p corresponding to minimum discernable contrasts to the user as a function of spatial frequency u; where cp is a spatial frequency cut-off for the user, which is the maximum spatial frequency at which the user can discern contrast; where CSFn(u) is a contrast sensitivity function for persons n having normal contrast sensitivity; where cn is a spatial cut-off frequency for persons n, which is the maximum spatial frequency at which such persons can discern contrast; and where the processor is further programmed to execute the stored program to perform a Fourier transform on the luminance image Y(x, y) to obtain a luminance amplitude AY(u) and phase PY(u), enhance the luminance amplitude by A′Y(u)=AY(u)×CEFp(u), perform an inverse Fourier transform on A′Y(u) and PY(u) to obtain an enhanced luminance image Y′(x, y), and combine the enhanced luminance image with the unaltered chrominance images to form contrast enhanced video images C(x, y)=[Y′(x, y), CB(x, y), CR(x, y)]; and present, to the user on a display of the programmable electronic device, the contrast enhanced video images C.


These features, together with the various ancillary provisions and features which will become apparent to those skilled in the art from the following detailed description, are attained by the method and apparatus of the present invention, preferred embodiments thereof being shown with reference to the accompanying drawings, by way of example only, wherein:





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING


FIG. 1A shows a first embodiment user-controllable contrast sensitivity compensation system on a user;



FIG. 1B shows a smartphone used in the system of FIG. 1A;



FIG. 1C shows the body of the goggle used in the system of FIG. 1A;



FIG. 2 is a two-dimensional chart of contrast versus spatial frequency;



FIG. 3 is a graph of the logarithm of the CSF as a function of the logarithm of the spatial frequency for several users;



FIG. 4 is a graph of the contrast attenuation for two different users;



FIG. 5 is an illustrative image used to show the transformation of images;



FIG. 6A shows a simulation of the view of the image of FIG. 5 for the first user;



FIG. 6B shows a simulation of the view of the image of FIG. 5 for the second user;



FIG. 7 is a graph of the contrast enhancement function versus spatial frequency for two users;



FIG. 8A shows an enhanced image which is the result of transforming the image of FIG. 5 by the contrast enhancement function for the first user;



FIG. 8B shows an enhanced image which is the result of transforming the image of FIG. 5 by the contrast enhancement function for the second user;



FIG. 9A is an image which is a simulation of how the first user perceives the contrast enhanced image of FIG. 8A;



FIG. 9B is an image which is a simulation of how the second user perceives the contrast enhanced image of FIG. 8B;



FIG. 10 shows a graph of the contrast enhancement function for 2 users for the image of FIG. 5 at a magnification of 2;



FIG. 11 shows a graph of the contrast enhancement function for 2 users for the image of FIG. 5 at a magnification of 4;



FIG. 12 is an image of FIG. 5 at a magnification of 4;



FIG. 13A shows a simulation of the view of the image of FIG. 12 for the first user;



FIG. 13B shows a simulation of the view of the image of FIG. 12 for the second user;



FIG. 14A shows an enhanced image resulting from transforming image of FIG. 12 for the first user;



FIG. 14B shows an enhanced image resulting from transforming the image of FIG. 12 for the second user;



FIG. 15A shows a simulation of how the first user perceives the contrast enhanced image of FIG. 14A; and



FIG. 15B shows a simulation of how the second user perceives the contrast enhanced image of FIG. 14B.





Reference symbols are used in the Figures to indicate certain components, aspects or features shown therein, with reference symbols common to more than one Figure indicating like components, aspects or features shown therein.


DETAILED DESCRIPTION OF THE INVENTION

Certain embodiments of the present invention are directed to an apparatus to provide images that enhance the vision of users having a loss of contrast sensitivity at high spatial frequencies. The apparatus presents modified images that enhance the contrast, specifically at high spatial frequencies, to correct for deficiencies in a user's visual system. Certain other embodiments enhance images within the discernable spatial frequency range of the user.


By way of a specific embodiment, FIGS. 1A, 1B, and 1C show a first embodiment contrast sensitivity compensation system 100, where FIG. 1A shows the system on a user U; FIG. 1B shows a smartphone used in the system; and FIG. 1C shows the body of the goggle used in the system. System 100 includes a smartphone 110 and a pair of goggles 120. Smartphone 110 includes the electronics necessary for the contrast sensitivity compensation system 100, including a processor and memory (not shown), a forward-facing camera 111, as shown in FIG. 1A, and a screen 113 on the side opposite the camera, as shown in FIG. 1B. Smartphone 110 also includes an electrical connector 117. As described subsequently, processed camera images are displayed on one portion of screen 113 shown as a left area 112 and a second portion of the screen is shown as right area 114.


Goggles 120 include a body 122 and a strap 125 for holding the goggles on the user's head and a connector 128 that mates with smartphone connector 117. Body 122 includes, as shown in FIG. 1A, a pair of clamps 121 for removably restraining smartphone 110 and making the electrical connection between connectors 117 and 128, and input device 123 for providing input to the smartphone through the connectors and, as shown in FIG. 1C, a left lens 124 and right lens 126 and a focusing wheel 127. When assembled as in FIG. 1A, with smartphone 110 held in place by clamps 121, system 100 presents what is displayed in area 112 of screen 113, through lens 124, to the user's right eye, and what is displayed in area 114 of the screen, through lens 126, to the user's left eye. The user may use focusing wheel 127 to adjust the focus. In certain embodiments, goggles 120 are adapted to accept user input from input device 123, which may control or otherwise provide inputs to the accepted smartphone 110.


In certain embodiments, smartphone 110 is provided with programming, as through a contrast sensitivity compensation application (referred to herein as a “CSC App”) which can: 1) operate camera 111 in a video mode to capture a stream of “input images”; 2) perform image processing on each input image to generate a stream of “output images”; and 3) present the stream of output images to screen 113. In certain embodiments, the stream of output images is presented sequentially side-by-side as two identical images—one in area 112 and one in area 114. Further, it is preferred that contrast sensitivity compensation system 100 operate so that the time delay between when the input images are obtained and when the output images are provided to screen 113 be as short as possible so that a user may safely walk and interact with the environment with goggles 120 covering their eyes.
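
The capture, process, and display loop just described can be summarized in a short sketch. This is a minimal illustration, not the CSC App's actual code: it assumes OpenCV is available, uses a hypothetical enhance_frame placeholder for the user-specific processing described later, and ignores lens distortion and IPD adjustment.

```python
# Minimal sketch of the CSC App's capture -> enhance -> side-by-side display loop.
# "enhance_frame" is a placeholder for the user-specific, frequency-dependent
# contrast enhancement described later; all names here are illustrative only.
import cv2

def enhance_frame(frame):
    # Placeholder: the real processing applies the user's contrast
    # enhancement function (Eq. 2) to each input image.
    return frame

cap = cv2.VideoCapture(0)  # forward-facing camera (camera 111)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    enhanced = enhance_frame(frame)
    # Present the same output image twice, side by side (areas 112 and 114),
    # one half of the screen per eye.
    side_by_side = cv2.hconcat([enhanced, enhanced])
    cv2.imshow("CSC App", side_by_side)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # minimal exit condition
        break
cap.release()
cv2.destroyAllWindows()
```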


Contrast sensitivity compensation system 100 has adjustable features that allow it to match the physiology of the user for use in different settings. These features are generally set once for each user, possibly with the need for periodic adjustment. Thus, for example, given the spacing between screen 113 and the eyes of user U, focusing wheel 127 permits an optimal setting of the distance from the display 113 to lenses 124 and 126. In addition, lens 124 and/or lens 126 may include refractive error correction. Further, it is important that the viewed spacing between the images in areas 112 and 114 match the user's interpupillary distance (IPD) to facilitate comfortable binocular viewing and prevent diplopia. This may be accounted for, for example, by shifting the spacing of the output images in areas 112 and 114 to match the IPD. Certain embodiments, described subsequently, include eye tracking to determine a user's gaze direction. For these systems, it is sometimes necessary to calibrate the system to obtain a correlation between the eye tracking measurement and actual gaze direction.


In various embodiments, the user may adjust settings using: input device 123, which may be a touchpad and which is electrically connected to smartphone 110, which is further programmed to modify the CSC App according to such inputs; a Bluetooth game controller that communicates with smartphone 110 via Bluetooth; voice control using the microphone of the phone; gesture control using available devices such as the NOD gesture control ring (see, for example, http://techcrunch.com/2014/04/29/nod-bluetooth-gesture-control-ring/); or the use of an eye tracker to implement gaze-directed control.


In addition, there are other features of contrast sensitivity compensation system 100 that can either be set up once for a user or may be user-adjustable. These features may include, but are not limited to, adjustments to the magnitude, shape, size, or placement of magnified portions of the output image, and color enhancement functions such as contrast, blur, ambient light level or edge enhancement of the entire image or portions of the image. In other embodiments, the compass and/or accelerometers within smartphone 110 may be used for enhancing orientation, location, or positioning of output images.


In certain embodiments, sound and/or vibration may be provided on smartphone 110 to generate proximity and hazard cues. In other embodiments, the microphone of smartphone 110 can be used to enter voice commands to modify the CSC App. In certain other embodiments, image stabilization features or programming of smartphone 110 are used to generate output images.


In one embodiment, by way of example only, goggles 120 are commercially available virtual-reality goggles, such as the Samsung Gear VR (Samsung Electronics Co. Ltd., Ridgefield Park, N.J.), and smartphone 110 is a Galaxy S8 (Samsung Electronics Co. Ltd., Ridgefield Park, N.J.). The Samsung Gear VR includes a micro-USB connector to provide an electrical connection to the smartphone and has, as input devices 123, a touchpad and buttons.


It will be understood by those in the field that contrast sensitivity compensation system 100 may, instead of including a combination of smartphone and goggles, be formed from a single device which includes one or more cameras, a processor, display device, and lenses that provide an image to each eye of the user. In an alternative embodiment, some of the components are head-mounted and the other components are in communication with the head-mounted components using wired or wireless communication. Thus, for example, the screen and, optionally, the camera may be head-mounted, while the processor communicates with the screen and camera using wired or wireless communication.


Further, it will be understood that other combinations of elements may form the contrast sensitivity compensation system 100. Thus, an electronic device which is not a smartphone, but which has a processor, memory, camera, and display may be mounted in goggles 120. Alternatively, some of the electronic features described as being included in smartphone 110 may be included in goggles 120, such as the display or communications capabilities. Further, the input control provided by input device 123 may be provided by a remote-control unit that is in communication with smartphone 110.


One embodiment of the transformation of camera images into a displayed image is illustrated using an illustrative image 500 in FIG. 5. As discussed subsequently in greater detail, a user U has a deficiency in their ability to discern contrast for small objects. Image 600B of FIG. 6B is a simulation of how a user with impaired vision perceives image 500. As a user with normal vision will note by comparing images 500 and 600B, the user sees a blurred image, with loss of visibility more pronounced for smaller features in the image.


To correct for the loss of contrast, user U may wear contrast sensitivity compensation system 100 and run the CSC App with camera 111 directed at the scene of image 500. The CSC App operates camera 111 to obtain image 500, which is processed to generate an image 800B of FIG. 8B using contrast enhancement function 705, illustrated on a log scale in FIG. 7. CEF 705 is the inverse of the contrast attenuation function 405 illustrated on a log scale in FIG. 4. Specifically, the CSC App processes images in a manner that is tailored to a specific user. This is done using a previously determined contrast enhancement function for the user. In certain embodiments, image 500 is processed by obtaining a Fast Fourier Transform (FFT) of image 500, modifying the FFT of the image by the contrast enhancement function 705 (illustrated on a log scale in FIG. 7, and representing the inverse of contrast attenuation function 405 in FIG. 4), and then performing an inverse FFT to form image 800B. Image 800B is then displayed on screen 113 for viewing by user U. In certain other embodiments, a CEF is obtained from a test of the user's vision. In certain other embodiments, the CEF is spatial frequency dependent, and thus, as shown in image 800A of FIG. 8A, the contrast enhancement of image 500 using CEF 703 in FIG. 7 is frequency dependent and may, for example, provide a lesser amount of contrast enhancement at certain spatial frequencies and a greater amount at other spatial frequencies (e.g., compared to CEF 705).


Image 900B of FIG. 9B is a simulation of how the user with CSF 305 will perceive image 800B. A comparison of image 600B (which is how that user sees image 500) and image 900B (which is how that user sees the processed image 800B) shows how contrast sensitivity compensation system 100 improves the vision of user U. However, that is simply an attempted compensation of an unmagnified image that does not take into consideration: 1) that it is not possible to generate more than 100% contrast; and 2) that the user can see no contrast in any image at spatial frequencies above his or her cut-off frequency (cp). Thus, the custom enhanced images still appear distorted to the respective user.


To prevent such distortions, in addition to performing a spatial frequency dependent contrast adjustment customized to the user's CSF, it is necessary to increase the magnification, followed by a customized contrast adjustment within the envelope of the user's CSF, to enhance the image for optimal vision.


Determination of the Contrast Enhancement Function


In certain embodiments, the CEF as a function of frequency u for a user p (written as CEFp(u)) is obtained from a subjective measurement of how the user's visual system discerns contrast as a function of spatial frequency, and then mathematically manipulating the measurement to obtain the CEF. Determination of contrast sensitivity as a function of spatial frequency is known in the art (see, for example, Pelli and Bex, Measuring contrast sensitivity, Vision Res. 2013 Sep. 20; 90: 10-14. doi:10.1016/j.visres.2013.04.015; and Chung ST et al., Comparing the Shape of Contrast Sensitivity Functions for Normal and Low Vision, Invest Ophthalmol Vis Sci. (2016)).


A useful way of characterizing the sensitivity of the visual system is the contrast sensitivity CS as a function of spatial frequency which is written as CSFp(u). The CSFp(u) can be written as a mathematical function or a linear array, as is appropriate for its use. While not meant to limit the scope of the present invention, the CSF and other functions derived from or related to the CSF may be used to calculate a CEF, which may then be used to modify images.


The variation of contrast sensitivity with spatial frequency is demonstrated in FIG. 2 as a two-dimensional chart 200 of contrast versus spatial frequency. One feature of chart 200 is light and dark bars having decreasing contrast as they are followed upwards. By visually following the light and dark bands at a given spatial frequency upwards, it is seen that there is a minimum contrast at each spatial frequency that is discernable by the user. Generally, a user's contrast sensitivity is highest at mid spatial frequencies and falls off at low and high spatial frequencies. Thus, the light and dark bands remain discernable at low contrast at mid frequencies, while higher contrast is required for them to be discernable at low and high spatial frequencies.


The detectable contrast ranges from 0% for no contrast between dark and light, to 100% for a maximum contrast between dark and light. The CSF is the sensitivity, which is the inverse of the contrast detection threshold, with a corresponding range of from 1 to ∞, and the log of the CSF has a corresponding range of from 0 to ∞.


In practice, the CSF may be determined by providing a user with images having differing amounts of contrast and spatial frequency and by having them report on the limits of their contrast detection. Thus, the user is presented with several images, each having a single spatial frequency (that is, with light and dark bands having the same spacing) and a given contrast (that is, a certain contrast between the light and dark bands). The user is prompted to indicate which image is at their limit of contrast detection. This is then repeated for several spatial frequencies. The result is a list of contrast detection thresholds at each spatial frequency, from which that user's CSF is obtained.
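
As a sketch of how such measurements might be turned into a usable CSFp, the snippet below interpolates a small set of measured thresholds in log-log space. The frequencies, threshold values, and the choice to treat the highest tested frequency as the cut-off are all invented for illustration; the actual test procedure and fitting method may differ.

```python
# Sketch: build a user's CSF from measured contrast detection thresholds.
# The frequencies and threshold values below are invented for illustration.
import numpy as np

# Spatial frequencies tested (cycles/degree) and the minimum contrast
# (0..1) the user could just detect at each one.
freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
thresholds = np.array([0.020, 0.010, 0.008, 0.020, 0.100, 0.600])
csf_samples = 1.0 / thresholds          # sensitivity is the inverse threshold

def csf(u):
    """Interpolate the measured CSF (in log-log space) at frequency u,
    treating the highest tested frequency as the cut-off."""
    u = np.asarray(u, dtype=float)
    log_csf = np.interp(np.log(np.maximum(u, 1e-6)),
                        np.log(freqs), np.log(csf_samples))
    return np.where(u <= freqs[-1], np.exp(log_csf), 0.0)

print(csf([1.0, 3.0, 12.0]))            # approximately [100.0, 73.1, 3.5]
```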



FIG. 3 includes a graph 300 of the logarithm of the CSF as a function of the log of the spatial frequency for several users. The data of graph 300 was obtained from Chung ST et al. Graph 300 includes a first curve 301, a second curve 303, and a third curve 305. First curve 301 is the CSF for a user with normal vision, or CSFn(u), which may be a curve obtained from one person with normal contrast sensitivity, or from the average of a plurality of people with normal contrast sensitivity; second curve 303 is the CSF for a first user with impaired contrast sensitivity, or CSF1(u); and third curve 305 is the CSF for a second user with impaired contrast sensitivity, or CSF2(u). The CSF for a user p with impaired contrast sensitivity is referred to generally as CSFp. In the discussion that follows, examples are provided for the users whose vision is characterized by CSF 303 and CSF 305. To simplify the discussion, the user whose vision is characterized by CSF 303 will be referred to as “a first user,” and the user whose vision is characterized by CSF 305 will be referred to as “a second user.”


In the examples of FIG. 3, the values of CSF1(u) and CSF2(u) are less than those of CSFn(u) at all spatial frequencies, with CSF1(u) representing a better degree of contrast sensitivity than CSF2(u). Each CSF has a value of zero at a cut-off frequency, c, which is the maximum spatial frequency discernable to that user. Specifically, CSFn(u) has a cut-off frequency for a user with normal vision of cn, which is higher than the cut-off frequency c1 for the user with CSF1(u) or the cut-off frequency c2 for the user with CSF2(u).


A useful measure for considering the loss of contrast detection relative to a user with a normal visual system is the contrast attenuation (CAp), which is the ratio of the value of the CSFp of a user to the CSF of a user with normal vision, or CAp=CSFp/CSFn. CAp provides an easy determination of how a user with decreased contrast sensitivity views an image relative to how a user with normal contrast sensitivity views the same image. FIG. 4 is a graph 400, on a log scale, of CAp showing a curve 403, which is CA1 (the ratio of CSF1(u) to CSFn(u)), and a curve 405, which is CA2 (the ratio of CSF2(u) to CSFn(u)).
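
A sketch of evaluating this ratio on a frequency grid follows. The parametric CSF used here is an invented placeholder (not the patent's model, and not fit to the data of FIG. 3); it merely gives the ratio something to act on.

```python
# Sketch: contrast attenuation CA_p = CSF_p / CSF_n on a frequency grid.
# The parametric CSF below is an invented placeholder, not the patent's model.
import numpy as np

def csf_model(u, peak, u_peak, cutoff):
    """Toy CSF: rises to `peak` at u_peak, falls off, and is zero past cutoff."""
    u = np.asarray(u, dtype=float)
    s = peak * (u / u_peak) * np.exp(1.0 - u / u_peak)
    return np.where(u <= cutoff, np.maximum(s, 0.0), 0.0)

u = np.logspace(-1, 1.5, 8)                                  # spatial frequencies
csf_n = csf_model(u, peak=200.0, u_peak=4.0, cutoff=30.0)    # normal observer
csf_p = csf_model(u, peak=40.0, u_peak=2.0, cutoff=8.0)      # impaired user
with np.errstate(divide="ignore", invalid="ignore"):
    ca = np.where(csf_n > 0, csf_p / csf_n, np.nan)          # contrast attenuation
print(np.round(ca, 3))
```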


For the examples provided herein, at spatial frequencies less than f*, as indicated by the arrow 410, the contrast attenuation ratio is constant and less than one—that is, the contrast loss is not size dependent. At spatial frequencies greater than f*, as indicated by the arrow 420, the contrast attenuation ratio decreases (relative contrast sensitivity loss increases) with frequency. It is thus seen that correcting for contrast loss requires a constant enhancement at low spatial frequencies and an increasing enhancement at higher spatial frequencies, as discussed subsequently.


Simulation of a User's Visual System Using the Contrast Enhancement Function


In considering the effect of processing images to be viewed by users having contrast loss, it is useful to have a simulation of how particular images appear to such a user. CSFp is a measure of how a user subjectively views an object and may be used to simulate how an image would appear to a user according to their CSFp. In discussions of these simulations and the viewing of all transformed images, it is assumed that the reader has normal contrast detection.


Thus, for example, consider how an image appears to a user with a given CSFp. Mathematically, an image may be described as a 2-D array A of intensity values. The array A may be viewed, for example, on a computer display and presented as an image A, and the terms “array” and “image” are generally used interchangeably herein.


The application of a CSFp to an image is performed by multiplying the Fourier transform of A, $\mathcal{F}\{A\}$, by CSFp(u), which may be written as follows:

$$V_p[A] = \mathcal{F}\{A\} \times \mathrm{CSF}_p(u), \qquad \text{Eq. 1a}$$

followed by the inverse Fourier transform

$$B = \mathcal{F}^{-1}\{V_p[A]\}, \qquad \text{Eq. 1b}$$

where B is the image obtained by modifying A by CSFp. In other words, a user whose vision is characterized by that CSF will perceive image A as image B.
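
A sketch of this simulation is below, assuming a grayscale image with float values in [0, 1] and a CSF callable such as csf() above. The mapping from normalized pixel frequencies to cycles/degree depends on viewing geometry and is glossed over here; the normalization of the filter gain is likewise an illustrative choice.

```python
# Sketch of Eqs. 1a/1b: simulate how image A appears to a user by filtering
# its Fourier spectrum with a normalized CSF_p. A is grayscale, values in [0, 1].
import numpy as np

def simulate_view(A, csf_p):
    F = np.fft.fft2(A)                          # Eq. 1a: Fourier transform of A
    fy = np.fft.fftfreq(A.shape[0])             # vertical pixel frequencies
    fx = np.fft.fftfreq(A.shape[1])             # horizontal pixel frequencies
    u = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial spatial frequency
    gain = csf_p(u)
    gain = gain / gain.max()                    # normalize so the filter is <= 1
    gain.flat[0] = 1.0                          # preserve mean luminance (DC term)
    B = np.fft.ifft2(F * gain).real             # Eq. 1b: inverse transform
    return np.clip(B, 0.0, 1.0)
```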



FIG. 5 is an illustrative image 500 which is used to show the transformation of images. FIG. 6A shows a simulation of the view of the image of FIG. 5 for the first user as an image 600A. Specifically, image 600A is the result of transforming image 500 by CSF1(u) using equations 1a and 1b. That is, image 500 is A in Eq. 1a, CSF1(u) is CSFp(u) in Eq. 1a, and image 600A is B of Eq. 1b. Image 600A is a simulation of how image 500 appears to the first user.


In another example, FIG. 6B shows a simulation of the view of the image of FIG. 5 for the second user as an image 600B. Specifically, image 600B is the result of transforming image 500 by CSF2(u) (curve 305) using equations 1a and 1b. That is, image 500 is A in Eq. 1a, CSF2(u) is CSFp(u) in Eq. 1a, and image 600B is B of Eq. 1b. Image 600B is a simulation of how image 500 appears to the second user.


A comparison of CSF1(u) (curve 303) and image 600A to CSF2(u) (curve 305) and image 600B reveals that the values of CSF2(u) are lower than the values of CSF1(u), and that image 600B, which corresponds to CSF2(u), has much less spatial resolution than image 600A, which corresponds to CSF1(u). Thus, the second user discerns far less detail than does the first user.


Contrast Compensation


In certain embodiments, a user's loss of contrast sensitivity may be compensated for by adjusting the contrast of an image using the contrast sensitivity data according to the normal contrast sensitivity and the cut-off frequency.


In one embodiment, the following contrast compensation method is used to enhance the contrast of an image. Each image A(x, y) may be specified in terms of the image's luminance image Y(x, y) and chrominance images CB(x, y), CR(x, y), as A(x, y)=[Y(x, y), CB(x, y), CR(x, y)]. First, a Fourier transform is performed on the luminance image Y(x, y) to obtain amplitude MY(u) and phase PY(u) spectra (versus spatial frequency u). Next, the luminance amplitude is enhanced using the user's contrast enhancement function as follows: M′Y(u)=MY(u)×CEFp(u). Next, an inverse Fourier transform is performed on the enhanced luminance amplitude and the unaltered luminance phase function to obtain the enhanced luminance image Y′(x, y). Lastly, the enhanced luminance image is combined with the unaltered chrominance images to obtain the enhanced full color image: C(x, y)=[Y′(x, y), CB(x, y), CR(x, y)].
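
The steps just described can be sketched as follows, assuming float RGB images in [0, 1] and a BT.601 YCbCr conversion (the patent does not specify a particular color transform). Multiplying the complex luminance spectrum by a real, nonnegative gain scales the amplitude MY(u) while leaving the phase PY(u) unchanged, which is why no separate phase handling appears below; cef_p is a callable such as the one built from Eq. 2 later.

```python
# Sketch of the contrast compensation method: split an RGB image into
# luminance Y and chrominance Cb, Cr (BT.601), enhance only the luminance
# spectrum with the user's CEF, and recombine with unaltered chrominance.
import numpy as np

def compensate(rgb, cef_p):
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y = 0.299 * R + 0.587 * G + 0.114 * B              # luminance image
    Cb = -0.168736 * R - 0.331264 * G + 0.5 * B        # chrominance (blue)
    Cr = 0.5 * R - 0.418688 * G - 0.081312 * B         # chrominance (red)

    F = np.fft.fft2(Y)                                 # amplitude and phase
    fy, fx = np.fft.fftfreq(Y.shape[0]), np.fft.fftfreq(Y.shape[1])
    u = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    F_enh = F * cef_p(u)                               # M'_Y(u) = M_Y(u) x CEF_p(u)
    F_enh.flat[0] = F.flat[0]                          # keep mean luminance fixed
    Y_enh = np.clip(np.fft.ifft2(F_enh).real, 0.0, 1.0)

    # Recombine enhanced luminance with the unaltered chrominance images.
    R2 = Y_enh + 1.402 * Cr
    G2 = Y_enh - 0.344136 * Cb - 0.714136 * Cr
    B2 = Y_enh + 1.772 * Cb
    return np.clip(np.stack([R2, G2, B2], axis=-1), 0.0, 1.0)
```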


In certain embodiments, the compensation may be accomplished, for example, as follows. A contrast enhancement function for a user p, CEFp, as a function of spatial frequency u is defined as:











$$\mathrm{CEF}_p(u) = \begin{cases} \dfrac{\mathrm{CSF}_n\!\left(u \times \dfrac{c_n}{c_p}\right)}{\mathrm{CSF}_p(u)} & \text{for } u \le c_p,\\[2ex] 1 & \text{for } u > c_p. \end{cases} \qquad \text{Eq. 2}$$








This CEFp provides for enhancement of the contrast at spatial frequencies that the user can discern, to make an appropriately magnified image (magnified by cn/cp) appear to the patient the way the unmagnified image would appear to a normally sighted person.
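
A sketch of Eq. 2 as code is below, with csf_n and csf_p as callables (for example, built from measurements as above) and c_n, c_p the respective cut-off frequencies. Where CSFp is zero at or below the cut-off, the ratio is left at 1 to avoid division by zero; that guard is an implementation choice not spelled out in the equation.

```python
# Sketch of Eq. 2: build CEF_p(u) from a normal CSF, the user's CSF, and the
# two cut-off frequencies. csf_n and csf_p are callables over frequency.
import numpy as np

def make_cef(csf_n, csf_p, c_n, c_p):
    def cef(u):
        u = np.asarray(u, dtype=float)
        num = csf_n(u * c_n / c_p)        # normal CSF at the rescaled frequency
        den = csf_p(u)
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(den > 0, num / den, 1.0)
        return np.where(u <= c_p, ratio, 1.0)   # no enhancement beyond cut-off
    return cef
```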



FIG. 7 is a graph 700 of the contrast enhancement function showing curve 703, which is CEF1(u), and curve 705, which is CEF2(u), if there is no correction for the magnification required by the reduction of the cut-off frequency (i.e., each is simply the inverse of the corresponding contrast attenuation function).



FIG. 8A shows an enhanced image 800A, which is the result of transforming image 500 as described above by CEF1(u), as defined by curve 703 in FIG. 7. FIG. 8B shows an enhanced image 800B, which is the result of transforming image 500 by CEF2(u), as defined by curve 705 in FIG. 7.


In certain embodiments, image 500 is captured by camera 111 of contrast sensitivity compensation system 100 and the image, along with the user's CEF, is stored in the memory of smartphone 110 that is running the CSC App. The CSC App also includes programming to read image 500, apply the contrast enhancement described above, including Eq. 2, and provide the transformed images C to screen 113, as noted in FIGS. 8A and 8B. In certain embodiments, the programming of the CSC App is performed using Fast Fourier Transforms or other techniques that are fast enough to allow for the viewing of images at the video rate of smartphone 110.


In certain other embodiments, the application of the CEF by the CSC App to an image may require additional computations or have other limitations. For example, it is not possible for an image to exceed 100% contrast, so the intensity in an image will be clipped at the maximum and minimum if the product of the Fourier transform and the contrast enhancement function exceeds 100% contrast. In addition, the mean luminance must remain fixed in order not to saturate the display with the inverse Fourier transform of the enhanced image. These ceiling and floor corrections are built into the compensation algorithm that generates the images shown above.
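
A sketch of these corrections, under the same assumptions as the earlier snippets: the DC (zero-frequency) term of the enhanced spectrum is restored to keep the mean luminance fixed, and the reconstructed image is clipped to the displayable range.

```python
# Sketch of the ceiling/floor corrections applied after enhancement.
import numpy as np

def apply_limits(F_enhanced, F_original):
    F_enhanced = F_enhanced.copy()
    F_enhanced.flat[0] = F_original.flat[0]   # mean luminance stays fixed
    img = np.fft.ifft2(F_enhanced).real
    return np.clip(img, 0.0, 1.0)             # cannot exceed 100% contrast
```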


The effectiveness of enhancing the image is illustrated by simulating how the enhanced images appear. This may be accomplished by taking the contrast enhanced images (as shown, for example, in FIG. 8A) and using Eqs. 1a and 1b to simulate how the enhanced image appears to the user. FIG. 9A is an image 900A, which is a simulation of how a user with CSF 303 perceives contrast enhanced image 800A, and FIG. 9B is an image 900B, which is a simulation of how a user with CSF 305 perceives contrast enhanced image 800B. A comparison of the original image 500 with the simulations 900A and 900B of how the contrast enhanced images appear shows the ability to provide contrast enhancement to people suffering from the loss of contrast detection in their vision.


Magnification of an image must also be used, in conjunction with contrast enhancement, to compensate for a reduction in the cut-off frequency (cp<cn), which corresponds to a loss of visual acuity, as well as a loss of contrast sensitivity. Increased magnification shifts the spatial frequency spectrum of the scene down an “octave” or more, so that frequencies below the cutoff become visible, and can be enhanced, for the viewer.


As the magnification of the image is increased, the CEF changes accordingly to minimize distortion (for magnifications less than cn/cp, substitute the magnification for the cut-off frequency ratio in Eq. 2). Thus, FIG. 7 shows graph 700 as the CEF for image 500; FIG. 10 is a graph 1000 of the contrast enhancement function for image 500 at a magnification of 2×, showing curve 1003, which is CEF1(u), and curve 1005, which is CEF2(u); and FIG. 11 is a graph 1100 of the contrast enhancement function for image 1200, as shown in FIG. 12, which is image 500 at a magnification of 4×. Graph 1100 shows curve 1103, which is CEF1(u), and curve 1105, which is CEF2(u). The change in the CEF values is due to the change in spatial frequencies caused by the magnification of the image.
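
A sketch of this substitution, reusing the structure of make_cef above: for a magnification m less than cn/cp, m replaces the cut-off frequency ratio in Eq. 2. As before, the division-by-zero guard is an implementation choice.

```python
# Sketch: for magnification m < c_n / c_p, substitute m for the cut-off
# frequency ratio in Eq. 2 (compare make_cef above).
import numpy as np

def make_cef_magnified(csf_n, csf_p, c_p, m):
    def cef(u):
        u = np.asarray(u, dtype=float)
        num = csf_n(u * m)                 # magnification replaces c_n / c_p
        den = csf_p(u)
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(den > 0, num / den, 1.0)
        return np.where(u <= c_p, ratio, 1.0)
    return cef
```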


Examples of images transformed by a CSF for an image with a magnification of 4 are presented herein using an illustrative image 1200. The transformed images are similar to the images described with reference to image 500.



FIG. 13A shows a simulation of the view in image 1300A of the image of FIG. 12 for the first user. Specifically, image 1300A is the result of transforming image 1200 by CSF1(u) using Eqs. 1a and 1b.



FIG. 13B shows a simulation of the view in image 1300B of the image of FIG. 12 for the second user. Specifically, image 1300B is the result of transforming image 1200 by CSF2(u) using Eqs. 1a and 1b.


Examples of images which may be provided to individuals according to their CSF, to correct for their loss of contrast sensitivity when viewing image 1200, are shown in FIG. 14A, which shows an enhanced image 1400A resulting from transforming image 1200 by curve 1103 as described above, including Eq. 2, and FIG. 14B, which shows an enhanced image 1400B resulting from transforming image 1200 by curve 1105 as described above, including Eq. 2.


The effectiveness of enhancing the image is illustrated by simulating how the enhanced images appear. FIG. 15A is an image 1500A, which is a simulation of how the first user perceives contrast enhanced image 1400A, and FIG. 15B is an image 1500B, which is a simulation of how the second user perceives contrast enhanced image 1400B.


The effect of magnification and contrast enhancement is seen by comparing simulations of FIGS. 8A and 15A, which show the effect of magnification for the user having CSF1, and FIGS. 8B and 15B, which show the effect of magnification for the user having CSF2.


It is to be understood that the invention includes all of the different combinations embodied herein. Throughout this specification, the term “comprising” shall be synonymous with “including,” “containing,” or “characterized by,” and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps. “Comprising” is a term of art which means that the named elements are essential, but other elements may be added and still form a construct within the scope of the statement. “Comprising” leaves an opening for the inclusion of unspecified ingredients even in major amounts.


It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system, electronic device, or smartphone, executing stored instructions (code segments). It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system.


Further, it is one aspect of the contrast compensation method described herein to enhance images within or near the discernable spatial frequency range of the user's vision. Thus, while examples are provided using the contrast compensation method discussed above, it will be obvious to one skilled in the art that other algorithms or combinations of algorithms may be substituted which approximate this method in that images are enhanced within certain ranges of spatial frequencies. Thus, for example, other contrast enhancement functions, image transformations, and/or methods of characterizing the user's vision, or approximating a characterization of the user's vision fall within the scope of the present invention.


Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner as would be apparent to one of ordinary skill in the art from this disclosure in one or more embodiments.


Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.


Thus, while there has been described what is believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.

Claims
  • 1. A non-transitory computer program product adapted to be executed by a programmable electronic device configured to be worn on a head of a user and adapted to obtain and enhance video images captured by the programmable electronic device, the product comprising: programmatic instructions that, when executed by the programmable electronic device, obtain the video images, wherein the video images comprise a plurality of input images;programmatic instructions that, when executed by the programmable electronic device, apply a contrast enhancement function to the plurality of input images to form a plurality of contrast enhanced images comprising contrast enhanced video images, wherein the contrast enhancement function is customized to the user's vision and wherein the contrast enhancement function is frequency-dependent, andprogrammatic instructions that, when executed by the programmable electronic device, present, on a display of the programmable electronic device, the contrast enhanced video images, such that the contrast enhanced video images are preferentially enhanced at a maximum spatial frequency at which the user can discern contrast.
  • 2. The non-transitory computer program product of claim 1, where the contrast enhancement function depends, at least in part, on a contrast sensitivity function that corresponds to minimum discernable contrasts to the user as a function of spatial frequency.
  • 3. The non-transitory computer program product of claim 2, where the contrast enhancement function depends on the maximum spatial frequency.
  • 4. The non-transitory computer program product of claim 1, wherein each image of the plurality of input images includes a luminance image and chrominance images and wherein applying the contrast enhancement function to the plurality of input images comprises performing a Fourier transform on the luminance image to obtain a luminance amplitude.
  • 5. The non-transitory computer program product of claim 4, wherein applying the contrast enhancement function to the plurality of input images further comprises performing the Fourier transform on the luminance image to obtain a luminance phase.
  • 6. The non-transitory computer program product of claim 5, wherein applying the contrast enhancement function to the plurality of input images further comprises enhancing the luminance amplitude.
  • 7. The non-transitory computer program product of claim 6, wherein applying the contrast enhancement function to the plurality of input images further comprises performing an inverse Fourier transform on the enhanced luminance amplitude and the luminance phase to obtain an enhanced luminance image.
  • 8. The non-transitory computer program product of claim 7, wherein applying the contrast enhancement function to the plurality of input images further comprises combining the enhanced luminance image with the chrominance images to form the contrast enhanced video images.
  • 9. The non-transitory computer program product of claim 1, wherein the programmable electronic device is a smartphone.
  • 10. The non-transitory computer program product of claim 1, wherein the programmable electronic device is integrated into a set of goggles configured to be worn by the user.
  • 11. A non-transitory computer program product adapted to be executed by a programmable electronic device configured to be worn on a head of a user and adapted to obtain and enhance video images captured by the programmable electronic device, the product comprising: programmatic instructions that, when executed by the programmable electronic device, obtain the video images, wherein the video images comprise a plurality of input images;programmatic instructions that, when executed by the programmable electronic device, apply a contrast enhancement function to the plurality of input images to form a plurality of contrast enhanced images comprising contrast enhanced video images, wherein the contrast enhancement function is customized to the user's vision, wherein the contrast enhancement function is frequency-dependent, and wherein the contrast enhancement function is adapted to enhance a luminance image of the plurality of input images without altering chrominance images of the plurality of input images, andprogrammatic instructions that, when executed by the programmable electronic device, present, on a display of the programmable electronic device, the contrast enhanced video images.
  • 12. The non-transitory computer program product of claim 11, further comprising programmatic instructions that, when executed by the programmable electronic device, present on said display the contrast enhanced video images such that the contrast enhanced video images are preferentially enhanced at a maximum spatial frequency at which the user can discern contrast.
  • 13. The non-transitory computer program product of claim 11, where the contrast enhancement function depends, at least in part, on a contrast sensitivity function that corresponds to minimum discernable contrasts to the user as a function of spatial frequency.
  • 14. The non-transitory computer program product of claim 11, where the contrast enhancement function depends on a maximum spatial frequency at which the user can discern contrast.
  • 15. The non-transitory computer program product of claim 11, wherein each image of the plurality of input images includes the luminance image and the chrominance images and wherein applying the contrast enhancement function to the plurality of input images comprises performing a Fourier transform on the luminance image to obtain a luminance amplitude.
  • 16. The non-transitory computer program product of claim 15, wherein applying the contrast enhancement function to the plurality of input images further comprises performing the Fourier transform on the luminance image to obtain a luminance phase.
  • 17. The non-transitory computer program product of claim 16, wherein applying the contrast enhancement function to the plurality of input images further comprises enhancing the luminance amplitude.
  • 18. The non-transitory computer program product of claim 17, wherein applying the contrast enhancement function to the plurality of input images further comprises performing an inverse Fourier transform on the enhanced luminance amplitude and the luminance phase to enhance the luminance image.
  • 19. The non-transitory computer program product of claim 18, wherein applying the contrast enhancement function to the plurality of input images further comprises combining the enhanced luminance image with the chrominance images to form the contrast enhanced video images.
  • 20. The non-transitory computer program product of claim 11, wherein the programmable electronic device is a smartphone.
  • 21. The non-transitory computer program product of claim 11, wherein the programmable electronic device is integrated into a set of goggles configured to be worn by the user.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/629,774, filed Feb. 13, 2018, the contents of which are hereby incorporated by reference in their entirety.

US Referenced Citations (205)
Number Name Date Kind
4461551 Blaha Jul 1984 A
4586892 Ichizawa May 1986 A
4634243 Massof Jan 1987 A
4751507 Hama Jun 1988 A
4848898 Massof Jul 1989 A
4856892 Ben-Tovim Aug 1989 A
5151722 Massof Sep 1992 A
5359675 Siwoff Oct 1994 A
5717834 Werblin Feb 1998 A
5719593 De Lange Feb 1998 A
5808589 Fergason Sep 1998 A
5933082 Abita Aug 1999 A
6061064 Reichlen May 2000 A
6067112 Wellner May 2000 A
6529331 Massof Mar 2003 B2
6545685 Dorbie Apr 2003 B1
6590583 Soohoo Jul 2003 B2
6591008 Surve Jul 2003 B1
6704034 Rodriguez Mar 2004 B1
6766041 Golden Jul 2004 B2
6889006 Kobayashi May 2005 B2
7486302 Shoemaker Feb 2009 B2
7522344 Curatu Apr 2009 B1
7542210 Chirieleison, Sr. Jun 2009 B2
7612804 Marcu Nov 2009 B1
7806528 Bedell Oct 2010 B2
7883210 Filar Feb 2011 B2
8103352 Fried Jan 2012 B2
8239031 Fried Aug 2012 B2
8253787 Yamamoto Aug 2012 B2
8311328 Spruck Nov 2012 B2
8350898 Chang Jan 2013 B2
8454166 Fateh Jun 2013 B2
8490194 Moskovitch Jul 2013 B2
8511820 Trachtman Aug 2013 B2
8516584 Moskovitch Aug 2013 B2
8571670 Fried Oct 2013 B2
8725210 Yang May 2014 B2
8760569 Yang Jun 2014 B2
8798453 Lawton Aug 2014 B2
8836778 Ignatovich Sep 2014 B2
8862183 Kulas Oct 2014 B2
D717856 Slawson Nov 2014 S
8879813 Solanki Nov 2014 B1
8888288 Iravani Nov 2014 B2
8905543 Davis Dec 2014 B2
8922366 Honoré Dec 2014 B1
8976247 Karner Mar 2015 B1
9019420 Hurst Apr 2015 B2
9031610 Kulas May 2015 B2
9066683 Zhou Jun 2015 B2
9149179 Barnard Oct 2015 B2
9213185 Starner Dec 2015 B1
9215977 Kohn Bitran Dec 2015 B2
9545422 Greenberg Jan 2017 B2
9607652 Bose Mar 2017 B2
9706918 Myung Jul 2017 B2
9891435 Boger Feb 2018 B2
10092182 Myung Oct 2018 B2
10146304 Werblin Dec 2018 B2
10188294 Myung Jan 2019 B2
D848420 Boger May 2019 S
10345591 Samec Jul 2019 B2
D863300 Boger Oct 2019 S
10444833 Werblin Oct 2019 B2
10488659 Boger Nov 2019 B2
10613323 Wheelwright Apr 2020 B1
10743761 Myung Aug 2020 B2
11144119 Werblin Oct 2021 B2
20020101568 Eberl Aug 2002 A1
20020181115 Massof Dec 2002 A1
20030182394 Ryngler Sep 2003 A1
20040136570 Ullman Jul 2004 A1
20040208343 Golden Oct 2004 A1
20050162512 Seakins Jul 2005 A1
20050200707 Yogesan Sep 2005 A1
20050237485 Blum Oct 2005 A1
20050270484 Maeda Dec 2005 A1
20060129207 Fried Jun 2006 A1
20060167530 Flaherty Jul 2006 A1
20060282129 Fried Dec 2006 A1
20060290712 Hong Dec 2006 A1
20070106143 Flaherty May 2007 A1
20070198941 Baar Aug 2007 A1
20070235648 Teich Oct 2007 A1
20070280677 Drake Dec 2007 A1
20070294768 Moskovitch Dec 2007 A1
20080106489 Brown May 2008 A1
20080184371 Moskovitch Jul 2008 A1
20080238947 Keahey Oct 2008 A1
20080247620 Lewis Oct 2008 A1
20080278821 Rieger Nov 2008 A1
20090059364 Brown Mar 2009 A1
20090062686 Hyde Mar 2009 A1
20090322859 Shelton Dec 2009 A1
20100016730 Tanaka Jan 2010 A1
20100079356 Hoellwarth Apr 2010 A1
20100283800 Cragun Nov 2010 A1
20100289632 Seder Nov 2010 A1
20100328420 Roman Dec 2010 A1
20110085138 Filar Apr 2011 A1
20110102579 Sung May 2011 A1
20110221656 Haddick Sep 2011 A1
20110224145 Greenberg Sep 2011 A1
20110241976 Boger Oct 2011 A1
20110299036 Goldenholz Dec 2011 A1
20120062840 Ballou, Jr. Mar 2012 A1
20120176689 Brown Jul 2012 A1
20120194550 Osterhout Aug 2012 A1
20120212594 Barnes Aug 2012 A1
20120229617 Yates Sep 2012 A1
20120242678 Border Sep 2012 A1
20120249797 Haddick Oct 2012 A1
20120262558 Boger Oct 2012 A1
20120277826 Fried Nov 2012 A1
20120316776 Brown Dec 2012 A1
20120320340 Coleman, III Dec 2012 A1
20130050273 Fujimura Feb 2013 A1
20130083185 Coleman, III Apr 2013 A1
20130110236 Nirenberg May 2013 A1
20130127980 Haddick May 2013 A1
20130128364 Wheeler May 2013 A1
20130150123 Kulas Jun 2013 A1
20130242262 Lewis Sep 2013 A1
20130293840 Bartels Nov 2013 A1
20130300919 Fletcher Nov 2013 A1
20130329190 Lewis Dec 2013 A1
20140002792 Filar Jan 2014 A1
20140071547 O'Neill Mar 2014 A1
20140078594 Springer Mar 2014 A1
20140085603 Su Mar 2014 A1
20140098120 Ritts Apr 2014 A1
20140114208 Smith Apr 2014 A1
20140132932 Jung May 2014 A1
20140192022 Yamamoto Jul 2014 A1
20140218193 Huston Aug 2014 A1
20140228668 Wakizaka Aug 2014 A1
20140268053 Fabian Sep 2014 A1
20140327753 Prabhakar Nov 2014 A1
20140327754 Prabhakar Nov 2014 A1
20140327755 Prabhakar Nov 2014 A1
20140350379 Verdooner Nov 2014 A1
20150002950 O'Neill Jan 2015 A1
20150042873 Hunt Feb 2015 A1
20150045012 Siminou Feb 2015 A1
20150077565 Karner Mar 2015 A1
20150098060 Zhou Apr 2015 A1
20150103317 Goldfain Apr 2015 A1
20150104087 Katuwal Apr 2015 A1
20150138048 Park May 2015 A1
20150169531 Campbell Jun 2015 A1
20150223678 Goldfain Aug 2015 A1
20150223686 Wang Aug 2015 A1
20150234189 Lyons Aug 2015 A1
20150254524 Dickrell, III Sep 2015 A1
20150257639 Manquez Hatta Sep 2015 A1
20150313462 Reis Nov 2015 A1
20150320313 Stamile Nov 2015 A1
20150339589 Fisher Nov 2015 A1
20150346348 Liu Dec 2015 A1
20150348327 Zalewski Dec 2015 A1
20160015264 Pankajakshan Jan 2016 A1
20160045388 Krenik Feb 2016 A1
20160048203 Blum Feb 2016 A1
20160051142 Howes Feb 2016 A1
20160063893 Kanuganti Mar 2016 A1
20160097930 Robbins Apr 2016 A1
20160104453 Borenstein Apr 2016 A1
20160113489 Myung Apr 2016 A1
20160156850 Werblin Jun 2016 A1
20160173752 Caviedes Jun 2016 A1
20160199649 Barnes Jul 2016 A1
20160264051 Werblin Sep 2016 A1
20160314564 Jones Oct 2016 A1
20160379593 Borenstein Dec 2016 A1
20170172675 Jarc Jun 2017 A1
20170200296 Jones Jul 2017 A1
20170236332 Kipman Aug 2017 A1
20170280996 Myung Oct 2017 A1
20180017820 Cheng Jan 2018 A1
20180116509 Myung May 2018 A1
20180125716 Cho May 2018 A1
20180144554 Watola May 2018 A1
20180239137 Boger Aug 2018 A1
20180239425 Jang Aug 2018 A1
20190026958 Gausebeck Jan 2019 A1
20190056783 Werblin Feb 2019 A1
20190094552 Shousha Mar 2019 A1
20190180421 Kim Jun 2019 A1
20190208186 Kawabe Jul 2019 A1
20190222817 Abou Shousha Jul 2019 A1
20190251672 Lim Aug 2019 A1
20190251679 Werblin Aug 2019 A1
20190302886 Werblin Oct 2019 A1
20200008673 Myung Jan 2020 A1
20200097019 Yu Mar 2020 A1
20200112691 Werblin Apr 2020 A1
20200151859 Long, II May 2020 A1
20200311887 Kar Oct 2020 A1
20210153741 Berdahl May 2021 A1
20210271318 Bradley Sep 2021 A1
20210290056 Karandikar Sep 2021 A1
20210373656 Watola Dec 2021 A1
20220043513 Werblin Feb 2022 A1
20220171456 Siddiqi Jun 2022 A1
Foreign Referenced Citations (37)
Number Date Country
103110401 May 2013 CN
104688179 Jun 2015 CN
113019973 Jun 2021 CN
2621169 Jul 2013 EP
2004279733 Oct 2004 JP
2005524462 Aug 2005 JP
2006212102 Aug 2006 JP
2007178409 Jul 2007 JP
2007520243 Jul 2007 JP
2008093118 Apr 2008 JP
2008295725 Dec 2008 JP
2009031685 Feb 2009 JP
2013104986 May 2013 JP
2013125038 Jun 2013 JP
1992008157 May 1992 WO
1995006288 Mar 1995 WO
1998044468 Oct 1998 WO
2002086590 Oct 2002 WO
2002099597 Dec 2002 WO
03043363 May 2003 WO
2007069294 Jun 2007 WO
2008055262 May 2008 WO
2011159757 Dec 2011 WO
2012142202 Oct 2012 WO
2012176960 Dec 2012 WO
2014181096 Nov 2014 WO
2014194182 Dec 2014 WO
2015035229 Mar 2015 WO
2015054672 Apr 2015 WO
2015071779 May 2015 WO
2016077343 May 2016 WO
2016144419 Sep 2016 WO
2016205709 Dec 2016 WO
2018053509 Mar 2018 WO
2019094047 May 2019 WO
2019160962 Aug 2019 WO
Non-Patent Literature Citations (36)
Entry
Birkfellner, W. “Computer-enhanced stereoscopic vision in a head-mounted operating binocular” Physics in Medicine & Biology, vol. 48, No. 3, Jan. 22, 2003.
Victorson, E. “A Head Mounted Digital Image Warping Prosthesis for Age-Related Macular Degeneration” Univ. of Minnesota, May 2014, pp. 1-170.
Web Search History for U.S. Appl. No. 16/447,481, filed Sep. 10, 2020.
Chen-Yung Hsu and Mark M. Uslan; When is a Little Magnification Enough? A Review of Microsoft Magnifier, AccessWorld Magazine, Jul. 2000, vol. 1, No. 4.
Richard D. Juday and David S. Loshin; Some Examples of Image Warping for Low Vision Prosthesis; Speidigitallibrary.org, Aug. 22, 1988.
Google Search of How to Install and Use Microsoft Magnifier, Mar. 29, 2018.
Gergely Vass and Tama Perlaki; Applying and removing lens distortion in post production, Colorfront Ltd., Budapest, 2003.
Eric Kenneth Victorson; A Head Mounted Digital Image Warping Prosthesis for Age-Related Macular Degeneration; U of Minn., May 2014.
International Search Report for PCT/US20/40726, dated Sep. 14, 2020.
Written Opinion of the International Searching Authority for PCT/US20/40726, dated Sep. 14, 2020.
International Search Report for PCT/US15/59950, dated Apr. 11, 2016.
Written Opinion of the International Searching Authority for PCT/US15/59950, dated Apr. 11, 2016.
International Search Report for PCT/US16/12135, dated Apr. 29, 2016.
Written Opinion of the International Searching Authority for PCT/US16/12135, dated Apr. 29, 2016.
International Search Report for PCT/US19/17860, dated May 14, 2019.
Written Opinion of the International Searching Authority for PCT/US19/17860, dated May 14, 2019.
Sample et al.; “Imaging and Perimetry Society Standards and Guidelines” Optometry and Vision Science, vol. 88, No. 1, Jan. 2011, pp. 4-7.
International Search Report for PCT/US21/22491, dated Aug. 12, 2021.
Written Opinion of the International Searching Authority for PCT/US21/22491, dated Aug. 12, 2021.
Stelmack, et al. “Is there a standard of care for eccentric viewing training?” Journal of Rehabilitation Research & Development; vol. 41, No. 5, pp. 729-738; Sep./Oct. 2004.
Hassan, et al. “Changes in the Properties of the Preferred Retinal Locus with Eccentric Viewing Training”, Optom Vis Sci 2019;96:79-86. doi: 10.1097/OPX.0000000000001324.
International Search Report for PCT/US16/38176, dated Sep. 7, 2016.
GITHUB; RNCryptor/RNCryptor; 7 pages; retrieved from the internet (https://github.com/RNCryptor/RNCryptor).
Haddock et al.; Simple, inexpensive technique for high-quality smartphone fundus photography in human and animal eyes; Journal of Opththalmology; 2013; pp. 1-5; published online Sep. 19, 2013.
Hester et al.; Smart Phoneography—how to take slit lamp photographs with an iphone; 12 pages; retrieved from internet (http://eyewiki.aao.org/Smart_Phoneography_-_How_to_take_slit_lamp_photographs_with_an_iPhone).
Kim et al.; Smartphone photography safety; Ophthalmology; 119(10); pp. 220-2201; Oct. 2012.
Lord et al.; Novel uses of smartphones in ophthalmology; Ophthalmology; 117(6); pp. 1274-1274 e3; Jun. 2010.
Teichman et al.; From iphone to eyephone: a technique for photodocumentation; Can. J. Ophthalmol.; 46(3); pp. 284-286; Jun. 2011.
Wikipedia: SOAP note; 6 pages; retrieved from the internet (http://en.wikipedia.org/wiki/SOAP_note).
Apple Developer; Apple app store connect user guide; 4 pages; retrieved from the internet (https://developer.apple.com/support/ap-store-connect/).
Bastawrous; Smartphone fundoscopy; Ophthalmology; 119(2); pp. 432-433. e2; Feb. 2012.
Chakrabarti; Application of mobile technology in ophthalmology to meet the demands of low-resource settings; Journal of Mobile Technology in Medicine; 1(4S); pp. 1-3; Dec. 2012.
Chhablani et al.; Smartphones in ophthalmology; Indian J. Ophthalmol.; 60(2); pp. 127-131; Mar./Apr. 2012 (Author Manuscript).
Echanique et al.; Ocular Cellscope; University of California at Berkeley; Electrical engineering and computer sciences; 23 pages; retrieved from the internet (http://digitalassets.lib.berkeley.edu/techreports/ucb/text/EECS-2014-91.pdf); May 16, 2014.
GITHUB; Nicklockwood/iCarousel; A simple, highly customisable, data-driven 3D carousel for iOS and Mac OS; 30 pages; retrieved from the internet (https://github.com/nicklockwood/iCarousel).
GITHUB; Project—imas / encrypted-core-data; 6 pages; retrieved from the internet (https://github.com/project-imas/encrypted-core-data).
Related Publications (1)
Number Date Country
20210118106 A1 Apr 2021 US
Provisional Applications (1)
Number Date Country
62629774 Feb 2018 US
Continuations (1)
Number Date Country
Parent 16274976 Feb 2019 US
Child 17084233 US