MOBILE TREATMENT SYSTEM FOR DRY EYE SYNDROME

Abstract
Disclosed are devices and systems for treating dry eye syndrome by recording and analyzing video of the anterior eye surface using a binocular illuminator device operating with a computer running a software application. The illuminator includes a projector for projecting a simplified Placido ring pattern onto each eye, and an eye cup for aligning the illuminator with the eye(s). Some embodiments of the illuminator include an onboard camera while other embodiments use the camera system included with a mobile computer such as a smartphone. Also disclosed are methods to use an illuminator and a software application running on a computer to record videos of the anterior eye surface, to perform digital image processing on the videos, and to use the processed images to determine one or more dry eye-related parameters.
Description
BACKGROUND
Field of the Invention

The disclosed invention relates to devices, systems, and methods for treating dry eye syndrome by recording and analyzing video of the anterior eye using an illumination device with attached mobile computer, wherein the recordings can include an overlaid simplified Placido ring pattern. Also disclosed are methods to use the recorded videos to measure and analyze dry eye-related parameters, to perform dry eye condition monitoring, and to provide dry eye disease self-treatment guidance.


Relevant Background

Dry eye syndrome (DES) is a prevalent condition among adults. Based on data from the National Health and Wellness Survey, 6.8 percent of the United States adult population (approximately 16.4 million people) have been diagnosed with dry eye syndrome (Farrand, K. F., et al., Am. J. Ophthalmol. 2017; 182:90). DES affects middle-aged and older adults, contact lens wearers, computer users, gamers, and people living and/or working in dry environments. There is no cure for DES, but symptoms can be relieved with proper diagnosis and treatment.


DES occurs when the eyes do not produce enough tears, or when tears evaporate too quickly or are obstructed from the surface of the eye. Current methods to diagnose DES involve measurements of the following: tear film break-up time, tear film thickness, lipid layer thickness, and/or aqueous layer thickness, as well as performing tests, such as a symptom assessment questionnaire, a fluorescein staining test, a Schirmer test, or another appropriate test.


There are several devices in the art for diagnosing and monitoring DES that use corneal topography measurement methods, which typically illuminate the eye with a set of concentric lighted rings, known as a Placido ring or keratoscope pattern. Corneal topography began as a means of assessing and treating astigmatism and other eye irregularities affecting the topography or shape of the cornea. Such devices are often repurposed to treat DES, and commonly include desktop cameras such as the KOWA DR-1α from Kowa Company Ltd., the OCULUS Keratograph® 5M from Oculus Inc., and the E300 Corneal Topographer from Medmont.


A subset of corneal topography devices is designed specifically for DES treatment, for example, TEARSCIENCE LIPIVIEW™ II from Johnson & Johnson. There are several patents or patent applications that disclose such devices, including, for example, U.S. Pat. No. 10,980,413, which discloses a device and accompanying method for diagnosing, measuring, or analyzing DES, and discusses the ability to determine tear film break-up time and to detect lid margin contact and blink rates. Similarly, U.S. Pat. No. 8,888,286 discloses a DES-specific device capable of measuring the tear film layer thickness of the ocular tear film, a lipid layer thickness, and an aqueous layer thickness on the ocular surface. Another DES-specific device is disclosed in U.S. Pat. No. 8,591,033, wherein the device can measure the relative thickness of the lipid layer of the precorneal tear film after it is disturbed by blinking. Japan Pat. No. JP2010273800A discloses a DES device and method for measuring an individual's blinking action. Finally, WIPO Application No. WO2015073664A2 discloses a device and accompanying method for diagnosing, measuring, or analyzing DES, including the detection of eyelid margin contact and blink rates. Additionally, portable corneal topography measurement devices for medical professionals are in use, such as the EasyTear® View+ from EasyTear, and the Tearscope from Keeler. These devices can be attached to a slit lamp to provide illumination, and to a mobile computing device, e.g., a tablet, for user interface and data processing.


Unfortunately, because the precision required of corneal topography devices for treating eye irregularities is greater than the precision required to treat DES, such devices tend to be more expensive than is necessary to provide reliable DES diagnosis and monitoring. In addition to being prohibitively costly, such DES treatment devices are typically non-portable, and only provided to medical professionals, meaning there is no way for patients to monitor DES at home.


Also known in the art are portable devices to be used with a smart phone for assessing and treating eye irregularities. As with other corneal topography devices, portable versions are prohibitively expensive, provide unnecessary capabilities, and are not optimized for DES diagnosis and treatment. For example, U.S. Pat. No. 9,839,352 discloses a corneal topography device that includes a Placido disc illumination system configured to be used with the camera of a mobile communication device.


Therefore, a clear need exists for a portable DES diagnosis and monitoring device that allows a clinician to rapidly record and measure a patient's dry eye condition. The clinical device disclosed includes a camera for each of a patient's right eye and left eye so that both eyes can be imaged simultaneously. Accompanying software allows automatic right eye and left eye identification and analysis. Also included is the capability to adjust the interpupillary distance (IPD) between the cameras, as well as the up and down position of the cameras, to allow the cameras to be positioned correctly in front of a patient's eyes. Such embodiments further include a capability to change the Placido ring projector pattern rapidly.


These and other deficiencies of the prior art are addressed by one or more embodiments of the disclosed invention. Additional advantages and novel features of this invention shall be set forth in part in the description that follows, and in part will become apparent to those skilled in the art upon examination of the following specification or may be learned by the practice of the invention. The advantages of the invention may be realized and attained by means of the instrumentalities, combinations, compositions, and methods particularly pointed out hereafter.


The features and advantages described in this disclosure and in the following detailed description are not all-inclusive. Many additional features and advantages will be apparent to one of ordinary skill in the relevant art in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter; reference to the claims is necessary to determine such inventive subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS

Features and objects of the present invention and the manner of attaining them will become more apparent, and the invention itself will be best understood, by reference to the following description of one or more embodiments taken in conjunction with the accompanying drawings and figures embedded in the text below and attached following this description.



FIG. 1 depicts a high-level block diagram of a general-purpose mobile computer for executing elements of the disclosed invention;



FIG. 2 depicts a top cross-sectional view of at least a portion of an embodiment of the disclosed invention;



FIG. 3 depicts a top exploded view of at least a portion of an embodiment of the disclosed invention;



FIG. 4 depicts a perspective view of at least a portion of an embodiment of the disclosed invention shown mounted on a mobile computer;



FIG. 5 depicts a top cross-sectional view of at least a portion of an embodiment of the disclosed invention, shown mounted on a mobile computer;



FIG. 6 depicts a front internal view of at least a portion of an embodiment of the disclosed invention;



FIG. 7 depicts a front/top perspective view of at least a portion of an embodiment of the disclosed invention;



FIG. 8 depicts a front/side perspective view of at least a portion of an embodiment of the disclosed invention;



FIG. 9 depicts a rear/top perspective view of at least a portion of an embodiment of the disclosed invention;



FIG. 10 depicts an example flow diagram of the process for determining proper IPD;



FIG. 11 depicts an example flow diagram of the process for determining proper up or down alignment of the eye cups;



FIG. 12 depicts lighting printed circuit board assemblies (PCBAs) for use with a visual system for aligning the imaging components with the eyes;



FIG. 13 depicts an example flow diagram of the process for providing visual cues to facilitate proper IPD alignment;



FIG. 14 depicts a flow chart showing an example user-performed sequence according to an embodiment of the disclosed invention;



FIG. 15 depicts a flow chart showing an example user-performed sequence according to an embodiment of the disclosed invention;



FIG. 16 depicts a flow chart showing a method of dry eye syndrome diagnosis or treatment according to an embodiment of the disclosed invention;



FIG. 17 depicts a flow chart showing a method of dry eye syndrome diagnosis or treatment according to an embodiment of the disclosed invention;



FIG. 18 depicts a flow chart showing a method for dry eye syndrome diagnosis or treatment according to an embodiment of the disclosed invention;



FIG. 19 depicts an example graphical user interface (GUI) according to an embodiment of the disclosed invention, as displayed on a mobile computer;



FIG. 20 depicts an example GUI according to an embodiment of the disclosed invention, as displayed on a mobile computer;



FIG. 21 depicts an example GUI according to an embodiment of the disclosed invention, as displayed on a mobile computer;



FIG. 22 depicts an example GUI according to an embodiment of the disclosed invention, as displayed on a mobile computer;



FIG. 23 depicts an example GUI according to an embodiment of the disclosed invention, as displayed on a mobile computer; and



FIG. 24 depicts an example GUI report layout according to an embodiment of the disclosed invention, as displayed on a mobile computer.





The Figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.


DEFINITIONS

Dry eye syndrome (DES) is a medical condition having various causes and characterized by dry eyes. DES is accompanied by related symptoms, such as eye irritation and redness, fluid discharge from the eyes, and eyes that tire easily. Symptoms may range from mild and occasional to severe and continuous. The condition is also known as keratoconjunctivitis sicca.


DESCRIPTION

The disclosed invention includes devices, systems, and methods for dry eye condition measurement and analysis that are intended for day-to-day use by laypeople. Disclosed are embodiments of an illuminator device, including a handheld version and a smartphone-mounted version. Also disclosed are embodiments of a mobile software application, and accompanying methods to perform dry eye syndrome diagnosis, assessment, and treatment. The illuminator devices are configured to perform anterior eye video recording and can overlay such recordings with projected grid patterns. The application operates the illuminator, performs video analysis, computes dry eye-related parameters, summarizes the measured values, presents longitudinal data trends, and guides self-treatment. It also provides user administrative functions and facilitates data storage locally and on cloud servers.


The present invention employs artificial intelligence (AI) to determine and categorize the severity of a patient's dry eye condition and to provide corresponding treatment recommendations. In one instance, parameters including blink rate and tear meniscus height are used to provide a quick assessment of the patient's condition and to recommend over-the-counter dry eye treatment solutions. In another instance, parameters including blink rate, tear meniscus height, iris surface interferometric pattern, and tear film pattern with Placido ring projection are used to provide an in-depth analysis of the patient's dry eye condition within an ophthalmic setting. In both instances, each clinical parameter is evaluated and assigned a weighting, in alignment with a pre-determined weighting system, to provide a clinically relevant dry eye categorization for the patient, with corresponding treatment recommendations based on the severity of the condition detected.
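

The following is a minimal sketch, in Python, of such a pre-determined weighting system. The parameter names, weights, and category cutoffs are illustrative assumptions for explanation only, not values taken from this disclosure.

```python
# Each clinical parameter is normalized to a severity score in [0, 1], then
# combined with a fixed weight to yield an overall dry eye severity index.
# All names, weights, and cutoffs below are hypothetical placeholders.

WEIGHTS = {
    "blink_rate": 0.25,               # low blink rates score worse
    "tear_meniscus_height": 0.35,     # low meniscus heights score worse
    "interferometric_pattern": 0.20,  # graded 0 (normal) .. 1 (severe)
    "tear_film_breakup": 0.20,        # graded 0 (normal) .. 1 (severe)
}

def severity_index(scores: dict[str, float]) -> float:
    """Weighted sum of per-parameter severity scores (each in [0, 1])."""
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS)

def categorize(index: float) -> str:
    """Map the weighted index to a clinical category (illustrative cutoffs)."""
    if index < 0.25:
        return "normal"
    if index < 0.50:
        return "mild"      # e.g., recommend over-the-counter lubricating drops
    if index < 0.75:
        return "moderate"
    return "severe"        # e.g., refer for in-office ophthalmic evaluation

# Example: quick assessment from blink rate and meniscus height only
quick = severity_index({"blink_rate": 0.6, "tear_meniscus_height": 0.4})
print(quick, categorize(quick))  # 0.29 mild
```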


The disclosed invention will now be described in detail with reference to several embodiments thereof as illustrated in the accompanying Figures. In the following description, specific details are set forth to provide a thorough understanding of embodiments of the disclosed invention. It will be apparent, however, to one skilled in the art that embodiments may be practiced without some or all of these specific details. In other instances, well known process steps and/or structures have not been described in detail so as not to unnecessarily obscure the invention. The features and advantages of embodiments may be better understood with reference to the drawings and discussions that follow.


It should be apparent to those skilled in the art that the described embodiments of the disclosed invention provided herein are illustrative only and not limiting, having been presented by way of example only. All features disclosed in this description may be replaced by alternative features serving the same or similar purpose, unless expressly stated otherwise. Therefore, numerous other embodiments and modifications thereof are contemplated as falling within the scope of the disclosed invention as defined herein and equivalents thereto. Hence, the use of absolute and/or sequential terms, such as, for example, “always,” “will,” “will not,” “shall,” “shall not,” “must,” “must not,” “first,” “initially,” “next,” “subsequently,” “before,” “after,” “lastly,” and “finally,” is not meant to limit the scope of the disclosed invention, as the embodiments disclosed herein are merely exemplary.


It will also be understood that when an element is referred to as being “on,” “attached” to, “connected” to, “coupled” with, “contacting,” or “mounted” to another element, it can be directly on, attached to, connected to, coupled with, or contacting the other element, or intervening elements may also be present. In contrast, when an element is referred to as being, for example, “directly on,” “directly attached” to, “directly connected” to, “directly coupled” with, or “directly contacting” another element, there are no intervening elements present. It will also be appreciated by those of skill in the art that references to a structure or feature that is disposed “adjacent” another feature may have portions that overlap or underlie the adjacent feature.


Spatially relative terms, such as “under,” “below,” “lower,” “over,” “upper” and the like, may be used to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Such spatially relative terms are intended to encompass different orientations of a device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is inverted, elements described as “under,” or “beneath” other elements or features would then be oriented “over” the other elements or features. Thus, the exemplary term “under” can encompass both an orientation of “over” and “under”. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Similarly, the terms “upwardly,” “downwardly,” “vertical,” “horizontal” and the like are used herein for the purpose of explanation only unless specifically indicated otherwise.


Included in the description are flowcharts depicting examples of the methodology which may be used in a system for dry eye syndrome diagnosis and treatment. In the following description, it will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine such that the instructions that execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed in the computer or on the other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.


Accordingly, blocks of the flowchart illustrations support combinations of means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.


Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.


Mobile Computer

One of reasonable skill will also recognize that portions of the present invention may be implemented on a general-purpose mobile computing system, such as a smartphone, a personal communication device, a mobile device, a notebook computer, a tablet, or the like. FIG. 1 is a generalized block diagram of a computer system in which software-implemented processes of the present invention may be embodied. As shown, system 100 comprises a central processing unit(s) (CPU) or processor(s) 101 coupled to a random-access memory (RAM) 102, a graphics processor unit(s) (GPU) 120, a read-only memory (ROM) 103, a touchscreen or user interface 106, a display or video adapter 104 connected to a display device 105, a mass storage device 115 (e.g., flash memory, disk, or the like), a fixed (mass) storage device 116 (e.g., flash memory, a hard disk), a communication (COMM) port(s) or interface(s) 110, and a network interface card (NIC) or controller 111 (e.g., cellular, Ethernet, WIFI). Although not shown separately, various antennae and a real time system clock are included with the system 100, in a conventional manner.


CPU 101 comprises a suitable processor for implementing the present invention. The CPU 101 communicates with other components of the system via a bi-directional system bus 120 (including any necessary input/output (I/O) controller 107 circuitry and other “glue” logic). The bus, which includes address lines for addressing system memory, provides data transfer between and among the various components. Random-access memory 102 serves as the working memory for the CPU 101. The read-only memory (ROM) 103 contains the basic input/output system code (BIOS), a set of low-level routines in the ROM that application programs and the operating system can use to interact with the hardware, including reading characters from the touchscreen or keyboard, outputting characters to screens or printers, and so forth.


Mass storage devices 115, 116 provide persistent storage on fixed and removable media, such as magnetic, optical, or magnetic-optical storage systems, flash memory, or any other available mass storage technology. The mass storage may be shared on a network 150, or it may be a dedicated mass storage. As shown in FIG. 1, fixed storage 116 stores a body of program and data for directing operation of the computer system, including an operating system, user application programs, device drivers, and other support files, as well as other data files of all sorts. Typically, the fixed storage 116 serves as the main persistent storage for the system.


In basic operation, program logic (including that which implements methodology of the present invention described below) is loaded from the removable storage 115 or fixed storage 116 into the main (RAM) memory 102, for execution by the CPU 101. During operation of the program logic, the system 100 accepts user input from a keyboard and pointing device, as well as speech-based input from a voice recognition system (not shown). The user interface permits selection of application programs, entry of keyboard-based input or data, and selection and manipulation of individual data objects displayed on the screen or display device 105. Likewise, the pointing device, such as a mouse, track ball, pen device, touch screen, or the like, permits selection and manipulation of objects on the display device. In this manner, these input devices support manual user input for any process running on the system.


The computer system 100 displays text and/or graphic images and other data on the display device 105. The video adapter 104, which is interposed between the display 105 and the system's bus, drives the display device 105. The video adapter 104, which includes video memory accessible to the CPU 101, provides circuitry that converts pixel data stored in the video memory to a raster signal suitable for use by a monitor or touchscreen. A hard copy of the displayed information, or other information within the system 100, may be obtained from a printer or other output device.


The system itself communicates with other devices (e.g., other computers) via the network interface card (NIC) 111 connected to a network (e.g., cellular network, Wi-Fi network, Bluetooth wireless network, or the like). The system 100 may also communicate with local occasionally connected devices (e.g., serial cable-linked devices) via the communication (COMM) interface 110, which may include a RS-232 serial port, a Universal Serial Bus (USB) interface, or the like. Devices that will be commonly connected locally to the interface 110 include laptop computers, handheld organizers, digital cameras, and the like.


The system may be implemented through various wireless networks and their associated communication devices. Such networks may include modems, mainframe computers, or servers, such as a gateway computer or application server which may have access to a database. A gateway computer serves as a point of entry into each network and may be coupled to another network by means of a communications link. The gateway may also be directly or indirectly coupled to one or more devices using a communications link or may be coupled to a storage device such as a data repository or database.


Portable DES Diagnosis and Monitoring Device

The disclosed invention includes devices, systems, and methods for the non-invasive diagnosis and treatment of DES in a portable format suitable for everyday use by patients. The invention includes a system comprising embodiments of an illuminator device for eye examination that is configured for use with a smartphone camera, and a software application (app) configured to run on a smartphone. Also included are methods for use of the system to perform DES diagnosis and treatment. Using the smartphone camera, the disclosed device records video imagery of a patient's eye. The accompanying application analyzes feature distortions on the ocular surface by performing feature extraction on the recorded video. Using such data, the application determines tear film break-up time and meniscus layer height, among other DES-relevant parameters. The application can compare these results to historical results gathered on a given individual, a baseline value for the individual, or a normalized baseline for healthy individuals to diagnose or measure the progression of DES.


Clinical DES Diagnosis and Monitoring Device

The disclosed invention includes devices, systems, and methods for the non-invasive diagnosis and treatment of DES in a portable format suitable for use in a clinical setting. The invention includes a system comprising embodiments of a binocular illuminator device for eye examination that includes components for the rapid imaging and analysis of both eyes simultaneously. The binocular illuminator includes an imaging component for each eye, each imaging component including a camera, light source, simplified Placido ring projector, and lens system. The binocular illuminator also includes an adjustment mechanism to move the imaging components closer together or further apart to adjust for a patient's IPD. This embodiment further includes an adjustment mechanism to move the imaging components up or down relative to the casing to adjust for alignment with the patient's eyes. Each imaging component includes a removable Placido ring projector and a mechanism for rapidly changing the ring pattern in use.


The illuminator is configured for use with a smartphone that can be installed in the binocular illuminator case, and a software application (app) configured to run on a smartphone. Also included are methods for use of the system to perform DES diagnosis and treatment. Using the dual onboard cameras, the disclosed device records video imagery of each of a patient's eyes and the application stitches the images into a composite image of the two eyes together, allowing the eyes to be imaged and analyzed together, speeding up the treatment process. The application also assesses the IPD and up and down alignment and informs the user if the imaging components need to be adjusted to be properly positioned and aligned in front of each eye.


The accompanying application analyzes feature distortions on the ocular surface by performing feature extraction on the recorded video. Using such data, the application determines tear film break-up time and meniscus layer height, among other DES-relevant parameters. The application can compare these results to historical results gathered on a given individual, a baseline value for the individual, or a normalized baseline for healthy individuals to diagnose or measure the progression of DES.


Hand-Held Illuminator

With reference to FIG. 2, a top cross-sectional view of an illuminator device 200 of the disclosed invention is depicted. In some embodiments the illuminator 200 includes a camera 220 and is a self-contained hand-held device configured to connect to a smartphone via wired or wireless connection, such as a USB connection system, a near field communication system, a Bluetooth system, or other suitable connection. A power module 210 includes the camera 220, which is configured to capture videos of the cornea, and a power source such as a battery (not shown). In some embodiments, power is drawn from the mobile device through the USB connector. Because it has an onboard camera, and therefore does not need to physically mount on the mobile device to access the device's camera system, this embodiment is suitable for use with a wide variety of smartphones, tablets, or other mobile computers.


The camera 220 is housed in a first casing 214 that mechanically interacts with a second casing 230 to house the illuminator components. The second casing includes side walls 232 that attach to the first casing, and an eye cup 234 configured to ergonomically interact with an eye socket of a patient. The eye cup 234 may be shaped to conform to the patient's face to block out ambient light and may include a rounded lip 236 to improve user comfort. The lip may be rubberized or covered with a flexible material such as foam rubber (not shown) to further improve comfort or to block out additional ambient light. Within the eye cup, the second casing includes an oculus 238 or opening that provides access to the patient's eye. The eye cup 234 is rotatably mounted so that it may be oriented to cover the right or left eye. As shown, the eye cup is oriented for placement over a patient's left eye. The eye cup would be rotated 180 degrees for placement over the patient's right eye. In the depicted embodiment, the second casing 230 rotates on the first casing 214 to allow the eye cup to rotate relative to the camera, but other arrangements are possible and contemplated.


Housed within the casings 214, 230, the illuminator includes a light ring 240 to provide illumination for device operation. The light ring includes a plurality of high intensity white lights arranged in a circular pattern around the camera, and may be, e.g., a plurality of white light emitting diodes (LED), or other suitable high intensity, low power light source. The light ring 240 receives power from the battery and is configured to provide sufficient illumination for operation of a projector cone 250. Brightness of the light ring can be manually adjusted by use of a control knob (not shown), for example, to provide additional illumination of the cornea, or to reduce brightness for improved patient comfort.


The projector cone 250 comprises a transparent or translucent plastic film arranged in a truncated cone shape and is printed with a ring pattern. The cone has a narrow end that encircles the camera lens system 260, and a wide end that fits around the circumference of the oculus 238. The projector cone 250 is arranged so that light from the light ring 240 shines through the sides of the cone and projects the ring pattern onto the cornea. The projector cone 250 is removable and interchangeable with other projector cones that may be printed with different grid or ring patterns, each of which is suitable for a different application. The lens system 260 is situated between the camera 220 and the patient's eye and focuses light entering the camera to provide suitably clear images of the patient's eye for DES diagnosis and treatment. Like the light ring, the lens system is powered by the battery.


With reference to FIG. 3, an exploded view of the illuminator embodiment 300 of FIG. 2 is depicted, wherein like numbers refer to like components. Shown are the power module 310, the camera 320, the second casing 330, the light ring 340, the projector cone 350, and the lens system 360.


The illuminator 300 is configured for use with a software application hosted on a smartphone or other mobile device. The user (the user and patient may be the same individual) first connects the illuminator to the smartphone via, e.g., a USB connector or via Bluetooth, and controls and operates the illuminator through the application. From the application, the user activates the illuminator camera 320 to record video of a patient's eye. The user places the illuminator eye cup over the patient's eye socket, and the application accesses imagery from the camera. The application then uses a machine learning model to detect when the eye is in focus and correctly aligned within the projector cone 350 ring pattern. The application also uses video analytics to determine whether the left eye or the right eye is being imaged. Once focus and alignment conditions are met, the application automatically begins recording video and instructs the patient to perform a series of blinks. The application then evaluates whether a video of the eye of acceptable quality has been captured, including checking the blink sequence, the length of video, the eye alignment, and focus. When a suitable video is recorded, the application informs the user, and performs analysis on the recorded video. Then the application reports the results to the user.
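

The video quality evaluation can be summarized with a short sketch. The thresholds below are illustrative assumptions, and the Laplacian-variance sharpness measure is a common stand-in here for the application's machine learning focus model rather than the disclosed method itself.

```python
import cv2

def focus_score(gray_frame):
    """Variance of the Laplacian, a common single-frame sharpness proxy."""
    return cv2.Laplacian(gray_frame, cv2.CV_64F).var()

def video_is_acceptable(duration_s, blink_count, mean_focus, max_align_err_px):
    """Gate a recording on blink sequence, length, focus, and alignment."""
    return (duration_s >= 10.0           # enough footage for analysis (assumed)
            and blink_count >= 3         # instructed blink sequence observed
            and mean_focus >= 100.0      # sharp enough (assumed cutoff)
            and max_align_err_px <= 25)  # eye stayed near the optical axis
```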


Mounted Illuminator

With reference to FIG. 4, an embodiment of the illuminator configured to be mounted on a smartphone and to use the smartphone's internal camera is depicted. The illuminator 400 is shown mounted on a smartphone 12 with the lens system 460 and projector cone 450 aligned over the smartphone camera (not shown). The second casing 430 includes an oculus 438, through which (in this diagram) can be seen the projector cone 450 and the lens system 460.


With reference to FIG. 5, a cross-sectional view of the mounted illuminator 500 is depicted. In this embodiment a docking slot 516 is located between the first casing 514 and the second casing 530, and is configured to accommodate a smartphone 12 and removably secure it in place. A bridge 518 connects the first casing to the second casing and provides electrical connection between the battery located in the power module 510 and the components in the second casing. The bridge also serves to position the smartphone so that the smartphone camera is properly aligned with the lens system when the phone is fully seated within the slot 516. As with the previous embodiment, the mounted illuminator also includes a light ring 540, an interchangeable projector cone 550, an oculus 538, and a rotatable eye cup 534. Similarly, the mounted illuminator may be powered through the smartphone's USB port, rather than from an onboard power source.


Also like the previous embodiment, the mounted embodiment is operated through the software application running on the smartphone. The user/patient first mounts the illuminator onto the smartphone by sliding the smartphone into the mounting slot until the phone is seated in place. The illuminator is then connected to the smartphone via USB connector or other suitable means, e.g., near field communication system, Bluetooth, etc., and the user controls and operates the illuminator through the application. The user places the illuminator eye cup over the patient's eye socket, and the application accesses imagery from the smartphone camera. From the application, the user activates the smartphone camera to record video of a patient's eye. The remainder of the application functionality is similar to that of the previous embodiment.


Some embodiments of the mounted illuminator are configured with an alternate system for activating the smartphone camera. In such embodiments, the illuminator is further equipped with an electromagnet (not shown) powered by the power source and electronically connected to the light ring. The application is further configured to monitor the smartphone compass for variations. When the application is used to turn on the light ring, the electromagnet is also turned on, creating a small change in the magnetic flux captured by the compass. The application detects this change and activates the camera to record video of the cornea. The electromagnet system adds a backup actuation means to ensure the smartphone camera activates on time in the event the app's camera focus and alignment determinations do not consistently initiate video recording.
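

A minimal sketch of the compass-monitoring logic follows, assuming the application receives a stream of magnetometer field-magnitude samples from the smartphone. The sampling window and flux-step threshold are illustrative assumptions.

```python
from collections import deque

class FluxStepDetector:
    """Detects the magnetic-flux step produced when the light ring (and the
    coupled electromagnet) switches on, signaling that recording should start."""

    def __init__(self, threshold_ut=15.0, window=20):
        self.threshold = threshold_ut        # assumed step size, microtesla
        self.history = deque(maxlen=window)  # recent baseline samples

    def update(self, magnitude_ut: float) -> bool:
        """Feed one magnetometer sample; True means 'start video recording'."""
        triggered = False
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            triggered = abs(magnitude_ut - baseline) > self.threshold
        self.history.append(magnitude_ut)
        return triggered
```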


The illuminator embodiments described herein improve existing DES systems in several ways, three of which are identified here. The first improvement is reduced ring pattern complexity. Placido rings or keratoscopes used to identify eye irregularities project a detailed, fine ring pattern onto the eye to determine the curvature of the cornea, which has much higher precision requirements than DES treatment. As disclosed, the illuminator uses a much-simplified projected Placido ring pattern designed for use with the discrete light sources of the light ring. The light sources are arranged along the ring so that when they shine through the projector cone and reflect off the cornea, they provide the necessary illumination and reference points for the application to determine various metrics relevant to DES. The simplified ring pattern need only provide consistent features with which to compare the distortion of light as it reflects off the fluid coating the eye.


Another advantage of the disclosed DES treatment system is the shape of the eye cup, which automatically aligns the camera with the patient's cornea and simplifies camera focusing. When the user/patient fits the eye cup over the eye, the camera self-aligns with, and is located a fixed distance from, the cornea. The eye cup thus allows self-testing by eliminating the need for a trained user, or reliance on camera autofocus. The eye cup design also improves the quality of images and thus improves result consistency by limiting interference from ambient light.


In addition, embodiments of the disclosed DES device perform automatic eye placement recognition. The rotatable eye cup allows use of the illuminator on either the left or the right eye without changing illuminator orientation, and application software uses video analytics to recognize which eye it is facing. The illuminator system can therefore calibrate its analysis for the appropriate eye without reliance on gyroscope information.


In another embodiment, the disclosed invention includes devices, systems, and methods for dry eye condition measurement and analysis in a portable DES diagnosis and monitoring device that allows a clinician to rapidly record and measure a patient's dry eye condition. Whereas the prior embodiments present a single-camera system with Placido ring projection and non-projection patterns that can scan the left and right eyes separately and record multiple individual videos, this embodiment of the present invention provides a binocular camera system. Disclosed hereafter are embodiments of a binocular illuminator device for clinical use, including a camera for each of a patient's right eye and left eye so that both eyes can be imaged simultaneously. The binocular illuminator devices are configured to perform anterior eye video recording and can overlay such recordings with projected grid patterns. The video captures both eyes simultaneously, and the application thereafter separates the left and right eyes for analysis. Also included is the capability to adjust the interpupillary distance (IPD) between the cameras to allow the cameras to be positioned correctly in front of a patient's eyes. Such embodiments further include a capability to change the Placido ring projector pattern rapidly.


Binocular Illuminator

With reference to FIG. 6, a front interior view of the binocular illuminator is shown. The binocular illuminator includes an imaging component 610 for each eye. The imaging components each include a camera, a lens system 630, a projector cone 620, and a light source 640. In some embodiments, the light source 640 can be shared by both imaging components. The self-contained hand-held device is configured to house and connect to a smartphone via a wired connection, such as a USB connection system, or a wireless connection, such as a near field communication system, a Bluetooth system, or other suitable connection. An onboard power source such as a battery may be included, or the illuminator may receive power from the smartphone.


The cameras are configured to capture videos of the cornea and are housed in a casing along with the other binocular illuminator components. The imaging components 610 are moveable relative to each other to adjust the IPD between the focal point of each camera. The distance between camera focal points can be adjusted by a manual thumbwheel 650 to ensure a patient's eyes are left/right aligned with the cameras for effective imaging. The imaging components are also moveable up or down relative to the casing to adjust the up/down alignment of the focal point of each camera with the eyes. An up and down adjustment wheel may also be included in the casing. The imaging components are mounted on suitable mechanisms 660 to allow left-right and up-down adjustments, such as sets of gears and rails that are configured to mechanically interact. The casing includes side walls, a top and bottom, and two eye cups located in the front of the illuminator that are configured to ergonomically interact with the left and right eye sockets of a patient. The eye cups may be shaped to conform to the patient's face to block out ambient light and may include a rounded lip to improve user comfort. The lip may be rubberized or covered with a flexible material such as foam rubber (not shown) to further improve comfort and/or to block out additional ambient light. Within the eye cups, the casing includes an oculus or opening that provides access to the patient's eye. Some embodiments may include a skirt or lip (not shown) surrounding both eye cups to block out ambient light.


Housed within the casing, the illuminator includes two light rings to provide illumination for device operation. The light rings include a plurality of high intensity white lights arranged in a circular pattern around each camera, and may be, e.g., a plurality of white light emitting diodes (LED), or other suitable high intensity, low power light source. The light ring receives power from the illuminator and is configured to provide sufficient illumination for operation of a projector cone. Brightness of the light ring can be manually adjusted by use of a control knob (not shown), for example, to provide additional illumination of the cornea, or to reduce brightness for improved patient comfort. A switch for powering the light rings on or off is also included.


The projector cone for each imaging component comprises a transparent or translucent plastic film arranged in a truncated cone shape and is printed with a ring pattern. The cone has a narrow end that encircles the camera lens system, and a wide end that fits around the circumference of the oculus. The projector cone is arranged so that light from the light ring shines through the sides of the cone and projects the ring pattern onto the cornea. The projector cone is removable and interchangeable with other projector cones that may be printed with different grid or ring patterns, each of which is suitable for a different application. The binocular illuminator includes a magnetic attachment or screw-on mechanism to allow the projector cones to be changed rapidly. The lens system is situated between the camera and the patient's eye and focuses light entering the camera to provide suitably clear images of the patient's eye for DES diagnosis and treatment. Like the light ring, the lens system is powered by the battery or smartphone.


With reference to FIGS. 7, 8, and 9, three views of a binocular illuminator embodiment are depicted, including a front/top perspective view, a front/side perspective view, and a rear/top perspective view, respectively. The eye cup 710 for each eye, and oculus 720, 725 for each imaging component are visible, as are the thumbwheel 650 for IPD adjustment, and the switch 810 for powering the light rings on and off.


The binocular illuminator is configured for use with a software application hosted on a smartphone or other mobile device. The user (the user and patient may be the same individual) first connects the illuminator to the smartphone via, e.g., a USB connector, Bluetooth, or the like, and controls and operates the illuminator through the application. From the application, the user activates the illuminator cameras to record video of a patient's eyes. The user places the illuminator eye cups over the patient's eye sockets, and the application accesses imagery from the cameras. The application then uses a machine learning model to detect when the eyes are in focus and correctly aligned within the projector cone ring pattern of each imaging component.


With reference to FIG. 10, an example flow diagram of the process for determining proper IPD is depicted. The process begins by extracting 1010 one frame from a video of the patient's eyes captured by the illuminator. The frame is split 1020 into left and right segments consistent with the patient's left and right eyes. The iris region for the left frame is detected 1030 and a determination 1040 is made of the horizontal distance error of the iris centroid from the center of the optical axis. Likewise, the iris region of the right eye is detected 1050 in the right frame segment. The horizontal distance error of the iris centroid from the optical axis is determined 1060. Responsive 1070 to one or both horizontal errors exceeding a predetermined maximum error tolerance, a prerecorded message is played 1080 instructing the user to adjust the IPD using the thumbwheel on the illuminator.
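

A minimal sketch of the FIG. 10 flow is given below, using OpenCV's Hough circle transform as a stand-in for the disclosure's iris detection, with an assumed pixel tolerance. The analogous vertical check of FIG. 11 differs only in comparing the y-coordinate of the centroid.

```python
import cv2

MAX_ERR_PX = 20  # assumed horizontal tolerance from the optical axis

def detect_iris_center(gray):
    """Return (x, y) of the strongest circular (iris-like) feature, or None."""
    circles = cv2.HoughCircles(cv2.medianBlur(gray, 5), cv2.HOUGH_GRADIENT,
                               dp=1.5, minDist=200, param1=100, param2=40,
                               minRadius=30, maxRadius=150)
    if circles is None:
        return None
    x, y, _r = circles[0][0]
    return float(x), float(y)

def ipd_adjustment_needed(frame) -> bool:
    h, w = frame.shape[:2]
    halves = frame[:, : w // 2], frame[:, w // 2 :]   # 1020: split left/right
    for half in halves:
        gray = cv2.cvtColor(half, cv2.COLOR_BGR2GRAY)
        center = detect_iris_center(gray)             # 1030/1050: find iris
        if center is None:
            return True                               # eye not found: adjust
        # 1040/1060: horizontal error of centroid from the half-frame axis
        err_x = abs(center[0] - half.shape[1] / 2)
        if err_x > MAX_ERR_PX:
            return True                               # 1080: prompt the user
    return False
```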


With reference to FIG. 11, an example flow diagram of the process for determining proper up or down alignment of the eye cups is depicted. Again, the process begins by extracting 1110 one frame from a video of the patient's eyes captured by the illuminator. The frame is split 1120 into left and right segments consistent with the patient's left and right eyes. The iris region for the left frame is detected 1130 and a determination 1140 is made of the vertical distance error of the iris centroid from the center of the optical axis. Similarly, the iris region of the right eye is detected 1150 in the right frame segment and the vertical distance error of the iris centroid from the optical axis is determined 1160. Responsive 1170 to one or both vertical errors exceeding a predetermined maximum error tolerance, a prerecorded message is played 1180 instructing the user to adjust the vertical alignment.


With reference to FIG. 12, an alternative system for aligning the imaging components with the eyes is depicted. In such embodiments, the application communicates through the smartphone via Bluetooth or the like to a micro-controller on the binocular illuminator. For the purpose of this discussion, a left light ring is shown. A light ring 1210 is mounted on a lighting printed circuit board assembly that includes both white LEDs 1220 for illuminating the projector cones and colored LEDs 1225, 1230, 1240, 1250 for communicating alignment. In this illustration there are eight white LEDs 1220 for illuminating the projection cones and four colored LEDs 1225, 1230, 1240, 1250 for communicating alignment. The microcontroller turns on individual colored LEDs, which act as direction indicators based on iris alignment with the optical axis. The colored LEDs can be seen through the Placido ring as they contrast against the white light background from the white LEDs. When the application requires the user to adjust the IPD mechanism wider, colored LEDs 1225 on the outer sides of the eye cups illuminate, visually signaling the user to widen IPD separation. When the application requires the user to adjust the IPD mechanism narrower, colored LEDs on the inner sides 1230 of the eye cups illuminate, visually signaling the user to narrow IPD separation. Similarly, when the application requires the user to adjust the imaging components lower, colored LEDs on the bottoms 1240 of the eye cups illuminate, visually signaling the user to move the eye cups down. And when the application requires the user to adjust the imaging components higher, colored LEDs on the tops 1250 of the eye cups illuminate, visually signaling the user to move the eye cups up. Upon reaching the correct alignment, the colored LEDs turn off.


With reference to FIG. 13, an example flow diagram of the process for displaying visual cues to inform a user to adjust IPD is depicted. As before, the process begins by extracting 1310 one frame from a video of the patient's eyes captured by the illuminator. The frame is split 1315 into left and right segments consistent with the patient's left and right eyes. The iris region for the left frame is detected 1320 and a determination 1325 is made of the vertical and horizontal distance error of the iris centroid from the center of the optical axis. Similarly, the iris region of the right eye is detected 1330 in the right frame segment and the vertical and horizontal distance error of the iris centroid from the optical axis is determined 1335, 1355. Responsive 1340, 1360 to one or both vertical errors exceeding a predetermined maximum error tolerance, or one or both horizontal errors exceeding a predetermined maximum error tolerance, corresponding LEDs are illuminated 1345, 1350, 1365, 1370, directing the patient to reposition. For example, when the IPD is too narrow 1345, the left colored LED on the left illuminator and the right colored LED on the right illuminator will illuminate, directing the user to adjust the IPD wider. When the IPD is too wide, the right colored LED on the left illuminator and the left colored LED on the right illuminator will illuminate 1350, directing the user to adjust the IPD to be narrower.


When the distance of either eye exceeds 1360 the maximum vertical tolerance, a similar illumination of LEDs occurs. When the imaging components are too low, the upper colored LEDs on the left and right illuminators will illuminate 1365, directing the user to adjust the alignment higher. When the imaging components are too high, the lower colored LEDs on the left and right illuminators will illuminate 1370, directing the user to adjust the alignment lower. When the alignment is correct, the LEDs will not illuminate. Thus, the patient strives to turn off the LEDs to arrive at the correct alignment, ending the process 1390.
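

The cue logic of FIGS. 12 and 13 can be sketched as a mapping from per-eye centroid errors to indicator LEDs. The sign convention (positive errors meaning the iris sits outboard of, or above, the optical axis) and the LED names are illustrative assumptions.

```python
MAX_ERR_PX = 20  # assumed horizontal/vertical tolerance

def led_cues(left_err, right_err):
    """left_err/right_err are (dx, dy) iris-centroid errors in pixels, with
    +dx meaning the iris sits outboard (temporal) of the optical axis and
    +dy meaning it sits above it. Returns the set of LED names to light."""
    leds = set()
    # Horizontal: irises outboard -> IPD too narrow, light the outer LEDs
    if left_err[0] > MAX_ERR_PX or right_err[0] > MAX_ERR_PX:
        leds.update({"left_outer", "right_outer"})   # 1345: widen IPD
    elif left_err[0] < -MAX_ERR_PX or right_err[0] < -MAX_ERR_PX:
        leds.update({"left_inner", "right_inner"})   # 1350: narrow IPD
    # Vertical: irises above the axis -> cups too low, light the upper LEDs
    if left_err[1] > MAX_ERR_PX or right_err[1] > MAX_ERR_PX:
        leds.update({"left_upper", "right_upper"})   # 1365: raise the cups
    elif left_err[1] < -MAX_ERR_PX or right_err[1] < -MAX_ERR_PX:
        leds.update({"left_lower", "right_lower"})   # 1370: lower the cups
    return leds  # empty set -> aligned; all LEDs off, ending the process (1390)
```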


Once focus, IPD, and up/down alignment conditions are met, the application automatically begins recording video of both eyes and instructs the patient to perform a series of blinks. Video of each eye is stitched together into a composite image of both eyes together. The application then evaluates whether a video of acceptable quality has been captured, including checking the blink sequence, the length of video, the eye alignment, and focus. When a suitable video is recorded, the application informs the user, and performs analysis on the recorded video. The application then reports the results to the user.
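

Composing the per-eye streams is straightforward; the following is a minimal sketch, assuming both cameras deliver synchronized frames of equal height.

```python
import numpy as np

def compose_binocular(left_frame: np.ndarray, right_frame: np.ndarray) -> np.ndarray:
    """Place synchronized left-eye and right-eye frames side by side so both
    eyes can be reviewed and analyzed in a single composite video frame."""
    return np.hstack((left_frame, right_frame))
```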


Software Application

The mobile software application is the primary system for providing operational, analytic, and general functionalities for DES diagnosis and treatment through use of the illuminator device. As described above, the application controls illuminator operational functions, including left or right eye detection, camera focus and alignment, light ring activation, and anterior eye video recording. Certain of these functions, including left or right eye detection, IPD determination, and camera focus and alignment, are performed using machine learning models. Video recording also includes issuing instructions to the patient for when to blink, as well as performing an initial video quality assessment.


The application also performs digital image processing on a recorded video to allow DES analysis and parameter measurement. The application uses image processing techniques in conjunction with deep learning algorithms to perform object detection and segmentation of the anterior eye images. Using such techniques, the application can recognize the locations of the iris and lower tear meniscus and recognize when an eyelid is closed or open during a blink. Once features are identified, the application performs segmentation of relevant features, e.g., the lower tear meniscus. The application also uses a ring pattern from the projector cone to divide the eye into separate regions and performs image segmentation on each region. Image processing is sophisticated enough to ignore certain image artifacts, such as eyelash shadows or other image imperfections. Note that a Placido ring projection is not required for tear meniscus height measurements, blink rate calculations, and tear interferometric pattern analysis. A corneal surface analysis is conducted without a Placido Ring projection to measure a tear meniscus height and/or an optimal tear meniscus height.


Once it completes object detection and image segmentation on the recorded video, the application can analyze the processed images to characterize parameters relevant to DES. For example, the application uses the ring pattern segmentation to perform a grid projection deformation analysis. The deformation analysis allows the application to measure the tear film breakup time. Recognizing blink events and measuring their frequency allows the application to characterize a blink rate. Analysis results can be refined or corrected based on user observations or user review of intermediate data. The application may also compare corneal lipid imagery with example interferometric patterns to perform corneal lipid layer interferometric pattern recognition. Corneal lipid layer analysis may also be accomplished manually if insufficient data is collected, requiring the user to manually compare the corneal lipid images to known interferometric patterns.


Besides the operational, image processing, and analysis functions, the application also provides some general functionalities. These include administrative functions, such as user registration and login, as well as results reporting and summarization. A user/patient can access a report on the current eye scan results or generate a historical trend report over a selected period. The application can also provide dry eye self-treatment instructions and suggestions based on a patient's current or historical results. The application can also send smartphone notifications to the patient, such as prompting a regular eye scan, or prompting eye drop administration or other self-treatments. Finally, the application has connectivity functionalities to provide a patient's eye scan data to a cloud server for storage, intermediate processing or analysis, or advanced data processing or analysis.


With reference to FIG. 14, a flow chart is depicted summarizing an example process for an existing user to use the application and illuminator to perform DES treatment. A user and/or patient accesses the software application on a smartphone and is presented a login prompt, where the user can log in 1410. Once logged in, an existing user is automatically presented a summary of their last eye scan results 1411. When the user has completed their review of their previous scan, they can navigate to the Ocular Surface Disease Index (OSDI) questionnaire 1412 where they must respond to the questionnaire prompts. Once the questionnaire is complete, the application guides the user to perform anterior eye imaging and analysis 1413, which step includes several sub-steps. First the application guides the user to complete video recordings of each eye, once with the projected Placido ring pattern and once without the ring pattern. Then the application guides the user to check the video quality. If the quality of a video is inadequate, the user will be guided through re-taking the deficient video(s).


Once videos of sufficient quality are acquired, the application performs digital image processing and analysis on the videos. The application will then instruct the user to perform manual adjustments to intermediate or final results if such adjustments are required by the applicable criteria. Once the analysis is complete, the application provides the user a summary page displaying the current eye scan results 1414. If longitudinal data is available, i.e., the user has performed multiple eye scans over a period of time, the application will prompt the user to select a period for the display of trend data, and the application will produce a trend report 1415 for the user to compare eye health over the course of the reported period. Finally, the application will provide the user with tailored self-treatment suggestions, eye care tips, and useful external links related to DES based on the user's reported current and historical results 1416.


With reference to FIG. 15, a flow chart is depicted summarizing an example process for a new user to sign up, log in, and use the application and illuminator to perform DES treatment for the first time. First, a user and/or patient accesses the software application on a smartphone and is presented a login prompt, where the user can sign up for an account 1510 and log in 1511. Once logged in, the application directs the new user to an information page 1512 where the user is prompted to provide details about their eye condition, history of treatment, the medications they are taking, physician contact information, etc. Next, the application directs the new user to a notifications settings page 1513, where the user is prompted to provide notification preferences, such as when the application should send eye drop use reminders, when and how frequently the user would like the application to remind them to perform an eye scan, e.g., daily, every two days, once a week, etc., and when and how the application should notify them to perform self-treatment. The application then directs the new user to the OSDI questionnaire 1514 where they must respond to the questionnaire prompts. Once the questionnaire is complete, the application guides the user to perform anterior eye imaging and analysis 1515, which step includes several sub-steps. First, the application guides the user to complete video recordings of each eye, once with the projected Placido ring pattern and once without the ring pattern. Then the application guides the user to check the video quality. If the quality of a video is inadequate, the user will be guided through re-taking the deficient video(s). The application then performs image calibration on the new user's video recordings. Once the application has calibrated to the new user, the remaining steps in the process, including digital image processing, analysis, presentation of the scan summary 1516, and provision of self-treatment recommendations 1517, are the same as those for an existing user.


With reference to FIG. 16, a flow chart is depicted showing an example method for performing object identification, digital image processing, and parameter measurement on extracted frames from a video of the anterior eye as recorded by an illuminator device. The method includes frame sequence extraction, i.e., extracting individual frames in sequence at a defined interval over a defined period, e.g., a frame extracted every 100 or 200 milliseconds over a period of 10 seconds, from a video recording of the anterior eye 1610. Next, the method uses a trained deep learning model to process each frame and detect an iris region and a lower tear meniscus region in each frame 1611. The method measures a height dimension of the detected iris regions to determine the height dimension of an open eye 1612, which is the maximum height of all detected iris regions. Using the open eye height as a reference, the method compares iris region height across sequential frames to calculate an eye blink rate 1613.
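A minimal sketch of the blink rate calculation of steps 1612 and 1613 follows, under the assumption that a blink can be detected when the measured iris region height drops below a fraction of the open-eye (maximum) height; the threshold fraction and helper names are illustrative, not the disclosed model-based detection.

```python
# Minimal sketch of blink rate estimation from per-frame iris region heights.
# Thresholding at a fraction of the open-eye height is an assumed heuristic.
from typing import Sequence

def count_blinks(iris_heights: Sequence[float], closed_fraction: float = 0.3) -> int:
    """Count open-to-closed transitions, where 'closed' means the detected
    iris height falls below a fraction of the maximum (open-eye) height."""
    open_height = max(iris_heights)            # step 1612: open-eye height
    threshold = closed_fraction * open_height
    blinks, was_open = 0, True
    for h in iris_heights:
        if was_open and h < threshold:
            blinks += 1                        # eye just closed: one blink event
            was_open = False
        elif h >= threshold:
            was_open = True
    return blinks

def blink_rate_per_minute(iris_heights: Sequence[float],
                          frame_interval_s: float) -> float:
    """Step 1613: blink rate over the recorded sequence, in blinks per minute."""
    duration_s = len(iris_heights) * frame_interval_s
    return count_blinks(iris_heights) * 60.0 / duration_s
```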


The method then uses the detected lower tear meniscus regions to perform analysis. First, the method isolates the detected tear meniscus regions from each frame by cropping out a rectangular image of the tear meniscus region 1614. Then the method locates the center of each cropped rectangle 1615 and crops out a smaller section, e.g., 200×100 pixels centered on the rectangle center 1616. The method then uses a second trained model to perform digital image processing on the smaller sections, yielding a segmentation and mask of the tear meniscus 1617. The method next uses the tear meniscus mask to measure a tear meniscus height and determine a tear meniscus location 1618. The method performs steps 1614 through 1618 on three consecutive frames and uses the results to determine an optimal tear meniscus height 1619.
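The following sketch traces steps 1614 through 1619 under stated assumptions: the segmentation model is taken as given, meniscus height is approximated as the mean per-column height of the binary mask, and the optimal height across three consecutive frames is taken as the median; none of these specifics is dictated by the disclosure.

```python
# Illustrative sketch of steps 1614-1619: crop a fixed window around the
# detected tear meniscus rectangle and measure meniscus height from a binary
# segmentation mask. The segmentation model itself is assumed and not shown.
import numpy as np

def crop_center(region: np.ndarray, out_h: int = 100, out_w: int = 200) -> np.ndarray:
    """Crop an out_h x out_w window centered on the detected rectangle (step 1616)."""
    cy, cx = region.shape[0] // 2, region.shape[1] // 2
    y0, x0 = max(cy - out_h // 2, 0), max(cx - out_w // 2, 0)
    return region[y0:y0 + out_h, x0:x0 + out_w]

def meniscus_height_px(mask: np.ndarray) -> float:
    """Mean per-column height (pixels) of a binary tear meniscus mask (step 1618)."""
    column_heights = mask.astype(bool).sum(axis=0)
    occupied = column_heights[column_heights > 0]
    return float(occupied.mean()) if occupied.size else 0.0

def optimal_height(heights_3_frames) -> float:
    """Step 1619, assumed: combine three consecutive frames via the median."""
    return float(np.median(heights_3_frames))
```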


With reference to FIG. 17, a flow chart is depicted showing an example method for performing ring pattern distortion analysis. The method includes recording anterior eye video with a projected Placido ring pattern and performing frame sequence extraction 1710. Then, iris region detection is performed on each frame, and iris region masks are created 1711. The iris region masks are then cropped and extracted from the frames 1712. The cropped iris regions are processed using a threshold method to develop a ring pattern binary mask for the iris region 1713. Then the center of the Placido ring pattern is detected and used as the origin of a set of polar coordinates. The binary mask is then converted from Cartesian coordinates to polar coordinates 1714, and the polar binary ring pattern masks are divided into smaller sections. An object edge slope accumulation method is applied to the smaller sections, and a ring pattern distortion value is calculated for each section 1715. The distortion values are arranged in a matrix and converted back to the original Cartesian coordinates 1716. The Cartesian distortion values are then used to create a ring pattern distortion map 1717. Finally, the distortion map is interpreted and a distortion heat map is derived 1718. The distortion heat map displays distortion values in terms of colors, numbers, or another suitable index, e.g., high distortion levels are characterized as red or 5, low distortion levels as green or 1, etc.
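One plausible realization of steps 1713 and 1714 is sketched below, assuming OpenCV is available: Otsu thresholding stands in for the unspecified threshold method, and cv2.warpPolar performs the Cartesian-to-polar remapping about the detected ring pattern center, which is taken as an input from the preceding detection step.

```python
# Sketch of steps 1713-1714 using OpenCV. Otsu thresholding, the output
# dimensions, and the interface are illustrative assumptions.
import cv2
import numpy as np

def ring_pattern_polar_mask(iris_crop_gray: np.ndarray,
                            center: tuple,
                            max_radius: float) -> np.ndarray:
    # Step 1713: threshold the cropped iris region into a binary ring mask
    _, binary = cv2.threshold(iris_crop_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 1714: remap so one axis is angle (theta) and the other radius (r)
    polar = cv2.warpPolar(binary, (binary.shape[1], binary.shape[0]),
                          center, max_radius, cv2.WARP_POLAR_LINEAR)
    return polar
```

In polar coordinates an undistorted ring becomes a nearly straight line of constant radius, which is what makes the subsequent slope-based distortion scoring tractable.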


With reference to FIG. 18, a flow chart is depicted showing an example method for creating a ring pattern distortion map. The method includes recording anterior eye video with a projected Placido ring pattern and performing frame sequence extraction. Then, iris region detection is performed on each frame, and iris region masks are created. The iris region masks are then cropped and extracted from the frames. The cropped iris regions are processed using a threshold method to develop a ring pattern binary mask for the iris region. Then the center of the Placido ring pattern is detected and used as the origin of a set of polar coordinates. The binary mask is then converted from Cartesian coordinates to polar coordinates 1810, and the polar binary ring pattern mask is divided into segments of 50×50 pixels along the radial coordinate (r) and angular coordinate (θ) of the polar coordinate system 1811. The method then excludes segments not found within the radial extent of the ring pattern, as well as segments containing eyelash shadows, and sets their distortion values to a predefined base value 1812. Next, the method constructs a histogram for each remaining segment 1813. Histogram construction includes identifying all the objects in the segment, identifying points along the boundaries (edges) of each object, and finding the contour slopes at each of the boundary points. Each edge or boundary includes a plurality of points, and a slope is determined for each point on each edge. Slope values along the four segment edges outside the region of interest are excluded from the histogram. Accumulating the slopes within the segments of interest yields a Point Slope Histogram (PSH). The method then calculates a ring pattern distortion value for each segment using the segment's histogram 1814, specifically the PSH. The segment histogram values are weighted according to weighting parameters multiplied by the histogram values of the adjacent segments, and specifically according to weighting parameters related to the distribution (locations) of the histogram values within the histogram. Higher weights are assigned to bins (portions of the PSH) representing greater tear film distortion. The weighted sum of the bins represents the tear film distortion for the current segment. Finally, a ring pattern distortion map of the eye is created by assembling the distortion values for each segment 1815.
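A hypothetical sketch of the PSH construction and per-segment scoring of steps 1813 and 1814 follows. The bin count, the slope estimate from contour tangents, and the weighting scheme (distance from the dominant bin as a proxy for distortion) are illustrative stand-ins, and the adjacent-segment weighting described above is omitted for brevity.

```python
# Hypothetical sketch of steps 1813-1814: build a Point Slope Histogram (PSH)
# from contour-point slope angles within one polar segment, then score the
# segment as a weighted sum of bins. All specifics here are assumptions.
import cv2
import numpy as np

def point_slope_histogram(segment_mask: np.ndarray, bins: int = 18) -> np.ndarray:
    """Accumulate contour-point slope angles of all objects in a binary
    (uint8) segment mask into a histogram over [0, pi)."""
    contours, _ = cv2.findContours(segment_mask, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    angles = []
    for contour in contours:
        pts = contour.reshape(-1, 2).astype(float)
        if len(pts) < 3:
            continue
        # Slope at each boundary point from the tangent through its neighbors
        tangents = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
        angles.extend(np.arctan2(tangents[:, 1], tangents[:, 0]) % np.pi)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist.astype(float)

def distortion_value(hist: np.ndarray) -> float:
    """Weighted sum of PSH bins: bins far from the dominant (undistorted ring)
    orientation receive higher weights, an assumed proxy for distortion."""
    dominant = int(hist.argmax())
    weights = np.abs(np.arange(len(hist)) - dominant)
    return float((weights * hist).sum() / max(hist.sum(), 1.0))
```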


Also disclosed are embodiments of a mobile software application, and accompanying methods to perform dry eye syndrome diagnosis, assessment, and treatment. Software application embodiments for the binocular illuminator allow automatic right eye and left eye identification and analysis. The application operates the illuminator, performs video analysis, computes dry eye-related parameters, summarizes the measured values, presents longitudinal data trends, and guides self-treatment. It also provides user administrative functions and facilitates data storage locally and on cloud servers.


Graphical User Interface

With reference to FIG. 19, an example Graphical User Interface (GUI) layout for video recording 1900 is depicted. The GUI is generated by the software application for use with the disclosed illuminator and displayed on the touchscreen area 14 of a mobile device 12 such as a smartphone. A record button 1910 when activated alternately starts and stops a video recording process. A text field 1920 displays recording information, such as video recording duration, identifies the eye being recorded, indicates whether the ring pattern is illuminated, and/or other pertinent information. A help button 1930 when activated alternately displays or hides a pop-up window that contains help information, and may include a help topic search function, or a list of help topics, etc. A slider 1940 controls the zoom function of the camera, and operates, for example, so that the camera is fully zoomed out when the slider is positioned to the left of the field, and fully zoomed in when the slider is at the right of the field. Four boundary guides 1950 indicate the area within which the eye should be contained for a quality video recording of the anterior eye. A pupil locator 1960 should be centered on the pupil for proper camera alignment with the eye. An accept button 1970, when activated, allows the user to accept the recorded video and return to the original view of the recording screen 1900 so that additional video may be recorded.


With reference to FIG. 20, an example GUI layout for tear meniscus height and iris interferometric pattern analysis 2000 is depicted. The GUI is generated by the application and displayed on the touchscreen area 14 of a mobile device 12. A primary display window 2010 is used to display either a segmented tear meniscus region image or a segmented iris region image depending on activation of a meniscus selector button 2012 or an iris selector button 2014. A left eye selector button 2016 when activated causes images from the left eye to be displayed in the display window 2010, and similarly a right eye selector button 2018 causes images from the right eye to be displayed. A manual adjustment controller 2020 contains three sliding bar controllers (not shown) and is used to adjust a detected rectangular region overlaying the image displayed in the display window 2010. One bar controller adjusts the height of the rectangular region, a second bar controller adjusts its position along the x-axis, and a third bar controller adjusts its position along the y-axis. Another button 2022 provides an additional adjustment control (not further described). A reset button 2024 returns the rectangular region to the originally detected condition if manual adjustment is unsatisfactory. A text field displays blink rates, tear meniscus heights, tear film interferometric patterns, left or right eye selection, or other suitable messaging. A help button 2030 when activated displays help information as previously described, and a back button 2040 when activated returns the user to the previous step.


With reference to FIG. 21, an example GUI layout for a Placido ring pattern projection image with an overlaid Placido ring distortion heatmap 2100 is depicted. The GUI is generated by the application and displayed on the touchscreen area 14 of a mobile device 12. The GUI has a primary display window 2110 for displaying the Placido ring pattern projection with overlaid Placido ring distortion heatmap, as selected. A group of seven image selector buttons 2112 when activated allow the user to select a specific frame image from a video to be displayed in the display window 2110. A left eye selector button 2116 when activated causes images from the left eye to be displayed in the display window 2110, and similarly a right eye selector button 2118 causes images from the right eye to be displayed. A text field 2120 displays the detected non-invasive tear film breakup time (NiBUT) as generated by the application, and functions as a text editor field, allowing the user to manually input a NiBUT value if desired. A sliding bar controller 2122 allows the user to change the opacity of the overlaid heatmap image displayed in the display window 2110, e.g., if the bar is positioned to the left, the heatmap image is fully opaque, and if the bar is positioned on the right, the heatmap image is fully transparent. A reset button 2124 returns the NiBUT value to the value generated by the application if the user decides a manual change is unsatisfactory. A help button 2130 when activated displays help information as previously described, and a back button 2140 when activated returns the user to the previous step.
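As an illustration of the opacity slider 2122, the following sketch blends the heatmap over the ring pattern image using OpenCV's addWeighted; the slider-to-alpha mapping follows the description above, with the left position fully opaque and the right fully transparent. The function name and interface are assumptions for the example.

```python
# Illustrative sketch of the opacity control 2122: blend the distortion
# heatmap over the ring pattern image with adjustable transparency.
import cv2
import numpy as np

def overlay_heatmap(base_bgr: np.ndarray, heatmap_bgr: np.ndarray,
                    slider_position: float) -> np.ndarray:
    """slider_position in [0, 1]: 0 (left) = opaque heatmap, 1 (right) = hidden."""
    alpha = 1.0 - float(np.clip(slider_position, 0.0, 1.0))
    return cv2.addWeighted(heatmap_bgr, alpha, base_bgr, 1.0 - alpha, 0.0)
```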


With reference to FIG. 22, an example GUI layout for a scrolling summary display of current measures and analysis 2200 is depicted. The GUI is generated by the application and displayed on the touchscreen area 14 of a mobile device 12. The scrolling summary page 2210 shows all measured data and analysis results derived from the left eye and right eye videos in a sequential manner, wherein the user can use a pair of buttons, a back button 2240 and a next button 2242, to navigate through the data set. The scrolling display includes a fixed title window 2212, which identifies the portion of information being displayed on the scrolling page 2210. A series of text fields 2213, 2214, 2215, 2216 identifies or labels the type of information being displayed and includes the numerical value of the information. For example, the text fields may show a blink rate, an iris interferometric pattern, a tear meniscus height, a distortion heat map, or another result, as follows: “Blink Rate: 20/min”; “Tear Meniscus Height: 0.15 mm”; etc. A visual display window 2218 shows results requiring a visual representation, such as a distortion heatmap, a frame of recorded video, a cropped iris section, etc.


With reference to FIG. 23, an example GUI layout for a scrolling trend data display 2300 is depicted. The GUI is generated by the application and displayed on the touchscreen area 14 of a mobile device 12. The scrolling summary page 2310 shows a period of trend data that can be navigated in a sequential manner, wherein the user can use a pair of buttons, a back button 2340 and a next button 2342, to navigate through the data set. The scrolling display includes a fixed title window 2312, which identifies the period of trend information being displayed on the scrolling page 2310. A starting date field 2313 is for inputting and displaying a start date for the trend data, and an end date field 2314 is for inputting and displaying an end date for the trend data. A data retrieval button 2315, when activated, refreshes the displayed information to match the start and end dates entered in their respective fields 2313, 2314. An OSDI field 2316 displays the OSDI score for the selected period. A visual display window 2318 displays diagrams relevant to the selected period, arranged in alphabetical order according to the diagram title. A summary button 2317, when activated, displays summary content.


With reference to FIG. 24, an example GUI layout for an eye scan report 2400 is depicted. The GUI is generated by the application and displayed on the touchscreen area 14 of a mobile device (not shown). A logo window 2411 displays a company logo, and a title window 2412 lists the title of the displayed report. A date window 2413 displays the date the eye scan was performed and the name of the service provider, if applicable. A patient window 2414 displays patient information, such as name, age, eye condition, etc. An OSDI window displays the patient's OSDI score. Two image windows 2421, 2422 display images of the left eye and right eye, respectively. A first title window 2430 displays the title “NiBUT” since the window 2431 immediately below displays NiBUT results. Two image windows 2432, 2433 display a ring pattern projection image and a corresponding ring pattern distortion heatmap for the left eye and right eye, respectively. Two additional image windows 2434, 2435 display eye blink monitoring diagrams drawn from ring pattern projection videos for the left eye and right eye, respectively. A second title window 2440 displays the title “Iris and Meniscus Measures” since the window 2441 immediately below displays iris interferometric pattern and meniscus measurement results. Two image windows 2442, 2443 display tear meniscus images with text indicating measured meniscus heights for the left eye and right eye, respectively. Two additional image windows 2444, 2445 display iris interferometric pattern images for the left eye and right eye, respectively. Two final image windows 2446, 2447 display eye blink monitoring diagrams for the left eye and right eye, respectively. A text field 2450 displays explanatory information based on the displayed eye scan results.


Some portions of this specification are presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a machine memory (e.g., a computer memory). These algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve the manipulation of information elements. Typically, but not necessarily, such elements may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” “words,” “materials,” etc. These specific words, however, are merely convenient labels and are to be associated with appropriate information elements.


Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.


Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for treating dry eye syndrome through the disclosed principles herein. Thus, while embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes, and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation, and details of the method and apparatus disclosed herein without departing from the spirit and scope of the invention.


It will also be understood by those familiar with the art, that the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the naming and division of the modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects are not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, divisions, and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects of the invention can be implemented as software, hardware, firmware, or any combination of the three. Of course, wherever a component of the present invention is implemented as software, the component can be implemented as a script, as a standalone program, as part of a larger program, as a plurality of separate scripts and/or programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future to those of skill in the art of computer programming. Additionally, the present invention is in no way limited to implementation in any specific programming language, or for any specific operating system or environment.


While this invention has been described in terms of several embodiments, there are alterations, modifications, permutations, and substitute equivalents, which fall within the scope of this invention. Although subsection titles have been provided to aid in the description of the invention, these titles are merely illustrative and are not intended to limit the scope of the present invention. In addition, where claim limitations have been identified, for example, by a numeral or letter, they are not intended to imply any specific sequence.


This has been a description of the disclosed invention along with a preferred method of practicing the invention.

Claims
  • 1. A system for treatment of dry eye syndrome, comprising: a binocular illuminator comprising one or more projectors, including two projector cones and a light source for each projector cone, each projector cone configured to project a ring pattern onto an anterior surface of an eye of a patient, a lens system for focusing into a camera an image of the anterior surface and the ring pattern, and an eye cup for aligning the camera and projector with each eye; and a computer comprising a software application to perform one or more of the following: operation of the binocular illuminator to capture a video of the anterior surface and ring pattern of each eye simultaneously, splitting the video to isolate an anterior image and ring pattern of each eye, aligning an iris centroid of each eye with an optical axis, segmenting the ring pattern of the anterior image of each eye, converting the ring pattern into polar coordinates, constructing a segment histogram of each segment of the ring pattern, calculating a ring pattern distortion value for each segment, creating a ring pattern distortion map of each eye, and analyzing each ring pattern distortion map to characterize a parameter relevant to a dry eye condition.
  • 2. The system for treatment of dry eye syndrome of claim 1, wherein the binocular illuminator further includes one or more of the following for communication with the computer: a wired connection, a wireless connection, a universal serial bus connection, a near field communication system, and a Bluetooth system.
  • 3. The system for treatment of dry eye syndrome of claim 1, wherein the camera is housed within the binocular illuminator.
  • 4. The system for treatment of dry eye syndrome of claim 1, wherein each segment histogram includes one or more histogram values and wherein the one or more histogram values are weighted according to weighting parameters related to the distribution of the histogram values in the histogram.
  • 5. The system for treatment of dry eye syndrome of claim 1, wherein the light source for each projector cone is a light ring and wherein each light ring includes one or more white light emitting diodes and two or more colored light emitting diodes.
  • 6. The system for treatment of dry eye syndrome of claim 5, wherein the two or more colored light emitting diodes are aligned orthogonal to the optical axis and configured to communicate alignment of the iris centroid of each eye with the optical axis.
  • 7. The system for treatment of dry eye syndrome of claim 5, wherein the two or more colored light emitting diodes are aligned orthogonal to the optical axis and configured to direct adjustment of interpupillary distance.
  • 8. A method of treating dry eye syndrome, comprising: connecting a binocular illuminator to a computer that hosts a program of instructions executable by the computer and wherein the binocular illuminator includes two eye cups; positioning an eye cup of the binocular illuminator over each eye of a patient; adjusting the interpupillary distance to position each eye in front of a camera; capturing a video by the camera of each eye; executing the program of instructions wherein the program of instructions includes code for: extracting a frame of the video, splitting the frame of the video into a left segment and a right segment consistent with a left eye and right eye, respectively, detecting an iris region in each eye and determining, between an iris centroid and an optical axis, a vertical distance error and a horizontal distance error, directing, for each eye, alignment, within a predetermined maximum error tolerance, of the iris centroid with the optical axis, adjusting a lens system until each eye is in a focused condition, projecting a ring pattern on an anterior surface of each eye, initiating a video recording and directing a series of blinks, stitching together recordings of each eye to form a composite image, and analyzing each ring pattern for distortion to characterize a parameter relevant to a dry eye condition.
  • 9. The method of treating dry eye syndrome of claim 8, wherein directing includes illumination of one or more colored light emitting diodes.
  • 10. The method of treating dry eye syndrome of claim 8 wherein the program of instructions further includes code for: identifying an iris location on the video, identifying a lower tear meniscus on the video, identifying a series of eye blinks, including identifying a fully open condition of an eyelid and a fully closed condition of the eyelid, segmenting the video using the iris location and the lower tear meniscus location, and segmenting the video using the ring pattern.
  • 11. The method of treating dry eye syndrome of claim 8, wherein the program of instructions further includes code for: analyzing the video to develop one or more dry eye condition parameters, performing a ring pattern deformation analysis to measure one or more of: a distortion heat map, a tear film breakup time, a tear meniscus height, and an optimal tear meniscus height, recognizing a plurality of eye blink events to measure an eye blink rate, and comparing an image of a corneal lipid layer to an example interferometric pattern to recognize a corneal lipid layer interferometric pattern.
  • 12. The method of treating dry eye syndrome of claim 11, wherein the one or more dry eye parameters are partially developed or corrected by manual input from the user.
  • 13. The method of treating dry eye syndrome of claim 8, wherein the program of instructions further includes code for: converting the ring pattern from Cartesian coordinates to Polar coordinates; dividing the ring pattern into a plurality of segments wherein each segment includes a plurality of edges and a slope for each edge; constructing a histogram for each of the plurality of segments of objects within each segment using the slopes of the plurality of edges; and calculating a ring pattern distortion value for one or more of the plurality of segments using the histogram of the one or more of the plurality of segments.
  • 14. The method of treating dry eye syndrome of claim 8, wherein the program of instructions further includes code for weighting the histogram of the one or more of the plurality of segments by multiplying weighting parameters of the histogram of segments adjacent to the one or more of the plurality of segments.
  • 15. A method of treating dry eye syndrome, comprising: connecting a binocular illuminator to a computer that hosts a program of instructions executable by the computer and wherein the binocular illuminator includes two eye cups; positioning an eye cup of the binocular illuminator over each eye of a patient; adjusting the interpupillary distance to position each eye in front of a camera; capturing a video by the camera of each eye; executing the program of instructions wherein the program of instructions includes code for: extracting a frame of the video, splitting the frame of the video into a left segment and a right segment consistent with a left eye and right eye, respectively, detecting an iris region in each eye and determining, between an iris centroid and an optical axis, a vertical distance error and a horizontal distance error, directing, for each eye, alignment, within a predetermined maximum error tolerance, of the iris centroid with the optical axis, adjusting a lens system until each eye is in a focused condition, identifying an iris location on the video, identifying a lower tear meniscus on the video, identifying a series of eye blinks, including identifying a fully open condition of an eyelid and a fully closed condition of the eyelid, and segmenting the video using the iris location and the lower tear meniscus location.
  • 16. The method of treating dry eye syndrome of claim 15, wherein the program of instructions further includes code for: projecting a ring pattern on an anterior surface of each eye, initiating a video recording and directing a series of blinks, stitching together recordings of each eye to form a composite image, and analyzing each ring pattern for distortion to characterize a parameter relevant to a dry eye condition.
  • 17. The method of treating dry eye syndrome of claim 16, wherein the program of instructions further includes code for: converting the ring pattern from Cartesian coordinates to Polar coordinates; dividing the ring pattern into a plurality of segments wherein each segment includes a plurality of edges with each edge having a plurality of points and wherein dividing includes calculating a slope for each point on each edge; constructing a histogram for each of the plurality of segments of objects within each segment using the slopes of the plurality of edges; and calculating a ring pattern distortion value for one or more of the plurality of segments using the histogram of the one or more of the plurality of segments.
  • 18. The method of treating dry eye syndrome of claim 17, wherein the program of instructions further includes code for: analyzing the video to develop one or more dry eye condition parameters, performing a ring pattern deformation analysis to measure a distortion heat map and/or tear film breakup time, performing a corneal surface analysis to measure a tear meniscus height and/or an optimal tear meniscus height, recognizing a plurality of eye blink events to measure an eye blink rate, and comparing an image of a corneal lipid layer to an example interferometric pattern to recognize a corneal lipid layer interferometric pattern.
  • 19. The method of treating dry eye syndrome of claim 17, wherein the program of instructions further includes code for weighting the histogram of the one or more of the plurality of segments by multiplying weighting parameters of the histogram of segments adjacent to the one or more of the plurality of segments.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation-in-part of and claims priority to U.S. Non-Provisional patent application Ser. No. 18/298,827, filed 11 Apr. 2023, which claims priority to U.S. Provisional Application Ser. No. 63/372,842, filed 11 Apr. 2022, and U.S. Provisional Application Ser. No. 63/495,411, filed 11 Apr. 2023, both of which are hereby incorporated by reference in their entirety for all purposes as if fully set forth herein.

Provisional Applications (1)
Number       Date      Country
63/495,411   Apr 2023  US

Continuation in Parts (1)
Number              Date      Country
Parent 18/298,827   Apr 2023  US
Child 18/631,190              US