People with type 1 or type 2 diabetes can develop eye disease as a result of having diabetes. One of the most common diabetic eye diseases is diabetic retinopathy, which is damage to the blood vessels of the light-sensitive tissue at the back of the eye, known as the retina. During eye examinations for diabetic retinopathy screening, trained medical professionals use cameras to produce images of the back of the eye, and they use those images to diagnose and treat diabetic retinopathy.
These images are produced either with pharmacological pupil dilation, known as mydriatic fundus imaging, or without pharmacological pupil dilation, known as non-mydriatic fundus imaging. Because pupil dilation is inversely related, in part, to the amount of ambient light, non-mydriatic fundus imaging usually occurs in low lighting environments. Medical professionals can also use fundus imaging apparatus to detect or monitor other diseases, such as hypertension, glaucoma, and papilledema.
In one aspect, an apparatus for producing a fundus image includes: a processor and a memory; an illumination component including a light source and operatively coupled to the processor; a camera including a lens and operatively coupled to the processor, wherein the memory stores instructions that, when executed by the processor, cause the apparatus to: execute an automated script for capture of the fundus image; and allow for manual capture of the fundus image.
In another aspect, a method for capturing one or more images of a fundus of a patient includes: capturing the one or more images of the fundus using an apparatus; automatically uploading the one or more images of the fundus to a remote device; and receiving feedback on a quality of one or more of the one or more images of the fundus.
In yet another aspect, a method for capturing one or more images of a fundus of a patient includes: automatically analyzing, by an apparatus, the one or more images of the fundus; and presenting results of the analyzing on the apparatus; the results including at least one entry for each of the one or more images of the fundus, the at least one entry including a description of the one or more images and a quality indication for the one or more images.
The following figures, which form a part of this application, are illustrative of described technology and are not meant to limit the scope of the claims in any manner, which scope shall be based on the claims appended hereto.
The fundus imaging system 102 functions to create a set of digital images of a patient's P eye fundus. As used herein, “fundus” refers to the eye fundus and includes the retina, optic nerve, macula, vitreous, choroid and posterior pole.
In this example, one or more images of the eye are desired. For instance, the patient P is being screened for an eye disease, such as diabetic retinopathy. The fundus imaging system 102 can also be used to provide images of the eye for other purposes, such as to diagnose or monitor the progression of a disease such as diabetic retinopathy.
The fundus imaging system 102 includes a handheld housing that supports the system's components. The housing supports one or two apertures for imaging one or two eyes at a time. In embodiments, the housing supports positional guides for the patient P, such as an optional adjustable chin rest. The positional guide or guides help to align the patient's P eye or eyes with the one or two apertures. In embodiments, the housing supports means for raising and lowering the one or more apertures to align them with the patient's P eye or eyes. Once the patient's P eyes are aligned, the clinician C then initiates the image captures by the fundus imaging system 102.
One technique for fundus imaging requires mydriasis, or the dilation of the patient's pupil, which can be painful and/or inconvenient to the patient P. Example system 100 does not require a mydriatic drug to be administered to the patient P before imaging, although the system 100 can image the fundus if a mydriatic drug has been administered.
The system 100 can be used to assist the clinician C in screening for, monitoring, or diagnosing various eye diseases, such as hypertension, diabetic retinopathy, glaucoma and papilledema. It will be appreciated that the clinician C that operates the fundus imaging system 102 can be different from the clinician C evaluating the resulting image.
In the example embodiment 100, the fundus imaging system 102 includes a camera 104 in communication with an image processor 106. In this embodiment, the camera 104 is a digital camera including a lens, an aperture, and a sensor array. The camera 104 lens is a variable focus lens, such as a lens moved by a step motor, or a fluid lens, also known as a liquid lens in the art. The camera 104 is configured to record images of the fundus one eye at a time. In other embodiments, the camera 104 is configured to record an image of both eyes substantially simultaneously. In those embodiments, the fundus imaging system 102 can include two separate cameras, one for each eye.
In example system 100, the image processor 106 is operatively coupled to the camera 104 and configured to communicate with the network 110 and display 108.
The image processor 106 regulates the operation of the camera 104. Components of an example computing device, including an image processor, are shown in more detail in
The display 108 is in communication with the image processor 106. In the example embodiment, the housing supports the display 108. In other embodiments, a display separate from the housing, such as the display of a smart phone, tablet computer, or external monitor, connects to the image processor. The display 108 functions to reproduce the images produced by the fundus imaging system 102 in a size and format readable by the clinician C. For example, the display 108 can be a liquid crystal display (LCD) or an active matrix organic light emitting diode (AMOLED) display. The display can be touch sensitive.
The example fundus imaging system 102 is connected to a network 110. The network 110 may include any type of wireless network, a wired network, or any communication network known in the art. For example, wireless connections can include cellular network connections and connections made using protocols such as 802.11a, b, and/or g. In other examples, a wireless connection can be accomplished directly between the fundus imaging system 102 and an external display using one or more wired or wireless protocols, such as Bluetooth, Wi-Fi Direct, radio-frequency identification (RFID), or Zigbee. Other configurations are possible.
In one of the embodiments, the variable focus lens 180 is a liquid lens. A liquid lens is an optical lens whose focal length can be controlled by the application of an external force, such as a voltage. The lens includes a transparent fluid, such as water or water and oil, sealed within a cell and a transparent membrane. Applying a force to the fluid changes its curvature, thereby changing the focal length. This effect is known as electrowetting.
Generally, a liquid lens can focus from about −10 diopters to about +30 diopters. A liquid lens can change focus quickly, even across large changes in focus. For instance, some liquid lenses can autofocus in tens of milliseconds or faster. Liquid lenses can focus from about 10 cm to infinity and can have an effective focal length of about 16 mm or shorter.
In another embodiment of example fundus imaging system 102, the variable focus lens 180 is one or more movable lenses that are controlled by a stepping motor, a voice coil, an ultrasonic motor, or a piezoelectric actuator. Additionally, a stepping motor can also move the image sensor array 186. In those embodiments, the variable focus lens 180 and/or the image sensor array 186 are oriented normal to an optical axis of the fundus imaging system 102 and move along the optical axis. An example stepping motor is shown and described below with reference to
The example fundus imaging system 102 also includes an illumination light-emitting diode (LED) 182. The illumination LED 182 can be single color or multi-color. For example, the illumination LED 182 can be a three-channel RGB LED, where each die is capable of independent and tandem operation.
Optionally, the illumination LED 182 is an assembly including one or more visible light LEDs and a near-infrared LED. The optional near-infrared LED can be used in a preview mode, for example, for the clinician C to determine or estimate the patient's P eye focus without illuminating visible light that could cause the pupil to contract or irritate the patient P.
The illumination LED 182 is in electrical communication with the computing device 1800. Thus, the illumination of illumination LED 182 is coordinated with the adjustment of the variable focus lens 180 and image capture. The illumination LED 182 can be overdriven to draw more than the maximum standard current draw rating. In other embodiments, the illumination LED 182 can also include a near-infrared LED. The near-infrared LED is illuminated during a preview mode.
The example fundus imaging system 102 also optionally includes a fixation LED 184. The fixation LED 184 is in communication with the computing device 1800 and produces a light to guide the patient's P eye for alignment. The fixation LED 184 can be a single color or multicolor LED. For example, the fixation LED 184 can produce a beam of green light that appears as a green dot when the patient P looks into the fundus imaging system 102. Other colors and designs, such as a cross, “x” and circle are possible.
The example fundus imaging system 102 also includes an image sensor array 186 that receives and processes light reflected by the patient's fundus. The image sensor array 186 is, for example, a complementary metal-oxide semiconductor (CMOS) sensor array, also known as an active pixel sensor (APS), or a charge coupled device (CCD) sensor.
The image sensor array 186 has a plurality of rows of pixels and a plurality of columns of pixels. In some embodiments, the image sensor array has about 1280 by 1024 pixels, about 640 by 480 pixels, about 1500 by 1152 pixels, about 2048 by 1536 pixels, or about 2560 by 1920 pixels.
In some embodiments, the pixel size in the image sensor array 186 is about four micrometers by about four micrometers; about two micrometers by about two micrometers; about six micrometers by about six micrometers; or about one micrometer by about one micrometer.
The example image sensor array 186 includes photodiodes that have a light-receiving surface and have substantially uniform length and width. During exposure, the photodiodes convert the incident light to a charge. The image sensor array 186 can be operated as a global reset, that is, substantially all of the photodiodes are exposed simultaneously and for substantially identical lengths of time.
The example fundus imaging system 102 also includes a display 108, discussed in more detail above with reference to
The embodiment of method 200 begins with setting a depth of field operation 204. In embodiments, the variable focus lens 180 is capable of focusing from about −20 diopters to about +20 diopters. Set depth of field operation 204 defines the lower and upper bounds in terms of diopters. For example, the depth of field range could be set to about −10 to +10 diopters; about −5 to about +5 diopters; about −10 to about +20 diopters; about −5 to about +20 diopters; about −20 to about +0 diopters; or about −5 to about +5 diopters. Other settings are possible. The depth of field can be preprogrammed by the manufacturer. Alternatively, the end user, such as the clinician C, can set the depth of field.
As shown in
For example, when the depth of field is from −10 to +10 diopters, the focus of the variable focus lens can be changed by 4 diopters before each image capture. Thus, in this example, images would be captured at −10, −6, −2, +2, +6 and +10 diopters. Or, images could be captured at −8, −4, 0, +4 and +8 diopters, thereby capturing an image in zones −10 to −6 diopters, −6 to −2 diopters, −2 to +2 diopters, +2 to +6 diopters and +6 to +10 diopters, respectively. In that instance, the depth of focus is about +/−2 diopters. Of course, the number of zones and the depth of field can vary, resulting in different ranges of depth of field image capture.
In embodiments, both depth of field and number of zones are predetermined. For example, −10 D to +10 D and 5 zones. Both can be changed by a user.
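As a rough illustration of the arithmetic above (not part of the described apparatus), the following sketch computes one capture focus value at the center of each zone from an assumed depth of field and zone count; the function name and defaults are hypothetical.

```python
# Illustrative sketch only: derive per-zone capture focus values (in diopters)
# from a depth of field range and a number of zones. Names are hypothetical.
def zone_capture_diopters(min_diopters=-10.0, max_diopters=10.0, zones=5):
    """Return one capture focus value at the center of each zone."""
    zone_width = (max_diopters - min_diopters) / zones
    return [min_diopters + zone_width * (i + 0.5) for i in range(zones)]

# -10 D to +10 D split into 5 zones of 4 diopters each gives capture points
# at -8, -4, 0, +4 and +8 diopters, matching the second example above
# (a depth of focus of about +/-2 diopters per zone).
print(zone_capture_diopters())  # [-8.0, -4.0, 0.0, 4.0, 8.0]
```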
After the depth of field and number of zones are set, the next operation in embodiment of method 200 is the image capture process, which includes illuminate lighting operation 208, adjust lens focus operation 210 and capture image operation 212. As shown in
The illumination LED 182 is illuminated in lighting operation 208. The illumination LED 182 can remain illuminated throughout the duration of each image capture. Alternatively, the illumination LED 182 can be turned on and off for each image capture. In embodiments, the illumination LED 182 only turns on for the same period of time as the image sensor array 186 exposure time period.
Optionally, lighting operation 208 can additionally include illuminating a near-infrared LED. The clinician C can use the illumination of the near-infrared LED as a way to preview the position of the patient's P pupil.
The focus of variable focus lens 180 is adjusted in lens focus operation 210. Autofocusing is not used in embodiment of method 200. That is, the diopter setting is provided to the lens without regard to the quality of the focus of the image. Indeed, traditional autofocusing fails in the low-lighting non-mydriatic image capturing environment. The embodiment of method 200 results in a plurality of images at least one of which, or a combination of which, yields an in-focus view of the patient's P fundus.
Additionally, the lack of autofocusing enables the fundus imaging system 102 to rapidly capture multiple images in capture image operation 212 at different diopter ranges. That is, variable focus lens 180 can be set to a particular diopter range and an image captured without the system verifying that the particular focus level will produce an in-focus image, as is found in autofocusing systems. Because the system does not attempt to autofocus, and the focus of the variable focus lens 180 can be altered in roughly tens of milliseconds, images can be captured throughout the depth of field in well under a second, in embodiments. Thus, in the embodiment of method 200, the fundus imaging system 102 can capture images of the entire depth of field before the patient's P eye can react to the illuminated light. Without being bound to a particular theory, depending on the patient P, the eye might react to the light from illumination LED 182 in about 150 milliseconds.
The image sensor array 186 captures an image of the fundus in capture image operation 212. As discussed above, the embodiment of method 200 includes multiple image captures of the same fundus at different diopter foci. The example fundus imaging system 102 uses a global reset or global shutter array, although other types of shutter arrays, such as a rolling shutter, can be used. The entire image capture method 200 can also be triggered by passive eye tracking and automatically capture, for example, 5 frames of images. An embodiment of example method for passive eye tracking is shown and described in more detail with reference to
After the fundus imaging system 102 captures an image of the fundus, the embodiment of method 200 returns in loop 213 to either the illuminate lighting operation 208 or the adjust lens focus operation 210. That is, operations 208, 210 and 212 are repeated until an image is captured in each of the preset zones from zones operation 206. It is noted that the image capture does not need to be sequential through the depth of field. Additionally, each of the images does not need to be captured in a single loop; a patient could have one or more fundus images captured and then one or more after a pause or break.
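The repetition of operations 208, 210 and 212 over the preset zones can be summarized with the following control-flow sketch; the lens, LED, and sensor objects and their methods are hypothetical placeholders, since the specification does not define a software interface.

```python
# Minimal sketch of the image capture loop (operations 208-212 and loop 213).
# The hardware interfaces shown here are assumed, not part of the disclosure.
def capture_through_focus(lens, led, sensor, capture_diopters):
    images = []
    for diopters in capture_diopters:      # loop 213 over the preset zones
        led.illuminate()                   # illuminate lighting operation 208
        lens.set_focus(diopters)           # adjust lens focus operation 210;
                                           # no autofocus: the diopter value is
                                           # applied without checking focus quality
        images.append(sensor.capture())    # capture image operation 212
        led.off()                          # alternatively, the LED can stay on
    return images
```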
After an image is captured in each of the zones (capture image operation 212) in embodiment of method 200, either the images are displayed in show images operation 214 or a representative image is determined in operation 216 and then the image is displayed. Show images operation 214 can include showing all images simultaneously or sequentially on display 108. A user interface shown on display 108 can then enable the clinician C or other reviewing medical professional to select or identify the best or a representative image of the patient's P fundus.
In addition to, or in place of, show images operation 214, the computing device can determine a representative fundus image in operation 216. Operation 216 can also produce a single image by compiling aspects of one or more of the images captured. This can be accomplished by, for example, using a wavelet feature reconstruction method to select, interpolate, and/or synthesize the most representative frequency or location components.
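Operation 216 is described in terms of a wavelet feature reconstruction method; purely as a simplified stand-in, the sketch below picks the sharpest frame of the focal stack using a Laplacian-variance focus measure, which is a common sharpness proxy and not the method recited above.

```python
# Simplified illustration only: select a representative frame from the
# captured focal stack by maximizing a Laplacian-variance sharpness score.
import cv2

def representative_image(images):
    def sharpness(img):
        gray = img if img.ndim == 2 else cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()
    return max(images, key=sharpness)
```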
The fundus imaging system 102 can also produce a three-dimensional image of the fundus by compiling the multiple captured images. Because the images are taken at different focus ranges of the fundus, the compilation of the pictures can contain three-dimensional information about the fundus.
In turn, the image or images from operation 214 or 216 can be sent to a patient's electronic medical record or to a different medical professional via network 110.
The housing 401 of example fundus imaging system 400 is sized to be hand held. In embodiments, the housing 401 additionally supports one or more user input buttons near display 408, not shown in
Fixation LED 402 is an optional component of the fundus imaging system 400. The fixation LED 402 is a single or multi-colored LED. Fixation LED 402 can be more than one LED.
As shown in
The embodiment of example fundus imaging system 400 also includes a variable focus lens assembly 406. As shown in
The example printed circuit board 410 is shown positioned within one distal end of the housing 401 near the display 408. However, the printed circuit board 410 can be positioned in a different location. The printed circuit board 410 supports the components of the example computing device 1800. A power supply can also be positioned near printed circuit board 410 and configured to power the components of the embodiment of example fundus imaging system 400.
Step motor 412 is an optional component in the example embodiment 400. Step motor 412 can also be, for example, a voice coil, an ultrasonic motor, or a piezoelectric actuator. In the example embodiment 400, step motor 412 moves the variable focus lens assembly 406 and/or the sensor array 414 to achieve variable focus. The step motor 412 moves the variable focus lens assembly 406 or the sensor array 414 in a direction parallel to a longitudinal axis of the housing 401 (the optical axis). The movement of step motor 412 is actuated by computing device 1800.
The example image sensor array 414 is positioned normal to the longitudinal axis of the housing 401. As discussed above, the image sensor array 414 is in electrical communication with the computing device. Also, as discussed above, the image sensor array can be a CMOS (APS) or CCD sensor.
An illumination LED 416 is positioned near the variable focus lens assembly 406. However, the illumination LED 416 can be positioned in other locations, such as near or with the fixation LED 402.
Initially, at step 303, the pupil or fovea or both of the patient P are monitored. The fundus imaging system 102 captures images in a first image capture mode. In the first image capture mode, the fundus imaging system 102 captures images at a higher frame rate. In some embodiments, in the first image capture mode, the fundus imaging system 102 captures images with infra-red illumination and at lower resolutions. In some embodiments, the infra-red illumination is created by the illumination LED 182 operating to generate and direct light of a lower intensity towards the subject. The first image capture mode may minimize discomfort to the patient P, allow the patient P to relax, and allow for a larger pupil size without dilation (non-mydriatic).
Next, at step 305, the computing device 1800 processes at least a portion of the images captured by the fundus imaging system 102. The computing device 1800 processes the images to identify the location of the pupil or fovea or both of the patient P. Using the location of the pupil or fovea or both in one of the images, a vector corresponding to the pupil/fovea orientation is calculated. In some embodiments, the pupil/fovea orientation is approximated based on the distance between the pupil and fovea in the image. In other embodiments, the pupil/fovea orientation is calculated by approximating the position of the fovea relative to the pupil in three dimensions using estimates of the distance to the pupil and the distance between the pupil and the fovea. In other embodiments, the pupil/fovea orientation is approximated from the position of the pupil alone. In yet other embodiments, other methods of approximating the pupil/fovea orientation are used.
Next, at step 307, the pupil/fovea orientation is compared to the optical axis of the fundus imaging system 102. If the pupil/fovea orientation is substantially aligned with the optical axis of the fundus imaging system 102, the process proceeds to step 309 to capture a fundus image. If not, the process returns to step 303 to continue to monitor the pupil or fovea. In some embodiments, the pupil/fovea orientation is substantially aligned with the optical axis when the angle between them is less than two to fifteen degrees.
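A hedged sketch of steps 305 and 307 follows: the pupil/fovea orientation is approximated as a vector and compared against the optical axis, with capture triggered when the angle falls below a threshold. The vector construction and the five-degree default (within the two-to-fifteen-degree range noted above) are illustrative assumptions.

```python
# Illustrative alignment check for passive eye tracking (steps 305-307).
import numpy as np

OPTICAL_AXIS = np.array([0.0, 0.0, 1.0])   # assumed unit vector along the optical axis

def is_aligned(pupil_xyz, fovea_xyz, threshold_degrees=5.0):
    """Approximate the pupil/fovea orientation and test alignment (step 307)."""
    orientation = np.asarray(fovea_xyz, dtype=float) - np.asarray(pupil_xyz, dtype=float)
    orientation /= np.linalg.norm(orientation)
    cos_angle = np.clip(np.dot(orientation, OPTICAL_AXIS), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    return angle < threshold_degrees       # True: proceed to step 309 and capture
```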
Next, at step 309, fundus images are captured by triggering the embodiment of example thru focusing image capturing method 200. In embodiments, five images are captured at step 309. In some embodiments, the fundus image is captured in a second image capture mode. In some embodiments, in the second image capture mode, the fundus imaging system 102 captures images with visible illumination and at higher resolutions. In some embodiments, the visible illumination is created by the illumination LED 182 operating to generate and direct light of a higher intensity towards the subject. In other embodiments, the higher illumination is created by an external light source or ambient light. The second image capture mode may facilitate capturing a clear, well-illuminated, and detailed fundus image.
In some embodiments, after step 309, the initiate retinal imaging step 306 returns to step 303 to continue to monitor the pupil/fovea orientation. The initiate retinal imaging step 306 may continue to collect fundus images indefinitely or until a specified number of images have been collected. Further information regarding passive eye tracking can be found in U.S. patent application Ser. No. 14/177,594 filed on Feb. 11, 2014, titled Ophthalmoscope Device, which is hereby incorporated by reference in its entirety.
The embodiment of example use 500 begins by positioning the fundus imaging system (operation 502). In embodiments, the clinician first initiates an image capture sequence via a button on the housing or a graphical user interface shown by the display. The graphical user interface can instruct the clinician to position the fundus imaging system over a particular eye of the patient. Alternatively, the clinician can use the graphical user interface to indicate which eye fundus is being imaged first.
In operation 502, the clinician positions the fundus imaging system near the patient's eye socket. The clinician positions the aperture of the system flush against the patient's eye socket such that the aperture, or a soft material eye cup extending from the aperture, seals out most of the ambient light. Of course, the example use 500 does not require positioning the aperture flush against the patient's eye socket.
When the fundus imaging system is in position, the system captures more than one image of the fundus in operation 504. As discussed above, the system does not require the clinician to manually focus the lens. Additionally, the system does not attempt to autofocus on the fundus. Rather, the clinician simply initiates the image capture, via a button or the GUI, and the fundus imaging system controls when to capture the images and the focus of the variable focus lens. Also, as discussed above at least with reference to
The patient may require the fundus imaging system to be moved away from the eye socket during image capture operation 504. The clinician can re-initiate the image capture sequence of the same eye using the button or the GUI on the display.
After capturing an image in each of the specified zones, the fundus imaging system notifies the clinician that the housing should be positioned over the other eye (operation 506). The notification can be audible, such as a beep, and/or the display can show a notification. In embodiments, the system is configured to capture a set of images of only one eye, wherein the example method 500 proceeds to view images operation 520 after image capture operation 504.
Similar to operation 502, the clinician then positions the fundus imaging system near or flush with the patient's other eye socket in operation 506. Again, when the system is in place, an image is captured in every zone in operation 508.
After images have been captured of the fundus in each pre-set zone, the clinician can view the resulting images in operation 520. As noted above with reference to
As stated above, a number of program modules and data files may be stored in the system memory 1804. While executing on the at least one processing unit 1802, the program modules 1806 may perform processes including, but not limited to, generate list of devices, broadcast user-friendly name, broadcast transmitter power, determine proximity of wireless computing device, connect with wireless computing device, transfer vital sign data to a patient's EMR, sort list of wireless computing devices within range, and other processes described with reference to the figures as described herein. Other program modules that may be used in accordance with embodiments of the present disclosure, and in particular to generate screen content, may include electronic mail and contacts applications, word processing applications, spreadsheet applications, database applications, slide presentation applications, drawing or computer-aided application programs, etc.
Furthermore, embodiments of the disclosure may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. For example, embodiments of the disclosure may be practiced via a system-on-a-chip (SOC) where each or many of the components illustrated in
The computing device 1800 may also have one or more input device(s) 1812 such as a keyboard, a mouse, a pen, a sound or voice input device, a touch or swipe input device, etc. The output device(s) 1814 such as a display, speakers, a printer, etc. may also be included. The aforementioned devices are examples and others may be used. The computing device 1800 may include one or more communication connections 1816 allowing communications with other computing devices. Examples of suitable communication connections 1816 include, but are not limited to, RF transmitter, receiver, and/or transceiver circuitry; universal serial bus (USB), parallel, and/or serial ports.
The term computer readable media as used herein may include non-transitory computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program modules. The system memory 1804, the removable storage device 1809, and the non-removable storage device 1810 are all computer storage media examples (i.e., memory storage.) Computer storage media may include RAM, ROM, electrically erasable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture which can be used to store information and which can be accessed by the computing device 1800. Any such computer storage media may be part of the computing device 1800. Computer storage media does not include a carrier wave or other propagated or modulated data signal.
Communication media may be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.
Although the example medical devices described herein are devices used to monitor patients, other types of medical devices can also be used. For example, the different components of the CONNEX™ system, such as the intermediary servers that communicate with the monitoring devices, can also require maintenance in the form of firmware and software updates. These intermediary servers can be managed by the systems and methods described herein to update the maintenance requirements of the servers.
Embodiments of the present invention may be utilized in various distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network in a distributed computing environment.
The block diagrams depicted herein are just examples. There may be many variations to these diagrams described therein without departing from the spirit of the disclosure. For instance, components may be added, deleted or modified.
While embodiments have been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements.
As used herein, “about” refers to a degree of deviation based on experimental error typical for the particular property identified. The latitude provided by the term “about” will depend on the specific context and particular property and can be readily discerned by those skilled in the art. The term “about” is not intended to either expand or limit the degree of equivalents which may otherwise be afforded a particular value. Further, unless otherwise stated, the term “about” shall expressly include “exactly,” consistent with the discussions regarding ranges and numerical data. Concentrations, amounts, and other numerical data may be expressed or presented herein in a range format. It is to be understood that such a range format is used merely for convenience and brevity and thus should be interpreted flexibly to include not only the numerical values explicitly recited as the limits of the range, but also to include all the individual numerical values or sub-ranges encompassed within that range as if each numerical value and sub-range were explicitly recited. As an illustration, a numerical range of “about 4 percent to about 7 percent” should be interpreted to include not only the explicitly recited values of about 4 percent to about 7 percent, but also individual values and sub-ranges within the indicated range. Thus, included in this numerical range are individual values such as 4.5, 5.25 and 6, and sub-ranges such as from 4-5, from 5-7, and from 5.5-6.5, etc. This same principle applies to ranges reciting only one numerical value. Furthermore, such an interpretation should apply regardless of the breadth of the range or the characteristics being described.
Referring now to
The fundus imaging system 600 includes a housing 601 that supports a display 602 at a first end and an opposite end 603 configured to engage an eye of the patient. As described herein, the fundus imaging system 600 can be used to implement one or more of the described methods for imaging of the fundus.
Yet another embodiment of an example fundus imaging system 605 is shown in
Referring now to
Specifically, an end 607 of an example eye cup 606, shown in
In another example system 700 for recording and viewing an image of a patient's fundus shown in
Upon completion, the image can be uploaded to a cloud system 704 using a batch configuration or a more immediate configuration. When uploaded, the image can be tagged with device and patient information, such as a barcode associated with the patient and/or a patient picture. The cloud system 704 can be configured to provide patient lists and to accept or reject an image based upon given criteria, such as patient name and quality of image. The cloud system 704 can also be used to provide notifications, such as image availability, to the clinician C and/or patient. In addition, the cloud can forward the image and patient information to an EMR 706 for storage.
In addition, the cloud system 704 can be used to provide a portal to allow for access to images by a device 708 of the clinician C and/or patient device 710 using a computing device such as a personal computing device, tablet, and/or mobile device. This can allow the images to be viewed, manipulated, etc. The cloud system 704 can be used to capture clinician C annotations and diagnoses. In addition, the cloud system 704 can be configured to interface with other third parties, such as insurance companies to allow for billing.
In some examples the systems 600, 605 can be configured to operate in both manual and automatic modes when interfacing with the cloud system 704. In one example, the automatic mode includes one or more scripts that automate certain processes for the systems 600, 605. See
A notification scheme is used for charging of the systems 600, 605. In these examples, the systems 600, 605 are wireless and include a rechargeable battery pack, such as a lithium-ion battery or similar battery. In this example, a bi-color LED is used to indicate a status of charging of the battery pack when placed in a charging cradle 703. The LED remains off if charging is not occurring, which is the default state. When the systems 600, 605 are charging (e.g., when plugged into a dock), the LED is illuminated a solid amber to indicate charging of the battery and a solid green when the battery charging is completed. If an error occurs during charging, the LED flashes an amber color. Other configurations are possible.
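The charging indications above can be summarized as a simple state mapping; the sketch below is illustrative only, and the state names and function are not drawn from the specification.

```python
# Illustrative mapping of charging status to the bi-color LED behavior.
def charge_led_state(charging, complete, error):
    if error:
        return "flashing amber"   # an error occurred during charging
    if complete:
        return "solid green"      # battery charging is completed
    if charging:
        return "solid amber"      # battery is charging in the cradle
    return "off"                  # default: charging is not occurring
```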
Different example operating states for the fundus imaging systems 600, 605 are possible. For a clinician that gathers the images from the patient, the systems 600, 605 can be used to select a patient, adjust the eye cup, take an image, determine a quality of the image, and review the status of an image capture process. In addition, various other features, such as adjustment of connectivity (e.g., WiFi) and cleaning of the device, can be accomplished. Additional details on some of these processes are provided below.
Further, a physician (sometimes the same individual who captured the images or a different individual, such as an individual located at a remote location) can review the results of the image captures and develop/review a report based upon the same. See
Example processes are performed in the cloud system 704 based upon each individual or service within the system. For the clinician capturing the images, the cloud system 704 can be used to add new patients, schedule the procedure, and check the status of the procedure. For the physician reviewing the images, the cloud system 704 can be used to check status, review the images, and generate/review a report based upon review of the images. Notifications can also be created and sent to, for example, the clinician or patient.
The systems 600, 605 can be used to transmit scheduling and/or image information to and from the cloud system 704. The EMR 706 is in communication with the cloud system 704 to transmit and store image and diagnoses information for each patient. Other configurations are possible.
An over read service 712 is also shown in
For example, in one embodiment, the device 702 is used to capture one or more fundus images. After capture, the device 702 is returned to the charging cradle 703. Upon placement of the device 702 into the cradle 703, the captured images are automatically transferred to the cloud system 704. This transfer can be automated, so that no further action is required by the user to transfer the images from the device 702 to the cloud system 704.
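A minimal sketch of this dock-triggered transfer follows, assuming a hypothetical local image store and cloud client; neither object is a documented interface of the system.

```python
# Illustrative sketch: when the device 702 is docked, push unsent images to
# the cloud system 704 with basic patient/device tags, without user action.
def on_docked(local_store, cloud_client, patient_id, device_serial):
    for image in local_store.unsent_images():
        cloud_client.upload(
            image,
            metadata={"patient_id": patient_id,   # e.g., a barcode associated with the patient
                      "device": device_serial},
        )
        local_store.mark_sent(image)
```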
Upon submission to the cloud system 704, the images can be automatically reviewed for quality. The images can also be automatically forwarded to the over read service 712 for review. One or more clinicians can thereupon review the images and provide feedback from the over read service 712 back to the cloud system 704. At this point, the cloud system 704 can provide notification to the devices 708, 710 regarding the information from the over read service 712.
An example method for using the systems 600, 605 to capture fundus images includes preliminary tasks such as capturing patient vitals and educating the patient on the procedure. Once this is done, the system 600, 605 is powered on and the patient is selected on the device. The eye cup is then positioned on the patient and one or more images are captured using automated and/or manual processes. The images can then be checked. If accepted, the images are saved and/or uploaded to the cloud. The system 600, 605 can be powered off and returned to its cradle for charging. A physician can thereupon review the images, and the clinician C or patient can be notified of the results.
In an example method for obtaining a good quality image of the fundus using the systems 600, 605, after an image is captured, the clinician can accept or reject the image. If rejected, a script can be executed that provides manual or automated instructions on how to capture a desired image quality. The clinician thereupon gets another opportunity to capture an image and then to accept or reject it. If the image is accepted, automated processes can be used to determine a quality of the image. If accepted, further scripting can occur. If not, the clinician can be prompted to take another image.
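The accept/reject and automated quality-check loop just described might be organized as in the sketch below; the callables and the retry limit are assumptions for illustration only.

```python
# Illustrative control flow for acquiring an image that passes both the
# clinician's review and the automated quality check.
def acquire_acceptable_image(capture, quality_check, clinician_accepts, max_attempts=3):
    for _ in range(max_attempts):
        image = capture()
        if not clinician_accepts(image):
            continue              # rejected: guidance script runs, then retake
        if quality_check(image):
            return image          # accepted by clinician and by the quality check
        # failed the automated check: prompt the clinician to take another image
    return None
```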
An example method is provided to allow for capture of images even when the system 600, 605 loses connectivity with the cloud. In such an instance, automated quality checks may not be provided, and the clinician may be prompted as such. The clinician can then decide whether or not to accept the image without the quality check or to cancel the procedure. In addition, the system 600, 605 can be used to troubleshoot connectivity issues, as described further below.
An example method for allowing the clinician to select the patient on the system 600, 605 includes providing a work list that identifies patients based upon one or more given criteria, such as the clinician, location, time of day, etc. The clinician is thereupon able to select the patient and confirm that the proper patient has been selected, such as by comparing a code with one worn by the patient or from a picture of the patient. Thereupon, after selection of the patient, one or more images can be captured and stored. The captured images are associated with the selected patient.
In a similar manner, an example method allows the clinician to assure that the proper patient is selected. Upon power-up of the system 600, 605, unique information is sent to the cloud, such as the system's serial number. The cloud looks up the serial number and returns a list of patients associated with that system. The clinician can thereupon select the patient from the list or manually enter the patient into the system 600, 605 if the patient is not on the work list.
A user interface allows the user to pick between a selection of patients, examinations, review, and settings. If a patient is selected, the system 600, 605 proceeds with imaging of the fundus using an automated and/or manual process. Icons are used to represent different contexts on the user interfaces of the system 600, 605.
The following example workflows/methods are implemented by the systems 600, 605. Additional details regarding these workflows can also be found with reference to
An example method for automatic examination and image capture starts when the clinician selects the examination icon on the system 600, 605. Upon initiation, the clinician is presented with an interface that allows for automatic acquisition of the fundus image. This can be accomplished in three stages: pre-acquisition, acquisition, and post-acquisition. During pre-acquisition, the clinician selects the patient and configures the system as desired. During acquisition, the image is captured using automated or manual processes. Finally, during post-acquisition, quality checks are performed and the clinician can save the image(s) if desired. See
An example method for adjusting certain settings of the system 600, 605, such as brightness and focus, allows the settings to be selected automatically or manipulated manually by the clinician.
An example method for manually acquiring an image is similar to the method described above, except the acquisition of the images is done manually by the clinician. This is accomplished by the clinician manually indicating when an image is to be taken. Upon capture, the image can be verified manually or automatically.
An example method for navigating one or more captured images includes a user interface that is used to scroll through the captured images in a sequence. Upon review, the images can be submitted, if desired.
An example method for selecting a patient from a worklist starts upon selection of the patient icon from the interface for the system 600, 605. A list of patients is presented to the clinician in the worklist. The clinician can select a patient from the list to be presented with additional information about that patient, such as full name, date of birth, and patient ID. If any unsaved images exist, those images are associated with the selected patient. If not, a new examination routine is executed to allow for capture of images to be associated with the selected patient.
An example method allows for the clinician to manually enter new patient information into the system 600, 605. This includes patient name, date of birth, and/or patient ID. Once entered, the patient information can be associated with captured images.
An example method allows the clinician to search for a specific patient using such parameters as patient name, date of birth, and/or patient ID. Once found, the clinician selects the patient for further processing.
An example method for refreshing the patient worklist includes, assuming there is connectivity (e.g., to the cloud), the clinician selecting a refresh button to manually refresh the list with the most current patient names. The system 600, 605 is also programmed to periodically refresh the list automatically at given intervals and at other given periods, such as upon startup or shutdown. Other configurations are possible.
An example method allows a clinician to review a patient test on the system 600, 605. Upon selection of a patient, the clinician can review patient summary information (e.g., full name, date of birth, and patient ID) and previous examination summary information, such as items from the examination and image quality scores, which indicate how good the image quality was from those examinations.
An example method for saving images allows, after acquisition, the clinician to review the images in sequence. For each image in the workspace, the image is quality-checked and the status of the image is displayed to the clinician. The clinician uses the user interface to review each acquired image and to save or discard the image.
An example method for labeling eye position allows the clinician to select from five eye positions, including off (default), left eye optic disc centered, left eye macula centered, right eye macula centered, and right eye optic disc centered.
An example method allows for manual adjustment of settings for image acquisition. In this example, the clinician has access to various settings that can be adjusted manually, such as PET, focus, and brightness. See
An example method for adding images includes, once an image is captured, the clinician manually adding the image to a workspace if desired. Once added, the image can be reviewed and stored, if desired. In this example, up to four images can be added to a workspace for review. Other configurations are possible. See
An example method for entering advanced settings, such as volume, time, and date, allows the settings to be accessed upon entry of a password or access code by the clinician. In one method, an access code is needed to change certain settings, and an advanced settings code is needed to change other advanced settings. Other configurations are possible.
In an example method for selecting network connectivity, a plurality of WiFi networks are shown, and the clinician can select one for connection thereto. Upon successful connection, the system 600, 605 can communicate with the cloud.
In an example method for image inspection, once an image is selected, it is displayed to the clinician for review. The user can discard the image or move forward with image capture, as desired.
In an example method for discard of an image, a number of discards is tallied. If over a threshold amount (e.g., 2, 3, 5, 10, etc.), a warning can be given that further image acquisition could be uncomfortable for the patient.
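As a small illustration of that tally, a counter could be kept and compared against a threshold; the threshold value and function below are hypothetical.

```python
# Illustrative discard tally with a warning once a threshold is exceeded.
DISCARD_WARNING_THRESHOLD = 3   # e.g., 2, 3, 5, or 10 per the examples above

def register_discard(discard_count):
    discard_count += 1
    if discard_count > DISCARD_WARNING_THRESHOLD:
        print("Warning: further image acquisition could be uncomfortable for the patient.")
    return discard_count
```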
In an example method for returning to a home screen, a home button is provided on each interface. When selected, the home screen interface is shown, allowing the clinician to make initial selections like patient list, examination, review, and settings.
If the home button is selected when there are unsaved images, the clinician is first prompted to save or discard the images before returning to the home screen. In this example, the method includes displaying a prompt with a save button to allow the clinician to save the images. Once saved, the home screen is displayed.
In an example method for docking the system 600, 605, the system 600, 605 is placed in a charging cradle. Upon connection with the cradle, an icon indicating a USB connection is displayed on the dock and/or the system 600, 605. If acquisition is complete, the screen is turned off and sleep is instituted within a certain time period (e.g., one minute). If acquisition is not complete, the clinician is prompted to complete acquisition.
In an example method for assuring that all items for an examination have been received or overridden, if items are missing, the save button is disabled. However, the clinician can select the override button and, in certain contexts, allow for saving of data without all required items (e.g., a skipped indication) being present.
In an example method for updating software on the system 600, 605, software can be uploaded from a removable storage medium (e.g., SD card) during boot to update the software on the system 600, 605. In other examples, software can be downloaded, such as from the cloud.
In another example for waking the system 600, 605 from sleep, the user can press the home button to wake the system. Upon wake, a login screen can be presented, requiring the clinician to enter an access code to use the system 600, 605.
In some examples a method is provided for training purposes. In this embodiment, training information can be accessed from the home screen. The training can provide user interface information that trains the user on the configuration and use of the system 600, 605.
Referring now to
For example,
At
In addition to the messaging between the device 702 and the cloud system 704 described above, the cloud system 704 can be used to store various information associated with the examination of a given patient. For example, as the fundus images are captured, the clinician C can adjust various settings associated with the device 702, such as brightness, focus, etc. Once a desired set of settings is identified for a particular patient, these settings can be stored in the cloud system 704 (e.g., in a database) and/or the EMR 706 and associated with the patient. When the patient returns for a subsequent examination, the device 702 can be configured to automatically access the settings for the device 702 by downloading the settings from the cloud system 704. In this manner, the device 702 can be automatically configured according to those settings for subsequent capture of the patient's fundus images.
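One way to picture this settings round trip is sketched below, assuming a hypothetical cloud client and device interface; the setting names are examples only.

```python
# Illustrative per-patient settings round trip between device 702 and cloud 704.
def save_exam_settings(cloud_client, patient_id, settings):
    # e.g., settings = {"brightness": 70, "focus": -2.0}
    cloud_client.put_settings(patient_id, settings)

def restore_exam_settings(cloud_client, device, patient_id):
    settings = cloud_client.get_settings(patient_id)
    if settings:
        device.apply_settings(settings)   # configure the device before the next capture
```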
Referring now to
At the selection stage 732, the clinician C is presented with a menu of options, including an examination icon. The clinician C selects the examination icon to initiate the workflow 730.
At the pre-acquisition stage 734, the clinician C is presented by the device 702 with options to start the workflow 730 or to perform a manual capture of fundus images (see
At the acquisition stage 736, the device 702 automatically captures the desired fundus images from the patient P. The image capture can include one or more tones indicating the capture of images, along with automated quality checks on the images. An example of such a process for automating the capture of fundus images is described in U.S. patent application Ser. No. 15/009,988.
Finally, at the post-acquisition stage 738, the clinician C can review the captured images. The clinician C can perform such actions as discarding images and/or adding images, as described further below.
For example, the clinician C can decide to discard one or more of the fundus images. In one example, the clinician C is provided with various options. If one option is selected (e.g., a “Close” icon 742), the device 702 returns to the pre-acquisition stage 734. If another option is selected (e.g., a trash icon 744), the device 702 returns to the acquisition stage 736 to allow for the immediate retake of the fundus image(s). Other configurations are possible.
In another example, clinician C can add images for the patient P. In this example shown in
In addition, other workflows can be performed by the device 702. For example, the workflow 730 can be a default workflow for the device 702, but the device 702 can be configured to perform a modified workflow depending on which over read service 712 is used. For example, a particular over read service 712 may define requirements for such parameters as: number of fundus images; type of fundus images (e.g., left, right, macula centered, optic disc centered, etc.); and order of capture sequence.
In some examples, the workflow for the device 702 is defined by one or more scripts. The scripts can be downloaded from the cloud system 704 to allow for the modification of the functionality of the device 702. A particular script can be selected by the clinician C to modify the workflow for the device 702. In addition, the device 702 can be programmed to automatically select a script based upon such parameters as clinician C preference, over read service, etc.
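A downloadable workflow script of the kind described might be represented as simple structured data, as in the hypothetical example below; the field names and selection logic are illustrative, not a defined schema.

```python
# Hypothetical representation of a workflow script for a particular over read
# service, covering number, type, and order of fundus images.
OVER_READ_WORKFLOW = {
    "name": "example-over-read-service",
    "required_images": [
        {"eye": "right", "field": "optic disc centered"},
        {"eye": "right", "field": "macula centered"},
        {"eye": "left", "field": "macula centered"},
        {"eye": "left", "field": "optic disc centered"},
    ],
    "capture_order": "right-then-left",
}

def select_workflow(scripts, preference=None, over_read_service=None):
    """Pick a script from clinician preference or over read service, else a default."""
    for script in scripts:
        if script.get("name") in (preference, over_read_service):
            return script
    return {"name": "default"}
```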
In addition to the automated workflow 730, other configurations are possible. For example, as part of the automated capture of fundus images, the clinician C can select to manually capture one or more fundus images. Specifically, during the pre-acquisition stage 734 or the acquisition stage 736, the clinician C can select one of the manual capture icons 737, 758 to have the device 702 capture an image. Other configurations are possible.
Referring now to
When manually capturing images, the device 702 is programmed as depicted in
Referring now to
Once fundus images have been captured, the device 702 provides a reporting table 850 shown in
The description and illustration of one or more embodiments provided in this application are not intended to limit or restrict the scope of the invention as claimed in any way. The embodiments, examples, and details provided in this application are considered sufficient to convey possession and enable others to make and use the best mode of claimed invention. The claimed invention should not be construed as being limited to any embodiment, example, or detail provided in this application. Regardless whether shown and described in combination or separately, the various features (both structural and methodological) are intended to be selectively included or omitted to produce an embodiment with a particular set of features. Having been provided with the description and illustration of the present application, one skilled in the art may envision variations, modifications, and alternate embodiments falling within the spirit of the broader aspects of the claimed invention and the general inventive concept embodied in this application that do not depart from the broader scope.
This patent application claims priority to U.S. Patent Application Ser. No. 62/249,931 filed on Nov. 2, 2015, the entirety of which is hereby incorporated by reference. This patent application is related to U.S. patent application Ser. No. 14/633,601 filed on Feb. 27, 2015, U.S. patent application Ser. No. 15/009,988 filed on Jan. 29, 2016, and U.S. patent application Ser. No. 14/177,594 filed on Feb. 11, 2014, the entireties of which are hereby incorporated by reference.
Number | Date | Country |
---|---|---|
20200359885 A1 | Nov 2020 | US |
Number | Date | Country |
---|---|---|
62249931 | Nov 2015 | US |
 | Number | Date | Country |
---|---|---|---|
Parent | 15054558 | Feb 2016 | US |
Child | 16947532 |  | US |