This application relates generally to vision screeners for performing vision tests using external media.
Mass vision screening events are utilized to assess the vision of many children in a short amount of time. Users operating on-site vision screening devices can assess children directly. For example, a user without specialized training can operate a vision screening device to perform an autorefraction assessment on multiple children at a screening event. These tests can be performed without feedback from the children being tested. For example, the vision screening device may project an infrared pattern on the eye of a child, and identify whether the child should follow up for a formal eye exam, by evaluating the reflection of the pattern on the eye.
Some vision tests, however, require feedback. For example, a color vision test can be utilized to assess whether a child subjectively ascertains a particular color pattern. However, existing vision screening devices are not equipped to receive feedback from children. Some children may be illiterate or minimally literate, and would struggle to provide feedback about vision tests using conventional computer-based user interfaces. Currently, these feedback-dependent tests are administered to children manually by trained practitioners, which is not conducive to mass screening.
In addition, some vision tests can be offered in a variety of formats. For example, color vision tests can be Ishihara tests or Color Vision Test Made Easy (CVTME) examinations. Furthermore, different jurisdictions may have policies that implement different vision tests. For example, one state may require children to be screened with an Ishihara test, whereas another state may require children to be screened with a CVTME test.
Various implementations of the present disclosure relate to techniques for vision screening devices that can assess subjects using vision tests output by external media. These devices may be suitable for assessing conditions of multiple subjects in mass screening events, such as screening events conducted in schools.
In various cases, a vision test is visually output by an external medium, such as a card, a poster, or a computing device. A vision screening device may identify the vision test output by the external medium. For example, the external medium may display and/or transmit a code indicative of the vision test to the vision screening device. The vision screening device may identify feedback characterizing the vision test from a subject. In some cases, the subject directly inputs the feedback, or a user may input the feedback. The vision screening device, in various implementations, may identify whether the subject is suspected and/or expected to have one or more ocular conditions by analyzing the feedback in view of the identified vision test.
According to some examples, a single vision screening device can assess conditions of subjects using a large variety of different vision tests. In some cases, the vision screening device is compatible with external media configured to output different vision tests to subjects. The vision screening device may identify a particular vision test being output to a particular subject by an external medium in order to evaluate a condition of the subject. By incorporating vision tests output by external media, the vision screening device may evaluate subjects based on feedback characterizing a wide variety of vision tests.
In some cases, the vision screening devices can facilitate reception of feedback characterizing the vision tests from subjects (e.g., children) who may be unable to operate complex user interfaces. According to some examples, the feedback can be input by a user who is different than the subject being evaluated. In some cases, the subject can input the feedback by tracing a shape on a touchscreen, speaking the feedback, selecting graphic icons, or other user-friendly methods that are achievable by children.
Various implementations of the present disclosure are directed to technological improvements in the field of vision screening devices. Existing vision screening devices, designed for mass screening, are unable to provide vision tests (e.g., color vision tests, visual acuity tests, etc.) that are evaluated based on subject feedback. Accordingly, these tests are often excluded from mass screening events. By incorporating the use of external media, various example vision screening devices described herein can facilitate administering feedback-oriented vision tests to many subjects in a short amount of time.
The following figures, which form a part of this disclosure, are illustrative of described technology and are not meant to limit the scope of the claims in any manner.
Various implementations of the present disclosure will be described in detail with reference to the drawings, wherein like reference numerals represent like parts and assemblies throughout the several views. Additionally, any samples set forth in this specification are not intended to be limiting and merely set forth some of the many possible implementations.
A vision screening device 104 is configured to determine whether the subject 102 is suspected to have one or more ocular conditions. As used herein, a subject may be “suspected to have,” and/or “likely to have” a condition if at least one parameter associated with the subject is outside of a range associated with a particular screening exam. For example, the vision screening device 104 may not specifically diagnose the subject 102 with an ocular condition, but may determine whether the subject 102 should be evaluated by a trained care provider (e.g., an optometrist or ophthalmologist) for the ocular condition. That is, the vision screening device 104 may be a tool for determining whether a follow-up examination is indicated for the subject 102. The vision screening device 104 may be operated by a user 106. As shown, the user 106 is different than the subject 102, but implementations are not so limited. According to various implementations, the user 106 screens multiple subjects including the subject 102 for one or more ocular conditions in a mass screening event. For example, the subjects could be children at a school, residents of a nursing home, or other groups who are screened in a relatively short amount of time.
In various implementations, the vision screening device 104 identifies the performance of the subject 102 on a vision test 108 output by an external medium 110. As used herein, the term “vision test,” and its equivalents, can refer to displayed information that can be used to assess the vision of an individual. For example, the vision test 108 may include a color deficiency test, which can also be referred to as a “color blindness” test. Examples of color deficiency tests include Ishihara tests and CVTME tests. The vision test 108 may include one or more pictures that display a symbol (e.g., a number) in at least one first color and at least one second color as a background to the symbol. If the subject 102 has sufficient color sensitivity, the subject may see the symbol. If the subject 102 is color deficient, the subject 102 may be unable to discern the symbol.
In some cases, the vision test 108 includes a visual acuity test. According to some implementations, a visual acuity test displays symbols with different sizes. The visual acuity of the subject 102 is determined based on the sizes at which the subject 102 can visually recognize one or more of the symbols. There are multiple types of visual acuity tests, such as near vision tests and distance vision tests. Near vision tests display the symbols at a relatively close distance from the eye of the subject 102, such as 35 centimeters (cm). The results of a near vision test are indicative of whether the subject 102 is farsighted. Distance vision tests display the symbols at a relatively long distance from the eye of the subject 102, such as 6 meters (m). The results of a distance vision test are indicative of whether the subject is nearsighted.
In various examples, the vision test 108 includes a reading speed test. For example, the vision test 108 displays multiple words. The speed at which the subject 102 reads the words corresponds to the reading speed of the subject 102. In some instances, the vision test 108 includes a reading comprehension test. The vision test 108 may display a passage of words. Upon reading the passage, the subject 102 may indicate what the passage discusses, thereby demonstrating whether the subject 102 adequately understands the passage. Reading speed tests and reading comprehension tests may be used to assess whether the subject 102 has a learning disability or other condition.
According to various implementations, the vision test 108 includes a concussion test. For instance, the vision test 108 may include a test described in U.S. Pat. No. 10,506,165, which is incorporated by reference herein in its entirety. For example, the vision test 108 may include one or more symbols that the subject 102 focuses on visually. The vision screening device 104 may capture one or more images of the eyes of the subject 102 while the subject is focusing on the vision test 108. In some cases, the vision screening device 104 determines a pupil size of the subject 102 based on the image(s) and determines, based on that pupil size, whether the subject 102 is predicted to have a concussion.
In some cases, the vision test 108 is gamified for the subject 102. For example, the external medium 110 may display a shape (e.g., a butterfly) that moves along the external medium 110. The subject 102 may play a game by inputting feedback based on the position of the shape. For example, the subject 102 may “capture” a virtual butterfly displayed by the external medium 110 by controlling an input device (e.g., a touchscreen), and based on the feedback, the vision screening device 104 may evaluate the vision of the subject 102.
In various examples, the vision test 108 is displayed by the external medium 110. As used herein, the term “external medium,” and its equivalents, can refer to a device and/or object that is separate from a device used to identify the results of a vision test (e.g., the vision screening device 104). In some cases, the external medium 110 includes a substrate (e.g., a passive object), such as a projection screen reflecting a projection of the vision test 108, a poster displaying the vision test 108, a card displaying the vision test 108, or some other printed substrate displaying the vision test 108. As used herein, the term “substrate,” and its equivalents, can refer to a solid or semisolid material that can absorb and/or reflect light. In various examples, the external medium 110 includes an active device, such as a tablet computer or smartphone that displays the vision test 108 on a touchscreen, a smart TV that displays the vision test 108, a virtual reality (VR) headset that displays the vision test 108, an augmented reality device that displays the vision test 108, or some other computing device that displays the vision test 108 on a screen.
According to various implementations, the vision screening device 104 may be configured to assess the results of multiple different vision tests including the vision test 108. To identify the vision test 108 among the multiple vision tests, the vision screening device 104 may identify a code 112 that is associated with the vision test 108. As used herein, the term “code,” and its equivalents, can refer to one or more symbols that indicate the identity of a vision test. In some examples, the code 112 is displayed on the external medium 110 with the vision test 108. For example, the vision screening device 104 includes a camera that captures an image of the code 112 on the external medium 110. As used herein, the term “image,” and its equivalents, can refer to a set of data including multiple pixels and/or voxels that respectively represent regions of a real-world scene. A two-dimensional (2D) image is represented by an array of pixels. A three-dimensional (3D) image is represented by an array of voxels. An individual pixel and/or voxel in an image is defined according to at least one value representing an amount and/or frequency of light emitted by the corresponding region in the real-world scene.
In some implementations, a signal indicative of the code 112 is transmitted from the external medium 110 to the vision screening device 104. For instance, the vision screening device 104 includes a transceiver that receives a signal (e.g., a wireless signal) indicative of the code 112 from the external medium 110. In some examples, the vision screening device 104 and/or the external medium 110 may be connected, such as via a Bluetooth connection and/or a Near-Field Communication (NFC) connection. For example, the vision screening device 104 may be paired to the external medium 110, or vice versa, such that the two devices may communicate with and send data to and from one another.
The code 112 may be uniquely associated with the vision test 108 displayed by the external medium 110, such that no other vision test is displayed with the code 112. In some cases, the code 112 is a barcode, such as a QR code. In various implementations, the code 112 is indicative of a string of one or more letters or numbers that are associated with the vision test 108. In various implementations, the vision screening device 104 identifies the vision test 108 by identifying an entry in a test datastore 114 that includes the code 112. For example, the test datastore 114 includes a database and/or lookup table indexed by codes associated with respective vision tests. By finding the entry of the database with the code 112, the vision screening device 104 may identify the vision test 108. In some cases, the test datastore 114 is part of the vision screening device 104. In some examples, the test datastore 114 is hosted in a device that is external to the vision screening device 104.
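The code-indexed lookup described above can be sketched as a simple keyed table. The sketch below is an illustration only: the example codes, test names, and entry fields (`"test"`, `"key"`) are hypothetical and not part of the disclosure.

```python
# A minimal sketch of the test datastore: a lookup table indexed by code,
# where each entry identifies a vision test and its associated key.
# The codes, test names, and keys below are hypothetical illustrations.
TEST_DATASTORE = {
    "ISH-01": {"test": "Ishihara plate 1", "key": "12"},
    "CVTME-03": {"test": "CVTME plate 3", "key": "star"},
}


def identify_vision_test(code: str) -> dict:
    """Return the datastore entry (test identity and key) for a scanned code."""
    entry = TEST_DATASTORE.get(code)
    if entry is None:
        raise KeyError(f"unknown vision test code: {code}")
    return entry
```

A device that scans the code `"ISH-01"` would thereby recover both the identity of the test being displayed and the key needed to evaluate feedback.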
The subject 102 may view the vision test 108 and produce feedback based on the vision test 108. As used herein, the term "feedback," and its equivalents, can refer to data representing an individual's performance on a vision test. The feedback, for example, is detected by a feedback device 116. In some implementations, the feedback device 116 is part of the vision screening device 104 and/or the external medium 110. In various cases, the feedback device 116 includes a sensor configured to detect the feedback from the subject 102. In some examples, the feedback device 116 may be in communication with the vision screening device 104 and/or the external medium 110, such that the feedback device 116 may send data indicative of the feedback from the subject 102 to the vision screening device 104 and/or the external medium 110. The feedback device 116 may be in communication with the vision screening device 104 and/or the external medium 110 via a Bluetooth connection and/or an NFC connection, to name a few examples. In some examples, the feedback from the subject 102 may be sent upon a determination, by the feedback device 116, that the vision screening is complete. For example, based on the feedback received from the subject 102, the feedback device 116 may determine that the test has concluded. Additionally, or alternatively, the feedback device 116 may receive an input, such as from the subject 102 and/or the user 106, indicating a conclusion of the test. In other examples, the feedback device 116 may send the results continuously, as they are received by the feedback device 116.
Various types of feedback can be detected by the feedback device 116. In some implementations, the feedback device 116 includes one or more touch sensors incorporated with the external medium 110. The feedback may be a touch of the subject 102 on at least a portion of the external medium 110. For example, the subject 102 may trace a symbol of the vision test 108, which is detected by the touch sensor(s) of the feedback device 116. In some cases, the subject 102 may touch an icon displayed on the external medium 110 that is detected by the feedback device 116 as the feedback from the vision test 108.
In various examples, the feedback device 116 includes one or more cameras that visually detect the feedback from the subject 102. For example, the vision test 108 may be a reading speed test and the camera(s) capture images of an eye of the subject 102 as the subject is reading the passage. The feedback may be the change in the gaze angle of the subject 102 over time.
In some cases, the feedback device 116 includes other types of input devices that can detect feedback directly from the subject 102. For example, the feedback device 116 may include a microphone that detects the voice of the subject 102 that serves as the feedback about the vision test 108. In some examples, the feedback device 116 includes physical buttons, a keyboard, or any other device configured to detect an input signal indicative of the feedback from the subject 102.
According to some examples, the user 106 inputs the feedback from the subject 102 into the feedback device 116. For example, the subject 102 may audibly report the feedback to the user 106, who may manually input the feedback into the feedback device 116 using a button, keyboard, touch screen, or other input device.
The feedback device 116 may provide the feedback to the vision screening device 104. In various implementations, the vision screening device 104 may determine whether the subject 102 is suspected to have an ocular condition by analyzing the feedback in view of the vision test 108. In some cases, the entry in the test datastore 114 indicating the vision test 108 may further include a key associated with the vision test 108. For instance, if the vision test 108 is an Ishihara color deficiency test, the key may be the identity of the symbol that is displayed in the vision test 108. The feedback device 116 may compare the feedback to the key. In other words, the feedback device 116 may, based on receiving the feedback, compare the feedback to the key to determine one or more discrepancies between the feedback and the key, wherein the one or more discrepancies may indicate that the subject 102 is suspected to have an ocular condition. In some examples, a greater number of discrepancies may indicate a higher likelihood of an ocular condition, whereas a lower number of discrepancies may indicate a lower likelihood of an ocular condition. In some examples, the number of discrepancies may exceed a threshold number of discrepancies, which may indicate that the subject 102 has an ocular condition. In some examples, the vision screening device 104 may determine a type of ocular condition the subject 102 is suspected to have or has. For example, the feedback received from the subject 102 may correspond to various ocular conditions. In other words, tracing an object but failing to identify a correct color of the object may indicate that the subject 102 has adequate visual acuity but is colorblind. In some examples, determining that the subject 102 is suspected to have or has an ocular condition (such as, for example, by comparing the feedback to the key) is done manually by the user 106.
However, in other examples, this determination may be made automatically by one or more algorithms of the vision screening device 104. For example, the vision screening device 104 may be trained to compare the feedback to the key to identify one or more discrepancies. Based at least in part on factors such as the type of discrepancies and/or the number of discrepancies, for example, the vision screening device 104 may output a likelihood that the subject 102 has an ocular condition, and/or the ocular condition(s) the subject 102 is likely to have.
In some implementations, the key is defined as a shape that is within a threshold distance (e.g., 1 centimeter) of the symbol. The feedback device 116 may determine whether the subject 102 traces a shape that is within the key. In some implementations, the subject 102 traces the symbol on a touchscreen, and the feedback may be highlighted on the touchscreen as the subject 102 is tracing the symbol. In some implementations, the subject 102 traces the symbol with a writing instrument (e.g., a marker, pen, or pencil) on a paper substrate. The highlighted and/or written feedback may be viewed manually. If the feedback matches the key, then the vision screening device 104 may determine that the subject 102 has passed the vision test 108. If the feedback is different than the key, then the vision screening device 104 may determine that the subject 102 has not passed the vision test 108. In various implementations, the vision screening device 104 may determine that the subject 102 is suspected to have an ocular condition based on determining that the subject 102 has not passed the vision test 108.
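The trace-based check described above can be sketched as a simple geometric test: every traced point must lie within the threshold distance of the symbol outline. This is a minimal sketch under assumed conventions; the function name, the point-list representation of the trace and outline, and the default 1 cm threshold are illustrative assumptions, not part of the disclosure.

```python
import math


def trace_within_key(trace, symbol_outline, threshold_cm=1.0):
    """Return True if every traced point lies within `threshold_cm` of some
    point on the symbol outline, i.e. the trace stays inside the key region.

    `trace` and `symbol_outline` are sequences of (x, y) points in centimeters.
    (Hypothetical representation chosen for illustration.)
    """
    return all(
        min(math.dist(point, outline_point) for outline_point in symbol_outline)
        <= threshold_cm
        for point in trace
    )
```

Under this sketch, a trace that wanders farther than the threshold from the symbol at any point would fail the test, which could correspond to the subject not passing the vision test.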
In some implementations, the key indicates a threshold that the vision screening device 104 compares to the feedback. For example, if the vision test 108 is a reading speed test, and the feedback represents a reading speed of the subject 102, the vision screening device 104 may compare the reading speed of the subject 102 to a threshold speed in order to determine whether the subject 102 is at an appropriate reading level or is suspected of having a learning disability.
The vision screening device 104 may perform additional tests on the subject 102 that are independent of the external medium. In various implementations, the vision screening device 104 performs an automated autorefraction assessment on the subject 102. For example, the vision screening device 104 may include at least one light source configured to project an infrared pattern on an eye of the subject 102. As used herein, the term “light source,” and its equivalents, can refer to an element configured to output light, such as a light emitting diode (LED) or a halogen bulb.
The vision screening device 104, in some instances, further includes at least one camera configured to capture an image of a reflection of the pattern from the eye of the subject 102. The vision screening device 104 may determine a condition of the subject 102 based on the reflection of the pattern. For example, the vision screening device 104 may determine that the subject has myopia, hyperopia, astigmatism, or a combination thereof, based on the reflection of the pattern. In some cases, the vision screening device 104 is or includes a specialized device, such as the Welch Allyn Spot Vision Screener by Hill-Rom Services, Inc. of Chicago, IL. In some cases, the vision screening device 104 performs a red reflex examination on the subject 102.
According to various implementations, the vision screening device 104 may output and/or store a result of the vision test 108 or the result of any other vision test identified by the vision screening device 104. The result, for example, is an indication of the feedback, a discrepancy between the feedback and the key, whether the subject 102 is suspected to have the ocular condition, or a combination thereof. In some implementations, the vision screening device 104 outputs the result to the user 106. For example, the vision screening device 104 may display the result on a screen and/or audibly output the result using a speaker. In some cases, the vision screening device 104 stores the result (e.g., with an indication of the identity of the subject 102).
In some cases, the vision screening device 104 determines an identity of the subject 102. For instance, the user 106 may input a code, name, or other identifier associated with the subject 102 into the vision screening device. The vision screening device 104 may generate and/or store the result with the identifier of the subject 102.
The vision screening device 104 may be communicatively coupled to an electronic medical record (EMR) system 118. In some cases, the vision screening device 104 transmits the result (and the identifier of the subject 102) to the EMR system 118. The EMR system 118 may include one or more servers storing EMRs of multiple individuals including the subject 102. As used herein, the terms "electronic medical record," "EMR," "electronic health record," and their equivalents, can refer to data indicating previous or current medical conditions, diagnostic tests, or treatments of a patient. The EMRs may also be accessible via computing devices operated by care providers. In some cases, data stored in the EMR of a subject is accessible to a user via an application operating on a computing device. For instance, the stored data may indicate demographics of a subject, parameters of the subject, vital signs of the subject, notes from one or more medical appointments attended by the subject, medications prescribed or administered to the subject, therapies (e.g., surgeries, outpatient procedures, etc.) administered to the subject, results of diagnostic tests performed on the subject, subject identifying information (e.g., a name, birthdate, etc.), or any combination thereof. In various implementations, the EMR system 118 stores the feedback and/or result in an EMR associated with the subject 102.
In some examples, the vision screening device 104 transmits the result to one or more web servers 120. In various implementations, the web server(s) 120 may store indications of the result. In addition, the web server(s) 120 may output a website to an external computing device (not illustrated) indicating the result. In some cases, the external computing device may be operated by a parent of the subject 102, such that the parent may view the indication of the result by accessing the website. In some implementations, the web server(s) 120 further stores additional information about the vision test 108 and/or recommended follow-up care for the subject 102. For instance, based on the result, the website may indicate that the subject 102 should be seen by an optometrist and/or ophthalmologist for follow-up care.
Various elements of the environment 100 communicate via one or more communication networks 122. The communication network(s) 122 include wired (e.g., electrical or optical) and/or wireless (e.g., radio access, BLUETOOTH, WI-FI, or near-field communication (NFC)) networks. The communication network(s) 122 may forward data in the form of data packets and/or segments between various endpoints, such as computing devices, medical devices, servers, and other networked devices in the environment 100.
Although not specifically illustrated in
In particular examples, the vision screening device 104 is configured to assess the performance of the subject 102 on the vision test 108, which is output by the external medium 110. Using various implementations described herein, the vision screening device 104 is adapted to efficiently screen numerous subjects (including the subject 102) in a mass screening event, even in cases where the user 106 is not a trained clinician and/or when the subjects are children.
According to various examples, the vision screening device 104 facilitates vision testing of a subject. The external medium 110 may display or otherwise output a vision test to the subject. In some cases, the external medium 110 further outputs the code 112 to the vision screening device 104. The code 112, for example, is uniquely associated with the vision test that is output by the external medium 110.
The vision screening device 104 may identify the vision test using the code 112. For instance, the vision screening device 104 may identify an entry in the test datastore 114 that includes the code 112. In various cases, the entry includes a key 202 that is associated with the vision test. The vision screening device 104 may retrieve and/or receive the key 202 from the entry of the test datastore 114.
In various implementations, the subject 102 may view the vision test output by the external medium 110. The subject 102 may enter an input signal into the feedback device 116. Based on the input signal, the feedback device 116 may generate feedback 204 that indicates the subject's perception, reaction, performance, or a combination thereof, of the vision test. The feedback device 116 may provide the feedback 204 to the vision screening device 104, such as via a Bluetooth connection and/or NFC, as described above. In other examples, the feedback device 116 may display the feedback 204, such as on a user interface (UI) of the feedback device 116. In some examples, the display associated with the feedback device 116 may include one or more selectable options which may allow the user 106 and/or the subject 102 to send the results 206, such as to the EMR system 118 or the web server 120.
According to some examples, the vision screening device 104 may generate a result 206 based on the key 202 and the feedback 204. For instance, the key 202 may correspond to the vision test being administered to the subject 102. In some examples, the key 202 may be one of multiple keys uploaded to the test datastore 114 such that the vision screening device 104 may determine the result 206 of the test. For instance, based on receiving the feedback 204 from the test, the vision screening device 104 may compare the key 202 and the feedback 204. In some cases, the result 206 indicates a discrepancy between the key 202 and the feedback 204. For instance, if the key 202 is a shape that is within a threshold distance of a symbol, and the feedback 204 is an attempt by the subject to trace the symbol, then the discrepancy may be a number of times and/or an amount that the feedback 204 moves outside of the key 202. In various implementations, the vision screening device 104 determines whether the subject is suspected to have an ocular condition based on the discrepancy between the key 202 and the feedback 204. The result 206 may indicate whether the subject is suspected to have the ocular condition. The vision screening device 104 may store the result 206, output the result 206 to a user, transmit a signal including the result 206 to an external device, or a combination thereof. In some examples, the key 202 may be updated and/or removed based on the test being administered and/or the subject taking the test.
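The key/feedback comparison described above can be sketched for the simple case in which the key and feedback are both short symbol strings (e.g., the digits on an Ishihara plate). This is an illustrative sketch only: the function name, the string representation, and the zero-discrepancy pass criterion are assumptions introduced for clarity, not part of the disclosure.

```python
def generate_result(key: str, feedback: str) -> dict:
    """Compare the subject's reported symbol(s) against the key, counting
    character-level discrepancies. Any discrepancy flags the subject as
    suspected of an ocular condition (an assumed, simplified criterion)."""
    discrepancies = sum(1 for k, f in zip(key, feedback) if k != f)
    # Missing or extra characters also count as discrepancies.
    discrepancies += abs(len(key) - len(feedback))
    return {
        "discrepancies": discrepancies,
        "suspected_ocular_condition": discrepancies > 0,
    }
```

For example, a subject who reports "72" when the key for the displayed plate is "12" would register one discrepancy and be flagged for possible follow-up.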
For example, the vision screening device 300 illustrated in
In some examples, the user 1204 may physically select and/or place a vision test printed on a card onto the side of the tablet 1200 facing the subject 1202. Additionally or alternatively, the user 1204 may select a vision test from the various vision tests capable of being displayed on the tablet 1200. Based at least in part on the user 1204 presenting the vision test to the subject 1202, the subject 1202 can audibly respond to prompts from the user 1204 or can be observed for bodily behaviors, among other response actions. In the current illustration, the subject 1202 is standing at a distance away from the user 1204 and the testing tablet 1200, and the user 1204 has yet to select a vision test to be administered.
At 1802, the entity identifies a vision test output by an external medium. For example, the entity may receive a signal from the external medium. The signal may be a wireless signal (e.g., an RFID signal, an NFC signal, etc.) or light, in some cases. In various implementations, the signal is indicative of a code associated with the vision test. For instance, the entity captures an image of a QR code or other type of barcode that is uniquely associated with the vision test and displayed by the external medium. In various implementations, the external medium is a passive medium, such as a card, a poster, or other type of printed substrate. In some cases, the external medium is a device, such as a mobile phone, a tablet computer, a VR headset, or a laptop computer. The vision test, for instance, includes at least one of a color vision test, a reading comprehension test, a concussion test, a near vision test, a reading speed test, or a visual acuity test.
At 1804, the entity identifies feedback about the vision test from a subject. In some implementations, the feedback is directly received by the entity from the subject. For example, the entity identifies the feedback by detecting the subject tracing a shape on the surface of a screen (e.g., detected using one or more touch sensors), the subject touching an icon displayed on the screen, or a voice of the subject indicating the feedback. In some implementations, the entity receives the feedback from a user who is not the subject, or receives a signal from an external device that detected the feedback.
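Because feedback may arrive through several different channels (a touch trace, an icon selection, or a spoken response), it can be convenient to normalize it into a uniform record before evaluation. The sketch below is a hypothetical illustration; the channel names and record fields are assumptions, not part of the disclosure.

```python
# Hypothetical sketch: normalize feedback arriving through different input
# channels into a single record the screening logic can evaluate.

def normalize_feedback(channel, payload):
    """Wrap raw input into a uniform feedback record."""
    if channel == "trace":   # payload: list of (x, y) touch points
        return {"kind": "trace", "points": list(payload)}
    if channel == "icon":    # payload: identifier of the touched icon
        return {"kind": "selection", "icon": payload}
    if channel == "voice":   # payload: transcribed spoken response
        return {"kind": "speech", "text": payload.strip().lower()}
    raise ValueError(f"Unknown feedback channel: {channel}")

print(normalize_feedback("icon", "red_circle"))
# prints: {'kind': 'selection', 'icon': 'red_circle'}
```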
At 1806, the entity evaluates the subject by analyzing the feedback based on the vision test. In various implementations, the entity may determine whether the subject is suspected to have at least one ocular condition by analyzing the feedback in view of the vision test. In some cases, the entity identifies a key associated with the vision test and compares the key to the feedback. The entity may compare a discrepancy between the key and the feedback to one or more thresholds. For example, if the discrepancy is above a first threshold or below a second threshold, the entity may determine that the subject is suspected to have at least one ocular condition. In some implementations, the entity may store, transmit, and/or output an indication of whether the subject is suspected to have the ocular condition.
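The threshold comparison at 1806 can be sketched as a simple band check. This is a hypothetical illustration under stated assumptions: the function name and the default threshold values are invented for the example, and real thresholds would depend on the specific vision test and key.

```python
# Hypothetical sketch of the evaluation step at 1806: flag a suspected
# ocular condition when the discrepancy between the key and the feedback
# falls outside an acceptable band. Threshold values are illustrative.

def suspected_condition(discrepancy, upper=5.0, lower=0.0):
    """Return True if the discrepancy is above the first (upper) threshold
    or below the second (lower) threshold."""
    return discrepancy > upper or discrepancy < lower

print(suspected_condition(7.2))  # prints True: discrepancy exceeds the upper threshold
print(suspected_condition(1.4))  # prints False: within the acceptable band
```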
As illustrated, the device(s) 1900 comprise a memory 1904. In various embodiments, the memory 1904 is volatile (including a component such as Random Access Memory (RAM)), non-volatile (including a component such as Read Only Memory (ROM), flash memory, etc.) or some combination of the two.
The memory 1904 may include various components, such as at least one of the vision screening device 104, the vision test 108, the code 112, the key 202, or the result 206. Any of the vision screening device 104, the vision test 108, the code 112, the key 202, or the result 206 can include methods, threads, processes, applications, or any other sort of executable instructions. The vision screening device 104, the vision test 108, the code 112, the key 202, or the result 206 and various other elements stored in the memory 1904 can also include files and databases.
The memory 1904 may include various instructions (e.g., instructions in the vision screening device 104, the vision test 108, the code 112, the key 202, or the result 206), which can be executed by at least one processor 1914 to perform operations. In some embodiments, the processor(s) 1914 includes a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or both CPU and GPU, or other processing unit or component known in the art.
The device(s) 1900 can also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in
The device(s) 1900 also can include input device(s) 1922, such as a keypad, a cursor control, a touch-sensitive display, voice input device, etc., and output device(s) 1924 such as a display, speakers, printers, etc. These devices are well known in the art and need not be discussed at length here. In particular implementations, a user can provide input to the device(s) 1900 via a user interface associated with the input device(s) 1922 and/or the output device(s) 1924.
As illustrated in
In some implementations, the transceiver(s) 1916 can be used to communicate between various functions, components, modules, or the like, that are included in the device(s) 1900. For instance, the transceiver(s) 1916 may facilitate communications between the vision screening device 104 and other devices storing the vision test 108, the code 112, the key 202, or the result 206.
In some instances, one or more components may be referred to herein as “configured to,” “configurable to,” “operable/operative to,” “adapted/adaptable,” “able to,” “conformable/conformed to,” etc. Those skilled in the art will recognize that such terms (e.g., “configured to”) can generally encompass active-state components and/or inactive-state components and/or standby-state components, unless context requires otherwise.
As used herein, the term “based on” can be used synonymously with “based, at least in part, on” and “based at least partly on.”
As used herein, the terms “comprises/comprising/comprised” and “includes/including/included,” and their equivalents, can be used interchangeably. An apparatus, system, or method that “comprises A, B, and C” includes A, B, and C, but also can include other components (e.g., D) as well. That is, the apparatus, system, or method is not limited to components A, B, and C.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described.
A: A vision screening system, comprising: an external medium displaying a vision test and a code; at least one camera configured to capture an image of the external medium; at least one input device configured to detect, from a subject, a response of the subject to viewing the vision test; and a processor configured to: identify the code based on the image of the external medium; identify the vision test based on the code; determine, based on the vision test and the response, whether an eye of the subject is characterized by a condition; and generate an output indicating whether the eye is characterized by the condition.
B: The vision screening system of paragraph A, wherein the image is a first image, the condition is a first condition, and the system further comprises a light source configured to project infrared radiation onto the eye of the subject, the camera being further configured to capture a second image of the eye, the second image being indicative of a response of the eye to the infrared radiation; and the processor being further configured to: determine, based on the second image, whether the eye is characterized by a second condition; and generate an additional output indicating whether the eye is characterized by the second condition.
C: The vision screening system of paragraph B, further comprising a transceiver, wherein the processor is configured to at least one of: cause the transceiver to provide a first signal, via a network, to an electronic device indicating whether the eye is characterized by the first condition; or cause the transceiver to provide a second signal, via the network, to the electronic device indicating whether the eye is characterized by the second condition.
D: The vision screening system of paragraph A, B, or C, wherein the external medium comprises at least one of: a printed substrate; a projector configured to project the vision test and the code; or a screen configured to display the vision test and the code.
E: The vision screening system of paragraph A, B, C, or D, wherein the vision test comprises at least one of: a color vision test; a reading comprehension test; a concussion test; a near vision test; a reading speed test; or a visual acuity test.
F: The vision screening system of paragraph A, B, C, D, or E, wherein the at least one input device comprises at least one of: a microphone configured to detect an audible signal indicative of the response; a touch sensor configured to detect a touch signal indicative of the response; or a button configured to receive a press signal indicative of the response.
G: The vision screening system of paragraph A, B, C, D, E, or F, wherein the image is a first image, and the at least one camera is configured to capture a second image of the eye, the second image being indicative of the response.
H: The vision screening system of paragraph A, B, C, D, E, F, or G, wherein: the at least one camera, the at least one input device, and the processor are integrated into a handheld housing, and the external medium is separate from the housing.
I: A method, comprising: capturing an image of an external medium; identifying a vision test associated with the external medium based on the image; receiving feedback characterizing the vision test from a subject; and determining whether the subject has an ocular condition based on the feedback characterizing the vision test.
J: The method of paragraph I, wherein identifying the vision test associated with the external medium based on the image comprises: identifying a code displayed by the external medium based on the image; and identifying the vision test based on the code.
K: The method of paragraph I or J, further comprising: receiving, from the external medium, at least one of an RFID signal or an NFC signal identifying the vision test.
L: The method of paragraph I, J, or K, wherein receiving feedback characterizing the vision test from the subject comprises receiving at least one of: a signal indicative of the subject tracing a shape on a substrate; an audio signal; or a signal indicative of the subject selecting an item on a substrate.
M: The method of paragraph I, J, K, or L, wherein determining whether the subject has the ocular condition comprises: identifying a key associated with the vision test; determining one or more discrepancies between the key and the feedback; and determining that the subject has the ocular condition based on the one or more discrepancies.
N: The method of paragraph I, J, K, L, or M, further comprising: transmitting, to an external device, a signal indicating whether the subject is suspected to have the ocular condition; and storing the determination of whether the subject has the ocular condition.
O: The method of paragraph I, J, K, L, M, or N, further comprising outputting a signal indicating whether the subject is suspected to have the ocular condition.
P: A device, comprising: a processor; and memory storing instructions that, when executed by the processor, cause the processor to perform operations comprising: receiving a first signal from an external medium; identifying a vision test associated with the external medium based on the first signal; receiving a second signal from an input device, the second signal indicating a response of a subject viewing the vision test; and determining, based on the vision test and the second signal, whether the subject has an ocular condition.
Q: The device of paragraph P, further comprising: at least one camera configured to capture an image of the external medium, wherein identifying the vision test associated with the external medium based on the first signal comprises: identifying a code displayed by the external medium based on the image; and identifying the vision test based on the code.
R: The device of paragraph P or Q, further comprising: a transceiver configured to receive the second signal from the external medium, the second signal comprising at least one of an RFID signal or an NFC signal.
S: The device of paragraph P, Q, or R, further comprising: one or more touch sensors configured to detect an indication of the subject touching the external medium, wherein the second signal includes a shape traced by the subject touching the external medium.
T: The device of paragraph P, Q, R, or S, wherein determining whether the subject has the ocular condition comprises: identifying a key associated with the vision test; determining one or more discrepancies between the key and the second signal; and determining that the subject has the ocular condition based on the one or more discrepancies between the key and the second signal.
This application claims priority to and is a non-provisional application of U.S. Provisional Patent Application No. 63/355,050 filed on Jun. 23, 2022, the entire contents of which are incorporated herein by reference.